While reading the book Making Things Work: Solving Complex Problems in a Complex World by Yaneer Bar-Yam, in particular his chapter on networks and collective memory, I got the idea of mixing Hebbian theory with RDF. I have played with similar thoughts before, using Topic Maps technology, in my research paper Quality, Relevance and Importance in Information Retrieval with Fuzzy Semantic Networks. This time my thoughts were more about adaptive knowledge.
So here's my idea for a Hebbian triplestore.
Each triple in the triplestore database has an associated array of, say, up to 10 rows, each holding a datetime value. When a SPARQL query uses a triple, the current date is appended to that triple's array as a new row. The array acts as a FIFO queue, so the oldest date is removed when the array is full. Whenever new triples are added, the database checks whether it is full; if it is, it deletes the triples that are least used. When the database is idle, it performs routine checks to find out which triples are least used. The database could then be fed new triples regularly, and over time it would automatically adapt to the domain where it is used (queried). The Hebbian adaptive semantic triplestore is thus a knowledge store that evolves and becomes more relevant in the environment where it is used.
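To make the idea concrete, here is a minimal sketch in Python of the mechanism described above: each triple carries a fixed-size FIFO of usage timestamps, queries record a timestamp on every matching triple, and the store evicts the least-used triples when it reaches capacity. The class name, the toy pattern-matching `query` method (standing in for a real SPARQL engine), and the eviction tie-break are my own illustrative assumptions, not part of any existing triplestore.

```python
from collections import deque
from datetime import datetime, timezone

HISTORY_SIZE = 10   # up to 10 datetime rows per triple, as described above
CAPACITY = 1000     # maximum number of triples the store will hold

class HebbianTriplestore:
    """Illustrative sketch: triples strengthen with use and fade without it."""

    def __init__(self, capacity=CAPACITY):
        self.capacity = capacity
        # triple -> FIFO of the datetimes at which it matched a query
        self.usage = {}

    def add(self, s, p, o):
        triple = (s, p, o)
        if triple not in self.usage:
            # If the store is full, evict the least-used triple first.
            if len(self.usage) >= self.capacity:
                self._evict_least_used()
            self.usage[triple] = deque(maxlen=HISTORY_SIZE)

    def touch(self, triple):
        """Record a use; deque(maxlen=...) drops the oldest date (FIFO)."""
        self.usage[triple].append(datetime.now(timezone.utc))

    def query(self, s=None, p=None, o=None):
        """Toy triple-pattern match standing in for SPARQL; records usage."""
        hits = [t for t in self.usage
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]
        for t in hits:
            self.touch(t)
        return hits

    def _evict_least_used(self):
        # "Least used" here means fewest recorded uses, breaking ties by
        # oldest last use; never-used triples sort first.
        epoch = datetime.min.replace(tzinfo=timezone.utc)
        victim = min(self.usage,
                     key=lambda t: (len(self.usage[t]),
                                    self.usage[t][-1] if self.usage[t] else epoch))
        del self.usage[victim]
```

A real implementation would also run the idle-time "routine checks" as a background job and use a proper index, but the deque-per-triple already captures the Hebbian core: use strengthens, disuse leads to removal.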
Another interesting feature of this triplestore is that it would know which triples are used most. By correlating the dates in the arrays of different triples, it could suggest relevant or extended SPARQL queries.
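One way the correlation could work is sketched below: two triples whose usage timestamps frequently fall within a short window of each other were probably touched by the same queries, so one becomes a suggestion candidate for the other. The window size, the scoring function, and the function names are all illustrative assumptions.

```python
from datetime import datetime, timezone, timedelta

def co_usage_score(times_a, times_b, window=timedelta(seconds=1)):
    """Fraction of timestamps in times_a with a near match in times_b."""
    if not times_a:
        return 0.0
    hits = sum(1 for ta in times_a
               if any(abs(ta - tb) <= window for tb in times_b))
    return hits / len(times_a)

def suggest(usage, triple, threshold=0.5):
    """Triples whose usage history correlates with `triple`'s history.

    `usage` maps each triple to its list of usage datetimes, i.e. the
    per-triple arrays described earlier.
    """
    mine = usage[triple]
    return [other for other, times in usage.items()
            if other != triple and co_usage_score(mine, times) >= threshold]
```

Suggested triples could then be turned into extended SPARQL queries, for example by adding their patterns as optional clauses to the query the user just ran.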