No — dbms.pagecache.memory only bounds the memory the page cache itself uses; it doesn't interfere with Lucene's memory mapping at all.
Michael

> On 11 Jun 2015, at 01:51, Zongheng Yang <[email protected]> wrote:
>
> To clarify, Lucene mmaps the indexes by default, so reading the mmap'ed
> indexes will put them into the OS buffer cache. My previous question is just
> about whether neo4j's dbms.pagecache.memory puts an upper bound on how many
> of those Lucene pages can be mmap'ed.
>
> On Wednesday, June 10, 2015 at 2:59:29 PM UTC-7, Zongheng Yang wrote:
>
> Thanks Michael.
>
> About Lucene internally caching its indexes: does that part of the memory come
> from neo4j's dbms.pagecache.memory portion? Your answers seem to suggest
> that it doesn't.
>
> On Tuesday, June 9, 2015 at 11:47:41 PM UTC-7, Michael Hunger wrote:
>
>> On 08 Jun 2015, at 22:24, Zongheng Yang <[email protected]> wrote:
>>
>> I'm using Neo4j 2.2.2 community edition, embedded in a Java app, and no
>> concurrency in queries at all.
>>
>> (1) Caching of the indexes. What components in Neo4j are responsible for
>> caching the indexes (on node properties)? The manual doesn't seem to
>> mention this, and it seems that the page cache is purely for the data
>> (nodes, relationships, properties, etc.).
>
> Lucene internally caches the index data. Going forward we will build our own
> exact indexes which utilize the same page-cache structures we use for our
> store files today.
>
>> (2) The object cache (reference caches). What are some files / packages in
>> the source code that implement these?
>
> Don't bother, it's gone in 2.3 anyway.
>
>> (3) Memory config for a small-memory machine. Say the physical memory is M
>> = ~4GB, the indexes are about 4GB, and the store files are much larger (say
>> ~20GB).
>
> Why would you do a perf test on such a machine for a large graph like this?
> In general I would try 2.3-M02 for that and use 1G heap, 2.5G page cache
> (leave .5G for the OS and its work) and cross my fingers.
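[Editor's note: Michael's suggested split (1G heap, 2.5G page cache, 0.5G left to the OS on a ~4GB box) can be written down in config roughly as below. The property name is the one discussed in this thread; the heap is set with plain JVM flags since the app embeds Neo4j. Treat the values as a starting point, not a tuned recommendation:

```
# conf/neo4j.properties — cap the store-file page cache (Neo4j 2.2/2.3)
dbms.pagecache.memory=2500m

# Heap for the embedded Java app is a standard JVM flag, not a Neo4j setting:
#   java -Xms1g -Xmx1g -jar my-embedded-app.jar
```

As discussed above, Lucene's mmap'ed index pages are accounted to the OS buffer cache and fall outside both of these figures.]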
> But I wouldn't trust these performance numbers :)
>
>> Suppose I'd want to benchmark the performance of a particular query, say
>> getting all nodes that have property1 equal to val1. If I'd want the index
>> (of size ~M) to mostly fit in memory, how should I set the JVM heap size &
>> dbms.pagecache.memory? Also, is it right that in this case the pagecache
>> size is not as important?
>>
>> Another query: say this time it doesn't involve indexes, and I'm just
>> traversing random portions of a graph. I imagine for this I'd need to set a
>> large pagecache size and a small JVM heap? Could someone give a concrete
>> suggestion?
>>
>> Thanks in advance.
>>
>> Zongheng
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Neo4j" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to [email protected].
>> For more options, visit https://groups.google.com/d/optout.
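[Editor's note: for the first benchmark Zongheng describes (all nodes whose property1 equals val1), the 2.x way to get index-backed lookups is a schema index on a label. A sketch in Cypher, with a hypothetical label name since none is given in the thread:

```
// hypothetical label :Item; property1 / val1 as in the question
CREATE INDEX ON :Item(property1);

MATCH (n:Item)
WHERE n.property1 = 'val1'
RETURN n;
```

The second, traversal-only workload never touches such an index, which is why the usual advice there is the opposite memory split: a large page cache for the store files and a small JVM heap.]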
