I have the following settings in neo4j.conf. This is about 1/8th of the machine's memory:

dbms.memory.heap.initial_size=120000m
dbms.memory.heap.max_size=120000m

The page cache is commented out:

#dbms.memory.pagecache.size=10g

Wayne

On Monday, 13 March 2017 02:35:00 UTC, Michael Hunger wrote:
>
> What does your configuration look like in neo4j.conf for heap and
> page-cache?
>
> neo4j> call apoc.warmup.run();
>
> pageSize  nodesPerPage  nodesTotal  nodePages  nodesTime  relsPerPage  relsTotal   relPages  relsTime  totalTime
> 8192      546           123396013   226001     2          240          2035429925  8480959   73        76
>
> So it loaded 226001 pages of nodes totalling 123m = 1.7 GB,
> and 8480959 pages of relationships totalling 2bn = 65 GB,
> in 76 seconds.
>
> So depending on your configuration, you now either have 67GB of
> page-cache filled, or it spilled over.
>
> All operations that now require only node or relationship-record access
> should access them in memory.
>
> Operations that require properties will still need to load them into the
> page-cache and, depending on the size setup, displace existing entries.
>
> On Sun, Mar 12, 2017 at 4:45 PM, unrealadmin23 via Neo4j <
> [email protected]> wrote:
>
>> I installed apoc - I didn't know this existed - it looks very
>> comprehensive; thanks:
>>
>> neo4j> call apoc.warmup.run();
>>
>> pageSize, nodesPerPage, nodesTotal, nodePages, nodesTime, relsPerPage,
>> relsTotal, relPages, relsTime, totalTime
>> 8192, 546, 123396013, 226001, 2, 240, 2035429925, 8480959, 73, 76
>>
>> Not much in the way of a noticeable speedup, though.
>>
>> I don't know anything about Java, and cannot find the jvm command.
>> Can you be more specific about what I need to run or configure from the
>> Linux command line?
>>
>> Wayne.
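[Editor's note] The 1.7 GB / 65 GB figures in Michael's reply follow directly from the warmup output: multiply each page count by the 8192-byte page size. A quick sketch of that arithmetic (not part of the original thread):

```python
# Page-cache footprint implied by the apoc.warmup.run() output above.
PAGE_SIZE = 8192        # bytes per page, from the pageSize column

node_pages = 226_001    # nodePages column
rel_pages = 8_480_959   # relPages column

GIB = 1024 ** 3
node_gib = node_pages * PAGE_SIZE / GIB
rel_gib = rel_pages * PAGE_SIZE / GIB

print(f"nodes:         {node_gib:.1f} GiB")            # ~1.7 GiB
print(f"relationships: {rel_gib:.1f} GiB")             # ~64.7 GiB
print(f"total:         {node_gib + rel_gib:.1f} GiB")  # ~66.4 GiB, the "67GB" above
```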
>>
>> On Saturday, 11 March 2017 10:14:49 UTC, Michael Hunger wrote:
>>>
>>> There is no kernel buffer, but our own page cache that you have to
>>> configure.
>>>
>>> call apoc.warmup.run()
>>>
>>> which warms up node and rel blocks.
>>>
>>> Other speedup comes from JVM JIT warmup.
>>>
>>> Sent from my iPhone
>>>
>>> On 11.03.2017 at 08:28, unrealadmin23 via Neo4j <
>>> [email protected]> wrote:
>>>
>>> Hi,
>>>
>>> Assuming that I have enough memory to load the entire DB into it, are
>>> there any options (optimisations) to allow for this, or is it just a
>>> case of the disk blocks being able to reside in the kernel's buffer
>>> cache?
>>>
>>> Currently, the same query gets faster the more it is run - to a point.
>>> I can live with an elongated startup time.
>>>
>>> Thanks, Wayne
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "Neo4j" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to [email protected].
>>> For more options, visit https://groups.google.com/d/optout.
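[Editor's note] Tying the thread together: with `dbms.memory.pagecache.size` commented out, Neo4j 3.x falls back to a heuristic default, so Michael's advice amounts to setting it explicitly so the warmed store fits. A sketch of what that might look like in neo4j.conf; the setting names are the 3.x ones used in this thread, but the sizes are illustrative, not from the thread, and assume the machine has ample free RAM beyond the heap:

```ini
# neo4j.conf sketch (Neo4j 3.x setting names, as used in this thread).
# Sizes are illustrative. The warmup above touched ~67 GB of node and
# relationship store, so a page cache larger than that keeps all of it
# resident; leave extra headroom for property stores and indexes.
dbms.memory.heap.initial_size=32g
dbms.memory.heap.max_size=32g
dbms.memory.pagecache.size=120g
```

After restarting with an explicit page cache, running `call apoc.warmup.run()` again should pre-load the node and relationship pages into it, so the first queries no longer pay the warm-up cost.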
