Excellent. Related to this, I believe, I am seeing what looks like a memory leak in the memory-mapped files when using the HPC, starting with 2.2.0.

I have a system batch process that grabs Place nodes that need to be geocoded in chunks of 10k, holds on to their graph ids (releasing the nodes and closing the transaction). Then we grab one node at a time (by id), use an external service to get the lat/lon for the node’s address, and update the node. Only one node is updated per transaction here.

This worked great before 2.2.0. Since then, after running for several hours the Linux kernel ends up killing the entire process because it is consuming too much system memory. Note that I don’t run out of heap space but rather system memory, leading me to believe the issue lies with memory mapped outside of the heap. Since I changed the system to the soft cache yesterday to verify the other issue with HPC and array size, the system hasn’t run out of memory.
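For reference, the soft-cache workaround described above amounts to a couple of lines in conf/neo4j.properties (a sketch for Neo4j 2.2 enterprise; the 60g page-cache value is this particular system’s setting, not a general recommendation):

```properties
# Switch the object cache away from "hpc" (the enterprise default)
cache_type=soft

# Page cache size; note this setting name stands on its own and is
# NOT prefixed with neo4j.neostore.nodestore.
dbms.pagecache.memory=60g
```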
Clark Richey [email protected] > On May 12, 2015, at 9:42 AM, Chris Vest <[email protected]> wrote: > > The HPC heuristics will be fixed in the next 2.2.x release. > > -- > Chris Vest > System Engineer, Neo Technology > [ skype: mr.chrisvest, twitter: chvest ] > > >> On 11 May 2015, at 15:47, Clark Richey <[email protected] >> <mailto:[email protected]>> wrote: >> >> yes, if I change the cache to soft then I can just set the page cache and >> everything works great with having to manually configure the relationships >> or node cache sizes. >> >> >> >> Clark Richey >> [email protected] <mailto:[email protected]> >> >> >> >>> On May 10, 2015, at 4:47 PM, Chris Vest <[email protected] >>> <mailto:[email protected]>> wrote: >>> >>> I think this might be caused by a miscalculation in the High Performance >>> Cache settings heuristics. Does the problem go away if you change the >>> cache_type setting away from “hpc” (which is the default in our enterprise >>> edition), or use the 2.3-M1 milestone? >>> >>> By the way, the “dbms.pagecache.memory” setting is on its own; it is not >>> prefixed with “neo4j.neostore.nodestore.” or anything like that. >>> >>> -- >>> Chris Vest >>> System Engineer, Neo Technology >>> [ skype: mr.chrisvest, twitter: chvest ] >>> >>> >>>> On 07 May 2015, at 17:51, Clark Richey <[email protected] >>>> <mailto:[email protected]>> wrote: >>>> >>>> There is no usable stack trace: : Error creating bean with name >>>> 'neoService': Invocation of init method failed; nested exception is >>>> java.lang.OutOfMemoryError: Requested array size exceeds VM limit >>>> >>>> >>>> Further testing indicates that it is either the node_cache_array_fraction >>>> and the relationship_cache_array_faction are causing the problem. It is >>>> supposed to default to 1%. On an 150G heap that should be 1.5G. However >>>> the array size being generated is too long. 
Explicitly setting >>>> node_cache_size and relationship_cache_size seems to address this although >>>> it is far from ideal. >>>> >>>> >>>> >>>> >>>> Clark Richey >>>> [email protected] <mailto:[email protected]> >>>> >>>> >>>> >>>>> On May 6, 2015, at 8:24 PM, Sumit Gupta <[email protected] >>>>> <mailto:[email protected]>> wrote: >>>>> >>>>> hi, >>>>> >>>>> Please provide the exact exception along with the modified parameters. >>>>> >>>>> Thanks, >>>>> Sumit >>>>> >>>>> On Wednesday, 6 May 2015 20:21:51 UTC+5:30, Clark Richey wrote: >>>>> Hello, >>>>> I’m running Neo4J 2.2.1 with 150G heap space on a box with 240G. I set >>>>> the neo4j.neostore.nodestore.dbms.pagecache.memory >>>>> to 60G (slightly less than 75% of remaining system memory as >>>>> recommended). However, when I startup I get an error that the system >>>>> can’t start because I’m trying to allocate an array whose size exceeds >>>>> the maximum allowed size. >>>>> >>>>> >>>>> >>>>> --- >>>>> Clark Richey >>>>> [email protected] <javascript:> >>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> You received this message because you are subscribed to the Google Groups >>>>> "Neo4j" group. >>>>> To unsubscribe from this group and stop receiving emails from it, send an >>>>> email to [email protected] >>>>> <mailto:[email protected]>. >>>>> For more options, visit https://groups.google.com/d/optout >>>>> <https://groups.google.com/d/optout>. >>>> >>>> >>>> -- >>>> You received this message because you are subscribed to the Google Groups >>>> "Neo4j" group. >>>> To unsubscribe from this group and stop receiving emails from it, send an >>>> email to [email protected] >>>> <mailto:[email protected]>. >>>> For more options, visit https://groups.google.com/d/optout >>>> <https://groups.google.com/d/optout>. >>> >>> >>> -- >>> You received this message because you are subscribed to the Google Groups >>> "Neo4j" group. 
>>> To unsubscribe from this group and stop receiving emails from it, send an >>> email to [email protected] >>> <mailto:[email protected]>. >>> For more options, visit https://groups.google.com/d/optout >>> <https://groups.google.com/d/optout>. >> >> >> -- >> You received this message because you are subscribed to the Google Groups >> "Neo4j" group. >> To unsubscribe from this group and stop receiving emails from it, send an >> email to [email protected] >> <mailto:[email protected]>. >> For more options, visit https://groups.google.com/d/optout >> <https://groups.google.com/d/optout>. > > > -- > You received this message because you are subscribed to the Google Groups > "Neo4j" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to [email protected] > <mailto:[email protected]>. > For more options, visit https://groups.google.com/d/optout > <https://groups.google.com/d/optout>. -- You received this message because you are subscribed to the Google Groups "Neo4j" group. To unsubscribe from this group and stop receiving emails from it, send an email to [email protected]. For more options, visit https://groups.google.com/d/optout.
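A quick sanity check on the 1% heuristic discussed in the thread (a back-of-the-envelope sketch; it assumes the JVM’s maximum array length of Integer.MAX_VALUE elements): a correctly computed 1% of a 150G heap fits comfortably under the JVM’s array-size ceiling, which is why the “Requested array size exceeds VM limit” error points at a miscalculation in the heuristic rather than at the 1% default itself.

```python
# Back-of-the-envelope check of the HPC cache heuristic.
# Assumptions: the JVM caps array length at Integer.MAX_VALUE elements,
# and node/relationship_cache_array_fraction defaults to 1% of the heap.
MAX_ARRAY_LEN = 2**31 - 1            # JVM array-length ceiling (~2.15 billion)

heap_bytes = 150 * 2**30             # the 150G heap from the thread
one_percent = heap_bytes // 100      # the heuristic's intended size: 1.5G

print(one_percent)                   # 1610612736
print(one_percent <= MAX_ARRAY_LEN)  # True -- 1% of 150G should not overflow
```

So an array sized at the documented 1% would be legal; the error implies the generated size was substantially larger than that.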
