I emailed you the messages.log privately. I hope it helps. Thanks!
On Thursday, April 24, 2014 7:23:31 PM UTC-4, Michael Hunger wrote:
>
> Streaming doesn't help when you aggregate, as it has to go through all
> data to finish the aggregation.
>
> What is the stacktrace for the error?
> It should not run into an OOM anymore.
>
> Can you share your graph.db/messages.log?
>
> On Thu, Apr 24, 2014 at 10:45 PM, Alx <[email protected]> wrote:
>
>> Hi Michael,
>>
>> Thanks for the suggestions. I followed them, but unfortunately I get the
>> same error as before. The database occupies 16GB on my hard disk. I
>> wonder if it should be "wrapper.java.maxmemory=160000". Unfortunately my
>> machine has 16GB RAM in all. I am running the latest version of Neo4j
>> (2.0.2).
>>
>> Is there a "trick" to have the Cypher query stream the results to a file
>> as they come in? Currently I am running the Cypher query from the
>> command line like this:
>>
>> ./bin/neo4j-shell -file query.cql > result.txt
>>
>> On Thursday, April 24, 2014 1:19:52 PM UTC-4, Michael Hunger wrote:
>>
>>> Don't use cache type "strong";
>>> it will just GC all the time.
>>>
>>> I changed your settings inline.
>>>
>>> Which version are you using?
>>>
>>> Sent from mobile device
>>>
>>> On 24.04.2014 at 16:11, Alx <[email protected]> wrote:
>>>
>>> I have a Neo4j db with 1M nodes and 10M relationships running on my
>>> local computer (16GB RAM, 4-core i7).
>>> The configuration of the server is as follows:
>>>
>>> neo4j.properties:
>>> neostore.nodestore.db.mapped_memory=25M
>>> neostore.relationshipstore.db.mapped_memory=500M
>>> neostore.propertystore.db.mapped_memory=300M
>>> neostore.propertystore.db.strings.mapped_memory=300M
>>> neostore.propertystore.db.arrays.mapped_memory=0M
>>> cache_type=weak
>>>
>>> neo4j-wrapper.conf:
>>> wrapper.java.initmemory=1512
>>> wrapper.java.maxmemory=8000
>>>
>>> I am running the following query:
>>>
>>> MATCH (m:User)-[:REL1]->(a)-[:REL2]->(b)
>>> WITH m, b, count(*) AS cnt
>>> MATCH (b)<-[:REL1]-(n)
>>> RETURN m.id AS From, n.id AS To, count(*)*cnt AS Number
>>>
>>> [Michael, inline:] Reduce the intermediate count of b's with the
>>> aggregation.
>>>
>>> The query makes the heap size grow until it hits 11GiB. Then the
>>> server just struggles (loses connection, re-connects) for a few hours
>>> and eventually dies with either this error:
>>>
>>> ERROR (-v for expanded information):
>>> Error unmarshaling return header; nested exception is:
>>> java.net.SocketException: Operation timed out
>>>
>>> or this error:
>>>
>>> ERROR (-v for expanded information):
>>> Error occurred in server thread; nested exception is:
>>> java.lang.OutOfMemoryError: Java heap space
>>>
>>> I know there are roughly 8M [:REL2] relationships. I don't know what
>>> else I can do to fix this. Perhaps the query needs to be optimized, but
>>> I don't know how to make it simpler, because [:REL2] is important to
>>> use. Any thoughts or suggestions would be much appreciated.
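To make the memory question in the thread concrete: `wrapper.java.maxmemory` is specified in megabytes, so a value of 160000 would ask for roughly 160GB of heap, which cannot fit on a 16GB machine. A sketch of one plausible neo4j-wrapper.conf for that machine follows; the values are illustrative assumptions, not settings confirmed in this thread. The heap, the mapped-memory stores from neo4j.properties (about 1.1GB in the config above), the OS page cache, and other processes all have to share physical RAM:

```
# neo4j-wrapper.conf -- illustrative sizing for a 16GB machine
# (assumed values, not taken from this thread).
# Values are in MB: 8000 asks for ~8GB of heap, leaving room for the
# ~1.1GB of mapped_memory configured in neo4j.properties plus the
# OS page cache.
wrapper.java.initmemory=1512
wrapper.java.maxmemory=8000
```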
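On the streaming question: as noted in the thread, streaming cannot help while an aggregation is still accumulating state. One workaround is to run the query in slices so each run only aggregates over part of the graph. The sketch below reuses the labels and relationship types from the query above, but the slice predicate is a hypothetical assumption (it presumes `User.id` is numeric and that you would run the query once per id range, e.g. from separate query files passed to neo4j-shell):

```cypher
// Hedged sketch: aggregate one slice of users per run, so the
// intermediate (m, b, cnt) state stays bounded. The id range below
// is hypothetical; adjust and repeat per slice.
MATCH (m:User)-[:REL1]->(a)-[:REL2]->(b)
WHERE m.id >= 0 AND m.id < 100000
WITH m, b, count(*) AS cnt
MATCH (b)<-[:REL1]-(n)
RETURN m.id AS From, n.id AS To, count(*) * cnt AS Number
```

Each slice's output can then be appended to the result file, at the cost of running the query several times.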
--
You received this message because you are subscribed to the Google Groups "Neo4j" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
For more options, visit https://groups.google.com/d/optout.
