At a brief glance, I think your issue might be related to the fact that Neo4j 
doesn’t support nested transactions. So even though you commit txQ, everything 
is still happening inside the single outer txG transaction, and holding the 
entire transaction state for that one transaction takes up a lot of memory.
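
If that is what’s happening, I would restructure the loop so that txG is also 
opened, committed and closed once per query, instead of keeping one outer 
transaction alive for the whole run. A rough sketch of what I mean below, 
assuming your framework only needs the two databases for the duration of a 
single query (MyObj, printTheResultsIntoTheFile and the query loop are 
placeholders taken from your snippet, not real API):

import java.util.List;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Transaction;

void runAllQueries(GraphDatabaseService knowledgeGraph,
                   List<GraphDatabaseService> queryGraphs) {
    for (GraphDatabaseService queryGraph : queryGraphs) {
        // Both transactions live only for this one query; they are
        // committed and closed at the end of every iteration, so no
        // single transaction accumulates state across the whole run.
        try (Transaction txG = knowledgeGraph.beginTx();
             Transaction txQ = queryGraph.beginTx()) {
            MyObj myFramework = new MyObj();
            printTheResultsIntoTheFile(myFramework.run());
            txQ.success();
            txG.success();
        }
    }
}

The important part is that both txQ and txG go out of scope at the bottom of 
each iteration, so Neo4j can release their transaction state before the next 
query starts.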



> On Mar 8, 2016, at 9:19 PM, Mohammad Hossain Namaki <[email protected]> 
> wrote:
> 
> I’m using Neo4j 2.3.0 in Java. I have 16 GB of RAM, I’m running the code on a 
> Mac OS X laptop, and I use "-Xmx12g -Xms12g" as VM arguments.
> 
> I’ve encountered a “GC overhead limit exceeded” problem with the Neo4j Java API.
> 
> 
> 
> In order to run experiments with lots of queries, I have a program which opens 
> a transaction over different query.db's and gets the answers from my own 
> framework, which is wrapped in an object (it runs a query and prints its 
> running time to a file).
> 
> 
> 
> So, I don’t use Cypher to run the queries.
> 
> 
> 
> For each query I open two transactions, one over a query.db and one over the 
> data.db, initialize my framework, and run it. The memory usage increases 
> slightly with each query until the “GC overhead” error finally happens.
> 
> 
> 
> try (Transaction txG = knowledgeGraph.beginTx()) {
>     try (Transaction txQ = queryGraph.beginTx()) {
>         MyObj myFramework = new MyObj();
>         printTheResultsIntoTheFile(myFramework.run());
>         myFramework = null;
>         txQ.success();
>         txQ.close();
>     }
>     // ...
> }
> 
> 
> These are some of my endeavors to get rid of this error:
> 
> 1. After dumping the heap with a monitoring program, I found that the problem 
> is related to “org.neo4j.io.pagecache.impl.muninn.MuninnPageCache”, so I tried 
> to limit the page cache size to a small value:
> 
> dataGraph = new GraphDatabaseFactory().newEmbeddedDatabaseBuilder(MODELGRAPH_DB_PATH)
>         .setConfig(GraphDatabaseSettings.pagecache_memory, "500M")
>         .newGraphDatabase();
> 
> However, the “memory leak” problem still exists.
> 
> 
> 
> 2. After tx.success(), I called tx.close() to make sure that it doesn’t keep 
> using memory.
> 
> 3. After using my framework (object) to find the answers of a query, I 
> explicitly set it to null: topkFramework = null;
> 
> 4. I called System.gc(); and System.runFinalization();
> 
> 5. I changed all of my static variables, such as MyCacheServer or 
> MyNeighborIndexer, to non-static ones, and after each query I cleared them and 
> explicitly set them to null:
> 
> queryNodeIdSet.clear();
> queryNodeIdSet = null;
> queryNodeIdSet = new HashSet<Long>();
> 
> I've used Memory Analyzer (MAT) for Eclipse. The biggest object is 
> org.neo4j.io.pagecache.impl.muninn.MuninnPage, which holds 3.54 GB of memory. 
> The other objects are fine in my case because of some node/relationship id 
> indexing. (The dataset itself is 3.15 GB; I really don't want this much 
> caching.)
> Could you please help me with this? I've also asked this question 
> <http://stackoverflow.com/questions/35876057/neo4j-java-api-gc-overhead-limit-exceeded-after-running-multiple-queries>
>  on Stack Overflow.
> 
> 
> 
