It is not about the heap; the page-cache uses off-heap memory.

And for the heap-related sections you will have to make sure that shutdown() is 
called, and also null out references before calling System.gc().
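
A minimal sketch of that teardown, assuming Neo4j 2.3 embedded and a hypothetical store path (the variable names are illustrative, not from the thread):

```java
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public class Teardown {
    public static void main(String[] args) {
        // Hypothetical path; substitute your own store directory.
        GraphDatabaseService db = new GraphDatabaseFactory()
                .newEmbeddedDatabase("data/query-graph");
        try {
            // ... run your queries here ...
        } finally {
            db.shutdown(); // releases the off-heap page cache and store files
            db = null;     // drop the reference so the instance can be collected
            System.gc();   // a hint only; the JVM is free to ignore it
        }
    }
}
```

Note that System.gc() is never required for correctness; the important parts are calling shutdown() and not keeping references to old instances alive.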

Alternatively, you can just clean out the instance, e.g. with MATCH (n) DETACH DELETE n;
and then repopulate it, instead of creating a new instance each time.
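
Reusing one instance that way might look like this (a sketch against the Neo4j 2.3 API, with a hypothetical store path):

```java
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public class Repopulate {
    public static void main(String[] args) {
        // Hypothetical path; substitute your own store directory.
        GraphDatabaseService db = new GraphDatabaseFactory()
                .newEmbeddedDatabase("data/query-graph");

        // Wipe the store instead of spinning up a fresh embedded instance.
        try (Transaction tx = db.beginTx()) {
            db.execute("MATCH (n) DETACH DELETE n");
            tx.success();
        }

        // ... repopulate with the next query graph here, then run it ...

        db.shutdown();
    }
}
```

This keeps a single page cache alive for the whole experiment run, rather than accumulating one per instance.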

Michael

> On 29.03.2016, at 00:06, Mohammad Hossain Namaki <[email protected]> wrote:
> 
> Yeah, thanks! My problem is solved by using that setConfig and removing the 
> registerShutDownHook() call.
> 
> Another option I have is deleting all nodes/relationships and then adding new 
> ones. However, until a few days ago I didn't even know the problem was caused 
> by creating multiple instances (I thought that after shutting them down, they 
> would be removed from the heap too).
> 
> My queries were created earlier by some other algorithms and written to a file; 
> then, in the experimental setup, I construct each of them as a Neo4j instance 
> and use them one by one.
> 
> You can write your answer under 
> http://stackoverflow.com/questions/35876057/neo4j-java-api-gc-overhead-limit-exceeded-after-running-multiple-queries
> and then I can select it as the accepted answer.
> 
> Thanks.
> 
> On Thursday, March 24, 2016 at 11:25:30 PM UTC-7, Mohammad Hossain Namaki 
> wrote:
> Hi all,
> I'm using Neo4j 2.3.0.
> I'm running some experiments (over ~1000 graph queries) on the Neo4j Java 
> API. I don't use Cypher for that; I have my own algorithm. After running 
> around 100-200 queries, I'm getting "GC overhead limit exceeded". I think 
> there is something wrong with 
> "org.neo4j.io.pagecache.impl.muninn.MuninnPageCache". 
> 
> Is there any way to bound this cache size in the Java API, or disable it?
> 
> I've already used this:
> 
>     knowledgeGraph = new GraphDatabaseFactory()
>             .newEmbeddedDatabaseBuilder(MODELGRAPH_DB_PATH)
>             .setConfig(GraphDatabaseSettings.pagecache_memory, "500M")
>             .newGraphDatabase();
> 
> but it didn't work.
> 
> Every time, I open a nested transaction for both the query and data graphs, and 
> I mark them successful and close them after finishing each query.
> 
> I've used -Xmx12g -Xms12g as VM arguments and my machine has 16GB memory. The 
> data graph itself takes 4.00 GB on disk.
> 
> Could you please help me on that?
> 
> Thanks

-- 
You received this message because you are subscribed to the Google Groups 
"Neo4j" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
