Hi all,
I'm using Neo4j 2.3.0.
I'm running some experiments (around 1000 graph queries) against the Neo4j 
Java API. I don't use Cypher for this; the matching is done by my own 
algorithm. After running roughly 100-200 queries, I get a "GC overhead 
limit exceeded" error, and I suspect the problem lies in 
org.neo4j.io.pagecache.impl.muninn.MuninnPageCache.

*Is there any way to bound this cache's size via the Java API, or to disable it?*

I've already tried limiting the page cache size:

knowledgeGraph = new GraphDatabaseFactory()
        .newEmbeddedDatabaseBuilder(MODELGRAPH_DB_PATH)
        .setConfig(GraphDatabaseSettings.pagecache_memory, "500M")
        .newGraphDatabase();

but it didn't work.


For each query, I open a transaction on both the query graph and the data 
graph, and I call success() and close() on them once the query finishes, 
roughly as in the sketch below.
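
Concretely, the per-query pattern looks roughly like this (runQuery is a 
placeholder for my own matching algorithm; queryGraph and knowledgeGraph 
are the two embedded databases):

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Transaction;

void executeOneQuery(GraphDatabaseService queryGraph,
                     GraphDatabaseService knowledgeGraph) {
    // try-with-resources guarantees both transactions are closed,
    // even if the query throws
    try (Transaction queryTx = queryGraph.beginTx();
         Transaction dataTx = knowledgeGraph.beginTx()) {
        runQuery(queryGraph, knowledgeGraph); // my own algorithm, no Cypher
        queryTx.success();
        dataTx.success();
    }
}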


I'm running with -Xmx12g -Xms12g as VM arguments on a machine with 16 GB of 
RAM. The data graph itself takes about 4 GB on disk.
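
For completeness, the launch command is roughly the following (the jar and 
main class names are placeholders):

java -Xms12g -Xmx12g -cp experiments.jar com.example.RunExperiments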


Could you please help me with this?


Thanks