Can you share your full code? It looks as if you are recreating a Neo4j instance each time you run a query, while you should only create a single one for the whole runtime of your JVM. You should also make sure to always call *db.shutdown()* when you are done using a GraphDatabaseService, but even better, use a fresh JVM.
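Roughly, as a sketch only (I'm reusing the builder call from your snippet below; the path value, class name, and loop body are placeholders, not your actual code):

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;
import org.neo4j.graphdb.factory.GraphDatabaseSettings;

public class QueryRunner {

    // Placeholder path, substitute your own MODELGRAPH_DB_PATH.
    private static final String MODELGRAPH_DB_PATH = "/path/to/graph.db";

    public static void main(String[] args) {
        // ONE embedded instance for the whole JVM, created outside the query loop.
        final GraphDatabaseService knowledgeGraph = new GraphDatabaseFactory()
                .newEmbeddedDatabaseBuilder(MODELGRAPH_DB_PATH)
                .setConfig(GraphDatabaseSettings.pagecache_memory, "500M")
                .newGraphDatabase();

        // Release the store (and its page cache) cleanly when the JVM exits.
        Runtime.getRuntime().addShutdownHook(new Thread(knowledgeGraph::shutdown));

        for (int i = 0; i < 1000; i++) {
            runQuery(knowledgeGraph, i);
        }
    }

    private static void runQuery(GraphDatabaseService db, int queryId) {
        // A new transaction per query, but always against the same database instance.
        try (Transaction tx = db.beginTx()) {
            // ... run your algorithm for this query ...
            tx.success();
        }
    }
}

Every new GraphDatabaseService builds its own page cache, which is exactly the allocation that fails in MuninnPageCache.<init> in your stack trace; with a single instance that allocation happens only once.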
Michael

> On 25.03.2016 at 18:02, Mohammad Hossain Namaki <[email protected]> wrote:
>
> This is my stack trace when I get this error:
>
> Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
>     at org.neo4j.io.pagecache.impl.muninn.MuninnPageCache.<init>(MuninnPageCache.java:246)
>     at org.neo4j.kernel.impl.pagecache.ConfiguringPageCacheFactory.createPageCache(ConfiguringPageCacheFactory.java:96)
>     at org.neo4j.kernel.impl.pagecache.ConfiguringPageCacheFactory.getOrCreatePageCache(ConfiguringPageCacheFactory.java:87)
>     at org.neo4j.kernel.impl.factory.PlatformModule.createPageCache(PlatformModule.java:277)
>     at org.neo4j.kernel.impl.factory.PlatformModule.<init>(PlatformModule.java:154)
>     at org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory.createPlatform(GraphDatabaseFacadeFactory.java:181)
>     at org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory.newFacade(GraphDatabaseFacadeFactory.java:124)
>     at org.neo4j.kernel.impl.factory.CommunityFacadeFactory.newFacade(CommunityFacadeFactory.java:43)
>     at org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory.newFacade(GraphDatabaseFacadeFactory.java:108)
>     at org.neo4j.graphdb.factory.GraphDatabaseFactory.newDatabase(GraphDatabaseFactory.java:129)
>     at org.neo4j.graphdb.factory.GraphDatabaseFactory$1.newDatabase(GraphDatabaseFactory.java:117)
>     at org.neo4j.graphdb.factory.GraphDatabaseBuilder.newGraphDatabase(GraphDatabaseBuilder.java:185)
>     at org.neo4j.graphdb.factory.GraphDatabaseFactory.newEmbeddedDatabase(GraphDatabaseFactory.java:79)
>     at org.neo4j.graphdb.factory.GraphDatabaseFactory.newEmbeddedDatabase(GraphDatabaseFactory.java:74)
>     at wsu.eecs.mlkd.KGQuery.test.QueryGenerator.ConstrucQueryGraph(QueryGenerator.java:478)
>     at wsu.eecs.mlkd.KGQuery.machineLearningQuerying.BeamSearchRunner.initialize(BeamSearchRunner.java:248)
>     at wsu.eecs.mlkd.KGQuery.machineLearningQuerying.BeamSearchRunner.main(BeamSearchRunner.java:439)
>
> On Thursday, March 24, 2016 at 11:25:30 PM UTC-7, Mohammad Hossain Namaki wrote:
>
> Hi all,
> I'm using Neo4j 2.3.0.
> I'm running some experiments (over ~1000 graph queries) against the Neo4j Java API; I don't use Cypher for that, I have my own algorithm. After running around 100-200 queries, I get "GC overhead limit exceeded". I think there is something wrong with "org.neo4j.io.pagecache.impl.muninn.MuninnPageCache".
>
> Is there any way to bound this cache size in the Java API, or to disable it?
>
> I've already used this:
>
> knowledgeGraph = new GraphDatabaseFactory().newEmbeddedDatabaseBuilder(MODELGRAPH_DB_PATH)
>         .setConfig(GraphDatabaseSettings.pagecache_memory, "500M").newGraphDatabase();
>
> but it didn't work.
>
> Every time, I open a nested transaction for both the query and data graphs, and I mark them as successful and close them after finishing each query.
>
> I've used -Xmx12g -Xms12g as VM arguments and my machine has 16GB of memory. The data graph itself takes 4.00 GB on disk.
>
> Could you please help me with that?
>
> Thanks
