Thanks for the reply.
Do you mean that every time I should delete the nodes/relationships from that
single query graph instance and insert new nodes/relationships into it? Would
deleting the nodes/relationships empty that query graph instance's cache?
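If so, do you mean something roughly like the sketch below? (This is just my
guess at the pattern, using the Neo4j 2.3 embedded API; smallGraph is the
query graph instance I create further down.)

import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Relationship;
import org.neo4j.graphdb.Transaction;
import org.neo4j.tooling.GlobalGraphOperations;

try (Transaction tx = smallGraph.beginTx()) {
    // delete all relationships first, otherwise deleting the nodes would fail
    for (Relationship r : GlobalGraphOperations.at(smallGraph).getAllRelationships()) {
        r.delete();
    }
    for (Node n : GlobalGraphOperations.at(smallGraph).getAllNodes()) {
        n.delete();
    }
    tx.success();
}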
Right now, I've used the following solution, and it works much better.
However, it still has a small memory leak.
smallGraph = new GraphDatabaseFactory()
        .newEmbeddedDatabaseBuilder(graphPath)
        .setConfig(GraphDatabaseSettings.pagecache_memory, "240k")
        .newGraphDatabase();
Thanks anyway
Best.
On Thursday, March 24, 2016 at 11:25:30 PM UTC-7, Mohammad Hossain Namaki
wrote:
>
> Hi all,
> I'm using Neo4j 2.3.0.
> I'm running some experiments (over ~1000 graph queries) against the Neo4j
> Java API. I don't use Cypher for that; I have my own algorithm. After running
> around 100-200 queries, I get a "GC Overhead Limit Exceeded" error. I think
> there is something wrong with
> "org.neo4j.io.pagecache.impl.muninn.MuninnPageCache".
>
> *Is there any way to bound this cache size in the Java API, or disable it?*
>
> I've already used this:
>
> knowledgeGraph = new GraphDatabaseFactory()
>         .newEmbeddedDatabaseBuilder(MODELGRAPH_DB_PATH)
>         .setConfig(GraphDatabaseSettings.pagecache_memory, "500M")
>         .newGraphDatabase();
>
> but it didn't work.
>
>
> Every time, I open a nested transaction for both the query and data graphs,
> and I mark them as successful and close them after finishing each query.
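> Concretely, the per-query pattern is roughly the sketch below (queryGraph and
> runQuery() are just placeholders for my query graph instance and my own
> matching algorithm):
>
> try (Transaction dataTx = knowledgeGraph.beginTx();
>      Transaction queryTx = queryGraph.beginTx()) {
>     runQuery(queryGraph, knowledgeGraph); // my algorithm, no Cypher
>     queryTx.success(); // mark both transactions successful
>     dataTx.success();
> } // both transactions are closed here by try-with-resources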
>
>
> I've used -Xmx12g -Xms12g as VM arguments, and my machine has 16 GB of
> memory. The data graph itself takes up 4.00 GB on disk.
>
>
> Could you please help me with that?
>
>
> Thanks