Great, so maybe neo4j-index should be updated to depend on Lucene 2.9.3.
2010/7/9 Bill Janssen jans...@parc.com
Note that a couple of memory issues are fixed in Lucene 2.9.3. Leaking
when indexing big docs, and indolent reclamation of space from the
FieldCache.
Bill
Arijit Mukherjee
Sent: Thursday, July 08, 2010 1:35 PM
To: (User@lists.neo4j.org)
Subject: [Neo4j] OutOfMemory while populating large graph
I have seen people discuss committing transactions after some microbatch of
a few hundred records, but I thought this was optional. I thought Neo4J
would automatically write out to disk as memory became full.
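The micro-batch pattern the thread keeps coming back to (commit and reopen the transaction every few hundred writes, so uncommitted state does not accumulate on the heap) can be sketched roughly as below. The `Db`/`Tx` interfaces here are minimal hypothetical stand-ins, not the real Neo4j API; with the Neo4j 1.x embedded API the same spots would call `graphDb.beginTx()`, `tx.success()`, and `tx.finish()`.

```java
// Sketch of the micro-batch commit pattern discussed in the thread:
// commit every BATCH_SIZE writes so uncommitted transaction state does
// not pile up on the heap. Db and Tx are hypothetical stand-ins for the
// embedded database and its transaction handle.
import java.util.List;

class MicroBatchInsert {
    static final int BATCH_SIZE = 500;   // illustrative batch size

    interface Tx { void commit(); }                          // stand-in transaction
    interface Db { Tx beginTx(); void write(String record); }

    // Writes all records, committing every BATCH_SIZE writes.
    // Returns the number of commits performed (for illustration).
    static int insertAll(Db db, List<String> records) {
        int commits = 0;
        int inBatch = 0;
        Tx tx = db.beginTx();
        for (String r : records) {
            db.write(r);
            if (++inBatch == BATCH_SIZE) {
                tx.commit();             // flush this micro-batch
                commits++;
                tx = db.beginTx();       // start the next one
                inBatch = 0;
            }
        }
        tx.commit();                     // commit the tail batch
        commits++;
        return commits;
    }
}
```

With 1200 records and a batch size of 500 this commits three times: after record 500, after record 1000, and once for the 200-record tail.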
Hi,
Would it actually be worth something to be able to begin a transaction which
auto-commits stuff every X write operation, like a batch inserter mode
which can be used in normal EmbeddedGraphDatabase? Kind of like:
graphDb.beginTx( Mode.BATCH_INSERT )
...so that you can start such
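The proposed `Mode.BATCH_INSERT` does not exist; it is only an idea in this thread. What such a transaction could plausibly do under the hood is sketched below: count write operations and transparently commit and reopen every `opsPerCommit` writes. `Db` and `Tx` are again hypothetical stand-ins, not the real Neo4j API.

```java
// Sketch of what a beginTx( Mode.BATCH_INSERT )-style transaction could
// do internally: auto-commit and reopen every opsPerCommit write
// operations. Db and Tx are hypothetical stand-ins; Mode.BATCH_INSERT
// itself is only a proposal in this thread.
class AutoCommitTx {
    interface Tx { void commit(); }
    interface Db { Tx beginTx(); }

    private final Db db;
    private final int opsPerCommit;
    private Tx tx;
    private int ops = 0;
    int commits = 0;                 // exposed for illustration

    AutoCommitTx(Db db, int opsPerCommit) {
        this.db = db;
        this.opsPerCommit = opsPerCommit;
        this.tx = db.beginTx();
    }

    // Call once after every write operation.
    void noteWrite() {
        if (++ops % opsPerCommit == 0) {
            tx.commit();             // auto-commit this batch
            commits++;
            tx = db.beginTx();       // reopen transparently
        }
    }

    // Commit whatever is left and stop.
    void close() {
        tx.commit();
        commits++;
    }
}
```

Caller code would then just write in a loop and call `noteWrite()` after each operation, without managing batch boundaries itself.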
I've a similar problem. Although I'm not going out of memory yet, I can see
the heap constantly growing, and JProfiler says most of it is due to the
Lucene indexing. And even if I do the commit after every X transactions,
once the population is finished, the final commit is done, and the graph db
), a time (each 30 seconds), or on a memory usage rule.
-Original Message-
From: user-boun...@lists.neo4j.org [mailto:user-boun...@lists.neo4j.org] On
Behalf Of Mattias Persson
Sent: Friday, July 09, 2010 7:30 AM
To: Neo4j user discussions
Subject: Re: [Neo4j] OutOfMemory while populating large graph
2010/7/9 Marko Rodriguez okramma...@gmail.com
Hi,
Would it actually be worth something to be able to begin a transaction
which
auto-commits stuff every X write operation, like a batch
I have seen people discuss committing transactions after some microbatch of a
few hundred records, but I thought this was optional. I thought Neo4J would
automatically write out to disk as memory became full. Well, I encountered an
OOM and want to make sure that I understand the reason. Was
A. Jackson
Sent: Thursday, July 08, 2010 1:35 PM
To: (User@lists.neo4j.org)
Subject: [Neo4j] OutOfMemory while populating large graph