Have you tried running with +d to get a crash dump? It might provide
some clues.
On Oct 6, 2009, at 3:39 PM, Paul Davis wrote:
Ignore it?
I'd focus on writing a script that checks memory usage while there's
an indexer running.
Paul
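Paul's suggestion could be sketched as a small polling script. This is a hypothetical example, not anything from the thread: the process name `beam` and the `ps -eo rss,comm` invocation are assumptions for a Unix host running an Erlang-based CouchDB; adjust both for your system.

```python
# Hypothetical sketch: sample the resident memory of the CouchDB (beam)
# process at intervals while a view indexer is running.
import subprocess
import time

def rss_kb(pattern="beam"):
    """Return total RSS in KB of processes whose command name contains `pattern`."""
    out = subprocess.run(["ps", "-eo", "rss,comm"],
                         capture_output=True, text=True).stdout
    total = 0
    for line in out.splitlines()[1:]:          # skip the ps header row
        parts = line.split(None, 1)
        if len(parts) == 2 and pattern in parts[1]:
            total += int(parts[0])
    return total

def monitor(pattern="beam", interval=10, samples=6):
    """Print `samples` timestamped RSS readings, `interval` seconds apart."""
    for _ in range(samples):
        print(time.strftime("%H:%M:%S"), rss_kb(pattern), "KB")
        time.sleep(interval)

if __name__ == "__main__":
    # One immediate sample; raise `samples`/`interval` for a longer watch.
    monitor(samples=1, interval=0)
```

Running it in one terminal while kicking off a view build in another would show whether memory grows steadily during indexing.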
On Tue, Oct 6, 2009 at 3:35 PM, Glenn Rempe <[email protected]> wrote:
Running compaction, but it is *slow*: it's moving at a pace of about
~1,000 records every 30 seconds according to the log. At that pace I
think it will take 233 hours (!) to compact. Is there anything I can
tweak to get that running faster? I can't wait 10 days... :-(
G
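For what it's worth, Glenn's 233-hour figure is consistent with the numbers he quotes elsewhere in the thread (roughly 28 million records at ~1,000 records per 30 seconds). A quick back-of-the-envelope check:

```python
# Sanity check of the compaction estimate quoted above.
records = 28_000_000          # ~28mm records in the 50GB database
rate_per_sec = 1_000 / 30     # ~1,000 records every 30 seconds
hours = records / rate_per_sec / 3600
print(round(hours, 1))        # -> 233.3, i.e. just under 10 days
```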
On Tue, Oct 6, 2009 at 11:28 AM, Paul Davis <[email protected]> wrote:
Also, I just went through and re-read the entire discussion. After
your 0.9.1 -> trunk upgrade did you compact the database? I can't
think of anything that'd cause an issue there but it might be
something to try (there is a conversion process during
compaction).
I did not do a compaction. I can try that. Unfortunately that probably
kills another day compacting my 50GB 28mm record DB. ;-) But, hey, if
it helps... :-)
It's a possibility, is all. Theoretically this is more incremental, so
even if you kick it off and it dies, it'll restart part way through
rather than needing a complete run. (Very theoretically, as I haven't
tried it yet.) Also, it'll run just fine in the background.