Bob Ham wrote:

> On Tue, 2007-01-30 at 16:54 +0000, Matthew Toseland wrote:
>> After various reports of OOMs, and after high CPU usage which might have
>> been related to excessive GCing, I did some profiling.
> 
> I've done some profiling of my own:
> 
>   http://teasel.6gnip.net/~rah/freenet-threads-2007-01-30-21:23:18.png
> 
> The red line is the effective maximum memory that fred should use, as
> configured in wrapper.conf, in addition to the default DB memory usage.
> The brown line is the actual memory usage.  The JVM in use is Sun's,
> version 1.5.0, release 9.
> 
> It's clear to see that the configured limits are wholly ignored, and by
> significant percentages.  This is a major problem.  On my own machine,
> fred will get so large as to cause the kernel to start killing processes
> due to lack of memory.  These include named, cron, mysql and apache, in
> addition to fred itself.
> 
> I only recently got my node back up after it had refused to start for
> some time.  I noted this on IRC, if you recall.  The problem
> would seem to be the size of the data store: after removing all of the
> data files, the node starts without any problems.  Needless to say, I've
> reduced the size now.
> 
>> Any ideas? We can:
>> - Just ignore it
> 
> Not an option in my opinion.
> 
>> - Use another database. BDB has been fairly unreliable, hence the code
>>   in the store to reconstruct the database (store index) from the store
>>   file by deleting the database and parsing each key.
>> - Given the second item, if we had a reliable database we could store
>>   queued requests in it, thus limiting the overall memory usage
>>   regardless of the size of the request queue. But that would be a
>>   significant amount of work even with a database.
> 
> I think there are deeper issues to deal with.  Even disregarding the
> database, fred's memory footprint is massive.  For the functionality
> that it provides, it seems to me to be excessive.  I can't understand
> why fred would need anything in excess of a few 10s of megabytes.  Where
> is this memory going?

I agree here. BDB seems to be a mature database; switching databases
without first isolating the culprit seems premature.
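For what it's worth, it may help to separate what the wrapper's limit
actually bounds (the Java heap, i.e. -Xmx) from what the kernel's OOM
killer sees (the whole process's resident memory, including thread
stacks, JNI allocations and mmapped store files). A quick diagnostic
sketch -- the class name is mine, not anything in fred -- that prints the
JVM's own view of its heap ceiling:

```java
// HeapCheck: hypothetical diagnostic, not part of fred.
// Prints the heap ceiling the JVM believes it has (set via -Xmx, which
// the wrapper derives from wrapper.conf) alongside current usage.
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();      // the -Xmx ceiling
        long total = rt.totalMemory();  // heap currently reserved from the OS
        long free = rt.freeMemory();    // unused portion of that reservation
        System.out.println("max heap:  " + (max / (1024 * 1024)) + " MiB");
        System.out.println("committed: " + (total / (1024 * 1024)) + " MiB");
        System.out.println("used:      " + ((total - free) / (1024 * 1024)) + " MiB");
        // Note: none of these figures cover native (non-heap) memory, so a
        // process can legitimately grow well past -Xmx as far as the kernel
        // is concerned.
    }
}
```

If the "max heap" printed here matches wrapper.conf but the process RSS
is far larger, the overrun is outside the heap and no database-level
limit will be honoured by it.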

