After various reports of OOMs, and of high CPU usage which may have been
caused by excessive GC, I did some memory profiling.

Having analysed the first 70% of the heap dump (against a 100MB overall
memory limit), I found that the Berkeley DB Java Edition we use for the
database accounts for at least 54.5MB of RAM. databaseMaxMemory is not
overridden in the config, so it should be at its default value of 20MB.
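
For reference, a minimal sketch of how the JE cache limit can be pinned
explicitly when the Environment is opened. This assumes databaseMaxMemory
is simply fed into JE's cache size; the directory name and the rest of
the configuration here are made up:

    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    import java.io.File;

    public class JECacheLimitSketch {
        public static void main(String[] args) throws Exception {
            EnvironmentConfig cfg = new EnvironmentConfig();
            cfg.setAllowCreate(true);
            // Hard cap on the JE cache, in bytes (20MB, the default we
            // believe we are running with).
            cfg.setCacheSize(20 * 1024 * 1024);
            // Per the JE docs, this bounds only the cache; cursors, locks
            // and transactions are not counted against it.
            File home = new File("hypothetical-db-dir");
            home.mkdirs(); // JE requires the environment directory to exist
            Environment env = new Environment(home, cfg);
            System.out.println("JE cache limit: " + env.getConfig().getCacheSize());
            env.close();
        }
    }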

The documentation says:
"Note that the cache does not include transient objects created by the JE
library, such as cursors, locks and transactions."

It is possible that the cache corresponds only to the first allocation
trace, which is 17MB. However, there are many other traces that appear to
relate to the database log (BDB JE is a log-structured database), many of
them quite large - the next three, for example, are 10MB, 5MB and 5MB
respectively.
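
As a cross-check, it might be worth dumping JE's own accounting at
runtime and comparing it with what the heap dump attributes to JE. A
sketch, assuming we can get hold of the Environment handle somewhere
convenient; the exact counters vary between JE versions, so this just
prints the whole stats object:

    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentStats;
    import com.sleepycat.je.StatsConfig;

    public class JEStatsDump {
        // Print everything JE thinks it is using, so the profiler's 54.5MB
        // figure can be split into cache vs. log/transient overhead.
        public static void dumpStats(Environment env) throws DatabaseException {
            StatsConfig sc = new StatsConfig();
            sc.setFast(false); // gather the full (slower) set of statistics
            EnvironmentStats stats = env.getStats(sc);
            System.out.println(stats); // toString() lists all the counters
        }
    }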

Any ideas? We can:
- Just ignore it. We may have to increase the default memory limit, but
  anecdotally OOMs happen when there are lots of queued requests.
- Use another database. BDB has been fairly unreliable, hence the code
  in the store to reconstruct the database (store index) from the store
  file by deleting the database and parsing each key.
- Following on from the second option, if we had a reliable database we
  could store queued requests in it, limiting overall memory usage
  regardless of the size of the request queue (see the sketch after this
  list). But that would be a significant amount of work even with a
  database.
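
On the last point, a rough sketch of what parking queued requests in the
database might look like. This is entirely hypothetical - the class and
method names are made up, the serialisation is hand-waved, and it uses
the plain JE key/value API purely for illustration; whichever database we
actually trusted would need the same two operations:

    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.OperationStatus;

    // Hypothetical: keep only a small key in memory per queued request and
    // park the serialised request bytes in the database until it is run.
    public class PersistentRequestQueue {
        private final Database queueDb;

        public PersistentRequestQueue(Database queueDb) {
            this.queueDb = queueDb;
        }

        // Store the serialised request under its key; per-request memory
        // cost is then roughly the key, not the whole request.
        public void enqueue(byte[] key, byte[] serialisedRequest) throws DatabaseException {
            queueDb.put(null, new DatabaseEntry(key), new DatabaseEntry(serialisedRequest));
        }

        // Pull the request back when we are ready to run it. A real version
        // would also delete the entry and keep some ordering.
        public byte[] dequeue(byte[] key) throws DatabaseException {
            DatabaseEntry data = new DatabaseEntry();
            OperationStatus s = queueDb.get(null, new DatabaseEntry(key), data, null);
            return s == OperationStatus.SUCCESS ? data.getData() : null;
        }
    }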