Dear Jim,

Thanks for your answer.

> The OS' file-system cache acts as a storage server cache.  The storage
> server does (essentially) no processing to data read from disk, so an
> application-level cache would add nothing over the disk cache provided by
> the storage server.

I see; so I guess it would be good to have at least as much RAM as the total size of the DB, no? From what I see on our server, the Linux buffer cache takes around 13GB of the 16GB available, while the rest is mostly taken by the ZEO process (1.7GB). The database is 17GB on disk.
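For reference, this is roughly how I'm comparing those numbers, in case I'm misreading something. It's only a sketch: the Data.fs path is made up, and it just reads the file size and /proc/meminfo:

    import os

    DATA_FS = "/var/zeo/Data.fs"  # hypothetical path to our FileStorage file

    def meminfo_kb(field):
        """Return a /proc/meminfo value (e.g. 'MemTotal', 'Cached') in kB."""
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith(field + ":"):
                    return int(line.split()[1])
        raise KeyError(field)

    db_gb = os.path.getsize(DATA_FS) / 1024.0 ** 3
    ram_gb = meminfo_kb("MemTotal") / 1024.0 ** 2
    cached_gb = meminfo_kb("Cached") / 1024.0 ** 2

    # If the whole Data.fs fits in the page cache, reads should rarely hit the disk.
    print("Data.fs %.1f GB / RAM %.1f GB / page cache %.1f GB"
          % (db_gb, ram_gb, cached_gb))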

> Also note that, for better or worse, FileStorage uses an in-memory index
> of current record positions, so no disk access is needed to find current data.

Yes, but the pickles themselves still have to be retrieved, right? I guess that means random access (for a database like ours, with many small objects), which doesn't favor cache performance.
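Just to check that I understand your point, this is what I have in mind. It's only a sketch: _index is an internal FileStorage attribute and the path is made up, but it shows the part that stays in memory versus the part that has to be read from the file (or the OS cache):

    from ZODB.FileStorage import FileStorage
    from ZODB.utils import z64

    fs = FileStorage("/var/zeo/Data.fs", read_only=True)  # hypothetical path

    offset = fs._index[z64]          # oid -> file offset: pure in-memory lookup
    data, serial = fs.load(z64, '')  # the pickle itself is read from the file / OS cache
    print("root record at offset %d, pickle is %d bytes" % (offset, len(data)))
    fs.close()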

I'm asking because, in the tests we've run with SSDs, we have seen a 20% decrease in read time for non-client-cached objects. So there seems to be some disk I/O going on.
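Something like the sketch below is what I mean by timing non-client-cached reads (the address and the object sample are placeholders, not our actual test; starting the client with a fresh, empty ZEO cache is what approximates the non-client-cached case):

    import time
    from ZEO.ClientStorage import ClientStorage
    from ZODB import DB

    storage = ClientStorage(("localhost", 8100))  # placeholder server address
    db = DB(storage)
    conn = db.open()

    root = conn.root()
    sample = list(root.values())[:1000]  # assumes the root maps names to persistent objects

    db.cacheMinimize()                   # ghost everything in the in-memory object caches
    start = time.time()
    for obj in sample:
        obj._p_activate()                # forces a load from the ZEO cache / server / disk
    elapsed = time.time() - start
    print("%d loads in %.3f s (%.2f ms/load)"
          % (len(sample), elapsed, 1000.0 * elapsed / len(sample)))

    conn.close()
    db.close()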


> In general, I'd say no.  It can depend on lots of details, including:
>
> - database size
> - active set size
> - network speed
> - memory and disk speeds on clients and servers
> - ...

In any case, from what I see, these client caches cannot be shared between processes, which doesn't make them very useful in our case, where we have many parallel processes asking for the same objects over and over again.
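To illustrate what I mean: each worker ends up doing something like the sketch below (the address, cache size and file naming are made up, assuming the classic ClientStorage arguments where cache_size is the per-client cache budget and client names a persistent cache file), so N workers keep N separate copies of the same hot objects:

    import os
    from ZEO.ClientStorage import ClientStorage
    from ZODB import DB

    # Every worker process builds its own storage, so it also gets its own
    # client cache; nothing here is shared between processes.
    storage = ClientStorage(
        ("localhost", 8100),               # placeholder server address
        cache_size=500 * 1024 * 1024,      # per-process cache budget
        client="worker-%d" % os.getpid(),  # persistent cache file, one per worker
    )
    db = DB(storage)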

Thanks once again,

Pedro