Tim Peters wrote:
Sure, but no way to guess from here. The only thing I can really
guess from the above is that your client is going to the server a lot
to get data.
Well, the client and the server are on the same machine, which isn't
load or memory bound, and doesn't seem to be i/o bound either.
Hi,
This reminds me of something I noticed when we migrated from 2.7 to 2.8.
Our issue was a very big PersistentMapping based tree of objects, which was
involved in a lot of RW and RO transactions from different Zope instances (we
use ZEO of course). There was no miracle to solve the issue; we had t
Pascal Peregrina wrote:
This reminds me of something I noticed when we migrated from 2.7 to 2.8.
Well, it's 2.7 to 2.9 here, but yeah, it's the same big jump ;-)
Our issue was a very big PersistentMapping based tree of objects, which was
involved in a lot of RW and RO transactions from different Zope instances.
We located it by hacking the ClientStorage code in order to display the real
load operations (oids) from the ZEO server (cache misses).
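The kind of instrumentation described above can be sketched as a thin wrapper around a storage that records every load() reaching the server. This is a hypothetical illustration, not the actual patch from that migration; the class name and attribute names are my own:

```python
class LoadLoggingStorage:
    """Hypothetical wrapper that records every load() forwarded to the
    underlying storage -- roughly the kind of hack described above for
    spotting which oids miss the ZEO client cache and hit the server."""

    def __init__(self, storage):
        self._storage = storage
        self.loaded_oids = []  # oids that actually went to the server

    def load(self, oid, version=''):
        # Record the oid, then delegate to the real storage.
        self.loaded_oids.append(oid)
        return self._storage.load(oid, version)

    def __getattr__(self, name):
        # Delegate everything else (store, tpc_*, etc.) untouched.
        return getattr(self._storage, name)
```

Wrapping the connection's storage with something like this and printing (or counting) `loaded_oids` makes cache-miss hot spots visible without touching the application code.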
I just read this mail thread again. The lack of a disk I/O surge looks
different from the symptoms we saw here, so it must be a different issue.
Sorry for the noise.
Pascal
Is it ok if I add the following comments to Connection's docstring?
Are there inaccuracies?
- _cache is a PickeCache, a cache which can ghostify objects not
recently used. Its API is roughly that of a dict, with additional
gc-related and invalidation-related methods.
- _added is a dict of oid->obj added explicitely through add().
  _added is used as a sort of preliminary cache until commit time.
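To make the _cache description concrete, here is a toy stand-in (not the real PickleCache; the class name, target_size parameter, and simplified incrgc/invalidate are my own) showing the dict-like API plus the gc-related and invalidation-related methods mentioned above:

```python
from collections import OrderedDict

class ToyPickleCache:
    """Illustrative stand-in for a PickleCache-style cache: a dict-like
    oid -> object mapping plus a gc hook that 'ghostifies' (releases the
    state of) the least recently used entries, and an invalidation hook
    for objects whose stored state changed elsewhere."""

    def __init__(self, target_size=4):
        self.target_size = target_size
        self._data = OrderedDict()   # oid -> obj, kept in access order
        self._ghosts = set()         # oids whose state has been released

    def __getitem__(self, oid):
        obj = self._data[oid]
        self._data.move_to_end(oid)  # mark as recently used
        return obj

    def __setitem__(self, oid, obj):
        self._data[oid] = obj
        self._data.move_to_end(oid)

    def __len__(self):
        return len(self._data)

    def incrgc(self):
        """Ghostify least-recently-used entries until under target_size."""
        while len(self._data) > self.target_size:
            oid, _ = self._data.popitem(last=False)
            self._ghosts.add(oid)

    def invalidate(self, oid):
        """Drop a cached object invalidated by another transaction."""
        self._data.pop(oid, None)
        self._ghosts.add(oid)
```

The real cache ghostifies persistent objects in place rather than tracking a separate set, but the shape of the API (item access, a size-driven gc method, invalidation) is the point here.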
Just a bit of proofreading ..
> - _cache is a PickeCache, a cache which can ghostify objects not
PickleCache
> - _added is a dict of oid->obj added explicitely through add().
explicitly
> _added is used as a sort of preliminary cache until commit time
is used as a preliminary cache
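The "preliminary cache until commit time" behaviour of _added can be modelled in a few lines. This is a toy model of my own, not ZODB's Connection code; the method names get() and commit() here are simplifications:

```python
class ToyConnection:
    """Toy model of how _added acts as a preliminary cache: objects
    registered via add() are held by oid until commit, at which point
    they move into the main cache."""

    def __init__(self):
        self._cache = {}   # committed objects, oid -> obj
        self._added = {}   # freshly add()ed objects awaiting commit
        self._next_oid = 0

    def add(self, obj):
        # Assign an oid immediately so the object is reachable by oid
        # even though nothing has been written out yet.
        oid = self._next_oid
        self._next_oid += 1
        self._added[oid] = obj
        return oid

    def get(self, oid):
        # _added is consulted first, like a preliminary cache.
        if oid in self._added:
            return self._added[oid]
        return self._cache[oid]

    def commit(self):
        # At commit time the preliminary entries graduate to _cache.
        self._cache.update(self._added)
        self._added.clear()
```

The key property is that a lookup by oid works both before and after commit, which is why the docstring's "preliminary cache" phrasing fits.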