On Tue, Jan 15, 2013 at 2:07 PM, Leonardo Santagada <santag...@gmail.com> wrote:

> On Tue, Jan 15, 2013 at 3:10 PM, Jim Fulton <j...@zope.com> wrote:
>> On Tue, Jan 15, 2013 at 12:00 PM, Claudiu Saftoiu <csaft...@gmail.com>
>> wrote:
>> > Hello all,
>> >
>> > I'm looking to speed up my server, and it seems memcached would be a
>> > good way to do it - at least for the `Catalog`. (I've already put the
>> > catalog in a separate ZODB with a separate ZEO server, with persistent
>> > client caching enabled, and it still doesn't run as nicely as I'd
>> > like...)
>> >
>> > I've googled around a bit and found nothing definitive, though...
>> what's the
>> > best way to combine zodb/zeo + memcached as of now?
>> My opinion is that a distributed memcached isn't a big enough win, but
>> this likely depends on your use cases. We (ZC) took a different approach.
>> If there is a reasonable way to classify your corpus by URL (or another
>> request parameter), then check out zc.resumelb. It fit our use cases well.
> Maybe I don't understand ZODB correctly, but if the catalog is small enough
> to fit in memory, wouldn't it be much faster to just cache the whole catalog
> on the clients? Then, at least for catalog searches, everything is mostly as
> fast as running through Python objects. Memcached would add an extra
> serialize/deserialize step (plus network I/O, plus context switches).
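The extra serialize/deserialize step mentioned above can be sketched in plain Python: anything fetched from memcached has to be unpickled on every read (on top of the network round trip), whereas an object already in the ZODB connection cache is a live Python object. A minimal stdlib-only illustration of that cost (no memcached involved; the sample data is made up for the comparison):

```python
import pickle
import timeit

# A stand-in for a catalog structure: a smallish mapping of word -> doc ids.
entry = {"word%d" % i: list(range(20)) for i in range(100)}

def in_process():
    # The object is already live in memory, as in the ZODB connection cache.
    return entry["word42"]

def via_serialization():
    # memcached-style access pays a pickle round trip on every read
    # (the real thing also pays network I/O and context switches).
    blob = pickle.dumps(entry, protocol=pickle.HIGHEST_PROTOCOL)
    return pickle.loads(blob)["word42"]

fast = timeit.timeit(in_process, number=1000)
slow = timeit.timeit(via_serialization, number=1000)
# Both return the same data; the serialized path is much slower.
```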

That would be fine, actually. Is there a way to explicitly tell ZODB/ZEO to
load an entire object and keep it in the cache? I'd also like it to remain in
the cache across connection restarts, but I think I've already accomplished
that with persistent client-side caching.
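For reference, the persistent client-side cache mentioned above is enabled through the `client` option of the ZEO client storage: giving the cache a name makes ZEO write the cache to a file and reuse it across client restarts. A minimal sketch of the relevant ZConfig section (the server address, cache name, directory, and size here are placeholder values, not taken from the setup described in this thread):

```
<zodb catalog>
  <zeoclient>
    server localhost:8100
    # Naming the cache makes it persistent across client restarts.
    client catalog
    # Directory where the persistent cache file is kept.
    var /var/zeo-cache
    cache-size 500MB
  </zeoclient>
</zodb>
```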
For more information about ZODB, see http://zodb.org/

ZODB-Dev mailing list  -  ZODB-Dev@zope.org
