Re: [ZODB-Dev] zeo.memcache

2011-10-13 Thread Laurence Rowe
On 12 October 2011 23:53, Shane Hathaway  wrote:
> As I see it, a cache of this type can take 2 basic approaches: it can
> either store {oid: (state, tid)}, or it can store {(oid, tid): (state,
> last_tid)}. The former approach is much simpler, but since memcache has
> no transaction guarantees whatsoever, it would lead to consistency
> errors. The latter approach makes it possible to avoid all consistency
> errors even with memcache, but it requires interesting algorithms to
> make efficient use of the cache. I chose the latter.

On first reading I had thought that the {oid: (state, tid)} approach
would not necessarily lead to consistency errors, as a connection could
simply discard cached values whose state tid is later than the current
transaction's last tid. But I guess it must be impossible for a
committing connection to guarantee that all cached oids remain
invalidated for the duration of the commit and are not refilled with a
previous state by another connection performing a read. That would
necessitate the same checkpointing algorithm to avoid consistency
errors.

I sometimes wonder if it would be better to separate the maintenance
of the oid_tid mapping from the storage of object states. A database
storing only the oid_tid mapping and enough previous tids to support
current transactions -- essentially the Data.fs.index -- would always
fit easily in RAM and could conceivably be replicated to every machine
in a cluster to ensure fast lookups. The storage / caching of object
states could then be very simple.
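
Roughly, I am imagining something like this (a made-up sketch with invented
names, ignoring replication and persistence entirely):

"""
from bisect import bisect_right

class OidTidIndex(object):
    # RAM-resident map of oid -> sorted list of tids that changed it,
    # keeping only enough history to serve currently open transactions.
    def __init__(self):
        self._history = {}  # {oid: [tid, ...]} in ascending commit order

    def record_commit(self, tid, oids):
        # Assumes commits are fed in increasing tid order.
        for oid in oids:
            self._history.setdefault(oid, []).append(tid)

    def tid_as_of(self, oid, view_tid):
        # tid of the revision of `oid` visible to a transaction pinned
        # at `view_tid`, or None if the object did not exist yet.
        tids = self._history.get(oid, ())
        i = bisect_right(tids, view_tid)
        return tids[i - 1] if i else None

# With the index answering "which revision do I need?", the state store
# can stay a dumb mapping keyed by the exact revision:
state_cache = {}  # {(oid, tid): pickled_state}
"""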

Laurence


Re: [ZODB-Dev] zeo.memcache

2011-10-12 Thread Shane Hathaway
On 10/12/2011 04:53 PM, Shane Hathaway wrote:
> Given the choice to structure the cache as {(oid, tid): (state,
> last_tid)}, a simple way to use the cache would be to get the last
> committed tid from the database and use that tid for the lookup key.
> This would be extremely efficient until the next commit, at which point
> the entire cache would become irrelevant and would have to be rebuilt.
>
> Therefore, most of the interesting parts of the cache code in RelStorage
> are focused on simply choosing a good tid for the cache lookup operation.

Furthermore... anytime the cache chooses a tid other than the most 
recently committed tid for the lookup operation, there is a risk that it 
will choose a tid that is too old, leading to consistency errors. I have 
searched deeply for any such holes and closed some obscure ones, but 
it's important to acknowledge the risk.

(BTW, I worked with a client who saw many consistency errors that seemed 
to be caused by the cache, but the problem turned out to be a major flaw 
in Oracle's documentation of read only mode. The cache operated flawlessly.)

Shane


Re: [ZODB-Dev] zeo.memcache

2011-10-12 Thread Shane Hathaway
On 10/09/2011 08:26 AM, Jim Fulton wrote:
> On Sat, Oct 8, 2011 at 4:34 PM, Shane Hathaway  wrote:
>> On 10/05/2011 11:40 AM, Pedro Ferreira wrote:
>>> Hello all,
>>>
>>> While doing some googling on ZEO + memcache I came across this:
>>>
>>> https://github.com/eleddy/zeo.memcache
>>>
>>> Has anybody ever tried it?
>>
>> Having implemented memcache integration for RelStorage, I now know what
>> it takes to make a decent connection between memcache and ZODB.  The
>> code at the link above does not look sufficient to me.
>>
>> I could adapt the cache code in RelStorage for ZEO.  I don't think it
>> would be very difficult.  How many people would be interested in such a
>> thing?
>
> This would be of broad interest!
>
> Can you briefly describe the strategy?  How do you arrange that
> the client sees a consistent view of the current tid for a given
> oid?

(Sorry for not replying sooner--I've been busy.)

As I see it, a cache of this type can take 2 basic approaches: it can 
either store {oid: (state, tid)}, or it can store {(oid, tid): (state, 
last_tid)}. The former approach is much simpler, but since memcache has 
no transaction guarantees whatsoever, it would lead to consistency 
errors. The latter approach makes it possible to avoid all consistency 
errors even with memcache, but it requires interesting algorithms to 
make efficient use of the cache. I chose the latter.

Given the choice to structure the cache as {(oid, tid): (state, 
last_tid)}, a simple way to use the cache would be to get the last 
committed tid from the database and use that tid for the lookup key. 
This would be extremely efficient until the next commit, at which point 
the entire cache would become irrelevant and would have to be rebuilt.

Therefore, most of the interesting parts of the cache code in RelStorage 
are focused on simply choosing a good tid for the cache lookup operation.

It caches the following things in memcache:

1. A pair of checkpoints.
2. A state and last committed transaction ID for a given transaction ID 
and object ID.
3. A commit counter.

The checkpoints are two arbitrary committed transaction IDs.  Clients 
can use any pair of committed transaction IDs as checkpoints (so it's OK 
if the checkpoints disappear from the cache), but the cache is much more 
efficient if all clients use the same checkpoints.

Each storage object holds a pair of "delta" mappings, where each delta 
contains {oid: tid}. The deltas contain information about what objects 
have changed since the checkpoints: delta0 lists the changes since 
checkpoint0 and delta1 lists the changes between checkpoint1 and 
checkpoint0. Within each transaction, the delta0 mapping must be updated 
before reading from the database.

When retrieving an object, the cache tries to discover the object's 
current tid by looking first in delta0.  If it's there, then the cache 
asks memcache for the object state at that exact tid.  If not, the cache 
asks memcache for the object state and tid at the current checkpoints.
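
In rough Python, the read path looks something like this (a simplified sketch
rather than the actual RelStorage code: the key format and load_from_database()
are made up, and delta1 is ignored):

"""
def load(oid, cache, delta0, checkpoint0, checkpoint1):
    # Keys follow the {(oid, tid): (state, last_tid)} structure; the key
    # format below is illustrative only.
    tid = delta0.get(oid)
    if tid is not None:
        # The object changed after checkpoint0, so delta0 knows its exact tid.
        keys = ['state:%s:%s' % (oid, tid)]
    else:
        # Otherwise look it up under the shared checkpoints.
        keys = ['state:%s:%s' % (oid, cp) for cp in (checkpoint0, checkpoint1)]
    for key in keys:
        value = cache.get(key)
        if value is not None:
            state, last_tid = value
            return state, last_tid
    # Cache miss: read from the database and backfill under checkpoint0.
    # load_from_database() stands in for the real storage read.
    state, last_tid = load_from_database(oid)
    cache.set('state:%s:%s' % (oid, checkpoint0), (state, last_tid))
    return state, last_tid
"""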

It is not actually necessary to have 2 checkpoints.  It could work with 
more checkpoints or only 1 checkpoint, but if there were only 1, each 
checkpoint shift would be equivalent to flushing the cache.  With more 
checkpoints, the cache would often query many keys for each read 
operation.  2 checkpoints seems like a good balance.

I wrote more notes about the caching strategy here:

http://svn.zope.org/relstorage/trunk/notes/caching.txt

As I review all of this, I wonder at the moment why I chose to create 
delta1.  It seems like the system would work without it.  I probably 
added it because I thought it would improve cache efficiency, but today 
I'd rather simplify as much as possible even at the cost of a little 
theoretical efficiency.

The commit counter is not very related, but since I brought it up, I'll 
explain it briefly: it serves as a way for clients to discover whether 
the database has changed without actually reading anything from the 
database.  It is a counter rather than a transaction ID because that 
choice avoids a race condition.
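
A sketch of the idea (the key name and helpers are made up, not the actual
RelStorage code):

"""
COMMIT_COUNT_KEY = 'commit_count'

def after_commit(cache):
    # incr is atomic in memcached, so concurrent committers cannot
    # overwrite each other the way racing set('latest_tid', ...) calls could.
    if cache.incr(COMMIT_COUNT_KEY) is None:
        cache.add(COMMIT_COUNT_KEY, 1)   # key missing (evicted or never set)

def changed_since(cache, last_seen):
    # Returns (changed, current) without touching the database; treat a
    # missing key as "changed" to stay on the safe side.
    current = cache.get(COMMIT_COUNT_KEY)
    return (current is None or current != last_seen), current
"""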

Shane


Re: [ZODB-Dev] zeo.memcache

2011-10-12 Thread Vincent Pelletier
On Wednesday 12 October 2011 at 11:55:43, Vincent Pelletier wrote:
> "distributed"

Woops. Networked lock server. Not distributed.

-- 
Vincent Pelletier
ERP5 - open source ERP/CRM for flexible enterprises


Re: [ZODB-Dev] zeo.memcache

2011-10-12 Thread Vincent Pelletier
On Friday 7 October 2011 at 15:02:44, Vincent Pelletier wrote:
> On Friday 7 October 2011 at 14:16:42, Andreas Gabriel wrote:
> > However, is your implementation thread safe? Maybe I am blind ;). That
> > was the reason I used lovely.memcached as memcached connector. Each
> > thread has its own connection and namespace to store keys. Therefore,
> > the locks from one or more zeo-clients with multiple threads were
> > distinguishable.
> 
> You're not blind :) .

I've read the python-memcached module: it is internally thread-safe, as it uses
one network connection per thread even when threads share the same instance.

I haven't implemented any namespace separation per thread, and I'm not sure what
it would achieve: if I share a single threading.Lock instance between threads,
only one of them can successfully acquire it - and that is the point of a lock.
A "distributed" lock should work the same way, only not just across different
threads but also across different processes and machines. So locks for the same
resource (identified by their "key", as you named it in your code - and I kept
the naming) should all access the same memcache entry (i.e., the same namespace).

Did I miss something ?

If not, I believe my code is thread-safe.

Regards,
-- 
Vincent Pelletier
ERP5 - open source ERP/CRM for flexible enterprises


Re: [ZODB-Dev] zeo.memcache

2011-10-09 Thread Andreas Gabriel
On 08.10.2011 22:34, Shane Hathaway wrote:
> I could adapt the cache code in RelStorage for ZEO.  I don't think it
> would be very difficult.  How many people would be interested in such a
> thing?

+1 for me too !

Kind regards,
Andreas


-- 
Dr. Andreas Gabriel, Hochschulrechenzentrum
Hans-Meerwein-Str., 35032 Marburg, fon +49 (0)6421 28-23560 fax -26994
- Philipps-Universitaet Marburg --



Re: [ZODB-Dev] zeo.memcache

2011-10-09 Thread Jim Fulton
On Sat, Oct 8, 2011 at 4:34 PM, Shane Hathaway  wrote:
> On 10/05/2011 11:40 AM, Pedro Ferreira wrote:
>> Hello all,
>>
>> While doing some googling on ZEO + memcache I came across this:
>>
>> https://github.com/eleddy/zeo.memcache
>>
>> Has anybody ever tried it?
>
> Having implemented memcache integration for RelStorage, I now know what
> it takes to make a decent connection between memcache and ZODB.  The
> code at the link above does not look sufficient to me.
>
> I could adapt the cache code in RelStorage for ZEO.  I don't think it
> would be very difficult.  How many people would be interested in such a
> thing?

This would be of broad interest!

Can you briefly describe the strategy?  How do you arrange that
the client sees a consistent view of the current tid for a given
oid?

Jim

-- 
Jim Fulton
http://www.linkedin.com/in/jimfulton


Re: [ZODB-Dev] zeo.memcache

2011-10-09 Thread Thierry Florac
On Sat, 08 Oct 2011 14:34:59 -0600, Shane Hathaway wrote:

> On 10/05/2011 11:40 AM, Pedro Ferreira wrote:
> > Hello all,
> >
> > While doing some googling on ZEO + memcache I came across this:
> >
> > https://github.com/eleddy/zeo.memcache
> >
> > Has anybody ever tried it?
> 
> Having implemented memcache integration for RelStorage, I now know
> what it takes to make a decent connection between memcache and ZODB.
> The code at the link above does not look sufficient to me.
> 
> I could adapt the cache code in RelStorage for ZEO.  I don't think it 
> would be very difficult.  How many people would be interested in such
> a thing?

+1 for me !!

Regards,
Thierry


Re: [ZODB-Dev] zeo.memcache

2011-10-08 Thread Pedro Ferreira

> I could adapt the cache code in RelStorage for ZEO. I don't think it
> would be very difficult. How many people would be interested in such a
> thing?

1 here :)

Thanks in advance,

Pedro


Re: [ZODB-Dev] zeo.memcache

2011-10-08 Thread Shane Hathaway
On 10/05/2011 11:40 AM, Pedro Ferreira wrote:
> Hello all,
>
> While doing some googling on ZEO + memcache I came across this:
>
> https://github.com/eleddy/zeo.memcache
>
> Has anybody ever tried it?

Having implemented memcache integration for RelStorage, I now know what 
it takes to make a decent connection between memcache and ZODB.  The 
code at the link above does not look sufficient to me.

I could adapt the cache code in RelStorage for ZEO.  I don't think it 
would be very difficult.  How many people would be interested in such a 
thing?

Shane


Re: [ZODB-Dev] zeo.memcache

2011-10-07 Thread Vincent Pelletier
On Friday 7 October 2011 at 14:16:42, Andreas Gabriel wrote:
> However, is your implementation thread safe? Maybe I am blind ;). That was
> the reason I used lovely.memcached as memcached connector. Each thread has
> its own connection and namespace to store keys. Therefore, the locks from
> one or more zeo-clients with multiple threads were distinguishable.

You're not blind :) .

I didn't take care of which connection is used by which thread. IMHO, this
belongs at another level, similar to - for example - MySQLdb vs. ZMySQLDA:
the former just establishes a connection, the latter maintains a connection
pool and takes care of binding a connection to a thread for the duration of a
transaction.

For my code to be usable in Zope, it needs to be managed by a database adapter 
implementing such pooling & binding, plus lock releases on transaction 
boundaries.

In its current state, my code should fit the needs of code outside transaction
management, such as zeo.memcache.

Regards,
-- 
Vincent Pelletier
ERP5 - open source ERP/CRM for flexible enterprises


Re: [ZODB-Dev] zeo.memcache

2011-10-07 Thread Andreas Gabriel
Hi,

On 07.10.2011 11:18, Vincent Pelletier wrote:
> On Friday 7 October 2011 at 10:15:34, Andreas Gabriel wrote:
>> self._update() in the while loop is called (it indirectly calls the memcache
>> "query" method, a synonym for "get") before the "cas" method is called.
> 
> In my understanding from "pydoc memcache", there is "get", which loads, and 
> "gets" which loads and supposedly does some magic needed by "cas".
> Maybe on any "cas"-supporting memcache implementation "get" just does that 
> magic too.

You are right. There is a bug in my code, because it depends on
lovely.memcached, which does not support 'cas' :(. I had forgotten that the
code was never tested. Sorry!

However, is your implementation thread safe? Maybe I am blind ;). That was
the reason I used lovely.memcached as memcached connector. Each thread has its
own connection and namespace to store keys. Therefore, the locks from one or
more zeo-clients with multiple threads were distinguishable.

Kind regards
Andreas





-- 
Dr. Andreas Gabriel, Hochschulrechenzentrum, http://www.uni-marburg.de/hrz
Hans-Meerwein-Str., 35032 Marburg,  fon +49 (0)6421 28-23560  fax 28-26994
 Philipps-Universitaet Marburg ---


Re: [ZODB-Dev] zeo.memcache

2011-10-07 Thread Vincent Pelletier
On Friday 7 October 2011 at 10:15:34, Andreas Gabriel wrote:
> self._update() in the while loop is called (it indirectly calls the memcache
> "query" method, a synonym for "get") before the "cas" method is called.

In my understanding from "pydoc memcache", there is "get", which loads, and 
"gets" which loads and supposedly does some magic needed by "cas".
Maybe on any "cas"-supporting memcache implementation "get" just does that 
magic too.

More thoughts:
I didn't read the memcache (client & server) code, but I expect some server-side
flag (?) on the object (?) recording which connection (?) loaded that value
using "gets", a flag which would be cleared upon the first store on that value
(and de facto dropped when the value is dropped).
I expect breakage when the connection is closed, and when the same connection is
used by multiple competitors for a single lock.
I cannot imagine another way this could work, so far.
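
For reference, the pattern I have in mind looks roughly like this (an untested
sketch; get/gets/cas are the method names "pydoc memcache" shows, and depending
on the python-memcached version the client may need extra configuration to
remember cas tokens between calls):

"""
import memcache

mc = memcache.Client(['127.0.0.1:11211'])

def atomic_update(key, update):
    while True:
        value = mc.gets(key)         # records the token that cas() will check
        if value is None:
            return False             # key missing (never set, or evicted)
        if mc.cas(key, update(value)):
            return True              # nobody stored the key since our gets
        # cas failed: another client stored first; reload and retry
"""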

> Please continue your development, because this will be an important
> feature/enhancement for big Zope sites with many zeo-clients under heavy
> load.

Actually I'm not sure how this should be properly tested: testing this 
requires reproducing race conditions, and I think one cannot reproduce all 
possible race conditions in test cases, even knowing the code...

Ideas ?

Of course, I (as an exercise) stay focused on a stand-alone usage, where no 
ZODB conflict resolution would help recover from a bug.

-- 
Vincent Pelletier
ERP5 - open source ERP/CRM for flexible enterprises


Re: [ZODB-Dev] zeo.memcache

2011-10-07 Thread Andreas Gabriel
Hi,

On 07.10.2011 01:57, Vincent Pelletier wrote:
> On Thursday 6 October 2011 at 21:18:39, Andreas Gabriel wrote:
> I couldn't resist writing my own version, inspired by your code:
>   https://github.com/vpelletier/python-memcachelock

That's no problem :)

> It lacks any integration with ZODB.
> It drops support for non-"cas" memcached. I understand your code relies on
> ZODB conflicts as a last resort, but I wanted to scratch an itch :) .
> It drops support for timeouts (I'm not sure what they are used for, so it's
> actually more of a "left aside" than a drop).

This feature supports falling back from pessimistic locking to ZODB's standard
optimistic locking (if all locks are lost because of a restart of memcached, etc.).
 -> details: http://pypi.python.org/pypi/unimr.memcachedlock

> I admit this is my first real attempt at using "cas", and the documentation
> mentions that gets must be called before calling cas for it to succeed. I
> don't see gets calls in your code, so I wonder whether there might be a
> bug... Or maybe it's just my misunderstanding.

self._update() in the while loop is called (it indirectly calls the memcache
"query" method, a synonym for "get") before the "cas" method is called.

> As the README states: it's not well tested. I only did stupid sanity checks 
> (2 
> instances in a single python interactive interpreter, one guy on the keyboard 
> - and a slow one, because it's late) and a pylint run.

Please continue your development, because this will be an important
feature/enhancement for big Zope sites with many zeo-clients under heavy load.

kind regards
Andreas

-- 
Dr. Andreas Gabriel, Hochschulrechenzentrum, http://www.uni-marburg.de/hrz
Hans-Meerwein-Str., 35032 Marburg,  fon +49 (0)6421 28-23560  fax 28-26994
 Philipps-Universitaet Marburg ---


Re: [ZODB-Dev] zeo.memcache

2011-10-06 Thread Vincent Pelletier
On Thursday 6 October 2011 at 21:18:39, Andreas Gabriel wrote:
> Maybe this code will help as example for the shared locking problem
> 
> https://svn.plone.org/svn/collective/unimr.memcachedlock/trunk/unimr/memcac
> hedlock/memcachedlock.py

I couldn't resist writing my own version, inspired by your code:
  https://github.com/vpelletier/python-memcachelock

It lacks any integration with ZODB.
It drops support for non-"cas" memcached. I understand your code relies on
ZODB conflicts as a last resort, but I wanted to scratch an itch :) .
It drops support for timeouts (I'm not sure what they are used for, so it's
actually more of a "left aside" than a drop).
It moves the "uid" problem from a random generator to the hope that no instance
will survive 2**32 instantiations of other instances for a single lock. I feel
somewhat safer that way.
It does a super-minor optimisation: the key won't change during the instance's
life, so hash it once and use the 2-tuple form for memcache's key parameter.
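
Stripped to its core, the acquire path looks roughly like this (a simplified
sketch, not the actual python-memcachelock code; the key layout and uid
handling here are only illustrative):

"""
UNLOCKED, LOCKED = 0, 1

class MemcacheLock(object):
    _counter = 0   # per-process uid source, instead of a random generator

    def __init__(self, client, key):
        self._client = client
        self._key = key
        MemcacheLock._counter += 1
        self._uid = MemcacheLock._counter
        client.add(key, (UNLOCKED, None))      # create the entry if missing

    def acquire(self):
        # Non-blocking: returns True if we won the entry, False otherwise.
        while True:
            value = self._client.gets(self._key)
            if value is None or value[0] == LOCKED:
                return False
            if self._client.cas(self._key, (LOCKED, self._uid)):
                return True
            # cas failed: another contender stored first; re-check

    def release(self):
        value = self._client.gets(self._key)
        if value == (LOCKED, self._uid):
            self._client.cas(self._key, (UNLOCKED, None))
"""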

I admit this is my first real attempt at using "cas", and the documentation
mentions that gets must be called before calling cas for it to succeed. I don't
see gets calls in your code, so I wonder whether there might be a bug... Or
maybe it's just my misunderstanding.

As the README states: it's not well tested. I only did stupid sanity checks (2 
instances in a single python interactive interpreter, one guy on the keyboard 
- and a slow one, because it's late) and a pylint run.

Regards,
/me falls asleep on keyboard
-- 
Vincent Pelletier


Re: [ZODB-Dev] zeo.memcache

2011-10-06 Thread Pedro Ferreira
> So accesses from different processes (or even
> different instances of the cache connector) will modify it without
> synchronisation. Supporting such setup requires using the test-and-set
> memcached operation, plus some sugar. I just don't think this was intended to
> be supported in the original code.

And looking at the code, it appears that load() operations are locked as
well. In the original client cache, load() operations seem to be locked
in order to avoid inconsistent file states, but if we assume cache
set/read operations to be atomic (as I believe they are in memcache),
can't we get rid of this lock?

Cheers,

Pedro


Re: [ZODB-Dev] zeo.memcache

2011-10-06 Thread Andreas Gabriel
Hi,

On 06.10.2011 19:59, Vincent Pelletier wrote:
> synchronisation. Supporting such setup requires using the test-and-set 
> memcached operation, plus some sugar. I just don't think this was intended to 
> be supported in the original code.

Maybe this code will help as example for the shared locking problem

https://svn.plone.org/svn/collective/unimr.memcachedlock/trunk/unimr/memcachedlock/memcachedlock.py

Kind regards
Andreas

-- 
Dr. Andreas Gabriel, Hochschulrechenzentrum
Hans-Meerwein-Str., 35032 Marburg, fon +49 (0)6421 28-23560 fax -26994
- Philipps-Universitaet Marburg --



Re: [ZODB-Dev] zeo.memcache

2011-10-06 Thread Vincent Pelletier
On Wednesday 5 October 2011 at 19:45:38, Jim Fulton wrote:
> Interesting.  I'll review it.

I gave it a look.

From what I see, I don't think this can be used by more than a single Zope at
a time. My biggest hint toward this is that there is a lock on the class
instance which is not visible in memcache (i.e., nothing is modified in
memcache when taking/releasing that lock). So accesses from different
processes (or even different instances of the cache connector) will modify it
without synchronisation. Supporting such setup requires using the test-and-set
memcached operation, plus some sugar. I just don't think this was intended to
be supported in the original code.

I'm afraid of the keyify function: if an oid happens to contain a 0x20 byte,
and another object exists with the same oid minus the 0x20 bytes, they will
collide in that cache. That's easy to fix.
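
For instance (a hypothetical replacement; I haven't checked what keyify does
beyond this):

"""
import binascii

def keyify(oid):
    # Hex-encoding keeps every byte of the oid distinguishable, so an oid
    # containing 0x20 can no longer collide with the same oid minus those bytes.
    return 'zeo:%s' % binascii.hexlify(oid)
"""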

Jim: Beware, it's GPL'ed ;) .
/ducks

Regards,
-- 
Vincent Pelletier


Re: [ZODB-Dev] zeo.memcache

2011-10-05 Thread Jim Fulton
On Wed, Oct 5, 2011 at 1:40 PM, Pedro Ferreira wrote:
> Hello all,
>
> While doing some googling on ZEO + memcache I came across this:
>
> https://github.com/eleddy/zeo.memcache

Interesting.  I'll review it.

Thanks for pointing it out.

Jim

-- 
Jim Fulton
http://www.linkedin.com/in/jimfulton


[ZODB-Dev] zeo.memcache

2011-10-05 Thread Pedro Ferreira
Hello all,

While doing some googling on ZEO + memcache I came across this:

https://github.com/eleddy/zeo.memcache

Has anybody ever tried it?

I gave it a short try, and after fixing a small issue with a return value
(pull request sent) I managed to make it run. Despite some random errors
related to TIDs (which should be simple to solve[?]) it seems to work
OK. The decrease in loading time for very heavy pages is pretty
noticeable for Indico, with the additional advantage that the cache can
be shared cross-process and even cross-machine.

There is a small issue in the way ClientStorage initializes the client 
cache that seems to have got in the way of the original developer and 
that I bumped into while trying to initialize the memcached cache 
without having to hardcode the server address:

"""
 if client is not None:
 dir = var or os.getcwd()
 cache_path = os.path.join(dir, "%s-%s.zec" % (client, storage))
 else:
 cache_path = None

 self._cache = self.ClientCacheClass(cache_path, size=cache_size)
"""

Could this be done such that the constructor argument for
ClientCacheClass can be specified in a less restrictive way? For
instance, by directly passing `client` and building the cache path
inside the constructor? That would help a lot.
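
For illustration, I mean something along these lines (purely hypothetical, not
existing ZEO or zeo.memcache code):

"""
import os

# Hypothetical: ClientStorage would pass the raw pieces and let the cache
# class decide what to do with them, e.g.:
#
#     self._cache = self.ClientCacheClass(client, storage,
#                                         var=var, size=cache_size)

class FileClientCache(object):
    # Keeps today's behaviour: builds the .zec path itself.
    def __init__(self, client, storage, var=None, size=None):
        if client is not None:
            directory = var or os.getcwd()
            self.path = os.path.join(directory, "%s-%s.zec" % (client, storage))
        else:
            self.path = None

class MemcacheClientCache(object):
    # Ignores paths entirely and derives a key prefix instead.
    def __init__(self, client, storage, var=None, size=None):
        self.prefix = "%s-%s" % (client, storage)
"""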


Cheers,

Pedro

-- 
José Pedro Ferreira

Software Developer, Indico Project
http://indico-software.org

+---+
+  '``'--- `+  CERN - European Organization for Nuclear Research
+ |CERN|  / +  1211 Geneve 23, Switzerland
+ ..__. \.  +  IT-UDS-AVC
+  \\___.\  +  Office: 513-1-005
+  /+  Tel. +41227677159
+---+
___
For more information about ZODB, see http://zodb.org/

ZODB-Dev mailing list  -  ZODB-Dev@zope.org
https://mail.zope.org/mailman/listinfo/zodb-dev