On Sat, Aug 04, 2001 at 10:34:34AM -0700, brian moseley wrote:

> ps: i've modified my code to 1) only get once per request
> and 2) set at the end of each request. the net effect is
> that stuff works as expected. i'm reasonably happy with the
> current state of affairs, but...

Excellent, this is the right approach.  Sounds like I need to update
the documentation to say that "objects retrieved from the cache are
not 'live,' they are clones.  If you want to save modifications,
remember to store them again in the cache."
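
For example, the idiom looks something like this (the session hash,
key, and namespace here are made up for illustration, and this is an
untested sketch):

    use Cache::FileCache;

    my $cache = Cache::FileCache->new( { namespace => 'Sessions' } );

    my $session_id = 'abc123';
    $cache->set( $session_id, { hit_count => 0 } );

    # get() hands back a clone (via Storable), not the stored instance
    my $session = $cache->get( $session_id );
    $session->{hit_count}++;

    # so modifications have to be written back explicitly
    $cache->set( $session_id, $session );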


> i don't like having to explicitly call set to force modifications to
> be written to the cache, and i'd prefer that get always return the
> same instance that i originally set. can these issues be considered
> for a future version of the interface? i see the need for the
> current behavior when using file-based caches, but perhaps there's a
> way to streamline operations for memory caches? perhaps Storable can
> be bypassed for memory caches?

Please see my last email on the subject.  For all intents and
purposes, I'd like caches to behave consistently, but we could
definitely create a special-purpose "live" memory-based cache that
does what you want.
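
To be clear about what "live" would mean: such a cache would just
hold the references themselves instead of freezing them with
Storable, so get() returns the very instance that was set().  A toy
sketch (the package name is hypothetical; nothing like it exists
yet):

    package Cache::LiveMemoryCache;   # hypothetical name

    sub new { my ( $class ) = @_; return bless { _store => { } }, $class }

    # store the reference itself -- no freeze/thaw, so every caller
    # shares the same instance and changes show up without a set()
    sub set {
      my ( $self, $key, $value ) = @_;
      $self->{_store}{$key} = $value;
    }

    sub get {
      my ( $self, $key ) = @_;
      return $self->{_store}{$key};
    }

    sub remove {
      my ( $self, $key ) = @_;
      delete $self->{_store}{$key};
    }

    1;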



> also, has there been any thought given to locking cached
> items? when i'm using a shared cache with multiprocess
> apache, the opportunity exists for multiple requests to
> access a single session simultaneously, which can lead to
> races. which i'm happy to ignore for now but would be nice
> to eventually prevent.

A number of people have requested this, and I think it is a good idea.
For some back-end implementations, such as the FileCache on a POSIX
system, locking is conceptually easy to do, and the SharedMemoryCache
already does some locking using IPC::ShareLite.  However, I wanted to
wait until I came up with the right level of granularity for the
locking API.  Plus, it would require a non-trivial rewrite to
separate the front-end of the cache (which would block on locks) from
the back-end (which would be aware of the locks and manage them).
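
Just to illustrate the granularity question, per-key locking might
end up looking something like this (purely hypothetical; none of
these methods exist in the API today):

    # hypothetical per-key locking, shown only for illustration
    $cache->lock( $session_id );
    my $session = $cache->get( $session_id );
    $session->{hit_count}++;
    $cache->set( $session_id, $session );
    $cache->unlock( $session_id );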

However, the good news is that there isn't really a race on writes.
Basically, the last write wins.  It is tough to figure out (from the
cache's perspective) what the appropriate behavior is in all cases,
so the user should be responsible for locking if they want it.  Of
course, that should be done through the cache API, which I
regrettably haven't added yet.
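
In the meantime, one way to serialize access today is with an
external lock file around the get/modify/set sequence.  A rough
sketch (the lock file path is made up, and this assumes a POSIX
flock):

    use Fcntl qw( :flock );

    open( my $lock_fh, '>', "/tmp/session-$session_id.lock" )
      or die "cannot open lock file: $!";
    flock( $lock_fh, LOCK_EX ) or die "cannot lock: $!";

    my $session = $cache->get( $session_id );
    $session->{hit_count}++;
    $cache->set( $session_id, $session );

    flock( $lock_fh, LOCK_UN );
    close( $lock_fh );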

Cheers,

-DeWitt
