Steve, thanks for the advice.
Chris, I did think of a similar approach. I was thinking of putting the
items in chunks. For example, I would use event_0_500 for the first 500
items, then event_501_1000 for the next, and so on. However, as ours is
a list that represents a timeline, i.e. from today's date to some future
date, the problem is that a new object could land inside an existing
chunk, pushing items over the edge into the next chunk, which means all
the later chunks would need updating.
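Something like this rough Python sketch is what I had in mind (assuming
a python-memcached-style client with set/get; the key names and
CHUNK_SIZE are just illustrative), and it shows why a mid-timeline
insert forces us to rewrite every later chunk:

    import memcache

    mc = memcache.Client(['127.0.0.1:11211'])
    CHUNK_SIZE = 500

    def rewrite_chunks(event_ids):
        # event_ids is the full, time-ordered list of event ids.
        for start in range(0, len(event_ids), CHUNK_SIZE):
            key = 'event_%d_%d' % (start, start + CHUNK_SIZE)
            mc.set(key, event_ids[start:start + CHUNK_SIZE])

    def insert_event(event_ids, new_id, position):
        # A new event landing mid-timeline shifts every later item by one,
        # so every chunk from that position onward has to be rewritten.
        event_ids.insert(position, new_id)
        rewrite_chunks(event_ids)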
Paul Stacey
Chris Hondl wrote:
One thing that we do is store collections in multiple keys.
For example we keep a collection of all users currently running the
latest release. This collection has about 30,000 items. We have a
set of numbered keys "release_user_00" through "release_user_99" that
each contain a comma-separated list of ids. We take the id of the
user modulo 100 and store the id into the corresponding key. Makes it
relatively lightweight to update the data, and easy to pull down the
entire set as necessary.
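Roughly, in Python (a simplified sketch assuming a python-memcached-style
client with get/set/get_multi; a real version would need cas or locking
to avoid races on concurrent updates):

    import memcache

    mc = memcache.Client(['127.0.0.1:11211'])
    NUM_BUCKETS = 100

    def bucket_key(user_id):
        # Spreads users across "release_user_00" .. "release_user_99".
        return 'release_user_%02d' % (user_id % NUM_BUCKETS)

    def add_release_user(user_id):
        key = bucket_key(user_id)
        ids = mc.get(key)
        id_set = set(ids.split(',')) if ids else set()
        id_set.add(str(user_id))
        mc.set(key, ','.join(id_set))

    def all_release_users():
        # One multi-get pulls the whole ~30,000-item collection down.
        keys = ['release_user_%02d' % i for i in range(NUM_BUCKETS)]
        buckets = mc.get_multi(keys)
        return [int(u) for ids in buckets.values() if ids
                for u in ids.split(',')]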
We use this approach for handling sets of objects in several parts of
our system.
Chris
On 7/5/07, Steve Grimm <[EMAIL PROTECTED]> wrote:
On 7/5/07 5:12 AM, "Paul Stacey" <[EMAIL PROTECTED]> wrote:
> Events contains an array of event
A better way to deal with this kind of thing is with a two-phase fetch.
Instead of directly caching an array of event data, cache an array of
event IDs. Query that list, then use it to construct a list of the keys
of the individual event objects you want to fetch, then multi-get that
list of keys.
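Roughly, as a Python sketch (assuming a python-memcached-style client
with get/set/get_multi; the key names and the two load_* database
helpers are made up for illustration):

    import memcache

    mc = memcache.Client(['127.0.0.1:11211'])

    def get_events_for_user(user_id):
        # Phase 1: fetch the cached list of event IDs (small and cheap).
        event_ids = mc.get('user_events:%d' % user_id)
        if event_ids is None:
            event_ids = load_event_ids_from_db(user_id)  # hypothetical helper
            mc.set('user_events:%d' % user_id, event_ids)

        # Phase 2: multi-get the individual event objects by ID.
        keys = ['event:%d' % eid for eid in event_ids]
        cached = mc.get_multi(keys)

        events = []
        for eid, key in zip(event_ids, keys):
            ev = cached.get(key)
            if ev is None:
                ev = load_event_from_db(eid)             # hypothetical helper
                mc.set(key, ev)
            events.append(ev)
        return events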
In addition to greatly increasing the maximum size of the list
(memcached has a 1-megabyte hard limit on item size, and obviously you
can fit a lot more IDs than full data in that limit), you will also, if
you're using multiple memcached servers, spread the load around by
virtue of your multi-get hitting basically every server once your key
list is a certain size. If you stick all the data in one
frequently-requested item, then you will pound the particular server
that happened to get that item while all the others sit idle.
Another advantage of a scheme like this is that you can update an
item's data without having to read then write every list that contains
that item. Just update it by ID (like you'd do in your database
queries) and all the lists that contain it will magically get the
correct information.
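For example, under the same assumed client and key scheme as the sketch
above:

    def update_event(event):
        # Rewrite only the per-event key; every cached ID list that refers
        # to this event picks up the new data on its next two-phase fetch.
        mc.set('event:%d' % event.id, event)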
All the above said, it is true that dealing with lists of things is not
currently memcached's strong suit. For example, there is no "append"
operation. It is much better at caching individual objects.
-Steve
--
http://avatars.imvu.com/chris