On 21 July 2009, at 13:27, Henrik Schröder wrote:

Wait, you say that if you manually find items and delete them, your cache never really grows much and you avoid the problem. But why don't you set short expiration times on all items from the start? Then you wouldn't have to do your manual find/delete because the cache would never grow, and you would always have memory left for new slab allocations when circumstances change.


Well, when storing a new item memcached allocates fresh memory unless it finds an expired item at the tail of the item LRU to reuse. You may have expired items in the middle of your LRU list, and the server would still evict live items from the cache if all memory is allocated.
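
In other words, expiry is only checked lazily, near the tail, at allocation time. A rough Python model of that behaviour (this is not memcached's actual C code; the Item class, MAX_ITEMS and the tail search depth are purely illustrative):

import time
from collections import deque

class Item:
    def __init__(self, key, value, ttl):
        self.key = key
        self.value = value
        self.expires_at = time.time() + ttl

    def expired(self, now):
        return now >= self.expires_at

# lru[0] is the tail (least recently used), lru[-1] the head.
lru = deque()
MAX_ITEMS = 1000          # stand-in for "all memory is allocated"
TAIL_SEARCH_DEPTH = 5     # only a few items at the tail are inspected

def allocate(key, value, ttl):
    now = time.time()
    # Look for an expired item near the tail of the LRU and reuse its slot.
    for i in range(min(TAIL_SEARCH_DEPTH, len(lru))):
        if lru[i].expired(now):
            del lru[i]
            break
    else:
        # No expired item near the tail: if memory is full, evict the tail
        # item, even though expired items may sit further up the list.
        if len(lru) >= MAX_ITEMS:
            lru.popleft()
    item = Item(key, value, ttl)
    lru.append(item)
    return item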

Cheers,

Trond



/Henrik

On Tue, Jul 21, 2009 at 00:21, dl4ner <[email protected]> wrote:

By now I have started some tests that send my expire run regularly (every 30 min) to a freshly started cache. It does not even grow large - it's allowed to handle 3 GB of RAM and stays way below 300 MB. That way it has lots of spare pages to allocate for rarely used slots.

But it's quite a waste of CPU cycles to expire such a tremendous cache client-side; this should be done server-side. It's just wrong to let the CPUs extract the keys, prepare the cachedump, send it over the net, parse the cachedump client-side, compare the timestamps (which could be done server-side instead of preparing the cachedump), then send thousands of delete requests back to the server and waste bandwidth a second time.
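
For reference, that client-side sweep looks roughly like this in Python (a sketch, not production code: `stats cachedump` is an undocumented command, its output format and the meaning of its timestamp differ between memcached versions, and the host/port, regexes and the `0` limit here are assumptions):

import re
import socket
import time

# Hypothetical host/port; adjust for your deployment.
HOST, PORT = "127.0.0.1", 11211

def command(sock, cmd):
    """Send one ASCII-protocol command and read until the END terminator."""
    sock.sendall(cmd.encode() + b"\r\n")
    buf = b""
    while not buf.endswith(b"END\r\n"):
        buf += sock.recv(4096)
    return buf.decode().splitlines()

def expire_client_side():
    now = int(time.time())
    with socket.create_connection((HOST, PORT)) as sock:
        # 1. Find the slab classes that currently hold items.
        slabs = set()
        for line in command(sock, "stats items"):
            m = re.match(r"STAT items:(\d+):number (\d+)", line)
            if m and int(m.group(2)) > 0:
                slabs.add(int(m.group(1)))

        # 2. Dump keys per slab class and pick out the expired ones.
        #    (Assumed line format: "ITEM <key> [<bytes> b; <exptime> s]".)
        for slab in slabs:
            for line in command(sock, f"stats cachedump {slab} 0"):
                m = re.match(r"ITEM (\S+) \[\d+ b; (\d+) s\]", line)
                if m and 0 < int(m.group(2)) <= now:
                    # 3. Send a delete for each expired key.
                    sock.sendall(f"delete {m.group(1)}\r\n".encode())
                    sock.recv(4096)  # "DELETED" or "NOT_FOUND"

if __name__ == "__main__":
    expire_client_side()

Every ITEM line has to travel to the client and every expired key has to travel back as a delete, which is exactly the double waste of bandwidth described above.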


--
Trond Norbye

Web Scale Infrastructure                 E-mail: [email protected]
SUN Microsystems                         Phone:  +47 73842100
Haakon VII's gt. 7B                      Fax:    +47 73842101
7485 Trondheim, Norway
