'Twas brillig, and Steven Brown at 09/06/10 22:42 did gyre and gimble:
> Check out my blog posts:
> 
> http://www.yewchube.com/2009/03/zend_cache_backend_file-auto-clean-causing-problems/
> 
> http://www.yewchube.com/2009/04/zend_cache_backend_file-and-tag-based-cleaning/
> 
> Basically the cleaning process takes too long if you have a large
> number of cache entries. Metadata is not indexed so the cleaning
> process needs to open and check every single metadata file.
> 
> Turn the auto cleaning off and use a cron job instead so your users
> are not impacted, or switch to memcache which cleans itself (remember
> to turn auto cleaning off here too).

Thanks for the info. I'll give them a read over. I'll certainly turn off
auto-cleaning, but even then, the manual process I have to run to clear
all entries with a certain tag will still be problematic once the number
of cached items grows too large.
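
The cron approach suggested above could be as little as one crontab
line; the cleaning script's path and name below are hypothetical (it
would just call clean() on the cache frontend):

```shell
# Hypothetical crontab entry: run the cache-cleaning script nightly at
# 03:15, off-peak, so the slow metadata scan never blocks a user request.
15 3 * * * /usr/bin/php /path/to/clean_cache.php >> /var/log/cache-clean.log 2>&1
```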

Most of the time, I actually don't want any automatic cleaning anyway
(it's a denormalisation scheme, so the caches live forever provided they
are valid - the tag system is used to ensure the right entries are
removed when they become invalid). This makes for a very efficient view
system, but clearly the job of purging the invalid entries can be very
time-consuming and IO-intensive :(

I wonder if SQLite-backed metadata storage would be feasible. The SQLite
database itself would only need to be write-locked when updating it,
which wouldn't be the common case....
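
As a rough sketch of the idea (in Python rather than PHP, just to show
the shape - the schema, table and function names are all made up, not
anything from Zend_Cache), an indexed tag table would turn "clean by
tag" into one indexed query instead of opening every metadata file:

```python
import sqlite3

# Hypothetical schema: one row per (cache id, tag) pair, indexed on tag,
# so finding the entries for a tag never touches the metadata files.
con = sqlite3.connect(":memory:")  # in practice, a file beside the cache dir
con.execute("CREATE TABLE cache_tag (id TEXT NOT NULL, tag TEXT NOT NULL)")
con.execute("CREATE INDEX idx_tag ON cache_tag (tag)")

def save(cache_id, tags):
    """Record a cache entry's tags (done on every cache write)."""
    with con:  # write lock held only for this short transaction
        con.executemany("INSERT INTO cache_tag VALUES (?, ?)",
                        [(cache_id, t) for t in tags])

def ids_matching_tag(tag):
    """Read-only lookup: the common case, no write lock needed."""
    rows = con.execute(
        "SELECT DISTINCT id FROM cache_tag WHERE tag = ?", (tag,))
    return [row[0] for row in rows]

def clean_tag(tag):
    """Drop the index rows for a tag; the caller unlinks the cache files."""
    ids = ids_matching_tag(tag)
    with con:
        con.execute("DELETE FROM cache_tag WHERE tag = ?", (tag,))
    return ids

save("view_a", ["user_1", "sidebar"])
save("view_b", ["user_2", "sidebar"])
print(clean_tag("sidebar"))  # the two tagged entries
```

The real backend interface is obviously more involved than this, but it
shows why an indexed lookup should scale where the open-every-file scan
doesn't.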

Col


-- 

Colin Guthrie
gmane(at)colin.guthr.ie
http://colin.guthr.ie/

Day Job:
  Tribalogic Limited [http://www.tribalogic.net/]
Open Source:
  Mandriva Linux Contributor [http://www.mandriva.com/]
  PulseAudio Hacker [http://www.pulseaudio.org/]
  Trac Hacker [http://trac.edgewall.org/]
