I have no formal tests, but we run a website that uses Zend_Cache with
an infinite lifetime, and at around 2.4 GB of cache files everything
looks normal. Beyond that point I see disk I/O time increase slightly,
and so does the load (around 2.x). We make a habit of cleaning up the
cache at that level.



Rgds,
Armand





On Wed, 09 Jun 2010 14:27:30 +0100, Colin Guthrie <[email protected]>
wrote:

> Hi,
> 
> I've not done much in the way of extensive testing in this regard, but
> I figured I'd ask some questions and see if any other folks are in
> this situation.

> 

> I've been developing a fairly extensive data denormalisation system
> which is based on Zend_Cache + the File backend. It makes use of Tags
> to ensure that the relevant bits of denormalised data are expired
> properly (lifetime is infinite).
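For reference, the setup described above can be sketched roughly like this (the cache directory, cache id, record contents and tag names are all placeholder values, not taken from Col's actual code):

```php
<?php
// Sketch only: assumes Zend Framework 1 is on the include path.
require_once 'Zend/Cache.php';

$cache = Zend_Cache::factory(
    'Core',
    'File',
    array(
        'lifetime'                => null,  // null = infinite lifetime
        'automatic_serialization' => true,
    ),
    array('cache_dir' => '/tmp/cache/')
);

// Save a denormalised record with tags so related entries can be
// expired as a group later.
$record = array('name' => 'example');
$cache->save($record, 'user_42_profile', array('user_42', 'profiles'));

// Expire everything carrying a given tag; this is the clean() call
// that makes the File backend walk its cache directory.
$cache->clean(Zend_Cache::CLEANING_MODE_MATCHING_TAG, array('user_42'));
```

It is that directory walk inside clean() that grows with the number of cached files.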

> 

> I'm only rolling this out minimally just now, but plan to take this
> *much* further in due course.

> 

> The problem is that I've had several errors in my log telling me that
> the maximum script execution time has been exceeded. This always
> happens in Zend/Cache/Backend/File.php on line 655, 659 or 962.

> 

> This part of the file generally relates to the clean() method (i.e.
> called when deleting by tag).

> 

> So my question is, how scalable is the tagging support in the
> Zend_Cache File backend? Currently I only have about 89 megs of data
> in about 22k files (so about 11k cached items, bearing in mind that
> half the files are metadata).

> 

> I suspect that when I roll out this denormalisation scheme more
> extensively, I'll have closer to a million files.

> 

> Has anyone done any scalability tests on this? For my purposes, I'm
> happy to store the metadata regarding tags in a database table and
> expire the items based on that (so create a HybridFile.php backend of
> sorts - I have already created something similar to allow me to
> support tags with the Memcache backend). Would this approach scale
> better? Has anyone done anything similar to this?
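A minimal sketch of that tag-index idea, assuming a SQLite table via PDO (the table name, column names and ids here are invented for illustration; actual entry removal would still be delegated to the File backend, shown as a commented call on a hypothetical $fileBackend):

```php
<?php
// Hypothetical tag index kept in a database instead of the File
// backend's per-entry metadata files.
$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE cache_tags (cache_id TEXT, tag TEXT)');
$db->exec('CREATE INDEX idx_tag ON cache_tags (tag)');

// On save(): record each tag -> id mapping alongside the cached file.
$ins = $db->prepare('INSERT INTO cache_tags (cache_id, tag) VALUES (?, ?)');
foreach (array('user_42', 'profiles') as $tag) {
    $ins->execute(array('user_42_profile', $tag));
}

// On clean() by tag: one indexed query instead of scanning every
// metadata file on disk, then remove each matching entry.
$sel = $db->prepare('SELECT DISTINCT cache_id FROM cache_tags WHERE tag = ?');
$sel->execute(array('profiles'));
$del = $db->prepare('DELETE FROM cache_tags WHERE cache_id = ?');
foreach ($sel->fetchAll(PDO::FETCH_COLUMN) as $id) {
    // $fileBackend->remove($id);  // delegate file deletion to the backend
    $del->execute(array($id));
}
```

The appeal is that expiry cost becomes one indexed query plus one unlink per matching entry, rather than a scan of every metadata file in the cache directory.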

> 

> Col
