On Wednesday 12 March 2003 00.01, Lightfoot.Michael wrote:

> Hmmm.  The only cache I have ever run on Linux is on a Debian
> system and had similar symptoms as the UFS problem on Solaris - the
> filesystem wasn't full (AFAIR), but the cache ran out of space. 
> This was nearly two years ago so perhaps my memory of the problem
> is faulty.
>
> So what causes problems on ext2 as I observed?

No idea. Maybe you ran out of inodes? Or ran into the magic 2GB limit 
for swap.state from not rotating the logs often enough?
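For what it's worth, both conditions are easy to check from a script. This is just a sketch using Python's statvfs wrapper; the cache path is a hypothetical example, substitute your own cache_dir:

```python
import os

def fs_headroom(path):
    """Report free space and free inodes for the filesystem holding
    `path` -- a cache_dir can run "out of space" on either one, even
    when df still shows free blocks."""
    st = os.statvfs(path)
    return {
        "free_bytes": st.f_bavail * st.f_frsize,  # space a non-root user can use
        "free_inodes": st.f_favail,
        "total_inodes": st.f_files,
    }

# e.g. fs_headroom("/var/spool/squid")  -- path is an assumed example
```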

But I do know ext2 does not support block fragments and only 
allocates whole blocks. Support for block fragments has been on the 
ext2 todo list since the filesystem was created, but has never been 
implemented and probably never will be, especially now that Linux 
has other filesystem types well suited to storing many tiny 
files...
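A back-of-the-envelope sketch of what whole-block allocation costs for a cache full of tiny objects (the block size and object sizes below are assumed examples, 4 KB being a common ext2 block size):

```python
def disk_usage(sizes, block=4096):
    """Bytes actually consumed when every file is rounded up to
    whole blocks, as on ext2 without block fragments."""
    # -(-s // block) is ceiling division: each file takes at least one block
    return sum(-(-s // block) * block for s in sizes)

# 1000 cached objects of 500 bytes each:
payload = [500] * 1000
print(disk_usage(payload))  # 4096000 bytes on disk for 500000 bytes of data
```

With block fragments (as on UFS) or tail packing, much of that ~3.5 MB of slack would be reclaimed.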

On an almost related note: reiserfs supports a kind of block 
fragments, but unlike Solaris UFS it is not noticeably plagued by 
block fragment fragmentation; the only visible effect is increased 
storage capacity for small/tiny files.

And to be fair to Solaris: the actual block fragmentation issue in 
Solaris UFS is not that bad in technical terms for Squid. It mostly 
amounts to a misleading amount of free space being reported by df. 
If you run out of space because of block fragment fragmentation, 
then either you have configured the cache_dir too large or Squid 
accounts wrongly for the cache size and block fragments.

Regards
Henrik
