Gavin McCullagh wrote:
Hi,
Our Squid system (according to our Munin graphs) is suffering rather badly from
high iowait. I'm also seeing warnings of disk I/O overloading.
I'm interested in understanding how this disk load scales. I know more disks
(we only have a single cache disk just now) would be a big help. One
question I have is whether (and how) the disk load scales with the size of
the cache.
I'll present a ludicrously simplistic description of how disk load might
scale (purely as a starting point) and ask people to point out where I'm
wrong.
The job a single disk running a cache must do in some time step might be:
disk_work = (write_cached_data) + (cache_replacement_policy) + (read_hits)
where (with x being the fraction of downloaded bytes that is cachable and
gets written to disk):
(write_cached_data) =~ x * (amount_downloaded)
(cache_replacement_policy) = (remove_expired_data) + (LRU,LFUDA,...)
(read_hits) =~ byte_hit_rate * (amount_requested)
(LRU,LFUDA,...) =~ amount of space that must be freed =~ x * (amount_downloaded)
(remove_expired_data) =~ x * (amount_downloaded) over previous time steps
so
disk_work = f(amount_downloaded,byte_hit_rate,cache_replacement_policy)
To me this speculative analysis suggests that the load on the disk is a
function of the byte_hit_rate and the amount being downloaded, but not of
the absolute cache size.
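As a sanity check, here is the same back-of-envelope model as a small Python
sketch. The function and parameter names are my own, and the sample numbers
are invented purely for illustration, not measurements from our proxy:

def disk_work(amount_downloaded, amount_requested, byte_hit_rate, x):
    """Approximate disk work (in bytes) per time step.

    x is the fraction of downloaded bytes that is cachable and
    gets written to disk.
    """
    write_cached_data = x * amount_downloaded
    # Once the cache_dir is full, the replacement policy (LRU/LFUDA)
    # must free roughly as much space as we write, and expiry removal
    # tracks what was downloaded in earlier time steps.
    lru_or_lfuda = x * amount_downloaded
    remove_expired_data = x * amount_downloaded
    cache_replacement_policy = remove_expired_data + lru_or_lfuda
    read_hits = byte_hit_rate * amount_requested
    return write_cached_data + cache_replacement_policy + read_hits

# Example: clients request 1 GB, of which 700 MB is fetched from origin
# servers (30% byte hit rate), and 80% of downloads are cachable.
# Note that the absolute cache_dir size appears nowhere in the sum.
MB = 1024 * 1024
print(disk_work(700 * MB, 1000 * MB, 0.30, 0.80) / MB)  # ~1980 MB
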
So, decreasing the cache_dir size might lower the disk load, but only insofar
as it lowers the byte_hit_rate (and possibly the seek times on the disk, I
guess).
Is there something wrong in this?
Gavin
You appear to be correct. Squid ignores disk objects that it does not
need. Objects get stored to disk when requests need more memory space,
and pulled off disk when they are needed to serve a request.
Occasionally, during low load and with a full cache_dir, the garbage
collection will purge a small percentage batch of objects, which raises
disk I/O above what the hit rate alone would suggest.
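In terms of Gavin's model, that purge is just an occasional extra term on top
of the steady-state work. A minimal sketch, under my own assumption (not
confirmed against the Squid source) that the purge batch is roughly
proportional to the cache_dir size:

def disk_work_with_purge(steady_work, cache_dir_size,
                         purge_fraction=0.01, purging=False):
    """steady_work plus an occasional garbage-collection burst.

    This is the one place the absolute cache_dir size would enter
    the disk load at all, and only during a purge.
    """
    return steady_work + (purge_fraction * cache_dir_size if purging else 0.0)
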
Amos
--
Please be using
Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
Current Beta Squid 3.1.0.6