fooler wrote:

----- Original Message -----
From: "Mark M. Barrios" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, March 11, 2003 1:06 PM
Subject: Re: [plug] squid cache




So say I have a 10GB partition/HD dedicated to a cache_dir. I should
set the cache_dir size well below the total actual size of that
partition, like say 4GB?



It is up to you what to set, but be sure it is below 10GB, not exactly 10GB.
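For example (the size and directory here are illustrative, assuming a 10GB partition mounted for the cache), the squid.conf line could look like:

```
# cache_dir <type> <directory> <size-in-MB> <L1-dirs> <L2-dirs>
# 8000 MB on a 10GB partition leaves headroom for swap.state,
# filesystem overhead, and temporary overshoot; some admins
# (as argued later in this thread) allocate only half the disk.
cache_dir ufs /var/spool/squid 8000 16 256
```

Note the size is given in megabytes, not gigabytes.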




Then wouldn't that be a waste of space?



It is a waste of space, but you will find the reason below...




Setting the cache_dir to something small doesn't solve the
problem,



Huh? It will not solve the disk-full problem?


yes



It would only reach 100% faster, then you'd get warnings in
squid's log.



Huh? And what warning message would that be?



I forget the exact message, but it would be something about the cache_dir usage being over 100%, or about the disk being full / not enough disk space, and squid dying.




a small cache_dir with lots of requests coming in also makes for an
ineffective cache.



How sure are you of your claims? Do you understand how a filesystem works?
I thought you understood this; that is why we allocate the cache_dir half of
the total capacity of that partition...



Firsthand experience :) And please don't question me like I don't know squat.


Web objects stay only temporarily in your disk cache. These objects rapidly
come in and out of your filesystem, which means rapid file/block allocation
and deallocation. This scenario leads to disk *fragmentation*, and
fragmentation is very harmful to filesystem disk I/O performance. To
compensate for this problem you have to sacrifice some *wasted* space. That
is why I allocated half of the total disk capacity: with that headroom, the
filesystem can always find a *best fit* run of contiguous blocks in the
unused space, and until the unused space is in use, previously deallocated
space becomes free contiguous blocks again. Therefore this leads to
effective caching, especially for disk I/O performance.
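The best-fit argument above can be sketched with a toy allocator (purely illustrative; this is not Squid or UFS code, and all names are made up). It shows how, when headroom exists, an allocation is satisfied from the smallest contiguous free extent that fits, leaving the rest of the free space intact:

```python
def best_fit_alloc(free_extents, size):
    """Allocate `size` blocks from the smallest free extent that fits.

    free_extents: list of (start, length) tuples describing free runs.
    Returns (start, new_free_extents), or (None, free_extents) if nothing fits.
    """
    candidates = [e for e in free_extents if e[1] >= size]
    if not candidates:
        return None, free_extents          # no contiguous run big enough
    start, length = min(candidates, key=lambda e: e[1])  # best (smallest) fit
    rest = [e for e in free_extents if e != (start, length)]
    if length > size:
        rest.append((start + size, length - size))       # keep the remainder free
    return start, sorted(rest)

# With headroom, a 5-block request still finds a contiguous home:
start, free = best_fit_alloc([(0, 4), (10, 8)], 5)
print(start, free)  # 10 [(0, 4), (15, 3)]
```

A nearly full disk would leave only small scattered extents, so large objects could not be placed contiguously, which is the fragmentation cost being described.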



I've never encountered or heard of problems on UFS concerning fragmentation.
And by 'ineffective' I meant a cache that doesn't do much to help conserve bandwidth, because popular objects are evicted even before they get a HIT.




We were running the latest stable release at that time, and when I set
cache_replacement_policy to something other than the default, the cache
size stayed within its limits.



With the GDSF, LFUDA, or LRU replacement policy, squid always stayed within
its limits...
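For reference, the directive being debated looks like this in squid.conf (the policy chosen here is just an example; the heap-based policies require Squid to be built with removal-policy support):

```
# Default is the list-based LRU policy ("lru").
# Heap-based alternatives: "heap GDSF", "heap LFUDA", "heap LRU".
cache_replacement_policy heap LFUDA
```

LFUDA favors keeping frequently used objects regardless of size, while GDSF favors keeping many small popular objects.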



In my personal, humble experience, that was not the case.


fooler.




_ Philippine Linux Users Group. Web site and archives at http://plug.linux.org.ph To leave: send "unsubscribe" in the body to [EMAIL PROTECTED]

Fully Searchable Archives With Friendly Web Interface at http://marc.free.net.ph

To subscribe to the Linux Newbies' List: send "subscribe" in the body to [EMAIL PROTECTED]
