Hello,

I've got a couple of questions that I'm hoping some of the more 
experienced folks here can enlighten me on.

1) We recently set up several squid servers to act as reverse-proxy 
accelerators for some of our web-content servers. When we first set them up 
they were left mostly at the default configs; we have since tuned the configs 
for performance and have noticed some oddities.

Initially we deployed two boxes with the default cache settings:
  cache_mem 8 MB  
  maximum_object_size_in_memory 8 KB
  cache_dir ufs /usr/local/squid/cache 1024 16 256
  maximum_object_size 4096 KB
 
We then changed them to:
  cache_mem 1536 MB  
  maximum_object_size_in_memory 120 KB
  cache_dir ufs /usr/local/squid/cache 10240 16 256
  maximum_object_size 100 MB

Both servers are kept with identical configs, specs, and OS/component versions. 
Both serve the same load-balanced pool and in turn receive requests through 
the load balancer. Conceptually they see identical traffic and should behave 
the same.

The first server was hard-bounced and was unable to save its in-memory cache 
objects; the second was gracefully told to reload its config with squid -k 
parse. We then spun up a third server that has used the second cache config 
(listed above) since its first startup. All three servers have now been 
running for a couple of days (the third one day less), and we've noted the 
stats below and are confused by the deviation. Most interesting: when 
monitoring the access logs for servers A and B, I frequently see the same 
objects requested, yet they are not being moved back into memory. Would I be 
wrong to assume that squid tracks how long it takes to return an object, can 
therefore tell when a request was serviced by the Linux buffer cache rather 
than by an actual disk read, and because of that doesn't bother to move the 
object into its own memory?

Cache information for squid:
                                        SERVER_A                    SERVER_B                    SERVER_C
  Hits as % of all requests:            5min: 99.2%, 60min: 99.5%   5min: 99.4%, 60min: 99.5%   5min: 99.2%, 60min: 99.2%
  Hits as % of bytes sent:              5min: 99.6%, 60min: 99.8%   5min: 99.8%, 60min: 99.8%   5min: 99.6%, 60min: 99.6%
  Memory hits as % of hit requests:     5min:  1.4%, 60min:  1.4%   5min: 28.2%, 60min: 27.3%   5min: 98.6%, 60min: 98.6%
  Disk hits as % of hit requests:       5min: 97.2%, 60min: 97.2%   5min: 70.6%, 60min: 71.3%   5min:  0.0%, 60min:  0.0%
  Storage Swap size:                    114136 KB                   114104 KB                   53412 KB
  Storage Swap capacity:                1.1% used, 98.9% free       1.1% used, 98.9% free       0.5% used, 99.5% free
  Storage Mem size:                     62904 KB                    66100 KB                    55076 KB
  Storage Mem capacity:                 3.0% used, 97.0% free       3.2% used, 96.8% free       3.0% used, 97.0% free
  Mean Object Size:                     4.48 KB                     4.48 KB                     4.41 KB
  Requests given to unlinkd:            3                           5                           1

Internal Data Structures:
                                        SERVER_A                    SERVER_B                    SERVER_C
  StoreEntries:                         26599                       26517                       12560
  StoreEntries with MemObjects:         14099                       15321                       12560
  Hot Object Cache Items:               14098                       15320                       12559
  on-disk objects:                      25480                       25453                       12101
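For what it's worth, the way I've been spot-checking the memory/disk split is 
a quick awk pass over the access log. The log path and the native log format 
(field 4 being the result code, e.g. TCP_MEM_HIT/200) are assumptions about 
my particular layout:

```shell
#!/bin/sh
# Rough tally of memory hits vs disk hits in a Squid access log
# (native format: field 4 is the result code, e.g. TCP_MEM_HIT/200
# or TCP_HIT/200). The log path below is specific to my install.
LOG=/usr/local/squid/logs/access.log

count_hits() {
    # Reads the files named in the arguments, or stdin if none given.
    # TCP_HIT here means "served from the on-disk store"; other hit
    # variants (IMS, REFRESH) are ignored -- this is only a rough count.
    awk '$4 ~ /TCP_MEM_HIT/ {mem++}
         $4 ~ /TCP_HIT/     {disk++}
         END {printf "mem=%d disk=%d\n", mem, disk}' "$@"
}

# Only run against the real log if it exists on this box.
if [ -f "$LOG" ]; then
    count_hits "$LOG"
fi
```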

2) Related to question 1, I've tried to change memory_replacement_policy from 
lru to heap LFUDA; however, when I load the config with 
"memory_replacement_policy heap LFUDA", it errors, telling me it doesn't know 
what policy "heap" is. I've tried various quotings and plain "LFUDA" with no 
success. Is there a step I'm missing here?
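For reference, the exact lines I'm trying (spelling as in the default 
squid.conf, with the quotes removed) are:

```
# squid.conf -- the directive as I have it; quoting variations fail the same way
memory_replacement_policy heap LFUDA
cache_replacement_policy heap LFUDA
```

I do see that the heap policies require Squid to be built with 
--enable-removal-policies=heap (squid -v lists the configure options), so 
perhaps that's what I'm missing.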

3) In order to squeeze further performance out of pinning the cache in memory, 
I've been toying with the idea of creating a tmpfs, using it to hold the 
'disk' cache, and periodically (about every 15 minutes) syncing the cache back 
to a physical disk. If the system restarts, I'll just remount the tmpfs and 
copy the cache back into it before starting squid again. Does anyone have 
experience with, or comments on, this procedure, and whether it is better or 
worse than letting the Linux buffer cache handle pinning the objects in 
memory? My impression is that this would be preferable, given the possibility 
of the buffers being reclaimed for other files by another process (assuming 
that squid is the only dedicated service on the server).
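Concretely, the sketch I have in mind looks something like this. The paths, 
the tmpfs size, and the rsync-based mirroring are all placeholders and 
assumptions on my part, not a tested setup:

```shell
#!/bin/sh
# Sketch: keep squid's cache_dir on a tmpfs and periodically mirror it
# to a physical disk so the cache survives a reboot. Paths and the
# tmpfs size are placeholders.
CACHE_TMPFS=/usr/local/squid/cache        # squid's cache_dir, mounted as tmpfs
CACHE_DISK=/var/spool/squid-cache-backup  # persistent copy on real disk

# One-time setup (as root) before starting squid, e.g. in an init script:
#   mount -t tmpfs -o size=12g tmpfs "$CACHE_TMPFS"
#   rsync -a --delete "$CACHE_DISK/" "$CACHE_TMPFS/"   # restore after reboot

sync_cache() {
    # Mirror src into dst; --delete drops objects squid has since unlinked.
    rsync -a --delete "$1/" "$2/"
}

# Intended to run from cron roughly every 15 minutes:
if [ -d "$CACHE_TMPFS" ] && [ -d "$CACHE_DISK" ]; then
    sync_cache "$CACHE_TMPFS" "$CACHE_DISK"
fi
```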

Thanks,

Andrew Woodward
