> Is there anything that you guys can suggest I do around the cache?
> Should I try a different store type? A different filesystem type
> perhaps?
>
>
> If your store has a configuration knob that effectively limits disk
> writing rate, then use it to limit that rate to avoid overflowing the
> queue.
>
> You can also consider using rock store that has an explicit write
> rate limiting option (but comes with other problems that may or may
> not affect your setup).
>
> Adding more physical disk spindles helps, of course.

Just to be clear, since you mentioned working with virtual services: by
"spindle" Alex refers to the individual physical HDD devices underneath
the whole VM storage setup. A "disk" at the VM level, or even at the RAID
level, might map to multiple or overlapping (shared) spindles.

Unless your AUFS/UFS/diskd cache_dirs are limited to being on one
"spindle" HDD, they can overload a shared HDD controller, and adding more
cache_dirs just makes that particular problem worse.

Amos

Hi Amos

At the time of my email my cache_dir was a 100GB disk assigned from the
same datastore as the VM. I have to say that I never had problems with
this setup. The storage is provided by an IBM XIV storage unit, so it's
not the slowest of storage devices.

After some discussions with a couple of people we thought that perhaps
the cache was too big, so I have now made it 4 x 15GB aufs cache_dirs.
Two disks are from the same datastore as the VM, and two are from a
datastore assigned to the cluster from an SVC (XIV behind it), with the
following in my squid.conf:

maximum_object_size_in_memory 256 KB
maximum_object_size 4096 MB
store_dir_select_algorithm least-load
cache_dir aufs /var/cache/squid-c1/ 10240 32 256
cache_dir aufs /var/cache/squid-c2/ 10240 32 256
cache_dir aufs /var/cache/squid-c3/ 10240 32 256
cache_dir aufs /var/cache/squid-c4/ 10240 32 256
cache_replacement_policy heap LFUDA
memory_replacement_policy lru

Do you think this is optimal?
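For reference, the rock-store write rate limiting Alex mentioned is set
per cache_dir via the max-swap-rate and swap-timeout options. A minimal
sketch only (the path, size, and numbers here are illustrative
assumptions, not recommendations; tune them for your own storage):

cache_dir rock /var/cache/squid-rock 10240 max-swap-rate=250 swap-timeout=350

max-swap-rate caps disk writes in swaps per second, and swap-timeout (in
milliseconds) makes Squid skip caching an object rather than queue a
write it expects to exceed that latency budget, which is how rock avoids
the overflowing-queue problem discussed above.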
_______________________________________________
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users