Hello, Guys!
OK, now I have:
  maximum_object_size 200 MB

That means your cache will store files of up to 200MB each.


You can even store ISO files if your users download Linux ISOs. You'd just
need to raise that 200MB to, say, 800MB.
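If you did raise it, the change would be just this one line in squid.conf (800 MB here is only the example value from above):

  maximum_object_size 800 MB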

Yes, this would be nice if there were just one Linux distro, so that all the Linux people would download only, say, 8 Linux images :-)


Setting this to 800 MB would only be interesting with a cache bigger than, say, 500 GB. With a cache below 100 GB, and without knowing much about the traffic my users download, the LFUDA algorithm would not keep an ISO in the cache for very long :-)
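For reference, the policy itself is a single line in my squid.conf (heap LFUDA assumes Squid was built with the heap replacement policies enabled):

  cache_replacement_policy heap LFUDA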

You might also want to change your L1 directories; for a 90GB cache,
having only 16 L1 directories may be too few.
Thank you, you are right.
At the moment I have
cache_dir aufs /cache 92000 16 256
because I have 3 x 36GB SCSI disks behind a RAID0 (stripe) controller (so they appear as just one disk).
Yes, I know, RAID is bad, but a RAID 0 controller was the only controller I had until today :-)
So tomorrow I am going to connect these 3 disks to the new controller (no RAID), with these settings:


  cache_dir aufs /cache1 30000 60 256
  cache_dir aufs /cache2 30000 60 256
  cache_dir aufs /cache3 30000 60 256

(30000 MB is roughly 80% of a 36 GB disk, so that should be about right, right?)
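As a rough sanity check on the 60/256 layout, assuming the often-quoted ~13 KB average object size (only an assumption until I pull the real number from cachemgr):

  30000 MB / 13 KB   ~=  2.4 million objects
  60 x 256           =   15360 L2 directories
  2.4M / 15360       ~=  155 objects per L2 directory

which sounds comfortably low.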

I am just sorry about my current, full 92 GB cache - when I remove the three disks from the stripe, I will have to reformat them and start with empty caches (it took more than 3 days to fill up this 92 GB cache).

The only way to save some of the cached data is:
- change my current cache_dir aufs /cache 92000 16 256
to cache_dir aufs /cache 30000 16 256
and start Squid, which removes 62 GB from the cache.
Shut Squid down. Copy the entire cache to a temporary disk (yes, it will take hours, I have already tried; probably better to use tar, without compression).
Change the controller, format the three SCSI disks, mount them, and untar the backed-up cache onto one of them.


Change the configuration to the 3x cache_dir above and initialize with 'squid -z'.
Start Squid and ta-da, I have a 30 GB cache of data.
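Roughly, the whole dance would look something like this (paths and device names are only placeholders, and I have not tried it end to end yet):

  squid -k shutdown                              # stop Squid cleanly
  cd /cache && tar cf /mnt/backup/cache.tar .    # back up the trimmed 30 GB cache, no compression
  # swap the controller, make ext2 filesystems on the three disks,
  # mount them as /cache1, /cache2 and /cache3
  cd /cache1 && tar xf /mnt/backup/cache.tar     # restore the saved data onto the first disk
  squid -z                                       # create swap directories for the empty cache_dirs
  squid                                          # start Squid again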

Oh, and thank you for the tip about the L1 directories.


Just out of curiosity, what is your cache's filesystem? Ext3? reiserfs?
I had reiserfs (with noatime) but it seemed too slow. I changed it to ext3 (noatime), which was supposed to be quicker according to the "Squid: The Definitive Guide" book; there are benchmarks in it and ext3 shows much better throughput.
In the end, I decided my Squid box is going to be stable (good hardware, UPS) and settled on ext2 with noatime.
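For completeness, noatime is just a mount option; in /etc/fstab one cache disk would look roughly like this (the device name is a placeholder):

  /dev/sda5   /cache1   ext2   noatime   0   2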


Do you expect to have more _large_ files or more small files? I use
reiserfs (I anticipate caching mostly small files).
I do not know. I will have to get some stats somehow. Is this info stored somewhere in the Squid cachemgr info pages, by any chance?
Oh, sorry, you mentioned I can get it by querying the cache. I will have a look at it (and post it here with the other conclusions :-)
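If I remember right, a quick way to check is to ask the cache manager with squidclient, which ships with Squid:

  squidclient mgr:info | grep 'Mean Object Size'

That should show the average size of the objects currently in the cache.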



cache_mem 200 MB
How much memory do you have?

for a 90GB cache, and assuming 10MB RAM per 1GB cache, you better have
like 900MB RAM
1 GB. I am fine, as you can see from my top output:

 14:11:52 up 1 day,  5:35,  1 user,  load average: 2.04, 2.17, 2.25
45 processes: 41 sleeping, 4 running, 0 zombie, 0 stopped
CPU states:  25.9% user,  74.1% system,   0.0% nice,   0.0% idle
Mem:   1551284K total,  1520332K used,    30952K free,   109912K buffers
Swap:   979956K total,     6200K used,   973756K free,   478308K cached

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
13742 root      15   0  689M 687M  4020 R    90.0 45.4 377:58 squid
15497 root      16   0   940  940   748 R     8.4  0.0   0:02 top
13754 root       9   0  689M 687M  4020 S     0.4 45.4   0:20 squid


So my Squid should be in good shape :-), stable and running without stopping or crashing.
The "thousands" means approx. 3500 users at the moment.

OK.. and they're all accessing 1 cache? Wow.
Yes, but they are not active at the same time. My peaks are:
200 client HTTP requests/sec. Server in: 1.6MB/sec. Client out: 2MB/sec

Have a nice day,

If you post back the results, I sure will.
So have one, too. I will (post them :-)

Marji
