Re: [squid-users] Compute digest as content is written to cache

2012-08-12 Thread Drunkard Zhang
2012/8/12 Amos Jeffries squ...@treenet.co.nz:
 On 11/08/2012 10:21 p.m., Jack Bates wrote:

 On 11/08/12 12:30 AM, Amos Jeffries wrote:

 On 11/08/2012 7:22 p.m., Jack Bates wrote:

 I am interested in intercepting content as it is written to the cache,
 and computing a digest from the content. Do you know if this can be
 done in some kind of add on, or would it require a change to the core?


 What type of digest and to what purpose?


 I was thinking of using OpenSSL
 SHA256_Init()/SHA256_Update()/SHA256_Final(). The purpose I have in mind is
 to detect identical content at different URLs

 Given a response with a Location: ... header and a Digest: SHA-256=...
 header (such as from MirrorBrain), if the URL in the Location: ... header
 is not already cached but the Digest: SHA-256=... header matches the
 content at some other URL that is already cached, then I want to update the
 Location: ... header with the cached URL. I think this should redirect
 clients to mirrors that are already cached


 Small problem there. The digest is not calculated/known until the object is
 finished arriving. By then it is too late to attach new headers. And way too
 late to decide whether to ask that source for it.

Agreed. Storing multiple different sets of headers that share the same
content is really hard. Splitting headers and content and storing them
separately might work, with the headers kept as a one-to-many mapping
onto the content.
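
For reference, this is roughly what such a match would compare, using
standard tools (the URL is a placeholder; MirrorBrain's Digest header
carries the base64 of the binary SHA-256, per RFC 3230):

curl -sI http://example.org/some/file | grep -i '^Digest:'
curl -s  http://example.org/some/file | openssl dgst -sha256 -binary | base64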


Re: [squid-users] Squid memory usage

2012-08-03 Thread Drunkard Zhang
2012/8/3 Hugo Deprez hugo.dep...@gmail.com:
 Dear community,

 I am running squid3 on Linux Debian squeeze.(3.1.6).

 I encounter a suddenly a high memory usage on my virtual machine don't
 really know why.
 Looking at the cacti memory graph is showing a memory jump from 1.5 Gb
 to 4GB and then ther server started to swap.

 For information the virtual machine has 4Gb of RAM.

 Here is the settings of squid.conf :

 cache_dir ufs /var/spool/squid3 100 16 256
 cache_mem 100 MB


Can you try tuning these options?
memory_pools off
memory_pools_limit 1 MB
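
To see where the memory is actually going, the cache manager reports are
worth a look, e.g. (assuming squidclient is installed and squid listens
on port 3128):

squidclient -p 3128 mgr:info | grep -i memory
squidclient -p 3128 mgr:mem | head -40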


 hierarchy_stoplist cgi-bin ?
 refresh_pattern ^ftp:           1440    20%     10080
 refresh_pattern ^gopher:        1440    0%      1440
 refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
 refresh_pattern .               0       20%     4320


 my squid3 process is using: 81% of my RAM. So arround 3,2Gb of memory.

 proxy    25889  0.6 81.1 3937744 3299616 ?  S    Aug02   9:34 (squid) -YC -f /etc/squid3/squid.conf

 I am currently having arround 50 users using it.


 I did have a look at the FAQ
 (http://wiki.squid-cache.org/SquidFaq/SquidMemory#how-much-ram), but I
 didn't find any tips for my situation in it.


 Have you got any idea ? How can I troubleshoot this ?

 Thanks !



-- 
张绍文
gongfan...@gmail.com
zhan...@gwbnsh.net.cn
18601633785


Re: [squid-users] squid load balancing

2011-09-20 Thread Drunkard Zhang
2011/9/20 nikko...@gmail.com nikko...@gmail.com:
 Hello,
 I would like to implement a proxy server in load balancing with 2 or
 more server proxy.
 It is possible to do this?
 What can I use? LVS? Ultramonkey? Other?

keepalived uses LVS; it's a very simple but solid implementation. Adding
CARP on top gives you even more power.
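
As a rough sketch (addresses, port and health-check details are invented
for illustration), a keepalived virtual_server block balancing two squid
boxes could look like this:

virtual_server 192.168.1.100 3128 {
    delay_loop 6
    lb_algo wrr        # weighted round robin
    lb_kind DR         # direct routing
    protocol TCP

    real_server 192.168.1.11 3128 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 192.168.1.12 3128 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
}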


Re: [squid-users] squid performance tunning

2011-08-18 Thread Drunkard Zhang
 Median Service Times (seconds)  5 min    60 min:
        HTTP Requests (All):   0.00865  0.00865
        Cache Misses:          0.01035  0.01035
        Cache Hits:            0.0  0.0
        Near Hits:             0.00091  0.00091
        Not-Modified Replies:  0.0  0.0
        DNS Lookups:           0.0  0.0
        ICP Queries:           0.0  0.0

Response times look reasonable at the moment, but a capture taken at peak
time is what matters for performance tuning. Try "atop 1" at peak time;
this handy tool makes the bottleneck much clearer.

Try running multiple instances, which can improve throughput dramatically. The docs are here:
http://wiki.squid-cache.org/MultipleInstances
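
A minimal sketch of what must differ per instance (file names, port and
paths here are just examples):

# /etc/squid/squid-1.conf  (the second instance is identical apart from these)
http_port 3128
pid_filename /var/run/squid-1.pid
access_log /var/log/squid/access-1.log
cache_dir ufs /var/cache/squid-1 10240 16 256

# then start each instance against its own config file:
squid -f /etc/squid/squid-1.conf
squid -f /etc/squid/squid-2.conf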

CARP is another option for extreme performance demands.
http://wiki.squid-cache.org/ConfigExamples/ExtremeCarpFrontend


Re: [squid-users] squid performance tunning

2011-08-18 Thread Drunkard Zhang
2011/8/18 Chen Bangzhong bangzh...@gmail.com:
 My cached objects will expire after 10 minutes.

 Cache-Control:max-age=600

Static content like pictures should be cached longer, e.g. 1 day (max-age=86400).
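
For example, the origin could mark such images with a header like this
(illustrative; pick a lifetime that matches how often they really change):

Cache-Control: max-age=86400, public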

 I don't know why there are so many disk writes and there are so many
 objects on disk.

 In addtion, Disk hits as % of hit requests: 5min: 1.6%, 60min: 1.9%
 is very low.

Maybe it's caused by disk read timeouts. You are using too much disk
space; you can shrink it little by little until the disk busy percentage
drops to 80% or lower.
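
One way to watch that figure while shrinking (a sketch; device names and
sizes are examples):

iostat -x 5     # the %util column shows how busy each disk is
# then lower the cache_dir size step by step in squid.conf, e.g.
#   cache_dir aufs /cache 100000 16 256  ->  cache_dir aufs /cache 80000 16 256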

 Can I increase the cache_mem? or not use disk cache at all?

I use all the memory I can :-)


Re: [squid-users] squid performance tunning

2011-08-18 Thread Drunkard Zhang
2011/8/18 Amos Jeffries squ...@treenet.co.nz:
 On 18/08/11 19:40, Drunkard Zhang wrote:

 2011/8/18 Chen Bangzhong:

 My cached objects will expire after 10 minutes.

 Cache-Control:max-age=600

 Static content like pictures should cache longer, like 1 day, 86400.

 Could also be a whole year. If you control the origin website, set caching
 times as large as reasonably possible for each object. With revalidate
 settings relevant to its likely replacement needs. And always send a correct
 ETag.

 With those details Squid and other caches will take care of reducing caching
 times to suit the network and disk needs and updates/revalidation to suit
 your needs. So please set it large.


 I don't know why there are so many disk writes and there are so many
 objects on disk.

 All traffic goes through either RAM cache or if its bigger than
 maximum_object_size_in_memory will go through disks.

 From that info report ~60% of your traffic bytes are MISS responses. A large
 portion of that MISS traffic is likely not storable, so will be written to
 cache then discarded immediately. Squid is overall mostly-write with its
 disk behaviour.

 Likely your 10-minute age is affecting this in a big way. The cache will
 have a lot of storable object which are stale. Next request they will be
 fetched into memory, then replaced by a revalidation REFRESH (near-HIT)
 response, which writes new data back to disk later.


 In addtion, Disk hits as % of hit requests: 5min: 1.6%, 60min: 1.9%
 is very low.

 Maybe cause by disk read timeout. You used too much disk space, you
 can shrink it a little by a little, until disk busy percentage reduced
 to 80% or lower.

 Your Squid version is one which will promote HIT objects from disk and
 service repeat HITs from memory. Which reducing that disk-hit % a lot more
 than earlier squid versions would show it as.


 Can I increase the cache_mem? or not use disk cache at all?

 I use all the memory I can :-)

 Indeed, the more the merrier. Unless it is swapping under high load. If that
 happens Squid speed goes terrible almost immediately.

Actually I disabled swap entirely, and use a script that restarts the
squid process immediately whenever the OS kills it. The OS will kill
squid on OOM (out of memory).
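
A minimal sketch of such a watchdog (paths are examples; a real setup
would normally use the distro's service supervision instead):

#!/bin/sh
# restart squid whenever the process disappears (e.g. killed by the OOM killer)
while true; do
    pgrep -x squid >/dev/null || squid -f /etc/squid/squid.conf
    sleep 10
done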


Re: [squid-users] Resize coss online

2011-01-12 Thread Drunkard Zhang
2011/1/12 Amos Jeffries squ...@treenet.co.nz:
 On 12/01/11 20:12, Drunkard Zhang wrote:

 I'm testing squid-2.7STABLE9 + COSS + ext4 + SSD now.

 When enlarge the coss, eg: from 10240 to 20480, I can see success in
 cache.log, but the coss file on disk did not change, after 3 times of
 squid -k reconfigure the coss file size changed.

 How long did you wait? it could be that Squid was doing a long resize in the
 background. This is a guess supported by the change actually happening.

 But some times later, the squid process exited, with nothing left in
 cache.log. Just once in cache.log I found:

 2011/01/12 11:10:04| assertion failed: coss/store_io_coss.c:215:
 cs->curstripe < (cs->numstripes - 1)

 So, I wondering if resize of coss online is supported perfectly, that
 we can use it without anxiety.
 BTW, is shrink of coss filesystem is supported? If it is, do I have to
 do it online, or offline? By online, I means operates without restart
 squid process, and the offline means opposite.

 I think you need to try offline change. Via a stop, squid -z and restart
 sequence.

Thanks, maybe I was too hasty.
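
Spelled out, that offline sequence is roughly (using the cache_dir from
the original post; exact paths depend on the setup):

squid -k shutdown                          # stop squid cleanly
# edit squid.conf with the new size, e.g.
#   cache_dir coss /mnt/c/72 20480 max-size=524288 block-size=4096
squid -z                                   # (re)initialise the cache_dir structures
squid -f /etc/squid/squid.conf             # start squid again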

 With squid-2.7STABLE9 + COSS + btrfs + SSD, reload can cause process
 to stuck, with 100% CPU usage per squid process. I caught these info
 in cache.log once:

 2011/01/09 14:56:44| Killing RunCache, pid 59502
 2011/01/09 14:56:44| kill 59502: (1) Operation not permitted

 And kill of the process will make the process into a zombie. The
 defunct process still using 100% CPU, which wasn't show in ps.

 squid75 ~ # ps -eo pid,%cpu,cmd --sort=c
  59343  2.7 [squid] defunct
  59505  2.8 [squid] defunct
  59380  2.9 [squid] defunct
  59474  2.9 [squid] defunct
 42558  3.4 [btrfs-endio-1]
 43717  3.7 (squid) -YC -D -f squid73.conf
 43925  4.0 (squid) -YC -D -f squid74.conf
 42520  4.3 (squid) -YC -D -f squid75.conf
 51532  4.4 (squid) -YC -D -f squid77.conf
 18014  5.9 (squid) -YC -D -f squid72.conf

 FWIW;
  I think you will very much want to play with the squid-3.2 RockStore code
 being written by The Measurement Factory guys. Contact Alex for ways to get
 a current working version.

We tested squid-3.2, but using 3.2 makes no difference to what we are
doing now. Its SMP mode acts very similarly to our multi-instance
setup... but we're still looking forward to it.


[squid-users] Resize coss online

2011-01-11 Thread Drunkard Zhang
I'm testing squid-2.7STABLE9 + COSS + ext4 + SSD now.

When enlarging the coss dir, e.g. from 10240 to 20480, I can see success
messages in cache.log, but the coss file on disk does not change; only
after 3 runs of "squid -k reconfigure" did the coss file size change.
But some time later the squid process exited, leaving nothing in
cache.log. Just once I found this in cache.log:

2011/01/12 11:10:04| assertion failed: coss/store_io_coss.c:215:
cs->curstripe < (cs->numstripes - 1)

So I'm wondering whether online resizing of coss is fully supported, so
that we can use it without worry. BTW, is shrinking a coss filesystem
supported? If so, do I have to do it online or offline? By online I mean
operating without restarting the squid process; offline means the
opposite.


With squid-2.7STABLE9 + COSS + btrfs + SSD, a reload can cause the
process to get stuck at 100% CPU usage per squid process. I caught this
info in cache.log once:

2011/01/09 14:56:44| Killing RunCache, pid 59502
2011/01/09 14:56:44| kill 59502: (1) Operation not permitted

And killing the process turns it into a zombie. The defunct process keeps
using 100% CPU, which doesn't show up in ps.

squid75 ~ # ps -eo pid,%cpu,cmd --sort=c
59343  2.7 [squid] defunct
59505  2.8 [squid] defunct
59380  2.9 [squid] defunct
59474  2.9 [squid] defunct
42558  3.4 [btrfs-endio-1]
43717  3.7 (squid) -YC -D -f squid73.conf
43925  4.0 (squid) -YC -D -f squid74.conf
42520  4.3 (squid) -YC -D -f squid75.conf
51532  4.4 (squid) -YC -D -f squid77.conf
18014  5.9 (squid) -YC -D -f squid72.conf
19465  7.9 (squid) -YC -D -f squid76.conf
59833  8.9 (squid) -YC -D -f squid79.conf
59803  9.4 (squid) -YC -D -f squid78.conf
59744  9.7 [squid] defunct
 4511 10.8 (squid) -YC -D -f squid81.conf
 4563 10.9 (squid) -YC -D -f squid85.conf
59705 12.0 [squid] defunct
 4524 12.8 (squid) -YC -D -f squid82.conf
 4550 12.9 (squid) -YC -D -f squid84.conf
 4537 13.2 (squid) -YC -D -f squid83.conf
 4498 29.9 (squid) -YC -D -f squid80.conf
squid75 ~ # ps auwx | grep -e defunct -e COMMAND$
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
squid    59343  2.8  0.0      0     0 ?        Zl   Jan07  83:47 [squid] defunct
squid    59380  3.0  0.0      0     0 ?        ZNl  Jan07  88:37 [squid] defunct
squid    59474  3.0  0.0      0     0 ?        ZNl  Jan07  89:57 [squid] defunct
squid    59505  2.9  0.0      0     0 ?        ZNl  Jan07  86:20 [squid] defunct
squid    59705 12.0  0.0      0     0 ?        Zl   Jan07 355:56 [squid] defunct
squid    59744  9.7  0.0      0     0 ?        ZNl  Jan07 288:11 [squid] defunct


Re: [squid-users] Performance Extremely squid configuration advice

2011-01-07 Thread Drunkard Zhang
2011/1/7 Amos Jeffries squ...@treenet.co.nz:
 On 07/01/11 19:08, Drunkard Zhang wrote:

 In order to get squid server 400M+ traffic, I did these:
 1. Memory only
 IO bottleneck is too hard to avoid at high traffic, so I did not use
 harddisk, use only memory for HTTP cache. 32GB or 64GB memory per box
 works good.

 NP: The problem in squid-2 is large objects in memory. Though the more
 objects you have cached the slower the index lookups (very, very minor
 impact).


With 6-8GB of memory there are about 320K objects per instance, so no
significant delay should result.


 2. Disable useless acl
 I did not use any acl, even default acls:
 acl SSL_ports port 443
 acl Safe_ports port 80          # http
 acl Safe_ports port 21          # ftp
 acl Safe_ports port 443         # https
 acl Safe_ports port 70          # gopher
 acl Safe_ports port 210         # wais
 acl Safe_ports port 1025-65535  # unregistered ports
 acl Safe_ports port 280         # http-mgmt
 acl Safe_ports port 488         # gss-http
 acl Safe_ports port 591         # filemaker
 acl Safe_ports port 777         # multiling http
 acl Safe_ports port 901         # SWAT
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports

 squid itself do not do any acls, security is ensured by other layers,
 like iptables or acls on routers.

 Having the routers etc assemble the packets and parse the HTTP-layer
 protocol to find these details may be a larger bottleneck than testing for
 them inside Squid where the parsing has to be done a second time anyway to
 pass the request on.


We only cache HTTP on TCP port 80, and the incoming source IPs are under
our control, so iptables should be fine.

 Note that the default port and method ACL in Squid are validating on the
 HTTP header content URLs not the packet destination port.


 3. refresh_pattern, mainly cache for pictures
 Make squid cache as long as it can, so it looks likes this:
 refresh_pattern -i \.(jpg|jpeg|gif|png|swf|htm|html|bmp)(\?.*)?$
 21600 100% 21600  reload-into-ims ignore-reload ignore-no-cache
 ignore-auth ignore-private

 4. multi-instance
 I can't get single squid process runs over 200M, so multi-instance
 make perfect sense.

 Congratulations, most can't get Squid to go over 50MBps per instance.

 Both CARP frontend and backend (for store HTTP files) need to be
 multi-instanced. Frontend configuration is here:
 http://wiki.squid-cache.org/ConfigExamples/ExtremeCarpFrontend

 I heard that squid is still can't process huge memory properly, so I
 splited big memory into 6-8GB per instance, which listens at ports
 lower than 80. And on a box with 32GB memory CARP frontend configs
 like this:

 cache_peer 192.168.1.73 parent 76 0 carp name=73-76 proxy-only
 cache_peer 192.168.1.73 parent 77 0 carp name=73-77 proxy-only
 cache_peer 192.168.1.73 parent 78 0 carp name=73-78 proxy-only
 cache_peer 192.168.1.73 parent 79 0 carp name=73-79 proxy-only

 5. CARP frontend - cache_mem 0 MB
 I used to use cache_mem 0 MB, time flies, I think that files smaller
 than 1.5KB would be waste if GET from CARP backend, am I right? I use
 these now:

 cache_mem 5 MB
 maximum_object_size_in_memory 1.5 KB

 The best value here differs on every network so we can't answer your
 question with details.

Here's my reasoning: doing a three-way TCP handshake only to transfer
data that fits in ONE packet is silly, so let such objects be stored
locally. From my observation there are no more than 500 StoreEntries per
CARP frontend.

 Log analysis of live traffic will show you the amount of objects your Squid
 are handling in each size bracket. That will determine where the best place
 to set this limit at to reduce the lag on small items versus your available
 cache_mem memory.



Re: [squid-users] Performance Extremely squid configuration advice

2011-01-07 Thread Drunkard Zhang
2011/1/8 Mohsen Saeedi mohsen.sae...@gmail.com:
 and now which filesystem has better performance. aufs or diskd? on the
 SAS hdd for example.

Neither of them; we are using coss on SATA. coss on SSD is under testing,
and it still looks good.

 On Fri, Jan 7, 2011 at 7:56 PM, Drunkard Zhang gongfan...@gmail.com wrote:

 2011/1/7 Amos Jeffries squ...@treenet.co.nz:
  On 07/01/11 19:08, Drunkard Zhang wrote:
 
  In order to get squid server 400M+ traffic, I did these:
  1. Memory only
  IO bottleneck is too hard to avoid at high traffic, so I did not use
  harddisk, use only memory for HTTP cache. 32GB or 64GB memory per box
  works good.
 
  NP: The problem in squid-2 is large objects in memory. Though the more
  objects you have cached the slower the index lookups (very, very minor
  impact).
 

 With 6-8GB memory, there's about 320K objects per instance, so no
 significant delay would yield.

 
  2. Disable useless acl
  I did not use any acl, even default acls:
  acl SSL_ports port 443
  acl Safe_ports port 80          # http
  acl Safe_ports port 21          # ftp
  acl Safe_ports port 443         # https
  acl Safe_ports port 70          # gopher
  acl Safe_ports port 210         # wais
  acl Safe_ports port 1025-65535  # unregistered ports
  acl Safe_ports port 280         # http-mgmt
  acl Safe_ports port 488         # gss-http
  acl Safe_ports port 591         # filemaker
  acl Safe_ports port 777         # multiling http
  acl Safe_ports port 901         # SWAT
  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
 
  squid itself do not do any acls, security is ensured by other layers,
  like iptables or acls on routers.
 
  Having the routers etc assemble the packets and parse the HTTP-layer
  protocol to find these details may be a larger bottleneck than testing for
  them inside Squid where the parsing has to be done a second time anyway to
  pass the request on.
 

 We only do http cache on tcp port 80, and the incoming source IPs is
 controllable, so iptables should be OK.

  Note that the default port and method ACL in Squid are validating on the
  HTTP header content URLs not the packet destination port.
 
 
  3. refresh_pattern, mainly cache for pictures
  Make squid cache as long as it can, so it looks likes this:
  refresh_pattern -i \.(jpg|jpeg|gif|png|swf|htm|html|bmp)(\?.*)?$
  21600 100% 21600  reload-into-ims ignore-reload ignore-no-cache
  ignore-auth ignore-private
 
  4. multi-instance
  I can't get single squid process runs over 200M, so multi-instance
  make perfect sense.
 
  Congratulations, most can't get Squid to go over 50MBps per instance.
 
  Both CARP frontend and backend (for store HTTP files) need to be
  multi-instanced. Frontend configuration is here:
  http://wiki.squid-cache.org/ConfigExamples/ExtremeCarpFrontend
 
  I heard that squid is still can't process huge memory properly, so I
  splited big memory into 6-8GB per instance, which listens at ports
  lower than 80. And on a box with 32GB memory CARP frontend configs
  like this:
 
  cache_peer 192.168.1.73 parent 76 0 carp name=73-76 proxy-only
  cache_peer 192.168.1.73 parent 77 0 carp name=73-77 proxy-only
  cache_peer 192.168.1.73 parent 78 0 carp name=73-78 proxy-only
  cache_peer 192.168.1.73 parent 79 0 carp name=73-79 proxy-only
 
  5. CARP frontend - cache_mem 0 MB
  I used to use cache_mem 0 MB, time flies, I think that files smaller
  than 1.5KB would be waste if GET from CARP backend, am I right? I use
  these now:
 
  cache_mem 5 MB
  maximum_object_size_in_memory 1.5 KB
 
  The best value here differs on every network so we can't answer your
  question with details.

 Here's my idea: did 3 times of tcp hand shake, and transfered data in
 ONE packet is silly, so let it store locally. According to my
 observation, no more than 500 StoreEntries per CARP frontend.

  Log analysis of live traffic will show you the amount of objects your Squid
  are handling in each size bracket. That will determine where the best place
  to set this limit at to reduce the lag on small items versus your available
  cache_mem memory.
 


Re: [squid-users] Performance Extremely squid configuration advice

2011-01-07 Thread Drunkard Zhang
2011/1/8 Mohsen Saeedi mohsen.sae...@gmail.com:
 I know about coss. it's great. but i have squid 3.1 and i think it's
 unstable in 3.x version. that's correct?

I need the null store type for a memory-only cache, which is not provided
in squid-3, so it's all squid-2.x in our production environment.
Of course, we tested every squid-3.x release; there were many bugs and
poor performance compared to squid-2.x. We tested squid-2.HEAD too, and
it's worth trying.

aufs behaves very badly under high pressure: with 8GB of memory and
minimal SATA aufs space per instance, it's still hard to get above
180Mbps.

I haven't tried diskd yet.

 On Fri, Jan 7, 2011 at 8:05 PM, Drunkard Zhang gongfan...@gmail.com wrote:
 2011/1/8 Mohsen Saeedi mohsen.sae...@gmail.com:
 and now which filesystem has better performance. aufs or diskd? on the
 SAS hdd for example.

 Neither of them, we are using coss on SATA. And coss on SSD is under
 testing, looks good still.

 On Fri, Jan 7, 2011 at 7:56 PM, Drunkard Zhang gongfan...@gmail.com wrote:

 2011/1/7 Amos Jeffries squ...@treenet.co.nz:
  On 07/01/11 19:08, Drunkard Zhang wrote:
 
  In order to get squid server 400M+ traffic, I did these:
  1. Memory only
  IO bottleneck is too hard to avoid at high traffic, so I did not use
  harddisk, use only memory for HTTP cache. 32GB or 64GB memory per box
  works good.
 
  NP: The problem in squid-2 is large objects in memory. Though the more
  objects you have cached the slower the index lookups (very, very minor
  impact).
 

 With 6-8GB memory, there's about 320K objects per instance, so no
 significant delay would yield.

 
  2. Disable useless acl
  I did not use any acl, even default acls:
  acl SSL_ports port 443
  acl Safe_ports port 80          # http
  acl Safe_ports port 21          # ftp
  acl Safe_ports port 443         # https
  acl Safe_ports port 70          # gopher
  acl Safe_ports port 210         # wais
  acl Safe_ports port 1025-65535  # unregistered ports
  acl Safe_ports port 280         # http-mgmt
  acl Safe_ports port 488         # gss-http
  acl Safe_ports port 591         # filemaker
  acl Safe_ports port 777         # multiling http
  acl Safe_ports port 901         # SWAT
  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
 
  squid itself do not do any acls, security is ensured by other layers,
  like iptables or acls on routers.
 
  Having the routers etc assemble the packets and parse the HTTP-layer
  protocol to find these details may be a larger bottleneck than testing for
  them inside Squid where the parsing has to be done a second time anyway to
  pass the request on.
 

 We only do http cache on tcp port 80, and the incoming source IPs is
 controllable, so iptables should be OK.

  Note that the default port and method ACL in Squid are validating on the
  HTTP header content URLs not the packet destination port.
 
 
  3. refresh_pattern, mainly cache for pictures
  Make squid cache as long as it can, so it looks likes this:
  refresh_pattern -i \.(jpg|jpeg|gif|png|swf|htm|html|bmp)(\?.*)?$
  21600 100% 21600  reload-into-ims ignore-reload ignore-no-cache
  ignore-auth ignore-private
 
  4. multi-instance
  I can't get single squid process runs over 200M, so multi-instance
  make perfect sense.
 
  Congratulations, most can't get Squid to go over 50MBps per instance.
 
  Both CARP frontend and backend (for store HTTP files) need to be
  multi-instanced. Frontend configuration is here:
  http://wiki.squid-cache.org/ConfigExamples/ExtremeCarpFrontend
 
  I heard that squid is still can't process huge memory properly, so I
  splited big memory into 6-8GB per instance, which listens at ports
  lower than 80. And on a box with 32GB memory CARP frontend configs
  like this:
 
  cache_peer 192.168.1.73 parent 76 0 carp name=73-76 proxy-only
  cache_peer 192.168.1.73 parent 77 0 carp name=73-77 proxy-only
  cache_peer 192.168.1.73 parent 78 0 carp name=73-78 proxy-only
  cache_peer 192.168.1.73 parent 79 0 carp name=73-79 proxy-only
 
  5. CARP frontend - cache_mem 0 MB
  I used to use cache_mem 0 MB, time flies, I think that files smaller
  than 1.5KB would be waste if GET from CARP backend, am I right? I use
  these now:
 
  cache_mem 5 MB
  maximum_object_size_in_memory 1.5 KB
 
  The best value here differs on every network so we can't answer your
  question with details.

 Here's my idea: did 3 times of tcp hand shake, and transfered data in
 ONE packet is silly, so let it store locally. According to my
 observation, no more than 500 StoreEntries per CARP frontend.

   Log analysis of live traffic will show you the amount of objects your Squid
   are handling in each size bracket. That will determine where the best place
   to set this limit at to reduce the lag on small items versus your available
   cache_mem memory.
 




 --
 Seyyed Mohsen Saeedi
 سید محسن سعیدی




-- 
张绍文
gongfan...@gmail.com
zhan...@gwbnsh.net.cn
18601633785


Re: [squid-users] Performance Extremely squid configuration advice

2011-01-07 Thread Drunkard Zhang
2011/1/8 Amos Jeffries squ...@treenet.co.nz:
 On 08/01/11 06:22, Drunkard Zhang wrote:

 2011/1/8 Mohsen Saeedimohsen.sae...@gmail.com:

 I know about coss. it's great. but i have squid 3.1 and i think it's
 unstable in 3.x version. that's correct?

 I need null for memory-only cache, which is not provided in squid-3,
 so it's all squid-2.x in product environment.

 The memory cache has been made default in Squid-3. Removing all cache_dir
 entries moves squid-3 to the same operational state as squid-2 with a fake
 null directory.

My fault :-). Thanks.
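
So for squid-3 a memory-only setup is simply a config with no cache_dir
line at all, e.g. (a sketch; sizes are per-instance examples):

# squid-3: omitting cache_dir entirely leaves only the memory cache
cache_mem 6500 MB
maximum_object_size_in_memory 4096 KB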

 Of cource, we tested every squid-3.x, many bugs and poor performance
 to squid-2.x. We tested squid-2.HEAD too, it's worth to try.

 Which 3.x? We just had reports that 3.1.10 is faster than 2.7.STABLE9 (in
 RPS). Prior to that it has been slower. If there are any bugs you are aware
 of that are not already reported or fixed in bugzilla please report. Also,
 please add your additional knowledge to the bugzilla entries to aid a faster
 fix.


 aufs acts very bad under high presure, with 8GB memory and least SATA
 aufs space per instance, it's still too hard to over 180Mbps.

 I haven't try diskd yet.


 Thanks for this.




[squid-users] Performance Extremely squid configuration advice

2011-01-06 Thread Drunkard Zhang
In order to get a squid server to push 400M+ of traffic, I did the following:
1. Memory only
The IO bottleneck is too hard to avoid at high traffic, so I did not use
hard disks; I use only memory for the HTTP cache. 32GB or 64GB of memory
per box works well.
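
Per instance the memory-only part of the config boils down to something
like this (cache_mem is sized to the box; values are illustrative):

cache_dir null /tmp
cache_mem 6500 MB
maximum_object_size_in_memory 4096 KB
memory_replacement_policy heap LFUDA
memory_pools off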

2. Disable useless ACLs
I did not use any ACLs, not even the default ones:
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 901 # SWAT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

Squid itself does not do any ACL checks; security is ensured by other
layers, like iptables or ACLs on the routers.
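
For illustration only (addresses are placeholders), the kind of rule that
outer layer can apply:

# accept proxy traffic only from the known client networks, drop the rest
iptables -A INPUT -p tcp --dport 80 -s 192.168.0.0/16 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j DROP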

3. refresh_pattern, mainly caching pictures
Make squid cache for as long as it can, so it looks like this:
refresh_pattern -i \.(jpg|jpeg|gif|png|swf|htm|html|bmp)(\?.*)?$
21600 100% 21600  reload-into-ims ignore-reload ignore-no-cache
ignore-auth ignore-private

4. Multiple instances
I can't get a single squid process above 200M, so multiple instances make
perfect sense.
Both the CARP frontend and the backends (which store the HTTP objects)
need to be multi-instanced. Frontend configuration is here:
http://wiki.squid-cache.org/ConfigExamples/ExtremeCarpFrontend

I heard that squid still can't handle huge amounts of memory properly, so
I split the memory into 6-8GB per instance, with each instance listening
on a port below 80. On a box with 32GB of memory the CARP frontend is
configured like this:

cache_peer 192.168.1.73 parent 76 0 carp name=73-76 proxy-only
cache_peer 192.168.1.73 parent 77 0 carp name=73-77 proxy-only
cache_peer 192.168.1.73 parent 78 0 carp name=73-78 proxy-only
cache_peer 192.168.1.73 parent 79 0 carp name=73-79 proxy-only

5. CARP frontend - cache_mem 0 MB
I used to use cache_mem 0 MB. Over time I came to think that files
smaller than 1.5KB are a waste to GET from a CARP backend, am I right? So
I use this now:

cache_mem 5 MB
maximum_object_size_in_memory 1.5 KB

6. Separate LAN and WAN
Again, to split the load across NICs. Use the LAN for clients and CARP
interaction, and the WAN to fetch content from the internet.
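
In squid.conf terms this split can be expressed roughly as (addresses are
placeholders):

# listen on the LAN address, source outbound fetches from the WAN address
http_port 192.168.1.73:80
tcp_outgoing_address 203.0.113.73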

7. Use the official NIC driver
Sometimes the chip vendor's official driver behaves better than the
built-in one, so it's worth trying.

8. Based on Gentoo
Using Gentoo, we can strip out as much unneeded functionality as possible
and make the cache system leaner and faster.

9. Strip unneeded compile-time and runtime options
Proper CFLAGS and LDFLAGS are needed; here's one good doc:
http://en.gentoo-wiki.com/wiki/Safe_Cflags

~ # squid -v
Squid Cache: Version 2.7.STABLE9
configure options:  '--prefix=/usr' '--build=x86_64-pc-linux-gnu'
'--host=x86_64-pc-linux-gnu' '--mandir=/usr/share/man'
'--infodir=/usr/share/info' '--datadir=/usr/share' '--sysconfdir=/etc'
'--localstatedir=/var/lib' '--libdir=/usr/lib64'
'--sysconfdir=/etc/squid' '--libexecdir=/usr/libexec/squid'
'--localstatedir=/var' '--datadir=/usr/share/squid' '--disable-auth'
'--disable-delay-pools' '--enable-removal-policies=lru,heap'
'--enable-ident-lookups' '--enable-useragent-log'
'--enable-cache-digests' '--enable-referer-log'
'--enable-http-violations' '--with-pthreads' '--with-large-files'
'--enable-wccpv2' '--enable-htcp' '--enable-carp' '--enable-icmp'
'--enable-follow-x-forwarded-for' '--enable-x-accelerator-vary'
'--enable-kill-parent-hack' '--enable-cachemgr-hostname=squid37'
'--enable-err-languages=English'
'--enable-default-err-language=English' '--with-maxfd=65535'
'--without-libcap' '--disable-snmp' '--disable-ssl'
'--enable-storeio=ufs,diskd,coss,aufs,null' '--enable-async-io'
'--enable-linux-netfilter' '--disable-linux-tproxy' '--enable-epoll'
'build_alias=x86_64-pc-linux-gnu' 'host_alias=x86_64-pc-linux-gnu'
'CC=x86_64-pc-linux-gnu-gcc' 'CFLAGS=-march=barcelona -mtune=barcelona
-O2 -pipe' 'LDFLAGS=-Wl,-O1 -Wl,--as-needed'

10. sysctl tuning
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_syn_retries = 3
net.ipv4.tcp_synack_retries = 3
net.ipv4.tcp_max_syn_backlog = 4096
net.core.netdev_max_backlog = 4096
net.ipv4.ip_local_port_range = 1024 65534
net.netfilter.nf_conntrack_max = 1048576
net.netfilter.nf_conntrack_tcp_timeout_established = 1000
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_sack = 0
net.ipv4.tcp_low_latency = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_time = 1800
net.ipv4.tcp_max_orphans = 16384
net.ipv4.tcp_orphan_retries = 1
net.ipv4.ipfrag_high_thresh = 524288
net.ipv4.ipfrag_low_thresh = 262144
kernel.pid_max = 65535
vm.swappiness = 1
net.ipv4.tcp_mem = 6085248 8113664 12170496
net.ipv4.tcp_wmem = 4096 65536 8388608


[squid-users] squid with coss can not write to SSD

2011-01-06 Thread Drunkard Zhang
My configuration:
cache_dir coss /mnt/c/72 10240 max-size=524288 max-stripe-waste=32768
block-size=4096 maxfullbufs=10
cache_swap_log /mnt/s/%s

/mnt/c/72 is a file on btrfs + SSD. The btrfs filesystem was created with
mkfs.btrfs /dev/sdb1 /dev/sdc1, so it spans two SSDs.

But squid did not write anything to disk; here's the info from cache.log:

2011/01/07 14:36:42| WARNING: failed to unpack meta data
2011/01/07 14:36:42| storeCossWriteMemBufDone: got failure (-6)
2011/01/07 14:36:42| FD 9, size=1048576
2011/01/07 14:36:42| WARNING: failed to unpack meta data

Why? Can squid not work with btrfs? Or with SSDs? Or is it the way I'm using it?


Re: [squid-users] how much traffic can squid handle?

2010-08-12 Thread Drunkard Zhang
2010/8/12 Matus UHLAR - fantomas uh...@fantomas.sk:
 On Wed, Aug 11, 2010 at 1:09 AM, Drunkard Zhang gongfan...@gmail.com wrote:
  With same multi-squid-instance configuration, same Linux distro, and
  different hardware, AMD Opteron gets more balanced CPU usage, while on
  Intel Xeon just one CPU core running out, others still too idle, about
  5%-15%. When that core runs out, simple TCP SYN check on service
  failed occasional.
 
  I'm still trying to get this problem resolved...:-( Now I'm trying
  linux-2.6.35 kernel :-).

 On 11.08.10 19:24, Jose Ildefonso Camargo Tolosa wrote:
 So, the only logical conclusion is: AMD rules! :)

 Actually, since there's different hardware, I can even thing of APIC
 problems on the intel machine...


I've been trying the new 2.6.35 kernel since yesterday, and it looks like
CPU usage is balanced better.
Has anyone done the same thing? Or is it just an illusion born of my love
for Linux? :-(

I read this: 
http://www.linuxfordevices.com/c/a/News/Linux-2635-and-early-days-of-Linux/

 (I actually like
 AMD, but I have never faced this kind of problem with Xeon, or maybe
 I'm just not paying attention, will make a few test myself, and see
 how it ends).

I like AMD too, but the hardware is much harder to obtain than Intel's in
China... Tragic!


Re: [squid-users] how much traffic can squid handle?

2010-08-10 Thread Drunkard Zhang
2010/8/11 Jose Ildefonso Camargo Tolosa ildefonso.cama...@gmail.com:
 Hi!

 On Tue, Aug 10, 2010 at 12:27 AM, Stand H hstan...@yahoo.com wrote:


 --- On Mon, 8/9/10, Jose Ildefonso Camargo Tolosa 
 ildefonso.cama...@gmail.com wrote:

 From: Jose Ildefonso Camargo Tolosa ildefonso.cama...@gmail.com
 Subject: Re: [squid-users] how much traffic can squid handle?
 To: Drunkard Zhang gongfan...@gmail.com
 Cc: Stand H hstan...@yahoo.com, squid-users@squid-cache.org
 Date: Monday, August 9, 2010, 6:26 PM
 Hi!

 On Mon, Aug 9, 2010 at 8:18 PM, Drunkard Zhang gongfan...@gmail.com
 wrote:

 
  BTW, may bonding of multiple NICs helps on too many
 interrupts.
 

 Or maybe just a good NIC, or a GOOD NIC + bonding :)


 Can you recommend a good NIC?

 Most Intel have behaved really well with me.  As for Broadcom: bad
 luck, I had to disable most of the hardware assistance, and thus:
 add more load to the server, I'm currently on a avoid Broadcom
 policy, but that could change in the future (I'll try them again
 sometime).

I hit a bottleneck with the forcedeth driver shipped for the nVidia MCP55
chipset; I haven't had problems with Intel e1000e yet, but CPU usage
across Intel cores balances badly. On the other hand, CPU time across AMD
Opteron cores balances very well with the same configuration, so I'm
confused about this.


Re: [squid-users] how much traffic can squid handle?

2010-08-10 Thread Drunkard Zhang
2010/8/11 Jose Ildefonso Camargo Tolosa ildefonso.cama...@gmail.com:
 On Tue, Aug 10, 2010 at 10:19 PM, Drunkard Zhang gongfan...@gmail.com wrote:
 2010/8/11 Jose Ildefonso Camargo Tolosa ildefonso.cama...@gmail.com:
 Hi!

 On Tue, Aug 10, 2010 at 12:27 AM, Stand H hstan...@yahoo.com wrote:


 --- On Mon, 8/9/10, Jose Ildefonso Camargo Tolosa 
 ildefonso.cama...@gmail.com wrote:

 From: Jose Ildefonso Camargo Tolosa ildefonso.cama...@gmail.com
 Subject: Re: [squid-users] how much traffic can squid handle?
 To: Drunkard Zhang gongfan...@gmail.com
 Cc: Stand H hstan...@yahoo.com, squid-users@squid-cache.org
 Date: Monday, August 9, 2010, 6:26 PM
 Hi!

 On Mon, Aug 9, 2010 at 8:18 PM, Drunkard Zhang gongfan...@gmail.com
 wrote:

 
  BTW, may bonding of multiple NICs helps on too many
 interrupts.
 

 Or maybe just a good NIC, or a GOOD NIC + bonding :)


 Can you recommend a good NIC?

 Most Intel have behaved really well with me.  As for Broadcom: bad
 luck, I had to disable most of the hardware assistance, and thus:
 add more load to the server, I'm currently on a avoid Broadcom
 policy, but that could change in the future (I'll try them again
 sometime).

 I got bottle on forcedeth shipped with nVidia MCP55 chipset, not got
 problem on Intel e1000e yet, but CPU usage on Intel cores balanced
 badly. On the other hand CPU time usage on AMD Opteron cores balanced
 very good with same configuration, so confuse about this.


 Yeah, e1000 have worked very well for me.

 Ok, so, you saw that CPU usage on Intel tends to be inclined to one
 of the cores? and on AMD it gets more balanced?

 Also, are talking about network related load here? or just about any
 processes running on Intel multi-core and AMD multicore.

With the same multi-squid-instance configuration, the same Linux distro,
and different hardware, AMD Opteron gets more balanced CPU usage, while
on an Intel Xeon just one CPU core maxes out and the others stay mostly
idle, around 5%-15%. When that core maxes out, even a simple TCP SYN
check on the service fails occasionally.

I'm still trying to get this problem resolved... :-( Now I'm trying the
linux-2.6.35 kernel :-).
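
One thing worth checking in this situation is where the NIC's interrupts
land (a sketch; the IRQ number and CPU mask below are examples):

grep eth0 /proc/interrupts          # which CPU services the NIC's IRQs
echo 4 > /proc/irq/24/smp_affinity  # example: steer IRQ 24 to CPU2 (mask 0x4)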


Re: [squid-users] how much traffic can squid handle?

2010-08-09 Thread Drunkard Zhang
2010/8/9 Stand H hstan...@yahoo.com:
 Hi all,

 If configured properly, how much traffic can a server with 16GB RAM, 3.0Ghz 
 CPU, and 5 x 500GB SAS drive can handle? Anyone has a squid box that can 
 handle more than 300Mbps traffic?

 From my experience and configuration, it can handle around 80Mbps only. Thank 
 you.

 Stand

With multiple squid instances, I'm running a couple of servers that
handle 300-400Mbps of traffic. We use memory only, 32GB.


Re: [squid-users] how much traffic can squid handle?

2010-08-09 Thread Drunkard Zhang
2010/8/9 Stand H hstan...@yahoo.com:


  If configured properly, how much traffic can a server
 with 16GB RAM, 3.0Ghz CPU, and 5 x 500GB SAS drive can
 handle? Anyone has a squid box that can handle more than
 300Mbps traffic?
 
  From my experience and configuration, it can handle
 around 80Mbps only. Thank you.
 
  Stand
 With multi-instances of squid, I'm running a couple of
 servers handle
 300-400Mbps traffic. We use memory only, 32GB.

 Do you mean multiple instances on a single physical server with 32GB memory? 
 What type of disk and how many you have?
 Thanks,

We don't use any disk for the cache, just "cache_dir null /tmp". The
following options control how much memory squid uses:

memory_pools off
memory_pools_limit 1 MB
cache_mem 6500 MB
maximum_object_size_in_memory 4096 KB
memory_replacement_policy heap LFUDA

In this setup the CPU is generally not the bottleneck; I THINK
dual-channel memory and FSB speed matter much more.
The NIC (network card) can be a bottleneck too; at least in our case, too
many interrupts from the NIC limited traffic to no more than 550Mbps. I
have not hit that limit in the production environment, but it is
possible.

BTW, bonding multiple NICs may help with excessive interrupts.


Re: [squid-users] How much ram

2010-07-28 Thread Drunkard Zhang
2010/7/28 Marcello Romani mrom...@ottotecnica.com:
 Tóth Tibor Péter ha scritto:

 It was just a curiosity.
 I am interested what other people use in their cache server as far as ram
 goes. :)
 That's all.

 -Original Message-
 From: Marcello Romani [mailto:mrom...@ottotecnica.com] Sent: Wednesday,
 July 28, 2010 11:48 AM
 To: squid-users@squid-cache.org
 Cc: Tóth Tibor Péter
 Subject: Re: [squid-users] How much ram

 Tóth Tibor Péter ha scritto:

 Hi Guys!

 How much ram do you have in your squids?
 We have 8GB, and it's all being used.
 Proc usage is low, but ram seems to be never enough.

 Just interested.

 Tibby

 What OS are you using ?
 Under Linux, using top it's easy to see that all the memory that's not
 allocated to processes is used by the OS as disk cache and buffers. Their
 sisze is automatically managed by the kernel and given enough time will fill
 up the entire ram however big.
 After all, you don't want to pay for precious ram just to have it unused,
 right ?


 I was just responding to ram seems to be never enough, and perhaps I
 should've put a :-) after the question mark.

Agreed; my servers have 16GB, 24GB, 32GB, up to 64GB of RAM, and the hit
ratio seems to increase only a tiny bit...


Re: [squid-users] SQUID3: Access denied connecting to one site

2010-04-20 Thread Drunkard Zhang
2010/4/20 Alexandr Dmitriev alexandr.dmitr...@mos.lv:
 Hello,

 I have ubuntu 9.10 runing with squid 3.0.STABLE18-1 and squidGuard.

 Squid is set up as a transparent proxy - everything is working just fine,
 except I can't access one site (www.airbaltic.lv). Squid drops me an error -
 Access denied.

Try this:
echo 0 > /proc/sys/net/ipv4/tcp_ecn
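
The same setting via sysctl, for reference (add "net.ipv4.tcp_ecn = 0" to
/etc/sysctl.conf to make it persistent):

sysctl -w net.ipv4.tcp_ecn=0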

 I tried to disable squidGuard - it did not help, but when I connect without
 squid (disabling transparent access) - I can visit airbaltic.lv

 Here are records from access.log:
 1271761294.299      5 192.168.1.64 TCP_MISS/403 2834 GET
 http://www.airbaltic.lv/ - DIRECT/87.110.220.160 text/html
 1271761305.202      0 192.168.1.64 TCP_NEGATIVE_HIT/403 2842 GET
 http://www.airbaltic.lv/ - NONE/- text/html

 And here is my squid.conf:
 acl manager proto cache_object
 acl localhost src 127.0.0.1/32
 acl to_localhost dst 127.0.0.0/8
 acl localnet src 192.168.1.0/24
 acl Safe_ports port 80        # http
 acl Safe_ports port 21        # ftp
 acl Safe_ports port 443        # https
 acl Safe_ports port 70        # gopher
 acl Safe_ports port 210        # wais
 acl Safe_ports port 1025-65535    # unregistered ports
 acl Safe_ports port 280        # http-mgmt
 acl Safe_ports port 488        # gss-http
 acl Safe_ports port 591        # filemaker
 acl Safe_ports port 777        # multiling http
 acl CONNECT method CONNECT
 http_access allow manager localhost
 http_access deny manager
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow localhost
 http_access allow localnet
 http_access deny all
 icp_access deny all
 htcp_access deny all
 http_port 3128 transparent
 hierarchy_stoplist cgi-bin ?
 access_log /var/log/squid3/access.log squid
 refresh_pattern ^ftp:        1440    20%    10080
 refresh_pattern ^gopher:    1440    0%    1440
 refresh_pattern (cgi-bin|\?)    0    0%    0
 refresh_pattern .        0    20%    4320
 coredump_dir /var/spool/squid3
 redirect_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf

 Any ideas?

 Best regards,

 --
 Alexandr Dmitrijev
 Head of IT Department
 Fashion Retail Ltd.
 Phone:     +371 67560501
 Fax:       +371 67560502
 GSM:       +371 2771
 E-mail:    alexandr.dmitr...@mos.lv




[squid-users] Is this REAL squid-CARP cluster?

2010-04-13 Thread Drunkard Zhang
I'm using a squid cluster with CARP configured, and it works great.

I'm not sure whether all the CARP frontends distribute URLs based on the
_same_ hash values; here are the results queried via squidclient:

16:00:25 ~ $ for i in 66 67 68 71; do ask_squid $i carp | grep -A7
Hostname; done
    Hostname   Hash  Multiplier Factor Actual
  150.164.100.65   d6945438   1.00   0.142857   0.459355
  150.164.100.69   89857dc5   1.00   0.142857   0.212488
  150.164.100.70   239c90ac   1.00   0.142857   0.161517
  150.164.100.72   7d152572   1.00   0.142857   0.153253
  150.164.100.72   7d152572   1.00   0.142857   0.004496
  150.164.100.72   7d152572   1.00   0.142857   0.004460
  150.164.100.72   7d152572   1.00   0.142857   0.004431
    Hostname   Hash  Multiplier Factor Actual
  150.164.100.65   d6945438   1.00   0.142857   0.348483
  150.164.100.69   89857dc5   1.00   0.142857   0.272565
  150.164.100.70   239c90ac   1.00   0.142857   0.184725
  150.164.100.72   7d152572   1.00   0.142857   0.151390
  150.164.100.72   7d152572   1.00   0.142857   0.014509
  150.164.100.72   7d152572   1.00   0.142857   0.014176
  150.164.100.72   7d152572   1.00   0.142857   0.014151
    Hostname   Hash  Multiplier Factor Actual
  150.164.100.65   d6945438   1.00   0.142857   0.309244
  150.164.100.69   89857dc5   1.00   0.142857   0.257143
  150.164.100.70   239c90ac   1.00   0.142857   0.206424
  150.164.100.72   7d152572   1.00   0.142857   0.209645
  150.164.100.72   7d152572   1.00   0.142857   0.005738
  150.164.100.72   7d152572   1.00   0.142857   0.005962
  150.164.100.72   7d152572   1.00   0.142857   0.005844
    Hostname   Hash  Multiplier Factor Actual
  150.164.100.65   d6945438   1.00   0.142857   0.300572
  150.164.100.69   89857dc5   1.00   0.142857   0.266725
  150.164.100.70   239c90ac   1.00   0.142857   0.203087
  150.164.100.72   7d152572   1.00   0.142857   0.214236
  150.164.100.72   7d152572   1.00   0.142857   0.005091
  150.164.100.72   7d152572   1.00   0.142857   0.005338
  150.164.100.72   7d152572   1.00   0.142857   0.004951
PS: the last one, 150.164.100.72, has 64GB of memory, so I set up 4 squid
processes listening on ports 80, 81, 82 and 83.

If the Hash column identifies the hashed URL chunk, that's good; if not,
how can I make several CARP frontends distribute the SAME hashed URL
chunk to one squid box?
I'm using squid-2.6 and squid-2.7.
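
(ask_squid above is a local wrapper script; the underlying query is
presumably something along the lines of the cache manager CARP report,
with host and port as examples:)

squidclient -h 150.164.100.66 -p 80 mgr:carp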

--
gongfan...@gmail.com
zhan...@gwbnsh.net.cn
18601633785


Re: [squid-users] Is this REAL squid-CARP cluster?

2010-04-13 Thread Drunkard Zhang
2010/4/14 Amos Jeffries squ...@treenet.co.nz:
 On Tue, 13 Apr 2010 22:10:38 +0800, Drunkard Zhang gongfan...@gmail.com
 wrote:
 I'm using a squid cluster with CARP configured, they works great.

 I'm not sure if all the CARP frontend distributed URLs based on the
 _same_ hash value,
 here's result queried by squidclient:


 Bug http://bugs.squid-cache.org/show_bug.cgi?id=2153 already fixed.

Thanks!
Despite that, have I configured CARP correctly? I'm not very sure about it.




-- 
gongfan...@gmail.com
zhan...@gwbnsh.net.cn
18601633785