Re: [squid-users] Squid url_rewrite and cookie

2010-01-06 Thread Adrian Chadd
Please create an Issue and attach the patch. I'll see about including it!




adrian

2010/1/6 Rajesh Nair rajesh.nair...@gmail.com:
 Thanks for the response, Matt!

 Unfortunately the cooperating HTTP service solution would not work,
 as I need to set the cookie for the same domain the request
 is coming for, and that happens only when the request comes to the squid
 proxy.

 I have resolved it by extending the squid-url_rewrite protocol to
 accept the cookie string too and modifying the squid code to send the
 cookie in the 302 redirect response.

 Let me know if anybody is interested in the patch!

 Thanks,
 Rajesh

 On Tue, Jan 5, 2010 at 9:41 AM, Matt W. Benjamin m...@linuxbox.com wrote:
 Hi,

 Yes, you cannot (could not), per se.  However, you can rewrite to a
 cooperating HTTP service which sets a cookie.  And, if you had adjusted
 Squid so as to pass cookie data to url_rewriter programs, you could also
 inspect the cookie there on future requests.

 Matt

 - Rajesh Nair rajesh.nair...@gmail.com wrote:


 Reading the docs, it looks like it is not possible to send any HTTP
 response header from the url_rewriter program; the url_rewriter can
 merely return the redirected URI.
 Is this correct?

 Thanks,
 Rajesh

 --

 Matt Benjamin

 The Linux Box
 206 South Fifth Ave. Suite 150
 Ann Arbor, MI  48104

 http://linuxbox.com

 tel. 734-761-4689
 fax. 734-769-8938
 cel. 734-216-5309
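
For reference, a minimal url_rewriter speaking the stock Squid 2.x helper
protocol is sketched below. Rajesh's cookie field is a local patch and is
not part of this protocol, and the example.com URLs are placeholders:

  #!/usr/bin/env python
  # Minimal Squid url_rewrite helper (stock protocol, no cookie field).
  # Squid writes one line per request: "URL client_ip/fqdn user method ...".
  # With Squid 2.6+, replying "302:<url>" makes Squid answer the client
  # with an HTTP 302 redirect; an empty line leaves the request untouched.
  import sys

  for line in sys.stdin:
      parts = line.split()
      if not parts:
          continue
      url = parts[0]
      if url.startswith("http://old.example.com/"):   # placeholder rule
          sys.stdout.write("302:http://new.example.com/\n")
      else:
          sys.stdout.write("\n")
      sys.stdout.flush()   # helpers must reply one unbuffered line per request

Hook it up in squid.conf with url_rewrite_program /usr/local/bin/rewrite.py
(path hypothetical).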





Re: [squid-users] 'gprof squid squid.gmon' only shows the initial configuration functions

2009-12-09 Thread Adrian Chadd
Talk to the freebsd guys (eg me) about pmcstat and support for your
hardware. You may just need to find / organise a backport of the
particular hardware support for your platform. I've been working on
profiling Lusca with pmcstat and some new-ish tools which use and
extend it in useful ways.

gprof data is almost certainly uselessly unreliable on modern CPUs.
Too much can and will happen between profiling ticks.

I can hazard a few guesses about where your CPU is going. Likely
candidate is poll() if your Squid is too old. First thing to do is
organise porting the kqueue() stuff if it isn't already included.

I can make more educated guesses about where the likely CPU hog
culprits are given workload and configuration file information.



Adrian
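
For anyone following along, the hwpmc workflow looks roughly like this on
FreeBSD - a sketch from memory, so check pmcstat(8) on your release for the
exact event names and flags:

  # kldload hwpmc
  # pmcstat -S instructions -O /tmp/squid.pmc ./squid -N
  (drive some traffic through it, then stop squid)
  # pmcstat -R /tmp/squid.pmc -G /tmp/squid.callgraph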

2009/12/10 Guy Bashkansky guy...@gmail.com:
 Is there an oprofile version for FreeBSD?  I thought it was limited to
 Linux.  On FreeBSD I tried pmcstat, but it gives an initialization
 error.

 My version of Squid is old and customized (so I can't upgrade) and may
 not have built-in timers.  Since which version did they appear?

 As for gprof - even with the event loop on top, the rest of the
 table might still give some idea why the CPU is overloaded.  The problem is
 - I see only initial configuration functions:

                                 called/total       parents
 index  %time    self descendents  called+self    name           index
                                 called/total       children
                                                    spontaneous
 [1]     63.4    0.17        0.00                 _mcount [1]
 -----------------------------------------------
               0.00        0.10       1/1           _start [3]
 [2]     36.0    0.00        0.10       1         main [2]
               0.00        0.10       1/1           parseConfigFile [4]
 ...
 -----------------------------------------------
                                                    spontaneous
 [3]     36.0    0.00        0.10                 _start [3]
               0.00        0.10       1/1           main [2]
 -----------------------------------------------
               0.00        0.10       1/1           main [2]
 [4]     36.0    0.00        0.10       1         parseConfigFile [4]
               0.00        0.09       1/1           readConfigLines [5]
               0.00        0.00     169/6413        parse_line [6]
 ..
 

 System info:

 # uname -m -r -s
 FreeBSD 6.2-RELEASE-p9 amd64

 # gcc -v
 Using built-in specs.
 Configured with: FreeBSD/amd64 system compiler
 Thread model: posix
 gcc version 3.4.6 [FreeBSD] 20060305


 There are 7 fork()s for unlinkd/diskd helpers.  Can these fork()s
 affect profiling info?

 On Wed, Dec 9, 2009 at 2:04 AM, Robert Collins
 robe...@robertcollins.net wrote:
 On Tue, 2009-12-08 at 15:32 -0800, Guy Bashkansky wrote:
 I've built squid with the -pg flag and run it in the no-daemon mode
 (-N flag), without the initial fork().

 I send it the SIGTERM signal which is caught by the signal handler, to
 flag graceful exit from main().

 I expect to see meaningful squid.gmon, but 'gprof squid squid.gmon'
 only shows the initial configuration functions:

 gprof isn't terribly useful anyway - due to squid's callback-based model,
 it will see nearly all the time as belonging to the event loop.

 oprofile and/or squid's built-in analytic timers will get much better
 info.

 -Rob





Re: [squid-users] Distributed High Performance Squid

2009-08-20 Thread Adrian Chadd
Squid doesn't share memory or disk cache at the moment. It won't
share/slice file descriptors the way you want it to.

I could probably write a unified logging hack so multiple squid
processes log to the same file via a single helper that handles
multiple pipes or something, one from each Squid. There's no atomic
"append a line" IO method in UNIX so doing it that way won't work.

You could try hacking things up to lock/unlock the file for each
logfile write, but I have no idea what the impact would be.

Adrian


2009/8/20 Joel Ebrahimi jebrah...@bivio.net:
 Hi,

 I'm trying to build a high-performance squid. The performance actually
 comes from the hardware, without changes to the code base. I am a
 beginning user of squid, so I figured I would ask the list for the
 best/different ways of setting up this configuration.

 The architecture is set up like this: there are 12 CPU cores that each
 run an instance of squid. Each of these 12 cores has access to the same
 disk space but not the same memory, each is its own instance of an OS,
 and they can communicate on an internal network. There is a network
 processor that slices up sessions and can hand them off to any one of
 the 12 cores that is available, and there is a single conf file and a
 single logging directory.

 The current problem I can see with this setup is that each of the 12
 instances of squid acts individually, therefore any one of them could
 try to access the same log file at the same time. I'm not sure what
 impact this could have in terms of overwriting data.

 I actually have it set up this way now and it works well, though it's a
 very small test environment, and I'm concerned issues may only pop up in
 larger environments where access to the logs is very frequent.

 I was looking through some online materials and I saw there are other
 mechanisms for log formatting. The ones that I thought may be of use
 here are either the daemon or udp. There is actually a 13th core in the
 system that is used for management. I was wondering if setting up udp
 logging on this 13th core and having the 12 instances of squid send the
 log info over the internal network would work.

 Thoughts or better ideas? Problems with either of these scenarios?


 Thanks in advance,

 // Joel

 jebrah...@bivio.net

 Joel Ebrahimi
 Solutions Engineer
 Bivio Networks
 925.924.8681
 jebrah...@bivio.net
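
A sketch of the UDP idea: point each Squid's access_log at the management
core and run a trivial collector there. The udp: access_log module only
exists in newer Squid releases, so check your version first; the address
and port below are placeholders.

  # squid.conf on each of the 12 instances
  access_log udp://10.0.0.13:5140 squid

  #!/usr/bin/env python
  # Trivial UDP log sink for the 13th (management) core. Each datagram
  # carries whole log line(s), so writes never interleave mid-line.
  import socket

  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  sock.bind(("0.0.0.0", 5140))
  log = open("/var/log/squid/access.log", "ab")
  while True:
      datagram, peer = sock.recvfrom(65535)
      log.write(datagram)
      log.flush()

The usual caveat: UDP can drop lines under load, which may or may not be
acceptable for accounting.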




Re: [squid-users] Squid 3.1.0.13 Speed Test - Upload breaks?

2009-08-16 Thread Adrian Chadd
The pipelining used by speedtest.net and such won't really get a
benefit from the current squid pipelining support.



Adrian

2009/8/15 Daniel sq...@zoomemail.com:
 Henrik,

         I added 'pipeline_prefetch on' to my squid.conf and it still isn't
 working right. I've pasted my entire squid.conf below; if you have anything
 extra turned on/off, et cetera, then please let me know and I'll try it.
 Thanks!

 acl manager proto cache_object
 acl localhost src 127.0.0.1/32
 acl to_localhost dst 127.0.0.0/8
 acl TestPoolIPs src lpt-hdq-dmtqq31 wksthdq88w
 acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
 acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
 acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
 acl sclthdq01w src 10.211.194.187/32    # custom acl for apache/cache manager
 acl SSL_ports port 443
 acl Safe_ports port 80          # http
 acl Safe_ports port 21          # ftp
 acl Safe_ports port 443         # https
 acl Safe_ports port 70          # gopher
 acl Safe_ports port 210         # wais
 acl Safe_ports port 1025-65535  # unregistered ports
 acl Safe_ports port 280         # http-mgmt
 acl Safe_ports port 488         # gss-http
 acl Safe_ports port 591         # filemaker
 acl Safe_ports port 777         # multiling http
 acl CONNECT method CONNECT
 http_access allow manager localhost
 http_access allow manager sclthdq01w
 http_access deny manager
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 #http_access allow localnet
 http_access allow localhost
 http_access allow TestPoolIPs
 http_access deny all
 http_port 3128
 hierarchy_stoplist cgi-bin ?
 coredump_dir /usr/local/squid/var/cache
 cache_mem 512 MB
 pipeline_prefetch on
 refresh_pattern ^ftp:           1440    20%     10080
 refresh_pattern ^gopher:        1440    0%      1440
 refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
 refresh_pattern .               0       20%     4320

 -Original Message-
 From: Henrik Lidström [mailto:free...@lidstrom.eu]
 Sent: Monday, August 10, 2009 8:16 PM
 To: Daniel
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] Squid 3.1.0.13 Speed Test - Upload breaks?

 Daniel skrev:
 Kinkie,

       I'm using the default settings, so I don't have any specific max 
 request sizes specified. I guess I'll hold out until someone else running 
 3.1 can test this.

 Thanks!

 -Original Message-
 From: Kinkie [mailto:gkin...@gmail.com]
 Sent: Saturday, August 08, 2009 6:44 AM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Squid 3.1.0.13 Speed Test - Upload breaks?

 Maybe the failure could depend on some specific settings, such as max
 request size?

 On 8/8/09, Heinz Diehl h...@fancy-poultry.org wrote:

 On 08.08.2009, Daniel wrote:


 Would anyone else using Squid mind doing this same bandwidth test and
 seeing
 if they have the same issue(s)?

 It works flawlessly using both 2.7-STABLE6 and 3.0-STABLE18 here.






 Squid Cache: Version 3.1.0.13

 Working without a problem, tested multiple sites on the list.
 Nothing special in the config except maybe pipeline_prefetch on

 /Henrik




Re: [squid-users] Script Check

2009-08-09 Thread Adrian Chadd
Don't do that.

As someone who did this 10+ years ago, I suggest you do this (sketched
below):

* do some hackery to find out how your freeradius server stores the
currently logged in users. It may be in a mysql database, it may be
in a disk file, etc, etc
* have your redirector query -that- directly, rather than running
radwho. When I did this 10 years ago, the radius server kept a wtmp
style file with current logins, which worked OK-ish for a few dozen
users, then sucked for a few hundred users. I ended up replacing it
with a berkeley DB hash table to make searching for users faster.
* then in the helper, cache the IP results for a short period (say, 5
to 10 seconds) so frequent page accesses don't result in a flurry
of requests to the backend
* keep the number of helpers low - you're doing it wrong if you need
more than 5 or 6 helpers doing this..



Adrian
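
A sketch of that design as a Squid external_acl helper. The lookup
function is hypothetical - replace it with whatever query suits your
freeradius session store - and the file paths are placeholders:

  # squid.conf
  external_acl_type radius_session ttl=10 children=5 %SRC /usr/local/bin/radius_check.py
  acl dialup_users external radius_session
  http_access allow dialup_users

  #!/usr/bin/env python
  # Answers OK/ERR per client IP (%SRC), caching the session table briefly
  # so frequent page loads don't hammer the radius store.
  import sys, time

  CACHE_TTL = 10                    # seconds, per the advice above
  _sessions, _stamp = {}, 0.0

  def lookup_radius_sessions():
      # hypothetical: return {ip: username} from your freeradius store
      return {}

  def sessions():
      global _sessions, _stamp
      if time.time() - _stamp > CACHE_TTL:
          _sessions, _stamp = lookup_radius_sessions(), time.time()
      return _sessions

  allowed = set(open("/etc/squid/allowed_users").read().split())

  for line in sys.stdin:
      user = sessions().get(line.strip())
      sys.stdout.write("OK\n" if user in allowed else "ERR\n")
      sys.stdout.flush()

Squid's own ttl=10 caches the per-IP verdict; the helper-side cache keeps
the backend query rate down across many client IPs.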

2009/8/8  mic...@casa.co.cu:
 Hello

 Using squid 2.6 at my work, I have a group of users who connect by dial-up
 access to a NAS, with a freeradius server to authenticate them. Each time
 they log in, my users are assigned a dynamic IP address, making it
 impossible to create permissions by IP address without authentication.

 Right now, to assign levels of access to sites, they are
 authenticating against an Active Directory, but I want to change that.

 I want to create a script so that when the squid gets a request from that
 block of IP addresses, it reads the username and IP address
 from the freeradius server - with the radwho tool, which shows connected
 users + IP address, or from mysql, from which you can achieve the same -

 and compares against a text file; if the user is listed, then they get
 access without authentication of any kind.

 Is it possible to do this?

 Sorry for my English, it is very poor.

 Thanks

 Michel





 --
 Webmail, servicio de correo electronico
 Casa de las Americas - La Habana, Cuba.




Re: [squid-users] proxy: explicit transparent + VideoCache

2009-08-06 Thread Adrian Chadd
Have you asked the videocache group why it functions the way it functions?



adrian

2009/8/6 pavel kolodin pavelkolo...@gmail.com:
 On Thu, 06 Aug 2009 05:34:09 -, Amos Jeffries squ...@treenet.co.nz
 wrote:


 Why?

 Possible reasons:

 1) 302 being the status you really want to use for this.

 2) transparent proxy aka intercepting proxy aka man-in-the-middle attack -
 perhaps the plugin is smart enough to detect such attacks and prevent them
 from working.

 3) perhaps the plugin is simply smart enough to realize it will never get
 a redirect back from the real source.

 4) perhaps the browser does not have access to the new location.

 5) perhaps the browser is limiting the sources the plugin may connect to.
 Raw-IP addresses are known to be dangerous.

 The browser (or flash plugin in the browser) doesn't even try to send the
 request to 10.10.10.1 if the proxy is transparent.




Re: [squid-users] Re: [new] videocache question

2009-08-04 Thread Adrian Chadd
Is this still involving the videocache stuff?

If it is, why aren't you asking them?



Adrian

2009/8/4 ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ mirz...@gmail.com:
 (repost)
 and how about caching online game patchers? e.g. ragnarok online, rohan
 online, etc?
 do those use the same method?

 and can anyone give me an example of this cache streaming?
 because:
 r...@server:/home/mirza# tail -f /var/log/squid/store.log
 1249364647.522 RELEASE -1  94E0DA2780D918D4AC808CBCF54144CC
 200 1249364644        -1 1249364644 text/html -1/952 GET
 http://openx.detik.com/delivery/afr.php?n=a7157323&zoneid=349&cb=INSERT_RANDOM_NUMBER_HERE
 1249364647.745 RELEASE -1  7F3A99532B8A6CEA7D846D4F0AE2E6AE
 200 1249364647        -1 1249364647 image/gif 43/43 GET
 http://openx.detik.com/delivery/lg.php?bannerid=2167&campaignid=1043&zoneid=349&loc=http%3A%2F%2Fwww.detiknews.com%2Fread%2F2009%2F08%2F04%2F123104%2F1177031%2F10%2Fjenazah-mbah-surip-siap-dimandikan-ibu-ibu-bacakan-surat-yasin&cb=1b1801337b

 always RELEASE

 On Mon, Aug 3, 2009 at 12:44 AM, ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ
 ▓▒░mirz...@gmail.com wrote:
 ok amos

 anyone? have any idea about this prob?

 On Sun, Aug 2, 2009 at 7:59 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ wrote:

 On Sun, Aug 2, 2009 at 7:55 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 .

 ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ wrote:

 i'm using 2.x (latest)

 can anyone help?

 Are we to assume by latest 2.x you mean 2.7 or merely the latest
 available
 in your unknown operating system?
 Hint: 'latest 2.x' for RedHat and several others is 2.5. Obsolete many
 years
 ago.

 If you did mean 2.7, then the example there and in related Discussion
 page
 is the best you are going to get right now. They even provide a useful
 helper script to do the URL mapping.


 Amos
 --

 i use the latest from ubuntu
 yes, it is 2.7

 and how about caching online game patchers? e.g. ragnarok online, rohan
 online, etc?
 do those use the same method?

 I can't say. I have not seen those game patchers in operation.
 The steam game patcher shows some promise though for certain of its
 operations.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE17
  Current Beta Squid 3.1.0.12




 --
 -=-=-=-=
 Personal Blog http://my.blog.or.id ( still learning )
 Hot News !!! : Want your own PREMIUM SMS service? Contact me ASAP.
 Get MAXIMUM revenue share with no traffic requirements...




 --
 -=-=-=-=
 Personal Blog http://my.blog.or.id ( still learning )
 Hot News !!! : Want your own PREMIUM SMS service? Contact me ASAP.
 Get MAXIMUM revenue share with no traffic requirements...
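
For the archive: the Squid-2.7 feature Amos is pointing at is the
store-URL rewriter. A sketch of the squid.conf side (the helper script
comes from the wiki ConfigExamples page; the regex and path here are
illustrative only):

  # Squid 2.7 only: map equivalent CDN/mirror URLs onto one cache key
  acl store_rewrite_list urlpath_regex \.flv\? \.mp4\?
  storeurl_access allow store_rewrite_list
  storeurl_access deny all
  storeurl_rewrite_program /usr/local/bin/storeurl.pl
  storeurl_rewrite_children 5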




Re: Re: [squid-users] Squid high bandwidth IO issue (ramdisk SSD)

2009-08-04 Thread Adrian Chadd
How much disk IO is going on when the CPU shows 70% IOWAIT? Far too
much. The CPU time spent in CPU IOWAIT shouldn't be that high. I think
you really should consider trying an alternative disk controller.




adrian

2009/8/4 smaugadi a...@binat.net.il:

 Dear Adrian and Heinz,
 Sorry for the delayed reply and thanks for all the help so far.
 I have tried changing the file system (ext2 and ext3), changed the
 partitioning geometry (fdisk -H 224 -S 56) as I read that this would improve
 performance with SSD.
 I tried ufs, aufs and even coss (downgrade to 2.6). (By the way the average
 object size is 13KB).
 And failed!

 From system monitoring during the squid degradation I saw:

 /usr/local/bin/iostat -dk -x 1 1000 sdb
 Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz
 avgqu-sz   await  svctm  %util
 sdb               0.00     0.00    0.00    4.00     0.00    72.00    36.00
 155.13 25209.75 250.25 100.10

 Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz
 avgqu-sz   await  svctm  %util
 sdb               0.00     0.00    0.00    4.00     0.00    16.00     8.00
 151.50 26265.50 250.50 100.20

 Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz
 avgqu-sz   await  svctm  %util
 sdb               0.00     0.00    0.00    3.00     0.00    12.00     8.00
 147.49 27211.33 333.33 100.00

 Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz
 avgqu-sz   await  svctm  %util
 sdb               0.00     0.00    0.00    4.00     0.00    32.00    16.00
 144.54 28311.25 250.25 100.10

 Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz
 avgqu-sz   await  svctm  %util
 sdb               0.00     0.00    0.00    4.00     0.00   100.00    50.00
 140.93 29410.25 250.25 100.10

 Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz
 avgqu-sz   await  svctm  %util
 sdb               0.00     0.00    0.00    4.00     0.00    36.00    18.00
 137.00 30411.25 250.25 100.10

 Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz
 avgqu-sz   await  svctm  %util
 sdb               0.00     0.00    0.00    2.00     0.00     8.00     8.00
 133.29 31252.50 500.50 100.10

 As soon as the service time increases above 200 ms, problems start; the
 total time for service (time in queue + service time) goes all the way to 32
 sec.

 This is from mpstat at the same time:

 09:33:56 AM  CPU   %user   %nice    %sys %iowait    %irq   %soft  %steal
 %idle    intr/s
 09:33:58 AM  all    3.00    0.00    2.25   84.02    0.12    2.75    0.00
 7.87   9782.00
 09:33:58 AM    0    3.98    0.00    2.99   72.64    0.00    3.98    0.00
 16.42   3971.00
 09:33:58 AM    1    2.01    0.00    1.01   80.40    0.00    1.51    0.00
 15.08   1542.00
 09:33:58 AM    2    2.51    0.00    2.01   92.96    0.00    2.51    0.00
 0.00   1763.50
 09:33:58 AM    3    3.02    0.00    3.02   90.95    0.00    3.02    0.00
 0.00   2506.00

 09:33:58 AM  CPU   %user   %nice    %sys %iowait    %irq   %soft  %steal
 %idle    intr/s
 09:34:00 AM  all    0.50    0.00    0.25   74.12    0.00    0.62    0.00
 24.50   3833.50
 09:34:00 AM    0    0.50    0.00    0.50    0.00    0.00    1.00    0.00
 98.00   2015.00
 09:34:00 AM    1    0.50    0.00    0.00   98.51    0.00    1.00    0.00
 0.00    544.50
 09:34:00 AM    2    0.50    0.00    0.00   99.50    0.00    0.00    0.00
 0.00    507.00
 09:34:00 AM    3    0.50    0.00    0.00   99.00    0.00    0.50    0.00
 0.00    766.50

 09:34:00 AM  CPU   %user   %nice    %sys %iowait    %irq   %soft  %steal
 %idle    intr/s
 09:34:02 AM  all    0.12    0.00    0.25   74.53    0.00    0.12    0.00
 24.97   1751.50
 09:34:02 AM    0    0.00    0.00    0.00    0.00    0.00    0.00    0.00
 100.00   1155.50
 09:34:02 AM    1    0.00    0.00    0.50   99.50    0.00    0.00    0.00
 0.00    230.50
 09:34:02 AM    2    0.00    0.00    0.00  100.00    0.00    0.00    0.00
 0.00    220.00
 09:34:02 AM    3    0.00    0.00    0.50   99.50    0.00    0.00    0.00
 0.00    146.00

 09:34:02 AM  CPU   %user   %nice    %sys %iowait    %irq   %soft  %steal
 %idle    intr/s
 09:34:04 AM  all    1.25    0.00    1.50   74.97    0.00    0.00    0.00
 22.28   1607.50
 09:34:04 AM    0    5.47    0.00    5.47    0.00    0.00    0.00    0.00
 89.05   1126.00
 09:34:04 AM    1    0.00    0.00    0.00  100.00    0.00    0.00    0.00
 0.00    158.50
 09:34:04 AM    2    0.00    0.00    0.50   98.51    0.50    0.50    0.00
 0.00    175.50
 09:34:04 AM    3    0.00    0.00    0.00  100.00    0.00    0.00    0.00
 0.00    147.00

 Well, sometimes you eat the bear and sometimes the bears eat you.

 Do you have any more ideas?
 Regards,
 Adi.




 Adrian Chadd-3 wrote:

 2009/8/2 Heinz Diehl h...@fancy-poultry.org:

 1. Change cache_dir in squid from ufs to aufs.

 That is almost always a good idea for any decent performance under any
 sort of concurrent load. I'd like proof otherwise - if one finds

Re: [squid-users] New Accel Reverse Proxy Cache is not caching everything... how to force?

2009-08-04 Thread Adrian Chadd
2009/8/4 Hery Setiawan yellowha...@gmail.com:

 maybe in his mind (and my mind too, actually), with a big mem_cache the
 files will be transferred faster. But that big is too much for me,
 since I only have 4GB of RAM and a thousand workstations
 connecting to my squid.

The squid memory cache doesn't work the way people seem to think it
does. Once objects leave the memory cache pool they're out for good.

The rule of thumb is quite simple - keep cache_mem large enough to
handle in-transit objects and a few hot objects; leave the rest to be
available for general operating system disk buffer caching. Only
deviate from this if you absolutely, positively require it.

This will occur when your workload has a lot of small objects which
you frequently hit. Hack up or download something to generate a
request size vs {hit rate, byte hit rate, service time, cumulative
traffic} breakdown to see exactly how many tiny/small objects you're
getting hits off of.

If you have a very small set of constantly hot traffic that will fit
in memory, up cache_mem. But be aware of the performance repercussions
if the hot traffic leaves cache_mem and stays on disk.. :)

If you have a set of hot traffic that moves over time, upping
cache_mem may not help.

2c,



Adrian
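
In squid.conf terms the rule of thumb is just (numbers illustrative):

  cache_mem 256 MB                       # in-transit plus a few hot objects
  maximum_object_size_in_memory 64 KB    # keep only small objects hot
  # ...and leave the rest of the RAM to the OS disk buffer cache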


Re: [squid-users] Way to hide Caching Server IP

2009-08-03 Thread Adrian Chadd
Investigate tproxy



Adrian
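
For the archive, "investigate tproxy" unpacks to roughly this on Linux
(TPROXY v4 with Squid 3.1; a sketch of the usual recipe - note the
original poster's 2.6.STABLE22 predates this syntax):

  # squid.conf
  http_port 3129 tproxy

  # policy routing + iptables (mangle table)
  ip rule add fwmark 1 lookup 100
  ip route add local 0.0.0.0/0 dev lo table 100
  iptables -t mangle -N DIVERT
  iptables -t mangle -A DIVERT -j MARK --set-mark 1
  iptables -t mangle -A DIVERT -j ACCEPT
  iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
  iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129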

2009/8/4 Ja-Ryeong Koo wjb...@gmail.com:
 Hello,

 I am writing this email to ask something regarding ways to hide Caching
 Server IP address.

 I have one apache server, one caching server (squid2.6.stable22).
 (Client -- Caching Server (Reverse Proxy) -- Apache Server)

 Now, whenever I try to connect to the apache server, both the Caching Server
 IP and the Client IP (my PC's IP address) are seen on the Apache server.

 I would like the apache server to be able to see only the client IP address.

 Please let me know if you have any way to do this.

 In advance, thank you for your kind consideration.

 Best Regards,
 Ja-Ryeong Koo

 --
 Ja-Ryeong Koo,
 Department of Computer Science,
 Texas A&M University-College Station,
 TX, 77843-3112, USA,
 Phone: +1-979-204-8021



Re: [squid-users] Does squid support multithreading ?

2009-08-02 Thread Adrian Chadd
2009/8/2 Sachin Malave sachinmal...@gmail.com:
 I have a multicore processor here and I want to run squid3 on this
 platform. Does squid support multithreading? Will it improve
 performance?

None of the public Squid codebases currently support general
multithreading. There's some threading for IO but that is it.

The only support is some magic support for sharing the same incoming
HTTP socket between multiple, separate squid processes.

If you care about performance, Squid-2.7 is probably the best for you
at the moment from the Squid codebases..


Adrian


Re: [squid-users] Squid high bandwidth IO issue (ramdisk SSD)

2009-08-02 Thread Adrian Chadd
2009/8/2 smaugadi a...@binat.net.il:

 Dear Adrian,
 During the implementation we encountered issues with all kinds of variables,
 such as:
 Limit of file descriptors (now the squid is using 204800).
 TCP port range was low (increased to 1024-65535); TCP timers (changed them).
 The ip_conntrack and hash size were low (now 524288 and 262144 respectively).

 Now we are at a point where IO is the only issue.

What profiling have you done to support that? For example, one of the
issues I had which looked like IO performance was actually because the
controller was completely unhappy. Upgrading the firmware on the
controller card significantly increased performance.

But I think you need to post some further information about the
problem. IO can be rooted in a lot of issues. :)


Adrian


Re: [squid-users] Squid high bandwidth IO issue (ramdisk SSD)

2009-08-02 Thread Adrian Chadd
Are you seeing high IO wait CPU use, or high IO wait times on IO?



Adrian

2009/8/2 smaugadi a...@binat.net.il:

 Dear Adrian,
 Well, my conclusion that this is an IO problem came from the fact that I see
 huge IO waits as the volume of traffic increases (with tools such as mpstat);
 when using a ramdisk there is no such issue.
 I have configured the SSD drive with ext2, no journal, noatime. Used the
 "noop" I/O scheduler.
 In /etc/fstab
 /dev/sdb1               /cache                  ext2 defaults,noatime 1 2

 hdparm results:
 hdparm -t /dev/sdb1

 /dev/sdb1:
  Timing buffered disk reads:  304 MB in  3.01 seconds = 100.93 MB/sec
 
 hdparm -T /dev/sdb1

 /dev/sdb1:
  Timing cached reads:   4192 MB in  2.00 seconds = 2096.58 MB/sec

 Any ideas?

 Regards.



 Adrian Chadd-3 wrote:

 2009/8/2 smaugadi a...@binat.net.il:

 Dear Adrian,
 During the implementation we encountered issues with all kind of
 variables
 such as:
 Limit of file descriptors (now the squid is using 204800).
 TCP port range was low (increased to 1024 65535) TCP timers (changed
 them)
 The ip_conntrack and hash size were low (now 524288 262144 respectively)

 Now we are at a point that IO is the only issue.

 What profiling have you done to support that? For example, one of the
 issues I had which looked like IO performance was actually because the
 controller was completely unhappy. Upgrading the firmware on the
 controller card significantly increased performance.

 But I think you need to post some further information about the
 problem. IO can be rooted in a lot of issues. :)


 Adrian



 --
 View this message in context: 
 http://www.nabble.com/Squid-high-bandwidth-IO-issue-%28ramdisk-SSD%29-tp24775448p24776193.html
 Sent from the Squid - Users mailing list archive at Nabble.com.




Re: [squid-users] Squid high bandwidth IO issue (ramdisk SSD)

2009-08-02 Thread Adrian Chadd
Well, from what I've read, SSDs don't necessarily provide very high
random write throughput over time. You should do some further research
into how they operate to understand what the issues may be.

In any case, the much more important information is what IO pattern(s)
are occurring on your storage media and what the controller is doing
with it. You still haven't eliminated the possibility that the
controller/driver is somehow not helping.

You should also graph at least read/write IO count and byte counts;
investigate what is going on.

2c,



Adrian

2009/8/2 smaugadi a...@binat.net.il:

 Dear Waitman,


 Testing the SSD drive, before installing it in the squid box, showed a huge
 performance advantage in IOPS, read/write. So I thought this would
 solve the problems I had with the HDD.
 But it was not so; look at this output:
 12:39:35 PM  CPU   %user   %nice    %sys %iowait    %irq   %soft  %steal
 %idle    intr/s
 12:39:37 PM  all    2.87    0.00    2.25   44.44    0.12    3.50    0.00
 46.82  11666.50
 12:39:37 PM    0    0.00    0.00    0.00    0.00    0.50    4.50    0.00
 95.00   4764.00
 12:39:37 PM    1    0.00    0.00    0.50    4.98    0.00    2.49    0.00
 92.04   2097.50
 12:39:37 PM    2   11.56    0.00    8.54   76.88    0.00    3.02    0.00
 0.00   1977.50
 12:39:37 PM    3    0.50    0.00    0.00   95.52    0.50    3.48    0.00
 0.00   2827.50

 This is a moment before the system went down; the IO wait is up high.


 Waitman Gobble-2 wrote:


 smaugadi wrote:
 Dear ALL,
 We have a squid server with high volume of traffic, 200 – 300 MB


Re: [squid-users] Squid high bandwidth IO issue (ramdisk SSD)

2009-08-02 Thread Adrian Chadd
Generally large amounts of CPU being spent in IO wait means that the
driver is not well-written or the hardware requires extra upkeep to
handle IO operations.

What hardware in particular are you using?

This was one of those big differences between IDE and SATA in the past
btw. At least under Linux in the distant past, a lot of the IDE
drivers would have to manually transfer the data using PIO rather than
having a bus-master DMA transfer occur like many SCSI cards did. This
was counted as IO wait.

Investigate what your storage driver is doing. :)

HTH,


Adrian

2009/8/2 smaugadi a...@binat.net.il:

 Well, I'm seeing that the CPU is spending a lot of time waiting for
 outstanding disk I/O requests.
 Adi

 Adrian Chadd-3 wrote:

 Are you seeing high IO wait CPU use, or high IO wait times on IO?



 Adrian

 2009/8/2 smaugadi a...@binat.net.il:

 Dear Adrian,
 Well my conclusion that this is an IO problem came from the fact that I
 see
 huge IO waits as the volume of traffic increase (with tools such as
 mpstat),
 when using ramdisk there is no such issue.
 I have configured the SSD drive with ext2, no journal, noatime. Used the
 “noop” I/O scheduler.
 In /etc/fstab
 /dev/sdb1               /cache                  ext2 defaults,noatime 1 2

 hdparm results:
 hdparm -t /dev/sdb1

 /dev/sdb1:
  Timing buffered disk reads:  304 MB in  3.01 seconds = 100.93 MB/sec
 
 hdparm -T /dev/sdb1

 /dev/sdb1:
  Timing cached reads:   4192 MB in  2.00 seconds = 2096.58 MB/sec

 Any ideas?

 Regards.



 Adrian Chadd-3 wrote:

 2009/8/2 smaugadi a...@binat.net.il:

 Dear Adrian,
 During the implementation we encountered issues with all kind of
 variables
 such as:
 Limit of file descriptors (now the squid is using 204800).
 TCP port range was low (increased to 1024 65535) TCP timers (changed
 them)
 The ip_conntrack and hash size were low (now 524288 262144
 respectively)

 Now we are at a point that IO is the only issue.

 What profiling have you done to support that? For example, one of the
 issues I had which looked like IO performance was actually because the
 controller was completely unhappy. Upgrading the firmware on the
 controller card significantly increased performance.

 But I think you need to post some further information about the
 problem. IO can be rooted in a lot of issues. :)


 Adrian



 --
 View this message in context:
 http://www.nabble.com/Squid-high-bandwidth-IO-issue-%28ramdisk-SSD%29-tp24775448p24776193.html
 Sent from the Squid - Users mailing list archive at Nabble.com.





 --
 View this message in context: 
 http://www.nabble.com/Squid-high-bandwidth-IO-issue-%28ramdisk-SSD%29-tp24775448p24776478.html
 Sent from the Squid - Users mailing list archive at Nabble.com.




Re: [squid-users] Re: Squid high bandwidth IO issue (ramdisk SSD)

2009-08-02 Thread Adrian Chadd
2009/8/2 Heinz Diehl h...@fancy-poultry.org:

 1. Change cache_dir in squid from ufs to aufs.

That is almost always a good idea for any decent performance under any
sort of concurrent load. I'd like proof otherwise - if one finds it,
it indicates something which should be fixed.

 2. Format /dev/sdb1 with mkfs.xfs -f -l lazy-count=1,version=2 -i attr=2 -d 
 agcount=4
 3. Mount it afterwards using rw,noatime,logbsize=256k,logbufs=2,nobarrier 
 in fstab.

 4. Use cfq as the standard scheduler with the linux kernel

Just out of curiosity, why these settings? Do you have any research
which shows this?

 (Btw: on my systems, squid-2.7 is noticeably _a lot_ slower than squid-3,
 if the object is not in cache...)

This is an interesting statement. I can't think of any particular
reason why squid-2.7 should perform worse than Squid-3 in this
instance. This is the kind of "works by magic" stuff which deserves
investigation so the issue(s) can be fully understood. Otherwise you
may find that a regression creeps up in later Squid-3 versions because
all of the issues weren't fully understood and documented, and some
coder makes a change which they think won't have as much of an effect
as it does. It has certainly happened before in squid. :)

So, more information please.



Adrian


Re: [squid-users] Donate section not update

2009-08-01 Thread Adrian Chadd
The donations were always few and far between. I'm not sure if there
have been any real active donations in the last twelve months; I think
only Duane knows.


Adrian

2009/8/2 Juan C. Crespo R. jcre...@ifxnw.com.ve:
 Guys

    Checking the site I found there have been no donations since December
 2008 - or is it an error on the page? Because no donations makes it look
 like no one cares about this project, and that can't be possible, because I
 see a lot of people complaining and asking for features and error
 resolutions; I include myself in this group.

 Regards.




Re: [squid-users] High CPU utilization

2009-07-27 Thread Adrian Chadd
Change ufs to aufs - assuming you compiled in aufs.

Consider upgrading to Squid-2.7.STABLEx - I did a whole lot of little
performance tweaks between 2.6 and 2.7.

Learn about oprofile and submit some performance information to help
developers. :)



Adrian
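
Against the config quoted below, that first change is a one-word edit:

  cache_dir aufs /var/spool/squid 5000 50 256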

2009/7/28 jotacekm minu...@viaip.com.br:

 Hello.
 Recently we have added a lot more clients behind a squid proxy, and now CPU
 utilization is usually 70-95%. The processor is an Intel dual core 2160 @
 1.80GHz. Users started complaining about the speed when accessing pages, and
 the link is fine.

 Here is squidclient mgr:info:

 Squid Object Cache: Version 2.6.STABLE5
 Start Time:     Fri, 24 Jul 2009 20:19:16 GMT
 Current Time:   Mon, 27 Jul 2009 19:09:07 GMT
 Connection information for squid:
        Number of clients accessing cache:      1561
        Number of HTTP requests received:       8590404
        Number of ICP messages received:        0
        Number of ICP messages sent:    0
        Number of queued ICP replies:   0
        Number of HTCP messages received:       0
        Number of HTCP messages sent:   0
        Request failure ratio:   0.00
        Average HTTP requests per minute since start:   2021.3
        Average ICP messages per minute since start:    0.0
        Select loop called: 122560206 times, 2.081 ms avg
 Cache information for squid:
        Request Hit Ratios:     5min: 23.0%, 60min: 21.9%
        Byte Hit Ratios:        5min: 13.3%, 60min: 14.7%
        Request Memory Hit Ratios:      5min: 16.3%, 60min: 17.0%
        Request Disk Hit Ratios:        5min: 26.3%, 60min: 29.9%
        Storage Swap size:      4609784 KB
        Storage Mem size:       65692 KB
        Mean Object Size:       15.67 KB
        Requests given to unlinkd:      455490
 Median Service Times (seconds)  5 min    60 min:
        HTTP Requests (All):   1.24267  1.46131
        Cache Misses:          1.54242  1.81376
        Cache Hits:            0.28853  0.30459
        Near Hits:             1.31166  1.62803
        Not-Modified Replies:  0.23230  0.23230
        DNS Lookups:           0.29097  0.31806
        ICP Queries:           0.0  0.0
 Resource usage for squid:
        UP Time:        254991.358 seconds
        CPU Time:       56029.670 seconds
        CPU Usage:      21.97%
        CPU Usage, 5 minute avg:        94.38%
        CPU Usage, 60 minute avg:       93.36%
        Process Data Segment Size via sbrk(): 193008 KB
        Maximum Resident Size: 0 KB
        Page faults with physical i/o: 70
 Memory usage for squid via mallinfo():
        Total space in arena:  193008 KB
        Ordinary blocks:       179871 KB   6058 blks
        Small blocks:               0 KB      0 blks
        Holding blocks:          1080 KB      2 blks
        Free Small blocks:          0 KB
        Free Ordinary blocks:   13136 KB
        Total in use:          180951 KB 93%
        Total free:             13136 KB 7%
        Total size:            194088 KB
 Memory accounted for:
        Total accounted:       111416 KB
        memPoolAlloc calls: 973288506
        memPoolFree calls: 972077571
 File descriptor usage for squid:
        Maximum number of file descriptors:   4096
        Largest file desc currently in use:   1604
        Number of file desc currently in use: 1308
        Files queued for open:                   0
        Available number of file descriptors: 2788
        Reserved number of file descriptors:   100
        Store Disk files open:                  30
        IO loop method:                     epoll
 Internal Data Structures:
        300182 StoreEntries
         14733 StoreEntries with MemObjects
         14414 Hot Object Cache Items
        294106 on-disk objects

 And here is part of of squid.conf:

 http_port 3128
 visible_hostname xxx
 hierarchy_stoplist cgi-bin ?
 acl QUERY urlpath_regex cgi-bin \?
 cache deny QUERY
 acl apache rep_header Server ^Apache
 broken_vary_encoding allow apache
 access_log /var/log/squid/access.log squid
 cache_store_log none
 hosts_file /etc/hosts


 #
 --
 cache_mem 64 MB
 cache_dir ufs /var/spool/squid 5000 50 256
 cache_replacement_policy heap LFUDA
 maximum_object_size 51200 KB
 maximum_object_size_in_memory 64 KB
 memory_replacement_policy heap GDSF
 logfile_rotate 3


 #
 --
 refresh_pattern ^ftp:           1440    20%     10080
 refresh_pattern ^gopher:        1440    0%      1440
 refresh_pattern .               0       20%     4320
 acl all src 0.0.0.0/0.0.0.0
 acl manager proto cache_object
 acl localhost src 127.0.0.1/255.255.255.255
 acl to_localhost dst 127.0.0.0/8



 max_open_disk_fds 2046

 # timeouts
 connect_timeout 30 seconds
 shutdown_lifetime 5 seconds
 forward_timeout 2 minutes
 pconn_timeout 30 seconds
 persistent_request_timeout 1 minute
 request_timeout 2 

Re: [squid-users] Re: TCp_HIT problem

2009-07-25 Thread Adrian Chadd
2009/7/25 Amos Jeffries squ...@treenet.co.nz:

 ?? looks like your problem. Most of the web traffic you will ever see is
 under 2 MB big.
 Average size is somewhere between 32KB and 128KB depending on your clients.

Weird; my largest proxy customer with around 15,000 users or so now
behind one proxy has a different traffic distribution. 99% of requests
are under 64k, but over half the traffic is for objects above 8
megabytes. I've told Lusca to cache objects up to around 1900mbytes in
size and I so far have seen hits for objects up to a gigabyte.

(I'll publish actual stats when the client gives me the green light.)

2c,


Adrian


Re: [squid-users] Caching Pandora

2009-07-25 Thread Adrian Chadd
2009/7/26 Jason Spegal jspe...@comcast.net:
 I was able to cache Pandora by compiling with --enable-http-violations and
 using a refresh_pattern to cache everything regardless. This however broke
 everything by preventing proper refreshing of any site. If it could be
 worked where violations only happened as directly specified in the
 configuration it would be a workable solution. I did some testing and I
 could not confirm that it was anything in the configuration file itself that
 was causing the issue. I wouldn't recommend using this as such.

Perhaps you could email them and ask why they've made their content uncachable?

Having cachable video content on websites will make them much, much
less likely to begin being blocked by bandwidth-strapped end-sites. :)



Adrian
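
For anyone who wants the narrow version of Jason's experiment rather than
cache-everything: with a build using --enable-http-violations, a
refresh_pattern scoped to the audio servers alone would look something
like this (pattern illustrative, not tested against Pandora):

  refresh_pattern -i ^http://audio-.*\.pandora\.com/access/ 10080 90% 43200 ignore-no-cache ignore-private override-expire

The "?" URLs also need to escape any blanket "cache deny QUERY" /
hierarchy_stoplist rules in the same config.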


Re: [squid-users] Caching Pandora

2009-07-25 Thread Adrian Chadd
This doesn't surprise me. They may be trying to maximise outbound
bits, or to retain control over content, or they may not understand
caching - or some combination of the above.

I'd suggest contacting them and asking.




adrian

2009/7/26 Jason Spegal jspe...@comcast.net:
 A little bit messy but here are some snippets.

 ###Access.log

 1248572380.275    178 10.10.122.248 TCP_REFRESH_UNMODIFIED/304 232 GET
 http://images-sjl-1.pandora.com/images/public/amz/1/2/0/4/727361124021_500W_495H.jpg
 - DIRECT/208.85.40.13 -
 1248572409.144   8472 10.10.122.241 TCP_MISS/200 1581181 GET
 http://audio-sjl-t3-2.pandora.com/access/7008639604707703825.mp4? -
 DIRECT/208.85.41.38 application/octet-stream
 1248572439.512     94 10.10.122.241 TCP_MEM_HIT/200 55396 GET
 http://images-sjl-2.pandora.com/images/public/amz/3/0/2/3/602498413203_500W_499H.jpg
 - NONE/- image/jpeg
 1248572570.898    300 10.10.122.248 TCP_MISS/200 6521 GET
 http://images-sjl-3.pandora.com/images/public/amz/2/2/4/4/039841434422_130W_130H.jpg
 - DIRECT/208.85.41.23 image/jpeg
 1248572600.538  29937 10.10.122.248 TCP_MISS/200 7704188 GET
 http://audio-sjl-t3-2.pandora.com/access/3642267922875646389.mp3? -
 DIRECT/208.85.41.38 application/octet-stream
 1248572615.735  11507 10.10.122.241 TCP_MISS/200 2109481 GET
 http://audio-sjl-t2-2.pandora.com/access/5722981497105294607.mp4? -
 DIRECT/208.85.41.36 application/octet-stream
 1248572635.903    179 10.10.122.248 TCP_REFRESH_UNMODIFIED/304 232 GET
 http://images-sjl-3.pandora.com/images/public/amz/2/2/4/4/039841434422_130W_130H.jpg
 - DIRECT/208.85.41.23 -
 1248572641.444     40 10.10.122.241 TCP_HIT/200 21616 GET
 http://images-sjl-2.pandora.com/images/public/amz/8/7/6/1/602498611678_300W_273H.jpg
 - NONE/- image/jpeg

 ###Store.log

 1248572380.275 RELEASE -1  097EAE1108DCEF192ED1C3BFF1F6C1B5  304
 1248572380        -1        -1 unknown -1/0 GET
 http://images-sjl-1.pandora.com/images/public/amz/1/2/0/4/727361124021_500W_495H.jpg
 1248572409.144 RELEASE -1  6B93B1BF958703B3FC3CD1ADDD515695  200
 1248572400        -1 1248572400 application/octet-stream 1580815/1580815 GET
 http://audio-sjl-t3-2.pandora.com/access/7008639604707703825.mp4?
 1248572570.897 SWAPOUT 00 0004CF23 BEEE111A39B596B14903743011AF2C36  200
 1248572570 1248490006        -1 image/jpeg 6181/6181 GET
 http://images-sjl-3.pandora.com/images/public/amz/2/2/4/4/039841434422_130W_130H.jpg
 1248572600.538 RELEASE -1  070416ED935AD18DCA793569D2C6A652  200
 1248572570        -1 1248572570 application/octet-stream 7703822/7703822 GET
 http://audio-sjl-t3-2.pandora.com/access/3642267922875646389.mp3?
 1248572615.735 RELEASE -1  B0EB42B39131DF028BA3BE9A39CC24E4  200
 1248572604        -1 1248572604 application/octet-stream 2109115/2109115 GET
 http://audio-sjl-t2-2.pandora.com/access/5722981497105294607.mp4?
 1248572635.903 RELEASE -1  CDCA0D3510080D121E5578310976676E  304
 1248572635        -1        -1 unknown -1/0 GET
 http://images-sjl-3.pandora.com/images/public/amz/2/2/4/4/039841434422_130W_130H.jpg
 1248572886.822 RELEASE -1  A95C86074129546301911C2FC251071D  200
 1248572872        -1 1248572872 application/octet-stream 2086824/2086824 GET
 http://audio-sjl-t1-1.pandora.com/access/5188159311574708305.mp4?

 ###Wireshark

 Hypertext Transfer Protocol
 HTTP/1.0 200 OK\r\n
 Date: Sun, 26 Jul 2009 05:12:58 GMT\r\n
 Server: Apache\r\n
 Content-Length: 6137729\r\n
 Cache-Control: no-cache, no-store, must-revalidate, max-age=-1\r\n
 Pragma: no-cache, no-store\r\n
 Expires: -1\r\n
 Content-Type: application/octet-stream\r\n
 X-Cache: MISS from ichiban\r\n
 X-Cache-Lookup: MISS from ichiban:3128\r\n
 Via: 1.0 ichiban (squid)\r\n
 Proxy-Connection: keep-alive\r\n
 \r\n

 Amos Jeffries wrote:

 Jason Spegal wrote:

 I was able to cache Pandora by compiling with --enable-http-violations
 and using a refresh_pattern to cache everything regardless. This however
 broke everything by preventing proper refreshing of any site. If it could be
 worked where violations only happened as directly specified in the
 configuration it would be a workable solution. I did some testing and I
 could not confirm that it was anything in the configuration file itself that
 was causing the issue. I wouldn't recommend using this as such.


 Which indicates that there is fine tuning possible to cache just Pandora.
 Find yourself one of the Pandora URLs in your access.log and take a visit to
 www.redbot.org or the ircache.org cacheability engine.


 Amos




 Henrik Nordstrom wrote:

 Sat 2009-07-25 at 12:05 -0600, Brett Glass wrote:


 One of the largest consumers of our HTTP bandwidth is Pandora, the free
 music service. Unfortunately, Pandora marks its streams as non-cacheable 
 and
 also puts question marks in the URLs, which is a huge waste of bandwidth.
 How can this be overridden?


 The question mark can be ignored. See the cache directive. But if there
 are other parameters behind there (normally not logged) that just may

Re: [squid-users] rep_mime_type is evaluated before content has been reached ?

2009-07-21 Thread Adrian Chadd
2009/7/21 Soporte Técnico @lemNet sopo...@nodoalem.com.ar:
 rep_mime_type can't be used for parent selection because it is evaluated
 before the content has been received?

Correct.



Adrian


Re: AW: AW: AW: AW: AW: [squid-users] Squid 3.1.0.11 beta is available

2009-07-21 Thread Adrian Chadd
Just break on SIGABRT and SIGSEGV. The actual place in the code where
things failed will be slightly further up the callstack than the break
point but it -will- be triggered.

Just remember to ignore SIGPIPEs or you'll have a strangely failing Squid. :)



adrian
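
In gdb terms that is roughly:

  (gdb) handle SIGPIPE nostop noprint pass   # ignore SIGPIPEs
  (gdb) break abort                          # assertion failures call abort()
  (gdb) run -NCXd9
  ... wait for the assertion ...
  (gdb) bt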

2009/7/21 Marcus Kool marcus.k...@urlfilterdb.com:
 my 2 cents:
 someone needs to explain how to set a breakpoint,
 because when the assertion fails, the program exits
 (see previous emails: "Program exited with code 01").
 The question is where to set the breakpoint,
 but probably Amos knows where to set it.

 Marcus


 Silamael wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Zeller, Jan wrote:

 Hi Amos,

 I now explicitly enabled
 --enable-stacktraces Enable automatic call backtrace on fatal errors

 during the build and added CFLAGS=-g -ggdb in front of ./configure but
 the result seems to be the same...

 # ./squid -v
 Squid Cache: Version 3.1.0.11
 configure options:  '--prefix=/opt/squid-3.1.0.11' '--enable-icap-client'
 '--enable-ssl' '--enable-linux-netfilter' '--disable-ipv6'
 '--disable-translation' '--disable-auto-locale' '--with-pthreads'
 '--with-filedescriptors=32768' '--enable-stacktraces' 'CFLAGS=-g -ggdb'
 --with-squid=/usr/local/src/squid-3.1.0.11 --enable-ltdl-convenience
 2009/07/21 15:43:50| assertion failed: mem.cc:236: size ==
 StrPoolsAttrs[i].obj_size
 Aborted

 # gdb --args ./squid -NCXd9
 GNU gdb 6.8-debian
 Copyright (C) 2008 Free Software Foundation, Inc.
 License GPLv3+: GNU GPL version 3 or later
 http://gnu.org/licenses/gpl.html
 This is free software: you are free to change and redistribute it.
 There is NO WARRANTY, to the extent permitted by law.  Type show
 copying
 and show warranty for details.
 This GDB was configured as x86_64-linux-gnu...
 (gdb) bt
 No stack.
 (gdb) quit


 You forgot to tell gdb to run the program.
 # gdb --args ./squid -NCXd9
 start gdb and tell it to use -NCXd9 as arguments for squid
 When you get the gdb prompt, enter:
 (gdb) r
 which will run squid. When it crashes you type
 (gdb) bt
 to get the backtrace. If squid does not crash, typing bt is pretty
 useless. Same, if it even didn't run before ;)

 - -- Matthias
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.4.9 (GNU/Linux)
 Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

 iEYEARECAAYFAkpl0pYACgkQGgHcOSur6dRRagCfQpDfLaFqg1mLwJCVTcAUJRWP
 R+oAn2LnoLTxNJV6+YX+Q8Ja8ILUHayl
 =JhHL
 -END PGP SIGNATURE-






Re: [squid-users] Architecture for scaling delivery of large static files

2009-07-16 Thread Adrian Chadd
I was going to say; I'm tweaking the performance of a cache with 21
million objects in it now. That's a bit bigger than 2^24.

2009/7/16 Henrik Nordstrom hen...@henriknordstrom.net:
 tor 2009-07-16 klockan 14:29 +1200 skrev Amos Jeffries:

 For you with MB-GB files in Squid-2 that changes to faster Squid due to
 limiting RAM-cache to small files, with lots of large fast disks. Squid is
 limited to a few million (2^24) cache _objects_

 per cache_dir, and up to 32 (2^6) cache_dir.

 Regards
 Henrik




Re: [squid-users] Architecture for scaling delivery of large static files

2009-07-15 Thread Adrian Chadd
2009/7/16 Jamie Tufnell die...@googlemail.com:

 We are talking files up to 1GB in size here.  Taking that into
 consideration, would you still recommend this architecture?

On disk? Sure. The disk buffer cache helps quite a bit.

In memory? (as in, the squid hot object cache, not the buffer cache)
Not without investing some time into the Squid-2 fixes to do it.
I've toyed with it before and it's reasonably easy to fix without
hurting performance.

2c,


Adrian


Re: [squid-users] https from different Subnet not working

2009-07-14 Thread Adrian Chadd
2009/7/14 Jarosch, Ralph ralph.jaro...@justiz.niedersachsen.de:
 This is the latest supported squid-2 version for RHEL5.3

 And I want to use the dnsserver

Right. Well, besides the other posters' response about the cache peer
setup being a clue - you're choosing a peer based on source IP as far
as I can tell there - which leads me to think that perhaps that
particular cache has a problem. You didn't say which caches they were
in your config or error message so we can't check whether they're the
same or different.

But since you're using a supported squid for RHEL5.3, why don't you
contact Redhat for support? That is what you're paying them for.

adrian


Re: [squid-users] https from different Subnet not working

2009-07-14 Thread Adrian Chadd
Are you using a url rewriter program?

Also, why haven't you just emailed redhat support?



Adrian

2009/7/15 Jarosch, Ralph ralph.jaro...@justiz.niedersachsen.de:
 I found the section which rewrites the request in my cache.log.

 Can someone explain what happens there?

 2009/07/15 06:51:56| cbdataValid: 0x17f684f8
 2009/07/15 06:51:56| redirectHandleRead: {http:/golem.de 10.39.119.9/- - 
 CONNECT}
 2009/07/15 06:51:56| cbdataValid: 0x1808d4a8
 2009/07/15 06:51:56| cbdataUnlock: 0x1808d4a8
 2009/07/15 06:51:56| clientRedirectDone: 'erv-justiz.niedersachsen.de:443' 
 result=http:/golem.de
 2009/07/15 06:51:56| init-ing hdr: 0x1808f160 owner: 1
 2009/07/15 06:51:56| appending hdr: 0x1808f160 += 0x1808ec00
 2009/07/15 06:51:56| created entry 0x17f726f0: 'User-Agent: Mozilla/4.0 
 (compatible; MSIE 7.0; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; 
 .NET CLR 3.0.04506.30)'
 2009/07/15 06:51:56| 0x1808f160 adding entry: 50 at 0
 2009/07/15 06:51:56| created entry 0x17fcc870: 'Proxy-Connection: Keep-Alive'
 2009/07/15 06:51:56| 0x1808f160 adding entry: 41 at 1
 2009/07/15 06:51:56| created entry 0x17fc3190: 'Content-Length: 0'
 2009/07/15 06:51:56| 0x1808f160 adding entry: 14 at 2
 2009/07/15 06:51:56| created entry 0x17fcbd80: 'Host: 
 erv-justiz.niedersachsen.de'
 2009/07/15 06:51:56| 0x1808f160 adding entry: 27 at 3
 2009/07/15 06:51:56| created entry 0x17fcc990: 'Pragma: no-cache'
 2009/07/15 06:51:56| 0x1808f160 adding entry: 37 at 4
 2009/07/15 06:51:56| 0x1808f160 lookup for 37
 2009/07/15 06:51:56| 0x1808f160: joining for id 37
 2009/07/15 06:51:56| 0x1808f160: joined for id 37: no-cache
 2009/07/15 06:51:56| 0x1808f160 lookup for 7
 2009/07/15 06:51:56| 0x1808f160 lookup for 7
 2009/07/15 06:51:56| 0x1808f160 lookup for 40
 2009/07/15 06:51:56| 0x1808f160 lookup for 52
 2009/07/15 06:51:56| clientInterpretRequestHeaders: REQ_NOCACHE = SET
 2009/07/15 06:51:56| clientInterpretRequestHeaders: REQ_CACHABLE = NOT SET
 2009/07/15 06:51:56| clientInterpretRequestHeaders: REQ_HIERARCHICAL = NOT SET
 2009/07/15 06:51:56| clientProcessRequest: CONNECT 
 'http.justiz.niedersachsen.de:443'
 2009/07/15 06:51:56| aclCheckFast: list: (nil)
 2009/07/15 06:51:56| aclCheckFast: no matches, returning: 1
 2009/07/15 06:51:56| sslStart: 'CONNECT http.justiz.niedersachsen.de:443'
 2009/07/15 06:51:56| comm_open: FD 58 is a new socket
 2009/07/15 06:51:56| fd_open FD 58 http.justiz.niedersachsen.de:443
 2009/07/15 06:51:56| comm_add_close_handler: FD 58, handler=0x463e31, 
 data=0x1808d378

 -----Original Message-----
 From: Jarosch, Ralph [mailto:ralph.jaro...@justiz.niedersachsen.de]
 Sent: Tuesday, 14 July 2009 11:40
 To: squid-users@squid-cache.org
 Subject: RE: [squid-users] https from different Subnet not working

  -----Original Message-----
  From: adrian.ch...@gmail.com [mailto:adrian.ch...@gmail.com] On behalf
  of Adrian Chadd
  Sent: Tuesday, 14 July 2009 11:16
  To: Jarosch, Ralph
  Cc: squid-users@squid-cache.org
  Subject: Re: [squid-users] https from different Subnet not working
 
  2009/7/14 Jarosch, Ralph ralph.jaro...@justiz.niedersachsen.de:
   This is the latest support squid-2 version for RHEL5.3
  
   An I want to use the dnsserver
 
  Right. Well, besides the other posters' response about the cache peer
  setup being a clue - you're choosing a peer based on source IP as far
  as I can tell there - which leads me to think that perhaps that
  particular cache has a problem. You didn't say which caches they were
  in your config or error message so we can't check whether they're the
  same or different.
 
 Ok, sorry.
 The current path for a website request is

 Client -- headproxy (10.37.132.2) -- my cache proxies
 (10.37.132.5/6/7/8) -- proxy of our ISP -- internet

 The error message comes from the ISP proxy when I request
 something like https://www.ebay.com:

  The requested URL could not be retrieved
  ----------------------------------------
  While trying to retrieve the URL: http.yyy.xxx:443
       (yyy.xxx is our local domain)
  the following error was encountered:
   Unable to determine IP address from host name for
  The dnsserver returned:
   Name Error: The domain name does not exist.
  This means that:
   The cache was not able to resolve the hostname presented in the URL.
   Check if the address is correct.
  Your cache administrator is webmaster.
  ----------------------------------------
  Generated Tue, 14 Jul 2009 08:10:39 GMT by xxx
       (the answer comes from the ISP)
  (squid/2.5.STABLE12)

 I've made a tcpdump between our headproxy and our cache proxies, and
 there I can see that the headproxy changes the request from
 https://www.ebay.com to https.our.domain.com



  But since yo'ure using a supported squid for RHEL5.3, why don't you
  contact Redhat for support? That is why you're paying them for.
 
 
  adrian




Re: [squid-users] CentOS/Squid/Tproxy but no transfer

2009-07-13 Thread Adrian Chadd
2009/7/14 Amos Jeffries squ...@treenet.co.nz:

 Aha!  duplicate syn-ack is exactly the case I got a good trace of earlier.
 Turned out to be missing config on the cisco box.

Do you have an example of this particular (mis) configuration? The
note in the Wiki article isn't very clear.

 The Features/Tproxy4 wiki page now makes explicit mention of this and
 several possible workarounds.

 The problem seems to be that the WCCP automatic bypass for return traffic
 uses IP, which is not usable under TPROXY. Some other method of traffic
 detection and bypass must be explicitly added for traffic
 Squid-Cisco-Internet. In the old tproxy v2 configs (which still apply)
 the class 90 was used for this.

.. uhm, again, that isn't very clear. "Automatic bypass" isn't
explicitly configured anywhere, nor do I see anything in the tproxy2
config which mentions bypass with class 90. So I'm very curious what
exactly it is that people are seeing, with what exact
configuration(s).


Adrian


Re: [squid-users] CentOS/Squid/Tproxy but no transfer

2009-07-13 Thread Adrian Chadd
2009/7/14 Amos Jeffries squ...@treenet.co.nz:

 Do you have an example of this particular (mis) configuration? The
 note in the Wiki article isn't very clear.

 I don't. The admin only mentioned that adding a bypass on the service group
 fixed the issue.
 I had a tcpdump of a set of requests showing pairs of seemingly identical
 requests arriving from the router within 1 sec of each other. On deep
 inspection the slightly delayed one showed some minor alterations from the
 first, such as Squid makes.

Right. But what was the squid config, cisco config and network
topology for both the "doesn't work" and "works" setups?

 If there is any way to make the wiki clearer without wholesale inclusion of
 per-IOS config settings, go for it.

Well, it may boil down to per-IOS config, and per-platform, per-IOS
config. The problem is getting some more information to at least
document what is needed.

 The behavior I saw was:

  enable wccpv2 + NAT intercept with wiki config
   == perfectly working, not a sign of any squid-sourced packets.

Right, probably because it was using one service group and the
half-duplex redirection needed for normal, non-tproxy interception was
being done.

  swap NAT for tproxy4 with the wiki config (no change to WCCP or links)
   == loop trace showing squid outward packets coming IN from WCCP.

Yeah that won't work. :)

 So I say seems and appears to be an automatic bypass in WCCP or router
 somewhere. No idea where. may need bypassing manually to fix tproxy.

Well, the automatic bypass should be if the router sees packets from
an IP address or MAC of a registered device, it should be passing it
through. I have no idea whether it is doing this without explicit
don't further redirect rules (eg by deny entries in the redirect
list, or wccp exclude in, etc) because that may absolutely be
platform, IOS and WCCPv2 negotiation type dependent.

So please, poke the admin in question to get as much information about
the configuration and setup of everything.



Adrian


Re: [squid-users] How to do a limit quota download on a Squid proxy

2009-07-10 Thread Adrian Chadd
I had specified how to implement proper quota support for a client -
but the project unfortunately fell through.

Its easy to hook into the end of a HTTP request and mark how much
bandwidth was used. The missing piece is a method of permitting
network access for users so they can't easily access hundreds of
megabytes in a given download. I had outlined another helper process
to allow download quotas in configurable chunks - eg per megabyte.
The other missing piece was to be able to clear all connections from a
given IP or for a given user.

All of these are easy to do if someone has some motivation. :)
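
A rough sketch of the quota-helper piece, for anyone motivated (all names
and paths here are hypothetical, and a separate process - e.g. a log
tailer - is assumed to keep the per-IP byte counters up to date):

#!/usr/bin/perl -w
# external_acl helper sketch: answer OK while the client IP is under
# its daily byte quota, ERR once it has gone over
use strict;
use DB_File;
$| = 1;                            # squid helpers must not buffer output
my $quota = 100 * 1024 * 1024;     # e.g. 100 MB/day, as asked below
tie my %used, 'DB_File', '/var/squid/quota.db';
while (my $line = <STDIN>) {
    my ($ip) = split ' ', $line;   # squid sends the %SRC format token
    print((($used{$ip} || 0) < $quota) ? "OK\n" : "ERR\n");
}

wired into squid.conf along the lines of:

  external_acl_type quota ttl=60 %SRC /usr/local/bin/quota_check.pl
  acl under_quota external quota
  http_access deny !under_quota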


Adrian

2009/7/9 tintin_vefg54e654g maf1...@hotmail.fr:

 Hi everyone,

 my configuration is as follows:

 I have a Mandriva 2009.1 OS, with squid ( + sarg, and mrtg) proxy.
 so, in order to keep some bandwidth free for work use ^^ I would
 like to set download limits.

 I don't have user identification; I see my users by IP address.
 The matter is I would like to set a quota, as a limit of 100 MB per day of
 download per IP address.
 How is it possible to do such a thing?
 Is it possible at all? ^^
 if I dare ask ... in the most simple way.

 ok, thanks for your help,
 see ya' on the forum

 Tintin


 --
 View this message in context: 
 http://www.nabble.com/How-to-do-a-limit-quota-download-on-a-Squid-proxy-tp24410453p24410453.html
 Sent from the Squid - Users mailing list archive at Nabble.com.




Re: [squid-users] Updated CentOS/Squid/Tproxy Transparency steps.

2009-07-01 Thread Adrian Chadd
This won't work. You're only redirecting half of the traffic flow with
the wccp web-cache service group. The tproxy code is probably
correctly trying to originate packets -from- the client IP address to
the upstream server but because you're only redirecting half of the
packets (ie, packets from original client to upstream, and not also
the packets from the upstream to the client - and this is the flow
that needs to be hijacked!) things will hang.

You need to read the TPROXY2 examples and look at the Cisco/Squid WCCP
setup. There are two service groups configured - 80 and 90 - which
redirect client-to-server and server-to-client traffic respectively. They have
the right bits set in the service group definitions to redirect the
traffic correctly.
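
For reference, a minimal sketch of that two-service-group setup (all IPs
and interface names are placeholders; check the hash flags against your
platform and IOS):

squid.conf:

  wccp2_router 192.168.20.1
  wccp2_forwarding_method 1
  wccp2_return_method 1
  wccp2_service dynamic 80
  wccp2_service_info 80 protocol=tcp flags=src_ip_hash priority=240 ports=80
  wccp2_service dynamic 90
  wccp2_service_info 90 protocol=tcp flags=dst_ip_hash,ports_source priority=240 ports=80

Cisco side:

  ip wccp 80
  ip wccp 90
  interface FastEthernet0/1
   ! client-facing: redirect client -> server traffic
   ip wccp 80 redirect in
  interface Serial0/0
   ! internet-facing: redirect server -> client (return) traffic
   ip wccp 90 redirect in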

The WCCPv2/TPROXY4 pages are hilariously unclear. I ended up having to
find the TPROXY2 pages to extract the right WCCPv2 setup to use,
then combine that with the TPROXY4 rules. That is fine for me (I know
a thing or two about this) but it should all be made much, much
clearer for people trying to set this up.

As I suggested earlier, you may wish to consider fleshing out an
interception section in the Wiki complete with explanations about how
all of the various parts of the puzzle hold together.

2c,


adrian

2009/7/2 Alexandre DeAraujo al...@cal.net:
 I am giving this one more try, but have been unsuccessful. Any help is always 
 greatly appreciated.

 Here is the setup:
 Router:
 Cisco 7200 IOS 12.4(25)
 ip wccp web-cache redirect-list 11
 access-list 11 permits only selective ip addresses to use wccp

 Wan interface (Serial)
 ip wccp web-cache redirect out

 Global WCCP information:
 Router information:
 Router Identifier:                      192.168.20.1
 Protocol Version:                       2.0

 Service Identifier: web-cache
 Number of Service Group Clients:        1
 Number of Service Group Routers:        1
 Total Packets s/w Redirected:   8797
 Process:                                4723
 Fast:                                   0
 CEF:                                    4074
 Redirect access-list:                   11
 Total Packets Denied Redirect:  124925546
 Total Packets Unassigned:               924514
 Group access-list:                      -none-
 Total Messages Denied to Group: 0
 Total Authentication failures:          0
 Total Bypassed Packets Received:        0

 WCCP Client information:
 WCCP Client ID: 192.168.20.2
 Protocol Version:       2.0
 State:                  Usable
 Initial Hash Info:      
                        
 Assigned Hash Info:     
                        
 Hash Allotment: 256 (100.00%)
 Packets s/w Redirected: 306
 Connect Time:           00:21:33
 Bypassed Packets
 Process:                0
 Fast:                   0
 CEF:                    0
 Errors:                 0

 Clients are on FEthernet0/1
 Squid server is the only device on FEthernet0/3
 
 Squid Server:
 eth0      Link encap:Ethernet  HWaddr 00:14:22:21:A1:7D
          inet addr:192.168.20.2  Bcast:192.168.20.7  Mask:255.255.255.248
          inet6 addr: fe80::214:22ff:fe21:a17d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3325 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2606 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:335149 (327.2 KiB)  TX bytes:394943 (385.6 KiB)

 gre0      Link encap:UNSPEC  HWaddr 
 00-00-00-00-CB-BF-F4-FF-00-00-00-00-00-00-00-00
          inet addr:192.168.20.2  Mask:255.255.255.248
          UP RUNNING NOARP  MTU:1476  Metric:1
          RX packets:400 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:31760 (31.0 KiB)  TX bytes:0 (0.0 b)
 
 /etc/rc.d/rc.local file:
 ip rule add fwmark 1 lookup 100
 ip route add local 0.0.0.0/0 dev lo table 100
 modprobe ip_gre
 ifconfig gre0 192.168.20.2 netmask 255.255.255.248 up
 echo 1 > /proc/sys/net/ipv4/ip_nonlocal_bind
 
 /etc/sysconfig/iptables file:
 # Generated by iptables-save v1.4.4 on Wed Jul  1 03:32:55 2009
 *mangle
 :PREROUTING ACCEPT [166:11172]
 :INPUT ACCEPT [164:8718]
 :FORWARD ACCEPT [0:0]
 :OUTPUT ACCEPT [130:12272]
 :POSTROUTING ACCEPT [130:12272]
 :DIVERT - [0:0]
 -A DIVERT -j MARK --set-xmark 0x1/0xffffffff
 -A DIVERT -j ACCEPT
 -A PREROUTING -p tcp -m socket -j DIVERT
 -A PREROUTING -p tcp -m tcp --dport 80 -j TPROXY --on-port 3128 --on-ip 
 192.168.20.2 --tproxy-mark 0x1/0x1
 COMMIT
 # Completed on Wed Jul  1 03:32:55 2009
 # Generated by iptables-save v1.4.4 on Wed Jul  1 03:32:55 2009
 *filter
 :INPUT ACCEPT [0:0]
 :FORWARD ACCEPT 

Re: [squid-users] squid becomes very slow during peak hours

2009-06-30 Thread Adrian Chadd
Upgrade to a later Squid version!



adrian

2009/6/30 goody goody think...@yahoo.com:

 Hi there,

 I am running squid 2.5 on freebsd 7, and my squid box responds very slowly
 during peak hours. My squid machine has twin dual-core processors, 4 GB RAM and
 the following hdds.

 Filesystem     Size    Used   Avail Capacity  Mounted on
 /dev/da0s1a    9.7G    241M    8.7G     3%    /
 devfs          1.0K    1.0K      0B   100%    /dev
 /dev/da0s1f     73G     35G     32G    52%    /cache1
 /dev/da0s1g     73G    2.0G     65G     3%    /cache2
 /dev/da0s1e     39G    2.5G     33G     7%    /usr
 /dev/da0s1d     58G    6.4G     47G    12%    /var


 Below are the status and settings I have. I need further guidance to
 improve the box.

 last pid: 50046;  load averages:  1.02,  1.07,  1.02    up 7+20:35:29  15:21:42
 26 processes:  2 running, 24 sleeping
 CPU states: 25.4% user,  0.0% nice,  1.3% system,  0.8% interrupt, 72.5% idle
 Mem: 378M Active, 1327M Inact, 192M Wired, 98M Cache, 112M Buf, 3708K Free
 Swap: 4096M Total, 20K Used, 4096M Free

  PID USERNAME      THR PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
 49819 sbt    1 105    0   360M   351M CPU3   3  92:43 98.14% squid
  487 root            1  96    0  4372K  2052K select 0  57:00  3.47% natd
  646 root            1  96    0 16032K 12192K select 3  54:28  0.00% snmpd
 49821 sbt    1  -4    0  3652K  1048K msgrcv 0   0:13  0.00% diskd
 49822 sbt    1  -4    0  3652K  1048K msgrcv 0   0:10  0.00% diskd
 49864 root            1  96    0  3488K  1536K CPU2   1   0:04  0.00% top
  562 root            1  96    0  3156K  1008K select 0   0:04  0.00% syslogd
  717 root            1   8    0  3184K  1048K nanslp 0   0:02  0.00% cron
 49631 x-man           1  96    0  8384K  2792K select 0   0:01  0.00% sshd
 49635 root            1  20    0  5476K  2360K pause  0   0:00  0.00% csh
 49628 root            1   4    0  8384K  2776K sbwait 1   0:00  0.00% sshd
  710 root            1  96    0  5616K  2172K select 1   0:00  0.00% sshd
 49634 x-man           1   8    0  3592K  1300K wait   1   0:00  0.00% su
 49820 sbt    1  -8    0  1352K   496K piperd 3   0:00  0.00% unlinkd
 49633 x-man           1   8    0  3456K  1280K wait   3   0:00  0.00% sh
  765 root            1   5    0  3156K   872K ttyin  1   0:00  0.00% getty
  766 root            1   5    0  3156K   872K ttyin  2   0:00  0.00% getty
  767 root            1   5    0  3156K   872K ttyin  2   0:00  0.00% getty
  769 root            1   5    0  3156K   872K ttyin  3   0:00  0.00% getty
  771 root            1   5    0  3156K   872K ttyin  1   0:00  0.00% getty
  770 root            1   5    0  3156K   872K ttyin  0   0:00  0.00% getty
  768 root            1   5    0  3156K   872K ttyin  3   0:00  0.00% getty
  772 root            1   5    0  3156K   872K ttyin  1   0:00  0.00% getty
 47303 root            1   8    0  8080K  3560K wait   1   0:00  0.00% squid
  426 root            1  96    0  1888K   420K select 0   0:00  0.00% devd
  146 root            1  20    0  1356K   668K pause  0   0:00  0.00% adjkerntz


 pxy# iostat
      tty             da0            pass0             cpu
  tin tout  KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
   0  126 12.79   5  0.06   0.00   0  0.00   4  0  1  0 95

 pxy# vmstat
  procs      memory      page                    disks     faults      cpu
  r b w     avm    fre   flt  re  pi  po    fr  sr da0 pa0   in   sy   cs us 
 sy id
  1 3 0  458044 103268    12   0   0   0    30   5   0   0  273 1721 2553  4  
 1 95

 pxy# netstat -am
 1376/1414/2790 mbufs in use (current/cache/total)
 1214/1372/2586/25600 mbuf clusters in use (current/cache/total/max)
 1214/577 mbuf+clusters out of packet secondary zone in use (current/cache)
 147/715/862/12800 4k (page size) jumbo clusters in use 
 (current/cache/total/max)
 0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
 0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
 3360K/5957K/9317K bytes allocated to network (current/cache/total)
 0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
 0/0/0 requests for jumbo clusters denied (4k/9k/16k)
 0/7/6656 sfbufs in use (current/peak/max)
 0 requests for sfbufs denied
 0 requests for sfbufs delayed
 0 requests for I/O initiated by sendfile
 0 calls to protocol drain routines


 The command netstat -an | grep TIME_WAIT | more gives 17 full screens of output.

 some lines from squid.conf
 cache_mem 256 MB
 cache_replacement_policy heap LFUDA
 memory_replacement_policy heap GDSF

 cache_swap_low 80
 cache_swap_high 90

 cache_dir diskd /cache2 6 16 256 Q1=72 Q2=64
 cache_dir diskd /cache1 6 16 256 Q1=72 Q2=64

 cache_log /var/log/squid25/cache.log
 cache_access_log /var/log/squid25/access.log
 cache_store_log none

 half_closed_clients off
 maximum_object_size 1024 KB

 pxy# sysctl -a | grep maxproc
 kern.maxproc: 6164
 kern.maxprocperuid: 5547
 kern.ipc.somaxconn: 1024
 kern.maxfiles: 

Re: [squid-users] Architecture

2009-06-29 Thread Adrian Chadd
2009/6/30 Ronan Lucio lis...@tiper.com.br:

 Could you tell what hardware do you use?
 Reading Squid-Guide
 (http://www.deckle.co.za/squid-users-guide/Installing_Squid) it says Squid
 isn't CPU intensive, and says multiprocessor machines would not increase speed
 dramatically.


It's a dual quad-core AMD of some sort. Squid is CPU intensive but
currently only uses 1 CPU for the main application. You'll get
benefits from having multi-core machines but only for offloading
network and disk processing onto them.

 I know this doc is quite old, but it talks about machines like a Pentium 133
 with 128 MB RAM.

 So initially I was thinking in Dual QuadCore + 4Gb RAM. Now I'm thinking in
 a Single QuadCore + 2Gb.

Another Squid rule - as much RAM as possible.

 What do you think about that?

 I think a throughput like yours would be great for me.

 Another question: How many disks do you use?
 In other words: Do I need some special disk strategy to achieve such a
 throughput?

Like anything, your best bet is to test and document the performance.
In this case, it's lots of disk on a sensible RAID controller, but no
RAID. I wasn't given time to benchmark RAID vs non-RAID but in this
particular workload, RAID has never ever been faster in my testing in
cases other than the RAID card itself being buggy. Others have a
differing opinion.


Adrian


Re: [squid-users] Updated CentOS/Squid/Tproxy Transparency steps.

2009-06-27 Thread Adrian Chadd
Good writeup!

I'm rapidly coming to the conclusion that the problem with
transparency setups is not just a lack of documentation and examples,
but a lack of clear explanation and understanding of what is actually
going on.

I had one user try to manually configure GRE interfaces on the Cisco
side because that is how they thought WCCP worked. Another policy
routed TCP to the proxy and didn't quite get why some connections
where hanging (ICMP doesn't make it to the proxy, so PMTU is
guaranteed to break without blackhole detection in one or more
participants end-nodes/proxy.) Combined with all of the crazy IOS
related bugs and crackery that is going on and I'm not really
surprised the average joe doesn't have much luck. :)

I reckon what would be really, really useful is a writeup of all of
the related technologies involved in all parts of transparent
interception, including a writeup on what WCCPv2 actually is and how
it works; what the various interception options are and do (especially
TPROXY4, which AFAICT is severely lacking in -actual- documentation
about what it is, how it works and how to code for it) so there is at
least a small chance that someone with a bit of clue can easily figure
all the pieces out and debug stuff.

I also see people doing TPROXY4/Linux hackery involving -bridging-
proxies instead of routed/WCCPv2 proxies. That is another fun one.

Finally, figuring out how to tie all of that junk into a cache
hierarchy is also hilariously amusing to get right.

Just for the record, the kernel and iptables binary shipping with the
latest Debian unstable supports TPROXY4 fine. I didn't have to
recompile my kernel or anything - I just had to tweak a few things
(disable pmtu, for example) and add some iptables rules. Oh, and
compile Squid right.

2c,


Adrian


Re: [squid-users] Cache youtube videos WITHOUT videocache?

2009-06-27 Thread Adrian Chadd
2009/7/20 Mark Lodge mlodg...@gmail.com:
 I've come across this at
 http://wiki.squid-cache.org/Features/StoreUrlRewrite

 Feature: Store URL Rewriting?

 Does this mean i can cache videos without using videocache?

That was the intention. Unfortunately, people didn't really pick up on
the power of the feature and have stuck to abusing the redirector API
to serve this kind of content.

The advantage of the redirector approach is that it can bypass all of
the cache rule checking which goes on inside Squid. A lot of these
video (and CDN content sites in general - they charge for content
served! :) make content caching quite difficult if not impossible. The
store URL rewriting scheme also requires a set of refresh patterns to
override the don't cache me please! tags added to content.
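
As a rough illustration, the pieces involved might look like this (the
video URL pattern and the SQUIDINTERNAL key below are illustrative only,
and the refresh_pattern values are a guess - tune for your site):

#!/usr/bin/perl -w
# storeurl_rewrite_program sketch: map ever-changing video URLs onto
# one stable internal cache key
use strict;
$| = 1;                            # unbuffered, as squid helpers require
while (my $line = <STDIN>) {
    my ($url) = split ' ', $line;
    if ($url =~ m{/get_video\?.*\bvideo_id=([^&\s]+)}) {
        $url = "http://video.youtube.SQUIDINTERNAL/get_video?video_id=$1";
    }
    print "$url\n";
}

and in squid.conf (2.7):

  storeurl_rewrite_program /usr/local/bin/storeurl.pl
  refresh_pattern -i SQUIDINTERNAL 10080 90% 43200 override-expire ignore-reload ignore-private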

I'd love to see a community take on board the store URL rewriter
interface and maintain rulesets for caching youtube, maps, windows
updates, etc. It just doesn't seem like it'll happen.



Adrian


Re: [squid-users] Internal redirector

2009-06-26 Thread Adrian Chadd
Squid-2.HEAD has some internal rewriting support.

I'm breaking it out into a separate module in Lusca (rather than being
an optional part of the external rewriter) to make using it in
conjunction with the external URL rewriter possible.



Adrian

2009/6/26 Jeff Pang pa...@laposte.net:
 Does squid support internal redirects officially?
 If not, using an external redirector is simple enough.

 #!/usr/bin/perl -wl

 $|=1;   # don't buffer the output

 while (<STDIN>) {

        our ($uri,$client,$ident,$method) = split;
        $uri =~ s/[?&]begin=[0-9]*//;   # strip any begin=NNN attribute

 } continue {
        print $uri;
 }

 2009/6/26 Chudy Fernandez chudy_fernan...@yahoo.com:

 can we use the internal redirector (rewrite feature) to replace/remove
 a regex match (begin=[0-9]*) on a URL?

 like..
 http://www.foo.com/video.flv?begin=900
 to
 http://www.foo.com/video.flv







 --
 In this magical land, everywhere
 is in full bloom with flowers of evil.
                     - Jeff Pang (CN)




Re: [squid-users] Squid/PDF

2009-06-26 Thread Adrian Chadd
2009/6/26 Phibee Network Operation Center n...@phibee.net:
 ok, so the bug is not resolved, no?

The bugs get resolved when someone contributes a fix.. :)



Adrian


Re: [squid-users] Architecture

2009-06-26 Thread Adrian Chadd
2009/6/27 Chris Robertson crobert...@gci.net:

 I'm running a strictly forward proxy setup, which puts an entirely different
 load on the system.  It's also a pretty low load (peaks of 160 req/sec at
 25mbit/sec).

Just another random datapoint - I've just deployed my Squid-2
derivative (which is at least as fast as Squid-2.HEAD) as a forward
proxy on some current generation hardware. It's peaking at 700
requests/sec and ~120mbit a sec with a ~ 30% byte hit rate.

A reverse proxy with a high hit rate should do quite a bit better than that.


Adrian


Re: [squid-users] Split caching by size

2009-05-20 Thread Adrian Chadd
It's a per-cache_dir option in Squid-2.7 and above; I'm not sure about 3.
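
Presumably that's the min-size cache_dir option; under that assumption, the
split asked about might look like (sizes illustrative, min-size in bytes):

  maximum_object_size_in_memory 4096 KB
  cache_dir aufs /cache1 100000 16 256 min-size=4194304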



Adrian

2009/5/20 Jason Spegal jspe...@comcast.net:
 Just tested and verified this. At least in Squid 3.0 minimum_object_size
 affects both memory and disk caches. Anyone know if this is true in 3.1 as
 well? Any thoughts as to how to split it? I may be wrong and likely am but I
 recall there was separate minimum_object_size for each cache at one time.

 Chris Robertson wrote:

 Jason Spegal wrote:

 How do I configure squid to only cache small objects, say less than 4mb
 in memory cache,

 http://www.squid-cache.org/Doc/config/maximum_object_size_in_memory/

 and only objects larger than 4mb to the disk?

 http://www.squid-cache.org/Doc/config/minimum_object_size/

 I want to optimize the cache based on object size. The reasoning is the
 small stuff will change often and be accessed the most while the larger
 items that tie up bandwidth will not change as often and I can cache more
 aggressively. Also this way I minimize disk io and lag. I am using squid
 3.0. While I can see this being done with the disk cache I am not certain
 the memory cache can be configured like this anymore as the options seem to
 be missing.

 Thanks,
  Jason

 Chris




Re: [squid-users] WCCP return method

2009-05-05 Thread Adrian Chadd
Squid doesn't currently implement any smarts for the WCCPv2 return path.



Adrian

2009/5/6 kgardenia42 kgardeni...@googlemail.com:
 On Fri, May 1, 2009 at 5:28 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 kgardenia42 wrote:

 On 4/30/09, Ritter, Nicholas nicholas.rit...@americantv.com wrote:

 * WCCP supports a return method for packets which the web-cache
 decides to reject/return.  Does squid support this?  I see that the
 return method can be configured in squid but is the support for
 returning actually there?

 I dunno about this one.

 Does anyone know the answer to this?  I'd just like to know what squid
 can do when it comes to return method.

 Only whats documented.

 http://www.squid-cache.org/Doc/config/wccp2_return_method/

 In what circumstances would squid decide to trigger the return
 mechanism currently?  I was looking at the source and I couldn't see
 where this might be implemented.

 One of the reasons I ask is that since I'm using iptables to forward
 things to the local squid port that came to me via WCCP I was
 wondering if it was feasible to take squid out of the loop just by
 changing my iptables rules to reject packets forwarded by WCCP but I
 don't know enough about WCCP return methods to know if it is possible to
 use the return method mechanism to return such packets back to the
 router.

 Can anyone who is knowledgeable about this please help?

 Thanks,




[squid-users] /dev/poll solaris 10 fixes

2009-05-03 Thread Adrian Chadd
I'm giving my /dev/poll (Solaris 10) code a good thrashing on some
updated Sun hardware. I've fixed one silly bug of mine in 2.7 and
2.HEAD.

If you're running Solaris 10 and not using the /dev/poll code then
please try out the current CVS version(s) or wait for tomorrow's
snapshots.

I'll commit whatever other fixes are needed in this environment here :)

Thanks,


Adrian


Re: [squid-users] Scalability in serving large ammount of concurrent requests

2009-05-02 Thread Adrian Chadd
it means they didn't bother investigating the problem and reporting
back to squid-users/squid-dev.

They may find that Squid-2.7 (and my squid-2 fork) perform a ton
better than whatever version they tried.

I'm trying to continue benchmarking my local Squid-2 fork against
simulated lots of concurrent sessions but the main problem is
finding free/open tools to simulate internet traffic levels.
Polygraph just can't simulate that many concurrent requests at a
decent enough traffic rate without significant equipment investment. I
have this nasty feeling I'm going to have to invent my own..

2c,


Adrian

2009/5/2 Roy M. setesting...@gmail.com:
 In http://highscalability.com/youtube-architecture , under Serving
 Thumbnails, it said:

 .
 - Used squid (reverse proxy) in front of Apache. This worked for a
 while, but as load increased performance eventually decreased. Went
 from 300 requests/second to 20.
 .

 So does it mean squid is not suitable for serving a large amount of
 concurrent requests (as compared to apache)?


 Thanks.




[squid-users] Resigning from squid-core

2009-01-31 Thread Adrian Chadd
Hi all,

It's been a tough decision, but I'm resigning from any further active
role in the Squid core group and cutting back on contributing towards
Squid development.

I'd like to wish the rest of the active developers all the best in the
future, and thank everyone here for helping me develop and test my
performance and feature related Squid work.



Adrian


Re: [squid-users] Frequent cache rebuilding

2009-01-22 Thread Adrian Chadd
2009/1/21 Amos Jeffries squ...@treenet.co.nz:

 Yes it can. Squid's passing through of large objects is much more
 efficient than its pass-thru of small objects. A few dozen clients
 simultaneously grabbing movies or streaming through a CONNECT request can
 saturate a multi-GB link network buffer easily.
 A dual-core Xeon _should_ be able to saturate a 10GB link with all clients
 online.

Has anyone tried this?

The last time I tried multi-gige with Squid, it didn't really hit
anywhere near 10GE with streaming data because current kernels are
optimised with the idea that people will write concurrent software,
and so will run multiple threads to do the socket and network stuff
(copyin/copyout/tcp/ip stuff, with a kernel thread handling part of
the NIC stuff and potentially some of the TX/RX.)




Adrian


Re: [squid-users] cache_mem

2009-01-22 Thread Adrian Chadd
2009/1/22 Amos Jeffries squ...@treenet.co.nz:

 How intensive is intensive? At the moment squid is averaging a mere 2.4%
 processor time.

 IIRC older Squid-2 had to step a linked-list the length of the object in 4KB
 chunks to perform one of the basic operations (network write I think).

Yeah - the memory cache in Squid-2 was really only initially designed
as a sort of data pipeline between the server, the store, and the
client-side. It sort of grew the stuff needed to be a memory cache
by virtue, iirc, of wanting to support one incoming stream - multiple
client retrievals without having to always go via the disk store for
it.

Unless you need the extra boost it gives you in very specific circumstances:

* use a low cache_mem; but if you notice that you're hitting the disk often,
* use a larger cache_mem, keeping maximum_object_size_in_memory down
to around 64k
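
In squid.conf terms the latter would look something like (values
illustrative):

  cache_mem 512 MB
  maximum_object_size_in_memory 64 KB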

Squid-3 sort of fixed this. It wasn't ever fully fixed, much like
how the problem could be fixed in Squid-2 if someone wanted to do the
slight trickery required.



Adrian


Re: [squid-users] (help!) strange things about maximum_object_size_in_memory in squid.conf

2009-01-21 Thread Adrian Chadd
Then it may be a bug. :)



Adrian

2009/1/20 Tawan Won taehwan.w...@gmail.com:
 As you can see from the output of the object dump in my previous mail, there is no client
 fetching the object.
 If an object has clients fetching it, object dump should print out the
 client list information too, if any.
 In addition, at the dump time,  squid had no client connection.




 -Original Message-
 From: adrian.ch...@gmail.com [mailto:adrian.ch...@gmail.com] On Behalf Of
 Adrian Chadd
 Sent: Wednesday, January 21, 2009 10:42 AM
 To: taehwan.w...@gmail.com
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] (help!) strange things about
 maximum_object_size_in_memory in squid.conf

 If it hasn't been swapped out to disk, the object has to stay in RAM
 until the client(s) currently fetching from it have fetched enough for
 part of the object (ie, the stuff at the beginning which has been sent
 to clients) to be freed.



 Adrian


 2009/1/20 Taehwan Weon taehwan.w...@gmail.com:
 Hi,

 I am using squid 2.6.STABLE 21 on linux.
 my squid.conf has the following settings:
  maximum_object_size_in_memory 10 KB
  cache_mem 3072 MB
  maximum_object_size   2000 MB
  minimum_object_size  0

 After running squid for more than 1 months, I ran 'squidclient 
 mgr:vm_objects' to
 look at the transit/hot object size.

 Even if I SET the maximum in-memory object size to 10 KB,
 squid HAD the following objects!  (the diff of inmem_lo and inmem_hi is
 299KB)


 KEY CD0154B911563741E3E69CDB2E2D6FF0
  GET http://images.test.com/test_data/61/99/319.jpg
  STORE_OK  IN_MEMORY SWAPOUT_NONE PING_DONE
  CACHABLE,DISPATCHED,VALIDATED
  LV:1232416172 LU:1232417107 LM:1231909989 EX:-1
  0 locks, 0 clients, 6 refs
  Swap Dir -1, File 0X
  inmem_lo: 0
  inmem_hi: 299187
  swapout: 0 bytes queued


 In Squid, The Definitive Guide published by O'Reilly,
 maximum_object_size_in_memory is the diff of inmem_lo and inmem_hi.
 But the real implementation seems to behave strangely.

 Any help will be highly appreciated.

 Thanks  in advance.

 Tawan Won






[squid-users] squidtools rewriter and substistution support

2009-01-20 Thread Adrian Chadd
hi everyone,

Someone posted a request here a few days ago for how to convince
squirm to use parts of a regular expression match in a rewritten
URL. This isn't a new request, and I've done it a bunch of times in my
own rewriters, but I figured I should get around to doing it in a
generic way so I don't have to keep re-inventing the wheel.

So now my rewriter supports using matches in the rewritten URL. This
means a rule as such

http://www.foo.com/(.*)$ http://bar.com/$1

.. will work.
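
Under the hood that boils down to a loop like this (a minimal sketch with
the rule table hard-coded; the real tool reads its rules from a config
file):

#!/usr/bin/perl -w
# expand $1, $2, ... backreferences from a matched rule into the
# replacement URL, inside the usual squid redirector loop
use strict;
$| = 1;
my @rules = ( [ qr{^http://www\.foo\.com/(.*)$}, 'http://bar.com/$1' ] );
while (my $line = <STDIN>) {
    my ($url) = split ' ', $line;
    foreach my $rule (@rules) {
        my ($pat, $dst) = @$rule;
        if (my @m = ($url =~ $pat)) {
            ($url = $dst) =~ s/\$(\d+)/$m[$1 - 1]/ge;
            last;
        }
    }
    print "$url\n";
}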

I'm currently in the process of rewriting some of my Youtube rules to
use my URL rewriter now instead of a custom bit of perl code.

Hopefully having this simple rewriter out there will tease a few of
you to start using it and sharing configuration file snippets, which
is a whole lot easier than trying to share rewriter code. :)

Have fun,


Adrian
(http://code.google.com/p/squidtools/)


Re: [squid-users] (help!) strange things about maximum_object_size_in_memory in squid.conf

2009-01-20 Thread Adrian Chadd
If it hasn't been swapped out to disk, the object has to stay in RAM
until the client(s) currently fetching from it have fetched enough for
part of the object (ie, the stuff at the beginning which has been sent
to clients) to be freed.



Adrian


2009/1/20 Taehwan Weon taehwan.w...@gmail.com:
 Hi,

 I am using squid 2.6.STABLE 21 on linux.
 my squid.conf has the following settings:
  maximum_object_size_in_memory 10 KB
  cache_mem 3072 MB
  maximum_object_size   2000 MB
  minimum_object_size  0

 After running squid for more than 1 months, I ran 'squidclient 
 mgr:vm_objects' to
 look at the transit/hot object size.

 Even if I SET the maximum in-memory object size to 10 KB,
 squid HAD the following objects!  (the diff of inmem_lo and inmem_hi is 299KB)


 KEY CD0154B911563741E3E69CDB2E2D6FF0
  GET http://images.test.com/test_data/61/99/319.jpg
  STORE_OK  IN_MEMORY SWAPOUT_NONE PING_DONE
  CACHABLE,DISPATCHED,VALIDATED
  LV:1232416172 LU:1232417107 LM:1231909989 EX:-1
  0 locks, 0 clients, 6 refs
  Swap Dir -1, File 0X
  inmem_lo: 0
  inmem_hi: 299187
  swapout: 0 bytes queued


 In Squid, The Definitive Guide published by O'Reilly,
 maximum_object_size_in_memory is the diff of inmem_lo and inmem_hi.
 But the real implementation seems to behave strangely.

 Any help will be highly appreciated.

 Thanks  in advance.

 Tawan Won




[squid-users] squidtools collection

2009-01-18 Thread Adrian Chadd
Hi everyone,

Just letting you all know that I'm (slowly) tidying up and uploading
the various squid related tools that I've written over the years (the
ones I can find / release :) into another googlecode project.

The url is: http://code.google.com/p/squidtools/

There's not much there at the moment. There's a simple redirector for
URL filtering/rewriting (thanks to a support contract client who
needed something stable to replace what he was using!) and my example
external_acl helper which implements filtering against the
phishtank.com blacklist.

If anyone has any interesting squid tools they'd like to include in
the squidtools code project then please let me know. I'd like to
eventually have the whole collection available as a single set which
can be packaged up and installed together to enhance existing and new
Squid (and cacheboy :) installations.

Thanks,


Adrian


Re: [squid-users] COSS causing squid Segment Violation on FreeBSD 6.2S (store_io_coss.c)

2009-01-17 Thread Adrian Chadd
2009/1/15 Mark Powell m.s.pow...@salford.ac.uk:

  Did you manage to get that FreeBSD 7 server working with COSS?

 Well did you :)

Yes. At least in testing. I don't (yet) have a client running
FreeBSD-7 and using COSS.

  This problem still exists in the latest squid. Any likelihood of a fix, or
 is COSS not recommended for FBSD?
  Thanks for your time.

FreeBSD-7 (and FreeBSD-current) + AUFS + COSS works fine in
Squid-2.HEAD at least in a polygraph polymix-4 workload. Admittedly
I've been testing it in Cacheboy-1.6 rather than Squid-2.HEAD, but the
COSS code should be the same as far as this bug is concerned.

The fact that you have many pending relocate errors may mean something
else is busted, but as far as I can tell you're the only person who
has reported that COSS problem in particular.

Not that COSS is a fantastically clean codebase to begin with; a lot
of hacking went into it to properly support async disk IO and thus
perform with any semblance of working well. It's possible there's a bug
which I just haven't seen in production.

2c,


Adrian


 adrian


 2008/9/12 Mark Powell m.s.pow...@salford.ac.uk:

 On Fri, 12 Sep 2008, Amos Jeffries wrote:

 Can you report a bug on this please, so we don't forget it. with a
 stack
 trace when the crash is occuring.

 Already did, last year:

 http://www.squid-cache.org/bugs/show_bug.cgi?id=1944

 Does this mean that COSS can't be successfully used with FreeBSD 7?
  Many thanks.

 --
 Mark Powell - UNIX System Administrator - The University of Salford
 Information Services Division, Clifford Whitworth Building,
 Salford University, Manchester, M5 4WT, UK.
 Tel: +44 161 295 6843  Fax: +44 161 295 5888  www.pgp.com for PGP key







 --
 Mark Powell - UNIX System Administrator - The University of Salford
 Information Services Division, Clifford Whitworth Building,
 Salford University, Manchester, M5 4WT, UK.
 Tel: +44 161 295 6843  Fax: +44 161 295 5888  www.pgp.com for PGP key




[squid-users] FreeBSD users: 'squidstats' package

2009-01-10 Thread Adrian Chadd
Hi guys,

Those of you who are using FreeBSD should have a look at squidstats.
It's based on Henrik's scripts to gather basic statistics from Squid
via SNMP and graph them. It grew out of a googlecode project I created
and I'm also the port maintainer. So it should be easy for me to fix
bugs. :)

Having statistics of your running server is the best thing to do for
debugging and provisioning, so please consider installing the package
and setting it up.
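
On FreeBSD that should be as simple as (assuming the package name matches
the port):

  pkg_add -r squidstats

or the equivalent build from the ports tree.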

Enjoy!


Adrian


Re: [squid-users] HTTP_HEADER

2009-01-07 Thread Adrian Chadd
No, I don't think it can.

I'm just wrapping up some changes to FreeBSD-current and my Squid fork
to support tproxy-like functionality under FreeBSD + ipfw.



Adrian

2009/1/7 Mehmet ÇELİK r...@justunix.org:

 As per usual, the easiest fix is to re-write the web app properly.
 The REMOTE_ADDR is taken by PHP from the network layer below everything.

 Otherwise you will have to patch your kernel and use the tproxy feature
 of Squid.

 Amos
 --
 Please be using
 Current Stable Squid 2.7.STABLE5 or 3.0.STABLE11
 Current Beta Squid 3.1.0.3


  I understand you, and thanks.. But I am using OpenBSD-PF. So can Squid
  provide linux-tproxy-like support for OpenBSD-PF?  I don't know.

 Regards,
 Mehmet CELIK



Re: [squid-users] storeurl_rewrite and ICP

2008-12-24 Thread Adrian Chadd
Thanks. Be sure to comment on the bugzilla ticket too.

Oh and tell me which bug it is so I can make sure I'm watching it. :)


Adrian

2008/12/23 Imri Zvik im...@bsd.org.il:
 On Sunday 21 December 2008 10:52:42 Imri Zvik wrote:
 Hi,

 On Thursday 18 December 2008 21:57:22 Adrian Chadd wrote:
  Nope, I don't think the storeurl-rewriter stuff was ever integrated into
  ICP.
 
  I think someone posted a patch to the squid bugzilla to implement this.

 If you can point me to said patch, I'd be happy to test it under load.

  I'm happy to commit whatever people sensibly code up and deploy. :)
 
 
 
  Adrian
 
  2008/12/18 Imri Zvik im...@bsd.org.il:
   Hi,
  
   I'm using the storeurl_rewrite feature to store content with changing
   attributes.
  
   As my traffic grows, I want to be able to add cache_peers to share the
   load.
  
   After configuring the peers, I've found out that all my ICP queries
   results with misses.
   It seems like the storeurl_rewrite logic is not implemented in the ICP
   queries - i.e., nor the ICP client or the server passes the URL through
   the storeurl_rewrite process before checking if the requested content
   is cached or not.
  
   Am I missing something?
  
  
  
   Thank you in advance,

 Thanks!


 I've found the said patch in squid's bugzilla - It seems to be working, but
 I'm going to test the patch under load (700 mbit~) and report back.





Re: [squid-users] storeurl_rewrite and ICP

2008-12-24 Thread Adrian Chadd
I'm still not sure whether the correct behaviour is to send ICP for
the rewritten URL, or to rewrite the URLs being received before
they're looked up.

Hm!



Adrian

2008/12/24 Imri Zvik im...@bsd.org.il:
 On Wednesday 24 December 2008 17:01:39 Adrian Chadd wrote:
 Thanks. Be sure to comment on the bugzilla ticket too.

 Oh and tell me which bug it is so I can make sure I'm watching it. :)


 Adrian

 2008/12/23 Imri Zvik im...@bsd.org.il:
  On Sunday 21 December 2008 10:52:42 Imri Zvik wrote:
  Hi,
 
  On Thursday 18 December 2008 21:57:22 Adrian Chadd wrote:
   Nope, I don't think the storeurl-rewriter stuff was ever integrated
   into ICP.
  
   I think someone posted a patch to the squid bugzilla to implement
   this.
 
  If you can point me to said patch, I'd be happy to test it under load.
 
   I'm happy to commit whatever people sensibly code up and deploy. :)
  
  
  
   Adrian
  
   2008/12/18 Imri Zvik im...@bsd.org.il:
Hi,
   
I'm using the storeurl_rewrite feature to store content with
changing attributes.
   
As my traffic grows, I want to be able to add cache_peers to share
the load.
   
After configuring the peers, I've found out that all my ICP queries
results with misses.
It seems like the storeurl_rewrite logic is not implemented in the
ICP queries - i.e., nor the ICP client or the server passes the URL
through the storeurl_rewrite process before checking if the
requested content is cached or not.
   
Am I missing something?
   
   
   
Thank you in advance,
 
  Thanks!
 
  I've found the said patch in squid's bugzilla - It seems to be working,
  but I'm going to test the patch under load (700 mbit~) and report back.

 Here is the bug report: http://www.squid-cache.org/bugs/show_bug.cgi?id=2354

 The patch works flawlessly so far.




Re: [squid-users] cached MS updates !

2008-12-21 Thread Adrian Chadd
The one thing I've been looking to do for other updates is to
post-process store.log, find URLs which were answered with partial
replies (206) and end in various extensions, then queue full fetches
of them to make sure they fully enter the cache.

It's suboptimal but it seems to work just fine.
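
A minimal sketch of that post-processor (field positions assume the
default squid-2 store.log layout, and the squidclient invocation is an
assumption - point it at your own proxy):

#!/usr/bin/perl -w
# find URLs stored as partial (206) replies and refetch them in full
# through the proxy so the complete object enters the cache
use strict;
my %queued;
while (my $line = <STDIN>) {
    my @f = split ' ', $line;
    next unless @f >= 13;
    my ($status, $url) = ($f[5], $f[12]);   # reply status, request URL
    next unless $status eq '206';
    next unless $url =~ /\.(cab|exe|psf)(\?|$)/i;
    next if $queued{$url}++;                # fetch each URL only once
    system('squidclient', '-h', '127.0.0.1', $url);
}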



adrian

2008/12/21 Oleg Motienko motie...@gmail.com:
 On Tue, Jun 17, 2008 at 1:24 AM, Henrik Nordstrom
 hen...@henriknordstrom.net wrote:
 On mån, 2008-06-16 at 08:16 -0700, pokeman wrote:
 thanks henrik for your reply
 is there any other way to save bandwidth? windows updates use almost 30% of my
 entire bandwidth

 Microsoft has a update server you can run locally. But you need to have
 some control over the clients to make them use this instead of windows
 update...

 Or you could look into sponsoring some Squid developer to add caching of
 partial objects with the goal of allowing http access to windows update
 to be cached. (the versions using https can not be done much about...)

 I made such caching work by removing Range headers from requests
 (transparent redirect to an nginx webserver in proxy mode before squid).
 Works fine for my  ~ 1500 users. Cache size is 4G for now and growing.
 Additionally It's possible to make static cache (I made it on the same
 nginx, via proxy_store), so big files like servicepacks will be stored
 in the filesystem. It's also possible to put already-downloaded
 servicepacks and fixes into the filesystem; this will save bandwidth.

 Squid is running a transparent port on http://127.0.0.1:1 .
 Http requests from the LAN to windowsupdate networks are redirected to 127.0.0.4:80

 Nginx caches .cab/.exe/.psf files and cuts off the Range header; other
 requests are redirected to the MS sites;

 Here is nginx config for caching site:

server {
 listen  127.0.0.4:80;
server_name  au.download.windowsupdate.com
 www.au.download.windowsupdate.com;

access_log
 /var/log/nginx/access-au.download.windowsupdate.com-cache.log  main;


 # root url - don't cache here

location /  {
            proxy_pass  http://127.0.0.1:1;
proxy_set_header   Host $host;
}


 # ? urls - don't cache here
location ~* \?  {
            proxy_pass  http://127.0.0.1:1;
proxy_set_header   Host $host;
}


 # here is static caching

location ~* ^/msdownload.+\.(cab|exe|psf)$ {
root /.1/msupd/au.download.windowsupdate.com;
error_page   404 = @fetch;
}


location @fetch {
internal;

            proxy_pass  http://127.0.0.1:1;
            proxy_set_header   Range  '';
proxy_set_header   Host $host;

proxy_store  on;
proxy_store_access   user:rw  group:rw  all:rw;
proxy_temp_path  /.1/msupd/au.download.windowsupdate.com/temp;

root /.1/msupd/au.download.windowsupdate.com;
}

 # error messages (if got err from squid)

error_page   500 502 503 504  /50x.html;
location = /50x.html {
root   html;
}


}



Re: [squid-users] storeurl_rewrite and ICP

2008-12-18 Thread Adrian Chadd
Nope, I don't think the storeurl-rewriter stuff was ever integrated into ICP.

I think someone posted a patch to the squid bugzilla to implement this.

I'm happy to commit whatever people sensibly code up and deploy. :)



Adrian

2008/12/18 Imri Zvik im...@bsd.org.il:
 Hi,

 I'm using the storeurl_rewrite feature to store content with changing
 attributes.

 As my traffic grows, I want to be able to add cache_peers to share the load.

 After configuring the peers, I've found out that all my ICP queries results
 with misses.
 It seems like the storeurl_rewrite logic is not implemented in the ICP
 queries - i.e., nor the ICP client or the server passes the URL through the
 storeurl_rewrite process before checking if the requested content is cached
 or not.

 Am I missing something?



 Thank you in advance,




Re: [squid-users] Performance problems with 2.6.STABLE18

2008-12-17 Thread Adrian Chadd
2008/12/17 Mark Kent mk...@messagelabs.com:

 I tried running under valgrind, and it found a couple of leaks, but I'm
 not sure that those are strictly the problem. If it were a traditional
 memory leak, where memory was just wandering off, I don't quite see why
 the CPU would climb along with the memory usage.

Grab oprofile and do some digging?
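
Something like the following, as root (binary path assumed):

  opcontrol --no-vmlinux
  opcontrol --start
  ... run load through the proxy for a while ...
  opcontrol --dump
  opreport -l /usr/sbin/squid | head -30
  opcontrol --shutdown

The opreport output should show which functions the CPU time is going to.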


Adrian


 Mark.



 -Original Message-
 From: Kinkie [mailto:gkin...@gmail.com]
 Sent: Wednesday, December 17, 2008 4:50 PM
 To: Mark Kent
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] Performance problems with 2.6.STABLE18

 On Wed, Dec 17, 2008 at 4:24 PM, Mark Kent mk...@messagelabs.com
 wrote:

  Hi,

  I'm currently having a performance issue with Squid 2.6.STABLE18
 (running on RHEL4). As I run traffic through the proxy, the memory
 grows steadily, and apparently without limit. This increase in memory
 usage is coupled with a steadily growing CPU usage, up to a point at
 which a single core is saturated (97% usage at ~400MB of RSS). At this

 point, the latency of requests increases. When the load is taken off
 the proxy, the CPU returns to minimal usage, but the memory usage
 sticks at the high water mark.

  I should point out that I'm using squid for authentication only (HTTP

 digest), not for caching. Consequently, I have maximum_object_size and

 maximum_object_size_in_memory both set to 0 in the squid config file.
 My understanding is that this should be sufficient to stop squid from
 caching.

  There's plenty of spare physical RAM on the machine, so it seems
 unlikely that it's a memory shortage causing the performance problem.
 My interpretation is that something has gotten too large for Squid to
 handle but, without object caching, it's not clear to me what that
 might be. I would blame the authentication cache, but there's only
 2000 different users.

  Does anyone have an idea what might be going on, and how to fix it?

 There may  be a memory leak somewhere..
 Squid 2.6 is rather old, can you try upgrading to the last 2.7 STABLE
 release?


Kinkie





Re: [squid-users] What does storeClientCopyEvent mean?

2008-12-11 Thread Adrian Chadd
Which version of Squid are you using again? I patched the latest
Squid-2.HEAD with some aufs related fixes that reduce the amount of
callback checking which is done.

Uhm, check src/fs/aufs/store_asyncufs.h :

/* Which operations to run async */
#define ASYNC_OPEN 1
#define ASYNC_CLOSE 0
#define ASYNC_CREATE 1
#define ASYNC_WRITE 0
#define ASYNC_READ 1

That's the default on Squid-2.HEAD. I've just changed them all to be
async under cacheboy-1.6 and this performs great under freebsd-7 +
AUFS in my testing.
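
Presumably the all-async variant referred to reads:

/* Which operations to run async - all of them */
#define ASYNC_OPEN 1
#define ASYNC_CLOSE 1
#define ASYNC_CREATE 1
#define ASYNC_WRITE 1
#define ASYNC_READ 1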



Adrian

2008/12/11 Bin Liu binliu.l...@gmail.com:
 Thanks for your reply, Adrian. I'm very appreciated for your help.

 I'd suggest using your OS profiling to figure out where the CPU is
 being spent. This may be a symptom, not the cause.

 Here is the top output snapshot:

 last pid: 76181;  load averages:  1.15,  1.12,  1.08    up 6+05:35:14  22:25:07
 184 processes: 5 running, 179 sleeping
 CPU states: 24.2% user,  0.0% nice,  3.8% system,  0.0% interrupt, 72.0% idle
 Mem: 4349M Active, 2592M Inact, 599M Wired, 313M Cache, 214M Buf, 11M Free
 Swap: 4096M Total, 4096M Free

  PID USERNAME   THR PRI NICE   SIZERES STATE  C   TIME   WCPU COMMAND
 38935 nobody  27  440  4385M  4267M ucond  1 302:19 100.00% squid
 46838 root 1  440 24144K  2344K select 0   3:09  0.00% snmpd
  573 root 1  440  4684K   608K select 0   0:34  0.00% syslogd
  678 root 1  440 24780K  4360K select 1   0:12  0.00% perl5.8.8
  931 root 1  440 10576K  1480K select 0   0:11  0.00% sendmail
  871 root 1  440 20960K   508K select 3   0:08  0.00% sshd
  941 root 1   80  5736K   424K nanslp 2   0:03  0.00% cron
 14177 root 1  440 40620K  2648K select 0   0:02  0.00% httpd


  # iostat 1  5
  tty da0  da1  da2 cpu
  tin tout  KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
   4   75 28.86   5  0.13  41.08  22  0.88  40.01  42  1.66   6  0  4  0 89
   0  230 22.86   7  0.16  21.33   6  0.12  44.78  23  1.00  23  0  4  0 73
   0   77 16.00   1  0.02  51.56  27  1.35  40.38  48  1.88  22  0  6  0 72
   0   77 16.00   8  0.12  18.29   7  0.12  26.64  22  0.57  24  0  3  0 72
   0   77 16.00   2  0.03  32.00   2  0.06  41.43  35  1.41  24  0  4  0 71

 # vmstat 1 5
  procs  memory  pagedisks faults  cpu
  r b w avmfre   flt  re  pi  pofr  sr da0 da1   in   sy
 cs us sy id
  1 2 0 13674764 386244   455   9   0   0   792 4672   0   0 4147 6847
 6420  6  4 89
  1 1 0 13674764 383112  1365   4   0   0   147   0   2   4 5678 9065
 16860 18  6 76
  1 1 0 13674764 383992   894   3   0   0   916   0   5   6 5089 7950
 16239 22  5 73
  1 1 0 13674764 378624  1399  11   0   052   0  11   1 5533 10447
 18994 23  5 72
  1 1 0 13674768 373360  1427   6   0   030   0   9   3 5919 10913
 19686 25  5 70


 ASYNC IO Counters:
 Operation         # Requests
 open              2396837
 close             1085
 cancel            2396677
 write             3187
 read              16721807
 stat              0
 unlink            299208
 check_callback    800440690
 queue             14


 I've noticed that the counter 'queue' is relatively high, which
 normally should always be zero. But the disks seem pretty idle. I've
 tested that by copying some large files to the cache_dir, very fast. So
 there must be something blocking squid. I've got 2 boxes with the same
 hardware/software configuration running load balancing; when one of
 them was blocking, the other one ran pretty well.

 I'm using FreeBSD 7.0 + AUFS, and I've also noticed what you have
 written several days ago
 (http://www.squid-cache.org/mail-archive/squid-users/200811/0647.html),
 which mentions that some operations may  block under FreeBSD. So could
 that cause this problem?

 Thanks again.

 Regards,
 Liu


 On Tue, Dec 9, 2008 at 23:28, Adrian Chadd adr...@squid-cache.org wrote:
It's a hack which is done to defer a storage manager transaction from
 beginning whilst another one is in progress for that same connection.

 I'd suggest using your OS profiling to figure out where the CPU is
 being spent. This may be a symptom, not the cause.


 adrian

 2008/12/7 Bin Liu binliu.l...@gmail.com:
 Hi there,

 Squid is pegging the CPU at 100% with storeClientCopyEvent and the hit
 service time soars up to several seconds here. The following is what I
 see in cachemgr:events:

 Operation               Next Execution       Weight  Callback Valid?
 storeClientCopyEvent    -0.019010 seconds    0       yes
 storeClientCopyEvent    -0.019010 seconds    0       yes
 storeClientCopyEvent    -0.019010 seconds    0       yes
 storeClientCopyEvent    -0.019010 seconds    0       yes
 storeClientCopyEvent    -0.019010 seconds    0       yes
 storeClientCopyEvent    -0.019010 seconds    0       yes
 storeClientCopyEvent    -0.019010 seconds    0       yes
 storeClientCopyEvent    -0.019010 seconds    0       yes
 storeClientCopyEvent    -0.019010 seconds    0       yes
 storeClientCopyEvent    -0.019010 seconds    0       yes
 storeClientCopyEvent    -0.019010 seconds    0       yes
 storeClientCopyEvent    -0.019010 seconds    0       yes
 storeClientCopyEvent    -0.019010 seconds    0       yes

Re: [squid-users] What does storeClientCopyEvent mean?

2008-12-09 Thread Adrian Chadd
It's a hack which is done to defer a storage manager transaction from
beginning whilst another one is in progress for that same connection.

I'd suggest using your OS profiling to figure out where the CPU is
being spent. This may be a symptom, not the cause.


adrian

2008/12/7 Bin Liu [EMAIL PROTECTED]:
 Hi there,

 Squid is pegging the CPU at 100% with storeClientCopyEvent and the hit
 service time soars up to several seconds here. The following is what I
 see in cachemgr:events:

 Operation                 Next Execution        Weight  Callback Valid?
 storeClientCopyEvent      -0.019010 seconds     0       yes
 storeClientCopyEvent      -0.019010 seconds     0       yes
 storeClientCopyEvent      -0.019010 seconds     0       yes
 storeClientCopyEvent      -0.019010 seconds     0       yes
 storeClientCopyEvent      -0.019010 seconds     0       yes
 storeClientCopyEvent      -0.019010 seconds     0       yes
 storeClientCopyEvent      -0.019010 seconds     0       yes
 storeClientCopyEvent      -0.019010 seconds     0       yes
 storeClientCopyEvent      -0.019010 seconds     0       yes
 storeClientCopyEvent      -0.019010 seconds     0       yes
 storeClientCopyEvent      -0.019010 seconds     0       yes
 storeClientCopyEvent      -0.019010 seconds     0       yes
 storeClientCopyEvent      -0.019010 seconds     0       yes
 storeClientCopyEvent      -0.019010 seconds     0       yes
 storeClientCopyEvent      -0.019010 seconds     0       yes
 storeClientCopyEvent      -0.019010 seconds     0       yes
 storeClientCopyEvent      -0.019010 seconds     0       yes
 storeClientCopyEvent      -0.019010 seconds     0       yes
 storeClientCopyEvent      -0.019010 seconds     0       yes
 storeClientCopyEvent      -0.019010 seconds     0       yes
 storeClientCopyEvent      -0.019010 seconds     0       yes
 storeClientCopyEvent      0.00 seconds          0       yes
 storeClientCopyEvent      0.00 seconds          0       yes
 storeClientCopyEvent      0.00 seconds          0       yes
 storeClientCopyEvent      0.00 seconds          0       yes
 storeClientCopyEvent      0.00 seconds          0       yes
 storeClientCopyEvent      0.00 seconds          0       yes
 storeClientCopyEvent      0.00 seconds          0       yes
 storeClientCopyEvent      0.00 seconds          0       yes
 storeClientCopyEvent      0.00 seconds          0       yes
 storeClientCopyEvent      0.00 seconds          0       yes
 storeClientCopyEvent      0.00 seconds          0       yes
 MaintainSwapSpace         0.980990 seconds      1       N/A
 idnsCheckQueue            1.00 seconds          1       N/A
 ipcache_purgelru          5.457004 seconds      1       N/A
 wccp2HereIam              5.464900 seconds      1       N/A
 fqdncache_purgelru        5.754399 seconds      1       N/A
 storeDirClean             10.767635 seconds     1       N/A
 statAvgTick               59.831274 seconds     1       N/A
 peerClearRR               110.539127 seconds    0       N/A
 peerClearRR               279.341239 seconds    0       N/A
 User Cache Maintenance    1610.136367 seconds   1       N/A
 storeDigestRebuildStart   1730.225879 seconds   1       N/A
 storeDigestRewriteStart   1732.267852 seconds   1       N/A
 peerRefreshDNS            1957.777934 seconds   1       N/A
 peerDigestCheck           2712.910515 seconds   1       yes

 So what does storeClientCopyEvent mean? Is it disk IO causing this problem?

 Regards,
 Liu




Re: [squid-users] How to interrupt ongoing transfers?

2008-12-07 Thread Adrian Chadd
There isn't. Sorry.



Adrian


2008/12/7 Kaustav Dey Biswas [EMAIL PROTECTED]:
 Hi Adrian,

 Thanks a lot for your prompt reply.

 Actually, I need to implement the quota system as a part of my final year 
 Engineering project. I am planning to make it as a sort of an add-on package 
 over Squid, which will be compatible with all current versions of Squid. As 
 you can see, modifying the Squid source code is not an option for me.

 Please let me know if there is any way (or workaround) by which I can 
 interrupt ongoing transfers in current versions of Squid without having to 
 patch  rebuild it.

 Thanks  Regards,
 Kaustav



 - Original Message 
 From: Adrian Chadd [EMAIL PROTECTED]
 To: Kaustav Dey Biswas [EMAIL PROTECTED]
 Cc: Squid squid-users@squid-cache.org
 Sent: Saturday, 6 December, 2008 12:28:10 AM
 Subject: Re: [squid-users] How to interrupt ongoing transfers?

 Someone may beat me to this, but I'm actually proposing a quote to a
 company to implement quota services in Squid to support stuff just
 like what you've asked for.

 I'll keep the list posted about this. Hopefully I'll get the green
 light in a week or so and can begin work on implementing the
 functionality in Squid-2.

 Thanks,



 Adrian

 2008/12/5 Kaustav Dey Biswas [EMAIL PROTECTED]:
 Hi,

 I am a squid newbie. I am trying to set up daily download quotas for NCSA 
 authorized users. I have a daemon running which checks the log files, and 
 whenever the download limit is reached (for a particular user), it blocks 
 that user in the config and reconfigures squid (squid -k reconfigure) for 
 the changes to take effect.

 The problem is, if an http/ftp transfer is on (for that user), the changes 
 made in the config don't take effect until that transfer session completes.

 Is there any way I can interrupt the transfer somehow (or say, force squid 
 to re-read its ACL) without affecting sessions of other users?

 Thanks  Regards,
 Kaustav Dey Biswas







Re: [squid-users] Number of Spindles

2008-12-06 Thread Adrian Chadd
2008/12/5 Nyamul Hassan [EMAIL PROTECTED]:
 Thx for the response Adrian.  Earlier I was using only AUFS on each drive,
 and the system choked on IOWait above 200 req/sec.  But, after I added COSS
 in the mix, it improved VASTLY.

Well, thats why its there, right? :)



 Since you're the COSS expert, I would really love to hear about what you
 think of my configuration options for COSS above.  Do you think I can
 improve them?

Not off the top of my head, no.

 As for L1 and L2 numbers in AUFS, can you suggest any benchmark tests which
 I can run and give you feedback?

Again, not off the top of my head. I'd look at trying to gather stats
on various types of memory usage and IO patterns and do some
statistical comparisons. I've been focusing on different areas lately
so I'm not really in the storage headspace right now :)

 Also, if anybody else can share their ideas / experience, it would be great!
 I'm a bit puzzled about the following:

 1.  Although I've set the cache_replacement_policy differently for each
 other (GDSF for COSS and LFUDA for AUFS), as has been suggested by the HP
 whitepaper which is referenced in the config file, the Current Squid
 Configuration page in CacheMGR shows only LFUDA above all the 8 (eight)
 cache_store entries.  Does that mean all of them are LFUDA?  Isn't GDSF
 better for smaller objects?

COSS will always be LRU. It's the nature of the storage system itself.
You can't override that.


 2.  When I had only one type of storage (AUFS), it was easy to find out the
 average objects per cache_store.  However, now that I've 2 types on each of
 the 4 HDDs, I can't seem to find out how many of the total 11,000,000 plus
 objects that are being reported in CacheMGR are actually in the COSS and
 AUFS partitions.  Is there a way to find that out?

I thought that the storedir page listed the number of objects in the
cache. Hm, if it doesn't then it shouldn't be that difficult to patch
stuff in to track the number of objects in each storedir.



Adrian


Re: [squid-users] Number of Spindles

2008-12-05 Thread Adrian Chadd
Things have changed somewhat since that algorithm was decided upon.

Directory searches were linear and the amount of buffer cache /
directory name cache available wasn't huge.

Having large directories took time to search and took RAM to cache.

No one's really sat down and done any hard-core tuning - or at least,
they've done it, but haven't published the results anywhere. :)



Adrian

2008/12/3 Nyamul Hassan [EMAIL PROTECTED]:
 Why aren't there any (or marginal / insignificant) improvements over 3
 spindles?  Is it because squid is a single threaded application?

 On this note, what impact does the L1 and L2 directories have on AUFS
 performance?  I understand that these are there to control the number of
 objects in each folder.  But, what would be a good number of files to keep
 in a directory, performance wise?

 Regards
 HASSAN



 - Original Message - From: Amos Jeffries [EMAIL PROTECTED]
 To: Henrik Nordstrom [EMAIL PROTECTED]
 Cc: Nyamul Hassan [EMAIL PROTECTED]; Squid Users
 squid-users@squid-cache.org
 Sent: Monday, December 01, 2008 04:33
 Subject: Re: [squid-users] Number of Spindles


 sön 2008-11-30 klockan 09:56 +0600 skrev Nyamul Hassan:

 The primary purpose of these tests is to show that Squid's performance
 doesn't increase in proportion to the number of disk drives. Excluding
 other
 factors, you may be able to get better performance from three systems
 with
 one disk drive each, rather than a single system with three drives.

 There is a significant difference up to 3 drives in my tests.


 Um, can you clarify please? Do you mean your experience differs from what was
 described, or that separate systems are faster up to 3 drives?

 Amos







Re: [squid-users] How to interrupt ongoing transfers?

2008-12-05 Thread Adrian Chadd
Someone may beat me to this, but I'm actually proposing a quote to a
company to implement quota services in Squid to support stuff just
like what you've asked for.

I'll keep the list posted about this. Hopefully I'll get the green
light in a week or so and can begin work on implementing the
functionality in Squid-2.

Thanks,



Adrian

2008/12/5 Kaustav Dey Biswas [EMAIL PROTECTED]:
 Hi,

 I am a squid newbie. I am trying to set up daily download quotas for NCSA
 authorized users. I have a daemon running which checks the log files, and
 whenever the download limit is reached (for a particular user), it blocks that
 user in the config and reconfigures squid (squid -k reconfigure) for the
 changes to take effect.

 The problem is, if an http/ftp transfer is ongoing (for that user), the changes
 made in the config don't take effect until that transfer session completes.

 Is there any way I can interrupt the transfer somehow (or say, force squid to 
 re-read its ACL) without affecting sessions of other users?

 Thanks & Regards,
 Kaustav Dey Biswas
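For the archives, a minimal sketch of the setup described above (file and
ACL names are made up for illustration): keep the blocked users in an
external ACL file and reconfigure when it changes:

# squid.conf -- the deny must come before the usual allow rules
acl overquota proxy_auth "/etc/squid/overquota.users"
http_access deny overquota

# daemon side, after appending a user to the file:
squid -k reconfigure

Note this doesn't solve the question asked -- the deny only applies to new
requests, not to the transfer already in flight.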







Re: [squid-users] TCP connections keep alive problem after 302 HTTP response from web

2008-11-30 Thread Adrian Chadd
Good detective work! I'm not sure whether this is a requirement or
not. Henrik would know better.

Henrik, is this worthy of a bugzilla report?


adrian

2008/11/30 Itzcak Pechtalt [EMAIL PROTECTED]:
 Hi

 I found some inefficiency in Squid TCP connection handling toward servers.
 In some cases Squid closes the TCP connection to servers immediately after
 a 304 Not Modified response and doesn't save it for reuse.
 There is no visible reason why Squid closes the connection. Squid
 sends Connection: Keep-Alive in the HTTP request and the web server
 returns Connection: Keep-Alive on the response. Also pconn_timeout
 is configured to 1 minute.

 After digging into the problem, I found that the problem occurs
 only in cases where the object type is PRIVATE. It seems like when
 the client_side code handles the 304 Not Modified reply it calls
 store_unregister, which closes the store entry and the TCP connection in turn.

 To reproduce it do the following
 1) Browse www.cnn.com
 2) Delete browser cache.
 3) Browse again. The case will occur here.

 Does someone know about it ?

 Itzcak

 Following is a short Wireshark sniff with 1 sample; 10.50.0.100 is the Squid
 IP. See the FIN packet from Squid.

 0.00  10.50.0.100 -> 205.128.90.126 TCP 4006 > http [SYN] Seq=0
 Len=0 MSS=1460 WS=2
 0.085216 205.128.90.126 -> 10.50.0.100  TCP http > 4006 [SYN, ACK]
 Seq=0 Ack=1 Win=5840 Len=0 MSS=1460 WS=7
 0.085226  10.50.0.100 -> 205.128.90.126 TCP 4006 > http [ACK] Seq=1
 Ack=1 Win=5840 Len=0
 0.085230  10.50.0.100 -> 205.128.90.126 HTTP GET
 /cnn/.element/css/2.0/common.css HTTP/1.0
 GET /cnn/.element/css/2.0/common.css HTTP/1.0
 If-Modified-Since: Tue, 16 Sep 2008 14:48:32 GMT
 Accept: */*
 Referer: http://www.cnn.com/
 Accept-Language: en-us
 UA-CPU: x86
 User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET
 CLR 2.0.50727; .NET CLR 3.0.04506.30)
 Host: i.cdn.turner.com
 Cache-Control: max-age=259200
 Connection: keep-alive

 0.172250 205.128.90.126 -> 10.50.0.100  TCP http > 4006 [ACK] Seq=1
 Ack=366 Win=6912 Len=0
 0.172934 205.128.90.126 -> 10.50.0.100  HTTP HTTP/1.1 304 Not Modified
 HTTP/1.1 304 Not Modified
 Date: Wed, 26 Nov 2008 12:33:33 GMT
 Expires: Wed, 26 Nov 2008 13:03:51 GMT
 Last-Modified: Tue, 16 Sep 2008 14:48:32 GMT
 Cache-Control: max-age=3600
 Connection: keep-alive

 0.173145  10.50.0.100 -> 205.128.90.126 TCP 4006 > http [ACK] Seq=366
 Ack=206 Win=6912 Len=0
 0.173238  10.50.0.100 -> 205.128.90.126 TCP 4006 > http [FIN, ACK]
 Seq=366 Ack=206 Win=6912 Len=0
 0.259520 205.128.90.126 -> 10.50.0.100  TCP http > 4006 [FIN, ACK]
 Seq=206 Ack=367 Win=6912 Len=0
 0.259906  10.50.0.100 -> 205.128.90.126 TCP 4006 > http [ACK] Seq=367
 Ack=207 Win=6912 Len=0
 0.565702 205.128.90.126 -> 10.50.0.100  TCP http > 4006 [FIN, ACK]
 Seq=206 Ack=367 Win=6912 Len=0
 0.565842  10.50.0.100 -> 205.128.90.126 TCP [TCP Dup ACK 10#1] 4006 >
 http [ACK] Seq=367 Ack=207 Win=6912 Len=0




Re: [squid-users] improve flow capacity for Squid

2008-11-28 Thread Adrian Chadd
Well, the way to start looking at that is getting to know your system
profiling tools.

I do this for a living on Solaris, FreeBSD and Linux - each has
different system profiling tools, all of which can tell you where the
problem may lie.

Considering people have deployed Squid forward and reverse proxies
that achieve much more than 150mbit/sec, even considering the
shortcomings of the codebases, I can't help but think there's
something else going on that isn't specifically Squid's fault. :)


Adrian


2008/11/28 Ken DBA [EMAIL PROTECTED]:



 --- On Thu, 11/27/08, Adrian Chadd [EMAIL PROTECTED] wrote:

 From: Adrian Chadd [EMAIL PROTECTED]
 Subject: Re: [squid-users] improve flow capacity for Squid
 To: [EMAIL PROTECTED]
 Cc: squid-users@squid-cache.org
 Date: Thursday, November 27, 2008, 11:09 PM
 Is that per-flow, or in total?


 I mean in total, thanks.







Re: [squid-users] improve flow capacity for Squid

2008-11-28 Thread Adrian Chadd
Heh. The best way under unix is a hybrid of threads and epoll/kqueue
w/ non-blocking socket IO.
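To illustrate the shape of that hybrid, a generic C sketch of the pattern
(not Squid code; error handling omitted):

#include <stddef.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <fcntl.h>

/* One non-blocking event loop drives all the sockets; anything that
 * would block (disk, DNS, ...) is handed to worker threads instead. */
static void set_nonblocking(int fd)
{
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
}

void event_loop(int listen_fd)
{
    struct epoll_event ev, events[64];
    int ep = epoll_create(64);
    ev.events = EPOLLIN;
    ev.data.fd = listen_fd;
    epoll_ctl(ep, EPOLL_CTL_ADD, listen_fd, &ev);
    for (;;) {
        int i, n = epoll_wait(ep, events, 64, -1);
        for (i = 0; i < n; i++) {
            if (events[i].data.fd == listen_fd) {
                int c = accept(listen_fd, NULL, NULL);
                set_nonblocking(c);
                ev.events = EPOLLIN;
                ev.data.fd = c;
                epoll_ctl(ep, EPOLL_CTL_ADD, c, &ev);
            } else {
                /* read/write the socket without blocking; push any
                 * blocking work to a thread pool and collect results
                 * back through an fd watched by this same loop */
            }
        }
    }
}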



Adrian

2008/11/28 Ken DBA [EMAIL PROTECTED]:



 --- On Sat, 11/29/08, Adrian Chadd [EMAIL PROTECTED] wrote:

 From: Adrian Chadd [EMAIL PROTECTED]


 Considering people have deployed Squid forward and reverse
 proxies
 that achieve much more than 150mbit/sec, even considering
 the
 shortcomings of the codebases,

 Thanks. I also hope someone who has deployed squid for high traffic can
 offer some help.

I can't help but think
 there's
 something else going on that isn't specifically
 Squid's fault. :)


 Oh, I was thinking the flow capacity is limited, maybe due to squid's IO
 select model? For example, it reads/writes sockets using epoll/select/poll,
 not threads/multiple processes. Thanks.

 Ken







Re: [squid-users] assertion failed: store_swapout.cc:317: mem->swapout.sio == self

2008-11-28 Thread Adrian Chadd
Does Squid-2.7.STABLE5 exhibit this issue?



Adrian


2008/11/28 Marcel Grandemange [EMAIL PROTECTED]:
 Looks like squid broke itself again.
 If anybody could advise me as to what's happening here it would be great.

 I'm thinking the move to v3 has been disastrous so far.

 Every time I now use our main proxy the following happens..

 2008/11/29 03:10:19|   Validated 1285147 Entries
 2008/11/29 03:10:19|   store_swap_size = 25682924
 2008/11/29 03:10:19| storeLateRelease: released 0 objects
 2008/11/29 03:11:08| assertion failed: store_swapout.cc:317:
 mem->swapout.sio == self
 2008/11/29 03:11:17| Starting Squid Cache version 3.0.STABLE9 for
 amd64-portbld-freebsd7.0...
 2008/11/29 03:11:17| Process ID 32313
 2008/11/29 03:11:17| With 11072 file descriptors available
 2008/11/29 03:11:17| DNS Socket created at 0.0.0.0, port 63464, FD 7
 2008/11/29 03:11:17| Adding nameserver 127.0.0.1 from squid.conf
 2008/11/29 03:11:17| Adding nameserver 192.168.12.2 from squid.conf
 2008/11/29 03:11:17| Adding nameserver 192.168.12.3 from squid.conf
 2008/11/29 03:11:17| Unlinkd pipe opened on FD 12
 2008/11/29 03:11:17| Swap maxSize 71925760 KB, estimated 4795050 objects
 2008/11/29 03:11:17| Target number of buckets: 239752
 2008/11/29 03:11:17| Using 262144 Store buckets
 2008/11/29 03:11:17| Max Mem  size: 131072 KB
 2008/11/29 03:11:17| Max Swap size: 71925760 KB
 2008/11/29 03:11:22| Version 1 of swap file without LFS support detected...
 2008/11/29 03:11:22| Rebuilding storage in /mnt/cache1 (DIRTY)
 2008/11/29 03:11:22| Version 1 of swap file without LFS support detected...
 2008/11/29 03:11:22| Rebuilding storage in /mnt/cache2 (DIRTY)
 2008/11/29 03:11:22| Version 1 of swap file without LFS support detected...
 2008/11/29 03:11:22| Rebuilding storage in /usr/local/squid/cache (DIRTY)
 2008/11/29 03:11:22| Using Round Robin store dir selection
 2008/11/29 03:11:22| Set Current Directory to /usr/local/squid/cache
 2008/11/29 03:11:23| Loaded Icons.
 2008/11/29 03:11:23| Accepting  HTTP connections at 192.168.12.1, port 3128,
 FD 18.
 2008/11/29 03:11:23| Accepting  HTTP connections at 127.0.0.1, port 8080, FD
 19.
 2008/11/29 03:11:23| Accepting transparently proxied HTTP connections at
 127.0.0.1, port 3128, FD 20.
 2008/11/29 03:11:23| HTCP Disabled.
 2008/11/29 03:11:23| Accepting SNMP messages on port 3401, FD 21.
 2008/11/29 03:11:23| Configuring Parent 192.168.12.2/3128/3130
 2008/11/29 03:11:23| Ready to serve requests.
 2008/11/29 03:11:23| Store rebuilding is 3.48% complete
 2008/11/29 03:11:28| Done reading /mnt/cache1 swaplog (117800 entries)
 2008/11/29 03:11:28| Done reading /mnt/cache2 swaplog (117807 entries)


 It keeps crashing when you visit pages and reloading.. Input?
 Stable10 had other issues that prevent me from using it.




Re: [squid-users] improve flow capacity for Squid

2008-11-27 Thread Adrian Chadd
Is that per-flow, or in total?



Adrian

2008/11/24 Ken DBA [EMAIL PROTECTED]:
 Hello,

 I was just finding the flow capacity for Squid is too limited.
 It's even hard to reach an upper limit of 150 MBits.

 How can I improve the flow capacity for Squid in the reverse-proxy mode?
 Thanks in advance.

 Ken







Re: [squid-users] tuning an overloaded server

2008-11-27 Thread Adrian Chadd
Gah, the way they work is really quite simple.

* ufs does the disk io at the time the request happens. It used to try
using select/poll on the disk fds from what I can gather in the deep,
deep dark history of CVS but that was probably so the disk io happened
in the next IO loop so recursion was avoided.

* aufs operations push requests into a global queue which are then
dequeued by the aio helper threads as they become free. The aio helper
threads do the particular operation (open, close, read, write, unlink)
and then push the results into a queue so the main squid thread can
handle the callbacks at a later time.

* diskd operations push requests into a per storedir queue which is
then dequeued in order, one operation at a time, by the diskd helper.
The diskd helper does the normal IO operations (open, close, read,
write, unlink) and holds all the disk filedescriptors (ie, the main
squid process doesn't hold open the disk FDs; they're just given
handles.) The diskd processes do the operation and then queue the
result back to the main squid process which handles the callbacks at a
later time.
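Schematically, the enqueue/dequeue/callback pattern described above looks
like this (a generic pthreads sketch of the idea, not Squid's actual code):

#include <pthread.h>
#include <sys/types.h>
#include <unistd.h>

struct io_req {
    int fd; void *buf; size_t len; off_t off;
    ssize_t result;
    struct io_req *next;
};

static struct io_req *req_head;
static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t q_cond = PTHREAD_COND_INITIALIZER;

/* main thread: queue a request and return immediately */
void enqueue(struct io_req *r)
{
    pthread_mutex_lock(&q_lock);
    r->next = req_head;
    req_head = r;
    pthread_cond_signal(&q_cond);
    pthread_mutex_unlock(&q_lock);
}

/* N helper threads: do the blocking syscall, then hand the result
 * back (via a completion queue) for the main thread's callbacks */
void *io_worker(void *arg)
{
    for (;;) {
        struct io_req *r;
        pthread_mutex_lock(&q_lock);
        while (req_head == NULL)
            pthread_cond_wait(&q_cond, &q_lock);
        r = req_head;
        req_head = r->next;
        pthread_mutex_unlock(&q_lock);
        r->result = pread(r->fd, r->buf, r->len, r->off);
        /* ... push r onto a completion queue here ... */
    }
    return NULL;
}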

AUFS works great where the system threads allow for concurrent
blocking syscalls. This meant Linux (linuxthreads being just
processes) and Solaris in particular worked great. The BSDs used
userland threads via a threading library which wrapped syscalls to
try and be non-blocking. This wouldn't work for disk operations and so
a disk operation stalled all threads in a given process. diskd, as far
as I can gather (Duane would know better!) came into existence to
solve a particular problem or two, and one of those problems was the
lack of scalable disk IO available in the BSDs.

FreeBSD in particular has since grown a real threading library which
supports disk IO happening across threads quite fine.

The -big- difference right now is how the various disk buffer cache
and VM systems handle IO. By default, the AUFS support in Squid only
uses the aio helper threads for a small subset of the operations. This
may work great under Linux but operations such as write() and close()
block under FreeBSD (think 'writing out metadata', for example) and
this mostly gives rise to the notion of Linux being better by most
people who haven't studied the problem in depth. :)

hope that helps,



Adrian

2008/11/27 Amos Jeffries [EMAIL PROTECTED]:
 B. Cook wrote:

 On Nov 22, 2008, at 7:30 AM, Amos Jeffries wrote:

 8 -- snip -- 8



 That said BSD family of systems get more out of diskd than aufs in
 current Squid.


 --
 Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.2

 Hello,

 Sorry to bother..

 so even in any FreeBSD (6.3, 7.0, etc..) diskd is still better than aufs?

 and if so,

 http://wiki.squid-cache.org/Features/DiskDaemon

 this page talks about 2.4

 and I can't seem to find an aufs page.. I can find coss, but coss has been
 removed from 3.0..

 so again, diskd should be what FreeBSD users use?  As well as the kernel
 additions?  Even on 6.3 and 7.0 machines amd64 and i386 alike?

 Yes. We have some circumstantial info that leads us to believe it's probably a
 bug in the way Squid uses AUFS and the underlying implementation differences
 in FreeBSD and Linux. We have not yet had anyone investigate deeply and
 correct the issue. So it's still there in all Squid releases.



 Thanks in advance..

 (I would think a wiki page on an OS would be very useful.. common configs
 for linux 2.x and BSD, etc.. )

 Many people are not as versed in squid as the developers, and giving them
 guidelines to follow would probably make it easier for them to use.. imho.

 They don't understand coss vs aufs vs diskd vs ufs.. ;)

 We are trying to get there :). It's hard for just a few people and
 non-experts in many areas at that. So if anyone has good knowledge of how
 AUFS works jump in with a feature page analysis.

 What we have so far in the way of config help is explained at
 http://wiki.squid-cache.org/SquidFaq/ConfiguringSquid#head-ad11ea76c4876a92aa1cf8fb395e7efd3e1993d5

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.2




Re: [squid-users] Cache_dir more than 10GB

2008-11-27 Thread Adrian Chadd
2008/9/29 Amos Jeffries [EMAIL PROTECTED]:

  Squid-2 has issues with handling of very large individual files being
 somewhat slow.

Only if you have an insanely large cache_mem and
maximum_object_size_in_memory setting. Very large individual files on
disk are handled just as efficiently across all Squid versions.

If it's kept low then it performs just fine.




Adrian


Re: [squid-users] Raid 0 vs Two cache_dir

2008-10-05 Thread Adrian Chadd
Do it yourself, benchmark, post results?


2008/10/5 Rafael Gomes [EMAIL PROTECTED]:
 I have two SCSI discs. I can make a RAID 0 and set a single cache_dir,
 which should improve writes, or I can set two cache_dirs, one per disc.

 What is better?

 Are there any documents with this information? Like comparisons and other
 things like this.

 Thanks!

 --
 Rafael Gomes
 Consultor em TI
 Embaixador Fedora
 LPIC-1
 (71) 8709-1289




[squid-users] In SF from October 1 - 7

2008-09-27 Thread Adrian Chadd
G'day everyone,

I'll be in San Francisco (ish area) from October 1 to October 7. Drop
me a line if you're interested in catching up for an impromptu Squid
related evening event sometime then.



Adrian


Re: [squid-users] latency issues squid2.7 WCCP

2008-09-26 Thread Adrian Chadd
uhm, "running without cache" would mean "don't use any disk storage".
I'd suggest trying to run squid with no aufs cache_dir lines, just the
NULL line (cache_dir null /). This rules out the disk storage as a
potential candidate for failure.
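(For reference, assuming the null store type was compiled in via
--enable-storeio, that's just:

cache_dir null /tmp

the path argument is a dummy but still required.)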



Adrian

2008/9/25 Ryan Goddard [EMAIL PROTECTED]:

 Thanks for the response, Adrian.
 Is recompile required to change to internal DNS?
 I've disabled ECN, pmtu_disc and mtu_probing.
 cache_dir is as follows:
 (recommended by Henrik)

 cache_dir aufs /squid0 125000 128 256
 cache_dir aufs /squid1 125000 128 256
 cache_dir aufs /squid2 125000 128 256
 cache_dir aufs /squid3 125000 128 256
 cache_dir aufs /squid4 125000 128 256
 cache_dir aufs /squid5 125000 128 256
 cache_dir aufs /squid6 125000 128 256
 cache_dir aufs /squid7 125000 128 256

 No peak data available, here's some pre-peak data:
 Cache Manager menu
 5-MINUTE AVERAGE
 sample_start_time = 1222199580.85434 (Tue, 23 Sep 2008 19:53:00 GMT)
 sample_end_time = 1222199905.507274 (Tue, 23 Sep 2008 19:58:25 GMT)
 client_http.requests = 268.239526/sec
 client_http.hits = 111.741117/sec
 client_http.errors = 0.00/sec
 IOSTAT shows lots of idle time - I'm unclear what you mean by
 "profiling"?
 Also, have not tried running w/out any cache - can you explain
 how this is done?

 appreciate the assistance.
 -Ryan



 Adrian Chadd wrote:

 Firstly, you should use the internal DNS code instead of the external
 DNS helpers.

 Secondly, I'd do a little debugging to see if its network related -
 make sure you've disabled PMTU for example, as WCCP doesn't redirect
 the ICMP needed. Other things like Window scaling negotiation and such
 may contribute.

 From a server side of things, what cache_dir config are you using?

 Whats your average/peak request rate? What about disk IO? Have you
 done any profiling? Have you tried running the proxy without any disk
 cache to see if the problem goes away?

 ~ terabyte of cache is quite large; I don't think any developers have
 a terabyte of storage in a box this size in a testing environment.

 2008/9/24 Ryan Goddard [EMAIL PROTECTED]:

 Squid 2.7.STABLE1-20080528 on Debian Linux 2.6.19.7
 running on quad dual-core 2.6GHz Opterons with 32 gig RAM; 8x140GB disk
 partitions
 using WCCP L2 redirects transparently from a Cisco 4948 GigE switch

 Server has one GigE NIC for the incoming redirects and two GigE NICs for
 outbound http requests.
 Using IPTables to port forward HTTP to Squid; no ICP, auth, etc.;
 strictly a
 web cache using heap/LFUDA replacement
 and 16GB memory allocated with mem pools on, no limit.

 Used in an ISP environment, accommodating approx. 8k predominately cable
 modem customers during peak.

 Issue we're experiencing is some web pages taking in excess of 20 seconds
 to
 load, marked latency for customers
 running web-based speed tests, etc.
 Cache.log and Access.log aren't indicating any errors or timeouts; system
 operates 96 DNS instances and 32k file descriptors
 (neither has gotten maxed yet).
 General Runtime Info from Cachemgr taken during pre-peak usage:
 Start Time:Tue, 23 Sep 2008 18:07:37 GMT
 Current Time:Tue, 23 Sep 2008 21:00:49 GMT

 Connection information for squid:
  Number of clients accessing cache:3382
  Number of HTTP requests received:2331742
  Number of ICP messages received:0
  Number of ICP messages sent:0
  Number of queued ICP replies:0
  Request failure ratio: 0.00
  Average HTTP requests per minute since start:13463.4
  Average ICP messages per minute since start:0.0
  Select loop called: 11255153 times, 0.923 ms avg
 Cache information for squid:
  Request Hit Ratios:5min: 42.6%, 60min: 40.0%
  Byte Hit Ratios:5min: 21.2%, 60min: 18.6%
  Request Memory Hit Ratios:5min: 18.3%, 60min: 17.2%
  Request Disk Hit Ratios:5min: 33.6%, 60min: 33.3%
  Storage Swap size:952545580 KB
  Storage Mem size:8237648 KB
  Mean Object Size:40.43 KB
  Requests given to unlinkd:0
 Median Service Times (seconds)  5 min60 min:
  HTTP Requests (All):   0.19742  0.12106
  Cache Misses:  0.27332  0.17711
  Cache Hits:0.08265  0.03622
  Near Hits: 0.27332  0.16775
  Not-Modified Replies:  0.02317  0.00865
  DNS Lookups:   0.09535  0.04854
  ICP Queries:   0.0  0.0
 Resource usage for squid:
  UP Time:10391.501 seconds
  CPU Time:4708.150 seconds
  CPU Usage:45.31%
  CPU Usage, 5 minute avg:33.29%
  CPU Usage, 60 minute avg:33.36%
  Process Data Segment Size via sbrk(): 1041332 KB
  Maximum Resident Size: 0 KB
  Page faults with physical i/o: 4
 Memory usage for squid via mallinfo():
  Total space in arena:  373684 KB
  Ordinary blocks:   372642 KB809 blks
  Small blocks:   0 KB  0 blks
  Holding blocks:216088 KB 21 blks
  Free Small blocks:  0 KB
  Free Ordinary blocks:1041 KB
  Total in use:  588730 KB 100%
  Total free:  1041 KB 0%
  Total size

Re: [squid-users] Object becomes STALE: refresh_pattern min and max

2008-09-26 Thread Adrian Chadd
Well, what are the complete request/reply headers for each of the
requests you're testing with?


Adrian

2008/9/25 BUI18 [EMAIL PROTECTED]:
 My Squid Version is 2.6/STABLE14

 Here's my refresh_pattern from squid.conf

 #Suggested default:
 refresh_pattern ^ftp:           1440    20%     10080
 refresh_pattern ^gopher:        1440    0%      1440

 #The following line will ignore a client no-cache header
 #refresh_pattern -i \.vid$      0       90%     2880    ignore-reload
 refresh_pattern -i \.vid$       7200    100%    10080   ignore-reload

 refresh_pattern .   0   20% 4320

 A link to the file looks something like this -- 
 http://ftp.mydomain.com/websites/data/myvideofile.vid

 I have to set up a station to grab the header but I can tell you that it does 
 not seem out of the ordinary.

 There is one cache-control:  Pragma: no-cache

 I believe I handle this with the ignore-reload options.

 Our server is an IIS server running on Windows 2003.

 I also ran a test with min and max age of 0 and 1 respectively, and it seems 
 to work.  I receive a TCP_REFRESH_HIT, which is what I would have expected as 
 these files do not change.

 Please let me know if you have any other ideas on how to track down why it 
 would release from cache before min age with no Expiration set on the object.

 Open to any suggestions.
 Thanks




 - Original Message 
 From: Michael Alger [EMAIL PROTECTED]
 To: squid-users@squid-cache.org
 Sent: Wednesday, September 24, 2008 8:09:50 AM
 Subject: Re: [squid-users] Object becomes STALE: refresh_pattern min and max

 On Wed, Sep 24, 2008 at 05:29:52AM -0700, BUI18 wrote:
 I went through your same thinking as you described below.

 I checked the Expires header from the server and we do not set
 one.  I checked via Fiddler web debug tool.  I also verified with
 the dev guys here regarding no Expires header.  I have set the min
 and max via refresh_pattern because of the absence of the Expires
 header thinking that Squid would keep it FRESH.

 Notice the -1 for expiration header (I do not set one on the
 object).  My min age is 5 days so I'm not sure why the object
 would be released from cache in less than 2 days.

 If the object was released from cache, when the user tried to
 access file, Squid reports TCP_REFRESH_MISS, which to me means
 that it was found in cache but when it sends a If-Modified-Since
 request, it thinks that the file has been modified (which it was
 not as seen by the lastmod date indicated in the store.log below.

 Interesting that it's caching the file for 2 days. What are the full
 headers returned with the object? Any other cache control headers?

 Is there any chance you have a conflicting refresh_pattern, so the
 freshness rules being applied aren't the ones you're expecting? May
 be worth doing some tests with very small max ages to confirm it's
 matching the right rule.
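One thing worth spelling out, since it bites people: refresh_pattern rules
are checked in the order they appear in squid.conf and the first match wins,
so a specific rule placed below the catch-all can never fire:

# wrong order: the .vid rule below is unreachable
refresh_pattern .           0     20%   4320
refresh_pattern -i \.vid$   7200  100%  10080 ignore-reload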








Re: [squid-users] latency issues squid2.7 WCCP

2008-09-25 Thread Adrian Chadd
Firstly, you should use the internal DNS code instead of the external
DNS helpers.

Secondly, I'd do a little debugging to see if its network related -
make sure you've disabled PMTU for example, as WCCP doesn't redirect
the ICMP needed. Other things like Window scaling negotiation and such
may contribute.
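(On Linux those knobs are sysctls, e.g.:

sysctl -w net.ipv4.ip_no_pmtu_disc=1
sysctl -w net.ipv4.tcp_window_scaling=0

exact names vary a little between kernel versions.)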

From a server side of things, what cache_dir config are you using?
Whats your average/peak request rate? What about disk IO? Have you
done any profiling? Have you tried running the proxy without any disk
cache to see if the problem goes away?

~ terabyte of cache is quite large; I don't think any developers have
a terabyte of storage in a box this size in a testing environment.

2008/9/24 Ryan Goddard [EMAIL PROTECTED]:
 Squid 2.7.STABLE1-20080528 on Debian Linux 2.6.19.7
 running on quad dual-core 2.6GHz Opterons with 32 gig RAM; 8x140GB disk
 partitions
 using WCCP L2 redirects transparently from a Cisco 4948 GigE switch

 Server has one GigE NIC for the incoming redirects and two GigE NICs for
 outbound http requests.
 Using IPTables to port forward HTTP to Squid; no ICP, auth, etc.; strictly a
 web cache using heap/LFUDA replacement
 and 16GB memory allocated with mem pools on, no limit.

 Used in an ISP environment, accommodating approx. 8k predominately cable
 modem customers during peak.

 Issue we're experiencing is some web pages taking in excess of 20 seconds to
 load, marked latency for customers
 running web-based speed tests, etc.
 Cache.log and Access.log aren't indicating any errors or timeouts; system
 operates 96 DNS instances and 32k file descriptors
 (neither has gotten maxed yet).
 General Runtime Info from Cachemgr taken during pre-peak usage:
 Start Time:Tue, 23 Sep 2008 18:07:37 GMT
 Current Time:Tue, 23 Sep 2008 21:00:49 GMT

 Connection information for squid:
   Number of clients accessing cache:3382
   Number of HTTP requests received:2331742
   Number of ICP messages received:0
   Number of ICP messages sent:0
   Number of queued ICP replies:0
   Request failure ratio: 0.00
   Average HTTP requests per minute since start:13463.4
   Average ICP messages per minute since start:0.0
   Select loop called: 11255153 times, 0.923 ms avg
 Cache information for squid:
   Request Hit Ratios:5min: 42.6%, 60min: 40.0%
   Byte Hit Ratios:5min: 21.2%, 60min: 18.6%
   Request Memory Hit Ratios:5min: 18.3%, 60min: 17.2%
   Request Disk Hit Ratios:5min: 33.6%, 60min: 33.3%
   Storage Swap size:952545580 KB
   Storage Mem size:8237648 KB
   Mean Object Size:40.43 KB
   Requests given to unlinkd:0
 Median Service Times (seconds)  5 min60 min:
   HTTP Requests (All):   0.19742  0.12106
   Cache Misses:  0.27332  0.17711
   Cache Hits:0.08265  0.03622
   Near Hits: 0.27332  0.16775
   Not-Modified Replies:  0.02317  0.00865
   DNS Lookups:   0.09535  0.04854
   ICP Queries:   0.0  0.0
 Resource usage for squid:
   UP Time:10391.501 seconds
   CPU Time:4708.150 seconds
   CPU Usage:45.31%
   CPU Usage, 5 minute avg:33.29%
   CPU Usage, 60 minute avg:33.36%
   Process Data Segment Size via sbrk(): 1041332 KB
   Maximum Resident Size: 0 KB
   Page faults with physical i/o: 4
 Memory usage for squid via mallinfo():
   Total space in arena:  373684 KB
   Ordinary blocks:   372642 KB809 blks
   Small blocks:   0 KB  0 blks
   Holding blocks:216088 KB 21 blks
   Free Small blocks:  0 KB
   Free Ordinary blocks:1041 KB
   Total in use:  588730 KB 100%
   Total free:  1041 KB 0%
   Total size:589772 KB
 Memory accounted for:
   Total accounted:   11355185 KB
   memPoolAlloc calls: 439418241
   memPoolFree calls: 378603777
 File descriptor usage for squid:
   Maximum number of file descriptors:   32000
   Largest file desc currently in use:   9171
   Number of file desc currently in use: 8112
   Files queued for open:   2
   Available number of file descriptors: 23886
   Reserved number of file descriptors:   100
   Store Disk files open: 175
   IO loop method: epoll
 Internal Data Structures:
   23570637 StoreEntries
   532260 StoreEntries with MemObjects
   531496 Hot Object Cache Items
   23561001 on-disk objects

 Generated Tue, 23 Sep 2008 21:00:47 GMT, by
 cachemgr.cgi/[EMAIL PROTECTED]


 TCPDUMP shows packets traversing all interfaces as expected; bandwidth to
 both upstream providers isn't being maxed
 and when Squid is shut down, http traffic loads much faster and without any
 noticeable delay.

 Where/what else can I look at for the cause of the latency?  It becomes
 significantly worse during peak use - but as
 we're not being choked on bandwidth and things greatly improve when I shut
 down squid that narrows it to something
 on the server.  Is the amount of activity overloading a single squid
 process?  I'm not 

Re: [squid-users] Storeurl - redirect contents without Cache-Control:no-cache header

2008-09-14 Thread Adrian Chadd
2008/9/14 chudy fernandez [EMAIL PROTECTED]:
 I've posted as ask by Adrian.
 http://wiki.squid-cache.org/WikiSandBox/Discussion/YoutubeCaching

 I wanna know if somebody out there has a better idea of how to fix
 it (temporarily) inside squid.

Keep an eye on that page if you haven't subscribed chudy, I've just
replied with some ideas.



Adrian


Re: [squid-users] COSS causing squid Segment Violation on FreeBSD 6.2S (store_io_coss.c)

2008-09-12 Thread Adrian Chadd
Well, I fixed the thing up under FreeBSD so it certainly was working
for me at some point.

I'm one server away from getting my polygraph test cluster going and
I'll hopefully be installing that tomorrow; I'll make sure COSS gets a
decent thrashing when thats all up and running.



adrian


2008/9/12 Mark Powell [EMAIL PROTECTED]:
 On Fri, 12 Sep 2008, Amos Jeffries wrote:

 Can you report a bug on this please, so we don't forget it, with a stack
 trace when the crash is occurring.

 Already did, last year:

 http://www.squid-cache.org/bugs/show_bug.cgi?id=1944

 Does this mean that COSS can't be successfully used with FreeBSD 7?
  Many thanks.

 --
 Mark Powell - UNIX System Administrator - The University of Salford
 Information Services Division, Clifford Whitworth Building,
 Salford University, Manchester, M5 4WT, UK.
 Tel: +44 161 295 6843  Fax: +44 161 295 5888  www.pgp.com for PGP key




[squid-users] Australian Development Meetup 2008 - Notes

2008-09-11 Thread Adrian Chadd
G'day,

I've started publishing the notes from the presentations and developer
discussions that we held at the Yahoo!7 offices last month.
You can find them at
http://www.squid-cache.org/Conferences/AustraliaMeeting2008/ .

I'm going to try and make sure any further
mini-conferences/discussions/etc which happen go up there so people
get more of an idea of what's going on.

Who knows, eventually there may be enough interest to hold a
reasonably formal Squid conference somewhere.. :)



Adrian


Re: [squid-users] how to solve DNS server outage

2008-09-09 Thread Adrian Chadd
It should be doing that by default; I suggest you stick wireshark or
tcpdump on your proxy, fail a DNS server and see what happens.

Log a bugzilla report if Squid absolutely doesn't fail over querying
the other DNS servers if your first one fails. Fail means no reply
btw..
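e.g. something like:

tcpdump -n -i eth0 udp port 53

while you take the primary nameserver down; you should see Squid retry the
query against the next resolv.conf entry.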



Adrian


2008/9/8 Jevos, Peter [EMAIL PROTECTED]:
 Hi

 I'm using the latest squid 2.7 and in my resolv.conf there are 2 name
 servers.
 Unfortunately the first didn't work well and all queries went unresolved:
 ..Unable to determine IP address from host name...

 Even though the secondary server was working, the queries timed out after
 a couple of minutes.

 How can I manage to try secondary server if primary is out of order?

 Thx

 Br

 pet




Re: [squid-users] Port -1

2008-09-09 Thread Adrian Chadd
Then patch Squid to convert a -1 port to another port.



Adrian

2008/9/10 rsoza [EMAIL PROTECTED]:

 Thanks, but not an option.


 Amos Jeffries-2 wrote:


 I have a legacy piece of code that is attempting to go through the proxy
 using the following port -1:

 http://www.server.com:-1/test/

 Squid is set up as transparent but still blocking the -1 port.
 Any suggestions to allow this port to go through the proxy?

 Patch/crack the application to use a valid port number.

 Amos





 --
 View this message in context: 
 http://www.nabble.com/Port--1-tp19403924p19404624.html
 Sent from the Squid - Users mailing list archive at Nabble.com.




Re: [squid-users] increasing threads for coss

2008-09-08 Thread Adrian Chadd
No idea, you'll have to check the code and then verify what your
operating system reports for the particular way Squid gets its CPU
time figures.



adrian

2008/9/8 Ramon Moreno [EMAIL PROTECTED]:
 Thanks for the update.

 Quick question.. does the squid snmp mib value for cpu usage by squid
 include the aio thread processes? or is this the squid thread only?

 Thanks!

 On Fri, Sep 5, 2008 at 10:17 PM, Adrian Chadd [EMAIL PROTECTED] wrote:
 the thread count should apply for all async-ops thread users.



 Adrian

 2008/9/6 Ramon Moreno [EMAIL PROTECTED]:
 Typically with using aufs we can increase the threads using

 --enable-async-io=<thread count>

 If I am using coss as my storage scheme, does increasing the thread
 count above also apply to COSS, or only AUFS.

 Also, if this is not the correct way to increase the thread count for
 coss, what is the correct way to do this for squid-2.7?

 Thank you.







Re: [squid-users] about cache Vary

2008-09-05 Thread Adrian Chadd
As long as the non-gzip'ed response also has a Vary header, and
different ETags are returned, yes.
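i.e. both variants should carry the same Vary but distinct validators --
a sketch of the two responses:

HTTP/1.1 200 OK
Vary: Accept-Encoding
Content-Encoding: gzip
ETag: "abc-gzip"

HTTP/1.1 200 OK
Vary: Accept-Encoding
ETag: "abc-plain"

Squid then selects between the stored variants on the request's
Accept-Encoding header.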



Adrian


2008/9/6 Jeff Peng [EMAIL PROTECTED]:
 Hello,

 If the real server sends a gzipped response with a Vary header, does squid
 cache two objects for the same url? One gzipped, another non-gzipped.

 Thanks.




Re: [squid-users] increasing threads for coss

2008-09-05 Thread Adrian Chadd
the thread count should apply for all async-ops thread users.
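i.e. one configure line covers them both (illustrative values):

./configure --enable-storeio=aufs,coss,ufs --enable-async-io=32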



Adrian

2008/9/6 Ramon Moreno [EMAIL PROTECTED]:
 Typically with using aufs we can increase the threads using

 --enable-async-io=<thread count>

 If I am using coss as my storage scheme, does increasing the thread
 count above also apply to COSS, or only AUFS.

 Also, if this is not the correct way to increase the thread count for
 coss, what is the correct way to do this for squid-2.7?

 Thank you.




Re: [squid-users] min-fresh / max-stale not working?

2008-09-03 Thread Adrian Chadd
When someone contributes the work or funds development.



Adrian

2008/9/4 Markus Karg [EMAIL PROTECTED]:
 Is there a plan for when HTTP/1.1 will be completely supported on all sides?
 I mean, I can hardly believe it -- HTTP/1.1 was specified back in 1999. Why
 wait so long?

 Thanks
 Markus

 -Original Message-
 From: Amos Jeffries [mailto:[EMAIL PROTECTED]
 Sent: Mittwoch, 3. September 2008 15:40
 To: Markus Karg
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] min-fresh / max-stale not working?

 Markus Karg wrote:
  Sorry, it was a typo. The test was done with SQUID-2.7-STABLE4
 actually.
  The HTTP/1.1 support is only experimental???

 Brand new in 2.7 and some bugs still being found.
 It's also only on one side of Squid, the one which links to Servers
 IIRC, so the client-facing code is still HTTP/1.0-only.

 Amos

 
  -Original Message-
  From: Amos Jeffries [mailto:[EMAIL PROTECTED]
  Sent: Mittwoch, 3. September 2008 07:14
  To: Markus Karg
  Cc: squid-users@squid-cache.org
  Subject: Re: [squid-users] min-fresh / max-stale not working?
 
  Dear SQUID Community,
 
  it seems as if SQUID is not dealing correctly with min-fresh and
  max-stale:
 
  Currently we are evaluating the use of SQUID-2.6-STABLE4. It all
  seems
  to work pretty well, but just min-fresh and max-stale is not
  working. Our client agent wants to guarantee to get data that is
  fresh
  for a specific amount of time. So we provide min-fresh=3500 and
  max-stale=0. To verify SQUID's behaviour we have programmed an
  origin
  server the always responds with some static headers and entity
 data,
  and
  a client that requests exactly that information, via SQUID as a
  proxy.
  The client uses the Cache-Control header with a min-fresh=3500 and
  max-stale=0 value, and the server is always sending data with a
  max-age=3600 value. But the client gets from SQUID a 200 OK
 response
  having max-age=3600 and Age=502! So, the current age of 502 plus
 the
  desired min-fresh of 3500 is 4002, minus the max-stale of 0 still
 is
  4002, what is much more than the max-age of 3600 -- so the request
  cannot be satisfied without a warning, since the response will not
  be
  fresh long enough! So we expect to get at least a Warning header.
  But
  there is none! It looks like SQUID just ignores the min-fresh=3500
  and
  max-stale=0 headers!
 
  The HTTP/1.1 specification says:
  13.1.2 Warnings
  Whenever a cache returns a response that is neither first-hand nor
  fresh enough (in the sense of condition 2 in section 13.1.1), it
  MUST
  attach a warning to that effect, using a Warning general-header.
  also it says:
  13.1.1 Cache Correctness
  If a stored response is not fresh enough by the most restrictive
  freshness requirement of both the client and the origin server, in
  carefully considered circumstances the cache MAY still return the
  response with the appropriate Warning header.
 
  In the default case, this means it meets the least restrictive
  freshness
  requirement of the client, origin server, and cache (see section
  14.9)
  So for me it looks as if SQUID is buggy, since it does not add the
  mandatory Warning header. Can that be true? Or do I have to enable
  some
  switch like HTTP/1.1-Compliance = YES?
  Squid 2.6 is HTTP/1.0 only.  For any HTTP/1.1 stuff you will need
  Squid
  2.7 and its experimental support.
 
  As for the cache controls, someone more knowledgeable will
 hopefully
  speak
  up.
 
  Amos
 


 --
 Please use Squid 2.7.STABLE4 or 3.0.STABLE8




Re: [squid-users] Squid-2.7 vary failure w/ non-encoded objects?

2008-09-02 Thread Adrian Chadd
It's logged in store.log; you just have to know what to look for.

But yes, an explicit log for that would be good.



Adrian

2008/9/2 Mark Nottingham [EMAIL PROTECTED]:
 Random thought: when an origin is doing one of these, is / can it be noted
 in cache.log somehow? Would be useful, at least for accelerator setups...


 On 01/09/2008, at 8:30 PM, Adrian Chadd wrote:

 2008/9/1 Henrik Nordstrom [EMAIL PROTECTED]:

 http://wiki.squid-cache.org/KnowledgeBase/VaryNotCaching

 Could you please take a peek and tell me if I've covered everything
 clearly enough?

 I think so.

 RFC references can be found at the vary/etag development pages.

 http://devel.squid-cache.org/vary/
 http://devel.squid-cache.org/etag/

 Thanks; I've just updated the article with this information.



 Adrian

 --
 Mark Nottingham   [EMAIL PROTECTED]





Re: [squid-users] Looking for Squid expert : tweak accelerator mode

2008-09-02 Thread Adrian Chadd
Yup, they're on http://www.squid-cache.org/, ah here it is:

http://www.squid-cache.org/Support/services.dyn



Adrian
(Xenion)

2008/9/2 Pure Azal [EMAIL PROTECTED]:
 Hi,

 We're looking for a Squid expert, in order to tweak our current
 installation, regarding accelerator mode (200 k connections per day).

 Is there a list of companies / people on a web site that we can contact?

 Thanks.




Re: [squid-users] COSS squid2.7stable4 windowsxpsp2

2008-09-02 Thread Adrian Chadd
COSS under Windows is 100% untested by the two main developers (myself
and Steven Wilton).

Sorry!



Adrian

2008/9/3 chudy fernandez [EMAIL PROTECTED]:
 I've tried squid 2.7 stable 4 with the COSS feature.

 Using Mozilla with Firebug: first browse, cache miss;
 second browse, cache hit.
 After restarting squid,
 browsing the site again is supposed to be a cache hit, right? But it's a
 cache miss, all of it.
 Running squid -d1, I found the objects are being released.
 One more thing: closing squid and running it again with -d1, it's still the
 same number of objects being released.

 conf:
 cache_dir coss C:/squid/coss 100 max-size=131072

 I've tried using FreeBSD with the same conf (except the path) and it's
 working just fine.

 Does squid COSS for Windows really work like this?







Re: [squid-users] Squid-2.7 vary failure w/ non-encoded objects?

2008-09-01 Thread Adrian Chadd
2008/9/1 Henrik Nordstrom [EMAIL PROTECTED]:

 http://wiki.squid-cache.org/KnowledgeBase/VaryNotCaching

 Could you please take a peek and tell me if I've covered everything
 clearly enough?

 I think so.

 RFC references can be found at the vary/etag development pages.

 http://devel.squid-cache.org/vary/
 http://devel.squid-cache.org/etag/

Thanks; I've just updated the article with this information.



Adrian


Re: [squid-users] Squid-2.7 vary failure w/ non-encoded objects?

2008-08-31 Thread Adrian Chadd
2008/8/31 Henrik Nordstrom [EMAIL PROTECTED]:

[snip]

I've tried to summarise this in a wiki article:

http://wiki.squid-cache.org/KnowledgeBase/VaryNotCaching

Could you please take a peek and tell me if I've covered everything
clearly enough?

Thanks!


Adrian


[squid-users] Squid-2.7 vary failure w/ non-encoded objects?

2008-08-30 Thread Adrian Chadd
G'day,

I'm seeing something strange with the behaviour of Squid-2.7 and Vary objects.

The following happens w/ an accelerator setup:

* Request for X comes in w/ Accept-Encoding: gzip,deflate
* Request is forwarded, gzip encoded variant is stored in the cache
* Request for X comes in w/out Accept-Encoding: header
* Request is forwarded, non-gzip encoded variant is returned
* storeSetPublicKey() does a lookup on the object URL, finds there's
an existing key and invalidates it
* This invalidates the Vary index object and encoded variant object
* Non-compressed object is stored in the cache

* Subsequent requests w/ or w/out Accept-Encoding: set will always
return the non-compressed object

Now, I understand why Squid returns a non-compressed variant of the
object if its in the cache (and I may look further into that behaviour
later on) but the first few steps are what bother me.

My utterly conjecture-based guess: should the origin server be
returning non-compressed objects with the same Vary: headers?

How are others' dealing with this in accelerator based setups?



Adrian


Re: [squid-users] Differences between Squid 2.7 and 3.0

2008-08-28 Thread Adrian Chadd
I'd suggest disabling it entirely from the build until someone
fixes/rewrites it.


Adrian

2008/8/29 Amos Jeffries [EMAIL PROTECTED]:
 On ons, 2008-08-27 at 17:07 -0800, Chris Robertson wrote:

 I suppose getting the COSS cache_dir store type cleaned up, or removed
 would be another suggestion.  I recall seeing COSS support in Squid 3
 hit the list a couple of times.  Associated with that would be the
 min-size option to cache_dir.

 Actually Squid-3.0 was supposed to ship without COSS, but it was
 forgotten when the release was made..


 Think it's worthwhile me dead-coding COSS for 3.0.STABLE9+?
 Or leaving it for someone to fix?

 Amos





Re: [squid-users] squid in ISP

2008-08-28 Thread Adrian Chadd
By default, yes.

Squid-2.X has TPROXY-2 support which allows you to spoof that w/ a
custom Linux kernel.

Squid-3.HEAD has TPROXY-4 support; I'm working on tidying up the code
and importing it for the next Squid-2.X release.

I'm working on the FreeBSD support for spoofing source IPs but I've
just had no time lately to finish it off.
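For the TPROXY-4 flavour the moving parts look roughly like this (a sketch
for a tproxy-capable Linux kernel and iptables; port and mark values are
illustrative):

iptables -t mangle -A PREROUTING -p tcp --dport 80 \
  -j TPROXY --on-port 3129 --tproxy-mark 0x1/0x1

paired with a matching tproxy-flagged http_port in squid.conf so Squid can
spoof the client's address on the upstream connection.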



Adrian

2008/8/29 mimbanis [EMAIL PROTECTED]:



 We loaded squid on a quad core linux box with around 1.2Tb disk
 capacity and 32Gb RAM, using a Cisco 4948 switch and WCCP2
 to transparently redirect to Squid.
 There were some major hurdles along the way
 mostly getting the 4948 to pass the L2 WCCP traffic -
 2 IOS bugs and a year in the process) but once that worked
 and we got our IPTABLES set up properly, transparent redirection
 has been working quite well.


 As a side query does the transparent redirection with WCCP have the proxy IP
 visible to the world or the originating client? e.g. if the client goes to a
 site along the lines of whatismyip.com will it show their address or that of
 the proxy.

 Cheers
 Simon
 --
 View this message in context: 
 http://www.nabble.com/squid-in-ISP-tp18396350p19212611.html
 Sent from the Squid - Users mailing list archive at Nabble.com.




Re: [squid-users] Re: Moving cache to another server

2008-08-27 Thread Adrian Chadd
Oh yes, the swapfile metadata uses integers rather than fixed-sized
values; damn I forgot about that.

I could probably write a tool to convert those pretty easily

I wonder if I could patch Squid to do things right..


Adrian

2008/8/27 Matus UHLAR - fantomas [EMAIL PROTECTED]:
  On 8/26/08, Adrian Chadd [EMAIL PROTECTED] wrote:
   You can just use rsync to copy the storedirs and the swaplogs.
You just need to shut the original Squid down first. :)

 On Wed, 27 Aug 2008 11:37:05 +0800
 howard chen [EMAIL PROTECTED] wrote:
  What is the swaplogs?
 
  you mean the swap.state  swap.state.last-clean files

 On 27.08.08 04:51, RW wrote:
 In other words, just copy the directory specified in cache_dir

 you must keep the same arch, e.g. you must not convert between 32 and 64bit
 OS...

 if you are migrating that even, I'm not sure if removing swap.* will be
 enough and if that will keep the cache content...
 --
 Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
 Warning: I wish NOT to receive e-mail advertising to this address.
 Warning (Slovak): I do not wish to receive any advertising mail at this address.
 Remember half the people you know are below average.




Re: [squid-users] Re: Moving cache to another server

2008-08-27 Thread Adrian Chadd
2008/8/27 Matus UHLAR - fantomas [EMAIL PROTECTED]:
 I probably didn't get it ... do you mean to rewrite it always to use 64bit
  fields instead of using a machine-dependent size?

Network-order 64-bit fields, yes.

The rest of the store file is platform agnostic..
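In the generic C sense, that just means serializing like this (a sketch,
not the actual swap.state code):

#include <stdint.h>

/* write a 64-bit value big-endian (network order) so the file reads
 * back identically on 32/64-bit and little/big-endian hosts */
static void put_u64(unsigned char *p, uint64_t v)
{
    int i;
    for (i = 7; i >= 0; i--) {
        p[i] = (unsigned char) (v & 0xff);
        v >>= 8;
    }
}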




Adrian


Re: [squid-users] out of aiocb slots

2008-08-27 Thread Adrian Chadd
It may alleviate it a bit but the problem is that there's no graceful
failover if you start exceeding the number of aiocb slots.

Just disable using aiops under freebsd-6/7 and use threaded async IO.
I'll look to sort out sensible POSIX AIO support in a future Squid
release.



Adrian

2008/8/28 Ramon Moreno [EMAIL PROTECTED]:
 I noticed someone else had this error, and they were looking to increase

 #define MAX_ASYNCOP 128


 What is a safe number to increase this to, and does this help to alleviate
 the

 2008/08/22 12:22:49| WARNING: out of aiocb slots!

 errors.

 Thanks!


 On Wed, Aug 27, 2008 at 3:27 PM, Ramon Moreno [EMAIL PROTECTED] wrote:
 Linux, kernel 2.6

 On Mon, Aug 25, 2008 at 6:39 PM, Adrian Chadd [EMAIL PROTECTED] wrote:
 Which operating system is this?



 adrian


 2008/8/26 Ramon Moreno [EMAIL PROTECTED]:
 --enable-coss-aio-ops is there.

 Is there a way to manipulate, or increase the available number of slots?

 Anything I can tweak in squid to help this?

 Thanks

 On Fri, Aug 22, 2008 at 2:31 PM, Adrian Chadd [EMAIL PROTECTED] wrote:
 What's squid -v show?

 The POSIX AIO support in Squid isn't all that crash hot at the moment,
 COSS + threads works better.



 Adrian

 2008/8/23 Ramon Moreno [EMAIL PROTECTED]:
 Hello Squid Gurus,

 Can someone please shed some light on the below error?

 2008/08/22 12:22:49| WARNING: out of aiocb slots!

 I am using COSS with the following options:
 max-size=16384 block-size=2048 max-stripe-waste=16384 membufs=500

 Thanks!









Re: [squid-users] Differences between Squid 2.7 and 3.0

2008-08-27 Thread Adrian Chadd
2008/8/27 Altrock, Jens [EMAIL PROTECTED]:
 Hi there,

 Looked for a while, but haven't found anything useful though about the
 differences; is there anything really relevant? Would be nice if someone
 could help me :-)

Squid-3 is a branch from Squid-2.5 from years ago. It wasn't rewritten
in C++; the source was made to compile using a C++ compiler and then
parts have been reimplemented in C++. A large part of the codebase is
still Squid-2.x type C. Most of the squid developers are actively
developing it.

Squid-2.7 and subsequent releases happen because people are still
overwhelmingly using Squid-2. Features make it into Squid-2 because
they're generally contributed or paid for by its users. Its still
written in C. I'm actively developing this elsewhere :)



Adrian


Re: [squid-users] Performance of Squid as Balancer

2008-08-27 Thread Adrian Chadd
Lots of stuff performs better than Squid for just straight connection
redirection.

The trick isn't request rate - its how the application behaves under a
variety of conditions. Some people have luck with varnish, pound,
nginx, apache. Some people have no luck with those and luck with
Squid. YMMV.

At some point Squid will be fast enough to compete with the above
-and- work in more use conditions. I hope :)



Adrian

2008/8/28 Jeff Peng [EMAIL PROTECTED]:
 elsergio wrote:

 Hi all,

 Does anybody know how many http queries Squid can dispatch configured as
 a balancer (not caching data)?

 Thanks!


 For pure balancer application I suggest you use LVS.
 (or maybe Nginx is better than squid on this behavior).




Re: [squid-users] Generating cache file hash - continued

2008-08-27 Thread Adrian Chadd
G'day,

What's your reason for avoiding the use of the Squid code? Licence issues?

I'm slowly migrating the Squid-2 code into a whole lot of reusable
modules as part of an experiment in code organisation. One of the
goals is to allow this exact situation - instead of rolling your own,
you can link against one of the Squid libraries at run-time.



Adrian

2008/8/21 John =) [EMAIL PROTECTED]:

 Further to my request yesterday... I would prefer to be able to just generate 
 the md5 hash manually, rather than writing code to use storeKeyPublic() in 
 src/store_key_md5.c. However, I must not be interpreting that function 
 correctly as my hashes do not match the hashes produced in store.log:

 For example, for 'GET http://www.squid-cache.org/Images/img8.gif' - putting 
 001http://www.squid-cache.org/Images/img8.gif into the hash generator gives 
 d5bf8db92c34e66592faa82454b5d867, but store.log 
 shows: F506597929DF2C9F8E51ED12E77E6548

 Is there a simple way to produce the correct hash without touching the 
 sourcecode? I am very new to this.


 John Redford.
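One likely culprit -- a guess, worth verifying against your version's
store_key_md5.c -- is that storeKeyPublic() hashes the request method as a
binary value rather than the ASCII text 001. Under that assumption, with
METHOD_GET serializing as the single byte 0x01:

printf '\001http://www.squid-cache.org/Images/img8.gif' | md5sum

(compare the digest case-insensitively against store.log) should get much
closer than hashing the literal string.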




Re: [squid-users] Moving cache to another server

2008-08-26 Thread Adrian Chadd
You can just use rsync to copy the storedirs and the swaplogs.
You just need to shut the original Squid down first. :)



Adrian

2008/8/26 howard chen [EMAIL PROTECTED]:
 Hello,

 One of our squid server is running for a year and collected around
 100GB+ cache, as I have new hardware (better disks, from 7200 ATA to
 15K SAS), I want to use the new server without rebuild the cache, is
 it possible to copy the cache directly or some other means?

 Thanks.




Re: [squid-users] Moving cache to another server

2008-08-26 Thread Adrian Chadd
yup.


Adrian

2008/8/27 howard chen [EMAIL PROTECTED]:
 Hi

 On 8/26/08, Adrian Chadd [EMAIL PROTECTED] wrote:
 You can just use rsync to copy the storedirs and the swaplogs.
  You just need to shut the original Squid down first. :)

 What is the swaplogs?

 you mean the swap.state  swap.state.last-clean files


 thanks.



