[squid-users] ANN: squidpeek

2011-08-27 Thread Mark Nottingham
I've put a script that I've found useful for a while up on github:

 https://github.com/mnot/squidpeek

In a nutshell, it does log analysis and gives a per-URL view, with interesting 
stats and sparklines to give you a snapshot of how the cache is treating your 
resources. Because it's per-URL, it's most suitable for a reverse proxy.

It's not the prettiest Python, but it gets the job done. Issues, pull requests, 
etc. welcome.

Cheers,

--
Mark Nottingham   http://www.mnot.net/





Re: [squid-users] How Squid behaves if we turn off Apache

2011-05-26 Thread Mark Nottingham
store-stale might be useful too;
  http://www.squid-cache.org/Doc/config/refresh_pattern/
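
A rough illustration of combining it with a long stale window (the pattern and times are placeholders, not recommendations):

  # store responses lacking explicit expiry info so they can be served
  # stale while the backend is unreachable:
  refresh_pattern -i . 0 20% 4320 store-stale
  max_stale 1 week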

Cheers,


On 26/05/2011, at 2:35 PM, Amos Jeffries wrote:

 On Wed, 25 May 2011 10:16:46 -0700, melissa schellenberg wrote:
 We're performing an upgrade on the CMS that is sitting behind Squid,
 and we want Squid to serve up ALL pages from its cache during the 
 hour
 or two of the upgrade, so that no requests are made to the CMS during
 that time.  Is there a hero mode setting that we can toggle in
 Squid?  Or should we be pre-loading all cached pages with long expiry
 times beforehand?
 I've been reading some rather old threads about offline mode but that
 seems to be applying only to forward proxying.  Thanks in advance for
 any help!
 
 There is a magic option. The very badly named offline_mode causes 
 squid to grab things as greedily as possible for caching. Turn it ON for 
 a while leading up to the outage, some days usually.
 
 Also, run a check of the Expires: headers being used by site content. 
 That is an absolute limit on cache storage. Bumping up the short ones to 
 after the outage is over will reduce unavailable objects for the 
 duration.
 
 Check max_stale (if available in your Squid) is set much longer than 
 the outage time. Several multiples of the outage period would probably 
 be best, this has to cope with data stored at the start of the 
 offline_mode turn-on as well as stuff requested just before outage.
 
 Remove must-revalidate and proxy-revalidate cache controls wherever 
 possible for a while leading up to the outage. This trades problems with 
 unavailable objects for problems with stale objects, so care is needed. 
 In general if you can easily remove a must-revalidate safely you may 
 benefit long-term by leaving it that way :)
 
 Also, maybe have a sorry downtime page to redirect posts to:
  acl POSTs method POST PUT
  deny_info http://example.com/sorry.html POSTs
  http_access deny POSTs
 
 
 You likely will miss some things. But those will help a lot.
 
 
 Alternatively, if this is super critical you could start the outage by 
 taking a static mirror of the site then pointing Squid to use that 
 temporarily. Sending this static copy with a fixed Expires: set to the 
 end of the upgrade outage will make all new requests transition to the 
 new site version at an easily determined time.
 
 Amos
 

--
Mark Nottingham   m...@yahoo-inc.com




Re: [squid-users] Re: Origin header fails on bridged-tproxy squid

2011-02-27 Thread Mark Nottingham
Origin is relatively new;
  http://tools.ietf.org/html/draft-ietf-websec-origin-00

In a nutshell, it's sort of like a referer just for the authority part of the 
URL (i.e., it omits the path).
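
For example (URLs are illustrative), a request triggered from a page would carry something like:

  Referer: http://www.example.com/some/page.html
  Origin: http://www.example.com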

Robert -- what exactly is breaking? Is your tproxy stripping any headers or 
modifying the request URL? Is there anything else unusual about your setup?

Cheers,



On 27/02/2011, at 4:08 PM, Amos Jeffries wrote:

 On 26/02/11 13:19, Robert Pipca wrote:
 Sorry,
 
 The correct site to see the problem is:
 
 http://vistoria.brochweld.com.br/bvsweb/paginas/tela_login.aspx
 
 
 There is no such header as Origin: in HTTP. Did you mean Host: ?
 
 Squid uses the Host: header to determine destination IP unless 
 configured otherwise. What are you seeing as the problem?
 
 
 Amos
 -- 
 Please be using
   Current Stable Squid 2.7.STABLE9 or 3.1.11
   Beta testers wanted for 3.2.0.5

--
Mark Nottingham   m...@yahoo-inc.com




Re: [squid-users] Allow or deny HTCP CLR

2010-08-31 Thread Mark Nottingham
What version of Squid?

Regards,


On 31/08/2010, at 10:35 PM, Thijs Stuurman wrote:

 Squid users,
 
 I am replacing ICP with HTCP in one configuration with 2 squid servers and one with 4.
 When testing I can see that, besides the HTCP_TST neighbor cache hit test, it 
 sometimes sends an HTCP_CLR to purge content on a neighbor.
 What I do not know is when or why it does this, and whether I want this behavior.
 
 All the documents and information I can find only cover HTCP_TST.
 Also, I have read warnings about forwarding HTCP_CLR commands because it 
 might create a loop.
 It does seem to be what I would want when using 4 servers; does anyone have 
 any experience with this?
 
 Kind regards,
 
 Thijs Stuurman
 System Administrator
 Security Officer
 
 Nxs Internet BV
 Kabelweg 37, 1014 BA, Amsterdam
 T. +31 (0) 20 58 11 088
 F. +31 (0) 20 58 11 071
 E. beheer.li...@nxs.nl
 
 
 

--
Mark Nottingham   m...@yahoo-inc.com




Re: [squid-users] Squid 2.7 without signature

2010-05-05 Thread Mark Nottingham
It requires patching the source; see errorpage.c, look for 
'ERR_SQUID_SIGNATURE'.

I'd support making it possible to suppress this (or turn it into an HTML 
comment, as I've done) via configuration.

Cheers,


On 05/05/2010, at 10:14 PM, marcus wrote:

 Hi,
 
 For security reasons, I would like my default error page to appear without the squid 
 signature.
 I have already been able to customize my error page and display it, but I don't know 
 how to remove the signature at the bottom of the page.
 
 Is it possible? The best I could manage was a short signature using the %s tag.
 
 Regards,
 Marcus D
 
 
 -- 
 ()  ascii ribbon campaign - against html e-mail
 /\  www.asciiribbon.org   - against proprietary attachments
 
 Why is it evil? -- http://www.georgedillon.com/web/html_email_is_evil.shtml

--
Mark Nottingham   m...@yahoo-inc.com




Re: [squid-users] Squid 3.1.1 and flash video scrubbing

2010-04-10 Thread Mark Nottingham

On 09/04/2010, at 9:05 PM, Henrik Nordström wrote:

 We don't know how the server would react to Range requests on this
 ranged fs=.. object. Maybe it implements them, maybe it doesn't.


RED says it doesn't.

--
Mark Nottingham   m...@yahoo-inc.com




Re: [squid-users] Squid 3.1.1 and flash video scrubbing

2010-04-08 Thread Mark Nottingham
The response to the request with the fs query arg is sending back a 
Content-Range header;

  
http://redbot.org/?uri=http%3A%2F%2Fserver437.files.youporn.com%2Fe4%2Fflv%2F426677_Splash.flv%3Fe%3D1273284436%26h%3D47ee1fbcb8d3ab05a06988683c2d94c1%26fs%3D4281434

That's weird. 


On 08/04/2010, at 1:32 PM, David Robinson wrote:

 My range_offset_limit and quick_abort_* settings were all default.
 
 I tried setting range_offset_limit -1   - did not fix the problem
 
 quick_abort_min 0 and quick_abort_max 0  -  did not fix the problem
 
 quick_abort_min -1 -  did not fix the problem
 
 
 The type of URLs it's having problems with are like these:
 
 
 1270696241.147   3691 172.16.16.199 TCP_MISS/200 3069898 GET 
 http://server437.files.youporn.com/e4/flv/426677_Splash.flv?e=1273284436&h=47ee1fbcb8d3ab05a06988683c2d94c1
  - DIRECT/208.111.181.139 video/x-flv
 1270696248.438   7293 172.16.16.199 TCP_MISS/200 1442091 GET 
 http://server437.files.youporn.com/e4/flv/426677_Splash.flv?e=1273284436&h=47ee1fbcb8d3ab05a06988683c2d94c1&fs=4281434
  - DIRECT/208.111.181.139 video/x-flv
 
 The first one is the initial video player loading the flv. This request works 
 correctly and the video starts to download. 
 
 The second URL is when I jump the video player slider ahead of the 
 downloading video, note the fs=4281434 added to the url.
 
 It's this fs= parameter that changes the behavior of the download. You could 
 wget the first URL and an flv would download. Wgetting the second URL keeps 
 making wget retry even though the website sends back a 200 OK.
 
 I have this all setup in a lab so if you want tcpdumps I can provide them.
 
 
 -Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
 Sent: Wednesday, April 07, 2010 8:36 PM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Squid 3.1.1 and flash video scrubbing
 
 On Wed, 7 Apr 2010 14:41:42 -0500, David Robinson
 drobin...@pavlovmedia.com wrote:
 I've started doing field tests of 3.1.1 and an interesting bug has shown
 up. If you try to jump ahead in a partially loaded video from
 youporn.com
 or redtube.com the flash player freezes and doesn't continue to download
 the video. With squid off, you would be able to jump to any part of the
 video and have it continue playing. I've tested this on 3.1.1, 3.1.0.14
 and
 3.1.0.15 and they all have the same behavior.  I've also tested this on
 squid 2.7 and both sites work properly.
 
 Can some other users confirm this before I submit a bug report?
 
 Using squid 3.1.1 on Debian 5.0.1  2.6.30.10 kernel
 
 What range_offset_limit and quick_abort_* settings are you working with?
 
 Also, are you able to track down any info about what the requests hitting
 Squid are? headers, etc
 
 Amos

--
Mark Nottingham   m...@yahoo-inc.com




Re: [squid-users] Version 2.5.STABLE14-20060721 support HTTP 1.1 ?

2010-03-10 Thread Mark Nottingham
Just curious -- is this an HP iLO server?

http://en.wikipedia.org/wiki/HP_Integrated_Lights-Out

It looks like others have had this problem:
  http://forums.isaserver.org/m_40053900/printable.htm

It's an extremely unfriendly thing to do. 

Cheers,


On 11/03/2010, at 4:19 AM, Gilles Routier wrote:

 Hi,
 
 I'm running Version 2.5.STABLE14-20060721 and I get an error message
 when I connect to an intranet web site:
 
 *NOTE: If the proxy server is not configured to use HTTP 1.1, then you
 will not be able to communicate to iLO. Please contact the server's
 System Administrator for assistance.
 
 Does this version of squid (2.5.STABLE14) support HTTP 1.1?
 
 If so, is there a configuration change to be made in squid.conf?
 
 Thanks,
 
 
 
 
 

--
Mark Nottingham   m...@yahoo-inc.com




Re: [squid-users] Squid not returning gzip files

2010-03-09 Thread Mark Nottingham
http://redbot.org/ should be able to confirm if this is a problem... 


On 10/03/2010, at 4:56 PM, Amos Jeffries wrote:

 Elli Albek wrote:
 Hi,
 I have squid in front of tomcat servers as reverse proxy. The origin
 servers return some files gzipped. I can confirm this by going to them
 directly with header
 Accept-Encoding: gzip,deflate
 Origin server returns:
 HTTP/1.1 200 OK
 Server: Apache-Coyote/1.1
 Cache-Control: max-age=1801
 Accept-Ranges: bytes
 ETag: W/18267-1250213328000
 Last-Modified: Fri, 14 Aug 2009 01:28:48 GMT
 Content-Type: text/css
 Content-Encoding: gzip
 Vary: Accept-Encoding
 Date: Wed, 10 Mar 2010 03:58:54 GMT
 Connection: close
 If I go to squid with the same header I get the uncompressed file:
 HTTP/1.0 200 OK
 Accept-Ranges: bytes
 Last-Modified: Fri, 14 Aug 2009 01:28:48 GMT
 Content-Type: text/css
 Content-Length: 18267
 Server: Apache-Coyote/1.1
 Cache-Control: max-age=1801
 ETag: W/18267-1250213328000
 Date: Wed, 10 Mar 2010 04:38:40 GMT
 X-Cache: HIT from www...
 X-Cache-Lookup: HIT from www...
 Via: 1.1 www...:80 (squid/2.7.STABLE6)
 Connection: keep-alive
 The only squid configuration is reverse proxy ACL for origin servers
 and the domains they map to, there is nothing specific to compression
 or headers in general. This is using the default tomcat connectors
 that support compression.
 Any ideas?
 
 It looks like the inconsistent vary problem.  Do a bit more of a scan 
 requesting the same URL this time checking variations of Accept header 
 content. ( *, */*, gzip, deflate, identity ).
 
 If any of the responses come back without Vary: Accept-Encoding that needs 
 to be fixed.
 
 Amos
 -- 
 Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE24
  Current Beta Squid 3.1.0.17

--
Mark Nottingham   m...@yahoo-inc.com




Re: [squid-users] How can I edit err pages footer?

2010-03-03 Thread Mark Nottingham
I think it's still worthwhile to allow the user to disable the footer, for 
aesthetic reasons. If you're accelerating your site, you don't want that 
showing up on your pages.

Cheers,


On 23/02/2010, at 6:53 AM, Amos Jeffries wrote:

 Struzik Wojciech wrote:
 Look at link below:
 http://techspalace.blogspot.com/2008/01/remove-squid-footer.html
 On Mon, Feb 22, 2010 at 1:22 PM, Landy Landy landysacco...@yahoo.com wrote:
 Hello.
 
 I would like to know where I can edit the footer for the error pages for 
 example I would like to change:
 
 Generated Mon, 22 Feb 2010 12:20:02 GMT by Optimum-Wireless-Services 
 (squid/3.0.STABLE21)
 
 to something else. I don't want the user to know I'm using squid.
 
 PS: the nasty ones who want to know will ALWAYS be able to detect Squid when 
 plugged in. Software fingerprinting goes as far down as the TCP 
 and UDP packet and port behaviour.
 
 httpd_suppress_version_string option allows removal of the obvious version 
 name.
 http://www.squid-cache.org/Doc/config/httpd_suppress_version_string
 
 visible_hostname alters the machine name displayed.
 http://www.squid-cache.org/Doc/config/visible_hostname
 
 The rest of that line is usually kind of important for troubleshooting.
 
 To blame all network errors on your web server, or leave users in the dark 
 about some problems use deny_info.
 http://www.squid-cache.org/Doc/config/deny_info
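 
 For illustration, the first two of those might look like this in squid.conf (the hostname is a placeholder):
 
   httpd_suppress_version_string on
   visible_hostname www.example.com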
 
 Amos
 -- 
 Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE24
  Current Beta Squid 3.1.0.16

--
Mark Nottingham   m...@yahoo-inc.com




Re: [squid-users] Max throughput of CARP Proxy

2010-02-04 Thread Mark Nottingham
Have you done any TCP tuning? 

When you say 25% CPU, do you mean 25% of the total of a quad-core box, or 25% 
of one core? What's the CPU?

Cheers,


On 04/02/2010, at 9:06 PM, Markus Meyer wrote:

 Hi folks,
 
 I'm testing a CARP setup and try to find out what the maximum throughput
 is. After a lot of testing I'm stuck with 5000 req/s and ca. 220 mbit/s
 in and out. Whatever I do I can't make it faster. Does anyone have an
 idea whats going on or if I do something wrong? How can I find out what
 the bottleneck is?
 
 This is the test setup:
 
 - 1 GBit NICs and even faster switches, all servers in the same subnet
 - 1 CARP and 5-9 Proxies as parents
 - all proxies use AUFS and cache_mem
 - proxies are warmed up, meaning I ran the test multiple times so that
 all content is on the proxies and they don't have to get anything from
 the original servers
 - the CARP does not cache anything(cache_mem 0 MB  cache_dir null no-store)
 - used Version: Squid 2.7.STABLE7-2
 - Kernel is the standard Etch 2.6.18
 
 The test itself:
 I use http_load[1] and start it on four servers at the same time with 50
 parallel requests. It uses a list with appr. 3 mio. URLs with most of
 the files being smaller than 4 kB but bigger than 2 kB. The test runs 30
 minutes.
 
 The CARP uses appr. 25% CPU and has lots of free memory. I get no
 problems reported in cache.log or syslog. And the throughput of 5k req/s
 and 220 mbit/s doesn't change with five, seven or nine parent proxies.
 
 
 [1] http://www.acme.com/software/http_load/http_load-12mar2006.tar.gz
 
 
 Any ideas and hints are welcome.
 
 Cheers, Markus

--
Mark Nottingham   m...@yahoo-inc.com




Re: [squid-users] Max throughput of CARP Proxy

2010-02-04 Thread Mark Nottingham
FWIW, I've seen Squid running at 12K req/sec (very small responses, 100% hits 
from memory). Higher using the multiple instance perl script (it scales 
reasonably linearly).

For a pure proxy (no caching), I'd estimate you could do about 5K req/sec for 
small responses on modern hardware, based on tests I've done previously. Again, 
that's one core.

Regarding TCP tuning on Linux,  see:
  http://fasterdata.es.net/TCP-tuning/linux.html
  http://www.psc.edu/networking/projects/tcptune/#Linux
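
The sort of Linux settings those pages cover looks something like this (values are illustrative starting points only, not recommendations):

  # /etc/sysctl.conf -- raise socket buffer limits for high-throughput proxies
  net.core.rmem_max = 16777216
  net.core.wmem_max = 16777216
  net.ipv4.tcp_rmem = 4096 87380 16777216
  net.ipv4.tcp_wmem = 4096 65536 16777216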

Cheers,



On 05/02/2010, at 3:56 AM, Kinkie wrote:

 On Thu, Feb 4, 2010 at 5:52 PM, Markus Meyer markus.me...@koeln.de wrote:
 
 Nice one. I think I can get to testing it next week. But the numbers I
 get out of it must be handled with care. Since this is a pure test
 environment. It's like a best-case scenario which sadly never will
 happen in a production environment ;)
 
 It's ok, and the fact will be highlighted when publishing the results.
 It is however common practice of all commercial vendors to use
 pure-lab-environment numbers when pitching their offers, and I find it
 only fair that we match that with numbers of our own. Also, it'll be a
 nice ego-boost for all the Squid community to be able to claim
 impressive numbers - hell, your numbers are impressive already..
 
 Notice: if you implement multi-instance squid, an added boost might
 come from tying each instance to a specific CPU core (on Linux it's
 done via the taskset command)
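 
 For example (paths and config file names are hypothetical; one instance per core, each with its own config and ports):
 
   taskset -c 0 /usr/sbin/squid -f /etc/squid/squid-cpu0.conf
   taskset -c 1 /usr/sbin/squid -f /etc/squid/squid-cpu1.conf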
 
 -- 
/kinkie

--
Mark Nottingham   m...@yahoo-inc.com




Re: [squid-users] Squid 2.7STABLE7 randomly crashes

2010-01-26 Thread Mark Nottingham
 
 shutdown_lifetime 10 seconds
 
 cache_dir aufs /var/spool/squid/cache01 62500 32 256
 cache_dir aufs /var/spool/squid/cache02 62500 32 256
 cache_dir aufs /var/spool/squid/cache03 62500 32 256
 cache_dir aufs /var/spool/squid/cache04 62500 32 256
 
 cache_replacement_policy heap GDSF
 memory_replacement_policy heap GDSF
 
 cache_log /var/log/squid/cache.log
 cache_store_log none
 pid_filename /var/run/squid.pid
 coredump_dir /var/spool/squid/crash
 log_icp_queries off
 client_db on
 half_closed_clients off
 
 cache_mem 512 MB
 maximum_object_size 768000 KB
 maximum_object_size_in_memory 96 KB
 memory_pools off
 
 forwarded_for off
 
 snmp_port 1601
 snmp_incoming_address 0.0.0.0
 snmp_outgoing_address 255.255.255.255
 
 auth_param ntlm  program /usr/bin/ntlm_auth 
 --helper-protocol=squid-2.5-ntlmssp
 auth_param ntlm children 30
 auth_param ntlm keep_alive on
 
 auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
 auth_param basic children 5
 auth_param basic realm User Authentication
 
 external_acl_type ads-group children=20 %LOGIN
 /usr/local/squid/libexec/wbinfo_group.pl
 
 url_rewrite_children 50
 redirector_bypass off
 url_rewrite_program /opt/Websense/bin/WsRedtor
 
 
 Does anyone have any idea how to fix this problem?
 
 Many Thanks
 
 myOcella
 

--
Mark Nottingham   m...@yahoo-inc.com




Re: [squid-users] Help with extension_methods

2010-01-25 Thread Mark Nottingham
_z___ isn't a request method on anybody's planet :)

It's more than likely that the client is either trying to talk a protocol other 
than HTTP to you, or its request message delimitation (usually, Content-Length) 
is messed up. 



On 26/01/2010, at 6:02 AM, Dean Weimer wrote:

 I found some errors in my cache.log file this afternoon, I have tracked it 
 down to a development machine and know that  they occurred while the 
 developer working on the machine was doing a build out of Plone, which did in 
 the end succeed so I am not sure this is a huge concern but would rather not 
 have the errors in the future if it can be fixed.
 
 There were several entries like this in the access.log:
 1264442419.041  0 10.20.147.34 NONE/400 1806 NONE 
 error:unsupported-request-method - NONE/- text/html
 
 That corresponded to entries like this in the cache.log:
 2010/01/25 12:03:35| clientParseRequestMethod: Unsupported method attempted 
 by 10.20.147.34: This is not a bug. see squid.conf extension_methods
 2010/01/25 12:03:35| clientParseRequestMethod: Unsupported method in request 
 '_z___'
 
 I checked on the extension_methods, but am a little confused as to what to 
 enter for the method?  To possibly solve this issue, would I just use the 
 following configuration line?
 extension_methods _z___
 
 If anyone could point me in the right direction to find some resources on 
 this issue it would be greatly appreciated.  I tried searching but didn't 
 find any information on _z___ on the web.  I am currently running 
 squid3.0.STABLE21.
 
 Thanks,
  Dean Weimer
  Network Administrator
  Orscheln Management Co
 

--
Mark Nottingham   m...@yahoo-inc.com




Re: [squid-users] Exception error:corrupted chunk size

2009-12-07 Thread Mark Nottingham
Seems to be, at a glance...


 HTTP/1.1 200 OK
 Server: Caribou/5.0
 Date: Mon, 07 Dec 2009 23:37:27 GMT
 Content-type:  text/html; charset=utf-8
 Transfer-Encoding: chunked
 Transfer-encoding: chunked
 Vary: Accept-Encoding
 
 2000
 <HTML>
 <HEAD>
 <TITLE></TITLE>
 


On 03/12/2009, at 10:33 PM, Amos Jeffries wrote:

 Mark Nottingham wrote:
 It looks like they're sending Transfer-Encoding: chunked twice;
 http://redbot.org/?uri=http%3A%2F%2Ft.news-accorhotels.com%2Fnl%2Fjsp%2Fm.jsp%3Fc%3Da6f0cf19b1c0782ef7
 Cheers,
 
 Maybe. But I'm inclined to believe its because ...
 
 (...drumroll please...)
 
 ... the body content in fact NOT being chunked encoded at all?
 
 Amos
 
 On 02/12/2009, at 10:10 AM, Amos Jeffries wrote:
 On Tue, 1 Dec 2009 17:10:27 +0800, Wong wongb...@telkom.net wrote:
 Dear All,
 
 I got problem to access 
 http://t.news-accorhotels.com/nl/jsp/m.jsp?c=a6f0cf19b1c0782ef7 through 
 proxy (site can be accessed without proxy).
 
 After checking cache.log, I found message below.
 
 Need advise why squid generate that message and stop the browsing
 activity. How can I fix it?
 
 Thanks a lot for your kind help.
 
 Wong
 
 ---snip---
 
 2009/12/01 16:55:45| Exception error:corrupted chunk size
 2009/12/01 16:56:13| Exception error:corrupted chunk size
 2009/12/01 17:04:14| Exception error:corrupted chunk size
 Make sure you have the latest Squid-2.7 or Squid-3.x release to handle the
 chunking properly.
 
 If it still remains, the web server is badly broken. Complain to the
 website administrator.
 
 Amos
 
 --
 Mark Nottingham   m...@yahoo-inc.com
 
 
 -- 
 Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
  Current Beta Squid 3.1.0.15

--
Mark Nottingham   m...@yahoo-inc.com




Re: [squid-users] Exception error:corrupted chunk size

2009-12-07 Thread Mark Nottingham
Yup. 

At first I thought Caribou was http://www.cariboucms.com/, but now I'm not 
so sure; AFAICT that's a PHP/MySQL CMS, and doesn't have its own server (as 
well as there being a 'jsp' in the URL). Any other candidates?

That URL also seems to be 500'ing now...


On 08/12/2009, at 12:09 PM, Amos Jeffries wrote:

 On Tue, 8 Dec 2009 10:38:33 +1100, Mark Nottingham m...@yahoo-inc.com
 wrote:
 Seems to be, at a glance...
 
 
 HTTP/1.1 200 OK
 Server: Caribou/5.0
 Date: Mon, 07 Dec 2009 23:37:27 GMT
 Content-type:  text/html; charset=utf-8
 Transfer-Encoding: chunked
 Transfer-encoding: chunked
 Vary: Accept-Encoding
 
 2000
 <HTML>
 <HEAD>
 <TITLE></TITLE>
 
 
 
 
 hmm
 
 Aha!. Darn ISP filtering again.
 
 Okay, so my ISP runs some proxy for domestic traffic.
 
 What I was getting back was the first chunked header stripped off and the
 chunks decoded. As one would expect, the remaining Transfer-Encoding header
 with no remaining chunks goes badly.
 
 Amos
 
 
 On 03/12/2009, at 10:33 PM, Amos Jeffries wrote:
 
 Mark Nottingham wrote:
 It looks like they're sending Transfer-Encoding: chunked twice;
 
 http://redbot.org/?uri=http%3A%2F%2Ft.news-accorhotels.com%2Fnl%2Fjsp%2Fm.jsp%3Fc%3Da6f0cf19b1c0782ef7
 Cheers,
 
 Maybe. But I'm inclined to believe its because ...
 
 (...drumroll please...)
 
 ... the body content in fact NOT being chunked encoded at all?
 
 Amos
 
 On 02/12/2009, at 10:10 AM, Amos Jeffries wrote:
 On Tue, 1 Dec 2009 17:10:27 +0800, Wong wongb...@telkom.net
 wrote:
 Dear All,
 
 I got problem to access
 http://t.news-accorhotels.com/nl/jsp/m.jsp?c=a6f0cf19b1c0782ef7
 through proxy (site can be accessed without proxy).
 
 After checking cache.log, I found message below.
 
 Need advise why squid generate that message and stop the browsing
 activity. How can I fix it?
 
 Thanks a lot for your kind help.
 
 Wong
 
 ---snip---
 
 2009/12/01 16:55:45| Exception error:corrupted chunk size
 2009/12/01 16:56:13| Exception error:corrupted chunk size
 2009/12/01 17:04:14| Exception error:corrupted chunk size
 Make sure you have the latest Squid-2.7 or Squid-3.x release to
 handle
 the
 chunking properly.
 
 If it still remains, the web server is badly broken. Complain to the
 website administrator.
 
 Amos
 
 --
 Mark Nottingham   m...@yahoo-inc.com
 
 
 -- 
 Please be using
 Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
 Current Beta Squid 3.1.0.15
 
 --
 Mark Nottingham   m...@yahoo-inc.com

--
Mark Nottingham   m...@yahoo-inc.com




Re: [squid-users] Exception error:corrupted chunk size

2009-12-02 Thread Mark Nottingham
It looks like they're sending Transfer-Encoding: chunked twice;

http://redbot.org/?uri=http%3A%2F%2Ft.news-accorhotels.com%2Fnl%2Fjsp%2Fm.jsp%3Fc%3Da6f0cf19b1c0782ef7

Cheers,


On 02/12/2009, at 10:10 AM, Amos Jeffries wrote:

 On Tue, 1 Dec 2009 17:10:27 +0800, Wong wongb...@telkom.net wrote:
 Dear All,
 
 I got problem to access 
 http://t.news-accorhotels.com/nl/jsp/m.jsp?c=a6f0cf19b1c0782ef7 through 
 proxy (site can be accessed without proxy).
 
 After checking cache.log, I found message below.
 
 Need advise why squid generate that message and stop the browsing
 activity. 
 How can I fix it?
 
 Thanks a lot for your kind help.
 
 Wong
 
 ---snip---
 
 2009/12/01 16:55:45| Exception error:corrupted chunk size
 2009/12/01 16:56:13| Exception error:corrupted chunk size
 2009/12/01 17:04:14| Exception error:corrupted chunk size
 
 Make sure you have the latest Squid-2.7 or Squid-3.x release to handle the
 chunking properly.
 
 If it still remains, the web server is badly broken. Complain to the
 website administrator.
 
 Amos
 

--
Mark Nottingham   m...@yahoo-inc.com




Re: [squid-users] Custom max-age header

2009-10-26 Thread Mark Nottingham
I know it's nasty, but I've been thinking for a while about a  
refresh_pattern option that would write the new, overridden freshness  
(e.g., assuming you're using override-expire, LM factor, etc.) into  
outgoing response cache-control.



On 27/10/2009, at 1:47 PM, Amos Jeffries wrote:

On Thu, 22 Oct 2009 11:24:50 +0200, Struzik Wojciech bm9ib...@gmail.com wrote:

You have not yet answered the question of:
 WHY are you even thinking this ??


Because one of my backends is Amazon S3 and I can't set up
Cache-Control on it. So I'm looking for a different solution for how to change
this header field (squid). I want the objects to be cached as long as
possible on the client's web-browser side (1 year).



Eww. Nasty.
FWIW; any dynamic page language should be able to set them per-page.

There is no way to insert values into Cache-Control: while retaining any
other settings there. But if Squid is built to allow HTTP violations you
might use header_access to strip out the Cache-Control: header entirely and
header_replace to add a new version of your own when one is stripped out
(I'm not sure of the behavior if there was none to begin with).

This is *very* risky, since it only allows one flat CC value regardless of
the content.
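
A rough sketch of that approach (illustrative only -- it needs a build with
HTTP violations enabled, and directive names/scope differ between Squid 2.x
and 3.x):

  header_access Cache-Control deny all
  header_replace Cache-Control public, max-age=31536000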

Amos




On Wed, Oct 21, 2009 at 1:52 PM, Amos Jeffries squ...@treenet.co.nz
wrote:

Struzik Wojciech wrote:


Actually I'm using varnish, but varnish is unstable (slowdowns,
coredumps). Varnish supports Cache-Control, so I can set this up on it.
Here is part of my network topology:

 nginx (consistent hash) --+-- varnish --+-- 1st backend
                           |             +-- 2nd backend
                           |
                           +-- varnish --+-- 1st backend
                                         +-- 2nd backend


I want to replace varnish with squid (better stability/performance when one
of the backends is down), so I wonder where it is better to add the
Cache-Control: max-age header, on nginx or squid. Is it possible to
set up a custom Cache-Control (max-age) on squid?


You have not yet answered the question of:
 WHY are you even thinking this ??

It's only possible to do it reliably and securely on the originating web
server.  Squid does not make it easy to grossly violate the HTTP protocol.

There are three levels of Cache-Control values:
 s-maxage, applying to middleware proxies
 max-age, applying to web browsers (and middleware only if there is no
 s-maxage present)

The Surrogate-Control header is also available in the latest 3.1 via ESI to
control middleware in the delegated reverse-proxy chain, separate from the
s-maxage values for external middleware.
It should not be too hard to make it work for HTTP reverse-proxy situations.

Amos



On Wed, Oct 21, 2009 at 11:40 AM, Matus UHLAR - fantomas
uh...@fantomas.sk wrote:


On 21.10.09 10:48, Struzik Wojciech wrote:


I'm using Squid 2.7. How can I add a custom max-age field to the
Cache-Control response header?


why would you want to do that on squid?
--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/

Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu  
postu.

BSE = Mad Cow Desease ... BSA = Mad Software Producents Desease



--
Please be using
Current Stable Squid 2.7.STABLE7 or 3.0.STABLE19
Current Beta Squid 3.1.0.14



--
Mark Nottingham   m...@yahoo-inc.com




Re: [squid-users] Architecture

2009-06-23 Thread Mark Nottingham
Last time I looked at lighty, it buffered the entire message when used  
in proxy mode; this isn't really workable.


I'd use Squid on the FE in CARP mode; CARP is a way of using  
consistent hashing to spread the load out to multiple servers in a  
predictable way (like a DHT).


If you need even more capacity, you could run multiple instances of  
squid on the FE listening to the same socket; see

  http://www1.id.squid-cache.org/mail-archive/squid-users/200810/0136.html
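
A minimal sketch of the front-end side (hostnames and port are placeholders):

  cache_peer parent1.example.com parent 80 0 carp no-query
  cache_peer parent2.example.com parent 80 0 carp no-query
  cache_peer parent3.example.com parent 80 0 carp no-query
  never_direct allow all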

Cheers,


On 24/06/2009, at 9:18 AM, Ronan Lucio wrote:


Hi Kinkie,

On Tue, 23 Jun 2009 21:51:17 +0200, Kinkie wrote

Hi,
 I can't see the advantage of using lighthttpd instead of squid+carp
as the frontend,


The idea of putting a lighttpd server as the frontend is for load balancing.


What exactly do you mean with squid+carp? several squid servers  
working as one?

Can I have it working in an external DataCenter?
If so it seems to be a better solution, especially because it's a
fault-tolerance solution.


and if using lighthttpd i can't see the advantage of
not serving static content directly out of the balancer.


Actually, I'm just afraid of overloading the server.
Initially I don't know exactly how many resources it would consume from each
server.
If a server like that can handle executing two roles, I'm sure it would be
better.



Also watch out as nfs has locking and scaling issues of its own
(assuming that nfs is what you mean by single filesystem), and it
also introduces a very nasty point-of-failure.


Yes, it's a NAS.

Kinkie, the architecture doesn't have to be the one I suggested.
It's just what I could figure out. Of course I want to make it better.
Do you have a suggestion for that?

For all I have understood your suggestion is:

1) Some squid servers + carp

2) Application server as the backend servers

3) A third server serving static resources

I just didn't figure out your suggestion for storage.

Thank you,
Ronan


--
Mark Nottingham   m...@yahoo-inc.com




Re: [squid-users] caching the uncacheable

2009-06-04 Thread Mark Nottingham
Yep. The only things that this won't cover, from the top of my head,  
is CC: no-store and some permutations of Vary.
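
For reference, those options combined into a single (purely illustrative) line -- the pattern and times are placeholders:

  refresh_pattern -i . 1440 80% 10080 ignore-private ignore-no-cache ignore-must-revalidate ignore-reload override-expire override-lastmod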


Cheers,

On 03/06/2009, at 3:15 PM, Chudy Fernandez wrote:



Date Wed, 21 Jan 2004 19:50:30 GMT
Pragma no-cache
Cache-Control private, no-cache, no-cache=Set-Cookie, proxy-revalidate
Expires Wed, 17 Sep 1975 21:32:10 GMT
Last-Modified Wed, 19 Apr 2000 11:43:00 GMT
How to cache contents with this kind of headers?

ignore-private ignore-no-cache ignore-must-revalidate ignore-reload  
override-expire override-lastmod

what else do I need? ... store-stale?





--
Mark Nottingham   m...@yahoo-inc.com




Re: [squid-users] squid 2.7 stable 6 increased load from 2.6

2009-04-27 Thread Mark Nottingham

try

update_headers off

and see if there's a difference.


On 28/04/2009, at 1:56 PM, Amos Jeffries wrote:


Greetings,

I just recently upgraded (or in the midst of testing) and I note that
3 servers that I upgraded from 2.6 stable 13 to 2.7 stable 6, are
running 3-4x load of the identical servers running the 2.6 stable
variety.

I was wondering what would cause this?

Should I stick with 2.6 stable 13 and be happy? I was looking forward
to some of the additional http 1.1 (unsupported support) :)

Very weird, ideas?


Might be the attempted fix for bug #7.
It causes modified objects to be deleted and re-stored with every 3xx
message updating them. 2.6 only updated the memory copy, 2.7 alters  
the

disk copy.

Amos




--
Mark Nottingham   m...@yahoo-inc.com




Re: [squid-users] HTCP logging?

2009-04-22 Thread Mark Nottingham

No there's not. See:
  http://www.squid-cache.org/bugs/show_bug.cgi?id=2627


On 23/04/2009, at 7:59 AM, Dean Weimer wrote:

Working on Testing a child parent proxy setup using HTCP, I was  
wondering if there is any way to see a log of the HTCP requests on  
the parent similar to how you see the ICP requests in the access log?


Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co


--
Mark Nottingham   m...@yahoo-inc.com




Re: [squid-users] Invalidating of a resource cached with a POST request

2009-04-22 Thread Mark Nottingham

Squid2-HEAD does this. See:
  http://www.squid-cache.org/Versions/v2/HEAD/changesets/12355.patch
(be aware that that has dependencies on several other changesets on  
HEAD)


Cheers,



On 23/04/2009, at 1:42 AM, pgrisolano@orange-ftgroup.com pgrisolano@orange-ftgroup.com 
 wrote:



Hello,

I would like to know whether, with SQUID, it is possible to invalidate a
resource cached via a GET when a POST, PUT or DELETE request is made to the
same URI.

Here is an example of what I would do:
* A client sends a GET request for a page, e.g. /mapage1
* The response is cached by the SQUID proxy
* The site manager sends a POST request to modify this resource, and the
resource is removed from the SQUID cache

In HTTP 1.1 it is theoretically possible to do this (RFC 2616 sec 13.10),
but from my research SQUID does not implement this recommendation for
the POST request (it is OK for PUT and DELETE requests).

(I used the 2.6 version of SQUID)

thank you for your help
Philippe


--
Mark Nottingham   m...@yahoo-inc.com




Re: [squid-users] logging changes 2.6 - 2.7

2009-04-21 Thread Mark Nottingham

That was fixed in STABLE4;
  
http://www.squid-cache.org/Versions/v2/2.7/squid-2.7.STABLE6-RELEASENOTES.html#s7

See also:
  http://www.squid-cache.org/bugs/show_bug.cgi?id=2406

Cheers,


On 22/04/2009, at 5:46 AM, Ross J. Reedstrom wrote:


Hi all -
Recently upgraded a proxy-accelerator setup to using 2.7 (Debian
2.7.STABLE3-4.1, specifically) from 2.6 (2.6.20-1~bpo40+1). In this
setup, I'm using an external rewriter script to add virtual rooting  
bits
to the requested URL. (It's a zope system, using the  
VirtualHostMonster

rewriter, like so:
Incoming request:
GET http://example.com/someimage.gif

Rewritten to:

GET 
http://example.com/VirtualHostBase/http/example.com:80/somepath/VirtualHostRoot/someimage.gif

These are then farmed out to multiple cache_peer origin servers.

The change I'm seeing is that the access.log using a custom format
line:

logformat custom %ts.%03tu %6tr %a %ui %un [%tl] %rm %ru HTTP/%rv %Hs %st %{Referer}h %{User-Agent}h %Ss:%Sh/%A %{X-Forwarded-For}h


The change is that in 2.6 %ru logged the requested URL as seen on the
wire. In 2.7, we get the rewritten URL.

Is this intentional? Is there a way around it? Since referer (sic) url
is not similarly rewritten, it gives log analysis software (that
attempts  to determine click-traces and page views) fits. I can
post-process my logs, but I'd rather fix them at generation time. I  
can
understand the need to have the rewritten version available: just  
not at

the cost of missing what was actually on the wire that Squid read.

Ross
--
Ross Reedstrom, Ph.D.  
reeds...@rice.edu
Systems Engineer  Admin, Research Scientistphone:  
713-348-6166
The Connexions Project  http://cnx.orgfax:  
713-348-3665

Rice University MS-375, Houston, TX 77005
GPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E  F888 D3AE 810E 88F0  
BEDE


--
Mark Nottingham   m...@yahoo-inc.com




Re: [squid-users] An option to force keepalives for POST during forward proxy

2009-03-20 Thread Mark Nottingham

It's not so much that it isn't well-defined, it's that it's dangerous.

Since a server can close an idle connection at any time, there's a  
chance that as you start your POST, the server will close the  
connection, leaving things in an indeterminate state. Unlike GET, POST  
can't be automatically retried if it encounters this kind of failure.


Note that this risk is present even with a new connection -- i.e., a  
server can close a connection you've just opened to it before you send  
the first request. However, the chances of this happening are a lot  
less than when the conn has been idle for a while.


All of that said, it would be nice to do something about this. There  
are a few possible short-term solutions;


 - A hop-by-hop protocol extension that allows a server to advertise  
how long it will keep an idle connection open, so that the client can  
judge whether it's safe to reuse it for a POST. While there are still  
some circumstances where this could fail (network segment, server  
running out of resources and ditching existing conns, etc.), it would  
make it somewhat safer.


 - Having clients make the POST request with Expect: 100-continue to  
make sure that the connection is still usable. However, Squid  
currently doesn't support Expect/Continue (at least well).
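
For illustration, the exchange would look roughly like this (host and length are placeholders):

  POST /service HTTP/1.1
  Host: api.example.com
  Content-Length: 45000
  Expect: 100-continue

  HTTP/1.1 100 Continue

(only after the 100 Continue does the client send the 45000-byte body)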


See http://www.squid-cache.org/bugs/show_bug.cgi?id=1979 for more  
discussion along these lines.


The long-term solution is to layer HTTP over another protocol, like  
SCTP. This is already in discussion in the IETF and there are a few  
papers and test implementations available. I'd love to see Squid do  
this, but it's probably going to be a while.


Cheers,


On 20/03/2009, at 5:49 PM, Michael Spiegle wrote:

I have a situation where being able to use keepalives for POST  
methods would be very handy.  Essentially, I have a series of .NET  
webservers that POST data to a linux-based webservice.  The .NET  
webservers and the the linux servers are geographically distributed,  
so we have a 65ms latency between them.  The average POST body size  
is 40-50KB.


Our goal is to significantly reduce the time it takes to make these  
POSTs.  We have tried using TCP Window scaling and keepalives on  
the .NET servers, however the .NET framework appears to have many  
bugs in this regard and is not reliable.  We have decided that our  
best course of action is to have the .NET servers talk to the  
webservice through a small cluster of squid proxies.  This way, we  
can use Linux's reliable TCP Window scaling.  Unfortunately, squid  
doesn't keep persistent connections on the backend for POSTs.


I read a message from the list a while back that this was done on  
purpose because using POSTs with keepalives is not well defined.  I  
believe there should be an option to allow this sort of behavior and  
would like to investigate implementing it myself if there are no  
other plans to do it.  My webservice is running behind Apache, and  
it seems to properly handle POSTs with keepalive.


Can anyone point me in a general direction to get started on this?   
I am  not familiar with the squid codebase.  Also, does anyone know  
how much work it might be to implement this?


Thanks,
Mike


--
Mark Nottingham   m...@yahoo-inc.com




Re: [squid-users] CARP question

2009-03-17 Thread Mark Nottingham


On 17/03/2009, at 7:44 AM, Chris Woodfield wrote:


Hi,

Had a question about squid's CARP implementation.

Let's say I have a farm of squids sitting behind an SLB, and behind  
those I have a set of parent caches. If I were to enable CARP on the  
front-end caches, is the hash algorithm deterministic enough to  
result in a URL request seen by more than one edge cache to be  
directed to the same parent cache?


Yes (keeping in mind that they can move around if the set of servers  
considered 'up' changes, and of course different FE caches will have a  
different idea of what set is 'up' at any particular point in time).



Or will each front-end cache have its own hash assignments compared  
to the others?


Also, how does CARP handle parent server removals and/or additions  
(i.e. are hash buckets reassigned gracefully or are they all  
redistributed)? Is this behavior also deterministic between multiple  
front-end squids?


It is deterministic, and the idea is to cause the least disruption.  
Search on 'consistent hashing' for the math; it's the same technique  
used in Akamai, Hadoop/BigTable, etc.
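
A toy sketch of the idea in Python -- this is rendezvous (highest-random-weight) hashing, not Squid's exact CARP function, which also folds in per-parent load factors:

  import hashlib

  def pick_parent(url, parents):
      # Every front-end computes the same scores independently, so a given
      # URL always maps to the same parent; removing one parent only
      # remaps the URLs that hashed to it.
      def score(parent):
          return hashlib.md5((parent + url).encode()).hexdigest()
      return max(parents, key=score)

  # pick_parent("http://example.com/x", ["cache1", "cache2", "cache3"])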


Cheers,


--
Mark Nottingham   m...@yahoo-inc.com




Re: [squid-users] mallinfo() vs. sbrk()

2008-11-09 Thread Mark Nottingham

What about

Memory accounted for:
Total accounted:   2039507 KB

This seems to be the only one that's not wrapped around to negative  
above 2G on Linux... both mallinfo and sbrk are.


Cheers,


On 07/11/2008, at 12:38 PM, Henrik Nordstrom wrote:


On fre, 2008-11-07 at 10:36 -0800, Mark Nottingham wrote:

thx.

When mallinfo fails, how does it appear in cachemgr -- i.e., is there
any way to reliably detect it just by examining the output?


numbers wrap over and become negative, then 0 and positive again.  
Cycle

repeats each 4 GB. (32-bit signed counters counting bytes)

regards
Henrik



--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] mallinfo() vs. sbrk()

2008-11-07 Thread Mark Nottingham

thx.

When mallinfo fails, how does it appear in cachemgr -- i.e., is there  
any way to reliably detect it just by examining the output?


Cheers,


On 06/11/2008, at 7:03 PM, Henrik Nordstrom wrote:


On tor, 2008-11-06 at 13:17 -0800, Mark Nottingham wrote:

I remember reading somewhere (can't remember where, and I may be
incorrect) that when available, sbrk is a more reliable indication of
memory use for squid than mallinfo().


mallinfo is more reliabe than sbrk when it works... but at least Linux
mallinfo fails when the process grows above 2GB in size..

mallinfo includes all memory allocated by the memory allocator (malloc
and friends).

sbrk includes the size of the data segment, where most memory
allocations go, but not all. Large allocations is handled by malloc
outside of the data segment.

Regards
Henrik


--
Mark Nottingham   [EMAIL PROTECTED]




[squid-users] mallinfo() vs. sbrk()

2008-11-06 Thread Mark Nottingham
I remember reading somewhere (can't remember where, and I may be  
incorrect) that when available, sbrk is a more reliable indication of  
memory use for squid than mallinfo(). However, I'm seeing this on a box;


Process Data Segment Size via sbrk(): 1040844 KB
Maximum Resident Size: 0 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
Total space in arena:  1621452 KB
Ordinary blocks:   1620888 KB   8820 blks
Small blocks:   0 KB  0 blks
Holding blocks: 20928 KB  4 blks
Free Small blocks:  0 KB
Free Ordinary blocks: 563 KB
Total in use:  1641816 KB 100%
Total free:   563 KB 0%
Total size:1642380 KB
Memory accounted for:
Total accounted:   1565651 KB

As you can see, sbrk() is considerably less than total size via  
mallinfo(). Is this unusual? Any thoughts about which is better, if  
both are available?


Thanks,

--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Expires: vs. Cache-Control: max-age

2008-10-02 Thread Mark Nottingham


On 02/10/2008, at 3:37 PM, Alex Rousskov wrote:


I.e., the max-age cache-control directive takes precedence over  
Expires.
I've tested Squid and a number of other caches with Co-Advisor, and  
if

Expires indicates the response is fresh, but CC: max-age says it's
stale, it will treat it as stale, for just about any given cache.
Unfortunately, Co-Advisor doesn't test the reverse situation,  
although

I may have overlooked it (Alex?).


I am sorry, I am not sure what you mean by the reverse situation.
Expires indicates stale but max-age says fresh? If not, can you  
specify

it explicitly?

Thank you,

Alex.




Yes, that's it.

Cheers,

--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] negative_ttl vs. an Expires header -- which should win?

2008-10-02 Thread Mark Nottingham

What version of Squid are you using?

This changed somewhat in 2.7; IIRC in 2.6 negative_ttl overrides  
response freshness, whereas in 2.7 response freshness (i.e., expires  
or cache-control) has precedence.


Cheers,




On 02/10/2008, at 3:56 PM, Gordon Mohr wrote:


Using 2.6.14-1ubuntu2 in an reverse/accelerator setup.

My backend/parent is by design setting explicit 'Expires' headers 1  
day into the future, even on 404/403/302 response codes.


I'm seeing the 4XX responses later served as TCP_NEGATIVE_HITs,  
which is good.


It appears, from my testing, that they are sometimes cached a bit  
longer than 'negative_ttl', but they are not cached as long as the  
Expires header suggests, even with plentiful cache space.


What is the designed intent of Squid -- should the 'negative_ttl' or  
the Expires header be definitive?


- Gordon @ IA


--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] How get negative cache along with origin server error?

2008-10-02 Thread Mark Nottingham
Have you considered setting squid up to know about both origins, so it  
can fail over automatically?
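
Roughly (hostnames are placeholders; details depend on your hierarchy):

  cache_peer origin1.example.com parent 80 0 no-query
  cache_peer origin2.example.com parent 80 0 no-query
  # always go via a parent; Squid retries the other one on failure
  never_direct allow all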



On 26/09/2008, at 5:04 AM, Dave Dykstra wrote:

I am running squid on over a thousand computers that are filtering  
data

coming out of one of the particle collision detectors on the Large
Hadron Collider.  There are two origin servers, and the application
layer is designed to try the second server if the local squid  
returns a

5xx HTTP code (server error).  I just recently found that before squid
2.7 this could never happen because squid would just return stale data
if the origin server was down (more precisely, I've been testing with
the server up but the listener process down so it gets 'connection
refused').  In squid 2.7STABLE4, if squid.conf has 'max_stale 0' or if
the origin server sends 'Cache-Control: must-revalidate' then squid  
will

send a 504 Gateway Timeout error.  Unfortunately, this timeout error
does not get cached, and it gets sent upstream every time no matter  
what

negative_ttl is set to.  These squids are configured in a hierarchy
where each feeds 4 others so loading gets spread out, but the fact  
that

the error is not cached at all means that if the primary origin server
is down, the squids near the top of the hierarchy will get hammered  
with

hundreds of requests for the server that's down before every request
that succeeds from the second server.

Any suggestions?  Is the fact that negative_ttl doesn't work with
max_stale a bug, a missing feature, or an unfortunate interpretation  
of

the HTTP 1.1 spec?

By the way, I had hoped that 'Cache-Control: max-stale=0' would work  
the

same as squid.conf's 'max_stale 0' but I never see an error come back
when the origin server is down; it returns stale data instead.  I  
wonder

if that's intentional, a bug, or a missing feature.  I also note that
the HTTP 1.1 spec says that there MUST be a Warning 110 (Response is
stale) header attached if stale data is returned and I'm not seeing
those.

- Dave


--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Expires: vs. Cache-Control: max-age

2008-09-30 Thread Mark Nottingham

No, that's not true. RFC2616:


13.2.4 Expiration Calculations
In order to decide whether a response is fresh or stale, we need to  
compare its freshness lifetime to its age. The age is calculated as  
described in section 13.2.3; this section describes how to calculate  
the freshness lifetime, and to determine if a response has expired.  
In the discussion below, the values can be represented in any form  
appropriate for arithmetic operations.


We use the term expires_value to denote the value of the Expires  
header. We use the term max_age_value to denote an appropriate  
value of the number of seconds carried by the max-age directive of  
the Cache-Control header in a response (see section 14.9.3).


The max-age directive takes priority over Expires, so if max-age is  
present in a response, the calculation is simply:


freshness_lifetime = max_age_value
Otherwise, if Expires is present in the response, the calculation is:

freshness_lifetime = expires_value - date_value


and later:


14.9.3 Modifications of the Basic Expiration Mechanism
The expiration time of an entity MAY be specified by the origin  
server using the Expires header (see section 14.21). Alternatively,  
it MAY be specified using the max-age directive in a response. When  
the max-age cache-control directive is present in a cached response,  
the response is stale if its current age is greater than the age  
value given (in seconds) at the time of a new request for that  
resource. The max-age directive on a response implies that the  
response is cacheable (i.e., public) unless some other, more  
restrictive cache directive is also present.


If a response includes both an Expires header and a max-age  
directive, the max-age directive overrides the Expires header, even  
if the Expires header is more restrictive. This rule allows an  
origin server to provide, for a given response, a longer expiration  
time to an HTTP/1.1 (or later) cache than to an HTTP/1.0 cache. This  
might be useful if certain HTTP/1.0 caches improperly calculate ages  
or expiration times, perhaps due to desynchronized clocks.




I.e., the max-age cache-control directive takes precedence over Expires.
I've tested Squid and a number of other caches with Co-Advisor, and if  
Expires indicates the response is fresh, but CC: max-age says it's  
stale, it will treat it as stale, for just about any given cache.  
Unfortunately, Co-Advisor doesn't test the reverse situation, although  
I may have overlooked it (Alex?).
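
To make it concrete (dates are only illustrative):

  Date: Sat, 27 Sep 2008 16:00:00 GMT
  Expires: Sat, 27 Sep 2008 17:00:00 GMT   (alone, this would mean fresh for an hour)
  Cache-Control: max-age=60                (wins: freshness_lifetime is 60 seconds)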



On 27/09/2008, at 4:44 PM, Markus Karg wrote:

According to HTTP/1.1 specification, the precedence is not  
determined by

the keyword, but by the value: The shorter age is to be taken.

Regards
Markus


-Original Message-
From: Chris Woodfield [mailto:[EMAIL PROTECTED]
Sent: Freitag, 26. September 2008 23:46
To: Squid Users
Subject: [squid-users] Expires: vs. Cache-Control: max-age

Hi,

Can someone confirm whether Expires: or Cache-control: max-age
parameters take precedence when both are present in an object's
headers? My assumption would be Cache-control: max-age would be
preferred, but we're seeing some behavior that suggests otherwise.

Specifically, we're seeing Expires: headers in the past resulting in
refresh checks against our origin even when a Cache-Control: max-age
header is present and the cached object should be fresh per that
metric.

What we're seeing is somewhat similar to bug 2430, but I want to make
sure what we're seeing isn't expected behavior.

Thanks,

-Chris


--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] external_refresh_check

2008-09-11 Thread Mark Nottingham

See:
  http://www.mnot.net/cache_channels/

Note that it's Python, not Perl. If you want to use that code, ping  
me; I have a few bugfixes/improvements that I haven't put out there yet.


Cheers,



On 11/09/2008, at 5:42 PM, chudy fernandez wrote:

Can I ask some examples for this feature external_refresh_check in  
squid.conf and in perl script...






--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Squid-2.7 vary failure w/ non-encoded objects?

2008-09-02 Thread Mark Nottingham


On 02/09/2008, at 7:39 PM, Henrik Nordstrom wrote:


tis 2008-09-02 klockan 11:28 +1000 skrev Mark Nottingham:

Random thought: when an origin is doing one of these, is / can it be
noted in cache.log somehow? Would be useful, at least for accelerator
setups...


Log when an URI goes from Vary to non-Vary during normal requests  
should

be fairly easy. Just a debug statement in the right code path..


That would be cool...

It's harder to log when the server behaves bad wrt ETag, doing so  
would

require interpreting Accept-Encoding and Content-Encoding and detect
when the server responds with Vary: Accept-Encoding and an ETag  
matching

an object with incompatible encoding.


Sounds like it's not worth the effort.

But both these really belong in tools such as the cacheability  
engine..


Hint taken; I've been meaning to dust that code off for a while now.




Regards
Henrik



--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Squid-2.7 vary failure w/ non-encoded objects?

2008-09-01 Thread Mark Nottingham
Random thought: when an origin is doing one of these, is / can it be  
noted in cache.log somehow? Would be useful, at least for accelerator  
setups...



On 01/09/2008, at 8:30 PM, Adrian Chadd wrote:


2008/9/1 Henrik Nordstrom [EMAIL PROTECTED]:


http://wiki.squid-cache.org/KnowledgeBase/VaryNotCaching

Could you please take a peek and tell me if I've covered everything
clearly enough?


I think so.

RFC references can be found at the vary/etag development pages.

http://devel.squid-cache.org/vary/
http://devel.squid-cache.org/etag/


Thanks; I've just updated the article with this information.



Adrian


--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Persistent connect to cache_peer parent question

2008-07-15 Thread Mark Nottingham
That's not how HTTP works. What are you using to collect these  
statistics?



On 14/07/2008, at 10:59 PM, Russell Suter wrote:


Amos Jeffries wrote:

Russell Suter wrote:


Hi,

 I have a question with regards to persistent connections to a cache peer
 parent.  I have multiple users connecting through a custom compiled
 Squid 2.6.STABLE17 (also tried 3.0.STABLE7) on a RedHat EL 4 box in front
 of a commercial web filter appliance.  In my squid.conf file, I have the
 cache_peer as:
3.0.STABLE7) on a RedHat EL 4 box in front of a commercial web  
filter

appliance.  In my
squid.conf file, I have the cache_peer as:

cache_peer IP parent 8084 0 login=*:mxlogic no-query no-digest  
proxy-only


What seems to happen is that a persistent connection is made to the
appliance.  This in
and of itself isn't a problem except that all of the different users
show up as the first user
that made the initial connection.  This really jacks up the  
statistics

within the appliance.
I can get around this with:

server_persistent_connections off

but that is not as efficient as the persistent connection.
Is there any way to get one persistent connection per user to the
cache_peer parent?




Not to my knowledge. Persistent connections are a link-layer artifact
between any given client (i.e. squid) and a server.



To me, the behavior is broken.  Either the single connection
to the cache parent should provide the correct user
credentials, or there should be one persistent connection per
user.  To have multiple requests from different users be
represented by only one user is wrong...

--
Russ

We can't solve problems by using the same kind of
thinking we used when we created them.
  -- Albert Einstein

Russell Suter
MX Logic, Inc.
Phone: 720.895.4481

Your first line of email defense.
http://www.mxlogic.com



--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Squid 2.7 access log and url_rewrite_program

2008-07-15 Thread Mark Nottingham

+1 - both cases are useful.


On 10/07/2008, at 1:15 AM, Chris Woodfield wrote:


Bug filed, #2406.

As I annotated in the bug report, this behavior does have its uses,  
but not having access to the URL pre-rewrite is clearly broken  
behavior IMO. My ideal solution would restore 2.6's behavior, but  
also add a logformat code to log the URL post-rewrite in addition to  
pre-rewrite if desired. If no rewriter is configured, this element  
would print null.


-C

On Jul 8, 2008, at 6:36 PM, Henrik Nordstrom wrote:


On tis, 2008-07-08 at 16:47 -0400, Chris Woodfield wrote:

I've noticed that squid 2.7STABLE3 logs incoming URLs differently  
than

2.6 did when using a url_rewrite_program. It appears that under 2.6,
the URL logged was pre-rewrite, under 2.7 it's the URL returned by  
the

rewriter. This presents problems as I have the potential for a large
number of incoming URL hostnames being rewritten to the same origin
hostname, and with the current 2.7 logging I can't tell what the
incoming hostnames were.

Was this an expected change? If so, can I have the old behavior  
back? :)


Not expected, but now that I read the change log again it's obvious..

File a bug so we have some place to keep a lasting discussion about
this.

Not sure today what the solution will look like.

http://bugs.squid-cache.org/

Regards
Henrik




--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] how safe is server_http11?

2008-07-06 Thread Mark Nottingham
FWIW, I've tested it, and have been using it in production on a fair  
number of boxes for a little while; so far so good. Like H says, the  
main thing is lacking Expect/Continue support.


Cheers,


On 04/07/2008, at 6:55 AM, Chris Woodfield wrote:

So we're looking to upgrade from 2.6 to 2.7, primarily to get the  
HTTP/1.1 header support. I realize that the full 1.1 spec is not  
completely implemented, but are there any real Danger, Will  
Robinson! implications?


Specifically, is there any functionality or access to content that  
would be actively broken because squid is advertising HTTP/1.1 but  
doesn't have the spec completely implemented?


Thanks,

-C




--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Combined log showing 0/0 for status/bytes?

2008-07-06 Thread Mark Nottingham
FWIW, I've seen this on hits as well with 2.[6,7]... I assumed it was  
either a very immediate abort, or the log tag being set incorrectly  
(which AIUI happens sometimes, as tcp_hit is the default, no?).


Cheers,


On 07/07/2008, at 5:15 AM, Henrik Nordstrom wrote:


On sön, 2008-07-06 at 13:50 -0400, Tuc at T-B-O-H.NET wrote:

Hi,

I'm running squid/2.6.STABLE20+ICAP via WCCP2. I seem to be
getting more and more instances of :

192.168.3.249 - - [05/Jul/2008:19:08:44 -0400] GET http://photos-c.ak.facebook.com/photos-ak-sctm/v43/18/2433486906/app_2_2433486906_3650.gif 
 HTTP/1.1 0 0 http://www.facebook.com/home.php?; Mozilla/4.0  
(compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR  
2.0.50727) TCP_HIT:DIRECT


192.168.3.249 - - [05/Jul/2008:19:08:44 -0400] GET http://profile.ak.facebook.com/v230/847/104/q748084879_3980.jpg 
 HTTP/1.1 0 0 http://www.facebook.com/home.php?; Mozilla/4.0  
(compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR  
2.0.50727) TCP_MISS:DIRECT



0 status and 0 bytes sent. Is this something that just happens ?
(193 out of 2909 hits yesterday)


On MISS it's usually when the browser aborts the request before the
headers are known.

Not sure what to make out of it on cache hits... but I guess it may be
that the request was aborted before the RESPMOD response from the ICAP
server is seen..

Regards
Henrik


--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] purge domain / more flexible purge ?

2008-06-29 Thread Mark Nottingham

You might find this useful;
  http://www.mnot.net/blog/2008/01/04/cache_channels

Tell me if you're interested; there are a couple of bugfixes that  
haven't made it out yet.



Cheers,


On 29/06/2008, at 1:40 AM, Roy M. wrote:


Hello,

A past discussion in list back to the year of 2001 said not possible:

http://www.squid-cache.org/mail-archive/squid-users/200112/0880.html

Any updates now?



Consider the following case:

You have a blog application, and use Squid as reverse proxy to cache
the dynamic pages.

Ideally, if user updated an article, you need to purge all pages from
the blog (consider all pages contains a sidebar item of recent
articles - need to be updated),

Sure you can't loop all the pages and do the purge one by one...

Then you need a way to purge by domain or subdomain, or even by  
folder...
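
(For reference, the stock mechanism is strictly per-URL -- roughly, with
an illustrative ACL and hostname:

  acl PURGE method PURGE
  http_access allow PURGE localhost
  http_access deny PURGE

plus one "squidclient -m PURGE http://blog.example.com/article-1" per
page -- which is exactly the loop that doesn't scale here.)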


Roy


--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Squid on steroids

2008-06-17 Thread Mark Nottingham
What's your workload? E.g., is it going to be used as a proxy farm for  
dialup users? Broadband? If so, how many? Or, is it for an  
accelerator, and if so, how much content is there?


Cheers,


On 18/06/2008, at 5:07 AM, [EMAIL PROTECTED] wrote:


I've been given a directive to build a squid farm on steroids.

Load balanced, multiple servers, etc.

I've been googling around and found some documentation but does  
anyone have any direct experience with this?


Any suggestions?

Thank you in advance.


--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Squid on steroids

2008-06-17 Thread Mark Nottingham
If you're not caching at all and using reasonably modern hardware  
(e.g., dual core, ~3Ghz), you should be able to get somewhere between  
2,000 and 4,000 requests a second out of a single squid process,  
depending on the average response size. YMMV, of course, and that  
doesn't count the overhead of the filtering, etc.


By 50,000 users, do you mean total (i.e., you have 50,000 customers),  
or 50,000 a day, or 50,000 concurrently, or...? Figuring out how much  
capacity you need is an inexact science, of course, but it's usually  
best to over-provision.


The hard part is going to be directing requests to the proxies, and  
handling failure well. I haven't done ISP proxy deployments in a long  
time, so I'll leave it to others to give you advice on that part. I'm  
assuming you'll want it to be transparent (e.g., use WCCP)?





On 18/06/2008, at 9:05 AM, [EMAIL PROTECTED] wrote:


More broadband connections than anything else.

Possibly as many as 50,000 users.

No accelerator, maybe not even caching. Mostly to filter downloads,  
record websites, etc. maybe with something like urldb or Dansguardian.


Do you have ideas???

Thank you.


-- Original message --
From: Mark Nottingham [EMAIL PROTECTED]
What's your workload? E.g., is it going to be used as a proxy farm for
dialup users? Broadband? If so, how many? Or, is it for an
accelerator, and if so, how much content is there?

Cheers,


On 18/06/2008, at 5:07 AM, [EMAIL PROTECTED] wrote:


I've been given a directive to build a squid farm on steroids.

Load balanced, multiple servers, etc.

I've been googling around and found some documentation but does
anyone have any direct experience with this?

Any suggestions?

Thank you in advance.


--
Mark Nottingham   [EMAIL PROTECTED]




More broadband connections than anything else.

Possibly as many as 50,000 users.

No accelerator, maybe not even caching. Mostly to filter downloads,  
record websites, etc. maybe with something like urldb or Dansguardian.


Do you have ideas???

Thank you.




--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Squid log formats - 2.5-2.6?

2008-06-16 Thread Mark Nottingham
That reminds me; when using logformat, I've seen some counters show up  
as '-' when the value is 0. I can try to reproduce if more info is  
needed...



On 17/06/2008, at 8:26 AM, Henrik Nordstrom wrote:


On mån, 2008-06-16 at 11:21 -0400, Mike Diggins wrote:
Has something changed in the access log format between Squid 2.5Stable14
and 2.6Stable20? I'm just upgrading and noticed my webalizer can no
longer parse the access.log file. It complains about the date which I
believe is the same on both (seconds since the Epoch).

Error: Skipping record (bad date): [31/dec/1969:19:00:00 -] [68]

In squid 2.6, I've picked the default squid format (logformat):

logformat squid %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt


Should work.

Are you sure you told webalizer to parse a Squid access log in Squid
native format, and not a common log format?

Also try without the logformat directive. The squid format is
built-in, and may differ sligtly if you redefine it with a logformat
directive..

Regards
Henrik


--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Re: squid-cache.org

2008-06-11 Thread Mark Nottingham
Might also be good to update the configuration guide link from 2.6 to  
2.7 in the left-hand column of the front page.


Cheers,


On 11/06/2008, at 10:34 PM, Amos Jeffries wrote:


Monah Baki wrote:

Forget it :)


Nah. Lets fix it... Done.
Thanks.

Amos


On Jun 11, 2008, at 6:28 AM, Monah Baki wrote:
Out of curiosity at the download section it says version 2.7  
Latest Release Stable 1, but when you click on the 2.7 link it  
says stable2, which is it?



Thanks




BSD Networking, Microsoft Notworking




BSD Networking, Microsoft Notworking



--
Please use Squid 2.7.STABLE1 or 3.0.STABLE6


--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] GET request with long URI TCP_DENIED

2008-05-19 Thread Mark Nottingham

Can 3.x's URL limit be safely changed at compile time?
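
(Presumably that means bumping the MAX_URL define mentioned below and
rebuilding -- e.g., something along the lines of

  #define MAX_URL  8192

in defines.h -- with the same caveat that it may not be safe.)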

Cheers,


On 19/05/2008, at 10:31 PM, Amos Jeffries wrote:


Henrik Nordstrom wrote:

On fre, 2008-05-16 at 15:14 -0400, Freeman, Aleda (EEA) wrote:
Our organization is running squid 2.5.9-10. We're having a problem
sending a request to tomcat through squid that has a 4,377 chars or more
URI (it's a very long GET request).

Indeed. Squid supports URL sizes up to 4KB.
This is defined by MAX_URL in defines.h, but I can not guarantee that
it's safe to change this.


This has been changed to allow up to 8KB URL from 3.0 stable 5.

Amos
--
Please use Squid 2.6.STABLE20 or 3.0.STABLE5


--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] retrieving 1 file with 50 concurrent connections from memory cache in reverse proxy is really slow?

2008-05-15 Thread Mark Nottingham
ab is good for some kinds of benchmarking, but one of the problems  
with it is that it doesn't give you enough information to determine if  
the bottleneck is in the server, the network or the client.


Take a look at httperf...
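
For example, something like this (same host and URL as your test; flags
are standard httperf options) breaks out connection, reply and transfer
times separately, which makes it much easier to see where the stall is:

  httperf --server www.domain.com --port 80 --uri /images/20k.gif \
    --num-conns 250 --rate 50 --timeout 5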

Cheers,


On 15/05/2008, at 8:15 AM, wheres theph wrote:

I set up a squid as a reverse proxy that round robins to 2 webservers
(one of which is the same machine where squid is), and it appears that
things work fine as a single user browsing casually.  However, doing an
apachebench request with 50 concurrent users to the same url times out
the apachebench request:


***
# ab -c 50 -n 250 -H "Accept-Encoding: gzip,deflate" \
    http://www.domain.com/images/20k.gif
This is ApacheBench, Version 2.0.40-dev $Revision: 1.146 $  
apache-2.0

Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/

Benchmarking domain.com (be patient)
Completed 100 requests
Completed 200 requests
apr_poll: The timeout specified has expired (70007)
Total of 238 requests completed

***

On the first access of the gif file, the access.log shows the TCP_MISS
as expected.  Subsequent accesses show the TCP_MEM_HIT as expected  
also.


No errors show up in squid.out.  Any ideas on why serving a static image
from memory is so slow that ApacheBench times out?


My round robin setup is pretty standard I think:
*
http_port 12.34.56.1:80 vhost defaultsite=www.domain.com
http_port 3128
cache_peer 12.34.56.2 parent 80 0 no-query originserver round-robin
cache_peer 127.0.0.1 parent 80 0 no-query originserver round-robin

cache_dir null /tmp

url_rewrite_host_header off

acl our_sites www.domain.com
http_access allow our_sites

maximum_object_size_in_memory 1024 KB
*

I am using CentOS 5, squid v 2.6.STABLE6-5.el5_1.




--
Mark Nottingham   [EMAIL PROTECTED]




[squid-users] Melbourne Squid Users [was: Squid Users Groups?]

2008-04-24 Thread Mark Nottingham

I'm up for it... anybody else?


On 18/04/2008, at 7:41 PM, Adrian Chadd wrote:

On Fri, Apr 18, 2008, Mark Nottingham wrote:

Cache-Control: only-if-cached.

Hmm... maybe we should start a Melbourne Squid Users Group... :)


There's at least three of you.

I'd be happy to start a Squid Users Group in Perth. I have no idea
whether there's enough of y'all here in Perth to justify it, but then,
it -is- an excuse to drink beer on another day in the month..




Adrian

--  
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial  
Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in  
WA -


--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] forcing retrieval of cached copy

2008-04-18 Thread Mark Nottingham

Cache-Control: only-if-cached.
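
i.e. have the client send that request directive (URL is just an example):

  GET /some/page HTTP/1.1
  Host: www.example.com
  Cache-Control: only-if-cached

Squid should then answer from cache if it has a copy, or return a 504
(Gateway Timeout) rather than going to the origin.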

Hmm... maybe we should start a Melbourne Squid Users Group... :)

Cheers,



On 09/04/2008, at 2:46 AM, Tim Connors wrote:
Without applying any patches or running a different version, is there a
way to convince it to really and truly ignore all expires, age, etag etc
related headers and if there is a local cache copy at all, do not consult
the web at all, not even for an if-modified-since query?


--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] squid 2.7 behaviour

2008-03-17 Thread Mark Nottingham

I submitted this as a bug, at Henrik's suggestion;
  http://www.squid-cache.org/bugs/show_bug.cgi?id=2269


On 18/03/2008, at 12:09 AM, Pablo García wrote:


Guys, here's the output of the access_log with header logging.

Let me know If I can gather some more info.

Regards, Pablo

1205758406.127   1324 172.16.254.4 TCP_REFRESH_MISS/200 9580 GET
http://listados.deremate.cl/consolas-video-juegos-sony-playstation-3_46745/_srZpricedesc
- FIRST_UP_PARENT/172.16.100.22 text/html [Host:
listados.deremate.cl\r\nUser-Agent: Mozilla/5.0 (Windows; U; Windows
NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\r\nAccept:
text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/ 
plain;q=0.8,image/png,*/*;q=0.5\r\nAccept-Language:

en-us,en;q=0.5\r\nAccept-Encoding: gzip,deflate\r\nAccept-Charset:
ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\nKeep-Alive: 300\r\nConnection:
keep-alive\r\nReferer:
http://listados.deremate.cl/consolas-video-juegos-sony-playstation-3_46745/ 
\r\nCookie:

__utma=256982686.1388267116.1203594335.1205344397.1205674701.10;
__utmz=256982686.1203594335.1.1.utmgclid=CPKY76Ga1ZECFQkdPAodoCgLZg| 
utmccn=(not+set)|utmcmd=(not+set)|utmctr=deremate.cl;

__utmb=256982686; __utmc=256982686;
CASTGC=TGT-60112-VM7jHuI5Ra6i72JSxfiGUN50uG2c2lBuLJr-50\r\nX- 
Forwarded-For:

64.117.132.58\r\n] [HTTP/1.0 200 OK\r\nDate: Mon, 17 Mar 2008 12:53:25
GMT\r\nServer: Apache\r\nExpires: Mon, 17 Mar 2008 10:13:25
-0300\r\nCache-Control: public, max-age=1200\r\nContent-Language:
es-CL\r\nX-Apache: search01\r\nVary:
Accept-Encoding\r\nContent-Encoding: gzip\r\nContent-Type:
text/html;charset=ISO-8859-1\r\nX-Cache: MISS from sq01.dc.dr\r\nVia:
1.0 sq01.dc.dr:80 (squid/2.7.DEVEL0-20080313)\r\nConnection:
close\r\n\r]

2008/03/16 11:39:16| ctx: exit level  0
2008/03/16 11:39:16| ctx: enter level  0:
'http://listados.deremate.cl/consolas-video-juegos-sony-playstation-3_46745/_srZpricedesc'
2008/03/16 11:39:16| storeSetPublicKey: unable to determine vary_id
for 
'http://listados.deremate.cl/consolas-video-juegos-sony-playstation-3_46745/_srZprice
desc'
2008/03/16 11:39:16| ctx: exit level  0




On Sat, Mar 15, 2008 at 10:20 PM, J. Peng [EMAIL PROTECTED] wrote:
On Sun, Mar 16, 2008 at 3:22 AM, Pablo García [EMAIL PROTECTED]  
wrote:

Mark, I can provide network captures for this, I'm using mod_deflate
to compress the responses.



I'm also using squid-2.7 for apache's mod_deflate.
It can work, but I also get lots of the same warnings in cache.log.



--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] squid 2.7 behaviour

2008-03-14 Thread Mark Nottingham

Ah, good; it's not just me...

I'm seeing it on replies with Vary: Accept-Encoding (not sure if  
they're actually encoded responses or not, will try to find out).



On 14/03/2008, at 5:58 PM, Adrian Chadd wrote:


Hm, I thought the vary id stuff was changed to not log at this level.

can you enable header logging in squid.conf and see what the replies
look like for these URLs?



Adrian

On Thu, Mar 13, 2008, Pablo Garcia Melga wrote:
Hi, I just upgraded to 2.7 latest Snapshot from 2.6.9 and I'm getting a
lot of these errors in cache.log
I'm using SQUID as a reverse proxy with multiple backends

2008/03/13 20:03:45| ctx: exit level  0
2008/03/13 20:03:45| ctx: enter level  0:
'http://listados.deremate.cl/mercedes+benz_dtZgallery_pnZ4'
2008/03/13 20:03:45| storeSetPublicKey: unable to determine vary_id  
for

'http://listados.deremate.cl/mercedes+benz_dtZgallery_pnZ4'
2008/03/13 20:03:45| ctx: exit level  0
2008/03/13 20:03:45| ctx: enter level  0:
'http://listados.deremate.cl/neumatico+195'
2008/03/13 20:03:45| storeSetPublicKey: unable to determine vary_id  
for

'http://listados.deremate.cl/neumatico+195'
2008/03/13 20:03:45| ctx: exit level  0
2008/03/13 20:03:45| ctx: enter level  0:
'http://listados.dereto.com.mx/computacion-impresoras_45336/_dtZgallery_pnZ7'
2008/03/13 20:03:45| storeSetPublicKey: unable to determine vary_id  
for

'http://listados.dereto.com.mx/computacion-impresoras_45336/_dtZgallery_pnZ7'
2008/03/13 20:03:46| ctx: exit level  0
2008/03/13 20:03:46| ctx: enter level  0:
'http://oferta.dereto.com.mx/ajaxg1/QAG2.asp?ido=18090159itemCant=1showbutton=1ispreview=0'
2008/03/13 20:03:46| storeSetPublicKey: unable to determine vary_id  
for

'http://oferta.dereto.com.mx/ajaxg1/QAG2.asp?ido=18090159itemCant=1showbutton=1ispreview=0'
2008/03/13 20:03:46| ctx: exit level  0
2008/03/13 20:03:46| ctx: enter level  0:
'http://listados.deremate.cl/accesorios-repuestos-para-autos-audio-car_43114/_pcZnew_ptZbuyitnow_dtZgallery'
2008/03/13 20:03:46| storeSetPublicKey: unable to determine vary_id  
for

'http://listados.deremate.cl/accesorios-repuestos-para-autos-audio-car_43114/_pcZnew_ptZbuyitnow_dtZgallery'
2008/03/13 20:03:46| ctx: exit level  0
2008/03/13 20:03:46| ctx: enter level  0:
'http://listados.deremate.cl/_uiZ6542852'
2008/03/13 20:03:46| storeSetPublicKey: unable to determine vary_id  
for

'http://listados.deremate.cl/_uiZ6542852'
2008/03/13 20:03:47| ctx: exit level  0
2008/03/13 20:03:47| ctx: enter level  0:
'http://listados.dereto.com.co/htc_pnZ4_srZpricedesc'
2008/03/13 20:03:47| storeSetPublicKey: unable to determine vary_id  
for

'http://listados.dereto.com.co/htc_pnZ4_srZpricedesc'
2008/03/13 20:03:47| ctx: exit level  0
2008/03/13 20:03:47| ctx: enter level  0:
'http://listados.dereto.com.co/accesorios-celulares-ringtones-software_38021/_dtZgallery_pnZ1_srZbiddesc'
2008/03/13 20:03:47| storeSetPublicKey: unable to determine vary_id  
for

'http://listados.dereto.com.co/accesorios-celulares-ringtones-software_38021/_dtZgallery_pnZ1_srZbiddesc'
2008/03/13 20:03:47| ctx: exit level  0
2008/03/13 20:03:47| ctx: enter level  0:
'http://listados.dereto.com.co/animales-mascotas-perros_50267/_prZ11+14_pcZnew_ptZbuyitnow_dtZgallery_srZviewdesc'
2008/03/13 20:03:47| storeSetPublicKey: unable to determine vary_id  
for

'http://listados.dereto.com.co/animales-mascotas-perros_50267/_prZ11+14_pcZnew_ptZbuyitnow_dtZgallery_srZviewdesc'
2008/03/13 20:03:47| ctx: exit level  0
2008/03/13 20:03:47| ctx: enter level  0:
'http://listados.dereto.com.mx/muebles-muebles-bibliotecas_56865/_smZdelivery_srZcloseasc'
2008/03/13 20:03:47| storeSetPublicKey: unable to determine vary_id  
for

'http://listados.dereto.com.mx/muebles-muebles-bibliotecas_56865/_smZdelivery_srZcloseasc'
2008/03/13 20:03:47| ctx: exit level  0
2008/03/13 20:03:47| ctx: enter level  0:
'http://listados.deremate.cl/musica-peliculas-entradas-para-recitales_51440/_lnZrm'
2008/03/13 20:03:47| storeSetPublicKey: unable to determine vary_id  
for

'http://listados.deremate.cl/musica-peliculas-entradas-para-recitales_51440/_lnZrm'


Any Ideas ?

Regards, Pablo


--
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial  
Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in  
WA -


--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Multi processors

2008-03-11 Thread Mark Nottingham

Sounds like you want processor affinity;
  http://www.linuxcommand.org/man_pages/taskset1.html
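
e.g., something like this, pinning each instance (each with its own
squid.conf, ports and pid_filename, as discussed below) to one CPU --
the paths are only illustrative:

  taskset -c 0 squid -f /etc/squid/squid0.conf
  taskset -c 1 squid -f /etc/squid/squid1.conf
  taskset -c 2 squid -f /etc/squid/squid2.conf
  taskset -c 3 squid -f /etc/squid/squid3.conf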


Cheers,


On 12/03/2008, at 8:22 AM, Marcos Camões Bourgeaiseau wrote:


In parts:

1-One of the most important things to check is that you have different
PID's for every instance of squid, see pid_filename
Sure. Otherwise you can't even start more than one process.

2-Also, how many cpu's does that box have? Do you see squid always
using the same one (I.E. CPU2)
Squid always uses the same CPU, but other services (apache for example)
in the same machine use all four CPUs, the Ubuntu itself uses the four
CPUs. That I know, this problem only occurs with squid.

More info: Each squid instance uses its own cache, has its own squid.conf
file and listens on different ports.

Thanks one more time,

wrote:

Marcos,

Ubuntu should work fine with an SMP kernel for squid.

Just to double check, with your setup have you followed these  
guidelines?


http://wiki.squid-cache.org/MultipleInstances

one of the most important things to check is that you have different
PID's for every instance of squid, see pid_filename

Also, how many cpu's does that box have? Do you see squid always using
the same one (I.E. CPU2)

Saul W

-Original Message-
From: Marcos Camões Bourgeaiseau [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 11, 2008 4:34 PM
To: saul waizer; squid-users@squid-cache.org
Subject: Re: [squid-users] Multi processors

Sorry about that.
It is a Ubuntu Feisty with a re-compiled Kernel version 2.6.15.7. We
just took out some hardware modules. We tried some newer Kernel but we
couldn't make it work with the hardware that we have here.
And just for clarity: It was OK to put four or more instances running at
the same time, but all of those instances keep using the same processor
and only that ONE processor. It is such a waste. And we have very
limited material to work with here.

Thanks again,

saul waizer wrote:


Marcos,

What OS are you running squid on?

According to the Docs, squid cannot take advantage of an SMP kernel but
there is a reference about having multiple instances of squid running.
However some OS's are very specific on how they handle processes; a
little more information about your setup would be helpful

Saul
-Original Message-
From: Marcos Camões Bourgeaiseau [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 11, 2008 3:21 PM
To: squid-users@squid-cache.org
Subject: [squid-users] Multi processors

I have compiled squid with those options below:

squid -v
Squid Cache: Version 2.5.STABLE12
configure options:  --sysconfdir=/etc/squid
--enable-storeio=aufs,coss,diskd,ufs --enable-poll --enable-delay-pools
--enable-linux-netfilter --enable-htcp --enable-carp --with-pthreads
--enable-underscores --enable-external --enable-arp-acl
--with-maxfd=16384 --enable-async-io=50 --enable-snmp

It runs in a machine with 4 Itel Xeon processors, but squid no  
matter
how many instances i start, uses only one processor, and my other  
three

processors stay idle.

My Squid.conf is this: (I have cutted-out my acls and http_acces)

http_port 8080
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin aspx \?
no_cache deny QUERY

# OPTIONS WHICH AFFECT THE CACHE SIZE
cache_mem 3072000 KB
maximum_object_size 2 KB
minimum_object_size 0 KB
maximum_object_size_in_memory 4 MB
cache_replacement_policy lru
memory_replacement_policy lru

# LOGFILE PATHNAMES AND CACHE DIRECTORIES
cache_dir ufs /var/spool/squid 5000 16 256
cache_access_log /var/log/squid/access.log
cache_log none
cache_store_log none
pid_filename /var/run/squid.pid

# OPTIONS FOR EXTERNAL SUPPORT PROGRAMS
ftp_list_width 32
ftp_passive on

auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off

# OPTIONS FOR TUNING THE CACHE
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern .   0   20% 4320
quick_abort_pct 98

# MISCELLANEOUS
append_domain .rio.rj.gov.br
memory_pools_limit 50 MB
log_icp_queries off
snmp_port 3401


Does anyone have an idea?
I have looked up in this list old mails, and have not found  
anything.


Thanks a lot,







--
Marcos Camões Bourgeaiseau - KIKO

e-mail pessoal: [EMAIL PROTECTED]
e-mail institucional: [EMAIL PROTECTED]


--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Re: Re: [squid-users] centra lized storage for squid

2008-03-10 Thread Mark Nottingham
This is the problem that CARP and other consistent hashing approaches  
are supposed to solve. Unfortunately, the Squid in the front will  
often be a bottleneck...
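
(A front-end doing CARP over the back-end caches looks roughly like this
-- hostnames are illustrative:

  cache_peer cache1.example.com parent 3128 0 carp
  cache_peer cache2.example.com parent 3128 0 carp
  never_direct allow all

and when a member is added or removed only that member's share of the
URL space moves, instead of rehashing everything as with a simple
modulo scheme.)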


Cheers,


On 07/03/2008, at 1:43 PM, Siu Kin LAM wrote:


Hi Pablo

Actually, it is my case.
The URL-hash is helpful to reduce the duplicated
objects. However, once adding/removing squid server,
load balancer needs to re-calculate the hash of URL
which cause lot of TCP_MISS in squid server at the
inital stage.

Do you have same experience ?

Thanks


--- Pablo García [EMAIL PROTECTED] wrote:


I dealt with the same problem using a load balancer
in front of the
cache farm, using a URL-HASH algorithm to send the
same url to the
same cache every time. It works great, and also
increases the hit
ratio a lot.

Regards, Pablo

2008/3/6 Siu Kin LAM [EMAIL PROTECTED]:

Dear all

At this moment, I have several squid servers for http caching. Many
duplicated objects have been found in different servers.  I would
minimize the data storage by installing a large centralized storage and
the squid servers mount to the storage as data disk.

Have anyone tried this before?

thanks a lot


Yahoo! Online Security Guide: learn how to guard against hackers!
Visit http://hk.promo.yahoo.com/security/index.html to find out more.








 Yahoo! Online Security Guide: learn how to guard against hackers!
Visit http://hk.promo.yahoo.com/security/index.html to find out more.


--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Squid-2, Squid-3, roadmap

2008-03-10 Thread Mark Nottingham

Thanks, Alex -- that's actually quite helpful.

It might be good to have a *little* more process around how one sponsors
changes in Squid -- whether -2 or -3 -- to assure that there's
coordination between the feature sets, and that the entire Squid
developer community is bought into the changes.


More detail on the roadmap would also be useful; potential sponsors  
need more information about what's involved in the changes, what the  
risks are, and how long it will take / how much it will cost.


WRT responsible sponsoring: I'm willing to pay a (reasonable) premium  
to get the things that I pay to get into -2 into -3 as well, as long  
as -3 doesn't block -2 (which AFAICT it wouldn't). I'm not willing to  
do that in perpetuity, though.


Cheers,


On 08/03/2008, at 4:46 AM, Alex Rousskov wrote:



Below you will find my personal comments on a few hand-picked thoughts
(from various posters) that I consider important on this thread:

On Thu, 2008-03-06 at 08:44 -0800, Michael Puckett wrote:

If there is one killer app that tops all other functionality
additions it would be to multi-thread Squid so that it can
perform on multi-cores.


Ability to perform on multiple cores is a performance/scalability
optimization. We obviously do want Squid to perform and scale better,
and are working on that.

Squid3 already has several mechanisms that would make such work  
easier.

Folks that need faster Squid, including CPU-core scalability
optimizations, should consider contributing time or money to the cause,
keeping in mind that it is a serious project and it will require
cooperation with other developers and projects.


On Thu, 2008-03-06 at 11:26 +1100, Mark Nottingham wrote:

Again, parity with -2 isn't enough; why would someone pay for
something they can already get in -2 if it meets their needs?


Nobody should pay for something they do not need. However, any sponsor
should consider the long-term sustainability of an old or new feature
they rely on: Will the feature I need be included in the next major
Squid version? Do I need cooperation and trust of Squid developers? Do I
want a fork dedicated to my needs? These questions are often as
important as the "How much would it cost me to make Squid do Foo by the
end of the month?" question.

Currently, sponsors have significant impact on Squid direction. There is
a lot of implied responsibility that comes with that influence. Please
use your power with care.


On Thu, 2008-03-06 at 01:15 +, Dodd, Tony wrote:
development on -3 seems entirely ad-hoc, with no direction; whereas -2
development is entirely focused [...]. I could be talking entirely
out of turn here though, as I haven't seen a -3 roadmap.



The second thing [...] the majority of squid developers don't seem to
get, is that the big users of squid are businesses.



The truth of it is, as much as you guys tell yourselves that
your userbase is people who run one or two cache boxes in their
basements to cache their lan internet access, and that there's no  
money

in squid, ...



I've spoken to Adrian too many times to count on two hands about this
whole thing, and if you guys are trying to re-invent the wheel, you
may as well stop now.


I am not sure how to say this the right way, but when your opinion is
based on a single and often extremely biased source of information, your
perception of reality becomes so distorted, that it is very difficult
for others to respond.

Your assumptions about the majority of squid developers are simply
wrong.

Believe it or not, we understand your situation fairly well. Nobody I
know is asking you to upgrade to Squid3, for example.

What I would suggest is that you make a fundamental choice: Do you want
to collaborate with the Squid project (as a whole)? If yes, we will do
our best to address your short-term and long-term needs. If no, I am
sure your dedicated developer will do his best to address your needs
within or outside the project.

Collaborating with an open source project is difficult because you have
to cooperate with others and balance different needs, all while
struggling with inefficiencies of a weak decision-making structure.
Whether collaboration benefits are worth the trouble, is something you
have to decide. I certainly hope they are.


On Thu, 2008-03-06 at 18:17 +0900, Adrian Chadd wrote:

Mark Nottingham wrote:

A killer app for -3 would be multi-core support


12 months away on my draft Squid-2 roadmap, if there was enough
commercial interest.


11 months away on Squid-3 roadmap if there is enough commercial
interest. And I will also throw in a 90% chance that the feature will
also be in Squid4 without a major porting effort. Wait, wait, and a 10%
off coupon!

But, really, this is _not_ the way Squid features should be planned or
sponsorship should be solicited, and I trust Adrian knows that.


On Thu, 2008-03-06 at 11:26 +1100, Mark Nottingham wrote:

While I'm in a mood for ruffling feathers (*grin*),
it might

Re: [squid-users] Squid-2, Squid-3, roadmap

2008-03-06 Thread Mark Nottingham
Ideally, you'd avoid locking as much as possible; e.g., have a pool of  
threads for disk access (as now with aufs), a pool for header parsing,  
a pool for forward requests, and so on. I don't think it's a good idea  
at all to re-architect squid into a thread-per-connection model or  
anything; just find the places that are bottlenecks and allow some  
parallelism, keeping the number of threads low.


(says he, the non-threads programmer. I'm not *that* crazy...)

Redirectors and other helpers are already able to run on other CPUs,  
so that's a non-issue.


Cheers,


On 07/03/2008, at 3:05 AM, Adrian Chadd wrote:


Well, the way I'd approach it is to first get an idea of how to throw
things into 'threads', and probably draft and craft a basic event loop
and submission queue for stuff to happen across threads.

Then Squid can run as one thread, and CPU intensive stuff can happen
via message queues to other threads.

Eventually my gut feeling (reliable as it is) tells me that the most
efficient and scalable way of doing this is to create a lightweight
squid that handles just client and server-side interactions, with
storage, logging, ACLs and other stuff happening in other threads, and
then create multiple squid threads that run almost independently from
one another. This would avoid all of the crazy fine-grain locking that
traditionally is done to take a non-threaded app into the threaded
world. I really think
avoiding that is a very good idea.

Oh, and no, there's nothing in Squid right now that jumps out save  
perhaps
pushing regular expression lookups into a separate thread or
threads. But
really, if you're going to do that then you're better off pushing a  
large part
of the ACL subsystem into separate threads and have the main code
submit
lookup requests there. Of course, what would be interesting there is  
benchmarking
how effective it'd be to batch things like ACL lookups in groups  
to try and
get some cache coherency effects going, rather than the current  
tendency for Squid
to process a request as far as it can go before something blocking  
comes along,

blowing much of the CPU cache away as possible in the meantime.

But really, the big problem is to spend some time looking at efficient
ways of parallelising network applications and what works well on  
current
hardware/OSes. I'm just playing around with a simple TCP proxy right  
now which
I'll use to experiment with better ways of doing stuff reasonably  
portably.
I can then set this as the upper bounds for how well stuff may  
perform, and
can then spend some time looking at how to tune things like  
parallelism,
IO handling, memory allocation and event notification. Then I can  
spend some more
time looking at batching operations such as IO, ACL lookups, etc -  
see if better
use of CPU caches can be made and also see if doing all the system  
read/write
syscalls in one hit per loop rather than spread out throughout the  
program execution

makes any difference.

Its really hard to benchmark -these- inside Squid, and thus its very  
difficult to
figure out how to make better use of current hardware. _This_ is the  
First Problem

to solve.

Of course, all of this depends entirely on whether I get enough  
clients to start
funding some of this work, and how much I can dedicate to this over  
my Semantics,
Experimental Methods and Behavioural Neuropsychology classes this  
semester. :)





Adrian
(Sleep? Hah!)

On Thu, Mar 06, 2008, Chris Woodfield wrote:

I'll readily admit that I Am Not A Developer, but I'm wondering if
this could be something that could be worked incrementally - finding
easy-to-cleave-off subsystems that can be moved to separate threads
similarly to how asyncio was. The most obvious one I can think of is
the front-end client/server network socket communication code; next
would be logging. Are there any other subsystems that jump out as
independent enough to do this in the existing code base?

-C

On Mar 6, 2008, at 4:17 AM, Adrian Chadd wrote:


On Wed, Mar 05, 2008, Michael Puckett wrote:

Mark Nottingham wrote:


A killer app for -3 would be multi-core support (and the perf
advantages that it would bring), or something else that the
re-architecture makes possible that isn't easy in -2. AIUI,  
though,

that isn't the case; i.e., -3 doesn't make this significantly
easier.

Absolutely THE killer app for either -2 or -3. The fact that multi-
core
processors are now the defacto standard in any box makes this more
important by the day IMHO. Being able to do sustained IO across
multiple
Gb NICs will absolutely require it. This is the single biggest
performance enhancement that could be implemented. So where does
multi-core support fall on either roadmap?


12 months away on my draft Squid-2 roadmap, if there was enough
commercial
interest. Thing is, the Squid internals are very horrible for SMP
(both 2 and 3)
and the list of stuff that I've put into the squid-2 roadmap is what
I think
is the minimum amount

Re: [squid-users] Squid-2, Squid-3, roadmap

2008-03-05 Thread Mark Nottingham


On 05/03/2008, at 1:39 PM, Amos Jeffries wrote:


Well,

I am interested in speed, features and ICAP.
So I like -2 and -3 to merge.

It seems to me that for the sake of being polite with each other
we do not want to call the -2 / -3 issue a fork, but effectively
it really is a fork.

So here is my question back to the main maintainers:
do you want to undo the fork and merge ?
Note this: for a merge there are 2 ways:
1) port functionality from -3 to -2
2) port functionality from -2 to -3


Don't forget the .5) tasks:
1.5) port all changes made to -3 since starting the base port to -2.
2.5) port all changes made to -2 since starting the base port to -3.

(1) would require a full re-code of -2 into C++ (repeating 6+ years of
3.x development under a new name) in order to encompass the features of
-3 that cannot be back-ported.


Well, that's a bit of a straw-man, isn't it? AIUI 3 *is* already 2
re-coded into C++. Never mind the question of why that's necessary;
indeed, I think a lot of people's discomfort is centred on the fact  
that large parts of 3 have been rewritten and not battle-tested in  
wide deployment.


I think you'd get that deployment if there were significant reasons  
for users to migrate; conversion to C++ is motivation for the  
developers, not the users, unless it's accompanied by user-visible  
improvements in performance, stability, or functionality. Again, while  
ESI and ICAP are cool and useful, IMO they don't motivate the majority  
of your users.



(2) requires info from you the users, about what features you need
ported, and some help on porting those over to -3.


full vary/etag support
collapsed_forwarding
stale-if-error
stale-while-revalidate
external_refresh_check
pinned peer connections
external logfile daemon
stability
performance
wide adoption (yes, this is a chicken-and-egg problem)


Most of the developers are already working on this. We do want to  
close
the divide. We also have not yet had a sponsor willing to pay  
specifically
for any feature porting. So we are stuck with doing it whenever time is
available.


Again, parity with -2 isn't enough; why would someone pay for  
something they can already get in -2 if it meets their needs?


You need to find a killer app for -3 that has broader appeal than just  
ICAP and ESI.


While I'm in a mood for ruffling feathers (*grin*), it might also help  
to have the core discussions in public; AIUI there's a separate  
mailing list for this, and while having those discussions hidden away  
shelters you guys to some degree -- and I appreciate your motivation  
for doing so -- it also removes the opportunity for feedback by  
interested non-core folks. You might find that some more transparency  
improves the process and vitality of the project.


Cheers,

--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Squid-2, Squid-3, roadmap

2008-03-05 Thread Mark Nottingham


On 06/03/2008, at 12:28 PM, Amos Jeffries wrote:

stale-if-error
stale-while-revalidate

- Um, so why did you (the sponsor for these two I believe) not also
request their addition in -3 for future-proofing your install app?


Because -3 isn't on our roadmap, for the reasons cited. If it appears  
there, I imagine we could easily fund the conversion (although I  
should check with H to see if that was already included; to be frank,  
it wasn't really even on my radar).



You need to find a killer app for -3 that has broader appeal than  
just

ICAP and ESI.


3.0 was about parity with needs. It failed some in that regard.
3.1 is about making up that failure plus some.
Is seamless IPv6, SSL control, and weighted round-robin not enough of a
killer app for you?


Not particularly. The thing is, for most any functionality, I can get  
there more quickly by funding it in -2; until -3 is ready for  
production use, it doesn't make sense to fund features in it (see  
above).


A killer app for -3 would be multi-core support (and the perf  
advantages that it would bring), or something else that the re- 
architecture makes possible that isn't easy in -2. AIUI, though, that  
isn't the case; i.e., -3 doesn't make this significantly easier.




Well, to shed some light on things (I hate secrecy too). The core
discussions are all about what we are going to publicly say so we  
don't
contradict ourselves and confuse people too much. Often personal  
messages
between individuals. We ruffle each others feathers at times too.  
None of
which is something people exactly want public. The rest is going through
squid-dev and squid-users.


Well, I guess that's good to hear, but I do note that having a private  
core list on an OS project is AFAIK not that common.


Cheers,


--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Squid-2, Squid-3, roadmap

2008-03-05 Thread Mark Nottingham
BTW, eCAP *is* interesting; it just looks really tentative at this  
point, and the perf/stability issues overshadow it to some degree.


Now, if you released Python bindings for eCAP, *that* would be  
interesting. Also, multi-core would make eCAP that much more powerful;  
as it is, servers like lighttpd have a huge performance advantage, and  
are getting to the point where it's pretty easy to write a module for  
them.


Cheers,


On 06/03/2008, at 12:52 PM, Mark Nottingham wrote:


A killer app for -3 would be multi-core support (and the perf
advantages that it would bring), or something else that the re-
architecture makes possible that isn't easy in -2. AIUI, though, that
isn't the case; i.e., -3 doesn't make this significantly easier.


--
Mark Nottingham   [EMAIL PROTECTED]




[squid-users] Squid-2, Squid-3, roadmap

2008-02-26 Thread Mark Nottingham

Hello Squid folk,

I maintain Yahoo!'s internal build of Squid, and serve as a resource  
for the various Y! properties that use it.


We currently only use Squid-2, and don't have plans to migrate to  
Squid-3; although ESI, ICAP as well as eCAP look interesting, there  
are too many critical features (e.g., collapsed fowarding, refresh  
stale hit,  full Vary/ETag support, not to mention several things in  
2.7DEVEL0) missing for us to use it. Additionally, anecdotal evidence  
shows that it's still too unstable and slow for production use where  
these aspects are important; or at least, there is enough doubt about  
them to make switching too risky for too little benefit.


I know that there's a lot of water under the bridge WRT -2 vs -3, and  
don't want to stir up what must seem like a very old discussion to the  
developers. However, there's not much clarity about the situation WRT  
2 vs 3, and we've been in this state for a long period of time.


Specifically, a few questions for the developers of Squid:

  * Besides the availability of *CAP and ESI -- which are very  
specialised, and of interest only to a subset of Squid users -- is  
there any user-visible benefit to switching to -3?


  * What do the developers consider to be a success metric for -3?  
I.e., when will maintenance on -2 stop?


  * Until that time, what is the development philosophy for Squid-2?  
Will it be only maintained, or will new features be added / rewrites  
be done as (possibly sponsored) resources are available? Looking at
http://wiki.squid-cache.org/RoadMap/Squid2, it seems to be the latter;
is that the correct interpretation?


  * If that success metric is not reached, what is the contingency  
plan?


  * How will these answers change if a substantial number of users  
willfully choose to stay on -2 (and not just because they neglect
to update their software)?



Also, a few questions for -users:

  * Who is using -3 in production now? How are you using it (load,  
use case, etc.) and what are your experiences?


  * Who is planning to use -3 soon? Why?

  * Who is not planning to use -3 soon? Why not?


Thanks,

--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Squid meetup in london

2008-02-26 Thread Mark Nottingham
Is this going to be a semi-regular event? I'd be interested in  
participating in the future, but need more lead time to arrange  
travel...


Cheers,


On 27/02/2008, at 7:35 AM, Robert Collins wrote:

I'm very happy to announce that Canonical are hosting a squid meetup in
London this coming Saturday and Sunday the 1st and 2nd of March. Any
*developers* (in the broad sense - folk doing
coding/testing/documenting/community support/) are very welcome to
attend. As it is a weekend and a security office building, you need to
contact me to arrange to come - just rocking up won't work :). We'll be
there all saturday and sunday through to mid-afternoon.

The Canonical London office is in Millbank Tower
http://en.wikipedia.org/wiki/Millbank_Tower.

So if you want to come by please drop me a mail.

For folk wanting a purely social meetup, I'm going to pick a reasonable
place to meet for food and (optionally) alcohol on Saturday evening -
I'll post details here mid-friday.

-Rob



--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Asking for feedback on new Squid Wiki theme

2008-02-05 Thread Mark Nottingham


Great job!

Looks very nice -- except that the background image is distracting,  
and makes it hard to read text.


Cheers,


On 05/02/2008, at 4:13 PM, Kinkie wrote:


Hi all,
 in the past few days I've worked on a new theme for the Squid Wiki,
and feedback has been mostly positive so far.
Before switching to it, I'd like to get feedback from you - after all,
the squid users are those who are going to benefit - or suffer - from
it.

I've set a read-only mirror of the wiki (plus the new theme) up at  
the URL

http://squid.new.kinkie.it/FrontPage

I invite all of you to take a tour, and please report your opinions.

Thanks!

--
   /kinkie


--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Asking for feedback on new Squid Wiki theme

2008-02-05 Thread Mark Nottingham
The problem is that it depends on the gamma of the monitor, which is  
different user-to-user, and OS-to-OS, monitor-to-monitor, etc.



On 05/02/2008, at 5:08 PM, Adrian Chadd wrote:


On Tue, Feb 05, 2008, Mark Nottingham wrote:


Great job!

Looks very nice -- except that the background image is distracting,
and makes it hard to read text.


Yeah, we were playing with how light to make it to be able to keep
it there.

I like the idea of a watermark, but I agree its still a bit  
distracting.




Adrian



--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Issue with getting python ICAP server running

2008-01-30 Thread Mark Nottingham
regex was part of Python itself, but it's been deprecated for quite  
some time. Probably the easiest thing to do (i.e., no coding) would be  
to use a less recent version of Python; try 2.0, 2.1 or 1.6.
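
(Or, if you'd rather patch medusa than downgrade, the old regsub calls
map roughly onto re -- an untested sketch, with a made-up pattern purely
for illustration:

  # deprecated 1.x-era style:
  #   import regsub
  #   s = regsub.gsub('foo', 'bar', s)
  # rough modern equivalent:
  import re
  s = re.sub('foo', 'bar', s)

but swapping Python versions is much less work.)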


Cheers,


On 31/01/2008, at 6:43 AM, Mohamedk wrote:


Hello  my name is Moe

I am not a 100% sure if this is the appropriate mailing list since the
problem i am having deals mainly about getting python ICAP up and
running. For some reason not all the modules required by ICAP are
available so i searched on google for each module and placed them in
/usr/local/lib/python2.5/site-packages/. Here are the changes i have
committed to the setup file:

export ICAP_PATH=`pwd`
export PYTHONPATH=$ICAP_PATH/medusa-20010416:$ICAP_PATH/proxylet:/usr/local/lib/python2.5/site-packages:/usr/local/lib/python2.5/

export http_proxy=192.168.0.1:80

After i downloaded most of the external modules for python i am stuck
at this error even though there is a token.py in site-packages.

# /tmp/icap_server/start_icap.py
/usr/local/lib/python2.5/site-packages/regsub.py:15:
DeprecationWarning: the regsub module is deprecated; please use
re.sub() DeprecationWarning)
Traceback (most recent call last):
 File /tmp/icap_server/start_icap.py, line 16, in module
   import filesys
 File /tmp/icap_server/medusa-20010416/filesys.py, line 83, in  
module

import regsub
 File /usr/local/lib/python2.5/site-packages/regsub.py, line 20,  
in module

   import regex
 File /usr/local/lib/python2.5/site-packages/regex.py, line 16, in  
module

from Token import Token
ImportError: cannot import name Token

I would appreciate any help from you guys or just point me towards
some sort of howto.


--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] client_http.hit_median_svc_time: what's the definition, again ?

2008-01-28 Thread Mark Nottingham
Interesting. What happens with requests that contain bodies? I.e., is  
#2 really the end of the request, or the request headers?




On 26/01/2008, at 11:23 AM, Chris Robertson wrote:


john allspaw wrote:

Hello smart and nice folks:

We have some reverse-proxy caches on the west coast that get hit  
quite a bit from across the Pacific, and we
see cache hit times much higher there than in our other  
datacenters. I *think* it's because client_http.hit_median_svc_time  
might also include transfer time to client ? So to confirm:


1. first byte of request into squid
2. last byte of request into squid
3. squid looks to see if ACLs are ok with servicing the request,  
probably some DNS going on here

4. (squid finds that it's a HIT of some kind)
5. first byte of response to client
6. last byte of response to client

does client_http.hit_median_svc_time mean the time from #2 thru  
#6 ?




Yes.  http://www.squid-cache.org/mail-archive/squid-users/200606/0351.html
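
(You can also watch the counter itself via the cache manager, e.g.:

  squidclient mgr:5min | grep client_http.hit_median_svc_time

which reports the median hit service time, in seconds, over the last
five minutes.)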

If it does, then these 'hit' times make sense. if it doesn't, well  
then I'm confuzzed. :)


thanks guys,
John Allspaw



Chris


--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Reverse Proxy Cache - implementing gzip

2008-01-20 Thread Mark Nottingham
Content codings (like gzip) are absolutely usable with HTTP/1.0. See  
RFC2145.
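
i.e. nothing forbids an exchange like this (purely illustrative):

  GET /page HTTP/1.0
  Accept-Encoding: gzip

  HTTP/1.0 200 OK
  Content-Encoding: gzip
  Vary: Accept-Encoding

The version number states what the sender can parse, not which headers
may appear; whether Apache chooses to compress for a 1.0 request is a
mod_deflate configuration matter rather than a protocol one.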



On 19/01/2008, at 4:40 AM, Tory M Blue wrote:


On Jan 18, 2008 12:46 AM, Ash Damle [EMAIL PROTECTED] wrote:
Hello. Any pointers how how to get Squid to do gzip compression and  
then e-tags when used as a reverse proxy cache.


Thanks

-Ash


Has to do with version HTTP1.1 vs gzip. But since Squid passes http1.0
version to your origin servers, they are going to respond in kind and
thus the origin is not going to gzip the content (if squid preserved
the 1.0 vs 1.1 version,  the origin server could do what it wanted.
But believe that is the RFC compliance that squid seems to be hard
pressed to conform with.

How much would it cost to get Squid to preserve the http version so
that our servers could provide gzip functionality?

Tory


--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Re: Squid http1.1 vs http1.0 (probably again)

2008-01-15 Thread Mark Nottingham
...and work that's commissioned by other folks who need new features  
in a production-ready Squid.


Cheers,


On 15/01/2008, at 11:28 PM, Amos Jeffries wrote:

Features are largely the reverse. Ported down. With the exception of  
Adrians work.


--
Mark Nottingham   [EMAIL PROTECTED]




[squid-users] FYI: cache channels, etc.

2008-01-06 Thread Mark Nottingham
Just a quick FYI - we contracted Henrik to do a few things. They may  
be of wider interest, so they've found their way into 2.7. For more  
information, see:


  http://www.mnot.net/blog/2007/12/12/stale
  http://www.mnot.net/blog/2008/01/04/cache_channels

Cheers,

--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Squid-2.7 branched (was [EMAIL PROTECTED]: cvs commit: squid configure.in])

2008-01-01 Thread Mark Nottingham
-20071224/src/ 
store_key_md5.c:147: undefined reference to `MD5Update'
store_key_md5.o(.text+0x353):/tmp/squid-2.7.DEVEL0-20071224/src/ 
store_key_md5.c:148: undefined reference to `MD5Update'
store_key_md5.o(.text+0x375):/tmp/squid-2.7.DEVEL0-20071224/src/ 
store_key_md5.c:150: undefined reference to `MD5Update'
store_key_md5.o(.text+0x38f):/tmp/squid-2.7.DEVEL0-20071224/src/ 
store_key_md5.c:151: undefined reference to `MD5Update'

wccp2.o(.text+0x208): In function `wccp2_update_md5_security':
/tmp/squid-2.7.DEVEL0-20071224/src/wccp2.c:472: undefined reference  
to `MD5Init'
wccp2.o(.text+0x21a):/tmp/squid-2.7.DEVEL0-20071224/src/wccp2.c: 
473: undefined reference to `MD5Update'
wccp2.o(.text+0x229):/tmp/squid-2.7.DEVEL0-20071224/src/wccp2.c: 
474: undefined reference to `MD5Update'
wccp2.o(.text+0x235):/tmp/squid-2.7.DEVEL0-20071224/src/wccp2.c: 
475: undefined reference to `MD5Final'

wccp2.o(.text+0x1579): In function `wccp2HandleUdp':
/tmp/squid-2.7.DEVEL0-20071224/src/wccp2.c:514: undefined reference  
to `MD5Init'
wccp2.o(.text+0x158b):/tmp/squid-2.7.DEVEL0-20071224/src/wccp2.c: 
515: undefined reference to `MD5Update'
wccp2.o(.text+0x159a):/tmp/squid-2.7.DEVEL0-20071224/src/wccp2.c: 
516: undefined reference to `MD5Update'
wccp2.o(.text+0x15a6):/tmp/squid-2.7.DEVEL0-20071224/src/wccp2.c: 
517: undefined reference to `MD5Final'

*** Error code 1

Stop in /tmp/squid-2.7.DEVEL0-20071224/src.
*** Error code 1

Stop in /tmp/squid-2.7.DEVEL0-20071224/src.
*** Error code 1

Stop in /tmp/squid-2.7.DEVEL0-20071224/src.
*** Error code 1

Stop in /tmp/squid-2.7.DEVEL0-20071224.

Script done on Mon Dec 24 14:55:16 2007


--
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial  
Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in  
WA -


--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Pending Squid-2.7 release - testers wanted!

2007-12-19 Thread Mark Nottingham
I take it the HTTP/1.1 support is on both the client and server sides?  
http://www.squid-cache.org/Versions/v2/HEAD/changesets/11796.patch


Has it been run through co-advisor?

Cheers,


On 2007/12/20, at 1:56 AM, Adrian Chadd wrote:


Hi everyone,

The Squid-2.7 release should be tagged any day now, so we'd appreciate it
if people currently using Squid-2.6 in high-traffic environments could
give Squid-2.HEAD a whirl.

It should just drop in with no configuration changes needed.

More fun stuff will start appearing in Squid-2 after Squid-2.7 is
released, so stay tuned.



Adrian



--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Future of ESI

2007-12-06 Thread Mark Nottingham
WRT the language itself, I think that it was good for its time, but  
that it can be improved.


I currently have a back-burner project on to come up with an improved  
ESI-like language, taking some of the ideas from my XTech talk last  
year http://www.mnot.net/blog/2006/05/16/web_2_caching.


More soon (hopefully).


On 2007/12/03, at 3:54 PM, Adrian Chadd wrote:


All it requires is someone with interest and/or a sponsor or two to
fund fixing the bugs and making it stable.




Adrian

On Mon, Dec 03, 2007, Janne Kario wrote:

Hi,

I've understood that the ESI support in squid is still experimental.
What is the status of ESI in general? Akamai, Oracle Web Cache and IBM
WebSphere seem to support it. However, articles on ESI date back to
2001-2004 and JESI (Java ESI tag library) still lacks a reference
implementation. Additionally, there is very little information about ESI
outside Akamai/IBM/Oracle websites. These small signs have led me to
believe that nobody cares and it's not that great a technology in the
first place.

What will become of ESI?

I'm working on a project which is to produce a public website with ads
and some personalized content blocks (on the front page). I'm evaluating
whether ESI might be the silver bullet.

j


--
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial  
Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in  
WA -


--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] read_ahead_gap

2007-11-15 Thread Mark Nottingham
Thanks. How much do I have to worry about this from a memory  
perspective (i.e., does each connection have this much memory  
allocated for a buffer, or only allocated when there's actually a gap)?


Cheers,


On 2007/11/15, at 12:01 PM, Henrik Nordstrom wrote:


On ons, 2007-11-14 at 11:49 +1100, Mark Nottingham wrote:

I'd like to double-check the semantics of read_ahead_gap.

AIUI, Squid will buffer up to that much data on both requests and
responses, in addition to the TCP send and receive buffers.


responses only.


So, if I have (for the sake of argument) 16K TCP read buffers, 24K
TCP write buffers, and 32K read_ahead_gap, a pathological request
case might look like:

client --- [ 16K worth of TCP read buffer ] -- [ 32K internal Squid
buffering ] -- [  24K worth of TCP write buffering ] -- server


yes, if reversing the picture.

The request forwarding path is slightly different, with internal  
buffers
of only 8KB under normal conditions. There is no tunable knob for  
this.


Regards
Henrik


--
Mark Nottingham   [EMAIL PROTECTED]




[squid-users] read_ahead_gap

2007-11-13 Thread Mark Nottingham

I'd like to double-check the semantics of read_ahead_gap.

AIUI, Squid will buffer up to that much data on both requests and  
responses, in addition to the TCP send and receive buffers.


So, if I have (for the sake of argument) 16K TCP read buffers, 24K  
TCP write buffers, and 32K read_ahead_gap, a pathological request  
case might look like:


client --- [ 16K worth of TCP read buffer ] -- [ 32K internal Squid  
buffering ] -- [  24K worth of TCP write buffering ] -- server


or, as much as 72K of buffer data on this host (but again, that's a  
pathological case). A response would be the reverse of this, and this  
doesn't include TCP buffers on the client or server.


Correct?
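
(For reference, the directive in question is set in squid.conf like this; 32 KB is simply the value used in the example above, not a recommendation:

  # squid.conf -- illustrative value only
  read_ahead_gap 32 KB
)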

Also, how is memory for the read_ahead_gap allocated? If I have 1024  
open connections and a 32K read_ahead_gap, is 32M of memory used for  
this buffer? Or is it only allocated upon use?


Cheers,

--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] criticism against squid

2007-08-30 Thread Mark Nottingham


On 2007/08/31, at 7:04 AM, Nicole wrote:


On 29-Aug-07 My Secret NSA Wiretap Overheard john allspaw Saying  :
Varnish shows a lot of promise.  I do believe that there's a good amount of
trash talking in those comments, especially given that squid would for sure
have been designed differently if it set out to be a fast accelerator, not a
forward proxy with all of the bells and whistles.

Flickr can't use Varnish in its current form, for example, because object
eviction isn't yet a feature.  :)
Hence, we use squid.  It's working just fine for us. So in that case, I'll
take the 1980 design that works, versus the 2007 design that doesn't. :)

-j


 It seems like their trash talking is to try to get people to switch and try to
garner more funding. Varnish is also not as user friendly as squid.


The lack of documentation is the biggest problem I have. They also  
describe it as a HTTP caching reverse proxy, when they don't honour  
many parts of HTTP.


The 'trash talk' seems based on their conviction that the OS can  
manage VM better than Squid does, with application-specific  
knowledge. As long as you don't set your cache_mem too high, you  
won't run into the doomsday scenarios they paint...


 Now.. if you're from flickr.. and you use Squid.. Seems like a big company like
yours should be making some nice donations.. poke poke.. That would help squid
get updated :)


Stay tuned.

--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-13 Thread Mark Nottingham
FreeBSD and aufs were discussed a while back, IIRC, and the upshot was
that for FreeBSD 6, it's useful (threads on 4 is a no-no). The lingering
doubt in my mind was this bug:
http://www.freebsd.org/cgi/query-pr.cgi?pr=103127, which appears to have
been patched in 6.1-RELEASE-p5.

So, in a nutshell, can it be safely said that aufs is stable and
reasonably performant on FreeBSD >= 6.2, as long as the described thread
configuration is performed?


Cheers,


On 2007/08/11, at 7:36 PM, Henrik Nordstrom wrote:


On lör, 2007-08-11 at 15:10 +0545, Tek Bahadur Limbu wrote:

As far as I know and have seen with my limited experience, diskd seems good
for BSD boxes. But I guess I have to try other alternatives too.

If I opt to use aufs, will the following compilations work?

'--enable-async-io' '--with-pthreads'


--enable-storeio=aufs

pthreads is automatically enabled, so no need to specify that. Won't
hurt if you do however.

If you are on FreeBSD then remember to configure FreeBSD to use kernel
threads for Squid or it won't work well. See another user response in
this thread.

On Linux and Solaris you do not need to care about this as the default
posix threads implementations there supports kernel threads out of the
box.


--enable-async-io=40


There is no --enable-async-io option any more. This was very long  
ago..

Still understood as an alias for --enable-storeio=aufs and a few other
configure options however (see --help)

You generally do not need to specify the amount of I/O threads. The
default selected by Squid based on your squid.conf is quite  
reasonable.
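
(Putting the above together, a build and cache_dir along these lines is what's being described; the path and sizes are illustrative only:

  ./configure --enable-storeio=aufs,ufs
  # then in squid.conf:
  cache_dir aufs /var/squid/cache 20000 16 256
)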


Regards
Henrik


--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] ESI feature in squid3?

2007-05-30 Thread Mark Nottingham

I believe that Oracle also ships ESI in their product...
  http://www.oracle.com/technology/products/ias/daily/sept17.html
...as well as IBM;
  http://www-304.ibm.com/jct03002c/software/webservers/appserv/was/network/edge.html
I don't know enough about them to say whether they're mature or not, but
I'd imagine they're at least usable.


Cheers,


On 2007/05/22, at 3:33 AM, Henrik Nordstrom wrote:


mån 2007-05-21 klockan 22:30 +0800 skrev howard chen:


But the point of using squid + esi at the reverse proxy is to reduce
the server loading from web server.


Yes.

Yes, memcached is fast, but when requests hit your php interpreter, it
is slooow and difficult to scale...


There is alternatives to PHP, but yes.


seems that only akamai has mature implementation right now...


There hasn't been very great interest from the users in getting a stable ESI
implementation. The Squid project is entirely community driven, and
features not having noticeable support from the users evolve very
slowly..

Regards
Henrik


--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] new website: final beta

2007-05-10 Thread Mark Nottingham

Might want to have a look at
  http://www.mnot.net/cgi_buffer/

which, despite its name, has a PHP one-line drop-in that might do the  
trick.


Mind you, I haven't looked at that code in years, and there very well
may be some bugs in there, or compatibility problems with newer versions
of PHP, but it's a starting point...


Cheers,


On 2007/05/10, at 12:02 AM, Adrian Chadd wrote:


On Wed, May 09, 2007, Craig Skinner wrote:

On Wed, May 09, 2007 at 02:14:33PM +0200, Ralf Hildebrandt wrote:


Nice work Adrian!


Definitely.



Struth Bruce! Nice one mate!

Sort of quoting one of Yahweh's olde proverbs:
...squidmaster, cache thy self

Will the final site be cache-able?

I don't have the web skills that you do, but I found the easiest way to
make php's cache-able was to lynx dump the php to a .html, and have
apache serve index.html in preference to index.phtml. Naturally, all
links to pages must be to the .html and not the .php:


It will be. I just haven't yet added E-Tag and Expiry generation to the
PHP code. I'll see what I can do. I haven't found an example of a really
good dynamic site that actually sets appropriate cachability tags
(and does so with minimal load to the server - there's no point in having
to do the whole database query set and parse the database replies
just to generate etags, for example!) so I figure this can double as
that.

Now, where's that spare time..




Adrian



--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] comm_select: kevent failure: (9) Bad file descriptor

2007-03-26 Thread Mark Nottingham

Consider yourself beaten -- this has been bugging me for a while. :)


On 2007/03/26, at 5:31 PM, Adrian Chadd wrote:


On Sun, Mar 18, 2007, Martin Solciansky wrote:

hello,

we have been using squid-2.6.6 (with some security patches from higher
versions and the kevent patch from .11) for some time now, but a few weeks
ago we started getting:

2007/03/17 08:25:12| comm_select: kevent failure: (9) Bad file descriptor
2007/03/17 08:25:12| Select loop Error. Retry 1


I'm pretty sure I know why it is: Squid isn't removing the entries from
the kqueue array when a filedescriptor is closed, and if the
filedescriptor isn't reused fast enough it'll end up throwing the above
error.

This error used to actually cause runtime leaks. I'm not sure if Henrik's
recent work to mimic what libevent does has fixed it or not, but I do know
the issue is still there.

The real fix is to delay close() of filedescriptors until it's time to call
kevent: on close(), modify the event in the array to be EV_DELETE, then
close() all the filedescriptors. This is however very horrible and ugly.
The fix I came up with for one of my private projects was to record the
kevent array offset of the registered read/write events, if any, and
rewrite them to be null. Now, kqueue doesn't have any concept of an
event slot that's to be ignored (unfortunately!) so you fake it with:

* ident = 0 (fd = 0, for EV_READ and EV_WRITE filters)
* filter = EV_READ or EV_WRITE, whatever's appropriate for what you're
  replacing
* flags = EV_ADD | EV_DISABLE

Kqueue doesn't mind if there's multiple filters added; it just overrides
the currently registered one. The above is about as close to a NULL filter
as you're going to get.
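
A minimal C sketch of that placeholder trick, for the curious (this is not Squid code; the changelist array and the slot bookkeeping are assumptions made purely for illustration):

  #include <sys/types.h>
  #include <sys/event.h>
  #include <sys/time.h>

  /* Pending kqueue changelist, flushed at the next kevent() call. */
  static struct kevent changes[1024];

  /*
   * Overwrite a previously queued change for a now-closed fd with a
   * harmless placeholder: ident 0, same filter, added but disabled.
   */
  static void
  kqueue_blank_slot(int slot, short filter)  /* EVFILT_READ or EVFILT_WRITE */
  {
      EV_SET(&changes[slot], 0, filter, EV_ADD | EV_DISABLE, 0, 0, NULL);
  }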

I've done this in my test code to support loads much higher than Squid
(~6k proxied req/sec (6k client sockets, 6k server sockets) with ~3000 to
~4000 concurrent connections. It's sane.)

If someone wants to beat me to fixing then please, by all means. :)




Adrian



--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Squid 2.6.STABLE9 and caching of 302 redirects

2007-02-13 Thread Mark Nottingham
 and simply send a normal request to the  
origin server. That would certainly have the right outcome (getting  
a copy of the requested document, as long as the user was  
authenticated).


RFC 2616 says (in section 13.3 Validation Model)

  Note: a response that lacks a validator may still be cached, and
  served from cache until it expires, unless this is explicitly
  prohibited by a cache-control directive. However, a cache cannot
  do a conditional retrieval if it does not have a validator for the
  entity, which means it will not be refreshable after it expires.
That reads like it is agreeing with me, unless it is legitimate for  
Squid to use the timestamp from another header (must be Date: or  
Expires:) with If-Modified-Since.


Section 13.3.5 of the RFC says (in part) "Thus, comparisons of any
other headers (except Last-Modified, for compatibility with HTTP/1.0)
are never used for purposes of validating a cache entry.", which
appears to confirm that Last-Modified: is the only header which can be
used in such comparisons.


Is the problem actually Squid's fault? Or is it the origin server's  
fault for responding with details about the HTML document when  
Squid was asking about the redirect?


I don't see how the origin server could avoid doing that, though,
since the request from Squid does not and cannot distinguish those
two cases (i.e. it cannot ask "is this redirect what you would
currently send for this request?"). Although some requests include
an authentication cookie and others do not, and different outcomes
are expected, Squid cannot be expected to know the significance of
the cookie to the origin server.


I'll file a bug report if responses seem to favour it being Squid's  
fault!


John

--
John Line - web  news development, University of Cambridge  
Computing Service


--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Squid 2.6.STABLE9 and caching of 302 redirects

2007-02-05 Thread Mark Nottingham

Hi John,

Just curious -- have you tried using workarounds like
  Cache-Control: max-age=0
or
  Cache-Control: no-cache
to see how they behave?

Cheers,


On 2007/02/06, at 12:00 PM, John Line wrote:


I recently built Squid 2.6.STABLE9 as a potential replacement for
2.5.STABLE10, but encountered a problem with our local web  
authentication system (which worked just fine with the older Squid  
version).


Investigation showed that the problem was that the new Squid  
version was caching the temporary redirects (HTTP status 302) sent  
by origin servers to direct unauthenticated requests to our  
authentication server. When the authentication server subsequently  
redirected the (now authenticated) requests back to the originally- 
requested URLs, Squid served the corresponding cached redirects  
instead of passing the requests through to the origin servers.


That didn't happen with the old Squid version, and I couldn't see
any mention of relevant-seeming changes in the release notes at
http://www.squid-cache.org/Versions/v2/2.6/squid-2.6.STABLE9-RELEASENOTES.html
or the ChangeLog file in the source kit.


It looked initially like it might be the problem described by Squid
Bugzilla entry 1420 ("302 responses with an Expires header is
always cached"), but I'm not sure about that - it may instead be a
related (but distinct) problem. Hence asking about it here, rather
than simply adding a note to the bugzilla item to suggest it's more
serious than an enhancement request (which is how it is currently
classified).


The authentication component running on the origin servers currently
includes an Expires: header in all responses (including redirects),
using the same timestamp as the Date: header. That's the specific
recommendation in RFC 2616 (section 14.21) for how to mark a response
as already expired, which I interpret as meaning it should never be
cached (or perhaps, alternatively, that a cached copy can be stored as
long as it is never returned to a client (always treated as stale,
forcing re-validation) - which should have the same effect, though with
different implementation).
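
For illustration, a pre-expired redirect of the sort described would look something like this (the header values here are made up):

  HTTP/1.1 302 Found
  Date: Tue, 06 Feb 2007 12:00:00 GMT
  Expires: Tue, 06 Feb 2007 12:00:00 GMT
  Location: http://auth.example.org/login?return=http://www.example.org/page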


Section 10.3.3 (302 Found) of the RFC says "This response is only
cacheable if indicated by a Cache-Control or Expires header field."
but (surely!) an Expires: header marking a response as pre-expired
should not be counted as indicating that a redirect is cacheable.
However, that (caching it and then serving the cached redirect instead
of passing a followup request for the same URL through to the origin
server) appears to be what is happening with Squid 2.6.STABLE9.


One of the comments in the bugzilla item notes that

"This bug report is about 302 redirect responses with an explicit
expiry time set. Currently Squid caches these objects even if
already expired, which is not optimal behaviour (but works..)."


Superficially, that appears to be saying caching all 302 redirects  
with an Expires header is just an efficiency issue, and won't break  
anything, directly contradicting our experience that it does break  
things that expect the RFC-mandated handling of expired content.


I can see, however, that maybe *caching* all redirects is expected  
to be safe (though sub-optimal) *if* in addition it is expected  
that any (pre-)expired redirects will be ignored when Squid is  
deciding how to handle subsequent requests for the same URL, so  
that such requests would be passed to the origin server. If so,  
then it looks like the bug is that the code which should ignore  
(pre-)expired cached redirects has got broken at some point between  
2.5.STABLE12 and 2.6.STABLE9.


Any suggestions about where the problem really lies (and whether  
it's a known bug, if different from 1420, or if I need to file a  
new bug report), would be appreciated.


[I suppose I should mention that we are seeing the problem with  
Squid running on x86 PC servers running the 32-bit version of SLES9  
Linux - but that seems rather unlikely to be relevant to the problem.]


John Line

--
John Line - web  news development, University of Cambridge  
Computing Service


--
Mark Nottingham   [EMAIL PROTECTED]




[squid-users] Thoughts on an invalidation protocol

2006-11-29 Thread Mark Nottingham
-1A8A50007190</id>
      <updated>2003-12-13T18:30:15Z</updated>
    </entry>
  </feed>

Again, there are several interesting properties here.

a) Since it's just HTTP, the Atom feed documents that represent the  
channel can themselves be cached, so that they can be shared  
efficiently (and a bit more reliably) between a cluster or hierarchy  
of caches.


b) Yes, the cache(s) will need to poll the Atom feed, but the poll  
frequency can be attenuated to make that workable while still  
delivering invalidations; e.g., if a channel represents an entire Web  
site (or several sites), and a cache polls once every 10 seconds, that  
should be a reasonable load for the origin server.


c) Since it's just plain HTTP and an XML format, most any site can  
produce an event feed with widely available tools.
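
For concreteness, a channel along these lines might look roughly like the following (the element choices, URLs and ids are illustrative only, not a fixed format):

  <feed xmlns="http://www.w3.org/2005/Atom">
    <title>Invalidation channel for www.example.com</title>
    <updated>2006-11-29T10:15:00Z</updated>
    <entry>
      <title>invalidate</title>
      <link href="http://www.example.com/news/front.html"/>
      <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id>
      <updated>2006-11-29T10:14:30Z</updated>
    </entry>
  </feed>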


There are a lot of other issues to cover (e.g., invalidating groups of  
URIs, updating content, interaction with other freshness controls)  
but I would very much like to get feedback first on whether this is  
an interesting direction to go in, if there are any huge problems I  
don't see, and whether this would be a useful facility in Squid.


In particular, this approach is looser than some other approaches;  
events may take some time to propagate (which is unavoidable in a  
distributed system, but here it takes longer than some may be  
comfortable with), certainly isn't atomic (e.g., you will see  
situations where one cache will have a different idea of a response's  
freshness as compared to another), will fall back to the default  
cacheability on any kind of error in the invalidation channel, and so  
on.


Thoughts?

--
Mark Nottingham
[EMAIL PROTECTED]





[squid-users] refresh_stale_hit, collapsed_forwarding and Cache-Control: no-cache request headers

2006-11-17 Thread Mark Nottingham

When:
  * refresh_stale_hit and collapsed_forwarding are both active, and
  * the request contains a Cache-Control: no-cache directive, and
  * multiple requests arrive within the refresh_stale_hit window

the first request will go forward as expected, but subsequent  
requests get collapsed and effectively block, instead of being served  
from cache.


In other words, requests with CC: no-cache (or Pragma: nc, I'm  
guessing) effectively make *all* requests for the URI have a no-cache  
directive for the span of the refresh_stale_hit window, if  
collapsed_forwarding is on.
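
(For reference, the combination being described corresponds to squid.conf settings along these lines; the window value here is illustrative:

  collapsed_forwarding on
  refresh_stale_hit 5 seconds
)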


This isn't desirable, but it's not a huge deal; just thought I should  
mention it in case somebody got near this code again.


Cheers,

--
Mark Nottingham
[EMAIL PROTECTED]





Re: [squid-users] 2 tilde in URL = bogus or not ?

2006-10-26 Thread Mark Nottingham
That's allowed. The ~user convention is just that -- a convention,  
not a standard.


Cheers,


On 2006/10/26, at 4:25 AM, Mark Elsen wrote:


According to http standards and semantics , I wonder :

   http://foo.bar.com/~~/

Is this allowed, and what would be the result ?

M.


--
Mark Nottingham
[EMAIL PROTECTED]





Re: [squid-users] POST cacheability

2006-10-23 Thread Mark Nottingham

What does it do with variants currently?

I.e., if it gets a CLR for one variant, will it also purge other ones  
on the same URI?




On 2006/10/20, at 1:11 PM, Henrik Nordstrom wrote:


fre 2006-10-20 klockan 10:32 -0700 skrev Mark Nottingham:

The tricky part of doing that, I think, is that to make this useful
(e.g., to a reverse proxy), it'd be necessary to share invalidations
between cache peers, which I think would require an extension to ICP.


Or perhaps extensions to HTCP to account for Vary. If it wasn't for the
silly fact that Squid HTCP is completely ignorant of Vary still..

Regards
Henrik


--
Mark Nottingham
[EMAIL PROTECTED]





Re: [squid-users] POST cacheability

2006-10-20 Thread Mark Nottingham
The tricky part of doing that, I think, is that to make this useful  
(e.g., to a reverse proxy), it'd be necessary to share invalidations  
between cache peers, which I think would require an extension to ICP.




On 2006/10/20, at 2:38 AM, Henrik Nordstrom wrote:


tor 2006-10-19 klockan 17:19 -0700 skrev Mark Nottingham:


Overall, caching POST responses would be nice. If I were
prioritising, however, I'd be much more excited about getting Squid
conformant to section 13.10 (invalidations from side effects).


Fully argreed.

Regards
Henrik


--
Mark Nottingham
[EMAIL PROTECTED]





Re: [squid-users] POST cacheability

2006-10-19 Thread Mark Nottingham

Well, the relevant part is:

Section 9.5:
  "Responses to this method are not cacheable, unless the response
  includes appropriate Cache-Control or Expires header fields.
  However, the 303 (See Other) response can be used to direct the
  user agent to retrieve a cacheable resource."
Further down (e.g., in 13.6), there is only reference to using the  
URI and the selecting request-headers as input into the cache key,  
never the method.


Effectively, the POST response is, by default, a status message, but  
if it's allowed to be cached, it becomes a representation of that  
resource, and available to GET.


This area isn't documented very well, no matter which way you  
interpret it. It should be cleaned up.


Overall, caching POST responses would be nice. If I were  
prioritising, however, I'd be much more excited about getting Squid  
conformant to section 13.10 (invalidations from side effects).


Cheers,



On 2006/10/17, at 2:24 PM, Henrik Nordstrom wrote:


mån 2006-10-16 klockan 22:17 -0700 skrev Mark Nottingham:

That's been my understanding of it for many years, and I've heard
others say likewise.


Been reading the RFC up and down, and can't in my way of reading it find
any support for this, only the opposite.

Please clue me in on the reasoning here, from the point of the RFC.


If a server wanted to say "Please don't POST to this URI", they could
just respond with a 405 Method Not Allowed. 4xx responses have a
"don't do that again" semantic anyway...


For a single request yes. But the semantics for a shared cache are not as
well defined.. so we use the general bailout of "responses may be cached
unless indicated otherwise" in Squid..

Regards
Henrik


--
Mark Nottingham
[EMAIL PROTECTED]





Re: [squid-users] HTTP protocol violation error using .NET 2.0 web services through Squid-2.5 proxy

2006-10-19 Thread Mark Nottingham

Hi,

Can you post a trace (e.g., tcpdump, tcpflow) of the interaction  
between .NET and Squid? Just headers is fine. I'd be very interested  
to see what's happening here...


Thanks,


On 2006/10/19, at 4:42 AM, Marcus Ogden wrote:


Hello,

A client of ours using the Squid proxy server (version
2.5.STABLE6-3.4E.12.1) on Red Hat Enterprise Linux 4 is experiencing a
problem when running our .NET 2.0 client application, which communicates
with a .NET 2.0 web service on our server.

When our client application sends an HTTP 1.1 request through the Squid
proxy to our server, it receives the error:

  "The server committed a protocol violation. Section=ResponseStatusLine"


Other clients not using Squid are not experiencing this problem.

Researching this, we've found a few posts that report similar problems
using .NET 2.0 web services and/or the HTTP 1.1 protocol through Squid,
e.g.

http://forums.asp.net/thread/1194960.aspx
http://groups.google.to/group/microsoft.public.dotnet.framework.remoting/msg/dae1a8e9eed3dcf3?dmode=source
http://www.squid-cache.org/mail-archive/squid-users/200606/0534.html

We've also tried the suggestion in
http://forums.asp.net/thread/1284850.aspx to set the
useUnsafeHeaderParsing property in the client .NET application's config
file to true, but our client reports this hasn't solved the problem.

Any suggestions on how we can resolve this issue would be much
appreciated.

Regards,

Marcus Ogden
Software Development
QSR International
www.qsrinternational.com




--
Mark Nottingham
[EMAIL PROTECTED]





Re: [squid-users] POST cacheability

2006-10-14 Thread Mark Nottingham
Squid (and most other caches) considers all POST responses  
uncacheable, even if they have freshness information.


I think that's unfortunate, but there you go.

Just out of curiosity, what's your use case?

Note that when POST is cached, it's future GETs that will get the  
cached response, not POSTs (which will be forwarded to the origin).
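
In other words, the exchange being asked about is roughly this (headers are illustrative); per RFC 2616 section 9.5 the response below may be cached, but Squid will not store it because the method was POST:

  POST /search HTTP/1.1
  Host: www.example.com
  Content-Type: application/x-www-form-urlencoded

  q=squid

  HTTP/1.1 200 OK
  Cache-Control: max-age=600
  Content-Type: text/html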


Cheers,

On 2006/10/13, at 2:49 AM, Zsolt Laposa wrote:


Hi,

I'm using squid-2.6 as a reverse proxy, and I'm trying to cache POST
responses. The Squid FAQ and the HTTP 1.1 RFC say that responses to a POST
request are not cacheable, unless the response includes a Cache-Control or
Expires header. I'm using Cache-Control max-age in the responses, but Squid
still doesn't cache the POST responses.

Did I miss something in the Squid configuration?

Thanks in advance,
Zsolt





--
Mark Nottingham
[EMAIL PROTECTED]





Re: [squid-users] elapsed time accuracy

2006-09-29 Thread Mark Nottingham
Hmm... looking at this a bit more closely, I'm also seeing this on  
responses that are uncachable (and therefore shouldn't be collapsed).


On 2006/09/28, at 8:20 PM, Mark Nottingham wrote:

You mean with collapsed forwarding? That makes perfect sense, if  
collapsed requests are logged as TCP_MISS...


Thanks!

P.S. It would be cool if there was a separate log tag for collapsed  
requests...




On 2006/09/28, at 3:20 PM, Henrik Nordstrom wrote:

Hmm.. maybe what you are seeing is merged requests to an already running
fetch of that URL? Do you have other requests for the same URL very close
in time?


--
Mark Nottingham
[EMAIL PROTECTED]






--
Mark Nottingham
[EMAIL PROTECTED]





[squid-users] elapsed time accuracy

2006-09-27 Thread Mark Nottingham
I'm seeing some apparently impossible elapsed times in access.log,  
e.g., TCP_MISSes DIRECT to servers that are 100+ms away showing 2ms  
elapsed.


I seem to remember someone saying that those numbers were sometimes  
inaccurate, but can't find any more detail. What's the story? My  
first thought was aborted requests, but it appears that about the  
right number of bytes were written.


Thanks,

--
Mark Nottingham
[EMAIL PROTECTED]





Re: [squid-users] Persistent Connections

2006-09-20 Thread Mark Nottingham
RFC2616 refers to RFC2068 for HTTP/1.0-style persistent connections,  
which is the most normative source we have for this.

  http://rfc.net/rfc2068.html#s19.7.1

The way that that's written leads me to believe that a HTTP/1.1  
client can send a request to a HTTP/1.0 server and expect the  
resulting connection to be persistent, as long as it has a Content- 
Length.
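
Spelled out, the case in question is an exchange like this (headers are illustrative), with the open question being whether the client may keep the connection open even though the HTTP/1.0 response carries no explicit Connection: keep-alive:

  GET /page HTTP/1.1
  Host: www.example.com

  HTTP/1.0 200 OK
  Date: Wed, 20 Sep 2006 12:00:00 GMT
  Content-Length: 1234

  ...1234 bytes of body...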


However, since this is a spec interpretation issue, I might take it  
up with the folks over at HTTP-WG.


Cheers,


On 2006/09/20, at 5:55 AM, Henrik Nordstrom wrote:


Except that HTTP/1.1 doesn't define Connection: keep-alive, only
Connection: close. The keep-alive of an HTTP/1.1 connection is
implicit by the protocol being HTTP/1.1.

Connection: keep-alive is keep-alive of an HTTP/1.0+ style persistent
web server connection. HTTP/1.0+ defines different signaling for web
servers and proxies due to Connection not being an HTTP/1.0 header,
making it likely proxies do not understand Connection: keep-alive. A
client accepting Connection: keep-alive as keep-alive of a proxied
connection is broken, not respecting the Netscape specifications for
keep-alive for HTTP/1.0.

Regards
Henrik


--
Mark Nottingham
[EMAIL PROTECTED]





Re: [squid-users] Persistent Connections

2006-09-20 Thread Mark Nottingham

On 2006/09/20, at 2:14 PM, Henrik Nordstrom wrote:


But it's true that we probably could assume a HTTP/1.1 message is
persistent unless it has a connection: close tag as the close tag is
required by HTTP/1.1. But at the same time RFC 2616 8.1.2.1 says:

   Clients and servers SHOULD NOT assume that a persistent connection is
   maintained for HTTP versions less than 1.1 unless it is explicitly
   signaled. See section 19.6.2 for more information on backward
   compatibility with HTTP/1.0 clients.


... and one could argue that it's explicitly signalled by the Content- 
Length header in the response.



8.1.3 says

   A proxy server MUST NOT establish a HTTP/1.1 persistent connection
   with an HTTP/1.0 client (but see RFC 2068 [33] for information and
   discussion of the problems with the Keep-Alive header implemented by
   many HTTP/1.0 clients).


I'm actually more interested in this in the gateway case, but point  
taken.



However, since this is a spec interpretation issue, I might take it
up with the folks over at HTTP-WG.


You are welcome.

But I don't really see much value in stirring up discussions around HTTP/1.0
persistent connections; they work the way they do and can not be
changed, only documented (it was a dead end).


If you haven't seen Roy's... colourful response on HTTP-WG along  
these lines, I'll forward. :)



The most significant blank spot is how HTTP/1.0 proxies knowing about
persistent connections should react to HTTP/1.1 clients not explicitly
signaling persistent connections. Here we choose to take the safe path and
assume the client doesn't know about HTTP/1.0 persistent connections
and close the connection.

Unfortunately I have no idea where to find that Netscape document today
after all their restructuring. Maybe in the Internet Archive?


I'll look for it.

Just thinking aloud -- the obvious solution to this is to make Squid  
HTTP/1.1. Of course, that's a lot of work, but I wonder if it would  
be more manageable by going 1.1 on just the client side at first,  
while remaining 1.0 on the server side, to avoid chunked responses.


Yes, I realise that's pretty sick.

Cheers,

--
Mark Nottingham
[EMAIL PROTECTED]




Re: [squid-users] Persistent Connections

2006-09-20 Thread Mark Nottingham
I realise that C-L has two different purposes. Since closing the
connection signals both, synthesising the C-L doesn't seem like
taking a huge liberty in the face of serving the partial cached
response as if it's the whole thing. YMMV (obviously).


Thanks for the help,


On 2006/09/20, at 2:22 PM, Henrik Nordstrom wrote:


The problem you're pointing out WRT Squid caching partial responses
exists today; if I send a connection-delimited response and close
early, Squid will cache it, given the appropriate headers...


Yes, what I said. In many cases it's impossible to tell the two apart
with both signaled by close of connection.




--
Mark Nottingham
[EMAIL PROTECTED]





[squid-users] refresh_stale_hit - refresh_stale_window?

2006-09-11 Thread Mark Nottingham
Looking through the source of P3, I *think* what's documented as  
refresh_stale_hit is actually coded as refresh_stale_window -- the  
only place the former occurs is in comments, the config file and the  
release notes, while the latter is used throughout client_side.c and  
refresh.c.


Is this a known problem, and should it work to just use  
refresh_stale_window in P3?


Cheers,

--
Mark Nottingham
[EMAIL PROTECTED]





Re: [squid-users] refresh_stale_hit - refresh_stale_window?

2006-09-11 Thread Mark Nottingham

Nevermind, I just looked at cf.data.pre...


On 2006/09/11, at 3:03 PM, Mark Nottingham wrote:

Looking through the source of P3, I *think* what's documented as  
refresh_stale_hit is actually coded as refresh_stale_window -- the  
only place the former occurs is in comments, the config file and  
the release notes, while the latter is used throughout  
client_side.c and refresh.c.


Is this a known problem, and should it work to just use  
refresh_stale_window in P3?


Cheers,

--
Mark Nottingham
[EMAIL PROTECTED]






--
Mark Nottingham
[EMAIL PROTECTED]





[squid-users] httpd_accel_with_proxy (again)

2006-09-09 Thread Mark Nottingham
I've seen a few questions WRT httpd_accel_with_proxy and 2.6, but  
they seem to be centred around interception proxying.


Does 2.6 have the ability to replicate the original purpose of the  
option -- running a proxy and an accelerator on the same http_port?


I've tried a few different combinations of likely http_port and  
cache_peer lines, sometimes with a sprinkling of always_direct, but  
with no luck.


(yes, I know this isn't a recommended configuration...)

Cheers,


--
Mark Nottingham
[EMAIL PROTECTED]





Re: [squid-users] Large Files

2006-09-07 Thread Mark Nottingham
Lighttpd uses it... Is it just that it would require a substantial  
redesign? (he says, completely ignorant of the internals...)


On 2006/09/06, at 6:51 PM, Henrik Nordstrom wrote:


   - How does sendfile support in 2.6 affect this?


It doesn't. Not really usable for Squid. Using sendfile outside one
thread/process per request designs is not trivial. So it works quite
nicely for most traditional servers, but not that well for event loop
based ones..


--
Mark Nottingham
[EMAIL PROTECTED]





Re: [squid-users] delivering stale content while fetching fresh

2006-09-07 Thread Mark Nottingham

On 2006/09/05, at 2:33 PM, Henrik Nordstrom wrote:


The drawback of enabling collapsed_forwarding is that concurrent
requests for uncachable content will get delayed a bit until Squid sees
that the response can not be cached (response headers required), at
which point all pending collapsed queries for that URL will all start a
new request each..


I'm guessing that's the full URL. It would be nice if there were an  
option to ignore the query string for this purpose;


E.g., if Squid sees
  http://example.com/search?q=foo
and finds it's uncacheable, it would then stop collapsing other  
requests with the same base, e.g.,

  http://example.com/search?q=bar

Even better, a response cache-control extension could control this...

Cheers,

--
Mark Nottingham
[EMAIL PROTECTED]





Re: [squid-users] delivering stale content while fetching fresh

2006-09-07 Thread Mark Nottingham


On 2006/09/07, at 4:33 PM, Henrik Nordstrom wrote:

In accelerator setups, which this is primarily targeted for, I don't think
it's that hard to add some rules telling which query URLs are cachable
and which are not if you want to collapse requests for some query URLs.


Fair enough, good point.


Even better, a response cache-control extension could control this...


Personally I think that only complicates matters. If doing this
speculation about URL relationships then I suspect the results are
sufficiently good deducing it automatically. Adding a new response
directive to hint about this is only relevant in cases where the same
base URL sometimes is cachable, sometimes not, and having it
automatically toggle based on what was last seen is probably optimal as
it may be a bit hard for the server to guess what the next request
pattern will look like..


My goal was more to move configuration off the box and into the hands  
of the server, which is useful for shared accelerators. Of course,  
there are other ways to achieve that; having response headers for one  
object affect another's handling is useful, but does add a lot of  
complexity.


Note: Same reasoning can be made about file extensions, directories  
etc.

Question is how far to go into this guessing of likelihood that the
response will be cachable.


Agreed. I was thinking more about a general mechanism using a template;
http://www.ietf.org/internet-drafts/draft-nottingham-http-link-header-00.txt


Cheers,

--
Mark Nottingham
[EMAIL PROTECTED]





[squid-users] Large Files

2006-09-01 Thread Mark Nottingham
I'd appreciate some enlightenment as to how Squid handles large files  
WRT memory and disk.


In particular;
  - squid.conf says that memory is used for in-transit objects.  
What exactly is kept in memory for in-transit objects; just metadata,  
or the whole thing?
  - if something is in memory cache, does it get copied when it is  
requested (because it is in-transit)?

  - How does sendfile support in 2.6 affect this?
  - Does anyone have any experiences they'd care to relate regarding  
memory-caching very large objects?


Thanks!

--
Mark Nottingham
[EMAIL PROTECTED]