Re: [squid-users] Squid 3.0 icap HIT

2010-11-08 Thread Steve Hill

On Sat, 6 Nov 2010, Luis Enrique Sanchez Arce wrote:


When Squid resolves the resource from the cache it does not send the answer to ICAP.
How can I change this behaviour?


You need a respmod_postcache hook, which unfortunately hasn't been 
implemented yet.  The workaround I use is to run two separate Squid 
instances - one of them does all the usual caching stuff and listens only 
on [::1]:3129.  A second Squid instance runs with caching turned off 
entirely, forwarding requests to [::1]:3129.  The second squid instance 
is configured to talk to the ICAP service.  All the clients connect to the 
second instance.


My configuration for the non-caching Squid instance that talks to the ICAP 
server is here:

https://subversion.opendium.net/trac/free/browser/thirdparty/squid/trunk/extra_sources/squid-nocache.conf
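
(A minimal hedged sketch of what the front, non-caching instance does - not the actual config at that URL; the ICAP URI, ports and service name are placeholders, and the syntax shown is the 3.1 form:)

http_port 3128
# no cache_dir, and never cache anything locally
cache deny all
# pass everything to the caching instance listening on [::1]:3129
cache_peer ::1 parent 3129 0 no-query no-digest default
never_direct allow all
# run responses through the ICAP service before they reach clients
icap_enable on
icap_service svc_resp respmod_precache bypass=0 icap://127.0.0.1:1344/respmod
adaptation_access svc_resp allow all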

This effectively provides a pre-cache reqmod hook (reqmod_precache) and an 
effectively post-cache respmod hook (implemented as respmod_precache on the 
non-caching instance).  The caching Squid would provide the same pre-cache 
reqmod hook (reqmod_precache) and a pre-cache respmod hook (respmod_precache), 
although I don't have a use for those myself.


It's a bit nasty, but it happens to work. :)

--

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messenger: xmpp:st...@opendium.com
   Email: st...@opendium.com
   Phone: sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-844-9791439 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-844-4844916 / sip:supp...@opendium.com



Re: [squid-users] Zero Penalty Hit on Squid 3.1.x

2010-11-08 Thread Amos Jeffries

On 08/11/10 03:58, Fabiano Carlos Heringer wrote:

Hey guys, is ZPH already enabled by default in Squid, or is it
necessary to enable something?

I've installed Squid with the --enable-zph-qos option, and put three


good.


options in my squid.conf to mark packets with different ToS, but I


different to what?
 each other? (all are 0x10)
 the global tcp_outgoing_tos override?
 or some marking happening after the packets exit Squid?


didn't get these packets marked. How can I enable this?
qos_flows local-hit=0x10
qos_flows sibling-hit=0x10
qos_flows parent-hit=0x10
qos_flows disable-preserve-miss

How can I use that? I guess my Squid is not working correctly with HIT
sites.


Please stop *guessing* that unknown things are Squid's fault.
Check the logs, trace the packets, find out what's actually happening.
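
(Not from the thread - a hedged way to check on the wire: capture traffic on Squid's outbound interface and inspect the TOS byte; the interface name and value are placeholders.)

# show only packets whose IP TOS byte is 0x10 (adjust interface and value to your setup)
tcpdump -n -v -i eth0 'ip[1] = 0x10'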

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: [squid-users] Unable to make Squid work as a transparent proxy (Squid 3.1.7, Linux Debian, WCCP2)

2010-11-08 Thread Leonardo
Hi Amos,

On Sun, Nov 7, 2010 at 5:12 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 http_port 3128 intercept

I have changed the config from http_port 3128 transparent to
http_port 3128 intercept, but I see no change in the behaviour.

 You will also need a separate port for the normal browser-configured and
 management requests. 3.1 will reject these if sent to a NAT interception
 port.

I don't get this.  Could you please be so kind as to explain, or point
me to a page in the documentation?
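
(For reference, a minimal sketch of the separation being described - the port numbers are arbitrary and not from the thread:)

http_port 3128 intercept   # NAT/WCCP-redirected traffic only
http_port 8080             # browser-configured and squidclient/manager requests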

 Also check the squid access.log. This will determine whether it is the ASA
 side or the Internet side of Squid which then needs to be tcpdumped for port
 80 to find out what's going on.

The file access.log is empty.

Thanks a lot for your help,

L.


[squid-users] Multiple NICs

2010-11-08 Thread Nick Cairncross
Hi list,

I'm looking at building a couple more 3.1.8 servers on RHEL 5.5 x86. The 
servers are nicely high-powered and have multiple Gb NICs (4 in total). My previous 
proxy server (Bluecoat) had two NICs. I understand that one was used to listen 
for requests and send to our upstream accelerator, and one was used if the 
equivalent 'send direct' was used, i.e. bypassing the accelerator. Can the list offer 
any thoughts or recommendations about the best way to utilise the NICs for best 
performance? Can I achieve the same outbound as above? Should I even bother 
trying to do this? User base would be about 700 users; I'm not caching. Simple 
ACLs but with two authentication helpers (depending on browser).
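
(Not from the thread - one possible shape for that split, as a hedged sketch; the addresses and ACL are placeholders, assuming each NIC has its own IP.)

# requests matching go_direct leave from the NIC at 10.0.1.1, everything else from 10.0.0.1
acl go_direct dstdomain .example.net
tcp_outgoing_address 10.0.1.1 go_direct
tcp_outgoing_address 10.0.0.1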

Cheers
Nick

The information contained in this e-mail is of a confidential nature and is 
intended only for the addressee.  If you are not the intended addressee, any 
disclosure, copying or distribution by you is prohibited and may be unlawful.  
Disclosure to any party other than the addressee, whether inadvertent or 
otherwise, is not intended to waive privilege or confidentiality.  Internet 
communications are not secure and therefore Conde Nast does not accept legal 
responsibility for the contents of this message.  Any views or opinions 
expressed are those of the author.

The Conde Nast Publications Ltd (No. 226900), Vogue House, Hanover Square, 
London W1S 1JU


[squid-users] unexplainable MISSes (squid 2.7stable9)

2010-11-08 Thread Adrian Dascalu

Hi,

I'm out of ideas trying to debug cache MISSes that I cannot explain. As a last 
resort I'm sending this problem to the list in the hope that you can come 
up with some explanation and/or cure for this.

the setup is: squid 2.7.STABLE9 on RHEL 5, configured as an accelerator with 12 parents and 1 
sibling (another Squid). Apache in front of Zope as the parents.

For the root page I send requests from the same browser. The page is supposed 
to stay in cache for 1h. I've seen it behave correctly once (at Squid 
startup); afterwards, if I keep requesting the page a few times, I will get a MISS 
long before the 3600s have passed.

I have checked and there is no PURGE for this URL in the meantime. There are 
some for other URLs deeper in the structure.

here's a request:

Host www.somewebsite.com

User-Agent Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.9) Gecko/20100330 
Fedora/3.5.9-1.fc11 Firefox/3.5.9

Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8

Accept-Language en-us,en;q=0.5

Accept-Encoding gzip,deflate

Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7

Keep-Alive 300

Connection keep-alive

Referer http://www.somewebsite.com/

Cookie 
__utma=173508663.4134765344646281700.1250060356.1271487209.1289208944.50; 
__utmb=173508663.59.10.1289208944; __utmc=173508663; 
__utmz=173508663.1289208944.50.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)

here's a HIT reply:

Date Mon, 08 Nov 2010 12:06:29 GMT

Server Zope/(Zope 2.9.10-final, python 2.4.3, linux2) ZServer/1.1 Plone/2.5.5

Content-Length 10131

Content-Language en

Content-Encoding gzip

Expires Fri, 10 Nov 2000 12:05:48 GMT

Vary Accept-Encoding,Accept,If-None-Match,X-Username

X-Caching-Rule-Id plone-containers

Cache-Control max-age=0, s-maxage=3600

Content-Type text/html;charset=utf-8

X-Header-Set-Id cache-in-proxy-1-hour

Age 40

X-Cache HIT from squid1.somewebsite.com

X-Cache-Lookup HIT from squid1.somewebsite.com:3128

Via 1.0 squid1.somewebsite.com:3128 (squid/2.7.STABLE9)

Keep-Alive timeout=8, max=100

Connection Keep-Alive

Long before the 3600s have passed, from the same browser, I would get a MISS. 
The request headers are IDENTICAL and there is no PURGE. What else might 
invalidate the cached object?


Thank you,
Adrian




Re: [squid-users] High cpu load with squid

2010-11-08 Thread Michał Prokopiuk
Hello,

On Mon, Nov 01, 2010 at 01:17:46, Amos Jeffries wrote:

 
 by soft raid you mean *software* raid? That is a disk IO killer for
 Squid.

For one day now Squid has worked perfectly; I think that was the point. Thank you.

-- 
Regards
Michał Prokopiuk
mich...@]sloneczko.net
http://www.sloneczko.net


Re: [squid-users] This cache is currently building its digest.

2010-11-08 Thread david robertson
 What is your digest rebuild time set to?
  your cache_dir and cache_mem sizes?
  and your negative_ttl setting?

digest_rebuild_period 60 minutes
negative_ttl 1 minute
backends use a cache_dir of 20gb (8mb cache_mem)
frontends use a cache_mem of 2gb (no cache_dir)


 What do you get back when making a manual digest fetch from one of the
 Squid?
  squidclient -h $squid-visible_hostname mgr:squid-internal-periodic/store_digest

I get 'Invalid URL' when trying to hit mgr:squid-internal-periodic/store_digest

I've since set up HTCP, and it seems to be working fine - however this
brings up one additional (unrelated to original problem) question:
Does 2.7 have support for forwarding HTCP CLRs?  If so, it doesn't
seem like it's working.

Thanks for the help, by the way.


 Squid Cache: Version 2.7.STABLE9
 configure options:  '--prefix=/squid2' '--enable-async-io'
 '--enable-icmp' '--enable-useragent-log' '--enable-snmp'
 '--enable-cache-digests' '--enable-follow-x-forwarded-for'
 '--enable-storeio=null,aufs' '--enable-removal-policies=heap,lru'
 '--with-maxfd=16384' '--enable-poll' '--disable-ident-lookups'
 '--enable-truncate' '--with-pthreads' 'CFLAGS=-DNUMS=60 -march=nocona
 -O3 -pipe -fomit-frame-pointer -funroll-loops -ffast-math
 -fno-exceptions'

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.2



Re: [squid-users] client_side_request.cc messages in cache.log

2010-11-08 Thread donovan jeffrey j

On Nov 5, 2010, at 7:37 PM, Amos Jeffries wrote:

 On 06/11/10 03:28, donovan jeffrey j wrote:
 
 On Nov 5, 2010, at 10:24 AM, Amos Jeffries wrote:
 
 On 06/11/10 03:20, donovan jeffrey j wrote:
 
 snip
 
 does this look right ?
 
 #redirect_program  /usr/local/bin/squidGuard -c 
 /usr/local/squidGuard/squidGuard.conf
 url_rewrite_program /usr/local/bin/squidGuard -c 
 /usr/local/squidGuard/squidGuard.conf
 #redirect_children 100
 url_rewrite_children 100
 
 
 Yes.
 
 is it okay to issue a -k reconfigure for this change, or is it better to wait 
 until not many users are accessing?
 -j
 
 reconfigure is enough. It is just a cosmetic config change at this point.
 
 Amos

Okay, I'm getting the same message under load.

2010/11/08 09:04:50| client_side_request.cc(1047) clientRedirectDone: 
redirecting body_pipe 0x2135be20*2 from request 0x14e14200 to 0x8ac0200
2010/11/08 09:04:56| client_side_request.cc(1047) clientRedirectDone: 
redirecting body_pipe 0x1fabb330*2 from request 0xc7a5e00 to 0xe05d000
2010/11/08 09:05:00| client_side_request.cc(1047) clientRedirectDone: 
redirecting body_pipe 0x2135be20*1 from request 0x8fa7200 to 0x127f7400
2010/11/08 09:05:06| client_side_request.cc(1047) clientRedirectDone: 
redirecting body_pipe 0x20606560*1 from request 0x11508200 to 0x11add800
2010/11/08 09:05:07| client_side_request.cc(1047) clientRedirectDone: 
redirecting body_pipe 0x21278360*1 from request 0xbcbc00 to 0x190d4a00

And yes, there is redirection going on, so it's not lying to me. ^^^ 
clientRedirectDone. Is this just a notification of the redirect, or is it an 
error?
-j




[squid-users] Squid is caching the 404 Error Msg...

2010-11-08 Thread karj

Dear Expert,

I'm using:
- Squid Cache: Version Squid Cache: Version 2.7.STABLE9

My problem is:

When I'm using
Cache-Control headers on the origin IIS ( post-check=3600, 
pre-check=43200 )


Squid is caching the 404 Error Msg.

In the first two or three requests I have
TCP_MISS:FIRST_UP_PARENT  --- squid goes back to the origin server

After a while I'm getting
404 926 TCP_NEGATIVE_HIT:NONE --- squid serves the 404 from its cache


I don't want to cache error messages.
Error messages should never be cached.
How can I do that?


thanks in advance


[squid-users] Re: Bandwidth split?

2010-11-08 Thread J Webster
I have put in some controls for downloading files like ISO, MP3, etc., but I 
would like to limit the connection per IP address.


--
From: J Webster webster_j...@hotmail.com
Sent: Sunday, November 07, 2010 9:18 PM
To: squid-users@squid-cache.org
Subject: Bandwidth split?

It is becoming apparent that some users are hogging the bandwidth on the 
server by downloading videos instead of streaming them.

Any idea how I can restrict this?
I would like to keep the server on unlimited downloads but split the 
bandwidth at any one time between the users - I figured that this was 
shared automatically, but it seems anyone downloading a lot gets more use 
of the bandwidth? 




[squid-users] Re: Bandwidth split?

2010-11-08 Thread Chad Naugle
Use delay_pools ...

-
Chad E. Naugle
Tech Support II, x. 7981
Travel Impressions, Ltd.
 


 J Webster webster_j...@hotmail.com 11/8/2010 9:46 AM 
I have put in some controls for downloading files like iso, mp3 etc but
I 
would like to limit the connection per ip address?

--
From: J Webster webster_j...@hotmail.com
Sent: Sunday, November 07, 2010 9:18 PM
To: squid-users@squid-cache.org
Subject: Bandwidth split?

 It is becoming apparent that some users are hogging the bandwidth on
the 
 server by downloading videos instead of streaming them.
 Any idea on how I can restrict this?
 I would like to keep the server as unlimited downloads but split the

 bandwidth at any one time between the users - I figured that this was

 shared automatically but it seems anyone downloading a lot gets more
use 
 of the bandwidth? 



Travel Impressions made the following annotations
-
This message and any attachments are solely for the intended recipient
and may contain confidential or privileged information.  If you are not
the intended recipient, any disclosure, copying, use, or distribution of
the information included in this message and any attachments is
prohibited.  If you have received this communication in error, please
notify us by reply e-mail and immediately and permanently delete this
message and any attachments.
Thank you.


[squid-users] Re: Bandwidth split?

2010-11-08 Thread Chad Naugle
Anyway, I apologize for the short response, I was busy on the phone.  I
would research delay_pools and try to figure out / tweak your config to
meet your needs.  It's not a really straightforward config, but that's
because it is very flexible in how users are limited.  The only thing
that it does not do is control uploading / POST requests.
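
(A rough, hedged starting point for the per-IP-address limit asked about above; all numbers are placeholders, not from the thread.)

# one class-2 pool: unlimited aggregate, each client IP gets its own bucket
delay_pools 1
delay_class 1 2
# aggregate restore/max unlimited; per-IP bucket refills at 131072 bytes/s (~1 Mbit/s), 262144-byte burst
delay_parameters 1 -1/-1 131072/262144
delay_access 1 allow all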

-
Chad E. Naugle
Tech Support II, x. 7981
Travel Impressions, Ltd.
 


 Chad Naugle chad.nau...@travimp.com 11/8/2010 9:50 AM 
Use delay_pools ...

-
Chad E. Naugle
Tech Support II, x. 7981
Travel Impressions, Ltd.



 J Webster webster_j...@hotmail.com 11/8/2010 9:46 AM 
I have put in some controls for downloading files like iso, mp3 etc
but
I 
would like to limit the connection per ip address?

--
From: J Webster webster_j...@hotmail.com
Sent: Sunday, November 07, 2010 9:18 PM
To: squid-users@squid-cache.org
Subject: Bandwidth split?

 It is becoming apparent that some users are hogging the bandwidth on
the 
 server by downloading videos instead of streaming them.
 Any idea on how I can restrict this?
 I would like to keep the server as unlimited downloads but split the

 bandwidth at any one time between the users - I figured that this
was

 shared automatically but it seems anyone downloading a lot gets more
use 
 of the bandwidth? 





Re: [squid-users] Re: Bandwidth split?

2010-11-08 Thread J Webster

I have done this but I am not sure if it will pick up the ncsa users.
This should restrict the max bandwidth for any one user to 1024 Kbit/s (1 Mbit/s)?

acl magic_words1 url_regex -i 192.168
acl magic_words2 url_regex -i ftp .exe .mp3 .vqf .tar.gz .gz .rpm .zip .rar 
.avi .mpeg .mpe .mpg .qt .ram .rm .iso .raw .wav .mov

delay_pools 2
delay_class 1 2
delay_parameters 1 -1/-1 -1/-1
delay_access 1 allow magic_words1
delay_class 2 2
delay_parameters 2 5000/15 5000/12
delay_access 2 allow magic_words2

acl restuser proxy_auth ncsa_users
delay_class 1 1
# 256 Kbit/s fill rate, 1024 Kbit/s reserve
delay_parameters 1 32000/128000
delay_access 1 allow restuser
delay_access 1 deny all


--
From: Chad Naugle chad.nau...@travimp.com
Sent: Monday, November 08, 2010 3:57 PM
To: J Webster webster_j...@hotmail.com; squid-users@squid-cache.org; 
Chad Naugle chad.nau...@travimp.com

Subject: [squid-users] Re: Bandwidth split?


Anyway, I apologize for the short response, I was busy on the phone.  I
would research delay_pools and try to figure out / tweak your config to
meet your needs.  It's not a real straight forward config, but that's
because it is very flexible in how users are limited.  The only thing
that it does not do is control uploading / POST requests.






Re: [squid-users] Re: Bandwidth split?

2010-11-08 Thread Chad Naugle
Your problem here is that you are trying to layer delay_pool 1 twice, so
I corrected the config below, adding a third delay_pool for your
ncsa_users.
 
-
Chad E. Naugle
Tech Support II, x. 7981
Travel Impressions, Ltd.
 




 J Webster webster_j...@hotmail.com 11/8/2010 10:06 AM 
I have done this but I am not sure if it will pick up the ncsa users.
This should restrict max bandwidth for any 1 user to 1024 (1Mbps)?

acl magic_words1 url_regex -i 192.168
acl magic_words2 url_regex -i ftp .exe .mp3 .vqf .tar.gz .gz .rpm .zip
.rar 
.avi .mpeg .mpe .mpg .qt .ram .rm .iso .raw .wav .mov
delay_pools 3
delay_class 1 2
delay_parameters 1 -1/-1 -1/-1
delay_access 1 allow magic_words1
delay_class 2 2
delay_parameters 2 5000/15 5000/12
delay_access 2 allow magic_words2

acl restuser proxy_auth ncsa_users
delay_class 3 1
# 256 Kbit/s fill rate, 1024 Kbit/s reserve
delay_parameters 3 32000/128000
delay_access 3 allow restuser
delay_access 3 deny all


--
From: Chad Naugle chad.nau...@travimp.com
Sent: Monday, November 08, 2010 3:57 PM
To: J Webster webster_j...@hotmail.com;
squid-users@squid-cache.org; 
Chad Naugle chad.nau...@travimp.com
Subject: [squid-users] Re: Bandwidth split?

 Anyway, I apologize for the short response, I was busy on the phone. 
I
 would research delay_pools and try to figure out / tweak your config
to
 meet your needs.  It's not a real straight forward config, but
that's
 because it is very flexible in how users are limited.  The only
thing
 that it does not do is control uploading / POST requests.







Re: [squid-users] Re: Bandwidth split?

2010-11-08 Thread Chad Naugle
I also forgot to mention that you also forgot to deny all on the first two
pools.
 
-
Chad E. Naugle
Tech Support II, x. 7981
Travel Impressions, Ltd.
 




 Chad Naugle 11/8/2010 10:12 AM 
Your problem here is that you are trying to layer delay_pool 1 twice,
so I corrected the config below adding a third delay_pool for your
ncsa_users.

-
Chad E. Naugle
Tech Support II, x. 7981
Travel Impressions, Ltd.



 J Webster webster_j...@hotmail.com 11/8/2010 10:06 AM 
I have done this but I am not sure if it will pick up the ncsa users.
This should restrict max bandwidth for any 1 user to 1024 (1Mbps)?

acl magic_words1 url_regex -i 192.168
acl magic_words2 url_regex -i ftp .exe .mp3 .vqf .tar.gz .gz .rpm .zip
.rar 
.avi .mpeg .mpe .mpg .qt .ram .rm .iso .raw .wav .mov
delay_pools 3
delay_class 1 2
delay_parameters 1 -1/-1 -1/-1
delay_access 1 allow magic_words1
delay_access 1 deny all
delay_class 2 2
delay_parameters 2 5000/15 5000/12
delay_access 2 allow magic_words2
delay_access 2 deny all

acl restuser proxy_auth ncsa_users
delay_class 3 1
# 256 Kbit/s fill rate, 1024 Kbit/s reserve
delay_parameters 3 32000/128000
delay_access 3 allow restuser
delay_access 3 deny all


--
From: Chad Naugle chad.nau...@travimp.com
Sent: Monday, November 08, 2010 3:57 PM
To: J Webster webster_j...@hotmail.com;
squid-users@squid-cache.org; 
Chad Naugle chad.nau...@travimp.com
Subject: [squid-users] Re: Bandwidth split?

 Anyway, I apologize for the short response, I was busy on the phone.

I
 would research delay_pools and try to figure out / tweak your config
to
 meet your needs.  It's not a real straight forward config, but
that's
 because it is very flexible in how users are limited.  The only
thing
 that it does not do is control uploading / POST requests.







Re: [squid-users] Re: Bandwidth split?

2010-11-08 Thread J Webster

do I need to add this:
delay_access 2 deny all
delay_access 1 deny all
?

Also, what is the difference between fill rate and reserve?
I think I have a fill rate of 256, maybe I should increase this for watching 
video?


I am using iftop on the server, and users still seem to be connecting at 
more than 1 Mbit/s, so maybe it isn't picking up the ncsa users?





From: Chad Naugle
Sent: Monday, November 08, 2010 4:11 PM
To: J Webster ; squid-users@squid-cache.org
Subject: Re: [squid-users] Re: Bandwidth split?


Your problem here is that you are trying to layer delay_pool 1 twice, so I 
corrected the config below adding a third delay_pool for your ncsa_users.






Re: [squid-users] Re: Bandwidth split?

2010-11-08 Thread Chad Naugle
Yes, sorry - at work.  See below.  I am not 100% on fill-rate versus the
other numbers, so I'll leave that up for someone else to reply.  I would
just tinker with the values until you get acceptable results in the
meantime.

acl magic_words1 url_regex -i 192.168
acl magic_words2 url_regex -i ftp .exe .mp3 .vqf .tar.gz .gz .rpm .zip
.rar .avi .mpeg .mpe .mpg .qt .ram .rm .iso .raw .wav .mov
acl restuser proxy_auth ncsa_users

delay_pools 3
delay_class 1 2
delay_parameters 1 -1/-1 -1/-1
delay_access 1 allow magic_words1
delay_access 1 deny all
delay_class 2 2
delay_parameters 2 5000/15 5000/12
delay_access 2 allow magic_words2
delay_access 2 deny all
delay_class 3 1
# 256 Kbit/s fill rate, 1024 Kbit/s reserve
delay_parameters 3 32000/128000
delay_access 3 allow restuser
delay_access 3 deny all
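
(A hedged aside, not from the thread: delay_parameters values are bytes, not bits, in restore/maximum form - restore is how many bytes per second refill the bucket, maximum is the bucket size that allows a burst. Read that way, the pool above works out roughly as:)

# 32000 bytes/s restore  ~= 256 kbit/s sustained per user
# 128000 byte maximum    ~= 1 Mbit of burst before throttling back to the restore rate
delay_parameters 3 32000/128000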


-
Chad E. Naugle
Tech Support II, x. 7981
Travel Impressions, Ltd.
 


 J Webster webster_j...@hotmail.com 11/8/2010 10:28 AM 
do I need to add this:
delay_access 2 deny all
delay_access 1 deny all
?

Also, what is the difference between fill rate and reserve?
I think I have a fill rate of 256, maybe I should increase this for
watching 
video?

I am using iftop on the server, and users still seem to be connecting
at 
more than 1Mbps so maybe it isn;t picking up the ncsa users?




From: Chad Naugle
Sent: Monday, November 08, 2010 4:11 PM
To: J Webster ; squid-users@squid-cache.org 
Subject: Re: [squid-users] Re: Bandwidth split?


Your problem here is that you are trying to layer delay_pool 1 twice,
so I 
corrected the config below adding a third delay_pool for your
ncsa_users.







Re: [squid-users] Re: Bandwidth split?

2010-11-08 Thread J Webster

Thanks.
I still have users connecting at around 1.91 Mbit/s and faster on the server, so 
the delay pools don't seem to be working.

The only thing I can think of is that it's not registering the ncsa users?
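
(A hedged guess, not confirmed in the thread: "acl restuser proxy_auth ncsa_users" matches only a user literally named ncsa_users. To match any authenticated user, the ACL usually needs REQUIRED, or a quoted path to a file of usernames:)

acl restuser proxy_auth REQUIRED
# or, if ncsa_users is meant to be a list of names in a file (path is a placeholder):
# acl restuser proxy_auth "/etc/squid/ncsa_users"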

--
From: Chad Naugle chad.nau...@travimp.com
Sent: Monday, November 08, 2010 4:36 PM
To: J Webster webster_j...@hotmail.com; squid-users@squid-cache.org
Subject: Re: [squid-users] Re: Bandwidth split?


Yes sorry, at work.  See Below.  I am not 100% on fill-rate versus the
other numbers, so I'll leave that up for someone else to reply.  I would
just tinker with the values until you get acceptable results until
then.

acl magic_words1 url_regex -i 192.168
acl magic_words2 url_regex -i ftp .exe .mp3 .vqf .tar.gz .gz .rpm .zip
.rar .avi .mpeg .mpe .mpg .qt .ram .rm .iso .raw .wav .mov
acl restuser proxy_auth ncsa_users

delay_pools 3
delay_class 1 2
delay_parameters 1 -1/-1 -1/-1
delay_access 1 allow magic_words1
delay_access 1 deny all
delay_class 2 2
delay_parameters 2 5000/15 5000/12
delay_access 2 allow magic_words2
delay_access 2 deny all
delay_class 3 1
# 256 Kbit/s fill rate, 1024 Kbit/s reserve
delay_parameters 3 32000/128000
delay_access 3 allow restuser
delay_access 3 deny all






[squid-users] Re: Access control problem

2010-11-08 Thread mrmmm

Thanks for your response. For example, I have the file with the following
entries:

.site1.com
.site2.com
123.123.123.123
234.234.234.234

Type: Web Server Hostname

And still I am able to browse all these sites from behind the proxy...
Anything I might be missing?

Thank you,
-- 
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Access-control-problem-tp2332220p3032226.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] unexplainable MISSes (squid 2.7stable9)

2010-11-08 Thread Adrian Dascalu
I've done some new tests and found out that caching a URL that has a 
different X-Username header will invalidate the other version of that 
object.


Is this the intended behaviour? I mean, the Vary header will just signal that 
there is a new version, so everything else is discarded? If so, is there 
a method for caching multiple versions of the same URL?


Adrian

On 11/08/2010 02:33 PM, Adrian Dascalu wrote:

snip




[squid-users] RE: Multiple NICs

2010-11-08 Thread Tóth Tibor Péter
Hi!

I wouldn't think you need multiple network cards to use Squid, unless your 
internet connection is at or above 1 Gbit/s. If your ISP provides you less, I 
would think a regular gigabit NIC would do the job.
Your hard drives probably won't be fast enough to cache data for multiple NICs 
anyway.

We have over 1000 clients, and in the previous setup we used, we had only one gigabit 
network interface on our Squid. It was sitting in the DMZ, and the connections 
went through it.
It was fine. We had no connection problems.

Tibby

From: Nick Cairncross [nick.cairncr...@condenast.co.uk]
Sent: 8 November 2010 12:13
To: Squid Users
Subject: [squid-users] Multiple NICs

snip

Re: [squid-users] Squid is caching the 404 Error Msg...

2010-11-08 Thread david robertson
This is what you're looking for:

#  TAG: negative_ttl   time-units
#   Time-to-Live (TTL) for failed requests.  Certain types of
#   failures (such as connection refused and 404 Not Found) are
#   negatively-cached for a configurable amount of time.  The
#   default is 5 minutes.  Note that this is different from
#   negative caching of DNS lookups.
#
#Default:
# negative_ttl 5 minutes

Just set it to 0 and it won't cache 404s.


2010/11/8 karj gkaragianni...@dolnet.gr:
 Dear Expert,

 I'm using:
 - Squid Cache: Version Squid Cache: Version 2.7.STABLE9

 My Problem  is.

 When i'm using
 Cache-Control headers in the origin iis ( post-check=3600, pre-check=43200 )

 Squid is caching the 404 Error Msg.

 In the first two or thre requests i have
 TCP_MISS:FIRST_UP_PARENT  --- squid goes back to origin server

 After while i'm getting
 404 926 TCP_NEGATIVE_HIT:NONE --- squid servers 404 from it's cache


 I don't want to cache  Error Msgs.
 Error Msgs should never be cached.
 How can I do that.?


 thanks in advance



Re: [squid-users] possible bug on 2.7S9

2010-11-08 Thread Leonardo Rodrigues


On 07/11/2010 01:45, Amos Jeffries wrote:


Indicating that your NAT rules are incorrect.

The above line is simply forcing Squid to send from 127.0.0.1. It 
would only have any effect if your NAT intercept rules were forcing 
all localhost traffic back into Squid.


Removing the above line may mean that you are simply shifting the 
problem from your Squid to some web server elsewhere. Your Squid will 
be passing it requests for "http://localhost:8080/...". The upside is 
that at least it will not be a DoS flood when it arrives there.
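
(Not from the thread - a rough sketch of the kind of NAT rule hygiene being described; the interface, port and user names are placeholders.)

# intercept only traffic arriving from the LAN interface
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 3128
# never loop the proxy's own outgoing requests back into itself
iptables -t nat -A OUTPUT -p tcp --dport 80 -m owner --uid-owner squid -j ACCEPT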



Hi Amos,

    Thanks for your tips; they made me realize that I was doing 
some 'dangerous' configuration. I have just adjusted things here: 
changed the transparent port and made the http_access 
rules a little more secure to protect localhost access.


Thanks !


--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] unexplainable MISSes (squid 2.7stable9)

2010-11-08 Thread Adrian Dascalu

So, the problem boils down to:

why is a cached version of the page at URL X invalidated by an 
access to the same URL with a different value in one of the headers 
listed in the Vary header? The store log consistently logs:


Mon 08 Nov 2010 11:27:50 PM CET RELEASE 200 text/html GET 
http://127.0.0.1:3128/VirtualHostBase/http/www.somewebsite.com:80/www/SITE/VirtualHostRoot/
Mon 08 Nov 2010 11:27:50 PM CET SWAPOUT 200 text/html GET 
http://127.0.0.1:3128/VirtualHostBase/http/www.somewebsite.com:80/www/SITE/VirtualHostRoot/
Mon 08 Nov 2010 11:27:50 PM CET SWAPOUT 200 x-squid-internal/vary GET 
http://127.0.0.1:3128/VirtualHostBase/http/www.somewebsite.com:80/www/SITE/VirtualHostRoot/
Mon 08 Nov 2010 11:27:50 PM CET RELEASE 200 x-squid-internal/vary GET 
http://127.0.0.1:3128/VirtualHostBase/http/www.somewebsite.com:80/www/SITE/VirtualHostRoot/


each time I make a request that has a different value in one of the 
headers listed in Vary.



On 11/08/2010 06:33 PM, Adrian Dascalu wrote:

Done some new tests and I found out that caching an URL that has a
different X-Username header will invalidate the other version of that
object.

Is this the intended behaviour? I mean, Vary header will just inform
there is a new version so everything else is discarded? If so, is there
a method for cacheing multiple versions of the same URL ?

Adrian

On 11/08/2010 02:33 PM, Adrian Dascalu wrote:
   

snip



 
   




[squid-users] Windows Updates, YouTube and WoW

2010-11-08 Thread Kevin Wilcox
Hi all.

This is currently a test environment so making changes isn't an issue.

Initially I had issues with hosts updating any flavour of Microsoft
Windows but solved that with the included squid.conf. I'm even
getting real cache hits on some of the Windows XP and Windows 7
updates in my test lab, so the amount of effort I've put in so far is
pretty well justified. Since the target audience won't have access to
a local WSUS, I can pretty well count it as a win, even if the rest of
this email becomes moot.

Then came the big issue - World of Warcraft installation via the
downloaded client. Things pretty well fell apart. It would install up
to 20% and crash. Then it would install up to 25% and crash. Then 30%
and crash. It did that, crashing further in the process each time,
until it finally installed the base game (roughly 15 crashes). Due to
clamping down on P2P I disabled that update mechanism and told the
downloader to use only direct download. I'm averaging 0.00KB/s with
bursts from 2KB/s to 64 KB/s. If I take squid out of the line I get
speeds between 1 and 3 MB/s+ and things just work - but that sort of
defeats the purpose in having a device that will cache
non-authenticated user content. Having one user download a new 1 GB
patch, and it being available locally for the other couple of hundred,
would be ideal. Still, it isn't a deal breaker.

I understand that it could be related to the partial content reply for
the request and I understand that it could also be related to the
URL/foo? style request. Is the best approach to just automatically
pass anything for blizzard.com/worldofwarcraft.com straight through
and not attempt to cache the updates? I've seen some comments where
using

acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY

will cause those requests to not be cached (and I understand why that
is) but I'm wondering if I should just ignore them altogether,
especially given the third item - YouTube.

The target population for this cache is rather large. Typically,
YouTube is a huge culprit for bandwidth usage, and a lot of the time
it's hundreds of people hitting the same videos. I've been looking at
how to cache those, and it seems it's necessary either to not use
the above ACL or to set up another ACL that specifically allows
YouTube.

All of those comments and workarounds have been regarding the 2.x set
of squid, though. I'm curious if there is a cleaner way to go about
caching youtube (or, perhaps I should say, video.google.com) in 3.1.x,
or if it's possible to cache things like the WoW updates now? We're
looking to experiment with some proprietary devices that claim to be
able to cache Windows Updates, YouTube/Google Video, etc., but I'm
wondering if my woes are just because of my inexperience with squid or
if they're just that far ahead in terms of functionality?

Any hints, tips or suggestions would be more than welcome!

Relevant version information and configuration files:

fergie# squid -v
Squid Cache: Version 3.1.9
configure options:  '--with-default-user=squid'
'--bindir=/usr/local/sbin' '--sbindir=/usr/local/sbin'
'--datadir=/usr/local/etc/squid'
'--libexecdir=/usr/local/libexec/squid' '--localstatedir=/var/squid'
'--sysconfdir=/usr/local/etc/squid' '--with-logdir=/var/log/squid'
'--with-pidfile=/var/run/squid/squid.pid'
'--enable-removal-policies=lru heap' '--disable-linux-netfilter'
'--disable-linux-tproxy' '--disable-epoll' '--disable-translation'
'--enable-auth=basic digest negotiate ntlm'
'--enable-basic-auth-helpers=DB NCSA PAM MSNT SMB squid_radius_auth'
'--enable-digest-auth-helpers=password'
'--enable-external-acl-helpers=ip_user session unix_group
wbinfo_group' '--enable-ntlm-auth-helpers=smb_lm' '--without-pthreads'
'--enable-storeio=ufs diskd' '--enable-disk-io=AIO Blocking
DiskDaemon' '--disable-ipv6' '--disable-snmp' '--disable-htcp'
'--disable-wccp' '--enable-pf-transparent' '--disable-ecap'
'--disable-loadable-modules' '--enable-kqueue' '--with-large-files'
'--prefix=/usr/local' '--mandir=/usr/local/man'
'--infodir=/usr/local/info/' '--build=amd64-portbld-freebsd8.1'
'build_alias=amd64-portbld-freebsd8.1' 'CC=cc' 'CFLAGS=-O2 -pipe
-fno-strict-aliasing' 'LDFLAGS=' 'CPPFLAGS=' 'CXX=c++' 'CXXFLAGS=-O2
-pipe -fno-strict-aliasing' 'CPP=cpp'
--with-squid=/usr/ports/www/squid31/work/squid-3.1.9
--enable-ltdl-convenience

It's running in transparent mode on

fergie# uname -m -r -s -v
FreeBSD 8.1-RELEASE FreeBSD 8.1-RELEASE #0: Mon Jul 19 02:36:49 UTC
2010 r...@mason.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC
amd64

which is basically a vanilla FreeBSD 8.1 install with squid installed
from ports.

My squid.conf:


###
#
# Recommended minimum configuration:
#

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed

Re: [squid-users] This cache is currently building its digest.

2010-11-08 Thread Amos Jeffries
On Mon, 8 Nov 2010 09:02:37 -0500, david robertson d...@nevernet.com
wrote:
 What is your digest rebuild time set to?
  your cache_dir and cache_mem sizes?
  and your negative_ttl setting?
 
 digest_rebuild_period 60 minutes
 negative_ttl 1 minute
 backends use a cache_dir of 20gb (8mb cache_mem)
 frontends use a cache_mem of 2gb (no cache_dir)
 

Okay, the rebuild period is long enough that there should be windows where it has completed.

negative_ttl is a worry; it will amplify any 4xx and 5xx errors into a
short-term DoS for all clients. The proper setting is 0 seconds.

 
 What do you get back when making a manual digest fetch from one of the
 Squid?
  squidclient -h $squid-visible_hostname mgr:squid-internal-periodic/store_digest
 
 I get 'Invalid URL' when trying to hit
 mgr:squid-internal-periodic/store_digest
 
 I've since set up HTCP, and it seems to be working fine - however this
 brings up one additional (unrelated to original problem) question:
 Does 2.7 have support for forwarding HTCP CLR's?  If so, it doesn't
 seem like it's working.

IIRC it does, but may require additional htcp_clr_access configuration to
permit it to be acted on.
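
(A hedged sketch of what that extra configuration might look like; the ACL name and addresses are placeholders.)

acl frontends src 192.0.2.10 192.0.2.11
htcp_clr_access allow frontends
htcp_clr_access deny all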

Amos


Re: [squid-users] Squid is caching the 404 Error Msg...

2010-11-08 Thread Amos Jeffries
On Mon, 08 Nov 2010 16:15:24 +0200, karj gkaragianni...@dolnet.gr wrote:
 Dear Expert,
 
 I'm using:
 - Squid Cache: Version Squid Cache: Version 2.7.STABLE9
 
 My Problem  is.
 
 When i'm using
 Cache-Control headers in the origin iis ( post-check=3600, 
 pre-check=43200 )
 
 Squid is caching the 404 Error Msg.
 
 In the first two or thre requests i have
 TCP_MISS:FIRST_UP_PARENT  --- squid goes back to origin server
 
 After while i'm getting
 404 926 TCP_NEGATIVE_HIT:NONE --- squid servers 404 from it's cache
 

Check that you have negative_ttl 0 seconds configured. 2.x has a wrong
default of some minutes.

 
 I don't want to cache  Error Msgs.
 Error Msgs should never be cached.
 How can I do that.?

If the above setting is already correctly at 0 seconds, check the headers
provided by the web server.
4xx messages CAN be cached in a lot of cases, but it should be up to the
origin site to specify correctly when.

Amos


Re: [squid-users] unexplainable MISSes (squid 2.7stable9)

2010-11-08 Thread Amos Jeffries
On Mon, 08 Nov 2010 14:33:31 +0200, Adrian Dascalu
adrian.dasc...@eea.europa.eu wrote:
 Hi,
 
 I'm out of ideeas trying to debug cache misses that I cannot explain. As
a
 last resort I'm sending this problem to the list with the hope that you
 could come up with some explanation and/or cure for this.
 
 the setup is: squid 2.7stable9 on RHEL 5, configured as accel, 12
parents
 1 sibling (another squid). Apache in front zope as parents.
 
 For the root page I send requests from the same browser. The page is
 supposed to stay in cache for 1h. I've seen it behaving correctly one
time
 (at squid startup) afterwards if i keep requesting the page a few times
I
 will get a MISS long before the 3600s have passed.
 
 I have checked and there is no PURGE for this URL in the mean time.
There
 are some for other URL's deeper in the structure.

Does not need to be an explicit PURGE. Merely a required alternative ETag,
or a force-reload request.

Can you please provide a new set of headers for a given object, initially
when it's a HIT and afterwards when it's a MISS.
NP: a full new HIT set is required to correlate exact times and tags; the
ones below are too old now to be reliably compared to any MISS.


 
 here's a request:
 
 Host www.somewebsite.com
 
 User-Agent Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.9)
 Gecko/20100330 Fedora/3.5.9-1.fc11

Firefox/3.5.9Accepttext/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
 
 Accept-Language en-us,en;q=0.5
 
 Accept-Encoding gzip,deflate
 
 Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7
 
 Keep-Alive 300
 
 Connection keep-alive
 
 Referer http://www.somewebsite.com/
 
 Cookie

__utma=173508663.4134765344646281700.1250060356.1271487209.1289208944.50;
 __utmb=173508663.59.10.1289208944; __utmc=173508663;

__utmz=173508663.1289208944.50.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)
 
 here's a HIT reply:
 
 Date Mon, 08 Nov 2010 12:06:29 GMT
 
 Server Zope/(Zope 2.9.10-final, python 2.4.3, linux2) ZServer/1.1
 Plone/2.5.5
 
 Content-Length 10131
 
 Content-Language en
 
 Content-Encoding gzip
 
 Expires Fri, 10 Nov 2000 12:05:48 GMT

Hmm, object is cacheable for *2 days*. Not the hour you said.

 
 Vary Accept-Encoding,Accept,If-None-Match,X-Username
 
 X-Caching-Rule-Id plone-containers
 
 Cache-Control max-age=0, s-maxage=3600

Irrelevant, Expires: header overrides these.

 
 Content-Type text/html;charset=utf-8
 
 X-Header-Set-Id cache-in-proxy-1-hour
 
 Age 40
 
 X-Cache HIT from squid1.somewebsite.com
 
 X-Cache-Lookup HIT from squid1.somewebsite.com:3128
 
 Via 1.0 squid1.somewebsite.com:3128 (squid/2.7.STABLE9)
 
 Keep-Alive timeout=8, max=100
 
 Connection Keep-Alive
 
 Long before the 3600s have passed ,from the same browser, I would get a
 MISS. The request headers are IDENTICAL and there is no PURGE. What else
 might invalidate the cached object?
 
 
 Thank you,
 Adrian


Re: [squid-users] unexplainable MISSes (squid 2.7stable9)

2010-11-08 Thread Amos Jeffries
On Mon, 08 Nov 2010 18:33:37 +0200, Adrian Dascalu
adrian.dasc...@eea.europa.eu wrote:
 Done some new tests and I found out that caching an URL that has a 
 different X-Username header will invalidate the other version of that 
 object.

Aha, you can disregard my earlier reply.

 
 Is this the intended behaviour?

Yes.

 I mean, Vary header will just inform 
 there is a new version so everything else is discarded?

No, the Vary header informs Squid whether the cached object may be served or a
second copy fetched. The headers of the new reply inform whether to invalidate
the old copy.

 If so, is there 
 a method for cacheing multiple versions of the same URL ?

Unique ETag headers are required for that.
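
(An illustration, not from the thread: for variants of one URL to coexist in the cache, each variant's reply needs its own validator, e.g. hypothetical response headers such as:)

Vary: Accept-Encoding, X-Username
ETag: "homepage-en-gzip-anonymous"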

Amos


Re: [squid-users] Windows Updates, YouTube and WoW

2010-11-08 Thread Amos Jeffries
On Mon, 8 Nov 2010 18:32:52 -0500, Kevin Wilcox kevin.wil...@gmail.com
wrote:
 Hi all.
 
 This is currently a test environment so making changes isn't an issue.
 
 Initially I had issues with hosts updating any flavour of Microsoft
 Windows but solved that with the included squid.conf. I'm even
 getting real cache hits on some of the Windows XP and Windows 7
 updates in my test lab, so the amount of effort I've put in so far is
 pretty well justified. Since the target audience won't have access to
 a local WSUS, I can pretty well count it as a win, even if the rest of
 this email becomes moot.
 
 Then came the big issue - World of Warcraft installation via the
 downloaded client. Things pretty well fell apart. It would install up
 to 20% and crash. Then it would install up to 25% and crash. Then 30%
 and crash. It did that, crashing further in the process each time,
 until it finally installed the base game (roughly 15 crashes). Due to
 clamping down on P2P I disabled that update mechanism and told the
 downloader to use only direct download. I'm averaging 0.00KB/s with
 bursts from 2KB/s to 64 KB/s. If I take squid out of the line I get
 speeds between 1 and 3 MB/s+ and things just work - but that sort of
 defeats the purpose in having a device that will cache
 non-authenticated user content. Having one user download a new 1 GB
 patch, and it being available locally for the other couple of hundred,
 would be ideal. Still, it isn't a deal breaker.
 
 I understand that it could be related to the partial content reply for
 the request and I understand that it could also be related to the
 URL/foo? style request. Is the best approach to just automatically
 pass anything for blizzard.com/worldofwarcraft.com straight through
 and not attempt to cache the updates? I've seen some comments where
 using
 
 acl QUERY urlpath_regex cgi-bin \?
 cache deny QUERY
 
 will cause those requests to not be cached (and I understand why that
 is) but I'm wondering if I should just ignore them altogether,
 especially given the third item - YouTube.

Yes, don't use that QUERY stuff. The dynamic URLs which are cacheable will
have expiry and control headers to make it happen. The others are caught
and discarded properly by the new default refresh_pattern for cgi-bin and
\?.
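
(For reference, the stock default being referred to looks like the following; worth confirming against your own squid.conf.default.)

refresh_pattern -i (/cgi-bin/|\?) 0 0% 0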

 
 The target population for this cache is rather large. Typically,
 youtube is a huge culprit for bandwidth usage and a lot of the times
 it's hundreds of people hitting the same videos. I've been looking at
 how to cache those and it seems like it's required to either not use
 the above ACL or it's to setup another ACL that specifically allows
 youtube.
 
 All of those comments and workarounds have been regarding the 2.x set
 of squid, though. I'm curious if there is a cleaner way to go about
 caching youtube (or, perhaps I should say, video.google.com) in 3.1.x,
 or if it's possible to cache things like the WoW updates now? We're
 looking to experiment with some proprietary devices that claim to be
 able to cache Windows Updates, YouTube/Google Video, etc., but I'm
 wondering if my woes are just because of my inexperience with squid or
 if they're just that far ahead in terms of functionality?


Caching YouTube still currently requires the storeurl feature of 2.7, which
has not been ported to 3.x.
There are embedded visitor details and timestamps of when the video was
requested in the YT URL, which cause the cache to fill up with large videos
at URLs which will never be re-requested. This actively prevents totally
unrelated web objects from using the cache space.

It is a good idea to prevent the YT videos from being stored at all unless
you can de-duplicate them.
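
(For the 2.7 storeurl route, a hedged outline only - the helper script path is a placeholder, the domains may need adjusting, and the URL-normalising logic inside the helper is the hard part:)

acl youtube dstdomain .youtube.com .googlevideo.com
storeurl_access allow youtube
storeurl_access deny all
storeurl_rewrite_program /usr/local/bin/yt_store_url.pl
storeurl_rewrite_children 5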

 
 Any hints, tips or suggestions would be more than welcome!
 
 Relevant version information and configuration files:
 
snip
 
 # Uncomment and adjust the following to add a disk cache directory.
 cache_dir ufs /var/squid/cache 175000 16 256
 
 # Cache Mem - ideal amount of RAM to use
 cache_mem 2048 MB
 
 # Maximum object size - default is 4MB, not nearly enough to be useful
 maximum_object_size 1024 MB
 
 # Maximum object size in memory - we have 4GB, we can handle larger
objects
 maximum_object_size_in_memory 512 MB

Um, no you have 2GB (cache_mem) in which to store these objects. All 4+ of
them.


Amos


Re: [squid-users] top reports twice memory as much as Total in mgr:mem

2010-11-08 Thread Kaiwang Chen
RES grows to 14.7GB. Looks like this patch does not fix the problem...

2010/10/26 Kaiwang Chen kaiwang.c...@gmail.com:
 Currently running two instances behind round-robin load-balanced DNS,
 one with the following patch (bug3068_mk2.patch with several tweaks to
 apply to 3.1.6), the other without. Hoping to get some verification.

 diff -Nur squid-3.1.6.orig/src/fs/coss/store_dir_coss.cc
 squid-3.1.6/src/fs/coss/store_dir_coss.cc
 --- squid-3.1.6.orig/src/fs/coss/store_dir_coss.cc      2010-08-01
 22:01:39.0 +0800
 +++ squid-3.1.6/src/fs/coss/store_dir_coss.cc   2010-10-25
 21:56:00.816496622 +0800
 @@ -996,8 +996,8 @@
  CossSwapDir::statfs(StoreEntry & sentry) const
  {
      storeAppendPrintf(&sentry, "\n");
 -    storeAppendPrintf(&sentry, "Maximum Size: %d KB\n", max_size);
 -    storeAppendPrintf(&sentry, "Current Size: %d KB\n", cur_size);
 +    storeAppendPrintf(&sentry, "Maximum Size: %lu KB\n", max_size);
 +    storeAppendPrintf(&sentry, "Current Size: %lu KB\n", cur_size);
      storeAppendPrintf(&sentry, "Percent Used: %0.2f%%\n",
                        100.0 * cur_size / max_size);
      storeAppendPrintf(&sentry, "Number of object collisions: %d\n",
  (int) numcollisions);
 @@ -1095,7 +1095,7 @@
  void
  CossSwapDir::dump(StoreEntry &entry)const
  {
 -    storeAppendPrintf(&entry, " %d", max_size >> 10);
 +    storeAppendPrintf(&entry, " %lu", max_size >> 10);
      dumpOptions(&entry);
  }

 diff -Nur squid-3.1.6.orig/src/fs/ufs/store_dir_ufs.cc
 squid-3.1.6/src/fs/ufs/store_dir_ufs.cc
 --- squid-3.1.6.orig/src/fs/ufs/store_dir_ufs.cc        2010-08-01
 22:01:39.0 +0800
 +++ squid-3.1.6/src/fs/ufs/store_dir_ufs.cc     2010-10-25
 22:26:58.629016115 +0800
 @@ -82,7 +82,7 @@

      /* just reconfigure it */
      if (reconfiguring) {
 -        if (size == max_size)
 +        if ((unsigned)size == max_size)
              debugs(3, 2, "Cache dir '" << path << "' size remains
 unchanged at " << size << " KB");
          else
              debugs(3, 1, "Cache dir '" << path << "' size changed to "
   << size << " KB");
 @@ -314,8 +314,8 @@
      int x;
      storeAppendPrintf(&sentry, "First level subdirectories: %d\n", l1);
      storeAppendPrintf(&sentry, "Second level subdirectories: %d\n", l2);
 -    storeAppendPrintf(&sentry, "Maximum Size: %d KB\n", max_size);
 -    storeAppendPrintf(&sentry, "Current Size: %d KB\n", cur_size);
 +    storeAppendPrintf(&sentry, "Maximum Size: %"PRIu64" KB\n", max_size);
 +    storeAppendPrintf(&sentry, "Current Size: %"PRIu64" KB\n", cur_size);
      storeAppendPrintf(&sentry, "Percent Used: %0.2f%%\n",
                        100.0 * cur_size / max_size);
      storeAppendPrintf(&sentry, "Filemap bits in use: %d of %d (%d%%)\n",
 @@ -380,7 +380,7 @@
      walker = repl->PurgeInit(repl, max_scan);

      while (1) {
 -        if (cur_size < (int) minSize()) /* cur_size should be unsigned */
 +        if (cur_size < minSize()) /* cur_size should be unsigned */
              break;

          if (removed >= max_remove)
 @@ -1325,10 +1325,7 @@
  void
  UFSSwapDir::dump(StoreEntry & entry) const
  {
 -    storeAppendPrintf(&entry, " %d %d %d",
 -                      max_size >> 10,
 -                      l1,
 -                      l2);
 +    storeAppendPrintf(&entry, " %"PRIu64" %d %d", (max_size >> 10), l1, l2);
      dumpOptions(&entry);
  }

 diff -Nur squid-3.1.6.orig/src/SquidMath.cc squid-3.1.6/src/SquidMath.cc
 --- squid-3.1.6.orig/src/SquidMath.cc   2010-08-01 22:01:38.0 +0800
 +++ squid-3.1.6/src/SquidMath.cc        2010-10-25 21:49:36.436913647 +0800
 @@ -7,6 +7,12 @@
      return b ? ((int) (100.0 * a / b + 0.5)) : 0;
  }

 +int64_t
 +Math::int64Percent(const int64_t a, const int64_t b)
 +{
 +    return b ? ((int64_t) (100.0 * a / b + 0.5)) : 0;
 +}
 +
  double
  Math::doublePercent(const double a, const double b)
  {
 diff -Nur squid-3.1.6.orig/src/SquidMath.h squid-3.1.6/src/SquidMath.h
 --- squid-3.1.6.orig/src/SquidMath.h    2010-08-01 22:01:39.0 +0800
 +++ squid-3.1.6/src/SquidMath.h 2010-10-25 21:50:00.953836387 +0800
 @@ -6,6 +6,7 @@
  {

  extern int intPercent(const int a, const int b);
 +extern int64_t int64Percent(const int64_t a, const int64_t b);
  extern double doublePercent(const double, const double);
  extern int intAverage(const int, const int, int, const int);
  extern double doubleAverage(const double, const double, int, const int);
 diff -Nur squid-3.1.6.orig/src/store_dir.cc squid-3.1.6/src/store_dir.cc
 --- squid-3.1.6.orig/src/store_dir.cc   2010-08-01 22:01:38.0 +0800
 +++ squid-3.1.6/src/store_dir.cc        2010-10-25 22:02:17.431379546 +0800
 @@ -360,13 +360,13 @@
      storeAppendPrintf(&output, "Store Directory Statistics:\n");
      storeAppendPrintf(&output, "Store Entries          : %lu\n",
                        (unsigned long int)StoreEntry::inUseCount());
 -    storeAppendPrintf(&output, "Maximum Swap Size      : %8ld KB\n",
 -                      (long int) maxSize());
 +    storeAppendPrintf(&output, "Maximum Swap Size      : %"PRIu64" KB\n",
 +                      maxSize());
      storeAppendPrintf(&output, "Current 
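
For anyone unfamiliar with the PRIu64 idiom the patch leans on, a minimal
stand-alone illustration (not Squid code; the variable name is just an
example) looks like this:

  #include <inttypes.h>
  #include <stdio.h>

  int main(void) {
      /* PRIu64 expands to the right conversion letters for uint64_t,
         so the format string stays correct on 32- and 64-bit builds */
      uint64_t max_size = 123456789012345ULL;
      printf("Maximum Size: %" PRIu64 " KB\n", max_size);
      return 0;
  }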

Re: [squid-users] Re: Access control problem

2010-11-08 Thread Amos Jeffries

On 09/11/10 05:30, mrmmm wrote:


Thanks for your response. For example, I have the file with the following
entries:

.site1.com
.site2.com
123.123.123.123
234.234.234.234

Type: Web Server Hostname

And still I am able to browse all these sites from behind the proxy...
Anything I might be missing?


Your initial message said "among other stuff I have". The conclusion 
then has to be that somewhere in that other stuff are http_access rules 
which bypass the ones you mentioned here.
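
For reference, the usual shape of such a block list, as a rough sketch (the
ACL name and file path are illustrative; dstdomain matches the hostname in
the URL, while bare IPs normally go in a separate dst ACL):

  acl blockedsites dstdomain "/etc/squid/blocked_sites.txt"
  http_access deny blockedsites
  # ...allow rules for your own clients come after the deny...
  http_access deny all

An "http_access allow ..." line that appears before the deny will let those
requests through without ever testing the block list.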


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: [squid-users] Unable to make Squid work as a transparent proxy (Squid 3.1.7, Linux Debian, WCCP2)

2010-11-08 Thread Amos Jeffries

On 09/11/10 00:11, Leonardo wrote:

Hi Amos,

On Sun, Nov 7, 2010 at 5:12 AM, Amos Jeffriessqu...@treenet.co.nz  wrote:

http_port 3128 intercept


I have changed the config from http_port 3128 transparent to
http_port 3128 intercept, but I see no change in the behaviour.


You will also need a separate port for the normal browser-configured and
management requests. 3.1 will reject these if sent to a NAT interception
port.


I don't get this.  Could you please be so kind to explain, or to point
me to a page in the documentation?


Ah, sorry, I was mixing up my modes and versions. The statement was wrong 
about the rejections. It's just a LAN-wide exploitable security hole.
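
In squid.conf terms the two-port arrangement looks roughly like this (the
port numbers are only an example):

  # browsers explicitly configured to use the proxy, and cachemgr, use this
  http_port 3128
  # only WCCP/NAT-redirected traffic should be able to reach this one
  http_port 3129 intercept

Firewalling the intercept port off from direct client connections is what
closes the hole mentioned above.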





Also check the squid access.log. This will determine whether it is the ASA
side or the Internet side of Squid which then needs to be tcpdumped for port
80 to find out whats going on.


The file access.log is empty.


So the ASA side. Now you know where to look for the mysterious missing 
packets.



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: [squid-users] Problem with ACL (disabling download)

2010-11-08 Thread Amos Jeffries

On 03/11/10 09:57, Konrado Z wrote:

But how do I properly write something like this:
'http_access allow clients|managers|clients2'? Squid cannot start with that line.
I want to replace the 'http_access allow all' line with the one given above.

Best



http://wiki.squid-cache.org/SquidFaq/SquidAcl#Common_Mistakes
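
The mistake that FAQ entry covers here is that several ACL names on one
http_access line are ANDed together; to OR the groups, give each its own
line, roughly (using the ACL names from the question):

  http_access allow clients
  http_access allow managers
  http_access allow clients2
  http_access deny all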

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: [squid-users] client_side_request.cc messages in cache.log

2010-11-08 Thread Amos Jeffries

On 09/11/10 03:08, donovan jeffrey j wrote:


On Nov 5, 2010, at 7:37 PM, Amos Jeffries wrote:


On 06/11/10 03:28, donovan jeffrey j wrote:


On Nov 5, 2010, at 10:24 AM, Amos Jeffries wrote:


On 06/11/10 03:20, donovan jeffrey j wrote:



snip


does this look right ?

#redirect_program   /usr/local/bin/squidGuard -c 
/usr/local/squidGuard/squidGuard.conf
url_rewrite_program /usr/local/bin/squidGuard -c 
/usr/local/squidGuard/squidGuard.conf
#redirect_children 100
url_rewrite_children 100



Yes.


is it okay to issue a -k reconfigure for this change, or is it better to wait 
until not many users are accessing?
-j


reconfigure is enough. It is just a cosmetic config change at this point.

Amos


Okay, I'm getting the same message under load.

2010/11/08 09:04:50| client_side_request.cc(1047) clientRedirectDone: 
redirecting body_pipe 0x2135be20*2 from request 0x14e14200 to 0x8ac0200
2010/11/08 09:04:56| client_side_request.cc(1047) clientRedirectDone: 
redirecting body_pipe 0x1fabb330*2 from request 0xc7a5e00 to 0xe05d000
2010/11/08 09:05:00| client_side_request.cc(1047) clientRedirectDone: 
redirecting body_pipe 0x2135be20*1 from request 0x8fa7200 to 0x127f7400
2010/11/08 09:05:06| client_side_request.cc(1047) clientRedirectDone: 
redirecting body_pipe 0x20606560*1 from request 0x11508200 to 0x11add800
2010/11/08 09:05:07| client_side_request.cc(1047) clientRedirectDone: 
redirecting body_pipe 0x21278360*1 from request 0xbcbc00 to 0x190d4a00

And yes, there is redirection going on, so it's not lying to me. ^^^ 
clientRedirectDone. Is this just a notification of the redirect, or is it an 
error?


It's a notice that the redirect changed something potentially dodgy. 
I've canvassed the dev who might know better than me and have not come up 
with any reason to keep it.


 It may be worth adding "url_rewrite_access deny CONNECT" if possible 
to protect against tunnel problems. Beyond that it can be ignored or 
patched out of existence.
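
As a sketch, that suggestion in squid.conf form (CONNECT is the standard
method ACL already present in the default configuration):

  acl CONNECT method CONNECT
  url_rewrite_access deny CONNECT
  url_rewrite_access allow all

This keeps CONNECT tunnels away from the rewriter while everything else is
still passed to squidGuard.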


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.2


Re: [squid-users] icp peering multicast option

2010-11-08 Thread Amos Jeffries

On 04/11/10 02:27, My LinuxHAList wrote:

Hi,

I may run multiple instances of squid inside a box.
Those instances may be serving out of the same eth0 or some bonded interface.

I have a question on the icp multicast option. 

Does Squid send its ICP multicast with the loopback option on, so that when
one of my Squid instances sends out the multicast query, the rest of the
instances within the same box will receive it?

Thanks


No, the loopback option is turned off.

To set it you will need to patch src/multicast.cc in the function 
mcastJoinGroups() and set char c = 1.
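
For illustration only (this is the standard sockets idiom that flag
controls, not a copy of the Squid source), enabling loopback means packets
sent to the group are also delivered back to sockets on the same host:

  #include <sys/socket.h>
  #include <netinet/in.h>
  #include <stdio.h>

  /* deliver (1) or suppress (0) our own multicast transmissions
     to listening sockets on this same host */
  static int set_mcast_loop(int fd, unsigned char on)
  {
      if (setsockopt(fd, IPPROTO_IP, IP_MULTICAST_LOOP, &on, sizeof(on)) < 0) {
          perror("setsockopt(IP_MULTICAST_LOOP)");
          return -1;
      }
      return 0;
  }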


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.2


Re: [squid-users] Multisite ICP peering

2010-11-08 Thread Amos Jeffries

On 03/11/10 21:53, Chris Toft wrote:

Thanks for the reply, I actually fixed it. Removed the multicast-responder 
option and just left multicast-sibling.

Man this thing flies on 5 boxes with 64gb memory and 10x 50gb solid state 
drives for the cache :-)

I will post working config tomorrow for anyone interested.



Interested :) please post.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: [squid-users] Multisite ICP peering

2010-11-08 Thread Chris Toft
Will do.

Although with single requests it would get memory and sibling hits, it would 
then revert to pulling from the backend even though I replayed the same logs 
with httperf 30 mins later. All objects have a 3600s (60 min) expiry, so it's 
not that either.

Will post configs for comments



Chris Toft
Mob: 0459 029 454

On 09/11/2010, at 17:36, Amos Jeffries squ...@treenet.co.nz wrote:

 On 03/11/10 21:53, Chris Toft wrote:
 Thanks for the reply, I actually fixed it. Removed the multicast-responder 
 option and just left multicast-sibling.

 Man this thing flies on 5 boxes with 64gb memory and 10x 50gb solid state 
 drives for the cache :-)

 I will post working config tomorrow for anyone interested.


 Interested :) please post.

 Amos
 --
 Please be using
   Current Stable Squid 2.7.STABLE9 or 3.1.9
   Beta testers wanted for 3.2.0.3



Re: [squid-users] unexplainable MISSes (squid 2.7stable9)

2010-11-08 Thread Adrian Dascalu


On 11/09/2010 04:35 AM, Amos Jeffries wrote:


 Unique ETag headers are required for that.


Hi Amos,

I have understood that much, but I still can't get my setup to behave as
intended: to keep distinct versions of the same URL in cache until they
expire, distinct based on the ETag value / Vary header values.

Here's what happens:


**A request from a logged-in user (identified by the X-Username header
"v81krndgi9jw8Oq7RqP9gpj/XkIgZGFzY2FsdQ==|Plone Default" and
ETag "|dascalu|Plone Default|1|False|358134") brings the object into the
cache:

1289284280.898   4387 127.0.0.1 TCP_MISS/200 76574 GET
http://127.0.0.1:3128/VirtualHostBase/http/www.somewebsite.com:80/www/SITE/VirtualHostRoot/
- FIRST_PARENT_MISS/17 text/html

[Host: 127.0.0.1:3128\r\nUser-Agent: Mozilla/5.0 (X11; U; Linux i686;
en-US; rv:1.9.1.9) Gecko/20100330 Fedora/3.5.9-1.fc11
Firefox/3.5.9\r\nAccept:
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\nAccept-Language:
en-us,en;q=0.5\r\nAccept-Encoding: gzip,deflate\r\nAccept-Charset:
ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\nReferer:
http://www.somewebsite.com/portal_cache_settings/old-policy/rules/frontpage?portal_status_message=Changes%2520saved.\r\nCookie:
__utma=173508663.4134765344646281700.1250060356.1289254636.1289283144.57; 
__utmc=173508663;
__utmz=173508663.1289208944.50.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none);
__ac=v81krndgi9jw8Oq7RqP9gpj/XkIgZGFzY2FsdQ==; plone_skin=Plone
Default; __utmb=173508663.8.10.1289283144\r\nX-Username:
v81krndgi9jw8Oq7RqP9gpj/XkIgZGFzY2FsdQ==|Plone
Default\r\nX-Forwarded-For: 109.99.31.143\r\nX-Forwarded-Host:
www.somewebsite.com\r\nX-Forwarded-Server:
www.somewebsite.com\r\nConnection: Keep-Alive\r\n]

[HTTP/1.0 200 OK\r\nServer: Zope/(Zope 2.9.10-final, python 2.4.3,
linux2) ZServer/1.1 Plone/2.5.5\r\nDate: Tue, 09 Nov 2010 06:31:20
GMT\r\nContent-Length: 75865\r\nContent-Language: en\r\nExpires: Tue, 09
Nov 2010 06:31:24 GMT\r\nVary: Accept-Encoding, Accept,
X-Username\r\nLast-Modified: Fri, 28 Nov 2008 07:19:49 GMT\r\nETag:
|dascalu|Plone Default|1|False|358134\r\nX-Caching-Rule-Id:
frontpage\r\nCache-Control: max-age=5, s-maxage=3600,
public\r\nContent-Type: text/html;charset=utf-8\r\nX-Header-Set-Id:
cache-with-etag-in-proxy\r\nX-Cache: MISS from
squid1.somewebsite.com\r\nX-Cache-Lookup: MISS from
squid1.somewebsite.com:3128\r\nVia: 1.0 squid1.somewebsite.com:3128
(squid/2.7.STABLE9)\r\nConnection: keep-alive\r\n\r]

*
**A second request of the same user will be served from cache:

1289284489.560  0 127.0.0.1 TCP_MEM_HIT/200 76582 GET
http://127.0.0.1:3128/VirtualHostBase/http/www.somewebsite.com:80/www/SITE/VirtualHostRoot/
- NONE/- text/html

[Host: 127.0.0.1:3128\r\nUser-Agent: Mozilla/5.0 (X11; U; Linux i686;
en-US; rv:1.9.1.9) Gecko/20100330 Fedora/3.5.9-1.fc11
Firefox/3.5.9\r\nAccept:
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\nAccept-Language:
en-us,en;q=0.5\r\nAccept-Encoding: gzip,deflate\r\nAccept-Charset:
ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\nReferer:
http://www.somewebsite.com/\r\nCookie:
__utma=173508663.4134765344646281700.1250060356.1289254636.1289283144.57; 
__utmc=173508663;
__utmz=173508663.1289208944.50.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none);
__ac=v81krndgi9jw8Oq7RqP9gpj/XkIgZGFzY2FsdQ==; plone_skin=Plone
Default; __utmb=173508663.9.10.1289283144\r\nX-Username:
v81krndgi9jw8Oq7RqP9gpj/XkIgZGFzY2FsdQ==|Plone
Default\r\nX-Forwarded-For: 109.99.31.143\r\nX-Forwarded-Host:
www.somewebsite.com\r\nX-Forwarded-Server:
www.somewebsite.com\r\nConnection: Keep-Alive\r\n]

[HTTP/1.0 200 OK\r\nServer: Zope/(Zope 2.9.10-final, python 2.4.3,
linux2) ZServer/1.1 Plone/2.5.5\r\nDate: Tue, 09 Nov 2010 06:31:20
GMT\r\nContent-Length: 75865\r\nContent-Language: en\r\nExpires: Tue, 09
Nov 2010 06:31:24 GMT\r\nVary: Accept-Encoding, Accept,
X-Username\r\nLast-Modified: Fri, 28 Nov 2008 07:19:49 GMT\r\nETag:
|dascalu|Plone Default|1|False|358134\r\nX-Caching-Rule-Id:
frontpage\r\nCache-Control: max-age=5, s-maxage=3600,
public\r\nContent-Type: text/html;charset=utf-8\r\nX-Header-Set-Id:
cache-with-etag-in-proxy\r\nAge: 209\r\nX-Cache: HIT from
squid1.somewebsite.com\r\nX-Cache-Lookup: HIT from
squid1.somewebsite.com:3128\r\nVia: 1.0 squid1.somewebsite.com:3128
(squid/2.7.STABLE9)\r\nConnection: keep-alive\r\n\r]

**
**A request from an Anonymous user (X-Username: Anonymous) will fetch a
new version into the cache AND invalidate the logged-in user's copy above. The
response ETag is distinct from the logged-in user's (ETag:
||XXXDesign2006|1|False|358134)

1289284576.228858 127.0.0.1 TCP_MISS/200 10900 GET
http://127.0.0.1:3128/VirtualHostBase/http/www.somewebsite.com:80/www/SITE/VirtualHostRoot/
- FIRST_PARENT_MISS/110 text/html

[Host: 127.0.0.1:3128\r\nIf-None-Match:
|1|False|358134\r\nIf-Modified-Since: Fri, 28 Nov 2008 07:19:49
GMT\r\nAccept: text/html, image/jpeg;q=0.9, image/png;q=0.9,
text/*;q=0.9, image/*;q=0.9,