Re: [squid-users] Lots of "Vary object loop!"

2016-10-20 Thread Amos Jeffries
On 21/10/2016 12:07 a.m., Anton Kornexl wrote:
> Hello,
>
> I also had many of these messages in cache.log.
>
> We do filtering with squidGuard (redirect http://www..xx )
>
> It is possible that the same URL is redirected for one user but not for
> another (different filter rules per user).
>
> Are the redirected objects saved in cache_dir?

Yes.

* If it is a true HTTP 30x redirect the followup client request will be
handled normally with regards to everything, including caching.

* If you are re-writing URLs the fetched object will be stored under the
mangled URL location.
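
For illustration, a minimal sketch of that difference as a url_rewrite_program helper (the helper and the URLs are hypothetical; it assumes the Squid 3.4+ helper result syntax with concurrency disabled):

#!/bin/bash
# Hypothetical url_rewrite helper: Squid sends one request per line,
# with the URL as the first token when helper concurrency is off.
while read -r url rest; do
    case "$url" in
        http://blocked.example/*)
            # True HTTP 302: the client issues a follow-up request for
            # the new URL, which is then cached normally under that URL.
            echo "OK status=302 url=http://filter.example/blocked.html"
            ;;
        http://mirror.example/*)
            # Silent rewrite: the fetched object is stored under the
            # mangled URL, invisibly to the client.
            echo "OK rewrite-url=http://origin.example/object"
            ;;
        *)
            echo "ERR"   # leave the URL untouched
            ;;
    esac
done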


> Can I control which variables are used for Vary checking?

No. Variant values are determined by the server and provided by the client.

>  
> Now I have disabled caching and the messages are gone, but I would like
> to reactivate caching.
> 

The bug data presented recently hints that the message is being
logged when some new variant is needed and is simply not yet cached. In
other words, a normal MISS is happening and there is nothing to worry about.

Amos



Re: [squid-users] Lots of "Vary object loop!"

2016-10-20 Thread Garri Djavadyan
On Thu, 2016-10-20 at 13:07 +0200, Anton Kornexl wrote:
> Hello,
>
> I also had many of these messages in cache.log.
>
> We do filtering with squidGuard (redirect http://www..xx )
>
> It is possible that the same URL is redirected for one user but not
> for another (different filter rules per user).
>
> Are the redirected objects saved in cache_dir?
> Can I control which variables are used for Vary checking?
>
> Now I have disabled caching and the messages are gone, but I would
> like to reactivate caching.
> 
> Anton Kornexl

Hi,

Many possible causes have been discussed on the list. For example:

http://lists.squid-cache.org/pipermail/squid-users/2015-August/005132.html

Also, if you use collapsed_forwarding you may be affected by the false
positives described here:

http://bugs.squid-cache.org/show_bug.cgi?id=4619
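
A quick way to check whether those false positives are in play (assuming you
can tolerate temporarily losing request collapsing):

# squid.conf: disable request collapsing; if the "Vary object loop!"
# messages stop, the bug 4619 false positives are a likely match.
collapsed_forwarding off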

Garri 


Re: [squid-users] Lots of "Vary object loop!"

2016-10-20 Thread Anton Kornexl
Hello,
 
I also had many of these messages in cache.log.

We do filtering with squidGuard (redirect http://www..xx )

It is possible that the same URL is redirected for one user but not for another
(different filter rules per user).

Are the redirected objects saved in cache_dir?
Can I control which variables are used for Vary checking?

Now I have disabled caching and the messages are gone, but I would like to
reactivate caching.


Anton Kornexl


Re: [squid-users] Lots of "Vary object loop!"

2015-09-24 Thread FredB


----- Original Message -----
> From: "Sebastián Goicochea" <se...@vianetcon.com.ar>
> To: squid-users@lists.squid-cache.org
> Sent: Wednesday, 23 September 2015 19:12:33
> Subject: Re: [squid-users] Lots of "Vary object loop!"
> 
> 
> Hi FredB,
> 
> Do you have collapsed_forwarding in your config?
> 

No


Re: [squid-users] Lots of "Vary object loop!"

2015-09-23 Thread Sebastián Goicochea

It happens without disk caches too. Was anyone able to reproduce it?




On 21/09/15 at 07:03, Eliezer Croitoru wrote:

Is it happening also with RAM cache only? No disk cache?

Eliezer

On 04/09/2015 00:02, Sebastián Goicochea wrote:

But still seeing all those Vary loops all the time

:(

Thanks,
Sebastian




Re: [squid-users] Lots of "Vary object loop!"

2015-09-23 Thread FredB

> 
> 
> It happens without disk caches too. Was anyone able to reproduce it?
> 
> 
> 

Same messages here; some days many, some days not one. A sample among others:


2015/09/23 13:50:33 kid1| WARNING: HTTP: Invalid Response: Bad header encountered from http://www.cdiscount.com/auto/porte-velos/l-13360-2.html AKA www.cdiscount.com/auto/porte-velos/l-13360-2.html
2015/09/23 13:55:14 kid1| ipcacheParse: No Address records in response to '6.perf.msedge.net'
2015/09/23 13:55:44 kid1| ipcacheParse: No Address records in response to '6.perf.msedge.net'
2015/09/23 13:56:15 kid1| ipcacheParse: No Address records in response to '6.perf.msedge.net'
2015/09/23 13:56:29 kid1| clientIfRangeMatch: Weak ETags are not allowed in If-Range: "43572-1442488958000" ? "43572-1442488958000"
2015/09/23 14:08:20 kid1| urlParse: Illegal hostname '.xiti.com'
2015/09/23 14:08:31 kid1| urlParse: Illegal hostname '.xiti.com'
2015/09/23 14:08:38 kid1| urlParse: Illegal hostname '.xiti.com'
2015/09/23 14:09:34 kid1| clientIfRangeMatch: Weak ETags are not allowed in If-Range: "93324-134147762" ? "93324-134147762"
2015/09/23 14:09:34 kid1| clientIfRangeMatch: Weak ETags are not allowed in If-Range: "93504-1438151092000" ? "93504-1438151092000"
2015/09/23 14:09:34 kid1| clientIfRangeMatch: Weak ETags are not allowed in If-Range: "33901-1438151736000" ? "33901-1438151736000"
2015/09/23 14:12:47 kid1| urlParse: Illegal hostname '.xiti.com'
2015/09/23 14:12:57 kid1| urlParse: Illegal hostname '.xiti.com'
2015/09/23 14:13:22 kid1| clientProcessHit: Vary object loop!
2015/09/23 14:14:00 kid1| urlParse: Illegal hostname '.xiti.com'
2015/09/23 14:14:21 kid1| urlParse: Illegal hostname '.xiti.com'
2015/09/23 14:14:26 kid1| urlParse: Illegal hostname '.xiti.com'
2015/09/23 14:14:47 kid1| clientProcessHit: Vary object loop!
2015/09/23 14:14:47 kid1| clientProcessHit: Vary object loop!
2015/09/23 14:17:16 kid1| clientIfRangeMatch: Weak ETags are not allowed in If-Range: "33720-1442503946000" ? "33720-1442503946000"
2015/09/23 14:17:17 kid1| clientIfRangeMatch: Weak ETags are not allowed in If-Range: "33720-1442503946000" ? "33720-1442503946000"
2015/09/23 14:17:18 kid1| clientIfRangeMatch: Weak ETags are not allowed in If-Range: "33720-1442503946000" ? "33720-1442503946000"
2015/09/23 14:21:21 kid1| urlParse: Illegal hostname '.xiti.com'
2015/09/23 14:21:27 kid1| urlParse: Illegal hostname '.xiti.com'
About "urlParse: Illegal hostname '.xiti.com'" not related, I known the problem 


Re: [squid-users] Lots of "Vary object loop!"

2015-09-23 Thread Sebastián Goicochea

Hi FredB,

Do you have collapsed_forwarding in your config?

On 23/09/15 at 13:42, FredB wrote:

It happens without disk caches too. Was anyone able to reproduce it?




Same messages here; some days many, some days not one. A sample among others:


2015/09/23 13:50:33 kid1| WARNING: HTTP: Invalid Response: Bad header encountered from http://www.cdiscount.com/auto/porte-velos/l-13360-2.html AKA www.cdiscount.com/auto/porte-velos/l-13360-2.html
2015/09/23 13:55:14 kid1| ipcacheParse: No Address records in response to '6.perf.msedge.net'
2015/09/23 13:55:44 kid1| ipcacheParse: No Address records in response to '6.perf.msedge.net'
2015/09/23 13:56:15 kid1| ipcacheParse: No Address records in response to '6.perf.msedge.net'
2015/09/23 13:56:29 kid1| clientIfRangeMatch: Weak ETags are not allowed in If-Range: "43572-1442488958000" ? "43572-1442488958000"
2015/09/23 14:08:20 kid1| urlParse: Illegal hostname '.xiti.com'
2015/09/23 14:08:31 kid1| urlParse: Illegal hostname '.xiti.com'
2015/09/23 14:08:38 kid1| urlParse: Illegal hostname '.xiti.com'
2015/09/23 14:09:34 kid1| clientIfRangeMatch: Weak ETags are not allowed in If-Range: "93324-134147762" ? "93324-134147762"
2015/09/23 14:09:34 kid1| clientIfRangeMatch: Weak ETags are not allowed in If-Range: "93504-1438151092000" ? "93504-1438151092000"
2015/09/23 14:09:34 kid1| clientIfRangeMatch: Weak ETags are not allowed in If-Range: "33901-1438151736000" ? "33901-1438151736000"
2015/09/23 14:12:47 kid1| urlParse: Illegal hostname '.xiti.com'
2015/09/23 14:12:57 kid1| urlParse: Illegal hostname '.xiti.com'
2015/09/23 14:13:22 kid1| clientProcessHit: Vary object loop!
2015/09/23 14:14:00 kid1| urlParse: Illegal hostname '.xiti.com'
2015/09/23 14:14:21 kid1| urlParse: Illegal hostname '.xiti.com'
2015/09/23 14:14:26 kid1| urlParse: Illegal hostname '.xiti.com'
2015/09/23 14:14:47 kid1| clientProcessHit: Vary object loop!
2015/09/23 14:14:47 kid1| clientProcessHit: Vary object loop!
2015/09/23 14:17:16 kid1| clientIfRangeMatch: Weak ETags are not allowed in If-Range: "33720-1442503946000" ? "33720-1442503946000"
2015/09/23 14:17:17 kid1| clientIfRangeMatch: Weak ETags are not allowed in If-Range: "33720-1442503946000" ? "33720-1442503946000"
2015/09/23 14:17:18 kid1| clientIfRangeMatch: Weak ETags are not allowed in If-Range: "33720-1442503946000" ? "33720-1442503946000"
2015/09/23 14:21:21 kid1| urlParse: Illegal hostname '.xiti.com'
2015/09/23 14:21:27 kid1| urlParse: Illegal hostname '.xiti.com'

About "urlParse: Illegal hostname '.xiti.com'" not related, I known the problem


Re: [squid-users] Lots of "Vary object loop!"

2015-09-15 Thread Amos Jeffries
On 15/09/2015 9:16 a.m., Sebastián Goicochea wrote:
> I could finally isolate the problem, it only happens if you are using
> collapsed_forwarding.
> 
> If you want, you can use this script to replicate it:
> 
> #!/bin/bash
> H='--header'
> 
> echo "With Firefox"
> wget -d  \
> $H='Accept:
> text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' \
> $H='Accept-Encoding: gzip, deflate' \
> $H='Accept-Language: en-us,en;q=0.5' \
> $H='Cache-Control: max-age=0' \
> $H='Connection: keep-alive' \
> -U 'Mozilla/5.0 (Windows NT 5.1; rv:10.0.2) Gecko/20100101
> Firefox/10.0.2' \
> -O /dev/null \
>  $1
> 
> echo "With Chrome"
> wget -d  \
> $H='Accept:text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'\
> 
> $H='Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3'\
> $H='Accept-Encoding:gzip,deflate,sdch'\
> $H='Accept-Language:es-ES,es;q=0.8'\
> $H='Cache-Control:no-cache'\
> $H='Connection:keep-alive'\
> $H='Pragma:no-cache'\
> -U 'User-Agent:Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.32
> (KHTML, like Gecko) Chrome/27.0.1425.0 Safari/537.32 SUSE/27.0.1425.0'\
> -O /dev/null \
>  $1
> # End of script
> 
> script usage: ./wgets.sh
> http://www.clarin.com/external-images/GranDTUnificada_5729dd7a1487678526c23516a5083661.jpg
> 

That script is not doing what you think it is.

The requests are being made in series with the first one finishing
before the second starts. Your access.log timing confirms that with a
whole 18ms between the two requests.

So I don't think this script is actually triggering the collapsed
forwarding behaviour. Or at least it should not be.


Also, look closely for the "\r\n" between headers.

[User-Agent: Wget/1.12
> (linux-gnu)\r\nAccept:
> text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8--header=Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3--header=Accept-Encoding:gzip,deflate,sdch--header=Accept-Language:es-ES,es;q=0.8--header=Cache-Control:no-cache--header=Connection:keep-alive--header=Pragma:no-cache-U\r\nHost:
> www.clarin.com\r\nConnection: Keep-Alive\r\n]
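
The likely cause: in the Chrome block there is no space before each trailing
backslash, so bash's line continuation joins each quoted $H=... word directly
onto the next one, fusing all the options into the single giant Accept header
seen above (and swallowing the -U option too). A corrected sketch of that
block, header values unchanged:

echo "With Chrome"
# Note the space before every trailing backslash; without it the
# quoted words concatenate into one argument.
wget -d \
  $H='Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' \
  $H='Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3' \
  $H='Accept-Encoding: gzip,deflate,sdch' \
  $H='Accept-Language: es-ES,es;q=0.8' \
  $H='Cache-Control: no-cache' \
  $H='Connection: keep-alive' \
  $H='Pragma: no-cache' \
  -U 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.32 (KHTML, like Gecko) Chrome/27.0.1425.0 Safari/537.32 SUSE/27.0.1425.0' \
  -O /dev/null \
  "$1"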


> cache.log output:
> 2015/09/14 14:05:01 kid1| clientProcessHit: Vary object loop!
> 2015/09/14 14:05:27 kid1| varyEvaluateMatch: Oops. Not a Vary match on
> second attempt,
> 'http://www.clarin.com/external-images/GranDTUnificada_5729dd7a1487678526c23516a5083661.jpg'
> 'accept-encoding="gzip,%20deflate",
> user-agent="Mozilla%2F5.0%20(Windows%20NT%205.1%3B%20rv%3A10.0.2)%20Gecko%2F20100101%20Firefox%2F10.0.2"'
> 
> 2015/09/14 14:05:27 kid1| clientProcessHit: Vary object loop!
> 2015/09/14 14:05:27 kid1| varyEvaluateMatch: Oops. Not a Vary match on
> second attempt,
> 'http://www.clarin.com/external-images/GranDTUnificada_5729dd7a1487678526c23516a5083661.jpg'
> 'accept-encoding, user-agent="Wget%2F1.12%20(linux-gnu)"'
> 2015/09/14 14:05:27 kid1| clientProcessHit: Vary object loop!
> 

> 
> What do you think? Is this the expected behaviour?
> 

There is something slightly odd. I'm not sure if it's wrong exactly, but
definitely odd.


It's not clear if the cache.log output is from the first or second
request. I assume (big IF) that the above cache.log is the first one finding
some prior Firefox entry, then the second one finding the first one's entry.

It's very weird that the second one gets "Not a Vary match". The lookups
should have been the same regardless of your script breakage.

Amos


Re: [squid-users] Lots of "Vary object loop!"

2015-09-15 Thread Sebastián Goicochea
Amos, thanks for your answer. I understand your point about
collapsed_forwarding not being triggered because the requests are not
concurrent; nevertheless, if I use collapsed_forwarding the Vary loop
appears. If I disable it, format the cache_dir, and start over, it does
not appear.


If you think I could do something else to debug this I'll be glad to do
it. For now I've disabled collapsed_forwarding on my production servers
and everything looks good.



Regards,
Sebastian

On 15/09/15 at 07:37, Amos Jeffries wrote:

On 15/09/2015 9:16 a.m., Sebastián Goicochea wrote:

I could finally isolate the problem, it only happens if you are using
collapsed_forwarding.

If you want, you can use this script to replicate it:

#!/bin/bash
H='--header'

echo "With Firefox"
wget -d  \
$H='Accept:
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' \
$H='Accept-Encoding: gzip, deflate' \
$H='Accept-Language: en-us,en;q=0.5' \
$H='Cache-Control: max-age=0' \
$H='Connection: keep-alive' \
-U 'Mozilla/5.0 (Windows NT 5.1; rv:10.0.2) Gecko/20100101
Firefox/10.0.2' \
-O /dev/null \
  $1

echo "With Chrome"
wget -d  \
$H='Accept:text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'\

$H='Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3'\
$H='Accept-Encoding:gzip,deflate,sdch'\
$H='Accept-Language:es-ES,es;q=0.8'\
$H='Cache-Control:no-cache'\
$H='Connection:keep-alive'\
$H='Pragma:no-cache'\
-U 'User-Agent:Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.32
(KHTML, like Gecko) Chrome/27.0.1425.0 Safari/537.32 SUSE/27.0.1425.0'\
-O /dev/null \
  $1
# End of script

script usage: ./wgets.sh
http://www.clarin.com/external-images/GranDTUnificada_5729dd7a1487678526c23516a5083661.jpg


That script is not doing what you think it is.

The requests are being made in series with the first one finishing
before the second starts. Your access.log timing confirms that with a
whole 18ms between the two requests.

So I don't think this script is actually triggering the collapsed
forwarding behaviour. Or at least it should not be.


Also, look closely for the "\r\n" between headers.

[User-Agent: Wget/1.12

(linux-gnu)\r\nAccept:
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8--header=Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3--header=Accept-Encoding:gzip,deflate,sdch--header=Accept-Language:es-ES,es;q=0.8--header=Cache-Control:no-cache--header=Connection:keep-alive--header=Pragma:no-cache-U\r\nHost:
www.clarin.com\r\nConnection: Keep-Alive\r\n]
cache.log output:
2015/09/14 14:05:01 kid1| clientProcessHit: Vary object loop!
2015/09/14 14:05:27 kid1| varyEvaluateMatch: Oops. Not a Vary match on
second attempt,
'http://www.clarin.com/external-images/GranDTUnificada_5729dd7a1487678526c23516a5083661.jpg'
'accept-encoding="gzip,%20deflate",
user-agent="Mozilla%2F5.0%20(Windows%20NT%205.1%3B%20rv%3A10.0.2)%20Gecko%2F20100101%20Firefox%2F10.0.2"'

2015/09/14 14:05:27 kid1| clientProcessHit: Vary object loop!
2015/09/14 14:05:27 kid1| varyEvaluateMatch: Oops. Not a Vary match on
second attempt,
'http://www.clarin.com/external-images/GranDTUnificada_5729dd7a1487678526c23516a5083661.jpg'
'accept-encoding, user-agent="Wget%2F1.12%20(linux-gnu)"'
2015/09/14 14:05:27 kid1| clientProcessHit: Vary object loop!

What do you think? Is this the expected behaviour?


There is something slightly odd. I'm not sure if it's wrong exactly, but
definitely odd.


It's not clear if the cache.log output is from the first or second
request. I assume (big IF) that the above cache.log is the first one finding
some prior Firefox entry, then the second one finding the first one's entry.

It's very weird that the second one gets "Not a Vary match". The lookups
should have been the same regardless of your script breakage.

Amos


Re: [squid-users] Lots of "Vary object loop!"

2015-09-03 Thread Sebastián Goicochea

Amos, I spent a couple of days doing some tests with the info you gave me:

Retested emptying the cache several times, disabled the rewriter, tried
different config files... everything I could think of.



Downloaded a fresh 3.5.8 tar.gz (just in case it was some 3.5.4 thing) and
compiled it using these configure options:


Squid Cache: Version 3.5.8
Service Name: squid
configure options:  '--prefix=/usr/local' '--datadir=/usr/local/share' 
'--bindir=/usr/local/sbin' '--libexecdir=/usr/local/lib/squid' 
'--localstatedir=/var' '--sysconfdir=/etc/squid3' '--enable-delay-pools' 
'--enable-ssl' '--enable-ssl-crtd' '--enable-linux-netfilter' 
'--enable-eui' '--enable-snmp' '--enable-gnuregex' 
'--enable-ltdl-convenience' '--enable-removal-policies=lru heap' 
'--enable-http-violations' '--with-openssl' 
'--with-filedescriptors=24321' '--enable-poll' '--enable-epoll' 
'--enable-storeio=ufs,aufs,diskd,rock' '--disable-ipv6'




And the problem appeared again. I suspect the problem is in the
configuration; I even removed all my refresh patterns, but:


2015/09/02 15:03:42 kid1| varyEvaluateMatch: Oops. Not a Vary match on second attempt, 'http://assets.pinterest.com/js/pinit.js' 'accept-encoding="gzip,%20deflate"'
2015/09/02 15:03:42 kid1| clientProcessHit: Vary object loop!
2015/09/02 15:03:43 kid1| varyEvaluateMatch: Oops. Not a Vary match on second attempt, 'http://static.cmptch.com/v/lib/str.html' 'accept-encoding="gzip,%20deflate,%20sdch"'
2015/09/02 15:03:43 kid1| clientProcessHit: Vary object loop!
2015/09/02 15:03:43 kid1| varyEvaluateMatch: Oops. Not a Vary match on second attempt, 'http://pstatic.bestpriceninja.com/nwp/v0_0_773/release/Shared/Extra/IFrameStoreReciever.js' 'accept-encoding="gzip,%20deflate,%20sdch"'
2015/09/02 15:03:43 kid1| clientProcessHit: Vary object loop!
2015/09/02 15:03:59 kid1| varyEvaluateMatch: Oops. Not a Vary match on second attempt, 'http://static.xvideos.com/v2/css/xv-video-styles.css?v=7' 'accept-encoding="gzip,deflate"'
2015/09/02 15:03:59 kid1| clientProcessHit: Vary object loop!
2015/09/02 15:03:59 kid1| varyEvaluateMatch: Oops. Not a Vary match on second attempt, 'http://s7.addthis.com/js/250/addthis_widget.js' 'accept-encoding="gzip,deflate"'
2015/09/02 15:03:59 kid1| clientProcessHit: Vary object loop!



Later on I tested it with this short config file and the problem persisted:

http_access allow localhost manager
http_access deny manager
acl purge method PURGE
http_access allow purge localhost
http_access deny purge
acl all src all
acl localhost src 127.0.0.1/32
acl localnet src 127.0.0.0/8
acl Safe_ports port 80
acl snmppublic snmp_community public
http_access deny !Safe_ports
http_access allow all
dns_v4_first on
cache_mem 1024 MB
maximum_object_size_in_memory 64 KB
memory_cache_mode always
maximum_object_size 15 KB
minimum_object_size 100 bytes
collapsed_forwarding on
logfile_rotate 5
mime_table /etc/squid3/mime.conf
debug_options ALL,1
store_id_access deny all
store_id_bypass on
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern ^http:\/\/movies\.apple\.com 86400 20% 86400 override-expire override-lastmod ignore-no-cache ignore-private ignore-reload
refresh_pattern -i \.flv$ 10080 90% 99 ignore-no-cache override-expire ignore-private
refresh_pattern -i \.mov$ 10080 90% 99 ignore-no-cache override-expire ignore-private
refresh_pattern windowsupdate.com/.*\.(cab|exe) 4320 100% 43200 reload-into-ims
refresh_pattern download.microsoft.com/.*\.(cab|exe) 4320 100% 43200 reload-into-ims
refresh_pattern -i \.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|doc|pdf|tiff)$ 10080 90% 43200 override-expire ignore-no-cache ignore-private

refresh_pattern -i (/cgi-bin/) 0 0% 0
refresh_pattern . 0 20% 4320
quick_abort_min 0 KB
quick_abort_max 0 KB
quick_abort_pct 100
range_offset_limit 0
negative_ttl 1 minute
negative_dns_ttl 1 minute
read_ahead_gap 128 KB
request_header_max_size 100 KB
reply_header_max_size 100 KB
via off
acl apache rep_header Server ^Apache
half_closed_clients off
cache_mgr webmaster
cache_effective_user squid
cache_effective_group squid
httpd_suppress_version_string on
snmp_access allow snmppublic localhost
snmp_access deny all
snmp_incoming_address 127.0.0.1
error_directory /etc/squid3/errors/English
max_filedescriptors 65535
ipcache_size 1024
forwarded_for off
log_icp_queries off
icp_access allow localnet
icp_access deny all
htcp_access allow localnet
htcp_access deny all
digest_rebuild_period 15 minutes
digest_rewrite_period 15 minutes
strip_query_terms off
max_open_disk_fds 150
cache_replacement_policy heap LFUDA
memory_pools off
http_port 9001
http_port 901 tproxy
if ${process_number} = 1
access_log stdio:/var/log/squid/1/access.log squid
cache_log /var/log/squid/1/cache.log
cache_store_log none
cache_swap_state 

Re: [squid-users] Lots of "Vary object loop!"

2015-09-03 Thread Amos Jeffries
On 4/09/2015 6:24 a.m., Sebastián Goicochea wrote:
> Regarding configure options, I disable IPv6 because of the latency that
> it adds to DNS queries, enable-ssl could be removed, gnuregex gave no
> problems (or so I think).
> 
> Those options in the config file are the core of my configuration. Just
> stripped ACLs and that kind of stuff to make it shorter, and I also
> stripped the part about the rewriter (because I have it commented out at the
> moment).
> Could any of the misconfigurations you mention be causing this
> Vary loop?

In summary:
  it looks like you may have been using SMP workers in an unsafe manner
(sometime recently, perhaps) and screwed over your cache_dir. A full
cache re-scan is probably in order to fix it.


In detail:

What I noticed particularly was that you have a section of SMP
configuration, and that later you have "cache_dir diskd ..." without any
SMP protections. But what you posted did not include a "workers" directive, so
I was unsure.

If you have at any time run that config file with the "workers"
directive in it, then those diskd caches will have been randomly
overwriting each other's stored content, almost guaranteeing these types
of problem and other SWAPFAIL events as well. Even if workers was
only active for a short time, disk cache corruption is persistent.

You have two options.

1) wait until all the collisions have been found and erased. That could
take a while to happen naturally.

2) stop Squid, erase the swap.state in those cache_dir and restart
Squid. The slow "DIRTY" rebuild will fix collision type corruptions.


In related settings you have shared memory cache disabled and rock store
in use. Disabling shared memory and running with SMP workers might make
rock store collide as well - though I'm not sure of that. It does
nothing in a non-SMP configuration.

If the rock is corrupted it self-heals pretty quickly. Just restart
Squid and that happens.

Amos



Re: [squid-users] Lots of "Vary object loop!"

2015-09-03 Thread Amos Jeffries
On 4/09/2015 3:20 a.m., Sebastián Goicochea wrote:
> Amos, I spent a couple of days doing some test with the info you gave me:
> 
> Retested emptying the cache several times, disabled the rewriter,
> different config files .. all I could think of
> 
> 
> Downloaded fresh 3.5.8 tar.gz (just in case it was some 3.5.4 thing) and
> compiled it using this configure options:
> 
> Squid Cache: Version 3.5.8
> Service Name: squid
> configure options:  '--prefix=/usr/local' '--datadir=/usr/local/share'
> '--bindir=/usr/local/sbin' '--libexecdir=/usr/local/lib/squid'
> '--localstatedir=/var' '--sysconfdir=/etc/squid3' '--enable-delay-pools'
> '--enable-ssl' '--enable-ssl-crtd' '--enable-linux-netfilter'
> '--enable-eui' '--enable-snmp' '--enable-gnuregex'
> '--enable-ltdl-convenience' '--enable-removal-policies=lru heap'
> '--enable-http-violations' '--with-openssl'
> '--with-filedescriptors=24321' '--enable-poll' '--enable-epoll'
> '--enable-storeio=ufs,aufs,diskd,rock' '--disable-ipv6'
> 

If you can avoid that --disable-ipv6 please do.

--enable-ssl is obsolete.

--enable-gnuregex is also pretty broken. Though we have not quite
managed to eradicate it yet.


> 
> And the problem appeared again, I am suspicious that the problem is in
> the configuration, I even removed all my refresh patterns, but:
> 
> 2015/09/02 15:03:42 kid1| varyEvaluateMatch: Oops. Not a Vary match on
> second attempt, 'http://assets.pinterest.com/js/pinit.js'
> 'accept-encoding="gzip,%20deflate"'
> 2015/09/02 15:03:42 kid1| clientProcessHit: Vary object loop!
> 2015/09/02 15:03:43 kid1| varyEvaluateMatch: Oops. Not a Vary match on
> second attempt, 'http://static.cmptch.com/v/lib/str.html'
> 'accept-encoding="gzip,%20deflate,%20sdch"'
> 2015/09/02 15:03:43 kid1| clientProcessHit: Vary object loop!
> 2015/09/02 15:03:43 kid1| varyEvaluateMatch: Oops. Not a Vary match on
> second attempt,
> 'http://pstatic.bestpriceninja.com/nwp/v0_0_773/release/Shared/Extra/IFrameStoreReciever.js'
> 'accept-encoding="gzip,%20deflate,%20sdch"'
> 2015/09/02 15:03:43 kid1| clientProcessHit: Vary object loop!
> 2015/09/02 15:03:59 kid1| varyEvaluateMatch: Oops. Not a Vary match on
> second attempt,
> 'http://static.xvideos.com/v2/css/xv-video-styles.css?v=7'
> 'accept-encoding="gzip,deflate"'
> 2015/09/02 15:03:59 kid1| clientProcessHit: Vary object loop!
> 2015/09/02 15:03:59 kid1| varyEvaluateMatch: Oops. Not a Vary match on
> second attempt, 'http://s7.addthis.com/js/250/addthis_widget.js'
> 'accept-encoding="gzip,deflate"'
> 2015/09/02 15:03:59 kid1| clientProcessHit: Vary object loop!
> 
> 
> 
> Later on I tested it with this short config file and the problem persisted:
>



> 
> Any ideas what could be wrong?

There are quite a few out-of-date things configured, or wrongly
configured, in that list.

What does your actual normal config contain?

Amos


Re: [squid-users] Lots of "Vary object loop!"

2015-09-03 Thread Sebastián Goicochea

Amos, I recompiled 3.5.8 with this configuration (removed ipv6 and ssl):

Squid Cache: Version 3.5.8
Service Name: squid
configure options:  '--prefix=/usr/local' '--datadir=/usr/local/share' 
'--bindir=/usr/local/sbin' '--libexecdir=/usr/local/lib/squid' 
'--localstatedir=/var' '--sysconfdir=/etc/squid3' '--enable-delay-pools' 
'--enable-linux-netfilter' '--enable-eui' '--enable-snmp' 
'--enable-gnuregex' '--enable-ltdl-convenience' 
'--enable-removal-policies=lru heap' '--enable-http-violations' 
'--with-openssl' '--with-filedescriptors=24321' '--enable-poll' 
'--enable-epoll' '--enable-storeio=ufs,aufs,diskd,rock'



Again formatted the partitions, started with this config (removed the
shared-memory-off setting, removed all refresh patterns) and no workers directive at all:


http_access allow localhost manager
http_access deny manager
acl purge method PURGE
http_access allow purge localhost
http_access deny purge
acl all src all
acl localhost src 127.0.0.1/32
acl localnet src 127.0.0.0/8
acl Safe_ports port 80
acl snmppublic snmp_community public
http_access deny !Safe_ports
http_access allow all
dns_v4_first on
cache_mem 1024 MB
maximum_object_size_in_memory 64 KB
memory_cache_mode always
maximum_object_size 26 KB
minimum_object_size 100 bytes
collapsed_forwarding on
logfile_rotate 5
mime_table /etc/squid3/mime.conf
debug_options ALL,1
store_id_access deny all
store_id_bypass on
quick_abort_min 0 KB
quick_abort_max 0 KB
quick_abort_pct 100
range_offset_limit 0
negative_ttl 1 minute
negative_dns_ttl 1 minute
read_ahead_gap 128 KB
request_header_max_size 100 KB
reply_header_max_size 100 KB
via off
half_closed_clients off
cache_mgr webmaster
cache_effective_user squid
cache_effective_group squid
httpd_suppress_version_string on
snmp_access allow snmppublic localhost
snmp_access deny all
snmp_incoming_address 127.0.0.1
error_directory /etc/squid3/errors/English
max_filedescriptors 65535
ipcache_size 1024
forwarded_for off
log_icp_queries off
icp_access allow localnet
icp_access deny all
htcp_access allow localnet
htcp_access deny all
digest_rebuild_period 15 minutes
digest_rewrite_period 15 minutes
strip_query_terms off
max_open_disk_fds 150
cache_replacement_policy heap LFUDA
memory_pools off
http_port 9001
http_port 901 tproxy
pid_filename /var/run/squid1.pid
visible_hostname localhost
snmp_port 1611
icp_port 3131
htcp_port 4828
cachemgr_passwd admin admin
if ${process_number} = 1
 access_log stdio:/var/log/squid/1/access.log squid
 cache_log /var/log/squid/1/cache.log
 cache_store_log none
 cache_swap_state /var/log/squid/1/%s.swap.state
else
 access_log none
 cache_log /dev/null
endif
cache_dir rock  /cache1/rock1 256  min-size=500 max-size=2000
cache_dir rock  /cache1/rock2 2000  min-size=2000 max-size=3
cache_dir diskd /cache1/diskd2 6 16 256 min-size=3 max-size=40
cache_dir diskd /cache2/2 10 16 256 min-size=40 max-size=1048576
cache_dir diskd /cache2/1 68 16 256 min-size=1048576



This config generates these processes:

# ps ax | grep squid
 9768 ?Ss 0:00 /usr/local/sbin/squid -f /etc/squid3/squid1.conf
 9770 ?S  0:00 (squid-coord-4) -f /etc/squid3/squid1.conf
 9771 ?S  0:01 (squid-disk-3) -f /etc/squid3/squid1.conf
 9772 ?S  0:00 (squid-disk-2) -f /etc/squid3/squid1.conf
 9773 ?S  1:13 (squid-1) -f /etc/squid3/squid1.conf


But still seeing all those Vary loops all the time

:(

Thanks,
Sebastian



On 03/09/15 at 15:48, Amos Jeffries wrote:

On 4/09/2015 6:24 a.m., Sebastián Goicochea wrote:

Regarding configure options, I disable IPv6 because of the latency that
it adds to DNS queries, enable-ssl could be removed, gnuregex gave no
problems (or so I think).

Those options in the config file are the core of my configuration. Just
stripped ACLs and that kind of stuff to make it shorter, and I also
stripped the part about the rewriter (because I have it commented out at the
moment).
Could any of the misconfigurations you mention be causing this
Vary loop?

In summary:
   it looks like you may have been using SMP workers in an unsafe manner
(sometime recently, perhaps) and screwed over your cache_dir. A full
cache re-scan is probably in order to fix it.


In detail:

What I noticed particularly was that you have a section of SMP
configuration, and that later you have "cache_dir diskd ..." without any
SMP protections. But what you posted did not include a "workers" directive, so
I was unsure.

If you have at any time run that config file with the "workers"
directive in it, then those diskd caches will have been randomly
overwriting each other's stored content, almost guaranteeing these types
of problem and other SWAPFAIL events as well. Even if workers was
only active for a short time, disk cache corruption is persistent.

You have two options.

1) wait until all the collisions have been found and erased. That could
take a while to happen naturally.

2) stop Squid, erase the swap.state in those cache_dir and restart

Re: [squid-users] Lots of "Vary object loop!"

2015-08-26 Thread Amos Jeffries
On 27/08/2015 4:11 a.m., Sebastián Goicochea wrote:
> Hello Amos, thanks for your help. I've disabled our rewriter helper but
> the errors remain the same. So I think that's not the reason.

You emptied the cache, or at least altered the cache_dir line to point
at a new empty cache_dir during the test after disabling the helper?
Anything it might have done to the cache contents is already done to the
data stored there by the time you disable it.


> I've been reading some older mails from the list, and a guy named Hussam
> Al-Tayeb exchanged some interesting mails with you a couple of months
> ago. That got me thinking: can I completely disable Vary checking? I
> know it is an HTTP violation and not recommended, but if I could disable it
> using an ACL for certain sites that are misconfigured, and I have the
> certainty that the content is exactly the same no matter what, I could
> get better performance. (It's OK if I have to patch something and
> recompile Squid.)
> If this is not possible, what about bypassing content that has
> Vary in its response header so Squid does not make these 2 lookups only
> to find that it has to retrieve it from the origin server anyway?

It's not a violation of HTTP. It is a critical internal validity check
for the cache index itself.

Preventing the contents of, say, your bank account display page being sent
to someone else fetching http://google.com/. That kind of critical.

If the Vary meta object is not pointing at the object it's supposed to,
then the object it was supposed to be pointing at could be anything
at all.


For your other question: yes, 3.5 has the store_miss directive now.
http://master.squid-cache.org/Doc/config/store_miss/
You can use ACLs in there to check for either the known URLs or the Vary
header's existence on replies, and prevent caching of those objects. I'm
not sure how that will interact with the Vary objects in your case, but
nobody I know of using it has mentioned any issues.
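
A minimal sketch of both approaches (the dstdomain value is only an example
taken from the URLs in your log):

# squid.conf: don't cache misses from a known-broken origin ...
acl broken_vary dstdomain .mlstatic.com
store_miss deny broken_vary

# ... or skip caching any reply that carries a Vary header at all.
acl has_vary rep_header Vary .
store_miss deny has_vary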

Amos



Re: [squid-users] Lots of "Vary object loop!"

2015-08-26 Thread Amos Jeffries
On 27/08/2015 7:53 a.m., Sebastián Goicochea wrote:
> After I sent you my previous email, I continued investigating the
> subject. I made a change in the source code as follows:
>
> File: /src/http.cc
>
> HttpStateData::haveParsedReplyHeaders()
> {
> .
> .
> #### THIS IS NEW STUFF ####
> if (rep->header.has(HDR_VARY)) {
>     rep->header.delById(HDR_VARY);
>     debugs(11,3, "Vary detected. Hack Cleaning it up");
> }
> #### END OF NEW STUFF ####
>
> #if X_ACCELERATOR_VARY
> if (rep->header.has(HDR_X_ACCELERATOR_VARY)) {
>     rep->header.delById(HDR_X_ACCELERATOR_VARY);
>     debugs(11,3, "HDR_X_ACCELERATOR_VARY Vary detected. Hack Cleaning it up");
> }
> #endif
> .
> .
>
>
> Deleting Vary from the header at this point gives me hits on every
> object I test (that previously didn't hit); the web browser never receives
> the Vary in the response header.
> Now I read your answer and you say that this is a critical validity
> check, and that worries me. Could taking away the Vary altogether at this point
> lead to the problems that you described? If that is the case, I
> have to investigate other alternatives.
>

I'll have to look into that function when I'm back at the code later to
confirm this. But IIRC that function is acting directly on a freshly
received reply message. You are not removing the validity check, you are
removing Squid's ability to see that it is a Vary object at all. So it is
never even cached as one.

The side effect of that is that clients asking for non-gzip can get the
cached gzip copy, etc., but at least it's the same URL. So the security
risks are gone. But the user experience is not always good either way.

Amos



Re: [squid-users] Lots of "Vary object loop!"

2015-08-26 Thread Sebastián Goicochea
After I sent you my previous email, I continued investigating the
subject. I made a change in the source code as follows:


File: /src/http.cc

HttpStateData::haveParsedReplyHeaders()
{
.
.
#### THIS IS NEW STUFF ####
if (rep->header.has(HDR_VARY)) {
    rep->header.delById(HDR_VARY);
    debugs(11,3, "Vary detected. Hack Cleaning it up");
}
#### END OF NEW STUFF ####

#if X_ACCELERATOR_VARY
if (rep->header.has(HDR_X_ACCELERATOR_VARY)) {
    rep->header.delById(HDR_X_ACCELERATOR_VARY);
    debugs(11,3, "HDR_X_ACCELERATOR_VARY Vary detected. Hack Cleaning it up");
}
#endif
.
.


Deleting Vary from the header at this point gives me hits on every
object I test (that previously didn't hit); the web browser never receives
the Vary in the response header.
Now I read your answer and you say that this is a critical validity
check, and that worries me. Could taking away the Vary altogether at this point
lead to the problems that you described? If that is the case, I
have to investigate other alternatives.



Thanks,
Sebastian




On 26/08/15 at 16:11, Amos Jeffries wrote:

On 27/08/2015 4:11 a.m., Sebastián Goicochea wrote:

Hello Amos, thanks for your help. I've disabled our rewriter helper but
the errors remain the same. So I think that's not the reason.

You emptied the cache, or at least altered the cache_dir line to point
at a new empty cache_dir during the test after disabling the helper?
Anything it might have done to the cache contents is already done to the
data stored there by the time you disable it.



I've been reading some older mails from the list, and a guy named Hussam
Al-Tayeb exchanged some interesting mails with you a couple of months
ago. That got me thinking: can I completely disable Vary checking? I
know it is an HTTP violation and not recommended, but if I could disable it
using an ACL for certain sites that are misconfigured, and I have the
certainty that the content is exactly the same no matter what, I could
get better performance. (It's OK if I have to patch something and
recompile Squid.)
If this is not possible, what about bypassing content that has
Vary in its response header so Squid does not make these 2 lookups only
to find that it has to retrieve it from the origin server anyway?


It's not a violation of HTTP. It is a critical internal validity check
for the cache index itself.

Preventing the contents of, say, your bank account display page being sent
to someone else fetching http://google.com/. That kind of critical.

If the Vary meta object is not pointing at the object it's supposed to,
then the object it was supposed to be pointing at could be anything
at all.


For your other question: yes, 3.5 has the store_miss directive now.
http://master.squid-cache.org/Doc/config/store_miss/
You can use ACLs in there to check for either the known URLs or the Vary
header's existence on replies, and prevent caching of those objects. I'm
not sure how that will interact with the Vary objects in your case, but
nobody I know of using it has mentioned any issues.

Amos



Re: [squid-users] Lots of "Vary object loop!"

2015-08-26 Thread Sebastián Goicochea
Hello Amos, thanks for your help. I've disabled our rewriter helper but
the errors remain the same. So I think that's not the reason.
I've been reading some older mails from the list, and a guy named Hussam
Al-Tayeb exchanged some interesting mails with you a couple of months
ago. That got me thinking: can I completely disable Vary checking? I
know it is an HTTP violation and not recommended, but if I could disable it
using an ACL for certain sites that are misconfigured, and I have the
certainty that the content is exactly the same no matter what, I could
get better performance. (It's OK if I have to patch something and
recompile Squid.)
If this is not possible, what about bypassing content that has
Vary in its response header so Squid does not make these 2 lookups only
to find that it has to retrieve it from the origin server anyway?




Thanks,
Sebastian


On 23/08/15 at 08:14, Amos Jeffries wrote:

On 22/08/2015 4:20 a.m., Sebastian Goicochea wrote:

Hello everyone, I'm having a strange problem:

Several servers, same hardware, using the same version of squid (3.5.4)
compiled using the same configure options, same configuration files. But
in two of them I get LOTS of these "Vary object loop!" lines in cache.log:

2015/08/21 13:07:52 kid1| varyEvaluateMatch: Oops. Not a Vary match on second attempt, 'http://resources.mlstatic.com/frontend/vip-fend-webserver/assets/bundles/photoswipe-6301b943e5586fe729e5d6480120a893.js' 'accept-encoding="gzip"'
2015/08/21 13:07:52 kid1| clientProcessHit: Vary object loop!
2015/08/21 13:07:52 kid1| varyEvaluateMatch: Oops. Not a Vary match on second attempt, 'http://www.google.com/afs/ads/i/iframe.html' 'accept-encoding="gzip,%20deflate"'
2015/08/21 13:07:52 kid1| clientProcessHit: Vary object loop!
2015/08/21 13:08:01 kid1| varyEvaluateMatch: Oops. Not a Vary match on second attempt, 'http://minicuotas.ribeiro.com.ar/images/products/large/035039335000.jpg' 'accept-encoding="gzip,%20deflate"'
2015/08/21 13:08:01 kid1| clientProcessHit: Vary object loop!

I've read what I could find on forums but could not solve it. Is this
something to worry about?

The short answer:

Yes and no. Squid is signalling that it is completely unable to perform
its caching duty for these URLs. The proxying duty continues with only
high latency visible to the client.

It is up to you whether that latency cost is urgent or not. It is
certainly of high enough importance that you need to be told each time (no
rate limiting) when you have asked to receive important notices.



If that is not the case, how can I disable the
excessive logging?

You can reduce your logging level to show only critical problems,
instead of showing all details rated 'important'.

   debug_options ALL,0

NOTE: important (ALL,1) includes a lot of things like this that do
really need to be fixed to get better service out of either your proxy
or the underlying network. But they can be put on your todo list if you don't
have time right now.



What is the condition that generates this?

In long:


What's happening is:

Your cache contains an object which was delivered by the server along
with headers stating that behind the URL is a large set of possible
responses. *all* requests for that URL use a certain set of headers
(listed in Vary) to determine which binary-level object is applicable
(or not) on a per-client / per-request basis.
 In order to cache the object Squid has to follow that same selection
criteria *exactly*.

The most common example is gzip vs non-gzip encoded copies of things, which
you can see those messages relate to.

Squid stores this information in a "Vary object" associated with only
the URL. That Vary object is used to perform a secondary cache index
lookup to see if the particular variant needed is stored.

The expectation is that there would be 3+ objects stored for this URL: a
gzip data object, various non-gzip data objects, and a metadata object
(the "Vary object") telling Squid that it needs to look at the
accept-encoding header to find which of those data objects to send
the client.


The messages themselves mean:

"Oops. Not a Vary match on second attempt"

 - that the Vary object saying "look at headers X+Y+Z" is pointing at
itself or at another Vary metadata object saying "look at some other
headers". A URL cannot have two different Vary header values
simultaneously (Vary is a single list value).
Something really weird is going on in your cache. Squid should handle
this by abandoning the cache lookups and going to the origin for fresh copies.

You could be causing it by using url-rewrite or store-id helpers wrongly
to pass requests for a URL to servers which produce different responses.
So that is well worth looking into.

IMPORTANT: It is mandatory that any re-writing only be done to
'collapse' URLs that are *actually* producing identical objects and
producing them in (outwardly) identical ways. This Vary looping is just
the tip of an iceberg of truly horrible failures that occur silently
with re-writing.




Re: [squid-users] Lots of "Vary object loop!"

2015-08-26 Thread Yuri Voinov

Btw,

when will Squid itself directly support gzip/deflate compression?

On 27.08.15 at 2:15, Amos Jeffries wrote:
> On 27/08/2015 7:53 a.m., Sebastián Goicochea wrote:
>> After I sent you my previous email, I continued investigating the
>> subject. I made a change in the source code as follows:
>>
>> File: /src/http.cc
>>
>> HttpStateData::haveParsedReplyHeaders()
>> {
>> .
>> .
>> #### THIS IS NEW STUFF ####
>> if (rep->header.has(HDR_VARY)) {
>>     rep->header.delById(HDR_VARY);
>>     debugs(11,3, "Vary detected. Hack Cleaning it up");
>> }
>> #### END OF NEW STUFF ####
>>
>> #if X_ACCELERATOR_VARY
>> if (rep->header.has(HDR_X_ACCELERATOR_VARY)) {
>>     rep->header.delById(HDR_X_ACCELERATOR_VARY);
>>     debugs(11,3, "HDR_X_ACCELERATOR_VARY Vary detected. Hack Cleaning it up");
>> }
>> #endif
>> .
>> .
>>
>>
>> Deleting Vary from the header at this point gives me hits on every
>> object I test (that previously didn't hit); the web browser never receives
>> the Vary in the response header.
>> Now I read your answer and you say that this is a critical validity
>> check, and that worries me. Could taking away the Vary altogether at this point
>> lead to the problems that you described? If that is the case, I
>> have to investigate other alternatives.
>
> I'll have to look into that function when I'm back at the code later to
> confirm this. But IIRC that function is acting directly on a freshly
> received reply message. You are not removing the validity check, you are
> removing Squid's ability to see that it is a Vary object at all. So it is
> never even cached as one.
>
> The side effect of that is that clients asking for non-gzip can get the
> cached gzip copy, etc., but at least it's the same URL. So the security
> risks are gone. But the user experience is not always good either way.
>
> Amos





Re: [squid-users] Lots of "Vary object loop!"

2015-08-23 Thread Amos Jeffries
On 22/08/2015 4:20 a.m., Sebastian Goicochea wrote:
> Hello everyone, I'm having a strange problem:
>
> Several servers, same hardware, using the same version of squid (3.5.4)
> compiled using the same configure options, same configuration files. But
> in two of them I get LOTS of these "Vary object loop!" lines in cache.log:
>
> 2015/08/21 13:07:52 kid1| varyEvaluateMatch: Oops. Not a Vary match on second attempt, 'http://resources.mlstatic.com/frontend/vip-fend-webserver/assets/bundles/photoswipe-6301b943e5586fe729e5d6480120a893.js' 'accept-encoding="gzip"'
> 2015/08/21 13:07:52 kid1| clientProcessHit: Vary object loop!
> 2015/08/21 13:07:52 kid1| varyEvaluateMatch: Oops. Not a Vary match on second attempt, 'http://www.google.com/afs/ads/i/iframe.html' 'accept-encoding="gzip,%20deflate"'
> 2015/08/21 13:07:52 kid1| clientProcessHit: Vary object loop!
> 2015/08/21 13:08:01 kid1| varyEvaluateMatch: Oops. Not a Vary match on second attempt, 'http://minicuotas.ribeiro.com.ar/images/products/large/035039335000.jpg' 'accept-encoding="gzip,%20deflate"'
> 2015/08/21 13:08:01 kid1| clientProcessHit: Vary object loop!
>
> I've read what I could find on forums but could not solve it. Is this
> something to worry about?

The short answer:

Yes and no. Squid is signalling that it is completely unable to perform
its caching duty for these URLs. The proxying duty continues with only
high latency visible to the client.

It is up to you whether that latency cost is urgent or not. It is
certainly of high enough importance that you need to be told each time (no
rate limiting) when you have asked to receive important notices.


> If that is not the case, how can I disable the
> excessive logging?

You can reduce your logging level to show only critical problems,
instead of showing all details rated 'important'.

  debug_options ALL,0

NOTE: important (ALL,1) includes a lot of things like this that do
really need to be fixed to get better service out of either your proxy
or the underlying network. But they can be put on your todo list if you don't
have time right now.


> What is the condition that generates this?


In long:


What's happening is:

Your cache contains an object which was delivered by the server along
with headers stating that behind the URL is a large set of possible
responses. *all* requests for that URL use a certain set of headers
(listed in Vary) to determine which binary-level object is applicable
(or not) on a per-client / per-request basis.
 In order to cache the object Squid has to follow that same selection
criteria *exactly*.

The most common example is gzip vs non-gzip encoded copies of things, which
you can see those messages relate to.

Squid stores this information in a "Vary object" associated with only
the URL. That Vary object is used to perform a secondary cache index
lookup to see if the particular variant needed is stored.

The expectation is that there would be 3+ objects stored for this URL: a
gzip data object, various non-gzip data objects, and a metadata object
(the "Vary object") telling Squid that it needs to look at the
accept-encoding header to find which of those data objects to send
the client.
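
One way to watch that variant selection happen is to repeat a request through
the proxy with different Accept-Encoding values and compare Squid's X-Cache
reply header (the 9001 port comes from the config posted elsewhere in this
thread; the URL is illustrative):

# First fetch populates the gzip variant; repeating it should HIT.
curl -s -o /dev/null -D - -x http://127.0.0.1:9001 \
     -H 'Accept-Encoding: gzip' http://example.com/asset.js | grep -i X-Cache
# A different Accept-Encoding selects (and caches) a separate variant.
curl -s -o /dev/null -D - -x http://127.0.0.1:9001 \
     -H 'Accept-Encoding: identity' http://example.com/asset.js | grep -i X-Cache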


The messages themselves mean:

"Oops. Not a Vary match on second attempt"

 - that the Vary object saying "look at headers X+Y+Z" is pointing at
itself or at another Vary metadata object saying "look at some other
headers". A URL cannot have two different Vary header values
simultaneously (Vary is a single list value).
Something really weird is going on in your cache. Squid should handle
this by abandoning the cache lookups and going to the origin for fresh copies.

You could be causing it by using url-rewrite or store-id helpers wrongly
to pass requests for a URL to servers which produce different responses.
So that is well worth looking into.

IMPORTANT: It is mandatory that any re-writing only be done to
'collapse' URLs that are *actually* producing identical objects and
producing them in (outwardly) identical ways. This Vary looping is just
the tip of an iceberg of truly horrible failures that occur silently
with re-writing.



There is another similar message that can be mixed into the long list:

"Oops. Not a Vary object on second attempt" (note the 1-word difference)
 - this is almost but not quite so bad, and is usually seen with broken
origin servers. All you can do about the problem itself then is fire off
bug reports to people and hope it gets fixed by the sysadmin in charge.


Both situations are very bad for HTTP performance, and bad for churning
your cache as well. But Squid can cope easily enough by just fetching a
new object and dropping what is in the cache. That "Vary object loop!"
message is telling you Squid is doing exactly that.


A quick test with the tool at redbot.org shows that the
resources.mlstatic.com server is utterly borked. It is not even sending
correct ETag ids for the objects it's outputting. That's a sign to me that
the admin is trying to be smart with headers, and getting it very badly
wrong.