Re: [squid-users] Cache Chrome updates

2014-04-16 Thread Jasper Van Der Westhuizen

  On Tue, 2014-04-15 at 13:11 +0100, Nick Hill wrote:
  This may be the culprit
 
  hierarchy_stoplist cgi-bin ?
 
  I believe this will prevent caching of any URL containing a ?
 
  
  Should I remove the ? and leave cgi-bin?
 
 You can remove the whole line quite safely.
 
 It prevents cache_peers being sent requests that match the regex
 patterns listed. Since it is now very rare to find a peer that cannot
 support those requests...
 
 Amos

Thanks Amos. I will remove the string and test.

Regards
Jasper
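The effect discussed above can be sanity-checked outside Squid. The stoplist line carries two patterns, and the bare "?" matches any URL containing a query string. A minimal Python sketch of that matching (the example URLs are made up):

```python
import re

# The two stoplist entries from the squid.conf line under discussion.
# "?" must be escaped when treated as a regex; it matches any URL that
# carries a query string, and "cgi-bin" any URL with that path component.
STOPLIST = [r"cgi-bin", r"\?"]

def hits_stoplist(url):
    """True if any stoplist pattern occurs anywhere in the URL."""
    return any(re.search(p, url) for p in STOPLIST)

print(hits_stoplist("http://example.com/static/update.cab"))     # False
print(hits_stoplist("http://example.com/update.cab?token=abc"))  # True
print(hits_stoplist("http://example.com/cgi-bin/search"))        # True
```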


Re: [squid-users] Cache Chrome updates

2014-04-16 Thread Jasper Van Der Westhuizen


On Tue, 2014-04-15 at 14:38 +0100, Nick Hill wrote:
 URLs with query strings have traditionally returned dynamic content.
 Consequently, http caches by default tend not to cache content when
 the URL has a query string.
 
 In recent years, notably Microsoft and indeed many others have adopted
 a habit of putting query strings on static content.
 
 This could be somewhat inconvenient on days when Microsoft pushes out a
 new 4 GB update for Windows 8 and you have many such devices connected
 to your nicely cached network. Each device will download exactly the
 same content, but with its own query string.
 
 The net result is the generation of a huge amount of network traffic,
 often for surprisingly minor updates.
 
 I am currently testing a new Squid configuration which identifies the
 SHA1 hash of the Windows update in the URL, then returns the
 bit-perfect cached content irrespective of a wide set of URL changes.
 I have it in production in a busy computer repair centre and am
 monitoring the results. So far, very promising.

Hi Nick

As you rightly said, Windows 8 devices are becoming more and more common
now, especially in the workplace. I don't want to download the same 4GB
update multiple times. Would you mind sharing your SHA1 hash
configuration, or is it perhaps available somewhere?

Regards
Jasper


Re: [squid-users] Cache Chrome updates

2014-04-16 Thread Nick Hill
Hi Jasper
I have compiled 3.4 to provide the store_id functionality implemented
by Eliezer.

I have it running in a production heterogeneous environment.
I'm still checking for bugs, but it seems to work well.

#squid.conf file for Squid Cache: Version 3.4.4
#compiled on Ubuntu with configure options: '--enable-async-io=8'
#'--enable-storeio=ufs,aufs,diskd' '--enable-removal-policies=lru,heap'
#'--enable-delay-pools' '--enable-underscores' '--enable-icap-client'
#'--enable-follow-x-forwarded-for' '--with-logdir=/var/log/squid3'
#'--with-pidfile=/var/run/squid3.pid' '--with-filedescriptors=65536'
#'--with-large-files' '--with-default-user=proxy'
#'--enable-linux-netfilter' '--enable-storeid-rewrite-helpers=file'

#Recommendations: in full production, you may want to set debug_options
#from 2 to 1 or 0.
#You may also want to comment out "strip_query_terms off" for user privacy

logformat squid  %tg.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt

#Explicitly define logs for my compiled version
cache_store_log /var/log/squid3/store.log
access_log /var/log/squid3/access.log
cache_log /var/log/squid3/cache.log


#Lets have a fair bit of debugging info
debug_options ALL,2
#Include query strings in logs
strip_query_terms off

acl all src all
#Which domains do windows updates come from?
acl windowsupdate dstdomain .ws.microsoft.com
acl windowsupdate dstdomain .download.windowsupdate.com

acl QUERY urlpath_regex cgi-bin \?

#I'm behind a NAT firewall, so I don't need to restrict access
http_access allow all

#Uncomment these if you have web apps on the local server which auth
#through local ip
#acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
#http_access deny to_localhost

visible_hostname myclient.hostname.com
http_port 3128

#Always optimise bandwidth over hits
cache_replacement_policy heap LFUDA

#Windows update files are HUGE! I have set this to 6 GB.
#A recent (as of Apr 2014) Windows 8 update file is 4 GB
maximum_object_size 6 GB

#Set these according to your file system
cache_dir ufs /home/smb/squid/squid 7 16 256
coredump_dir /home/smb/squid/squid


#Guaranteed static content from Microsoft. Usually fetched with range
#requests, so let's not revalidate. Underscore, 40 hex (SHA1 hash), .extension
refresh_pattern _[0-9a-f]{40}\.(cab|exe|esd|psf|zip|msi|appx) 518400 80% 518400 override-lastmod override-expire ignore-reload ignore-must-revalidate ignore-private
#Otherwise potentially variable
refresh_pattern -i ws.microsoft.com/.*\.(cab|exe|ms[iuf]|asf|wm[va]|dat|zip|psf|appx|esd) 43200 80% 43200 reload-into-ims
refresh_pattern -i download.windowsupdate.com/.*\.(cab|exe|ms[iuf]|asf|wm[va]|dat|zip|psf|appx|esd) 43200 80% 43200 reload-into-ims
#Default refresh patterns last if no others match
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320

#Directive sets I have been experimenting with
#override-lastmod override-expire ignore-reload ignore-must-revalidate
#ignore-private
#reload-into-ims

#Windows updates use a lot of range requests. The only way to deal with this
#in Squid is to fetch the whole file as soon as requested
range_offset_limit -1 windowsupdate
quick_abort_min -1 KB windowsupdate


#My internet connection is not just used for Squid. I want to leave
#responsive bandwidth for other services. This limits D/L speed
delay_pools 1
delay_class 1 1
delay_access 1 allow all
delay_parameters 1 120/120

#We use the store_id helper to convert windows update file hashes to bare URLs.
#This way, any fetch for a given hash embedded in the URL will deliver
#the same data.
#You must make your own /etc/squid3/storeid_rewrite; instructions at end.
#Change the helper program location from
#/usr/local/squid/libexec/storeid_file_rewrite to wherever yours is.
#It is written in Perl, so on most Linux systems, put it somewhere
#convenient and chmod 755 filename.
store_id_program /usr/local/squid/libexec/storeid_file_rewrite /etc/squid3/storeid_rewrite
store_id_children 10 startup=5 idle=3 concurrency=0
store_id_access allow windowsupdate
store_id_access deny all

#We want to cache windowsupdate URLs which include queries
#but only those queries which act on an installable file.
#we don't want to cache queries on asp files as this is a genuine server
#side query as opposed to just a cache breaker
acl wupdatecachablequery urlpath_regex (cab|exe|ms[iuf]|asf|wm[va]|dat|zip|psf|appx|appxbundle|esd)\?

cache allow windowsupdate wupdatecachablequery
cache deny QUERY

#Given windows update is un-cooperative towards third party
#methods to reduce network bandwidth, it is safe to presume
#cache-specific headers or dates significantly differing from
#system date will be unhelpful
reply_header_access Date deny windowsupdate
reply_header_access Age deny windowsupdate

#Put the following line in /etc/squid3/storeid_rewrite, omitting the
#leading hash. A tab separates the two fields.
#_([0-9a-f]{40})\.(cab|exe|ms[iuf]|asf|wm[va]|dat|zip|psf|appx|esd)	http://wupdate.squid.local/$1
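A rough illustration of what this configuration achieves, sketched in Python rather than Squid's own matching (the regexes mirror the acl and rewrite patterns in the config; the URLs are made-up examples): two fetches of the same hash-named file through different mirrors and query strings collapse onto one store key, while genuine server-side queries stay uncached.

```python
import re

# Regexes mirroring the storeid_rewrite rule and the wupdatecachablequery
# acl from the config above (character classes written as [iuf]/[va]).
HASH_RULE = re.compile(
    r"_([0-9a-f]{40})\.(cab|exe|ms[iuf]|asf|wm[va]|dat|zip|psf|appx|esd)")
WU_QUERY = re.compile(
    r"(cab|exe|ms[iuf]|asf|wm[va]|dat|zip|psf|appx|appxbundle|esd)\?")

def store_id(url):
    """Collapse hash-named update files onto one internal store key."""
    m = HASH_RULE.search(url)
    return "http://wupdate.squid.local/" + m.group(1) if m else url

def may_cache(urlpath, is_windowsupdate):
    """First-match sketch of: cache allow windowsupdate
    wupdatecachablequery / cache deny QUERY / default allow."""
    if is_windowsupdate and WU_QUERY.search(urlpath):
        return True
    if re.search(r"cgi-bin|\?", urlpath):  # the QUERY acl
        return False
    return True

h = "0f" * 20  # stand-in 40-hex-character SHA1 hash
a = store_id("http://a.download.windowsupdate.com/x_%s.cab?p=1" % h)
b = store_id("http://b.download.windowsupdate.com/y_%s.cab?p=2" % h)
print(a == b)                                  # True: one cached copy serves both
print(may_cache("/x_%s.cab?p=1" % h, True))    # True: cacheable update query
print(may_cache("/search.asp?q=squid", True))  # False: genuine server-side query
```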


Re: [squid-users] Cache Chrome updates

2014-04-16 Thread Amos Jeffries
Hi Nick,
 could you add a section for the WU SHA1 patterns to our DB of useful
StoreID patterns please:

 http://wiki.squid-cache.org/Features/StoreID/DB

Cheers
Amos


[squid-users] Re: Squid 3.3.4 - Zero Sized Reply for HTTP POST

2014-04-16 Thread tomsl
Additional information: This only happens when squid uses HTTPS to connect to
the origin server. HTTP appears to work fine.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-3-4-Zero-Sized-Reply-for-HTTP-POST-tp4665601p4665609.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Happy eyeballs and https

2014-04-16 Thread Rob van der Putten

Hi there


Happy eyeballs (IPv4 fallback) doesn't seem to work with https (Squid 
3.3). Works OK with http.

Any suggestions?


Regards,
Rob



Re: [squid-users] Happy eyeballs and https

2014-04-16 Thread Amos Jeffries
On 16/04/2014 10:45 p.m., Rob van der Putten wrote:
 Hi there
 
 
 Happy eyeballs (IPv4 fallback) doesn't seem to work with https (Squid
 3.3). Works OK with http.
 Any suggestions?

If the TCP connection to the server succeeds, it is a success from the
HTTP layer's viewpoint. Whatever happens with the TLS or wrapped HTTP
layer inside the tunnel is between the server and client alone.

So, what is the failure *exactly*?

Amos


[squid-users] Re: Happy eyeballs and https

2014-04-16 Thread Rob van der Putten

Hi there


Amos Jeffries wrote:


If the TCP connection to the server succeeds, it is a success from the
HTTP layer's viewpoint. Whatever happens with the TLS or wrapped HTTP
layer inside the tunnel is between the server and client alone.

So, what is the failure *exactly*?


This morning my ISP's tunnel server (6in4) failed. This happened after
replacing a router. I don't know if this is an IPv4 router, an IPv6
router, or both. I couldn't ping the remote IPv6 address of the tunnel,
or the IPv4 address of the tunnel server. Things returned to normal
after they rebooted the tunnel server.


This is a log entry;
1397639045.383   1940 pc6.ip6.sput.nl TCP_MISS/503 0 CONNECT 
www.xs4all.nl:443 - HIER_NONE/- -


My connect_timeout is 2 seconds which works fine for http. But https 
pages just wouldn't load.



Regards,
Rob




[squid-users] Squid 3.4.4 and SSL Bump not working (error (92) Protocol not available)

2014-04-16 Thread Ict Security
 Hello to everybody,

we use Squid for transparent http proxying and everything is all right.

I followed some howtos and added SSL Bump transparent interception.

In squid.conf i have:

http_port 3127 intercept  ssl-bump generate-host-certificates=on
dynamic_cert_mem_cache_size=4MB cert=/etc/squid/myCA.pem
acl broken_sites dstdomain .example.com
ssl_bump none localhost
ssl_bump none broken_sites
ssl_bump server-first all
sslcrtd_program /usr/lib/squid/ssl_crtd -s /usr/lib/squid/ssl_db -M 4MB
sslcrtd_children 30


and in iptables i added this directive:

 -A PREROUTING -p tcp -s 192.168.10.8 --dport 443 -j DNAT
--to-destination 192.168.10.254:3127

HTTP surfing still works, but when I connect, for example, to
https://www.google.com the browser returns an error page and I get
these log entries:

2014/04/16 16:08:27 kid1| ERROR: NF getsockopt(ORIGINAL_DST) failed on
local=192.168.10.254:3127 remote=192.168.10.8:58831 FD 15 flags=33:
(92) Protocol not available
2014/04/16 16:08:27 kid1| ERROR: NF getsockopt(ORIGINAL_DST) failed on
local=192.168.10.254:3127 remote=192.168.10.8:58832 FD 15 flags=33:
(92) Protocol not available
2014/04/16 16:08:27 kid1| ERROR: NF getsockopt(ORIGINAL_DST) failed on
local=192.168.10.254:3127 remote=192.168.10.8:58833 FD 15 flags=33:
(92) Protocol not available

I read some similar posts but did not manage to find or apply the solution.

Thank you a lot and best regards!

Francesco


Re: [squid-users] Squid 3.4.4 and SSL Bump not working (error (92) Protocol not available)

2014-04-16 Thread Amm



On 04/16/2014 07:45 PM, Ict Security wrote:

  Hello to everybody,

we use Squid for transparent http proxying and everything is all right.


http_port 3127 intercept  ssl-bump generate-host-certificates=on
dynamic_cert_mem_cache_size=4MB cert=/etc/squid/myCA.pem

  -A PREROUTING -p tcp -s 192.168.10.8 --dport 443 -j DNAT
--to-destination 192.168.10.254:3127


For 443 intercept, use https_port, not http_port.

Amm.


[squid-users] Re: Happy eyeballs and https

2014-04-16 Thread Rob van der Putten

Hi there


Rob van der Putten wrote:


This morning my ISP's tunnel server (6in4) failed. This happened after
replacing a router. I don't know if this is an IPv4 router, an IPv6
router, or both. I couldn't ping the remote IPv6 address of the tunnel,
or the IPv4 address of the tunnel server. Things returned to normal
after they rebooted the tunnel server.

This is a log entry;
1397639045.383 1940 pc6.ip6.sput.nl TCP_MISS/503 0 CONNECT
www.xs4all.nl:443 - HIER_NONE/- -

My connect_timeout is 2 seconds which works fine for http. But https
pages just wouldn't load.


Blocking IPv6 connections to port 443 in my firewall has exactly the 
same effect: The browser reports an error. There is no fallback to IPv4.



Regards,
Rob



Re: [squid-users] Re: Happy eyeballs and https

2014-04-16 Thread Amos Jeffries
On 17/04/2014 2:30 a.m., Rob van der Putten wrote:
 Hi there
 
 
 Rob van der Putten wrote:
 
 This morning my isp's tunnelserver (6in4) failed. This happened after
 replacing a router. I don't know if this is an IPv4 router, IPv6 or both.
 I couldn't ping the remote IPv6 address of the tunnel, or the IPv4
 address of the tunnel server. Things returned to normal after they
 rebooted the tunnel server.

 This is a log entry;
 1397639045.383 1940 pc6.ip6.sput.nl TCP_MISS/503 0 CONNECT
 www.xs4all.nl:443 - HIER_NONE/- -

 My connect_timeout is 2 seconds which works fine for http. But https
 pages just wouldn't load.
 
 Blocking IPv6 connections to port 443 in my firewall has exactly the
 same effect: The browser reports an error. There is no fallback to IPv4.
 

If you have time to dig into it, the logic for CONNECT is in src/tunnel.cc.

NP: The peerSelect logic produces a list of potential destinations which
are supposed to be walked through and attempted until one succeeds.
Failure is sent to the client only when there are none left to try or a
connection timeout aborts the overall process.
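For illustration, a sketch (in Python, not Squid's actual code) of that destination walk, assuming a simple ordered list of candidate addresses; the client only sees a failure once every destination has been tried:

```python
# Candidates are attempted in order, the first successful connect wins,
# and an error surfaces only when every one has failed.
def connect_first(destinations, connect):
    errors = []
    for dest in destinations:
        try:
            return connect(dest)         # first success ends the walk
        except OSError as exc:
            errors.append((dest, exc))   # record and try the next candidate
    raise OSError("all destinations failed: %r" % errors)

# Simulate an unreachable IPv6 path with a working IPv4 fallback
# (addresses are made-up documentation examples):
def fake_connect(dest):
    if ":" in dest:
        raise OSError("connect timeout")
    return "connected to " + dest

print(connect_first(["2001:db8::1", "192.0.2.1"], fake_connect))
# connected to 192.0.2.1
```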

For a cache.log trace of the tunnel operations use debug_options 26,4.
Look for "comm failure recovery." to see when failover is attempted.

HTH
Amos



[squid-users] Re: Happy eyeballs and https

2014-04-16 Thread Rob van der Putten

Hi there


Amos Jeffries wrote:


If you have time to dig into it, the logic for CONNECT is in src/tunnel.cc.

NP: The peerSelect logic produces a list of potential destinations which
are supposed to be walked through and attempted until one succeeds.
Failure is sent to the client only when there are none left to try or a
connection timeout aborts the overall process.

For a cache.log trace of the tunnel operations use debug_options 26,4.
Look for "comm failure recovery." to see when failover is attempted.


This is a test with https://lists.debian.org/
IP addresses 82.195.75.100 and 2001:41b8:202:deb:216:36ff:fe40:4002
It seems to decide to try again, without actually trying again;

2014/04/16 19:08:41.178 kid1| tunnel.cc(650) tunnelStart:
2014/04/16 19:08:41.179 kid1| tunnel.cc(680) tunnelStart: 'CONNECT 
lists.debian.org:443 HTTP/1.1'
2014/04/16 19:08:41.211 kid1| tunnel.cc(768) tunnelPeerSelectComplete: 
paths=2, p[0]={local=[::] 
remote=[2001:41b8:202:deb:216:36ff:fe40:4002]:443 flags=1}, 
serverDest[0]={local=[::] 
remote=[2001:41b8:202:deb:216:36ff:fe40:4002]:443 flags=1}
2014/04/16 19:08:41.211 kid1| AsyncCall.cc(18) AsyncCall: The AsyncCall 
tunnelConnectDone constructed, this=0xb850cbd0 [call3934863]
2014/04/16 19:08:43.960 kid1| AsyncCall.cc(85) ScheduleCall: 
ConnOpener.cc(132) will call tunnelConnectDone(local=[::] 
remote=[2001:41b8:202:deb:216:36ff:fe40:4002]:443 flags=1, errno=110, 
flag=-4, data=0xbaae7270) [call3934863]
2014/04/16 19:08:43.961 kid1| AsyncCallQueue.cc(51) fireNext: entering 
tunnelConnectDone(local=[::] 
remote=[2001:41b8:202:deb:216:36ff:fe40:4002]:443 flags=1, errno=110, 
flag=-4, data=0xbaae7270)
2014/04/16 19:08:43.961 kid1| AsyncCall.cc(30) make: make call 
tunnelConnectDone [call3934863]
2014/04/16 19:08:43.961 kid1| tunnel.cc(577) tunnelConnectDone: 
local=[::] remote=[2001:41b8:202:deb:216:36ff:fe40:4002]:443 flags=1, 
comm failure recovery.
2014/04/16 19:08:43.961 kid1| tunnel.cc(599) tunnelConnectDone: 
terminate with error.
2014/04/16 19:08:43.961 kid1| AsyncCallQueue.cc(53) fireNext: leaving 
tunnelConnectDone(local=[::] 
remote=[2001:41b8:202:deb:216:36ff:fe40:4002]:443 flags=1, errno=110, 
flag=-4, data=0xbaae7270)

2014/04/16 19:08:43.962 kid1| tunnel.cc(557) tunnelErrorComplete: FD 9
2014/04/16 19:08:43.962 kid1| tunnel.cc(168) tunnelClientClosed: 
local=[2001:888:1533:1::1]:8080 remote=[2001:888:1533:1::6]:53488 flags=1
2014/04/16 19:08:43.962 kid1| tunnel.cc(185) tunnelStateFree: 
tunnelState=0xbaae7270



Regards,
Rob



Re: [squid-users] Cache Chrome updates

2014-04-16 Thread Eliezer Croitoru

Hey Amos,

I have a tiny question which I am not sure about the answer to (related
to the topic).
What would happen in the case where we deny reply or request headers?
Would Squid look at the Vary header (as an example) and decide it is a
Vary object, or would it see the request and/or response without those
headers?
Or is Squid supposed to write the full response headers to the disk and
memory object in the same shape they were received from the server?

As far as I can tell, ICAP removes the headers before the object is cached to disk.

Eliezer

On 04/16/2014 09:51 AM, Amos Jeffries wrote:

Hi Nick,
  could you add a section for the WU SHA1 patterns to our DB of useful
StoreID patterns please:

  http://wiki.squid-cache.org/Features/StoreID/DB

Cheers
Amos





Re: [squid-users] Re: Squid 3.3.4 - Zero Sized Reply for HTTP POST

2014-04-16 Thread Eliezer Croitoru

On 04/16/2014 01:36 PM, tomsl wrote:

Additional information: This only happens when squid uses HTTPS to connect to
the origin server. HTTP appears to work fine.
This issue came up as a topic before, but we never really had the chance
to verify it fully.


Can you file a bug in the bugzilla?
http://bugs.squid-cache.org/

This way we can track it and test it.

I am not sure if there is an open bugzilla report on it but it will be 
more convenient to follow it in the bugzilla.


Can you add more details?
squid -v output.
squid.conf content (removing any confidential details such as passwords
and/or public IP addresses)

What OS are you using for the squid service?
And what is the server that you send the POST request to sitting on?


Since the issue can stem from many causes, not all of which are squid,
we need to verify the issue first.


Eliezer


[squid-users] generate-host-certficates

2014-04-16 Thread James Lay
From the squid.conf.documented:

#   SSL Bump Mode Options:
#       In addition to these options ssl-bump requires TLS/SSL options.
#
#   generate-host-certificates[=on|off]
#       Dynamically create SSL server certificates for the
#       destination hosts of bumped CONNECT requests. When
#       enabled, the cert and key options are used to sign
#       generated certificates. Otherwise generated
#       certificates will be self-signed.
#       If there is a CA certificate, the lifetime of the
#       generated certificate equals the lifetime of the CA
#       certificate. If the generated certificate is
#       self-signed, the lifetime is three years.
#       This option is enabled by default when ssl-bump is used.
#       See the ssl-bump option above for more information.

I did not find this to be the case and had to add it to my https_port
line:

https_port bleh:3129 intercept generate-host-certificates=on ssl-bump
cert=/opt/sslsplit/sslsplit.crt key=/opt/sslsplit/sslsplitca.key
options=ALL

Thank you.

James

