Re: [squid-users] https://weather.yahoo.com redirect loop

2014-08-20 Thread Amm


On 08/20/2014 10:52 AM, Jatin Bhasin wrote:

And when I browse to https://weather.yahoo.com then it goes in
redirect loop. I am using Chrome browser and I get a message at
the end saying 'This webpage has a redirect loop'.


Happens in 3.4 series too.

I added these in squid.conf as a solution:

via off
forwarded_for delete

Amm


Re: [squid-users] unbound and squid not resolving SSL sites

2014-08-20 Thread Amos Jeffries
On 20/08/2014 1:12 p.m., Eliezer Croitoru wrote:
 I wasn't sure but I am now.
 You are doing something wrong and I cannot tell what exactly.
 Try to share this script output:
 http://www1.ngtech.co.il/squid/basic_data.sh
 
 There are missing parts in the whole setup such as clients IP and server
 IP, what GW are you using etc..
 
 Eliezer


The mistake is probably expecting DNS-based forgery to hijack the
connections.

When receiving HTTPS all Squid has to work with are the two TCP packet
IP addresses. If one of them is the client IP and the other is forged by
DNS (unbound), what server is to be contacted?

The hostname from the accel hack is buried inside the encryption, which
has not yet arrived from the client. So Squid would have to decrypt some
future traffic in order to discover what server to contact right now to
get the certificate details which need to be emitted in order to start
decrypting that future traffic. An impossible situation.
 But Squid is not aware of that; it just uses the TCP packet dst IP
(itself) and tries to get the server TLS certificate from there,
entering an infinite loop of lookups instead of a useful decryption.


proxyplayer.co.uk: why are you using unbound for this at all?

Amos



Re: [squid-users] Poor cache

2014-08-20 Thread Amos Jeffries
On 20/08/2014 9:21 a.m., Délsio Cabá wrote:
 Hi guys,
 Need some help on cache. Basically I do not see many caches.
 
[root@c /]# cat /var/log/squid/access.log | awk '{print $4}' | sort |
 uniq -c | sort -rn
   17403 TCP_MISS/200
3107 TCP_MISS/304

 - objects in the client browser cache were used.

1903 TCP_MISS/000

 - server was contacted but no response came back. This is bad. Seeing
it in such numbers is very bad.
 It is a strong sign that TCP window scaling, ECN or ICMP blocking
(Path-MTU discovery) issues are occurring on your traffic.


1452 TCP_MISS/204

 - 204 no content means there was no object to be cached.

1421 TCP_MISS/206

 - Range request responses. Squid cannot cache these yet, but they
should be cached in the client browser and contribute to those 304
responses above.

1186 TCP_MISS/302

 - along with the MISS/301, MISS/303 these are not cacheable without
special instructions.

 659 TCP_MISS/503
 641 NONE/400
 548 TCP_MISS/301
 231 TCP_OFFLINE_HIT/200

 - cached object used.

 189 TCP_MISS/404
 126 TCP_IMS_HIT/304

 - cached object found, but objects in the client browser cache were used.

 112 TCP_MISS/504
  68 TCP_MISS/401
  56 TCP_MEM_HIT/200

 - cached object used.

  50 TCP_SWAPFAIL_MISS/304

 - cached object found, but disk error occurred loading it. And the
client request was conditional. So object in client browser cache used
instead.

  49 TCP_REFRESH_UNMODIFIED/200

 - cached objects found, mandatory update check required and resulted in
Squid cached object being delivered to client.

  46 TCP_SWAPFAIL_MISS/200
  39 TCP_MISS/500
  36 TCP_MISS/502
  34 TCP_REFRESH_UNMODIFIED/304

 - cached objects found, mandatory update check required and resulted in
client browser cache object being used.


  31 TCP_MISS/403
  25 TCP_MISS/400
  19 TCP_CLIENT_REFRESH_MISS/200

 - cached object found, but client request forced a new fetch.

  17 TCP_REFRESH_MODIFIED/200

- cached object found, mandatory update check resulted in a new object
being used.

  11 NONE/417
   9 TCP_MISS/303
   6 TCP_HIT/000

 - cached object used, but client disconnected before it could be delivered.

   5 TCP_MISS/501
   5 TCP_HIT/200

 - cached object used.

   4 TCP_MISS/202

 - this is usually only seen on POST or PUT, which are not cacheable by
Squid.

   3 TCP_MISS/412
   2 TCP_SWAPFAIL_MISS/000

 - cached object found, but a disk error occurred while loading it and
the client disconnected before a server response was found.

   2 TCP_MISS/408
   1 TCP_MISS/522
   1 TCP_MISS/410
   1 TCP_MISS/405
   1 TCP_CLIENT_REFRESH_MISS/000

 - cached object found, but client request mandated an update check.
Then client disconnected before that was completed.



All the 4xx and 5xx status responses are only cacheable short term and
only if the server explicitly provides caching information. It looks
like the servers in your traffic are not providing that info (or not
correctly).


Also, this grep count does not account for the method each transaction
used. The cacheability of things like 204 responses and 30x responses
depends on which method is involved.


So I see 19k MISS and 4k HIT. About 18% hit rate.
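As a rough sketch (the log lines below are made-up sample data, and counting any `*HIT*` result code as a hit is a simplification of the per-code breakdown above), the ratio can be recomputed directly from field 4 of a native-format access.log:

```shell
# Made-up sample access.log lines in squid's native format.
printf '%s\n' \
  '1408555999.808 10 192.168.0.125 TCP_MISS/200 3899 GET http://a/ - HIER_DIRECT/1.2.3.4 -' \
  '1408556000.232 9 192.168.0.125 TCP_MEM_HIT/200 1642 GET http://b/ - NONE/- -' \
  > sample_access.log
# Field 4 holds the result code; anything containing HIT counts as a hit.
awk '{ if ($4 ~ /HIT/) hit++; else miss++ }
     END { printf "hits=%d misses=%d ratio=%.1f%%\n", hit, miss, 100*hit/(hit+miss) }' \
    sample_access.log
# prints: hits=1 misses=1 ratio=50.0%
```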


What version of Squid are you using?

Amos


Re: [squid-users] https://weather.yahoo.com redirect loop

2014-08-20 Thread Jatin Bhasin
Hi,

Thanks for that. It solved it for me as well. But does anyone know why
this loop happens, and how do these squid directives resolve the issue?


Thanks,
Jatin

 On 20 Aug 2014, at 16:16, Amm ammdispose-sq...@yahoo.com wrote:
 
 
 On 08/20/2014 10:52 AM, Jatin Bhasin wrote:
 And when I browse to https://weather.yahoo.com then it goes in
 redirect loop. I am using Chrome browser and I get a message at
 the end saying 'This webpage has a redirect loop'.
 
 Happens in 3.4 series too.
 
 I added these in squid.conf as a solution:
 
 via off
 forwarded_for delete
 
 Amm


Re: [squid-users] https://weather.yahoo.com redirect loop

2014-08-20 Thread Amm


On 08/20/2014 04:06 PM, Jatin Bhasin wrote:

Hi,

Thanks for that. It solved it for me as well. But does anyone know why
this loop happens, and how do these squid directives resolve the issue?

I think only Yahoo can answer that. They seem to send a redirect when
they find Via and/or X-Forwarded-For headers.


I was too lazy to find out exactly which header, but I disabled both anyway.

Amm.



Re: [squid-users] what AV products have ICAP support?

2014-08-20 Thread Francesco Mobile
Trend Micro VirusWall can work as an upstream proxy or via ICAP.

Amos Jeffries squ...@treenet.co.nz ha scritto:

On 18/08/2014 9:30 p.m., Jason Haar wrote:
 Hi there
 
 I've been testing out squidclamav as an ICAP service and it works well.
 I was wondering what other AV vendors have (linux) ICAP-capable
 offerings that could similarly be hooked into Squid?
 
 Thanks
 

http://www.icap-forum.org/icap?do=products&isServer=checked

Amos


Re: [squid-users] store_id and key in store.log

2014-08-20 Thread Squid

Hello Stepanenko,

The store.log is a record of Squid's decisions to store and remove 
objects from the cache. Squid creates an entry for each object it stores 
in the cache, each uncacheable object, and each object that is removed 
by the replacement policy.

The log file covers both in-memory and on-disk caches.

The store.log provides some values that cannot be obtained from
access.log, mainly the response's cache key (i.e., the MD5 hash value).

refresh_pattern 
^http://(youtube|ytimg|vimeo|[a-zA-Z0-9\-]+)\.squid\.internal/.* 10080 
80%  79900 override-lastmod override-expire ignore-reload 
ignore-must-revalidate ignore-private


A simple example of the matching StoreID configuration:

acl rewritedoms dstdomain .dailymotion.com .video-http.media-imdb.com  
av.vimeo.com .dl.sourceforge.net .vid.ec.dmcdn.net .videoslasher.com


store_id_program /usr/local/squid/bin/new_format.rb
store_id_children 40 startup=10 idle=5 concurrency=0
store_id_access allow rewritedoms !banned_methods
store_id_access deny all

root# /usr/local/squid/bin/new_format.rb

ERR
http://i2.ytimg.com/vi/95b1zk3qhSM/hqdefault.jpg
OK store-id=http://ytimg.squid.internal/vi/95b1zk3qhSM/hqdefault.jpg
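The internals of new_format.rb are not shown in this thread, so purely for illustration, a hypothetical helper with the same one-URL-per-line, "OK store-id=..."/"ERR" I/O contract could be sketched in shell (domain mapping is an assumption, matching the ytimg example above):

```shell
# Hypothetical StoreID helper sketch -- NOT the real new_format.rb.
cat > /tmp/storeid_helper.sh <<'EOF'
#!/bin/sh
# With concurrency=0 there is no channel-ID field: each input line is a URL.
while read url rest; do
  case "$url" in
    http://*.ytimg.com/*)
      # collapse every ytimg mirror host onto one internal cache key
      echo "OK store-id=http://ytimg.squid.internal/${url#http://*.ytimg.com/}"
      ;;
    *) echo "ERR" ;;
  esac
done
EOF
echo 'http://i2.ytimg.com/vi/95b1zk3qhSM/hqdefault.jpg' | sh /tmp/storeid_helper.sh
# prints: OK store-id=http://ytimg.squid.internal/vi/95b1zk3qhSM/hqdefault.jpg
```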

Thanks,
ViSolve Squid

On 8/14/2014 1:07 PM, Степаненко Сергей wrote:

Hi All!

I'm trying to use a store_id helper, and I'm trying to debug the regexp
for the URLs processed by the helper. I turned on store.log and expected
to see the changed key value there, but the key in store.log is the
original URL of the object. Maybe I'm wrong and this is normal behavior?
My squid version is 3.4.5.


Stepanenko Sergey








[squid-users] Re: server failover/backup

2014-08-20 Thread nuhll
I found out why.

If i go direct to
http://llnw.blizzard.com/hs-pod/beta/EU/4944.direct/Updates/hs-6187-6284-Win_deDE-final.MPQ
it works (without proxy).

If i enable proxy, it wont work and i get 503.


BTW i upgraded to Squid Cache: Version 3.3.8 



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/ONLY-Cache-certain-Websites-tp4667121p4667279.html
Sent from the Squid - Users mailing list archive at Nabble.com.


RE: [squid-users] Re: server failover/backup

2014-08-20 Thread Lawrence Pingree
Ideally you should upgrade to 3.4.4 or higher. I was able to download
the file just fine through my transparent Squid. The 503 error is odd;
it indicates a server-side issue, but I realize it is coming from Squid.
Amos, any ideas?

-Original Message-
From: nuhll [mailto:nu...@web.de] 
Sent: Wednesday, August 20, 2014 9:51 AM
To: squid-users@squid-cache.org
Subject: [squid-users] Re: server failover/backup

I found out why.

If i go direct to
http://llnw.blizzard.com/hs-pod/beta/EU/4944.direct/Updates/hs-6187-6284-Win
_deDE-final.MPQ
it works (without proxy).

If i enable proxy, it wont work and i get 503.


BTW i upgraded to Squid Cache: Version 3.3.8 



--
View this message in context:
http://squid-web-proxy-cache.1019090.n4.nabble.com/ONLY-Cache-certain-Websit
es-tp4667121p4667279.html
Sent from the Squid - Users mailing list archive at Nabble.com.




RE: [squid-users] https://weather.yahoo.com redirect loop

2014-08-20 Thread Lawrence Pingree
Personally I have found that the latest generation of Next-Generation
Firewalls block traffic when they detect a Via header added by Squid, so
I disabled the header too; that way no one can detect my cache. The key
thing you need to make sure of is that NAT and redirection do not go
into a loop, so that the cache isn't receiving the packets twice and
trying to re-process the requests.

-Original Message-
From: Amm [mailto:ammdispose-sq...@yahoo.com] 
Sent: Tuesday, August 19, 2014 11:16 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] https://weather.yahoo.com redirect loop


On 08/20/2014 10:52 AM, Jatin Bhasin wrote:
 And when I browse to https://weather.yahoo.com then it goes in 
 redirect loop. I am using Chrome browser and I get a message at the 
 end saying 'This webpage has a redirect loop'.

Happens in 3.4 series too.

I added these in squid.conf as a solution:

via off
forwarded_for delete

Amm




RE: [squid-users] what AV products have ICAP support?

2014-08-20 Thread Lawrence Pingree
Squid is an ICAP server not a client 

-Original Message-
From: Jason Haar [mailto:jason_h...@trimble.com] 
Sent: Tuesday, August 19, 2014 4:18 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] what AV products have ICAP support?

Thanks for that, shouldn't squid be listed there as an ICAP client?

On 19/08/14 17:56, Amos Jeffries wrote:
 http://www.icap-forum.org/icap?do=products&isServer=checked


-- 
Cheers

Jason Haar
Corporate Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1





RE: [squid-users] Re: server failover/backup

2014-08-20 Thread Lawrence Pingree
In transparent mode things are working for me just fine including access to
battle.net and using the battle client. Does battle.net support proxy
configurations? i.e. are you putting the squid IP and Port as a proxy for
the client app to use?

-Original Message-
From: nuhll [mailto:nu...@web.de] 
Sent: Wednesday, August 20, 2014 9:51 AM
To: squid-users@squid-cache.org
Subject: [squid-users] Re: server failover/backup

I found out why.

If i go direct to
http://llnw.blizzard.com/hs-pod/beta/EU/4944.direct/Updates/hs-6187-6284-Win
_deDE-final.MPQ
it works (without proxy).

If i enable proxy, it wont work and i get 503.


BTW i upgraded to Squid Cache: Version 3.3.8 



--
View this message in context:
http://squid-web-proxy-cache.1019090.n4.nabble.com/ONLY-Cache-certain-Websit
es-tp4667121p4667279.html
Sent from the Squid - Users mailing list archive at Nabble.com.




RE: [squid-users] what AV products have ICAP support?

2014-08-20 Thread Lawrence Pingree
Sorry, got that backwards, squid is a client, so I guess it should be listed. 

-Original Message-
From: Lawrence Pingree [mailto:geek...@geek-guy.com] 
Sent: Wednesday, August 20, 2014 10:09 AM
To: 'Jason Haar'; squid-users@squid-cache.org
Subject: RE: [squid-users] what AV products have ICAP support?

Squid is an ICAP server not a client 

-Original Message-
From: Jason Haar [mailto:jason_h...@trimble.com] 
Sent: Tuesday, August 19, 2014 4:18 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] what AV products have ICAP support?

Thanks for that, shouldn't squid be listed there as an ICAP client?

On 19/08/14 17:56, Amos Jeffries wrote:
 http://www.icap-forum.org/icap?do=products&isServer=checked


-- 
Cheers

Jason Haar
Corporate Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1







RE: [squid-users] what AV products have ICAP support?

2014-08-20 Thread Lawrence Pingree
Squid is listed as a client 
http://www.icap-forum.org/icap?do=products&isClient=checked


-Original Message-
From: Lawrence Pingree [mailto:geek...@geek-guy.com] 
Sent: Wednesday, August 20, 2014 10:17 AM
To: 'Jason Haar'; squid-users@squid-cache.org
Subject: RE: [squid-users] what AV products have ICAP support?

Sorry, got that backwards, squid is a client, so I guess it should be listed. 

-Original Message-
From: Lawrence Pingree [mailto:geek...@geek-guy.com] 
Sent: Wednesday, August 20, 2014 10:09 AM
To: 'Jason Haar'; squid-users@squid-cache.org
Subject: RE: [squid-users] what AV products have ICAP support?

Squid is an ICAP server not a client 

-Original Message-
From: Jason Haar [mailto:jason_h...@trimble.com] 
Sent: Tuesday, August 19, 2014 4:18 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] what AV products have ICAP support?

Thanks for that, shouldn't squid be listed there as an ICAP client?

On 19/08/14 17:56, Amos Jeffries wrote:
 http://www.icap-forum.org/icap?do=products&isServer=checked


-- 
Cheers

Jason Haar
Corporate Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1









[squid-users] Re: server failover/backup

2014-08-20 Thread nuhll
Hello,
thanks for your help.

I run a DHCP server which distributes the proxy IP:port to all clients
(proxy settings are set to auto-detect), so all programs use this proxy
automatically for HTTP requests.

I use Linux version 3.2.0-4-amd64 (debian-ker...@lists.debian.org) (gcc
version 4.6.3 (Debian 4.6.3-14) ) #1 SMP Debian 3.2.60-1+deb7u3

I worked hard to upgrade to 3.3.8. I'm not a Linux guru.





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/ONLY-Cache-certain-Websites-tp4667121p4667286.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: server failover/backup

2014-08-20 Thread nuhll
Some Logs:
== /var/log/squid3/cache.log ==
2014/08/20 19:33:19.809 kid1| client_side.cc(777) swanSong:
local=192.168.0.1:3128 remote=192.168.0.125:62595 flags=1
2014/08/20 19:33:20.227 kid1| client_side.cc(777) swanSong:
local=192.168.0.1:3128 remote=192.168.0.125:62378 flags=1
2014/08/20 19:33:20.232 kid1| client_side.cc(900) deferRecipientForLater:
clientSocketRecipient: Deferring request
http://llnw.blizzard.com/hs-pod/beta/EU/4944.direct/Updates/hs-6187-6284-Win_deDE-final.MPQ
2014/08/20 19:33:20.232 kid1| client_side.cc(1518)
ClientSocketContextPushDeferredIfNeeded: local=192.168.0.1:3128
remote=192.168.0.125:62611 FD 29 flags=1 Sending next
2014/08/20 19:33:20.235 kid1| client_side.cc(777) swanSong:
local=192.168.0.1:3128 remote=192.168.0.125:62611 flags=1
2014/08/20 19:33:20.638 kid1| client_side.cc(777) swanSong:
local=192.168.0.1:3128 remote=192.168.0.125:62669 flags=1

== /var/log/squid3/access.log ==
1408555999.808  10552 192.168.0.125 TCP_MISS/503 3899 GET
http://dist.blizzard.com.edgesuite.net/hs-pod/beta/EU/4944.direct/Updates/hs-6187-6284-Win-final.MPQ
- HIER_DIRECT/192.168.0.4 text/html
1408556000.232   9976 192.168.0.125 TCP_MISS/503 3844 GET
http://llnw.blizzard.com/hs-pod/beta/EU/4944.direct/Updates/hs-6187-6284-Win-final.MPQ
- HIER_DIRECT/192.168.0.4 text/html
1408556000.232   9975 192.168.0.125 TCP_MISS/503 3803 GET
http://llnw.blizzard.com/hs-pod/beta/EU/4944.direct/Updates/hs-6187-6284-Win_deDE-final.MPQ
- HIER_DIRECT/192.168.0.4 text/html
1408556000.638406 192.168.0.125 TCP_MISS/200 1642 CONNECT
dws1.etoro.com:443 - HIER_DIRECT/149.126.77.194 -




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/ONLY-Cache-certain-Websites-tp4667121p4667287.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Poor cache

2014-08-20 Thread Délsio Cabá
Hi,
Using version: Squid Cache: Version 3.1.10  (Centos RPM)

I also have this changes on the OS:

/etc/rc.local
/sbin/modprobe iptable_nat
/sbin/modprobe ip_nat_ftp
/sbin/modprobe ip_gre
/sbin/modprobe ip_conntrack
/sbin/modprobe ip_conntrack_ftp

echo 0 > /proc/sys/net/ipv4/tcp_syncookies
echo 131072 > /proc/sys/net/ipv4/tcp_max_syn_backlog
echo 524288 > /proc/sys/net/netfilter/nf_conntrack_max
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 0 > /proc/sys/net/ipv4/conf/lo/rp_filter
modprobe iptable_nat
iptables -t nat -F PREROUTING
ip tunnel add gre0 mode gre remote 196.10.148.1 local 196.10.148.6 dev eth0
ip link set gre0 up

iptables -t nat -F
iptables -t nat -A PREROUTING -i gre0 -p tcp -m tcp --dport 80 -j DNAT
--to-destination 196.10.148.6:3401
touch /var/lock/subsys/local
~

/etc/sysctl.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv4.ip_local_port_range = 1025 65535
fs.file-max = 372925
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_syncookies = 1
net.ipv4.ip_forward = 1
# Controls source route verification
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.eth0.rp_filter = 0
net.ipv4.conf.eth0.ip_filter = 0
net.ipv4.conf.gre0.rp_filter = 0
net.ipv4.conf.gre0.ip_filter = 0
net.ipv4.conf.default.accept_source_route = 1
# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0



On 20 August 2014 09:50, Amos Jeffries squ...@treenet.co.nz wrote:
 On 20/08/2014 9:21 a.m., Délsio Cabá wrote:
 Hi guys,
 Need some help on cache. Basically I do not see many caches.

 [root@c /]# cat /var/log/squid/access.log | awk '{print $4}' | sort |
 uniq -c | sort -rn
   17403 TCP_MISS/200
3107 TCP_MISS/304

  - objects in the client browser cache were used.

1903 TCP_MISS/000

  - server was contacted but no response came back. This is bad. Seeing
 it in such numbers is very bad.
  It is a strong sign that TCP window scaling, ECN or ICMP blocking
 (Path-MTU discovery) issues are occurring on your traffic.


1452 TCP_MISS/204

  - 204 no content means there was no object to be cached.

1421 TCP_MISS/206

  - Range request responses. Squid cannot cache these yet, but they
 should be cached in the client browser and contribute to those 304
 responses above.

1186 TCP_MISS/302

  - along with the MISS/301, MISS/303 these are not cacheable without
 special instructions.

 659 TCP_MISS/503
 641 NONE/400
 548 TCP_MISS/301
 231 TCP_OFFLINE_HIT/200

  - cached object used.

 189 TCP_MISS/404
 126 TCP_IMS_HIT/304

  - cached object found, but objects in the client browser cache were used.

 112 TCP_MISS/504
  68 TCP_MISS/401
  56 TCP_MEM_HIT/200

  - cached object used.

  50 TCP_SWAPFAIL_MISS/304

  - cached object found, but disk error occurred loading it. And the
 client request was conditional. So object in client browser cache used
 instead.

  49 TCP_REFRESH_UNMODIFIED/200

  - cached objects found, mandatory update check required and resulted in
 Squid cached object being delivered to client.

  46 TCP_SWAPFAIL_MISS/200
  39 TCP_MISS/500
  36 TCP_MISS/502
  34 TCP_REFRESH_UNMODIFIED/304

  - cached objects found, mandatory update check required and resulted in
 client browser cache object being used.


  31 TCP_MISS/403
  25 TCP_MISS/400
  19 TCP_CLIENT_REFRESH_MISS/200

  - cached object found, but client request forced a new fetch.

  17 TCP_REFRESH_MODIFIED/200

 - cached object found, mandatory update check resulted in a new object
 being used.

  11 NONE/417
   9 TCP_MISS/303
   6 TCP_HIT/000

  - cached object used, but client disconnected before it could be delivered.

   5 TCP_MISS/501
   5 TCP_HIT/200

  - cached object used.

   4 TCP_MISS/202

  - this is usually only seen on POST or PUT, which are not cacheable by
 Squid.

   3 TCP_MISS/412
   2 TCP_SWAPFAIL_MISS/000

  - cached object found, but a disk error occurred while loading it and
 the client disconnected before a server response was found.

   2 TCP_MISS/408
   1 TCP_MISS/522
   1 TCP_MISS/410
   1 TCP_MISS/405
   1 TCP_CLIENT_REFRESH_MISS/000

  - cached object found, but client request mandated an update check.
 Then client disconnected before that was completed.



 All the 4xx and 5xx status responses are only cacheable short term and
 only if the server explicitly provides caching information. It looks
 like the servers in your traffic are not providing that info (or not
 correctly).


 Also, this grep counting does not account for what method the
 transaction used. Things like the 204 response and 

[squid-users] Re: server failover/backup

2014-08-20 Thread nuhll
I give up. Squid sucks so hard.

New and easier idea:

accel the sites i want to cache.

But how? Information about this is crazy much  

http://wiki.squid-cache.org/SquidFaq/ReverseProxy

But how to cache?

#
#Recommended minimum configuration:
#

debug_options ALL,1 33,2

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 192.168.0.0/16
acl localnet src fc00::/7
acl localnet src fe80::/10 # RFC1918 possible internal network
#acl Safe_ports port 1-65535 # RFC1918 possible internal network
#acl CONNECT method GET POST HEAD CONNECT PUT DELETE # RFC1918 possible
internal network
#acl block-fnes urlpath_regex -i .*/fnes/echo # RFC 4193 local private
network range
#acl noscan dstdomain .symantecliveupdate.com liveupdate.symantec.com
psi3.secunia.com update.immunet.com # RFC 4291 link-local (directly plugged)
machines
#acl video urlpath_regex -i
\.(m2a|avi|mov|mp(e?g|a|e|1|2|3|4)|m1s|mp2v|m2v|m2s|wmx|rm|rmvb|3pg|3gpp|omg|ogm|asf|asx|wmv|m3u8|flv|ts)

#
# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost

#no_cache deny noscan
#always_direct allow noscan
#always_direct allow video

# Deny requests to certain unsafe ports

# Deny CONNECT to other than secure SSL ports

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on .localhost. is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#
#cache_peer 192.168.1.1 parent 8080 0 default no-query no-digest
#no-netdb-exchange
#never_direct allow all

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed

#http_access allow all

# allow localhost always proxy functionality

# And finally deny all other access to this proxy

http_port 192.168.0.8:80 accel defaultsite=windowsupdate.com
cache_peer windowsupdate.com parent 80 0 no-query originserver

http_port 192.168.0.8:80 accel defaultsite=microsoft.com
cache_peer microsoft.com parent 80 0 no-query originserver

http_port 192.168.0.8:80 accel defaultsite=windows.com
cache_peer windows.com parent 80 0 no-query originserver
# We recommend you to use at least the following line.
hierarchy_stoplist cgi-bin ?

# Uncomment and adjust the following to add a disk cache directory.
maximum_object_size 5000 MB
#store_dir_select_algorithm round-robin
cache_dir aufs /daten/squid 10 16 256


# Leave coredumps in the first cache dir
coredump_dir /daten/squid

#windows update
refresh_pattern -i microsoft.com/.*\.(cab|exe|ms[i|u|f]|asf|wma|dat|zip)$
202974 80% 262974
refresh_pattern -i
windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|asf|wma|dat|zip)$ 202974 80% 262974
refresh_pattern -i windows.com/.*\.(cab|exe|ms[i|u|f]|asf|wma|dat|zip)$
202974 80% 262974

log_icp_queries off
icp_port 0
htcp_port 0
snmp_port 3401
acl snmppublic snmp_community public
snmp_access allow snmppublic all
minimum_object_size 0 KB
buffered_logs on
cache_effective_user proxy
#header_replace User-Agent Mozilla/5.0 (X11; U;) Gecko/20080221
Firefox/2.0.0.9
vary_ignore_expire on
cache_swap_low 90
cache_swap_high 95
#visible_hostname shadow
#unique_hostname shadow-DHS
shutdown_lifetime 0 second
request_header_max_size 256 KB
half_closed_clients off
max_filedesc 65535
connect_timeout 10 second
cache_effective_group proxy
#access_log /var/log/squid/access.log squid
#access_log daemon:/var/log/squid3/access.test.log squid
client_db off
#dns_nameservers 192.168.0.10
ipcache_size 1024
fqdncache_size 1024
positive_dns_ttl 24 hours
negative_dns_ttl 5 minutes
#itcp_outgoing_address 192.168.2.2
dns_v4_first on
check_hostnames off
forwarded_for delete
via off
#pinger_enable off
#memory_replacement_policy heap LFUDA
#cache_replacement_policy heap LFUDA
cache_mem 2048 MB
maximum_object_size_in_memory 512 KB
#memory_cache_mode disk
cache_store_log none
read_ahead_gap 50 MB
pipeline_prefetch on
reload_into_ims on
#quick_abort_min -1 KB


Does not cache any windows updates.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/ONLY-Cache-certain-Websites-tp4667121p4667289.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Re: server failover/backup

2014-08-20 Thread Antony Stone
On Wednesday 20 August 2014 at 21:08:03 (EU time), nuhll wrote:

 accel the sites i want to cache.
 
 But how? Information about this is crazy much
 
 http://wiki.squid-cache.org/SquidFaq/ReverseProxy
 
 But how to cache?

Simple answer - with a caching proxy server.

Longer answer - accelerator mode is incompatible with caching mode - you use 
either one, or the other, but not both on the same proxy.

They're not in the least bit interchangeable, either:

 - caching mode is where you're running a service for clients (browsers) to 
improve their performance accessing any server/s

 - accelerator mode is where you're running a service for servers (websites) 
to improve their performance (maybe load-balancing, maybe high-availability) 
for any clients which access them.

Caching mode caches, accelerator mode doesn't.

From the URL you quoted above: If you wish only to cache the 'rest of the 
world' to improve local users browsing performance, then accelerator mode is 
irrelevant.

Sites which own and publish a URL hierarchy use an accelerator to improve 
access to it from the Internet. Sites wishing to improve their local users' 
access to other sites' URLs use proxy caches.

So, accelerator mode or caching mode - different purposes, it's your choice 
which you need.



Antony.

-- 
f u cn rd ths, u cn gt a gd jb n nx prgrmmng

   Please reply to the list;
 please *don't* CC me.


[squid-users] Individual delay pools and youtube

2014-08-20 Thread fpap
I have set up a delay pool in order to restrict bandwidth to a specific
client, and it works just fine. That client starts downloading multiple big
files, and the bandwidth consumed is limited as set up. But... when this
client goes to YouTube and starts viewing HD videos, the bandwidth
consumed is very high, far above the configured maximum.

My configuration: 

acl theclient src 166.82.4.116
delay_pools 1
delay_class 1 2
delay_parameters 1 -1/-1 32000/32000
delay_access 1 allow theclient

Any ideas...?

Thanks in advance

Greetings



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Individual-delay-pools-and-youtube-tp4667291.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Individual delay pools and youtube

2014-08-20 Thread Antony Stone
On Wednesday 20 August 2014 at 22:14:06 (EU time), fpap wrote:

 I have set up a delay pool in order to restrict bandwidth to a specific
 client, and it works just fine. That client starts downloading multiple big
 files, and the bandwidth consumed is limited as set up. But... when this
 client goes to YouTube and starts viewing HD videos, the bandwidth
 consumed is very high, far above the configured maximum.
 
 My configuration:
 
 acl theclient src 166.82.4.116
 delay_pools 1
 delay_class 1 2
 delay_parameters 1 -1/-1 32000/32000
 delay_access 1 allow theclient
 
 Any ideas...?

HTTP/S?

I know Squid can bandwidth-limit the content it sees, but are the Youtube 
videos perhaps over HTTPS, so Squid simply passes on the CONNECT request and 
then sees nothing of the content?

I admit this is a bit of a guess, but it should be easy enough to check:

1. are all the youtube videos which go over-limit HTTPS connections?

2. can the client go over-limit with any other URL provided it's HTTPS?
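Both checks can start from access.log: CONNECT tunnels carry the HTTPS traffic that a delay pool may never inspect. A sketch summing tunnelled bytes per client (the log lines below are made-up sample data in squid's native format, where field 5 is bytes and field 6 the method):

```shell
# Made-up sample log lines; the client IP matches the one in the thread.
printf '%s\n' \
  '1408556000.638 406 166.82.4.116 TCP_MISS/200 999000 CONNECT www.youtube.com:443 - HIER_DIRECT/1.2.3.4 -' \
  '1408556001.000 100 166.82.4.116 TCP_MISS/200 1642 GET http://example.com/ - HIER_DIRECT/1.2.3.4 -' \
  > sample_connect.log
# Sum bytes of CONNECT (HTTPS tunnel) transactions per client address.
awk '$6 == "CONNECT" { bytes[$3] += $5 }
     END { for (c in bytes) printf "%s %d\n", c, bytes[c] }' sample_connect.log
# prints: 166.82.4.116 999000
```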


Antony.

-- 
When you talk about Linux versus Windows, you're talking about which 
operating system is the best value for money and fit for purpose. That's a very 
basic decision customers can make if they have the information available to 
them. Quite frankly if we lose to Linux because our customers say it's better 
value for money, tough luck for us.

 - Steve Vamos, MD of Microsoft Australia

   Please reply to the list;
 please *don't* CC me.


Re: [squid-users] unbound and squid not resolving SSL sites

2014-08-20 Thread squid

why are you using unbound for this at all?

Well, we use a geo location service much like a VPN or a proxy.
For transparent proxies, it works fine, squid passes through the SSL  
request and back to the client.

For VPN, everything is passed through.
But with unbound, we only want to pass through certain requests and  
some of them have SSL sites.
Surely, there's a way to pass a request from unbound, and redirect it  
through the transparent proxy, returning it straight to the client?







Re: [squid-users] Re: server failover/backup

2014-08-20 Thread Amos Jeffries
On 21/08/2014 7:22 a.m., Antony Stone wrote:
 On Wednesday 20 August 2014 at 21:08:03 (EU time), nuhll wrote:
 
 accel the sites i want to cache.

 But how? Information about this is crazy much

 http://wiki.squid-cache.org/SquidFaq/ReverseProxy

 But how to cache?
 
 Simple answer - with a caching proxy server.
 
 Longer answer - accelerator mode is incompatible with caching mode - you use 
 either one, or the other, but not both on the same proxy.

This is wrong. Acceleration and caching are simply separate features.
They are *independent*, not incompatible. Both forward- and reverse-
(accel) proxy can and do cache in exactly the same ways.

So nuhll,
 you will get exactly the same caching behaviour from Squid regardless
of using accel mode or a regular proxy port. Only transparent/intercept
mode has strange caching behaviours.

Amos



Re: [squid-users] Re: server failover/backup

2014-08-20 Thread Amos Jeffries
On 21/08/2014 5:29 a.m., nuhll wrote:
 Hello,
 thanks for your help.
 
 I own a dhcp server which spread the proxy ip:port to all clients (proxy
 settings are default search for) so all programs are using this proxy
 automatic for http requests.

Not quite. Only the applications which obey DHCP based WPAD
auto-configuration.

There is also DNS based WPAD, and lots of applications (Java based and
mobile apps mostly) which do not auto-configure at all.
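
For reference, the two WPAD discovery mechanisms mentioned above can be sketched roughly as follows; the dnsmasq directive, hostnames and IPs here are illustrative assumptions, not taken from this thread:

```conf
# DHCP-based WPAD: dnsmasq example. Option 252 tells clients where
# to fetch the PAC file (URL is a placeholder).
dhcp-option=252,"http://192.168.0.1/wpad.dat"

# DNS-based WPAD: clients look up "wpad.<search-domain>" and fetch
# http://wpad.<search-domain>/wpad.dat (zone-file record, placeholder IP).
wpad    IN  A   192.168.0.1
```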


 
 I use Linux version 3.2.0-4-amd64 (debian-ker...@lists.debian.org) (gcc
 version 4.6.3 (Debian 4.6.3-14) ) #1 SMP Debian 3.2.60-1+deb7u3
 
 I worked hard to upgrade to 3.3.8. Im not a linux guru. 
 

:-( sorry. The Debian maintainer team has a 3.4 package almost ready but
it has been held up by other administrative details for a few months.

Amos



Re: [squid-users] Re: ONLY Cache certain Websites.

2014-08-20 Thread Amos Jeffries
On 19/08/2014 3:42 a.m., nuhll wrote:
 Just to clarify my problem: I dont use it as a transparente proxy! I
 distribute the proxy with my dhcp server and a .pac file. So it gets used on
 all machines with auto detection proxy
 

Your earlier config file posted contained:

  http_port 192.168.0.1:3128 transparent

transparent/intercept mode ports are incompatible with WPAD and PAC
configuration. You need a regular forward-proxy port (no transparent)
for receiving that type of traffic.
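
As a concrete sketch, the .pac file served to clients would then point at the plain forward-proxy port. The IP and port follow the config quoted above; the DIRECT fallback is an assumption, not something from the thread:

```javascript
// Minimal proxy.pac / wpad.dat sketch. Browsers call this function
// for every URL they are about to fetch.
function FindProxyForURL(url, host) {
    // Send everything to the forward-proxy port; fall back to a
    // direct connection if the proxy is unreachable.
    return "PROXY 192.168.0.1:3128; DIRECT";
}
```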

This is probably a good hint as to what your problem actually is. The
logs you posted in the other email are showing what could be the side
effect of this misunderstanding. I will reply to that email with details.

Amos



Re: [squid-users] Re: server failover/backup

2014-08-20 Thread Amos Jeffries
On 21/08/2014 5:33 a.m., nuhll wrote:
 Some Logs:

These logs are showing a problem...


 == /var/log/squid3/cache.log ==
 2014/08/20 19:33:19.809 kid1| client_side.cc(777) swanSong:
 local=192.168.0.1:3128 remote=192.168.0.125:62595 flags=1
 2014/08/20 19:33:20.227 kid1| client_side.cc(777) swanSong:
 local=192.168.0.1:3128 remote=192.168.0.125:62378 flags=1
 2014/08/20 19:33:20.232 kid1| client_side.cc(900) deferRecipientForLater:
 clientSocketRecipient: Deferring request
 http://llnw.blizzard.com/hs-pod/beta/EU/4944.direct/Updates/hs-6187-6284-Win_deDE-final.MPQ
 2014/08/20 19:33:20.232 kid1| client_side.cc(1518)
 ClientSocketContextPushDeferredIfNeeded: local=192.168.0.1:3128
 remote=192.168.0.125:62611 FD 29 flags=1 Sending next

This appears to be a client (192.168.0.125) connecting to what it thinks
is a regular forward-proxy port:
  http_port 3128
or
  http_port 192.168.0.1:3128


 2014/08/20 19:33:20.235 kid1| client_side.cc(777) swanSong:
 local=192.168.0.1:3128 remote=192.168.0.125:62611 flags=1
 2014/08/20 19:33:20.638 kid1| client_side.cc(777) swanSong:
 local=192.168.0.1:3128 remote=192.168.0.125:62669 flags=1
 
 == /var/log/squid3/access.log ==
 1408555999.808  10552 192.168.0.125 TCP_MISS/503 3899 GET
 http://dist.blizzard.com.edgesuite.net/hs-pod/beta/EU/4944.direct/Updates/hs-6187-6284-Win-final.MPQ
 - HIER_DIRECT/192.168.0.4 text/html
 1408556000.232   9976 192.168.0.125 TCP_MISS/503 3844 GET
 http://llnw.blizzard.com/hs-pod/beta/EU/4944.direct/Updates/hs-6187-6284-Win-final.MPQ
 - HIER_DIRECT/192.168.0.4 text/html
 1408556000.232   9975 192.168.0.125 TCP_MISS/503 3803 GET
 http://llnw.blizzard.com/hs-pod/beta/EU/4944.direct/Updates/hs-6187-6284-Win_deDE-final.MPQ
 - HIER_DIRECT/192.168.0.4 text/html

This above shows Squid receiving various requests for blizzard.com
domains and relaying them to the web server at 192.168.0.4.

Do you actually have a blizzard.com web server running at 192.168.0.4?
 I don't think so.


 1408556000.638406 192.168.0.125 TCP_MISS/200 1642 CONNECT
 dws1.etoro.com:443 - HIER_DIRECT/149.126.77.194 -
 

It seems to me that you are mixing the HTTP traffic modes up.

Squid accepts traffic with two very different on-wire syntax formats,
and also with possibly mangled TCP packet details. These combine into 3
permutations we call traffic modes.

1) forward-proxy (aka manual or auto-configured explicit proxy)
  - port 3128 traffic syntax designed for proxy communication. Nothing
special needed to service the traffic.

2) reverse-proxy (aka accelerator / CDN gateway)
  - port 80 traffic syntax designed for web server communication.
Message URLs need reconstructing and an origin cache_peer server is
expected to be explicitly configured.

3) interception proxy (aka transparent proxy)
  - port 80 traffic syntax and also possible TCP/IP mangling of the
packet IPs. Any mangling needs to be detected and undone, input
validation security checks applied, then the reverse-proxy URL
manipulations performed.
  NP: if the security checks fail caching will be disabled for the request,
but it will still be serviced as a transparent MISS.
  NP2: if the security checks fail and the TCP packet details are broken
you will get a 503, exactly as logged above.


What you need to do for a properly working proxy is ensure that:
* each mode of traffic is sent to a separate http_port opened by Squid.
 - you may use multiple port directives as long as each has a unique
port number.
* each http_port directive is flagged as appropriate to indicate the
traffic mode being received there.



From the logs above it looks to me like you are possibly intercepting
the blizzard traffic and NAT'ing it to a forward-proxy port 3128.

You probably need to actually configure this to get rid of the 503s:

 http_port 3128
 http_port 3129 intercept

and change your NAT rules to -j REDIRECT packets to port 3129. Leave
your DHCP rules sending traffic to port 3128.
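
On a Linux router/Squid box that NAT change typically looks something like the following; the interface name and subnet are assumptions for illustration, adapt them to your network:

```shell
# Send intercepted port-80 traffic from the LAN to Squid's intercept
# port (3129). WPAD/PAC clients keep talking to port 3128 directly.
iptables -t nat -A PREROUTING -i eth0 -s 192.168.0.0/24 \
  -p tcp --dport 80 -j REDIRECT --to-ports 3129

# Make sure Squid's own outgoing port-80 traffic is not looped back
# into the proxy (assumes Squid runs as user "proxy").
iptables -t nat -A OUTPUT -p tcp --dport 80 -m owner --uid-owner proxy -j ACCEPT
```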

Amos



Re: [squid-users] unbound and squid not resolving SSL sites

2014-08-20 Thread Amos Jeffries
On 21/08/2014 8:59 a.m., sq...@proxyplayer.co.uk wrote:
 why are you using unbound for this at all?
 
 Well, we use a geo location service much like a VPN or a proxy.
 For transparent proxies, it works fine, squid passes through the SSL
 request and back to the client.
 For VPN, everything is passed through.
 But with unbound, we only want to pass through certain requests and some
 of them have SSL sites.
 Surely, there's a way to pass a request from unbound, and redirect it
 through the transparent proxy, returning it straight to the client?
 

I'm not sure what you mean: unbound is a DNS server, it does not process
the HTTP protocol at all. All it does is tell the client where the *web
server* for a domain is located. But the client only needs to know which
route to use.

With a client connecting over WAN through a proxy you have:
 client --WAN-- proxy -- Internet
plus for non-proxied traffic:
 client --WAN-- Internet

With a client connecting over a VPN you have:
 client --VPN-- proxy -- Internet
plus for non-proxied traffic:
 client --VPN--NAT-- Internet

in both above cases the gateway router receiving WAN or VPN traffic is
responsible for the NAT/TPROXY/WCCP interception.

What I've gathered so far is that you are trying to achieve one of these:

A)
 client --VPN-- proxy -- Internet
plus for non-proxied traffic:
 client --WAN-- Internet


B)
 client --VPN-- proxy -- Internet
 client --WAN-- proxy -- Internet
plus for non-proxied traffic:
 client --VPN-- Internet
 client --WAN-- Internet


which one?

Amos



Re: [squid-users] Poor cache

2014-08-20 Thread Amos Jeffries
On 21/08/2014 6:05 a.m., Délsio Cabá wrote:
 Hi,
 Using version: Squid Cache: Version 3.1.10  (Centos RPM)
 

Ah. The version itself is probably most of the problem.

3.1 does not cache traffic with "Cache-Control: no-cache", which these days
makes up a large percentage (30-40%) of all traffic. That is resolved
in 3.2 and later, along with better caching of private and authenticated
traffic.

You can find details of newer CentOS RPM packages from Eliezer at
http://wiki.squid-cache.org/KnowledgeBase/CentOS

Amos



Re: [squid-users] https://weather.yahoo.com redirect loop

2014-08-20 Thread Amos Jeffries
On 21/08/2014 5:08 a.m., Lawrence Pingree wrote:
 Personally I have found that the latest generation of Next Generation
 Firewalls have been doing blocking when they detect a via with a
 squid header,

Have you been making bug reports to these vendors?
 Adding the Via header is mandatory in the HTTP/1.1 specification, and HTTP
proxy is a designed part of the protocol. So any blocking based on the
simple existence of a proxy is non-compliance with HTTP itself. That
goes for ports 80, 443, 3128, 3130, and 8080, which are all registered
for HTTP use.

However, if your proxy is emitting "Via: 1.1 localhost" or "Via: 1.1
localhost.localdomain" it is broken, and may not be blocked so much as
rejected for a forwarding loop, because the NG firewall has a proxy itself
on localhost. The Via header is generated from visible_hostname (or the
OS hostname lookup) and is supposed to contain the visible public FQDN of
each server the message relayed through.
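
In squid.conf terms that means setting something like the following; the FQDN here is a placeholder, use the proxy's real public name:

```conf
# Emitted in the Via header; should be a resolvable public FQDN,
# not "localhost" or "localhost.localdomain".
visible_hostname proxy.example.net
```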

Amos


RE: [squid-users] https://weather.yahoo.com redirect loop

2014-08-20 Thread Lawrence Pingree
No, I mean they are intentionally blocking with a configured policy, it's not a
bug. :) They have signatures that match Via headers and Forwarded-For headers
to determine that it's squid. This is because many hackers are using bounces
off open squid proxies to launch web attacks.

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Wednesday, August 20, 2014 4:10 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] https://weather.yahoo.com redirect loop

On 21/08/2014 5:08 a.m., Lawrence Pingree wrote:
 Personally I have found that the latest generation of Next Generation 
 Firewalls have been doing blocking when they detect a via with a squid 
 header,

Have you been making bug reports to these vendors?
 Adding Via header is mandatory in HTTP/1.1 specification, and HTTP proxy is a 
designed part of the protocol. So any blocking based on the simple existence of 
a proxy is non-compliance with HTTP itself. That goes for ports 80, 443, 3128, 
3130, and 8080 which are all registered for HTTP use.

However, if your proxy is emitting Via: 1.1 localhost or Via: 1.1 
localhost.localdomain it is broken and may not be blocked so much as rejected 
for forwarding loop because the NG firewall has a proxy itself on localhost. 
The Via header is generated from visible_hostname (or the OS hostname lookup) 
and supposed to contain the visible public FQDN of the each server the message 
relayed through.

Amos




RE: [squid-users] Re: server failover/backup

2014-08-20 Thread Lawrence Pingree
Nuhll,
Just use the following config and point your clients to port 8080 on the
squid ip. The ONLY thing you really should change with this configuration is
the IP addresses, the hostname or add file extensions to the
refresh_patterns. It should work!


#
#Recommended minimum configuration:
#
always_direct allow all

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7       # RFC 4193 local private network range
acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines
acl Safe_ports port 1-65535
acl CONNECT method GET POST HEAD CONNECT PUT DELETE
#acl block-fnes urlpath_regex -i .*/fnes/echo
acl noscan dstdomain .symantecliveupdate.com liveupdate.symantec.com psi3.secunia.com update.immunet.com

acl video urlpath_regex -i \.(m2a|avi|mov|mp(e?g|a|e|1|2|3|4)|m1s|mp2v|m2v|m2s|wmx|rm|rmvb|3gp|3gpp|omg|ogm|asf|asx|wmv|m3u8|flv|ts)

#
# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost

no_cache deny noscan
always_direct allow noscan
always_direct allow video

# Deny requests to certain unsafe ports

# Deny CONNECT to other than secure SSL ports

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on .localhost. is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#
#cache_peer 192.168.1.1 parent 8080 0 default no-query no-digest no-netdb-exchange
#never_direct allow all

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed

http_access allow all

# allow localhost always proxy functionality

# And finally deny all other access to this proxy

# Squid normally listens to port 3128
http_port 192.168.2.2:8080

# We recommend you to use at least the following line.
hierarchy_stoplist cgi-bin ?

# Uncomment and adjust the following to add a disk cache directory.
maximum_object_size 5000 MB
#store_dir_select_algorithm round-robin
cache_dir aufs /daten/squid 10 16 256
# Leave coredumps in the first cache dir
coredump_dir /var/cache/squid

# Add any of your own refresh_pattern entries above these.
# General Rules
refresh_pattern -i \.(jpg|gif|png|webp|jpeg|ico|bmp|tiff|bif|ver|pict|pixel|bs)$ 22 90% 30 override-expire ignore-no-store ignore-private ignore-auth refresh-ims
refresh_pattern -i \.(js|css|class|swf|wav|dat|zsci|do|ver|advcs|woff|eps|ttf|svg|svgz|ps|acsm|wma)$ 22 90% 30 override-expire ignore-no-store ignore-private ignore-auth refresh-ims
refresh_pattern -i \.(html|htm|crl)$ 22 90% 259200 override-expire ignore-no-store ignore-private ignore-auth refresh-ims
refresh_pattern -i \.(xml|flow)$ 0 90% 10
refresh_pattern -i \.(json)$ 1440 90% 5760
refresh_pattern -i ^http:\/\/liveupdate.symantecliveupdate.com.*\.(zip)$ 0 0% 0
refresh_pattern -i microsoft.com/.*\.(cab|exe|ms[i|u|f]|asf|wma|dat|zip)$ 22 80% 259200
refresh_pattern -i windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|asf|wma|dat|zip)$ 22 80% 259200
refresh_pattern -i windows.com/.*\.(cab|exe|ms[i|u|f]|asf|wma|dat|zip)$ 22 80% 259200
refresh_pattern -i \.(bin|deb|rpm|drpm|exe|zip|tar|tgz|bz2|ipa|bz|ram|rar|bin|uxx|gz|crl|msi|dll|hz|cab|psf|vidt|apk|wtex|hz|ipsw)$ 22 90% 50 override-expire ignore-no-store ignore-private ignore-auth refresh-ims
refresh_pattern -i \.(ppt|pptx|doc|docx|pdf|xls|xlsx|csv|txt)$ 22 90% 259200 override-expire ignore-no-store ignore-private ignore-auth refresh-ims
refresh_pattern -i ^ftp: 66000 90% 259200
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern -i . 0 90% 259200
log_icp_queries off
icp_port 0
htcp_port 0
snmp_port 3401
acl snmppublic snmp_community public
snmp_access allow snmppublic all
minimum_object_size 0 KB
buffered_logs on
cache_effective_user squid
#header_replace User-Agent Mozilla/5.0 (X11; U;) Gecko/20080221
Firefox/2.0.0.9
vary_ignore_expire on
cache_swap_low 90
cache_swap_high 95
visible_hostname shadow
unique_hostname shadow-DHS
shutdown_lifetime 0 second
request_header_max_size 256 KB
half_closed_clients off
max_filedesc 65535
connect_timeout 10 second
cache_effective_group squid
#access_log /var/log/squid/access.log squid
access_log daemon:/var/log/squid/access.log buffer-size=1MB
client_db off
dns_nameservers 127.0.0.1
#pipeline_prefetch 20
ipcache_size 8192
fqdncache_size 8192
#positive_dns_ttl 72 hours
#negative_dns_ttl 5 minutes
tcp_outgoing_address 192.168.2.2
dns_v4_first on
check_hostnames off
forwarded_for delete
via off
pinger_enable off
cache_mem 2048 MB
maximum_object_size_in_memory 256 KB
memory_cache_mode disk
cache_store_log none

Re: [squid-users] unbound and squid not resolving SSL sites

2014-08-20 Thread squid



which one?
It's client -- unbound -- if IP listed in unbound.conf -- forwarded  
to proxy -- page or stream returned to client


For others it's client -- unbound -- direct to internet with normal DNS



Re: [squid-users] https://weather.yahoo.com redirect loop

2014-08-20 Thread Amos Jeffries
On 21/08/2014 2:23 p.m., Lawrence Pingree wrote:
 No, I mean they are intentionally blocking with a configured policy,
 its not a bug. :) They have signatures that match Via headers and
 forwarded for headers to determine that it's squid. This is because
 many hackers are using bounces off open squid proxies to launch web
 attacks.
 

That still sounds like a bug. Blocking on squid existence makes as much
sense as blocking all traffic with a UA header containing "MSIE" on the
grounds that 90% of web attacks come with that agent string.
The content inside those headers is also context specific; signature
matching will not work beyond a simple proxy/maybe-proxy determination
(which does not even determine non-proxy!).


A proposal came up in the IETF a few weeks ago that HTTPS traffic
containing Via header should be blocked on sight by all servers. It got
booted out on these grounds:

* the bad guys are not sending Via.

* what Via do exist are being sent by good guys who obey the specs but
are otherwise literally forced (by law or previous TLS based attacks) to
MITM the HTTPS in order to increase security checking on that traffic
(i.e. AV scanning).

Therefore, the existence of Via is actually a sign of *good* health in
the traffic and a useful tool for finding culprits behind the well
behaved proxies.
 Rejecting or blocking based on its existence just increases the ratio
of nasty traffic which makes it through. While simultaneously forcing
the good guys to become indistinguishable from bad guys. Only the
bad guys get any actual benefit out of the situation.


Basically "via off" is a bad idea, and broken services (intentional or
otherwise) which force it to be used are worse than terrible.

Amos


Re: [squid-users] unbound and squid not resolving SSL sites

2014-08-20 Thread Amos Jeffries
On 21/08/2014 2:37 p.m., sq...@proxyplayer.co.uk wrote:
 
 which one?
 It's client -- unbound -- if IP listed in unbound.conf -- forwarded
 to proxy -- page or stream returned to client
 
 For others it's client -- unbound -- direct to internet with normal DNS
 

Replace "forwarded to proxy" with "IP address forged as proxy".
That is the source of the problem: your proxy does not have any TLS
security certificates or keys to handle the HTTPS traffic properly, and
no way to identify what the real server actually is.

Squid does not yet support receiving SNI, nor does much client software
support sending it. So the only way this can work is with packets
*routed* through the Squid device. The unbound setup you have cannot work.
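
If the goal is to steer only selected destinations through the Squid box, that selection has to happen at the routing/packet level rather than in DNS. A rough sketch using Linux policy routing on the client-side gateway; the destination subnet, mark value, table number and Squid-host IP are all assumptions for illustration:

```shell
# Mark HTTPS packets for the destinations that should go via the proxy box...
iptables -t mangle -A PREROUTING -d 203.0.113.0/24 -p tcp --dport 443 \
  -j MARK --set-mark 1

# ...and route marked packets to the Squid host (192.0.2.10), where
# TPROXY/NAT interception can then be applied.
ip rule add fwmark 1 table 100
ip route add default via 192.0.2.10 table 100
```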


What I am looking for is the network topology over which the TCP
connections are supposed to flow. VPN connection, LAN connection, WAN
connection, etc.
 This is necessary in order to identify which device is the suitable
gateway to set up a tunnel to the proxy. Then we can look at what types
of tunnel are appropriate for your situation.

Amos