[squid-users] RE: YouTube Resolution Locker

2014-07-29 Thread Stakres
Hi Eliezer,
If the URL is the same, meaning the same locked resolution for the same
video, the deduplicated ID will be the same, so the object will be served
from the existing Squid cache.
I agree with you that if the admin changes the resolution, the ID will be
different, so it will not be taken from the cache.
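
For reference, the stock Squid mechanism for this kind of deduplication is
the Store-ID helper interface (Squid 3.4+). A minimal sketch, with a
made-up helper path and key format:

store_id_program /usr/local/bin/yt_storeid
acl yt_videos dstdomain .googlevideo.com .youtube.com
store_id_access allow yt_videos
store_id_access deny all

# The helper reads one URL per line and replies with a canonical key, e.g.
#   OK store-id=http://youtube.squid.internal/VIDEOID/ITAG
# so every URL variant of the same video+resolution maps to one cache object.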

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/YouTube-Resolution-Locker-tp4667042p4667096.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] External ACL tags

2014-07-29 Thread Steve Hill

On 29.07.14 06:37, Amos Jeffries wrote:


The note ACL type should match against values in the tag key name the
same as any other annotation. If that does not work, try a different key
name than tag=.


Perfect, thank you!
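
For anyone finding this in the archive, a minimal sketch of the resulting
setup (the helper path and key name here are made up):

external_acl_type grp_check %SRC /usr/local/bin/group-helper
acl grp_lookup external grp_check
# the helper replies e.g. "OK group=staff"; the note ACL then matches the
# annotation by key name and value
acl staff_users note group staff
http_access allow grp_lookup staff_users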


--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messenger: xmpp:st...@opendium.com
   Email: st...@opendium.com
   Phone: sip:st...@opendium.com

Sales / enquiries contacts:
   Email: sa...@opendium.com
   Phone: +44-844-9791439 / sip:sa...@opendium.com

Support contacts:
   Email: supp...@opendium.com
   Phone: +44-844-4844916 / sip:supp...@opendium.com


[squid-users] TCP_MISS then TCP_DENIED

2014-07-29 Thread peter
Hi, I have configured a new install of Squid on CentOS 6.5 via yum. I
have followed some of the guides on the Squid wiki to get AD group
authentication working, but am getting some strange results when looking
at access.log.


As you can see from the following log entries, with an authenticated user
logged in and browsing to www.google.com, the server logs a couple of
TCP_MISS/200 entries, then TCP_DENIED/407, before going back to
TCP_MISS/200 again:


1406653633.180    220 172.29.94.15 TCP_MISS/200 3863 CONNECT ssl.gstatic.com:443 admin_pete DIRECT/74.125.230.119 -
1406653633.180     78 172.29.94.15 TCP_MISS/200 3524 CONNECT www.google.com:443 admin_pete DIRECT/173.194.41.116 -
1406653633.182      0 172.29.94.15 TCP_DENIED/407 3951 CONNECT www.google.com:443 - NONE/- text/html
1406653633.185      0 172.29.94.15 TCP_DENIED/407 4280 CONNECT www.google.com:443 - NONE/- text/html
1406653633.194      0 172.29.94.15 TCP_DENIED/407 3955 CONNECT ssl.gstatic.com:443 - NONE/- text/html
1406653633.196      0 172.29.94.15 TCP_DENIED/407 4284 CONNECT ssl.gstatic.com:443 - NONE/- text/html
1406653633.247     72 172.29.94.15 TCP_MISS/200 3862 CONNECT www.gstatic.com:443 admin_pete DIRECT/74.125.230.127 -
1406653633.249      0 172.29.94.15 TCP_DENIED/407 3955 CONNECT www.gstatic.com:443 - NONE/- text/html
1406653633.252      0 172.29.94.15 TCP_DENIED/407 4284 CONNECT www.gstatic.com:443 - NONE/- text/html
1406653633.394      0 172.29.94.15 TCP_DENIED/407 3955 CONNECT apis.google.com:443 - NONE/- text/html


It is a bit confusing: the web page loads, but I still get all these
denied entries in access.log.


Could someone help me understand what this means?

Thanks.
Pete.


Re: [squid-users] Tproxy immediately closing connection

2014-07-29 Thread jan
I installed the libcap-dev package, recompiled Squid, and TPROXY is now
working fine for both IPv4 and IPv6.


Thanks Amos!

On 2014-07-26 11:35, Amos Jeffries wrote:

On 25/07/2014 10:02 a.m., Jan Krupa wrote:

Hi all,

I've been struggling to configure transparent proxy for IPv6 on my
Raspberry Pi acting as a router following the guide:
http://wiki.squid-cache.org/Features/Tproxy4

Despite all my efforts, all I got was Squid immediately closing the
connection after it was established (not rejecting the connection: the
three-way handshake succeeds and then the client receives an RST packet).



Do you have libcap2 installed and libcap2-dev used to build Squid?
There have been a few issues where its absence was not reported by
Squid.
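
A quick way to check, as a sketch (Debian-style package names; the
configure flag assumes you are building from source):

# squid -v prints the configure options; look for libcap there
squid -v | grep -o -- '--with-libcap'

# if missing: install the headers, then re-run configure with your usual
# options plus --with-libcap, and rebuild
apt-get install libcap2 libcap-dev
./configure --with-libcap && make && make install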


Amos


Re: [squid-users] why squid can block https when i point my browser to port , and cant when its transparent ?

2014-07-29 Thread Alex Rousskov
On 07/27/2014 04:49 PM, Jason Haar wrote:

 I do wonder where this will end.

Since one cannot combine interception, inspection, and secure delivery,
this can only end when at least one of those components dies.

Interception is probably the weak link here because it can be removed(*)
by technological means if enough folks decide it has to go. Inspection
(by trusted intermediaries) and secure delivery (through trusted
intermediaries) will probably stay (with modifications) because their
existence springs from human nature (rather than just a lack of
development discipline, will, and resources).


 How long before Firefox starts pinning,
 then MSIE, then it gets generalized, etc?

If applied broadly, pinning in an interception world will clash with
government, corporate, and parental desire to protect assets.  With
today's technology, pinning can only survive on a limited scale, IMHO. The
day after tomorrow, if interception dies, replaced by trusted
intermediaries, pinning will not be a problem.


Either that, or all web content is going to be owned by a few
content providers that guarantee that their content is safe and
appropriate (hence does not need to be inspected). This is what Google
claims with its pinning solution today, and I suspect this is not a
responsibility they actually want or enjoy.


Cheers,

Alex.
(*) I am only discussing overt technologies and needs here. Needless to
say, covert interception will stay with us for the foreseeable future.



[squid-users] unbound and squid not resolving SSL sites

2014-07-29 Thread squid
In my network I have unbound redirecting some sites through the proxy
server and checking authentication. If I redirect www.thisite.com it
works correctly. However, as soon as SSL is used
(https://www.thissite.com) it doesn't resolve at all. Any ideas what I
have to do to enable SSL redirects in unbound or squid?


squid.conf
#
# Recommended minimum configuration:
#
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) 
machines

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

#
# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports


external_acl_type time_squid_auth ttl=5 %SRC /usr/local/bin/squidauth
acl interval_auth external time_squid_auth
http_access allow interval_auth
http_access deny all
http_port 80 accel vhost allow-direct
hierarchy_stoplist cgi-bin ?
coredump_dir /var/spool/squid

refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320




Re: [squid-users] TCP_MISS then TCP_DENIED

2014-07-29 Thread Amos Jeffries
On 30/07/2014 5:18 a.m., pe...@pshankland.co.uk wrote:
 Hi, I have configured a new install of Squid on CentOS 6.5 via yum. I
 have followed some of the guides on the Squid wiki to get AD group
 authentication working, but am getting some strange results when looking
 at access.log.
 
 As you can see from the following log entries, with an authenticated user
 logged in and browsing to www.google.com, the server logs a couple of
 TCP_MISS/200 entries, then TCP_DENIED/407, before going back to
 TCP_MISS/200 again:
 
 1406653633.180    220 172.29.94.15 TCP_MISS/200 3863 CONNECT ssl.gstatic.com:443 admin_pete DIRECT/74.125.230.119 -
 1406653633.180     78 172.29.94.15 TCP_MISS/200 3524 CONNECT www.google.com:443 admin_pete DIRECT/173.194.41.116 -
 1406653633.182      0 172.29.94.15 TCP_DENIED/407 3951 CONNECT www.google.com:443 - NONE/- text/html
 1406653633.185      0 172.29.94.15 TCP_DENIED/407 4280 CONNECT www.google.com:443 - NONE/- text/html
 1406653633.194      0 172.29.94.15 TCP_DENIED/407 3955 CONNECT ssl.gstatic.com:443 - NONE/- text/html
 1406653633.196      0 172.29.94.15 TCP_DENIED/407 4284 CONNECT ssl.gstatic.com:443 - NONE/- text/html
 1406653633.247     72 172.29.94.15 TCP_MISS/200 3862 CONNECT www.gstatic.com:443 admin_pete DIRECT/74.125.230.127 -
 1406653633.249      0 172.29.94.15 TCP_DENIED/407 3955 CONNECT www.gstatic.com:443 - NONE/- text/html
 1406653633.252      0 172.29.94.15 TCP_DENIED/407 4284 CONNECT www.gstatic.com:443 - NONE/- text/html
 1406653633.394      0 172.29.94.15 TCP_DENIED/407 3955 CONNECT apis.google.com:443 - NONE/- text/html
 
 It is a bit confusing: the web page loads, but I still get all these
 denied entries in access.log.
 
 Could someone help me understand what this means?

Since you mention AD group authentication, I assume you are using NTLM
or Negotiate authentication.

Three things to be aware of when reading these logs:

1) The entries are logged at the time of transaction completion. So the
admin_pete CONNECT requests that got a MISS/200 actually started earlier
than the denied ones. E.g. the entry logged at 1406653633.247 with a 72ms
duration actually started at about 1406653633.175.
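
A one-liner sketch to annotate each entry with its start time (assuming
the default native log format, where field 1 is the completion timestamp
and field 2 is the elapsed time in milliseconds):

awk '{ printf "start=%.3f  %s\n", $1 - $2/1000, $0 }' access.log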

 ... that helps you read the log for identifying #2 ...

2) Authentication requires multiple HTTP transactions to perform an
authentication handshake. Both NTLM and Negotiate have mandatory fresh
handshakes on every new connection, and NTLM always has an extra
transaction in the middle of the handshake. So you get a denial first,
then a success. This shows up worst of all with HTTPS like the above,
where every tunnel attempt requires a new connection.
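
On the wire the handshake looks roughly like this (headers abbreviated
and the token omitted):

  CONNECT www.google.com:443 HTTP/1.1
    <- 407 Proxy Authentication Required (Proxy-Authenticate: Negotiate)
  CONNECT www.google.com:443 HTTP/1.1
  Proxy-Authorization: Negotiate <token>
    <- 200 Connection established

The 407 leg is what gets logged as TCP_DENIED/407 with no username; the
retry carrying credentials is the TCP_MISS/200.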

3) Browsers also have a tendency to open multiple connections at a time.
Sometimes this can be attributed to Happy Eyeballs; sometimes they are
just grabbing more connections for future performance. That (or NTLM) is
probably the case for these attempts, which are only 3ms apart.

Amos


Re: [squid-users] why squid can block https when i point my browser to port , and cant when its transparent ?

2014-07-29 Thread Amos Jeffries
On 30/07/2014 11:59 a.m., Alex Rousskov wrote:
 On 07/27/2014 04:49 PM, Jason Haar wrote:
 
 I do wonder where this will end.
 
 Since one cannot combine interception, inspection, and secure delivery,
 this can only end when at least one of those components dies.
 
 Interception is probably the weak link here because it can be removed(*)
 by technological means if enough folks decide it has to go. Inspection
 (by trusted intermediaries) and secure delivery (through trusted
 intermediaries) will probably stay (with modifications) because their
 existence springs from human nature (rather than just a lack of
 development discipline, will, and resources).
 
 
 How long before Firefox starts pinning,
 then MSIE, then it gets generalized, etc?
 
 If applied broadly, pinning in an interception world will clash with
 government, corporate, and parental desire to protect assets.  With
 today's technology, pinning can only survive on a limited scale, IMHO. The
 day after tomorrow, if interception dies, replaced by trusted
 intermediaries, pinning will not be a problem.
 
 
 Either that, or all web content is going to be owned by a few
 content providers that guarantee that their content is safe and
 appropriate (hence does not need to be inspected). This is what Google
 claims with its pinning solution today, and I suspect this is not a
 responsibility they actually want or enjoy.

It is also a false claim.
http://www.thewhir.com/web-hosting-news/aws-supports-41-malware-hosting-sites-web-host-isp

Shared hosting providers are a well-known source of malware and viral
infection. Google-hosted sites are no different even though their
https:// service is pinned. They do well enough to get only an "also
ran" mention, but that is still not clean enough to warrant a bypass of
inspection (hundreds or a few thousand infection points make up their
low % rating).

Amos