[squid-users] WARNING: Forwarding loop detected for:

2014-04-08 Thread Dipjyoti Bharali

Hi,

I'm facing a peculiar issue with certain specific clients. When these 
clients connect to the proxy server, it goes for a toss until I reload 
the service. When I examine the log file, I get this same message 
every time.


   /2014/04/02 09:00:17| WARNING: Forwarding loop detected for:
   GET / HTTP/1.1
   Content-Type: text/xml; charset=Utf-16
   UNICODE: YES
   Content-Length: 0
   Host: 192.168.1.1:3128
   Via: 1.0 hindenberg (squid), 1.1 hindenberg (squid), 1.1 hindenberg
   (squid), 1.1 hindenberg (squid), [... "1.1 hindenberg (squid)"
   repeated many more times ...]

   X-Forwarded-For: 192.168.1.74, 192.168.1.1, 192.168.1.1,
   192.168.1.1, [... "192.168.1.1" repeated many more times ...]

   Cache-Control: max-age=5999400
   Connection: keep-alive/


Please help. Otherwise I have to reload every now and then. For now I 
have disconnected those clients from the network.



*Dipjyoti Bharali*

Skanray Technologies Pvt Ltd,
Plot No. 15-17, Hebbal Industrial Area,
Mysore – 570018
Cell Phone : +919243552011
Phone/Fax: +91 821 2415559/2403344 Extn: 310

www.skanray.com

*Please consider the environment before printing this email. *


---
avast! Antivirus: Outbound message clean.
Virus Database (VPS): 140407-0, 07-04-2014
Tested on: 08-04-2014 11:45:53
avast! - copyright (c) 1988-2014 AVAST Software.
http://www.avast.com





[squid-users] Re: Caching not working for Youtube videos

2014-04-08 Thread babajaga
Hi, you are a bit late in detecting this issue :-)
YouTube changed this some months ago already. At the moment I cannot do further
research, but also look here:
http://squid-web-proxy-cache.1019090.n4.nabble.com/Do-we-have-an-algorithm-to-define-the-cachabillity-of-an-object-by-the-request-and-response-td4665473.html



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Caching-not-working-for-Youtube-videos-tp4665486p4665488.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Do we have an algorithm to define the cachabillity of an object by the request and response?

2014-04-08 Thread babajaga
> the only way is to force a fetch of the full object

I do not see how this will solve the random (?) range issue without a lot
of new, clever coding.
At the moment I cannot seriously test for random ranges, but will definitely do so.
(NOTE: by "range" I refer to an explicit range=xxx-yyy somewhere within the
URL, NOT a Range request in the HTTP header, which YouTube used and then
dropped quite some time ago.)



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Do-we-have-an-algorithm-to-define-the-cachabillity-of-an-object-by-the-request-and-response-tp4665473p4665489.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] WARNING: Forwarding loop detected for:

2014-04-08 Thread Kinkie
This looks like a legitimate forwarding loop. What is your request
routing configuration?
cache_peer parent and never_direct are the most interesting lines, off
the top of my head.
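For context, a forwarding loop of this shape typically appears when the proxy ends up selecting itself as its own parent. A minimal sketch of the directives meant here (hostnames are illustrative, not from the poster's setup):

```
# squid.conf fragment (sketch). If the parent hostname resolves back to
# this same Squid instance, each request re-enters the proxy and the Via
# header grows until the forwarding-loop warning fires.
cache_peer upstream.example.net parent 3128 0 no-query
never_direct allow all
```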

On Tue, Apr 8, 2014 at 8:15 AM, Dipjyoti Bharali dipjy...@skanray.com wrote:
 [original message, log excerpt, and signature quoted in full; trimmed]

-- 
Francesco


[squid-users] Re: WARNING: Forwarding loop detected for:

2014-04-08 Thread babajaga
Please post squid.conf, without comments.
And which URL exactly results in the forwarding loop?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/WARNING-Forwarding-loop-detected-for-tp4665487p4665491.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Re: WARNING: Forwarding loop detected for:

2014-04-08 Thread Dipjyoti Bharali

squid.conf is as follows,


https_port 192.168.1.1:3129 cert=/etc/pki/myCA/private/server-key-cert.pem 
transparent

http_port 192.168.1.1:3128 transparent

acl QUERY urlpath_regex cgi-bin \?
acl apache rep_header Server ^Apache
access_log /var/log/squid/access.log squid
hosts_file /etc/hosts

refresh_pattern ^ftp:// 480 60% 22160
refresh_pattern ^gopher:// 30 20% 120
refresh_pattern . 480 50% 22160

forwarded_for on

cache_dir ufs /var/spool/squid 1 16 256

acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32

acl nocache dst 192.168.0.0/24
acl lan src 192.168.1.0/24 fe80::/10
acl SSL_ports port 443 # https
acl Safe_ports port 80 443 # http, https
acl Safe_ports port 21 # ftp
acl Safe_ports port 995 # SSL/TLS
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 631 # cups
acl Safe_ports port 873 # rsync
acl Safe_ports port 901 # SWAT
acl Safe_ports port 2082 # CPANEL
acl Safe_ports port 2083 # CPANEL
acl Safe_ports port 2078 # Webdav
acl purge method PURGE
acl CONNECT method CONNECT

acl BadSite ssl_error SQUID_X509_V_ERR_DOMAIN_MISMATCH
acl banned_sites url_regex -i who.is whois cricket resolver lyrics songs 
bollywood porn xxx livetv
acl ads dstdom_regex /var/squidGuard/ad_block.txt
#acl local src 192.168.1.1
acl numeric_IPs dstdom_regex 
^(([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)|(\[([0-9af]+)?:([0-9af:]+)?:([0-9af]+)?\])):443

acl blockfiles urlpath_regex /var/squidGuard/blocks.files.acl

deny_info ERR_BLOCKED_FILES blockfiles
http_access deny blockfiles


http_access deny banned_sites
http_access deny skype_access
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow lan
http_access deny numeric_IPS
http_access deny all
http_reply_access allow all
icp_access allow all

visible_hostname hindenberg
coredump_dir /var/spool/squid

cache_peer hindenberg parent 3128 3129
acl PEERS src 192.168.1.1
cache_peer_access hindenberg allow !PEERS

sslproxy_cert_error allow lan
sslproxy_flags DONT_VERIFY_PEER

cache_effective_user squid
cache_effective_group squid
cache_mem 2048 MB
memory_replacement_policy lru
cache_replacement_policy heap LFUDA
cache deny nocache

redirect_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf
err_html_text Blocked !!
dns_nameservers 127.0.0.1
url_rewrite_children 30
url_rewrite_concurrency 0
httpd_suppress_version_string on





*Dipjyoti Bharali*


*Please consider the environment before printing this email. *
On 08-04-2014 15:05, babajaga wrote:

Pls, post squid.conf, without comments.
And, wich URL exactly results in the forward loop ?
















Re: [squid-users] Re: Do we have an algorithm to define the cachabillity of an object by the request and response?

2014-04-08 Thread Amos Jeffries
On 8/04/2014 7:24 p.m., babajaga wrote:
 the only way is to force a fetch of the full object
 
 I do not see, how this will solve the random (?) range-issue, without a lot
 of new, clever coding.
 Actually, I can not seriously test for random range, but will definitely do. 
 (NOTE: With range I refere to explicit range=xxx-yyy somewhere within
 URL, NOT range request in http-header, which was used and then dumped
 already quite some time ago by youtube.)

If the range is done properly with a Range: header, then future random
ranges can be served as a HIT on the cached object.

The problem remains if anything in the URL changes and/or the range detail
is sent in the URL query-string values.

YouTube videos are uniquely nasty, with their FLV meta header inside the
content and fake range requests. These things really *are* a whole
unique video chunk per fetch.

Amos
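To illustrate the distinction drawn above (URL and byte ranges here are invented): a range carried in the Range: header leaves the URL, and hence the cache key, constant, while a range baked into the query string makes every fetch a distinct object.

```
# Cacheable against one stored full object: range in the header
GET /videoplayback?id=abc HTTP/1.1
Range: bytes=2789376-3719167

# One unique cache object per fetch: range inside the URL itself
GET /videoplayback?id=abc&range=2789376-3719167 HTTP/1.1
```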



Re: [squid-users] WARNING: Forwarding loop detected for:

2014-04-08 Thread Amos Jeffries
On 8/04/2014 8:51 p.m., Kinkie wrote:
 This looks like a legitimate forwarding loop. What is your request
 routing configuration?
 cache_peer parent and never_direct are the most interesting lines on
 top of my head.


The configuration directive via on should make Squid catch and abort on
the first cycle through the loop, so you can debug where it's happening
much more easily.

We also see these loops when there is an IP:port in the Host header with
NAT port forwarding. The NAT interception *MUST* be done (only) on the
Squid machine itself - it cannot be done by a separate router.
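A sketch of what interception on the Squid box itself can look like (interface name, port, and the squid user are assumptions, not taken from this thread):

```
# Run on the Squid machine itself, never on an upstream router.
# Redirect inbound port-80 traffic from the LAN interface to Squid:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
  -j REDIRECT --to-ports 3128
# Never redirect traffic that Squid itself originates, or it loops:
iptables -t nat -A OUTPUT -p tcp --dport 80 \
  -m owner --uid-owner squid -j ACCEPT
```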

Amos

 
 On Tue, Apr 8, 2014 at 8:15 AM, Dipjyoti Bharali wrote:
 [original message, log excerpt, and signature quoted in full; trimmed]



[squid-users] AW: NTLM problem with Internet explorer/windows

2014-04-08 Thread Rietzler, Markus (RZF, SG 324 / RIETZLER_SOFTWARE)
What are the problems?

There are two issues:

1) Squid doing user auth via an ACL
2) Squid forwarding / passing through auth to IIS

Are there any Squid parent proxies involved?

Normally:

http_port 8080 connection-auth=on

where connection-auth is on by default. That means NTLM auth is passed 
through Squid. Which samba/winbind version are you using? For IE10 and up you 
have to use newer versions (I think >= 3.6.x).


markus
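The pass-through wiring described above usually looks something like this in squid.conf (a sketch; the helper path and child count are assumptions):

```
# Samba's ntlm_auth helper speaking the Squid NTLMSSP helper protocol
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 20
# Keep connection-oriented auth enabled on the listening port
http_port 8080 connection-auth=on
acl authed proxy_auth REQUIRED
http_access allow authed
```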


-----Original Message-----
From: Antero Prazeres [mailto:antero.praze...@blackboard.com] 
Sent: Wednesday, 2 April 2014 13:21
To: squid-users@squid-cache.org
Subject: [squid-users] NTLM problem with Internet explorer/windows

Hello,
I need some help with this issue, as I am out of ideas and I don't find any 
similar issues on your lists/emails/FAQs.
I am using a server with CentOS 6 and Squid 3.1.10 as a proxy. One of my teams 
needs access to IIS 7 through Squid for test and development purposes, using 
only NTLM. The Squid server is accessing the AD and credentials are working. All 
tests performed with wbinfo are successful. Access to the IIS NTLM site is 
successful from Firefox and Safari, all returning the message "you are 
authenticated using NTLM". I tried the same test on several machines 
with Windows 7 and Internet Explorer, most of them IE 11, and it doesn't work. 
The GPO was altered on Windows for NTLM; the IIS site is requesting NTLM with 
extended protection, and kernel-mode authentication is off.

Does somebody have any ideas, please?

Thank you

Best regards

Antero Prazeres



This email and any attachments may contain confidential and proprietary 
information of Blackboard that is for the sole use of the intended recipient. 
If you are not the intended recipient, disclosure, copying, re-distribution or 
other use of any of this information is strictly prohibited. Please immediately 
notify the sender and delete this transmission if you received this email in 
error.


Re: [squid-users] AW: NTLM problem with Internet explorer/windows

2014-04-08 Thread Amos Jeffries

The fact that other browsers are perfectly fine logging in through the
proxy should tell you it is a problem with IE.


Also, for the record, please reconsider the use of NTLM on a website.

The amount of trouble you are having to go to to get it to operate
should be an indication of its (lack of) usefulness. Windows XP is being
end-of-life'd this month and was the last system to support NTLM by default.

Amos

On 8/04/2014 11:37 p.m., Rietzler, Markus wrote:
 [messages from Markus Rietzler and Antero Prazeres quoted in full; trimmed]


Re: [squid-users] Re: Caching not working for Youtube videos

2014-04-08 Thread aditya agarwal
Yeah, we didn't test this feature for some time :). I went through the link, but 
couldn't find a definite solution to this issue.

And now, as Google/YouTube are adding range fields along with other dynamic 
fields to the URL, caching is almost impossible (unless, as mentioned, smart 
coding is written to join the parts).

So, as of now, isn't there any way of caching YouTube videos?


Thanks,
Aditya



On Tuesday, 8 April 2014 12:49 PM, babajaga augustus_me...@yahoo.de wrote:
Hi, you are a bit late to detect this issue :-)
youtube changed this already some months ago. Actually I can not do further
research, but also look here:
http://squid-web-proxy-cache.1019090.n4.nabble.com/Do-we-have-an-algorithm-to-define-the-cachabillity-of-an-object-by-the-request-and-response-td4665473.html






[squid-users] Re: Do we have an algorithm to define the cachabillity of an object by the request and response?

2014-04-08 Thread babajaga
> If the range is done properly with Range: header then the future random
> ranges can be served as HIT on the cached object.
Yes.
But that is NOT the actual state with YouTube; only history, unfortunately.

> Problem remains if anything in the URL changes and/or the range detail
> is sent in the URL query-string values.
That IS the actual state.

And it looks like the range details, as you call them, are NOT
repeatable ANY MORE (which means they WERE), even if you request the
same video twice from the same client, one right after the other.
That is what I meant by "random range". I will check random ranges in more
detail some time in the future.




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Do-we-have-an-algorithm-to-define-the-cachabillity-of-an-object-by-the-request-and-response-tp4665473p4665497.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Re: Do we have an algorithm to define the cachabillity of an object by the request and response?

2014-04-08 Thread Eliezer Croitoru

On 04/08/2014 04:28 PM, babajaga wrote:

> > is sent in the URL query-string values.
> That IS actual state.
>
> And it looks like, that the range details, as you call it, are NOT
> repeatable ANY MORE (which means, the WERE), even in case you request 2 time
> the same video from same client, just one after the other.
> That is, what I meant with random range. Will check random range in more
> detail some time in the future.

There is an option to redirect a partial request to a full request, or to 
download the full object, stripping the range parameters.

It's an idea that I have not tested in the last month yet.
Eliezer
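A sketch of the kind of rewrite helper being hinted at here, assuming (as the thread does) that the range rides in the query string of googlevideo.com URLs. This is an untested illustration for the standard url_rewrite_program interface, not a recommended production setup:

```python
#!/usr/bin/env python3
"""Squid url_rewrite_program sketch: drop the range= query parameter
from googlevideo URLs so Squid fetches (and can cache) the full object.
The hostname match and parameter name are assumptions from this thread."""
import sys
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def strip_range(url):
    parts = urlsplit(url)
    if not parts.netloc.endswith(".googlevideo.com"):
        return url
    # Keep every query pair except the range=xxx-yyy one.
    query = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
             if k != "range"]
    return urlunsplit(parts._replace(query=urlencode(query)))

def main():
    # Squid writes one request per line (URL first); we answer with the
    # possibly-rewritten URL and flush so Squid is never left waiting.
    for line in sys.stdin:
        fields = line.split()
        if not fields:
            continue
        sys.stdout.write(strip_range(fields[0]) + "\n")
        sys.stdout.flush()

if __name__ == "__main__":
    main()
```

Wired in via something like `url_rewrite_program /usr/local/bin/strip_range.py` (path hypothetical); note that, as discussed later in the thread, this only helps if the server then returns the full object rather than rejecting the rewritten URL.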


[squid-users] Re: Do we have an algorithm to define the cachabillity of an object by the request and response?

2014-04-08 Thread babajaga

> stripping the range headers
How often should I say it: there is no range header any more! There was one, a
year ago, maybe.
Now the range is within the URL!

Real world example, brand new:

1396987801.026   1766 127.0.0.1 TCP_MISS/200 930166 GET
http://r3---sn-a8au-nuae.googlevideo.com/videoplayback?c=webclen=8890573cpn=YbdH9EPrD2WaihVOcver=as3dur=229.296expire=1397011183fexp=931327%2C909708%2C943404%2C913564%2C921727%2C916624%2C931014%2C936106%2C937417%2C913434%2C936916%2C934022%2C936923%2C333%2C3300108%2C3300132%2C3300137%2C3300164%2C3310366%2C3310622%2C3310649gcr=usgir=yesid=o-AJr0zxHxn0iVmV-Cln_bZf3PMd4um4Qt9Thok1FphZR0ip=199.217.116.158ipbits=0itag=134keepalive=yeskey=yt5lmt=1384344824807223ms=aumt=1396987642mv=umws=yesrange=2789376-3719167ratebypass=yes;
  
!RANGE IN URL !! 
signature=622A2F30C82E4D26D6C9A88C2D08CBD7737DBD83.EFEABA801CDE1A0AE37F5F55161DD8994EC17788source=youtubesparams=clen%2Cdur%2Cgcr%2Cgir%2Cid%2Cip%2Cipbits%2Citag%2Clmt%2Csource%2Cupn%2Cexpiresver=3upn=b0ZrCxMNX5k
- DIRECT/4.53.166.142 application/octet-stream
Accept:%20*/*%0D%0AAccept-Encoding:%20gzip,deflate%0D%0AAccept-Language:%20de-DE,de;q=0.8,en-US;q=0.6,en;q=0.4%0D%0ACache-Control:%20max-age=0%0D%0AHost:%20r3---sn-a8au-nuae.googlevideo.com%0D%0AReferer:%20http://www.youtube.com/watch?v=hSjIz8oQuko%0D%0AUser-Agent:%20Mozilla/5.0%20(Windows%20NT%206.3;%20WOW64)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/33.0.1750.154%20Safari/537.36%0D%0A
HTTP/1.0%20200%20OK%0D%0ALast-Modified:%20Wed,%2013%20Nov%202013%2012:13:44%20GMT%0D%0ADate:%20Tue,%2008%20Apr%202014%2020:09:57%20GMT%0D%0AExpires:%20Tue,%2008%20Apr%202014%2020:09:57%20GMT%0D%0ACache-Control:%20private,%20max-age=23086%0D%0AContent-Type:%20application/octet-stream%0D%0AAccept-Ranges:%20bytes%0D%0AContent-Length:%20929792%0D%0AAlternate-Protocol:%2080:quic%0D%0AX-Content-Type-Options:%20nosniff%0D%0AConnection:%20close%0D%0AX-UA-Compatible:%20IE=edge%0D%0A%0D

And since this range=2789376-3719167 is now more or less random, there is no
chance of caching.
(Unless you write very smart code to join/select the pieces, assuming no
checksum or similar nasty parameter in the URL, too.)

I hope it is absolutely clear to you now.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Do-we-have-an-algorithm-to-define-the-cachabillity-of-an-object-by-the-request-and-response-tp4665473p4665500.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Re: Do we have an algorithm to define the cachabillity of an object by the request and response?

2014-04-08 Thread Eliezer Croitoru

On 04/08/2014 11:25 PM, babajaga wrote:

> > stripping the range headers
> How often should I say: There is no range header any more ! There was one, a
> year ago, may be.
> Now the range is within URL !
>
> Real world example, brand new:

Redirect to a URL with no range at all.
It's one of Google's defaults, as far as I can understand.

Eliezer


[squid-users] Re: Do we have an algorithm to define the cachabillity of an object by the request and response?

2014-04-08 Thread babajaga
> > Real world example, brand new:
> Redirect to a url with no range at all.
> It's one of google defaults as far as I can understand.

Sorry, I do not understand. Please be more specific.




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Do-we-have-an-algorithm-to-define-the-cachabillity-of-an-object-by-the-request-and-response-tp4665473p4665502.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] how to dynamically reconfigure squid?

2014-04-08 Thread Waldemar Brodkorb
Hi,
Amos Jeffries wrote,

  What do you think? What might be a solution to this problem? I can't
  restart squid when changing the ACL rules, because then all users in
  the network would be disconnected.
 
 You could set the request_timeout to be short. This would make the
 CONNECT requests terminate after a few minutes.

Will try that.
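For reference, that directive takes a time value in squid.conf; the figure below is an assumption chosen to illustrate, not a recommendation from this thread:

```
# How long Squid waits for a complete request on a new connection;
# a short value makes stale CONNECT attempts terminate sooner.
request_timeout 2 minutes
```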
 
 You could also use SSL-bump feature in Squid. This has a double benefit
 of allowing the control software acting on the HTTPS requests and
 preventing SPDY etc. being used by the browser.
 
This is not wanted by my boss, probably for ethical reasons. 
If a user uses https, he normally believes his traffic is secure, and
we want that to be the case.

Going back to the initial problem, slow NTLM authentication with
newer browsers: would it be worthwhile to switch completely to Negotiate?
Or is it possible to cache the NTLM authentication results, so that
Squid does not need to fork an ntlm_auth helper on every request?

Thanks
 Waldemar