[squid-users] Re: anyOne who has working ssl_bump configuration for facebook ???

2013-11-25 Thread iishiii
Dear Shinoj,

I have already tried doing it as you mentioned... but Squid was printing the
same issue :(

So where am I stuck?

I just need to cache Facebook, Dailymotion and HTTP & FTP downloads up to
50 MB...

Dear Shinoj... can you please help me step by step with Squid 3.4?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/anyOne-who-has-working-ssl-bump-configuration-for-facebook-tp4663452p4663496.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Squid stops handling requests after 30-35 requests

2013-11-25 Thread Bhagwat Yadav
Hi,

Please find below a packet dump taken at one of the intermediate
machines on the network while processing the request.

108 1.186439 10.0.11.22 165.254.58.18 TCP 76 38682 > http [SYN] Seq=0
Win=14360 Len=0 MSS=1436 SACK_PERM=1 TSval=123422360 TSecr=0 WS=64
109 1.254231 165.254.58.18 10.0.11.22 TCP 76 http > 38682 [SYN, ACK]
Seq=0 Ack=1 Win=14480 Len=0 MSS=1460 SACK_PERM=1 TSval=1476780015
TSecr=123422360 WS=2
110 1.254765 10.0.11.22 165.254.58.18 TCP 68 38682 > http [ACK] Seq=1
Ack=1 Win=14400 Len=0 TSval=123422377 TSecr=1476780015
111 1.273058 10.0.11.22 165.254.58.18 HTTP 302 GET / HTTP/1.1
112 1.273696 165.254.58.18 10.0.11.22 HTTP 1121 HTTP/1.1 503 Service
Unavailable (text/html)
113 1.273706 165.254.58.18 10.0.11.22 TCP 56 http > 38682 [FIN, ACK]
Seq=1066 Ack=235 Win=450 Len=0
114 1.274300 10.0.11.22 165.254.58.18 TCP 68 38682 > http [ACK]
Seq=235 Ack=1066 Win=17216 Len=0 TSval=123422382 TSecr=1476780015
115 1.275840 10.0.11.22 165.254.58.18 TCP 68 38682 > http [FIN, ACK]
Seq=235 Ack=1067 Win=17216 Len=0 TSval=123422382 TSecr=1476780015
116 1.275949 165.254.58.18 10.0.11.22 TCP 56 http > 38682 [RST]
Seq=1067 Win=538 Len=0
938 10.795716 10.0.11.22 165.254.58.18 TCP 76 [TCP Port numbers
reused] 38682 > http [SYN] Seq=0 Win=14360 Len=0 MSS=1436 SACK_PERM=1
TSval=123424762 TSecr=0 WS=64

From the dump, Squid hangs at the same time the "TCP Port numbers reused"
message appears.
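
(For reference, a capture like the one above can be taken with something along
these lines on the intermediate machine; the interface name is a placeholder
and the addresses are those from the dump above:

tcpdump -i eth0 -s 0 -w squid-hang.pcap host 165.254.58.18 and port 80

This is only a sketch of how such a dump might be gathered, not the exact
command that was used.)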

Please help us on this.

Thanks,
Bhagwat

On Mon, Nov 25, 2013 at 12:36 PM, Bhagwat Yadav
yadav.bhagwa...@gmail.com wrote:
 Hi,

 Upgraded squid to 3.1.20-2.2 from debian.org. The issue still persists.
 Note: I have not disabled the stats collection as mentioned in earlier mails.

 Please suggest how to resolve this?

 Thanks,
 Bhagwat

 On Fri, Nov 22, 2013 at 4:47 PM, Eliezer Croitoru elie...@ngtech.co.il 
 wrote:
 And what is the content of this 503 page?
 I do not know what the issue at hand is, but there are a couple of things to
 test first before running a full debug or trying to fix issues that might
 not exist.

 The version upgrade is there for a reason.
 I do know why an upgrade might not solve the issues, but still, if you have a
 testing environment, try to verify the results with the latest
 3.1.X branch, which should be 3.1.21 if I am not wrong.

 It is very critical for you to test it.
 Since Squid can run on many OSes and many specs, your logs are nice but not
 enough to understand the whole issue.

 There are many bugs that were fixed on the 3.1 list, but I have used it for
 a very long time.

 If you need help installing 3.1.21 or any newer version I can try to assist
 you.
 Also it can be installed alongside another version.

 Best Regards,
 Eliezer


 On 21/11/13 09:38, Bhagwat Yadav wrote:

 Hi Eliezer/All,

 Thanks for your help.

 PFA log snippets.
 Log1.txt contains sample 1 of cache.log, in which you can find the time
 gap.
 Log2.txt contains sample 2 of the client output and cache.log, showing the
 time gap.

 It seems that some in-memory operation, StatHistCopy, is causing this
 issue, though I am not sure.

 Squid version is: Squid Cache: Version 3.1.6.

 Please let me know if these logs are helpful.


 Thanks & Regards,

 On Wed, Nov 20, 2013 at 6:11 PM, Eliezer Croitoru elie...@ngtech.co.il
 wrote:

 Hey,

 Can you try another test?
 It is very nice to use wget but there are a couple of options that need to
 be considered.
 Just to help you, if it was not there until now, add --delete-after
 to the wget command line.

 It's not related to squid but it helps a lot.
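
(As a sketch of such a command line - the proxy address here is only a
placeholder, and the URL is the one from the earlier mails:

wget --delete-after -e use_proxy=yes -e http_proxy=http://192.0.2.10:3128 http://www.naukri.com/

--delete-after simply removes each downloaded file once the request completes.)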
 Now, if you are up to it, I will be happy to see the machine specs and OS.
 Also, what is the output of squid -v?

 Can you ping the machine at the time it gets stuck? What about a tcp-ping or
 nc -v squid_ip port ?
 We also need to verify in the access logs that it's not naukri.com that
 thinks your client is trying to convert it into a DDoS target.
 What about trying to access other resources?
 What is written in this 503 response page?
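
(For the ping / nc check mentioned above, a minimal sketch with a placeholder
proxy address and port would be:

ping -c 4 192.0.2.10
nc -v -z 192.0.2.10 3128

The -z flag only tests whether the TCP port accepts a connection.)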

 Eliezer


 On 20/11/13 12:35, Bhagwat Yadav wrote:


 Hi,

 I enabled the logging but didn't find any conclusive or decisive logs
 that I can forward to you.

 In my testing, I am accessing the same URL 500 times in a loop from the
 client using wget.
 Squid hangs sometimes after 120 requests, sometimes after 150
 requests, as:

 rm: cannot remove `index.html': No such file or directory
 --2013-11-20 03:52:37--  http://www.naukri.com/
 Resolving www.naukri.com... 23.72.136.235, 23.72.136.216
 Connecting to www.naukri.com|23.72.136.235|:80... connected.

 HTTP request sent, awaiting response... 503 Service Unavailable
 2013-11-20 03:53:39 ERROR 503: Service Unavailable.


 Whenever it hangs, it resumes after about 1 minute; e.g. in the above example,
 after 03:52:37 the response came at 03:53:39.
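
(A rough sketch of a test loop like the one described above - the proxy
address is a placeholder, the URL and request count are those mentioned in
the mails:

for i in $(seq 1 500); do
  http_proxy=http://192.0.2.10:3128 wget -q --delete-after http://www.naukri.com/ \
    || echo "request $i failed at $(date)"
done

Any request that returns the 503 or times out is reported with its number and
timestamp, which makes the one-minute hangs easy to spot.)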

 Please provide more help.

 Many Thanks,
 Bhagwat






[squid-users] bug 3517 (SMP-aware stateful HTTP authentication)

2013-11-25 Thread Mickael Lequesne
Hello

I am sending you this email because I would like to know whether bug 3517 of version 3.3 
(SMP-aware stateful HTTP authentication) has been corrected in 3.4.
Currently I have configured Squid 3.4 with LDAP (digest) authentication, but 
when I use 4 workers, I need to send my login/password for each GET.

Can we share and synchronize the cache between workers in version 3.4?
Is it possible to work around the problem, or must we use macros?

Thank you in advance for your response.

Cordially,
Mickael Lequesne


RE: [squid-users] Re: anyOne who has working ssl_bump configuration for facebook ???

2013-11-25 Thread Shinoj Gangadharan
Hi,

Please send me:

1. The sslcrtd_program line from squid.conf
2. The output of "ls /cache/lib/ssl_db"
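
For reference, on Squid 3.4 these usually look something like the sketch below;
the helper path, database path and sizes are only illustrative assumptions, not
your actual values:

# illustrative only - adjust paths and sizes to the real installation
sslcrtd_program /usr/lib/squid/ssl_crtd -s /cache/lib/ssl_db -M 4MB
sslcrtd_children 5
# the certificate database is normally initialised once, before starting squid:
/usr/lib/squid/ssl_crtd -c -s /cache/lib/ssl_db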

Regards,
Shinoj.


 -Original Message-
 From: iishiii [mailto:esh...@gmail.com]
 Sent: Monday, November 25, 2013 2:41 PM
 To: squid-users@squid-cache.org
 Subject: [squid-users] Re: anyOne who has working ssl_bump configuration
 for facebook ???

 Dear Shinoj,

 I have already tried doing it as you mentioned... but Squid was printing the
 same issue :(

 So where am I stuck?

 I just need to cache Facebook, Dailymotion and HTTP & FTP downloads up to
 50 MB...

 Dear Shinoj... can you please help me step by step with Squid 3.4?



 --
 View this message in context: http://squid-web-proxy-
 cache.1019090.n4.nabble.com/anyOne-who-has-working-ssl-bump-
 configuration-for-facebook-tp4663452p4663496.html
 Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Slow internet navigation squid vs blue coat

2013-11-25 Thread Michele Mase'
Problem: internet navigation is extremely slow.
I've used squid since 1999 with no problems at all; during the last month,
one proxy has given me a lot of trouble.
First we upgraded the system from RHEL5.x / squid 2.6.x to RHEL6.x /
squid 3.4.x, with no improvements.
Second, we have bypassed the Trend Micro Interscan proxy (the parent
proxy) without success.
Third: I do not know what to do.
So what should be done?
Some configuration improvements (sysctl/squid)?
Could it be a network related problem? (bandwidth/delay/MTU/other)?

Please give me some hints. My boss wants to use Blue Coat. I want to
solve the issue.
Regards
Michele Masè

Here are the configuration and some info:
Environment:
1Gbit lan; 200Mbit internet bandwidth; Squid 3.4.0.2 from
http://www1.ngtech.co.il/rpm/centos/6/$basearch, 2GB ram + 2x xeon
3GHZ, RHEL6, guest on VMware ESXi
The server is more than 80% idle, more than 1GB free memory, no iowait.
Configuration: see below:
squid.conf:
workers 2
acl SSL_ports port 443
acl Safe_ports port /etc/squid/acls/Safe_ports.acl.list
acl myexample dstdomain /etc/squid/acls/myexample.acl.list
acl domain-dst-direct dstdomain /etc/squid/acls/domain-dst-direct.acl.list
acl ip-dst-direct dst /etc/squid/acls/ip-dst-direct.acl.list
acl localnet src /etc/squid/acls/ip-src-localnet.acl.list
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localnet
http_access allow localhost
http_access deny all
always_direct allow all
always_direct allow myexample
always_direct allow localhost
always_direct allow domain-dst-direct
always_direct allow ip-dst-direct
always_direct allow SSL_ports
never_direct deny localhost
never_direct deny domain-dst-direct
never_direct allow all
coredump_dir /var/spool/squid

minimum_object_size 64 KB
maximum_object_size 256 MB
maximum_object_size_in_memory 2 MB
cache_mem 1024 MB
cache_dir ufs /cache 9000 16 256
cache_access_log stdio:/logs/squid/access.log
cache_log /logs/squid/cache.log
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320

sysctl.conf
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_max_syn_backlog = 4096
net.core.somaxconn = 1024
net.ipv4.tcp_keepalive_time = 3600
net.ipv4.ip_local_port_range = 1024 65000
net.core.netdev_max_backlog = 2048
The response time is slow, and comparatively slower than with the Blue Coat proxy.
During working hours it is extremely slow and sometimes some sites seem blocked.
Here are the connections:
  TIME_WAIT    4012
 CLOSE_WAIT      81
  FIN_WAIT1      42
   SYN_SENT     591
  FIN_WAIT2     136
ESTABLISHED    4950
   SYN_RECV      13
    CLOSING      13
   LAST_ACK      81
     LISTEN      11
---
      TOTAL    9930
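
A state summary like the one above can be produced with something along these
lines (the exact command used here is an assumption):

netstat -ant | awk 'NR>2 {print $6}' | sort | uniq -c | sort -rn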
squidclient mgr:info|grep file\ desc
Sending HTTP request ... done.
Maximum number of file descriptors:   32768
Largest file desc currently in use:   3419
Number of file desc currently in use: 6022
Available number of file descriptors: 26746
Reserved number of file descriptors:   200

With the Blue Coat proxy:
Navigation is a little bit better.
Note:
There is an external ACL on the firewall that allows network access to
trusted sources only.


Re: [squid-users] Slow internet navigation squid vs blue coat

2013-11-25 Thread Kinkie
On Mon, Nov 25, 2013 at 11:26 AM, Michele Mase' michele.m...@gmail.com wrote:
 Problem: internet navigation is extremely slow.
 I've used squid from 1999 with no problems at all; during last month,
 one proxy gave me a lot of troubles.
 First we upgraded the system, from RHEL5.x - squid 2.6.x to RHEL6.x
 squid3.4.x with no improvements.
 Second, we have bypassed the Trend Micro Interscan proxy (the parent
 proxy) without success.
 Third: I do not know what to do.
 So what should be done?
 Some configuration improvements (sysctl/squid)?
 Could it be a network related problem? (bandwidth/delay/MTU/other)?

Hi Michele,
  "extremely slow" is quite a poor indication, unfortunately. Can you
measure it? e.g. by using apachebench (ab) or squidclient to measure
the time to download a large file and a small file through the proxy, from
a local source and from a remote source. Then repeat the same from
the box where Squid runs and then from a different box.
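
For example, something along these lines - proxy address, port and URLs are
placeholders only:

# time a single large fetch through the proxy with squidclient
time squidclient -h 192.0.2.10 -p 3128 http://example.com/big-file.iso > /dev/null

# repeated small fetches through the proxy with apachebench
ab -X 192.0.2.10:3128 -n 100 -c 10 http://example.com/small.gif

Comparing the numbers for local vs remote sources, and on-box vs off-box
clients, should narrow down where the time is being spent.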

Think about what has remained unchanged since the time when there were no
performance problems: e.g. network cables, switches, routers etc.

   Kinkie


[squid-users] bypass=on?

2013-11-25 Thread Ralf Hildebrandt
From my log:

Nov 23 18:00:34 proxy-cbf-1 squid[5874]: Reconfiguring Squid Cache (version 
3.4.0.2-20131115-r13027)...
Nov 23 18:00:34 proxy-cbf-1 squid[5874]: Closing HTTP port 0.0.0.0:8080
Nov 23 18:00:34 proxy-cbf-1 squid[5874]: Stop receiving ICP on 0.0.0.0:3130
Nov 23 18:00:34 proxy-cbf-1 squid[5874]: Closing SNMP receiving port 
0.0.0.0:3401
Nov 23 18:00:34 proxy-cbf-1 squid[5874]: Stop sending ICP from 0.0.0.0:3130
Nov 23 18:00:34 proxy-cbf-1 squid[5874]: Logfile: closing log 
stdio:/var/log/squid3/access.log
Nov 23 18:00:34 proxy-cbf-1 squid[5874]: Startup: Initializing Authentication 
Schemes ...
Nov 23 18:00:34 proxy-cbf-1 squid[5874]: Startup: Initialized Authentication 
Scheme 'basic'
Nov 23 18:00:34 proxy-cbf-1 squid[5874]: Startup: Initialized Authentication 
Scheme 'digest'
Nov 23 18:00:34 proxy-cbf-1 squid[5874]: Startup: Initialized Authentication 
Scheme 'negotiate'
Nov 23 18:00:34 proxy-cbf-1 squid[5874]: Startup: Initialized Authentication 
Scheme 'ntlm'
Nov 23 18:00:34 proxy-cbf-1 squid[5874]: Startup: Initialized Authentication.
Nov 23 18:00:34 proxy-cbf-1 squid[5874]: Processing Configuration File: 
/etc/squid3/squid.conf (depth 0)
Nov 23 18:00:34 proxy-cbf-1 squid[5874]: Processing Configuration File: 
/etc/squid3/squid-icap.conf.3.3 (depth 1)
Nov 23 18:00:34 proxy-cbf-1 squid[5874]: UPGRADE: Please use 'bypass=on' option 
to enable service bypass

my /etc/squid3/squid-icap.conf.3.3 looks like this; I already have a
bypass=on statement:

icap_enable on

icap_send_client_ip on
icap_send_client_username on
icap_client_username_encode off
icap_client_username_header X-Authenticated-User

icap_preview_enable on
icap_preview_size 1024

icap_service_failure_limit -1

icap_service service_resp respmod_precache bypass=on 
icap://127.0.0.1:1344/srv_clamav
icap_service service_req  reqmod_precache  bypass=on 
icap://127.0.0.1:1344/srv_clamav
# In case of an error - bypass!

adaptation_access service_resp allow all
adaptation_access service_req  allow all

# someone can set up his/her squid to get c-icap statistics from the web
acl infoaccess dstdomain icap.info

icap_service service_info reqmod_precache 1 icap://127.0.0.1:1344/info
adaptation_service_set class_info service_info

adaptation_access class_info allow infoaccess
adaptation_access class_info deny all

-- 
Ralf Hildebrandt   Charite Universitätsmedizin Berlin
ralf.hildebra...@charite.deCampus Benjamin Franklin
http://www.charite.de  Hindenburgdamm 30, 12203 Berlin
Geschäftsbereich IT, Abt. Netzwerk fon: +49-30-450.570.155


Re: [squid-users] bypass=on?

2013-11-25 Thread Amos Jeffries
On 26/11/2013 2:02 a.m., Ralf Hildebrandt wrote:
 Nov 23 18:00:34 proxy-cbf-1 squid[5874]: UPGRADE: Please use 'bypass=on' 
 option to enable service bypass
 
 my /etc/squid3/squid-icap.conf.3.3 looks like this; I already have
 bypass=on statement:
 
snip

 
 icap_service service_info reqmod_precache 1 icap://127.0.0.1:1344/info

I believe it's talking about this one  ^^^.
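
Presumably the fix is to replace that positional "1" with the option form,
along the lines of:

icap_service service_info reqmod_precache bypass=on icap://127.0.0.1:1344/info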

Amos


Re: [squid-users] bug 3517 (SMP-aware stateful HTTP authentication)

2013-11-25 Thread Amos Jeffries
On 25/11/2013 10:47 p.m., Mickael Lequesne wrote:
 Hello
 
 I send you an email because I like to know if the bug 3517 of the version 3.3 
 (SMP-aware stateful HTTP authentication) has been corrected in the 3.4.

You have a bug number. All the known information about it is in the
bugzilla report.

 Currently I have configured a squid 3.4 with authentication ldap (digest) but 
 when I put 4 workers, I need to send my login / password for each GET.
 
 Can we to share and synchronize the cache between workers in the version 3.4?
 Is it possible to work around the problem or we must make macros?
 

Enabling persistent connections can reduce the problem a lot. However
that is still quite limited and so far nobody has been willing to
sponsor the required development or provide a patch.
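
The relevant squid.conf directives, as a sketch (they are on by default in
most builds):

client_persistent_connections on
server_persistent_connections on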

Amos


Re: [squid-users] Slow internet navigation squid vs blue coat

2013-11-25 Thread Pavel Kazlenka

Hi,

Just want to put my two pennies in. 'Slow' internet navigation through 
squid is often observed in the case of incorrect DNS server settings on the 
squid box. Common issues are:

- IPv6 DNS queries are performed first;
- the first DNS server in /etc/resolv.conf is not responding.

Both of these cases add 5-7 seconds to each request (DNS request 
time-out). So I'd recommend making sure that DNS is not an issue here.
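
A quick sketch for checking this (the nameserver address is a placeholder):

cat /etc/resolv.conf
dig www.example.com @192.0.2.53 | grep "Query time"

# in squid.conf, preferring IPv4 lookups can also be tried (assuming a
# squid version that has this directive):
dns_v4_first on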


The easiest way to find out where the delay is added is to sniff the 
request-response traffic on the squid box. The capture will show you which 
part of the path adds the delay.


Best wishes,
Pavel

On 11/25/2013 03:57 PM, Kinkie wrote:

On Mon, Nov 25, 2013 at 11:26 AM, Michele Mase' michele.m...@gmail.com wrote:

Problem: internet navigation is extremely slow.
I've used squid from 1999 with no problems at all; during last month,
one proxy gave me a lot of troubles.
First we upgraded the system, from RHEL5.x - squid 2.6.x to RHEL6.x
squid3.4.x with no improvements.
Second, we have bypassed the Trend Micro Interscan proxy (the parent
proxy) without success.
Third: I do not know what to do.
So what should be done?
Some configuration improvements (sysctl/squid)?
Could it be a network related problem? (bandwidth/delay/MTU/other)?

Hi Michele,
   extremely slow is quite a poor indication, unfortunately. Can you
measure it? e.g. by using apachebench (ab) or squidclient to measure
the time download a large file and a small file through the proxy from
a local source and from a remote source. Then repeating the same from
the box where Squid runs and then from a different box.

Think about what has remained unchanged since when there was no
performance problems: e.g. network cables, switches, routers etc.

Kinkie




Re: [squid-users] Slow internet navigation squid vs blue coat

2013-11-25 Thread Michele Mase'
The analysis was made using Firebug from different browsers, using
different proxies / direct access; with the problematic proxy, what I can see
is a high waiting time, indicating that the response time for the page
headers is high.
Also, in cache.log I see many:
local=xx: remote=yy:zz FD  flags=1: read/write
failure: (110) Connection timed out
local=xx: remote=yy:zz FD  flags=1: read/write
failure: (32) Broken pipe
WARNING: Closing client connection due to lifetime timeout


How could I test network related problems such as:
Imix traffic limit
Delay
Bandwidth
?
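
(One rough way to check delay, loss and bandwidth from the proxy box - the
hosts below are placeholders, and iperf3 needs a server running on the far end:

# round-trip time and packet loss towards a problem site
ping -c 20 www.example.com
mtr --report --report-cycles 20 www.example.com

# raw TCP bandwidth between two of your own hosts
iperf3 -c 192.0.2.20

This is only a sketch; it does not reproduce an IMIX traffic profile.)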
Michele Masè



On Mon, Nov 25, 2013 at 1:57 PM, Kinkie gkin...@gmail.com wrote:
 On Mon, Nov 25, 2013 at 11:26 AM, Michele Mase' michele.m...@gmail.com 
 wrote:
 Problem: internet navigation is extremely slow.
 I've used squid from 1999 with no problems at all; during last month,
 one proxy gave me a lot of troubles.
 First we upgraded the system, from RHEL5.x - squid 2.6.x to RHEL6.x
 squid3.4.x with no improvements.
 Second, we have bypassed the Trend Micro Interscan proxy (the parent
 proxy) without success.
 Third: I do not know what to do.
 So what should be done?
 Some configuration improvements (sysctl/squid)?
 Could it be a network related problem? (bandwidth/delay/MTU/other)?

 Hi Michele,
   extremely slow is quite a poor indication, unfortunately. Can you
 measure it? e.g. by using apachebench (ab) or squidclient to measure
 the time download a large file and a small file through the proxy from
 a local source and from a remote source. Then repeating the same from
 the box where Squid runs and then from a different box.

 Think about what has remained unchanged since when there was no
 performance problems: e.g. network cables, switches, routers etc.

Kinkie


[squid-users] #Can't access certain webpages

2013-11-25 Thread Grooz, Marc (regio iT)
Hi,

Currently I use Squid 3.3.8 and I can't use/access two webservers through 
squid. If I bypass squid, these websites work great.

One of these websites is a file upload/download website with a generated 
download link. When I upload a file I receive the following squid log entries:

TCP_MISS/200 398 GET http://w.y.x.z/cgi-bin/upload_status.cgi?
.
.
TCP_MISS_ABORTED/000 0 GET http://w.y.x.z/cgi-bin/upload_status.cgi?
TCP_MISS/200 398 GET http://w.y.x.z/cgi-bin/upload_status.cgi?

And the download link never gets generated.


In the second case you never get a webpage back from squid. If I use lynx from 
the command line of the squid system, the webpage gets loaded.
With a tcpdump I see that when squid makes the request, the webserver doesn't 
answer.

Any ideas or suggestions? 

Kind regards

Marc




Re: [squid-users] Slow internet navigation squid vs blue coat

2013-11-25 Thread Kinkie
Hm...
  "Connection timed out" is an OS/TCP-IP issue. Can you try
accessing the same resource from the server itself (e.g. with wget,
GET or running firefox on the proxy server)?
It seems that there's something at the network level: faulty ethernet,
switch, router, firewall or network line, or high packet loss
somewhere on the path to the affected server(s).
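
As a sketch, with a placeholder URL and assuming the default proxy port:

# directly from the box running Squid, bypassing the proxy
wget -O /dev/null http://www.example.com/

# the same URL through the local Squid instance
http_proxy=http://127.0.0.1:3128 wget -O /dev/null http://www.example.com/

If the direct fetch also times out, the problem is below Squid in the stack.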


On Mon, Nov 25, 2013 at 3:01 PM, Michele Mase' michele.m...@gmail.com wrote:
 The analysis was made using firebug from different browsers, using
 different proxyes/
 direct access; with the problematic proxy, what I can see is an high
 waiting time, indicating that the header's page response is high.
 Therefore in cache.log i see many
 local=xx: remote=yy:zz FD  flags=1: read/write
 failure: (110) Connection timed out
 local=xx: remote=yy:zz FD  flags=1: read/write
 failure: (32) Broken pipe
 WARNING: Closing client connection due to lifetime timeout


 How could I test network related problems such as:
 Imix traffic limit
 Delay
 Bandwidth
 ?
 Michele Masè



 On Mon, Nov 25, 2013 at 1:57 PM, Kinkie gkin...@gmail.com wrote:
 On Mon, Nov 25, 2013 at 11:26 AM, Michele Mase' michele.m...@gmail.com 
 wrote:
 Problem: internet navigation is extremely slow.
 I've used squid from 1999 with no problems at all; during last month,
 one proxy gave me a lot of troubles.
 First we upgraded the system, from RHEL5.x - squid 2.6.x to RHEL6.x
 squid3.4.x with no improvements.
 Second, we have bypassed the Trend Micro Interscan proxy (the parent
 proxy) without success.
 Third: I do not know what to do.
 So what should be done?
 Some configuration improvements (sysctl/squid)?
 Could it be a network related problem? (bandwidth/delay/MTU/other)?

 Hi Michele,
   extremely slow is quite a poor indication, unfortunately. Can you
 measure it? e.g. by using apachebench (ab) or squidclient to measure
 the time download a large file and a small file through the proxy from
 a local source and from a remote source. Then repeating the same from
 the box where Squid runs and then from a different box.

 Think about what has remained unchanged since when there was no
 performance problems: e.g. network cables, switches, routers etc.

Kinkie



-- 
/kinkie


Re: [squid-users] #Can't access certain webpages

2013-11-25 Thread Kinkie
On Mon, Nov 25, 2013 at 3:21 PM, Grooz, Marc (regio iT)
marc.gr...@regioit.de wrote:
 Hi,

 Currently I use Squid 3.3.8 and I can't use/access two webservers thru squid. 
 If I bypass squid this websites work great.

 One of this websites is a fileupload/download website with a generated 
 downloadlink. When I upload a file I receive the following Squidlog Entrys:

 TCP_MISS/200 398 GET http://w.y.x.z/cgi-bin/upload_status.cgi?
 .
 .
 TCP_MISS_ABORTED/000 0 GET http:// w.y.x.z/cgi-bin/upload_status.cgi?
 TCP_MISS/200 398 GET http://w.y.x.z/cgi-bin/upload_status.cgi?

 And the downloadlink never gets generated.


 In the second case you never get a webpage back from squid. If I use lynx 
 from the commandline of the squid system the Webpage gets loaded.
 With a tcpdump I see that if squid makes the request then the Webserver 
 didn't answer.

Well, this is consistent with the behavior in squid's logs.
Have you tried accessing the misbehaving server from a client running
on the squid box, and comparing the differences in the network traces?
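
For instance, capturing on the Squid box while repeating both tests - replace
the address with the real webserver (w.y.x.z above):

tcpdump -i any -s 0 -w direct-vs-squid.pcap host 203.0.113.7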


-- 
/kinkie


Re: [squid-users] bypass=on?

2013-11-25 Thread Ralf Hildebrandt
* Amos Jeffries squ...@treenet.co.nz:

  icap_service service_info reqmod_precache 1 icap://127.0.0.1:1344/info
 
 I believe its talking about this one  ^^^.

Oh yes. Damn.

-- 
Ralf Hildebrandt   Charite Universitätsmedizin Berlin
ralf.hildebra...@charite.deCampus Benjamin Franklin
http://www.charite.de  Hindenburgdamm 30, 12203 Berlin
Geschäftsbereich IT, Abt. Netzwerk fon: +49-30-450.570.155


[squid-users] What do you recommend?

2013-11-25 Thread alamb200
Hi,
I have made several aborted attempts to get what I want to do to work and have
failed miserably every time, so I thought I would ask you for advice.
My plan is simple (in my head anyway): I want to set in place a device running
a squid proxy so that I can reduce bandwidth usage and also so I can see
what users are doing on the web.
So far I have tried a Windows solution, but could not sort out the syslog bit,
and a Linux solution, which I struggled with and had to give up.
My plan is to host squid on a virtual server hosted in Hyper-V. On my
previous attempts with Linux I tried to use the GUI desktop but could not
get it to display properly, so I am going to have to work from the command
line to get it working.
Can anyone help with this? Which OS should I use? What monitoring software
would you recommend?
I am trying to keep costs to a minimum while doing this, while still managing
to have a reasonable solution.
Thanks in advance for any advice you can pass on.
alamb200



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/What-do-you-recommend-tp4663512.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] What do you recommend?

2013-11-25 Thread Antony Stone
On Monday 25 November 2013 at 15:50, alamb200 wrote:

 I want to set in place a device to run squid proxy so that I can reduce
 bandwidth usage

So, you want Squid to be configured as a caching proxy - that's not difficult, 
and there are lots of tutorials and guides around on the net to help with 
this.
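
For the caching part, a minimal squid.conf sketch might look like the lines 
below; the sizes and paths are only illustrative:

http_port 3128
cache_mem 256 MB
cache_dir ufs /var/spool/squid 10000 16 256
maximum_object_size 100 MB
access_log /var/log/squid/access.log squid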

 and also so i can see what users are doing on the web.

Assuming that it is legal to do this, wherever you are, then you want a 
logfile analysis tool - again there are several around, but you could start 
with http://squidalyser.sourceforge.net/

 So far I have tried a Windows solution but could not sort out the syslog
 bit and a linux solution which I struggled with and had to give up.

I would always recommend a Linux-based solution for Squid (and indeed for 
networked services in general), so it would be helpful if you could tell us:

 - which version (distribution) of Linux did you try?

 - which guidelines did you follow to get Squid working?

 - which aspects did you find yourself struggling with?

 - what didn't work satisfactorily, leading you to give up?

What you're looking for is a relatively simple Squid setup, so hopefully it 
shouldn't be too hard to identify what problem you had, and people here can 
almost certainly help you to solve it.


Regards,

-- 
There has always been an underlying argument that we should open up our 
source code more broadly. The fact is that we are learning from open source 
and we are opening our code more broadly through Shared Source.

Is there value to providing source code? The answer is unequivocally yes.

 - Jason Matusow, head of Microsoft's Shared Source Program, in response to 
leaks of Windows source code on the Internet.

 Please reply to the list;
   please don't CC me.


[squid-users] Directives ignore-private and override-expire not working Squid 3.2 and 3.3

2013-11-25 Thread Le Trung, Kien
Hello everyone (sorry if this email is a duplicate; I sent it via Outlook
before but received no comments about this problem).

 I’m using these configurations, which work fine with squid 3.1: every
item gets a HIT. However, these configurations don’t work properly with
Squid 3.2 and 3.3, because I always get a MISS for all items.

http_port 127.0.0.1:82 accel ignore-cc

cache_peer 192.168.2.43 parent 80 0 no-query originserver name=Site1
max-conn=15 cache_peer_domain Site1 mysite.com refresh_pattern -i
((.)*) 30 30% 60 ignore-no-cache ignore-private ignore-reload
ignore-no-store override-lastmod override-expire

Header from 3.3 version:

HTTP/1.1 200 OK
Cache-Control: private
Content-Length: 117991
Content-Type: text/html; charset=utf-8
Expires: Thu, 21 Nov 2013 03:12:14 GMT
Server: Microsoft-IIS/7.5
Date: Thu, 21 Nov 2013 03:12:15 GMT
X-Cache: MISS from localhost.localdomain
Connection: close


 Please help.


-- 

Best Regards,
Kiên Lê


[squid-users] Squid with PHP Apache

2013-11-25 Thread Ghassan Gharabli
 Hi,

I have built a PHP script to cache HTTP 1.X 206 Partial Content, like
Windows Updates, & allow seeking through YouTube & many websites.

I am willing to move from PHP to C++ hopefully after a while.

The script is almost finished, but I have several questions; I have no
idea if I should always grab the HTTP response headers and send them
back to the browsers.

1) Does Squid still grab the HTTP response headers, even if the
object is already in cache or Squid already has a cached copy of the
HTTP response headers? If Squid caches HTTP response headers, then how
do you deal with HTTP code 302 if the object is already cached? I am
asking this question because I have already seen most websites use the
same extensions, such as .FLV, including a Location header.

2) Do you also use mime.conf to send the Content-Type to the browser
in the case of FTP/HTTP, or only FTP?

3) Does squid compare the length of the local cached copy with the
remote file if you already have the object file, or do you use
refresh_pattern?

4) What happens if the user modifies a refresh_pattern to cache an
object, for example .xml, which does not have a [Content-Length] header?
Do you still save it, or would you search for the ignore-headers used
to force caching the object? And what happens if the cached copy
expires - do you still refresh the copy even if there is no
Content-Length header?

I am really confused by this issue, because I am always getting a
headers list from the internet and sending them back to the browser
(using PHP and Apache) even if the object is in cache.

Your help and answers will be much appreciated

Thank you

Ghassan


Re: [squid-users] bug 3517 (SMP-aware stateful HTTP authentication)

2013-11-25 Thread Eliezer Croitoru

Hey Mickael,

What authentication methods are you using?
NTLM, DIGEST, IDENT, regular username + pass?

Eliezer

On 25/11/13 11:47, Mickael Lequesne wrote:

Hello

I send you an email because I like to know if the bug 3517 of the version 3.3 
(SMP-aware stateful HTTP authentication) has been corrected in the 3.4.
Currently I have configured a squid 3.4 with authentication ldap (digest) but 
when I put 4 workers, I need to send my login / password for each GET.

Can we to share and synchronize the cache between workers in the version 3.4?
Is it possible to work around the problem or we must make macros?

Thank you in advance for your response.

Cordialy,
Mickael Lequesne




Re: [squid-users] Directives ignore-private and override-expire not working Squid 3.2 and 3.3

2013-11-25 Thread Eliezer Croitoru

Hey there,

I am not sure, and maybe it is a typo in the cache_peer line?
The refresh_pattern should be on a separate line from the cache_peer line,
like this:
##START
http_port 127.0.0.1:82 accel ignore-cc
cache_peer 192.168.2.43 parent 80 0 no-query originserver name=Site1 
max-conn=15 cache_peer_domain Site1 mysite.com
refresh_pattern -i ((.)*) 30 30% 60 ignore-no-cache ignore-private 
ignore-reload ignore-no-store override-lastmod override-expire

##END

Also note that there is no need for the whole ((.)*) pattern; you can 
use . for the wanted effect.
It might be the reason, but we only see the response, while the request 
is also very important for the matter.


Try changing the refresh_pattern and let's see what happens.

Eliezer

On 25/11/13 19:16, Le Trung, Kien wrote:

Hello everyone, (sorry if this email is duplicate because I sent this
by Outlook before but received no comments about this problem).

  I’m using these configurations which work fine with squid 3.1 every
items gets HIT. However these configurations  don’t work properly with
Squid 3.2 and 3.3, because I always get MISS with all items

http_port 127.0.0.1:82 accel ignore-cc

cache_peer 192.168.2.43 parent 80 0 no-query originserver name=Site1
max-conn=15 cache_peer_domain Site1 mysite.com refresh_pattern -i
((.)*) 30 30% 60 ignore-no-cache ignore-private ignore-reload
ignore-no-store override-lastmod override-expire




Re: [squid-users] Directives ignore-private and override-expire not working Squid 3.2 and 3.3

2013-11-25 Thread Le Trung, Kien
Thank you, Eliezer Croitoru, for your response.

I double-checked your suggestion but still get a MISS for all requests.
refresh_pattern is, of course, on a separate line from the cache_peer
line; that was just my mistake when copying/pasting into the email.
I used . for the refresh_pattern before, but no luck.

With this configuration, squid-3.1.23 is still working properly (same
original server).



On Tue, Nov 26, 2013 at 7:49 AM, Eliezer Croitoru elie...@ngtech.co.il wrote:
 Hey there,

 I am not sure and maybe it is a typo in the cache_peer line?
 The refresh_pattern should be a separated line from the cache_peer line.
 like this:
 ##START

 http_port 127.0.0.1:82 accel ignore-cc
 cache_peer 192.168.2.43 parent 80 0 no-query originserver name=Site1
 max-conn=15 cache_peer_domain Site1 mysite.com
 refresh_pattern -i ((.)*) 30 30% 60 ignore-no-cache ignore-private
 ignore-reload ignore-no-store override-lastmod override-expire
 ##END

 also note that there is not need for the whole ((.)*) while you can use
 . for the wanted effect effect.
 It might be the reason but we just see the response while the request also
 is very important for the matter.

 Try to change the refresh_pattern and let see what happens.

 Eliezer


 On 25/11/13 19:16, Le Trung, Kien wrote:

 Hello everyone, (sorry if this email is duplicate because I sent this
 by Outlook before but received no comments about this problem).

   I’m using these configurations which work fine with squid 3.1 every
 items gets HIT. However these configurations  don’t work properly with
 Squid 3.2 and 3.3, because I always get MISS with all items

 http_port 127.0.0.1:82 accel ignore-cc

 cache_peer 192.168.2.43 parent 80 0 no-query originserver name=Site1
 max-conn=15 cache_peer_domain Site1 mysite.com refresh_pattern -i
 ((.)*) 30 30% 60 ignore-no-cache ignore-private ignore-reload
 ignore-no-store override-lastmod override-expire





-- 

Best Regards,
Kiên Lê


Re: [squid-users] Squid 3.3 Reverse Proxy Mode - 502 Errors when uploading files larger than 6MB

2013-11-25 Thread techguy005...@yahoo.com
After further analysis, the issue only happens when using SSL (HTTPS). The 
problem does not happen when using normal HTTP.


Here is an output of the trace:

POST /products/application/WorkQueue/BatchExcel.aspx HTTP/1.1
Accept: text/html, application/xhtml+xml, */*
Referer: https://mysite.com/products/application/WorkQueue/BatchExcel.aspx
Accept-Language: en-US
User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0; EIE10;ENUSMCM)
Content-Type: multipart/form-data; boundary=---7dd2f423f03f2
Accept-Encoding: gzip, deflate
Host: mysite.com
Content-Length: 6182021
DNT: 1
Connection: Keep-Alive
Cache-Control: no-cache
Cookie: ASPSESSIONIDSGCQCDST=GLAPOOKCMDGLNLKGNLHGBFHI; ASP.NET_SessionId=etmf1s45uqe0a5nrbyspzr45; App.Products.IPAddress=10.10.10.7; .ASPXAUTH=3D3AA3A40EE8060EB290E20E0CF046C40993B5BBD5F041FF2F27A79E4E25FFCEC28EB860F26F388175EE3CDD26448F4F8246FC3CA16FC26DDA467B9B67062A6174D8AD8908F3AD8E16A3DF54E9D02AA77E22CD5751A72C5A2B85FFE52853270655ECEAD5A30BF01F239032A3B05D63D30D69194A155D7CB64CD72D4C55FC6BCED489663B0D84E6C0D2F6FB7117048EFA7E24; Security.Services.ApplicationManager.CurrentApplication=1; APP.Web.WebApplication.CurrentPage=/products/application/WorkQueue/BatchExcel.aspx
 
 
--
2013/11/21 09:57:35.811 kid1| client_side_request.cc(786) clientAccessCheckDone: The request POST https://mysite.com/products/application/WorkQueue/BatchExcel.aspx is ALLOWED, because it matched 'port443'
2013/11/21 09:57:35.811 kid1| client_side_request.cc(760) clientAccessCheck2: No adapted_http_access configuration. default: ALLOW
2013/11/21 09:57:35.811 kid1| client_side_request.cc(786) clientAccessCheckDone: The request POST https://mysite.com/products/application/WorkQueue/BatchExcel.aspx is ALLOWED, because it matched 'port443'
2013/11/21 09:57:35.811 kid1| forward.cc(121) FwdState: Forwarding client request local=192.168.1.1:443 remote=10.10.10.7:52743 FD 10 flags=1, url=https://mysite.com/products/application/WorkQueue/BatchExcel.aspx
2013/11/21 09:57:35.811 kid1| peer_select.cc(268) peerSelectDnsPaths: Find IP destination for: https://mysite.com/products/application/WorkQueue/BatchExcel.aspx' via 192.168.148.21
2013/11/21 09:57:35.811 kid1| peer_select.cc(268) peerSelectDnsPaths: Find IP destination for: https://mysite.com/products/application/WorkQueue/BatchExcel.aspx' via 192.168.148.21
2013/11/21 09:57:35.811 kid1| peer_select.cc(289) peerSelectDnsPaths: Found sources for 'https://mysite.com/products/application/WorkQueue/BatchExcel.aspx'
2013/11/21 09:57:35.811 kid1| peer_select.cc(290) peerSelectDnsPaths:  always_direct = DENIED
2013/11/21 09:57:35.811 kid1| peer_select.cc(291) peerSelectDnsPaths:   never_direct = DENIED
2013/11/21 09:57:35.811 kid1| peer_select.cc(301) peerSelectDnsPaths:  cache_peer = local=0.0.0.0 remote=192.168.148.21:443 flags=1
2013/11/21 09:57:35.811 kid1| peer_select.cc(301) peerSelectDnsPaths:  cache_peer = local=0.0.0.0 remote=192.168.148.21:443 flags=1
2013/11/21 09:57:35.811 kid1| peer_select.cc(304) peerSelectDnsPaths:    timedout = 0
2013/11/21 09:57:35.814 kid1| http.cc(2211) sendRequest: HTTP Server local=192.168.156.14:50710 remote=192.168.148.21:443 FD 22 flags=1
2013/11/21 09:57:35.814 kid1| http.cc(2212) sendRequest: HTTP Server REQUEST:
-
POST /products/application/WorkQueue/BatchExcel.aspx HTTP/1.1
Accept: text/html, application/xhtml+xml, */*
Referer: https://mysite.com/products/application/WorkQueue/BatchExcel.aspx
Accept-Language: en-US
User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0; EIE10;ENUSMCM)
Content-Type: multipart/form-data; boundary=---7dd2f423f03f2
Accept-Encoding: gzip, deflate
Host: mysite.com
Content-Length: 6182021
DNT: 1
Cookie: ASPSESSIONIDSGCQCDST=GLAPOOKCMDGLNLKGNLHGBFHI; ASP.NET_SessionId=etmf1s45uqe0a5nrbyspzr45; App.Products.IPAddress=10.10.10.7; .ASPXAUTH=3D3AA3A40EE8060EB290E20E0CF046C40993B5BBD5F041FF2F27A79E4E25FFCEC28EB860F26F388175EE3CDD26448F4F8246FC3CA16FC26DDA467B9B67062A6174D8AD8908F3AD8E16A3DF54E9D02AA77E22CD5751A72C5A2B85FFE52853270655ECEAD5A30BF01F239032A3B05D63D30D69194A155D7CB64CD72D4C55FC6BCED489663B0D84E6C0D2F6FB7117048EFA7E24; Security.Services.ApplicationManager.CurrentApplication=1; App.Web.WebApplication.CurrentPage=/products/application/WorkQueue/BatchExcel.aspx
Surrogate-Capability: 284=Surrogate/1.0
X-Forwarded-For: 10.10.10.7
Cache-Control: no-cache
Connection: keep-alive
 
 
--
2013/11/21 09:57:35.911 kid1| support.cc() ssl_read_method: SSL FD 10 is pending
2013/11/21 09:57:35.912 kid1| support.cc() ssl_read_method: SSL FD 10 is pending
 
...
 
2013/11/21 09:57:40.053 kid1| support.cc() ssl_read_method: SSL FD 10 is pending
2013/11/21 09:57:40.130 kid1| support.cc() ssl_read_method: SSL FD 10 is pending
2013/11/21 09:57:40.130 kid1| client_side.cc(2347) maybeMakeSpaceAvailable: growing request buffer: notYetUsed=4095 size=8192
2013/11/21

Re: [squid-users] Squid with PHP Apache

2013-11-25 Thread Amos Jeffries
On 26/11/2013 10:13 a.m., Ghassan Gharabli wrote:
  Hi,
 
 I have built a PHP script to cache HTTP 1.X 206 Partial Content like
 WindowsUpdates  Allow seeking through Youtube  many websites .
 

Ah. So you have written your own HTTP caching proxy in PHP. Well done.
Did you read RFC 2616 several times? Your script is expected to obey
all the MUST conditions and clauses in there discussing proxy or cache.



NOTE: the easy way to do this is to upgrade your Squid to the current
series and use ACLs on the range_offset_limit directive. That way Squid
will convert Range requests to normal fetch requests and cache the
object before sending the requested pieces of it back to the client.
http://www.squid-cache.org/Doc/config/range_offset_limit/
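
A rough sketch of that approach (the domains listed are only examples):

# fetch the whole object when these domains are requested with Range headers
acl fetchwhole dstdomain .windowsupdate.com .update.microsoft.com
range_offset_limit -1 fetchwhole
# keep fetching even if the client aborts, so the object completes and can be cached
quick_abort_min -1 KB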


 I am willing to move from PHP to C++ hopefully after a while.
 
 The script is almost finished , but I have several question, I have no
 idea if I should always grab the HTTP Response Headers and send them
 back to the borwsers.

The response headers you get when receiving the object are meta data
describing that object AND the transaction used to fetch it AND the
network conditions/pathway used to fetch it. The cache's job is to store
those along with the object itself and deliver only the relevant headers
when delivering a HIT.

 
 1) Does Squid still grab the HTTP Response Headers, even if the
 object is already in cache or Squid has already a cached copy of the
 HTTP Response header . If Squid caches HTTP Response Headers then how
 do you deal with HTTP CODE 302 if the object is already cached . I am
 asking this question because I have already seen most websites use
 same extensions such as .FLV including Location Header.

Yes. All proxies on the path are expected to relay the end-to-end
headers, drop the hop-by-hop headers, and MUST update/generate the
feature negotiation and state information headers to match its
capabilities in each direction.


 
 2) Do you also use mime.conf to send the Content-Type to the browser
 in case of FTP/HTTP or only FTP ?

Only FTP and Gopher *if* Squid is translating from the native FTP/Gopher
connection to HTTP. HTTP and protocols relayed using HTTP message format
are expected to supply the correct header.

 
 3) Does squid compare the length of the local cached copy with the
 remote file if you already have the object file or you use
 refresh_pattern?.

Content-Length is a declaration of how many payload bytes are following
the response headers. It has no relation to the server's object except in
the special case where the entire object is being delivered as payload
without any encoding.


 
 4) What happens if the user modies a refresh_pattern to cache an
 object, for example .xml which does not have [Content-Length] header.
 Do you still save it, or would you search for the ignore-headers used
 to force caching the object and what happens if the cached copy
 expires , do you still refresh the copy even if there is no
 Content-Length header?.

refresh_pattern does not cause caching of any objects. What it does is
tell Squid how long an object is valid for before it needs to be
revalidated or replaced. In some situations this can affect caching
decision, in most it only affects expiry.


Objects without content-length are handled differently by HTTP/1.0 and
HTTP/1.1 software.

When either end of the connection is advertising HTTP/1.0 the sending
software is expected to terminate the TCP connection on completion of
the payload block.

When both ends advertise HTTP/1.1 the sending software is expected to
use Transfer-Encoding: chunked in order to keep the connection alive,
unless the client sent Connection: close.
 Doing the HTTP/1.0 behaviour is also acceptable if both ends are
HTTP/1.1, but it causes a performance loss due to the churn and setup costs of TCP.
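
As a small illustration of the HTTP/1.1 case, a response without a
Content-Length might be framed as:

HTTP/1.1 200 OK
Transfer-Encoding: chunked

7
example
0

Each chunk is prefixed with its size in hex, and a zero-length chunk ends the
message, so the connection can stay open for the next request.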




 
 I am really confused with this issue , because I am always getting a
 headers list from the internet and I send them back to the browser
 (using PHP and Apache) even if the object is in cache.

I am really confused about what you are describing here. You should only
get a headers list from the upstream server if you have contacted one.


You say the script is sending to the browser. This is not true at the
HTTP transaction level. The script sends to Apache, Apache sends to
whichever software requested from it.

What is the order you chained the Browser, Apache and Squid ?

  Browser -> Squid -> Apache -> Script -> Origin server
or,
  Browser -> Apache -> Script -> Squid -> Origin server


Amos


Re: [squid-users] Squid 3.3 Reverse Proxy Mode - 502 Errors when uploading files larger than 6MB

2013-11-25 Thread Eliezer Croitoru

Hey Madhav,

I will try to test it later with 3.3.10 to make sure I get the same 
issue in a reverse proxy with a http cache_peer.

To test a SSL cache_peer for me it will take a bit longer.
If you can file a bug at the project Bugzilla it would help to keep 
track of this issue:

http://bugs.squid-cache.org/

Since we have two different cases in hand, reverse vs forward and 
http vs https, I will later on add a list of tests to the 
Bugzilla to narrow the issue down to a very specific case which can then 
be analyzed and solved.


The subject of the bug can be "https big POST request failing in the 
middle" or any similar one.


If you can also add an http port to the frontend, just to check whether the 
case of an http frontend and http backend behaves the same, it might help
more. You can use port 10800 if you want, since squid doesn't care about 
the port in this specific case.


Regards,
Eliezer

On 23/11/13 06:36, Madhav V Diwan wrote:

Eliezer

  I just now tried a connection with the cache_peer set to port 80
without SSL; I left the frontend SSL.

Same result... the tiny file makes it through, larger files do not.

Madhav




Re: [squid-users] Directives ignore-private and override-expire not working Squid 3.2 and 3.3

2013-11-25 Thread Eliezer Croitoru

Hey,

Just to make sure you have taken a small look at the headers.
The headers state that at almost the same time the request was made, it 
was already expired.
I have not seen the request headers and I cannot tell you why it is like 
that, but it seems like there is a reason for it.

Have you tried to fetch the url with wget or curl?

Eliezer

On 26/11/13 03:57, Le Trung, Kien wrote:

Thank Eliezer Croitoru for your response,

I double checked your suggestion but still MISS all request
refresh_pattern, of course, is in separated line from the cache_peer
line, just my mistake when copy/paste to email.
  I use . for the refresh_pattern before but no luck

With this configuration, squid-3.1.23 still working properly (same
original server).





Re: [squid-users] Squid 3.3 Reverse Proxy Mode - 502 Errors when uploading files larger than 6MB

2013-11-25 Thread Amos Jeffries
On 26/11/2013 3:26 p.m., techguy005...@yahoo.com wrote:
 After further analysis, the issue only happens when using SSL (HTTPS). The 
 problem does not happen when using normal HTTP.
 


Ah. Try adding front-end-https to the cache_peer options.

http://support.microsoft.com/kb/307347
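
That is, something along these lines on the cache_peer line - the rest of the
line is only a guess at the existing config:

cache_peer 192.168.148.21 parent 443 0 no-query originserver ssl front-end-https=on name=backend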


Amos


Re: [squid-users] Directives ignore-private and override-expire not working Squid 3.2 and 3.3

2013-11-25 Thread Amos Jeffries
On 26/11/2013 4:35 p.m., Eliezer Croitoru wrote:
 Hey,
 
 Just to make sure you have taken a small look at the headers.
 The headers states that at almost the same time the request was asked it
 was expired.
 I have not seen the Request headers and I cannot tell you why it is like
 that but it seems like there is a reason for that.

Usually this is done on resources where the webmaster knows what they
are doing and is completely confident that the data MUST NOT be stored.
You know, the stuff that contains *private* user details and such.

Expires: header causes HTTP/1.0 caches to remove the content immediately
(or not store in the first place).

Cache-Control:private does the same thing for HTTP/1.1 caches, except for
browsers, which in HTTP/1.1 are allowed to store private data unless the
Cache-Control:no-store or Expires: controls are also used.


Amos



Re: [squid-users] Directives ignore-private and override-expire not working Squid 3.2 and 3.3

2013-11-25 Thread Le Trung, Kien
Hi, Eliezer Croitoru

I already sent the header in the first email. Is this the information you want ?
= Squid 3.3.x 
HTTP/1.1 200 OK
Cache-Control: private
Content-Length: 117991
Content-Type: text/html; charset=utf-8
Expires: Thu, 21 Nov 2013 03:12:14 GMT
Server: Microsoft-IIS/7.5
Date: Thu, 21 Nov 2013 03:12:15 GMT
X-Cache: MISS from localhost.localdomain
Connection: close

And after Amos's reply I check again the header of Squid-3.1

= Squid 3.1.x 
HTTP/1.0 200 OK
Cache-Control: private
Content-Type: text/html; charset=utf-8
Expires: Tue, 26 Nov 2013 05:00:03 GMT
Server: Microsoft-IIS/7.5
Date: Tue, 26 Nov 2013 05:00:04 GMT
Content-Length: 117904
Age: 64
Warning: 110 squid/3.1.23 Response is stale (confused here too !)
X-Cache: HIT from localhost.localdomain
Connection: close

In both cases I used the same directives, ignore-private and
override-expire, and the same origin server. Both Squids are also built on the
same server; the only difference is the http service ports.

I still don't know why squid 3.3 and 3.2 can't ignore the private and
override the Expires headers.

Best Regards,
Kien Le

On Tue, Nov 26, 2013 at 11:10 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 26/11/2013 4:35 p.m., Eliezer Croitoru wrote:
 Hey,

 Just to make sure you have taken a small look at the headers.
 The headers states that at almost the same time the request was asked it
 was expired.
 I have not seen the Request headers and I cannot tell you why it is like
 that but it seems like there is a reason for that.

 Usually this is done on resources where the webmaster knows what they
 are doing and is completely confident that the data MUST NOT be stored.
 You know, the stuff the contains *private* user details and such.

 Expires: header causes HTTP/1.0 caches to remove the content immediately
 (or not store in the first place).

 Cache-Control:private does the same thing for HTTP/1.1 caches except for
 browsers. Which in HTTP/1.1 are allowed to store private data unless the
 Cache-Control:no-store or Expires: controls are also used.


 Amos




-- 

Best Regards,
Kiên Lê


Re: [squid-users] Directives ignore-private and override-expire not working Squid 3.2 and 3.3

2013-11-25 Thread Amos Jeffries
On 26/11/2013 6:06 p.m., Le Trung, Kien wrote:
 Hi, Eliezer Croitoru
 
 I already sent the header in the first email. Is this the information you 
 want ?
 = Squid 3.3.x 
 HTTP/1.1 200 OK
 Cache-Control: private
 Content-Length: 117991
 Content-Type: text/html; charset=utf-8
 Expires: Thu, 21 Nov 2013 03:12:14 GMT
 Server: Microsoft-IIS/7.5
 Date: Thu, 21 Nov 2013 03:12:15 GMT
 X-Cache: MISS from localhost.localdomain
 Connection: close
 
 And after Amos's reply I check again the header of Squid-3.1
 
 = Squid 3.1.x 
 HTTP/1.0 200 OK
 Cache-Control: private
 Content-Type: text/html; charset=utf-8
 Expires: Tue, 26 Nov 2013 05:00:03 GMT
 Server: Microsoft-IIS/7.5
 Date: Tue, 26 Nov 2013 05:00:04 GMT
 Content-Length: 117904
 Age: 64
 Warning: 110 squid/3.1.23 Response is stale (confused here too !)
 X-Cache: HIT from localhost.localdomain
 Connection: close
 
 In both case I used the same directives ignore-private and
 override-expire and same origin server. Squids also built in same
 server, the difference is only http service ports.
 
 Still don't know why squid 3.3 and 3.2 can't ignore-private and
 override-expire header.

I still think you are misunderstanding what is happening here.


Ignoring private simply means that Squid will store it instead of
discarding immediately as required by RFC 2616 (and by Law in many
countries). For safe use of privileged information we consider this
content to expire the instant it was received.
 * The handling of that content once it is in cache still goes ahead in
full accordance with HTTP/1.1 requirements, as if the private had not been
there to prevent caching.


override-expire means that when the Expires: header is present, the
value inside it is replaced (overridden) with the values from the
matching refresh_pattern line.
 * The calculation of how fresh/stale the object is still happens - just
without the HTTP response header value for Expires.


3.1.20 are HTTP/1.0 proxies and do not perform HTTP/1.1 protocol
validation perfectly. The headers still contain the Squid Warning: about
the object coming out of cache (HIT) and being stale.

3.2+ are HTTP/1.1 proxies and are more strictly following RFC2616
requirements about revalidating stale content before use. It just
happened that the server presented a new copy for delivery.

NOTE: private *was* ignored. Expires *was* overridden. There was new
content to deliver regardless of the values you changed them to.

ALSO NOTE: The X-Cache header does not display REFRESH states. It
displays MISS usually in the event of REFRESH_MODIFIED and HIT
usually in the event of REFRESH_UNMODIFIED.


You can get a better test of the private/Expires caching by causing the
server those objects came from to be disconnected/unavailable when
accessed from your Squid. In which case you should see the same headers
as present in 3.1 indicating a HIT with stale object returned.

Amos