[squid-users] Re: assertion failed

2013-04-22 Thread Alexandre Chappaz
Hi,

Can anyone explain to me the meaning/reason of this assertion, at
src/fs/ufs/ufscommon.cc line 706:
 ..
 assert(sde);
 ..

got the backtrace out :

(gdb) bt
#0  0x2ba9da45d265 in raise () from /lib64/libc.so.6
#1  0x2ba9da45ed10 in abort () from /lib64/libc.so.6
#2  0x0050ae66 in xassert (msg=0x72992f "sde", file=0x729918
"ufs/ufscommon.cc", line=706) at debug.cc:567
#3  0x00669c18 in RebuildState::undoAdd (this=<value optimized
out>) at ufs/ufscommon.cc:706
#4  0x0066b7f4 in RebuildState::rebuildFromSwapLog
(this=0x16090668) at ufs/ufscommon.cc:570
#5  0x0066bed7 in RebuildState::rebuildStep (this=0x16090668)
at ufs/ufscommon.cc:411
#6  0x0066c099 in RebuildState::RebuildStep (data=0x366c) at
ufs/ufscommon.cc:384
#7  0x0064ac68 in AsyncCallQueue::fireNext (this=<value
optimized out>) at AsyncCallQueue.cc:54
#8  0x0064adc9 in AsyncCallQueue::fire (this=0x366c) at
AsyncCallQueue.cc:40
#9  0x00526fa1 in EventLoop::dispatchCalls (this=<value
optimized out>) at EventLoop.cc:154
#10 0x005271a1 in EventLoop::runOnce (this=0x74ac93f0) at
EventLoop.cc:119
#11 0x00527338 in EventLoop::run (this=0x74ac93f0) at
EventLoop.cc:95
#12 0x005911b3 in SquidMain (argc=<value optimized out>,
argv=<value optimized out>) at main.cc:1501
#13 0x00591443 in SquidMainSafe (argc=13932, argv=0x366c) at
main.cc:1216
#14 0x2ba9da44a994 in __libc_start_main () from /lib64/libc.so.6




Thank you
Alex

2013/4/18 Alexandre Chappaz alexandrechap...@gmail.com:
 sorry, I meant "One kid fails to start, giving the following assertion"

 2013/4/18 Alexandre Chappaz alexandrechap...@gmail.com:
 Hi,

 In our SMP-enabled environment, I have one kid failing to start, giving the
 following assertion:
 2013/04/18 04:03:43 kid1| assertion failed: ufs/ufscommon.cc:706: sde

 I guess it is something related to the store / store rebuilding.
 Maybe a malformed object in the cache store?
 here the part of the log :

 2013/04/18 04:03:42 kid1| Store rebuilding is 5.57% complete
 2013/04/18 04:03:42 kid1| Done reading /var/cache/squid/W1 swaplog
 (18735 entries)
 2013/04/18 04:03:43 kid1| Accepting SNMP messages on 0.0.0.0:3401
 2013/04/18 04:03:43 kid1| Accepting HTTP Socket connections at
 local=0.0.0.0:3128 remote=[::] FD 12 flags=1
 2013/04/18 04:03:43 kid1| assertion failed: ufs/ufscommon.cc:706: sde


 A core file is generated, but it seems to be invalid; gdb says:

 gdb /usr/local/squid/sbin/squid 004/core.758
 GNU gdb (GDB) Red Hat Enterprise Linux (7.0.1-32.el5_6.2)
 Copyright (C) 2009 Free Software Foundation, Inc.
 License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html
 This is free software: you are free to change and redistribute it.
 There is NO WARRANTY, to the extent permitted by law.  Type show copying
 and show warranty for details.
 This GDB was configured as x86_64-redhat-linux-gnu.
 For bug reporting instructions, please see:
 http://www.gnu.org/software/gdb/bugs/...
 Reading symbols from /usr/local/squid/sbin/squid...done.
 Attaching to program: /usr/local/squid/sbin/squid, process 4
 ptrace: Operation not permitted.
 BFD: Warning: /root/004/core.758 is truncated: expected core file size
= 58822656, found: 20480.
 Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging
 symbols found)...done.
 Loaded symbols for /lib64/ld-linux-x86-64.so.2
 Failed to read a valid object file image from memory.
 Core was generated by `(squid-1) -S -f /etc/squid/squid.conf'.
 Program terminated with signal 6, Aborted.
 #0  0x2b6babe7a265 in ?? ()
 (gdb) bt
 Cannot access memory at address 0x7fff38b1cf98
 (gdb) quit






 Any clue on how to get a usable core file and/or on the meaning of the
 assertion ?
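A truncated core (expected 58822656 bytes, found 20480) usually means the core-size limit was too small when the process started. A minimal sketch of lifting it before reproducing the crash (paths and pattern are examples; the sysctl line needs root):

```shell
# Lift the core-size limit in the shell (or init script) that starts Squid.
ulimit -c unlimited
# Optionally, as root, choose a fixed writable location and name pattern:
#   sysctl -w kernel.core_pattern=/var/cache/squid/core.%e.%p
# Squid writes cores into coredump_dir (from squid.conf) or its current
# working directory, which must be writable by the squid runtime user.
ulimit -c   # verify the new limit
```

Also, the "Attaching to program ... ptrace" messages in the transcript above suggest gdb tried to attach to a live process as well as reading the core; `gdb <binary> <corefile>` with a complete core should not need ptrace.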


 Thanks
 Alex


Re: [squid-users] Squid TPROXY and TCP_MISS/000 entries

2013-04-22 Thread Marcin Czupryniak



Hello all!
Checking my logs from time to time, I see that there are some requests 
which return the TCP_MISS/000 log code. I'm managing a medium-sized 
Active-Standby transparent caching proxy (direct routing) which is 
handling around 100 requests per second (average on a daily basis). I 
know what the entry means, but I'm not exactly sure whether, under 
normal operating conditions, they are normal to see in such an amount.


The number of these entries is less than 0.001% of total requests 
served (avg 1 entry per 10 seconds). Should I worry about it, or do 
others get them too?


How long a duration do they show? Any consistency in the type of 
requests?
As far as I can see, sometimes a sequence of 000 misses is returned to the 
same requesting IP (mostly web spiders), but in the meantime they do get 
tons of other content.

Some of them (maybe 20%) come in pairs, something like:

1366622555.453   1488 87.19.154.90 TCP_MISS/000 0 GET 
http://yammo.it/index.php? - DIRECT/151.1.96.198 -
1366622555.454   2327 87.19.154.90 TCP_MISS/000 0 GET 
http://yammo.it/index.php? - DIRECT/151.1.96.198 -


1366622571.558    292 82.90.127.184 TCP_MISS/000 0 GET 
http://www.forumviaggiatori.com/tabindex.php? - DIRECT/5.134.122.135 -
1366622571.575    242 82.90.127.184 TCP_MISS/000 0 GET 
http://www.forumviaggiatori.com/popup%0Bg.png - DIRECT/5.134.122.135 -


1366622596.390   1972 193.32.73.24 TCP_MISS/000 0 GET 
http://www.romaintheclub.com/24042013-shed-function-goa - 
DIRECT/5.134.122.154 -
1366622596.561    166 193.32.73.24 TCP_MISS/000 0 GET 
http://www.romaintheclub.com/24042013-shed-function-goa - 
DIRECT/5.134.122.154 -
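To put a number on how frequent these are, a quick sketch that computes the TCP_MISS/000 share from an access.log in the default native format (the log path is an example):

```shell
# Count TCP_MISS/000 entries and their share of total requests.
# Field 4 of the native log format is the result code/status.
awk '{ total++ } $4 == "TCP_MISS/000" { miss++ }
     END { printf "%d of %d requests (%.4f%%)\n", miss, total, 100*miss/total }' \
    /var/log/squid/access.log
```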





In normal traffic this could be the result of:

* DNS lookup failure/timeout.
Identified by the lack of upstream server information on the log line. 
This is very common, as websites contain broken links, broken XHR 
scripts, and even some browsers send garbage FQDNs in requests to probe 
network functionality. Not to mention DNS misconfiguration and broken 
DNS servers not responding to AAAA lookups.
We are not using IPv6 yet, and it could be due to actually failed DNS 
lookups, as I still have to fix some issues we have with our local 
resolvers. Details from DNS stats


Rcode Matrix:
RCODE  ATTEMPT1  ATTEMPT2  ATTEMPT3
0        936903         0         0
1             0         0         0
2          1525      1522      1522
3           540         0         0
4             0         0         0
5             0         0         0



* Happy Eyeballs clients.
 Identified by the short duration of the transaction, as clients open 
multiple connections and abort some almost immediately.

Maybe that's why they come in pairs?


* HTTP Expect:100-continue feature being used through a Squid configured 
with ignore_expect_100 on - or some other proxy doing the 
equivalent.
Identified by the long duration of the transaction, the HTTP method type 
plus an Expect header on the request, and sometimes no body size. The 
client sends headers with Expect:, then times out waiting for a 
100-continue response which is never going to appear. These clients 
are broken, as they are supposed to send the request payload on timeout 
anyway, which would make the transaction complete properly.
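For illustration, this is roughly what such a stalled request looks like on the wire (the URL and body size are hypothetical): the client sends the headers below, then waits for a "100 Continue" that never arrives instead of sending the body.

```shell
# Print a raw HTTP request carrying the Expect header; a broken client
# stops after these bytes instead of sending the 5-byte body on timeout.
printf 'POST /upload HTTP/1.1\r\nHost: example.com\r\nExpect: 100-continue\r\nContent-Length: 5\r\n\r\n'
```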

Did not check this one


* PMTUd breakage on the upstream routes.
Identified at the TCP level by a complete lack of TCP ACKs to data 
packets following a successful TCP SYN + SYN/ACK handshake. This would 
account for the intermittent nature of it, as HTTP response sizes vary 
and only large packets go over the MTU size (individual TCP packets, 
*not* HTTP response message size).

I don't think it's the case here



Amos


I suspect that most of the misses come from loaded webservers discarding 
requests (so squid never receives a reply) or from actual firewalls 
discarding excessive packets.

Any other suggestions?

Martin


[squid-users] Fwd: ext_time_quota problem.

2013-04-22 Thread Amir Mottaghian
Dear all,

I have compiled squid 3.3.3 with the following options:

./configure --prefix=/usr --sbindir=/usr/sbin
--sysconfdir=/etc/squid3 --includedir=/usr/include
--datadir=/usr/share --libexecdir=/usr/lib/squid3 --localstatedir=/var
--enable-removal-policies=lru --enable-delay-pools
--enable-storeio=aufs,ufs --with-large-files --disable-ident-lookups
--with-default-user=proxy --enable-basic-auth-helpers=LDAP,NCSA
--enable-external-acl-helpers=wbinfo_group,ldap_group,ip_user,unix_group,time_quota,AD_group,kerberos_ldap_group
--enable-negotiate-auth-helpers=squid_kerb_auth



but after the compile and make install procedure, ext_time_quota is not
found in /usr/lib/squid3, so when I change the config file and run squid,
it does not start, and I see the following error at the end of the
screen when I execute /usr/sbin/squid -NCd1:

2013/04/22 17:12:08| WARNING: time_quota #1 exited
2013/04/22 17:12:08| Too few time_quota processes are running (need 1/1)
2013/04/22 17:12:08| Closing HTTP port [::]:3128
2013/04/22 17:12:08| storeDirWriteCleanLogs: Starting...
2013/04/22 17:12:08|   Finished.  Wrote 1019 entries.
2013/04/22 17:12:08|   Took 0.00 seconds (554708.76 entries/sec).
FATAL: The time_quota helpers are crashing too rapidly, need help!
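One thing worth checking is which helper binaries actually got installed: Squid 3.2+ renamed the bundled helpers, so the time_quota external ACL helper installs as ext_time_quota_acl rather than time_quota, and the external_acl_type line must point at the new name. A quick check (the path comes from the --libexecdir option above):

```shell
# Where the helpers should have landed, given --libexecdir=/usr/lib/squid3:
libexecdir=/usr/lib/squid3
# List anything matching either the old or the new helper name:
ls "$libexecdir" 2>/dev/null | grep -i time_quota \
    || echo "no time_quota helper found in $libexecdir"
```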

Could you please help me?

Regards
Amir.




[squid-users] Re: Squid 3.4 Head can't cache static url

2013-04-22 Thread syaifuddin
Header and response:

http://www.megalink-online.com/images/logo-d-link.jpg

GET /images/logo-d-link.jpg HTTP/1.1
Host: www.megalink-online.com
Connection: keep-alive
Cache-Control: max-age=0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent: Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.31 (KHTML, like
Gecko) Chrome/26.0.1410.64 Safari/537.31
Referer:
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-4-Head-can-t-cache-static-url-td4659576.html
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en,id;q=0.8,en-US;q=0.6
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
If-Modified-Since: Tue, 18 Oct 2011 04:23:16 GMT

HTTP/1.1 304 Not Modified
Server: nginx admin
Date: Mon, 22 Apr 2013 13:49:56 GMT
Last-Modified: Tue, 18 Oct 2011 04:23:16 GMT
Connection: keep-alive
Expires: Mon, 29 Apr 2013 13:49:56 GMT
Cache-Control: max-age=604800
X-Cache: HIT from Backend




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-4-Head-can-t-cache-static-url-tp4659576p4659597.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] FATAL: dying from an unhandled exception: c

2013-04-22 Thread Ortega Gustavo Martin
Hi, sorry about my English, I am from Argentina (Spanish-speaking).

We have about 15,000 clients connecting to three squid servers.
I recently updated two of them to Squid Cache: Version 3.3.3-20130419-r12530, 
and our squid service keeps being restarted with this error in cache.log:

[2013/04/22 10:27:17.882398,  1] libsmb/ntlmssp.c:342(ntlmssp_update)
[2013/04/22 10:27:27.910890,  1] libsmb/ntlmssp.c:342(ntlmssp_update)
2013/04/22 10:27:27 kid1| FATAL: dying from an unhandled exception: c
2013/04/22 10:27:31 kid1| Starting Squid Cache version 3.3.3-20130419-r12530 
for x86_64-unknown-linux-gnu...
2013/04/22 10:27:31 kid1| Process ID 30844
2013/04/22 10:27:31 kid1| Process Roles: worker
2013/04/22 10:27:31 kid1| With 20 file descriptors available
2013/04/22 10:27:31 kid1| Initializing IP Cache...
2013/04/22 10:27:31 kid1| DNS Socket created at [::], FD 8

Any help will be useful; eternally grateful.

Gustavo Martín Ortega



[squid-users] Youtube Changes

2013-04-22 Thread Ghassan Gharabli
Hello,

Did anyone notice the changes to the youtube videoplayback URL? I have
noticed that most of the youtube videos are no longer cached because
their id is not static anymore.

Most video ids now start with o- .

I even changed my perl script to rewrite each videoplayback request to
remove the range parameter, and I could successfully get the whole video
file into the youtube player, but this is not my target.

I also tried to get the video_id and save it for each videoplayback;
it saved successfully when I watched each video one at a time, but when
opening several videos at the same time, perl starts to save a random
video id for each videoplayback request, which is not good at all.

I have noticed that each video page has a file get_video_info which
includes all urls related to the video, and the video_id is also
included, so does anyone have any more ideas? The only way seems to be
to get the $video_id from each video request and keep the videoplayback
on hold, because now we need to compare the $cpn, which is common
between the videoplayback\? and s\? urls. Can we compare the $cpn using
Squid, since we are getting requests line by line? For example, we read
s\?, get the video_id and cpn, reserve them, and then search for the
videoplayback which has the common cpn; if the cpn matches, then we save
the video_id that came with the same cpn.

Is it hard with Squid? What is the solution?

I already tried PHP coding, but perl was better.


Re: [squid-users] Need help on SSL bump and certificate chain

2013-04-22 Thread a...@imaginers.org
Dear All!
I also have a problem running ssl-bump with an intermediate CA, using a
certificate signed by a CA.
My setup is as follows:
squid-3.3.3-20130418-r12525 with
- https_port 3130 intercept ssl-bump generate-host-certificates=on
dynamic_cert_mem_cache_size=4MB cert=/etc/squid33/ssl_cert/server.pem
key=/etc/squid33/ssl_cert/key.pem
- ssl_bump server-first all
- sslproxy_cert_error allow all
- sslproxy_cert_adapt setCommonName ssl::certDomainMismatch
following the rules http://wiki.squid-cache.org/Features/MimicSslServerCert

This is working fine when using my self-generated CA for signing the requests;
however, I want to get rid of the browser warning, so I am trying to use a CA
already recognized in the browser, which should be possible following this
ticket: http://bugs.squid-cache.org/show_bug.cgi?id=3426 (already mentioned)

But no matter what I do, I can't get rid of the browser warning. If I use a
self-signed root CA or certificate, squid detects that it is self-signed and
does not append any intermediate CA or other chain.
If I generate a CSR and send it to a CA, I get back a .crt and an
intermediate bundle; I pack them up with the key in a single .pem file and
restart squid - then a chain is displayed in the browser, but now with one
'cert' too many (imho) and marked as invalid. Firefox reports
sec_error_unknown_issuer; Safari says invalid chain length.

For example, in the browser details it looks like this:
RootCA (which is marked fine by the browser) -> Intermediate CA (marked
invalid) -> Certificate signed and created from the CSR (marked invalid) ->
fake certificate created by squid for the requested site (marked invalid)

If anyone has a running setup without importing the self-signed CA to all
browsers please let me know.

Thanks for any feedback,
Alex


[squid-users] Re: Youtube Changes

2013-04-22 Thread syaifuddin
try this

https://code.google.com/p/tempat-sampah/source/browse/storeurl.pl

and read it



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Youtube-Changes-tp4659599p4659600.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Re: assertion failed

2013-04-22 Thread Alex Rousskov
On 04/22/2013 03:21 AM, Alexandre Chappaz wrote:

 can anyone explain to me the meaning/reason of this assertion :
 src/fs/ufs/ufscommon.cc l 706 :
  ..
  assert(sde);
  ..

This assertion indicates a bug in Squid caching code, possibly in the
UFS-based storage code specifically.

The UFS code found a "delete entry E" record in the swap.state log and is
now trying to delete entry E, which was previously loaded into the cache.
The current E state indicates that it is cached on disk. However, the
cache_dir where the entry claims to be stored is missing.

I do not know whether the entry itself is corrupted or the code loaded
an entry from a now-missing cache_dir. The bug may be related to changes
in your squid.conf, especially if you removed some cache_dirs.

You should file a bug report (if you have not already) and post the
result of these gdb commands:

  frame 3
  print added
  print *added
  print Config.cacheSwap.swapDirs

  frame 4
  print this
  print *this

More information may be needed to fully triage this.


HTH,

Alex.



 got the backtrace out :
 
 (gdb) bt
 #0  0x2ba9da45d265 in raise () from /lib64/libc.so.6
 #1  0x2ba9da45ed10 in abort () from /lib64/libc.so.6
 #2  0x0050ae66 in xassert (msg=0x72992f "sde", file=0x729918
 "ufs/ufscommon.cc", line=706) at debug.cc:567
 #3  0x00669c18 in RebuildState::undoAdd (this=<value optimized
 out>) at ufs/ufscommon.cc:706
 #4  0x0066b7f4 in RebuildState::rebuildFromSwapLog
 (this=0x16090668) at ufs/ufscommon.cc:570
 #5  0x0066bed7 in RebuildState::rebuildStep (this=0x16090668)
 at ufs/ufscommon.cc:411
 #6  0x0066c099 in RebuildState::RebuildStep (data=0x366c) at
 ufs/ufscommon.cc:384
 #7  0x0064ac68 in AsyncCallQueue::fireNext (this=<value
 optimized out>) at AsyncCallQueue.cc:54
 #8  0x0064adc9 in AsyncCallQueue::fire (this=0x366c) at
 AsyncCallQueue.cc:40
 #9  0x00526fa1 in EventLoop::dispatchCalls (this=<value
 optimized out>) at EventLoop.cc:154
 #10 0x005271a1 in EventLoop::runOnce (this=0x74ac93f0) at
 EventLoop.cc:119
 #11 0x00527338 in EventLoop::run (this=0x74ac93f0) at
 EventLoop.cc:95
 #12 0x005911b3 in SquidMain (argc=<value optimized out>,
 argv=<value optimized out>) at main.cc:1501
 #13 0x00591443 in SquidMainSafe (argc=13932, argv=0x366c) at
 main.cc:1216
 #14 0x2ba9da44a994 in __libc_start_main () from /lib64/libc.so.6
 
 
 
 
 Thank you
 Alex
 
 2013/4/18 Alexandre Chappaz alexandrechap...@gmail.com:
 sorry, I meant "One kid fails to start, giving the following assertion"

 2013/4/18 Alexandre Chappaz alexandrechap...@gmail.com:
 Hi,

 In our SMP-enabled environment, I have one kid failing to start, giving the
 following assertion:
 2013/04/18 04:03:43 kid1| assertion failed: ufs/ufscommon.cc:706: sde

 I guess it is something related to the store / store rebuilding.
 Maybe a malformed object in the cache store?
 here the part of the log :

 2013/04/18 04:03:42 kid1| Store rebuilding is 5.57% complete
 2013/04/18 04:03:42 kid1| Done reading /var/cache/squid/W1 swaplog
 (18735 entries)
 2013/04/18 04:03:43 kid1| Accepting SNMP messages on 0.0.0.0:3401
 2013/04/18 04:03:43 kid1| Accepting HTTP Socket connections at
 local=0.0.0.0:3128 remote=[::] FD 12 flags=1
 2013/04/18 04:03:43 kid1| assertion failed: ufs/ufscommon.cc:706: sde


 A core file is generated, but it seems to be invalid; gdb says:

 gdb /usr/local/squid/sbin/squid 004/core.758
 GNU gdb (GDB) Red Hat Enterprise Linux (7.0.1-32.el5_6.2)
 Copyright (C) 2009 Free Software Foundation, Inc.
 License GPLv3+: GNU GPL version 3 or later 
 http://gnu.org/licenses/gpl.html
 This is free software: you are free to change and redistribute it.
 There is NO WARRANTY, to the extent permitted by law.  Type show copying
 and show warranty for details.
 This GDB was configured as x86_64-redhat-linux-gnu.
 For bug reporting instructions, please see:
 http://www.gnu.org/software/gdb/bugs/...
 Reading symbols from /usr/local/squid/sbin/squid...done.
 Attaching to program: /usr/local/squid/sbin/squid, process 4
 ptrace: Operation not permitted.
 BFD: Warning: /root/004/core.758 is truncated: expected core file size
 = 58822656, found: 20480.
 Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging
 symbols found)...done.
 Loaded symbols for /lib64/ld-linux-x86-64.so.2
 Failed to read a valid object file image from memory.
 Core was generated by `(squid-1) -S -f /etc/squid/squid.conf'.
 Program terminated with signal 6, Aborted.
 #0  0x2b6babe7a265 in ?? ()
 (gdb) bt
 Cannot access memory at address 0x7fff38b1cf98
 (gdb) quit






 Any clue on how to get a usable core file and/or on the meaning of the
 assertion ?


 Thanks
 Alex



Re: [squid-users] Need help on SSL bump and certificate chain

2013-04-22 Thread Alex Rousskov
On 04/22/2013 10:36 AM, a...@imaginers.org wrote:


 This is working fine when using my self generated CA for signing the requests

Let's call this CA selfCA.


 I want to get rid of the browser warning so I try to use a CA already
 recognized in the browser, what should be possible following this ticket:
 http://bugs.squid-cache.org/show_bug.cgi?id=3426 (already mentioned)

You may have misinterpreted what that bug report says. The reporter
placed his selfCA into the browser. The reporter did not use a CA
certificate from a well-known CA root in his signing chain -- it is not
possible to do that because you do not have the private key from that
well-known root CA certificate.

You should use selfCA as root CA of your signing chain and you have to
place that selfCA in the browser.
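As a minimal sketch (file names and the subject are examples), creating such a selfCA and the combined PEM that the https_port cert= option can point at might look like:

```shell
# Generate a self-signed CA certificate and key, valid for one year.
openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
    -keyout selfCA.key -out selfCA.crt -subj "/CN=My Bumping CA"
# Combine certificate and key into a single PEM file Squid can load.
cat selfCA.crt selfCA.key > selfCA.pem
# selfCA.crt is the certificate every client browser must import and trust.
```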


 If anyone has a running setup without importing the self-signed CA to all
 browsers please let me know.

It is not possible to bump traffic without importing your self-signed
root CA into all browsers. If it were possible, SSL would have been useless.


HTH,

Alex.



[squid-users] Re: Youtube Changes

2013-04-22 Thread babajaga
YES: Very important:

READ IT ! :-)



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Youtube-Changes-tp4659599p4659604.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Squid url_rewrite and cookie

2013-04-22 Thread dannori
Hi, 

digging up an old thread, I was wondering what happened to the
functionality discussed here.

What happened with Rajesh's patch? Did it get added to the squid code?

I searched bugzilla and did not find any entry.

Did the functionality for adding cookies via a url_rewriter get
implemented in squid?

Thanks for your help.

Cheers,

Guenter



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-url-rewrite-and-cookie-tp1042655p4659605.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Re: Squid 3.4 Head can't cache static url

2013-04-22 Thread Amos Jeffries

On 23/04/2013 1:52 a.m., syaifuddin wrote:

header and respond


NOTE: when dealing with a proxy there are *two* pairs of 
request/response to check: one set client<->squid and the other 
squid<->server. Both have an effect on the transaction's cacheability.


As does the traffic mode.



http://www.megalink-online.com/images/logo-d-link.jpg

GET /images/logo-d-link.jpg HTTP/1.1


This is a port-80 format request. Are you intercepting?


Host: www.megalink-online.com
Connection: keep-alive
Cache-Control: max-age=0


This is an instruction from the requestor *not* to use any cached 
objects when responding. The response is always supposed to be a MISS.



Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent: Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.31 (KHTML, like
Gecko) Chrome/26.0.1410.64 Safari/537.31
Referer:
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-4-Head-can-t-cache-static-url-td4659576.html
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en,id;q=0.8,en-US;q=0.6
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
If-Modified-Since: Tue, 18 Oct 2011 04:23:16 GMT

HTTP/1.1 304 Not Modified
Server: nginx admin
Date: Mon, 22 Apr 2013 13:49:56 GMT
Last-Modified: Tue, 18 Oct 2011 04:23:16 GMT
Connection: keep-alive
Expires: Mon, 29 Apr 2013 13:49:56 GMT
Cache-Control: max-age=604800
X-Cache: HIT from Backend


This response says there was a cached object considered (HIT), and the 
answer to the client's If-Modified-Since question is UNMODIFIED. It looks 
perfectly correct and normal to me.



Your report was that Squid was not caching URLs, but this *is* caching 
(somewhere). So:

  Where is the server named "Backend"?
  What caching software is it running?
  And how did the request get there? (What path, through what 
software, delivered by what means?)

Amos