Hi,
I followed your guide and the WARNING message disappears at startup.
However, new machines still have 500 MISS in the access.log while
old machines do not have any 500 MISS. Those caching machines
(including old and new ones) serve the same domains.
And those are reverse proxies.
I'll be back.
Hi all,
in Jan 2009 Amos Jeffries wrote for Squid 3.0-STABLE12 why rep_header
Cookie: gives the error 'clear_logged_in_user_cookie' ACL is used but there
is no HTTP reply -- not matching
here:
http://www.squid-cache.org/mail-archive/squid-users/200901/0501.html
Squid checks to see whether
On 28/11/2012 8:35 p.m., Stefan Bauer wrote:
-Original Message-
From: Amos Jeffries squ...@treenet.co.nz
*could* be yes. *if* the website were relying on clients always having
one IP through their visit. As demonstrated by its very broken uses of
Vary, Pragma, Range,
-Original Message-
From: Amos Jeffries squ...@treenet.co.nz
In which case I would recommend removing the round-robin algorithm from
your peer options. The first one you list will be used first and when it
has problems the failover will move traffic to the second one listed.
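Amos's advice can be sketched as a squid.conf fragment (the peer hostnames and ports here are hypothetical, not taken from the thread):

```
# No round-robin option: Squid prefers peers in the order listed, so
# parent1 receives all traffic and parent2 only takes over on failure.
cache_peer parent1.example.com parent 3128 0 no-query
cache_peer parent2.example.com parent 3128 0 no-query
```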
On 28/11/2012 10:51 p.m., A. W. wrote:
Hi all,
in Jan 2009 Amos Jeffries wrote for Squid 3.0-STABLE12 why rep_header
Cookie: gives the error 'clear_logged_in_user_cookie' ACL is used but
there is no HTTP reply -- not matching
here:
This can be seen as solved. The problem is squidGuard: if I disable the
url_rewrite program, the problem goes away.
So it's time to take this problem to the squidGuard people.
Thank you for your time!
On 28/11/2012 10:37 p.m., Le Trung, Kien wrote:
Hi,
I followed your guide and the WARNING message disappears at startup.
However, new machines still have 500 MISS in the access.log while
old machines do not have any 500 MISS. Those caching machines
(including old and new ones) serve the same
I need to transparently proxy traffic, and the best way to do this seems
to be to use tproxy, since that allows IPv6 traffic to be supported.
However, when using tproxy, Squid spoofs the client's source address
when making the connection to the web server - this is something I don't
need,
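For what it's worth, the two sides of this can be sketched in squid.conf (a hedged sketch only: the `spoof_client_ip` directive did not exist in the 3.2-era Squid of this thread and only appeared in later releases, around 3.5, so verify against your version's documentation):

```
# tproxy mode: needed for IPv6-capable interception, but by default
# Squid spoofs the client's source address on outgoing connections.
http_port 3129 tproxy

# Squid 3.5+ only: selectively disable the spoofing while keeping
# tproxy interception.
spoof_client_ip deny all
```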
Hi Amos,
Thanks a lot. My configuration is working fine in Squid-3.1.15;
I believe there could be a bug in squid-3.1.14.
Anyhow, thanks a lot to all.
Regards,
Sekar
On Wed, Nov 28, 2012 at 11:44 AM, Amos Jeffries squ...@treenet.co.nz wrote:
On 28/11/2012 1:48 a.m., 金 戈 wrote:
This is
Dear all
Since Google and YouTube force browsers to use SSL, we have a lack of
statistics and web filtering with Squid.
Is there a good way to redirect SSL requests to
Google/YouTube to non-encrypted requests?
Best regards
-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: 27 November 2012 17:40
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Problem publishing on Facebook via Squid 3.1.2
On 28.11.2012 09:00, Eliezer Croitoru wrote:
You will like to allow for your
On 28.11.12 13:52, David Touzeau wrote:
Since Google and YouTube force browsers to use SSL, we have a lack of
statistics and web filtering with Squid.
Is there a good way to redirect SSL requests to
Google/YouTube to non-encrypted requests?
Google allows you to do this
Hi Amos,
thanks a lot for your explanation. My English is not perfect - maybe
I understood something incorrectly - sorry.
you wrote:
Hopefully one day someone will get around to re-checking storeability
about 11/12 in the above sequence - when that happens reply headers will
be available
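The sequencing issue Amos describes is why a `rep_header` ACL can only be checked by directives that run after a reply exists. A minimal sketch (the ACL name and cookie pattern are made up for illustration):

```
# rep_header tests HTTP *reply* headers
acl logged_in rep_header Set-Cookie -i session=

# OK: http_reply_access is checked once reply headers have arrived
http_reply_access deny logged_in

# Broken: http_access runs before any reply exists, so this ACL
# would log "... is used but there is no HTTP reply -- not matching"
# http_access deny logged_in
```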
-Original Message-
From: Delisle, Marc [mailto:marc.deli...@cegepsherbrooke.qc.ca]
Sent: 28 November 2012 10:48
To: squid-users@squid-cache.org
Subject: RE: [squid-users] Problem publishing on Facebook via Squid 3.1.2
-Original Message-
From: Amos Jeffries
On 29.11.2012 06:02, Delisle, Marc wrote:
[snip]
We are now running with Squid 3.2.3 and 65535 file descriptors: same
problem with Facebook. Are there special settings to use, to benefit
from HTTP/1.1 improvements?
Only the persistent connections settings. The rest of HTTP/1.1
functionality is
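The persistent-connection settings referred to are presumably these two squid.conf directives (both default to on in modern Squid; shown only as a sketch):

```
# Allow HTTP keep-alive / persistent connections on each side of the proxy
client_persistent_connections on
server_persistent_connections on
```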
Thanks !!!
But what about YouTube?
-Original Message-
From: Steve Hill
Sent: Wednesday, November 28, 2012 5:13 PM
To: David Touzeau
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] A way to redirect google/Youtube SSL
On 28.11.12 13:52, David Touzeau wrote:
Since
I'm beginning to conclude that refresh_pattern in Squid is useless.
I had a neat refresh_pattern which is supposed to help cache just
about everything, below:
refresh_pattern ([^.]+\.)?(download|(windows)?update)\.(microsoft\.)?com/.*\.(cab|exe|msi|msp|psf) 4320 100% 43200 override-expire
On 29.11.2012 13:31, Joshua B. wrote:
I'm beginning to conclude that refresh_pattern in Squid is useless.
I had a neat refresh_pattern which is supposed to help cache just
about everything, below:
refresh_pattern
Dear,
Yes, I didn't mention the 500 MISS from the start, but that's the reason I
assumed that accept_filter makes a difference between my old cache and the
new caches.
Now, my old cache doesn't have any 500 MISS but all my new caches do.
All caches share the same domains and the same origin servers.
The 500 MISS
Thanks for the various suggestions.
- Running HEAD from August, I would have thought I'm running
(almost) the newest 3.3; server bumping is in there.
- http://wiki.squid-cache.org/ConfigExamples/Chat/Skype does not help:
it basically says allow port 443, and explains how to allow HTTP to
all