Amos Jeffries wrote:
Tom Williams wrote:
Ok, now that I've basically got Squid 3 configured as an HTTP
accelerator, I have a question about ACL rules and http_access.
Here is the basic config: I've got two web servers behind a load
balancer. The idea is to have Squid serve as an HTTP accele
Hi. I'm seeing periodic odd behavior from one of our squid2.6 stable18
(and stable22) boxes during the peak hours when the squid is busiest,
but not off-peak, and no other signs of a capacity limit except the
occasional queue congestion warning. About 15% of the requests
for one url we're externall
On fre, 2008-10-24 at 13:32 -0700, Linda W wrote:
> BUT---this sure is misleading and confuses the heck out of poor
> "ignorami" like me, who think of a response time as something along the lines
> of "(srchost)ping -> (->remotehost-echo: "YO!" ->)-> (srchost: "YO!")
It's the median response time
Henrik Nordstrom wrote:
On fre, 2008-10-24 at 11:52 -0700, Linda W wrote:
> I see a lot of these messages in my squid warning log...
> (count=107) WARNING: Median response time is 57448 milliseconds
This can happen naturally if at some time you have only very few users
and those mostly perfo
Doh! That was like a lightning strike on my head :-)
Many thanks as always Henrik!
> Subject: Re: [squid-users] Squid.Conf Needed For Proxy to Proxy Cache (Not
> Via ICP)
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> CC: squid-users@squid-cache.org
> Date: Fri, 24 Oct 2008 21:27:49 +
On fre, 2008-10-24 at 15:51 -0400, Strauss, Christopher wrote:
> Thanks for your reply, Henrik.
> Has this always been the way squid handles these aborted requests?
As far as I can remember yes.
Regards
Henrik
Thanks for your reply, Henrik.
Has this always been the way squid handles these aborted requests? The
reason I'm asking is that we've been using squid as a proxy for over a year,
but I didn't start seeing any "-" status codes until just recently, around
the same time we updated to 2.6.STABLE20 from
On fre, 2008-10-24 at 11:52 -0700, Linda W wrote:
> I see a lot of these messages in my squid warning log...
>
> Specifically, in filtering off the date, and sort+uniq+counting, I see:
>
> var/log# grep "Median response" warn|cut -c36-90 |more|sort|uniq -c
> 107 WARNING: Median response time i
On fre, 2008-10-24 at 14:46 -0400, Strauss, Christopher wrote:
> > I am running Squid version 2.6.STABLE20 as a proxy server on
> > 2.2.20-gentoo-r3 Linux. I am seeing HTTP status code "-" in the http_log
> > file:
It means the request was aborted before there was any form of response.
Regards
He
On fre, 2008-10-24 at 14:32 -0400, [EMAIL PROTECTED] wrote:
> I am looking to force all requests sent to an internal proxy to another
> internal proxy. The two proxies are separated via a WAN link and each one is
> managed by different admins. I am not able to use ICP.
> Does anyone have a sq
I see a lot of these messages in my squid warning log...
Specifically, in filtering off the date, and sort+uniq+counting, I see:
var/log# grep "Median response" warn|cut -c36-90 |more|sort|uniq -c
107 WARNING: Median response time is 57448 milliseconds
1 WARNING: Median response time is
> I am running Squid version 2.6.STABLE20 as a proxy server on
> 2.2.20-gentoo-r3 Linux. I am seeing HTTP status code "-" in the http_log
> file:
> 216.82.93.201 - - [24/Oct/2008:07:32:49 -0400] "GET
> http://www.galco.com/scripts/cgiip.exe/wa/wcat/catalog.htm? HTTP/1.1" - -
> "-" "Mozilla/4.0 (com
Hello;
I am looking to force all requests sent to an internal proxy to another
internal proxy. The two proxies are separated via a WAN link and each one is
managed by different admins. I am not able to use ICP.
I will not be able to resolve via DNS any of the URLs parsed by my internal
Squi
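[A chaining setup like this is normally expressed with cache_peer rather than ICP; a minimal squid.conf sketch for the downstream proxy, with a placeholder hostname:

```
# Send all requests to the upstream proxy over HTTP only.
# "0" is the (unused) ICP port and no-query disables ICP queries.
cache_peer upstream.example.internal parent 3128 0 no-query default

# Never contact origin servers directly; always go via the parent.
# This also sidesteps the local DNS-resolution limitation.
never_direct allow all
```
]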
Ricardo,
You cannot do it with a transparent proxy.
If you want Squid to handle https traffic, you must
use Squid in a non-transparent setup.
-Marcus
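[In a non-transparent setup the browser issues a CONNECT request for each https site, which Squid can match with an ACL; a minimal squid.conf sketch, with a placeholder domain (the CONNECT acl is already defined in the default squid.conf):

```
acl blocked_ssl dstdomain .blocked-site.example
acl CONNECT method CONNECT
http_access deny CONNECT blocked_ssl
```
]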
Ricardo Augusto de Souza wrote:
I am still not able to block https sites.
I tested everything you suggested here.
I am using a transparent proxy. I am re
Ricardo Augusto de Souza wrote:
> I am still not able to block https sites.
> I tested everything you suggested here.
> I am using a transparent proxy. I am redirecting all outgoing traffic to
> port 80 to squid port 3128. If I redirect port 443 to squid I won't be
> able to access ANY https site.
I'm no squ
I am still not able to block https sites.
I tested everything you suggested here.
I am using a transparent proxy. I am redirecting all outgoing traffic to
port 80 to squid port 3128. If I redirect port 443 to squid I won't be
able to access ANY https site.
I just want to block a *FEW* https sites like I AM ALREA
On fre, 2008-10-24 at 13:40 +0530, nitesh naik wrote:
> Is there a way to ignore the query string in a URL so that objects are cached
> without it? I am using an external perl program to strip the query
> string from the URL, which is slowing down response time. I have started 1500
> processes of red
On fre, 2008-10-24 at 16:52 +0900, [EMAIL PROTECTED] wrote:
> Hello, I have a question.
>
> I'd like to configure Keepalive-Timeout.
> But I can't find "Keepalive" section in the squid.conf file.
>
> Does the "persistent_request_timeout" TAG mean "Keepalive-timeout"?
Yes. It sets the timeout for idle
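[For reference, the directive takes a time value; an illustrative squid.conf sketch:

```
# How long Squid waits on an idle persistent (keep-alive) client
# connection for the next request before closing it.
persistent_request_timeout 1 minute
```

There is no direct per-destination KeepAlive on/off switch as in Apache; the closest knobs are the global client_persistent_connections and server_persistent_connections on/off directives.]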
On fre, 2008-10-24 at 08:31 -0500, Osmany Goderich wrote:
> It was the range_offset_limit -1 KB line that was not letting squid
> resume downloads. I set it back to 0 KB as it is by default and
> voila!!! Everything back to normal!!
Good.
"range_offset_limit -1" says Squid should NEVER resume dow
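[For reference, a sketch of the two settings discussed, with comments reflecting the documented behaviour:

```
# 0 (the default): forward range requests unchanged, never fetch
# the whole object just to satisfy a range.
range_offset_limit 0 KB

# -1: always fetch the entire object from the beginning, which is
# what defeated the resumed (ranged) downloads described above.
#range_offset_limit -1
```
]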
On tor, 2008-10-23 at 15:54 -0500, Osmany Goderich wrote:
> I had squid2.6STABLE6-5 before and I upgraded it thinking it was a bug in
> that release. Should I still downgrade to 2.7?
Yes.
Regards
Henrik
On 24.10.08 13:40, nitesh naik wrote:
> Is there a way to ignore the query string in a URL so that objects are cached
> without it? I am using an external perl program to strip the query
> string from the URL, which is slowing down response time. I have started 1500
> processes of the redirect program.
>
I solved the problem.
It was the range_offset_limit -1 KB line that was not letting squid resume
downloads. I set it back to 0 KB as it is by default and voila!!! Everything
back to normal!!
Thank you very much for your support. This is one of the best mailing lists.
-Original Message-
Hello,
IE6 does not support the Negotiate authentication scheme for proxies.
It supports it only against web servers.
Regards
Malte
On Fri, 24 Oct 2008 07:38:57 -0400
"Steven Cardinal" <[EMAIL PROTECTED]> wrote:
> Thanks Henrik,
>
> That was my issue with Firefox - it now authenticates ju
Thanks Henrik,
That was my issue with Firefox - it now authenticates just fine. I've
been unable to get IE (6.0.2900.2180.xpsp_sp2_gdr.080814-1233) to
authenticate. I know this isn't a squid-specific thing, but any ideas
what setting in IE may be responsible for this? If not, no problem. I
appreci
[EMAIL PROTECTED] wrote:
Hello, I have a question.
I'd like to configure Keepalive-Timeout.
But I can't find "Keepalive" section in the squid.conf file.
Does the "persistent_request_timeout" TAG mean "Keepalive-timeout"?
If so, can I choose "KeepAlive on" or "KeepAlive off" on each destination sit
Hi All,
Is there a way to ignore the query string in a URL so that objects are cached
without it? I am using an external perl program to strip the query
string from the URL, which is slowing down response time. I have started 1500
processes of the redirect program.
If I run squid without redirect progra
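[For illustration, a minimal sketch of such a helper in Python rather than perl, assuming the classic one-request-per-line redirector interface (URL first on each input line, rewritten URL on stdout). Note that a plain redirector changes which URL Squid actually fetches, not just the cache key:

```python
#!/usr/bin/env python
# Sketch of a Squid redirector that strips query strings.
# Input line format (assumed): "URL client/fqdn ident method"
# Reply: the rewritten URL, or an empty line for "no change".
import sys

def strip_query(url):
    # Drop everything from the first '?' onward.
    return url.split('?', 1)[0]

def main():
    for line in sys.stdin:
        parts = line.split()
        if not parts:
            print()                  # empty reply = leave URL alone
        else:
            print(strip_query(parts[0]))
        sys.stdout.flush()           # Squid expects unbuffered replies

if __name__ == '__main__':
    main()
```

A single copy of this loop handles one request at a time, so Squid still needs several helper processes configured, but far fewer than 1500.]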
Hello, I have a question.
I'd like to configure Keepalive-Timeout.
But I can't find "Keepalive" section in the squid.conf file.
Does the "persistent_request_timeout" TAG mean "Keepalive-timeout"?
If so, can I choose "KeepAlive on" or "KeepAlive off" on each destination site?
And can I choose "KeepA
Tom Williams wrote:
Ok, now that I've basically got Squid 3 configured as an HTTP
accelerator, I have a question about ACL rules and http_access.
Here is the basic config: I've got two web servers behind a load
balancer. The idea is to have Squid serve as an HTTP accelerator for
Apache so i