--
Paul Ch
sima...@operamail.com
hello,
thanks Amos, I've modified the config file as you suggested.
After removing the RAID 0, I've noticed better performance.
=
In general, browsing speed is lower than the speed without Squid,
but anyway it is
Note that there is a lot of free memory, and the server's CPU didn't
reach 100%.
==
[root@squid ~]# free -m
             total       used       free     shared    buffers     cached
Mem:         32192      16289      15903
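As a quick sanity check on the numbers above (the buffers/cached columns were cut off, but the three visible figures are self-consistent):

```shell
# figures taken from the free -m output above, in MB
total=32192; used=16289; free=15903
echo $((used + free))   # prints 32192, matching the total column
```

(Buffers and cache count as "used" here but are reclaimable, so the box is far from memory pressure either way.)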
Hello, Ahmad,
on 24.02.13 you wrote:
here is the log of squidGuard!
squidGuard is a separate problem from Squid itself.
==
2013-02-24 06:25:32 [17282] Warning: Possible bypass attempt. Found
multiple slashes where only one is expected:
hello Helmut,
I know that!
Sorry, I'm not understanding you.
regards
--
View this message in context:
http://squid-web-proxy-cache.1019090.n4.nabble.com/slow-browsing-in-centos-6-3-with-squid-3-tp4658635p4658678.html
Sent from the Squid - Users mailing list archive at Nabble.com.
hi,
I have OS Debian 6.0.1
with kernel
Linux cache1 2.6.37-1
I'm using Squid 2.7 STABLE9!
I have a frequent problem, as in the image below:
http://squid-web-proxy-cache.1019090.n4.nabble.com/file/n4658679/lQXZb.png
The only thing I can do is remove the site's IP from the cache!
I mean
Hello, Ahmad,
on 24.02.13 you wrote:
I have OS Debian 6.0.1
with kernel
Linux cache1 2.6.37-1
I'm using Squid 2.7 STABLE9!
That's a version which is about 3 years old.
Can you use a current Squid version?
Best regards!
Helmut
IF no problem to ping www.alhurani.com THEN
  IF problem also with a default squid.conf THEN rebuild squid
     with a stripped-down ./configure
Still a problem?
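That checklist can be sketched as a small shell helper (purely illustrative; the function name and messages are my own wording, not Squid output -- the arguments are the results of the two manual checks):

```shell
# Encode the troubleshooting decision tree as a function.
# $1 = "yes" if ping www.alhurani.com works, $2 = "yes" if a default
# squid.conf works, "no" otherwise.
next_step() {
  ping_ok=$1; default_conf_ok=$2
  if [ "$ping_ok" != "yes" ]; then
    echo "fix connectivity/DNS before blaming squid"
  elif [ "$default_conf_ok" = "yes" ]; then
    echo "bisect your custom squid.conf directives"
  else
    echo "rebuild squid with a stripped-down ./configure"
  fi
}

next_step yes no   # prints: rebuild squid with a stripped-down ./configure
```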
I just tried with my squid to access www.alhurani.com, no problem at all.
My squid:
Squid Cache: Version
Ahmad,
If you think the problem is squidGuard, you have to make sure of this.
I suggest disabling squidGuard and seeing if the performance gets better,
to confirm that the bottleneck is indeed caused by squidGuard.
If it is confirmed that squidGuard is the bottleneck, you can either
try to
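For the disable test, the usual approach (assuming squidGuard is hooked in as a URL rewriter; the paths below are typical defaults, adjust to your install) is to comment out the redirector in squid.conf and reconfigure:

```
# squid.conf -- take squidGuard out of the request path for the test
#url_rewrite_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf

# then apply without a full restart:
#   squid -k reconfigure
```

If browsing speed recovers with the rewriter disabled, the bottleneck is in squidGuard (often its blocklist databases), not in Squid.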
Dang Gmail...guess it didn't want to send a copy to the mailing list.
Go figure.
-- Forwarded message --
From: Adam W. Dace colonelforbi...@gmail.com
Date: Sun, Feb 24, 2013 at 2:02 PM
Subject: Re: [squid-users] Squid 3.3.1 Compiler Error
To: Amos Jeffries squ...@treenet.co.nz
Documented in Bugzilla as Bug #3794
(http://bugs.squid-cache.org/show_bug.cgi?id=3794).
I'd add the patch but I don't want to be rude.
On Sun, Feb 24, 2013 at 2:02 PM, Adam W. Dace colonelforbi...@gmail.com wrote:
Great! I'll give it a shot. I'll also add this to bugzilla as you
Thanks a lot, that patch did the trick. I would've helped more but
I'm not much of a C++ guy.
Relevant output:
2013/02/24 14:23:22 kid1| Squid Cache (Version 3.2.7): Exiting normally.
2013/02/24 14:23:58 kid1| Starting Squid Cache version 3.3.1 for
i686-apple-darwin11.4.2...
And BTW, it's
Hi, I hope this is the correct address for my problem.
I'm living in Germany and want to get access to the US Netflix.
So, I set up an Amazon EC2 instance in the United States and installed
OpenVPN/pptpd and Squid 3.
OpenVPN works fine (of course, only if I have a client for it), and pptpd
also works great
On Fri, Feb 22, 2013 at 02:48:56PM +, Markus Moeller wrote:
A pure squid Kerberos authentication setup does not create any connection
between squid and AD. I am 100% sure of that.
OK, in that case I am now confused.
If you additionally use squid_kerb_ldap then yes, there are
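For reference, a pure-Kerberos setup of the kind Markus describes is just the helper line plus an ACL; the helper validates tickets locally against the keytab, so nothing in it opens a connection from Squid to AD. The paths and service principal below are examples, not taken from any poster's actual config:

```
# squid.conf -- Negotiate/Kerberos auth, no LDAP/AD lookup involved
auth_param negotiate program /usr/lib/squid/squid_kerb_auth -s HTTP/proxy.example.com
auth_param negotiate children 10
acl kerb_users proxy_auth REQUIRED
http_access allow kerb_users
```

Group lookups against AD only enter the picture once you add squid_kerb_ldap as an external ACL helper.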
I have noticed that it always starts to fail when there are exactly
3276 file descriptors available and 13108 file descriptors in use. That is
almost exactly 20% free file descriptors. Still, it looks to me like there
is a problem of not enough file descriptors (just because of the --with-maxfd=16384
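The 20% figure checks out against the quoted numbers (shell arithmetic; note integer division rounds down):

```shell
# numbers quoted above; --with-maxfd=16384 was the build-time cap
free_fds=3276; used_fds=13108
echo $((free_fds + used_fds))                      # prints 16384, exactly the maxfd cap
echo $((100 * free_fds / (free_fds + used_fds)))   # prints 19, i.e. ~20% free
```

So the failures begin while one in five descriptors is still nominally free, which points at a reservation threshold rather than plain exhaustion.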
*You say that on your slow server you are able to achieve twice the req/sec
of your fastest one* -- I obviously meant the opposite
--
View this message in context:
http://squid-web-proxy-cache.1019090.n4.nabble.com/About-bottlenecks-Max-number-of-connections-etc-tp4658650p4658689.html
Sent from the Squid - Users mailing list archive at Nabble.com.
On 25/02/2013 12:30 a.m., Ahmad wrote:
hello,
thanks Amos, I've modified the config file as you suggested.
After removing the RAID 0, I've noticed better performance.
=
In general, browsing speed is lower than the speed in the
Amos,
Do you have an idea as to what I am doing wrong here?
Thanks,
On Fri, Feb 22, 2013 at 12:40 PM, Roman Gelfand rgelfa...@gmail.com wrote:
Thanks for taking time to help me out.
If I understood you correctly, I think I made the changes you
mentioned including iptables -A FORWARD -i eth0
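For comparison, a minimal interception setup usually pairs that FORWARD rule with a NAT redirect. The interface name and Squid port below are assumptions for illustration (eth0 as the LAN side, Squid listening on 3128), not taken from Roman's actual config:

```
# allow forwarded traffic from the LAN side
iptables -A FORWARD -i eth0 -j ACCEPT
# steer inbound port-80 traffic into Squid
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
```

If the FORWARD rule is present but the PREROUTING redirect is missing (or vice versa), clients either bypass Squid entirely or lose connectivity, which is worth ruling out first.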
Hi,
Thanks for the response; sorry for the late reply, as I have just recovered
from a fever.
Are they the same ones constantly? even after the timeout is reported to
clients?
No, the IDs keep changing, but all the clients keep receiving the timeout
message. Here is the first output of
#squidclient
Hi All,
I am trying to write an API that will fetch objects from the Squid cache
given the URL and HTTP method.
When I analyzed the code, I found that this can be achieved with a
storeGetPublic(url, http_method) call to see if the URL has an entry in the
cache,
followed by a storeClientCopy() call.
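As a sketch of that call sequence (the function names are the ones from the message; I have not verified the exact prototypes, so treat this as pseudocode and check src/store.c and src/store_client.c in your Squid tree before building on it):

```
/* look up a public StoreEntry for the URL/method pair */
entry = storeGetPublic(url, http_method);
if (entry != NULL) {
    /* stream the object's data back through the store client interface */
    storeClientCopy(...);   /* args elided -- see the store client API */
}
```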