Hello,
I performed the tests (to block sites using squidGuard) with somewhat fewer
domains, but Squid did not respond properly; that is, the network got slow.
squid-2.5.STABLE11.tar
squidGuard-1.2.10.tar
Berkeley DB 4.2.52
Number of domains in the blacklist: 656490 (0.6 million); URLs: 141581 (0.1 million)
Hello again,
I watched cache.log and found this info:
2009/06/26 14:04:36| TCP connection to 192.168.1.101/80 failed
2009/06/26 14:05:07| TCP connection to 192.168.1.101/80 failed
2009/06/26 14:05:18| TCP connection to 192.168.1.101/80 failed
2009/06/26 14:05:48| TCP connection to
Ken Peng wrote:
My Setup is
Dans -- Squid -- Web
Why Dans-Squid-Web and not Squid-Dans-Web?
Do the clients connect to the Squid port or to the Dansguardian port?
Amos wrote:
Why does it take so long?
Because it's 10 × the request timeout.
Dear Amos,
which directive in squid.conf should I use to decrease this timeout
value?
Thanks.
Do the clients connect to the Squid port or to the Dansguardian port?
Clients connect to DansGuardian, then it connects to Squid, then Squid
goes out to the net.
Why Dans-Squid-Web and not Squid-Dans-Web?
Because all the configurations I've seen do it this way. I didn't
think it worked the
OK, so the bug is not resolved?
Chris Woodfield wrote:
It's really a squid issue, not an Adobe issue, assuming that you're
viewing the .pdf in-browser via the Reader plugin (as opposed to
downloading, then opening)...
http://www.squid-cache.org/bugs/show_bug.cgi?id=2639
The issue is
Can we use an internal redirector (rewrite feature) to replace/remove a
regex (\begin=[0-9]*) in the URL?
Like:
http://www.foo.com/video.flv?begin=900
to
http://www.foo.com/video.flv
Does Squid support internal redirects officially?
If not, using an external redirector is simple enough.
#!/usr/bin/perl -wl
$| = 1;                          # don't buffer the output
while (<>) {                     # Squid writes one request per line to stdin
    our ($uri, $client, $ident, $method) = split;
    $uri =~ s/\bbegin=[0-9]*//;  # drop the begin= parameter
} continue {
    print $uri;                  # echo the (possibly rewritten) URL back to Squid
}
Squid-2.HEAD has some internal rewriting support.
I'm breaking it out into a separate module in Lusca (rather than being
an optional part of the external rewriter) to make using it in
conjunction with the external URL rewriter possible.
Adrian
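For reference, a rewriter script like the Perl sketch above is hooked into Squid via squid.conf with the url_rewrite_program directives; a minimal fragment, where the script path and child count are only illustrative:

```
url_rewrite_program /usr/local/bin/rewrite.pl
url_rewrite_children 5
```

(In Squid 2.5 and older the directive was named redirect_program.)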
2009/6/26 Jeff Pang pa...@laposte.net:
Does
2009/6/26 Phibee Network Operation Center n...@phibee.net:
OK, so the bug is not resolved?
The bugs get resolved when someone contributes a fix. :)
Adrian
On Friday, 26 June 2009 at 13:01:17, Chudy Fernandez wrote:
Can we use an internal redirector (rewrite feature) to replace/remove a
regex (\begin=[0-9]*) in the URL?
Like:
http://www.foo.com/video.flv?begin=900
to
http://www.foo.com/video.flv
If you program an app for the rewrite_program it
Hi Hims,
I am the author of ufdbGuard which is based on squidGuard.
ufdbGuard is free software which does 5 URL lookups/sec
on a recent CPU and has no problems with large databases.
-Marcus
hims92 wrote:
Hello,
I performed the tests (to block sites using squidGuard) with somewhat fewer
Dear list,
I have two Squid sibling caches in accelerator mode with different HTTP
ports (Squid A on port 3128 and Squid B on 3129). Both point to the same
back-end origin server. When I used squidclient to test them, I made sure
Squid A had the requested object in cache (TCP_HIT), while Squid B does
Here is some more information on my previous question. The
configuration, test requests and logs are listed below.
Configuration on squid A (http port 7129):
http_port 7129 accel defaultsite=vmdevcagpcna01.firstamdata.net
acl ACLGPLSites dstdomain
Hi,
So we have an application servers cluster and I'd like to create a cache
strategy for them.
I'm thinking to install a third, lighttpd server, only for static
contents (images, CSS and JS).
The idea, here, is to save resources on the application servers,
redirecting these static requests
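A hedged sketch of how that routing could look in squid.conf for an accelerator setup (the peer IPs, ports, and extension list here are assumptions for illustration, not your actual topology):

```
# Send requests for static file types to the lighttpd box,
# everything else to the application cluster.
acl static urlpath_regex -i \.(css|js|png|jpe?g|gif)$

cache_peer 10.0.0.10 parent 80 0 no-query originserver name=apps
cache_peer 10.0.0.20 parent 80 0 no-query originserver name=lighttpd

cache_peer_access lighttpd allow static
cache_peer_access lighttpd deny all
cache_peer_access apps deny static
cache_peer_access apps allow all
```

With long Expires/Cache-Control headers on the static objects, Squid can then serve most of them from its own cache and rarely touch lighttpd at all.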
I'm running Squid 2.7 STABLE6 for quite a few users and I'm getting
a lot of messages in cache.log like:
2009/06/26 11:41:51| httpReadReply: Excess data from GET
http://webcs.msg.yahoo.com/crossdomain.xml
2009/06/26 11:42:00| httpReadReply: Excess data from GET
My cache traffic volume (I/O) is about 2 Mbps a week, with peaks of 3
Mbps.
Ehm, 2 megabits per second, for a week?
Excuse me, 1 megabit per second for a week, so:
1 Mbit/s × 60 × 60 × 24 × 7 = 604,800 Mbit, / 8 = 75,600 MB, about 76 GB of traffic per week.
I'm referring to both inbound and outbound traffic (1 Mbps in, 1
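The conversion above is easy to sanity-check with a few lines of plain arithmetic (nothing Squid-specific):

```python
# 1 Mbit/s sustained for one week, converted to megabytes of traffic
seconds_per_week = 60 * 60 * 24 * 7        # 604800 seconds
total_megabits = 1 * seconds_per_week      # 604800 Mbit at 1 Mbit/s
total_megabytes = total_megabits / 8       # 8 bits per byte -> 75600.0 MB
print(total_megabytes)                     # 75600.0, i.e. about 76 GB
```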
Is this what you're looking for?
acl javaNtlmFix browser -i java
acl javaConnect method CONNECT
header_access Proxy-Authenticate deny javaNtlmFix javaConnect
header_replace Proxy-Authenticate Basic realm=Internet
Now only HTTPS/SSL access from Java will get Basic auth, and thus a
password dialog.
Hi,
This has been resolved. I downloaded 3.0 and reinstalled with the same
setup and ran the same tests. Everything worked as expected. Thanks.
Roy
-Original Message-
From: Lu, Roy
Sent: Friday, June 26, 2009 11:21 AM
To: Lu, Roy; squid-users@squid-cache.org
Subject: RE: [squid-users]
Amos Jeffries wrote:
Ronan Lucio wrote:
It really seems to be a better choice.
Do you have any idea how many page hits one Squid
server would handle?
Thinking about a dual quad-core with 4 GB RAM, serving only small files (less
than 300 KB each).
Adrian has mapped Squid 2.7 as far as 800-850
Ken Peng wrote:
Amos wrote:
Why does it take so long?
Because it's 10 × the request timeout.
Dear Amos,
which directive in squid.conf should I use to decrease this timeout
value?
http://www.squid-cache.org/Doc/config/peer_connect_timeout/
Thanks.
Chris
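For completeness, that directive goes into squid.conf like any other timeout; a sketch (the 15-second value is only an example, and the shipped default is 30 seconds):

```
# Maximum time to wait when opening a TCP connection to a cache_peer
peer_connect_timeout 15 seconds
```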
Ronan Lucio wrote:
Hi,
So we have an application servers cluster and I'd like to create a
cache strategy for them.
I'm thinking to install a third, lighttpd server, only for static
contents (images, CSS and JS).
The idea, here, is to save resources from application servers,
redirecting
Leonardo Rodrigues wrote:
I'm running Squid 2.7 STABLE6 for quite a few users and I'm
getting a lot of messages in cache.log like:
2009/06/26 11:41:51| httpReadReply: Excess data from GET
http://webcs.msg.yahoo.com/crossdomain.xml
2009/06/26 11:42:00| httpReadReply: Excess data from
2009/6/27 Chris Robertson crobert...@gci.net:
I'm running a strictly forward proxy setup, which puts an entirely different
load on the system. It's also a pretty low load (peaks of 160 req/sec at
25 Mbit/s).
Just another random datapoint - I've just deployed my Squid-2
derivative (which is
Jeff Pang wrote:
Does Squid support internal redirects officially?
Some Squid-2 releases do.
The Squid-3 port is still pending testing.
If not, using an external redirector is simple enough.
#!/usr/bin/perl -wl
$|=1; # don't buffer the output
while (<>) {
our
Lu, Roy wrote:
Hi,
This has been resolved. I downloaded 3.0 and reinstalled with the same
setup and ran the same tests. Everything worked as expected. Thanks.
Please be aware that this state of affairs may change.
The Squid-2 behavior there was more correct per the RFC requirements.
When the 3.0 code is
hims92 wrote:
Hello,
I performed the tests (to block sites using squidGuard) with somewhat fewer
domains, but Squid did not respond properly; that is, the network got slow.
squid-2.5.STABLE11.tar
squidGuard-1.2.10.tar
Berkeley DB 4.2.52
number of domains in black list - 656490 (0.6 million) ; urls -
1246076496.527     79 (ip_hidden) TCP_MISS/200 4467 CONNECT login.yahoo.com:443 - DIRECT/209.191.92.114 -
1246076496.689    139 (ip_hidden) TCP_MISS/302 1451 GET http://us.f1119.mail.yahoo.com/ym/login? - DIRECT/98.137.26.66 text/html
1246076496.730     38 (ip_hidden) TCP_MISS/302 564 GET