sorry, it's probably my fault.
again:
All I need are some good, already-tested values for a handful of configuration parameters in
squid.conf, so that Squid will not stop working when I run 20 simultaneous clients against it.
This can't be too difficult, can it?
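For what it's worth, the directives I would start looking at are roughly these. This is a hypothetical sketch, not a set of benchmarked values — the port, path, and sizes below are assumptions to adapt:

```
# hypothetical starting points, not benchmarked values
http_port 3128                            # port Squid listens on for client requests
cache_mem 32 MB                           # memory reserved for hot objects
cache_dir ufs /var/spool/squid 500 16 256 # 500 MB disk cache
half_closed_clients off                   # drop half-closed connections instead of keeping them
client_lifetime 1 hour                    # shorter than the 1-day default, frees stuck sockets
acl all src 0/0
http_access allow all                     # test setup only; restrict this in production
```

Under heavy concurrency, Squid 2.x is also limited by file descriptors; checking the limit (e.g. `ulimit -n`) before starting Squid may matter, though in 2.x the maximum is often fixed at compile time.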
thanks marc
Marc Elsen wrote:
Marc Schmidt wrote:
Thanks for the fast reply! :-) I don't completely understand all of this (my fault, probably :-).
The version is 2.4.STABLE7.
The access.log reports roughly 500 to 1000 requests, then it hangs.
Nothing is reported in cache.log.
I did set up a few things in squid.conf; http_port is among these changes, but I have not changed anything in terms of object sizes. Squid is configured to not cache anything:
( acl all src 0/0
no_cache deny all
)
That is the only important thing to mention.
One thing I encountered after inspecting the log files from Apache (which serves the requested URL): long after Squid stopped working, a couple of requests appear in Apache's access.log:
"IP-blablabla" - - [12/Feb/2003:13:37:01 +0100] "-" 408 -
"IP-blablabla" - - [12/Feb/2003:13:37:01 +0100] "-" 408 -
"IP-blablabla" - - [12/Feb/2003:13:37:01 +0100] "-" 408 -
"IP-blablabla" - - [12/Feb/2003:13:37:01 +0100] "-" 408 -
"IP-blablabla" - - [12/Feb/2003:13:37:01 +0100] "-" 408 -
This means Squid opened a socket but didn't send a request, right? That's why there is a 408 (Request Timeout).
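Those lines are in Apache's common log format, where the status code is the second-to-last field; a tiny sketch (hypothetical helper, same language as the test client) for pulling it out when scanning large logs:

```java
// Pulls the HTTP status out of an Apache common-log line such as:
//   "IP-blablabla" - - [12/Feb/2003:13:37:01 +0100] "-" 408 -
// Assumes the default layout: status is the second-to-last field.
public class LogStatus {
    public static int statusOf(String line) {
        String[] fields = line.trim().split(" +");
        return Integer.parseInt(fields[fields.length - 2]);
    }

    public static void main(String[] args) {
        String line = "\"IP-blablabla\" - - [12/Feb/2003:13:37:01 +0100] \"-\" 408 -";
        System.out.println(statusOf(line)); // prints 408
    }
}
```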
Are you testing an accelerator setup?
M.
OK, but this doesn't help me. I need a pointer from somebody on where to start reconfiguring the proxy.
Cheers, Marc
Marc Elsen wrote:
Marc Schmidt wrote:
Hi all,
Which version of Squid are you testing?
After writing a little performance test client and running it against Squid, the poor little fish stops doing what it is supposed to do: serving the requests.
The setup is something like this:
- the test client is written in Java (using JDK 1.4.0)
- there are 20 threads (each simulating a web client)
- each thread requests the same URL 50 times
- the OS is SuSE Linux 7.3
- the Squid configuration is the one that gets shipped (the standard squid.conf)
When using 20 threads with 10 iterations per thread, everything is fine.
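For reference, here is a rough reconstruction of such a client — an assumption on my part, since the original source isn't shown — using plain java.net. The default URL and the timeouts are placeholders, and the timeout setters need JDK 1.5+:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical reconstruction of the test client described above:
// THREADS workers, each fetching the same URL ITERATIONS times.
public class SquidLoadTest {
    static final int THREADS = 20;
    static final int ITERATIONS = 50;

    // total number of requests the run will attempt
    public static int totalRequests(int threads, int iterations) {
        return threads * iterations;
    }

    // one GET; drains and discards the response body
    static void fetchOnce(String url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setConnectTimeout(5000); // placeholder timeouts
        conn.setReadTimeout(5000);
        InputStream in = conn.getInputStream();
        byte[] buf = new byte[4096];
        while (in.read(buf) != -1) { /* drain */ }
        in.close();
        conn.disconnect();
    }

    public static void main(String[] args) throws InterruptedException {
        // placeholder target; point this at the Squid port under test
        final String url = args.length > 0 ? args[0] : "http://localhost:3128/";
        Thread[] workers = new Thread[THREADS];
        for (int t = 0; t < THREADS; t++) {
            workers[t] = new Thread(new Runnable() {
                public void run() {
                    for (int i = 0; i < ITERATIONS; i++) {
                        try {
                            fetchOnce(url);
                        } catch (IOException e) {
                            System.err.println("request failed: " + e);
                        }
                    }
                }
            });
            workers[t].start();
        }
        for (int t = 0; t < THREADS; t++) {
            workers[t].join();
        }
        System.out.println("attempted " + totalRequests(THREADS, ITERATIONS) + " requests");
    }
}
```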
So, for my two cents, this is more or less a configuration issue, isn't it?
Is there anybody out there with a proper squid.conf that is prepared to start Squid in a high-performance mode? Or does anybody know which conf parameters to tweak?
Help is appreciated.
Cheers, Marc
What's in access.log during the problem test window?
More important: is there anything in cache.log during the problem phase?
There is no high-performance-mode squid.conf, so to speak, because Squid is always highly performant...
A standard squid.conf as shipped cannot work, I think; at least a listening port for requests must be specified.
What about the cache sizes used, etc.?
M.
