Tollef Fog Heen wrote:
]] Václav Bílek
| 5) graceful restart (not in the meaning of a persistent cache, but in
| the meaning of not flushing all clients and letting them wait for tens
| of seconds until enough threads appear.)
You can tune the thread_pool_add_delay parameter to at least
I have tried setting session_linger=50 on 2.0.4 and it seems that it
solves the problem (I wasn't able to reproduce it after that).
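For reference, a minimal sketch of setting both knobs at varnishd startup; the numbers are illustrative only (both parameters take milliseconds), not measured recommendations:

    # Illustrative startup flags; tune to your own load.
    varnishd -a :80 -f /etc/varnish/default.vcl \
        -p thread_pool_add_delay=2 \
        -p session_linger=50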
Kristian Lyngstol wrote:
(Excessive trimming ahead. Whoohoo)
On Tue, Nov 03, 2009 at 11:51:22AM +0100, Václav Bílek wrote:
When testing varnish throughput
it was a bad TCP stack setting... tcp_rmem and tcp_wmem were too small
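A sketch of the kind of sysctl change meant here; the exact values are assumptions and should really be sized to your bandwidth-delay product:

    # min / default / max socket buffer sizes, in bytes
    sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"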
Tollef Fog Heen wrote:
]] Václav Bílek
| Is there anyone who can point us where to look to find the problem?
As you are getting a write error and not a timeout, I would take a look
at any load balancers or similar
I have tested setting send_timeout to bigger values with no change
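For the record, a sketch of how send_timeout can be raised on a running instance (600 is an arbitrary example value, in seconds; the -T address is whatever your management port is):

    varnishadm -T localhost:6082 param.set send_timeout 600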
Rogério Schneider wrote:
2009/10/29 Václav Bílek v.bi...@1art.cz:
Is there anyone who can point us where to look to find the problem?
Thanks for any response.
Václav,
Isn't your problem related to the send_timeout
Hi
When testing varnish throughput and scalability I have found strange
varnish behavior.
using 2.0.4 with the cache_acceptor_epoll.c patch:
http://varnish.projects.linpro.no/ticket/492
When testing scalability with the number of clients, I am able to get
varnish into a state where it stops responding, but
Václav Bílek wrote:
Hi
When testing varnish throughput and scalability I have found strange
varnish behavior.
using 2.0.4 with the cache_acceptor_epoll.c patch:
http://varnish.projects.linpro.no/ticket/492
without the patch there is no hang, but performance goes down dramatically
Hi
We are solving strange problem with smol part of clients ( aprox 1%) on
some objects they do not get all the content in varnishlog we see:
Write error, len = 69696/260844, errno = Success
In the archive of this mailing list I found that the client ended the
connection before the end of the transfer...
Hello
Under high load we are getting asserts (and varnish restarts) like this:
varnishd[23515]: Child (7569) Panic message: Assert error in Tcheck(), cache.h line 648:
  Condition((t.e) != 0) not true.
thread = (cache-worker)
sp = 0x7f76c5875008 {
  fd = 611, id = 611, xid = 778413112,
such bad request?
Václav Bílek wrote:
Hello
Under high load we are getting asserts (and varnish restarts) like this:
varnishd[23515]: Child (7569) Panic message: Assert error in Tcheck(), cache.h line 648:
  Condition((t.e) != 0) not true.
thread = (cache-worker)
sp = 0x7f76c5875008
Thanks a lot...
Is there anything else that changed in the VCL config in trunk?
Kristian Lyngstol wrote:
On Thu, Sep 17, 2009 at 05:02:37PM +0200, Václav Bílek wrote:
I have tried the trunk release and hit a problem with VCL which worked in 2.0.4...
Variable 'obj.grace' not accessible in method
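If the breakage is the trunk rename of the fetch-side object (an assumption based on the error above; in trunk, obj.* inside vcl_fetch became beresp.*), grace would now be set like this:

    sub vcl_fetch {
        # Fetch-side grace: how long an expired object may be kept around.
        set beresp.grace = 60s;
    }
    sub vcl_recv {
        # Request-side grace: how stale an object this client will accept.
        set req.grace = 60s;
    }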
Since we don't know exactly how to patch for disabling keepalive, we
tried a nasty hack:
diff bin/varnishd/cache_acceptor_epoll.c bin/varnishd/cache_acceptor_epoll.c.new
114c114
<       deadline = TIM_real() - params->sess_timeout;
---
>       //deadline =
Laurence Rowe wrote:
You can easily up the maximum open file descriptor limit from the
default of 1024 with `ulimit -n some_large_value` in the script used
to start varnish (must be done as root). Varnish should cope fine with
a large number of connections.
Laurence
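A sketch of what that looks like in a start script; the limit value is an arbitrary example:

    #!/bin/sh
    # Must run as root so the raised limit is inherited by varnishd.
    ulimit -n 131072
    exec varnishd -a :80 -f /etc/varnish/default.vcl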
We already did
Václav Bílek wrote:
Hello
We are trying to deploy varnish in production but IE6 is a big problem
for us.
On the first try of launching varnish we learned that it is impossible to
use keepalive because of the number of clients. So we tried to disable
keepalive by adding:
set
Hello
We are trying to deploy varnish in production but IE6 is a big problem
for us.
On the first try of launching varnish we learned that it is impossible to
use keepalive because of the number of clients. So we tried to disable
keepalive by adding:
set resp.http.Connection = "close";
to the
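A sketch of the whole snippet; the truncated "to the" above does not say which subroutine was meant, so placing it in vcl_deliver is my assumption (and on 2.0 the response object in vcl_deliver is obj rather than resp):

    sub vcl_deliver {
        # Ask clients to close the connection after every response,
        # effectively disabling keepalive from varnish's side.
        set resp.http.Connection = "close";
    }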
Hello
I have a problem that varnish sometimes returns expired data.
The TTL of objects is from 1 to 10 seconds, but varnish returned objects
tens of minutes old.
Grace is set to 60s.
The default TTL is set to 60s.
The Age header of such an old object had a negative value...
Age: -6643 or
Age: -4803
Any
varnishd[12160]: Child (3092) said managed to mmap 30010953728 bytes of
30010953728
varnishd[12160]: Child (3092) said Ready
Václav Bílek wrote:
Hello
I have a problem that varnish sometimes returns expired data.
The TTL of objects is from 1 to 10 seconds, but varnish returned objects
older
I guess my point is that certain use cases (some valid, some not, some
involving bad pthread libraries in distributions (lots of them out
there!))
How can I identify if our pthread libraries are in trouble?
Václav Bílek wrote:
I have a problem that varnish sometimes returns expired data.
The TTL of objects is from 1 to 10 seconds, but varnish returned objects
tens of minutes old.
Grace is set to 60s.
The default TTL is set to 60s.
Can you attach all the header-data you have regarding
Václav Bílek wrote:
Is it possible that it is related to the clock shift we had after reboot?
For a few minutes after reboot the time was set 2 hours forward (before
ntp corrected it)... we forgot to add /dev/rtc to the kernel.
Confirmed... it was a problem with the time shift.
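For anyone hitting the same thing, a sketch of the post-boot sanity check that would have caught it (plain Linux tools, nothing varnish-specific; the ntpdate server is just an example):

    # Compare the hardware clock with the system clock right after boot.
    hwclock --show
    date
    # Step the clock once at boot, before starting varnish.
    ntpdate pool.ntp.org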
Hello
What are your experiences using varnish on multi-CPU systems on
Linux/FreeBSD?
My experience on Linux on an 8-core machine is that varnish never gets
more than 20% of all CPUs; only when it is overloaded does it take all
the CPU (but performance drops).
Vaclav Bilek
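For what it's worth, a sketch of the thread-pool knobs usually tried first on many-core boxes; one pool per core is a common rule of thumb, and all numbers here are assumptions rather than measured advice:

    varnishd -a :80 -f /etc/varnish/default.vcl \
        -p thread_pools=8 \
        -p thread_pool_min=100 \
        -p thread_pool_max=2000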
Hello
I have got a problem using varnish in front of a webserver cluster running
around 20K req/s; when I tried to put varnish (4 load-balanced machines,
8 CPU cores each, 32 GB RAM) in front of this farm, it serves fine for a
few seconds, but then messages like this appear in syslog:
kernel: TCP: drop open request
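"TCP: drop open request" usually means the SYN backlog is overflowing. A sketch of the knobs typically raised in that case (values are illustrative):

    # Larger kernel SYN/accept queues.
    sysctl -w net.ipv4.tcp_max_syn_backlog=8192
    sysctl -w net.core.somaxconn=8192
    # varnishd's own listen queue can be raised to match,
    # e.g. by starting it with -p listen_depth=4096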
Hello
Is it possible to set a subsecond value in bereq.connect_timeout?
Example:
set bereq.connect_timeout = 0.3
And how can I find out that the timeout was exceeded?
... the idea is that varnish is in front of an LVS cluster, and when one
LVS backend is too slow I want to restart the request on another LVS
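A sketch of the restart idea in 2.x VCL; that a failed backend connect surfaces as a 503 in vcl_error, and that your version allows restart there, are both my assumptions:

    sub vcl_error {
        if (obj.status == 503 && req.restarts < 2) {
            # Retry the request; a round-robin or random director
            # would then pick another LVS backend.
            restart;
        }
    }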
Hello
I work on one project, hosted on an LVS (Linux Virtual Server) cluster
(ca. 20 nodes), which handles 25K req/s and 800 Mbit/s at peak. I want
to use varnish to accelerate that.
Question:
Is there anyone using varnish at such request rates? If yes, what
hardware do you use?
Thanks for any comment.
Vasek