Re: Yahoo! Traffic Server

2009-11-03 Thread Poul-Henning Kamp
In message <2e51048e-58cd-4599-b4be-85cdac78f...@develooper.com>, Ask Bjørn Hansen writes:
>I thought this might be of interest:
>
>   http://wiki.apache.org/incubator/TrafficServerProposal

It probably is -- for some people.

My impression of Inktomi is that it is a much more comprehensive
solution than Varnish; it does SMTP, NNTP and much else.

If I were to start Yahoo, HotMail or a similar app today, I would
seriously consider starting out on Inktomi, because of the scalability,
horizontal and vertical, it offers.

Varnish, on the other hand, only does HTTP, but aims to do it with
ultimate performance and flexibility, by pushing the technological
envelope as far as it can be pushed.

But for me, personally, the main difference is one of size:

Inktomi:
    *.h      93920
    *.c      50892
    *.hh         0
    *.cc    350199
    --------------
            495011

Varnish:
    *.h       5957
    *.c      38871
    *.hh         0
    *.cc         0
    --------------
             44828

We have a rhyming saying in Denmark that goes: "En lille og vågen er
bedre end en stor og doven", which roughly translates to: "Small and
alert is better than big and inert."


But I'm happy to see the Inktomi code out in the free air; I've heard
much good about it over the years from my FreeBSD cronies at Yahoo.

And because I like healthy competition, I particularly welcome
Inktomi, because quite frankly: competing with squid is not much
fun... :-)

Poul-Henning

-- 
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
p...@freebsd.org | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Yahoo! Traffic Server

2009-11-03 Thread Tollef Fog Heen
]] Ask Bjørn Hansen 

| I thought this might be of interest:
| 
|   http://wiki.apache.org/incubator/TrafficServerProposal

Yeah, I've been following it loosely for a while now.

A quick feature comparison (just based off the incubator page):

# Scalable on SMP (TS is a hybrid thread + event processor)

This might be an interesting direction to take Varnish in at some point
as slow clients are a problem for us now.  Being able to punt them off
to a pool of event-based threads would be useful.

# Extensible: TS has a feature rich plugin API

We don't have this, though you can get some of it by inlining C.
There's a proposal out for how to do it.  Whether we end up doing it
remains to be seen, though.
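
To give a feel for the inline-C escape hatch mentioned above, here is a
minimal sketch (Varnish 2.x VCL; the syslog call is only an
illustration of calling arbitrary C, not something Varnish ships):

C{
    #include <syslog.h>
}C

sub vcl_deliver {
    C{
        /* Everything between C{ and }C is pasted verbatim into the C
           code the VCL compiler generates, so any C is fair game. */
        syslog(LOG_INFO, "varnish delivered an object");
    }C
}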

«We have benchmarked Traffic Server to handle in excess of 35,000 RPS on
a single box.»

I know Kristian has benchmarked Varnish to about three times that,
though with 1-byte objects, so it's not really anything resembling a
real-life scenario.  I think Sky has been serving ~64k requests/s using
a synthetic workload.

# Porting to more Unix flavors (currently we only support Linux)

We have this already, at least to some degree.

# Add missing features, e.g., CARP, HTCP, ESI and native IPv6

We have native IPv6 and ESI, at least; CARP and HTCP we don't have.
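
For reference, ESI in Varnish is switched on per object from vcl_fetch;
a minimal sketch (in practice you would normally guard this with a
check on req.url or the Content-Type):

sub vcl_fetch {
    # Ask Varnish to parse the fetched object for <esi:include/> tags
    esi;
}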

Their code seems to be C++ (they depend on the STL).

They support HTTPS, which we don't.

-- 
Tollef Fog Heen 
Redpill Linpro -- Changing the game!
t: +47 21 54 41 73


Varnish stuck on stresstest/approved by real traffic

2009-11-03 Thread Václav Bílek
Hi

When testing Varnish throughput and scalability I have found some
strange Varnish behavior.

I am using 2.0.4 with the cache_acceptor_epoll.c patch:
http://varnish.projects.linpro.no/ticket/492

When testing how Varnish scales with the number of clients, I am able
to get it into a state where it stops responding, although in
varnishlog it still looks as if it is handling requests from the past.

In detail: I start a few instances of ab with keepalive enabled and
with the -r option (don't exit on socket receive errors), with a
concurrency of approximately 2000:

ab -r -k -n 10 -c 2000 "http://127.0.0.1/"

It does not matter whether the load comes from localhost or from other
hosts (even with each instance distributed to a different host). When I
run enough instances that the number of connections on Varnish rises
above roughly 10K, Varnish gets stuck and stops responding. In
varnishstat, "Client requests received" drops to 0, and at that point
Varnish starts to spawn new threads, but not dramatically (for example
from 2.4K to 3K).

Then I stop all the ab instances and try to fetch something:

/opt/httpd2.2/bin/ab -k -n 1 -c 1 http://127.0.0.1/

but I get a timeout. When I try again after a few minutes, when Varnish
is certainly doing nothing, I get a timeout too.
In that state, a request like this appears in varnishlog every 30 s,
but nothing else:
10342 SessionOpen  c 127.0.0.1 28988 0.0.0.0:80
10342 ReqStart c 127.0.0.1 28988 1170523327
10342 RxRequestc GET
10342 RxURLc /
10342 RxProtocol   c HTTP/1.0
10342 RxHeader c Connection: Keep-Alive
10342 RxHeader c Host: 127.0.0.1
10342 RxHeader c User-Agent: ApacheBench/2.3
10342 RxHeader c Accept: */*
10342 VCL_call c recv lookup
10342 VCL_call c hash hash
10342 VCL_call c miss fetch
10342 Backend  c 20041 default default
10342 ObjProtocol  c HTTP/1.1
10342 ObjStatusc 200
10342 ObjResponse  c OK
10342 ObjHeaderc Date: Tue, 03 Nov 2009 10:28:33 GMT
10342 ObjHeaderc Server: Apache
10342 ObjHeaderc Last-Modified: Sat, 20 Nov 2004 20:16:24 GMT
10342 ObjHeaderc ETag: fc76-2c-3e9564c23b600
10342 ObjHeaderc Content-Type: text/html
10342 TTL  c 1170523327 RFC 5 1257244113 0 0 0 0
10342 VCL_call c fetch
10342 VCL_info c XID 1170523327: obj.prefetch (-30) less than ttl
(5.00406), ignored.
10342 VCL_return   c deliver
10342 Length   c 44
10342 VCL_call c deliver deliver
10342 TxProtocol   c HTTP/1.1
10342 TxStatus c 200
10342 TxResponse   c OK
10342 TxHeader c Server: Apache
10342 TxHeader c Last-Modified: Sat, 20 Nov 2004 20:16:24 GMT
10342 TxHeader c ETag: fc76-2c-3e9564c23b600
10342 TxHeader c Content-Type: text/html
10342 TxHeader c Content-Length: 44
10342 TxHeader c Date: Tue, 03 Nov 2009 10:28:33 GMT
10342 TxHeader c X-Varnish: 1170523327
10342 TxHeader c Age: 0
10342 TxHeader c Via: 1.1 varnish
10342 TxHeader c Connection: keep-alive
10342 ReqEnd   c 1170523327 1257244113.149692297
1257244113.153866768 526.840385675 0.004138470 0.36001

There is nothing strange in syslog or varnishlog; the only thing that
recovers Varnish is restarting it.

What is worse, this has already happened under real production traffic.

Is there anything in my settings that I should check?


My settings:
accept_fd_holdoff          50 [ms]
acceptor                   default (epoll, poll)
auto_restart               on [bool]
backend_http11             on [bool]
between_bytes_timeout      60.00 [s]
cache_vbe_conns            off [bool]
cc_command                 exec cc -fpic -shared -Wl,-x -o %o %s
cli_buffer                 8192 [bytes]
cli_timeout                15 [seconds]
client_http11              off [bool]
clock_skew                 10 [s]
connect_timeout            0.40 [s]
default_grace              10
default_ttl                60 [seconds]
diag_bitmap                0x0 [bitmap]
err_ttl                    0 [seconds]
esi_syntax                 0 [bitmap]
fetch_chunksize            128 [kilobytes]
first_byte_timeout         60.00 [s]
group                      nogroup (65534)
listen_address             0.0.0.0:80
listen_depth               40960 [connections]
log_hashstring             off [bool]
log_local_address          off [bool]
lru_interval               2 [seconds]
max_esi_includes           5 [includes]
max_restarts               4 [restarts]
obj_workspace              8192 [bytes]
overflow_max               300 [%]
ping_interval              3 [seconds]
pipe_timeout               60 [seconds]
prefer_ipv6                off [bool]
purge_dups                 off [bool]
purge_hash                 on [bool]
rush_exponent              3 [requests per request]
send_timeout               5 [seconds]
sess_timeout               1 [seconds]
sess_workspace             16384 [bytes]
session_linger             0 [ms]
shm_reclen                 255 [bytes]
shm_workspace              8192 [bytes]
srcaddr_hash               1049 [buckets]
srcaddr_ttl                0 [seconds]
thread_pool_add_delay      2 [milliseconds]
thread_pool_add_threshold  2 [requests]

Re: Varnish stuck on stresstest/approved by real traffic

2009-11-03 Thread Václav Bílek


Václav Bílek wrote:
> Hi
>
> When testing Varnish throughput and scalability I have found some
> strange Varnish behavior.
>
> I am using 2.0.4 with the cache_acceptor_epoll.c patch:
> http://varnish.projects.linpro.no/ticket/492

Without the patch there is no hang, but performance drops dramatically
with more than a few thousand connections.

Vaclav Bilek
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc


Re: Virtualhost issue

2009-11-03 Thread Vladimir Dyachenko
Folks,

I have changed the configuration to the following (mostly based on the
MediaWiki example). Any idea why it does not restart? Still empty logs.

[r...@net2 ~]# varnishd -V
varnishd (varnish-2.0.4)
Copyright (c) 2006-2009 Linpro AS / Verdens Gang AS

[r...@net2 ~]# cat /etc/varnish/default.vcl

# set default backend if no server cluster specified
backend default {
    .host = "www.google.ru";
    .port = "80";
}
backend google-bg {
    .host = "www.google.bg";
    .port = "80";
}
backend google-pl {
    .host = "www.google.pl";
    .port = "80";
}

# access control list for purge: open to only localhost and other local nodes
acl purge {
    "localhost";
}

# vcl_recv is called whenever a request is received
sub vcl_recv {
    # Serve objects up to 2 minutes past their expiry if the backend
    # is slow to respond.
    set req.grace = 120s;

    # Use our round-robin apaches cluster for the backend.
    if (req.http.host ~ "^(www.)?google.bg$") {
        set req.http.host = "www.google.bg";
        set req.backend = google-bg;
    }
    elsif (req.http.host ~ "^(www.)?google.pl$") {
        set req.http.host = "www.google.pl";
        set req.backend = google-pl;
    }
    else {
        set req.backend = default;
    }

    # This uses the ACL action called purge. Basically if a request to
    # PURGE the cache comes from anywhere other than localhost, ignore it.
    if (req.request == "PURGE") {
        if (!client.ip ~ purge) {
            error 405 "Not allowed.";
        }
        lookup;
    }

    # Pass any requests that Varnish does not understand straight to the backend.
    if (req.request != "GET" && req.request != "HEAD" &&
        req.request != "PUT" && req.request != "POST" &&
        req.request != "TRACE" && req.request != "OPTIONS" &&
        req.request != "DELETE") {
        pipe;   /* Non-RFC2616 or CONNECT which is weird. */
    }

    # Pass anything other than GET and HEAD directly.
    if (req.request != "GET" && req.request != "HEAD") {
        pass;   /* We only deal with GET and HEAD by default */
    }

    # Pass requests from logged-in users directly.
    if (req.http.Authorization || req.http.Cookie) {
        pass;   /* Not cacheable by default */
    }

    # Pass any requests with the If-None-Match header directly.
    if (req.http.If-None-Match) {
        pass;
    }

    # Force lookup if the request is a no-cache request from the client.
    if (req.http.Cache-Control ~ "no-cache") {
        purge_url(req.url);
    }

    # normalize Accept-Encoding to reduce vary
    if (req.http.Accept-Encoding) {
        if (req.http.User-Agent ~ "MSIE 6") {
            unset req.http.Accept-Encoding;
        } elsif (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } elsif (req.http.Accept-Encoding ~ "deflate") {
            set req.http.Accept-Encoding = "deflate";
        } else {
            unset req.http.Accept-Encoding;
        }
    }

    lookup;
}

sub vcl_pipe {
    # Note that only the first request to the backend will have
    # X-Forwarded-For set.  If you use X-Forwarded-For and want to
    # have it set for all requests, make sure to have:
    # set req.http.connection = "close";
    # This is otherwise not necessary if you do not do any request rewriting.
    set req.http.connection = "close";
}

# Called if the cache has a copy of the page.
sub vcl_hit {
    if (req.request == "PURGE") {
        purge_url(req.url);
        error 200 "Purged";
    }
    if (!obj.cacheable) {
        pass;
    }
}

# Called if the cache does not have a copy of the page.
sub vcl_miss {
    if (req.request == "PURGE") {
        error 200 "Not in cache";
    }
}

# Called after a document has been successfully retrieved from the backend.
sub vcl_fetch {
    # set minimum timeouts to auto-discard stored objects
#   set obj.prefetch = -30s;
    set obj.grace = 120s;
    if (obj.ttl < 48h) {
        set obj.ttl = 48h;
    }
    if (!obj.cacheable) {
        pass;
    }
    if (obj.http.Set-Cookie) {
        pass;
    }
#   if (obj.http.Cache-Control ~ "(private|no-cache|no-store)") {
#       pass;
#   }
    if (req.http.Authorization && !obj.http.Cache-Control ~ "public") {
        pass;
    }
}

Any hint is most welcome.

Regards.

Vladimir


Re: Virtualhost issue

2009-11-03 Thread andan andan
2009/11/3 Vladimir Dyachenko <vlad.dyache...@gmail.com>:
> Folks,
>
> I have changed the configuration to the following (mostly based on the
> MediaWiki example). Any idea why it does not restart? Still empty logs.
>
> [r...@net2 ~]# varnishd -V
> varnishd (varnish-2.0.4)
> Copyright (c) 2006-2009 Linpro AS / Verdens Gang AS
>
> [r...@net2 ~]# cat /etc/varnish/default.vcl
>
> # set default backend if no server cluster specified
> backend default {
>     .host = "www.google.ru";
>     .port = "80";
> }

www.google.ru resolves to multiple IPs; you will get strange behaviour
from varnish with a backend like that :)

To see errors, in your init script change:

daemon ${DAEMON} $DAEMON_OPTS -P ${PIDFILE} > /dev/null 2>&1

to:

daemon ${DAEMON} $DAEMON_OPTS -P ${PIDFILE} > /whereveryouwant 2>&1

Or simply remove "> /dev/null 2>&1" to see the errors on the console.

Kind Regards.


Re: Virtualhost issue

2009-11-03 Thread Vladimir Dyachenko
Hey,

Thanks for reply.

> www.google.ru resolves to multiple IPs; you will get strange behaviour
> from varnish with a backend like that :)


That was just to replace the actual hostname :)

Anyway, thanks - I've found one silly thing: I had two concurrent
versions of Varnish installed (one from source, one from RPM).

Everything works as expected now. Also, it turns out we can't use the
string '-' (dash) in backend names; one must use '_' (underscore).
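
In case it helps anyone else, this is roughly what the renamed backend
ends up looking like (a minimal sketch, with the same placeholder
hostnames as in my earlier mail):

backend google_bg {
    .host = "www.google.bg";
    .port = "80";
}

sub vcl_recv {
    if (req.http.host ~ "^(www.)?google.bg$") {
        set req.backend = google_bg;
    }
}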

Thanks all.

Cheers.

Vladimir


Re: Varnish stuck on stresstest/approved by real traffic

2009-11-03 Thread Rogério Schneider
Václav,

It is good to know that you have also seen better performance with the
patch from ticket 492.

I am replying to this e-mail just to confirm your scenario. I have
already been in the same situation, with Varnish getting stuck after a
big wave of simultaneous users.

What did I do to solve the problem? Add more machines.

It would be great if we could find the answer to this problem together.

About the unpatched version, I was not able to see whether it freezes
like the patched one, because it hangs with another error before we can
reach the maximum number of simultaneous users, as in this report:
http://varnish.projects.linpro.no/ticket/573
That problem by itself is what led me to make this patch/port of the
epoll acceptor, since I use Linux.

Regards,
Rogério Schneider

2009/11/3 Václav Bílek <v.bi...@1art.cz>:
>
> Václav Bílek wrote:
>> Hi
>>
>> When testing Varnish throughput and scalability I have found some
>> strange Varnish behavior.
>>
>> I am using 2.0.4 with the cache_acceptor_epoll.c patch:
>> http://varnish.projects.linpro.no/ticket/492
>
> Without the patch there is no hang, but performance drops dramatically
> with more than a few thousand connections.
>
> Vaclav Bilek

-- 
Rogério Schneider

MSN: stoc...@hotmail.com
GTalk: stoc...@gmail.com
Skype: stockrt
http://stockrt.github.com


Re: Yahoo! Traffic Server

2009-11-03 Thread Rogério Schneider
> I know Kristian has benchmarked Varnish to about three times that,
> though with 1-byte objects, so it's not really anything resembling a
> real-life scenario.  I think Sky has been serving ~64k requests/s
> using a synthetic workload.

Just to add my results: I have reached 75k reqs/s with a mix of half
full-body (HTTP 200) objects of about 4 kB each and half 304 responses.
With only 304s I have reached 104k reqs/s with Varnish (patched from
ticket 492, with threads pre-created and linger enabled, if I remember
correctly).

Regards,
-- 
Rogério Schneider

MSN: stoc...@hotmail.com
GTalk: stoc...@gmail.com
Skype: stockrt
http://stockrt.github.com