Re: nginx alone performs 2x better than haproxy-nginx

2012-05-01 Thread Pasi Kärkkäinen
On Mon, Apr 30, 2012 at 12:19:26PM +0200, Sebastien Estienne wrote:
 Hi Pasi,
 
 Do you know if Ubuntu 12.04 has these optimized drivers or not?
 

I think Canonical's developers are going to add the drivers in a later
update to the Ubuntu 12.04 packages; they are not yet in 12.04.

I saw some discussion about that from the Canonical guys on xen-devel.

-- Pasi

 thanx
 
 --
 Sebastien E.
 
 
 On 30 Apr 2012, at 11:06, Pasi Kärkkäinen pa...@iki.fi wrote:
 
  On Sun, Apr 29, 2012 at 06:18:52PM +0200, Willy Tarreau wrote:
  
  I'm using VPS machines from Linode.com, they are quite powerful. They're
  based on Xen. I don't see the network card saturated.
  
  OK I see now. There's no point searching anywhere else. Once again you're
  a victim of the high overhead of virtualization that vendors like to
  pretend is almost unnoticeable :-(
  
  As for nf_conntrack, I have iptables enabled with firewall rules on each
  machine; I stopped it on all involved machines and I still get those
  results. nf_conntrack is compiled into the kernel (it's a kernel provided by
  Linode), so I don't think I can disable it completely, just not use it (and
  not use any firewall between them).
  
  It's having the module loaded with its default settings that is harmful, so
  even flushing the rules will not change anything. Anyway, now I'm pretty
  sure that the overhead caused by the default conntrack settings is nothing
  compared with the overhead of Xen.
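  
  For reference, a minimal sketch of the kind of conntrack tuning meant here,
  assuming a kernel that exposes the usual nf_conntrack sysctls (the values
  are illustrative, not recommendations):
  
      # Raise the conntrack table size so the table itself is not
      # the bottleneck (defaults are small for a loaded proxy).
      sysctl -w net.netfilter.nf_conntrack_max=262144
      # Shorten the established-flow timeout from the 5-day default.
      sysctl -w net.netfilter.nf_conntrack_tcp_timeout_established=86400
      # Watch current table usage while benchmarking.
      watch -n1 cat /proc/sys/net/netfilter/nf_conntrack_count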
  
  Even if 6-7K is very low (for nginx directly), why is haproxy doing half
  of that?
  
  That's quite simple: it has two sides, so it must process twice the number
  of packets. Since you're virtualized, you're packet-bound. Most of the time
  is spent communicating with the host and with the network, so the more
  packets you process, the less performance you get. That's why you're seeing
  a 2x increase even with nginx when enabling keep-alive.
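  
  As an illustration, a minimal sketch of enabling client-side keep-alive in
  HAProxy 1.4 to cut the number of TCP handshakes per request (the section
  names and addresses are made up for the example):
  
      defaults
          mode http
          # Keep-alive towards clients, close towards servers:
          # far fewer packets per request on the client side.
          option http-server-close
          timeout connect 5s
          timeout client  30s
          timeout server  30s
  
      frontend web
          bind :80
          default_backend nginx_pool
  
      backend nginx_pool
          server ngx1 127.0.0.1:8080 check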
  
  I'd say that your numbers are more or less in line with a recent benchmark
  we conducted at Exceliance, which is summarized below (each time the
  hardware was running a single VM):
  

  http://blog.exceliance.fr/2012/04/24/hypervisors-virtual-network-performance-comparison-from-a-virtualized-load-balancer-point-of-view/
  
  (BTW you'll note that Xen was the worst performer here with 80% loss
  compared to native performance).
  
  
  Note that the Ubuntu 11.10 kernel lacks important drivers such as the
  Xen ACPI power management / cpufreq driver, so it's not able to use the
  better-performing CPU states. That driver was only merged into the recent
  upstream Linux 3.4 (-rc) kernels.
  Also, the xen-netback dom0 driver is still unoptimized in the upstream
  Linux kernel.
  
  Using RHEL5/CentOS5, SLES11, or OpenSuse as the Xen host/dom0 is a better
  idea for benchmarking today, because those have the fully optimized
  kernel/drivers. Upstream Linux will get the optimizations in small steps
  (per the Linux development model).
  
  Citrix XenServer 6 is using the optimized kernel/drivers, which explains
  the difference in the benchmark compared to Ubuntu Xen 4.1.
  
  I just wanted to highlight that.
  
  -- Pasi
  
  



Rate limiting based on backend response

2012-05-01 Thread Ben Hood
Hi,

I was wondering if HAProxy has the capability to rate limit HTTP POSTs
based on the response from the backend.

The clients identify themselves with a token passed as a query
parameter in the POST. I would like to implement the business logic
for calculating rate limits in my backend app. If the backend decides
that a limit has been breached for a particular client token, it would
respond with a certain non-2xx code and supply a TTL value expressing
the period of time for which the client will be throttled.

Hence I was wondering whether it is possible to configure HAProxy to
cache the fact that a given client token is to be throttled for the
period expressed by the TTL.
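
For reference, a rough sketch of how this could be approximated with a stick
table keyed on the token. Caveats: the directives below require HAProxy
versions newer than this thread (1.6+), the expire value is a fixed
table-wide TTL (HAProxy cannot read a per-response TTL from the backend),
and the backend/server names, the "token" parameter, and the 429 status are
assumptions for the example:

    backend app
        # One tracked entry per client token; "expire" acts as the TTL.
        stick-table type string len 64 size 100k expire 60s store gpc0
        # Track the token passed as a query parameter.
        http-request track-sc0 url_param(token)
        # Refuse requests for tokens the backend has already flagged.
        http-request deny deny_status 429 if { sc0_get_gpc0 gt 0 }
        # Flag the token when the backend signals throttling.
        http-response sc-inc-gpc0(0) if { status 429 }
        server app1 127.0.0.1:8080 check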

Any help is appreciated,

Cheers,

Ben