It seems that EC2 is too elastic. It also seems to be targeted by attackers more than real hosting solutions are.
One way to partially work around this is to reserve some IPs, so the chances that they were under attack before you got them are low. As for the performance problems, there is no way to control throughput. In the EU zone I could get around 70 MB/s, but it was unstable (not even speaking about haproxy).

________________________________
From: Brent Walker <[email protected]>
To: [email protected]; Alexander Staubo <[email protected]>
Sent: Mon, February 1, 2010 12:09:33 PM
Subject: Re: Tuning HAProxy on EC2 instances?

We are doing about the same amount of traffic on a CentOS AMI but using a small instance. It has been problem-free for more than 6 months of use.

On Mon, Feb 1, 2010 at 1:19 AM, Joe Williams <[email protected]> wrote:

> We use haproxy and EC2 instances as load balancers for our clusters. The
> tuning we use is pretty standard (somaxconn, nf_conntrack_max,
> tcp_fin_timeout, rmem_max, wmem_max, etc.) running vanilla Ubuntu AMIs.
> While EC2's instances and network have performance problems, it is
> possible to get reasonable reliability and performance from them. We push
> tens of Mbps through a single c1.medium without issues; not sure about
> beyond that.
>
> -Joe
>
> On 1/31/10 3:14 PM, Willy Tarreau wrote:
>
>> Hi Alexander,
>>
>> On Sun, Jan 31, 2010 at 11:36:02PM +0100, Alexander Staubo wrote:
>>
>>> Has anyone any experience tuning HAProxy for performance when running
>>> on Amazon EC2 instances? For example, are there any kernel parameters
>>> that should be tuned differently, or are some instance types better
>>> than others? Does HAProxy generally perform well on EC2?
>>
>> Well, last year I helped some guys in charge of a worldwide sports
>> event which was hosted there. The performance was terrible, completely
>> unstable. It was impossible to tune anything. Ping times would vary a
>> lot. It was impossible to know where the bottlenecks were, because
>> every machine in turn was showing limited performance without
>> necessarily having its CPU saturated. We noticed that the internal
>> network was at least partly faulty, because the observed network
>> congestion was not constant and moved between machines. Sometimes it
>> was almost impossible even to type in SSH. We also discovered that
>> when they bought new nodes, some of them were under massive attacks,
>> most likely because people who are attacked quickly drop the nodes
>> with the IPs that belong to them and create new ones, so the attacked
>> ones get picked up by the next customer... Finally they moved to a
>> real hosting company with real machines and real performance in order
>> to be able to participate in at least a small part of the event.
>>
>> From this experience, I think that for them, everything was virtual:
>> the machines, the network, the support, the availability, the
>> visitors, and finally the profit.
>>
>> I really can't say what you could tweak to improve quality. After
>> having spent 3 full nights working with them on their machines, no
>> sensible trend appeared whatever we did. I think the real knobs are
>> outside your scope, on the other side of the VM :-/
>>
>> Regards,
>> Willy
>
> --
> Name: Joseph A. Williams
> Email: [email protected]
> Blog: http://www.joeandmotorboat.com/
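For anyone wanting to reproduce the "pretty standard" tuning Joe mentions, those knobs correspond to sysctl keys along the following lines. The values below are illustrative guesses only; the thread names the parameters but gives no numbers, so tune them for your own workload:

```
# /etc/sysctl.conf fragment -- example values, NOT taken from this thread
net.core.somaxconn = 4096                # larger listen() backlog for the proxy
net.netfilter.nf_conntrack_max = 262144  # bigger conntrack table for many concurrent flows
net.ipv4.tcp_fin_timeout = 15            # reclaim FIN_WAIT sockets sooner
net.core.rmem_max = 16777216             # max socket receive buffer (bytes)
net.core.wmem_max = 16777216             # max socket send buffer (bytes)
```

Apply with `sysctl -p` (as root) after editing, and verify individual keys with `sysctl net.core.somaxconn`.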

