Hi!

If you were running with hyperthreading, then it's very likely that the
> working cores were polluted by other activity on their siblings. In our
> appliances we manage to reach high perf even with HT left enabled, just
> because we are very careful to bind only the first thread of each real
> core. Older CPUs were very slow when HT was enabled, but recent ones are
> doing an impressive job (you can tell it's impressive by the fact that I
> stopped blackmouthing this technology, and it takes a lot :-)).
>
>
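Your trick of binding only the first thread of each real core can be
scripted from sysfs; here's a minimal sketch (Linux-only; the sysfs path and
the helper name are my own, not from your setup):

```shell
#!/bin/sh
# Keep only the first logical CPU of every HT sibling pair, given lines
# in the format of thread_siblings_list (e.g. "0,6").
first_threads() {
    sort -u | cut -d, -f1 | sort -n -u
}

# On a live box, feed it the real sysfs files:
#   cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | first_threads
printf '0,6\n1,7\n0,6\n2,8\n' | first_threads
```

With the sample input above it prints 0, 1, 2 - the first thread of each
physical core, which are the only CPUs one would then bind to.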
Very strange, but your experience is enormous, so I'd better listen to you.
As I wrote, before disabling HT I tested all possible combinations of
bindings:
(haproxy, irq) = (cpu#n, cpu#m) in (0,1), (0,2), (0,3), ..., (0,11), (1,0),
(1,2), (1,3), ..., (1,11) - a total of 138 combinations, testing each one for
three minutes with two httperf instances running, and recording the counters
from haproxy's stats socket. If the problem was not HT, then what? There
should have been at least one good combination among these, so even if I'm
totally misunderstanding the idea of CPU affinity, that test gave me every
chance to find a good combination. Anyway, I have a few things left to try,
including disabling MSI, another NIC, and so on.
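For reference, each (haproxy, irq) combination boils down to a pair of
commands like the following; eth0, the masks, and the helper name are
assumptions for the sketch, not my exact setup:

```shell
#!/bin/sh
# Extract a device's IRQ number from /proc/interrupts-style input
# (first field, trailing colon stripped).
irq_of() {
    awk -v dev="$1" '$0 ~ dev { sub(":", "", $1); print $1; exit }'
}

# The (haproxy, irq) = (0, 2) combination would then be, as root:
#   taskset -p 0x1 "$(pidof haproxy)"                                   # CPU#0 -> mask 0x1
#   echo 4 > "/proc/irq/$(irq_of eth0 < /proc/interrupts)/smp_affinity" # CPU#2 -> mask 0x4
printf ' 42:  1000 2000  PCI-MSI-edge  eth0\n' | irq_of eth0
```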


> > So it is impossible to
> > get this 2xHexacore Xeon @2.66 run haproxy faster then my desktop
> > (which is simple core i5 - it showed 85k session rate without any
> > tuning at all).
>
> I'm realizing another thing : if it's 2 sockets, it's possible that
> core 0 is one of them and core 1 the other one. You should really
> ensure that all the low-latency processing is performed by the same
> physical CPU in order to avoid inter-CPU communications. /proc/cpuinfo
> will tell you what core is where. And to make it simple : use two real
> cores of the same physical CPU sharing the same L2 cache (if possible)
> or L3. That way you have the most processing power with limited cache
> misses.
>

Yes, according to /proc/cpuinfo, CPU#0 has physical_id = 0, CPU#1 has
physical_id = 1, and so on: 0, 1, 0, 1... My first idea was to bind haproxy
to CPU#0 and process IRQs on CPU#2, which should be two cores of the same
physical CPU in one socket. And it looks like those cores should share the
same L2.
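The cpu-to-socket mapping can be double-checked with a one-liner over
/proc/cpuinfo (Linux-only sketch; the field positions assume the usual
cpuinfo layout):

```shell
#!/bin/sh
# Print "cpuN -> socket M" for each logical CPU; on the box in question
# this should come out as sockets 0, 1, 0, 1, ...
awk '/^processor/ { p = $3 } /^physical id/ { print "cpu" p " -> socket " $4 }' /proc/cpuinfo

# Whether two cores actually share a cache can be read from sysfs, e.g.:
#   cat /sys/devices/system/cpu/cpu0/cache/index2/shared_cpu_list   # L2 sharers
```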



>
> > Tommorow I'm going to try running to two haproxy processes and
> > distributing irqs on second core. Also I'm going to try to remove MSI
> > support when loading bnx2. I have almost no hope to see 100k here, but
> > I'm just curios:)
>
> At least now you know that in case of a DDoS, you can just put your
> desktop machine in front of your expensive servers to protect them :-)
>
>

In case of a DDoS I should read haproxy's mailing list :) I really
appreciate your help.


> Regards,
> Willy
>
>
