Hi,

Yes on the LD_PRELOAD.
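For reference, the two usual ways to wrap haproxy in Onload look roughly
like this (the config path below is just a placeholder, not my exact setup):

    # preload the Onload library directly
    LD_PRELOAD=libonload.so haproxy -f /etc/haproxy/haproxy.cfg

    # or use the onload wrapper script that ships with OpenOnload
    onload haproxy -f /etc/haproxy/haproxy.cfg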
Yes, I have one node running with a Solarflare SFN8522 (2-port 10 Gbit/s),
currently without Onload enabled. It has a 17.5K http_request_rate and ~26%
server interrupts on cores 0 and 1, where the NIC IRQs are bound. And I have
a similar node with an Intel X710 (2-port 10 Gbit/s). It has a 26.1K
http_request_rate and ~26% server interrupts on cores 0 and 1, where its NIC
IRQs are bound. Both nodes have 1 socket, an Intel Xeon CPU E3-1280 v6, and
32 GB RAM. So without Onload the Solarflare performs worse than the X710,
since it produces the same amount of SI load with less traffic.

As a side note, I haven't compared the ethtool settings between the Intel
and the Solarflare; I'm just running with the defaults of both cards.

I currently have a support ticket open with the Solarflare team about the
issues I mentioned in my previous mail. If they sort that out, I can perhaps
set up a test server, provided I can manage to free one up. Then we can do
some synthetic benchmarks with a set of parameters of your choosing.
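In case it helps when we compare numbers later, this is roughly how the IRQ
placement and SI load can be checked, and how the ethtool defaults of the two
cards could be captured for a diff (interface names are placeholders):

    # which IRQs belong to the NIC and which cores they are pinned to
    grep eth0 /proc/interrupts
    cat /proc/irq/<irq-number>/smp_affinity_list

    # per-core softirq (%soft) load while the test traffic is running
    mpstat -P ALL 1

    # capture the driver defaults of each card for a later comparison
    ethtool -c eth0 > coalesce.txt    # interrupt coalescing settings
    ethtool -g eth0 > rings.txt       # RX/TX ring sizes
    ethtool -k eth0 > offloads.txt    # offload features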
Regards,
/Elias

On Wed, Dec 20, 2017 at 9:48 AM, Willy Tarreau <[email protected]> wrote:
> Hi Elias,
>
> On Tue, Dec 19, 2017 at 02:23:21PM +0100, Elias Abacioglu wrote:
> > Hi,
> >
> > I recently bought a solarflare NIC with (ScaleOut) Onload / OpenOnload
> > to test it with HAproxy.
> >
> > Have anyone tried running haproxy with solarflare onload functions?
> >
> > After I started haproxy with onload, this started spamming on the
> > kernel log:
> > Dec 12 14:11:54 dflb06 kernel: [357643.035355] [onload] oof_socket_add_full_hw: 6:3083 ERROR: FILTER TCP 10.3.54.43:4147 10.3.20.116:80 failed (-16)
> > Dec 12 14:11:54 dflb06 kernel: [357643.064395] [onload] oof_socket_add_full_hw: 6:3491 ERROR: FILTER TCP 10.3.54.43:39321 10.3.20.113:80 failed (-16)
> > Dec 12 14:11:54 dflb06 kernel: [357643.081069] [onload] oof_socket_add_full_hw: 3:2124 ERROR: FILTER TCP 10.3.54.43:62403 10.3.20.30:445 failed (-16)
> > Dec 12 14:11:54 dflb06 kernel: [357643.082625] [onload] oof_socket_add_full_hw: 3:2124 ERROR: FILTER TCP 10.3.54.43:62403 10.3.20.30:445 failed (-16)
> >
> > And this in haproxy log:
> > Dec 12 14:12:07 dflb06 haproxy[21145]: Proxy ssl-relay reached system memory limit at 9931 sockets. Please check system tunables.
> > Dec 12 14:12:07 dflb06 haproxy[21146]: Proxy ssl-relay reached system memory limit at 9184 sockets. Please check system tunables.
> > Dec 12 14:12:07 dflb06 haproxy[21145]: Proxy HTTP reached system memory limit at 9931 sockets. Please check system tunables.
> > Dec 12 14:12:07 dflb06 haproxy[21145]: Proxy HTTP reached system memory limit at 9931 sockets. Please check system tunables.
> >
> > Apparently I've hit the max hardware filter limit on the card.
> > Does anyone here have experience in running haproxy with onload features?
>
> I've never got any report of any such test, though in the past I thought
> it would be nice to run such a test, at least to validate the perimeter
> covered by the library (you're using it as LD_PRELOAD, that's it ?).
>
> > Mind sharing insights and advice on how to get a functional setup?
>
> I really don't know what can reasonably be expected from code trying to
> partially bypass a part of the TCP stack to be honest. From what I've
> read a long time ago, onload might be doing its work in a not very
> intrusive way but judging by your messages above I'm having some doubts
> now.
>
> Have you tried without this software, using the card normally? I mean,
> 2 years ago I had the opportunity to test haproxy on a dual-40G setup
> and we reached 60 Gbps of forwarded traffic with all machines in the
> test bench reaching their limits (and haproxy reaching 100% as well),
> so for me that proves that the TCP stack still scales extremely well
> and that while such acceleration software might make sense for a next
> generation NIC running on old hardware (eg: when 400 Gbps NICs start
> to appear), I'm really not convinced that it makes any sense to use
> them on well supported setups like 2-4 10Gbps links which are very
> common nowadays. I mean, I managed to run haproxy at 10Gbps 10 years
> ago on a core2-duo! Hardware has evolved quite a bit since :-)
>
> Regards,
> Willy

