Thanks for the test patch. I could not see significant improvements on virtual machines; I will try again this week on real servers.
I tried the following buffer sizes: 65536 (64 KB), 262144 (256 KB), 524288 (512 KB), 1048576 (1 MB), 4194304 (4 MB), 16777216 (16 MB), 33554432 (32 MB), 67108864 (64 MB), 134217728 (128 MB). I changed the buffer size for both Solr and Jetty; the change was visible in the logs:
2023-02-25 19:01:42.201 DEBUG (qtp1812823171-22) [   ] o.a.s.c.u.FastWriter checking OS env for BUFSIZE => java.lang.NumberFormatException: null
2023-02-25 19:01:42.201 INFO  (qtp1812823171-22) [   ] o.a.s.c.u.FastWriter FastWriter.BUFSIZE=4194304
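My reading of those two log lines (just a guess at what the patch does; the method name and the fallback are mine for illustration, not the actual patch code) is that the writer checks an OS environment variable named BUFSIZE and keeps the configured size when it is not set:

    // Hypothetical reconstruction of the lookup suggested by the log:
    // System.getenv("BUFSIZE") returns null when the variable is unset, so
    // Integer.parseInt throws "NumberFormatException: null" (the DEBUG line)
    // and the writer keeps the size configured elsewhere (the INFO line, 4 MB here).
    static int resolveBufSize(int configuredSize) {
        try {
            return Integer.parseInt(System.getenv("BUFSIZE"));
        } catch (NumberFormatException e) {
            return configuredSize;
        }
    }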
I noticed that increasing the buffer reduces %wait on the core down to 0, and with the core at 100% load the speed sometimes rose to 520 Mbit/s (I had not seen such numbers before, but it is still far from gigabit+). Adding indent=false and/or wt=csv increases the speed a bit more (+30-50 Mbit/s), while wt=xml slows it down by about 80 Mbit/s.
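On the client side I measured throughput with a small stand-alone reader roughly like the one below (host, collection, query and row count are placeholders, not my exact setup); it just streams the response, discards the payload and reports Mbit/s:

    import java.io.InputStream;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ThroughputCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder endpoint and query; wt=csv and indent=false as in the tests above.
            String url = "http://localhost:8983/solr/mycollection/select"
                       + "?q=*:*&rows=1000000&wt=csv&indent=false";
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();

            long start = System.nanoTime();
            HttpResponse<InputStream> response =
                    client.send(request, HttpResponse.BodyHandlers.ofInputStream());
            long bytes = 0;
            byte[] buf = new byte[1 << 20];           // 1 MB client-side read buffer
            try (InputStream in = response.body()) {
                int n;
                while ((n = in.read(buf)) != -1) {
                    bytes += n;                        // count bytes, discard payload
                }
            }
            double seconds = (System.nanoTime() - start) / 1e9;
            System.out.printf("%d bytes in %.2f s => %.0f Mbit/s%n",
                    bytes, seconds, bytes * 8 / 1e6 / seconds);
        }
    }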
 
What else in the data chain could be a bottleneck? The OS and network are fine (the network interface and kernel are tuned for 10 Gigabit and tested with iperf), the disk is a ramdisk, and the processor should not be the limit either, except that a 4.3 GHz core is apparently not enough to push data out of Solr in a single thread faster than 0.5 Gigabit. Jetty itself is able to serve a file at high speed: with large buffers wget now reports 2023-02-25 21:51:37 (6.25 Gb/s), and FastWriter.BUFSIZE is now large too. What is the next possible bottleneck in the Solr software architecture to explore?
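One more thing I plan to rule out is the raw cost of single-threaded character encoding and buffering itself, independent of Solr: a crude upper-bound test along the lines below (everything goes to a null sink, the row content and counts are arbitrary) should show how many Mbit/s one 4.3 GHz core can push through plain java.io at all, before any query or response-writer work is added on top:

    import java.io.BufferedWriter;
    import java.io.OutputStream;
    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import java.nio.charset.StandardCharsets;

    public class SingleThreadWriteBench {
        public static void main(String[] args) throws Exception {
            // Discarding sink: no network, no disk, only char->byte encoding and buffering.
            OutputStream sink = OutputStream.nullOutputStream();
            int bufSize = 4 * 1024 * 1024;            // same order as FastWriter.BUFSIZE above
            Writer out = new BufferedWriter(
                    new OutputStreamWriter(sink, StandardCharsets.UTF_8), bufSize);

            String row = "1234567,some,typical,csv,row,with,a,few,fields\n"; // ASCII, 1 byte/char
            long rows = 10_000_000L;
            long start = System.nanoTime();
            for (long i = 0; i < rows; i++) {
                out.write(row);
            }
            out.flush();
            double seconds = (System.nanoTime() - start) / 1e9;
            System.out.printf("%.0f Mbit/s of pure single-thread serialization%n",
                    rows * row.length() * 8 / 1e6 / seconds);
        }
    }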
 
Thank you for your help. I hope that, if these are not inherent algorithmic limitations, we will be able to figure this out and make Solr even better, especially since, with the advent of PCIe 5.0, NVMe, DDR5 and Wi-Fi 7, speeds close to 10 Gigabit are already commonplace, yet many end-user workloads still depend on single-threaded/single-core performance and do not get significant benefit from the new hardware speeds...
 
Best Regards,
