On 12/14/2013 04:47 AM, Lennart Poettering wrote:
> On Fri, 13.12.13 22:16, Karol Lewandowski (lmc...@gmail.com) wrote:
>> On Fri, Dec 13, 2013 at 03:45:36PM +0100, Lennart Poettering wrote:
>>> On Fri, 13.12.13 12:46, Karol Lewandowski (k.lewando...@samsung.com) wrote:
>> One of the problems I see, though, is that no matter how deep I make
>> the queue (`max_dgram_qlen') I still see the process sleeping on
>> send() way earlier than the configured queue depth would suggest.
>
> It would be interesting to find out why this happens. I mean, there are
> three parameters here I could think of that matter: the qlen, SO_SNDBUF
> on the sender, and SO_RCVBUF on the receiver (though the latter two might
> actually change the same value on AF_UNIX? or maybe one of the latter
> two is a NOP on AF_UNIX?). If any of them reaches the limit then the
> sender will necessarily have to block.
>
> (SO_SNDBUF and SO_RCVBUF can also be controlled via
> /proc/sys/net/core/rmem* and ../wmem*... For testing purposes it might
> be easier to play around with these and set them to ludicrously high
> values...)

That's it. While the journal code tries to set the buffer size to 8 MB
via the SO_SNDBUF/SO_RCVBUF options, the kernel silently caps these at
wmem_max/rmem_max. On the machines I've tested, the respective values
are quite small - around 150-200 kB each. (A quick test program
demonstrating the clamping is attached below.)

Increasing these limits reduced context switches considerably -
preliminary tests show that I can now queue thousands of messages (~5k)
without problems. I will test this thoroughly in the next few days.

I do wonder what the rationale behind such low limits is...

Thanks a lot!
Karol
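P.S. In case anyone wants to reproduce this, here is a minimal,
self-contained sketch (my own test code, not the actual journald
sources) that requests an 8 MB send buffer on an AF_UNIX datagram
socket and prints what the kernel actually granted:

/* Ask for an 8 MB send buffer, then read back the effective size.
 * For an unprivileged process the kernel silently caps the request
 * at /proc/sys/net/core/wmem_max (and stores twice the requested
 * value to account for its own bookkeeping overhead). */
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
        int fd, requested = 8 * 1024 * 1024, effective = 0;
        socklen_t len = sizeof(effective);

        fd = socket(AF_UNIX, SOCK_DGRAM, 0);
        if (fd < 0) {
                perror("socket");
                return 1;
        }

        if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF,
                       &requested, sizeof(requested)) < 0)
                perror("setsockopt(SO_SNDBUF)");

        if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF,
                       &effective, &len) == 0)
                printf("requested %d bytes, kernel granted %d\n",
                       requested, effective);

        close(fd);
        return 0;
}

Running it before and after raising the limit (e.g. "echo 8388608 >
/proc/sys/net/core/wmem_max") shows that the granted size tracks
wmem_max. A process with CAP_NET_ADMIN can also bypass the cap
entirely via SO_SNDBUFFORCE.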