On 19 Nov 2012, at 08:13, Dan Van Der Ster wrote:

> So, the key sysctl to set to enable large receive buffers is 
> net.core.rmem_max.

This should actually be the only thing that you need to change. The transmit 
queue length shouldn't matter, as the kernel is _much_ more efficient at 
putting UDP packets onto the wire than Rx is at generating them. At some point 
during the 1.4.x series we also started changing the transmit buffer size, but 
it isn't clear to me that this has any benefit at all.
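For anyone following along at home, raising that limit is a one-liner (a 
sketch assuming a Linux host with root access; the 16 MB value is the one 
discussed below):

```shell
# Raise the ceiling on what applications may request via SO_RCVBUF
# (value in bytes; 16 MiB here). Takes effect immediately.
sysctl -w net.core.rmem_max=16777216

# To persist the setting across reboots, add it to /etc/sysctl.conf:
echo 'net.core.rmem_max = 16777216' >> /etc/sysctl.conf
```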

>> So, setting a UDP buffer of 8Mbytes from user space is _just_ enough to 
>> handle 4096 incoming RX packets on a standard ethernet. However, it doesn't 
>> give you enough overhead to handle pings and other management packets. 
>> 16Mbytes should be plenty providing that you don't
> 
> We use 256 server threads and found experimentally in our environment that to 
> achieve zero packet loss we need around 12MBytes buffers. So we went with 
> 16MB to give a little extra headroom.

Yeah, exactly how much memory you require will depend on how efficient your 
listener thread is at removing packets from the queue. The numbers I provided 
are for the worst case scenario, where the listener stalls for a whole 
round-trip's worth of packets. Some kernels will also only require 2048 bytes 
per UDP packet, rather than 4096. It's the kind of thing that's best determined 
experimentally, sadly.
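To make the arithmetic concrete, here is the worst-case sizing as a quick 
calculation (a sketch; the 4096-packet window and the 2048- or 4096-byte 
per-packet charge are the figures from the discussion above):

```python
# Worst-case receive-buffer demand: a stalled listener must absorb a
# full round-trip's worth of in-flight packets, and the kernel charges
# a fixed amount of buffer space per queued UDP packet.
window_packets = 4096

for per_packet_bytes in (2048, 4096):
    needed = window_packets * per_packet_bytes
    print(f"{per_packet_bytes} bytes/packet -> {needed // (1024 * 1024)} MiB")
    # -> 2048 bytes/packet -> 8 MiB
    # -> 4096 bytes/packet -> 16 MiB
```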

What I had originally hoped to gain from this exercise was a way of 
automatically determining the ideal receive buffer size for a system, and 
warning the user if we're unable to set a buffer of that size. It isn't clear 
that this is going to be possible, as there is entirely too much kernel magic 
going on between the size provided by the application, and what is actually 
consumed per packet.
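One thing that _is_ checkable from user space is whether the kernel honoured 
the request at all: on Linux, setsockopt(SO_RCVBUF) never fails but silently 
clamps the value to net.core.rmem_max, and a subsequent getsockopt() reports 
double the granted size (the kernel reserves the other half for bookkeeping). 
A sketch along those lines, assuming a Linux host:

```python
import socket

def granted_rcvbuf(requested):
    """Request `requested` bytes of UDP receive buffer and return what
    the kernel reports back.  On Linux the setsockopt() call does not
    fail: the value is silently clamped to net.core.rmem_max, and
    getsockopt() returns twice the granted size."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, requested)
        return s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
    finally:
        s.close()

requested = 16 * 1024 * 1024
granted = granted_rcvbuf(requested)
if granted < 2 * requested:
    print(f"warning: asked for {requested} bytes, kernel granted {granted // 2}")
```

That detects a too-small rmem_max, but as noted above it still can't tell you 
how many packets that buffer will actually hold.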

Cheers,

Simon
_______________________________________________
OpenAFS-info mailing list
[email protected]
https://lists.openafs.org/mailman/listinfo/openafs-info