Darren Reed wrote:

Part of where I'm coming from is that the naive systems person using Solaris might put 4GB of RAM into a box and expect 3GB of that to be available for network buffers in an environment where the box is primarily only forwarding traffic. If the network drivers are only using [d]esballoc then that isn't going to happen. Currently, is there any way someone might learn about that besides reading source code?

Even if the box is only forwarding at IP, it isn't clear that having 3 GB of buffers is needed. Normally the buffering in a router is a small fraction of the aggregate bandwidth*delay product for all the flows going through the router; 24 Gigabits of buffers would match the BW*delay product of a 10 Gigabit NIC and an average 2.4 second RTT, so I think that is way more than is ever needed.
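As a quick sanity check on those figures (this is just the arithmetic implied above, nothing more; the 10 Gbit/s and 2.4 s numbers are the ones from the paragraph):

```python
# Back-of-the-envelope: buffering that would match the BW*delay
# product of a 10 Gbit/s NIC with a very pessimistic 2.4 s average RTT.
bandwidth_bps = 10 * 10**9   # 10 Gbit/s link rate, in bits per second
avg_rtt_s = 2.4              # assumed average round-trip time, seconds

buffer_bits = bandwidth_bps * avg_rtt_s
print(buffer_bits / 10**9)   # -> 24.0 (Gigabits of buffering)
```

Real RTTs are orders of magnitude smaller, which is why 24 Gbit (3 GB) of buffering is an extreme upper bound.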

If it turns out that this much buffering is actually needed, and we want to avoid copying data around or making too many ddi_dma calls, then the driver would need to keep about that many receive buffers outstanding.
With 1500-byte buffers that would be 2 million receive descriptors.
I don't know what the hardware limits are on the number of receive descriptors, but 2M sounds like a lot. My point is that reading and modifying the driver might not be sufficient; we might have to modify the NIC hardware.
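The 2 million figure follows directly from dividing 3 GB of buffering by the buffer size (a sketch of the same arithmetic, with one descriptor per buffer assumed):

```python
# How many MTU-sized receive buffers does 3 GB (24 Gbit) of buffering imply?
total_bytes = 24 * 10**9 // 8   # 24 Gbit of buffering = 3 GB
buf_size = 1500                 # one Ethernet-MTU-sized buffer per descriptor
descriptors = total_bytes // buf_size
print(descriptors)              # -> 2000000 receive descriptors
```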

For less extreme numbers it might be sufficient to be able to control the number of receive buffers/descriptors the driver uses. If we can't make the drivers automatically pick a good number, then perhaps we should look at adding some dladm set-prop properties to control this.

   Erik
_______________________________________________
networking-discuss mailing list
[email protected]