Erik Nordmark wrote:

Darren Reed wrote:

Of course this would allow base (in esballoc/desballoc) to be
ignored.



I don't understand why you'd want to change that part.



If we're creating virtual NICs then doesn't it stand to reason that those
virtual NICs also have virtual buffers on them?


Yes, but what does this have to do with base in esballoc/desballoc? That was the part I didn't understand: why you wanted to change it.


Maybe changing esballoc/desballoc is the wrong approach.

I was thinking about how you could do this with as little change as possible to existing drivers, and/or how to make it impossible for them to allocate buffers in a manner that falls outside this policy.

Following that thought further, if I'm sharing a NIC between various
virtual NICs, why shouldn't I be able to reserve a percentage of the
NIC's buffers for a given purpose? If a NIC only has a small number
of buffers, then I might want to exercise more control over who does
or does not get on-card buffers.


For the receive buffers that the hardware DMAs into, the only way to control what they are used for is to use separate receive descriptor rings, plus some classification rules in the NIC to determine which packets land on which ring.

If you want to support that, you definitely need to modify the driver to use multiple receive descriptor rings. Changing it to use allocb or *esballoc differently could be done at the same time as those modifications.


Ah, right.

Part of where I'm coming from is that a naive systems person using Solaris might put 4GB of RAM into a box and expect 3GB of that to be available for network buffers in an environment where the box is primarily forwarding traffic. If the network drivers are only using [d]esballoc then that isn't going to happen. Currently, is there any way someone might learn about that besides reading the source code?

Darren

_______________________________________________
networking-discuss mailing list
[email protected]
