Garrett D'Amore wrote:
Andrew Gallatin wrote:
Garrett D'Amore wrote:

*) For tx, just keep the addresses in low space for now. There should be enough room to find a few hundred MB of VA space for packet buffers under the 4GB limit. I don't think we need to support gigabytes of these after all. When the pool is exhausted, the system could internally resort to bcopy.
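(As an aside: constraining a pool below 4GB is already expressible with a standard ddi_dma_attr_t. Here is a minimal sketch of such an attribute structure; everything past dma_attr_addr_hi is a placeholder a real driver would tune.)

#include <sys/ddi.h>
#include <sys/sunddi.h>

/* DMA attributes confining a buffer pool to the low 4GB. */
static ddi_dma_attr_t low_pool_dma_attr = {
	DMA_ATTR_V0,		/* dma_attr_version */
	0x0000000000000000ull,	/* dma_attr_addr_lo */
	0x00000000FFFFFFFFull,	/* dma_attr_addr_hi: stay under 4GB */
	0x00000000FFFFFFFFull,	/* dma_attr_count_max (placeholder) */
	0x1000,			/* dma_attr_align: page aligned */
	0x1,			/* dma_attr_burstsizes (placeholder) */
	0x1,			/* dma_attr_minxfer */
	0x00000000FFFFFFFFull,	/* dma_attr_maxxfer (placeholder) */
	0x00000000FFFFFFFFull,	/* dma_attr_seg (placeholder) */
	1,			/* dma_attr_sgllen */
	1,			/* dma_attr_granular */
	0			/* dma_attr_flags */
};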

So we cripple performance on good NICs to help crippled NICs? That's just so backwards. How about we use the pool, and resort to bcopy, only if the NIC cannot do 64-bit DMA?

I'd be OK with that... if there were a nice way to represent it. (Perhaps an "mblk_get_restricted_paddr()" or some such that does the copy if the mblk needs it.)
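A rough sketch of what such a helper might look like; nothing like this exists in the DDI today, and mblk_paddr() and copy_to_low_buffer() are invented for illustration (MBLKL() is the real macro from strsun.h):

#include <sys/stream.h>
#include <sys/strsun.h>		/* MBLKL() */

/*
 * Hypothetical: return a DMA address for mp's data that sits below
 * `limit', copying into a low bounce buffer only when it doesn't.
 */
uint64_t
mblk_get_restricted_paddr(mblk_t *mp, uint64_t limit)
{
	uint64_t pa = mblk_paddr(mp);		/* hypothetical lookup */

	if (pa + MBLKL(mp) <= limit)
		return (pa);			/* fast path: no copy */

	return (copy_to_low_buffer(mp));	/* hypothetical bcopy slow path */
}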

I was thinking more along the lines of some kind of setup function which took a ddi_dma_attr_t as an argument, so that it could know what the driver can handle. If the driver couldn't handle the resulting address, the framework would copy the data into something the driver could handle, and maybe increment a counter so the driver author could see what was happening.
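Something of this shape, purely illustrative (kmem_zalloc() and ddi_dma_attr_t are the real DDI pieces; mblk_paddr() and copy_to_low_pool() are invented names):

#include <sys/ddi.h>
#include <sys/sunddi.h>
#include <sys/kmem.h>
#include <sys/stream.h>

/* Per-driver state created once at attach time. */
typedef struct mblk_dma_state {
	ddi_dma_attr_t	mds_attr;	/* the driver's declared limits */
	uint64_t	mds_ncopies;	/* how often the slow path fired */
} mblk_dma_state_t;

mblk_dma_state_t *
mblk_dma_setup(const ddi_dma_attr_t *attrp)
{
	mblk_dma_state_t *mds = kmem_zalloc(sizeof (*mds), KM_SLEEP);

	mds->mds_attr = *attrp;
	return (mds);
}

/* Called per packet: copy only when the address is out of range. */
mblk_t *
mblk_dma_fit(mblk_dma_state_t *mds, mblk_t *mp)
{
	if (mblk_paddr(mp) <= mds->mds_attr.dma_attr_addr_hi)
		return (mp);		/* driver can DMA this directly */

	mds->mds_ncopies++;		/* visible to the driver author */
	return (copy_to_low_pool(mp));	/* bcopy fallback, hypothetical */
}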

If we use the 32-bit pool, then in the common case all NICs will get good performance. Exhaustion of the pool should ideally be a rare enough event that it can be slowpathed in either case, without a significant impact on performance.

I'm scared of that kind of "nobody could ever need more than 640K" thinking. That's what gets us stuck with weird performance landmines years down the road. Heck, my NIC alone supports 8 TX queues per port, with 4K elements each. Assuming everything is mapped for DMA, that's about 256MB right there for a dual-port NIC. I can only imagine that queues will get bigger, and then there are some NIC drivers which don't immediately free buffers after transmit, etc.
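(That figure works out if you assume each descriptor maps a page-sized 4KB buffer, which the message doesn't spell out: 2 ports x 8 queues x 4096 descriptors x 4KB = 256MB.)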

Drew
