Francesco DiMambro wrote:
>
> If GLDv3 starts locking down DMA for drivers then I can see the benefit,
> but it doesn't: looking at the Solaris code for the NIU soft LSO, the
> drivers are still expected to lock down and sync buffers. My guess is
> that soft LSO will not give much benefit; I have a Neptune card, so I
> can try it and let you know.
> I'll wait for you to integrate UDP LSO, then I'll get the firmware guy
> to implement it for our chip (likely we'll have to do it for M$ as
> well), and then I'll use that as soon as you make it available.
> Meanwhile, please hold onto MDT.
>     Can we pursue a contract so you know that I'm using MDT for the
> time being?
>   

I really, really want MDT to go away.  It adds a fair bit of complexity
that *all* NIC drivers in the stack wind up paying for, even ones that
don't support MDT.  For 10G at ordinary MTUs, the per-packet processing
overhead of checking whether MDT is in use is non-trivial.  At smaller
packet sizes, it becomes very significant.  (I did a lot of work a while
back trying to increase the number of packets per second that Solaris
could process.  We're far, far short of the theoretical maximum for 10G
last I looked, and we only got the 1G number up on Neptune systems using
a bunch of Niagara cores and evenly balanced "benchmark special"
traffic.  The problem is one of limited CPU, and every cycle counts.
Because of lock contention, the cycle counts sometimes accrue much
faster than you might otherwise expect.)
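
To make that concrete, here's a rough sketch (the xmit_* helpers are
made up, this is not the actual ip/dld code) of the kind of per-packet
branch the transmit path ends up carrying once MDT is even a
possibility:

#include <sys/types.h>
#include <sys/stream.h>

/* hypothetical helpers, for illustration only */
static void xmit_multidata(queue_t *, mblk_t *);
static void xmit_mblk(queue_t *, mblk_t *);

/*
 * Hypothetical sketch only.  The point is that every packet pays for
 * the "is this a multidata message or a plain mblk?" test, even on
 * drivers that never negotiate MDT.
 */
static void
xmit_one(queue_t *q, mblk_t *mp)
{
        if (DB_TYPE(mp) == M_MULTIDATA) {
                xmit_multidata(q, mp);  /* MDT path (hypothetical) */
        } else {
                xmit_mblk(q, mp);       /* ordinary per-mblk path (hypothetical) */
        }
}

One branch looks cheap, but at millions of packets per second, and with
the extra code footprint it drags along, it adds up.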

It's not my choice (and maybe I have little influence here), but if I
had my druthers, I'd reject a contract to keep MDT alive.  (A contract
with an explicit clause invalidating it for OpenSolaris, or for Solaris
versions after S10, might not be objectionable, though.)

That said, I agree that it makes sense for GLDv3 to perform some of the 
DMA steps on behalf of drivers, particularly if optimizations like LSO 
are in use.
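
Something along these lines is what I have in mind (a rough sketch with
a made-up framework_tx_bind() name; only the ddi_dma_* routines are
real): the framework does the bind and sync once, instead of every
driver reimplementing it for its LSO buffers.

#include <sys/types.h>
#include <sys/ddi.h>
#include <sys/sunddi.h>

/*
 * Rough sketch of a hypothetical framework helper; the ddi_dma_*
 * calls are the standard DDI ones, everything else is made up.
 */
static int
framework_tx_bind(dev_info_t *dip, ddi_dma_attr_t *attrp, caddr_t buf,
    size_t len, ddi_dma_handle_t *dmah, ddi_dma_cookie_t *cookiep,
    uint_t *ccountp)
{
        if (ddi_dma_alloc_handle(dip, attrp, DDI_DMA_DONTWAIT, NULL,
            dmah) != DDI_SUCCESS)
                return (DDI_FAILURE);

        if (ddi_dma_addr_bind_handle(*dmah, NULL, buf, len,
            DDI_DMA_WRITE | DDI_DMA_STREAMING, DDI_DMA_DONTWAIT, NULL,
            cookiep, ccountp) != DDI_DMA_MAPPED) {
                ddi_dma_free_handle(dmah);
                return (DDI_FAILURE);
        }

        /* make sure the device sees the data before descriptors are posted */
        (void) ddi_dma_sync(*dmah, 0, len, DDI_DMA_SYNC_FORDEV);
        return (DDI_SUCCESS);
}

The driver would then just consume the cookies when filling its
descriptor ring, with the unbind and handle free done symmetrically on
tx completion.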

    -- Garrett
