Hi Roamer
    The hardware I'm using supports TCP LSO, and the driver supports both
MDT and LSO for TCP.
Yunsong (Roamer) Lu wrote:
> Hi Frank,
> You mentioned that you got better MDT TCP performance than LSO, so
> could you share some of the data you got? What are the performance
> numbers and CPU utilizations with and without MDT or LSO?
The performance is line rate with Linux as the link partner; the
difference is in CPU utilization, where MDT is better by about 8%. Not
much really, but like I said, if the hardware could go above 64K then
it's likely LSO would overtake MDT.
With UDP, see below. If you had a working UDP LSO I would use it, and
then give you a real benchmark between UDP MDT and UDP LSO.
>
> The complexity of MDT is not only for the drivers, but mainly in the 
> stack.
What sort of complexities are you running into?
> Actually, we have a SOFT LSO implementation that simulates hardware
> LSO inside the driver and can deliver at least the same performance
> gain as MDT.
It can't possibly simulate hardware LSO. With hardware LSO there's only
one call to set up a DMA mapping, just like MDT, and there's potentially
only one descriptor for the whole message encapsulated in the LSO send,
so it's PCI bus friendly. The hardware also generates the per-packet
headers, which is friendly to both the CPU and the system bus, since the
headers are neither built by the CPU nor transferred across any system
bus.
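Just to make that concrete, here's roughly the shape of a hardware LSO
send as seen from the driver. Everything prefixed my_ is made up purely
for illustration, and I'm assuming a single contiguous mblk and a single
DMA cookie for brevity; only the DDI calls are the real interfaces:

#include <sys/types.h>
#include <sys/errno.h>
#include <sys/stream.h>
#include <sys/strsun.h>
#include <sys/ddi.h>
#include <sys/sunddi.h>

/* Hypothetical sketch of a hardware-LSO send; my_* names are invented. */
static int
my_lso_send(my_tx_ring_t *ring, mblk_t *mp, uint32_t mss)
{
        ddi_dma_cookie_t cookie;
        uint_t ncookies;
        size_t len = MBLKL(mp);         /* whole LSO message, up to 64K */

        /* One DMA bind for the entire message, just like MDT. */
        if (ddi_dma_addr_bind_handle(ring->tx_dma_handle, NULL,
            (caddr_t)mp->b_rptr, len, DDI_DMA_WRITE | DDI_DMA_STREAMING,
            DDI_DMA_DONTWAIT, NULL, &cookie, &ncookies) != DDI_DMA_MAPPED)
                return (ENOMEM);

        /*
         * Potentially one descriptor: the hardware is given the MSS and
         * segments the payload itself, generating the TCP/IP headers for
         * every packet on the wire.  No per-packet header is built by
         * the CPU and none crosses the PCI bus.
         */
        my_fill_lso_desc(ring, cookie.dmac_laddress, cookie.dmac_size, mss);

        my_ring_kick(ring);             /* tell the chip to go */
        return (0);
}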
    The Soft LSO allows one call to the driver per message and
eliminates the per-packet traversal of the stack, which is good, but
that is only a small part of what MDT provides. Soft LSO still does
per-packet DMA mapping/syncing, which MDT has reduced to once for the
header buffer and once for the data buffer, covering as many packets as
the stack can give in one call; after that, the per-packet setup is a
tight loop that just fills in descriptors. The two models look roughly
like the sketch below.
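The difference is just where the bind/sync calls sit relative to the
per-packet loop. Again only a sketch, with the same caveat that the
my_* types and helpers are invented (same includes as the sketch above):

/* Soft LSO (or plain per-packet Tx): one bind and one sync per packet. */
static void
my_softlso_fill(my_tx_ring_t *ring, my_pkt_t *pkt, int npkts)
{
        ddi_dma_cookie_t ck;
        uint_t nck;
        int i;

        for (i = 0; i < npkts; i++) {
                (void) ddi_dma_addr_bind_handle(ring->pkt_hdl[i], NULL,
                    pkt[i].base, pkt[i].len,
                    DDI_DMA_WRITE | DDI_DMA_STREAMING, DDI_DMA_DONTWAIT,
                    NULL, &ck, &nck);
                (void) ddi_dma_sync(ring->pkt_hdl[i], 0, pkt[i].len,
                    DDI_DMA_SYNC_FORDEV);
                my_fill_desc(ring, ck.dmac_laddress, pkt[i].len);
        }
}

/*
 * MDT: the header buffer and the payload buffer are each bound once
 * (and likewise synced once per buffer), covering every packet in the
 * multidata message; the per-packet work is just descriptor arithmetic
 * in a tight loop.
 */
static void
my_mdt_fill(my_tx_ring_t *ring, caddr_t hdr, size_t hdr_len,
    caddr_t pld, size_t pld_len, my_pkt_t *pkt, int npkts)
{
        ddi_dma_cookie_t hck, pck;
        uint_t nck;
        int i;

        (void) ddi_dma_addr_bind_handle(ring->hdr_hdl, NULL, hdr, hdr_len,
            DDI_DMA_WRITE | DDI_DMA_STREAMING, DDI_DMA_DONTWAIT, NULL,
            &hck, &nck);
        (void) ddi_dma_addr_bind_handle(ring->pld_hdl, NULL, pld, pld_len,
            DDI_DMA_WRITE | DDI_DMA_STREAMING, DDI_DMA_DONTWAIT, NULL,
            &pck, &nck);

        for (i = 0; i < npkts; i++) {
                my_fill_desc(ring, hck.dmac_laddress + pkt[i].hdr_off,
                    pkt[i].hdr_len);
                my_fill_desc(ring, pck.dmac_laddress + pkt[i].pld_off,
                    pkt[i].pld_len);
        }
}

That per-packet bind/sync is the cost Soft LSO keeps and MDT avoids.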
> And the LSO implementation, both in the stack and in the driver, will
> be much simpler than MDT.
If it's done in the stack then yes, it will be simpler for the driver,
since it's the same old Tx algorithm, but it will lose the benefit of
one DMA mapping covering multiple packets.
> There is also a plan to implement SOFT LSO in GLDv3 so that
> non-LSO-capable drivers may take advantage of it without having to
> deal with the interface. That will address your concern with your
> hardware that doesn't support H/W UDP LSO.
If GLDv3 starts locking down DMA for the drivers then I can see the
benefit, but it doesn't; looking at the Solaris code for the NIU soft
LSO, the drivers are still expected to lock down and sync the buffers
themselves. My guess is soft LSO will not give much benefit. I have a
Neptune card, so I can try it and let you know.
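To be specific about what I mean: as I understand the GLDv3 Tx
interface, a driver's mc_tx entry point looks roughly like the sketch
below (my_* names are made up), and even with soft LSO done above it
the driver is still handed a chain of ordinary pre-segmented mblks,
each of which it has to bind and sync itself:

/* Sketch of a GLDv3 mc_tx entry point; my_* names are hypothetical. */
static mblk_t *
my_m_tx(void *arg, mblk_t *mp_chain)
{
        my_softc_t *sc = arg;
        mblk_t *mp, *next;

        for (mp = mp_chain; mp != NULL; mp = next) {
                next = mp->b_next;
                mp->b_next = NULL;

                /* my_send_one() does its own DMA bind/sync per packet. */
                if (my_send_one(sc, mp) != 0) {
                        mp->b_next = next;      /* ring full: hand the */
                        return (mp);            /* rest back to GLDv3  */
                }
        }
        return (NULL);
}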
I'll wait for you to integrate UDP LSO; then I'll get the firmware guy
to implement it for our chip (likely we'll have to do it for M$ as
well), and I'll use it as soon as you make it available. Meanwhile,
please hold onto MDT.
    Can we pursue a contract so that you have it on record that I'm
using MDT for the time being?

    Frank
>
> Thanks,
>
> Roamer
>
> Francesco DiMambro wrote:
>> Hi Erik
>> Erik Nordmark wrote:
>>> Francesco DiMambro wrote:
>>>> Hi Darren
>>>>    That would be UDP LSO, right? Solaris 10 update 4 has
>>>> LSO for TCP, but not UDP. The key point I want to make here
>>>> is that the adapter I'm working with also has LSO for TCP; it
>>>> doesn't have the UDP part yet, but that's no big deal because
>>>> neither does Solaris. I implemented MDT and it works for UDP and
>>>> TCP, and in a side-by-side comparison of LSO vs. MDT for TCP on
>>>> the same card, MDT wins.
>>>>    I want to get the maximum performance for my adapter and
>>>> Solaris, and presently MDT is proving the best way to do it. Having
>>>> come to this conclusion I thought I should share it with the alias,
>>>> because I need it to be maintained in Solaris; I was not expecting
>>>> to hear it was being EOL'd. That's just premature.
>>> Frank,
>>>
>>> why did you choose MDT over LSO?
>> When I started the development there was no LSO on Solaris 10, so I
>> put MDT into the driver; then I got distracted with Windows driver
>> development for 3 years. When I returned to Solaris, LSO had become
>> available, so I added LSO as well.
>>> Did you need to run on Solaris releases that don't support LSO but
>>> do support MDT?
>> We're aiming to support Solaris 8, 9 and 10, and following activity
>> on Solaris 11, on both SPARC and x86.
>>> Or did you benchmark MDT and LSO and found that MDT was more efficient?
>> I benchmarked MDT vs. LSO and MDT was better on our card, but I
>> attribute that to limited hardware capability; if the hardware could
>> go up to 1M LSO then I'd expect LSO to overtake MDT. We're not there
>> yet, hence the need for MDT.
>> In the case of UDP there's no LSO in Solaris, so the benchmark was
>> against the plain old one-packet-at-a-time model. With that, MDT was
>> twice as fast, but was limited by the receiver, which couldn't deal
>> with the volume of packets. (Working on that next.)
>>> There is a fair bit (understatement) of complexity in the MDT
>>> implementation, and that is why I'd like to understand the above.
>> The complexity in the driver is equal to the complexity of the
>> one-packet-at-a-time model. The difference is familiarity; in other
>> words, most people can write a one-packet-at-a-time Tx and layer LSO
>> on top of that, no-brainer ;-). But MDT is a new data structure, so
>> it needs a different way to Tx; once you know how to do that, you
>> can handle the complexity. It's just like what M$ did in the
>> transition from NDIS_PACKET to NBL/NBs: everyone who knew how to
>> write an NDIS_PACKET Tx had to re-learn how to do it for NBL/NBs.
>> I'm not asking you to go that far, just leave the option to allow
>> MDT, with its ability to send more than one packet to the driver in
>> one call.
>>
>>     thanks
>>     Frank
>>> Thanks,
>>>     Erik
>>
>

_______________________________________________
networking-discuss mailing list
[email protected]
