Garrett D'Amore wrote:
> Furthermore, since it's apparently easy to deal with restrictions on RX, 
> it's only a problem for drivers that have this restriction on the TX side.
> 
> I still believe that a much better solution would be to offer drivers a 
> way to "Punt" on a packet.  E.g. HME/QFE cannot offload checksum for 
> tiny packets due to a chip bug.  I suspect other chips might have 
> problems offloading checksum for packets that have other physical 
> properties (crossing a page boundary for jumbo frames, or maybe frames 
> that need to use scatter/gather DMA?)
> 
> While we're talking about checksum, there is still also the case that 
> most (all?) Sun NICs don't do UDP checksum offload properly... that is 
> they cannot deal properly with the case where the IP checksum result is 
> 0.  UDP requires that the value 0xffff be stashed instead (which is 
> different from TCP), with the result that non-conformant frames may be 
> put on the wire.
> 
> To deal with this, I believe that we really need to separate partial 
> checksum calculation for TCP vs. UDP.

I don't disagree with any of that, although working around driver or 
hardware bugs as opposed to fixing them makes me uncomfortable.  In some 
cases, though, it's not possible to fix those bugs.
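
To make the "punt" idea concrete (purely as an illustration; none of
these names are existing GLD/mac or driver interfaces, and the 64-byte
limit is just a stand-in for whatever a given chip's real restriction
is), the per-packet decision on the TX side amounts to something like:

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical size below which the chip mis-checksums frames. */
#define	HW_CKSUM_MIN_LEN	64

/*
 * Decide whether hardware checksum offload is safe for this frame.
 * Returning false is the "punt": the checksum would then be computed
 * in software rather than by setting offload bits in the TX descriptor.
 */
static bool
tx_can_hw_cksum(size_t pktlen)
{
	return (pktlen >= HW_CKSUM_MIN_LEN);
}

The architectural question is whether that decision stays buried inside
each driver or whether the framework gives drivers a standard way to
hand such a packet back for software checksumming.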

Anyway, it sounds to me like there are general architectural issues with 
the current checksum offload capability that are unrelated to Cathy's 
fast-track.  In addition to that, I don't necessarily see the direct 
dependency between HCKSUM_VLANCKSUM and the rest of the fast-track 
contents.  I'd feel more comfortable dealing with these checksum issues 
separately.

Cathy, is there really a need for UV to deliver HCKSUM_VLANCKSUM?  I 
don't see how it's necessarily in scope (i.e., it's an existing problem 
needing a solution, and it can be solved at any time), and therefore we 
could solve it independently of the "margin" issue you're solving as part 
of UV and the specific fast-track that was submitted yesterday.

-Seb
