James Carlson wrote:
> Garrett D'Amore writes:
>> I can imagine that the only way for hardware to get this "right" is to
>> examine the IP protocol field. Since that is not part of the data that
>> is covered by the checksum, it implies that there is some classification
>> done in either software or hardware, to pick which interpretation of
>> the zero result to use.
>>
>> It shocks me that the IETF would have been so cavalier about this
>> seemingly minor difference.
>
> I don't think it was cavalier; it was deliberate. There were
> performance-oriented people who once thought that disabling the UDP
> transport-layer checksum was a good idea (in particular for NFS),
> because "of course" the Ethernet CRC is good enough for the wire and
> there are never any other possible problems in real systems, such as
> (say) broken switches that corrupt the data. That viewpoint (a minor
> and temporary performance hack to the detriment of correctness) held
> out in UDP and, because protocols live long, we're stuck with it.
>
> Similarly, I don't think it's a great idea to push the end-to-end
> checksum generation or verification down into hardware that's further
> away from the endpoint, nor is it good in the long term to add more
> complexity to the system, but that's exactly what the feature we're
> talking about does.
>
> I think it's a fair bet to say we'll be back here again.
It is sentiments like this that make me wish for a way to control
which features are enabled, individually and per NIC, without having
to edit driver.conf files.

Darren
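For anyone following along, the "zero result" Garrett refers to at the top
of the thread falls out of how the Internet checksum is defined: it is the
one's-complement of a one's-complement sum, and a computed 0x0000 only
occurs when the sum folds to 0xFFFF. RFC 768 says a UDP checksum field of
zero means "no checksum was computed", so a computed zero has to be sent
as 0xFFFF; TCP has no such rule. The sketch below is not the Solaris
implementation, just a minimal C illustration of that substitution; the
names inet_cksum() and udp_cksum() are made up for the example.

	#include <stdint.h>
	#include <stddef.h>
	#include <stdio.h>

	/*
	 * One's-complement Internet checksum (RFC 1071 style) over a
	 * buffer, starting from a caller-supplied partial sum (e.g. the
	 * pseudo-header).  Byte-at-a-time, unoptimized, for clarity.
	 */
	static uint16_t
	inet_cksum(const void *buf, size_t len, uint32_t partial)
	{
		const uint8_t *p = buf;
		uint32_t sum = partial;

		while (len > 1) {
			sum += ((uint32_t)p[0] << 8) | p[1];
			p += 2;
			len -= 2;
		}
		if (len == 1)
			sum += (uint32_t)p[0] << 8;	/* pad odd byte */

		while (sum >> 16)			/* fold carries */
			sum = (sum & 0xFFFF) + (sum >> 16);

		return ((uint16_t)~sum);
	}

	/*
	 * UDP over IPv4: a computed checksum of zero must be transmitted
	 * as 0xFFFF, because zero in the header means "no checksum"
	 * (RFC 768).  TCP has no such substitution, which is why offload
	 * hardware has to look at the IP protocol field to know which
	 * rule applies.
	 */
	static uint16_t
	udp_cksum(const void *udp_seg, size_t len, uint32_t pseudo_hdr_sum)
	{
		uint16_t ck = inet_cksum(udp_seg, len, pseudo_hdr_sum);

		return (ck == 0 ? 0xFFFF : ck);
	}

	int
	main(void)
	{
		/* Toy payload; pseudo-header sum of 0 for illustration. */
		uint8_t payload[] = { 0x12, 0x34, 0x56, 0x78 };

		printf("checksum = 0x%04x\n",
		    udp_cksum(payload, sizeof (payload), 0));
		return (0);
	}

Hardware that generates the checksum itself has to apply (or skip) that
final substitution per protocol, which is exactly the classification step
Garrett was describing.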
