Carsten Bormann wrote:

Also, when a UDP-checksum-elided packet traverses the backbone, it will be sent back into the lowpan without the elision optimization. Big deal, I'd say; not the case we are optimizing for.
Think "outsourcing the checksum computation to the ER".

I don't have a problem with that form of outsourcing as long as L2 has a checksum/CRC that is applied between the ER and the 6lowpan host and that is at least as strong as the UDP checksum.
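For concreteness, the UDP checksum being compared against here is the 16-bit one's-complement Internet checksum (RFC 1071). A minimal sketch of that computation:

```python
def inet_checksum(data: bytes) -> int:
    """16-bit one's-complement Internet checksum (RFC 1071)."""
    if len(data) % 2:          # pad odd-length input with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return ~total & 0xFFFF
```

By comparison, IEEE 802.15.4 frames carry a CRC-16 FCS, which catches error patterns (e.g., reordered 16-bit words) that this purely additive sum cannot, which is why "at least as strong" is plausible for that L2.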

That would be a reasonable layer violation, since L3 can check the L2 capabilities (implicitly or explicitly).

But the case at hand is where L3 needs to depend on L7 capabilities, and furthermore the ER doesn't have an exact way to determine whether the application has those capabilities; the ER needs to employ some heuristic guesswork.

Yes, in a greenfield world.
The problem is that the correspondent nodes are random legacy PCs and such stuff.
How do you handle "UDP with a MIC" in Windows 3.1?
(Yes, that is a rhetorical question, since 3.1 did not have IPv6.)

Don't all the OSes on the legacy nodes on the backbone, and on the nodes that will run as gateways, have support for raw sockets?
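To make the raw-socket idea concrete: the application (or a shim on the legacy node) would build the UDP header itself, computing the checksum over the IPv6 pseudo-header (RFC 2460, Section 8.1), before handing the datagram to a raw socket. A sketch under those assumptions; the addresses and port numbers below are made up for illustration:

```python
import socket
import struct

def inet_checksum(data: bytes) -> int:
    # 16-bit one's-complement Internet checksum (RFC 1071)
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def udp6_datagram(src: bytes, dst: bytes, sport: int, dport: int,
                  payload: bytes) -> bytes:
    """Build a UDP header + payload with the checksum computed over
    the IPv6 pseudo-header (RFC 2460, Section 8.1)."""
    length = 8 + len(payload)
    # UDP header with the checksum field zeroed for the computation
    udp = struct.pack("!HHHH", sport, dport, length, 0) + payload
    # pseudo-header: src, dst, upper-layer length, zeroes, next header (17)
    pseudo = src + dst + struct.pack("!I3xB", length, socket.IPPROTO_UDP)
    csum = inet_checksum(pseudo + udp) or 0xFFFF  # zero is sent as 0xFFFF
    return udp[:6] + struct.pack("!H", csum) + udp[8:]

# Sending would then use a raw socket (privileged; not run here):
#   s = socket.socket(socket.AF_INET6, socket.SOCK_RAW, socket.IPPROTO_UDP)
#   s.sendto(udp6_datagram(src, dst, 1234, 5683, b"hi"), ("2001:db8::2", 0))
```

Which is to say: it can be done, but every legacy correspondent needs either privileged raw-socket access or kernel help (e.g., the RFC 3542 IPV6_CHECKSUM socket option) to get a valid checksum onto the wire.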

PS.: I'm not saying the whole ordeal is worth it. I'm just saying it can be made to work.

Your definition of "work" might be different from mine ;-)
It doesn't look robust when it is based on heuristics of this form.

   Erik
_______________________________________________
6lowpan mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/6lowpan
