Pascal Thubert (pthubert) wrote:
Also, when a UDP-checksum-elided packet traverses the backbone, it will
be sent back into the lowpan without the elision optimization. Big
deal, I'd say; not the case we are optimizing for.
Think "outsourcing the checksum computation to the ER".
I don't have a problem with that form of outsourcing as long as L2 has
a checksum/CRC, applied between the ER and the 6lowpan host, that is at
least as strong as the UDP checksum.
[Pascal] The stronger CRC we are talking about is in fact end-to-end, so it
encompasses the compressed segment. The Edge router cannot know whether
that happens; it can only trust that the source node behaved and
compressed the checksum only if it was entitled to per our spec.
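For concreteness, this is the recomputation a decompressor (e.g. at the ER) has to perform when the UDP checksum was elided: rebuild the checksum over the IPv6 pseudo-header plus the UDP header and payload (RFC 2460 section 8.1). The sketch below is illustrative only; the function names and framing are not from any implementation discussed in this thread.

```python
import struct

UDP_PROTO = 17

def ones_complement_sum(data: bytes) -> int:
    """16-bit one's-complement sum over data (odd length padded with zero)."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return total

def udp6_checksum(src: bytes, dst: bytes, sport: int, dport: int,
                  payload: bytes) -> int:
    """Recompute the UDP checksum a decompressor must restore when the
    checksum was elided on the compressed segment."""
    length = 8 + len(payload)
    # IPv6 pseudo-header: src, dst, 32-bit upper-layer length,
    # 3 zero bytes, next-header (17 for UDP).
    pseudo = src + dst + struct.pack("!I3xB", length, UDP_PROTO)
    # UDP header with the checksum field set to zero for the computation.
    udp_hdr = struct.pack("!HHHH", sport, dport, length, 0)
    csum = (~ones_complement_sum(pseudo + udp_hdr + payload)) & 0xFFFF
    return csum or 0xFFFF  # a computed zero is transmitted as all-ones
```

The point of contention above is exactly that this reconstruction is only safe if some integrity check at least as strong as the original checksum covered the packet end-to-end while the field was absent.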

You ignored the "L2" in my sentence. It is quite key.



That would be a reasonable layer violation, since L3 can check the L2
capabilities (implicitly or explicitly).
[Pascal] We cannot rely on the L2 MIC in mesh-under, because the L2 information that the ER sees covers only the last hop.

So it would have to be an L2.5 MIC.

But my point is really about having an architecture that helps with robustness and provides a solid base from which the protocols can evolve. Having L3 depend on an L7 MIC without being able to reliably determine whether that L7 MIC is present in the packet doesn't seem to do that.

I cannot see the need for heuristics. This mechanism has been discussed
at length in this group and at ISA. For all I know it works, but it's
fair to shake it and review the associated text to make sure it is very
crisp on when the checksum can be elided.

I think that without an explicit way for the decompressor at the L3-L2 boundary to tell that there is an L7 MIC, the proposal is just a hack and should not be standardized.

   Erik
_______________________________________________
6lowpan mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/6lowpan
