Hi Jonathan

Jonathan Hui wrote:

Hi Dario,

I support the effort to make the offset field be in units of single octets rather than 8 octets, though we need to be careful about shrinking the tag field.  Earlier drafts of RFC 4944 made the tag field 10 bits, but comments from IESG review raised concerns about whether that was enough.  After some discussion on the list, the decision was made to expand the field to 16 bits (though only 14 bits were necessary at the time) [1].  The key point is that 802.15.4 may operate on different PHYs with higher bit rates, so we should be prepared for those.
What kind of future bit rates are expected? Isn't it also possible that even a 16-bit tag might not be enough in the future, if bit rates increase enough? My point here is that the tag size is somewhat subjective, depending on whom you talk to and how much of the future one tries to predict.

Here's a rough calculation of the time it would take for the tag value in one node to roll over, based on worst-case packet sizes (i.e. smallest) and a 1Mb/s bit rate:
1st Packet Size = 128 bytes (PHY + MAC + Frag Hdr + HC + data + FCS)
2nd Packet Size = 18 bytes (PHY + MAC{short addrs} + Frag Hdr + 1 data byte + FCS)
Total bytes sent for one datagram = 128 + 18 = 146
Total bits sent = 146 x 8 = 1168
Bit Rate = 1Mb/s
Time to send one datagram = 1168 bits / 1Mb/s =~ 1.168ms
**** Assuming datagrams are continuously sent from one source without delays (unlikely, but let's ignore any delays for now).
Time for 16-bit tag field to rollover = 65536 * 1.168ms =~ 77s
Time for 13-bit tag field to rollover = 8192 * 1.168ms =~ 9.6s
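
A quick C sketch that reproduces these numbers (a back-of-the-envelope check only; the frame sizes and the 1Mb/s rate are just the assumptions stated above):

#include <stdio.h>

int main(void)
{
    const double bit_rate   = 1e6;  /* assumed 1Mb/s PHY rate */
    const int    first_frag = 128;  /* bytes, worst-case first fragment */
    const int    next_frag  = 18;   /* bytes, 2nd fragment with 1 data byte */

    /* time on the air for one two-fragment datagram */
    double bits = (first_frag + next_frag) * 8;   /* 1168 bits */
    double t_datagram = bits / bit_rate;          /* ~1.168ms */

    printf("per-datagram time  : %.3f ms\n", t_datagram * 1e3);
    printf("16-bit tag rollover: %.1f s\n", 65536 * t_datagram); /* ~76.5s */
    printf("13-bit tag rollover: %.1f s\n",  8192 * t_datagram); /* ~9.6s  */
    return 0;
}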

So is a 9.6 second rollover at 1Mb/s not sufficient as a worst case? I think it would be OK. Also, if PHY bit rates increase, we may also see an increase in allowable packet sizes (one can only hope :-).

However, if it means re-opening a heated debate or causing IESG problems, perhaps it's best to stay with a 16-bit tag and avoid wasting valuable time.

In my ideal world, we would reallocate a single fragment header type for simplicity.
I agree. An offset of 0 would indicate the first fragment anyway (assuming a single fragment header type for all fragments, with the offset included in each).

I'm not concerned about backwards compatibility, given that we are changing the entire HC format, and little has been done in the way of multi-vendor interoperability with the existing formats.  In that case, we include the tag, size, and offset fields in every fragment, keeping the tag at 16 bits and the others at 11 bits.  The header type would remain at two bits with '11'.
Sounds plausible, except that I have two reservations:
  1. Having a dispatch pattern of '11xxxxxx' would overlay the FRAG1, FRAGN, and LOWPAN_NHC encodings in "draft...-hc-06". FRAG1 and FRAGN are of greater concern, because there would be no way of filtering out old fragment headers. I think in this case we'd want to remain backward compatible.
  2. The _size and _offset fields would no longer be octet aligned (assuming the same field order), as the sketch below illustrates.
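
To make point 2 concrete, here's a rough C sketch (my own helper names, assuming the RFC 4944 field order of size/tag/offset behind a 2-bit type). Every field needs shift-and-mask work because nothing falls on an octet boundary:

#include <stdint.h>

struct frag_hdr {
    uint16_t size;    /* 11 bits */
    uint16_t tag;     /* 16 bits */
    uint16_t offset;  /* 11 bits */
};

/* buf points at the 5-byte header; the '11' type bits are assumed checked */
static void parse_frag_hdr(const uint8_t *buf, struct frag_hdr *h)
{
    uint64_t v = ((uint64_t)buf[0] << 32) | ((uint64_t)buf[1] << 24) |
                 ((uint64_t)buf[2] << 16) | ((uint64_t)buf[3] << 8)  |
                  (uint64_t)buf[4];

    h->size   = (v >> 27) & 0x07FF;   /* bits  2..12 */
    h->tag    = (v >> 11) & 0xFFFF;   /* bits 13..28 */
    h->offset =  v        & 0x07FF;   /* bits 29..39 */
}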

Eliding the offset field in the first fragment doesn't give much added benefit.
I agree.


To me, the above is the most straightforward approach: it maintains most of the existing format structure while simplifying it, and it avoids revisiting the same IESG concerns we've had in the past.

An alternate approach is to pick up on existing work to redesign the fragmentation layer to support more efficient fragment recovery and out-of-order delivery, but that would require a bigger change.  I'd echo Pascal's request to get additional thoughts there.

[1] http://www.ietf.org/mail-archive/web/6lowpan/current/msg00588.html

--
Jonathan Hui

On Feb 18, 2010, at 1:07 PM, Dario Tedeschi wrote:

Richard and Owen,


I think that the best solution would be to switch to using
the compressed size and offsets in the fragment headers.
This would allow (un)compression and (de)fragmentation to
be done independently.

I absolutely agree.

I agree.

I think it should be simplified further by making datagram_offset 11 bits long. Having datagram_offset as 8 bits (forcing fragment length to a multiple of 8) may save 1 byte, but it also means that up to 7 bytes at the end of a fragment can be left unused (save 1 byte only to, potentially, lose 7). So how about a new fragmentation header where both datagram_size and datagram_offset are 11 bits and the datagram_tag is reduced to 13 bits (how big does the tag need to be, anyway?). A new dispatch pattern would indicate a new fragmentation header where _offset and _size are relative to the compressed datagram. Something like the following:

                     1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|1 1 1 1 1|    datagram_size    |      datagram_tag       |   datagram_offset   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
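
As a rough sketch of what building this 5-byte header might look like in C (the helper name and the dispatch value are just illustrative, taken from the figure):

#include <stdint.h>

#define FRAG_DISPATCH 0x1FU  /* '11111' from the figure above */

/* pack dispatch(5) | size(11) | tag(13) | offset(11), MSB first, into 5 bytes */
static void write_frag_hdr(uint8_t buf[5], uint16_t size,
                           uint16_t tag, uint16_t offset)
{
    uint64_t v = ((uint64_t)FRAG_DISPATCH     << 35) |
                 ((uint64_t)(size   & 0x07FF) << 24) |
                 ((uint64_t)(tag    & 0x1FFF) << 11) |
                  (uint64_t)(offset & 0x07FF);

    buf[0] = (uint8_t)(v >> 32);
    buf[1] = (uint8_t)(v >> 24);
    buf[2] = (uint8_t)(v >> 16);
    buf[3] = (uint8_t)(v >> 8);
    buf[4] = (uint8_t)v;
}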



From an implementation point of view, bits 0 to 28 can be used as a pseudo tag for identifying fragments that belong to one datagram. The value could be stored in 4 bytes with the last three bits of the last byte masked out, but that's just an idea for trying to be a little more efficient.
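
For example (again only a sketch, with an illustrative helper name):

#include <stdint.h>

/* read the first 4 header bytes and mask off the low 3 bits, which are
 * the top of datagram_offset; what remains is bits 0..28 (dispatch +
 * datagram_size + datagram_tag) as a single 32-bit reassembly key */
static uint32_t pseudo_tag(const uint8_t hdr[5])
{
    uint32_t key = ((uint32_t)hdr[0] << 24) | ((uint32_t)hdr[1] << 16) |
                   ((uint32_t)hdr[2] << 8)  |  (uint32_t)hdr[3];
    return key & ~(uint32_t)0x7;
}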

Regards
Dario

