Given that IPv4 has managed to do fine with a
16-bit identifier, I'd say this size represents a
reasonable compromise.

vipul

On Oct 18, 2005, at 6:34 PM, gabriel montenegro wrote:

--- Vipul Gupta <[EMAIL PROTECTED]> wrote:

I couldn't find an easy way to search the archive
for prior discussions on this topic but the
datagram tag size of 7 bits seems too low to me.

Don't think there's been any discussion on this topic.

The max data rate for 802.15.4 is 250kbps.
Of this, let's assume that one only achieves about
128kbps or 16KBps. At this rate, an application could

this rate sounds reasonable as a maximum actual throughput
(assuming ACKs are turned on).

potentially be pumping out 128-byte packets (this
is long enough to cause each packet to be fragmented)
at the rate of 128 packets per second, causing the
tag field to roll over every second. So to avoid any
potential for confusion due to tag reuse, one would
have to assume that packets do not stay in the
multihop network for more than a second or so. This
expectation seems unreasonable (especially
for a reactive routing protocol where route discovery
alone might take this long).

does sound unreasonably short. one could also say that no
matter how much we decide to grow the tag field, after a certain
rate, the fragmentation and reassembly services of the
adaptation layer do not apply anymore. apps that
spew out more than a certain volume of traffic (TBD) would
then have to provide fragmentation and reassembly at their layer.
notice that they could continue using mesh delivery, they would
just spew out "unfragmented" (at the lowpan layer) packets.

The issue is, as always, one of tradeoff. should we specify a field
that will be future proof and cover all possible cases? with 15.4a
defining two new PHYs (UWB and "chirp" spread spectrum), who knows
what rates we might have. i've heard that anything from 50kbps up
to 1Mbps or so is possible. could anyone who follows 15.4a confirm?
let's say it's 1Mbps. this means that even if we grow the tag to 10
bits, we'd see a rollover about every second.

we could, on the other hand, simply grow it to, say, 16 bits for a rollover every minute or so at 1Mbps (someone check the math). depending on the mesh, one minute may be cutting it short as well, given store and forwarding, transient network partitions, etc. heck, might as well grow it to 32 bits and be done with it.
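the rollover figures above are easy to sanity-check. a quick back-of-the-envelope sketch (reusing the 128-byte packet size and the throughput numbers from this thread; the function name is just for illustration):

```python
# Back-of-the-envelope check of datagram-tag rollover times.
# Worst case from the discussion: every packet is 128 bytes and
# fragmented, so each one consumes a fresh tag value.

def rollover_seconds(rate_bps, tag_bits, packet_bytes=128):
    """Seconds until a datagram tag of tag_bits wraps around."""
    packets_per_second = rate_bps / (packet_bytes * 8)
    return (2 ** tag_bits) / packets_per_second

# 128 kbps effective throughput, 7-bit tag: wraps in about a second
print(rollover_seconds(128_000, 7))     # -> 1.024

# 1 Mbps, 10-bit tag: still only about a second
print(rollover_seconds(1_000_000, 10))  # -> ~1.05

# 1 Mbps, 16-bit tag: about a minute
print(rollover_seconds(1_000_000, 16))  # -> ~67

# 1 Mbps, 32-bit tag: roughly 51 days
print(rollover_seconds(1_000_000, 32))  # -> ~4.4 million seconds
```

so "a rollover every minute or so at 1Mbps" for 16 bits checks out, and 32 bits pushes the wrap well past any plausible packet lifetime in the mesh.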

sure, that imposes quite an overhead on apps that cause fragments to occur. But, if so, they *deserve* it (one could see this as the "fragmentation tax").
This fragmentation and reassembly capability is supposed to
be used sparingly (if at all), so any overhead on fragmented packets
does not apply to normal operation.

just some thoughts. what do others think?

-gabriel


_______________________________________________
6lowpan mailing list
[email protected]
https://www1.ietf.org/mailman/listinfo/6lowpan
