On Mar 7, 2006, at 5:17 PM, Ian Rose wrote:

Apologies if this has been addressed previously:

Currently I am running a simple application to measure the throughput of the default Telos-A MAC. I have one mote send messages as fast as it can (i.e. it sends a new message as soon as SendDone is signalled) and another mote keeps a running count of all of the messages that it hears. In addition, each message has a sequence number so the listener can tell if a message was missed. When I run this experiment, I consistently see that (roughly) only 85% of the sent messages are received by the listener. If I change the sender to send a little more slowly, the reception rate quickly goes up to 100%, so it's definitely the speed of sending that is causing the ~15% loss rate.

Is this known behavior? What causes it? Does the listener take longer to "reset" its radio somehow, as compared to the sender?

The root cause is how the 1.x stack handles radio interrupts. The basic flaw is that it only allows one packet to be moving through the stack at any time. There's a lot of very defensive programming, where if anything looks remotely problematic, it flushes the radio receive memory. So if you start sending packets as quickly as possible, something that often happens is that you're in the middle of reading one packet out when another arrives (e.g., due to some task or interrupt latency). The radio just stores the second packet in the RXFIFO memory, where it could later be retrieved, but the 1.x stack is paranoid and so flushes the memory.

This happens more often than you'd like due to some inefficient code paths. So the node drops packets.

What's much more insidious about these packet drops, though, is their effect on acknowledgments. Because the radio has successfully stored the packet in the RXFIFO memory, if acks are enabled then it will issue an ack. However, the application might never receive the packet, as the radio stack has flushed it. This greatly reduces the utility of acks on that stack (but only under high packet transmission rates).

As part of 2.x development, I reworked the 1.x stack to remove these problems and was able to receive something on the order of 600 packets per second without dropping any of them. Jonathan Hui (Arched Rock) then wrote a clean-slate implementation that borrowed some of the techniques I used, but by starting from scratch was able to clean out all of the cruft and take full advantage of some of the new features in 2.x (e.g., resource arbitration for the SPI bus). His code is now the standard CC2420 stack in 2.x.

Phil
_______________________________________________
Tinyos-help mailing list
[email protected]
https://mail.millennium.berkeley.edu/cgi-bin/mailman/listinfo/tinyos-help