Hi Ted,

On Sun, Aug 22, 2010 at 1:54 AM, Ted Herman <[email protected]> wrote:
> Miklos Maroti wrote:
>>
>> Yes, that is true. Somehow it would be necessary to know which packets
>> contain embedded time stamps and which do not, so for example the
>> Basestation could work without changes. For AM messages we have used
>> one of the AM types. Time Sync messages already have a 1+4 byte shorter
>> payload (1 byte for the embedded new AM type, 4 bytes for the event
>> time).
>>
>
> Actually, I found it so hard to use this level of the interfaces that I
> abandoned trying to follow that part of the code in the current TinyOS,
> instead just using the parts in transmit and receive, modifying them to look
> for new metadata switches. Formerly I did work on things at the higher
> layer of abstraction, but with the newest version, I just gave up because I
> ran out of headache pills. When I switched to just working with transmit
> and receive, my headache disappeared! You are correct, this shifts the
> burden to the application rather than doing so much in the system/stack.
> But of course this is an old theme in system development (if you've tried
> recently to work with I2C on the Linux/driver side you probably know what I
> mean).
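(For concreteness, the embedded layout I described above -- the one I want
to get rid of -- is roughly the following; the names are purely illustrative
sketches of mine, not the actual TinyOS headers, which use nx_ network
types:)

    #include <stdint.h>

    /* Rough sketch of the current hack: the frame goes out with a reserved
     * timesync AM type, and the application's real AM type plus the 32-bit
     * event time (patched by the stack at transmit) sit at the end of the
     * payload -- hence the 1+4 byte shorter payload. */
    struct timesync_footer {
      uint8_t  embedded_am_type;  /* the real AM type, hidden in the payload */
      uint32_t event_time;        /* 4-byte event time field                 */
    } __attribute__((packed));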
I think the old interface and implementation of timesync was poor, but
timestamping is good as it is.

> Regarding the embedded AM type, this seems to be more ambitious than
> I would want to suppose by default and enforce on everybody, but I guess you
> guys have found that this saves labor, at least for now :) -- you may be
> painting yourself into a corner ...

The embedded AM type was a hack and that is what I wanted to remove, so
that, for example, the Basestation and any other application would work
with timesync messages without a problem (no more tweaking of the layout
of the packet, etc.).

>> True. This was the intent. However, on most platforms the binary
>> microsecond clock is stopped when the mote sleeps, so one has to be
>> careful with this.
>>
>
> Well, already I have been using the hybrid strategy you mention in a later
> email. The true binary microsecond clock is not used in all cases, just the
> microsecond standard of expressing time, in my case. On a Telos, using the
> microsecond clock had only marginal benefits in my experiments (because you
> have to limit its use to periods of 15ms for stability). But with mixed
> Telos/MicaZ networks, things are better. The hardware is still evolving and
> in another year or two when we are all working on ARM Cortex processors,
> maybe things will be different.

This is true. I can live with the microsecond precision only, i.e. leave
the standard as it is and avoid introducing the TMilli variant. The most
important thing is to recognize that a message has an embedded timesync
value and to remove the embedded AM type. Yes, maybe ARM Cortex will be
the standard, but I fear that if we move to more powerful chips, then
TinyOS will become irrelevant and will be overtaken by a stripped-down
Linux. We will see.

>>> Also, I had to add fields for microsecond-based
>>> timestamps in the metadata plus new interfaces for that.
>>>
>>
>> Why can't you use the PacketTimeStamp interface already provided?
>>
>
> I only added a "setmicro()" command to PacketTimeStamp so that, in addition
> to set() there would be a microsecond field for times; I did this because
> set() might still be needed for the lower resolution uses somewhere else in
> the stack.

I see.

>> True. What I have seen is that someone wanted to get the sender's TMilli
>> time on the receiver side. He set the event time to 0, and then read
>> the event time on the receiver. This "works", but since the delay in
>> the message is in microseconds, this method does not work, or at least
>> the TMilli value returned is truncated (the top 10 bits are incorrect).
>>
>
> OK, yes, maybe the goal is to "foolproof" the timestamping implementation.
> Maybe if you keep things like the proposed overhead fields at a higher
> level than transmit/receive, that won't have any effect on what I am doing.
> Sort of like HAL/HIL distinctions.

The proposed overhead field would not be an overhead compared to the
current state of affairs, since we would eliminate the embedded AM type.
Something needs to be done: a proper way for the radio stack to identify
incoming timesync messages without relying on a specific AM type. This
would also allow raw IEEE 802.15.4 time synchronization.

Miklos
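P.S. Very roughly, the kind of per-packet overhead I have in mind on the
sending side (names are only illustrative, not a worked-out interface);
how the receive side recognizes such frames is still the open part:

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative sketch: instead of hiding the real AM type in the
     * payload, the sender marks the packet in its local metadata and tells
     * the stack where the 4-byte event time field lives; the radio rewrites
     * that field at SFD time. Identifying such frames on receive is exactly
     * the question raised above. */
    struct timesync_metadata {
      bool     has_event_time;     /* frame carries an embedded event time    */
      uint8_t  event_time_offset;  /* byte offset of the field in the payload */
      uint32_t sfd_timestamp;      /* local timestamp captured at SFD         */
    };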

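P.P.S. Back-of-envelope numbers for the truncation mentioned above
(TinyOS binary units: TMilli is 1024 ticks/s, TMicro is 1048576 ticks/s,
so the ratio is exactly 2^10):

    #include <stdint.h>

    /* A 32-bit TMicro counter wraps every 2^32 / 2^20 = 4096 s (~68 min),
     * while a 32-bit TMilli counter wraps only every ~48.5 days. Converting
     * a micro-domain value therefore yields at most the low 22 bits of the
     * TMilli time; the top 10 bits are simply not there. */
    static uint32_t milli_from_micro(uint32_t t_micro)
    {
      return t_micro >> 10;   /* bits 31..22 of the TMilli value are lost */
    }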