My concern is with code size.  Adding 10% overhead to a program for
network types is fairly egregious.

I don't believe the CPU cycles (latency) are really an issue, and I'm
not convinced they have been since the mica1 platform.

CPU cycles (energy) aren't an issue on the msp430 platform; my work is
done quickly enough, and at a low enough energy burn, that the extra
microseconds don't matter.

Really, what it sounds like I'm looking for is an easy way to define,
or select from a list of profiles, the mechanism that makes the most
sense for each microcontroller/platform.  I've never been able to get
gcc to inline the things I want and not inline the things that I don't
want, so I'm not convinced the gcc hack is an easy one.  The 'inline'
keyword only marginally affects what the compiler actually does.

I do think that converting the struct to network format when
ownership of the struct is transferred to a lower-level module may
make the most sense.  For example, the data payload that Multihop
receives is opaque, so the conversion could be done immediately
before submitting to the Multihop component, and likewise in the rest
of the stack; the reverse can be done on the way back up.
Unfortunately, I can see a number of issues with this approach, most
notably how the structure packs into the full message buffer, given
that the network-aligned structure may have a different size than the
microcontroller-aligned structure.  Perhaps something like:

my_format_t x;
x.blah = 1;
x.foo = 2;

message_t msg;
nx_my_format_t *nx = (nx_my_format_t *)msg.data;
nx->blah = x.blah;  /* assignment to nx_ fields converts byte order */
nx->foo = x.foo;

Who knows, but the overhead almost makes network types unusable in
most msp430 applications.

-Joe

On 3/31/06, Philip Levis <[EMAIL PROTECTED]> wrote:
> On Mar 31, 2006, at 1:41 PM, Joe Polastre wrote:
>
> >> Network types are your friend. :)
> >
> > Except for the fact that they add over 2k of program space to my
> > applications.  Sigh.  Any chance network types are going to become
> > somewhat more efficient in the future?
>
> Efficient in CPU cycles, RAM usage, or code size? There are tradeoffs
> there. The first concern in the network type design was optimizing
> RAM usage. David Gay's evaluations were the tradeoff between CPU
> utilization (energy) and RAM. E.g., copying to native structs could
> save energy for common accesses, but would require allocating another
> struct.
>
> Then, with the advent of telosb and derivatives, code size suddenly
> became an issue. It's not a problem on any other platform. That
> wasn't an original design consideration, given the commonly noted
> direction that microcontrollers are taking to have more program
> memory. I think it's safe to say that this wasn't anticipated.
>
> It sounds like you want to do 1 of 2 things, both of which are
> completely within your power and really outside the scope of the
> compiler (but perhaps within its toolchain):
>
> 1) Prevent gcc from inlining the conversion functions. This will mean
> that there's only one copy, saving code space, but invoking it will
> be a function call, costing you CPU cycles and possibly making
> interrupt paths too long (my guess is that this wouldn't be an issue
> in the CC2420 stack, though, as it shouldn't be worrying about this
> kind of thing too much). You could do this by suppressing the
> inclusion of nesc_nx.h and linking against another C object.
>
> 2) Do all of your operations on native structs, then copy them over
> when they're ready to send. This would be a bit of a pain, though,
> and might require some restructuring of programming interfaces.
>
> Actually increasing the "efficiency" of the functions is difficult. I
> mean, here's a sample function:
>
> static __inline uint16_t __nesc_ntoh_leuint16(void *source){
>    uint8_t *base = source;
>    return ((uint16_t )base[1] << 8) | base[0];
> }
>
> If you can think of a way to optimize that -- or some other
> suggestion that isn't preferential to one architecture over another
> --  I'm sure David would be happy to check it in when he returns from
> paternity leave.
>
> Phil
>

_______________________________________________
Tinyos-help mailing list
[email protected]
https://mail.millennium.berkeley.edu/cgi-bin/mailman/listinfo/tinyos-help
