From: Michiel Konstapel
Sent: 10 Aug 2010 14:40:06

>> Generally, using 20-bit registers isn't advisable, as it increases
>> code size and execution time, and only a fraction of MSP users needs
>> the additional address space. Many (including me) just need the higher
>> speed, more RAM, or better peripherals.

> On the other hand, many (including me) are mainly interested in the
> extra flash and are happy to take the step BACK in RAM going from a
> 1611 to a 2418. "It's faster" doesn't help if you're stuck at "but it doesn't
> fit" :)

Some do, some don't. Indeed, if you NEED the space, then you need it.
My experience at e2e.ti.com is that many only think they need it, while
they actually need better programming skills and more imagination :)

>> And even if so, often putting the functions far is good enough.

> Constant data, too, would be nice, but I agree with the advantages of
> sticking to 16-bit data pointers (unless 20 bits are requested by a
> switch).

The problem is that it affects all code. If you use a single 20-bit
pointer, then all code in the whole project needs to be written to
support 20-bit pointers: it has to save the affected registers as 32
bits on the stack (including in ISRs), has to handle stack parameters
differently, and so on.
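
A minimal illustration of how a single far pointer propagates (the
farptr_t stand-in type and the names below are my invention, not an
existing mspgcc feature):

    /* Hypothetical stand-in: a 20-bit address carried in a 32-bit
     * integer, since a 16-bit 'void *' cannot hold it. */
    typedef unsigned long farptr_t;

    struct log_entry {
        farptr_t text;          /* 4 bytes in structs/on stack, not 2 */
        unsigned int length;
    };

    /* Every function that receives such a value needs 32-bit parameter
     * slots, and every register it lives in must be saved and restored
     * in full -- including by any ISR that may interrupt this code. */
    void log_store(struct log_entry entry);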

> For bulk data, you can fall back to a manual loading routine.

That's what I do for my dynamically loaded firmware update and for some
constants: a hand-crafted 'getter' and 'setter' method.
Those who are already using C++ objects won't notice the
difference or the overhead :)
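
For what it's worth, a minimal sketch of such a getter, assuming
GCC-style inline asm and an assembler that accepts the extended (430X)
mnemonics; the push/pop pair is just the usual trick for merging two
16-bit words into one 20-bit register:

    /* Read one byte from a 20-bit address while the rest of the
     * program keeps using plain 16-bit pointers. Sketch only. */
    static unsigned char far_read_byte(unsigned long addr)
    {
        unsigned char value;
        unsigned int lo = (unsigned int)addr;          /* low 16 bits  */
        unsigned int hi = (unsigned int)(addr >> 16);  /* upper 4 bits */

        __asm__ __volatile__(
            "push.w %2\n\t"        /* high word first...              */
            "push.w %1\n\t"        /* ...then the low word...         */
            "popx.a r12\n\t"       /* ...popped as one 20-bit value   */
            "movx.b @r12, %0\n\t"  /* extended move reads above 64k   */
            : "=&r"(value)
            : "r"(lo), "r"(hi)
            : "r12");
        return value;
    }

A far_write_byte is the mirror image; wrap them in a loop and you have
your 'memcpy from far'.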

> Being able to automatically split code into both low and far memory
> would be great, but as far as I understand, that's all but impossible
> for ld.

Yes, it's a rather complex task.

>>>*) How much unhappiness would there be if the default compiler behavior was
>>> to support near-mode compilation, with far-mode for code and/or data as
>>> an option specified either by attributes in the source or compiler flags?

> I don't mind which becomes the default, if it's easy to (globally)
> change. The -mdata-64k and -mcode-64k flags of mspgcc3's MSP430X
> branch, or their inverse, would do the trick.

But then the use of these flags needs to be recorded in the object file.
You must not mix code compiled with and without such a flag; doing so
will result in erratic program behaviour.


>> So my proposal is: a far attribute for functions and constants
>> (automatically for strings, e.g. in printf), but for constants only
>> if enabled with an additional command-line switch.

> Then the command line switch would select a different, precompiled
> library with far data pointers?

Yes.
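
Until such an attribute exists, marking individual functions far can be
approximated with a standard GCC section attribute plus a linker script
that places that section above 64k; ".fartext" is my own invented
section name, and the compiler still has to emit CALLA for calls into
it:

    /* Rarely used code that may live in upper flash. The linker
     * script must place ".fartext" above 64k, and callers must
     * reach it with CALLA instead of CALL. */
    __attribute__((section(".fartext")))
    void dump_history(void)
    {
        /* ... slow-path diagnostic code ... */
    }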

> If I understand correctly, the overhead for 20 bit code addresses is
> localized to the functions that use it (so libraries can be compiled
> once and will work either near or far), but 20 bit data pointers force
> that overhead on the whole program?

Basically, yes. 20-bit program code needs to be called with the 20-bit
CALLA instruction, but the calling function and all its surroundings may
be completely 16-bit except for the function call itself.
ISRs work transparently (the missing upper 4 bits of the PC are
automatically saved in otherwise unused bits of the stacked status
word).

A problem are all library functions which take (callback) function
pointers, since those pointers (and pointer variables) are only 16 bits.
If these functions store the value in a long and deal with it properly,
things should work independently of the data model.
The main candidate is printf.
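
Sketched concretely (all names below are invented for illustration): a
library whose callback type is a plain 16-bit function pointer silently
truncates the address of a far handler:

    /* In the near model this pointer is only 16 bits wide. */
    typedef void (*handler_t)(void);

    /* Library entry point, compiled once with 16-bit pointers. */
    extern void register_handler(handler_t h);

    /* Lives above 64k (placed there by attribute/linker script). */
    extern void far_handler(void);

    void setup(void)
    {
        /* The upper 4 address bits are lost here -- the library
         * would have to store the value as a long (and CALLA it)
         * to work in either model. */
        register_handler(far_handler);
    }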

Once 20-bit data pointers are used, almost all code needs to take care
of them: at least all ISRs, and any code that is called below the first
point where such pointers are in use.

Imagine an ISR that clears the upper 4 bits of your library function's
local data right in the middle of execution...
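
A sketch of that failure mode, using msp430-style interrupt attributes
(the vector name and the exact codegen vary by toolchain):

    #include <msp430.h>

    volatile unsigned int ticks;

    /* Compiled for the 16-bit data model, this ISR saves any scratch
     * registers it uses with 16-bit PUSH.W/POP.W. A 16-bit write to a
     * register clears bits 19:16, so a 20-bit value the interrupted
     * code kept in such a register loses its upper 4 bits on RETI. */
    __attribute__((interrupt(TIMER0_A0_VECTOR)))
    void timer_isr(void)
    {
        ticks = ticks + 1;   /* may well go through a scratch register */
    }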
Hence the need for a 'data model compatibility flag' in the object file.

Perhaps this can be emulated by using different code segments for code
using 16-bit and 20-bit data, with the linker script ensuring that only
one of them is populated...
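
Another way to get the same protection, sketched below with invented
symbol names: have every object reference a symbol that only the
startup code of the matching data model defines, so mixing the two
models fails at link time rather than at run time:

    /* Dropped into every translation unit, e.g. via a common header.
     * __LARGE_DATA_MODEL__, __crt_model_large and __crt_model_small
     * are all invented names for this sketch. */
    #ifdef __LARGE_DATA_MODEL__
    extern const char __crt_model_large;
    static const char * const __model_check
            __attribute__((used)) = &__crt_model_large;
    #else
    extern const char __crt_model_small;
    static const char * const __model_check
            __attribute__((used)) = &__crt_model_small;
    #endif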

JMGross
