----- Original Message -----
From: Michiel Konstapel
To: [email protected]
Sent: 03 Feb 2010 13:24:25
Subject: Re: [Mspgcc-users] suggestion: MSP430X far address warning
> Hi JMGross,
> Interesting discussion, nice to hear your thoughts on the matter.
You're welcome. As I am, hopefully :)
> Isn't it the case that a pointer can't ever be more than 20 bits, and
> you only need 32 because you can only keep them in an even number of
> bytes? So however many times you move it around, those top 12 bits will
> always be zero, so they can't be lost, misplaced or changed?
Yes, speaking strictly of a pointer, you're completely right. But things are
different when it comes to pointer arithmetic.
Math is done in either 16 or 32 bits, and the C standard has some oddities
about the size of the operands in any calculation.
Also, the math functions used by the compiler are only defined for 16/32/64
bit. So what happens with your 20 bit value?
Is it treated as a 32-bit value? If so, the result of any operation might be
bigger than 20 bits, and nobody would expect it to be cut back to 20 bits after
doing the math - even if more than 20 bits are useless for any pointer job.
But then, who can tell what the result of the operation might be used for?
Maybe it's passed to printf as a parameter? Then printing a value that has been
cut to 20 bits will never reveal a possible overflow. Unexpectedly.
One of the pitfalls (and advantages) of C is that any type is just a way to
interpret a value. In other languages, where a type is opaque, there would be
no problem at all.
>> If they are required to use a FAR qualifier then they at least KNOW
>> that there's something special going on.
>But if the compiler does its job, there's nothing special going on, is
>there? Of course, if there's bugs in the compiler, you're potentially
>going to get hosed in spectacular ways, anyway.
No, the compiler should not do anything that is unexpected and even less
something that does not comply with the C standard.
So it's the job of the compiler to translate your code into assembly language /
binary code, following the rules of the language.
If something requires special treatment that does not follow these rules or
even just interprets these rules in an unexpected manner, it should be obvious
to the programmer.
If the compiler knows about some hardware specialties, it must be ensured that
their use is either 100% transparent, or that it at least flags a warning when
the programmer does not acknowledge that he knows what's going on.
An example: some of the MSPs have a 16-bit I/O area which cannot be read as
separate 8-bit values. Here the compiler didn't know about this, and the user
had done some typecasting which made the compiler 'fear' that there might be no
word alignment when reading from a calculated/typecasted address. Since the
processor does not allow misaligned byte addresses (the LSB is always forced to
0 in word operations), the compiler split the word access to the 16-bit
register into two 8-bit reads - with unexpected results.
The programmer didn't know (and this is really well hidden in the
documentation) that the MSP ignores the LSB in any word transfer address.
Even worse, a previous compiler version didn't check for possible misalignment,
so the same source code worked, as the compiler simply generated a (possibly
wrong) word access.
This example does not exactly fit the 20 bit pointers but it shows what
confusion might arise when the hardware requires some special treatment that
the programmer is unaware of.
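A host-side sketch of the kind of cast involved (read_as_bytes, read_as_word and the register name are invented for illustration): on a PC both functions return the same value, but on an MSP430 peripheral that only supports word access, the byte-wise version is exactly the access pattern that went wrong:

```c
#include <stdint.h>

/* Harmless on a host; on an MSP430 16-bit-only peripheral register,
   each of the two byte reads would hit the bus separately and could
   return wrong data. */
static uint16_t read_as_bytes(const volatile uint16_t *reg)
{
    const volatile uint8_t *p = (const volatile uint8_t *)reg;
    /* little-endian byte assembly, as the MSP430 is little-endian */
    return (uint16_t)(p[0] | ((uint16_t)p[1] << 8));
}

static uint16_t read_as_word(const volatile uint16_t *reg)
{
    return *reg;  /* single 16-bit access - what the hardware requires */
}
```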
>> Nothing lives longer than a temporary solution.
> True, true. Still, a good solution now is better than a great solution
> in two years, and this is an open source project, which makes
> improvement over time rather more likely.
But putting some thought into a better solution up front might cut down
development effort considerably.
Also, open source projects are often only further developed if the pain of
using the current solution exceeds the effort for improvement.
I for myself know that I'll most certainly never find the time to write a
single line of code for the compiler.
But I can put some brain grease into the matter, looking for possible pitfalls,
detecting likely dead-ends etc.
After all, I will have to live with the result :)
> Well, if you treat a 20-bit pointer as a 32-bit value and stuff
> something into the not-actually-there top 12 bits, then I think you've
> got it coming. In normal use (function and data addresses) those will
> always be all zeroes, right? I wouldn't write
> uint32_t p = (uint32_t) &myvariable;
> but
> foo_t* p = &myvariable;
> and how big a foo_t* is, is none of my business and I should not assume
> I can fiddle with its bits and still have something useful. The only
> useful things you can do with a pointer are passing it around,
> indexing/incrementing, and dereferencing, right? C will let you do
> anything, of course, but then you're on your own.
Yes - and no. Sure, if I treat a pointer as an opaque type, I don't have to
care at all. And on a 'big' system, such as a PC, I wouldn't care either. Ever
tried MFC? All pointers there are actually handles of hidden, managed
pointer objects. Highly inefficient, but almost 100% safe.
On something like the MSP, you often need to tweak things a bit further, so you
might make assumptions based on common experience or whatever. And when the
compiler does things the other way 'round, you're hosed.
One example: you know the start address of an array and want to know the end
address for a given number of elements. You don't know the size of the
elements, but the compiler does. So let the compiler calculate
the pointer just behind the last element. This could happen if you want to
program given or calculated tables at runtime in the FARMEM.
Alas, the compiler did a wrap-around, and you'll never know unless you're
watching the processor registers :)
(I admit, a very contrived case and it doesn't make much sense, but I hope
you'll get the point.)
After all, a pointer is a value, either used internally or stored, and the C
standard knows no 20-bit value, no UINT_20 :)
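A sketch of the end-of-array case above (entry_t and end_of are made-up names): pointer arithmetic scales the count by the element size in the compiler's internal pointer width, so on a 20-bit part a table ending near 0xFFFFF could push that sum past the top and wrap:

```c
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint16_t key;
    uint32_t value;
} entry_t;

/* One past the last element: the compiler multiplies 'count' by
   sizeof(entry_t) in its internal pointer width. If that width were
   only 20 bits, a table ending near 0xFFFFF would silently wrap. */
static const entry_t *end_of(const entry_t *start, size_t count)
{
    return start + count;
}
```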
> I guess that's the part I don't understand - why could it not be made
> transparent? IIRC, the C spec doesn't specify the size of a pointer, so
> as long as you don't make any assumptions (or only make assumptions that
> are valid for your platform, i.e. an address is 20 bits, but may be
> passed around in a 32-bit slot) it should Just Work (TM).
Right. The problem is the 'or'. You need to know the platform - or more
precisely, you need to know what the compiler knows about your platform and
applies automatically, in addition to or in contradiction of the C
standard.
That's not what I call transparent.
>> Hey, something we finally agree :) Unfortunately, documentation often
>> is an orphan in non-commercial projects.
>Well, as long as it behaves according to the C spec, that's good enough
>for most people I suppose.
Yes, and that's the problem. 20 bit pointers do not fit into the C specs. C
does not tell the size of a pointer, but 20 bits won't fit into the common
implementations of the C standard, as it won't fit any of the standard
data type sizes.
A pointer might be 8, 16, 32 or even 64 bits, but 20 is uncommon. And to make
clear that it is uncommon and won't fit into the usual schemes, it should be
necessary to flag this in the source code - to make the programmer flag it and
confirm that he knows it.
Take registers, where every access can cause a hardware action: the programmer
(actually the writer of the headers) flags them as volatile. This tells the
compiler not to cache the content but to reload it each time it is used, and it
also results in a warning when you pass a volatile pointer to a function that
does not know about volatile - and might access the address an unknown number
of times, maybe passing it on to other functions. You don't know, but you'll
be warned.
>> What about a pragma to put the whole compilation unit into FAR or near,
>> unless explicitly changed (switch the default)?
> Unfortunately, that won't help a large group of MSP430 developers: those
> using TinyOS. TinyOS compiles your application into a single app.c,
> which is then compiled by gcc, so you only have one compilation unit.
Maybe the scope of the pragma only extends until the next one is found.
So set the default to FAR, include the source files that shall be far, then set
it to near and include the other ones.
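In a single generated app.c (as TinyOS produces), such a scheme might look like this - the pragma name is entirely hypothetical, nothing like it exists in mspgcc today:

```
#pragma memory_model(far)   /* hypothetical: everything below defaults to far */
#include "big_tables.c"
#include "rarely_used.c"

#pragma memory_model(near)  /* back to near for the hot paths */
#include "isr_handlers.c"
#include "main_loop.c"
```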
> Still, I think these are all hacks to work around the fact that the
> linker can't handle the interrupt vectors in the middle of flash - to
> the MSP430, it's all just one big block of memory and ideally, the
> programmer should be able to use it as such.
Partly, yes. But that's not really a problem or difficult to solve.
Theoretically, we could generate a jump table right before the vector table
which contains far jumps to every possible ISR.
Then the linker can put any function - including the ISRs - anywhere it wants
and all is fine.
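Such a trampoline might look like this in MSP430X assembly (label and handler names invented): the vector table entries stay 16-bit and point at near stubs, while BRA takes a full 20-bit target:

```
; 16-bit vector table entries point at near trampolines...
__isr_tramp_timer:
        BRA     #timer_isr      ; ...which far-jump (20-bit target,
__isr_tramp_uart:               ; emulated as MOVA #isr, PC) to the
        BRA     #uart_isr       ; real handler, wherever the linker put it
```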
Except that it adds some cycles to any interrupt. And if you're writing
applications where every cycle counts, then this solution is as ugly as default
far functions and far variables and their additional overhead.
(Did you know that on the PC this problem is solved by hardware-mirroring the
BIOS to the end of the physical address space, so the processor will always
find its reset vector?)