Hi JMGross,

Interesting discussion, nice to hear your thoughts on the matter.

> > I meant making it 32 bits where needed and 20 bits where possible
> > (in the special registers), but not 16 unless the programmer
> > indicates that that's what he wants. That way, the whole address
> > space is transparently available.
> 
> Yes, the address space is, but if this is the default case, then all
> problems arising with a mixed 20/32 bit handling are also
> transparently (and invisibly) available. That's my concern.
> With a pure 32 bit solution, there are no problems, other than that
> it is highly inefficient. So it could be made default, but it is not
> desirable.
> With the 20/32 solution, the programmer has to know this and its
> implications.

Isn't it the case that a pointer can't ever be more than 20 bits, and
you only need 32 because you can only keep it in an even number of
bytes? So however many times you move it around, those top 12 bits will
always be zero, so they can't be lost, misplaced or changed?
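
Put differently, I'd expect something like this to always hold (just a
sketch; myvariable is a stand-in, and I'm assuming addresses get passed
around zero-extended in 32-bit slots):

        #include <stdint.h>

        static int myvariable;

        void check(void)
        {
                /* a 20-bit MSP430X address, zero-extended to 32 bits */
                uint32_t a = (uint32_t) &myvariable;

                /* nothing lives above the 1 MB address space, so
                   (a & 0xFFF00000UL) == 0 holds no matter how often
                   the value gets copied around */
                (void) a;
        }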

> Consider that when the compiler is done, many users will just use it
> without knowledge of the inner workings. They just write C code and
> think that all will be the way it is meant to be.
> And if things go wrong, many of them don't know about assembly
> language at all and are unable to detect WHAT's going wrong in the
> compiler generated code.
> If they are required to use a FAR qualifier then they at least KNOW
> that there's something special going on.

But if the compiler does its job, there's nothing special going on, is
there? Of course, if there are bugs in the compiler, you're potentially
going to get hosed in spectacular ways, anyway.

> My experience (25 years of programming on various systems) is that
> once things work, they'll most certainly never get straightened out.
> Nothing lives longer than a temporary solution.
> Look at things like the SMTP protocol (which was never meant to be
> used outside a small local network) or the IPv4 specification, which
> has been known to be utterly outdated for at least 15 years and still
> there is no end in sight despite better solutions.
> So if you ever want to have it made right, make it right, right from
> the start.

True, true. Still, a good solution now is better than a great solution
in two years, and this is an open source project, which makes
improvement over time rather more likely.

> The programmer would be surprised when 12 bits of his long value keep
> disappearing.
> (well, it would be possible to add the upper 4 bits to the lower word
> register and use it for referencing, but this requires an awful lot
> of intelligence on the compiler side)

Well, if you treat a 20-bit pointer as a 32-bit value and stuff
something into the not-actually-there top 12 bits, then I think you've
got it coming. In normal use (function and data addresses) those will
always be all zeroes, right? I wouldn't write
        uint32_t p = (uint32_t) &myvariable;
but
        foo_t* p = &myvariable;

and how big a foo_t* is, is none of my business and I should not assume
I can fiddle with its bits and still have something useful. The only
useful things you can do with a pointer are passing it around,
indexing/incrementing, and dereferencing, right? C will let you do
anything, of course, but then you're on your own.
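
To make that concrete, this is about the whole legitimate repertoire as
I see it (a sketch; foo_t, table and bar are made up for illustration):

        typedef int foo_t;

        static foo_t table[4];

        static void bar(foo_t* q) { (void) q; }

        void demo(void)
        {
                foo_t* p = &table[0];
                bar(p);        /* pass it around */
                p++;           /* index/increment */
                *p = 42;       /* dereference */
        }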

If I write
        uint16_t x;
        uint32_t* p = (uint32_t*) &x;  /* 32-bit view of a 16-bit object */
        *p = 3;                        /* writes past the end of x */

things are also probably going to break, but if I want to, the compiler
will let me.

> And I don't like default settings which are not 100% transparent. If
> I have to activate an option or add qualifiers or attributes to make
> sure things work the way I'd expect from most other machines (leaving
> out the PICs with their hardware stack and 12 to 14 bit instruction
> words), then I'd consider this a really bad thing.
> It's like a car where you need to put your foot on the gas pedal to
> make it _not_ move :)

I guess that's the part I don't understand - why could it not be made
transparent? IIRC, the C spec doesn't specify the size of a pointer, so
as long as you don't make any assumptions (or only make assumptions that
are valid for your platform, i.e. an address is 20 bits, but may be
passed around in a 32-bit slot) it should Just Work (TM).
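
In other words, code like this should keep working whatever the
compiler decides a pointer looks like (a sketch; blink is a
placeholder, and I'm assuming C99's uintptr_t is available):

        #include <stdint.h>

        static void blink(void) { }

        /* fine whether pointers end up 16, 20-in-32, or 32 bits wide */
        static void (*handler)(void) = blink;

        void stash(void)
        {
                /* if you really need an integer, take one the compiler
                   sizes for you, instead of hardcoding uint16_t or
                   uint32_t */
                uintptr_t raw = (uintptr_t) handler;
                (void) raw;
        }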

> Hey, something we finally agree on :) Unfortunately, documentation
> often is an orphan in non-commercial projects.

Well, as long as it behaves according to the C spec, that's good enough
for most people I suppose.

> >> p.s.: actually I don't use nor require any of the X functions
> >> right now. The lower 64k is enough for all of my projects right
> >> now. I use the 54xx mainly because of its hardware modules and the
> >> huge RAM size. And with the current state of the compiler, my first
> >> use of the FARTEXT would be placing some constant tables and the
> >> init data for the variables there.
> 
> > Ah, conversely, I'm most interested in the extra flash for code.
> > Our application is outgrowing the 1611 and the 2418 only has 52 KB
> > if you can't use FARTEXT, so that's 4 KB extra flash while losing
> > 2 KB of RAM. I don't look forward to slapping FAR on hundreds of
> > functions, I just want the linker to toss one half up there and one
> > half down here.
> 
> We're also replacing the 1611, but mostly because of its limited I/O
> (I have to multiplex SPI and I2C on one USART right now) and the
> lower speed (now 16 instead of 8 MHz). What I'll miss is the D/A
> module. But the lower price of the device compensates for this.
> 
> What about a pragma to put the whole compilation unit into FAR or
> near, unless explicitly changed (switch the default)?

Unfortunately, that won't help a large group of MSP430 developers: those
using TinyOS. TinyOS compiles your application into a single app.c,
which is then compiled by gcc, so you only have one compilation unit.
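
For completeness, I read your proposal as looking something like this
(hypothetical syntax - nothing mspgcc accepts today):

        /* hypothetical: switch the default for this compilation unit */
        #pragma memory_model(far)

        void big_parser(void);   /* would now land in FARTEXT */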

Still, I think these are all hacks to work around the fact that the
linker can't handle the interrupt vectors in the middle of flash - to
the MSP430, it's all just one big block of memory and ideally, the
programmer should be able to use it as such.
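
(For single functions, GCC's section attribute at least exists today -
though whether the linker script actually defines a far section like
the .fartext below is an assumption on my part:)

        /* assumes the linker script provides a .fartext output
           section placed above the interrupt vectors */
        __attribute__((section(".fartext")))
        void init_tables(void)
        {
                /* ... */
        }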

Michiel
