----- Original Message -----
From: Peter Bigot
Sent: 10 Aug 2010 01:18:25

> Having cut my teeth on 8088 assembly language, the concept of near and far
> qualifiers is natural to me.  Mixing memory models is ugly, but if the
> linker can figure it out, 95% of the coders out there won't even know it's
> happening.  Plus, it reduces the number of multilibs that need to be
> provided.

Indeed, back in the good old 8088 days this was normal and worked well.
I wrote a lot of projects in Borland C++ 3.1, with and without far
qualifiers.

> *) Can anybody say why it isn't reasonable to have ld rewrite the assembly
>   code to adjust for near and far pointers?

Changing code from near to far is simple for the CALL/CALLA instruction, but the
moved function also needs its RET replaced by RETA, and detecting where to
replace it and where not is a bit difficult. Also, by default, whole compilation
units are compiled and optimized together, so the linker cannot move individual
functions. And interrupt functions may not be moved at all.
Even more difficult is the handling of function pointers:
a pointer to a far function must be 32 bit, while a near pointer is only 16 bit.
This affects code size.
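
To illustrate the pointer problem, a minimal sketch in C. The __far
qualifier is hypothetical here (reminiscent of the old DOS compilers'
spelling); nothing like it exists in the compiler yet:

    /* __far is a hypothetical qualifier, used only for illustration. */
    void near_fn(void);         /* below 64K: called via CALL, ends in RET   */
    void __far far_fn(void);    /* above 64K: called via CALLA, ends in RETA */

    void       (*np)(void) = near_fn;   /* fits in 16 bits               */
    void __far (*fp)(void) = far_fn;    /* needs 20 bits, stored as 32   */

    /* sizeof np == 2, sizeof fp == 4: every table of function pointers
     * doubles in size as soon as one entry may be far, and behind a
     * pointer the linker can no longer choose between CALL and CALLA. */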

For (const) variables, things are even more difficult. As soon as one single
variable is placed far, all registers must be treated as 20 bit.
This is a significant overhead (especially for interrupt functions, where it
really matters), so it should be at least under user control:
a project-wide 'use far variables' flag is necessary, and the linker should
warn or fail if not all linked code has this flag set identically.
Also, the compiler should throw an error if any far variable is generated
while this flag isn't set.
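
To make the interrupt overhead concrete, a sketch of what the prologue
difference amounts to (the saved register set is an assumption, and the
vector argument is omitted for brevity):

    /* With near-only data, the ISR may save scratch registers as words:
     *   PUSHM.W #5, R15      ; 5 x 2 = 10 bytes of stack
     * As soon as any far data exists in the project, the full 20-bit
     * registers must be preserved instead:
     *   PUSHM.A #5, R15      ; 5 x 4 = 20 bytes of stack
     * Twice the stack traffic on every interrupt entry and exit,
     * whether or not this particular ISR touches far data itself.    */
    __attribute__((interrupt)) void timer_isr(void)
    {
        /* handler body */
    }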

On the 80386 there was a similar problem when you used 32-bit registers in
your program under DOS, and software/hardware interrupts didn't save and
restore them as 32 bit. There were workarounds available, but these are not
available on the MSP.

Generally, using 20-bit registers isn't advisable, as it increases code size
and execution times, and only a fraction of the MSP users needs the additional
code space. Many (including me) just need the higher speed, more RAM or
better peripherals.

And even then, just placing the functions far is often good enough.

Also, the placement algorithm for ld isn't that simple anymore
(it isn't simple already, I know), as it would need to split the code base
into two segments (near/far), while the elimination of unneeded code must be
done in a common pool first.
This will require an additional step in the linking process.
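
One conceivable way to give ld something to work with, using nothing beyond
standard GCC section attributes (the section name and the linker-script
behavior in the comments are assumptions, not an existing setup):

    /* Tag overflow candidates explicitly, so dead-code elimination
     * (-ffunction-sections plus --gc-sections) runs over the common
     * pool first, and placement happens per section afterwards.
     * ".text.far" is a made-up name for illustration.               */
    __attribute__((section(".text.far")))
    void rarely_used_helper(void)
    {
        /* candidate to be moved above 64K if low flash runs out */
    }

followed by a linker script that fills low flash first and spills .text.far
into high memory -- essentially the additional placement pass described above.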


>*) How much unhappiness would there be if the default compiler behavior was
>   to support near-mode compilation, with far-mode for code and/or data as
>   an option specified either by attributes in the source or compiler flags?

This should be fine. If lowmem flash gets short, farmem is used for functions.
There is, however, a problem with library functions, which must always be far,
even if linked near: they have a fixed return instruction, and it must always
be RETA so they properly return to any far-linked caller.
This introduces the least overhead (2 more bytes of stack usage and one more
cycle at CALLA and RETA due to the 20-bit return address).
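
The library problem in miniature, with the byte counts from above spelled out
in comments (memcpy stands in for any library routine built once and shared):

    #include <stddef.h>

    /* A library function is built once, with one fixed return
     * instruction, yet may be reached from near- and far-linked
     * callers alike:
     *   near caller:  CALL  #memcpy  ; pushes a 2-byte return address
     *   far  caller:  CALLA #memcpy  ; pushes a 4-byte (20-bit) one
     * RET pops 2 bytes, RETA pops 4 -- so one copy can only serve both
     * kinds of callers if every call site uses CALLA and the library
     * ends in RETA, at the small cost noted above.                   */
    extern void *memcpy(void *dst, const void *src, size_t n);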

So my proposal is: a far attribute for functions and constants (automatic for
strings, e.g. in printf), but for constants only if enabled with an additional
command-line switch.
And a marker for ld that ensures all linked code has the same 'far state' for
variables, so it is guaranteed that all code saves registers as 20 bit if far
constants are used anywhere in the project.
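
In concrete terms the proposal could look like this -- every attribute and
switch name below is hypothetical, none of them exists yet:

    /* All names here are hypothetical, illustrating the proposal only. */
    __attribute__((far)) void big_feature(void);   /* code above 64K    */
    __attribute__((far)) const char table[4096];   /* const data above
                                                      64K; an error
                                                      unless the switch
                                                      below is given    */

    /* hypothetical build:
     *   msp430-gcc -mfar-const -c module.c
     * -mfar-const would also emit the object-file marker, so ld can
     * refuse to mix objects whose 'far state' differs and 20-bit
     * register saves stay consistent across the whole project.        */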

JMGross

