----- Original Message -----
From: Grant Edwards
To: [email protected]
Sent: 16 Jun 2010 21:43:44
On 2010-06-16, JMGross <[email protected]> wrote:
>> I had this happen a long time ago when I included the different object
>> files or libraries in the wrong order with GCC under Linux. After
>> reordering them so that their content had already been referenced,
>> all was well. I think I had similar problems (immediately solved)
>> with mspgcc.
> Those problems were with the --gc-sections feature?
> It sounds to me like you were linking libraries before you linked the
> object files that referenced the routines in the library. That's a
> different, unrelated issue.
Not with the gc-sections feature. But I'm pretty sure it happened with object
files and not just libraries. (And I'm sure that no libraries were linked
before the object files, but maybe dependent libraries were in the wrong
order.)
But it is possible that the setup made a library first from part of the object
files and then linked it. As I said, it happened long ago.
I inherited these projects from my predecessor in the job. I then reorganized
the code base (and the makefiles) and never ran into this kind of problem
again.
I can't even tell anymore whether it was a gcc or an mspgcc project where this
happened, or both.
>> If these compiler flags allow different functions in the
>> same file to be kept or discarded depending on whether they are
>> used, it's obvious that they need to be considered unused when
>> they haven't been referenced yet.
>The garbage collection algorithm used by the linker shouldn't depend
>on what order you link the object files. If you've got examples where it
>seems to matter, please post them, because that's a bug and needs to be
>fixed.
I'm sorry, but everything that could be used to check or even reproduce this
behaviour was relocated to the null device long ago.
>> Unless the linker has been greatly improved since then, the problem
>> should still be there. Maybe I'm wrong.
> You seem to be assuming a single-pass linker. It works in two passes.
> First it reads all of the object files. Then it starts at all of the
> nodes (symbols) marked as "keep" and traverses the reference graph
> marking all reachable nodes.
> When it's done traversing the graph, it discards any nodes that
> haven't been reached from the initial set of "keep" nodes.
When I last read about the inner workings of the linking process, things
were different. But of course there have been many improvements in the past
years, and I didn't really care anyway.
And with today's surplus of memory and CPU power, it makes no difference to
keep everything first and sort things out later.
>> About the reduced effectiveness of the optimization, this one is
>> obvious. If the linker is allowed to include or discard parts of a
>> compilation unit depending on their usage, the compiler may not do
>> any optimization that crosses the boundary between functions.
> I'm afraid that's not at all obvious to me.
> Example: A C file contains two functions: foo() and bar(). foo() is
> called by various external functions. bar() is called only by foo().
> The compiler can inline bar() without causing any problems.
Sure, but the optimization process sometimes generates code where foo() and
bar() share common code after optimization.
If foo() is kept and bar() is not, what about the shared code?
After (re-)reading the explanation of -ffunction-sections, it looks like every
function is placed in its own section, so the optimizer should not optimize
across functions anyway, generating less efficient code, since it isn't
known at compile time whether the shared code will still be available after
linking. I'm sure I've seen this kind of optimization not too long ago, but
don't ask me for an example.
Of course the outcome won't be less efficient than having both functions in
separate compilation units.
I'd really like to put this kind of option (including optimization settings)
into the appropriate source file only (by a pragma) instead of having it
active for the whole project (unless one wants to write rather complex
makefiles).
Anyway, I just tried the -ffunction-sections and -gc-sections flags, and
while the compiler now places each function in its own section, like
".text.foo", the linker does not remove any of them from the build. And
I'm certain that the two functions in question are never referenced, as I
only just added them to the code. Also, I added some sections / moved the
vector table, and the code/data in these sections is never referenced
anywhere, yet it is still in the binary (and I do not explicitly keep it
in the linker script). I wonder what's (not) going on. It's ld 060404, part of
the mspgcc build from 12/2008.
Even more, I think the linker keeps everything I feed it as an input file,
whether anything in an object file is ever used or not. I normally only include
those object files which _are_ in use, but...
I just added a completely unrelated file with never-used functions to the
makefile, and my binary has grown. So what's wrong?
> IIRC, the linker will convert "long" branches into short/relative
> jumps at link time. At least it does that for all the other targets I
> use...
Interesting. Still two wasted bytes (I don't think the linker will move the
following code and adjust any other jumps if necessary), but faster.
I wonder whether this is done only for branches with unrelocated destinations,
or for already-linked ones too? I'm just curious. Altering the assembly
instructions is nothing a linker should do, IMHO.
>> Also, some addressing modes cannot be used under such a setup.
>I don't understand why. Can you provide an actual example?
I'm not sure whether the compiler ever uses relative addressing modes, but I
doubt they are usable if the destination is in a different segment. A fixed
distance to the destination is the basis of relative addressing, and
with separate segments there is no such fixed distance.
> Some things like optimizing addressing modes for far/short destinations
> have to be done at link time instead of compile time, but I don't see
> how that hurts anything.
The linker messes with the addressing modes? That's really new to me. Isn't it
just placing the proper relocated addresses into the holes left by the compiler?
Well, I'd rather be safe than sorry, and I guess someone else thought so too.
Why else is this whole -ffunction-sections thing an option, and disabled by
default?
Maybe I have been too suspicious and I'm sorry if I wasted your time.
I'm not an information source for ongoing compiler/linker development anyway,
since I don't use the latest version. In a production environment, an old
compiler with known bugs is better than a new one with perhaps
unknown bugs (same for embedded OS versions). We made that mistake once and
will never make it again unless we're really forced to.
Also, my boss pays me to work on the projects, not the compiler :) In his
opinion, I'm running too many experiments already and wasting too much time
(luckily he _needs_ me).
JMGross