> I read that message and I'm afraid I'm puzzled.
> 
> Do you mean that GHC-compiled Haskell libraries don't have a stable
> ABI ?  I can see that that might be true, but it might also be
> possible to have a souped-up dynamic linker that could fix things up
> (possibly with, as you say in the message, adverse effects on the
> sharedness of the library).

Well, there wouldn't be any benefit to GHC in the sense that libraries could
be replaced without recompiling the binaries, because the dependencies are
much more complicated than simply the function names and types exported by
the library.  However, if you compile your programs without cross-module
optimisation (or fix GHC to not do cross-module optimisation from a
library), then the libraries become more stable.
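
To illustrate, here's a rough C analogy of my own (not anything GHC
actually emits): cross-module optimisation behaves much like an inline
function defined in a library's header, where the callee's body gets
baked into the caller's own text:

    /* inline_demo.c -- a C analogy of my own; names are made up.
     * Cross-module optimisation acts like an inline function in a
     * header: the caller's text contains a copy of the callee's body,
     * so dropping in a new version of the library can never update
     * binaries that were linked against the old one. */
    #include <stdio.h>

    /* Pretend this came from the library's header file. */
    static inline int scale(int x) { return x * 2; }

    int main(void)
    {
        /* scale's body is compiled into this binary; a new library
         * with a different scale() would be silently ignored. */
        printf("%d\n", scale(21));
        return 0;
    }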

Even C is having trouble in this area, though: witness the recent problems
with upgrading versions of glibc.  They tried to avoid bumping the major
version of the shared library, but didn't manage to keep the ABI stable,
with the result that there were a whole host of weird incompatibilities.

> But if that's all that the problem is it doesn't explain why even when
> exactly the same object files are put together via dynamic linking
> instead of just static it doesn't work (as I infer from people's
> messages - I haven't tried it).

Actually, I didn't know you could do this (perhaps my misunderstanding
comes from the fact that you certainly couldn't do it with older ld.so's;
maybe the Linux one is more generic).

And it does seem to work!  Cool.
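
Concretely, the experiment looks something like this (a minimal sketch;
the file and library names are mine, and this is i386 Linux behaviour --
other platforms or toolchains may refuse non-PIC objects outright):

    /* shlib_demo.c -- minimal sketch of the above experiment; the
     * file and library names are mine.  On i386 Linux, ld will turn
     * objects compiled *without* -fpic into a shared library; the
     * code pages then carry text relocations, which the dynamic
     * linker patches (and thereby unshares) at load time:
     *
     *   gcc -c shlib_demo.c                    # note: no -fpic
     *   gcc -shared shlib_demo.o -o libdemo.so
     *
     * "readelf -d libdemo.so" should show a TEXTREL entry: the
     * library is dynamically linkable, just not position-independent.
     * (Modern 64-bit toolchains typically reject this outright.) */
    int the_answer(void)
    {
        return 42;
    }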

> I was under the impression that at least on sensible platforms like
> Linux, the relocation and linking that is done by the dynamic linker
> is very similar to that done by the compile-time linker.  In fact, on
> Linux you don't even need to compile your code -fpic: without it there
> are simply lots more relocations in the resulting object files and
> hence in the shared library, and you end up not sharing most of the
> text (because the dynamic linker has to edit it when relocating it).
> So without -fpic things work, but there's just a performance penalty.
>
> You say:
> > ... for example the assumption that a data object consists of a symbol
> > followed by a fixed amount of data, so that certain data objects can be
> > copied into the binary's data segment at runtime.  GHC's object files
> > don't follow these rules, so can't be made into shared objects easily.
> 
> It seems to me that this view of `data object' is simply the
> definition of what ought to go into the initialised data segment of
> the resulting shared object.  Unix shared libraries don't have a view
> about what a `data object' is.

No, but -fpic does.  As I recall, it inserts .size directives so that data
objects can be copied into the binary's data segment at load time.  This is
because static data referenced from the program must be resolved to an
absolute address at link-time (this isn't the case with DLLs, where the
program may reference the static data through an indirection, but that
requires knowing at compile-time whether you're going to be linking with a
shared or static library).  The concept of .size (and .type?) doesn't fit
with GHC's strange storage layout, with data mixed in with code.
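
To make that concrete, here's a hedged sketch of the copy mechanism
(the file and symbol names are mine):

    /* copy_reloc.c -- a hedged sketch of the copy mechanism; file and
     * symbol names are mine.
     *
     * In the library (compiled -fpic into libtab.so):
     *   int table[4] = { 1, 2, 3, 4 };   -- gcc emits ".size table, 16"
     *
     * When a non-PIC executable refers to `table` at an absolute
     * address, the link editor reserves 16 bytes in the binary's own
     * data segment and records an R_386_COPY relocation; at load time
     * ld.so copies the library's initialised data into that slot.
     * The copy is only possible because .size recorded the object's
     * extent -- exactly the metadata GHC's mixed code-and-data layout
     * can't provide. */
    #include <stdio.h>

    extern int table[4];   /* defined in libtab.so, copied at load time */

    int main(void)
    {
        printf("%d\n", table[0]);
        return 0;
    }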

But it seems that -fpic isn't required for dynamic linking, just for
sharedness, so we may have dynamically-linked libraries for GHC soon...

Thanks for the pointer, Ian!

Cheers,
        Simon
