It seems like the discussion so far on separate compilation has been
trying for some solution that's better in principle than the C/C++
model of header files.  If we take dynamic linking and library
versioning into account, I'm not convinced that's the case (though I'd
love to be convinced otherwise).  Apologies if I missed someone
covering this point already.

Say we have a library B depending on another library A.  If we're
trying to replace the current ecosystem of dynamic libraries, we would
like to be able to change and recompile A without touching B at all.
This only works if the parts of A that were likely to have changed
were somehow marked as off limits to the optimizer when B was built.
Only the programmer could possibly know this in general, so the
programmer has to be given some way of marking which parts of the
interface are likely to change.

Moreover, the structure of A should be such that B can be compiled by
looking at only a portion of the files of A.  This basically mandates
the existence of header files.  They could conceivably be generated by
the compiler, but they definitely want to be at least human readable
so that programmers can inspect them.

It's possible to separate the human-readable interface file from the
binary interface file, but it seems like such an approach would be
doomed to confusion.  For example, if functions that could and could
inlined were mixed in the same file, it would be extremely easy for
someone to mix the two up and cause horrible bugs that only show up in
*the next* version of the library.

The conclusion seems to be that you either need header files, or you
need to abandon the notion that upstream libraries can be recompiled
without recompiling downstream code.  Is this a correct
characterization of the situation?

Geoffrey
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
