* Al <oss.el...@googlemail.com> wrote:

> While porting to Cygwin I can be happy when they use it. My first
> impression is that those libraries are easier to port. They produce
> libraries with a *.dll.a suffix, like the native libraries of Cygwin.

Just a few years ago, autotools (especially w/ libtool) was totally
unusable for any kind of isolated build (not just cross-compiling):
it was based on completely stupid assumptions and practically
undebuggable.

All the functionality could easily be done w/ shell functions
instead of syntactically unstable m4 macros. I've started a little
bit of hacking @ zlib (see oss-qm repo) to show how that could look.

> The other example is libz. AFAIK it has a manually written configure
> script. It generates libz.so. bzip2 ends up with error messages unless
> I build it statically.

AFAIK zlib properly builds a shared library as well as a static one
(at least on unixoid targets). Win32 targets are still a bit messy,
but there's ongoing work on that.
 
> I still try to understand the relation between shared libraries and
> dynamic libraries. I read that dynamic libraries are linked at runtime.
> I also read that you can dynamically link against a shared as well as
> against a normal library.
> 
> But isn't a normal library also shared when multiple programs link it
> at runtime, or does "shared library" mean it is shared in memory (PIC)?

Well, static libraries are essentially an archive of plain object
files (usually with an additional symbol table for faster lookup).
They get linked in at build time, just like plain objects.
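
Rough sketch of what that means in practice (file and library names
are made up; the build commands in the comments assume a unixoid
toolchain):

    /* util.c -- one member of a hypothetical static library libutil.a
     *
     * cc -c util.c                 # plain object file
     * ar rcs libutil.a util.o      # archive it; 's' adds the symbol table
     * cc -o app main.c -L. -lutil  # pulled in at build time, like any .o
     */
    int util_add(int a, int b)
    {
        return a + b;
    }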

Shared libraries are different: their objects really get linked
together; certain sections (especially symbol tables) are merged and
local references resolved. A shared library actually is one big object
file (that's why they're also called shared objects). These objects
are then loaded and linked in at process startup (or later, using
dlopen()). At this point, "shared" means that multiple programs can
use the very same code from one shared object file.
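
For illustration, a minimal dlopen() sketch (assuming a glibc-ish
system where the math library lives in libm.so.6; build w/
"cc -o demo demo.c -ldl"):

    #include <stdio.h>
    #include <stdlib.h>
    #include <dlfcn.h>

    int main(void)
    {
        /* load the shared object at runtime */
        void *handle = dlopen("libm.so.6", RTLD_LAZY);
        if (!handle) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return EXIT_FAILURE;
        }

        /* look up a symbol in the shared object's symbol table */
        double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
        if (!cosine) {
            fprintf(stderr, "dlsym: %s\n", dlerror());
            dlclose(handle);
            return EXIT_FAILURE;
        }

        printf("cos(0.0) = %f\n", cosine(0.0));
        dlclose(handle);
        return EXIT_SUCCESS;
    }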

Another level of sharing happens at runtime, via shared pages in
the MMU. This is a bit more complex: you need to construct the
binary code in a way that multiple processes can map it at
different address offsets (historical systems required the addresses
to be fixed at compile time, which obviously is impractical).
That's where -fPIC comes into play: the code is generated in a
way that its position in the process's address space doesn't matter
anymore (at least at page granularity). So processes can directly
map the shared object's text segments at quite arbitrary offsets
without touching the actual code pages, and the MMU only has to
maintain one physical copy of them.
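
A trivial way to watch that position independence in action
(assuming ASLR is enabled, as on most current Linux systems;
run it twice and compare the output):

    #include <stdio.h>

    int main(void)
    {
        /* converting a function pointer to void* is a POSIX-ism;
         * with ASLR the printed address usually differs per run,
         * yet all processes share the same physical libc text pages */
        printf("printf() is mapped at %p here\n", (void *)printf);
        return 0;
    }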

PIC tends to be a little bit larger and slower than non-PIC
(more indirect addressing), but on most of today's machines
the impact on real workloads is questionable.


cu
-- 
----------------------------------------------------------------------
 Enrico Weigelt, metux IT service -- http://www.metux.de/

 phone:  +49 36207 519931  email: weig...@metux.de
 mobile: +49 151 27565287  icq:   210169427         skype: nekrad666
----------------------------------------------------------------------
 Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme
----------------------------------------------------------------------
