On Tue, Apr 18, 2006 at 07:53:30PM -0700, [EMAIL PROTECTED] wrote:
> During the Christmas holidays in 1989, I ran a little benchmark
> (`forktime') on various machines. The program was roughly
>
> loop many times
> fork
> parent waits
> child exits
>
> I ran it twice or more linked statically and twice or more linked
> dynamically on each machine. The machines were lightly loaded. For
> 1000 iterations, user time is consistently about 100ms for
> statically-linked forktime, but 4-11 times that when dynamically
> linked (i.e., using shared libraries).
>
> System times are more interesting. SunOS 3.5 had no shared libraries,
> so Sun seems to have sped up fork() by about 50% between SunOS 3.5 and
> 4.0 for statically-linked programs, but then lost all that performance
> and more when using shared libraries on 4.0, which took almost 4 times
> as much system time as statically-linked forktime on 4.0.
>
> We scratched our heads about what in the world the kernel was doing
> for 42ms per fork. Even on a Sun 3/50, that's a lot of cycles.
>
> This was a long time ago, but these are actual measurements.
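For concreteness, a minimal C reconstruction of the forktime loop quoted
above might look like this (the original source isn't shown, so the file
name and build lines in the comment are only illustrative):

    /*
     * forktime.c -- reconstruction of the benchmark described above.
     * Build it both ways and compare times, e.g.:
     *     cc -o forktime-static -static forktime.c
     *     cc -o forktime-dyn            forktime.c
     *     time ./forktime-static 1000
     *     time ./forktime-dyn    1000
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(int argc, char **argv)
    {
        int i, iterations = (argc > 1) ? atoi(argv[1]) : 1000;

        for (i = 0; i < iterations; i++) {
            pid_t pid = fork();
            if (pid == 0) {
                _exit(0);       /* child exits immediately */
            } else if (pid > 0) {
                wait(NULL);     /* parent waits for the child */
            } else {
                perror("fork");
                return 1;
            }
        }
        return 0;
    }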
It is still somewhat reproducible. I had a little chat about this
issue with a colleague of mine, who happens to be an expert in
performance tuning, and the consensus seems to be the following:
1. First of all, you see the dynamic linker resolving the call to _exit
in every child process, because of the default lazy binding policy.
You can instruct it to resolve everything up front instead by setting
LD_BIND_NOW, which cuts the time roughly in half (see the harness
sketched after this list).
2. Even if you do #1, the dynamically linked binary is still slower than
the statically linked one, because its library calls are not direct and
go through an extra level of indirection.
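As a sketch of how point 1 could be checked today (assuming a POSIX
system whose runtime linker honors LD_BIND_NOW, as SunOS/Solaris and
glibc do), the hypothetical harness below runs the same fork/_exit loop
twice in one dynamically linked binary: first with the default lazy
binding, then re-exec'ed with LD_BIND_NOW=1, reporting user and system
time for each pass with the children included:

    /*
     * bindtime.c -- hypothetical harness: time the fork/_exit loop with
     * lazy binding, then re-exec with LD_BIND_NOW=1 and time it again.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/times.h>
    #include <sys/wait.h>

    #define ITERATIONS 1000

    static void fork_loop(void)
    {
        int i;
        for (i = 0; i < ITERATIONS; i++) {
            pid_t pid = fork();
            if (pid == 0)
                _exit(0);       /* the first _exit call in each child is
                                   what lazy binding makes expensive */
            else if (pid > 0)
                wait(NULL);
            else {
                perror("fork");
                exit(1);
            }
        }
    }

    int main(int argc, char **argv)
    {
        struct tms t0, t1;
        long hz = sysconf(_SC_CLK_TCK);

        times(&t0);
        fork_loop();
        times(&t1);

        /* report self + waited-for children, the way time(1) would */
        printf("%-14s user %4ldms  sys %4ldms\n",
               getenv("LD_BIND_NOW") ? "LD_BIND_NOW=1" : "lazy binding",
               (long)(((t1.tms_utime + t1.tms_cutime) -
                       (t0.tms_utime + t0.tms_cutime)) * 1000 / hz),
               (long)(((t1.tms_stime + t1.tms_cstime) -
                       (t0.tms_stime + t0.tms_cstime)) * 1000 / hz));

        if (getenv("LD_BIND_NOW") == NULL) {
            /* second pass: have the runtime linker bind everything at
               startup instead of on first call */
            setenv("LD_BIND_NOW", "1", 1);
            execvp(argv[0], argv);   /* assumes argv[0] is resolvable */
            perror("execvp");
            return 1;
        }
        return 0;
    }

Linking with the corresponding link-editor option (-z now, where the
linker supports it) should behave like the LD_BIND_NOW run, while a
statically linked build sidesteps both the startup resolution and the
indirect calls from point 2.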
Thanks,
Roman.
P.S. And yes, Solaris 10 did away with the static system libraries :-(