Stas:

Actually, the overhead is ongoing: in a shared environment, calling
subroutines and/or accessing memory uses different instruction sequences,
which take more cycles to decode and dispatch.
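
A quick way to see those different instructions (a sketch of my own -- the
file pic_demo.c and its contents are just an illustration, not anything from
the perl sources) is to compile a trivial translation unit twice and compare
the assembly gcc produces:

    /* pic_demo.c -- compile it both ways and diff the output:
     *
     *     gcc -S -o nopic.s pic_demo.c
     *     gcc -S -fpic -o pic.s pic_demo.c
     *
     * In pic.s the global is reached through the global offset table (GOT)
     * and the call to bump() goes through the procedure linkage table (PLT);
     * on i386 the compiler also reserves %ebx to hold the GOT pointer.
     * Those indirections stay with the code for the life of the process.
     */

    int counter;        /* exported data: under -fpic its address is loaded
                           from the GOT before every access                */

    void bump(void)     /* exported function */
    {
        counter++;
    }

    void bump_twice(void)
    {
        bump();         /* under -fpic this call is routed through the PLT */
        bump();
    }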

Paul E Wilt 
Senior Principal Software Engineer
ProQuest Information and Learning
---------------------------------------------------------
http://www.proquest.com  mailto:[EMAIL PROTECTED]
300 North Zeeb Rd      Phone: (734) 302-6777
Ann Arbor, MI 48106    Fax:   (734) 302-6779
---------------------------------------------------------



-----Original Message-----
From: Stas Bekman [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 05, 2003 4:17 AM
To: Gisle Aas
Cc: Rafael Garcia-Suarez; [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: different versions of Perl


Gisle Aas wrote:
> Stas Bekman <[EMAIL PROTECTED]> writes:
> 
> 
>>Hmm, I wonder about this para from that file:
>>
>>---
>>In terms of performance, on my test system (Solaris 2.5_x86) the perl
>>test suite took roughly 15% longer to run with the shared libperl.so.
>>Your system and typical applications may well give quite different
>>results.
> 
> 
> I also found 15% slowdown with perlbench on Linux.

I should give it a try and see if my ideas make any difference.

>>The whole point of using shared libs is that if more than one app uses
>>the same library, it gets loaded only once. I don't know who wrote the
>>claim above, but have you tried running perl -le 'sleep 1000 while 1' in
>>another window (of course using the same perl) and then starting the
>>test suite? That way libperl.so stays loaded and the loading overhead
>>goes away.
> 
> 
> The slowdown is not in the loading.  The core perl code runs slower
> because it is compiled with -fpic to make it position independent when
> you enable 'useshrplib'.

If I understand the following description from the gcc manpage correctly:

        -fpic
            Generate position-independent code (PIC) suitable for use in a
            shared library, if supported for the target machine.  Such code
            accesses all constant addresses through a global offset table
            (GOT).  The dynamic loader resolves the GOT entries when the
            program starts (the dynamic loader is not part of GCC; it is
            part of the operating system).  If the GOT size for the linked
            executable exceeds a machine-specific maximum size, you get an
            error message from the linker indicating that -fpic does not
            work; in that case, recompile with -fPIC instead.  (These
            maximums are 16k on the m88k, 8k on the SPARC, and 32k on the
            m68k and RS/6000.  The 386 has no such limit.)

            Position-independent code requires special support, and
            therefore works only on certain machines.  For the 386, GCC
            supports PIC for System V but not for the Sun 386i.  Code
            generated for the IBM RS/6000 is always position-independent.

it affects only the loading time:

            The dynamic loader resolves the GOT entries when the program starts

Doesn't it imply that, once the entries are resolved, no further run-time
overhead is incurred?
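
For reference, this is roughly what the two cases look like for a single
access to a global on i386 (a sketch only; the exact output depends on the
gcc version and target, and the names here are illustrative):

    /* Approximately what gcc generates for "counter++" on i386:
     *
     *   without -fpic:
     *       incl    counter
     *
     *   with -fpic (the function prologue first sets up the GOT pointer):
     *       call    __i686.get_pc_thunk.bx
     *       addl    $_GLOBAL_OFFSET_TABLE_, %ebx
     *       ...
     *       movl    counter@GOT(%ebx), %eax  # fetch counter's address from the GOT
     *       incl    (%eax)                   # then touch it indirectly
     *
     * The dynamic loader fills counter's GOT slot only once, so the
     * resolution itself is a one-time cost.  The extra indirect load, and
     * the loss of %ebx as a general-purpose register, are paid on every
     * access, which is where the steady-state slowdown comes from.
     */
    extern int counter;

    void tick(void)
    {
        counter++;
    }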

__________________________________________________________________
Stas Bekman            JAm_pH ------> Just Another mod_perl Hacker
http://stason.org/     mod_perl Guide ---> http://perl.apache.org
mailto:[EMAIL PROTECTED] http://use.perl.org http://apacheweek.com
http://modperlbook.org http://apache.org   http://ticketmaster.com


-- 
Reporting bugs: http://perl.apache.org/bugs/
Mail list info: http://perl.apache.org/maillist/modperl.html
