On Wed, Jul 14, 2010 at 06:29:14PM -0500, Nicolas Williams wrote:
> On Thu, Jul 15, 2010 at 01:27:06AM +0200, ????? ???????????? wrote:
> > A fairly simple - but not perfectly accurate - test case would be to
> > run ksh93 on Linux and Solaris and let it call /bin/true in a loop.
> > 
> > The numbers that follow are for identical ksh93 versions built with
> > the same gcc version, with /bin/true being a simple int main(void) { return 0; }:
> > 
> > 1. Timing values for Solaris B134 on bare metal hardware:
> > time ksh -c 'integer i ; for (( i=0 ; i < 100000 ; i++ )) ; do /bin/true ; done'
> > 
> > real    9m29.75s
> > user    0m8.55s
> > sys     2m46.89s

The real time number here doesn't look right.  You're only using about
3 minutes of CPU time (8.55s user + 2m46.89s sys, roughly 2m55s total),
but you're spending almost 9.5 minutes of wall time to complete the
operation, so the loop is off CPU, waiting, for more than two thirds of
its run.

When I run this command on my system, I get the following timing data:

real       53.750666106
user       14.188122722
sys        30.753105797

This still isn't as fast as Linux, but it's not 9 minutes.

When I trussed this, I got the following output (the invocation I used
is sketched just after the table):

syscall               seconds   calls  errors
_exit                    .000  100001
read                     .000       1
open                     .833  200009  100002
close                    .229  100007
time                     .000       1
brk                      .000       8
stat                     .000      12       3
lseek                    .000       1
getpid                   .291  200004
getuid                   .000       2
fstat                    .000       8
access                   .000       1
getsid                   .153  100000
getgid                   .000       2
sysi86                   .169  100001
ioctl                    .407  200004
execve                  8.471  100002
umask                    .000       2
fcntl                    .000       1
readlink                 .000       1
sigprocmask              .145  100000
sigaction                .187  100003
sigfillset               .000       1
getcontext               .213  100002
setcontext               .221   99999
setustack               2.645  100002
waitid                  1.171  399786  299786
mmap                    1.823  600017
mmapobj                 1.843  100006
getrlimit                .176  100002
memcntl                  .932  300011       1
sysconfig                .142  100008
sysinfo                  .165  100003
vforkx                  1.531  100000
yield                    .000       1
lwp_sigmask              .934  600001
lwp_private              .148  100002
schedctl                 .147  100001
resolvepath             1.606  300010
stat64                   .763  200002
                     --------  ------   ----
sys totals:            25.359 4699925 399792
usr time:              16.526
elapsed:              222.140
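
For reference, the counting run was done with something along these
lines (my reconstruction; -c prints the per-syscall totals above while
suppressing per-call tracing, and -f follows the vforked children):

    truss -c -f ksh -c 'integer i ; for (( i=0 ; i < 100000 ; i++ )) ; do /bin/true ; done'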

Keep in mind that truss introduces a lot of probe effect, so don't take
these numbers as the actual speed.  Still, the system call that's using
the most time here is execve, not vfork.  And since the wall time is
still much higher than the CPU time, I would investigate where this
process is waiting.
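
If you want to see the execve cost without the truss probe effect, a
quick DTrace measurement along these lines should work (a sketch; the
glob matches whatever the exec syscall is named on your build):

    dtrace -n '
    syscall::exec*:entry { self->ts = timestamp; }
    syscall::exec*:return /self->ts/ {
            @["exec latency (ns)"] = quantize(timestamp - self->ts);
            self->ts = 0;
    }'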

A couple of possibilities:

1. Swap reservations are done differently on Linux than on Solaris.
You might be running out of memory, or waiting for swap to become
available (a couple of quick checks follow this list).

2. Perhaps ksh is waiting for the wrong thing to complete here?
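
To rule out possibility 1, I'd watch swap reservation while the loop is
running, using the standard tools:

    swap -s     # summary of allocated/reserved/available swap
    vmstat 5    # watch the free and sr columns for memory pressure

If available swap trends toward zero during the run, the vforks are
stalling on reservation.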

I would use DTrace to find out what's causing your processes to go off
CPU, and aggregate on the stacks that are spending the most time
waiting.  A sketch follows.
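
Something like this (a sketch, not tested here; adjust the execname
predicate to match your shell processes):

    #!/usr/sbin/dtrace -s

    /* record the moment a ksh thread leaves the CPU */
    sched:::off-cpu
    /execname == "ksh"/
    {
            self->ts = timestamp;
    }

    /* on wakeup, charge the wait time to the kernel stack it blocked in */
    sched:::on-cpu
    /self->ts/
    {
            @[stack()] = sum(timestamp - self->ts);
            self->ts = 0;
    }

    /* after 60 seconds, keep the 10 stacks with the most wait time */
    tick-60s
    {
            trunc(@, 10);
            exit(0);
    }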

HTH,

-j