On Tue, Oct 21, 2008 at 4:24 PM, Bill McGonigle <[EMAIL PROTECTED]> wrote:
>> Is it because the shell tools fork, and children aren't counted?
>
> If that were so my shell numbers should be lower.
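For what it's worth, a quick sanity check suggests children's CPU isn't simply dropped: the POSIX `times` builtin reports the shell's own user/sys time on one line and the accumulated time of its waited-for children on a second line (a sketch, assuming a POSIX /bin/sh; the loop count is arbitrary busywork):

```shell
#!/bin/sh
# Sketch: the shell accumulates CPU time for children it has waited for.
# `times` prints two lines: the shell's own user/sys time, then the
# totals for all its terminated children.
( i=0; while [ "$i" -lt 100000 ]; do i=$((i+1)); done )  # subshell child burns some CPU
times
```

If the second line stayed at zero after the subshell finished, that would point at the "children aren't counted" theory.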
Good point.  As you suggest, maybe time(1) is just borken on MP.  Neither
the time(1) man page, nor the one for times(2), suggests anything is
wrong with MP, but man page accuracy has never been Linux's strength.

> $ rpm -qa | wc -l
> 1715

blackfire$ rpm -qa | wc -l
2059

> $ uname -a
> Linux dhd.bfc 2.6.26.3-29.fc9.i686 #1 SMP Wed Sep 3 03:42:27 EDT 2008 i686 i686 i386 GNU/Linux

blackfire$ uname -a
Linux blackfire 2.6.25.14-69.fc8 #1 SMP Mon Aug 4 14:20:24 EDT 2008 i686 i686 i386 GNU/Linux

> model name : Intel(R) Pentium(R) 4 CPU 2.50GHz

Single core, then?  (I can't keep Intel's CPU nomenclature straight
anymore.)

So.  Curiouser and curiouser.  You've got more RAM.  Slightly slower CPU
clock.  Your RPM database file size is not that much bigger than mine.
I've got a few more packages installed.  All in all, pretty similar.
Yet it takes your computer 3 minutes to run the Perl script, and mine
7 seconds.  MP can't explain *that*.  There's got to be something else
going on here.

Maybe you have a version of rpm/yum/yum-utils/Python/Perl/nethack that
is radically slower for some reason?  Here's me:

blackfire$ rpm -q rpm yum yum-utils python perl
rpm-4.4.2.2-7.fc8
yum-3.2.8-2.fc8
yum-utils-1.1.14-4.fc8
python-2.5.1-26.fc8.2
perl-5.8.8-40.fc8

> Perl is poor at SMP (gah! perl threads!).

I've never had to worry about Perl MP.  Sounds like I should be glad.  :-)

> Good point.  You should be able to hack my perl script to do that in
> about 10 minutes.  :)

Challenge accepted...  Okay, intersect.pl will follow in a separate
message.

Running time trials... it seems like <bash+sort+comm> typically takes
roughly the same wall clock time as <bash+intersect.pl>.  At least, on
my box, it does.

> Hrm, could there be something about the 'tail' pipe that causes CPU
> affinity?  I have no idea how SMP scheduling really works in linux.

Ditto.

-- Ben
_______________________________________________
gnhlug-discuss mailing list
[email protected]
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
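P.S.  For anyone following along, here's the shape of the <bash+sort+comm> approach being timed against intersect.pl — a minimal sketch on two throwaway lists (the file names and contents here are made up, not from the actual trial):

```shell
#!/bin/sh
# Sketch of the sort+comm intersection: comm(1) requires its inputs to
# be sorted; -12 suppresses lines unique to each file, leaving only
# lines common to both (the intersection).
printf 'bash\nperl\nrpm\n'   > list_a   # already sorted
printf 'perl\npython\nrpm\n' > list_b   # already sorted
comm -12 list_a list_b                  # prints: perl, rpm
rm -f list_a list_b
```

On unsorted input you'd feed each file through sort(1) first, which is where most of the wall clock goes on big package lists.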

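P.P.S.  The "single core, then?" question is easy to settle without decoding Intel's naming — count what the kernel sees (a sketch; /proc/cpuinfo is Linux-specific):

```shell
#!/bin/sh
# Count the logical processors the kernel reports.  1 means the SMP
# kernel is running on a single-core, non-hyperthreaded CPU.
grep -c '^processor' /proc/cpuinfo
```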