Matt Dillon wrote:
>
> :Thunderbird 900, with 256 MB of PC-133 memory, and using 3 - ATA-66
> :HD's on different controllers. The elapsed time dropped from 58:16 to
> :45:54 by using softupdates.
> :
> :Kent
>
> That sounds about right for -pipe. The original email was
> 1 hour vs 40 minutes, a 20 minute difference which seemed a bit
> high (what I would expect without -pipe). 46 minutes versus 58 minutes
> is only a 12 minute 20 second difference, which is more in line with
> what I would expect.
>
> Most of the savings occurs during the dependency and
> cleaning step(s). The system creates and deletes a huge
> number of files in a huge number of directories, and softupdates
> really helps there.
>
> Softupdates is not helping much during the actual compilation,
> which is a cpu-bound step if you use -pipe (the creation of the
> object files costs nothing because there is no other disk activity
> going on).
One of the differences between the Thunderbird and the P-III is the
FSB. AMD claims an effective 200 MHz out of a 100 MHz bus. The P-IIIs
are mostly using a 133 MHz FSB. I have a dual-processor motherboard
arriving shortly with a pair of 866's. That's the next test. I have a
cluster project that would be very similar to a buildworld. It will
end up using two P-II 400's, the AMD, and the P-IIIs. Some people I
work with run batch jobs overnight, but I think they could wait for
them to run on a small version of Purdue's ACME. A few hours of their
time would pay for a small cluster. They don't have any problem with
work; it is turning jobs around that seems to be the bottleneck.
>
> The buildworld hits various choke points -- even with -j 128, if
> there are only 30 files in a library you will generally only see
> 30 compiles going at once. The final library link stage chokes
> it down to one process and this will become a pure bandwidth issue
> for your disk subsystem for a second or so (for the larger libraries).
I have gotten used to watching my buildworlds with top on occasion.
I have setiathome running niced. On the AMD system, when the
buildworld is hitting an I/O bottleneck, seti will be using 40% or
more of the system. When the buildworld is compute-bound, seti doesn't
get any time. I think the CPU time accrued by seti is a sort of
integrated history of how much the system was left idle by I/O
bottlenecks during the buildworld. It tracks the %CPU value reported
by time fairly well. With softupdates, the %CPU value reported by
time on the AMD went above 60% for the first time. I think this
indicates I have an overall I/O bottleneck.
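A more direct way to watch the same thing, without using seti as a proxy, is to sample disk activity alongside the build and match the idle-CPU stretches to I/O bursts afterwards. A rough sketch (the /usr/src path, -j level, and 5-second interval are assumptions, not anything from the original mail):

```shell
#!/bin/sh
# Sketch: log iostat samples in the background while buildworld runs,
# so periods of low %CPU in the time(1) output can be lined up with
# bursts of disk traffic in the iostat log.
iostat -w 5 > /tmp/build-iostat.log 2>&1 &   # sample disk stats every 5 s
IOSTAT_PID=$!
( cd /usr/src && time make -j8 buildworld ) > /tmp/buildworld.log 2>&1
kill $IOSTAT_PID                              # stop sampling when the build ends
```
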
The current version of seti writes a small blip of data to the HD
about every 30-60 seconds, so there isn't much HD interaction going
on there. I'm thinking about adding two Ultra-160 SCSI drives to the
system at some point. For the kind of stuff I do, iozone seems to
cover the HD activity the best. The tagged queueing of SCSI deals with
small-record random I/O much better than the IDE drives do. From what
I have read, FreeBSD supports tagged queueing on the new IBM IDE
drives, but I don't have any of them to test with.
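As an illustration, a small-record random I/O comparison along those lines could be run with iozone roughly like this (the mount points and sizes are assumptions; -i 0 writes the test file and -i 2 then exercises random reads and writes on it):

```shell
#!/bin/sh
# Sketch, assuming scratch filesystems mounted at /ide and /scsi.
# -r 4k uses a 4 KB record size -- the small-random-I/O case where
# SCSI tagged queueing should show the biggest win over IDE.
for fs in /ide /scsi; do
    echo "== $fs =="
    iozone -i 0 -i 2 -r 4k -s 128m -f "$fs/iozone.tmp"
    rm -f "$fs/iozone.tmp"
done
```
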
One other point that I would like to understand is why -j4 takes
longer on all of my systems. That goes against what everyone claims
should happen.
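One way to pin that down is to time the same build at several -j levels and compare elapsed time against CPU time. A rough sketch (the source path and job counts are assumptions):

```shell
#!/bin/sh
# Sketch: time "make buildworld" at several -j levels.  If elapsed
# time rises at -j4 while CPU time stays roughly flat, the extra
# jobs are probably just fighting over the disk rather than the CPU.
cd /usr/src || exit 1
for j in 1 2 4 8; do
    make cleandir > /dev/null 2>&1
    echo "== -j$j =="
    time make -j"$j" buildworld > /dev/null 2>&1
done
```
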
Kent
>
> -Matt
--
Kent Stewart
Richland, WA
mailto:[EMAIL PROTECTED]
http://kstewart.urx.com/kstewart/index.html
FreeBSD News http://daily.daemonnews.org/
To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message