On Feb 10, 2005, at 11:29 AM, John Oliver wrote:
>> Run multiple Linux kernel compiles on a standard x86 box. The
>> performance degrades much faster than one would expect given the
>> processor and disk utilization.
>
> ??? At the same time??? Never heard of that. And can't see how
> switching between kernels is going to "degrade performance".

You're missing the point.

At the same time, start a kernel compile in each of, say, 10 kernel source trees on the same box. That will show you what can happen on a development machine when 10 developers all decide to build their projects at the same time, in an extreme case.
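That scenario is easy to reproduce by hand. A minimal sketch of the launch loop follows; `sleep 1` stands in for the real build command, since the actual tree paths are an assumption about your setup:

```shell
# Sketch: kick off 10 "builds" in parallel, then wait for all of them.
# 'sleep 1' is a harmless stand-in for the real command, e.g.:
#   ( cd /usr/src/linux-$i && make bzImage )
for i in 1 2 3 4 5 6 7 8 9 10; do
    sleep 1 &
done
wait
echo "all builds finished"
```

With real kernel trees substituted in, the ten `make` processes will all be reading sources and writing objects at once, which is what drives the disk thrashing described below.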

The system will thrash so much trying to satisfy disk I/O requests for the 10 builds that, pretty soon, everything grinds to a halt as the system beats itself to death. This is generally more apparent on IDE/ATA systems than SCSI systems, especially where multiple-disk configurations are concerned.

Even on single-drive systems, multiple competing I/O requests will severely punish most IDE subsystems, while SCSI subsystems handle them with aplomb. The effect is a little less noticeable on SATA systems, from what I'm told, thanks to SATA's revised, point-to-point architecture (well, there is no longer a shared bus). SATA still has a hard time matching SCSI in multitasking I/O, though.
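If you want to see competing I/O on a single drive without tying up ten kernel trees, a few concurrent `dd` writers make a rough stand-in (the `/tmp` paths and tiny sizes are placeholders so this is safe to run anywhere; it is an illustration, not a benchmark):

```shell
# Rough stand-in for competing builds: several writers hitting one disk at once.
# Paths and sizes are arbitrary placeholders; scale count up to feel the pain.
for i in 1 2 3; do
    dd if=/dev/zero of="/tmp/thrash-$i" bs=1M count=4 2>/dev/null &
done
wait
ls /tmp/thrash-1 /tmp/thrash-2 /tmp/thrash-3
rm -f /tmp/thrash-*
```

Watch `vmstat 1` in another terminal while something like this runs with larger sizes; on a loaded IDE disk, the I/O-wait column is where the contention shows up.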

Gregory
--
Gregory K. Ruiz-Ade <[EMAIL PROTECTED]>
OpenPGP Key ID: EAF4844B  keyserver: pgpkeys.mit.edu


-- 
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-list
