I've run some not-too-scientific hard drive performance tests on my 800
MHz Athlon (Slot A), and the results seemed pretty good, but not quite
what I had hoped for.
Here is the system setup: an Athlon 800 MHz CPU, one 256 MB stick of NEC
PC133 RAM, an ASUS K7V motherboard (it has an Ultra66 controller built
into the chipset, among other niceties), an In-Win Q500 case (full
tower, 300 W), a 4 MB Diamond Stealth 2000 3D PCI video card (pretty
old, but it doesn't matter much in this system), a Promise Ultra66 PCI
controller (in one of the PCI expansion slots), a 10 GB IBM 14GXP on one
bus of the Promise controller, and a 45 GB IBM 75GXP on the other bus.
(The hard drives started out in a machine that didn't have onboard UDMA,
and I never got around to moving them to the onboard controller when I
put this machine together. Plus, I have other things on the onboard
controller that I would have to reconfigure if I moved them.)

Since the 10 GB drive can only handle UDMA33, I configured it by running
the following command and later adding it to my startup files:
hdparm -m16 -c1 -d1 -X66 /dev/hde. Since the 45 GB drive is on its own
bus on the Promise controller, I did the same for it, except I typed
hdparm -m16 -c1 -d1 -X68 /dev/hdg. (According to the man page, if I read
it right, this should set both drives to the normal optimized settings,
with the 10 GB drive explicitly set to Ultra33 and the 45 GB drive
explicitly set to Ultra66 mode.)
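
Just so it's all in one place, here is what that configuration amounts
to; if I read the man page right, running hdparm on a drive with no
option flags just prints the current settings, which is a handy way to
double-check that DMA, 32-bit I/O, and the multcount actually took:

    # UDMA33 drive on the first channel of the Promise controller
    hdparm -m16 -c1 -d1 -X66 /dev/hde
    # UDMA66 drive on the second channel
    hdparm -m16 -c1 -d1 -X68 /dev/hdg
    # no flags: report the current settings for both drives
    hdparm /dev/hde /dev/hdg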
I then ran hdparm -t /dev/hdx repeatedly on both drives. On the 10 GB
drive, the measured rate usually hung around 13 MB/s. That is about
where IBM says the drive's maximum transfer rate should be, so I was
happy there. On the 45 GB drive, the measured rate usually hung around
24 MB/s. According to IBM, the maximum transfer rate of that drive is
around 37 MB/s. I don't consider 24 MB/s bad for a current-day system
doing brute-force reads, with no file system overhead or seek times in
the picture, but it seems like I should be seeing more. That leaves open
the question of what the bottleneck is. Just to cover a few
possibilities: hdparm may not be that accurate with the fastest of the
fast IDE drives, there could be a bottleneck in the kernel, I may be
misunderstanding how to use hdparm, a noisy cable could be forcing the
rate down to Ultra33 without it being reported, and so on and so forth.
It would be nice if someone with the know-how could point out the most
likely bottleneck, or at least post test results that could give me
clues about what it probably is not.
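
For the noisy-cable theory in particular, one check I can think of is
asking the drive what mode it actually negotiated. If I read the man
page right, hdparm -i lists the UDMA modes the drive supports and marks
the active one with a star, so something like the following should show
whether the 45 GB drive is really running at udma4 (Ultra66) or has
quietly fallen back to udma2 (Ultra33):

    # identification info from the driver, including the current DMA mode
    hdparm -i /dev/hdg | grep -i dma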
Something else I did was run hdparm -T /dev/hdx multiple times. I saw
values ranging from 55 MB/s to 128 MB/s. The results seemed to vary
mostly with how much memory was in use. When I didn't have much of
anything in memory, I saw rates like 55 MB/s. When a moderate amount of
memory was in use (say, with X, SETI@home, Netscape, and a few other
things loaded), I saw the higher rates, all the way up to 128 MB/s. I
haven't tried it yet with memory jammed full of stuff. (It takes too
long to fill up 256 MB of RAM under Linux at the moment.)
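
If anyone wants to try reproducing that, repeating the test just amounts
to something like this (substitute your own drive letter; free shows how
much memory is in use at the time, which is the number the results
seemed to track):

    # how much RAM is currently in use, in MB
    free -m
    # repeat the buffer-cache timing a handful of times
    for i in 1 2 3 4 5; do hdparm -T /dev/hdg; done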
I realize that for general applications, disk access time is probably
going to be the biggest killer, which is why I went after 7,200 RPM
disks, but it is also always nice to eke as much raw speed as one can
out of the disk I/O subsystem. Also, it seems that the faster the cache,
the lower the latency on a cache miss, while the bigger the cache, the
less likely a miss is to occur.