Here are some tests I just ran to see how various hdparm options
affected my system's hard drive throughput (i.e., raw data
transfer speed).
Note: hdparm can do terrible things to your hard drive and your
data, so read the manual and be very careful what you type when
you use this program.
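Before changing anything, it is worth looking at what the drive
reports without modifying it. Running hdparm with no option flags
just lists the current settings, and -i prints the identification
info the drive gave the kernel at boot (assuming /dev/hda is your
drive, as in the tests below):
hdparm /dev/hda      # list current settings (read-only)
hdparm -i /dev/hda   # show drive identification (model, modes, etc.)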
1-----------------------------------------------------------------
List Debian's default hdparm settings:
hdparm /dev/hda
/dev/hda:
multcount = 0 (off)
I/O support = 0 (default 16-bit)
unmaskirq = 0 (off)
using_dma = 0 (off)
keepsettings = 0 (off)
nowerr = 0 (off)
readonly = 0 (off)
readahead = 8 (on)
geometry = 4865/255/63, sectors = 78165360, start = 0
busstate = 1 (on)
Test throughput:
hdparm -t /dev/hda
Timing buffered disk reads: 64 MB in 16.96 seconds = 3.77 MB/sec
Results:
This is the system throughput without any hard drive optimizations.
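Note that -t times buffered reads from the disk itself. hdparm
also offers -T, which times reads from the kernel's buffer cache
and shows what the processor/memory side can do; running both in
one pass is a handy sanity check that the drive, not memory, is
the limiting factor:
hdparm -Tt /dev/hda   # -T: cached reads, -t: buffered disk reads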
2-----------------------------------------------------------------
Turn on recommended hard drive optimizations:
hdparm -m16 -c1 -d1 -a8 /dev/hda
/dev/hda:
setting fs readahead to 8
setting 32-bit I/O support flag to 1
setting multcount to 16
setting using_dma to 1 (on)
multcount = 16 (on)
I/O support = 1 (32-bit)
using_dma = 1 (on)
readahead = 8 (on)
Test throughput:
/dev/hda:
Timing buffered disk reads: 64 MB in 1.88 seconds = 34.04 MB/sec
Timing buffered disk reads: 64 MB in 1.87 seconds = 34.22 MB/sec
Timing buffered disk reads: 64 MB in 1.87 seconds = 34.22 MB/sec
Results:
Throughput increases to 907.7 percent of what it was before these
optimizations were turned on.
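For reference, here is what each of those flags does (paraphrasing
the hdparm man page):
hdparm -m16 -c1 -d1 -a8 /dev/hda
#  -m16  set the count for multiple-sector I/O (multcount) to 16
#  -c1   enable (E)IDE 32-bit I/O support
#  -d1   enable the using_dma flag for the drive
#  -a8   set the filesystem read-ahead to 8 sectors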
3-----------------------------------------------------------------
Turn off 32-bit I/O:
hdparm -m16 -c0 -d1 -a8 /dev/hda
/dev/hda:
setting fs readahead to 8
setting 32-bit I/O support flag to 0
setting multcount to 16
setting using_dma to 1 (on)
multcount = 16 (on)
I/O support = 0 (default 16-bit)
using_dma = 1 (on)
readahead = 8 (on)
Test throughput:
/dev/hda:
Timing buffered disk reads: 64 MB in 1.87 seconds = 34.22 MB/sec
Timing buffered disk reads: 64 MB in 1.87 seconds = 34.22 MB/sec
Timing buffered disk reads: 64 MB in 1.87 seconds = 34.22 MB/sec
Results:
Throughput did not change.
4-----------------------------------------------------------------
Turn off DMA support:
hdparm -m16 -c0 -d0 -a8 /dev/hda
/dev/hda:
setting fs readahead to 8
setting 32-bit I/O support flag to 0
setting multcount to 16
setting using_dma to 0 (off)
multcount = 16 (on)
I/O support = 0 (default 16-bit)
using_dma = 0 (off)
readahead = 8 (on)
Test throughput:
/dev/hda:
Timing buffered disk reads: 64 MB in 16.12 seconds = 3.97 MB/sec
Timing buffered disk reads: 64 MB in 16.14 seconds = 3.97 MB/sec
Timing buffered disk reads: 64 MB in 16.13 seconds = 3.97 MB/sec
Results:
Throughput decreases to 11.6 percent of what it was before DMA was
turned off.
5-----------------------------------------------------------------
Turn 32-bit I/O back on:
hdparm -m16 -c1 -d0 -a8 /dev/hda
/dev/hda:
setting fs readahead to 8
setting 32-bit I/O support flag to 1
setting multcount to 16
setting using_dma to 0 (off)
multcount = 16 (on)
I/O support = 1 (32-bit)
using_dma = 0 (off)
readahead = 8 (on)
Test throughput:
/dev/hda:
Timing buffered disk reads: 64 MB in 8.76 seconds = 7.31 MB/sec
Timing buffered disk reads: 64 MB in 8.77 seconds = 7.30 MB/sec
Timing buffered disk reads: 64 MB in 8.77 seconds = 7.30 MB/sec
Results:
Throughput increases to 183.9 percent of what it was before 32-bit
I/O support was turned on.
Conclusion:
32-bit I/O only affects throughput if DMA support is turned off.
(Compare section 3, where turning off 32-bit I/O with DMA on
changed nothing, with section 5, where turning it back on with
DMA off nearly doubled throughput.)
6-----------------------------------------------------------------
Turn on interrupt unmasking: (note: the hdparm manual says this
could lead to severe data corruption, but I am trying it anyway
just to see if it improves performance.)
hdparm -m16 -c1 -d0 -a8 -u1 /dev/hda
/dev/hda:
setting fs readahead to 8
setting 32-bit I/O support flag to 1
setting multcount to 16
setting unmaskirq to 1 (on)
setting using_dma to 0 (off)
multcount = 16 (on)
I/O support = 1 (32-bit)
unmaskirq = 1 (on)
using_dma = 0 (off)
readahead = 8 (on)
Test throughput:
/dev/hda:
Timing buffered disk reads: 64 MB in 8.76 seconds = 7.31 MB/sec
Timing buffered disk reads: 64 MB in 8.73 seconds = 7.33 MB/sec
Timing buffered disk reads: 64 MB in 8.75 seconds = 7.31 MB/sec
Results:
Throughput did not change significantly (an increase of under
0.2 percent).
7-----------------------------------------------------------------
Turn 32-bit I/O and DMA support back on:
(Note: IRQ masking is still on.)
hdparm -m16 -c1 -d1 -a8 -u1 /dev/hda
/dev/hda:
setting fs readahead to 8
setting 32-bit I/O support flag to 1
setting multcount to 16
setting unmaskirq to 1 (on)
setting using_dma to 1 (on)
multcount = 16 (on)
I/O support = 1 (32-bit)
unmaskirq = 1 (on)
using_dma = 1 (on)
readahead = 8 (on)
Test throughput:
/dev/hda:
Timing buffered disk reads: 64 MB in 1.88 seconds = 34.04 MB/sec
Timing buffered disk reads: 64 MB in 1.87 seconds = 34.22 MB/sec
Timing buffered disk reads: 64 MB in 1.86 seconds = 34.41 MB/sec
Results:
In comparison with section 2 above, throughput did not change when
interrupt unmasking was turned on.
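One practical note: hdparm settings are lost on reboot, so once
you settle on a combination you need to re-apply it at boot time.
A minimal sketch, assuming you add the line to a local boot script
such as /etc/rc.local or an init script (the exact mechanism
varies by distribution; I am leaving -u1 off here given the
manual's corruption warning):
# re-apply tuned IDE settings at every boot
hdparm -m16 -c1 -d1 -a8 /dev/hda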
8-----------------------------------------------------------------
List IDE information from Debian 3.0 system startup:
dmesg | grep "ide"
Kernel command line: BOOT_IMAGE=Debian ro root=xxx idebus=66
ide_setup: idebus=66
ide: Assuming 66MHz system bus speed for PIO modes
ide0: BM-DMA at 0xffa0-0xffa7, BIOS settings: hda:DMA, hdb:pio
ide1: BM-DMA at 0xffa8-0xffaf, BIOS settings: hdc:DMA, hdd:pio
ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
ide1 at 0x170-0x177,0x376 on irq 15
EXT3 FS 2.4-0.9.17, 10 Jan 2002 on ide0(3,1), internal journal
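Besides dmesg, 2.4-era kernels also expose the IDE driver's view
of each drive under /proc/ide (assuming the proc interface is
compiled into your kernel), which is another way to confirm what
is actually in effect:
cat /proc/ide/hda/model      # the drive's model string
cat /proc/ide/hda/settings   # current driver settings, incl. using_dma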
9-----------------------------------------------------------------
Just to confirm these results and eliminate any Debian-specific
issues, I ran the same tests using Red Hat 8.0.
List Red Hat's default hdparm settings:
hdparm /dev/hda
/dev/hda:
multcount = 16 (on)
IO_support = 0 (default 16-bit)
unmaskirq = 0 (off)
using_dma = 1 (on)
keepsettings = 0 (off)
readonly = 0 (off)
readahead = 8 (on)
geometry = 4865/255/63, sectors = 78165360, start = 0
Test throughput:
hdparm -t /dev/hda
/dev/hda:
Timing buffered disk reads: 64 MB in 1.87 seconds = 34.20 MB/sec
Timing buffered disk reads: 64 MB in 1.88 seconds = 34.06 MB/sec
Timing buffered disk reads: 64 MB in 1.87 seconds = 34.17 MB/sec
Results:
Red Hat's disk throughput is slightly slower than Debian's, but
that could be explained by the fact that I ran the tests in a
GNOME terminal under Red Hat versus in text mode under Debian.
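To cut down on that sort of noise, the hdparm manual recommends
running -t on an otherwise idle system and repeating the
measurement two or three times; a quick shell loop does it:
for i in 1 2 3; do hdparm -t /dev/hda; done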
10-----------------------------------------------------------------
Turn on 32-bit I/O:
hdparm -c1 /dev/hda
/dev/hda:
setting 32-bit IO_support flag to 1
IO_support = 1 (32-bit)
Test throughput:
/dev/hda:
Timing buffered disk reads: 64 MB in 1.87 seconds = 34.20 MB/sec
Timing buffered disk reads: 64 MB in 1.88 seconds = 34.13 MB/sec
Timing buffered disk reads: 64 MB in 1.87 seconds = 34.20 MB/sec
Results:
No significant change.
11-----------------------------------------------------------------
Turn on interrupt unmasking: (note: the hdparm manual says this
could lead to severe data corruption, but I am trying it anyway
just to see if it improves performance.)
hdparm -u1 /dev/hda
/dev/hda:
setting unmaskirq to 1 (on)
unmaskirq = 1 (on)
Test throughput:
/dev/hda:
Timing buffered disk reads: 64 MB in 1.88 seconds = 34.06 MB/sec
Timing buffered disk reads: 64 MB in 1.87 seconds = 34.17 MB/sec
Timing buffered disk reads: 64 MB in 1.87 seconds = 34.17 MB/sec
Results:
No significant change.
12-----------------------------------------------------------------
List IDE information from Red Hat 8.0 system startup:
dmesg | grep "ide"
Kernel command line: auto BOOT_IMAGE=Redhat ro root=xxx
BOOT_FILE=/boot/vmlinuz hdc=ide-scsi idebus=66
ide_setup: hdc=ide-scsi
ide_setup: idebus=66
ide: Assuming 66MHz system bus speed for PIO modes
ide0: BM-DMA at 0xffa0-0xffa7, BIOS settings: hda:DMA, hdb:pio
ide1: BM-DMA at 0xffa8-0xffaf, BIOS settings: hdc:DMA, hdd:pio
ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
ide1 at 0x170-0x177,0x376 on irq 15
EXT3 FS 2.4-0.9.18, 14 May 2002 on ide0(3,6), internal journal
EXT3 FS 2.4-0.9.18, 14 May 2002 on ide0(3,1), internal journal
13-----------------------------------------------------------------
Final conclusions:
Turning on DMA support and raising multcount to the drive's
maximum (which varies by drive; use "hdparm -I /dev/hda" to find
it) greatly increased the drive's throughput.
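For example, the boot-time identification dump (-i) reports the
maximum in its MaxMultSect field, so a quick way to check it
(assuming /dev/hda again) is:
hdparm -i /dev/hda | grep MaxMultSect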
Turning on 32-bit I/O improves hard drive performance only if DMA
support is turned off, and even then the gain is only a small
fraction of what DMA provides. (So use DMA rather than counting
on 32-bit I/O.)
Turning on interrupt unmasking had no measurable effect on hard
drive performance (but could be dangerous and lead to data
corruption).