Hello. Thor's right. The RAIDframe driver defaults to a ridiculously low number of maximum outstanding transactions for today's environments. This is not a criticism of how the number was chosen initially, but things have changed. In my production kernels around here I include the option below, a number I derived from a bit of empirical testing. I found that for RAID-5 arrays I didn't get much benefit from higher numbers, but numbers below this showed a marked decline in performance. For example, on an amd64 machine with 32 GB of RAM, I have a RAID-5 set of 12 disks running on 2 mpt(4) buses. I get the following read and write numbers writing to a filesystem with softdep enabled on top of a dk(4) wedge built on the RAID-5 set (this is NetBSD-5.1):
test# dd if=/dev/zero of=testfile bs=64k count=65535
65535+0 records in
65535+0 records out
4294901760 bytes transferred in 125.486 secs (34226142 bytes/sec)
test# dd if=testfile of=/dev/null bs=64k count=65535
65535+0 records in
65535+0 records out
4294901760 bytes transferred in 5.994 secs (716533493 bytes/sec)

The line I include in my config files is:

options         RAIDOUTSTANDING=40      # try and enhance raid performance
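For anyone who wants to try this, here is a minimal sketch of where the option goes and how to rebuild with it. The config name MYKERNEL and the /usr/src source tree location are just examples; substitute your own, and adjust the arch directory if you're not on amd64.

  # /usr/src/sys/arch/amd64/conf/MYKERNEL  (MYKERNEL is a placeholder name)
  include "arch/amd64/conf/GENERIC"
  options         RAIDOUTSTANDING=40      # try and enhance raid performance

  # Build and install via the usual config(1) procedure:
  $ cd /usr/src/sys/arch/amd64/conf
  $ config MYKERNEL
  $ cd ../compile/MYKERNEL
  $ make depend && make
  # then, as root, keep the old kernel around before installing the new one:
  # mv /netbsd /onetbsd && cp netbsd /netbsd && reboot

Since RAIDOUTSTANDING is a compile-time option, you do have to build and boot a new kernel for each value you want to test, which is why I settled on one number rather than tuning it per machine.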