Hello, I have a couple of Dell machines to play with, both with OpenBSD 5.[89] and some sort of "stacked" RAID setup, involving crypto, mirroring and striping in various orders. I've decided to play a little benchmark game and share some numbers.
Machine 1: 5.8, PowerEdge 2970, Opteron 2378 (8 cores), 8GB, 6x2TB drives
Machine 2: 5.9, PowerEdge T20, Pentium G3220 (2 cores), 8GB, 4x3TB drives

The first machine is set up with /, /usr & /var on top of an unencrypted, "raw" hard drive. There's also a big "data" volume assembled from three sets of two-disk RAID1 mirrors, these three striped with RAID0, and a crypto volume on top. Let's call it "RAID 10C".

The second machine has all four disks encrypted end-to-end; the first disk has the system (/, /usr, /var), while the remaining area is set up with RAID10. Let's call that "RAID C10".

On the second box, power saving settings are all at the speedy end (hw.setperf=100, hw.perfpolicy=high); sysctl doesn't show anything on the first one (probably missing support, but I assume the machine is stuck at the high-performance setting).

The commands I've used to measure write/read speed:

> dd if=/dev/urandom of=/var/test bs=1M count=100
> dd if=/dev/urandom of=/var/test bs=1M count=1000
> dd of=/dev/null if=/var/test bs=1M
> dd if=/dev/urandom of=/data/test bs=1M count=100
> dd if=/dev/urandom of=/data/test bs=1M count=1000
> dd of=/dev/null if=/data/test bs=1M

Data (bytes per second):

> Host  Setup  Write 100M  Write 1000M  Read 1000M
> m1    raw     139414012    137725122   110265042
> m1    10C      39029024     38779394    38833820
> m2    C        64169039     63344908    64132991
> m2    C10      26717974     37121514    61389590

Interesting observations:

- Crypto seems to add significant overhead, regardless of where it sits in a RAID stack;
- Crypto-then-RAID10 seems to be much more performant than RAID10-then-crypto, at least for large sequential reads; this seems counter-intuitive (blocks of data have to be encrypted 4x as often);
- Any form of RAID10 + crypto seems to be *slower* than a non-RAID setup with crypto; since RAID0 is supposed to help performance, perhaps a concatenating discipline would be more appropriate with such a setup?
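For easier eyeballing, the bytes-per-second figures convert to decimal MB/s like this (quick awk sketch; the numbers are copied straight from the write-100M column above):

```shell
# Convert the "Write 100M" column from bytes/s to MB/s (decimal megabytes).
awk 'BEGIN {
    printf "m1 raw: %.1f MB/s\n", 139414012 / 1e6
    printf "m1 10C: %.1f MB/s\n",  39029024 / 1e6
    printf "m2 C:   %.1f MB/s\n",  64169039 / 1e6
    printf "m2 C10: %.1f MB/s\n",  26717974 / 1e6
}'
```

That works out to roughly 139.4, 39.0, 64.2 and 26.7 MB/s respectively.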
- I don't have exact numbers right now, since I've wiped and reinstalled machine #2 in between, but I've observed ~260MB/s write speeds with RAID10 without crypto;
- I have no idea how to flush the VFS cache! 100MB reads return immediately; a 1000MB file seems to always come off the disk. The Other OS has a "nocache" utility, and a "nocache" flag for dd.

Currently machine #2 is just a plaything, so if anyone is interested in more silly benchmarks, I can wipe the entire thing and set up something else (1+C+0, C+5, 5+C, 0+C, C+1+cat, etc. etc.).

(Yes, yes, I know: all benchmarks are flawed, I should test the machine with production workloads, performance vs. redundancy vs. encryption is a tradeoff - pick one that suits the application, etc.)

K.
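On the cache question, two workarounds that might help, sketched below; the device name is an example and would need adjusting to the actual softraid volume under test. Raw character devices do unbuffered I/O, so reads through them don't come from the buffer cache, and unmounting a filesystem invalidates its cached data before a re-run:

```shell
# Option 1: read the raw character device instead of a file on the
# filesystem; raw-device I/O is unbuffered, so nothing is served from
# the cache. /dev/rsd4c is a placeholder - substitute your volume.
dd of=/dev/null if=/dev/rsd4c bs=1M count=1000

# Option 2: unmount and remount the filesystem between runs, which
# throws away its cached data, then re-read the test file.
umount /data && mount /data
dd of=/dev/null if=/data/test bs=1M
```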

