w...@hiwaay.net ("William A. Mahaffey III") writes:

>> The RAID in question has 4 active drives, 1 parity drive & 1 spare,
>> created from identical ~900 GiB partitions on each of 6 7200 RPM 1 TB
>> SATA3 HDD's. Those drives purportedly have platter I/O speeds of around
>> 120 MiB/s (observed on other boxen). With 4 drives in parallel, that
>> would be 480-ish MiB/s sustainable, under ideal conditions. I see
>> about 11 MiB/s above. That implies somewhat non-ideal conditions,
>> which might not be surprising :-/. I *thought* I set up the RAID for
>> reasonably optimal performance while provisioning the machine, as
>> breath-takingly/tediously documented on-list. What sort of online
>> diagnostics can I do (dumpfs, etc.) on the mounted filesystem to
>> assess where I might reconfigure/tune the RAID for better performance?
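To the diagnostics part of the question first: the current layout can be
read off the running system without unmounting anything. A rough sketch,
assuming the set is raid0 with the filesystem on raid0a and components on
wd0..wd5 (those device names are guesses, substitute your own):

    dumpfs /dev/rraid0a | head -20   # FFS parameters, in particular bsize/fsize
    raidctl -G raid0                 # dump the RAIDframe config, incl. sectPerSU
    raidctl -s raid0                 # component and parity status
    disklabel wd0                    # partition offsets, to check 128-sector alignment
    sysctl vfs.wapbl                 # current WAPBL settings

Compare what those report against the values below.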
You need to check:

- size and alignment of the RAID stripes
- size and alignment of the filesystem blocks

With 4 active drives (assuming 512 bytes/sector) you should:

- align the RAID partitions to a multiple of 128 sectors,
- use a 'sectPerSU' value of 32 (== 16 kByte),
- create the FFS filesystem with a block size of 64 kByte.

(A sample config and newfs invocation with these numbers is sketched at
the end of this message.) On some disks, using half those values
(sectPerSU=16, block size 32 kByte) might be slightly better.

Directory and other metadata operations might still be slow. You can avoid
this by also formatting the filesystem with a fragment size of 64 kByte,
but that will waste disk space.

WAPBL on such a disk will also have performance problems; it might be
necessary to set vfs.wapbl.flush_disk_cache=0, at a higher risk of data
loss. Enabling write caching on the drives will also improve performance,
again with a higher risk of data loss.

-- 
Michael van Elst
Internet: mlel...@serpens.de
                    "A potential Snark may lurk in every tree."
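For reference, a RAIDframe config matching the numbers above might look
roughly like the following. This is only a sketch: the wd?a component
names, the /etc/raid0.conf path and the raid0 device are examples, the
wd?a partitions are assumed to start at offsets that are multiples of 128
sectors, and the set is shown as RAID 5 (with a dedicated parity disk,
i.e. RAID 4, only the level number in the layout line changes).

    # /etc/raid0.conf -- 5 components (4 data + parity) plus 1 hot spare
    START array
    # numRow numCol numSpare
    1 5 1

    START disks
    /dev/wd0a
    /dev/wd1a
    /dev/wd2a
    /dev/wd3a
    /dev/wd4a

    START spare
    /dev/wd5a

    START layout
    # sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
    32 1 1 5

    START queue
    fifo 100

With the set configured and initialised (raidctl -C / -I / -iv), the
filesystem and the optional knobs discussed above would be something like:

    # 64 kByte blocks with ordinary 8 kByte fragments:
    newfs -O 2 -b 65536 -f 8192 /dev/rraid0a
    # or, trading disk space for faster metadata (64 kByte fragments):
    newfs -O 2 -b 65536 -f 65536 /dev/rraid0a

    # optional, both at a higher risk of data loss:
    sysctl -w vfs.wapbl.flush_disk_cache=0
    dkctl wd0 setcache rw            # repeat for each drive

Whether the write-cache and WAPBL settings are worth the risk depends on
how well the box is protected against power loss.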