On 8 September 2015 at 14:39, William A. Mahaffey III <w...@hiwaay.net> wrote:
> On 09/08/15 03:13, Ian Clark wrote:

[snip]

> Thanks for your reply. This RAID is created from partitions of the
> underlying drives, not from whole drives. The resulting raid is mounted as
> /home. I attach the disklabel info for the underlying drive0 (all 6 are
> identically sliced) & the header for the /home FS. Below is the fdisk info
> for drive0:
>
> 4256EE1 # fdisk wd0
> Disk: /dev/rwd0d
> NetBSD disklabel disk geometry:
> cylinders: 1938021, heads: 16, sectors/track: 63 (1008 sectors/cylinder)
> total sectors: 1953525168
>
> BIOS disk geometry:
> cylinders: 1024, heads: 255, sectors/track: 63 (16065 sectors/cylinder)
> total sectors: 1953525168
>
> Partitions aligned to 16065 sector boundaries, offset 63
>
> Partition table:
> 0: NetBSD (sysid 169)
>     start 2048, size 1953523120 (953869 MB, Cyls 0/32/33-121601/80/63), Active
> 1: <UNUSED>
> 2: <UNUSED>
> 3: <UNUSED>
> Bootselector disabled.
> First active partition: 0
> 4256EE1 #

That looks fine, as does your attached disklabel. This would suggest the
misalignment is at the RAID level. Can you provide the disklabel/partition
info for your raid device(s)?
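Something like the following should show the relevant bits (I'm guessing
your /home RAID is raid2 with the FFS in partition 'a'; adjust to match
your setup):

    disklabel raid2              # partition offsets/sizes on the RAID device
    raidctl -s raid2             # component status
    raidctl -G raid2             # dump the current RAIDframe configuration
    dumpfs /dev/rraid2a | head   # FS block/fragment size from the superblock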
> All 6 drives are fdisked into 1 large partition, then those partitions are
> sliced into 3 slices (2 X 16 GiB (2 of the 1st 16 GiB's for root (RAID1),
> the other 4 for /usr (RAID0), the 2nd 16 GiB's for swap (all 6 drives)),
> then the rest of each drive for the /home (RAID5)), & the slices are
> RAID'ed. Need anything else, *please* do not hesitate. TIA & thanks again.

All your partitions look to be aligned to 4k boundaries, so your underlying
disk setup is fine. That leaves the problem most likely in the partitioning
on the raid device itself, or potentially in the FS block size used on the
raid.

Additionally (and I'm sure someone will correct me here if I'm wrong), I'm
not sure you're going to get great performance from 6 drives in RAID5. The
FS block size must be a power of two, and ideally a single FS block should
be split evenly across all the drives in a RAID5 stripe so that it fills
each drive exactly; because one drive's worth of each stripe goes to
parity, you need to take that into account.

For example, imagine a RAID5 array with 3 disks and a filesystem block size
of 32K. If you arrange your raid so that each drive gets 16K, a block write
completes across all 3 drives at once: 16K to the first drive, 16K to the
second, and 16K of parity to the third. With 4 disks, however, you would
need 48K blocks to avoid a read/rewrite, but 48K is not a power of two (and
therefore not settable as an FS block size), so you have to go with 32K or
64K, both of which will result in a certain amount of write-related reads
(as the RAID has to read back data to recalculate the parity).

With your 6 disks there are 5 data columns per stripe, and no power of two
divides evenly by 5. Try creating your RAID5 array from 5 of the 6 disks
(4 data columns, so a 32K block splits exactly into 8K per drive; see the
sketch at the end of this mail) and see if performance improves. If it
does, you could either just leave one disk out of the RAID5 array or try
RAID6 (which spends two disks on parity).

Also, it's probably worth making sure you've got a backup of all your
configuration information, as whilst RAID5 is redundant and can tolerate a
disk failure, RAID0 can't (so you would lose the contents of /usr on a
drive failure).
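For reference, here's a rough sketch of what a 5-disk RAIDframe config with
a matching stripe-unit size might look like. The slice names (wd0g-wd4g)
are guesses for illustration, and I'm assuming 512-byte sectors, so a
sectPerSU of 16 gives 8K per stripe unit and a 32K full stripe across the 4
data columns:

    START array
    # numRow numCol numSpare
    1 5 0

    START disks
    /dev/wd0g
    /dev/wd1g
    /dev/wd2g
    /dev/wd3g
    /dev/wd4g

    START layout
    # sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
    16 1 1 5

    START queue
    fifo 100

You'd then want newfs to use a matching 32K block size, e.g.
"newfs -b 32768 -f 4096 /dev/rraid2a" (again guessing the device ends up as
raid2). Check the details against raidctl(8) before relying on any of that.

Cheers,
Ian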