After some reading, I've come back from my original idea. The main reason is that I'd like to be able to grow the filesystem as the need develops over time.

One could create a raidz zpool with a couple of disks, but a disk added later on does not become part of the raidz (I tested this).

It seems vdevs cannot be nested (i.e. create raidz sets and then join them into a whole), so I came up with the following:

Start out with 4 * 1 TB drives, and use geom_raid5 to create an independent redundant pool of storage:

'graid5 label -v graid5a da0 da1 da2 da3' (this was all tested in VMware, where each of these 'da' drives is 8 GB)
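Before layering ZFS on top, it's worth confirming the array actually came up. A minimal check, assuming graid5 follows the usual geom(8)-style `status` verb (the device path is from my VMware test):

```shell
# Verify the geom_raid5 array is assembled and its provider exists
graid5 status        # should list graid5a and its component disks
ls -l /dev/raid5/    # the provider should appear as /dev/raid5/graid5a
```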

Then I 'zpool create bigvol /dev/raid5/graid5a', and I have a /bigvol of 24G, which sounds about right for a RAID5 volume (4 * 8 GB minus one drive's worth of parity).

Now let's say that later on I need more storage; I buy another four of these drives and run:

'graid5 label -v graid5b da4 da5 da6 da7'
'zpool add bigvol /dev/raid5/graid5b'

Now my bigvol is 48G. Very cool! I have redundant storage that can grow, and it's pretty easy too.
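For the record, this is roughly how I verify the pool after each step (standard zpool commands, nothing graid5-specific):

```shell
# Overall capacity: should read ~24G after the first vdev, ~48G after adding the second
zpool list bigvol

# Pool layout and health: both raid5 providers should show ONLINE
zpool status bigvol
```

ZFS simply stripes across the two graid5 providers; each provider handles its own parity internally, so a single failed disk stays contained within one vdev.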

Is this OK (apart from the fact that graid5 is not in production yet, and neither is ZFS ;), or are there easier (or better) ways to do this?

To summarize what I'm after:
- I want redundancy (I don't want one failing drive to cause me to lose all my data).
- I want to be able to grow the filesystem if I need to, by adding a (set of) drive(s) later on.

-- FR