Just FWIW, the "proper" way to do online expansion with ZFS is quite
different from that of a RAID controller: a conventional online
capacity expansion (OCE) is quite difficult in ZFS, because the
variable-width stripes that ZFS uses make the simple expansion scheme
of regular RAID-5 or RAID-6 unworkable - or so I've read on the ZFS
mailing lists.
The way that I believe you can add capacity to an existing RAIDZ /
RAIDZ2 is as follows:
(this is based directly off of their documentation:
http://docs.sun.com/app/docs/doc/819-5461/gcvjg?a=view)
Your ZFS "pool" is made up of "vdevs", which are in turn made up of
raw disks, partitions, unicorns, or other block devices.
To increase the capacity of your existing pool, you simply add another
"vdev" to it, and ZFS will intelligently start distributing the data
between the two vdevs.
The example, straight from their documentation (with some commentary
from me), is an existing ZFS pool named "rpool", which has 3 disks and
is in a RAIDZ configuration:
# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0

errors: No known data errors
Now, we want to add three *new* disks to the storage pool - to do
this, we simply add a new RAIDZ device comprised of our three new
disks into the existing pool:
# zpool add rpool raidz c2t2d0 c2t3d0 c2t4d0
# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c2t2d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
            c2t4d0  ONLINE       0     0     0

errors: No known data errors
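As a rough sketch of the capacity math (the 500 GB figure below is
just a hypothetical disk size, not something from the docs): each
raidz1 vdev gives up one disk's worth of space to parity, so adding a
second three-disk raidz1 vdev doubles the usable capacity:

```shell
# Hypothetical capacity math for the pool above: 3 disks per raidz1
# vdev, each an assumed 500 GB. raidz1 loses one disk to parity per vdev.
disks_per_vdev=3
disk_gb=500
usable_one_vdev=$(( (disks_per_vdev - 1) * disk_gb ))   # one raidz1 vdev
usable_two_vdevs=$(( 2 * usable_one_vdev ))             # after 'zpool add'
echo "one vdev: ${usable_one_vdev} GB, two vdevs: ${usable_two_vdevs} GB"
```

(Approximate, of course - ZFS metadata and reservations shave a bit
off the raw numbers.)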
If you're looking for more info on ZFS, I suggest looking at the Sun
ZFS administration guide: http://docs.sun.com/app/docs/doc/819-5461/gavwn?a=browse
In particular, see section 5, on ZFS Storage Pools:
http://docs.sun.com/app/docs/doc/819-5461/gavwn?a=view
Charles Richards
[email protected]
charlesrichards.net
On Jan 3, 2009, at 5:19 AM, Frederique Rijsdijk wrote:
After some reading, I've come back from my original idea. The main
reason is that I'd like to be able to grow the filesystem as the need
develops over time.
One could create a raidz zpool with a couple of disks, but when
adding a disk later on, it will not become part of the raidz (I
tested this).
It seems vdevs cannot be nested (i.e., you can't create raidz sets and
join them as a whole), so I came up with the following:
Start out with 4*1TB, and use geom_raid5 to create an independent
redundant pool of storage:
'graid5 label -v graid5a da0 da1 da2 da3' (this is all tested in
VMware; each of these 'da' drives is 8GB)
Then I 'zpool create bigvol /dev/raid5/graid5a', and I have a
/bigvol of 24G - sounds about right to me for a raid5 volume.
Now let's say that later on I need more storage: I buy another 4 of
these drives, and run
'graid5 label -v graid5b da4 da5 da6 da7'
and
'zpool add bigvol /dev/raid5/graid5b'
Now my bigvol is 48G. Very cool! Now I have redundant storage that
can grow, and it's pretty easy too.
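For what it's worth, the figures add up as expected: a 4-drive RAID5
set loses one drive to parity, so 8GB drives leave 3 x 8 = 24G per
set, and two sets striped together give 48G. A quick sanity check of
that arithmetic:

```shell
# Sanity check of the graid5 capacity figures above: 4 drives of 8 GB
# each per set; RAID5 loses one drive to parity per set.
drives=4
drive_gb=8
per_set=$(( (drives - 1) * drive_gb ))   # one graid5 set
total=$(( 2 * per_set ))                 # after adding graid5b to the pool
echo "per set: ${per_set}G, pool total: ${total}G"
```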
Is this OK (apart from the fact that graid5 is not in production
yet, nor is ZFS ;) or are there easier (or better) ways to do this?
- So I want redundancy (I don't want one failing drive to cause me
to lose all my data)
- I want to be able to grow the filesystem if I need to, by adding a
(set of) drive(s) later on.
-- FR
_______________________________________________
[email protected] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "[email protected]"