On Thu, Dec 6, 2012 at 12:35 AM, Albert Shih <albert.s...@obspm.fr> wrote:
> On 2012-12-01 at 08:33:31 -0700, Jan Owoc wrote:
> > 2) replace the disks with larger ones one-by-one, waiting for a
> > resilver in between
> This is the point I don't see how to do. I currently have 48 disks, from
> /dev/da0 to /dev/da47 (I'm on FreeBSD 9.0), let's say 3 TB each.
> I have 4 raidz2 vdevs, the first from /dev/da0 to /dev/da11, etc.
> So I physically add a new enclosure with 12 new disks, for example 4 TB disks.
> I'm going to have new devices /dev/da48 to /dev/da59.
> Say I want to remove /dev/da0 to /dev/da11. First I pull out /dev/da0.
> The first raidz2 will then be in a «degraded» state. So I tell the
> pool the new disk is /dev/da48.
zpool replace <poolname> da0 da48
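The whole replacement pass for that first vdev can be sketched like this (the pool name "tank" is a placeholder; the commands are echoed rather than executed so the sequence can be reviewed first, drop the "echo" to run them for real):

```shell
#!/bin/sh
# Hedged sketch: replace each 3 TB disk da0..da11 with the corresponding
# 4 TB disk da48..da59, one at a time. "tank" is a placeholder pool name.
i=0
while [ "$i" -le 11 ]; do
    old="da${i}"
    new="da$((i + 48))"
    echo "zpool replace tank ${old} ${new}"
    # Wait until 'zpool status tank' reports the resilver complete
    # before moving on to the next disk.
    i=$((i + 1))
done
```

The important part is the wait: never start the next replace until the previous resilver has finished, or the vdev loses more redundancy than raidz2 can cover.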
> I repeat this process until /dev/da11 has been replaced by /dev/da59.
> But at the end, how much space am I going to get on /dev/da48 to
> /dev/da59? Am I going to have 3 TB or 4 TB per disk? Because each time,
> before the whole replacement completes, ZFS uses only 3 TB, so how is it
> magically going to use 4 TB at the end?
The first disk you replace will use 3 TB, the size of the disk it replaced.
The second disk you replace will use 3 TB, the size of the disk it replaced.
The 12th disk you replace will use 3 TB, the size of the disk it replaced.
However, once all of the disks in the raidz vdev have been replaced,
the overall size of the vdev will increase to use the full 4 TB of each
disk. This happens either automatically (if the autoexpand property is
on) or manually, by exporting and re-importing the pool.
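Concretely, the expansion step might look like this (again assuming the placeholder pool name "tank"; these commands need a live pool and root privileges, so they are shown for reference only):

```shell
# Option 1: enable autoexpand so the vdev grows on its own once the
# last replace finishes ("tank" is a placeholder pool name).
zpool set autoexpand=on tank

# Option 2: force the expansion with an export/import cycle.
zpool export tank
zpool import tank

# Either way, verify the new capacity afterwards.
zpool list tank
```

Note that setting autoexpand=on before starting the replacements saves the export/import step entirely.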
> Second question: when I pull out the first enclosure, meaning the
> old /dev/da0 to /dev/da11, and reboot the server, the kernel is going
> to renumber the remaining disks, meaning:
> old /dev/da12 --> new /dev/da0
> old /dev/da13 --> new /dev/da1
> ...
> old /dev/da59 --> new /dev/da47
> How is ZFS going to manage that?
Every disk that is part of a ZFS pool carries metadata on it, including
which pool it belongs to, which vdev it belongs to, and so on. Thus, if
you do an export followed by an import, ZFS will read the metadata off
the disks and sort things out automatically, regardless of the device
node names the kernel assigns.
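In practice the sequence is just the following (placeholder pool name "tank" again; shown for reference since it needs real hardware):

```shell
# ZFS identifies disks by their on-disk labels, not by device node
# names, so the renumbering after removing the old enclosure is harmless.
zpool export tank
# ...pull the old enclosure and reboot; da48..da59 become da0..da11...
zpool import tank

# If in doubt, the label on any member disk can be inspected directly:
zdb -l /dev/da0
```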
zfs-discuss mailing list