On 01/12/2012 at 08:33:31 -0700, Jan Owoc wrote:
Hi,

Sorry, I've been very busy these past few days.

> >> >
> >> >     http://tldp.org/HOWTO/LVM-HOWTO/removeadisk.html
> 
> The commands described on that page do not have direct equivalents in
> zfs. There is currently no way to reduce the number of "top-level
> vdevs" in a pool or to change the RAID level.

OK. 

> >> > I have a zpool with 48 disks, made of 4 raidz2 vdevs (12 disks each).
> >> > Among those 48 disks I have 36 x 3 TB and 12 x 2 TB.
> >> > Can I buy 12 new 4 TB disks, put them in the server, add them to the
> >> > zpool, ask the zpool to migrate all the data from the 12 old disks
> >> > onto the new ones, and then remove the old disks?
> 
> In your specific example this means that you have 4 RAIDZ2 vdevs of 12
> disks each. ZFS doesn't allow you to reduce that count of 4. ZFS
> doesn't allow you to change any of them from RAIDZ2 to any other
> configuration (e.g. RAIDZ). ZFS doesn't allow you to change the fact
> that you have 12 disks in a vdev.

OK, thanks.

> 
> If you don't have a full set of new disks on a new system, or enough
> room on backup tapes to do a backup-restore, there are only two ways
> to add capacity to the pool:
> 1) add a 5th top-level vdev (eg. another set of 12 disks)

That's not a problem. 
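
For reference, a minimal sketch of what that 5th top-level vdev would
look like (the pool name "tank" is an assumption):

        # add another 12-disk raidz2 top-level vdev to the existing pool
        zpool add tank raidz2 da48 da49 da50 da51 da52 da53 \
            da54 da55 da56 da57 da58 da59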

> 2) replace the disks with larger ones one-by-one, waiting for a
> resilver in between

This is the point where I don't see how to do it. I currently have 48
disks, /dev/da0 -> /dev/da47 (I'm running FreeBSD 9.0), let's say 3 TB each.

I have 4 raidz2 vdevs, the first one from /dev/da0 -> /dev/da11, etc.

So I physically add a new enclosure with 12 new disks, for example 4 TB disks.

I'm going to have new devices /dev/da48 --> /dev/da59.

Say I want to remove /dev/da0 -> /dev/da11. First I pull out /dev/da0.
The first raidz2 is then in a degraded state, so I tell the pool that
the new disk is /dev/da48.

I repeat this process until /dev/da11 has been replaced by /dev/da59.
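
A minimal sketch of one iteration, assuming the pool is named "tank" (the
name is an assumption):

        zpool replace tank da0 da48   # resilver da0's data onto da48
        zpool status tank             # wait for the resilver to complete
        # then pull da0 and repeat with da1 -> da49, etc.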

But at the end, how much space am I going to be able to use on those
/dev/da48 --> /dev/da59 disks? Am I going to get 3 TB or 4 TB per disk?
During the process ZFS only uses 3 TB of each new disk, so how is it
going to magically use 4 TB at the end?
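
From what I've read, this seems to be governed by the autoexpand pool
property; a sketch, again assuming a pool named "tank":

        zpool set autoexpand=on tank   # grow the vdev automatically once
                                       # every member disk is larger
        # or expand per disk after the fact:
        zpool online -e tank da48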

Second question: when I pull out the first enclosure, meaning the old
/dev/da0 --> /dev/da11, and reboot the server, the kernel is going to
renumber those disks, meaning

        old /dev/da12 --> /dev/da0
        old /dev/da13 --> /dev/da1
        etc...
        old /dev/da59 --> /dev/da47

how is ZFS going to handle that?
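
From what I understand, ZFS writes its own label on every disk and finds
pool members by that label rather than by device name, so an
export/import should pick them up under the new names. A sketch of what
I'd expect to run (pool name "tank" assumed):

        zpool export tank   # before pulling the old enclosure
        # ...remove the enclosure, reboot...
        zpool import tank   # rescans all disks and matches them by label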

> > When I change the disks, I would also like to change the disk
> > enclosure; I don't want to keep using the old one.
> 
> You didn't give much detail about the enclosure (how it's connected,
> how many disk bays it has, how it's used, etc.), but are you able to
> power off the system and transfer all the disks at once?

Server : Dell PowerEdge 610
4 x enclosures : MD1200, each with 12 x 3 TB disks
Connection : SAS
SAS card : LSI
The enclosures are chained:

        server --> MD1200.1 --> MD1200.2 --> MD1200.3 --> MD1200.4
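
For reference, on FreeBSD the da* numbering along that chain can be
checked like this (the pool name "tank" is an assumption):

        camcontrol devlist   # lists every da* device with its SCSI path
        zpool status tank    # shows which da* devices each vdev uses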


> 
> 
> > And what happens if I have 24 or 36 disks to change? It would take
> > months to do that.
> 
> Those are the current limitations of zfs. Yes, with 12x2TB of data to
> copy it could take about a month.

OK. 
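
As a rough sanity check on that estimate: if one resilver of a
mostly-full 2 TB disk takes on the order of 2-3 days (an assumption;
actual resilver speed depends on pool load and fragmentation), 12
sequential replacements do work out to roughly a month.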

> 
> If you are feeling particularly risky and have backups elsewhere, you
> could swap two drives at once, but then you lose all your data if any
> of the remaining 10 drives in the vdev fails.

OK. 

Thanks for the help

Regards.

JAS
-- 
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
xmpp: j...@obspm.fr
Heure local/Local time:
jeu 6 déc 2012 09:20:55 CET