[zfs-discuss] convert raidz from osx
I am converting a 4-disk raidz pool from OS X to OpenSolaris, and I want to keep the data intact. I want ZFS to get access to the full disk instead of a slice, i.e. c8d0 instead of c8d0s1. I wanted to do this one disk at a time and let it resilver. What is the proper way to do this?

I tried, I believe from memory:

# zpool replace -f rpool c8d1s1 c8d1

but it didn't let me do that. Then I tried to take the disk offline first, but got the same result.

Thanks,
Dirk
Re: [zfs-discuss] convert raidz from osx
Dirk,

I'm not sure I'm following you exactly, but this is what I think you are trying to do: you have a RAIDZ pool that is built with slices, and you are trying to convert the slice configuration to whole disks.

This isn't possible, because you are trying to replace a disk with itself. This is what happens:

# zpool create test raidz c0t4d0s0 c0t5d0s0 c0t6d0s0
# zpool replace test c0t6d0s0 c0t6d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c0t6d0s0 is part of active ZFS pool test. Please see zpool(1M).

You could replace the disk slice with a different disk, like this:

# zpool replace test c0t6d0s0 c0t7d0

If you don't have any additional disks, then I think you will have to back up the data and recreate the pool. Maybe someone else has a better idea.

Also, you refer to rpool, which is the default name of the ZFS root pool in the OpenSolaris release. This pool cannot be a RAIDZ pool, nor can it contain whole disks. It must be created with disk slices.

Cindy
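P.S. If you do go the backup-and-recreate route and have scratch space in a second pool, one possible sketch using zfs send/receive (the pool names tank and backup and the snapshot name are made up for illustration; the exact receive options depend on your dataset layout, so check zfs(1M) first):

# zfs snapshot -r tank@migrate                      # snapshot every dataset in the pool
# zfs send -R tank@migrate | zfs receive backup/tankcopy
# zpool destroy tank                                # destructive - verify the copy first
# zpool create tank raidz c8d0 c8d1 c8d2 c8d3       # recreate on whole disks
# zfs send -R backup/tankcopy@migrate | zfs receive -F tank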
Re: [zfs-discuss] convert raidz from osx
Yes, that was what I was doing. I wanted to give the raidz pool whole disks because GRUB didn't want to install. (I forgot which command I used. bootadm?)

I have another slice free on a disk shared with OS X and Win7, but I am having problems with GRUB. I will try that again and document it so I can ask a proper question about it, as the installer has problems and I couldn't get the fdisk workaround to work with the osol-1002-118-x86.iso.

Dirk

On 08 Oct 2009, at 16:03, Cindy Swearingen wrote:
> You have a RAIDZ pool that is built with slices and you are trying to convert the slice configuration to whole disks. [...]
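(Side note: the command for putting the GRUB stages onto a slice on OpenSolaris is usually installgrub(1M), not bootadm. A minimal sketch, assuming the ZFS root slice is c8d0s0; substitute your actual boot slice:

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c8d0s0
)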
Re: [zfs-discuss] Convert raidz
Tim Foster wrote:
>> And is it possible to add 1 new disk to a raidz configuration without backups and recreating the zpool from scratch?
> You can add a disk to a raidz configuration, but then that makes a pool containing 1 raidz + 1 additional disk in a dynamic stripe configuration (which ZFS will warn you about, since you have different fault tolerance then), eg.

You can do that, but then if /tmp/4 fails (in the example), you lose all of the data on that disk. If you were running raid5, you probably care about both data loss and cost; this configuration only cares about cost.

-Luke
Re: [zfs-discuss] Convert raidz
On Tue, 2007-04-03 at 10:54 -0400, Luke Scharf wrote:
> You can do that, but then if /tmp/4 fails (in the example), you lose all of the data on that disk. If you were running raid5, you probably care about both data loss and cost; this configuration only cares about cost.

Exactly - sorry, I thought the implication was clear :-)

-- 
Tim Foster, Sun Microsystems Inc, Solaris Engineering Ops
http://blogs.sun.com/timf
[zfs-discuss] Convert raidz
Hi,

Is it possible to convert a live 3-disk zpool from raidz to raidz2? And is it possible to add 1 new disk to a raidz configuration without backups and recreating the zpool from scratch?

Thanks
Re: [zfs-discuss] Convert raidz
Hi there,

On Mon, 2007-04-02 at 00:37 -0700, homerun wrote:
> Is it possible to convert a live 3-disk zpool from raidz to raidz2?

Unfortunately not - you'd need to backup your data, destroy the pool, create the new pool and restore your data.

> And is it possible to add 1 new disk to a raidz configuration without backups and recreating the zpool from scratch?

You can add a disk to a raidz configuration, but then that makes a pool containing 1 raidz + 1 additional disk in a dynamic stripe configuration (which ZFS will warn you about, since you have different fault tolerance then), eg.

# mkfile 64m 1 2 3 4
# zpool create mypool raidz `pwd`/1 `pwd`/2 `pwd`/3
# zpool status -v mypool
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            /tmp/1  ONLINE       0     0     0
            /tmp/2  ONLINE       0     0     0
            /tmp/3  ONLINE       0     0     0

errors: No known data errors
# zpool add mypool `pwd`/4
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses raidz and new vdev is file
# zpool add -f mypool `pwd`/4
# zpool status -v mypool
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            /tmp/1  ONLINE       0     0     0
            /tmp/2  ONLINE       0     0     0
            /tmp/3  ONLINE       0     0     0
          /tmp/4    ONLINE       0     0     0

errors: No known data errors
#

cheers,
tim
-- 
Tim Foster, Sun Microsystems Inc, Solaris Engineering Ops
http://blogs.sun.com/timf
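P.S. For the raidz1-to-raidz2 part of the question, a minimal sketch of the recreate step, reusing the same file-backed devices purely for illustration (in real life these would be disks, with the backup and restore wrapped around it):

# zpool destroy mypool          # destructive - only after the data is safely backed up
# zpool create mypool raidz2 `pwd`/1 `pwd`/2 `pwd`/3 `pwd`/4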
Re: [zfs-discuss] Convert raidz
On Mon, Apr 02, 2007 at 12:37:24AM -0700, homerun wrote:
> Is it possible to convert a live 3-disk zpool from raidz to raidz2? And is it possible to add 1 new disk to a raidz configuration without backups and recreating the zpool from scratch?

The reason that's not possible is that RAID-Z uses a variable stripe width. This solves some problems (notably the RAID-5 write hole [1]), but it means that a given 'stripe' over N disks in a raidz1 configuration may contain as many as floor(N/2) parity blocks - clearly a single additional disk wouldn't be sufficient to grow the stripe properly.

It would be possible to have a different type of RAID-Z where stripes were variable-width to avoid the RAID-5 write hole, but the remainder of the stripe was left unused. This would allow users to add an additional parity disk (or several, if we ever implement further redundancy) to an existing configuration, BUT it would potentially make much less efficient use of storage.

Adam

[1] http://blogs.sun.com/bonwick/entry/raid_z

-- 
Adam Leventhal, Solaris Kernel Development
http://blogs.sun.com/ahl
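P.S. An illustrative example to make the floor(N/2) figure concrete: on a 4-disk raidz1, a full-width stripe is 3 data blocks plus 1 parity block, but a minimal single-block write is stored as 1 data block plus 1 parity block. Two such minimal writes can sit side by side in one row across the disks (D P D P), giving floor(4/2) = 2 parity blocks in that row versus 1 in a full-width row. Because the data-to-parity layout varies row by row, there is no fixed parity column that one new disk could simply extend.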