Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-29 Thread Jim Horng
Why would you recommend a spare for raidz2 or raidz3? -- richard A spare is there to minimize reconstruction time. Remember, a vdev cannot start resilvering until a spare disk is available, and with disks as big as they are today, resilvering takes many hours. I would rather have …
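As a minimal sketch of the spare argument (pool and device names here are invented), a hot spare is added to an existing pool so that resilvering can start the moment a disk fails:

    # add c9t0d0 as a hot spare to the hypothetical pool 'tank'
    zpool add tank spare c9t0d0
    # confirm the spare appears under the pool's configuration
    zpool status tank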

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-29 Thread Bob Friesenhahn
On Wed, 28 Apr 2010, Jim Horng wrote: Why would you recommend a spare for raidz2 or raidz3? A spare is there to minimize reconstruction time. Remember, a vdev cannot start resilvering until a spare disk is available, and with disks as big as they are today, resilvering also takes …

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-29 Thread Jim Horng
Would your opinion change if the disks you used took 7 days to resilver? Bob That only makes a stronger case that a hot spare is absolutely needed. It also makes a strong case for choosing raidz3 over raidz2, as well as for vdevs with a smaller number of disks.

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Wolfraider
The original drive pool was configured with 144 1TB drives and a hardware RAID 0 stripe across every 4 drives to create 4TB LUNs. These LUNs were then combined into 6 raidz2 vdevs and added to the ZFS pool. I would like to delete the original hardware RAID 0 stripes and add the 144 drives …

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Wolfraider
We are running the latest dev release. I was hoping to just mirror the ZFS volumes and not the whole pool. The original pool is around 100TB in size. The spare disks I have come up with will total around 40TB. We only have 11TB of space in use on the original ZFS pool.

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Richard Elling
On Apr 28, 2010, at 6:40 AM, Wolfraider wrote: We are running the latest dev release. I was hoping to just mirror the ZFS volumes and not the whole pool. The original pool is around 100TB in size. The spare disks I have come up with will total around 40TB. We only have 11TB of space …

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Richard Elling
On Apr 28, 2010, at 6:37 AM, Wolfraider wrote: The original drive pool was configured with 144 1TB drives and a hardware RAID 0 stripe across every 4 drives to create 4TB LUNs. For the archives, this is not a good idea... These LUNs were then combined into 6 raidz2 vdevs and added to the ZFS …

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Wolfraider
On Apr 28, 2010, at 6:37 AM, Wolfraider wrote: The original drive pool was configured with 144 1TB drives and a hardware RAID 0 stripe across every 4 drives to create 4TB LUNs. For the archives, this is not a good idea... Exactly, this is the reason I want to blow away all the old configuration …

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Wolfraider
Mirrors are made with vdevs (LUs or disks), not pools. However, the vdev attached to a mirror must be the same size (or nearly so) as the original. If the original vdevs are 4TB, then a migration to a pool made with 1TB vdevs cannot be done by replacing vdevs (mirror method). -- richard
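A sketch of the mirror method Richard describes, with hypothetical LUN names; note the device being attached must be at least as large as the one it mirrors:

    # attach new_lun as a mirror of old_lun in pool 'tank'
    zpool attach tank old_lun new_lun
    # wait for the resilver to complete, then drop the original side
    zpool status tank
    zpool detach tank old_lun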

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Richard Elling
On Apr 28, 2010, at 8:39 AM, Wolfraider wrote: Mirrors are made with vdevs (LUs or disks), not pools. However, the vdev attached to a mirror must be the same size (or nearly so) as the original. If the original vdevs are 4TB, then a migration to a pool made with 1TB vdevs cannot be done …

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Jim Horng
For this type of migration, downtime is required. However, it can be reduced to a few hours, or even a few minutes, depending on how much change needs to be synced. I have done this many times on a NetApp Filer, but it can be applied to ZFS as well. The first thing to consider is to do the migration only once …
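On ZFS, one way to keep that downtime window small (a sketch under assumed pool, dataset, and snapshot names; Jim's Filer procedure itself is NetApp-specific) is a full send while the volume is still in service, followed by a short final incremental sync:

    # initial bulk copy while the source stays live
    zfs snapshot tank/vol@sync1
    zfs send tank/vol@sync1 | zfs receive temppool/vol
    # downtime starts: quiesce writers, then send only the recent changes
    zfs snapshot tank/vol@sync2
    zfs send -i tank/vol@sync1 tank/vol@sync2 | zfs receive temppool/vol
    # repoint clients at temppool/vol; downtime ends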

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Jim Horng
So, on the point of not needing a migration back: even at 144 disks, they won't all be in the same raid group. Figure out the best raid group size for you, since ZFS doesn't support changing the number of disks in a raidz yet. I usually use the number of slots per shelf, or a good number is …

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Jim Horng
Sorry, I need to correct myself. Mirroring LUNs on the Windows side to switch the storage pool under it is a great idea, and I think you can do this without downtime.

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Bob Friesenhahn
On Wed, 28 Apr 2010, Jim Horng wrote: So, on the point of not needing a migration back: even at 144 disks, they won't all be in the same raid group. Figure out the best raid group size for you, since ZFS doesn't support changing the number of disks in a raidz yet. I usually use the number of …

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Wolfraider
3 shelves with 2 controllers each, 48 drives per shelf. These are Fibre Channel attached. We would like all 144 drives added to the same large pool.

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Jim Horng
I understand your point. However, in most production systems the shelves are added incrementally, so it makes sense for the raid group size to be related to the number of slots per shelf. And in most cases, withstanding a shelf failure is too much overhead on storage anyway. For example, in his case he would have to configure 1+0 …

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Bob Friesenhahn
On Wed, 28 Apr 2010, Jim Horng wrote: I understand your point. However, in most production systems the shelves are added incrementally, so it makes sense for the raid group size to be related to the number of slots per shelf. And in most cases, withstanding a shelf failure is too much overhead on storage anyway. For example, in …

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Jim Horng
3 shelves with 2 controllers each. 48 drives per shelf. These are Fibre Channel attached. We would like all 144 drives added to the same large pool. I would do either 12- or 16-disk raidz3 vdevs and spread the disks across controllers within each vdev. You may also want to leave at least 1 spare …
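As a sketch of that layout (device names invented; a real build would pick disks so each raidz3 group spans controllers and shelves), the first 12-disk vdev plus a spare might look like this, with further raidz3 groups added the same way:

    # one 12-disk raidz3 vdev plus a hot spare (names are examples)
    zpool create bigpool \
      raidz3 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 c6t0d0 \
             c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0 c6t1d0 \
      spare c1t2d0
    # grow the pool with another 12-disk raidz3 group
    zpool add bigpool raidz3 c1t3d0 c2t3d0 c3t3d0 c4t3d0 c5t3d0 c6t3d0 \
                             c1t4d0 c2t4d0 c3t4d0 c4t4d0 c5t4d0 c6t4d0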

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Richard Elling
On Apr 28, 2010, at 9:48 PM, Jim Horng wrote: 3 shelves with 2 controllers each. 48 drives per shelf. These are Fibre Channel attached. We would like all 144 drives added to the same large pool. I would do either 12- or 16-disk raidz3 vdevs and spread the disks across controllers …

[zfs-discuss] Migrate ZFS volume to new pool

2010-04-27 Thread Wolfraider
We would like to delete and recreate our existing ZFS pool without losing any data. The way we thought we could do this was to attach a few HDDs and create a new temporary pool, migrate our existing ZFS volume to the new pool, delete and recreate the old pool, and migrate the ZFS volumes back. The …

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-27 Thread Cindy Swearingen
Hi Wolf, Which Solaris release is this? If it is an OpenSolaris system running a recent build, you might consider the zpool split feature, which splits a mirrored pool into two separate pools while the original pool is online. If possible, attach the spare disks to create the mirrored pool as …
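A sketch of that split approach (names hypothetical; this assumes the pool's top-level vdevs are plain disks or mirrors, since a mirror side cannot be attached to a raidz vdev):

    # mirror each top-level device with a spare disk (repeat per device)
    zpool attach tank c1t0d0 c7t0d0
    # after the resilver finishes, split one side off as a new pool
    zpool split tank tank2
    # the new pool is left exported by default; import it to use it
    zpool import tank2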

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-27 Thread Jim Horng
Unclear what you want to do. What's the goal of this exercise? If you want to replace the pool with larger disks and the pool is a mirror or raidz, you just replace one disk at a time and allow the pool to rebuild itself. Once all the disks have been replaced, it will automatically realize the …
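A sketch of that disk-by-disk replacement (device names invented; the autoexpand property assumes a build recent enough to have it):

    # allow the pool to grow once every disk in a vdev is larger
    zpool set autoexpand=on tank
    # replace one disk at a time, waiting for each resilver to finish
    zpool replace tank c1t0d0 c8t0d0
    zpool status tank   # check resilver progress before the next replace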