Re: [zfs-discuss] Best practice for moving FS between pool on same machine?

2007-06-20 Thread Constantin Gonzalez
Hi Chris,

> What is the best (meaning fastest) way to move a large file system
> from one pool to another pool on the same machine?  I have a machine
> with two pools.  One pool currently has all my data (4 filesystems), but it's
> misconfigured.  Another pool is configured correctly, and I want to move the
> file systems to the new pool.  Should I use 'rsync' or 'zfs send'?

zfs send/receive is the fastest and most efficient way.

I've used it multiple times on my home server until I had my configuration
right :).
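
For a single filesystem, the basic pattern is a snapshot followed by a
send/receive. A minimal sketch (the dataset and snapshot names here are
made up -- substitute your own):

  # snapshot the source, then replicate it into the new pool
  zfs snapshot dbxpool1/data@migrate
  zfs send dbxpool1/data@migrate | zfs receive dbxpool2/data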

> What happened is I forgot I couldn't incrementally add raid devices.  I want
> to end up with two raidz(x4) vdevs in the same pool.  Here's what I have now:

For this reason, I decided to go with mirrors. Yes, they use more raw storage
space, but they are also much more flexible to expand: just add two disks when
the pool is full and you're done.

If you have a lot of disks or can afford to add 4-5 disks at a time, then
RAID-Z may be just as easy to do, but remember that two-disk failures in
RAID-5 variants can be quite common; you may want RAID-Z2 instead.
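
Expanding a mirrored pool is then a one-liner. A sketch with made-up
device names:

  # create a pool from one mirrored pair...
  zpool create tank mirror c1t0d0 c2t0d0
  # ...and later grow it by adding a second mirrored pair
  zpool add tank mirror c1t1d0 c2t1d0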

> 1. move data to dbxpool2
> 2. remount using dbxpool2
> 3. destroy dbxpool1
> 4. create new proper raidz vdev inside dbxpool2 using devices from dbxpool1

Add:

0. Snapshot data in dbxpool1 so you can use zfs send/receive

Then the above should work fine.
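
Since you have four filesystems, a recursive snapshot saves some typing
(the snapshot name is made up):

  # -r snapshots the pool and every filesystem beneath it in one go
  zfs snapshot -r dbxpool1@premigrate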

> I'm constrained by trying to minimize the downtime for the group
> of people using this as their file server.  So I ended up with
> an ad-hoc assignment of devices.  I'm not worried about
> optimizing my controller traffic at the moment.

OK. If you really want to be thorough, I'd recommend the following (a
command-level sketch follows the list):

0. Run a backup, just in case. It never hurts.
1. Do a snapshot of dbxpool1
2. zfs send/receive dbxpool1 -> dbxpool2
   (This happens while users are still using dbxpool1, so no downtime).
3. Unmount dbxpool1
4. Do a second snapshot of dbxpool1
5. Do an incremental zfs send/receive of dbxpool1 -> dbxpool2.
   (This should take only a small amount of time)
6. Mount dbxpool2 where dbxpool1 used to be.
7. Check that everything is fine with the newly mounted pool.
8. Destroy dbxpool1
9. Use disks from dbxpool1 to expand dbxpool2 (be careful :) ).
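
In commands, the sequence might look roughly like this for one filesystem
(repeat per filesystem; snapshot names and the mountpoint are made up, the
device names are the ones from your dbxpool):

  # 1./2. first snapshot and full send, users keep working meanwhile
  zfs snapshot dbxpool1/data@move1
  zfs send dbxpool1/data@move1 | zfs receive dbxpool2/data

  # 3./4./5. unmount, snapshot again, send only the changes
  zfs unmount dbxpool1/data
  zfs snapshot dbxpool1/data@move2
  zfs send -i dbxpool1/data@move1 dbxpool1/data@move2 | zfs receive dbxpool2/data
  # (if receive complains the target was modified, roll dbxpool2/data
  #  back to @move1 first with zfs rollback)

  # 6. make the copy appear where the original used to be
  zfs set mountpoint=/export/data dbxpool2/data

  # 8./9. only after checking everything: free the old disks and expand
  zpool destroy dbxpool1
  zpool add dbxpool2 raidz c1t4d0 c2t6d0 c2t1d0 c2t4d0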

You might want to exercise the above steps on an extra spare disk with
two pools just to gain some confidence before doing it in production.
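
ZFS pools can be built on plain files, which makes such a dry run cheap
(file names and sizes are arbitrary):

  # two throwaway pools backed by files -- practice on these
  mkfile 100m /var/tmp/disk1 /var/tmp/disk2
  zpool create testpool1 /var/tmp/disk1
  zpool create testpool2 /var/tmp/disk2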

I have a script that automates steps 1-6 and is looking for beta
testers. If you're interested, let me know.

Hope this helps,
   Constantin

-- 
Constantin Gonzalez                      Sun Microsystems GmbH, Germany
Platform Technology Group, Global Systems Engineering  http://www.sun.de/
Tel.: +49 89/4 60 08-25 91   http://blogs.sun.com/constantin/

Registered office: Sun Microsystems GmbH, Sonnenallee 1, 85551 Kirchheim-Heimstetten
Commercial register: Amtsgericht Muenchen, HRB 161028
Managing directors: Marcel Schneider, Wolfgang Engels, Dr. Roland Boemer
Chairman of the supervisory board: Martin Haering


[zfs-discuss] Best practice for moving FS between pool on same machine?

2007-06-19 Thread Chris Quenelle
What is the best (meaning fastest) way to move a large file system
from one pool to another pool on the same machine?  I have a machine
with two pools.  One pool currently has all my data (4 filesystems), but it's
misconfigured. Another pool is configured correctly, and I want to move the 
file systems to the new pool.  Should I use 'rsync' or 'zfs send'?

What happened is I forgot I couldn't incrementally add raid devices.  I want
to end up with two raidz(x4) vdevs in the same pool.  Here's what I have now:

# zpool status
  pool: dbxpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        dbxpool     ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
            c2t6d0  ONLINE       0     0     0
          c2t1d0    ONLINE       0     0     0
          c2t4d0    ONLINE       0     0     0

errors: No known data errors

  pool: dbxpool2
 state: ONLINE
 scrub: resilver completed with 0 errors on Tue Jun 19 15:16:19 2007
config:

        NAME        STATE     READ WRITE CKSUM
        dbxpool2    ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
            c2t5d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0

errors: No known data errors

---

'dbxpool' has all my data today.  Here are my steps:

1. move data to dbxpool2
2. remount using dbxpool2
3. destroy dbxpool1
4. create a new proper raidz vdev inside dbxpool2 using the devices from
   dbxpool1 (my guess at the command is sketched below)
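
For step 4, I assume it would be something like this (devices taken from
dbxpool above; untested):

  # add the four freed disks as a second raidz vdev
  zpool add dbxpool2 raidz c1t4d0 c2t6d0 c2t1d0 c2t4d0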

Any advice?

I'm constrained by trying to minimize the downtime for the group
of people using this as their file server.  So I ended up with
an ad-hoc assignment of devices.  I'm not worried about
optimizing my controller traffic at the moment.
 
 