I need a bit of a sanity check here.

1) I have a RAIDZ2 of 8 1TB drives, so 6TB usable, running on an ancient
version of OpenSolaris (snv_134, I think). On that zpool (miniraid) I have
a zvol (RichRAID) that's using almost the whole pool. It's shared out via
COMSTAR Fibre Channel target mode. I'd like to move that zvol to a newer
server with a larger zpool. Sounds like a job for ZFS send/receive, right?
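
(For context, this is roughly how I've been checking sizes on the old box;
the exact columns may differ on a build this old:)

    zpool list miniraid
    zfs get volsize,used,referenced miniraid/RichRAID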

2) Since ZFS send/receive is snapshot-based, I need to create a snapshot.
Unfortunately I did not realize that snapshotting a zvol requires enough
free space in the pool to cover its reservation (effectively enough to
duplicate the zvol), and my zpool wasn't big enough. After a false start
(zpool add is dangerous when low on sleep) I added a 250GB mirror and a
pair of 3TB mirrors to miniraid and was able to successfully snapshot the
zvol: miniraid/RichRAID@exportable. (I ended up booting off an OI 151a5 USB
stick to make that work, since I don't believe snv_134 can handle a 3TB
disk.)
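
In case it matters, that worked out to roughly the following (device names
here are placeholders, not the real ones):

    zpool add miniraid mirror <250GB-disk-1> <250GB-disk-2>
    zpool add miniraid mirror <3TB-disk-1> <3TB-disk-2> mirror <3TB-disk-3> <3TB-disk-4>
    zfs snapshot miniraid/RichRAID@exportable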

3) Now it's easy, right? I enabled root login via SSH on the new host,
which is running a zpool "archive1" consisting of a single RAIDZ2 vdev of
3TB drives created with ashift=12, and did a ZFS send:

    zfs send miniraid/RichRAID@exportable | ssh root@newhost \
        zfs receive archive1/RichRAID

It asked for the root password, I gave it that password, and it was off
and running. GigE ain't super fast, but I've got time.
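
(Side note: if I rerun this I'd probably stick pv in the pipe for a live
byte count -- pv isn't in the base install, so that's an assumption about
what's available:)

    zfs send miniraid/RichRAID@exportable | pv | ssh root@newhost \
        zfs receive archive1/RichRAID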

The problem: so far the send/recv appears to have copied 6.25TB of 5.34TB.
That... doesn't look right. (I'm comparing the 5.34T REFER of the snapshot
in zfs list -t snapshot on the old system against the space used by
archive1/RichRAID in zfs list on the new system; the exact commands are
below.)
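
For the record, the two commands I'm comparing (property names from memory,
so approximate):

    # on the old host
    zfs list -t snapshot -o name,used,referenced miniraid/RichRAID@exportable
    # on the new host
    zfs list -o name,used,referenced archive1/RichRAID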

Is this a problem? Should I be panicking yet?

-- 
Dave Pooser
Manager of Information Services
Alford Media  http://www.alfordmedia.com


