Thomas Burgess <wonsl...@gmail.com> writes:

> I've never worked with zfs send/receive before. I will have to go
> read it....

It is actually *very* easy. I'll show you a real example. On the new
home server machine that I am setting up, I had these file systems
two minutes ago (full listing):

# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
d0                       231G  1.11T    25K  /d0
d0/m1                     21K  1.11T    21K  /d0/m1
d0/oback                 164G  1.11T   164G  /d0/oback
d0/pvr                  66.6G  1.11T  66.6G  /d0/pvr
rpool                   11.5G  1.33T    80K  /rpool
rpool/ROOT              6.24G  1.33T    21K  legacy
rpool/ROOT/opensolaris  6.24G  1.33T  5.59G  /
rpool/dump              1.62G  1.33T  1.62G  -
rpool/export            2.05G  1.33T    23K  /export
rpool/export/home       2.05G  1.33T    24K  /export/home
rpool/export/home/jni   1.13M  1.33T   945K  /export/home/jni
rpool/export/home/ni    2.04G  1.33T  1.87G  /export/home/ni
rpool/swap              1.62G  1.33T   109M  -
rpool/usr.local          371K  1.33T   214K  /usr/local

Now assume I want to move the file system for /usr/local to the d0
pool. First I create a snapshot with an arbitrary name in the source
file system:

# zfs snapshot rpool/usr.local@transfer_snapshot

Of course you can use any pre-existing snapshot in the file system,
too. It does not even have to be the last one, but a new one is
usually what you want.
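You can check which snapshots exist at any time; the -t option
limits the listing to snapshots:

# zfs list -t snapshot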

Then I pipe this snapshot from "zfs send" into a "zfs receive"
command (a.k.a. "zfs recv"):

# zfs send rpool/usr.local@transfer_snapshot | zfs receive -d d0

The "-d" option tells zfs receive to keep the file system name and
the snapshot name as they come in the stream. You can also specify
a different file system name (and position in the file system tree
of the target pool), but you can "zfs rename" the new file system
later anyway and change the mount point as you wish. Now see the
result:

# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
d0                       231G  1.11T    25K  /d0
d0/m1                     21K  1.11T    21K  /d0/m1
d0/oback                 164G  1.11T   164G  /d0/oback
d0/pvr                  67.0G  1.11T  67.0G  /d0/pvr
d0/usr.local             214K  1.11T   214K  /d0/usr.local
rpool                   11.5G  1.33T    80K  /rpool
rpool/ROOT              6.24G  1.33T    21K  legacy
rpool/ROOT/opensolaris  6.24G  1.33T  5.59G  /
rpool/dump              1.62G  1.33T  1.62G  -
rpool/export            2.05G  1.33T    23K  /export
rpool/export/home       2.05G  1.33T    24K  /export/home
rpool/export/home/jni   1.13M  1.33T   945K  /export/home/jni
rpool/export/home/ni    2.04G  1.33T  1.87G  /export/home/ni
rpool/swap              1.62G  1.33T   109M  -
rpool/usr.local          371K  1.33T   214K  /usr/local

That's all!

# diff -r /usr/local /d0/usr.local
# 

Now I would just destroy (or only unmount, at first) the old file
system and change the mount point of the new one.
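Using the names from above, that final switch-over would be a
sketch like this (note that "zfs destroy -r" also destroys the old
snapshots, so only run it once you are sure):

# zfs unmount rpool/usr.local
# zfs set mountpoint=/usr/local d0/usr.local
# zfs destroy -r rpool/usr.local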

> the main thing is i do NOT want to replicate the snapshots of this
> particular filesystem, i just want to create a new filesystem. The
> main question i had regarding zfs send/receive was whether or not
> it would work the way i wanted regarding compression.

Snapshots older than the one used for the send are only sent if you
specify the -R option to the send command. This is the reason why
the target file system is a bit smaller in this case -- some of the
space in the source file system is occupied by older snapshots that
are not replicated to the target.
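A full replication with all snapshots (and any descendant file
systems) would look roughly like this -- the -r on the snapshot
command makes a recursive snapshot for -R to send:

# zfs snapshot -r rpool/usr.local@transfer_snapshot
# zfs send -R rpool/usr.local@transfer_snapshot | zfs receive -d d0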

The compression setting in the target is independent of the source
file system. If you want this setting (or any other) in the target
to differ from the default, create the target file system first,
change the settings, and receive explicitly into that file system.
You can test this first with some small file systems that you
destroy later, if you want to be sure before replicating large
amounts of data. Example:

# zfs send rpool/usr.local@transfer_snapshot | zfs receive -F d0/usr.local
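Preparing such a small trial target with a non-default compression
setting could look like this (the name and the compression value
here are just illustrations):

# zfs create -o compression=gzip d0/testfs
# zfs get compression d0/testfs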

> After reading a little, it seems to work by replicating the actual
> zfs transaction data. This seems quite cool.

It is. The mechanism is quite efficient and reliable. At work we use
it, with incremental streams, to make hourly off-site backups of
over 100 file systems. Works like a charm.
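A minimal sketch of one such incremental transfer -- the host and
snapshot names are made up; -i sends only the changes between the
two snapshots:

# zfs send -i tank/fs@monday tank/fs@tuesday | ssh backuphost zfs receive -F backup/fs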

-- 
If someone comes at you with a sword, run if you can. Kung Fu
doesn't always work.                           -- Bruce Lee