Re: [zfs-discuss] Incremental backup via zfs send / zfs receive

2009-09-21 Thread David Pacheco

Frank Middleton wrote:

The problem with the regular stream is that most of the file
system properties (such as mountpoint) are not copied as they
are with a recursive stream. This may seem an advantage to some
(e.g., if the remote mountpoint is already in use, the mountpoint
seems to default to legacy). However, did I miss anything in the
documentation, or would it be worth submitting an RFE for an
option to send/recv properties in a non-recursive stream?



This is

6839260 want zfs send with properties
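
For context, this RFE appears to have been delivered later as the -p
option to zfs send, which includes dataset properties in a
non-recursive stream. A minimal sketch, assuming a build with the fix;
the dataset names are hypothetical, and DRYRUN=echo prints the
commands instead of running them, so it is safe anywhere:

```shell
# Sketch only: -p asks zfs send to include dataset properties in the
# stream, the behavior requested by 6839260. Dataset names are made up.
# DRYRUN=echo prints the commands instead of running them.
DRYRUN=echo
$DRYRUN zfs send -p space/home@today                           # full stream, with properties
$DRYRUN zfs send -p -i space/home@yesterday space/home@today   # incremental, with properties
```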

-- Dave

--
David Pacheco, Sun Microsystems Fishworks. http://blogs.sun.com/dap/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Incremental backup via zfs send / zfs receive

2009-09-20 Thread Frank Middleton

A while back I posted a script that does individual send/recvs
for each file system, sending incremental streams if the remote
file system exists, and regular streams if not.

The reason for doing it this way rather than a full recursive
stream is that there's no way to avoid sending certain file
systems such as swap, and it would be nice not to always send
certain properties such as mountpoint, and there might be file
systems you want to keep on the receiving end.

The problem with the regular stream is that most of the file
system properties (such as mountpoint) are not copied as they
are with a recursive stream. This may seem an advantage to some
(e.g., if the remote mountpoint is already in use, the mountpoint
seems to default to legacy). However, did I miss anything in the
documentation, or would it be worth submitting an RFE for an
option to send/recv properties in a non-recursive stream?

Oddly, incremental non-recursive streams do seem to override
properties, such as mountpoint, hence the /opt problem. Am I
missing something, or is this really an inconsistency? IMO
non-recursive regular and incremental streams should behave the
same way, and both should have options to send or not send
properties. For my purposes the default behavior is the reverse
of what I would like...

Thanks -- Frank

Latest version of the script follows; suggestions for improvements
are most welcome, especially for the /opt problem, where source and
destination hosts have different /opts (host6-opt and host5-opt here)
- see the ugly hack below (/opt is on the data pool because the boot
disks - soon to be SSDs - are filling up):

#!/bin/bash
#
# "backup" is the alias for the host receiving the stream.
# To start, do a full recursive send/receive and put the
# name of the initial snapshot in cur_snap. In case of
# disasters, the older snap name is saved in cur_snap_prev
# and there's an option not to delete any snapshots when done.
#
if test ! -e cur_snap; then echo cur_snap not found; exit 1; fi
P=`cat cur_snap`
mv -f cur_snap cur_snap_prev
T=`date +%Y-%m-%d:%H:%M:%S`
echo $T > cur_snap
echo snapping to space@$T
echo "Starting backup from space@$P to space@$T at `date`" >> snap_time
zfs snapshot -r space@$T
echo snapshot done
for FS in `zfs list -H | cut -f 1`
do
RFS=`ssh backup zfs list -H $FS 2>/dev/null | cut -f 1`
case $FS in
space/some-fs-to-skip-here)   # substitute any file systems to skip
  echo skipping $FS
  ;;
*)
  if test "$RFS"; then
    if [ "$FS" = space/swap ]; then
      echo skipping $FS
    else
      echo "do zfs send -i $FS@$P $FS@$T | ssh backup zfs recv -vF $RFS"
      zfs send -i $FS@$P $FS@$T | ssh backup zfs recv -vF $RFS
    fi
  else
    echo "do zfs send $FS@$T | ssh backup zfs recv -v $FS"
    zfs send $FS@$T | ssh backup zfs recv -v $FS
  fi
  if [ "$FS" = space/host5-opt ]; then
    echo "do ssh backup zfs set mountpoint=legacy space/host5-opt"
    ssh backup zfs set mountpoint=legacy space/host5-opt
  fi
  ;;
esac
done

echo "--Ending backup from space@$P to space@$T at `date`" >> snap_time

DOIT=1
while [ $DOIT -eq 1 ]
do
  read -p "Delete old snapshots? y/n " REPLY
  REPLY=`echo $REPLY | tr '[:upper:]' '[:lower:]'`
  case $REPLY in
    y)
      ssh backup zfs destroy -r space@$P
      echo Remote space@$P destroyed
      zfs destroy -r space@$P
      echo Local space@$P destroyed
      DOIT=0
      ;;
    n)
      echo Skipping:
      echo "    ssh backup zfs destroy -r space@$P"
      echo "    zfs destroy -r space@$P"
      DOIT=0
      ;;
    *)
      echo Please enter y or n
      ;;
  esac
done
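
The core decision in the loop above (send an incremental stream if the
file system already exists on the receiver, a full stream if not) can
be factored into a small function. A sketch with the remote check
stubbed out so the logic can be exercised without ssh or a pool; in
the real script the stub would be the ssh/zfs test shown in the
comment:

```shell
# Sketch: the full-vs-incremental decision from the loop above, isolated.
# remote_exists is a stub; the real check would be:
#   ssh backup zfs list -H "$1" >/dev/null 2>&1
remote_exists() {
  case $1 in
    space/home|space/docs) return 0 ;;   # pretend these exist remotely
    *) return 1 ;;
  esac
}

# Print the send/recv pipeline for filesystem $1, old snap $2, new snap $3.
send_cmd() {
  if remote_exists "$1"; then
    echo "zfs send -i $1@$2 $1@$3 | ssh backup zfs recv -vF $1"
  else
    echo "zfs send $1@$3 | ssh backup zfs recv -v $1"
  fi
}

send_cmd space/home old new    # remote copy exists: incremental
send_cmd space/new  old new    # remote copy missing: full stream
```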





Re: [zfs-discuss] Incremental backup via zfs send / zfs receive

2009-09-20 Thread Peter Pickford
Just destroy the swap snapshot and it doesn't get sent when you do a full send.
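
That suggestion slots in right after the recursive snapshot; a sketch,
assuming the pool name space from this thread (DRYRUN=echo prints the
commands rather than running them):

```shell
# Sketch of the tip above: snapshot recursively, then destroy the swap
# snapshot so a full recursive send (-R) never carries swap.
# DRYRUN=echo keeps this harmless on a machine without the pool.
DRYRUN=echo
T=`date +%Y-%m-%d:%H:%M:%S`
$DRYRUN zfs snapshot -r space@$T
$DRYRUN zfs destroy space/swap@$T
$DRYRUN zfs send -R space@$T    # then pipe into: ssh backup zfs recv -vFd space
```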

2009/9/20 Frank Middleton f.middle...@apogeect.com:
 A while back I posted a script that does individual send/recvs
 for each file system, sending incremental streams if the remote
 file system exists, and regular streams if not.

 [...]



Re: [zfs-discuss] Incremental backup via zfs send / zfs receive

2009-09-08 Thread Frank Middleton

On 09/07/09 07:29 PM, David Dyer-Bennet wrote:


Is anybody doing this [zfs send/recv] routinely now on 2009-6
OpenSolaris, and if so can I see your commands?


Wouldn't a simple recursive send/recv work in your case? I
imagine all kinds of folks are doing it already. The only problem
with it, AFAIK, is when a new fs is created locally without also
being created on the backup disk (unless this now works with
ZFS v3). The following works with snv103; if it works there, it
should work with 2009-6. The script method may have the advantage
of not destroying file systems on the backup that don't exist
on the source, but I have not tested that.

ZFS send/recv is pretty cool, but at least with older versions it
takes some tweaking to get right. Rather than send to a local drive,
I'm sending to a live remote system, which in some ways is more
complicated, since there might be things like /opt and xxx/swap
that you might not want to send at all. Finally, at least with ZFS
version 3, an incremental send of a filesystem that doesn't exist
on the far side doesn't work either, so one needs to test for that.

Given this, a simple send of a recursive snapshot AFAIK isn't going to
work. I am no bash expert, so this script could probably do with lots
of improvements, but it seems to do what I need it to do. You would
have to modify it extensively for your local needs: remove the ssh
backup and fix it to receive to your local disk. I include it here in
response to your request in the hope that it might be useful. Note
that, as written, it will create space/swap but won't send updates
to it.

The pool I'm backing up is called space and the target host is called
backup, an alias in /etc/hosts. When the machines switch roles, I
edit both /etc/hosts so the stream can go the other way. This script
probably won't work for rpools; there is lots of documentation about
that in previous posts to this list.

My solution to the rpool problem is to receive it locally to an
alternate root and then send that, but of course this only works
if the rpool isn't your only pool.

If any zfs/bash gurus out there can suggest improvements, they
would be much appreciated, especially ways to deal with the /opt
problem (which probably relates to the general rpool question).
Currently the /opts for each host are set to mountpoint=legacy,
but that is not a great solution :-(.
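
One possible refinement, offered only as an untested sketch: zfs recv
has a -u option that skips mounting the received file system, which
would keep a received /opt from shadowing the local one until its
mountpoint is pinned (DRYRUN=echo prints rather than runs, as before):

```shell
# Untested sketch: -u tells zfs recv not to mount what it receives;
# the received /opt dataset is then pinned to a legacy mountpoint.
# DRYRUN=echo prints the commands instead of running them.
DRYRUN=echo
$DRYRUN zfs recv -u -vF space/host5-opt     # reads the stream from stdin
$DRYRUN zfs set mountpoint=legacy space/host5-opt
```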

Cheers -- Frank

#!/bin/bash
P=`cat cur_snap`
rm -f cur_snap
T=`date +%Y-%m-%d:%H:%M:%S`
echo $T > cur_snap
echo snapping to space@$T
zfs snapshot -r space@$T
echo snapshot done
for FS in `zfs list -H | cut -f 1`
do
RFS=`ssh backup zfs list -H $FS 2>/dev/null | cut -f 1`
if test "$RFS"; then
  if [ "$FS" = space/swap ]; then
    echo skipping $FS
  else
    echo "do zfs send -i $FS@$P $FS@$T | ssh backup zfs recv -vF $RFS"
    zfs send -i $FS@$P $FS@$T | ssh backup zfs recv -vF $RFS
  fi
else
  echo "do zfs send $FS@$T | ssh backup zfs recv -v $FS"
  zfs send $FS@$T | ssh backup zfs recv -v $FS
fi
done

ssh backup zfs destroy -r space@$P
zfs destroy -r space@$P





[zfs-discuss] Incremental backup via zfs send / zfs receive

2009-09-07 Thread David Dyer-Bennet
Under my OpenSolaris 2008-11 install I reached the conclusion this 
didn't work due to a combination of bugs.  Is this fixed in 2009-6?


Specifically, I want to do a nightly incremental send from the most 
recent common snapshot and apply it to an external (USB) drive also 
formatted as ZFS.  I want to include any more recent snapshots in the 
transfer, so that the external drive ends up equivalent to the primary 
filesystem at the end of the transfer.  Is anybody doing this routinely 
now on 2009-6 OpenSolaris, and if so can I see your commands? 
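
For what it's worth, the workflow described above maps onto a single
pipeline, sketched here under the assumption that your zfs version has
the -R and -I options; pool and snapshot names are hypothetical, and
DRYRUN=echo prints rather than runs:

```shell
# Sketch: replicate everything from the last common snapshot through the
# newest one onto a pool on the external drive. -R makes the stream
# recursive; -I includes all intermediate snapshots. Names are made up.
DRYRUN=echo
$DRYRUN zfs send -R -I tank@common tank@latest   # pipe into: zfs recv -Fdu usbpool
```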

I'm about to start experimenting with this on a virtual machine, before 
upgrading the ZFS on-disk version of my main pool and my backup drives, 
but I'll work at it a lot harder if there are people out there 
succeeding at making it work!


--
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info
