Re: [zfs-discuss] zfs send/receive scenario problem w/ auto-snap service

2011-11-07 Thread HUGE | David Stahl
I think the approach of increasing the number of retained copies would work
best for us. The hold feature could also work, but it seems more complicated,
since there will be a large number of snapshots in between the two we are
sending.
I am going to try to implement this change to the keep setting and see if it
does the trick.
I am surprised this hasn't been implemented as a service in illumos,
something Nexenta has already done with their auto-sync service.

Thanks, Jim, and thanks to Erik for his suggestion as well. I am looking
over autobackup.sh to see whether there are parts I can use or modify for
our purposes.

On Sat, Nov 5, 2011 at 9:18 AM, Jim Klimov jimkli...@cos.ru wrote:

 2011-11-05 2:12, HUGE | David Stahl wrote:
 Our problem is that we need to use -R to snapshot and send all
 the child zvols, yet since we have a lot of data (3.5 TB), the hourly
 snapshots are cleaned up on the sending side while the script is
 running, which breaks it.



 In recent OpenSolaris and Illumos releases, you can use the
 zfs hold command to protect a snapshot from deletion.
 So before sending you'd walk the snapshots you want to
 send and hold them; after the send is complete you'd
 release the holds so the snapshots can actually be deleted.
 It would make sense to wrap all of this in a script...

 You can review the latest snapshots for a tree with a
 one-liner like this:

 # zfs list -tall -H -o name -r pool/export | grep -v @ | \
  while read DS; do zfs list -t snapshot -d1 -r $DS | tail -1; done

 pool/export@zfs-auto-snap:frequent-2011-11-05-17:00           0  -    22K  -
 pool/export/distr@zfs-auto-snap:frequent-2011-11-05-17:00     0  -  4.81G  -
 pool/export/home@zfs-auto-snap:frequent-2011-11-05-17:00      0  -   396M  -
 pool/export/home/jim@zfs-auto-snap:frequent-2011-11-05-17:00  0  -  24.7M  -

 If you only need filesystem OR volume datasets, you can
 replace the first line with one of these:

 # zfs list -t filesystem -H -o name -r pool/export | \
 # zfs list -t volume -H -o name -r pool/export | \

 Probably (for a recursive send) you'd need to catch
 all the identically-named snapshots in the tree.


 Another workaround is to keep more copies of the
 snapshots you need, e.g. not 24 hourlies but 100 or so.
 That would look like:

 # svccfg -s hourly listprop | grep zfs/keep
 zfs/keep   astring  24

 # svccfg -s hourly setprop zfs/keep = 100
 # svcadm refresh hourly

 # svccfg -s hourly listprop | grep zfs/keep
 zfs/keep   astring  100

 You could also use the auto-snapshot service's SMF instance
 attributes, like zfs/backup-save-cmd, to invoke a script that would
 place a hold on the snapshot, send it, and then release the hold.

 So you have a number of options almost out-of-the-box ;)

 //Jim

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




-- 
HUGE

David Stahl
Sr. Systems Administrator
718 233 9164
www.hugeinc.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send/receive scenario problem w/ auto-snap service

2011-11-05 Thread Jim Klimov

2011-11-05 2:12, HUGE | David Stahl wrote:
Our problem is that we need to use -R to snapshot and send all
the child zvols, yet since we have a lot of data (3.5 TB), the hourly
snapshots are cleaned up on the sending side while the script is
running, which breaks it.



In recent OpenSolaris and Illumos releases, you can use the
zfs hold command to protect a snapshot from deletion.
So before sending you'd walk the snapshots you want to
send and hold them; after the send is complete you'd
release the holds so the snapshots can actually be deleted.
It would make sense to wrap all of this in a script...
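
For illustration, a minimal sketch of such a wrapper; the pool, tag,
host and snapshot names below are only placeholders for your own setup:

#!/bin/sh
# Hold, send and release one recursive snapshot (placeholder names).
SNAP=pool/export@zfs-auto-snap:hourly-2011-11-05-17:00
TAG=sendlock

# Recursive hold, so the auto-snapshot cleanup cannot destroy any
# snapshot in the tree while the send is still running.
zfs hold -r "$TAG" "$SNAP" || exit 1

# Send the whole tree, then drop the hold again whether or not the
# send succeeded. (For an incremental send you would hold the older
# base snapshot as well.)
zfs send -R "$SNAP" | ssh backuphost zfs receive -d backuppool
STATUS=$?
zfs release -r "$TAG" "$SNAP"
exit $STATUS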

You can review the latest snapshots for a tree with a
one-liner like this:

# zfs list -tall -H -o name -r pool/export | grep -v @ | \
  while read DS; do zfs list -t snapshot -d1 -r $DS | tail -1; done

pool/export@zfs-auto-snap:frequent-2011-11-05-17:00           0  -    22K  -
pool/export/distr@zfs-auto-snap:frequent-2011-11-05-17:00     0  -  4.81G  -
pool/export/home@zfs-auto-snap:frequent-2011-11-05-17:00      0  -   396M  -
pool/export/home/jim@zfs-auto-snap:frequent-2011-11-05-17:00  0  -  24.7M  -

If you only need filesystem OR volume datasets, you can
replace the first line with one of these:

# zfs list -t filesystem -H -o name -r pool/export | \
# zfs list -t volume -H -o name -r pool/export | \

Probably (for a recursive send) you'd need to catch
all the identically-named snapshots in the tree.
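
As a rough sketch (again with placeholder names), you could first check
that one snapshot name is present on every dataset in the tree, and only
then run the recursive send for that name:

#!/bin/sh
# List datasets below $TOP that do not have a snapshot named $SNAPNAME
# (placeholder names; adjust to your pool and auto-snap schedule).
TOP=pool/export
SNAPNAME=zfs-auto-snap:hourly-2011-11-05-17:00

zfs list -H -o name -r "$TOP" | while read DS; do
    zfs list -t snapshot -H -o name "$DS@$SNAPNAME" >/dev/null 2>&1 \
        || echo "missing: $DS@$SNAPNAME"
done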


Another workaround is to keep more copies of the
snapshots you need, e.g. not 24 hourlies but 100 or so.
That would look like:

# svccfg -s hourly listprop | grep zfs/keep
zfs/keep   astring  24

# svccfg -s hourly setprop zfs/keep = 100
# svcadm refresh hourly

# svccfg -s hourly listprop | grep zfs/keep
zfs/keep   astring  100

You could also use the auto-snapshot service's SMF instance
attributes, like zfs/backup-save-cmd, to invoke a script that would
place a hold on the snapshot, send it, and then release the hold.
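
As a very rough example, and assuming (check the auto-snapshot method
script for the exact contract) that the command configured in
zfs/backup-save-cmd simply receives the 'zfs send' stream on its
standard input, such a command could be as small as:

#!/bin/sh
# Hypothetical zfs/backup-save-cmd target: forward the replication
# stream arriving on stdin to a remote receiver. The host and pool
# names are placeholders, and the stdin contract is an assumption
# to verify against the service's method script.
exec ssh backuphost /usr/sbin/zfs receive -d backuppool

Whether the snapshot name is also passed to such a command (so that it
could place and release a hold itself) depends on the service, so that
part is left out of the sketch.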

So you have a number of options almost out-of-the-box ;)

//Jim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss