Re: [zfs-discuss] zfs send/receive scenario problem w/ auto-snap service

2011-11-07 Thread HUGE | David Stahl
I think altering the number of retained copies would work best for us. The
hold feature could also work, but it seems like it might be more complicated,
as there will be a large number of snapshots in between the two we are
sending.
I am going to try to implement this keep change and see if it does the
trick.
  I am surprised this hasn't been implemented as a service in illumos,
something that Nexenta has already done with their auto-sync service.

Thanks Jim, and thanks to Erik for his suggestion as well. I am looking
over autobackup.sh to see if there are parts that I can use or modify
for our purposes.

On Sat, Nov 5, 2011 at 9:18 AM, Jim Klimov  wrote:

> 2011-11-05 2:12, HUGE | David Stahl wrote:
>> Our problem is that we need to use the -R to snapshot and send all
>> the child zvols, yet since we have a lot of data (3.5 TB), the hourly
>> snapshots are cleaned on the sending side, which breaks the script as it
>> is running.
>
> In recent OpenSolaris and Illumos releases, you can use
> the "zfs hold" command to lock a snapshot against deletion.
> So before sending you'd walk the snapshots you want to
> send and "hold" them; after the send is complete you'd
> "unhold" them so they can actually be deleted. It would
> make sense to wrap all of this into a script...
>
> You can review the latest snapshots for a tree with a
> one-liner like this:
>
> # zfs list -tall -H -o name -r pool/export | grep -v @ | \
>  while read DS; do zfs list -t snapshot -d1 -r "$DS" | tail -1; done
>
> pool/export@zfs-auto-snap:frequent-2011-11-05-17:00           0  -    22K  -
> pool/export/distr@zfs-auto-snap:frequent-2011-11-05-17:00     0  -  4.81G  -
> pool/export/home@zfs-auto-snap:frequent-2011-11-05-17:00      0  -   396M  -
> pool/export/home/jim@zfs-auto-snap:frequent-2011-11-05-17:00  0  -  24.7M  -
>
> If you only need filesystem OR volume datasets, you can
> replace the first line with one of these:
>
> # zfs list -t filesystem -H -o name -r pool/export | \
> # zfs list -t volume -H -o name -r pool/export | \
>
> Probably (for a recursive send) you'd need to catch
> all the identically-named snapshots in the tree.
>
>
> Another workaround can be to store more copies of the
> snapshots you need, i.e. not 24 "hourlies" but 100 or so.
> That would be like:
>
> # svccfg -s hourly listprop | grep zfs/keep
> zfs/keep   astring  24
>
> # svccfg -s hourly setprop zfs/keep = 100
>
> # svcadm refresh hourly
>
> You could also use the zfs-auto-snapshot SMF-instance attributes
> like "zfs/backup-save-cmd" to hook in a script which would
> place a "hold" on the snapshot, then send and unhold it.
>
> So you have a number of options almost out-of-the-box ;)
>
> //Jim
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>



-- 
HUGE

David Stahl
Sr. Systems Administrator
718 233 9164
www.hugeinc.com 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send/receive scenario problem w/ auto-snap service

2011-11-05 Thread Jim Klimov

2011-11-05 2:12, HUGE | David Stahl wrote:
> Our problem is that we need to use the -R to snapshot and send all
> the child zvols, yet since we have a lot of data (3.5 TB), the hourly
> snapshots are cleaned on the sending side, which breaks the script as it
> is running.

In recent OpenSolaris and Illumos releases, you can use
the "zfs hold" command to lock a snapshot against deletion.
So before sending you'd walk the snapshots you want to
send and "hold" them; after the send is complete you'd
"unhold" them so they can actually be deleted. It would
make sense to wrap all of this into a script...
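
For example, a rough (untested) wrapper, reusing the pool/host names from
David's command and an arbitrary hold tag, could look like this:

#!/bin/sh
SNAP="osol1/shares@zfs-auto-snap:daily-2011-11-03-00:00"
TAG="sendlock"

# Recursively hold this snapshot in every child dataset, so the
# auto-snapshot cleanup cannot destroy it while the send is running.
/usr/bin/pfexec /usr/sbin/zfs hold -r "$TAG" "$SNAP" || exit 1

# Do the recursive send while the snapshots are protected.
/usr/bin/pfexec /usr/sbin/zfs send -R "$SNAP" | \
  /usr/bin/ssh 10.10.10.75 "/usr/bin/pfexec /usr/sbin/zfs receive -vFd osol2/osol1backup"
RC=$?

# Release the holds again so the normal cleanup can destroy the snapshots later.
/usr/bin/pfexec /usr/sbin/zfs release -r "$TAG" "$SNAP"
exit $RC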

You can review the latest snapshots for a tree with a
one-liner like this:

# zfs list -tall -H -o name -r pool/export | grep -v @ | \
  while read DS; do zfs list -t snapshot -d1 -r "$DS" | tail -1; done

pool/export@zfs-auto-snap:frequent-2011-11-05-17:00           0  -    22K  -
pool/export/distr@zfs-auto-snap:frequent-2011-11-05-17:00     0  -  4.81G  -
pool/export/home@zfs-auto-snap:frequent-2011-11-05-17:00      0  -   396M  -
pool/export/home/jim@zfs-auto-snap:frequent-2011-11-05-17:00  0  -  24.7M  -


If you only need filesystem OR volume datasets, you can
replace the first line with one of these:

# zfs list -t filesystem -H -o name -r pool/export | \
# zfs list -t volume -H -o name -r pool/export | \

Probably (for a recursive send) you'd need to catch
all the identically-named snapshots in the tree.
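
For example, to see every dataset in the tree that carries one particular
snapshot name (reusing the "frequent" snapshot from the listing above):

# zfs list -H -o name -t snapshot -r pool/export | \
  grep '@zfs-auto-snap:frequent-2011-11-05-17:00$'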


Another workaround can be to store more copies of the
snapshots you need, i.e. not 24 "hourlies" but 100 or so.
That would be like:

# svccfg -s hourly listprop | grep zfs/keep
zfs/keep   astring  24

# svccfg -s hourly setprop zfs/keep = 100

# svcadm refresh hourly

You could also use the zfs-auto-snapshot SMF-instance attributes
like "zfs/backup-save-cmd" to hook in a script which would
place a "hold" on the snapshot, then send and unhold it.

So you have a number of options almost out-of-the-box ;)

//Jim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs send/receive scenario problem w/ auto-snap service

2011-11-04 Thread HUGE | David Stahl
Hi,
  I am having some problems architecting a zfs snapshot replication
scheme that suits the needs of my company.
Presently we do hourly/daily/weekly snapshots of our file server. This file
system is organized in parent/child/child type zvols, so think
pool/zvol1/zvol2/zvol3, pool/zvol1/zvol4, pool/zvol1/zvol5, etc.
  For our zfs send/receive we have been using a command like this:

/usr/bin/pfexec zfs send -R osol1/shares@zfs-auto-snap:daily-2011-11-03-00:00 | \
  /usr/bin/ssh 10.10.10.75 /usr/bin/pfexec /usr/sbin/zfs receive -vFd osol2/osol1backup

  Our problem is that we need to use the -R to snapshot and send all the
child zvols, yet since we have a lot of data (3.5 TB), the hourly snapshots
are cleaned on the sending side, which breaks the script as it is running. It
throws an error message like:

warning: cannot send 'osol1/shares@zfs-auto-snap:hourly-2011-11-02-16:00':
no such pool or dataset

This seems to be because the snapshot was there when the send started, but
was cleaned up by the auto-snapshot service while the script was running.
Does anyone know of a way I can resolve this? One way I can think of is
killing off the hourly snapshot service, yet we rely on that feature because
people here tend to accidentally delete stuff off the server. Or perhaps
disabling the hourlies service at the beginning of the script and
re-enabling it at the end.
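
Roughly the kind of thing I have in mind (untested; the FMRI assumes the
stock auto-snapshot instance name on our box):

#!/bin/sh
HOURLY="svc:/system/filesystem/zfs/auto-snapshot:hourly"

# Temporarily stop the hourly snapshots (and their cleanup) during the send.
/usr/bin/pfexec /usr/sbin/svcadm disable -t "$HOURLY"

/usr/bin/pfexec zfs send -R osol1/shares@zfs-auto-snap:daily-2011-11-03-00:00 | \
  /usr/bin/ssh 10.10.10.75 /usr/bin/pfexec /usr/sbin/zfs receive -vFd osol2/osol1backup
RC=$?

# Re-enable the hourlies even if the send failed.
/usr/bin/pfexec /usr/sbin/svcadm enable "$HOURLY"
exit $RC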

Or is there a better way of doing this that I am not seeing?


-- 
HUGE

David Stahl
Sr. Systems Administrator
718 233 9164
www.hugeinc.com 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss