Re: Amanda and ZFS

2008-04-28 Thread Philippe Michel

Anthony Worrall wrote:

Hi

unfortunately zfsdump, or zfs send as it is now, does not relate to
ufsdump in any way :-(
[...]
I wrote a script to use this but I had a problem getting estimates for
the incremental snapshots.


I have used this script, picked from the list's archive, for about 18 
months and I indeed had to make some small changes related to these 
estimates.


FWIW, what I use is attached (does the list accept attachments ?)


I can not see how amrecover would be able to restore from the
snapshot, as it does not know the format used. In fact there is no way
that I know of to extract a single file from the snapshot short of
recovering the whole snapshot. This is probably not too big an issue, as
the tape backup is only needed for disaster recovery and snapshots can
be used for file recovery.

amrestore could be used with zfs receive to recover the snapshot.


Test restores went fine. Restoring to an incremental level is a little 
awkward (one has to rename each restored snapshot before receiving the 
next one) but certainly cleaner than doing it from a ufsdump, with its 
potential resurrected/moved-files problems.
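For the record, the restore sequence looks roughly like this. The tape device, host, and filesystem names are made up, and it assumes the script's convention that every dump image is a zfs send of an @0 snapshot which gets renamed to the next level number after each dump:

```
amrestore -p /dev/rmt/0n client pool/fs | zfs receive pool/restore  # level 0 image -> pool/restore@0
zfs rename pool/restore@0 pool/restore@1   # @1 is the -i source of the level 1 stream
amrestore -p /dev/rmt/0n client pool/fs | zfs receive pool/restore  # level 1 incremental -> pool/restore@0
zfs rename pool/restore@0 pool/restore@2
amrestore -p /dev/rmt/0n client pool/fs | zfs receive pool/restore  # level 2 incremental
```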


#!/bin/pfksh
#need to use pfksh so that zfs can be run

ZFS=/usr/sbin/zfs

#Get the last argument
for FS in $@;
do
  echo $FS > /dev/null
done

#Check if the filesystem is zfs or ufs
res=`$ZFS list | /usr/xpg4/bin/egrep "${FS}$"`
if [ $? -eq 0 ]; then
  DEV=`echo $res | awk '{ print $1}'`
  ESTIMATE=0
  LEVEL=unknown
  UPDATE=0

  case $1 in
    *0*) LEVEL=0;;
    *1*) LEVEL=1;;
    *2*) LEVEL=2;;
    *3*) LEVEL=3;;
    *4*) LEVEL=4;;
    *5*) LEVEL=5;;
    *6*) LEVEL=6;;
    *7*) LEVEL=7;;
    *8*) LEVEL=8;;
    *9*) LEVEL=9;;
    *) echo "level NOT specified"; exit -1;;
  esac

  case $1 in
    *S*) ESTIMATE=1;;
  esac

  case $1 in
    *u*) UPDATE=1;;
  esac

  $ZFS list -H -t snapshot | /usr/xpg4/bin/grep -q "${DEV}@0"
  if [ $? -eq 0 ]; then
    $ZFS destroy ${DEV}@0
  fi
  $ZFS snapshot ${DEV}@0

  # make sure all the snapshots up to $LEVEL exist
  n=1
  while [ $n -le $LEVEL ];
  do
    $ZFS list -H -t snapshot | /usr/xpg4/bin/grep -q "${DEV}@${n}"
    if [ $? -ne 0 ]; then
      LEVEL=$(( $n - 1 ))
      break
    fi
    n=$(($n+1))
  done

  full=`$ZFS list -H -o refer ${DEV}@0`
  #convert returned size into kilobytes
  case $full in
    *K) full=`echo $full | sed -e 's/K//'`;;
    *M) full=`echo $full | sed -e 's/M//'`; full=$(( $full * 1024 ));;
    *G) full=`echo $full | sed -e 's/G//'`; full=$(( $full * 1024 * 1024 ));;
  esac
  if [ $LEVEL -gt 0 ]; then
    incr=`$ZFS list -H -o refer ${DEV}@${LEVEL}`
    case $incr in
      *K) incr=`echo $incr | sed -e 's/K//'`;;
      *M) incr=`echo $incr | sed -e 's/M//'`; incr=$(( $incr * 1024 ));;
      *G) incr=`echo $incr | sed -e 's/G//'`; incr=$(( $incr * 1024 * 1024 ));;
    esac
    size=$(( $full - $incr ))
    # $full can be less than $incr if we recently removed data;
    # the zfs send -i stream will not be exactly 0, but relatively small
    if [ $size -lt 0 ]; then
      size=0
    fi
  else
    size=$full
  fi
  size=$(( $size + 16 ))
  if [ $ESTIMATE == 1 ]; then
    # echo "doing Estimate $DEV at level $LEVEL" >&2
    size=$(( $size * 1024 ))
    echo $size
    $ZFS destroy ${DEV}@0
  else
    # echo "Dumping $DEV at level $LEVEL" >&2
    if [ $LEVEL -eq 0 ]; then
      $ZFS send ${DEV}@0
    else
      $ZFS send -i ${DEV}@${LEVEL} ${DEV}@0
    fi
    block=$(( $size * 2 ))
    MB=$(( $size / 1024 ))
    echo "DUMP: $block blocks (${MB}MB)" >&2
    if [ $UPDATE -eq 1 ]; then
      for snap in `$ZFS list -H -t snapshot | awk '{print $1}' | egrep "${DEV}@[1-9]$"`; do
        n=`echo $snap | cut -f2 -d@`
        if [ $n -gt $LEVEL ]; then
          $ZFS destroy $snap
        fi
      done
      n=$(( $LEVEL + 1 ))
      $ZFS rename ${DEV}@0 ${DEV}@${n}
    else
      $ZFS destroy ${DEV}@0
    fi
  fi
else
  /usr/lib/fs/ufs/ufsdump $*
fi
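The size-to-kilobytes conversion that appears twice above can be exercised on its own. This is a sketch; the function name to_kb is mine, and note that zfs list can also print fractional values such as 1.50G, which this integer shell arithmetic does not handle:

```shell
# normalize a zfs "refer" figure (e.g. 512K, 300M, 2G) to kilobytes,
# mirroring the case statements in the wrapper script above
to_kb() {
  v=$1
  case $v in
    *K) v=`echo $v | sed -e 's/K//'`;;
    *M) v=`echo $v | sed -e 's/M//'`; v=$(( $v * 1024 ));;
    *G) v=`echo $v | sed -e 's/G//'`; v=$(( $v * 1024 * 1024 ));;
  esac
  echo $v
}

to_kb 512K   # -> 512
to_kb 300M   # -> 307200
to_kb 2G     # -> 2097152
```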


Re: Amanda and ZFS

2008-04-25 Thread Chris Hoogendyk



Nick Smith wrote:

Dear Amanda Administrators.

What dump configuration would you suggest for backing up a ZFS pool of 
about 300GB? Within the pool there are several smaller 'filesystems'.


Would you :

1.  Use a script to implement ZFS snapshots and send these to the server
as the DLE?
2.  Use tar to back up the filesystems? We do not make much use of ACLs,
so tar's lack of ACL support shouldn't be an issue.
3.  Something else?

Question : If I use 2, can I still use 'amrecover', which AFAIK would be
the case if I went with 1?

The host is a Sun Solaris 10 x86 box, if that is pertinent.



I'm not on Solaris 10 yet, and haven't used ZFS, but . . .

I understand that with ZFS you have zfsdump (just as with ufs I have 
ufsdump). So you could use zfsdump with snapshots. I'm guessing it 
wouldn't be too hard to modify the wrapper I wrote for Solaris 9 that 
uses ufsdump with snapshots and is documented here 
http://wiki.zmanda.com/index.php/Backup_client#Chris_Hoogendyk.27s_Example


If you have that pool logically broken up into a number of smaller 
pieces that can be snapshotted and dumped, it will make it smoother for 
Amanda's planner to distribute the load over the dump cycle.


Shouldn't have any problems with amrecover.



---

Chris Hoogendyk

-
   O__   Systems Administrator
  c/ /'_ --- Biology Department
 (*) \(*) -- 140 Morrill Science Center
~~ - University of Massachusetts, Amherst

[EMAIL PROTECTED]

---


RE: Amanda and ZFS

2008-04-25 Thread Anthony Worrall
Hi

unfortunately zfsdump, or zfs send as it is now, does not relate to
ufsdump in any way :-(


From man zfs 

zfs send [-i snapshot1] snapshot2
 Creates a stream representation of snapshot2,  which  is
 written to standard output. The output can be redirected
 to a file or to a different machine (for example,  using
 ssh(1)). By default, a full stream is generated.

 -i snapshot1Generate  an  incremental  stream   from
 snapshot1  to snapshot2. The incremental
 source snapshot1 can be specified as the
 last component of the snapshot name (for
 example, the part after the @), and it
 will be assumed to be from the same file
 system as snapshot2.

 The format of the stream is evolving. No backwards  compati-
 bility  is  guaranteed.  You may not be able to receive your
 streams on future versions of ZFS.

I wrote a script to use this but I had a problem getting estimates for
the incremental snapshots.

I can not see how amrecover would be able to restore from the
snapshot, as it does not know the format used. In fact there is no way
that I know of to extract a single file from the snapshot short of
recovering the whole snapshot. This is probably not too big an issue, as
the tape backup is only needed for disaster recovery and snapshots can
be used for file recovery.

amrestore could be used with zfs receive to recover the snapshot.
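A minimal sketch of that pipeline, with a made-up tape device and names (receiving into a fresh filesystem, since zfs receive will not overwrite an existing one without -F):

```
amrestore -p /dev/rmt/0n client.example.com pool/home | zfs receive pool/home.restored
```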

One of the properties of zfs is that it encourages the use of a
filesystem for a logical set of files, i.e. a user home directory, a
software package, etc.
This means that every time you create a new filesystem you need to
create a new DLE for amanda. In fact creating the amanda DLE takes
longer than creating the zfs filesystem.

You can not just use tar to dump multiple zfs filesystems because amanda
tells tar not to cross filesystem boundaries.

You could probably write a wrapper around tar that removes the
--one-file-system option to get around this limitation.
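Such a wrapper mostly just has to filter the argument list before handing it to the real GNU tar (whose path would be site-specific). The loop below is the usual rebuild-the-positional-parameters idiom; echo stands in for the final exec of the real tar with "$@":

```shell
# sample argument list, as Amanda might pass it (illustrative only)
set -- --create --one-file-system --file=- --directory=/pool .
# rebuild "$@" without --one-file-system, preserving order
for a in "$@"; do
  shift
  [ "$a" = "--one-file-system" ] || set -- "$@" "$a"
done
echo "$@"   # -> --create --file=- --directory=/pool .
```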

  

Anthony Worrall

 
 -Original Message-
 From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]
 On Behalf Of Chris Hoogendyk
 Sent: 25 April 2008 13:39
 To: Nick Smith
 Cc: amanda-users@amanda.org
 Subject: Re: Amanda and ZFS
 
 
 
 Nick Smith wrote:
  Dear Amanda Administrators.
 
  What dump configuration would you suggest for backing up a ZFS pool
of
 about 300GB? Within the pool there are several smaller 'filesystems'.
 
  Would you :
 
  1.  Use a script to implement ZFS snapshots and send these to the
server
  as the DLE?
  2.  Use tar to backup the filesystems? We do not make much use of
ACLs
  so tar's lack of ACL support shouldn't be an issue?
  3.  Something else?
 
  Question : If I use 2, can I still use 'amrecover', which AFAIK would be
  the case if I went with 1?
 
  The host is a Sun Solaris 10 x86 box, if that is pertinent.
 
 
 I'm not on Solaris 10 yet, and haven't used ZFS, but . . .
 
 I understand that with ZFS you have zfsdump (just as with ufs I have
 ufsdump). So you could use zfsdump with snapshots. I'm guessing it
 wouldn't be too hard to modify the wrapper I wrote for Solaris 9 that
 uses ufsdump with snapshots and is documented here

http://wiki.zmanda.com/index.php/Backup_client#Chris_Hoogendyk.27s_Example
 
 If you have that pool logically broken up into a number of smaller
 pieces that can be snapshotted and dumped, it will make it smoother
for
 Amanda's planner to distribute the load over the dump cycle.
 
 Shouldn't have any problems with amrecover.
 
 
 
 ---
 
 Chris Hoogendyk
 
 -
 O__   Systems Administrator
c/ /'_ --- Biology Department
   (*) \(*) -- 140 Morrill Science Center
 ~~ - University of Massachusetts, Amherst
 
 [EMAIL PROTECTED]
 
 ---


Re: Amanda and ZFS

2008-04-25 Thread Jon LaBadie
On Fri, Apr 25, 2008 at 02:32:27PM +0100, Anthony   Worrall wrote:
 Hi
 
 unfortunately zfsdump, or zfs send as it is now, does not relate to
 ufsdump in any way :-(
 
 
  [ big snip ]
 
 One of the properties of zfs is that it encourages the use of a
 filesystem for a logical set of files, i.e. user home directory,
 software package etc.
 This means that every time you create a new filesystem you need to
 create a new DLE for amanda. In fact creating the amanda DLE takes
 longer than creating the zfs filesystem.
 
 You can not just use tar to dump multiple zfs filesystems because amanda
 tells tar not to cross filesystem boundaries.
 
 You could probably write a wrapper to tar to remove --one-file-system
 option to get around this limitation.

Another way would be to use include directives.  For example, if the
zfs pool was /pool and had file systems a, b, c, and d, you could
set up multiple DLEs that were rooted at /pool (with different tag names),
one with an include directive of include "./a" "./c" and another with
include "./b" "./d".  While traversing each of the included starting
points (directories), tar would never cross a file system boundary.
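In disklist terms that might look like the sketch below; the hostname and tag names are invented, and the exact include syntax should be checked against the disklist documentation for your Amanda release:

```
foo.example.com pool-ac /pool {
    user-tar
    include "./a" "./c"
}
foo.example.com pool-bd /pool {
    user-tar
    include "./b" "./d"
}
```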

-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 12027 Creekbend Drive  (703) 787-0884
 Reston, VA  20194  (703) 787-0922 (fax)


Re: Amanda and ZFS

2008-04-25 Thread Chris Hoogendyk



Anthony Worrall wrote:

Hi

unfortunately zfsdump, or zfs send as it is now, does not relate to
ufsdump in any way :-(



hmm. I guess I was being a bit naive.

I had assumed zfs development was more mature.

After reading the comments on this thread, I went searching for 
references to zfsdump (which doesn't exist, but nevertheless is a good 
search term for discussions of the missing capability). There are a 
variety of discussions on Sun's web site and others regarding 
difficulties with figuring out how to back up zfs, most particularly with 
respect to disaster recovery, where file system structure and 
information have been lost.


This gives me a bit more insight into comments from a Sun engineer we 
met with a couple of weeks ago (ok, he was an engineer specializing in 
Sun storage systems who works for the vendor who has the Sun contract 
for our state). Anyway, when I asked him if ZFS was ready for prime 
time, he hedged. I asked why people weren't adopting it more. He said 
that it hadn't really panned out, and that UFS had developed more during 
the time of ZFS development. So, most people were sticking with UFS. Of 
particular note was that ZFS isn't really supported for your boot drive.


Based on our discussions with this engineer, the new servers and storage 
systems we are getting will be set up entirely without ZFS.


However, given my earlier naive assumptions, I'm not going to assume 
that this is the complete story. Just enough to get my skepticism and 
sysadmin conservatism into full gear. ;-)




---

Chris Hoogendyk

-
   O__   Systems Administrator
  c/ /'_ --- Biology Department
 (*) \(*) -- 140 Morrill Science Center
~~ - University of Massachusetts, Amherst

[EMAIL PROTECTED]

---


RE: Amanda and ZFS

2008-04-25 Thread John E Hein
Anthony   Worrall wrote at 14:32 +0100 on Apr 25, 2008:
  unfortunately zfsdump, or zfs send as it is now, does not relate to
  ufsdump in any way :-(

Sorry to hijack this thread, but...

Can Solaris and/or ZFS snapshots support partial filesystem dumps (and
restores)?  If not, how do people using dump for backups support large
filesystems (that may be bigger than a tape)?  Are split dumps and
dump/restore or tar with excludes the only way in amanda right now?


RE: Amanda and ZFS

2008-04-25 Thread Anthony Worrall

neat

 -Original Message-
 From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]
 On Behalf Of Jon LaBadie
 Sent: 25 April 2008 16:00
 To: amanda-users@amanda.org
 Subject: Re: Amanda and ZFS
 
 On Fri, Apr 25, 2008 at 02:32:27PM +0100, Anthony   Worrall wrote:
  Hi
 
  unfortunately zfsdump, or zfs send as it is now, does not relate
to
  ufsdump in any way :-(
 
 
   [ big snip ]
 
  One of the properties of zfs is that it encourages the use of a
  filesystem for a logical set of files, i.e. user home directory,
  software package etc.
  This means that every time you create a new filesystem you need to
  create a new DLE for amanda. In fact creating the amanda DLE takes
  longer than creating the zfs filesystem.
 
  You can not just use tar to dump multiple zfs filesystems because
  amanda tells tar not to cross filesystem boundaries.
 
  You could probably write a wrapper to tar to remove
--one-file-system
  option to get around this limitation.
 
 Another way would be to use include directives.  For example, if the
 zfs pool was /pool and had file systems of a, b,c, and d, you could
 set up multiple DLEs that were rooted at /pool (different tag names)
 and had include directives of include ./a ./c and another with
 include ./b ./d  While traversing each of the included starting
 points (directories), tar would never cross a file system boundary.
 
 --
 Jon H. LaBadie  [EMAIL PROTECTED]
  JG Computing
  12027 Creekbend Drive(703) 787-0884
  Reston, VA  20194(703) 787-0922 (fax)


Re: Amanda and ZFS

2008-04-25 Thread John E Hein
Jon LaBadie wrote at 10:59 -0400 on Apr 25, 2008:
  Another way would be to use include directives.  For example, if the
  zfs pool was /pool and had file systems of a, b,c, and d, you could
  set up multiple DLEs that were rooted at /pool (different tag names)
  and had include directives of include ./a ./c and another with
  include ./b ./d  While traversing each of the included starting
  points (directories), tar would never cross a file system boundary.

Do those work when using 'dump' instead of 'tar'?


Re: Amanda and ZFS

2008-04-25 Thread Pieter Bowman
I started using ZFS in a big way over a year ago on our main file
server.  Since there is no ufsdump replacement to use with ZFS, I
elected to use GNU tar.  I know this doesn't yet cover backing up
things like ACLs, but we don't use them in our very heterogeneous
environment.  The main idea I had was to take a snapshot and point tar
at the snapshot so it had a nice static, read-only copy of the
filesystem to work from.

I created a shell script to run as a cron job, just before amdump is
run, which cleans up the previous snapshots and takes new snapshots of
each of the pools (effectively):

zfs destroy -r <pool>@AMANDA
zfs snapshot -r <pool>@AMANDA

Fortunately, amanda has a nice way to specify that the filesystem name
is something like /local, but the point to have tar start at is a
different location.  A disklist entry such as:

foo.math.utah.edu /local /local/.zfs/snapshot/AMANDA user-tar

The final issue I found was that the inode numbers in the snapshots
change each time a new snapshot is created.  This is a problem with
GNU tar's listed-incremental facility.  To work around this I ended up
hacking GNU tar to make it ignore the inodes stored in the listed
incremental files.  This was just a simple change, to have ZFS
filesystems treated the same as NFS.  The patch was submitted to the
GNU tar developers, but was rejected.  Here is the patch as applied to
GNU tar 1.16 (this patch also contains what I consider a fix for an
actual coding bug):

diff -r -c tar-1.16/src/incremen.c tar-1.16-local/src/incremen.c
*** tar-1.16/src/incremen.c Fri Sep  8 10:42:18 2006
--- tar-1.16-local/src/incremen.c   Fri Dec  8 14:53:37 2006
***************
*** 71,77 ****
  
  #if HAVE_ST_FSTYPE_STRING
    static char const nfs_string[] = "nfs";
! # define NFS_FILE_STAT(st) (strcmp ((st).st_fstype, nfs_string) == 0)
  #else
  # define ST_DEV_MSB(st) (~ (dev_t) 0 << (sizeof (st).st_dev * CHAR_BIT - 1))
  # define NFS_FILE_STAT(st) (((st).st_dev & ST_DEV_MSB (st)) != 0)
--- 71,77 ----
  
  #if HAVE_ST_FSTYPE_STRING
    static char const nfs_string[] = "nfs";
! # define NFS_FILE_STAT(st) (strcmp ((st).st_fstype, nfs_string) == 0 || strcmp ((st).st_fstype, "zfs") == 0)
  #else
  # define ST_DEV_MSB(st) (~ (dev_t) 0 << (sizeof (st).st_dev * CHAR_BIT - 1))
  # define NFS_FILE_STAT(st) (((st).st_dev & ST_DEV_MSB (st)) != 0)
***************
*** 247,253 ****
         directories, consider all NFS devices as equal,
         relying on the i-node to establish differences.  */
  
!   if (! (((DIR_IS_NFS (directory) & nfs)
          || directory->device_number == stat_data->st_dev)
         && directory->inode_number == stat_data->st_ino))
    {
--- 247,253 ----
         directories, consider all NFS devices as equal,
         relying on the i-node to establish differences.  */
  
!   if (! (((DIR_IS_NFS (directory) && nfs)
          || directory->device_number == stat_data->st_dev)
         && directory->inode_number == stat_data->st_ino))
    {


I hope this helps other people with using amanda and ZFS.

I'm happy to clear up any unclear issues.

Pieter


Re: Amanda and ZFS

2008-04-25 Thread John E Hein
Pieter Bowman wrote at 11:41 -0600 on Apr 25, 2008:
  The final issue I found was that the inode numbers in the snapshots
  change each time a new snapshot is created.  This is a problem with
  GNU tar's listed-incremental facility.  To work around this I ended up
  hacking GNU tar to make it ignore the inodes stored in the listed
  incremental files.  This was just a simple change, to have ZFS
  filesystems treated the same as NFS.  The patch was submitted to the
  GNU tar developers, but was rejected.  Here is the patch as applied to
  GNU tar 1.16 (this patch also contains what I consider a fix for an
  actual coding bug):

The gtar devs finally accepted something to help with this problem:
--no-check-device.

http://article.gmane.org/gmane.comp.archivers.amanda.user/32804/match=nfs+tar
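A throwaway demonstration of the option outside Amanda (GNU tar 1.20 or later; the paths are scratch temp files):

```shell
# create a listed-incremental archive while telling GNU tar not to
# compare the device numbers recorded in the snapshot (.snar) file
d=$(mktemp -d)
mkdir "$d/data"
echo hello > "$d/data/file"
tar --create --file=/dev/null \
    --listed-incremental="$d/archive.snar" \
    --no-check-device --directory="$d/data" .
ls "$d/archive.snar"
```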


Re: Amanda and ZFS

2008-04-25 Thread Jon LaBadie
On Fri, Apr 25, 2008 at 11:46:34AM -0600, John E Hein wrote:
 Jon LaBadie wrote at 10:59 -0400 on Apr 25, 2008:
   Another way would be to use include directives.  For example, if the
   zfs pool was /pool and had file systems of a, b,c, and d, you could
   set up multiple DLEs that were rooted at /pool (different tag names)
   and had include directives of include ./a ./c and another with
   include ./b ./d  While traversing each of the included starting
   points (directories), tar would never cross a file system boundary.
 
 Do those work when using 'dump' instead of 'tar'?

Though I've not tried it, it should.

I base that on the description of the command

/usr/sbin/ufsdump [options] [arguments] files_to_dump

and the belief that the include directive merely provides the args
corresponding to files_to_dump.
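In other words, a hand-run equivalent of such a DLE might be the following level 0 dump to stdout (illustrative paths; Solaris ufsdump only supports level 0 when given file arguments):

```
cd /pool && /usr/sbin/ufsdump 0f - ./a ./c > /tmp/pool-ac.dump
```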

jl
-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 12027 Creekbend Drive  (703) 787-0884
 Reston, VA  20194  (703) 787-0922 (fax)


Re: Amanda and ZFS

2008-04-25 Thread John E Hein
Jon LaBadie wrote at 13:57 -0400 on Apr 25, 2008:
  Though I've not tried it, it should.
  
  I base that on the description of the command
  
  /usr/sbin/ufsdump [options] [arguments] files_to_dump
  
  and the belief that the include directive merely provides the args
  corresponding to files_to_dump.

Ah.  Okay.  That's a solaris ufsdump feature... linux, too, maybe
others.  It won't work for the BSDs (filesystem only).

And that's _if_ amanda passes that on the dump invocation.
I haven't tried it either or looked at the code yet.

But one limitation (with solaris' ufsdump and linux's dump) is that
you can't do incrementals using that method.  Level 0 only.  I don't
know if amanda adds support on top of that to kludge in incremental
support - I doubt it, but I'm sure someone will speak up if you can.


Re: Amanda and ZFS

2008-04-25 Thread Pieter Bowman

 ...
 The gtar devs finally accepted something to help with this problem:
 --no-check-device.
 ...

Thanks, I hadn't caught the addition of that option.  That also
reminds me that the problem isn't the inode number, but the device
number.

Pieter


Re: Amanda and ZFS

2008-04-25 Thread Jon LaBadie
On Fri, Apr 25, 2008 at 01:36:35PM -0600, John E Hein wrote:
 Jon LaBadie wrote at 13:57 -0400 on Apr 25, 2008:
   Though I've not tried it, it should.
   
   I base that on the description of the command
   
   /usr/sbin/ufsdump [options] [arguments] files_to_dump
   
   and the belief that the include directive merely provides the args
   corresponding to files_to_dump.
 
 Ah.  Okay.  That's a solaris ufsdump feature... linux, too, maybe
 others.  It won't work for the BSDs (filesystem only).
 
 And that's _if_ amanda passes that on the dump invocation.
 I haven't tried it either or looked at the code yet.
 
 But one limitation (with solaris' ufsdump and linux's dump) is that
 you can't do incrementals using that method.  Level 0 only.  I don't
 know if amanda adds support on top of that to kludge in incremental
 support - I doubt it, but I'm someone will speak up if you can.
 

In the original query regarding ZFS the question was about file systems.
I don't know if dump/ufsdump would regard a relative pathname that
is a mount point as a file system.  If it did, then incrementals and
fulls could both be done.

-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 12027 Creekbend Drive  (703) 787-0884
 Reston, VA  20194  (703) 787-0922 (fax)


Re: amanda and zfs

2006-03-08 Thread Paddy Sreenivasan
Hi Anthony,

Thanks for providing the ufsdump wrapper to handle ZFS.  When the
Application API (http://wiki.zmanda.com/index.php/Application_API) is
implemented (hopefully in the next Amanda release after 2.5), creating
an application plugin for a snapshot-based filesystem such as ZFS will
be easier.  Issue 1 will be handled by the Application API.

Paddy


On 3/8/06, Anthony Worrall [EMAIL PROTECTED] wrote:
 Hi

 I have been playing about with Sun's new filesystem, ZFS.

 One of the things I wanted to do was make sure I could get amanda to
 back up zfs.
 The way the backup utility works with zfs is based on snapshots. You can
 either do a full backup of a filesystem or dump the difference between
 two snapshots. The restore process is an all-or-nothing recovery of the
 snapshot. There is no way to recover a single file from a backup or to
 index the snapshot.

 I have hacked a script to replace ufsdump which will manage the
 snapshots and appropriately respond to amanda's sendsize and sendbackup
 requests; it is included below.
 It requires that the amanda user has a role that can run the zfs
 command. Also the zfs DLEs should have indexing turned off.

 Here are some observations about zfs and amanda. ZFS encourages the use
 of many smaller filesystems, at the home directory or even software
 package level, as it is easy to grow and manage filesystems. This may
 have an impact on amanda in a couple of ways.

 1. Amanda seems to be more likely to do a full backup of the filesystem
    since it is small. However, with many small filesystems the total
    amount of data backed up may end up being larger. Currently I have
    only a handful of filesystems with a few megabytes each and amanda
    has only requested full backups.

 2. With many filesystems, managing the DLEs becomes more complex. One
    idea may be to have the DLEs for a client stored on the client.
    Amanda would then need to get the DLEs during the estimate phase or
    when doing a check of the client. This would also be useful in the
    case where the management of the client is separated from the backup
    management.

 I guess some of these issues may be addressed in 2.5

 Cheers

 Anthony Worrall

 #!/bin/pfksh
 #need to use pfksh so that zfs can be run

 ZFS=/usr/sbin/zfs

 #Get the last argument
 for FS in $@;
 do
   echo $FS > /dev/null
 done

 #Check if the filesystem is zfs or ufs
 res=`$ZFS list | /usr/xpg4/bin/egrep "${FS}$"`
 if [ $? -eq 0 ]; then
   DEV=`echo $res | awk '{ print $1}'`
   ESTIMATE=0
   LEVEL=unknown
   UPDATE=0

   case $1 in
     *0*) LEVEL=0;;
     *1*) LEVEL=1;;
     *2*) LEVEL=2;;
     *3*) LEVEL=3;;
     *4*) LEVEL=4;;
     *5*) LEVEL=5;;
     *6*) LEVEL=6;;
     *7*) LEVEL=7;;
     *8*) LEVEL=8;;
     *9*) LEVEL=9;;
     *) echo "level NOT specified"; exit -1;;
   esac

   case $1 in
     *S*) ESTIMATE=1;;
   esac

   case $1 in
     *u*) UPDATE=1;;
   esac

   $ZFS list -H -t snapshot | /usr/xpg4/bin/grep -q "${DEV}@0"
   if [ $? -eq 0 ]; then
     $ZFS destroy ${DEV}@0
   fi
   $ZFS snapshot ${DEV}@0

   # make sure all the snapshots up to $LEVEL exist
   n=1
   while [ $n -le $LEVEL ];
   do
     $ZFS list -H -t snapshot | /usr/xpg4/bin/grep -q "${DEV}@${n}"
     if [ $? -ne 0 ]; then
       LEVEL=$(( $n - 1 ))
       break
     fi
     n=$(($n+1))
   done

   full=`$ZFS list -H -o refer ${DEV}@0`
   #convert returned size into kilobytes
   case $full in
     *K) full=`echo $full | sed -e 's/K//'`;;
     *M) full=`echo $full | sed -e 's/M//'`; full=$(( $full * 1024 ));;
     *G) full=`echo $full | sed -e 's/G//'`; full=$(( $full * 1024 * 1024 ));;
   esac
   if [ $LEVEL -gt 0 ]; then
     incr=`$ZFS list -H -o refer ${DEV}@${LEVEL}`
     case $full in
       *K) full=`echo $full | sed -e 's/K//'`;;
       *M) full=`echo $full | sed -e 's/M//'`; full=$(( $full * 1024 ));;
       *G) full=`echo $full | sed -e 's/G//'`; full=$(( $full * 1024 * 1024 ));;
     esac
     size=$(( $full - $incr ))
   else
     size=$full
   fi
   size=$(( $size + 16 ))
   if [ $ESTIMATE == 1 ]; then
     #echo "doing Estimate $DEV at level $LEVEL" >&2
     size=$(( $size * 1024 ))
     echo $size
     $ZFS destroy ${DEV}@0
   else
     #echo "Dumping $DEV at level $LEVEL" >&2
     if [ $LEVEL -eq 0 ]; then
       $ZFS backup ${DEV}@0
     else
       $ZFS backup -i ${DEV}@${LEVEL} ${DEV}@0
     fi
     block=$(( $size * 2 ))
     MB=$(( $size / 1024 ))
     echo "DUMP: $block blocks (${MB}MB)" >&2
     if [ $UPDATE -eq 1 ]; then
       for snap in `$ZFS list -H -t snapshot | awk '{print $1}' | egrep "${DEV}@[1-9]"`; do
         n=`echo $snap | cut -f2 -d@`
         if [ $n -gt $LEVEL ]; then
           $ZFS destroy $snap
         fi
       done
       n=$(( $LEVEL + 1 ))
       $ZFS rename ${DEV}@0 ${DEV}@${n}
     else
       $ZFS destroy ${DEV}@0
     fi
   fi
 else
   /usr/lib/fs/ufs/ufsdump $*
 fi

--

Amanda documentation: http://wiki.zmanda.com
Amanda