Hi, unfortunately zfsdump, or "zfs send" as it is now, does not relate to ufsdump in any way :-(
From man zfs:

     zfs send [-i snapshot1] snapshot2

         Creates a stream representation of snapshot2, which is
         written to standard output. The output can be redirected to
         a file or to a different machine (for example, using
         ssh(1)). By default, a full stream is generated.

         -i snapshot1
             Generate an incremental stream from snapshot1 to
             snapshot2. The incremental source snapshot1 can be
             specified as the last component of the snapshot name
             (for example, the part after the "@"), and it will be
             assumed to be from the same file system as snapshot2.

         The format of the stream is evolving. No backwards
         compatibility is guaranteed. You may not be able to receive
         your streams on future versions of ZFS.

I wrote a script to use this, but I had a problem getting estimates for the incremental snapshots.

I can not see how amrecover would be able to restore from the snapshot, as it does not know the format used. In fact, there is no way that I know of to extract a single file from the snapshot short of recovering the whole snapshot. This is probably not too big an issue, as the tape backup is only needed for disaster recovery and snapshots can be used for file recovery. amrestore can be used with "zfs receive" to recover the snapshot.

One of the properties of ZFS is that it encourages the use of a filesystem per logical set of files, e.g. a user home directory, a software package, etc. This means that every time you create a new filesystem you need to create a new DLE for Amanda. In fact, creating the Amanda DLE takes longer than creating the ZFS filesystem.

You can not just use tar to dump multiple ZFS filesystems, because Amanda tells tar not to cross filesystem boundaries. You could probably write a wrapper around tar that removes the --one-file-system option to get around this limitation.
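For what it's worth, the script I mentioned was roughly this shape. This is only a sketch of the zfs send side, following the man page semantics quoted above; the snapshot naming ("amanda-<level>") and the overridable ZFS variable are illustrative, not anything Amanda or ZFS mandates.

```shell
# Sketch: dump a ZFS filesystem to stdout with zfs send, full or
# incremental. ZFS is overridable so the logic can be exercised
# without a real pool; snapshot names are illustrative.
ZFS=${ZFS:-zfs}

zfs_dump() {
    fs=$1      # filesystem, e.g. tank/home
    level=$2   # 0 = full stream, >0 = incremental from the level-0 snapshot
    $ZFS snapshot "$fs@amanda-$level"
    if [ "$level" -eq 0 ]; then
        $ZFS send "$fs@amanda-0"
    else
        # per the man page, -i can take just the last component
        # (the part after the "@") of the source snapshot name
        $ZFS send -i amanda-0 "$fs@amanda-$level"
    fi
}

# Recovery would pair amrestore with zfs receive, along the lines of:
#   amrestore -p <tapedev> <client> tank/home | zfs receive tank/restore
```

Note there is still the estimate problem mentioned above: nothing here gives Amanda a cheap size estimate for the incremental stream.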
Anthony Worrall

> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
> On Behalf Of Chris Hoogendyk
> Sent: 25 April 2008 13:39
> To: Nick Smith
> Cc: amanda-users@amanda.org
> Subject: Re: Amanda and ZFS
>
> Nick Smith wrote:
> > Dear Amanda Administrators.
> >
> > What dump configuration would you suggest for backing up a ZFS pool
> > of about 300GB? Within the pool there are several smaller
> > 'filesystems'.
> >
> > Would you:
> >
> > 1. Use a script to implement ZFS snapshots and send these to the
> >    server as the DLE?
> > 2. Use tar to back up the filesystems? We do not make much use of
> >    ACLs, so tar's lack of ACL support shouldn't be an issue.
> > 3. Something else?
> >
> > Question: If I use 2, can I still use 'amrecover'? AFAIK that would
> > be the case if I went with 1?
> >
> > The host is a Sun Solaris 10 x86 box, if that is pertinent.
>
> I'm not on Solaris 10 yet, and haven't used ZFS, but . . .
>
> I understand that with ZFS you have zfsdump (just as with ufs I have
> ufsdump). So you could use zfsdump with snapshots. I'm guessing it
> wouldn't be too hard to modify the wrapper I wrote for Solaris 9 that
> uses ufsdump with snapshots and is documented here:
> http://wiki.zmanda.com/index.php/Backup_client#Chris_Hoogendyk.27s_Example
>
> If you have that pool logically broken up into a number of smaller
> pieces that can be snapshotted and dumped, it will make it smoother
> for Amanda's planner to distribute the load over the dump cycle.
>
> Shouldn't have any problems with amrecover.
>
> ---------------
> Chris Hoogendyk
> -
>    O__  ---- Systems Administrator
>   c/ /'_ --- Biology Department
>  (*) \(*) -- 140 Morrill Science Center
> ~~~~~~~~~~ - University of Massachusetts, Amherst
>
> <[EMAIL PROTECTED]>
> ---------------
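On the tar wrapper idea I mentioned: the core of it is just filtering --one-file-system out of the argument list Amanda builds before handing the rest to the real GNU tar. A sketch of that filtering, shown as a shell function so the logic is visible; a real wrapper script would end by exec'ing the real tar (the /usr/sfw/bin/gtar path is an assumption for Solaris):

```shell
# Sketch: drop --one-file-system from an argument list so tar will
# cross ZFS mount points. In a real wrapper the final line would be
#   exec /usr/sfw/bin/gtar "$@"
# instead of printing the filtered arguments.
strip_one_fs() {
    for a in "$@"; do
        shift
        # skip the flag Amanda adds; keep everything else, in order
        [ "$a" = "--one-file-system" ] && continue
        set -- "$@" "$a"
    done
    printf '%s\n' "$@"
}
```

Pointing the DLE's program (or a gnutar-path-style setting) at such a wrapper would let one DLE cover several ZFS filesystems, at the cost of Amanda no longer seeing them as separate entries.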