Re: [zfs-discuss] Backup complete rpool structure and data to tape

2011-05-12 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Peter Jeremy
 
 Finally, the send/recv protocol is not guaranteed to be compatible
 between ZFS versions.  

Years ago, the man page carried this warning: "The format of the stream is
evolving. No backwards compatibility is guaranteed. You may not be able to
receive your streams on future versions of ZFS."

But in the last several years, backward/forward compatibility has always
been preserved, so despite the warning, it was never a problem.

In more recent versions, the man page says: "The format of the stream is
committed. You will be able to receive your streams on future versions of
ZFS."



Re: [zfs-discuss] Backup complete rpool structure and data to tape

2011-05-12 Thread Arjun YK
Thanks everyone. Your input helped me a lot.

The 'rpool/ROOT' mountpoint is set to 'legacy' as I don't see any reason to
mount it. But I am not certain whether that could cause any issues in the
future, or whether it's the right thing to do. Any suggestions?


Thanks
Arjun


Re: [zfs-discuss] Backup complete rpool structure and data to tape

2011-05-12 Thread Fajar A. Nugraha
On Thu, May 12, 2011 at 8:31 PM, Arjun YK arju...@gmail.com wrote:
 Thanks everyone. Your input helped me a lot.
 The 'rpool/ROOT' mountpoint is set to 'legacy' as I don't see any reason to
 mount it. But I am not certain whether that could cause any issues in the
 future, or whether it's the right thing to do. Any suggestions?

The general answer is: if it ain't broken, don't fix it.

See
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Complete_Solaris_ZFS_Root_Pool_Recovery
for an example of bare-metal rpool recovery using NFS + zfs send/receive.
For your purpose, it's probably easier to just follow that example and have
Legato back up the images created from zfs send.

-- 
Fajar


[zfs-discuss] Backup complete rpool structure and data to tape

2011-05-11 Thread Arjun YK
Hello,

Trying to understand how to back up the mirrored ZFS boot pool 'rpool' to
tape, and restore it in case the disks are lost.
Backup would be done with an enterprise tool like TSM, Legato, etc.

As an example, here is the layout:

# zfs list
NAME                            USED  AVAIL  REFER  MOUNTPOINT
rpool                          64.5G   209G    97K  legacy
rpool/ROOT                     24.0G   209G    21K  legacy
rpool/ROOT/s10s_u8wos_08a        24G  14.8G  5.15G  /
rpool/ROOT/s10s_u8wos_08a/var     4G  3.93G  74.6M  /var
rpool/dump                     2.50G   209G  2.50G  -
rpool/swap                       16G   225G   136M  -
#

Could you answer these queries:

1. Is it possible to back up 'rpool' as a single entity, or do we need to
back up each filesystem, volume, etc. within rpool separately?

2. How do we back up the whole ZFS structure (pool, filesystem, volume,
snapshot, etc.) along with all its property settings, not just the actual
data stored within?

3. If the whole structure cannot be backed up using the enterprise backup
tool, how do we save and restore the ZFS structure in case the disks are
lost? I have read about 'zfs send/receive'. Is this the only recommended
way?

4. I have never tried to restore a whole boot disk from tape. Could you
share some details on how to rebuild the boot disks by restoring from tape?

5. I have set the 'rpool/ROOT' mountpoint to 'legacy' as I don't see any
reason to mount it. I'm not sure if that's the right thing to do. Any
suggestions?


Thanks
Arjun


Re: [zfs-discuss] Backup complete rpool structure and data to tape

2011-05-11 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Arjun YK
 
 Trying to understand how to back up the mirrored ZFS boot pool 'rpool' to
 tape, and restore it in case the disks are lost.
 Backup would be done with an enterprise tool like TSM, Legato, etc.

Backup/restore of a bootable rpool to tape with a 3rd-party application like
Legato is kind of difficult, because if you need to do a bare-metal restore,
how are you going to do it?  The root of the problem is that you need an OS
with Legato installed in order to restore the OS.  It's a catch-22.  It is
much easier if you can restore the rpool from some storage that doesn't
require the 3rd-party tool to access it ...

I might suggest:  Use zfs send to back up rpool to a file in the data
pool...  Then use Legato etc. to back up the data pool...  If you need to
do a bare-metal restore some day, you would just install a new OS, install
Legato or whatever, and restore your data pool.  Then you could boot to a
command prompt from the installation disc, and restore (obliterate) the
rpool using the rpool backup file.
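
A rough sketch of that restore path, assuming the stream was saved as
/datapool/backups/rpool.zfssend (device names are invented, and exact steps
vary by release - the Solaris ZFS Troubleshooting Guide linked elsewhere in
this thread covers the details):

# zpool create -f rpool c0t0d0s0
# zfs receive -Fdu rpool < /datapool/backups/rpool.zfssend
# zpool set bootfs=rpool/ROOT/s10s_u8wos_08a rpool
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0

(installgrub is the x86 step; SPARC uses installboot instead.)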

But I hope you can completely abandon the whole 3rd-party backup software
and tapes.  Some people can, and others cannot.  By far the fastest and best
way to back up ZFS is to use zfs send | zfs receive on another system or a
set of removable disks.  zfs send has the major advantage that it doesn't
need to crawl the whole filesystem scanning for changes.  It just knows
which incremental blocks have changed, and it instantly fetches only the
necessary blocks.
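
As a hedged illustration (host, pool, and snapshot names are invented), a
nightly incremental disk-to-disk backup might look like:

# zfs snapshot -r datapool@today
# zfs send -R -i datapool@yesterday datapool@today | \
      ssh backuphost zfs receive -Fdu backuppool

Only the blocks that changed between the two snapshots cross the wire.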


 1. Is it possible to back up 'rpool' as a single entity, or do we need to
 back up each filesystem, volume, etc. within rpool separately?

You can do it either way you like:  specify a single filesystem, or
recursively send it and all of its children.  See the send section of the
zfs man page.
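
For example (snapshot names hypothetical):

# zfs send rpool/ROOT/s10s_u8wos_08a@snap > /backupdir/one-fs.zfssend
# zfs send -R rpool@snap > /backupdir/whole-rpool.zfssend

The -R form includes all descendant filesystems, volumes, snapshots, and
their properties.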


 2. How do we back up the whole ZFS structure (pool, filesystem, volume,
 snapshot, etc.) along with all its property settings, not just the actual
 data stored within?

Regarding pool & filesystem properties, I believe this changed at some
point.  There was a time when I made a point of running zpool get all
mypool and zfs get all mypool and storing those text files alongside the
backup.  But if you check the man page for zfs send, I think this is
automatic now (the -R replication stream preserves properties).

No matter what, you'll have to create a pool before you can restore.  So
you'll just have to take it upon yourself to remember your pool architecture
... striping, mirroring, raidz, cache & log devices, etc.
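
One way to capture that layout alongside the backup (file names are just
examples):

# zpool status rpool > /backupdir/rpool.zpool-status.txt
# zpool get all rpool > /backupdir/rpool.zpool-props.txt
# zfs get -r all rpool > /backupdir/rpool.zfs-props.txt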

Incidentally, when you do an incremental zfs send, you have to specify the
"from" and "to" snapshots.  So there must be at least one identical snapshot
on the sending and receiving systems (or else your only option is to do a
complete full send).  Point is:  you can take a lot of baby steps, keeping
all the snapshots if you wish, or you can jump straight from the oldest
matching snapshot to the latest snap.  You'll complete somewhat faster but
lose granularity in the backups if you do that.
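
In zfs send terms (snapshot names invented), -i takes the single jump and
-I keeps every intermediate snapshot:

# zfs send -i rpool@monday rpool@friday | zfs receive -F backuppool/rpool
# zfs send -I rpool@monday rpool@friday | zfs receive -F backuppool/rpool

The first transfers one delta (the snapshots between monday and friday are
not sent); the second replays every snapshot in between.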


 3. If the whole structure cannot be backed up using the enterprise backup
 tool, how do we save and restore the ZFS structure in case the disks are
 lost?  I have read about 'zfs send/receive'.  Is this the only recommended
 way?

For anything other than rpool, you can use any normal backup tool you like:
NetBackup, Legato, tar, cpio, whatever.  (For rpool, I wouldn't really
trust those - I recommend zfs send for rpool.)  You can also use zfs send &
receive for data pools.  You gain performance (potentially many orders of
magnitude shorter backup windows) if zfs send & receive are acceptable in
your environment.  But it's not suitable for everyone, for many reasons...
You can't exclude anything from zfs send, and you can't do a selective
zfs receive; it's the whole filesystem or nothing.  And a single-bit
corruption will render the whole backup unusable, so it's not recommended to
store a zfs send data stream for later use.  It's recommended to pipe
zfs send directly into zfs receive, which implies disk-to-disk, no tape.




Re: [zfs-discuss] Backup complete rpool structure and data to tape

2011-05-11 Thread Glenn Lagasse
* Edward Ned Harvey (opensolarisisdeadlongliveopensola...@nedharvey.com) wrote:
  From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
  boun...@opensolaris.org] On Behalf Of Arjun YK
  
  Trying to understand how to back up the mirrored ZFS boot pool 'rpool' to
  tape, and restore it in case the disks are lost.
  Backup would be done with an enterprise tool like TSM, Legato, etc.
 
 Backup/restore of a bootable rpool to tape with a 3rd-party application like
 Legato is kind of difficult, because if you need to do a bare-metal restore,
 how are you going to do it?  The root of the problem is that you need an OS
 with Legato installed in order to restore the OS.  It's a

If you're talking about Solaris 11 Express, you could create your own
live CD using the Distribution Constructor[1] and include the backup
software on the CD image.  You'll have to customize the Distribution
Constructor to install the backup software (presumably via an SVR4
package[2]), but that's not too difficult.  Once you've created the image,
you're good to go forevermore (unless you need to update the backup
software on the image, in which case, if you keep your Distribution
Constructor manifests around, it should be a simple edit to point at
the newer backup software package).

If you're talking about S10, then that's a tougher nut to crack.

Cheers,

-- 
Glenn

[1] - http://download.oracle.com/docs/cd/E19963-01/html/820-6564/
[2] - http://download.oracle.com/docs/cd/E19963-01/html/820-6564/addpkg.html


Re: [zfs-discuss] Backup complete rpool structure and data to tape

2011-05-11 Thread Peter Jeremy
On 2011-May-12 00:20:28 +0800, Edward Ned Harvey 
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
Backup/restore of a bootable rpool to tape with a 3rd-party application like
Legato is kind of difficult, because if you need to do a bare-metal restore,
how are you going to do it?

This is a generic problem, not limited to ZFS.  The generic solutions
are either:
a) a customised boot disk that includes the 3rd-party restore client, or
b) a separate backup of root (plus the backup client) in a format that's
   restorable using only the tools on the generic boot disk (e.g. tar or
   ufsdump; see the sketch below).
(Where "boot disk" could be a network boot image instead of a physical
CD/DVD.)
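
For option (b), a minimal hedged sketch (server and path invented) that is
restorable with nothing more than tar on the boot media:

# cd /
# tar cf - . | gzip > /net/backupserver/dumps/root.tar.gz

(In practice you'd exclude /proc, /tmp, and other virtual filesystems.)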

I might suggest:  Use zfs send to back up rpool to a file in the data
pool...  Then use Legato etc. to back up the data pool...

As Edward pointed out later, this might be OK as a disaster-recovery
approach, but it isn't suitable for the situation where you want to
restore a subset of the files (e.g. you need to recover a file someone
accidentally deleted), and a zfs send stream isn't intended for storage.

Another potential downside is that the only way to read the stream is
using zfs recv into ZFS - this could present a problem if you wanted
to migrate the data into a different filesystem.  (All other restore
utilities I'm aware of use normal open/write/chmod/... interfaces so
you can restore your backup into any filesystem).

Finally, the send/recv protocol is not guaranteed to be compatible
between ZFS versions.  I'm not aware of any specific issues (though
someone reported in another recent thread that a zfs.v15 send | zfs.v22
recv caused pool corruption), and I would hope that zfs recv will
always maintain full compatibility with older zfs send streams.

But I hope you can completely abandon the whole 3rd-party backup software
and tapes.  Some people can, and others cannot.  By far the fastest and best
way to back up ZFS is to use zfs send | zfs receive on another system or a
set of removable disks.

Unfortunately, this doesn't fit cleanly into the traditional
enterprise backup solution, where Legato/NetBackup/TSM/... backs up
into a tape silo with automatic tape replication and off-site rotation.

Incidentally, when you do an incremental zfs send, you have to specify the
"from" and "to" snapshots.  So there must be at least one identical snapshot
on the sending and receiving systems (or else your only option is to do a
complete full send).

And (at least on v15), if you are using an incremental replication
stream and you create (or clone) a new descendant filesystem, you will
need to manually manage the initial replication of that filesystem.
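
For instance (names hypothetical), a one-off full send of the new filesystem
seeds it on the backup side so the incremental replication can pick it up
afterwards:

# zfs snapshot datapool/newfs@base
# zfs send datapool/newfs@base | \
      ssh backuphost zfs receive -u backuppool/datapool/newfs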

BTW, if you do elect to build a bootable, removable drive for backups,
you should be aware that gzip compression isn't supported on bootable
filesystems - at least in v15, trying to make a gzip-compressed filesystem
bootable, or trying to set compression=gzip on a bootable filesystem, gives
a very uninformative error message, and it took a fair amount of trawling
through the source code to find the real cause.
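
If you want compression on such a drive, the default algorithm is the safer
choice - a hedged sketch, pool name invented:

# zfs set compression=on backuppool/ROOT

(compression=on selects lzjb, which bootable filesystems do accept; the
gzip variants are what trip up booting at v15.)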

-- 
Peter Jeremy

