Re: [zfs-discuss] This may be a somewhat silly question ...

2006-07-27 Thread Matthew Ahrens
On Tue, Jun 27, 2006 at 06:30:46PM -0400, Dennis Clarke wrote:
 
 ... but I have to ask.
 
 How do I back this up?

The following two RFEs would help you out enormously:

6421958 want recursive zfs send ('zfs send -r')
6421959 want zfs send to preserve properties ('zfs send -p')

As far as RFEs go, these are pretty high priority...
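For illustration, a hypothetical invocation once those two RFEs integrate. The flags are taken straight from the RFE synopses; nothing below exists in any shipping build yet, and the snapshot name is an assumption:

```shell
# Hypothetical usage once RFEs 6421958 and 6421959 integrate; the -r
# and -p flags come from the RFE synopses and are not yet implemented.
SNAP="zfs0@backup1"                 # assumed snapshot name
SEND_CMD="zfs send -r -p $SNAP"     # one stream: all descendants plus properties
# The stream would then go to tape exactly as before:
#   $SEND_CMD > /dev/rmt/0mbn
```

That would give you a single stream carrying every descendant dataset and its properties, instead of one send per filesystem.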

--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] This may be a somewhat silly question ...

2006-06-28 Thread Cindy Swearingen


Dennis,

You are absolutely correct that the doc needs a step to verify
that the backup occurred.

I'll work on getting this step added to the admin guide ASAP.

Thanks for the feedback...


Cindy

Dennis Clarke wrote:

Am I missing something here?  [1]


Dennis

[1] I am fully prepared for RTFM and outright snickering if deserved :-)



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] This may be a somewhat silly question ...

2006-06-27 Thread Dennis Clarke

... but I have to ask.

How do I back this up?

Here is my definition of a backup :

(1) I can copy all data and metadata onto some media in
a manner that verifies the integrity of the data and
metadata written.

(1.1) By verify I mean that the data written onto
  the media is read back and compared to the
  source and accuracy is assured.

(2) I can walk away with the media and be able to restore
the data onto bare metal with nothing other than Solaris
10 Update 2 ( or Nevada ) CDROM sets and reasonable hardware.

I have a copy of the Solaris ZFS Administration Guide, document number
817-2271.  At 158 pages it is well worth printing out, I think.

Let's suppose that I have a pile of disks arranged in mirrors and everything
seems to be going along swimmingly thus :

# zpool status zfs0
  pool: zfs0
 state: ONLINE
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
zfs0 ONLINE   0 0 0
  mirror ONLINE   0 0 0
c0t10d0  ONLINE   0 0 0
c1t10d0  ONLINE   0 0 0
  mirror ONLINE   0 0 0
c0t11d0  ONLINE   0 0 0
c1t11d0  ONLINE   0 0 0
  mirror ONLINE   0 0 0
c0t12d0  ONLINE   0 0 0
c1t12d0  ONLINE   0 0 0
  mirror ONLINE   0 0 0
c0t9d0   ONLINE   0 0 0
c1t9d0   ONLINE   0 0 0
  mirror ONLINE   0 0 0
c0t13d0  ONLINE   0 0 0
c1t13d0  ONLINE   0 0 0

errors: No known data errors
#

# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
zfs0  95.3G  70.8G  27.5K  /export/zfs
zfs0/backup   91.2G  70.8G  88.4G  /export/zfs/backup
zfs0/backup/pasiphae  2.77G  24.2G  2.77G  /export/zfs/backup/pasiphae
zfs0/lotus 786M  70.8G   786M  /opt/lotus
zfs0/zone 3.40G  70.8G  24.5K  /export/zfs/zone
zfs0/zone/common  24.5K  8.00G  24.5K  legacy
zfs0/zone/domino  24.5K  70.8G  24.5K  /opt/zone/domino
zfs0/zone/sugar   3.40G  12.6G  3.40G  /opt/zone/sugar

At this point I attach a tape drive to the machine :

# devfsadm -v -C -c tape
devfsadm[24247]: verbose: symlink /dev/rmt/0 ->
../../devices/[EMAIL PROTECTED],0/SUNW,[EMAIL PROTECTED],880/[EMAIL PROTECTED],0:
.
.
.
devfsadm[24247]: verbose: symlink /dev/rmt/0ubn ->
../../devices/[EMAIL PROTECTED],0/SUNW,[EMAIL PROTECTED],880/[EMAIL PROTECTED],0:ubn
# mt -f /dev/rmt/0lbn status
DLT4000 tape drive:
   sense key(0x6)= Unit Attention   residual= 0   retries= 0
   file no= 0   block no= 0
#

I then create a snapshot as per the documentation :

# zfs list zfs0
NAME   USED  AVAIL  REFER  MOUNTPOINT
zfs0  95.3G  70.8G  27.5K  /export/zfs
# date
Tue Jun 27 18:10:36 EDT 2006
# zfs snapshot [EMAIL PROTECTED]
# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
zfs0  95.3G  70.8G  27.5K  /export/zfs
[EMAIL PROTECTED]  0  -  27.5K  -
zfs0/backup   91.2G  70.8G  88.4G  /export/zfs/backup
zfs0/backup/pasiphae  2.77G  24.2G  2.77G  /export/zfs/backup/pasiphae
zfs0/lotus 786M  70.8G   786M  /opt/lotus
zfs0/zone 3.40G  70.8G  24.5K  /export/zfs/zone
zfs0/zone/common  24.5K  8.00G  24.5K  legacy
zfs0/zone/domino  24.5K  70.8G  24.5K  /opt/zone/domino
zfs0/zone/sugar   3.40G  12.6G  3.40G  /opt/zone/sugar
#

And then I send that snapshot to tape :

# zfs send [EMAIL PROTECTED] > /dev/rmt/0mbn
#

That command ran for maybe 15 seconds.  I seriously doubt that 95GB of data
was written to tape and verified in that time, although I'd like to see the
device and bus that can do it!  :-)
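One way to catch this before committing a stream to tape is to count its bytes first. A non-recursive send of the pool-root snapshot carries only the root dataset (the 27.5K REFER shown in the listing), not the descendant filesystems. A minimal sketch; `stream_bytes` is a hypothetical helper, and the zfs invocation in the comment is the intended use:

```shell
# stream_bytes: hypothetical helper that prints how many bytes a
# stream-producing command emits, without writing them anywhere.
stream_bytes() {
    # $1 = command producing the stream
    eval "$1" | wc -c | awk '{ print $1 }'
}

# Intended use before the tape run (assumed snapshot name):
#   stream_bytes "zfs send zfs0@backup1"
# For a pool-root snapshot, expect a number on the order of the root
# dataset's 27.5K REFER, not the pool's 95.3G.
```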

I'll destroy that snapshot and try something else here :

# zfs destroy [EMAIL PROTECTED]

Now perhaps the mystery is to try a different ZFS filesystem :

# date
Tue Jun 27 18:17:33 EDT 2006
# zfs snapshot zfs0/[EMAIL PROTECTED]:17Hrs

I'll check the tape drive, which did something above, although I have no
idea what.

# mt -f /dev/rmt/0mbn status
DLT4000 tape drive:
   sense key(0x0)= No Additional Sense   residual= 0   retries= 0
   file no= 1   block no= 0
#

Now I will send that stream to the tape :

# zfs send zfs0/[EMAIL PROTECTED]:17Hrs > /dev/rmt/0mbn

The tape is now doing something again and I don't know what.

I would like to think that when it is done I can walk to a totally new
machine and restore the ZFS filesystem zfs0/lotus with no issue, but I don't
see a verify step here anywhere and I really have no idea what will happen
when I hit the end of that tape.

I am very bothered that my 95GB zfs0 did not go to tape and I don't know why
not.  I think that my itty bitty 786MB zfs0/lotus is actually going to tape
right now ( lights are flashing ).
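A verify step along the lines of definition (1.1) can be improvised by checksumming the stream on the way out, then re-reading the media and checksumming again. A sketch under assumptions: `verify_stream` is a hypothetical helper, cksum(1) is the POSIX checksum tool, and the rewinding device name is taken from the transcript above:

```shell
# Hypothetical verify helper: write the stream to the media while
# checksumming it, then re-read the media and compare checksums.
verify_stream() {
    # $1 = command producing the stream, $2 = destination path
    w=$(eval "$1" | tee "$2" | cksum | awk '{ print $1, $2 }')
    r=$(cksum < "$2" | awk '{ print $1, $2 }')
    [ "$w" = "$r" ]    # exit 0 when written == read back
}

# Intended use against the rewinding tape device (assumption: the
# device rewinds on close, so the re-read starts at beginning of tape;
# the snapshot name is made up):
#   verify_stream "zfs send zfs0/lotus@snap1" /dev/rmt/0m && echo verified
```

This only proves the bytes on the media match the stream that was sent; a full restore test onto scratch hardware is still the only way to satisfy definition (2).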