[zfs-discuss] zfs permanent errors in a clone

2010-05-31 Thread devsk
$ zfs list -t filesystem
NAME  USED  AVAIL  REFER  MOUNTPOINT
datapool  840M  25.5G21K  /datapool
datapool/virtualbox   839M  25.5G   839M  /virtualbox
mypool   8.83G  6.92G82K  /mypool
mypool/ROOT  5.48G  6.92G21K  legacy
mypool/ROOT/May25-2010-Image-Update  5.48G  6.92G  4.12G  /
mypool/ROOT/OpenSolaris-Latest   1.42M  6.92G  4.07G  /
mypool/export2.41G  6.92G22K  /export
mypool/export/home   2.41G  6.92G  1.90G  /export/home

$ zfs list -t snapshot|grep ROOT
mypool/r...@dbus_gam_server_race_partly_solved-6pm-may30-2010                              0      -    21K  -
mypool/ROOT/may25-2010-image-upd...@2010-05-26-02:13:44                                 1.34G      -  4.07G  -
mypool/ROOT/may25-2010-image-upd...@dbus_gam_server_race_partly_solved-6pm-may30-2010   23.9M      -  4.14G  -
mypool/ROOT/opensolaris-lat...@dbus_gam_server_race_partly_solved-6pm-may30-2010            0      -  4.07G  -

I am trying to send myr...@dbus_gam_server_race_partly_solved-6pm-may30-2010 to 
my backup server, and it dies with an error while sending:

warning: cannot send 'mypool/ROOT/may25-2010-image-upd...@2010-05-26-02:13:44': I/O error
sending from @2010-05-26-02:13:44 to mypool/ROOT/may25-2010-image-upd...@dbus_gam_server_race_partly_solved-6pm-may30-2010
cannot receive new filesystem stream: invalid backup stream

mypool has errors:

  pool: mypool
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scan: scrub repaired 0 in 0h9m with 0 errors on Sun May 30 21:28:33 2010
config:

NAMESTATE READ WRITE CKSUM
mypool  ONLINE   0 0 1
  c6t0d0s0  ONLINE   0 0 2

errors: Permanent errors have been detected in the following files:


mypool/ROOT/may25-2010-image-upd...@2010-05-26-02:13:44:/opt/openoffice.org/ure/lib/javaloader.uno.so

mypool/ROOT/may25-2010-image-upd...@2010-05-26-02:13:44:/usr/sfw/swat/help/Samba3-HOWTO/InterdomainTrusts.html

This snapshot @2010-05-26-02:13:44 seems to have been created when I updated 
the image to the latest build 140 on May 25. These files may have since been 
overwritten. I don't care about this snapshot. I care about the current 
data AND the previous BE (ROOT/OpenSolaris-Latest). I tried to delete the 
snapshot, but it gave me an error about it having a dependent clone, so it can't be deleted.
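
For context, this is how I understand the clone relationship can be inspected (just a sketch; the dataset names come from the 'zfs list' output above):

# Show which BE is a clone and which snapshot it was cloned from. A clone's
# origin snapshot can't be destroyed while the clone still depends on it,
# which would explain the destroy error.
zfs get -r origin mypool/ROOT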

A few questions:

1. Does this mean my ROOT/OpenSolaris-Latest BE has issues as well (because the 
snapshot/clone was created from it)? Or was the error introduced afterwards, so 
that it is present only in the dated snapshot? (One way I thought of to check 
this is sketched after this list.)

2. Why does a scrub on mypool come out clean every time? It takes about 10 minutes 
to scrub this pool, so I have been running one after pretty much every unclean 
shutdown (there have been a few panics; see my other posts).

3. I don't care about either of these files. How do I make it clean up these 
inodes and move forward?
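
One way I thought of to check question 1 myself (a sketch; it assumes 'beadm mount' works the way I think it does and that the paths below exist in that BE):

# Mount the other BE somewhere and read the two affected files end-to-end;
# ZFS verifies block checksums on read, so a bad copy should surface as an
# I/O error and bump the CKSUM counters in 'zpool status'.
beadm mount OpenSolaris-Latest /mnt
dd if=/mnt/opt/openoffice.org/ure/lib/javaloader.uno.so of=/dev/null bs=128k
dd if=/mnt/usr/sfw/swat/help/Samba3-HOWTO/InterdomainTrusts.html of=/dev/null bs=128k
zpool status -v mypool
beadm unmount OpenSolaris-Latest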


Re: [zfs-discuss] zfs permanent errors in a clone

2010-05-31 Thread devsk
I wrongly said myr...@dbus_gam_server_race_partly_solved-6pm-may30-2010. I 
meant mypool.

This is the send command that failed:

time zfs send -Rv myp...@dbus_gam_server_race_partly_solved-6pm-may30-2010 | 
ssh 192.168.0.6 zfs recv -vuF zfs-backup/opensolaris-backup/mypool
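
For the record, sending the datasets one at a time (latest snapshot only, no -R) should avoid ever reading the broken @2010-05-26-02:13:44 increment. Roughly, with the full dataset names reconstructed from the 'zfs list' output in my first mail, and assuming the receiving parent datasets already exist:

# Per-dataset sends: the damaged intermediate snapshot is never part of
# either stream.
zfs send mypool/ROOT/May25-2010-Image-Update@dbus_gam_server_race_partly_solved-6pm-may30-2010 | \
    ssh 192.168.0.6 zfs recv -vu zfs-backup/opensolaris-backup/mypool/ROOT/May25-2010-Image-Update
zfs send mypool/ROOT/OpenSolaris-Latest@dbus_gam_server_race_partly_solved-6pm-may30-2010 | \
    ssh 192.168.0.6 zfs recv -vu zfs-backup/opensolaris-backup/mypool/ROOT/OpenSolaris-Latest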


Re: [zfs-discuss] zfs permanent errors in a clone

2010-05-31 Thread devsk
OK, I have no idea what ZFS is smoking...:-)

I was able to send the individual datasets to the backup server.

zfs-backup/opensolaris-backup/mypool                                11.5G   197G    82K  /zfs-backup/opensolaris-backup/mypool
zfs-backup/opensolaris-backup/mypool/ROOT                           8.21G   197G    21K  /zfs-backup/opensolaris-backup/mypool/ROOT
zfs-backup/opensolaris-backup/mypool/ROOT/May25-2010-Image-Update   4.14G   197G  4.14G  /zfs-backup/opensolaris-backup/mypool/ROOT/May25-2010-Image-Update
zfs-backup/opensolaris-backup/mypool/ROOT/OpenSolaris-Latest        4.07G   197G  4.07G  /zfs-backup/opensolaris-backup/mypool/ROOT/OpenSolaris-Latest
zfs-backup/opensolaris-backup/mypool/export                         2.39G   197G    23K  /zfs-backup/opensolaris-backup/mypool/export
zfs-backup/opensolaris-backup/mypool/export/home                    2.39G   197G  1.90G  /zfs-backup/opensolaris-backup/mypool/export/home

Does having a later snapshot safely stored on the backup server mean that I am 
OK now? Do I have both BE images intact? How do I verify? There is no checksum 
command in zfs that I can run on both the source and the backup server to make 
sure that my backups are good!
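
The closest thing I can think of (a sketch): 'zfs recv' already verifies the checksums embedded in the send stream as it arrives, and a scrub on the backup pool re-reads everything that was received and verifies every block checksum:

# On the backup server: confirm the received copies are self-consistent.
zpool scrub zfs-backup
zpool status -v zfs-backup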

What's the error with that dated snapshot/clone? Why does scrub ignore it but 
send doesn't?
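
For what it's worth, a cheap way to re-check just that one snapshot without involving the network (full dataset name reconstructed from the earlier listing):

# 'zfs send' has to read every block the snapshot references, so piping it
# to /dev/null should re-trigger (or rule out) the I/O error on this
# snapshot alone.
zfs send mypool/ROOT/May25-2010-Image-Update@2010-05-26-02:13:44 > /dev/null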


Re: [zfs-discuss] zfs permanent errors in a clone

2010-05-31 Thread devsk
Scrub has turned up clean again:

  pool: mypool
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
pool will no longer be accessible on older software versions.
 scan: scrub repaired 0 in 0h7m with 0 errors on Mon May 31 09:00:27 2010
config:

NAMESTATE READ WRITE CKSUM
mypool  ONLINE   0 0 0
  c6t0d0s0  ONLINE   0 0 0

errors: No known data errors

Is this a bug?