With the help of DTrace, I found that the ddi_devid_compare() call in
vdev_disk_open() (in vdev_disk.c) was failing.
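
If anyone wants to check the same thing, a one-liner along these lines
should do it (an fbt probe on the DDI routine, so it needs root, and the
probe name assumes a stock kernel; ddi_devid_compare() returns non-zero
on a mismatch):

  # dtrace -n 'fbt::ddi_devid_compare:return /arg1 != 0/
      { printf("devid mismatch (rc=%d)\n", arg1); stack(); }'

stack() prints the kernel stack each time the comparison fails, which is
how the vdev_disk_open() caller shows up.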

I don't know why the devid changed, but simply doing zpool export ;
zpool import did the trick - the pool imported correctly and the
contents seem to be intact.
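
For the record, the recovery was nothing fancier than:

  # zpool export pool
  # zpool import pool

(Running plain "zpool import" with no arguments first will list the
pools visible for import, if you want to check before committing.)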

Examining the backup logs, I can see some rather strange ZFS behaviour
during the rsync! I'm not even going to try to understand it, but I'm
keeping the logs in case anyone is interested.

I'm now running a scrub.
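
For anyone following along, that's just:

  # zpool scrub pool

and then watching progress with zpool status -v; this is the result so
far: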

[EMAIL PROTECTED]:/pool/tmp# zpool status -v
  pool: pool
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: scrub in progress, 33.38% done, 1h10m to go
config:

        NAME        STATE     READ WRITE CKSUM
        pool        ONLINE       0     0     2
          c2t0d0s1  ONLINE       0     0     2

errors: The following persistent errors have been detected:

          DATASET  OBJECT  RANGE
          pool     3539    634257408-634388480

The file with inode number 3539 is a leftover from the failed backups, and the 
snapshots seem to be intact, so everything is well now. ZFS checksums rule.
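
In case it's useful to anyone: the OBJECT number in the error report is
the file's inode number, so mapping it back to a path is just a find
over the affected dataset's mountpoint (path assumed here):

  # find /pool -xdev -inum 3539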