Hello, all.

  I'm in the process of repairing a corrupted unmirrored rpool.
My current plan is to evacuate all reachable data with "zfs
send" to the redundant data pool, then recreate the rpool with
copies=2 and repopulate it.
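
  In outline, the second half of that plan would be something
like this (the disk name c0t0d0s0 is illustrative, and the
exact recv destination depends on how the backup was laid out):

zpool create rpool c0t0d0s0
zfs set copies=2 rpool     # inherited by datasets created below
zfs create rpool/ROOT
zfs send -R pool/rpool-backup/ROOT/oi_148a@20111028-01 | \
  zfs recv rpool/ROOT/oi_148a
# ...plus the usual bootfs/bootloader steps for a root pool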

  As I previously wrote, my machine crashes when trying to
import the rpool in any read-write mode; however, I did get
it to import with readonly=on.
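
  For the record, the import that finally worked was along
these lines (the altroot path is illustrative):

zpool import -f -o readonly=on -R /a rpool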

  I struck an unexpected problem: modern "zfs send" uses the
"zfs hold" feature to protect the snapshots from destruction
while the send is in progress. On a read-only pool, however,
the holds fail and the send never starts:

root@openindiana:~# zfs send -R rpool/ROOT/oi_148a@20111028-01 | \
  zfs recv -vnFd pool/rpool-backup
cannot hold 'rpool/ROOT/oi_148a@20110317-03': pool is read-only
cannot hold 'rpool/ROOT/oi_148a@20110319-01': pool is read-only
cannot hold 'rpool/ROOT/oi_148a@20110322-01': pool is read-only
cannot hold 'rpool/ROOT/oi_148a@20110401': pool is read-only
cannot hold 'rpool/ROOT/oi_148a@20110430': pool is read-only
cannot hold 'rpool/ROOT/oi_148a@20111028-01': pool is read-only

  IMHO this is a bug (some checks are due in the ZFS code -
this is an oi_148a LiveUSB); still, are any workarounds
possible with the code as it is? Perhaps some kernel tunables
to set with MDB?
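
  One untested idea I may try: if the holds are only placed
for -R replication streams, a per-dataset loop of plain sends
might get through, at the cost of intermediate snapshots and
dataset properties:

# untested: send only the latest snapshot of each dataset
for ds in $(zfs list -H -o name -r rpool); do
    snap=$(zfs list -H -d 1 -t snapshot -o name "$ds" | tail -1)
    [ -n "$snap" ] && zfs send "$snap" | \
        zfs recv -d pool/rpool-backup
done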

  Of course, in this particular case I can cpio the data both
ways with little loss, but this obstacle to evacuating a
broken pool was unexpected, so I figured I should at least
report it ;)
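
  For completeness, that fallback is basically the classic
find/cpio pass (paths illustrative; it loses snapshots, clones
and dataset properties, hence "little loss" only in this case):

cd /a && find . -xdev -depth -print | \
    cpio -pdm /pool/rpool-backup/files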

Thanks for any ideas,
and good luck fixing it for the future ;)
//Jim Klimov