Hello all,

  I'd like to report a tricky situation and a workaround
I've found useful - hope this helps someone in similar
situations.

  To cut a long story short, I could not properly mount
some datasets from a read-only pool: they had a non-"legacy"
mountpoint attribute set, but the mountpoint itself was not
available (the directory was absent or not empty). In this
situation I could neither create/clean up the mountpoints,
nor change the dataset properties to mountpoint=legacy.

  After a while I worked around this by mounting a tmpfs
over a higher-level point in the FS tree; inside that tmpfs
I could create the mountpoints that zfs needed.

  I don't want to report this adventure as a bug (in this
case zfs actually works as documented), but rather as an
inconvenience that could use some improvement, i.e. allowing
(perhaps forced) use of "mount -F zfs" even for datasets
with the mountpoint property defined.
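
  In other words, something along these lines, which today
gets refused for datasets with a non-"legacy" mountpoint,
would have saved me most of the gymnastics below (the dataset
and target paths are just examples):

# mount -F zfs -O rpool/ROOT/openindiana /mnt/2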

  Here goes the detailed version:

  I was evacuating data from a corrupted rpool which I could
only import read-only while booted from a LiveUSB. As I wrote
previously, I could not use "zfs send" (bug now submitted to
the Illumos tracker), so I resorted to directly mounting the
datasets and copying the data off them into another location
(a similar dataset hierarchy on my data pool), i.e.:

# zpool import -R /POOL pool
# zpool import -R /RPOOL -o readonly=on rpool
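
  At this point it helps to see which datasets got mounted
where and which did not, for example:

# zfs list -r -o name,mountpoint,canmount,mounted rpool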

  My last-used root fs (rpool/ROOT/openindiana-1) had the
property mountpoint=/ set, so it got mounted at /RPOOL in
the LiveUSB environment. I copied my data off it, roughly
like this:

# cd /RPOOL && ( find . -xdev -depth -print | \
  cpio -pvdm /POOL/rpool-backup/openindiana-1 ; \
  rsync -avPHK ./ /POOL/rpool-backup/openindiana-1/ )
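
To double-check such a copy afterwards, a checksummed dry run
of rsync is handy, for example:

# cd /RPOOL && rsync -n -avc --delete ./ \
  /POOL/rpool-backup/openindiana-1/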

I did likewise for many other filesystems, such as those with
legacy mountpoints (mounted wherever I liked, e.g. /mnt/1) or
those whose mountpoints were present and valid (like /export).
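
For the legacy-mountpoint datasets that part looked roughly
like this (the dataset name below is just a placeholder; the
datasets with valid mountpoints were typically mounted under
/RPOOL by the import already):

# mkdir -p /mnt/1
# mount -F zfs rpool/somedataset /mnt/1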

However, I ran into trouble with the secondary (older) root
FSes which I wanted to keep for posterity. For whatever reason
their "mountpoint" property was set to "/mnt". That directory
was not empty, and I found no way to pass the "-O" flag (mount
over non-empty directories) to ZFS for its automounted FSes.
On a read-only rpool I could not clean up the "/mnt" directory
either. I could not use "mount -F zfs" because the dataset's
mountpoint was defined and not "legacy", and I could not change
it to "legacy" because the pool was read-only. And if I
unmounted the "rpool/ROOT/openindiana-1" dataset, there was no
"/mnt" left at all and no way to create one on a read-only pool.
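
In commands, the dead ends looked roughly like this (using one
of the older roots as the example):

# zfs set mountpoint=legacy rpool/ROOT/openindiana
  (refused: the pool is read-only)
# mount -F zfs rpool/ROOT/openindiana /mnt/2
  (refused: the mountpoint property is set and not "legacy")
# rm -rf /RPOOL/mnt/*
  (refused: the underlying root FS is read-only)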

So I thought of tmpfs: a tmpfs can be mounted with "-O"
anywhere and needs practically no resources. First I tried
mounting a tmpfs over /RPOOL/mnt while "openindiana-1" was
still mounted, but I still could not mount the older root
over that mountpoint directly.
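
That failed attempt was essentially:

# mount -F tmpfs -O - /RPOOL/mnt
# zfs mount rpool/ROOT/openindiana
  (still refused to mount there)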

So I unmounted all datasets of "rpool", keeping the pool
imported, and mounted tmpfs over the pool's alternate mount
point. Now I could do my trick:

# mount -F tmpfs -O - /RPOOL
# mkdir /RPOOL/mnt
# zfs mount rpool/ROOT/openindiana
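
A quick check confirms that the older root is now reachable
under the alternate root, for example:

# df -h /RPOOL/mnt
# zfs get -r mounted,mountpoint rpool/ROOT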

To be complete for those who might need this walkthrough:
since I wanted to retain the benefits of root-dataset cloning,
I reproduced that with my backup as well:

# zfs snapshot pool/rpool-backup/openindiana-1@20110501
# zfs clone pool/rpool-backup/openindiana-1@20110501 \
  pool/rpool-backup/openindiana
# cd /RPOOL/mnt && rsync -avPHK --delete-after \
  ./ /POOL/rpool-backup/openindiana/

Thanks to rsync, I got only differing (older) files written
onto that copy, with newer files removed.

"cpio" and "rsync" both barked on some unreadable files
(I/O errors) which I believe were in the zfs blocks with
mismatching checksums, initially leading to the useless
rpool. I replaced these files with those on the LiveUSB
in the copy on "pool".

Finally I exported and recreated rpool on the same device,
and manually repopulated the zpool properties (failmode,
bootfs) as well as initial values of zfs properties that
I wanted (copies=2, dedup=off, compression=off). Then I
used installgrub to ensure that the new rpool is bootable.
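
Roughly, that stage was something like the following; the disk
slice name c1t0d0s0 is just a placeholder for my rpool device,
the failmode value is only an example, and "-R /RPOOL" keeps
the new pool under the same alternate root as before:

# zpool export rpool
# zpool create -f -R /RPOOL -o failmode=continue \
  -O copies=2 -O dedup=off -O compression=off rpool c1t0d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 \
  /dev/rdsk/c1t0d0s0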

I also set the zfs properties I wanted on the rpool hierarchy
copy in "pool": copies=2; canmount, mountpoint, caiman.* and
other properties set by OpenIndiana; compression=on where
allowed, i.e. on non-root datasets. Even though the actual
data on "pool" was written with copies=1, the property value
copies=2 will be copied and applied during "zfs send | zfs recv".
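
For example (the non-root dataset name below is just an
illustration of my layout):

# zfs set copies=2 pool/rpool-backup/openindiana-1
# zfs set copies=2 pool/rpool-backup/openindiana
# zfs set compression=on pool/rpool-backup/export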

Now I could "zfs send" the hierarchical replication stream
from my copy in "pool" to the new "rpool", kind of like this:

# zfs snapshot -r pool/rpool-backup@20111119-05
# zfs send -R pool/rpool-backup@20111119-05 | zfs recv -vF rpool
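
After the receive I could confirm the layout and point the
pool at the root dataset to boot from; bootfs has to name an
existing dataset, so it goes in at this point (adjust the name
to however your backup hierarchy was laid out):

# zfs list -r -o name,mountpoint,canmount rpool
# zpool set bootfs=rpool/ROOT/openindiana-1 rpool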

Since the hardware was all the same, there was little else
to do. I revised "/RPOOL/rpool/boot/grub/menu.lst" and
"/RPOOL/etc/vfstab" just in case, but otherwise I was ready
to reboot. Luckily for me, the system came up as expected.

That kind of joy does not always happen as planned,
especially when you're half-a-globe away from the
computer you're repairing ;)

Good luck and strong patience to those in similar situations,
//Jim Klimov
