Re: [zfs-discuss] SUMMARY: mounting datasets from a read-only pool with aid of tmpfs

2011-11-23 Thread Lori Alt

On 11/22/11 17:54, Jim Klimov wrote:

2011-11-23 2:26, Lori Alt wrote:

Did you try a temporary mount point?
zfs mount -o mountpoint=/whatever <dataset>

- lori



I don't want to lie, so I'll hold off on a definite answer.
I think I've tried that, but I'm not certain now. I'll try
to recreate the situation later and respond responsibly ;)

If this indeed works - that's a good idea ;)
Should it work relative to the alternate root as well (just
like a default/predefined mountpoint value would)?


It does not take an alternate root into account.  Whatever you specify 
as the value of the temporary mountpoint property is exactly where it's 
mounted.
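
For example, to place a dataset under the alternate root you would
spell the root out in the path yourself; a sketch, reusing the
dataset names from Jim's summary:

# zpool import -R /RPOOL -o readonly=on rpool
# zfs mount -o mountpoint=/RPOOL/mnt rpool/ROOT/openindiana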




BTW, is there a way to do overlay mounts like "mount -O"
with zfs automount attributes?


No, there is no property for that.  You would need to pass the -O
option to the "mount" or "zfs mount" command to get an overlay mount.
You don't need to use legacy mounts for this; the -O option works
with regular zfs mounts.
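
For example (a sketch; the dataset name is only a placeholder, on a
build whose "zfs mount" accepts -O as described above):

# zfs mount -O rpool/ROOT/openindiana

and, for a legacy-mounted dataset, via mount(1M) directly:

# mount -F zfs -O rpool/ROOT/openindiana /mnt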



I have to use legacy mounts
and /etc/vfstab for that now on some systems, but would
like to avoid that complication if possible...

//Jim


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SUMMARY: mounting datasets from a read-only pool with aid of tmpfs

2011-11-22 Thread Jim Klimov

2011-11-23 2:26, Lori Alt wrote:

Did you try a temporary mount point?
zfs mount -o mountpoint=/whatever 

- lori



I don't want to lie, so I'll hold off on a definite answer.
I think I've tried that, but I'm not certain now. I'll try
to recreate the situation later and respond responsibly ;)

If this indeed works - that's a good idea ;)
Should it work relative to the alternate root as well (just
like a default/predefined mountpoint value would)?

BTW, is there a way to do overlay mounts like "mount -O"
with zfs automount attributes? I have to use legacy mounts
and /etc/vfstab for that now on some systems, but would
like to avoid that complication if possible...

//Jim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SUMMARY: mounting datasets from a read-only pool with aid of tmpfs

2011-11-22 Thread Lori Alt



Did you try a temporary mount point?

zfs mount -o mountpoint=/whatever <dataset>

- lori


On 11/22/11 15:11, Jim Klimov wrote:

Hello all,

  I'd like to report a tricky situation and a workaround
I've found useful - hope this helps someone in similar
situations.

  To cut the long story short, I could not properly mount
some datasets from a read-only pool: they had a non-"legacy"
mountpoint attribute value set, but the mountpoint was not
available (the directory was absent or not empty). In this
case I could neither create/clean up the mountpoints, nor
change the dataset properties to mountpoint=legacy.

  After a while I managed to override a higher-level
point in the FS tree by mounting tmpfs over it, and in
that tmpfs I could make the mountpoints needed by zfs.

  I don't want to present this adventure as a bug (because
in this case zfs actually works as documented), but rather
as an inconvenience that might warrant some improvement, e.g.
allowing (forced?) use of "mount -F zfs" even for datasets
with the mountpoint property defined.

  Here goes the detailed version:

  I was evacuating data from a corrupted rpool which I could
only import read-only while booted from a LiveUSB. As I wrote
previously, I could not use "zfs send" (bug now submitted to
the Illumos tracker), so I resorted to directly mounting the
datasets and copying data off them into another location (into
a similar dataset hierarchy on my data pool), i.e.:

# zpool import -R /POOL pool
# zpool import -R /RPOOL -o readonly=on rpool

  My last-used root fs (rpool/ROOT/openindiana-1) had the
property mountpoint=/ set, so it got mounted into /RPOOL of
the LiveUSB environment. I copied my data off it, roughly
like this:

# cd /RPOOL && ( find . -xdev -depth -print | \
  cpio -pvdm /POOL/rpool-backup/openindiana-1 ; \
  rsync -avPHK ./ /POOL/rpool-backup/openindiana-1/ )

I did likewise for many other filesystems, such as those with
legacy mountpoints (mounted wherever I liked, e.g. /mnt/1) or
those with existing valid mountpoints (like /export).

However I ran into trouble with secondary (older) root FSes
which I wanted to keep for posterity. For whatever reason,
the "mountpoint" property was set to "/mnt". This directory
was not empty, and I found no way to pass the "-O" flag to
mount for ZFS automounted FSes (to mount over non-empty dirs).
On a read-only rpool I couldn't clean up the "/mnt" dir.
I could not use the "mount -F zfs" command because the dataset's
mountpoint was defined and not "legacy",
and I could not change it to legacy because rpool was read-only.
If I unmounted the "rpool/ROOT/openindiana-1" dataset, there
was no "/mnt" left at all and no way to create one on a
read-only pool.

So I thought of tmpfs - I can mount these with "-O" anywhere
and need no resources for that. First I tried mounting tmpfs
over /RPOOL/mnt with "openindiana-1" mounted, but I couldn't
mount the older root over this mountpoint directly.

So I unmounted all datasets of "rpool", keeping the pool
imported, and mounted tmpfs over the pool's alternate mount
point. Now I could do my trick:

# mount -F tmpfs -O - /RPOOL
# mkdir /RPOOL/mnt
# zfs mount rpool/ROOT/openindiana

To make this walkthrough complete for those who might need it:
since I wanted to retain the benefits of root dataset cloning,
I did the same with my backup:

# zfs snapshot pool/rpool-backup/openindiana-1@20110501
# zfs clone pool/rpool-backup/openindiana-1@20110501 \
  pool/rpool-backup/openindiana
# cd /RPOOL/mnt && rsync -avPHK --delete-after \
  ./ /POOL/rpool-backup/openindiana/

Thanks to rsync, I got only differing (older) files written
onto that copy, with newer files removed.

"cpio" and "rsync" both barked on some unreadable files
(I/O errors) which I believe were in the zfs blocks with
mismatching checksums, initially leading to the useless
rpool. I replaced these files with those on the LiveUSB
in the copy on "pool".

Finally I exported and recreated rpool on the same device,
and manually repopulated the zpool properties (failmode,
bootfs) as well as initial values of zfs properties that
I wanted (copies=2, dedup=off, compression=off). Then I
used installgrub to ensure that the new rpool was bootable.

I also set the zfs properties I wanted (e.g. copies=2, plus
canmount, mountpoint, caiman.* and others set by OpenIndiana,
and compression=on where allowed - on non-root datasets) on the
rpool hierarchy copy in "pool". Even though the actual data in
"pool" had been written with copies=1, the property value copies=2
would be carried over and applied during "zfs send | zfs recv".

Now I could "zfs send" the hierarchical replication stream
from my copy in "pool" to the new "rpool", kind of like this:

# zfs snapshot -r pool/rpool-backup@2019-05
# zfs send -R pool/rpool-backup@2019-05 | zfs recv -vF rpool

Since the hardware was all the same, there was little else
to do. I revised "/RPOOL/rpool/boot/grub/menu.lst" and
"/RPOOL/etc/vfstab" just in case, but otherwise I was ready
to reboot. Luckily for me, the system came up as expected.


[zfs-discuss] SUMMARY: mounting datasets from a read-only pool with aid of tmpfs

2011-11-22 Thread Jim Klimov

Hello all,

  I'd like to report a tricky situation and a workaround
I've found useful - hope this helps someone in similar
situations.

  To cut the long story short, I could not properly mount
some datasets from a read-only pool: they had a non-"legacy"
mountpoint attribute value set, but the mountpoint was not
available (the directory was absent or not empty). In this
case I could neither create/clean up the mountpoints, nor
change the dataset properties to mountpoint=legacy.

  After a while I managed to override a higher-level
point in the FS tree by mounting tmpfs over it, and in
that tmpfs I could make the mountpoints needed by zfs.

  I don't want to present this adventure as a bug (because
in this case zfs actually works as documented), but rather
as an inconvenience that might warrant some improvement, e.g.
allowing (forced?) use of "mount -F zfs" even for datasets
with the mountpoint property defined.

  Here goes the detailed version:

  I was evacuating data from a corrupted rpool which I could
only import read-only while booted from a LiveUSB. As I wrote
previously, I could not use "zfs send" (bug now submitted to
the Illumos tracker), so I resorted to directly mounting the
datasets and copying data off them into another location (into
a similar dataset hierarchy on my data pool), i.e.:

# zpool import -R /POOL pool
# zpool import -R /RPOOL -o readonly=on rpool

  My last-used root fs (rpool/ROOT/openindiana-1) had the
property mountpoint=/ set, so it got mounted into /RPOOL of
the LiveUSB environment. I copied my data off it, roughly
like this:

# cd /RPOOL && ( find . -xdev -depth -print | \
  cpio -pvdm /POOL/rpool-backup/openindiana-1 ; \
  rsync -avPHK ./ /POOL/rpool-backup/openindiana-1/ )

I did likewise for many other filesystems, such as those with
legacy mountpoints (mounted wherever I liked, e.g. /mnt/1) or
those with existing valid mountpoints (like /export).
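
For the legacy-mountpoint ones that meant roughly the following
(the dataset name here is made up for illustration):

# mount -F zfs rpool/somelegacyfs /mnt/1
# rsync -avPHK /mnt/1/ /POOL/rpool-backup/somelegacyfs/
# umount /mnt/1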

However I ran into trouble with secondary (older) root FSes
which I wanted to keep for posterity. For whatever reason,
the "mountpoint" property was set to "/mnt". This directory
was not empty, and I found no way to pass the "-O" flag to
mount for ZFS automounted FSes (to mount over non-empty dirs).
On a read-only rpool I couldn't clean up the "/mnt" dir.
I could not use the "mount -F zfs" command because the dataset's
mountpoint was defined and not "legacy",
and I could not change it to legacy because rpool was read-only.
If I unmounted the "rpool/ROOT/openindiana-1" dataset, there
was no "/mnt" left at all and no way to create one on a
read-only pool.
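
For concreteness, these were the dead ends, roughly (dataset name
as in the commands further below; the refusal reasons are
summarized, not quoted verbatim):

# zfs mount rpool/ROOT/openindiana
  (refused: the target directory /RPOOL/mnt is not empty)
# zfs set mountpoint=legacy rpool/ROOT/openindiana
  (refused: the pool is imported read-only)
# rm -rf /RPOOL/mnt/*
  (refused: the filesystem holding /RPOOL/mnt is mounted read-only)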

So I thought of tmpfs - I can mount these with "-O" anywhere
and need no resources for that. First I tried mounting tmpfs
over /RPOOL/mnt with "openindiana-1" mounted, but I couldn't
mount the older root over this mountpoint directly.

So I unmounted all datasets of "rpool", keeping the pool
imported, and mounted tmpfs over the pool's alternate mount
point. Now I could do my trick:

# mount -F tmpfs -O - /RPOOL
# mkdir /RPOOL/mnt
# zfs mount rpool/ROOT/openindiana

To make this walkthrough complete for those who might need it:
since I wanted to retain the benefits of root dataset cloning,
I did the same with my backup:

# zfs snapshot pool/rpool-backup/openindiana-1@20110501
# zfs clone pool/rpool-backup/openindiana-1@20110501 \
  pool/rpool-backup/openindiana
# cd /RPOOL/mnt && rsync -avPHK --delete-after \
  ./ /POOL/rpool-backup/openindiana/

Thanks to rsync, I got only differing (older) files written
onto that copy, with newer files removed.

"cpio" and "rsync" both barked on some unreadable files
(I/O errors) which I believe were in the zfs blocks with
mismatching checksums, initially leading to the useless
rpool. I replaced these files with those on the LiveUSB
in the copy on "pool".

Finally I exported and recreated rpool on the same device,
and manually repopulated the zpool properties (failmode,
bootfs) as well as initial values of zfs properties that
I wanted (copies=2, dedup=off, compression=off). Then I
used installgrub to ensure that the new rpool was bootable.
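
For reference, the rebuild went roughly like this (the disk/slice
name and the failmode value are placeholders, and the grub stage
files come from the LiveUSB environment):

# zpool create -f rpool c0t0d0s0
# zpool set failmode=continue rpool
# zfs set copies=2 rpool
# zfs set dedup=off rpool
# zfs set compression=off rpool
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0
# zpool set bootfs=rpool/ROOT/openindiana-1 rpool
  (bootfs can only point at an existing dataset, so this last one
   went in after the "zfs recv" below)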

I also set the zfs properties I wanted (e.g. copies=2, plus
canmount, mountpoint, caiman.* and others set by OpenIndiana,
and compression=on where allowed - on non-root datasets) on the
rpool hierarchy copy in "pool". Even though the actual data in
"pool" had been written with copies=1, the property value copies=2
would be carried over and applied during "zfs send | zfs recv".
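
One way to double-check which locally-set properties the
replication stream would carry over (the property list here is
just an example):

# zfs get -r -s local copies,compression,canmount,mountpoint pool/rpool-backup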

Now I could "zfs send" the hierarchical replication stream
from my copy in "pool" to the new "rpool", kind of like this:

# zfs snapshot -r pool/rpool-backup@2019-05
# zfs send -R pool/rpool-backup@2019-05 | zfs recv -vF rpool

Since the hardware was all the same, there was little else
to do. I revised "/RPOOL/rpool/boot/grub/menu.lst" and
"/RPOOL/etc/vfstab" just in case, but otherwise I was ready
to reboot. Luckily for me, the system came up as expected.

That kind of joy does not always happen as planned,
especially when you're half a globe away from the
computer you're repairing.