Re: zpool import lossage

2021-02-17 Thread Greg Troxel

Lloyd Parkes  writes:

> You should be able to create the symlink in any directory and tell zpool
> import which directory to use.

Thanks for the great hint; it works, reduces ick, and limits scope of
ick.  In a directory searched via -d, all files are searched, not just
whole disks.
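
For the archives, the workaround looks roughly like this (a sketch: the
directory path is arbitrary, and wd0f is the partition from the report
below):

```shell
# Keep the symlink in a private directory instead of replacing /dev/wd0.
# The directory path is arbitrary; wd0f is the slice holding the pool.
mkdir -p /var/tmp/zfs-devs
ln -s /dev/wd0f /var/tmp/zfs-devs/wd0f

# -d limits the search to that directory, and every file in it is
# examined, not just whole-disk device nodes.
zpool import -d /var/tmp/zfs-devs          # list pools found there
zpool import -d /var/tmp/zfs-devs pool1    # then import by name
```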

> I think that /etc/zfs is used for maintaining certain system state
> information about imported pools across reboots and so I'm not overly
> surprised to see that it is empty after you exported the pool. It
> might just optimise the boot time import of the pool.

/etc/zfs/zpool.cache has a record for each pool of where the devices
are.  It is deleted on export; that's a feature :-)
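
If my memory of the tools serves, this is easy to observe (a sketch;
/etc/zfs/zpool.cache is the path from this thread, and zdb -C dumps the
cached pool configurations):

```shell
# While a pool is imported, its device paths are recorded in the cache:
ls -l /etc/zfs/zpool.cache
zdb -C                            # dump the cached configuration(s)

# After the last pool is exported the cache is gone, so a later
# "zpool import" must rediscover devices by scanning:
if [ -e /etc/zfs/zpool.cache ]; then
    echo "pools still recorded in the cache"
else
    echo "no cache file: import has to scan for devices"
fi
```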

I updated the HOWTO; see "pool importing problems".

https://wiki.netbsd.org/zfs/



Re: zpool import lossage

2021-02-16 Thread Lloyd Parkes
This is all off the top of my head; while I use ZFS almost daily (though 
not on NetBSD :-( ), it's been a few years since I poked at the internals.


Creating a symlink seems like a reasonable workaround for your issue. 
You should be able to create the symlink in any directory and tell 
zpool import which directory to use.


ZFS generally expects to use a whole GPT-labelled disk, so I expect 
that BSD-labelled partitions are not checked. Since almost everyone 
starts with ZFS by doing exactly what you did, adding this information 
to the wiki is a good idea.


I think that /etc/zfs is used for maintaining certain system state 
information about imported pools across reboots and so I'm not overly 
surprised to see that it is empty after you exported the pool. It might 
just optimise the boot time import of the pool.


Cheers


zpool import lossage

2021-02-16 Thread Greg Troxel

(I'm testing on 9, but I'm guessing this is similar on current, and any
fix will land there and not necessarily be pulled up to 9.)

I'm starting to try out zfs.   So far I don't have any data that
matters.

On a 1T SSD I have wd0[abe] as root/swap/usr as an unremarkable netbsd-9
system, on an unremarkable amd64 desktop with 8G of RAM.

I created pool1 with wd0f, which is the rest of the 1T disk, about 850G,
not raid of any kind.  I created a few filesystems, changed their mount
points, changed their options, and mounted one over NFS from another
machine, and all seemed ok.  (Yes, I realize the doctrine that "use the
whole disk as a zfs component" is the preferred approach.)

I wanted to rename my pool from pool1 to tank0, for no good reason,
mostly trying to do all the scary things while the only data I had was a
pkgsrc checkout, but partly having seen Stephen Borrill's report of
import trouble.

So I did

  zpool export pool1

and sure enough all my zfs stuff was gone.

Then I did, per the man page:

  zpool import

and nothing was found.  After a bunch of reading and ktracing, I
realized that there is no record of the pool in /etc/zfs or anywhere
else I could find, and the notion is that zpool import will somehow find
all the disks that have zfs data on them, apparently by opening all
disks and looking for some kind of ZFSMAGIC.  But it looked at wd0 and
not the slices.  There was no apparent way to ask it to look at wd0f
specifically.  So I did

  cd /dev; rm wd0; ln -s wd0f wd0

which is icky, but then zpool import found wd0f and I could

  zpool import pool1 tank0
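
Gathered in one place, the sequence above looks like this (device names
are from this report; restoring the node with MAKEDEV afterwards is my
addition and worth double-checking):

```shell
zpool export pool1

# zpool import only probes whole-disk nodes, so point wd0 at the slice:
cd /dev
rm wd0
ln -s wd0f wd0

zpool import                 # now finds the pool via wd0 -> wd0f
zpool import pool1 tank0     # import under the new name

# icky: put the real wd0 node back afterwards
rm wd0
sh MAKEDEV wd0
```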

So this feels like a significant bug, and matches Stephen Borrill's
report.  I think we're heading to documenting this in the wiki, or at
least I am.

Does anyone think I have this wrong?
Is anyone inclined to do anything more serious?
