Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-12 Thread Alexander Leidinger

Quoting Warner Losh  (from Wed, 9 Nov 2022 08:54:33 -0700):


On Wed, Nov 9, 2022 at 5:46 AM Alexander Leidinger 
wrote:



While most of these options look OK on the surface, I'd feel a lot better
if there were tests for these to prove they work. I'd also feel better if
the ZFS experts could explain how those come to be set on a zpool
as well. I'd settle for a good script that could be run as root (better


It is explained in the zpool-features man page.


would be not as root) that would take a filesystem that was created
by makefs -t zfs and turn on these features after a zpool upgrade.


Script attached. Maybe a little bit too verbose, but you can see directly
which features are active and which ones are only enabled.


It expects a zroot.img in the current directory and creates copies named
zroot_num_featurename.img in which it enables the features. At the
beginning there are some variables to adapt the pool/image name and the
destination directory.


Bye,
Alexander.
--
http://www.Leidinger.net alexan...@leidinger.net: PGP 0x8F31830F9F2772BF
http://www.FreeBSD.org netch...@freebsd.org : PGP 0x8F31830F9F2772BF


zpool_features.sh
Description: Bourne shell script




Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Tomoaki AOKI
On Wed, 9 Nov 2022 14:12:21 -0800
Mark Millard  wrote:

> On Nov 9, 2022, at 12:56, Warner Losh  wrote:
> 
> > On Wed, Nov 9, 2022 at 1:54 PM Patrick M. Hausen  wrote:
> > 
> > > Am 09.11.2022 um 21:51 schrieb Warner Losh :
> > > Yes. For safety, boot loader upgrade is mandatory when you do a zpool 
> > > upgrade of the root filesystem.
> > > It was definitely needed in the OpenZFS jump, and we've had one or two 
> > > other flag days since.
> > 
> > That's a given and not a problem. What I fear from my understanding of this 
> > thread so far is
> > that there might be a situation when I upgrade the zpool and the boot 
> > loader and the system
> > ends up unbootable nonetheless.
> > 
> > Possible or not?
> > 
> > If all you do is upgrade, then no, modulo bugs that we've thankfully not 
> > had yet.
> 
> I guess you mean FreeBSD upgrade, not zpool upgrade?
> 
> For zpool upgrade after a main [so: 14] FreeBSD upgrade . . .
> 
> As I understand it, com.delphix:head_errlog was not added to
> the loader until after some folks had done a zpool upgrade
> and then could not boot. The loader was updated in response
> to the people's problem with trying to boot.
> 
> Of course, this was main, not stable or releng/13.* or the
> like. There is more control over the staging of updates
> for them.

Unfortunately, adding X-MFC-WITH is sometimes forgotten on follow-up
commits, so the commit is NOT MFC'ed to stable/* and the problem fires
there again.

And unfortunately, even when X-MFC-WITH is added, the MFC is sometimes
missed.


> It would be nice if UPDATING reported when an OpenZFS update
> adds new zpool feature(s) to main and whether the loader
> is ready for them yet. If not: add an entry later once the
> loader is ready for the feature(s).

+1.


> > It's when you enable something on the zpool that you can run into trouble, 
> > but that's true independent of upgrade :)
> > 
> > Warner
> >   Modulo bugs, try test systems first, etc. Of course.
> > 
> 
> ===
> Mark Millard
> marklmi at yahoo.com
> 
> 
> 


-- 
Tomoaki AOKI



Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Tomoaki AOKI
On Wed, 9 Nov 2022 23:04:17 +0100
"Patrick M. Hausen"  wrote:

> Hi,
> 
> > Am 09.11.2022 um 22:59 schrieb Alexander Leidinger 
> > :
> > Quoting "Patrick M. Hausen"  (from Wed, 9 Nov 2022 
> > 22:47:28 +0100):
> >> 
> >> I apologize, should have included that in the last mail.
> >> This is a current FreeBSD 13.1-p2 hosting system we run.
> >> [ List of features from an active root pool ]
> >> Boots quite fine ;-)
> > 
> > There are several in the list which are not in the list in zfsimpl.c. So 
> > that list is not the full truth...
> 
> Or whatever is missing is not critical to booting. The boot loader does not 
> need
> read/write access to the zpool. It only needs to be able to locate and read 
> the kernel.
> 
> Kind regards,
> Patrick

The loader needs to read /boot/loader.conf, related scripts, and kmods
under /boot/ from the boot pool, too, not just the kernel.

Maybe `zpool upgrade` needs some overhaul, together with a small change
to stand/libsa/zfs/zfsimpl.c:

 *Move features_for_read[] out into a new, dedicated header that can be
  included from both stand/libsa/zfs/zfsimpl.c and anything else that
  needs it (including zpool).

 *Let zpool check whether the pool is bootable, and if it is, enable
  only the features included in that matrix; otherwise enable all
  features supported on FreeBSD.

I don't think ALL OSes that have OpenZFS imported can always boot with
all features enabled.
So the change to zpool could be OS-independent, except for the header
defining the features supported by that OS's loader.

There could be other matrices for implemented (after boot) and
default-to-be-enabled features. Such OS-dependent matrices would make it
easier for ZFS developers to determine which features are implemented /
defaulted / bootable on a specific OS. (Already there?)
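
Until zpool grows such a check, a rough approximation already exists: the
compatibility pool property. zpool upgrade only enables features allowed by
it (the zpool-upgrade man page says "subject to the -o compatibility
property"). A minimal sketch, assuming a suitable compatibility file is
shipped under /usr/share/zfs/compatibility.d (the file name below is only
an example, pick one that matches the loader's features_for_read[]):

---snip---
# Constrain a boot pool before upgrading it (illustrative file/pool names).
zpool set compatibility=openzfs-2.1-freebsd rpool
zpool upgrade rpool          # enables only features allowed by that file
zpool get compatibility,bootfs rpool
---snip---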


-- 
Tomoaki AOKI



Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Mark Millard
On Nov 9, 2022, at 12:56, Warner Losh  wrote:

> On Wed, Nov 9, 2022 at 1:54 PM Patrick M. Hausen  wrote:
> 
> > Am 09.11.2022 um 21:51 schrieb Warner Losh :
> > Yes. For safety, boot loader upgrade is mandatory when you do a zpool 
> > upgrade of the root filesystem.
> > It was definitely needed in the OpenZFS jump, and we've had one or two 
> > other flag days since.
> 
> That's a given and not a problem. What I fear from my understanding of this 
> thread so far is
> that there might be a situation when I upgrade the zpool and the boot loader 
> and the system
> ends up unbootable nonetheless.
> 
> Possible or not?
> 
> If all you do is upgrade, then no, modulo bugs that we've thankfully not had 
> yet.

I guess you mean FreeBSD upgrade, not zpool upgrade?

For zpool upgrade after a main [so: 14] FreeBSD upgrade . . .

As I understand it, com.delphix:head_errlog was not added to
the loader until after some folks had done a zpool upgrade
and then could not boot. The loader was updated in response
to the people's problem with trying to boot.

Of course, this was main, not stable or releng/13.* or the
like. There is more control over the staging of updates
for them.

It would be nice if UPDATING reported when an OpenZFS update
adds new zpool feature(s) to main and whether the loader
is ready for them yet. If not: add an entry later once the
loader is ready for the feature(s).

> It's when you enable something on the zpool that you can run into trouble, 
> but that's true independent of upgrade :)
> 
> Warner
>   Modulo bugs, try test systems first, etc. Of course.
> 

===
Mark Millard
marklmi at yahoo.com




Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Patrick M. Hausen
Hi,

> Am 09.11.2022 um 22:59 schrieb Alexander Leidinger :
> Quoting "Patrick M. Hausen"  (from Wed, 9 Nov 2022 22:47:28 
> +0100):
>> 
>> I apologize, should have included that in the last mail.
>> This is a current FreeBSD 13.1-p2 hosting system we run.
>> [ List of features from an active root pool ]
>> Boots quite fine ;-)
> 
> There are several in the list which are not in the list in zfsimpl.c. So that 
> list is not the full truth...

Or whatever is missing is not critical to booting. The boot loader does not need
read/write access to the zpool. It only needs to be able to locate and read the 
kernel.

Kind regards,
Patrick


Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Alexander Leidinger


Quoting "Patrick M. Hausen"  (from Wed, 9 Nov 2022  
22:47:28 +0100):



Hi,


Am 09.11.2022 um 22:38 schrieb Patrick M. Hausen :
Am 09.11.2022 um 22:26 schrieb Alexander Leidinger  
:
At a quick look I haven't found a place where a compatibility
setting is used for the rpool during creation, so I can't
point out what the exact difference is.
Given that empty_bpobj is not in the loader's list, it can't
be the same: some limit on enabled features has to be in place
during the initial install, and your example has to be different.


That feature was imported into FreeBSD in 2012 so it should be
enabled in every pool created since then.


I apologize, should have included that in the last mail.
This is a current FreeBSD 13.1-p2 hosting system we run.

Boots quite fine ;-)


There are several in the list which are not in the list in zfsimpl.c.
So that list is not the full truth...
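
A quick way to spot that discrepancy is to compare the pool's feature list
against features_for_read[]. A rough, untested sketch (it assumes a source
tree under /usr/src and that zpool's short feature names match the part
after the colon in zfsimpl.c, which is good enough for eyeballing):

---snip---
#!/bin/sh
# List features active/enabled on the pool whose short name is missing
# from the loader's features_for_read[] list.
sed -n 's/^[[:space:]]*"[^"]*:\([^"]*\)",$/\1/p' \
    /usr/src/stand/libsa/zfs/zfsimpl.c | sort -u > /tmp/loader_features
zpool get -H all zroot | \
    awk '$2 ~ /^feature@/ && $3 != "disabled" { sub("feature@", "", $2); print $2 }' | \
    sort -u > /tmp/pool_features
comm -23 /tmp/pool_features /tmp/loader_features
---snip---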


Bye,
Alexander.


---
[ry93@pdn006 ~]$ zpool get all zroot|grep feature
zroot  feature@async_destroy          enabled  local
zroot  feature@empty_bpobj            active   local
zroot  feature@lz4_compress           active   local
zroot  feature@multi_vdev_crash_dump  enabled  local
zroot  feature@spacemap_histogram     active   local
zroot  feature@enabled_txg            active   local
zroot  feature@hole_birth             active   local
zroot  feature@extensible_dataset     active   local
zroot  feature@embedded_data          active   local
zroot  feature@bookmarks              enabled  local
zroot  feature@filesystem_limits      enabled  local
zroot  feature@large_blocks           enabled  local
zroot  feature@large_dnode            enabled  local
zroot  feature@sha512                 enabled  local
zroot  feature@skein                  enabled  local
zroot  feature@userobj_accounting     active   local
zroot  feature@encryption             enabled  local
zroot  feature@project_quota          active   local
zroot  feature@device_removal         enabled  local
zroot  feature@obsolete_counts        enabled  local
zroot  feature@zpool_checkpoint       enabled  local
zroot  feature@spacemap_v2            active   local
zroot  feature@allocation_classes     enabled  local
zroot  feature@resilver_defer         enabled  local
zroot  feature@bookmark_v2            enabled  local
zroot  feature@redaction_bookmarks    enabled  local
zroot  feature@redacted_datasets      enabled  local
zroot  feature@bookmark_written       enabled  local
zroot  feature@log_spacemap           active   local
zroot  feature@livelist               active   local
zroot  feature@device_rebuild         enabled  local
zroot  feature@zstd_compress          enabled  local
zroot  feature@draid                  enabled  local
---



--
http://www.Leidinger.net alexan...@leidinger.net: PGP 0x8F31830F9F2772BF
http://www.FreeBSD.org netch...@freebsd.org : PGP 0x8F31830F9F2772BF




Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Patrick M. Hausen
Hi,

> Am 09.11.2022 um 22:38 schrieb Patrick M. Hausen :
>> Am 09.11.2022 um 22:26 schrieb Alexander Leidinger :
>> At a quick look I haven't found a place where a compatibility setting is used 
>> for the rpool during creation, so I can't point out what the exact 
>> difference is.
>> Given that empty_bpobj is not in the loader's list, it can't be the 
>> same: some limit on enabled features has to be in place during the initial 
>> install, and your example has to be different.
> 
> That feature was imported into FreeBSD in 2012 so it should be
> enabled in every pool created since then.

I apologize, should have included that in the last mail.
This is a current FreeBSD 13.1-p2 hosting system we run.

Boots quite fine ;-)

Kind regards,
Patrick

---
[ry93@pdn006 ~]$ zpool get all zroot|grep feature
zroot  feature@async_destroy          enabled  local
zroot  feature@empty_bpobj            active   local
zroot  feature@lz4_compress           active   local
zroot  feature@multi_vdev_crash_dump  enabled  local
zroot  feature@spacemap_histogram     active   local
zroot  feature@enabled_txg            active   local
zroot  feature@hole_birth             active   local
zroot  feature@extensible_dataset     active   local
zroot  feature@embedded_data          active   local
zroot  feature@bookmarks              enabled  local
zroot  feature@filesystem_limits      enabled  local
zroot  feature@large_blocks           enabled  local
zroot  feature@large_dnode            enabled  local
zroot  feature@sha512                 enabled  local
zroot  feature@skein                  enabled  local
zroot  feature@userobj_accounting     active   local
zroot  feature@encryption             enabled  local
zroot  feature@project_quota          active   local
zroot  feature@device_removal         enabled  local
zroot  feature@obsolete_counts        enabled  local
zroot  feature@zpool_checkpoint       enabled  local
zroot  feature@spacemap_v2            active   local
zroot  feature@allocation_classes     enabled  local
zroot  feature@resilver_defer         enabled  local
zroot  feature@bookmark_v2            enabled  local
zroot  feature@redaction_bookmarks    enabled  local
zroot  feature@redacted_datasets      enabled  local
zroot  feature@bookmark_written       enabled  local
zroot  feature@log_spacemap           active   local
zroot  feature@livelist               active   local
zroot  feature@device_rebuild         enabled  local
zroot  feature@zstd_compress          enabled  local
zroot  feature@draid                  enabled  local
---




Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Patrick M. Hausen
Hi,

> Am 09.11.2022 um 22:26 schrieb Alexander Leidinger :
> At a quick look I haven't found a place where a compatibility setting is used 
> for the rpool during creation, so I can't point out what the exact 
> difference is.
> Given that empty_bpobj is not in the loader's list, it can't be the 
> same: some limit on enabled features has to be in place during the initial 
> install, and your example has to be different.

That feature was imported into FreeBSD in 2012 so it should be
enabled in every pool created since then.

Kind regards,
Patrick


Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Alexander Leidinger
Quoting Brooks Davis  (from Wed, 9 Nov 2022  
21:18:41 +):



On Wed, Nov 09, 2022 at 09:19:47PM +0100, Alexander Leidinger wrote:

Quoting Mark Millard  (from Wed, 9 Nov 2022
12:10:18 -0800):

> On Nov 9, 2022, at 11:58, Alexander Leidinger
>  wrote:
>
>> Quoting "Patrick M. Hausen"  (from Wed, 9 Nov 2022
>> 20:49:37 +0100):
>>
>>> Hi,
>>>
 Am 09.11.2022 um 20:45 schrieb Alexander Leidinger
 :
 But "zpool set feature@edonr=enabled rpool" (or any other feature
 not in the list we talk about) would render it unbootable.
>>>
>>> Sorry, just to be sure. So an active change of e.g. checksum or
>>> compression algorithm
>>> might render the system unbootable but a zpool upgrade never will?
>>> At least not intentionally? ;-)
>>
>> If you mean "zpool upgrade", then no (modulo bugs). OpenZFS uses
>> the feature flags instead of zpool upgrade.
>
> I'm confused by that answer:

See my correction in another mail, the behavior seems to have changed
and yes, doing a zpool upgrade on a boot pool should not be done.

Maybe someone wants to check or add provisions to not do that on a
pool which has the bootfs property set.


Literally the entire point of the script added in the commit this thread
is about is to upgrade the boot pool on first boot, so that seems like it
would be counterproductive.


Something is missing here: either a pointer to some safety net for
pools with the bootfs property set (or a similar "this is a bootable
pool" flag), or a real-world test of the script.


Any brave soul around to spin up a test VM and run "echo before;
zpool get all rpool | grep feature; zpool upgrade rpool; echo after;
zpool get all rpool | grep feature" inside?
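
If risking a real rpool feels too hot, a file-backed pool with bootfs set
should answer the "does bootfs change anything?" part without endangering
a system. Untested sketch, pool and file names made up:

---snip---
truncate -s 256m /tmp/tpool.img
zpool create -d tpool /tmp/tpool.img   # -d: start with all features disabled
zfs create tpool/ROOT
zpool set bootfs=tpool/ROOT tpool
echo before; zpool get all tpool | grep feature
zpool upgrade tpool
echo after;  zpool get all tpool | grep feature
zpool destroy tpool && rm /tmp/tpool.img
---snip---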


Bye,
Alexander.

--
http://www.Leidinger.net alexan...@leidinger.net: PGP 0x8F31830F9F2772BF
http://www.FreeBSD.org netch...@freebsd.org : PGP 0x8F31830F9F2772BF




Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Alexander Leidinger
Quoting "Patrick M. Hausen"  (from Wed, 9 Nov 2022  
22:11:29 +0100):



Hi,

Am 09.11.2022 um 22:05 schrieb Alexander Leidinger  
:
Attention, "upgrade" is overloaded here. "OS upgrade" will not  
render the pool unbootable (modulo bugs), but "zpool upgrade rpool"  
will (except we have provisions that zpool upgrade doesn't enable  
all features in case the bootfs property is set).


And we are back at the start. The "problem" is that I really like  
consistency.

So when "zpool status" throws that ominous message at me - any you have
to admit that it is phrased like a warning - I want simply to get  
rid of that.

After a reasonable after-update grace period.

But during our discussion I have come to wonder:

- I upgrade from 13.0 to 13.1, I do a "zpool upgrade" afterwards, I  
also upgrade the boot loader


- I install 13.1 with ZFS

What is the difference? Shouldn't these two imaginary systems be
absolutely the same in terms of ZFS features, boot loader, and all that?


At a quick look I haven't found a place where a compatibility setting is
used for the rpool during creation, so I can't point out what the
exact difference is.
Given that empty_bpobj is not in the loader's list, it can't
be the same: some limit on enabled features has to be in place during
the initial install, and your example has to be different.


Bye,
Alexander.

--
http://www.Leidinger.net alexan...@leidinger.net: PGP 0x8F31830F9F2772BF
http://www.FreeBSD.org netch...@freebsd.org : PGP 0x8F31830F9F2772BF




Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Brooks Davis
On Wed, Nov 09, 2022 at 09:19:47PM +0100, Alexander Leidinger wrote:
> Quoting Mark Millard  (from Wed, 9 Nov 2022  
> 12:10:18 -0800):
> 
> > On Nov 9, 2022, at 11:58, Alexander Leidinger  
> >  wrote:
> >
> >> Quoting "Patrick M. Hausen"  (from Wed, 9 Nov 2022  
> >> 20:49:37 +0100):
> >>
> >>> Hi,
> >>>
>  Am 09.11.2022 um 20:45 schrieb Alexander Leidinger  
>  :
>  But "zpool set feature@edonr=enabled rpool" (or any other feature  
>  not in the list we talk about) would render it unbootable.
> >>>
> >>> Sorry, just to be sure. So an active change of e.g. checksum or  
> >>> compression algorithm
> >>> might render the system unbootable but a zpool upgrade never will?  
> >>> At least not intentionally? ;-)
> >>
> >> If you mean "zpool upgrade", then no (modulo bugs). OpenZFS uses  
> >> the feature flags instead of zpool upgrade.
> >
> > I'm confused by that answer:
> 
> See my correction in another mail, the behavior seems to have changed  
> and yes, doing a zpool upgrade on a boot pool should not be done.
> 
> Maybe someone wants to check or add provisions to not do that on a  
> pool which has the bootfs property set.

Literally the entire point of the script added in the commit this thread
is about is to upgrade the boot pool on first boot, so that seems like it
would be counterproductive.

-- Brooks




Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Alexander Leidinger

 Quoting Warner Losh  (from Wed, 9 Nov 2022 13:53:59 -0700):


 

   On Wed, Nov 9, 2022 at 12:47 PM Alexander Leidinger  
 wrote:



Quoting Warner Losh  (from Wed, 9 Nov 2022 08:54:33 -0700):


as well. I'd settle for a good script that could be run as root (better
would be not as root) that would take a filesystem that was created
by makefs -t zfs and turn on these features after a zpool upgrade.
I have the vague outlines of a test suite for the boot loader that I
could see about integrating something like that into, but most of my
time these days is chasing after 'the last bug' in some kboot stuff I'm
working on (which includes issues with our ZFS in the boot loader
integration).


How would you test a given image? bhyve/qemu/...?


 
I have a script that creates a number of image files and a
number of qemu scripts that look like the following:
 
/home/imp/git/qemu/00-build/qemu-system-aarch64 -nographic \
        -machine virt,gic-version=3 -m 512M -smp 4 \
        -cpu cortex-a57 \
        -drive file=/home/imp/stand-test-root/images/arm64-aarch64/linuxboot-arm64-aarch64-zfs.img,if=none,id=drive0,cache=writeback \
        -device virtio-blk,drive=drive0,bootindex=0 \
        -drive file=/home/imp/stand-test-root/bios/edk2-arm64-aarch64-code.fd,format=raw,if=pflash \
        -drive file=/home/imp/stand-test-root/bios/edk2-arm64-aarch64-vars.fd,format=raw,if=pflash \
        -monitor telnet::,server,nowait \
        -serial stdio $*
 
There's a list of these files that's generated and looks to see  
if it gets to the 'success' echo in the minimal root I have for them.

 


So a little script which makes a copy of a source image, enables  
features on the copies and spits out a list of image files would suit  
your needs?

e.g.:
for feature in A B C; do
  # ignoring inter-feature dependencies for a moment
  cp "$source_image" "zfs_feature_${feature}.img"
  pool_name=$(import_pool "zfs_feature_${feature}.img")
  enable_feature "$pool_name" "$feature"
  export_pool "$pool_name"
  echo "zfs_feature_${feature}.img"
done
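
For what it's worth, an untested sketch of what those placeholder helpers
could look like on FreeBSD with a file-backed image (pool name, mountpoint
and md handling are assumptions, and a real script needs error handling):

---snip---
# import_pool: attach the image via md(4) and import its pool under a
# temporary name to avoid clashing with the host's pools.
import_pool() {      # $1 = image file; prints the pool name
    md=$(mdconfig -a -t vnode -f "$1")   # md kept global for export_pool
    zpool import -N -R /mnt -d "/dev/$md" zroot "ztest_$$"
    echo "ztest_$$"
}
enable_feature() {   # $1 = pool, $2 = feature short name
    zpool set "feature@$2=enabled" "$1"
}
export_pool() {      # $1 = pool
    zpool export "$1"
    mdconfig -d -u "${md#md}"
}
---snip---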

Bye,
Alexander.
--
http://www.Leidinger.net alexan...@leidinger.net: PGP 0x8F31830F9F2772BF
http://www.FreeBSD.org netch...@freebsd.org : PGP 0x8F31830F9F2772BF




Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Patrick M. Hausen
Hi,

> Am 09.11.2022 um 22:05 schrieb Alexander Leidinger :
> Attention, "upgrade" is overloaded here. An "OS upgrade" will not render the 
> pool unbootable (modulo bugs), but "zpool upgrade rpool" will (unless we add 
> provisions so that zpool upgrade doesn't enable all features when the bootfs 
> property is set).

And we are back at the start. The "problem" is that I really like consistency.
So when "zpool status" throws that ominous message at me - any you have
to admit that it is phrased like a warning - I want simply to get rid of that.
After a reasonable after-update grace period.

But during our discussion I have come to wonder:

- I upgrade from 13.0 to 13.1, I do a "zpool upgrade" afterwards, I also 
upgrade the boot loader

- I install 13.1 with ZFS

What is the difference? Shouldn't these two imaginary systems be absolutely the 
same in terms
of ZFS features, boot loader, and all that?

Kind regards,
Patrick


Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Alexander Leidinger

Quoting Warner Losh  (from Wed, 9 Nov 2022 13:56:43 -0700):


On Wed, Nov 9, 2022 at 1:54 PM Patrick M. Hausen  wrote:


Hi Warner,

> Am 09.11.2022 um 21:51 schrieb Warner Losh :
> Yes. For safety, boot loader upgrade is mandatory when you do a zpool
upgrade of the root filesystem.
> It was definitely needed in the OpenZFS jump, and we've had one or two
other flag days since.

That's a given and not a problem. What I fear from my understanding of
this thread so far is
that there might be a situation when I upgrade the zpool and the boot
loader and the system
ends up unbootable nonetheless.

Possible or not?



If all you do is upgrade, then no, modulo bugs that we've thankfully not
had yet. It's when you enable something on the zpool that you can run into
trouble, but that's true independent of upgrade :)


Attention, "upgrade" is overloaded here. "OS upgrade" will not render  
the pool unbootable (modulo bugs), but "zpool upgrade rpool" will  
(except we have provisions that zpool upgrade doesn't enable all  
features in case the bootfs property is set).


Bye,
Alexander.

--
http://www.Leidinger.net alexan...@leidinger.net: PGP 0x8F31830F9F2772BF
http://www.FreeBSD.org netch...@freebsd.org : PGP 0x8F31830F9F2772BF




Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Patrick M. Hausen
Hi,

> Am 09.11.2022 um 21:56 schrieb Warner Losh 
> 
> If all you do is upgrade, then no, modulo bugs that we've thankfully not had 
> yet. It's when you enable something on the zpool that you can run into 
> trouble, but that's true independent of upgrade :)

Thanks. That lets me sleep way better.

Kind regards,
Patrick


Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Warner Losh
On Wed, Nov 9, 2022 at 1:54 PM Patrick M. Hausen  wrote:

> Hi Warner,
>
> > Am 09.11.2022 um 21:51 schrieb Warner Losh :
> > Yes. For safety, boot loader upgrade is mandatory when you do a zpool
> upgrade of the root filesystem.
> > It was definitely needed in the OpenZFS jump, and we've had one or two
> other flag days since.
>
> That's a given and not a problem. What I fear from my understanding of
> this thread so far is
> that there might be a situation when I upgrade the zpool and the boot
> loader and the system
> ends up unbootable nonetheless.
>
> Possible or not?
>

If all you do is upgrade, then no, modulo bugs that we've thankfully not
had yet. It's when you enable something on the zpool that you can run into
trouble, but that's true independent of upgrade :)

Warner


> Modulo bugs, try test systems first, etc. Of course.
>
> Thanks,
> Patrick


Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Warner Losh
On Wed, Nov 9, 2022 at 12:47 PM Alexander Leidinger 
wrote:

> Quoting Warner Losh  (from Wed, 9 Nov 2022 08:54:33
> -0700):
>
> > as well. I'd settle for a good script that could be run as root (better
> > would be not as root) that would take a filesystem that was created
> > by makefs -t zfs and turn on these features after a zpool upgrade.
> > I have the vague outlines of a test suite for the boot loader that I
> > could see about integrating something like that into, but most of my
> > time these days is chasing after 'the last bug' in some kboot stuff I'm
> > working on (which includes issues with our ZFS in the boot loader
> > integration).
>
> How would you test a given image? bhyve/qemu/...?
>

I have a script that creates a number of image files and a number of qemu
scripts that look
like the following:

/home/imp/git/qemu/00-build/qemu-system-aarch64 -nographic \
    -machine virt,gic-version=3 -m 512M -smp 4 \
    -cpu cortex-a57 \
    -drive file=/home/imp/stand-test-root/images/arm64-aarch64/linuxboot-arm64-aarch64-zfs.img,if=none,id=drive0,cache=writeback \
    -device virtio-blk,drive=drive0,bootindex=0 \
    -drive file=/home/imp/stand-test-root/bios/edk2-arm64-aarch64-code.fd,format=raw,if=pflash \
    -drive file=/home/imp/stand-test-root/bios/edk2-arm64-aarch64-vars.fd,format=raw,if=pflash \
    -monitor telnet::,server,nowait \
    -serial stdio $*

There's a list of these files that's generated and looks to see if it gets
to the 'success' echo in the minimal root I have for them.
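
Roughly, the per-image check amounts to the following (script name, log
file and timeout are made up here; the real harness does more bookkeeping):

---snip---
# Run the generated qemu script, capture the serial console, and look for
# the 'success' marker that the minimal root echoes late in boot.
if timeout 300 sh linuxboot-arm64-aarch64-zfs.sh </dev/null | tee boot.log | grep -q success; then
    echo "PASS linuxboot-arm64-aarch64-zfs"
else
    echo "FAIL linuxboot-arm64-aarch64-zfs"
fi
---snip---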

Warner


Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Patrick M. Hausen
Hi Warner,

> Am 09.11.2022 um 21:51 schrieb Warner Losh :
> Yes. For safety, boot loader upgrade is mandatory when you do a zpool upgrade 
> of the root filesystem.
> It was definitely needed in the OpenZFS jump, and we've had one or two other 
> flag days since.

That's a given and not a problem. What I fear from my understanding of this 
thread so far is
that there might be a situation when I upgrade the zpool and the boot loader 
and the system
ends up unbootable nonetheless.

Possible or not?

Modulo bugs, try test systems first, etc. Of course.

Thanks,
Patrick


Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Warner Losh
On Wed, Nov 9, 2022 at 12:02 PM Patrick M. Hausen  wrote:

> Hi all,
>
> > Am 09.11.2022 um 16:54 schrieb Warner Losh :
> > >>There is a fixed list of features we support in the boot loader:
> > >>[...]
> > >>Any feature not on this list will cause the boot loader to
> > >> reject the pool.
>
> I admit that I do not grasp the full implications of this thread and the
> proposed
> and debated changes. Does that imply that a simple "zpool upgrade" of the
> boot/root pool might lead to an unbootable system in the future - even if
> the
> boot loader is upgraded as it should, too?
>

Yes. For safety, boot loader upgrade is mandatory when you do a zpool
upgrade of the root filesystem. It was definitely needed in the OpenZFS
jump, and we've had one or two other flag days since.

It would be nice if we had a failsafe here, but we don't today. With a
failsafe, we could say 'well, go ahead and try, even if it encounters
something it doesn't understand...' to at least allow the system to boot.

Warner


Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Patrick M. Hausen
Hi,

> Am 09.11.2022 um 21:31 schrieb Alexander Leidinger :
> Some features are used directly when enabled. Some features go back to the 
> enabled state when some conditions are met. Some features are not reversible 
> without re-creating the pool (e.g. device_removal). The zpool-features 
> man page explains which features belong in which category.

I already knew most of this, too - but thanks for bringing it to my attention 
again.

If I have to check each individual feature with respect to its "bootability"
in the future, that is really bad news given that I have about a hundred
servers to manage.

So let me shortly describe our use case.

I run servers hosting jails. I upgrade them e.g. like this:

13.1
13.2
13.3
14.1
...

The "habit" that I cited is to upgrade the zpools to match the release. We use
freebsd-update, and in the rare cases where we don't, we track releng/x.y.

We do not run main/head in production.

Can I be confident about upgrading my pools or not? Modulo bugs, but we have
test systems, of course.

Kind regards,
Patrick


Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Alexander Leidinger
Quoting "Patrick M. Hausen"  (from Wed, 9 Nov 2022  
21:19:23 +0100):



Hi,

Am 09.11.2022 um 21:15 schrieb Alexander Leidinger  
:
Quoting "Patrick M. Hausen"  (from Wed, 9 Nov 2022  
21:02:52 +0100):

Yet, I made it a habit to whenever I see this message:

---
status: Some supported features are not enabled on the pool. The pool can
still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(7) for details.
---

to do a "zpool upgrade" after some time of burn in followed by an  
update of the

boot loader.

I desire to know if that is in fact dangerous.


Ugh. This changed. It is indeed dangerous now. I just tested it
with a non-root pool which didn't have all flags enabled. "zpool
upgrade <poolname>" will now enable all features.


I know. But until now I assumed that features *enabled* but not  
*used* were not impeding booting.

And that for all others the boot loader was supposed to keep track.


Some features are used directly when enabled. Some features go back to
the enabled state when some conditions are met. Some features are not
reversible without re-creating the pool (e.g. device_removal). The
zpool-features man page explains which features belong in which
category.
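
To see at a glance which category a given feature is in on a pool right
now, something like this does the trick (the pool name is just an example):

---snip---
# "enabled" = available but not yet used on disk, "active" = the on-disk
# format already depends on it (see zpool-features(7) for reversibility).
zpool get -H all rpool | awk '$2 ~ /^feature@/ { print $3, $2 }' | sort
---snip---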


Bye,
Alexander.

--
http://www.Leidinger.net alexan...@leidinger.net: PGP 0x8F31830F9F2772BF
http://www.FreeBSD.org netch...@freebsd.org : PGP 0x8F31830F9F2772BF




Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Alexander Leidinger
Quoting Mark Millard  (from Wed, 9 Nov 2022  
12:10:18 -0800):


On Nov 9, 2022, at 11:58, Alexander Leidinger  
 wrote:


Quoting "Patrick M. Hausen"  (from Wed, 9 Nov 2022  
20:49:37 +0100):



Hi,

Am 09.11.2022 um 20:45 schrieb Alexander Leidinger  
:
But "zpool set feature@edonr=enabled rpool" (or any other feature  
not in the list we talk about) would render it unbootable.


Sorry, just to be sure. So an active change of e.g. checksum or  
compression algorithm
might render the system unbootable but a zpool upgrade never will?  
At least not intentionally? ;-)


If you mean "zpool upgrade", then no (modulo bugs). OpenZFS uses  
the feature flags instead of zpool upgrade.


I'm confused by that answer:


See my correction in another mail, the behavior seems to have changed  
and yes, doing a zpool upgrade on a boot pool should not be done.


Maybe someone wants to check or add provisions to not do that on a  
pool which has the bootfs property set.


Bye,
Alexander.

--
http://www.Leidinger.net alexan...@leidinger.net: PGP 0x8F31830F9F2772BF
http://www.FreeBSD.org netch...@freebsd.org : PGP 0x8F31830F9F2772BF




Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Patrick M. Hausen
Hi,

> Am 09.11.2022 um 21:15 schrieb Alexander Leidinger :
> Quoting "Patrick M. Hausen"  (from Wed, 9 Nov 2022 21:02:52 
> +0100):
>> Yet, I made it a habit to whenever I see this message:
>> 
>> ---
>> status: Some supported features are not enabled on the pool. The pool can
>>  still be used, but some features are unavailable.
>> action: Enable all features using 'zpool upgrade'. Once this is done,
>>  the pool may no longer be accessible by software that does not support
>>  the features. See zpool-features(7) for details.
>> ---
>> 
>> to do a "zpool upgrade" after some time of burn in followed by an update of 
>> the
>> boot loader.
>> 
>> I desire to know if that is in fact dangerous.
> 
> Ugh. This changed. It is indeed dangerous now. I just tested it with a 
> non-root pool which didn't have all flags enabled. "zpool upgrade <poolname>" 
> will now enable all features.

I know. But until now I assumed that features *enabled* but not *used* were not 
impeding booting.
And that for all others the boot loader was supposed to keep track.

Kind regards,
Patrick


Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Alexander Leidinger
Quoting "Patrick M. Hausen"  (from Wed, 9 Nov 2022  
21:02:52 +0100):



Hi,

Am 09.11.2022 um 20:58 schrieb Alexander Leidinger  
:


Quoting "Patrick M. Hausen"  (from Wed, 9 Nov 2022  
20:49:37 +0100):



Hi,

Am 09.11.2022 um 20:45 schrieb Alexander Leidinger  
:
But "zpool set feature@edonr=enabled rpool" (or any other feature  
not in the list we talk about) would render it unbootable.


Sorry, just to be sure. So an active change of e.g. checksum or  
compression algorithm
might render the system unbootable but a zpool upgrade never will?  
At least not intentionally? ;-)


If you mean "zpool upgrade", then no (modulo bugs). OpenZFS uses  
the feature flags instead of zpool upgrade.


I know about feature flags and all my pools are recent enough to have them.

Yet, I made it a habit to whenever I see this message:

---
status: Some supported features are not enabled on the pool. The pool can
still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(7) for details.
---

to do a "zpool upgrade" after some time of burn in followed by an  
update of the

boot loader.

I desire to know if that is in fact dangerous.


Ugh. This changed. It is indeed dangerous now. I just tested it with a
non-root pool which didn't have all flags enabled. "zpool upgrade
<poolname>" will now enable all features.


I remember that it wasn't like this in the past and I had to enable the
feature flags by hand. I don't know if a pool with bootfs set behaves
differently, but I consider testing this with a real rpool to be
dangerous.


Bye,
Alexander.

--
http://www.Leidinger.net alexan...@leidinger.net: PGP 0x8F31830F9F2772BF
http://www.FreeBSD.org netch...@freebsd.org : PGP 0x8F31830F9F2772BF




Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Mark Millard
On Nov 9, 2022, at 11:58, Alexander Leidinger  wrote:

> Quoting "Patrick M. Hausen"  (from Wed, 9 Nov 2022 20:49:37 
> +0100):
> 
>> Hi,
>> 
>>> Am 09.11.2022 um 20:45 schrieb Alexander Leidinger 
>>> :
>>> But "zpool set feature@edonr=enabled rpool" (or any other feature not in 
>>> the list we talk about) would render it unbootable.
>> 
>> Sorry, just to be sure. So an active change of e.g. checksum or compression 
>> algorithm
>> might render the system unbootable but a zpool upgrade never will? At least 
>> not intentionally? ;-)
> 
> If you mean "zpool upgrade", then no (modulo bugs). OpenZFS uses the feature 
> flags instead of zpool upgrade.

I'm confused by that answer:

QUOTE of "man zpool-upgrade" output:
NAME
 zpool-upgrade – manage version and feature flags of ZFS storage pools
. . .
DESCRIPTION
 zpool upgrade
 Displays pools which do not have all supported features enabled
 and pools formatted using a legacy ZFS version number.  These
 pools can continue to be used, but some features may not be
 available.  Use zpool upgrade -a to enable all features on all
 pools (subject to the -o compatibility property).
. . .
 zpool upgrade [-V version] -a|pool…
 Enables all supported features on the given pool.
. . .
END QUOTE

zpool upgrade is a way of changing the feature flags, is it  not?

zpool upgrade also does not consider the loader's status, so far
as I know. If FreeBSD's kernel/world supports something that the
loader would reject, the result is still rejection for booting.
As I understand it, some folks ran into this for
com.delphix:head_errlog until an updated loader was available
and then installed.

===
Mark Millard
marklmi at yahoo.com




Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Alexander Leidinger
Quoting "Patrick M. Hausen"  (from Wed, 9 Nov 2022  
20:49:37 +0100):



Hi,

Am 09.11.2022 um 20:45 schrieb Alexander Leidinger  
:
But "zpool set feature@edonr=enabled rpool" (or any other feature  
not in the list we talk about) would render it unbootable.


Sorry, just to be sure. So an active change of e.g. checksum or  
compression algorithm
might render the system unbootable but a zpool upgrade never will?  
At least not intentionally? ;-)


If you mean "zpool upgrade", then no (modulo bugs). OpenZFS uses the  
feature flags instead of zpool upgrade.


Bye,
Alexander.

--
http://www.Leidinger.net alexan...@leidinger.net: PGP 0x8F31830F9F2772BF
http://www.FreeBSD.org netch...@freebsd.org : PGP 0x8F31830F9F2772BF




Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Mark Millard
On Nov 9, 2022, at 11:02, Patrick M. Hausen  wrote:
> 
>> Am 09.11.2022 um 16:54 schrieb Warner Losh :
   There is a fixed list of features we support in the boot loader:
   [...] 
   Any feature not on this list will cause the boot loader to  
 reject the pool.
> 
> I admit that I do not grasp the full implications of this thread and the 
> proposed
> and debated changes. Does that imply that a simple "zpool upgrade" of the
> boot/root pool might lead to an unbootable system in the future - even if the
> boot loader is upgraded as it should, too?

Not as you word it ("as it should"). But . . .

Using an actual example, if I understand/remember right:

When com.delphix:head_errlog was added to main [so: 14],
it was not yet added to the loader as of that commit.
As I understand it, some folks did pool upgrades before a
loader update for the issue was committed. These people
had the loader refuse the upgraded pools until the
loader update was available and they got it installed.

There is likely a timing gotcha in that OpenZFS updates
probably have to be committed before the loader can be
updated to track them. Landing in the middle without noticing
and upgrading both the loader and the pool(s) could lead
to such problems: not yet the right loader version to
upgrade to.

===
Mark Millard
marklmi at yahoo.com




Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Patrick M. Hausen
Hi,

> Am 09.11.2022 um 20:45 schrieb Alexander Leidinger :
> But "zpool set feature@edonr=enabled rpool" (or any other feature not in the 
> list we talk about) would render it unbootable.

Sorry, just to be sure. So an active change of e.g. checksum or compression 
algorithm
might render the system unbootable but a zpool upgrade never will? At least not 
intentionally? ;-)

Thanks,
Patrick


Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Alexander Leidinger

Quoting Warner Losh  (from Wed, 9 Nov 2022 08:54:33 -0700):


as well. I'd settle for a good script that could be run as root (better
would be not as root) that would take a filesystem that was created
by makefs -t zfs and turn on these features after a zpool upgrade.
I have the vague outlines of a test suite for the boot loader that I
could see about integrating something like that into, but most of my
time these days is chasing after 'the last bug' in some kboot stuff I'm
working on (which includes issues with our ZFS in the boot loader
integration).


How would you test a given image? bhyve/qemu/...?

Bye,
Alexander.

--
http://www.Leidinger.net alexan...@leidinger.net: PGP 0x8F31830F9F2772BF
http://www.FreeBSD.org netch...@freebsd.org : PGP 0x8F31830F9F2772BF




Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Alexander Leidinger
Quoting "Patrick M. Hausen"  (from Wed, 9 Nov 2022  
20:02:55 +0100):



Hi all,


Am 09.11.2022 um 16:54 schrieb Warner Losh :
>>There is a fixed list of features we support in the boot loader:
>>[...]
>>Any feature not on this list will cause the boot loader to
>> reject the pool.


I admit that I do not grasp the full implications of this thread and
the proposed and debated changes. Does that imply that a simple "zpool upgrade" of the
boot/root pool might lead to an unbootable system in the future - even if the
boot loader is upgraded as it should, too?


For a recent pool (zpool get all rpool | grep -q feature && echo  
recent enough): no.


But "zpool set feature@edonr=enabled rpool" (or any other feature not  
in the list we talk about) would render it unbootable.


Bye,
Alexander.

--
http://www.Leidinger.net alexan...@leidinger.net: PGP 0x8F31830F9F2772BF
http://www.FreeBSD.org netch...@freebsd.org : PGP 0x8F31830F9F2772BF




Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Alexander Leidinger

Quoting Warner Losh  (from Wed, 9 Nov 2022 08:54:33 -0700):


On Wed, Nov 9, 2022 at 5:46 AM Alexander Leidinger 
wrote:


Quoting Alexander Leidinger  (from Tue, 08
Nov 2022 10:50:53 +0100):



> Should the above list be sorted in some way? Maybe in the same order
> as the zpool-features lists them (sort by feature name after the
> colon), or alphabetical?

Is it OK if I commit this alphabetical sorting?


[diff of feature-sorting]



This patch looks good because it's a nop and just tidies things up a bit.

Reviewed by: imp


Will do later.


> As Mark already mentioned some flags, I checked the features marked
> as read only (I checked in the zpool-features man page, including
> the dependencies documented there) and here are those not listed in
> zfsimpl.c. I would assume as they are read-only compatible, we
> should add them:
> com.delphix:async_destroy
> com.delphix:bookmarks
> org.openzfs:device_rebuild
> com.delphix:empty_bpobj
> com.delphix:enable_txg
> com.joyent:filesystem_limits
> com.delphix:livelist
> com.delphix:log_spacemap
> com.zfsonlinux:project_quota
> com.zfsonlinux:userobj_accounting
> com.openzfs:zilsaxattr

If my understanding is correct that the read-only compatible parts
(according to the zpool-features man page) are safe to add to the zfs
boot, here is what I have only build-tested (relative to the above
alphabetical sorting):
---snip---
--- zfsimpl.c_sorted  2022-11-09 12:55:06.346083000 +0100
+++ zfsimpl.c  2022-11-09 13:01:24.083364000 +0100
@@ -121,24 +121,35 @@
  "com.datto:bookmark_v2",
  "com.datto:encryption",
  "com.datto:resilver_defer",
+"com.delphix:async_destroy",
  "com.delphix:bookmark_written",
+"com.delphix:bookmarks",
  "com.delphix:device_removal",
  "com.delphix:embedded_data",
+"com.delphix:empty_bpobj",
+"com.delphix:enable_txg",
  "com.delphix:extensible_dataset",
  "com.delphix:head_errlog",
  "com.delphix:hole_birth",
+"com.delphix:livelist",
+"com.delphix:log_spacemap",
  "com.delphix:obsolete_counts",
  "com.delphix:spacemap_histogram",
  "com.delphix:spacemap_v2",
  "com.delphix:zpool_checkpoint",
  "com.intel:allocation_classes",
+"com.joyent:filesystem_limits",
  "com.joyent:multi_vdev_crash_dump",
+"com.openzfs:zilsaxattr",
+"com.zfsonlinux:project_quota",
+"com.zfsonlinux:userobj_accounting",
  "org.freebsd:zstd_compress",
  "org.illumos:lz4_compress",
  "org.illumos:sha512",
  "org.illumos:skein",
  "org.open-zfs:large_blocks",
  "org.openzfs:blake3",
+"org.openzfs:device_rebuild",
  "org.zfsonlinux:allocation_classes",
  "org.zfsonlinux:large_dnode",
  NULL
---snip---

Anyone able to test some of those, or confirm my understanding is
correct and sign off on a "Reviewed by" level?



I'm inclined to strongly NAK this patch, absent some way to test it.
There are no issues reported today where any of them being absent causes
problems on boot. The ZFS that's in the
boot loader is a reduced copy of what's in base and not everything is
supported. There's no urgency here to rush into this. The ones that
are on the list already are for things that we know we support in the
boot loader because we've gone to the trouble to put blake3 or sha512
into it (note: Not all boot loaders will support all ZFS features in the
future... x86 BIOS booting likely is going to have to be frozen at its
current ZFS feature set due to code size issues).

While most of these options look OK on the surface, I'd feel a lot better
if there were tests for these to prove they work. I'd also feel better if
the ZFS experts could explain how those come to be set on a zpool
as well. I'd settle for a good script that could be run as root (better
would be not as root) that would take a filesystem that was created
by makefs -t zfs and turn on these features after a zpool upgrade.
I have the vague outlines of a test suite for the boot loader that I
could see about integrating something like that into, but most of my
time these days is chasing after 'the last bug' in some kboot stuff I'm
working on (which includes issues with our ZFS in the boot loader
integration).

So not a hard no, but I plea for additional scripts to create images
that can be tested.


I didn't want to commit untested or unverified stuff. I fully agree  
with your reasoning.


Bye,
Alexander.

--
http://www.Leidinger.net alexan...@leidinger.net: PGP 0x8F31830F9F2772BF
http://www.FreeBSD.org netch...@freebsd.org : PGP 0x8F31830F9F2772BF




Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Patrick M. Hausen
Hi all,

> Am 09.11.2022 um 16:54 schrieb Warner Losh :
> >>There is a fixed list of features we support in the boot loader:
> >>[...] 
> >>Any feature not on this list will cause the boot loader to  
> >> reject the pool.

I admit that I do not grasp the full implications of this thread and the 
proposed
and debated changes. Does that imply that a simple "zpool upgrade" of the
boot/root pool might lead to an unbootable system in the future - even if the
boot loader is upgraded as it should, too?

Kind regards, thanks for some insight,
Patrick


Re: changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Warner Losh
On Wed, Nov 9, 2022 at 5:46 AM Alexander Leidinger 
wrote:

> Quoting Alexander Leidinger  (from Tue, 08
> Nov 2022 10:50:53 +0100):
>
> > Quoting Warner Losh  (from Mon, 7 Nov 2022 14:23:11
> -0700):
> >
> >>
> >>
> >>   On Mon, Nov 7, 2022 at 4:15 AM Alexander Leidinger
> >>  wrote:
> >>
> >>> Quoting Li-Wen Hsu  (from Mon, 7 Nov 2022 03:39:19
> GMT):
> >>>
>  The branch main has been updated by lwhsu:
> 
>  URL:
> 
> https://cgit.FreeBSD.org/src/commit/?id=72a1cb05cd230ce0d12a7180ae65ddbba2e0cb6d
> 
>  commit 72a1cb05cd230ce0d12a7180ae65ddbba2e0cb6d
>  Author: Li-Wen Hsu 
>  AuthorDate: 2022-11-07 03:30:09 +
>  Commit: Li-Wen Hsu 
>  CommitDate: 2022-11-07 03:30:09 +
> 
>   rc(8): Add a zpoolupgrade rc.d script
> 
>   If a zpool is created by makefs(8), its version is 5000, i.e.,
> all
>   feature flags are off.  Introduce an rc script to run `zpool
> upgrade`
>   over the assigned zpools on the first boot.  This is useful to
> the
>   ZFS based VM images built from release(7).
> >>>
>  diff --git a/share/man/man5/rc.conf.5 b/share/man/man5/rc.conf.5
>  index f9ceabc83120..43fa44a5f1cb 100644
>  --- a/share/man/man5/rc.conf.5
>  +++ b/share/man/man5/rc.conf.5
>  @@ -24,7 +24,7 @@
>    .\"
>    .\" $FreeBSD$
>    .\"
>  -.Dd August 28, 2022
>  +.Dd November 7, 2022
>    .Dt RC.CONF 5
>    .Os
>    .Sh NAME
>  @@ -2109,6 +2109,13 @@ A space-separated list of ZFS pool names for
>  which new pool GUIDs should be
>    assigned upon first boot.
>    This is useful when using a ZFS pool copied from a template, such
>  as a virtual
>    machine image.
>  +.It Va zpool_upgrade
>  +.Pq Vt str
>  +A space-separated list of ZFS pool names for which version should
>  be upgraded
>  +upon first boot.
>  +This is useful when using a ZFS pool generated by
>  +.Xr makefs 8
>  +utility.
> >>>
> >>> For someone who knows ZFS well, it is clear that only a zpool upgrade
> >>> is done. Not so experienced people may assume there is a combination
> >>> of zpool upgrade and zfs upgrade (more so for people who do not know
> >>> what the difference is). Maybe you want to add some explicit
> >>> documentation that zfs upgrade + feature flags need to be done by
> >>> hand.
> >>>
> >>> And this brings me to a second topic, we don't have an explicit list
> >>> of features which are supported by the bootloader (I had a look at the
> >>> zfs and the boot related man pages, if I overlooked a place, then the
> >>> other places should reference this important part with some text).
> >>
> >>
> >>There is a fixed list of features we support in the boot loader:
> >>
> >>/*
> >>  * List of ZFS features supported for read
> >>  */
> >> static const char *features_for_read[] = {
> >> "org.illumos:lz4_compress",
> >> "com.delphix:hole_birth",
> >> "com.delphix:extensible_dataset",
> >> "com.delphix:embedded_data",
> >> "org.open-zfs:large_blocks",
> >> "org.illumos:sha512",
> >> "org.illumos:skein",
> >> "org.zfsonlinux:large_dnode",
> >> "com.joyent:multi_vdev_crash_dump",
> >> "com.delphix:spacemap_histogram",
> >> "com.delphix:zpool_checkpoint",
> >> "com.delphix:spacemap_v2",
> >> "com.datto:encryption",
> >> "com.datto:bookmark_v2",
> >> "org.zfsonlinux:allocation_classes",
> >> "com.datto:resilver_defer",
> >> "com.delphix:device_removal",
> >> "com.delphix:obsolete_counts",
> >> "com.intel:allocation_classes",
> >> "org.freebsd:zstd_compress",
> >> "com.delphix:bookmark_written",
> >> "com.delphix:head_errlog",
> >> "org.openzfs:blake3",
> >> NULL
> >> };
> >>
> >>Any feature not on this list will cause the boot loader to
> >> reject the pool.
> >>
> >>Whether or not it should do that by default, always, or never is an
> open
> >>question. I've thought there should be a 'shoot footing'
> >> override that isn't
> >>there today.
> >>
> >
> > Thanks for the list. For those interested, it is in
> > $SRC/stand/libsa/zfs/zfsimpl.c
> >
> > Just to make my opinion expressed before explicit again, this should
> > be documented in a boot / bootloader related man-page, but isn't.
> >
> > Should the above list be sorted in some way? Maybe in the same order
> > as the zpool-features lists them (sort by feature name after the
> > colon), or alphabetical?
>
> Is it OK if I commit this alphabetical sorting?
> ---snip---
> diff --git a/stand/libsa/zfs/zfsimpl.c b/stand/libsa/zfs/zfsimpl.c
> index 6b961f3110a..36c90613e82 100644
> --- a/stand/libsa/zfs/zfsimpl.c
> +++ b/stand/libsa/zfs/zfsimpl.c
> @@ -118,29 +118,29 @@ static vdev_list_t zfs_vdevs;
>* List of ZFS features supported for read
>*/
>   static const char 

changes to the zfs boot (was: Re: git: 72a1cb05cd23 - main - rc(8): Add a zpoolupgrade rc.d script)

2022-11-09 Thread Alexander Leidinger
Quoting Alexander Leidinger  (from Tue, 08  
Nov 2022 10:50:53 +0100):



Quoting Warner Losh  (from Mon, 7 Nov 2022 14:23:11 -0700):


 

  On Mon, Nov 7, 2022 at 4:15 AM Alexander Leidinger  
 wrote:



Quoting Li-Wen Hsu  (from Mon, 7 Nov 2022 03:39:19 GMT):


The branch main has been updated by lwhsu:

URL: 
https://cgit.FreeBSD.org/src/commit/?id=72a1cb05cd230ce0d12a7180ae65ddbba2e0cb6d

commit 72a1cb05cd230ce0d12a7180ae65ddbba2e0cb6d
Author:     Li-Wen Hsu 
AuthorDate: 2022-11-07 03:30:09 +
Commit:     Li-Wen Hsu 
CommitDate: 2022-11-07 03:30:09 +

     rc(8): Add a zpoolupgrade rc.d script

     If a zpool is created by makefs(8), its version is 5000, i.e., all
     feature flags are off.  Introduce an rc script to run `zpool upgrade`
     over the assigned zpools on the first boot.  This is useful to the
     ZFS based VM images built from release(7).



diff --git a/share/man/man5/rc.conf.5 b/share/man/man5/rc.conf.5
index f9ceabc83120..43fa44a5f1cb 100644
--- a/share/man/man5/rc.conf.5
+++ b/share/man/man5/rc.conf.5
@@ -24,7 +24,7 @@
  .\"
  .\" $FreeBSD$
  .\"
-.Dd August 28, 2022
+.Dd November 7, 2022
  .Dt RC.CONF 5
  .Os
  .Sh NAME
@@ -2109,6 +2109,13 @@ A space-separated list of ZFS pool names for 
which new pool GUIDs should be
  assigned upon first boot.
  This is useful when using a ZFS pool copied from a template, such 
as a virtual
  machine image.
+.It Va zpool_upgrade
+.Pq Vt str
+A space-separated list of ZFS pool names for which version should 
be upgraded
+upon first boot.
+This is useful when using a ZFS pool generated by
+.Xr makefs 8
+utility.


For someone who knows ZFS well, it is clear that only a zpool upgrade 
is done. Less experienced people may assume there is a combination 
of zpool upgrade and zfs upgrade (more so for people who do not know 
what the difference is). Maybe you want to add some explicit 
documentation that zfs upgrade + feature flags need to be done by 
hand.
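
For illustration only (pool name made up), the distinction such
documentation could spell out:

---snip---
# /etc/rc.conf: the new zpoolupgrade rc.d script runs "zpool upgrade" over
# the listed pools on first boot -- this touches the pool, not the datasets.
zpool_upgrade="zroot"

# Upgrading the dataset/filesystem version remains a manual step:
# zfs upgrade -r zroot
---snip---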

And this brings me to a second topic, we don't have an explicit list 
of features which are supported by the bootloader (I had a look at the 
zfs and the boot related man pages, if I overlooked a place, then the 
other places should reference this important part with some text).


    
   There is a fixed list of features we support in the boot loader:
    
   /*
 * List of ZFS features supported for read
 */
static const char *features_for_read[] = {
        "org.illumos:lz4_compress",
        "com.delphix:hole_birth",
        "com.delphix:extensible_dataset",
        "com.delphix:embedded_data",
        "org.open-zfs:large_blocks",
        "org.illumos:sha512",
        "org.illumos:skein",
        "org.zfsonlinux:large_dnode",
        "com.joyent:multi_vdev_crash_dump",
        "com.delphix:spacemap_histogram",
        "com.delphix:zpool_checkpoint",
        "com.delphix:spacemap_v2",
        "com.datto:encryption",
        "com.datto:bookmark_v2",
        "org.zfsonlinux:allocation_classes",
        "com.datto:resilver_defer",
        "com.delphix:device_removal",
        "com.delphix:obsolete_counts",
        "com.intel:allocation_classes",
        "org.freebsd:zstd_compress",
        "com.delphix:bookmark_written",
        "com.delphix:head_errlog",
        "org.openzfs:blake3",
        NULL
};
    
   Any feature not on this list will cause the boot loader to  
reject the pool.

    
   Whether or not it should do that by default, always, or never is an open
   question. I've thought there should be a 'shoot footing'  
override that isn't

   there today.
    


Thanks for the list. For those interested, it is in
    $SRC/stand/libsa/zfs/zfsimpl.c

Just to make my opinion expressed before explicit again, this should  
be documented in a boot / bootloader related man-page, but isn't.


Should the above list be sorted in some way? Maybe in the same order  
as the zpool-features lists them (sort by feature name after the  
colon), or alphabetical?


Is it OK if I commit this alphabetical sorting?
---snip---
diff --git a/stand/libsa/zfs/zfsimpl.c b/stand/libsa/zfs/zfsimpl.c
index 6b961f3110a..36c90613e82 100644
--- a/stand/libsa/zfs/zfsimpl.c
+++ b/stand/libsa/zfs/zfsimpl.c
@@ -118,29 +118,29 @@ static vdev_list_t zfs_vdevs;
  * List of ZFS features supported for read
  */
 static const char *features_for_read[] = {
-"org.illumos:lz4_compress",
-"com.delphix:hole_birth",
-"com.delphix:extensible_dataset",
-"com.delphix:embedded_data",
-"org.open-zfs:large_blocks",
-"org.illumos:sha512",
-"org.illumos:skein",
-"org.zfsonlinux:large_dnode",
-"com.joyent:multi_vdev_crash_dump",
-"com.delphix:spacemap_histogram",
-"com.delphix:zpool_checkpoint",
-"com.delphix:spacemap_v2",
-"com.datto:encryption",
 "com.datto:bookmark_v2",
-"org.zfsonlinux:allocation_classes",
+"com.datto:encryption",
 "com.datto:resilver_defer",
+