Re: CURRENT snapshot won't boot due to missing ZFS feature

2023-09-16 Thread Tomoaki AOKI
On Sat, 16 Sep 2023 08:43:49 -0700
Mark Millard  wrote:

> void  wrote on
> Date: Sat, 16 Sep 2023 12:12:02 UTC :
> 
> > On Sat, Sep 16, 2023 at 12:55:19PM +0100, Warner Losh wrote:
> > 
> > >Yes. The boot loader comes from the host. It must know how to read ZFS. 
> > 
> > It knows how to read zfs.
> 
> I expect Warner was indicating: you have a (efi?) loader that knows
> how to deal with the features listed in:
> 
> sys/contrib/openzfs/cmd/zpool/compatibility.d/openzfs-2.1-freebsd
> 
> being active but not with some new feature(s) listed in:
> 
> sys/contrib/openzfs/cmd/zpool/compatibility.d/openzfs-2.2
> 
> being active.
> 
> The following are the "read-only-compatible no" features
> that are new in openzfs-2.2 compared to openzfs-2.1-freebsd :
> 
> blake3
> edonr
> head_errlog
> vdev_zaps_v2
> 
> So any of those being active means that not even read-only
> activity is compatible. (Although, the loader's subset
> of the potential overall activity might allow ignoring some
> specific "read-only-compatible no" status examples.)
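A quick way to check whether any of those four features is active on a
given pool is to query the feature properties directly (a minimal sketch,
assuming a pool named zroot; the property names come from the list above):

  # zpool get feature@blake3,feature@edonr,feature@head_errlog,feature@vdev_zaps_v2 zroot

Any of them reported as "active" on a guest's boot pool is enough to make
an openzfs-2.1-era loader refuse to read it.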
> 
> For reference:
> 
> # diff -u99 
> /usr/main-src/sys/contrib/openzfs/cmd/zpool/compatibility.d/openzfs-2.1-freebsd
>  /usr/main-src/sys/contrib/openzfs/cmd/zpool/compatibility.d/openzfs-2.2
> --- 
> /usr/main-src/sys/contrib/openzfs/cmd/zpool/compatibility.d/openzfs-2.1-freebsd
>  2021-06-24 20:08:57.206621000 -0700
> +++ /usr/main-src/sys/contrib/openzfs/cmd/zpool/compatibility.d/openzfs-2.2 
> 2023-06-10 15:59:25.354999000 -0700
> @@ -1,34 +1,40 @@
> -# Features supported by OpenZFS 2.1 on FreeBSD
> +# Features supported by OpenZFS 2.2 on Linux and FreeBSD
>  allocation_classes
>  async_destroy
> +blake3
> +block_cloning
>  bookmark_v2
>  bookmark_written
>  bookmarks
>  device_rebuild
>  device_removal
>  draid
> +edonr
>  embedded_data
>  empty_bpobj
>  enabled_txg
>  encryption
>  extensible_dataset
>  filesystem_limits
> +head_errlog
>  hole_birth
>  large_blocks
>  large_dnode
>  livelist
>  log_spacemap
>  lz4_compress
>  multi_vdev_crash_dump
>  obsolete_counts
>  project_quota
>  redacted_datasets
>  redaction_bookmarks
>  resilver_defer
>  sha512
>  skein
>  spacemap_histogram
>  spacemap_v2
>  userobj_accounting
> +vdev_zaps_v2
> +zilsaxattr
>  zpool_checkpoint
>  zstd_compress
> 
> (Last I checked, /usr/share/zfs/compatibility.d/openzfs-2.2 does
> not exist yet. Thus where I had the diff look.)

It may be because it's not yet listed here, thus not installed.

  /usr/src/cddl/share/zfs/compatibility.d/Makefile
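Until that Makefile catches up, the file can be put in place by hand (a
hedged sketch, assuming /usr/src holds a new enough source tree):

  # mkdir -p /usr/share/zfs/compatibility.d
  # cp /usr/src/sys/contrib/openzfs/cmd/zpool/compatibility.d/openzfs-2.2 \
      /usr/share/zfs/compatibility.d/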

> > On the host in question, there are many guests,
> > some with zfs-boot, some not, just file-based.
> 
> But with what openzfs features active vs. not active
> in each case?
> 
> > What the host is not, is zfs-on-root. It boots from ssd (ada0).
> > The vdevs are on a sas disk array.
> > 
> > >So either your bootable partitions must not have 
> > >com.klarasystems:vdev_zaps_v2
> > >in your BEs or you must have a new user boot. I think you can just install
> > >the one from 14, but haven't tried it.
> > 
> > Can you briefly explain how I'd install the one from 14 please?
> 
> 
> I do not use bhyve so I do not even know if the
> context is using the efi loader from a msdosfs
> vs. not. For efi loaders, copying from one msdosfs
> with a sufficient vintage to the one with the wrong
> vintage (replacing) is sufficient.
> 
> For reference (from an aarch64 context):
> 
> # find /boot/efi/EFI/ -print
> /boot/efi/EFI/
> /boot/efi/EFI/FREEBSD
> /boot/efi/EFI/FREEBSD/loader.efi
> /boot/efi/EFI/BOOT
> /boot/efi/EFI/BOOT/bootaa64.efi
> 
> There may well be only:
> 
> EFI/BOOT/bootaa64.efi
> 
> for all I know.
> 
> From an amd64 context:
> 
> # find /boot/efi/EFI/ -print
> /boot/efi/EFI/
> /boot/efi/EFI/FREEBSD
> /boot/efi/EFI/FREEBSD/loader.efi
> /boot/efi/EFI/BOOT
> /boot/efi/EFI/BOOT/bootx64.efi
> 
> There may well be only:
> 
> EFI/BOOT/bootx64.efi
> 
> for all I know.
> 
> (I set things up to have the EFI capitalization
> so that referencing efi/ vs. EFI/ in my context
> is unambiguous between the mount point and the
> msdosfs directory.)
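For the efi-loader case the copy itself is short (a hedged sketch, assuming
the ESP is ada0p1, mounted at /boot/efi, and that a sufficiently new
loader.efi has been fetched to /tmp/loader.efi; the names are illustrative):

  # mount_msdosfs /dev/ada0p1 /boot/efi    (skip if already mounted)
  # cp /tmp/loader.efi /boot/efi/EFI/FREEBSD/loader.efi
  # cp /tmp/loader.efi /boot/efi/EFI/BOOT/bootx64.efi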
> 
> ===
> Mark Millard
> marklmi at yahoo.com

-- 
Tomoaki AOKI



Re: CURRENT snapshot won't boot due to missing ZFS feature

2023-09-16 Thread void

On Sat, Sep 16, 2023 at 06:05:14PM +0100, void wrote:

On Sat, Sep 16, 2023 at 05:22:20PM +0100, Warner Losh wrote:


So, either you have to turn off those features (which I have no clue how to
do in the normal installer), or you have to update userboot.so to the FreeBSD 14
version (which I think has a good chance of actually running on FreeBSD 13,
since it has no 'system' references, which are confined to bhyveload).


OK thanks I'll try that too


OK this works TYVM Warner

1. mount the 14-beta image with mdconfig
2. find within it boot/userboot.so
3. copy it across to 13-stable host
4. start 14-beta vm
5. additionally, start a 13.2 vm

zpool status on both guests doesn't report any errors
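Spelled out as commands, steps 1-3 look roughly like this (a hedged sketch,
assuming the bootonly ISO is in the current directory and keeping the host's
original userboot.so as a backup):

  # mdconfig -a -t vnode -f FreeBSD-14.0-BETA2-amd64-bootonly.iso
  md0
  # mount_cd9660 /dev/md0 /mnt
  # cp /boot/userboot.so /boot/userboot.so.13
  # cp /mnt/boot/userboot.so /boot/userboot.so
  # umount /mnt
  # mdconfig -d -u 0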

--



Re: vfs.zfs.bclone_enabled (was: FreeBSD 14.0-BETA2 Now Available)

2023-09-16 Thread Alexander Motin

On 16.09.2023 01:25, Graham Perrin wrote:

On 16/09/2023 01:28, Glen Barber wrote:

o A fix for the ZFS block_cloning feature has been implemented.


Thanks

I see 
, with  in stable/14.


As vfs.zfs.bclone_enabled is still 0 (at least, with 15.0-CURRENT 
n265350-72d97e1dd9cc): should we assume that additional fixes, not 
necessarily in time for 14.0-RELEASE, will be required before 
vfs.zfs.bclone_enabled can default to 1?


I am not aware of any block cloning issues now.  This whole thread about 
bclone_enabled actually started after I asked why it is still disabled. 
Thanks to Mark Millard for spotting the issue, which I could fix, and now 
we are back at the point of re-enabling it again.  Since the tunable does 
not even exist anywhere outside of the FreeBSD base tree, I'd propose 
giving this code another try here too.  I see no point in having it 
disabled, at least in main, unless somebody needs time to run some 
specific tests first.
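Checking and flipping the tunable is a one-liner (a minimal sketch; the
sysctl name is the one under discussion, the rest is illustrative and
assumes it is settable at runtime):

  # sysctl vfs.zfs.bclone_enabled
  vfs.zfs.bclone_enabled: 0
  # sysctl vfs.zfs.bclone_enabled=1

To keep it across reboots, the same setting can go into /etc/sysctl.conf
as vfs.zfs.bclone_enabled=1.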


--
Alexander Motin



Re: CURRENT snapshot won't boot due to missing ZFS feature

2023-09-16 Thread void

On Sat, Sep 16, 2023 at 06:35:01PM +0200, Michael Gmelin wrote:

As this is the continuation of a thread I started in June,
let me top post again the solution I used back then:


Hi, I saw your fix but vm-bhyve is unavailable for me to use 
for a variety of reasons.
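Without vm-bhyve, the same idea works with bhyveload directly: pull the
newer userboot.so out of the installation media and point the loader at it
(a hedged sketch; the ISO name, memory size, disk image and guest name
"test14" are only examples):

  # tar -xf FreeBSD-14.0-BETA2-amd64-bootonly.iso boot/userboot.so
  # bhyveload -l $(pwd)/boot/userboot.so -m 1G -d test14.img test14
  # bhyve -m 1G ... test14    (usual device flags omitted)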

--



Re: CURRENT snapshot won't boot due to missing ZFS feature

2023-09-16 Thread void

On Sat, Sep 16, 2023 at 05:22:20PM +0100, Warner Losh wrote:


So, either you have to turn off those features (which I have no clue how to
do in the normal installer), or you have to update userboot.so to the FreeBSD 14
version (which I think has a good chance of actually running on FreeBSD 13,
since it has no 'system' references, which are confined to bhyveload).


OK thanks I'll try that too

--



Re: CURRENT snapshot won't boot due to missing ZFS feature

2023-09-16 Thread void

On Sat, Sep 16, 2023 at 08:43:49AM -0700, Mark Millard wrote:

void  wrote on
Date: Sat, 16 Sep 2023 12:12:02 UTC :


On Sat, Sep 16, 2023 at 12:55:19PM +0100, Warner Losh wrote:

>Yes. The boot loader comes from the host. It must know how to read ZFS.

It knows how to read zfs.


I expect Warner was indicating: you have a (efi?) loader that knows
how to deal with the features listed in:


What astonishes me is this part about the boot loader for the guest
coming from the host and, with respect to zfs, needing zfs features
the guest has but the host doesn't.

I have some freebsd zfs guests on linux (redhat) hosts as well.
They're using KVM. A few on Azure. These hosts (as I understand it) don't 
have zfs at all. None of the guests are root-on-zfs.


So I guess the issue is with bhyve, and the new zfs exposes it,
if it's not happening with other hypervisors?

Does the issue affect root-on-zfs guests only on freebsd zfs hosts
or would it also affect those booting to ufs but then loading
zfs for their data?

I'm doing some testing now with a zfs-on-root 13.2 guest, will try src 
upgrading it to 14-stable, on this openzfs-2.1(?) host.

--



Re: CURRENT snapshot won't boot due to missing ZFS feature

2023-09-16 Thread Michael Gmelin
As this is the continuation of a thread I started in June,
let me top post again the solution I used back then:

"""
For completeness sake, this is how I boot 14.0 on 13.2 using
  sysutils/vm-bhyve:

  ISO=FreeBSD-14.0-CURRENT-amd64-20230608-653738e895ba-263444-bootonly.iso
  export ISO

  cd /mountpoint/for/pool/vm

  vm iso https://download.freebsd.org/snapshots/ISO-IMAGES/14.0/$ISO
  mkdir .loaders
  tar --strip-components 1 -C .loaders -xf .iso/$ISO boot/userboot.so
  mv .loaders/userboot.so .loaders/userboot14.so

  vm create test14
  sysrc -f test14/test14.conf memory=1G
  sysrc -f test14/test14.conf \
bhyveload_loader="$(realpath .loaders/userboot14.so)"

OS installation is done the usual way (using tmux instead of cu in this
example):

  pkg install -y tmux
  sysrc -f .config/system.conf console=tmux
  vm install test14 $ISO
  tmux attach -t test14
"""

You can find the whole thread here:
https://lists.freebsd.org/archives/freebsd-current/2023-June/003835.html


Best
Michael



On Sat, 16 Sep 2023 17:22:20 +0100
Warner Losh  wrote:

> On Sat, Sep 16, 2023 at 5:11 PM Toomas Soome  wrote:
> 
> >
> >  
> > > On 16. Sep 2023, at 18:43, Mark Millard  wrote:
> > >
> > > void  wrote on
> > > Date: Sat, 16 Sep 2023 12:12:02 UTC :
> > >  
> > >> On Sat, Sep 16, 2023 at 12:55:19PM +0100, Warner Losh wrote:
> > >>  
> > >>> Yes. The boot loader comes from the host. It must know how to
> > >>> read  
> > ZFS.  
> > >>
> > >> It knows how to read zfs.  
> > >
> > > I expect Warner was indicating: you have a (efi?) loader that
> > > knows how to deal with the features listed in:
> > >
> > > sys/contrib/openzfs/cmd/zpool/compatibility.d/openzfs-2.1-freebsd
> > >
> > > being active but not with some new feature(s) listed in:
> > >
> > > sys/contrib/openzfs/cmd/zpool/compatibility.d/openzfs-2.2
> > >
> > > being active.
> > >
> > > The following are the "read-only-compatible no" features
> > > that are new in openzfs-2.2 compared to openzfs-2.1-freebsd :
> > >
> > > blake3
> > > edonr
> > > head_errlog
> > > vdev_zaps_v2
> > >
> > > So any of those being active means that not even read-only
> > > activity is compatible. (Although, the loader's subset
> > > of the potential overall activity might allow ignoring some
> > > specific "read-only-compatible no" status examples.)
> > >
> > > For reference:
> > >
> > > # diff -u99  
> > /usr/main-src/sys/contrib/openzfs/cmd/zpool/compatibility.d/openzfs-2.1-freebsd
> > /usr/main-src/sys/contrib/openzfs/cmd/zpool/compatibility.d/openzfs-2.2
> >  
> > > ---  
> > /usr/main-src/sys/contrib/openzfs/cmd/zpool/compatibility.d/openzfs-2.1-freebsd
> > 2021-06-24 20:08:57.206621000 -0700  
> > > +++  
> > /usr/main-src/sys/contrib/openzfs/cmd/zpool/compatibility.d/openzfs-2.2
> > 2023-06-10 15:59:25.354999000 -0700  
> > > @@ -1,34 +1,40 @@
> > > -# Features supported by OpenZFS 2.1 on FreeBSD
> > > +# Features supported by OpenZFS 2.2 on Linux and FreeBSD
> > > allocation_classes
> > > async_destroy
> > > +blake3
> > > +block_cloning
> > > bookmark_v2
> > > bookmark_written
> > > bookmarks
> > > device_rebuild
> > > device_removal
> > > draid
> > > +edonr
> > > embedded_data
> > > empty_bpobj
> > > enabled_txg
> > > encryption
> > > extensible_dataset
> > > filesystem_limits
> > > +head_errlog
> > > hole_birth
> > > large_blocks
> > > large_dnode
> > > livelist
> > > log_spacemap
> > > lz4_compress
> > > multi_vdev_crash_dump
> > > obsolete_counts
> > > project_quota
> > > redacted_datasets
> > > redaction_bookmarks
> > > resilver_defer
> > > sha512
> > > skein
> > > spacemap_histogram
> > > spacemap_v2
> > > userobj_accounting
> > > +vdev_zaps_v2
> > > +zilsaxattr
> > > zpool_checkpoint
> > > zstd_compress
> > >
> > > (Last I checked, /usr/share/zfs/compatibility.d/openzfs-2.2 does
> > > not exist yet. Thus where I had the diff look.)
> > >  
> > >> On the host in question, there are many guests,
> > >> some with zfs-boot, some not, just file-based.  
> > >
> > > But with what openzfs features active vs. not active
> > > in each case?
> > >  
> > >> What the host is not, is zfs-on-root. It boots from ssd (ada0).
> > >> The vdevs are on a sas disk array.
> > >>  
> > >>> So either your bootable partitions must not have  
> > com.klarasystems:vdev_zaps_v2  
> > >>> in your BEs or you must have a new user boot. I think you can
> > >>> just  
> > install  
> > >>> the one from 14, but haven't tried it.  
> > >>
> > >> Can you briefly explain how I'd install the one from 14 please?  
> > >
> > >
> > > I do not use bhyve so I do not even know if the
> > > context is using the efi loader from a msdosfs
> > > vs. not. For efi loaders, copying from one msdosfs
> > > with a sufficient vintage to the one with the wrong
> > > vintage (replacing) is sufficient.  
> >
> > bhyve in freebsd is traditionally using /boot/userboot.so, I
> > believe.  
> 
> 
> Yes. We use the *HOST'S* (running FreeBSD 13) /boot/userboot.so to
> boot the FreeBSD 14
> image. Since we're not using the boot 

Re: CURRENT snapshot won't boot due to missing ZFS feature

2023-09-16 Thread Warner Losh
On Sat, Sep 16, 2023 at 5:11 PM Toomas Soome  wrote:

>
>
> > On 16. Sep 2023, at 18:43, Mark Millard  wrote:
> >
> > void  wrote on
> > Date: Sat, 16 Sep 2023 12:12:02 UTC :
> >
> >> On Sat, Sep 16, 2023 at 12:55:19PM +0100, Warner Losh wrote:
> >>
> >>> Yes. The boot loader comes from the host. It must know how to read
> ZFS.
> >>
> >> It knows how to read zfs.
> >
> > I expect Warner was indicating: you have a (efi?) loader that knows
> > how to deal with the features listed in:
> >
> > sys/contrib/openzfs/cmd/zpool/compatibility.d/openzfs-2.1-freebsd
> >
> > being active but not with some new feature(s) listed in:
> >
> > sys/contrib/openzfs/cmd/zpool/compatibility.d/openzfs-2.2
> >
> > being active.
> >
> > The following are the "read-only-compatible no" features
> > that are new in openzfs-2.2 compared to openzfs-2.1-freebsd :
> >
> > blake3
> > edonr
> > head_errlog
> > vdev_zaps_v2
> >
> > So any of those being active means that not even read-only
> > activity is compatible. (Although, the loader's subset
> > of the potential overall activity might allow ignoring some
> > specific "read-only-compatible no" status examples.)
> >
> > For reference:
> >
> > # diff -u99
> /usr/main-src/sys/contrib/openzfs/cmd/zpool/compatibility.d/openzfs-2.1-freebsd
> /usr/main-src/sys/contrib/openzfs/cmd/zpool/compatibility.d/openzfs-2.2
> > ---
> /usr/main-src/sys/contrib/openzfs/cmd/zpool/compatibility.d/openzfs-2.1-freebsd
> 2021-06-24 20:08:57.206621000 -0700
> > +++
> /usr/main-src/sys/contrib/openzfs/cmd/zpool/compatibility.d/openzfs-2.2
> 2023-06-10 15:59:25.354999000 -0700
> > @@ -1,34 +1,40 @@
> > -# Features supported by OpenZFS 2.1 on FreeBSD
> > +# Features supported by OpenZFS 2.2 on Linux and FreeBSD
> > allocation_classes
> > async_destroy
> > +blake3
> > +block_cloning
> > bookmark_v2
> > bookmark_written
> > bookmarks
> > device_rebuild
> > device_removal
> > draid
> > +edonr
> > embedded_data
> > empty_bpobj
> > enabled_txg
> > encryption
> > extensible_dataset
> > filesystem_limits
> > +head_errlog
> > hole_birth
> > large_blocks
> > large_dnode
> > livelist
> > log_spacemap
> > lz4_compress
> > multi_vdev_crash_dump
> > obsolete_counts
> > project_quota
> > redacted_datasets
> > redaction_bookmarks
> > resilver_defer
> > sha512
> > skein
> > spacemap_histogram
> > spacemap_v2
> > userobj_accounting
> > +vdev_zaps_v2
> > +zilsaxattr
> > zpool_checkpoint
> > zstd_compress
> >
> > (Last I checked, /usr/share/zfs/compatibility.d/openzfs-2.2 does
> > not exist yet. Thus where I had the diff look.)
> >
> >> On the host in question, there are many guests,
> >> some with zfs-boot, some not, just file-based.
> >
> > But with what openzfs features active vs. not active
> > in each case?
> >
> >> What the host is not, is zfs-on-root. It boots from ssd (ada0).
> >> The vdevs are on a sas disk array.
> >>
> >>> So either your bootable partitions must not have
> com.klarasystems:vdev_zaps_v2
> >>> in your BEs or you must have a new user boot. I think you can just
> install
> >>> the one from 14, but haven't tried it.
> >>
> >> Can you briefly explain how I'd install the one from 14 please?
> >
> >
> > I do not use bhyve so I do not even know if the
> > context is using the efi loader from a msdosfs
> > vs. not. For efi loaders, copying from one msdosfs
> > with a sufficient vintage to the one with the wrong
> > vintage (replacing) is sufficient.
>
> bhyve in freebsd is traditionally using /boot/userboot.so, I believe.


Yes. We use the *HOST'S* (running FreeBSD 13) /boot/userboot.so to boot the
FreeBSD 14 image. Since we're not using the boot loader from the target image
to load it for bhyve, the loader we're using has to understand the ZFS dataset
that it's booting off of. FreeBSD 13's userboot.so doesn't support all the
bells and whistles that the ZFS folks have added to 14.

So, either you have to turn off those features (which I have no clue how to
do in the normal installer), or you have to update userboot.so to the FreeBSD 14
version (which I think has a good chance of actually running on FreeBSD 13,
since it has no 'system' references, which are confined to bhyveload).

Warner
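For the first option (keeping the incompatible features off the pool), one
way that should work, although not through the guided installer, is to drop
to a shell during installation and create the guest's pool with a
compatibility property (a hedged sketch; the pool name and device are only
examples):

  # zpool create -o compatibility=openzfs-2.1-freebsd -O mountpoint=/mnt zroot vtbd0p3

A pool created that way cannot enable features outside the
openzfs-2.1-freebsd list shown earlier, so a FreeBSD 13 userboot.so can
still read it.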


> >
> > # find /boot/efi/EFI/ -print
> > /boot/efi/EFI/
> > /boot/efi/EFI/FREEBSD
> > /boot/efi/EFI/FREEBSD/loader.efi
> > /boot/efi/EFI/BOOT
> > /boot/efi/EFI/BOOT/bootaa64.efi
> >
> > There may well be only:
> >
> > EFI/BOOT/bootaa64.efi
> >
> > for all I know.
> >
> > From an amd64 context:
> >
> > # find /boot/efi/EFI/ -print
> > /boot/efi/EFI/
> > /boot/efi/EFI/FREEBSD
> > /boot/efi/EFI/FREEBSD/loader.efi
> > /boot/efi/EFI/BOOT
> > /boot/efi/EFI/BOOT/bootx64.efi
> >
> > There may well be only:
> >
> > EFI/BOOT/bootx64.efi
> >
> > for all I know.
> >
> > (I set things up to have the EFI capitalization
> > so that referencing efi/ vs. EFI/ in my context
> > is unambiguous between the mount point and the
> > msdosfs directory.)
> >
> > ===
> > Mark Millard
> > marklmi at yahoo.com
> >
> >
>
>
>


Re: CURRENT snapshot won't boot due to missing ZFS feature

2023-09-16 Thread Toomas Soome


Re: CURRENT snapshot won't boot due to missing ZFS feature

2023-09-16 Thread Mark Millard
void  wrote on
Date: Sat, 16 Sep 2023 12:12:02 UTC :

> On Sat, Sep 16, 2023 at 12:55:19PM +0100, Warner Losh wrote:
> 
> >Yes. The boot loader comes from the host. It must know how to read ZFS. 
> 
> It knows how to read zfs.

I expect Warner was indicating: you have a (efi?) loader that knows
how to deal with the features listed in:

sys/contrib/openzfs/cmd/zpool/compatibility.d/openzfs-2.1-freebsd

being active but not with some new feature(s) listed in:

sys/contrib/openzfs/cmd/zpool/compatibility.d/openzfs-2.2

being active.

The following are the "read-only-compatible no" features
that are new in openzfs-2.2 compared to openzfs-2.1-freebsd :

blake3
edonr
head_errlog
vdev_zaps_v2

So any of those being active means that not even read-only
activity is compatible. (Although, the loader's subset
of the potential overall activity might allow ignoring some
specific "read-only-compatible no" status examples.)

For reference:

# diff -u99 
/usr/main-src/sys/contrib/openzfs/cmd/zpool/compatibility.d/openzfs-2.1-freebsd 
/usr/main-src/sys/contrib/openzfs/cmd/zpool/compatibility.d/openzfs-2.2
--- 
/usr/main-src/sys/contrib/openzfs/cmd/zpool/compatibility.d/openzfs-2.1-freebsd 
2021-06-24 20:08:57.206621000 -0700
+++ /usr/main-src/sys/contrib/openzfs/cmd/zpool/compatibility.d/openzfs-2.2 
2023-06-10 15:59:25.354999000 -0700
@@ -1,34 +1,40 @@
-# Features supported by OpenZFS 2.1 on FreeBSD
+# Features supported by OpenZFS 2.2 on Linux and FreeBSD
 allocation_classes
 async_destroy
+blake3
+block_cloning
 bookmark_v2
 bookmark_written
 bookmarks
 device_rebuild
 device_removal
 draid
+edonr
 embedded_data
 empty_bpobj
 enabled_txg
 encryption
 extensible_dataset
 filesystem_limits
+head_errlog
 hole_birth
 large_blocks
 large_dnode
 livelist
 log_spacemap
 lz4_compress
 multi_vdev_crash_dump
 obsolete_counts
 project_quota
 redacted_datasets
 redaction_bookmarks
 resilver_defer
 sha512
 skein
 spacemap_histogram
 spacemap_v2
 userobj_accounting
+vdev_zaps_v2
+zilsaxattr
 zpool_checkpoint
 zstd_compress

(Last I checked, /usr/share/zfs/compatibility.d/openzfs-2.2 does
not exist yet. Thus where I had the diff look.)

> On the host in question, there are many guests,
> some with zfs-boot, some not, just file-based.

But with what openzfs features active vs. not active
in each case?

> What the host is not, is zfs-on-root. It boots from ssd (ada0).
> The vdevs are on a sas disk array.
> 
> >So either your bootable partitions must not have 
> >com.klarasystems:vdev_zaps_v2
> >in your BEs or you must have a new user boot. I think you can just install
> >the one from 14, but haven't tried it.
> 
> Can you briefly explain how I'd install the one from 14 please?


I do not use bhyve so I do not even know if the
context is using the efi loader from a msdosfs
vs. not. For efi loaders, copying from one msdosfs
with a sufficient vintage to the one with the wrong
vintage (replacing) is sufficient.

For reference (from an aarch64 context):

# find /boot/efi/EFI/ -print
/boot/efi/EFI/
/boot/efi/EFI/FREEBSD
/boot/efi/EFI/FREEBSD/loader.efi
/boot/efi/EFI/BOOT
/boot/efi/EFI/BOOT/bootaa64.efi

There may well be only:

EFI/BOOT/bootaa64.efi

for all I know.

From an amd64 context:

# find /boot/efi/EFI/ -print
/boot/efi/EFI/
/boot/efi/EFI/FREEBSD
/boot/efi/EFI/FREEBSD/loader.efi
/boot/efi/EFI/BOOT
/boot/efi/EFI/BOOT/bootx64.efi

There may well be only:

EFI/BOOT/bootx64.efi

for all I know.

(I set things up to have the EFI capitalization
so that referencing efi/ vs. EFI/ in my context
is unambiguous between the mount point and the
msdosfs directory.)

===
Mark Millard
marklmi at yahoo.com




Re: CURRENT snapshot won't boot due to missing ZFS feature

2023-09-16 Thread void

On Sat, Sep 16, 2023 at 12:55:19PM +0100, Warner Losh wrote:

Yes. The boot loader comes from the host. 


How does this work for non-zfs aware hosts?

--



Re: CURRENT snapshot won't boot due to missing ZFS feature

2023-09-16 Thread void

On Sat, Sep 16, 2023 at 12:55:19PM +0100, Warner Losh wrote:

Yes. The boot loader comes from the host. It must know how to read ZFS. 


It knows how to read zfs. On the host in question, there are many guests,
some with zfs-boot, some not, just file-based.

What the host is not, is zfs-on-root. It boots from ssd (ada0).
The vdevs are on a sas disk array.


So either your bootable partitions must not have com.klarasystems:vdev_zaps_v2
in your BEs or you must have a new user boot. I think you can just install
the one from 14, but haven't tried it.


Can you briefly explain how I'd install the one from 14 please?

--



Re: CURRENT snapshot won't boot due to missing ZFS feature

2023-09-16 Thread Warner Losh
On Sat, Sep 16, 2023, 12:06 PM void  wrote:

> On Thu, Jun 08, 2023 at 06:11:15PM +0200, Michael Gmelin wrote:
> >Hi,
> >
> >I didn't dig into this yet.
> >
> >After installing the current 14-snapshot (June 1st) in a bhyve-vm, I
> >get this on boot:
> >
> >  ZFS: unsupported feature: com.klarasystems:vdev_zaps_v2
> >
> >(booting stops at this point)
> >
> >Seems like the boot loader is missing this recently added feature.
>
> This is still happening in the following context:
>
> host: stable/13-n256033
> bhyve guest was installed from FreeBSD-14.0-BETA2-amd64-bootonly.iso
>
> error on booting the guest
> ZFS: unsupported feature: com.klarasystems:vdev_zaps_v2
> ERROR: cannot open /boot/lua/loader.lua: no such file or directory.
>
> This used to work with earlier -current (14-current) bhyve guests.
> Is there a workaround?
>
> Does this mean that any bhyve host needs to be either at or above
> the guest version in future? In other words, is having host
> version less than guest not meant to work?
>

Yes. The boot loader comes from the host. It must know how to read ZFS. So
either your bootable partitions must not have com.klarasystems:vdev_zaps_v2
in your BEs or you must have a new user boot. I think you can just install
the one from 14, but haven't tried it.

Warner


-- 
>
>


Re: CURRENT snapshot won't boot due to missing ZFS feature

2023-09-16 Thread void

On Thu, Jun 08, 2023 at 06:11:15PM +0200, Michael Gmelin wrote:

Hi,

I didn't dig into this yet.

After installing the current 14-snapshot (June 1st) in a bhyve-vm, I
get this on boot:

 ZFS: unsupported feature: com.klarasystems:vdev_zaps_v2

(booting stops at this point)

Seems like the boot loader is missing this recently added feature.


This is still happening in the following context:

host: stable/13-n256033
bhyve guest was installed from FreeBSD-14.0-BETA2-amd64-bootonly.iso

error on booting the guest 
ZFS: unsupported feature: com.klarasystems:vdev_zaps_v2

ERROR: cannot open /boot/lua/loader.lua: no such file or directory.

This used to work with earlier -current (14-current) bhyve guests.
Is there a workaround?

Does this mean that any bhyve host needs to be either at or above
the guest version in future? In other words, is having host
version less than guest not meant to work?
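One quick, if crude, way to see whether a given host loader knows a feature
is to look for the feature's GUID string in the binary (a hedged check; it
assumes the feature name appears as a literal string in the loader's
read-feature table):

  # strings /boot/userboot.so | grep vdev_zaps

If nothing comes back, that loader will stop with the "unsupported feature"
error above as soon as the guest pool has the feature active.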
--