Re: Updating EFI boot loader results in boot hangup

2022-08-15 Thread Yasuhiro Kimura
From: Warner Losh 
Subject: Re: Updating EFI boot loader results in boot hangup
Date: Mon, 15 Aug 2022 21:35:52 -0600

> Would you be able to share camcontrol devlist output? Privately if
> need be.

root@rolling-vm-freebsd1[1001]# camcontrol devlist
at scbus0 target 0 lun 0 (pass0,ada0)
  at scbus1 target 0 lun 0 (cd0,pass1)
root@rolling-vm-freebsd1[1002]# 

> And have any of these disks ever held ufs filesystems??

No. ada0 consists of ESP, swap and ZFS.

root@rolling-vm-freebsd1[1002]# gpart show ada0
=>         40  209715120  ada0  GPT  (100G)
           40     532480     1  efi  (260M)
       532520       2008        - free -  (1.0M)
       534528   67108864     2  freebsd-swap  (32G)
     67643392  142069760     3  freebsd-zfs  (68G)
    209713152       2008        - free -  (1.0M)

root@rolling-vm-freebsd1[1003]#

---
Yasuhiro Kimura



Re: Updating EFI boot loader results in boot hangup

2022-08-15 Thread Warner Losh
On Mon, Aug 15, 2022, 9:04 PM Yasuhiro Kimura  wrote:

> From: Yasuhiro Kimura 
> Subject: Re: Updating EFI boot loader results in boot hangup
> Date: Sun, 14 Aug 2022 06:34:40 +0900 (JST)
>
> > From: Yasuhiro Kimura 
> > Subject: Updating EFI boot loader results in boot hangup
> > Date: Sat, 13 Aug 2022 01:54:26 +0900 (JST)
> >
> >> I made a regular update of my 14-CURRENT amd64 system from
> >> main-n257134-a69c0964625 to main-n257316-9d16275c65b. I also updated the
> >> EFI boot loader (/boot/efi/efi/freebsd/loader.efi), but it results in a
> >> boot hangup, as shown here:
> >>
> >> https://people.freebsd.org/~yasu/FreeBSD-14-CURRENT-amd64-20220813-boot-hangup.png
> >>
> >> If I restore the previous loader file (that is, loader.efi of
> >> main-n257134-a69c0964625 with the kernel of main-n257316-9d16275c65b),
> >> then the system boots successfully.
> >
> > I submitted the problem to FreeBSD Bugzilla.
> >
> > https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=265825
> >
> > Best Regards.
>
> Commit d98de744050 has been committed, so I tested it with the following steps.
>
> 1. Check out the commit.
> 2. cd /usr/src/stand
> 3. make
> 4. make install
> 5. install -m 755 -p /boot/loader.efi /boot/efi/efi/freebsd
> 6. shutdown -r now
>
> And the system boots successfully. But while the EFI loader is working, a
> lot of messages are displayed, as shown here:
>
> https://people.freebsd.org/~yasu/FreeBSD-14-CURRENT-amd64.20220816.efi-loader-message.png


Would you be able to share camcontrol devlist output? Privately if need be.
And have any of these disks ever held ufs filesystems??

Warner


> ---
> Yasuhiro Kimura
>
>


Re: Updating EFI boot loader results in boot hangup

2022-08-15 Thread Yasuhiro Kimura
From: Yasuhiro Kimura 
Subject: Re: Updating EFI boot loader results in boot hangup
Date: Sun, 14 Aug 2022 06:34:40 +0900 (JST)

> From: Yasuhiro Kimura 
> Subject: Updating EFI boot loader results in boot hangup
> Date: Sat, 13 Aug 2022 01:54:26 +0900 (JST)
> 
>> I made a regular update of my 14-CURRENT amd64 system from
>> main-n257134-a69c0964625 to main-n257316-9d16275c65b. I also updated the
>> EFI boot loader (/boot/efi/efi/freebsd/loader.efi), but it results in a
>> boot hangup, as shown here:
>> 
>> https://people.freebsd.org/~yasu/FreeBSD-14-CURRENT-amd64-20220813-boot-hangup.png
>> 
>> If I restore the previous loader file (that is, loader.efi of
>> main-n257134-a69c0964625 with the kernel of main-n257316-9d16275c65b),
>> then the system boots successfully.
> 
> I submitted the problem to FreeBSD Bugzilla.
> 
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=265825
> 
> Best Regards.

Commit d98de744050 has been committed, so I tested it with the following steps.

1. Check out the commit.
2. cd /usr/src/stand
3. make
4. make install
5. install -m 755 -p /boot/loader.efi /boot/efi/efi/freebsd
6. shutdown -r now
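
Put together, the steps above amount to the following sketch (assuming the
source tree in /usr/src is already checked out at the commit under test, and
that the ESP is mounted at /boot/efi as in this report):

```shell
# Rebuild the boot loader from the checked-out source tree.
cd /usr/src/stand
make
make install

# Copy the freshly installed loader into the ESP's FreeBSD directory,
# preserving timestamps (-p) and setting executable permissions.
install -m 755 -p /boot/loader.efi /boot/efi/efi/freebsd

# Reboot to test the new loader.
shutdown -r now
```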

And the system boots successfully. But while the EFI loader is working, a
lot of messages are displayed, as shown here:

https://people.freebsd.org/~yasu/FreeBSD-14-CURRENT-amd64.20220816.efi-loader-message.png

---
Yasuhiro Kimura



Re: MegaCLI port is ports-only -- how would you deploy it?

2022-08-15 Thread Doug Ambrisko
On Sat, Aug 13, 2022 at 10:41:33PM -0400, Dan Mahoney wrote:
| 
| > On Aug 12, 2022, at 12:35, Doug Ambrisko  wrote:
| > 
| > On Fri, Aug 12, 2022 at 12:32:56PM -0400, Dan Mahoney wrote:
| > | 
| > | > On Aug 12, 2022, at 12:31, Doug Ambrisko  wrote:
| > | > 
| > | > On Fri, Aug 12, 2022 at 12:21:36PM -0400, Dan Mahoney wrote:
| > | > | 
| > | > | > On Aug 8, 2022, at 16:45, Doug Ambrisko  wrote:
| > | > | > 
| > | > | > On Mon, Aug 08, 2022 at 04:10:10PM -0400, Dan Mahoney wrote:
| > | > | > | 
| > | > | > | > On Aug 8, 2022, at 15:57, Doug Ambrisko  wrote:
| > | > | > | > 
| > | > | > | > On Thu, Aug 04, 2022 at 05:22:29PM +0300, Ruslan Makhmatkhanov wrote:
| > | > | > | > | 03.08.2022, 02:07, "Dan Mahoney" :
| > | > | > | > |  Hey there all,
| > | > | > | > |  At the dayjob we have a fleet of Dell PowerEdge servers that can
| > | > | > | > |  use either mptsas or mrsas -- if you use mptsas, you use mptutil
| > | > | > | > |  (in base) to check the state of the card.
| > | > | > | > |  If you use mrsas, you need megacli, which is only in ports, and
| > | > | > | > |  the port hasn't translated to pkg, probably because of license
| > | > | > | > |  restrictions (_LICENSE_RESTRICTED = delete-package
| > | > | > | > |  delete-distfiles), but the license listed is just "megacli".
| > | > | > | > |  * We want to deploy a cron job to periodically check the RAID
| > | > | > | > |  status (we're writing a wrapper, also having it check mfiutil,
| > | > | > | > |  zpool, etc.).
| > | > | > | > |  * We do not want to install and manage a whole ports tree on
| > | > | > | > |  every machine in our fleet, just to install a RAID utility.
| > | > | > | > |  Option A:
| > | > | > | > |  Make a local package somehow.
| > | > | > | > |  The port just downloads a static binary; there's nothing to build
| > | > | > | > |  here, but we want to do this the "right" way. Is there some way
| > | > | > | > |  to have pkg deploy a single local package for this that will, for
| > | > | > | > |  example, report the right package ownership, without moving every
| > | > | > | > |  other package to our poudriere install (we're just using base
| > | > | > | > |  packages; we keep poudriere around for testing in case we need to
| > | > | > | > |  hot-patch something)?
| > | > | > | > |  For what it's worth, we use puppet for config management, so
| > | > | > | > |  pushing out the static binary is not the worst answer, but it
| > | > | > | > |  also feels "dirty".
| > | > | > | > |  Option B:
| > | > | > | > |  Figure out how to fix the license. I have no idea what this would
| > | > | > | > |  involve.
| > | > | > | > |  Option C:
| > | > | > | > |  Also, apparently MegaCLI is no longer maintained (replaced by
| > | > | > | > |  StorCLI), but there's no port for StorCLI, and...there are
| > | > | > | > |  multiple RAID-card-specific versions? Jeez.
| > | > | > | > |  Feels even more dirty.
| > | > | > | > |  [1]https://support.siliconmechanics.com/portal/en/kb/articles/storcli-for-freebsd-and-other-operating-systems
| > | > | > | > |  Ideas welcome?
| > | > | > | > |  -Dan Mahoney
| > | > | > | > 
| > | > | > | > Although the path to get to StorCli goes through various cards, the
| > | > | > | > latest and greatest seems to work on all earlier cards.  It works on
| > | > | > | > HBAs and not just RAID cards.  At work I did a Linux/FreeBSD POC for
| > | > | > | > FW management and found the FreeBSD version could flash the HBA and
| > | > | > | > drive FW.  I've moved to StorCli from MegaCli.  I would suggest we
| > | > | > | > drop the MegaCli port and move to StorCli.
| > | > | > | > 
| > | > | > | > I have code to make mfiutil into mrsasutil and added the MFI ioctl
| > | > | > | > handler to mrsas.  I'm not sure how much value that has.  I don't
| > | > | > | > deal with supporting FreeBSD and RAID much anymore.  If interested
| > | > | > | > I could send patches.
| > | > | > | 
| > | > | > | This feels like it should be in base, regardless.  Just *something*
| > | > | > | to query the RAID status and health, even if it doesn't ring all the
| > | > | > | bells of StorCLI.
| > | > | > | 
| > | > | > | Right now, you can do this with the older mfi, but not the newer
| > | > | > | mrsas, which performs better in some cases, which leaves an admin
| > | > | > | with a dilemma: better reliability, or better manageability.
| > | > | > | 
| > | > | > | I also feel like this could be added to a minor release (i.e. a
| > | > | > | 12.3 --> 12.4 or a 13.0 --> 13.1), but obviously that decision is
| > | > | > | above me.
| > | > | > 
| > | > | > This is based off -current.  I haven't tested it recently:
| > | > | > 
| > | > | > https://people.freebsd.org/~ambrisko/git.mrsas_support_in_mfiutil.patch
| > | > | > 
| > | > | > Please give it a try.  You will need a new kernel
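
Option A above (wrapping the vendor's static binary in a local package so
that pkg reports the right ownership) could be sketched roughly as follows.
Everything here is a hypothetical example -- the package name, version,
paths, and maintainer address are invented for illustration, not taken from
the thread:

```shell
# Stage the static binary into a fake root (example layout).
mkdir -p stage/usr/local/sbin
cp MegaCli stage/usr/local/sbin/megacli

# Write a minimal UCL manifest for pkg-create(8); all fields are
# placeholder values.
cat > +MANIFEST <<'EOF'
name: megacli-local
version: "8.07.14"
origin: local/megacli
comment: "LSI MegaCLI static binary, locally repackaged"
desc: "Locally packaged MegaCLI utility (example)"
maintainer: you@example.com
www: https://example.com
prefix: /usr/local
files: {
  /usr/local/sbin/megacli: {uname: root, gname: wheel, perm: 0755}
}
EOF

# Build the package from the manifest and staged root, then install it
# so pkg tracks the file.
pkg create -M +MANIFEST -r stage -o .
pkg add ./megacli-local-*.pkg
```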

Re: Recent 14.0-CURRENT(s) crash in BTX whilest booting

2022-08-15 Thread Herbert J. Skuhra
On Mon, 15 Aug 2022 22:11:59 +0200, Warner Losh wrote:
> 
> This should be fixed now.
> 
> commit d98de7440507aea1648c8f4bc302bf88c0eb9458
> Author: Toomas Soome 
> Date:   Mon Aug 15 00:49:50 2022 +0300
> 
> loader: zfs reader should only store devdesc in f_devdata
> 
> Use d_opendata for device specific data.
> 
> PR: 265825
> Reviewed by:    imp
> Differential Revision:  https://reviews.freebsd.org/D36202
> 
> is the commit you want to have in your tree.
> 
> Warner

My 14.0-CURRENT BE boots again!

Thanks.

--
Herbert



Re: Recent 14.0-CURRENT(s) crash in BTX whilest booting

2022-08-15 Thread Willem Jan Withagen

On 15-8-2022 22:11, Warner Losh wrote:

This should be fixed now.

commit d98de7440507aea1648c8f4bc302bf88c0eb9458
Author: Toomas Soome 
Date:   Mon Aug 15 00:49:50 2022 +0300

    loader: zfs reader should only store devdesc in f_devdata

    Use d_opendata for device specific data.

    PR:             265825
    Reviewed by:    imp
    Differential Revision: https://reviews.freebsd.org/D36202

is the commit you want to have in your tree.


I needed the system back, since it ran some critical applications.
But I split the ZFS mirror and only reused one disk.
Now to see if I can find the time to verify Toomas's fix.

The problem is also that the system itself no longer boots.
But Idwer Vollering suggested using the /boot/loader from the 5th of
August, so I can at least boot into multi-user and rebuild.

Probably next weekend.

Thanx,
--WjW

Warner

On Mon, Aug 15, 2022 at 6:23 AM Willem Jan Withagen  
wrote:


Hi,

I tried upgrading a 14.0-CURRENT system from somewhere in Dec 2021
to the most recent state.
So I built everything and installed the things...

Reboot, and I end up in BTX.
OK, so perhaps I should also update pmbr and gptzfsboot.
Booted from stick, and updated...

Still I end up in BTX.

So get out the guns:
Downloaded the latest snapshot, and even there I end up in BTX
once I have installed the system.
The stick does boot the memstick into EFI, and lets me install the
system. But upon reboot I again end up in BTX.

So is anybody out there willing to help me figure out what is wrong
here?
I can send you a photo of the BTX screen.

System specs:
 ASUS M5A97 PLUS
 AMD FX-8370 Eight-Core Processor
 24GByte RAM
 1 HD (rust) Seagate ST3250318AS 250G
     with auto-install ZFS stripe, 8G swap, BIOS+UEFI

Thanx,
--WjW

BTW: I now installed 13.1-RELEASE on another disk, and that (just)
works




Re: Recent 14.0-CURRENT(s) crash in BTX whilest booting

2022-08-15 Thread Warner Losh
This should be fixed now.

commit d98de7440507aea1648c8f4bc302bf88c0eb9458
Author: Toomas Soome 
Date:   Mon Aug 15 00:49:50 2022 +0300

loader: zfs reader should only store devdesc in f_devdata

Use d_opendata for device specific data.

PR: 265825
Reviewed by:    imp
Differential Revision:  https://reviews.freebsd.org/D36202

is the commit you want to have in your tree.

Warner

On Mon, Aug 15, 2022 at 6:23 AM Willem Jan Withagen  wrote:

> Hi,
>
> I tried upgrading a 14.0-CURRENT system from somewhere in Dec 2021 to the
> most recent state.
> So I built everything and installed the things...
>
> Reboot, and I end up in BTX.
> OK, so perhaps I should also update pmbr and gptzfsboot.
> Booted from stick, and updated...
>
> Still I end up in BTX.
>
> So get out the guns:
> Downloaded the latest snapshot, and even there I end up in BTX
> once I have installed the system.
> The stick does boot the memstick into EFI, and lets me install the
> system. But upon reboot I again end up in BTX.
>
> So is anybody out there willing to help me figure out what is wrong here?
> I can send you a photo of the BTX screen.
>
> System specs:
>  ASUS M5A97 PLUS
>  AMD FX-8370 Eight-Core Processor
>  24GByte RAM
>  1 HD (rust) Seagate ST3250318AS 250G
>      with auto-install ZFS stripe, 8G swap, BIOS+UEFI
>
> Thanx,
> --WjW
>
> BTW: I now installed 13.1-RELEASE on another disk, and that (just) works
>
>
>


Re: bootx64.efi; and loader.efi in the FreeBSD reserved area (was: Updating EFI boot loader results in boot hangup)

2022-08-15 Thread Warner Losh
On Sun, Aug 14, 2022 at 7:52 AM Graham Perrin 
wrote:

> On 12/08/2022 17:54, Yasuhiro Kimura wrote:
>
> … amd64 … (/boot/efi/efi/freebsd/loader.efi) …
>
> Side note: please, why the FreeBSD reserved area?
>

When you create a boot variable using efibootmgr, it's better to specify
something that's not the default binary. It's what Windows, Linux, etc. do
when they are installed, and it facilitates better multiboot when the
target OSes depend on the first-stage EFI boot loader (as FreeBSD and
Windows certainly do).
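
As a rough illustration (the label is an example value, and the loader path
assumes the ESP is mounted at /boot/efi as elsewhere in this thread),
creating such a FreeBSD-specific boot variable might look like:

```shell
# Create (-c) and activate (-a) a UEFI boot entry that points at
# FreeBSD's own loader path on the ESP, rather than relying on the
# default \efi\boot\bootx64.efi fallback.
efibootmgr -c -a -L "FreeBSD" -l /boot/efi/efi/freebsd/loader.efi
```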

Warner

>
> I'm more familiar with /efi/boot/bootx64.efi for amd64.
>
> 
> 
>


Re: ZFS: cannot import zroot: I/O error

2022-08-15 Thread Michael Gmelin


> On 15. Aug 2022, at 18:22, Toomas Soome  wrote:
> 
> 
> 
>> On 15. Aug 2022, at 18:01, FreeBSD User  wrote:
>> 
>> Hello,
>> 
>> I'm running a FreeBSD 13.1-RELENG-p1 zroot-based guest in VirtualBox
>> 4.1.24/26 (I do not know exactly). The host is a special system based on
>> Linux and VirtualBox, and I have no chance to configure the VBox.
>> 
>> Somehow the VBox crashed and hung up the complete computer, so I had to
>> cold start it after approx. 30 minutes of waiting. After that, the
>> virtual drive and its ZFS filesystem were wrecked, showing a stream of
>> 
>> zio_read error: 5
>> ZFS: i/o error - all block copies unavailable
>> 
>> After a quick search I found some advice on how to try fixing it; the
>> last and longest attempt was
>> 
>> zpool import -fFX -N -R /alternate/path zroot
>> 
>> which took approx. 20 minutes - with no success.
>> 
>> There are some valuable data on the partition, which are all backed up,
>> but it would take time to restore everything, so I'd like to ask whether
>> there is any chance to "repair" the mysterious damage.
>> 
>> I'm able to boot from a USB flash drive …
>> 
>> 
> 
> This happens when vbox tells zfs that data is written to disk while it is
> actually still in caches… So yeah, the standard answer could be "restore
> from backup", but it may also help to use the ability to revert to an
> earlier TXG (note: it does drop data!).  See also
> https://gist.github.com/mkhon/34d979c78077a20648456272d7f2cc15
> 

While it might not help the requester with the problem at hand, this
situation can be prevented (or at least made less likely) by disabling
"IgnoreFlush" - depending on the virtual device emulated, this could be
something like:

VBoxManage setextradata VM-name "VBoxInternal/Devices/ahci/0/LUN#[0]/Config/IgnoreFlush" 0

or

VBoxManage setextradata VM-name "VBoxInternal/Devices/piix3ide/0/LUN#[x]/Config/IgnoreFlush" 0

See also: https://www.virtualbox.org/manual/ch12.html#ts_ide-sata-flush

It’s highly recommended for ZFS in case your VM isn’t a throwaway CI thing.

Best
Michael




Re: ZFS: cannot import zroot: I/O error

2022-08-15 Thread Toomas Soome


> On 15. Aug 2022, at 18:01, FreeBSD User  wrote:
> 
> Hello,
> 
> I'm running a FreeBSD 13.1-RELENG-p1 zroot-based guest in VirtualBox
> 4.1.24/26 (I do not know exactly). The host is a special system based on
> Linux and VirtualBox, and I have no chance to configure the VBox.
> 
> Somehow the VBox crashed and hung up the complete computer, so I had to
> cold start it after approx. 30 minutes of waiting. After that, the
> virtual drive and its ZFS filesystem were wrecked, showing a stream of
> 
> zio_read error: 5
> ZFS: i/o error - all block copies unavailable
> 
> After a quick search I found some advice on how to try fixing it; the
> last and longest attempt was
> 
> zpool import -fFX -N -R /alternate/path zroot
> 
> which took approx. 20 minutes - with no success.
> 
> There are some valuable data on the partition, which are all backed up,
> but it would take time to restore everything, so I'd like to ask whether
> there is any chance to "repair" the mysterious damage.
> 
> I'm able to boot from a USB flash drive …
> 
> 

This happens when vbox tells zfs that data is written to disk while it is
actually still in caches… So yeah, the standard answer could be "restore
from backup", but it may also help to use the ability to revert to an
earlier TXG (note: it does drop data!).  See also
https://gist.github.com/mkhon/34d979c78077a20648456272d7f2cc15 
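
A sketch of that TXG-rewind approach (the device path, mountpoint, and TXG
number are example values; `-T` is a rarely documented rewind option, so
treat this as a last resort on a pool you have imaged or can afford to
lose):

```shell
# List the pool's uberblocks to find an older, hopefully intact TXG.
zdb -ul /dev/ada0p3

# Attempt a read-only import rewound to that TXG (replace 1234567 with
# a TXG number reported by zdb above).
zpool import -o readonly=on -f -N -R /mnt -F -T 1234567 zroot
```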


rgds,
toomas






ZFS: cannot import zroot: I/O error

2022-08-15 Thread FreeBSD User
Hello,

I'm running a FreeBSD 13.1-RELENG-p1 zroot-based guest in VirtualBox
4.1.24/26 (I do not know exactly). The host is a special system based on
Linux and VirtualBox, and I have no chance to configure the VBox.

Somehow the VBox crashed and hung up the complete computer, so I had to
cold start it after approx. 30 minutes of waiting. After that, the
virtual drive and its ZFS filesystem were wrecked, showing a stream of

zio_read error: 5
ZFS: i/o error - all block copies unavailable

After a quick search I found some advice on how to try fixing it; the
last and longest attempt was

zpool import -fFX -N -R /alternate/path zroot

which took approx. 20 minutes - with no success.

There are some valuable data on the partition, which are all backed up,
but it would take time to restore everything, so I'd like to ask whether
there is any chance to "repair" the mysterious damage.

I'm able to boot from a USB flash drive ...

Kind regards

oh



-- 
O. Hartmann



Recent 14.0-CURRENT(s) crash in BTX whilest booting

2022-08-15 Thread Willem Jan Withagen

Hi,

I tried upgrading a 14.0-CURRENT system from somewhere in Dec 2021 to the
most recent state.

So I built everything and installed the things...

Reboot, and I end up in BTX.
OK, so perhaps I should also update pmbr and gptzfsboot.
Booted from stick, and updated...

Still I end up in BTX.

So get out the guns:
Downloaded the latest snapshot, and even there I end up in BTX
once I have installed the system.
The stick does boot the memstick into EFI, and lets me install the
system. But upon reboot I again end up in BTX.

So is anybody out there willing to help me figure out what is wrong here?
I can send you a photo of the BTX screen.

System specs:
    ASUS M5A97 PLUS
    AMD FX-8370 Eight-Core Processor
    24GByte RAM
    1 HD (rust) Seagate ST3250318AS 250G
        with auto-install ZFS stripe, 8G swap, BIOS+UEFI

Thanx,
--WjW

BTW: I now installed 13.1-RELEASE on another disk, and that (just) works