Re: grub requirements for fonts

2024-05-01 Thread Darac Marjal


On 01/05/2024 10:45, Richard wrote:
I'd like to increase the font size in Grub (v2.12; at least I think 
that's the better alternative to just lowering the resolution) and 
opted to use a custom font, as there seems to be an OTF version of 
"GNU Unifont", though that one seems to be jagged by design, but I'm 
running into issues. I thought about just using Noto Mono Regular for 
it, as Noto is supposed to always work and a monospaced font is 
recommended for easier glyph placement, since Grub uses bitmap fonts. 
Now my issue is that, on the one hand, the conversion to a bitmap font 
seems to be quite bad: the letters look really jagged. On the other 
hand, despite Noto supposedly being all about "no tofu", I actually 
get a lot of tofu. Both the up and down arrows in the description text 
at the bottom of Grub's boot selector and the border around everything 
are just made up of tofu. And I tried converting the font with both 
grub-mkfont and Grub Customizer, with the same result.


What command line are you using? I've used the following in the past 
"grub-mkfont -o dejavu_12.pf -a -s 12 
/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf"


You also mention that you're trying Unifont and Noto - are you trying to 
display characters beyond the range of ASCII? I've not tried displaying 
much more than English text. You might need to use the "-r" option if 
you are.
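If so, a hedged example (untested; the extra ranges are my guess at the
glyphs GRUB's menu uses: U+2190-U+21FF covers the scroll arrows and
U+2500-U+257F the box-drawing border, and the font path depends on where
your package installs the OTF):

grub-mkfont -o /boot/grub/fonts/unifont_24.pf2 -s 24 \
    -r 0x20-0x7F,0x2190-0x21FF,0x2500-0x257F \
    /usr/share/fonts/opentype/unifont/unifont.otf

Then set GRUB_FONT=/boot/grub/fonts/unifont_24.pf2 in /etc/default/grub
and run update-grub.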




So what exactly are the requirements for fonts to be used in Grub so 
that they are converted to PFF2 fonts at higher quality and don't 
show tofu?


Best
Richard




grub requirements for fonts

2024-05-01 Thread Richard
I'd like to increase the font size in Grub (v2.12; at least I think that's
the better alternative to just lowering the resolution) and opted to use
a custom font, as there seems to be an OTF version of "GNU Unifont",
though that one seems to be jagged by design, but I'm running into issues. I
thought about just using Noto Mono Regular for it, as Noto is supposed to
always work and a monospaced font is recommended for easier glyph placement,
since Grub uses bitmap fonts. Now my issue is that, on the one hand, the
conversion to a bitmap font seems to be quite bad: the letters look really
jagged. On the other hand, despite Noto supposedly being all about "no tofu",
I actually get a lot of tofu. Both the up and down arrows in
the description text at the bottom of Grub's boot selector and the border
around everything are just made up of tofu. And I tried converting the
font with both grub-mkfont and Grub Customizer, with the same result.

So what exactly are the requirements for fonts to be used in Grub so that
they are converted to PFF2 fonts at higher quality and don't show tofu?

Best
Richard


Re: /boot/grub/grub.cfg hex number reference

2024-03-13 Thread David Wright
On Wed 13 Mar 2024 at 18:59:30 (+), Gareth Evans wrote:
> On Wed 13/03/2024 at 12:50, Michel Verdier  wrote:
> > On 2024-03-13, Gareth Evans wrote:
> >
> >> That suggests perhaps something to do with an FS UUID, but it doesn't seem 
> >> to appear in the output of any of
> >>
> >> # blkid
> >
> > Here I have them shown as UUID by blkid
> >
> > # grep root /boot/grub/grub.cfg 
> > ...
> > search --no-floppy --fs-uuid --set=root --hint-bios=hd1,gpt2 
> > --hint-efi=hd1,gpt2 --hint-baremetal=ahci1,gpt2  
> > 5210342e-548e-4c4d-a0e9-a5f6d13888d6
> > ...
> >
> > # blkid|grep -i 5210342e
> > /dev/sdb2: UUID="5210342e-548e-4c4d-a0e9-a5f6d13888d6" ...
> >
> > hint-bios=hd0,gpt3 suggests it's your 3rd partition on your first disk.
> >
> > Do you use raid ?
> 
> Hi Michael,
> 
> I'm currently using a single disk with ZFS, partitioned as 

I don't know anything about ZFS …

> $ sudo fdisk -l /dev/sda
> 
> Disk identifier: 3405...
> Device   Start   End   Sectors   Size Type
> /dev/sda1   48  2047  2000  1000K BIOS boot
> /dev/sda2 2048   1050623   1048576   512M EFI System
> /dev/sda3  1050624   3147775   2097152     1G Solaris /usr & Apple ZFS
> /dev/sda4  3147776 250069646 246921871 117.7G Solaris root

… or what Solaris /usr & Apple ZFS and Solaris root mean.

[ … ]

> So after making some sense of grub-mkconfig.in, it turns out the 16-digit hex 
> number is returned by
> 
> # grub-probe --device /dev/sda3 --target=fs_uuid
> 9cbef743dfafd874

I'd be interested to know what   grub-probe --device /dev/sda3 --target=fs
thinks the filesystem is. It may just follow what udev says, in which
case I'd look at the contents of /run/udev/data/b8\:3 (guessing¹).

BTW there's no necessity for the UUID to be 32 hex digits; for
example,   grub-probe --device /dev/sda2 --target=fs_uuid
will give you something more like C027-B627.
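To compare several partitions at once, a small loop works (a sketch;
device names taken from your fdisk output):

  for dev in /dev/sda2 /dev/sda3; do
    printf '%s: fs=%s uuid=%s\n' "$dev" \
      "$(sudo grub-probe --device "$dev" --target=fs)" \
      "$(sudo grub-probe --device "$dev" --target=fs_uuid)"
  done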

> Given that the above command fails with 
> 
> --device /dev/sda4 
> 
> (which is the running system's /), gpt3 appears to be sda3, ie. the boot 
> partition,

If you say so.

> so 
> 
> --set=root
> 
> in the grub.cfg search line presumably relates to the root of /boot.

Yes, confusingly, AIUI, the root of  --set=root  is nothing necessarily
to do with the   root=   in the linux line.

> I'm still curious as to why it doesn't seem to appear anywhere else, how it 
> was applied, and what exactly it applies to - a filesystem?

The answer to that may lie with ZFS.

> This is not aiui the usual form of a UUID either.

As mentioned above.

> grub-probe.in or grub-install.c might hold answers.

AFAIK grub-install requires a system device name (like /dev/sda) or
something that points to it (like the /dev/disk/ symlinks).

¹ I've guessed for partition three on the first HDD. For an nvme SSD,
  it could be /run/udev/data/b259\:3 or some other b-number. It's
  usually pretty obvious from a directory listing because of the
  :0 :1 :2 … corresponding partition numbers.
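For example (a sketch; the b-numbers on your system may differ):

  ls /run/udev/data/ | grep '^b'   # one record per block device
  lsblk -o NAME,MAJ:MIN            # MAJ:MIN matches those b<major>:<minor> names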

Cheers,
David.



Re: /boot/grub/grub.cfg hex number reference

2024-03-13 Thread Gareth Evans



> On 13 Mar 2024, at 19:00, Gareth Evans  wrote:
> 
> Hi Michael,

I'm sorry - Michel



Re: /boot/grub/grub.cfg hex number reference

2024-03-13 Thread Gareth Evans
On Wed 13/03/2024 at 12:50, Michel Verdier  wrote:
> On 2024-03-13, Gareth Evans wrote:
>
>> That suggests perhaps something to do with an FS UUID, but it doesn't seem 
>> to appear in the output of any of
>>
>> # blkid
>
> Here I have them shown as UUID by blkid
>
> # grep root /boot/grub/grub.cfg 
> ...
> search --no-floppy --fs-uuid --set=root --hint-bios=hd1,gpt2 
> --hint-efi=hd1,gpt2 --hint-baremetal=ahci1,gpt2  
> 5210342e-548e-4c4d-a0e9-a5f6d13888d6
> ...
>
> # blkid|grep -i 5210342e
> /dev/sdb2: UUID="5210342e-548e-4c4d-a0e9-a5f6d13888d6" ...
>
> hint-bios=hd0,gpt3 suggests it's your 3rd partition on your first disk.
>
> Do you use raid ?

Hi Michael,

I'm currently using a single disk with ZFS, partitioned as 

$ sudo fdisk -l /dev/sda

Disk identifier: 3405...
Device   Start   End   Sectors   Size Type
/dev/sda1   48  2047  2000  1000K BIOS boot
/dev/sda2 2048   1050623   1048576   512M EFI System
/dev/sda3  1050624   3147775   2097152 1G Solaris /usr & Apple ZFS
/dev/sda4  3147776 250069646 246921871 117.7G Solaris root

$ sudo grep root /boot/grub/grub.cfg

  search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt3 
--hint-efi=hd0,gpt3 --hint-baremetal=ahci0,gpt3  9cbef743dfafd874 <--


$ sudo blkid|grep 9cbe
$

$ lsblk -o MODEL,NAME,LABEL,WWN,UUID,SERIAL,PTTYPE,PTUUID,PARTUUID|grep 9cbe
$

/# ls -R|grep 9cbe
some filenames contain a match to the substring searched for, but none match 
the 16-digit hex number fully.

This excludes it appearing in any of the /dev/disk/by-*/ identifiers, which I 
have also checked separately just in case.

I git cloned grub to see if I could make sense of how grub.cfg is generated.

$ git clone https://git.savannah.gnu.org/git/grub.git

According to NEWS for v1.97:

> * update-grub is replaced by grub-mkconfig

So after making some sense of grub-mkconfig.in, it turns out the 16-digit hex 
number is returned by

# grub-probe --device /dev/sda3 --target=fs_uuid
9cbef743dfafd874

Given that the above command fails with 

--device /dev/sda4 

(which is the running system's /), gpt3 appears to be sda3, ie. the boot 
partition, so 

--set=root

in the grub.cfg search line presumably relates to the root of /boot.

I'm still curious as to why it doesn't seem to appear anywhere else, how it was 
applied, and what exactly it applies to - a filesystem?  This is not aiui the 
usual form of a UUID either.

grub-probe.in or grub-install.c might hold answers.  I will reply again if I 
discover anything informative.
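A quick way to retrace it in the cloned tree (the file names are my guess
at where grub-mkconfig derives the value):

$ grep -n 'fs_uuid' util/grub-mkconfig.in util/grub-mkconfig_lib.in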

Thanks,
Gareth




 



Re: /boot/grub/grub.cfg hex number reference

2024-03-13 Thread Michel Verdier
On 2024-03-13, Gareth Evans wrote:

> That suggests perhaps something to do with an FS UUID, but it doesn't seem to 
> appear in the output of any of
>
> # blkid

Here I have them shown as UUID by blkid

# grep root /boot/grub/grub.cfg 
...
search --no-floppy --fs-uuid --set=root --hint-bios=hd1,gpt2 
--hint-efi=hd1,gpt2 --hint-baremetal=ahci1,gpt2  
5210342e-548e-4c4d-a0e9-a5f6d13888d6
...

# blkid|grep -i 5210342e
/dev/sdb2: UUID="5210342e-548e-4c4d-a0e9-a5f6d13888d6" ...

hint-bios=hd0,gpt3 suggests it's your 3rd partition on your first disk.

Do you use raid ?



/boot/grub/grub.cfg hex number reference

2024-03-13 Thread Gareth Evans
Does anyone know what the 16-digit hex number (truncated below to 9cbe...) 
refers to in /boot/grub/grub.cfg, where it makes several appearances?  

# grep 9cbe -A2 -B2 /boot/grub/grub.cfg

if [ x$feature_platform_search_hint = xy ]; then
  search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt3 
--hint-efi=hd0,gpt3 --hint-baremetal=ahci0,gpt3  9cbe...  <---
else
  search --no-floppy --fs-uuid --set=root 9cbe...   <---
fi


That suggests perhaps something to do with an FS UUID, but it doesn't seem to 
appear in the output of any of

# blkid

# lsblk -o MODEL,NAME,LABEL,WWN,UUID,SERIAL,PTUUID,PARTUUID

# zfs get all bpool|grep 9cbe
# zfs get all rpool|grep 9cbe

# zpool get all bpool|grep 9cbe
# zpool get all rpool|grep 9cbe
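One conversion I have not tried yet: if ZFS stores the pool GUID in
decimal while grub prints it as 16 hex digits, that would explain why the
greps above find nothing. A sketch:

# zpool get -H -o value guid bpool | xargs printf '%x\n'

If that prints 9cbe..., the mystery number is just the boot pool's GUID
in hex.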

The previous hex number appeared in a grub error ("no such device: ") 
after a failed update-grub, either after recreating the boot partition or the 
ZFS boot pool on it, or both.  This caused a delay of approx 10s during boot, 
despite an immediate invitation to "press any key to continue".

After running update-grub, the hex number was replaced with another in 
grub.cfg, and the error message and boot delay disappeared.

For info:

There is currently a bug in grub whereby, if a ZFS boot pool is snapshotted at 
the top level, the compression type "inherit" can result in the grub error 
"compression algorithm inherit not supported" and a broken boot process, which 
is what I was sorting out when I noticed this.

https://github.com/openzfs/zfs/issues/13873

https://savannah.gnu.org/bugs/index.php?64297

According to some comments on github, it doesn't seem to be definitively fixed 
in grub 2.12, not that I have tried.

Snapshotting boot pool datasets is said to be OK.

Thanks,
Gareth



Re: Re: GRUB lost graphical terminal mode

2024-02-20 Thread Borden
>On 19 Feb 2024 22:44 +0100, from borde...@tutanota.com (Borden):
>>> Would you be willing to post your /boot/grub/grub.cfg for a setup
>>> where you get the blank screen GRUB?
>> 
>> Yeah, I probably should have opened with that. Sorry:
>> 
>> ```
>> # If you change this file, run 'update-grub' afterwards to update
>> # /boot/grub/grub.cfg.
>> # For full documentation of the options in this file, see:
>> #   info -f grub -n 'Simple configuration'
>> 
>> GRUB_DEFAULT=0
>[snipped remainder]
>
>If that's your /boot/grub/grub.cfg, it's a miracle that your GRUB
>installation is working at all and not dumping you to a grub> rescue
>prompt.

Right you are. I forget that there's a /boot/grub/grub.cfg file because I 
always edit /etc/default/grub. This is what you're looking for, I hope:

#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#

### BEGIN /etc/grub.d/00_header ###
if [ -s $prefix/grubenv ]; then
  set have_grubenv=true
  load_env
fi
if [ "${next_entry}" ] ; then
   set default="${next_entry}"
   set next_entry=
   save_env next_entry
   set boot_once=true
else
   set default="0"
fi

if [ x"${feature_menuentry_id}" = xy ]; then
  menuentry_id_option="--id"
else
  menuentry_id_option=""
fi

export menuentry_id_option

if [ "${prev_saved_entry}" ]; then
  set saved_entry="${prev_saved_entry}"
  save_env saved_entry
  set prev_saved_entry=
  save_env prev_saved_entry
  set boot_once=true
fi

function savedefault {
  if [ -z "${boot_once}" ]; then
    saved_entry="${chosen}"
    save_env saved_entry
  fi
}
function load_video {
  if [ x$feature_all_video_module = xy ]; then
    insmod all_video
  else
    insmod efi_gop
    insmod efi_uga
    insmod ieee1275_fb
    insmod vbe
    insmod vga
    insmod video_bochs
    insmod video_cirrus
  fi
}

if [ x$feature_default_font_path = xy ] ; then
   font=unicode
else
insmod part_gpt
insmod btrfs
set root='hd0,gpt2'
if [ x$feature_platform_search_hint = xy ]; then
  search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt2 
--hint-efi=hd0,gpt2 --hint-baremetal=ahci0,gpt2  #Redacted UUID#
else
  search --no-floppy --fs-uuid --set=root #Redacted UUID#
fi
    font="/@rootfs/usr/share/grub/unicode.pf2"
fi

if loadfont $font ; then
  set gfxmode=auto
  load_video
  insmod gfxterm
  set locale_dir=$prefix/locale
  set lang=en_CA
  insmod gettext
fi
terminal_input gfxterm
terminal_output gfxterm
if [ "${recordfail}" = 1 ] ; then
  set timeout=30
else
  if [ x$feature_timeout_style = xy ] ; then
    set timeout_style=menu
    set timeout=5
  # Fallback normal timeout code in case the timeout_style feature is
  # unavailable.
  else
    set timeout=5
  fi
fi
### END /etc/grub.d/00_header ###

### BEGIN /etc/grub.d/05_debian_theme ###
insmod part_gpt
insmod btrfs
set root='hd0,gpt2'
if [ x$feature_platform_search_hint = xy ]; then
  search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt2 
--hint-efi=hd0,gpt2 --hint-baremetal=ahci0,gpt2  #Redacted UUID#
else
  search --no-floppy --fs-uuid --set=root #Redacted UUID#
fi
insmod png
if background_image 
/@rootfs/usr/share/desktop-base/emerald-theme/grub/grub-16x9.png; then
  set color_normal=white/black
  set color_highlight=black/white
else
  set menu_color_normal=cyan/blue
  set menu_color_highlight=white/blue
fi
### END /etc/grub.d/05_debian_theme ###

### BEGIN /etc/grub.d/06_dark_theme ###
set menu_color_normal=white/black
set menu_color_highlight=yellow/black
set color_normal=white/black
set color_highlight=yellow/black
background_image
### END /etc/grub.d/06_dark_theme ###

### BEGIN /etc/grub.d/10_linux ###
function gfxmode {
set gfxpayload="${1}"
}
set linux_gfx_mode=
export linux_gfx_mode
menuentry 'Debian GNU/Linux' --class debian --class gnu-linux --class gnu 
--class os $menuentry_id_option 'gnulinux-simple-#Redacted UUID#' {
load_video
insmod gzio
if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
insmod part_gpt
insmod btrfs
set root='hd0,gpt2'
if [ x$feature_platform_search_hint = xy ]; then
  search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt2 
--hint-efi=hd0,gpt2 --hint-baremetal=ahci0,gpt2  #Redacted UUID#
else
  search --no-floppy --fs-uuid --set=root #Redacted UUID#
fi
echo    'Loading Linux 6.6.15-amd64 ...'
linux   /@rootfs/boot/vmlinuz-6.6.15-amd64 root=UUID=#Redacted UUID# ro 
rootflags=subvol=@rootfs  
echo    'Loading initial ramdisk ...'
initrd  /@rootfs/boot/initrd.img-6.6.15-amd64
}
submenu 'Advanced options for Debian GNU/Linux' $menuentry_id_option 
'gnulinux-advanced-#Redacted UUID#' {
menuentry 'Debian GNU/Linux, with Linux 6.6.15-amd64' --class debian --class 
gnu-linux --class gnu --class os $menuentry_id_option 
'gnulinux-6.6.15-a

Re: GRUB lost graphical terminal mode

2024-02-20 Thread Charles Curley
On Tue, 20 Feb 2024 08:04:47 +
Michael Kjörling <2695bd53d...@ewoof.net> wrote:

> On 19 Feb 2024 22:44 +0100, from borde...@tutanota.com (Borden):
> >> Would you be willing to post your /boot/grub/grub.cfg for a setup
> >> where you get the blank screen GRUB?  
> > 
> > Yeah, I probably should have opened with that. Sorry:
> > 
> > ```
> > # If you change this file, run 'update-grub' afterwards to update
> > # /boot/grub/grub.cfg.
> > # For full documentation of the options in this file, see:
> > #   info -f grub -n 'Simple configuration'
> > 
> > GRUB_DEFAULT=0  
> [snipped remainder]
> 
> If that's your /boot/grub/grub.cfg, it's a miracle that your GRUB
> installation is working at all and not dumping you to a grub> rescue
> prompt.
> 

That clearly isn't the OP's /boot/grub/grub.cfg, but
/etc/default/grub.

As the former is often rather lengthy, and the list does reject large
attachments, perhaps Borden will put it up at
https://paste.debian.net or some other pastebin facility and provide
the URL.

-- 
Does anybody read signatures any more?

https://charlescurley.com
https://charlescurley.com/blog/



Re: GRUB lost graphical terminal mode

2024-02-20 Thread Michael Kjörling
On 19 Feb 2024 22:44 +0100, from borde...@tutanota.com (Borden):
>> Would you be willing to post your /boot/grub/grub.cfg for a setup
>> where you get the blank screen GRUB?
> 
> Yeah, I probably should have opened with that. Sorry:
> 
> ```
> # If you change this file, run 'update-grub' afterwards to update
> # /boot/grub/grub.cfg.
> # For full documentation of the options in this file, see:
> #   info -f grub -n 'Simple configuration'
> 
> GRUB_DEFAULT=0
[snipped remainder]

If that's your /boot/grub/grub.cfg, it's a miracle that your GRUB
installation is working at all and not dumping you to a grub> rescue
prompt.

-- 
Michael Kjörling  https://michael.kjorling.se
“Remember when, on the Internet, nobody cared that you were a dog?”



Re: Re: GRUB lost graphical terminal mode

2024-02-19 Thread Borden
> On 18 Feb 2024 21:28 +0100, from borde...@tutanota.com (Borden):
> > what the default is when neither of those is set (which doesn't
> > work). Is this another "undocumented feature" of GRUB?
> 
> Would you be willing to post your /boot/grub/grub.cfg for a setup where you 
> get the blank screen GRUB?

Yeah, I probably should have opened with that. Sorry:

```
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
#   info -f grub -n 'Simple configuration'

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT=""
GRUB_CMDLINE_LINUX=""

# If your computer has multiple operating systems installed, then you
# probably want to run os-prober. However, if your computer is a host
# for guest OSes installed via LVM or raw disk devices, running
# os-prober can cause damage to those guest OSes as it mounts
# filesystems to look for things.
GRUB_DISABLE_OS_PROBER=false

# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

# Uncomment to disable graphical terminal
#GRUB_TERMINAL=console

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
#GRUB_GFXMODE=640x480

# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
#GRUB_DISABLE_LINUX_UUID=true

# Uncomment to disable generation of recovery mode menu entries
#GRUB_DISABLE_RECOVERY="true"

# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"
```

For what it's worth, my graphics adapter is an ancient Intel HD 4000 series 
processor (Ivy Bridge era). It's possible that my card is dying or no longer 
supported. It's just a pain that it worked a few weeks ago and now it doesn't. 
It would be nice to know why.



Re: GRUB lost graphical terminal mode

2024-02-18 Thread Michael Kjörling
On 18 Feb 2024 21:28 +0100, from borde...@tutanota.com (Borden):
> what the default is when neither of those is set (which doesn't
> work). Is this another "undocumented feature" of GRUB?

Would you be willing to post your /boot/grub/grub.cfg for a setup
where you get the blank screen GRUB?

-- 
Michael Kjörling  https://michael.kjorling.se
“Remember when, on the Internet, nobody cared that you were a dog?”



Re: Re: GRUB lost graphical terminal mode

2024-02-18 Thread Borden
> Or perhaps you have all colors set to blank.
> Try adding something like
> GRUB_COLOR_NORMAL="light-blue/black"
> GRUB_COLOR_HIGHLIGHT="light-cyan/blue"

Unfortunately, that didn't work. Still a blank screen. I'm curious, if 
GRUB_TERMINAL=gfxterm works and 
GRUB_TERMINAL=console works, what the default is when neither of those is set 
(which doesn't work). Is this another "undocumented feature" of GRUB?



Re: GRUB lost graphical terminal mode

2024-02-17 Thread Michel Verdier
On 2024-02-16, Borden wrote:

> For a couple of weeks now, I can't use the graphical terminal in my GRUB
> configuration. Setting `GRUB_TERMINAL=console` works fine. With that line
> commented out (thus using default settings), I get a blank screen on boot, a
> 5-second timeout, then a normal boot.
>
> Curiously, keyboard commands work normally. Specifically, I'm on a multi-boot
> system, so I can boot into Windows by pressing the down arrow the correct
> number of times and pressing Enter. So I suspect that GRUB is either sending
> to the wrong video output or GRUB no longer supports my video card.
>
> Any way I can troubleshoot without resorting to `set debug=all`?

Or perhaps you have all colors set to blank.
Try adding something like
GRUB_COLOR_NORMAL="light-blue/black"
GRUB_COLOR_HIGHLIGHT="light-cyan/blue"
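These are read by Debian's /etc/grub.d/05_debian_theme, so the full round
trip would be (a sketch, assuming the stock Debian scripts):

# add to /etc/default/grub
GRUB_COLOR_NORMAL="light-blue/black"
GRUB_COLOR_HIGHLIGHT="light-cyan/blue"
# then regenerate the config
update-grub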



Re: GRUB lost graphical terminal mode

2024-02-16 Thread Borden
Thank you for the tip!

So `GRUB_TERMINAL=gfxterm` works, `GRUB_TERMINAL=console` works, but whatever 
the default is supposed to be does not. Does this imply that "the platform's 
native terminal output" is broken?



Re: GRUB lost graphical terminal mode

2024-02-16 Thread Darac Marjal


On 16/02/2024 17:27, Borden wrote:

For a couple of weeks now, I can't use the graphical terminal in my GRUB 
configuration. Setting `GRUB_TERMINAL=console` works fine. With that line 
commented out (thus using default settings), I get a blank screen on boot, a 
5-second timeout, then a normal boot.

Curiously, keyboard commands work normally. Specifically, I'm on a multi-boot 
system, so I can boot into Windows by pressing the down arrow the correct 
number of times and pressing Enter. So I suspect that GRUB is either sending to 
the wrong video output or GRUB no longer supports my video card.

Any way I can troubleshoot without resorting to `set debug=all`?



According to the info pages, "console" means "native platform console". 
So, for UEFI, that would mean the UEFI console. For BIOS, I'm not sure 
if there is an equivalent.


Strangely, the info page says the default is "to use the platform's 
native terminal output". (Minor nit: I wish documentation would be 
consistent. Is "native terminal" the same as "native console"?)


Things you can try:

* Keep "GRUB_TERMINAL=console" uncommented. If it works, don't break it.

* Try "GRUB_TERMINAL=gfxterm" (uses graphics mode output).

* Try "GRUB_TERMINAL=morse" (uses the system speaker. Only for really 
desperate debugging :) )
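For the gfxterm route, a minimal /etc/default/grub fragment would be (a
sketch; the mode is only an example, pick one your card supports, e.g.
from "vbeinfo"/"videoinfo" at the grub prompt):

GRUB_TERMINAL=gfxterm
GRUB_GFXMODE=1024x768

followed by the usual update-grub.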






GRUB lost graphical terminal mode

2024-02-16 Thread Borden
For a couple of weeks now, I can't use the graphical terminal in my GRUB 
configuration. Setting `GRUB_TERMINAL=console` works fine. With that line 
commented out (thus using default settings), I get a blank screen on boot, a 
5-second timeout, then a normal boot.

Curiously, keyboard commands work normally. Specifically, I'm on a multi-boot 
system, so I can boot into Windows by pressing the down arrow the correct 
number of times and pressing Enter. So I suspect that GRUB is either sending to 
the wrong video output or GRUB no longer supports my video card.

Any way I can troubleshoot without resorting to `set debug=all`?



Re: grub-pc error when upgrading from buster to bullseye

2024-02-13 Thread John Boxall

On 2024-02-12 15:14, Greg Wooledge wrote:


According to
<https://unix.stackexchange.com/questions/745904/how-does-the-grub-pc-postinstall-script-know-which-device-to-install-to>
it uses debconf's database.  That page includes instructions for viewing
the device and changing it.



I had just started looking into the grub-pc package before I saw this. 
I'll be able to test this out sometime tomorrow.



I can't verify this on my machine, because mine uses UEFI.



Will advise. Thank you Greg!

--
Regards,

John Boxall



Re: grub-pc error when upgrading from buster to bullseye

2024-02-12 Thread Greg Wooledge
On Mon, Feb 12, 2024 at 09:04:01PM +0100, Thomas Schmitt wrote:
> Hi,
> 
> John Boxall wrote:
> > I am aware that the label and uuid (drive and partition) are replicated on
> > the cloned drive, but I can't find the model number (in text format) stored
> > anywhere on the drive.
> 
> Maybe the grub-pc package takes its configuration from a different drive
> which is attached to the system ?

According to
<https://unix.stackexchange.com/questions/745904/how-does-the-grub-pc-postinstall-script-know-which-device-to-install-to>
it uses debconf's database.  That page includes instructions for viewing
the device and changing it.
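In short, the approach that page describes (a sketch):

# show the device(s) debconf has recorded for grub-pc
debconf-show grub-pc | grep install_devices
# change them interactively
dpkg-reconfigure grub-pc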

I can't verify this on my machine, because mine uses UEFI.



Re: grub-pc error when upgrading from buster to bullseye

2024-02-12 Thread Thomas Schmitt
Hi,

John Boxall wrote:
> I am aware that the label and uuid (drive and partition) are replicated on
> the cloned drive, but I can't find the model number (in text format) stored
> anywhere on the drive.

Maybe the grub-pc package takes its configuration from a different drive
which is attached to the system ?

Somewhat wayward idea:
Does the initrd contain the inappropriate address?
(I don't see much connection between initrd and grub-pc. But initrd is a
classic hideout for obsolete paths after modification of boot procedures.)


Have a nice day :)

Thomas



Re: grub-pc error when upgrading from buster to bullseye

2024-02-12 Thread John Boxall

On 2024-02-12 09:34, Thomas Schmitt wrote:


The disk/by-id file names are made up from hardware properties.
I believe to see in the name at least: Manufacturer, Model, Serial Number.

So you will have to find the configuration file which knows that
/dev/disk/by-id address and change it either to the new hardware id or
to a /dev/disk/by-uuid address, which refers to the cloned disk content.


Have a nice day :)

Thomas



Thank you Thomas. That is what I am trying to find as I have searched 
for both the SSD drive model number and the WWN on the cloned HDD but 
can't find anything.


I am aware that the label and uuid (drive and partition) are replicated 
on the cloned drive, but I can't find the model number (in text format) 
stored anywhere on the drive.


I will keep looking.

--
Regards,

John Boxall



Re: grub-pc error when upgrading from buster to bullseye

2024-02-12 Thread Thomas Schmitt
Hi,

John Boxall wrote:
>   Setting up grub-pc (2.06-3~deb11u6) ...
>   /dev/disk/by-id/ata-WDC_WDS100T2B0A-00SM50_21185R801540 does not
> exist, so cannot grub-install to it!
> What is confusing to me is that the error indicates the source SSD even
> though I have updated the boot images and installed grub on the cloned HDD.

The disk/by-id file names are made up from hardware properties.
I believe to see in the name at least: Manufacturer, Model, Serial Number.

So you will have to find the configuration file which knows that
/dev/disk/by-id address and change it either to the new hardware id or
to a /dev/disk/by-uuid address, which refers to the cloned disk content.
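A hedged starting point for the hunt (guesses, not a definitive list of
hiding places):

grep -r 'WDS100T2B0A' /etc/default/grub /etc/fstab /etc/grub.d/ 2>/dev/null
debconf-show grub-pc | grep install_devices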


Have a nice day :)

Thomas



grub-pc error when upgrading from buster to bullseye

2024-02-12 Thread John Boxall



I am attempting to upgrade my laptop (Thinkpad X230) from buster to 
bullseye and have run into the error below. In order to ensure that all 
goes well and not to lose all of the tweaks I have added over time, I am 
performing the upgrade first on a cloned HDD (via "dd") of the working SSD.


apt-get -y upgrade --without-new-pkgs

Setting up grub-pc (2.06-3~deb11u6) ...
/dev/disk/by-id/ata-WDC_WDS100T2B0A-00SM50_21185R801540 does not
  exist, so cannot grub-install to it!
You must correct your GRUB install devices before proceeding:

DEBIAN_FRONTEND=dialog dpkg --configure grub-pc
dpkg --configure -a
dpkg: error processing package grub-pc (--configure):
installed grub-pc package post-installation script subprocess
 returned error exit status 1


All of the latest updates for buster have been applied before starting 
the process (below).


apt-get update;apt-get -y upgrade;apt-get -y dist-upgrade;

#shutdown, boot Debian live

#clone working SSD drive to an HDD  

#boot cloned drive

#login and open terminal session

#su to root

update-initramfs -u -k all

    grub-install --recheck /dev/sda

apt-get update;apt-get -y upgrade;apt-get -y dist-upgrade;

#modify /etc/apt/source.list to point to bullseye
#modify all /etc/apt/source.list.d/* files to point to bullseye

apt-get update;apt-get -y upgrade --without-new-pkgs;

Running the recommended dpkg commands brings up the dialog to install 
grub and does complete successfully so that I can then run

"apt-get -y dist-upgrade", which also runs successfully.

What is confusing to me is that the error indicates the source SSD even 
though I have updated the boot images and installed grub on the cloned HDD.


Is there some other configuration file that needs to be updated/removed 
so that the grub-pc install works without intervention?


Source system info:

user:~$ uname -a
Linux laptop 4.19.0-26-amd64 #1 SMP Debian 4.19.304-1 (2024-01-09) 
x86_64 GNU/Linux


user:~$ cat /etc/debian_version
10.13

user:~$ lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
Address sizes:       36 bits physical, 48 bits virtual
CPU(s):              4
On-line CPU(s) list: 0-3
Thread(s) per core:  2
Core(s) per socket:  2
Socket(s):           1
NUMA node(s):        1
Vendor ID:           GenuineIntel
CPU family:          6
Model:               58
Model name:          Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz
Stepping:            9
CPU MHz:             1202.696
CPU max MHz:         3600.0000
CPU min MHz:         1200.0000
BogoMIPS:            5786.44
Virtualization:      VT-x
L1d cache:           32K
L1i cache:           32K
L2 cache:            256K
L3 cache:            4096K
NUMA node0 CPU(s):   0-3
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr 
pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe 
syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl 
xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor 
ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic 
popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm cpuid_fault 
epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid 
fsgsbase smep erms xsaveopt dtherm ida arat pln pts md_clear flush_l1d



--
Regards,

John Boxall



Re: install Kernel and GRUB in chroot.

2024-02-05 Thread Max Nikulin

On 05/02/2024 17:40, Dmitry wrote:

 > It would not work with secure boot

Yes.

But secure boot is usually turned off. It is standard advice during 
Linux installation.


That advice may be standard for distributions that do not provide a signed 
shim and grub. Likely it is applicable to Arch and derivatives. Debian 
supports installation with secure boot enabled.


At first I suspected that you enrolled your own MOK and maybe even wiped 
out Microsoft keys.


Perhaps you can get an encrypted /boot in Debian similar to what you have 
in Manjaro, but it is certainly not the default configuration.
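For reference, the knob involved would be something like this in
/etc/default/grub (an assumption from the upstream option name, not a
tested Debian recipe):

GRUB_ENABLE_CRYPTODISK=y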





Re: install Kernel and GRUB in chroot.

2024-02-05 Thread Ralph Aichinger
On Mon, 2024-02-05 at 17:40 +0700, Dmitry wrote:
> 
> But secure boot is usually turned off. It is standard advice during
> Linux installation.
> 
It will probably be increasingly common though. I've got a Microsoft
Surface Laptop that works fine with Debian, but if you switch off
secure boot, it displays some big red scary warning screen before the
bootloader.

/ralph



Re: install Kernel and GRUB in chroot.

2024-02-05 Thread Dmitry

> It would not work with secure boot

Yes.

But secure boot is usually turned off. It is standard advice during Linux 
installation.




Re: install Kernel and GRUB in chroot.

2024-02-04 Thread Dmitry

sudo -i


Thank you!



I am unsure what UUID you mean.


At Manjaro:

grubx64.efi is on sdb1 - EFI vfat /dev/sdb1
grub.cfg is on sdb2 - crypto_LUKS /dev/sdb2

grubx64.efi contains the UUID="a8...b7" of /dev/sdb2, which is 
TYPE="crypto_LUKS".


`blkid` output:
/dev/sdb2: UUID="a8...b7" TYPE="crypto_LUKS" PARTUUID="8...5"

`strings /boot/efi/EFI/Manjaro/grubx64.efi` output:
cryptomount -u a8...b7
(cryptouuid/a8...b7)/boot/grub

I have Manjaro installed and want to migrate to Debian. That involves 
exploring the boot order.


In the Manjaro GRUB installation, the mount point for the ESP (sdb1) is:
/boot/efi
And the grub.cfg is:
/boot/grub/grub.cfg

The grub.cfg is located on the crypto partition sdb2.

Manjaro has a different GRUB installation scheme from Debian.



Re: install Kernel and GRUB in chroot.

2024-02-04 Thread Max Nikulin

On 03/02/2024 22:32, Dmitry wrote:

2. sudo bash


sudo -i


3. cd /boot/efi/EFI/Manjaro
4. strings grubx64.efi
5. And at the output of strings there is UUID and /boot/grub.


I am unsure what UUID you mean.

Summary: GRUB installation involves not only configuring text files, 
but also generating data in the binary grubx64.efi.


It would not work with secure boot

md5sum /boot/efi/EFI/debian/grubx64.efi 
/usr/lib/grub/x86_64-efi-signed/grubx64.efi.signed

62ff1ee5b75b4565f609043c4b1da759  /boot/efi/EFI/debian/grubx64.efi
62ff1ee5b75b4565f609043c4b1da759 
/usr/lib/grub/x86_64-efi-signed/grubx64.efi.signed






Re: install Kernel and GRUB in chroot.

2024-02-03 Thread Felix Miata
Tim Woodall composed on 2024-02-03 21:25 (UTC):

> Max Nikulin wrote:

>> It seems secure boot is disabled in your case, so I am wondering why you do 
>> not boot xen.efi directly.

> Because the NVRAM is extremely temperamental. 

Not in my experience. I recognized long ago that WRT non-removable media, only 
one
bootloader per machine is required, and pretty well stuck to having only one
active, or at all, no matter how many FOSS operating systems or media I have
installed. The Grubs I have used are not picky about whose kernel or initrd they
are called to load. With only one bootloader installed, retouching NVRAM isn't
often required, and there needn't be much in it to scramble.
-- 
Evolution as taught in public schools is, like religion,
based on faith, not based on science.

 Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

Felix Miata



Re: install Kernel and GRUB in chroot.

2024-02-03 Thread Tim Woodall

On Sat, 3 Feb 2024, Max Nikulin wrote:

It seems secure boot is disabled in your case, so I am wondering why you do 
not boot xen.efi directly.



Because the NVRAM is extremely temperamental. Most updates fail, or
worse, corrupt it to the point it's hard to get anything to boot.

Additionally, there was a bug in an older version of xen that caused a
kernel oops if wifi networking was started. So I wanted to start vanilla
debian and I don't dare touch the NVRAM again (or the bios) until I
absolutely have to.

I don't remember for certain now but I might be booting using
bootx64.efi (which is a copy of grubx64.efi)

It's an old laptop but it still works well for me.




Re: Automatically installing GRUB on multiple drives

2024-02-03 Thread Andy Smith
Hi,

On Fri, Feb 02, 2024 at 02:41:38PM +0100, Franco Martelli wrote:
> There is an alternative to hardware RAID if you want a Linux RAID: you can
> disable UEFI in the BIOS and delete the ESP as I did when I bought my gaming
> PC several years ago.

I have storage devices which legacy BIOS cannot see for booting
purposes. In past years these would require an "option ROM", Today,
they require UEFI firmware. They aren't exotic devices; just
enterprise NVMe.

Thanks,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: install Kernel and GRUB in chroot.

2024-02-03 Thread Dmitry

Main question is resolved.

GRUB knows how to reach grub.cfg because the grubx64.efi binary has the UUID and 
path to the grub configuration.


1. sudo blkid;
2. sudo bash
3. cd /boot/efi/EFI/Manjaro
4. strings grubx64.efi
5. And at the output of strings there is UUID and /boot/grub.

Summary: GRUB installation involves not only configuring text files, but
also generating data in the binary grubx64.efi.



Re: install Kernel and GRUB in chroot.

2024-02-02 Thread Max Nikulin

On 03/02/2024 02:15, Tim Woodall wrote:

$ cat /boot/efi/EFI/XEN/xen.cfg

[...]

I'd be interested if there's a way to tell grubx64.efi to look for a
particular partition UUID.


An example of such grub.cfg from EFI/debian has been posted already in 
this thread

https://lists.debian.org/msgid-search/20240201200846.0bb82...@dorfdsl.de

Frankly speaking, I am unsure concerning your configuration. Perhaps the 
following may make it more clear


efibootmgr -v
find /boot/efi | sort

It seems secure boot is disabled in your case, so I am wondering why you 
do not boot xen.efi directly.




Re: install Kernel and GRUB in chroot.

2024-02-02 Thread Max Nikulin

On 03/02/2024 02:51, Thomas Schmitt wrote:

Max Nikulin wrote:

Just copy files from LiveCD (it should have EFI/Boot/bootx64.efi)
to the ESP partition on the USB stick.

The /EFI/boot directory of a bootable Debian ISO usually does not contain
the full GRUB equipment for EFI. Important parts of an amd64 Live ISO are
in /boot/grub.


Certainly. And grubx64.efi in EFI/Boot of a live medium behaves a bit 
differently from the one in EFI/debian of a regular install, since in the 
former case it relies on boot/grub residing on the same partition.


My point was to copy *files* to the pre-partitioned drive, not a whole 
image to the whole block device. I had hoped that the topic starter was 
aware of the recommended way to create a bootable USB stick using dd (or 
cp, etc.).


I usually copy files to the existing single FAT partition on USB drives 
having an msdos partition table (as they are shipped). It requires 
additional actions to set up syslinux for the sake of legacy boot, but it 
leaves enough space to put some additional files while the boot drive is 
prepared or during a live session (requires remounting as rw). UEFI boot 
relies on files and their specific layout, not on specific block addresses.




Re: install Kernel and GRUB in chroot.

2024-02-02 Thread Franco Martelli

On 02/02/24 at 15:12, Dmitry wrote:

Going to read carefully.

https://www.debian.org/releases/buster/amd64/ch04s03.en.html

Interesting that Buster has more documentation than the current release.




Nope, maybe you gave it a quick read; the release notes of the current 
release ¹ are exhaustive. If you need to go deeper, a link ² to the 
wiki is published on that page.


Kind regards,


¹ https://www.debian.org/releases/bookworm/amd64/ch04s03.en.html
² https://wiki.debian.org/DebianInstaller/CreateUSBMedia
--
Franco Martelli



Re: install Kernel and GRUB in chroot.

2024-02-02 Thread Thomas Schmitt
Hi,

Dmitry wrote:
> Yep. `dd` copy partitions table. Amazing.

Not so amazing after you realize that a partition table is just data on
the storage medium and not some special property of the storage device.
dd copies data. If these data contain a partition table and get copied to
the right place on the storage medium, the partition table will be
recognized by EFI and Linux.


> applies no 'intelligence' to the operation.

This describes it very well. Sometimes dumb is good. Sometimes not.


Initially you stated in
  https://lists.debian.org/debian-user/2024/02/msg8.html
>...> I need to prepare that system for booting.
>...> 1. Install Kernel.
>...> 2. Install GRUB and Configure.
>...> 3. Add changes to UEFI to start booting.

dd-ing a bootable Debian ISO will not do what you describe.
Assumed the ISO is prepared for booting from USB stick, you will get a
bootable Live or an installer system. At least ISOs for i386, amd64, and
arm64 should be prepared for that.

If it is not ready for booting from a USB stick, it will be just a storage
device with a mountable filesystem and Debian files in it.


Max Nikulin wrote:
> > Just copy files from LiveCD (it should have EFI/Boot/bootx64.efi)
> > to the ESP partition on the USB stick.

The /EFI/boot directory of a bootable Debian ISO usually does not contain
the full GRUB equipment for EFI. Important parts of an amd64 Live ISO are
in /boot/grub.
The programs in /EFI/boot are specialized on convincing Secure Boot and
on finding the ISO filesystem with /boot/grub in it. (Actually those are
copies of the EFI boot partition files. The boot partition is a FAT
filesystem image file inside the ISO named /boot/grub/efi.img .)


Tim Woodall wrote:
> > I'm not exactly sure what you're doing.

I join this statement. :))

Do you want a normal changeable Debian system installation or do you want
a Live system with its immutable core and maybe some partition where you
can store files?
(Just curiosity of mine. Possibly I could not help much with chroot
questions anyway.)


Have a nice day :)

Thomas



Re: install Kernel and GRUB in chroot.

2024-02-02 Thread Tim Woodall

On Thu, 1 Feb 2024, Marco Moock wrote:


On 01.02.2024 at 19:20:01, Tim Woodall wrote:


$ cat /boot/efi/EFI/XEN/xen.cfg
[global]
default=debian

[debian]
options=console=vga smt=true
kernel=vmlinuz root=/dev/mapper/vg--dirac-root ro quiet
ramdisk=initrd.img


menuentry "Xen EFI NVME" {
 insmod part_gpt
 insmod search_fs_uuid
 insmod chain
#set root=(hd1,gpt1)
 search --no-floppy --fs-uuid --set=root C057-BC13
 chainloader (hd1,gpt1)/EFI/XEN/xen.efi
}


Then this file tells the boot loader about the /boot or / partition.
Is that the Xen virtualization software?


The NVRAM is configured to boot:
/boot/efi/EFI/debian/grubx64.efi

That then hunts for grub.cfg. I believe it finds the first grub.cfg -
which has caused me issues in the past where I've had a legacy partition
on the disk that I'd forgotten about but the efi application sees. I'd
be interested if there's a way to tell grubx64.efi to look for a
particular partition UUID.

That menuentry above then tells efi to chainload the xen.efi
application. This is all in efi land.

That then reads xen.cfg and boots the kernel and initrd defined in that
file.




Re: Automatically installing GRUB on multiple drives

2024-02-02 Thread hw
On Fri, 2024-02-02 at 14:41 +0100, Franco Martelli wrote:
> On 31/01/24 at 22:51, hw wrote:
> > > [...]
> > > If your suggested solution is "use hardware RAID", no need to repeat
> > > that one though: I see you said it in a few other messages, and that
> > > suggestion has been received. Assume the conversation continues
> > > amongst people who don't like that suggestion.
> 
> > Well, too late, I already said it again since you asked.  Do you have
> > a better solution?  It's ok not to like this solution, but do you have
> > a better one?
> 
> There is an alternative to hardware RAID if you want a Linux RAID: you 
> can disable UEFI in the BIOS and delete the ESP as I did when I bought 
> my gaming PC several years ago.
> 
> I created my software RAID level 5 using debian-installer and it works 
> perfectly without ESP, you have to choose "Expert install" in "Advanced 
> options". I installed Bookworm when it was released in this way.

Right, I forgot about that.  Is that always an option?



Re: install Kernel and GRUB in chroot.

2024-02-02 Thread tomas
On Sat, Feb 03, 2024 at 01:17:05AM +0700, Dmitry wrote:
> > Just copy files from LiveCD (it should have EFI/Boot/bootx64.efi) to the
> ESP partition on the USB stick.
> 
> As I understand right now `dd` command applied to a device will copy all
> information including partitions table. Thus:

Actually, cp (or even, horrors ;-) cat do the same. One advantage of dd is...

> dd if=debian-xx.iso of=/dev/sdb bs=4M status=progress; sync

...this "status=progress". The other is "oflag=sync": for bigger sticks
(and if you have tons of RAM) this last "sync" could take a long while,
without giving you feedback of what's happening.

And the third one (which it shares with cp but not with cat) is that
sudo won't work in "sudo cat foo.img > /dev/bar" (unless you already
are root, but that'd be cheating ;-)
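Putting those together, the dd route in one line (a sketch):

  sudo dd if=debian-xx.iso of=/dev/sdb bs=4M status=progress oflag=sync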

Cheers
-- 
t


signature.asc
Description: PGP signature


Re: install Kernel and GRUB in chroot.

2024-02-02 Thread Dmitry
> Just copy files from LiveCD (it should have EFI/Boot/bootx64.efi) to the 
ESP partition on the USB stick.


Yep. `dd` copies the partition table. Amazing.

```
dd will simply recreate the old partition scheme, as it is a bitwise copy & 
applies no 'intelligence' to the operation.

```
https://askubuntu.com/a/847533



Re: install Kernel and GRUB in chroot.

2024-02-02 Thread Dmitry
> Just copy files from LiveCD (it should have EFI/Boot/bootx64.efi) to the 
ESP partition on the USB stick.


As I understand it right now, the `dd` command applied to a device will copy all 
information, including the partition table. Thus:


dd if=debian-xx.iso of=/dev/sdb bs=4M status=progress; sync

would simply create a copy of the device, filesystem and partition table included.



Re: install Kernel and GRUB in chroot.

2024-02-02 Thread David Wright
On Fri 02 Feb 2024 at 21:12:30 (+0700), Dmitry wrote:
> Going to read carefully.
> 
> https://www.debian.org/releases/buster/amd64/ch04s03.en.html
> 
> Interesting that Buster has more documentation than the current release.

It appears the balance has now been spun off into a wiki page, at

  https://wiki.debian.org/DebianInstaller/CreateUSBMedia

Cheers,
David.



Re: install Kernel and GRUB in chroot.

2024-02-02 Thread Max Nikulin

On 02/02/2024 21:06, Dmitry wrote:
Need additional research on what to do with a FlashStick with several 
partitions to make a LiveCD from it.


Just copy files from LiveCD (it should have EFI/Boot/bootx64.efi) to the 
ESP partition on the USB stick.




Re: install Kernel and GRUB in chroot.

2024-02-02 Thread Dmitry

Going to read carefully.

https://www.debian.org/releases/buster/amd64/ch04s03.en.html

Interesting that Buster has more documentation than the current release.




Re: install Kernel and GRUB in chroot.

2024-02-02 Thread Dmitry

> Do you want to install the OS on it?

Eventually no, I do not want the OS on the Flash Stick.
The Flash Stick is only a testing place. I want the OS on the SSD.

Now I am wondering how to prepare the Flash Stick to write a LiveImage onto it, 
because I already created a GPT table on that Flash and used debootstrap.

Looks like I need to create a new partition and copy the LiveImage there.
Need additional research on what to do with a FlashStick with several partitions 
to make a LiveCD from it.



> Do you want an encrypted system?

No. I do not need this abstraction layer now.



Re: install Kernel and GRUB in chroot.

2024-02-02 Thread Marco Moock
On 02.02.2024, Dmitry wrote:

> I want OS at the SSD.

Then the ESP should be on that SSD too.



Re: Automatically installing GRUB on multiple drives

2024-02-02 Thread Franco Martelli

On 31/01/24 at 22:51, hw wrote:

[...]
If your suggested solution is "use hardware RAID", no need to repeat
that one though: I see you said it in a few other messages, and that
suggestion has been received. Assume the conversation continues
amongst people who don't like that suggestion.



Well, too late, I already said it again since you asked.  Do you have
a better solution?  It's ok not to like this solution, but do you have
a better one?


There is an alternative to hardware RAID if you want a Linux RAID: you 
can disable UEFI in the BIOS and delete the ESP as I did when I bought 
my gaming PC several years ago.


I created my software RAID level 5 using debian-installer and it works 
perfectly without ESP, you have to choose "Expert install" in "Advanced 
options". I installed Bookworm when it was released in this way.


Cheers,
--
Franco Martelli



Re: install Kernel and GRUB in chroot.

2024-02-01 Thread Marco Moock

Max Nikulin wrote:
> On a *removable* drive EFI/Boot/bootx64.efi (that is actually
> /usr/lib/shim/shimx64.efi.signed that loads grubx64.efi) may allow to
> boot without modification of boot entries in NVRAM.

Yes, UEFI can (and must be able to) boot from a device without a boot 
entry in the UEFI. Otherwise you wouldn't be able to install an OS.
You can boot such a device by simply selecting the device in the UEFI 
boot manager. Often it shows the model number of the device.

> Likely it is implementation-dependent whether a drive with GPT
> partition table is considered as a removable. For regular (internal)
> drives UEFI requires GPT.

MBR should also work.




Re: Automatically installing GRUB on multiple drives

2024-02-01 Thread hw
On Wed, 2024-01-31 at 23:28 +0100, Nicolas George wrote:
> hw (12024-01-31):
> > Well, I doubt it.
> 
> Well, doubt it all you want. In the meantime, we will continue to use
> it.
> 
> Did not read the rest, not interested in red herring nightmare
> scenarios.
> 

You'll figure it out eventually.  Meanwhile, you may be happier by
unsubscribing from all mailing lists since you're not interested in
what people have to say.  In any case, I'm filtering all emails from
you right into the trash folder now.



Re: Automatically installing GRUB on multiple drives

2024-02-01 Thread Max Nikulin

On 01/02/2024 05:45, hw wrote:

It would make sense that all the UEFI BIOSs would be fixed so that
they do not create this problem in the first place like they
shouldn't.


Besides regular boots, sometimes it is necessary to update firmware and 
.efi files loaded for this purpose may write logs or other .efi files 
(that should be applied after reboot) to ESP. Likely I have seen that 
with HP firmware.


APT likely has hooks that run after install/upgrade and may be used to 
synchronize ESP partitions mounted to different paths.




Re: install Kernel and GRUB in chroot.

2024-02-01 Thread Max Nikulin

On 02/02/2024 01:46, Dmitry wrote:

3. Now I want to boot using that Flash.

1. ESP is a partition that stores the GRUB binary: /boot/efi/EFI/Name/grubx64.efi


On a *removable* drive EFI/Boot/bootx64.efi (that is actually 
/usr/lib/shim/shimx64.efi.signed that loads grubx64.efi) may allow to 
boot without modification of boot entries in NVRAM. Likely it is 
implementation-dependent whether a drive with GPT partition table is 
considered as a removable. For regular (internal) drives UEFI requires 
GPT. I do not suggest you to use msdos partition table that might be 
suitable for live media, not for installation with multiple partitions 
including Linux native file systems.
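As a concrete sketch of that removable-media fallback (the /mnt/esp mount
point is an assumption; the shim path is the one named above):

mkdir -p /mnt/esp/EFI/Boot
cp /usr/lib/shim/shimx64.efi.signed /mnt/esp/EFI/Boot/bootx64.efi
cp /boot/efi/EFI/debian/grubx64.efi /mnt/esp/EFI/Boot/grubx64.efi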



3. At the system partition there is a /boot/grub/grub.cfg


There are 2 grub.cfg: for ESP and for /boot




Re: install Kernel and GRUB in chroot.

2024-02-01 Thread Marco Moock
On 01.02.2024 at 19:20:01, Tim Woodall wrote:

> $ cat /boot/efi/EFI/XEN/xen.cfg
> [global]
> default=debian
> 
> [debian]
> options=console=vga smt=true
> kernel=vmlinuz root=/dev/mapper/vg--dirac-root ro quiet
> ramdisk=initrd.img
> 
> 
> menuentry "Xen EFI NVME" {
>  insmod part_gpt
>  insmod search_fs_uuid
>  insmod chain
> #set root=(hd1,gpt1)
>  search --no-floppy --fs-uuid --set=root C057-BC13
>  chainloader (hd1,gpt1)/EFI/XEN/xen.efi
> }

Then this file tells the boot loader about the /boot or / partition.
Is that the Xen virtualization software?

-- 
Regards,
Marco

Please send spam and advertising to ichschickerekl...@cartoonies.org



Re: install Kernel and GRUB in chroot.

2024-02-01 Thread Tim Woodall

On Thu, 1 Feb 2024, Marco Moock wrote:


On 02.02.2024 at 01:46:06, Dmitry wrote:


2. ==>BAM<== somehow that binary knows the system partition.


That information is on the EFI partition, where the GRUB bootloader
binary also resides.

root@ryz:/boot/efi/EFI# cat /boot/efi/EFI/debian/grub.cfg
search.fs_uuid 5b8b669d-xyz root hd0,gpt2 #boot partition
set prefix=($root)'/grub'
configfile $prefix/grub.cfg
root@ryz:/boot/efi/EFI#

If that information is loaded, the kernel can be loaded from the boot
partition.





Are you sure that file does anything? I don't have one

drwxr-xr-x 2 root root   4096 Dec 31  2017 .
drwxr-xr-x 6 root root   4096 Dec 25  2019 ..
-rwxr-xr-x 1 root root 163840 Sep 11  2022 grubx64.efi


This finds my boot partition and then chainloads the XEN efi binary
which does have some config.

/boot/efi/EFI/XEN:
total 38204
drwxr-xr-x 2 root root 4096 May  5  2023 .
drwxr-xr-x 6 root root 4096 Dec 25  2019 ..
-rwxr-xr-x 1 root root 31132473 Aug 12 08:34 initrd.img
-rwxr-xr-x 1 root root  5283136 Aug 12 08:34 vmlinuz
-rwxr-xr-x 1 root root  138 May  5  2023 xen.cfg
-rwxr-xr-x 1 root root  2687456 Jun 20  2021 xen.efi

$ cat /boot/efi/EFI/XEN/xen.cfg
[global]
default=debian

[debian]
options=console=vga smt=true
kernel=vmlinuz root=/dev/mapper/vg--dirac-root ro quiet
ramdisk=initrd.img


menuentry "Xen EFI NVME" {
insmod part_gpt
insmod search_fs_uuid
insmod chain
#set root=(hd1,gpt1)
search --no-floppy --fs-uuid --set=root C057-BC13
chainloader (hd1,gpt1)/EFI/XEN/xen.efi
}



Re: install Kernel and GRUB in chroot.

2024-02-01 Thread Tim Woodall

On Fri, 2 Feb 2024, Dmitry wrote:


Hi Tim. The community is so kind.

So.


I'm not exactly sure what you're doing.


Understand how GRUB works, to do the booting myself.

1. Trying to install Debian on the Flash.
2. Using it via debootstrap.
3. Now I want to boot using that Flash.

Looks like I caught the thread.

1. ESP is a partition that stores the GRUB binary: /boot/efi/EFI/Name/grubx64.efi
2. ==>BAM<== somehow that binary knows the system partition.



because grubx64.efi understands the disk layout and looks for it. You can
build your own

I'm not giving any guarantees - look at the date on this file:

$ ls -al test-uefi
-rw-r--r-- 1 tim tim 341 Dec 31  2018 test-uefi

$ cat test-uefi
grub-mkimage -o bootx64.efi -p /EFI/BOOT -O x86_64-efi \
 fat iso9660 part_gpt part_msdos \
 normal boot echo linux configfile loopback chain \
 efifwsetup efi_gop efi_uga \
 ls search search_label search_fs_uuid search_fs_file \
 gfxterm gfxterm_background gfxterm_menu test all_video loadenv \
 exfat ext2 lvm mdraid09 mdraid1x diskfilter

but that probably builds (or once built) a .efi application that will
successfully boot a system by searching for grub.cfg. I don't remember
the details...

I also have this - take with a pinch of salt - I wrote this learning
about this system as you are trying to now...


$ ls -al uefi-notes
-rw-r--r-- 1 tim tim 2375 Dec  1  2018 uefi-notes

1 FDISK

g - create a new empty GPT partition table

p - create a primary partition
+128M (size)

t - change type
1 - EFI system

p - create primary partition
fill rest of disk

vgcreate vg-uefi-boot /dev/sdb2

lvcreate -L 128M -n boot vg-uefi-boot

lvcreate -l 100%FREE -n root vg-uefi-boot

mke2fs -j /dev/mapper/vg--uefi--boot-boot

mke2fs -j /dev/mapper/vg--uefi--boot-root

mkdosfs /dev/sdb1

mount /dev/vg-uefi-boot/root /mnt/image/

debootstrap --variant=minbase stretch /mnt/image ftp://einstein/debian/

mount -o bind /proc /mnt/image/proc
mount -o bind /dev /mnt/image/dev
mount -o bind /sys /mnt/image/sys

chroot /mnt/image <<EOF

echo HOSTNAME >/etc/hostname  # HOSTNAME is a placeholder; the original value was lost in archiving

cat <<EOFX >/etc/fstab
# /etc/fstab: static file system information.
#
# <file system>          <mount point>  <type>  <options>                <dump>  <pass>
/dev/vg-uefi-boot/root   /              ext3    errors=remount-ro        1       1
/dev/vg-uefi-boot/boot   /boot          ext3    defaults                 1       2
UUID=D1EA-35CC           /boot/efi      auto    defaults,noatime,nofail  0       0
EOFX

mount /boot
mkdir /boot/efi
mount /boot/efi

#Don't install recommends
cat <<EOFX >/etc/apt/apt.conf.d/99-no-recommends
APT
{
  Install-Recommends "false";
}
EOFX

#Setup apt sources
cat <<EOFX >/etc/apt/sources.list
deb ftp://einstein/debian stretch main non-free
deb ftp://einstein/debian-security stretch/updates main non-free
deb ftp://einstein/local stretch main
EOFX

echo link_in_boot = Yes >/etc/kernel-img.conf

apt-get update
apt-get -y upgrade

apt-get -y install sysvinit-core
apt-get -y install openssh-server
apt-get -y install ifupdown
apt-get -y install grub-efi-amd64
apt-get -y install mdadm
apt-get -y install lvm2
apt-get -y install linux-image-amd64

grub-install

mkdir /boot/efi/EFI/BOOT
cp /boot/efi/EFI/debian/grubx64.efi /boot/efi/EFI/BOOT/bootx64.efi

update-grub

(update root password)

umount /boot/efi
umount /boot

EOF

umount /mnt/image/proc
umount /mnt/image/dev
umount /mnt/image/sys
umount /mnt/image/

vgchange -aln vg-uefi-boot

(Installed firmware-realtek)

mount /dev/vg-uefi-boot/root /mnt/image/
mount -o bind /proc /mnt/image/proc
mount -o bind /dev /mnt/image/dev
mount -o bind /sys /mnt/image/sys
chroot /mnt/image
mount -a

umount -a
exit

umount /mnt/image/proc
umount /mnt/image/dev
umount /mnt/image/sys
umount /mnt/image/
vgchange -aln vg-uefi-boot




Re: install Kernel and GRUB in chroot.

2024-02-01 Thread Marco Moock
On 02.02.2024 at 01:46:06, Dmitry wrote:

> 2. ==>BAM<== some how that binary knows the system partition.

That information is on the EFI partition, where the GRUB bootloader
binary also resides.

root@ryz:/boot/efi/EFI# cat /boot/efi/EFI/debian/grub.cfg
search.fs_uuid 5b8b669d-xyz root hd0,gpt2 #boot partition
set prefix=($root)'/grub'
configfile $prefix/grub.cfg
root@ryz:/boot/efi/EFI#

If that information is loaded, the kernel can be loaded from the boot
partition.


-- 
Regards
Marco

Please send spam and advertising to ichschickerekl...@cartoonies.org



Re: install Kernel and GRUB in chroot.

2024-02-01 Thread Dmitry

Hi Tim. The community is so kind.

So.

> I'm not exactly sure what you're doing.

Understanding how GRUB works, to boot it myself.

1. Trying to install Debian on a flash drive.
2. Setting it up with debootstrap.
3. Now I want to boot using that flash drive.

Looks like I caught the thread.

1. The ESP is a partition that stores the GRUB binary: /boot/efi/EFI/Name/grubx64.efi
2. ==>BAM<== somehow that binary knows the system partition.
3. On the system partition there is a /boot/grub/grub.cfg
4. And in that /boot/grub/grub.cfg are the UUIDs etc. to start booting.

But the question is about step 2: how /boot/efi/EFI/Name/grubx64.efi knows where
to find /boot/grub/grub.cfg, which resides on a completely different partition.

Interesting.
But now that the question has been asked, it is possible to find the answer.

Thank you!



Re: install Kernel and GRUB in chroot.

2024-02-01 Thread Marco Moock
On 02.02.2024 at 00:09:56, Dmitry wrote:

> I made experiments with a flash drive and created GPT there.
> If I want to use a standard Debian image, how should I partition that
> flash drive (MBR or GPT)?

Do you want to install the OS on it?
For the partition table, I recommend GPT.

Do you want an encrypted system?

>  > Do you need a special configuration here or is the default just
>  > fine?  
> 
> I just need a working one. But I am confused about how GRUB would get
> plenty of things related to the filesystem, kernel location and so on.

That is done by the installer. If you don't need special
configuration, use the install process. It does everything for you.

-- 
Regards
Marco

Please send spam and advertising to ichschickerekl...@cartoonies.org



Re: install Kernel and GRUB in chroot.

2024-02-01 Thread Dmitry

Huge thanks.
Your message is the start of my understanding.
And it also gives me plenty of texts to read.

> EFI/debian/grub.cfg on the EFI System Partition contains the filesystem
UUID of the partition where the grub files reside.


All the parts are simple, but when compounded together they become messy.

In Manjaro:
/boot/efi/EFI/Manjaro/grubx64.efi - the binary started by UEFI.
/boot/grub/grub.cfg - shell (?) script with configurations.
/boot/vmlinuz.* - the kernel.

And if I call `lsblk`,
only /boot/efi with the binary is a separate partition.

Things become more clear.



Re: install Kernel and GRUB in chroot.

2024-02-01 Thread Tim Woodall

On Thu, 1 Feb 2024, Dmitry wrote:


Greetings!

After:
1. Creating a GPT table and GPT partitions with fdisk.
2. Copying data with debootstrap.
3. Chrooting into the newly created system.

I need to prepare that system for booting.
1. Install Kernel.
2. Install GRUB and Configure.
3. Add changes to UEFI to start booting.

And at point two (Install GRUB) I am a little bit confused.

1. Need to create ESP, and put GRUB there.
2. Need to configure GRUB to select appropriate kernel and ramdisk.


I'm not exactly sure what you're doing. But the "trick" to doing most of
this in a chroot is to bind mount /dev, /proc, /sys and /run into the
chroot.

Then things like installing the kernel, building the initrd etc
(usually) just work.
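
A minimal sketch of that sequence, assuming the new system is mounted at
/mnt/image:

mount --bind /dev  /mnt/image/dev
mount --bind /proc /mnt/image/proc
mount --bind /sys  /mnt/image/sys
mount --bind /run  /mnt/image/run
chroot /mnt/image /bin/bash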

"Add changes to UEFI to start booting" depends on the actual hardware
that will boot. If you're preparing images on one system to boot on
another then that bit you'll have to solve by booting the hardware.

I'd probably pick a live distro but it's theoretically[1] possible to
generate your own bootx64.efi that will then boot your system. Once it's
booted you can then use the normal tools to replace it with a more
easily maintained debian solution.

[1] Not just theoretical, I've actually done it once long ago.



Re: install Kernel and GRUB in chroot.

2024-02-01 Thread Dmitry

> Why don't you use the normal setup?

I spent a lot of time on research; it would be nice to finish.

I made experiments with a flash drive and created GPT there.
If I want to use a standard Debian image, how should I partition that
flash drive (MBR or GPT)?

> Do you need a special configuration here or is the default just fine?

I just need a working one. But I am confused about how GRUB would get
plenty of things related to the filesystem, kernel location and so on.

> If you create a separate boot partition (do you really need it?), it
must be mounted at /boot.

Here is where the mess starts: how would GRUB and the kernel get information
about all these mount points during boot?



Re: install Kernel and GRUB in chroot.

2024-02-01 Thread Max Nikulin

On 01/02/2024 22:54, Marco Moock wrote:

On 01.02.2024, Dmitry wrote:

Use gdisk for that.
You can create an EFI partition there.
Choose Type EFI (EF00), 100MB.
Format it with FAT32.


550MiB is recommended in "Preparing your ESP"
http://www.rodsbooks.com/linux-uefi/#installing
see also
https://www.rodsbooks.com/gdisk/advice.html#esp_sizing
https://fedoraproject.org/wiki/Changes/BiggerESP


2. Need to configure GRUB to select appropriate kernel and ramdisk.


Do you need a special configuration here or is the default just fine?


EFI/debian/grub.cfg on the EFI System Partition contains the filesystem
UUID of the partition where the grub files reside.


After installing grub check that NVRAM has an appropriate entry

efibootmgr -v
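
If the entry is missing, it can also be created by hand; a sketch with a
placeholder disk and partition number (the loader path must match what
grub-install actually wrote to the ESP):

efibootmgr -c -d /dev/sdX -p 1 -L debian -l '\EFI\debian\grubx64.efi'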


How would GRUB understand where to be installed and where the kernel is?


It loads files from the filesystem on the specified partition. Unlike for
BIOS, raw device blocks are not involved.





Re: install Kernel and GRUB in chroot.

2024-02-01 Thread Marco Moock
On 01.02.2024, Dmitry wrote:

Why don't you use the normal setup?
It does many tasks for you.

> After:
> 1. Creating a GPT table and GPT partitions with fdisk.

Use gdisk for that.
You can create an EFI partition there.
Choose Type EFI (EF00), 100MB.
Format it with FAT32.
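
A sketch of those steps on a placeholder device (in gdisk: n for a new
partition, EF00 as the type code, w to write):

gdisk /dev/sdX
mkfs.fat -F 32 /dev/sdX1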

> And at point two (Install GRUB) I am a little bit confused.
> 
> 1. Need to create ESP

Do that before the install with gdisk.

> and put GRUB there.

That is done automatically if it is mounted at /boot/efi.

> 2. Need to configure GRUB to select appropriate kernel and ramdisk.

Do you need a special configuration here or is the default just fine?

> How to create an ESP partition and mount it to /boot?

That must be mounted to /boot/efi.

If you create a separate boot partition (do you really need it?), it
must be mounted at /boot.

> How would GRUB understand where to be installed and where the kernel is?

It chooses by the path.
grub-install is the command, no device as parameter.
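
Spelled out, the usual sequence inside the installed system looks something
like this (ESP assumed to be /dev/sdX1; the bootloader id is just an example):

mount /dev/sdX1 /boot/efi
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=debian
update-grub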



install Kernel and GRUB in chroot.

2024-02-01 Thread Dmitry

Greetings!

After:
1. Creating a GPT table and GPT partitions with fdisk.
2. Copying data with debootstrap.
3. Chrooting into the newly created system.

I need to prepare that system for booting.
1. Install Kernel.
2. Install GRUB and Configure.
3. Add changes to UEFI to start booting.

And at point two (Install GRUB) I am a little bit confused.

1. Need to create ESP, and put GRUB there.
2. Need to configure GRUB to select appropriate kernel and ramdisk.

Could you help me a little bit?

How to create an ESP partition and mount it to /boot?
Is an ESP partition a simple partition formatted as FAT32?
How would GRUB understand where to be installed and where the kernel is?

And the last one: if I want to step back and just use a USB stick with a
predefined image, what kind of partition table should I create there, MBR or GPT?

Thank you!



Re: Automatically installing GRUB on multiple drives

2024-01-31 Thread hw
On Wed, 2024-01-31 at 06:33 +0100, to...@tuxteam.de wrote:
> On Tue, Jan 30, 2024 at 09:47:35PM +0100, hw wrote:
> > On Mon, 2024-01-29 at 18:41 +0100, to...@tuxteam.de wrote:
> > > On Mon, Jan 29, 2024 at 05:52:38PM +0100, hw wrote:
> > > 
> > > [...]
> > > 
> > > > Ok in that case, hardware RAID is a requirement for machines with UEFI
> > > > BIOS since otherwise their reliability is insufficient.
> > > 
> > > The price you pay for hardware RAID is that you need a compatible 
> > > controller
> > > if you take your disks elsewhere (e.g. because your controller dies).
> > 
> > How often do you take the system disks from one machine to another,
> > and how often will the RAID controller fail?
> > 
> > > With (Linux) software RAID you just need another Linux...
> > 
> > How's that supposed to help?  The machine still won't boot if the disk
> > with the UEFI partition has failed.
> 
> We are talking about getting out of a catastrophic event. In such cases,
> booting is the smallest of problems: use your favourite rescue medium
> with a kernel which understands your RAID (and possibly other details
> of your storage setup, file systems, LUKS, whatever).

Try to do that with a remote machine.

> [...]
> 
> > Maybe the problem needs to be fixed in all the UEFI BIOSs.  I don't
> > think it'll happen, though.
> 
> This still makes sense if you want a hands-off recovery (think data
> centre far away). Still you won't recover from a broken motherboard.

It would make sense for all the UEFI BIOSes to be fixed so that they
do not create this problem in the first place, as they shouldn't.

You seem to forget the point that one reason for using redundant
storage, like some kind of RAID, to boot from, is that I don't want to
have booting issues, especially not with remote machines.

Unfortunately UEFI BIOSes make that difficult unless you use hardware
RAID.

And I don't want to have that problem with local machines either
because it's a really nasty problem.  How do you even restore the UEFI
partition when the disk it's on has failed and you don't have a copy?




Re: Automatically installing GRUB on multiple drives

2024-01-31 Thread Nicolas George
hw (12024-01-31):
> Well, I doubt it.

Well, doubt it all you want. In the meantime, we will continue to use
it.

Did not read the rest, not interested in red herring nightmare
scenarios.

-- 
  Nicolas George



Re: Automatically installing GRUB on multiple drives

2024-01-31 Thread hw
On Tue, 2024-01-30 at 21:35 +0100, Nicolas George wrote:
> hw (12024-01-30):
> > Yes, and how much effort and how reliable is doing that?
> 
> Very little effort and probably more reliable than hardware RAID with
> closed-source hardware.

Well, I doubt it.  After all you need to copy a whole partition and
must make sure that it doesn't fail through distribution upgrades and
all kinds of possible changes and even when someone shuts down or
reboots the computer or pulls the plug while the copying is still in
progress.  You also must make sure that the boot manager is installed
on multiple disks.

And when are you going to do it?  When shutting the machine down?
That might not happen, and when it does happen, maybe you don't want to
wait for it.

When rebooting it?  Perhaps you don't want to overwrite the copy at
that time, or perhaps it's too late then because there were software
updates before you rebooted and one of the disks failed when
rebooting.

Do you suggest to install backup batteries or capacitors to keep the
machine running until the copying process has completed when the power
goes out?

Or do you want to do it all manually at a time convenient for you?
What if you forgot to do it?


I'm not so silly that you could convince me that a bunch of scripts you
put together yourself can do this more reliably than hardware RAID does,
especially not by implying that hardware RAID, which has been tested in
many datacenters on who knows how many hundreds of thousands of machines
over many years, uses closed-source software and therefore must be
unreliable.

The lowest listed MTBF of hardware RAID on [1] is over 260,000 hours
(i.e. about 30 years).  Can you show that you can do it more
reliably with your bunch of scripts?


[1]: 
https://www.intel.com/content/www/us/en/support/articles/07641/server-products/sasraid.html



Re: Automatically installing GRUB on multiple drives

2024-01-31 Thread hw
On Wed, 2024-01-31 at 15:16 +, Andy Smith wrote:
> Hi,
> 
> On Tue, Jan 30, 2024 at 09:50:23PM +0100, hw wrote:
> > On Mon, 2024-01-29 at 23:53 +, Andy Smith wrote:
> > > I think you should read it again until you find the part where it
> > > clearly states what the problem is with using MD RAID for this. If
> > > you still can't find that part, there is likely to be a problem I
> > > can't assist with.
> > 
> > That there may be a problem doesn't automatically mean that you need a
> > bunch of scripts.
> 
> This is getting quite tedious.
> 
> Multiple people have said that there is a concern that UEFI firmware
> might write to an ESP, which would invalidate the use of software
> RAID for the ESP.
> 
> Multiple people have suggested instead syncing ESP partitions in
> userland. If you're going to do that then you'll need a script to do
> it.
> 
> I don't understand what you find so difficult to grasp about this.

You kept only saying 'read the link'.  Well, read the link!  It points
out 4 choices and none of them says 'you need a bunch of scripts'.

> If it's that you have some other proposal for solving this, it would
> be helpful for you to say so

I already said my solution is using hardware RAID or fixing the
problem in all the UEFI BIOSes.

I'd also say that the BIOS must never write to the storage, be it to
a UEFI partition or anywhere else.  But that's a different topic.

> [...]
> If your suggested solution is "use hardware RAID", no need to repeat
> that one though: I see you said it in a few other messages, and that
suggestion has been received. Assume the conversation continues
> amongst people who don't like that suggestion.

Well, too late, I already said it again since you asked.  Do you have
a better solution?  It's ok not to like this solution, but do you have
a better one?

> Otherwise, I don't think anyone knows what you have spent several
> messages trying to say. All we got was, "you don't need scripts".

What do you expect when you keep repeating 'read the link'.



Re: Automatically installing GRUB on multiple drives

2024-01-31 Thread Andy Smith
Hi,

On Tue, Jan 30, 2024 at 09:50:23PM +0100, hw wrote:
> On Mon, 2024-01-29 at 23:53 +, Andy Smith wrote:
> > I think you should read it again until you find the part where it
> > clearly states what the problem is with using MD RAID for this. If
> > you still can't find that part, there is likely to be a problem I
> > can't assist with.
> 
> That there may be a problem doesn't automatically mean that you need a
> bunch of scripts.

This is getting quite tedious.

Multiple people have said that there is a concern that UEFI firmware
might write to an ESP, which would invalidate the use of software
RAID for the ESP.

Multiple people have suggested instead syncing ESP partitions in
userland. If you're going to do that then you'll need a script to do
it.

I don't understand what you find so difficult to grasp about this.
If it's that you have some other proposal for solving this, it would
be helpful for you to say so, instead of just repeating "why do you
need scripts, you don't need scripts", because if you just repeat
that, all I can do is repeat what I've already said until I become
bored and stop.

If your suggested solution is "use hardware RAID", no need to repeat
that one though: I see you said it in a few other messages, and that
suggestions has been received. Assume the conversation continues
amongst people who don't like that suggestion.

Otherwise, I don't think anyone knows what you have spent several
messages trying to say. All we got was, "you don't need scripts".

Thanks,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: os-prober detects in wrong order and GRUB doesn't have enough options

2024-01-31 Thread Greg Wooledge
On Wed, Jan 31, 2024 at 05:29:56AM -, David Chmelik wrote:
> Earlier this year or last year I tried, via Devuan, to report that
> os-prober detects OSes in the wrong order.  It may detect the current OS
> partition first, but if you have more than 10 partitions, it then
> continues from 10, runs (if that is all you have) to the last of the
> tens, and only then continues somewhere among the single-digit
> partitions,

Sounds like someone's doing an alphanumeric sort on numbers.

unicorn:~$ printf '%s\n' 1 2 3 11 12 13 | sort
1
11
12
13
2
3

It could be a result of filename globbing, e.g. /dev/sda[0-9]* which
would give that kind of ordering.
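
For comparison, a numeric sort gives the order one would expect:

unicorn:~$ printf '%s\n' 1 2 3 11 12 13 | sort -n
1
2
3
11
12
13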

In any case, this isn't a Devuan mailing list.  If you've reported a
bug to Debian, then you should have the bug number, and the URL to
its web page.  If you've reported it to Devuan, then we can't help.



Re: Automatically installing GRUB on multiple drives

2024-01-30 Thread tomas
On Tue, Jan 30, 2024 at 09:47:35PM +0100, hw wrote:
> On Mon, 2024-01-29 at 18:41 +0100, to...@tuxteam.de wrote:
> > On Mon, Jan 29, 2024 at 05:52:38PM +0100, hw wrote:
> > 
> > [...]
> > 
> > > Ok in that case, hardware RAID is a requirement for machines with UEFI
> > > BIOS since otherwise their reliability is insufficient.
> > 
> > The price you pay for hardware RAID is that you need a compatible controller
> > if you take your disks elsewhere (e.g. because your controller dies).
> 
> How often do you take the system disks from one machine to another,
> and how often will the RAID controller fail?
> 
> > With (Linux) software RAID you just need another Linux...
> 
> How's that supposed to help?  The machine still won't boot if the disk
> with the UEFI partition has failed.

We are talking about getting out of a catastrophic event. In such cases,
booting is the smallest of problems: use your favourite rescue medium
with a kernel which understands your RAID (and possibly other details
of your storage setup, file systems, LUKS, whatever).

[...]

> Maybe the problem needs to be fixed in all the UEFI BIOSs.  I don't
> think it'll happen, though.

This still makes sense if you want a hands-off recovery (think data
centre far away). Still you won't recover from a broken motherboard.

Cheers
-- 
t


signature.asc
Description: PGP signature


os-prober detects in wrong order and GRUB doesn't have enough options

2024-01-30 Thread David Chmelik
Earlier this year or last year I tried, via Devuan, to report that os-prober 
detects OSes in the wrong order.  It may detect the current OS partition 
first, but if you have more than 10 partitions, it then continues from 10, 
runs (if that is all you have) to the last of the tens, and only then 
continues somewhere among the single-digit partitions.  It thus puts your 
OSes all in the wrong order in GRUB2, which should have more options for 
menu ordering, the way LILO is easy to configure exactly as you want.  I 
have some entries I wrote myself, because even after a bug report over 10 
years ago, os-prober didn't detect the FreeBSD & NetBSD (reported) & 
DragonFlyBSD UNIXes, nor the OpenSolaris/IllumOS UNIXes; nor does GRUB2 
handle some GNU/Linux systems right, like SystemRescue, or some obscure 
boot options that some RedHat variants need or they won't boot.  It seems 
the bug maybe didn't get reported to the os-prober programmers.  Did it not 
get through, or is there another way I could report this?




Re: Automatically installing GRUB on multiple drives

2024-01-30 Thread hw
On Mon, 2024-01-29 at 23:53 +, Andy Smith wrote:
> Hi,
> 
> On Mon, Jan 29, 2024 at 05:28:56PM +0100, hw wrote:
> > On Sun, 2024-01-28 at 21:55 +, Andy Smith wrote:
> > > On Sun, Jan 28, 2024 at 09:09:17PM +0100, hw wrote:
> > > > On Sun, 2024-01-28 at 17:32 +, Andy Smith wrote:
> > > > > If someone DOES want a script option that solves that problem, a
> > > > > couple of actual working scripts were supplied in the link I gave to
> > > > > the earlier thread:
> > > > > 
> > > > > https://lists.debian.org/debian-user/2020/11/msg00455.html
> > > > > https://lists.debian.org/debian-user/2020/11/msg00458.html
> > > > 
> > > > Huh?  Isn't it simpler to use mdraid RAID1 to keep the UEFI partitions
> > > > in sync without extra scripts needed?
> > > 
> > > Could you read the first link above.
> > 
> > I did, and it doesn't explain why you would need a bunch of scripts.
> 
> I think you should read it again until you find the part where it
> clearly states what the problem is with using MD RAID for this. If
> you still can't find that part, there is likely to be a problem I
> can't assist with.

That there may be a problem doesn't automatically mean that you need a
bunch of scripts.



Re: Automatically installing GRUB on multiple drives

2024-01-30 Thread hw
On Mon, 2024-01-29 at 18:41 +0100, to...@tuxteam.de wrote:
> On Mon, Jan 29, 2024 at 05:52:38PM +0100, hw wrote:
> 
> [...]
> 
> > Ok in that case, hardware RAID is a requirement for machines with UEFI
> > BIOS since otherwise their reliability is insufficient.
> 
> The price you pay for hardware RAID is that you need a compatible controller
> if you take your disks elsewhere (e.g. because your controller dies).

How often do you take the system disks from one machine to another,
and how often will the RAID controller fail?

> With (Linux) software RAID you just need another Linux...

How's that supposed to help?  The machine still won't boot if the disk
with the UEFI partition has failed.  Look at Linux installers, like
the Debian installer or the Fedora installer.  Last time I used
either, none of them would automatically create or at least require
redundant UEFI partitions --- at least for instances when software
RAID is used --- to make it possible to boot when a disk has failed.
It's a very bad oversight.

Maybe the problem needs to be fixed in all the UEFI BIOSs.  I don't
think it'll happen, though.



Re: Automatically installing GRUB on multiple drives

2024-01-30 Thread Nicolas George
hw (12024-01-30):
> Yes, and how much effort and how reliable is doing that?

Very little effort and probably more reliable than hardware RAID with
closed-source hardware.

-- 
  Nicolas George



Re: Automatically installing GRUB on multiple drives

2024-01-30 Thread hw
On Mon, 2024-01-29 at 18:00 +0100, Nicolas George wrote:
> hw (12024-01-29):
> > Ok in that case, hardware RAID is a requirement for machines with UEFI
> 
> That is not true, you can still put the RAID in a partition and keep the
> boot partitions in sync manually or with scripts.

Yes, and how much effort and how reliable is doing that?

I didn't say it can't be done.



Re: Automatically installing GRUB on multiple drives

2024-01-29 Thread Andy Smith
Hi,

On Mon, Jan 29, 2024 at 05:28:56PM +0100, hw wrote:
> On Sun, 2024-01-28 at 21:55 +, Andy Smith wrote:
> > On Sun, Jan 28, 2024 at 09:09:17PM +0100, hw wrote:
> > > On Sun, 2024-01-28 at 17:32 +, Andy Smith wrote:
> > > > If someone DOES want a script option that solves that problem, a
> > > > couple of actual working scripts were supplied in the link I gave to
> > > > the earlier thread:
> > > > 
> > > > https://lists.debian.org/debian-user/2020/11/msg00455.html
> > > > https://lists.debian.org/debian-user/2020/11/msg00458.html
> > > 
> > > Huh?  Isn't it simpler to use mdraid RAID1 to keep the UEFI partitions
> > > in sync without extra scripts needed?
> > 
> > Could you read the first link above.
> 
> I did, and it doesn't explain why you would need a bunch of scripts.

I think you should read it again until you find the part where it
clearly states what the problem is with using MD RAID for this. If
you still can't find that part, there is likely to be a problem I
can't assist with.

Thanks,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: Automatically installing GRUB on multiple drives

2024-01-29 Thread tomas
On Mon, Jan 29, 2024 at 05:52:38PM +0100, hw wrote:

[...]

> Ok in that case, hardware RAID is a requirement for machines with UEFI
> BIOS since otherwise their reliability is insufficient.

The price you pay for hardware RAID is that you need a compatible controller
if you take your disks elsewhere (e.g. because your controller dies).

With (Linux) software RAID you just need another Linux...

Cheers
-- 
t


signature.asc
Description: PGP signature


Re: Automatically installing GRUB on multiple drives

2024-01-29 Thread Nicolas George
hw (12024-01-29):
> Ok in that case, hardware RAID is a requirement for machines with UEFI

That is not true, you can still put the RAID in a partition and keep the
boot partitions in sync manually or with scripts.

-- 
  Nicolas George



Re: Automatically installing GRUB on multiple drives

2024-01-29 Thread hw
On Mon, 2024-01-29 at 14:45 +0100, Franco Martelli wrote:
> On 28/01/24 at 17:17, hw wrote:
> > On Fri, 2024-01-26 at 16:57 +0100, Nicolas George wrote:
> > > hw (12024-01-26):
> > > > How do you make the BIOS read the EFI partition when it's on mdadm
> > > > RAID?
> > > 
> > > I have not yet tested but my working hypothesis is that the firmware
> > > will just ignore the RAID and read the EFI partition: with the scheme I
> > > described, the GPT points to the EFI partition and the EFI partition
> > > just contains the data.
> > > 
> > > Of course, it only works with RAID1, where the data on disk is the data
> > > in RAID.
> > 
> > Ok if Andy and you are right, you could reasonably boot machines with
> > an UEFI BIOS when using mdadm RAID :)
> 
> There is a sort of HOWTO [1] published in the Arch Linux wiki [2] but I 
> don't advise it because there are many things that could go wrong.
> 
> Cheers,
> 
> [1] https://outflux.net/blog/archives/2018/04/19/uefi-booting-and-raid1/
> [2] 
> https://wiki.archlinux.org/title/EFI_system_partition#ESP_on_software_RAID1

Ok in that case, hardware RAID is a requirement for machines with UEFI
BIOS since otherwise their reliability is insufficient.

I didn't plan on using hardware RAID for my next server, and now
things are getting way more complicated than they already are because
I can't just keep using the disks from my current one :(  Hmm ...

But I'm glad that I looked into this.



Re: Automatically installing GRUB on multiple drives

2024-01-29 Thread hw
On Sun, 2024-01-28 at 21:55 +, Andy Smith wrote:
> Hello,
> 
> On Sun, Jan 28, 2024 at 09:09:17PM +0100, hw wrote:
> > On Sun, 2024-01-28 at 17:32 +, Andy Smith wrote:
> > > If someone DOES want a script option that solves that problem, a
> > > couple of actual working scripts were supplied in the link I gave to
> > > the earlier thread:
> > > 
> > > https://lists.debian.org/debian-user/2020/11/msg00455.html
> > > https://lists.debian.org/debian-user/2020/11/msg00458.html
> > 
> > Huh?  Isn't it simpler to use mdraid RAID1 to keep the UEFI partitions
> > in sync without extra scripts needed?
> 
> Could you read the first link above.

I did, and it doesn't explain why you would need a bunch of scripts.



Re: Automatically installing GRUB on multiple drives

2024-01-29 Thread Franco Martelli

On 28/01/24 at 17:17, hw wrote:

On Fri, 2024-01-26 at 16:57 +0100, Nicolas George wrote:

hw (12024-01-26):

How do you make the BIOS read the EFI partition when it's on mdadm
RAID?


I have not yet tested but my working hypothesis is that the firmware
will just ignore the RAID and read the EFI partition: with the scheme I
described, the GPT points to the EFI partition and the EFI partition
just contains the data.

Of course, it only works with RAID1, where the data on disk is the data
in RAID.


Ok if Andy and you are right, you could reasonably boot machines with
an UEFI BIOS when using mdadm RAID :)


There is a sort of HOWTO [1] published in the Arch Linux wiki [2] but I 
don't advise it because there are many things that could go wrong.


Cheers,

[1] https://outflux.net/blog/archives/2018/04/19/uefi-booting-and-raid1/
[2] 
https://wiki.archlinux.org/title/EFI_system_partition#ESP_on_software_RAID1


--
Franco Martelli



Re: Automatically installing GRUB on multiple drives

2024-01-28 Thread Andy Smith
Hello,

On Sun, Jan 28, 2024 at 09:09:17PM +0100, hw wrote:
> On Sun, 2024-01-28 at 17:32 +, Andy Smith wrote:
> > If someone DOES want a script option that solves that problem, a
> > couple of actual working scripts were supplied in the link I gave to
> > the earlier thread:
> > 
> > https://lists.debian.org/debian-user/2020/11/msg00455.html
> > https://lists.debian.org/debian-user/2020/11/msg00458.html
> 
> Huh?  Isn't it simpler to use mdraid RAID1 to keep the UEFI partitions
> in sync without extra scripts needed?

Could you read the first link above.

Thanks,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: Automatically installing GRUB on multiple drives

2024-01-28 Thread Andy Smith
Hello,

On Sun, Jan 28, 2024 at 09:03:50PM +0100, hw wrote:
> Show me any installer for Linux distributions that handles this
> sufficiently without further ado.

That was the question I posed several posts back: what do people do
for redundant ESP.

> When you don't use btrfs, you have either hardware RAID or
> mdraid.

…or zfs or bcachefs or no redundancy at all…

> With mdadm RAID, it isn't much better than with btrfs since
> without further ado, you still don't have redundant UEFI
> partitions.

As mentioned, people do try it, and we don't yet have any reports
of catastrophe.

I'm not sure I am brave enough though.

> With btrfs and mdadm RAID, it's basically worse because you have
> to deploy another variant of software RAID in addition to the
> software built into btrfs.

I see, so this is basically a philosophical objection. You already
have software that provides redundancy (btrfs), but since UEFI
firmware can't read it and insists that ESP be vfat, it would mean
providing redundancy another way. This need to have two means of
redundancy is an affront to you.

In practical terms, having md driver just for a small ESP array is
not going to be a big deal, but just the idea of configuring this
extra form of redundancy, having that extra driver loaded etc., is
unpleasant.

> So at least for boot disks, I'll go for hardware RAID whenever
> possible, especially with btrfs, until this problem is fixed.  Or do
> you have a better option?

Right, so your answer is hardware RAID. If you're happy with that,
that's great, but I've left hardware RAID behind nearly ten years
ago and this issue isn't enough for me to welcome it back. Though it
leaves a bad taste, I still would rather go with either syncing ESPs
by script or putting the ESP in MD RAID and hoping the firmware never
writes to it.

I just wondered if there were more options (yet).

Thanks,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: Automatically installing GRUB on multiple drives

2024-01-28 Thread hw
On Sun, 2024-01-28 at 17:32 +, Andy Smith wrote:
> Hi,
> 
> Keeping all this context because I don't actually see how the
> response matches the context and so I might have missed something…
> 
> On Sun, Jan 28, 2024 at 11:54:05AM -0500, Dan Ritter wrote:
> > hw wrote: 
> > > How is btrfs going to deal with this problem when using RAID?  Require
> > > hardware RAID?
> > > 
> > > Having to add mdadm RAID to a setup that uses btrfs just to keep efi
> > > partitions in sync would suck.
> > 
> > You can add hooks to update-initramfs or update-grub.
> > 
> > To a first approximation:
> > 
> > firstbootpart = wwn-0x5006942feedbee1-part1
> > extrabootparts = wwn-0x5004269deafbead-part1\
> >  wwn-0x5001234adefabe-part1 \
> >  wwn-0x5005432faebeeda-part1
> > 
> > for eachpart in $extrabootparts ; \
> > do cp /dev/disk/by-id/$firstbootpart /dev/disk/by-id/$eachpart; done
> 
> I realise that the above is pseudocode, but I have some issues with
> it, namely:
> 
> a) I don't see what this has to do with btrfs, the subject of the
>message you are replying to. Then again, I also did not see what
>btrfs had to do with the thing that IT was replying to, so
>possibly I am very confused.
> 
> b) My best interpretation of your message is that it solves the "how
>to keep ESPs in sync" question, but if it is intended to do that
>then you may as well have just said "just keep the ESPs in sync",
>because what you wrote is literally something like:
> 
>cp /dev/disk/by-id/wwn-0x5002538d425560a4-part1 
> /dev/disk/by-id/wwn-0x5002538d425560b5-part1
> 
>which …is rather like a "now draw the rest of the owl" sort of
>response given that it doesn't literally work and most of the job
>is in reworking that line of pseudocode into something that will
>actually work.
> 
> If someone DOES want a script option that solves that problem, a
> couple of actual working scripts were supplied in the link I gave to
> the earlier thread:
> 
> https://lists.debian.org/debian-user/2020/11/msg00455.html
> https://lists.debian.org/debian-user/2020/11/msg00458.html

Huh?  Isn't it simpler to use mdraid RAID1 to keep the UEFI partitions
in sync without extra scripts needed?

(I don't like that option, but it seems like an option when you have
no hardware RAID.)



Re: Automatically installing GRUB on multiple drives

2024-01-28 Thread hw
On Sun, 2024-01-28 at 16:46 +, Andy Smith wrote:
> Hi,
> 
> On Sun, Jan 28, 2024 at 05:17:14PM +0100, hw wrote:
> > Ok if Andy and you are right, you could reasonably boot machines with
> > an UEFI BIOS when using mdadm RAID :)
> 
> I've been doing it for more than two decades, though not with UEFI.
> 
> > How is btrfs going to deal with this problem when using RAID?  Require
> > hardware RAID?
> > 
> > Having to add mdadm RAID to a setup that uses btrfs just to keep efi
> > partitions in sync would suck.
> 
ESPs have to be vfat, so why are you bringing up btrfs?
> 
> If you want to use btrfs, use btrfs. UEFI firmware isn't going to
> care as long as your ESP is not inside that.

It's easy to boot from btrfs software RAID without further ado.  These
nasty and annoying UEFI partitions get in the way of that since, when
you have several, they are not kept in sync with each other without
further ado.

That easily leads to situations in which you can't boot after a disk
has failed despite having RAID.  That is something that must not
happen; it defeats the RAID.  It's bad enough when you have access to
the machine, and it's a total nightmare when you don't, because you'll
have to somehow go there to fix it.  If the disk holding the UEFI
partition has failed and there's no redundancy that's at least
sufficiently in sync, it's even worse.

Show me any installer for Linux distributions that handles this
sufficiently without further ado.

When you don't use btrfs, you have either hardware RAID or
mdraid.  With hardware RAID, the problem doesn't come up.  With mdadm
RAID, it isn't much better than with btrfs since, without further ado,
you still don't have redundant UEFI partitions.  With btrfs and mdadm
RAID, it's basically worse because you have to deploy another variant
of software RAID in addition to the software built into btrfs.

So at least for boot disks, I'll go for hardware RAID whenever
possible, especially with btrfs, until this problem is fixed.  Or do
you have a better option?



Re: Automatically installing GRUB on multiple drives

2024-01-28 Thread Andy Smith
Hi,

Keeping all this context because I don't actually see how the
response matches the context and so I might have missed something…

On Sun, Jan 28, 2024 at 11:54:05AM -0500, Dan Ritter wrote:
> hw wrote: 
> > How is btrfs going to deal with this problem when using RAID?  Require
> > hardware RAID?
> > 
> > Having to add mdadm RAID to a setup that uses btrfs just to keep efi
> > partitions in sync would suck.
> 
> You can add hooks to update-initramfs or update-grub.
> 
> To a first approximation:
> 
> firstbootpart = wwn-0x5006942feedbee1-part1
> extrabootparts = wwn-0x5004269deafbead-part1\
>  wwn-0x5001234adefabe-part1 \
>  wwn-0x5005432faebeeda-part1
> 
> for eachpart in $extrabootparts ; \
>   do cp /dev/disk/by-id/$firstbootpart /dev/disk/by-id/$eachpart; done

I realise that the above is pseudocode, but I have some issues with
it, namely:

a) I don't see what this has to do with btrfs, the subject of the
   message you are replying to. Then again, I also did not see what
   btrfs had to do with the thing that IT was replying to, so
   possibly I am very confused.

b) My best interpretation of your message is that it solves the "how
   to keep ESPs in sync" question, but if it is intended to do that
   then you may as well have just said "just keep the ESPs in sync",
   because what you wrote is literally something like:

   cp /dev/disk/by-id/wwn-0x5002538d425560a4-part1 
/dev/disk/by-id/wwn-0x5002538d425560b5-part1

   which …is rather like a "now draw the rest of the owl" sort of
   response given that it doesn't literally work and most of the job
   is in reworking that line of pseudocode into something that will
   actually work.

If someone DOES want a script option that solves that problem, a
couple of actual working scripts were supplied in the link I gave to
the earlier thread:

https://lists.debian.org/debian-user/2020/11/msg00455.html
https://lists.debian.org/debian-user/2020/11/msg00458.html

Thanks,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: Automatically installing GRUB on multiple drives

2024-01-28 Thread Dan Ritter
hw wrote: 
> How is btrfs going to deal with this problem when using RAID?  Require
> hardware RAID?
> 
> Having to add mdadm RAID to a setup that uses btrfs just to keep efi
> partitions in sync would suck.


You can add hooks to update-initramfs or update-grub.

To a first approximation:

firstbootpart = wwn-0x5006942feedbee1-part1
extrabootparts = wwn-0x5004269deafbead-part1\
 wwn-0x5001234adefabe-part1 \
 wwn-0x5005432faebeeda-part1

for eachpart in $extrabootparts ; \
do cp /dev/disk/by-id/$firstbootpart /dev/disk/by-id/$eachpart; done

You'll need to provide suitable values for the partitions, and
remember to fix this when you change disks for any reason.

And test it, because I have not even run it once.

-dsr-
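
For what it's worth, a runnable version of that idea might look like the
following; the by-id names are Dan's placeholders, both ESPs must be the
same size, and the script could be run from a hook such as
/etc/initramfs/post-update.d/:

#!/bin/sh
first=/dev/disk/by-id/wwn-0x5006942feedbee1-part1
extras="/dev/disk/by-id/wwn-0x5004269deafbead-part1
        /dev/disk/by-id/wwn-0x5001234adefabe-part1
        /dev/disk/by-id/wwn-0x5005432faebeeda-part1"

for part in $extras; do
    # raw block copy of the whole ESP onto each extra partition
    dd if="$first" of="$part" bs=1M conv=fsync
done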



Re: Automatically installing GRUB on multiple drives

2024-01-28 Thread Andy Smith
Hi,

On Sun, Jan 28, 2024 at 05:17:14PM +0100, hw wrote:
> Ok if Andy and you are right, you could reasonably boot machines with
> an UEFI BIOS when using mdadm RAID :)

I've been doing it for more than two decades, though not with UEFI.

> How is btrfs going to deal with this problem when using RAID?  Require
> hardware RAID?
> 
> Having to add mdadm RAID to a setup that uses btrfs just to keep efi
> partitions in sync would suck.

ESPs have to be vfat, so why are you bringing up btrfs?

If you want to use btrfs, use btrfs. UEFI firmware isn't going to
care as long as your ESP is not inside that.

Thanks,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: Automatically installing GRUB on multiple drives

2024-01-28 Thread hw
On Fri, 2024-01-26 at 16:57 +0100, Nicolas George wrote:
> hw (12024-01-26):
> > How do you make the BIOS read the EFI partition when it's on mdadm
> > RAID?
> 
> I have not yet tested but my working hypothesis is that the firmware
> will just ignore the RAID and read the EFI partition: with the scheme I
> described, the GPT points to the EFI partition and the EFI partition
> just contains the data.
> 
> Of course, it only works with RAID1, where the data on disk is the data
> in RAID.

Ok if Andy and you are right, you could reasonably boot machines with
an UEFI BIOS when using mdadm RAID :)

How is btrfs going to deal with this problem when using RAID?  Require
hardware RAID?

Having to add mdadm RAID to a setup that uses btrfs just to keep efi
partitions in sync would suck.



Re: Automatically installing GRUB on multiple drives

2024-01-26 Thread Nicolas George
hw (12024-01-26):
> How do you make the BIOS read the EFI partition when it's on mdadm
> RAID?

I have not yet tested but my working hypothesis is that the firmware
will just ignore the RAID and read the EFI partition: with the scheme I
described, the GPT points to the EFI partition and the EFI partition
just contains the data.

Of course, it only works with RAID1, where the data on disk is the data
in RAID.

Regards,

-- 
  Nicolas George



Re: Automatically installing GRUB on multiple drives

2024-01-26 Thread Andy Smith
Hello,

On Fri, Jan 26, 2024 at 04:50:00PM +0100, hw wrote:
> How do you make the BIOS read the EFI partition when it's on mdadm
> RAID?

If the MD superblock is in a part of the device not used by the
filesystem (e.g. the end) and it is RAID-1, each member device is
indistinguishable from a FAT filesystem without RAID for naive software
in read-only mode. This is also how GRUB booted from MD RAID-1 before
GRUB understood MD RAID.
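
A sketch of setting that up (metadata format 1.0 keeps the superblock at
the end of the members, so the firmware sees what looks like a plain FAT
partition; device names are placeholders):

mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sdX1 /dev/sdY1
mkfs.fat -F 32 /dev/md0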

Thanks,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: Automatically installing GRUB on multiple drives

2024-01-26 Thread hw
On Wed, 2024-01-24 at 21:05 +0100, Nicolas George wrote:
> [...]
> GPT
>  ├─EFI
>  └─RAID
> └─LVM (of course)
> 
> Now, thanks to you, I know I can do:
> 
> GPT   
>  ┊  RAID
>  └───┤
>  ├─EFI
>  └─LVM
> 
> It is rather ugly to have the same device be both a RAID with its
> superblock in the hole between GPT and first partition and the GPT in
> the hole before the RAID superblock, but it serves its purpose: the EFI
> partition is kept in sync over all devices.
> 
> It still requires setting the non-volatile variables, though.

How do you make the BIOS read the EFI partition when it's on mdadm
RAID?

It seems you have to have an EFI partition directly, outside software
RAID, on each storage device, and that indeed raises the question of how
you keep them up to date so you can still boot when a disk has failed.
It's a nasty problem.

I use hardware RAID to avoid this problem ...



Re: Automatically installing GRUB on multiple drives

2024-01-26 Thread Andy Smith
Hello,

On Fri, Jan 26, 2024 at 08:40:42AM -0500, gene heskett wrote:
> On 1/26/24 08:19, Tim Woodall wrote:
> > Hardware raid that the bios cannot subvert is obviously one solution.
> > 
> Is nearly the only solution,

If the problem to be solved is defined as redundancy for the ESP,
there are a bunch of solutions as already discussed. All of them
come with upsides and downsides. The downsides of hardware RAID for
this, for me, are too big.

> [hardware RAID] needs to have a hard specified format that
> guarantees 100% compatibility across all makers

If that happened, mdadm could support it, and then I would continue
to use mdadm. In fact it already has happened, in that Intel came up
with a standard for its "fake RAID" data layout and mdadm does
support it already. But of course, none of the other vendors of
hardware RAID took that on.


https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/rst-linux-paper.pdf

It has also been pointed out that there is no technical reason why
EFI firmware can't support MD RAID, since MD is open source.

But on the whole, we can't wait around for any of that to happen.

> full intentions of locking the customer to only their product.

There was a time when hardware RAID was really the only game in
town, and the ability it gave to lock in the customer was just the
cost of doing business.

That time has passed, but I don't think the UEFI firmware developers
are interested in helping out.

Thanks,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: Automatically installing GRUB on multiple drives

2024-01-26 Thread Andy Smith
Hello,

On Fri, Jan 26, 2024 at 01:18:53PM +, Tim Woodall wrote:
> Hardware raid that the bios cannot subvert is obviously one solution.

These days the different trade-offs for HW RAID are IMHO worse. I
left it behind in 2014 and don't intend to go back. 

Thanks,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: Automatically installing GRUB on multiple drives

2024-01-26 Thread gene heskett

On 1/26/24 08:19, Tim Woodall wrote:

On Fri, 26 Jan 2024, Nicolas George wrote:



Now that I think a little more, this concern is not only unconfirmed,
it is rather absurd. The firmware would never write in parts of the
drive that might contain data.


UEFI understands the EFI system filesystem so it can "safely" write new
files there.

The danger then is that a write via mdadm corrupts the filesystem. I'm
not sure if mdadm will detect the inconsistent data or assume both
sources are the same.

Hardware raid that the bios cannot subvert is obviously one solution.

Is nearly the only solution, but it needs to have a hard-specified 
format that guarantees 100% compatibility across all makers, or they 
cannot use the word RAID in their advertising. I am sick of proprietary 
makers doing a job that subverts the method with the full intention of 
locking the customer to only their product.  Let there be competition 
based on the quality of their product.



Cheers, Gene Heskett.
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author, 1940)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis



Re: Automatically installing GRUB on multiple drives

2024-01-26 Thread Tim Woodall

On Fri, 26 Jan 2024, Tim Woodall wrote:


On Fri, 26 Jan 2024, Nicolas George wrote:



Now that I think a little more, this concern is not only unconfirmed,
it is rather absurd. The firmware would never write in parts of the
drive that might contain data.


UEFI understands the EFI system filesystem so it can "safely" write new
files there.

The danger then is that a write via mdadm corrupts the filesystem. I'm
not sure if mdadm will detect the inconsistent data or assume both
sources are the same.

Hardware raid that the bios cannot subvert is obviously one solution.



https://stackoverflow.com/questions/32324109/can-i-write-on-my-local-filesystem-using-efi



Re: Automatically installing GRUB on multiple drives

2024-01-26 Thread Tim Woodall

On Fri, 26 Jan 2024, Nicolas George wrote:



Now that I think a little more, this concern is not only unconfirmed,
it is rather absurd. The firmware would never write in parts of the
drive that might contain data.


UEFI understands the EFI system filesystem so it can "safely" write new
files there.

The danger then is that a write via mdadm corrupts the filesystem. I'm
not sure if mdadm will detect the inconsistent data or assume both
sources are the same.

Hardware raid that the bios cannot subvert is obviously one solution.



Re: Automatically installing GRUB on multiple drives

2024-01-26 Thread Thomas Schmitt
Hi,

Nicolas George wrote:
> You seem to be assuming that the system will first check sector 0 to
> parse the MBR and then, if the MBR declares a GPT sector try to use the
> GPT.

That's what the UEFI specs prescribe. GPT is defined by UEFI-2.8 in
chapter 5 "GUID Partition Table (GPT) Disk Layout". Especially:

  5.2.3 Protective MBR
  For a bootable disk, a Protective MBR must be located at LBA 0 (i.e.,
  the first logical block) of the disk if it is using the GPT disk layout.
  The Protective MBR precedes the GUID Partition Table Header to maintain
  compatibility with existing tools that do not understand GPT partition
  structures.


> I think it is the other way around on modern systems: it will first
> check sector 1 for a GPT header, and only if it fails check sector 0.

Given the creativity of firmware programmers, this is not impossible.
At least the programmers of older versions of OVMF took the presence of
a GPT header as reason to boot, whereas without it did not boot.
Meanwhile this demand for GPT debris has vanished and OVMF boots from
media with only MBR partitions, too.


I wrote:
> > This layout [used by Debian installation ISOs] was invented by Matthew
> > J. Garrett for Fedora

> I think I invented independently something similar.
> https://www.normalesup.org/~george/comp/live_iso_usb/grub_hybrid.html

Not to forget Vladimir Serbinenko who specified how a grub-mkrescue ISO
shall present its lures for BIOS and EFI on optical media and USB stick.
The ISO has a pure GPT partition table, where the ISO filesystem is not
mountable as partition but only by the base device (like /dev/sdc) or
possibly by its HFS+ directory tree via the Apple Partition Map, if
present.

(To create such an ISO, install grub-common, grub-efi-amd64, grub-efi-ia32,
and grub-pc. Then run grub-mkrescue with some dummy directory as payload.
The ISO will boot to a GRUB prompt.)
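
For instance (names arbitrary):

mkdir -p dummy
grub-mkrescue -o rescue.iso dummy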


Have a nice day :)

Thomas



Re: Automatically installing GRUB on multiple drives

2024-01-26 Thread Andy Smith
Hello,

On Fri, Jan 26, 2024 at 10:09:53AM +0100, Nicolas George wrote:
> Andy Smith (12024-01-26):
> > The "firmware may write to it" thing was raised as a concern by a
> > few people,but always a theoretical one from what I could see.
> 
> Now that I think a little more, this concern is not only unconfirmed,
> it is rather absurd. The firmware would never write in parts of the
> drive that might contain data.

I suppose my concern with that is that a firmware developer might
feel justified in poking about in the ESP, which they might consider
is there "for them".

I have seen quite a few first hand reports of motherboard firmware
that writes empty GPT when it sees a drive with no GPT, which I had
previously considered unthinkable, so I do worry about trusting in
the firmware developers.

Thanks,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: Automatically installing GRUB on multiple drives

2024-01-26 Thread Thomas Schmitt
Hi,

I hate to put the benefit of my proposal in question, but:

Nicolas George wrote:
> The firmware would never write in parts of the
> drive that might contain data.

See

  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1056998
  "cdrom: Installation media changes after booting it"

Two occasions were shown in this bug where the EFI system partition of
a Debian installation ISO on USB stick changed. One was caused by a
Microsoft operating system, writing a file named WPSettings.dat. But the
other was from Lenovo firmware writing /efi/Lenovo/BIOS/SelfHealing.fd .

One may doubt that the success of these operations is desirable at all.
The ISO was also tested with a not-anymore-writable DVD. In that case the
Lenovo firmware did not raise protest over the fact that it was not
possible to write to the EFI partition.


Have a nice day :)

Thomas



  1   2   3   4   5   6   7   8   9   10   >