Bug#978595: #978595 is looking higher priority

2023-08-19 Thread Paul Leiber
On Thu, 17 Aug 2023 15:29:29 -0700 Elliott Mitchell wrote:

On Tue, Jul 04, 2023 at 11:56:39PM +0300, Michael Tokarev wrote:
> Out of curiosity, what value is it to boot a xen domU (or qemu) guest
> in uefi mode? I mean, bios mode is still recommended for at least
> commercial virt solutions such as vmware, and it works significantly
> faster in qemu and xen too.  What's more, qemu ships a minimal bios
> (qboot) to eliminate all boot-time cruft which is not needed in a vm
> most of the time.

First, the known high value portion of #978595 is getting
ArmVirtPkg/ArmVirtXen.dsc built and packaged.  This results in a
XEN_EFI.fd file.  As such the presently verified value only applies to
ARM.

What you do with XEN_EFI.fd is you configure an ARM domain with
'kernel = "${edk2_install_dir}/XEN_EFI.fd"'

The resultant domain has no extra daemons emulating hardware.  Inside the
domain, Tianocore/EDK2 will search via its normal means for a boot.efi
file and load that if it can.
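
For illustration, a minimal ARM guest config along these lines might
look as follows. This is a sketch; everything except the kernel= line
is an assumption (names, paths, sizes), not taken from this report:

name   = "efi-guest"
memory = 1024
vcpus  = 2
# path depends on where the package installs XEN_EFI.fd
kernel = "/usr/lib/xen/boot/XEN_EFI.fd"
disk   = [ 'phy:/dev/vg0/efi-guest,xvda,w' ]
vif    = [ 'bridge=xenbr0' ]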

This is similar to PyGRUB versus PvGRUB.  If the OS being loaded has
native Xen drivers, you've gotten rid of the Qemu process hanging around
in domain 0 providing security holes.

So far this is reliably booting the WIP FreeBSD/arm64.  I imagine this
could also load GRUB.

I believe OvmfPkg/OvmfXen.dsc aims to be something similar for x86, but
I've yet to achieve results from that.  My hope is this could load
FreeBSD/x86 in a PVH domain.
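
For anyone wanting to experiment, here is an untested sketch of
building OvmfXen from upstream edk2 (the standard edk2 build
procedure; toolchain tag and output path may differ on your system):

git clone https://github.com/tianocore/edk2.git
cd edk2
git submodule update --init
make -C BaseTools
. edksetup.sh
build -a X64 -t GCC5 -b RELEASE -p OvmfPkg/OvmfXen.dsc
# the firmware image should land under Build/OvmfXen/RELEASE_GCC5/FV/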

On Tue, Jul 04, 2023 at 10:30:34PM +0200, Paul Leiber wrote:
> As the Windows systems are not usable anymore, Xen is significantly
> reduced in functionality after the upgrade. Is this existing bug report
> the right place to file this, or should I open a new bug report? If this
> bug report is the right place, its priority should indeed be raised, at
> least to important (linux PVH DomUs are still working fine). If I should
> open a new bug report, for which package?

New report.  The topic of #978595 is my hope that some other build
types of EDK2/Tianocore would be built and packaged.  What you're
describing is a regression, and certainly not merely a wishlist
packaging request.

I'm unsure, but at first thought this would be src:xen.  On that note a
FreeBSD VM I've got has been having difficulty since the 4.14 -> 4.17
upgrade.  I'm still fighting other upgrade issues right now.

Some portions of the EDK2/Tianocore packaging look suspicious, so I
wouldn't be surprised if the failure was there.
As a test with a previous version of the ovmf package (version
2020.11-2+deb11u1, with which the DomU booted normally) pointed to the
bug being in ovmf, I filed the bug there:


https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1050030

Thanks for the assistance!

Paul



Bug#1050030: Potential regression: ovmf package update from Debian bookworm breaks Xen HVM boot

2023-08-18 Thread Paul Leiber

Package: ovmf
Version: 2022.11-6
Severity: important

Dear Maintainer,

After upgrading from Debian Bullseye to Debian Bookworm, existing HVM 
DomUs using "firmware = 'ovmf'" (specifically Windows Server 2022 and 
Windows 10) can't boot anymore. Booting these systems leads to the 
Windows error 0xc225. I wasn't able to fix this error. Booting an 
installation .iso leads to the same error. Booting the installation 
media with "firmware = 'bios'" leads to a normal boot.


This seems to be the effect of a change in the ovmf sources, where a
Xen-specific platform was created in 2019:
https://lore.kernel.org/all/20190813113119.14804-1-anthony.per...@citrix.com/#t


With some help I found a workaround. I could successfully start the 
Windows DomU with ovmf firmware after removing the current ovmf package 
and installing a previous ovmf package from Debian repositories, 
specifically version 2020.11-2+deb11u1. This strongly indicates that the 
cause of this issue lies in the ovmf package.
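
For reference, a rough sketch of that test (the old .deb has to be
fetched manually, e.g. from snapshot.debian.org; exact URL omitted):

sudo apt remove ovmf
sudo dpkg -i ovmf_2020.11-2+deb11u1_all.deb
sudo apt-mark hold ovmf   # keep apt from pulling the new version back in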


My HVM config file:

type = "hvm"
memory = 6144
vcpus = 2
name = "kalliope"
firmware = 'ovmf'
firmware = '/usr/local/lib/ovmf/OVMF.fd'
vif = ['bridge=xenbr0,mac=XX:XX:XX:XX:XX:XX,ip=10.0.0.4']
disk = ['phy:/dev/vg0/matrix,hda,w']
device_model_version = 'qemu-xen'
hdtype = 'ahci'
pae = 1
acpi = 1
apic = 1
vga = 'stdvga'
videoram = 16
xen_platform_pci = 1
vendor_device = 'xenserver'
viridian = 1
on_crash = 'restart'
device_model_args_hvm = [
  # Debug OVMF
  '-chardev', 'file,id=debugcon,path=/var/log/xen/ovmf.log,',
  '-device', 'isa-debugcon,iobase=0x402,chardev=debugcon',
]
sdl = 0
serial = 'pty'
vnc = 1
vnclisten = "0.0.0.0"
vncpasswd = ""

As the Windows systems are not usable anymore, Xen is significantly 
reduced in functionality after the upgrade.


There is a Debian bug report which could be related; I also described my 
situation there: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=978595


This (or at least a very similar) issue seems to be fixed in Red 
Hat/Fedora. A related bug report exists at 
https://bugzilla.redhat.com/show_bug.cgi?id=2170930



Additional information:

I also tried building Ovmf following 
https://lore.kernel.org/all/20190813113119.14804-1-anthony.per...@citrix.com/#t, 
but I wasn't fully able to create a working system:


(1) Using the resulting OVMF.fd from the build process with "firmware = 
'/path/to/new/OVMF.fd'" consistently led to a black screen in VNC or 
Spice with the text "Guest has not initialized the display (yet)".


(2) Replacing the OVMF.fd in /var/lib/ovmf with the freshly built 
OVMF.fd led to a slight improvement. I could then boot the Windows 
Server installation .iso, but booting the Windows 10 installation .iso 
led to a crash where the TianoCore logo was visible, but nothing 
happened. The two existing DomUs were still not bootable.


However, I am not sure that I followed the procedure correctly; I might 
very well have done something wrong. Any pointers are welcome.


Thanks,

Paul

-- System Information:
Debian Release: 12.1
  APT prefers stable-security
  APT policy: (500, 'stable-security'), (500, 'stable')
Architecture: amd64 (x86_64)

Kernel: Linux 6.1.0-11-amd64 (SMP w/2 CPU threads; PREEMPT)
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8), 
LANGUAGE=en_US:en

Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled

-- no debconf information



Bug#978595: #978595 is looking higher priority

2023-07-08 Thread Paul Leiber

On Tue, 4 Jul 2023 23:56:39 +0300 Michael Tokarev wrote:

Out of curiosity, what value is it to boot a xen domU (or qemu) guest in 
uefi mode? I mean, bios mode is still recommended for at least 
commercial virt solutions such as vmware, and it works significantly 
faster in qemu and xen too.  What's more, qemu ships a minimal bios 
(qboot) to eliminate all boot-time cruft which is not needed in a vm 
most of the time.


I personally didn't really give much thought to the choice of bootloader 
when setting up the DomU. If I had known back then what Michael has 
written now, I probably would have chosen BIOS instead. Perhaps this 
information about the advantages of BIOS bootloaders is something that 
should be added to the Xen wiki.


AFAIU, things will change with Windows 11 VMs, as Windows 11 requires 
UEFI, unless you use some hacks. I use future tense, because Xen still 
lacks TPM 2.0 support.


In the end, it shouldn't really matter IMHO: since UEFI/Ovmf is boot 
firmware officially supported by Xen, and since it worked in Debian 
until the upgrade to Bookworm, I think the general expectation is that 
it should continue to work in Debian Bookworm. The technical solution 
seems to be at hand (see the closed bug reports for Red Hat and 
Fedora), at least for somebody who knows what he/she is doing (meaning 
that I often feel like I'm poking at a nuclear reactor with sticks :-).


Paul



Bug#978595: #978595 is looking higher priority

2023-07-04 Thread Paul Leiber
On Thu, 30 Sep 2021 15:00:45 -0700 Elliott Mitchell wrote:

Being able to load other OSes in PVH would be nice.  Having loading of
other OSes in both HVM and PVH broken would be a Problem(tm).


It seems that I have run into this Problem(tm) after upgrading from 
Debian Bullseye to Debian Bookworm. With standard packages, existing HVM 
DomUs using "firmware = 'ovmf'" (Windows Server 2022 and Windows 10) 
can't boot anymore. Booting these systems leads to the Windows error 
0xc225. I wasn't able to fix this error. Booting an installation 
.iso leads to the same error. Booting the installation media with 
"firmware = 'bios'" leads to a normal boot.


I tried building Ovmf following 
https://lore.kernel.org/all/20190813113119.14804-1-anthony.per...@citrix.com/#t, 
but I wasn't fully able to create a working system:


(1) Using the resulting OVMF.fd from the build process with "firmware = 
'/path/to/new/OVMF.fd'" consistently led to a black screen in VNC or 
Spice with the text "Guest has not initialized the display (yet)".


(2) Replacing the OVMF.fd in /var/lib/ovmf with the freshly built 
OVMF.fd led to a slight improvement. I could then boot the Windows 
Server installation .iso, but booting the Windows 10 installation .iso 
led to a crash where the TianoCore logo was visible, but nothing 
happened. The two existing DomUs were still not bootable. When trying 
to boot any of them, the error "FATAL ERROR - RaiseTpl with 
OldTpl(0x1F) > NewTpl(0x10)" appears in the Ovmf log.


However, I am not sure that I followed the procedure correctly; I might 
very well have done something wrong. Any pointers are welcome.


As the Windows systems are not usable anymore, Xen is significantly 
reduced in functionality after the upgrade. Is this existing bug report 
the right place to file this, or should I open a new bug report? If this 
bug report is the right place, its priority should indeed be raised, at 
least to important (linux PVH DomUs are still working fine). If I should 
open a new bug report, for which package?


Sidenote: A related bug report exists at 
https://bugzilla.redhat.com/show_bug.cgi?id=2170930


Paul



Bug#1038247: ddclient detects correct IPv6 address, but fails updating it

2023-06-16 Thread Paul Leiber

Package: ddclient
Version: 3.10.0-2
Severity: important
Tags: ipv6

Dear Maintainer,

ddclient doesn't update my IPv6 address. It doesn't matter which option 
I use for detecting the IPv6 address (I tried ifv6 and webv6): both 
detect the correct IPv6 address, but it doesn't get passed on to the 
DynDNS provider. IPv4 seems to be working normally.


In verbose output, a line seems to indicate an error at line 2160 of 
/usr/bin/ddclient (the same line is missing in a working setup):


---snip---
DEBUG:nic_dyndns2_update ---
Use of uninitialized value $_[0] in sprintf at /usr/bin/ddclient line 2160.
INFO: setting IP address to  for xxx
UPDATE:   updating xxx
DEBUG:proxy= 
DEBUG:protocol = http
DEBUG:server   = dyndns.strato.com
DEBUG:url  = nic/update?
DEBUG:ip ver   =
CONNECT:  dyndns.strato.com
CONNECTED:  using HTTP
SENDING:  GET /nic/update?system=dyndns&hostname=xxx&myip= HTTP/1.1
---snip---

The last entry seems to indicate that no IP is sent. In a working 
configuration, an IP is added to the line after myip=.
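
For comparison, a healthy dyndns2 update line looks roughly like this 
(hostname and address are placeholders):

SENDING:  GET /nic/update?system=dyndns&hostname=myhost.example.net&myip=2001:db8::1 HTTP/1.1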


When replacing /usr/bin/ddclient with the current version of ddclient.in 
from https://github.com/ddclient/ddclient, ddclient seems to be working 
correctly (after adjusting the version check and the config location).
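
For anyone wanting to try the same, a rough sketch of those steps 
(ddclient.in is a template, so the placeholder names below are 
assumptions and may differ between versions):

git clone https://github.com/ddclient/ddclient.git
cd ddclient
# substitute the build-time placeholders by hand before installing
sed -e 's/@PACKAGE_VERSION@/3.10.0/' \
    -e 's|@sysconfdir@|/etc|g' ddclient.in > ddclient
sudo install -m 0755 ddclient /usr/bin/ddclient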


My ddclient.conf:

protocol=dyndns2
use=no
usev6=if, ifv6=enX0
login=xxx
password='xxx'
server=dyndns.strato.com
xxx

Best regards,

Paul

-- System Information:
Debian Release: 12.0
  APT prefers stable-security
  APT policy: (500, 'stable-security'), (500, 'stable')
Architecture: amd64 (x86_64)

Kernel: Linux 6.1.0-9-amd64 (SMP w/1 CPU thread; PREEMPT)
Locale: LANG=C, LC_CTYPE=C.UTF-8 (charmap=UTF-8), LANGUAGE not set
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled

Versions of packages ddclient depends on:
ii  debconf [debconf-2.0]  1.5.82
ii  init-system-helpers1.65.2
ii  perl   5.36.0-7

Versions of packages ddclient recommends:
ii  iproute2 6.1.0-3
pn  libdigest-sha-perl   
ii  libio-socket-inet6-perl  2.73-1
ii  libio-socket-ssl-perl2.081-2
ii  perl [libjson-pp-perl]   5.36.0-7

ddclient suggests no packages.

-- debconf information:
  ddclient/password-repeat: (password omitted)
  ddclient/password: (password omitted)
  ddclient/run_mode: As a daemon
  ddclient/web: ipify-ipv4 https://api.ipify.org/
  ddclient/protocol-other:
  ddclient/proxy:
  ddclient/web-url: https://api.ipify.org/
* ddclient/protocol: dyndns2
  ddclient/hostslist:
  ddclient/interface:
  ddclient/password-mismatch:
  ddclient/fetchhosts: Manually
* ddclient/names:
* ddclient/server:
  ddclient/blankhostslist:
* ddclient/method: Web-based IP discovery service
* ddclient/username:
  ddclient/daemon_interval: 5m
* ddclient/service: other



Bug#998088: Same issue when moving from kernel 5.17 to 5.18

2022-10-22 Thread Paul Leiber
On Wed, 29 Jun 2022 15:03:31 +0200 Frédéric MASSOT wrote:

> Hi,
>
> I have a local server that uses Debian testing. I updated the server, it
> went from a kernel 5.17 to 5.18. After the reboot, the network was no
> longer active. The network interfaces were down.
>
> In the logs I had these error messages:
>
> systemd-udevd[396]: :01:00.0: Worker [425] processing SEQNUM=2140 is
> taking a long time
> systemd[1]: systemd-udev-settle.service: Main process exited,
> code=exited, status=1/FAILURE
> systemd[1]: systemd-udev-settle.service: Failed with result 'exit-code'.
> systemd[1]: Failed to start Wait for udev To Complete Device Initialization.
> systemd[1]: systemd-udev-settle.service: Consumed 4.055s CPU time.
> [...]
> systemd[1]: ifupdown-pre.service: Main process exited, code=exited,
> status=1/FAILURE
> systemd[1]: ifupdown-pre.service: Failed with result 'exit-code'.
> systemd[1]: Failed to start Helper to synchronize boot up for ifupdown.
> systemd[1]: Dependency failed for Raise network interfaces.
> systemd[1]: networking.service: Job networking.service/start failed with
> result 'dependency'.
> ithaqua systemd[1]: ifupdown-pre.service: Consumed 3.807s CPU time.
>
>
> I found this bug report and replaced the line
> "Requires=ifupdown-pre.service" with "Wants=ifupdown-pre.service" in
> "/lib/systemd/system/networking.service".
>
> During boot there is a delay, but the network interfaces were up and the
> network is active.
>
>
> Regards.
> --
> ==
> | FRÉDÉRIC MASSOT |
> | http://www.juliana-multimedia.com |
> | mailto:frede...@juliana-multimedia.com |
> | +33.(0)2.97.54.77.94 +33.(0)6.67.19.95.69 |
> ===Debian=GNU/Linux===
>
>

I have the same issue as described by Frédéric on an up-to-date Debian 
Bullseye 11.5, kernel 5.10.0-19-amd64. Manually starting the network via 
ifconfig works. The workaround described by Ron mitigates the error. 
However, the boot delay also exists on my system.
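
For anyone applying the same workaround, a sketch of doing it as a 
drop-in override instead of editing the unit under /lib directly, so it 
survives package upgrades (this assumes ifupdown-pre.service is the 
only Requires= entry in networking.service):

# /etc/systemd/system/networking.service.d/override.conf
[Unit]
Requires=
Wants=ifupdown-pre.service

followed by "systemctl daemon-reload". The empty Requires= line clears 
the inherited dependency list before Wants= re-adds the weaker one.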


Paul