[Bug 1903817] Re: Clustalo 1.2.4-6 segfaults on s390x

2020-11-22 Thread Christian Ehrhardt
Thank you Andreas and Stefan,
I've pinged the Debian MR [1] and upstream (via mail, on which you are CC'd) about it.
Let us see how things evolve from here.

[1]: https://salsa.debian.org/med-team/clustalo/-/merge_requests/1

** Changed in: gcc-10 (Ubuntu)
   Status: New => Invalid

** Changed in: clustalo (Ubuntu)
   Status: New => Triaged

** Changed in: clustalo (Ubuntu)
   Importance: Undecided => Medium


[Bug 1903817] Re: Clustalo 1.2.4-6 segfaults on s390x

2020-11-22 Thread Christian Ehrhardt
FYI: Also filed as Debian bug (linked here) and got a response on the
Debian MR.

** Bug watch added: Debian Bug tracker #975511
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=975511

** Also affects: clustalo (Debian) via
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=975511
   Importance: Unknown
   Status: Unknown


[Bug 1905067] Re: qemu-system-riscv64 sbi_trap_error powering down VM riscv64

2020-11-22 Thread Christian Ehrhardt
Hi Sean,
the last time I used our riscv64 bits in VMs it still required a lot of
hand-collected pieces from [1]. Could you outline exactly which bits you used
and where you got them from?

Furthermore, I made qemu 5.1 [2] available in 21.04 a few days ago. Could you
give that more recent version a try, to see whether a fix already exists there?

[1]: https://people.ubuntu.com/~wgrant/riscv64/
[2]: https://launchpad.net/ubuntu/+source/qemu/1:5.1+dfsg-4ubuntu1

** Changed in: qemu (Ubuntu)
   Status: New => Incomplete


[Bug 1854396] Re: virt-manager freezes the whole desktop for 3 seconds when starting/stopping a VM

2020-11-22 Thread Christian Ehrhardt
Ok, then this is fixed in Ubuntu 21.04 (done) and will need someone to
pick up verification of the PPA to get going on the SRUs for 20.04 and
20.10.

It never really "hurt" me so far (it was never close to the reported 3 seconds
for me), so on my own I can't make a convincing case to the SRU team that the
change is worth it under the SRU policy [1].

If anyone else affected considers it really painful and important please
give the PPAs in comment #13 a try and let me know.

[1]: https://wiki.ubuntu.com/StableReleaseUpdates

** Also affects: virt-manager (Ubuntu Groovy)
   Importance: Undecided
   Status: New

** Also affects: virt-manager (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Changed in: virt-manager (Ubuntu)
   Status: In Progress => Fix Released

** Changed in: virt-manager (Ubuntu Focal)
   Status: New => Incomplete

** Changed in: virt-manager (Ubuntu Groovy)
   Status: New => Incomplete


[Bug 1854396] Re: virt-manager freezes the whole desktop for 3 seconds when starting/stopping a VM

2020-11-22 Thread Christian Ehrhardt
And by fixed in 21.04 I mean by
https://launchpad.net/ubuntu/+source/virt-manager/1:3.1.0-1


[Bug 1905067] Re: qemu-system-riscv64 sbi_trap_error powering down VM riscv64

2020-11-23 Thread Christian Ehrhardt
This looks a lot like
https://mail.gnu.org/archive/html/qemu-devel/2020-09/msg00212.html

You'd think the offending commit mentioned there is only in 5.1 and not in
anything earlier.
But it was backported to Groovy as part of
  Bug-Debian: https://bugs.debian.org/964793
  Bug-Debian: https://bugs.debian.org/964247
  https://bugs.launchpad.net/qemu/+bug/1886318
It already had one follow-on fix in
  d/p/riscv-allow-64-bit-access-to-SiFive-CLINT.patch

Focal has that as well via CVE fixes:
  d/p/ubuntu/hw-riscv-Allow-64-bit-access-to-SiFive-CLINT.patch
  debian/patches/ubuntu/CVE-2020-13754-1.patch

Chances are we need this later follow-on fix as well.

I wanted to check the 4.2 stable patches (qemu-sta...@nongnu.org) for Focal
anyway (but there is no 4.2.2 yet). This would be one of them, but one step at
a time.

I guess we need to backport
https://git.qemu.org/?p=qemu.git;a=commit;h=ab3d207fe89bc0c63739db19e177af49179aa457
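
As a sketch of how such a backport usually lands in the package (the patch
file name below is just a plausible placeholder, not the name actually used):

  # export the upstream commit and register it in the quilt series
  $ git -C qemu format-patch -1 ab3d207fe89bc0c63739db19e177af49179aa457 \
      --stdout > debian/patches/ubuntu/lp-1905067-sifive-clint-followup.patch
  $ echo 'ubuntu/lp-1905067-sifive-clint-followup.patch' >> debian/patches/series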

@Sean - if I built you a qemu with that fix, could you test it? If so, for
which release(s) would you need the qemu build - Focal, Groovy, or both?

** Bug watch added: Debian Bug tracker #964793
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=964793

** Bug watch added: Debian Bug tracker #964247
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=964247

** CVE added: https://cve.mitre.org/cgi-bin/cvename.cgi?name=2020-13754


Re: [Bug 1785262] Re: arm64 autopkgtests are flaky

2020-11-23 Thread Christian Ehrhardt
On Fri, Nov 20, 2020 at 9:00 PM Heather Ellsworth
<1785...@bugs.launchpad.net> wrote:
>
> I agree that the uicheck-sw test has increased in flakiness. How about
> instead of removing the test or sinking the resources into actually
> fixing them right now, I can mark them flaky so that at least a failed
> test looks like a skipped test:

That would work for me - can we mark them flaky on armhf only (I am not aware whether that is possible)?


[Bug 1903817] Re: Clustalo 1.2.4-6 segfaults on s390x

2020-11-23 Thread Christian Ehrhardt
FYI: fixed in 1.2.4-7, which is coming our way via auto-sync.


[Bug 1902654] Re: failure to migrate virtual machines with pc-i440fx-wily type to ubuntu 20.04

2020-11-23 Thread Christian Ehrhardt
Thanks for the ping Łukasz,
  during RTL pass: reload
  /<>/fpu/softfloat.c: In function ‘soft_f64_muladd’:
  /<>/fpu/softfloat.c:1535:1: internal compiler error: 
Segmentation fault
   1535 | }
| ^
that is bug 1890435, which should only be flaky (not permanent) in groovy.

I'd really prefer not to switch groovy to gcc-9 for this.
In 21.04 this became 100% non-buildable and we had no other choice until bug
1890435 is resolved there. But in Groovy I'd hope we get away with
re-triggering the build.
I did that, and will give it three tries before we have to consider the gcc-9
treatment.


[Bug 1677398] Re: Apparmor prevents using storage pools and hostdev networks

2020-11-23 Thread Christian Ehrhardt
Hi Yury,
until this is implemented for real, adding apparmor rules for the uncommon paths
is the way to go.
The difference to your solution that I'd suggest is to use local overrides,
since they will neither prompt you nor be overwritten on updates.

This can be done in:
# allow virt-aa-helper to generate per-guest rules in an uncommon path
/etc/apparmor.d/local/usr.lib.libvirt.virt-aa-helper
# allow something for an individual guest
/etc/apparmor.d/libvirt/libvirt-
# allow something for all guests
/etc/apparmor.d/local/abstractions/libvirt-qemu

In this particular case the best way should be an entry like
   /srv/libvirt/images/** r,
in /etc/apparmor.d/local/usr.lib.libvirt.virt-aa-helper

That is especially good since each individual guest will still only get rules
added to allow "its own storage" as configured in the guest XML.
In your solution, by comparison, an exploited guest A could access the storage
of guest B.
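
A minimal sketch of that workflow (paths as above; the profile reload via
apparmor_parser is the usual step and an assumption here, adjust as needed):

  # append the rule to the local override for virt-aa-helper
  $ echo '/srv/libvirt/images/** r,' | \
      sudo tee -a /etc/apparmor.d/local/usr.lib.libvirt.virt-aa-helper
  # reload the profile so the override takes effect
  $ sudo apparmor_parser -r /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper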


[Bug 1904584] Re: libvirt 6.8 / qemu 5.1 - --p2p --tunnelled is hanging

2020-11-23 Thread Christian Ehrhardt
Retest:
- Q5.0 / L6.6 -> Q5.0 / L6.6 - working
- Q5.1 / L6.9 -> Q5.1 / L6.9 - failing
- Q5.1 / L6.9 -> Q5.0 / L6.6 - working
- Q5.0 / L6.6 -> Q5.1 / L6.9 - working

So if either end is not on the most recent versions things still work.


[Bug 1902654] Re: failure to migrate virtual machines with pc-i440fx-wily type to ubuntu 20.04

2020-11-23 Thread Christian Ehrhardt
The armhf build is resolved (the third try was the charm).


[Bug 1905360] [NEW] FTBFS in Hirsute due to glibc2.32

2020-11-23 Thread Christian Ehrhardt
Public bug reported:

Hi,
just FYI, I've seen this while looking at proposed-migration for other things.
Currently glusterfs FTBFS in hirsute:

libtool: link: gcc -Wall -I/usr/include/uuid -I/usr/include/tirpc -Wformat 
-Werror=format-security -Werror=implicit-function-declaration -g -O2 
-fdebug-prefix-map=/<>=. -fstack-protector-strong -Wformat 
-Werror=format-security -Wl,-Bsymbolic-functions -Wl,-z -Wl,relro -Wl,-z 
-Wl,now -o .libs/gf_attach gf_attach.o  
../../libglusterfs/src/.libs/libglusterfs.so ../../api/src/.libs/libgfapi.so 
../../rpc/rpc-lib/src/.libs/libgfrpc.so ../../rpc/xdr/src/.libs/libgfxdr.so 
-lrt -ldl -lpthread -lcrypto
/usr/bin/ld: gf_attach.o: undefined reference to symbol 
'xdr_sizeof@@TIRPC_0.3.3'
/usr/bin/ld: /lib/x86_64-linux-gnu/libtirpc.so.3: error adding symbols: DSO 
missing from command line
collect2: error: ld returned 1 exit status


It is not a problem in Debian [1] yet. This is a known, common issue around
glibc 2.32: for years the xdr handling has lived in libtirpc (where it was
moved to), while glibc (where it originally was) still carried a fallback copy.
glibc 2.32 finally removed that fallback, so code now has to link against
libtirpc explicitly.

Common issues of this type are either:
- rpc.h not found -> build against libtirpc-dev (not the case here)
- failure to link to xdr* -> used to work as glibc had a fallback, but now
  needs linker flags (see pkg-config for libtirpc-dev and the sketch below)
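
As an illustration of the second case (a sketch only, not the actual glusterfs
build change), the undefined xdr_sizeof reference resolves once the libtirpc
flags from pkg-config are added to the link line:

  # pkg-config provides both the include path and the library to link
  $ pkg-config --cflags --libs libtirpc
  -I/usr/include/tirpc -ltirpc

  # re-running the failing link with the library appended (command shortened):
  $ gcc -o .libs/gf_attach gf_attach.o ... -lrt -ldl -lpthread -lcrypto \
      $(pkg-config --libs libtirpc)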

P.S. I'm just filing this to avoid someone losing time re-debugging it, as I
have already seen the same for libvirt. I'm - right now - not intending to work
on it, but I hope it gives someone a head start.

[1]: https://buildd.debian.org/status/package.php?p=glusterfs

** Affects: glusterfs (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: update-excuse

** Tags added: update-excuse


[Bug 1890435] Re: gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation fault

2020-11-23 Thread Christian Ehrhardt
FYI: Systems are back up, restarted tests on r10-3727


[Bug 1905377] [NEW] postrm fails in hirsute as the path generation for modules is broken

2020-11-24 Thread Christian Ehrhardt
Public bug reported:

Removing qemu-system-gui:amd64 (1:5.1+dfsg-4ubuntu1) ...
rm: cannot remove '/var/run/qemu/Debian': Is a directory
dpkg: error processing package qemu-system-gui:amd64 (--remove):
 installed qemu-system-gui:amd64 package post-removal script subprocess 
returned error exit status 1


Due to
purge|remove)
# remove .so files for still running qemu instances in /var/run
# for details see bug LP: #1847361
rm -f /var/run/qemu/Debian 1:5.1+dfsg-4ubuntu1/ui-gtk.so
rm -f /var/run/qemu/Debian 1:5.1+dfsg-4ubuntu1/audio-*.so
;;

The space in the middle (and the ":") should have been sanitized into "_" here
as well, as it is in earlier releases - the version string did not get escaped.

** Affects: qemu (Ubuntu)
 Importance: High
 Status: Triaged


[Bug 1905377] Re: postrm fails in hirsute as the path generation for modules is broken

2020-11-24 Thread Christian Ehrhardt
Example from Focal (how it should be):
 26 rm -f /var/run/qemu/Debian_1_5.0-5ubuntu9~backport20.04-202010241037~ubuntu20.04.1/ui-gtk.so
 27 rm -f /var/run/qemu/Debian_1_5.0-5ubuntu9~backport20.04-202010241037~ubuntu20.04.1/audio-*.so

This is generated from the Debian version in d/rules; most likely a character
in that generation changed with the last merge.

[Bug 1905377] Re: postrm fails in hirsute as the path generation for modules is broken

2020-11-24 Thread Christian Ehrhardt
prerm (and others) have the same issue:
mkdir -p /var/run/qemu/Debian 1:5.1+dfsg-4ubuntu1
cp /usr/lib/x86_64-linux-gnu/qemu/block-*.so /var/run/qemu/Debian 1:5.1+dfsg-4ubuntu1/


[Bug 1905377] Re: postrm fails in hirsute as the path generation for modules is broken

2020-11-24 Thread Christian Ehrhardt
Due to the above it might also leave directories like "1:5.1+dfsg-4ubuntu1"
behind in whatever directory it was run from - bumping severity.

** Changed in: qemu (Ubuntu)
   Status: New => Triaged

** Changed in: qemu (Ubuntu)
   Importance: Undecided => High


[Bug 1905377] Re: postrm fails in hirsute as the path generation for modules is broken

2020-11-24 Thread Christian Ehrhardt
Until this is resolved, affected users can rather safely run
  rm -rf /var/run/qemu/Debian

Afterwards the remove/purge will work.


[Bug 1905377] Re: postrm fails in hirsute as the path generation for modules is broken

2020-11-24 Thread Christian Ehrhardt
The string in --version is the same:
 QEMU emulator version 5.1.0 (Debian 1:5.1+dfsg-4ubuntu1)
 QEMU emulator version 5.0.0 (Debian 1:5.0-5ubuntu9.1)

Templates as well:
mkdir -p /var/run/qemu/@PKGVERSION@  
cp /usr/lib/@ARCH@/qemu/block-*.so /var/run/qemu/@PKGVERSION@/ 

d/rules:
Focal:
 PKGVERSION := $(shell printf "Debian ${DEB_VERSION}" | tr --complement '[:alnum:]+-.~' '_')
Groovy:
 PKGVERSION := $(shell printf "Debian ${DEB_VERSION}" | tr --complement '[:alnum:]+-.~' '_')
Hirsute:
 PKGVERSION := $(shell printf "Debian ${DEB_VERSION}" | tr --complement '[:alnum:]+-.~' '_')

The definitions are the same ... but Debian kept the variable name and changed
its meaning further down in d/rules.
We have kept using PKGVERSION directly, while Debian now defines it later (and
in a different format):
  PKGVERSION = Debian ${DEB_VERSION}
  SAVEMODDIR = /run/qemu/$(shell echo -n "${PKGVERSION}" | tr --complement '[:alnum:]+-.~' '_')

But the maintainer-script substitution still uses PKGVERSION, which is now the
value before the tr call and therefore still contains the special characters.
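
For illustration, this is what that tr call turns the current version string
into (just a shell demonstration of the sanitizing, not part of the packaging):

  $ printf "Debian 1:5.1+dfsg-4ubuntu1" | tr --complement '[:alnum:]+-.~' '_'
  Debian_1_5.1+dfsg-4ubuntu1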


[Bug 1905377] Re: postrm fails in hirsute as the path generation for modules is broken

2020-11-24 Thread Christian Ehrhardt
df9ba08d6cdb50bd66792db2b02a31e1fc8befef has also added

+# save block-extra loadable modules on upgrades
+# other module types for now (5.0) can't be loaded at runtime, only at startup
+   echo 'case $$1 in (upgrade|deconfigure) mkdir -p ${SAVEMODDIR}; cp -p 
${libdir}/qemu/block-*.so ${SAVEMODDIR}/;; esac' \
+ >> debian/qemu-block-extra.prerm.debhelper
+   echo 'case $$1 in (purge|remove) rm -f ${SAVEMODDIR}/block-*.so;; esac' 
\
+ >> debian/qemu-block-extra.postrm.debhelper

We still have:
 25 AUTOGENERATED:= qemu-block-extra.prerm qemu-block-extra.postrm qemu-system-gui.prerm qemu-system-gui.postrm
133     for f in ${AUTOGENERATED} ; do \
134         sed -e 's%@ARCH@%${DEB_HOST_MULTIARCH}%g' \
135             -e 's%@PKGVERSION@%${PKGVERSION}%g' \
136             < debian/$$f.in  > debian/$$f ; \
137     done

Those two conflict. The problem is in d210c63c576, which we simply don't need
anymore.

Yet this also unintentionally dropped DEB_HOST_MULTIARCH, but that is recovered
by ${libdir}.


[Bug 1905377] Re: postrm fails in hirsute as the path generation for modules is broken

2020-11-24 Thread Christian Ehrhardt
Fixed in PPA [1] for testing

[1]: https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/4348


[Bug 1904584] Re: libvirt 6.8 / qemu 5.1 - --p2p --tunnelled is hanging

2020-11-24 Thread Christian Ehrhardt
Extending the above list changing Qemu and Libvirt individually to
better identify what we need to look at:

- Q5.1 / L6.9 -> Q5.1 / L6.6 - working
- Q5.1 / L6.6 -> Q5.1 / L6.9 - working
- Q5.1 / L6.6 -> Q5.1 / L6.6 - working
- Q5.1 / L6.9 -> Q5.1 / L6.9 - (still) failing
- Q5.1 / L6.9 -> Q5.0 / L6.9 - failing
- Q5.0 / L6.9 -> Q5.1 / L6.9 - failing
- Q5.0 / L6.9 -> Q5.0 / L6.9 - failing

Note I: I kept the libvirtd.conf, apparmor rules and all that on the
level of libvirt 6.9 all the time. Mostly just the binaries change.

Note II: when changing qemu on a system I ensured that the guest was
destroyed&started to run with the new version.

Overall it depends on just libvirt, but not qemu.
And it is enough to not have the new version on either end of the migration to 
avoid the issue.

By the above list the "minimal" change we can look at to make a good/bad case 
comparison will be:
 Q5.1 / L6.9 -> Q5.1 / L6.6 (good)  vs   Q5.1 / L6.9 -> Q5.1 / L6.9 (bad)
I'll get debug logs of both peers in those two cases to compare.


[Bug 1904584] Re: libvirt 6.8 / qemu 5.1 - --p2p --tunnelled is hanging

2020-11-24 Thread Christian Ehrhardt
Bad case:
1. trigger migration
2. sleep 2m
3. fetch the logs
^^ this ensures the logs do not contain the error path from me aborting the
migration

# reset the logfile via (so both have a log from the start of the service into 
a migration)
$ systemctl stop libvirtd
$ rm /var/log/libvirtd.log
$ systemctl start libvirtd

Good case
1. trigger migration
2. migration completed
3. fetch the logs


All logs are then stripped of the leading time and PID index for better 
comparison.
:%s/^[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9] 
[0-9][0-9]:[0-9][0-9]:[0-9][0-9].[0-9][0-9][0-9]+: [0-9]*: //gc


Then everything of the daemon init before the RPC call that leads to
"virDomainMigratePerform3Params" is removed (I'm not interested in the startup
right now, but it is still there in the attached files). In the same way,
everything before "remoteDispatchDomainMigratePrepareTunnel3Params" is cut out
on the target.

Finally, for further improved diffing (we can always check details in the raw
logs), line numbers, threads and addresses are removed.
:%s/\v^(debug|info) : ([a-zA-Z0-9]*):[0-9]* : /\2 /gc
:%s/Thread [0-9]* /Thread /gc
:%s/=0x[a-f0-9]*/0xAddrRemoved/gc
:%s/:0x[a-f0-9]*/:0xAddrRemoved/gc
:%s/fd:[0-9]*/fd:FDRemoved/gc
:%s/fd=[0-9]*/fd=FDRemoved/gc


That is then compared.
Target:
- The incoming "destination_xml" on virDomainMigratePrepareTunnel3Params is the 
same
- the qemu spawn command on the target is the same
- migration communication with qemu initially goes the same 
query-migrate-capabilities -> migrate-set-capabilities ... (overall ~2k lines 
of log mostly qom-* commands)
- both successfully reach "qemuProcessHandleMigrationStatus Migration of domain 
... h-migr-test changed state to active"
- then a long sequence of virStream activity starts with slightly alternating 
order but otherwise no diff
- the bad case stays in there, while the good case has more of them and finally
completes into "virStreamFinish -> virDomainMigrateFinish3Params".

It seems to hang on communication, but why/where?


[Bug 1905412] [NEW] LVM install broken if other disks have meta-data on the VG name already

2020-11-24 Thread Christian Ehrhardt
Public bug reported:

Hi,
I was puzzled today by my install aborting, until I looked into the crash file.
There I found:
 Running command ['vgcreate', '--force', '--zero=y', '--yes', 'ubuntu-vg', 
'/dev/dasda2'] with allowed return codes [0] (capture=True)
 An error occured handling 'lvm_volgroup-0': ProcessExecutionError - Unexpected 
error while running command.
 Command: ['vgcreate', '--force', '--zero=y', '--yes', 'ubuntu-vg', 
'/dev/dasda2']
 Exit code: 5
 Reason: -
 Stdout: ''
 Stderr:   A volume group called ubuntu-vg already exists.


And now things fall into place.

I had a default VG, as the installer creates it, spanning a few disks.
Then later my main root disk broke and I replaced it.

At install time I had activated all disks that I eventually wanted to use (this
is s390x, hence "activate disks", but from what I see in the crash I'd expect
the same behavior if on e.g. x86 you replaced one disk and tried to re-install).

What happens is that the disks I'm not installing onto still carry LVM metadata.
That metadata still defines "ubuntu-vg" and thereby crashes the install.

I think we will need to harden the installer: it probably needs to wipe some
signatures and re-probe LVM to then get things going.

** Affects: subiquity (Ubuntu)
 Importance: Undecided
 Status: New


[Bug 1905412] Re: LVM install broken if other disks have meta-data on the VG name already

2020-11-24 Thread Christian Ehrhardt
** Attachment added: "log as it was left in the installer environment"
   
https://bugs.launchpad.net/ubuntu/+source/subiquity/+bug/1905412/+attachment/5437532/+files/1606223663.332479477.install_fail.crash


[Bug 1905412] Re: LVM install broken if other disks have meta-data on the VG name already

2020-11-24 Thread Christian Ehrhardt
** Also affects: curtin (Ubuntu)
   Importance: Undecided
   Status: New


[Bug 1890435] Re: gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation fault

2020-11-24 Thread Christian Ehrhardt
r10-3727 had another 2.5 good runs; overall it LGTM now.
I'll re-run r10-4054 just to be sure I'm not hunting a ghost.

20190425 good
r10-1014
r10-2027 good
r10-2533
r10-3040 good
r10-3220
r10-3400 good
r10-3450
r10-3475
r10-3478
r10-3593
r10-3622
r10-3657 good
r10-3727 good
r10-4054 bad next
r10-6080
20200507 bad


[Bug 1904584] Re: libvirt 6.8 / qemu 5.1 - --p2p --tunnelled is hanging

2020-11-24 Thread Christian Ehrhardt
** Attachment added: "stripped logs of good and bad case at source and target 
of the migration"
   
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1904584/+attachment/5437542/+files/stripped-log.tgz


[Bug 1904584] Re: libvirt 6.8 / qemu 5.1 - --p2p --tunnelled is hanging

2020-11-24 Thread Christian Ehrhardt
** Attachment added: "full logs of good and bad case at source and target of 
the migration"
   
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1904584/+attachment/5437541/+files/full-log.tgz


[Bug 1904584] Re: libvirt 6.8 / qemu 5.1 - --p2p --tunnelled is hanging

2020-11-24 Thread Christian Ehrhardt
I pinged upstream about the case at
https://www.redhat.com/archives/libvir-list/2020-November/msg01399.html

I'll check how easy (or not) it would be to build 6.7 and 6.8 with an otherwise
mostly identical setup.
That could help spot where the change happened.


[Bug 1890435] Re: gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation fault

2020-11-24 Thread Christian Ehrhardt
On this re-check r10-4054 had 7 complete runs without a failure.
So, as I already feared in comment 71, there might be another, much rarer ICE
hidden in there as well. Or, OTOH, we are just cursed with very bad statistical
luck :-/.

I'll check r10-6080 next to see if it
a) reproduces an ICE faster
b) will show the same signature we saw more often before

20190425 good
r10-1014
r10-2027 good
r10-2533
r10-3040 good
r10-3220
r10-3400 good
r10-3450
r10-3475
r10-3478
r10-3593
r10-3622
r10-3657 good
r10-3727 good
r10-4054 other kind of bad?
r10-6080 next
20200507 bad


[Bug 1905412] Re: LVM install broken if other disks have meta-data on the VG name already

2020-11-24 Thread Christian Ehrhardt
FYI I just looked at this system after installing with only one disk
enabled.

I found this which might be interesting for LVM handling:

#1 two VGs with the same name co-exist
$ sudo vgs -o vg_name,vg_uuid
  VG        VG UUID
  ubuntu-vg 6j9dUF-KO8t-7Svv-RRze-VcMu-pMUY-FJxgA8
  ubuntu-vg bKYrCn-gMDT-UFAW-XUlX-oRVS-hUxr-ksINed

Usual commands would fail:
$ sudo vgremove ubuntu-vg
  Multiple VGs found with the same name: skipping ubuntu-vg
  Use --select vg_uuid= in place of the VG name.

We can look at the UUIDs and maybe even tell which one was created by the
current install.


$ sudo vgdisplay
  --- Volume group ---
  VG Name               ubuntu-vg
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  6
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                4
  Act PV                2
  VG Size               <169.12 GiB
  PE Size               4.00 MiB
  Total PE              43294
  Alloc PE / Size       43294 / <169.12 GiB
  Free  PE / Size       0 / 0
  VG UUID               6j9dUF-KO8t-7Svv-RRze-VcMu-pMUY-FJxgA8

  --- Volume group ---
  VG Name               ubuntu-vg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <19.63 GiB
  PE Size               4.00 MiB
  Total PE              5025
  Alloc PE / Size       5025 / <19.63 GiB
  Free  PE / Size       0 / 0
  VG UUID               bKYrCn-gMDT-UFAW-XUlX-oRVS-hUxr-ksINed

In my case for example I can actually remove one of them via:

$ sudo vgremove --select vg_uuid=bKYrCn-gMDT-UFAW-XUlX-oRVS-hUxr-ksINed
Do you really want to remove volume group "ubuntu-vg" containing 1 logical 
volumes? [y/n]: y
  Logical volume ubuntu-vg/ubuntu-lv contains a filesystem in use.

Maybe this helps with implementing some handling for the "potentially the same
VG name" case.
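
As a sketch of the kind of cleanup that would unblock such an install (device
and UUID below are placeholders for the disk/VG carrying the stale metadata,
not values taken from this system):

  # deactivate and remove the stale VG selected by UUID, then clear the old labels
  $ sudo vgchange -an --select vg_uuid=<UUID-of-stale-VG>
  $ sudo vgremove --select vg_uuid=<UUID-of-stale-VG>
  $ sudo wipefs -a /dev/dasdb1    # placeholder: disk that only carries the stale metadata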


[Bug 1905067] Re: qemu-system-riscv64 sbi_trap_error powering down VM riscv64

2020-11-25 Thread Christian Ehrhardt
The patch is in 5.2, so it would be needed in >=Focal - I marked the bug tasks
accordingly.
But for now let us test it in one place (Focal, being the LTS and the furthest
back) and, if confirmed to work, then prep the dev fix and the SRUs.

Could you try the build 4.2-3ubuntu6.10~ppa1 at [1] and check whether it
resolves the issue for you on Focal? (A sketch of how to enable and install
from that PPA follows below.)

Also, we'll really need clear "where to get the artifacts and how exactly to
invoke them" steps for the SRU process, so please add those as well.

[1]: https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/4351/+packages
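
A minimal sketch of enabling that PPA for the test (the binary package name is
an assumption here - the riscv64 emulator ships in qemu-system-misc on Ubuntu):

  $ sudo add-apt-repository ppa:ci-train-ppa-service/4351
  $ sudo apt update && sudo apt install qemu-system-misc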

** Also affects: qemu (Ubuntu Hirsute)
   Importance: Undecided
   Status: Incomplete

** Also affects: qemu (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Also affects: qemu (Ubuntu Groovy)
   Importance: Undecided
   Status: New

** Changed in: qemu (Ubuntu Hirsute)
   Status: Incomplete => Triaged

** Changed in: qemu (Ubuntu Groovy)
   Status: New => Triaged

** Changed in: qemu (Ubuntu Focal)
   Status: New => Triaged


[Bug 1905377] Re: postrm fails in hirsute as the path generation for modules is broken

2020-11-25 Thread Christian Ehrhardt
Hi Paride,
this is tied into the upgrade mechanics (as that is what the code needs to
handle: long-running qemu processes surviving an upgrade).
It needs an upgrade to happen to enter the bad state, but that upgrade doesn't
have to be to another version.

For example:
$ apt install qemu-block-extra
$ apt install --reinstall qemu-block-extra

Triggers the bug in a fresh container.

That very same test now works fine with the fix in
https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/4348

The maintainer scripts look less normal/formal (no longer based on the usual
skeleton), but they work - they use the .debhelper suffix that seems to be
established, and they align with how Debian does it.

The fix can be uploaded; I'm just waiting on a few other open bugs to decide
whether I do one upload or several (the test and build queues are full enough
already).


[Bug 1905377] Re: postrm fails in hirsute as the path generation for modules is broken

2020-11-25 Thread Christian Ehrhardt
Hmm, the other one will take at least a few days - maybe more, due to other
involved people being out for Thanksgiving. Prepping this one for review.

MP here:
https://code.launchpad.net/~paelzer/ubuntu/+source/qemu/+git/qemu/+merge/394453


[Bug 1905424] Re: Live migration failure

2020-11-25 Thread Christian Ehrhardt
** Package changed: qemu-kvm (Ubuntu) => qemu (Ubuntu)


[Bug 1904584] Re: libvirt 6.8 / qemu 5.1 - --p2p --tunnelled is hanging

2020-11-25 Thread Christian Ehrhardt
Prep Bisecting - info in case anyone needs to repro or take over

# Build
$ rm -rf build install
$ mkdir build install
$ V=$(git describe)
$ cd build
$ LC_ALL=C.UTF-8 meson .. --wrap-mode=nodownload --buildtype=plain 
--prefix=/usr --sysconfdir=/etc --localstatedir=/var 
--libdir=lib/x86_64-linux-gnu --libexecdir=/usr/lib/libvirt -Drunstatedir=/run 
-Dpackager="Debug" -Dpackager_version="bisect6.8-6.9" -Dapparmor=enabled 
-Dsecdriver_apparmor=enabled -Dapparmor_profiles=true -Ddriver_qemu=enabled 
-Dqemu_user=libvirt-qemu -Dqemu_group=kvm 
-Dqemu_moddir=/usr/lib/x86_64-linux-gnu/qemu -Ddocs=disabled -Dtests=disabled 
-Daudit=disabled -Ddriver_libxl=disabled -Ddriver_lxc=disabled 
-Ddriver_openvz=disabled -Ddriver_vbox=disabled -Dfirewalld=disabled 
-Dlibssh2=disabled -Dnetcf=disabled -Dnumactl=disabled -Dnumad=disabled 
-Dpolkit=disabled -Dsanlock=disabled -Dstorage_disk=disabled 
-Dstorage_gluster=disabled -Dstorage_iscsi=disabled -Dstorage_lvm=disabled 
-Dstorage_rbd=disabled -Dstorage_sheepdog=disabled -Dstorage_zfs=disabled 
-Ddriver_esx=disabled -Dwireshark_dissector=disabled -Ddtrace=disabled 
-Dglusterfs=disabled -Dfuse=disabled
$ meson compile
$ DESTDIR=../install meson install
$ cd ../install
$ find usr/sbin usr/bin usr/lib/x86_64-linux-gnu usr/lib/libvirt | tar czf 
~/git-libvirt-${V}.tgz -T -

# example Transport
$ f=git-libvirt-v6.9.0.tgz; lxc file pull h-build-libvirt-bisect/$root/f .; lxc 
file push $f testkvm-hirsute-from/root/; lxc file push $f 
testkvm-hirsute-to/root/;

# example "Install"
$ tar xzf ~/git-libvirt-v6.9.0.tgz --directory=/
$ systemctl restart libvirtd
$ systemctl status libvirtd | grep version
Nov 25 10:41:45 testkvm-hirsute-to libvirtd[32067]: 2020-11-25 
10:41:45.913+: 32067: info : libvirt version: 6.9.0, package: bisect6.8-6.9 
(Debug)

Note: for better integration this relies on an installed libvirt package and
its services/config as shipped in 6.9-1ubuntu1 in Hirsute. We just swap out the
binaries/libs - ugly, but working.

With the above I have 6.8 from git working and 6.9 from git failing -
just as I have with the Ubuntu package builds.


[Bug 1904584] Re: libvirt 6.8 / qemu 5.1 - --p2p --tunnelled is hanging

2020-11-25 Thread Christian Ehrhardt
ps output of the hanging receiving virt-ssh-helper


Source:
4     0   41305       1  20   0 1627796 23360 poll_s Ssl ?          0:05 
/usr/sbin/libvirtd
0     0   41523   41305  20   0   9272  4984 poll_s S    ?          0:02  \_ 
ssh -T -e none -- testkvm-hirsute-to sh -c 'virt-ssh-helper 'qemu:///system''

Target
4     0     213       1  20   0  13276  4132 poll_s Ss   ?          0:00 sshd: 
/usr/sbin/sshd -D [listener] 0 of 250-500 startups
4     0   35148     213  20   0  19048 11320 poll_s Ss   ?          0:02  \_ 
sshd: root@notty
4     0   35206   35148  20   0   2584   544 do_wai Ss   ?          0:00      
\_ sh -c virt-ssh-helper qemu:///system
0     0   35207   35206  20   0  81348 26684 -      R    ?          0:34        
  \_ virt-ssh-helper qemu:///system


[Bug 1904584] Re: libvirt 6.8 / qemu 5.1 - --p2p --tunnelled is hanging

2020-11-25 Thread Christian Ehrhardt
For comparison, netcat (nc) mode:

Source:
4 0   41305   1  20   0 1627796 23456 poll_s Ssl ?  0:05 
/usr/sbin/libvirtd
0 0   41545   41305  20   0   9064  4440 poll_s S?  0:00  \_ 
ssh -T -e none -- testkvm-hirsute-to sh -c 'if 'nc' -q 2>&1 | grep "requires an 
argument" >/dev/null 2>&1; then AR

Target:
4 0 213   1  20   0  13276  4132 poll_s Ss   ?  0:00 sshd: 
/usr/sbin/sshd -D [listener] 0 of 250-500 startups
4 0   35259 213  20   0  17848 10120 -  Rs   ?  0:00  \_ 
sshd: root@notty
4 0   35316   35259  20   0   2584   588 do_wai Ss   ?  0:00  
\_ sh -c if nc -q 2>&1 | grep "requires an argument" >/dev/null 2>&1; then 
ARG=-q0;else ARG=;fi;nc $ARG -U /var/r
0 0   35319   35316  20   0   3088   748 -  S?  0:00
  \_ nc -q0 -U /var/run/libvirt/libvirt-sock


[Bug 1904584] Re: libvirt 6.8 / qemu 5.1 - --p2p --tunnelled is hanging

2020-11-25 Thread Christian Ehrhardt
bad case: runs into a hang - takes at least 2 minutes
good case: migrates without error in <20s; I'm migrating forward and back
between two systems to be sure (in case anything is flaky) and to be ready to
start the next test under the same conditions.

git bisect start
git bisect good v6.8.0
git bisect bad v6.9.0
c1cfbaab25 - good
4ced77a309 - good
862f7e5c73 - good
b866adf8d9 - good
ab6439b960 - good
b87cfc957f - bad
ae23a87d85 - bad
7d959c302d - bad
ea7af657f1 - good

Identified:

commit 7d959c302d10e97390b171334b885887de427a32
Author: Andrea Bolognani 
Date:   Tue Oct 27 00:15:33 2020 +0100

rpc: Fix virt-ssh-helper detection

When trying to figure out whether virt-ssh-helper is available
on the remote host, we mistakenly look for the helper by the
name it had while the feature was being worked on instead of
the one that was ultimately picked, and thus end up using the
netcat fallback every single time.

Fixes: f8ec7c842df9e40c6607eae9b0223766cb226336
Signed-off-by: Andrea Bolognani 
Reviewed-by: Neal Gompa 
Reviewed-by: Daniel P. Berrangé 

 src/rpc/virnetclient.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

OK, that makes some sense: 6.8 first introduced
  f8ec7c84 rpc: use new virt-ssh-helper binary for remote tunnelling
which ties it to tunnelling, our broken use case.

The identified commit "7d959c30 rpc: Fix virt-ssh-helper detection" then might
be what finally enables the new helper - and that helper is the broken piece?


[Bug 1904584] Re: libvirt 6.8 / qemu 5.1 - --p2p --tunnelled is hanging

2020-11-25 Thread Christian Ehrhardt
With that knowledge I was also able to confirm that it really is the native
(virt-ssh-helper) mode that is affected, by forcing the proxy explicitly:

$ virsh migrate --unsafe --live --p2p --tunnelled h-migr-test 
qemu+ssh://testkvm-hirsute-to/system?proxy=netcat

$ virsh migrate --unsafe --live --p2p --tunnelled h-migr-test 
qemu+ssh://testkvm-hirsute-to/system?proxy=native


[Bug 1904584] Re: libvirt 6.8 / qemu 5.1 - --p2p --tunnelled is hanging

2020-11-25 Thread Christian Ehrhardt
I got suspicious because on the target virt-ssh-helper is always in the running
state.
For something that transports network data I'd expect it to be in sleeping/poll
states at least some of the time (as the nc case shows).

If I only start it manually it is in
0 0   35497   35490  20   0  69888  9924 poll_s S+   pts/0  0:00  \_ 
virt-ssh-helper qemu:///system

So does it really never return and busy-loop on something - or is that a red
herring?


Per strace it loops on this:

 0.71 read(4, "\4\0\0\0\0\0\0\0", 16) = 8 <0.11>
 0.000238 write(4, "\1\0\0\0\0\0\0\0", 8) = 8 <0.12>
 0.40 read(0, "ba9821e493159a306fce6e8ff4adc1bd"..., 1024) = 1024 
<0.15>
 0.43 write(4, "\1\0\0\0\0\0\0\0", 8) = 8 <0.10>
 0.34 write(4, "\1\0\0\0\0\0\0\0", 8) = 8 <0.10>
 0.35 write(3, "\nSHA512: 1d4f9ad2f209cca2d1d4cfd"..., 2048) = 2048 
<0.21>
 0.57 write(4, "\1\0\0\0\0\0\0\0", 8) = 8 <0.12>
 0.000467 poll([{fd=0, events=POLLIN}, {fd=3, events=POLLIN}, {fd=4, 
events=POLLIN}], 3, 0) = 2 ([{fd=0, revents=POLLIN}, {fd=4, revents=POLLIN}]) 
<0.16>
 0.69 read(4, "\4\0\0\0\0\0\0\0", 16) = 8 <0.11>
 0.000292 write(4, "\1\0\0\0\0\0\0\0", 8) = 8 <0.13>
 0.49 read(0, "a26f12d123bdfc74ccc3\nHomepage: h"..., 1024) = 1024 
<0.12>
 0.46 write(4, "\1\0\0\0\0\0\0\0", 8) = 8 <0.14>
 0.51 write(4, "\1\0\0\0\0\0\0\0", 8) = 8 <0.10>
 0.38 write(4, "\1\0\0\0\0\0\0\0", 8) = 8 <0.14>
 0.000518 poll([{fd=0, events=POLLIN}, {fd=3, events=POLLIN|POLLOUT}, 
{fd=4, events=POLLIN}], 3, 0) = 3 ([{fd=0, revents=POLLIN}, {fd=3, 
revents=POLLOUT}, {fd=4, revents=POLLIN}]) <0.16>

Initially that had some readable content like:
 0.48 read(0, "\n  \n  
But later on it does seem to have binary data like:
 0.61 read(0, 
"i@\0\330\6\247\1\2|2\0\1\0\0\0\252\231\21\0\0\0\0\0\253\231\21\0\324\33\25\0\370"...,
 1024) = 1024 <0.15>


[Bug 1904584] Re: libvirt 6.8 / qemu 5.1 - --p2p --tunnelled is hanging

2020-11-25 Thread Christian Ehrhardt
The backtraces also look like it is iterating events.
virEventRunDefaultImpl is an event loop, and an infinite loop by design. I'll
report my new findings upstream - maybe the authors have a good hint now that
we know this much.


A few Backtraces:

Program received signal SIGINT, Interrupt.
g_source_ref (source=0x561b88a36a10) at ../../../glib/gmain.c:2182
2182in ../../../glib/gmain.c
#0  g_source_ref (source=0x561b88a36a10) at ../../../glib/gmain.c:2182
#1  0x7feae7437568 in g_source_iter_next (iter=iter@entry=0x7ffee3d55ba0, 
source=source@entry=0x7ffee3d55b98) at ../../../glib/gmain.c:1060
#2  0x7feae7438b91 in g_main_context_prepare 
(context=context@entry=0x561b875ed1b0, priority=priority@entry=0x7ffee3d55c20) 
at ../../../glib/gmain.c:3619
#3  0x7feae743968b in g_main_context_iterate 
(context=context@entry=0x561b875ed1b0, block=block@entry=1, 
dispatch=dispatch@entry=1, self=) at ../../../glib/gmain.c:4099
#4  0x7feae7439893 in g_main_context_iteration (context=0x561b875ed1b0, 
context@entry=0x0, may_block=may_block@entry=1) at ../../../glib/gmain.c:4184
#5  0x7feae75e9694 in virEventGLibRunOnce () at 
../../src/util/vireventglib.c:533
#6  0x7feae7654449 in virEventRunDefaultImpl () at 
../../src/util/virevent.c:344
#7  0x561b8597cdfa in virRemoteSSHHelperRun (sock=) at 
../../src/remote/remote_ssh_helper.c:316
#8  main (argc=, argv=) at 
../../src/remote/remote_ssh_helper.c:418
^C
Program received signal SIGINT, Interrupt.
g_source_iter_next (iter=iter@entry=0x7ffee3d55ba0, 
source=source@entry=0x7ffee3d55b98) at ../../../glib/gmain.c:1035
1035in ../../../glib/gmain.c
#0  g_source_iter_next (iter=iter@entry=0x7ffee3d55ba0, 
source=source@entry=0x7ffee3d55b98) at ../../../glib/gmain.c:1035
#1  0x7feae743908b in g_main_context_check 
(context=context@entry=0x561b875ed1b0, max_priority=0, 
fds=fds@entry=0x561b875ed2d0, n_fds=-472556648, n_fds@entry=3) at 
../../../glib/gmain.c:3919
#2  0x7feae7439705 in g_main_context_iterate 
(context=context@entry=0x561b875ed1b0, block=block@entry=1, 
dispatch=dispatch@entry=1, self=) at ../../../glib/gmain.c:4116
#3  0x7feae7439893 in g_main_context_iteration (context=0x561b875ed1b0, 
context@entry=0x0, may_block=may_block@entry=1) at ../../../glib/gmain.c:4184
#4  0x7feae75e9694 in virEventGLibRunOnce () at 
../../src/util/vireventglib.c:533
#5  0x7feae7654449 in virEventRunDefaultImpl () at 
../../src/util/virevent.c:344
#6  0x561b8597cdfa in virRemoteSSHHelperRun (sock=) at 
../../src/remote/remote_ssh_helper.c:316
#7  main (argc=, argv=) at 
../../src/remote/remote_ssh_helper.c:418
^C
Program received signal SIGINT, Interrupt.
g_source_ref (source=0x561b87de7ac0) at ../../../glib/gmain.c:2182
2182in ../../../glib/gmain.c
#0  g_source_ref (source=0x561b87de7ac0) at ../../../glib/gmain.c:2182
#1  0x7feae7437568 in g_source_iter_next (iter=iter@entry=0x7ffee3d55ba0, 
source=source@entry=0x7ffee3d55b98) at ../../../glib/gmain.c:1060
#2  0x7feae743908b in g_main_context_check 
(context=context@entry=0x561b875ed1b0, max_priority=0, 
fds=fds@entry=0x561b875ed2d0, n_fds=-472556648, n_fds@entry=3) at 
../../../glib/gmain.c:3919
#3  0x7feae7439705 in g_main_context_iterate 
(context=context@entry=0x561b875ed1b0, block=block@entry=1, 
dispatch=dispatch@entry=1, self=) at ../../../glib/gmain.c:4116
#4  0x7feae7439893 in g_main_context_iteration (context=0x561b875ed1b0, 
context@entry=0x0, may_block=may_block@entry=1) at ../../../glib/gmain.c:4184
#5  0x7feae75e9694 in virEventGLibRunOnce () at 
../../src/util/vireventglib.c:533
#6  0x7feae7654449 in virEventRunDefaultImpl () at 
../../src/util/virevent.c:344
#7  0x561b8597cdfa in virRemoteSSHHelperRun (sock=) at 
../../src/remote/remote_ssh_helper.c:316
#8  main (argc=, argv=) at 
../../src/remote/remote_ssh_helper.c:418
^C
Program received signal SIGINT, Interrupt.
0x7feae7435828 in g_source_unref_internal (source=0x561b87607140, 
context=0x561b875ed1b0, have_lock=1) at ../../../glib/gmain.c:2204
2204in ../../../glib/gmain.c
#0  0x7feae7435828 in g_source_unref_internal (source=0x561b87607140, 
context=0x561b875ed1b0, have_lock=1) at ../../../glib/gmain.c:2204
#1  0x7feae7437585 in g_source_iter_next (iter=iter@entry=0x7ffee3d55ba0, 
source=source@entry=0x7ffee3d55b98) at ../../../glib/gmain.c:1063
#2  0x7feae7438b91 in g_main_context_prepare 
(context=context@entry=0x561b875ed1b0, priority=priority@entry=0x7ffee3d55c20) 
at ../../../glib/gmain.c:3619
#3  0x7feae743968b in g_main_context_iterate 
(context=context@entry=0x561b875ed1b0, block=block@entry=1, 
dispatch=dispatch@entry=1, self=) at ../../../glib/gmain.c:4099
#4  0x7feae7439893 in g_main_context_iteration (context=0x561b875ed1b0, 
context@entry=0x0, may_block=may_block@entry=1) at ../../../glib/gmain.c:4184
#5  0x7feae75e9694 in virEventGLibRunOnce () at 
../../src/util/vireventglib.c:533
#6  0x7feae7654449 in virEventRunDefaultImpl () at 
../../sr

[Bug 1905424] Re: Live migration failure

2020-11-25 Thread Christian Ehrhardt
The upstream bug identified the "nodeset" as the suspect and asked the reporter
to remove it as a check.
IMHO progress there depends on someone confirming on the upstream bug: "yes indeed
it is the same for me, once nodeset is gone it works, but with it it fails".

If you could make this modification in your environment and confirm it, you
could breathe some life back into that upstream bug.

Also I'd recommend verifying with the latest Ubuntu development release (21.04
/ Hirsute) which has qemu 5.1 - not in general, just for your testing.
If that has it fixed we could look for an existing patch to backport, if
applicable.

** Bug watch added: Red Hat Bugzilla #1710687
   https://bugzilla.redhat.com/show_bug.cgi?id=1710687

** Also affects: libvirt via
   https://bugzilla.redhat.com/show_bug.cgi?id=1710687
   Importance: Unknown
   Status: Unknown

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1905424

Title:
  Live migration failure

To manage notifications about this bug go to:
https://bugs.launchpad.net/libvirt/+bug/1905424/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1905424] Re: Live migration failure

2020-11-25 Thread Christian Ehrhardt
FYI: I added a tracker task to the upstream bug.

After lunch I'll try to recreate this, if possible without hugepages (just
nodeset), if there is a way to do it. That would help testability, but no
promises ...

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1905424

Title:
  Live migration failure

To manage notifications about this bug go to:
https://bugs.launchpad.net/libvirt/+bug/1905424/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1905424] Re: Live migration failure

2020-11-25 Thread Christian Ehrhardt
What I found interesting in your bug vs the upstream one is that you
have a single node in your nodeset; that might help to simplify things.


# new guest with UVT (any other probably does as well)
$ uvt-kvm create --host-passthrough --password=ubuntu h-migr-nodeset 
release=hirsute arch=amd64 label=daily

# verify things migrate
virsh migrate --unsafe --live h-migr-nodeset 
qemu+ssh://testkvm-hirsute-to/system

# make it use HP+nodeset
Adding this section to the config (the XML tags themselves were stripped by the
mail archive):
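A hedged reconstruction of the stripped snippet (the 2M page size is an
assumption; the single-node nodeset matches what was discussed above):

  <memoryBacking>
    <hugepages>
      <page size='2048' unit='KiB' nodeset='0'/>
    </hugepages>
  </memoryBacking>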

This migrates just fine in Hirsute (qemu 5.1 / libvirt 6.9).
You mentioned seeing this on Bionic, so I tried the same on Bionic - but that
works as well.

Hmm, so far trying to simplify this (e.g. taking OpenStack out of the equation)
has failed.
Once you get to create VMs manually, could you check what the smallest set of
"ingredients" is that triggers the issue?

Note: this is a non-NUMA system, I only have node 0 - this could be
important. It would be awesome if we could make it trigger on such systems, but
if a NUMA bare-metal system eventually turns out to be required we can't change
that.


P.S.: TBH I'm not sure this isn't just a real limitation, but let us track
the case until we know for sure. Or did you ever see this working and it
degraded in a newer release?

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1905424

Title:
  Live migration failure

To manage notifications about this bug go to:
https://bugs.launchpad.net/libvirt/+bug/1905424/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1905424] Re: Live migration failure

2020-11-25 Thread Christian Ehrhardt
Subscribing James/Corey - have you seen these issues with hugepage-backed
guests as generated by OpenStack?

** Changed in: qemu (Ubuntu)
   Status: Confirmed => Incomplete

** Changed in: libvirt (Ubuntu)
   Status: Confirmed => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1905424

Title:
  Live migration failure

To manage notifications about this bug go to:
https://bugs.launchpad.net/libvirt/+bug/1905424/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1904584] Re: libvirt 6.8 / qemu 5.1 - --p2p --tunnelled is hanging

2020-11-25 Thread Christian Ehrhardt
FYI - discussion with upstream ongoing, see the link above

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1904584

Title:
  libvirt 6.8 / qemu 5.1 - --p2p --tunnelled is hanging

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1904584/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1904584] Re: libvirt 6.8 / qemu 5.1 - --p2p --tunnelled is hanging

2020-11-25 Thread Christian Ehrhardt
** Attachment added: "libvirt debug log - netcat (classic) mode being fast"
   
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1904584/+attachment/5437959/+files/vol-download.netcat.log

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1904584

Title:
  libvirt 6.8 / qemu 5.1 - --p2p --tunnelled is hanging

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1904584/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1904584] Re: libvirt 6.8 / qemu 5.1 - --p2p --tunnelled is hanging

2020-11-25 Thread Christian Ehrhardt
The upstream discussion helped to eliminate migration as well as networking 
from the equation.
It seems the new virt-ssh-helper mode is just rather slow and thereby 
stalling/hanging.

Here are logs of:
virsh -c qemu+ssh://127.0.0.1/system?proxy=netcat vol-download --pool uvtool 
h-migr-test.qcow testfile
=> ~150-220MB/s

vs

virsh -c qemu+ssh://127.0.0.1/system?proxy=native vol-download --pool uvtool 
h-migr-test.qcow testfile
=> 5 MB/s degrading to ~200 KB/s and less

Attaching logs taken with these configs:
log_filters="1:qemu 1:libvirt 3:object 3:json 3:event 1:util"
log_outputs="1:file:/var/log/libvirtd.log"

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1904584

Title:
  libvirt 6.8 / qemu 5.1 - --p2p --tunnelled is hanging

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1904584/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1904584] Re: libvirt 6.8 / qemu 5.1 - --p2p --tunnelled is hanging

2020-11-25 Thread Christian Ehrhardt
** Attachment added: "libvirt debug log - native (new) mode being slow"
   
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1904584/+attachment/5437960/+files/vol-download.native.log

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1904584

Title:
  libvirt 6.8 / qemu 5.1 - --p2p --tunnelled is hanging

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1904584/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1905674] [NEW] libvirt snapshots specifying --memspec need apparmor support

2020-11-25 Thread Christian Ehrhardt
Public bug reported:

In a similar way as we found in bug 1845506 - where multiple disks can kill
the rules for each other - the rarely used snapshot option --memspec has
issues as well.

If used, the flow reaches disk access before the rules are added (maybe none
are added for the memspec itself, but the failing access is on the actual
snapshot, which works without --memspec).
So a rule that would normally be created is not yet in place at the time access
starts.

Repro:
#1 get a guest
$ uvt-kvm create --host-passthrough --password=ubuntu h-test release=hirsute 
arch=amd64 label=daily
# get rid of secondary disk (otherwise we'd need to back that up as well)
$ virsh detach-disk h-test vdb
$ virsh snapshot-create-as --domain h-test --name h-test-snap --diskspec 
vda,snapshot=external,file=/var/lib/uvtool/libvirt/images/h-test.qcow.snapshot 
--memspec snapshot=external,file=/var/lib/uvtool/libvirt/images/h-test2.mem 
--print-xml


Denial:
[3006813.872572] audit: type=1400 audit(1606374248.321:6198): apparmor="DENIED" 
operation="open" namespace="root//lxd-f_" 
profile="libvirt-8f8dce51-0abb-470f-a5b1-dd11393cc0c8" 
name="/var/lib/uvtool/libvirt/images/h-test2.qcow.snapshot" pid=1014838 
comm="qemu-system-x86" requested_mask="r" denied_mask="r" fsuid=64055 ouid=64055

IMHO this is super uncommon (it has existed for years without a report yet),
but if one is affected you'd need to add an override either for all
guests (/etc/apparmor.d/local/abstractions/libvirt-qemu) or for an
individual guest (/etc/apparmor.d/libvirt/libvirt-) - a sketch follows below.
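A hedged example of such an override (the paths are just the ones from this
repro and the exact permission mask may differ - treat both as assumptions,
not the final rule):

  # allow qemu to access the external snapshot/memspec targets of this repro
  "/var/lib/uvtool/libvirt/images/*.snapshot" rwk,
  "/var/lib/uvtool/libvirt/images/*.mem" rwk,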

Due to that the priority is IMHO low, but this bug shall help people who
search the net for it and be a place to chime in and outline why this use
case is more important than we think atm.

** Affects: libvirt (Ubuntu)
 Importance: Low
 Status: Confirmed

** Changed in: libvirt (Ubuntu)
   Importance: Undecided => Low

** Changed in: libvirt (Ubuntu)
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1905674

Title:
  libvirt snapshots specifying --memspec need apparmor support

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1905674/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1890435] Re: gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation fault

2020-11-25 Thread Christian Ehrhardt
r10-6080 now had 10 good runs.

I'm going back to test 20200507 next - we had bad states with that
version so often that this MUST trigger IMHO.

Reminder: this runs in armhf LXD containers on arm64 VMs (like our builds do).
I'm slowly getting the feeling it could be an issue with the underlying
virtualization or bare metal.
We had a datacenter move, so the cloud runs on the same bare metal overall, but
my instance could run on different hardware today than last week. If 20200507 no
longer triggers it we have to investigate where the code is running.


20190425 good
r10-1014
r10-2027 good
r10-2533
r10-3040 good
r10-3220
r10-3400 good
r10-3450
r10-3475
r10-3478
r10-3593
r10-3622
r10-3657 good
r10-3727 good
r10-4054 other kind of bad?
r10-6080 good
20200507 bad ?next?

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1890435

Title:
  gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation
  fault

To manage notifications about this bug go to:
https://bugs.launchpad.net/groovy/+bug/1890435/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1890435] Re: gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation fault

2020-11-25 Thread Christian Ehrhardt
FYI - the inquiry about the underlying HW/SW is in RT 128805 - I set Doko and
Rick on CC on that.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1890435

Title:
  gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation
  fault

To manage notifications about this bug go to:
https://bugs.launchpad.net/groovy/+bug/1890435/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1904584] Re: libvirt 6.8 / qemu 5.1 - --p2p --tunnelled is hanging

2020-11-26 Thread Christian Ehrhardt
https://www.redhat.com/archives/libvir-list/2020-November/msg01472.html has 
fixes.
I've done some extensive testing and they work fine for me.

Let us wait the few days until it lands in 6.10 and then either upgrade
to 6.10 (if trivial) or apply the two patches for now (I've done that
already - no patch noise and working).

** Changed in: libvirt (Ubuntu)
   Status: New => In Progress

** Changed in: qemu (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1904584

Title:
  libvirt 6.8 / qemu 5.1 - --p2p --tunnelled is hanging

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1904584/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1890435] Re: gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation fault

2020-11-26 Thread Christian Ehrhardt
Ok, 20200507 almost immediately triggered the ICE

/root/qemu-5.0/linux-user/syscall.c: In function ‘do_syscall1’:
/root/qemu-5.0/linux-user/syscall.c:12479:1: internal compiler error: 
Segmentation fault
12479 | }
  | ^

0x5518cb crash_signal
../../gcc/gcc/toplev.c:328
0x542673 avoid_constant_pool_reference(rtx_def*)
../../gcc/gcc/simplify-rtx.c:237
0x515cad commutative_operand_precedence(rtx_def*)
../../gcc/gcc/rtlanal.c:3482
0x515d6b swap_commutative_operands_p(rtx_def*, rtx_def*)
../../gcc/gcc/rtlanal.c:3543
0x53cacb simplify_binary_operation(rtx_code, machine_mode, rtx_def*, rtx_def*)
../../gcc/gcc/simplify-rtx.c:2333
0x53cb19 simplify_gen_binary(rtx_code, machine_mode, rtx_def*, rtx_def*)
../../gcc/gcc/simplify-rtx.c:189
0x44d033 lra_constraints(bool)
../../gcc/gcc/lra-constraints.c:4964
0x440653 lra(_IO_FILE*)
../../gcc/gcc/lra.c:2440
0x411f05 do_reload
../../gcc/gcc/ira.c:5523
0x411f05 execute
../../gcc/gcc/ira.c:5709


This triggered on the first build. While waiting for some builds between 
r10-6080 and 20200507 I'll rerun this version to get some stats on how early to 
expect it.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1890435

Title:
  gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation
  fault

To manage notifications about this bug go to:
https://bugs.launchpad.net/groovy/+bug/1890435/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1890435] Re: gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation fault

2020-11-26 Thread Christian Ehrhardt
Another crash with 20200507 at first try:

/root/qemu-5.0/fpu/softfloat.c: In function ‘float128_div’:
/root/qemu-5.0/fpu/softfloat.c:7504:1: internal compiler error: Segmentation 
fault
 7504 | }
  | ^

0x5518cb crash_signal
../../gcc/gcc/toplev.c:328
0x43e363 add_regs_to_insn_regno_info
../../gcc/gcc/lra.c:1512
0x43e465 add_regs_to_insn_regno_info
../../gcc/gcc/lra.c:1534
0x43e465 add_regs_to_insn_regno_info
../../gcc/gcc/lra.c:1534
0x43e51b add_regs_to_insn_regno_info
../../gcc/gcc/lra.c:1538
0x43f497 lra_update_insn_regno_info(rtx_insn*)
../../gcc/gcc/lra.c:1627
0x43f5dd lra_update_insn_regno_info(rtx_insn*)
../../gcc/gcc/lra.c:1620
0x43f5dd lra_push_insn_1
../../gcc/gcc/lra.c:1777
0x4579fb spill_pseudos
../../gcc/gcc/lra-spills.c:542
0x4579fb lra_spill()
../../gcc/gcc/lra-spills.c:655
0x4406bf lra(_IO_FILE*)
../../gcc/gcc/lra.c:2557
0x411f05 do_reload
../../gcc/gcc/ira.c:5523
0x411f05 execute
../../gcc/gcc/ira.c:5709

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1890435

Title:
  gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation
  fault

To manage notifications about this bug go to:
https://bugs.launchpad.net/groovy/+bug/1890435/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1890435] Re: gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation fault

2020-11-26 Thread Christian Ehrhardt
Doko is so kind and builds r10-7093 for me.

20190425 good
r10-1014
r10-2027 good
r10-2533
r10-3040 good
r10-3220
r10-3400 good
r10-3450
r10-3475
r10-3478
r10-3593
r10-3622
r10-3657 good
r10-3727 good
r10-4054 other kind of bad?
r10-6080 good
r10-7093 next
20200507 bad bad bad

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1890435

Title:
  gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation
  fault

To manage notifications about this bug go to:
https://bugs.launchpad.net/groovy/+bug/1890435/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1905735] [NEW] ubuntu-image autopkgtests failing since pytohn-debian 0.1.38

2020-11-26 Thread Christian Ehrhardt
Public bug reported:

It seems that since some - yet to be found - change around the 20th of Nov
the autopkgtests of ubuntu-image fail.

Tests all list those three sub-tests as failing:
 unittests.sh FAIL non-zero exit status 1
 qa   FAIL non-zero exit status 1
 coverage.sh  FAIL non-zero exit status 1

The failures all seem to be related to some python/pytest/py* change that
might have slipped in without gating on this test.

Ubuntu-image itself also isn't new - still the same as in groovy
 ubuntu-image | 1.10+20.10ubuntu2 | groovy | source, all
 ubuntu-image | 1.10+20.10ubuntu2 | hirsute| source, all

== log start ===
Obtaining file:///tmp/autopkgtest.ZuL7Da/build.chY/src
ERROR: Command errored out with exit status 1:
 command: /tmp/autopkgtest.ZuL7Da/build.chY/src/.tox/py38-nocov/bin/python 
-c 'import sys, setuptools, tokenize; sys.argv[0] = 
'"'"'/tmp/autopkgtest.ZuL7Da/build.chY/src/setup.py'"'"'; 
__file__='"'"'/tmp/autopkgtest.ZuL7Da/build.chY/src/setup.py'"'"';f=getattr(tokenize,
 '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', 
'"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info 
--egg-base /tmp/pip-pip-egg-info-yaplrymq
 cwd: /tmp/autopkgtest.ZuL7Da/build.chY/src/
Complete output (5 lines):
Traceback (most recent call last):
  File "", line 1, in 
  File "/tmp/autopkgtest.ZuL7Da/build.chY/src/setup.py", line 49, in 

__version__ = str(Changelog(infp).get_version())
AttributeError: 'Changelog' object has no attribute 'get_version'

ERROR: Command errored out with exit status 1: python setup.py egg_info Check 
the logs for full command output.
=== log end 

The issue is reproducible in local KVM-autopkgtest against hirsute-proposed
and hirsute-release for me (I mistyped before).
Example:
 sudo ~/work/autopkgtest/autopkgtest/runner/autopkgtest --no-built-binaries 
--apt-upgrade --apt-pocket=proposed --shell-fail 
ubuntu-image_1.10+20.10ubuntu2.dsc --testname=qa -- qemu --qemu-options='-cpu 
host' --ram-size=1536 --cpus 2 ~/work/autopkgtest-hirsute-amd64.img


In terms of similar bug signatures I found
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=973227
Fixed by:
https://gitlab.kitware.com/debian/dh-cmake/-/commit/3337c8e0e9ebd109490d3c40f0bd5c1e367bedc8

Looking for the same issue in ubuntu-image has shown an entry in setup.py
  setup.py:49:__version__ = str(Changelog(infp).get_version())

And now that we know all that, see
https://launchpad.net/ubuntu/+source/python-debian/+publishinghistory

The new version has been in since
2020-11-20 02:23:27 CET

That is a perfect match to our bug.


$ diff -Naur python-debian-0.1.3[78]/lib/debian/changelog.py
...
-def get_version(self):
-# type: () -> Version
+def _get_version(self):
+# type: () -> Optional[Version]
 """Return a Version object for the last version"""
-return self._blocks[0].version
+return self._blocks[0].version   # type: ignore
...

** Affects: python-debian (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: ubuntu-image (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: update-excuse

** Tags added: update-excuse

** Also affects: python-debian (Ubuntu)
   Importance: Undecided
   Status: New

** Summary changed:

- autopkgtests failing since 20th Nov
+ ubuntu-image autopkgtests failing since pytohn-debian 0.1.38

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1905735

Title:
  ubuntu-image autopkgtests failing since pytohn-debian 0.1.38

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/python-debian/+bug/1905735/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1905735] Re: ubuntu-image autopkgtests failing since pytohn-debian 0.1.38

2020-11-26 Thread Christian Ehrhardt
** Description changed:

  In the tests it seems that since some - yet to be found - change ~20th
  Nov the tests of ubuntu-image fail.
  
  Tests all list those three sub-tests as failing:
-  unittests.sh FAIL non-zero exit status 1
-  qa   FAIL non-zero exit status 1
-  coverage.sh  FAIL non-zero exit status 1
+  unittests.sh FAIL non-zero exit status 1
+  qa   FAIL non-zero exit status 1
+  coverage.sh  FAIL non-zero exit status 1
  
  Fails all seem to be related to some python/pytest/py* change that might
  have slipped in without gating on this test.
  
  Ubuntu-image itself also isn't new - still the same as in groovy
-  ubuntu-image | 1.10+20.10ubuntu2 | groovy | source, all
-  ubuntu-image | 1.10+20.10ubuntu2 | hirsute| source, all
- 
+  ubuntu-image | 1.10+20.10ubuntu2 | groovy | source, all
+  ubuntu-image | 1.10+20.10ubuntu2 | hirsute| source, all
  
  == log start 
===
  Obtaining file:///tmp/autopkgtest.ZuL7Da/build.chY/src
- ERROR: Command errored out with exit status 1:
-  command: 
/tmp/autopkgtest.ZuL7Da/build.chY/src/.tox/py38-nocov/bin/python -c 'import 
sys, setuptools, tokenize; sys.argv[0] = 
'"'"'/tmp/autopkgtest.ZuL7Da/build.chY/src/setup.py'"'"'; 
__file__='"'"'/tmp/autopkgtest.ZuL7Da/build.chY/src/setup.py'"'"';f=getattr(tokenize,
 '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', 
'"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info 
--egg-base /tmp/pip-pip-egg-info-yaplrymq
-  cwd: /tmp/autopkgtest.ZuL7Da/build.chY/src/
- Complete output (5 lines):
- Traceback (most recent call last):
-   File "", line 1, in 
-   File "/tmp/autopkgtest.ZuL7Da/build.chY/src/setup.py", line 49, in 

- __version__ = str(Changelog(infp).get_version())
- AttributeError: 'Changelog' object has no attribute 'get_version'
- 
+ ERROR: Command errored out with exit status 1:
+  command: 
/tmp/autopkgtest.ZuL7Da/build.chY/src/.tox/py38-nocov/bin/python -c 'import 
sys, setuptools, tokenize; sys.argv[0] = 
'"'"'/tmp/autopkgtest.ZuL7Da/build.chY/src/setup.py'"'"'; 
__file__='"'"'/tmp/autopkgtest.ZuL7Da/build.chY/src/setup.py'"'"';f=getattr(tokenize,
 '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', 
'"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info 
--egg-base /tmp/pip-pip-egg-info-yaplrymq
+  cwd: /tmp/autopkgtest.ZuL7Da/build.chY/src/
+ Complete output (5 lines):
+ Traceback (most recent call last):
+   File "", line 1, in 
+   File "/tmp/autopkgtest.ZuL7Da/build.chY/src/setup.py", line 49, in 

+ __version__ = str(Changelog(infp).get_version())
+ AttributeError: 'Changelog' object has no attribute 'get_version'
+ 
  ERROR: Command errored out with exit status 1: python setup.py egg_info Check 
the logs for full command output.
  === log end 

  
+ The issue reproducible in local KVM-autopkgtest against hirsute-proposed and 
hirsute-release for me (I mistyped before).
+ Example:
+  sudo ~/work/autopkgtest/autopkgtest/runner/autopkgtest --no-built-binaries 
--apt-upgrade --apt-pocket=proposed --shell-fail 
ubuntu-image_1.10+20.10ubuntu2.dsc --testname=qa -- qemu --qemu-options='-cpu 
host' --ram-size=1536 --cpus 2 ~/work/autopkgtest-hirsute-amd64.img
  
- The issue not reproducible in local KVM-autopkgtest against hirsute-proposed 
and hirsute-release for me atm. Which is odd, but could be due to whatever 
python changes that are being all-in/all-out.
  
  In terms of similar bug signatures I found
  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=973227
  Fixed by:
  
https://gitlab.kitware.com/debian/dh-cmake/-/commit/3337c8e0e9ebd109490d3c40f0bd5c1e367bedc8
  
  Looking for the same issue in ubuntu-image has shown an entry in setup.py
-   setup.py:49:__version__ = str(Changelog(infp).get_version())
+   setup.py:49:__version__ = str(Changelog(infp).get_version())
  
  And now that we know all that we see
  https://launchpad.net/ubuntu/+source/python-debian/+publishinghistory
  
  New version in since
  2020-11-20 02:23:27 CET
  
  That is a perfect match to our bug.
+ 
+ 
+ $ diff -Naur python-debian-0.1.3[78]/lib/debian/changelog.py
+ ...
+ -def get_version(self):
+ -# type: () -> Version
+ +def _get_version(self):
+ +# type: () -> Optional[Version]
+  """Return a Version object for the last version"""
+ -return self._blocks[0].version
+ +return self._blocks[0].version   # type: ignore
+ ...

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1905735

Titl

[Bug 1905735] Re: ubuntu-image autopkgtests failing since pytohn-debian 0.1.38

2020-11-26 Thread Christian Ehrhardt
PPA: https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/4353/
...

killing the rest as I've seen seb128 do the same ...

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1905735

Title:
  ubuntu-image autopkgtests failing since python-debian 0.1.38

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/python-debian/+bug/1905735/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1905735] Re: ubuntu-image autopkgtests failing since pytohn-debian 0.1.38

2020-11-26 Thread Christian Ehrhardt
Seb has opened: 
https://code.launchpad.net/~seb128/ubuntu-image/+git/ubuntu-image/+merge/394533
It is identical to what I built and tested in 
https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/4353/+packages
If we don't want to wait and instead retrigger the tests against the new version,
here is a hint: 
https://code.launchpad.net/~paelzer/britney/+git/hints-ubuntu/+merge/394535

** Bug watch added: Debian Bug tracker #975910
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=975910

** Also affects: python-debian (Debian) via
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=975910
   Importance: Unknown
   Status: Unknown

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1905735

Title:
  ubuntu-image autopkgtests failing since python-debian 0.1.38

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/python-debian/+bug/1905735/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1802533] Re: [MIR] pipewire

2020-11-26 Thread Christian Ehrhardt
Hi Rasmus,
this looks somewhat ready to me - it is now on the Desktop team to integrate it
and thereby trigger the component mismatch that will pull it into main.
Last week I pinged seb128 and didrocks about it - AFAIK they will take a
look.

=> https://irclogs.ubuntu.com/2020/11/19/%23ubuntu-desktop.html#t09:23

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1802533

Title:
  [MIR] pipewire

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pipewire/+bug/1802533/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1904584] Re: libvirt 6.8 / qemu 5.1 - --p2p --tunnelled is hanging

2020-11-26 Thread Christian Ehrhardt
Changes are upstream as:
829142699ecf1b51e677c6719fce29282af62c92
6d69afe4517646811ee96981408bc6fc18b5ffbb

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1904584

Title:
  libvirt 6.8 / qemu 5.1 - --p2p --tunnelled is hanging

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1904584/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1904584] Re: libvirt 6.8 / qemu 5.1 - --p2p --tunnelled is hanging

2020-11-26 Thread Christian Ehrhardt
Uploaded the fix after some more testing; this should fix the remaining
known issues we had with libvirt 6.9 / qemu 5.1 in 21.04.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1904584

Title:
  libvirt 6.8 / qemu 5.1 - --p2p --tunnelled is hanging

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1904584/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1890435] Re: gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation fault

2020-11-27 Thread Christian Ehrhardt
3 full runs good with r10-7093 but then I got:

/root/qemu-5.0/disas/nanomips.cpp: In member function ‘std::string 
NMD::JALRC_HB(uint64)’:
/root/qemu-5.0/disas/nanomips.cpp:7969:1: internal compiler error: Segmentation 
fault
 7969 | }
  | ^
0x602fa7 crash_signal
../../gcc/gcc/toplev.c:328
0x4f1f47 add_regs_to_insn_regno_info
../../gcc/gcc/lra.c:1509
0x4f203b add_regs_to_insn_regno_info
../../gcc/gcc/lra.c:1531
0x4f3061 lra_update_insn_regno_info(rtx_insn*)
../../gcc/gcc/lra.c:1624
0x505ca7 process_insn_for_elimination
../../gcc/gcc/lra-eliminations.c:1322
0x505ca7 lra_eliminate(bool, bool)
../../gcc/gcc/lra-eliminations.c:1372
0x500877 lra_constraints(bool)
../../gcc/gcc/lra-constraints.c:4856
0x4f4237 lra(_IO_FILE*)
../../gcc/gcc/lra.c:2437
0x4c5c59 do_reload
../../gcc/gcc/ira.c:5523
0x4c5c59 execute
../../gcc/gcc/ira.c:5709

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1890435

Title:
  gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation
  fault

To manage notifications about this bug go to:
https://bugs.launchpad.net/groovy/+bug/1890435/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1890435] Re: gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation fault

2020-11-27 Thread Christian Ehrhardt
We again need to ask: is this the one we are hunting for - or might it be
another issue in between?
Doko?

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1890435

Title:
  gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation
  fault

To manage notifications about this bug go to:
https://bugs.launchpad.net/groovy/+bug/1890435/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1905950] [NEW] openmpi 4.0.5 breaks mpi4py autopkgtest

2020-11-27 Thread Christian Ehrhardt
Public bug reported:

mpi4py fails with new openmpi:

- On amd64 and arm64 the tests fail
- On armhf and s390x the tests are skipped and therefore pass

Errors:
  test/test_rma.py::TestRMAWorld::testAccumulate FAILEDFAILED  
[ 82%]
  test/test_rma.py::TestRMASelf::testStartComplete FAILED  [ 
82%]
  test/test_rma.py::TestRMASelf::testPutGet FAILED  [ 82%]
  test/test_rma.py::TestRMASelf::testStartComplete FAILED  [ 
82%]

A few later tests sometimes fail as well, but they vary.

Debian had those issues in 3.0.3-6
  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=974560
  
https://bitbucket.org/mpi4py/mpi4py/issues/171/test_rmapy-failing-tests-with-openmpi-404
Was skipped for certain versions
  
https://salsa.debian.org/science-team/mpi4py/-/commit/1ee01a245b7097a46da95489a5e962c3165ebfa2

3.0.3-7 should have what we need, but on rerun it still is failing the
same way.

Local repro shows these failures as well

Note: the working Debian tests all used Python 3.8, while we are on 3.9 already.
Ubu: platform linux -- Python 3.9.0+, pytest-4.6.11, py-1.9.0, pluggy-0.13.0 -- 
/usr/bin/python3
Deb: platform linux -- Python 3.8.6,  pytest-4.6.11, py-1.9.0, pluggy-0.13.0 -- 
/usr/bin/python3


But those should be skipped in the new version - right?
/tmp/autopkgtest.a6nCsM/build.DVK/src$ grep -Hrn -C1 testAccumulate test/*
grep: test/__pycache__/test_rma.cpython-39-PYTEST.pyc: binary file matches
grep: test/__pycache__/test_rma_nb.cpython-39-PYTEST.pyc: binary file matches
--
test/test_rma.py-92-@unittest.skipMPI('openmpi(>=4.0.4,<4.1)')
test/test_rma.py:93:def testAccumulate(self):
test/test_rma.py-94-group = self.WIN.Get_group()
...


Test results are a bit more readable with:
$ mpirun -n 2 python3 -m pytest test/test_rma.py -vv


In debugging I found that
@unittest.skipMPI('openmpi(>=4.0.4,<4.1)')
does not actually skip it in our case.


With this installed:
  libopenmpi3:amd64  4.0.5-7ubuntu1
I get these results in skipMPI:
  DEBUG:TestLog:skipMPI-vendor: name Open MPI version (4, 0, 3)

I'd expect 4.0.5 ?!

Debugging further revealed that mpi4py internalizes the versions at build
time:
#elif defined(OPEN_MPI)

  name = "Open MPI";
  #if defined(OMPI_MAJOR_VERSION)
  major = OMPI_MAJOR_VERSION;
  #endif
  #if defined(OMPI_MINOR_VERSION)
  minor = OMPI_MINOR_VERSION;
  #endif
  #if defined(OMPI_RELEASE_VERSION)
  micro = OMPI_RELEASE_VERSION;
  #endif


This is a build-time constant - it does not check the installed lib!
It needs a rebuild of mpi4py against the new version.
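A quick runtime check illustrates this (a sketch - it assumes mpi4py is
importable, and the output shown is what I'd expect rather than captured
verbatim):

  $ python3 -c "from mpi4py import MPI; print(MPI.get_vendor()); print(MPI.Get_library_version())"
  ('Open MPI', (4, 0, 3))   <- build-time constant that skipMPI() consults
  Open MPI v4.0.5, ...      <- what the installed library actually reports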

A rebuild of mpi4py should resolve it.
Test build in
  https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/4354

Tested in autopkgtest against that PPA - it works fine now.
$ sudo ~/work/autopkgtest/autopkgtest/runner/autopkgtest --no-built-binaries 
--apt-upgrade --apt-pocket=proposed --setup-commands="add-apt-repository 
ppa:ci-train-ppa-service/4354; apt update; apt -y upgrade" --shell-fail 
mpi4py_3.0.3-7build1.dsc -- qemu --ram-size=1536 --cpus 2 
~/work/autopkgtest-hirsute-amd64.img

Uploading the no-change rebuild and tagging this with update-excuse to
reflect that until it is resolved.

** Affects: mpi4py (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: openmpi (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: update-excuse

** Also affects: mpi4py (Ubuntu)
   Importance: Undecided
   Status: New

** Tags added: update-excuse

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1905950

Title:
  openmpi 4.0.5 breaks mpi4py autopkgtest

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/mpi4py/+bug/1905950/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1905790] Re: Recompile SSSD in 20.04 using OpenSSL (instead of NSS) support

2020-11-27 Thread Christian Ehrhardt
** Tags added: server-next

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1905790

Title:
  Recompile SSSD in 20.04 using OpenSSL (instead of NSS) support

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sssd/+bug/1905790/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1868703] Re: Support "ad_use_ldaps" flag for new AD requirements (ADV190023)

2020-11-27 Thread Christian Ehrhardt
** Tags added: verification-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1868703

Title:
  Support "ad_use_ldaps" flag for new AD requirements (ADV190023)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cyrus-sasl2/+bug/1868703/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1905783] Re: package libapache2-mod-wsgi-py3 4.6.8-1ubuntu3 failed to install/upgrade: el subproceso instalado paquete libapache2-mod-wsgi-py3 script post-installation devolvió el código de salid

2020-11-27 Thread Christian Ehrhardt
Thank you for taking the time to report this bug and helping to make
Ubuntu better.

On upgrade of a service that service has to be restarted to pick up the fixes.
Rather rarely there is a real issue, e.g. the newer version failing with a
formerly working configuration.
But most of the time what happened is that a service was installed but stayed
unconfigured, or was experimented with and left in a broken state.

Now on any update of the related packages that service has to be restarted, but 
since its config is incomplete/faulty it fails to restart.
Therefore the update of that package has to consider itself incomplete.

Depending on your particular case there are two solutions:
- either remove the offending package if you don't want to continue using it.
- Or if you do want to keep it please fix the configuration so that re-starting 
the service will work.

Since it seems likely to me that this is a local configuration problem,
rather than a bug in Ubuntu, I'm marking this bug as Incomplete.

If indeed this is a local configuration problem, you can find pointers
to get help for this sort of problem here:
http://www.ubuntu.com/support/community

Or if you believe that this is really a bug, then you may find it
helpful to read "How to report bugs effectively"
http://www.chiark.greenend.org.uk/~sgtatham/bugs.html. We'd be grateful
if you would then provide a more complete description of the problem,
explain why you believe this is a bug in Ubuntu rather than a problem
specific to your system, and then change the bug status back to New.

** Changed in: mod-wsgi (Ubuntu)
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1905783

Title:
  package libapache2-mod-wsgi-py3 4.6.8-1ubuntu3 failed to
  install/upgrade: el subproceso instalado paquete libapache2-mod-wsgi-
  py3 script post-installation devolvió el código de salida de error 1

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/mod-wsgi/+bug/1905783/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1905783] Re: package libapache2-mod-wsgi-py3 4.6.8-1ubuntu3 failed to install/upgrade: el subproceso instalado paquete libapache2-mod-wsgi-py3 script post-installation devolvió el código de salid

2020-11-27 Thread Christian Ehrhardt
From your logs
 modified.conffile..etc.apache2.mods-available.wsgi.conf: [deleted]
 modified.conffile..etc.apache2.mods-available.wsgi.load: [deleted]

Other than that there isn't much in the logs you attached, but this quite
likely prevents the service from starting. And restarting the service is
needed on upgrade.

You might consider restoring the conffiles, see [1] as an example.
[1]: https://askubuntu.com/questions/66533/how-can-i-restore-configuration-files
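For example, reinstalling the package while telling dpkg to recreate missing
conffiles usually brings them back (package name taken from this report):

  sudo apt-get install --reinstall -o Dpkg::Options::="--force-confmiss" libapache2-mod-wsgi-py3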

I'm not sure I can do more to help with these logs. If the above doesn't
help you, please report back with info on whether apache2 properly restarts,
and details of the service messages.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1905783

Title:
  package libapache2-mod-wsgi-py3 4.6.8-1ubuntu3 failed to
  install/upgrade: el subproceso instalado paquete libapache2-mod-wsgi-
  py3 script post-installation devolvió el código de salida de error 1

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/mod-wsgi/+bug/1905783/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1890435] Re: gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation fault

2020-11-27 Thread Christian Ehrhardt
To be sure I was running r10-7093 again and so far got 8 good runs in a row :-/
If only we had a better trigger :-/

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1890435

Title:
  gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation
  fault

To manage notifications about this bug go to:
https://bugs.launchpad.net/groovy/+bug/1890435/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1890435] Re: gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation fault

2020-11-28 Thread Christian Ehrhardt
14 runs and going ...
It was never "so rare" when we were at the gcc that is in hirsute or 20200507.
I'll let it continue to run for now

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1890435

Title:
  gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation
  fault

To manage notifications about this bug go to:
https://bugs.launchpad.net/groovy/+bug/1890435/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1890435] Re: gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation fault

2020-11-29 Thread Christian Ehrhardt
Failed on #17
during RTL pass: reload

/root/qemu-5.0/fpu/softfloat.c: In function ‘soft_f64_muladd’:
/root/qemu-5.0/fpu/softfloat.c:1535:1: internal compiler error: Segmentation 
fault
 1535 | }
  | ^
cc -iquote /root/qemu-5.0/b/qemu/target/mips -iquote target/mips -iquote 
/root/qemu-5.0/tcg/arm -isystem /root/qemu-5.0/linux-headers -isystem 
/root/qemu-5.0/b/qemu/linux-headers -iquote . -iquote /root/qemu-5.0 -iquote 
/root/qemu-5.0/accel/tcg -iquote /root/qemu-5.0/include -iquote 
/root/qemu-5.0/disas/libvixl -I/usr/include/pixman-1   -pthread 
-I/usr/include/glib-2.0 -I/usr/lib/arm-linux-gnueabihf/glib-2.0/include 
-pthread -I/usr/include/glib-2.0 
-I/usr/lib/arm-linux-gnueabihf/glib-2.0/include -fPIE -DPIE  -D_GNU_SOURCE 
-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wstrict-prototypes 
-Wredundant-decls -Wall -Wundef -Wwrite-strings -Wmissing-prototypes 
-fno-strict-aliasing -fno-common -fwrapv -std=gnu99  -g -O2 
-fdebug-prefix-map=/root/qemu-5.0=. -fstack-protector-strong -Wformat 
-Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -Wexpansion-to-defined 
-Wendif-labels -Wno-shift-negative-value -Wno-missing-include-dirs -Wempty-body 
-Wnested-externs -Wformat-security -Wformat-y2k -Winit-self 
-Wignored-qualifiers -Wold-style-declaration -Wold-style-definition 
-Wtype-limits -fstack-protector-strong -I/usr/include/p11-kit-1  
-DSTRUCT_IOVEC_DEFINED  -I/usr/include/libpng16  
-I/root/qemu-5.0/capstone/include -isystem ../linux-headers -iquote .. -iquote 
/root/qemu-5.0/target/mips -DNEED_CPU_H -iquote /root/qemu-5.0/include -MMD -MP 
-MT target/mips/helper.o -MF target/mips/helper.d -O2 -U_FORTIFY_SOURCE 
-D_FORTIFY_SOURCE=2 -g   -c -o target/mips/helper.o 
/root/qemu-5.0/target/mips/helper.c
0x527c2f crash_signal
../../gcc/gcc/toplev.c:328
0x4147bf add_regs_to_insn_regno_info
../../gcc/gcc/lra.c:1509
0x4148b3 add_regs_to_insn_regno_info
../../gcc/gcc/lra.c:1531
0x4148b3 add_regs_to_insn_regno_info
../../gcc/gcc/lra.c:1531
0x4158d9 lra_update_insn_regno_info(rtx_insn*)
../../gcc/gcc/lra.c:1624
0x415a29 lra_update_insn_regno_info(rtx_insn*)
../../gcc/gcc/lra.c:1617
0x415a29 lra_push_insn_1
../../gcc/gcc/lra.c:1774
0x42dd53 spill_pseudos
../../gcc/gcc/lra-spills.c:523
0x42dd53 lra_spill()
../../gcc/gcc/lra-spills.c:636
0x416b1b lra(_IO_FILE*)
../../gcc/gcc/lra.c:2554
0x3e84d1 do_reload
../../gcc/gcc/ira.c:5523
0x3e84d1 execute
../../gcc/gcc/ira.c:5709

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1890435

Title:
  gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation
  fault

To manage notifications about this bug go to:
https://bugs.launchpad.net/groovy/+bug/1890435/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1890435] Re: gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation fault

2020-11-29 Thread Christian Ehrhardt
I'm not yet sure what we should learn from that - do we need 30 runs of
each step to be somewhat sure? That makes an already slow bisect even
slower ...

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1890435

Title:
  gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation
  fault

To manage notifications about this bug go to:
https://bugs.launchpad.net/groovy/+bug/1890435/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1890435] Re: gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation fault

2020-11-29 Thread Christian Ehrhardt
FYI - another 8 runs without a crash on r10-7093.
My current working theory is that the root cause of the crash might have been
added as early as r10-4054, but one or many later changes have increased the
chance (think: widened the race window or such) for the issue to trigger.
If that assumption is true then with the current testcase it is nearly
impossible to properly bisect the "original root cause". And at the same time it
is still hard to find the change that increased the race window - since crashing
early does not surely imply we are in the high/low chance area.


We've had many runs with the base version so that one is really good.
But any other good result we've had so far could - in theory - be challenged
and needs ~30 good runs to be somewhat sure (phew, that will be a lot of time).

I'm marking the old runs that are debatable with "good?".

Also we might want to look for just the "new" crash signature.

20190425 good
r10-1014
r10-2027 good?4
r10-2533
r10-3040 good?4
r10-3220
r10-3400 good?4
r10-3450
r10-3475
r10-3478
r10-3593
r10-3622
r10-3657 good?5
r10-3727 good?3
r10-4054 other kind of bad - signature different, and rare?
r10-6080 good?10
r10-7093 bad, but slow to trigger
20200507 bad bad bad

Signatures:
r10-4054 arm_legitimate_address_p (nonimmediate)
r10-7093 add_regs_to_insn_regno_info (lra)
r10-7093 add_regs_to_insn_regno_info (lra)
20200507 extract_plus_operands (lra)
20200507 avoid_constant_pool_reference (lra)
20200507 add_regs_to_insn_regno_info (lra)
ubu-10.2 add_regs_to_insn_regno_info (lra)
ubu-10.2 avoid_constant_pool_reference (lra)
ubu-10.2 thumb2_legitimate_address_p (lra)
ubu-10.2 add_regs_to_insn_regno_info (lra)

Of course it could be that the same root cause surfaces as two different
signatures - but it could just as well be a multitude of issues. Therefore
- for now - "add_regs_to_insn_regno_info (lra)" is what I'll continue to
hunt for.

With some luck (do we have any in this?) the 10 runs on 6080 are sufficient.
Let us try r10-6586 next and plan for 15-30 runs to be sure it is good.
If we hit the issue I'll still re-run it so we can compare multiple signatures.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1890435

Title:
  gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation
  fault

To manage notifications about this bug go to:
https://bugs.launchpad.net/groovy/+bug/1890435/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1906155] Re: USB Passthrough Fails on Start, Needs domain Reset

2020-11-30 Thread Christian Ehrhardt
Hi Russel,
I haven't seen such issues recently, but let us try to sort it out.
For that I beg your pardon but I need to start asking for a few details:

1. what exactly (commands) does "reset of the VM" mean for you?
2. in the guest does the output of lspci -v (or whatever the macos counterpart 
is) change before/after reset and if so how does it change?
3. Could you track on your host "journalctl -f" output, then start the guest, 
then reset the guest - and attach that log here. If possible please identify 
the timestamps when you have reset the guest.

** Changed in: qemu (Ubuntu)
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1906155

Title:
  USB Passthrough Fails on Start, Needs domain Reset

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1906155/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1890435] Re: gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation fault

2020-11-30 Thread Christian Ehrhardt
Since this seems to become a reproducibility-fest I've spawned and
prepared two more workers using the same setup as the one we used
before. That should allow for some more runs per day and increase the rate
at which we can process this - given the new insight into its unreliability.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1890435

Title:
  gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation
  fault

To manage notifications about this bug go to:
https://bugs.launchpad.net/groovy/+bug/1890435/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1896250] Re: SDL support is missing while virglrenderer has problems with GTK

2020-11-30 Thread Christian Ehrhardt
Thanks for the feedback Oliver!

IMHO re-enabling SDL for that is still not an option, for all the reasons
outlined and discussed when it was disabled. Since we still lack a good
"next step to go" other than trying new versions, I'm unsure what to do
for now.

For the sake of trying - qemu 5.1 is in the new Ubuntu 21.04 (in
development but usable), which could be worth a shot.


Grml ... we don't even have a good crash to debug/report upstream, just "it
grinds to a halt" :-/.

Nevertheless, especially after you have had a chance to try the most
recent qemu 5.1, if it still fails you might consider reporting it to the
upstream mailing list [1] (there might be a graphics expert on the list who
recognizes something).

P.S. Qemu 5.2 is out mid-December and in Ubuntu 21.04 in ~January which
then is another version to try

[1]: https://lists.nongnu.org/mailman/listinfo/qemu-discuss

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1896250

Title:
  SDL support is missing while virglrenderer has problems with GTK

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1896250/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1905067] Re: qemu-system-riscv64 sbi_trap_error powering down VM riscv64

2020-11-30 Thread Christian Ehrhardt
@Sean - I guess you were on a Thanksgiving break; once you are back and
have had a chance to test, please let me know.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1905067

Title:
  qemu-system-riscv64 sbi_trap_error powering down VM riscv64

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1905067/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1906245] Re: qemu-system-gui 1:5.1+dfsg-4ubuntu2 fails to upgrade

2020-11-30 Thread Christian Ehrhardt
*** This bug is a duplicate of bug 1905377 ***
https://bugs.launchpad.net/bugs/1905377

Hi Matthieu,
thanks for the report, but this bug is known and fixed under bug 1905377 in the 
version you report this against.
It is the "old" maintainer script that still has an issue (and we can't fix 
that old script).

You can work around the issue by doing:
  rm -rf /var/run/qemu/Debian

** This bug has been marked a duplicate of bug 1905377
   postrm fails in hirsute as the path generation for modules is broken

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1906245

Title:
  qemu-system-gui 1:5.1+dfsg-4ubuntu2 fails to upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1906245/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1897854] Re: groovy qemu-arm-static: /build/qemu-W3R0Rj/qemu-5.0/linux-user/elfload.c:2317: pgb_reserved_va: Assertion `guest_base != 0' failed.

2020-11-30 Thread Christian Ehrhardt
We've bundled this fix for Groovy (Thanks Mark) with another upcoming upload.
This should soon be resolved in groovy as well.

** Also affects: qemu (Ubuntu Groovy)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1897854

Title:
  groovy qemu-arm-static: /build/qemu-W3R0Rj/qemu-5.0/linux-
  user/elfload.c:2317: pgb_reserved_va: Assertion `guest_base != 0'
  failed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1897854/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1906245] Re: qemu-system-gui 1:5.1+dfsg-4ubuntu2 fails to upgrade

2020-11-30 Thread Christian Ehrhardt
Could you report me a "ls -laF /var/run/qemu/" what else might be the issue 
there?
I mean you even removed it so ... ?!

The failing script should be
 /var/lib/dpkg/info/qemu-system-gui\:amd64.prerm
Could you please replace the "set -e" there with "set -ex" and report back all 
the output your upgrade generates?

And finally, to keep the output down a bit, don't run "sudo apt-get dist-
upgrade -y" but rather "sudo apt install qemu-system-gui -y".

Eager to hear back to see what other issue is hidden in there.

P.S. And thanks for testing the current -dev release even if this time it bit
you (and probably everyone else).

** Changed in: qemu (Ubuntu)
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1906245

Title:
  qemu-system-gui 1:5.1+dfsg-4ubuntu2 fails to upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1906245/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1906245] Re: qemu-system-gui 1:5.1+dfsg-4ubuntu2 fails to upgrade

2020-11-30 Thread Christian Ehrhardt
Interesting - removing the duplicate mark until we find out what is different
in your case.

** This bug is no longer a duplicate of bug 1905377
   postrm fails in hirsute as the path generation for modules is broken

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1906245

Title:
  qemu-system-gui 1:5.1+dfsg-4ubuntu2 fails to upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1906245/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1906245] Re: qemu-system-gui 1:5.1+dfsg-4ubuntu2 fails to upgrade

2020-11-30 Thread Christian Ehrhardt
Thanks for the data Matthieu!

Hmm, that is indeed the "old & broken" style that should be fixed in
5.1+dfsg-4ubuntu2 (and is, for qemu-block-extra).


...

Packaging:
- d/qemu-system-gui.*.in is gone in the packaging as intended.

Binary Package:
$ apt install qemu-system-gui
$ dpkg -l qemu-system-gui
ii  qemu-system-gui:amd64 1:5.1+dfsg-4ubuntu2
root@h:~# ll /var/lib/dpkg/info/qemu-system-gui*
-rw-r--r-- 1 root root 396 Nov 30 11:33 
/var/lib/dpkg/info/qemu-system-gui:amd64.list
-rw-r--r-- 1 root root 463 Nov 24 10:16 
/var/lib/dpkg/info/qemu-system-gui:amd64.md5sums
=> no maintainer scripts anymore (as intended)

So - as with the former issue - what is failing is the old package's
prerm of 1:5.1+dfsg-4ubuntu1.

In a discussion (thanks Juliank) I found out why this isn't an issue with
qemu-block-extra (which is where this was first found and fixed) and I can
resolve it in a coming update.

** Changed in: qemu (Ubuntu)
   Status: Incomplete => Triaged

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1906245

Title:
  qemu-system-gui 1:5.1+dfsg-4ubuntu2 fails to upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1906245/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1906245] Re: qemu-system-gui 1:5.1+dfsg-4ubuntu2 fails to upgrade

2020-11-30 Thread Christian Ehrhardt
Summary:
Per [1] ("upgrading") we cannot prevent "1.2-3->prerm upgrade 1.2-4" from failing.
But we can make the fallback "1.2-4-prerm failed-upgrade 1.2-3" work (which it 
already does for qemu-block-extra).

We essentially need a rather empty "prerm" in hirsute that can be dropped
again in the next release, as it only covers the issue in the initial
1:5.1+dfsg-4ubuntu1.
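
For illustration, a minimal sketch of what such an essentially empty prerm
could look like (hypothetical, not necessarily the exact script that will be
uploaded); its only job is to succeed when dpkg falls back to running the new
package's prerm with "failed-upgrade" after the old, broken prerm exited
non-zero:

#!/bin/sh
# prerm sketch: succeed for every action so that dpkg's
# "new-prerm failed-upgrade old-version" fallback works
set -e

case "$1" in
    failed-upgrade|upgrade|remove|deconfigure)
        # nothing to do - module handling now lives only in qemu-block-extra
        ;;
esac

exit 0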

... working on it

[1]: https://wiki.debian.org/MaintainerScripts

** Changed in: qemu (Ubuntu)
   Status: Triaged => In Progress

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1906245

Title:
  qemu-system-gui 1:5.1+dfsg-4ubuntu2 fails to upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1906245/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1906245] Re: qemu-system-gui 1:5.1+dfsg-4ubuntu2 fails to upgrade

2020-11-30 Thread Christian Ehrhardt
** Changed in: qemu (Ubuntu)
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1906245

Title:
  qemu-system-gui 1:5.1+dfsg-4ubuntu2 fails to upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1906245/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1906245] Re: qemu-system-gui 1:5.1+dfsg-4ubuntu2 fails to upgrade

2020-11-30 Thread Christian Ehrhardt
Test:
# install current version - will be >=1:5.1+dfsg-4ubuntu2
$ apt install qemu-system-gui
# get old broken version and install it
$ wget 
https://launchpad.net/ubuntu/+source/qemu/1:5.1+dfsg-4ubuntu1/+build/20311790/+files/qemu-system-gui_5.1+dfsg-4ubuntu1_amd64.deb
$ dpkg --force-all -i qemu-system-gui_5.1+dfsg-4ubuntu1_amd64.deb
# trigger the bug
$ apt install qemu-system-gui
...
Preparing to unpack .../qemu-system-gui_1%3a5.1+dfsg-4ubuntu2_amd64.deb ...
cp: -r not specified; omitting directory '/var/run/qemu/Debian'
dpkg: warning: old qemu-system-gui:amd64 package pre-removal script subprocess 
returned error exit status 1
dpkg: trying script from the new package instead ...
dpkg: error processing archive 
/var/cache/apt/archives/qemu-system-gui_1%3a5.1+dfsg-4ubuntu2_amd64.deb 
(--unpack):
 there is no script in the new version of the package - giving up
Errors were encountered while processing:
 /var/cache/apt/archives/qemu-system-gui_1%3a5.1+dfsg-4ubuntu2_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)


A fixed version is building in [1] and will be tested before uploading the fix (it 
contains no changes other than those for this bug).

[1]: https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/4356

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1906245

Title:
  qemu-system-gui 1:5.1+dfsg-4ubuntu2 fails to upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1906245/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1905377] Re: postrm fails in hirsute as the path generation for modules is broken

2020-11-30 Thread Christian Ehrhardt
FYI - While the fix for this bug worked fine for qemu-block-extra (which is the 
only place where module upgrade handling will be done in the future), dropping any 
prerm for qemu-system-gui has triggered bug 1906245.
That essentially surfaces as the same issue, but for qemu-system-gui, caused by the 
broken old prerm script. Bug 1906245 will fix that with another upload.

Due to 1:5.1+dfsg-4ubuntu2 being released on Friday plenty of people will run 
into this :-/
This FYI shall help to identify bug 1906245 quickly; once it is resolved there, 
your updates will work again.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1905377

Title:
  postrm fails in hirsute as the path generation for modules is broken

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1905377/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1902059] Re: [MIR] postgresql-13

2020-11-30 Thread Christian Ehrhardt
FYI postgresql-common 223 and postgresql-13 13.1 are now ready to
migrate, but are waiting for perl to be ready.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1902059

Title:
  [MIR] postgresql-13

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/postgresql-13/+bug/1902059/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1904192] Re: ebtables can not rename just created chain

2020-11-30 Thread Christian Ehrhardt
Before upgrade:

ubuntu@g-test:~$ sudo ebtables -t nat -N foo
ubuntu@g-test:~$ sudo ebtables -t nat -E foo bar
ebtables v1.8.5 (nf_tables): Chain 'foo' doesn't exists
Try `ebtables -h' or 'ebtables --help' for more information.


Upgrade:
ubuntu@g-test:~$ sudo apt install iptables
Reading package lists... Done
Building dependency tree   
Reading state information... Done
The following additional packages will be installed:
  libip4tc2 libip6tc2 libxtables12
Suggested packages:
  firewalld nftables
The following packages will be upgraded:
  iptables libip4tc2 libip6tc2 libxtables12
4 upgraded, 0 newly installed, 0 to remove and 5 not upgraded.
Need to get 498 kB of archives.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://archive.ubuntu.com/ubuntu groovy-proposed/main amd64 iptables 
amd64 1.8.5-3ubuntu2.20.10.2 [432 kB]
Get:2 http://archive.ubuntu.com/ubuntu groovy-proposed/main amd64 libxtables12 
amd64 1.8.5-3ubuntu2.20.10.2 [28.7 kB]
Get:3 http://archive.ubuntu.com/ubuntu groovy-proposed/main amd64 libip6tc2 
amd64 1.8.5-3ubuntu2.20.10.2 [19.1 kB]
Get:4 http://archive.ubuntu.com/ubuntu groovy-proposed/main amd64 libip4tc2 
amd64 1.8.5-3ubuntu2.20.10.2 [18.7 kB]
Fetched 498 kB in 0s (1465 kB/s)
(Reading database ... 64660 files and directories currently installed.)
Preparing to unpack .../iptables_1.8.5-3ubuntu2.20.10.2_amd64.deb ...
Unpacking iptables (1.8.5-3ubuntu2.20.10.2) over (1.8.5-3ubuntu2.20.10.1) ...
Preparing to unpack .../libxtables12_1.8.5-3ubuntu2.20.10.2_amd64.deb ...
Unpacking libxtables12:amd64 (1.8.5-3ubuntu2.20.10.2) over 
(1.8.5-3ubuntu2.20.10.1) ...
Preparing to unpack .../libip6tc2_1.8.5-3ubuntu2.20.10.2_amd64.deb ...
Unpacking libip6tc2:amd64 (1.8.5-3ubuntu2.20.10.2) over 
(1.8.5-3ubuntu2.20.10.1) ...
Preparing to unpack .../libip4tc2_1.8.5-3ubuntu2.20.10.2_amd64.deb ...
Unpacking libip4tc2:amd64 (1.8.5-3ubuntu2.20.10.2) over 
(1.8.5-3ubuntu2.20.10.1) ...
Setting up libip4tc2:amd64 (1.8.5-3ubuntu2.20.10.2) ...
Setting up libip6tc2:amd64 (1.8.5-3ubuntu2.20.10.2) ...
Setting up libxtables12:amd64 (1.8.5-3ubuntu2.20.10.2) ...
Setting up iptables (1.8.5-3ubuntu2.20.10.2) ...
Processing triggers for man-db (2.9.3-2) ...
Processing triggers for libc-bin (2.32-0ubuntu3) ...

After upgrade
ubuntu@g-test:~$ sudo ebtables -t nat -N foo2
ubuntu@g-test:~$ sudo ebtables -t nat -E foo2 bar


Thanks, setting verified!

** Tags removed: verification-needed verification-needed-groovy
** Tags added: verification-done verification-done-groovy

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1904192

Title:
  ebtables can not rename just created chain

To manage notifications about this bug go to:
https://bugs.launchpad.net/iptables/+bug/1904192/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1906248] Re: Unwanted conffile prompt on default.xml

2020-11-30 Thread Christian Ehrhardt
Hi Iain,
we've already become aware of this and were discussing how to fix it in 
https://salsa.debian.org/libvirt-team/libvirt/-/merge_requests/78

That should land in Debian's 6.10/6.11. I'll recheck this case when going
for ~7.0 in January.

** Tags added: libvirt-21.04

** Changed in: libvirt (Ubuntu)
   Status: New => Confirmed

** Changed in: libvirt (Ubuntu)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1906248

Title:
  Unwanted conffile prompt on default.xml

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1906248/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1906245] Re: qemu-system-gui 1:5.1+dfsg-4ubuntu2 fails to upgrade

2020-11-30 Thread Christian Ehrhardt
Fix works:
Preparing to unpack .../qemu-system-gui_1%3a5.1+dfsg-4ubuntu3_amd64.deb ...
cp: -r not specified; omitting directory '/var/run/qemu/Debian'
dpkg: warning: old qemu-system-gui:amd64 package pre-removal script subprocess 
returned error exit status 1
dpkg: trying script from the new package instead ...
dpkg: ... it looks like that went OK
Unpacking qemu-system-gui:amd64 (1:5.1+dfsg-4ubuntu3) over 
(1:5.1+dfsg-4ubuntu1) ...


Note:
when installing without -proposed there is a block on
 qemu-system-gui : Depends: libgdk-pixbuf-2.0-0 (>= 2.22.0) but it is not 
installable
So migrating this to hirsute-release might depend on other packages migrating first.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1906245

Title:
  qemu-system-gui 1:5.1+dfsg-4ubuntu2 fails to upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1906245/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1906245] Re: qemu-system-gui 1:5.1+dfsg-4ubuntu2 fails to upgrade

2020-11-30 Thread Christian Ehrhardt
Repackaged deb of 5.1+dfsg-4ubuntu1 to allow users who cannot wait for the
fixed upload to work around the issue.

** Attachment added: "Repackaged deb of 5.1+dfsg-4ubuntu1 to allow users unable 
to wait for 5.1+dfsg-4ubuntu1 to work around the issue."
   
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1906245/+attachment/5439620/+files/qemu-system-gui_5.1+dfsg-4ubuntu1_amd64.repackaged.lp1906245.deb

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1906245

Title:
  qemu-system-gui 1:5.1+dfsg-4ubuntu2 fails to upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1906245/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1906245] Re: qemu-system-gui 1:5.1+dfsg-4ubuntu2 fails to upgrade

2020-11-30 Thread Christian Ehrhardt
For those in a hurry who just want things to work again, here are a few
workarounds for now:

Workaround #1 - uninstall and install
$ apt remove qemu-system-gui
# the remove will complain the same way as the upgrade, but an install
# of the new version over this state ("rH" in dpkg -l) works.
# You most likely had this installed through qemu-system-x86,
# so remember to re-install packages that were removed as dependencies as well:
$ apt install qemu-system-x86 qemu-system-gui


Workaround #2 - use repackaged 5.1+dfsg-4ubuntu1 (If you can't remove qemu* 
packages for some reason)
# replace original 5.1+dfsg-4ubuntu1 with one that does not have the prerm issue
$ wget 
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1906245/+attachment/5439620/+files/qemu-system-gui_5.1+dfsg-4ubuntu1_amd64.repackaged.lp1906245.deb
$ dpkg -i qemu-system-gui_5.1+dfsg-4ubuntu1_amd64.repackaged.lp1906245.deb
# upgrade to 5.1+dfsg-4ubuntu2 (now working)
$ apt install qemu-system-gui

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1906245

Title:
  qemu-system-gui 1:5.1+dfsg-4ubuntu2 fails to upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1906245/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1906245] Re: qemu-system-gui 1:5.1+dfsg-4ubuntu2 fails to upgrade

2020-11-30 Thread Christian Ehrhardt
Workaround #3 - use the PPA version
$ sudo add-apt-repository ppa:ci-train-ppa-service/4356
$ apt install qemu-system-gui
# or
$ apt upgrade
# This will need libgdk-pixbuf-2.0-0 from proposed though

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1906245

Title:
  qemu-system-gui 1:5.1+dfsg-4ubuntu2 fails to upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1906245/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1890435] Re: gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation fault

2020-11-30 Thread Christian Ehrhardt
r10-6586 - passed 27 good runs, no fails

Updated Result Overview:
20190425 good
r10-1014
r10-2027 good?4
r10-2533
r10-3040 good?4
r10-3220
r10-3400 good?4
r10-3450
r10-3475
r10-3478
r10-3593
r10-3622
r10-3657 good?5
r10-3727 good?3
r10-4054 other kind of bad - signature different, and rare?
r10-6080 good?10
r10-6586 good?27
r10-7093 bad, but slow to trigger (2 of 19)
20200507 bad bad bad

Signatures:
r10-4054 arm_legitimate_address_p (nonimmediate)
r10-7093 add_regs_to_insn_regno_info (lra)
r10-7093 add_regs_to_insn_regno_info (lra)
20200507 extract_plus_operands (lra)
20200507 avoid_constant_pool_reference (lra)
20200507 add_regs_to_insn_regno_info (lra)
ubu-10.2 add_regs_to_insn_regno_info (lra)
ubu-10.2 avoid_constant_pool_reference (lra)
ubu-10.2 thumb2_legitimate_address_p (lra)
ubu-10.2 add_regs_to_insn_regno_info (lra)

Next I'll run r10-7093 in this new setup.
@Doko - It would be great to have ~6760 built for the likely next step.
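
To make the "good?N" notation above concrete: each candidate revision is
exercised repeatedly and only counts as good while no run segfaults. A rough
sketch of such a driver loop (the real job rebuilds the actual failing armhf
package; compiler path, reproducer file and run count here are made up):

# hypothetical driver for one candidate compiler
GCC=/opt/gcc-r10-6586/bin/gcc
FAILS=0
for i in $(seq 1 27); do
    $GCC -O2 -c reproducer.c -o /dev/null || FAILS=$((FAILS+1))
done
echo "r10-6586: $FAILS failures in 27 runs"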

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1890435

Title:
  gcc-10 breaks on armhf (flaky): internal compiler error: Segmentation
  fault

To manage notifications about this bug go to:
https://bugs.launchpad.net/groovy/+bug/1890435/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1906245] Re: qemu-system-gui 1:5.1+dfsg-4ubuntu2 fails to upgrade

2020-12-01 Thread Christian Ehrhardt
FYI This is all good in regard to autopkgtests now, should move to
released in the next Britney run.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1906245

Title:
  qemu-system-gui 1:5.1+dfsg-4ubuntu2 fails to upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1906245/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
