There are no patches, it's a straight import of the source package into
Ubuntu. Ubuntu *does* have different compiler options than Debian, so
that may be a factor. Otherwise I'm in the same boat as you -- there's
only so much time I can throw at this (I've done full-time "investigate,
report, and
Public bug reported:
One of our Cockpit integration tests [1] spotted an AppArmor regression
in rsyslogd. This is coincidental: the test passes and doesn't do
anything with rsyslogd -- something just happens in the background to
trigger this (and I can actually reproduce it locally
Yeah, I could live with that -- but TBH I still consider this mostly a
bug in openssh. Querying the status of sshd.service really should work.
Arch, RHEL, Fedora, OpenSUSE etc. all call this sshd.service.
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
Timo: It doesn't fail on Debian. See the "That works in Debian
because..." part in the description (TL;DR: Debian doesn't enable
ssh.socket, but ssh.service, which sets up the symlink)
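To make the Debian behaviour concrete: enabling ssh.service materializes the unit's Alias=sshd.service as a symlink, which is why sshd.service resolves on Debian but not on an Ubuntu install that enables ssh.socket instead. A minimal sketch of the mechanism, using a scratch directory instead of the real /etc:

```shell
# Simulate what "systemctl enable ssh.service" does on Debian: the unit's
# Alias=sshd.service is materialized as a symlink named after the alias.
dir=$(mktemp -d)
ln -s /lib/systemd/system/ssh.service "$dir/sshd.service"
readlink "$dir/sshd.service"    # -> /lib/systemd/system/ssh.service
rm -rf "$dir"
```

With socket activation, that enable step never runs, so nothing ever creates the sshd.service name.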
** Description changed:
Joining a FreeIPA domain reconfigures SSH. E.g. it enables GSSAPI
authentication in
Confirmed in current noble.
https://bugs.launchpad.net/bugs/1946244
Title:
When installing/uninstalling with realmd, uninstalling crashes with
ScriptError
Public bug reported:
Joining a FreeIPA domain reconfigures SSH. E.g. it enables GSSAPI
authentication in /etc/ssh/sshd_config.d/04-ipa.conf . After that, it
tries to restart sshd, but that fails as "sshd.service" is not a thing
on Ubuntu:
2024-04-12T03:10:57Z DEBUG args=['/bin/systemctl',
Yay, today this is finally fixed, pbuilder creation and building a noble
VM image finally works again \o/ Thanks!
** Changed in: perl (Ubuntu Noble)
Status: Confirmed => Fix Released
In other words, having the fix in backports is fine I think.
https://bugs.launchpad.net/bugs/2060014
Title:
CVE-2024-2947 command injection when deleting a sosreport with a
crafted
Marc: Thanks -- no urgency from my side, I just wasn't sure about your
current CVE "must/may fix" policies.
** Changed in: cockpit (Ubuntu Mantic)
Status: Triaged => Won't Fix
Nathan Scott [2024-04-09 17:30 +1000]:
> > It's not really unknown, it's "just" a file conflict:
>
> Yeah - the unknown bit for me is "why tho" - I cannot see conflicting
> files in those packages that would have any debug symbols (there's
> some common directories... but no binaries shared
Hello Nathan,
Nathan Scott [2024-04-09 16:19 +1000]:
> Is any of this getting through... ? Just checked the Ubuntu tracker
> URL, and looks like every response Ken or I sent has been dropped on
> the ground.
Right, I didn't get any response either (not a surprise, as it's *first*
Launchpad
Aside from curl this can be reproduced most quickly with
sudo /usr/sbin/debootstrap --include=build-essential noble /tmp/n
http://archive.ubuntu.com/ubuntu
Errors were encountered while processing:
perl
libdpkg-perl
libperl5.38t64:amd64
dpkg-dev
build-essential
These are all ultimately
I wonder where that comes from --
https://launchpad.net/ubuntu/+source/perl/+publishinghistory says that
5.38.2-3 was deleted, but only from noble-updates. In noble proper it is
merely "superseded". https://launchpad.net/ubuntu/+source/perl/5.38.2-3
doesn't show it being published anyway, and it's
Public bug reported:
For the last two weeks, building noble VM images for our CI has been
broken. Most of it was uninstallability due to the xz reset, but for the
last three days, `pbuilder --create` has failed [2] because it gets perl
and perl-modules-5.38 in two different versions:
2024-04-08
> They didn't propagate yet due to noble being jammed so much
This happened now \o/, so they are ready to go.
** Changed in: cockpit (Ubuntu Noble)
Status: Fix Committed => Fix Released
Maybe the missing dbgsym packages are on purpose? The build log has
this:
# Note: --no-automatic-dbgsym not defined for all releases up to
# and including Debian 8 (jessie), but defined after that
# ... expect a warning on older releases, but no other ill
# effects from the
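If the dbgsym packages really are suppressed on purpose, the usual mechanism (an assumption here; not verified against this package's actual debian/rules) is an override like:

```
override_dh_strip:
	dh_strip --no-automatic-dbgsym
```

which would explain both the build-log comment and the missing packages.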
Public bug reported:
In Cockpit's CI we see a lot of pmproxy crashes like [1] in a test which
starts/stops/reconfigures pmlogger, pmproxy, and redis. The journal
(some examples are [2][3][4]) always shows a similar stack trace:
pmproxy[9832]: segfault at 3 ip 767961047e45 sp 7ffe97e825d0
Sorry, clicked the wrong button, I'll expand the bug description. In the
meantime, attaching the core dump.
** Attachment added: "core dump"
Backporters: I uploaded backports from noble-proposed to mantic and
jammy. They didn't propagate yet due to noble being jammed so much, but
we do validate them on both releases upstream. I'll let you decide
whether to accept or stall them.
@Marc, security team: I'd like your opinion/preference/guidance for
mantic: It currently has upstream version 300.1. Half a year ago we did
two more upstream point releases for critical bug fixes (aimed at and
uploaded to RHEL): https://github.com/cockpit-project/cockpit/releases/tag/300.2 and
Note: I tried to add backports tasks, but there's neither a
https://launchpad.net/jammy-backports nor a
https://launchpad.net/mantic-backports project. But not a biggie, these
will both get 314 as soon as it lands in noble.
and autopkgtest queue
before it can land in noble proper (and thus the backports of mantic and
jammy get updated).
** Affects: cockpit (Ubuntu)
Importance: High
Assignee: Martin Pitt (pitti)
Status: Fix Committed
** Affects: cockpit (Ubuntu Mantic)
Importance: Medium
Status
** Changed in: chrony (Ubuntu)
Status: New => Won't Fix
** Changed in: gnutls28 (Ubuntu)
Status: New => Won't Fix
** Changed in: libvirt (Ubuntu)
Status: New => Won't Fix
Just to make sure that we really talk about the same thing: This bug
sounds like it is *intended* that
unshare --user --map-root-user /bin/bash -c whoami
(as unpriv user) now fails in current Ubuntu 24.04 noble. That still
worked in released 23.10.
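For anyone landing here: the restriction appears to be gated by a sysctl (assumed name: kernel.apparmor_restrict_unprivileged_userns; check your kernel), so its state can be inspected directly:

```shell
# Hedged: in noble, unprivileged user namespaces are gated by an AppArmor
# sysctl (assumed name: kernel.apparmor_restrict_unprivileged_userns).
# Print its value if present, otherwise note that this kernel lacks it.
sysctl kernel.apparmor_restrict_unprivileged_userns 2>/dev/null \
    || echo "kernel.apparmor_restrict_unprivileged_userns: not available"
```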
I am starting to test Cockpit on the
** Tags added: cockpit-test
https://bugs.launchpad.net/bugs/1774000
Title:
Fails to boot cirros QEMU image with tuned running
** Tags added: cockpit-test
https://bugs.launchpad.net/bugs/2040483
Title:
AppArmor denies crun sending signals to containers (stop, kill)
Public bug reported:
There is an AppArmor regression in current noble. In cockpit we recently
started to test on noble (to prevent the "major regressions after
release" fiasco from 23.10 again).
For some weird reason, rsyslog is installed *by default* [1] in the
cloud images. That is a rather
*** This bug is a duplicate of bug 2056739 ***
https://bugs.launchpad.net/bugs/2056739
Absolutely agree, thanks Christian!
https://bugs.launchpad.net/bugs/2056747
Public bug reported:
Merely booting current noble cloud image with "chrony" installed causes
this:
audit: type=1400 audit(1710152842.540:107): apparmor="DENIED"
operation="open" class="file" profile="/usr/sbin/chronyd"
name="/etc/gnutls/config" pid=878 comm="chronyd" requested_mask="r"
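A hedged sketch of the usual shape of a fix (assuming the rule belongs in chronyd's profile; it could equally live in a shared AppArmor abstraction for GnuTLS users):

```
# local override for the chronyd profile, e.g. in
# /etc/apparmor.d/local/usr.sbin.chronyd (path is an assumption):
/etc/gnutls/config r,
```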
Public bug reported:
Running any VM in libvirt causes a new AppArmor violation in current
noble. This is a regression; it didn't happen in any previous release.
Reproducer:
virt-install --memory 50 --pxe --virt-type qemu --os-variant
alpinelinux3.8 --disk none --wait 0 --name test1
(This
I tested the PPA, and it works like a charm now. Thanks Christian and
Simon!
For once, kicking some{thing,one} out of their $HOME does something
good..
** Changed in: swtpm (Ubuntu Jammy)
Status: Confirmed => In Progress
Our CI uses a Jammy Ubuntu cloud image, but with quite a large list of
extra installed packages. To make sure it's not something specific to
that environment, I tried this:
autopkgtest-buildvm-ubuntu-cloud
qemu-system-x86_64 -enable-kvm -nographic -m 2048 -device virtio-rng-pci
-drive
Right, I understand -- but introducing the dependency was an explicit
decision (#1948748), and it seems it is broken for its main use case. So
in the simplest case the Recommends: could be reverted, and reintroduced
once this is understood?
Public bug reported:
https://launchpad.net/ubuntu/+source/libvirt/8.0.0-1ubuntu6 introduced a
Recommends on swtpm, so that package now gets installed by default
when installing libvirt. But this broke UEFI:
touch /var/lib/libvirt/empty.iso
virt-install --name t1 --os-variant fedora28
This broke VMs with UEFI, reported as bug 1968131.
https://bugs.launchpad.net/bugs/1948748
Title:
[MIR] swtpm
Ouch, thanks Marc! Indeed our previous seddery was broken, it should
have left the pam_deny/pam_permit lines. With this it works just fine:
--- /tmp/common-auth.orig 2022-04-01 07:16:26.072608984 +0200
+++ /tmp/common-auth.faillock 2022-04-01 07:14:20.246707861 +0200
@@ -16,6 +16,8 @@
#
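For reference, the stanza that pam_faillock(8) documents for common-auth looks roughly like this (hedged: control values and module order vary between distributions, and the actual Ubuntu diff is truncated above):

```
auth    required                      pam_faillock.so preauth
auth    [success=1 default=ignore]    pam_unix.so nullok
auth    [default=die]                 pam_faillock.so authfail
auth    sufficient                    pam_faillock.so authsucc
```

The key point is that authfail must run after pam_unix fails, and the pam_deny/pam_permit lines must survive any scripted edits.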
Timeout -- I uploaded the patch of the salsa PR to Jammy now.
https://bugs.launchpad.net/bugs/1802005
Title:
socket is inaccessible for libvirt-dbus
Public bug reported:
ProblemType: Bug
DistroRelease: Ubuntu 22.04
Package: libpam-modules 1.4.0-11ubuntu1
I just noticed that Ubuntu 22.04 changed from the old pam_tally2 module
to the more widespread pam_faillock one. \o/
However, locking (denying logins) does not actually seem to work.
Confirmed in jammy as well.
https://logs.cockpit-project.org/logs/pull-17182-20220325-080131-1b8abf94-ubuntu-2204/log.html#303-2
https://bugs.launchpad.net/bugs/1946244
I sent a fix to Debian: https://salsa.debian.org/libvirt-team/libvirt-dbus/-/merge_requests/14
I'll give it a few days; if I can get that landed soon, we can just
sync. Otherwise I'll upload it to Jammy directly.
** Changed in: libvirt-dbus (Ubuntu Jammy)
Status: Triaged => Fix Committed
The image build log shows why:
Setting up libvirt-dbus (1.4.1-1) ...
/var/lib/dpkg/info/libvirt-dbus.postinst: 54: dpkg-vendor: not found
dpkg-vendor is in the "dpkg-dev" package, so it should not be used in
postinst scripts. libvirt-dbus could depend on dpkg-dev, but that's
highly undesirable.
** Changed in: libvirt-dbus (Ubuntu)
Assignee: (unassigned) => Martin Pitt (pitti)
** Also affects: libvirt (Ubuntu Jammy)
Importance: Medium
Status: Won't Fix
** Also affects: libvirt-dbus (Ubuntu Jammy)
Importance: High
Assignee: Martin Pitt (pitti)
Status: Triaged
Still confirmed on 21.10, and also Debian testing; I filed a Debian bug
and linked it.
** Bug watch added: Debian Bug tracker #1008209
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1008209
** Also affects: freeipa (Debian) via
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1008209
A-ha! I wasn't seeing things after all. Our test images install the
"systemd-timesyncd" package (as we also run tests against that), and
that removes the chrony package and installs the mask:
# apt install systemd-timesyncd
Reading package lists... Done
Building dependency tree... Done
Reading
Hello Timo,
I'm not actually sure where these /etc/systemd/system/chrony* files come
from (in particular the mask). They are not owned by any package, nor
does chrony's postinst seem to create it (but maybe through a helper,
they are not exactly simple -- some weird interaction with the SysV
This is *still* broken on Ubuntu 21.10 and Debian testing. However, it
is subtly different; I filed bug 1966181 about it.
https://bugs.launchpad.net/bugs/1890786
Public bug reported:
DistroRelease: Ubuntu 21.10
Package: freeipa-client 4.8.6-1ubuntu6
This is a bug that just doesn't want to die -- the package *really*
should grow an autopkgtest that checks if a basic ipa-client-install
actually works. It's very similar to bug 1890786 except that it now
** Also affects: freeipa (Ubuntu Focal)
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1890786
Title:
ipa-client-install fails on restarting non-existing
*** This bug is a duplicate of bug 1890786 ***
https://bugs.launchpad.net/bugs/1890786
Let's handle this in bug 1890786 instead, I added a focal task and will
close this as a duplicate.
** This bug has been marked a duplicate of bug 1890786
ipa-client-install fails on restarting
I did a test build in my PPA:
https://launchpad.net/~pitti/+archive/ubuntu/fixes
I re-ran the reproducer on current Jammy to confirm the bug, then
updated to the PPA, and re-ran the last virt-install command. That
succeeded.
** Changed in: libvirt (Ubuntu)
Status: Triaged => Fix Committed
I sent https://salsa.debian.org/libvirt-team/libvirt/-/merge_requests/135 to update Debian. Unfortunately that
does not build right now due to the inconsistent state of the packaging
git. But the patch itself backports fairly cleanly.
I'll upload to Jammy next.
Fix landed upstream:
https://gitlab.com/libvirt/libvirt/-/commit/7aec69b7fb9d0cfe8b7203473764c205b28d2905
** Changed in: libvirt
Status: In Progress => Fix Released
Thanks Christian. I updated the upstream PR. I just don't want to apply
a patch just to Ubuntu. Once it lands upstream, I backport it, send it
to Debian, and *then* I'm happy to apply it to Jammy -- there should
still be enough time before the freeze, right? (Would be nice to have
that in the LTS,
** Changed in: libvirt
Status: New => In Progress
** Changed in: libvirt
Assignee: (unassigned) => Martin Pitt (pitti)
** Package changed: apparmor (Debian) => libvirt (Debian)
I sent the proposed and tested fix upstream:
https://gitlab.com/libvirt/libvirt/-/merge_requests/140
** Also affects: libvirt
Importance: Undecided
Status: New
I came up with this patch:
--- /etc/apparmor.d/abstractions/libvirt-qemu.orig 2022-01-22 18:22:57
+++ /etc/apparmor.d/abstractions/libvirt-qemu 2022-02-25 13:54:22
@@ -85,7 +85,7 @@
/usr/share/misc/sgabios.bin r,
/usr/share/openbios/** r,
it -1
--noautoconsole --cdrom /var/lib/libvirt/novell.iso --autostart
** Package changed: apparmor (Ubuntu) => libvirt (Ubuntu)
** Changed in: libvirt (Ubuntu)
Status: New => Triaged
** Changed in: libvirt (Ubuntu)
Assignee: (unassigned) => Martin Pitt (pitti)
** Bug watch added: Debian Bug tracker #1006324
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1006324
** Also affects: apparmor (Debian) via
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1006324
Importance: Unknown
Status: Unknown
Paride, many thanks for digging out the upstream fix!
The patch does apply cleanly. It just needs a round of "quilt refresh" to
get over
dpkg-source: error: diff 'sssd-2.4.1/debian/patches/5572.patch'
patches files multiple times; split the diff in multiple files or merge
the hunks into a
> I'll do a no-change rebuild of impish's sssd now and try with that.
Done, but the bug is still the same. So not some weird build-time issue.
https://bugs.launchpad.net/bugs/1958392
Thanks Paride! I confirm that updating to your PPA fixes the issue,
which confirms that it's sssd.
I'll do a no-change rebuild of impish's sssd now and try with that.
I upgraded my test VM to current Jammy, with sssd 2.6.1-1ubuntu3. This
works fine, so this only applies to impish. *phew*
I.e. I figure/expect pretty well nothing will happen on this bug, and
that's fine -- I just need it as downstream reference for our OS bug
tracker. :-)
** Also affects: sssd
** Tags added: impish
https://bugs.launchpad.net/bugs/1958392
Title:
pam_sss_gss crashes with Communication error [3, 32]: Error in service
module; Broken pipe
This is confirmed to work on Debian 11 (current stable), Debian testing,
Fedora 34/35, CentOS 8, RHEL 8/9, so it does not smell like an upstream
issue.
Ubuntu 20.04 LTS does not yet have pam_sss_gss, so this does not apply
there.
** Description changed:
I am trying to set up pam_sss_gss to
Public bug reported:
I am trying to set up pam_sss_gss to authenticate to sudo with Kerberos.
I am fairly sure that this worked in the past, but stopped recently.
Reproducer:
- Join a FreeIPA domain, with "ipa-client-install". I use "COCKPIT.LAN" here
in our tests.
- Enable GSS for sudo in
This is fixed in current Ubuntu 21.04.
I dropped our hacks in our projects: https://github.com/cockpit-project/cockpit-machines/pull/465 and https://github.com/cockpit-project/bots/pull/2676
** Changed in: firewalld (Ubuntu)
Status: Confirmed => Fix Released
> Xorg -config tests/xorg-dummy.conf -logfile /tmp/log -once :5
The -once was an attempt to work around this, but it doesn't help, nor
change the behaviour of this bug.
umockdev's test suite now started to see this crash in current Ubuntu
jammy. Simple reproducer:
$ cat tests/xorg-dummy.conf
Section "Device"
Identifier "test"
Driver "dummy"
EndSection
$ Xorg -config tests/xorg-dummy.conf -logfile /tmp/log -once :5
Then, run at least one
Thanks Dan! Reuploaded with s/ubuntu/bpo/. So far I used the
"backportpackage" script from ubuntu-dev-tools (0.186), can this be
fixed there then, please?
Right, cockpit-machines only shows libvirt machines. So if `virsh list`
is empty, so will be c-machines.
** Changed in: cockpit-machines (Ubuntu)
Status: Incomplete => Won't Fix
** Summary changed:
- does not show any VMs
+ does not show VMWare VMs
https://github.com/cockpit-project/cockpit/issues/16438
** Affects: cockpit-machines (Ubuntu)
Importance: Undecided
Status: Invalid
** Affects: cockpit-machines (Ubuntu Focal)
Importance: Undecided
Assignee: Martin Pitt (pitti)
Status: In Progress
** Affects: cockpit-machines (Ubuntu Imp
Uploaded to https://launchpad.net/ubuntu/impish/+queue?queue_state=1 and
https://launchpad.net/ubuntu/focal/+queue?queue_state=1
https://bugs.launchpad.net/bugs/1949715
Title:
[BPO]
cockpit-dashboard was removed in 234 [1], its functionality got
integrated into cockpit-shell and cockpit-system. cockpit-machines got
split into its own source package and thus now has an independent
version number.
[1] https://cockpit-project.org/blog/cockpit-234.html
So devoting this to the
Public bug reported:
ProblemType: Crash
DistroRelease: Ubuntu 21.04
PackageVersion: python3-ipaclient 4.8.6-1ubuntu5
SourcePackage: freeipa
Architecture: amd64
Joining a FreeIPA domain with plain ipa-client-install works well:
# ipa-client-install -p admin --password=SECRET --no-ntp
[...]
The
For completeness, this is /var/log/ipaclient-install from the successful
"realm join".
** Attachment added: "ipaclient-install.log from realmd join"
https://bugs.launchpad.net/ubuntu/+source/freeipa/+bug/1946244/+attachment/5531110/+files/ipaclient-install.log
** Summary changed:
- When
Christian, as I write above I believe this really needs to be fixed in
bolt's tests. The umockdev change was a bug fix which bolt's tests
(incorrectly) worked around. So I hope you don't mind that I flipped the
affected package around? I am in contact with Christian now, and hope to
sort this out
> I am in contact with Christian now, and hope to sort this out soon.
Sorry -- I meant Christian Kellner, bolt's upstream, not you :-)
https://bugs.launchpad.net/bugs/1945321
Status: New => In Progress
** Changed in: bolt (Ubuntu)
Assignee: (unassigned) => Martin Pitt (pitti)
** Changed in: umockdev (Ubuntu)
Status: New => In Progress
Indeed the open(2) manpage is misleading in that regard. The actual
definition in fcntl.h is like this:
extern int open (const char *__file, int __oflag, ...) __nonnull
((1));
(with a few variants, but they all use varargs). So I did the same in
umockdev for full header compatibility.
Dang, we already found a ppc64el SIGBUS issue in 0.16.0, which got fixed
in https://github.com/martinpitt/umockdev/commit/277c80243a . But this
is reported against 0.16.1 already.
There is a tiny chance that
https://github.com/martinpitt/umockdev/commit/264cabbb will magically
fix this, but
I installed udisks2 2.9.2-1ubuntu1 from hirsute-proposed, and confirm
that both the manual test case above as well as cockpit's automatic
TestStorageFormat.testFormatTypes now succeed. Thank you Sebastien and
Robie!
** Tags removed: verification-needed verification-needed-hirsute
** Tags added:
** Changed in: libvirt (Ubuntu Hirsute)
Status: New => Won't Fix
https://bugs.launchpad.net/bugs/1802005
Title:
socket is inaccessible for libvirt-dbus
Argh indeed, forgot about that one already -- I even looked at that
before, it's tracked here: https://bugs.debian.org/cgi-
bin/bugreport.cgi?bug=983751
But you knew that as well, in comment #4 -- So I hope this didn't take
too much time to track down. Merci beaucoup !
** Bug watch added: Debian
Direct mkfs works:
# mkfs.vfat -I -n label /dev/vdb
mkfs.fat 4.2 (2021-01-31)
mkfs.fat: Warning: lowercase labels might not work properly on some systems
# blkid -p /dev/vdb
/dev/vdb: PTUUID="892240dd" PTTYPE="dos"
** Changed in: udisks2 (Ubuntu)
Status: Incomplete => New
Reproducer from scratch:
# download current cloud image
curl -L -O
https://cloud-images.ubuntu.com/daily/server/hirsute/current/hirsute-server-cloudimg-amd64.img
# nothing fancy, just admin:foobar and root:foobar
curl -L -O
@Reinhard:
> Unfortunately, I cannot confirm this on a freshly installed Ubuntu
20.04
I assume this was a typo and you really meant 21.04.
> and see what's the one that breaks podman.
That was easy, it's tuned. Full reproducer:
apt install -y tuned
podman run -it --rm -p 5000:5000 --name
Thanks Reinhard for trying! I'm running a standard cloud image
(https://cloud-images.ubuntu.com/daily/server/hirsute/current/hirsute-server-cloudimg-amd64.img),
but with some additional packages installed. I'll go through them with a
fine-tooth comb and see which one breaks podman.
(But
Forgot to mention, there is nothing useful in the journal. The only
message is this when the timeout happens:
Apr 23 15:12:35 ubuntu udisksd[3116]: Error synchronizing after
formatting with type `vfat': Timed out waiting for object
** Description changed:
There is a regression somewhere
I tried to run it in the foreground with
G_MESSAGES_DEBUG=all /usr/libexec/udisks2/udisksd
but still no messages aside from the timeout.
https://bugs.launchpad.net/bugs/1925822
Public bug reported:
There is a regression somewhere between udisks, udev, and dosfstools.
Formatting a device with vfat hangs and fails:
# blkid -p /dev/sda
(nothing)
# busctl call org.freedesktop.UDisks2
/org/freedesktop/UDisks2/block_devices/sda org.freedesktop.UDisks2.Block Format
Thanks Christian! Lesson learned -- for 21.10 I'll update our images a
few weeks *before* the release. (I found a handful of regressions so
far..)
https://bugs.launchpad.net/bugs/1802005
This regressed in 21.04 (hirsute) again. 1.4.0-2 was synced from Debian
(https://launchpad.net/ubuntu/+source/libvirt-dbus/+changelog) instead
of merged.
** Tags added: hirsute regression-release
** Changed in: libvirt-dbus (Ubuntu)
Status: Fix Released => Triaged
** Also affects:
Public bug reported:
This stopped working in 21.04:
podman run -it --rm -p 5000:5000 --name registry docker.io/registry:2
curl http://localhost:5000/v2/
The curl just hangs forever. This works fine in Ubuntu 20.10 with podman
2.0.6+dfsg1-1ubuntu1.
Outbound direction is also broken:
#
** Changed in: fatrace (Ubuntu)
Status: New => Incomplete
https://bugs.launchpad.net/bugs/1861053
Title:
no fatrace output in focal
I've been scratching my head over this regression [1] for a while now,
in the context of running a hirsute container on a 20.04 host (in
particular, a GitHub workflow machine). In my case, the symptom is that
after upgrading glibc, `which` is broken; that of course also uses
faccessat(), similar to
I now did exactly the same steps as above on an Ubuntu 20.04 VM, with
exactly the same results. This verifies 4.33-3ubuntu1.20.04.1.
** Tags removed: verification-needed-focal
** Tags added: verification-done-focal
Verification for groovy:
I took a 20.10 VM with current pollinate 4.33-3ubuntu1, and after
booting, pollinate.service is in state failed as per the bug
description.
I then updated to 4.33-3ubuntu1.20.10.1. The package update auto-
restarted pollinate.service, and it looked successful:
#
@Christian: Debian still needs/wants to support sysvinit. Of course
init.d scripts ought to create cache directories too (like munin,
mopidy, and others already do, but probably not all of them), but that
will be a bit more work. FHS applies to SysV init as well, so the same
reasoning still holds.
** Changed in: pollinate (Ubuntu)
Status: Confirmed => In Progress
https://bugs.launchpad.net/bugs/1848923
Title:
pollinate.service fails to start: ERROR: should execute as the
> Where could we download one of them to check the state of that path in
there?
See comment #7:
git clone https://github.com/cockpit-project/bots/
bots/vm-run ubuntu-stable
But I suppose that's moot now :)