[Bug 2064089] Re: python-gssapi 1.8.2-1ubuntu2 regression: ModuleNotFoundError: No module named 'gssapi.raw'

2024-05-01 Thread Martin Pitt
This was "fixed" in noble by clearing out noble-proposed, thanks!  That
took care of the worst fallout.

** Changed in: python-gssapi (Ubuntu Noble)
   Status: New => Invalid


[Bug 2064089] [NEW] python-gssapi 1.8.2-1ubuntu2 regression: ModuleNotFoundError: No module named 'gssapi.raw'

2024-04-29 Thread Martin Pitt
Public bug reported:

The recent no-change rebuild in
https://launchpad.net/ubuntu/+source/python-gssapi/1.8.2-1ubuntu2
regressed. With -1ubuntu1, the import works:

  python3 -c 'import gssapi'

but with -1ubuntu2, it crashes with

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib/python3/dist-packages/gssapi/__init__.py", line 31, in <module>
    from gssapi.raw.types import NameType, RequirementFlag, AddressType  # noqa
  File "/usr/lib/python3/dist-packages/gssapi/raw/__init__.py", line 45, in <module>
    importlib.import_module('{0}._enum_extensions.{1}'.format(__name__, name))
  File "/usr/lib/python3.12/importlib/__init__.py", line 90, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "gssapi/raw/_enum_extensions/ext_dce.pyx", line 3, in init gssapi.raw._enum_extensions.ext_dce
ModuleNotFoundError: No module named 'gssapi.raw'

which is a bit weird: the rebuilt deb did drop e.g.

  /usr/lib/python3/dist-packages/gssapi/raw/types.cpython-311-x86_64-linux-gnu.so

and the related *311* files, but the *312* ones are still present -- and
Python *is* 3.12:

# python3 --version
Python 3.12.3
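
For the record, a quick way to cross-check the interpreter's extension
ABI tag against what the package actually ships (a sketch; assumes the
binary package is python3-gssapi):

  # current interpreter's extension suffix -- should print .cpython-312-... on 3.12
  python3 -c 'import sysconfig; print(sysconfig.get_config_var("EXT_SUFFIX"))'
  # extension modules actually installed by the package
  dpkg -L python3-gssapi | grep '\.so$'

Both sides should point at 312 here, which is what makes the import
failure so puzzling.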


This breaks e.g. ipa-client-install (right away, no arguments or IPA setup 
needed).

So there's something subtle going on, but this really should be removed
from noble-proposed now that noble is stable.

** Affects: python-gssapi (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: python-gssapi (Ubuntu Noble)
 Importance: Undecided
 Status: New


** Tags: noble regression-proposed

** Also affects: python-gssapi (Ubuntu Noble)
   Importance: Undecided
   Status: New


[Bug 2060275] Re: pmproxy crash at startup in libpcp_web.so.1

2024-04-17 Thread Martin Pitt
There are no patches; it's a straight import of the source package into
Ubuntu. Ubuntu *does* use different compiler options than Debian, so
that may be a factor. Otherwise I'm in the same boat as you -- there's
only so much time I can throw at this (I've been doing full-time
"investigate, report, and try to reproduce regressions in various OSes"
for the last two weeks).

It would certainly be good if someone from Debian or Ubuntu could figure
out the debug symbol building, though. Without that, it's too hard to
figure out this crash.


[Bug 2061726] [NEW] rsyslog apparmor denial on reading /proc/sys/net/ipv6/conf/all/disable_ipv6

2024-04-15 Thread Martin Pitt
Public bug reported:

One of our Cockpit integration tests [1] spotted an AppArmor regression
in rsyslogd. This is coincidental: the test passes and doesn't do
anything with rsyslogd -- something just happens in the background that
triggers this (and I can actually reproduce it locally quite reliably).


Mar 08 10:48:20 m1.cockpit.lan systemd[1]: dpkg-db-backup.service: Deactivated 
successfully.
Mar 08 10:48:20 m1.cockpit.lan systemd[1]: Finished dpkg-db-backup.service - 
Daily dpkg database backup service.
Mar 08 10:48:20 m1.cockpit.lan systemd[1]: rsyslog.service: Sent signal SIGHUP 
to main process 752 (rsyslogd) on client request.
Mar 08 10:48:20 m1.cockpit.lan kernel: audit: type=1400 
audit(1615200500.418:125): apparmor="DENIED" operation="open" class="file" 
profile="rsyslogd" name="/proc/sys/net/ipv6/conf/all/disable_ipv6" pid=752 
comm="rsyslogd" requested_mask="r" denied_mask="r" fsuid=102 ouid=0
Mar 08 10:48:20 m1.cockpit.lan kernel: audit: type=1400 
audit(1615200500.418:126): apparmor="DENIED" operation="open" class="file" 
profile="rsyslogd" name="/proc/sys/net/ipv6/conf/all/disable_ipv6" pid=752 
comm="rsyslogd" requested_mask="r" denied_mask="r" fsuid=102 ouid=0


This happens on current Ubuntu 24.04 LTS noble devel, rsyslog 8.2312.0-3ubuntu8 
and apparmor 4.0.0-beta3-0ubuntu3.

[1] 
https://cockpit-logs.us-east-1.linodeobjects.com/pull-20317-ce39e07e-20240415-204952-ubuntu-stable-other/log.html#152
[2] 
https://cockpit-logs.us-east-1.linodeobjects.com/pull-20317-ce39e07e-20240415-204952-ubuntu-stable-other/TestHistoryMetrics-testEvents-ubuntu-stable-127.0.0.2-2901-FAIL.log.gz

** Affects: rsyslog (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: apparmor noble


[Bug 2061055] Re: Joining IPA domain does not restart ssh -- 'sshd.service' alias is not set up by default

2024-04-12 Thread Martin Pitt
Yeah, I could live with that -- but TBH I still consider this mostly a
bug in openssh: querying the status of sshd.service really should work.
Arch, RHEL, Fedora, openSUSE etc. all call this sshd.service.


[Bug 2061055] Re: Joining IPA domain does not restart ssh -- 'sshd.service' alias is not set up by default

2024-04-12 Thread Martin Pitt
Timo: It doesn't fail on Debian. See "That works in Debian
because..." in the description (TL;DR: Debian enables ssh.service rather
than ssh.socket, and enabling ssh.service sets up the symlink).

** Description changed:

  Joining a FreeIPA domain reconfigures SSH. E.g. it enables GSSAPI
  authentication in /etc/ssh/sshd_config.d/04-ipa.conf . After that, it
  tries to restart sshd, but that fails as "sshd.service" is not a thing
  on Ubuntu:
  
  2024-04-12T03:10:57Z DEBUG args=['/bin/systemctl', 'is-active', 
'sshd.service']
  2024-04-12T03:10:57Z DEBUG Process finished, return code=4
  
  (in /var/log/ipaclient-install.log)
  
  While that could be changed in freeipa, I'd argue that this is really a
  bug in Ubuntu's openssh package. Many upstream software, Ansible scripts
  etc. assume that the service is "sshd.service". In Debian/Ubuntu the
  primary unit is "ssh.service", but it has an `[Install]
  Alias=sshd.service`. That works in Debian because there sshd.service
  *actually* gets enabled by default, and ssh.socket isn't.
  
  But Ubuntu moved to socket activation (which is good!), so that
  ssh.socket is running by default. But that means that ssh.service never
  gets "systemctl enable"d, and hence the alias never gets set up:
  
  # systemctl status sshd.service
  Unit sshd.service could not be found.
  
  So if ssh.service is already running, it never gets restarted by "ipa-
  client-install".
  
  It would be really good to make that alias work by default -- if nothing
- else, just create the symlink manually in the postinst?
+ else, just ship the symlink in the .deb, or create the symlink manually
+ in the postinst?
  
  freeipa-client 4.10.2-2ubuntu3
  openssh-server 1:9.6p1-3ubuntu12
  
- 
  Note: we have tested this functionality in Cockpit on Ubuntu for a long time 
already. But until very recently we had a workaround to force the creation of 
that alias:
  
https://github.com/cockpit-project/bots/commit/3bf1b20f3fa5fe202b9710b3fe78d2133ba03f5d
  We dropped it because it broke image builds due to some bugs in openssh's 
postinst, but it was a bad one anyway: actual users don't have that hack, and 
it hides bugs like this.


[Bug 1946244] Re: When installing/uninstalling with realmd, uninstalling crashes with ScriptError

2024-04-11 Thread Martin Pitt
Confirmed in current noble.


[Bug 2061055] [NEW] Joining IPA domain does not restart ssh -- 'sshd.service' alias is not set up by default

2024-04-11 Thread Martin Pitt
Public bug reported:

Joining a FreeIPA domain reconfigures SSH. E.g. it enables GSSAPI
authentication in /etc/ssh/sshd_config.d/04-ipa.conf . After that, it
tries to restart sshd, but that fails as "sshd.service" is not a thing
on Ubuntu:

2024-04-12T03:10:57Z DEBUG args=['/bin/systemctl', 'is-active', 'sshd.service']
2024-04-12T03:10:57Z DEBUG Process finished, return code=4

(in /var/log/ipaclient-install.log)

While that could be changed in freeipa, I'd argue that this is really a
bug in Ubuntu's openssh package. A lot of upstream software, Ansible
scripts etc. assume that the service is "sshd.service". In Debian/Ubuntu
the primary unit is "ssh.service", but it has an `[Install]
Alias=sshd.service`. That works in Debian because there the sshd.service
alias *actually* gets set up: ssh.service gets enabled by default, and
ssh.socket isn't.

But Ubuntu moved to socket activation (which is good!), so that
ssh.socket is running by default. But that means that ssh.service never
gets "systemctl enable"d, and hence the alias never gets set up:

# systemctl status sshd.service
Unit sshd.service could not be found.

So if ssh.service is already running, it never gets restarted by "ipa-
client-install".

It would be really good to make that alias work by default -- if nothing
else, just create the symlink manually in the postinst?
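
A minimal sketch of what that could look like (assuming the unit file
lives at /usr/lib/systemd/system/ssh.service, as on noble; this is the
same symlink that "systemctl enable ssh.service" would create from the
[Install] Alias):

  ln -s /usr/lib/systemd/system/ssh.service /etc/systemd/system/sshd.service
  systemctl daemon-reload
  systemctl status sshd.service   # should now resolve to ssh.service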

freeipa-client 4.10.2-2ubuntu3
openssh-server 1:9.6p1-3ubuntu12


Note: we have tested this functionality in Cockpit on Ubuntu for a long time 
already. But until very recently we had a workaround to force the creation of 
that alias:
https://github.com/cockpit-project/bots/commit/3bf1b20f3fa5fe202b9710b3fe78d2133ba03f5d
We dropped it because it broke image builds due to some bugs in openssh's 
postinst, but it was a bad one anyway: actual users don't have that hack, and 
it hides bugs like this.

** Affects: freeipa (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: openssh (Ubuntu)
 Importance: Undecided
 Status: New

** Also affects: openssh (Ubuntu)
   Importance: Undecided
   Status: New


[Bug 2060615] Re: [noble] two versions of perl-modules are published, breaking pbuilder/debootstrap

2024-04-11 Thread Martin Pitt
Yay, this is finally fixed as of today: pbuilder creation and building a
noble VM image work again \o/ Thanks!

** Changed in: perl (Ubuntu Noble)
   Status: Confirmed => Fix Released


[Bug 2060014] Re: CVE-2024-2947 command injection when deleting a sosreport with a crafted name

2024-04-09 Thread Martin Pitt
In other words, having the fix in backports is fine, I think.


[Bug 2060014] Re: CVE-2024-2947 command injection when deleting a sosreport with a crafted name

2024-04-09 Thread Martin Pitt
Marc: Thanks -- no urgency from my side, I just wasn't sure about your
current CVE "must/may fix" policies.

** Changed in: cockpit (Ubuntu Mantic)
   Status: Triaged => Won't Fix


Re: Fwd: [Bug 2060275] [NEW] pmproxy crash at startup in libpcp_web.so.1

2024-04-09 Thread Martin Pitt
Nathan Scott [2024-04-09 17:30 +1000]:
> > It's not really unknown, it's "just" a file conflict:
>
> Yeah - the unknown bit for me is "why tho" - I cannot see conflicting
> files in those packages that would have any debug symbols (there's
> some common directories... but no binaries shared AFAICS).
>
> > | dpkg: error processing archive 
> > build/deb/pcp-pmda-infiniband-dbgsym_6.2.1-0.20240409.f312285_amd64.deb 
> > (--install):
> > |  trying to overwrite 
> > '/usr/lib/debug/.build-id/57/02df011cfaf166b948e1fefde236eaf3a6ee65.debug', 
> > which is also in package pcp-dbgsym 6.2.1-0.20240409.f312285
> > |
> > | dpkg: error processing archive 
> > build/deb/pcp-testsuite-dbgsym_6.2.1-0.20240409.f312285_amd64.deb 
> > (--install):
> > | trying to overwrite 
> > '/usr/lib/debug/.build-id/17/6edc7e590f766a2ea87b5decaeb994d7c48d24.debug', 
> > which is also in package pcp-dbgsym 6.2.1-0.20240409.f312285
> >
> > I.e. these are shipped in two different packages.
>
> "these"?

These two files, i.e.
/usr/lib/debug/.build-id/57/02df011cfaf166b948e1fefde236eaf3a6ee65.debug, which
exists in both pcp-pmda-infiniband-dbgsym and pcp-dbgsym. Presumably they
shouldn't be in the latter.

(I'm out of this for many years, so I'm afraid I don't know what a good
solution is, i.e. how much control you have over dbgsym generation).
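
For reference, one way to double-check which of the built debs ship the
conflicting build-id (just a sketch; the build/deb paths are taken from
the log above):

  for d in build/deb/*-dbgsym_*.deb; do
      dpkg -c "$d" | grep -q 02df011cfaf166b948e1fefde236eaf3a6ee65 && echo "$d"
  done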

> OK ... so that's pointing towards v3 archives a little bit, good.
>
> > > The limited stack we have suggests we're in pmproxy log discovery
> > > code, in an inotify/libuv event, which does have v3-specific code.
> > >
> > > For those who can reproduce this, it'd be worth experimenting and
> > > setting the following field back to 2 ... (requires pmlogger restart).
> > >
> > > $ grep PCP_ARCHIVE_VERSION /etc/pcp.conf
> > > PCP_ARCHIVE_VERSION=3

I created https://github.com/cockpit-project/cockpit/pull/20275 with an x120
test amplification, and interestingly the overwhelming majority of test runs
there actually crashes. So with that I have fairly high confidence in the
significance of the test results when trying a change.

I tested with

  sed -i 's/PCP_ARCHIVE_VERSION=3/PCP_ARCHIVE_VERSION=2/' /etc/pcp.conf

This runs during image preparation, i.e. with a clean /var/log and no daemons
running. The VM is freshly booted for each test, so there is no running
pmlogger. There is no observed change: it still crashes the same way and with
the same frequency ("almost every time").

Note that I can easily pull in a PPA or even a binary with curl for
testing.


Re: Fwd: [Bug 2060275] [NEW] pmproxy crash at startup in libpcp_web.so.1

2024-04-09 Thread Martin Pitt
Hello Nathan,

Nathan Scott [2024-04-09 16:19 +1000]:
> Is any of this getting through... ?  Just checked the Ubuntu tracker
> URL, and looks like every response Ken or I sent has been dropped on
> the ground.

Right, I didn't get any response either (not a surprise, as Launchpad *first*
receives the replies and only then sends out notifications). I did do bug
replies via email in the past, but these days they may get caught in some spam
prevention measures? Probably better to just post them on the web UI?

I'm CCing my reply to the LP bug, if you don't mind -- first of all to test
this, and also to keep a more permanent record of our discussion.

> Long and short of it is, we've not been able to reproduce and debian
> dbgsym on sub-packages is still broken for unknown reasons...
> https://github.com/performancecopilot/pcp/pull/1948

It's not really unknown; it's "just" a file conflict:

| dpkg: error processing archive 
build/deb/pcp-pmda-infiniband-dbgsym_6.2.1-0.20240409.f312285_amd64.deb 
(--install):
|  trying to overwrite 
'/usr/lib/debug/.build-id/57/02df011cfaf166b948e1fefde236eaf3a6ee65.debug', 
which is also in package pcp-dbgsym 6.2.1-0.20240409.f312285
|
| dpkg: error processing archive 
build/deb/pcp-testsuite-dbgsym_6.2.1-0.20240409.f312285_amd64.deb (--install):
| trying to overwrite 
'/usr/lib/debug/.build-id/17/6edc7e590f766a2ea87b5decaeb994d7c48d24.debug', 
which is also in package pcp-dbgsym 6.2.1-0.20240409.f312285

I.e. these are shipped in two different packages.

[1]
https://github.com/performancecopilot/pcp/actions/runs/8610492722/job/23596103839?pr=1948#step:9:149

> This is not a known bug - do you know if this is specific to pcp-6.2.0
> (latest PCP) or are earlier versions affected?  One change that may
> be related here is we enabled v3 PCP archives by default in 6.2.0.

We only see this in noble (i.e. the upcoming 24.04): 6.0.5 in 23.10 was still
OK, and 6.2.0 occasionally crashes.

> The limited stack we have suggests we're in pmproxy log discovery
> code, in an inotify/libuv event, which does have v3-specific code.
>
> For those who can reproduce this, it'd be worth experimenting and
> setting the following field back to 2 ... (requires pmlogger restart).
>
> $ grep PCP_ARCHIVE_VERSION /etc/pcp.conf
> PCP_ARCHIVE_VERSION=3
>
> If that clears the issue, it'll help us triangulate on a possible cause.

OK -- I'll do some experimentation and report back here.

Thanks!

Martin


[Bug 2060615] Re: [noble] two versions of perl-modules are published, breaking debootstrap

2024-04-08 Thread Martin Pitt
Aside from the curl check in the description, this can be reproduced most
quickly with

  sudo /usr/sbin/debootstrap --include=build-essential noble /tmp/n http://archive.ubuntu.com/ubuntu

Errors were encountered while processing:
 perl
 libdpkg-perl
 libperl5.38t64:amd64
 dpkg-dev
 build-essential

These are all ultimately due to

dpkg: dependency problems prevent configuration of perl:
 perl depends on perl-modules-5.38 (>= 5.38.2-3.2build2); however:
  Version of perl-modules-5.38 on system is 5.38.2-3.

dpkg: error processing package perl (--configure):
 dependency problems - leaving unconfigured


** Tags added: debootstrap noble


[Bug 2060615] Re: [noble] two versions of perl-modules are published, breaking debootstrap

2024-04-08 Thread Martin Pitt
I wonder where that comes from --
https://launchpad.net/ubuntu/+source/perl/+publishinghistory says that
5.38.2-3 was deleted, but only from noble-updates. In noble proper it is
merely "superseded". https://launchpad.net/ubuntu/+source/perl/5.38.2-3
doesn't show it being published anyway, and it's not in
https://ubuntu-archive-team.ubuntu.com/nbs.html either.

** Summary changed:

- [noble] two versions of perl-modules are published, breaking debootstrap
+ [noble] two versions of perl-modules are published, breaking 
pbuilder/debootstrap


[Bug 2060615] [NEW] [noble] two versions of perl-modules are published, breaking pbuilder/debootstrap

2024-04-08 Thread Martin Pitt
Public bug reported:

For the last two weeks, building noble VM images for our CI has been
broken. Most of it was uninstallability due to the xz reset, but for the
last three days, `pbuilder --create` has failed [2] because it gets perl
and perl-modules-5.38 in two different versions:

2024-04-08 08:47:08 
URL:http://archive.ubuntu.com/ubuntu/pool/main/p/perl/perl-base_5.38.2-3.2build2_amd64.deb
 [1822564/1822564] -> 
"/var/cache/pbuilder/aptcache//perl-base_5.38.2-3.2build2_amd64.deb" [1]
2024-04-08 08:47:09 
URL:http://archive.ubuntu.com/ubuntu/pool/main/p/perl/perl-modules-5.38_5.38.2-3_all.deb
 [3110080/3110080] -> 
"/var/cache/pbuilder/aptcache//perl-modules-5.38_5.38.2-3_all.deb" [1]

and then trying to configure the packages blows up. The root cause is
that perl-modules has *two* versions published:


# curl -s 
http://archive.ubuntu.com/ubuntu/dists/noble/main/binary-amd64/Packages.xz|xzgrep
 -A5 'Package: perl-modules-'
Package: perl-modules-5.38
Architecture: all
Version: 5.38.2-3.2build2
Multi-Arch: foreign
Priority: optional
Build-Essential: yes
--
Package: perl-modules-5.38
Architecture: all
Version: 5.38.2-3
Multi-Arch: foreign
Priority: optional
Build-Essential: yes

While apt is clever enough to pick the right one, debootstrap isn't. Can
you please remove the old perl-modules-5.38 5.38.2-3 from noble?

Thanks!


[1] https://github.com/cockpit-project/bots/issues/6147
[2] 
https://cockpit-logs.us-east-1.linodeobjects.com/image-refresh-ubuntu-stable-02cafde3-20240407-074108/log.html

** Affects: perl (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: perl (Ubuntu Noble)
 Importance: Undecided
 Status: New


** Tags: debootstrap noble

** Also affects: perl (Ubuntu Noble)
   Importance: Undecided
   Status: New


[Bug 2060014] Re: CVE-2024-2947 command injection when deleting a sosreport with a crafted name

2024-04-07 Thread Martin Pitt
> They didn't propagate yet due to noble being jammed so much

This happened now \o/, so they are ready to go.

** Changed in: cockpit (Ubuntu Noble)
   Status: Fix Committed => Fix Released


[Bug 2060275] Re: pmproxy crash at startup in libpcp_web.so.1

2024-04-06 Thread Martin Pitt
Maybe the missing dbgsym packages are on purpose? The build log has
this:

# Note: --no-automatic-dbgsym not defined for all releases up to
#   and including Debian 8 (jessie), but defined after that
#   ... expect a warning on older releases, but no other ill
#   effects from the unknown option ... until dh_strip started
#   aborting on Ubuntu 14.04 (vm00) on 23 Nov 2017
if dh_strip -a --no-automatic-dbgsym; then :; else dh_strip -a; fi


[Bug 2060275] [NEW] pmproxy crash at startup in libpcp_web.so.1

2024-04-05 Thread Martin Pitt
Public bug reported:

In Cockpit's CI we see a lot of pmproxy crashes like [1] in a test which
starts/stops/reconfigures pmlogger, pmproxy, and redis. The journal
(some examples are [2][3][4]) always shows a similar stack trace:

pmproxy[9832]: segfault at 3 ip 767961047e45 sp 7ffe97e825d0
error 4 in libpcp_web.so.1[767961018000+5c000] likely on CPU 0 (core 0,
socket 0)

Stack trace of thread 9832:
#0  0x767961047e45 n/a (libpcp_web.so.1 + 0x38e45)
#1  0x767961059745 n/a (libpcp_web.so.1 + 0x4a745)
#2  0x767961056311 n/a (libpcp_web.so.1 + 0x47311)
#3  0x767960f5c52b n/a (libuv.so.1 + 0x2752b)
#4  0x767960f5dbdb n/a (libuv.so.1 + 0x28bdb)
#5  0x767960f44ce8 uv_run (libuv.so.1 + 0xfce8)
#6  0x5cae24f55097 n/a (pmproxy + 0xb097)
#7  0x5cae24f53b6d n/a (pmproxy + 0x9b6d)
#8  0x76796062a1ca __libc_start_call_main (libc.so.6 + 0x2a1ca)
#9  0x76796062a28b __libc_start_main_impl (libc.so.6 + 0x2a28b)
#10 0x5cae24f54135 n/a (pmproxy + 0xa135)

Unfortunately that's not super useful. I managed to reproduce it once locally
and got a core dump (attached), but running it through gdb isn't super
enlightening either: it does spend several minutes downloading debug symbols,
yet apparently not the right ones?

This GDB supports auto-downloading debuginfo from the following URLs:
  
Enable debuginfod for this session? (y or [n]) y
Debuginfod has been enabled.

Downloading separate debug info for /lib/libpcp_web.so.1
[... lots more ...]

(gdb) bt
#0  0x7b1d588cbe45 in ?? () from /lib/libpcp_web.so.1
#1  0x7b1d588dd745 in ?? () from /lib/libpcp_web.so.1
#2  0x7b1d588da311 in ?? () from /lib/libpcp_web.so.1
#3  0x7b1d587e052b in uv__inotify_read (loop=0x7b1d587ed180 
, dummy=, events=1)
at /usr/src/libuv1-1.48.0-1/src/unix/linux.c:2466
#4  0x7b1d587e1bdb in uv__io_poll (loop=0x7b1d587ed180 
, timeout=)
at /usr/src/libuv1-1.48.0-1/src/unix/linux.c:1528
#5  0x7b1d587c8ce8 in uv_run (loop=0x7b1d587ed180 , 
mode=UV_RUN_DEFAULT) at /usr/src/libuv1-1.48.0-1/src/unix/core.c:448
#6  0x5b98349dd097 in ?? ()
#7  0x5b98349dbb6d in ?? ()
#8  0x7b1d57e2a1ca in __libc_start_call_main 
(main=main@entry=0x5b98349db610, argc=argc@entry=3, 
argv=argv@entry=0x7ffc673aeac8)
at ../sysdeps/nptl/libc_start_call_main.h:58
#9  0x7b1d57e2a28b in __libc_start_main_impl (main=0x5b98349db610, argc=3, 
argv=0x7ffc673aeac8, init=, fini=,
rtld_fini=, stack_end=0x7ffc673aeab8) at 
../csu/libc-start.c:360
#10 0x5b98349dc135 in ?? ()

So I followed the "good old dbgsym" way [5], but:

E: Unable to locate package libpcp-web1-dbgsym
E: Unable to locate package libpcp3-dbgsym
E: Unable to locate package pcp-dbgsym

The build log [6] also doesn't mention any dbgsym builds, so it seems
they are missing?
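
For reference, the dbgsym setup from [5] is roughly the following (a
sketch from memory, not re-verified here); it is presumably the route
that produced the "Unable to locate package" errors above, because the
dbgsym packages were apparently never built in the first place:

  sudo apt install ubuntu-dbgsym-keyring
  echo "deb http://ddebs.ubuntu.com noble main restricted universe multiverse" | \
      sudo tee /etc/apt/sources.list.d/ddebs.list
  sudo apt update
  sudo apt install pcp-dbgsym libpcp-web1-dbgsym libpcp3-dbgsym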

[1] 
https://cockpit-logs.us-east-1.linodeobjects.com/pull-20264-13fcc041-20240404-201827-ubuntu-stable-other/log.html#34
[2] 
https://cockpit-logs.us-east-1.linodeobjects.com/pull-20264-13fcc041-20240404-201827-ubuntu-stable-other/TestHistoryMetrics-testPmProxySettings-ubuntu-stable-127.0.0.2-2201-FAIL.log.gz
[3] 
https://cockpit-logs.us-east-1.linodeobjects.com/pull-6177-6626b317-20240404-225904-ubuntu-stable-other-cockpit-project-cockpit/TestHistoryMetrics-testPmProxySettings-ubuntu-stable-127.0.0.2-2401-FAIL.log.gz
[4] 
https://cockpit-logs.us-east-1.linodeobjects.com/pull-20261-d1621935-20240404-105717-ubuntu-stable-other/TestHistoryMetrics-testPmProxySettings-ubuntu-stable-127.0.0.2-2201-FAIL.log.gz
[5] https://wiki.ubuntu.com/DebuggingProgramCrash
[6] 
https://launchpadlibrarian.net/714485247/buildlog_ubuntu-noble-amd64.pcp_6.2.0-1_BUILDING.txt.gz


Ubuntu 24.04
pcp 6.2.0-1

** Affects: pcp (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: crash noble


[Bug 2060275] Re: pmproxy crash at startup in libpcp_web.so.1

2024-04-05 Thread Martin Pitt
Sorry, I clicked the wrong button; I'll expand the bug description. In the
meantime, attaching the core dump.

** Attachment added: "core dump"
   
https://bugs.launchpad.net/ubuntu/+source/pcp/+bug/2060275/+attachment/5761630/+files/core.pmproxy.997.9420690eb6044feb9fbda197076efdac.4632.171229645400.zst

** Description changed:

  In Cockpit's CI we see a lot of pmproxy crashes like [1] in a test which
  starts/stops/reconfigures pmlogger, pmproxy, and redis. The journal
  (some examples are [2][3][4]) always shows a similar stack trace:
  
  pmproxy[9832]: segfault at 3 ip 767961047e45 sp 7ffe97e825d0
  error 4 in libpcp_web.so.1[767961018000+5c000] likely on CPU 0 (core 0,
  socket 0)
  
  Stack trace of thread 9832:
  #0  0x767961047e45 n/a (libpcp_web.so.1 + 0x38e45)
  #1  0x767961059745 n/a (libpcp_web.so.1 + 0x4a745)
  #2  0x767961056311 n/a (libpcp_web.so.1 + 0x47311)
  #3  0x767960f5c52b n/a (libuv.so.1 + 0x2752b)
  #4  0x767960f5dbdb n/a (libuv.so.1 + 0x28bdb)
  #5  0x767960f44ce8 uv_run (libuv.so.1 + 0xfce8)
  #6  0x5cae24f55097 n/a (pmproxy + 0xb097)
  #7  0x5cae24f53b6d n/a (pmproxy + 0x9b6d)
  #8  0x76796062a1ca __libc_start_call_main (libc.so.6 + 0x2a1ca)
  #9  0x76796062a28b __libc_start_main_impl (libc.so.6 + 0x2a28b)
  #10 0x5cae24f54135 n/a (pmproxy + 0xa135)
  
- Unfortunately that's not super useful
+ Unfortunately that's not super useful. But I managed to reproduce it
+ once locally and got a core dump (attached). But running it through gdb
+ isn't super enlightening either. It does spend several minutes
+ downloading debug symbols, but apparently not the right ones?
  
+ This GDB supports auto-downloading debuginfo from the following URLs:
+   
+ Enable debuginfod for this session? (y or [n]) y
+ Debuginfod has been enabled.
+ 
+ Downloading separate debug info for /lib/libpcp_web.so.1
+ [... lots more ...]
+ 
+ (gdb) bt
+ #0  0x7b1d588cbe45 in ?? () from /lib/libpcp_web.so.1
+ #1  0x7b1d588dd745 in ?? () from /lib/libpcp_web.so.1
+ #2  0x7b1d588da311 in ?? () from /lib/libpcp_web.so.1
+ #3  0x7b1d587e052b in uv__inotify_read (loop=0x7b1d587ed180 
, dummy=, events=1)
+ at /usr/src/libuv1-1.48.0-1/src/unix/linux.c:2466
+ #4  0x7b1d587e1bdb in uv__io_poll (loop=0x7b1d587ed180 
, timeout=)
+ at /usr/src/libuv1-1.48.0-1/src/unix/linux.c:1528
+ #5  0x7b1d587c8ce8 in uv_run (loop=0x7b1d587ed180 , 
mode=UV_RUN_DEFAULT) at /usr/src/libuv1-1.48.0-1/src/unix/core.c:448
+ #6  0x5b98349dd097 in ?? ()
+ #7  0x5b98349dbb6d in ?? ()
+ #8  0x7b1d57e2a1ca in __libc_start_call_main 
(main=main@entry=0x5b98349db610, argc=argc@entry=3, 
argv=argv@entry=0x7ffc673aeac8)
+ at ../sysdeps/nptl/libc_start_call_main.h:58
+ #9  0x7b1d57e2a28b in __libc_start_main_impl (main=0x5b98349db610, 
argc=3, argv=0x7ffc673aeac8, init=, fini=,
+ rtld_fini=, stack_end=0x7ffc673aeab8) at 
../csu/libc-start.c:360
+ #10 0x5b98349dc135 in ?? ()
+ 
+ So I followed the "good old dbgsym" way [5], but:
+ 
+ E: Unable to locate package libpcp-web1-dbgsym
+ E: Unable to locate package libpcp3-dbgsym
+ E: Unable to locate package pcp-dbgsym
+ 
+ The build log [6] also doesn't mention any dbgsym builds, so it seems
+ they are missing?
  
  [1] 
https://cockpit-logs.us-east-1.linodeobjects.com/pull-20264-13fcc041-20240404-201827-ubuntu-stable-other/log.html#34
  [2] 
https://cockpit-logs.us-east-1.linodeobjects.com/pull-20264-13fcc041-20240404-201827-ubuntu-stable-other/TestHistoryMetrics-testPmProxySettings-ubuntu-stable-127.0.0.2-2201-FAIL.log.gz
  [3] 
https://cockpit-logs.us-east-1.linodeobjects.com/pull-6177-6626b317-20240404-225904-ubuntu-stable-other-cockpit-project-cockpit/TestHistoryMetrics-testPmProxySettings-ubuntu-stable-127.0.0.2-2401-FAIL.log.gz
  [4] 
https://cockpit-logs.us-east-1.linodeobjects.com/pull-20261-d1621935-20240404-105717-ubuntu-stable-other/TestHistoryMetrics-testPmProxySettings-ubuntu-stable-127.0.0.2-2201-FAIL.log.gz
+ [5] https://wiki.ubuntu.com/DebuggingProgramCrash
+ [6] 
https://launchpadlibrarian.net/714485247/buildlog_ubuntu-noble-amd64.pcp_6.2.0-1_BUILDING.txt.gz
+ 
+ 
+ Ubuntu 24.04
+ pcp 6.2.0-1

** Tags added: crash noble


[Bug 2060014] Re: CVE-2024-2947 command injection when deleting a sosreport with a crafted name

2024-04-02 Thread Martin Pitt
Backporters: I uploaded backports from noble-proposed to mantic and
jammy. They didn't propagate yet due to noble being jammed so much, but
we do validate them on both releases upstream. I'll let you decide
whether to accept or stall them.


[Bug 2060014] Re: CVE-2024-2947 command injection when deleting a sosreport with a crafted name

2024-04-02 Thread Martin Pitt
@Marc, security team: I'd like your opinion/preference/guidance for
mantic: It currently has upstream version 300.1. Half a year ago we did
two more upstream point releases for critical bug fixes (aimed at and
uploaded to RHEL): https://github.com/cockpit-
project/cockpit/releases/tag/300.2 and https://github.com/cockpit-
project/cockpit/releases/tag/300.3 . These have gotten a lot of field testing
by now, and it would be useful to have them in mantic as well.

So I can either cut a 300.4 on top of 300.3 and cherry-pick that
sosreport patch, or if you don't want these, then a 300.1.1 with just
the sosreport fix.

It's also valid IMHO to just declare it as "wontfix" -- TBH most server
users are going to stick to LTS, the sosreport plugin/page is not really
that interesting for Ubuntu (there's apport and other support tools for
Canonical), the vuln isn't *that* dramatic, and many Cockpit users use
the official backports anyway.


[Bug 2060014] Re: CVE-2024-2947 command injection when deleting a sosreport with a crafted name

2024-04-02 Thread Martin Pitt
Note: I tried to add backports tasks, but there is neither a
https://launchpad.net/jammy-backports nor a
https://launchpad.net/mantic-backports project. But it's not a biggie: these
will both get 314 as soon as it lands in noble.


[Bug 2060014] [NEW] CVE-2024-2947 command injection when deleting a sosreport with a crafted name

2024-04-02 Thread Martin Pitt
Public bug reported:

Cockpit 270 introduced a possible local privilege escalation
vulnerability when deleting diagnostic reports (sosreport). Files in
/var/tmp/ are controllable by any user. In particular, an unprivileged
user could create an sosreport* file whose name contains a ' and a shell
command, which would then run with root privileges when the admin
Cockpit user tried to delete the report.

Cockpit version 314 fixes the problem by removing the files with direct
system calls instead of a shell command. Specifically, this commit:
https://github.com/cockpit-
project/cockpit/commit/9c4cc9b6df632082538b53bdc8ee9ec1c5cad4da
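
To illustrate the class of bug (a made-up sketch, not Cockpit's actual
code): removing a user-controlled path through a shell is what goes
wrong, while passing it as a plain argument (or unlinking it directly)
is not:

  # attacker-chosen file name in /var/tmp
  name="sosreport-x'; touch /tmp/injected; echo '.tar.xz"
  # vulnerable pattern: the quote in $name breaks out of the quoting and
  # the injected command runs with the caller's (root) privileges
  sh -c "rm -f '/var/tmp/$name'"
  # safe pattern: the name is passed verbatim as a single argument
  rm -f -- "/var/tmp/$name"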

Thus the only affected released version is the one in 23.10 mantic; the one in
22.04 LTS is older (264). The backports version is affected, though. 314 has
been in noble-proposed for a while, but it'll probably take several more weeks
to sort out the massive uninstallability and autopkgtest queue before it can
land in noble proper (and thus before the mantic and jammy backports get
updated).

** Affects: cockpit (Ubuntu)
 Importance: High
 Assignee: Martin Pitt (pitti)
 Status: Fix Committed

** Affects: cockpit (Ubuntu Mantic)
 Importance: Medium
 Status: Triaged

** Affects: cockpit (Ubuntu Noble)
 Importance: High
 Assignee: Martin Pitt (pitti)
 Status: Fix Committed


** Tags: mantic noble

** Changed in: cockpit (Ubuntu)
 Assignee: (unassigned) => Martin Pitt (pitti)

** Also affects: cockpit (Ubuntu Mantic)
   Importance: Undecided
   Status: New

** Also affects: cockpit (Ubuntu Noble)
   Importance: High
 Assignee: Martin Pitt (pitti)
   Status: New

** Changed in: cockpit (Ubuntu Noble)
   Status: New => Fix Committed

** Changed in: cockpit (Ubuntu Mantic)
   Importance: Undecided => Medium

** Changed in: cockpit (Ubuntu Mantic)
   Status: New => Triaged


[Bug 2056739] Re: apparmor="DENIED" operation="open" class="file" profile="virt-aa-helper" name="/etc/gnutls/config"

2024-03-12 Thread Martin Pitt
** Changed in: chrony (Ubuntu)
   Status: New => Won't Fix

** Changed in: gnutls28 (Ubuntu)
   Status: New => Won't Fix

** Changed in: libvirt (Ubuntu)
   Status: New => Won't Fix


[Bug 2046477] Re: Enable unprivileged user namespace restrictions by default

2024-03-11 Thread Martin Pitt
Just to make sure that we are really talking about the same thing: this bug
sounds like it is *intended* that

unshare --user --map-root-user /bin/bash -c whoami

(as unpriv user) now fails in current Ubuntu 24.04 noble. That still
worked in released 23.10.

I am starting to test Cockpit on the current noble dailies [1] to make
sure everything is ready for 24.04 LTS (as 23.10 was a bit of a
disaster...), and aside from some non-fatal AppArmor noise this is the
most important issue. This breaks /usr/lib/cockpit/cockpit-desktop,
which uses a user namespace to isolate Cockpit's web server and a
browser, and that isolation is absolutely crucial for its security.

I can update cockpit-ws.deb to ship a new file /etc/apparmor.d/cockpit-
desktop with

-- 8< ---
abi <abi/4.0>,

include <tunables/global>

profile cockpit-desktop /usr/lib/cockpit/cockpit-desktop flags=(unconfined) {
  userns,

  # Site-specific additions and overrides. See local/README for details.
  include if exists <local/cockpit-desktop>
}
-- 8< ---
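
To pick up the new profile without a reboot, something like this should
do (a sketch; the path matches the file above):

  apparmor_parser -r /etc/apparmor.d/cockpit-desktop
  aa-status | grep cockpit-desktop   # profile should show up as loaded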

I confirmed that this works fine. I just wanted to check that this is
acceptable, and not circumventing your intentions here?

Thanks!


[1] https://github.com/cockpit-project/bots/pull/6048


[Bug 1774000] Re: Fails to boot cirros QEMU image with tuned running

2024-03-11 Thread Martin Pitt
** Tags added: cockpit-test


[Bug 2040483] Re: AppArmor denies crun sending signals to containers (stop, kill)

2024-03-11 Thread Martin Pitt
** Tags added: cockpit-test


[Bug 2056768] [NEW] apparmor="DENIED" operation="open" class="file" profile="rsyslogd" name="/run/systemd/sessions/"

2024-03-11 Thread Martin Pitt
Public bug reported:

There is an AppArmor regression in current noble. In Cockpit we recently
started to test on noble (to prevent a repeat of the "major regressions
after release" fiasco from 23.10).

For some weird reason, rsyslog is installed *by default* [1] in the
cloud images. That is a rather pointless waste of CPU and disk space, as
it's an unnecessary running daemon and duplicates all the written logs.

But more specifically, we noticed [2] an AppArmor rejection. Reproducer
is simple:

logger -p user.emerg --tag check-journal EMERGENCY_MESSAGE

this causes

type=1400 audit(1710168739.345:108): apparmor="DENIED"
operation="open" class="file" profile="rsyslogd"
name="/run/systemd/sessions/" pid=714 comm=72733A6D61696E20513A526567
requested_mask="r" denied_mask="r" fsuid=102 ouid=0

Note that it doesn't actually fail: the "EMERGENCY_MESSAGE" does appear
in the journal and also in /var/log/syslog. But it's some noise that
triggers our (and presumably other admins') log detectors.


rsyslog 8.2312.0-3ubuntu3
apparmor 4.0.0~alpha4-0ubuntu1


[1] 
https://cloud-images.ubuntu.com/daily/server/noble/current/noble-server-cloudimg-amd64.manifest
[2] 
https://cockpit-logs.us-east-1.linodeobjects.com/pull-6048-20240311-125838-b465e9b2-ubuntu-stable-other-cockpit-project-cockpit/log.html#118

** Affects: rsyslog (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: rsyslog (Ubuntu Noble)
 Importance: Undecided
 Status: New


** Tags: apparmor cockpit-test noble regression-release

** Also affects: rsyslog (Ubuntu Noble)
   Importance: Undecided
   Status: New


[Bug 2056747] Re: apparmor="DENIED" operation="open" class="file" profile="/usr/sbin/chronyd" name="/etc/gnutls/config"

2024-03-11 Thread Martin Pitt
*** This bug is a duplicate of bug 2056739 ***
https://bugs.launchpad.net/bugs/2056739

Absolutely agree, thanks Christian!


[Bug 2056747] [NEW] apparmor="DENIED" operation="open" class="file" profile="/usr/sbin/chronyd" name="/etc/gnutls/config"

2024-03-11 Thread Martin Pitt
Public bug reported:

Merely booting the current noble cloud image with "chrony" installed causes
this:

audit: type=1400 audit(1710152842.540:107): apparmor="DENIED"
operation="open" class="file" profile="/usr/sbin/chronyd"
name="/etc/gnutls/config" pid=878 comm="chronyd" requested_mask="r"
denied_mask="r" fsuid=0 ouid=0

It's not harmful, but causes noise in the logs. This is similar to bug
#2056739 for libvirt.

apparmor 4.0.0~alpha4-0ubuntu1
chrony 4.5-1ubuntu1
libgnutls30 3.8.3-1ubuntu1

** Affects: chrony (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: chrony (Ubuntu Noble)
 Importance: Undecided
 Status: New


** Tags: cockpit-test noble regression-release

** Tags added: cockpit-test

** Also affects: chrony (Ubuntu Noble)
   Importance: Undecided
   Status: New


[Bug 2056739] [NEW] apparmor="DENIED" operation="open" class="file" profile="virt-aa-helper" name="/etc/gnutls/config"

2024-03-11 Thread Martin Pitt
Public bug reported:

Running any VM in libvirt causes a new AppArmor violation in current
noble. This is a regression; it didn't happen in any previous release.

Reproducer:

  virt-install --memory 50 --pxe --virt-type qemu --os-variant
alpinelinux3.8 --disk none --wait 0 --name test1

(This is just the simplest way to create a test VM; its exact form or shape
doesn't matter at all.)

Results in lots of

audit: type=1400 audit(1710146677.570:108): apparmor="DENIED"
operation="open" class="file" profile="virt-aa-helper"
name="/etc/gnutls/config" pid=1480 comm="virt-aa-helper"
requested_mask="r" denied_mask="r" fsuid=0 ouid=0


libvirt-daemon 10.0.0-2ubuntu1
apparmor 4.0.0~alpha4-0ubuntu1
libgnutls30:amd64 3.8.3-1ubuntu1

** Affects: libvirt (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: libvirt (Ubuntu Noble)
 Importance: Undecided
 Status: New


** Tags: cockpit-test noble regression-release

** Also affects: libvirt (Ubuntu Noble)
   Importance: Undecided
   Status: New

** Tags added: cockpit-test


[Bug 1968131] Re: Starting VM with UEFI firmware fails with swtpm

2022-04-07 Thread Martin Pitt
I tested the PPA, and it works like a charm now. Thanks Christian and
Simon!

For once, kicking some{thing,one} out of their $HOME does something
good.. 

** Changed in: swtpm (Ubuntu Jammy)
   Status: Confirmed => In Progress


[Bug 1968131] Re: Starting VM with UEFI firmware fails with swtpm

2022-04-07 Thread Martin Pitt
Our CI uses a Jammy Ubuntu cloud image, but with quite a large list of
extra installed packages. To make sure it's not something specific to
that environment, I tried this:

  autopkgtest-buildvm-ubuntu-cloud
  qemu-system-x86_64 -enable-kvm -nographic -m 2048 -device virtio-rng-pci 
-drive file=autopkgtest-jammy-amd64.img,if=virtio -snapshot

Log in as ubuntu:ubuntu, then

   sudo apt update
   sudo eatmydata apt install -y virtinst libvirt-daemon-system
   sudo touch /var/lib/libvirt/empty.iso
   sudo virt-install --name t1 --os-variant fedora28 --memory 128 --wait -1 
--noautoconsole --disk 'size=0.25,format=qcow2' --cdrom 
/var/lib/libvirt/empty.iso --boot uefi

it fails in exactly the same way. So (1) this confirms it's not our
Cockpit CI environment, and (2) provides a nice smoke autopkgtest for
libvirt.


[Bug 1968131] Re: Starting VM with UEFI firmware fails with swtpm

2022-04-07 Thread Martin Pitt
Right, I understand -- but introducing the dependency was an explicit
decision (#1948748), and it seems to be broken for its main use case. So
in the simplest case the Recommends could be reverted, and reintroduced
once this is understood?


[Bug 1968131] [NEW] Starting VM with UEFI firmware fails with swtpm

2022-04-07 Thread Martin Pitt
Public bug reported:

https://launchpad.net/ubuntu/+source/libvirt/8.0.0-1ubuntu6 introduced a
Recommends on "swtpm", so this package now gets installed by default
when installing libvirt. But this broke UEFI:

  touch /var/lib/libvirt/empty.iso
  virt-install --name t1 --os-variant fedora28 --memory 128 --wait -1 
--noautoconsole --disk 'size=0.25,format=qcow2' --cdrom 
/var/lib/libvirt/empty.iso --boot uefi

This fails:

WARNING  Requested memory 128 MiB is less than the recommended 1024 MiB
for OS fedora28

Starting install...
Allocating 't1.qcow2'   
   |0 B  00:00:00 ... 
Removing disk 't1.qcow2'
   |0 B  00:00:00 
ERROR    internal error: Could not run '/usr/bin/swtpm_setup'. exitstatus: 1; 
Check error log '/var/log/swtpm/libvirt/qemu/t1-swtpm.log' for details.
Domain installation does not appear to have been successful.


# cat /var/log/swtpm/libvirt/qemu/t1-swtpm.log
Starting vTPM manufacturing as swtpm:swtpm @ Thu 07 Apr 2022 07:11:55 AM UTC
Successfully created RSA 2048 EK with handle 0x81010001.
  Invoking /usr/lib/x86_64-linux-gnu/swtpm/swtpm-localca --type ek --ek 
91863a7321edf06c0feb6f388950774acca7813f0d595a78463c1ce29798ab015bebb70da8a1fb8c4c353507240d32afd0e51ff173068e86e40c8f71bfa311919dd8f840e7a11576515eff08739822cfe7d3c4cc0e228623f140fc2948a0c519bc2b3d06d0a7f5bd9add9d27d9d2132459ae7911dc441dd156c842ac7b8fcb5611e589fde7ca9516eaf3a32b64b7ece348b023a6567e64a9ad491c12b1309624f7fcaa4dc9f69387bc59a743c64db664f78258dccda63635e5e934f22e594e073906e737486268601fd812979a16db23faf0512d3b714d832d69a80fc01b31cec1d5603ee06544338907f38164636df6cfbdc1168ac5eda01ff5def64076e5e7
 --dir /var/lib/libvirt/swtpm/ade6145c-3d22-46d8-8bbc-29792e4cfa0c/tpm2 
--logfile /var/log/swtpm/libvirt/qemu/t1-swtpm.log --vmid 
t1:ade6145c-3d22-46d8-8bbc-29792e4cfa0c --tpm-spec-family 2.0 --tpm-spec-level 
0 --tpm-spec-revision 164 --tpm-manufacturer id:1014 --tpm-model swtpm 
--tpm-version id:20191023 --tpm2 --configfile /etc/swtpm-localca.conf 
--optsfile /etc/swtpm-localca.options
Creating root CA and a local CA's signing key and issuer cert.
Could not create root-CA:Can't load ./.rnd into RNG
40D7AD231A7F:error:1279:random number generator:RAND_load_file:Cannot 
open file:../crypto/rand/randfile.c:106:Filename=./.rnd
Cannot write random bytes:
40D7AD231A7F:error:1279:random number generator:RAND_write_file:Cannot 
open file:../crypto/rand/randfile.c:240:Filename=./.rnd

Error creating local CA's signing key and cert.
swtpm-localca exit with status 1: 
An error occurred. Authoring the TPM state failed.
Ending vTPM manufacturing @ Thu 07 Apr 2022 07:11:56 AM UTC

When I uninstall swtpm, the domain creation/starting works (of course it
does not actually do anything due to the fake empty iso, but it does get
past that bug).

** Affects: libvirt (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: libvirt (Ubuntu Jammy)
 Importance: Undecided
 Status: New


** Tags: jammy regression-release

** Also affects: libvirt (Ubuntu Jammy)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1968131

Title:
  Starting VM with UEFI firmware fails with swtpm

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1968131/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1948748] Re: [MIR] swtpm

2022-04-07 Thread Martin Pitt
This broke VMs with UEFI, reported as bug 1968131.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1948748

Title:
  [MIR] swtpm

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/autogen/+bug/1948748/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1966416] Re: pam_faillock does not actually deny login after given number of failures

2022-03-31 Thread Martin Pitt
Ouch, thanks Marc! Indeed our previous seddery was broken; it should
have kept the pam_deny/pam_permit lines in place. With this it works just fine:

--- /tmp/common-auth.orig   2022-04-01 07:16:26.072608984 +0200
+++ /tmp/common-auth.faillock   2022-04-01 07:14:20.246707861 +0200
@@ -16,6 +16,8 @@
 # here are the per-package modules (the "Primary" block)
 auth   [success=2 default=ignore]  pam_unix.so nullok
 auth   [success=1 default=ignore]  pam_sss.so use_first_pass
+auth [default=die] pam_faillock.so authfail deny=4
+auth sufficient pam_faillock.so authsucc deny=4
 # here's the fallback if no module succeeds
 auth   requisite   pam_deny.so
 # prime the stack with a positive return value if there isn't one already;


Sorry for the noise!

** Changed in: pam (Ubuntu Jammy)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1966416

Title:
  pam_faillock does not actually deny login after given number of
  failures

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pam/+bug/1966416/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1802005] Re: socket is inaccessible for libvirt-dbus

2022-03-31 Thread Martin Pitt
Timeout -- I uploaded the patch of the salsa PR to Jammy now.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1802005

Title:
  socket is inaccessible for libvirt-dbus

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1802005/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1966416] [NEW] pam_faillock does not actually deny login after given number of failures

2022-03-25 Thread Martin Pitt
Public bug reported:

ProblemType: Bug
DistroRelease: Ubuntu 22.04
Package: libpam-modules 1.4.0-11ubuntu1

I just noticed that Ubuntu 22.04 changed from the old pam_tally2 module
to the more widespread pam_faillock one. \o/

However, locking (denying logins) does not actually seem to work.
According to pam_faillock(8) I changed the config like this:

# diff -u /etc/pam.d/common-auth{.orig,}
--- /etc/pam.d/common-auth.orig 2022-03-25 10:41:29.08800 +
+++ /etc/pam.d/common-auth  2022-03-25 10:48:48.913419254 +
@@ -17,11 +17,11 @@
 auth   [success=2 default=ignore]  pam_unix.so nullok
 auth   [success=1 default=ignore]  pam_sss.so use_first_pass
 # here's the fallback if no module succeeds
-auth   requisite   pam_deny.so
+auth   [default=die] pam_faillock.so authfail
 # prime the stack with a positive return value if there isn't one already;
 # this avoids us returning an error just because nothing sets a success code
 # since the modules above will each just jump around
-auth   requiredpam_permit.so
+auth   sufficient pam_faillock.so authsucc
 # and here are more per-package modules (the "Additional" block)
 auth   optionalpam_cap.so 
 # end of pam-auth-update config


This config works fine on both Debian 11 and Debian testing, and it agrees with 
the example in the manpage -- so I don't think it's that broken.

Start from a blank slate:

# faillock  --user admin --reset
# faillock  --user admin 
admin:
When                Type  Source       Valid

Now I log in as user "admin" with a wrong password four times (one more
than the default "deny=3", just to make sure):

  sshd[3841]: pam_unix(sshd:auth): authentication failure; logname= uid=0 
euid=0 tty=ssh ruser= rhost=172.27.0.2  user=admin
  sshd[3841]: Failed password for admin from 172.27.0.2 port 39446 ssh2

After the third time, I even see this in the journal:

  sshd[3841]: Failed password for admin from 172.27.0.2 port 39446 ssh2
  pam_faillock(sshd:auth): Consecutive login failures for user admin account 
temporarily locked
  Failed password for admin from 172.27.0.2 port 39446 ssh2


But if I then log in with the correct password, it succeeds:

 sshd[4492]: Accepted password for admin from 172.27.0.2 port 39450 ssh2
 sshd[4492]: pam_unix(sshd:session): session opened for user admin(uid=1000) by 
(uid=0)

That's buggy -- "admin" should be denied access for ten minutes
("unlock_time = 600" in /etc/security/faillock.conf).

It did record the failed logins alright:

# faillock  --user admin 
admin:
When                Type  Source       Valid
2022-03-25 10:54:02 RHOST 172.27.0.2   V
2022-03-25 10:54:27 RHOST 172.27.0.2   V
2022-03-25 10:54:30 RHOST 172.27.0.2   V

But the actual denial doesn't seem to work.

** Affects: pam (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: jammy regression-release

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1966416

Title:
  pam_faillock does not actually deny login after given number of
  failures

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pam/+bug/1966416/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1946244] Re: When installing/uninstalling with realmd, uninstalling crashes with ScriptError

2022-03-25 Thread Martin Pitt
Confirmed in jammy as well.

https://logs.cockpit-
project.org/logs/pull-17182-20220325-080131-1b8abf94-ubuntu-2204/log.html#303-2

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1946244

Title:
  When installing/uninstalling with realmd, uninstalling crashes with
  ScriptError

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/freeipa/+bug/1946244/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1802005] Re: socket is inaccessible for libvirt-dbus

2022-03-25 Thread Martin Pitt
I sent a fix to Debian: https://salsa.debian.org/libvirt-team/libvirt-
dbus/-/merge_requests/14

I'll give it a few days, if I can get that landed soon, we can just
sync. Otherwise I'll upload it to Jammy directly.

** Changed in: libvirt-dbus (Ubuntu Jammy)
   Status: Triaged => Fix Committed

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1802005

Title:
  socket is inaccessible for libvirt-dbus

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1802005/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1802005] Re: socket is inaccessible for libvirt-dbus

2022-03-25 Thread Martin Pitt
The image build log [1] shows why:

Setting up libvirt-dbus (1.4.1-1) ...
/var/lib/dpkg/info/libvirt-dbus.postinst: 54: dpkg-vendor: not found

dpkg-vendor is in the "dpkg-dev" package, so it should not be used in
postinst scripts. libvirt-dbus could depend on dpkg-dev, but that's
highly undesirable. This isn't terribly obvious to fix -- the postinst
could do `grep -q ubuntu /usr/lib/os-release`.

But for jammy I'll just take out the condition completely; that's safe,
and any derivative will work correctly then.
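
For illustration, a minimal sketch of that os-release approach
(hypothetical, not the actual postinst -- the guarded part is presumably
the Ubuntu-specific setup for the libvirtdbus user that this bug is
about):

  # libvirt-dbus.postinst, replacing the dpkg-vendor check:
  if grep -q ubuntu /usr/lib/os-release; then
      # vendor-specific setup would go here
      :
  fi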

[1] https://logs-https-frontdoor.apps.ocp.ci.centos.org/logs/image-
refresh-3131-20220324-052626/log

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1802005

Title:
  socket is inaccessible for libvirt-dbus

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1802005/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1802005] Re: socket is inaccessible for libvirt-dbus

2022-03-25 Thread Martin Pitt
This regressed in jammy:

# id libvirtdbus
uid=120(libvirtdbus) gid=125(libvirtdbus) groups=125(libvirtdbus)


This was supposed to be fixed in 
https://salsa.debian.org/libvirt-team/libvirt-dbus/-/commit/cd5b637db51de64368723996cc770f323b6c1f53
 (and hence the package was synced), but this does not work.

This once again completely breaks cockpit-machines. I'll have a closer
look, send a fix to Debian, and apply it to jammy.

** Changed in: libvirt-dbus (Ubuntu)
   Importance: Undecided => High

** Changed in: libvirt-dbus (Ubuntu)
   Status: Fix Released => Triaged

** Changed in: libvirt-dbus (Ubuntu)
 Assignee: (unassigned) => Martin Pitt (pitti)

** Also affects: libvirt (Ubuntu Jammy)
   Importance: Medium
   Status: Won't Fix

** Also affects: libvirt-dbus (Ubuntu Jammy)
   Importance: High
 Assignee: Martin Pitt (pitti)
   Status: Triaged

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1802005

Title:
  socket is inaccessible for libvirt-dbus

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1802005/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1946244] Re: When installing/uninstalling with realmd, uninstalling crashes with ScriptError

2022-03-24 Thread Martin Pitt
Still confirmed on 21.10, and also Debian testing; I filed a Debian bug
and linked it.

** Bug watch added: Debian Bug tracker #1008209
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1008209

** Also affects: freeipa (Debian) via
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1008209
   Importance: Unknown
   Status: Unknown

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1946244

Title:
  When installing/uninstalling with realmd, uninstalling crashes with
  ScriptError

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/freeipa/+bug/1946244/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1966181] Re: ipa-client-install fails on restarting non-existing chrony.service

2022-03-24 Thread Martin Pitt
A-ha! I wasn't seeing things after all. Our test images install the
"systemd-timesyncd" package (as we also run tests against that), and
that removes the chrony package and installs the mask:

# apt install systemd-timesyncd
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following packages will be REMOVED:
  chrony
The following NEW packages will be installed:
  systemd-timesyncd
0 upgraded, 1 newly installed, 1 to remove and 0 not upgraded.
Need to get 30.8 kB of archives.
After this operation, 364 kB disk space will be freed.
Do you want to continue? [Y/n] y
Get:1 http://archive.ubuntu.com/ubuntu impish-updates/main amd64 
systemd-timesyncd amd64 248.3-1ubuntu8.2 [30.8 kB]
Fetched 30.8 kB in 0s (82.3 kB/s)  
dpkg: chrony: dependency problems, but removing anyway as you requested:
 systemd depends on systemd-timesyncd | time-daemon; however:
  Package systemd-timesyncd is not installed.
  Package time-daemon is not installed.
  Package systemd-timesyncd which provides time-daemon is not installed.
  Package chrony which provides time-daemon is to be removed.


# ls -l /etc/systemd/system/chrony.service 
lrwxrwxrwx 1 root root 9 Mar 24 12:16 /etc/systemd/system/chrony.service -> 
/dev/null


Mystery solved!
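
(For anyone who wants chrony back after hitting this, a hedged cleanup
sketch -- reinstalling swaps the time-daemon back, and the explicit
unmask is only needed if the /dev/null link survives the reinstall:)

  apt install chrony
  systemctl unmask chrony.service
  systemctl enable --now chrony.service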

So, sorry for the noise!

** Changed in: freeipa (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1966181

Title:
  ipa-client-install fails on restarting non-existing chrony.service

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/freeipa/+bug/1966181/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1966181] Re: ipa-client-install fails on restarting non-existing chrony.service

2022-03-24 Thread Martin Pitt
Hello Timo,

I'm not actually sure where these /etc/systemd/system/chrony* files come
from (in particular the mask). They are not owned by any package, nor
does chrony's postinst seem to create it (but maybe through a helper,
they are not exactly simple -- some weird interaction with the SysV
compat code?).

The chronyd.service link is created by the Alias=chronyd.service in
chrony.service, and systemd creates that when enabling the service.
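
For illustration, the mechanism is the unit's [Install] section (sketched
from memory, not the literal packaged file); "systemctl enable chrony"
then creates both the multi-user.target.wants/ link and the
chronyd.service alias link seen in these VMs:

  [Install]
  Alias=chronyd.service
  WantedBy=multi-user.target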

My debian-testing VM has that chrony.service → /dev/null mask link right
after a fresh install and boot, no IPA script was running yet. But I
just saw that I apparently mixed up my VMs when reporting this here --
my ubuntu-stable VM does not have chrony installed at all (even though
freeipa-client recommends it, and I don't use --no-install-recommends).
I'll investigate this more thoroughly, chase down what creates that
pesky chrony.service masking, and report back here.

Thanks, and sorry for the noise so far!

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1966181

Title:
  ipa-client-install fails on restarting non-existing chrony.service

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/freeipa/+bug/1966181/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1890786] Re: ipa-client-install fails on restarting non-existing chronyd.service

2022-03-24 Thread Martin Pitt
This is *still* broken on Ubuntu 21.10 and Debian testing. However, it
is subtly different, I filed bug 1966181 about it.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1890786

Title:
  ipa-client-install fails on restarting non-existing chronyd.service

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/freeipa/+bug/1890786/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1966181] [NEW] ipa-client-install fails on restarting non-existing chrony.service

2022-03-24 Thread Martin Pitt
Public bug reported:

DistroRelease: Ubuntu 21.10
Package: freeipa-client 4.8.6-1ubuntu6

This is a bug that just doesn't want to die -- the package *really*
should grow an autopkgtest that checks if a basic ipa-client-install
actually works. It's very similar to bug 1890786 except that it now
fails on "chrony.service", not "chronyd.service":


# ipa-client-install --domain cockpit.lan --realm COCKPIT.LAN --principal admin 
-W
This program will set up FreeIPA client.
Version 4.8.6

WARNING: conflicting time synchronization service 'ntp' will be
disabled in favor of chronyd

Discovery was successful!
Do you want to configure chrony with NTP server or pool address? [no]: 
Client hostname: x0.cockpit.lan
Realm: COCKPIT.LAN
DNS Domain: cockpit.lan
IPA Server: f0.cockpit.lan
BaseDN: dc=cockpit,dc=lan

Continue to configure the system with these values? [no]: yes
Synchronizing time
No SRV records of NTP servers found and no NTP server or pool address was 
provided.
Using default chrony configuration.
CalledProcessError(Command ['/bin/systemctl', 'restart', 'chrony.service'] 
returned non-zero exit status 5: 'Failed to restart chrony.service: Unit 
chrony.service not found.\n')
The ipa-client-install command failed. See /var/log/ipaclient-install.log for 
more information


This also happens if I say "yes" to the NTP question.


Now, the chrony package is indeed rather weird/broken:

| root@x0:~# find /etc/systemd -name '*chrony*' | xargs ls -l
| lrwxrwxrwx 1 root root  9 Mar 24 05:54 /etc/systemd/system/chrony.service -> 
/dev/null
| lrwxrwxrwx 1 root root 34 Mar 23 04:31 /etc/systemd/system/chronyd.service -> 
/lib/systemd/system/chrony.service
| lrwxrwxrwx 1 root root 34 Mar 23 04:31 
/etc/systemd/system/multi-user.target.wants/chrony.service -> 
/lib/systemd/system/chrony.service

| # systemctl status chrony chronyd
| Warning: The unit file, source configuration file or drop-ins of 
chronyd.service changed on disk. Run 'systemctl daemon-reload' to relo>
| ○ chrony.service
|  Loaded: masked (Reason: Unit chrony.service is masked.)
|  Active: inactive (dead)
|
| ○ chronyd.service
|  Loaded: error (Reason: Unit chronyd.service failed to load properly, 
please adjust/correct and reload service manager: File exists)
|  Active: inactive (dead)

Again, this is unconfigured and out of the box -- the idea is that FreeIPA
sets up everything and configures NTP/chrony/etc. to listen to the FreeIPA
server.

Purging chrony doesn't really help, though:

| dpkg -P chrony
| # no '*chrony*' files in /etc any more

Exactly the same failure, and it still tries to configure chrony even though
it's not there any more:

| WARNING: conflicting time synchronization service 'ntp' will be disabled 
in favor of chronyd
|
| Discovery was successful!
| Do you want to configure chrony with NTP server or pool address? [no]: yes
| Enter NTP source server addresses separated by comma, or press Enter to skip:
| Enter a NTP source pool address, or press Enter to skip:
| Client hostname: x0.cockpit.lan
| Realm: COCKPIT.LAN
| DNS Domain: cockpit.lan
| IPA Server: f0.cockpit.lan
| BaseDN: dc=cockpit,dc=lan
|
| Continue to configure the system with these values? [no]: yes
| Synchronizing time
| No SRV records of NTP servers found and no NTP server or pool address was 
provided.
| Using default chrony configuration.
| CalledProcessError(Command ['/bin/systemctl', 'restart', 'chrony.service'] 
returned non-zero exit status 5: 'Failed to restart chrony.service: Unit 
chrony.service
+not found.\n')
| The ipa-client-install command failed. See /var/log/ipaclient-install.log for 
more information

** Affects: freeipa (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: freeipa (Debian)
 Importance: Unknown
 Status: Unknown

** Bug watch added: Debian Bug tracker #1008195
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1008195

** Also affects: freeipa (Debian) via
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1008195
   Importance: Unknown
   Status: Unknown

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1966181

Title:
  ipa-client-install fails on restarting non-existing chrony.service

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/freeipa/+bug/1966181/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1890786] Re: ipa-client-install fails on restarting non-existing chronyd.service

2022-03-24 Thread Martin Pitt
** Also affects: freeipa (Ubuntu Focal)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1890786

Title:
  ipa-client-install fails on restarting non-existing chronyd.service

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/freeipa/+bug/1890786/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1913231] Re: ipa-client-install fails on restarting non-existing chronyd.service

2022-03-24 Thread Martin Pitt
*** This bug is a duplicate of bug 1890786 ***
https://bugs.launchpad.net/bugs/1890786

Let's handle this in bug 1890786 instead, I added a focal task and will
close this as a duplicate.

** This bug has been marked a duplicate of bug 1890786
   ipa-client-install fails on restarting non-existing chronyd.service

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1913231

Title:
   ipa-client-install fails on restarting non-existing chronyd.service

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/freeipa/+bug/1913231/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1962035] Re: apparmor blocks VM installation when automatic UEFI firmware is set

2022-03-09 Thread Martin Pitt
I did a test build in my PPA:
https://launchpad.net/~pitti/+archive/ubuntu/fixes

I re-ran the reproducer on current Jammy to confirm the bug, then
updated to the PPA, and re-ran the last virt-install command. That
succeeded.

** Changed in: libvirt (Ubuntu)
   Status: Triaged => Fix Committed

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1962035

Title:
  apparmor blocks VM installation when automatic UEFI firmware is set

To manage notifications about this bug go to:
https://bugs.launchpad.net/libvirt/+bug/1962035/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1962035] Re: apparmor blocks VM installation when automatic UEFI firmware is set

2022-03-09 Thread Martin Pitt
I sent https://salsa.debian.org/libvirt-
team/libvirt/-/merge_requests/135 to update Debian. Unfortunately that
does not build right now due to the inconsistent state of the packaging
git. But the patch itself backports fairly cleanly.

I'll upload to Jammy next.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1962035

Title:
  apparmor blocks VM installation when automatic UEFI firmware is set

To manage notifications about this bug go to:
https://bugs.launchpad.net/libvirt/+bug/1962035/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1962035] Re: apparmor blocks VM installation when automatic UEFI firmware is set

2022-03-09 Thread Martin Pitt
Fix landed upstream:
https://gitlab.com/libvirt/libvirt/-/commit/7aec69b7fb9d0cfe8b7203473764c205b28d2905

** Changed in: libvirt
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1962035

Title:
  apparmor blocks VM installation when automatic UEFI firmware is set

To manage notifications about this bug go to:
https://bugs.launchpad.net/libvirt/+bug/1962035/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1962035] Re: apparmor blocks VM installation when automatic UEFI firmware is set

2022-03-07 Thread Martin Pitt
Thanks Christian. I updated the upstream PR. I just don't want to apply
a patch only to Ubuntu. Once it lands upstream, I backport it, send it
to Debian, and *then* I'm happy to apply it to Jammy -- there should
still be enough time before the freeze, right? (Would be nice to have
that in the LTS, to avoid regressions with cockpit-machines)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1962035

Title:
  apparmor blocks VM installation when automatic UEFI firmware is set

To manage notifications about this bug go to:
https://bugs.launchpad.net/libvirt/+bug/1962035/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1962035] Re: apparmor blocks VM installation when automatic UEFI firmware is set

2022-02-25 Thread Martin Pitt
** Changed in: libvirt
   Status: New => In Progress

** Changed in: libvirt
 Assignee: (unassigned) => Martin Pitt (pitti)

** Package changed: apparmor (Debian) => libvirt (Debian)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1962035

Title:
  apparmor blocks VM installation when automatic UEFI firmware is set

To manage notifications about this bug go to:
https://bugs.launchpad.net/libvirt/+bug/1962035/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1962035] Re: apparmor blocks VM installation when automatic UEFI firmware is set

2022-02-25 Thread Martin Pitt
I sent the proposed and tested fix upstream:
https://gitlab.com/libvirt/libvirt/-/merge_requests/140

** Also affects: libvirt
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1962035

Title:
  apparmor blocks VM installation when automatic UEFI firmware is set

To manage notifications about this bug go to:
https://bugs.launchpad.net/libvirt/+bug/1962035/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1962035] Re: apparmor blocks VM installation when automatic UEFI firmware is set

2022-02-25 Thread Martin Pitt
I came up with this patch:

--- /etc/apparmor.d/abstractions/libvirt-qemu.orig  2022-01-22 
18:22:57.0 +
+++ /etc/apparmor.d/abstractions/libvirt-qemu   2022-02-25 13:54:22.075405809 
+
@@ -85,7 +85,7 @@
   /usr/share/misc/sgabios.bin r,
   /usr/share/openbios/** r,
   /usr/share/openhackware/** r,
-  /usr/share/OVMF/** r,
+  /usr/share/OVMF/** rk,
   /usr/share/ovmf/** r,
   /usr/share/proll/** r,
   /usr/share/qemu-efi/** r,
@@ -249,5 +249,8 @@
   / r, # harmless on any lsb compliant system
   /sys/bus/nd/devices/{,**/} r,
 
+  # required for QEMU accessing UEFI nvram variables
+  /**/nvram/*_VARS.fd rwk,
+
   # Site-specific additions and overrides. See local/README for details.
   #include 

After

   systemctl reload apparmor.service; systemctl restart libvirtd

the reproducer works fine.

I'll send it to libvirt upstream now.


** Description changed:

  # lsb_release -rd
  Description:  Ubuntu 21.10
  Release:  21.10
  
  Package: apparmor
  Version: 3.0.3-0ubuntu1
  
  Package: virtinst
  Version: 1:3.2.0-3
  
  When trying to re-install an existing VM with uefi boot set up using the
  recently introduced `--reinstall` option apparmor makes the installation
  fail with the following error:
  
  Could not open '/var/lib/libvirt/qemu/nvram/test_VARS.fd': Permission
  denied
  
  Steps to reproduce:
  
  Create a VM:
  
  root@ubuntu:~# virt-install --connect qemu:///system --quiet --os-variant
  fedora28 --memory 1024 --name test --wait -1 --disk size=1,format=qcow2
  --print-xml 1 > /tmp/test1.xml
  
  Edit the VM configuration to enable automatic UEFI boot by changing the
   like follows:
  
  - 
  
  + 
  
- 
  Define the VM:
  
  root@ubuntu:~# virsh define /tmp/test1.xml
  
  Start VM installation:
  
  root@ubuntu:~# virt-install --connect qemu:///system --reinstall test --wait 
-1 --noautoconsole --cdrom /var/lib/libvirt/novell.iso --autostart
  WARNING  No operating system detected, VM performance may suffer. Specify an 
OS with --os-variant for optimal results.
  
  Starting install...
  ERRORinternal error: process exited while connecting to monitor: 
2022-02-23T18:56:54.738510Z qemu-system-x86_64: -blockdev 
{"driver":"file","filename":"/var/lib/libvirt/qemu/nvram/test_VARS.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}:
 Could not open '/var/lib/libvirt/qemu/nvram/test_VARS.fd': Permission denied
  Domain installation does not appear to have been successful.
  If it was, you can restart your domain by running:
-   virsh --connect qemu:///system start test
+   virsh --connect qemu:///system start test
  otherwise, please restart your installation.
- 
  
  Expected behavior:
  
  VM installation will start without apparmor error.
  
  Actual behavior:
  
- The above denial happens:
+ The above denials happen:
  
- Feb 23 18:56:54 ubuntu audit[4420]: AVC apparmor="DENIED"
- operation="open" profile="libvirt-bdd92fa6-6030-4980-951c-2a52ec7e406c"
- name="/var/lib/libvirt/qemu/nvram/test_VARS.fd" pid=4420 comm="qemu-
- system-x86" requested_mask="r" denied_m>
+ audit: type=1400 audit(1645796875.169:132): apparmor="DENIED"
+ operation="open" profile="libvirt-68567d5b-c2c1-4256-9931-ce675df2f9b0"
+ name="/var/lib/libvirt/qemu/nvram/test_VARS.fd" pid=4909 comm="qemu-
+ system-x86" requested_mask="r" denied_mask="r" fsuid=64055 ouid=64055
+ 
+ same thing later on for "k" (locking)
+ 
+ audit: type=1400 audit(1645796969.776:151): apparmor="DENIED"
+ operation="file_lock"
+ profile="libvirt-68567d5b-c2c1-4256-9931-ce675df2f9b0"
+ name="/usr/share/OVMF/OVMF_CODE_4M.secboot.fd" pid=5125 comm="qemu-
+ system-x86" requested_mask="k" denied_mask="k" fsuid=64055 ouid=0
+ 
  
  and stop the installation.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1962035

Title:
  apparmor blocks VM installation when automatic UEFI firmware is set

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1962035/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1962035] Re: apparmor blocks VM installation when automatic UEFI firmware is set

2022-02-25 Thread Martin Pitt
/etc/apparmor.d/abstractions/libvirt-qemu is shipped by libvirt-daemon-
system, reassigning. I can reproduce this, and I'll attempt to work on a
fix. I'll update the Debian bug as well.

Complete copy reproducer:

virt-install --connect qemu:///system --quiet --os-variant fedora28 --memory 
128 --name test --wait -1 --disk size=0.125,format=qcow2 --graphics 
vnc,listen=127.0.0.1 --graphics spice,listen=127.0.0.1 --print-xml 1 | sed 
"s/ /tmp/test1.xml
virsh define /tmp/test1.xml
touch /var/lib/libvirt/novell.iso
virt-install --connect qemu:///system --reinstall test --wait -1 
--noautoconsole --cdrom /var/lib/libvirt/novell.iso --autostart


** Package changed: apparmor (Ubuntu) => libvirt (Ubuntu)

** Changed in: libvirt (Ubuntu)
   Status: New => Triaged

** Changed in: libvirt (Ubuntu)
 Assignee: (unassigned) => Martin Pitt (pitti)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1962035

Title:
  apparmor blocks VM installation when automatic UEFI firmware is set

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1962035/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1962035] Re: apparmor blocks VM installation when automatic UEFI firmware is set

2022-02-23 Thread Martin Pitt
** Bug watch added: Debian Bug tracker #1006324
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1006324

** Also affects: apparmor (Debian) via
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1006324
   Importance: Unknown
   Status: Unknown

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1962035

Title:
  apparmor blocks VM installation when automatic UEFI firmware is set

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/1962035/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1958392] Re: pam_sss_gss crashes with Communication error [3, 32]: Error in service module; Broken pipe

2022-01-24 Thread Martin Pitt
Paride, many thanks for digging out the upstream fix!

The patch does apply cleanly. It just need a round of "quilt refresh" to
get over

  dpkg-source: error: diff 'sssd-2.4.1/debian/patches/5572.patch'
patches files multiple times; split the diff in multiple files or merge
the hunks into a single one

I put this into
https://launchpad.net/~pitti/+archive/ubuntu/fixes/+packages and
validated that it fixes the problem. However, I did no "beautifications"
to this at all, like a proper patch header or changelog.

** Changed in: sssd (Ubuntu Impish)
   Status: Incomplete => Triaged

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1958392

Title:
  pam_sss_gss crashes with Communication error [3, 32]: Error in service
  module; Broken pipe

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sssd/+bug/1958392/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1958392] Re: pam_sss_gss crashes with Communication error [3, 32]: Error in service module; Broken pipe

2022-01-21 Thread Martin Pitt
> I'll do a no-change rebuild of impish's sssd now and try with that.

Done, but the bug is still the same. So not some weird build-time issue.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1958392

Title:
  pam_sss_gss crashes with Communication error [3, 32]: Error in service
  module; Broken pipe

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sssd/+bug/1958392/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1958392] Re: pam_sss_gss crashes with Communication error [3, 32]: Error in service module; Broken pipe

2022-01-21 Thread Martin Pitt
Thanks Paride! I confirm that updating to your PPA fixes the issue,
which confirms that it's sssd.

I'll do a no-change rebuild of impish's sssd now and try with that.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1958392

Title:
  pam_sss_gss crashes with Communication error [3, 32]: Error in service
  module; Broken pipe

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sssd/+bug/1958392/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1958392] Re: pam_sss_gss crashes with Communication error [3, 32]: Error in service module; Broken pipe

2022-01-19 Thread Martin Pitt
I upgraded my test VM to current Jammy, with sssd 2.6.1-1ubuntu3. This
works fine, so this only applies to impish. *phew*

I.e. I figure/expect pretty well nothing will happen on this bug, and
that's fine -- I just need it as downstream reference for our OS bug
tracker. :-)

** Also affects: sssd (Ubuntu Jammy)
   Importance: Undecided
   Status: New

** Also affects: sssd (Ubuntu Impish)
   Importance: Undecided
   Status: New

** Changed in: sssd (Ubuntu Jammy)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1958392

Title:
  pam_sss_gss crashes with Communication error [3, 32]: Error in service
  module; Broken pipe

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sssd/+bug/1958392/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1958392] Re: pam_sss_gss crashes with Communication error [3, 32]: Error in service module; Broken pipe

2022-01-19 Thread Martin Pitt
** Tags added: impish

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1958392

Title:
  pam_sss_gss crashes with Communication error [3, 32]: Error in service
  module; Broken pipe

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sssd/+bug/1958392/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1958392] Re: pam_sss_gss crashes with Communication error [3, 32]: Error in service module; Broken pipe

2022-01-19 Thread Martin Pitt
This is confirmed to work on Debian 11 (current stable), Debian testing,
Fedora 34/35, CentOS 8, RHEL 8/9, so it does not smell like an upstream
issue.

Ubuntu 20.04 LTS does not yet have pam_sss_gss, so this does not apply
there.

** Description changed:

  I am trying to set up pam_sss_gss to authenticate to sudo with Kerberos.
  I am fairly sure that this worked in the past, but stopped recently.
  
  Reproducer:
-  - Join a FreeIPA domain, with "ipa-client-install". I use "COCKPIT.LAN" here 
in our tests.
+  - Join a FreeIPA domain, with "ipa-client-install". I use "COCKPIT.LAN" here 
in our tests.
  
-  - Enable GSS for sudo in sssd, as per pam_sss_gss(8) manpage:
+  - Enable GSS for sudo in sssd, as per pam_sss_gss(8) manpage:
  
- sed -i '/\[domain\/cockpit.lan\]/ a pam_gssapi_services = sudo,
+ sed -i '/\[domain\/cockpit.lan\]/ a pam_gssapi_services = sudo,
  sudo-i' /etc/sssd/sssd.conf
  
-  - Enable pam_sss_gss in sudo itself, as per the same manpage:
+  - Enable pam_sss_gss in sudo itself, as per the same manpage. Enable
+ debug output:
  
- sed -i '1 a auth sufficient pam_sss_gss.so debug' /etc/pam.d/sudo
+ sed -i '1 a auth sufficient pam_sss_gss.so debug' /etc/pam.d/sudo
  
-  - log in as domain user (ad...@cockpit.lan), validate with "klist" that
+  - log in as domain user (ad...@cockpit.lan), validate with "klist" that
  you have a kerberos ticket
  
-  - Run "sudo whoami"
+  - Run "sudo whoami"
  
  Expected result: On Fedora, I get:
  
  $ sudo whoami
  pam_sss_gss: Initializing GSSAPI authentication with SSSD
  pam_sss_gss: Switching euid from 0 to 3340
  pam_sss_gss: Trying to establish security context
  pam_sss_gss: SSSD User name: ad...@cockpit.lan
  pam_sss_gss: User domain: cockpit.lan
- pam_sss_gss: User principal: 
+ pam_sss_gss: User principal:
  pam_sss_gss: Target name: h...@x0.cockpit.lan
  pam_sss_gss: Using ccache: KCM:
  pam_sss_gss: Acquiring credentials, principal name will be derived
  pam_sss_gss: Switching euid from 3340 to 0
  pam_sss_gss: Authentication successful
  root
  
  Note that this requires enabling local admins in FreeIPA, with
  
- 
- ipa-advise enable-admins-sudo | sh -ex
+ ipa-advise enable-admins-sudo | sh -ex
  
  on the IPA server. However, it is fine for this to fail with a "normal"
  error like "permission denied".
  
  Actual result (on Ubuntu 21.04):
  
  $ sudo whoami
  pam_sss_gss: Initializing GSSAPI authentication with SSSD
  pam_sss_gss: Switching euid from 0 to 3340
  pam_sss_gss: Trying to establish security context
  pam_sss_gss: SSSD User name: ad...@cockpit.lan
  pam_sss_gss: User domain: cockpit.lan
- pam_sss_gss: User principal: 
+ pam_sss_gss: User principal:
  pam_sss_gss: Target name: h...@x0.cockpit.lan
  pam_sss_gss: Using ccache: KEYRING:persistent:3340
  pam_sss_gss: Acquiring credentials, principal name will be derived
  pam_sss_gss: Communication error [3, 32]: Error in service module; Broken pipe
  pam_sss_gss: Switching euid from 3340 to 0
  pam_sss_gss: System error [32]: Broken pipe
- [sudo] password for admin: 
+ [sudo] password for admin:
  sudo: a password is required
  
  There is nothing *at all* in `tail -f /var/log/sssd/*` during this. With
  adding "debug_level = 9" to sssd.conf one can get a lot of output, but
  no error message.
  
  The journal has the same error message, apart from a bunch of apparmor
  ALLOWED and audit messages:
  
- sudo[3917]: pam_sss_gss(sudo:auth): Communication error [3, 32]:
+ sudo[3917]: pam_sss_gss(sudo:auth): Communication error [3, 32]:
  Error in service module; Broken pipe
  
  Nothing else.
  
- 
- Note: Either the packaging or ipa-client-install are misconfigured. They 
cause the socket-activated services to all fail:
+ Note: Either the packaging or ipa-client-install are misconfigured. They
+ cause the socket-activated services to all fail:
  
  $ systemctl --failed
-   UNIT  LOAD   ACTIVE SUBDESCRIPTION  

- ● user@3340.service loaded failed failed User Manager for UID 3340

+   UNIT  LOAD   ACTIVE SUBDESCRIPTION
+ ● user@3340.service loaded failed failed User Manager for UID 3340
  ● sssd-nss.socket   loaded failed failed SSSD NSS Service responder socket
  ● sssd-pam-priv.socket  loaded failed failed SSSD PAM Service responder 
private socket
  ● sssd-ssh.socket   loaded failed failed SSSD SSH Service responder socket
  ● sssd-sudo.socket  loaded failed failed SSSD Sudo Service responder 
socket
  
  with messages like
  
  sssd_check_socket_activated_responders[3498]: (2022-01-19 13:14:13:227270): 
[sssd] [main] (0x0070): Misconfiguration found for the sudo responder.
  sssd_check_socket_activated_responders[3498]: The sudo responder has been 
configured to be socket-activated but it's still mentioned in the services' 
line in /etc/sssd/sssd.conf.
  sssd_check_socket_activated_responders[3498]: Please, consider 

[Bug 1958392] [NEW] pam_sss_gss crashes with Communication error [3, 32]: Error in service module; Broken pipe

2022-01-19 Thread Martin Pitt
Public bug reported:

I am trying to set up pam_sss_gss to authenticate to sudo with Kerberos.
I am fairly sure that this worked in the past, but stopped recently.

Reproducer:
 - Join a FreeIPA domain, with "ipa-client-install". I use "COCKPIT.LAN" here 
in our tests.

 - Enable GSS for sudo in sssd, as per pam_sss_gss(8) manpage:

sed -i '/\[domain\/cockpit.lan\]/ a pam_gssapi_services = sudo,
sudo-i' /etc/sssd/sssd.conf

 - Enable pam_sss_gss in sudo itself, as per the same manpage. Enable
debug output:

sed -i '1 a auth sufficient pam_sss_gss.so debug' /etc/pam.d/sudo

 - Restart sssd to pick up the config change:

systemctl restart sssd

 - log in as domain user (ad...@cockpit.lan), validate with "klist" that
you have a kerberos ticket

 - Run "sudo whoami"

Expected result: On Fedora, I get:

$ sudo whoami
pam_sss_gss: Initializing GSSAPI authentication with SSSD
pam_sss_gss: Switching euid from 0 to 3340
pam_sss_gss: Trying to establish security context
pam_sss_gss: SSSD User name: ad...@cockpit.lan
pam_sss_gss: User domain: cockpit.lan
pam_sss_gss: User principal:
pam_sss_gss: Target name: h...@x0.cockpit.lan
pam_sss_gss: Using ccache: KCM:
pam_sss_gss: Acquiring credentials, principal name will be derived
pam_sss_gss: Switching euid from 3340 to 0
pam_sss_gss: Authentication successful
root

Note that this requires enabling local admins in FreeIPA, with

ipa-advise enable-admins-sudo | sh -ex

on the IPA server. However, it is fine for this to fail with a "normal"
error like "permission denied".

Actual result (on Ubuntu 21.04):

$ sudo whoami
pam_sss_gss: Initializing GSSAPI authentication with SSSD
pam_sss_gss: Switching euid from 0 to 3340
pam_sss_gss: Trying to establish security context
pam_sss_gss: SSSD User name: ad...@cockpit.lan
pam_sss_gss: User domain: cockpit.lan
pam_sss_gss: User principal:
pam_sss_gss: Target name: h...@x0.cockpit.lan
pam_sss_gss: Using ccache: KEYRING:persistent:3340
pam_sss_gss: Acquiring credentials, principal name will be derived
pam_sss_gss: Communication error [3, 32]: Error in service module; Broken pipe
pam_sss_gss: Switching euid from 3340 to 0
pam_sss_gss: System error [32]: Broken pipe
[sudo] password for admin:
sudo: a password is required

There is nothing *at all* in `tail -f /var/log/sssd/*` during this. With
adding "debug_level = 9" to sssd.conf one can get a lot of output, but
no error message.

The journal has the same error message, apart from a bunch of apparmor
ALLOWED and audit messages:

sudo[3917]: pam_sss_gss(sudo:auth): Communication error [3, 32]:
Error in service module; Broken pipe

Nothing else.

Note: Either the packaging or ipa-client-install is misconfigured,
causing all the socket-activated services to fail:

$ systemctl --failed
  UNIT                  LOAD   ACTIVE SUB    DESCRIPTION
● user@3340.service loaded failed failed User Manager for UID 3340
● sssd-nss.socket   loaded failed failed SSSD NSS Service responder socket
● sssd-pam-priv.socket  loaded failed failed SSSD PAM Service responder private 
socket
● sssd-ssh.socket   loaded failed failed SSSD SSH Service responder socket
● sssd-sudo.socket  loaded failed failed SSSD Sudo Service responder socket

with messages like

sssd_check_socket_activated_responders[3498]: (2022-01-19 13:14:13:227270): 
[sssd] [main] (0x0070): Misconfiguration found for the sudo responder.
sssd_check_socket_activated_responders[3498]: The sudo responder has been 
configured to be socket-activated but it's still mentioned in the services' 
line in /etc/sssd/sssd.conf.
sssd_check_socket_activated_responders[3498]: Please, consider either adjusting 
your services' line in /etc/sssd/sssd.conf or disabling the sudo's socket by 
calling:
sssd_check_socket_activated_responders[3498]: "systemctl disable 
sssd-sudo.socket"
systemd[1]: sssd-sudo.socket: Control process exited, code=exited, status=17/n/a
systemd[1]: sssd-sudo.socket: Failed with result 'exit-code'.
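
(The adjustment that checker asks for would look roughly like this in the
[sssd] section of /etc/sssd/sssd.conf -- a sketch only, as the exact
services line depends on what ipa-client-install wrote:)

  [sssd]
  # before: responders listed although they are socket-activated
  #   services = nss, pam, ssh, sudo
  # after: leave ssh and sudo to their sockets
  services = nss, pam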

But I believe this is unrelated -- this has been the case for a *long*
time already, and it does *not* break e.g. GSS kerberos authentication
to ssh.

DistroRelease: Ubuntu 21.04
Package: sssd 2.4.1-2ubuntu4

** Affects: sssd (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1958392

Title:
  pam_sss_gss crashes with Communication error [3, 32]: Error in service
  module; Broken pipe

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sssd/+bug/1958392/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1799095] Re: Firewalld nftables backend breaks networking of libvirt

2021-11-30 Thread Martin Pitt
This is fixed in current Ubuntu 21.04.

I dropped our hacks in our projects: https://github.com/cockpit-
project/cockpit-machines/pull/465 and https://github.com/cockpit-
project/bots/pull/2676

** Changed in: firewalld (Ubuntu)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1799095

Title:
  Firewalld nftables backend breaks networking of libvirt

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/firewalld/+bug/1799095/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1853266] Re: Xorg/Xwayland segfaults in OsLookupColor() from funlockfile() from glamor_get_pixmap_texture() from glamor_create_gc()

2021-11-11 Thread Martin Pitt
> Xorg -config tests/xorg-dummy.conf -logfile /tmp/log -once :5

The -once was an attempt to work around this, but it doesn't help, nor
change the behaviour of this bug.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1853266

Title:
  Xorg/Xwayland segfaults in OsLookupColor() from funlockfile() from
  glamor_get_pixmap_texture() from glamor_create_gc()

To manage notifications about this bug go to:
https://bugs.launchpad.net/xorg-server/+bug/1853266/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1853266] Re: Xorg/Xwayland segfaults in OsLookupColor() from funlockfile() from glamor_get_pixmap_texture() from glamor_create_gc()

2021-11-11 Thread Martin Pitt
umockdev's test suite now started to see this crash in current Ubuntu
jammy. Simple reproducer:


$ cat tests/xorg-dummy.conf 
Section "Device"
Identifier "test"
Driver "dummy"
EndSection

$ Xorg -config tests/xorg-dummy.conf -logfile /tmp/log -once :5


Then, run at least one query on it, like this:

$ env DISPLAY=:5 xinput

Then pkill/kill or Control-C the Xorg process, and it will crash:

double free or corruption (!prev)
(EE) 
(EE) Backtrace:
(EE) 0: /usr/lib/xorg/Xorg (OsLookupColor+0x139) [0x55e2b1c75d39]
(EE) 1: /lib/x86_64-linux-gnu/libc.so.6 (__sigaction+0x50) [0x7f384162f520]
(EE) 2: /lib/x86_64-linux-gnu/libc.so.6 (pthread_kill+0xf8) [0x7f3841683808]
(EE) 3: /lib/x86_64-linux-gnu/libc.so.6 (raise+0x16) [0x7f384162f476]
(EE) 4: /lib/x86_64-linux-gnu/libc.so.6 (abort+0xd7) [0x7f38416157b7]
(EE) 5: /lib/x86_64-linux-gnu/libc.so.6 (__fsetlocking+0x426) [0x7f38416765e6]
(EE) 6: /lib/x86_64-linux-gnu/libc.so.6 (timer_settime+0x2cc) [0x7f384168dadc]
(EE) 7: /lib/x86_64-linux-gnu/libc.so.6 (__default_morecore+0x8bc) 
[0x7f384168f84c]
(EE) 8: /lib/x86_64-linux-gnu/libc.so.6 (free+0x55) [0x7f3841691ce5]
(EE) 9: /usr/lib/xorg/Xorg (config_fini+0x402) [0x55e2b1b6cb22]
(EE) 10: /usr/lib/xorg/Xorg (ddxGiveUp+0x62) [0x55e2b1b4fa22]
(EE) 11: /usr/lib/xorg/Xorg (InitFonts+0x669) [0x55e2b1b12d69]
(EE) 12: /lib/x86_64-linux-gnu/libc.so.6 (__libc_init_first+0x90) 
[0x7f3841616fd0]
(EE) 13: /lib/x86_64-linux-gnu/libc.so.6 (__libc_start_main+0x7d) 
[0x7f384161707d]
(EE) 14: /usr/lib/xorg/Xorg (_start+0x2e) [0x55e2b1afbf0e]
(EE) 
(EE) Received signal 6 sent by process 520, uid 0

After that it hangs and can't be cleaned up any more (zombie)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1853266

Title:
  Xorg/Xwayland segfaults in OsLookupColor() from funlockfile() from
  glamor_get_pixmap_texture() from glamor_create_gc()

To manage notifications about this bug go to:
https://bugs.launchpad.net/xorg-server/+bug/1853266/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1949715] Re: [BPO] Backport cockpit-machines to stable releases

2021-11-10 Thread Martin Pitt
Thanks Dan! Reuploaded with s/ubuntu/bpo/. So far I used the
"backportpackage" script from ubuntu-dev-tools (0.186); could this be
fixed there as well, please?
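
For reference, the kind of invocation I mean, with the suffix overridden
by hand (a sketch from memory, untested; the releases and suffix are just
examples):

  backportpackage -s jammy -d impish -S '~bpo21.10.1' -u ubuntu \
      cockpit-machines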

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1949715

Title:
  [BPO] Backport cockpit-machines to stable releases

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cockpit-machines/+bug/1949715/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1949248] Re: does not show any VMs

2021-11-04 Thread Martin Pitt
Right, cockpit-machines only shows libvirt machines. So if `virsh list`
is empty, cockpit-machines will be empty as well.

** Changed in: cockpit-machines (Ubuntu)
   Status: Incomplete => Won't Fix

** Summary changed:

- does not show any VMs
+ does not show VMWare VMs

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1949248

Title:
  does not show VMWare VMs

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cockpit-machines/+bug/1949248/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1949715] [NEW] [BPO] Backport cockpit-machines to stable releases

2021-11-04 Thread Martin Pitt
Public bug reported:

[Impact]

Four years ago I got a backports approval for the "cockpit" source
package in bug #1686022. A while ago, the "Machines" page was split out
into its own separate https://github.com/cockpit-project/cockpit-
machines/ project, mostly to make development easier and faster.

Just like cockpit, c-machines is a rather dynamic project which keeps
growing features and improvements fairly quickly. It has quite a lot of
users/bug reports on Ubuntu [1], running the latest version helps to
address issues like [2], and we also regularly get requests for Debian
backports [3] (I upload backports to Debian).

[Testing]

Upstream has a very comprehensive unit and integration test suite; the
latter runs on lots of OSes, amongst them are Ubuntu 20.04 LTS, Ubuntu
21.10, Debian stable and testing. As such *every* change in upstream
main gets verified that it builds, installs, and correctly works on all
OSes.

[Scope]

 * List the Ubuntu release you will backport from, and the specific
package version.

New upstream releases happen every two weeks. I usually upload them to
Debian unstable, let them propagate to Ubuntu devel-proposed and then
devel, and then backport them.

 * List the Ubuntu release(s) you will backport to.

Latest LTS and latest stable.

[Other Info]

[1] 
https://github.com/cockpit-project/cockpit-machines/issues?q=is%3Aissue+ubuntu
[2] https://github.com/cockpit-project/cockpit-machines/issues/327
[3] https://github.com/cockpit-project/cockpit/issues/16438

** Affects: cockpit-machines (Ubuntu)
 Importance: Undecided
 Status: Invalid

** Affects: cockpit-machines (Ubuntu Focal)
 Importance: Undecided
     Assignee: Martin Pitt (pitti)
 Status: In Progress

** Affects: cockpit-machines (Ubuntu Impish)
 Importance: Undecided
     Assignee: Martin Pitt (pitti)
 Status: In Progress

** Also affects: cockpit-machines (Ubuntu Impish)
   Importance: Undecided
   Status: New

** Also affects: cockpit-machines (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Changed in: cockpit-machines (Ubuntu)
   Status: New => Invalid

** Changed in: cockpit-machines (Ubuntu Focal)
 Assignee: (unassigned) => Martin Pitt (pitti)

** Changed in: cockpit-machines (Ubuntu Impish)
 Assignee: (unassigned) => Martin Pitt (pitti)

** Changed in: cockpit-machines (Ubuntu Focal)
   Status: New => In Progress

** Changed in: cockpit-machines (Ubuntu Impish)
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1949715

Title:
  [BPO] Backport cockpit-machines to stable releases

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cockpit-machines/+bug/1949715/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1949715] Re: [BPO] Backport cockpit-machines to stable releases

2021-11-04 Thread Martin Pitt
Uploaded to https://launchpad.net/ubuntu/impish/+queue?queue_state=1 and
https://launchpad.net/ubuntu/focal/+queue?queue_state=1

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1949715

Title:
  [BPO] Backport cockpit-machines to stable releases

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cockpit-machines/+bug/1949715/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1949248] Re: cockpit packages are not at the same version

2021-10-30 Thread Martin Pitt
cockpit-dashboard was removed in 234 [1]; its functionality got
integrated into cockpit-shell and cockpit-system. cockpit-machines got
split into its own source package and thus now has an independent
version number.

[1] https://cockpit-project.org/blog/cockpit-234.html

So let's devote this bug to the "does not show any machines" part. Do you
have any running machines as your user (virsh list)? Do you have any as
root? (You need to escalate to admin privileges for that.)

Please send a screenshot and describe some details. Thanks!

** Summary changed:

- cockpit packages are not at the same version
+ does not show any VMs

** Package changed: cockpit (Ubuntu) => cockpit-machines (Ubuntu)

** Changed in: cockpit-machines (Ubuntu)
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1949248

Title:
  does not show any VMs

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cockpit-machines/+bug/1949248/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1946244] [NEW] When installing/uninstalling with realmd, uninstalling crashes with ScriptError

2021-10-06 Thread Martin Pitt
Public bug reported:

ProblemType: Crash
DistroRelease: Ubuntu 21.04
PackageVersion: python3-ipaclient 4.8.6-1ubuntu5
SourcePackage: freeipa
Architecture: amd64

Joining a FreeIPA domain with plain ipa-client-install works well:

# ipa-client-install -p admin --password=SECRET --no-ntp
[...]
The ipa-client-install command was successful

And leaving it again with "ipa-client-install --uninstall" also works.

However, when doing this through realmd (which configures some
additional useful stuff), it causes a crash:

# realm join
Password for admin: 

This works fine:

# realm list
cockpit.lan
  type: kerberos
  realm-name: COCKPIT.LAN
  domain-name: cockpit.lan
  configured: kerberos-member
  server-software: ipa
  client-software: sssd
  required-package: freeipa-client
  required-package: sssd-tools
  required-package: sssd
  required-package: libnss-sss
  required-package: libpam-sss
  login-formats: %u...@cockpit.lan
  login-policy: allow-realm-logins

But leaving fails:

# realm leave
See: journalctl REALMD_OPERATION=r152.3671
realm: Couldn't leave realm: Running ipa-client-install failed
root@x0:~# echo $?
1


The crash from /var/log/ipaclient-uninstall.log:

2021-10-06T15:48:22Z INFO Client uninstall complete.
2021-10-06T15:48:22Z DEBUG   File "/usr/lib/python3/dist-packages/ipapython/admintool.py", line 179, in execute
    return_value = self.run()
  File "/usr/lib/python3/dist-packages/ipapython/install/cli.py", line 340, in run
    return cfgr.run()
  File "/usr/lib/python3/dist-packages/ipapython/install/core.py", line 360, in run
    return self.execute()
  File "/usr/lib/python3/dist-packages/ipapython/install/core.py", line 386, in execute
    for rval in self._executor():
  File "/usr/lib/python3/dist-packages/ipapython/install/core.py", line 431, in __runner
    exc_handler(exc_info)
  File "/usr/lib/python3/dist-packages/ipapython/install/core.py", line 460, in _handle_execute_exception
    self._handle_exception(exc_info)
  File "/usr/lib/python3/dist-packages/ipapython/install/core.py", line 450, in _handle_exception
    six.reraise(*exc_info)
  File "/usr/lib/python3/dist-packages/six.py", line 703, in reraise
    raise value
  File "/usr/lib/python3/dist-packages/ipapython/install/core.py", line 421, in __runner
    step()
  File "/usr/lib/python3/dist-packages/ipapython/install/core.py", line 418, in <lambda>
    step = lambda: next(self.__gen)
  File "/usr/lib/python3/dist-packages/ipapython/install/util.py", line 81, in run_generator_with_yield_from
    six.reraise(*exc_info)
  File "/usr/lib/python3/dist-packages/six.py", line 703, in reraise
    raise value
  File "/usr/lib/python3/dist-packages/ipapython/install/util.py", line 59, in run_generator_with_yield_from
    value = gen.send(prev_value)
  File "/usr/lib/python3/dist-packages/ipapython/install/core.py", line 655, in _configure
    next(executor)
  File "/usr/lib/python3/dist-packages/ipapython/install/core.py", line 431, in __runner
    exc_handler(exc_info)
  File "/usr/lib/python3/dist-packages/ipapython/install/core.py", line 460, in _handle_execute_exception
    self._handle_exception(exc_info)
  File "/usr/lib/python3/dist-packages/ipapython/install/core.py", line 518, in _handle_exception
    self.__parent._handle_exception(exc_info)
  File "/usr/lib/python3/dist-packages/ipapython/install/core.py", line 450, in _handle_exception
    six.reraise(*exc_info)
  File "/usr/lib/python3/dist-packages/six.py", line 703, in reraise
    raise value
  File "/usr/lib/python3/dist-packages/ipapython/install/core.py", line 515, in _handle_exception
    super(ComponentBase, self)._handle_exception(exc_info)
  File "/usr/lib/python3/dist-packages/ipapython/install/core.py", line 450, in _handle_exception
    six.reraise(*exc_info)
  File "/usr/lib/python3/dist-packages/six.py", line 703, in reraise
    raise value
  File "/usr/lib/python3/dist-packages/ipapython/install/core.py", line 421, in __runner
    step()
  File "/usr/lib/python3/dist-packages/ipapython/install/core.py", line 418, in <lambda>
    step = lambda: next(self.__gen)
  File "/usr/lib/python3/dist-packages/ipapython/install/util.py", line 81, in run_generator_with_yield_from
    six.reraise(*exc_info)
  File "/usr/lib/python3/dist-packages/six.py", line 703, in reraise
    raise value
  File "/usr/lib/python3/dist-packages/ipapython/install/util.py", line 59, in run_generator_with_yield_from
    value = gen.send(prev_value)
  File "/usr/lib/python3/dist-packages/ipapython/install/common.py", line 73, in _uninstall
    for unused in self._uninstaller(self.parent):
  File "/usr/lib/python3/dist-packages/ipaclient/install/client.py", line 3825, in main
    uninstall(self)
  File "/usr/lib/python3/dist-packages/ipaclient/install/client.py", line 3528, in uninstall
    raise ScriptError(rval=rv)

2021-10-06T15:48:22Z DEBUG The ipa-client-install command failed, exception: ScriptError:


Ubuntu 20.04 LTS is affected the same way. Note that this crash does
*not* 

[Bug 1946244] Re: When installing/uninstalling with realmd, uninstalling crashes with

2021-10-06 Thread Martin Pitt
For completeness, this is /var/log/ipaclient-install from the successful
"realm join".

** Attachment added: "ipaclient-install.log from realmd join"
   https://bugs.launchpad.net/ubuntu/+source/freeipa/+bug/1946244/+attachment/5531110/+files/ipaclient-install.log

** Summary changed:

- When installing/uninstalling with realmd, uninstalling crashes with
+ When installing/uninstalling with realmd, uninstalling crashes with ScriptError

** Also affects: freeipa (Ubuntu Hirsute)
   Importance: Undecided
   Status: New

** Also affects: freeipa (Ubuntu Bionic)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1946244

Title:
  When installing/uninstalling with realmd, uninstalling crashes with
  ScriptError

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/freeipa/+bug/1946244/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1945321] Re: umockdev 0.16.3-1 breaks autopkgtest of bolt

2021-09-28 Thread Martin Pitt
Christian, as I wrote above, I believe this really needs to be fixed in
bolt's tests. The umockdev change was a bug fix which bolt's tests
(incorrectly) worked around. So I hope you don't mind that I flipped the
affected package around? I am in contact with Christian now, and hope to
sort this out soon.

** Changed in: bolt (Ubuntu)
   Status: Invalid => In Progress

** Changed in: umockdev (Ubuntu)
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1945321

Title:
  umockdev 0.16.3-1 breaks autopkgtest of bolt

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/bolt/+bug/1945321/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1945321] Re: umockdev 0.16.3-1 breaks autopkgtest of bolt

2021-09-28 Thread Martin Pitt
> I am in contact with Christian now, and hope to sort this out soon.

Sorry -- I meant Christian Kellner, bolt's upstream, not you :-)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1945321

Title:
  umockdev 0.16.3-1 breaks autopkgtest of bolt

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/bolt/+bug/1945321/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1945321] Re: umockdev 0.16.3-1 breaks autopkgtest of bolt

2021-09-28 Thread Martin Pitt
Thanks Christian -- indeed I noticed that, and sent
https://gitlab.freedesktop.org/bolt/bolt/-/merge_requests/246 the day
after to fix this. Unfortunately I didn't get a reaction yet, and
Christian also didn't respond on IRC yet. I'll do some more prodding.

** Changed in: bolt (Ubuntu)
   Status: New => In Progress

** Changed in: bolt (Ubuntu)
 Assignee: (unassigned) => Martin Pitt (pitti)

** Changed in: umockdev (Ubuntu)
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1945321

Title:
  umockdev 0.16.3-1 breaks autopkgtest of bolt

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/bolt/+bug/1945321/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1934995] Re: Broken on ppc64el (toolchain bug?)

2021-07-25 Thread Martin Pitt
Indeed the open(2) manpage is misleading in that regard. The actual
definition in fcntl.h is like this:

extern int open (const char *__file, int __oflag, ...) __nonnull
((1));

(with a few variants, but they all use varargs). So I did the same in
umockdev for full header compatibility.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1934995

Title:
  Broken on ppc64el (toolchain bug?)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/umockdev/+bug/1934995/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1934995] Re: Broken on ppc64el (toolchain bug?)

2021-07-08 Thread Martin Pitt
Dang, we already found a ppc64el SIGBUS issue in 0.16.0, which got fixed
in https://github.com/martinpitt/umockdev/commit/277c80243a . But this
is reported against 0.16.1 already.

There is a tiny chance that
https://github.com/martinpitt/umockdev/commit/264cabbb will magically
fix this, but otherwise this needs some investigation. I.e. not known
upstream yet.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1934995

Title:
  Broken on ppc64el (toolchain bug?)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/umockdev/+bug/1934995/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1925822] Re: [21.04 regression] formatting vfat times out

2021-05-10 Thread Martin Pitt
I installed udisks2 2.9.2-1ubuntu1 from hirsute-proposed, and confirm
that both the manual test case above as well as cockpit's automatic
TestStorageFormat.testFormatTypes now succeed. Thank you Sebastien and
Robie!

** Tags removed: verification-needed verification-needed-hirsute
** Tags added: verification-done verification-done-hirsute

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1925822

Title:
  [21.04 regression] formatting vfat times out

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/udisks2/+bug/1925822/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1802005] Re: socket is inaccessible for libvirt-dbus

2021-04-28 Thread Martin Pitt
** Changed in: libvirt (Ubuntu Hirsute)
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1802005

Title:
  socket is inaccessible for libvirt-dbus

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1802005/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1925822] Re: [21.04 regression] formatting vfat times out

2021-04-27 Thread Martin Pitt
Argh indeed, forgot about that one already -- I even looked at that
before, it's tracked here:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=983751

But you knew that as well, in comment #4 -- So I hope this didn't take
too much time to track down. Merci beaucoup !

** Bug watch added: Debian Bug tracker #983751
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=983751

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1925822

Title:
  [21.04 regression] formatting vfat times out

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/udisks2/+bug/1925822/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1925822] Re: [21.04 regression] formatting vfat times out

2021-04-26 Thread Martin Pitt
Direct mkfs works:

# mkfs.vfat -I -n label /dev/vdb
mkfs.fat 4.2 (2021-01-31)
mkfs.fat: Warning: lowercase labels might not work properly on some systems
# blkid -p /dev/vdb
/dev/vdb: PTUUID="892240dd" PTTYPE="dos"


** Changed in: udisks2 (Ubuntu)
   Status: Incomplete => New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1925822

Title:
  [21.04 regression] formatting vfat times out

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/udisks2/+bug/1925822/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1925822] Re: [21.04 regression] formatting vfat times out

2021-04-26 Thread Martin Pitt
Reproducer from scratch:

# download current cloud image
curl -L -O https://cloud-images.ubuntu.com/daily/server/hirsute/current/hirsute-server-cloudimg-amd64.img
# nothing fancy, just admin:foobar and root:foobar
curl -L -O https://github.com/cockpit-project/bots/raw/master/machine/cloud-init.iso
# create second disk image for formatting
qemu-img create -f qcow2 disk2.img 100M
# boot it
qemu-system-x86_64 -cpu host -enable-kvm -nographic -m 2048 \
  -drive file=hirsute-server-cloudimg-amd64.img,if=virtio -snapshot \
  -cdrom cloud-init.iso -drive file=disk2.img,if=virtio

Log in on the console (root:foobar), then

# sanity check: should be empty
blkid -p /dev/vdb

busctl call org.freedesktop.UDisks2 /org/freedesktop/UDisks2/block_devices/vdb org.freedesktop.UDisks2.Block Format 'sa{sv}' vfat 0

→ hangs.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1925822

Title:
  [21.04 regression] formatting vfat times out

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/udisks2/+bug/1925822/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1925765] Re: [21.04 regression] networking broken in containers

2021-04-24 Thread Martin Pitt
@Reinhard:

> Unfortunately, I cannot confirm this on a freshly installed Ubuntu
20.04

I assume this was a typo and you really meant 21.04.

> and see what's the one that breaks podman.

That was easy, it's tuned. Full reproducer:

apt install -y tuned
podman run -it --rm -p 5000:5000 --name registry docker.io/registry:2
curl http://localhost:5000/v2/

Curiously, two years ago I already filed bug #1774000 where tuned breaks
qemu. Reassigning for now.

** Summary changed:

- [21.04 regression] networking broken in containers
+ [21.04 regression] tuned breaks networking in podman containers

** Package changed: libpod (Ubuntu) => tuned (Ubuntu)

** Changed in: tuned (Ubuntu)
   Status: Incomplete => New

** Changed in: tuned (Ubuntu)
 Assignee: Reinhard Tartler (siretart) => (unassigned)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1925765

Title:
  [21.04 regression] tuned breaks networking in podman containers

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/tuned/+bug/1925765/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1925765] Re: [21.04 regression] networking broken in containers

2021-04-24 Thread Martin Pitt
Thanks Reinhard for trying! I'm running a standard cloud image
(https://cloud-images.ubuntu.com/daily/server/hirsute/current/hirsute-server-cloudimg-amd64.img),
but with some additional packages installed. I'll go through them with a
fine-toothed comb and see which one breaks podman.

(But probably not before Monday, weather is just too nice.)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1925765

Title:
  [21.04 regression] tuned breaks networking in podman containers

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/tuned/+bug/1925765/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1925822] Re: [21.04 regression] formatting vfat times out

2021-04-23 Thread Martin Pitt
Forgot to mention, there is nothing useful in the journal. The only
message is this when the timeout happens:

Apr 23 15:12:35 ubuntu udisksd[3116]: Error synchronizing after formatting with type `vfat': Timed out waiting for object


** Description changed:

  There is a regression somewhere between udisks, udev, and dosfstools.
  Formatting a device with vfat hangs and fails:
- 
  
  # blkid -p /dev/sda
  (nothing)
  
-  busctl call org.freedesktop.UDisks2 /org/freedesktop/UDisks2/block_devices/sda org.freedesktop.UDisks2.Block Format 'sa{sv}' vfat 0
+ # busctl call org.freedesktop.UDisks2 /org/freedesktop/UDisks2/block_devices/sda org.freedesktop.UDisks2.Block Format 'sa{sv}' vfat 0
  (long pause)
  Call failed: Error synchronizing after formatting with type `vfat': Timed out waiting for object

  # blkid -p /dev/sda
  /dev/sda: PTUUID="3690494f" PTTYPE="dos"

  OTOH, formatting as ext4 works fine:

  # wipefs -a /dev/sda; wipefs -a /dev/sda
  # busctl call org.freedesktop.UDisks2 /org/freedesktop/UDisks2/block_devices/sda org.freedesktop.UDisks2.Block Format 'sa{sv}' ext4 0
  (immediately succeeds)

  # blkid -p /dev/sda
  /dev/sda: UUID="8bea7475-6af5-4835-86d0-0e5b2cb5500e" VERSION="1.0" BLOCK_SIZE="4096" TYPE="ext4" USAGE="filesystem"

  I tested this on a QEMU-emulated disk, but it reproduces equally well
  against a `modprobe scsi_debug` device.
  
  Package: udisks2 2.9.2-1
  DistroRelease: Ubuntu 21.04

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1925822

Title:
  [21.04 regression] formatting vfat times out

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/udisks2/+bug/1925822/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1925822] Re: [21.04 regression] formatting vfat times out

2021-04-23 Thread Martin Pitt
I tried to run it in the foreground with


  G_MESSAGES_DEBUG=all /usr/libexec/udisks2/udisksd

but still no messages aside from the timeout.
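
Next I might watch udev events for the device while repeating the Format
call from the bug description, to see whether the uevent that udisksd
waits for ever arrives (sketch, not tried yet; device path as in the
original report):

  # in one terminal:
  udevadm monitor --udev --property
  # in another terminal:
  busctl call org.freedesktop.UDisks2 /org/freedesktop/UDisks2/block_devices/sda org.freedesktop.UDisks2.Block Format 'sa{sv}' vfat 0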

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1925822

Title:
  [21.04 regression] formatting vfat times out

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/udisks2/+bug/1925822/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1925822] [NEW] [21.04 regression] formatting vfat times out

2021-04-23 Thread Martin Pitt
Public bug reported:

There is a regression somewhere between udisks, udev, and dosfstools.
Formatting a device with vfat hangs and fails:

# blkid -p /dev/sda
(nothing)

# busctl call org.freedesktop.UDisks2 /org/freedesktop/UDisks2/block_devices/sda org.freedesktop.UDisks2.Block Format 'sa{sv}' vfat 0
(long pause)
Call failed: Error synchronizing after formatting with type `vfat': Timed out waiting for object

# blkid -p /dev/sda
/dev/sda: PTUUID="3690494f" PTTYPE="dos"

OTOH, formatting as ext4 works fine:

# wipefs -a /dev/sda; wipefs -a /dev/sda
# busctl call org.freedesktop.UDisks2 /org/freedesktop/UDisks2/block_devices/sda org.freedesktop.UDisks2.Block Format 'sa{sv}' ext4 0
(immediately succeeds)

# blkid -p /dev/sda
/dev/sda: UUID="8bea7475-6af5-4835-86d0-0e5b2cb5500e" VERSION="1.0" BLOCK_SIZE="4096" TYPE="ext4" USAGE="filesystem"

I tested this on a QEMU-emulated disk, but it reproduces equally well
against a `modprobe scsi_debug` device.
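
For reference, the scsi_debug reproduction looks roughly like this
(sketch; sdX is a placeholder for whatever name the fake disk gets):

  modprobe scsi_debug dev_size_mb=100
  lsblk -S    # the new disk should show up with model "scsi_debug"
  busctl call org.freedesktop.UDisks2 /org/freedesktop/UDisks2/block_devices/sdX org.freedesktop.UDisks2.Block Format 'sa{sv}' vfat 0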

Package: udisks2 2.9.2-1
DistroRelease: Ubuntu 21.04

** Affects: udisks2 (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: hirsute regression-release

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1925822

Title:
  [21.04 regression] formatting vfat times out

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/udisks2/+bug/1925822/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1802005] Re: socket is inaccessible for libvirt-dbus

2021-04-23 Thread Martin Pitt
Thanks Christian! Lesson learned -- for 21.10 I'll update our images a
few weeks *before* the release. (I found a handful of regressions so
far..)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1802005

Title:
  socket is inaccessible for libvirt-dbus

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1802005/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1802005] Re: socket is inaccessible for libvirt-dbus

2021-04-23 Thread Martin Pitt
This regressed in 21.04 (hirsute) again. 1.4.0-2 was synced from Debian
(https://launchpad.net/ubuntu/+source/libvirt-dbus/+changelog) instead
of merged.

** Tags added: hirsute regression-release

** Changed in: libvirt-dbus (Ubuntu)
   Status: Fix Released => Triaged

** Also affects: libvirt (Ubuntu Hirsute)
   Importance: Undecided
   Status: New

** Also affects: libvirt-dbus (Ubuntu Hirsute)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1802005

Title:
  socket is inaccessible for libvirt-dbus

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1802005/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1925765] [NEW] [21.04 regression] networking broken in containers

2021-04-23 Thread Martin Pitt
Public bug reported:

This stopped working in 21.04:

  podman run -it --rm -p 5000:5000 --name registry docker.io/registry:2
  curl http://localhost:5000/v2/

The curl just hangs forever. This works fine in Ubuntu 20.10 with podman
2.0.6+dfsg1-1ubuntu1.

Outbound direction is also broken:

# podman run -it --rm docker.io/ubuntu:latest apt update
Err:1 http://archive.ubuntu.com/ubuntu focal InRelease
  Temporary failure resolving 'archive.ubuntu.com'

However, that's already the case in Ubuntu 20.10.

Unfortunately there are no tools like `ip` in the container to see
network interfaces and routes, neither in fedora:latest.

/proc/net/dev and /proc/net/route do show an interface as expected, and
they are exactly the same as in 20.10.
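
One way around the missing tools (sketch, untested; assumes the registry
container from above is still running) is to inspect the container's
network namespace from the host:

  pid=$(podman inspect -f '{{.State.Pid}}' registry)   # 'registry' is the container name from above
  nsenter -t "$pid" -n ip addr                          # needs root
  nsenter -t "$pid" -n ip route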

Package: podman 3.0.1+dfsg1-1ubuntu1
DistroRelease: Ubuntu 21.04

** Affects: libpod (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: hirsute regression-release

** Description changed:

  This stopped working in 21.04:
  
-   podman run -it --rm -p 5000:5000 --name registry docker.io/registry:2
-   curl http://localhost:5000/v2/
+   podman run -it --rm -p 5000:5000 --name registry docker.io/registry:2
+   curl http://localhost:5000/v2/
  
  The curl just hangs forever. This works fine in Ubuntu 20.10 with podman
  2.0.6+dfsg1-1ubuntu1.
  
  Outbound direction is also broken:
  
  # podman run -it --rm docker.io/ubuntu:latest apt update
- Err:1 http://archive.ubuntu.com/ubuntu focal InRelease   
-   Temporary failure resolving 'archive.ubuntu.com'
+ Err:1 http://archive.ubuntu.com/ubuntu focal InRelease
+   Temporary failure resolving 'archive.ubuntu.com'
  
  However, that's already the case in Ubuntu 20.10.
  
  Unfortunately there are no tools like `ip` in the container to see
  network interfaces and routes, neither in fedora:latest.
  
  /proc/net/dev and /proc/net/route do show an interface as expected, and
  they are exactly the same as in 20.10.
  
- PackageVersion: podman 3.0.1+dfsg1-1ubuntu1
+ Package: podman 3.0.1+dfsg1-1ubuntu1
+ DistroRelease: Ubuntu 21.04

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1925765

Title:
  [21.04 regression] networking broken in containers

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libpod/+bug/1925765/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1861053] Re: no fatrace output in focal

2021-03-12 Thread Martin Pitt
** Changed in: fatrace (Ubuntu)
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1861053

Title:
  no fatrace output in focal

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/fatrace/+bug/1861053/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1916485] Re: test -x fails inside shell scripts in containers

2021-02-26 Thread Martin Pitt
I've been scratching my head over this regression [1] for a while now,
in the context of running a hirsute container on a 20.04 host (in
particular, a GitHub workflow machine). In my case, the symptom is that
after upgrading glibc, `which` is broken; that of course also uses
faccessat(), similar to test -x.

I tried all sorts of the "usual" workarounds, as seccomp has been giving
trouble for a while now [2]. But this failure is robust against
fuse-overlayfs vs. vfs (inefficient full copies of the file system), root
vs. user podman, podman vs. docker, and, relevant for this bug, it *also
happens* with --security-opt=seccomp=unconfined and/or --privileged,
both of which should disable seccomp.
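
For illustration, a minimal check looks roughly like this (sketch;
docker.io/ubuntu:21.04 stands in for the hirsute container):

  podman run --rm --security-opt seccomp=unconfined docker.io/ubuntu:21.04 sh -c 'test -x /bin/ls && echo ok || echo broken'
  podman run --rm --privileged docker.io/ubuntu:21.04 sh -c 'which ls || echo broken'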

Hence I believe this bug cannot be (at least not only) in libseccomp.


[1] https://github.com/martinpitt/umockdev/runs/1984769591?check_suite_focus=true#step:3:1019
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1900021

** Bug watch added: Red Hat Bugzilla #1900021
   https://bugzilla.redhat.com/show_bug.cgi?id=1900021

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1916485

Title:
  test -x fails inside shell scripts in containers

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/glibc/+bug/1916485/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1848923] Re: pollinate.service fails to start: ERROR: should execute as the [pollinate] user -- missing CacheDirectory=

2021-02-15 Thread Martin Pitt
I now did exactly the same steps as above on an Ubuntu 20.04 VM, with
exactly the same results. This verifies 4.33-3ubuntu1.20.04.1.

** Tags removed: verification-needed-focal
** Tags added: verification-done-focal

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1848923

Title:
  pollinate.service fails to start: ERROR: should execute as the
  [pollinate] user -- missing CacheDirectory=

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pollinate/+bug/1848923/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1848923] Re: pollinate.service fails to start: ERROR: should execute as the [pollinate] user -- missing CacheDirectory=

2021-02-15 Thread Martin Pitt
Verification for groovy:

I took a 20.10 VM with current pollinate 4.33-3ubuntu1, and after
booting, pollinate.service is in state failed as per the bug
description.

I then updated to 4.33-3ubuntu1.20.10.1. The package update auto-
restarted pollinate.service, and it looked successful:

# systemctl status pollinate
● pollinate.service - Pollinate to seed the pseudo random number generator
     Loaded: loaded (/lib/systemd/system/pollinate.service; enabled; vendor preset: enabled)
     Active: inactive (dead) since Tue 2021-02-16 06:03:56 UTC; 1min 45s ago
       Docs: https://launchpad.net/pollinate
    Process: 2815 ExecStart=/usr/bin/pollinate (code=exited, status=0/SUCCESS)
   Main PID: 2815 (code=exited, status=0/SUCCESS)

Feb 16 06:03:56 ubuntu systemd[1]: Starting Pollinate to seed the pseudo random number generator...
Feb 16 06:03:56 ubuntu pollinate[2830]: client sent challenge to [https://entropy.ubuntu.com/]
Feb 16 06:03:56 ubuntu pollinate[2844]: client verified challenge/response with [https://entropy.ubuntu.com/]
Feb 16 06:03:56 ubuntu pollinate[2851]: client hashed response from [https://entropy.ubuntu.com/]
Feb 16 06:03:56 ubuntu pollinate[2852]: client successfully seeded [/dev/urandom]
Feb 16 06:03:56 ubuntu systemd[1]: pollinate.service: Succeeded.
Feb 16 06:03:56 ubuntu systemd[1]: Finished Pollinate to seed the pseudo random number generator.

It does not have RemainAfterExit=, so that is as expected. I rebooted
the VM, and the unit skipped cleanly, again as expected:

# systemctl status pollinate
● pollinate.service - Pollinate to seed the pseudo random number generator
     Loaded: loaded (/lib/systemd/system/pollinate.service; enabled; vendor preset: enabled)
     Active: inactive (dead)
  Condition: start condition failed at Tue 2021-02-16 06:06:58 UTC; 6s ago
             └─ ConditionPathExists=!/var/cache/pollinate/seeded was not met
       Docs: https://launchpad.net/pollinate

Feb 16 06:06:58 ubuntu systemd[1]: Condition check resulted in Pollinate to seed the pseudo random number generator being skipped.

# ls -l /var/cache/pollinate/
total 0
-rw-r--r-- 1 pollinate daemon 0 Feb 16 06:03 seeded
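
As an extra sanity check, the updated unit should now carry the relevant
directives (sketch; assuming the fix adds CacheDirectory= as the bug
title suggests):

  systemctl cat pollinate.service | grep -E 'CacheDirectory|ConditionPathExists'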

Now let's re-try the cleanup:

# rm -rf /var/cache/*
# reboot

This causes the shutdown process to last a little longer, presumably
because running daemons got their files ripped away underneath them, but
it does succeed. After it came back up, pollinate.service once again ran
successfully like above.

** Tags removed: verification-needed-groovy
** Tags added: verification-done-groovy

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1848923

Title:
  pollinate.service fails to start: ERROR: should execute as the
  [pollinate] user -- missing CacheDirectory=

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pollinate/+bug/1848923/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1848923] Re: pollinate.service fails to start: ERROR: should execute as the [pollinate] user -- missing CacheDirectory=

2021-02-11 Thread Martin Pitt
@Christian: Debian still needs/wants to support sysvinit. Of course
init.d scripts ought to create cache directories too (like munin,
mopidy, and others already do, but probably not all of them), but that
will be a bit more work. FHS applies to SysV init as well, so the same
reasoning still holds. Also, some postinsts seem to do legitimate work,
like fontconfig which also creates an initial font cache.
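
For illustration, the sysvinit equivalent in an init script's start
action would be something like this (sketch only; user and group as in
the current pollinate packaging):

  # recreate the cache directory on start, like systemd's CacheDirectory= does
  mkdir -p /var/cache/pollinate
  chown pollinate:daemon /var/cache/pollinate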

If you want to start an MBF, it first needs some initial discussion, or
at least announcement, on debian-devel@ [1]. And then it needs checking
which packages actually have that problem, as I don't think it's
actually *that* many -- two dozen tops? But in general I think this is
a nice goal for sure. (For the record, we have not detected any problems
related to this in the Cockpit test suite on any Debian or Ubuntu image,
except for pollinate.)

The "/var/cache/ should be removable" reference is [2], it was already
in comment #9:

[1] https://www.debian.org/doc/manuals/developers-reference/beyond-pkging.en.html#reporting-lots-of-bugs-at-once-mass-bug-filing
[2] https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch05s05.html

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1848923

Title:
  pollinate.service fails to start: ERROR: should execute as the
  [pollinate] user -- missing CacheDirectory=

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pollinate/+bug/1848923/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

  1   2   3   4   5   6   7   8   9   10   >