[Touch-packages] [Bug 1942623] [NEW] [Wishlist] systemctl CLI UI should allow glob expansion throughout the service name
Public bug reported:

When attempting to craft a command to query and/or restart all services
with a name of the form 'something-*-else', such as 'neutron-*-agent' to
match [neutron-l3-agent, neutron-openvswitch-agent, and
neutron-dhcp-agent], I've found that no services are returned from globs
in the middle of the string. The only supported glob seems to be a * at
the end of the service name. Also, single-character globbing with ? is
not honored (as in the last example).

To provide an example, I expect each of these commands to return the
dbus.socket and dbus.service, but only 'systemctl status dbu*' shows the
proper return.

drew@grimoire:~$ systemctl status dbus
● dbus.service - D-Bus System Message Bus
     Loaded: loaded (/lib/systemd/system/dbus.service; static)
     Active: active (running) since Mon 2021-08-30 23:41:27 CDT; 3 days ago
TriggeredBy: ● dbus.socket
       Docs: man:dbus-daemon(1)
   Main PID: 1357 (dbus-daemon)
      Tasks: 1 (limit: 57290)
     Memory: 4.9M
     CGroup: /system.slice/dbus.service
             └─1357 @dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only

Warning: some journal files were not opened due to insufficient permissions.

drew@grimoire:~$ systemctl status *bus
drew@grimoire:~$ systemctl status '*bus'
drew@grimoire:~$ systemctl status 'dbu*'
● dbus.service - D-Bus System Message Bus
     Loaded: loaded (/lib/systemd/system/dbus.service; static)
     Active: active (running) since Mon 2021-08-30 23:41:27 CDT; 3 days ago
TriggeredBy: ● dbus.socket
       Docs: man:dbus-daemon(1)
   Main PID: 1357 (dbus-daemon)
      Tasks: 1 (limit: 57290)
     Memory: 5.0M
     CGroup: /system.slice/dbus.service
             └─1357 @dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only

Warning: some journal files were not opened due to insufficient permissions.
● dbus.socket - D-Bus System Message Bus Socket
     Loaded: loaded (/lib/systemd/system/dbus.socket; static)
     Active: active (running) since Mon 2021-08-30 23:41:27 CDT; 3 days ago
   Triggers: ● dbus.service
     Listen: /run/dbus/system_bus_socket (Stream)

drew@grimoire:~$ systemctl status 'db*s'
drew@grimoire:~$ systemctl status 'dbu?'

** Affects: systemd (Ubuntu)
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1942623

Title:
  [Wishlist] systemctl CLI UI should allow glob expansion throughout
  the service name

Status in systemd package in Ubuntu:
  New
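[Editor's note] Until systemctl matches globs anywhere in the unit name, the matching can be done in the shell instead. A minimal sketch, assuming a POSIX shell; match_units is a hypothetical helper, not part of systemd:

```shell
# Workaround sketch: do the glob matching in the shell, not systemctl.
# match_units reads unit names on stdin and prints those matching the
# shell glob given as $1. `case` honors '*' anywhere in the pattern and
# single-character '?', which is what the report finds missing.
match_units() {
    pattern=$1
    while IFS= read -r unit; do
        case $unit in
            $pattern) printf '%s\n' "$unit" ;;
        esac
    done
}

# Possible usage (assumes root for the restart):
#   systemctl list-unit-files --no-legend --type=service \
#     | awk '{print $1}' \
#     | match_units 'neutron-*-agent.service' \
#     | xargs -r systemctl restart
```

This sidesteps whatever matching systemctl does internally by only ever passing it fully expanded unit names.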
[Touch-packages] [Bug 50093] Re: Some sysctls are ignored on boot
Still seeing this in bionic 18.04.3 with the following kernel and procps
package:

ii  procps  2:3.3.12-3ubuntu1.2  amd64  /proc file system utilities
kernel 5.4.0-62-generic

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to procps in Ubuntu.
https://bugs.launchpad.net/bugs/50093

Title:
  Some sysctls are ignored on boot

Status in procps package in Ubuntu:
  Won't Fix
Status in systemd package in Ubuntu:
  New

Bug description:
  Binary package hint: procps

  /etc/init.d/procps.sh comes too early in the boot process to apply a
  lot of sysctls. As it runs before networking modules are loaded and
  filesystems are mounted, there are quite a lot of commonly-used
  sysctls which are simply ignored on boot and produce errors to the
  console.

  Simply renaming the symlink from S17 to >S40 probably isn't a great
  solution, as there are probably folk who want and expect some sysctls
  to be applied before filesystems are mounted and so on. However,
  simply ignoring something as important as sysctl settings isn't
  really on. Administrators expect the settings in /etc/sysctl.conf to
  take effect.

  One stop-gap solution would be to run sysctl -p twice; once at S17
  and once at S41. There may still be some warnings and errors, but
  everything would be applied.

  A different, more complex approach might be to re-architect the
  sysctl configuration into something like /etc/sysctl.d/$modulename
  and have the userland module-loading binaries take care of applying
  them after modules are loaded. Though this may take care of
  explicitly loaded modules only, I'm not sure.

  Incidentally, /etc/sysctl.conf still refers to /etc/networking/options,
  but hasn't that been deprecated?
To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/procps/+bug/50093/+subscriptions -- Mailing list: https://launchpad.net/~touch-packages Post to : touch-packages@lists.launchpad.net Unsubscribe : https://launchpad.net/~touch-packages More help : https://help.launchpad.net/ListHelp
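[Editor's note] The stop-gap the report proposes — apply everything once early, then retry the failures after modules have loaded — can be sketched as below. apply_twice and the parameterized setter command are hypothetical names for illustration; in real use the setter would be `sysctl -w` and the two passes would run at the S17 and S41 points mentioned in the report.

```shell
# Sketch of the proposed "run sysctl twice" stop-gap. apply_twice
# applies key=value settings with the given setter command, remembers
# failures (e.g. keys whose module isn't loaded yet), and retries only
# those in a second pass. In real use the setter would be "sysctl -w",
# run once early in boot and once after modules and mounts are up.
apply_twice() {
    setter=$1; shift
    failed=""
    for kv in "$@"; do
        # First pass: keys backed by not-yet-loaded modules will fail.
        "$setter" "$kv" 2>/dev/null || failed="$failed $kv"
    done
    # Second pass: retry only what failed the first time.
    for kv in $failed; do
        "$setter" "$kv" || echo "still failing: $kv" >&2
    done
}
```

Modern systems effectively do this differently: `sysctl --system` reads /etc/sysctl.d/*.conf, much like the /etc/sysctl.d/$modulename idea in the report.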
[Touch-packages] [Bug 1226855] Re: Cannot use open-iscsi inside LXC container
Adding Bootstack to watch this bug, as we are taking ownership of
charm-iscsi-connector, which would be ideal to test within lxd
confinement, but requires VM or metal for functional tests.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to lxc in Ubuntu.
https://bugs.launchpad.net/bugs/1226855

Title:
  Cannot use open-iscsi inside LXC container

Status in linux package in Ubuntu:
  Confirmed
Status in lxc package in Ubuntu:
  Confirmed

Bug description:
  Trying to use open-iscsi from within an LXC container, but the iscsi
  netlink socket does not support multiple namespaces, causing:
  "iscsid: sendmsg: bug? ctrl_fd 6" error and failure.

  Command attempted:
  iscsiadm -m node -p $ip:$port -T $target --login

  Results in:
  Exit code: 18
  Stdout: 'Logging in to [iface: default, target: $target, portal: $ip,$port] (multiple)'
  Stderr: 'iscsiadm: got read error (0/0), daemon died?
  iscsiadm: Could not login to [iface: default, target: $target, portal: $ip,$port].
  iscsiadm: initiator reported error (18 - could not communicate to iscsid)
  iscsiadm: Could not log into all portals'

  ProblemType: Bug
  DistroRelease: Ubuntu 13.04
  Package: lxc 0.9.0-0ubuntu3.4
  ProcVersionSignature: Ubuntu 3.8.0-30.44-generic 3.8.13.6
  Uname: Linux 3.8.0-30-generic x86_64
  ApportVersion: 2.9.2-0ubuntu8.3
  Architecture: amd64
  Date: Tue Sep 17 14:38:08 2013
  InstallationDate: Installed on 2013-01-15 (245 days ago)
  InstallationMedia: Xubuntu 12.10 "Quantal Quetzal" - Release amd64 (20121017.1)
  MarkForUpload: True
  SourcePackage: lxc
  UpgradeStatus: Upgraded to raring on 2013-05-16 (124 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1226855/+subscriptions
[Touch-packages] [Bug 1861941] Re: bcache by-uuid links disappear after mounting bcache0
I'm having similar issues to this bug and those described by Dmitrii in
https://bugs.launchpad.net/charm-ceph-osd/+bug/1883585, specifically
comment #2 and the last comment.

It appears that if I run 'udevadm trigger --subsystem-match=block', I
get my by-dname devices for bcaches, but if I run udevadm trigger
without a subsystem match, something is triggering later than the block
subsystem that is removing the links.

Here are the runs with --verbose to show what appears to be getting
probed on each run: https://pastebin.ubuntu.com/p/VPvSKRfGt4/

This is with the 5.3.0-62 kernel on Bionic. I also have the core and
canonical-livepatch snaps installed, as did the environment where
Dmitrii has run into this.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1861941

Title:
  bcache by-uuid links disappear after mounting bcache0

Status in bcache-tools package in Ubuntu:
  Triaged
Status in systemd package in Ubuntu:
  Fix Released
Status in bcache-tools source package in Bionic:
  New
Status in linux source package in Bionic:
  New
Status in linux-signed source package in Bionic:
  New
Status in systemd source package in Bionic:
  New
Status in bcache-tools source package in Focal:
  Confirmed
Status in linux source package in Focal:
  Invalid
Status in linux-signed source package in Focal:
  Confirmed
Status in systemd source package in Focal:
  Confirmed
Status in bcache-tools source package in Groovy:
  Triaged
Status in linux source package in Groovy:
  Incomplete
Status in linux-signed source package in Groovy:
  Confirmed
Status in systemd source package in Groovy:
  Fix Released

Bug description:
  1.
  root@ubuntu:~# lsb_release -rd
  Description: Ubuntu Focal Fossa (development branch)
  Release: 20.04

  2.
  root@ubuntu:~# lsb_release -rd
  Description: Ubuntu Focal Fossa (development branch)
  Release: 20.04
  root@ubuntu:~# apt-cache policy linux-image-virtual
  linux-image-virtual:
    Installed: 5.4.0.12.15
    Candidate: 5.4.0.12.15
    Version table:
   *** 5.4.0.12.15 500
          500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages
          100 /var/lib/dpkg/status
  root@ubuntu:~# apt-cache policy linux-image-5.4.0-12-generic
  linux-image-5.4.0-12-generic:
    Installed: 5.4.0-12.15
    Candidate: 5.4.0-12.15
    Version table:
   *** 5.4.0-12.15 500
          500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages
          100 /var/lib/dpkg/status

  3.
  mount /dev/bcache0 && ls -al /dev/bcache/by-uuid/
  + ls -al /dev/bcache/by-uuid/
  total 0
  drwxr-xr-x 2 root root 60 Feb  4 23:31 .
  drwxr-xr-x 3 root root 60 Feb  4 23:31 ..
  lrwxrwxrwx 1 root root 13 Feb  4 23:31 abdfd1f6-44ce-4266-91db-24667b9ae51a -> ../../bcache0

  4.
  root@ubuntu:~# ls -al /dev/bcache/by-uuid
  ls: cannot access '/dev/bcache/by-uuid': No such file or directory

  ProblemType: Bug
  DistroRelease: Ubuntu 20.04
  Package: linux-image-5.4.0-12-generic 5.4.0-12.15
  ProcVersionSignature: Ubuntu 5.4.0-12.15-generic 5.4.8
  Uname: Linux 5.4.0-12-generic x86_64
  ApportVersion: 2.20.11-0ubuntu16
  Architecture: amd64
  Date: Tue Feb 4 23:31:52 2020
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   LANG=C.UTF-8
   SHELL=/bin/bash
  SourcePackage: linux-signed-5.4
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/bcache-tools/+bug/1861941/+subscriptions
[Touch-packages] [Bug 1861941] Re: bcache by-uuid links disappear after mounting bcache0
Relevant package revisions for comment #50:

bcache-tools  1.0.8-2build1
snapd         2.45.1+18.04.2
systemd       237-3ubuntu10.41
udev          237-3ubuntu10.41

and snaps:

Name                 Version    Rev   Tracking       Publisher   Notes
canonical-livepatch  9.5.5      95    latest/stable  canonical✓  -
core                 16-2.45.2  9665  latest/stable  canonical✓  core

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1861941

Title:
  bcache by-uuid links disappear after mounting bcache0

Status in bcache-tools package in Ubuntu:
  Triaged
Status in systemd package in Ubuntu:
  Fix Released
Status in bcache-tools source package in Bionic:
  New
Status in linux source package in Bionic:
  New
Status in linux-signed source package in Bionic:
  New
Status in systemd source package in Bionic:
  New
Status in bcache-tools source package in Focal:
  Confirmed
Status in linux source package in Focal:
  Invalid
Status in linux-signed source package in Focal:
  Confirmed
Status in systemd source package in Focal:
  Confirmed
Status in bcache-tools source package in Groovy:
  Triaged
Status in linux source package in Groovy:
  Incomplete
Status in linux-signed source package in Groovy:
  Confirmed
Status in systemd source package in Groovy:
  Fix Released

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/bcache-tools/+bug/1861941/+subscriptions
[Touch-packages] [Bug 1668771] Re: systemd-resolved negative caching for extended period of time
This affects bionic openstack cloud environments when os-*-hostname is
configured for keystone, and the keystone entry is deleted temporarily
from upstream DNS, or the upstream DNS fails, providing no record for
the lookup of keystone.endpoint.domain.com. We then have to flush all
caches across the cloud once the DNS issue is resolved, rather than
auto-healing at 60 seconds as if we were running nscd with negative-ttl
set to 60 seconds.

Ultimately, a settable negative TTL would be ideal, or the ability to
not cache negative hits would also be useful. The only workaround now
is to not use caches or to operationally flush caches as needed.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1668771

Title:
  systemd-resolved negative caching for extended period of time

Status in systemd:
  New
Status in systemd package in Ubuntu:
  Confirmed

Bug description:
  231-9ubuntu3

  If a DNS lookup returns SERVFAIL, systemd-resolved seems to cache the
  result for very long (infinity?). I have to restart systemd-resolved
  to have the negative caching purged. After the SERVFAIL DNS server
  issue has been resolved, chromium/firefox still return DNS errors
  even though the host can correctly resolve the name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/systemd/+bug/1668771/+subscriptions
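[Editor's note] Later systemd releases added a knob close to what is asked for here: resolved.conf's Cache= setting accepts no-negative (added in systemd v248 to my knowledge, so not available on the 231-era release in this report; verify against resolved.conf(5) on the target release). A sketch of the drop-in:

```ini
# /etc/systemd/resolved.conf.d/no-negative.conf
# Sketch; requires a systemd new enough to support Cache=no-negative.
[Resolve]
Cache=no-negative
```

On releases without this option, the cache can be cleared without restarting the daemon via `systemd-resolve --flush-caches` (or `resolvectl flush-caches` on newer releases), where available.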
[Touch-packages] [Bug 1764848] Re: Upgrade to ca-certificates to 20180409 causes ca-certificates.crt to be removed if duplicate certs found
Perhaps a proper fix is for ubuntu-sso-client to release a new
python-ubuntu-sso-client package in bionic that doesn't include this
UbuntuOne-Go_Daddy_Class_2_CA.pem now that the ca-certificates package
has the CA. However, I'd still like to see duplicate certs not causing
ca-certificates.crt to be deleted.

** Also affects: ubuntu-sso-client
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to ca-certificates in Ubuntu.
https://bugs.launchpad.net/bugs/1764848

Title:
  Upgrade to ca-certificates to 20180409 causes ca-certificates.crt to
  be removed if duplicate certs found

Status in Ubuntu Single Sign On Client:
  New
Status in ca-certificates package in Ubuntu:
  New

Bug description:
  The certificate
  /usr/share/ca-certificates/mozilla/Go_Daddy_Class_2_CA.crt in package
  ca-certificates is conflicting with
  /etc/ssl/certs/UbuntuOne-Go_Daddy_Class_2_CA.pem from package
  python-ubuntu-sso-client. This results in the postinst trigger for
  ca-certificates removing the /etc/ssl/certs/ca-certificates.crt file.

  This happens because the postinst trigger runs update-ca-certificates
  --fresh. If I run update-ca-certificates without the --fresh flag,
  the conflict is a non-issue and the ca-certificates.crt file is
  restored.

  If I understand some of the postinst code correctly, --fresh should
  only be run if called directly or if upgrading from a ca-certificates
  version older than 2011.

  Running bionic with the daily -updates channel and ran into this this
  morning due to the release of ca-certificates version 20180409.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu-sso-client/+bug/1764848/+subscriptions
[Touch-packages] [Bug 1764848] [NEW] Upgrade to ca-certificates to 20180409 causes ca-certificates.crt to be removed if duplicate certs found
Public bug reported:

The certificate /usr/share/ca-certificates/mozilla/Go_Daddy_Class_2_CA.crt
in package ca-certificates is conflicting with
/etc/ssl/certs/UbuntuOne-Go_Daddy_Class_2_CA.pem from package
python-ubuntu-sso-client. This results in the postinst trigger for
ca-certificates removing the /etc/ssl/certs/ca-certificates.crt file.

This happens because the postinst trigger runs update-ca-certificates
--fresh. If I run update-ca-certificates without the --fresh flag, the
conflict is a non-issue and the ca-certificates.crt file is restored.

If I understand some of the postinst code correctly, --fresh should only
be run if called directly or if upgrading from a ca-certificates version
older than 2011.

Running bionic with the daily -updates channel and ran into this this
morning due to the release of ca-certificates version 20180409.

** Affects: ubuntu-sso-client
     Importance: Undecided
         Status: New

** Affects: ca-certificates (Ubuntu)
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to ca-certificates in Ubuntu.
https://bugs.launchpad.net/bugs/1764848

Title:
  Upgrade to ca-certificates to 20180409 causes ca-certificates.crt to
  be removed if duplicate certs found

Status in Ubuntu Single Sign On Client:
  New
Status in ca-certificates package in Ubuntu:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu-sso-client/+bug/1764848/+subscriptions
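[Editor's note] The trigger condition here — two certificate files with identical content under different names — can be checked for ahead of an upgrade. A minimal sketch; find_dup_certs is a hypothetical helper, not part of the ca-certificates package:

```shell
# Detect certificate files with identical content, the condition that
# makes `update-ca-certificates --fresh` drop ca-certificates.crt in
# this report. find_dup_certs groups the given files by checksum and
# prints one line per group with more than one member. Real stores to
# scan would be /usr/share/ca-certificates and /etc/ssl/certs.
find_dup_certs() {
    md5sum "$@" \
        | sort \
        | awk '{ paths[$1] = paths[$1] " " $2; n[$1]++ }
               END { for (h in n) if (n[h] > 1) print "duplicate:" paths[h] }'
}

# Possible usage:
#   find /usr/share/ca-certificates /etc/ssl/certs \
#       -type f \( -name '*.crt' -o -name '*.pem' \) \
#       -exec sh -c 'find_dup_certs "$@"' _ {} +
```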
[Touch-packages] [Bug 1727063] Re: Pacemaker package upgrades stop but fail to start pacemaker resulting in HA outage
From a high level, it appears that the invoke-rc.d script used for
compatibility falls back to checking for /etc/rcX.d symlinks for a
"policy" check if there is no $POLICYHELPER installed. Perhaps the
actual shortcoming is not having policy-rc.d installed to prefer
systemd over init.d on Xenial.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to init-system-helpers in Ubuntu.
https://bugs.launchpad.net/bugs/1727063

Title:
  Pacemaker package upgrades stop but fail to start pacemaker resulting
  in HA outage

Status in OpenStack hacluster charm:
  Invalid
Status in init-system-helpers package in Ubuntu:
  Confirmed
Status in pacemaker package in Ubuntu:
  Fix Released
Status in init-system-helpers source package in Xenial:
  Confirmed
Status in pacemaker source package in Xenial:
  Fix Released
Status in init-system-helpers source package in Zesty:
  Confirmed
Status in pacemaker source package in Zesty:
  Fix Released
Status in init-system-helpers source package in Artful:
  Confirmed
Status in pacemaker source package in Artful:
  Fix Released
Status in init-system-helpers source package in Bionic:
  Confirmed
Status in pacemaker source package in Bionic:
  Fix Released

Bug description:
  [Impact]

  Upgrades of the pacemaker package don't restart pacemaker after the
  package upgrade, resulting in down HA clusters.

  [Test Case]

  sudo apt install pacemaker
  sudo systemctl start pacemaker
  sudo dpkg-reconfigure pacemaker

  The pacemaker daemons will not be restarted.

  [Regression Potential]

  Minimal; earlier and later versions provide the defaults in the LSB
  header.

  [Original Bug Report]

  We have found on our openstack charm-hacluster implementations that
  the pacemaker .deb packaging along with the upstream pacemaker
  configuration result in pacemaker stopping but not starting upon
  package upgrade (whether attended or unattended). This was seen on
  three separate Xenial clouds, both Mitaka and Ocata.

  The package upgrade today was to pacemaker 1.1.14-2ubuntu1.2. It
  appears that pacemaker.prerm stops the service using "invoke-rc.d
  pacemaker stop" and then pacemaker.postinst attempts to start the
  service, but silently fails due to policy denial.

  It appears the policy check fails because /etc/rcX.d/S*pacemaker does
  not exist, because /etc/init.d/pacemaker has no Default-Start or
  Default-Stop entries in the LSB init headers (or rather, they are
  blank).

  I have not checked whether this affects trusty environments.

  I'd suggest that on systems that use systemd, the pacemaker.postinst
  script should check whether the service is enabled and start it with
  systemctl commands rather than using the cross-platform-compatible
  invoke-rc.d wrappers. Or upstream pacemaker should get default
  start/stop entries. Our default runlevel on cloud-init-built images
  appears to be 5 (graphical), so at least 5 should be present in the
  /etc/init.d/pacemaker LSB init headers under Default-Start:.

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-hacluster/+bug/1727063/+subscriptions
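[Editor's note] The fallback policy check described above can be illustrated with a rough reimplementation. rc_would_start is a hypothetical simplification — invoke-rc.d's real logic has more cases — but it shows why an init script with blank Default-Start never passes:

```shell
# Illustration of the fallback policy check: without
# /usr/sbin/policy-rc.d, invoke-rc.d approves a start only if an S*
# symlink for the service exists in some /etc/rcX.d directory. The
# directory root is parameterized so the logic can be exercised
# against a scratch tree instead of /etc.
rc_would_start() {
    service=$1
    root=${2:-/etc}
    for link in "$root"/rc[S2345].d/S[0-9][0-9]"$service"; do
        # If the glob matched nothing, $link stays the literal pattern
        # and -e fails. That is pacemaker's situation: blank
        # Default-Start means no S* link is ever created.
        [ -e "$link" ] && return 0
    done
    return 1
}
```

Checking `systemctl is-enabled` instead, as the comment suggests, would not depend on rcX.d links at all on systemd systems.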
[Touch-packages] [Bug 1727063] Re: Pacemaker package upgrades stop but fail to start pacemaker resulting in HA outage
Re: init-system-helpers, I noticed the oddity of the init script on a
systemd system as well, and found that there's a systemd hack in
/lib/lsb/init-functions.d/40-systemd that allows for multi-startup
compatibility. I believe invoke-rc.d should check the systemd
"enabled/disabled" state instead of just the S/K links in /etc/rcX.d.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to init-system-helpers in Ubuntu.
https://bugs.launchpad.net/bugs/1727063

Title:
  Pacemaker package upgrades stop but fail to start pacemaker resulting
  in HA outage

Status in OpenStack hacluster charm:
  Invalid
Status in init-system-helpers package in Ubuntu:
  New
Status in pacemaker package in Ubuntu:
  Fix Released
Status in init-system-helpers source package in Xenial:
  New
Status in pacemaker source package in Xenial:
  Fix Released
Status in init-system-helpers source package in Zesty:
  New
Status in pacemaker source package in Zesty:
  Fix Released
Status in init-system-helpers source package in Artful:
  New
Status in pacemaker source package in Artful:
  Fix Released
Status in init-system-helpers source package in Bionic:
  New
Status in pacemaker source package in Bionic:
  Fix Released

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-hacluster/+bug/1727063/+subscriptions
[Touch-packages] [Bug 1591411] Re: systemd-logind must be restarted every ~1000 SSH logins to prevent a ~25 second delay
Trusty versions of packages affected (I see there is a systemd update 229-4ubuntu19. Does this include the backported fixes from v230/v231 mentioned in comment #4?): ii dbus 1.10.6-1ubuntu3.1 amd64simple interprocess messaging system (daemon and utilities) ii libdbus-1-3:amd641.10.6-1ubuntu3.1 amd64simple interprocess messaging system (library) ii libdbus-glib-1-2:amd64 0.106-1 amd64simple interprocess messaging system (GLib-based shared library) ii python3-dbus 1.2.0-3 amd64simple interprocess messaging system (Python 3 interface) ii libpam-systemd:amd64 229-4ubuntu12 amd64system and service manager - PAM module ii libsystemd0:amd64229-4ubuntu12 amd64systemd utility library ii python3-systemd 231-2build1 amd64Python 3 bindings for systemd ii systemd 229-4ubuntu12 amd64system and service manager ii systemd-sysv 229-4ubuntu12 amd64system and service manager - SysV links -- You received this bug notification because you are a member of Ubuntu Touch seeded packages, which is subscribed to systemd in Ubuntu. https://bugs.launchpad.net/bugs/1591411 Title: systemd-logind must be restarted every ~1000 SSH logins to prevent a ~25 second delay Status in D-Bus: Fix Released Status in systemd: Unknown Status in dbus package in Ubuntu: Fix Released Status in systemd package in Ubuntu: Fix Released Status in dbus source package in Xenial: Fix Released Status in systemd source package in Xenial: Invalid Status in dbus source package in Yakkety: Won't Fix Status in systemd source package in Yakkety: Invalid Bug description: [Impact] The bug affects multiple users and introduces an user visible delay (~25 seconds) on SSH connections after a large number of sessions have been processed. This has a serious impact on big systems and servers running our software. The currently proposed fix is actually a safe workaround for the bug as proposed by the dbus upstream. The workaround makes uid 0 immune to the pending_fd_timeout limit that kicks in and causes the original issue. 
[Test Case] lxc launch ubuntu:x test lxc exec test -- login -f ubuntu ssh-import-id Then ran a script as follows (passing in ubuntu@): while [ 1 ]; do (time ssh $1 "echo OK > /dev/null") 2>&1 | grep ^real >> log done Then checking the log file if there are any ssh sessions that are taking 25+ seconds to complete. Multiple instances of the same script can be used at the same time. [Regression Potential] The fix has a rather low regression potential as the workaround is a very small change only affecting one particular case - handling of uid 0. The fix has been tested by multiple users and has been around in zesty for a while, with multiple people involved in reviewing the change. It's also a change that has been proposed by upstream. [Original Description] I noticed on a system that accepts large numbers of SSH connections that after awhile, SSH sessions were taking ~25 seconds to complete. Looking in /var/log/auth.log, systemd-logind starts failing with the following: Jun 10 23:55:28 test sshd[3666]: pam_unix(sshd:session): session opened for user ubuntu by (uid=0) Jun 10 23:55:28 test systemd-logind[105]: New session c1052 of user ubuntu. Jun 10 23:55:28 test systemd-logind[105]: Failed to abandon session scope: Transport endpoint is not connected Jun 10 23:55:28 test sshd[3666]: pam_systemd(sshd:session): Failed to create session: Message recipient disconnected from message bus without replying I reproduced this in an LXD container by doing something like: lxc launch ubuntu:x test lxc exec test -- login -f ubuntu ssh-import-id Then ran a script as follows (passing in ubuntu@): while [ 1 ]; do (time ssh $1 "echo OK > /dev/null") 2>&1 | grep ^real >> log done In my case, after 1052 logins, the 1053rd and thereafter were taking 25+ seconds to complete. 
Here are some snippets from the log file:

  $ cat log | grep 0m0 | wc -l
  1052
  $ cat log | grep 0m25 | wc -l
  4
  $ tail -5 log
  real 0m0.222s
  real 0m25.232s
  real 0m25.235s
  real 0m25.236s
  real 0m25.239s

ProblemType: Bug
DistroRelease: Ubuntu 16.04
Package: systemd 229-4ubuntu5
ProcVersionSignature: Ubuntu 4.4.0-22.40-generic 4.4.8
Uname: Linux 4.4.0-22-generic x86_64
ApportVersion: 2.20.1-0ubuntu2
Architecture: amd64
Date: Sat Jun 11 00:09:34 2016
MachineType: Notebook W230SS
ProcEnviron:
 TERM=xterm-256color
 PATH=(custom, no user)
ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-4.4.0-22-generic root=/dev/mapper/ubuntu--vg-root ro quiet splash
SourcePackage: systemd
SystemdDelta:
 [EXTENDED] /lib/systemd/system/rc-local.service → /lib/systemd/system/rc-local.service.d/debian.conf
 [EXTENDED] /lib/systemd/system/systemd-timesyncd.service → /lib/systemd/system/systemd-timesyncd.service.d/disable-with-time-daemon.conf
 2 overridden configuration files found.
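The log-checking step from the test case above can be sketched as a small standalone script. The sample timings written here are stand-ins for a real `log` file produced by the `while [ 1 ] ... time ssh ...` loop; the file name and thresholds simply mirror the grep snippets quoted above.

```shell
# Stand-in data: in the real test case, "log" is produced by the
# reproduction loop quoted above, one "real XmY.YYYs" line per SSH login.
printf 'real\t0m0.222s\nreal\t0m25.232s\nreal\t0m25.235s\n' > log

# Count fast (~0s) vs. slow (~25s) sessions, as in the grep snippets above.
fast=$(grep -c '0m0' log)
slow=$(grep -c '0m25' log)
echo "fast=$fast slow=$slow"
# prints: fast=1 slow=2
```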
[Touch-packages] [Bug 1591411] Re: systemd-logind must be restarted every ~1000 SSH logins to prevent a ~25 second delay
** Tags added: canonical-bootstack canonical-is
UpgradeStatus: No upgrade log present (probably fresh install)
dmi.bios.date: 04/15/2014
dmi.bios.vendor: American Megatrends Inc.
dmi.bios.version: 4.6.5
dmi.board.asset.tag: Tag 12345
dmi.board.name: W230SS
dmi.board.vendor: Notebook
dmi.board.version: Not Applicable
dmi.chassis.asset.tag: No Asset Tag
dmi.chassis.type: 9
dmi.chassis.vendor: Notebook
dmi.chassis.version: N/A
dmi.modalias: dmi:bvnAmericanMegatrendsInc.:bvr4.6.5:bd04/15/2014:svnNotebook:pnW230SS:pvrNotApplicable:rvnNotebook:rnW230SS:rvrNotApplicable:cvnNotebook:ct9:cvrN/A:
dmi.product.name: W230SS
dmi.product.version: Not Applicable
dmi.sys.vendor: Notebook

To manage notifications about this bug go to:
https://bugs.launchpad.net/dbus/+bug/1591411/+subscriptions

--
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
[Touch-packages] [Bug 1591411] Re: systemd-logind must be restarted every ~1000 SSH logins to prevent a ~25 second delay
Can we get this backported to trusty?