[Bug 1953572] Re: Broken links in the View Trends and the View Histogram menu
Update patch for Noble, Mantic and Jammy

** Patch added: "lp1953572_jammy_noble.debdiff"
   https://bugs.launchpad.net/ubuntu/+source/nagios4/+bug/1953572/+attachment/5769862/+files/lp1953572_jammy_noble.debdiff

** Patch removed: "lp1953572_jammy_noble.debdiff"
   https://bugs.launchpad.net/ubuntu/+source/nagios4/+bug/1953572/+attachment/5769841/+files/lp1953572_jammy_noble.debdiff

** Changed in: nagios4 (Ubuntu Focal)
     Assignee: (unassigned) => Jorge Merlino (jorge-merlino)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1953572

Title:
  Broken links in the View Trends and the View Histogram menu

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nagios4/+bug/1953572/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1953572] Re: Broken links in the View Trends and the View Histogram menu
Patch for focal

** Patch added: "lp1953572_focal.debdiff"
   https://bugs.launchpad.net/ubuntu/+source/nagios4/+bug/1953572/+attachment/5769861/+files/lp1953572_focal.debdiff
[Bug 1953572] Re: Broken links in the View Trends and the View Histogram menu
Patch for Noble, Mantic and Jammy

** Description changed:

- The links for the following are broken in the Service Information or
- the Host Information menus for the following links.
+ [Impact]
+ The following links are broken in the Service Information and the Host
+ Information menus:
+
  View Trends For This Service
  View Alert Histogram For This Service
  View Trends For This Host
  View Alert Histogram For This Host

- Maybe similar to
- https://support.nagios.com/forum/viewtopic.php?f=7=42728
- https://bugzilla.redhat.com/show_bug.cgi?id=1428111
+ [Test Plan]
+ Just access those links under Current status / Hosts and Current
+ status / Services after choosing a host or a service.

- the legacy links seem to work (.cgi) if you manually type them in.
+ [Where problems could occur]
+ The URLs for all cgi scripts are modified. I tested extensively, but
+ some section of the web UI might still be affected by this change. I
+ also installed nagios from source, and the URLs match the ones in this
+ package after the change, so if something breaks it is probably also
+ broken upstream.
+
+ [Other Info]
+ The same package is used for noble, mantic and jammy.

** Also affects: nagios4 (Ubuntu Noble)
   Importance: Undecided
       Status: Confirmed

** Also affects: nagios4 (Ubuntu Jammy)
   Importance: Undecided
       Status: New

** Also affects: nagios4 (Ubuntu Mantic)
   Importance: Undecided
       Status: New

** Also affects: nagios4 (Ubuntu Focal)
   Importance: Undecided
       Status: New

** Changed in: nagios4 (Ubuntu Noble)
     Assignee: (unassigned) => Jorge Merlino (jorge-merlino)

** Changed in: nagios4 (Ubuntu Mantic)
     Assignee: (unassigned) => Jorge Merlino (jorge-merlino)

** Changed in: nagios4 (Ubuntu Jammy)
     Assignee: (unassigned) => Jorge Merlino (jorge-merlino)

** Patch added: "lp1953572_jammy_noble.debdiff"
   https://bugs.launchpad.net/ubuntu/+source/nagios4/+bug/1953572/+attachment/5769841/+files/lp1953572_jammy_noble.debdiff

** Tags added: sts
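Since the description notes that the legacy .cgi links still work when typed in manually, a quick way to spot-check the endpoints is to build the classic CGI URLs directly. This is only a sketch: the base URL, host and service names below are placeholders, not values taken from the patch.

```shell
#!/bin/sh
# Sketch for manually testing the legacy CGI endpoints mentioned in the
# bug. Base URL, host and service names are placeholders; adjust them
# to your nagios4 install.
base="http://localhost/cgi-bin/nagios4"
host="myhost"
service="PING"
trends_url="$base/trends.cgi?host=$host&service=$service"
histogram_url="$base/histogram.cgi?host=$host&service=$service"
echo "$trends_url"
echo "$histogram_url"
# e.g. check the HTTP status (credentials are placeholders):
#   curl -su nagiosadmin:PASSWORD -o /dev/null -w '%{http_code}\n' "$trends_url"
```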
[Bug 1988942] Re: Failed to set image property. Invalid input for field/attribute simplestreams_metadata. Value: ... is too long (HTTP 400)
Hi Łukasz,

Sorry, but I'm missing something here: the patches I submitted are versions 2:23.0.0-0ubuntu1.2 for mantic and 2:20.3.1-0ubuntu1.2 for jammy.
[Bug 1968040] Re: Blanked screen doesn't wake up after locking [drmModeAtomicCommit: Argument invalide] [drmModeAtomicCommit: Invalid argument]
Hi Daniel! #20 works for me! Thanks!

Do you have any explanation for this? How come this was not a problem back when I was running Fedora? Is Ubuntu 22.04 using an older, unfixed version of some package? Was this bug introduced downstream in the Ubuntu release? Are there any plans to fix this for 22.04.1?
[Bug 1970483] Re: extension-manager crashes when browsing behind a proxy
Trying to print from gedit or evince causes the same crash.
[Bug 1968040] Re: Blanked screen doesn't wake up after locking [drmModeAtomicCommit: Argument invalide] [drmModeAtomicCommit: Invalid argument]
I'm facing the same issue (reported as duplicate bug 1971674) with my Mi Notebook Pro 2018 running Ubuntu 22.04: an Intel® Core™ i7-8550U with an NVIDIA GeForce MX150, using Wayland. I thought the issue was with Nvidia, but I can tell that's not the case.

** Attachment added: "journal.txt"
   https://bugs.launchpad.net/ubuntu/+source/mutter/+bug/1968040/+attachment/5586937/+files/journal.txt
[Bug 1971149] Re: Wayland session: Internal display is black after sleep
Thank you Daniel, I just did (bug #1971674)
[Bug 1971674] [NEW] Laptop screen black (not powered on) when inactive but NOT in sleep
Public bug reported:

I thought this was similar to bug #1971149, but that's not the case at all. Apparently my problem comes from the screen being black but not suspended (while playing some audio). Suspending the laptop (by closing the lid or pressing the POWER button) and then waking it up solves the problem. Let me attach the log.

I'm on a Mi Notebook Pro 2018 running Ubuntu 22.04, an Intel® Core™ i7-8550U with an NVIDIA GeForce MX150, using Wayland.

ProblemType: Bug
DistroRelease: Ubuntu 22.04
Package: gnome-shell 42.0-2ubuntu1
ProcVersionSignature: Ubuntu 5.15.0-27.28-generic 5.15.30
Uname: Linux 5.15.0-27-generic x86_64
NonfreeKernelModules: nvidia_modeset nvidia
ApportVersion: 2.20.11-0ubuntu82
Architecture: amd64
CasperMD5CheckResult: pass
CurrentDesktop: GNOME
Date: Thu May 5 10:51:26 2022
DisplayManager: gdm3
InstallationDate: Installed on 2022-04-30 (4 days ago)
InstallationMedia: Ubuntu 22.04 LTS "Jammy Jellyfish" - Release amd64 (20220419)
RelatedPackageVersions: mutter-common 42.0-3ubuntu2
SourcePackage: gnome-shell
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: gnome-shell (Ubuntu)
   Importance: Undecided
       Status: New

** Tags: amd64 apport-bug jammy wayland-session

** Attachment added: "journal.txt"
   https://bugs.launchpad.net/bugs/1971674/+attachment/5586891/+files/journal.txt

** Description changed:

- I thought this was similar to #1971149, but that's not the case at all.
- Apparently my problem comes from the screen being black but not
+ I thought this was similar to bug #1971149, but that's not the case at
+ all. Apparently my problem comes from the screen being black but not
  suspended (while playing some audio).

** Summary changed:

- Laptop black (not powered on) when inactive but NOT in sleep
+ Laptop screen black (not powered on) when inactive but NOT in sleep
[Bug 1971149] Re: Wayland session: Internal display is black after sleep
Here you have.

** Attachment added: "journal.txt"
   https://bugs.launchpad.net/ubuntu/+source/mutter/+bug/1971149/+attachment/5586890/+files/journal.txt
[Bug 1971149] Re: Wayland session: Internal display is black after sleep
I'm facing a similar issue. I'm also using Wayland and an Nvidia MX150 card. Let me attach the log once I have the issue again. It seems to happen after long periods of sleep.
[Bug 1969274] Re: openvpn client no longer connects in 22.04
I have an AC-88U. I updated the certificate, and after that I exported a new OVPN file that I then used on the Ubuntu 22.04 machine. It worked immediately.
[Bug 1969274] Re: openvpn client no longer connects in 22.04
I had the same problem of losing the connection after upgrading to Ubuntu 22.04. After reading several pages all over the Internet looking for solutions, I found someone saying that Ubuntu 22.04 uses a new and more secure version of OpenSSL (if I remember correctly). This forces some people to use more secure "protocols". I was already using them on my router, so it made no sense that it was still not working.

Then I found someone explaining that for any change you make to the OpenVPN server you MUST also update the CERTIFICATES... which I had not done. After updating the CERTIFICATES on the OpenVPN server (my router), my Ubuntu 22.04 client can now connect without any problems.

I hope this helps people who have the same problem as me.
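For anyone hitting the same wall: Ubuntu 22.04 ships OpenSSL 3, which at its default security level reportedly rejects certificates signed with legacy algorithms such as SHA-1, which would explain why regenerating the certificates fixed the connection. A quick way to check what a certificate uses is sketched below; the throwaway self-signed certificate and paths are examples only, standing in for the real OpenVPN certificate.

```shell
#!/bin/sh
# Sketch: inspect which signature algorithm a certificate uses. A
# throwaway self-signed cert stands in for the real OpenVPN one; on a
# real setup, point the last command at the cert from your .ovpn file.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -sha256 \
    -keyout "$tmp/key.pem" -out "$tmp/cert.pem" \
    -days 1 -subj "/CN=example" 2>/dev/null
# SHA-1 results here (sha1WithRSAEncryption) are the ones OpenSSL 3
# is said to refuse by default:
openssl x509 -in "$tmp/cert.pem" -noout -text | grep 'Signature Algorithm'
```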
[Bug 1872813] Re: cloud-init fails to detect iSCSI root on focal Oracle instances
Tested version 2.0.874-5ubuntu2.11 from proposed and it worked fine.
https://pastebin.ubuntu.com/p/n4F5rvtMrs/

** Tags removed: verification-needed-bionic
** Tags added: verification-done-bionic
[Bug 1970483] [NEW] extension-manager crashes when browsing behind a proxy
Public bug reported:

Ubuntu 22.04, freshly installed. Proxy configured in the Settings GUI, /etc/profile and /etc/apt/apt.conf. Browsing the web, installing software and things like that all work.

When I run extension manager it loads, but as soon as I hit the browse button it crashes immediately with the following output.

```
$ extension-manager

(extension-manager:17154): GLib-GIO-WARNING **: 17:41:17.971: Invalid proxy URI 'proxy.inst.edu.uy:3128': Invalid URI ‘proxy.inst.edu.uy:3128’
**
GLib-GIO:ERROR:../../../gio/gsocketclient.c:1982:g_socket_client_enumerator_callback: assertion failed: (data->error_info->best_error)
Bail out! GLib-GIO:ERROR:../../../gio/gsocketclient.c:1982:g_socket_client_enumerator_callback: assertion failed: (data->error_info->best_error)
Aborted (core dumped)
```

The proxy looks like this (real URL redacted):

```
$ echo $https_proxy
proxy.inst.edu.uy:3128
```

If I replace the proxy with its IP address (IP redacted), the crash changes:

```
$ export https_proxy=10.10.10.10:3128
$ extension-manager
**
GLib-GIO:ERROR:../../../gio/gsocketclient.c:1982:g_socket_client_enumerator_callback: assertion failed: (data->error_info->best_error)
Bail out! GLib-GIO:ERROR:../../../gio/gsocketclient.c:1982:g_socket_client_enumerator_callback: assertion failed: (data->error_info->best_error)
Aborted (core dumped)
```

** Affects: gnome-shell-extension-manager (Ubuntu)
   Importance: Undecided
       Status: New

** Description changed:

- When I run extension manager it loads, but as soon as I hi the browse
+ When I run extension manager it loads, but as soon as I hit the browse
  button it crashes immediately with the following output.

** Description changed:

- jvisca@fofotimol:~$ extension-manager
+ $ extension-manager
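The warning suggests GLib refuses a proxy value that lacks a URI scheme. Independently of the crash itself (which is a bug), a possible workaround is to normalize the variable before launching. This is only a sketch; the hostname is a placeholder, not the redacted proxy, and `http://` is assumed to be the right scheme for the proxy.

```shell
#!/bin/sh
# Possible workaround sketch: GLib rejects proxy values without a
# scheme, so prepend http:// when one is missing. The hostname below
# is a placeholder.
normalize_proxy() {
    case "$1" in
        *://*) printf '%s\n' "$1" ;;        # already has a scheme
        *)     printf 'http://%s\n' "$1" ;; # assume a plain HTTP proxy
    esac
}

https_proxy="$(normalize_proxy "proxy.example.com:3128")"
export https_proxy
echo "$https_proxy"
```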
[Bug 1872813] Re: cloud-init fails to detect iSCSI root on focal Oracle instances
** Description changed:

+ [Impact]
+
+ When creating a bare metal instance on Oracle Cloud (these are backed
+ by an iSCSI disk), the IP address is configured on an interface
+ (enp45s0f0) on boot, but cloud-init generates a
+ /etc/netplan/50-cloud-init.yaml with an entry that configures
+ enp12s0f0 using DHCP. As a result, enp12s0f0 sends a DHCPREQUEST and
+ waits for a reply until it times out, delaying the boot process, as
+ there is no DHCP server serving this interface.
+
+ This is caused by a missing /run/initramfs/open-iscsi.interface file
+ that should point to the enp45s0f0 interface.
+
+ [Fix]
+
+ There is a script in the open-iscsi package that checks whether any
+ iSCSI disks are present and, if there are none, removes the
+ /run/initramfs/open-iscsi.interface file that stores the interface
+ where the iSCSI disk is present.
+
+ This script originally ran along with the local-top initrd scripts,
+ but it uses the /dev/disk/by-path/ path to find whether iSCSI disks
+ are present. This path does not yet exist when the local-top scripts
+ run, so the file was always removed.
+
+ This was fixed in Focal by moving the script to run along with the
+ local-bottom scripts. When these scripts run, the /dev/disk/by-path/
+ path exists.
+
+ [Test Plan]
+
+ This can be reproduced by launching any bare metal instance on Oracle
+ Cloud (all are backed by an iSCSI disk) and checking whether the
+ /run/initramfs/open-iscsi.interface file is present.
+
+ [Where problems could occur]
+
+ There should be no problems, as the script still runs anyway, just
+ later in the boot process.
+
+ If the script fails to run, it could leave the open-iscsi.interface
+ file present with no iSCSI drives, but that should cause no issues
+ besides delaying the boot process.
+
+ [Original description]
+
  Currently focal images on Oracle are failing to get data from the
  Oracle DS with this traceback:

  Traceback (most recent call last):
    File "/usr/lib/python3/dist-packages/cloudinit/sources/__init__.py", line 772, in find_source
      if s.update_metadata([EventType.BOOT_NEW_INSTANCE]):
    File "/usr/lib/python3/dist-packages/cloudinit/sources/__init__.py", line 661, in update_metadata
      result = self.get_data()
    File "/usr/lib/python3/dist-packages/cloudinit/sources/__init__.py", line 279, in get_data
      return_value = self._get_data()
    File "/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceOracle.py", line 195, in _get_data
      with dhcp.EphemeralDHCPv4(net.find_fallback_nic()):
    File "/usr/lib/python3/dist-packages/cloudinit/net/dhcp.py", line 57, in __enter__
      return self.obtain_lease()
    File "/usr/lib/python3/dist-packages/cloudinit/net/dhcp.py", line 109, in obtain_lease
      ephipv4.__enter__()
    File "/usr/lib/python3/dist-packages/cloudinit/net/__init__.py", line 1019, in __enter__
      self._bringup_static_routes()
    File "/usr/lib/python3/dist-packages/cloudinit/net/__init__.py", line 1071, in _bringup_static_routes
      util.subp(
    File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 2084, in subp
      raise ProcessExecutionError(stdout=out, stderr=err,
  cloudinit.util.ProcessExecutionError: Unexpected error while running command.
  Command: ['ip', '-4', 'route', 'add', '0.0.0.0/0', 'via', '10.0.0.1', 'dev', 'ens3']
  Exit code: 2
  Reason: -
  Stdout:
  Stderr: RTNETLINK answers: File exists

  In https://github.com/canonical/cloud-init/blob/46cf23c28812d3e3ba0c570defd9a05628af5556/cloudinit/sources/DataSourceOracle.py#L194-L198,
  we can see that this path is only taken if _is_iscsi_root returns
  False.
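The check described in the [Fix] section can be sketched roughly as follows. This is an illustration, not the actual open-iscsi script: the function and argument names are invented, and only the general logic (remove the interface marker when no iSCSI disk shows up under by-path) comes from the description above.

```shell
#!/bin/sh
# Rough, hypothetical sketch of the check described in [Fix]: only once
# /dev/disk/by-path is populated (i.e. at local-bottom time) can we
# tell whether an iSCSI-backed disk exists; if none does, the interface
# marker file is stale and gets removed.
cleanup_iscsi_iface() {
    bypath_dir="$1"    # normally /dev/disk/by-path
    iface_file="$2"    # normally /run/initramfs/open-iscsi.interface
    # by-path entries for iSCSI disks contain "-iscsi-" in their name
    if ! ls "$bypath_dir"/*-iscsi-* >/dev/null 2>&1; then
        rm -f "$iface_file"
    fi
}
```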
[Bug 1872813] Re: cloud-init fails to detect iSCSI root on focal Oracle instances
I tested the patch myself on an Oracle Cloud machine that presented this issue.
[Bug 1872813] Re: cloud-init fails to detect iSCSI root on focal Oracle instances
SRU for Bionic

** Tags added: sts

** Patch added: "lp1872813_bionic.debdiff"
   https://bugs.launchpad.net/ubuntu/+source/open-iscsi/+bug/1872813/+attachment/5577913/+files/lp1872813_bionic.debdiff

** Changed in: open-iscsi (Ubuntu Bionic)
       Status: New => In Progress
[Bug 1872813] Re: cloud-init fails to detect iSCSI root on focal Oracle instances
** Also affects: open-iscsi (Ubuntu Bionic)
   Importance: Undecided
       Status: New

** Changed in: open-iscsi (Ubuntu Bionic)
     Assignee: (unassigned) => Jorge Merlino (jorge-merlino)
[Bug 1965603] [NEW] [950SBE/951SBE, Realtek ALC298, Speaker, Internal] No sound at all
Public bug reported:

Computer model: Notebook 9 Pen 15"
GPU: Nvidia GeForce MX150
Sound: AKG stereo speakers, High Definition Audio

No sound whatsoever, not even over Bluetooth. I removed and reinstalled ALSA and PulseAudio, then restarted, only to still have no sound. I checked the settings in the sound menu to see if I could select the proper audio device, but only one selection is given. Any ideas on how to go about fixing this issue?

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: alsa-base 1.0.25+dfsg-0ubuntu5
ProcVersionSignature: Ubuntu 5.13.0-35.40~20.04.1-generic 5.13.19
Uname: Linux 5.13.0-35-generic x86_64
NonfreeKernelModules: nvidia_modeset nvidia
ApportVersion: 2.20.11-0ubuntu27.21
Architecture: amd64
AudioDevicesInUse:
 USER  PID  ACCESS  COMMAND
 /dev/snd/controlC0: densetsu 1521 F pulseaudio
 /dev/snd/pcmC0D0c: densetsu 1521 F...m pulseaudio
 /dev/snd/pcmC0D0p: densetsu 1521 F...m pulseaudio
CasperMD5CheckResult: skip
CurrentDesktop: ubuntu:GNOME
Date: Fri Mar 18 19:19:42 2022
InstallationDate: Installed on 2022-03-17 (1 days ago)
InstallationMedia: Ubuntu 20.04.4 LTS "Focal Fossa" - Release amd64 (20220223)
PackageArchitecture: all
ProcEnviron:
 PATH=(custom, no user)
 XDG_RUNTIME_DIR=
 LANG=en_US.UTF-8
 SHELL=/bin/bash
SourcePackage: alsa-driver
Symptom: audio
Symptom_AlsaPlaybackTest: ALSA playback test through plughw:PCH failed
Symptom_Card: Built-in Audio - HDA Intel PCH
Symptom_DevicesInUse:
 USER  PID  ACCESS  COMMAND
 /dev/snd/controlC0: gdm 1033 F pulseaudio
                     densetsu 1521 F pulseaudio
 /dev/snd/pcmC0D0c: densetsu 1521 F...m pulseaudio
 /dev/snd/pcmC0D0p: densetsu 1521 F...m pulseaudio
Symptom_Jack: Speaker, Internal
Symptom_Type: No sound at all
Title: [950SBE/951SBE, Realtek ALC298, Speaker, Internal] No sound at all
UpgradeStatus: No upgrade log present (probably fresh install)
dmi.bios.date: 09/19/2019
dmi.bios.release: 5.13
dmi.bios.vendor: American Megatrends Inc.
dmi.bios.version: P07RES.076.190919.SP
dmi.board.asset.tag: No Asset Tag
dmi.board.name: NP950SBE-X01US
dmi.board.vendor: SAMSUNG ELECTRONICS CO., LTD.
dmi.board.version: SGL9849A0L-C01-G001-S0001+10.0.17763
dmi.chassis.asset.tag: No Asset Tag
dmi.chassis.type: 31
dmi.chassis.vendor: SAMSUNG ELECTRONICS CO., LTD.
dmi.chassis.version: N/A
dmi.modalias: dmi:bvnAmericanMegatrendsInc.:bvrP07RES.076.190919.SP:bd09/19/2019:br5.13:svnSAMSUNGELECTRONICSCO.,LTD.:pn950SBE/951SBE:pvrP07RES:rvnSAMSUNGELECTRONICSCO.,LTD.:rnNP950SBE-X01US:rvrSGL9849A0L-C01-G001-S0001+10.0.17763:cvnSAMSUNGELECTRONICSCO.,LTD.:ct31:cvrN/A:skuSCAI-A5A5-A5A5-A5A5-PRES:
dmi.product.family: Notebook 9 Series
dmi.product.name: 950SBE/951SBE
dmi.product.sku: SCAI-A5A5-A5A5-A5A5-PRES
dmi.product.version: P07RES
dmi.sys.vendor: SAMSUNG ELECTRONICS CO., LTD.

** Affects: alsa-driver (Ubuntu)
   Importance: Undecided
       Status: New

** Tags: amd64 apport-bug focal
[Bug 1912727] Re: KVM Page Fails to load with error "An unexpected error has occurred, please try refreshing your browser window."
** Also affects: maas (Ubuntu)
   Importance: Undecided
       Status: New

** No longer affects: maas (Ubuntu)
[Bug 1949643] Re: iptables-persistent unconditionally drops existing iptables rules
Tested version 1.0.15ubuntu0.21.10.1 in Impish. Performed the tests in comment #14. All worked fine.

** Tags removed: verification-needed-bionic verification-needed-focal verification-needed-hirsute verification-needed-impish
** Tags added: verification-done-bionic verification-done-focal verification-done-hirsute verification-done-impish
[Bug 1949643] Re: iptables-persistent unconditionally drops existing iptables rules
Tested version 1.0.15ubuntu0.21.04.1 in Hirsute. Performed the tests on comment #14. All worked fine.
[Bug 1949643] Re: iptables-persistent unconditionally drops existing iptables rules
Tested version 1.0.14ubuntu1 in Focal. Performed the tests on comment #14. All worked fine.
[Bug 1949643] Re: iptables-persistent unconditionally drops existing iptables rules
Tested version 1.0.4+nmu2ubuntu1.1 in Bionic. Performed the tests on comment #14. All worked fine.
[Bug 1949643] Re: iptables-persistent unconditionally drops existing iptables rules
The SRU patches were built and tested by me in each Ubuntu version.
[Bug 1949643] Re: iptables-persistent unconditionally drops existing iptables rules
SRU for Impish. Fixed changelog. ** Patch added: "lp1949643-impishv2.debdiff" https://bugs.launchpad.net/ubuntu/hirsute/+source/iptables-persistent/+bug/1949643/+attachment/5547368/+files/lp1949643-impishv2.debdiff
[Bug 1949643] Re: iptables-persistent unconditionally drops existing iptables rules
SRU for Hirsute. Fixed changelog. ** Patch added: "lp1949643-hirsutev2.debdiff" https://bugs.launchpad.net/ubuntu/hirsute/+source/iptables-persistent/+bug/1949643/+attachment/5547367/+files/lp1949643-hirsutev2.debdiff
[Bug 1949643] Re: iptables-persistent unconditionally drops existing iptables rules
SRU for Focal. Fixed changelog. ** Patch added: "lp1949643-focalv2.debdiff" https://bugs.launchpad.net/ubuntu/hirsute/+source/iptables-persistent/+bug/1949643/+attachment/5547366/+files/lp1949643-focalv2.debdiff
[Bug 1949643] Re: iptables-persistent unconditionally drops existing iptables rules
SRU for Bionic. Fixed changelog and indentation. ** Patch added: "lp1949643-bionicv2.debdiff" https://bugs.launchpad.net/ubuntu/hirsute/+source/iptables-persistent/+bug/1949643/+attachment/5547365/+files/lp1949643-bionicv2.debdiff
[Bug 1949643] Re: iptables-persistent unconditionally drops existing iptables rules
** Tags added: sts
[Bug 1949643] Re: iptables-persistent unconditionally drops existing iptables rules
SRU for impish ** Patch added: "lp1949643-impish.debdiff" https://bugs.launchpad.net/ubuntu/+source/iptables-persistent/+bug/1949643/+attachment/5546887/+files/lp1949643-impish.debdiff
[Bug 1949643] Re: iptables-persistent unconditionally drops existing iptables rules
SRU for focal ** Patch added: "lp1949643-focal.debdiff" https://bugs.launchpad.net/ubuntu/+source/iptables-persistent/+bug/1949643/+attachment/5546889/+files/lp1949643-focal.debdiff
[Bug 1949643] Re: iptables-persistent unconditionally drops existing iptables rules
SRU for hirsute ** Patch added: "lp1949643-hirsute.debdiff" https://bugs.launchpad.net/ubuntu/+source/iptables-persistent/+bug/1949643/+attachment/5546888/+files/lp1949643-hirsute.debdiff
[Bug 1949643] Re: iptables-persistent unconditionally drops existing iptables rules
SRU for bionic ** Patch added: "lp1949643-bionic.debdiff" https://bugs.launchpad.net/ubuntu/+source/iptables-persistent/+bug/1949643/+attachment/5546890/+files/lp1949643-bionic.debdiff
[Bug 1751923] Re: [SRU]_heal_instance_info_cache periodic task bases on port list from nova db, not from neutron server
Hello, I've verified that this problem doesn't reproduce with the package contained in -proposed.

1) Deployed this bundle of bionic-queens.
2) Upgraded to the following versions:

root@juju-51d6ad-1751923-6:/home/ubuntu# dpkg -l | grep nova
ii nova-api-os-compute 2:17.0.13-0ubuntu3 all OpenStack Compute - OpenStack Compute API frontend
ii nova-common 2:17.0.13-0ubuntu3 all OpenStack Compute - common files
ii nova-conductor 2:17.0.13-0ubuntu3 all OpenStack Compute - conductor service
ii nova-placement-api 2:17.0.13-0ubuntu3 all OpenStack Compute - placement API frontend
ii nova-scheduler 2:17.0.13-0ubuntu3 all OpenStack Compute - virtual machine scheduler
ii python-nova 2:17.0.13-0ubuntu3 all OpenStack Compute Python libraries

root@juju-51d6ad-1751923-7:/home/ubuntu# dpkg -l | grep nova
ii nova-api-metadata 2:17.0.13-0ubuntu3 all OpenStack Compute - metadata API frontend
ii nova-common 2:17.0.13-0ubuntu3 all OpenStack Compute - common files
ii nova-compute 2:17.0.13-0ubuntu3 all OpenStack Compute - compute node base
ii nova-compute-kvm 2:17.0.13-0ubuntu3 all OpenStack Compute - compute node (KVM)
ii nova-compute-libvirt 2:17.0.13-0ubuntu3 all OpenStack Compute - compute node libvirt support
ii python-nova 2:17.0.13-0ubuntu3 all OpenStack Compute Python libraries
ii python-novaclient 2:9.1.1-0ubuntu1 all client library for OpenStack Compute API - Python 2.7
ii python3-novaclient 2:9.1.1-0ubuntu1 all client library for OpenStack Compute API - 3.x

root@juju-51d6ad-1751923-6:/home/ubuntu# systemctl status nova*|grep -i active
Active: active (running) since Fri 2021-08-27 22:02:25 UTC; 1h 7min ago
Active: active (running) since Fri 2021-08-27 22:02:12 UTC; 1h 8min ago
Active: active (running) since Fri 2021-08-27 22:02:25 UTC; 1h 7min ago

3) Created a server with 4 private ports, 1 public one.
ubuntu@niedbalski-bastion:~/stsstack-bundles/openstack$ openstack server list
| ID | Name | Status | Networks | Image | Flavor |
| 5843e6b5-e1a7-4208-9f19-1d051c032afb | cirros-232302 | ACTIVE | private=192.168.21.22, 192.168.21.6, 192.168.21.10, 192.168.21.13, 10.5.150.1 | cirros | m1.cirros |

ubuntu@niedbalski-bastion:~/stsstack-bundles/openstack$ nova interface-list 5843e6b5-e1a7-4208-9f19-1d051c032afb
| Port State | Port ID | Net ID | IP addresses | MAC Addr |
| ACTIVE | 1680b164-14d7-4d6e-b085-94292ece82cf | 8d91e266-0925-4c29-8039-0d71862df4fc | 192.168.21.13 | fa:16:3e:cf:f8:c8 |
| ACTIVE | 5865a40e-36fa-4cf9-bd40-85a1e78031f5 | 8d91e266-0925-4c29-8039-0d71862df4fc | 192.168.21.6 | fa:16:3e:eb:73:b1 |
| ACTIVE | 5f400107-d9eb-4a1b-a37b-3bd034d8f995 | 8d91e266-0925-4c29-8039-0d71862df4fc | 192.168.21.10 | fa:16:3e:95:9a:78 |
| ACTIVE | b11d1c8e-d42a-41e0-a7ad-e34a7a93d020 | 8d91e266-0925-4c29-8039-0d71862df4fc | 192.168.21.22 | fa:16:3e:a3:45:45 |

4) I can see the 4 tap devices.

root@juju-51d6ad-1751923-7:/home/ubuntu# virsh dumpxml instance-0001|grep -i tap

5) I modified the instance info caches removing one of the interfaces.

Database changed
mysql> update instance_info_caches set
[Bug 1879798] Re: designate-manage pool update doesn't reflects targets master dns servers into zones.
I see the change in prior releases S/T/U https://github.com/openstack/designate/commit/b967e9f706373f1aad6db882c2295fbbe1fadfc9 https://github.com/openstack/designate/commit/953492904772933f5f8e265d1ae6cc1e6385fcc6 https://github.com/openstack/designate/commit/0b5634643b4b69cd0a7d5499f258602604741d22 Can this be backported into the cloud-archive releases? Thanks
[Bug 1940422] [NEW] it doesn't start
Public bug reported:

/usr/games/fretsonfire: 5: ./FretsOnFire.py: not found

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: fretsonfire 1.3.110.dfsg2-5ubuntu1
ProcVersionSignature: Ubuntu 5.11.0-25.27~20.04.1-generic 5.11.22
Uname: Linux 5.11.0-25-generic x86_64
ApportVersion: 2.20.11-0ubuntu27.18
Architecture: amd64
CasperMD5CheckResult: skip
CurrentDesktop: ubuntu:GNOME
Date: Wed Aug 18 13:11:35 2021
InstallationDate: Installed on 2020-08-07 (376 days ago)
InstallationMedia: Slimbook OS 20.04 LTS (20200807)
PackageArchitecture: all
ProcEnviron:
 TERM=xterm-256color
 PATH=(custom, no user)
 XDG_RUNTIME_DIR=
 LANG=es_ES.UTF-8
 SHELL=/bin/bash
SourcePackage: fretsonfire
UpgradeStatus: No upgrade log present (probably fresh install)
modified.conffile..etc.default.apport:
 # set this to 0 to disable apport, or to 1 to enable it
 # you can temporarily override this with
 # sudo service apport start force_start=1
 enabled=0
mtime.conffile..etc.default.apport: 2020-08-06T12:33:01

** Affects: fretsonfire (Ubuntu)
   Importance: Undecided
   Status: New

** Tags: amd64 apport-bug focal
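The "./FretsOnFire.py: not found" failure at line 5 of the wrapper is characteristic of a launcher that invokes the game script by a path relative to the caller's working directory instead of the game's install directory. A minimal sketch of a launcher that avoids this (the `launch` helper and directory layout are hypothetical, for illustration only — this is not the packaged wrapper):

```python
import os
import subprocess
import sys

def launch(game_dir):
    """Run FretsOnFire.py from its install directory.

    Resolving the script against game_dir, rather than relying on the
    caller's current working directory (as the failing wrapper appears
    to do), avoids the "./FretsOnFire.py: not found" error when the
    game is started from anywhere else.
    """
    script = os.path.join(game_dir, "FretsOnFire.py")
    if not os.path.exists(script):
        raise FileNotFoundError(script)
    # cwd=game_dir so the game's own relative data paths still resolve
    return subprocess.run([sys.executable, script], cwd=game_dir, check=True)
```

A wrapper built this way succeeds regardless of where the user runs `fretsonfire` from, and raises a clear error if the install directory is wrong.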
[Bug 1938634] [NEW] package mysql-server-8.0 8.0.26-0ubuntu0.21.04.3 failed to install/upgrade: installed mysql-server-8.0 package post-installation script subprocess returned error exit status 1
Public bug reported:

When installing and exiting at synaptic it can't configure mysql - mariadb gets in the way, I suspect. I tried before this one but no go. I need mysql to test an app, so I tried again. No luck. Must say I am loving Ubuntu 21.04 Hirsute (or close). Good luck 2 all. Jorge 01.aug.2021 12:00 hours

ProblemType: Package
DistroRelease: Ubuntu 21.04
Package: mysql-server-8.0 8.0.26-0ubuntu0.21.04.3
ProcVersionSignature: Ubuntu 5.11.0-25.27-generic 5.11.22
Uname: Linux 5.11.0-25-generic x86_64
NonfreeKernelModules: nvidia_modeset nvidia
ApportVersion: 2.20.11-0ubuntu65.1
Architecture: amd64
CasperMD5CheckResult: pass
Date: Sun Aug 1 11:44:01 2021
ErrorMessage: installed mysql-server-8.0 package post-installation script subprocess returned error exit status 1
InstallationDate: Installed on 2021-07-02 (30 days ago)
InstallationMedia: Ubuntu 21.04 "Hirsute Hippo" - Release amd64 (20210420)
Logs.var.log.daemon.log:
Logs.var.log.mysql.error.log:
MySQLConf.etc.mysql.conf.d.mysql.cnf: [mysql]
MySQLConf.etc.mysql.conf.d.mysqldump.cnf:
 [mysqldump]
 quick
 quote-names
 max_allowed_packet = 16M
MySQLConf.etc.mysql.my.cnf: Error: [Errno 40] Too many levels of symbolic links: '/etc/mysql/my.cnf'
MySQLVarLibDirListing: ['ib_buffer_pool', 'nvm01.err', 'binlog.index', 'debian-5.7.flag', 'auto.cnf', 'test', 'performance_schema', 'mysql', 'ib_logfile0', 'aria_log.0001', 'ibdata1', 'aria_log_control']
ProcCmdline: BOOT_IMAGE=/boot/vmlinuz-5.11.0-25-generic root=/dev/mapper/vgubuntu-root ro quiet splash vt.handoff=7
Python3Details: /usr/bin/python3.9, Python 3.9.5, python3-minimal, 3.9.4-1
PythonDetails: N/A
RelatedPackageVersions:
 dpkg 1.20.9ubuntu1
 apt 2.2.4ubuntu0.1
SourcePackage: mysql-8.0
Title: package mysql-server-8.0 8.0.26-0ubuntu0.21.04.3 failed to install/upgrade: installed mysql-server-8.0 package post-installation script subprocess returned error exit status 1
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: mysql-8.0 (Ubuntu)
   Importance: Undecided
   Status: New

** Tags: amd64 apport-package hirsute

** Attachment added: "copied output from synaptic."
   https://bugs.launchpad.net/bugs/1938634/+attachment/5515066/+files/bug.submit.mysql.conf.error.txt
[Bug 1751923] Re: [SRU]_heal_instance_info_cache periodic task bases on port list from nova db, not from neutron server
I am in the process of verifying the bionic/rocky/queens releases.
[Bug 1927868] Re: vRouter not working after update to 16.3.1
Hello, I reviewed the code path and the upgrade in my reproducer. Following the approach of upgrading neutron-gateway and subsequently neutron-api doesn't work because of a mismatch in the migrations/RPC versions: the HA port fails to be created/updated, then the keepalived process cannot be spawned, and finally the state-change-monitor fails to find the PID for that keepalived process. If I upgrade neutron-api, run the migrations to head and then upgrade the gateways, all seems correct.

I upgraded from the following versions:

root@juju-da864d-1927868-5:/home/ubuntu# dpkg -l | grep keepalived
ii keepalived 1:1.3.9-1ubuntu0.18.04.2 amd64 Failover and monitoring daemon for LVS clusters
root@juju-da864d-1927868-5:/home/ubuntu# dpkg -l | grep neutron-common
ii neutron-common 2:15.3.3-0ubuntu1~cloud0 all Neutron is a virtual network service for Openstack - common

--> to

root@juju-da864d-1927868-5:/home/ubuntu# dpkg -l | grep neutron-common
ii neutron-common 2:16.3.2-0ubuntu3~cloud0 all Neutron is a virtual network service for Openstack - common

I created a router with HA enabled as follows:

$ openstack router list
| ID | Name | Status | State | Project | Distributed | HA |
| 09fa811f-410c-4360-8cae-687e7e73ff21 | provider-router | ACTIVE | UP | 6f5aaf5130764305a5d37862e3ff18ce | False | True |

===> Prior to the upgrade I can list the keepalived processes linked to the ha-router:

root 22999 0.0 0.0 91816 3052 ? Ss 19:17 0:00 keepalived -P -f /var/lib/neutron/ha_confs/09fa811f-410c-4360-8cae-687e7e73ff21/keepalived.conf -p /var/lib/neutron/ha_confs/09fa811f-410c-4360-8cae-687e7e73ff21.pid.keepalived -r /var/lib/neutron/ha_confs/09fa811f-410c-4360-8cae-687e7e73ff21.pid.keepalived-vrrp -D
root 23001 0.0 0.1 92084 4088 ? S 19:17 0:00 keepalived -P -f /var/lib/neutron/ha_confs/09fa811f-410c-4360-8cae-687e7e73ff21/keepalived.conf -p /var/lib/neutron/ha_confs/09fa811f-410c-4360-8cae-687e7e73ff21.pid.keepalived -r /var/lib/neutron/ha_confs/09fa811f-410c-4360-8cae-687e7e73ff21.pid.keepalived-vrrp -D

===> After upgrading, none is returned, and in fact the keepalived processes aren't spawned after neutron-* is upgraded.

Pre-upgrade:
Jun 24 19:17:07 juju-da864d-1927868-5 Keepalived[22997]: Starting Keepalived v1.3.9 (10/21,2017)
Jun 24 19:17:07 juju-da864d-1927868-5 Keepalived[22999]: Starting VRRP child process, pid=23001

Post-upgrade (not started):
Jun 24 19:30:41 juju-da864d-1927868-5 Keepalived[22999]: Stopping
Jun 24 19:30:42 juju-da864d-1927868-5 Keepalived_vrrp[23001]: Stopped
Jun 24 19:30:42 juju-da864d-1927868-5 Keepalived[22999]: Stopped Keepalived v1.3.9 (10/21,2017)

The reason those keepalived processes are not re-spawned:

1) The ml2 process starts the router devices by requesting the device details over RPC. This call fails with different oslo target versions; therefore the neutron-api migrations must be applied before the gateways are upgraded.

9819:2021-06-24 19:31:09.935 31744 DEBUG neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-14f31407-6342-4f71-98b8-4437e166dbaa - - - - -] Starting to process devices in:{'current': {'87cfdd45-fea7-4c06-aa13-174cb71b294f', 'b8e18ba0-c65b-498e-9a8b-34c0fcc42d07', '926b7377-30f4-4b2c-9064-8aab3918a385'}, 'added': {'87cfdd45-fea7-4c06-aa13-174cb71b294f'}, 'removed': set(), 'updated': set(), 're_added': set()} rpc_loop /usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:2685
9821:2021-06-24 19:31:10.028 31744 ERROR neutron.agent.rpc [req-14f31407-6342-4f71-98b8-4437e166dbaa - - - - -] Failed to get details for device 87cfdd45-fea7-4c06-aa13-174cb71b294f: oslo_messaging.rpc.client.RemoteError: Remote error: InvalidTargetVersion Invalid target version 1.1
9869:2021-06-24 19:31:10.510 31744 DEBUG neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-14f31407-6342-4f71-98b8-4437e166dbaa - - - - -] retrying failed devices {'87cfdd45-fea7-4c06-aa13-174cb71b294f'} _update_port_info_failed_devices_stats /usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1674

2) Then the l3 HA router creation mechanism can't process the HA router because the HA port id
[Bug 1879798] Re: designate-manage pool update doesn't reflects targets master dns servers into zones.
Master/Train/Ussuri/Stein fixed upstream https://review.opendev.org/q/topic:%22bug%252F1879798%22+(status:open%20OR%20status:merged) Needs backports for UCA ** Also affects: cloud-archive Importance: Undecided Status: New ** Also affects: cloud-archive/ussuri Importance: Undecided Status: New ** Also affects: cloud-archive/wallaby Importance: Undecided Status: New ** Also affects: cloud-archive/victoria Importance: Undecided Status: New ** Also affects: cloud-archive/xena Importance: Undecided Status: New ** Changed in: cloud-archive/xena Status: New => Fix Released ** Changed in: cloud-archive/wallaby Status: New => Fix Released ** Changed in: cloud-archive/victoria Status: New => Fix Committed ** Changed in: cloud-archive/ussuri Status: New => Fix Committed ** Also affects: cloud-archive/stein Importance: Undecided Status: New ** Also affects: cloud-archive/train Importance: Undecided Status: New
[Bug 1928434] Re: shim-signed does not boot on EFI 2.40 by Apple
I have Ubuntu MATE 20.10 running on my Raspberry Pi 4 and I can't upgrade because of this bug as well. I went to the file "DistUpgradeQuirks.py" (as mentioned above) and I can't see any "apple" string there to change and force the upgrade. I'm just stuck.
[Bug 1751923] Re: [SRU]_heal_instance_info_cache periodic task bases on port list from nova db, not from neutron server
@corey is there anything specific you need from my end to get this SRU reviewed?
[Bug 1926938] Re: Recent mainline packages are built with Hirsuite 21.04, not Focal 20.04 LTS
I was able to install 5.12.4 using @tuxinvader ppa on 20.04 with current libc6 2.31. Great work @tuxinvader! I have used: sudo add-apt-repository ppa:tuxinvader/lts-mainline sudo apt install linux-image-unsigned-5.12.4-051204-generic linux-modules-5.12.4-051204-generic linux-headers-5.12.4-051204-generic
[Bug 1751923] Re: [SRU]_heal_instance_info_cache periodic task bases on port list from nova db, not from neutron server
** Description changed:

  [Impact]

  * During the periodic task _heal_instance_info_cache the instance_info_caches are not updated using instance port_ids taken from neutron, but from the nova db.
  * This causes existing VMs to lose their network interfaces after reboot.

  [Test Plan]

  * This bug is reproducible on Bionic/Queens clouds.
  1) Deploy the following Juju bundle: https://paste.ubuntu.com/p/HgsqZfsDGh/
- 2) Run the following script: https://paste.ubuntu.com/p/DrFcDXZGSt/
+ 2) Run the following script: https://paste.ubuntu.com/p/c4VDkqyR2z/
  3) If the script finishes with "Port not found", the bug is still present.

  [Where problems could occur]

  ** No specific regression potential has been identified. ** Check the other info section ***

  [Other Info]

  How it looks now?
  =
  _heal_instance_info_cache during crontask: https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/compute/manager.py#L6525 is using network_api to get instance_nw_info (instance_info_caches):

      try:
          # Call to network API to get instance info.. this will
          # force an update to the instance's info_cache
          self.network_api.get_instance_nw_info(context, instance)

  self.network_api.get_instance_nw_info() is listed below: https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L1377 and it uses _build_network_info_model() without networks and port_ids parameters (because we're not adding any new interface to the instance): https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L2356

  Next: _gather_port_ids_and_networks() generates the list of instance networks and port_ids:

      networks, port_ids = self._gather_port_ids_and_networks(
          context, instance, networks, port_ids, client)

  https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L2389-L2390
  https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L1393

  As we see, _gather_port_ids_and_networks() takes the port list from the DB: https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/objects/instance.py#L1173-L1176

  And that's it. When we lose a port it's not possible to add it again with this periodic task. The only way is to clean the device_id field in the neutron port object and re-attach the interface using `nova interface-attach`.

  When the interface is missing and there is no port configured on the compute host (for example after a compute reboot) - the interface is not added to the instance and from neutron's point of view the port state is DOWN.
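The failure mode described above can be condensed into a toy sketch: a heal step that rebuilds the cache from the nova-side port list can never re-add a lost port, while one that asks neutron for the device's ports can. `FakeNeutronClient` and the dict shapes below are illustrative assumptions, not nova's real API:

```python
class FakeNeutronClient:
    """Stand-in for the neutron API: the authoritative port inventory."""
    def __init__(self, ports):
        self._ports = ports

    def list_ports(self, device_id):
        return [p for p in self._ports if p["device_id"] == device_id]

def heal_from_db(cached_port_ids, neutron, instance_uuid):
    # What the periodic task effectively does today: the "healed" cache
    # is rebuilt only from port ids already in the nova DB cache, so a
    # port that was lost from the cache stays lost.
    return sorted(cached_port_ids)

def heal_from_neutron(cached_port_ids, neutron, instance_uuid):
    # The behaviour the fix aims for: take the port list from neutron,
    # so a port missing from the stale cache is re-added.
    return sorted(p["id"] for p in neutron.list_ports(instance_uuid))

neutron = FakeNeutronClient([
    {"id": "port-a", "device_id": "vm-1"},
    {"id": "port-b", "device_id": "vm-1"},
    {"id": "port-c", "device_id": "vm-2"},
])
stale_cache = ["port-a"]          # "port-b" was lost from the DB cache
print(heal_from_db(stale_cache, neutron, "vm-1"))       # ['port-a']
print(heal_from_neutron(stale_cache, neutron, "vm-1"))  # ['port-a', 'port-b']
```

The second variant mirrors neutron's view of the instance, which is exactly why the bug's only workaround today (clearing device_id and re-attaching) becomes unnecessary.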
  When the interface is missing in the cache and we hard-reboot the instance - it's not added as a tap interface in the xml file = we don't have the network on the host.

  Steps to reproduce
  ==
  1. Spawn devstack
  2. Spawn a VM inside devstack with multiple ports (for example also from 2 different networks)
  3. Update the DB row, drop one interface from interfaces_list
  4. Hard-reboot the instance
  5. See that nova list shows instance without one address, but nova interface-list shows all addresseshttps://launchpad.net/~niedbalski/+archive/ubuntu/lp1751923/+packages
  6. See that one port is missing in the instance xml files
  7. In theory the _heal_instance_info_cache should fix these things, but it relies on memory, not on a fresh list of instance ports taken from neutron.

  Reproduced Example
  ==
  1. Spawn a VM with 1 private network port
  nova boot --flavor m1.small --image cirros-0.3.5-x86_64-disk --nic net-name=private test-2
  2. Attach ports to have 2 private and 2 public interfaces
  nova list:
  | a64ed18d-9868-4bf0-90d3-d710d278922d | test-2 | ACTIVE | - | Running | public=2001:db8::e, 172.24.4.15, 2001:db8::c, 172.24.4.16; private=fdda:5d77:e18e:0:f816:3eff:fee8:, 10.0.0.3, fdda:5d77:e18e:0:f816:3eff:fe53:231c, 10.0.0.5 |
  So we see 4 ports:
  stack@mjozefcz-devstack-ptg:~$ nova interface-list a64ed18d-9868-4bf0-90d3-d710d278922d
  | Port State | Port ID | Net ID |
[Bug 1751923] Re: [SRU]_heal_instance_info_cache periodic task bases on port list from nova db, not from neutron server
** Patch added: "lp1751923_bionic.debdiff" https://bugs.launchpad.net/nova/+bug/1751923/+attachment/5498309/+files/lp1751923_bionic.debdiff

** Description changed:

  [Impact]

  * During the periodic task _heal_instance_info_cache the instance_info_caches are not updated using instance port_ids taken from neutron, but from the nova db.
  * This causes existing VMs to lose their network interfaces after reboot.

  [Test Plan]

  * This bug is reproducible on Bionic/Queens clouds.
  1) Deploy the following Juju bundle: https://paste.ubuntu.com/p/HgsqZfsDGh/
  2) Run the following script: https://paste.ubuntu.com/p/DrFcDXZGSt/
  3) If the script finishes with "Port not found", the bug is still present.

  [Where problems could occur]
+ ** No specific regression potential has been identified. ** Check the other info section ***

  [Other Info]

  How it looks now?
  =
  _heal_instance_info_cache during crontask: https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/compute/manager.py#L6525 is using network_api to get instance_nw_info (instance_info_caches):

      try:
          # Call to network API to get instance info.. this will
          # force an update to the instance's info_cache
          self.network_api.get_instance_nw_info(context, instance)

  self.network_api.get_instance_nw_info() is listed below: https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L1377 and it uses _build_network_info_model() without networks and port_ids parameters (because we're not adding any new interface to the instance): https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L2356

  Next: _gather_port_ids_and_networks() generates the list of instance networks and port_ids:

      networks, port_ids = self._gather_port_ids_and_networks(
          context, instance, networks, port_ids, client)

  https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L2389-L2390
  https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L1393

  As we see, _gather_port_ids_and_networks() takes the port list from the DB: https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/objects/instance.py#L1173-L1176

  And that's it. When we lose a port it's not possible to add it again with this periodic task. The only way is to clean the device_id field in the neutron port object and re-attach the interface using `nova interface-attach`.

  When the interface is missing and there is no port configured on the compute host (for example after a compute reboot) - the interface is not added to the instance and from neutron's point of view the port state is DOWN.
When the interface is missing in the cache and we hard-reboot the instance, it is not added as a tap interface in the XML file, i.e. we don't have the network on the host.

Steps to reproduce
==
1. Spawn devstack
2. Spawn a VM inside devstack with multiple ports (for example also from 2 different networks)
3. Update the DB row, drop one interface from interfaces_list
4. Hard-reboot the instance
5. See that `nova list` shows the instance without one address, but `nova interface-list` shows all addresses (https://launchpad.net/~niedbalski/+archive/ubuntu/lp1751923/+packages)
6. See that one port is missing in the instance XML files
7. In theory _heal_instance_info_cache should fix these things, but it relies on memory, not on a fresh list of instance ports taken from neutron.

Reproduced example
==
1. Spawn a VM with 1 private network port: nova boot --flavor m1.small --image cirros-0.3.5-x86_64-disk --nic net-name=private test-2
2. Attach ports to have 2 private and 2 public interfaces. nova list: | a64ed18d-9868-4bf0-90d3-d710d278922d | test-2 | ACTIVE | - | Running | public=2001:db8::e, 172.24.4.15, 2001:db8::c, 172.24.4.16; private=fdda:5d77:e18e:0:f816:3eff:fee8:, 10.0.0.3, fdda:5d77:e18e:0:f816:3eff:fe53:231c, 10.0.0.5 | So we see 4 ports: stack@mjozefcz-devstack-ptg:~$ nova interface-list a64ed18d-9868-4bf0-90d3-d710d278922d
[Bug 1751923] Re: [SRU]_heal_instance_info_cache periodic task bases on port list from nova db, not from neutron server
Hello, I've prepared a PPA for testing the proposed patch on B/Queens https://launchpad.net/~niedbalski/+archive/ubuntu/lp1751923/+packages Attached is the debdiff for bionic. -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1751923 Title: [SRU]_heal_instance_info_cache periodic task bases on port list from nova db, not from neutron server To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1751923/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
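The core of the change being backported can be illustrated outside of nova. The sketch below is a simplified, hypothetical model (the names FakeNeutronClient, heal_from_db, and heal_from_neutron are mine, not nova's): it shows why rebuilding the info cache from the port ids already cached can never recover a lost port, while rebuilding from the network service's authoritative list can.

```python
# Simplified model of the _heal_instance_info_cache problem described above.
# Hypothetical names; the real nova code lives in nova/network/neutronv2/api.py.

class FakeNeutronClient:
    """Stands in for the neutron API: the authoritative source of ports."""
    def __init__(self, ports_by_device):
        self._ports = ports_by_device

    def list_ports(self, device_id):
        return list(self._ports.get(device_id, []))


def heal_from_db(info_cache, neutron, device_id):
    """Buggy behaviour: only refresh ports already present in the cache.

    A port that was lost from the cache is never asked about, so it
    stays lost no matter how often the periodic task runs.
    """
    return [p for p in neutron.list_ports(device_id) if p in info_cache]


def heal_from_neutron(info_cache, neutron, device_id):
    """Fixed behaviour: rebuild the cache from neutron's full port list."""
    return neutron.list_ports(device_id)


neutron = FakeNeutronClient({"vm-1": ["port-a", "port-b", "port-c"]})
stale_cache = ["port-a", "port-b"]  # port-c was lost from instance_info_caches

print(heal_from_db(stale_cache, neutron, "vm-1"))      # port-c still missing
print(heal_from_neutron(stale_cache, neutron, "vm-1")) # all three ports back
```

This is only a model of the failure mode, not the patched code; the actual fix queries neutron for the instance's ports inside _heal_instance_info_cache.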
[Bug 1751923] Re: [SRU]_heal_instance_info_cache periodic task bases on port list from nova db, not from neutron server
** Changed in: cloud-archive/rocky Status: In Progress => Won't Fix
[Bug 1751923] Re: [SRU]_heal_instance_info_cache periodic task bases on port list from nova db, not from neutron server
** Description changed:

- Description
- ===
+ [Impact]

- During periodic task _heal_instance_info_cache the instance_info_caches
- are not updated using instance port_ids taken from neutron, but from
- nova db.
+ * During periodic task _heal_instance_info_cache the instance_info_caches are not updated using instance port_ids taken from neutron, but from nova db.
+ * This causes existing VMs to lose their network interfaces after reboot.

- Sometimes, perhaps because of some race-condition, its possible to lose
- some ports from instance_info_caches. Periodic task
- _heal_instance_info_cache should clean this up (add missing records),
- but in fact it's not working this way.
+ [Test Plan]
+
+ * This bug is reproducible on Bionic/Queens clouds.
+
+ 1) Deploy the following Juju bundle: https://paste.ubuntu.com/p/HgsqZfsDGh/
+ 2) Run the following script: https://paste.ubuntu.com/p/DrFcDXZGSt/
+ 3) If the script finishes with "Port not found", the bug is still present.
+
+ [Where problems could occur]
+
+ ** Check the other info section ***
+
+ [Other Info]

How it looks now:

_heal_instance_info_cache during the periodic task (https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/compute/manager.py#L6525) is using network_api to get instance_nw_info (instance_info_caches):

-     try:
-         # Call to network API to get instance info.. this will
-         # force an update to the instance's info_cache
-         self.network_api.get_instance_nw_info(context, instance)
+     try:
+         # Call to network API to get instance info; this will
+         # force an update to the instance's info_cache
+         self.network_api.get_instance_nw_info(context, instance)

self.network_api.get_instance_nw_info() is listed below: https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L1377 and it uses _build_network_info_model() without the networks and port_ids parameters (because we're not adding any new interface to the instance): https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L2356

Next, _gather_port_ids_and_networks() generates the list of instance networks and port_ids:

-     networks, port_ids = self._gather_port_ids_and_networks(
-         context, instance, networks, port_ids, client)
+     networks, port_ids = self._gather_port_ids_and_networks(
+         context, instance, networks, port_ids, client)

https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L2389-L2390
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L1393

As we can see, _gather_port_ids_and_networks() takes the port list from the DB: https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/objects/instance.py#L1173-L1176

And that's it. When we lose a port it's not possible to add it again with this periodic task. The only way is to clear the device_id field in the neutron port object and re-attach the interface using `nova interface-attach`.
When the interface is missing and there is no port configured on compute host (for example after compute reboot) - interface is not added to instance and from neutron point of view port state is DOWN. When the interface is missing in cache and we reboot hard the instance - its not added as tapinterface in xml file = we don't have the network on host. Steps to reproduce == 1. Spawn devstack 2. Spawn VM inside devstack with multiple ports (for example also from 2 different networks) 3. Update the DB row, drop one interface from interfaces_list 4. Hard-Reboot the instance 5. See that nova list shows instance without one address, but nova interface-list shows all addresses 6. See that one port is missing in instance xml files 7. In theory the _heal_instance_info_cache should fix this things, it relies on memory, not on the fresh list of instance ports taken from neutron. Reproduced Example == 1. Spawn VM with 1 private network port nova boot --flavor m1.small --image cirros-0.3.5-x86_64-disk --nic net-name=private test-2 2. Attach ports to have 2 private and 2 public interfaces nova list: | a64ed18d-9868-4bf0-90d3-d710d278922d |
[Bug 1751923] Re: _heal_instance_info_cache periodic task bases on port list from nova db, not from neutron server
** Changed in: nova (Ubuntu) Status: Confirmed => Fix Released ** Changed in: cloud-archive/queens Status: New => In Progress ** Changed in: cloud-archive/queens Assignee: (unassigned) => Jorge Niedbalski (niedbalski) ** Changed in: nova (Ubuntu Bionic) Assignee: (unassigned) => Jorge Niedbalski (niedbalski) ** Summary changed: - _heal_instance_info_cache periodic task bases on port list from nova db, not from neutron server + [SRU]_heal_instance_info_cache periodic task bases on port list from nova db, not from neutron server
[Bug 1883534] Re: Screen flickering
*** This bug is a duplicate of bug 1853094 *** https://bugs.launchpad.net/bugs/1853094 I have the same problem. I switched to starting Ubuntu with Wayland, but at different moments Ubuntu shows the same problem again. This happened when I updated Ubuntu; I never saw this problem last year on my laptop. Where is the answer to resolve this problem? I have Intel graphics. -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1883534 Title: Screen flickering To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/xorg/+bug/1883534/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1885430] Re: [Bionic/Stein] Ceilometer-agent fails to collect metrics after restart
Verified victoria / focal - groovy * (focal/victoria) => https://pastebin.ubuntu.com/p/XPVQbwKY7v/ * (groovy-proposed) ==> (https://pastebin.ubuntu.com/p/ZHsvzXR7QH/) ** Tags removed: verification-needed verification-needed-focal verification-needed-groovy verification-victoria-needed ** Tags added: verification-done verification-done-focal verification-done-groovy verification-victoria-done -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1885430 Title: [Bionic/Stein] Ceilometer-agent fails to collect metrics after restart To manage notifications about this bug go to: https://bugs.launchpad.net/charm-ceilometer-agent/+bug/1885430/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1885430] Re: [Bionic/Stein] Ceilometer-agent fails to collect metrics after restart
Verified train, stein, ussuri. * train, results: (https://pastebin.ubuntu.com/p/Y7sD9w3rWz/) * stein, results: (https://pastebin.ubuntu.com/p/ZHsvzXR7QH/) * ussuri, results: (https://pastebin.ubuntu.com/p/jymSscH3TS/) ** Tags removed: verification-train-needed verification-ussuri-needed ** Tags added: verification-train-done verification-ussuri-done
[Bug 1890491] Re: A pacemaker node fails monitor (probe) and stop /start operations on a resource because it returns "rc=189
Hello Lucas, I'll reformat the patch accordingly and re-submit. Thanks. -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1890491 Title: A pacemaker node fails monitor (probe) and stop /start operations on a resource because it returns "rc=189 To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1890491/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1922825] [NEW] systen identify as crashed intallation
Public bug reported: during installation it reports a crash ProblemType: Bug DistroRelease: Ubuntu 20.04 Package: ubiquity 20.04.15.2 ProcVersionSignature: Ubuntu 5.4.0-42.46-generic 5.4.44 Uname: Linux 5.4.0-42-generic x86_64 NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair ApportVersion: 2.20.11-0ubuntu27.4 Architecture: amd64 CasperMD5CheckResult: pass CasperVersion: 1.445.1 CurrentDesktop: ubuntu:GNOME Date: Tue Apr 6 22:24:28 2021 InstallCmdLine: file=/cdrom/preseed/ubuntu.seed initrd=/casper/initrd quiet splash --- maybe-ubiquity LiveMediaBuild: Ubuntu 20.04.1 LTS "Focal Fossa" - Release amd64 (20200731) ProcEnviron: LANGUAGE=en_US.UTF-8 PATH=(custom, no user) XDG_RUNTIME_DIR= LANG=en_US.UTF-8 LC_NUMERIC=C.UTF-8 SourcePackage: ubiquity UpgradeStatus: No upgrade log present (probably fresh install) ** Affects: ubiquity (Ubuntu) Importance: Undecided Status: New ** Tags: amd64 apport-bug focal ubiquity-20.04.15.2 ubuntu -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1922825 Title: systen identify as crashed intallation To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1922825/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1921424] [NEW] Could not calculate the upgrade
Public bug reported: Could not calculate the upgrade. An unresolvable problem occurred while calculating the upgrade. If none of this helps, please report the failure using the command 'ubuntu-bug ubuntu-release-upgrader-core' in a terminal. If you want to investigate this yourself, the log files in '/var/log/dist-upgrade' contain details about the upgrade; look especially at 'main.log' and 'apt.log'. ProblemType: Bug DistroRelease: Ubuntu 18.04 Package: ubuntu-release-upgrader-core 1:18.04.42 ProcVersionSignature: Ubuntu 5.4.0-70.78~18.04.1-generic 5.4.94 Uname: Linux 5.4.0-70-generic x86_64 ApportVersion: 2.20.9-0ubuntu7.23 Architecture: amd64 CrashDB: ubuntu CurrentDesktop: ubuntu:GNOME Date: Thu Mar 25 17:40:23 2021 InstallationDate: Installed on 2020-05-16 (313 days ago) InstallationMedia: Ubuntu 18.04.4 LTS "Bionic Beaver" - Release amd64 (20200203.1) PackageArchitecture: all SourcePackage: ubuntu-release-upgrader UpgradeStatus: Upgraded to bionic on 2021-03-25 (0 days ago) VarLogDistupgradeTermlog: ** Affects: ubuntu-release-upgrader (Ubuntu) Importance: Undecided Status: New ** Tags: amd64 apport-bug bionic dist-upgrade third-party-packages -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1921424 Title: Could not calculate the upgrade To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/ubuntu-release-upgrader/+bug/1921424/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1913768] Re: HP Touchpad Not Detected
I think this is related to https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1883905. I have tried the solutions posted on that bug and at https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1899258. I tried modifying GRUB_CMDLINE_LINUX_DEFAULT to add several options such as i8042.nopnp=1, i8042.notimeout, i8042.reset, pci=nocrs, etc. When using Windows, the touchpad is recognized as ELAN0733. I am using a brand-new HP ProBook 440 G8. -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1913768 Title: HP Touchpad Not Detected To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/linux-signed-hwe-5.4/+bug/1913768/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1885430] Re: [Bionic/Stein] Ceilometer-agent fails to collect metrics after restart
---> Installed version root@juju-bf8c6a-lm-ceilometer-7:/home/ubuntu# dpkg -l |grep -i ceilometer ii ceilometer-agent-compute 1:12.1.1-0ubuntu1~cloud1 all ceilometer compute agent ii ceilometer-common1:12.1.1-0ubuntu1~cloud1 all ceilometer common files ii python3-ceilometer 1:12.1.1-0ubuntu1~cloud1 all ceilometer python libraries Run through 2 cases 1) Service restart 2) Reboot ---> Service restart case root@juju-bf8c6a-lm-ceilometer-7:/home/ubuntu# systemctl status ceilometer-agent-compute ● ceilometer-agent-compute.service - Ceilometer Agent Compute Loaded: loaded (/lib/systemd/system/ceilometer-agent-compute.service; enabled; vendor preset: enabled) Active: active (running) since Thu 2021-03-18 21:20:01 UTC; 2min 35s ago Main PID: 27650 (ceilometer-poll) Tasks: 6 (limit: 4702) CGroup: /system.slice/ceilometer-agent-compute.service ├─27650 ceilometer-polling: master process [/usr/bin/ceilometer-polling --config-file=/etc/ceilometer/ceilometer.conf --polling-namespaces compute --log-file=/var/log/cei └─27735 ceilometer-polling: AgentManager worker(0) Mar 18 21:20:01 juju-bf8c6a-lm-ceilometer-7 systemd[1]: Stopped Ceilometer Agent Compute. Mar 18 21:20:01 juju-bf8c6a-lm-ceilometer-7 systemd[1]: Started Ceilometer Agent Compute. Mar 18 21:20:03 juju-bf8c6a-lm-ceilometer-7 ceilometer-agent-compute[27650]: Deprecated: Option "logdir" from group "DEFAULT" is deprecated. 
Use option "log-dir" from group "DEFAULT root@juju-bf8c6a-lm-ceilometer-7:/home/ubuntu# systemctl status nova-compute ● nova-compute.service - OpenStack Compute Loaded: loaded (/lib/systemd/system/nova-compute.service; disabled; vendor preset: enabled) Active: active (running) since Thu 2021-03-18 18:46:56 UTC; 2h 35min ago Main PID: 2199 (nova-compute) Tasks: 22 (limit: 4702) CGroup: /system.slice/nova-compute.service └─2199 /usr/bin/python3 /usr/bin/nova-compute --config-file=/etc/nova/nova.conf --config-file=/etc/nova/nova-compute.conf --log-file=/var/log/nova/nova-compute.log Mar 18 18:46:56 juju-bf8c6a-lm-ceilometer-7 systemd[1]: Started OpenStack Compute. -- root@juju-bf8c6a-lm-ceilometer-7:/home/ubuntu# systemctl stop nova-compute root@juju-bf8c6a-lm-ceilometer-7:/home/ubuntu# systemctl disable nova-compute.service Synchronizing state of nova-compute.service with SysV service script with /lib/systemd/systemd-sysv-install. Executing: /lib/systemd/systemd-sysv-install disable nova-compute root@juju-bf8c6a-lm-ceilometer-7:/home/ubuntu# systemctl status nova-compute ● nova-compute.service - OpenStack Compute Loaded: loaded (/lib/systemd/system/nova-compute.service; disabled; vendor preset: enabled) Active: inactive (dead) since Thu 2021-03-18 21:23:30 UTC; 7s ago Main PID: 2199 (code=exited, status=0/SUCCESS) Mar 18 18:46:56 juju-bf8c6a-lm-ceilometer-7 systemd[1]: Started OpenStack Compute. Mar 18 21:23:24 juju-bf8c6a-lm-ceilometer-7 systemd[1]: Stopping OpenStack Compute... Mar 18 21:23:30 juju-bf8c6a-lm-ceilometer-7 systemd[1]: Stopped OpenStack Compute. 
root@juju-bf8c6a-lm-ceilometer-7:/home/ubuntu# root@juju-bf8c6a-lm-ceilometer-7:/home/ubuntu# systemctl status ceilometer-agent-compute ● ceilometer-agent-compute.service - Ceilometer Agent Compute Loaded: loaded (/lib/systemd/system/ceilometer-agent-compute.service; enabled; vendor preset: enabled) Active: inactive (dead) since Thu 2021-03-18 21:23:24 UTC; 29s ago Main PID: 761 (code=exited, status=0/SUCCESS) Mar 18 21:23:13 juju-bf8c6a-lm-ceilometer-7 systemd[1]: Started Ceilometer Agent Compute. Mar 18 21:23:14 juju-bf8c6a-lm-ceilometer-7 ceilometer-agent-compute[761]: Deprecated: Option "logdir" from group "DEFAULT" is deprecated. Use option "log-dir" from group "DEFAULT". Mar 18 21:23:24 juju-bf8c6a-lm-ceilometer-7 systemd[1]: Stopping Ceilometer Agent Compute... Mar 18 21:23:24 juju-bf8c6a-lm-ceilometer-7 systemd[1]: Stopped Ceilometer Agent Compute. root@juju-bf8c6a-lm-ceilometer-7:/home/ubuntu# /etc/init.d/ceilometer-agent-compute restart [ ok ] Restarting ceilometer-agent-compute (via systemctl): ceilometer-agent-compute.service. root@juju-bf8c6a-lm-ceilometer-7:/home/ubuntu# systemctl status ceilometer-agent-compute ● ceilometer-agent-compute.service - Ceilometer Agent Compute Loaded: loaded (/lib/systemd/system/ceilometer-agent-compute.service; enabled; vendor preset: enabled) Active: active (running) since Thu 2021-03-18 21:24:13 UTC; 2s ago Main PID: 1549 (ceilometer-poll) Tasks: 6 (limit: 4702) CGroup: /system.slice/ceilometer-agent-compute.service ├─1549 ceilometer-polling: master process [/usr/bin/ceilometer-polling --config-file=/etc/ceilometer/ceilometer.conf --polling-namespaces compute --log-file=/var/log/ceil └─1604 ceilometer-polling: AgentManager worker(0) Mar 18
[Bug 1885430] Re: [Bionic/Stein] Ceilometer-agent fails to collect metrics after restart
With the proposed patch on stein: --- nova-compute disabled / no requires --- machine rebooted root@juju-bf8c6a-lm-ceilometer-7:/home/ubuntu# systemctl status nova-compute ● nova-compute.service - OpenStack Compute Loaded: loaded (/lib/systemd/system/nova-compute.service; disabled; vendor preset: enabled) Active: inactive (dead) root@juju-bf8c6a-lm-ceilometer-7:/home/ubuntu# systemctl status ceilometer-agent-compute ● ceilometer-agent-compute.service - Ceilometer Agent Compute Loaded: loaded (/lib/systemd/system/ceilometer-agent-compute.service; enabled; vendor preset: enabled) Active: active (running) since Thu 2021-03-11 21:54:08 UTC; 57s ago Main PID: 851 (ceilometer-poll) Tasks: 6 (limit: 4702) CGroup: /system.slice/ceilometer-agent-compute.service ├─ 851 ceilometer-polling: master process [/usr/bin/ceilometer-polling --config-file=/etc/ceilometer/ceilometer.conf --polling-namespaces compute --log-file=/var/log/ceil └─3114 ceilometer-polling: AgentManager worker(0) Mar 11 21:54:08 juju-bf8c6a-lm-ceilometer-7 systemd[1]: Started Ceilometer Agent Compute. Mar 11 21:54:25 juju-bf8c6a-lm-ceilometer-7 ceilometer-agent-compute[851]: Deprecated: Option "logdir" from group "DEFAULT" is deprecated. Use option "log-dir" from group "DEFAULT". 
--- nova-compute disabled / no required --- machine rebooted ubuntu@juju-bf8c6a-lm-ceilometer-7:~$ sudo su root@juju-bf8c6a-lm-ceilometer-7:/home/ubuntu# uptime 21:56:25 up 0 min, 1 user, load average: 1.67, 0.41, 0.14 root@juju-bf8c6a-lm-ceilometer-7:/home/ubuntu# systemctl status nova-compute ● nova-compute.service - OpenStack Compute Loaded: loaded (/lib/systemd/system/nova-compute.service; disabled; vendor preset: enabled) Active: active (running) since Thu 2021-03-11 21:56:07 UTC; 20s ago Main PID: 2743 (nova-compute) Tasks: 22 (limit: 4702) CGroup: /system.slice/nova-compute.service └─2743 /usr/bin/python3 /usr/bin/nova-compute --config-file=/etc/nova/nova.conf --config-file=/etc/nova/nova-compute.conf --log-file=/var/log/nova/nova-compute.log Mar 11 21:56:07 juju-bf8c6a-lm-ceilometer-7 systemd[1]: Started OpenStack Compute. root@juju-bf8c6a-lm-ceilometer-7:/home/ubuntu# systemctl status ceilometer-agent-compute ● ceilometer-agent-compute.service - Ceilometer Agent Compute Loaded: loaded (/lib/systemd/system/ceilometer-agent-compute.service; enabled; vendor preset: enabled) Active: active (running) since Thu 2021-03-11 21:56:00 UTC; 32s ago Main PID: 861 (ceilometer-poll) Tasks: 6 (limit: 4702) CGroup: /system.slice/ceilometer-agent-compute.service ├─ 861 ceilometer-polling: master process [/usr/bin/ceilometer-polling --config-file=/etc/ceilometer/ceilometer.conf --polling-namespaces compute --log-file=/var/log/ceil └─1583 ceilometer-polling: AgentManager worker(0) Mar 11 21:56:00 juju-bf8c6a-lm-ceilometer-7 systemd[1]: Started Ceilometer Agent Compute. Mar 11 21:56:05 juju-bf8c6a-lm-ceilometer-7 ceilometer-agent-compute[861]: Deprecated: Option "logdir" from group "DEFAULT" is deprecated. Use option "log-dir" from group "DEFAULT". -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. 
https://bugs.launchpad.net/bugs/1885430 Title: [Bionic/Stein] Ceilometer-agent fails to collect metrics after restart
[Bug 1885430] Re: [Bionic/Stein] Ceilometer-agent fails to collect metrics after restart
Note: the only way that works reliably in both cases (service restart, bootup) is adding Requires=nova-compute.service to the systemd service file /lib/systemd/system/ceilometer-agent-compute.service. I've tried modifying the sysvinit file and rebooted with only the required-start entry in the sysvinit file, and it doesn't work either: https://pastebin.canonical.com/p/STTRFyw9Wy/ I agree with Drew: the option is Wants= or Requires= (more strict). I tested the second and it works fine after a service restart and after a machine bootup (with nova-compute both enabled and disabled).
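For reference, a sketch of what the dependency described above could look like as a systemd drop-in. The path and file name here are illustrative (the actual packaging fix may patch the shipped unit file directly):

```ini
# /etc/systemd/system/ceilometer-agent-compute.service.d/override.conf
# Illustrative sketch, not the packaged fix.
# Requires= stops the agent when nova-compute stops (strict coupling);
# After= orders startup so nova-compute is up before polling begins.
[Unit]
Requires=nova-compute.service
After=nova-compute.service
```

After adding a drop-in like this, run `systemctl daemon-reload`. Wants= is the softer alternative if the agent should keep running when nova-compute is stopped deliberately.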
[Bug 1917350] [NEW] cron not honoring pam_group.so groups
Public bug reported: When a job is invoked from cron and pam_group.so is configured to add supplementary groups, it DOES NOT work as expected. pam_group should provide membership based on /etc/security/group.conf, and it works fine if you test with login or sudo. After some tests I compiled pam_group.so with DEBUG enabled and I can confirm that pam_setcred is being called by cron and the module is adding the expected group memberships. Then, checking do_command.c of cron, I found there is a need to call pam_setcred(pamh, PAM_REINITIALIZE_CRED | PAM_SILENT) after fork(); the final patch should be something like:

    #if defined(USE_PAM)
        if (pamh != NULL) {
            pam_setcred(pamh, PAM_REINITIALIZE_CRED | PAM_SILENT);
        }
    #endif

ProblemType: Bug DistroRelease: Ubuntu 20.04 Package: cron 3.0pl1-136ubuntu1 ProcVersionSignature: Ubuntu 5.4.0-65.73-generic 5.4.78 Uname: Linux 5.4.0-65-generic x86_64 ApportVersion: 2.20.11-0ubuntu27.16 Architecture: amd64 CasperMD5CheckResult: pass Date: Mon Mar 1 15:49:42 2021 InstallationDate: Installed on 2021-01-21 (39 days ago) InstallationMedia: Ubuntu-Server 20.04.1 LTS "Focal Fossa" - Release amd64 (20200731) ProcEnviron: TERM=xterm PATH=(custom, no user) XDG_RUNTIME_DIR= LANG=en_US.UTF-8 SHELL=/bin/bash SourcePackage: cron UpgradeStatus: No upgrade log present (probably fresh install) ** Affects: cron (Ubuntu) Importance: Undecided Status: New ** Tags: amd64 apport-bug focal uec-images -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1917350 Title: cron not honoring pam_group.so groups To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/cron/+bug/1917350/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1891891] Re: [SRU] MATE Desktop 1.24.1 maint release
I was hoping these fixes would come with Ubuntu MATE 20.04.2, but unfortunately it seems that didn't happen. Are there any plans to release these fixes for Ubuntu MATE 20.04? Or is there at least a PPA or any other alternative to fix the missing icons? -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1891891 Title: [SRU] MATE Desktop 1.24.1 maint release To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/caja-extensions/+bug/1891891/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1906720] [NEW] Fix the disable_ssl_certificate_validation option
Public bug reported: [Environment] Bionic python3-httplib2 | 0.9.2+dfsg-1ubuntu0.2 [Description] maas cli fails to work with apis over https with self-signed certificates due to the lack of disable_ssl_certificate_validation option with python 3.5. [Distribution/Release, Package versions, Platform] cat /etc/lsb-release; dpkg -l | grep maas DISTRIB_ID=Ubuntu DISTRIB_RELEASE=18.04 DISTRIB_CODENAME=bionic DISTRIB_DESCRIPTION="Ubuntu 18.04.5 LTS" ii maas 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all "Metal as a Service" is a physical cloud and IPAM ii maas-cli 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS client and command-line interface ii maas-common 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS server common files ii maas-dhcp 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS DHCP server ii maas-proxy 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS Caching Proxy ii maas-rack-controller 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all Rack Controller for MAAS ii maas-region-api 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all Region controller API service for MAAS ii maas-region-controller 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all Region Controller for MAAS ii python3-django-maas 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS server Django web framework (Python 3) ii python3-maas-client 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS python API client (Python 3) ii python3-maas-provisioningserver 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS server provisioning libraries (Python 3) [Steps to Reproduce] - prepare a maas server(installed by packages for me and the customer). 
it doesn't have to be HA to reproduce - prepare a set of certificate, key and ca-bundle - place a new conf[2] in /etc/nginx/sites-enabled and `sudo systemctl restart nginx` - add the ca certificates to the host sudo mkdir /usr/share/ca-certificates/extra sudo cp -v ca-bundle.crt /usr/share/ca-certificates/extra/ dpkg-reconfigure ca-certificates - login with a new profile over https url - when not added the ca-bundle to the trusted ca cert store, it fails to login and '--insecure' flag also doesn't work[3] [Known Workarounds] None ** Affects: python-httplib2 (Ubuntu) Importance: Undecided Status: Fix Released ** Affects: python-httplib2 (Ubuntu Bionic) Importance: Undecided Status: Confirmed ** Affects: python-httplib2 (Ubuntu Focal) Importance: Undecided Status: Fix Released ** Affects: python-httplib2 (Ubuntu Groovy) Importance: Undecided Status: Fix Released ** Affects: python-httplib2 (Ubuntu Hirsute) Importance: Undecided Status: Fix Released ** Also affects: python-httplib2 (Ubuntu Focal) Importance: Undecided Status: New ** Also affects: python-httplib2 (Ubuntu Bionic) Importance: Undecided Status: New ** Also affects: python-httplib2 (Ubuntu Hirsute) Importance: Undecided Status: New ** Also affects: python-httplib2 (Ubuntu Groovy) Importance: Undecided Status: New ** Changed in: python-httplib2 (Ubuntu Hirsute) Status: New => Fix Released ** Changed in: python-httplib2 (Ubuntu Groovy) Status: New => Fix Released ** Changed in: python-httplib2 (Ubuntu Focal) Status: New => Fix Released ** Changed in: python-httplib2 (Ubuntu Bionic) Status: New => Confirmed -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. 
https://bugs.launchpad.net/bugs/1906720 Title: Fix the disable_ssl_certificate_validation option To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/python-httplib2/+bug/1906720/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
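For readers following along: on Python 3, disabling certificate validation ultimately means handing the connection an unverified ssl context. The sketch below illustrates that mechanism with the standard library only; it is not the httplib2 patch itself, just what the missing disable_ssl_certificate_validation option has to achieve underneath.

```python
import ssl

# What disable_ssl_certificate_validation=True has to mean on Python 3:
# an SSLContext that skips hostname matching and chain verification, so a
# self-signed MAAS endpoint is accepted. httplib2 0.9.2 on Bionic never
# builds such a context, which is the bug reported here.
ctx = ssl.create_default_context()
ctx.check_hostname = False        # must be cleared before setting CERT_NONE
ctx.verify_mode = ssl.CERT_NONE   # accept self-signed certificates

print(ctx.verify_mode == ssl.CERT_NONE)  # → True
```

A context built this way can then be passed to any stdlib HTTP client (e.g. http.client.HTTPSConnection(host, context=ctx)) to talk to an endpoint with a self-signed certificate.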
[Bug 1906720] Re: Fix the disable_ssl_certificate_validation option
Backport fix https://github.com/httplib2/httplib2/pull/15 into bionic
[Bug 1906719] [NEW] Fix the disable_ssl_certificate_validation option
Public bug reported:

[Environment]
Bionic
python3-httplib2 | 0.9.2+dfsg-1ubuntu0.2

[Description]
The maas cli fails to work with APIs over https using self-signed certificates, due to the lack of the disable_ssl_certificate_validation option with Python 3.5.

[Distribution/Release, Package versions, Platform]
cat /etc/lsb-release; dpkg -l | grep maas
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.5 LTS"
ii maas 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all "Metal as a Service" is a physical cloud and IPAM
ii maas-cli 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS client and command-line interface
ii maas-common 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS server common files
ii maas-dhcp 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS DHCP server
ii maas-proxy 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS Caching Proxy
ii maas-rack-controller 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all Rack Controller for MAAS
ii maas-region-api 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all Region controller API service for MAAS
ii maas-region-controller 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all Region Controller for MAAS
ii python3-django-maas 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS server Django web framework (Python 3)
ii python3-maas-client 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS python API client (Python 3)
ii python3-maas-provisioningserver 2.8.2-8577-g.a3e674063-0ubuntu1~18.04.1 all MAAS server provisioning libraries (Python 3)

[Steps to Reproduce]
- Prepare a maas server (installed from packages for me and the customer); it does not have to be HA to reproduce.
- Prepare a set of certificate, key and ca-bundle.
- Place a new conf[2] in /etc/nginx/sites-enabled and `sudo systemctl restart nginx`.
- Add the CA certificates to the host:
  sudo mkdir /usr/share/ca-certificates/extra
  sudo cp -v ca-bundle.crt /usr/share/ca-certificates/extra/
  dpkg-reconfigure ca-certificates
- Log in with a new profile over the https URL.
- When the ca-bundle has not been added to the trusted CA certificate store, login fails, and the '--insecure' flag also does not work[3].

[Known Workarounds]
None

** Affects: python-httplib2 (Ubuntu) Importance: Undecided Status: New
[Bug 987060] Re: massive memory leak in unity-panel-service and hud-service when invoking the hud on Firefox profiles with large amounts of bookmarks LTS 12.04 14.04
** Changed in: hud (Ubuntu Xenial) Status: Confirmed => In Progress
[Bug 1905959] [NEW] can not install grub on uefi with windows 10
Public bug reported:

Crash when trying to install alongside Windows 10, in UEFI. Chose the mode "install alongside Windows" in the boot manager.

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: ubiquity 20.04.15.2
ProcVersionSignature: Ubuntu 5.4.0-42.46-lowlatency 5.4.44
Uname: Linux 5.4.0-42-lowlatency x86_64
NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
ApportVersion: 2.20.11-0ubuntu27.4
Architecture: amd64
CasperMD5CheckResult: pass
CasperVersion: 1.445.1
Date: Fri Nov 27 09:40:08 2020
InstallCmdLine: BOOT_IMAGE=/casper/vmlinuz persistent file=/cdrom/preseed/ubuntustudio.seed quiet splash ---
LiveMediaBuild: Ubuntu-Studio 20.04.1 LTS "Focal Fossa" - Release amd64 (20200731)
SourcePackage: grub-installer
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: grub-installer (Ubuntu) Importance: Undecided Status: New
** Tags: amd64 apport-bug focal ubiquity-20.04.15.2 ubuntustudio
[Bug 1901582] Re: 'Switch user' not available in application launcher after upgrade to 20.10
** Also affects: plasma-desktop via https://bugs.kde.org/show_bug.cgi?id=423526 Importance: Unknown Status: Unknown
[Bug 1904066] [NEW] grub install error when installing alongside windows
Public bug reported:

grub fails to install alongside Windows 10 (secureboot = false). GPT table in use.

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: ubiquity 20.04.15.2
ProcVersionSignature: Ubuntu 5.4.0-42.46-lowlatency 5.4.44
Uname: Linux 5.4.0-42-lowlatency x86_64
NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
ApportVersion: 2.20.11-0ubuntu27.4
Architecture: amd64
CasperMD5CheckResult: pass
CasperVersion: 1.445.1
Date: Thu Nov 12 18:51:29 2020
InstallCmdLine: BOOT_IMAGE=/multiboot/ubuntustudio-20.04.1-dvd-amd64/casper/vmlinuz iso-scan/filename=/multiboot/ubuntustudio-20.04.1-dvd-amd64/ubuntustudio-20.04.1-dvd-amd64.iso boot=casper noprompt floppy.allowed_drive_mask=0 ignore_uuid
LiveMediaBuild: Ubuntu-Studio 20.04.1 LTS "Focal Fossa" - Release amd64 (20200731)
SourcePackage: grub-installer
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: grub-installer (Ubuntu) Importance: Undecided Status: New
** Tags: amd64 apport-bug focal ubiquity-20.04.15.2
[Bug 1901582] Re: 'Switch user' not available in application launcher after upgrade to 20.10
I agree with @radar, this is disruptive. It is not only the "switch user" option in the launcher, but also in the lock screen: if user Anna locks the screen, user Bob cannot unlock it, as he will never have a "switch user" button there. In my case, my partner and I use a shared computer. Before this, when one of us finished using it, we would simply close the lid and the other one could use it afterwards. Now this is no longer a possibility.
[Bug 1872106] Re: isc-dhcp-server crashing constantly [Ubuntu 20.04]
Hello Karsten, Can you check comments https://bugs.launchpad.net/dhcp/+bug/1872118/comments/62 and https://bugs.launchpad.net/dhcp/+bug/1872118/comments/63 and validate the versions? Also, would it be possible to upload the crash report here, together with the output of dpkg -l? Thanks, Jorge
[Bug 1899064] [NEW] Decrease time_threshold for zone_purge task.
Public bug reported:

[Environment]
Ussuri Charms 20.08

[Description]
After deleting a zone on a designate-bind backend, the zone remains active until the zone purge producer task gets executed.

$ openstack zone show 98c02cfb-5a2f- Could not find Zone
$ openstack zone delete 98c02cfb-5a2f- Could not find Zone

mysql> select * from zones where id="98c02cfb-5a2f-"; Empty set (0.01 sec)

363:2020-09-25 05:23:41.154 1685647 DEBUG designate.central.service [req-8223a934-84df-44eb-97bd-a0194343955a - - - - -] Performing purge with limit of 100 and criterion of {u'deleted': u'!0', u'deleted_at': u'<=2020-09-18 05:23:41.143790', u'shard': u'BETWEEN 1365,2729'} purge_zones /usr/lib/python2.7/dist-packages/designate/central/service.py:1131

No zone was found by these criteria; therefore the hard delete of the zone from the database (https://github.com/openstack/designate/blob/2e3d8ab80daac00bad7d2b46246660592163bf17/designate/storage/impl_sqlalchemy/__init__.py#L454) did not apply. This delta is governed by time_threshold (https://github.com/openstack/designate/blob/89435416a1dcb6df2a347f43680cfe57d1eb0a82/designate/conf/producer.py#L100), which is set to 1 week.

### Proposed actions
1) Shorten time_threshold to a 1-hour span by default.
2) Don't list deleted_at zones in the designate list operation.

** Affects: charm-designate Importance: Undecided Status: New
** Affects: designate (Ubuntu) Importance: Undecided Status: New
** Also affects: designate (Ubuntu) Importance: Undecided Status: New
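For operators who want this behaviour without waiting on a default change, the purge window can be tuned in designate.conf. A sketch of the proposed one-hour setting, assuming the [producer_task:zone_purge] option group and option names from the producer.py link above (the values shown are the proposal, not the shipped defaults):

```ini
[producer_task:zone_purge]
# Purge zones deleted more than an hour ago
# (proposed value; the upstream default is 604800 seconds, i.e. 1 week)
time_threshold = 3600
# Matches the "limit of 100" seen in the purge_zones debug log above
batch_size = 100
```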
[Bug 1898450] [NEW] Upgrade to 20.04 did not complete successfully
Public bug reported:

Upgrading from 18.04 to 20.04. Just at the end of the upgrade procedure I got a window saying:

Could not install 'initramfs-tools'
The upgrade will continue but the 'initramfs-tools' package may not be in a working state. Please consider submitting a bug report about it.
installed initramfs-tools package post-installation script subprocess returned error exit status 1

After closing that window, I found another window behind it stating:

Upgrade complete
The upgrade has completed but there were errors during the upgrade process

Closed that window, but the system did not reboot automatically. Is it safe to reboot? Any other action from my side before rebooting?

Thanks for your help, Jorge

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: ubuntu-release-upgrader-core 1:20.04.25
ProcVersionSignature: Ubuntu 4.15.0-96.97-generic 4.15.18
Uname: Linux 4.15.0-96-generic x86_64
NonfreeKernelModules: lkp_Ubuntu_4_15_0_96_97_generic_71
ApportVersion: 2.20.11-0ubuntu27.9
Architecture: amd64
CasperMD5CheckResult: skip
CrashDB: ubuntu
CurrentDesktop: ubuntu:GNOME
Date: Sat Oct 3 20:40:18 2020
EcryptfsInUse: Yes
InstallationDate: Installed on 2015-07-03 (1919 days ago)
InstallationMedia: Ubuntu 14.04.2 LTS "Trusty Tahr" - Release amd64 (20150218.1)
PackageArchitecture: all
SourcePackage: ubuntu-release-upgrader
Symptom: release-upgrade
UpgradeStatus: Upgraded to focal on 2020-10-03 (0 days ago)
VarLogDistupgradeTermlog:

** Affects: ubuntu-release-upgrader (Ubuntu) Importance: Undecided Status: New
** Tags: amd64 apport-bug dist-upgrade focal
[Bug 1898368] [NEW] I had a previous problem with some unofficial xorg ppa, I removed it using ppa-purge and tried to upgrade again but I only got the same result
Public bug reported:

I had a previous problem with some unofficial xorg ppa. I removed it using ppa-purge and tried to upgrade again, but I got the same message:

"Could not calculate the upgrade
An unresolvable problem occurred while calculating the upgrade. This was likely caused by:
* Unofficial software packages not provided by Ubuntu
Please use the tool 'ppa-purge' from the ppa-purge (...)"

Description: Ubuntu 18.04.5 LTS
Release: 18.04

Software Center: I want to upgrade from 18.04 to the 20.3 version, but I got this message: "An unresolvable problem occurred while calculating the upgrade."

ProblemType: Bug
DistroRelease: Ubuntu 18.04
Package: ubuntu-release-upgrader-core 1:18.04.38
ProcVersionSignature: Ubuntu 5.4.0-48.52~18.04.1-generic 5.4.60
Uname: Linux 5.4.0-48-generic x86_64
ApportVersion: 2.20.9-0ubuntu7.17
Architecture: amd64
CrashDB: ubuntu
CurrentDesktop: ubuntu:GNOME
Date: Sat Oct 3 09:20:51 2020
InstallationDate: Installed on 2019-08-13 (416 days ago)
InstallationMedia: Ubuntu 18.04.2 LTS "Bionic Beaver" - Release amd64 (20190210)
PackageArchitecture: all
SourcePackage: ubuntu-release-upgrader
UpgradeStatus: Upgraded to bionic on 2020-10-03 (0 days ago)
VarLogDistupgradeTermlog:

** Affects: ubuntu-release-upgrader (Ubuntu) Importance: Undecided Status: New
** Tags: amd64 apport-bug bionic dist-upgrade
** Attachment added: "logs" https://bugs.launchpad.net/bugs/1898368/+attachment/5417111/+files/log.zip
[Bug 1895494] [NEW] could not install lubuntu 18.04
Public bug reported:

Unable to install lubuntu 18.04.

ProblemType: Bug
DistroRelease: Ubuntu 18.04
Package: ubiquity 18.04.14
ProcVersionSignature: Ubuntu 4.15.0-20.21-generic 4.15.17
Uname: Linux 4.15.0-20-generic x86_64
ApportVersion: 2.20.9-0ubuntu7
Architecture: amd64
Date: Sun Sep 13 23:01:08 2020
InstallCmdLine: BOOT_IMAGE=/multibootusb/lubuntu-18.04-desktop-amd64/casper/vmlinuz file=/cdrom/preseed/lubuntu.seed boot=casper initrd=/multibootusb/lubuntu-18.04-desktop-amd64/casper/initrd.lz quiet splash ignore_bootid live-media-path=/multibootusb/lubuntu-18.04-desktop-amd64/casper cdrom-detect/try-usb=true floppy.allowed_drive_mask=0 ignore_uuid root=UUID=2AD0-9D9A ---
SourcePackage: ubiquity
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: ubiquity (Ubuntu) Importance: Undecided Status: Invalid
** Tags: amd64 apport-bug bionic lubuntu ubiquity-18.04.14
[Bug 1890491] Re: A pacemaker node fails monitor (probe) and stop /start operations on a resource because it returns "rc=189
** Patch added: "lp1890491-bionic.debdiff" https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1890491/+attachment/5408794/+files/lp1890491-bionic.debdiff -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1890491 Title: A pacemaker node fails monitor (probe) and stop /start operations on a resource because it returns "rc=189 To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1890491/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1894336] [NEW] package nvidia-340 (not installed) failed to install/upgrade: trying to overwrite `/lib/udev/rules.d/71-nvidia.rules', which is also in package nvidia-kernel-common-390
Public bug reported:

Installing the nvidia-340 driver; the latest update has graphical resolution problems.

ProblemType: Package
DistroRelease: Ubuntu 20.04
Package: nvidia-340 (not installed)
ProcVersionSignature: Ubuntu 5.4.0-45.49-generic 5.4.55
Uname: Linux 5.4.0-45-generic x86_64
NonfreeKernelModules: nvidia_uvm nvidia_drm nvidia_modeset nvidia
ApportVersion: 2.20.11-0ubuntu27.8
Architecture: amd64
CasperMD5CheckResult: skip
Date: Fri Sep 4 20:56:57 2020
ErrorMessage: trying to overwrite `/lib/udev/rules.d/71-nvidia.rules', which is also in package nvidia-kernel-common-390 390.138-0ubuntu0.20.04.1
InstallationDate: Installed on 2020-05-12 (115 days ago)
InstallationMedia: Ubuntu 19.10 "Eoan Ermine" - Release amd64 (20191017)
Python3Details: /usr/bin/python3.8, Python 3.8.2, python3-minimal, 3.8.2-0ubuntu2
PythonDetails: N/A
RelatedPackageVersions: dpkg 1.19.7ubuntu3 apt 2.0.2ubuntu0.1
SourcePackage: nvidia-graphics-drivers-340
Title: package nvidia-340 (not installed) failed to install/upgrade: trying to overwrite `/lib/udev/rules.d/71-nvidia.rules', which is also in package nvidia-kernel-common-390 390.138-0ubuntu0.20.04.1
UpgradeStatus: Upgraded to focal on 2020-05-13 (114 days ago)

** Affects: nvidia-graphics-drivers-340 (Ubuntu) Importance: Undecided Status: New
** Tags: amd64 apport-package focal
[Bug 1894012] [NEW] touchpad and keyboard
Public bug reported: I reinstalled Ubuntu on 20.04 version in my laptop on an SSD and the keyboard and touch-pad stopped working ProblemType: Bug DistroRelease: Ubuntu 20.04 Package: linux-image-5.4.0-45-generic 5.4.0-45.49 ProcVersionSignature: Ubuntu 5.4.0-45.49-generic 5.4.55 Uname: Linux 5.4.0-45-generic x86_64 ApportVersion: 2.20.11-0ubuntu27.8 Architecture: amd64 AudioDevicesInUse: USERPID ACCESS COMMAND /dev/snd/controlC0: jorge 1551 F pulseaudio CasperMD5CheckResult: skip CurrentDesktop: ubuntu:GNOME Date: Thu Sep 3 00:47:37 2020 InstallationDate: Installed on 2020-09-02 (0 days ago) InstallationMedia: Ubuntu 20.04 LTS "Focal Fossa" - Release amd64 (20200423) MachineType: CHUWI INNOVATION LIMITED AeroBook ProcEnviron: TERM=xterm-256color PATH=(custom, no user) XDG_RUNTIME_DIR= LANG=es_ES.UTF-8 SHELL=/bin/bash ProcFB: 0 i915drmfb ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.4.0-45-generic root=UUID=4259a678-3627-47b0-81a7-b8449ea7222c ro quiet splash vt.handoff=7 RelatedPackageVersions: linux-restricted-modules-5.4.0-45-generic N/A linux-backports-modules-5.4.0-45-generic N/A linux-firmware1.187.3 SourcePackage: linux UpgradeStatus: No upgrade log present (probably fresh install) dmi.bios.date: 09/20/2019 dmi.bios.vendor: American Megatrends Inc. 
dmi.bios.version: YZ-BI-133-X133KR110-KJ66B-106-N
dmi.board.asset.tag: Default string
dmi.board.name: AeroBook
dmi.board.vendor: CHUWI INNOVATION LIMITED
dmi.board.version: Default string
dmi.chassis.asset.tag: Default string
dmi.chassis.type: 10
dmi.chassis.vendor: CHUWI INNOVATION LIMITED
dmi.chassis.version: Default string
dmi.modalias: dmi:bvnAmericanMegatrendsInc.:bvrYZ-BI-133-X133KR110-KJ66B-106-N:bd09/20/2019:svnCHUWIINNOVATIONLIMITED:pnAeroBook:pvrDefaultstring:rvnCHUWIINNOVATIONLIMITED:rnAeroBook:rvrDefaultstring:cvnCHUWIINNOVATIONLIMITED:ct10:cvrDefaultstring:
dmi.product.family: YZ106
dmi.product.name: AeroBook
dmi.product.sku: YZ106
dmi.product.version: Default string
dmi.sys.vendor: CHUWI INNOVATION LIMITED

** Affects: linux (Ubuntu) Importance: Undecided Status: New
** Tags: amd64 apport-bug focal
[Bug 1890491] Re: A pacemaker node fails monitor (probe) and stop /start operations on a resource because it returns "rc=189
** Changed in: pacemaker (Ubuntu Bionic) Status: New => In Progress
** Changed in: pacemaker (Ubuntu Bionic) Assignee: (unassigned) => Jorge Niedbalski (niedbalski)
[Bug 1872118] Re: [SRU] DHCP Cluster crashes after a few hours
** No longer affects: isc-dhcp (Ubuntu)
** No longer affects: isc-dhcp (Ubuntu Focal)
** No longer affects: isc-dhcp (Ubuntu Groovy)
** Changed in: bind9-libs (Ubuntu Groovy) Importance: Undecided => High
[Bug 1890491] Re: A pacemaker node fails monitor (probe) and stop /start operations on a resource because it returns "rc=189
Hello, I am testing a couple of patches (both imported from master) through this PPA: https://launchpad.net/~niedbalski/+archive/ubuntu/fix-1890491

c20f8920 - don't order implied stops relative to a remote connection
938e99f2 - remote state is failed if node is shutting down with connection failure

I'll report back here if these patches fix the behavior described in my previous comment.
[Bug 1890491] Re: A pacemaker node fails monitor (probe) and stop /start operations on a resource because it returns "rc=189
I am able to reproduce a similar issue with the following bundle: https://paste.ubuntu.com/p/VJ3m7nMN79/

Resource created with:
sudo pcs resource create test2 ocf:pacemaker:Dummy op_sleep=10 op monitor interval=30s timeout=30s op start timeout=30s op stop timeout=30s
juju ssh nova-cloud-controller/2 "sudo pcs constraint location test2 prefers juju-acda3d-pacemaker-remote-10.cloud.sts"
juju ssh nova-cloud-controller/2 "sudo pcs constraint location test2 prefers juju-acda3d-pacemaker-remote-11.cloud.sts"
juju ssh nova-cloud-controller/2 "sudo pcs constraint location test2 prefers juju-acda3d-pacemaker-remote-12.cloud.sts"

Online: [ juju-acda3d-pacemaker-remote-7 juju-acda3d-pacemaker-remote-8 juju-acda3d-pacemaker-remote-9 ]
RemoteOnline: [ juju-acda3d-pacemaker-remote-10.cloud.sts juju-acda3d-pacemaker-remote-11.cloud.sts juju-acda3d-pacemaker-remote-12.cloud.sts ]

Full list of resources:
Resource Group: grp_nova_vips
  res_nova_bf9661e_vip (ocf::heartbeat:IPaddr2): Started juju-acda3d-pacemaker-remote-7
Clone Set: cl_nova_haproxy [res_nova_haproxy]
  Started: [ juju-acda3d-pacemaker-remote-7 juju-acda3d-pacemaker-remote-8 juju-acda3d-pacemaker-remote-9 ]
juju-acda3d-pacemaker-remote-10.cloud.sts (ocf::pacemaker:remote): Started juju-acda3d-pacemaker-remote-8
juju-acda3d-pacemaker-remote-12.cloud.sts (ocf::pacemaker:remote): Started juju-acda3d-pacemaker-remote-8
juju-acda3d-pacemaker-remote-11.cloud.sts (ocf::pacemaker:remote): Started juju-acda3d-pacemaker-remote-7
test2 (ocf::pacemaker:Dummy): Started juju-acda3d-pacemaker-remote-10.cloud.sts

## After running the following commands on juju-acda3d-pacemaker-remote-10.cloud.sts

1) sudo systemctl stop pacemaker_remote
2) forcefully shut down (openstack server stop ) less than 10 seconds after the pacemaker_remote stop gets executed.

The remote is shut down:
RemoteOFFLINE: [ juju-acda3d-pacemaker-remote-10.cloud.sts ]

The resource status remains stopped across the 3 machines and doesn't recover.
$ juju run --application nova-cloud-controller "sudo pcs resource show | grep -i test2"
- Stdout: " test2\t(ocf::pacemaker:Dummy):\tStopped\n" UnitId: nova-cloud-controller/0
- Stdout: " test2\t(ocf::pacemaker:Dummy):\tStopped\n" UnitId: nova-cloud-controller/1
- Stdout: " test2\t(ocf::pacemaker:Dummy):\tStopped\n" UnitId: nova-cloud-controller/2

However, if I do a clean shutdown (without interrupting the pacemaker_remote fence), it ends up with the resource migrated correctly to another node.

6 nodes configured
9 resources configured

Online: [ juju-acda3d-pacemaker-remote-7 juju-acda3d-pacemaker-remote-8 juju-acda3d-pacemaker-remote-9 ]
RemoteOnline: [ juju-acda3d-pacemaker-remote-11.cloud.sts juju-acda3d-pacemaker-remote-12.cloud.sts ]
RemoteOFFLINE: [ juju-acda3d-pacemaker-remote-10.cloud.sts ]

Full list of resources:
[...]
test2 (ocf::pacemaker:Dummy): Started juju-acda3d-pacemaker-remote-12.cloud.sts

I will keep investigating this behavior and determine if it is linked to the bug reported.
[Bug 1872118] Re: [SRU] DHCP Cluster crashes after a few hours
** Patch added: "lp-1872118-groovy.debdiff" https://bugs.launchpad.net/ubuntu/+source/isc-dhcp/+bug/1872118/+attachment/5400760/+files/lp-1872118-groovy.debdiff -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1872118 Title: [SRU] DHCP Cluster crashes after a few hours To manage notifications about this bug go to: https://bugs.launchpad.net/dhcp/+bug/1872118/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1872118] Re: [SRU] DHCP Cluster crashes after a few hours
** Patch added: "lp-1872118-focal.debdiff" https://bugs.launchpad.net/ubuntu/+source/isc-dhcp/+bug/1872118/+attachment/5400761/+files/lp-1872118-focal.debdiff -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1872118 Title: [SRU] DHCP Cluster crashes after a few hours To manage notifications about this bug go to: https://bugs.launchpad.net/dhcp/+bug/1872118/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1872118] Re: [SRU] DHCP Cluster crashes after a few hours
Uploaded debdiff(s) for groovy and focal. This will require a follow-up rebuild change for isc-dhcp once the library change lands.
[Bug 1872118] Re: [SRU] DHCP Cluster crashes after a few hours
** Description changed:

+ [Description]
- I have a pair of DHCP servers running in a cluster on ubuntu 20.04. All worked perfectly until recently, when they started stopping with code=killed, status=6/ABRT.
- This is being fixed by
+ isc-dhcp-server uses libisc-export (coming from the bind9-libs package) for handling the socket event(s) when configured in peer mode (master/secondary). It's possible that a sequence of messages dispatched by the master that requires acknowledgment from its peers holds a socket
+ in a pending-to-send state; a timer or a subsequent write request can be scheduled into this socket, and the !sock->pending_send assertion
+ will be raised when trying to write again while the data hasn't been flushed entirely and the pending_send flag hasn't been reset to 0.
- https://bugs.launchpad.net/bugs/1870729
+ If this race condition happens, the following stacktrace will be
+ hit:
- However now one stops after a few hours with the following errors. One
- can stay on line but not both.
+ (gdb) info threads + Id Target Id Frame + * 1 Thread 0x7fb4ddecb700 (LWP 3170) __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50 + 2 Thread 0x7fb4dd6ca700 (LWP 3171) __lll_lock_wait (futex=futex@entry=0x7fb4de6d2028, private=0) at lowlevellock.c:52 + 3 Thread 0x7fb4de6cc700 (LWP 3169) futex_wake (private=, processes_to_wake=1, futex_word=) at ../sysdeps/nptl/futex-internal.h:364 + 4 Thread 0x7fb4de74f740 (LWP 3148) futex_wait_cancelable (private=, expected=0, futex_word=0x7fb4de6cd0d0) at ../sysdeps/nptl/futex-internal.h:183 + + (gdb) frame 2 + #2 0x7fb4dec85985 in isc_assertion_failed (file=file@entry=0x7fb4decd8878 "../../../../lib/isc/unix/socket.c", line=line@entry=3361, type=type@entry=isc_assertiontype_insist, + cond=cond@entry=0x7fb4decda033 "!sock->pending_send") at ../../../lib/isc/assertions.c:52 + (gdb) bt + #1 0x7fb4deaa7859 in __GI_abort () at abort.c:79 + #2 0x7fb4dec85985 in isc_assertion_failed (file=file@entry=0x7fb4decd8878 "../../../../lib/isc/unix/socket.c", line=line@entry=3361, type=type@entry=isc_assertiontype_insist, + cond=cond@entry=0x7fb4decda033 "!sock->pending_send") at ../../../lib/isc/assertions.c:52 + #3 0x7fb4decc17e1 in dispatch_send (sock=0x7fb4de6d4990) at ../../../../lib/isc/unix/socket.c:4041 + #4 process_fd (writeable=, readable=, fd=11, manager=0x7fb4de6d0010) at ../../../../lib/isc/unix/socket.c:4054 + #5 process_fds (writefds=, readfds=0x7fb4de6d1090, maxfd=13, manager=0x7fb4de6d0010) at ../../../../lib/isc/unix/socket.c:4211 + #6 watcher (uap=0x7fb4de6d0010) at ../../../../lib/isc/unix/socket.c:4397 + #7 0x7fb4dea68609 in start_thread (arg=) at pthread_create.c:477 + #8 0x7fb4deba4103 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95 + + (gdb) frame 3 + #3 0x7fb4decc17e1 in dispatch_send (sock=0x7fb4de6d4990) at ../../../../lib/isc/unix/socket.c:4041 + 4041 in ../../../../lib/isc/unix/socket.c + (gdb) p sock->pending_send + $2 = 1 + + [TEST CASE] + + 1) Install isc-dhcp-server in 2 focal 
machine(s). + 2) Configure peer/cluster mode as follows: +Primary configuration: https://pastebin.ubuntu.com/p/XYj648MghK/ +Secondary configuration: https://pastebin.ubuntu.com/p/PYkcshZCWK/ + 3) Run dhcpd as follows on both machines + + # dhcpd -f -d -4 -cf /etc/dhcp/dhcpd.conf --no-pid ens4 + + 4) Leave the cluster running for a long (2h) period until the crash/race + condition is reproduced. + [REGRESSION POTENTIAL] - Syslog shows - Apr 10 17:20:15 dhcp-primary sh[6828]: ../../../../lib/isc/unix/socket.c:3361: INSIST(!sock->pending_send) failed, back trace - Apr 10 17:20:15 dhcp-primary sh[6828]: #0 0x7fbe78702a4a in ?? - Apr 10 17:20:15 dhcp-primary sh[6828]: #1 0x7fbe78702980 in ?? - Apr 10 17:20:15 dhcp-primary sh[6828]: #2 0x7fbe7873e7e1 in ?? - Apr 10 17:20:15 dhcp-primary sh[6828]: #3 0x7fbe784e5609 in ?? - Apr 10 17:20:15 dhcp-primary sh[6828]: #4 0x7fbe78621103 in ?? - - - nothing in kern.log - - - apport.log shows - ERROR: apport (pid 6850) Fri Apr 10 17:20:15 2020: called for pid 6828, signal 6, core limit 0, dump mode 2 - ERROR: apport (pid 6850) Fri Apr 10 17:20:15 2020: not creating core for pid with dump mode of 2 - ERROR: apport (pid 6850) Fri Apr 10 17:20:15 2020: executable: /usr/sbin/dhcpd (command line "dhcpd -user dhcpd -group dhcpd -f -4 -pf /run/dhcp-server/dhcpd.pid -cf /etc/dhcp/dhcpd.conf") - ERROR: apport (pid 6850) Fri Apr 10 17:20:15 2020: is_closing_session(): no DBUS_SESSION_BUS_ADDRESS in environment - ERROR: apport (pid 6850) Fri Apr 10 17:20:15 2020: wrote report /var/crash/_usr_sbin_dhcpd.0.crash - - - /var/crash/_usr_sbin_dhcpd.0.crash shows - - ProblemType: Crash - Architecture: amd64 - CrashCounter: 1 - Date: Fri Apr 10 17:20:15 2020 - DistroRelease: Ubuntu 20.04 - ExecutablePath: /usr/sbin/dhcpd - ExecutableTimestamp: 1586210315 - ProcCmdline: dhcpd -user dhcpd -group dhcpd -f -4 -pf
[Bug 1872118] Re: [SRU] DHCP Cluster crashes after a few hours
** Summary changed: - DHCP Cluster crashes after a few hours + [SRU] DHCP Cluster crashes after a few hours
[Bug 1872118] Re: DHCP Cluster crashes after a few hours
** Changed in: bind9-libs (Ubuntu Focal) Status: New => In Progress ** Changed in: bind9-libs (Ubuntu Groovy) Status: New => In Progress ** Changed in: isc-dhcp (Ubuntu Focal) Status: New => In Progress ** Changed in: isc-dhcp (Ubuntu Groovy) Status: Confirmed => In Progress ** Changed in: bind9-libs (Ubuntu Focal) Assignee: (unassigned) => Jorge Niedbalski (niedbalski) ** Changed in: bind9-libs (Ubuntu Groovy) Assignee: (unassigned) => Jorge Niedbalski (niedbalski) ** Changed in: isc-dhcp (Ubuntu Focal) Assignee: (unassigned) => Jorge Niedbalski (niedbalski) ** Changed in: isc-dhcp (Ubuntu Groovy) Assignee: (unassigned) => Jorge Niedbalski (niedbalski)
[Bug 1872118] Re: DHCP Cluster crashes after a few hours
Hello @Andrew, @rlaager, Any crashes to report before I propose this patch? My env has been running this patch for close to 3 days without any failures. Thanks,
[Bug 1872118] Re: DHCP Cluster crashes after a few hours
Hello Andrew, I just reviewed the core file that you provided. Thread 1 is the thread that panics on the assertion because sock.pending_send is already set. This is the condition I prevented in the PPA, so it *shouldn't* be hitting frame 3. In my test systems I don't hit this condition; dispatch_send isn't called if pending_send is set. (gdb) thread 1 [Switching to thread 1 (Thread 0x7f39a41f5700 (LWP 18780))] #1 0x7f39a4dd1859 in __GI_abort () at abort.c:79 79 in abort.c (gdb) bt #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50 #1 0x7f39a4dd1859 in __GI_abort () at abort.c:79 #2 0x7f39a4faf985 in isc_assertion_failed (file=, line=, type=, cond=) at ../../../lib/isc/assertions.c:52 #3 0x7f39a4feb7e1 in dispatch_send (sock=0x7f39a4a03730) at ../../../../lib/isc/unix/socket.c:3380 #4 process_fd (writeable=, readable=, fd=0, manager=0x7f39a49fa010) at ../../../../lib/isc/unix/socket.c:4054 #5 process_fds (writefds=, readfds=0x16, maxfd=-1533038191, manager=0x7f39a49fa010) at ../../../../lib/isc/unix/socket.c:4211 #6 watcher (uap=0x7f39a49fa010) at ../../../../lib/isc/unix/socket.c:4397 [...] (gdb) frame 3 #3 0x7f39a4feb7e1 in dispatch_send (sock=0x7f39a4a03730) at ../../../../lib/isc/unix/socket.c:3380 3380 ../../../../lib/isc/unix/socket.c: No such file or directory. (gdb) info locals iev = 0x0 ev = sender = 0x2 iev = ev = sender = (gdb) p sock $1 = (isc__socket_t *) 0x7f39a4a03730 (gdb) p sock.pending_send $2 = 1 Can you check your library links, etc?
ubuntu@dhcpd1:~$ ldd /usr/sbin/dhcpd | grep export libirs-export.so.161 => /lib/x86_64-linux-gnu/libirs-export.so.161 (0x7f5cb62e5000) libdns-export.so.1109 => /lib/x86_64-linux-gnu/libdns-export.so.1109 (0x7f5cb60b) libisc-export.so.1105 => /lib/x86_64-linux-gnu/libisc-export.so.1105 (0x7f5cb6039000) libisccfg-export.so.163 => /lib/x86_64-linux-gnu/libisccfg-export.so.163 (0x7f5cb5df5000) ubuntu@dhcpd1:~$ dpkg -S /lib/x86_64-linux-gnu/libisc-export.so.1105 libisc-export1105:amd64: /lib/x86_64-linux-gnu/libisc-export.so.1105 ubuntu@dhcpd1:~$ apt-cache policy libisc-export1105 | grep -i ppa Installed: 1:9.11.16+dfsg-3~ppa1 Candidate: 1:9.11.16+dfsg-3~ppa1 *** 1:9.11.16+dfsg-3~ppa1 500 500 http://ppa.launchpad.net/niedbalski/1872188-dbg/ubuntu focal/main amd64 Packages
[Bug 1890491] Re: A pacemaker node fails monitor (probe) and stop /start operations on a resource because it returns "rc=189
** Also affects: pacemaker (Ubuntu Groovy) Importance: Undecided Status: New ** Also affects: pacemaker (Ubuntu Bionic) Importance: Undecided Status: New ** Also affects: pacemaker (Ubuntu Focal) Importance: Undecided Status: New ** Changed in: pacemaker (Ubuntu Groovy) Status: New => Fix Released ** Changed in: pacemaker (Ubuntu Focal) Status: New => Fix Released
[Bug 1872118] Re: DHCP Cluster crashes after a few hours
OK, I have no crashes to report for the last 24 hours with the PPA included here. ● isc-dhcp-server.service - ISC DHCP IPv4 server Loaded: loaded (/lib/systemd/system/isc-dhcp-server.service; enabled; vendor preset: enabled) Active: active (running) since Tue 2020-08-04 14:58:11 UTC; 1 day 1h ago Docs: man:dhcpd(8) Main PID: 1202 (dhcpd) Tasks: 5 (limit: 5882) Memory: 6.3M CGroup: /system.slice/isc-dhcp-server.service └─592 dhcpd -user dhcpd -group dhcpd -f -4 -pf /run/dhcp-server/dhcpd.pid -cf /etc/dhcp/dhcpd.conf root@dhcpd1:/home/ubuntu# dpkg -l | grep ppa1 ii isc-dhcp-server 4.4.1-2.1ubuntu6~ppa1 amd64 ISC DHCP server for automatic IP address assignment ii libirs-export161 1:9.11.16+dfsg-3~ppa1 amd64 Exported IRS Shared Library ii libisc-export1105:amd64 1:9.11.16+dfsg-3~ppa1 amd64 Exported ISC Shared Library ii libisccfg-export163 1:9.11.16+dfsg-3~ppa1 amd64 Exported ISC CFG Shared Library Andrew, what's the current version of libisc-export1105:amd64 installed on your system? Can you provide a dpkg -l output and a systemctl status for the dhcpd service? Did you restart/kill the dhcpd processes after upgrading that library? Thanks in advance.
[Bug 1872118] Re: DHCP Cluster crashes after a few hours
Andrew, Thank you for your input. ** Do you have any logs or a crash report I can take a look at after you upgraded these systems? In my test lab, I am counting 3+ hours without a crash. root@dhcpd1:/home/ubuntu# dpkg -l | grep ppa1 ii isc-dhcp-server 4.4.1-2.1ubuntu6~ppa1 amd64 ISC DHCP server for automatic IP address assignment ii libirs-export161 1:9.11.16+dfsg-3~ppa1 amd64 Exported IRS Shared Library ii libisc-export1105:amd64 1:9.11.16+dfsg-3~ppa1 amd64 Exported ISC Shared Library ii libisccfg-export163 1:9.11.16+dfsg-3~ppa1 amd64 Exported ISC CFG Shared Library --- DHCPACK on 10.19.101.120 to 52:54:00:d1:eb:66 (sleek-kodiak) via ens4 balancing pool 555643e55f40 12 total 221 free 111 backup 110 lts 0 max-own (+/-)22 balanced pool 555643e55f40 12 total 221 free 111 backup 110 lts 0 max-misbal 33 balancing pool 555643e55f40 12 total 221 free 111 backup 110 lts 0 max-own (+/-)22 balanced pool 555643e55f40 12 total 221 free 111 backup 110 lts 0 max-misbal 33 --- balancing pool 5595dff0df10 12 total 221 free 111 backup 110 lts 0 max-own (+/-)22 balanced pool 5595dff0df10 12 total 221 free 111 backup 110 lts 0 max-misbal 33 balancing pool 5595dff0df10 12 total 221 free 111 backup 110 lts 0 max-own (+/-)22 balanced pool 5595dff0df10 12 total 221 free 111 backup 110 lts 0 max-misbal 33 ---
[Bug 1872118] Re: DHCP Cluster crashes after a few hours
Hello Andrew, Correct me if I am wrong, but it seems your system isn't running with libisc-export1105:amd64 1:9.11.16+dfsg-3~ppa1 (?). I am running the following packages from the PPA; please note that libisc-export1105 is required (that's where the fix is located). root@dhcpd1:/home/ubuntu# dpkg -l | grep ppa1 ii isc-dhcp-server 4.4.1-2.1ubuntu6~ppa1 amd64 ISC DHCP server for automatic IP address assignment ii libirs-export161 1:9.11.16+dfsg-3~ppa1 amd64 Exported IRS Shared Library ii libisc-export1105:amd64 1:9.11.16+dfsg-3~ppa1 amd64 Exported ISC Shared Library ii libisccfg-export163 1:9.11.16+dfsg-3~ppa1 amd64 Exported ISC CFG Shared Library --- Sent update done message to failover-partner failover peer failover-partner: peer moves from recover to recover-done failover peer failover-partner: peer update completed. failover peer failover-partner: I move from recover to recover-done failover peer failover-partner: peer moves from recover-done to normal failover peer failover-partner: I move from recover-done to normal failover peer failover-partner: Both servers normal balancing pool 55d0a88a4f10 12 total 221 free 221 backup 0 lts -110 max-own (+/-)22 balanced pool 55d0a88a4f10 12 total 221 free 221 backup 0 lts -110 max-misbal 33 --- balanced pool 55eb2fe58f40 12 total 221 free 111 backup 110 lts 0 max-misbal 33 Sending updates to failover-partner.
failover peer failover-partner: peer moves from recover-done to normal failover peer failover-partner: Both servers normal --- DHCPDISCOVER from 52:54:00:d1:eb:66 via ens4: load balance to peer failover-partner DHCPREQUEST for 10.19.101.120 (10.19.101.233) from 52:54:00:d1:eb:66 via ens4: lease owned by peer On failover: peer failover-partner: disconnected failover peer failover-partner: I move from normal to communications-interrupted DHCPDISCOVER from 52:54:00:e8:14:0a via ens4 DHCPOFFER on 10.19.101.10 to 52:54:00:e8:14:0a (shapely-peccary) via ens4 DHCPREQUEST for 10.19.101.10 (10.19.101.127) from 52:54:00:e8:14:0a (shapely-peccary) via ens4 DHCPACK on 10.19.101.10 to 52:54:00:e8:14:0a (shapely-peccary) via ens4 I'll leave this running until I can reproduce the crash or assume that the fix works. Please let me know if you can reproduce with those packages. Thanks, Jorge Niedbalski
[Bug 1872118] Re: DHCP Cluster crashes after a few hours
Hello Andrew, The fix is in the libisc-export1105 library (check dependencies on: https://launchpad.net/~niedbalski/+archive/ubuntu/fix-1872118/+packages); just replacing the dhcpd binary won't be enough. If you can install isc-dhcp-server and its dependencies from the PPA and test, that would be great. Thanks for any feedback.
[Bug 1872118] Re: DHCP Cluster crashes after a few hours
** Follow-up from my previous comment: I've built a PPA with a fix similar to the one pointed out in Debian bug #430065. https://launchpad.net/~niedbalski/+archive/ubuntu/fix-1872118 * I'd appreciate it if anyone could test that PPA on focal and report back whether the problem is still reproducible with that version. If it is, please upload the crash file / coredump and the configuration file used. Thank you.
[Bug 1872118] Re: DHCP Cluster crashes after a few hours
Hello, I checked the backtrace of a crashed dhcpd running on 4.4.1-2.1ubuntu5. (gdb) info threads Id Target Id Frame * 1 Thread 0x7fb4ddecb700 (LWP 3170) __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50 2 Thread 0x7fb4dd6ca700 (LWP 3171) __lll_lock_wait (futex=futex@entry=0x7fb4de6d2028, private=0) at lowlevellock.c:52 3 Thread 0x7fb4de6cc700 (LWP 3169) futex_wake (private=, processes_to_wake=1, futex_word=) at ../sysdeps/nptl/futex-internal.h:364 4 Thread 0x7fb4de74f740 (LWP 3148) futex_wait_cancelable (private=, expected=0, futex_word=0x7fb4de6cd0d0) at ../sysdeps/nptl/futex-internal.h:183 (gdb) frame 2 #2 0x7fb4dec85985 in isc_assertion_failed (file=file@entry=0x7fb4decd8878 "../../../../lib/isc/unix/socket.c", line=line@entry=3361, type=type@entry=isc_assertiontype_insist, cond=cond@entry=0x7fb4decda033 "!sock->pending_send") at ../../../lib/isc/assertions.c:52 (gdb) bt #1 0x7fb4deaa7859 in __GI_abort () at abort.c:79 #2 0x7fb4dec85985 in isc_assertion_failed (file=file@entry=0x7fb4decd8878 "../../../../lib/isc/unix/socket.c", line=line@entry=3361, type=type@entry=isc_assertiontype_insist, cond=cond@entry=0x7fb4decda033 "!sock->pending_send") at ../../../lib/isc/assertions.c:52 #3 0x7fb4decc17e1 in dispatch_send (sock=0x7fb4de6d4990) at ../../../../lib/isc/unix/socket.c:4041 #4 process_fd (writeable=, readable=, fd=11, manager=0x7fb4de6d0010) at ../../../../lib/isc/unix/socket.c:4054 #5 process_fds (writefds=, readfds=0x7fb4de6d1090, maxfd=13, manager=0x7fb4de6d0010) at ../../../../lib/isc/unix/socket.c:4211 #6 watcher (uap=0x7fb4de6d0010) at ../../../../lib/isc/unix/socket.c:4397 #7 0x7fb4dea68609 in start_thread (arg=) at pthread_create.c:477 #8 0x7fb4deba4103 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95 (gdb) frame 3 #3 0x7fb4decc17e1 in dispatch_send (sock=0x7fb4de6d4990) at ../../../../lib/isc/unix/socket.c:4041 4041 in ../../../../lib/isc/unix/socket.c (gdb) p sock->pending_send $2 = 1 The code is crashing on this 
assertion: https://gitlab.isc.org/isc-projects/bind9/-/blob/v9_11_3/lib/isc/unix/socket.c#L3364 This was already reported and marked as fixed in Debian (?) via [0]: "Now if a wakeup event occurs the socket would be dispatched for processing regardless of which kind of event (timer?) triggered the wakeup. At least I did not find any sanity checks in process_fds() except SOCK_DEAD(sock). This leads to the following situation: the sock is not dead yet but it is still pending when it is dispatched again. I would now check sock->pending_send before calling dispatch_send(). This would at least prevent the assertion failure - well knowing that the situation described above (not dead but still pending and alerting) is not a very pleasant one - until someone comes up with a better solution." [0] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=430065#20 ** Follow-up questions: 0) The reproducer doesn't seem consistent and appears to be related to a race condition involving an internal timer/futex. 1) Can anyone confirm that a pristine upstream 4.4.1 doesn't reproduce the issue? ** Bug watch added: Debian Bug tracker #430065 https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=430065
[Bug 1872118] Re: DHCP Cluster crashes after a few hours
** Also affects: isc-dhcp (Ubuntu Groovy) Importance: Undecided Status: Confirmed ** Also affects: isc-dhcp (Ubuntu Focal) Importance: Undecided Status: New ** Also affects: bind9-libs (Ubuntu) Importance: Undecided Status: New