[Bug 1811255] Re: perf archive missing

2024-03-04 Thread Trent Lloyd
*** This bug is a duplicate of bug 1823281 *** https://bugs.launchpad.net/bugs/1823281 ** This bug has been marked a duplicate of bug 1823281 perf-archive is not shipped in the linux-tools package

[Bug 1977669] Re: Metadata broken for SR-IOV external ports

2022-06-05 Thread Trent Lloyd
** Tags added: sts

[Bug 1977669] Re: Metadata broken for SR-IOV external ports

2022-06-05 Thread Trent Lloyd
** Description changed: OpenStack Ussuri/OVN SR-IOV instances are unable to connect to the metadata service despite DHCP and normal traffic working. The 169.254.169.254 metadata route is directed at the DHCP port IP, and no ARP reply is received by the VM for this IP. Diagnosis finds that

[Bug 1977669] [NEW] Metadata broken for SR-IOV external ports

2022-06-04 Thread Trent Lloyd
Public bug reported: OpenStack Ussuri/OVN SR-IOV instances are unable to connect to the metadata service despite DHCP and normal traffic working. The 169.254.169.254 metadata route is directed at the DHCP port IP, and no ARP reply is received by the VM for this IP. Diagnosis finds that the ARP

[Bug 1970453] Re: DMAR: ERROR: DMA PTE for vPFN 0x7bf32 already set

2022-05-11 Thread Trent Lloyd
With regards to the patch here: https://lists.linuxfoundation.org/pipermail/iommu/2021-October/060115.html It is mentioned this issue can occur if you are passing through a PCI device to a virtual machine guest. This patch seems like it never made it into the kernel. So I am curious if you are

[Bug 1964445] [NEW] Incorrectly identifies processes inside LXD container on jammy/cgroupsv2

2022-03-09 Thread Trent Lloyd
Public bug reported: Processes inside of LXD containers are incorrectly identified as needing a restart on jammy. The cause is that needrestart does not correctly parse cgroups v2. Since needrestart is installed in a default install, this is problematic as it prompts you to restart and actually
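For background: needrestart decides whether a process is inside a container by parsing /proc/<pid>/cgroup, whose format changed between cgroups v1 and v2. A quick way to see the difference (paths shown are illustrative):

  $ cat /proc/self/cgroup
  0::/user.slice/user-1000.slice/session-1.scope   # v2: a single "0::/path" line
  # under cgroups v1 the same file has one "ID:controller:/path" line per
  # controller, e.g. "12:memory:/user.slice", which is what older parsers expect.
  # On the host, a process inside an LXD container shows the payload scope, e.g.
  # 0::/lxc.payload.mycontainer/...   (container name illustrative)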

[Bug 1958148] Re: mkinitramfs is too slow

2022-02-28 Thread Trent Lloyd
Where is the discussion happening? I ran the same benchmarks for my i7-6770HQ 4-core system. This really needs revising. While disk space usage in /boot is a concern, in this example at least -10 would use only 8MB (10%) more space and cut the time taken from 2m1s to 13s. zstd.0 84M 0m2.150s
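To reproduce this kind of comparison yourself, the knobs live in initramfs.conf; a sketch only (COMPRESS is long-standing, but COMPRESSLEVEL is an assumption - it only exists in newer initramfs-tools, so check your version):

  # /etc/initramfs-tools/initramfs.conf
  COMPRESS=zstd
  COMPRESSLEVEL=10   # assumption: only honoured by newer initramfs-tools
  $ time sudo update-initramfs -u
  $ ls -lh /boot/initrd.img-$(uname -r)   # compare resulting size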

[Bug 1906476] Re: PANIC at zfs_znode.c:335:zfs_znode_sa_init() // VERIFY(0 == sa_handle_get_from_db(zfsvfs->z_os, db, zp, SA_HDL_SHARED, >z_sa_hdl)) failed

2022-01-12 Thread Trent Lloyd
Re-installing from scratch should resolve the issue. I suspect in most cases if you install with the 21.10 installer (even though it has the old kernel) as long as you install updates during the install this issue probably won't hit you. It mostly seems to occur after a reboot and it's loading

[Bug 1077796] Re: /bin/kill no longer works with negative PID

2021-12-16 Thread Trent Lloyd
Most shells (including bash and zsh) have a built-in for kill, so it's handled internally. Some shells don't, so /bin/kill is executed instead, which has this issue. One comment noted this was fixed in 2013 in version 3.3.4, but it apparently broke again at some point and is broken at least
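A quick way to see which implementation you are invoking, and the process-group semantics at issue:

  $ type kill
  kill is a shell builtin
  # send SIGTERM to process group 1234; "--" stops option parsing:
  $ kill -- -1234        # shell builtin: works
  $ /bin/kill -- -1234   # procps binary: fails on affected versions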

[Bug 1952496] Re: ubuntu 20.04 LTS network problem

2021-11-29 Thread Trent Lloyd
Thanks for the data. I can see you queried 'steven-ubuntu.local' and that looks like the hostname of the local machine. Can you also query the hostname of the AFP server you are trying to connect to (using both getent hosts and avahi-resolve-host-name).

[Bug 1952496] Re: ubuntu 20.04 LTS network problem

2021-11-28 Thread Trent Lloyd
As a side note, it may be time to switch to a new protocol, as even Apple has dropped AFP sharing support in the last few releases and is deprecating its usage. You can use Samba to do SMB shares, including the extra Apple-specific bits, if you need Time Machine support etc. on your NAS

[Bug 1952496] Re: ubuntu 20.04 LTS network problem

2021-11-28 Thread Trent Lloyd
To assist with this can you get the following outputs from the broken system (change 'hostname.local' to the hostname expected to work):
  cat /etc/nsswitch.conf
  systemctl status avahi-daemon
  journalctl -u avahi-daemon --boot
  avahi-resolve-host-name hostname.local
  getent hosts hostname.local

[Bug 1339518] Re: sudo config file specifies group "admin" that doesn't exist in system

2021-11-17 Thread Trent Lloyd
Subscribing Marc as he seems to be largely maintaining this, made the original changes, and has been keeping the delta. Hopefully he can provide some insight. This seems to be a delta to Debian that has been kept intentionally for a long time; it's frequently in the changelog even in the most

[Bug 1339518] Re: sudo config file specifies group "admin" that doesn't exist in system

2021-11-17 Thread Trent Lloyd
Just noticed this today; it's still the same on Ubuntu 20.04. The default sudoers file ships with the admin group having sudo privileges, but the group doesn't exist by default. While it doesn't have out-of-the-box security implications, I think this is a security concern as someone could potentially

[Bug 1931660] Re: PANIC at zfs_znode.c:339:zfs_znode_sa_init()

2021-10-15 Thread Trent Lloyd
This looks like a duplicate of this: https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1906476

[Bug 1906476] Re: PANIC at zfs_znode.c:335:zfs_znode_sa_init() // VERIFY(0 == sa_handle_get_from_db(zfsvfs->z_os, db, zp, SA_HDL_SHARED, >z_sa_hdl)) failed

2021-09-27 Thread Trent Lloyd
Relatedly, say you wanted to recover a system from a boot disk and copy all the data off to another disk. If you use a sequential file copy like tar/cp in verbose mode and watch it, eventually it will hang on the file triggering the issue (watch dmesg/kern.log). Once that happens, move
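A sketch of that recovery walk (the dataset path is a placeholder; the commands are illustrative, not from the original comment):

  $ sudo tar cf /dev/null -v /tank/data 2>&1 | tail -n1   # hangs on the first affected file
  $ sudo dmesg -w | grep -i PANIC                          # in another terminal
  # note the last filename printed, move it aside, reboot if I/O is hung, repeat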

[Bug 1906476] Re: PANIC at zfs_znode.c:335:zfs_znode_sa_init() // VERIFY(0 == sa_handle_get_from_db(zfsvfs->z_os, db, zp, SA_HDL_SHARED, >z_sa_hdl)) failed

2021-09-27 Thread Trent Lloyd
So to be clear, this patch revert stops the issue from occurring anew, but if the issue already happened on your filesystem it will continue to occur because the exception is reporting corruption on disk. I don't currently have a good fix for this other than to move the affected files to a directory

[Bug 1906476] Re: PANIC at zfs_znode.c:335:zfs_znode_sa_init() // VERIFY(0 == sa_handle_get_from_db(zfsvfs->z_os, db, zp, SA_HDL_SHARED, >z_sa_hdl)) failed

2021-09-26 Thread Trent Lloyd
Have created a 100% reliable reproducer test case and also determined the Ubuntu-specific patch 4701-enable-ARC-FILL-LOCKED-flag.patch (added to fix Bug #1900889) is likely the cause. [Test Case] The important parts are: - Use encryption - rsync the zfs git tree - Use parallel I/O from silversearcher-ag

[Bug 1906476] Re: PANIC at zfs_znode.c:335:zfs_znode_sa_init() // VERIFY(0 == sa_handle_get_from_db(zfsvfs->z_os, db, zp, SA_HDL_SHARED, >z_sa_hdl)) failed

2021-09-26 Thread Trent Lloyd
While trying to set up a reproducer that would exercise chrome or wine or something, I stumbled across the following reproducer that worked twice in a row in a libvirt VM on my machine today. The general gist is to (1) Create a zfs filesystem with "-o encryption=aes-256-gcm -o compression=zstd -o
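Filling in the shape of that reproducer as a hedged sketch only - the pool/dataset names are placeholders and the trailing options are truncated above; the key ingredients match the test case in the comment above (encryption, an rsync of the zfs git tree, parallel reads via silversearcher-ag):

  $ zfs create -o encryption=aes-256-gcm -o keyformat=passphrase \
        -o compression=zstd tank/repro
  $ git clone https://github.com/openzfs/zfs.git /tank/repro/zfs
  $ rsync -a /tank/repro/zfs/ /tank/repro/copy/
  $ ag SA_MAGIC /tank/repro/copy/   # parallel small-file reads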

[Bug 1906476] Re: PANIC at zfs_znode.c:335:zfs_znode_sa_init() // VERIFY(0 == sa_handle_get_from_db(zfsvfs->z_os, db, zp, SA_HDL_SHARED, >z_sa_hdl)) failed

2021-09-24 Thread Trent Lloyd
34 more user reports on the upstream bug of people hitting it on Ubuntu 5.13.0: https://github.com/openzfs/zfs/issues/10971 I think this needs some priority. It doesn't seem to be hitting upstream; for some reason it's only really hitting on Ubuntu.

[Bug 1906476] Re: PANIC at zfs_znode.c:335:zfs_znode_sa_init() // VERIFY(0 == sa_handle_get_from_db(zfsvfs->z_os, db, zp, SA_HDL_SHARED, >z_sa_hdl)) failed

2021-09-05 Thread Trent Lloyd
@Colin To be clear this is the same bug I originally hit and opened the launchpad for, it just doesn't quite match with what most people saw in the upstream bugs. But it seemed to get fixed anyway for a while, and has regressed again somehow. Same exception as from the original description and

[Bug 1783184] Re: neutron-ovs-cleanup can have unintended side effects

2021-09-01 Thread Trent Lloyd
There is a systemd option that I think will solve this issue. https://www.freedesktop.org/software/systemd/man/systemd.unit.html#RefuseManualStart= RefuseManualStart=, RefuseManualStop= Takes a boolean argument. If true, this unit can only be activated or deactivated indirectly. In this case,
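Applied to this case, a drop-in would look along these lines (unit name per this bug; a sketch only):

  # /etc/systemd/system/neutron-ovs-cleanup.service.d/override.conf
  [Unit]
  RefuseManualStart=yes
  RefuseManualStop=yes
  $ sudo systemctl daemon-reload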

[Bug 1892242] Re: Curtin doesn't handle type:mount entries without 'path' element

2021-08-27 Thread Trent Lloyd
In terms of understanding when this was fixed for which users/versions: assuming that MAAS copies the curtin version from the server to the deployed client, which I think is the case, you need to get an updated curtin to the MAAS server. The bug was fix-released into curtin

[Bug 1906476] Re: PANIC at zfs_znode.c:335:zfs_znode_sa_init() // VERIFY(0 == sa_handle_get_from_db(zfsvfs->z_os, db, zp, SA_HDL_SHARED, >z_sa_hdl)) failed

2021-08-24 Thread Trent Lloyd
I traced the call failure. I found the failing code is in sa.c:1291#sa_build_index() if (BSWAP_32(sa_hdr_phys->sa_magic) != SA_MAGIC) { This code prints debug info to /proc/spl/kstat/zfs/dbgmsg, which for me is: 1629791353 sa.c:1293:sa_build_index(): Buffer Header: cb872954 !=

[Bug 1906476] Re: PANIC at zfs_znode.c:335:zfs_znode_sa_init() // VERIFY(0 == sa_handle_get_from_db(zfsvfs->z_os, db, zp, SA_HDL_SHARED, >z_sa_hdl)) failed

2021-08-23 Thread Trent Lloyd
This has re-appeared for me today after upgrading to 5.13.0-14 on Impish. Same call stack, and same chrome-based applications (Mattermost was hit first) affected. Not currently running DKMS, so: Today: 5.13.0-14-lowlat Tue Aug 24 10:59 still running (zfs module is 2.0.3-8ubuntu6) Yesterday:

[Bug 1906476] Re: PANIC at zfs_znode.c:335:zfs_znode_sa_init() // VERIFY(0 == sa_handle_get_from_db(zfsvfs->z_os, db, zp, SA_HDL_SHARED, >z_sa_hdl)) failed

2021-08-11 Thread Trent Lloyd
Try the zfs_recover step from Colin's comment above. And then look for invalid files and try to move them out of the way. I'm not aware of encrypted pools being specifically implicated (no such mention in the bug and it doesn't seem like it), having said that, I am using encryption on the dataset

[Bug 1827264] Re: ovs-vswitchd thread consuming 100% CPU

2021-06-27 Thread Trent Lloyd
Seems there is a good chance at least some of the people commenting or affected by this bug are duplicates of Bug #1839592 - essentially a libc6 bug that meant threads weren't woken up when they should have been. Fixed by libc6 upgrade to 2.27-3ubuntu1.3 in bionic.

[Bug 1906476] Re: PANIC at zfs_znode.c:335:zfs_znode_sa_init() // VERIFY(0 == sa_handle_get_from_db(zfsvfs->z_os, db, zp, SA_HDL_SHARED, >z_sa_hdl)) failed

2021-05-20 Thread Trent Lloyd
Are you confident that the issue is a new issue? Unfortunately, as best I can tell, the corruption can occur and then will still appear on a fixed system if it's reading corruption created in the past, which unfortunately scrub doesn't seem to detect. I've still had no recurrence here after a few

[Bug 1920640] Re: EXPKEYSIG C8CAB6595FDFF622 Ubuntu Debug Symbol Archive Automatic Signing Key (2016)

2021-03-21 Thread Trent Lloyd
** Changed in: ubuntu-keyring (Ubuntu) Importance: Undecided => Critical ** Changed in: ubuntu-keyring (Ubuntu) Importance: Critical => High

[Bug 1920640] Re: EXPKEYSIG C8CAB6595FDFF622 Ubuntu Debug Symbol Archive Automatic Signing Key (2016)

2021-03-21 Thread Trent Lloyd
Updated the following wiki pages: https://wiki.ubuntu.com/Debug%20Symbol%20Packages https://wiki.ubuntu.com/DebuggingProgramCrash With the note: Note: The GPG key expired on 2021-03-21 and may need updating by either upgrading the ubuntu-dbgsym-keyring package or re-running the apt-key command.
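The two remedies mentioned, spelled out (the apt-key line mirrors the one on the wiki pages above; verify the fingerprint against the wiki before use):

  $ sudo apt install ubuntu-dbgsym-keyring    # bionic and later
  # or re-import the extended key directly:
  $ sudo apt-key adv --keyserver keyserver.ubuntu.com \
        --recv-keys F2EDC64DC5AEE1F6B9C621F0C8CAB6595FDFF622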

[Bug 1920640] Re: EXPKEYSIG C8CAB6595FDFF622 Ubuntu Debug Symbol Archive Automatic Signing Key (2016)

2021-03-21 Thread Trent Lloyd
Just to make the current status clear from what I can gather: - The GPG key was extended by 1 year to 2022-03-21 - On Ubuntu Bionic (18.04) and newer the GPG key is normally installed by the ubuntu-dbgsym-keyring package. This package is not yet updated. An update to

[Bug 1906476] Re: PANIC at zfs_znode.c:335:zfs_znode_sa_init() // VERIFY(0 == sa_handle_get_from_db(zfsvfs->z_os, db, zp, SA_HDL_SHARED, >z_sa_hdl)) failed

2021-03-20 Thread Trent Lloyd
I got another couple of days out of it without issue - so I think it's likely fixed. It seems like this issue looks very similar to the following upstream bug, same behaviour but a different error, and so I wonder if it was ultimately the same bug. Looks like this patch from 2.0.3 was pulled

[Bug 1869808] Re: reboot neutron-ovs-agent introduces a short interrupt of vlan traffic

2021-03-19 Thread Trent Lloyd
I have specifically verified that this bug (vlan traffic interruption during restart when rabbitmq is down) is fixed by the package in bionic-proposed. Followed my reproduction steps per the Test Case: all traffic to instances stops on 12.1.1-0ubuntu3 and does not stop on 12.1.1-0ubuntu4. But

[Bug 1894843] Re: [dvr_snat] Router update deletes rfp interface from qrouter even when VM port is present on this host

2021-03-16 Thread Trent Lloyd
When using DVR-SNAT, a simple neutron-l3-agent gateway restart triggers this issue. Reproduction Note: Nodes with an ACTIVE or BACKUP (in the case of L3HA) router for the network are not affected by this issue, so a small 1-6 node environment may make this difficult to reproduce or only affect

[Bug 1869808] Re: reboot neutron-ovs-agent introduces a short interrupt of vlan traffic

2021-03-15 Thread Trent Lloyd
Looking to get this approved so that we can verify it, as we ideally need this released by the weekend of March 27th for some maintenance activity. Is something holding back the approval?

[Bug 1906476] Re: PANIC at zfs_znode.c:335:zfs_znode_sa_init() // VERIFY(0 == sa_handle_get_from_db(zfsvfs->z_os, db, zp, SA_HDL_SHARED, >z_sa_hdl)) failed

2021-03-08 Thread Trent Lloyd
It's worth noting that, as best I can understand, the patches won't fix an already broken filesystem. You have to remove all of the affected files, and it's difficult to know exactly what files are affected. I try to guess based on which show a ??? mark in "ls -la". But sometimes the "ls" hangs,

[Bug 1916708] Re: udpif_revalidator crash in ofpbuf_resize__

2021-02-23 Thread Trent Lloyd
E-mailed upstream for assistance: https://mail.openvswitch.org/pipermail/ovs-discuss/2021-February/050963.html ** Tags added: sts

[Bug 1916708] [NEW] udpif_revalidator crash in ofpbuf_resize__

2021-02-23 Thread Trent Lloyd
Public bug reported: The udpif_revalidator thread crashed in ofpbuf_resize__ on openvswitch 2.9.2-0ubuntu0.18.04.3~cloud0 (on 16.04 from the xenial-queens cloud archive, backported from the 18.04 release of the same version). Kernel version was 4.4.0-159-generic. The issue is suspected to still

[Bug 1869808] Re: reboot neutron-ovs-agent introduces a short interrupt of vlan traffic

2021-02-17 Thread Trent Lloyd
Attaching revised SRU patch for Ubuntu Bionic, no code content changes but fixed the changelog to list all 3 bug numbers correctly. ** Patch added: "neutron SRU patch for Ubuntu Bionic (new version)"

[Bug 1869808] Re: reboot neutron-ovs-agent introduces a short interrupt of vlan traffic

2021-02-17 Thread Trent Lloyd
Ubuntu SRU Justification [Impact] - When there is a RabbitMQ or neutron-api outage, the neutron-openvswitch-agent undergoes a "resync" process and temporarily blocks all VM traffic. This always happens for a short time period (maybe <1 second) but in some high scale environments this lasts for

[Bug 1869808] Re: reboot neutron-ovs-agent introduces a short interrupt of vlan traffic

2021-02-17 Thread Trent Lloyd
SRU proposed for Ubuntu Bionic + Cloud Archive (Queens) for the following 3 bugs: Bug #1869808 reboot neutron-ovs-agent introduces a short interrupt of vlan traffic Bug #1887148 Network loop between physical networks with DVR (Fix for fix to Bug #1869808) Bug #1871850 [L3] existing router

[Bug 1906476] Re: PANIC at zfs_znode.c:335:zfs_znode_sa_init() // VERIFY(0 == sa_handle_get_from_db(zfsvfs->z_os, db, zp, SA_HDL_SHARED, >z_sa_hdl)) failed

2021-02-02 Thread Trent Lloyd
I can confirm 100% this bug is still happening with 2.0.1 from hirsute-proposed, even with a brand new install, on a different disk (SATA SSD instead of NVMe Intel Optane 900p SSD), using 2.0.1 inside the installer and from first boot. I can reproduce it reliably within about 2 hours just using

[Bug 1906476] Re: PANIC at zfs_znode.c:335:zfs_znode_sa_init() // VERIFY(0 == sa_handle_get_from_db(zfsvfs->z_os, db, zp, SA_HDL_SHARED, >z_sa_hdl)) failed

2021-01-23 Thread Trent Lloyd
Using 2.0.1 from hirsute-proposed it seems like I'm still hitting this. I moved and replaced .config/google-chrome, and it seems that after using it for a day, then shutting down and booting up, the same issue appears again. Going to see if I can somehow reproduce this on a different disk or in a VM with xfstests or something.

[Bug 1906476] Re: PANIC at zfs_znode.c:335:zfs_znode_sa_init() // VERIFY(0 == sa_handle_get_from_db(zfsvfs->z_os, db, zp, SA_HDL_SHARED, >z_sa_hdl)) failed

2021-01-17 Thread Trent Lloyd
This issue seems to have appeared somewhere between zfs-linux 0.8.4-1ubuntu11 (last known working version) and 0.8.4-1ubuntu16. When the issue first hit, I had zfs-dkms installed, which was on 0.8.4-1ubuntu16, whereas the kernel build had 0.8.4-1ubuntu11. I removed zfs-dkms to go back to the

[Bug 1899826] Re: backport upstream fixes for 5.9 Linux support

2021-01-17 Thread Trent Lloyd
Accidentally posted the above comment in the wrong bug, sorry - it was meant for https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1906476, where I suspect this bug of having caused a regression.

[Bug 1899826] Re: backport upstream fixes for 5.9 Linux support

2021-01-17 Thread Trent Lloyd
This issue seems to have appeared somewhere between zfs-linux 0.8.4-1ubuntu11 (last known working version) and 0.8.4-1ubuntu16. When the issue first hit, I had zfs-dkms installed, which was on 0.8.4-1ubuntu16, whereas the kernel build had 0.8.4-1ubuntu11. I removed zfs-dkms to go back to the

[Bug 1906476] Re: PANIC at zfs_znode.c:335:zfs_znode_sa_init() // VERIFY(0 == sa_handle_get_from_db(zfsvfs->z_os, db, zp, SA_HDL_SHARED, >z_sa_hdl)) failed

2021-01-14 Thread Trent Lloyd
Another user report here: https://github.com/openzfs/zfs/issues/10971 Curiously I found a 2016(??) report of similar here: https://bbs.archlinux.org/viewtopic.php?id=217204

[Bug 1906476] Re: PANIC at zfs_znode.c:335:zfs_znode_sa_init() // VERIFY(0 == sa_handle_get_from_db(zfsvfs->z_os, db, zp, SA_HDL_SHARED, >z_sa_hdl)) failed

2021-01-14 Thread Trent Lloyd
I hit this problem again today, but now without zfs-dkms. After upgrading my kernel from 5.8.0-29-generic to 5.8.0-36-generic my Google Chrome cache directory is broken again; had to rename it and then reboot to get out of the problem. ** Changed in: zfs-linux (Ubuntu) Importance:

[Bug 1874939] Re: ceph-osd can't connect after upgrade to focal

2020-12-09 Thread Trent Lloyd
This issue appears to be documented here: https://docs.ceph.com/en/latest/releases/nautilus/#instructions Complete the upgrade by disallowing pre-Nautilus OSDs and enabling all new Nautilus-only functionality: # ceph osd require-osd-release nautilus Important This step is mandatory. Failure to
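To check whether a cluster hitting this has completed that step (standard ceph CLI):

  $ ceph osd dump | grep require_osd_release
  require_osd_release nautilus
  # if it still shows the previous release, run the documented step:
  $ ceph osd require-osd-release nautilus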

[Bug 1907262] Re: raid10: discard leads to corrupted file system

2020-12-09 Thread Trent Lloyd
** Attachment added: "blktrace-lp1907262.tar.gz" https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1907262/+attachment/5442212/+files/blktrace-lp1907262.tar.gz -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.

[Bug 1907262] Re: raid10: discard leads to corrupted file system

2020-12-09 Thread Trent Lloyd
I can reproduce this on a Google Cloud n1-standard-16 using 2x Local NVMe disks. Then partition nvme0n1 and nvme0n2 with only an 8GB partition, then format directly with ext4 (skip LVM). In this setup each 'check' takes <1 min so speeds up testing considerably. Example details - seems
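A sketch of that smaller reproducer (the mdadm geometry is assumed, since the excerpt is cut off; device names follow the comment):

  $ sudo mdadm --create /dev/md0 --level=10 --raid-devices=2 \
        /dev/nvme0n1p1 /dev/nvme0n2p1
  $ sudo mkfs.ext4 /dev/md0 && sudo mount -o discard /dev/md0 /mnt
  $ sudo fstrim -v /mnt
  $ echo check | sudo tee /sys/block/md0/md/sync_action
  $ cat /sys/block/md0/md/mismatch_cnt    # non-zero indicates corruption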

[Bug 1906476] Re: PANIC at zfs_znode.c:335:zfs_znode_sa_init() // VERIFY(0 == sa_handle_get_from_db(zfsvfs->z_os, db, zp, SA_HDL_SHARED, >z_sa_hdl)) failed

2020-12-01 Thread Trent Lloyd
Should mention that Chrome itself always showed "waiting for cache", backing up the story around the cache files.

[Bug 1906476] [NEW] PANIC at zfs_znode.c:335:zfs_znode_sa_init() // VERIFY(0 == sa_handle_get_from_db(zfsvfs->z_os, db, zp, SA_HDL_SHARED, >z_sa_hdl)) failed

2020-12-01 Thread Trent Lloyd
Public bug reported: Since today while running Ubuntu 21.04 Hirsute I started getting a ZFS panic in the kernel log which was also hanging Disk I/O for all Chrome/Electron Apps. I have narrowed down a few important notes: - It does not happen with module version 0.8.4-1ubuntu11 built and

[Bug 1847361] Re: Upgrade of qemu binaries causes running instances not able to dynamically load modules

2020-11-30 Thread Trent Lloyd
Note: This patch has related regressions in Hirsute due to the version number containing a space: https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1906245 https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1905377 Seems the patch is temporarily dropped; we will need to ensure we don't totally

[Bug 1902351] Re: forgets touchpad settings

2020-11-24 Thread Trent Lloyd
I am experiencing this as well, it worked on 20.04 Focal and is broken on 20.10 Groovy and 21.04 Hirsute as of today with the latest Hirsute packages. I am using GNOME with a Logitech T650 touchpad. If I unplug and replug the receiver it forgets again. I then have to toggle both natural scrolling

[Bug 1903745] Re: pacemaker left stopped after unattended-upgrade of pacemaker (1.1.14-2ubuntu1.8 -> 1.1.14-2ubuntu1.9)

2020-11-16 Thread Trent Lloyd
For clarity my findings so far are that: - The package upgrade stops pacemaker - After 30 seconds (customised down from 30min by charm-hacluster), the stop times out and pretends to have finished, but leaves pacemaker running (due to SendSIGKILL=no in the .service intentionally set upstream to

[Bug 1903745] Re: pacemaker left stopped after unattended-upgrade of pacemaker (1.1.14-2ubuntu1.8 -> 1.1.14-2ubuntu1.9)

2020-11-16 Thread Trent Lloyd
With regards to Billy's Comment #18, my analysis for that bionic sosreport is in Comment #8, where I found that specific sosreport didn't experience this issue - but most likely that node was suffering from the issue occurring on the MySQL nodes it was connected to - and the service couldn't

[Bug 1903745] Re: upgrade from 1.1.14-2ubuntu1.8 to 1.1.14-2ubuntu1.9 breaks clusters

2020-11-12 Thread Trent Lloyd
** Changed in: charm-hacluster Status: New => Confirmed ** Changed in: pacemaker (Ubuntu) Status: Confirmed => Invalid ** Summary changed: - upgrade from 1.1.14-2ubuntu1.8 to 1.1.14-2ubuntu1.9 breaks clusters + pacemaker left stopped after unattended-upgrade of pacemaker

[Bug 1903745] Re: upgrade from 1.1.14-2ubuntu1.8 to 1.1.14-2ubuntu1.9 breaks clusters

2020-11-12 Thread Trent Lloyd
For the fix to Bug #1654403 charm-hacluster sets TimeoutStartSec and TimeoutStopSec for both corosync and pacemaker, to the same value. system-wide default (xenial, bionic): TimeoutStopSec=90s TimeoutStartSec=90s corosync package default: system-wide default (no changes) pacemaker package

[Bug 1903745] Re: upgrade from 1.1.14-2ubuntu1.8 to 1.1.14-2ubuntu1.9 breaks clusters

2020-11-12 Thread Trent Lloyd
I misread - the systemd unit is native, and it already sets the following settings: SendSIGKILL=no TimeoutStopSec=30min TimeoutStartSec=60s The problem is that most of these failures have been experienced on juju hacluster charm installations, which override these values $ cat
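The file being cat'ed is cut off above; the charm's override drop-in is along these lines (the path and exact values are assumptions pieced together from the surrounding comments, which mention a 30s stop timeout):

  # /etc/systemd/system/pacemaker.service.d/overrides.conf
  [Service]
  TimeoutStartSec=30s
  TimeoutStopSec=30s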

[Bug 1903745] Re: upgrade from 1.1.14-2ubuntu1.8 to 1.1.14-2ubuntu1.9 breaks clusters

2020-11-11 Thread Trent Lloyd
Analysed the logs for an occurrence of this; the problem appears to be that pacemaker doesn't stop after 1 minute, so systemd gives up and just starts a new instance anyway, noting that all of the existing processes are left behind. I am awaiting the extra rotated logs to confirm, but from what I

[Bug 1903745] Re: upgrade from 1.1.14-2ubuntu1.8 to 1.1.14-2ubuntu1.9 breaks clusters

2020-11-10 Thread Trent Lloyd
I reviewed the sosreports and provide some general analysis below. [sosreport-juju-machine-2-lxc-1-2020-11-10-tayyude] I don't see any sign in this log of package upgrades or VIP stop/starts, I suspect this host may be unrelated. [sosreport-juju-caae6f-19-lxd-6-20201110230352.tar.xz] This is

[Bug 1848497] Re: virtio-balloon change breaks migration from qemu prior to 4.0

2020-10-26 Thread Trent Lloyd
I have verified the package for this specific virtio-balloon issue discussed in this bug only. Migrating from 3.1+dfsg-2ubuntu3.2~cloud0 to the latest released version (3.1+dfsg-2ubuntu3.7~cloud0) fails due to balloon setup: 2020-10-26T07:40:30.157066Z qemu-system-x86_64:

[Bug 1897483] Re: With hardware offloading enabled, OVS logs are spammed with netdev_offload_tc ERR messages

2020-10-10 Thread Trent Lloyd
There is an indication in the below RHBZ this can actually prevent openvswitch from working properly as it loses too much CPU time to this processing in large environments (100s or 1000s of ports) https://bugzilla.redhat.com/show_bug.cgi?id=1737982 Seems to be a rejected upstream patch here,

[Bug 1896734] Re: A privsep daemon spawned by neutron-openvswitch-agent hangs when debug logging is enabled (large number of registered NICs) - an RPC response is too large for msgpack

2020-10-08 Thread Trent Lloyd
** Tags added: seg

[Bug 1887779] Re: recurrent uncaught exception

2020-09-29 Thread Trent Lloyd
I hit this too; after restarting to fix it I also lost all my stored metrics from the last few days, so going to triage this as High. ** Changed in: graphite-carbon (Ubuntu) Importance: Undecided => High

[Bug 1882416] Re: virtio-balloon change breaks rocky -> stein live migrate

2020-09-25 Thread Trent Lloyd
I think the issue here is that Stein's qemu comes from Disco which was EOL before Bug #1848497 was fixed, so the change wasn't backported. While Stein is EOL next month, the problem is this makes live migrations fail, which are often needed during OpenStack upgrades to actually get through Stein

[Bug 1882416] Re: virtio-balloon change breaks rocky -> stein live migrate

2020-09-25 Thread Trent Lloyd
** Tags added: seg

[Bug 1893889] Re: unattended-upgrade of nova-common failure due to conffile prompt

2020-09-15 Thread Trent Lloyd
Right, the systems are running 1.1ubuntu1.18.04.11 - in my original query to you I was trying to figure out if the patches in .12 or .13 were likely to have caused this specific situation and you weren't sure hence the bug report with more details. ** Changed in: unattended-upgrades (Ubuntu)

[Bug 1894453] Re: Building Ceph packages with RelWithDebInfo

2020-09-08 Thread Trent Lloyd
Are we sure it's actually building as Debug? At least 15.2.3 on focal seems to build with RelWithDebInfo; I see -O2. Only do_cmake.sh had logic for this (it would set Debug if a .git directory exists), but the debian rules file doesn't seem to use that script, calling cmake directly.

[Bug 1893889] Re: unattended-upgrade of nova-common failure due to conffile prompt

2020-09-01 Thread Trent Lloyd
** Attachment added: "dpkg.log.6" https://bugs.launchpad.net/ubuntu/+source/unattended-upgrades/+bug/1893889/+attachment/5406809/+files/dpkg.log.6 -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.

[Bug 1893889] Re: unattended-upgrade of nova-common failure due to conffile prompt

2020-09-01 Thread Trent Lloyd
Uploaded all historical log files in lp1893889-logs.tar.gz Uploaded dpkg_-l For convenient access also uploaded unattended-upgrades.log.4, unattended-upgrades-dpkg.log.4 and dpkg.log.6 which have the lines from the first instance of hitting the error

[Bug 1893889] Re: unattended-upgrade of nova-common failure due to conffile prompt

2020-09-01 Thread Trent Lloyd
** Attachment added: "unattended-upgrades-dpkg.log.4" https://bugs.launchpad.net/ubuntu/+source/unattended-upgrades/+bug/1893889/+attachment/5406808/+files/unattended-upgrades-dpkg.log.4 -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to

[Bug 1893889] Re: unattended-upgrade of nova-common failure due to conffile prompt

2020-09-01 Thread Trent Lloyd
** Attachment added: "dpkg_-l" https://bugs.launchpad.net/ubuntu/+source/unattended-upgrades/+bug/1893889/+attachment/5406810/+files/dpkg_-l -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1893889

[Bug 1893889] Re: unattended-upgrade of nova-common failure due to conffile prompt

2020-09-01 Thread Trent Lloyd
** Attachment added: "unattended-upgrades.log.4" https://bugs.launchpad.net/ubuntu/+source/unattended-upgrades/+bug/1893889/+attachment/5406807/+files/unattended-upgrades.log.4 -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.

[Bug 1893889] Re: unattended-upgrade of nova-common failure due to conffile prompt

2020-09-01 Thread Trent Lloyd
** Attachment added: "all unattended-upgrades and dpkg logs" https://bugs.launchpad.net/ubuntu/+source/unattended-upgrades/+bug/1893889/+attachment/5406806/+files/lp1893889-logs.tar.gz -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to

[Bug 1893889] [NEW] unattended-upgrade of nova-common failure due to conffile prompt

2020-09-01 Thread Trent Lloyd
Public bug reported: unattended-upgrades attempted to upgrade nova from 2:17.0.9-0ubuntu1 to 2:17.0.10-0ubuntu2.1 (bionic-security), however nova-common contains a modified conffile (/etc/nova/nova.conf) which prompts during upgrade and leaves apt/dpkg in a permanent error state requiring manual
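For reference, the standard way to keep dpkg from prompting on a modified conffile during unattended runs - these are stock apt/dpkg options, not a claim about what unattended-upgrades shipped here:

  // /etc/apt/apt.conf.d/local  (path is an example)
  Dpkg::Options {
     "--force-confdef";   // take the maintainer's default action where one exists
     "--force-confold";   // otherwise keep the locally modified conffile
  };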

[Bug 1891269] Re: perf is not built with python script support

2020-08-12 Thread Trent Lloyd
Logs are not required for this issue ** Changed in: linux (Ubuntu) Status: Incomplete => Confirmed

[Bug 1891269] [NEW] perf is not built with python script support

2020-08-12 Thread Trent Lloyd
Public bug reported: The "perf" tool supports python scripting to process events, this support is currently not enabled. $ sudo perf script -g python Python scripting not supported. Install libpython and rebuild perf to enable it. For example: # apt-get install python-dev (ubuntu) # yum

[Bug 1888047] Re: libnss-mdns slow response

2020-07-22 Thread Trent Lloyd
This output is generally quite confusing. Can you try removing the "search www.tendawifi.com" line and see how it differs?

[Bug 1888047] Re: libnss-mdns slow response

2020-07-22 Thread Trent Lloyd
Ideally using mdns4_minimal specifically (or I guess both, but it's generally not recommended to use mdns4 in most cases).
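For reference, the stock Ubuntu hosts line uses exactly that module (newer releases may also list resolve/myhostname entries):

  # /etc/nsswitch.conf
  hosts: files mdns4_minimal [NOTFOUND=return] dns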

[Bug 1888047] Re: libnss-mdns slow response

2020-07-21 Thread Trent Lloyd
Can you please confirm (1) The timing of "getent hosts indigosky.local", "host indigosky.local", "nslookup indigosky.local" and "nslookup indigosky.local 192.168.235.1" all done at the same time (mainly adding the direct lookup through the server, wondering if nslookup is doing something weird

[Bug 80900] Re: Avahi daemon prevents resolution of FQDNs ending in ".local" due to false negatives in the detection of ".local" networks

2020-07-19 Thread Trent Lloyd
This is fixed in Ubuntu 20.04 with nss-mdns 0.14 and later which does proper split horizon handling. ** Changed in: avahi (Ubuntu) Status: Triaged => Fix Released ** Changed in: nss-mdns (Ubuntu) Status: Confirmed => Fix Released

[Bug 1888047] Re: libnss-mdns slow response

2020-07-19 Thread Trent Lloyd
Rumen, when you use 'nslookup' it should go directly to the DNS server (127.0.0.53 [which is systemd-resolved]), which bypasses libnss-mdns but also typically doesn't have this 5 second delay (which avahi can have in some configurations). Seems most likely the 5 second delay is

[Bug 1886809] Re: Pulse connect VPN exits because unwanted avahi network starts

2020-07-09 Thread Trent Lloyd
I'm not sure it makes sense to just universally skip "tun*" interfaces (at least yet) but we may need to review the scenarios in which /etc/network/if-up.d/avahi-autoipd is executing. Helio: Can you provide a reproducer scenario? e.g. is this ubuntu server or ubuntu desktop, and what are the contents

[Bug 1871685] Re: [SRU] vagrant spits out ruby deprecation warnings on every call

2020-04-30 Thread Trent Lloyd
Hi Lucas, thanks for the patch updates. When I first submitted this we could have snuck it through before release without an SRU, but the patch backport now makes sense.

[Bug 1874021] Re: avahi-browse -av is empty, mdns not working in arduino ide

2020-04-22 Thread Trent Lloyd
OK thanks for the updates. So I can see a lot of mDNS packets in the lp1874021.pcap capture from various sources. I can see some printers, google cast, sonoff, etc. Curiously though when you do the avahi cache dump it isn't seeing any of these. Wireshark is showing malformed packets for many of

[Bug 1874192] [NEW] Remove avahi .local notification support (no longer needed)

2020-04-22 Thread Trent Lloyd
Public bug reported: As of nss-mdns 0.14 (which is now shipping in Focal 20.04) Avahi no longer needs to be stopped when a unicast .local domain is present; nss-mdns now has logic to make this work correctly when Avahi is running, for both multicast and unicast. We dropped the script that

[Bug 1874021] Re: avahi-browse -av is empty, mdns not working in arduino ide

2020-04-22 Thread Trent Lloyd
Looking at jctl.txt things look normal: the server starts up, gets "server startup complete" and then adds the appropriate IP for wlan0. The config file looks normal. Can you please try the following to collect extra debug info: (1) Start a tcpdump and leave it running - tcpdump --no-promiscuous-mode
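The capture command is cut off above; a typical full invocation for mDNS debugging would be (interface name assumed):

  $ sudo tcpdump --no-promiscuous-mode -i wlan0 -w lp1874021.pcap port 5353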

[Bug 327362] Re: Some ISPs have .local domain which disables avahi-daemon

2020-04-22 Thread Trent Lloyd
For anyone looking at this in 2020, this is fixed in nss-mdns 0.14 which is in Ubuntu Focal 20.04 - it will now correctly pass through unicast .local lookups.

[Bug 1871685] Re: vagrant spits out ruby deprecation warnings on every call

2020-04-19 Thread Trent Lloyd
** Patch added: "full merge debdiff from old ubuntu version to new ubuntu version" https://bugs.launchpad.net/ubuntu/+source/vagrant/+bug/1871685/+attachment/5356998/+files/lp1871685_complete-merge_2.2.6+dfsg-2ubuntu1_2.2.7+dfsg-1ubuntu1.debdiff -- You received this bug notification because

[Bug 1871685] Re: vagrant spits out ruby deprecation warnings on every call

2020-04-19 Thread Trent Lloyd
** Patch added: "partial merge debdiff showing only the delta to current debian version" https://bugs.launchpad.net/ubuntu/+source/vagrant/+bug/1871685/+attachment/5356999/+files/lp1871685_merge-only_2.2.7+dfsg-1_2.2.7+dfsg-1ubuntu1.debdiff -- You received this bug notification because you

[Bug 1871685] Re: vagrant spits out ruby deprecation warnings on every call

2020-04-19 Thread Trent Lloyd
Please sponsor this upload of a merge of Vagrant 2.2.7+dfsg-1 from Debian. It is a minor upstream version bump (2.2.6 -> 2.2.7) plus contains new patches from Debian to fix multiple Ruby 2.7 deprecation warnings on every command invocation. Two debdiffs attached: partial merge debdiff showing

[Bug 1871685] Re: vagrant spits out ruby deprecation warnings on every call

2020-04-19 Thread Trent Lloyd
I was wrong: this package requires a merge due to a small delta on the upstream Debian package, so I will have to prepare a patch.

[Bug 1871685] Re: vagrant spits out ruby deprecation warnings on every call

2020-04-19 Thread Trent Lloyd
This has been fixed in a new Debian release (2.2.7+dfsg-1). Tested a package built on Focal and these warnings go away. Specifically they added two patches: 0006-Fix-warnings-for-ruby-2.7.patch 0007-Fix-more-warnings-under-ruby-2.7.patch Since this is in universe but a popular tool, hoping we can

[Bug 1873670] Re: python3-mapnik not working due to stale icu65: references

2020-04-19 Thread Trent Lloyd
= Test Case = Simple test case per upstream docs: python3 -c "import mapnik;print(mapnik.__file__)" # should return the path to the python bindings and no errors Fails with the current package (1:0.0~20180723-588fc9062-3ubuntu2) - works with a no-change rebuilt package ** Changed in:

[Bug 1873670] Re: python3-mapnik not working due to stale icu65: references

2020-04-19 Thread Trent Lloyd
A no-change rebuild seems to correct this ** Patch added: "lp1873670_mapnik_icu66.debdiff" https://bugs.launchpad.net/ubuntu/+source/python-mapnik/+bug/1873670/+attachment/5356971/+files/lp1873670_mapnik_icu66.debdiff

[Bug 1870824] Re: Errors in script /usr/lib/avahi/avahi-daemon-check-dns.sh

2020-04-07 Thread Trent Lloyd
(For clarity, nss-mdns 0.14 does this behaviour itself automatically; 0.10 in bionic does not.)

[Bug 1870824] Re: Errors in script /usr/lib/avahi/avahi-daemon-check-dns.sh

2020-04-07 Thread Trent Lloyd
For focal we should actually remove this script, since nss-mdns automatically performs this behaviour and doesn't rely on the "hack" of stopping avahi-daemon to prevent resolution of .local domains via DNS. If this error is serious we could consider a backport to stable releases.

[Bug 1868940] [NEW] [focal] gnome-calendar crashes when moving it full screen to another screen (SIGSEGV in gdk_rgba_to_string())

2020-03-25 Thread Trent Lloyd
Public bug reported: I can consistently crash gnome-calendar by dragging it from one monitor over to the other monitor at the top of the screen so that it moves into full screen. It seems to move to that screen with the window resized but the contents not redrawn. Focal with all updates as of
