Public bug reported:
It would be great if focal's systemd could have
https://github.com/systemd/systemd/commit/71180f8e57f8fbb55978b00a13990c79093ff7b3
backported to it.
[Impact]
We have observed that kexec'ing to another kernel will fail as the drive
containing the `kexec` binary has been
I would also like to see this change:
* ubuntu-advantage-tools and its dependencies add about 3MB to ubuntu-minimal
installations (checked by installing ubuntu-minimal's Depends in a Docker
container with and without u-a-t)
* it installs a systemd timer which runs (to do nothing, as the service
Public bug reported:
Debian bug #968927 is present in the version of debootstrap in focal,
which means that mk-sbuild cannot execute successfully within a Docker
environment.
https://salsa.debian.org/installer-team/debootstrap/-/commit/87cdebbcad6f4e16ba711227cbbbd70039f88752
is the fix for
** Patch added: "lp1948713.debdiff"
https://bugs.launchpad.net/ubuntu/+source/debootstrap/+bug/1948713/+attachment/5536032/+files/lp1948713.debdiff
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
Public bug reported:
To quote:
# AUTOCHECK:
# should mdadm run periodic redundancy checks over your arrays? See
# /etc/cron.d/mdadm.
AUTOCHECK=true
# AUTOSCAN:
# should mdadm check once a day for degraded arrays? See
# /etc/cron.daily/mdadm.
AUTOSCAN=true
Neither /etc/cron.d/mdadm nor
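A quick way to confirm the report (a hypothetical sketch, not part of the bug itself) is to check whether the cron jobs that the debconf comments reference actually exist:

```python
import os

def missing_cron_jobs(paths, exists=os.path.exists):
    """Return the subset of `paths` not present on disk.

    The `exists` predicate is injectable so the check can be
    exercised without touching the real filesystem.
    """
    return [p for p in paths if not exists(p)]

# The two files referenced by the debconf comments above:
MDADM_CRON_JOBS = ("/etc/cron.d/mdadm", "/etc/cron.daily/mdadm")
```

On an affected system, `missing_cron_jobs(MDADM_CRON_JOBS)` would return both paths.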
For hirsute, the bug does not reproduce on upgrade from the release day
image. However, it can present when upgrading between releases.
To test, I launched a groovy instance with an old cloud-init (the same
image as previously for groovy validation).
I performed a `do-release-upgrade -d` (-d,
In the cloud-init log, I see:
2021-04-19 06:58:24,455 - stages.py[DEBUG]: applying net config names for
{'version': 2, 'ethernets': {'eth0': {'match': {'driver': 'bcmgenet smsc95xx
lan78xx'}, 'set-name': 'eth0', 'dhcp4': True, 'optional': True}}}
2021-04-19 06:58:24,456 - __init__.py[DEBUG]: no
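For context, the `match: driver:` value in the config above is a space-separated list of driver globs; a hypothetical helper mirroring that matching semantics (not cloud-init's actual code) would be:

```python
from fnmatch import fnmatch

def driver_matches(driver: str, match_expr: str) -> bool:
    """True when `driver` matches any of the space-separated glob
    patterns in a netplan-style `match: driver:` expression."""
    return any(fnmatch(driver, pattern) for pattern in match_expr.split())
```

So `driver_matches("smsc95xx", "bcmgenet smsc95xx lan78xx")` is true, and eth0 keeps its name on a Raspberry Pi NIC.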
An alternative explanation: Azure has a pre-provisioning system, whereby
they'll partially boot a machine and then hold it in that state until a
user requests a system which corresponds. This is implemented using the
netlink socket you see: the fabric will reconnect that socket once it is
ready
For xenial, I'm performing the same process but with the
`ubuntu:bb8e87956495` image, serial of 20201210.
UPGRADE does fail:
$ CLOUD_INIT_OS_IMAGE=ubuntu:bb8e87956495::ubuntu::xenial
CLOUD_INIT_CLOUD_INIT_SOURCE=UPGRADE pytest
For bionic, I'm performing the same process but with the
`ubuntu:c2bdb694ecc2` image, serial of 20201211.1.
UPGRADE does fail:
$ CLOUD_INIT_OS_IMAGE=ubuntu:c2bdb694ecc2::ubuntu::bionic
CLOUD_INIT_CLOUD_INIT_SOURCE=UPGRADE pytest
For focal, I'm performing the same process but with the
`ubuntu:b321e3832dbb` image, serial of 20201210.
UPGRADE does fail:
$ CLOUD_INIT_OS_IMAGE=ubuntu:b321e3832dbb::ubuntu::focal
CLOUD_INIT_CLOUD_INIT_SOURCE=UPGRADE pytest
For groovy, I'm testing with the `ubuntu:bac1692e9ec7` image, with a
serial of 20201210. First, I confirm that the test does trigger the bug
on UPGRADE to the version of cloud-init in the release:
$ CLOUD_INIT_OS_IMAGE=ubuntu:bac1692e9ec7::ubuntu::groovy
CLOUD_INIT_CLOUD_INIT_SOURCE=UPGRADE
I'm performing verification of this locally using the cloud-init
integration testing framework. Specifically, I'm running
test_upgrade_package[0] with the following diff applied (to trigger this
bug):
@@ -104,18 +104,19 @@ def test_upgrade(session_cloud: IntegrationCloud):
@pytest.mark.ci
Public bug reported:
When performing a `do-release-upgrade -d` on my groovy system, I saw:
Setting up python3-software-properties (0.99.8) ...
Failed to byte-compile
/usr/lib/python3/dist-packages/softwareproperties/extendedsourceslist.py:
File
Thanks for the reply, Łukasz: we agree, this is a bugfix so doesn't need
an FFe.
** Changed in: cloud-init (Ubuntu)
Status: New => Invalid
https://bugs.launchpad.net/bugs/1916684
On Tue, Mar 23, 2021 at 07:53:59AM -, Alfonso Sanchez-Beato wrote:
> Is this going to be backported to focal (see LP: #1919493)?
Yep, the SRU process has been started already.
Yep, that's what I've found; cloud-init is just waiting for its later
stages to run, which are blocked by snapd.seeded.service exiting.
** Changed in: cloud-init
Status: New => Invalid
Given that the logind issue is an AppArmor issue and, per my previous
comment, "the two running jobs are systemd-logind.service and
snapd.seeded.service", I suspect that we'll find that snapd is running
into similar sorts of issues. I'll take a quick look now.
> Interfaces of type 'internal' may be used for other things than VLANs
> so depending on what you want to match on it may or may not be precise
> enough.
So the cloud-init code in question is used in a couple of (relevant)
ways: (a) to determine the state of any physical interfaces for which we
Another question: is there a canonical way to determine if OVS isn't up?
Currently I'm trying to execute a command and looking for "database
connection failed" in the output, is that appropriate?
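As a sketch of that heuristic (the function names are hypothetical, and `list-br` stands in for whichever command is actually executed):

```python
import subprocess

def ovs_db_down(stderr_text: str) -> bool:
    """The string check described above: treat OVS as not up when
    ovs-vsctl's stderr mentions a database connection failure."""
    return "database connection failed" in stderr_text.lower()

def ovs_is_up() -> bool:
    try:
        result = subprocess.run(
            ["ovs-vsctl", "list-br"], capture_output=True, text=True
        )
    except FileNotFoundError:
        return False  # ovs-vsctl is not installed, so OVS cannot be up
    return result.returncode == 0 and not ovs_db_down(result.stderr)
```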
> But I guess it would be reasonable to split the work up in bite sized
> chunks as long as we allow for supporting this in the design.
Having looked a little more, I don't think an incremental approach buys
us much here: we'd have to replace the `udevadm` code with `ovs-vsctl`
code in the next
To ensure that we understand the consequences of these changes, I've
spent a bit of time tracking down everything this will affect in
cloud-init by looking up the various call chains of `get_interfaces`:
Called by:
* `_get_current_rename_info`
* `_rename_interfaces`
*
Hey Frode,
Now moving on from the "does this system have any OVS-managed
interfaces?" to "how can I tell if a particular interface is managed by
OVS?":
We discussed using `udevadm info` to determine if an interface is
OVS-managed:
> If it is sufficient to know that this is a Open vSwitch
I've figured out why my LXD reproducer doesn't reproduce exactly:
NoCloud runs at both local and net stages, so the code in question is
called earlier in boot than the OpenStack data source is. For now, I'll
proceed with the synthetic reproducer: calling the Python code which
fails directly.
> The default `datapath_type` is 'system', so if it is not explicitly
> specified for a bridge it will not be visible in `ovs-vsctl show`
> output, but 'system' will still be the `datapath_type` used.
Great, I figured it wasn't material.
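So any code consuming bridge records from `ovs-vsctl` has to apply that default itself; a minimal sketch (hypothetical function name, assuming records are parsed into dicts):

```python
def effective_datapath_type(bridge_record: dict) -> str:
    """Apply the Open vSwitch default described above: an absent
    (or empty) datapath_type means 'system'."""
    return bridge_record.get("datapath_type") or "system"
```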
> You can query Open vSwitch at runtime for which datapath
On Mon, Feb 15, 2021 at 10:18:17PM -, Steve Langasek wrote:
> On Mon, Feb 15, 2021 at 02:21:28PM -, James Falcon wrote:
> > That said, cloud-image-utils is in main[2] and therefore all of its
> > dependencies also need to be in main, so we are not free to choose our
> > own replacement
Thanks Frode, that's really helpful!
I don't see the `datapath_type` in my output:
e2d9c9b4-739c-4333-a372-4d46585fcfb9
    Bridge ovs-br
        fail_mode: standalone
        Port ovs-br
            Interface ovs-br
                type: internal
        Port bond0
            Interface bond0
** Also affects: cloud-utils (Ubuntu)
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1915077
Title:
genisoimage may be going away
Drive-by mark as Incomplete, as the way initramfses and cloud images
interact has changed substantially since 2014.
** Changed in: cloud-init (Ubuntu Trusty)
Status: Triaged => Won't Fix
** Changed in: cloud-init (Ubuntu)
Status: Triaged => Incomplete
** Changed in: cloud-init
I can confirm that udev does report the VLAN as OVS-managed:
# udevadm info /sys/class/net/ovs-br.100
P: /devices/virtual/net/ovs-br.100
L: 0
E: DEVPATH=/devices/virtual/net/ovs-br.100
E: INTERFACE=ovs-br.100
E: IFINDEX=5
E: SUBSYSTEM=net
E: USEC_INITIALIZED=4703175
E: ID_MM_CANDIDATE=1
E:
On Fri, Jan 22, 2021 at 10:51:25PM -, Ryan Harper wrote:
> Thanks for doing most of the digging here @Oddbloke; I suspect as with
> bond and bridges for ovs, we'll need a special case to check if a vlan
> entry is also OVS, much like we did for bonds/bridges:
>
>
On Fri, Jan 22, 2021 at 10:48:56PM -, David Ames wrote:
> I am not sure I have any definitive answers but here are my thoughts.
>
> Compare a VLAN device created with `ip link add`
>
> ip link add link enp6s0 name enp6s0.100 type vlan id 100
>
> cat /sys/class/net/enp6s0.100/uevent
>
OK, I have a suspicion of what's going on here. I've compared two
systems: one launched with the network config above (and
updated/rebooted), and one launched with that config minus the
"openvswitch: {{}}" line.
When I compare /sys/class/net/ovs-br.100/addr_assign_type in the two
systems, I see
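For reference, `addr_assign_type` is a single integer in sysfs; a hypothetical reader, with the kernel's value meanings noted:

```python
from pathlib import Path

# Kernel meanings: 0 = NET_ADDR_PERM (permanent, from hardware),
# 1 = NET_ADDR_RANDOM (randomly generated), 2 = NET_ADDR_STOLEN,
# 3 = NET_ADDR_SET (set by userspace).
def addr_assign_type(ifname: str, sysfs_root: str = "/sys/class/net") -> int:
    return int(Path(sysfs_root, ifname, "addr_assign_type").read_text())
```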
Also of note: the MAC address being reported as duplicated in both the
reported log and in the exception I see is not present in the specified
configuration. It's presumably being generated by OVS and applied to
ovs-br (and therefore inherited by ovs-br.100?). I'm going to see if a
more minimal
The network config below closely reproduces the issue when a LXD VM is
launched with it, has openvswitch-switch installed on it (e.g. via a
manual DHCP on enp5s0), and is `cloud-init clean --logs --reboot`ed.
The log does not contain the error message, but calling
id: 100
link: br-ex
mtu: 1500
** Changed in: cloud-init (Ubuntu)
Assignee: (unassigned) => Dan Watkins (oddbloke)
https://bugs.launchpad.net/bugs/1912844
Title:
B
** Changed in: cloud-init
Status: Confirmed => Fix Released
https://bugs.launchpad.net/bugs/1910835
Title:
Azure IMDS publicKeys contain \r\n which prevents ssh access to vms
I've just attached the output of our verification testing runs for each
series.
Each series exhibits two test failures, neither of which is material to
the change in question (and both of which reproduce against the
cloud-init currently in the archive).
`test_datasource_rbx_no_stacktrace` is
** Attachment added: "groovy verification log"
https://bugs.launchpad.net/cloud-init/+bug/1910835/+attachment/5452348/+files/groovy.txt
https://bugs.launchpad.net/bugs/1910835
** Attachment added: "focal verification log"
https://bugs.launchpad.net/cloud-init/+bug/1910835/+attachment/5452347/+files/focal.txt
https://bugs.launchpad.net/bugs/1910835
** Attachment added: "bionic verification log"
https://bugs.launchpad.net/cloud-init/+bug/1910835/+attachment/5452346/+files/bionic.txt
https://bugs.launchpad.net/bugs/1910835
** Description changed:
== Begin SRU Template ==
[Impact]
- The previous version of cloud-init used OpenSSL to process the SSH keys
provided by the Azure platform. cloud-init 20.4 replaced that code with a more
efficient implementation, but one which did not use OpenSSL: this meant that
** Description changed:
== Begin SRU Template ==
[Impact]
- This release is only a single functional cherry-pick which solely affects
Azure platform. It is a critical bug we wish to release as soon as possible
+ The previous version of cloud-init used OpenSSL to process the SSH keys
So I've done a little more testing. On boot, with /sys/power/resume
unset (i.e. 0:0), I see this in the logind debug logs:
Jan 08 09:47:11 surprise systemd-logind[1887]: Sleep mode "disk" is supported
by the kernel.
Jan 08 09:47:11 surprise systemd-logind[1887]: /dev/dm-2: is a candidate
Public bug reported:
# Steps to reproduce
1) Login as a regular user.
2) `sudo systemctl restart systemd-logind`
This boots you back to GDM, as you would expect.
3) Login as a user.
4) Wait a few seconds.
# Expected behaviour
I can continue to use my system normally.
# Actual behaviour
My
And, for clarity, when systemd does hibernate, I haven't had issues
restoring: it's just getting systemd to find the correct swap space to
use that's been the issue.
On Thu, Jan 07, 2021 at 03:23:36PM -, Dimitri John Ledkov wrote:
> Also, you do disable secureboot as well right? Because with secureboot
> on, even though the hibernation image is created, it will be ignored and
> not used upon resume.
Yep, Secure Boot is disabled on this system.
> So, looking at the systemd code at
> https://github.com/systemd/systemd/blob/c5b6b4b6d08cf4c16a871401358faeb5a186c02a/src/shared/sleep-config.c#L422-L426,
> perhaps setting /sys/power/resume to the correct device actually was
> the workaround/fix?
The confusing part about this is that I don't
I enabled systemd-logind debug logging, and I saw:
Jan 06 17:45:18 surprise systemd-logind[73027]: Got message type=method_call
sender=:1.264 destination=:1.220 path=/org/freedesktop/login1
interface=org.freedesktop.login1.Manager member=CanHibernate cookie=6
reply_cookie=0 signature=n/a
> $ cat /sys/power/resume
> 0:0
This was a red herring. What I have found consistently fixes this is:
$ sudo swapoff /dev/sda2
$ sudo swapon -p 1 /dev/sda2
Hibernate then succeeds. However, this is not how I want my system
configured: I have a small swap partition on my SSD, which I would
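systemd selects its hibernation target from the available swap devices, preferring higher priority; a simplified sketch of that selection (the real logic lives in systemd's sleep-config.c and also checks device types, sizes, and /sys/power/resume):

```python
def best_swap_for_hibernate(proc_swaps: str):
    """Pick the highest-priority swap entry from /proc/swaps content.

    Simplified sketch: returns a (device, priority) tuple, or None
    when no swap is configured.
    """
    best = None
    for line in proc_swaps.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 5:
            name, priority = fields[0], int(fields[4])
            if best is None or priority > best[1]:
                best = (name, priority)
    return best
```

With the layout described in this bug (/dev/dm-2 at priority -2, /dev/sda2 at -3), the small dm-2 partition wins; `swapon -p 1` raises /dev/sda2's priority so it is chosen instead, matching the observed fix.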
Oh, and `free -h`:
               total        used        free      shared  buff/cache   available
Mem:            15Gi       6.0Gi       667Mi       567Mi       9.0Gi       8.8Gi
Swap:           98Gi        13Mi        98Gi
/sys/power/image_size represents the required amount of space for the
image; that said, the machine has 16G RAM total, so even if that were
maxed out, it would fit into 97.7G comfortably.
It sounds to me like there's no cloud-init aspect here, so I'm going to
move our tasks to Incomplete (so they'll expire out eventually). Please
do set them back if I've missed something!
** Changed in: cloud-init (Ubuntu)
Status: Confirmed => Incomplete
** Changed in: cloud-init (Ubuntu
One thing I have noticed, is that on boot:
$ cat /sys/power/resume
0:0
I can't test right now, but I _think_ that before the holiday break
setting that to 8:2[0] and restarting systemd-logind meant that
hibernate did then work.
[0] $ ls -l /sys/dev/block/8:2
lrwxrwxrwx 1 root root 0 Jan 5
Public bug reported:
I have plenty of swap space configured in my system:
$ cat /sys/power/image_size
6642229248 # ~ 6.2GiB
$ swapon
NAME TYPE SIZE USED PRIO
/dev/dm-2 partition 980M 0B -2
/dev/sda2 partition 97.7G 0B -3
But when I attempt to hibernate:
$ sudo systemctl
Verification was completed before the holiday break; tags updated.
** Tags removed: verification-needed verification-needed-bionic
verification-needed-focal verification-needed-groovy verification-needed-xenial
** Tags added: verification-done verification-done-bionic
verification-done-focal
https://github.com/canonical/cloud-init/pull/748 will add them to our
devel packaging for hirsute.
https://bugs.launchpad.net/bugs/1908548
Title:
Add manpages to packaging branches
Marking this as Incomplete as Ryan has provided next debugging steps.
Please do move it back to New once you've responded!
** Changed in: cloud-init (Ubuntu)
Status: New => Incomplete
** Also affects: cloud-init (Ubuntu Focal)
Importance: Undecided
Status: New
** Also affects: cloud-init (Ubuntu Hirsute)
Importance: Undecided
Status: New
** Also affects: cloud-init (Ubuntu Groovy)
Importance: Undecided
Status: New
** Also affects: cloud-init
** Changed in: cloud-init
Status: Confirmed => Fix Released
https://bugs.launchpad.net/bugs/1691489
Title:
fstab entries written by cloud-config may not be mounted
** Attachment added: "Verification logs for KVM"
https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1905599/+attachment/5444544/+files/nocloud-nocloud-kvm-sru-20.4.txt
** Attachment added: "Verification logs for LXD"
https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1905599/+attachment/5444543/+files/nocloud-lxd-sru-20.4.txt
The attached are the logs of a curtin run with the -proposed cloud-init.
The only "failures", indicate that expected-to-fail tests instead passed
(because the underlying issues have since been addressed in Ubuntu).
** Attachment added: "curtin verification testing logs"
Hi Ian,
I've just launched such a container and I see a bunch of non-cloud-init
errors in the log and when I examine `systemctl list-jobs`, I see that
the two running jobs are systemd-logind.service and
snapd.seeded.service:
root@certain-cod:~# systemctl list-jobs
JOB UNIT
** Changed in: cloud-init (Ubuntu)
Status: New => Triaged
** Changed in: cloud-init (Ubuntu)
Importance: Undecided => High
https://bugs.launchpad.net/bugs/1906187
Looking through the journal further, I do see non-NVidia call traces
such as:
Nov 27 09:43:52 surprise kernel: INFO: task qemu-system-x86:16736 blocked for
more than 120 seconds.
Nov 27 09:43:52 surprise kernel: Tainted: P OEL
5.8.0-29-generic #31-Ubuntu
Nov 27 09:43:52
Public bug reported:
The system was restored from hibernation this morning, but the issue did
not exhibit for ~30 minutes after "boot". I have also seen hard locks
without hibernation (but they have never produced any journal output, so
may be a different issue). Examining `journalctl -k`, I see
Hi Aman,
Thanks for the bug report! Configuring the FQDN to point at the
loopback address has been cloud-init's behaviour since 2011 on Ubuntu
(https://github.com/canonical/cloud-init/commit/6d25c040ee566f6ef85352d7b52eb5947230f78a)
and 2012 on Red Hat (https://github.com/canonical/cloud-
Trace from that mainline kernel:
kernel: [ cut here ]
kernel: NETDEV WATCHDOG: enp5s0 (r8169): transmit queue 0 timed out
kernel: WARNING: CPU: 2 PID: 0 at net/sched/sch_generic.c:442
dev_watchdog+0x24c/0x250
kernel: Modules linked in: scsi_transport_iscsi binfmt_misc
I've just tested, and this doesn't seem to reproduce when launching from
a captured image (with 90-hotplug-azure.yaml restored and `cloud-init
clean` executed). So I think I've exhausted the ways in which I can
attempt to gain more insight into what's happening during the part of
boot where this
(Added cloud-images for visibility.)
** Also affects: cloud-images
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1902960
Title:
Upgrade from
Thanks for the explanation, Dan! I was off down a wrong path, I
appreciate the correction.
I've just downloaded the Azure image from cloud-images.u.c and it
includes this in `/etc/netplan/90-hotplug-azure.yaml`:
# This netplan yaml is delivered in Azure cloud images to support
# attaching and
Hi yhzou,
Thanks for using (and testing!) Ubuntu, and for filing this bug.
Setting a default password in the cloud-images.ubuntu.com images would
make them insecure: any Ubuntu instance launched from them would have a
backdoor installed, essentially.
There are a couple of options: you could
** Changed in: cloud-init (Ubuntu)
Status: New => Incomplete
** Changed in: systemd (Ubuntu)
Status: Invalid => New
https://bugs.launchpad.net/bugs/1902960
OK, I've managed to reproduce this (in a non-Juju launched VM). The
ordering of these journal lines look suspicious to me:
Nov 09 17:41:51.091033 ubuntu systemd[1]: Starting udev Coldplug all Devices...
Nov 09 17:41:51.236309 ubuntu systemd[1]: Finished Load Kernel Modules.
Nov 09
When investigating another issue, I found this line in my journal,
repeated a few times:
nm-dispatcher[3938]: /etc/network/if-up.d/resolved: 12: mystatedir: not
found
Not sure if that's related, but it seems suspicious at least.
Hey folks,
Thanks for the report! If someone could run `cloud-init collect-logs`
on an affected instance, and upload the produced tarball to this bug, we
can dig into it further. The contents of /etc/netplan would also be
very handy.
(Once attached, please move this back to New.)
Cheers,
Hi Kai-Heng,
Here is the (much longer) trace from that kernel.
Thanks!
kernel: [ cut here ]
kernel: NETDEV WATCHDOG: enp5s0 (r8169): transmit queue 0 timed out
kernel: WARNING: CPU: 2 PID: 0 at net/sched/sch_generic.c:442
dev_watchdog+0x24c/0x250
kernel: Modules linked
Hi Alex,
Thanks for filing this bug report! I just launched a trusty instance,
and both of the entries in menu.lst look like they should not trigger
this bug ("root\t(hd0)"). I then successfully upgraded the instance to
xenial without seeing this issue, and I still see "root\t(hd0)" in all
the
The code causing the failure (which Paride linked to) does specifically
know that the output is destined for /dev/console:
    if console:
        conpath = "/dev/console"
        if os.path.exists(conpath):
            with open(conpath, 'w') as wfh:
                wfh.write(text)
Still seeing this with that kernel:
kernel: [ cut here ]
kernel: NETDEV WATCHDOG: enp5s0 (r8169): transmit queue 0 timed out
kernel: WARNING: CPU: 8 PID: 0 at net/sched/sch_generic.c:442
dev_watchdog+0x25b/0x270
kernel: Modules linked in: xt_comment iptable_mangle
** Changed in: cloud-init (Ubuntu)
Status: Confirmed => In Progress
** Changed in: cloud-init (Ubuntu)
Assignee: (unassigned) => Markus Schade (lp-markusschade)
** Changed in: cloud-init (Ubuntu)
Status: New => Triaged
** Changed in: cloud-init (Ubuntu)
Importance: Undecided => Medium
https://bugs.launchpad.net/bugs/1892447
(Oh, and I should say: please move this back to New once you've attached
the tarball!)
https://bugs.launchpad.net/bugs/1900904
Title:
netplan yaml for rpi groovy server prevents usb
Hey Paul,
Thanks for the bug report! If you could run `cloud-init collect-logs`
on an affected machine and attach the tarball here, we can dig into this
more based on that info.
Cheers,
Dan
** Changed in: cloud-init (Ubuntu)
Status: New => Incomplete
On Fri, Oct 16, 2020 at 03:05:09AM -, Kai-Heng Feng wrote:
> Can you please test this kernel:
> https://people.canonical.com/~khfeng/lp1896576/
Thanks for the kernel! Still seeing this, unfortunately:
kernel: [ cut here ]
kernel: NETDEV WATCHDOG: enp5s0 (r8169):
Actually, looks like I spoke too soon. I just upgraded to
5.8.0-22-generic and I'm seeing the issue still:
Oct 13 10:43:37 surprise kernel: [ cut here ]
Oct 13 10:43:37 surprise kernel: NETDEV WATCHDOG: enp5s0 (r8169): transmit
queue 0 timed out
Oct 13 10:43:37 surprise
On Sat, Oct 10, 2020 at 09:01:15PM -, Kai-Heng Feng wrote:
> Dan, it will be great if you can revert workaround [1] and apply
> possible fix [2] to see if it helps.
>
> I guess you no longer see the issue because of the workaround.
>
> [1]
>
Thanks for the test kernel! I can no longer reproduce this on the most
recent two kernels in groovy (5.8.0-19-generic, 5.8.0-20-generic) nor
with that test kernel.
I think we can mark this Incomplete for groovy too, and I'll respond if
I see this again.
Thanks to you and Kai-Heng for all your
On Thu, Oct 01, 2020 at 01:41:46PM -, Dan Watkins wrote:
> > How did resolve/netif get owned by root?
>
> I don't believe I've ever touched it before, so I'm not sure. I haven't
> rebooted since that last comment, I'll do that at some point today to
> check if ownersh
I'm still seeing this issue, and it now sometimes appears on boot
without me having done anything. What can I do to help move this
forward?
https://bugs.launchpad.net/bugs/1874464
> How did resolve/netif get owned by root?
I don't believe I've ever touched it before, so I'm not sure. I haven't
rebooted since that last comment, I'll do that at some point today to
check if ownership reverts to root.
If it does, what debugging can I perform to determine what's doing it?
I've just tested: changing the ownership of /run/systemd/resolve/netif
to systemd-resolve:systemd-resolve resolves (haha) this issue. The
first restart of systemd-resolved after the change did not address it
(because the permissions issue means that the state was not persisted);
on a network
On Thu, Sep 24, 2020 at 09:42:28PM -, Balint Reczey wrote:
> The latest upload (246.6-1ubuntu1) may have fixed this as well.
This happened again just now when I upgraded my system to the new
systemd, so I assume not.
Here's a log snippet of restarting:
Sep 29 09:28:14 surprise systemd[1]:
Public bug reported:
Running groovy on the desktop, with the systemd packages that migrated
today(/overnight EDT).
# Steps to reproduce:
1) `systemctl restart systemd-resolved.service`
(This is a minimal reproducer, but I first saw this after an apt upgrade
of systemd.)
# Expected behaviour:
I haven't been able to reproduce in a lxd container or an EC2 instance;
I don't have a convenient way of testing a different NetworkManager
system, unfortunately.
** Changed in: cloud-init (Ubuntu)
Status: Triaged => In Progress
** Changed in: cloud-init (Ubuntu)
Assignee: (unassigned) => Dan Watkins (oddbloke)
On Sat, Sep 05, 2020 at 08:46:51PM -, Kreisch István András wrote:
> I'm using a really old kernel with this same error: v3.13.170 with
> Ubuntu 14.04.6. I could circumvent the issue by reducing the speed of
> the ethernet interface from 1Gb to 100Mb using ethtool.
>
> ethtool -s eth3 speed 100
** Changed in: curtin (Ubuntu)
Assignee: (unassigned) => György Szombathelyi (gyurco)
** Changed in: curtin (Ubuntu)
Status: Confirmed => In Progress
** Changed in: curtin (Ubuntu)
Status: Confirmed => Triaged
https://bugs.launchpad.net/bugs/1894217
Title:
2.8.2 deploy and commission fails corrupted bootorder variable