Bionic should have been fixed for a while now, updating the status.
** Changed in: linux (Ubuntu Bionic)
Status: Fix Committed => Fix Released
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
Also, since the fix was identified to be in the kernel, setting the
qemu tasks to Won't Fix.
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1730717
Title:
Some VMs fail to reboot with "watchdog: BUG: soft lockup - CPU#0 stuck for
Thanks, good to see this getting into releases now.
I wonder about Bionic's status - any update on that?
** Changed in: qemu-kvm (Ubuntu Zesty)
Status: New => Won't Fix
** Changed in: qemu-kvm (Ubuntu Bionic)
Status: Confirmed => Won't Fix
** Changed in: qemu-kvm (Ubuntu Artful)
This bug was fixed in the package linux - 4.13.0-38.43
---
linux (4.13.0-38.43) artful; urgency=medium
* linux: 4.13.0-38.43 -proposed tracker (LP: #1755762)
* Servers going OOM after updating kernel from 4.10 to 4.13 (LP: #1748408)
- i40e: Fix memory leak related filter
This bug is awaiting verification that the kernel in -proposed solves
the problem. Please test the kernel and update this bug with the
results. If the problem is solved, change the tag
'verification-needed-artful' to 'verification-done-artful'. If the
problem still exists, change the tag to 'verification-failed-artful'.
Hello Everyone,
Google led me here with a search for a "soft lockup CPU#" error I am
experiencing.
I run Ubuntu Server on Citrix XenServer and have never had this issue
with Ubuntu Server 12.04, 14.04 or 16.04. I am here because 18.04 has
this issue upon using 'reboot', although 'poweroff' does
** Changed in: linux (Ubuntu Artful)
Status: In Progress => Fix Committed
** Changed in: linux (Ubuntu Zesty)
Status: Incomplete => Won't Fix
** Description changed:
- This is impacting us for ubuntu autopkgtests. Eventually the whole
- region ends up dying because each worker is hit by this bug in turn and
- backs off until the next reset (6 hourly).
+ == SRU Justification ==
+
+ The fix to bug 1672819 can cause a lockup because it
We've traced the problem to "UBUNTU: SAUCE: exec: ensure file system
accounting in check_unsafe_exec is correct." cking has a fix which will
be used for zesty and artful. I've reverted the patch in bionic since
there's a fix available for golang and we do not want Ubuntu userspace
to become
** Tags removed: kernel-key
** Tags added: kernel-da-key
I would note that the kernel watchdog timeouts here are always at 20-odd
seconds. They are not increasing, so whatever is occurring is
progressing, at least as far as the kernel is concerned. If we assume
the systemd log is still working (and it was shortly before the event
when it reported
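For context, the ~20 second figure is consistent with the kernel's
soft-lockup detection window, which fires at roughly twice
kernel.watchdog_thresh (default 10s). A quick sanity check on a guest -
just a sketch; the fallback value is an assumption for hosts where the
knob is absent:

```shell
# Soft lockups are reported after roughly 2 * watchdog_thresh seconds,
# so the default of 10 lines up with the ~20s messages in these logs.
thresh=$(cat /proc/sys/kernel/watchdog_thresh 2>/dev/null || echo 10)
echo "soft lockup reported after ~$((2 * thresh))s"
# To make the watchdog less trigger-happy on a test guest (a diagnostic
# aid, not a fix): sudo sysctl -w kernel.watchdog_thresh=30
```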
Just a heads up - this apparently became much harder for me to reproduce
at will. We're still seeing it in actual workloads but I'm having
trouble recreating manually.
My current strategy is to start stress-ng on a number of machines and
then constantly reboot them, with the idea that this will
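The stress-ng + reboot-loop strategy above could be scripted roughly
like this. It is only a sketch: the guest names, the ubuntu@ login and
the 120s settle time are assumptions, not the exact setup used here.

```shell
# Sketch of the "stress then constantly reboot" reproducer strategy.
: "${NODES:=vm01 vm02 vm03}"   # guests to hammer (assumed names)
: "${REBOOTS:=30}"             # reboot rounds per guest

start_load() {
  for node in $NODES; do
    # keep the guest CPUs busy so the reboot races have work in flight
    ssh "ubuntu@$node" 'nohup stress-ng --cpu 0 --timeout 600 >/dev/null 2>&1 &'
  done
}

reboot_loop() {
  i=1
  while [ "$i" -le "$REBOOTS" ]; do
    for node in $NODES; do
      ssh "ubuntu@$node" 'sudo reboot' || true  # ssh drops as the guest goes down
    done
    sleep 120   # time for guests to come back up (or wedge in a soft lockup)
    i=$((i + 1))
  done
}

# run only when explicitly asked, so sourcing this file stays side-effect free
if [ "${RUN_REPRO:-0}" = 1 ]; then
  start_load
  reboot_loop
fi
```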
The fact that 4.13.8 doesn't reproduce the bug might be another
indicator that the bug was introduced by a SAUCE patch.
OK a few days ago apw pointed me at
http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.13.8/ which is the
mainline kernel that the artful-proposed one I identified as bad is
based on.
I ran 35 instances and rebooted them 30 times - all successful. So I
think that says this kernel is good.
Will
The bug reporter in bug 1713751 has been unable to reproduce the bug
with the 4.13.0-16-generic kernel. He's re-testing with the original
kernel that exhibited the bug to ensure he can reproduce it
consistently. If he finds that 4.13.0-16-generic is really good, he
might be hitting a
Status changed to 'Confirmed' because the bug affects multiple users.
** Changed in: qemu-kvm (Ubuntu Artful)
Status: New => Confirmed
Status changed to 'Confirmed' because the bug affects multiple users.
** Changed in: qemu-kvm (Ubuntu)
Status: New => Confirmed
Note that if it *is* the same bug as #1713751, that reporter already
mentioned that using mainline kernels (and he was hitting this on 4.10)
fixed it for him, so it seems more plausible not that 4.14 has a fix,
but that Ubuntu's sauce has the breakage. Of course, they may well not
be the same
Maybe give 4.13.0-16 a try:
https://launchpad.net/~canonical-kernel-security-team/+archive/ubuntu/ppa2/+build/13567624
It could also be the bug is being triggered by an Ubuntu-specific SAUCE
patch, so it won't happen with upstream kernels.
@Laney, thanks for testing the mainline kernel. It's promising that a
fix might be in that kernel. The time-consuming part will be
identifying which commit in that kernel is the actual fix. We could
perform a "reverse" kernel bisect, which would require testing 12 or so
test kernels. However,
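A reverse bisect just inverts the usual good/bad labels, since we are
hunting the commit that fixed the hang rather than the one that
introduced it. A minimal sketch using git bisect's custom terms - the
tree path and the v4.13/v4.14-rc8 endpoints are illustrative
assumptions, not the team's actual bisect bounds:

```shell
reverse_bisect_start() {
  cd "${LINUX_SRC:-$HOME/linux}" || return 1
  # invert the usual terms: old kernels are "broken", new ones "fixed"
  git bisect start --term-old=broken --term-new=fixed
  git bisect broken "${KNOWN_BROKEN:-v4.13}"    # oldest point that still soft-lockups
  git bisect fixed  "${KNOWN_FIXED:-v4.14-rc8}" # newest point that reboots cleanly
}
# After each build + boot + reboot-test round, mark the checked-out kernel:
#   git bisect fixed     # this kernel reboots cleanly
#   git bisect broken    # this kernel still hangs
# Roughly log2(commit count) rounds - the "12 or so" above - until git
# names the commit that carries the fix.
```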
Torkoal (our Jenkins node) was idle atm and Ryan reported he had seen the
issues there before, so trying there as well.
This is LTS + HWE - Kernel 4.10.0-38-generic, qemu: 1:2.5+dfsg-5ubuntu10
I thought about your case since you seem to just start a lot of them and
reboot; this shouldn't be so
On Wed, Nov 08, 2017 at 11:08:05AM -, ChristianEhrhardt wrote:
> @Laney I found it interesting that you essentially only needed to
> start+reboot.
> I assume on the host you had other workload goes on in the background (since
> it is lcy01)?
I don't have visibility into what else the hosts
Out of the IRC discussions documenting potentially related issues:
- this bug: KVM: Host-Kernel: Xenial-GA, Qemu: Xenial-Ocata, Guest: Bionic
- bug 1722311 KVM: Host-Kernel: Xenial-GA, Qemu: Xenial, Guest: Artful - some
relation to cache pressure
- bug 1713751 AWS: triggered by Xenial kernel
Do you know if this bug is also happening with Zesty, or just Artful and
Bionic(>4.13)?
I'm going to work on bisecting bug 1713751 in case it's related.
Also, it would be good to know if this bug is already fixed in the
latest mainline kernel. Do you have a way to reproduce this bug? If
so,
** Changed in: linux (Ubuntu)
Importance: Undecided => High
** Tags added: kernel-key
** Attachment added: "good run console-log"
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1730717/+attachment/5005465/+files/laney-test14.log
** Also affects: qemu-kvm (Ubuntu)
Importance: Undecided
Status: New
Oh also see
https://bugs.launchpad.net/ubuntu/+source/linux-hwe/+bug/1713751 which
has some superficially similar symptoms (cpu stuck on shutdown).
** Description changed:
This is impacting us for ubuntu autopkgtests. Eventually the whole
region ends up dying because each worker is hit by
I tried 28 (then my quota ran out) xenial guests BTW and none of those
failed.
** Attachment added: "bad run console-log"
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1730717/+attachment/5005464/+files/laney-test25.log