Status changed to 'Confirmed' because the bug affects multiple users.
** Changed in: linux (Ubuntu Trusty)
Status: New => Confirmed
Meno,
I just tried your test case, where you described adding an ip6gre tunnel
to the container and rebooting it, but I couldn't reproduce the netdev
hang so far. Do you mind sharing specific details, or even a script that
will reproduce the problem?
Mounting an NFS share on my containers is not a common
Hey,
here is a discussion about the reproducibility. I wrote very early in
this thread that if I set the following network config in a container
ip -6 tunnel add gt6nactr01 mode ip6gre local 2a4:4483:5:1709::2:1 remote 2a4:4494:f:997:217:110:59:5
ip link set mtu 1500 dev gt6nactr01 up
ip addr a
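(Not from the original report, just a hedged sketch of driving the same
steps from the host: the container name "ct1" and the use of
lxc-attach/lxc-stop are my assumptions, the tunnel parameters are the
ones quoted above.)
CT=ct1                                   # hypothetical container name
# create the tunnel inside the running container, as described above
lxc-attach -n "$CT" -- ip -6 tunnel add gt6nactr01 mode ip6gre \
    local 2a4:4483:5:1709::2:1 remote 2a4:4494:f:997:217:110:59:5
lxc-attach -n "$CT" -- ip link set mtu 1500 dev gt6nactr01 up
# stopping or rebooting the container is what reportedly triggers the hang
lxc-stop -n "$CT" --reboot
# then watch the host kernel log for the symptom
dmesg | grep -i unregister_netdevice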
I left a couple of instances running with the mainline kernel
(http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.19-rc7-vivid/) during
the weekend. It took more time to see the bug on the mainline kernel,
but this morning one out of ten instances had the same problem, so I'm
assuming mainline is also affected.
Some additional info:
- The stack trace is always the same as the one posted above, and the
blocking point seems to be copy_net_ns every time.
- The process that hangs is always lxc-start in every occurrence that I
was able to check.
Rodrigo.
At this point a proper reproducer would help the most. That way we could
get crash dumps and other useful information that may not be obtainable
in production environments.
Reading through the bug, the best description I can see is:
1) Start LXC container
2) Download > 100MB of data
3) Stop LXC container
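A hedged sketch of those three steps as a script (the container name,
URL, and download target are placeholders, not taken from the bug):
CT=repro                                 # placeholder container name
lxc-start -n "$CT" -d                    # 1) start the container
lxc-attach -n "$CT" -- wget -O /tmp/blob \
    http://example.com/file-larger-than-100MB   # 2) download > 100MB, placeholder URL
lxc-stop -n "$CT"                        # 3) stop the container
dmesg | grep -i unregister_netdevice     # check the host kernel log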
** Also affects: linux (Ubuntu Utopic)
Importance: Undecided
Status: New
** Also affects: linux (Ubuntu Trusty)
Importance: Undecided
Status: New
Just got an instance with kernel 3.16.0-29-generic #39-Ubuntu
(linux-lts-utopic) hitting this bug in production. We don't have a
reliable reproducer, so the only way for me to validate is to boot this
kernel in production and wait for the bug to happen.
Is there anything I can get from an instance
From the docker issue it seems that someone couldn't reproduce the bug
when downgrading to kernel 3.13.0-32-generic. I can't validate this
statement because kernels prior to 3.13.0-35-generic have a regression
that crashes my EC2 instance.
Also people testing kernel 3.14.0 couldn't reproduce the bug.
The patch mentioned in comment #5 was added to the mainline kernel as of
3.13-rc1, so it should already be in Trusty.
git describe --contains dcdfdf5
v3.13-rc1~7^2~16
Can you test the latest mainline again, which is now 3.19-rc4:
http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.19-rc4-vivid/
Also
** Tags added: kernel-da-key trusty
** Also affects: linux via
http://bugzilla.kernel.org/show_bug.cgi?id=81211
Importance: Unknown
Status: Unknown
** Changed in: linux (Ubuntu)
Importance: Medium => High
** Changed in: linux (Ubuntu)
Status: Incomplete => Triaged
Kernel stack trace when the lxc-start process hangs:
[27211131.602770] INFO: task lxc-start:25977 blocked for more than 120 seconds.
[27211131.602785] Not tainted 3.13.0-40-generic #69-Ubuntu
[27211131.602789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[27211131.60
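For anyone waiting for this to show up in production, a rough sketch of
watching for the two relevant messages with standard tools (nothing here
is specific to this bug):
# poll the kernel log for the hung-task warning or the netdevice message
while ! dmesg | grep -E -q "blocked for more than|unregister_netdevice"; do
    sleep 60
done
dmesg | grep -E "blocked for more than|unregister_netdevice"
# the hung-task timeout mentioned above can be inspected via:
cat /proc/sys/kernel/hung_task_timeout_secs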
Hi,
We're hitting this bug on the latest trusty kernel in a similar context
to this docker issue. We also had this problem on lucid with a custom
3.8.11 kernel, where it seems to be more aggressive than on trusty, but
it still happens:
https://github.com/docker/docker/issues/5618
In this issue an upstream k
I tested with
3.18.0-031800-generic #201412071935 SMP Mon Dec 8 00:36:34 UTC 2014
x86_64 x86_64 x86_64 GNU/Linux
and the problem is still there.
Meno
I tried to make it reproducible and figured out that the problem is
related to the use of this kind of interface in an LXC container. The
tunnel works in the running container, but if you stop or reboot the
container the kernel reports this:
unregister_netdevice: waiting for lo to become free
Did this issue occur in a previous version of Ubuntu, or is this a new
issue?
Would it be possible for you to test the latest upstream kernel? Refer
to https://wiki.ubuntu.com/KernelMainlineBuilds . Please test the latest
v3.18 kernel[0].
If this bug is fixed in the mainline kernel, please add th
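For reference, installing one of those mainline builds is roughly as
follows (the file names below are illustrative; use the actual amd64
.debs listed in the directory for the build you test):
# download the image and headers packages for the mainline build, then install
wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.19-rc4-vivid/linux-image-<version>-generic_<build>_amd64.deb
wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.19-rc4-vivid/linux-headers-<version>_all.deb
sudo dpkg -i linux-image-*.deb linux-headers-*.deb
sudo reboot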