I'd also like to suggest this be reconsidered. Minimal should mean
minimal.
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1563026
Title:
LXC/LXD installed by default on Ubuntu server
There are a lot of users here echoing my sentiments.
I appreciate that there is a way to file bugs and that Ubuntu addresses
them per that policy. What you have here, though, is quite a significant
number of users saying the Lucid Lynx kernel is flat-out broken.
I specifically tagged this bug to 'linux' (more
I've spoken with John on IRC and the team are perfectly aware there is a
problem. This is not in dispute.
It takes time to track down these problems, and the time of a kernel
developer is in high demand. I seriously doubt this is the only
'serious' bug affecting a decent number of users which is
That sounds unrelated to this bug and may be an issue with your software
workload. I'm not seeing any fluctuation in memory usage beyond the fact
it uses 15-20x more than a Karmic Koala kernel upon initial boot up.
Are you able to post diagnostic output showing this isn't your workload
causing
Please note, as mentioned, I see exactly the same scheduler behaviour on
my hardware with Karmic Koala - downgrading from the Lucid Lynx kernel
fixes all my other issues though. Thus I'm not sure this is the actual
problem.
--
High load averages on Lucid while idling
Another bug over here is receiving more attention, and seems potentially
linked --
https://bugs.edge.launchpad.net/ubuntu/+source/linux/+bug/524281
At the moment the only solution I've got is to revert to a Karmic Koala
kernel, which is hardly ideal. Is there a reason this bug isn't
** Also affects: linux-meta (Ubuntu)
Importance: Undecided
Status: New
--
High load averages on Lucid EC2 while idling
https://bugs.launchpad.net/bugs/574910
--
Adding a link to linux-image as this does not just affect EC2.
I have a decent number of HP ProLiant systems and about 90% are
exhibiting the exact same problems. All of the systems affected have a
load average between 0.7 and 1.1 just a short while after being
rebooted, and are 100% idle.
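For anyone wanting to check for the same symptom: on Linux the load
averages quoted above can be read straight from /proc/loadavg (a minimal
sketch, assuming a standard Ubuntu /proc layout):

```shell
# The first three fields of /proc/loadavg are the 1-, 5- and 15-minute
# load averages -- the numbers the report above quotes (0.7 to 1.1).
cat /proc/loadavg

# 'uptime' prints the same three averages in human-readable form.
uptime
```

On a genuinely idle machine all three figures should sit near 0.00, so
sustained values around 1.0 with no running workload are abnormal.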
Wakeups-from-idle per second : 184.6    interval: 10.0s
no ACPI power usage estimate available
Top causes for wakeups:
28.1% (206.7) [kernel scheduler] Load balancing tick
27.2% (200.2) [kernel
a...@thunder:~$ sudo dpkg -i linux-image-2.6.31-22-server_2.6.31-22.60_amd64.deb
Manually installing the latest kernel from Karmic Koala also acts as a
fix for the problems, which is pretty much a smoking gun pointing at the
kernel shipped with Lucid Lynx, in my opinion!
Things which were
For reference the systems used for testing ('thunder', 'lightning',
'aurora') are all HP ProLiant BL460c G5.
a...@thunder:~$ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Intel(R) Xeon(R) CPU X5450 @
Public bug reported:
Seems to happen whilst using blueproximity on Lucid Lynx
ProblemType: KernelOops
AlsaVersion: Advanced Linux Sound Architecture Driver Version 1.0.21.
Annotation: Your system might become unstable now and might need to be
restarted.
Architecture: amd64
AudioDevicesInUse:
** Attachment added: AlsaDevices.txt
http://launchpadlibrarian.net/40269553/AlsaDevices.txt
** Attachment added: AplayDevices.txt
http://launchpadlibrarian.net/40269554/AplayDevices.txt
** Attachment added: ArecordDevices.txt
http://launchpadlibrarian.net/40269555/ArecordDevices.txt
I've also noticed that the files are created with less than perfect
permissions:
-rw-r--r-- 1 ahowells ahowells 872 2009-06-19 20:38 last.tsc
-rw-r--r-- 1 ahowells ahowells 0 2009-06-19 20:29 mru.tsc
Perhaps it would be possible for them to start life as -rw--- or
something, as
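One way to get the behaviour suggested above, sketched in shell (this
assumes only that the files are created under the user's umask;
`last.tsc` is the filename from the listing): setting `umask 077` before
the files are created yields owner-only 600 permissions instead of the
world-readable 644 shown.

```shell
# Sketch: create the state file with owner-only permissions.
# umask 077 masks out the group/other bits, so the default 666
# creation mode becomes 600 (-rw-------).
umask 077
touch last.tsc
stat -c '%a' last.tsc    # prints 600
```

The application itself could achieve the same at creation time by
passing mode 0600 when it opens the file, rather than relying on the
caller's umask.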
Public bug reported:
Binary package hint: nfs-common
[EMAIL PROTECTED]:/media/collie$ sudo mount -t nfs collie.0wn3d.us:/helios helios/
mount.nfs: rpc.statd is not running but is required for remote locking.
Either use '-o nolocks' to keep locks local, or start statd.
I know that rpc.statd
It's probably worth noting that I'm running Gutsy on that laptop, and
I'm not sure whether the bug also affects Dapper / Feisty.
--
Recommending '-o nolocks' incorrect, it is '-o nolock'
https://bugs.launchpad.net/bugs/131789