Hi,
So, nobody has any idea what's going wrong with all these massive IRQs
and spin_locks that cause virtual machines to almost completely stop?
:(
Thanks,
Dmitry
On Wed, Dec 1, 2010 at 5:38 AM, Dmitry Golubev wrote:
> Hi,
>
> Sorry it took so long to reply - there are only a few moments when
Hi,
Sorry it took so long to reply - there are only a few moments when I
can poke at a production server, and I need to notify people in advance
about that :(
> Can you post kvm_stat output while slowness is happening? 'perf top' on the
> host? and on the guest?
I took 'perf top' and first thing
Thanks for the answer.
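For reference, a rough sketch of how that data is typically gathered - kvm_stat ships with the qemu-kvm tools and perf with the kernel, so exact availability varies by distro:

```shell
# On the host: per-VM exit statistics (interactive, refreshes like top)
kvm_stat

# On the host and inside the guest: sample where CPU time is going
perf top                  # live view; press 'q' to quit

# Or capture a system-wide profile while the slowness is happening
perf record -a sleep 30   # record all CPUs for 30 seconds ...
perf report               # ... then inspect the samples afterwards
```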
> Are you sure it is hugepages related?
Well, empirically it looked like it was either hugepages-related or a
regression in qemu-kvm 0.12.3 -> 0.12.5, as this did not happen until
I upgraded (needed to avoid disk corruption caused by a bug in 0.12.3)
and enabled hugepages. However as f
On 11/21/2010 02:24 AM, Dmitry Golubev wrote:
Hi,
Seems that nobody is interested in this bug :(
It's because the information is somewhat confused. There's a way to
prepare bug reports that gets developers competing to see who solves it
first.
Anyway I wanted to add a bit more to this
> Just out of curiosity: did you try updating the BIOS on your
> motherboard? The issue you're facing seems to be quite unique,
> and I've seen more than once how various weird issues
> were fixed just by updating the BIOS. Provided they actually
> did their own homework and fixed someth
21.11.2010 03:24, Dmitry Golubev wrote:
> Hi,
>
> Seems that nobody is interested in this bug :(
>
> Anyway I wanted to add a bit more to this investigation.
>
> Once I put "nohz=off highres=off clocksource=acpi_pm" in guest kernel
> options, the guests started to behave better - they do not sta
Hi,
Seems that nobody is interested in this bug :(
Anyway I wanted to add a bit more to this investigation.
Once I put "nohz=off highres=off clocksource=acpi_pm" in guest kernel
options, the guests started to behave better - they do not stay in the
slow state, but rather get there for some secon
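For anyone following along, the active clocksource can also be inspected and switched at runtime through sysfs, without rebooting into new kernel options (standard Linux paths; a sketch, not verified on this exact setup):

```shell
# Inside the guest: which clocksources does the kernel offer, and which is active?
cat /sys/devices/system/clocksource/clocksource0/available_clocksource
cat /sys/devices/system/clocksource/clocksource0/current_clocksource

# Switch to acpi_pm on the fly to test before committing it to the boot options
echo acpi_pm > /sys/devices/system/clocksource/clocksource0/current_clocksource
```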
Hi,
Sorry to bother you again. I have more info:
> 1. router with 32MB of RAM (hugepages) and 1VCPU
...
> Is it too much to have 3 guests with hugepages?
OK, this router is also out of the equation - I disabled hugepages for it.
There should also be additional pages available to guests because of
th
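For scale, a quick check of what the router actually consumed, assuming the default 2 MiB hugepage size on x86-64:

```shell
# A 32 MB guest needs only a handful of 2 MiB hugepages
router_mb=32
echo "$((router_mb / 2)) hugepages"   # 16 pages - a tiny share of the pool
```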
Hi,
Maybe you remember that I wrote a few weeks ago about a KVM cpu load
problem with hugepages. The problem was left hanging; however, I now
have some new information. So the description remains, but I have
decreased both the guest memory and the number of hugepages:
RAM = 8GB, hugepages = 3546
Tota
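A quick sanity check of that reservation, assuming 2 MiB hugepages (the x86-64 default):

```shell
# Memory pinned by the hugepage pool, and what is left of 8 GB for everything else
pages=3546
mib=$((pages * 2))                             # 2 MiB per page
echo "hugepages reserve: ${mib} MiB"           # 7092 MiB
echo "left of 8192 MiB:  $((8192 - mib)) MiB"  # 1100 MiB for host + non-hugepage use
```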
> Please don't top post.
Sorry
> Please use 'top' to find out which processes are busy, the aggregate
> statistics don't help to find out what the problem is.
The thing is - all the more or less active processes become busy, like
httpd, etc - I can't identify any single process that generates all
th
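One way to take a non-interactive snapshot of the busiest processes, using standard GNU procps options (a sketch; any equivalent tool works):

```shell
# Top 10 CPU consumers at this instant, sorted by %CPU descending
ps -eo pid,comm,%cpu --sort=-%cpu | head -n 10
```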
On 10/03/2010 10:24 PM, Dmitry Golubev wrote:
So, I started anew. I decreased the memory allocated to each guest to
3500MB (from 3500MB as I told earlier), but have not decreased the
number of hugepages - it is still 3696.
Please don't top post.
Please use 'top' to find out which processes are b
So, I started anew. I decreased the memory allocated to each guest to
3500MB (from 3500MB as I told earlier), but have not decreased the
number of hugepages - it is still 3696.
On one host I started one guest. It looked like this:
HugePages_Total:3696
HugePages_Free: 1933
HugePages_Rsvd:
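From the numbers above, the one running guest's hugepage consumption works out as follows (assuming 2 MiB pages):

```shell
# Pages in use = Total - Free; each page is 2 MiB
total=3696; free=1933
used=$((total - free))
echo "${used} pages = $((used * 2)) MiB"   # 1763 pages = 3526 MiB, close to the 3500 MB guest
```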
On 09/30/2010 11:07 AM, Dmitry Golubev wrote:
Hi,
I am not sure what's really happening, but every few hours
(unpredictable) two virtual machines (Linux 2.6.32) start to generate
huge cpu loads. It looks like some kind of loop is unable to complete
or something...
What does 'top' inside the
02.10.2010 03:50, Dmitry Golubev wrote:
> Hi,
>
> Thanks for the reply. Well, although there is plenty of RAM left (about
> 100MB), some swap space was used during the operation:
>
> Mem: 8193472k total, 8089788k used, 103684k free, 5768k buffers
> Swap: 11716412k total, 36636k used, 1167
OK, I have repeated the problem. The two machines were working fine
for a few hours without some services running (these would take up
about a gigabyte more in total). I started these services again, and
some 40 minutes later the problem reappeared (may be a coincidence,
though I don't think so).
Hi,
Thanks for the reply. Well, although there is plenty of RAM left (about
100MB), some swap space was used during the operation:
Mem: 8193472k total, 8089788k used, 103684k free, 5768k buffers
Swap: 11716412k total, 36636k used, 11679776k free, 103112k cached
I am not sure why, thoug
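For scale, the amount of swap actually in use on that 'top' line is tiny:

```shell
# Convert the swap-used figure from KiB to MiB (integer division)
swap_used_k=36636
echo "$((swap_used_k / 1024)) MiB swapped out"   # ~35 MiB of an 11 GB swap - negligible
```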
On Thu, Sep 30, 2010 at 12:07:15PM +0300, Dmitry Golubev wrote:
> Hi,
>
> I am not sure what's really happening, but every few hours
> (unpredictable) two virtual machines (Linux 2.6.32) start to generate
> huge cpu loads. It looks like some kind of loop is unable to complete
> or something...
>
Hi,
I am not sure what's really happening, but every few hours
(unpredictable) two virtual machines (Linux 2.6.32) start to generate
huge cpu loads. It looks like some kind of loop is unable to complete
or something...
So the idea is:
1. I have two linux 2.6.32 x64 (openvz, proxmox project) gues