On Tue, 2011-04-05 at 11:56 +0300, Avi Kivity wrote:
>
> Could be waking up due to guest wakeups, or qemu internal wakeups
> (display refresh) or due to guest timer sources which are masked away in
> the guest (if that's the case we should optimize it away).

Right, so I guess we're all clutching at straws [...]
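
Avi's list of candidate wakeup sources is checkable: on kernels of that era, CONFIG_TIMER_STATS and /proc/timer_stats were the stock way to see which process and which kernel callback keep arming timers, which helps separate guest tick timers from qemu-internal ones like the display refresh. Below is a minimal sampler (my own sketch, not something posted in the thread; needs root and a timer_stats-enabled kernel):

/* timerstats.c - sample /proc/timer_stats for 10 seconds and dump it.
 * Output lines are roughly: count, pid, comm, arming function
 * (timer callback), so a periodic guest tick stands out immediately.
 * Build: gcc -O2 -o timerstats timerstats.c
 */
#include <stdio.h>
#include <unistd.h>

static int write_flag(const char *val)
{
    FILE *f = fopen("/proc/timer_stats", "w");

    if (!f)
        return -1;
    fputs(val, f);
    fclose(f);
    return 0;
}

int main(void)
{
    char line[512];
    FILE *f;

    if (write_flag("1\n")) {        /* start collecting */
        perror("/proc/timer_stats");
        return 1;
    }
    sleep(10);                      /* sample window */
    write_flag("0\n");              /* stop collecting */

    f = fopen("/proc/timer_stats", "r");
    if (!f) {
        perror("/proc/timer_stats");
        return 1;
    }
    while (fgets(line, sizeof(line), f))
        fputs(line, stdout);
    fclose(f);
    return 0;
}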
On 04/05/2011 11:48 AM, Peter Zijlstra wrote:
> What I think is happening is that all your 'idle' qemu thingies keep
> waking up frequently, and because you've got like twice the number of
> qemu instances as you've got cpus there's a fair chance you'll have a
> cpu with a pending task while another one goes idle [...]
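
Peter's overcommit hypothesis is easy to approximate in userspace. The toy program below (my own sketch, nothing from the thread) spawns twice as many mostly-idle threads as there are CPUs, each waking about 100 times a second, which is roughly the load pattern the idle guests present; running perf top alongside it should make any scheduler/lock overhead of that constant wake-then-idle churn visible:

/* wakeups.c - mimic ~2x cpu overcommit of mostly-idle tasks.
 * Build: gcc -O2 -pthread -o wakeups wakeups.c
 */
#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static void *mostly_idle(void *arg)
{
    /* wake ~100 times/sec and immediately sleep again, like an idle
     * guest taking its timer tick */
    struct timespec ts = { .tv_sec = 0, .tv_nsec = 10 * 1000 * 1000 };

    (void)arg;
    for (;;)
        nanosleep(&ts, NULL);
    return NULL;
}

int main(void)
{
    long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
    long nthreads = 2 * ncpus;      /* 2x overcommit, as in the report */
    pthread_t t;

    for (long i = 0; i < nthreads; i++)
        pthread_create(&t, NULL, mostly_idle, NULL);
    printf("spawned %ld sleeper threads on %ld cpus\n", nthreads, ncpus);
    pause();                        /* park main; observe with perf */
    return 0;
}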
On Tue, 2011-03-22 at 12:35 +0200, Avi Kivity wrote:
> > Here's top with 96 idle guests running:

On some hacked up 2.6.38 kernel...

> > Start of perf report -g
> > 55.26%  kvm  [kernel.kallsyms]  [k] __ticket_spin_lock
> > |
> > --- _[...]
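
As background on why contention shows up as cpu time inside __ticket_spin_lock itself: a ticket lock hands each acquirer a ticket with an atomic fetch-add, and the acquirer then spins until the "now serving" counter reaches its ticket, so every waiter burns its timeslice inside the lock function, which is exactly where perf attributes it. A minimal userspace version of the scheme (an illustration of the idea, not the kernel's implementation):

/* ticket_demo.c - the ticket spinlock scheme in miniature.
 * Build: gcc -O2 -pthread -o ticket_demo ticket_demo.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

typedef struct {
    atomic_uint head;   /* ticket currently being served */
    atomic_uint tail;   /* next ticket to hand out */
} ticket_lock_t;

static void ticket_lock(ticket_lock_t *l)
{
    unsigned int me = atomic_fetch_add(&l->tail, 1);  /* take a ticket */

    /* under contention, all the cpu time lands in this loop */
    while (atomic_load(&l->head) != me)
        ;
}

static void ticket_unlock(ticket_lock_t *l)
{
    atomic_fetch_add(&l->head, 1);  /* serve the next waiter, in order */
}

static ticket_lock_t lock;
static long counter;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        ticket_lock(&lock);
        counter++;                  /* protected by the ticket lock */
        ticket_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[8];

    for (int i = 0; i < 8; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 8; i++)
        pthread_join(t[i], NULL);
    printf("counter = %ld (expect 8000000)\n", counter);
    return 0;
}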
On 04/05/2011 10:49 AM, Peter Zijlstra wrote:
> On Tue, 2011-03-22 at 12:35 +0200, Avi Kivity wrote:
> > Looks like the posix-timer issue is completely gone, to be replaced by
> > the load balancer.
>
> -ENOINFO, no kernel version, no setup, no workload, no nothing.

http://www.spinics.net/lists/kvm/ms [...]
On Tue, Mar 22, 2011 at 4:20 PM, Avi Kivity wrote:
[...]
> Looks like the posix-timer issue is completely gone, to be replaced by the
> load balancer.
>
> Copying peterz.
Hi all,

I feel bad about such a big cc list, but I don't know who can be left out :/

Still got the performance issue with th[...]
On Tue, Mar 22, 2011 at 12:54 PM, Eric Dumazet wrote:
> Ben Nagy reported a scalability problem with KVM/QEMU that hit very hard
> a single spinlock (idr_lock) in posix-timers code, on its 48 core
> machine.
Hi all,

Thanks a lot for all the help so far. We've tested with Eric's patch.
First up, [...]
Ben Nagy reported a scalability problem with KVM/QEMU that hit very hard
a single spinlock (idr_lock) in posix-timers code, on his 48 core
machine.

Even on a 16 cpu machine (2x4x2), a single test can show 98% of cpu time
used in ticket_spin_lock, from lock_timer() [...]

Ref: http://www.spinics.net/lists/
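
The standard cure for a hot global lookup lock like this, and as far as I understand it the direction Eric's patch takes, is to make the read side lockless with RCU so that lock_timer() no longer funnels every cpu through idr_lock. A condensed kernel-style sketch of the before and after (illustration only, not the actual patch; the real code also handles irq disabling and timer validation, and the fix additionally requires the timer and the idr's internal nodes to be freed via RCU):

/* Condensed sketch; k_itimer, idr_find(), rcu_read_lock() etc. are
 * real kernel interfaces, the functions themselves are simplified. */

static DEFINE_SPINLOCK(idr_lock);       /* the contended global lock */
static struct idr posix_timers_id;      /* timer_t -> struct k_itimer */

/* Before: every timer syscall on every cpu serializes on idr_lock
 * just to translate an id into a timer -- the 98% in ticket_spin_lock. */
static struct k_itimer *lock_timer_old(timer_t timer_id)
{
	struct k_itimer *timr;

	spin_lock(&idr_lock);
	timr = idr_find(&posix_timers_id, (int)timer_id);
	if (timr)
		spin_lock(&timr->it_lock);      /* per-timer lock is fine */
	spin_unlock(&idr_lock);
	return timr;
}

/* After: the lookup runs under rcu_read_lock(), which costs nothing on
 * the read side and scales to any number of cpus. */
static struct k_itimer *lock_timer_rcu(timer_t timer_id)
{
	struct k_itimer *timr;

	rcu_read_lock();
	timr = idr_find(&posix_timers_id, (int)timer_id);
	if (timr)
		spin_lock(&timr->it_lock);
	rcu_read_unlock();
	return timr;
}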