rwsem_spin_on_owner and mutex_spin_on_owner.
>>> These spin_on_owner variants also caused RCU stalls before we applied
>>> this patch set
>>>
>>
>> Paolo, could you help out with an (x86) KVM interface for th
2016-07-06 20:28 GMT+08:00 Paolo Bonzini <pbonz...@redhat.com>:
>
>
> On 06/07/2016 14:08, Wanpeng Li wrote:
>> 2016-07-06 18:44 GMT+08:00 Paolo Bonzini <pbonz...@redhat.com>:
>>>
>>>
>>> On 06/07/2016 08:52, Peter Zijlstra wrote:
>>&
2016-07-07 18:12 GMT+08:00 Wanpeng Li <kernel...@gmail.com>:
> 2016-07-07 17:42 GMT+08:00 Peter Zijlstra <pet...@infradead.org>:
>> On Thu, Jul 07, 2016 at 04:48:05PM +0800, Wanpeng Li wrote:
>>> 2016-07-06 20:28 GMT+08:00 Paolo Bonzini <pbonz...@redhat.com>:
2016-07-07 17:42 GMT+08:00 Peter Zijlstra <pet...@infradead.org>:
> On Thu, Jul 07, 2016 at 04:48:05PM +0800, Wanpeng Li wrote:
>> 2016-07-06 20:28 GMT+08:00 Paolo Bonzini <pbonz...@redhat.com>:
>> > Hmm, you're right. We can use bit 0 of struct kvm_steal_time's flags
re built into same
> kernel image with pSeries. So we need to return false if we are running as
> powerNV. Another fact is that lppaca->yield_count stays zero on
> powerNV. So we can just skip the machine type check.
Lock holder vCPU preemption can be detected by hardware pSe
Cc Paolo, kvm ml
2016-07-06 12:58 GMT+08:00 xinhui <xinhui@linux.vnet.ibm.com>:
> Hi, wanpeng
>
> On 2016-07-05 17:57, Wanpeng Li wrote:
>>
>> Hi Xinhui,
>> 2016-06-28 22:43 GMT+08:00 Pan Xinhui <xinhui@linux.vnet.ibm.com>:
>>>
>>>
feature will increase
downtime while acquiring the benefit of reducing total time; maybe it
will be more acceptable if there is no downside for downtime.
Regards,
Wanpeng Li
___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
migration downtime to avoid customers'
>> perception than total time; however, this feature will increase downtime
>> while acquiring the benefit of reducing total time; maybe it will be more
>> acceptable if there is no downside for downtime.
>>
>> Regards,
>> Wanpeng Li
c key, be enough? A further advantage would be that this would
> work on other architectures, too.
There is an "Adaptive halt-polling" mechanism, merged upstream more than
two years ago, which avoids threading the critical path and has already
been ported to other architectures. https://lkml.org/lkml/2015/9/3/615
Regards,
Wanpeng Li
o years ago in kvm which is self-tuning,
https://lkml.org/lkml/2015/9/3/615
Regards,
Wanpeng Li
2017-11-14 16:15 GMT+08:00 Quan Xu <quan@gmail.com>:
>
>
> On 2017/11/14 15:12, Wanpeng Li wrote:
>>
>> 2017-11-14 15:02 GMT+08:00 Quan Xu <quan@gmail.com>:
>>>
>>>
>>> On 2017/11/13 18:53, Juergen Gross wrote:
>>>>
.6 bit/s -- 76.1 %CPU
>
> 2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0):
> 35787.7 bit/s -- 129.4 %CPU
>
> 3. w/ kvm dynamic poll:
> 35735.6 bit/s -- 200.0 %CPU
Actually, we can reduce the CPU utilization by sleeping for a period of
time, as has already been do