On 28.11.2014 at 11:08, Raghavendra KT wrote:
> I was able to test the patch; here are the results. I have not tested with
> bigger VMs, though. The results make it difficult to point to any side effect
> of the patch, if any.

Thanks a lot.

If our assumption is correct, then this patch should have no side effect on 
x86. Do you have a confident guess whether the numbers below mean no change, 
regression, or improvement?
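
For anyone skimming the thread, here is a rough sketch of the two checks in
question, as I read virt/kvm/kvm_main.c (simplified and abridged; locking,
refcounting and the remaining directed-yield eligibility checks are left out,
and this is not the patch itself):

  int kvm_vcpu_yield_to(struct kvm_vcpu *target)
  {
          struct task_struct *task;
          int ret = 0;

          task = get_pid_task(target->pid, PIDTYPE_PID);
          if (!task)
                  return 0;
          /* the PF_VCPU check under discussion: skip a task that is in guest mode */
          if (task->flags & PF_VCPU)
                  goto out;

          ret = yield_to(task, 1);
  out:
          put_task_struct(task);
          return ret;
  }

  void kvm_vcpu_on_spin(struct kvm_vcpu *me)
  {
          struct kvm_vcpu *vcpu;
          int i;

          kvm_for_each_vcpu(i, vcpu, me->kvm) {
                  /* commit 7bc7ae25b143: only consider vcpus that were preempted */
                  if (!ACCESS_ONCE(vcpu->preempted))
                          continue;
                  /* further eligibility checks and pass bookkeeping elided */
                  if (kvm_vcpu_yield_to(vcpu) > 0)
                          break;
          }
  }

If the vcpu->preempted filter already keeps kvm_vcpu_on_spin() away from tasks
that are running guest code, the PF_VCPU check should hardly ever trigger on
that path, which is exactly the assumption the numbers below are meant to
(in)validate.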

Christian


> 
> System: 16 cores, 32 CPUs (with HT), Sandy Bridge,
> with 4 guests of 16 vCPUs each
> 
> +-----------+-----------+-----------+------------+-----------+
>              kernbench (time taken lower is better)
> +-----------+-----------+-----------+------------+-----------+
>      base       %stdev      patched      %stdev    %improvement
> +-----------+-----------+-----------+------------+-----------+
> 1x   53.1421     2.3086        54.6671     2.9673      -2.86966
> 2x   89.6858     6.4540        94.0626     6.8317      -4.88015
> +-----------+-----------+-----------+------------+-----------+
> 
> +-----------+-----------+-----------+------------+-----------+
>              ebizzy  (records/sec higher is better)
> +-----------+-----------+-----------+------------+-----------+
>      base        %stdev          patched      %stdev    %improvement
> +-----------+-----------+-----------+------------+-----------+
> 1x 14523.2500     8.4388    14928.8750     3.0478       2.79294
> 2x  3338.8750     1.4592     3270.8750     2.3980      -2.03661
> +-----------+-----------+-----------+------------+-----------+
> 
> +-----------+-----------+-----------+------------+-----------+
>              dbench  (Throughput higher is better)
> +-----------+-----------+-----------+------------+-----------+
>      base       %stdev           patched      %stdev    %improvement
> +-----------+-----------+-----------+------------+-----------+
> 1x  6386.4737     1.0487    6703.9113     1.2298       4.97047
> 2x  2571.4712     1.3733    2571.8175     1.6919       0.01347
> +-----------+-----------+-----------+------------+-----------+
> 
> Raghu
> 
> On Wed, Nov 26, 2014 at 3:01 PM, Christian Borntraeger
> <borntrae...@de.ibm.com> wrote:
>> On 26.11.2014 at 10:23, David Hildenbrand wrote:
>>>> This change is a trade-off.
>>>> PRO: This patch would improve the case of preemption on s390. This is 
>>>> probably a corner case as most distros have preemption off anyway.
>>>> CON: The downside is that kvm_vcpu_yield_to is also called from 
>>>> kvm_vcpu_on_spin. Here we want to avoid the scheduler overhead for a wrong 
>>>> decision.
>>>
>>> Won't most of that part be covered by:
>>>       if (!ACCESS_ONCE(vcpu->preempted))
>>
>> Hmm, right. Checking vcpu->preempted and PF_VCPU might boil down to the same
>> thing. It would be good to have a performance regression test, though.
>>
>>>
>>> vcpu->preempted is only set when scheduled out involuntarily. It is cleared
>>> when scheduled in. s390 sets it manually, to speed up waking up a vcpu.
>>>
>>> So when our task is scheduled in (and PF_VCPU is set), this check will
>>> already avoid the scheduler overhead in kvm_vcpu_on_spin(), or am I missing
>>> something?
>>>
>>
>> CC Raghavendra KT. Could you rerun your kernbench/sysbench/ebizzy setup on 
>> x86 to see if the patch in this thread causes any regression? I think your 
>> commit 7bc7ae25b143 "kvm: Iterate over only vcpus that are preempted" might 
>> have really made the PF_VCPU check unnecessary.
>>
>> CC Michael Mueller, do we still have our yield performance setup handy to 
>> check if this patch causes any regression?
>>
>>
>> Christian
>>
