Thanks.

Hi MST,

I have tested the patch "intel_idle: add pv cstates when running on kvm" on
a recent host that allows guests to execute mwait without an exit. I have
also tested our patch "[RFC PATCH v2 0/7] x86/idle: add halt poll support",
upstream Linux, and idle=poll.

The following is the result (which seems better than ever before, as I ran
the test case on a more powerful machine):

for __
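For readers unfamiliar with the pv cstate idea: it depends on the guest executing MONITOR/MWAIT natively instead of exiting to the hypervisor. What follows is only a rough, hypothetical kernel-context sketch of that idiom, not the intel_idle patch itself; the function name pv_mwait_idle is made up, while __monitor(), __mwait() and need_resched() are the usual x86 kernel helpers.

/* Hypothetical sketch only -- not the "intel_idle: add pv cstates" patch.
 * Kernel (ring 0) context; MONITOR/MWAIT cannot be used from user space.
 * Interrupt handling is omitted for brevity.
 */
#include <linux/sched.h>        /* need_resched() */
#include <linux/thread_info.h>  /* current_thread_info() */
#include <asm/mwait.h>          /* __monitor(), __mwait() */

static void pv_mwait_idle(void)
{
	/* Arm the monitor on the current thread's flags word; a remote
	 * wakeup sets TIF_NEED_RESCHED there and breaks the MWAIT. */
	__monitor(&current_thread_info()->flags, 0, 0);

	/* Re-check after arming the monitor to avoid a lost wakeup. */
	if (!need_resched())
		__mwait(0, 0);  /* eax=0: request the shallowest C-state */
}

When the host lets the guest run this without a VM exit, the wakeup latency is close to bare metal, which is what the test setup above is exercising.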
On 2017/8/29 22:56, Michael S. Tsirkin wrote:
On Tue, Aug 29, 2017 at 11:46:34AM +, Yang Zhang wrote:
Some latency-intensive workloads will see an obvious performance
drop when running inside a VM.
But are we trading a lot of CPU for a bit of lower latency?
The main reason is that the
On 2017/9/1 14:58, Wanpeng Li wrote:
2017-09-01 14:44 GMT+08:00 Yang Zhang :
On 2017/8/29 22:02, Wanpeng Li wrote:
Here is the data we get when running benchmark netperf:
2. w/ patch:
halt_poll_threshold=1 -- 15803.89 bits/s -- 159.5 %CPU
halt_poll_threshold=2 --
2017-09-01 14:44 GMT+08:00 Yang Zhang :
> On 2017/8/29 22:02, Wanpeng Li wrote:
>>>
>>> Here is the data we get when running benchmark netperf:
>>>
>>> 2. w/ patch:
>>> halt_poll_threshold=1 -- 15803.89 bits/s -- 159.5 %CPU
>>> halt_poll_threshold=2 -- 15899.04 bits/s --
2017-09-01 14:32 GMT+08:00 Yang Zhang :
> On 2017/8/29 22:02, Wanpeng Li wrote:
>>>
>>> Here is the data we get when running benchmark netperf:
>>>
>>> 2. w/ patch:
>>> halt_poll_threshold=1 -- 15803.89 bits/s -- 159.5 %CPU
>>> halt_poll_threshold=2 -- 15899.04 bits/s --
On 2017/8/29 22:02, Wanpeng Li wrote:
Here is the data we get when running benchmark netperf:
2. w/ patch:
halt_poll_threshold=1 -- 15803.89 bits/s -- 159.5 %CPU
halt_poll_threshold=2 -- 15899.04 bits/s -- 161.5 %CPU
halt_poll_threshold=3 -- 15642.38 bits/s
On 2017/8/29 19:58, Alexander Graf wrote:
On 08/29/2017 01:46 PM, Yang Zhang wrote:
Some latency-intensive workloads will see an obvious performance
drop when running inside a VM. The main reason is that the overhead
is amplified when running inside a VM. The biggest cost I have seen is
inside the idle path.
On Tue, Aug 29, 2017 at 11:46:34AM +, Yang Zhang wrote:
> Some latency-intensive workloads will see an obvious performance
> drop when running inside a VM.
But are we trading a lot of CPU for a bit of lower latency?
> The main reason is that the overhead
> is amplified when running inside a VM.
On Tue, Aug 29, 2017 at 10:02:15PM +0800, Wanpeng Li wrote:
> Actually I'm not sure how much sense it makes to introduce this pv
> stuff and duplicate the adaptive halt-polling logic that has already
> been done in kvm w/o an obvious benefit for a real workload like
> netperf.
In fact, I would
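The adaptive halt-polling referred to here grows a per-vCPU poll window when a wakeup arrives shortly after polling gives up, and shrinks it when the poll turns out to be wasted. Below is a simplified, self-contained sketch of that grow/shrink idea; the constants and the function update_poll_window() are illustrative, not KVM's actual halt_poll_ns code.

/* Simplified sketch of adaptive poll-window tuning (not KVM's actual
 * halt_poll_ns implementation; names and constants are illustrative). */
#include <stdio.h>

#define POLL_NS_MAX   200000U
#define POLL_NS_START  10000U
#define GROW               2U  /* multiply window when polling would have helped */
#define SHRINK             2U  /* divide window when polling was wasted */

static unsigned int poll_ns = POLL_NS_START;

static void update_poll_window(unsigned int wake_after_ns, int woken_while_polling)
{
	if (woken_while_polling)
		return;                 /* poll succeeded: keep the window */

	if (wake_after_ns < POLL_NS_MAX) {
		/* Wakeup came soon after we gave up: a larger window
		 * would have caught it, so grow. */
		if (poll_ns * GROW <= POLL_NS_MAX)
			poll_ns *= GROW;
	} else {
		/* Wakeup was far away: polling was pure waste, shrink. */
		poll_ns /= SHRINK;
	}
}

int main(void)
{
	update_poll_window(50000, 0);   /* near miss -> window grows  */
	update_poll_window(900000, 0);  /* long sleep -> window shrinks */
	printf("poll window now %u ns\n", poll_ns);
	return 0;
}

The point of contention in the thread is that KVM already does this tuning on the host side, so adding a similar loop in the guest has to justify itself with real-workload numbers.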
On Tue, Aug 29, 2017 at 10:02:15PM +0800, Wanpeng Li wrote:
> > Here is the data we get when running benchmark netperf:
> >
> > 2. w/ patch:
> > halt_poll_threshold=1 -- 15803.89 bits/s -- 159.5 %CPU
> > halt_poll_threshold=2 -- 15899.04 bits/s -- 161.5 %CPU
> >
> Here is the data we get when running benchmark netperf:
>
> 2. w/ patch:
> halt_poll_threshold=1 -- 15803.89 bits/s -- 159.5 %CPU
> halt_poll_threshold=2 -- 15899.04 bits/s -- 161.5 %CPU
> halt_poll_threshold=3 -- 15642.38 bits/s -- 161.8 %CPU
>
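One way to weigh these numbers against the "CPU for latency" question raised earlier in the thread is throughput per unit of CPU. A quick back-of-the-envelope calculation over the quoted rows, purely illustrative:

/* Back-of-the-envelope: throughput per %CPU for the quoted netperf rows. */
#include <stdio.h>

int main(void)
{
	const struct { const char *cfg; double bits_s; double cpu; } rows[] = {
		{ "halt_poll_threshold=1", 15803.89, 159.5 },
		{ "halt_poll_threshold=2", 15899.04, 161.5 },
		{ "halt_poll_threshold=3", 15642.38, 161.8 },
	};

	for (unsigned i = 0; i < sizeof(rows) / sizeof(rows[0]); i++)
		printf("%s: %.1f bits/s per %%CPU\n",
		       rows[i].cfg, rows[i].bits_s / rows[i].cpu);
	return 0;
}

The throughput gains come with a higher %CPU, which is exactly the trade-off being questioned.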
Yang Zhang writes:
> Some latency-intensive workloads will see an obvious performance
> drop when running inside a VM. The main reason is that the overhead
> is amplified when running inside a VM. The biggest cost I have seen is
> inside the idle path.
You could test with
On 08/29/2017 01:46 PM, Yang Zhang wrote:
Some latency-intensive workloads will see an obvious performance
drop when running inside a VM. The main reason is that the overhead
is amplified when running inside a VM. The biggest cost I have seen is
inside the idle path.
This patch introduces a new mechanism to
Some latency-intensive workloads will see an obvious performance
drop when running inside a VM. The main reason is that the overhead
is amplified when running inside a VM. The biggest cost I have seen is
inside the idle path.
This patch introduces a new mechanism to poll for a while before
entering idle
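To make the mechanism concrete: the guest spins on the wakeup condition for a bounded time before paying for a real halt. The following user-space analogue is only a sketch of that idea, not the patch itself; a pthread condition variable stands in for the halt path, and poll_then_wait() is an invented name.

/* User-space analogue of "poll for a while before entering idle":
 * spin on the wakeup flag for poll_ns, then fall back to blocking.
 * Build with: gcc -pthread poll_idle_demo.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

static atomic_int wake_pending;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

static long long now_ns(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

/* Poll for up to poll_ns before blocking, mirroring the guest-side idea. */
static void poll_then_wait(long long poll_ns)
{
	long long deadline = now_ns() + poll_ns;

	while (now_ns() < deadline)
		if (atomic_exchange(&wake_pending, 0))
			return;                 /* cheap wakeup: no "halt" needed */

	pthread_mutex_lock(&lock);              /* expensive path: really block */
	while (!atomic_exchange(&wake_pending, 0))
		pthread_cond_wait(&cond, &lock);
	pthread_mutex_unlock(&lock);
}

static void *waker(void *arg)
{
	struct timespec delay = { 0, 20 * 1000 };  /* wake up "soon" (~20 us) */

	(void)arg;
	nanosleep(&delay, NULL);
	pthread_mutex_lock(&lock);
	atomic_store(&wake_pending, 1);
	pthread_cond_signal(&cond);
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, waker, NULL);
	poll_then_wait(100000);                 /* poll up to 100 us before blocking */
	pthread_join(t, NULL);
	puts("woken");
	return 0;
}

If the wakeup lands inside the poll window, the expensive block (in the guest: HLT plus the VM exit/entry) is skipped entirely, which is where the latency win comes from; if it does not, the polling time is pure CPU overhead, which is the cost the thread is debating.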