On 06/02/2015 00:34, Marcelo Tosatti wrote:
You want at least a basic procedure to estimate a value
(it's a function of the device, after all).
I will add a tracepoint.
Rather than halt_successful_poll's, I suppose the optimum
can be estimated from a dataset containing entries
in the form:
This patch introduces a new module parameter for the KVM module; when it
is present, KVM attempts a bit of polling on every HLT before scheduling
itself out via kvm_vcpu_block.
This parameter helps a lot for latency-bound workloads---in particular
I tested it with O_DSYNC writes with a
On 05/02/2015 21:39, David Matlack wrote:
This parameter helps a lot for latency-bound workloads [...]
KVM's performance here is usually around 30% of bare metal,
or 50% if you use cache=directsync or cache=writethrough.
With this patch performance reaches 60-65% of bare metal and, more
On 05/02/2015 18:53, Radim Krčmář wrote:
99% of what you get if you use idle=poll in the guest.
(Hm, I would have thought that this can outperform idle=poll ...)
It outperforms idle=poll in the host. A vmexit is probably too cheap to
outperform having idle=poll in the guest.
On 05/02/2015 19:55, Jan Kiszka wrote:
This patch introduces a new module parameter for the KVM module [...]
Wouldn't it be better to tune this on a per-VM basis? Think of mixed
On 02/05/2015 02:20 PM, Paolo Bonzini wrote:
On 05/02/2015 19:55, Jan Kiszka wrote:
This patch introduces a new module parameter for the KVM module [...]
Wouldn't it be better to
On Thu, Feb 05, 2015 at 05:05:25PM +0100, Paolo Bonzini wrote:
This patch introduces a new module parameter for the KVM module [...]
This parameter helps a lot for latency-bound
On Thu, Feb 05, 2015 at 09:34:06PM -0200, Marcelo Tosatti wrote:
On Thu, Feb 05, 2015 at 05:05:25PM +0100, Paolo Bonzini wrote:
This patch introduces a new module parameter for the KVM module [...]
On 02/05/2015 11:05 AM, Paolo Bonzini wrote:
This patch introduces a new module parameter for the KVM module [...]
This parameter helps a lot for latency-bound workloads---in particular
On 2015-02-05 17:05, Paolo Bonzini wrote:
This patch introduces a new module parameter for the KVM module [...]
Wouldn't it be better to tune this on a per-VM basis? Think of mixed
2015-02-05 17:05+0100, Paolo Bonzini:
This patch introduces a new module parameter for the KVM module [...]
This parameter helps a lot for latency-bound workloads---in particular
I
On 05/02/2015 20:23, Rik van Riel wrote:
3) long term anyway we want it to auto tune, which is better than tuning
it per-VM.
We may want to auto tune it per VM.
We may even want to auto tune it per VCPU.
However, if we make auto tuning work well, I do not think we want to expose a user
On Thu, Feb 5, 2015 at 8:05 AM, Paolo Bonzini pbonz...@redhat.com wrote:
This patch introduces a new module parameter for the KVM module [...]
Awesome. I have been working on the same
13 matches