On Tue, Jan 18, 2011 at 11:09:01AM -0600, Anthony Liguori wrote:
But we also need to provide a compatible interface to management tools.
Exposing the device model topology as a compatible interface
artificially limits us. It's far better to provide higher level
supported interfaces to give us
From: Lai Jiangshan la...@cn.fujitsu.com
Simple cleanup: use the existing helper kvm_check_extension().
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
---
kvm-all.c |2 +-
target-i386/kvm.c |4 ++--
2 files changed, 3
From: Jan Kiszka jan.kis...@siemens.com
Simplify kvm_has_msr_star/hsave_pa to booleans and push their one-time
initialization into kvm_arch_init. Also handle potential errors of that
setup procedure.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
Signed-off-by: Marcelo Tosatti
From: Jan Kiszka jan.kis...@siemens.com
kvm_arch_reset_vcpu initializes mp_state, and that function is invoked
right after kvm_arch_init_vcpu.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
---
target-i386/kvm.c |2 --
1 files changed, 0
From: Jin Dongming jin.dongm...@np.css.fujitsu.com
Clean up cpu_inject_x86_mce() for later patch.
Signed-off-by: Jin Dongming jin.dongm...@np.css.fujitsu.com
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
---
target-i386/helper.c | 27 +--
1 files changed, 17
From: Jin Dongming jin.dongm...@np.css.fujitsu.com
Add a function for checking whether the current CPU supports MCA broadcast.
Signed-off-by: Jin Dongming jin.dongm...@np.css.fujitsu.com
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
---
target-i386/cpu.h|1 +
target-i386/helper.c | 33
From: Jan Kiszka jan.kis...@siemens.com
Ensure that we stop the guest whenever we face a fatal or unknown exit
reason. If we stop, we also have to enforce a cpu loop exit.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
---
kvm-all.c |
From: Jan Kiszka jan.kis...@siemens.com
In order to support loading BIOSes larger than 256K, reorder the code,
adjusting the base if the kernel supports moving the identity map.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
---
target-i386/kvm.c | 63
From: Jan Kiszka jan.kis...@siemens.com
The ordering doesn't matter in this case, but better keep it consistent.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
---
target-i386/kvm.c |6 +++---
1 files changed, 3 insertions(+), 3
From: Jan Kiszka jan.kis...@siemens.com
We must flush pending mmio writes if we leave kvm_cpu_exec for an IO
window. Otherwise we risk losing those requests when migrating to a
different host during that window.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
Signed-off-by: Marcelo Tosatti
From: Jan Kiszka jan.kis...@siemens.com
The imbalance in the hold time of qemu_global_mutex only exists in TCG
mode. In contrast to TCG VCPUs, KVM drops the global lock during guest
execution. We already avoid touching the fairness lock from the
IO-thread in KVM mode, so also stop using it from
From: Jin Dongming jin.dongm...@np.css.fujitsu.com
Refactor code for maintainability.
Signed-off-by: Hidetoshi Seto seto.hideto...@jp.fujitsu.com
Signed-off-by: Jin Dongming jin.dongm...@np.css.fujitsu.com
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
---
target-i386/kvm.c | 111
From: Jan Kiszka jan.kis...@siemens.com
All CPUX86State variables before CPU_COMMON are automatically cleared on
reset. Reorder nmi_injected and nmi_pending to avoid having to touch
them explicitly.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
Signed-off-by: Marcelo Tosatti
From: Jan Kiszka jan.kis...@siemens.com
No functional changes.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
---
kvm-all.c | 139 ++--
1 files changed, 79 insertions(+), 60
From: Jin Dongming jin.dongm...@np.css.fujitsu.com
Pass a table instead of multiple args.
Note:
kvm_inject_x86_mce(env, bank, status, mcg_status, addr, misc,
abort_on_error);
is equal to:
struct kvm_x86_mce mce = {
.bank = bank,
.status = status,
.mcg_status = mcg_status,
.addr = addr,
.misc = misc,
};
From: Jan Kiszka jan.kis...@siemens.com
This seems to date back to the days KVM didn't support real mode. The
check is no longer needed and, even worse, is corrupting the guest state
in case SS.RPL != DPL.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
Signed-off-by: Avi Kivity a...@redhat.com
The following changes since commit b646968336d4180bdd7d2e24209708dcee6ba400:
checkpatch: adjust to QEMUisms (2011-01-20 20:58:56 +)
are available in the git repository at:
git://git.kernel.org/pub/scm/virt/kvm/qemu-kvm.git uq/master
Jan Kiszka (23):
kvm: x86: Fix DPL write back of
From: Jan Kiszka jan.kis...@siemens.com
No functional changes.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
Signed-off-by: Avi Kivity a...@redhat.com
---
target-i386/kvm.c | 335 +
1 files changed, 182 insertions(+), 153 deletions(-)
From: Jan Kiszka jan.kis...@siemens.com
Instead of splattering the code with #ifdefs and runtime checks for
capabilities we cannot work without anyway, provide central test
infrastructure for verifying their availability both at build and
runtime.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
From: Jan Kiszka jan.kis...@siemens.com
This code path will not yet be taken as we still lack in-kernel irqchip
support. But qemu-kvm can already make use of it and drop its own
mp_state access services.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
Signed-off-by: Marcelo Tosatti
From: Jan Kiszka jan.kis...@siemens.com
This exit only triggers activity in the common exit path, but we should
accept it in order to be able to detect unknown exit types.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
---
target-i386/kvm.c |
From: Lai Jiangshan la...@cn.fujitsu.com
Make use of the new KVM_NMI IOCTL to send NMIs into the KVM guest if the
user space raised them. (example: qemu monitor's nmi command)
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
Acked-by: Jan Kiszka jan.kis...@siemens.com
Signed-off-by: Marcelo
From: Jan Kiszka jan.kis...@siemens.com
If we lack kvm_para.h, MSR_KVM_ASYNC_PF_EN is not defined. The change in
kvm_arch_init_vcpu is just for consistency reasons.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
---
target-i386/kvm.c |8
From: Jan Kiszka jan.kis...@siemens.com
If the kernel does not support KVM_CAP_ASYNC_PF, it also does not know
about the related MSR. So skip it during state synchronization in that
case. Fixes annoying kernel warnings.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
Signed-off-by: Marcelo
From: Jan Kiszka jan.kis...@siemens.com
No longer used.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
---
kvm-all.c |4 ++--
kvm-stub.c |2 +-
kvm.h |4 ++--
target-i386/kvm.c |2 +-
From: Jan Kiszka jan.kis...@siemens.com
For unknown reasons, xcr0 reset ended up in kvm_arch_update_guest_debug
on upstream merge. Fix this and also remove the misleading comment (1 is
THE reset value).
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
Signed-off-by: Marcelo Tosatti
From: Jan Kiszka jan.kis...@siemens.com
Introduce the cpu_dump_state flag CPU_DUMP_CODE and implement it for
x86. This writes out the code bytes around the current instruction
pointer. Make use of this feature in KVM to help debugging fatal vm
exits.
Signed-off-by: Jan Kiszka
From: Jin Dongming jin.dongm...@np.css.fujitsu.com
Share the same error handling, and rename this function after the
MCIP (Machine Check In Progress) flag.
Signed-off-by: Hidetoshi Seto seto.hideto...@jp.fujitsu.com
Signed-off-by: Jin Dongming jin.dongm...@np.css.fujitsu.com
Signed-off-by: Marcelo
From: Jan Kiszka jan.kis...@siemens.com
The DPL is stored in the flags and not in the selector. In fact, the RPL
may differ from the DPL at some point in time, and so we were corrupting
the guest state so far.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
Signed-off-by: Avi Kivity
From: Jan Kiszka jan.kis...@siemens.com
Make sure to write the cleared MSR_KVM_SYSTEM_TIME, MSR_KVM_WALL_CLOCK,
and MSR_KVM_ASYNC_PF_EN to the kernel state so that a freshly booted
guest cannot be disturbed by old values.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
CC: Glauber Costa
From: Jan Kiszka jan.kis...@siemens.com
This unbreaks guest debugging when the 4th hardware breakpoint used for
guest debugging is a watchpoint of 4 or 8 byte length. The 31st bit of
DR7 is set in that case and used to cause a sign extension to the high
word which was breaking the guest state (vm
From: Jan Kiszka jan.kis...@siemens.com
Report KVM_EXIT_UNKNOWN, KVM_EXIT_FAIL_ENTRY, and KVM_EXIT_EXCEPTION
with more details to stderr. The latter two are so far x86-only, so move
them into the arch-specific handler. Integrate the Intel real mode
warning on KVM_EXIT_FAIL_ENTRY that qemu-kvm
From: Jin Dongming jin.dongm...@np.css.fujitsu.com
When the following test case is injected with the mce command, the user
may not get the expected result.
DATA
command cpu bank status mcg_status addr misc
(qemu) mce 1 10xbd00 0x05
On Fri, Jan 21, 2011 at 04:48:02PM -0700, Alex Williamson wrote:
When doing device assignment, we use cpu_register_physical_memory() to
directly map the qemu mmap of the device resource into the address
space of the guest. The unadvertised feature of the register physical
memory code path on
On Mon, Jan 10, 2011 at 09:32:04AM +0100, Jan Kiszka wrote:
From: Jan Kiszka jan.kis...@siemens.com
Introduce qemu_cpu_kick_self to send SIG_IPI to the calling VCPU
context. First user will be kvm.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
For the updated patch, can't see where
On Mon, Jan 10, 2011 at 09:32:00AM +0100, Jan Kiszka wrote:
From: Jan Kiszka jan.kis...@siemens.com
Currently, we only configure and process MCE-related SIGBUS events if
CONFIG_IOTHREAD is enabled. Fix this by factoring out the required
handler registration and system configuration. Make
On 2011-01-24 12:47, Marcelo Tosatti wrote:
On Mon, Jan 10, 2011 at 09:32:04AM +0100, Jan Kiszka wrote:
From: Jan Kiszka jan.kis...@siemens.com
Introduce qemu_cpu_kick_self to send SIG_IPI to the calling VCPU
context. First user will be kvm.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
On 2011-01-24 12:17, Marcelo Tosatti wrote:
On Mon, Jan 10, 2011 at 09:32:00AM +0100, Jan Kiszka wrote:
From: Jan Kiszka jan.kis...@siemens.com
Currently, we only configure and process MCE-related SIGBUS events if
CONFIG_IOTHREAD is enabled. Fix this by factoring out the required
handler
Please send in any agenda items you are interested in covering.
thanks,
-chris
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please send in any agenda items you are interested in covering.
thanks, Juan.
On 2011-01-21 19:49, Blue Swirl wrote:
I'd add fourth possible class:
- device, CPU and machine configuration, like nographic,
win2k_install_hack, no_hpet, smp_cpus etc. Maybe also
irqchip_in_kernel could fit here, though it obviously depends on a
host capability too.
I would count
On Mon, 2011-01-24 at 15:16 +0100, Jan Kiszka wrote:
On 2011-01-24 10:32, Marcelo Tosatti wrote:
On Fri, Jan 21, 2011 at 04:48:02PM -0700, Alex Williamson wrote:
When doing device assignment, we use cpu_register_physical_memory() to
directly map the qemu mmap of the device resource into the
On 01/22/2011 06:53 AM, Rik van Riel wrote:
The main question that remains is whether the PV ticketlocks are
a large enough improvement to also merge those. I expect they
will be, and we'll see so in the benchmark numbers.
The pathological worst-case of ticket locks in a virtual environment
On Thu, 2011-01-20 at 16:33 -0500, Rik van Riel wrote:
The clear_buddies function does not seem to play well with the concept
of hierarchical runqueues. In the following tree, task groups are
represented by 'G', tasks by 'T', next by 'n' and last by 'l'.
[ASCII diagram of the hierarchical runqueue tree elided]
On Thu, 2011-01-20 at 16:33 -0500, Rik van Riel wrote:
Use the buddy mechanism to implement yield_task_fair. This
allows us to skip onto the next highest priority se at every
level in the CFS tree, unless doing so would introduce gross
unfairness in CPU time distribution.
We order the
On 01/24/2011 12:57 PM, Peter Zijlstra wrote:
On Thu, 2011-01-20 at 16:33 -0500, Rik van Riel wrote:
The clear_buddies function does not seem to play well with the concept
of hierarchical runqueues. In the following tree, task groups are
represented by 'G', tasks by 'T', next by 'n' and last
As a proof of concept to KVM - Kernel Virtual Memory, this patch
implements kvmclock per-vcpu systime grabbing on top of it. At first, it
may seem like a waste of work to just redo it, since it is working well. But
over time, other MSRs were added - think ASYNC_PF - and more will probably come.
As a proof of concept to KVM - Kernel Virtual Memory, this patch
implements wallclock grabbing on top of it. At first, it may seem
like a waste of work to just redo it, since it is working well. But over
time, other MSRs were added - think ASYNC_PF - and more will probably come.
After this
As a proof of concept to KVM - Kernel Virtual Memory, this patch
implements kvmclock per-vcpu systime grabbing on top of it. At first, it
may seem like a waste of work to just redo it, since it is working well. But
over time, other MSRs were added - think ASYNC_PF - and more will probably come.
Register steal time within KVM. Every time we sample the steal time
information, we update a local variable that tells what was the
last time read. We then account the difference.
Signed-off-by: Glauber Costa glom...@redhat.com
CC: Rik van Riel r...@redhat.com
CC: Jeremy Fitzhardinge
To implement steal time, we need the hypervisor to pass the guest information
about how much time was spent running other processes outside the VM.
This is per-vcpu, and using the kvmclock structure for that is an abuse
we decided not to make.
This patch contains the hypervisor part for it. I am
To implement steal time, we need the hypervisor to pass the guest information
about how much time was spent running other processes outside the VM.
We consider time to be potentially stolen every time we schedule out the vcpu,
until we schedule it in again. If this is, or if this will not, be
KVM, which stands for KVM Virtual Memory (I wanted to call it KVM Virtual Mojito),
is a piece of shared memory that is visible to both the hypervisor and the guest
kernel - but not the guest userspace.
The basic idea is that the guest can tell the hypervisor about a specific
piece of memory, and
This patch accounts steal time in kernel/sched.
I kept it from the last proposal, because I still see advantages
in it: Doing it here will give us easier access from scheduler
variables such as the cpu rq. The next patch shows an example of
usage for it.
Since functions like account_idle_time()
KVM, which stands for KVM Virtual Memory (I wanted to call it KVM Virtual Mojito),
is a piece of shared memory that is visible to both the hypervisor and the guest
kernel - but not the guest userspace.
The basic idea is that the guest can tell the hypervisor about a specific
piece of memory, and
Hello people
This is the new version of the steal time series, this time on steroids.
The steal time per se is not much different from the last time I posted, so
I'll highlight what's around it.
Since one of the main fights was around how to register the shared memory area,
which would end up
As a proof of concept to KVM - Kernel Virtual Memory, this patch
implements wallclock grabbing on top of it. At first, it may seem
like a waste of work to just redo it, since it is working well. But over
time, other MSRs were added - think ASYNC_PF - and more will probably come.
After this
To implement steal time, we need the hypervisor to pass the guest information
about how much time was spent running other processes outside the VM.
This is per-vcpu, and using the kvmclock structure for that is an abuse
we decided not to make.
This patch contains the headers for it. I am keeping
As a proof of concept to KVM - Kernel Virtual Memory, this patch
implements kvmclock per-vcpu systime grabbing on top of it. At first, it
may seem like a waste of work to just redo it, since it is working well. But
over time, other MSRs were added - think ASYNC_PF - and more will probably come.
As a proof of concept to KVM - Kernel Virtual Memory, this patch
implements wallclock grabbing on top of it. At first, it may seem
like a waste of work to just redo it, since it is working well. But over
time, other MSRs were added - think ASYNC_PF - and more will probably come.
After this
KVM, which stands for KVM Virtual Memory (I wanted to call it KVM Virtual Mojito),
is a piece of shared memory that is visible to both the hypervisor and the guest
kernel - but not the guest userspace.
The basic idea is that the guest can tell the hypervisor about a specific
piece of memory, and
On Thu, 2011-01-20 at 16:34 -0500, Rik van Riel wrote:
From: Mike Galbraith efa...@gmx.de
Currently only implemented for fair class tasks.
Add a yield_to_task() method to the fair scheduling class, allowing the
caller of yield_to() to accelerate another thread in its thread group,
task
On 01/24/2011 01:04 PM, Peter Zijlstra wrote:
diff --git a/kernel/sched.c b/kernel/sched.c
index dc91a4d..e4e57ff 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -327,7 +327,7 @@ struct cfs_rq {
* 'curr' points to currently running entity on this cfs_rq.
* It is set to
On 01/24/2011 01:12 PM, Peter Zijlstra wrote:
On Thu, 2011-01-20 at 16:34 -0500, Rik van Riel wrote:
From: Mike Galbraith efa...@gmx.de
Currently only implemented for fair class tasks.
Add a yield_to_task() method to the fair scheduling class, allowing the
caller of yield_to() to accelerate
On 01/18/2011 03:53 AM, Jan Kiszka wrote:
On 2011-01-18 04:03, Stefan Berger wrote:
On 01/16/2011 09:43 AM, Avi Kivity wrote:
On 01/14/2011 09:27 PM, Stefan Berger wrote:
Can you sprinkle some printfs() around kvm_run (in qemu-kvm.c) to
verify this?
Here's what I did:
interrupt exit
On Mon, 2011-01-24 at 13:06 -0500, Glauber Costa wrote:
This is a first proposal for using steal time information
to influence the scheduler. There are a lot of optimizations
and fine grained adjustments to be done, but it is working reasonably
so far for me (mostly)
With this patch (and
Just to block netperf you can send it SIGSTOP :)
Clever :) One could I suppose achieve the same result by making the remote
receive socket buffer size smaller than the UDP message size and then not worry
about having to learn the netserver's PID to send it the SIGSTOP. I *think* the
On Mon, Jan 24, 2011 at 10:27:55AM -0800, Rick Jones wrote:
Just to block netperf you can send it SIGSTOP :)
Clever :) One could I suppose achieve the same result by making the
remote receive socket buffer size smaller than the UDP message size
and then not worry about having to learn
On Mon, 2011-01-24 at 19:32 +0100, Peter Zijlstra wrote:
On Mon, 2011-01-24 at 13:06 -0500, Glauber Costa wrote:
This is a first proposal for using steal time information
to influence the scheduler. There are a lot of optimizations
and fine grained adjustments to be done, but it is working
Michael S. Tsirkin wrote:
On Mon, Jan 24, 2011 at 10:27:55AM -0800, Rick Jones wrote:
Just to block netperf you can send it SIGSTOP :)
Clever :) One could I suppose achieve the same result by making the
remote receive socket buffer size smaller than the UDP message size
and then not worry
On Mon, Jan 24, 2011 at 11:01:45AM -0800, Rick Jones wrote:
Michael S. Tsirkin wrote:
On Mon, Jan 24, 2011 at 10:27:55AM -0800, Rick Jones wrote:
Just to block netperf you can send it SIGSTOP :)
Clever :) One could I suppose achieve the same result by making the
remote receive socket
On Mon, 2011-01-24 at 16:51 -0200, Glauber Costa wrote:
I would really much rather see you change update_rq_clock_task() and
subtract your ns resolution steal time from our wall-time,
update_rq_clock_task() already updates the cpu_power relative to the
remaining time available.
But then
On Mon, 2011-01-24 at 16:51 -0200, Glauber Costa wrote:
I thought kvm had a ns resolution steal-time clock?
Yes, the one I introduced earlier in this series is nsec. However, user
and system will be accounted in usec at most, so there is no point in
using nsec here.
Well, the scheduler
On Mon, 2011-01-24 at 20:51 +0100, Peter Zijlstra wrote:
On Mon, 2011-01-24 at 16:51 -0200, Glauber Costa wrote:
I would really much rather see you change update_rq_clock_task() and
subtract your ns resolution steal time from our wall-time,
update_rq_clock_task() already updates the
On Mon, Jan 24, 2011 at 2:08 PM, Jan Kiszka jan.kis...@siemens.com wrote:
On 2011-01-21 19:49, Blue Swirl wrote:
I'd add fourth possible class:
- device, CPU and machine configuration, like nographic,
win2k_install_hack, no_hpet, smp_cpus etc. Maybe also
irqchip_in_kernel could fit here,
On 2011-01-24 22:35, Blue Swirl wrote:
On Mon, Jan 24, 2011 at 2:08 PM, Jan Kiszka jan.kis...@siemens.com wrote:
On 2011-01-21 19:49, Blue Swirl wrote:
I'd add fourth possible class:
- device, CPU and machine configuration, like nographic,
win2k_install_hack, no_hpet, smp_cpus etc. Maybe
On 01/24/2011 07:25 AM, Chris Wright wrote:
Please send in any agenda items you are interested in covering.
- coroutines for the block layer
- glib everywhere
Regards,
Anthony Liguori
thanks,
-chris
On 2011-01-24 19:27, Stefan Berger wrote:
On 01/18/2011 03:53 AM, Jan Kiszka wrote:
On 2011-01-18 04:03, Stefan Berger wrote:
On 01/16/2011 09:43 AM, Avi Kivity wrote:
On 01/14/2011 09:27 PM, Stefan Berger wrote:
Can you sprinkle some printfs() around kvm_run (in qemu-kvm.c) to
verify this?
On 01/24/2011 01:06 PM, Glauber Costa wrote:
To implement steal time, we need the hypervisor to pass the guest information
about how much time was spent running other processes outside the VM.
This is per-vcpu, and using the kvmclock structure for that is an abuse
we decided not to make.
This
On 01/24/2011 01:06 PM, Glauber Costa wrote:
To implement steal time, we need the hypervisor to pass the guest information
about how much time was spent running other processes outside the VM.
This is per-vcpu, and using the kvmclock structure for that is an abuse
we decided not to make.
This
On 01/24/2011 01:06 PM, Glauber Costa wrote:
To implement steal time, we need the hypervisor to pass the guest information
about how much time was spent running other processes outside the VM.
We consider time to be potentially stolen everytime we schedule out the vcpu,
until we schedule it in
On 01/24/2011 01:06 PM, Glauber Costa wrote:
Register steal time within KVM. Every time we sample the steal time
information, we update a local variable that tells what was the
last time read. We then account the difference.
Signed-off-by: Glauber Costa glom...@redhat.com
CC: Rik van
On 01/24/2011 01:06 PM, Glauber Costa wrote:
Register steal time within KVM. Every time we sample the steal time
information, we update a local variable that tells what was the
last time read. We then account the difference.
Signed-off-by: Glauber Costa glom...@redhat.com
CC: Rik van
On 01/24/2011 01:06 PM, Glauber Costa wrote:
This patch accounts steal time in kernel/sched.
I kept it from last proposal, because I still see advantages
in it: Doing it here will give us easier access from scheduler
variables such as the cpu rq. The next patch shows an example of
usage for
On Mon, 2011-01-24 at 18:31 -0500, Rik van Riel wrote:
On 01/24/2011 01:06 PM, Glauber Costa wrote:
Register steal time within KVM. Every time we sample the steal time
information, we update a local variable that tells what was the
last time read. We then account the difference.
On 01/24/2011 08:25 PM, Glauber Costa wrote:
On Mon, 2011-01-24 at 18:31 -0500, Rik van Riel wrote:
On 01/24/2011 01:06 PM, Glauber Costa wrote:
Register steal time within KVM. Every time we sample the steal time
information, we update a local variable that tells what was the
last time read. We
On Mon, 2011-01-24 at 20:26 -0500, Rik van Riel wrote:
On 01/24/2011 08:25 PM, Glauber Costa wrote:
On Mon, 2011-01-24 at 18:31 -0500, Rik van Riel wrote:
On 01/24/2011 01:06 PM, Glauber Costa wrote:
Register steal time within KVM. Every time we sample the steal time
information, we update
On 01/24/2011 05:34 PM, Jan Kiszka wrote:
On 2011-01-24 19:27, Stefan Berger wrote:
On 01/18/2011 03:53 AM, Jan Kiszka wrote:
On 2011-01-18 04:03, Stefan Berger wrote:
On 01/16/2011 09:43 AM, Avi Kivity wrote:
On 01/14/2011 09:27 PM, Stefan Berger wrote:
Can you sprinkle some printfs()
The following series implements page cache control,
this is a split out version of patch 1 of version 3 of the
page cache optimization patches posted earlier at
Previous posting http://lwn.net/Articles/419564/
The previous few revisions received a lot of comments; I've tried to
address as many of
This patch moves zone_reclaim and associated helpers
outside CONFIG_NUMA. This infrastructure is reused
in the patches for page cache control that follow.
Signed-off-by: Balbir Singh bal...@linux.vnet.ibm.com
---
include/linux/mmzone.h |4 ++--
include/linux/swap.h |4 ++--
Changelog v3
1. Renamed zone_reclaim_unmapped_pages to zone_reclaim_pages
Refactor zone_reclaim, move reusable functionality outside
of zone_reclaim. Make zone_reclaim_unmapped_pages modular
Signed-off-by: Balbir Singh bal...@linux.vnet.ibm.com
Reviewed-by: Christoph Lameter c...@linux.com
---
Changelog v4
1. Add max_unmapped_ratio and use that as the upper limit
to check when to shrink the unmapped page cache (Christoph
Lameter)
Changelog v2
1. Use a config option to enable the code (Andrew Morton)
2. Explain the magic tunables in the code or at-least attempt
to explain them
* Balbir Singh bal...@linux.vnet.ibm.com [2011-01-25 10:40:09]:
Changelog v3
1. Renamed zone_reclaim_unmapped_pages to zone_reclaim_pages
Refactor zone_reclaim, move reusable functionality outside
of zone_reclaim. Make zone_reclaim_unmapped_pages modular
Signed-off-by: Balbir Singh
On Mon, 2011-01-24 at 08:44 -0700, Alex Williamson wrote:
I'll look at how we might be
able to allocate slots on demand. Thanks,
Here's a first cut just to see if this looks agreeable. This allows the
slot array to grow on demand. This works with current userspace, as
well as userspace
On 2011-01-25 04:13, Stefan Berger wrote:
On 01/24/2011 05:34 PM, Jan Kiszka wrote:
On 2011-01-24 19:27, Stefan Berger wrote:
On 01/18/2011 03:53 AM, Jan Kiszka wrote:
On 2011-01-18 04:03, Stefan Berger wrote:
On 01/16/2011 09:43 AM, Avi Kivity wrote:
On 01/14/2011 09:27 PM, Stefan Berger
On 2011-01-25 06:37, Alex Williamson wrote:
On Mon, 2011-01-24 at 08:44 -0700, Alex Williamson wrote:
I'll look at how we might be
able to allocate slots on demand. Thanks,
Here's a first cut just to see if this looks agreeable. This allows the
slot array to grow on demand. This works