This patch adds a function pointer in one of the many paravirt_ops
structs, to allow guests to register a steal time function.
Signed-off-by: Glauber Costa glom...@redhat.com
CC: Rik van Riel r...@redhat.com
CC: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
CC: Peter Zijlstra
This patch is simple; it is put in a separate commit so it can be more easily
shared between guest and hypervisor. It just defines a named constant
to indicate the enable bit for KVM-specific MSRs.
Signed-off-by: Glauber Costa glom...@redhat.com
CC: Rik van Riel r...@redhat.com
CC: Jeremy Fitzhardinge
On Wed, Jun 29, 2011 at 11:08:23AM +0100, Stefan Hajnoczi wrote:
On Wed, Jun 29, 2011 at 8:57 AM, Kevin Wolf kw...@redhat.com wrote:
Am 28.06.2011 21:41, schrieb Marcelo Tosatti:
stream
--
1) base - remote
2) base - remote - local
3) base - local
local image is always valid.
The following three patches pave the way for KVM in-guest performance
monitoring. One is a perf API improvement, another fixes the constraints
for the version 1 architectural PMU (which we will emulate), and the third
adds an export that KVM will use.
Please consider for merging; this will make
The perf_event overflow handler does not receive any caller-derived
argument, so many callers need to resort to looking up the perf_event
in their local data structure. This is ugly and doesn't scale if a
single callback services many perf_events.
Fix by adding a context parameter to
KVM needs one-shot samples, since a PMC programmed to -X will fire after X
events and then again after 2^40 events (i.e. variable period).
Signed-off-by: Avi Kivity a...@redhat.com
---
include/linux/perf_event.h |5 +
kernel/events/core.c |3 ++-
2 files changed, 7
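The one-shot requirement above comes down to modular arithmetic on a fixed-width counter. A standalone sketch (not the kernel code; the 40-bit width matches the counter period mentioned above):

```c
#include <stdint.h>

#define PMC_WIDTH 40
#define PMC_MASK  ((1ULL << PMC_WIDTH) - 1)

/* Program the counter to -X, truncated to the counter width, so it
 * overflows after X events. */
static uint64_t program_minus_x(uint64_t x)
{
    return (0 - x) & PMC_MASK;
}

/* Events remaining until the next overflow for a given counter value.
 * After the first overflow the counter restarts from 0, so the next
 * period is the full 2^40 events -- hence the need for one-shot samples. */
static uint64_t events_until_overflow(uint64_t counter)
{
    return (1ULL << PMC_WIDTH) - (counter & PMC_MASK);
}
```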
The v1 PMU does not have any fixed counters. Using the v2 constraints,
which do have fixed counters, causes an additional choice to be present
in the weight calculation, but not when actually scheduling the event,
leading to an event being not scheduled at all.
Signed-off-by: Avi Kivity
29.06.2011 19:20, Iordan Iordanov wrote:
On 06/28/11 18:29, Michael Tokarev wrote:
The process listening on this socket no longer exists;
it has finished. With this command line it should stay in the
foreground until it finishes (there's no -daemonize etc),
so you should see error messages if any.
On Wed, Jun 29, 2011 at 06:42:35PM +0300, Avi Kivity wrote:
The perf_event overflow handler does not receive any caller-derived
argument, so many callers need to resort to looking up the perf_event
in their local data structure. This is ugly and doesn't scale if a
single callback services
On 06/29/2011 07:08 PM, Frederic Weisbecker wrote:
On Wed, Jun 29, 2011 at 06:42:35PM +0300, Avi Kivity wrote:
The perf_event overflow handler does not receive any caller-derived
argument, so many callers need to resort to looking up the perf_event
in their local data structure. This is
Hi Frederic,
Thanks for including me on CC.
On Wed, Jun 29, 2011 at 05:08:45PM +0100, Frederic Weisbecker wrote:
On Wed, Jun 29, 2011 at 06:42:35PM +0300, Avi Kivity wrote:
The perf_event overflow handler does not receive any caller-derived
argument, so many callers need to resort to
To allow efficient use of short-lived threadpool jobs, don't
allocate them dynamically upon creation. Instead, store them
within 'job' structures.
This avoids some of the overhead of creating/destroying jobs which
live only for a short time.
Signed-off-by: Sasha Levin levinsasha...@gmail.com
---
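The embedding described above can be sketched like this (structure and function names are illustrative, not necessarily the tools/kvm ones): the job lives inside its owner, so queuing it costs no allocation.

```c
#include <stddef.h>

struct thread_pool__job {
    void (*callback)(void *data);
    void *data;
    struct thread_pool__job *next;   /* intrusive queue linkage */
};

/* A hypothetical owner: the job is embedded, so there is no
 * malloc/free per short-lived request. */
struct disk_op {
    int sector;
    struct thread_pool__job job;
};

static void thread_pool__init_job(struct thread_pool__job *job,
                                  void (*callback)(void *), void *data)
{
    job->callback = callback;
    job->data = data;
    job->next = NULL;
}
```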
Process multiple requests within a virtio-blk device's vring
in parallel.
Doing so may improve performance in cases where a request that can
be completed from data already present in a cache is queued after
a request for uncached data.
bonnie++ benchmarks have shown a 6% improvement with
This will allow tracking instance names and sending commands
to specific instances if multiple instances are running.
Signed-off-by: Sasha Levin levinsasha...@gmail.com
---
tools/kvm/include/kvm/kvm.h |5 +++-
tools/kvm/kvm-run.c |5 +++-
tools/kvm/kvm.c | 55
Instead of sending a signal to the first instance found, send it
to a specific instance.
Signed-off-by: Sasha Levin levinsasha...@gmail.com
---
tools/kvm/kvm-debug.c | 19 +++
1 files changed, 15 insertions(+), 4 deletions(-)
diff --git a/tools/kvm/kvm-debug.c
Instead of sending a signal to the first instance found, send it
to a specific instance.
Signed-off-by: Sasha Levin levinsasha...@gmail.com
---
tools/kvm/kvm-pause.c | 13 +++--
1 files changed, 11 insertions(+), 2 deletions(-)
diff --git a/tools/kvm/kvm-pause.c
Signed-off-by: Sasha Levin levinsasha...@gmail.com
---
tools/kvm/kvm.c |2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/tools/kvm/kvm.c b/tools/kvm/kvm.c
index 4f723a6..15bcf08 100644
--- a/tools/kvm/kvm.c
+++ b/tools/kvm/kvm.c
@@ -345,6 +345,8 @@ struct kvm
From the virtio spec:
The virtio memory balloon device is a primitive device for managing guest
memory: the device asks for a certain amount of memory, and the guest supplies
it (or withdraws it, if the device has more than it asks for). This allows the
guest to adapt to changes in allowance of
Add a command to easily inflate/deflate the balloon in running
instances.
Usage:
kvm balloon [command] [instance name] [size]
command is either inflate or deflate, and size is represented in MB.
Target instance must be named (started with '--name').
Signed-off-by: Sasha Levin
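Since the size argument is given in MB while the virtio balloon operates on 4 KiB pages, the conversion the command implies can be sketched as follows (a hedged illustration; the function name is made up):

```c
#include <stdint.h>

#define BALLOON_PAGE_SHIFT 12   /* virtio balloon pages are 4 KiB */

/* Convert the user-supplied MB count to a balloon page count. */
static uint64_t mb_to_balloon_pages(uint64_t mb)
{
    return (mb << 20) >> BALLOON_PAGE_SHIFT;
}
```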
Not stopping the VCPUs first leads to segfaults and other errors due to
unsynchronized access between threads.
Signed-off-by: Sasha Levin levinsasha...@gmail.com
---
tools/kvm/term.c |2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/tools/kvm/term.c b/tools/kvm/term.c
index
Hi Michael,
On 06/29/11 11:52, Michael Tokarev wrote:
The only other explanation I can think of is that you tried
to run two instances of kvm, and when the second instance initialized
it re-created the monitor socket but failed later (e.g., when
initializing the network or something else) and exited, but left
I keep running into a situation where a KVM guest will lock up on some
kind of disk process it seems. System load goes way up but cpu % is
relatively low based on a crond script collecting data before
everything goes south. As a result, the host becomes unresponsive as
well. Initially it appeared
On Wed, 29 Jun 2011, Glauber Costa wrote:
This patch is simple; it is put in a separate commit so it can be more easily
shared between guest and hypervisor. It just defines a named constant
to indicate the enable bit for KVM-specific MSRs.
Signed-off-by: Glauber Costa glom...@redhat.com
CC: Rik
On Wed, 29 Jun 2011, Glauber Costa wrote:
To implement steal time, we need the hypervisor to pass the guest information
about how much time was spent running other processes outside the VM.
This is per-vcpu, and using the kvmclock structure for that is an abuse
we decided not to make.
In
On Wed, 29 Jun 2011, Glauber Costa wrote:
To implement steal time, we need the hypervisor to pass the guest information
about how much time was spent running other processes outside the VM.
This is per-vcpu, and using the kvmclock structure for that is an abuse
we decided not to make.
In
On Wed, 29 Jun 2011, Glauber Costa wrote:
SCHEDSTATS provides a precise source of information about the time
tasks spent on a runqueue, but not running (among other things). It is
especially useful for the steal time implementation, because it doesn't
record halt time at all.
To avoid a hard
On Wed, 29 Jun 2011, Glauber Costa wrote:
This patch adds a function pointer in one of the many paravirt_ops
structs, to allow guests to register a steal time function.
Signed-off-by: Glauber Costa glom...@redhat.com
CC: Rik van Riel r...@redhat.com
CC: Jeremy Fitzhardinge
On Wed, 29 Jun 2011, Glauber Costa wrote:
This patch accounts steal time in kernel/sched.
I kept it from the last proposal because I still see advantages
in it: doing it here gives us easier access to scheduler
variables such as the cpu rq. The next patch shows an example of
usage
On Wed, 29 Jun 2011, Glauber Costa wrote:
This is a first proposal for using steal time information
to influence the scheduler. There are a lot of optimizations
and fine-grained adjustments to be done, but it is working reasonably
so far for me (mostly)
With this patch (and some host
On Wed, 29 Jun 2011, Glauber Costa wrote:
Register steal time within KVM. Every time we sample the steal time
information, we update a local variable that records the last value
read. We then account the difference.
Signed-off-by: Glauber Costa glom...@redhat.com
CC: Rik van Riel
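The delta accounting described above can be sketched in a few lines (a standalone illustration with made-up names, not the actual KVM code): keep the last cumulative value read, and account only the difference on each sample.

```c
#include <stdint.h>

struct steal_clock {
    uint64_t last;       /* last cumulative value read from the hypervisor */
    uint64_t accounted;  /* total steal time accounted so far */
};

/* Called on each sample with the hypervisor-provided cumulative value. */
static void account_steal_sample(struct steal_clock *s, uint64_t now)
{
    uint64_t delta = now - s->last;  /* unsigned math handles wraparound */

    s->last = now;
    s->accounted += delta;
}
```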
In order to make it easier for people to read KVM autotest logs, I
went through the virt module and the kvm test, removing some not
overly useful debug messages and modifying others. Some things that
were modified:
1) Removed MAC address management messages
2) Removed ellipses from most of the debug
On 06/13/2011 04:34 PM, Avi Kivity wrote:
This patchset exposes an emulated version 1 architectural performance
monitoring unit to KVM guests. The PMU is emulated using perf_events,
so the host kernel can multiplex host-wide, host-user, and the
guest on available resources.
Caveats:
- counters
Am 28.06.2011 21:41, schrieb Marcelo Tosatti:
On Tue, Jun 28, 2011 at 02:38:15PM +0100, Stefan Hajnoczi wrote:
On Mon, Jun 27, 2011 at 3:32 PM, Juan Quintela quint...@redhat.com wrote:
Please send in any agenda items you are interested in covering.
Live block copy and image streaming:
* The
On 06/22/2011 05:29 PM, Xiao Guangrong wrote:
If the range spans a page boundary, the mmio access can be broken; fix it
the same way as write emulation.
We already have the guest physical address, so use it to read the guest data
directly and avoid walking the guest page table again
Signed-off-by: Xiao
On 06/12/2011 09:51 AM, Michael S. Tsirkin wrote:
If a device uses more than one queue it is the responsibility of the
device to ensure strict request ordering.
Maybe I misunderstand - how can this be the responsibility of
the device if the device does not get the information about
the
On 06/22/2011 05:29 PM, Xiao Guangrong wrote:
Introduce vcpu_gva_to_gpa to translate a gva to a gpa; we can use it
to clean up the code shared between read emulation and write emulation
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/x86.c | 38
On 06/14/2011 10:39 AM, Hannes Reinecke wrote:
If, however, we decide to expose some details about the backend, we
could be using the values from the backend directly.
EG we could be forwarding the SCSI target port identifier here
(if backed by real hardware) or creating our own SAS-type
On 06/22/2011 05:30 PM, Xiao Guangrong wrote:
The operations of read emulation and write emulation are very similar, so we
can abstract their common operation; in a later patch, it is used to clean up
the duplicated code
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/x86.c
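The shape of that abstraction can be sketched as follows (a hedged, self-contained illustration; the struct and function names here are simplified stand-ins, not the patch's actual KVM signatures): the common path is written once and parameterized by an ops table for the read and write directions.

```c
#include <string.h>

struct read_write_ops {
    void (*emulate)(char *backing, char *val, int bytes);
};

static void read_emulate(char *backing, char *val, int bytes)
{
    memcpy(val, backing, bytes);   /* backing memory -> caller buffer */
}

static void write_emulate(char *backing, char *val, int bytes)
{
    memcpy(backing, val, bytes);   /* caller buffer -> backing memory */
}

static const struct read_write_ops read_ops  = { .emulate = read_emulate  };
static const struct read_write_ops write_ops = { .emulate = write_emulate };

/* The shared emulation path: identical for both directions except for
 * the callback it invokes. */
static void emulator_read_write(const struct read_write_ops *ops,
                                char *backing, char *val, int bytes)
{
    ops->emulate(backing, val, bytes);
}
```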
On Wed, 2011-06-29 at 10:52 +0300, Avi Kivity wrote:
On 06/13/2011 04:34 PM, Avi Kivity wrote:
This patchset exposes an emulated version 1 architectural performance
monitoring unit to KVM guests. The PMU is emulated using perf_events,
so the host kernel can multiplex host-wide, host-user,
On Tue, Jun 28, 2011 at 11:08:07AM -0500, Tom Lendacky wrote:
On Sunday, June 19, 2011 05:27:00 AM Michael S. Tsirkin wrote:
OK, different people seem to test different trees. In the hope to get
everyone on the same page, I created several variants of this patch so
they can be compared.
On Wed, Jun 29, 2011 at 10:23:26AM +0200, Paolo Bonzini wrote:
On 06/12/2011 09:51 AM, Michael S. Tsirkin wrote:
If a device uses more than one queue it is the responsibility of the
device to ensure strict request ordering.
Maybe I misunderstand - how can this be the responsibility of
On 06/22/2011 05:31 PM, Xiao Guangrong wrote:
If the page fault is caused by mmio, we can cache the mmio info; later, we do
not need to walk the guest page table and can quickly tell it is an mmio fault
while we emulate the mmio instruction
Does this work if the mmio spans two pages?
On Tue, 2011-06-28 at 18:10 +0200, Joerg Roedel wrote:
On Fri, Jun 17, 2011 at 03:37:29PM +0200, Joerg Roedel wrote:
this is the second version of the patch-set to support the AMD
guest-/host only bits in the performance counter MSRs. Due to lack of
time I haven't looked into emulating
On Sat, May 28, 2011 at 12:34:27PM -0700, Shirley Ma wrote:
Hello Michael,
In order to use wait for completion in shutting down, it seems to me
another work thread is needed to call vhost_zerocopy_add_used,
Hmm I don't see vhost_zerocopy_add_used here.
it seems
too much work to address a
On 06/22/2011 05:35 PM, Xiao Guangrong wrote:
Use RCU to protect shadow pages being freed, so we can safely walk them;
this should run fast and is needed by the mmio page fault path
static void kvm_mmu_commit_zap_page(struct kvm *kvm,
struct list_head
On 06/22/2011 05:36 PM, Xiao Guangrong wrote:
The idea is from Avi:
| We could cache the result of a miss in an spte by using a reserved bit, and
| checking the page fault error code (or seeing if we get an ept violation or
| ept misconfiguration), so if we get repeated mmio on a page, we don't
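The reserved-bit marking quoted above can be sketched in isolation (bit position and helper names are hypothetical, chosen only for illustration): an spte tagged this way identifies a repeated mmio access without another guest page-table walk.

```c
#include <stdint.h>
#include <stdbool.h>

#define SPTE_MMIO_MASK (1ULL << 51)   /* hypothetical reserved bit */
#define PAGE_SHIFT_X86 12

/* Build an spte that records "this gfn is mmio" in a reserved bit. */
static uint64_t make_mmio_spte(uint64_t gfn)
{
    return SPTE_MMIO_MASK | (gfn << PAGE_SHIFT_X86);
}

/* Checking the bit replaces a full guest page-table walk on repeat faults. */
static bool is_mmio_spte(uint64_t spte)
{
    return (spte & SPTE_MMIO_MASK) != 0;
}

static uint64_t mmio_spte_gfn(uint64_t spte)
{
    return (spte & ~SPTE_MMIO_MASK) >> PAGE_SHIFT_X86;
}
```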
On 06/22/2011 05:27 PM, Xiao Guangrong wrote:
In this version, we fix the bugs in the v1:
- fix broken read emulation spans a page boundary
- fix invalid spte point is got if we walk shadow page table
out of the mmu lock
And, we also introduce some rules to modify spte in this version,
then
On 06/29/2011 11:38 AM, Peter Zijlstra wrote:
Peter, can you look at 1-3 please?
Queued them, thanks!
I was more or less waiting for a next iteration of the series because of
those problems reported, but those three stand well on their own.
Thanks. I'm mired in other work but will return
On 06/29/2011 12:02 PM, Peter Zijlstra wrote:
have you had a chance to look at this patch-set? Are any changes
required?
I would feel a lot more comfortable by having it implemented on all of
x86 as well as at least one !x86 platform. Avi graciously volunteered
for the Intel bits.
Silly
On Wed, Jun 29, 2011 at 9:33 AM, Paolo Bonzini pbonz...@redhat.com wrote:
On 06/14/2011 10:39 AM, Hannes Reinecke wrote:
If, however, we decide to expose some details about the backend, we
could be using the values from the backend directly.
EG we could be forwarding the SCSI target port
Am 20.06.2011 18:48, schrieb Federico Simoncelli:
qemu-img currently writes disk images using writeback, filling
up the cache buffers, which are then flushed by the kernel, preventing
other processes from accessing the storage.
This is particularly bad in cluster environments where time-based
On Wed, Jun 29, 2011 at 12:27:48PM +0300, Avi Kivity wrote:
On 06/29/2011 12:02 PM, Peter Zijlstra wrote:
have you had a chance to look at this patch-set? Are any changes
required?
I would feel a lot more comfortable by having it implemented on all of
x86 as well as at least one !x86
On Wed, Jun 29, 2011 at 05:02:54AM -0400, Peter Zijlstra wrote:
On Tue, 2011-06-28 at 18:10 +0200, Joerg Roedel wrote:
On Fri, Jun 17, 2011 at 03:37:29PM +0200, Joerg Roedel wrote:
this is the second version of the patch-set to support the AMD
guest-/host only bits in the performance
On Tue, Jun 14, 2011 at 05:30:24PM +0200, Hannes Reinecke wrote:
Which is exactly the problem I was referring to.
When using more than one channel the request ordering
_as seen by the initiator_ has to be preserved.
This is quite hard to do from a device's perspective;
it might be able to
On Sun, Jun 12, 2011 at 10:51:41AM +0300, Michael S. Tsirkin wrote:
For example, if the driver is crazy enough to put
all write requests on one queue and all barriers
on another one, how is the device supposed to ensure
ordering?
There is no such thing as barriers in SCSI. The thing that
On Wed, Jun 29, 2011 at 10:23:26AM +0200, Paolo Bonzini wrote:
I agree here, in fact I misread Hannes's comment as if a driver
uses more than one queue it is responsibility of the driver to
ensure strict request ordering. If you send requests to different
queues, you know that those requests
On 06/29/2011 12:03 PM, Christoph Hellwig wrote:
I agree here, in fact I misread Hannes's comment as if a driver
uses more than one queue it is responsibility of the driver to
ensure strict request ordering. If you send requests to different
queues, you know that those requests are
On Wed, Jun 29, 2011 at 10:39:42AM +0100, Stefan Hajnoczi wrote:
I think we're missing a level of addressing. We need the ability to
talk to multiple target ports in order for list target ports to make
sense. Right now there is one implicit target that handles all
commands. That means there
On Wed, Jun 29, 2011 at 8:57 AM, Kevin Wolf kw...@redhat.com wrote:
Am 28.06.2011 21:41, schrieb Marcelo Tosatti:
stream
--
1) base - remote
2) base - remote - local
3) base - local
local image is always valid. Requires backing file support.
With the above, this restriction wouldn't
On 06/29/2011 12:07 PM, Christoph Hellwig wrote:
On Wed, Jun 29, 2011 at 10:39:42AM +0100, Stefan Hajnoczi wrote:
I think we're missing a level of addressing. We need the ability to
talk to multiple target ports in order for list target ports to make
sense. Right now there is one implicit
This patch adds SMEP to all test cases and checks SMEP on prefetch
pte path when cr0.wp=0.
changes since v1:
Add SMEP to all test cases and verify it before setting cr4
Signed-off-by: Yang, Wei wei.y.y...@intel.com
Signed-off-by: Shan, Haitao haitao.s...@intel.com
Signed-off-by: Li,
On Wed, Jun 29, 2011 at 12:23:38PM +0200, Hannes Reinecke wrote:
The general idea here is that we can support NPIV.
With NPIV we'll have several scsi_hosts, each of which is assigned a
different set of LUNs by the array.
With virtio we need to be able to react to LUN remapping on the array
side,
On Wed, Jun 29, 2011 at 12:06:29PM +0200, Paolo Bonzini wrote:
On 06/29/2011 12:03 PM, Christoph Hellwig wrote:
I agree here, in fact I misread Hannes's comment as if a driver
uses more than one queue it is responsibility of the driver to
ensure strict request ordering. If you send
On 06/29/2011 12:31 PM, Michael S. Tsirkin wrote:
On Wed, Jun 29, 2011 at 12:06:29PM +0200, Paolo Bonzini wrote:
On 06/29/2011 12:03 PM, Christoph Hellwig wrote:
I agree here, in fact I misread Hannes's comment as if a driver
uses more than one queue it is responsibility of the driver to
This arranges for the top-level arch/powerpc/kvm/powerpc.c file to
pass down some of the calls it gets to the lower-level subarchitecture
specific code. The lower-level implementations (in booke.c and book3s.c)
are no-ops. The coming book3s_hv.c will need this.
Signed-off-by: Paul Mackerras
Instead of branching out-of-line with the DO_KVM macro to check if we
are in a KVM guest at the time of an interrupt, this moves the KVM
check inline in the first-level interrupt handlers. This speeds up
the non-KVM case and makes sure that none of the interrupt handlers
are missing the check.
Instead of doing the kvm_guest_enter/exit() and local_irq_dis/enable()
calls in powerpc.c, this moves them down into the subarch-specific
book3s_pr.c and booke.c. This eliminates an extra local_irq_enable()
call in book3s_pr.c, and will be needed for when we do SMT4 guest
support in the book3s
Doing so means that we don't have to save the flags anywhere and gets
rid of the last reference to to_book3s(vcpu) in arch/powerpc/kvm/book3s.c.
Doing so is OK because a program interrupt won't be generated at the
same time as any other synchronous interrupt. If a program interrupt
and an
This adds the infrastructure for handling PAPR hcalls in the kernel,
either early in the guest exit path while we are still in real mode,
or later once the MMU has been turned back on and we are in the full
kernel context. The advantage of handling hcalls in real mode if
possible is that we avoid
Commit 69acc0d3ba (KVM: PPC: Resolve real-mode handlers through
function exports) resulted in vcpu->arch.trampoline_lowmem and
vcpu->arch.trampoline_enter ending up with kernel virtual addresses
rather than physical addresses. This is OK on 64-bit Book3S machines,
which ignore the top 4 bits of the
This replaces the single CPU_FTR_HVMODE_206 bit with two bits, one to
indicate that we have a usable hypervisor mode, and another to indicate
that the processor conforms to PowerISA version 2.06. We also add
another bit to indicate that the processor conforms to ISA version 2.01
and set that for
This new ioctl allows userspace to specify what paravirtualization
interface (if any) KVM should implement, what architecture version
the guest virtual processors should conform to, and whether the guest
can be permitted to use a real supervisor mode.
At present the only effect of the ioctl is to
In hypervisor mode, the LPCR controls several aspects of guest
partitions, including virtual partition memory mode, and also controls
whether the hypervisor decrementer interrupts are enabled. This sets
up LPCR at boot time so that guest partitions will use a virtual real
memory area (VRMA)
This adds support for running KVM guests in supervisor mode on those
PPC970 processors that have a usable hypervisor mode. Unfortunately,
Apple G5 machines have supervisor mode disabled (MSR[HV] is forced to
1), but the YDL PowerStation does have a usable hypervisor mode.
There are several
There are several fields in struct kvmppc_book3s_shadow_vcpu that
temporarily store bits of host state while a guest is running,
rather than anything relating to the particular guest or vcpu.
This splits them out into a new kvmppc_host_state structure and
modifies the definitions in asm-offsets.c
The first patch of the following series is a pure bug-fix for 32-bit
kernels.
The remainder of the following series of patches enable KVM to exploit
the hardware hypervisor mode on 64-bit Power ISA Book3S machines. At
present, POWER7 and PPC970 processors are supported. (Note that the
PPC970
From: David Gibson d...@au1.ibm.com
This improves I/O performance for guests using the PAPR
paravirtualization interface by making the H_PUT_TCE hcall faster, by
implementing it in real mode. H_PUT_TCE is used for updating virtual
IOMMU tables, and is used both for virtual I/O and for real I/O
This lifts the restriction that book3s_hv guests can only run one
hardware thread per core, and allows them to use up to 4 threads
per core on POWER7. The host still has to run single-threaded.
This capability is advertised to qemu through a new KVM_CAP_PPC_SMT
capability. The return value of
This moves the slb field, which represents the state of the emulated
SLB, from the kvmppc_vcpu_book3s struct to the kvm_vcpu_arch, and the
hpte_hash_[v]pte[_long] fields from kvm_vcpu_arch to kvmppc_vcpu_book3s.
This is in accord with the principle that the kvm_vcpu_arch struct
represents the
This adds infrastructure which will be needed to allow book3s_hv KVM to
run on older POWER processors, including PPC970, which don't support
the Virtual Real Mode Area (VRMA) facility, but only the Real Mode
Offset (RMO) facility. These processors require a physically
contiguous, aligned area of
On 06/29/2011 04:21 PM, Avi Kivity wrote:
-if (kvm_read_guest_virt(ctxt, addr, val, bytes, exception)
-== X86EMUL_CONTINUE)
+if (!kvm_read_guest(vcpu->kvm, gpa, val, bytes))
return X86EMUL_CONTINUE;
This doesn't perform the cpl check.
Firstly, it calls
On 06/29/2011 04:24 PM, Avi Kivity wrote:
+static int vcpu_gva_to_gpa(struct kvm_vcpu *vcpu, unsigned long gva,
+ gpa_t *gpa, struct x86_exception *exception,
+ bool write)
+{
+u32 access = (kvm_x86_ops->get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0;
+
+if
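The access-mask computation in the quoted helper can be reproduced as a self-contained sketch (the mask values mirror the x86 PFERR_* bit positions; the function name here is illustrative): user-mode accesses get PFERR_USER_MASK, writes additionally get PFERR_WRITE_MASK.

```c
#include <stdint.h>

#define PFERR_WRITE_MASK (1U << 1)
#define PFERR_USER_MASK  (1U << 2)

/* Compute the page-fault error-code access mask from the current
 * privilege level and the access direction. */
static uint32_t gva_access_mask(int cpl, int write)
{
    uint32_t access = (cpl == 3) ? PFERR_USER_MASK : 0;

    if (write)
        access |= PFERR_WRITE_MASK;
    return access;
}
```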
On 06/29/2011 04:37 PM, Avi Kivity wrote:
+struct read_write_emulator_ops {
+int (*read_write_prepare)(struct kvm_vcpu *vcpu, void *val,
+ int bytes);
+int (*read_write_emulate)(struct kvm_vcpu *vcpu, gpa_t gpa,
+ void *val, int bytes);
+int
On 06/29/2011 04:48 PM, Avi Kivity wrote:
On 06/22/2011 05:31 PM, Xiao Guangrong wrote:
If the page fault is caused by mmio, we can cache the mmio info; later, we do
not need to walk the guest page table and can quickly tell it is an mmio fault
while we emulate the mmio instruction
Does this work
On 06/29/2011 01:56 PM, Xiao Guangrong wrote:
On 06/29/2011 04:24 PM, Avi Kivity wrote:
+static int vcpu_gva_to_gpa(struct kvm_vcpu *vcpu, unsigned long gva,
+ gpa_t *gpa, struct x86_exception *exception,
+ bool write)
+{
+u32 access =
On 06/29/2011 02:09 PM, Xiao Guangrong wrote:
On 06/29/2011 04:48 PM, Avi Kivity wrote:
On 06/22/2011 05:31 PM, Xiao Guangrong wrote:
If the page fault is caused by mmio, we can cache the mmio info; later, we do
not need to walk the guest page table and can quickly tell it is an mmio fault while
On 06/29/2011 05:16 PM, Avi Kivity wrote:
On 06/22/2011 05:35 PM, Xiao Guangrong wrote:
Use RCU to protect shadow pages being freed, so we can safely walk them;
this should run fast and is needed by the mmio page fault path
static void kvm_mmu_commit_zap_page(struct kvm *kvm,
On 06/29/2011 02:16 PM, Xiao Guangrong wrote:
@@ -1767,6 +1874,14 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
kvm_flush_remote_tlbs(kvm);
+if (atomic_read(&kvm->arch.reader_counter)) {
+kvm_mmu_isolate_pages(invalid_list);
+sp =
On 06/29/2011 01:53 PM, Xiao Guangrong wrote:
On 06/29/2011 04:21 PM, Avi Kivity wrote:
-if (kvm_read_guest_virt(ctxt, addr, val, bytes, exception)
-== X86EMUL_CONTINUE)
+if (!kvm_read_guest(vcpu->kvm, gpa, val, bytes))
return X86EMUL_CONTINUE;
This doesn't
On 06/29/2011 07:09 PM, Avi Kivity wrote:
On 06/29/2011 01:56 PM, Xiao Guangrong wrote:
On 06/29/2011 04:24 PM, Avi Kivity wrote:
+static int vcpu_gva_to_gpa(struct kvm_vcpu *vcpu, unsigned long gva,
+ gpa_t *gpa, struct x86_exception *exception,
+ bool
On 06/29/2011 02:26 PM, Xiao Guangrong wrote:
On 06/29/2011 07:09 PM, Avi Kivity wrote:
On 06/29/2011 01:56 PM, Xiao Guangrong wrote:
On 06/29/2011 04:24 PM, Avi Kivity wrote:
+static int vcpu_gva_to_gpa(struct kvm_vcpu *vcpu, unsigned long gva,
+ gpa_t *gpa, struct
On Fri, Jun 17, 2011 at 03:02:39PM +0200, Arnd Bergmann wrote:
On Friday 17 June 2011 11:04:24 Johannes Stezenbach wrote:
running even a simple qemu-img create -f qcow2 some.img 1G causes
the following in dmesg on a Linux host with linux-2.6.39.1 x86_64 kernel
and 32bit userspace:
Linus, please pull from the git repository at:
git://git.kernel.org/pub/scm/virt/kvm/kvm.git kvm-updates/3.0
to receive a single KVM fix. Emulated instructions which had both an
immediate operand and an %rip-relative operand did not compute the
effective address correctly;
On 06/29/2011 07:18 PM, Avi Kivity wrote:
On 06/29/2011 02:16 PM, Xiao Guangrong wrote:
@@ -1767,6 +1874,14 @@ static void kvm_mmu_commit_zap_page(struct kvm
*kvm,
kvm_flush_remote_tlbs(kvm);
+if (atomic_read(&kvm->arch.reader_counter)) {
+
On Wed, Jun 29, 2011 at 02:26:14PM +0300, Avi Kivity wrote:
On 06/29/2011 02:26 PM, Xiao Guangrong wrote:
On 06/29/2011 07:09 PM, Avi Kivity wrote:
On 06/29/2011 01:56 PM, Xiao Guangrong wrote:
On 06/29/2011 04:24 PM, Avi Kivity wrote:
+static int vcpu_gva_to_gpa(struct kvm_vcpu
On Wed, Jun 29, 2011 at 08:41:03PM +1000, Paul Mackerras wrote:
Documentation/virtual/kvm/api.txt | 35 +++
arch/powerpc/include/asm/kvm.h | 15 +++
arch/powerpc/include/asm/kvm_host.h |1 +
arch/powerpc/kvm/powerpc.c | 28
On 29.06.2011, at 13:53, Josh Boyer wrote:
On Wed, Jun 29, 2011 at 08:41:03PM +1000, Paul Mackerras wrote:
Documentation/virtual/kvm/api.txt | 35
+++
arch/powerpc/include/asm/kvm.h | 15 +++
arch/powerpc/include/asm/kvm_host.h |1 +
On Wed, Jun 29, 2011 at 01:56:16PM +0200, Alexander Graf wrote:
On 29.06.2011, at 13:53, Josh Boyer wrote:
On Wed, Jun 29, 2011 at 08:41:03PM +1000, Paul Mackerras wrote:
Documentation/virtual/kvm/api.txt | 35
+++
arch/powerpc/include/asm/kvm.h |
On Wed, Jun 29, 2011 at 11:02:54AM +0200, Peter Zijlstra wrote:
On Tue, 2011-06-28 at 18:10 +0200, Joerg Roedel wrote:
On Fri, Jun 17, 2011 at 03:37:29PM +0200, Joerg Roedel wrote:
this is the second version of the patch-set to support the AMD
guest-/host only bits in the performance
On 06/29/2011 02:50 PM, Xiao Guangrong wrote:
I think we should do this unconditionally. The cost of ping-ponging
the shared cache line containing reader_counter will increase with large smp counts. On
the other hand, zap_page is very rare, so it can be a little slower. Also, less