On Sun, Jan 07, 2001 at 10:56:23AM -0500, jamal wrote:
[snip]
I used to be against VLANs being devices; I am withdrawing that comment. It's
a lot easier to look at them as devices if you want to run IP on them. And
in this case it makes sense: the possibility of over a thousand devices
is
On Sun, Jan 07, 2001 at 11:56:26AM -0500, jamal wrote:
On Sun, 7 Jan 2001, Chris Wedgwood wrote:
That said, if this was done -- how would things like routing daemons
and bind cope?
I don't know of any routing daemons that are taking advantage of the
alias interfaces today. This
On Sun, Jan 07, 2001 at 01:29:51PM -0500, jamal wrote:
On Sun, 7 Jan 2001, Ben Greear wrote:
My thought was to have the vlan be attached to the interface ifa list and
just give it a different label since it is a "virtual interface" on top
of the "physical interface". Now that you
On Fri, Oct 26, 2012 at 01:29:47PM -0700, Andi Kleen wrote:
From: Andi Kleen a...@linux.intel.com
This is not arch perfmon, but older CPUs will just ignore it. This makes
it possible to do at least some TSX measurements from a KVM guest
Cc: a...@redhat.com
Cc: g...@redhat.com
v2: Various
On Tue, Oct 30, 2012 at 05:33:56PM -0700, Andi Kleen wrote:
From: Andi Kleen a...@linux.intel.com
This is not arch perfmon, but older CPUs will just ignore it. This makes
it possible to do at least some TSX measurements from a KVM guest
You are ignoring my reviews.
Cc: a...@redhat.com
On Wed, Oct 31, 2012 at 11:32:48AM +0100, Andi Kleen wrote:
On Wed, Oct 31, 2012 at 12:27:01PM +0200, Gleb Natapov wrote:
On Tue, Oct 30, 2012 at 05:33:56PM -0700, Andi Kleen wrote:
From: Andi Kleen a...@linux.intel.com
This is not arch perfmon, but older CPUs will just ignore
On Tue, Oct 23, 2012 at 10:20:34AM +0800, Asias He wrote:
On Mon, Oct 22, 2012 at 6:16 PM, Gleb Natapov g...@redhat.com wrote:
On Mon, Oct 22, 2012 at 11:24:19AM +0200, Avi Kivity wrote:
On 10/21/2012 05:39 PM, Pekka Enberg wrote:
On Sun, Oct 21, 2012 at 5:02 PM, richard -rw- weinberger
the in-kernel hypervisors now have a single registration point
and set x86_hyper. We can use this to output additional debug
information during a panic/oops/stack trace.
Signed-off-by: Prarit Bhargava pra...@redhat.com
Cc: Avi Kivity a...@redhat.com
Cc: Gleb Natapov g...@redhat.com
Cc: Alex
On Fri, Sep 21, 2012 at 02:57:19PM +0800, Xiao Guangrong wrote:
We cannot directly call kvm_release_pfn_clean to release the pfn,
since we can meet a noslot pfn, which is used to cache mmio info in the
spte
Wouldn't it be better to move the check into kvm_release_pfn_clean()?
Signed-off-by: Xiao
On Mon, Sep 24, 2012 at 12:59:32PM +0800, Xiao Guangrong wrote:
On 09/23/2012 05:13 PM, Gleb Natapov wrote:
On Fri, Sep 21, 2012 at 02:57:19PM +0800, Xiao Guangrong wrote:
We cannot directly call kvm_release_pfn_clean to release the pfn,
since we can meet a noslot pfn, which is used to cache
On Mon, Sep 24, 2012 at 07:49:37PM +0800, Xiao Guangrong wrote:
On 09/24/2012 07:24 PM, Gleb Natapov wrote:
On Mon, Sep 24, 2012 at 12:59:32PM +0800, Xiao Guangrong wrote:
On 09/23/2012 05:13 PM, Gleb Natapov wrote:
On Fri, Sep 21, 2012 at 02:57:19PM +0800, Xiao Guangrong wrote:
We can
On Tue, Sep 25, 2012 at 10:54:21AM +0200, Avi Kivity wrote:
On 09/25/2012 10:09 AM, Raghavendra K T wrote:
On 09/24/2012 09:36 PM, Avi Kivity wrote:
On 09/24/2012 05:41 PM, Avi Kivity wrote:
case 2)
rq1 : vcpu1-wait(lockA) (spinning)
rq2 : vcpu3 (running) , vcpu2-holding(lockA)
On Thu, Sep 27, 2012 at 10:59:21AM +0200, Avi Kivity wrote:
On 09/27/2012 09:44 AM, Gleb Natapov wrote:
On Tue, Sep 25, 2012 at 10:54:21AM +0200, Avi Kivity wrote:
On 09/25/2012 10:09 AM, Raghavendra K T wrote:
On 09/24/2012 09:36 PM, Avi Kivity wrote:
On 09/24/2012 05:41 PM, Avi Kivity
On Thu, Sep 27, 2012 at 11:33:56AM +0200, Avi Kivity wrote:
On 09/27/2012 11:11 AM, Gleb Natapov wrote:
User return notifier is per-cpu, not per-task. There is a new task_work
(linux/task_work.h) that does what you want. With these
technicalities out of the way, I think it's the wrong
On Thu, Sep 27, 2012 at 12:04:58PM +0200, Avi Kivity wrote:
On 09/27/2012 11:58 AM, Gleb Natapov wrote:
btw, we can have secondary effects. A vcpu can be waiting for a lock in
the host kernel, or for a host page fault. There's no point in boosting
anything for that. Or a vcpu
On Wed, Oct 03, 2012 at 04:56:57PM +0200, Avi Kivity wrote:
On 10/03/2012 04:17 PM, Raghavendra K T wrote:
* Avi Kivity a...@redhat.com [2012-09-30 13:13:09]:
On 09/30/2012 01:07 PM, Gleb Natapov wrote:
On Sun, Sep 30, 2012 at 10:18:17AM +0200, Avi Kivity wrote:
On 09/28/2012 08:16
On Tue, Oct 02, 2012 at 11:48:26PM -, Andi Kleen wrote:
From: Andi Kleen a...@linux.intel.com
This is not arch perfmon, but older CPUs will just ignore it. This makes
it possible to do at least some TSX measurements from a KVM guest
Cc: a...@redhat.com
Signed-off-by: Andi Kleen
Levin sasha.le...@oracle.com
Acked-by: Gleb Natapov g...@redhat.com
---
arch/x86/kernel/kvm.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index b3e5e51..4180a87 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -247,7
On Fri, Oct 19, 2012 at 03:37:32PM +0800, Xiao Guangrong wrote:
After commit b3356bf0dbb349 (KVM: emulator: optimize rep ins handling),
the pieces of io data can be collected and written to the guest memory
or MMIO together.
Unfortunately, kvm splits the mmio access into 8 bytes and store
On Mon, Oct 22, 2012 at 11:24:19AM +0200, Avi Kivity wrote:
On 10/21/2012 05:39 PM, Pekka Enberg wrote:
On Sun, Oct 21, 2012 at 5:02 PM, richard -rw- weinberger
richard.weinber...@gmail.com wrote:
qemu supports all these features.
E.g. to access the host fs use:
qemu ... \
-fsdev
On Mon, Oct 22, 2012 at 07:09:38PM +0800, Xiao Guangrong wrote:
On 10/22/2012 05:16 PM, Gleb Natapov wrote:
On Fri, Oct 19, 2012 at 03:37:32PM +0800, Xiao Guangrong wrote:
After commit b3356bf0dbb349 (KVM: emulator: optimize rep ins handling),
the pieces of io data can be collected
On Mon, Oct 22, 2012 at 01:35:56PM +0200, Jan Kiszka wrote:
On 2012-10-22 13:23, Gleb Natapov wrote:
On Mon, Oct 22, 2012 at 07:09:38PM +0800, Xiao Guangrong wrote:
On 10/22/2012 05:16 PM, Gleb Natapov wrote:
On Fri, Oct 19, 2012 at 03:37:32PM +0800, Xiao Guangrong wrote:
After commit
On Mon, Oct 22, 2012 at 02:45:37PM +0200, Jan Kiszka wrote:
On 2012-10-22 14:18, Avi Kivity wrote:
On 10/22/2012 01:45 PM, Jan Kiszka wrote:
Indeed. git pull, recheck: the call for kvm_flush_coalesced_mmio_buffer()
is gone. So this will break new userspace, not old. By global you mean
On Mon, Oct 22, 2012 at 02:55:14PM +0200, Jan Kiszka wrote:
On 2012-10-22 14:53, Gleb Natapov wrote:
On Mon, Oct 22, 2012 at 02:45:37PM +0200, Jan Kiszka wrote:
On 2012-10-22 14:18, Avi Kivity wrote:
On 10/22/2012 01:45 PM, Jan Kiszka wrote:
Indeed. git pull, recheck and call
On Mon, Oct 22, 2012 at 02:55:24PM +0200, Avi Kivity wrote:
On 10/22/2012 02:53 PM, Gleb Natapov wrote:
On Mon, Oct 22, 2012 at 02:45:37PM +0200, Jan Kiszka wrote:
On 2012-10-22 14:18, Avi Kivity wrote:
On 10/22/2012 01:45 PM, Jan Kiszka wrote:
Indeed. git pull, recheck and call
On Mon, Oct 22, 2012 at 03:02:22PM +0200, Avi Kivity wrote:
On 10/22/2012 03:01 PM, Gleb Natapov wrote:
It's the time during which the guest cannot take interrupts, and time in a high
priority guest thread that is spent processing low-priority guest requests.
The proposed fix has exactly the same issue
On Mon, Oct 22, 2012 at 03:05:58PM +0200, Jan Kiszka wrote:
On 2012-10-22 14:58, Avi Kivity wrote:
On 10/22/2012 02:55 PM, Jan Kiszka wrote:
Since the userspace change is needed, the idea is dead; but if we could
implement it, I do not see how it could hurt the latency if it were the
only
On Mon, Oct 22, 2012 at 03:25:49PM +0200, Jan Kiszka wrote:
On 2012-10-22 15:08, Gleb Natapov wrote:
On Mon, Oct 22, 2012 at 03:05:58PM +0200, Jan Kiszka wrote:
On 2012-10-22 14:58, Avi Kivity wrote:
On 10/22/2012 02:55 PM, Jan Kiszka wrote:
Since the userspace change is needed the idea
On Tue, Oct 23, 2012 at 02:36:05PM +0200, Peter Zijlstra wrote:
On Thu, 2012-10-18 at 16:19 -0700, Andi Kleen wrote:
From: Andi Kleen a...@linux.intel.com
This is not arch perfmon, but older CPUs will just ignore it. This makes
it possible to do at least some TSX measurements from a KVM
On Thu, Oct 18, 2012 at 11:19:14PM -, Andi Kleen wrote:
static inline u8 fixed_en_pmi(u64 ctrl, int idx)
@@ -400,7 +407,7 @@ int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, u32 index, u64 data)
} else if ((pmc = get_gp_pmc(pmu, index, MSR_P6_EVNTSEL0))) {
On Tue, Oct 23, 2012 at 03:20:39PM +0200, Andi Kleen wrote:
On Tue, Oct 23, 2012 at 03:05:09PM +0200, Gleb Natapov wrote:
On Thu, Oct 18, 2012 at 11:19:14PM -, Andi Kleen wrote:
static inline u8 fixed_en_pmi(u64 ctrl, int idx)
@@ -400,7 +407,7 @@ int kvm_pmu_set_msr(struct kvm_vcpu
On Fri, Nov 09, 2012 at 05:27:21PM -0800, Andi Kleen wrote:
From: Andi Kleen a...@linux.intel.com
This is not arch perfmon, but older CPUs will just ignore it. This makes
it possible to do at least some TSX measurements from a KVM guest
Cc: a...@redhat.com
Cc: g...@redhat.com
v2: Various
On Mon, Mar 21, 2005 at 07:34:02PM +0100, Arjan van de Ven wrote:
On Mon, 2005-03-21 at 17:32 +0200, Hayim Shaul wrote:
Hi all,
I have an unexplained bug with mmap/munmap on 2.6.X.
I'm writing a kernel module that gives super-fast access to the network.
It does so by doing mmap thus
On Tue, Jul 17, 2012 at 06:57:01PM +0300, Michael S. Tsirkin wrote:
On Tue, Jul 17, 2012 at 09:51:41AM -0600, Alex Williamson wrote:
On Tue, 2012-07-17 at 18:36 +0300, Michael S. Tsirkin wrote:
On Tue, Jul 17, 2012 at 09:20:11AM -0600, Alex Williamson wrote:
On Tue, 2012-07-17 at 17:53
On Tue, Jul 17, 2012 at 07:36:49PM +0300, Michael S. Tsirkin wrote:
On Tue, Jul 17, 2012 at 10:08:21AM -0600, Alex Williamson wrote:
On Tue, 2012-07-17 at 18:57 +0300, Michael S. Tsirkin wrote:
On Tue, Jul 17, 2012 at 09:51:41AM -0600, Alex Williamson wrote:
On Tue, 2012-07-17 at 18:36
On Tue, Jul 17, 2012 at 07:14:52PM +0300, Michael S. Tsirkin wrote:
_Seems_ racy, or _is_ racy? Please identify the race.
Look at this:
static inline int kvm_irq_line_state(unsigned long *irq_state,
int irq_source_id, int level)
{
/* Logical
On Wed, Jul 18, 2012 at 01:20:29PM +0300, Michael S. Tsirkin wrote:
On Wed, Jul 18, 2012 at 09:27:42AM +0300, Gleb Natapov wrote:
On Tue, Jul 17, 2012 at 07:14:52PM +0300, Michael S. Tsirkin wrote:
_Seems_ racy, or _is_ racy? Please identify the race.
Look at this:
static
On Wed, Jul 18, 2012 at 01:33:35PM +0300, Michael S. Tsirkin wrote:
On Wed, Jul 18, 2012 at 01:27:39PM +0300, Gleb Natapov wrote:
On Wed, Jul 18, 2012 at 01:20:29PM +0300, Michael S. Tsirkin wrote:
On Wed, Jul 18, 2012 at 09:27:42AM +0300, Gleb Natapov wrote:
On Tue, Jul 17, 2012 at 07
On Wed, Jul 18, 2012 at 01:41:14PM +0300, Michael S. Tsirkin wrote:
On Mon, Jul 16, 2012 at 02:33:47PM -0600, Alex Williamson wrote:
In order to inject a level interrupt from an external source using an
irqfd, we need to allocate a new irq_source_id. This allows us to
assert and (later)
On Wed, Jul 18, 2012 at 01:48:44PM +0300, Michael S. Tsirkin wrote:
On Wed, Jul 18, 2012 at 01:44:29PM +0300, Gleb Natapov wrote:
On Wed, Jul 18, 2012 at 01:41:14PM +0300, Michael S. Tsirkin wrote:
On Mon, Jul 16, 2012 at 02:33:47PM -0600, Alex Williamson wrote:
In order to inject
On Wed, Jul 18, 2012 at 01:51:05PM +0300, Michael S. Tsirkin wrote:
On Wed, Jul 18, 2012 at 01:36:08PM +0300, Gleb Natapov wrote:
On Wed, Jul 18, 2012 at 01:33:35PM +0300, Michael S. Tsirkin wrote:
On Wed, Jul 18, 2012 at 01:27:39PM +0300, Gleb Natapov wrote:
On Wed, Jul 18, 2012 at 01
On Wed, Jul 18, 2012 at 01:53:11PM +0300, Michael S. Tsirkin wrote:
On Wed, Jul 18, 2012 at 01:49:06PM +0300, Gleb Natapov wrote:
On Wed, Jul 18, 2012 at 01:48:44PM +0300, Michael S. Tsirkin wrote:
On Wed, Jul 18, 2012 at 01:44:29PM +0300, Gleb Natapov wrote:
On Wed, Jul 18, 2012 at 01
On Wed, Jul 18, 2012 at 02:39:10PM +0300, Michael S. Tsirkin wrote:
On Wed, Jul 18, 2012 at 02:22:19PM +0300, Michael S. Tsirkin wrote:
So, as was discussed, kvm_set_irq under a spinlock is bad for scalability
with multiple VCPUs. Why do we need a spinlock simply to
On Wed, Jul 18, 2012 at 02:08:43PM +0300, Michael S. Tsirkin wrote:
On Wed, Jul 18, 2012 at 01:53:15PM +0300, Gleb Natapov wrote:
On Wed, Jul 18, 2012 at 01:51:05PM +0300, Michael S. Tsirkin wrote:
On Wed, Jul 18, 2012 at 01:36:08PM +0300, Gleb Natapov wrote:
On Wed, Jul 18, 2012 at 01
On Wed, Jul 18, 2012 at 03:42:09PM -0300, Marcelo Tosatti wrote:
On Wed, Jul 18, 2012 at 06:58:24PM +0300, Michael S. Tsirkin wrote:
Back to the original point, though: the current situation is that
calling kvm_set_irq() under a spinlock is no worse for scalability
than calling
is 0 but irq_state is not 0.
Note that the above is valid behaviour if CPU0 and CPU1 are using different
source ids.
Fix by performing all irq_states bitmap handling under pic/ioapic lock.
This also removes the need for atomics with irq_states handling.
Reported-by: Gleb
if CPU0 and CPU1 are using different
source ids.
Fix by performing all irq_states bitmap handling under pic/ioapic lock.
This also removes the need for atomics with irq_states handling.
Reported-by: Gleb Natapov g...@redhat.com
Signed-off-by: Michael S. Tsirkin m...@redhat.com
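The fix quoted above replaces per-bit atomics with a single lock around the whole bitmap read-modify-write; a minimal standalone sketch of that pattern (hypothetical names, with a pthread mutex standing in for the pic/ioapic spinlock, not the actual KVM code):

```c
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical sketch: multiple interrupt sources each own one bit in
 * irq_states; the wire level is the OR of all bits.  Doing the bit
 * update and the OR as one critical section keeps the computed level
 * consistent with the bitmap, which per-bit atomics alone cannot. */
static pthread_mutex_t irq_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long irq_states;

static bool set_irq_line(int irq_source_id, int level)
{
    bool wire_level;

    pthread_mutex_lock(&irq_lock);
    if (level)
        irq_states |= 1ul << irq_source_id;
    else
        irq_states &= ~(1ul << irq_source_id);
    wire_level = irq_states != 0;   /* consistent snapshot under the lock */
    pthread_mutex_unlock(&irq_lock);

    return wire_level;
}
```

With the lock, the line stays asserted as long as any source id holds its bit, and no caller can observe a level that disagrees with the bitmap.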
the value will always be the same for L1 and L2, we do not need
to read and write the corresponding VMCS field on L1/L2 transitions,
either.
Signed-off-by: Paolo Bonzini pbonz...@redhat.com
Perfect, thanks!
Reviewed-by: Gleb Natapov g...@redhat.com
---
v1-v2: remove read/write
Acked-by: Gleb Natapov g...@redhat.com
---
This depends on Alex Graf's irqfd generalization series to remove
IRQ routing code from assigned-dev.c.
arch/ia64/include/uapi/asm/kvm.h |  1 -
arch/ia64/kvm/Kconfig            | 13 +++--
arch/ia64/kvm/Makefile           |  6
On Wed, Apr 17, 2013 at 05:39:04PM -0300, Marcelo Tosatti wrote:
On Fri, Mar 22, 2013 at 09:15:24PM +0200, Gleb Natapov wrote:
On Fri, Mar 22, 2013 at 08:37:33PM +0800, Xiao Guangrong wrote:
On 03/22/2013 08:12 PM, Gleb Natapov wrote:
On Fri, Mar 22, 2013 at 08:03:04PM +0800, Xiao
On Tue, Apr 16, 2013 at 02:32:46PM +0800, Xiao Guangrong wrote:
pte_list_clear_concurrently allows us to reset pte-desc entries
outside of mmu-lock. We can reset a spte outside of mmu-lock if we can protect the
lifecycle of the sp; we use this way to achieve the goal:
unmap_memslot_rmap_nolock():
On Thu, Apr 18, 2013 at 07:22:23PM +0800, Xiao Guangrong wrote:
On 04/18/2013 07:00 PM, Gleb Natapov wrote:
On Tue, Apr 16, 2013 at 02:32:46PM +0800, Xiao Guangrong wrote:
pte_list_clear_concurrently allows us to reset pte-desc entries
outside of mmu-lock. We can reset a spte outside of mmu-lock if we
On Thu, Apr 18, 2013 at 12:00:49PM +, Zhanghaoyu (A) wrote:
I started 10 VMs (Windows XP), then ran the geekbench tool on them; after about 2
days, one of them was reset.
I found the reset operation is done by
int kvm_cpu_exec(CPUArchState *env)
{
...
switch (run->exit_reason)
On Thu, Apr 18, 2013 at 11:01:18AM -0300, Marcelo Tosatti wrote:
On Thu, Apr 18, 2013 at 12:42:39PM +0300, Gleb Natapov wrote:
that, but if not then less code is better.
The number of sp-role.invalid=1 pages is small (only shadow roots). It
can grow but is bounded to a handful
On Fri, Apr 19, 2013 at 01:05:08AM +, Zhanghaoyu (A) wrote:
On Thu, Apr 18, 2013 at 12:00:49PM +, Zhanghaoyu (A) wrote:
I started 10 VMs (Windows XP), then ran the geekbench tool on them;
after about 2 days, one of them was reset. I found the reset operation is
done by int
On Tue, Apr 16, 2013 at 02:32:38PM +0800, Xiao Guangrong wrote:
This patchset is based on my previous two patchset:
[PATCH 0/2] KVM: x86: avoid potential soft lockup and unneeded mmu reload
(https://lkml.org/lkml/2013/4/1/2)
[PATCH v2 0/6] KVM: MMU: fast invalid all mmio sptes
On Thu, Apr 04, 2013 at 01:57:34PM +0200, Alexander Graf wrote:
On 04.04.2013, at 12:50, Michael S. Tsirkin wrote:
With KVM, MMIO is much slower than PIO, due to the need to
do a page walk and emulation. But with EPT, it does not have to be: we
know the address from the VMCS so if the
On Thu, Apr 04, 2013 at 02:09:53PM +0200, Alexander Graf wrote:
On 04.04.2013, at 13:04, Michael S. Tsirkin wrote:
On Thu, Apr 04, 2013 at 01:57:34PM +0200, Alexander Graf wrote:
On 04.04.2013, at 12:50, Michael S. Tsirkin wrote:
With KVM, MMIO is much slower than PIO, due to the
On Thu, Apr 04, 2013 at 02:22:09PM +0200, Alexander Graf wrote:
On 04.04.2013, at 14:08, Gleb Natapov wrote:
On Thu, Apr 04, 2013 at 01:57:34PM +0200, Alexander Graf wrote:
On 04.04.2013, at 12:50, Michael S. Tsirkin wrote:
With KVM, MMIO is much slower than PIO, due to the need
On Thu, Apr 04, 2013 at 02:32:08PM +0200, Alexander Graf wrote:
On 04.04.2013, at 14:08, Gleb Natapov wrote:
On Thu, Apr 04, 2013 at 01:57:34PM +0200, Alexander Graf wrote:
On 04.04.2013, at 12:50, Michael S. Tsirkin wrote:
With KVM, MMIO is much slower than PIO, due to the need
On Thu, Apr 04, 2013 at 02:39:51PM +0200, Alexander Graf wrote:
On 04.04.2013, at 14:38, Gleb Natapov wrote:
On Thu, Apr 04, 2013 at 02:32:08PM +0200, Alexander Graf wrote:
On 04.04.2013, at 14:08, Gleb Natapov wrote:
On Thu, Apr 04, 2013 at 01:57:34PM +0200, Alexander Graf wrote
On Thu, Apr 04, 2013 at 02:49:39PM +0200, Alexander Graf wrote:
On 04.04.2013, at 14:45, Gleb Natapov wrote:
On Thu, Apr 04, 2013 at 02:39:51PM +0200, Alexander Graf wrote:
On 04.04.2013, at 14:38, Gleb Natapov wrote:
On Thu, Apr 04, 2013 at 02:32:08PM +0200, Alexander Graf wrote
On Thu, Apr 04, 2013 at 03:06:42PM +0200, Alexander Graf wrote:
On 04.04.2013, at 14:56, Gleb Natapov wrote:
On Thu, Apr 04, 2013 at 02:49:39PM +0200, Alexander Graf wrote:
On 04.04.2013, at 14:45, Gleb Natapov wrote:
On Thu, Apr 04, 2013 at 02:39:51PM +0200, Alexander Graf wrote
On Thu, Apr 04, 2013 at 05:36:40PM +0200, Alexander Graf wrote:
#define GOAL (1ull << 30)
do {
iterations *= 2;
t1 = rdtsc();
for (i = 0; i < iterations; ++i)
func();
t2 =
On Thu, Apr 04, 2013 at 06:36:30PM +0300, Michael S. Tsirkin wrote:
processor : 0
vendor_id : AuthenticAMD
cpu family : 16
model : 8
model name : Six-Core AMD Opteron(tm) Processor 8435
stepping: 0
cpu MHz : 800.000
cache size : 512 KB
On Thu, Apr 04, 2013 at 04:14:57PM +0300, Gleb Natapov wrote:
is to move to MMIO only when PIO address space is exhausted. For PCI it
will never be exhausted; for PCI-e it will be after ~16 devices.
Ok, let's go back a step here. Are you actually able to measure any
speedup in performance
Linus,
Please pull from
git://git.kernel.org/pub/scm/virt/kvm/kvm.git master
To receive the bugfix for the regression introduced by c300aa64ddf57.
Andrew Honig (1):
KVM: Allow cross page reads and writes from cached translations.
arch/x86/kvm/lapic.c |2 -
arch/x86/kvm/x86.c
On Thu, Apr 04, 2013 at 01:27:21PM +0300, Michael S. Tsirkin wrote:
PIO and MMIO are separate address spaces, but
ioeventfd registration code mistakenly detected
two eventfds as duplicate if they use the same address,
even if one is PIO and another one MMIO.
Signed-off-by: Michael S.
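The bug described here is that the duplicate check keyed only on address and length, while PIO and MMIO are distinct address spaces. A minimal sketch of the broken and fixed comparisons (hypothetical structures, not the actual KVM code):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical descriptor for a registered ioeventfd; the real KVM
 * structure differs.  The point is that the bus (PIO vs. MMIO) must be
 * part of the identity, not just the address. */
enum bus_type { BUS_PIO, BUS_MMIO };

struct ioeventfd_key {
    enum bus_type bus;
    uint64_t addr;
    uint32_t len;
};

/* Buggy version: two registrations at the same address collide even
 * when one is PIO and the other MMIO. */
static bool duplicate_buggy(const struct ioeventfd_key *a,
                            const struct ioeventfd_key *b)
{
    return a->addr == b->addr && a->len == b->len;
}

/* Fixed version: the address spaces are independent, so the bus must
 * match too before we call it a duplicate. */
static bool duplicate_fixed(const struct ioeventfd_key *a,
                            const struct ioeventfd_key *b)
{
    return a->bus == b->bus && a->addr == b->addr && a->len == b->len;
}
```

With the fix, a PIO eventfd at 0x1000 and an MMIO eventfd at 0x1000 can be registered side by side.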
On Mon, Mar 25, 2013 at 02:14:20PM -0700, Kevin Hilman wrote:
Gleb Natapov g...@redhat.com writes:
On Sun, Mar 24, 2013 at 02:44:26PM +0100, Frederic Weisbecker wrote:
2013/3/21 Gleb Natapov g...@redhat.com:
Isn't is simpler for kernel/context_tracking.c to define empty
__guest_enter
On Fri, Mar 15, 2013 at 11:29:53PM +0800, Xiao Guangrong wrote:
This patch tries to introduce a very simple and scalable way to invalidate all
mmio sptes - it need not walk any shadow pages or hold mmu-lock
KVM maintains a global mmio invalid generation-number which is stored in
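The generation-number scheme sketched in this thread invalidates every cached mmio spte in O(1): each entry stamps the global generation when created, and bumping the counter makes all stale entries fail the validity check on the next lookup. A minimal illustration (hypothetical names and fields; the real KVM code packs the generation into spare spte bits):

```c
#include <stdbool.h>
#include <stdint.h>

/* Global generation counter; incrementing it invalidates every cached
 * entry at once, with no walk over shadow pages. */
static uint64_t global_mmio_generation;

struct mmio_spte {
    uint64_t gfn;        /* what the entry caches (stand-in field) */
    uint64_t generation; /* generation stamped when the spte was created */
};

static void mmio_spte_create(struct mmio_spte *s, uint64_t gfn)
{
    s->gfn = gfn;
    s->generation = global_mmio_generation;
}

/* An entry is valid only while its stamp matches the global counter. */
static bool mmio_spte_valid(const struct mmio_spte *s)
{
    return s->generation == global_mmio_generation;
}

/* O(1) invalidation of every cached mmio spte. */
static void invalidate_all_mmio_sptes(void)
{
    global_mmio_generation++;
}
```

Stale entries are then lazily discarded when the validity check fails, which is what lets the invalidation itself avoid taking mmu-lock or touching any shadow page.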
On Mon, Mar 18, 2013 at 04:08:50PM +0800, Xiao Guangrong wrote:
On 03/17/2013 11:02 PM, Gleb Natapov wrote:
On Fri, Mar 15, 2013 at 11:29:53PM +0800, Xiao Guangrong wrote:
This patch tries to introduce a very simple and scalable way to invalidate all
mmio sptes - it need not walk any shadow pages
On Mon, Mar 18, 2013 at 08:29:29PM +0800, Xiao Guangrong wrote:
On 03/18/2013 05:13 PM, Gleb Natapov wrote:
On Mon, Mar 18, 2013 at 04:08:50PM +0800, Xiao Guangrong wrote:
On 03/17/2013 11:02 PM, Gleb Natapov wrote:
On Fri, Mar 15, 2013 at 11:29:53PM +0800, Xiao Guangrong wrote
On Mon, Mar 18, 2013 at 08:42:09PM +0800, Xiao Guangrong wrote:
On 03/18/2013 07:19 PM, Paolo Bonzini wrote:
Il 15/03/2013 16:29, Xiao Guangrong ha scritto:
+/*
+ * spte bits of bit 3 ~ bit 11 are used as low 9 bits of
+ * generation, the bits of bits 52 ~ bit 61 are used as
+ * high 12
On Mon, Mar 18, 2013 at 09:09:43PM +0800, Xiao Guangrong wrote:
On 03/18/2013 08:46 PM, Gleb Natapov wrote:
On Mon, Mar 18, 2013 at 08:29:29PM +0800, Xiao Guangrong wrote:
On 03/18/2013 05:13 PM, Gleb Natapov wrote:
On Mon, Mar 18, 2013 at 04:08:50PM +0800, Xiao Guangrong wrote:
On 03/17
On Mon, Mar 18, 2013 at 09:25:10PM +0800, Xiao Guangrong wrote:
On 03/18/2013 09:19 PM, Gleb Natapov wrote:
On Mon, Mar 18, 2013 at 09:09:43PM +0800, Xiao Guangrong wrote:
On 03/18/2013 08:46 PM, Gleb Natapov wrote:
On Mon, Mar 18, 2013 at 08:29:29PM +0800, Xiao Guangrong wrote:
On 03/18
On Tue, Mar 19, 2013 at 11:15:36AM +0800, Xiao Guangrong wrote:
On 03/19/2013 06:16 AM, Eric Northup wrote:
On Fri, Mar 15, 2013 at 8:29 AM, Xiao Guangrong
xiaoguangr...@linux.vnet.ibm.com wrote:
This patch tries to introduce a very simple and scalable way to invalidate all
mmio sptes - it need
pbonz...@redhat.com
Reviewed-by: Gleb Natapov g...@redhat.com
---
arch/x86/kvm/svm.c | 8 +---
arch/x86/kvm/vmx.c | 1 +
2 files changed, 2 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 7219a40..7a46c1f 100644
--- a/arch/x86/kvm/svm.c
+++ b
On Tue, Mar 19, 2013 at 05:41:45PM +0100, Jan Kiszka wrote:
On 2013-03-19 16:43, Gleb Natapov wrote:
On Tue, Mar 19, 2013 at 04:30:26PM +0100, Paolo Bonzini wrote:
The CS base was initialized to 0 on VMX (wrong, but usually overridden
by userspace before starting) or 0xf on SVM
On Tue, Mar 19, 2013 at 04:51:13PM +0100, Paolo Bonzini wrote:
There is no way for userspace to inject interrupts into a VCPU's
local APIC, which is important in order to inject INITs coming from
the chipset. KVM_INTERRUPT is currently disabled when the in-kernel
local APIC is used, so we can
On Tue, Mar 19, 2013 at 07:39:24PM +0100, Paolo Bonzini wrote:
Il 19/03/2013 19:13, Gleb Natapov ha scritto:
There is no way for userspace to inject interrupts into a VCPU's
local APIC, which is important in order to inject INITs coming from
the chipset. KVM_INTERRUPT is currently
On Tue, Mar 19, 2013 at 09:22:33PM +0100, Paolo Bonzini wrote:
Il 19/03/2013 19:50, Gleb Natapov ha scritto:
On Tue, Mar 19, 2013 at 07:39:24PM +0100, Paolo Bonzini wrote:
Il 19/03/2013 19:13, Gleb Natapov ha scritto:
There is no way for userspace to inject interrupts into a VCPU's
local
On Wed, Mar 20, 2013 at 06:58:41PM -0500, Scott Wood wrote:
On 03/14/2013 07:13:46 PM, Kevin Hilman wrote:
The new context tracking subsystem unconditionally includes kvm_host.h
headers for the guest enter/exit macros. This causes a compile
failure when KVM is not enabled.
Fix by adding an
On Wed, Mar 20, 2013 at 04:30:24PM +0800, Xiao Guangrong wrote:
Move the deletion of a shadow page from the hash list from kvm_mmu_commit_zap_page to
kvm_mmu_prepare_zap_page, so that we can free the shadow page outside of mmu-lock.
Also, delete the invalid shadow page from the hash list since this page
On Thu, Mar 21, 2013 at 01:42:34PM -0500, Scott Wood wrote:
On 03/21/2013 09:27:14 AM, Kevin Hilman wrote:
Gleb Natapov g...@redhat.com writes:
On Wed, Mar 20, 2013 at 06:58:41PM -0500, Scott Wood wrote:
On 03/14/2013 07:13:46 PM, Kevin Hilman wrote:
The new context tracking subsystem
On Thu, Mar 21, 2013 at 02:33:13PM -0500, Scott Wood wrote:
On 03/21/2013 02:16:00 PM, Gleb Natapov wrote:
On Thu, Mar 21, 2013 at 01:42:34PM -0500, Scott Wood wrote:
On 03/21/2013 09:27:14 AM, Kevin Hilman wrote:
Gleb Natapov g...@redhat.com writes:
On Wed, Mar 20, 2013 at 06:58:41PM
On Fri, Mar 22, 2013 at 02:35:50PM +1100, Stephen Rothwell wrote:
Fixes these build errors when CONFIG_KVM is not defined:
In file included from arch/powerpc/include/asm/kvm_ppc.h:33:0,
from arch/powerpc/kernel/setup_64.c:67:
arch/powerpc/include/asm/kvm_book3s.h:65:20:
On Fri, Mar 22, 2013 at 07:10:44PM +0800, Xiao Guangrong wrote:
On 03/22/2013 06:54 PM, Marcelo Tosatti wrote:
And then have codepaths that nuke shadow pages break from the spinlock,
I think this is not needed any more. We can let mmu_notify use the
generation number to invalidate
On Fri, Mar 22, 2013 at 07:39:24PM +0800, Xiao Guangrong wrote:
On 03/22/2013 07:28 PM, Gleb Natapov wrote:
On Fri, Mar 22, 2013 at 07:10:44PM +0800, Xiao Guangrong wrote:
On 03/22/2013 06:54 PM, Marcelo Tosatti wrote:
And then have codepaths that nuke shadow pages break from
On Fri, Mar 22, 2013 at 08:03:04PM +0800, Xiao Guangrong wrote:
On 03/22/2013 07:47 PM, Gleb Natapov wrote:
On Fri, Mar 22, 2013 at 07:39:24PM +0800, Xiao Guangrong wrote:
On 03/22/2013 07:28 PM, Gleb Natapov wrote:
On Fri, Mar 22, 2013 at 07:10:44PM +0800, Xiao Guangrong wrote:
On 03/22
On Fri, Mar 22, 2013 at 08:37:33PM +0800, Xiao Guangrong wrote:
On 03/22/2013 08:12 PM, Gleb Natapov wrote:
On Fri, Mar 22, 2013 at 08:03:04PM +0800, Xiao Guangrong wrote:
On 03/22/2013 07:47 PM, Gleb Natapov wrote:
On Fri, Mar 22, 2013 at 07:39:24PM +0800, Xiao Guangrong wrote:
On 03/22
On Thu, Mar 21, 2013 at 05:02:15PM -0700, Kevin Hilman wrote:
Gleb Natapov g...@redhat.com writes:
On Thu, Mar 21, 2013 at 02:33:13PM -0500, Scott Wood wrote:
On 03/21/2013 02:16:00 PM, Gleb Natapov wrote:
On Thu, Mar 21, 2013 at 01:42:34PM -0500, Scott Wood wrote:
On 03/21/2013 09:27
On Sun, Mar 24, 2013 at 02:44:26PM +0100, Frederic Weisbecker wrote:
2013/3/21 Gleb Natapov g...@redhat.com:
Isn't is simpler for kernel/context_tracking.c to define empty
__guest_enter()/__guest_exit() if !CONFIG_KVM.
That doesn't look right. Off-cases are usually handled from
On Sat, Mar 09, 2013 at 07:48:33AM +0100, Paolo Bonzini wrote:
After receiving an INIT signal (either via the local APIC, or through
KVM_SET_MP_STATE), the bootstrap processor should reset immediately
and start execution at 0xfff0. Also, SIPIs have no effect on the
bootstrap processor.
On Sun, Mar 10, 2013 at 03:53:54PM +0100, Paolo Bonzini wrote:
Il 10/03/2013 12:46, Gleb Natapov ha scritto:
On Sat, Mar 09, 2013 at 07:48:33AM +0100, Paolo Bonzini wrote:
After receiving an INIT signal (either via the local APIC, or through
KVM_SET_MP_STATE), the bootstrap processor should
On Sun, Mar 10, 2013 at 06:19:07PM +0100, Paolo Bonzini wrote:
Il 10/03/2013 16:35, Gleb Natapov ha scritto:
However, it would effectively redefine the meaning of
KVM_MP_STATE_INIT_RECEIVED and KVM_MP_STATE_SIPI_RECEIVED, respectively
to KVM_MP_STATE_WAIT_FOR_SIPI
On Mon, Mar 04, 2013 at 11:31:46PM +0530, Raghavendra K T wrote:
This patch series further filters for a better vcpu candidate to yield to
in the PLE handler. The main idea is to record the preempted vcpus using
preempt notifiers and iterate only those preempted vcpus in the
handler. Note that the
On Sun, Mar 10, 2013 at 03:46:00PM +0200, Ioan Orghici wrote:
Signed-off-by: Ioan Orghici ioan.orgh...@gmail.com
---
arch/x86/kvm/vmx.c |3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 7cc566b..35c2c8f 100644
---
On Mon, Mar 11, 2013 at 11:14:39AM +0100, Paolo Bonzini wrote:
Il 10/03/2013 19:10, Gleb Natapov ha scritto:
On Sun, Mar 10, 2013 at 06:19:07PM +0100, Paolo Bonzini wrote:
Il 10/03/2013 16:35, Gleb Natapov ha scritto:
However, it would effectively redefine the meaning
On Mon, Mar 11, 2013 at 12:25:57PM +0100, Paolo Bonzini wrote:
Il 11/03/2013 11:28, Gleb Natapov ha scritto:
Not really true---we do exit with that state and EINTR when we get a
SIPI. Perhaps that can be changed.
That's an implementation detail. We can jump to the beginning
On Mon, Mar 11, 2013 at 02:31:46PM +0100, Paolo Bonzini wrote:
Il 11/03/2013 12:51, Gleb Natapov ha scritto:
Agreed, but we still have the problem of how to signal from userspace.
For that do you have any other suggestion than mp_state? And if we keep
mp_state to signal from