On Tue, Jul 08, 2014 at 09:49:40AM -0700, kan.li...@intel.com wrote:
diff --git a/arch/x86/kernel/cpu/perf_event.c
b/arch/x86/kernel/cpu/perf_event.c
index 2bdfbff..f0e8022 100644
--- a/arch/x86/kernel/cpu/perf_event.c
+++ b/arch/x86/kernel/cpu/perf_event.c
@@ -118,6 +118,9 @@ static int
On Tue, Jul 08, 2014 at 09:49:40AM -0700, kan.li...@intel.com wrote:
+ /*
+ * Accessing the LBR MSRs may cause a #GP under certain circumstances,
+ * e.g. KVM doesn't support the LBR MSRs.
+ * Check all LBR MSRs here.
+ * Disable LBR access if any LBR MSR cannot be accessed.
+
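The probing the comment describes — touch every LBR MSR once at init, and disable the facility if any of them faults — can be sketched in plain C. Everything below is illustrative: `mock_rdmsrl_safe` stands in for the kernel's `rdmsrl_safe()`, and the fake 0x680–0x68f range is an assumption, not the real LBR layout.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for the kernel's rdmsrl_safe(): returns 0 and
 * fills *val on success, or non-zero if the access would #GP (as it
 * does when, e.g., KVM does not implement the MSR). Here only a fake
 * 0x680-0x68f range is readable. */
static int mock_rdmsrl_safe(uint32_t msr, uint64_t *val)
{
    if (msr >= 0x680 && msr <= 0x68f) {
        *val = 0;
        return 0;
    }
    return 1; /* access would fault */
}

/* Probe every LBR MSR once at init; if any of them faults, disable the
 * LBR facility as a whole instead of risking a #GP at runtime. */
static bool lbr_msrs_accessible(const uint32_t *msrs, int nr)
{
    uint64_t val;

    for (int i = 0; i < nr; i++)
        if (mock_rdmsrl_safe(msrs[i], &val))
            return false;
    return true;
}
```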
On Fri, Jun 20, 2014 at 01:59:59PM +0100, Marc Zyngier wrote:
pm_fake doesn't quite describe what the handler does (ignoring writes
and returning 0 for reads).
As we're about to use it (a lot) in a different context, rename it
with an (admittedly cryptic) name that makes sense for all users.
On Tue, Jul 08, 2014 at 09:49:40AM -0700, kan.li...@intel.com wrote:
--- a/arch/x86/kernel/cpu/perf_event.h
+++ b/arch/x86/kernel/cpu/perf_event.h
@@ -464,6 +464,12 @@ struct x86_pmu {
*/
struct extra_reg *extra_regs;
unsigned int er_flags;
+ /*
+ * EXTRA REG
On Wed, Jul 09 2014 at 10:27:12 am BST, Christoffer Dall
christoffer.d...@linaro.org wrote:
On Fri, Jun 20, 2014 at 01:59:59PM +0100, Marc Zyngier wrote:
pm_fake doesn't quite describe what the handler does (ignoring writes
and returning 0 for reads).
As we're about to use it (a lot) in a
On Fri, Jun 20, 2014 at 02:00:01PM +0100, Marc Zyngier wrote:
Add handlers for all the AArch64 debug registers that are accessible
from EL0 or EL1. The trapping code keeps track of the state of the
debug registers, allowing for the switch code to implement a lazy
switching strategy.
On Fri, Jun 20, 2014 at 02:00:05PM +0100, Marc Zyngier wrote:
Add handlers for all the AArch32 debug registers that are accessible
from EL0 or EL1. The code follows the same strategy as the AArch64
counterpart with regards to tracking the dirty state of the debug
registers.
Reviewed-by: Anup
On Fri, Jun 20, 2014 at 02:00:06PM +0100, Marc Zyngier wrote:
Implement switching of the debug registers. While the number
of registers is massive, CPUs usually don't implement them all
(A57 has 6 breakpoints and 4 watchpoints, which gives us a total
of 22 registers only).
Also, we only
https://bugzilla.kernel.org/show_bug.cgi?id=76331
--- Comment #6 from Matt mspe...@users.sourceforge.net ---
Hi Alex and David,
I've been successfully using Alex's fix for more than a month now.
https://lkml.org/lkml/2014/5/29/932
Would it be possible to close this bug by adding the patch to
On Wed, Jul 09 2014 at 10:38:13 am BST, Christoffer Dall
christoffer.d...@linaro.org wrote:
On Fri, Jun 20, 2014 at 02:00:01PM +0100, Marc Zyngier wrote:
Add handlers for all the AArch64 debug registers that are accessible
from EL0 or EL1. The trapping code keeps track of the state of the
On Wed, Jul 09 2014 at 10:45:05 am BST, Christoffer Dall
christoffer.d...@linaro.org wrote:
On Fri, Jun 20, 2014 at 02:00:06PM +0100, Marc Zyngier wrote:
Implement switching of the debug registers. While the number
of registers is massive, CPUs usually don't implement them all
(A57 has 6
On Sun, Jun 22, 2014 at 04:35:24PM +0300, Avi Kivity wrote:
+ * Failure to instantiate pages will abort guest entry.
+ *
+ * Page frames should be pinned with get_page in advance.
+ *
+ * Pinning is not guaranteed while executing as L2 guest.
Does this undermine security?
PEBS writes should
To cleanly restore an SMP VM we need to ensure that the current pause
state of each vcpu is correctly recorded. Things could get confused if
the CPU starts running after migration restore completes when it was
paused before its state was captured.
I've done this by exposing a register (currently
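The idea above — serialize each vcpu's pause state through a userspace-visible register — can be modelled minimally. The register's name and encoding are elided in the message, so the accessors and the 0/1 encoding below are purely illustrative.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy vcpu carrying only the state this series cares about. */
struct vcpu {
    bool paused;
};

/* Userspace-visible encoding (assumed here): 1 = paused, 0 = running.
 * Save-side: capture the pause state before migration. */
static uint64_t get_pause_reg(const struct vcpu *v)
{
    return v->paused ? 1 : 0;
}

/* Restore-side: reapply it so a paused vcpu stays paused after
 * migration, instead of spuriously starting to run. */
static void set_pause_reg(struct vcpu *v, uint64_t val)
{
    v->paused = (val != 0);
}
```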
On Tue, Jul 08, 2014 at 09:49:40AM -0700, kan.li...@intel.com wrote:
--- a/arch/x86/kernel/cpu/perf_event.h
+++ b/arch/x86/kernel/cpu/perf_event.h
@@ -464,6 +464,12 @@ struct x86_pmu {
*/
struct extra_reg *extra_regs;
unsigned int er_flags;
+ /*
+ * EXTRA REG
On Tue, Jul 08, 2014 at 09:49:40AM -0700, kan.li...@intel.com wrote:
+/*
+ * Under certain circumstances, accessing certain MSRs may cause a #GP.
+ * This function tests whether the input MSR can be safely accessed.
+ */
+static inline bool check_msr(unsigned long msr)
+{
+ u64 value;
+
+ if
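A plausible shape for such a probe (the quoted function is cut off, so this is a guess at the technique, not the actual patch): read the MSR, write it back with a few bits flipped, re-read to confirm the write stuck, then restore the original value. The `fake_*` helpers mock the MSR file so the sketch is self-contained.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define NR_FAKE_MSRS 4

/* Toy MSR file standing in for real hardware; the last slot is
 * "absent" and faults on access, mimicking an MSR a hypervisor
 * doesn't implement. */
static uint64_t fake_msrs[NR_FAKE_MSRS] = { 0x1, 0x2, 0x3, 0 };

static int fake_rdmsrl_safe(unsigned int msr, uint64_t *val)
{
    if (msr >= NR_FAKE_MSRS - 1)
        return 1;               /* would #GP */
    *val = fake_msrs[msr];
    return 0;
}

static int fake_wrmsrl_safe(unsigned int msr, uint64_t val)
{
    if (msr >= NR_FAKE_MSRS - 1)
        return 1;               /* would #GP */
    fake_msrs[msr] = val;
    return 0;
}

/* Probe an MSR: read it, write the value back with flip_mask bits
 * flipped, re-read, then restore. Any fault, or a write that doesn't
 * stick, marks the MSR unsafe. */
static bool check_msr(unsigned int msr, uint64_t flip_mask)
{
    uint64_t old, tmp;

    if (fake_rdmsrl_safe(msr, &old))
        return false;
    if (fake_wrmsrl_safe(msr, old ^ flip_mask))
        return false;
    if (fake_rdmsrl_safe(msr, &tmp) || tmp != (old ^ flip_mask))
        return false;
    return !fake_wrmsrl_safe(msr, old);
}
```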
On Tue, Jul 08, 2014 at 09:49:40AM -0700, kan.li...@intel.com wrote:
+/*
+ * Under certain circumstances, accessing certain MSRs may cause a #GP.
+ * This function tests whether the input MSR can be safely accessed.
+ */
+static inline bool check_msr(unsigned long msr) {
+ u64 value;
+
+
On Wed, Jul 09, 2014 at 12:09:29PM +0100, Marc Zyngier wrote:
On Wed, Jul 09 2014 at 10:38:13 am BST, Christoffer Dall
christoffer.d...@linaro.org wrote:
On Fri, Jun 20, 2014 at 02:00:01PM +0100, Marc Zyngier wrote:
Add handlers for all the AArch64 debug registers that are accessible
from
On Wed, Jul 09, 2014 at 02:32:28PM +, Liang, Kan wrote:
On Tue, Jul 08, 2014 at 09:49:40AM -0700, kan.li...@intel.com wrote:
+/*
+ * Under certain circumstances, accessing certain MSRs may cause a #GP.
+ * This function tests whether the input MSR can be safely accessed.
+ */
+static
-Original Message-
From: Peter Zijlstra [mailto:pet...@infradead.org]
Sent: Wednesday, July 09, 2014 10:58 AM
To: Liang, Kan
Cc: a...@firstfloor.org; linux-ker...@vger.kernel.org; kvm@vger.kernel.org
Subject: Re: [PATCH V4 1/2] perf ignore LBR and extra_regs.
On Wed, Jul 09,
Il 01/07/2014 16:45, Will Deacon ha scritto:
Now that we have a dynamic means to register kvm_device_ops, use that
for the ARM VGIC, instead of relying on the static table.
Cc: Gleb Natapov g...@kernel.org
Cc: Paolo Bonzini pbonz...@redhat.com
Cc: Marc Zyngier marc.zyng...@arm.com
Cc:
Il 01/07/2014 16:45, Will Deacon ha scritto:
Now that we have a dynamic means to register kvm_device_ops, use that
for the VFIO kvm device, instead of relying on the static table.
This is achieved by a module_init call to register the ops with KVM.
Cc: Gleb Natapov g...@kernel.org
Cc: Paolo
Il 30/06/2014 11:03, Nadav Amit ha scritto:
We encountered a scenario in which after an INIT is delivered, a pending
interrupt is delivered, although it was sent before the INIT. As the SDM
states in section 10.4.7.1, the ISR and the IRR should be cleared after INIT as
KVM does. This also
The current calculation for VTTBR_BADDR_MASK masks only 39 bits and not
all 40 bits. That last bit is important as some systems allocate
from near the top of the available address space.
This patch is necessary to run KVM on an aarch64 SOC I have been testing.
Signed-off-by: Joel Schopp
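The off-by-one described above is easy to see with a GENMASK_ULL-style helper. The helper and the bit positions below are illustrative (the real BADDR low bit depends on the VTCR configuration); the point is that a mask stopping at bit 38 drops exactly the top bit of a 40-bit address.

```c
#include <assert.h>
#include <stdint.h>

/* Build a mask covering bits [high:low] inclusive, in the shape of
 * the kernel's GENMASK_ULL(). With a 40-bit IPA, VTTBR.BADDR spans
 * bits [39:x], so the mask must reach up to bit 39. */
static uint64_t genmask_ull(unsigned int high, unsigned int low)
{
    return (~0ULL >> (63 - high)) & (~0ULL << low);
}
```

With `high = 38`, an address allocated near the top of the 40-bit space loses bit 39 when masked, which is the symptom the patch addresses.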
On Tue, 2014-07-01 at 15:45 +0100, Will Deacon wrote:
Now that we have a dynamic means to register kvm_device_ops, use that
for the VFIO kvm device, instead of relying on the static table.
This is achieved by a module_init call to register the ops with KVM.
Cc: Gleb Natapov g...@kernel.org
Il 26/06/2014 13:50, Matthias Lange ha scritto:
Since commit 575203 the MCE subsystem in the Linux kernel for AMD sets bit 18
in MSR_K7_HWCR. Running such a kernel as a guest in KVM on an AMD host results
in a #GP being injected into the guest because kvm_set_msr_common returns 1. This
patch fixes
On Wed, Jul 09 2014 at 3:52:32 pm BST, Christoffer Dall
christoffer.d...@linaro.org wrote:
On Wed, Jul 09, 2014 at 12:09:29PM +0100, Marc Zyngier wrote:
On Wed, Jul 09 2014 at 10:38:13 am BST, Christoffer Dall
christoffer.d...@linaro.org wrote:
On Fri, Jun 20, 2014 at 02:00:01PM +0100, Marc
On Wed, Jul 09 2014 at 5:05:20 pm BST, Paolo Bonzini pbonz...@redhat.com
wrote:
Il 01/07/2014 16:45, Will Deacon ha scritto:
Now that we have a dynamic means to register kvm_device_ops, use that
for the ARM VGIC, instead of relying on the static table.
Cc: Gleb Natapov g...@kernel.org
Cc:
Il 26/06/2014 14:22, Matthias Lange ha scritto:
Linux' AMD MCE code tries to read from the MC4_MISC1 (0xc408) MSR. Because
this read is not virtualized within KVM, a #GP is injected into the guest.
This patch handles guest reads from MC4_MISC and returns 0 to the guest.
Signed-off-by:
On Wed, Jul 09, 2014 at 03:43:45PM +, Liang, Kan wrote:
-Original Message-
From: Peter Zijlstra [mailto:pet...@infradead.org]
Sent: Wednesday, July 09, 2014 10:58 AM
To: Liang, Kan
Cc: a...@firstfloor.org; linux-ker...@vger.kernel.org; kvm@vger.kernel.org
Subject: Re:
Hi Alex,
On Wed, Jul 09, 2014 at 05:19:24PM +0100, Alex Williamson wrote:
On Tue, 2014-07-01 at 15:45 +0100, Will Deacon wrote:
diff --git a/virt/kvm/vfio.c b/virt/kvm/vfio.c
index ba1a93f935c7..bb11b36ee8a2 100644
--- a/virt/kvm/vfio.c
+++ b/virt/kvm/vfio.c
@@ -246,6 +246,16 @@ static
On Wed, 2014-07-09 at 17:47 +0100, Will Deacon wrote:
Hi Alex,
On Wed, Jul 09, 2014 at 05:19:24PM +0100, Alex Williamson wrote:
On Tue, 2014-07-01 at 15:45 +0100, Will Deacon wrote:
diff --git a/virt/kvm/vfio.c b/virt/kvm/vfio.c
index ba1a93f935c7..bb11b36ee8a2 100644
---
On Tue, 2014-07-01 at 15:45 +0100, Will Deacon wrote:
Now that we have a dynamic means to register kvm_device_ops, use that
for the VFIO kvm device, instead of relying on the static table.
This is achieved by a module_init call to register the ops with KVM.
Cc: Gleb Natapov g...@kernel.org
pm_fake doesn't quite describe what the handler does (ignoring writes
and returning 0 for reads).
As we're about to use it (a lot) in a different context, rename it
with an (admittedly cryptic) name that makes sense for all users.
Reviewed-by: Anup Patel anup.pa...@linaro.org
Reviewed-by:
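The handler's behaviour described above — writes ignored, reads returning 0, i.e. RAZ/WI — is small enough to sketch. The function name and return convention below are ours, not the kernel's.

```c
#include <assert.h>
#include <stdint.h>

/* RAZ/WI sketch: Read-As-Zero, Writes Ignored. Returns 1 to signal
 * the access was handled and no real register should be touched. */
static int trap_raz_wi_like(uint64_t *val, int is_write)
{
    if (!is_write)
        *val = 0;   /* reads always see zero */
    return 1;       /* writes fall through untouched: ignored */
}
```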
We now have multiple tables for the various system registers
we trap. Make sure we check the order of all of them, as it is
critical that we get the order right (been there, done that...).
Reviewed-by: Anup Patel anup.pa...@linaro.org
Reviewed-by: Christoffer Dall christoffer.d...@linaro.org
This patch series adds debug support, a key feature missing from the
KVM/arm64 port.
The main idea is to keep track of whether the debug registers are
dirty (changed by the guest) or not. In this case, perform the usual
save/restore dance, for one run only. It means we only have a penalty
if a
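The lazy strategy the cover letter describes can be modelled with a single dirty flag (all names below are illustrative, not the kernel's): a trapped guest write marks the debug state dirty, the next world switch pays the save/restore cost once, and the flag is then cleared so clean runs stay cheap.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Minimal model of lazy debug-register switching. */
struct dbg_state {
    uint64_t regs[6];
    bool dirty;
};

static int nr_switches;     /* counts full save/restore dances */

/* World switch: only do the expensive dance while the state is dirty,
 * and drop the flag after one run. */
static void world_switch(struct dbg_state *guest)
{
    if (guest->dirty) {
        nr_switches++;          /* save/restore would happen here */
        guest->dirty = false;
    }
}

/* Trap path: a guest write to a debug register marks the state dirty. */
static void guest_writes_dbg_reg(struct dbg_state *guest, int idx,
                                 uint64_t val)
{
    guest->regs[idx] = val;
    guest->dirty = true;
}
```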
In order to be able to use the DBG_MDSCR_* macros from the KVM code,
move the relevant definitions to the obvious include file.
Also move the debug_el enum to a portion of the file that is guarded
by #ifndef __ASSEMBLY__ in order to use that file from assembly code.
Acked-by: Will Deacon
An interesting feature of the CP14 encoding is that there is
an overlap between 32 and 64bit registers, meaning they cannot
live in the same table as we did for CP15.
Create separate tables for 64bit CP14 and CP15 registers, and
let the top level handler use the right one.
Reviewed-by: Anup
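The per-table split might look like this (the keys, table contents, and handler signature are invented for illustration): each coprocessor/width pair gets its own array, and a common lookup walks whichever table the top-level handler selects, so overlapping 32-bit and 64-bit CP14 encodings never collide.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative trap-table entry: a key encoding the register and a
 * handler to run on access. */
struct reg_desc {
    uint32_t key;
    int (*handle)(uint64_t *val, int is_write);
};

static int handle_ignore(uint64_t *val, int is_write)
{
    if (!is_write)
        *val = 0;               /* RAZ/WI behaviour */
    return 0;
}

/* One table per coprocessor/width pair; keys are made up. */
static const struct reg_desc cp14_64_table[] = { { 0x10, handle_ignore } };
static const struct reg_desc cp15_64_table[] = { { 0x20, handle_ignore } };

/* Common lookup shared by every table. */
static const struct reg_desc *find_reg(const struct reg_desc *tbl,
                                       size_t n, uint32_t key)
{
    for (size_t i = 0; i < n; i++)
        if (tbl[i].key == key)
            return &tbl[i];
    return NULL;
}
```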
Add handlers for all the AArch64 debug registers that are accessible
from EL0 or EL1. The trapping code keeps track of the state of the
debug registers, allowing for the switch code to implement a lazy
switching strategy.
Reviewed-by: Anup Patel anup.pa...@linaro.org
Reviewed-by: Christoffer Dall
Enable trapping of the debug registers, preventing guests from
messing with the host state (and allowing guests to use the debug
infrastructure as well).
Reviewed-by: Anup Patel anup.pa...@linaro.org
Reviewed-by: Christoffer Dall christoffer.d...@linaro.org
Signed-off-by: Marc Zyngier
Add handlers for all the AArch32 debug registers that are accessible
from EL0 or EL1. The code follows the same strategy as the AArch64
counterpart with regards to tracking the dirty state of the debug
registers.
Reviewed-by: Anup Patel anup.pa...@linaro.org
Reviewed-by: Christoffer Dall
Implement switching of the debug registers. While the number
of registers is massive, CPUs usually don't implement them all
(A57 has 6 breakpoints and 4 watchpoints, which gives us a total
of 22 registers only).
Also, we only save/restore them when MDSCR_EL1 has debug enabled,
or when we've
As we're about to trap a bunch of CP14 registers, let's rework
the CP15 handling so it can be generalized and work with multiple
tables.
Reviewed-by: Anup Patel anup.pa...@linaro.org
Reviewed-by: Christoffer Dall christoffer.d...@linaro.org
Signed-off-by: Marc Zyngier marc.zyng...@arm.com
---
Required by PEBS support as discussed at
Subject: [patch 0/4] [patch 0/5] Implement PEBS virtualization for Silvermont
Message-Id: 1401412327-14810-1-git-send-email-a...@firstfloor.org
Thread.
--
v2:
- unify remote kick function (Gleb)
- keep sptes
Allow vcpus to pin spte translations by:
1) Creating a per-vcpu list of pinned ranges.
2) On mmu reload request:
- Fault ranges.
- Mark sptes with a pinned bit.
- Mark shadow pages as pinned.
3) Then modify the following actions:
- Page age = skip spte flush.
Skip pinned shadow pages when selecting pages to zap.
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
---
arch/x86/kvm/mmu.c | 26 ++
1 file changed, 18 insertions(+), 8 deletions(-)
Index: kvm.pinned-sptes/arch/x86/kvm/mmu.c
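The zap-selection change reduces to a skip in the reclaim loop; the types and names below are illustrative stand-ins for KVM's shadow-page structures, not the patch itself.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy shadow page carrying only what the sketch needs. */
struct shadow_page {
    bool pinned;
    bool zapped;
};

/* Reclaim loop: pinned translations must survive, so pinned pages
 * are passed over when selecting pages to zap. */
static int zap_unpinned(struct shadow_page *pages, size_t n)
{
    int zapped = 0;

    for (size_t i = 0; i < n; i++) {
        if (pages[i].pinned)
            continue;           /* skip pinned shadow pages */
        pages[i].zapped = true;
        zapped++;
    }
    return zapped;
}
```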
To be used by next patch.
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
---
arch/x86/include/asm/kvm_host.h |2 +-
arch/x86/kvm/mmu.c | 11 ++-
arch/x86/kvm/paging_tmpl.h |2 +-
arch/x86/kvm/x86.c |2 +-
4 files changed, 9 insertions(+),
Reload remote vcpus MMU from GET_DIRTY_LOG codepath, before
deleting a pinned spte.
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
---
arch/x86/kvm/mmu.c | 29 +++--
1 file changed, 23 insertions(+), 6 deletions(-)
Index: kvm.pinned-sptes/arch/x86/kvm/mmu.c
On Wed, Jul 09, 2014 at 04:12:50PM -0300, mtosa...@redhat.com wrote:
Required by PEBS support as discussed at
Subject: [patch 0/4] [patch 0/5] Implement PEBS virtualization for Silvermont
Message-Id: 1401412327-14810-1-git-send-email-a...@firstfloor.org
Thread.
--
On Wed, Jul 09, 2014 at 02:32:28PM +, Liang, Kan wrote:
On Tue, Jul 08, 2014 at 09:49:40AM -0700, kan.li...@intel.com wrote:
+/*
+ * Under certain circumstances, accessing certain MSRs may cause a #GP.
+ * This function tests whether the input MSR can be safely
bump ?
- Original Message -
From: Phil Daws ux...@splatnix.net
To: kvm@vger.kernel.org
Sent: Tuesday, 8 July, 2014 7:48:42 PM
Subject: Network Loss
Hello!
Running a CentOS 6.5 KVM host with a CentOS 6.5 QEMU guest with OpenNMS and
having a non-responsive network from the guest. The
51 matches