On 05/12/2013 23:59, H. Peter Anvin wrote:
Hi, I'm currently reviewing another set of patches for MPX support
internally, which would at least in part conflict with these. I don't see
the rest of the series; where was it posted?
It was posted to kvm-devel and not threaded. :(
Paolo
--
On 12/06/2013 01:38 AM, Paolo Bonzini wrote:
On 05/12/2013 17:17, Marcelo Tosatti wrote:
I agree it is a bit ugly, but in my testing QEMU seemed to loop over all
the VCPUs fast enough for the kernel-side kvm_write_tsc() to do a
reasonable job of matching the offsets (the Linux guest did
VCPU TSC is not cleared by a warm reset (*), which leaves some types of Linux
guests (non-pvops guests and those with the kernel parameter no-kvmclock set)
vulnerable to the overflow in cyc2ns_offset fixed by upstream commit
9993bc635d01a6ee7f6b833b4ee65ce7c06350b1 (sched/x86: Fix overflow in
On 06/12/2013 09:24, Fernando Luis Vázquez Cao wrote:
Could we start with the patch that I already sent? It's been
tested, it is conservative in the sense that it does the minimum
necessary to fix an existing bug, and should be easy to
backport. I will be replying to this email with an
Newer kernels are capable of synchronizing TSC values of multiple VCPUs
on writeback, but we were excluding the power-up case; that exclusion is no
longer needed.
Signed-off-by: Fernando Luis Vazquez Cao ferna...@oss.ntt.co.jp
---
diff -urNp qemu-orig/target-i386/kvm.c qemu/target-i386/kvm.c
---
On 12/06/2013 05:36 PM, Paolo Bonzini wrote:
On 06/12/2013 09:24, Fernando Luis Vázquez Cao wrote:
Could we start with the patch that I already sent? It's been
tested, it is conservative in the sense that it does the minimum
necessary to fix an existing bug, and should be easy to
backport.
On 06/12/2013 09:56, Fernando Luis Vázquez Cao wrote:
I will also be sending a patch that makes the TSC writeback
unconditional, but this one should probably be kept on hold
until it is properly tested.
If you test it, I can drop the if myself from your patch.
Unfortunately I will
On 2013-12-06 18:08, Paolo Bonzini wrote:
On 06/12/2013 09:56, Fernando Luis Vázquez Cao wrote:
I will also be sending a patch that makes the TSC writeback
unconditional, but this one should probably be kept on hold
until it is properly tested.
If you test it, I can drop the if myself
On Tue, Dec 03, 2013 at 12:05:01PM -0500, CDR wrote:
I don't know if this is the right list, but we urgently need (I mean, the
whole industry urgently needs) qemu-img to learn to convert ESX 5.X images,
and it does not.
Like probably thousands of others, I am trying to get rid of VMware in the
On 11/16/2013 05:46 PM, Paul Mackerras wrote:
This fixes a bug in kvmppc_do_h_enter() where the physical address
for a page can be calculated incorrectly if transparent huge pages
(THP) are active. Until THP came along, it was true that if we
encountered a large (16M) page in
On Thu, Dec 05, 2013 at 03:00:33PM -0800, Paul E. McKenney wrote:
The question is: Is it safe to have a call_rcu() without any additional
rate limiting
on user triggerable code path?
That would be a good way to allow users to run your system out of memory,
especially on
Signed-off-by: Qiaowei Ren qiaowei@intel.com
Signed-off-by: Xudong Hao xudong@intel.com
Signed-off-by: Liu Jinsong jinsong@intel.com
---
arch/x86/include/asm/cpufeature.h |2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/arch/x86/include/asm/cpufeature.h
Signed-off-by: Qiaowei Ren qiaowei@intel.com
Signed-off-by: Xudong Hao xudong@intel.com
Signed-off-by: Liu Jinsong jinsong@intel.com
---
arch/x86/include/asm/processor.h | 23 +++
arch/x86/include/asm/xsave.h |6 +-
2 files changed, 28 insertions(+),
This patch adds the Documentation/intel_mpx.txt file with some
information about Intel MPX.
Signed-off-by: Qiaowei Ren qiaowei@intel.com
Signed-off-by: Xudong Hao xudong@intel.com
Signed-off-by: Liu Jinsong jinsong@intel.com
---
Documentation/intel_mpx.txt | 77
On 05/12/2013 19:29, Laszlo Ersek wrote:
On 12/05/13 18:42, Paolo Bonzini wrote:
On 05/12/2013 17:12, Laszlo Ersek wrote:
Hi,
I'm working on S3 suspend/resume in OVMF. The problem is that I'm getting an
unexpected guest reboot for code (LRET) that works on physical hardware. I
On 2013-12-05 10:52, Paolo Bonzini wrote:
On 04/12/2013 08:58, Jan Kiszka wrote:
We can easily emulate the HLT activity state for L1: If it decides that
L2 shall be halted on entry, just invoke the normal emulation of halt
after switching to L2. We do not depend on specific host features
On Sat, Dec 07, 2013 at 02:52:54AM +0800, Qiaowei Ren wrote:
This patch adds the Documentation/intel_mpx.txt file with some
information about Intel MPX.
Signed-off-by: Qiaowei Ren qiaowei@intel.com
Signed-off-by: Xudong Hao xudong@intel.com
Signed-off-by: Liu Jinsong
Hi Paolo
Do you mean OVMF already transferred control to the OS waking vector?
The system got stuck just because the page table at 0x9C000 is corrupt?
Thank you
Yao Jiewen
-----Original Message-----
From: Paolo Bonzini [mailto:pbonz...@redhat.com]
Sent: Friday, December 06, 2013 8:03 PM
To:
On 06/12/2013 13:03, Paolo Bonzini wrote:
The page tables are, ahem, crap:
000c000: 6750 fe01 gP..
000c010:
000c020:
000c030:
On Sat, Dec 07, 2013 at 02:52:55AM +0800, Qiaowei Ren wrote:
Signed-off-by: Qiaowei Ren qiaowei@intel.com
Signed-off-by: Xudong Hao xudong@intel.com
Signed-off-by: Liu Jinsong jinsong@intel.com
---
arch/x86/include/asm/cpufeature.h |2 ++
1 files changed, 2 insertions(+),
On Sat, Dec 07, 2013 at 02:52:56AM +0800, Qiaowei Ren wrote:
Commit message please.
Signed-off-by: Qiaowei Ren qiaowei@intel.com
Signed-off-by: Xudong Hao xudong@intel.com
Signed-off-by: Liu Jinsong jinsong@intel.com
---
arch/x86/include/asm/processor.h | 23
Hi Paolo
I am a little confused here. You said "Still, indeed it's OVMF's fault" and
"Still an EDK2 problem"??
The EDKII BIOS should always create a 1:1 virtual-to-physical mapping, but I am
not clear about the OS waking vector.
For the EPT_VIOLATION at rip 0x81000110, did that happen in EDKII
Intel has released Memory Protection Extensions (MPX) recently.
Please refer to
http://download-software.intel.com/sites/default/files/319433-015.pdf
These 2 patches are version2 to support Intel MPX at qemu side.
Version 1:
* Fix cpuid leaf 0x0d bug which incorrectly parsed eax and ebx;
*
From ee8b72df3b5503514b748035e6b1cb4d61f8e701 Mon Sep 17 00:00:00 2001
From: Liu Jinsong jinsong@intel.com
Date: Thu, 5 Dec 2013 08:32:12 +0800
Subject: [PATCH v3 1/2] target-i386: Intel MPX
Add some MPX-related definitions, and hardcode the sizes and offsets
of xsave features 3 and 4. It also
From 12fa3564b7342c4e034b13671dc922ff23ac4b1e Mon Sep 17 00:00:00 2001
From: Liu Jinsong jinsong@intel.com
Date: Sat, 7 Dec 2013 05:18:35 +0800
Subject: [PATCH v3 2/2] target-i386: MSR_IA32_BNDCFGS handle
Signed-off-by: Liu Jinsong jinsong@intel.com
---
target-i386/cpu.h |3 +++
On Fri, Dec 06, 2013 at 05:24:18PM +0900, Fernando Luis Vázquez Cao wrote:
On 12/06/2013 01:38 AM, Paolo Bonzini wrote:
On 05/12/2013 17:17, Marcelo Tosatti wrote:
I agree it is a bit ugly, but in my testing QEMU seemed to loop over all
the VCPUS fast enough for the kernel side
On 06/12/2013 14:46, Yao, Jiewen wrote:
Hi Paolo
I am a little confused here. You said "Still, indeed it's OVMF's fault" and
"Still an EDK2 problem"??
Sorry for the confusion. I wrote OVMF/EDK2 interchangeably, just to say
not KVM.
EDKII BIOS should always create 1:1 mapping
Paolo Bonzini wrote:
On 02/12/2013 17:46, Liu, Jinsong wrote:
From e9ba40b3d1820b8ab31431c73226ee3ed485edd1 Mon Sep 17 00:00:00 2001
From: Liu Jinsong jinsong@intel.com
Date: Tue, 3 Dec 2013 07:02:27 +0800
Subject: [PATCH 3/4] KVM/X86: Intel MPX vmx and msr handle
Signed-off-by:
Good investigation. I really appreciate that.
Now it seems we need the OVMF pkg owner to check when 0x9c000 gets corrupted,
and why.
Thank you
Yao Jiewen
-----Original Message-----
From: Paolo Bonzini [mailto:paolo.bonz...@gmail.com] On Behalf Of Paolo Bonzini
Sent: Friday, December 06, 2013
On 06/12/2013 15:47, Yao, Jiewen wrote:
Good investigation. I really appreciate that.
Now it seems we need the OVMF pkg owner to check when 0x9c000 gets corrupted,
and why.
FWIW it's 0x1000110, not 0x9c000. But everything else is right.
Paolo
--
On 12/06/2013 07:06 AM, Liu, Jinsong wrote:
Intel has released Memory Protection Extensions (MPX) recently.
Please refer to
http://download-software.intel.com/sites/default/files/319433-015.pdf
These 2 patches are version2 to support Intel MPX at qemu side.
You still aren't threading
On 06/12/2013 15:06, Liu, Jinsong wrote:
Intel has released Memory Protection Extensions (MPX) recently.
Please refer to
http://download-software.intel.com/sites/default/files/319433-015.pdf
These 2 patches are version2 to support Intel MPX at qemu side.
Version 1:
* Fix cpuid leaf
-----Original Message-----
From: Borislav Petkov [mailto:b...@alien8.de]
Sent: Friday, December 06, 2013 9:27 PM
To: Ren, Qiaowei
Cc: Paolo Bonzini; H. Peter Anvin; Ingo Molnar; Thomas Gleixner;
x...@kernel.org; linux-ker...@vger.kernel.org; qemu-de...@nongnu.org;
kvm@vger.kernel.org;
No... we always ask for cpufeature.h patches separately because they sometimes
cause conflicts between branches.
Borislav Petkov b...@alien8.de wrote:
On Sat, Dec 07, 2013 at 02:52:55AM +0800, Qiaowei Ren wrote:
Signed-off-by: Qiaowei Ren qiaowei@intel.com
Signed-off-by: Xudong Hao
On Fri, Dec 06, 2013 at 03:55:10PM +0000, Ren, Qiaowei wrote:
It is from the public introduction and specification; you can refer to
http://software.intel.com/en-us/articles/introduction-to-intel-memory-protection-extensions
Yep, saw it there too. Which doesn't make it any less strange :)
Btw, if
-----Original Message-----
From: Borislav Petkov [mailto:b...@alien8.de]
Sent: Friday, December 06, 2013 9:47 PM
To: Ren, Qiaowei
Cc: Paolo Bonzini; H. Peter Anvin; Ingo Molnar; Thomas Gleixner;
x...@kernel.org; linux-ker...@vger.kernel.org; qemu-de...@nongnu.org;
kvm@vger.kernel.org;
-----Original Message-----
From: Borislav Petkov [mailto:b...@alien8.de]
Sent: Saturday, December 07, 2013 12:06 AM
To: Ren, Qiaowei
Cc: Paolo Bonzini; H. Peter Anvin; Ingo Molnar; Thomas Gleixner;
x...@kernel.org; linux-ker...@vger.kernel.org; qemu-de...@nongnu.org;
kvm@vger.kernel.org;
Eric Blake wrote:
On 12/06/2013 07:06 AM, Liu, Jinsong wrote:
Intel has released Memory Protection Extensions (MPX) recently.
Please refer to
http://download-software.intel.com/sites/default/files/319433-015.pdf
These 2 patches are version2 to support Intel MPX at qemu side.
You still
Paolo Bonzini wrote:
On 06/12/2013 15:06, Liu, Jinsong wrote:
Intel has released Memory Protection Extensions (MPX) recently.
Please refer to
http://download-software.intel.com/sites/default/files/319433-015.pdf
These 2 patches are version2 to support Intel MPX at qemu side.
Version
This patch defines Intel MPX CPU feature.
Signed-off-by: Qiaowei Ren qiaowei@intel.com
Signed-off-by: Xudong Hao xudong@intel.com
Signed-off-by: Liu Jinsong jinsong@intel.com
---
arch/x86/include/asm/cpufeature.h |1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git
This patch defines xstate feature and extends struct xsave_hdr_struct
to support Intel MPX.
Signed-off-by: Qiaowei Ren qiaowei@intel.com
Signed-off-by: Xudong Hao xudong@intel.com
Signed-off-by: Liu Jinsong jinsong@intel.com
---
arch/x86/include/asm/processor.h | 12
This patch adds the Documentation/intel_mpx.txt file with some
information about Intel MPX.
Signed-off-by: Qiaowei Ren qiaowei@intel.com
Signed-off-by: Xudong Hao xudong@intel.com
Signed-off-by: Liu Jinsong jinsong@intel.com
---
Documentation/x86/intel_mpx.txt | 76
On 12/06/2013 08:27 AM, Liu, Jinsong wrote:
Eric Blake wrote:
On 12/06/2013 07:06 AM, Liu, Jinsong wrote:
Intel has released Memory Protection Extensions (MPX) recently.
Please refer to
http://download-software.intel.com/sites/default/files/319433-015.pdf
These 2 patches are version2 to
On 12/06/2013 05:46 AM, Borislav Petkov wrote:
I'm guessing this and the struct lwp_struct above is being added so that
you can have the LWP XSAVE area size? If so, you don't need it: LWP
XSAVE area is 128 bytes at offset 832 according to my manuals so I'd
guess having a u8 lwp_area[128]
On 07/12/2013 01:20, Qiaowei Ren wrote:
This patch defines xstate feature and extends struct xsave_hdr_struct
to support Intel MPX.
Signed-off-by: Qiaowei Ren qiaowei@intel.com
Signed-off-by: Xudong Hao xudong@intel.com
Signed-off-by: Liu Jinsong jinsong@intel.com
---
On 12/06/2013 09:35 AM, Paolo Bonzini wrote:
Sorry for the back-and-forth, but I think this and the removal of
XSTATE_FLEXIBLE (perhaps XSTATE_LAZY?) make your v2 worse than v1.
Since Peter already said the same, please undo these changes.
Also, how is XSTATE_EAGER used? Should MPX be
On Fri, Dec 06, 2013 at 09:23:22AM -0800, H. Peter Anvin wrote:
On 12/06/2013 05:46 AM, Borislav Petkov wrote:
I'm guessing this and the struct lwp_struct above is being added so that
you can have the LWP XSAVE area size? If so, you don't need it: LWP
XSAVE area is 128 bytes at offset 832
Paolo Bonzini wrote:
On 07/12/2013 01:20, Qiaowei Ren wrote:
This patch defines xstate feature and extends struct xsave_hdr_struct
to support Intel MPX.
Signed-off-by: Qiaowei Ren qiaowei@intel.com
Signed-off-by: Xudong Hao xudong@intel.com
Signed-off-by: Liu Jinsong
Anthony,
The following changes since commit 7dc65c02fe3fb8f3146ce0b9ff5fec5945329f0e:
Open 2.0 development tree (2013-11-27 14:02:45 -0800)
are available in the git repository at:
git://github.com/awilliam/qemu-vfio.git tags/vfio-pci-for-qemu-20131206.0
for you to fetch changes up to
It's sometimes useful to be able to verify interrupts are passing
through correctly.
Signed-off-by: Alex Williamson alex.william...@redhat.com
---
hw/misc/vfio.c | 24
1 file changed, 20 insertions(+), 4 deletions(-)
diff --git a/hw/misc/vfio.c b/hw/misc/vfio.c
index
When MSI is enabled on Nvidia GeForce cards the driver seems to
acknowledge the interrupt by writing a 0xff byte to the MSI capability
ID register using the PCI config space mirror at offset 0x88000 from
BAR0. Without this, the device will only fire a single interrupt.
VFIO handles the PCI
Add and remove groups from the KVM virtual VFIO device as we make
use of them. This allows KVM to optimize for performance and
correctness based on properties of the group.
Signed-off-by: Alex Williamson alex.william...@redhat.com
---
hw/misc/vfio.c | 67
We were relying on msix_unset_vector_notifiers() to release all the
vectors when we disable MSI-X, but this only happens when MSI-X is
still enabled on the device. Perform further cleanup by releasing
any remaining vectors listed as in-use after this call. This caused
a leak of IRQ routes on
On 12/06/2013 12:05 PM, Liu, Jinsong wrote:
Since Peter already said the same, please undo these changes.
Also, how is XSTATE_EAGER used? Should MPX be disabled when xsaveopt
is disabled on the kernel command line? (Liu, how would this affect
the KVM patches, too?)
Paolo
Currently
On 06/12/2013 21:48, Alex Williamson wrote:
/* Extra debugging, trap acceleration paths for more logging */
#define VFIO_ALLOW_MMAP 1
#define VFIO_ALLOW_KVM_INTX 1
+#define VFIO_ALLOW_KVM_MSI 1
+#define VFIO_ALLOW_KVM_MSIX 1
Why not make these device properties instead?
Paolo
--
H. Peter Anvin wrote:
On 12/06/2013 12:05 PM, Liu, Jinsong wrote:
Since Peter already said the same, please undo these changes.
Also, how is XSTATE_EAGER used? Should MPX be disabled when
xsaveopt is disabled on the kernel command line? (Liu, how would
this affect the KVM patches, too?)
On Fri, 2013-12-06 at 23:06 +0100, Paolo Bonzini wrote:
On 06/12/2013 21:48, Alex Williamson wrote:
/* Extra debugging, trap acceleration paths for more logging */
#define VFIO_ALLOW_MMAP 1
#define VFIO_ALLOW_KVM_INTX 1
+#define VFIO_ALLOW_KVM_MSI 1
+#define VFIO_ALLOW_KVM_MSIX
This patch defines xstate feature and extends struct xsave_hdr_struct
to support Intel MPX.
Signed-off-by: Qiaowei Ren qiaowei@intel.com
Signed-off-by: Xudong Hao xudong@intel.com
Signed-off-by: Liu Jinsong jinsong@intel.com
---
arch/x86/include/asm/processor.h | 12
-----Original Message-----
From: Liu, Jinsong
Sent: Saturday, December 07, 2013 6:13 AM
To: H. Peter Anvin; Paolo Bonzini; Ren, Qiaowei
Cc: kvm@vger.kernel.org; x...@kernel.org; Xudong Hao;
qemu-de...@nongnu.org; linux-ker...@vger.kernel.org; Ingo Molnar; Thomas
Gleixner
Subject: RE:
We can cancel a deferred static_key_slow_dec() instead of increasing
.enabled.counter.
The timer now won't fire before 'timeout' has elapsed since the last increase,
so this patch further stabilizes the case of frequent switching.
Signed-off-by: Radim Krčmář rkrc...@redhat.com
---
kernel/jump_label.c | 3 ++-
1
this on Tuesday and then moved to higher priority work, but
returned with enough courage to post a different first part.
The first part was tested on amd64, s390x and ppc64, the rest also on
armv7.
Applies to next-20131206 and v3.13-rc3.
Radim Krčmář (5):
static_key: add a section for deferred keys
We need to know about all deferred keys if we want to correctly
- initialize timers on kernel init/module load
- destroy pending timers when unloading a module
We depend on the section attribute, so direct definitions of struct
static_key_deferred should be avoided, which is suboptimal.
Fix a bug where we freed a module's memory while a timer was pending, by
canceling all deferred timers from the unloaded module.
static_key_rate_limit() still can't be called more than once.
Reproducer: (host crasher)
modprobe kvm_intel
(sleep 1; echo quit) \
| qemu-kvm -kernel /dev/null
Complement static_key_slow_dec_deferred().
This avoids an asymmetrical API and prepares us for future optimizations
and bug fixes.
Signed-off-by: Radim Krčmář rkrc...@redhat.com
---
arch/x86/kvm/lapic.c | 7 ---
include/linux/jump_label_ratelimit.h | 5 +
When '.enabled.counter == 1', static_key_slow_dec_deferred() gets
silently dropped if the decrease is already pending.
We print a warning if this happens, and because .enabled.counter cannot
go below 1 before the decrease has finished, the number of ignored
static_key_slow_dec_deferred() calls is kept
On 12/06/2013 04:23 PM, Ren, Qiaowei wrote:
We need to either disable these features in lazy mode, or we need to
force eager mode if these features are to be supported. The problem
with the latter is that it means forcing eager mode regardless of whether
anything actually *uses* these features.
-----Original Message-----
From: H. Peter Anvin [mailto:h...@zytor.com]
Sent: Saturday, December 07, 2013 9:07 AM
To: Ren, Qiaowei; Liu, Jinsong; Paolo Bonzini
Cc: kvm@vger.kernel.org; x...@kernel.org; Xudong Hao;
qemu-de...@nongnu.org; linux-ker...@vger.kernel.org; Ingo Molnar; Thomas
On 12/06/2013 05:16 PM, Ren, Qiaowei wrote:
Jinsong thinks that both KVM and the host depend on these feature definition
header files, so we first submitted the files they depend on.
Yes, but we can't turn on the feature without proper protection. Either
way, they are now in tip:x86/cpufeature.
On 11/16/2013 05:46 PM, Paul Mackerras wrote:
This fixes a bug in kvmppc_do_h_enter() where the physical address
for a page can be calculated incorrectly if transparent huge pages
(THP) are active. Until THP came along, it was true that if we
encountered a large (16M) page in