On 26/05/2020 9:17, Mike Rapoport wrote:
On Mon, May 25, 2020 at 04:47:18PM +0300, Liran Alon wrote:
On 22/05/2020 15:51, Kirill A. Shutemov wrote:
Furthermore, I would like to point out that just unmapping guest data from
kernel direct-map is not sufficient to prevent all
guest-to-guest
On 25/05/2020 17:46, Kirill A. Shutemov wrote:
On Mon, May 25, 2020 at 04:47:18PM +0300, Liran Alon wrote:
On 22/05/2020 15:51, Kirill A. Shutemov wrote:
== Background / Problem ==
There are a number of hardware features (MKTME, SEV) which protect guest
memory from some unauthorized host
On 22/05/2020 15:51, Kirill A. Shutemov wrote:
== Background / Problem ==
There are a number of hardware features (MKTME, SEV) which protect guest
memory from some unauthorized host access. The patchset proposes a purely
software feature that mitigates some of the same host-side read-only
On 28/04/2020 18:25, Alexander Graf wrote:
On 27.04.20 13:44, Liran Alon wrote:
On 27/04/2020 10:56, Paraschiv, Andra-Irina wrote:
On 25/04/2020 18:25, Liran Alon wrote:
On 23/04/2020 16:19, Paraschiv, Andra-Irina wrote:
The memory and CPUs are carved out of the primary VM
> On 27 Sep 2019, at 17:27, Sean Christopherson
> wrote:
>
> On Fri, Sep 27, 2019 at 03:06:02AM +0300, Liran Alon wrote:
>>
>>
>>> On 27 Sep 2019, at 0:43, Sean Christopherson
>>> wrote:
>>>
>>> Write the desired L2
u);
> else
> guest_cr3 = to_kvm_vmx(kvm)->ept_identity_map_addr;
> ept_load_pdptrs(vcpu);
> }
>
> - vmcs_writel(GUEST_CR3, guest_cr3);
> + if (!skip_cr3)
Nit: It’s a matter of taste, but I prefer positive conditions, e.g. “bool
write_guest_cr3”.
Anyway, code seems valid to me. Nice catch.
Reviewed-by: Liran Alon
-Liran
> + vmcs_writel(GUEST_CR3, guest_cr3);
> }
>
> int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
> --
> 2.22.0
>
ing so that the vCPU's 64-bit mode is determined
> directly from EFER_LMA and the VMCS checks are based on that, which
> matches section 26.2.4 of the SDM.
>
> Cc: Sean Christopherson
> Cc: Jim Mattson
> Cc: Krish Sadhukhan
> Fixes: 5845038c111db27902bc220a4f70070fe945871c
> Signed-off-by: Paolo Bonzini
> ---
Reviewed-by: Liran Alon
_accept_irq() which also only ever passes Fixed and LowPriority
> interrupts as posted interrupts into the guest.
>
> This fixes a bug I have with code which configures real hardware to
> inject virtual SMIs into my guest.
>
> Signed-off-by: Alexander Graf
Reviewed-by: Liran Al
* through the full KVM IRQ code, so refuse to take
> + * any direct PI assignments here.
> + */
> + pr_debug("SVM: %s: use legacy intr remap mode for irq %u\n",
> + __func__, irq.vector);
> + return
my guest.
>
> Signed-off-by: Alexander Graf
With some small improvements I wrote inline below:
Reviewed-by: Liran Alon
> ---
> arch/x86/kvm/vmx/vmx.c | 22 ++
> 1 file changed, 22 insertions(+)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/v
is used only in the VMWare case and is obsoleted by having the emulator
> itself reinject #GP.
>
> Signed-off-by: Sean Christopherson
Reviewed-by: Liran Alon
> ---
> arch/x86/include/asm/kvm_host.h | 3 +--
> arch/x86/kvm/svm.c | 10 ++
> arch/x86/kvm/vmx
> On 23 Aug 2019, at 17:44, Sean Christopherson
> wrote:
>
> On Fri, Aug 23, 2019 at 04:47:14PM +0300, Liran Alon wrote:
>>
>>
>>> On 23 Aug 2019, at 4:07, Sean Christopherson
>>> wrote:
>>>
>>> Add an explicit emulation typ
> On 23 Aug 2019, at 4:07, Sean Christopherson
> wrote:
>
> Immediately inject a #UD and return EMULATE done if emulation fails when
> handling an intercepted #UD. This helps pave the way for removing
> EMULATE_FAIL altogether.
>
> Signed-off-by: Sean Christopherson
I suggest squashing
> On 23 Aug 2019, at 4:07, Sean Christopherson
> wrote:
>
> Add an explicit emulation type for forced #UD emulation and use it to
> detect that KVM should unconditionally inject a #UD instead of falling
> into its standard emulation failure handling.
>
> Signed-off-by: Sean Christopherson
> On 23 Aug 2019, at 16:21, Liran Alon wrote:
>
>
>
>> On 23 Aug 2019, at 4:07, Sean Christopherson
>> wrote:
>>
>> The "no #UD on fail" is used only in the VMWare case, and for the VMWare
>> scenario it really means "#GP in
UD interception as well. :P
Besides minor comments inline below:
Reviewed-by: Liran Alon
-Liran
>
> Signed-off-by: Sean Christopherson
> ---
> arch/x86/include/asm/kvm_host.h | 2 +-
> arch/x86/kvm/svm.c | 9 ++---
> arch/x86/kvm/vmx/vmx.c | 9 ++-
the only one that use
“no #UD on fail”.
The diff itself looks fine to me, therefore:
Reviewed-by: Liran Alon
-Liran
> ---
> arch/x86/include/asm/kvm_host.h | 1 -
> arch/x86/kvm/svm.c | 3 +--
> arch/x86/kvm/vmx/vmx.c | 3 +--
> arch/x86/kvm/x86.c |
w a
> future patch to move #GP injection (for emulation failure) into
> kvm_emulate_instruction() without having to plumb in the error code.
>
> Signed-off-by: Sean Christopherson
Reviewed-by: Liran Alon
-Liran
> ---
> arch/x86/kvm/svm.c | 6 +-
> arch/x86/kvm/vmx/
kvm_vcpu_do_singlestep(vcpu, &r);
> + r = kvm_vcpu_do_singlestep(vcpu);
> if (!ctxt->have_exception ||
> exception_type(ctxt->exception.vector) == EXCPT_TRAP)
> __kvm_set_rflags(vcpu, ctxt->eflags);
> --
> 2.22.0
>
Reviewed-by: Liran Alon
-Liran
0.
In both cases, only a #UD is injected into the guest, without userspace being
aware of it.
The problem is that if we changed this ABI to not queue a #UD on emulation
error, we would definitely break userspace VMMs that rely on it when they
re-enter the guest in this scenario and expect a #UD to be injected.
Therefore, the only way to change this behaviour is to introduce a new KVM_CAP
that needs to be explicitly enabled from userspace.
But because most userspace VMMs most likely just terminate the guest on
emulation failure, it's probably not worth it, and Sean's commit is good
enough.
For the commit itself:
Reviewed-by: Liran Alon
-Liran
> On 20 Jul 2019, at 1:21, Paolo Bonzini wrote:
>
> On 20/07/19 00:06, Liran Alon wrote:
>>
>>
>>> On 20 Jul 2019, at 0:39, Paolo Bonzini wrote:
>>>
>>> If a KVM guest is reset while running a nested guest, free_nested will
>>>
to the shadow VMCS which has since been freed.
>
> This causes a vmptrld of a NULL pointer on my machine, but Jan reports
> the host to hang altogether. Let's see how much this trivial patch fixes.
>
> Reported-by: Jan Kiszka
> Cc: Liran Alon
> Cc: sta...@vger.kernel.org
>
> On 19 Jul 2019, at 19:42, Paolo Bonzini wrote:
>
> This is useful for debugging, and is ratelimited nowadays.
>
> Signed-off-by: Paolo Bonzini
Reviewed-by: Liran Alon
> ---
> arch/x86/kvm/vmx/vmx.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a
t, nested_release_vmcs12() also sets need_vmcs12_to_shadow_sync to false
explicitly. This can now be removed.
Second, I suggest putting a WARN_ON_ONCE() in copy_vmcs12_to_shadow() in case
shadow_vmcs == NULL, to assist in catching these kinds of errors more easily in
the future.
Besides that, the fix seems correct to
> On 5 Jul 2019, at 15:14, Paolo Bonzini wrote:
>
> kvm-unit-tests were adjusted to match bare metal behavior, but KVM
> itself was not doing what bare metal does; fix that.
>
> Signed-off-by: Paolo Bonzini
> ---
> arch/x86/kvm/lapic.c | 6 +-
> 1 file changed, 5 insertions(+), 1
> On 3 Jul 2019, at 19:23, Paolo Bonzini wrote:
>
> On 01/07/19 08:21, Yi Wang wrote:
>> There are two *_debug() macros in kvm apic source file:
>> - ioapic_debug, which is disable using #if 0
>> - apic_debug, which is commented
>>
>> Maybe it's better to control these two macros using
t;,
> 2017-08-03)
> Cc: sta...@vger.kernel.org
> Signed-off-by: Paolo Bonzini
Reviewed-by: Liran Alon
> ---
> arch/x86/kvm/vmx/nested.c | 5 +
> 1 file changed, 5 insertions(+)
>
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index c4e
t to
> /dev/kvm.
>
> Fixes: 1389309c811 ("KVM: nVMX: expose VMX capabilities for nested
> hypervisors to userspace", 2018-02-26)
> Signed-off-by: Paolo Bonzini
Reviewed-by: Liran Alon
> ---
> arch/x86/kvm/vmx/nested.c | 7 ++-
> 1 file changed, 6 insertions
> On 25 Jun 2019, at 14:15, Vitaly Kuznetsov wrote:
>
> Liran Alon writes:
>
>>> On 25 Jun 2019, at 11:51, Vitaly Kuznetsov wrote:
>>>
>>> Liran Alon writes:
>>>
>>>>> On 24 Jun 2019, at 16:30, Vitaly Kuznetsov wrote:
>&
> On 25 Jun 2019, at 11:51, Vitaly Kuznetsov wrote:
>
> Liran Alon writes:
>
>>> On 24 Jun 2019, at 16:30, Vitaly Kuznetsov wrote:
>>>
>>>
>>> +bool nested_enlightened_vmentry(struct kvm_vcpu *vcpu, u64 *evmptr)
>>
>> I pref
> On 24 Jun 2019, at 17:16, Vitaly Kuznetsov wrote:
>
> Liran Alon writes:
>
>>> On 24 Jun 2019, at 16:30, Vitaly Kuznetsov wrote:
>>>
>>> When Enlightened VMCS is in use, it is valid to do VMCLEAR and,
>>> according to TLFS, this should &qu
> On 24 Jun 2019, at 16:30, Vitaly Kuznetsov wrote:
>
> When Enlightened VMCS is in use, it is valid to do VMCLEAR and,
> according to TLFS, this should "transition an enlightened VMCS from the
> active to the non-active state". It is, however, wrong to assume that
> it is only valid to do
> On 19 Jun 2019, at 13:45, Paolo Bonzini wrote:
>
> On 19/06/19 00:36, Liran Alon wrote:
>>
>>
>>> On 18 Jun 2019, at 19:24, Paolo Bonzini wrote:
>>>
>>> From: Liran Alon
>>>
>>> Improve the KVM_{GET,SET}_NESTED_
> On 18 Jun 2019, at 19:24, Paolo Bonzini wrote:
>
> From: Liran Alon
>
> Improve the KVM_{GET,SET}_NESTED_STATE structs by detailing the format
> of VMX nested state data in a struct.
>
> In order to avoid changing the ioctl values of
> KVM_{GET,SET}_NE
You should apply something like the following instead of the original fix by
Sean, to play nicely with upstream without an additional dependency:
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index f1a69117ac0f..3fc44852ed4f 100644
--- a/arch/x86/kvm/vmx/nested.c
+++
> On 12 Jun 2019, at 21:25, Sean Christopherson
> wrote:
>
> On Wed, Jun 12, 2019 at 07:08:24PM +0200, Marius Hillenbrand wrote:
>> The Linux kernel has a global address space that is the same for any
>> kernel code. This address space becomes a liability in a world with
>> processor
Signed-off-by: Paolo Bonzini
Reviewed-by: Liran Alon
> ---
> arch/x86/include/asm/kvm_host.h | 1 +
> arch/x86/kvm/vmx/vmx.c | 6 --
> arch/x86/kvm/vmx/vmx.h | 2 --
> arch/x86/kvm/x86.c | 6 ++
> 4 files changed, 7 insertions(+), 8 deletion
> On 28 May 2019, at 3:53, Wanpeng Li wrote:
>
> From: Wanpeng Li
>
> The target vCPUs are in runnable state after vcpu_kick and suitable
> as a yield target. This patch implements the sched yield hypercall.
>
> A 17% performance increase in the ebizzy benchmark can be observed in an
>
Indeed those CPU resources are shared between sibling hyperthreads on same CPU
core.
There is currently no mechanism merged upstream to completely mitigate
SMT-enabled scenarios.
Note that this is also true for L1TF.
There are several proposals to address this, but they are still in early
> On 14 May 2019, at 5:07, Andy Lutomirski wrote:
>
> On Mon, May 13, 2019 at 2:09 PM Liran Alon wrote:
>>
>>
>>
>>> On 13 May 2019, at 21:17, Andy Lutomirski wrote:
>>>
>>>> I expect that the KVM address space can eventua
> On 14 May 2019, at 10:29, Peter Zijlstra wrote:
>
>
> (please, wrap our emails at 78 chars)
>
> On Tue, May 14, 2019 at 12:08:23AM +0300, Liran Alon wrote:
>
>> 3) From (2), we should have theoretically deduced that for every
>> #VMExit, there is a nee
> On 14 May 2019, at 0:42, Nakajima, Jun wrote:
>
>
>
>> On May 13, 2019, at 2:16 PM, Liran Alon wrote:
>>
>>> On 13 May 2019, at 22:31, Nakajima, Jun wrote:
>>>
>>> On 5/13/19, 7:43 AM, "kvm-ow...@vger.kernel.org on beha
> On 13 May 2019, at 18:15, Peter Zijlstra wrote:
>
> On Mon, May 13, 2019 at 04:38:32PM +0200, Alexandre Chartre wrote:
>> diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
>> index 46df4c6..317e105 100644
>> --- a/arch/x86/mm/fault.c
>> +++ b/arch/x86/mm/fault.c
>> @@ -33,6 +33,10 @@
> On 13 May 2019, at 22:31, Nakajima, Jun wrote:
>
> On 5/13/19, 7:43 AM, "kvm-ow...@vger.kernel.org on behalf of Alexandre
> Chartre" wrote:
>
>Proposal
>
>
>To handle both these points, this series introduce the mechanism of KVM
>address space isolation. Note that
> On 13 May 2019, at 21:17, Andy Lutomirski wrote:
>
>> I expect that the KVM address space can eventually be expanded to include
>> the ioctl syscall entries. By doing so, and also adding the KVM page table
>> to the process userland page table (which should be safe to do because the
>> KVM
erabilities such as L1TF.
>
> These patches are based on an original patches from Liran Alon, completed
> with additional patches to effectively create KVM address space different
> from the full kernel address space.
Great job pushing this forward! Thank you!
>
> The current code is jus
> On 13 May 2019, at 18:15, Peter Zijlstra wrote:
>
> On Mon, May 13, 2019 at 04:38:09PM +0200, Alexandre Chartre wrote:
>> From: Liran Alon
>>
>> Export symbols needed to create, manage, populate and switch
>> a mm from a kernel module (kvm in this case).
&
> On 1 Apr 2019, at 11:39, Vitaly Kuznetsov wrote:
>
> Paolo Bonzini writes:
>
>> On 29/03/19 16:32, Liran Alon wrote:
>>> Paolo I am not sure this is the case here. Please read my other
>>> replies in this email thread.
>>>
>>> I th
> On 29 Mar 2019, at 18:01, Paolo Bonzini wrote:
>
> On 29/03/19 15:40, Vitaly Kuznetsov wrote:
>> Paolo Bonzini writes:
>>
>>> On 28/03/19 21:31, Vitaly Kuznetsov wrote:
The 'hang' scenario develops like this:
1) Hyper-V boots and QEMU is trying to inject two irq
> On 29 Mar 2019, at 12:14, Vitaly Kuznetsov wrote:
>
> Liran Alon writes:
>
>>> On 28 Mar 2019, at 22:31, Vitaly Kuznetsov wrote:
>>>
>>> This is embarrassing but we have another Windows/Hyper-V issue to work around
>>> in K
> On 28 Mar 2019, at 22:31, Vitaly Kuznetsov wrote:
>
> This is embarrassing but we have another Windows/Hyper-V issue to work around
> in KVM (or QEMU). Hope "RFC" makes it less offensive.
>
> It was noticed that Hyper-V guest on q35 KVM/QEMU VM hangs on boot if e.g.
> 'piix4-usb-uhci' device
> On 26 Mar 2019, at 15:48, Vitaly Kuznetsov wrote:
>
> Liran Alon writes:
>
>>> On 26 Mar 2019, at 15:07, Vitaly Kuznetsov wrote:
>>> - Instead of putting the temporary HF_SMM_MASK drop to
>>> rsm_enter_protected_mode() (as was suggested by
alled from RSM.
>
> Reported-by: Jon Doron
> Suggested-by: Liran Alon
> Fixes: 5bea5123cbf0 ("KVM: VMX: check nested state and CR4.VMXE against SMM")
> Signed-off-by: Vitaly Kuznetsov
Patch looks good to me.
Reviewed-by: Liran Alon
> ---
> - Instead of putting the tem
> On 24 Jan 2019, at 19:39, Vitaly Kuznetsov wrote:
>
> Liran Alon writes:
>
>>> On 24 Jan 2019, at 19:15, Vitaly Kuznetsov wrote:
>>>
>>> We shouldn't probably be suggesting using Enlightened VMCS when it's not
>>> enabled (not supported f
V_X64_ENLIGHTENED_VMCS_RECOMMENDED;
> + if (evmcs_ver)
> + ent->eax |= HV_X64_ENLIGHTENED_VMCS_RECOMMENDED;
>
> /*
>* Default number of spinlock retry attempts, matches
> --
> 2.20.1
>
It seems to me that there are two unrelated patches here. Why not split
them?
For content itself: Reviewed-by: Liran Alon
> On 26 Dec 2018, at 10:15, Yang Weijiang wrote:
>
> This bit controls whether guest CET states will be loaded on guest entry.
>
> Signed-off-by: Zhang Yi Z
> Signed-off-by: Yang Weijiang
> ---
> arch/x86/kvm/vmx.c | 19 +++
> 1 file changed, 19 insertions(+)
>
> diff --git
ive TSC offset to aid debugging. The VMX code is changed to
> look more similar to SVM, which is in my opinion nicer.
>
> Based on a patch by Liran Alon.
>
> Signed-off-by: Paolo Bonzini
I would have applied this refactoring change on top of my original version of
this patch. Easier
atch was applied… Thanks.
Reviewed-by: Liran Alon
> ---
> arch/x86/kvm/vmx.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 4555077d69ce..be6f13f1c25f 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -
> On 17 Nov 2018, at 0:09, syzbot
> wrote:
>
> Hello,
>
> syzbot found the following crash on:
>
> HEAD commit:006aa39cddee kmsan: don't instrument fixup_bad_iret()
> git tree:
>
> On 7 Nov 2018, at 20:58, syzbot
> wrote:
>
> Hello,
>
> syzbot found the following crash on:
>
> HEAD commit:7438a3b20295 kmsan: print user address when reporting info..
> git tree:
>
> On 7 Nov 2018, at 14:47, Paolo Bonzini wrote:
>
> On 07/11/2018 13:10, Alexander Potapenko wrote:
>> This appears to be a real bug in KVM.
>> Please see a simplified reproducer attached.
>
> Thanks, I agree it's a real bug. The basic issue is that the
> kvm_state->size member is too
> On 7 Nov 2018, at 14:10, Alexander Potapenko wrote:
>
> On Wed, Nov 7, 2018 at 2:38 AM syzbot
> wrote:
>>
>> Hello,
>>
>> syzbot found the following crash on:
>>
>> HEAD commit:88b95ef4c780 kmsan: use MSan assembly instrumentation
>> git tree:
>>
terms of ABI guarantees. Therefore we are
> still in time to break things and conform as much as possible to the
> interface used for VMX.
>
> Suggested-by: Jim Mattson
> Suggested-by: Liran Alon
> Signed-off-by: Paolo Bonzini
> ---
> arch/x86/kvm/vmx.c | 2 +-
> 1 file cha
> On 8 Oct 2018, at 13:59, Wanpeng Li wrote:
>
> On Mon, 8 Oct 2018 at 05:02, Liran Alon wrote:
>>
>>
>>
>>> On 28 Sep 2018, at 9:12, Wanpeng Li wrote:
>>>
>>> From: Wanpeng Li
>>>
>>> In cloud envi
> On 28 Sep 2018, at 9:12, Wanpeng Li wrote:
>
> From: Wanpeng Li
>
> In cloud environments, lapic_timer_advance_ns needs to be tuned for every
> CPU generation and every host kernel version (the
> kvm-unit-tests/tscdeadline_latency.flat
> is 5700 cycles for upstream kernel and
> On 29 Aug 2018, at 13:29, Dan Carpenter wrote:
>
> On Wed, Aug 29, 2018 at 06:23:08PM +0800, Wanpeng Li wrote:
>> On Wed, 29 Aug 2018 at 18:18, Dan Carpenter wrote:
>>>
>>> On Wed, Aug 29, 2018 at 01:12:05PM +0300, Dan Carpenter wrote:
>>>> On
and also checking whether or not map->phys_map[min + i] is NULL since the
> max_apic_id
> is set according to the max apic id; however, some phys_map entries may be
> NULL when the apic id space is sparse. In addition, kvm also unconditionally
> sets max_apic_id to 255 to
> reserve
> enough
> On 21 Aug 2018, at 17:22, David Woodhouse wrote:
>
> On Tue, 2018-08-21 at 17:01 +0300, Liran Alon wrote:
>>
>>> On 21 Aug 2018, at 12:57, David Woodhouse
>> wrote:
>>>
>>> Another alternative... I'm told POWER8 does an interesting thin
> On 21 Aug 2018, at 12:57, David Woodhouse wrote:
>
> Another alternative... I'm told POWER8 does an interesting thing with
> hyperthreading and gang scheduling for KVM. The host kernel doesn't
> actually *see* the hyperthreads at all, and KVM just launches the full
> set of siblings when it
Sync both unicast and multicast lists instead of unicast twice.
Fixes: cfc80d9a116 ("net: Introduce net_failover driver")
Reviewed-by: Joao Martins
Signed-off-by: Liran Alon
---
drivers/net/net_failover.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/d
kvm_lapic_init(void);
> void kvm_lapic_exit(void);
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 06dd4cdb2ca8..a57766b940a5 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -2442,7 +2442,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu,
> struct msr_data *msr_info)
>
> break;
> case MSR_KVM_PV_EOI_EN:
> - if (kvm_lapic_enable_pv_eoi(vcpu, data))
> + if (kvm_lapic_enable_pv_eoi(vcpu, data, sizeof(u8)))
> return 1;
> break;
>
> --
> 2.14.4
Reviewed-By: Liran Alon
- vkuzn...@redhat.com wrote:
> When Enlightened VMCS is in use by L1 hypervisor we can avoid
> vmwriting
> VMCS fields which did not change.
>
> Our first goal is to achieve minimal impact on traditional VMCS case
> so
> we're not wrapping each vmwrite() with an if-changed checker. We also
- vkuzn...@redhat.com wrote:
> Adds hv_evmcs pointer and implement copy_enlightened_to_vmcs12() and
> copy_enlightened_to_vmcs12().
>
> prepare_vmcs02()/prepare_vmcs02_full() separation is not valid for
> Enlightened VMCS, do full sync for now.
>
> Suggested-by: Ladi Prosek
>
t; +}
> +
> /* Emulate the VMPTRST instruction */
> static int handle_vmptrst(struct kvm_vcpu *vcpu)
> {
> @@ -8858,6 +8936,9 @@ static int handle_vmptrst(struct kvm_vcpu
> *vcpu)
> if (!nested_vmx_check_permission(vcpu))
> return 1;
>
> + if (unlikely(to_vmx(vcpu)->nested.hv_evmcs))
> + return 1;
> +
> if (get_vmx_mem_address(vcpu, exit_qualification,
> vmx_instruction_info, true, &gva))
> return 1;
> @@ -12148,7 +12229,10 @@ static int nested_vmx_run(struct kvm_vcpu
> *vcpu, bool launch)
> if (!nested_vmx_check_permission(vcpu))
> return 1;
>
> - if (!nested_vmx_check_vmcs12(vcpu))
> + if (!nested_vmx_handle_enlightened_vmptrld(vcpu))
> + return 1;
> +
> + if (!vmx->nested.hv_evmcs && !nested_vmx_check_vmcs12(vcpu))
> goto out;
>
> vmcs12 = get_vmcs12(vcpu);
> --
> 2.14.4
Reviewed-By: Liran Alon
cpu_ioctl_enable_cap(struct
> kvm_vcpu *vcpu,
> return -EINVAL;
> return kvm_hv_activate_synic(vcpu, cap->cap ==
> KVM_CAP_HYPERV_SYNIC2);
> + case KVM_CAP_HYPERV_ENLIGHTENED_VMCS:
> + r = kvm_x86_ops->nested_enable_evmcs(vcpu, &vmcs_version);
> + if (!r) {
> + user_ptr = (void __user *)(uintptr_t)cap->args[0];
> + if (copy_to_user(user_ptr, &vmcs_version,
> + sizeof(vmcs_version)))
> + r = -EFAULT;
> + }
> + return r;
> +
> default:
> return -EINVAL;
> }
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index b6270a3b38e9..5c4b79c1af19 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -949,6 +949,7 @@ struct kvm_ppc_resize_hpt {
> #define KVM_CAP_GET_MSR_FEATURES 153
> #define KVM_CAP_HYPERV_EVENTFD 154
> #define KVM_CAP_HYPERV_TLBFLUSH 155
> +#define KVM_CAP_HYPERV_ENLIGHTENED_VMCS 156
>
> #ifdef KVM_CAP_IRQ_ROUTING
>
> --
> 2.14.4
Besides above comments,
Reviewed-By: Liran Alon
s width)
> Cc: Paolo Bonzini <pbonz...@redhat.com>
> Cc: Radim Krčmář <rkrc...@redhat.com>
> Cc: Junaid Shahid <juna...@google.com>
> Cc: Liran Alon <liran.a...@oracle.com>
> Signed-off-by: Wanpeng Li <wanpen...@tencent.com>
> ---
> v1 -> v2:
> * remove CR3_PCID_INVD in rsvd when PCIDE is 1 instead of
>removing CR3_PCID_INVD in new_value
>
> arch/x86/kvm/emulate.c | 4 +++-
> arch/x86/kvm/x8
- kernel...@gmail.com wrote:
> 2018-05-13 16:28 GMT+08:00 Liran Alon <liran.a...@oracle.com>:
> >
> > - kernel...@gmail.com wrote:
> >
> >> 2018-05-13 15:53 GMT+08:00 Liran Alon <liran.a...@oracle.com>:
> >> >
> >> >
- kernel...@gmail.com wrote:
> 2018-05-13 15:53 GMT+08:00 Liran Alon <liran.a...@oracle.com>:
> >
> > - kernel...@gmail.com wrote:
> >
> >> From: Wanpeng Li <wanpen...@tencent.com>
> >>
> >> MSB of CR3 is a reserved bit if the PCIDE bit is not set in CR4. It sh