Re: [PATCH] KVM: nSVM: vmentry ignores EFER.LMA and possibly RFLAGS.VM

2020-07-10 Thread Maxim Levitsky
> > > > Another possibility to stomp them in a more efficient manner could be to > > > rely on the dirty flags, and use them to set up an in-memory copy of the > > > VMCB. > > > > That sounds like a great idea! Is Maxim going to look into that? > > > > Now he is! Yep :-) Best regards, Maxim Levitsky > > Paolo >

[PATCH] kvm: x86: replace kvm_spec_ctrl_test_value with runtime test on the host

2020-07-08 Thread Maxim Levitsky
d-by: Sean Christopherson Signed-off-by: Maxim Levitsky --- arch/x86/kvm/svm/svm.c | 2 +- arch/x86/kvm/vmx/vmx.c | 2 +- arch/x86/kvm/x86.c | 38 +- arch/x86/kvm/x86.h | 2 +- 4 files changed, 24 insertions(+), 20 deletions(-) diff --git a/arch/x86/kvm/s
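The subject describes replacing the precomputed valid-bits mask with a runtime write test on the host. A minimal sketch of that idea, assuming the usual <asm/msr.h> helpers (rdmsrl_safe/wrmsrl_safe); this is an illustration and not necessarily what the final patch does:

    #include <linux/irqflags.h>
    #include <asm/msr.h>
    #include <asm/msr-index.h>

    /*
     * Sketch: instead of computing which SPEC_CTRL bits are valid from CPUID
     * bits, try to write the guest value into the real MSR (with interrupts
     * off) and see whether the CPU accepts it.
     */
    static bool spec_ctrl_value_ok(u64 value)
    {
    	unsigned long flags;
    	u64 saved;
    	bool ok = true;

    	local_irq_save(flags);

    	if (rdmsrl_safe(MSR_IA32_SPEC_CTRL, &saved))
    		ok = false;
    	else if (wrmsrl_safe(MSR_IA32_SPEC_CTRL, value))
    		ok = false;                        /* CPU rejected the value (#GP) */
    	else
    		wrmsrl(MSR_IA32_SPEC_CTRL, saved); /* restore the host value */

    	local_irq_restore(flags);
    	return ok;
    }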

Re: [PATCH] kvm: x86: rewrite kvm_spec_ctrl_valid_bits

2020-07-07 Thread Maxim Levitsky
his msr, and therefore check all the bits, both with regard to guest- and host-supported values. Does this make sense, or do you think that this is overkill? One thing is for sure: we currently have a bug about a wrong #GP in case STIBP is supported but IBRS isn't. I don't mind fixing it in any way that all of you agree upon. Best regards, Maxim Levitsky

Re: [PATCH] kvm: x86: rewrite kvm_spec_ctrl_valid_bits

2020-07-07 Thread Maxim Levitsky
On Mon, 2020-07-06 at 23:11 -0700, Sean Christopherson wrote: > On Sun, Jul 05, 2020 at 12:40:25PM +0300, Maxim Levitsky wrote: > > > Rather than compute the mask every time, it can be computed once on module > > > load and stashed in a global. Note, there's a RFC

Re: [PATCH] kvm: x86: rewrite kvm_spec_ctrl_valid_bits

2020-07-05 Thread Maxim Levitsky
On Thu, 2020-07-02 at 11:16 -0700, Sean Christopherson wrote: > On Thu, Jul 02, 2020 at 08:44:55PM +0300, Maxim Levitsky wrote: > > There are a few cases when this function was creating a bogus #GP condition, > > for example the case when an AMD host supports STIBP but doesn

[PATCH] kvm: x86: rewrite kvm_spec_ctrl_valid_bits

2020-07-02 Thread Maxim Levitsky
?id=199889 Fixes: 6441fa6178f5 ("KVM: x86: avoid incorrect writes to host MSR_IA32_SPEC_CTRL") Signed-off-by: Maxim Levitsky --- arch/x86/kvm/x86.c | 57 ++ 1 file changed, 42 insertions(+), 15 deletions(-) diff --git a/arch/x86/kvm/x86.c

Re: [PATCH v3.1 7/7] kconfig: qconf: navigate menus on hyperlinks

2020-07-01 Thread Maxim Levitsky
On Wed, 2020-07-01 at 17:51 +0200, Mauro Carvalho Chehab wrote: > Em Thu, 2 Jul 2020 00:21:36 +0900 > Masahiro Yamada escreveu: > > > On Tue, Jun 30, 2020 at 3:48 PM Mauro Carvalho Chehab > > wrote: > > > Instead of just changing the helper window to show a > > > dependency, also navigate to it

Re: [PATCH 2/2] kvm/x86: don't expose MSR_IA32_UMWAIT_CONTROL unconditionally

2020-06-30 Thread Maxim Levitsky
} } else if (function == 7 && index == 0 && reg == R_ECX) { -if (enable_cpu_pm) { +if (enable_cpu_pm && has_msr_umwait) { ret |= CPUID_7_0_ECX_WAITPKG; } else { ret &= ~CPUID_7_0_ECX_WAITPKG; -- Should I send this patch officially? Best regards, Maxim Levitsky

Re: [PATCH v2 0/5] Fix split view search and debug info navigation

2020-06-29 Thread Maxim Levitsky
reference. Thanks a lot for these fixes! Best regards, Maxim Levitsky

Re: [PATCH v2 0/5] Fix split view search and debug info navigation

2020-06-29 Thread Maxim Levitsky
On Mon, 2020-06-29 at 16:46 +0200, Mauro Carvalho Chehab wrote: > Em Mon, 29 Jun 2020 15:23:49 +0300 > Maxim Levitsky escreveu: > > > On Mon, 2020-06-29 at 11:35 +0200, Mauro Carvalho Chehab wrote: > > > This series fixes some issues with search while on split view

Re: [PATCH] kconfig: qconf: make debug links work again

2020-06-28 Thread Maxim Levitsky
config: create links in info window"). > > > > Reported-by: Maxim Levitsky > > Signed-off-by: Mauro Carvalho Chehab > > I tested this patch, but this caused > segmentation fault. > > > I enabled 'Show Debug Info', > and then clicked > dep: . >

Re: Commit 'fs: Do not check if there is a fsnotify watcher on pseudo inodes' breaks chromium here

2020-06-28 Thread Maxim Levitsky
On Sun, 2020-06-28 at 16:14 +0300, Maxim Levitsky wrote: > On Sun, 2020-06-28 at 15:53 +0300, Amir Goldstein wrote: > > On Sun, Jun 28, 2020 at 2:14 PM Maxim Levitsky wrote: > > > Hi, > > > > > > I just did usual kernel update and now chromium crashes on start

Re: Commit 'fs: Do not check if there is a fsnotify watcher on pseudo inodes' breaks chromium here

2020-06-28 Thread Maxim Levitsky
On Sun, 2020-06-28 at 15:53 +0300, Amir Goldstein wrote: > On Sun, Jun 28, 2020 at 2:14 PM Maxim Levitsky wrote: > > Hi, > > > > I just did usual kernel update and now chromium crashes on startup. > > It happens both in a KVM's VM (with virtio-gpu if that matters) an

Re: [PATCH] kconfig: qconf: make debug links work again

2020-06-28 Thread Maxim Levitsky
On Sun, 2020-06-28 at 14:21 +0200, Mauro Carvalho Chehab wrote: > The Qt5 conversion broke support for debug info links. > > Restore the behaviour added by changeset > ab45d190fd4a ("kconfig: create links in info window"). > > Reported-by: Maxim Levitsky > Signe

Re: Kernel issues with Radeon Pro WX4100 and DP->HDMI dongles

2020-06-28 Thread Maxim Levitsky
On Thu, 2020-06-25 at 10:14 +0300, Maxim Levitsky wrote: > Hi, > > I recently tried to connect my TV and WX4100 via two different DP->HDMI > dongles. > One of them makes my main monitor go dark and the system lock up (I haven't > yet debugged this further), and the ot

Re: Search function in xconfig is partially broken after recent changes

2020-06-28 Thread Maxim Levitsky
On Sun, 2020-06-28 at 12:54 +0200, Mauro Carvalho Chehab wrote: > Em Sun, 28 Jun 2020 11:37:08 +0300 > Maxim Levitsky escreveu: > > > On Thu, 2020-06-25 at 17:05 +0200, Mauro Carvalho Chehab wrote: > > > Em Thu, 25 Jun 2020 15:53:46 +0300 > > > Maxim Levitsky

Re: [PATCH] kconfig: qconf: Fix find on split mode

2020-06-28 Thread Maxim Levitsky
select another search result which happens not to update the 'menu', then both results are selected (that is, the old one doesn't clear its selection). Best regards, Maxim Levitsky > > Reported-by: Maxim Levitsky > Signed-off-by: Mauro Carvalho Chehab > --- > scripts/

Re: Search function in xconfig is partially broken after recent changes

2020-06-28 Thread Maxim Levitsky
On Thu, 2020-06-25 at 17:05 +0200, Mauro Carvalho Chehab wrote: > Em Thu, 25 Jun 2020 15:53:46 +0300 > Maxim Levitsky escreveu: > > > On Thu, 2020-06-25 at 13:17 +0200, Mauro Carvalho Chehab wrote: > > > Em Thu, 25 Jun 2020 12:59:15 +0200 > > >

Re: Search function in xconfig is partially broken after recent changes

2020-06-25 Thread Maxim Levitsky
On Thu, 2020-06-25 at 13:17 +0200, Mauro Carvalho Chehab wrote: > Em Thu, 25 Jun 2020 12:59:15 +0200 > Mauro Carvalho Chehab escreveu: > > > Hi Maxim, > > > > Em Thu, 25 Jun 2020 12:25:10 +0300 > > Maxim Levitsky escreveu: > > > > > Hi! > >

Search function in xconfig is partially broken after recent changes

2020-06-25 Thread Maxim Levitsky
CFLAGS and CXXFLAGS don't affect the build of xconfig. I tried to debug this a bit with mixed success, but I still don't see the smoking gun. Best regards, Maxim Levitsky

Kernel issues with Radeon Pro WX4100 and DP->HDMI dongles

2020-06-25 Thread Maxim Levitsky
en, but it might have been luck. On top of all this, I tried a 3rd dongle and it does appear to work flawlessly (no messages in dmesg). Best regards, Maxim Levitsky

KVM/RCU related warning on latest mainline kernel

2020-06-21 Thread Maxim Levitsky
use AMD's SVM ) I am using 'isolcpus=domain,managed_irq,28-31,60-63 nohz_full=28-31,60-63' Also worth noting is that I use '-overcommit cpu_pm=on' on the qemu command line for the guest to let it run all the time on the isolated cores. I can bisect/debug this further if you think that this is worth it. Best regards, Maxim Levitsky

Re: [PATCH] KVM: x86: do not pass poisoned hva to __kvm_set_memory_region

2020-06-11 Thread Maxim Levitsky
C regression. > > Fixes: 09d952c971a5 ("KVM: check userspace_addr for all memslots", 2020-06-01) > Reported-by: Maxim Levitsky > Signed-off-by: Paolo Bonzini > --- > arch/x86/kvm/x86.c | 7 +-- > 1 file changed, 1 insertion(+), 6 deletions(-) > > diff --git

Re: [PATCH] KVM: check userspace_addr for all memslots

2020-06-11 Thread Maxim Levitsky
On Thu, 2020-06-11 at 17:27 +0200, Paolo Bonzini wrote: > On 11/06/20 16:44, Maxim Levitsky wrote: > > On Mon, 2020-06-01 at 04:21 -0400, Paolo Bonzini wrote: > > > The userspace_addr alignment and range checks are not performed for > > > private > > > memory s

Re: [PATCH] KVM: check userspace_addr for all memslots

2020-06-11 Thread Maxim Levitsky
iscards the return value. I think that the fix for this would be to either make access_ok always return true for size==0, or to have __kvm_set_memory_region treat size==0 specially and skip that check for it. Best regards, Maxim Levitsky
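A rough sketch of the second option (skipping the userspace_addr checks when the memslot is being deleted, i.e. size == 0); field names follow the usual struct kvm_userspace_memory_region layout, and this is only an illustration, not the actual fix:

    	/* In __kvm_set_memory_region(): only validate userspace_addr when the
    	 * slot actually has a size; a zero-sized slot means deletion and its
    	 * hva may intentionally hold a poisoned/unusable value. */
    	if (mem->memory_size &&
    	    ((mem->userspace_addr & (PAGE_SIZE - 1)) ||
    	     !access_ok((void __user *)(unsigned long)mem->userspace_addr,
    			mem->memory_size)))
    		return -EINVAL;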

Re: [PATCH] KVM: SVM: fix calls to is_intercept

2020-06-09 Thread Maxim Levitsky
but the behavior looks > exactly > the same pre- and post-patch. > And if I understand correctly, that bug didn't affect anything I tested, because your recent patches started to avoid using the interrupt window unless L1 clears the interrupt intercept, which is rare. Look

Re: [PATCH 0/2] Fix issue with not starting nesting guests on my system

2020-05-27 Thread Maxim Levitsky
On Tue, 2020-05-26 at 18:13 -0700, Sean Christopherson wrote: > On Sat, May 23, 2020 at 07:14:53PM +0300, Maxim Levitsky wrote: > > On my AMD machine I noticed that I can't start any nested guests, > > because nested KVM (everything from master git branches) complains > > t

Re: [PATCH 0/2] Fix issue with not starting nesting guests on my system

2020-05-27 Thread Maxim Levitsky
On Tue, 2020-05-26 at 18:13 -0700, Sean Christopherson wrote: > On Sat, May 23, 2020 at 07:14:53PM +0300, Maxim Levitsky wrote: > > On my AMD machine I noticed that I can't start any nested guests, > > because nested KVM (everything from master git branches) complains > > t

Re: [PATCH 1/2] kvm/x86/vmx: enable X86_FEATURE_WAITPKG in KVM capabilities

2020-05-27 Thread Maxim Levitsky
On Tue, 2020-05-26 at 18:20 -0700, Sean Christopherson wrote: > On Sat, May 23, 2020 at 07:14:54PM +0300, Maxim Levitsky wrote: > > Even though we might not allow the guest to use > > WAITPKG's new instructions, we should tell KVM > > that the feature is supported by the ho

Re: [PATCH 2/2] kvm/x86: don't expose MSR_IA32_UMWAIT_CONTROL unconditionally

2020-05-27 Thread Maxim Levitsky
On Tue, 2020-05-26 at 18:21 -0700, Sean Christopherson wrote: > On Sat, May 23, 2020 at 07:14:55PM +0300, Maxim Levitsky wrote: > > This msr is only available when the host supports WAITPKG feature. > > > > This breaks a nested guest, if the L1 hypervisor is set to ig

[PATCH v3 0/2] Fix issue with not starting nesting guests on my system

2020-05-27 Thread Maxim Levitsky
in kvm/queue V3: addressed the review feedback and possibly made the commit messages a bit better Thanks! Best regards, Maxim Levitsky Maxim Levitsky (2): KVM: VMX: enable X86_FEATURE_WAITPKG in KVM capabilities KVM: x86: don't expose MSR_IA32_UMWAIT_CONTROL unconditionally arch

[PATCH v3 2/2] KVM: x86: don't expose MSR_IA32_UMWAIT_CONTROL unconditionally

2020-05-27 Thread Maxim Levitsky
to inform its qemu that MSR_IA32_UMWAIT_CONTROL is a supported MSR, but later on, when qemu attempts to set it in the host state, this fails since it is not supported. Fixes: 6e3ba4abcea56 (KVM: vmx: Emulate MSR IA32_UMWAIT_CONTROL) Signed-off-by: Maxim Levitsky Reviewed-by: Sean Christopherson
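The shape of the fix described above is to drop the MSR from the reported list when the host lacks WAITPKG. A sketch of that filter inside kvm_init_msr_list(), assuming the kvm_cpu_cap_has() helper; the final patch may differ in detail:

    	/* in kvm_init_msr_list(), inside the switch that builds msrs_to_save[] */
    	case MSR_IA32_UMWAIT_CONTROL:
    		/* only advertise the MSR if the host actually has WAITPKG */
    		if (!kvm_cpu_cap_has(X86_FEATURE_WAITPKG))
    			continue;
    		break;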

[PATCH v3 1/2] KVM: VMX: enable X86_FEATURE_WAITPKG in KVM capabilities

2020-05-27 Thread Maxim Levitsky
that indicate that we actually enable it for a guest. Fixes: e69e72faa3a07 (KVM: x86: Add support for user wait instructions) Suggested-by: Paolo Bonzini Signed-off-by: Maxim Levitsky Reviewed-by: Sean Christopherson Reviewed-by: Krish Sadhukhan --- arch/x86/kvm/vmx/vmx.c | 3 +++ 1 file changed, 3

Re: KVM broken after suspend in most recent kernels.

2020-05-27 Thread Maxim Levitsky
ctive VMs, KVM automatically re-enables VMX via > VMXON > after resume, and VMXON is what's faulting. > > Odds are good the firmware simply isn't initializing IA32_FEAT_CTL, > ever. > The kernel handles the boot-time case, but I (obviously) didn't > consider > the suspend case. I'll work on a patch. This is exactly what I was thinking as well. Best regards, Maxim Levitsky >

Re: KVM broken after suspend in most recent kernels.

2020-05-25 Thread Maxim Levitsky
lization > git bisect good 501444905fcb4166589fda99497c273ac5efc65e > # good: [b47ce1fed42eeb9ac8c07fcda6c795884826723d] x86/cpu: Detect VMX > features on Intel, Centaur and Zhaoxin CPUs > git bisect good b47ce1fed42eeb9ac8c07fcda6c795884826723d > # good: [167a4894c113ebe6a1f8b24fa6f9fca849c77f8a] x86/cpu: Set synthetic VMX > cpufeatures during init_ia32_feat_ctl() > git bisect good 167a4894c113ebe6a1f8b24fa6f9fca849c77f8a > # bad: [21bd3467a58ea51ccc0b1d9bcb86dadf1640a002] KVM: VMX: Drop > initialization of IA32_FEAT_CTL MSR > git bisect bad 21bd3467a58ea51ccc0b1d9bcb86dadf1640a002 > # good: [85c17291e2eb4903bf73e5d3f588f41dbcc6f115] x86/cpufeatures: Add flag > to track whether MSR IA32_FEAT_CTL is configured > git bisect good 85c17291e2eb4903bf73e5d3f588f41dbcc6f115 > # first bad commit: [21bd3467a58ea51ccc0b1d9bcb86dadf1640a002] KVM: VMX: Drop > initialization of IA32_FEAT_CTL MSR > > Regards, > Brad > When you say that KVM is broken after suspend, do you mean that you can't start new VMs after suspend, or that VMs that were running before suspend break? I see the latter on my machine. I have an AMD system though, so most likely this is another bug. Looking at the commit, I suspect that we indeed should set IA32_FEAT_CTL after resume from RAM, since suspend to RAM might count as a complete CPU reset. Best regards, Maxim Levitsky
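A hypothetical sketch of that suggestion (re-initializing MSR_IA32_FEAT_CTL on resume from S3 in case firmware leaves it unconfigured); the constant names are the msr-index.h ones, but the hook placement and policy here are assumptions, not the eventual fix:

    #include <asm/msr.h>
    #include <asm/msr-index.h>

    /* Hypothetical resume hook: if firmware left IA32_FEAT_CTL unlocked
     * (and therefore likely unconfigured), enable VMX outside SMX and lock
     * the MSR so that a later VMXON does not fault. */
    static void feat_ctl_reinit_on_resume(void)
    {
    	u64 msr;

    	if (rdmsrl_safe(MSR_IA32_FEAT_CTL, &msr))
    		return;                         /* MSR not implemented */

    	if (msr & FEAT_CTL_LOCKED)
    		return;                         /* firmware already configured it */

    	msr |= FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX | FEAT_CTL_LOCKED;
    	wrmsrl(MSR_IA32_FEAT_CTL, msr);
    }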

[PATCH 2/2] kvm/x86: don't expose MSR_IA32_UMWAIT_CONTROL unconditionally

2020-05-23 Thread Maxim Levitsky
: 6e3ba4abce KVM: vmx: Emulate MSR IA32_UMWAIT_CONTROL Signed-off-by: Maxim Levitsky --- arch/x86/kvm/x86.c | 4 1 file changed, 4 insertions(+) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index b226fb8abe41b..4752293312947 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -5316,6

[PATCH 1/2] kvm/x86/vmx: enable X86_FEATURE_WAITPKG in KVM capabilities

2020-05-23 Thread Maxim Levitsky
actually enable it for a guest. Fixes: e69e72faa3a0 KVM: x86: Add support for user wait instructions Suggested-by: Paolo Bonzini Signed-off-by: Maxim Levitsky --- arch/x86/kvm/vmx/vmx.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index

[PATCH 0/2] Fix issue with not starting nesting guests on my system

2020-05-23 Thread Maxim Levitsky
in kvm/queue Best regards, Maxim Levitsky Maxim Levitsky (2): kvm/x86/vmx: enable X86_FEATURE_WAITPKG in KVM capabilities kvm/x86: don't expose MSR_IA32_UMWAIT_CONTROL unconditionally arch/x86/kvm/vmx/vmx.c | 3 +++ arch/x86/kvm/x86.c | 4 2 files changed, 7 insertions

Re: [PATCH 2/2] kvm/x86: don't expose MSR_IA32_UMWAIT_CONTROL unconditionally

2020-05-20 Thread Maxim Levitsky
On Wed, 2020-05-20 at 23:05 +0200, Paolo Bonzini wrote: > On 20/05/20 18:07, Maxim Levitsky wrote: > > This msr is only available when the host supports WAITPKG feature. > > > > This breaks a nested guest, if the L1 hypervisor is set to ignore > > unknown msrs, because

Re: [PATCH 00/24] KVM: nSVM: event fixes and migration support

2020-05-20 Thread Maxim Levitsky
On Wed, 2020-05-20 at 22:42 +0200, Paolo Bonzini wrote: > On 20/05/20 21:24, Maxim Levitsky wrote: > > Patch 24 doesn't apply cleanly on top of kvm/queue, I applied it manually, > > due to the missing KVM_STATE_NESTED_MTF_PENDING bit > > > > Also patch 22 needs ALIGN_

Re: [PATCH 00/24] KVM: nSVM: event fixes and migration support

2020-05-20 Thread Maxim Levitsky
y cleanly on top of kvm/queue, I applied it manually, due to the missing KVM_STATE_NESTED_MTF_PENDING bit. Also patch 22 needs ALIGN_UP, which is not in mainline. Probably in linux-next? With these fixes, I don't see #DE exceptions on the nested guest I try to run; however it still hangs, right around the time it tries to access the PS/2 keyboard/mouse. Best regards, Maxim Levitsky

[PATCH 1/1] thunderbolt: add trivial .shutdown

2020-05-20 Thread Maxim Levitsky
this .shutdown pointer. Shutting a device down completely prior to shutdown is always a good idea IMHO to help with kexec, and this one-liner patch implements it. Signed-off-by: Maxim Levitsky --- drivers/thunderbolt/nhi.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/thunderbolt/nhi.c b

[PATCH 0/1] RFC: Make thunderbolt NHI driver work with kexec [RESEND]

2020-05-20 Thread Maxim Levitsky
regards, Maxim Levitsky Maxim Levitsky (1): thunderbolt: add trivial .shutdown drivers/thunderbolt/nhi.c | 1 + 1 file changed, 1 insertion(+) -- 2.25.4

Re: [PATCH 1/1] thunderbolt: add trivial .shutdown

2020-05-20 Thread Maxim Levitsky
On Wed, 2020-05-20 at 21:12 +0300, Maxim Levitsky wrote: > On my machine, a kexec with this driver loaded in the old kernel > causes a very long delay on boot in the kexec'ed kernel, > most likely due to unclean shutdown prior to that. > > Unloading thunderbolt driver prior to kex

[PATCH 0/1] RFC: Make thunderbolt NHI driver work with kexec

2020-05-20 Thread Maxim Levitsky
regards, Maxim Levitsky Maxim Levitsky (1): thunderbolt: add trivial .shutdown drivers/thunderbolt/nhi.c | 1 + 1 file changed, 1 insertion(+) -- 2.25.4

[PATCH 1/1] thunderbolt: add trivial .shutdown

2020-05-20 Thread Maxim Levitsky
this .shutdown pointer. Shutting a device down completely prior to shutdown is always a good idea IMHO to help with kexec, and this one-liner patch implements it. Signed-off-by: Maxim Levitsky --- drivers/thunderbolt/nhi.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/thunderbolt/nhi.c b

Re: [PATCH 2/2] kvm/x86: don't expose MSR_IA32_UMWAIT_CONTROL unconditionally

2020-05-20 Thread Maxim Levitsky
On Wed, 2020-05-20 at 19:15 +0200, Vitaly Kuznetsov wrote: > Maxim Levitsky writes: > > > On Wed, 2020-05-20 at 18:33 +0200, Vitaly Kuznetsov wrote: > > > Maxim Levitsky writes: > > > > > > > This msr is only available when the host supports WAI

Re: [PATCH 2/2] kvm/x86: don't expose MSR_IA32_UMWAIT_CONTROL unconditionally

2020-05-20 Thread Maxim Levitsky
On Wed, 2020-05-20 at 18:33 +0200, Vitaly Kuznetsov wrote: > Maxim Levitsky writes: > > > This msr is only available when the host supports WAITPKG feature. > > > > This breaks a nested guest, if the L1 hypervisor is set to ignore > > unknown msrs, becau

[PATCH 0/2] Fix breakage from adding MSR_IA32_UMWAIT_CONTROL

2020-05-20 Thread Maxim Levitsky
if a feature is supported, since that msr could in theory be assigned to something else on AMD, for example. Also, I included a cosmetic fix for an issue I found in the same function. Best regards, Maxim Levitsky Maxim Levitsky (2): kvm: cosmetic: remove wrong braces in kvm_init_msr_list

[PATCH 1/2] kvm: cosmetic: remove wrong braces in kvm_init_msr_list switch

2020-05-20 Thread Maxim Levitsky
I think these were added accidentally. Signed-off-by: Maxim Levitsky --- arch/x86/kvm/x86.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 471fccf7f8501..fe3a24fd6b263 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c

[PATCH 2/2] kvm/x86: don't expose MSR_IA32_UMWAIT_CONTROL unconditionally

2020-05-20 Thread Maxim Levitsky
: 6e3ba4abce KVM: vmx: Emulate MSR IA32_UMWAIT_CONTROL Signed-off-by: Maxim Levitsky --- arch/x86/kvm/x86.c | 4 1 file changed, 4 insertions(+) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index fe3a24fd6b263..9c507b32b1b77 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -5314,6

Re: [PATCH] KVM: x86: only do L1TF workaround on affected processors

2020-05-19 Thread Maxim Levitsky
On Tue, 2020-05-19 at 08:56 -0500, Tom Lendacky wrote: > On 5/19/20 5:59 AM, Maxim Levitsky wrote: > > On Tue, 2020-05-19 at 05:50 -0400, Paolo Bonzini wrote: > > > KVM stores the gfn in MMIO SPTEs as a caching optimization. These are > > > split > > >

Re: [PATCH] KVM: x86: only do L1TF workaround on affected processors

2020-05-19 Thread Maxim Levitsky
On Tue, 2020-05-19 at 13:59 +0300, Maxim Levitsky wrote: > On Tue, 2020-05-19 at 05:50 -0400, Paolo Bonzini wrote: > > KVM stores the gfn in MMIO SPTEs as a caching optimization. These are split > > in two parts, as in "[high 1 low]", to thwart any attem

Re: [PATCH] KVM: x86: only do L1TF workaround on affected processors

2020-05-19 Thread Maxim Levitsky
uest panic right at the very startup of the guest when npt=1. I tested this with many guest/host combinations, and even with the Fedora 5.3 kernel running on both host and guest this is the case. Tested-by: Maxim Levitsky Overall the patch makes sense to me, however I don't yet know this area

Re: [PATCH 0/2] Expose KVM API to Linux Kernel

2020-05-18 Thread Maxim Levitsky
On Mon, 2020-05-18 at 13:51 +0200, Paolo Bonzini wrote: > On 18/05/20 13:34, Maxim Levitsky wrote: > > > In high-performance configurations, most of the time virtio devices are > > > processed in another thread that polls on the virtio rings. In this > > > s

Re: [PATCH 0/2] Expose KVM API to Linux Kernel

2020-05-18 Thread Maxim Levitsky
replaced by a userspace driver, something I see a lot lately, and which was the grounds for rejection of my nvme-mdev proposal. Best regards, Maxim Levitsky

Re: [PATCH v2] KVM: SVM: Disable AVIC before setting V_IRQ

2020-05-10 Thread Maxim Levitsky
m, KVM_REQ_APICV_UPDATE, > > + except); > > + if (except) > > + kvm_vcpu_update_apicv(except); > > } > > EXPORT_SYMBOL_GPL(kvm_request_apicv_update); > > > > Queued, thanks. > > Paolo > I tested this today on top of kvm/queue: the patch that adds kvm_make_all_cpus_request_except plus this patch (the former needs a slight adjustment to apply). Best regards, Maxim Levitsky

Re: AVIC related warning in enable_irq_window

2020-05-05 Thread Maxim Levitsky
On Tue, 2020-05-05 at 14:55 +0700, Suravee Suthikulpanit wrote: > Paolo / Maxim, > > On 5/4/20 5:49 PM, Paolo Bonzini wrote: > > On 04/05/20 12:37, Suravee Suthikulpanit wrote: > > > On 5/4/20 4:25 PM, Paolo Bonzini wrote: > > > > On 04/05/20 11:13, Maxim Levit

Re: AVIC related warning in enable_irq_window

2020-05-04 Thread Maxim Levitsky
On Mon, 2020-05-04 at 15:46 +0700, Suravee Suthikulpanit wrote: > Paolo / Maxim, > > On 5/2/20 11:42 PM, Paolo Bonzini wrote: > > On 02/05/20 15:58, Maxim Levitsky wrote: > > > The AVIC is disabled by svm_toggle_avic_for_irq_window, which calls > > > kvm_reque

Re: AVIC related warning in enable_irq_window

2020-05-02 Thread Maxim Levitsky
On Sat, 2020-05-02 at 18:42 +0200, Paolo Bonzini wrote: > On 02/05/20 15:58, Maxim Levitsky wrote: > > The AVIC is disabled by svm_toggle_avic_for_irq_window, which calls > > kvm_request_apicv_update, which broadcasts the KVM_REQ_APICV_UPDATE vcpu > > request, > > h

AVIC related warning in enable_irq_window

2020-05-02 Thread Maxim Levitsky
. Best regards, Maxim Levitsky

Re: [PATCH] KVM: x86: Fixes posted interrupt check for IRQs delivery modes

2020-05-02 Thread Maxim Levitsky
ed interrupts on my 3970X. The low byte is the vector, while the high byte is the delivery mode, and the vector is masked off in kvm_set_msi_irq, thus indeed the delivery mode is in the high 8 bits. Reviewed-by: Maxim Levitsky Tested-by: Maxim Levitsky Best regards, Maxim Levitsky

Re: [PATCH] nvme: Add support for Apple 2018+ models

2019-07-15 Thread Maxim Levitsky
the spec allows for a non-NVM IO command set, for which the sq/cq entry sizes can be of any size, as indicated in SQES/CQES and set in CC.IOCQES/CC.IOSQES, but then most of the spec won't apply to it. Also FYI, the values in CC (IOCQES/IOSQES) are for I/O queues, which kind of implies that the admin queue should always use the 64/16 byte entries, although I haven't found any explicit mention of that. Best regards, Maxim Levitsky
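For reference, the relationship discussed above is that CC.IOSQES/CC.IOCQES carry log2 of the I/O queue entry sizes (bits 19:16 and 23:20 of CC per the spec), so the standard 64/16-byte NVM entries are programmed as 6 and 4. A small illustrative snippet; the shift macro names are local to the example:

    #include <linux/types.h>

    /* CC register layout per the NVMe spec: IOSQES in bits 19:16, IOCQES in
     * bits 23:20, both expressed as log2 of the entry size in bytes. */
    #define EXAMPLE_CC_IOSQES_SHIFT	16
    #define EXAMPLE_CC_IOCQES_SHIFT	20

    static inline u32 example_cc_entry_sizes(void)
    {
    	u32 cc = 0;

    	cc |= 6 << EXAMPLE_CC_IOSQES_SHIFT;	/* 2^6 = 64-byte SQ entries */
    	cc |= 4 << EXAMPLE_CC_IOCQES_SHIFT;	/* 2^4 = 16-byte CQ entries */
    	return cc;
    }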

Re: [RFC PATCH 8/8] svm: Allow AVIC with in-kernel irqchip mode

2019-06-15 Thread Maxim Levitsky
the state of this patch? I kind of stumbled on it accidentally, while trying to understand why AVIC is only enabled in the split irqchip mode. Best regards, Maxim Levitsky

Re: [PATCH v3 0/4] KVM: LAPIC: Implement Exitless Timer

2019-06-13 Thread Maxim Levitsky
xits, to an absolute minimum. I have one small question, just out of curiosity. Why do you require mwait in the guest to be enabled? If I understand it correctly, you say that when mwait in the guest is disabled, then the vmx preemption timer will be used, and thus it will handle the apic timer? Best regards, Maxim Levitsky

Re: [PATCH v3 0/4] KVM: LAPIC: Implement Exitless Timer

2019-06-13 Thread Maxim Levitsky
On Thu, 2019-06-13 at 16:25 +0800, Wanpeng Li wrote: > On Thu, 13 Jun 2019 at 15:59, Maxim Levitsky wrote: > > > > On Tue, 2019-06-11 at 20:17 +0800, Wanpeng Li wrote: > > > Dedicated instances are currently disturbed by unnecessary jitter due > > > to the emula

Re: [PATCH v2 00/10] RFC: NVME MDEV

2019-05-06 Thread Maxim Levitsky
through, it will have to dedicate a bunch of queues to the guest, configure them with the appropriate PASID, and then let the guest use these queues directly. Best regards, Maxim Levitsky

Re: [PATCH v2 06/10] nvme/core: add mdev interfaces

2019-05-06 Thread Maxim Levitsky
On Mon, 2019-05-06 at 11:31 +0300, Maxim Levitsky wrote: > On Sat, 2019-05-04 at 08:49 +0200, Christoph Hellwig wrote: > > On Fri, May 03, 2019 at 10:00:54PM +0300, Max Gurtovoy wrote: > > > Don't see a big difference of taking NVMe queue and namespace/partition > > > t

Re: [PATCH v2 06/10] nvme/core: add mdev interfaces

2019-05-06 Thread Maxim Levitsky
uplication but that can be worked on with some changes in the block layer. The last patch in my series was done with 2 purposes in mind: to measure the overhead, and to maybe utilize it as a fallback for non-NVMe devices. Best regards, Maxim Levitsky

Re: [PATCH v2 08/10] nvme/pci: implement the mdev external queue allocation interface

2019-05-06 Thread Maxim Levitsky
On Fri, 2019-05-03 at 06:09 -0600, Keith Busch wrote: > On Fri, May 03, 2019 at 12:20:17AM +0300, Maxim Levitsky wrote: > > On Thu, 2019-05-02 at 15:12 -0600, Heitke, Kenneth wrote: > > > On 5/2/2019 5:47 AM, Maxim Levitsky wrote: > > > > +static void nvme_ext_que

Re: [PATCH v2 08/10] nvme/pci: implement the mdev external queue allocation interface

2019-05-02 Thread Maxim Levitsky
On Thu, 2019-05-02 at 15:12 -0600, Heitke, Kenneth wrote: > > On 5/2/2019 5:47 AM, Maxim Levitsky wrote: > > Note that currently the number of hw queues reserved for mdev, > > has to be pre determined on module load. > > > > (I used to allocate the queues dynami

Re: [PATCH v2 08/10] nvme/pci: implement the mdev external queue allocation interface

2019-05-02 Thread Maxim Levitsky
On Thu, 2019-05-02 at 14:47 +0300, Maxim Levitsky wrote: > Note that currently the number of hw queues reserved for mdev > has to be predetermined on module load. > > (I used to allocate the queues dynamically on demand, but > recent changes to allocate polled/read queues made

[PATCH v2 08/10] nvme/pci: implement the mdev external queue allocation interface

2019-05-02 Thread Maxim Levitsky
Note that currently the number of hw queues reserved for mdev has to be predetermined on module load. (I used to allocate the queues dynamically on demand, but recent changes to allocate polled/read queues made this somewhat difficult, so I dropped this for now.) Signed-off-by: Maxim Levitsky

[PATCH v2 10/10] nvme/mdev - generic block IO code

2019-05-02 Thread Maxim Levitsky
Use the block layer (bio_submit) to pass through the IO to the nvme driver instead of the direct IO submission hooks. Currently that code supports only read/write, and it still assumes that we talk to an nvme driver. Signed-off-by: Maxim Levitsky --- drivers/nvme/mdev/Kconfig | 8
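As a rough illustration of what routing the IO through the block layer means, here is a sketch against the classic bio API of that era (not the mdev code itself):

    #include <linux/bio.h>
    #include <linux/blkdev.h>

    /* Build a bio against the namespace's block device and hand it to the
     * block layer instead of placing the command on an NVMe hardware queue. */
    static int sketch_read_one_page(struct block_device *bdev, sector_t sector,
    				struct page *page)
    {
    	struct bio *bio = bio_alloc(GFP_KERNEL, 1);	/* pre-5.18 style alloc */
    	int ret;

    	if (!bio)
    		return -ENOMEM;

    	bio_set_dev(bio, bdev);
    	bio->bi_iter.bi_sector = sector;
    	bio->bi_opf = REQ_OP_READ;
    	bio_add_page(bio, page, PAGE_SIZE, 0);

    	ret = submit_bio_wait(bio);	/* synchronous, for simplicity */
    	bio_put(bio);
    	return ret;
    }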

[PATCH v2 04/10] nvme/core: add NVME_CTRL_SUSPENDED controller state

2019-05-02 Thread Maxim Levitsky
This state will be used by a controller that is going to the suspended state, and will later be used by the mdev framework to detect this and flush its queues. Signed-off-by: Maxim Levitsky --- drivers/nvme/host/core.c | 15 +++ drivers/nvme/host/nvme.h | 1 + 2 files changed, 16 insertions

[PATCH v2 09/10] nvme/mdev - Add inline performance measurments

2019-05-02 Thread Maxim Levitsky
This code might not be needed to be merged in the final version Signed-off-by: Maxim Levitsky --- drivers/nvme/mdev/instance.c | 62 drivers/nvme/mdev/io.c | 21 drivers/nvme/mdev/irq.c | 6 drivers/nvme/mdev/priv.h | 13

[PATCH v2 06/10] nvme/core: add mdev interfaces

2019-05-02 Thread Maxim Levitsky
-by: Maxim Levitsky --- drivers/nvme/host/core.c | 125 ++- drivers/nvme/host/nvme.h | 54 +++-- 2 files changed, 172 insertions(+), 7 deletions(-) diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index 22db0c51a0bf..3c1b91089631 100644

[PATCH v2 03/10] nvme/core: add some more values from the spec

2019-05-02 Thread Maxim Levitsky
This adds a few defines from the spec that will be used in the nvme-mdev driver. Signed-off-by: Maxim Levitsky --- include/linux/nvme.h | 88 ++-- 1 file changed, 68 insertions(+), 20 deletions(-) diff --git a/include/linux/nvme.h b/include/linux/nvme.h

[PATCH v2 02/10] vfio/mdev: add .request callback

2019-05-02 Thread Maxim Levitsky
This will allow hotplug to be enabled for mediated devices. Signed-off-by: Maxim Levitsky --- drivers/vfio/mdev/vfio_mdev.c | 11 +++ include/linux/mdev.h | 4 2 files changed, 15 insertions(+) diff --git a/drivers/vfio/mdev/vfio_mdev.c b/drivers/vfio/mdev/vfio_mdev.c

[PATCH v2 00/10] RFC: NVME MDEV

2019-05-02 Thread Maxim Levitsky
y tested too. In addition to that, the virtual device was tested with a nested guest, by passing the virtual device to it using pci passthrough, the qemu userspace nvme driver, and spdk. Maxim Levitsky (10): vfio/mdev: add notifier for map events vfio/mdev: add .request callback nvme/core: add some

[PATCH v2 01/10] vfio/mdev: add notifier for map events

2019-05-02 Thread Maxim Levitsky
Allow a VFIO mdev device to listen to map events. This will allow an mdev driver to DMA map memory as soon as it gets added to the domain -- Signed-off-by: Maxim Levitsky --- drivers/vfio/vfio_iommu_type1.c | 97 + include/linux/vfio.h| 4 ++ 2 files

[PATCH v2 05/10] nvme/pci: use the NVME_CTRL_SUSPENDED state

2019-05-02 Thread Maxim Levitsky
When entering a low power state, the nvme driver will now inform the core with the NVME_CTRL_SUSPENDED state, which will allow the mdev driver to act on this information. Signed-off-by: Maxim Levitsky --- drivers/nvme/host/pci.c | 6 +- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git

Re: [PATCH] nvme: determine the number of IO queues

2019-04-18 Thread Maxim Levitsky
On Thu, 2019-04-18 at 14:21 +0800, Aaron Ma wrote: > On 4/18/19 1:33 AM, Maxim Levitsky wrote: > > On Wed, 2019-04-17 at 20:32 +0300, Maxim Levitsky wrote: > > > On Wed, 2019-04-17 at 22:12 +0800, Aaron Ma wrote: > > > > Some controllers support limited IO queues, wh

Re: [PATCH] nvme: determine the number of IO queues

2019-04-17 Thread Maxim Levitsky
On Wed, 2019-04-17 at 20:32 +0300, Maxim Levitsky wrote: > On Wed, 2019-04-17 at 22:12 +0800, Aaron Ma wrote: > > Some controllers support limited IO queues, when over set > > the number, it will return invalid field error. > > Then NVME will be removed by driver. > &g

Re: [PATCH] nvme: determine the number of IO queues

2019-04-17 Thread Maxim Levitsky
e controller should return an error of Invalid Field in Command." This implies that you can ask for any value and the controller must not respond with an error, but rather indicate how many queues it supports. Maybe it's better to add a quirk for the broken device, which needs this? Best regards, Maxim Levitsky
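A sketch of the negotiation described above, loosely modeled on what the host driver does; the nvme_set_features()/NVME_FEAT_NUM_QUEUES names are what the Linux driver uses, but treat the exact signature and placement as assumptions:

    	/* Assumes the driver-internal drivers/nvme/host/nvme.h for
    	 * struct nvme_ctrl and nvme_set_features(). Ask for 'count' I/O
    	 * SQ/CQ pairs (0-based in the command) and accept whatever smaller
    	 * number the controller reports back, instead of failing. */
    	static int sketch_set_queue_count(struct nvme_ctrl *ctrl, unsigned int *count)
    	{
    		u32 q = *count - 1;
    		u32 result = 0;
    		int ret;

    		ret = nvme_set_features(ctrl, NVME_FEAT_NUM_QUEUES,
    					q | (q << 16), NULL, 0, &result);
    		if (ret)
    			return ret;

    		/* completion dword0: allocated CQs in 31:16, SQs in 15:0 (0-based) */
    		*count = min(result & 0xffff, result >> 16) + 1;
    		return 0;
    	}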

Re: your mail

2019-04-08 Thread Maxim Levitsky
On Tue, 2019-03-19 at 09:22 -0600, Keith Busch wrote: > On Tue, Mar 19, 2019 at 04:41:07PM +0200, Maxim Levitsky wrote: > > -> Share the NVMe device between host and guest. > > Even in fully virtualized configurations, > > some partitions of nvme device

Re: [PATCH 0/9] RFC: NVME VFIO mediated device [BENCHMARKS]

2019-03-26 Thread Maxim Levitsky
On Tue, 2019-03-26 at 09:38 +, Stefan Hajnoczi wrote: > On Mon, Mar 25, 2019 at 08:52:32PM +0200, Maxim Levitsky wrote: > > Hi > > > > This is first round of benchmarks. > > > > The system is Intel(R) Xeon(R) Gold 6128 CPU @ 3.40GHz > > > &g

Re: [PATCH 0/9] RFC: NVME VFIO mediated device [BENCHMARKS]

2019-03-25 Thread Maxim Levitsky
didn't have much effect on the performance. Best regards, Maxim Levitsky

Re: [PATCH 8/8] vfio/mdev: Improve the create/remove sequence

2019-03-25 Thread Maxim Levitsky
struct kset *mdev_types_kset; > struct list_head type_list; > + /* Protects unregistration to wait until create/remove > + * are completed. > + */ > + struct srcu_struct unreg_srcu; > + struct mdev_parent __rcu *self; > }; > > struct mdev_device { > @@ -58,6 +63,6 @@ struct mdev_type { > void mdev_remove_sysfs_files(struct device *dev, struct mdev_type *type); > > int mdev_device_create(struct kobject *kobj, struct device *dev, uuid_le > uuid); > -int mdev_device_remove(struct device *dev, bool force_remove); > +int mdev_device_remove(struct device *dev); > > #endif /* MDEV_PRIVATE_H */ > diff --git a/drivers/vfio/mdev/mdev_sysfs.c b/drivers/vfio/mdev/mdev_sysfs.c > index c782fa9..68a8191 100644 > --- a/drivers/vfio/mdev/mdev_sysfs.c > +++ b/drivers/vfio/mdev/mdev_sysfs.c > @@ -236,11 +236,9 @@ static ssize_t remove_store(struct device *dev, struct > device_attribute *attr, > if (val && device_remove_file_self(dev, attr)) { > int ret; > > - ret = mdev_device_remove(dev, false); > - if (ret) { > - device_create_file(dev, attr); > + ret = mdev_device_remove(dev); > + if (ret) > return ret; > - } > } > > return count; The patch looks OK to me, especially looking at the code after the changes were applied. I might have missed something though due to the amount of changes done. I lightly tested the whole patch series with my mdev driver, and it seems to survive, but my testing doesn't test much of the error paths, so there's that. I'll keep this applied so if I notice any errors I'll let you know. If you could split this into a few patches, this would be even better, but anyway thanks a lot for this work! Reviewed-by: Maxim Levitsky Best regards, Maxim Levitsky

Re: [PATCH 7/8] vfio/mdev: Fix aborting mdev child device removal if one fails

2019-03-25 Thread Maxim Levitsky
mdev = to_mdev_device(dev); > - > - mutex_lock(&mdev_list_lock); > - list_for_each_entry(tmp, &mdev_list, next) { > - if (tmp == mdev) > - break; > - } > - > - if (tmp != mdev) { > - mutex_unlock(&mdev_list_lock); > - return -ENODEV; > - } > - > if (!mdev->active) { > mutex_unlock(&mdev_list_lock); > return -EAGAIN; Very nice catch and good refactoring. Reviewed-by: Maxim Levitsky Best regards, Maxim Levitsky

Re: [PATCH 6/8] vfio/mdev: Follow correct remove sequence

2019-03-25 Thread Maxim Levitsky
>kobj, mdev_device_attrs); > sysfs_remove_link(&dev->kobj, "mdev_type"); > sysfs_remove_link(type->devices_kobj, dev_name(dev)); > - sysfs_remove_files(&dev->kobj, mdev_device_attrs); > } I agree with that. Reviewed-by: Maxim Levitsky Best regards, Maxim Levitsky

Re: [PATCH 5/8] vfio/mdev: Avoid masking error code to EBUSY

2019-03-25 Thread Maxim Levitsky
e_remove) >*/ > ret = parent->ops->remove(mdev); > if (ret && !force_remove) > - return -EBUSY; > + return ret; > > sysfs_remove_groups(&mdev->dev.kobj, parent->ops->mdev_attr_groups); > return 0; Reviewed-by: Maxim Levitsky Best regards, Maxim Levitsky

Re: [PATCH 4/8] vfio/mdev: Drop redundant extern for exported symbols

2019-03-25 Thread Maxim Levitsky
knew/paid attention to that nice bit of C. Indeed 'extern' is already kind of a default for function declarations. Reviewed-by: Maxim Levitsky Best regards, Maxim Levitsky

Re: [PATCH 3/8] vfio/mdev: Removed unused kref

2019-03-25 Thread Maxim Levitsky
uuid; > void *driver_data; > - struct kref ref; > struct list_head next; > struct kobject *type_kobj; > bool active; When developing my nvme-mdev driver, I've seen that unused kref too. Dead code has to go. Reviewed-by: Maxim Levitsky Best regards, Maxim Levitsky

Re: [PATCH 2/8] vfio/mdev: Avoid release parent reference during error path

2019-03-25 Thread Maxim Levitsky
parent) { > + parent = NULL; > ret = -EEXIST; > goto add_dev_err; > } This is also clearly an issue. Reviewed-by: Maxim Levitsky Best regards, Maxim Levitsky

Re: [PATCH 1/8] vfio/mdev: Fix to not do put_device on device_register failure

2019-03-25 Thread Maxim Levitsky
set_name(&mdev->dev, "%pUl", uuid.b); > > ret = device_register(&mdev->dev); > - if (ret) { > - put_device(&mdev->dev); > + if (ret) > goto mdev_fail; > - } > > ret = mdev_device_create_ops(kobj, mdev); > if (ret) Very good catch! Thanks! Reviewed-by: Maxim Levitsky Best regards, Maxim Levitsky

Re: [PATCH] memstick: fix a potential NULL pointer dereference

2019-03-23 Thread Maxim Levitsky
msb->io_queue = alloc_ordered_workqueue("ms_block", WQ_MEM_RECLAIM); > > + if (!msb->io_queue) { > > + rc = -ENOMEM; > > + goto out_put_disk; > > + } > > + > > INIT_WORK(&msb->io_work, msb_io_work); > > sg_init_table(msb->prealloc_sg, MS_BLOCK_MAX_SEGS+1); > > > > -- > > 2.17.1 > > Looks OK to me! Reviewed-by: Maxim Levitsky Best regards, Maxim Levitsky

Re: [PATCH 2/2] MAINTAINERS: Add Ulf Hansson to the MEMORYSTICK section

2019-03-22 Thread Maxim Levitsky
regards, Maxim Levitsky PS: my work email is mlevi...@redhat.com On Fri, Mar 22, 2019 at 1:43 PM Ulf Hansson wrote: > > The amount of changes to the memorystick subsystem are limited as of today. > However, I have a couple of times been funneling changes through my MMC > tree

Re:

2019-03-22 Thread Maxim Levitsky
On Fri, 2019-03-22 at 07:54 +, Felipe Franciosi wrote: > > On Mar 21, 2019, at 5:04 PM, Maxim Levitsky wrote: > > > > On Thu, 2019-03-21 at 16:41 +, Felipe Franciosi wrote: > > > > On Mar 21, 2019, at 4:21 PM, Keith Busch wrote: > > > > > &

Re: your mail

2019-03-21 Thread Maxim Levitsky
On Thu, 2019-03-21 at 16:13 +, Stefan Hajnoczi wrote: > On Tue, Mar 19, 2019 at 04:41:07PM +0200, Maxim Levitsky wrote: > > Date: Tue, 19 Mar 2019 14:45:45 +0200 > > Subject: [PATCH 0/9] RFC: NVME VFIO mediated device > > > > Hi everyone! > > >
