Re: [PATCH 08/29] nVMX: Fix local_vcpus_link handling

2011-01-30 Thread Avi Kivity

On 01/27/2011 10:33 AM, Nadav Har'El wrote:

In VMX, before we bring down a CPU we must VMCLEAR all VMCSs loaded on it
because (at least in theory) the processor might not have written all of its
content back to memory. Since a patch from June 26, 2008, this is done using
a per-cpu vcpus_on_cpu linked list of vcpus loaded on each CPU.

The problem is that with nested VMX, we no longer have the concept of a
vcpu being loaded on a cpu: A vcpu has multiple VMCSs (one for L1, others for
each L2), and each of those may have been last loaded on a different cpu.

This trivial patch changes the code to keep on vcpus_on_cpu only L1 VMCSs.
This fixes crashes on L1 shutdown caused by incorrectly maintaining the linked
lists.

It is not a complete solution, though. It doesn't flush the inactive L1 or L2
VMCSs loaded on a CPU which is being shutdown. Doing this correctly will
probably require replacing the vcpu linked list by a linked list of saved_vmcs
objects (VMCS, cpu and launched), and it is left as a TODO.
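For illustration, the per-cpu bookkeeping described above can be modeled in plain user-space C. The names (vcpus_on_cpu, local_vcpus_link, vmclear_local_vcpus) follow the kernel code, but the list mechanics are simplified stand-ins for the kernel's list_head API, and the "VMCLEAR" here just unlinks and resets state rather than executing the instruction:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for the kernel's struct list_head. */
struct list_head { struct list_head *next, *prev; };

static void list_init(struct list_head *h) { h->next = h->prev = h; }

static void list_add(struct list_head *e, struct list_head *h)
{
	e->next = h->next; e->prev = h;
	h->next->prev = e; h->next = e;
}

static void list_del(struct list_head *e)
{
	e->prev->next = e->next; e->next->prev = e->prev;
	e->next = e->prev = NULL;
}

/* Model of a vcpu with the link used by the per-cpu list. */
struct vcpu_vmx {
	int cpu;			/* CPU the VMCS was last loaded on, -1 if none */
	int launched;
	struct list_head local_vcpus_link;
};

#define NCPUS 2
static struct list_head vcpus_on_cpu[NCPUS];	/* per-cpu list heads */

/* On vcpu load: remember the cpu and link into that cpu's list. */
static void vmx_vcpu_load(struct vcpu_vmx *vmx, int cpu)
{
	vmx->cpu = cpu;
	list_add(&vmx->local_vcpus_link, &vcpus_on_cpu[cpu]);
}

/*
 * On CPU shutdown: walk the per-cpu list and clear every VMCS still
 * loaded there (in the kernel this is where VMCLEAR would run).
 */
static void vmclear_local_vcpus(int cpu)
{
	while (vcpus_on_cpu[cpu].next != &vcpus_on_cpu[cpu]) {
		struct list_head *e = vcpus_on_cpu[cpu].next;
		struct vcpu_vmx *vmx = (struct vcpu_vmx *)
			((char *)e - offsetof(struct vcpu_vmx, local_vcpus_link));
		list_del(e);
		vmx->cpu = -1;
		vmx->launched = 0;
	}
}
```

The bug the patch addresses is visible in this model: with nesting, one vcpu owns several VMCSs, so a single local_vcpus_link per vcpu cannot track which list each VMCS belongs to.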



It looks like the right thing is a structure that represents the common 
things between the 01 and 02 vmcses:


- pointer to memory
- cpu
- linked list entries for vcpus_on_cpu (to be renamed vmcses_on_cpu)

You could then use vcpu_clear() in the previous patch.
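A sketch of the structure being suggested here, built from the three fields listed above; the type and field names are illustrative guesses, not code from the patch (the idea eventually landed upstream in a similar form as struct loaded_vmcs):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for the kernel's struct list_head. */
struct list_head { struct list_head *next, *prev; };

struct vmcs;	/* opaque: the VMCS region in memory */

/*
 * Illustrative sketch of the part shared by the L1 vmcs01 and each L2
 * vmcs02: the VMCS memory, the cpu it was last loaded on, the launched
 * state, and a link for the per-cpu vmcses_on_cpu list.
 */
struct saved_vmcs {
	struct vmcs *vmcs;		/* pointer to VMCS memory */
	int cpu;			/* -1 if not loaded on any cpu */
	int launched;
	struct list_head vmcses_on_cpu_link;
};
```

With this split, the per-cpu list tracks VMCSs rather than vcpus, so a CPU being shut down can VMCLEAR every VMCS loaded on it, including inactive L1 and L2 ones.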

--
error compiling committee.c: too many arguments to function

--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH 08/29] nVMX: Fix local_vcpus_link handling

2011-01-27 Thread Nadav Har'El
In VMX, before we bring down a CPU we must VMCLEAR all VMCSs loaded on it
because (at least in theory) the processor might not have written all of its
content back to memory. Since a patch from June 26, 2008, this is done using
a per-cpu vcpus_on_cpu linked list of vcpus loaded on each CPU.

The problem is that with nested VMX, we no longer have the concept of a
vcpu being loaded on a cpu: A vcpu has multiple VMCSs (one for L1, others for
each L2), and each of those may have been last loaded on a different cpu.

This trivial patch changes the code to keep on vcpus_on_cpu only L1 VMCSs.
This fixes crashes on L1 shutdown caused by incorrectly maintaining the linked
lists.

It is not a complete solution, though. It doesn't flush the inactive L1 or L2
VMCSs loaded on a CPU which is being shutdown. Doing this correctly will
probably require replacing the vcpu linked list by a linked list of saved_vmcs
objects (VMCS, cpu and launched), and it is left as a TODO.

Signed-off-by: Nadav Har'El n...@il.ibm.com
---
 arch/x86/kvm/vmx.c |   14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

--- .before/arch/x86/kvm/vmx.c  2011-01-26 18:06:03.0 +0200
+++ .after/arch/x86/kvm/vmx.c   2011-01-26 18:06:03.0 +0200
@@ -616,7 +616,9 @@ static void __vcpu_clear(void *arg)
 	vmcs_clear(vmx->vmcs);
 	if (per_cpu(current_vmcs, cpu) == vmx->vmcs)
 		per_cpu(current_vmcs, cpu) = NULL;
-	list_del(&vmx->local_vcpus_link);
+	/* TODO: currently, local_vcpus_link is just for L1 VMCSs */
+	if (!is_guest_mode(&vmx->vcpu))
+		list_del(&vmx->local_vcpus_link);
 	vmx->vcpu.cpu = -1;
 	vmx->launched = 0;
 }
@@ -1022,8 +1024,10 @@ static void vmx_vcpu_load(struct kvm_vcp
 
 	kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);
 	local_irq_disable();
-	list_add(&vmx->local_vcpus_link,
-		 &per_cpu(vcpus_on_cpu, cpu));
+	/* TODO: currently, local_vcpus_link is just for L1 VMCSs */
+	if (!is_guest_mode(&vmx->vcpu))
+		list_add(&vmx->local_vcpus_link,
+			 &per_cpu(vcpus_on_cpu, cpu));
 	local_irq_enable();
 
 	/*
@@ -1647,7 +1651,9 @@ static void vmclear_local_vcpus(void)
 
 	list_for_each_entry_safe(vmx, n, &per_cpu(vcpus_on_cpu, cpu),
 				 local_vcpus_link)
-		__vcpu_clear(vmx);
+		/* TODO: currently, local_vcpus_link is just for L1 VMCSs */
+		if (!is_guest_mode(&vmx->vcpu))
+			__vcpu_clear(vmx);
 }
 
 