On 27/04/2018 17:19, Jim Mattson wrote:
>
> If the default treatment of SMIs and SMM (see Section 34.14) is
> active, the VMX-preemption timer counts across an SMI to VMX non-root
> operation, subsequent execution in SMM, and the return from SMM via
> the RSM instruction. However, the timer can
On Fri, Apr 27, 2018 at 3:03 AM, Paolo Bonzini wrote:
> On 27/04/2018 00:28, Jim Mattson wrote:
>> The other thing that comes to mind is that there are some new fields
>> in the VMCS12 since I first implemented this. One potentially
>> troublesome field is the VMX preemption timer. If the current
On 27/04/2018 00:28, Jim Mattson wrote:
> The other thing that comes to mind is that there are some new fields
> in the VMCS12 since I first implemented this. One potentially
> troublesome field is the VMX preemption timer. If the current timer
> value is not saved on VM-exit, then it won't be
I'll send out a patch to deal with nested_run_pending.
The other thing that comes to mind is that there are some new fields
in the VMCS12 since I first implemented this. One potentially
troublesome field is the VMX preemption timer. If the current timer
value is not saved on VM-exit, then it
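
For context: the vmcs12 preemption-timer field is only written back on an emulated L2->L1 exit when the "save VMX-preemption timer value" VM-exit control (VM_EXIT_SAVE_VMX_PREEMPTION_TIMER in Linux) is set; otherwise the remaining count has to be reconstructed from the TSC deadline L0 is tracking. Below is a standalone toy model of that reconstruction, not KVM code: per the SDM the timer counts down by one each time bit X of the TSC toggles, where X is IA32_VMX_MISC[4:0], so the remaining ticks are roughly the TSC distance to the deadline shifted right by X. All names here are illustrative.

/*
 * Standalone toy model (not KVM code) of reconstructing the remaining
 * VMX-preemption timer value from a TSC deadline.  The timer ticks at the
 * TSC rate shifted right by IA32_VMX_MISC[4:0].
 */
#include <stdint.h>
#include <stdio.h>

static uint32_t remaining_preemption_timer(uint64_t now_tsc,
					   uint64_t deadline_tsc,
					   unsigned int misc_shift)
{
	if (now_tsc >= deadline_tsc)
		return 0;			/* timer already expired */

	uint64_t ticks = (deadline_tsc - now_tsc) >> misc_shift;

	/* The VMCS field is 32 bits wide; clamp just in case. */
	return ticks > UINT32_MAX ? UINT32_MAX : (uint32_t)ticks;
}

int main(void)
{
	/* Example: deadline 1,000,000 TSC cycles away, shift of 5 -> 31250 ticks. */
	printf("%u\n", remaining_preemption_timer(100000000ULL, 101000000ULL, 5));
	return 0;
}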
On Mon, 2018-04-16 at 09:22 -0700, Jim Mattson wrote:
> On Thu, Apr 12, 2018 at 8:12 AM, KarimAllah Ahmed wrote:
>
> >
> > v2 -> v3:
> > - Remove the forced VMExit from L2 after reading the kvm_state. The actual
> > problem is solved.
> > - Rebase again!
> > - Set nested_run_pending during
On Thu, Apr 12, 2018 at 8:12 AM, KarimAllah Ahmed wrote:
> v2 -> v3:
> - Remove the forced VMExit from L2 after reading the kvm_state. The actual
> problem is solved.
> - Rebase again!
> - Set nested_run_pending during restore (not sure if it makes sense yet or
> not).
This doesn't actually
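
In the nested-state interface that eventually grew out of this series, the "VMLAUNCH/VMRESUME emulated but first hardware VM-entry into L2 not yet performed" condition is exactly what nested_run_pending captures, and it travels between kernel and userspace as a flag in the state blob. Below is a sketch of the restore side, based on the uapi as later merged (struct kvm_nested_state, KVM_SET_NESTED_STATE, KVM_STATE_NESTED_RUN_PENDING in linux/kvm.h) rather than on this v3 posting, which may differ in detail; "state" is a blob previously obtained with KVM_GET_NESTED_STATE on the source (see the save-side sketch further down).

/*
 * Restore-side sketch, based on the KVM_SET_NESTED_STATE uapi as later
 * merged; the v3 series discussed in this thread may differ in detail.
 * "state" is the blob obtained from KVM_GET_NESTED_STATE on the source.
 */
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int restore_nested_state(int vcpu_fd, struct kvm_nested_state *state)
{
	/*
	 * KVM_STATE_NESTED_GUEST_MODE: the vCPU was running L2 when saved.
	 * KVM_STATE_NESTED_RUN_PENDING: a VMLAUNCH/VMRESUME had been emulated
	 * but the first hardware VM-entry into L2 had not happened yet; KVM
	 * re-creates nested_run_pending from this flag on restore.
	 */
	if (state->flags & KVM_STATE_NESTED_RUN_PENDING)
		fprintf(stderr, "nested VM-entry was still pending at save time\n");

	if (ioctl(vcpu_fd, KVM_SET_NESTED_STATE, state) < 0) {
		perror("KVM_SET_NESTED_STATE");
		return -1;
	}
	return 0;
}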
On Sat, 2018-04-14 at 15:56, Raslan, KarimAllah wrote:
> On Thu, 2018-04-12 at 17:12 +0200, KarimAllah Ahmed wrote:
> >
> > From: Jim Mattson
> >
> > For nested virtualization, L0 KVM is managing a bit of state for L2 guests;
> > this state cannot be captured through the currently
On Thu, 2018-04-12 at 17:12 +0200, KarimAllah Ahmed wrote:
> From: Jim Mattson
>
> For nested virtualization, L0 KVM is managing a bit of state for L2 guests;
> this state cannot be captured through the currently available IOCTLs. In
> fact the state captured through all of these IOCTLs is
On 12/04/2018 17:12, KarimAllah Ahmed wrote:
> From: Jim Mattson
>
> For nested virtualization, L0 KVM is managing a bit of state for L2 guests;
> this state cannot be captured through the currently available IOCTLs. In
> fact the state captured through all of these IOCTLs is usually a mix of L1
From: Jim Mattson
For nested virtualization, L0 KVM is managing a bit of state for L2 guests;
this state cannot be captured through the currently available IOCTLs. In
fact the state captured through all of these IOCTLs is usually a mix of L1
and L2 state. It is also dependent on whether the L2
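
This is the motivation for what eventually became KVM_GET_NESTED_STATE / KVM_SET_NESTED_STATE. For reference, below is roughly how userspace drives the save side with the uapi as later merged (the layout posted in this v3 may differ): query the maximum blob size via KVM_CHECK_EXTENSION(KVM_CAP_NESTED_STATE), fetch the opaque state with KVM_GET_NESTED_STATE, and migrate it unmodified.

/*
 * Save-side sketch, based on the KVM_GET_NESTED_STATE uapi as later merged
 * (linux/kvm.h); the layout posted in this v3 may differ.  kvm_fd is the
 * /dev/kvm fd, vcpu_fd the vCPU fd.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static struct kvm_nested_state *save_nested_state(int kvm_fd, int vcpu_fd)
{
	/* KVM_CHECK_EXTENSION(KVM_CAP_NESTED_STATE) reports the maximum blob size. */
	int max_size = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_NESTED_STATE);
	if (max_size <= 0)
		return NULL;			/* nested state not supported here */

	struct kvm_nested_state *state = calloc(1, max_size);
	if (!state)
		return NULL;

	state->size = max_size;			/* room available for KVM to fill */
	if (ioctl(vcpu_fd, KVM_GET_NESTED_STATE, state) < 0) {
		perror("KVM_GET_NESTED_STATE");
		free(state);
		return NULL;
	}
	return state;				/* opaque; hand back via KVM_SET_NESTED_STATE */
}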