Re: [PATCH] KVM: x86: fix RSM into 64-bit protected mode, round 2

2015-10-31 Thread Laszlo Ersek
On 10/30/15 16:40, Radim Krčmář wrote:
> 2015-10-26 17:32+0100, Paolo Bonzini:
>> On 26/10/2015 16:43, Laszlo Ersek wrote:
>>>> The code would be cleaner if we had a different approach, but this works
>>>> too and is safer for stable. In case you prefer to leave the rewrite for
>>>> a future victim,
>>>
>>> It's hard to express how much I prefer that.
>>
>> Radim, if you want to have a try go ahead since I cannot apply the patch
>> until next Monday.
> 
> The future I originally had in mind was more hoverboardy, but a series
> just landed, "KVM: x86: simplify RSM into 64-bit protected mode".
> 
> Laszlo, I'd be grateful if you could check that it works.
> 

I'm tagging it for next week, thanks.
Laszlo


Re: [PATCH] KVM: x86: fix RSM into 64-bit protected mode, round 2

2015-10-30 Thread Radim Krčmář
2015-10-26 17:32+0100, Paolo Bonzini:
> On 26/10/2015 16:43, Laszlo Ersek wrote:
>>> The code would be cleaner if we had a different approach, but this works
>>> too and is safer for stable. In case you prefer to leave the rewrite for
>>> a future victim,
>> 
>> It's hard to express how much I prefer that.
> 
> Radim, if you want to have a try go ahead since I cannot apply the patch
> until next Monday.

The future I originally had in mind was more hoverboardy, but a series
just landed, "KVM: x86: simplify RSM into 64-bit protected mode".

Laszlo, I'd be grateful if you could check that it works.


Re: [PATCH] KVM: x86: fix RSM into 64-bit protected mode, round 2

2015-10-26 Thread Radim Krčmář
2015-10-23 23:43+0200, Laszlo Ersek:
> Commit b10d92a54dac ("KVM: x86: fix RSM into 64-bit protected mode")
> reordered the rsm_load_seg_64() and rsm_enter_protected_mode() calls,
> relative to each other. The argument that said commit made was correct;
> however, putting rsm_enter_protected_mode() first wholesale violated the
> following (correct) invariant from em_rsm():
> 
>  * Get back to real mode, to prepare a safe state in which to load
>  * CR0/CR3/CR4/EFER.  Also this will ensure that addresses passed
>  * to read_std/write_std are not virtual.

Nice catch.

> Namely, rsm_enter_protected_mode() may re-enable paging, *after* which
> 
>   rsm_load_seg_64()
> GET_SMSTATE()
>   read_std()
> 
> will try to interpret the (smbase + offset) address as a virtual one. This
> will result in unexpected page faults being injected to the guest in
> response to the RSM instruction.

I think this is a good time to introduce the read_phys helper, which we
wanted to avoid with that assumption.
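
To make that concrete, here is a sketch of what such a helper could look
like; the name, signature, and error mapping below are my assumptions for
illustration, not necessarily what an actual implementation would do. The
point is that it reads guest-physical memory, so the result no longer
depends on whether paging is enabled at the time of the call:

static int read_phys(struct x86_emulate_ctxt *ctxt, unsigned long addr,
		     void *val, unsigned int bytes)
{
	struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);

	/*
	 * kvm_vcpu_read_guest() takes a guest-physical address, so no
	 * linear->physical translation happens here, unlike read_std().
	 */
	if (kvm_vcpu_read_guest(vcpu, addr, val, bytes) < 0)
		return X86EMUL_IO_NEEDED;
	return X86EMUL_CONTINUE;
}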

> Split rsm_load_seg_64() in two parts:
> 
> - The first part, rsm_stash_seg_64(), shall call GET_SMSTATE() while in
>   real mode, and save the relevant state from SMRAM into an array local to
>   rsm_load_state_64().
> 
> - The second part, rsm_load_seg_64(), shall occur after entering protected
>   mode, but the segment details shall come from the local array, not the
>   guest's SMRAM.
> 
> Fixes: b10d92a54dac25a6152f1aa1ffc95c12908035ce
> Cc: Paolo Bonzini 
> Cc: Radim Krčmář 
> Cc: Jordan Justen 
> Cc: Michael Kinney 
> Cc: sta...@vger.kernel.org
> Signed-off-by: Laszlo Ersek 
> ---

The code would be cleaner if we had a different approach, but this works
too and is safer for stable. In case you prefer to leave the rewrite for
a future victim,

Reviewed-by: Radim Krčmář 


Re: [PATCH] KVM: x86: fix RSM into 64-bit protected mode, round 2

2015-10-26 Thread Laszlo Ersek
On 10/26/15 16:37, Radim Krčmář wrote:
> 2015-10-23 23:43+0200, Laszlo Ersek:
>> Commit b10d92a54dac ("KVM: x86: fix RSM into 64-bit protected mode")
>> reordered the rsm_load_seg_64() and rsm_enter_protected_mode() calls,
>> relative to each other. The argument that said commit made was correct;
>> however, putting rsm_enter_protected_mode() first wholesale violated the
>> following (correct) invariant from em_rsm():
>>
>>  * Get back to real mode, to prepare a safe state in which to load
>>  * CR0/CR3/CR4/EFER.  Also this will ensure that addresses passed
>>  * to read_std/write_std are not virtual.
> 
> Nice catch.
> 
>> Namely, rsm_enter_protected_mode() may re-enable paging, *after* which
>>
>>   rsm_load_seg_64()
>> GET_SMSTATE()
>>   read_std()
>>
>> will try to interpret the (smbase + offset) address as a virtual one. This
>> will result in unexpected page faults being injected to the guest in
>> response to the RSM instruction.
> 
> I think this is a good time to introduce the read_phys helper, which we
> wanted to avoid with that assumption.
> 
>> Split rsm_load_seg_64() in two parts:
>>
>> - The first part, rsm_stash_seg_64(), shall call GET_SMSTATE() while in
>>   real mode, and save the relevant state from SMRAM into an array local to
>>   rsm_load_state_64().
>>
>> - The second part, rsm_load_seg_64(), shall occur after entering protected
>>   mode, but the segment details shall come from the local array, not the
>>   guest's SMRAM.
>>
>> Fixes: b10d92a54dac25a6152f1aa1ffc95c12908035ce
>> Cc: Paolo Bonzini 
>> Cc: Radim Krčmář 
>> Cc: Jordan Justen 
>> Cc: Michael Kinney 
>> Cc: sta...@vger.kernel.org
>> Signed-off-by: Laszlo Ersek 
>> ---
> 
> The code would be cleaner if we had a different approach, but this works
> too and is safer for stable. In case you prefer to leave the rewrite for
> a future victim,

It's hard to express how much I prefer that.

> 
> Reviewed-by: Radim Krčmář 
> 

Thank you!
Laszlo


Re: [PATCH] KVM: x86: fix RSM into 64-bit protected mode, round 2

2015-10-26 Thread Paolo Bonzini


On 26/10/2015 16:43, Laszlo Ersek wrote:
> > The code would be cleaner if we had a different approach, but this works
> > too and is safer for stable. In case you prefer to leave the rewrite for
> > a future victim,
> 
> It's hard to express how much I prefer that.

Radim, if you want to have a try go ahead since I cannot apply the patch
until next Monday.

Paolo


[PATCH] KVM: x86: fix RSM into 64-bit protected mode, round 2

2015-10-23 Thread Laszlo Ersek
Commit b10d92a54dac ("KVM: x86: fix RSM into 64-bit protected mode")
reordered the rsm_load_seg_64() and rsm_enter_protected_mode() calls,
relative to each other. The argument that said commit made was correct;
however, putting rsm_enter_protected_mode() first wholesale violated the
following (correct) invariant from em_rsm():

 * Get back to real mode, to prepare a safe state in which to load
 * CR0/CR3/CR4/EFER.  Also this will ensure that addresses passed
 * to read_std/write_std are not virtual.

Namely, rsm_enter_protected_mode() may re-enable paging, *after* which

  rsm_load_seg_64()
GET_SMSTATE()
  read_std()

will try to interpret the (smbase + offset) address as a virtual one. This
will result in unexpected page faults being injected to the guest in
response to the RSM instruction.
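
For reference, GET_SMSTATE() is essentially a wrapper around
ctxt->ops->read_std(); the sketch below is approximate, from memory, rather
than quoted verbatim from emulate.c:

#define GET_SMSTATE(type, smbase, offset)				  \
	({								  \
	 type __val;							  \
	 /* read_std() treats (smbase + offset) as a linear address */	  \
	 int r = ctxt->ops->read_std(ctxt, smbase + offset, &__val,	  \
				     sizeof(__val), NULL);		  \
	 if (r != X86EMUL_CONTINUE)					  \
		 return X86EMUL_UNHANDLEABLE;				  \
	 __val;								  \
	})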

Split rsm_load_seg_64() in two parts:

- The first part, rsm_stash_seg_64(), shall call GET_SMSTATE() while in
  real mode, and save the relevant state from SMRAM into an array local to
  rsm_load_state_64().

- The second part, rsm_load_seg_64(), shall occur after entering protected
  mode, but the segment details shall come from the local array, not the
  guest's SMRAM.

Fixes: b10d92a54dac25a6152f1aa1ffc95c12908035ce
Cc: Paolo Bonzini 
Cc: Radim Krčmář 
Cc: Jordan Justen 
Cc: Michael Kinney 
Cc: sta...@vger.kernel.org
Signed-off-by: Laszlo Ersek 
---
 arch/x86/kvm/emulate.c | 37 ++++++++++++++++++++++++++++++-------
 1 file changed, 30 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 9da95b9..25e16b6 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -2311,7 +2311,16 @@ static int rsm_load_seg_32(struct x86_emulate_ctxt *ctxt, u64 smbase, int n)
return X86EMUL_CONTINUE;
 }
 
-static int rsm_load_seg_64(struct x86_emulate_ctxt *ctxt, u64 smbase, int n)
+struct rsm_stashed_seg_64 {
+   u16 selector;
+   struct desc_struct desc;
+   u32 base3;
+};
+
+static int rsm_stash_seg_64(struct x86_emulate_ctxt *ctxt,
+   struct rsm_stashed_seg_64 *stash,
+   u64 smbase,
+   int n)
 {
struct desc_struct desc;
int offset;
@@ -2326,10 +2335,20 @@ static int rsm_load_seg_64(struct x86_emulate_ctxt *ctxt, u64 smbase, int n)
	set_desc_base(&desc,      GET_SMSTATE(u32, smbase, offset + 8));
	base3 =                   GET_SMSTATE(u32, smbase, offset + 12);
 
-	ctxt->ops->set_segment(ctxt, selector, &desc, base3, n);
+   stash[n].selector = selector;
+   stash[n].desc = desc;
+   stash[n].base3 = base3;
return X86EMUL_CONTINUE;
 }
 
+static inline void rsm_load_seg_64(struct x86_emulate_ctxt *ctxt,
+  struct rsm_stashed_seg_64 *stash,
+  int n)
+{
+	ctxt->ops->set_segment(ctxt, stash[n].selector, &stash[n].desc,
+			       stash[n].base3, n);
+}
+
 static int rsm_enter_protected_mode(struct x86_emulate_ctxt *ctxt,
 u64 cr0, u64 cr4)
 {
@@ -2419,6 +2438,7 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt, u64 smbase)
u32 base3;
u16 selector;
int i, r;
+   struct rsm_stashed_seg_64 stash[6];
 
for (i = 0; i < 16; i++)
*reg_write(ctxt, i) = GET_SMSTATE(u64, smbase, 0x7ff8 - i * 8);
@@ -2460,15 +2480,18 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt, u64 smbase)
	dt.address =              GET_SMSTATE(u64, smbase, 0x7e68);
	ctxt->ops->set_gdt(ctxt, &dt);
 
+   for (i = 0; i < ARRAY_SIZE(stash); i++) {
+   r = rsm_stash_seg_64(ctxt, stash, smbase, i);
+   if (r != X86EMUL_CONTINUE)
+   return r;
+   }
+
r = rsm_enter_protected_mode(ctxt, cr0, cr4);
if (r != X86EMUL_CONTINUE)
return r;
 
-   for (i = 0; i < 6; i++) {
-   r = rsm_load_seg_64(ctxt, smbase, i);
-   if (r != X86EMUL_CONTINUE)
-   return r;
-   }
+   for (i = 0; i < ARRAY_SIZE(stash); i++)
+   rsm_load_seg_64(ctxt, stash, i);
 
return X86EMUL_CONTINUE;
 }
-- 
1.8.3.1



[PATCH] KVM: x86: fix RSM into 64-bit protected mode

2015-10-14 Thread Paolo Bonzini
In order to get into 64-bit protected mode, CS.L must be 0.  This
is always the case when executing RSM, so it is enough to load the
segments after CR0 and CR4.

Fixes: 660a5d517aaab9187f93854425c4c63f4a09195c
Cc: sta...@vger.kernel.org
Signed-off-by: Paolo Bonzini 
---
 arch/x86/kvm/emulate.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index e7a4fde5d631..2392541a96e6 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -2418,7 +2418,7 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt, u64 smbase)
u64 val, cr0, cr4;
u32 base3;
u16 selector;
-   int i;
+   int i, r;
 
for (i = 0; i < 16; i++)
*reg_write(ctxt, i) = GET_SMSTATE(u64, smbase, 0x7ff8 - i * 8);
@@ -2460,13 +2460,17 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt, u64 smbase)
	dt.address =              GET_SMSTATE(u64, smbase, 0x7e68);
	ctxt->ops->set_gdt(ctxt, &dt);
 
+   r = rsm_enter_protected_mode(ctxt, cr0, cr4);
+   if (r != X86EMUL_CONTINUE)
+   return r;
+
for (i = 0; i < 6; i++) {
-   int r = rsm_load_seg_64(ctxt, smbase, i);
+   r = rsm_load_seg_64(ctxt, smbase, i);
if (r != X86EMUL_CONTINUE)
return r;
}
 
-   return rsm_enter_protected_mode(ctxt, cr0, cr4);
+   return X86EMUL_CONTINUE;
 }
 
 static int em_rsm(struct x86_emulate_ctxt *ctxt)
-- 
2.5.0
