On Mon, Aug 02, 2010 at 09:55:42AM -0400, Avi Kivity wrote:
>   On 08/02/2010 04:33 PM, Joerg Roedel wrote:
> > +static void test_mode_switch(struct test *test)
> > +{
> > +    asm volatile(" cli\n"
> > +            "      ljmp *1f\n" /* jump to 32-bit code segment */
> > +            "1:\n"
> > +            "      .long 2f\n"
> > +            "      .long 40\n"
> > +            ".code32\n"
> > +            "2:\n"
> > +            "      movl %%cr0, %%eax\n"
> > +            "      btcl  $31, %%eax\n" /* clear PG */
> > +            "      movl %%eax, %%cr0\n"
> > +            "      movl $0xc0000080, %%ecx\n" /* EFER */
> > +            "      rdmsr\n"
> > +            "      btcl $8, %%eax\n" /* clear LME */
> > +            "      wrmsr\n"
> > +            "      movl %%cr4, %%eax\n"
> > +            "      btcl $5, %%eax\n" /* clear PAE */
> > +            "      movl %%eax, %%cr4\n"
> > +            "      movw $64, %%ax\n"
> > +            "      movw %%ax, %%ds\n"
> > +            "      ljmpl $56, $3f\n" /* jump to 16 bit protected-mode */
> > +            ".code16\n"
> > +            "3:\n"
> > +            "      movl %%cr0, %%eax\n"
> > +            "      btcl $0, %%eax\n" /* clear PE  */
> > +            "      movl %%eax, %%cr0\n"
> > +            "      ljmpl $0, $4f\n"   /* jump to real-mode */
> > +            "4:\n"
> > +            "      vmmcall\n"
> > +            "      movl %%cr0, %%eax\n"
> > +            "      btsl $0, %%eax\n" /* set PE  */
> > +            "      movl %%eax, %%cr0\n"
> > +            "      ljmpl $40, $5f\n" /* back to protected mode */
> > +            ".code32\n"
> > +            "5:\n"
> > +            "      movl %%cr4, %%eax\n"
> > +            "      btsl $5, %%eax\n" /* set PAE */
> > +            "      movl %%eax, %%cr4\n"
> > +            "      movl $0xc0000080, %%ecx\n" /* EFER */
> > +            "      rdmsr\n"
> > +            "      btsl $8, %%eax\n" /* set LME */
> > +            "      wrmsr\n"
> > +            "      movl %%cr0, %%eax\n"
> > +            "      btsl  $31, %%eax\n" /* set PG */
> > +            "      movl %%eax, %%cr0\n"
> > +            "      ljmpl $8, $6f\n"    /* back to long mode */
> > +            ".code64\n\t"
> > +            "6:\n"
> > +            "      vmmcall\n"
> > +            ::: "rax", "rbx", "rcx", "rdx", "memory");
> > +}
> > +
> 
> What is this testing exactly?  There is no svm function directly 
> associated with mode switch.  In fact, most L1s will intercept cr and 
> efer access and emulate the mode switch, rather than letting L2 perform 
> the mode switch directly.

This tests the failure case fixed by the nested-svm EFER patch I
submitted last week. The sequence above (which switches from long mode
down to real mode and back up to long mode) fails without that patch.

        Joerg

-- 
AMD Operating System Research Center

Advanced Micro Devices GmbH Einsteinring 24 85609 Dornach
General Managers: Alberto Bozzo, Andrew Bowd
Registration: Dornach, Landkr. Muenchen; Registerger. Muenchen, HRB Nr. 43632
