>>> On 02.02.18 at 13:03, <andrew.coop...@citrix.com> wrote:
> On 07/12/17 14:04, Jan Beulich wrote:
>> @@ -8027,6 +8060,13 @@ x86_emulate(
>>          generate_exception_if(vex.w, EXC_UD);
>>          goto simd_0f_imm8_avx;
>>  
>> +    case X86EMUL_OPC_VEX_66(0x0f3a, 0x48): /* vpermil2ps $imm,{x,y}mm/mem,{x,y}mm,{x,y}mm,{x,y}mm */
>> +                                           /* vpermil2ps $imm,{x,y}mm,{x,y}mm/mem,{x,y}mm,{x,y}mm */
>> +    case X86EMUL_OPC_VEX_66(0x0f3a, 0x49): /* vpermil2pd $imm,{x,y}mm/mem,{x,y}mm,{x,y}mm,{x,y}mm */
>> +                                           /* vpermil2pd $imm,{x,y}mm,{x,y}mm/mem,{x,y}mm,{x,y}mm */
>> +        host_and_vcpu_must_have(xop);
>> +        goto simd_0f_imm8_ymm;
> 
> Is this correct?  VEX.W selects which operand may be the memory operand,
> and I don't see anything in the decode which copes, or anything in the
> stub which adjusts .W.

That's the nice thing here - by re-using the original instruction
in the stub (with only GPR numbers adjusted if necessary) we
simply don't care which of the operands is the memory one, as
long as the access width does not differ (and it doesn't).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
