On 14/01/2020 17:02, Jan Beulich wrote:
> On 13.01.2020 18:50, Andrew Cooper wrote:
>> --- a/xen/arch/x86/boot/head.S
>> +++ b/xen/arch/x86/boot/head.S
>> @@ -687,14 +687,19 @@ trampoline_setup:
>>           * handling/walking), and identity map Xen into bootmap (needed for
>>           * the transition into long mode), using 2M superpages.
>>           */
>> -        lea     sym_esi(start),%ebx
>> -        lea     (1<<L2_PAGETABLE_SHIFT)*7+(PAGE_HYPERVISOR_RWX|_PAGE_PSE)(%ebx),%eax
>> -        shr     $(L2_PAGETABLE_SHIFT-3),%ebx
>> -        mov     $8,%ecx
>> -1:      mov     %eax,sym_fs(l2_bootmap)-8(%ebx,%ecx,8)
>> -        mov     %eax,sym_fs(l2_directmap)-8(%ebx,%ecx,8)
>> -        sub     $(1<<L2_PAGETABLE_SHIFT),%eax
>> -        loop    1b
>> +        lea     sym_esi(_start), %ecx
>> +        lea     -1 + sym_esi(_end), %edx
> This looks pretty odd - does
>
>         lea     sym_esi(_end) - 1, %edx
>
> not work?

No:

head.S: Assembler messages:
head.S:521: Error: junk `(%esi)-1' after expression

but it is not at all surprising when you expand the macro:

lea (_end - start)(%esi) - 1, %edx

The expression for the displacement ends up split across both sides of
the SIB.
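
For reference, a rough sketch of the macros involved (paraphrased to match
the expansion above; the real definitions live in head.S):

#define sym_offs(sym)   ((sym) - start)         /* link-time offset of sym */
#define sym_esi(sym)    sym_offs(sym)(%esi)     /* disp(%esi) addressing   */

so "sym_esi(_end) - 1" leaves the "- 1" trailing after the "(%esi)" part,
which gas rejects as junk, whereas "-1 + sym_esi(_end)" keeps the whole
displacement expression in front of the SIB.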

>
>> +        lea     _PAGE_PSE + PAGE_HYPERVISOR_RWX(%ecx), %eax /* PTE to write. */
>> +        shr     $L2_PAGETABLE_SHIFT, %ecx                   /* First slot to write. */
>> +        shr     $L2_PAGETABLE_SHIFT, %edx                   /* Final slot to write. */
>> +
>> +1:      mov     %eax, sym_offs(l2_bootmap)  (%esi, %ecx, 8)
>> +        mov     %eax, sym_offs(l2_directmap)(%esi, %ecx, 8)
> I guess I could have noticed this on the previous patch already:
> This would look better as
>
> 1:      mov     %eax, sym_esi(l2_bootmap,   %ecx, 8)
>         mov     %eax, sym_esi(l2_directmap, %ecx, 8)
>
> Can sym_esi() perhaps be made
>
> #define sym_esi(sym, extra...)      sym_offs(sym)(%esi, ## extra)
>
> ?

I considered and dismissed this approach.  Yes, the code is slightly
shorter, but at the expense of readability.

The advantage of the longhand version is that it is obvious which half
is the displacement expression, and which half is the SIB.

The reduced version leaves a distinct possibility of %ecx being mistaken
for the base register rather than the index.
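
Side by side, purely for illustration, the two spellings of the same operand:

        mov     %eax, sym_offs(l2_bootmap)(%esi, %ecx, 8)  /* displacement, then (base, index, scale) */
        mov     %eax, sym_esi(l2_bootmap, %ecx, 8)         /* macro boundary hides where the SIB starts */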

>
>> --- a/xen/arch/x86/xen.lds.S
>> +++ b/xen/arch/x86/xen.lds.S
>> @@ -384,6 +384,3 @@ ASSERT((trampoline_end - trampoline_start) < TRAMPOLINE_SPACE - MBI_SPACE_MIN,
>>      "not enough room for trampoline and mbi data")
>>  ASSERT((wakeup_stack - wakeup_stack_start) >= WAKEUP_STACK_MIN,
>>      "wakeup stack too small")
>> -
>> -/* Plenty of boot code assumes that Xen isn't larger than 16M. */
>> -ASSERT(_end - _start <= MB(16), "Xen too large for early-boot assumptions")
> Following your reply to the cover letter, this can't be dropped just yet.

Correct.

> Even when that remaining issue got addressed, I think it would be better
> to keep it, altering the bound to GB(1).

A 1G check wouldn't be correct.

We've already got a more suitable one, which is the check that Xen
doesn't encroach into the stubs range.
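
(Concretely, that's the xen.lds.S assertion roughly of the form - quoting
from memory rather than the tree:

ASSERT(__2M_rwdata_end <= XEN_VIRT_END - NR_CPUS * PAGE_SIZE,
       "Xen image overlaps stubs area")

i.e. the image end is already bounded against the start of the stubs.)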

~Andrew
