On Thu, 26 Feb 2026, at 13:01, Andrew Cooper wrote:
>> @@ -133,49 +150,36 @@ static noinline void hv_crash_clear_kernpt(void)
>>   * available. We restore kernel GDT, and rest of the context, and continue
>>   * to kexec.
>>   */
>> -static asmlinkage void __noreturn hv_crash_c_entry(void)
>> +static void __naked hv_crash_c_entry(void)
>>  {
>> -	struct hv_crash_ctxt *ctxt = &hv_crash_ctxt;
>> -	/* first thing, restore kernel gdt */
>> -	native_load_gdt(&ctxt->gdtr);
>> +	asm volatile("lgdt %0" : : "m" (hv_crash_ctxt.gdtr));
>>
>> -	asm volatile("movw %%ax, %%ss" : : "a"(ctxt->ss));
>> -	asm volatile("movq %0, %%rsp" : : "m"(ctxt->rsp));
>> +	asm volatile("movw %%ax, %%ss" : : "a"(hv_crash_ctxt.ss));
>> +	asm volatile("movq %0, %%rsp" : : "m"(hv_crash_ctxt.rsp));
>
> I know this is pre-existing, but the asm here is poor.
>
> All segment register loads can have a memory operand, rather than
> forcing through %eax, which in turn reduces the setup logic the compiler
> needs to emit.
>
> Something like this:
>
>     "movl %0, %%ss" : : "m"(hv_crash_ctxt.ss)
>
> ought to do.
>

'movw' seems to work, yes.
...
>
> As Uros notes, "a" is clobbered here but the compiler is not informed. 
> But, it's not necessary.
>
> As a naked function you could even use 3x asm() statements, but you can
> get the compiler to sort out the function reference automatically with:
>
>     asm volatile ("push %q0\n\t"
>                   "push %q1\n\t"
>                   "lretq"
>                   :: "r"(hv_crash_ctxt.cs), "r"(hv_crash_handle));
>
>

Yeah much better - thanks.
