On 05.08.2022 12:38, Andrew Cooper wrote:
> It turns out that we do in fact have RSB safety here, but not for obvious
> reasons.
> 
> Signed-off-by: Andrew Cooper <[email protected]>

Reviewed-by: Jan Beulich <[email protected]>
preferably with ...

> --- a/xen/common/wait.c
> +++ b/xen/common/wait.c
> @@ -210,6 +210,26 @@ void check_wakeup_from_wait(void)
>      }
>  
>      /*
> +     * We are about to jump into a deeper call tree.  In principle, this risks
> +     * executing more RET than CALL instructions, and underflowing the RSB.
> +     *
> +     * However, we are pinned to the same CPU as previously.  Therefore,
> +     * either:
> +     *
> +     *   1) We've scheduled another vCPU in the meantime, and the context
> +     *      switch path has (by default) issued IPBP which flushes the RSB, or

... IBPB used here and ...
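
For archive readers, a sketch of what "issued IBPB" means here, and not a
quote of Xen's actual context-switch code: on x86, an IBPB is requested by
writing the architectural PRED_CMD MSR (index 0x49, bit 0), and on the
parts this patch is concerned with, that barrier also flushes the RSB.
The wrmsr wrapper below is a hypothetical helper for the sketch only.

    #include <stdint.h>

    #define MSR_PRED_CMD   0x00000049   /* architectural, Intel and AMD */
    #define PRED_CMD_IBPB  (1u << 0)

    /* Ring-0 only: minimal wrmsr wrapper for this sketch. */
    static inline void wrmsr_sketch(uint32_t msr, uint64_t val)
    {
        asm volatile ( "wrmsr"
                       :: "c" (msr), "a" ((uint32_t)val),
                          "d" ((uint32_t)(val >> 32)) );
    }

    static void issue_ibpb(void)
    {
        /* Flushes indirect branch predictions, including the RSB on
         * affected parts, for subsequent code on this CPU. */
        wrmsr_sketch(MSR_PRED_CMD, PRED_CMD_IBPB);
    }
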

> +     *   2) We're still in the same context.  Returning back to the deeper
> +     *      call tree is resuming the execution path we left, and remains
> +     *      balanced as far as that logic is concerned.
> +     *
> +     *      In fact, the path though the scheduler will execute more CALL than

... (nit) "through" used here.

> +     *      RET instructions, making the RSB unbalanced in the safe direction.
> +     *
> +     * Therefore, no actions are necessary here to maintain RSB safety.
> +     */
> +
> +    /*
>       * Hand-rolled longjmp().
>       *
>       * check_wakeup_from_wait() is always called with a shallow stack,
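
For illustration only (a standalone sketch, not Xen code): plain C's
setjmp()/longjmp() can only jump back to a *shallower* frame, i.e. the
opposite of what wait.c's hand-rolled longjmp() does, but it demonstrates
the same CALL/RET accounting, unbalanced in the "safe" direction the new
comment describes for the scheduler path.

    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf env;

    static void level2(void)
    {
        /* Abandon level1()'s and level2()'s frames: their RETs are
         * never executed. */
        longjmp(env, 1);
    }

    static void level1(void)
    {
        level2();
    }

    int main(void)
    {
        if ( setjmp(env) == 0 )
        {
            level1();           /* two CALLs deep... */
            /* not reached */
        }

        /* ...but the two matching RETs never ran: CALLs outnumber RETs,
         * leaving the RSB unbalanced in the safe direction.  wait.c's
         * resume path is the mirror image (RETs without preceding CALLs),
         * hence the underflow concern analysed in the comment above. */
        printf("resumed after longjmp()\n");
        return 0;
    }
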

