On 06.02.2024 13:06, Jan Beulich wrote:
> While in the vast majority of cases failure of the function will not
> be followed by re-invocation with the same emulation context, a few
> very specific insns - involving multiple independent writes, e.g. ENTER
> and PUSHA - exist where this can happen. Since failure of the function
> only signals to the caller that it ought to try an MMIO write instead,
> such failure also cannot be assumed to result in wholesale failure of
> emulation of the current insn. Instead we have to maintain internal
> state such that another invocation of the function with the same
> emulation context remains possible. To achieve that we need to reset MFN
> slots after putting page references on the error path.
> 
> Note that all of this affects debugging code only, in causing an
> assertion to trigger (higher up in the function). There's otherwise no
> misbehavior - such a "leftover" slot would simply be overwritten by new
> contents in a release build.
> 
> Also extend the related unmap() assertion, to further check for MFN 0.
> 
> Fixes: 8cbd4fb0b7ea ("x86/hvm: implement hvmemul_write() using real mappings")
> Reported-by: Manuel Andreas <[email protected]>
> Signed-off-by: Jan Beulich <[email protected]>

Just noticed that I forgot to Cc Paul.

Jan

> ---
> While probably I could be convinced to omit the #ifndef, I'm really
> considering extending the one in hvmemul_unmap_linear_addr(), to
> eliminate the zapping from release builds: Leaving MFN 0 in place is not
> much better than leaving a (presently) guest-owned one there. And we
> can't really put/leave INVALID_MFN there, as that would conflict with
> other debug checking.
> 
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -696,7 +696,12 @@ static void *hvmemul_map_linear_addr(
>   out:
>      /* Drop all held references. */
>      while ( mfn-- > hvmemul_ctxt->mfn )
> +    {
>          put_page(mfn_to_page(*mfn));
> +#ifndef NDEBUG /* Clean slot for a subsequent map()'s error checking. */
> +        *mfn = _mfn(0);
> +#endif
> +    }
>  
>      return err;
>  }
> @@ -718,7 +723,7 @@ static void hvmemul_unmap_linear_addr(
>  
>      for ( i = 0; i < nr_frames; i++ )
>      {
> -        ASSERT(mfn_valid(*mfn));
> +        ASSERT(mfn_x(*mfn) && mfn_valid(*mfn));
>          paging_mark_dirty(currd, *mfn);
>          put_page(mfn_to_page(*mfn));
>  


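For reference, a rough sketch of what the extension mentioned in the
post-commit note might look like: moving the slot zapping in
hvmemul_unmap_linear_addr() under an #ifndef NDEBUG guard, so that release
builds skip it entirely. The loop shape and variable names below are assumed
from the quoted hunk, not taken from the actual tree, so this is only an
illustration of the idea rather than a proposed patch.

    for ( i = 0; i < nr_frames; i++ )
    {
        ASSERT(mfn_x(*mfn) && mfn_valid(*mfn));
        paging_mark_dirty(currd, *mfn);
        put_page(mfn_to_page(*mfn));

#ifndef NDEBUG
        /* Clean the slot only where map()'s debug error checking relies on it. */
        *mfn = _mfn(0);
#endif
        mfn++;
    }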