On 10.01.2023 18:18, Andrew Cooper wrote:
> +static inline void wrpkrs(uint32_t pkrs)
> +{
> +    uint32_t *this_pkrs = &this_cpu(pkrs);
> +
> +    if ( *this_pkrs != pkrs )
> +    {
> +        *this_pkrs = pkrs;
> +
> +        wrmsr_ns(MSR_PKRS, pkrs, 0);
> +    }
> +}
> +
> +static inline void wrpkrs_and_cache(uint32_t pkrs)
> +{
> +    this_cpu(pkrs) = pkrs;
> +    wrmsr_ns(MSR_PKRS, pkrs, 0);
> +}

Just to confirm - there's no anticipation of this being used in async
contexts, i.e. no concern about the ordering of the cache update vs the
hardware (MSR) write?
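
(For reference, the interleaving I have in mind: an async caller running
in the window between the cache update and the MSR write could find the
cache already holding the value it wants, skip its own MSR write, and
then run with stale hardware state. Purely as an illustration of what
async tolerance might look like - not a request, and deliberately
ignoring NMI/#MC context:

    static inline void wrpkrs(uint32_t pkrs)
    {
        uint32_t *this_pkrs = &this_cpu(pkrs);
        unsigned long flags;

        /* Make the cache update and MSR write atomic wrt interrupts. */
        local_irq_save(flags);

        if ( *this_pkrs != pkrs )
        {
            *this_pkrs = pkrs;

            wrmsr_ns(MSR_PKRS, pkrs, 0);
        }

        local_irq_restore(flags);
    }
)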

> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -54,6 +54,7 @@
>  #include <asm/spec_ctrl.h>
>  #include <asm/guest.h>
>  #include <asm/microcode.h>
> +#include <asm/prot-key.h>
>  #include <asm/pv/domain.h>
>  
>  /* opt_nosmp: If true, secondary processors are ignored. */
> @@ -1804,6 +1805,9 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>      if ( opt_invpcid && cpu_has_invpcid )
>          use_invpcid = true;
>  
> +    if ( cpu_has_pks )
> +        wrpkrs_and_cache(0); /* Must be before setting CR4.PKS */

Same question here as for PKRU wrt the BSP during S3 resume.
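
(Since the BSP doesn't come back through __start_xen() on S3 resume,
what I'd imagine for the resume path - placement and exact shape are
only an assumption on my part - is something along the lines of

    /*
     * In the S3 resume path (e.g. enter_state() in acpi/power.c),
     * before CR4.PKS is re-established on the BSP: re-write the MSR
     * and re-sync the per-CPU cache, which otherwise still holds the
     * pre-suspend value.
     */
    if ( cpu_has_pks )
        wrpkrs_and_cache(0);

mirroring what __start_xen() does for the cold boot case.)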

Jan
