On 14/02/2019 13:10, Paul Durrant wrote:
> The current code uses hvm_asid_flush_vcpu(), but this is insufficient for
> a guest running in shadow mode, resulting in guest crashes early in boot
> if 'hcall_remote_tlb_flush' is enabled.
> 
> This patch, instead of open-coding a new flush algorithm, adapts the one
> already used by the HVMOP_flush_tlbs Xen hypercall. The implementation is
> modified to allow TLB flushing of a subset of a domain's vCPUs, with a
> callback function determining whether or not a given vCPU requires
> flushing. This mechanism was chosen because, while the currently
> implemented viridian hypercalls specify a vCPU mask, newer variants
> specify a sparse HV_VP_SET; using a callback avoids exposing details of
> either format outside of the viridian subsystem if and when those newer
> variants are implemented.
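
For illustration, a minimal sketch of what such a callback-parameterised
flush interface could look like (the helper name, the callback signature
and the wrapper below are assumptions made for this sketch, not code
quoted from the patch):

  /*
   * Illustrative only: a common flush helper parameterised by a per-vCPU
   * predicate, so the caller decides which vCPUs need flushing without
   * the common code knowing whether the selection came from a vCPU mask
   * or a sparse HV_VP_SET.
   */
  bool hvm_flush_vcpu_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
                          void *ctxt);

  /* A viridian caller might wrap its vCPU mask in such a callback: */
  static bool need_flush(void *ctxt, struct vcpu *v)
  {
      uint64_t vcpu_mask = *(uint64_t *)ctxt;

      return v->vcpu_id < 64 && (vcpu_mask & (1ul << v->vcpu_id));
  }
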
> 
> NOTE: Use of the common flush function requires that the hypercalls are
>       restartable and so, with this patch applied, viridian_hypercall()
>       can now return HVM_HCALL_preempted. This is safe as no modification
>       to struct cpu_user_regs is done before the return.
> 
> Signed-off-by: Paul Durrant <paul.durr...@citrix.com>
> ---
> Cc: Jan Beulich <jbeul...@suse.com>
> Cc: Andrew Cooper <andrew.coop...@citrix.com>
> Cc: Wei Liu <wei.l...@citrix.com>
> Cc: "Roger Pau Monné" <roger....@citrix.com>
> 
> v2:
>  - Use cpumask_scratch

That's not a good idea. cpumask_scratch may be used from other CPUs as
long as the respective scheduler lock is held. See the comment in
include/xen/sched-if.h:

/*
 * Scratch space, for avoiding having too many cpumask_t on the stack.
 * Within each scheduler, when using the scratch mask of one pCPU:
 * - the pCPU must belong to the scheduler,
 * - the caller must own the per-pCPU scheduler lock (a.k.a. runqueue
 *   lock).
 */

So please don't use cpumask_scratch outside the scheduler!
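
One way to side-step that restriction, sketched below purely for
illustration (the per-CPU mask name and helper here are made up and not
necessarily what the patch will end up using), is to give the flush path
its own per-CPU scratch mask instead of borrowing the scheduler's:

  static DEFINE_PER_CPU(cpumask_t, flush_scratch_mask);

  static void build_flush_mask(struct domain *d,
                               bool (*flush_vcpu)(void *ctxt,
                                                  struct vcpu *v),
                               void *ctxt)
  {
      cpumask_t *mask = &this_cpu(flush_scratch_mask);
      struct vcpu *v;

      cpumask_clear(mask);

      /* Only CPUs running vCPUs selected by the callback need an IPI. */
      for_each_vcpu ( d, v )
          if ( flush_vcpu(ctxt, v) )
              __cpumask_set_cpu(v->processor, mask);

      /* ... issue the actual flush IPIs using 'mask' ... */
  }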


Juergen
