On Thu, Apr 04, 2019 at 11:21:04PM +0200, Andrew Jones wrote:
> On Fri, Mar 29, 2019 at 01:00:32PM +0000, Dave Martin wrote:
> > Due to the way the effective SVE vector length is controlled and
> > trapped at different exception levels, certain mismatches in the
> > sets of vector lengths supported by different physical CPUs in the
> > system may prevent straightforward virtualisation of SVE at parity
> > with the host.
> > 
> > This patch analyses the extent to which SVE can be virtualised
> > safely without interfering with migration of vcpus between physical
> > CPUs, and rejects late secondary CPUs that would erode the
> > situation further.
> > 
> > It is left up to KVM to decide what to do with this information.
> > 
> > Signed-off-by: Dave Martin <[email protected]>
> > Reviewed-by: Julien Thierry <[email protected]>
> > Tested-by: zhang.lei <[email protected]>
> > 
> > ---
> > 
> > QUESTION: The final structure of this code makes it quite natural to
> > clamp the vector length for KVM guests to the maximum value supportable
> > across all CPUs; such a value is guaranteed to exist, but may be
> > surprisingly small on a given hardware platform.
> > 
> > Should we be stricter and actually refuse to support KVM at all on such
> > hardware?  This may help to force people to fix Linux (or the
> > architecture) if/when this issue comes up.
> 
> Blocking KVM would be too harsh, since users of the host may not
> care about SVE in their guests, but still care about having guests at all.
> 
> > 
> > For now, I stick with pr_warn() and make do with a limited SVE vector
> > length for guests.
> 
> I think that's probably the best we can do.

Agreed.  Since it fell out quite nicely this way in the code, this was
my preferred option.
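
For concreteness, here is a minimal sketch of the clamping idea,
using hypothetical names rather than the real kernel symbols.  VQ is
the vector length in units of 128 bits; the architecture reserves
encodings up to VQ = 512, and every implementation must support
VQ = 1, so the intersection across CPUs is never empty:

#include <stdint.h>
#include <string.h>

#define MAX_VQ		512
#define VQ_WORDS	(MAX_VQ / 64)

static uint64_t guest_vq_map[VQ_WORDS];	/* intersection so far */

/* Seed the map from the boot CPU before intersecting anything. */
static void init_guest_vq_map(const uint64_t *boot_cpu_map)
{
	memcpy(guest_vq_map, boot_cpu_map, sizeof(guest_vq_map));
}

/* Fold one secondary CPU's supported-VQ bitmap into the guest map. */
static void update_guest_vq_map(const uint64_t *cpu_map)
{
	int i;

	for (i = 0; i < VQ_WORDS; i++)
		guest_vq_map[i] &= cpu_map[i];
}

/* Largest VQ that every CPU seen so far supports. */
static unsigned int guest_max_vq(void)
{
	int vq;

	for (vq = MAX_VQ; vq >= 1; vq--)
		if (guest_vq_map[(vq - 1) / 64] &
		    (1ULL << ((vq - 1) % 64)))
			return vq;

	return 0;	/* can't happen if VQ = 1 is always supported */
}

A late secondary whose bitmap would shrink this intersection below
what guests have already been offered can only be rejected, which is
the check the patch adds.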

[...]

> Reviewed-by: Andrew Jones <[email protected]>

Thanks

---Dave