On 15/08/18 14:17, Andrew Cooper wrote:
> Hello,

Apologies.  Getting Dario's correct email address this time.

>
> Now that the embargo on XSA-273 is up, we can start publicly discussing
> the remaining work to do, because there is plenty of it.  In no particular
> order...
>
> 1) Attempting to shadow dom0 from boot hits assertion failures very
> quickly.  Shadowing dom0 after the fact leads to some very weird crashes
> where whole swathes of the shadow appear to be missing.  This is why,
> for now, automatic shadowing of dom0 is disabled by default.
>
> 2) 32bit PV guests which use writeable pagetable support will
> automatically get shadowed when they clear the lower half of a PTE.
> Ideally, such guests should be modified to use hypercalls rather than
> the ptwr infrastructure (as it's more efficient to begin with), but we
> can probably work around this in Xen by emulating the next few
> instructions until we have a complete PTE (same as the shadow code).
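
For reference, a rough pseudo-C sketch of that emulate-until-complete idea
(the helper names below are made up for illustration, not the actual ptwr
internals):

    /*
     * A 32bit PV guest writes a 64bit PAE PTE as two 32bit halves.  When
     * the low half is cleared (dropping _PAGE_PRESENT), the entry is
     * transiently inconsistent.  Rather than switching the guest into
     * shadow mode, keep emulating its next few instructions until the
     * high half has been written as well, then validate and commit the
     * complete entry.
     */
    #define PTWR_EMUL_BOUND 4  /* arbitrary bound for this sketch */

    static int ptwr_finish_split_write(struct vcpu *v, unsigned long pte_gva)
    {
        unsigned int i;

        for ( i = 0; i < PTWR_EMUL_BOUND; i++ )
        {
            if ( emulate_one_guest_insn(v) != X86EMUL_OKAY )  /* hypothetical */
                return -EFAULT;

            if ( split_pte_write_is_complete(v, pte_gva) )    /* hypothetical */
                return validate_and_commit_pte(v, pte_gva);   /* hypothetical */
        }

        return -EBUSY;  /* give up and fall back to shadowing the guest */
    }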
>
> 3) Toolstack CPUID/MSR work.  This is needed for many reasons.
> 3a) Able to level MSR_ARCH_CAPS and maxphysaddr to regain some migration
> safety.
> 3b) Able to report accurate topology to Xen (see point 5) and to guests.
> 3c) Able to configure/level the Viridian leaves, and implement the
> Viridian L1TF extension.
> 3d) Able to configure/level the Xen leaves and implement a similar L1TF
> enlightenment.
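
On 3a), a deliberately simplified, self-contained sketch of what
"levelling" means for a migration pool (real levelling needs per-bit
semantics, e.g. RSBA wants the opposite treatment to the immunity bits,
but the shape is the same):

    #include <stdint.h>

    struct host_policy {
        uint64_t arch_caps;    /* MSR_ARCH_CAPABILITIES value */
        uint8_t  maxphysaddr;  /* CPUID.80000008:EAX[7:0] */
    };

    /*
     * Compute a guest-visible policy which is safe on every host in the
     * pool: immunity-style ARCH_CAPS bits (e.g. RDCL_NO) may only be
     * advertised if every host sets them, and maxphysaddr must be the
     * minimum so reserved-bit behaviour is consistent wherever the guest
     * lands.
     */
    static struct host_policy level_pool(const struct host_policy *hosts,
                                         unsigned int nr)
    {
        struct host_policy out = { .arch_caps = ~0ull, .maxphysaddr = 0xff };
        unsigned int i;

        for ( i = 0; i < nr; i++ )
        {
            out.arch_caps &= hosts[i].arch_caps;
            if ( hosts[i].maxphysaddr < out.maxphysaddr )
                out.maxphysaddr = hosts[i].maxphysaddr;
        }

        return out;
    }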
>
> 4) The shadow MMIO fastpath truncates the MMIO gfn at 2^28 without any
> indication of failure.  The most compatible bugfix AFAICT would be to
> add an extra nibble's worth of gfn space which gets us to 2^32, and
> clamp the guest maxphysaddr calculation at 44 bits.  The alternative is
> to clamp maxphysaddr to 40 bits, but that will break incoming migrate of
> very large shadow guests.
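
The arithmetic behind the 40/44 bit figures, for anyone following along
(the macro names are made up for illustration):

    /*
     * The fastpath encodes the MMIO gfn in a limited field of the magic
     * shadow entry.  A gfn field of N bits covers 2^N frames of 4KiB
     * each, i.e. an (N + 12)-bit physical address:
     *
     *   28-bit gfn field: 2^28 * 2^12 = 2^40 bytes  -> 40-bit maxphysaddr
     *   32-bit gfn field: 2^32 * 2^12 = 2^44 bytes  -> 44-bit maxphysaddr
     */
    #define PAGE_SHIFT         12
    #define MMIO_GFN_BITS_CUR  28
    #define MMIO_GFN_BITS_NEW  (MMIO_GFN_BITS_CUR + 4)           /* extra nibble */
    #define MAXPHYSADDR_CLAMP  (MMIO_GFN_BITS_NEW + PAGE_SHIFT)  /* 44 bits */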
>
> 4a) The shadow MMIO fastpath needs a runtime clobber, because it will
> not function at all on Icelake hardware with a 52-bit physical address
> width.  Also, it turns out there is an architectural corner case when
> levelling maxphysaddr, where some bits which (v)maxphysaddr says should
> elicit #PF[RSVD] don't, because the actual pipeline address width is
> larger.
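
On the runtime clobber, something along these lines presumably suffices
(opt_mmio_fastpath and the threshold are illustrative; paddr_bits is the
host's real physical address width), with the levelling corner case
spelled out in numbers underneath:

    /*
     * The fastpath relies on spare bits above the encoded gfn in the magic
     * shadow entry.  On parts with a 52-bit physical address width there
     * are none left, so detect that at boot and turn the fastpath off.
     */
    static bool __read_mostly opt_mmio_fastpath = true;  /* illustrative knob */

    static void __init check_mmio_fastpath(void)
    {
        if ( paddr_bits > 44 )  /* illustrative threshold */
            opt_mmio_fastpath = false;
    }

    /*
     * Corner case in numbers: with a levelled v_maxphysaddr of 40 on a
     * host whose pipeline is 52 bits wide, a PTE with (say) bit 45 set
     * should elicit #PF[RSVD] from the guest's point of view, but the
     * hardware treats bit 45 as an ordinary address bit and walks the
     * entry successfully.
     */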
>
> 5) Core-aware scheduling.  At the moment, Xen will schedule arbitrary
> guest vcpus on arbitrary hyperthreads.  This is bad and wants fixing. 
> I'll defer to Dario for further details.
>
> Perhaps the more important longer-term action is to start removing
> secrets from Xen, because it's getting uncomfortably easy to exfiltrate
> data.  I'll defer to David for his further plans in this direction.
>
> I've probably missed something in all of this, but it is enough to
> begin the discussion.
>
> ~Andrew

