Re: [Xen-devel] [PATCH 2/2] x86/AMD: Fix handling of x87 exception pointers on Fam17h hardware
On 02/09/2019 15:50, Jan Beulich wrote:
>>> I'm also not sure why you call them "unpredictable": If all (or most)
>>> cases match, the branch there could be pretty well predicted (subject
>>> of course to capacity).
>>
>> Data-dependent branches which have no correlation to pattern history,
>> of which this is an example, are frequently mispredicted because they
>> are inherently unstable.
>>
>> In this case, you're trading off the fact that an unmasked exception
>> is basically never pending, against the cost of mispredicts in the
>> context switch path.
>
> For
>
>     if ( !(fpu_ctxt->fsw & ~fpu_ctxt->fcw & 0x003f) &&
>
> you're claiming it to be true most of the time. How could the
> predictor be misled if whenever this is encountered the result of the
> double & is zero?

Because whether it is 0 or not is unrelated to previous history.

As this argument isn't getting anywhere, I'll leave it in for now and do
the perf work to demonstrate the problem at some point when I don't have
15 other things needing doing yesterday.

>>> But as said before, just like for synthetic features I strongly
>>> think we should use simple boolean variables when using them only in
>>> if()-s. Use of the feature(/bug) machinery is needed only to not
>>> further complicate alternatives patching.
>>
>> ... b)
>>
>> I see I'm going to have to repeat myself, which is time I can't
>> really afford to waste.
>>
>> x86_capabilities is not, and has never been, "just for alternatives".
>> It is also not how it is currently used in Xen.
>
> And I've not been claiming this.

You literally have, and it is quoted above.

> Nevertheless my opinion is that it shouldn't be needlessly abused
> beyond its main purpose.

The purpose is to be a collection of bits, stored in a reasonably
efficient manner. Synthetic features, as well as bugs, are related
information, and very definitely capabilities of the CPU.
Alternatives use the x86_capabilities[] bitmap, which existed for two
decades previously, because it happens to be in a convenient form. The
fact that alternatives do use x86_capabilities[] has no bearing on what
is reasonable or appropriate data to store in the bitmap, and it
certainly doesn't mean that data-not-used-for-patching should be purged.

> I thought I had successfully convinced you of not adding synthetic
> feature (non-bug) flags either anymore, unless needed for alternatives
> patching.

No.

I don't think you realise quite how infuriating it was trying to meet
the embargoes for speculative issues. We had series which were tens of
patches long, being invasively rewritten leading up to the embargo.
Some requests were legitimate - I'm not going to pretend otherwise, but
some really were minutiae like this which really didn't help the
situation.

There are two big series outstanding, MSR_VIRT_SPEC_CTRL and CPUID
Policy, which is getting to be reprehensibly late, and both of which
had proper embargoes I was trying to meet.

There was no way VIRT_SPEC_CTRL was going to meet the SSBD embargo
because of the delay getting the spec together, but running Xen on AMD
hardware is currently embarrassing and slow due to guests falling back
to native means and hitting:

  (XEN) emul-priv-op.c:1113:d0v2 Domain attempted WRMSR c0011020 from 0x00064040 to 0x000640400400

on their context switch path, and doing a good job of filling /var/log/
in minutes.

CPUID policy is even worse. It's not currently safe to migrate VMs on
Intel hardware, because we can't level MSR_ARCH_CAPS.RSBA across the
migration pool, and this is something which really should have met the
L1TF embargo a year ago, but which was stopped dead in its tracks
because I couldn't even argue in public as to why it needed to be done
certain ways. It also means that Xen is crippled on current-generation
Intel hardware.
The sad fact is that it is rather too easy to put off finishing that
work when there is other just-as-important work to do, and the thought
of arguing over further minutiae on vN+1 is occasionally too exhausting
to contemplate.

> Anyway - in the interest of forward progress, yet without being
> convinced at all, I'll (as in so many earlier cases) give in here and
> see about acking patch 1 then.

Thank you.

>> I also don't agree with the general suggestion because amongst other
>> things, there is a factor of 8 storage difference between one extra
>> bit in x86_capabilities[] and using scattered booleans.
>
> While a valid argument at first glance, there's nothing keeping us
> from having a feature flag independent bitmap. Correct me if I'm
> wrong, but I've gained the impression that you want this mainly
> because Linux does it this way.

To a first approximation, yes - this is a construct we inherited from
Linux, and I'm continuing to use it in the way Linux uses it.

>>> With this, keying the return to cpu_bug_* also doesn't
>>> look very nice, but I admit I can't suggest a better alternative
>>> (other than leaving the vendor check in place and
Re: [Xen-devel] [PATCH 2/2] x86/AMD: Fix handling of x87 exception pointers on Fam17h hardware
On 02.09.2019 16:15, Andrew Cooper wrote:
> On 29/08/2019 13:56, Jan Beulich wrote:
>> On 19.08.2019 20:26, Andrew Cooper wrote:
>>> AMD Pre-Fam17h CPUs "optimise" {F,}X{SAVE,RSTOR} by not
>>> saving/restoring FOP/FIP/FDP if an x87 exception isn't pending.
>>> This causes an information leak, CVE-2006-1056, and is worked around
>>> by several OSes, including Xen. AMD Fam17h CPUs no longer have this
>>> leak, and advertise so in a CPUID bit.
>>>
>>> Introduce the RSTR_FP_ERR_PTRS feature, as specified by AMD, and
>>> expose it to all guests by default. While adjusting libxl's cpuid
>>> table, add CLZERO, which looks to have been omitted previously.
>>>
>>> Also introduce an X86_BUG bit to trigger the (F)XRSTOR workaround,
>>> and set it on AMD hardware where RSTR_FP_ERR_PTRS is not advertised.
>>> Optimise the workaround path by dropping the data-dependent
>>> unpredictable conditions which will evaluate to true for all 64bit
>>> OSes and most 32bit ones.
>>
>> I definitely don't buy the "all 64bit OSes" part here: Anyone doing
>> full 80-bit FP operations will have to use the FPU, and hence may
>> want to have some unmasked exceptions.
>
> And all 0 people doing that is still 0.
>
> Yes I'm being a little facetious, but there is exceedingly little
> software which uses 80-bit FPU operations these days, as it has been
> superseded by SSE.

Just for your amusement, I run such software myself. When computing
fractals the extra bits of precision may matter quite a lot. Granted I
don't fancy running something like this on top of Xen.

>> I'm also not sure why you call them "unpredictable": If all (or most)
>> cases match, the branch there could be pretty well predicted (subject
>> of course to capacity).
>
> Data-dependent branches which have no correlation to pattern history,
> of which this is an example, are frequently mispredicted because they
> are inherently unstable.
>
> In this case, you're trading off the fact that an unmasked exception
> is basically never pending, against the cost of mispredicts in the
> context switch path.

For

    if ( !(fpu_ctxt->fsw & ~fpu_ctxt->fcw & 0x003f) &&

you're claiming it to be true most of the time. How could the predictor
be misled if whenever this is encountered the result of the double & is
zero?

>> All in all I'd prefer if the conditions remained in place; my minimal
>> request would be for there to be a comment why there's no evaluation
>> of FSW/FCW.
>>
>>> --- a/xen/arch/x86/i387.c
>>> +++ b/xen/arch/x86/i387.c
>>> @@ -43,20 +43,17 @@ static inline void fpu_fxrstor(struct vcpu *v)
>>>      const typeof(v->arch.xsave_area->fpu_sse) *fpu_ctxt = v->arch.fpu_ctxt;
>>>
>>>      /*
>>> -     * AMD CPUs don't save/restore FDP/FIP/FOP unless an exception
>>> +     * Some CPUs don't save/restore FDP/FIP/FOP unless an exception
>>
>> Are there any non-AMD CPUs known to have this issue? If not, is
>> there a particular reason you don't say "Some AMD CPUs ..."?
>
> I'm not aware of any, but leaving it as "Some AMD" might become stale
> if others do surface.
>
> Information about which CPUs are affected should exclusively be
> determined by the logic which sets cpu_bug_fpu_ptr_leak, which won't
> be stale.

Well, okay then.

>>>      * is pending. Clear the x87 state here by setting it to fixed
>>>      * values. The hypervisor data segment can be sometimes 0 and
>>>      * sometimes new user value. Both should be ok. Use the FPU saved
>>>      * data block as a safe address because it should be in L1.
>>>      */
>>> -    if ( !(fpu_ctxt->fsw & ~fpu_ctxt->fcw & 0x003f) &&
>>> -         boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
>>> -    {
>>> +    if ( cpu_bug_fpu_ptr_leak )
>>>          asm volatile ( "fnclex\n\t"
>>>                         "ffree %%st(7)\n\t" /* clear stack tag */
>>>                         "fildl %0"          /* load to clear state */
>>>                         : : "m" (*fpu_ctxt) );
>>
>> If here and in the respective xsave instance you'd use alternatives
>> patching, I wouldn't mind the use of an X86_BUG_* for this (as made
>> possible by patch 1).
>
> a) this should probably be a static branch if/when we gain that
> infrastructure, but ...
>
>> But as said before, just like for synthetic features I strongly
>> think we should use simple boolean variables when using them only in
>> if()-s. Use of the feature(/bug) machinery is needed only to not
>> further complicate alternatives patching.
>
> ... b)
>
> I see I'm going to have to repeat myself, which is time I can't really
> afford to waste.
>
> x86_capabilities is not, and has never been, "just for alternatives".
> It is also not how it is currently used in Xen.

And I've not been claiming this. Nevertheless my opinion is that it
shouldn't be needlessly abused beyond its main purpose. I.e. deriving
cpu_has_* flags from it because feature flags get collected this way is
certainly fine. But introducing artificial extensions is (imo) not.

I thought I had successfully convinced
Re: [Xen-devel] [PATCH 2/2] x86/AMD: Fix handling of x87 exception pointers on Fam17h hardware
On 29/08/2019 13:56, Jan Beulich wrote:
> On 19.08.2019 20:26, Andrew Cooper wrote:
>> AMD Pre-Fam17h CPUs "optimise" {F,}X{SAVE,RSTOR} by not
>> saving/restoring FOP/FIP/FDP if an x87 exception isn't pending. This
>> causes an information leak, CVE-2006-1056, and is worked around by
>> several OSes, including Xen. AMD Fam17h CPUs no longer have this
>> leak, and advertise so in a CPUID bit.
>>
>> Introduce the RSTR_FP_ERR_PTRS feature, as specified by AMD, and
>> expose it to all guests by default. While adjusting libxl's cpuid
>> table, add CLZERO, which looks to have been omitted previously.
>>
>> Also introduce an X86_BUG bit to trigger the (F)XRSTOR workaround,
>> and set it on AMD hardware where RSTR_FP_ERR_PTRS is not advertised.
>> Optimise the workaround path by dropping the data-dependent
>> unpredictable conditions which will evaluate to true for all 64bit
>> OSes and most 32bit ones.
>
> I definitely don't buy the "all 64bit OSes" part here: Anyone doing
> full 80-bit FP operations will have to use the FPU, and hence may
> want to have some unmasked exceptions.

And all 0 people doing that is still 0.

Yes I'm being a little facetious, but there is exceedingly little
software which uses 80-bit FPU operations these days, as it has been
superseded by SSE.

> I'm also not sure why you call them "unpredictable": If all (or most)
> cases match, the branch there could be pretty well predicted (subject
> of course to capacity).

Data-dependent branches which have no correlation to pattern history,
of which this is an example, are frequently mispredicted because they
are inherently unstable.

In this case, you're trading off the fact that an unmasked exception is
basically never pending, against the cost of mispredicts in the context
switch path.

> All in all I'd prefer if the conditions remained in place; my minimal
> request would be for there to be a comment why there's no evaluation
> of FSW/FCW.
>
>> --- a/xen/arch/x86/i387.c
>> +++ b/xen/arch/x86/i387.c
>> @@ -43,20 +43,17 @@ static inline void fpu_fxrstor(struct vcpu *v)
>>      const typeof(v->arch.xsave_area->fpu_sse) *fpu_ctxt = v->arch.fpu_ctxt;
>>
>>      /*
>> -     * AMD CPUs don't save/restore FDP/FIP/FOP unless an exception
>> +     * Some CPUs don't save/restore FDP/FIP/FOP unless an exception
>
> Are there any non-AMD CPUs known to have this issue? If not, is
> there a particular reason you don't say "Some AMD CPUs ..."?

I'm not aware of any, but leaving it as "Some AMD" might become stale
if others do surface.

Information about which CPUs are affected should exclusively be
determined by the logic which sets cpu_bug_fpu_ptr_leak, which won't be
stale.

>>      * is pending. Clear the x87 state here by setting it to fixed
>>      * values. The hypervisor data segment can be sometimes 0 and
>>      * sometimes new user value. Both should be ok. Use the FPU saved
>>      * data block as a safe address because it should be in L1.
>>      */
>> -    if ( !(fpu_ctxt->fsw & ~fpu_ctxt->fcw & 0x003f) &&
>> -         boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
>> -    {
>> +    if ( cpu_bug_fpu_ptr_leak )
>>          asm volatile ( "fnclex\n\t"
>>                         "ffree %%st(7)\n\t" /* clear stack tag */
>>                         "fildl %0"          /* load to clear state */
>>                         : : "m" (*fpu_ctxt) );
>
> If here and in the respective xsave instance you'd use alternatives
> patching, I wouldn't mind the use of an X86_BUG_* for this (as made
> possible by patch 1).

a) this should probably be a static branch if/when we gain that
infrastructure, but ...

> But as said before, just like for synthetic features I strongly think
> we should use simple boolean variables when using them only in if()-s.
> Use of the feature(/bug) machinery is needed only to not further
> complicate alternatives patching.

... b)

I see I'm going to have to repeat myself, which is time I can't really
afford to waste.

x86_capabilities is not, and has never been, "just for alternatives".
It is also not how it is currently used in Xen.
I also don't agree with the general suggestion because amongst other
things, there is a factor of 8 storage difference between one extra bit
in x86_capabilities[] and using scattered booleans.

This series, and a number of related series, have been overdue for more
than a year now, partly because of speculative preemption, but also
partly because of attempted scope creep such as this. Scope creep is
having a catastrophic effect on the productivity of submissions to Xen,
and must not continue like this if the Xen community is to survive.

>> @@ -169,11 +166,10 @@ static inline void fpu_fxsave(struct vcpu *v)
>>                     : "=m" (*fpu_ctxt) : "R" (fpu_ctxt) );
>>
>>      /*
>> -     * AMD CPUs don't save/restore FDP/FIP/FOP unless an exception
>> -     * is pending.
>> +     * Some CPUs don't save/restore FDP/FIP/FOP unless an exception is
>> +     * pending. The restore
Re: [Xen-devel] [PATCH 2/2] x86/AMD: Fix handling of x87 exception pointers on Fam17h hardware
On 19.08.2019 20:26, Andrew Cooper wrote:
> AMD Pre-Fam17h CPUs "optimise" {F,}X{SAVE,RSTOR} by not
> saving/restoring FOP/FIP/FDP if an x87 exception isn't pending. This
> causes an information leak, CVE-2006-1056, and is worked around by
> several OSes, including Xen. AMD Fam17h CPUs no longer have this leak,
> and advertise so in a CPUID bit.
>
> Introduce the RSTR_FP_ERR_PTRS feature, as specified by AMD, and
> expose it to all guests by default. While adjusting libxl's cpuid
> table, add CLZERO, which looks to have been omitted previously.
>
> Also introduce an X86_BUG bit to trigger the (F)XRSTOR workaround, and
> set it on AMD hardware where RSTR_FP_ERR_PTRS is not advertised.
> Optimise the workaround path by dropping the data-dependent
> unpredictable conditions which will evaluate to true for all 64bit
> OSes and most 32bit ones.

I definitely don't buy the "all 64bit OSes" part here: Anyone doing
full 80-bit FP operations will have to use the FPU, and hence may want
to have some unmasked exceptions. I'm also not sure why you call them
"unpredictable": If all (or most) cases match, the branch there could
be pretty well predicted (subject of course to capacity).

All in all I'd prefer if the conditions remained in place; my minimal
request would be for there to be a comment why there's no evaluation of
FSW/FCW.

> --- a/xen/arch/x86/i387.c
> +++ b/xen/arch/x86/i387.c
> @@ -43,20 +43,17 @@ static inline void fpu_fxrstor(struct vcpu *v)
>      const typeof(v->arch.xsave_area->fpu_sse) *fpu_ctxt = v->arch.fpu_ctxt;
>
>      /*
> -     * AMD CPUs don't save/restore FDP/FIP/FOP unless an exception
> +     * Some CPUs don't save/restore FDP/FIP/FOP unless an exception

Are there any non-AMD CPUs known to have this issue? If not, is there a
particular reason you don't say "Some AMD CPUs ..."?

>      * is pending. Clear the x87 state here by setting it to fixed
>      * values. The hypervisor data segment can be sometimes 0 and
>      * sometimes new user value. Both should be ok.
> Use the FPU saved
>      * data block as a safe address because it should be in L1.
>      */
> -    if ( !(fpu_ctxt->fsw & ~fpu_ctxt->fcw & 0x003f) &&
> -         boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
> -    {
> +    if ( cpu_bug_fpu_ptr_leak )
>          asm volatile ( "fnclex\n\t"
>                         "ffree %%st(7)\n\t" /* clear stack tag */
>                         "fildl %0"          /* load to clear state */
>                         : : "m" (*fpu_ctxt) );

If here and in the respective xsave instance you'd use alternatives
patching, I wouldn't mind the use of an X86_BUG_* for this (as made
possible by patch 1). But as said before, just like for synthetic
features I strongly think we should use simple boolean variables when
using them only in if()-s. Use of the feature(/bug) machinery is needed
only to not further complicate alternatives patching.

> @@ -169,11 +166,10 @@ static inline void fpu_fxsave(struct vcpu *v)
>                     : "=m" (*fpu_ctxt) : "R" (fpu_ctxt) );
>
>      /*
> -     * AMD CPUs don't save/restore FDP/FIP/FOP unless an exception
> -     * is pending.
> +     * Some CPUs don't save/restore FDP/FIP/FOP unless an exception is
> +     * pending. The restore code fills in suitable defaults.
>      */
> -    if ( !(fpu_ctxt->fsw & 0x0080) &&
> -         boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
> +    if ( cpu_bug_fpu_ptr_leak && !(fpu_ctxt->fsw & 0x0080) )
>          return;

The comment addition seems a little unmotivated: The code here isn't
about leaking data, but about having valid data to consume (down from
here). With this, keying the return to cpu_bug_* also doesn't look very
nice, but I admit I can't suggest a better alternative (other than
leaving the vendor check in place and checking the
X86_FEATURE_RSTR_FP_ERR_PTRS bit). An option might be to give the
construct a different name, without "leak" in it (NO_FP_ERR_PTRS?).

Jan

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
[Xen-devel] [PATCH 2/2] x86/AMD: Fix handling of x87 exception pointers on Fam17h hardware
AMD Pre-Fam17h CPUs "optimise" {F,}X{SAVE,RSTOR} by not saving/restoring
FOP/FIP/FDP if an x87 exception isn't pending. This causes an
information leak, CVE-2006-1056, and is worked around by several OSes,
including Xen. AMD Fam17h CPUs no longer have this leak, and advertise
so in a CPUID bit.

Introduce the RSTR_FP_ERR_PTRS feature, as specified by AMD, and expose
it to all guests by default. While adjusting libxl's cpuid table, add
CLZERO, which looks to have been omitted previously.

Also introduce an X86_BUG bit to trigger the (F)XRSTOR workaround, and
set it on AMD hardware where RSTR_FP_ERR_PTRS is not advertised.
Optimise the workaround path by dropping the data-dependent
unpredictable conditions which will evaluate to true for all 64bit OSes
and most 32bit ones.

Signed-off-by: Andrew Cooper
---
CC: Jan Beulich
CC: Wei Liu
CC: Roger Pau Monné

v2:
 * Use the AMD naming, not that I am convinced this is a sensible name
   to use.
 * Adjust the i387 codepaths as well as the xstate ones.
 * Add xen-cpuid/libxl data for the CPUID bit.
---
 tools/libxl/libxl_cpuid.c                   |  3 +++
 tools/misc/xen-cpuid.c                      |  1 +
 xen/arch/x86/cpu/amd.c                      |  7 +++
 xen/arch/x86/i387.c                         | 14 +-
 xen/arch/x86/xstate.c                       |  6 ++
 xen/include/asm-x86/cpufeature.h            |  3 +++
 xen/include/asm-x86/cpufeatures.h           |  2 ++
 xen/include/public/arch-x86/cpufeatureset.h |  1 +
 8 files changed, 24 insertions(+), 13 deletions(-)

diff --git a/tools/libxl/libxl_cpuid.c b/tools/libxl/libxl_cpuid.c
index a8d07fac50..acc92fd46c 100644
--- a/tools/libxl/libxl_cpuid.c
+++ b/tools/libxl/libxl_cpuid.c
@@ -256,7 +256,10 @@ int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str)
         {"invtsc",       0x80000007, NA, CPUID_REG_EDX,  8,  1},
+        {"clzero",       0x80000008, NA, CPUID_REG_EBX,  0,  1},
+        {"rstr-fp-err-ptrs", 0x80000008, NA, CPUID_REG_EBX,  2,  1},
         {"ibpb",         0x80000008, NA, CPUID_REG_EBX, 12,  1},
+        {"nc",           0x80000008, NA, CPUID_REG_ECX,  0,  8},
         {"apicidsize",   0x80000008, NA, CPUID_REG_ECX, 12,  4},

diff --git a/tools/misc/xen-cpuid.c b/tools/misc/xen-cpuid.c
index b0db0525a9..04cdd9aa95 100644
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -145,6 +145,7 @@ static const char *const str_e7d[32] =
 static const char *const str_e8b[32] =
 {
     [ 0] = "clzero",
+    [ 2] = "rstr-fp-err-ptrs",
     [12] = "ibpb",
 };

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index a2f83c79a5..463f9776c7 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -580,6 +580,13 @@ static void init_amd(struct cpuinfo_x86 *c)
 	}

 	/*
+	 * Older AMD CPUs don't save/load FOP/FIP/FDP unless an FPU exception
+	 * is pending. Xen works around this at (F)XRSTOR time.
+	 */
+	if ( !cpu_has(c, X86_FEATURE_RSTR_FP_ERR_PTRS) )
+		setup_force_cpu_cap(X86_BUG_FPU_PTR_LEAK);
+
+	/*
 	 * Attempt to set lfence to be Dispatch Serialising.
This MSR almost
	 * certainly isn't virtualised (and Xen at least will leak the real
	 * value in but silently discard writes), as well as being per-core

diff --git a/xen/arch/x86/i387.c b/xen/arch/x86/i387.c
index 88178485cb..82dbc461c3 100644
--- a/xen/arch/x86/i387.c
+++ b/xen/arch/x86/i387.c
@@ -43,20 +43,17 @@ static inline void fpu_fxrstor(struct vcpu *v)
     const typeof(v->arch.xsave_area->fpu_sse) *fpu_ctxt = v->arch.fpu_ctxt;

     /*
-     * AMD CPUs don't save/restore FDP/FIP/FOP unless an exception
+     * Some CPUs don't save/restore FDP/FIP/FOP unless an exception
      * is pending. Clear the x87 state here by setting it to fixed
      * values. The hypervisor data segment can be sometimes 0 and
      * sometimes new user value. Both should be ok. Use the FPU saved
      * data block as a safe address because it should be in L1.
      */
-    if ( !(fpu_ctxt->fsw & ~fpu_ctxt->fcw & 0x003f) &&
-         boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
-    {
+    if ( cpu_bug_fpu_ptr_leak )
         asm volatile ( "fnclex\n\t"
                        "ffree %%st(7)\n\t" /* clear stack tag */
                        "fildl %0"          /* load to clear state */
                        : : "m" (*fpu_ctxt) );
-    }

     /*
      * FXRSTOR can fault if passed a corrupted data block. We handle this
@@ -169,11 +166,10 @@ static inline void fpu_fxsave(struct vcpu *v)
                    : "=m" (*fpu_ctxt) : "R" (fpu_ctxt) );

     /*
-     * AMD CPUs don't save/restore FDP/FIP/FOP unless an exception
-     * is pending.
+     * Some CPUs don't save/restore FDP/FIP/FOP unless an exception is
+     * pending. The restore code fills in suitable defaults.
      */
-    if ( !(fpu_ctxt->fsw & 0x0080) &&
-         boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
+