>>>> As for XEN, if it in fact masks XSAVE, but not the AVX bits, then even
>>>> the check for the XSAVE bit should be '&jnc (&label("clear_avx"));' instead
>>>> of "done". As well as that, x86_64cpuid.pl should test for XSAVE...
>>> That would also work, but it's useless because the spec OTOH says that
>>> you *can* ignore XSAVE (and anyway XSAVE means nothing: it says the
>>> feature is available, but only OSXSAVE says it is actually usable).
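
To illustrate the point about OSXSAVE: the documented procedure for deciding whether AVX may be used is to check the OSXSAVE bit (CPUID.1:ECX[27]) and then read XCR0 via XGETBV(0), requiring both the XMM and YMM state-component bits. A minimal sketch of that predicate, operating on already-fetched register values (the function and macro names here are mine, not from the OpenSSL code):

```c
/* CPUID leaf 1, ECX feature bits */
#define BIT_FMA      (1u << 12)
#define BIT_XSAVE    (1u << 26)
#define BIT_OSXSAVE  (1u << 27)
#define BIT_AVX      (1u << 28)

/* XCR0 state-component bits: XMM (SSE) and YMM (AVX) state */
#define XCR0_XMM     (1ull << 1)
#define XCR0_YMM     (1ull << 2)

/* AVX is usable only if the CPU advertises it, the OS has enabled
 * XSAVE (OSXSAVE set), and XCR0 says XMM+YMM state is managed.
 * Note that the XSAVE bit alone proves nothing about OS support. */
static int avx_usable(unsigned int ecx, unsigned long long xcr0)
{
    if (!(ecx & BIT_AVX))
        return 0;
    if (!(ecx & BIT_OSXSAVE))
        return 0;
    return (xcr0 & (XCR0_XMM | XCR0_YMM)) == (XCR0_XMM | XCR0_YMM);
}
```

In real code `ecx` would come from executing CPUID with EAX=1 and `xcr0` from XGETBV with ECX=0; here they are parameters so the logic is visible on its own.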
>> I still fail to see how exactly it failed for you. Once again, which
>> flags does the guest OS observe exactly? Is the guest OS YMM-capable?
>> Does the latest x86cpuid.pl work for you, or is it still a problem?
> No, it does not work, as cpuid on the guest OS observes XSAVE cleared
> and the AVX bit set.

How does XSAVE end up being 0? The hypervisor masks it, right? By what
means? Through a config file, or is it explicitly programmed? What was
the reasoning for masking it? Why didn't the same reasoning apply to the
AVX [and FMA] bits? As implied, I'd argue that it's inappropriate to
mask XSAVE but not AVX [and FMA].

Either way, can you at least tell whether it's controlled through
config? In other words, is it possible to work around the problem by
configuring your Xen? The context for the question is the frozen FIPS
code.

> Which means that the AVX instructions will be used in
> the SHA1 code, which then fails with SIGILL.
> 
> The OSXSAVE bit is also cleared, so if the XSAVE test were just
> dropped, it would work.

So would jumping to "clear_avx" if XSAVE is 0, right? Anyway, see
http://cvs.openssl.org/chngview?cn=21675.
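
The "clear_avx" fallback discussed above amounts to masking the AVX and FMA capability bits whenever the XSAVE/OSXSAVE prerequisites are absent, so the AVX code paths are never selected and cannot SIGILL. A sketch of that masking in C (the function name mirrors the assembly label; the real implementation is the generated assembly in x86cpuid.pl, not this code):

```c
/* CPUID leaf 1, ECX feature bits */
#define BIT_FMA      (1u << 12)
#define BIT_XSAVE    (1u << 26)
#define BIT_OSXSAVE  (1u << 27)
#define BIT_AVX      (1u << 28)

/* If the hypervisor masked XSAVE/OSXSAVE but left AVX set (the Xen
 * case discussed in this thread), drop AVX and FMA from the reported
 * capability word so callers never take the AVX path. */
static unsigned int clear_avx(unsigned int ecx)
{
    if ((ecx & (BIT_XSAVE | BIT_OSXSAVE)) != (BIT_XSAVE | BIT_OSXSAVE))
        ecx &= ~(BIT_AVX | BIT_FMA);
    return ecx;
}
```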


______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
Development Mailing List                       openssl-dev@openssl.org
Automated List Manager                           majord...@openssl.org
