On Thu, 2010-03-25 at 17:57 +0100, PMHager wrote:
> As all major compilers for Intel CPUs support intrinsics and, if used
> correctly, optimize to the same instructions as direct assembler, IMHO
> these policies should be reconsidered to keep OpenSSL competitive.
>
> For good reasons perlasm is not an option for a company like Intel. To
> get a solution, I now use a self-patched version of OpenSSL with
> intrinsics which fulfills my and my customer's requirements.
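(For what it's worth, the intrinsics approach referred to above looks
roughly like this; a minimal sketch assuming AES-128 with pre-expanded
round keys and a compiler shipping <wmmintrin.h> — gcc needs -maes. The
helper name and the rk[] layout are made up for illustration; real code
would also need the key schedule and a CPUID check before taking this
path.)

    #include <wmmintrin.h>  /* AES-NI intrinsics; e.g. gcc -maes */

    /* Encrypt one 16-byte block with AES-128, given 11 expanded round
     * keys in rk[].  Illustration only. */
    static void aes128_encrypt_block(const __m128i rk[11],
                                     const unsigned char in[16],
                                     unsigned char out[16])
    {
        int i;
        __m128i state = _mm_loadu_si128((const __m128i *)in);

        state = _mm_xor_si128(state, rk[0]);         /* initial AddRoundKey */
        for (i = 1; i < 10; i++)
            state = _mm_aesenc_si128(state, rk[i]);  /* rounds 1..9 */
        state = _mm_aesenclast_si128(state, rk[10]); /* final round */

        _mm_storeu_si128((__m128i *)out, state);
    }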
I'm not sure I understand you. You seem to be talking about the merits of
using inline assembler ('__asm__()' statements inside C code) vs. external
assembler-only files which are processed by perl and then assembled (and
which by necessity contain whole functions which are called from the C
code).

I have no interest in that debate. I'm quite happy using the perlasm
approach. It's a PITA sometimes, but I see the portability advantages of
it, and OpenSSL is a highly portable project.

My question was about the inconsistency between, for example, SSE-optimised
and AESNI-optimised functions. Both are implemented as perlasm; that's not
relevant. What _is_ relevant, however, is that the SSE optimisations end up
in the 'core' AES_encrypt() function which is tested by 'openssl speed aes',
while the AESNI version is in an engine and isn't even used by default
unless the application explicitly asks for it (see the sketch appended
below).

My patch (unapplied for 6 months now) would at least fix the problem of the
AESNI engine not being used automatically, but I still don't quite
understand why it should be an engine while SSE support is not. I'd like to
understand the logic. Should we be moving the SSE optimisations out into
their own engine too?

-- 
David Woodhouse                            Open Source Technology Centre
david.woodho...@intel.com                              Intel Corporation
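(For reference, "explicitly asks for it" currently means something like the
following at application start-up. A rough sketch against the ENGINE API,
assuming the engine is registered under the id "aesni" as in the AES-NI
engine patches, with most error handling omitted. The SSE-optimised
AES_encrypt(), by contrast, needs none of this.)

    #include <openssl/engine.h>

    /* Enable the AESNI engine for this process, if it is available.
     * Sketch only; the "aesni" id is assumed from the engine patches. */
    static int enable_aesni(void)
    {
        ENGINE *e;

        ENGINE_load_builtin_engines();
        e = ENGINE_by_id("aesni");
        if (e == NULL)
            return 0;               /* engine missing or CPU unsupported */
        if (!ENGINE_init(e)) {
            ENGINE_free(e);
            return 0;
        }
        /* Route the EVP ciphers through the engine by default. */
        ENGINE_set_default(e, ENGINE_METHOD_CIPHERS);
        ENGINE_finish(e);           /* release functional reference */
        ENGINE_free(e);             /* release structural reference */
        return 1;
    }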