Multi-level certificate chains
Hi there! I am trying to create my own CA, but am having some small issues. I can create the root CA and then an intermediate CA, and both of these are linked correctly in the certification path, i.e. it shows that Cert B was signed by Cert A. But when I sign a certificate with the intermediate CA (Cert B), the signed certificate (Cert C) has no certification path and carries a warning that Windows couldn't verify the certificate's origin. Is there a way I can make all three linked, i.e. Cert A - Cert B - Cert C, in the certification path? Any help would be appreciated.
Re: FIPS Failure on newer 32-bit Windows platforms.
I'm compiling and linking a dynamic library, and adding both /DYNAMICBASE:NO and /FIXED to LFLAGS in ms\ntdll.mak doesn't work for me. I had to add /fixed directly to this line (search for BASEADDR to find it):

$(FIPSLINK) $(MLFLAGS) /map /fixed /base:$(BASEADDR) /out:$(O_CRYPTO) /def:ms/LIBEAY32.def @ $(SHLIB_EX_OBJ) $(CRYPTOOBJ) $(O_FIPSCANISTER) $(EX_LIBS) $(OBJ_D)\fips_premain.obj

__
OpenSSL Project http://www.openssl.org
User Support Mailing List openssl-users@openssl.org
Automated List Manager majord...@openssl.org
RE: Compiling openssl fips in Windows
BTW, you just have to run nmake -f ms\ntdll.mak install under openssl-fips; it will copy the files to \usr\local\ssl\fips-2.0 in the correct file structure.
RE: ssl handshake failure in 1.0.1 but not 1.0.0
From: Dave Thompson

Yes, the server has a custom root cert that isn't installed on this machine. I am happy that the server cert is correct.

For testing that's okay, but I hope in real use you are verifying. Otherwise an active attacker may be able to MITM your connections.

Production environments do peer verification; I disabled that for development purposes. The ServerHello does indeed contain the secure-renegotiation extension in one pcap and not the other.

Assuming there isn't some really weird logic on the server that supports RFC 5746 only sometimes, this might be due to the (much) larger cipherlist -- OpenSSL puts ERI-SCSV at the end of the cipherlist, so if the server can only handle maybe 32 or 50 or so entries in the cipherlist, it might not see ERI in the default-ciphers case. You could experiment with intermediate-size cipherlists -- my suggestion of forcing -tls1 by itself takes you down from 80 to 52 (because it implicitly disables the TLSv1.2-only SHA2 and GCM suites), as does an explicit -cipher DEFAULT:!TLSv1.2. Removing more things you shouldn't want anyway goes lower, e.g. DEFAULT:!TLSv1.2:!EXPORT:!LOW:!SRP:!kECDH should be 30. [snip] If the problem is the length of the ClientHello and/or cipherlist -- as is consistent with, but not conclusively proven by, what you've seen so far, and is somewhat similar to the fact that other servers have already been found to fail or hang *initial* negotiation when ClientHello = 256 bytes (although this server did *not* fail there) -- just using a shorter cipherlist should work. A few akRSA, one or two DHE-RSA and ECDHE-RSA (because a server with RSA can still do akRSA unless KU prohibits it), a few ECDHE-ECDSA, and perhaps a few DHE-DSS -- maybe 20 total -- should handle any sane server.

That's great, thank you for the detailed explanations. Your hunch that the problem lies with the length of the cipherlist seems to bear out; I removed some of the ciphers you suggested and the server still happily connects.
It creates a ClientHello of 198 bytes, which should also avoid the other problem you mention (that I haven't seen on this particular server). Thanks for all the help, Ben
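The cipherlist-trimming experiment above can be reproduced locally before touching the server. A small sketch (the hostname is a placeholder, and suite counts vary by OpenSSL version -- the 80/52/30 figures quoted in the thread are for 1.0.1):

```shell
# Count how many suites each cipher string expands to:
openssl ciphers 'DEFAULT' | tr ':' '\n' | wc -l
openssl ciphers 'DEFAULT:!TLSv1.2' | tr ':' '\n' | wc -l

# Then retry the handshake with a trimmed list. On 1.0.1 the longer string
# 'DEFAULT:!TLSv1.2:!EXPORT:!LOW:!SRP:!kECDH' cuts the list to about 30:
# openssl s_client -connect server.example.com:443 -tls1 -cipher 'DEFAULT:!TLSv1.2'
```

Comparing the two counts shows how much smaller the ClientHello's cipherlist becomes, which is the quantity the server seems to be sensitive to.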
Re: Multi-level certificate chains
On Tue, November 12, 2013 05:47, Alan Jakimiuk wrote:

> Is there a way I can make all three linked?

This should be the default.

> i.e. Cert A-Cert B-Cert C in the certification path? Any help would be appreciated

Can you view the certificates?

openssl x509 -noout -text -in certfile

In both the intermediate and Cert C you should see something like

X509v3 Authority Key Identifier:
    keyid:EB:DF:B2:26:76:...
    serial:6F:7F:C0:...

The serial in the intermediate must match the serial of the root, and the one in Cert C must match the serial of the intermediate.

Walter
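For completeness, a minimal sketch of building and verifying a three-level chain with the openssl CLI (all filenames, subjects, and key sizes here are throwaway choices, not from the thread). The step people most often miss is supplying the intermediate via -untrusted when verifying -- on Windows the equivalent is importing Cert B into the "Intermediate Certification Authorities" store:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# Root CA (Cert A), self-signed:
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=Root A" \
    -keyout rootA.key -out rootA.pem -days 365

# Intermediate CA (Cert B), signed by the root; it must carry CA:TRUE
# or modern verifiers will reject the chain:
printf 'basicConstraints=CA:TRUE\n' > ca.ext
openssl req -newkey rsa:2048 -nodes -subj "/CN=Intermediate B" \
    -keyout intB.key -out intB.csr
openssl x509 -req -in intB.csr -CA rootA.pem -CAkey rootA.key \
    -CAcreateserial -extfile ca.ext -out intB.pem -days 365

# End-entity certificate (Cert C), signed by the intermediate:
openssl req -newkey rsa:2048 -nodes -subj "/CN=Leaf C" \
    -keyout certC.key -out certC.csr
openssl x509 -req -in certC.csr -CA intB.pem -CAkey intB.key \
    -CAcreateserial -out certC.pem -days 365

# Verify the full A - B - C path:
openssl verify -CAfile rootA.pem -untrusted intB.pem certC.pem
```

If the last command prints "certC.pem: OK", the chain is sound and the remaining Windows issue is purely which stores the A and B certificates were imported into.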
Re: Fwd: How to tweak openSSL vulnerabilities CVE-2013-0169
On Tue, Nov 12, 2013, Alok Sharma wrote:

One of the OpenSSL vulnerabilities is CVE-2013-0169: The TLS protocol 1.1 and 1.2 and the DTLS protocol 1.0 and 1.2, as used in OpenSSL, do not properly consider timing side-channel attacks on a MAC check requirement during the processing of malformed CBC padding, which allows remote attackers to conduct distinguishing attacks and plaintext-recovery attacks via statistical analysis of timing data for crafted packets, aka the Lucky Thirteen issue. All versions of OpenSSL are affected, including 1.0.1c, 1.0.0j and 0.9.8x. Affected users should upgrade to OpenSSL 1.0.1d, 1.0.0k or 0.9.8y.

We use the DTLS 1.0 protocol. Does anyone know of any setting in the OpenSSL configuration that can be tweaked to mitigate this vulnerability? E.g. a setting to not allow use of algorithms with CBC, etc.?

The vulnerability is addressed in the latest OpenSSL releases. If you disable CBC ciphers then you're only left with GCM and RC4. RC4 can't be used with DTLS, and GCM is only supported in DTLS 1.2.

Steve.
--
Dr Stephen N. Henson. OpenSSL project core developer.
Commercial tech support now available see: http://www.openssl.org
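To see concretely what would remain after excluding CBC suites, one can expand the relevant cipher-string aliases locally (a sketch; exact output depends on the OpenSSL version and build options):

```shell
# AEAD (GCM) suites -- only usable with (D)TLS 1.2:
openssl ciphers -v 'AESGCM'

# RC4 stream suites -- unusable with DTLS, and modern OpenSSL builds
# may report "no cipher match" here because RC4 has been retired:
openssl ciphers -v 'RC4' || true
```

This makes Steve's point visible: on DTLS 1.0 neither family is available, so upgrading to a fixed release is the only real mitigation.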
Bug in OpenSSL 1.0.1e AES_cbc_encrypt?
I've noticed what appears to be a bug in the OpenSSL 1.0.1e 586 assembly-optimized AES_cbc_encrypt function when encrypting data that is more than 1 block in length but not an integral multiple of the block size. Specifically, it appears that when encrypting the partial-block tail, the block is XOR-ed with the *original* IV passed to AES_cbc_encrypt, rather than with the previous ciphertext block. This results in incorrect output when decrypting.

To test this, I encrypted 40 bytes (2 full blocks plus a half-block tail) of zeros with a 128-bit all-zeros key (key size does not appear to be a factor, but provided for reproducibility) and an all-zeros initial IV. The output is as follows:

66 E9 4B D4 EF 8A 2C 3B 88 4C FA 59 CA 34 2B 2E
F7 95 BD 4A 52 E2 9E D7 13 D3 13 FA 20 E9 8D BC
66 E9 4B D4 EF 8A 2C 3B 88 4C FA 59 CA 34 2B 2E

Note that the last ciphertext block is identical to the first ciphertext block; since the plaintext is the same (after the internal zero-padding that occurs before encrypting the final partial block), this further indicates that it was encrypted using the same IV as the first block. When decrypting this, the final block is corrupt:

00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
F7 95 BD 4A 52 E2 9E D7 13 D3 13 FA 20 E9 8D BC

If instead the partial-block tail is encrypted separately from the full blocks, the ciphertext is:

66 E9 4B D4 EF 8A 2C 3B 88 4C FA 59 CA 34 2B 2E
F7 95 BD 4A 52 E2 9E D7 13 D3 13 FA 20 E9 8D BC
A1 0C F6 6D 0F DD F3 40 53 70 B4 BF 8D F5 BF B3

This decrypts to 3 blocks of zeros as expected. Recompiling without assembly-optimized AES results in the expected functionality in both cases.

I've searched the request tracker and performed other general searches to see if this has already been raised/debunked but couldn't find anything. Can anyone confirm whether this is a bug, or am I missing something? I can provide the code used for the above if required.
Thanks, CO
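The correct chaining for the two full blocks can be cross-checked against the EVP command-line path, which does not exercise the reported assembly partial-block code. With an all-zero key, IV, and plaintext, the second ciphertext block must be E(first block) and so differ from the first -- matching the dump above:

```shell
# 32 zero bytes, AES-128-CBC, zero key and IV, no padding.
# Expected: first block  66 e9 4b d4 ef 8a 2c 3b 88 4c fa 59 ca 34 2b 2e
#           second block f7 95 bd 4a 52 e2 9e d7 13 d3 13 fa 20 e9 8d bc
head -c 32 /dev/zero |
  openssl enc -aes-128-cbc -nopad \
      -K 00000000000000000000000000000000 \
      -iv 00000000000000000000000000000000 |
  od -An -tx1
```

If the buggy path were taken here, the second block would instead equal the first (both XOR-ed with the zero IV), which is exactly the symptom reported for the partial-block tail.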
RE: Signature Algorithm that was disabled because that algorithm is not secure
Two weeks ago Viktor Dukhovni wrote:

> Actually, SHA-2 SHOULD NOT (yet) be used for signing certificates. Many TLSv1 clients don't support SHA-2, and servers must present SHA-1 certificates except when TLSv1.2 clients indicate SHA-2 support. Fielding multiple certificates with different signature algorithms is too complex.

Good point. Microsoft isn't rushing to drop recognition of SHA-1 signatures:

http://arstechnica.com/security/2013/11/hoping-to-avert-collision-with-disaster-microsoft-retires-sha1/

"The company's software will stop recognizing the validity of digital certificates that use the SHA1 cryptographic algorithm after 2016 ..."

Thanks, Paul
Qt application using libeay32.dll and ssleay32.dll cannot establish connection in certain virgin installations unless other https apps have been used.
We have a cross-platform client application based on Trolltech/Nokia/Digia Qt that uses a secure socket for JSON. It works perfectly well on OS X, and works on most Windows installations. The libs libeay32.dll and ssleay32.dll are located in the same directory as all the app's libraries. However, when we tested on pure Windows 7 and Windows 8 installations running on VMware, the connections failed. Installing another client like TortoiseSVN and using it once solved the issue, even if the "helper" application was uninstalled afterwards. What are we missing? We have wasted many hours on this and are likely overlooking the obvious. Help would be appreciated...

Harald
Trying to understand performance differences
Collected performance numbers using openssl speed for two copies of OpenSSL 1.0.1e, one built as FIPS-capable, the other not, running on an ARMv6. I am having a hard time understanding the differences I observed and would appreciate any insight.

Non-FIPS capable:

# openssl speed aes
type             16 bytes    64 bytes   256 bytes  1024 bytes  8192 bytes
aes-128 cbc      2345.03k    2627.50k   2708.99k   2739.11k    2730.67k
aes-192 cbc      2029.69k    2236.10k   2293.85k   2316.84k    2310.14k
aes-256 cbc      1782.30k    1943.21k   1988.52k   2000.21k    1994.93k

# openssl speed -evp aes-128-cbc
aes-128-cbc      2234.73k    2591.72k   2698.50k   2726.91k    2733.40k
# openssl speed -evp aes-192-cbc
aes-192-cbc      1941.83k    2206.61k   2284.12k   2304.68k    2310.14k
# openssl speed -evp aes-256-cbc
aes-256-cbc      1719.65k    1923.88k   1982.21k   1997.82k    2001.58k

FIPS capable:

# openssl speed aes
type             16 bytes    64 bytes   256 bytes  1024 bytes  8192 bytes
aes-128 cbc      2540.86k    2846.65k   2923.78k   2946.73k    2951.85k
aes-192 cbc      2193.64k    2416.26k   2478.85k   2503.15k    2501.29k
aes-256 cbc      1933.31k    2103.79k   2150.57k   2163.37k    2160.95k

# openssl speed -evp aes-128-cbc
aes-128-cbc      4370.26k    6091.88k   6787.25k   6981.69k    7009.62k
# openssl speed -evp aes-192-cbc
aes-192-cbc      3992.79k    5353.26k   5865.22k   6010.54k    6048.43k
# openssl speed -evp aes-256-cbc
aes-256-cbc      3650.15k    4773.53k   5176.66k   5307.68k    5339.87k

I don't understand why the non-EVP and EVP results are practically the same with the non-FIPS-capable library, but the EVP results are significantly faster than the non-EVP results on the FIPS-capable one.

MV
Re: Trying to understand performance differences
On Wed, Nov 13, 2013, Vuille, Martin (Martin) wrote:

> Collected performance numbers using openssl speed for two copies of OpenSSL 1.0.1e, one built as FIPS-capable, the other not, running on an ARMv6. I am having a hard time understanding the differences I observed and would appreciate any insight.

[snip results]

> I don't understand why the non-EVP and EVP results are practically the same with the non-FIPS-capable library, but the EVP results are significantly faster than the non-EVP results on the FIPS-capable one.

For the non-FIPS-capable build, EVP calls the low-level implementations in that version of OpenSSL, so other than the EVP overheads you get similar results.

For the FIPS-capable version, EVP calls the FIPS module implementations of the algorithms, while the low-level calls still use the OpenSSL versions. So you're comparing two different implementations, and for some reason the implementations in OpenSSL aren't as fast as those in the FIPS module on your setup.

Steve.
--
Dr Stephen N. Henson. OpenSSL project core developer.
Commercial tech support now available see: http://www.openssl.org