Re: Can the certificate expiration be queried directly?
On 11/26/2011 6:00 PM, Lou Picciano wrote:
> Can a certificate's expiration date be queried directly? I.e., apart from an expired cert being rejected out of hand, or from a CRL being read to determine a cert's validity...? I'm interested in reading the expiration from a loaded, currently-valid cert.

Yes, of course. Since you are talking about a loaded certificate, I presume you are coding/modifying an application that already uses OpenSSL to process the certificate, in which case I think it is easy: you presumably have the loaded cert as a pointer to type X509, let's call it pCert. The expiry date and time is stored at pCert->cert_info->validity->notAfter, which is a pointer to type ASN1_TIME.

P.S. A little Zimbra issue to watch out for: if you pick a mail on the mailing list and press reply, Zimbra tells the mailing list to mark your mail as an answer to that mail, even if you remove all the text that mentions the old mail. Zimbra may not show you this happened, but most other mail programs show it to the world.

__
OpenSSL Project                          http://www.openssl.org
User Support Mailing List                openssl-users@openssl.org
Automated List Manager                   majord...@openssl.org
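If a command-line check is enough (rather than reading the field from C), the openssl x509 utility can print the same notAfter field directly. A minimal sketch; the throwaway self-signed certificate is generated here purely so the commands have something to operate on:

```shell
# Generate a throwaway self-signed certificate purely for the demonstration
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
    -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem -days 30 2>/dev/null

# Print only the notAfter (expiry) field of the certificate
openssl x509 -in /tmp/demo-cert.pem -noout -enddate

# -checkend N exits 0 if the cert is still valid N seconds from now,
# nonzero otherwise - handy for scripted expiry checks
openssl x509 -in /tmp/demo-cert.pem -noout -checkend 0 && echo "still valid"
```

From C, the X509_get_notAfter() macro (which reaches the same ASN1_TIME field) together with X509_cmp_current_time() avoids poking at the struct members directly.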
Re: certificate storage format
On 11/28/2011 08:33 AM, prabhu kalyan rout wrote:
> Hi, my question is how many certificate storage formats are available and what are they? Just like der, pkcs12.

To my knowledge, there is PEM, DER, PKCS#7 and PKCS#12.

cheers
Mathias
Re: certificate storage format
On 11/28/2011 8:33 AM, prabhu kalyan rout wrote:
> Hi, my question is how many certificate storage formats are available and what are they? Just like der, pkcs12.

Fortunately, because X.509 certificates are all based on the same standard (ITU-T standard X.509), there are actually very few formats in circulation:

Certificate alone (without the secret private key of the owner):

1. BER-encoded (usually the DER subset) binary form, usually with file extension .crt, .pcs or similar.
2. S/MIME: Base64 encoding of #1, with a text line above and below that says -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----.

Certificate and some related certificates (like that of the CA) together in one file, but still without the secret private key of the owner:

3. Concatenated BER-encoded (rare): simply #1 of each certificate concatenated into one big file.
4. Concatenated S/MIME: simply #2 of each certificate concatenated into one big text file.
5. BER-encoded PKCS#7 envelope with no message in it, but still with the supplemental list of certificates in it.

The secret private key on its own, possibly with the public key, but not the certificate:

6. BER-encoded (usually the DER subset) PKCS#? format, possibly PKCS#8-encrypted. This format can be used by OpenSSL and some other software.
7. S/MIME: Base64 encoding of #6, with different text lines above/below than #2.
8. Classic OpenSSL variant, BER-encoded: very similar to, but not quite the same as, #6.
9. Classic OpenSSL variant, S/MIME-encoded: Base64 encoding of #8 with (almost?) the same text lines as #7.
10. Microsoft .pvk format (used by historic AuthentiCode tools only): the MS CryptoAPI private key structure PRIVATEKEYBLOB, optionally encrypted with a straight password-derived key.

The certificate and the private key together in one file, optionally with related certificates (like that of the CA) included:

11. Concatenated BER-encoded (rare): simply #1 of each certificate and #6 or #8 of the private key concatenated into one big file.
12. Concatenated S/MIME: simply #2 of each certificate and #7 or #9 of the private key concatenated into one big text file.
13. PKCS#12 format, often with file extension .p12 or .pfx.

Anyone have any other formats to add to this list?
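A few of the formats above can be produced from one another with the openssl command line. A hedged sketch, using a throwaway key and certificate so the commands are self-contained (the file names are arbitrary):

```shell
# Throwaway key + self-signed cert to convert (illustration only)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
    -keyout /tmp/fmt-key.pem -out /tmp/fmt-cert.pem -days 30 2>/dev/null

# Format #2 -> #1: PEM (Base64 with BEGIN/END lines) to binary DER
openssl x509 -in /tmp/fmt-cert.pem -outform der -out /tmp/fmt-cert.der

# Format #1 -> #2: DER back to PEM
openssl x509 -in /tmp/fmt-cert.der -inform der -out /tmp/fmt-cert2.pem

# Format #13: certificate and private key bundled as PKCS#12
openssl pkcs12 -export -in /tmp/fmt-cert.pem -inkey /tmp/fmt-key.pem \
    -passout pass:demo -out /tmp/fmt-bundle.p12
```

The round trip through DER is lossless: the re-derived PEM describes the same certificate as the original.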
ldif utility
Hi all, while going through a document I found there is a utility called ldif which will take a certificate as input and produce an LDIF file. But in my OpenLDAP installation I didn't find this utility. Can anybody tell me where to look for it? Thanks
Re: Blowfish algorithm problem with OpenSSL 1.0.0e (32-bit)
No, it doesn't work on Linux either, if I link my test program using OpenSSL 1.0.0e. The test program works on Linux if I link it differently.

$ ldd blowfish
        libcrypto.so.1 => /usr/lib/libcrypto.so.1 (0x40022000)
        libc.so.6 => /lib/i686/libc.so.6 (0x400de000)
        libdl.so.2 => /lib/libdl.so.2 (0x4020e000)
        /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x4000)
/usr/lib/libcrypto.so -> libcrypto.so.0.9.6

Is this a bug?

Jussi

2011/11/24 Jussi Peltonen pelt...@gmail.com:

Hello, newbie question regarding the Blowfish algorithm: why do my encrypt/decrypt functions fail on Windows XP SP3 with OpenSSL 1.0.0e? The same functions work on my Linux workstation.

Windows output:
Encrypt:
encrypting 7680 bytes
EVP_DecryptUpdate 1024 bytes
EVP_DecryptFinal 8 bytes
EVP_DecryptUpdate 1024 bytes
EVP_DecryptFinal 8 bytes
EVP_DecryptUpdate 1024 bytes
EVP_DecryptFinal 8 bytes
EVP_DecryptUpdate 1024 bytes
EVP_DecryptFinal 8 bytes
EVP_DecryptUpdate 1024 bytes
EVP_DecryptFinal 8 bytes
EVP_DecryptUpdate 1024 bytes
EVP_DecryptFinal 8 bytes
EVP_DecryptUpdate 1024 bytes
EVP_DecryptFinal 8 bytes
EVP_DecryptUpdate 512 bytes
EVP_DecryptFinal 8 bytes
encrypted 7744 bytes

Decrypt:
decrypting 7744 bytes
EVP_DecryptUpdate 1024 bytes
EVP_DecryptFinal 0 bytes
EVP_DecryptUpdate 1032 bytes   <-- why 1032 instead of 1024?
EVP_DecryptFinal 0 bytes
EVP_DecryptUpdate 1032 bytes
EVP_DecryptFinal 0 bytes
EVP_DecryptUpdate 1032 bytes
EVP_DecryptFinal 0 bytes
EVP_DecryptUpdate 1032 bytes
EVP_DecryptFinal 0 bytes
EVP_DecryptUpdate 1032 bytes
EVP_DecryptFinal 0 bytes
EVP_DecryptUpdate 1032 bytes
EVP_DecryptFinal 0 bytes
EVP_DecryptUpdate 520 bytes
EVP_DecryptFinal 0 bytes
decrypted 7736 bytes

Linux output:
==
encrypting 7680 bytes
EVP_DecryptUpdate 1024 bytes
EVP_DecryptFinal 8 bytes
EVP_DecryptUpdate 1024 bytes
EVP_DecryptFinal 8 bytes
EVP_DecryptUpdate 1024 bytes
EVP_DecryptFinal 8 bytes
EVP_DecryptUpdate 1024 bytes
EVP_DecryptFinal 8 bytes
EVP_DecryptUpdate 1024 bytes
EVP_DecryptFinal 8 bytes
EVP_DecryptUpdate 1024 bytes
EVP_DecryptFinal 8 bytes
EVP_DecryptUpdate 1024 bytes
EVP_DecryptFinal 8 bytes
EVP_DecryptUpdate 512 bytes
EVP_DecryptFinal 8 bytes
encrypted 7744 bytes
decrypting 7744 bytes
EVP_DecryptUpdate 1024 bytes
EVP_DecryptFinal 0 bytes
EVP_DecryptUpdate 1024 bytes
EVP_DecryptFinal 0 bytes
EVP_DecryptUpdate 1024 bytes
EVP_DecryptFinal 0 bytes
EVP_DecryptUpdate 1024 bytes
EVP_DecryptFinal 0 bytes
EVP_DecryptUpdate 1024 bytes
EVP_DecryptFinal 0 bytes
EVP_DecryptUpdate 1024 bytes
EVP_DecryptFinal 0 bytes
EVP_DecryptUpdate 1024 bytes
EVP_DecryptFinal 0 bytes
EVP_DecryptUpdate 512 bytes
EVP_DecryptFinal 0 bytes
decrypted 7680 bytes

Source code:

static int decrypt (unsigned char *inbuf, int size, unsigned char *outbuf, int *outsz)
{
    int olen, tlen, n, left;
    unsigned char *inp = inbuf;
    unsigned char *outp = outbuf;
    EVP_CIPHER_CTX ctx;

    printf("decrypting %d bytes\n", size);
    EVP_CIPHER_CTX_init (&ctx);
    EVP_DecryptInit (&ctx, EVP_bf_cbc (), key, iv);
    left = size;
    *outsz = 0;
    while (left > 0) {
        n = (left > OP_SIZE ? OP_SIZE : left);
        olen = 0;
        memset((void *)outp, 0, IP_SIZE);
        if (EVP_DecryptUpdate (&ctx, outp, &olen, inp, n) != 1) { return -1; }
        printf("EVP_DecryptUpdate %d bytes\n", olen);
        if (EVP_DecryptFinal (&ctx, outp + olen, &tlen) != 1) { return -1; }
        printf("EVP_DecryptFinal %d bytes\n", tlen);
        *outsz = ((*outsz) + olen + tlen);
        inp += n;
        left -= n;
        outp += (olen + tlen);
    }
    printf("decrypted %d bytes\n", *outsz);
    EVP_CIPHER_CTX_cleanup (&ctx);
    return 0;
}

static int encrypt (unsigned char *inbuf, int size, unsigned char *outbuf, int *outsz)
{
    int olen, tlen, n, left;
    unsigned char *inp = inbuf;
    unsigned char *outp = outbuf;
    EVP_CIPHER_CTX ctx;

    printf("encrypting %d bytes\n", size);
    EVP_CIPHER_CTX_init (&ctx);
    EVP_EncryptInit (&ctx, EVP_bf_cbc (), key, iv);
    left = size;
    *outsz = 0;
    while (left > 0) {
        n = (left > IP_SIZE ? IP_SIZE : left);
        olen = 0;
        if (EVP_EncryptUpdate (&ctx, outp, &olen, inp, n) != 1) { return -1; }
        printf("EVP_DecryptUpdate %d bytes\n", olen);
        if (EVP_EncryptFinal (&ctx, outp + olen, &tlen) != 1) { return -1; }
        printf("EVP_DecryptFinal %d bytes\n", tlen);
        *outsz =
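For what it's worth, the 1032-byte chunks are consistent with CBC padding being added by every EVP_*Final call: finalizing after each 1024-byte chunk appends one full 8-byte Blowfish block of padding, so each encrypted chunk becomes 1032 bytes, and a streaming loop should normally call EVP_EncryptFinal/EVP_DecryptFinal once after all chunks, not once per chunk. The effect is easy to reproduce from the shell (AES-128-CBC is used here only because some modern builds disable Blowfish by default; the padding behaviour is the same idea with a 16-byte block instead of 8):

```shell
# Encrypt exactly 1024 bytes; CBC with default PKCS padding always
# appends a whole extra block when the input is block-aligned
head -c 1024 /dev/zero > /tmp/chunk.bin
openssl enc -aes-128-cbc \
    -K 00112233445566778899aabbccddeeff \
    -iv 000102030405060708090a0b0c0d0e0f \
    -in /tmp/chunk.bin -out /tmp/chunk.enc
wc -c < /tmp/chunk.enc   # 1040 bytes: 1024 + one 16-byte padding block
```

The key and IV above are arbitrary hex placeholders, not anything from the original program.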
Re: Blowfish algorithm problem with OpenSSL 1.0.0e (32-bit)
On Mon November 28 2011, Jussi Peltonen wrote:
> No, it doesn't work on Linux either, if I link my test program using OpenSSL 1.0.0e. The test program works on Linux if I link it differently.
>
> $ ldd blowfish
>         libcrypto.so.1 => /usr/lib/libcrypto.so.1 (0x40022000)
>         libc.so.6 => /lib/i686/libc.so.6 (0x400de000)
>         libdl.so.2 => /lib/libdl.so.2 (0x4020e000)
>         /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x4000)
> /usr/lib/libcrypto.so -> libcrypto.so.0.9.6
>
> Is this a bug?

Only of your installation's /etc/ld.so.conf contents. ;-) or the included files/directories. Fix as required and then run ldconfig, see: man ldconfig

Mike

> [...]
Re: Blowfish algorithm problem with OpenSSL 1.0.0e (32-bit)
Mike,

Did you read the original post? Why does the blowfish sample not work on Windows XP?

Jussi

2011/11/28 Michael S. Zick open...@morethan.org:
> Only of your installation's /etc/ld.so.conf contents. ;-) or the included files/directories. Fix as required and then run ldconfig, see: man ldconfig
>
> [...]
FYI: Windows DLL semantics [Was: Blowfish algorithm problem with OpenSSL 1.0.0e (32-bit)]
On 11/28/2011 3:56 PM, Michael S. Zick wrote:
> On Mon November 28 2011, Jussi Peltonen wrote:
>> Mike, Did you read the original post? Why does the blowfish sample not work on Windows XP?
>
> Yup, My guess is a similar problem - not loading the *.dll version that you expected/intended to load or not linking against the *.dll version that you expected/intended. M$ and *nix systems use different library locating/loading algorithms - I can't help you with sorting out your M$ problem. Mike

Just for your information, here are the DLL reference rules on Windows when not using .NET mechanisms:

1. Library version numbers are only part of the DLL name if you include them yourself when designing the library; if so, the number must go before .DLL, as the system does not treat the version number specially.

2. When creating a DLL, the linker also emits a stub library listing the exported functions and the bare (pathless) DLL file name. This has the same file format as a static library and is passed to Microsoft's /usr/bin/ld when linking programs that refer to the DLL. /usr/bin/ld does not take .DLL files as input. Circular DLL dependencies are fully supported.

3. The decision as to which DLL file name to search for a given function is finalized by Microsoft's /usr/bin/ld at static link time, based on the information in the stub library. Thus there is no risk of getting a function from another indirectly loaded DLL like on UNIX.

4. At load time, Microsoft's /lib/ld.so looks at the DLL file names embedded in programs and DLLs (which are treated almost the same) and searches for each one in the following prioritized list of locations:
   4.1 DLLs already loaded into the current process
   4.2 A list of common system DLLs (the KnownDLLs)
   4.3 The directory containing the .EXE file of the process (not the referring DLL)
   4.4 The regular PATH specified in the environment
   4.5 The current directory
   4.6 Give up and refuse to load.

Thus if the OpenSSL DLLs on Windows had included the API version in their file names, only DLLs with that API version would need to be examined, but unfortunately, this is not the case.

5. There are ways to enumerate the DLLs loaded in a given process (easy ways for your own process, hard ways for other processes), so you can check whether you are using the expected DLL files. This is routinely done by debuggers, so if the OP runs his program under a debugger, he can simply open the appropriate debugger window and get the path to the SSLeay32*.DLL or libeay32*.DLL actually loaded.
Re: Blowfish algorithm problem with OpenSSL 1.0.0e (32-bit)
I think I'm loading the correct DLL versions. From the Visual Studio IDE I can see that libeay32.dll is loaded from the debug folder where I copied it:

libeay32.dll  D:\work\openssl\ssl_test\debug\libeay32.dll  N/A  N/A  Symbols loaded.  D:\work\openssl\ssl_test\debug\libeay32.pdb  5  1.00.0.5  25.11.2011 11:34  1000-10113000  [7580] ssl_test.exe: Native

libeay32.dll is of version 1.0.0.5. I have built it from the source code. Having the libeay32.pdb file in the same folder, I'm able to step into EVP_DecryptUpdate(): evp_enc.c, lines 426-427:

    if (fix_len)
        *outl += b;

The fix_len flag is not set during the first decrypted block but is set during the rest of the blocks, as the output of my test program shows. Because of that increment, the output buffer in my program becomes larger than the original data was before it was encrypted.

    outp += (olen + tlen); // olen too big here

2011/11/28 Michael S. Zick open...@morethan.org:
> Yup, My guess is a similar problem - not loading the *.dll version that you expected/intended to load or not linking against the *.dll version that you expected/intended. M$ and *nix systems use different library locating/loading algorithms - I can't help you with sorting out your M$ problem.
>
> [...]
ASN1 CHOICE implementation troubles
Hello,

I am trying to encode an ASN1 CHOICE in DER format, so I wrote a test program. In my header file the choice is defined as follows:

/* GMP ::= CHOICE */
/* none INTEGER */
/* supported OCTET_STRING */
typedef struct {
    int type;
    union {
        ASN1_INTEGER *none;
        ASN1_OCTET_STRING *supported;
    } d;
} GSPT_GMP;

DECLARE_ASN1_FUNCTIONS(GSPT_GMP)
DECLARE_STACK_OF(GSPT_GMP)

Below is how I implemented it in my .c file:

ASN1_CHOICE(GSPT_GMP) = {
    ASN1_SIMPLE(GSPT_GMP, d.none, ASN1_INTEGER),
    ASN1_SIMPLE(GSPT_GMP, d.supported, ASN1_OCTET_STRING)
} ASN1_CHOICE_END(GSPT_GMP)

IMPLEMENT_STACK_OF(GSPT_GMP)
IMPLEMENT_ASN1_FUNCTIONS(GSPT_GMP)

Then I wrote functions to create and DER-encode the GSPT_GMP choice:

GSPT_GMP *GSPT_GMP_create(int x, int c, char *s)
{
    GSPT_GMP *tGMP = NULL;

    tGMP = GSPT_GMP_new();
    if (tGMP == NULL) { /* error trap */ }
    switch (x) {
    case V_ASN1_INTEGER:
        tGMP->type = V_ASN1_INTEGER;
        tGMP->d.none = ASN1_INTEGER_new();
        if (tGMP->d.none == NULL) { /* error trap */ }
        ASN1_INTEGER_set(tGMP->d.none, c);
        break;
    case V_ASN1_OCTET_STRING:
        tGMP->type = V_ASN1_OCTET_STRING;
        tGMP->d.supported = ASN1_OCTET_STRING_new();
        if (tGMP->d.supported == NULL) { /* error trap */ }
        ASN1_OCTET_STRING_set(tGMP->d.supported, (const unsigned char *)s, strlen(s));
        break;
    }
    return tGMP;
}

It works fine. My problems come during the encoding process. I have no problem encoding an ASN1 SEQUENCE structure, but with the CHOICE the trouble starts. Below is my DER-encoding code:

unsigned char *GSPT_GMP_i2d(GSPT_GMP *gmp, long *len)
{
    unsigned char *buf = NULL, *next;
    int total = 0;

    *len = i2d_GSPT_GMP(gmp, NULL);
    buf = next = (unsigned char *)malloc(*len);
    if (buf == NULL) { /* error trap */ }
    *len = i2d_GSPT_GMP(gmp, &next);
    return buf;
}

Then in my calling program I record this encoded buffer in a file and read it with the openssl asn1parse command. But it doesn't work: the buffer is incorrect, it contains nothing, its size is 0. If I follow the same procedure with a SEQUENCE there is no problem; I can read it with the asn1parse tool. If the CHOICE is encapsulated in a SEQUENCE that contains other SEQUENCEs or a STACK_OF SEQUENCE, only the CHOICE is unreadable.

int main(void)
{
    GSPT_GMP *gmp = NULL;
    long len = 0;
    unsigned char *der = NULL;
    int fd = 0;

    gmp = GSPT_GMP_create(V_ASN1_OCTET_STRING, 1, "string");
    der = GSPT_GMP_i2d(gmp, &len);
    fd = open("./proto.der", O_RDWR | O_TRUNC);
    if (fd < 0) { /* error trap */ }
    write(fd, (const void *)der, len);
    close(fd);
    return 0;
}

Output for the choice (the file produced has size 0):

$ openssl asn1parse -inform der -in proto.der
Error: offset too large

The same test with an ASN1 SEQUENCE encoded and recorded to a file:

$ openssl asn1parse -inform der -in proto.der
    0:d=0  hl=2 l=  14 cons: SEQUENCE
    2:d=1  hl=2 l=   4 prim: OBJECT            :1.0.0.2.3
    8:d=1  hl=2 l=   6 prim: OCTET STRING      :TEST

I don't see what's wrong; maybe someone can help me diagnose the trouble.

Best regards
Franck
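As a point of comparison while debugging output like the above, a correctly DER-encoded OCTET STRING (the CHOICE arm being exercised here) is just a 0x04 tag, a length byte and the contents, and asn1parse decodes it happily. A small hand-rolled sketch:

```shell
# Hand-assemble a minimal DER OCTET STRING: tag 0x04, length 6, "string"
# (octal escapes: \004 = tag, \006 = length)
printf '\004\006string' > /tmp/choice.der
openssl asn1parse -inform der -in /tmp/choice.der
```

If the encoder were producing correct output for the CHOICE, the file would parse to a single "prim: OCTET STRING" line like this one instead of "Error: offset too large".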
CertificateVerify message size hard-coded to 514 bytes
Does anyone know why the CertificateVerify message is still limited to a max size of 514 bytes? http://www.mail-archive.com/openssl-dev@openssl.org/msg13520.html Is there any risk with increasing this to 4096 bytes? Thank you.
Re: ASN1 CHOICE implementation troubles
It seems I forgot to use ASN1_ITEM_TEMPLATE:

ASN1_ITEM_TEMPLATE(N) =
    ASN1_EX_TEMPLATE_TYPE(ASN1_TFLG_EXPTAG|ASN1_TFLG_APPLICATION, 1, N, ASN1_INTEGER)
ASN1_ITEM_TEMPLATE_END(N)

Now CHOICE encoding works.

Best regards
Franck

On 28/11/2011 11:00, Franck Rupin wrote:
> [...]
revoking a certificate without having to provide pass phrase as next step
Hi, I'm trying to find a way to make my PHP script capable of automatic certificate revocation. The script is run from the console and the line looks like this:

exec("openssl ca -keyfile ca.key -cert ca.pem -revoke ".$userId.".pem");

which works like this:

openssl ca -keyfile ca.key -cert ca.pem -revoke 04.pem

It generally works, but after the command above is sent, I have to type in the pass phrase manually. I need it to be done automatically. Is there any way to achieve this? I looked through the manual but didn't find any information on how and whether this could be done.

Regards, Peter
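One likely answer, assuming a reasonably recent openssl: most openssl commands, including openssl ca, accept a -passin option to supply the key pass phrase non-interactively (pass:, file: and env: sources exist; file: or env: keeps the secret out of process listings). The sketch below demonstrates the option on a simpler command, since a full CA setup is too long for a snippet; the file names are throwaway placeholders:

```shell
# Throwaway encrypted RSA key, purely for the demonstration
openssl genrsa -aes128 -passout pass:secret -out /tmp/enc-key.pem 2048 2>/dev/null

# -passin answers the pass phrase prompt non-interactively
openssl rsa -in /tmp/enc-key.pem -passin pass:secret -noout -check

# The same option slots into the revocation command from the question, e.g.:
# openssl ca -keyfile ca.key -cert ca.pem -passin file:/path/to/ca.pass -revoke 04.pem
```

From PHP that would mean appending something like " -passin file:..." to the exec() string rather than expecting a prompt.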
RE: Usage of CAPath/CAFile options in int SSL_CTX_load_verify_locations Reg.
From: owner-openssl-us...@openssl.org On Behalf Of Ashok C
Sent: Monday, 28 November, 2011 00:35

> One more question here: In case of a server application, it is expected to send the intermediate certificates to the client. And in this case, is this API -- SSL_CTX_load_verify_locations() -- sufficient to be used? Or is there a separate API to send the intermediate CA certificates across to the client?

No, certs to *send* are separate from verifying *received*. Yes, SSL_CTX_use_certificate_chain_file or SSL_CTX_add_extra_chain_cert.

Similar but less obvious: if you use client auth (i.e. client presents cert), the CA name(s) requested in the CertRequest are separate from the CA cert(s) actually used for verification. Often you want to make these the same, but it's not automatic. Use SSL_[CTX_]set_client_CA_list or SSL_[CTX_]add_client_CA.

> P.S. My previous query also is unanswered. It would be great if I get some responses to that also ;)
>
> From: Ashok C ash@gmail.com
> Date: Wed, Nov 23, 2011 at 12:55 PM
>
> We are implementing multi-layer support for our openssl-based PKI solution and had the following query:

The usual term for what I think you mean is multi-LEVEL CAs, or hierarchical CAs.

> Currently our PKI solution supports only single layer CA support and we use the SSL_CTX_load_verify_locations API with the CAfile option, meaning that the service loads the CA certificate from a PEM file. When testing multi-layer support between a client-server model with SSL_VERIFY_PEER set to true, we observed that using the CAfile (with all CA certificates - root + intermediate - concatenated into a single PEM file) does not work anymore. But using the CApath option (storing each CA in a separate file, creating hashes for them in a directory and providing that directory in CApath) seems to work fine. Is this a known bug with OpenSSL or is it something that we are doing wrong?

1. I doubt there's a bug in OpenSSL here; this is very widely used functionality; both CAfile and CApath have worked for me in all versions I've used. What version(s) are you running, is it a vanilla build or any mods/patches, and built how?

2. What exactly are you testing, and what exactly are the error(s)? Can you reproduce it with commandline s_client and/or s_server?

3. For SSL/TLS it is common, but not universal, for the server to provide in its handshake all intermediate CA certs, and similarly for the client to do so if client-auth is used. If all peers of a relier do this, it doesn't need to configure any intermediate certs, only the root(s). This is often more convenient, since for (some? many?) public CAs the intermediates tend to change more often, and the entity that gets a cert from the CA may be the first to know. You don't say if your 'solution' uses public CAs or your own CA(s); if the latter, presumably the behavior is more under your control. If you are using OpenSSL cert verification (and perhaps other functions) for some other protocol/application/whatever, the answer may be different.

4. It's not clear to me if it's standard, but OpenSSL always verifies up to a root in the truststore, even if a lower intermediate cert is also in the truststore. This is the same for CAfile and/or CApath.

> Also, from the openSSL community perspective, is it advisable to use the CAfile option or the CApath option when providing multi-layer support?

Maybe. See above about which CA certs to configure. If you mean a choice between CAfile and CApath, it's up to you. As far as the code goes, the only differences are:

- CAfile is read once, when you call _load_verify, and kept in memory. It is not updated unless your program calls again. The memory it uses is rarely an issue on desktop class devices unless you have millions of CA certs, but might be on smaller e.g. mobile devices - though you probably don't use OpenSSL there. Any format error in the file is detected at load time.
- CApath is read when needed, during handshake. If your program runs more than a short time and makes or accepts new connections you can get dynamic updates. If your program handles a very high rate of handshakes this could be a performance issue. Any format error may not be detected until a handshake uses that cert. One caveat: the hash used for CApath names changed between 0.9.8 and 1.0.0. If you need to support systems or users on both 'families', that may be a bother. __ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager majord...@openssl.org
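[Editor's sketch] The CApath hash-name caveat above can be illustrated as follows. This fragment generates a throwaway self-signed CA so it is self-contained (all file names are examples), and links it under both the 1.0.0 hash name and, where the tool supports `-subject_hash_old`, the 0.9.8 one:

```shell
# Build a CApath directory by hand. OpenSSL looks certificates up by a hash
# of the subject name; the hash algorithm changed between 0.9.8 and 1.0.0,
# so links under both names let both families find the cert.
mkdir -p ./capath
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=Demo Root' \
    -keyout demo_ca.key -out demo_ca.pem 2>/dev/null
new=$(openssl x509 -in demo_ca.pem -noout -hash)               # 1.0.0+ name
ln -sf "$(pwd)/demo_ca.pem" "./capath/$new.0"
old=$(openssl x509 -in demo_ca.pem -noout -subject_hash_old 2>/dev/null || true)
[ -n "$old" ] && ln -sf "$(pwd)/demo_ca.pem" "./capath/$old.0"
openssl verify -CApath ./capath demo_ca.pem
```

The `c_rehash` script shipped with OpenSSL automates the same linking for a whole directory.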
Re: revoking a certificate without having to provide pass phrase as next step
On 2011-11-29 04:15 +0100 (Tue), Peter wrote: It generally works, but after the command above is sent, i have to type in pass phrase manually. I need it to be done automatically. I believe you can just remove the passphrase from the key file. This of course has the obvious security implications. cjs -- Curt Sampson c...@cynic.net +81 90 7737 2974 http://www.starling-software.com/ I have always wished for my computer to be as easy to use as my telephone; my wish has come true because I can no longer figure out how to use my telephone. --Bjarne Stroustrup
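[Editor's sketch] Removing the passphrase as suggested can be done with `openssl rsa`. A throwaway encrypted key is generated first so the fragment is self-contained; with a real CA key you would operate on `ca.key` directly:

```shell
# Generate an encrypted test key, then write a cleartext copy of it.
openssl genrsa -des3 -passout pass:secret -out demo.key 2048 2>/dev/null
openssl rsa -in demo.key -passin pass:secret -out demo_nopass.key
chmod 600 demo_nopass.key   # the key is now cleartext: restrict read access
```

As the reply notes, anyone who can read the resulting file can use the CA key, so file permissions become the only protection.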
Authenticated channel as authentication for a TLS connection
I'd welcome some advice on using an existing channel as authentication for a new connection. The client has a narrow authenticated channel to the server; I need to set up a normal TLS connection to the same server, authenticated by proof of having the other connection. There is a client identifier associated with the authenticated channel; there can be a few thousand clients. The client should not have any built-in secret or private information, certificates, or similar. Exchanges on the narrow channel are visible to an eavesdropper but can't be modified; the TLS connection will be over an open network, so it can be observed and interfered with (including man-in-the-middle). I need the TLS connection to be private and authenticated, preferably with forward secrecy; it needs to be reasonably secure, and the preferred cipher is RC4_128. The authenticated channel is very narrow and awkward to use. Communication is by request and response initiated from the client. The client can send about 12 bits in each request and get 2 octets in reply, or send 5 bits and get 255 octets in reply (and scaling in between). It would be much simpler at the server to use just a single request, but I've not been able to think of anything remotely secure based on so small an exchange; it would be great if someone could come up with something. I'd prefer to keep the number of exchanges small at least. I've little experience of cryptography, so I've described the problem rather than diving immediately into my ideas for dealing with it. I'll mention my ideas now for amusement; then it would be great if the experts here could give me some good ones. My basic idea is to do a small Diffie-Hellman agreement over the narrow channel (with pre-arranged modulus and generator), and use the derived secret as the pre-shared key in a TLS_PSK connection. Minimizing the exchanges on the narrow channel would result in a weaker secret, so I need to understand how weak it can be while still being reasonably secure.
- I could make the secret be a one-time password and limit its lifetime (say to a couple of minutes). I assume this would allow a fairly weak secret to give reasonably secure authentication.
- I'm not clear how the strength of the PSK affects the strength of the encryption on the connection; if the PSK is only 40 bits, for example, does that mean that a TLS_PSK_WITH_RC4_128_SHA connection would only have 40-bit security?
- Assuming a weak PSK weakens the encryption, could I correct for that by using TLS_DHE_PSK to incorporate a stronger key in the connection? I see that OpenSSL doesn't currently implement the DHE subset of PSK; could I get an equivalent effect by using TLS_PSK to get an authenticated connection, then immediately forcing secure renegotiation to TLS_DH_anon?
- With this sort of scheme, could a DH agreement with a 48-bit modulus (or even smaller) give me reasonable security?
I'll be grateful for any comments, both to rip apart my ideas if they're nonsense and to give me any pointers on how to solve this. Thanks, Fred
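[Editor's note] Not an answer to the key-strength questions, but for experimenting with the TLS_PSK idea it helps to check which PSK ciphersuites the local OpenSSL build actually offers; PSK-RC4-SHA is OpenSSL's name for TLS_PSK_WITH_RC4_128_SHA, and it may be absent from modern builds that have dropped RC4:

```shell
# List the PSK key-exchange ciphersuites this build knows about;
# s_server/s_client with -psk can then be used to smoke-test a
# TLS_PSK handshake by hand.
openssl ciphers -v 'PSK'
```

If the list comes back empty the build was configured without PSK support, and the proposed scheme would need a rebuild or a different toolkit.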
RE: certificate storage format
DER and PEM -Original Message- From: owner-openssl-us...@openssl.org [mailto:owner-openssl-us...@openssl.org] On Behalf Of prabhu kalyan rout Sent: Monday, November 28, 2011 1:04 PM To: openssl-users@openssl.org Subject: certificate storage format Hi, my question is how many certificate storage formats are available and what are they? just like del pks12 Thanks
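[Editor's sketch] As an illustration of the two formats named in this reply, here is a round trip between them; a throwaway self-signed certificate is generated first so the commands are self-contained (file names are examples):

```shell
# Create a test certificate (PEM), convert it to DER, and back again.
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=Format Demo' \
    -keyout demo_key.pem -out demo_cert.pem 2>/dev/null
openssl x509 -in demo_cert.pem -outform DER -out demo_cert.der   # PEM -> DER
openssl x509 -in demo_cert.der -inform DER -out roundtrip.pem    # DER -> PEM
```

PEM is just the Base64 text wrapping of the same DER bytes between BEGIN/END CERTIFICATE lines, so the round trip is lossless.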
pk11_library_init() of pkcs#11 engine
Hello all, I am trying to use the pkcs#11 engine as a dynamic engine for Apache configured with OpenSSL. I ran into segmentation faults when I hit the Apache server with multiple sslswamp clients. I tracked the problem down to pk11_library_init() in hw_pk11.c, where a child process tries to free memory allocated by the parent, thinking of it as a memory leak. Code snippet and comments are below.

/*
 * pk11_library_initialized is set to 0 in pk11_finish() which is called
 * from ENGINE_finish(). However, if there is still at least one
 * existing functional reference to the engine (see engine(3) for more
 * information), pk11_finish() is skipped. For example, this can happen
 * if an application forgets to clear one cipher context. In case of a
 * fork() when the application is finishing the engine so that it can be
 * reinitialized in the child, forgotten functional reference causes
 * pk11_library_initialized to stay 1. In that case we need the PID
 * check so that we properly initialize the engine again.
 */
if (pk11_library_initialized) {
    if (pk11_pid == getpid()) {
        return (1);
    } else {
        global_session = CK_INVALID_HANDLE;
        /*
         * free the locks first to prevent memory leak in case
         * the application calls fork() without finishing the
         * engine first.
         */
        pk11_free_all_locks();
    }
}

pk11_free_all_locks() is freeing the memory allocated for find_locks by the parent. If I comment this out, my test works fine, but I stopped short of making that a real fix because of the preceding comment. Why is it necessary that a parent do ENGINE_finish first before forking? Can't a process use the pkcs#11 engine simultaneously with its child? Thanks, Thulasi.