Performance related queries for SSL based client server model
Hi,

I am writing a sample SSL-based client-server model which uses the SSL_read/SSL_write APIs provided by openssl. But I found that my application is very slow: it takes around 40 minutes to copy a 700MB file, while the same file using scp finishes in 10 minutes. So my query is: is there an alternative way to use the openssl read or write calls to improve performance? I searched the scp code and found it does not use SSL_read/SSL_write. Is there another set of APIs I can use, or any idea how I can reach the same performance as scp?

Regards, Alok
Re: clarification regarding CVE-2014-3510
Hi,

CVE-2014-3510 affects anonymous DH and ECDH ciphersuites only. The additional modification for RSA key exchange is just us being pedantic: we added an internal error for an impossible-to-reach condition. It is a safety net to avoid regression, should something change in the surrounding code. (In retrospect, we should have separated the commits to avoid confusion.)

Cheers, and sorry for the delay in response, Emilia

On Fri, Aug 29, 2014 at 4:37 PM, Ivan Nestlerode inestler...@gmail.com wrote: Hello openssl-users, I am looking for clarification regarding CVE-2014-3510. The advisory refers to it as a vulnerability in DTLS when using anonymous DH/ECDH. However, the fix in git (bff5319d9038765f864ef06e2e3c766f5c01dbd7) modified code involving RSA key exchange in non-DTLS protocol versions. What is the real scope of this vulnerability? In particular, does it affect TLS 1.0 when used with non-anonymous RSA cipher suites? Thanks, Ivan
design clarification using openssl
Hi,

After searching the web, I am writing to this address as my questions are still unanswered.

1) Can an SSL structure, allocated once via SSL_CTX, be used with various socket descriptors just by changing the descriptor using SSL_set_fd? The socket descriptor used would have been passed through SSL_accept before reaching SSL_set_fd. The socket is in blocking mode only.

2) I generated key and certificate files locally using the openssl commands. Does anything else need to be done before loading them? I ask this because the first read via SSL_read always succeeds and subsequent reads fail with error:0001:lib(0): func(0): reason 1.

If this is not the right place to ask, please direct me to the right place so that I can get my queries cleared. Thanks, Balaji.
RE: design clarification using openssl
1) That doesn't make sense. Maybe you mean the socket comes from a (TCP-level) accept and you give it to SSL_set_fd? That does make sense and should work for one connection = one socket at a time, i.e. accept #3, connect SSL to #3, do send and receive until the connection is closed, close the socket and SSL_clear, accept #7, ditto.

2) 1 is not a real error code. If SSL_get_error(ssl) returns 1 == SSL_ERROR_SSL, you should call ERR_get_error or its variants, or just ERR_print_errors[_fp] for the simplest handling. Note ERR, not SSL. If SSL_get_error() is 5 == SSL_ERROR_SYSCALL, you must also look at your *OS* error: errno on Unix or [WSA]GetLastError() on Windows. On Unix, perror or strerror gives a nice decode; Windows is harder.

However, if your keys or certs were bad you would get the error on loading them, or at the latest at the handshake, which if you don't do it explicitly happens on the first SSL_read or SSL_write. Not the second. This error is almost certainly something else, and the ERR_* details above should help spot it.

From: owner-openssl-us...@openssl.org [mailto:owner-openssl-us...@openssl.org] On Behalf Of kasthurirangan balaji Sent: Friday, September 05, 2014 13:49 To: openssl-users@openssl.org Subject: design clarification using openssl
RE: Performance related queries for SSL based client server model
This is not a -dev question, and there's no need to send it three times. scp uses the SSH protocol. OpenSSL does not implement SSH. OpenSSH, which is a different product from a different source, implements SSH, although in their design the scp program doesn't do any comms at all; it just pipes to the ssh program, which does.

What kind of network(s) are you transiting, and what are your endpoints? On my dev LAN, which is one uncongested, reliable 100Mbps switch, I get plain TCP at nearly the hardware limit of 8 sec per 100MB, and within 10% of that for SCP/SSH or a trivial app over SSL. These do 700MB in barely a minute.

SSL and SSH differ significantly in connection setup/handshake, and slightly in multiplexing the data, but once actually sending application data they use mostly the same range of ciphers and MACs (with openssh actually calling libcrypto) and use TCP pretty much the same way, so unless you're doing or (perhaps unintentionally) invoking something wrong, you should get roughly the same speed for both.

Try netcat to measure only the network (and disk) with almost no CPU; that gives you an upper bound for any protocol, except one that can and does compress well: I believe openssh can, and openssl definitely can depending on how it's built, but many people disable compression post-CRIME, and it certainly depends very much on your data. You might try gzip on your data, and if that makes much difference, send the gzipped form.

From: owner-openssl-...@openssl.org [mailto:owner-openssl-...@openssl.org] On Behalf Of Alok Sharma Sent: Sunday, September 07, 2014 03:30 To: openssl-...@openssl.org; openssl-users@openssl.org Subject: Performance related queries for SSL based client server model
generate key errors
Dear all, I'm trying to generate an RSA keypair to be used in a class that has an attribute RSA *rsa_keyPair; and I use this function:

RSA AeroRoutingProtocol::GenerateRSAKeyPair()
{
    rsa_keyPair = RSA_generate_key(2084, RSA_F4, NULL, NULL);
    return rsa_keyPair;
}

When I try to compile this code I get these errors:

In member function 'RSA ns3::AeroRP::AeroRoutingProtocol::GenerateRSAKeyPair()':
../src/aerorp/model/aerorp-routing-protocol.cc:1322:13: error: could not convert '((ns3::AeroRP::AeroRoutingProtocol*)this)->ns3::AeroRP::AeroRoutingProtocol::rsa_keyPair' from 'RSA* {aka rsa_st*}' to 'RSA {aka rsa_st}'
../src/aerorp/model/aerorp-routing-protocol.cc:1323:3: error: control reaches end of non-void function [-Werror=return-type]
cc1plus: all warnings being treated as errors

I need help, please. -- Warmest regards and best wishes for good health, yours sincerely, mero
Re: Performance related queries for SSL based client server model
On Sun, Sep 07, 2014 at 01:00:17PM +0530, Alok Sharma wrote: I am writing one sample ssl based client server model which uses SSL_Read SSL_Write API provided by openssl.

If you are transferring each block of data as an RPC, with a round-trip acknowledgement before sending the next block, and the blocks are small enough, you're going to severely limit throughput. In bulk data transfer applications that stream data, TLS typically outperforms SSH, but a lot depends on the details. -- Viktor.

__ OpenSSL Project http://www.openssl.org User Support Mailing List openssl-users@openssl.org Automated List Manager majord...@openssl.org
Re: Verifying authenticode signature using openssl API
On 07/09/2014 05:43, Prasad Dabak wrote: Hello, Given a signed Windows portable executable, I want to programmatically verify two things using openssl APIs: 1. Verify the digital signature. 2. Confirm that the executable is signed by a specific company using that company's public key. It seems that part (1) can be done by parsing the signedData attribute in the portable executable, extracting the hashing algorithm and digest stored there, re-computing the digest of the executable using the same hashing algorithm, and matching them. I have the following questions. 1. The signedData contains messageDigest (unencrypted) and encryptedDigest (encrypted). Is it enough to match messageDigest with the computed digest? Or do we also need to decrypt the encryptedDigest using the company's public key and match that as well?

Both. Comparing the stored messageDigest to the actual digest in the spcIndirectDataContext structure checks that the signature actually is for this file. Note that Authenticode defines a file-format-specific formula for omitting the signature itself from the input to the message digest. Decrypting the encryptedDigest (really, validating the signedDigest against a digest of the relevant part of the PKCS#7 structure; the field name is historic) is necessary to check that the signature is a valid signature made with the expected public key. This step is, technically, the actual signature verification, but it is meaningless without all the other checks.

Additionally, for Authenticode, you need to do a few extra things (these should be done *first*, since they are cheaper than the actual signature check, and one of them affects the signature check):

- Verify that the PKCS#7 structure field contentInfo is an spcIndirectDataContext structure containing a list of attributes. This is consistent with the original PKCS#7 standard/RFC, but not entirely with the later e-mail-focused CMS standard/RFC.

- Verify that the spcIndirectDataContext structure includes the correct set of magic attributes for the file type, as otherwise the signature is not for this file even if the digest value matches. These fields indicate the choice of formula (Subject Interface Package) for determining the subset of file bytes to pass to the message digest. In particular, if the spcSipInfo field is present it must have the correct value, and you must check the presence of any other file-format-specific attributes (for PE EXE/DLL/OCX/SYS files, this means an spcPEImageData attribute of a very specific form that includes the BMPString Obsolete to distinguish it from the historic Authenticode 1 signatures).

- Verify that the spcIndirectDataContext structure includes an attribute whose OID is the OID of a hash algorithm, which contains the number of bytes for that type of message digest, and which matches your own calculation of that message digest over the file-type-specific subset of bytes of the file itself.

- Verify that each signerInfo in signerInfos has at least the following authenticated attributes: contentType == spcIndirectDataContext and a messageDigest (see also answer 2). Other authenticated attributes are usually present, but not mandatory.

- If there is an authenticated attribute of type spcSpOpusInfo, you may want to consider this the file description and information URL from the manufacturer who signed this signerInfo, but only after all the checks pass.

- If a signerInfo contains one or more unauthenticatedAttributes of type counterSignature, those should be validated first as being valid signerInfos for signatures of the encryptedDigest of the outer signerInfo. If one of them is, and contains an (inner) authenticatedAttribute of type signingTime, and that countersignature is for an entity whose certificate in the certificates collection is valid for the extended usage purpose timeStamping (including a recursive requirement that this purpose is also present for its issuer), then the time indicated by that signingTime field overrides the value of the local clock when determining the validity of the certificates for the signature on that particular outer signerInfo.

- Other spcIndirectDataContext attributes, unauthenticatedAttributes and/or authenticatedAttributes are usually present, but are not mandatory. An attribute present in the wrong one of the attribute lists should be ignored. For example, a signingTime in the outer signerInfo cannot be used to set the time used for the validity checks of that outer signerInfo.

- Verify that each of the outer signerInfos refers to a certificate in the certificates collection which is valid for the extended usage purpose of Code Signing and has a basic constraint of CA:FALSE. This check may fail for some test signatures, but should not fail for real signatures made with officially issued certificates. This check can be done during the PKCS7_verify call via callbacks etc.

2. What does PKCS7_Verify exactly do? I looked at
Why does OpenSSL own all the prefixes in the world?
Hi,

RAND_xxx, CRYPTO_xxx, ERR_xxx, ENGINE_xxx, EVP_xxx, sk_xxx, X509_xxx, BIGNUM_xxx, RSA_xxx, BN_xxx, ASN1_xxx, EC_xxx, etc. etc. etc.

May I understand why it was decided that OpenSSL can own all the prefixes or namespaces in the world? How is it possible that OpenSSL owns the ERR_ prefix (for example ERR_free_strings() and others)? OpenSSL is a library. I should be able to integrate OpenSSL into my own code and define my own prefixes without worrying about creating conflicts with the nearly 200 prefixes that OpenSSL owns.

An example of a well-designed C library is libuv [*], in which: * Public API functions and structs begin with uv_. * Private API functions begin with uv__. * Public macros begin with UV_. That's a good design!

PS: In my project I use both openssl and libsrtp. In which of them do you expect the following macro is defined?: SRTP_PROTECTION_PROFILE

[*] https://github.com/joyent/libuv/ -- Iñaki Baz Castillo i...@aliax.net
Re: Why does OpenSSL own all the prefixes in the world?
The reason is legacy. Eric Young was not conscious of namespace pollution when he implemented SSLeay; since then, even after the migration to the OpenSSL name and team, the focus has been more on maintaining source compatibility than on creating new interoperability opportunities. To meet the goal of interoperability while enabling an alternate symbolic namespace, what would you suggest? -Kyle H

On September 7, 2014 1:30:11 PM PST, Iñaki Baz Castillo i...@aliax.net wrote:

-- Sent from my Android device with K-9 Mail. Please excuse my brevity.
OpenSSL Security Policy
The OpenSSL Development Team have today released the OpenSSL Project Security Policy. The policy has been published at: https://www.openssl.org/about/secpolicy.html

The policy details how we handle and classify security issues, as well as who we tell about them and when. Matt
Re: Why does OpenSSL own all the prefixes in the world?
Hmm... Switch strongly and definitely to C++. Not for fancy object programming, but for more practical syntaxes for things like this. And I am an old C-fan programmer... Pierre Delaage

On 08/09/2014 00:04, Kyle Hamilton wrote:
Re: Why does OpenSSL own all the prefixes in the world?
And how would you do that without breaking compatibility with every program (in C, C++ or any other language) that already uses openssl and depends on the current API names? Providing the API, semantics and portability of the original SSLeay library is the second-most important feature of OpenSSL (right after actually being a secure SSL/TLS implementation when used correctly).

On 08/09/2014 01:15, Pierre DELAAGE wrote:

Enjoy, Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. http://www.wisemo.com Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. WiseMo - Remote Service Management for PCs, Phones and Embedded
https://www.openssl.org/news/state.html is stale
The page https://www.openssl.org/news/state.html, which is supposed to indicate what the current/next version numbers are, is out of date. Specifically, it was not updated for the August 6 security updates, so it still claims that the versions released on that day have not yet been released. Please update the page. Enjoy, Jakob
cannot read PEM key file - no start line
All,

I am getting the following with my client cert when trying to connect to an SSL-enabled MongoDB:

2014-09-03T13:37:56.881-0500 ERROR: cannot read PEM key file: /users/apps/tstlrn/u019807/DTCD9C3B2F42757.ent.wfb.bank.corp_mongo_wells.pem error:0906D06C:PEM routines:PEM_read_bio:no start line

The cert file is the following: DTCD9C3B2F42757.ent.wfb.bank.corp_mongo_wells.pem - WF Enterprise CA 02 certificate, signed by WF Root - WF Root certificate.

I was told by the support at MongoDB to do the following: - Copy the certificates into a text editor to ensure there is no whitespace. - Ensure the beginning and end certificate statements are on their own lines and have the same number of '-' at each end. - Ensure each line has 64 chars (except the last line).

I have checked and verified that there is no whitespace. Also, the BEGIN and END statements look correct. However, each line in the cert is 76 chars in length, except for the last line. Should the lines be 64 characters long? Can someone please help me? Thanks, Liz