Re: [openssl-dev] [EXTERNAL] Re: PKCS12 safecontents bag type deviation from spec
On Tue, 2018-01-16 at 19:31 +, Sands, Daniel wrote:
> On Tue, 2018-01-16 at 14:50 +, Salz, Rich via openssl-dev wrote:
> > OpenSSL defines it as a SET OF and the spec says it's a SEQUENCE
> > OF. Ouch! Will that cause interop problems if we change it? (I
> > don't remember the DER encoding rules)
>
> Well, a SEQUENCE uses tag 16 while a SET uses tag 17, according to a
> quick reference I found. So that could be an interoperability concern.
> But maybe this is the first actual use of nested safecontents, since
> this difference flew under the radar for so long :)

Would it be possible to allow loading the safecontents bag with both the
correct and the incorrect tag? But we should always write the correct one.

--
Tomáš Mráz
No matter how far down the wrong road you've gone, turn back.
                                              Turkish proverb
[You'll know whether the road is wrong if you carefully listen to your
conscience.]

--
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev
Re: [openssl-dev] Systemwide configurability of OpenSSL
On 09/28/2017 12:21 AM, Steffen Nurpmeso wrote:
> Hello.
>
> Tomas Mraz <tm...@redhat.com> wrote:
> |I would like to restart the discussion about possibilities of system-
> |wide configurability of OpenSSL and particularly libssl.
> |
> |Historically OpenSSL allowed only for configuration of the enabled
> |ciphersuites list if the application called the appropriate API call.
> |This is now enhanced with the SSL_CONF API, and applications can set
> |things such as allowed signature algorithms or protocol versions via
> |this API.
>
> Now is the time to thank the OpenSSL team for this improvement, which
> will change the world mid- or long term: thank you!

+1

...

> |However libssl currently does not have a way to apply a policy such
> |as "use only protocol TLS1.2 or better" system-wide, with a
> |possibility for the sysadmin to configure this via some configuration
> |file. Of course it would still be up to individual application
> |configurations whether they override such a policy or not, but it
> |would be useful for the sysadmin to be able to set such a policy and
> |depend on that setting if he does not modify the settings in
> |individual application configurations.
> |
> |How would openssl maintainers regard a patch that would add loading of
> |a system-wide SSL configuration file on startup and application of it
>
> Having a global one, and especially giving administrators the
> possibility to provide an outer clamp that cannot be loosened any
> further, though further restricted, would indeed be good.
> And that being applied automatically just when the SSL library is
> initialized, without an explicit application-side
> CONF_modules_load_file(). If I recall correctly that was the
> original suggestion.
>
> And is it actually possible to have a generic "super-section" that
> is applied even if an application-specific one has been chosen?
> And unfortunately it is not possible to say MinProtocol=Latest,
> so users have to be aware, even if they are not. With
> MinProtocol=Latest they would only have to face this jungle of
> non-understanding (be honest: Google/DuckDuckGo plus
> copy-and-paste, isn't it) if something really fails.

The problem is that by default the applications do not read the file and
do not apply the defaults. Even the openssl s_client/s_server does not
seem to work, but I might be doing something wrong.

What I would like to see is applying the defaults unconditionally, or
maybe with some possibility for an application to opt out of it, but not
opt in.

Can I please get at least some response from the openssl team? Should I
open an issue on GitHub for that feature?

Tomas Mraz
Re: [openssl-dev] [RFC] enc utility & under-documented behavior changes: improving backward compatibility
On Tue, 2017-10-03 at 08:23 +0100, Matt Caswell wrote:
> > 1.2. This also opens the path to stronger key derivation (PBKDF2)
> > 2. During decryption, if no header block is present, and no message
> > digest was specified, the default digest SHOULD be MD5.
>
> Should it? What about compatibility with OpenSSL 1.1.0? We cannot make
> breaking changes in 1.1.1, so it has to be compatible with 1.1.0.

Yeah, the ship has sailed. SHA-256 should be used by default, as in 1.1.0.

--
Tomáš Mráz
Red Hat

* Google and NSA associates, this message is none of your business.
* Please leave it alone, and consider whether your actions are
* authorized by the contract with Red Hat, or by the US constitution.
* If you feel you're being encouraged to disregard the limits built
* into them, remember Edward Snowden and Wikileaks.
[openssl-dev] Systemwide configurability of OpenSSL
I would like to restart the discussion about possibilities of system-wide
configurability of OpenSSL and particularly libssl.

Historically OpenSSL allowed only for configuration of the enabled
ciphersuites list if the application called the appropriate API call. This
is now enhanced with the SSL_CONF API, and applications can set things
such as allowed signature algorithms or protocol versions via this API.

However libssl currently does not have a way to apply a policy such as
"use only protocol TLS1.2 or better" system-wide, with a possibility for
the sysadmin to configure this via some configuration file. Of course it
would still be up to individual application configurations whether they
override such a policy or not, but it would be useful for the sysadmin to
be able to set such a policy and depend on that setting if he does not
modify the settings in individual application configurations.

How would openssl maintainers regard a patch that would add loading of a
system-wide SSL configuration file on startup and application of it on
SSL_CTX initialization (or some other appropriate place)? Is this approach
the way to go forward, or do you have some better way in mind?

Such an effort was initially attempted in the
https://github.com/openssl/openssl/pull/192 and
https://github.com/openssl/openssl/pull/193 pull requests, but given the
comments, we are exploring other options to achieve that goal. What do you
think could be a better way?

Thanks for your comments,

--
Tomáš Mráz
Red Hat
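For illustration, a system-wide policy file of roughly this shape was eventually adopted in OpenSSL 1.1.1; treat the section names below as a sketch of that scheme rather than a guaranteed interface, since details differ between versions:

```ini
# Hypothetical /etc/ssl/openssl.cnf fragment implementing a
# system-wide TLS policy (illustrative section names).
openssl_conf = default_conf

[default_conf]
ssl_conf = ssl_sect

[ssl_sect]
system_default = system_default_sect

[system_default_sect]
# Refuse anything older than TLS 1.2 unless the application overrides it.
MinProtocol = TLSv1.2
CipherString = DEFAULT
```

An application that does not touch these settings would inherit the sysadmin's policy; one that calls the SSL_CONF API explicitly could still override it.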
Re: [openssl-dev] Work on a new RNG for OpenSSL
On Thu, 2017-08-17 at 22:11 +0200, Kurt Roeckx wrote:
> On Thu, Aug 17, 2017 at 02:34:49PM +0200, Tomas Mraz wrote:
> > On Thu, 2017-08-17 at 12:22 +, Salz, Rich via openssl-dev wrote:
> > > I understand the concern. The issue I am wrestling with is strict
> > > compatibility with the existing code. Does anyone really *want* the
> > > RNGs to not reseed on fork? It's hard to imagine, but maybe
> > > somewhere someone is. And then it's not about just reseeding, but
> > > what about when (if) we add other things, like whether or not the
> > > secure arena gets zeroed in a child?
> > >
> > > So let me phrase it this way: does anyone object to changing the
> > > default so NO_ATFORK must be used to avoid the reseeding and other
> > > things we might add later?
> >
> > I can hardly see how anyone would be broken if the default is to
> > reseed the RNG on fork. However, that might not be true for other
> > atfork functionalities, so perhaps there is a need to make each of
> > these future atfork functions configurable, either on or off by
> > default individually and not as a whole.
>
> There might be cases where after fork you're not able to get to
> /dev/urandom anymore.

I do not think so. Which particular cases do you have in mind? Yes, after
fork+exec you could for example switch SELinux domain and won't be able to
access something, but immediately after fork it should not be so.

Also, perhaps the reseeding after fork can be made less strict in regards
to failures reading /dev/urandom or so.

--
Tomáš Mráz
Red Hat
Re: [openssl-dev] Work on a new RNG for OpenSSL
On Thu, 2017-08-17 at 12:22 +, Salz, Rich via openssl-dev wrote:
> I understand the concern. The issue I am wrestling with is strict
> compatibility with the existing code. Does anyone really *want* the
> RNGs to not reseed on fork? It's hard to imagine, but maybe somewhere
> someone is. And then it's not about just reseeding, but what about when
> (if) we add other things, like whether or not the secure arena gets
> zeroed in a child?
>
> So let me phrase it this way: does anyone object to changing the
> default so NO_ATFORK must be used to avoid the reseeding and other
> things we might add later?

I can hardly see how anyone would be broken if the default is to reseed
the RNG on fork. However, that might not be true for other atfork
functionalities, so perhaps there is a need to make each of these future
atfork functions configurable, either on or off by default individually
and not as a whole.

> > By the way, I noticed that openssl_init_fork_handlers() is not
> > guarded by RUN_ONCE(). This should be fixed, too.
>
> Yeah, I'll fix that; thanks.

--
Tomáš Mráz
Red Hat
Re: [openssl-dev] Access to ECDSA_METHOD do_verify function from engine
On Fri, 2017-07-21 at 15:56 +0200, Johannes Bauer wrote:
> I've changed my code now to also use the (mutable) new EC_KEY_METHOD*,
> which doesn't give a diagnostic. Regardless, I believe that the first
> parameter of EC_KEY_METHOD_get_sign should be const EC_KEY_METHOD*,
> not EC_KEY_METHOD*.

Just open a GitHub issue, or better, a pull request with this change.

--
Tomáš Mráz
Red Hat
[openssl-dev] Disablement of insecure hashes for digital signatures
Just a notice for anyone interested:

In Red Hat Enterprise Linux 6 and 7 we disabled support for insecure
hashes for digital signatures. Basically, signatures made with MD5, MD4,
MD2, and SHA-0 will fail verification by default. We could not switch off
the support for these weak hash algorithms completely due to possible
legacy uses, so we at least switched it off for signature verification.

Regards,

--
Tomáš Mráz
Red Hat
Re: [openssl-dev] SNI by default in s_client
On Mon, 2017-02-13 at 16:13 +, Matt Caswell wrote:
> I'd like to canvass opinion on this PR:
> https://github.com/openssl/openssl/pull/2614
>
> The PR above changes the default behaviour of s_client so that it
> always sends SNI, and adds a "-noservername" option to suppress sending
> it if needed.
>
> I was targeting this change for 1.1.1. The issue is that this does
> change command line behaviour between minor versions of the 1.1.x
> series - which is supposed to preserve API and ABI compatibility. Of
> course this change affects neither API nor ABI as it is in the apps
> only - although we usually extend that compatibility to try to ensure
> that command line behaviour remains stable too.
>
> You could argue that the only change in behaviour here is the addition
> of an extension by default that wasn't there before - and that we've
> already decided to add new extensions in 1.1.1 due to the forthcoming
> TLSv1.3 support. On the other hand you could argue that this could
> break existing scripts that rely on the current SNI behaviour.
>
> So the question is: should this (type of) change be allowed in a 1.1.x
> release? Or should it only be allowed in some future 1.2.0 (or not at
> all)?

In my opinion the PR should be allowed in a 1.1.x release; depending on
s_client not sending SNI in some kind of script seems to me like a fairly
obscure corner case. I would view this change as a simple usability
improvement.

But if you decide to postpone such changes to 1.2.0, I would recommend
creating some intermediate step between fully breaking API/ABI and
changing command-line use. I mean - release 1.2.0 as something API/ABI
compatible with 1.1.x, but allow such usability changes for the command
line, and also allow things like removal of obscure or insecure features
while keeping the function signatures intact (they would simply return
errors). And also keep the library SONAME so dependencies do not need to
be rebuilt. And spare the full break of API/ABI for something like a 2.0
release.

--
Tomas Mraz
No matter how far down the wrong road you've gone, turn back.
                                              Turkish proverb
(You'll never know whether the road is wrong though.)
Re: [openssl-dev] (future) STORE vs X509_LOOKUP_METHOD by_dir
On Sun, 2017-02-05 at 16:47 +0100, Richard Levitte wrote:
> Hi,
>
> I've some ponderings that I need to bounce a bit with you all.
>
> Some have talked about replacing the X509_LOOKUP_METHOD bit with the
> STORE module I'm building, and while STORE isn't ready for it yet, I
> have some thoughts on how the two can approach each other. This would
> involve one or two hooks / callbacks that a STORE user could specify
> (details later) to pick and choose freely among the objects that the
> STORE module finds (be it on file or whatever else that can be
> represented as a URI).

Just to add something to your thinking - there is a p11-kit-trust PKCS#11
module which provides all the CA certificates that should be trusted on
the system as individual PKCS#11 certificate objects. Could it be somehow
accommodated with the STORE module approach? Mozilla NSS and GnuTLS can
use this PKCS#11 module directly as a trust store; we would like to add
the same functionality to OpenSSL.

--
Tomas Mraz
Re: [openssl-dev] use SIPhash for OPENSSL_LH_strhash?
On Wed, 2017-01-11 at 03:13 +, Salz, Rich wrote:
> The needs for OpenSSL's LHASH are exactly what SipHash was designed
> for: fast on short strings.
>
> OpenSSL's hash currently *does not* call MD5 or SHA1; the MD5 code is
> commented out.
>
> Yes, performance tests would greatly inform the decision.

+1

Is there really no use of LHASH tables in OpenSSL where an attacker
attempting a DoS attack can control the contents of the tables? If you are
reasonably sure that there is no such occurrence, or that the number of
entries an attacker can insert into such a table is severely limited by
other means, then perhaps it really makes no sense to replace the existing
algorithm. But we need to know this first.

--
Tomas Mraz
Re: [openssl-dev] [RFC 0/2] Proposal for seamless handling of TPM based RSA keys in openssl
On Wed, 2016-11-23 at 00:03 +0100, Richard Levitte wrote:
> In message
> <021a5d5b885845f5ab79c4420232e...@usma1ex-dag1mb1.msg.corp.akamai.com>
> on Tue, 22 Nov 2016 18:03:31 +, "Salz, Rich" <rs...@akamai.com> said:
>
> rsalz> It is already possible to write a utility library that tries
> rsalz> everything in turn, and returns an enumeration that says "seems
> rsalz> to be an X509 certificate" etc. And then another routine that
> rsalz> takes that enumeration and the blob and calls the right
> rsalz> decoder. I would be okay with that, even if it were part of
> rsalz> OpenSSL. I am opposed to guessing and parsing in one step, and
> rsalz> would -1 any PR for that, forcing a team discussion.
>
> Uh... the d2i functions are already both in one. Are you saying they
> should be split in two, one part that does all the checking and the
> other that just decodes, trusting that all checks are already done?
> What you're gonna do there is double part of the work.
>
> But, what I get from you is "what if an octet stream matches two
> different ASN.1 types?" Is that it?

I also would not be too worried - the API call should not be completely
universal - the application should know whether it is loading a
certificate or a private key. It should just be able to use a single call
to load a certificate in PEM, DER, or whatever other possible data format.
The same for private keys, etc.

--
Tomas Mraz
[openssl-dev] Missing access to ex_nscert data
Hi,

I'm trying to port OpenVPN to the OpenSSL 1.1.0 API. Unless I overlooked
something, the new OpenSSL 1.1.0 does not allow access to the ex_nscert
data of the X509 object. Would it be possible to add such a function to
the API?

Regards,

--
Tomas Mraz
Re: [openssl-dev] OpenSSL F2F
On Tue, 2016-10-04 at 13:14 +0100, David Woodhouse wrote:
> On Mon, 2016-10-03 at 14:52 +, Salz, Rich wrote:
> > Sorry, we didn't think to put this out earlier...
> >
> > The OpenSSL dev team is having a face-to-face meeting this week in
> > Berlin, co-located with LinuxCon. If you're in the area, feel free to
> > stop by. In particular, on Tuesday from 16:50-17:40 - "Members of the
> > openssl development team will be available to help with porting
> > applications to 1.1.0, help guide how people can contribute to the
> > project, and be available to discuss other technical issues.
> > Downstream distributions and embedded applications developers should
> > also stop by to introduce themselves"
> >
> > If you're not available during that time, but want to chat, please
> > let us know.
>
> Hm, not *quite* enough time for me to get a flight to Berlin today...
> and I'd have a three-year-old in tow.

I have a similar problem, although I could reach Berlin by train, which is
a little bit easier and cheaper. I would be very happy to meet with
OpenSSL developers in person.

--
Tomas Mraz
Re: [openssl-dev] [openssl.org #4664] Enhancement: better handling of CFLAGS and LDFLAGS
On Mon, 2016-08-29 at 14:27 +, Richard Levitte via RT wrote:
> On Mon Aug 29 12:27:59 2016, appro wrote:
> > Or maybe ("maybe" is reference to "I don't quite grasp" above) what
> > we are talking about is Configure reading CFLAGS and LDFLAGS and
> > *adding* them to the generated Makefile. I mean we are not talking
> > about passing them to 'make', but "freezing" them to their values at
> > configure time. Could you clarify?
>
> I assume, and please correct me if I'm wrong, that the request is to
> treat the environment variables CFLAGS and LDFLAGS the same way we
> treat CC, i.e. as an initial value to be used instead of what we get
> from the configuration target information.
>
> This should be quite easy to implement, and we can also continue to
> use whatever additional Configure arguments as compiler or linker
> flags to be used *in addition* to the initial value (that comes from
> the config target information, or if we decide to implement it,
> CFLAGS)

Ideally, the optimization/debugging flags that do not directly affect the
code being compiled would be replaced with what is placed into
CFLAGS/LDFLAGS, but things like -D would be kept from the config target
information.

--
Tomas Mraz

--
Ticket here: http://rt.openssl.org/Ticket/Display.html?id=4664
Re: [openssl-dev] [openssl.org #4664] Enhancement: better handling of CFLAGS and LDFLAGS
I would like to join this request as the maintainer of OpenSSL for Fedora
and Red Hat Enterprise Linux. It would clean up things for us as well.

--
Tomas Mraz
Re: [openssl-dev] [openssl.org #4589] Resolved: simplifying writing code that is 1.0.x and 1.1.x compatible
On Tue, 2016-06-28 at 22:10 +, Thomas Waldmann via RT wrote:
> On 06/28/2016 11:18 PM, Kurt Roeckx via RT wrote:
> > On Mon, Jun 27, 2016 at 08:50:43PM +, Thomas Waldmann via RT
> > wrote:
> > > I didn't ask where to get the missing code from, I asked whether
> > > you maybe want to make life simpler for people by adding this to
> > > 1.0.x rather than having a thousand software developers copy and
> > > pasting it into their projects.
> >
> > I think this will not actually make life easier. People using a
> > 1.0.x version are not always using the latest 1.0.x version.
>
> Aren't they?
>
> Don't they use 1.0.xLATEST rather soon, due to security fixes?
>
> And in case some dist maintainer chooses to rather backport, couldn't
> they also backport the added function if it is documented as "openssl
> 1.1.x migration support" or so?
>
> We aren't talking about incompatible changes, just adding 2 trivial
> functions that were not there yet (but should have been there, when
> looking at the rest of the API).

You might get that kind of backport into something that still evolves,
such as RHEL/CentOS 7, but you would not get it in older releases
(RHEL/CentOS 5, and most probably not RHEL/CentOS 6 either). So you will
still be facing the issue that there are environments where someone wants
to build your code and these functions are not present.

--
Tomas Mraz

--
Ticket here: http://rt.openssl.org/Ticket/Display.html?id=4589
Re: [openssl-dev] [openssl.org #4518] OpenSSL-1.1.0-pre5 RSA_set0_key and related RSA_get0_*, RSA_set0_*, DSA_set0_* and DSA_get0_* problems
On Tue, 2016-04-26 at 18:25 +, Blumenthal, Uri - 0553 - MITLL wrote:
> On 4/26/16, 14:20, "openssl-dev on behalf of Salz, Rich"
> <openssl-dev-boun...@openssl.org on behalf of rs...@akamai.com> wrote:
> > > Look. If Doug noticed this, programmers less intimate with this
> > > API are much more likely to get stung by it. The protection against
> > > such a misunderstanding is cheap.
> >
> > Is it?
>
> I think it is. See Doug's post.
>
> > And what is that protection?
>
> Checking whether (n, e) passed are pointing at rsa's own, and not
> freeing them if they do. See Doug's posting for the details.

No, that gives only a false sense of correctness. In another instance you
can try to get n, e from one RSA object and set them on a different one,
and boom, you have a double free or use-after-free in your code.

I agree that this sequence - get + set - should be more precisely
documented as forbidden, but that's it.

--
Tomas Mraz
Re: [openssl-dev] [openssl.org #4518] OpenSSL-1.1.0-pre5 RSA_set0_key and related RSA_get0_*, RSA_set0_*, DSA_set0_* and DSA_get0_* problems
On Tue, 2016-04-26 at 10:16 -0500, Douglas E Engert wrote:
> Let me update my response. If I am reading GH#995 correctly, it still
> has an issue if a user does:
>
> RSA_get0_key(rsa, n, e, NULL); /* note this is a GET0 */
> /* other stuff done, such as calculating d */
> RSA_set0_key(rsa, n, e, d);
>
> rsa is left with n and e pointing to unallocated storage.

This is a programmer error in your code, because RSA_get0_key() is
documented to just return internal data that must not be freed. Thus you
are not allowed to pass the returned values to RSA_set0_key().

--
Tomas Mraz
Re: [openssl-dev] [openssl.org #4518] OpenSSL-1.1.0-pre5 RSA_set0_key and related RSA_get0_*, RSA_set0_*, DSA_set0_* and DSA_get0_* problems
On Mon, 2016-04-25 at 13:39 +, Richard Levitte via RT wrote:
> In message <rt-4.0.19-29510-1461590378-1354.4518-...@openssl.org> on
> Mon, 25 Apr 2016 13:19:38 +, "Salz, Rich via RT" <r...@openssl.org>
> said:
>
> rt> No, he means setting the same value twice. For example, making
> rt> this change:
> rt>
> rt>     if (r->n != n) BN_free(r->n);
> rt>     if (r->e != e) BN_free(r->e);
> rt>     if (r->d != d) BN_free(r->d);
> rt>
> rt> I agree it shouldn't happen, but do we want to protect against
> rt> that? I could be convinced either way.
>
> Ah ok... sorry, I misread the intention.
>
> Agreed that we could make sure not to free the pointers in that case.

In that case this should be properly documented, so the users of the API
can depend on it.

--
Tomas Mraz

--
Ticket here: http://rt.openssl.org/Ticket/Display.html?id=4518
Re: [openssl-dev] [openssl.org #4518] OpenSSL-1.1.0-pre5 RSA_set0_key and related RSA_get0_*, RSA_set0_*, DSA_set0_* and DSA_get0_* problems
On Mon, 2016-04-25 at 13:08 +, Richard Levitte via RT wrote:
> rsalz> > If nothing else, all the RSA_set0 routines should test if the
> rsalz> > same pointer value is being replaced and if so not free it.
> rsalz> >
> rsalz> > The same logic needs to be done for all the RSA_set0_*
> rsalz> > functions as well as the DSA_set0_* functions.
> rsalz>
> rsalz> That seems like a bug we should fix.
>
> No, it's by design:

Then perhaps there should be a function to set only the private part of
the RSA and DSA keys?

--
Tomas Mraz
Re: [openssl-dev] Ubsec and Chil engines
On Pá, 2016-02-19 at 11:31 +, Matt Caswell wrote: > So it seems that for chil there may possibly be some rare use (but > even > the most recent evidence is 4 years old). However the OpenSSL dev > team > do not have access to this hardware to maintain the engine and (as > noted > above) this is currently not building in 1.1.0. > > In both cases I would like to remove these engines from 1.1.0. I'd > like > to hear from the community if there is any active use of these. One > option if there is found to be some small scale use is to spin out > the > engine into a separately managed repo (as has happened recently with > the > GOST engine). > > If I don't hear from anyone I will remove these. As far as I know there are some customers using the Chil engine with RHEL (openssl-1.0.1). -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb (You'll never know whether the road is wrong though.) -- openssl-dev mailing list To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev
Re: [openssl-dev] OpenSSL version 1.1.0 pre release 3 published
On Po, 2016-02-15 at 22:17 +, Matt Caswell wrote: > > On 15/02/16 21:50, Jouni Malinen wrote: > > On Mon, Feb 15, 2016 at 09:34:33PM +, Matt Caswell wrote: > > > On 15/02/16 21:25, Jouni Malinen wrote: > > > > Is this change in OpenSSL behavior expected? Is it not allowed > > > > to call > > > > EVP_cleanup() and then re-initialize OpenSSL digests with > > > > SSL_library_init()? > > > > > > Correct, you cannot reinit once you have deinit. > > > > OK.. That used to work, though, so it would be good to mention this > > clearly in the release notes since this can cause a difficult to > > find > > issues for existing programs. Luckily I happened to have automated > > test > > cases that found this now with wpa_supplicant. > > > > > You should not need to explicitly init or deinit at all. Try > > > removing > > > all such calls. If you are getting memory leaks not caused by > > > your > > > application then that is a bug in OpenSSL. > > > > I agree with the "should not need" part, but there is a reason why > > I > > added those calls in the first place, i.e., these were needed with > > older > > OpenSSL releases (well, all releases so far since 1.1.0 has not > > been > > released). I guess I can remove these calls with #ifdef > > OPENSSL_VERSION_NUMBER < 0x1010L to maintain support for older > > versions. > > > > I'd also recommend updating EVP_cleanup man page to be clearer > > about > > EVP_cleanup() being something that must not be called if there is > > going > > to be any future calls to OpenSSL before the process exits. > > Maybe EVP_cleanup() and other similar explicit deinit functions > should > be deprecated, and do nothing in 1.1.0? The auto-deinit capability > should handle it. That way you would not need to do anything > "special" > for 1.1.0 with "#ifdef" etc. What do you think? +1 I think this is "no brainer" change as the semantics of these functions changed anyway due to the auto-initialization. 
-- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb (You'll never know whether the road is wrong though.) -- openssl-dev mailing list To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev
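The `#ifdef` guard Jouni mentions can be written so the same application code works against both 1.0.x and 1.1.0. A minimal sketch follows; note the 1.1.0 version constant is `0x10100000L` (the `0x1010L` in the quoted mail is shorthand). The macro definition at the top only stands in for `<openssl/opensslv.h>` so the fragment is self-contained.

```c
/* Stand-in for <openssl/opensslv.h>, for illustration only. */
#ifndef OPENSSL_VERSION_NUMBER
# define OPENSSL_VERSION_NUMBER 0x10100000L   /* pretend we build against 1.1.0 */
#endif

static int app_tls_init(void)
{
#if OPENSSL_VERSION_NUMBER < 0x10100000L
    /* pre-1.1.0: explicit init was required */
    SSL_library_init();
    SSL_load_error_strings();
#endif
    /* 1.1.0 and later: the library initializes itself on first use and
     * cleans up automatically at exit, so there is nothing to do here,
     * and EVP_cleanup() must not be called if OpenSSL is used again. */
    return 1;
}
```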
Re: [openssl-dev] How to do reneg with client certs in 1.1.0 API
On Po, 2016-02-08 at 12:34 +, Matt Caswell wrote: > > On 08/02/16 12:11, Rainer Jung wrote: > > > Renegotiation isn't entirely within the control of the server. A > server > can request that a renegotiation takes place. It is up to the client > whether it honours that request immediately; or perhaps its finishes > off > sending some application data before it gets around to honouring it; > or > perhaps it doesn't honour it at all. > > > SSL_renegotiate(ssl); > > SSL_do_handshake(ssl); > > This sequence makes the server send the HelloVerifyRequest. It is > then > back in a state where it can continue to receive application data > from > the client. At some later point the client may or may not initiate a > reneg. > > > SSL_set_state(ssl, SSL_ST_ACCEPT); > > SSL_do_handshake(ssl); > > This is really not a good idea, and I suspect is a hack that was > originally copied from s_server :-). Doing this will make the > connection > fail if the client sends application data next (which it is allowed > to do). > > We don't know what we're going to get next from the client it could > be > more application data. It could be an immediate start of a new > handshake. The correct thing for the server to do is to attempt to > read > application data. If we happen to get a handshake instead then it > will > be automatically handled. What if the server wants to discard all the application data that was sent before the renegotiation completed? Or how the server can recognize which part of data was received before renegotiation completed and which after it? -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb (You'll never know whether the road is wrong though.) -- openssl-dev mailing list To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev
[openssl-dev] [openssl.org #4240] Document some of the speed options
The attached patch provides documentation of some of the currently undocumented speed options. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb (You'll never know whether the road is wrong though.) diff --git a/doc/apps/speed.pod b/doc/apps/speed.pod index 1cd1998..a295709 100644 --- a/doc/apps/speed.pod +++ b/doc/apps/speed.pod @@ -8,6 +8,9 @@ speed - test library performance B [B<-engine id>] +[B<-elapsed>] +[B<-evp algo>] +[B<-decrypt>] [B] [B] [B] @@ -49,6 +52,19 @@ to attempt to obtain a functional reference to the specified engine, thus initialising it if needed. The engine will then be set as the default for all available algorithms. +=item B<-elapsed> + +Measure time in real time instead of CPU time. It can be useful when testing +speed of hardware engines. + +=item B<-evp algo> + +Use the specified cipher or message digest algorithm via the EVP interface. + +=item B<-decrypt> + +Time the decryption instead of encryption. Affects only the EVP testing. + =item B<[zero or more test algorithms]> If any options are given, B tests those algorithms, otherwise all of ___ openssl-bugs-mod mailing list openssl-bugs-...@openssl.org https://mta.openssl.org/mailman/listinfo/openssl-bugs-mod___ openssl-dev mailing list To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev
Re: [openssl-dev] Backporting opaque struct getter/setter functions
On Po, 2016-01-11 at 01:09 +, Peter Waltenberg wrote: > The point of using accessor FUNCTIONS is that the code doesn't break > if the structure size or offsets of fields in the underlying > structures change across binaries. > > Where that mainly has an impact is updating the crypto/ssl libs > underneath existing binaries is more likely to just work. > > #defines in the headers do not help at all here. > The point is in achieving reverse API compatibility between 1.1 and 1.0.2. No binary compatibility is expected between those branches. I think that having the API compatibility would be really useful thing easing porting application code to 1.1 branch. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb (You'll never know whether the road is wrong though.) ___ openssl-dev mailing list To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev
Re: [openssl-dev] [openssl-users] Removing obsolete crypto from OpenSSL 1.1 - seeking feedback
On Čt, 2015-11-19 at 02:12 +, Viktor Dukhovni wrote: > On Wed, Nov 18, 2015 at 02:34:41PM -0600, Benjamin Kaduk wrote: > > > > No, of course not. But after letting people depend on this “single > > > cryptographic library” for many years, telling them “too bad” isn’t very > > > nice. > > > > I guess I'm just having a hard time wrapping my head around why, upon > > hearing that the volunteer-run project is giving years' advance notice > > that certain features will be removed, the response is insistence that > > everything must remain supported forever, instead of using the advance > > notice to investigate alternatives. Volunteers should be allowed to > > ease up when they need to, after all. > > Culture-clash. Security culture says everything remotely weak must > go, but release-engineering culture emphasizes compatibilty. The > crypto library is more of a systems component, not a security > component. The SSL library is more of a security component than > a systems component, and has algorithm negotiation. What about some reasonable middle ground? 1. remove all assembler implementations of not-current crypto 2. remove all references of it from the libssl 3. remove the respective EVP_add_cipher(), EVP_add_digest(),... from the OpenSSL_add* functions so the users have to explicitly add these to use them automatically. This way it is ensured that normal signature validation does not allow for these obsolete hashes to be used, etc. It is questionable whether it would be also good idea to disable the active operations - i.e. encryption for legacy ciphers and signature creation for legacy combinations of signing algorithms and hashes. In future it might be even reasonable to move the implementations into the libcryptolegacy.so however I am not sure this change is worth it for 1.1.0. I believe the maintenance costs for pure C implementations in such separate libcryptolegacy or even in the main C library would be quite minimal. 
I would also expect the users of such legacy code to step up with sharing the maintenance costs. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb (You'll never know whether the road is wrong though.) ___ openssl-dev mailing list To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev
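Step 3 of the proposal above — dropping legacy algorithms from the default registration so applications must add them explicitly — can be modelled with a toy registry. All names here are invented; the real mechanism is the `EVP_add_cipher()`/`EVP_add_digest()` tables.

```c
#include <string.h>

/* Toy algorithm registry: lookups succeed only for algorithms that were
 * explicitly registered, mimicking the idea that legacy ciphers stop
 * being added by the OpenSSL_add* defaults. */
#define MAX_ALGS 8

static const char *registry[MAX_ALGS];
static int registry_len;

static void toy_add_cipher(const char *name)
{
    if (registry_len < MAX_ALGS)
        registry[registry_len++] = name;
}

static int toy_get_cipher(const char *name)
{
    for (int i = 0; i < registry_len; i++)
        if (strcmp(registry[i], name) == 0)
            return 1;
    return 0;
}
```

An application that genuinely needs a legacy cipher would opt in with one extra registration call, while signature validation and other default paths would never see it.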
[openssl-dev] [openssl.org #4124] Illegal instruction when using aes-ni-sha256 stitched implementation on AMD CPU
The aes-ni-sha256 stitched implementation causes SIGILL on AMD A4-6210. It is caused by not using the AVX+SSSE3 code path for non-Intel CPUs although the CPU seems to be fully capable of running it. The ia32cap vector is 0x7ED8220B078B but when you set it explicitly with OPENSSL_ia32cap=0x7ED8220B478B (i.e. the Intel CPU bit is set) it works fine and the AVX+SSSE3 codepath is taken. See also https://bugzilla.redhat.com/show_bug.cgi?id=1278194 for details. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb (You'll never know whether the road is wrong though.) ___ openssl-bugs-mod mailing list openssl-bugs-...@openssl.org https://mta.openssl.org/mailman/listinfo/openssl-bugs-mod ___ openssl-dev mailing list To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev
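The shape of the capability check this report describes can be illustrated with a toy predicate. The bit positions below are invented for illustration; the real vector is the `ia32cap` value that the `OPENSSL_ia32cap` environment variable overrides.

```c
/* Toy model of the code-path gating: the fast path requires AVX and
 * SSSE3, but is also (incorrectly, per the report) gated on an "Intel
 * CPU" bit, which excludes fully capable AMD parts. */
#define CAP_SSSE3 (1ULL << 0)
#define CAP_AVX   (1ULL << 1)
#define CAP_INTEL (1ULL << 2)

static int takes_avx_path(unsigned long long cap)
{
    return (cap & CAP_AVX) && (cap & CAP_SSSE3) && (cap & CAP_INTEL);
}
```

Setting the environment variable with the Intel bit forced on corresponds to handing this predicate a vector where `CAP_INTEL` is set, which is why the workaround in the report makes the AVX+SSSE3 code path run on the AMD CPU.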
[openssl-dev] [openssl.org #3913] [RFE] Add a way to application to know a minimum DH size allowed by the client
The current minimum DH size allowed by the client is 768 bits which is a hardcoded constant. It would be nice if the constant was at least #define in public headers or even better if there was an API to query various minimum and maximum bit sizes that are checked in the library such as the maximum supported key lengths, etc. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb (You'll never know whether the road is wrong though.) ___ openssl-bugs-mod mailing list openssl-bugs-...@openssl.org https://mta.openssl.org/mailman/listinfo/openssl-bugs-mod ___ openssl-dev mailing list To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev
Re: [openssl-dev] [openssl.org #3911] 1.0.2c: some kind of regression - fails to connect to server where 1.0.2a works fine
On Po, 2015-06-15 at 14:22 +, Arkadiusz Miskiewicz via RT wrote: Hello. I've just upgraded from 1.0.2a to 1.0.2c and now I no longer can connect from mysql client to my mysql server. Downgrading to 1.0.2a and the problem is gone. That's because mysql server hardcodes 512 bits DH parameters. That's insecure and connect is prevented by the LOGJAM fix. You can configure the server to not use DH ciphersuites as a workaround, or patch the mysql server to use at least 1024 bits DH parameters. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb (You'll never know whether the road is wrong though.) ___ openssl-dev mailing list To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev
Re: [openssl-dev] Kerberos
On Út, 2015-05-05 at 13:22 +, Technical Support wrote: Perhaps people use the --with-krb5-flavor=MIT config which is what we do, and we use itin all the time in 1.0.2. Ken From: Matt Caswell m...@openssl.org To: openssl-dev@openssl.org Sent: Tuesday, May 5, 2015 7:56 AM Subject: Re: [openssl-dev] Kerberos On 05/05/15 13:22, Blumenthal, Uri - 0553 - MITLL wrote: What are the problems? The code as it exists today is not compiled by default. I recently fixed a set of issues in master that had not been spotted simply because the code is not regularly compiled and used. One possible solution to that is to turn it on by default...but I think that is worse since it unnecessarily increases the attack surface for those that don't use it (the vast majority). As it turns out the --with-krb5-include Configure option has not been working correctly in 1.0.2 since it was released...but no-one noticed. Due to the infrequency with which it is being used in practice this means that the code is not being kept up to date. There are some technical issues (including its use of single DES) which mean the existing solution is not fit-for-purpose. Viktor is probably better placed to elaborate on those. Fedora and Red Hat Enterprise Linux openssl packages have the KRB5 support compiled in. I believe there are some customers that still use it on older RHEL releases. On the other hand the current set of supported ciphers does not make it useful for future use anymore so I do not care much if it is removed from openssl master branch. If you properly announce that the support will be removed unless anybody provides patch adding support for current secure KRB5 algorithms, I am OK with that. Regards, -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb (You'll never know whether the road is wrong though.) ___ openssl-dev mailing list To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev
Re: [openssl-dev] Seeking feedback on some #ifdef changes
On 5.3.2015 18:27, Salz, Rich wrote: Sorry for responding late to this thread, but has anyone considered consolidating the following three definitions: OPENSSL_NO_EC OPENSSL_NO_ECDH OPENSSL_NO_EDDSA Is there a valid case where all three of these wouldn't be used together? Would the code even compile if only one (or two) of these were defined? I would be happy to unify these, and you are probably right that the various mix and match options do not work. Does anyone here have issues or concerns if we do that? If you still keep the OPENSSL_NO_EC2M separate, then I do not have any problem with this. However I would expect these three ifdefs to work - of course with OPENSSL_NO_EC implying the NO_ECDH amd NO_ECDSA. Although I did not try it myself. Tomas Mraz ___ openssl-dev mailing list To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev
Re: [openssl-dev] [openssl.org #3675] Fix key wrapping mode with padding to conform to RFC 5649
Hello OpenSSL developers,

can you please include this fix? Although it is a trivial code change, it has a big impact on the encrypted key-wrapped data.

-- Tomas Mraz
No matter how far down the wrong road you've gone, turn back. Turkish proverb
(You'll never know whether the road is wrong though.)
Re: [openssl-dev] [openssl.org #3621] Support legacy CA removal, ignore unnecessary intermediate CAs in SSL/TLS handshake by default
On Po, 2014-12-15 at 14:48 +, Viktor Dukhovni wrote: On Mon, Dec 15, 2014 at 09:23:26AM -0500, Salz, Rich wrote: For what it's worth, I have tested the Alexa top 1 million servers with the - trusted_first option and haven't found a single server that looses its trusted status, on the other hand, good few percent of servers do gain it. It's worth a great deal. Thanks! I love fact-based analysis. :) This can break DANE TLSA verification, because the site's designated trust anchor might no longer be in the shorter constructed chain. It won't break Postfix because it does not support PKIX-TA(0) or PKIX-EE(1), and with DANE-TA(2), Postfix disables all default CAs using only the wire chain and any full TA keys from DNS. Yes, this can be possibly broken by this change although I don't believe there are many (or any at all?) real world cases of such configuration. However, it could break other applications. This might include applications that have specifically configured a short list of CAs to trust (perhaps just one for a particular peer, rather than the usual browser bundle). Please enlighten me how this case could be broken by this change of default? If the trust anchor is not found in the trust list, the intermediate that is sent by the peer is followed anyway. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb (You'll never know whether the road is wrong though.) ___ openssl-dev mailing list openssl-dev@openssl.org https://mta.opensslfoundation.net/mailman/listinfo/openssl-dev
Re: [openssl-dev] [openssl.org #3622] bug: crypto, valgrind reports improper memory access with AES128 cbc and longer plaintext
On St, 2014-12-10 at 18:35 +0100, Andy Polyakov via RT wrote: Excellent. My summary is: - valgrind complaints about 1.0.1 OpenSLL are extremely unlikely to affect my program in operation (you will probably say will not affect) Well, as there is suggestion of what I would say, I would only say that it's false positive. - when OpenSLL 1.0.2 eventually percolates through to Ubuntu and Fedora valgrind will stop complaining. Another alternative is that they recognize it as bug worthy fixing, valgrind or OpenSSL. Even though I argue that it's valgrind bug, I'm ready to assist in addressing the issue on OpenSSL side. In other words try to report it to your favorite distro vendor. Refer to this ticket. But for now, I'm dismissing the case. As the Fedora OpenSSL maintainer I would say it is not worth fixing in OpenSSL. We will rebase to 1.0.2 final in Fedora Rawhide once it is released. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb (You'll never know whether the road is wrong though.) ___ openssl-dev mailing list openssl-dev@openssl.org https://mta.opensslfoundation.net/mailman/listinfo/openssl-dev
Re: ECC key generation example using openssl
On Út, 2014-11-18 at 12:22 -0500, Indtiny S wrote: Hi, Sorry,, I am bit new to ECC , I Need to just prove the below thing Ca.Sa.G) = Sa.Ca.G) . * Client *:- private = Ca , public= Ca,G and *Server*:- private=Sa, pub = Sa.G When I read ECC tutorial, its defined that public key = Q (where Q=dG) so how to get the CaG and SaG in my case ? and validate the equation ? Please guide .. This really does not belong to the openssl-dev mailing list. Use openssl-users mailing list whose purpose is for discussions of users using OpenSSL and developing applications with OpenSSL. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb (You'll never know whether the road is wrong though.) __ OpenSSL Project http://www.openssl.org Development Mailing List openssl-dev@openssl.org Automated List Manager majord...@openssl.org
Re: TLS/SSL methods and protocol version selection
On Po, 2014-11-10 at 13:38 +0100, Kurt Roeckx wrote: There seems to be great confusion on which method to use set up a TLS/SSL connection and I guess most of that has to do with history. I would like to simplify things. We currently seem to have methods for SSLv2, SSLv3, TLSv1 documented, and TLSv1_1 and TLSv1_2 undocumented, and then a SSLv23 method. At least some people seem to think that the SSLv23 method will only do SSLv2 and SSLv3. There probably are also people who think that the TLSv1 method will TLS 1.1 and newer. Then there are options like SSL_OP_NO_SSLv2 that can control what protocols are actually supported. I would like to replace all those with 1 (or 3) methods that don't have a version in it's name, like TLS_method() or SSL_method(), and maybe make the SSLv23 methods a define to the new methods. I would also like to get rid of SSL_OP_NO_SSLv2 and instead have a way to specify the minimum and maximum supported version by those methods, because that's really what people want to do as far as I know. Does this look like a good idea? I'd recommend doing all this but with such correction that the new result will not break API/ABI backwards compatibility to OpenSSL 1.0.x so it can be applied in some future 1.0.x branch. Basically things should not be removed but only added and the new name (TLS_method()) should be #define of SSLv23_method() and not the other way around. Then in 1.x branch the legacy things might be removed if appropriate. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb (You'll never know whether the road is wrong though.) __ OpenSSL Project http://www.openssl.org Development Mailing List openssl-dev@openssl.org Automated List Manager majord...@openssl.org
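The min/max-version selection proposed in the thread above can be sketched as a simple clamp. The constants and function name below are invented for illustration and are not the eventual OpenSSL API.

```c
/* Toy negotiation: pick the highest protocol version supported by the
 * peer that falls inside the configured [min, max] window; fail when
 * no version in the window is available. */
enum {
    VER_SSL3   = 0x0300,
    VER_TLS1   = 0x0301,
    VER_TLS1_1 = 0x0302,
    VER_TLS1_2 = 0x0303
};

static int negotiate_version(int min, int max, int peer_max)
{
    int v = peer_max < max ? peer_max : max;  /* cap at our maximum   */
    return v >= min ? v : -1;                 /* below minimum: fail  */
}
```

This is the behaviour a min/max pair expresses directly, whereas the `SSL_OP_NO_*` flags only punch individual holes in the version range.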
[openssl.org #3560] OpenSSL selects weak digest for (EC)DH kex signing in TLSv1.2 when connecting to SNI virtual server
When connecting to a virtual, SNI defined host openssl selects SHA1 digest instead of SHA512, as it does for the default host.

Steps to Reproduce:
1. openssl req -x509 -newkey rsa:2048 -keyout localhost.key -out localhost.crt -subj /CN=localhost -nodes -batch
2. openssl req -x509 -newkey rsa:2048 -keyout server.key -out server.crt -subj /CN=server -nodes -batch
3. openssl s_server -key localhost.key -cert localhost.crt -key2 server.key -cert2 server.crt -servername server

In other console, using OpenSSL 1.0.2:
1. openssl s_client -connect localhost:4433 < /dev/null 2>/dev/null | grep 'Peer signing digest'
2. openssl s_client -connect localhost:4433 -servername server < /dev/null 2>/dev/null | grep 'Peer signing digest'

Actual results:
1. Peer signing digest: SHA512
2. Peer signing digest: SHA1

Expected results:
1. Peer signing digest: SHA512
2. Peer signing digest: SHA512

See also: https://bugzilla.redhat.com/show_bug.cgi?id=1150033

I've investigated this a little and found that the second SSL context that is used when the server receives the servername extension does not have a full copy of the settings from the main context. Namely, tls1_process_sigalgs() is not properly called for it. I am not sure what the proper fix would be, though.

-- Tomas Mraz
No matter how far down the wrong road you've gone, turn back. Turkish proverb
(You'll never know whether the road is wrong though.)
[openssl.org #3537] Bug in TS_check_status_info() and misleading comments
In the TS_check_status_info() there is a bug where instead of appending the ',' character to the failure info texts, this character overwrites the previous failure info text with a strcpy() call. Also the TS_STATUS_BUF_SIZE is named incorrectly as it does not relate to the status text but to the failure info text. The attached patch fixes these minor bugs.

-- Tomas Mraz
No matter how far down the wrong road you've gone, turn back. Turkish proverb
(You'll never know whether the road is wrong though.)

diff --git a/crypto/ts/ts_rsp_verify.c b/crypto/ts/ts_rsp_verify.c
index 3c7f816..ec0d37e 100644
--- a/crypto/ts/ts_rsp_verify.c
+++ b/crypto/ts/ts_rsp_verify.c
@@ -87,8 +87,6 @@ static int TS_find_name(STACK_OF(GENERAL_NAME) *gen_names, GENERAL_NAME *name);
 /*
  * Local mapping between response codes and descriptions.
- * Don't forget to change TS_STATUS_BUF_SIZE when modifying
- * the elements of this array.
  */
 static const char *TS_status_text[] =
 	{ "granted",
@@ -101,11 +99,15 @@
 #define TS_STATUS_TEXT_SIZE	(sizeof(TS_status_text)/sizeof(*TS_status_text))

 /*
- * This must be greater or equal to the sum of the strings in TS_status_text
+ * This must be greater or equal to the sum of the strings in TS_failure_info
  * plus the number of its elements.
  */
-#define TS_STATUS_BUF_SIZE	256
+#define TS_FAILURE_INFO_BUF_SIZE	256

+/*
+ * Don't forget to change TS_FAILURE_INFO_BUF_SIZE when modifying
+ * the elements of this array.
+ */
 static struct
 	{
 	int code;
@@ -482,7 +484,7 @@ static int TS_check_status_info(TS_RESP *response)
 	long status = ASN1_INTEGER_get(info->status);
 	const char *status_text = NULL;
 	char *embedded_status_text = NULL;
-	char failure_text[TS_STATUS_BUF_SIZE] = "";
+	char failure_text[TS_FAILURE_INFO_BUF_SIZE] = "";

 	/* Check if everything went fine. */
 	if (status == 0 || status == 1)
 		return 1;
@@ -509,7 +511,7 @@
 			   TS_failure_info[i].code))
 			{
 			if (!first)
-				strcpy(failure_text, ",");
+				strcat(failure_text, ",");
 			else
 				first = 0;
 			strcat(failure_text, TS_failure_info[i].text);
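The one-character nature of the fix hides its effect, so here is a toy version of the accumulation loop showing the difference. The helper name and the failure-info strings are invented for illustration.

```c
#include <string.h>

/* Toy version of the failure-text accumulation fixed by the patch:
 * with strcpy() the "," separator overwrote everything collected so
 * far; strcat() appends it as intended. */
static void append_failure_text(char *buf, const char *text, int *first)
{
    if (!*first)
        strcat(buf, ",");   /* the fix: this used to be strcpy(buf, ",") */
    else
        *first = 0;
    strcat(buf, text);
}
```

With the old `strcpy()`, the second call would have left only `",badRequest"` in the buffer, discarding every earlier failure-info text.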
Re: openssl 1.0.1i ignores ciphers in cipherlist
On Pá, 2014-08-29 at 16:19 +0200, Frank Meier wrote: While testing different ciphersuites I found a quite drastic change in the behavior between openssl version 1.0.1h to 1.0.1i. While using a cipherlist like ECDHE-RSA-AES128-SHA256:RC4 with 1.0.1h the ECDHE-RSA-AES128-SHA256 cipher is used. With 1.0.1i uses RC4-SHA. example: $ openssl s_server -cert server.pem $ openssl s_client -cipher ECDHE-RSA-AES128-SHA256:RC4 -connect localhost:4443 New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES128-SHA256 Server public key is 1024 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1.2 Cipher: ECDHE-RSA-AES128-SHA256 I guess following patch is responsible for the change in behavior: http://rt.openssl.org/Ticket/Display.html?id=3374. There it says the SSLv2 client-hello does not include enough information to establish a connection with ECDHE, so this ciphers are not included in the cipherlist. But the test with 1.0.1i shows that it works at least against my openssl s_server. I think this behavior could force established applications to use lower-strength ciphers with openssl 1.0.1i than before with 1.0.1h. Without anyone noticing. This happens because you use specification of cipherlist that does not make sense - that is with the RC4 you add also SSLv2 ciphers to the cipher list and simultaneously you add only EC based cipher in addition. With SSLv2 client hello the supported curves extension cannot be sent and thus the EC based ciphers must not be sent as well. If there was for example DHE-RSA-AES128-GCM-SHA256 in the cipher list, it would be correctly sent in the hello and chosen for the connection. I can't see anyone using such specification in real world. Basically what you specify is what you get. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb (You'll never know whether the road is wrong though.) 
__ OpenSSL Project http://www.openssl.org Development Mailing List openssl-dev@openssl.org Automated List Manager majord...@openssl.org
Should this get a CVE number assignment or is it not a real security issue?
Hi, during the review of OpenSSL commits I found this one: https://github.com/openssl/openssl/commit/22a10c89d7c3f951339c385d57cc8fd23c0a800b There is unfortunately not much detail in the commit message. Could this be a possible security issue? Can you please clear that up? Thanks, -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb (You'll never know whether the road is wrong though.) __ OpenSSL Project http://www.openssl.org Development Mailing List openssl-dev@openssl.org Automated List Manager majord...@openssl.org
Re: [openssl.org #3451] patch for x509.c
On Út, 2014-07-15 at 20:08 +0200, Jan Just Keijser via RT wrote: On 15/07/14 15:20, Daniel Kahn Gillmor wrote: On 07/15/2014 07:58 AM, Salz, Rich via RT wrote: The Globus syntax is strange. :) We should support the ISO date/time standard, and use that throughout and not invent yet another syntax, or yet another flag. It's fairly simple to parse, and handles timezones, relative times, date/time mixing, and so on. The XML XSD spec, for example, has a reasonable explanation. Agreed here. also, the presence of a hyphen in a time marker is too easily misunderstood as a minus sign. If we're talking about the duration of a certificate, we could use something like the ISO-8601 duration syntax: https://en.wikipedia.org/wiki/ISO-8601#Durations e.g. PT1800S is 1800 seconds I like the idea, but I won't have time to rewrite the patch right now. Implementing full ISO8061 timestamps will take some effort. I'd also propose to rename '-valid' to '-duration' . I'll get back on this in mid August. What about just supporting float number argument for -days (0.5 for 12 hours certificate validity)? That should be fairly simple. In the first step. And add something like -notafter argument that would specify the exact end datetime in the ISO format (not duration) as a second step. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb (You'll never know whether the road is wrong though.) __ OpenSSL Project http://www.openssl.org Development Mailing List openssl-dev@openssl.org Automated List Manager majord...@openssl.org
Re: [openssl.org #3451] patch for x509.c
On St, 2014-07-16 at 17:46 +0200, Daniel Kahn Gillmor via RT wrote: On 07/16/2014 11:24 AM, Salz, Rich wrote: do you realistically think we'll ever drop support for the -days argument though? Dropping -days would break a million scripts. No, we'll never drop support for -days. But whether the code is atoi() or atof() is a big difference and might cause important silent failures for new scripts running on anything other than the most recent openssl. On most systems atoi(0.5) returns 0 and no error indicator so -days 0.5 would silently do the wrong thing on anything other than openssl 1.0.whatever Which seems much worse. ugh, you're quite right. Sorry, i wasn't thinking about the support hassle in that direction. And to make matters worse, openssl req -x509 currently interprets -days 0 or -days 0.5 or -days PT1800S as use the default number of days, which is 30. :/ From experimentation, i just discovered that -days is also happy to accept and interpret negative integer arguments as well, resulting in a key with ValidNotBefore later than ValidNotAfter :( not even an error message to let you know that you've just created a certificate that no validation stack in its right mind should ever accept. I withdraw my support for making -days take a fractional argument, given the behavior of the existing deployed base. I agree with that as well. I did not look at the actual code in openssl so I did not know that the fractional argument with the current version does not error out. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb (You'll never know whether the road is wrong though.) __ OpenSSL Project http://www.openssl.org Development Mailing List openssl-dev@openssl.org Automated List Manager majord...@openssl.org
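The silent-failure mode driving this decision is easy to demonstrate: `atoi()` gives no error indication at all, while a `strtol()`-based parser can detect trailing junk through its end pointer. `parse_days_strict` below is an invented helper, not anything in the openssl tree.

```c
#include <stdlib.h>

/* atoi("0.5") returns 0 with no way to tell it stopped at the '.'.
 * strtol reports where parsing stopped, so trailing garbage (and an
 * empty input) can be rejected explicitly. */
static long parse_days_strict(const char *s, int *ok)
{
    char *end;
    long v = strtol(s, &end, 10);

    *ok = (end != s && *end == '\0');  /* consumed at least one digit, nothing left over */
    return v;
}
```

This is why switching `-days` from `atoi()` to `atof()` would be hazardous: on every already-deployed openssl, `-days 0.5` silently parses as 0 (and then falls back to the 30-day default), rather than failing loudly.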
Re: [openssl.org #3415] Bug report: Uninitialized memory reads reported by valgrind for ECDSA signatures
On Thu, 2014-07-03 at 23:47 +0200, Matt Caswell via RT wrote: I've put together a fix (see below), but not pushed it because I was working on the assumption that if you had PURIFY defined then you wouldn't care about constant time operation. I've since been told that possibly some distros define PURIFY in their production builds. Anyone know of an example where this is used in a production build? We use -DPURIFY in the regular openssl packages in Red Hat Enterprise Linux and Fedora. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb (You'll never know whether the road is wrong though.)
Re: OpenSSL roadmap
On Thu, 2014-07-03 at 09:13 -0400, Theodore Ts'o wrote: However, in the kernel we are much more lax about who gets access to the Coverity project. Part of this is the sure and certain knowledge that the bad guys are quite willing to pay for a Coverity license, and so for us the balance of increasing the pool of those who are looking through the Coverity scans and contributing fixes, and thus growing the development community, tips in favor of being more open about who gets access to Coverity. Yes, the real bad guys can surely buy a Coverity license; they can even write similar tools themselves. So once something is found by a Coverity scan it should be considered public knowledge anyway. Manual review by real people is something very different. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb (You'll never know whether the road is wrong though.)
Re: SSLv2 SSLv3
On Mon, 2014-06-30 at 15:19 +0200, Dr. Stephen Henson wrote: On Mon, Jun 30, 2014, Hubert Kario wrote: As far as misconfigured servers go, single DES and export grade ciphers are a much, much more common problem, at 20% and 15% respectively. The security levels code also addresses that. By default any ciphersuite offering below 80 bits of equivalent security is disabled along with SSLv2. That includes single DES and all export ciphersuites. It's also not something which can be reenabled by accident either. Even if a cipher list is set to ALL those still get disabled: the only way to reenable them is to set the security level to zero as well. Support is unfortunately only in master at present though. Would it be possible to get it into 1.0.2? Or is that already closed for enhancements? Or does it break ABI compatibility? -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb (You'll never know whether the road is wrong though.)
Re: [openssl.org #3374] Do not advertise ECC ciphersuites in SSLv2 client hello
On Tue, 2014-06-03 at 16:41 +0000, Viktor Dukhovni wrote: On Tue, Jun 03, 2014 at 06:01:03PM +0200, Tomas Mraz via RT wrote: openssl advertises ECC ciphersuites in the SSLv2 client hello if the ssl23 method is used. This is incorrect because the TLS extensions that indicate supported curves and point formats cannot be sent in an SSLv2 client hello. The attached patch ensures that no ECC ciphersuites are sent in the SSLv2 client hello. This looks about right. Where do you still use SSLv2? Nowadays, you should probably have SSLv2 disabled. SSLv2 is disabled by default; however, when you use the ALL cipher list, which is of course something you should not do but it happened in the perl LDAP module, the SSLv2 ciphers are added to the cipherlist and the SSLv2 client hello is used. I agree that once we break API/ABI compatibility we should remove SSLv2 support altogether. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb (You'll never know whether the road is wrong though.)
Re: [openssl.org #3374] Do not advertise ECC ciphersuites in SSLv2 client hello
On Wed, 2014-06-04 at 13:03 +0000, Viktor Dukhovni wrote: On Wed, Jun 04, 2014 at 10:45:59AM +0200, Tomas Mraz wrote: SSLv2 is disabled by default; however, when you use the ALL cipher list, which is of course something you should not do but it happened in the perl LDAP module, the SSLv2 ciphers are added to the cipherlist and the SSLv2 client hello is used. In Postfix, I use the ALL cipherlist, but I also pass SSL_OP_NO_SSLv2 to SSL_CTX_set_options(). If you can append exclusions to the cipherlist, you can use 'ALL:...:!SSLv2'. I know that. We are fixing perl-LDAP to not use ALL at all and stick with the default. However we will be patching openssl anyway for any other 3rd party cases where they, intentionally or not, enable the SSLv2 client hello. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb (You'll never know whether the road is wrong though.)
Re: [openssl.org #3266] [PATCH] Add the SYSTEM cipher keyword
On Mon, 2014-03-31 at 16:24 +0200, Nikos Mavrogiannopoulos wrote: On Mon, 2014-03-31 at 13:55 +0000, Viktor Dukhovni wrote: This too feels like intrusive overreach. What problem are you trying to solve? The goal is to allow the configuration of the security level of applications centrally in a system. That is, to not require the administrator to configure each and every service to obtain a sane security level, to simplify the current best practices [0]. This assumes that there is such a thing as a uniformly applicable security policy that applies equally to opportunistic use of TLS, mandatory use of unauthenticated TLS, authenticated TLS with modest security requirements, and transport of highly sensitive data. I disagree. The problem in current systems isn't that different policies are required per application, but the fact that in practice there is no policy set for any application. Nevertheless, with the approach I describe, the current situation can be kept when needed by just not using the system keyword. Exactly. There might be special applications - postfix with opportunistic encryption is one - where a policy different from the system policy is appropriate. But in the case of almost all applications the current situation means that there is no way to set the policy at all. For example most of the https clients do not allow setting the cipher preference string, so they stick with DEFAULT. That might be OK for a general-purpose system, but if you have a special-purpose one, such as a system set up to use only FIPS-approved ciphers, it is not right. And even if there was a way to set the cipher preference string for these https clients it would be extremely hard and error prone to set the list for each of them individually. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb (You'll never know whether the road is wrong though.)
Re: [openssl.org #3266] [PATCH] Add the SYSTEM cipher keyword
On Wed, 2014-02-19 at 23:03 +0100, Nikos Mavrogiannopoulos via RT wrote: This keyword allows a program to simply specify SYSTEM in its configuration file, and the SSL cipher used will be determined at run-time from a system-specific file. The system default keywords can be extended by appending any application-specific ciphers such as SYSTEM:PSK. Such a keyword allows distributors of applications to centrally control the allowed ciphers. Can the OpenSSL developers please at least say what they think about the acceptability of the SYSTEM keyword support in the cipher string? I'd like to add the support to the Fedora openssl package but we would like to see it upstreamed sooner or later. Thanks, -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb (You'll never know whether the road is wrong though.)
[openssl.org #3264] openssl req ignores key length set in config file
'openssl req -newkey rsa' ignores the keylen set in the openssl config file in the req section. It also misleadingly prints the configured keylen in the 'Generating ... bit RSA private key' message when it actually generates the library hardcoded default of 1024 bits. The attached patch fixes this bug. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb

diff -up openssl-1.0.1e/apps/req.c.keylen openssl-1.0.1e/apps/req.c
--- openssl-1.0.1e/apps/req.c.keylen	2014-02-12 14:58:29.000000000 +0100
+++ openssl-1.0.1e/apps/req.c	2014-02-14 13:52:48.692325000 +0100
@@ -644,6 +644,12 @@ bad:
 	if (inrand)
 		app_RAND_load_files(inrand);
 
+	if (newkey <= 0)
+		{
+		if (!NCONF_get_number(req_conf,SECTION,"BITS", &newkey))
+			newkey=DEFAULT_KEY_LENGTH;
+		}
+
 	if (keyalg)
 		{
 		genctx = set_keygen_ctx(bio_err, keyalg, &pkey_type, newkey,
@@ -651,12 +657,6 @@ bad:
 		if (!genctx)
 			goto end;
 		}
-
-	if (newkey <= 0)
-		{
-		if (!NCONF_get_number(req_conf,SECTION,"BITS", &newkey))
-			newkey=DEFAULT_KEY_LENGTH;
-		}
 
 	if (newkey < MIN_KEY_LENGTH && (pkey_type == EVP_PKEY_RSA || pkey_type == EVP_PKEY_DSA))
 		{
@@ -1649,6 +1649,8 @@ static EVP_PKEY_CTX *set_keygen_ctx(BIO
 			keylen = atol(p + 1);
 			*pkeylen = keylen;
 			}
+		else
+			keylen = *pkeylen;
 		}
 	else if (p)
 		paramfile = p + 1;
Re: [openssl.org #3264] openssl req ignores key length set in config file
Heh, good :) As we both came independently to the same patch we can declare it right and perhaps the openssl upstream developers can finally commit it to the git repository.
Re: [openssl.org #3224] OpenSSL 1.0.1f rsa_pmeth.c duplicate code block
On Fri, 2014-01-10 at 09:53 +0100, Paul Suhler via RT wrote: Lines 612 through 615 of rsa_pmeth.c apparently contain duplicated lines: Line 612: else if (!strcmp(value, "oeap")) pm = RSA_PKCS1_OAEP_PADDING; else if (!strcmp(value, "oaep")) pm = RSA_PKCS1_OAEP_PADDING; This appears to be a cut and paste error. No, this is actually a fix for the typo 'oeap' in previous versions. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb (You'll never know whether the road is wrong though.)
Re: Avoid multiple locks in FIPS mode commit to OpenSSL_1_0_1-stable
On Tue, 2013-12-10 at 14:45 +0100, Dr. Stephen Henson wrote: On Mon, Dec 09, 2013, geoff_l...@mcafee.com wrote: Shouldn't the code read: if (!FIPS_mode()) CRYPTO_w_[un]lock(CRYPTO_LOCK_RAND); Note the '!' operator. Yes it should, sorry about that. Fixed now. But given that the locking is skipped in FIPS mode, doesn't that mean the reseed operation is now not protected by a lock at all? The FIPS DRBG does not lock before calling the add/reseed callbacks. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb (You'll never know whether the road is wrong though.)
[openssl.org #3176] Locking problem in fips_drbg_rand.c
The fips_drbg_bytes() function calls CRYPTO_w_lock(CRYPTO_LOCK_RAND); unfortunately the FIPS_drbg_generate() function can eventually call drbg_reseed() if enough bytes are pulled out of the DRBG. This function in turn pulls bytes from the MD rand using RAND_SSLeay()->bytes(). However the MD rand uses CRYPTO_w_lock(CRYPTO_LOCK_RAND); in ssleay_rand_bytes(). This leads to double locking of CRYPTO_LOCK_RAND, which means undefined behavior unless, for example in the case of pthreads, the mutex type used is PTHREAD_MUTEX_RECURSIVE. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb (You'll never know whether the road is wrong though.)
Re: [openssl.org #3151] Bug report: openssl-1.0.1e-28.fc19.i686 on Fedora 19: OPENSSL_ia32_cpuid() misdetects RDRAND instruction on old Cyrix M II i686 CPU
On Thu, 2013-10-31 at 22:05 +0100, Kurt Roeckx wrote: On Mon, Oct 28, 2013 at 09:33:05AM +0100, Andre Robatino via RT wrote: I have an old i686 machine with a Cyrix M II CPU running Fedora 19. The latest version of openssl (openssl-1.0.1e-28.fc19.i686) doesn't work properly with it due to OPENSSL_ia32_cpuid() misdetecting the RDRAND instruction (see https://bugzilla.redhat.com/show_bug.cgi?id=1022346 ). All previous versions (up to openssl-1.0.1e-4.fc19.i686) worked properly. I was advised to create an upstream ticket. The listed bug report contains /proc/cpuinfo output and a gdb stack trace. This is a duplicate of ticket #3005. This has been fixed after the 1.0.1e release in: http://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=5702e965d759dde8a098d8108660721ba2b93a7d But if -4 worked and -28 fails, you really should look at what Fedora changed between those releases. The -4 worked because the RDRAND engine was erroneously completely disabled in the Fedora build. Only after it was enabled could the bug in the CPU detection manifest. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb (You'll never know whether the road is wrong though.)
AES GCM considerations in regards to SP800-38D
Hello OpenSSL developers, in a review of the AES GCM code it was found that some requirements placed by the SP 800-38D document might be missing. In particular, there is no check that the key is not used with more than 2^32 different IV values. Did I overlook it and the test is there? Or is the test not needed because in a real-life situation this cannot happen? I suppose it might happen if the key is not renegotiated during lengthy connections with transfers of data in the TB range? -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb (You'll never know whether the road is wrong though.)
Re: CVE-2013-1900 an OpenSSL flaw?
On Tue, 2013-04-16 at 11:11 -0400, manc...@hush.com wrote: Hello. I came across a thread that discusses a recent PostgreSQL security fix (for CVE-2013-1900). The discussion raises the possibility that the problem lies in OpenSSL's fork protection code. Full thread here: http://marc.info/?t=13657942101&r=1&w=2 If gettimeofday() were mixed in during the RNG reads, the vulnerability would be prevented. Of course it would not prevent the case where the attacker has access to the internal state of the parent process, but that is a different attack that could be prevented only by reseeding on forks (or when a pid change is detected). -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb
Re: [openssl.org #3002] Communication problems with 1.0.1e
On Fri, 2013-03-01 at 22:01 +0100, Kurt Roeckx wrote: I can't either, and yet I have multiple people reporting problems with the 1.0.1e version saying the 1.0.1c version works without problems. This happened recently on Fedora as well. See: https://bugzilla.redhat.com/show_bug.cgi?id=918981 -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb
[openssl.org #2936] Properly set default trusted CA paths if -CAfile and -CApath not used
The current behavior of the s_client, s_server and s_time commands in regards to the default trusted CA store path is incorrect. The default paths are loaded only in case SSL_CTX_load_verify_locations() does not fail, which means that you have to use -CApath or -CAfile for the default paths to be loaded at all. The attached patch properly sets the default paths only if neither -CApath nor -CAfile is specified. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb

diff -up openssl-1.0.1c/apps/s_client.c.default-paths openssl-1.0.1c/apps/s_client.c
--- openssl-1.0.1c/apps/s_client.c.default-paths	2012-03-18 19:16:05.000000000 +0100
+++ openssl-1.0.1c/apps/s_client.c	2012-12-06 18:24:06.425933203 +0100
@@ -1166,12 +1166,19 @@ bad:
 	if (!set_cert_key_stuff(ctx,cert,key))
 		goto end;
 
-	if ((!SSL_CTX_load_verify_locations(ctx,CAfile,CApath)) ||
-		(!SSL_CTX_set_default_verify_paths(ctx)))
+	if (CAfile == NULL && CApath == NULL)
 		{
-		/* BIO_printf(bio_err,"error setting default verify locations\n"); */
-		ERR_print_errors(bio_err);
-		/* goto end; */
+		if (!SSL_CTX_set_default_verify_paths(ctx))
+			{
+			ERR_print_errors(bio_err);
+			}
+		}
+	else
+		{
+		if (!SSL_CTX_load_verify_locations(ctx,CAfile,CApath))
+			{
+			ERR_print_errors(bio_err);
+			}
 		}
 
 #ifndef OPENSSL_NO_TLSEXT
diff -up openssl-1.0.1c/apps/s_server.c.default-paths openssl-1.0.1c/apps/s_server.c
--- openssl-1.0.1c/apps/s_server.c.default-paths	2012-03-18 19:16:05.000000000 +0100
+++ openssl-1.0.1c/apps/s_server.c	2012-12-06 18:25:11.199329611 +0100
@@ -1565,13 +1565,21 @@ bad:
 		}
 #endif
 
-	if ((!SSL_CTX_load_verify_locations(ctx,CAfile,CApath)) ||
-		(!SSL_CTX_set_default_verify_paths(ctx)))
+	if (CAfile == NULL && CApath == NULL)
 		{
-		/* BIO_printf(bio_err,"X509_load_verify_locations\n"); */
-		ERR_print_errors(bio_err);
-		/* goto end; */
+		if (!SSL_CTX_set_default_verify_paths(ctx))
+			{
+			ERR_print_errors(bio_err);
+			}
+		}
+	else
+		{
+		if (!SSL_CTX_load_verify_locations(ctx,CAfile,CApath))
+			{
+			ERR_print_errors(bio_err);
+			}
 		}
+
 	if (vpm)
 		SSL_CTX_set1_param(ctx, vpm);
@@ -1622,8 +1630,11 @@ bad:
 	else
 		SSL_CTX_sess_set_cache_size(ctx2,128);
 
-	if ((!SSL_CTX_load_verify_locations(ctx2,CAfile,CApath)) ||
-		(!SSL_CTX_set_default_verify_paths(ctx2)))
+	if (!SSL_CTX_load_verify_locations(ctx2,CAfile,CApath))
+		{
+		ERR_print_errors(bio_err);
+		}
+	if (!SSL_CTX_set_default_verify_paths(ctx2))
 		{
 		ERR_print_errors(bio_err);
 		}
diff -up openssl-1.0.1c/apps/s_time.c.default-paths openssl-1.0.1c/apps/s_time.c
--- openssl-1.0.1c/apps/s_time.c.default-paths	2006-04-17 14:22:13.000000000 +0200
+++ openssl-1.0.1c/apps/s_time.c	2012-12-06 18:27:41.694574044 +0100
@@ -373,12 +373,19 @@ int MAIN(int argc, char **argv)
 
 	SSL_load_error_strings();
 
-	if ((!SSL_CTX_load_verify_locations(tm_ctx,CAfile,CApath)) ||
-		(!SSL_CTX_set_default_verify_paths(tm_ctx)))
+	if (CAfile == NULL && CApath == NULL)
 		{
-		/* BIO_printf(bio_err,"error setting default verify locations\n"); */
-		ERR_print_errors(bio_err);
-		/* goto end; */
+		if (!SSL_CTX_set_default_verify_paths(tm_ctx))
+			{
+			ERR_print_errors(bio_err);
+			}
+		}
+	else
+		{
+		if (!SSL_CTX_load_verify_locations(tm_ctx,CAfile,CApath))
+			{
+			ERR_print_errors(bio_err);
+			}
 		}
 
 	if (tm_cipher == NULL)
Re: OpenSSL and CRIME
On Tue, 2012-10-23 at 20:18 +0200, Dr. Stephen Henson wrote: On Tue, Oct 23, 2012, Tomas Hoger wrote: On Thu, 18 Oct 2012 23:55:41 +0200 Andrey Kulikov wrote: OpenSSL enables zlib by default. Could you please advise for what version and platform this is true? openssl-1.0.1c for linux-elf has no-zlib configured by default. Sorry, I asked the wrong way. OpenSSL, when compiled with zlib support, enables deflate (id 1) compression by default. I was wondering if this should stay as is or should change to disabled by default even when zlib support is compiled in (i.e. compression will only get used when explicitly enabled by an application using the library). The change would render SSL_OP_NO_COMPRESSION meaningless and possibly want a new option for doing the opposite. There isn't any room in the options field for new options, so that's tricky. An alternative would be to set SSL_OP_NO_COMPRESSION by default and require applications that need compression support to explicitly clear it with SSL_CTX_clear_options(ctx, SSL_OP_NO_COMPRESSION); I agree this is the solution that should be used as this does not break the ABI. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb
[openssl.org #2874] Missing initialization of str in aes_ccm_init_key
The str member of the EVP_AES_CCM_CTX structure stays uninitialized when AES CCM is used with the vpaes backend, causing a crash when str is later called because it is non-NULL. The attached patch fixes the problem. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb

diff -up openssl-1.0.1c/crypto/evp/e_aes.c.init-str openssl-1.0.1c/crypto/evp/e_aes.c
--- openssl-1.0.1c/crypto/evp/e_aes.c.init-str	2012-09-06 17:20:45.000000000 +0200
+++ openssl-1.0.1c/crypto/evp/e_aes.c	2012-09-06 17:18:30.000000000 +0200
@@ -1216,6 +1216,7 @@ static int aes_ccm_init_key(EVP_CIPHER_C
 			vpaes_set_encrypt_key(key, ctx->key_len*8, &cctx->ks);
 			CRYPTO_ccm128_init(&cctx->ccm, cctx->M, cctx->L,
 					&cctx->ks, (block128_f)vpaes_encrypt);
+			cctx->str = NULL;
 			cctx->key_set = 1;
 			break;
 			}
Re: [openssl.org #2833] BIO_CTRL_DGRAM_QUERY_MTU handling is wrong due to bad getsockopt() use
On Sun, 2012-06-10 at 18:04 +0200, Michael Tuexen wrote: On Jun 10, 2012, at 4:03 PM, Andy Polyakov wrote: The getsockopt() for IP_MTU and IPV6_MTU at least on Linux returns a value of length 4. On little endian systems this is not so critical problem however on big endian 64 bit systems it means the interpretation of the returned value by the code in dgram_ctrl() is completely wrong - Actually similar argument applies even to sockopt_len. Modulo fact that you get into trouble in cases when *expected* sizeof(sockopt_len) is 8, while the value is declared int. The situation is intensified by fact that in some cases expected sizeof(sockopt_len) depends on compiler flags. And I'm not talking about -m32 vs. -m64 compiler flags, I'm talking about flags in 64-bit case [Tru64 for one if you have to know]. One way to attack the problem is depicted in crypto/bio/b_sock.c:975. I mean union between unsigned int and size_t, explicit zeroing of size_t member and heuristic that detects big-endian trouble. Then one can declare even sockopt_val as similar union and pick int or long depending on calculated sockopt_len being 4 or 8. General comment: Can't you use socklen_t as the type of the last argument? As it says in crypto/bio/b_sock.c:975, there *are* platforms that don't have socklen_t. Of course one can question if these platforms are modern enough/worth to care about, but why not, if it's feasible and enriching? Or course one can go for #ifdef, but does one have to? At least this is what I normally use. The type of *option_value might be platform dependent, but then we need some #ifdefs for platforms. But the choice is still between 32- and 64-bit integers. And if so, you can distinguish among them at run time as accurately. Or should one say even more accurately, because it's actual value, not assumed one from compile time. 
Of course, the absolute majority of compiled code heavily relies on assumed values being equal to actual ones, but it's not prohibited to assume that they are not, is it? #ifdefs have to be maintained in the sense that you have to follow their changes on multiple platforms, while the #ifdef-free alternative simply adapts to whichever situation with *no* maintenance. Regarding the IP_MTU/IPV6_MTU socket option on Linux: The Linux man page says that the type of the option_value is int. So I guess the bug is simply that the code uses long sockopt_val instead of int sockopt_val. All this is specific to Linux. Can you guarantee that the code in question won't ever become interesting to reuse even in a non-Linux context? I mean do you really have to assume Linux that categorically? In other words, in the context of multi-platform code such as OpenSSL there is value in *not* assuming things. I think http://rt.openssl.org/Ticket/Display.html?id=2830&user=guest&pass=guest already fixes the bug, since it changes sockopt_val from long to int. It fixes the first problem (although non-portably). But there are still the signed/unsigned int comparisons of the mtu values later in the code in d1_both.c. Of course fixing the first problem will probably mask the second problem. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb
[openssl.org #2833] BIO_CTRL_DGRAM_QUERY_MTU handling is wrong due to bad getsockopt() use
The getsockopt() for IP_MTU and IPV6_MTU, at least on Linux, returns a value of length 4. On little endian systems this is not so critical a problem; however, on big endian 64-bit systems it means the interpretation of the returned value by the code in dgram_ctrl() is completely wrong - you will get a bogus huge value of MTU which even leads to a segfault (fortunately without security impact) later in the DTLS code. The simplest fix would be to use int instead of long for the sockopt_val, although I am not sure about the portability to other non-Linux kernels. Another problem is when s->d1->mtu is compared to the dtls1_min_mtu() value in dtls1_do_write() - as a signed integer value is compared to an unsigned value, an implicit conversion of the signed integer to unsigned is performed, and a negative value (which came out of the bogus call in dgram_ctrl()) is converted to some large value; thus the comparison fails and the fallback code for choosing some safe MTU value is not invoked. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb
Re: [openssl.org #2802] 1.0.0's SSL_OP_ALL and SSL_OP_NO_TLSv1_1
On Wed, 2012-04-25 at 10:35 +0200, Andy Polyakov via RT wrote: more secure protocols. Trade-off. As a 1.0.0 application is not in a position to expect anything above TLS 1.0, the trade-off can as well be resolved in favor of interoperability. Note that there is no such trade-off in a 1.0.1 application context, because the 1.0.1 SSL_OP_ALL won't disable protocols above TLS 1.0. I'd be in favor of moving SSL_OP_NO_TLSv1_1 out of SSL_OP_ALL as of 1.0.0, as an application should not in general really care against which openssl version _with stable ABI_ it is linked. The capabilities should be defined by the underlying installed library version and not the version it was built against. Of course in case the application needs to refer to API additions for the new functionality the situation is different, but that is not the case for TLS 1.1. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb
Re: [openssl.org #2635] 1/n-1 record splitting technique for CVE-2011-3389
On Sun, 2012-04-15 at 16:45 +0200, Andy Polyakov via RT wrote: Here is an experimental patch I wrote that implements the 1/n-1 record splitting technique for OpenSSL. I am sending it here for consideration by OpenSSL upstream developers. By default the 0/n split is used but in case the SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS flag is set, we split the first record with 1/n-1. What would you [and others] say about this alternative? Non-committed, relative to HEAD... The patch seems OK however it is not clear whether this change really brings much. The original experimental patch is not really usable as there are already known applications which are even broken by the 1/n-1 split. So for SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS the split cannot be done at all anyway. Your patch will improve the compatibility of the case where SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS is not used however I have not seen any reports, at least in our Bugzilla, that would ask for that. So it's just a matter of preference whether you want to change the 0/n split to 1/n-1 one. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb
Re: [openssl.org #2635] 1/n-1 record splitting technique for CVE-2011-3389
On Mon, 2012-04-16 at 11:49 +0200, Andy Polyakov via RT wrote: Here is an experimental patch I wrote that implements the 1/n-1 record splitting technique for OpenSSL. I am sending it here for consideration by OpenSSL upstream developers. By default the 0/n split is used but in case the SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS flag is set, we split the first record with 1/n-1. What would you [and others] say about this alternative? Non-committed, relative to HEAD... The patch seems OK however it is not clear whether this change really brings much. The original experimental patch is not really usable as there are already known applications which are even broken by the 1/n-1 split. So for SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS the split cannot be done at all anyway. Your patch will improve the compatibility of the case where SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS is not used however I have not seen any reports, at least in our Bugzilla, that would ask for that. So it's just a matter of preference whether you want to change the 0/n split to 1/n-1 one. Have you heard of *clients* that suffer from 1/n-1 split? I mean if clients are tolerant to it, it might make sense to favor 1/n-1 on server side. Major obstacle for 0/n used to be Microsoft TLS or in more practical terms IE, and with 1/n-1 IE would work... I did not hear about any HTTPS clients that would be intolerant of the 1/n-1 split but other TLS usage (VPN, Jabber, ...?) might be different in this respect. But I do not know of any concrete cases where the client is intolerant of the split. As for the client side, arguably it would make things worse. I mean if a client plays smart and implements the 1/n-1 split itself depending on situation, e.g. not when using POST, then such a split in libssl would be counterproductive. I do not know of any client that uses libssl as TLS backend that would do such a 1/n-1 split by itself. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back.
Turkish proverb
Re: [openssl.org #2786] Prevent crash if dctx->get_entropy() fails
On Sat, 2012-04-07 at 21:44 +0200, Stephen Henson via RT wrote: [tm...@redhat.com - Sat Apr 07 15:39:00 2012]: This bug report applies to the OpenSSL FIPS 2.0 module. If dctx->get_entropy() fails and thus tout is set to NULL, we will set the output entropy pointer to NULL + blocklen. This will later lead to a crash, as we check for a NULL entropy pointer before calling fips_cleanup_entropy(), but it will be an invalid non-NULL pointer in this case. The attached patch prevents returning an invalid non-NULL pointer from the fips_get_entropy() function. While that is valid, changing the FIPS code at this late stage of the validation is problematical. Since the output entropy pointer is restored to its original value in fips_cleanup_entropy, can't we just make sure that function treats a NULL parameter as a no-op instead? Yes, that's surely possible as well. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb
[openssl.org #2786] Prevent crash if dctx->get_entropy() fails
This bug report applies to the OpenSSL FIPS 2.0 module. If dctx->get_entropy() fails and thus tout is set to NULL, we will set the output entropy pointer to NULL + blocklen. This will later lead to a crash because we check for NULL entropy before calling fips_cleanup_entropy(), but the pointer will be an invalid non-NULL pointer in this case. The attached patch prevents returning an invalid non-NULL pointer from the fips_get_entropy() function. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb

diff -up openssl-fips-2.0-test-20120202/fips/rand/fips_drbg_lib.c.entropy openssl-fips-2.0-test-20120202/fips/rand/fips_drbg_lib.c
--- openssl-fips-2.0-test-20120202/fips/rand/fips_drbg_lib.c.entropy	2011-12-13 01:22:17.000000000 +0100
+++ openssl-fips-2.0-test-20120202/fips/rand/fips_drbg_lib.c	2012-04-05 17:42:50.814929366 +0200
@@ -160,6 +160,8 @@ static size_t fips_get_entropy(DRBG_CTX
 		return dctx->get_entropy(dctx, pout, entropy, min_len, max_len);
 	rv = dctx->get_entropy(dctx, &tout, entropy + bl, min_len + bl, max_len + bl);
+	if (tout == NULL)
+		return 0;
 	*pout = tout + bl;
 	if (rv < (min_len + bl) || (rv % bl))
 		return 0;
Re: OpenSSL 1.0.1 released
On Wed, 2012-03-14 at 19:36 +0100, Dr. Stephen Henson wrote: On Wed, Mar 14, 2012, Mike Frysinger wrote: On Wednesday 14 March 2012 14:25:32 Dr. Stephen Henson wrote: On Wed, Mar 14, 2012, Mike Frysinger wrote: On Wednesday 14 March 2012 11:09:22 OpenSSL wrote: OpenSSL version 1.0.1 released === http://www.openssl.org/source/exp/CHANGES. The most significant changes are: o TLS/DTLS heartbeat support. o SCTP support. o RFC 5705 TLS key material exporter. o RFC 5764 DTLS-SRTP negotiation. o Next Protocol Negotiation. o PSS signatures in certificates, requests and CRLs. o Support for password based recipient info for CMS. o Support TLS v1.2 and TLS v1.1. o Preliminary FIPS capability for unvalidated 2.0 FIPS module. o SRP support. i don't see mention of ABI compat changes, and it seems to not be compatible. did someone forget to update the version string in crypto/opensslv.h ? it still says 1.0.0 ... Can you be more specific about seems to not be compatible. if the versions were compatible, there should be no warning when running apps with openssl-1.0.1 that were built against openssl-1.0.0*. but there is: OpenSSL version mismatch. Built against 105f, you have 1000100f What is producing that warning? This is a problem of the applications (OpenSSH, postgresql,) that do not expect different versions of openssl to be ABI compatible. They compare the version that they were compiled against to the version reported by the library. They usually ignore only the patch level number (abcde...). We had to patch the version number in the library to stay constant. I suppose these applications should have the version check removed as it is not guaranteed to work anyway as the ABI of openssl depends also on the compiled-in ciphers and other compile time options. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. 
Turkish proverb
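The version comparison such applications perform can be sketched like this. This is a hedged illustration of the general pattern, not OpenSSH's actual code: OPENSSL_VERSION_NUMBER packs the version as 0xMNNFFPPS (major, minor, fix, patch level, status), and the typical check masks off the patch-level and status nibbles before comparing, so 1.0.0a through 1.0.0f are accepted as compatible while 1.0.0 versus 1.0.1 is reported as a mismatch:

```c
/* Sketch of an application-side ABI check on OpenSSL version numbers.
 * Layout assumed: 0xMNNFFPPS.  The mask 0xfffff000 keeps M.NN.FF and
 * drops PP (patch letter) and S (status). */
static int versions_compatible(unsigned long built, unsigned long runtime)
{
    return (built & 0xfffff000UL) == (runtime & 0xfffff000UL);
}
```

As the reply notes, even this relaxed check is not a reliable ABI test, since the effective ABI also depends on compiled-in ciphers and other build-time options.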
Re: Patch sent to RT never got there?
On Sun, 2012-03-04 at 14:16 -0500, Thor Lancelot Simon wrote: I sent a patch to RT a few hours ago. I didn't get an ack of any kind and don't see it in the tracker -- any way to tell where it went? The message-ID was 20120304175840.ga12...@panix.com . The RT queue is moderated. So you just need to wait till the moderator looks at it. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb
[openssl.org #2710] Add missing checks for load_certs_crls failure
The attached trivial patch adds a missing check for load_certs_crls failure in apps.c. It is applicable to the 1.0.0 and 1.0.1 branches. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb

diff -up openssl-1.0.0a/apps/apps.c.load-certs openssl-1.0.0a/apps/apps.c
--- openssl-1.0.0a/apps/apps.c.load-certs	2010-05-27 16:09:13.000000000 +0200
+++ openssl-1.0.0a/apps/apps.c	2011-04-28 21:24:06.000000000 +0200
@@ -1208,7 +1208,8 @@ STACK_OF(X509) *load_certs(BIO *err, con
 	const char *pass, ENGINE *e, const char *desc)
 	{
 	STACK_OF(X509) *certs;
-	load_certs_crls(err, file, format, pass, e, desc, &certs, NULL);
+	if (!load_certs_crls(err, file, format, pass, e, desc, &certs, NULL))
+		return NULL;
 	return certs;
 	}
@@ -1216,7 +1217,8 @@ STACK_OF(X509_CRL) *load_crls(BIO *err,
 	const char *pass, ENGINE *e, const char *desc)
 	{
 	STACK_OF(X509_CRL) *crls;
-	load_certs_crls(err, file, format, pass, e, desc, NULL, &crls);
+	if (!load_certs_crls(err, file, format, pass, e, desc, NULL, &crls))
+		return NULL;
 	return crls;
 	}
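To see why the missing check mattered, here is a reduced sketch; fill_list and load_list are hypothetical stand-ins for load_certs_crls and load_certs, not the real apps.c code. The out-parameter is an uninitialized stack variable, so without the added check a failure would return an indeterminate pointer to the caller:

```c
#include <stddef.h>

/* Stand-in for load_certs_crls: returns 0 on failure and, crucially,
 * never writes through the out-parameter in that case. */
static int fill_list(void **out)
{
    (void)out;
    return 0;   /* simulated failure: *out is never assigned */
}

/* Stand-in for load_certs: 'list' is uninitialized, just like 'certs'
 * in apps.c, so returning it after a failed fill_list() would hand the
 * caller garbage.  The guard converts failure into a clean NULL. */
static void *load_list(void)
{
    void *list;
    if (!fill_list(&list))
        return NULL;    /* the patched behavior */
    return list;
}
```

Callers that already test the result for NULL then handle the error path correctly instead of dereferencing an indeterminate stack value.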
[openssl.org #2711] Fix possible NULL dereference on bad MIME headers
In some cases, when an S/MIME message with broken MIME headers is processed, a NULL dereference in mime_hdr_cmp can happen. The attached patch guards against this dereference. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb

diff -up openssl-0.9.8j/crypto/asn1/asn_mime.c.bad-mime openssl-0.9.8j/crypto/asn1/asn_mime.c
--- openssl-0.9.8j/crypto/asn1/asn_mime.c.bad-mime	2008-08-05 17:56:11.000000000 +0200
+++ openssl-0.9.8j/crypto/asn1/asn_mime.c	2009-01-14 22:08:34.000000000 +0100
@@ -792,6 +792,10 @@ static int mime_hdr_addparam(MIME_HEADER
 static int mime_hdr_cmp(const MIME_HEADER * const *a,
 			const MIME_HEADER * const *b)
 	{
+	if ((*a)->name == NULL || (*b)->name == NULL)
+		return (*a)->name - (*b)->name < 0 ? -1 :
+			(*a)->name - (*b)->name > 0 ? 1 : 0;
+
 	return(strcmp((*a)->name, (*b)->name));
 	}
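The same NULL-guard idea can be shown as a standalone, testable comparator sketch; name_cmp is an illustrative name and the real patch operates on MIME_HEADER fields. A NULL name compares before any non-NULL name and two NULLs compare equal, which keeps the ordering total and strcmp is only ever called on valid pointers:

```c
#include <string.h>

/* NULL-safe string comparison suitable for a sort callback:
 * NULL < any non-NULL string, NULL == NULL. */
static int name_cmp(const char *a, const char *b)
{
    if (a == NULL || b == NULL)
        return (a != NULL) - (b != NULL);
    return strcmp(a, b);
}
```

A comparator that can dereference NULL is a classic crash vector when attacker-controlled input (here, malformed MIME headers) decides which entries get a name at all.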
[openssl.org #2712] Be more liberal when trying to recognize the XMPP starttls headers
The attached simple patch allows other possible syntaxes of XMPP starttls headers to be recognized. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb

diff -ru openssl-1.0.0d.old/apps/s_client.c openssl-1.0.0d/apps/s_client.c
--- openssl-1.0.0d.old/apps/s_client.c	2011-07-17 21:05:19.934181169 +0200
+++ openssl-1.0.0d/apps/s_client.c	2011-07-17 21:11:42.747824990 +0200
@@ -1186,7 +1186,7 @@
 		"xmlns='jabber:client' to='%s' version='1.0'>", host);
 		seen = BIO_read(sbio,mbuf,BUFSIZZ);
 		mbuf[seen] = 0;
-		while (!strstr(mbuf, "<starttls xmlns='urn:ietf:params:xml:ns:xmpp-tls'"))
+		while (!strcasestr(mbuf, "<starttls xmlns='urn:ietf:params:xml:ns:xmpp-tls'") && !strcasestr(mbuf, "<starttls xmlns=\"urn:ietf:params:xml:ns:xmpp-tls\""))
 		{
 			if (strstr(mbuf, "</stream:features>"))
 				goto shut;
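One portability caveat with this patch: strcasestr() is a GNU/BSD extension, not part of ISO C or POSIX, so a build against another libc may need a fallback along these lines (my_strcasestr is an illustrative name, not something the patch introduces):

```c
#include <ctype.h>
#include <stddef.h>

/* Portable case-insensitive substring search, a fallback for the
 * non-standard strcasestr().  Naive O(n*m) scan, which is fine for
 * short protocol banners like the XMPP starttls header. */
static const char *my_strcasestr(const char *hay, const char *needle)
{
    if (*needle == '\0')
        return hay;
    for (; *hay; hay++) {
        size_t i = 0;
        while (hay[i] != '\0' && needle[i] != '\0' &&
               tolower((unsigned char)hay[i]) ==
               tolower((unsigned char)needle[i]))
            i++;
        if (needle[i] == '\0')
            return hay;
    }
    return NULL;
}
```

The unsigned char casts before tolower() avoid undefined behavior on platforms where plain char is signed and the input contains bytes above 0x7f.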
[openssl.org #2713] Move libraries that are not needed for dynamic linking to Libs.private in the .pc files
The attached simple patch moves the libraries that are not needed for dynamic linking to the Libs.private section in the OpenSSL .pc files. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb diff -up openssl-1.0.0e/Makefile.org.private openssl-1.0.0e/Makefile.org --- openssl-1.0.0e/Makefile.org.private 2011-11-03 10:01:53.0 +0100 +++ openssl-1.0.0e/Makefile.org 2011-11-22 11:50:27.0 +0100 @@ -326,7 +326,8 @@ libcrypto.pc: Makefile echo 'Description: OpenSSL cryptography library'; \ echo 'Version: '$(VERSION); \ echo 'Requires: '; \ - echo 'Libs: -L$${libdir} -lcrypto $(EX_LIBS)'; \ + echo 'Libs: -L$${libdir} -lcrypto'; \ + echo 'Libs.private: $(EX_LIBS)'; \ echo 'Cflags: -I$${includedir} $(KRB5_INCLUDES)' ) libcrypto.pc libssl.pc: Makefile @@ -339,7 +340,8 @@ libssl.pc: Makefile echo 'Description: Secure Sockets Layer and cryptography libraries'; \ echo 'Version: '$(VERSION); \ echo 'Requires: '; \ - echo 'Libs: -L$${libdir} -lssl -lcrypto $(EX_LIBS)'; \ + echo 'Libs: -L$${libdir} -lssl -lcrypto'; \ + echo 'Libs.private: $(EX_LIBS)'; \ echo 'Cflags: -I$${includedir} $(KRB5_INCLUDES)' ) libssl.pc openssl.pc: Makefile @@ -352,7 +354,8 @@ openssl.pc: Makefile echo 'Description: Secure Sockets Layer and cryptography libraries and tools'; \ echo 'Version: '$(VERSION); \ echo 'Requires: '; \ - echo 'Libs: -L$${libdir} -lssl -lcrypto $(EX_LIBS)'; \ + echo 'Libs: -L$${libdir} -lssl -lcrypto'; \ + echo 'Libs.private: $(EX_LIBS)'; \ echo 'Cflags: -I$${includedir} $(KRB5_INCLUDES)' ) openssl.pc Makefile: Makefile.org Configure config
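For illustration, a minimal hypothetical .pc file shows the effect of the change: `pkg-config --libs` emits only the `Libs` line, while `pkg-config --static --libs` additionally appends `Libs.private`, so dependencies such as `-ldl -lz` (example values standing in for `$(EX_LIBS)`; the real set depends on the build configuration) burden only static links.

```
prefix=/usr
libdir=${prefix}/lib
includedir=${prefix}/include

Name: demo
Description: hypothetical library illustrating Libs vs Libs.private
Version: 1.0
Libs: -L${libdir} -ldemo
Libs.private: -ldl -lz
Cflags: -I${includedir}
```

With this split, dynamically linked consumers stop over-linking against libraries they never call directly, while `--static` consumers still get the full closure.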
[openssl.org #2714] Fix build with no-srp option
OpenSSL-1.0.1-beta2 build with the no-srp option fails because there are some missing #ifndef OPENSSL_NO_SRP directives in the s_server code. The attached patch fixes this. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb

diff -up openssl-1.0.1-beta2/apps/progs.pl.no-srp openssl-1.0.1-beta2/apps/progs.pl
--- openssl-1.0.1-beta2/apps/progs.pl.no-srp	2009-06-30 17:08:38.000000000 +0200
+++ openssl-1.0.1-beta2/apps/progs.pl	2012-02-07 01:14:08.979758307 +0100
@@ -51,6 +51,8 @@ foreach (@ARGV)
 		{ print "#ifndef OPENSSL_NO_CMS\n${str}#endif\n"; }
 	elsif ( ($_ =~ /^ocsp$/))
 		{ print "#ifndef OPENSSL_NO_OCSP\n${str}#endif\n"; }
+	elsif ( ($_ =~ /^srp$/))
+		{ print "#ifndef OPENSSL_NO_SRP\n${str}#endif\n"; }
 	else
 		{ print $str; }
 	}
diff -up openssl-1.0.1-beta2/apps/s_server.c.no-srp openssl-1.0.1-beta2/apps/s_server.c
--- openssl-1.0.1-beta2/apps/s_server.c.no-srp	2012-02-07 01:04:12.000000000 +0100
+++ openssl-1.0.1-beta2/apps/s_server.c	2012-02-07 01:13:21.573362310 +0100
@@ -2248,6 +2248,7 @@ static int sv_body(char *hostname, int s
 			{ static count=0; if (++count == 100) { count=0; SSL_renegotiate(con); } }
 #endif
 			k=SSL_write(con,&(buf[l]),(unsigned int)i);
+#ifndef OPENSSL_NO_SRP
 			while (SSL_get_error(con,k) == SSL_ERROR_WANT_X509_LOOKUP)
 				{
 				BIO_printf(bio_s_out,"LOOKUP renego during write\n");
@@ -2258,6 +2259,7 @@ static int sv_body(char *hostname, int s
 				BIO_printf(bio_s_out,"LOOKUP not successful\n");
 				k=SSL_write(con,&(buf[l]),(unsigned int)i);
 				}
+#endif
 			switch (SSL_get_error(con,k))
 				{
 				case SSL_ERROR_NONE:
@@ -2305,6 +2307,7 @@ static int sv_body(char *hostname, int s
 			{
again:
 			i=SSL_read(con,(char *)buf,bufsize);
+#ifndef OPENSSL_NO_SRP
 			while (SSL_get_error(con,i) == SSL_ERROR_WANT_X509_LOOKUP)
 				{
 				BIO_printf(bio_s_out,"LOOKUP renego during read\n");
@@ -2315,6 +2318,7 @@ again:
 				BIO_printf(bio_s_out,"LOOKUP not successful\n");
 				i=SSL_read(con,(char *)buf,bufsize);
 				}
+#endif
 			switch (SSL_get_error(con,i))
 				{
 				case SSL_ERROR_NONE:
@@ -2392,6 +2396,7 @@ static int init_ssl_connection(SSL *con)
 	i=SSL_accept(con);
+#ifndef OPENSSL_NO_SRP
 	while (i <= 0 && SSL_get_error(con,i) == SSL_ERROR_WANT_X509_LOOKUP)
 		{
 		BIO_printf(bio_s_out,"LOOKUP during accept %s\n",srp_callback_parm.login);
@@ -2402,6 +2407,7 @@ static int init_ssl_connection(SSL *con)
 		BIO_printf(bio_s_out,"LOOKUP not successful\n");
 		i=SSL_accept(con);
 		}
+#endif
 	if (i <= 0)
 		{
 		if (BIO_sock_should_retry(i))
@@ -2626,6 +2632,7 @@ static int www_body(char *hostname, int
 	if (hack)
 		{
 		i=SSL_accept(con);
+#ifndef OPENSSL_NO_SRP
 		while (i <= 0 && SSL_get_error(con,i) == SSL_ERROR_WANT_X509_LOOKUP)
 			{
 			BIO_printf(bio_s_out,"LOOKUP during accept %s\n",srp_callback_parm.login);
@@ -2636,7 +2643,7 @@ static int www_body(char *hostname, int
 			BIO_printf(bio_s_out,"LOOKUP not successful\n");
 			i=SSL_accept(con);
 			}
-
+#endif
 		switch (SSL_get_error(con,i))
 			{
 			case SSL_ERROR_NONE:
[openssl.org #2641] Move the libraries needed for static linking to Libs.private
The attached patch changes the generated pkgconfig files so the libraries needed for static linking are in Libs.private instead of Libs. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb diff -up openssl-1.0.0e/Makefile.org.private openssl-1.0.0e/Makefile.org --- openssl-1.0.0e/Makefile.org.private 2011-11-03 10:01:53.0 +0100 +++ openssl-1.0.0e/Makefile.org 2011-11-22 11:50:27.0 +0100 @@ -326,7 +326,8 @@ libcrypto.pc: Makefile echo 'Description: OpenSSL cryptography library'; \ echo 'Version: '$(VERSION); \ echo 'Requires: '; \ - echo 'Libs: -L$${libdir} -lcrypto $(EX_LIBS)'; \ + echo 'Libs: -L$${libdir} -lcrypto'; \ + echo 'Libs.private: $(EX_LIBS)'; \ echo 'Cflags: -I$${includedir} $(KRB5_INCLUDES)' ) libcrypto.pc libssl.pc: Makefile @@ -339,7 +340,8 @@ libssl.pc: Makefile echo 'Description: Secure Sockets Layer and cryptography libraries'; \ echo 'Version: '$(VERSION); \ echo 'Requires: '; \ - echo 'Libs: -L$${libdir} -lssl -lcrypto $(EX_LIBS)'; \ + echo 'Libs: -L$${libdir} -lssl -lcrypto'; \ + echo 'Libs.private: $(EX_LIBS)'; \ echo 'Cflags: -I$${includedir} $(KRB5_INCLUDES)' ) libssl.pc openssl.pc: Makefile @@ -352,7 +354,8 @@ openssl.pc: Makefile echo 'Description: Secure Sockets Layer and cryptography libraries and tools'; \ echo 'Version: '$(VERSION); \ echo 'Requires: '; \ - echo 'Libs: -L$${libdir} -lssl -lcrypto $(EX_LIBS)'; \ + echo 'Libs: -L$${libdir} -lssl -lcrypto'; \ + echo 'Libs.private: $(EX_LIBS)'; \ echo 'Cflags: -I$${includedir} $(KRB5_INCLUDES)' ) openssl.pc Makefile: Makefile.org Configure config
Re: [openssl.org #2633] x86cpuid.pl incorrectly handles AVX when OSXSAVE not set
On Fri, 2011-11-18 at 12:16 +0100, Andy Polyakov via RT wrote: commit 6c3f6041172b78d5532c6bf3680d304e92ec2e66 Author: Sheng Yang sh...@linux.intel.com Date: Tue Jun 22 13:49:21 2010 +0800 KVM: x86: Enable AVX for guest Enable Intel(R) Advanced Vector Extension(AVX) for guest. The detection of AVX feature includes OSXSAVE bit testing. When OSXSAVE bit not set, even if AVX is supported, the AVX instruction would result in UD as well. So we're safe to expose AVX bits to guest directly. Signed-off-by: Sheng Yang sh...@linux.intel.com Signed-off-by: Avi Kivity a...@redhat.com This kind of sounds like it *was* masked and they decided to change this. The question is why? BTW, why isn't the FMA flag discussed at the same time? Latest KVM still exposes AVX, and sets OSXSAVE to 0. This gives the guest kernel the option, which is based on XSAVE. This kind of sounds like KVM maintains a per-guest XCR0, in which case it's indeed more than appropriate to stop masking the AVX flag. I.e. is it possible to conclude that it was masked earlier and they stopped masking it after making KVM XCR0-aware? If so, why do you find it appropriate to not mask it without making Xen XCR0-aware? But the question is more of a rhetorical character... because we kind of agree that the latest version is acceptable, right? I forwarded your other questions to our virtualization developers. However, for the last question the answer is yes, the latest version is acceptable. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb
Re: [openssl.org #2633] x86cpuid.pl incorrectly handles AVX when OSXSAVE not set
On Tue, 2011-11-08 at 22:29 +0100, Andy Polyakov via RT wrote: As for XEN, if it in fact masks XSAVE, but not the AVX bits, then even the check for the XSAVE bit should 'jnc(&label("clear_avx"));' instead of done. As well as that x86_64cpuid.pl should test for XSAVE... That would also work, but it's useless because the spec OTOH says that you *can* ignore XSAVE (and anyway XSAVE means nothing: it says the feature is available, but only OSXSAVE says it is actually usable). I still fail to see how exactly it failed for you. Once again, which flags does the guest OS observe exactly? Is the guest OS YMM-capable? Does the latest x86cpuid.pl work for you or is it still a problem? No, it does not work as the cpuid on the guest OS observes a cleared XSAVE and a set AVX bit. How does XSAVE end up being 0? The hypervisor masks it, right? By what means? Through a config file or is it explicitly programmed? What was the reasoning for masking it? In a config file or code? Why didn't the same reasoning apply to the AVX [as FMA] bit? As implied, I'd argue that it's inappropriate to mask XSAVE but not AVX [and FMA]. Either way, can you at least tell if it's controlled through config? Or in other words, is it possible to work around the problem by configuring your Xen? Context of the question is the frozen FIPS code. Which means that the AVX instructions will be used in the SHA1 code, which then fails with SIGILL. The OSXSAVE is also cleared, so that means if the XSAVE test was just dropped it would work. So would jumping to clear_avx if XSAVE is 0, right? Anyway, see http://cvs.openssl.org/chngview?cn=21675. Forwarding reply from our virtualization developers: RHEL5 Xen's cpuid masking is hardcoded in the HV (so no nice config to quickly modify). We've recently changed it to be a whitelist, where the structure is heavily influenced by KVM. Looking at KVM's whitelist patch history I found this commit, which I believe describes this issue best.
commit 6c3f6041172b78d5532c6bf3680d304e92ec2e66 Author: Sheng Yang sh...@linux.intel.com Date: Tue Jun 22 13:49:21 2010 +0800 KVM: x86: Enable AVX for guest Enable Intel(R) Advanced Vector Extension(AVX) for guest. The detection of AVX feature includes OSXSAVE bit testing. When OSXSAVE bit not set, even if AVX is supported, the AVX instruction would result in UD as well. So we're safe to expose AVX bits to guest directly. Signed-off-by: Sheng Yang sh...@linux.intel.com Signed-off-by: Avi Kivity a...@redhat.com Latest KVM still exposes AVX, and sets OSXSAVE to 0. This gives the guest kernel the option, which is based on XSAVE. Now, RHEL Xen masks XSAVE, so the option isn't really there for its guest kernels, but I don't believe that means we can't rely on the same protocol. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb
Re: [openssl.org #2633] x86cpuid.pl incorrectly handles AVX when OSXSAVE not set
On Tue, 2011-11-08 at 20:32 +0100, Andy Polyakov via RT wrote: As for XEN, if it in fact masks XSAVE, but not the AVX bits, then even the check for the XSAVE bit should 'jnc(&label("clear_avx"));' instead of done. As well as that x86_64cpuid.pl should test for XSAVE... That would also work, but it's useless because the spec OTOH says that you *can* ignore XSAVE (and anyway XSAVE means nothing: it says the feature is available, but only OSXSAVE says it is actually usable). I still fail to see how exactly it failed for you. Once again, which flags does the guest OS observe exactly? Is the guest OS YMM-capable? Does the latest x86cpuid.pl work for you or is it still a problem? No, it does not work as the cpuid on the guest OS observes a cleared XSAVE and a set AVX bit. Which means that the AVX instructions will be used in the SHA1 code, which then fails with SIGILL. The OSXSAVE is also cleared, so that means if the XSAVE test was just dropped it would work. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb
Re: [openssl.org #2633] x86cpuid.pl incorrectly handles AVX when OSXSAVE not set
On Sat, 2011-11-05 at 11:53 +0100, Andy Polyakov via RT wrote: x86cpuid.pl instead is completely broken: - the whole test is bypassed if XSAVE=1, which makes absolutely no sense. x86_64cpuid.pl is right in testing OSXSAVE No, the test is bypassed if XSAVE is 0, not 1. XSAVE being 0 also implies that the AVX flag [as well as FMA and XOP] is 0, which is why it jumps to 'done' and not 'clear_avx'. This assertion is unfortunately not true on RHEL-6 guests on AVX-capable CPUs in a XEN VM. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb
[openssl.org #2637] Missing documentation for -no_ign_eof option
The attached patch adds missing documentation for the -no_ign_eof option. -- Tomas Mraz No matter how far down the wrong road you've gone, turn back. Turkish proverb

diff -up openssl-1.0.0e/doc/apps/s_client.pod.doc-noeof openssl-1.0.0e/doc/apps/s_client.pod
--- openssl-1.0.0e/doc/apps/s_client.pod.doc-noeof	2009-06-26 13:28:51.000000000 +0200
+++ openssl-1.0.0e/doc/apps/s_client.pod	2011-11-03 08:30:35.000000000 +0100
@@ -27,6 +27,7 @@ B<openssl> B<s_client>
 [B<-nbio>]
 [B<-crlf>]
 [B<-ign_eof>]
+[B<-no_ign_eof>]
 [B<-quiet>]
 [B<-ssl2>]
 [B<-ssl3>]
@@ -161,6 +162,11 @@ by some servers.
 inhibit shutting down the connection when end of file is reached in the
 input.
 
+=item B<-no_ign_eof>
+
+shut down the connection when end of file is reached in the
+input. Can be used to override the implicit B<-ign_eof> after B<-quiet>.
+
 =item B<-quiet>
 
 inhibit printing of session and certificate information. This implicitly
[openssl.org #2633] x86cpuid.pl incorrectly handles AVX when OSXSAVE not set
Here is analysis by Paolo Bonzini: I compared crypto/x86_64cpuid.pl and crypto/x86cpuid.pl, and the code in the latter is wrong. From x86_64cpuid.pl:

	mov	%edx,%r10d		# %r9d:%r10d is copy of %ecx:%edx
	bt	\$27,%r9d		# check OSXSAVE bit
	jnc	.Lclear_avx
	xor	%ecx,%ecx		# XCR0
	.byte	0x0f,0x01,0xd0		# xgetbv
	and	\$6,%eax		# isolate XMM and YMM state support
	cmp	\$6,%eax
	je	.Ldone
.Lclear_avx:
	mov	\$0xefffe7ff,%eax	# ~(1<<28|1<<12|1<<11)
	and	%eax,%r9d		# clear AVX, FMA and AMD XOP bits
.Ldone:

From x86cpuid.pl:

	&bt	("ecx",26);		# check XSAVE bit
	&jnc	(&label("done"));
	&bt	("ecx",27);		# check OSXSAVE bit
	&jnc	(&label("clear_xmm"));
	&xor	("ecx","ecx");
	&data_byte(0x0f,0x01,0xd0);	# xgetbv
	&and	("eax",6);
	&cmp	("eax",6);
	&je	(&label("done"));
	&cmp	("eax",2);
	&je	(&label("clear_avx"));
&set_label("clear_xmm");
	&and	("ebp",0xfdfffffd);	# clear AESNI and PCLMULQDQ bits
	&and	("esi",0xfeffffff);	# clear FXSR
&set_label("clear_avx");
	&and	("ebp",0xefffe7ff);	# clear AVX, FMA and AMD XOP bits
&set_label("done");

x86_64cpuid.pl is not completely correct; if bit 1 of EAX was zero (XMM support not enabled in the OS) you would need to clear the AESNI and PCLMULQDQ bits as done in x86cpuid.pl. However, in practice it does not matter because any OS new enough to set OSXSAVE will always enable XMM support as well. x86cpuid.pl instead is completely broken:
- the whole test is bypassed if XSAVE=1, which makes absolutely no sense. x86_64cpuid.pl is right in testing OSXSAVE
- if OSXSAVE=0, all SSE code is disabled, which also makes no sense because any OS less than 10 years old lets you use SSE even if it does not set OSXSAVE (via FXSAVE), and this includes of course RHEL6.
The attached patch (unfortunately not yet tested) synchronizes the two tests.
--- crypto/x86cpuid.pl	2011-10-26 17:13:03.599641479 +0200
+++ crypto/x86cpuid.pl	2011-10-26 17:41:04.400262001 +0200
@@ -119,20 +119,13 @@
 	&mov	("esi","edx");
 	&or	("ebp","ecx");		# merge AMD XOP flag
-	&bt	("ecx",26);		# check XSAVE bit
-	&jnc	(&label("done"));
 	&bt	("ecx",27);		# check OSXSAVE bit
-	&jnc	(&label("clear_xmm"));
-	&xor	("ecx","ecx");
+	&jnc	(&label("clear_avx"));
+	&xor	("ecx","ecx");		# XCR0
 	&data_byte(0x0f,0x01,0xd0);	# xgetbv
-	&and	("eax",6);
+	&and	("eax",6);		# isolate XMM and YMM state support
 	&cmp	("eax",6);
 	&je	(&label("done"));
-	&cmp	("eax",2);
-	&je	(&label("clear_avx"));
-&set_label("clear_xmm");
-	&and	("ebp",0xfdfffffd);	# clear AESNI and PCLMULQDQ bits
-	&and	("esi",0xfeffffff);	# clear FXSR
 &set_label("clear_avx");
 	&and	("ebp",0xefffe7ff);	# clear AVX, FMA and AMD XOP bits
 &set_label("done");
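The detection rule the corrected perlasm implements can be restated in C as a sketch (the inputs here are plain parameters rather than real CPUID/xgetbv instructions, and the function name is illustrative): AVX is usable only when CPUID leaf 1 ECX advertises it, the OS has set OSXSAVE, and XCR0 enables both XMM and YMM state.

```c
#include <stdint.h>

/* ecx_leaf1: CPUID.1:ECX feature flags; xcr0: the value XGETBV(0)
 * would return.  Mirrors the corrected logic: OSXSAVE first, then
 * XCR0 bits 1 (XMM) and 2 (YMM), then the AVX flag itself. */
static int avx_usable(uint32_t ecx_leaf1, uint64_t xcr0)
{
    const uint32_t OSXSAVE = 1u << 27;
    const uint32_t AVX     = 1u << 28;

    if (!(ecx_leaf1 & OSXSAVE))   /* OS does not manage extended state */
        return 0;
    if ((xcr0 & 6) != 6)          /* XMM and YMM state not both enabled */
        return 0;
    return (ecx_leaf1 & AVX) != 0;
}
```

This is exactly why the buggy XSAVE early-exit was wrong: a hypervisor can clear XSAVE while leaving AVX set, and only the OSXSAVE/XCR0 pair tells you whether executing a VEX-encoded instruction will trap with #UD (SIGILL).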
[openssl.org #2635] 1/n-1 record splitting technique for CVE-2011-3389
Here is an experimental patch I wrote that implements the 1/n-1 record splitting technique for OpenSSL. I am sending it here for consideration by OpenSSL upstream developers. By default the 0/n split is used but in case the SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS flag is set, we split the first record with 1/n-1. Tomas Mraz

diff -up openssl-1.0.0e/ssl/s3_both.c.beast openssl-1.0.0e/ssl/s3_both.c
--- openssl-1.0.0e/ssl/s3_both.c.beast	2010-03-25 00:16:49.000000000 +0100
+++ openssl-1.0.0e/ssl/s3_both.c	2011-10-13 14:05:50.000000000 +0200
@@ -758,15 +758,12 @@ int ssl3_setup_write_buffer(SSL *s)
 	if (s->s3->wbuf.buf == NULL)
 		{
 		len = s->max_send_fragment
-			+ SSL3_RT_SEND_MAX_ENCRYPTED_OVERHEAD
-			+ headerlen + align;
+			+ 2 * (SSL3_RT_SEND_MAX_ENCRYPTED_OVERHEAD
+			+ headerlen + align);
 #ifndef OPENSSL_NO_COMP
 		if (!(s->options & SSL_OP_NO_COMPRESSION))
 			len += SSL3_RT_MAX_COMPRESSED_OVERHEAD;
 #endif
-		if (!(s->options & SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS))
-			len += headerlen + align
-				+ SSL3_RT_SEND_MAX_ENCRYPTED_OVERHEAD;
 		if ((p=freelist_extract(s->ctx, 0, len)) == NULL)
 			goto err;
diff -up openssl-1.0.0e/ssl/s3_enc.c.beast openssl-1.0.0e/ssl/s3_enc.c
--- openssl-1.0.0e/ssl/s3_enc.c.beast	2011-09-07 14:00:41.000000000 +0200
+++ openssl-1.0.0e/ssl/s3_enc.c	2011-10-13 14:05:50.000000000 +0200
@@ -428,23 +428,20 @@ int ssl3_setup_key_block(SSL *s)
 	ret = ssl3_generate_key_block(s,p,num);
-	if (!(s->options & SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS))
+	/* enable vulnerability countermeasure for CBC ciphers with
+	 * known-IV problem (http://www.openssl.org/~bodo/tls-cbc.txt)
+	 */
+	s->s3->need_empty_fragments = 1 +
+		(s->options & SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS ? 1 : 0);
+
+	if (s->session->cipher != NULL)
 		{
-		/* enable vulnerability countermeasure for CBC ciphers with
-		 * known-IV problem (http://www.openssl.org/~bodo/tls-cbc.txt)
-		 */
-		s->s3->need_empty_fragments = 1;
+		if (s->session->cipher->algorithm_enc == SSL_eNULL)
+			s->s3->need_empty_fragments = 0;
-		if (s->session->cipher != NULL)
-			{
-			if (s->session->cipher->algorithm_enc == SSL_eNULL)
-				s->s3->need_empty_fragments = 0;
-
 #ifndef OPENSSL_NO_RC4
-			if (s->session->cipher->algorithm_enc == SSL_RC4)
-				s->s3->need_empty_fragments = 0;
+		if (s->session->cipher->algorithm_enc == SSL_RC4)
+			s->s3->need_empty_fragments = 0;
 #endif
-			}
 		}
 	return ret;
diff -up openssl-1.0.0e/ssl/s3_pkt.c.beast openssl-1.0.0e/ssl/s3_pkt.c
--- openssl-1.0.0e/ssl/s3_pkt.c.beast	2011-05-25 17:21:12.000000000 +0200
+++ openssl-1.0.0e/ssl/s3_pkt.c	2011-10-13 14:05:50.000000000 +0200
@@ -685,7 +685,10 @@ static int do_ssl3_write(SSL *s, int typ
 		 * this prepares and buffers the data for an empty fragment
 		 * (these 'prefix_len' bytes are sent out later
 		 * together with the actual payload) */
-		prefix_len = do_ssl3_write(s, type, buf, 0, 1);
+		prefix_len = do_ssl3_write(s, type, buf,
+			s->s3->need_empty_fragments-1, 1);
+		buf += s->s3->need_empty_fragments-1;
+		len -= s->s3->need_empty_fragments-1;
 		if (prefix_len <= 0)
 			goto err;
diff -up openssl-1.0.0e/ssl/t1_enc.c.beast openssl-1.0.0e/ssl/t1_enc.c
--- openssl-1.0.0e/ssl/t1_enc.c.beast	2011-09-07 14:00:41.000000000 +0200
+++ openssl-1.0.0e/ssl/t1_enc.c	2011-10-13 14:07:55.000000000 +0200
@@ -608,23 +608,20 @@ printf("\nkey block\n");
 	{ int z; for (z=0; z<num; z++) printf("%02X%c",p1[z],((z+1)%16)?' ':'\n'); }
 #endif
-	if (!(s->options & SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS))
+	/* enable vulnerability countermeasure for CBC ciphers with
+	 * known-IV problem (http://www.openssl.org/~bodo/tls-cbc.txt)
+	 */
+	s->s3->need_empty_fragments = 1 +
+		(s->options & SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS ? 1 : 0);
+
+	if (s->session->cipher != NULL)
 		{
-		/* enable vulnerability countermeasure for CBC ciphers with
-		 * known-IV problem (http://www.openssl.org/~bodo/tls-cbc.txt)
-		 */
-		s->s3->need_empty_fragments = 1;
+		if (s->session->cipher->algorithm_enc == SSL_eNULL)
+			s->s3->need_empty_fragments = 0;
-		if (s->session->cipher != NULL)
-			{
-			if (s->session->cipher->algorithm_enc == SSL_eNULL)
-				s->s3->need_empty_fragments = 0;
-
 #ifndef OPENSSL_NO_RC4
-			if (s->session->cipher->algorithm_enc == SSL_RC4)
-				s->s3->need_empty_fragments = 0;
+		if (s->session->cipher->algorithm_enc == SSL_RC4)
+			s->s3->need_empty_fragments = 0;
 #endif
-			}
 		}
 	ret = 1;
Re: Question about OpenSSL, FIPS and version numbers
On Mon, 2011-10-17 at 21:18 +0000, Keith Welter wrote: The OpenSSL FIPS 140-2 User Guide says: The FIPS Object Module provides an API for invocation of FIPS approved cryptographic functions from calling applications, and is designed for use in conjunction with standard OpenSSL 0.9.8 distributions beginning with 0.9.8j. Note: OpenSSL 1.0.0 is not supported for use with the OpenSSL FIPS Object Module. These standard OpenSSL 0.9.8 source distributions support the original non-FIPS API as well as a FIPS mode in which the FIPS approved algorithms are implemented by the FIPS Object Module and non-FIPS approved algorithms other than DH are disabled by default. These non-validated algorithms include, but are not limited to, Blowfish, CAST, IDEA, RC-family, and non-SHA message digest and other algorithms. However, on my installation, the 'openssl version' command reports: OpenSSL 1.0.0-fips 29 Mar 2010 That's probably because you're running Red Hat Enterprise Linux 6 - the OpenSSL library there was patched to support running in the FIPS mode and it is currently in the process of FIPS validation independent from the upstream FIPS validation. This just shares some parts of the FIPS related code from the upstream module but it does not support the upstream FIPS module. Tomas Mraz
Re: 1/n-1 record splitting technique
On Wed, 2011-10-05 at 14:31 -0700, no_spam...@yahoo.com wrote:

Are there plans for OpenSSL to adopt the 1/n-1 record splitting technique (credit Xuelei Fan) that the browsers appear to be using to mitigate the BEAST attack? I realize that OpenSSL currently contains a different mitigation technique (sending empty fragments). Evidently there are broken SSL implementations still in use that don't get along with this technique.

Here is an experimental patch written by me that implements the 1/n-1 record splitting technique for OpenSSL. Please test it if you're interested.

-- 
Tomas Mraz
No matter how far down the wrong road you've gone, turn back.
                                              Turkish proverb

diff -up openssl-1.0.0e/ssl/s3_both.c.beast openssl-1.0.0e/ssl/s3_both.c
--- openssl-1.0.0e/ssl/s3_both.c.beast	2010-03-25 00:16:49.0 +0100
+++ openssl-1.0.0e/ssl/s3_both.c	2011-10-13 14:05:50.0 +0200
@@ -758,15 +758,12 @@ int ssl3_setup_write_buffer(SSL *s)
 	if (s->s3->wbuf.buf == NULL)
 		{
 		len = s->max_send_fragment
-			+ SSL3_RT_SEND_MAX_ENCRYPTED_OVERHEAD
-			+ headerlen + align;
+			+ 2 * (SSL3_RT_SEND_MAX_ENCRYPTED_OVERHEAD
+			+ headerlen + align);
 #ifndef OPENSSL_NO_COMP
 		if (!(s->options & SSL_OP_NO_COMPRESSION))
 			len += SSL3_RT_MAX_COMPRESSED_OVERHEAD;
 #endif
-		if (!(s->options & SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS))
-			len += headerlen + align
-				+ SSL3_RT_SEND_MAX_ENCRYPTED_OVERHEAD;
 
 		if ((p=freelist_extract(s->ctx, 0, len)) == NULL)
 			goto err;
diff -up openssl-1.0.0e/ssl/s3_enc.c.beast openssl-1.0.0e/ssl/s3_enc.c
--- openssl-1.0.0e/ssl/s3_enc.c.beast	2011-09-07 14:00:41.0 +0200
+++ openssl-1.0.0e/ssl/s3_enc.c	2011-10-13 14:05:50.0 +0200
@@ -428,23 +428,20 @@ int ssl3_setup_key_block(SSL *s)
 	ret = ssl3_generate_key_block(s,p,num);
 
-	if (!(s->options & SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS))
+	/* enable vulnerability countermeasure for CBC ciphers with
+	 * known-IV problem (http://www.openssl.org/~bodo/tls-cbc.txt)
+	 */
+	s->s3->need_empty_fragments = 1 +
+		(s->options & SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS ? 1 : 0);
+
+	if (s->session->cipher != NULL)
 		{
-		/* enable vulnerability countermeasure for CBC ciphers with
-		 * known-IV problem (http://www.openssl.org/~bodo/tls-cbc.txt)
-		 */
-		s->s3->need_empty_fragments = 1;
+		if (s->session->cipher->algorithm_enc == SSL_eNULL)
+			s->s3->need_empty_fragments = 0;
 
-		if (s->session->cipher != NULL)
-			{
-			if (s->session->cipher->algorithm_enc == SSL_eNULL)
-				s->s3->need_empty_fragments = 0;
-
 #ifndef OPENSSL_NO_RC4
-			if (s->session->cipher->algorithm_enc == SSL_RC4)
-				s->s3->need_empty_fragments = 0;
+		if (s->session->cipher->algorithm_enc == SSL_RC4)
+			s->s3->need_empty_fragments = 0;
 #endif
-			}
 		}
 
 	return ret;
diff -up openssl-1.0.0e/ssl/s3_pkt.c.beast openssl-1.0.0e/ssl/s3_pkt.c
--- openssl-1.0.0e/ssl/s3_pkt.c.beast	2011-05-25 17:21:12.0 +0200
+++ openssl-1.0.0e/ssl/s3_pkt.c	2011-10-13 14:05:50.0 +0200
@@ -685,7 +685,10 @@ static int do_ssl3_write(SSL *s, int typ
 		 * this prepares and buffers the data for an empty fragment
 		 * (these 'prefix_len' bytes are sent out later
 		 * together with the actual payload) */
-		prefix_len = do_ssl3_write(s, type, buf, 0, 1);
+		prefix_len = do_ssl3_write(s, type, buf,
+			s->s3->need_empty_fragments-1, 1);
+		buf += s->s3->need_empty_fragments-1;
+		len -= s->s3->need_empty_fragments-1;
 		if (prefix_len <= 0)
 			goto err;
diff -up openssl-1.0.0e/ssl/t1_enc.c.beast openssl-1.0.0e/ssl/t1_enc.c
--- openssl-1.0.0e/ssl/t1_enc.c.beast	2011-09-07 14:00:41.0 +0200
+++ openssl-1.0.0e/ssl/t1_enc.c	2011-10-13 14:07:55.0 +0200
@@ -608,23 +608,20 @@
 printf("\nkey block\n");
 	{ int z; for (z=0; z<num; z++) printf("%02X%c",p1[z],((z+1)%16)?' ':'\n'); }
 #endif
 
-	if (!(s->options & SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS))
+	/* enable vulnerability countermeasure for CBC ciphers with
+	 * known-IV problem (http://www.openssl.org/~bodo/tls-cbc.txt)
+	 */
+	s->s3->need_empty_fragments = 1 +
+		(s->options & SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS ? 1 : 0);
+
+	if (s->session->cipher != NULL)
 		{
-		/* enable vulnerability countermeasure for CBC ciphers with
-		 * known-IV problem (http://www.openssl.org/~bodo/tls-cbc.txt)
-		 */
-		s->s3->need_empty_fragments = 1;
+		if (s->session->cipher->algorithm_enc == SSL_eNULL)
+			s->s3->need_empty_fragments = 0;
 
-		if (s->session->cipher != NULL)
-			{
-			if (s->session->cipher->algorithm_enc == SSL_eNULL)
-				s->s3->need_empty_fragments = 0;
-
 #ifndef OPENSSL_NO_RC4
-			if (s->session->cipher->algorithm_enc == SSL_RC4)
-				s->s3->need_empty_fragments = 0;
+		if (s->session->cipher->algorithm_enc == SSL_RC4)
+			s->s3->need_empty_fragments = 0;
 #endif
-			}
 		}
 
 	ret = 1;
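To make the splitting logic concrete, here is a small standalone sketch (my own illustration, not OpenSSL code) of the convention the patch gives to need_empty_fragments: 1 keeps the classic empty-fragment countermeasure, 2 selects the 1/n-1 split, and the prefix record carries need_empty_fragments-1 payload bytes.

```c
#include <assert.h>

/* Illustration only (not OpenSSL API).  Under the patch's convention,
 * need_empty_fragments == 1 sends an empty prefix record (classic
 * countermeasure) and need_empty_fragments == 2 sends a one-byte prefix
 * record (1/n-1 splitting). */
static int prefix_payload_len(int need_empty_fragments)
{
    return need_empty_fragments - 1;        /* 0 or 1 payload bytes */
}

/* Payload bytes left for the main record out of an n-byte write. */
static int main_payload_len(int need_empty_fragments, int n)
{
    return n - prefix_payload_len(need_empty_fragments);
}
```

Either way the attacker never gets a record whose first CBC block is both fully attacker-chosen and encrypted under a predictable IV, which is the condition BEAST exploits.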
[openssl.org #2616] Missing initialization in the CHIL engine
There is a missing initialization of a variable in the CHIL engine. If the uninitialized value of the variable answer happens to be 'C' and there is no prompt, the engine startup will erroneously fail. The attached patch fixes this.

-- 
Tomas Mraz

diff -up openssl-1.0.0e/engines/e_chil.c.chil openssl-1.0.0e/engines/e_chil.c
--- openssl-1.0.0e/engines/e_chil.c.chil	2010-06-15 19:25:12.0 +0200
+++ openssl-1.0.0e/engines/e_chil.c	2011-09-21 17:32:03.0 +0200
@@ -1287,7 +1287,7 @@ static int hwcrhk_insert_card(const char
 	if (ui)
 		{
-		char answer;
+		char answer = '\0';
 		char buf[BUFSIZ];
 		/* Despite what the documentation says wrong_info can be
 		 * an empty string.
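The failure mode can be sketched outside the engine (hypothetical helper, not the e_chil.c code itself): when nothing prompts the user, `answer` is read before it is ever written, so whatever garbage happens to be on the stack decides the outcome. Initializing it makes the no-prompt path deterministic.

```c
#include <assert.h>

/* Sketch of the bug fixed by the patch.  Without the '\0' initializer,
 * the no-prompt path would return an indeterminate value (undefined
 * behavior) -- e.g. a leftover 'C' that makes startup fail. */
static char pick_answer(int prompted, char user_input)
{
    char answer = '\0';          /* the one-line fix from the patch */
    if (prompted)
        answer = user_input;
    return answer;
}
```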
Re: Fipscheck X FIPS_incore_fingerprint
On Wed, 2011-08-03 at 17:40 -0300, Tatiana Evers wrote:

Hi Tomas, you said that OpenSSH does not use the FIPS_incore_fingerprint call. But it does call FIPS_mode_set, and that calls FIPS_incore_fingerprint:

int FIPS_mode_set(int onoff)
	{
	int fips_set_owning_thread();
	int fips_clear_owning_thread();
	int ret = 0;

	fips_w_lock();
	fips_set_started();
	fips_set_owning_thread();

	if(onoff)
		{
		unsigned char buf[48];

		fips_selftest_fail = 0;

		if(!FIPS_check_incore_fingerprint())
			{
			fips_selftest_fail = 1;
			ret = 0;
			goto end;
			}
		}

Did the Red Hat Enterprise Linux OpenSSL and OpenSSH modules modify the FIPS_mode_set function, so that this OpenSSL doesn't use the FIPS_check_incore_fingerprint() call?

Yes, we modified the OpenSSL code, and the Red Hat Enterprise Linux OpenSSL FIPS module is validated independently from the OpenSSL upstream FIPS module.

-- 
Tomas Mraz

__
OpenSSL Project                                 http://www.openssl.org
Development Mailing List                       openssl-dev@openssl.org
Automated List Manager                           majord...@openssl.org
Re: Fipscheck X FIPS_incore_fingerprint
On Wed, 2011-08-03 at 15:02 -0300, Tatiana Evers wrote:

Hi, I'm a little confused by the FIPS integrity test. I'm using openssh and it uses the fipscheck library (FIPSCHECK_verify) to verify the integrity of its binaries. But the FIPS_mode_set function calls FIPS_incore_fingerprint to verify the integrity of the application at execution time. Why do we need an external validation? Isn't FIPS_incore_fingerprint sufficient to verify integrity?

You're mixing up the OpenSSL upstream FIPS module with the Red Hat Enterprise Linux OpenSSL and OpenSSH modules. They use a different integrity verification test and they do not use the FIPS_incore_fingerprint call.

-- 
Tomas Mraz
[openssl.org #2572] Correct help output in openssl cms
openssl cms help output contains a -skeyid option which is actually the -keyid option as recognized by the cms code. The attached trivial patch corrects the help output.

-- 
Tomas Mraz

diff -up openssl-1.0.0d/apps/cms.c.keyid openssl-1.0.0d/apps/cms.c
--- openssl-1.0.0d/apps/cms.c.keyid	2009-10-18 16:42:26.0 +0200
+++ openssl-1.0.0d/apps/cms.c	2011-07-26 12:56:48.0 +0200
@@ -618,7 +618,7 @@ int MAIN(int argc, char **argv)
 	BIO_printf (bio_err, "-certsout file certificate output file\n");
 	BIO_printf (bio_err, "-signer file   signer certificate file\n");
 	BIO_printf (bio_err, "-recip file    recipient certificate file for decryption\n");
-	BIO_printf (bio_err, "-skeyid        use subject key identifier\n");
+	BIO_printf (bio_err, "-keyid         use subject key identifier\n");
 	BIO_printf (bio_err, "-in file       input file\n");
 	BIO_printf (bio_err, "-inform arg    input format SMIME (default), PEM or DER\n");
 	BIO_printf (bio_err, "-inkey file    input private key (if not signer or recipient)\n");
[openssl.org #2565] More tolerant detection of XMPP starttls sequence
The attached patch written by J.H.M. Ray Dassen improves detection of the XMPP starttls sequence for s_client. Please consider applying it.

-- 
Tomas Mraz

diff -ru openssl-1.0.0d.old/apps/s_client.c openssl-1.0.0d/apps/s_client.c
--- openssl-1.0.0d.old/apps/s_client.c	2011-07-17 21:05:19.934181169 +0200
+++ openssl-1.0.0d/apps/s_client.c	2011-07-17 21:11:42.747824990 +0200
@@ -1186,7 +1186,7 @@
 			"xmlns='jabber:client' to='%s' version='1.0'", host);
 		seen = BIO_read(sbio,mbuf,BUFSIZZ);
 		mbuf[seen] = 0;
-		while (!strstr(mbuf, "<starttls xmlns='urn:ietf:params:xml:ns:xmpp-tls'"))
+		while (!strcasestr(mbuf, "<starttls xmlns='urn:ietf:params:xml:ns:xmpp-tls'") && !strcasestr(mbuf, "<starttls xmlns=\"urn:ietf:params:xml:ns:xmpp-tls\""))
 			{
 			if (strstr(mbuf, "</stream:features>"))
 				goto shut;
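The difference the patch makes can be sketched in isolation. Note that strcasestr() is a GNU/BSD extension rather than ISO C (hence the _GNU_SOURCE define on glibc), which is worth keeping in mind for portability:

```c
#define _GNU_SOURCE           /* strcasestr() is a GNU/BSD extension, not ISO C */
#include <assert.h>
#include <string.h>

/* Condensed version of the patched s_client check: accept the starttls
 * element regardless of case and with either quoting style. */
static int has_starttls(const char *mbuf)
{
    return strcasestr(mbuf, "<starttls xmlns='urn:ietf:params:xml:ns:xmpp-tls'") != NULL
        || strcasestr(mbuf, "<starttls xmlns=\"urn:ietf:params:xml:ns:xmpp-tls\"") != NULL;
}
```

The original strstr() check would have rejected a server that sent the element in upper case or with double-quoted attributes, even though both are legal XML.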
[openssl.org #2538] Code error - bad condition in s3_srvr.c
There is a code error in s3_srvr.c, function ssl3_get_cert_verify(). There is a condition if ((peer != NULL) && (type | EVP_PKT_SIGN)) - the second part of the condition is a no-op. The correct condition should be if ((peer != NULL) && (type & EVP_PKT_SIGN)), although non-signing certificates with static DH parameters are not really used. The bug was found by a Coverity scan.

-- 
Tomas Mraz
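Why the second clause is a no-op can be demonstrated with plain bit arithmetic; the EVP_PKT_SIGN value below is the one from OpenSSL's evp.h, while the helper names are mine:

```c
#include <assert.h>

#define EVP_PKT_SIGN 0x0010   /* value as defined in OpenSSL's evp.h */

/* The buggy test: bitwise OR with a nonzero constant is always nonzero,
 * so this "check" accepts every key type. */
static int buggy_can_sign(int type)   { return (type | EVP_PKT_SIGN) != 0; }

/* The intended test: bitwise AND actually inspects the sign bit. */
static int correct_can_sign(int type) { return (type & EVP_PKT_SIGN) != 0; }
```

This is the classic `|`-for-`&` typo in a flag test, and exactly the pattern Coverity flags as a constant-valued expression.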
Re: [openssl.org #2395] openssl-1.0.0c bug: Decoding cert causes segv in ASN1 code
int mk_test_cert(int buflen, char* buf)
{
	char* p;
	char* q;
	X509* sign_cert;

	q = unbase64_raw(cert_b64, cert_b64+sizeof(cert_b64)-1, p=buf, std_index_64);
	if (!d2i_X509(&sign_cert, &p, q-p) || !sign_cert) {

You're passing the uninitialized X509* pointer sign_cert to d2i_X509(). The function then tries to reuse the structure that the pointer is supposed to be pointing to.

-- 
Tomas Mraz
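The reuse convention that bites here can be modeled without OpenSSL at all (toy types below, not the real d2i_X509): d2i-style decoders treat a non-NULL *out as an existing object to reuse, so the caller must initialize the pointer to NULL before the first call.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy model of the d2i_* reuse convention -- not the real OpenSSL call. */
typedef struct { int decoded; } toy_obj;

static int toy_d2i(toy_obj **out)
{
    if (*out == NULL)                 /* fresh allocation: the safe path */
        *out = malloc(sizeof(**out));
    if (*out == NULL)
        return 0;
    /* If *out held a garbage non-NULL pointer, we would be writing
     * through it here -- the crash seen with the uninitialized sign_cert. */
    (*out)->decoded = 1;
    return 1;
}

/* Correct usage: start from NULL so the decoder allocates. */
static int demo(void)
{
    toy_obj *obj = NULL;
    if (!toy_d2i(&obj))
        return 0;
    int ok = obj->decoded;
    free(obj);
    return ok;
}
```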
Re: [openssl.org #2323] bug in openssl commandline tool with md5 fingerprint output
On Tue, 2010-12-14 at 00:00 +0100, Andy Polyakov via RT wrote:

I'd argue that it's intentional. The original purpose of the -out option appears to be to emit the *certificate* itself, not information about it. Yes, this kind of means that I reckon the -text option should result in output to STDOUT, not to the one appointed by -out. There is also inconsistent usage of STDOUT when treating the -days parameter: the error message should be printed on stderr, not STDOUT. If nobody screams for a week, http://cvs.openssl.org/chngview?cn=20156 will go down to 1.0.x.

I'm afraid that the change of the target of the -text option output will break the expectations of some scripts people in the wild use. Although it is slightly more logical with the change than before, I'd prefer keeping it as is at least for 1.0.x. Of course the -days error output change is fine.

I second this. How about -text -noout -out foo.txt writing info to foo.txt, and -text -out foo.txt to stdout?

That would still be a change in the behavior that shouldn't in my opinion be committed to the stable branches, definitely not to the 1.0.0 branch.

-- 
Tomas Mraz
Re: [openssl.org #2323] bug in openssl commandline tool with md5 fingerprint output
On Sun, 2010-12-12 at 11:54 +0100, Andy Polyakov via RT wrote:

I've meanwhile checked apps/x509.c, and patching it to send output to a file is trivial (attached); but looking at the other output options around these lines makes me a bit unsure whether these options are intended to go to STDOUT rather than to honor a file option? And if the latter, then I'm asking myself why? Almost all options print their output to STDOUT and ignore a file option - except for the -text option ... no comments on this? Is this behaviour now intended? Then let's close the RT with a comment stating this; or is it just an oversight, and can we then fix it (see my patches)?

I'd argue that it's intentional. The original purpose of the -out option appears to be to emit the *certificate* itself, not information about it. Yes, this kind of means that I reckon the -text option should result in output to STDOUT, not to the one appointed by -out. There is also inconsistent usage of STDOUT when treating the -days parameter: the error message should be printed on stderr, not STDOUT. If nobody screams for a week, http://cvs.openssl.org/chngview?cn=20156 will go down to 1.0.x. A.

I'm afraid that the change of the target of the -text option output will break the expectations of some scripts people in the wild use. Although it is slightly more logical with the change than before, I'd prefer keeping it as is at least for 1.0.x. Of course the -days error output change is fine.

-- 
Tomas Mraz