Re: [openssl-dev] About multi-thread unsafe for APIs defined in crypto/objects/obj_dat.c

2018-01-24 Thread Benjamin Kaduk via openssl-dev
On 01/23/2018 07:19 PM, Salz, Rich via openssl-dev wrote:
>
>   * OpenSSL APIs, which makes the following OpenSSL documentation
> statement invalid
> (https://www.openssl.org/docs/man1.0.2/crypto/threads.html)
>
>  
>
>   * "OpenSSL can safely be used in multi-threaded applications
> provided that at least two callback functions are set,
> locking_function and threadid_func."
>
>  
>
>   * Is there any planning to fix this issue?
>
>  
>
>  
>
> Well, the most likely fix is to make the “safely” wording be more
> vague, which I doubt you’ll like.  But I doubt anyone on the team has
> much interest in fixing 1.0.2 locking issues.
>
>

Who said they were 1.0.2-specific?  Master's obj_dat.c still has a
completely unlocked OBJ_new_nid() that is a public API function; AFAICT
the issue is still present.
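
As a stopgap until that is fixed, an application that needs to create NIDs
from multiple threads can serialize those calls itself.  A minimal sketch
(untested; the wrapper and its lock are application-level inventions, not
OpenSSL API):

#include <openssl/crypto.h>
#include <openssl/objects.h>

/* Application-supplied serialization around the unlocked OBJ_* table
 * manipulation; create the lock once at startup in real code -- the lazy
 * initialization shown here is itself racy and only for brevity. */
static CRYPTO_RWLOCK *obj_lock;

static int app_obj_create(const char *oid, const char *sn, const char *ln)
{
    int nid;

    if (obj_lock == NULL && (obj_lock = CRYPTO_THREAD_lock_new()) == NULL)
        return NID_undef;
    if (!CRYPTO_THREAD_write_lock(obj_lock))
        return NID_undef;
    nid = OBJ_create(oid, sn, ln);   /* ends up in the unlocked OBJ_new_nid() */
    CRYPTO_THREAD_unlock(obj_lock);
    return nid;
}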

-Ben
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] evp cipher/digest - add alternative to init-update-final interface

2018-01-17 Thread Benjamin Kaduk via openssl-dev
On 01/17/2018 12:04 PM, Patrick Steuer wrote:
> libcrypto's interface for ciphers and digests implements a flexible
> init-update(s)-final calling sequence that supports streaming of
> arbitrary sized message chunks.
>
> Said flexibility comes at a price in the "non-streaming" case: the
> operation must be "artificially" split between update/final. This
> leads to more functions than necessary needing to be called to
> process a single packet (user errors). It is also a small-packet
> performance problem for (possibly engine-provided) hardware
> implementations, for which it enforces a superfluous call to a
> coprocessor or adapter.
>
> libssl currently solves the problem, e.g. for TLS 1.2 AES-GCM record
> layer encryption, by passing additional context information via the
> control interface and calling EVP_Cipher (undocumented, no engine
> support. The analogously named, undocumented EVP_Digest is just an
> init-update-final wrapper). The same would be possible for TLS 1.3
> packets (it is currently implemented using init-update-final and
> performs worse than TLS 1.2 record encryption on some s390 hardware).
>
> I would suggest adding (engine-supported) interfaces that can process a
> packet with 2 calls (i.e. init-enc/dec/hash), at least for crypto
> primitives that are often used in a non-streaming context, like AEAD
> constructions in modern TLS (this would also make it possible to move
> TLS-specific code like nonce setup to libssl. Such interfaces already
> exist in boringssl[1] and libressl[2]).
>
> What do you think?

The one-shot EVP_DigestSign() and EVP_DigestVerify() APIs were added to
support the PureEdDSA algorithm, which is incapable of performing
init/update/final signatures.  That seems like precedent for adding such
APIs for the other types of EVP functionality, though getting a
non-wrapper implementation that actually allows ENGINE implementations
would be some amount of work.
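
For reference, the one-shot flow that exists today looks roughly like the
following (a sketch for Ed25519 on master; error paths are abbreviated):

#include <openssl/evp.h>

/* One-shot signing via EVP_DigestSign(); 'pkey' is assumed to be an
 * already-loaded Ed25519 key.  For PureEdDSA the digest argument to
 * EVP_DigestSignInit() must be NULL. */
int oneshot_sign(EVP_PKEY *pkey, const unsigned char *msg, size_t msglen,
                 unsigned char *sig, size_t *siglen)
{
    EVP_MD_CTX *mctx = EVP_MD_CTX_new();
    int ok = 0;

    if (mctx == NULL)
        return 0;
    if (EVP_DigestSignInit(mctx, NULL, NULL, NULL, pkey) == 1
            && EVP_DigestSign(mctx, sig, siglen, msg, msglen) == 1)
        ok = 1;
    EVP_MD_CTX_free(mctx);
    return ok;
}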

-Ben
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] [openssl-users] Failed to access LDAP server when a valid certificate is at .1+

2018-01-09 Thread Benjamin Kaduk via openssl-dev
On 01/09/2018 01:47 PM, Misaki Miyashita wrote:
>
>>> Sorry, I meant to say it is for the 1.0.2 branch.
>>>
>> Except in exceptional circumstances, code only ends up in the 1.0.2
>> branch after having first gotten into the master branch and then the
>> 1.1.0 branch.  The current release policy only allows bug fixes to be
>> backported to the stable branches, not new features. To me, this code
>> seems more like a new feature than a bugfix, though I do not claim to
>> speak authoritatively on the matter.
>>
>> The preferred mechanism for submitting patches is as github pull
>> requests (against the master branch, with a note in the pull request
>> message if the backport is desired).
>
> Thanks so much for your comment, Ben.
>
> We are planning to upgrade to the 1.1.0 branch as soon as we can, which
> is not so easy to do at this moment as we need the FIPS capability.
> Thus, we are still focusing on the 1.0.2 release and haven't had a
> chance to work on the 1.1.0 branch, so I won't be able to submit a
> PR against the master branch at this moment.
>
> Thus, I was hoping to get a review of the suggested fix for 1.0.2
> first, to see if it is viable upstream.
>
> Would it be possible to get a review on the openssl-dev@openssl.org
> alias? Or is filing an issue via github the right course of action?
>

You already got a review, from Viktor.  I don't think there's much
reason to file an issue in github without a patch (and if there's a
patch, it should just go straight to a pull request with no separate
issue).  If you want the feature to get upstreamed, the onus is on you
to forward-port the patch to master and adapt it to review comments; I
don't think we've seen sufficient interest to cause a team member to
spontaneously take that work upon themselves.

-Ben

> Thanks again for your comment.
>
> Regards,
>
> -- misaki

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Speck Cipher Integration with OpenSSL

2018-01-09 Thread Benjamin Kaduk via openssl-dev
On 01/09/2018 08:32 AM, Randall S. Becker wrote:
> On January 9, 2018 8:41 AM, Rich Salz wrote:
>> ➢  We are currently modifying the source from Apache to OpenSSL open
>> source
>> licensing for the Speck/OpenSSL integration. Related repositories such
>> as the cipher itself will remain under the Apache license. We would love
>> input on the following items:
>>
>> Don’t bother changing the license.  The future direction of OpenSSL is moving
>> to Apache, and it’s unlikely this work would show up in OpenSSL before we
>> change the license.
>>
>> We’ll soon have a blog post about our current thoughts on a crypto policy.
>> Watch this space.
>>
>> For discussion, the future-compatible thing to do :) is open a GitHub issue.
>> Then, make a pull request after the issue discussion seems to have died
>> down.
> A request, maybe OT. The NonStop platform does broadly deploy Apache but does 
> use OpenSSL. I understand that OpenSSL does not officially support the HPE 
> NonStop NSE/NSX platforms - but it is used on the platform through my team's 
> port, which I currently support, and through other ports as well. Adding a 
> dependency on Apache is likely to dead-end the project for us depending on 
> the depth of the dependency, if I understand where this is going (hoping I am 
> wrong).
>

Apache license, not Apache software.   

-Ben
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] [openssl-users] Failed to access LDAP server when a valid certificate is at .1+

2018-01-09 Thread Benjamin Kaduk via openssl-dev
On 01/09/2018 12:53 AM, Misaki Miyashita wrote:
>
>
> On 01/ 8/18 04:46 PM, Misaki Miyashita wrote:
>> (switching the alias to openssl-dev@openssl.org)
>>
>> I would like to suggest the following fix so that a valid certificate
>> at .x can be recognized during the cert validation even when
>> .0 is linking to a bad/expired certificate.  This may not be
>> the most elegant solution, but it is a minimal change with low impact
>> to the rest of the code.
>>
>> Could I possibly get a review on the change? and possibly be
>> considered to be integrated to the upstream?
>> (This is for the 1.0.1 branch)
>
> Sorry, I meant to say it is for the 1.0.2 branch.
>

Except in exceptional circumstances, code only ends up in the 1.0.2
branch after having first gotten into the master branch and then the
1.1.0 branch.  The current release policy only allows bug fixes to be
backported to the stable branches, not new features. To me, this code
seems more like a new feature than a bugfix, though I do not claim to
speak authoritatively on the matter.

The preferred mechanism for submitting patches is as github pull
requests (against the master branch, with a note in the pull request
message if the backport is desired).
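
(For anyone following along: the lookup in question is the c_rehash-style
hashed directory, where multiple certificates with the same subject-name
hash are stored as <hash>.0, <hash>.1, and so on, and the report is that
validation can stop at a bad/expired .0 entry instead of moving on to a
valid .1.  On the application side that directory is wired up roughly like
this -- a sketch with a made-up path:)

#include <openssl/ssl.h>

/* Point verification at a c_rehash-style hashed CA directory. */
int use_hash_dir(SSL_CTX *ctx)
{
    return SSL_CTX_load_verify_locations(ctx, NULL, "/etc/ssl/hashed-cas");
}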

-Ben
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Speck Cipher Integration with OpenSSL

2018-01-08 Thread Benjamin Kaduk via openssl-dev
On 01/08/2018 03:10 PM, William Bathurst wrote:
> Hi Hanno/all,
>
> I can understand your view that "more is not always good" in crypto.
> The reasoning behind the offering can be found in the following
> whitepaper:
>
> https://csrc.nist.gov/csrc/media/events/lightweight-cryptography-workshop-2015/documents/papers/session1-shors-paper.pdf
>
>
> I will summarize in a different way though. We wish to offer an
> optimized lightweight TLS for IoT. A majority of devices found in IoT
> are resource-constrained; for example, a device CPU may only have 32K
> of RAM, so security is an afterthought for developers. For some, only
> AES-128 is available even though they wish to use 256-bit encryption;
> Speck-256 would then be an option because it has better performance
> and provides sufficient security.
>
> Based on the above scenario you can likely see why we are interested
> in OpenSSL. First, OpenSSL can be used for terminating lightweight TLS
> connections near the edge, and then forwarding using commonly used
> ciphers.
>
> [IoT Device] --TLS/Speck--> [IoT Gateway] --TLS--> [Services]
>
> Also, we are interested in using OpenSSL libraries at the edge for
> client creation. One thing we would like to do is provide instructions
> for a highly optimized build of OpenSSL that can be used for
> constrained devices.
>
> I think demand will eventually grow because there is an initiative by
> the US government to improve IoT Security and Speck is being developed
> and proposed as a standard within the government. Therefore, I expect
> users who wish to play in this space would be interested in a version
> of OpenSSL where Speck could be used.
>
> It is my hope to accomplish the following:
>
> [1] Make Speck available via open source; this could be as an option
> or as a patch in OpenSSL.
> [2] If we make it available as a patch, is there a place where we
> would announce/make it known that it is available?
>
> We are also looking at open-sourcing the client side code. This would
> be used to create light-weight clients that use Speck and currently we
> also build basic OAuth capability on top of it.
>

Interestingly, the IETF ACE (Authentication and Authorization in
Constrained Environments) working group is chartered to look at this space
(crypto for constrained systems/IoT), and is aiming towards something roughly
OAuth-shaped, but there has not really been any interest in Speck
expressed that I've seen.  So, is this work happening someplace else, or
is there not actually demand for it?

-Ben

> Thanks for your input!
>
> Bill
>
> On 1/5/2018 11:40 AM, Hanno Böck wrote:
>> On Fri, 5 Jan 2018 10:52:01 -0800
>> William Bathurst  wrote:
>>
>>> 1) Community interest in such a lightweight cipher.
>> I think there's a shifting view that "more is not always good" in
>> crypto. OpenSSL has added features in the past "just because" and it
>> was often a bad decision.
>>
>> Therefore I'd generally oppose adding ciphers without a clear usecase,
>> as increased code complexity has a cost.
>> So I think the questions that should be answered are:
>> What's the usecase for speck in OpenSSL? Are there plans to use it in
>> TLS? If yes why? By whom? What advantages does it have over existing
>> ciphers? (Yeah, it's "lightweight", but that's a pretty vague thing.)
>>
>>
>> Also just for completeness, as some may not be aware: There are some
>> concerns about Speck due to its origin (aka the NSA). I don't think
>> that is a reason to dismiss a cipher right away; what I'd find more
>> concerning is that, from what I have observed, there hasn't been a lot of
>> research on Speck.
>>
>

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] rejecting elliptic_curves/supported_groups in ServerHello (new behavior in master/1.1.1 vs 1.1.0)

2017-10-04 Thread Benjamin Kaduk via openssl-dev
On 10/04/2017 04:30 AM, Matt Caswell wrote:
>
> Looks like we should have an exception for this case (with a suitable
> comment explaining why). Will you create a PR?
>

Yes, I was planning to.  I was just taking some time to ponder whether
it's worth burning an option bit on, to allow an opt-out (probably not).

-Ben
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


[openssl-dev] rejecting elliptic_curves/supported_groups in ServerHello (new behavior in master/1.1.1 vs 1.1.0)

2017-10-03 Thread Benjamin Kaduk via openssl-dev
Hi all,

Doing some testing with a snapshot of master (s_client with -tls1_2 and
optionally a cipherspec that prefers ECDHE ciphers), we're running into
a sizeable number of servers that are sending extension 0xa (formerly
"elliptic_curves", now "supported_groups") in the ServerHello.  This is
not supported by RFC 7919 or RFC 4492 (the server is supposed to
indicate its selected curve/group in the ServerKeyExchange message
instead), or by the TLS 1.3 draft spec (which permits "supported_groups"
in EncryptedExtensions, so the client can update a cache of groups
supported by the server).

In OpenSSL 1.1.0 we seem to have treated the elliptic_curves extension
in a ServerHello as an extension unknown to the library code and passed
it off to the custom extension handler.  With the extension processing
rework in master done to support TLS 1.3, which admits extensions in
many more contexts than previously, we now check that a received
extension is allowable in the context at hand.  In the table of
extensions, supported_groups is marked only as allowable in the
ClientHello and TLS 1.3 EncryptedExtensions, per the spec.  However,
this new strict behavior causes connection failures when talking to
these buggy servers.  So far we've seen this behavior from servers that
send a Server: header indicating Microsoft-IIS/7.5 and just "Apache".

This raises some question of what behavioral compatibility is desired
between 1.1.0 and 1.1.1 -- do we need to disable the "extension context"
verification for ServerHello processing entirely, or maybe just for the
one extension known to cause trouble in practice?  Or should we have an
SSL/SSL_CTX option to control the behavior (and which behavior should be
the default)?

Also, I'd be interested in hearing whether anyone else has observed this
sort of behavior.
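
(For concreteness, the custom-extension hook that 1.1.0 fell back to is the
one registered as below -- a sketch; 65280 is just a private-use placeholder
codepoint, since the library refuses to register types it handles itself:)

#include <openssl/ssl.h>

/* Register a client custom extension with a no-op parser. */
static int ext_add_cb(SSL *s, unsigned int ext_type,
                      const unsigned char **out, size_t *outlen,
                      int *al, void *add_arg)
{
    *out = NULL;
    *outlen = 0;               /* send an empty extension body */
    return 1;
}

static int ext_parse_cb(SSL *s, unsigned int ext_type,
                        const unsigned char *in, size_t inlen,
                        int *al, void *parse_arg)
{
    return 1;                  /* accept whatever the server sent back */
}

int register_ext(SSL_CTX *ctx)
{
    return SSL_CTX_add_client_custom_ext(ctx, 65280, ext_add_cb, NULL, NULL,
                                         ext_parse_cb, NULL);
}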

Thanks,

Ben
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Bug: digest parameter is rejected

2017-09-18 Thread Benjamin Kaduk via openssl-dev
On 09/18/2017 09:32 AM, Blumenthal, Uri - 0553 - MITLL wrote:
>
> RSA-OAEP supports different hash functions and MGF. SHA-1 is the default.
>
>  
>
> OpenSSL implementation of OAEP wrongly refuses to set the hash
> algorithm, preventing one from using SHA-2 family:
>
>

You'll probably need to pick up master and its -rsa_mgf1_md argument to
pkeyutl.
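
(Programmatically, the equivalent knobs already exist -- a sketch, selecting
SHA-256 for both the OAEP digest and MGF1 on an encrypt context:)

#include <openssl/evp.h>
#include <openssl/rsa.h>

/* The padding must be set to OAEP before the OAEP/MGF1 digests. */
int setup_oaep_sha256(EVP_PKEY_CTX *pctx)
{
    return EVP_PKEY_encrypt_init(pctx) > 0
        && EVP_PKEY_CTX_set_rsa_padding(pctx, RSA_PKCS1_OAEP_PADDING) > 0
        && EVP_PKEY_CTX_set_rsa_oaep_md(pctx, EVP_sha256()) > 0
        && EVP_PKEY_CTX_set_rsa_mgf1_md(pctx, EVP_sha256()) > 0;
}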

-Ben
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] TLS 1.3 client hello issue

2017-09-18 Thread Benjamin Kaduk via openssl-dev
On 09/18/2017 01:07 AM, Mahesh Bhoothapuri wrote:
>
> Hi,
>
> I am sending a Tls 1.3 client hello, and am seeing an issue with
>
> ossl_statem_client_write_transition in statem_clnt.c.
>
>
>     /*
>  * Note that immediately before/after a ClientHello we don't know what
>  * version we are going to negotiate yet, so we don't take this
> branch until
>  * later
>  */
>
> /*
>  * ossl_statem_client_write_transition() works out what handshake state to
>  * move to next when the client is writing messages to be sent to the
> server.
>  */
> WRITE_TRAN ossl_statem_client_write_transition(SSL *s)
> {
>
>     if (SSL_IS_TLS13(s))
>     return ossl_statem_client13_write_transition(s);
> }
>
> And in:
>
>
> /*
>  * ossl_statem_client_write_transition() works out what handshake state to
>  * move to next when the client is writing messages to be sent to the
> server.
>  */
> WRITE_TRAN ossl_statem_client_write_transition(SSL *s)
> {
>
>    /*
>  * Note: There are no cases for TLS_ST_BEFORE because we haven't
> negotiated
>  * TLSv1.3 yet at that point. They are handled by
>  * ossl_statem_client_write_transition().
>  */
>
>     switch (st->hand_state) {
>     default:
>     /* Shouldn't happen */
>     return WRITE_TRAN_ERROR;
>
> }
>
> With a TLS 1.3 client hello, using tls 1.3 version, the st->hand_state is

Sorry, I just want to clarify what you are doing -- are you taking
SSL_CTX_new(TLS_method()) and then calling
SSL_CTX_set_min_proto_version(ctx, TLS1_3_VERSION) and
SSL_CTX_set_max_proto_version(ctx, TLS1_3_VERSION)?

I note that there is no version-specific TLSv1_3_method() available, and
in any case, it's of questionable wisdom to attempt to force TLS 1.3
only while the specification is still in draft status -- in any scenario
where the client and server implementations are not tightly controlled,
negotiation failures seem quite likely.

> TLS_ST_BEFORE and so, the default error is returned.
>
> When I added :
>
>     case TLS_ST_BEFORE:
>     st->hand_state = TLS_ST_CW_CLNT_HELLO;
>     return WRITE_TRAN_CONTINUE;
>

The reason there is not currently a case for TLS_ST_BEFORE is that
whether or not we're going to be using TLS 1.3 is supposed to be
determined on the server as part of version negotiation, so when we're
sending a ClientHello, our version is in an indeterminate status -- the
general-purpose TLS method must be used at that part of the handshake.

> The client hello gets sent out, but I only saw a TLS 1.2 version being
> sent.
> Is this a bug?

The legacy_version field in a TLS 1.3 ClientHello will be 0x0303,
matching the historical value for TLS 1.2.  The actual list of versions
is conveyed in a "supported_versions" extension, which is what you need
to be looking at.
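
(For reference, the kind of setup I was asking about above looks roughly
like this -- a sketch; as noted, forcing TLS 1.3 only is unwise while the
spec is still a draft:)

#include <openssl/ssl.h>

/* A client context restricted to (draft) TLS 1.3 on master.  There is no
 * TLSv1_3_method(); the generic method plus min/max version is the
 * supported way to do this. */
SSL_CTX *make_tls13_only_ctx(void)
{
    SSL_CTX *ctx = SSL_CTX_new(TLS_method());

    if (ctx == NULL)
        return NULL;
    if (SSL_CTX_set_min_proto_version(ctx, TLS1_3_VERSION) != 1
            || SSL_CTX_set_max_proto_version(ctx, TLS1_3_VERSION) != 1) {
        SSL_CTX_free(ctx);
        return NULL;
    }
    return ctx;
}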

-Ben
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] QUIC

2017-09-07 Thread Benjamin Kaduk via openssl-dev
On 09/06/2017 05:24 PM, Matt Caswell wrote:
> Issue 4283 (https://github.com/openssl/openssl/issues/4283) has caused
> me to take a close look at QUIC. This seems to have been getting a *lot*
> of attention just recently. See the IDs below for details:

Yes, it's generated a lot of excitement and interest at the IETF.

> https://tools.ietf.org/html/draft-ietf-quic-transport-05
> https://tools.ietf.org/html/draft-ietf-quic-tls-05
> https://tools.ietf.org/html/draft-ietf-quic-recovery-05
>
> For the uninitiated QUIC is a new general-purpose transport protocol
> built on top of UDP. It provides applications with a secure stream
> abstraction (like TLS over TCP) with reliable, in-order delivery, as
> well as the ability to multiplex many streams over a single connection
> (without head-of-line blocking).
>
> It is *very* closely integrated with TLSv1.3. It uses the TLSv1.3
> handshake for agreeing various QUIC parameters (via extensions) as well
> as for agreeing keying material and providing an "early data"
> capability. The actual packet protection is done by QUIC itself (so it
> doesn't use TLS application data) using a QUIC ciphersuite that matches
> the negotiated TLS ciphersuite. Effectively you can think of QUIC as a
> modernised rival to TLS over TCP.

The nature of the QUIC/TLSv1.3 integration is somewhat interesting. 
QUIC has its origins at Google, and the "Google QUIC" or gQUIC variant
is deployed on the public internet even now; since TLS 1.3 was not
available then, it uses a separate "quic-crypto" scheme for these
purposes.  quic-crypto, in turn, helped shape the evolution of TLS 1.3,
including the strong desire for 0-RTT functionality.

But, as I understand it, the intent is to leave enough hooks that a
different crypto layer could be used, including (but not limited to) a
subsequent version of TLS.

> I've spent some time today reading through the IDs. It has become clear
> to me that in order for OpenSSL to be used to implement QUIC there are a
> number of new requirements/issues we would need to address:
>
> - We need to provide the server half of the TLSv1.3 cookie mechanism. At
> the moment an OpenSSL client will echo a TLSv1.3 cookie it receives back
> to the server, but you cannot generate a cookie on the server side.

Yeah, the cookie is pretty clearly tied to the UDP/"stateless" operation.

> - We need to be able to support *stateless* operation for the
> ClientHello->HelloRetryRequest exchange. This is very much in the same
> vein as the stateless way that DTLSv1_listen() works now for DTLS in the
> ClientHello->HelloVerifyRequest exchange. This is quite a significant
> requirement.

The expectation is that the state gets bundled into the cookie, yes.
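
(For comparison, the DTLS-style cookie hooks we already have look like the
following on the server side -- a sketch; the callback bodies here are
deliberately trivial placeholders, where real code would MAC the peer's
address and transport parameters:)

#include <string.h>
#include <openssl/ssl.h>

static int gen_cookie(SSL *s, unsigned char *cookie, unsigned int *cookie_len)
{
    memset(cookie, 0x5a, 16);  /* placeholder; not a real cookie */
    *cookie_len = 16;
    return 1;
}

static int verify_cookie(SSL *s, const unsigned char *cookie,
                         unsigned int cookie_len)
{
    return cookie_len == 16;   /* placeholder check only */
}

void install_cookie_callbacks(SSL_CTX *ctx)
{
    SSL_CTX_set_cookie_generate_cb(ctx, gen_cookie);
    SSL_CTX_set_cookie_verify_cb(ctx, verify_cookie);
}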

> - A QUIC server needs to be able to issue a NewSessionTicket on demand
>
> - Ticket PSKs need to be able to have an embedded QUIC layer token (the
> equivalent of the cookie - but embedded inside the PSK).

I think https://github.com/openssl/openssl/pull/3802 is pretty close, in
this space.

> - We need to extend the "exporter" API to allow early_secret based
> exports. At the moment you can only export based on the final 1-RTT key.

It seems in keeping with our existing handling of early data, to at
least consider providing a separate API for these early exporter values.
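
(The exporter we have today, keyed from the final 1-RTT secret, is used
roughly like this -- a sketch; the label is a placeholder, not the one the
QUIC drafts define:)

#include <openssl/ssl.h>

int export_quic_key(SSL *s, unsigned char *out, size_t outlen)
{
    static const char label[] = "EXPERIMENTAL quic key";   /* placeholder */

    return SSL_export_keying_material(s, out, outlen,
                                      label, sizeof(label) - 1,
                                      NULL, 0, 0);
}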

> - TLS PSKs are transferable between TLS-TCP and QUIC/TLS-UDP. There are
> some special rules around ALPN for this that may impact our current
> logic in this area.
>
> - Possibly a QUIC implementation will need to have knowledge of the
> TLSv1.3 state machine because different TLSv1.3 handshake records need
> to go into different types of QUIC packets (ClientHello needs to go into
> "Client Initial" packet, HelloRetryRequest needs to go into a "Server
> Stateless Retry" packet and everything else goes into "Client Cleartext"
> or "Server Cleartext" packets). It may be possible for a QUIC
> implementation to infer the required information without additional
> APIs, but I'm not sure.

We do have existing things like the message callback, but I won't try to
argue that that's an ideal situation for a QUIC implementor.  And the
QUIC layer could even parse out the unencrypted records for itself from
the output BIO, as silly as that would be.

> - QUIC places size limits on the allowed size of a ClientHello. Possibly
> we may want some way of failing gracefully if we attempt to exceed that
> (or maybe we just leave that to the QUIC implementation to detect).

(For the spectators: this is to limit the potential for a DoS amplification
attack via a spoofed client address, since UDP does not provide the
reachability confirmation that TCP's handshake does.)

> I'm going to start working through this list of requirements, but if
> anyone fancies picking some of it up then let me know. Also, did I miss
> anything from the above list?
>

Nothing sticks out as missing to me, but I've not been following QUIC
development as closely as I'd like.

-Ben

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev

Re: [openssl-dev] Plea for a new public OpenSSL RNG API

2017-08-29 Thread Benjamin Kaduk via openssl-dev
On 08/29/2017 01:50 PM, Blumenthal, Uri - 0553 - MITLL wrote:
> IMHO this interface is a way for the user to improve the quality of the 
> randomness it would get from the given RNG, *not* to replace (or diminish) 
> its other sources. My proposal is to abolish this parameter, especially since 
> now it is simply ignored (and IMHO – for a good reason).

That's a fine proposal ... it just can't be implemented until a major
release boundary, when our ABI stability policy permits such breaking
changes.

-Ben
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] draft-21 status

2017-08-09 Thread Benjamin Kaduk via openssl-dev
On 08/09/2017 08:03 AM, Loganaden Velvindron wrote:
> Dear OpenSSL folks,
>
> I was wondering if there is a branch for draft-21 ?
>

draft-21 support is on master at the moment; there's no need for a
separate branch until there is a draft-22 document to support.

-Ben
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Build issue

2017-07-28 Thread Benjamin Kaduk via openssl-dev
On 07/28/2017 01:22 AM, Matthew Stickney wrote:
> With a make distclean, ./config, make depend (didn't appear to do
> anything), and a make, I'm getting the essentially the same thing:
>
> Error: _num does not have a number assigned
> /usr/bin/perl ./util/mkrc.pl libcrypto-1_1-x64.dll | windres 
> --target=pe-x86-64
> -o rc.o
> LD_LIBRARY_PATH=: gcc -DDSO_WIN32 -DNDEBUG -DOPENSSL_THREADS 
> -DOPENSSL_NO_STATIC
> _ENGINE -DOPENSSL_PIC -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT 
> -DOPENSSL_BN_ASM
> _MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DRC4_ASM 
> -DMD
> 5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DGHASH_ASM -DECP_NISTZ256_ASM 
> -DPADLOCK
> _ASM -DPOLY1305_ASM -DOPENSSLDIR="/usr/local/ssl" 
> -DENGINESDIR="/usr/local/lib/e
> ngines-1_1" -DL_ENDIAN -DWIN32_LEAN_AND_MEAN -DUNICODE -D_UNICODE -m64 -Wall 
> -O3
>  -D_MT -D_WINDLL -static-libgcc -shared -Wl,-Bsymbolic 
> -Wl,--out-implib,libcrypt
> o.dll.a crypto.def rc.o -o libcrypto-1_1-x64.dll -Wl,--whole-archive 
> libcrypto.a
>  -Wl,--no-whole-archive -lws2_32 -lgdi32 -lcrypt32
> Cannot export MD2: symbol not defined
> Cannot export MD2_Final: symbol not defined
> Cannot export MD2_Init: symbol not defined
> Cannot export MD2_Update: symbol not defined
> Cannot export MD2_options: symbol not defined
> Cannot export RC5_32_cbc_encrypt: symbol not defined
> Cannot export RC5_32_cfb64_encrypt: symbol not defined
> Cannot export RC5_32_decrypt: symbol not defined
> Cannot export RC5_32_ecb_encrypt: symbol not defined
> Cannot export RC5_32_encrypt: symbol not defined
> Cannot export RC5_32_ofb64_encrypt: symbol not defined
> Cannot export RC5_32_set_key: symbol not defined
> collect2.exe: error: ld returned 1 exit status
>

MD2 and RC5 are disabled by default, so it is expected that they will
not be defined.  It is hard to say whether those messages are the source
of the error exit status or just warnings, though.

It's certainly plausible that there are further mkdef.pl issues
responsible, though. Since mkdef.pl generates the crypto.def file
referenced on your link line, maybe you could post that somewhere and
link to it?  (mkdef.pl can also be used to generate .map files, but it
seems like the .def file is the relevant one at the moment.)

But I may have to defer to Richard for the workings of mkdef.pl itself...

-Ben
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Build issue

2017-07-27 Thread Benjamin Kaduk via openssl-dev
On 07/25/2017 07:49 PM, Matthew Stickney wrote:
> Possibly. The original errors and hanging perl process have been
> replaced with an enormous number of "undefined reference" errors. For
> example:
> libssl.a(tls_srp.o):tls_srp.c:(.text+0xc4c): undefined reference to `BN_ucmp'
> libssl.a(tls_srp.o):tls_srp.c:(.text+0xd45): undefined reference to
> `OPENSSL_cleanse'
> libssl.a(tls_srp.o):tls_srp.c:(.text+0xd5f): undefined reference to 
> `SRP_Calc_A'
> collect2.exe: error: ld returned 1 exit status
>
> I don't know enough about the build system to know whether the
> mkdef.pl change might be responsible for this, or whether this is a
> separate issue. To follow up on my previous post, the configure line
> was indeed "./config", and this is commit 1843787173. Any other data
> that I should collect?
>

Hmm, all of the listed examples are for things in libssl failing to find
symbols from libcrypto, which perhaps suggests a link line ordering
issue.  Can you paste the actual linker invocation that is failing?

-Ben
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Fix a dead lock of async engine.

2017-07-26 Thread Benjamin Kaduk via openssl-dev
On 07/26/2017 08:15 AM, Emeric Brun wrote:
> Hi All,
>
> This bug also affects the 1.1.0
>

Are you able to submit the patch as a github pull request?  That would
be the preferred form, as it enables some automation that we have for
CLA checks and CI.

-Ben
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Build issue

2017-07-25 Thread Benjamin Kaduk via openssl-dev
On 07/25/2017 01:52 PM, Matthew Stickney wrote:
> I've been trying to build OpenSSL to work on a new feature, but I've
> had problems with the build hanging. I'm building on Windows 10 with
> mingw-w64 under msys2; perl is v5.24, and I installed the
> Text::Template module from CPAN.
>

You did not show the config line used, which is perhaps relevant.

Also, presumably the perl is the msys perl, but please confirm -- it
must be "matching" in order for things to work.

-Ben
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Work on a new RNG for OpenSSL

2017-06-28 Thread Benjamin Kaduk via openssl-dev
On 06/26/2017 11:28 PM, Paul Dale wrote:
> Given the variety of RNGs available, would an EVP RNG interface make sense?  
> With a safe default in place (and no weak generators provided), the decision 
> can be left to the user.
> A side benefit is that the unit tests could implement a simple fully 
> deterministic generator and the code for producing known sequences removed 
> from the code base.

There are some benefits to this idea, as you note, but it does not seem
like a clear "immediate win" to me.  Maybe this is just some emotional
response that has not fully absorbed "no weak generators provided", as I
can't really articulate any reason to oppose it other than "randomness
is so low-level that we should just provide it and not options for it".

>
> Defence in depth seems prudent: independent sources with agglomeration and 
> whitening.

As Kurt noted, [on modern OSes,] it is really unclear what sources are
available to us that are not already being used by the kernel.  Rich had
commented about the dragonfly (kernel) implementation "wow, is it really
that easy?".  To large extent, yes, a secure RNG can present as being
that simple/easy -- if you're writing it in the kernel!  The kernel has
easy and direct access to lots of interrupt-driven entropy sources, any
hardware generators present, etc., as well as rdrand/etc.  It doesn't
have to worry about fork-safety or syscall overhead, and can basically
just implement the raw crypto needed for whitening/mixing/stretching.

So, [on these same modern OSes,] what benefit do we really get from
using multiple "independent" sources?  They are unlikely to actually be
independent if the kernel is consuming them as well and we consume the
kernel.

Now, of course OpenSSL runs on OSes that do not provide a modern kernel
RNG and we will need some solution for them, which will likely look as
you describe.  I'm just not convinced there is much value in duplicating
what the kernel is doing in the cases that the kernel does it well.

>
> We shouldn't trust the user to provide entropy.  I've seen what is typically 
> provided.  Uninitialised buffers aren't random.  User inputs (mouse and 
> keyboard) likewise aren't very good.  That both are still being suggested is 
> frustrating.  I've seen worse suggestions, some to the effect that 
> "time(NULL) ^ getpid()" is too good and just time() is enough.

Definitely.  But, as we're not the kernel, finding good sources of real
randomness as a generic userspace process is quite hard.

>
> As for specific questions and comments:
>
> John Denker wrote:
>> If you trust the ambient OS to provide a seed, why not
>> trust it for everything, and not bother to implement an
>> openssl-specfic RNG at all?
> I can think of a few possibilities:

Ah, preemptive replies to my comments above, excellent.

> * Diversifying the sources provides resistance to compromise of individual 
> sources.  Although a full kernel compromise is unrecoverable, a kernel bug 
> that leaked the internal pools in a read only manner isn't unforeseeable.

It is not unforeseeable, sure, but so are lots of things.  Spewing the
contents of the openssl process-local randomness pool on the network
isn't unforeseeable, either; do we have any reason to think there is
substantially more risk from one unknown than the other?

> * Not all operating systems have good RNGs.

Sure, and we need to support the ones that don't have good RNGs.
But on the ones that do, what do we gain from duplicating the effort?

>
> * Draining the kernel's entropy pools is unfriendly behaviour, other 
> processes will typically want some randomness too.
>
> * At boot time the kernel pools are empty (low or no quality).  This 
> compounds when several things require seeding.

I'm not sure what you mean by "draining the kernel's entropy pools". 
That is, if you are adhering to the belief that taking random bits out
of a generator removes entropy from it that must be replenished, does
that not apply also to any generator/pool we write for ourselves?  Or
maybe you just refer to the behavior of linux /dev/random, in which case
I would point out Ted (the author/maintainer of linux /dev/random)'s
suggestion to just use (getrandom or) /dev/random and tacit agreement
that the behavior of reducing the entropy count on reads from
/dev/random is not really needed anymore.

At boot time *all* pools are empty.  FreeBSD has a random seed file on
disk to be loaded on next boot that helps with this (I didn't check
linux), and openssl has/can use ~/.rnd or similar, but those are not
immune from compromise out-of-band.  In order to be properly confident
of good randomness, new randomness needs to be collected from the
environment and added to the pool, and the kernel is in a much better
position to do so (and know when it has enough!) than we are.

> * Performance is also a consideration, although with a gradual collection 
> strategy this should be less of a concern.  Except at start up.

Given that we're going to be 

Re: [openssl-dev] Work on a new RNG for OpenSSL

2017-06-27 Thread Benjamin Kaduk via openssl-dev
On 06/27/2017 07:24 PM, Paul Dale wrote:
>
> The hierarchy of RNGs will overcome some of the performance concerns. 
> Only the root needs to call getrandom().
>
> I do agree that having a DRBG at the root level is a good idea though.
>
>  
>

Just to check my understanding, the claim is that adding more layers of
hashing and/or encryption will still be faster than a larger number of
syscalls?

-Ben
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Work on a new RNG for OpenSSL

2017-06-27 Thread Benjamin Kaduk via openssl-dev
On 06/27/2017 04:51 PM, Kurt Roeckx wrote:
> On Tue, Jun 27, 2017 at 11:56:04AM -0700, John Denker via openssl-dev wrote:
>>
>> On 06/27/2017 11:50 AM, Benjamin Kaduk via openssl-dev wrote:
>>
>>> Do you mean having openssl just pass through to
>>> getrandom()/read()-from-'/dev/random'/etc. or just using those to seed
>>> our own thing?
>>>
>>> The former seems simpler and preferable to me (perhaps modulo linux's
>>> broken idea about "running out of entropy")
>> That's a pretty big modulus.  As I wrote over on the crypto list:
>>
>> The xenial 16.04 LTS manpage for getrandom(2) says quite explicitly:
>>
>>>> Unnecessarily reading large quantities  of data will have a
>>>> negative impact on other users of the /dev/random and /dev/urandom
>>>> devices.
>> And that's an understatement.  Whether unnecessary or not, reading
>> not-particularly-large quantities of data is tantamount to a
>> denial of service attack against /dev/random and against its
>> upstream sources of randomness.
>>
>> No later LTS is available.  Reference:
>>   http://manpages.ubuntu.com/manpages/xenial/man2/getrandom.2.html
>>
>> Recently there has been some progress on this, as reflected in in
>> the zesty 17.04 manpage:
>>   http://manpages.ubuntu.com/manpages/zesty/man2/getrandom.2.html
>>
>> However, in the meantime openssl needs to run on the platforms that
>> are out there, which includes a very wide range of platforms.
> And I think it's actually because of changes in the Linux RNG that
> the manpage has been changed, but they did not document the
> different behavior of the kernel versions.
>
> In case it wasn't clear, I think we should use the OS provided
> source as a seed. By default that should be the only source of
> randomness.
>

I think we can get away with using OS-provided randomness directly in
many common cases.  /dev/urandom suffices once we know that the kernel
RNG has been properly seeded.  On FreeBSD, /dev/urandom blocks until the
kernel RNG is seeded; on other systems maybe we have to make one read
from /dev/random to get the blocking behavior we want before switching
to /dev/urandom for bulk reads.
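
Roughly the pattern I have in mind, as a sketch (error handling and
EINTR/short-read loops elided):

#include <fcntl.h>
#include <unistd.h>

/* Block once on /dev/random to learn that the kernel pool is seeded, then
 * do all bulk reads from /dev/urandom. */
int seed_wait_then_read(unsigned char *buf, size_t len)
{
    unsigned char one;
    int fd = open("/dev/random", O_RDONLY);

    if (fd < 0 || read(fd, &one, 1) != 1)
        return 0;
    close(fd);

    fd = open("/dev/urandom", O_RDONLY);
    if (fd < 0 || read(fd, buf, len) != (ssize_t)len)
        return 0;
    close(fd);
    return 1;
}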

-Ben
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Work on a new RNG for OpenSSL

2017-06-27 Thread Benjamin Kaduk via openssl-dev
Hi Ted,

On 06/27/2017 03:40 PM, Theodore Ts'o wrote:
>
> My recommendation for Linux is to use getrandom(2) the flags field set
> to zero.  This will cause it to use a CRNG that will be reseeded every
> five minutes from environmental noise gathered primarily from
> interrupt timing data.  For modern kernels, the CRNG is based on
> ChaCha20.  For older kernels, it is based on SHA-1.
>
> There are a lot of people who have complained about whether or not
> Linux's urandom generator has met with their religious beliefs about
> how RNG's should be designed and implemented.  One of the things you
> will find is that many of these people are very vocal, and in some
> cases, their advice will be mutually exclusive.  So if you are going
> to be trying to design your own RNG for OpenSSL --- welcome to my
> world.
>
> (In other words, I do listen to many of the people who have opined on
> this thread.  I just don't happen to agree with all of them.  And I
> suspect you will find that in the end, it's impossible to make them
> all happy, and they will end up questioning your intelligence,
> judgement, and in some cases, your paternity.  :-)
>

Thanks for the input, and for reading what is being said.

While you're here, would you mind confirming/denying the claim I read
that the reason the linux /dev/random tracks an entropy estimate and
blocks when it gets too low is to preserve backward security in the face
of attacks against SHA1?

I'm happy to respect that there are different opinions, but it would be
nice to know the reasoning behind the behavior, even if I do not
necessarily agree with it.

Thanks,

Ben
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Work on a new RNG for OpenSSL

2017-06-27 Thread Benjamin Kaduk via openssl-dev
On 06/27/2017 02:28 AM, Matt Caswell wrote:
>
> On 26/06/17 21:18, Kurt Roeckx wrote:
>
>> I think it should by default be provided by the OS, and I don't
>> think any OS is documenting how much randomness it can provide.
>>
> I also agree that, by default, using the OS provided source makes a lot
> of sense.
>

Do you mean having openssl just pass through to
getrandom()/read()-from-'/dev/random'/etc. or just using those to seed
our own thing?

The former seems simpler and preferable to me (perhaps modulo linux's
broken idea about "running out of entropy"), but the argument presented
about us being used in all sorts of environments that we can't even
enumerate has basically convinced me that we will need to provide some
alternative as well.  (It remains unclear how such environments will be
able to provide usable seed randomness, but there is only so much we can
do about that.)
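
(By "pass through" I mean something like the following, as a sketch; this
getrandom() is the Linux syscall and needs a recent kernel/glibc, and a real
implementation would loop on short reads and EINTR:)

#include <sys/types.h>
#include <sys/random.h>

/* flags == 0 reads from the urandom pool but blocks until the kernel CSPRNG
 * has been initially seeded. */
int os_random_bytes(unsigned char *buf, size_t len)
{
    ssize_t n = getrandom(buf, len, 0);

    return n >= 0 && (size_t)n == len;
}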

-Ben
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] [RFC 0/4] Kernel TLS socket API

2017-06-08 Thread Benjamin Kaduk via openssl-dev
On 06/07/2017 10:19 AM, Salz, Rich via openssl-dev wrote:
> A couple of comments.
>
> First, until this shows up in the kernel adopted by major distributions, it 
> is a bit premature to include in OpenSSL.  Including netinet/tcp.h is 
> seriously wrong 

I don't know that we would need to wait until it's in distributions, but
we definitely shouldn't  commit to supporting an API until mainline
linux has [and then Linus's "don't break userspace" adage applies].

-Ben

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] 90-test_secmem.t hangs the machine for good

2017-05-15 Thread Benjamin Kaduk via openssl-dev
On 05/15/2017 12:15 PM, Blumenthal, Uri - 0553 - MITLL wrote:
>
> On a semi-related note, I wasn't able to locate the mman.h file either.

`man mmap` will list any headers needed for the mmap() declaration and
flag values.
On the random OS X machine I have handy, it claims <sys/mman.h> is
needed, and a /usr/include/sys/mman.h is present.
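
(In other words, something like the following should compile and run if the
header is where the manpage says -- a sketch of the kind of anonymous
mapping at issue; MAP_ANON is what the earlier question was about, and some
platforms spell it MAP_ANONYMOUS:)

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                   MAP_ANON | MAP_PRIVATE, -1, 0);

    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    munmap(p, 4096);
    return 0;
}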

-Ben
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] 90-test_secmem.t hangs the machine for good

2017-05-12 Thread Benjamin Kaduk via openssl-dev
On 05/12/2017 03:05 PM, Blumenthal, Uri - 0553 - MITLL wrote:
>
> I’m not sure. It may well be that it “simply” takes all the memory
> over, so there is no way for anything else to start and do the clean-up…
>

Hmm, I wonder if a top(1) started before the tests would keep running,
in the case that it "simply" takes all the memory over.

>  
>
> From just looking at the code, the only question that comes to mind is
> whether you have a 32- or 64-bit size_t in the build environment in
> question, which is unlikely to cause a eureka moment :(
>
>  
>
> I can tell you that size_t is 64-bit here. It’s certainly not an
> “eureka” moment for me.
>
>  
>
> Some other information to check: do you have MAP_ANON defined by your
> mman.h?
>

mman.h != mmap.h (consult 'man mmap' for authoritative header).

> Todd> Yes, it’s likely this is due to the amount of memory available
> in the machine. I tried to use reasonable values, but apparently not
> reasonable enough
>
>  
>
> Yep. In case it matters, my machine has 16GB of RAM (and runs a ton of
> stuff, besides these tests :).
>
>

Thanks.

-Ben
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] 90-test_secmem.t hangs the machine for good

2017-05-12 Thread Benjamin Kaduk via openssl-dev
On 05/12/2017 02:15 PM, Benjamin Kaduk via openssl-dev wrote:
> From just looking at the code, the only question that comes to mind is
> whether you have a 32- or 64-bit size_t in the build environment in
> question, which is unlikely to cause a eureka moment :(

(The test runs happily on my Ubuntu Trusty-ish machine, FWIW.)

Some other information to check: do you have MAP_ANON defined by your
mman.h?
Do you have a /dev/zero that can be opened read/write?

-Ben
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] 90-test_secmem.t hangs the machine for good

2017-05-12 Thread Benjamin Kaduk via openssl-dev
On 05/12/2017 02:05 PM, Blumenthal, Uri - 0553 - MITLL wrote:
> On 5/12/17, 2:49 PM, "openssl-dev on behalf of Benjamin Kaduk via 
> openssl-dev" <openssl-dev-boun...@openssl.org on behalf of 
> openssl-dev@openssl.org> wrote:
>
> ➢ I’m sorry to report that in the current OpenSSL 1.1 master running “make 
> test”
> ➢  freezes up the machine. Mac OS X 10.11.6, Xcode-8.2, current Github 
> master. 
> ➢ Here’s the configuration:
>
>   A commit hash would be more useful than "current github master"
>
> I thought you knew what’s in the current master right now. But here are the 
> last few hashes for your pleasure:

Well, you're right to think that I do, and I do trust you to be accurate
when you make that claim.  But there are many people for whom I would
not fully trust the veracity of such a claim, the commit hash is
completely unambiguous, and it makes life a whole lot easier for those poor
folks reading this thread in the archive two years from now who are
trying to track down a similar-looking issue.  So, on the whole, I
recommend always using commit hashes, with an optional annotation of how
they relate to a branch.

> $ git log
> commit 80a2fc4100daf6f1001eee33ef2f9b9eee05bedf (HEAD -> master, 
> origin/master, origin/HEAD)
> Author: Todd Short <tsh...@akamai.com>
> Date:   Wed May 10 11:44:55 2017 -0400
>
> Clean up SSL_OP_* a bit
> 
> Reviewed-by: Matt Caswell <m...@openssl.org>
> Reviewed-by: Rich Salz <rs...@openssl.org>
> (Merged from https://github.com/openssl/openssl/pull/3439)
>
> commit 33242d9d79e7f06151e905b83dc8f995006fa7cd
> Author: Rich Salz <rs...@openssl.org>
> Date:   Thu May 11 20:42:32 2017 -0400
>
> Use scalar, not length; fixes test_evp
> 
> Reviewed-by: Stephen Henson <st...@openssl.org>
> Reviewed-by: Richard Levitte <levi...@openssl.org>
> (Merged from https://github.com/openssl/openssl/pull/3452)
>
>
>   I can understand not wanting to have to power-cycle the machine again,
>   but the 'make TESTS=test_secmem V=1 test' output (or some 
> dtruss/similar)
>   would be helpful in tracking things down.

The obvious candidate for closer inspection is a few commits previous,

commit 7031ddac94d0ae616d1b0670263a9265ce672cd2
Author: Todd Short <tsh...@akamai.com>
Date:   Thu May 11 15:48:10 2017 -0400

Fix infinite loops in secure memory allocation.
   
Issue 1:
   
sh.bittable_size is a size_t but i is an int, which can result in
freelist == -1 if sh.bittable_size exceeds an int.
   
This seems to result in an OPENSSL_assert due to invalid allocation
size, so maybe that is "ok."
   
Worse, if sh.bittable_size is exactly 1<<31, then this becomes an
infinite loop (because 1<<31 is a negative int, so it can be shifted
right forever and sticks at -1).
   
Issue 2:
   
CRYPTO_secure_malloc_init() sets secure_mem_initialized=1 even when
sh_init() returns 0.
   
If sh_init() fails, we end up with secure_mem_initialized=1 but
sh.minsize=0. If you then call secure_malloc(), which then calls,
sh_malloc(), this then enters an infinite loop since 0 << anything will
never be larger than size.
   
Issue 3:
   
That same sh_malloc loop will loop forever for a size greater
than size_t/2 because i will proceed (assuming sh.minsize=16):
i=16, 32, 64, ..., size_t/8, size_t/4, size_t/2, 0, 0, 0, 0, 
This sequence will never be larger than "size".
   
Reviewed-by: Rich Salz <rs...@openssl.org>
Reviewed-by: Richard Levitte <levi...@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/3449)

which adds test cases intended to trigger the edge cases being fixed.
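
(The int-vs-size_t shift problem in issues 1 and 3 is easy to see in
isolation -- a sketch; strictly speaking, shifting into the sign bit is
undefined and the right shift of a negative int is implementation-defined,
but on the usual two's-complement compilers the value goes negative and
then sticks at -1:)

#include <stdio.h>

int main(void)
{
    int i = 1 << 31;           /* INT_MIN with 32-bit int */
    int iterations = 0;

    /* A loop that shifts until i reaches 0 would never terminate; bound it
     * here just to show the stuck value. */
    while (i != 0 && iterations < 40) {
        i >>= 1;
        iterations++;
    }
    printf("i = %d after %d shifts\n", i, iterations);
    return 0;
}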

> Sorry.
>
>   How locked up is the machine?  Can you get memory usage stats or is it 
> completely unresponsive?
>
> Completely unresponsive. Totally. No memory usage. The only thing that works 
> at this point is the power button.

It seems that there should also be a bug report against OS X, as regular
userspace code running as non-root should not be able to hang a machine
like that.

From just looking at the code, the only question that comes to mind is
whether you have a 32- or 64-bit size_t in the build environment in
question, which is unlikely to cause a eureka moment :(

-Ben
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] 90-test_secmem.t hangs the machine for good

2017-05-12 Thread Benjamin Kaduk via openssl-dev
On 05/12/2017 01:34 PM, Blumenthal, Uri - 0553 - MITLL wrote:
>
> I’m sorry to report that in the current OpenSSL 1.1 master running
> “make test” freezes up the machine. Mac OS X 10.11.6, Xcode-8.2,
> current Github master. Here’s the configuration:
>

A commit hash would be more useful than "current github master"

>  
>
> ./Configure darwin64-x86_64-cc enable-threads enable-shared
> enable-zlib enable-ec_nistp_64_gcc_128 enable-rfc3779 enable-rc5
> enable-tls1_3 --prefix=/Users/uri/src/openssl-1.1
> --openssldir=/Users/uri/src/openssl-1.1/etc
>
>  
>
> Then of course “make depend && make clean && make all && make test”
>
>  
>
> ../test/recipes/90-test_ige.t . ok   
>
> ../test/recipes/90-test_memleak.t . ok   
>
> ../test/recipes/90-test_overhead.t  skipped: Only
> supported in no-shared builds
>
> ../test/recipes/90-test_secmem.t .. 
>
>  
>
> At this point the machine has to be power-cycled.
>
> —
>


I can understand not wanting to have to power-cycle the machine again,
but the 'make TESTS=test_secmem V=1 test' output (or some
dtruss/similar) would be helpful in tracking things down.

How locked up is the machine?  Can you get memory usage stats or is it
completely unresponsive?

-Ben
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Question about commit 222333cf01e2fec4a20c107ac9e820694611a4db

2017-04-11 Thread Benjamin Kaduk via openssl-dev
It seems like a more elegant option would be if there was some attribute
of the engine that could be queried and override the check against zero.
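
(For context, the usual buffer-sizing dance that Steve describes below --
query the needed length, then allocate -- looks roughly like this sketch;
with an engine-held key reporting a size of 0, the first call has nothing
useful to return:)

#include <openssl/crypto.h>
#include <openssl/evp.h>

int sign_with_query(EVP_PKEY_CTX *pctx, const unsigned char *md, size_t mdlen,
                    unsigned char **sig, size_t *siglen)
{
    /* First call with a NULL buffer reports an upper bound on the length. */
    if (EVP_PKEY_sign(pctx, NULL, siglen, md, mdlen) <= 0)
        return 0;
    if ((*sig = OPENSSL_malloc(*siglen)) == NULL)
        return 0;
    return EVP_PKEY_sign(pctx, *sig, siglen, md, mdlen) > 0;
}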

-Ben

On 04/11/2017 06:20 PM, Michael Reilly wrote:
> Unfortunately the check breaks code which doesn't know, nor needs to know,
> the keysize.  The engine takes care of allocating the required buffers.
>
> Leaving it set to 0 has not broken anything yet.  I suppose we could try to
> somehow set it to an arbitrary non-zero value to please the == 0 check.
>
> michael
>
> On 04/11/2017 03:47 PM, Dr. Stephen Henson wrote:
>> On Tue, Apr 11, 2017, Michael Reilly wrote:
>>
>>> Hi,
>>>
>>> commit 222333cf01e2fec4a20c107ac9e820694611a4db added a check that the size
>>> returned by EVP_PKEY_size(ctx->pkey) in M_check_autoarg() in
>>> crypto/evp/pmeth_fn.c is != 0.
>>>
>>> We are in the process of upgrading from 1.0.2j to 1.0.2k and discovered 
>>> that the
>>> if (pksize == 0) check added in 1.0.2k breaks some of our applications.
>>>
>>> We use an engine for the RSA sign operation.  The applications do not know
>>> anything about the keypair being used.  The keypair is kept private by the
>>> engine so the application couldn't determine the attributes of the keypair 
>>> if it
>>> wanted to do so.
>>>
>>> If this check is necessary is there a way to bypass it when the application 
>>> does
>>> not have the keypair but the engine being used is holding the keypair?
>>>
>>> I know we can simply remove this line from our copy of the code but we like 
>>> to
>>> avoid modifying the openssl distributed code if at all possible.
>>>
>> Well the point of that code is so an application knows how large a buffer to
>> allocate for the signature. If it returns zero I can't see how applications
>> can do that.
>>
>> Note that you don't have to return the *precise* length of the signature just
>> an upper bound is sufficient.
>>
>> Steve.
>> --
>> Dr Stephen N. Henson. OpenSSL project core developer.
>> Commercial tech support now available see: http://www.openssl.org
>>

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


[openssl-dev] verify depth behavior change from 1.0.2 to 1.1.0?

2017-04-03 Thread Benjamin Kaduk via openssl-dev
Hi all,

We noticed that the depth limit check seems to behave differently
between 1.0.2 and 1.1.0.

In particular, with a (1.1.0)

openssl/test$ ../util/shlib_wrap.sh ../apps/openssl s_server -port 8080
-cert certs/ee-cert.pem -certform PEM -key certs/ee-key.pem -keyform PEM
-no-CApath -CAfile certs/root-cert.pem -chainCAfile certs/ca-cert.pem

running, I can then go poke at it with s_client and look for the 'Verify
return code' output from:

openssl s_client -connect localhost:8080 -CAfile
test/certs/root-cert.pem -verify_depth N

for N equal to 0, 1, or 2.

With a 1.0.2 s_client,

N=0 --> "Verify return code: 21 (unable to verify the first certificate)"
N=1 --> "Verify return code: 20 (unable to get local issuer certificate)"
N=2 --> "Verify return code: 0 (ok)"

But the 1.1.0 s_client shows:

N=0 --> "Verify return code: 22 (certificate chain too long)"
N=1 --> "Verify return code: 0 (ok)"
N=2 --> "Verify return code: 0 (ok)"

The new behavior (which does not consider the root to be part of the
chain for purposes of verification) seems to be intentional, and is
explicitly tested in test/recipes/25-test_verify.t:


# Depth tests, note the depth limit bounds the number of CA certificates
# between the trust-anchor and the leaf, so, for example, with a
# root->ca->leaf chain, depth = 1 is sufficient, but depth == 0 is not.
#
ok(verify("ee-cert", "sslserver", ["root-cert"], ["ca-cert"],
"-verify_depth", "2"),
   "accept chain with verify_depth 2");
ok(verify("ee-cert", "sslserver", ["root-cert"], ["ca-cert"],
"-verify_depth", "1"),
   "accept chain with verify_depth 1");
ok(!verify("ee-cert", "sslserver", ["root-cert"], ["ca-cert"],
"-verify_depth", "0"),
   "accept chain with verify_depth 0");


There was a fair amount of churn in x509_vfy.c with the inclusion of the
DANE stuff and whatnot, so it's not immediately clear to me when this
change actually happened.  I think there are good arguments for the
current 1.1.0 behavior and it doesn't really make sense to try to change
back to the historical behavior, but it would be good to know when the
change actually happened and that it is/was a known change.  Ideally we
could also document the different behavior between 1.0.x and 1.1.0
better; any thoughts about where to do so?
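
(For reference, the knob whose semantics changed is the one below -- a
sketch of the programmatic equivalent of s_client's -verify_depth; in 1.1.0
the limit counts only the CA certificates between the trust anchor and the
leaf, so a root->ca->leaf chain passes with depth 1, whereas 1.0.2 required
depth 2:)

#include <openssl/ssl.h>

void limit_depth(SSL_CTX *ctx, int depth)
{
    SSL_CTX_set_verify_depth(ctx, depth);
}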

Thanks,

Ben
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] [openssl/openssl] ABI compatibility 1.0.0-->1.0.1-->1.0.2

2017-01-27 Thread Benjamin Kaduk via openssl-dev
I guess the dashboard is only picking up incremental differences, then,
so the four missing symbols are just for 1.0.1u to 1.0.2 (no letter); any
symbols that were added to both 1.0.1 and 1.0.2 letter releases (e.g.,
for CVE fixes) would show up as "removed" since they weren't in the
initial 1.0.2 release.

I guess the tool needs more investigation than the quickest look...

-Ben

On 01/27/2017 02:43 PM, Michel wrote:
> Hi,
> SRP_VBASE_get1_by_user() was ADDED to 1.0.2g 1 march 2016 [CVE-2016-0798].
> I remember it very well !
> ;-)
>
> Michel
>
> -Message d'origine-
> De : openssl-dev [mailto:openssl-dev-boun...@openssl.org] De la part de
> Salz, Rich via openssl-dev
> Envoyé : vendredi 27 janvier 2017 19:49
> À : Kaduk, Ben; openssl-dev@openssl.org
> Objet : Re: [openssl-dev] [openssl/openssl] ABI compatibility
> 1.0.0-->1.0.1-->1.0.2
>
> The tool looks good, but either you didn't find the right link, or it's got
> bugs.  Of the four symbols you found, ASN1_STRING_clear_free(),
> SRP_user_pwd_free(), and SRP_VBASE_get1_by_user() all exist; only
> ENGINE_load_rsax() was removed.  
>

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] [openssl/openssl] ABI compatibility 1.0.0-->1.0.1-->1.0.2

2017-01-27 Thread Benjamin Kaduk via openssl-dev
[moving from github to -dev]

On 01/27/2017 07:36 AM, mattcaswell wrote:
>
> 1.0.2 is the software version.
> The numbers on the end of lbssl.so.1.0.0 refer to the ABI version -
> which is different. Software version 1.0.2 is a drop in replacement
> for 1.0.1, which is a drop in replacement for 1.0.0 - hence they all
> have the same ABI version.
>
>

There was some discussion about 1.0.1 being EoL on a FreeBSD list [0],
and whether it would make sense to move to 1.0.2 on their stable branch,
which led to someone making the claim that 1.0.2 has removed 4 symbols
compared to 1.0.1, and thus is not strictly ABI compatible, linking to
https://abi-laboratory.pro/tracker/timeline/openssl/ .  If I start
semi-randomly clicking around, I can find a page [1] that seems to claim
the missing symbols are:
ASN1_STRING_clear_free()
ENGINE_load_rsax()
SRP_user_pwd_free()
SRP_VBASE_get1_by_user()

It may be too late to get the 1.0.x series fully compatible, but it's
probably worth thinking about how we can use automation to ensure that
the 1.1.x series remains ABI compatible going forward.  I just learned
about abi-laboratory.pro from the FreeBSD posting, so I don't know if it
is appropriate or we would want to use some other tool.

One (naive?) idea for a home-grown solution would be to come up with a
scheme to serialize the public ABI to a file in the repo, maybe
regenerated as part of 'make test', and ensure that that file is
append-only, at least between releases.  But I don't know if the state
of the art is more advanced than that -- are there better options?

-Ben


[0]
https://lists.freebsd.org/pipermail/freebsd-security/2017-January/009211.html
[1]
https://abi-laboratory.pro/tracker/compat_report/openssl/1.0.1u/1.0.2/c63bf/abi_compat_report.html#Removed
-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev