Re: OMC VOTE: selection and handling for SHA1 and RIPEMD160

2022-10-12 Thread Viktor Dukhovni
On Wed, Oct 12, 2022 at 03:35:19PM +0200, Richard Levitte wrote:

> Topic: Provider selection and handling for SHA1 and RIPEMD160 should be 
> identical
>given the current understanding of algorithm specific security issues.

Shouldn't real-world usage be taken into account?  SHA1 is widely used,
and even has important use-cases that aren't going away and where
collision resistance is not a major concern, e.g. NSEC3 in DNSSEC
where it is used for light obfuscation, not cryptographic signing.

I am not aware of any extant protocols that rely on RIPEMD160.  I think
that strictly looking at security margins is misguided, real world usage
needs to inform any such decision, and users should be able to easily
keep SHA1 without bringing RIPEMD160 along for the ride.
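
For what it's worth, under 3.0 an application that wants SHA1 does not
need RIPEMD160 at all; a minimal sketch using the provider fetch API
(error handling mostly omitted):

    #include <openssl/evp.h>

    /* Fetch SHA1 explicitly; whether RIPEMD160 lives in the default or
     * legacy provider is irrelevant to this caller. */
    EVP_MD *sha1 = EVP_MD_fetch(NULL, "SHA1", NULL);

    if (sha1 != NULL) {
        /* ... use with EVP_DigestInit_ex(), etc. ... */
        EVP_MD_free(sha1);
    }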

-- 
Viktor.


Re: OTC Vote: Remove the RSA_SSLV23_PADDING and related functions completely

2021-02-24 Thread Viktor Dukhovni
Is there an open pull request for this?

> On Feb 23, 2021, at 8:21 AM, Tomas Mraz  wrote:
> 
> topic: The RSA_SSLV23_PADDING and related functions should be
> completely removed from OpenSSL 3.0 code.
> 
> comment: The padding mode and the related functions (which are already
> deprecated in the current master branch) is useless outside of SSLv2
> support. We do not support SSLv2 and we do not expect anybody using
> OpenSSL 3.0 to try to support SSLv2 by calling those functions.

I am inclined to vote yes on general grounds, but my concern is whether
this might then cause some downstream consumers of OpenSSL to fail to
compile (things like Python bindings to OpenSSL, Net::SSLeay, ...)

It may be prudent to leave some stub functions in place that just
return errors, if they're currently exposed in various tools, and
likely unused, but would still cause some pain to the downstream
API maintainers if entirely removed.
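
If stubs were wanted, something like this sketch would do (whether these
exact entry points need stubbing depends on what the downstream bindings
actually reference):

    #include <openssl/err.h>
    #include <openssl/rsa.h>

    /* Hypothetical compatibility stub: keep the symbol so downstream
     * bindings still link, but always fail at run time. */
    int RSA_padding_add_SSLv23(unsigned char *to, int tlen,
                               const unsigned char *from, int flen)
    {
        ERR_raise(ERR_LIB_RSA, RSA_R_UNKNOWN_PADDING_TYPE);
        return 0;
    }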

Are there any such functions exposed by popular toolkits?

-- 
Viktor.



Re: 1.1.1f

2020-03-26 Thread Viktor Dukhovni
On Thu, Mar 26, 2020 at 11:33:40PM +, Matt Caswell wrote:

> On 26/03/2020 23:15, Viktor Dukhovni wrote:
> > On Thu, Mar 26, 2020 at 09:13:32PM +0100, Bernd Edlinger wrote:
> > 
> >> we got into this situation because everything moves so quickly,
> >> why does everyone here think we should move even faster now?
> >>
> >> What is the reason for this?
> > 
> > We've published a bug-fix release (1.1.1e) that's liable to cause more
> > problems than it fixes.  In such cases a closely-timed "fixup" (oops)
> > release is expected.  One that only reverts the (two) problem
> > EOF-handling commits.
> 
> Actually a partial revert of one of them is sufficient to resolve the
> problem.

Yes, probably so.  I took a sledge-hammer to the problem, and since the
second commit depended on the first, I reverted both.  If you leave
the pre-requisites for the second commit in place, and just remove
the changed error handling, then indeed that may also work.

-- 
Viktor.


Re: 1.1.1f

2020-03-26 Thread Viktor Dukhovni
On Thu, Mar 26, 2020 at 09:13:32PM +0100, Bernd Edlinger wrote:

> we got into this situation because everything moves so quickly,
> why does everyone here think we should move even faster now?
> 
> What is the reason for this?

We've published a bug-fix release (1.1.1e) that's liable to cause more
problems than it fixes.  In such cases a closely-timed "fixup" (oops)
release is expected.  One that only reverts the (two) problem
EOF-handling commits.  Further bug-fixes can be queued for later
releases, or deferred to a major release as appropriate.

-- 
Viktor.


Re: Deprecations

2020-02-23 Thread Viktor Dukhovni
> On Feb 22, 2020, at 4:53 AM, Richard Levitte  wrote:
> 
> Something that could be done is to take all those aged commands and
> rewrite them as wrappers for genpkey, pkey and pkeyutl.  Simply create
> and populate a new argv and call genpkey_main(), pkey_main() or
> pkeyutl_main().

Agreed, that sounds quite reasonable at first blush, and could be fantastic
if it can be made to work (no immediate obstacles come to mind).
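
A rough sketch of the idea, assuming the usual apps-internal
<command>_main(int argc, char **argv) entry points (names and option
spellings here are only illustrative):

    #include <openssl/bio.h>

    /* genpkey_main() is the apps-internal entry point (see apps/genpkey.c). */
    extern int genpkey_main(int argc, char **argv);

    /* dsaparam-as-wrapper sketch: synthesize a genpkey command line and
     * delegate to the existing genpkey implementation. */
    static int dsaparam_wrapper(const char *nbits)
    {
        char pkeyopt[64];
        char *argv[] = {
            "genpkey", "-genparam", "-algorithm", "DSA",
            "-pkeyopt", pkeyopt, NULL
        };

        BIO_snprintf(pkeyopt, sizeof(pkeyopt), "dsa_paramgen_bits:%s", nbits);
        return genpkey_main(6, argv);
    }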

-- 
Viktor.



Re: Deprecations

2020-02-21 Thread Viktor Dukhovni
On Sat, Feb 22, 2020 at 12:51:17AM +0100, Kurt Roeckx wrote:

> > (I just realised that what the CHANGES entry says is that
> > dhparam/dsaparam are deprecated in favour of pkeyparam - but actually I
> > think the equivalent functionality is more split between genpkey and
> > pkeyparam)
> 
> Some equivalents:
> openssl dhparam 2048
> openssl genpkey -genparam --algorithm DH -pkeyopt dh_paramgen_prime_len:2048
> 
> openssl dsaparam 2048
> openssl genpkey -genparam -algorithm DSA -pkeyopt dsa_paramgen_bits:2048

+100.  The new commands are nice for professionally written utilities
that need to be algorithm polymorphic, ...  But there's nothing like
using a screwdriver to turn a screw, rather than banging it in with
an all-purpose hammer!

> If you search internet, you will more than likely find the first
> ones. They are very easy. I have to look up at the manual page
> examples to know how to use genpkey.

Yes, same here.

-- 
Viktor.


Re: Deprecations

2020-02-21 Thread Viktor Dukhovni
On Fri, Feb 21, 2020 at 11:00:10PM +, Matt Caswell wrote:

> dhparam itself has been deprecated. For that reason we are not
> attempting to rewrite it to use non-deprecated APIs. The informed
> decision we have made about DH_check use in dhparam is to not build the
> whole application in a no-deprecated build:
> 
>   *) The command line utilities dhparam, dsa, gendsa and dsaparam have been
>  deprecated.  Instead use the pkeyparam, pkey, genpkey and pkeyparam
>  programs respectively.
>  [Paul Dale]

Dropping "dhparam" is rather an incompatible change.  It is widely used,
and its replacemnt is much more complex, and does not appear in how-to
guides that explain how to generate DH parameters.  Whatever API is
used in "pkeyparam", needs to be inserted into dhparam without changing
its CLI.

The same applies to genrsa, ... and even though I'm sometimes masochistic
enough to use "genpkey" (after checking the manpage again, or re-reading
my own mkcert.sh script), it somehow has never managed to get to a point
where I can emit its various options from finger memory.

-- 
Viktor.


Re: crypt(3)

2020-01-19 Thread Viktor Dukhovni
On Sun, Jan 19, 2020 at 12:26:06PM +0100, Kurt Roeckx wrote:

> The only thing that we support currently that makes sense as a
> default is -5 (sha256) and -6 (sha512). I suggest you go with -6.

I concur.  FWIW, this is the default password hash for my FreeBSD 12
server, so it is not a Linux-only construct.

-- 
Viktor.


Re: crypt(3)

2020-01-16 Thread Viktor Dukhovni
On Fri, Jan 17, 2020 at 04:31:06PM +1000, Dr Paul Dale wrote:

> There are two functions (DES_crypt and DES_fcrypt) which implement the
> old crypt(3) password algorithm.  Once these are deprecated, they will
> no longer be reachable via EVP.  The confounding point is that they
> aren’t quite DES — close but not identical.  I would be surprised if
> they aren’t still in use for /etc/passwd files on old and/or embedded
> systems.

Generally speaking, on Unix-like systems that use crypt(3) for
/etc/passwd I'd expect to find a standalone crypt() implementation in
libc that is independent of OpenSSL.  That is, if your system still
uses crypt() for passwords, you don't need OpenSSL to compute crypt
hashes.
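
For example, on such a system the libc route is simply (a sketch; on
glibc this needs <crypt.h> and linking with -lcrypt):

    #include <crypt.h>
    #include <stdio.h>

    int main(void)
    {
        /* Traditional crypt(3): 2-character salt, no OpenSSL involved. */
        const char *hash = crypt("password", "ab");

        printf("%s\n", hash != NULL ? hash : "(crypt failed)");
        return 0;
    }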

That said, this is experience from general-purpose computers running
Unix-like OSes, not embedded systems, where I have no idea whether
crypt() is popular, and whether it is provided by a port of libcrypto
to that platform.

> I’ve got several choices: Leave them public and unchanged — that is,
> don’t deprecate these two functions yet.  Deprecate them and add KDFs
> to replace them.  Deprecate them, leave them alone and hope they go
> away painlessly at some point.

I would not expect to find many users of OpenSSL's crypt(), except
internally within OpenSSL itself.

> The apps/password.c applet calls these which is how I stumbled over
> the complication.  I’m fine refactoring this based on the solution
> chosen.  I’d also be okay with factoring out all the password
> derivation functions into KDFs if necessary.
> 
> Thoughts?  Other alternatives?

I don't know enough about embedded systems to speak about what if
anything we need to do for those with respect to crypt().

-- 
Viktor.


Re: Legacy provider

2020-01-15 Thread Viktor Dukhovni
My abstain vote was a carefully considered neutral stance backed
by many paragraphs of rationale.

The gist of which is that given that the decision to load or not
the provider is in the configuration file, the party ultimately
making the decision is whoever packages the software, not the
OpenSSL project.  OS distributions and users will make their own
choices, as they build packages and deploy systems.

Our "default" choice is just a "suggestion".  So the real change
is providing a mechanism to make the choice, the specific choice
we default to is IMHO not that important, and signalling that
the legacy algorithms are best left disabled when possible is
a reasonable outcome.  But, on the other hand we also want to
largely remain compatible with 3.0, and make compile and deploy
easy.  So there is some reason to take the compatible default.

I had the advantage of voting last, knowing that my abstain would
allow the vote to pass...

> On Jan 15, 2020, at 3:07 PM, Benjamin Kaduk  wrote:
> 
> It's good to have a decision here, but I'm kind of worried about the four
> abstains -- it's easy for me to leap to a conclusion that the individuals
> in question just didn't want to to spend the time to come to a considered
> position, even though this issue has substantial potential impact for our
> userbase.  I'm trying to not make faulty assumptions, so some greater
> clarity on the circumstances would be helpful, if possible.

-- 
Viktor.



Re: #10388

2019-11-14 Thread Viktor Dukhovni
> On Nov 14, 2019, at 9:15 AM, Matt Caswell  wrote:
> 
> "No existing public interface can be removed until its replacement has
> been in place in an LTS stable release. The original interface must also
> have been documented as deprecated for at least 5 years. A public
> interface is any function, structure or macro declared in a public
> header file."
> 
> So the functions we're talking about don't yet have a replacement -
> there will be one in 3.0. So presumably they will be documented as
> deprecated in 3.0. So we have to support them for at least 5 years from
> the release of 3.0.

I think that when we're deprecating an entire family of *related* accessors
(for the same structure) that have been in place for some time, the addition
of a missing member of that family is reasonably interpreted as not resetting
the support clock on the entire family.  We can still remove them all as though
the missing members of the family had been in place all along.

That is, we should (and if that requires a policy update and vote so be it) be
able to interpret the rules based on the "spirit" (intent) without getting
unduly burdened by the "letter", where common sense would suggest that we're
getting the intended outcome.

-- 
Viktor.



Re: #10388

2019-11-14 Thread Viktor Dukhovni
On Thu, Nov 14, 2019 at 08:41:57AM +, Matt Caswell wrote:

> I think that we should not add them to 1.1.1 without also adding them to 3.0.

Yes.

> OTOH if you have a 1.0.2 application that uses these things then not having
> them would represent a barrier to moving off of 1.0.2. And it doesn't add
> that much burden to us from the perspective of moving these things to the
> legacy bridge, because we've got to do all the other "meth" calls anyway and
> these are just two more.

My take is that we should just add them, and then deprecate them
in 3.0.0 when it comes to deprecate all the existing related
functions.  Having a few more members of the same family to deprecate
is not a burden.  The deprecation won't be any harder for having an
extra handful of code-points.

This is based on reports that these are basically missing accessors,
not a substantial new feature, and there is some plausible need for
them, now that the relevant structures are opaque.

-- 
Viktor.


Re: Deprecation of stuff

2019-09-04 Thread Viktor Dukhovni
+1 (and more) to the below!

> On Sep 4, 2019, at 10:15 AM, David Woodhouse  wrote:
> 
> I'd note that the question of *versioning* mechanisms is a very very
> special case of "when to deprecate stuff". So much so as to almost make
> it a completely separate question altogether.
> 
> My own favourite application is littered with checks on
> OPENSSL_VERSION_NUMBER, and the occasional call to SSLeay() to check
> for things that were fixed at runtime without an ABI change.
> 
> http://git.infradead.org/users/dwmw2/openconnect.git/blob/HEAD:/openssl.c
> http://git.infradead.org/users/dwmw2/openconnect.git/blob/HEAD:/openssl-dtls.c
> 
> A change to the versioning scheme is very much more than just another
> thing that's been deprecated; it's a change to the very mechanism by
> which we handle those deprecations. Changing that (and by extension,
> rapidly deprecating it) requires a lot more work on the part of
> application authors who want their code to build against whatever
> version of OpenSSL may be present on the platforms they need to
> support.

-- 
Viktor.



Re: Deprecation of stuff

2019-09-04 Thread Viktor Dukhovni
On Wed, Sep 04, 2019 at 02:43:34PM +0200, Tomas Mraz wrote:

> > The dispute in PR https://github.com/openssl/openssl/pull/7853 has
> > made it quite obvious that we have some very different ideas on when
> > and why we should or shouldn't deprecate stuff.
> > 
> > What does deprecation mean?  Essentially, it's a warning that at some
> > point in the future, the deprecated functionality will be removed.  I
> > believe that part of the issue surrounding this is uncertainty about
> > when that removal will happen, so let me just remind you what's
> > written in our release strategy document:

Actually, my issue was not timing, but whether the particular feature
warrants eventual removal.  I don't believe it does.

> > 1. Why should we deprecate stuff
> 
> Because keeping every legacy API/feature/option/... increases the
> maintenance burden, attack surface, confuses users/developers, and in
> general hinders the development.
> 
> > 2. Why should we not deprecate stuff
> 
> If something does not really have an adequate replacement, it does not
> really increase the maintenance burden, does not significantly increase
> the attack surface, and is still used widely in applications, it should
> not be deprecated.

This is essentially the basis of my objection, with less emphasis
on "adequate replacement".  Just because we *can* ask users to
rewrite their code does not mean we *should*.

-- 
Viktor.


Re: Thread sanitiser problems

2019-07-30 Thread Viktor Dukhovni
> On Jul 30, 2019, at 10:02 PM, Dr Paul Dale  wrote:
> 
> The #9454 description includes thread sanitiser logs showing different lock 
> orderings — this has the potential to dead lock.  Agreed with Rich that 
> giving up the lock would make sense, but I don’t see a way for this to be 
> easily done.

My take is that we should never hold any lock long enough to even consider
acquiring another lock.  No more than one lock should be held at any one
time, and only long enough to bump a reference count to ensure that the
object of interest is not deallocated before we (or our caller in a "get1"
type interface) are done with it.
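
In other words, roughly this shape (a generic sketch using the public
CRYPTO_THREAD lock API; THING, lookup_in_store() and THING_free() are
hypothetical):

    /* Take the lock only long enough to bump the reference count; never
     * call anything that might itself take a lock while holding it. */
    THING *THING_get1(THING_STORE *store, const char *name)
    {
        THING *t;

        CRYPTO_THREAD_write_lock(store->lock);
        t = lookup_in_store(store, name);   /* takes no other locks */
        if (t != NULL)
            t->refcount++;
        CRYPTO_THREAD_unlock(store->lock);

        return t;   /* the caller eventually releases it via THING_free() */
    }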

I don't know what "long-term" locks we're holding, but it would be great
if it were possible to never (or never recursively) hold any such locks.

-- 
Viktor.



Re: Do we really want to have the legacy provider as opt-in only?

2019-07-16 Thread Viktor Dukhovni
On Mon, Jul 15, 2019 at 02:27:44PM +, Salz, Rich wrote:

> >>DSA
> > 
> > What is the cryptographic weakness of DSA that you are avoiding?
> 
> It's a good question. I don't recall the specific reason why that was 
> added to
> the list. Perhaps others can comment.
> 
> The only weakness I know about is that if you re-use the nonce, the private
> key is leaked. It's more brittle than RSA-PKCS, but not as flawed as RC4.
> 
> I think this should be removed from the "legacy" list unless someone can 
> point out why it's like the others in the list.

1.  DSA is not supported in TLS 1.3.
2.  DSA is almost never used with TLS 1.2; the public
    CAs and the vast majority of users employ RSA.
3.  Historical DSA was limited to 1024-bit keys and SHA-1.
    IIRC we now support stronger combinations, but these
    are not widely used.
4.  As mentioned, key disclosure is more likely than with RSA.
5.  Attack-surface reduction.  If DSA is almost never used,
    why enable it by default?

I might note that I don't count myself among the "crypto maximalists",
and I'm generally of the "raise the ceiling, not the floor" mindset,
RFC7435 and all that.  However, once an algorithm is sufficiently
disused (raising the ceiling worked, and everybody we care about
has moved on) it is then time to turn out the lights.

So what are the reasons for *keeping* DSA enabled by *default*
(at runtime)?  Compile-time still delivers the legacy module,
and the configuration file can enable it with no recompilation.

-- 
Viktor.


Re: punycode licensing

2019-06-20 Thread Viktor Dukhovni
On Thu, Jun 20, 2019 at 03:39:10PM +0100, Matt Caswell wrote:

> PR 9199 incorporates the C punycode implementation from RFC3492:
> 
> https://github.com/openssl/openssl/pull/9199
> 
> The RFC itself has this section in it:
> 
> B. Disclaimer and license
> 
>Regarding this entire document or any portion of it (including the
>pseudocode and C code), the author makes no guarantees and is not
>responsible for any damage resulting from its use.  The author grants
>irrevocable permission to anyone to use, modify, and distribute it in
>any way that does not diminish the rights of anyone else to use,
>modify, and distribute it, provided that redistributed derivative
>works do not contain misleading author or version information.
>Derivative works need not be licensed under similar terms.
> 
> Which is quite confusing because on the one hand it places a requirement on
> redistributed derivative works:
> 
> "provided that redistributed derivative works do not contain misleading author
> or version information"
> 
> and then on the other hand states that derivative works are free to licence
> under different terms:
> 
> "Derivative works need not be licensed under similar terms"
> 
> It seems to me that the above gives us the ability to just relicense this 
> under
> Apache 2 and incorporate it. But I'm not entirely sure.

I'd be comfortable with relicensing under Apache, while clearly
indicating the provenance of the code, and indicating that the
file is also available under the original terms.

-- 
Viktor.


Re: Removing function names from errors (PR 9058)

2019-06-13 Thread Viktor Dukhovni
On Fri, Jun 14, 2019 at 01:41:51PM +1000, Dr Paul Dale wrote:

> I’m behind ditching the function identifier #defines but not their text names.

Good to hear.

> #define ERR_raise_error ERR_raise_error_internal(__FILE__, __LINE__, __FUNC__)

Well, __FUNC__ is entirely non-standard, and __func__ is C99.  Are
we ready to abandon C89/C90?  If not, then __func__ (and variants)
becomes compiler-specific.

In test/testutil.h, we have some of the requisite gymnastics:

# if !defined(__STDC_VERSION__) || __STDC_VERSION__ < 199901L
#  if defined(_MSC_VER)
#   define TEST_CASE_NAME __FUNCTION__
#  else
#   define testutil_stringify_helper(s) #s
#   define testutil_stringify(s) testutil_stringify_helper(s)
#   define TEST_CASE_NAME __FILE__ ":" testutil_stringify(__LINE__)
#  endif   /* _MSC_VER */
# else
#  define TEST_CASE_NAME __func__
# endif /* __STDC_VERSION__ */

While the GCC manual: 
http://gcc.gnu.org/onlinedocs/gcc-4.8.1/gcc/Function-Names.html
suggests:

 #if __STDC_VERSION__ < 199901L
 # if __GNUC__ >= 2
 #  define __func__ __FUNCTION__
 # else
 #  define __func__ ""
 # endif
 #endif

we would also need similar for any other pre-C99 supported compilers.

That said, I'm still in favour of function strings, and just the
error and library codes as numeric.

-- 
Viktor.


Re: Removing function names from errors (PR 9058)

2019-06-12 Thread Viktor Dukhovni
On Wed, Jun 12, 2019 at 10:02:25AM +0100, Matt Caswell wrote:

> OTOH I do find them quite helpful from a debugging perspective, e.g. when 
> people
> send in questions along the lines of "I got this error what does it mean/how 
> do
> I fix it" - although what is actually useful is usually the function name 
> rather
> than the function code itself.

Indeed what's needed is the function name.  The numeric code is far
less important.  On the error consumer side, the idiom I'm familiar
with is:

while ((err = ERR_get_error_line_data(&file, &line, &data, &flags)) != 0) {
    ERR_error_string_n(err, buffer, sizeof(buffer));
    if (flags & ERR_TXT_STRING)
        printf("...: %s:%s:%d:%s", buffer, file, line, data);
    else
        printf("...: %s:%s:%d", buffer, file, line);
}

This makes no explicit reference to function numbers, returning the
appropriate strings.  So any change is likely limited to error
producers.

On the producer side, my ssl_dane library (used in Exim for example),
does depend on the function ordinal API:

https://github.com/vdukhovni/ssl_dane/blob/master/danessl.c#L52-L118

so that would need to change (or no longer be supported) if the function
ordinals are replaced by strings, or otherwise change.

-- 
Viktor.


Re: VOTE Apply PR#9084 reverting DEVRANDOM_WAIT

2019-06-07 Thread Viktor Dukhovni
> On Jun 7, 2019, at 7:24 PM, Kurt Roeckx  wrote:
> 
> That's all very nice, but nobody is going to run that.

They also don't have to upgrade their kernel, or deploy new
versions of OpenSSL.  If platform release engineers don't
deploy core services that ensure reliable CSPRNG seeding,
then their platform is less secure at boot.  This is their
choice.  Users can vote with their feet for more secure
O/S distributions.

Secure CSPRNG seeding is a platform responsibility, OpenSSL
then runs secure PRNGs seeded from the platform.  There's
only so much we can reasonably do.  The rest has to happen
outside of OpenSSL, as a pre-requisite.

And yes, fallback on RDSEED/RDRAND + TPM (real or virtual)
+ whatever is available, ideally not in libcrypto but rather in
a service that seeds the system at boot.

Those other mechanisms are often either not fully trusted in
isolation, not always available, or too expensive at every
process start.  The logic to identify which are available,
and how many are enough, ... is best extracted to run separately
at boot, with the library using either getentropy() or reading
/dev/urandom (on older kernels).
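
At the library end that amounts to roughly the sketch below
(HAVE_GETENTROPY is an illustrative build-time macro, not an actual
OpenSSL define; partial-read handling omitted):

    #include <fcntl.h>
    #include <unistd.h>

    /* Seed-material read: getentropy() where available, /dev/urandom
     * otherwise; boot-time blocking is assumed to be handled elsewhere. */
    static int get_seed(unsigned char *buf, size_t len)
    {
    #if defined(HAVE_GETENTROPY)
        return getentropy(buf, len) == 0;
    #else
        int fd = open("/dev/urandom", O_RDONLY);
        ssize_t n = -1;

        if (fd >= 0) {
            n = read(fd, buf, len);
            close(fd);
        }
        return n == (ssize_t)len;
    #endif
    }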

-- 
Viktor.



Re: VOTE Apply PR#9084 reverting DEVRANDOM_WAIT

2019-06-07 Thread Viktor Dukhovni
On Sat, Jun 08, 2019 at 12:54:36AM +0200, Kurt Roeckx wrote:

> On Fri, Jun 07, 2019 at 03:37:07PM -0400, Viktor Dukhovni wrote:
> > > On Jun 7, 2019, at 3:25 PM, Kurt Roeckx  wrote:
> > > 
> > > For older kernels you install rng-tools that feeds the hwrng in
> > > the kernel.
> > 
> > Which works for me, and is pretty much the point I'm trying to make.
> > Then, read /dev/random once early at boot, and do nothing special
> > in libcrypto (safely use /dev/urandom).
> 
> The only thing rng-tools will actually solve is the starvation
> issue. No service will depend on it, since they don't have any
> relationship with it. Nor can you wait for it, it's not because
> it's started that it has initialized the kernel. I think I've also
> seen reports that it got started too late, actually after a
> services that wants to ask the kernel for random numbers.

Then a different service can be developed that does block just once
at boot, and tries to obtain entropy from a configurable set of
sources (to avoid or reduce unbounded delay, and mix in more
independent sources).

-- 
Viktor.


Re: VOTE Apply PR#9084 reverting DEVRANDOM_WAIT

2019-06-07 Thread Viktor Dukhovni
> On Jun 7, 2019, at 3:25 PM, Kurt Roeckx  wrote:
> 
> For older kernels you install rng-tools that feeds the hwrng in
> the kernel.

Which works for me, and is pretty much the point I'm trying to make.
Then, read /dev/random once early at boot, and do nothing special
in libcrypto (safely use /dev/urandom).

-- 
Viktor.



Re: VOTE Apply PR#9084 reverting DEVRANDOM_WAIT

2019-06-07 Thread Viktor Dukhovni
> On Jun 7, 2019, at 2:41 PM, Kurt Roeckx  wrote:
> 
>> This is not the sort of thing to bolt into the kernel, but is not
>> unreasonable for systemd and the like.
> 
> The kernel actually already does this in recent versions, if
> configured to do it.

We're talking about what to do with for older kernels, and in
cases when the kernel cannot promptly obtain sufficient entropy
without external sources.  The kernel's job is to mix in entropy
from natural activity.  Boot-time acquisition of non-trivial entropy
by other means falls outside the kernel, and may be needed when
the kernel cannot obtain sufficient entropy on its own in a timely
manner.

-- 
Viktor.



Re: VOTE Apply PR#9084 reverting DEVRANDOM_WAIT

2019-06-07 Thread Viktor Dukhovni
> On Jun 7, 2019, at 2:11 PM, Dr. Matthias St. Pierre 
>  wrote:
> 
>> The init system would
>> need to create this kind of service, and then all software not using
>> getentropy()/getrandom() would need to depend on that service. It
> 
> FWIW: systemd already has a service for saving and restoring a random seed.
> If I understood Tomas correctly, the saved seed is added to the random pool,
> but without crediting any entropy to it (which sounds reasonable to me).

That's a different issue.  What I was suggesting was a service that
waits for seeding to be ready.  So that other services can depend
on that service, with that service using various sources to adequately
seed the kernel RNG, with configurable additional sources beyond the
save file, possibly with a non-zero entropy estimate.  Thus, for example,
a virtual machine or container might make use of an interface to get
a trusted seed from the host hypervisor/OS.  Or a physical host might
trust its TPM, ...

This is not the sort of thing to bolt into the kernel, but is not
unreasonable for systemd and the like.

Applications can then use getentropy() and not even block at boot
time, if the system collects entropy at boot automatically and
without excessive delay.  The job of improving the source quality
and eliminating avoidable delay is then (correctly IMHO) the
responsibility of the platform's init system.

As for what to do on older platforms, ... add an entropy gathering
service to the system start up configuration, and make applications
that need early seed material depend on that service.

Perhaps the OpenSSL project can curate some examples of such service
configurations/scripts.  The simplest might be just DEVRANDOM_WAIT
as a service that runs at boot, and only reports "ready" once
/dev/random is ready.  After that applications can just use
/dev/urandom with some confidence of adequate seeding.
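
Such a "wait for seeding" helper could be as small as the sketch below;
the service manager would start dependents only after it exits
successfully:

    #include <fcntl.h>
    #include <unistd.h>

    /* Block until the kernel pool can produce output, then exit 0.  A
     * single-byte read from /dev/random (or a getrandom() call without
     * GRND_NONBLOCK on newer kernels) is enough for that. */
    int main(void)
    {
        unsigned char b;
        int fd = open("/dev/random", O_RDONLY);

        if (fd < 0 || read(fd, &b, 1) != 1)
            return 1;
        return 0;
    }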

-- 
Viktor.



Re: VOTE Apply PR#9084 reverting DEVRANDOM_WAIT

2019-06-07 Thread Viktor Dukhovni
On Fri, Jun 07, 2019 at 11:09:45AM +0200, Matthias St. Pierre wrote:

> See the discussion on openssl-users:
> 
> https://mta.openssl.org/pipermail/openssl-users/2019-May/010585.html
> https://mta.openssl.org/pipermail/openssl-users/2019-May/010593.html
> https://mta.openssl.org/pipermail/openssl-users/2019-May/010595.html
> 
> If desired, I can provide an alternative (competing) pull request which
> makes the DEVRANDOM_WAIT feature configurable in a proper and
> reasonable way. The default will be whatever the OMC decides.

I think that having the RNG behaviour capriciously different on
different systems based on the whims of whoever built the library
for that system is not a good idea.  OpenSSL should provide an RNG
that does not block "unexpectedly", indefinitely, and unpredictably.

Where "unexpectedly", means except possibly early at boot time, but
ideally waiting for boot-time entropoy is something that systemd
and the like take care of, and application start scripts can just
register a dependency on some sort of "entropy" service, whose
successful initialization is sufficient to ensure adequately secure
non-blocking seeding of applications via one of getentropy(),
getrandom(), /dev/urandom...

That is, I'd expect most of the work for ensuring adequate entropy
to happen outside libcrypto, except for perhaps enabling some
additional sources that may be available on various systems.

--
Viktor.


Re: No two reviewers from same company

2019-05-23 Thread Viktor Dukhovni
On Thu, May 23, 2019 at 03:45:48PM +0100, Matt Caswell wrote:

> IMO, no.

I also don't see a need for this at present, and it is not clear
that there are enough active part-time reviewers in place to keep
up with commits from the fellows in a timely manner.

-- 
Viktor.


Re: [openssl-project] inline functions

2019-01-27 Thread Viktor Dukhovni
> On Jan 27, 2019, at 5:33 AM, Tim Hudson  wrote:
> 
> Tim - I think inline functions in public header files simply shouldn't be 
> present.

I think they have their place, and we should try to make them more portable
to less capable toolchains as needed.

-- 
Viktor.



Re: [openssl-project] [TLS] Yet more TLS 1.3 deployment updates

2019-01-23 Thread Viktor Dukhovni
> On Jan 23, 2019, at 12:42 PM, David Benjamin  wrote:
> 
> (a) Debugging hooks for tracing, often copied from the openssl binary.
> (b) As a callback to know when the handshake (in the RFC8446 sense described 
> above, not the OpenSSL sense) is done, sensitive to SSL_CB_HANDSHAKE_DONE.
> (c) As a callback to block renegotiations.
> 
> The problem here is (c), and empirically has affected versions of NGINX, 
> Node, HAProxy and the real TLS 1.3 ecosystem. There may be more yet 
> undiscovered problems; we only had KeyUpdates on in Chrome for a week on 
> Chrome Canary before we had to shut it off. At three affected callers, one 
> cannot simply say this is the consumer's fault.
> 
> As for the others, (b) also doesn't want to trigger on KeyUpdate, though it 
> may tolerate it. (I have seen versions of (b) which ignore duplicates and 
> versions which break on renegos---no one tests against it, which is why it's 
> off by default in BoringSSL.) (a) is closest to the scenario you are 
> concerned about, but such debugging notes are just that: debugging. I have 
> never seen code which cares about their particulars. Indeed, if it did, 
> adding TLS 1.3 would not be compatible because 1.3 changes the state machine.
> 
> Thus, the fix is clear: don't signal HANDSHAKE_START and HANDSHAKE_DONE on 
> KeyUpdate. Not signaling has some risk, but it is low, especially in 
> comparison to the known breakage and ecosystem damage caused by signaling.

I'm inclined to agree with David here.  I should also note that there are two
issues in this thread, of which this is the second.  The first one is about
the limit on the number of key update messages per connection, and I hope
that we can do something sensible there with less controversy.

-- 
Viktor.



Re: [openssl-project] [TLS] Yet more TLS 1.3 deployment updates

2019-01-22 Thread Viktor Dukhovni


> On Jan 22, 2019, at 2:06 PM, Adam Langley  wrote:
> 
> (This is another installment of our experiences with deploying the
> RFC-final TLS 1.3—previous messages: [1][2]. We share these with the
> community to hopefully avoid other people hitting the same issues.)
> 
> [...]
> 
> However, OpenSSL 1.1.1a signals SSL_CB_HANDSHAKE_START when TLS 1.3
> post-handshake messages are received[5], including KeyUpdate. This
> causes KeyUpdate messages to break with, at least, HAProxy, and with
> NGINX prior to this commit[6]. (There may well be more, but that level
> of breakage was enough to drown any other signal.)
> 
> Lastly, OpenSSL 1.1.1a imposes a hard limit of 32 KeyUpdate messages
> per connection[7]. Therefore clients that send periodic KeyUpdates
> based on elapsed time or transmitted bytes will eventually hit that
> limit, which is fatal to the connection.
> 
> Therefore KeyUpdate messages are not currently viable on the web, at
> least when client initiated.
> 
> [1] https://mailarchive.ietf.org/arch/msg/tls/PLtOD4kROZFfNtPKzSoMyIUOzuE
> [2] https://mailarchive.ietf.org/arch/msg/tls/pixg5cBXHuwd3MtMIn_xIhWmGGQ
> [3] https://bugs.openjdk.java.net/browse/JDK-8211806
> [4] https://bugs.openjdk.java.net/browse/JDK-8213202
> [5] https://github.com/openssl/openssl/issues/8069
> [6] 
> https://trac.nginx.org/nginx/changeset/e3ba4026c02d2c1810fd6f2cecf499fc39dde5ee/nginx/src/event/ngx_event_openssl.c
> [7] https://github.com/openssl/openssl/issues/8068
> [8] https://twitter.com/__subodh/status/1085642001595265024

I think we should remediate the reported issues in the 1.1.1b release.
We should probably clear the keyUpdate count when sufficient application
data has been received from the peer, where "sufficient" could be as little
as 1 byte, or could be something more reasonable (say 1MB, allowing for
up to 32 rekeys per MB, which is plenty).

As for applications mishandling "SSL_CB_HANDSHAKE_START", I'm not quite
sure what to do there, but perhaps we could define a new event for
keyUpdates that does not mislead applications into assuming a new
"handshake".
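
In the meantime the application-side workaround is to qualify the
callback by protocol version, roughly as sketched below
(abort_renegotiation() is a hypothetical application hook):

    #include <openssl/ssl.h>

    extern void abort_renegotiation(const SSL *s);  /* hypothetical hook */

    static void info_cb(const SSL *s, int where, int ret)
    {
        if ((where & SSL_CB_HANDSHAKE_START) != 0) {
            /* With TLS 1.3 this also fires for post-handshake messages
             * such as KeyUpdate, so only treat it as a renegotiation
             * attempt on earlier protocol versions. */
            if (SSL_version(s) != TLS1_3_VERSION)
                abort_renegotiation(s);
        }
    }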

-- 
Viktor.


[openssl-project] Sanity check understanding of automatic module initialization?

2018-12-30 Thread Viktor Dukhovni
With automatic library initialization in OpenSSL 1.1.0 and later,
settings from the system-wide "openssl.cnf" file are automatically
loaded and may in turn cause various "modules" to be initialized.

For example, with:

  openssl.conf:
openssl_conf= system-wide-modules
#
[system-wide-modules]
ssl_conf= system-wide-ssl
#
[system-wide-ssl]
system_default  = ssl-defaults
#
[ssl-defaults]
MinProtocol = TLSv1.2
...

the settings in the "ssl-defaults" section will be loaded into memory,
and will be applied to every SSL_CTX() via:

SSL_CTX_new() ->
ssl_ctx_system_config() ->
ssl_do_config() ->
conf_ssl_get() ... SSL_CONF_cmd()

Any settings loaded via SSL_CTX_config() are in addition to the
above, and do not necessarily override some of the implicit
defaults.

Looking at the code, it seems that the only way to make sure that
the application is not affected by unexpected system-wide settings,
is to load an alternative configuration file, via:

CONF_modules_load_file()

making sure that the file contains at least one profile in the
"ssl_conf" module section, whose section (to avoid errors) requires
at least one setting (empty sections should IMHO be tolerated, but
currently raise errors).  For example, it seems that the below will
suffice to avoid inheriting any settings from the default system-wide
openssl.cnf file:

  openssl.conf:
myapp   = myapp-modules
#
[myapp-modules]
ssl_conf= myapp-ssl-module
#
[myapp-ssl-module]
bogus-profile   = bogus-ssl-settings
#
[bogus-ssl-settings]
MinProtocol = TLSv1.0

If the above is wrong or missing key details, please let me know.
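
For reference, a minimal sketch of loading such a file explicitly (the
path is illustrative; "myapp" is the appname key from the example
above):

    #include <openssl/conf.h>

    /* Load an application-specific configuration instead of inheriting
     * the system-wide openssl.cnf defaults. */
    static int myapp_load_config(void)
    {
        return CONF_modules_load_file("/etc/myapp/openssl.conf",
                                      "myapp", 0) > 0;
    }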

Beyond the sanity check, it seems to me that some of the "big picture"
is missing from the documentation.  We have descriptions of pieces
of the API, but discussion of the interaction with automatic
initialization and how all the pieces fit together seems to be
missing.  The docs seem to date back to 1.0.2, and the changes in
1.1.0 are not generally properly reflected.

This would be good to address.

-- 
Viktor.


Re: [openssl-project] Release scheduling

2018-11-14 Thread Viktor Dukhovni
On Wed, Nov 14, 2018 at 01:27:17PM +, Matt Caswell wrote:

> There are now no open PRs/issues with the 1.1.1a milestone so I think we 
> should
> go ahead and do a release. The question is when? I propose next Tuesday 
> (20th),
> with releases of 1.1.0 and 1.0.2 on the same day. It's been a while since they
> last had releases so I think its worthwhile doing them at the same time.
> 
> Thoughts?

Yes, proceed to release.

-- 
Viktor.


Re: [openssl-project] FYI: [postfix & TLS1.3 problems]

2018-10-15 Thread Viktor Dukhovni
On Mon, Oct 15, 2018 at 06:56:06PM +0100, Matt Caswell wrote:

> > What do you make of the
> > idea of making it possible for servers to accept downgrades (to some
> > floor protocol version or all supported versions)?
> 
> I'm really not keen on that idea at all.

I understand the healthy skepticism, but it may be worthwhile to keep
in mind that for SMTP the consequence of not accepting fallback to
TLS 1.2, is accepting fallback to cleartext!  So protocol downgrade
protection looks somewhat silly.

The only counter-argument I can think of is that some clients in
fact do mandatory authenticated TLS (e.g. with DANE, MTA-STS or
local policy), and they will not fall back to cleartext.  On the
other hand, no MTA I know of attempts (valid) browser-style
protocol fallback after a connection failure.  So the clients that
insist on security (Postfix, Exim, ...) just defer the mail when
the TLS handshake fails.

In the SMTP ecosystem enforcing FALLBACK_SCSV is pretty much
counter-productive (only reduces security to cleartext for opportunistic
clients, and does not at all help non-opportunistic clients get
through to servers that don't support TLS 1.3, and fail the handshake
if you try).

-- 
Viktor.


Re: [openssl-project] FYI: [postfix & TLS1.3 problems]

2018-10-15 Thread Viktor Dukhovni



> On Oct 15, 2018, at 9:19 AM, Matt Caswell  wrote:
> 
>> Early, partial reports of the cause seem to indicate that the sending
>> side was using OpenSSL with:
>> 
>>  SSL_CTX_set_mode(ctx, SSL_MODE_SEND_FALLBACK_SCSV);
>> 
>> seemingly despite no prior handshake failure,
> 
> Are you sure about the "no prior handshake failure" bit? If they were
> using pre6 or below then if they attempt TLSv1.3 first it will fail
> (incorrectly - it should negotiation TLSv1.2 see issue 7315). The
> fallback to TLSv1.2 with SSL_MODE_SEND_FALLBACK_SCSV set would then be
> reasonable.

No, not sure at all, but that's what the receiving system administrator
tells me the sending system administrator told him.  Perhaps they failed
to understand the docs, and always set the fallback bit.  MTAs tend
not to do complex fallback: they just send in the clear if opportunistic
TLS fails, or try again later and hope things work out better then.

I've not yet received further corroboration.  What do you make of the
idea of making it possible for servers to accept downgrades (to some
floor protocol version or all supported versions)?

-- 
Viktor.



Re: [openssl-project] FYI: [postfix & TLS1.3 problems]

2018-10-12 Thread Viktor Dukhovni
On Thu, Oct 11, 2018 at 07:03:21PM -0500, Benjamin Kaduk wrote:

> I would guess that the misbehaving clients are early openssl betas
> that receive the real TLS 1.3 version and then try to interpret
> as whatever draft version they actually implement.

Early, partial reports of the cause seem to indicate that the sending
side was using OpenSSL with:

SSL_CTX_set_mode(ctx, SSL_MODE_SEND_FALLBACK_SCSV);

seemingly despite no prior handshake failure; this is of course
fatally wrong.  But my question remains: should/could we provide a
control that ignores fallback signals from clients, and keeps going?
Either regardless of the resulting protocol version, or perhaps if
it is at least some acceptable floor?

That way, applications like MTAs that do opportunistic TLS, could
keep going with TLS 1.2, since failing to negotiate TLS will typically
result in downgrade to cleartext, rather than protection from TLS
version downgrades.  Such a mechanism might also make it possible
to support connections from a small fraction of broken clients,
without disabling TLS 1.3 globally.

-- 
Viktor.


[openssl-project] FYI: [postfix & TLS1.3 problems]

2018-10-11 Thread Viktor Dukhovni

Apparently, some SMTP clients set fallback_scsv when doing TLS 1.2
with Postfix servers using OpenSSL 1.1.1.  Not yet clear whether
they tried TLS 1.3 first and failed, or just sent the SCSV out of
the blue...

See attached.  If this is a common problem, it might be useful to
have a control that tolerates "downgrade" to TLS 1.2, without
disabling TLS 1.3 support.  In many cases, especially opportunistic
security, where STARTTLS can be stripped by an MiTM entirely and we
often can't even prevent downgrades to cleartext, TLS 1.2 is quite
good enough.

-- 
Viktor.
--- Begin Message ---
On Thu, Oct 11, 2018 at 05:54:59PM +0200, A. Schulze wrote:

> today I noticed a significant amount of TLS failures in my postfix log.
> 
> Oct 11 17:43:35 mta postfix/smtpd[23847]: SSL_accept error from  
> client.example[192.0.2.25]:34152: -1
> 
> I traced some sessions and found the problematic client is announcing  
> the special cipher "TLS_FALLBACK_SCSV"
> in a TLSv1.2 ClientHello message. Now, as my server supports TLSv1.3,
> my SSL library (openssl-1.1.1) assumes a downgrade attack and closes the
> connection with an SSL error message "inappropriate fallback"
> 
> The core issue is a client with a nonconforming TLS implementation.

Any idea what software these clients are running?  Are they at all
likely to fix this any time soon?

> To circumvent the problem I tried to disable TLS1.3 on my server by setting
> smtpd_tls_protocols = !SSLv2,!SSLv3,!TLSv1.3
> 
> But that does not help.
> The client still fails and delivers the message by falling back to plain text :-/
> 
> The only option to force encrypted traffic again would be a library  
> downgrade on my side.
> Any other suggestions?

Support for OpenSSL 1.1.1 and TLS 1.3 is on the list of fixes slated
for Postfix 3.4, and some may then be backported to patch levels
of earlier releases.

In the meantime, try:

tls_ssl_options = 0x2000

which corresponds to SSL_OP_NO_TLSv1_3.  I am not aware of any
method to accept the "downgrade" to TLS 1.2 without disabling TLS
1.3 for clients that do have correct implementations.

-- 
Viktor.
--- End Message ---

Re: [openssl-project] Release strategy updates & other policies

2018-09-26 Thread Viktor Dukhovni



> On Sep 25, 2018, at 9:51 AM, Matt Caswell  wrote:
> 
> 5.0.0
> 5.0.1 (bug fix)
> 5.1.0 (add accessor)
>   6.0.0 (new feature)
>   6.0.1 (bug fix)
> 5.1.1 (bug fix)        6.0.2 (bug fix)
> 5.2.1 (add accessor)
>   6.1.0 (add accessor)

Previously, we could add non-trivial features in "z+1" of "x.y.z",
with a stable ABI moving forward from "x.y.z" to "x.y.(z+1)".

Thus, e.g. 1.1.1 is a feature evolution of 1.1.0.  With this, we seem
to lose the ability to produce a manifestly (forward) ABI-compatible
feature release, that's a drop-in replacement for a previous release.

I would have expected to have 5.1.x as an ABI compatible upgrade of
5.0 with non-trivial new features.

The trivial API updates in stable releases (new accessors for forward
compatibility, ...) would go into the micro version along with the
bug fixes, and should be made for the same reason.

In the case of new accessors, their visibility should be conditioned
on the user defining a suitable macro to make them visible.  Their
purpose is to facilitate compiling code that's forward-ported
to a later release, but needs to still work with the stable
release.  Otherwise, there really should be no feature changes
in stable releases.
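
Something along these lines, where the guard macro name is purely
illustrative (not an actual OpenSSL macro):

    #include <openssl/x509.h>

    /* Sketch: a back-ported accessor (hypothetical name) that is only
     * visible when the application explicitly opts in. */
    #ifdef OPENSSL_BACKPORT_ACCESSORS
    const ASN1_TIME *X509_get0_example_field(const X509 *x);
    #endif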

So, Matt, we're not on the same page just yet...

-- 
Viktor.



Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Viktor Dukhovni



> On Sep 22, 2018, at 12:50 AM, Tim Hudson  wrote:
> 
> The impact of the breaking change on anyone actually following our documented 
> encoding cannot.
> i.e. openssh as one example Richard pointed out.

The only use of OPENSSL_VERSION_NUMBER bits in OpenSSH (which is not
yet ported to 1.1.x upstream BTW, so hardly relevant really) is:

ssh_compatible_openssl(long headerver, long libver)
{
    long mask, hfix, lfix;

    /* exact match is always OK */
    if (headerver == libver)
        return 1;

    /* for versions < 1.0.0, major,minor,fix,status must match */
    if (headerver < 0x1000000f) {
        mask = 0xfffff00fL; /* major,minor,fix,status */
        return (headerver & mask) == (libver & mask);
    }

    /*
     * For versions >= 1.0.0, major,minor,status must match and library
     * fix version must be equal to or newer than the header.
     */
    mask = 0xfff0000fL; /* major,minor,status */
    hfix = (headerver & 0x000ff000) >> 12;
    lfix = (libver & 0x000ff000) >> 12;
    if ((headerver & mask) == (libver & mask) && lfix >= hfix)
        return 1;
    return 0;
}

All other uses treat it as a simple ordinal.  In the above function they expect
stability of the ABI for matching first three nibbles and release
status.  Which makes a case for Richard's encoding scheme as being
more compatible with one of the more prominent applications that depends
on the encoding.

The proposal to move the minor version into nibbles 2 and 3 breaks this
OpenSSH function.

-- 
Viktor.



Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Viktor Dukhovni



> On Sep 22, 2018, at 12:59 AM, Richard Levitte  wrote:
> 
> So in summary, do we agree on this, and that it's a good path forward?
> 
> - semantic versioning scheme good, we should adopt it.
> - we need to agree on how to translate that in code.
> - we need to stop fighting about history.

Yes.

-- 
Viktor.



Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Viktor Dukhovni



> On Sep 21, 2018, at 4:55 PM, Richard Levitte  wrote:
> 
> I think we need to get rid of OPENSSL_VERSION_NUMBER.

Absolutely not.  That's the one thing we must keep, and keep monotone
so that applications continue to compile and build.

Sure we need to communicate our ABI/API stability policies, and publish
release numbers and all that, but the API must provide a sensible
backwards compatible macro for testing whether the compile-time
version exceeds various threshold values.

We should also provide various helper macros, but NOT remove the
legacy encoded number.  That's a non-starter.

-- 
Viktor.



Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Viktor Dukhovni



> On Sep 21, 2018, at 12:14 PM, Matt Caswell  wrote:
> 
> I support Richard's proposal with an epoch of 1.
> Grudgingly I would accept an epoch in the 3-8 range.
> I would oppose an epoch of 2.

I can live with that, though it might mean that a minority of
applications will interpret (based on obsolete nibble extraction)
that OpenSSL 2.0.0 (0x1020000FUL) is actually OpenSSL 1.2.0, but
that's probably harmless.  It does mean we might never reclaim the
version numbers with 0x2 for the high nibble.

-- 
Viktor.



Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Viktor Dukhovni
> On Sep 21, 2018, at 11:56 AM, Tim Hudson  wrote:
> 
> What I was suggesting is that we don't need to break the current encoding at 
> all.

Not changing the encoding has a downside.

 * The bits that represent ABI stability would shift from the
   2nd/3rd nibbles to just the first nibble.

 * We lose the option of changing the encoding in the future unless
   we start requiring 64-bit longs.

 * We end up with 3 status nibbles, two of which some applications
   may misinterpret as holding a patch level.

On the whole maintaining the current placement of the major number in
the encoding makes it less, not more, natural for holding the new semantic
versions in a backwards-compatible way.

I think I've said everything I have to say on this topic.  So I'll stop
for now.  I continue to support Richard's proposal, but with an epoch
smaller than 8.

-- 
Viktor.



Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Viktor Dukhovni



> On Sep 21, 2018, at 11:40 AM, Tim Hudson  wrote:
> 
> That is something I wouldn't suggest makes sense as an approach - to change 
> the tarfile name and leave all the internals the same achieves nothing.

And that's not the proposal.  The proposal is that the new major number
is 2 (or 3 if someone sees a compelling need to skip over LibreSSL),
changing not just the marketing name but also the internal version.

The *integer* representing the combined major.minor.micro (+ status)
is not a data structure.  Its encoding has changed multiple times
over the years, and will now change once more, in a way that preserves
order.  That's all.  We'll just need to update:

  https://www.openssl.org/docs/manmaster/man3/OPENSSL_VERSION_NUMBER.html

to indicate a new encoding as of 0x20000000UL and up.

-- 
Viktor.



Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Viktor Dukhovni



> On Sep 21, 2018, at 11:27 AM, Tim Hudson  wrote:
> 
> No it isn't - as you note that isn't a valid mapping - 1.0 isn't a semantic 
> version and there is no such thing as a fix number,
> You get three concepts and then on top of that the pre-release and the 
> build-metadata.
> 
> Semantic versioning is about the API not the ABI.
> 
> So we could redefine what we have been telling our user all along and combine 
> our current major+minor in a new major version.
> Making 1.0.2a be 10.2.0 in semantic version terms.

Now that we've agreed on the semantic minor number, it is easy to note
that 1.0.2a's micro number is "1" not 0, since the .0 release was 1.0.2
(no letter).  As for the major number being "10", that's silly: we can
just subtract 8 and land on 2, which is the next major number value
we've not used.  There's no reason to jump to 10.

> We cannot remove the current major version number - as that concept exists
> and we have used it all along.

And it can now become "2".

> We don't just get to tell our users for the last 20+ years what we called the
> major version (which was 0 for the first half and 1 for the second half) 
> doesn't
> exist.

It still exists; it becomes 2.  And an "epoch nibble" of 0x2 in
OPENSSL_VERSION_NUMBER
maintains proper ordering.  The users see a sane versioning scheme, that is
backwards compatible.  You seem to be somewhat fixated (pardon the language,
no disrespect intended) on some unnecessary constraint.  The encoding:

    0x2MMNNFFS  (as an unsigned long, i.e. with a UL suffix)

just works to represent semantic version MM.NN.FF, with the status
nibble S == 0xF for a final release and 0x0--0xE for pre-releases.
There is no disruption.
The only change needed is a minor one in applications that actually
parse the nibbles, ... most don't, they just use the version number
as an ordinal for conditional compilation.

-- 
Viktor.



Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Viktor Dukhovni



> On Sep 21, 2018, at 11:16 AM, Viktor Dukhovni  
> wrote:
> 
> I'm afraid that's where you're simply wrong.  Ever since 1.0.0, OpenSSL
> has promised (and I think delivered) ABI stability for the *minor* version
> and feature stability (bug fixes only) for the patch letters.  Therefore,
> the semantic version number of "1.0.2a" is "1.0", its minor number is 2
> and its fix number is 1 ("a").
> 
> Now of course "1.0" is not a valid semantic version number, but fortunately,
> we've since released "1.1" and are now considering "1.2" which with semantic
> versioning, is ready to become simply "2" as the "1." prefix is not needed,
> and "2" is conveniently larger than the current "not-really major" leading 
> "1".

To clarify, in the above I said "semantic version number" a few times where
I meant to say "semantic major version number"...

-- 
Viktor.



Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Viktor Dukhovni



> On Sep 21, 2018, at 11:13 AM, Matthias St. Pierre 
>  wrote:
> 
> I like Richard's approach (with the '8' or another number) and  I don't think 
> it is
> contradicting semantic versioning. Maybe a good compromise between  your two
> opposing views would be to make the encoding irrelevant to our users by 
> introducing
> version check macros like
> 
>OPENSSL_MAKE_VERSION(maj,min,patch)   and
>OPENSSL_VERSION_AT_LEAST(maj,min)
> 
> (note: the patch level was omitted from the second macro on purpose)
> 
> which enable the application programmer to write code like
> 
> 
> #if OPENSSL_MAKE_VERSION(2,0,0) <= OPENSSL_VERSION_NUMBER
> ...
> #endif

The macros would help new software, and should be added, but existing
software should continue to work with the version-dependent bits
unmodified.  To that end, Richard's encoding does the job (modulo a
minor quibble over the high bit vs. just an epoch nibble of 0x2 or 0x3).

-- 
Viktor.



Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Viktor Dukhovni



> On Sep 21, 2018, at 11:00 AM, Tim Hudson  wrote:
> 
> If you repeat that in semantic versioning concepts just using the labels for 
> mapping you get:
> - what is the major version number - the answer is clearly "1".
> - what is the minor version number - the answer is clearly "0"
> - what is the fix version number - there is no such thing
> - what is the patch version number - the answer is clearly "2" (reusing the 
> fix version)

I'm afraid that's where you're simply wrong.  Ever since 1.0.0, OpenSSL
has promised (and I think delivered) ABI stability for the *minor* version
and feature stability (bug fixes only) for the patch letters.  Therefore,
the semantic version number of "1.0.2a" is "1.0", its minor number is 2
and its fix number is 1 ("a").

Now of course "1.0" is not a valid semantic version number, but fortunately,
we've since released "1.1" and are now considering "1.2" which with semantic
versioning, is ready to become simply "2" as the "1." prefix is not needed,
and "2" is conveniently larger than the current "not-really major" leading "1".

> Effectively the current "minor" version disappears. 

No, effectively the current "major" number disappears, with the minor
assuming the major role it already had.

-- 
Viktor.



Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Viktor Dukhovni



> On Sep 21, 2018, at 10:07 AM, Tim Hudson  wrote:
> 
> And the output you get:
> 
> 0x10102000

The trouble is that existing software expects potential ABI changes
resulting from changes in the 2nd and 3rd nibbles, and if the major
version is just in the first nibble, our minor version changes will
look like major number changes to such software.

One could take the view that software that uses the OpenSSL version number
for more than inequalities is in a state of sin, and should stop doing
that, and perhaps doing that is not typical application behaviour, but
what Richard is trying to do is embed the semantic version number in
a wider field that allows us to keep the pre-release bits (which are
useful), to have an epoch nibble for versioning the version format,
and also keep the "significance" of the existing nibbles with the
2nd/3rd nibble signalling major changes while the 4th/5th are minor
version feature additions and 6th/7th are micro fix versions.  The
8th nibble indicates dev/pre with 0xF signalling release.

This does not violate semantic versioning, if I only want to
support the *released* version of version 1.2.3, I'll test for
>= 0x?010203FUL, with "?" the epoch nibble (2 or 3).  If I
am planning to test pre-release features I can compare with
>= 0x?0102030UL.

We might not have done it this way if this were the first
ever release of OpenSSL, but I think it is a fine proposal.

-- 
Viktor.



Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Viktor Dukhovni


> On Sep 21, 2018, at 9:35 AM, Richard Levitte  wrote:
> 
> In that case, we should probably just thrown away
> OPENSSL_VERSION_NUMBER.

Sorry, that must not happen, and there's no need.  My sense is that Tim
may end up "in the rough" on this issue, so unless there's more evidence
of support for his "strict" interpretation, I don't think you need to
consider any radical changes as yet.

Just set the high nibble to 2 for backwards compatibility, and as long as
we stick with semantic versioning, we never change it (at least not until
OpenSSL major number 256).  The "epoch" nibble then signifies the
version of the versioning scheme, and as long as we're doing semantic
versioning and the major number is 255 or less, it does not change.

If we wanted to make a concession to coëxistence with LibreSSL (which I
do not), we could go with an initial epoch of "3" rather than 2.

Personally, I think that making it clear that OPENSSL_VERSION_NUMBER
is a name in the OpenSSL and not LibreSSL namespace is a good thing.
LibreSSL should have stayed with "1.0.2" and encoded their version
elsewhere.  Their squatting on "2.x.y" is their fault, and I don't
see any need to make concessions to avoid "conflicts".

Software that wants to be compatible with both, and wants to use the
version number to select implementations of various features, needs
to make LibreSSL-specific choices only when some LibreSSL-specific
macro indicates that it is compiled with LibreSSL.

-- 
Viktor.


Re: [openssl-project] A proposal for an updated OpenSSL version scheme (v2)

2018-09-21 Thread Viktor Dukhovni
I support Richard's numeric scheme as proposed, with one small change.

I think that setting the epoch to 8 to set the high bit is neither
necessary nor wise.  Though the numeric version constant should be
manifestly unsigned (UL suffix not L), and having the top 3 nibbles
for the "effective" major version maximizes compatibility with prior
practice for most users, making the number negative if treated as
signed is not a good idea at this time.

Since we're making a change to semantic versioning, I'd bump the
epoch to 2.  This means that some (not common) software that
reconstructs the version string from the numeric constant (e.g.
Postfix when warning about run-time vs. compile-time mismatch)
gets something vaguely sensible:

0x202FL  -> "2.2.0" (rather than "2.0.0")

I'll have to adjust Postfix to produce better version mismatch
reports (which really should not happen since the SONAME prevents
running with an incompatible shared library, so perhaps remove the
check, but Wietse may be difficult to convince, my problem...)
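
For what it's worth, decoding the proposed layout (epoch nibble, then two
nibbles each for major, minor and patch, then the status nibble) is
straightforward; a sketch, with a made-up helper name:

    #include <stdio.h>

    static void ossl_version_text(unsigned long v, char *buf, size_t len)
    {
        unsigned int major = (unsigned int)((v >> 20) & 0xff);
        unsigned int minor = (unsigned int)((v >> 12) & 0xff);
        unsigned int patch = (unsigned int)((v >> 4) & 0xff);

        snprintf(buf, len, "%u.%u.%u", major, minor, patch);
    }

Legacy code that instead treats the top nibble as the major version and
the next two nibbles as the minor is presumably what yields the "vaguely
sensible" strings above.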

> On Sep 21, 2018, at 5:58 AM, Richard Levitte  wrote:
> 
> I've worked on a proposal for an update of the OpenSSL version scheme,
> to basically align ourselves with semantic versioning, most of all in
> presentation, but also in spirit, while retaining numeric
> compatibility.  This proposal is currently in the form of a Google
> Document:
> 
> https://docs.google.com/document/d/1H3HSLxHzg7Ca3WQEU-zsOd1lUP_uJieHw5Dae34KPBs/

-- 
Viktor.



Re: [openssl-project] Release Criteria Update

2018-09-06 Thread Viktor Dukhovni



> On Sep 6, 2018, at 6:25 PM, Matt Caswell  wrote:
> 
> I'm not keen on that. What do others think?

No objections to issuing a release.  We're unlikely to have to change the
API/ABI or feature set based on further beta feedback.  Any late bugs can
be fixed in 1.1.1a, and unless they trigger CVEs, there's no compelling
reason to wait.  Barring specific concerns, I am not opposed to release
as planned.

-- 
Viktor.



Re: [openssl-project] Inappropriate fallback triggered when "holes" in client protocol list indirectly exclude TLSv1.3

2018-08-15 Thread Viktor Dukhovni



> On Aug 15, 2018, at 11:50 AM, Matt Caswell  wrote:
>> 
>> I think this counts as a regression, the client should notice that
>> it implicitly disabled TLS 1.3, and therefore not react to the
>> server's version sentinel by aborting the connection.  Thoughts?
>> 
> 
> Hmm. Yes we should probably handle this scenario. Can you open a github
> issue?

https://github.com/openssl/openssl/issues/6964

-- 
Viktor.



[openssl-project] Inappropriate fallback triggered when "holes" in client protocol list indirectly exclude TLSv1.3

2018-08-15 Thread Viktor Dukhovni
When I configure a client with a legacy TLS 1.2 protocol exclusion,
e.g. by setting SSL_OP_NO_TLSv1_2 (rather than the new min/max
version interface), then as a result of the new TLS 1.3 protocol
support, configurations that previously negotiated "up to" TLS 1.1
now fail when communicating with a TLS 1.3 server:

  $ posttls-finger -c -p '!TLSv1.2' "[127.0.0.1]"
  posttls-finger: SSL_connect error to 127.0.0.1[127.0.0.1]:25: -1
  posttls-finger: warning: TLS library problem: error:1425F175:SSL 
routines:ssl_choose_client_version:inappropriate 
fallback:../openssl/ssl/statem/statem_lib.c:1939:

If I then also explicitly disable "TLSv1.3" the connection succeeds:

  $ posttls-finger -c -lmay -Lsummary -p '!TLSv1.2:!TLSv1.3' "[127.0.0.1]"
  posttls-finger: Anonymous TLS connection established to 
127.0.0.1[127.0.0.1]:25: TLSv1.1 with cipher AECDH-AES256-SHA (256/256 bits)

I think this counts as a regression, the client should notice that
it implicitly disabled TLS 1.3, and therefore not react to the
server's version sentinel by aborting the connection.  Thoughts?
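
For reference, the sort of client setup that creates such a protocol
"hole" (a sketch, not the actual posttls-finger code; the function name
is made up):

    #include <openssl/ssl.h>

    static SSL_CTX *hole_client_ctx(void)
    {
        SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());

        if (ctx == NULL)
            return NULL;
        /*
         * Legacy-style exclusion of TLSv1.2 only; TLSv1.3 stays implicitly
         * enabled, leaving a hole at TLSv1.2 in the enabled protocol list.
         */
        SSL_CTX_set_options(ctx, SSL_OP_NO_TLSv1_2);
        return ctx;
    }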

-- 
Viktor.



Re: [openssl-project] Releases tomorrow

2018-08-13 Thread Viktor Dukhovni
On Mon, Aug 13, 2018 at 03:07:02PM +0100, Matt Caswell wrote:

> - I think the ca app usability improvements for EdDSA (PR 6901) should
> go in (I have initial approval, awaiting a reconfirm from Viktor)

I already did that.  Go ahead and merge.

-- 
Viktor.


[openssl-project] ABI change tracking

2018-08-11 Thread Viktor Dukhovni


I just ran into: https://abi-laboratory.pro/index.php?view=timeline=openssl

Perhaps you've all seen this site already, but if not, enjoy!

FWIW, OpenSSL 1.1.1/master are looking fine.  Just some SM2-related
symbol churn, which does not affect the stable ABI.

-- 
Viktor.



Re: [openssl-project] FW: Certificate fractional time processing in upcoming openssl releases

2018-08-11 Thread Viktor Dukhovni
On Sat, Aug 11, 2018 at 01:50:07PM +, Salz, Rich wrote:

> FYI.  Quietly ignoring fractional seconds makes sense to me.

Ditto.

-- 
Viktor.


Re: [openssl-project] Removal of NULL checks

2018-08-09 Thread Viktor Dukhovni
On Thu, Aug 09, 2018 at 02:23:07PM -0700, Paul Dale wrote:

> > Real code often doesn't check return values.  Even ours. :(
> 
> Could we consider adding a lot more __owur tags to functions to encourage 
> this?
> 
> As an API change it would have to wait for a major release.

This is sometimes a good idea, for sufficiently important functions.
This sort of change generates compiler warnings, and is easily
addressed without breaking compatibility with older library versions.

We should not overuse __owur in marginal cases.
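
For illustration (names are hypothetical, and this assumes a build where
__owur actually expands to the compiler's warn_unused_result attribute):

    /* In a public header: */
    __owur int important_operation(SSL *ssl);

    /* A caller that silently drops the result now draws a compiler warning: */
    important_operation(ssl);

    /* while a checking caller compiles cleanly: */
    if (important_operation(ssl) <= 0)
        handle_error();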

-- 
Viktor.


Re: [openssl-project] Removal of NULL checks

2018-08-09 Thread Viktor Dukhovni
On Thu, Aug 09, 2018 at 07:12:18PM +0200, Richard Levitte wrote:

> viktor>   X509 *x;
> viktor>   STACK_OF(X509) *s;
> viktor> 
> viktor>   ...
> viktor>   /* Allocate 's' and initialize with x as first element */
> viktor>   if (sk_X509_push(s = sk_X509_new(NULL), x) < 0) {
> viktor>   /* error */
> viktor>   }
> 
> I would regard that code incorrectly written, because it doesn't check
> the value returned from sk_X509_new(NULL) (i.e. it doesn't properly
> check for possible errors).  Correctly written code would be written
> like this:

It is correctly written *given* the existing NULL checks, and the
fact that our API is under-documented.
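
Spelled out with the explicit allocation check, the difference is just
this (same error test on the push as in the example above):

    X509 *x;
    STACK_OF(X509) *s;

    ...
    if ((s = sk_X509_new(NULL)) == NULL) {
        /* allocation error */
    } else if (sk_X509_push(s, x) < 0) {
        /* push error */
    }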

> However, if we actually want people to be able not to check if the
> stack they wanted to allocate actually got allocated, the correct
> course of action would be to make that a defined behaviour, i.e. fix
> the docs accordingly.

Yes, we should document the existing behaviour in preference to
changing it.  Changing the behaviour of existing functions should
require a compelling reason to do that.

-- 
Viktor.


Re: [openssl-project] Removal of NULL checks

2018-08-09 Thread Viktor Dukhovni
On Thu, Aug 09, 2018 at 05:53:11PM +0200, Richard Levitte wrote:

> I think we need to be a bit more nuanced in our views.  Bug fixes are
> potentially behaviour changes (for example, I recently got through a
> PR that added a stricter check of EVP_PKEY_asn1_new() input; see #6880
> (*)).

We went too far too quickly in the transition from 1.0.2 to 1.1.0,
e.g. by needlessly renaming some functions without providing the
legacy names (even as deprecated aliases) and by not adding to 1.0.2
the new accessors required for compatibility with 1.1.0.  Mostly
that could have been done (and could still be done) via new macros
in headers that add 1.1.0 accessors to 1.0.2:

#if OPENSSL_API_COMPAT >= 0x1010UL
#define EVP_MD_CTX_new() EVP_MD_CTX_create()
...
#endif

As a result many applications that need to support both 1.0.2 and
1.1.0 (whichever is available) had to waste effort to create the
requisite #ifdefs, wrapper functions, ...
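
Typically something along these lines (a sketch; EVP_MD_CTX_create() and
EVP_MD_CTX_destroy() are the 1.0.2 names of the 1.1.0 EVP_MD_CTX_new()
and EVP_MD_CTX_free()):

    /* Application-side shim for building against both 1.0.2 and 1.1.0: */
    #include <openssl/evp.h>

    #if OPENSSL_VERSION_NUMBER < 0x10100000L
    # define EVP_MD_CTX_new()       EVP_MD_CTX_create()
    # define EVP_MD_CTX_free(ctx)   EVP_MD_CTX_destroy(ctx)
    #endif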

If we keep doing that, everyone will be using LibreSSL or another
alternative.  We must not casually change APIs.  Especially because
of our documentation deficit, which results in users learning about
our interfaces via experimentation or reading the source.

If we must change an interface, and *can* do it by introducing a
new function (that we adequately document), that must be the way
forward.  And *furthermore*, we can't remove the deprecated interface
until the new function has been in multiple stable releases.  Indeed
to promote adoption, such new functions (when simple enough) should
be considered for inclusion in the extant stable releases, making
it easy to migrate from old to new.

> "But this is how it has worked so far!"  Yeah?  Still undefined behaviour.

Blaming the user for changes in undefined behaviour does not get
us more happy users.

> I think we're doing ourselves a disservice if we get too stuck by
> behaviour that can only be reasonably derived by reading the source
> code rather than the docs.

I think we're doing our users and ourselves a disservice if we're
too casual about API changes.  We can only get away with major
incompatibilities like those between 1.0.2 and 1.1.0 once.  If we
keep doing that, we'll lose the application base.

> So there's a choice, and if we accept that NULL is valid input the the
> safestack functions, we should document it.  If not, then sk == NULL
> is still mostly undefined, and crashes are therefore as expected as
> anything else.

If the functions previously returned an error, they must continue to
do that barring overwhelming reasons to make a change.

> However, caution isn't a bad thing.  I think that as part of a minor
> version upgrade, removing existing NULL checks may be a bit rad.
> However, I'd say that for the next major version, we're free to change
> an undefined behaviour to something more well defined, as we
> see fit.

No, we need a greater emphasis on backwards compatibility, and
introduce API changes more slowly, over multiple releases that carry
old and new APIs, and we must not change the behaviour of existing
functions without renaming them, except when the current behaviour
is clearly a bug.

It needs to be possible to recompile and run without auditing code.
The worst kind of incompatibilities are those that are not reported
by the compiler, and are only found at runtime, possibly under unusual
conditions.

-- 
Viktor.


Re: [openssl-project] Removal of NULL checks

2018-08-09 Thread Viktor Dukhovni



> On Aug 9, 2018, at 9:49 AM, Salz, Rich  wrote:
> 
> This is another reason why I am opposed to NULL checks.

Whether one's for them, or against them, removing a check
from a function that would formerly return an error and
making it crash is a substantial API change.  We must
avoid API changes whenever we can.  We can introduce
new functions and gradually deprecate the old over
a number (at least 3 IMHO) major release cycles, but
what we MUST NOT do is just change the API of an
existing function.

-- 
Viktor.



[openssl-project] EdDSA and "default_md"?

2018-08-08 Thread Viktor Dukhovni
Don't know whether everyone here also reads openssl-users, so to recap,
Robert Moskowitz  reports considerable frustration
as a result of "default_md = sha256" being incompatible with Ed25519
(and Ed448).  He's working around this with "-md null" sprinkled about
liberally, but it is not especially intutive.

What should we do here?  Perhaps we need a "default_md = default" that
picks a sensible default for each key algorithm (sha256 typically,
but "null" for EdDSA)?  Or ignore "default_md" with EdDSA, or ???

-- 
Viktor.



Re: [openssl-project] Removal of NULL checks

2018-08-08 Thread Viktor Dukhovni



> On Aug 8, 2018, at 6:19 AM, Tim Hudson  wrote:
> 
> However in the context of removing such checks - that we should not be doing 
> - the behaviour of the APIs in this area should not be changed

Should not be changed period.  Even across major release boundaries.
This is not an ABI compatibility issue, it is a source compatibility
issue, and should avoided all the time.  If we want to write a *new*
function that skips the NULL checks it gets a new name.

-- 
Viktor.



Re: [openssl-project] To use or not use the iconv API, and to use or not use other libraries

2018-06-12 Thread Viktor Dukhovni



> On Jun 12, 2018, at 6:56 PM, Richard Levitte  wrote:
> 
> Some implementations of the iconv library take the empty string as
> the locale-specific encoding, but that is in no way universal, and
> isn't specified in the standard:
> 
> http://pubs.opengroup.org/onlinepubs/009695399/functions/iconv_open.html
> 
> Using nl_langinfo() to get the locale-specific encoding will, as far
> as I know, always get you what you expect.

On FreeBSD, after (required) calling:

setlocale(LC_CTYPE, "");

The nl_langinfo(CODESET) call returns the correct charset for my
UTF-8 terminal emulator, for which my environment has:

LC_CTYPE=en_US.UTF-8

With that, iconv_open() and iconv() behave correctly converting
to/from ISO-8859-1 and UTF-8 (minimal tests).  Without the
setlocale() call, my encoding is always US-ASCII, and iconv
is naturally crippled.
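
A minimal test program showing the difference (sketch):

    #include <iconv.h>
    #include <langinfo.h>
    #include <locale.h>
    #include <stdio.h>

    int main(void)
    {
        printf("before setlocale: %s\n", nl_langinfo(CODESET)); /* US-ASCII here */
        setlocale(LC_CTYPE, "");
        printf("after  setlocale: %s\n", nl_langinfo(CODESET)); /* UTF-8 here */

        /* locale charset -> UTF-8 converter for pass phrase input */
        iconv_t cd = iconv_open("UTF-8", nl_langinfo(CODESET));

        if (cd == (iconv_t)-1)
            perror("iconv_open");
        else
            iconv_close(cd);
        return 0;
    }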

-- 
Viktor.



Re: [openssl-project] To use or not use the iconv API, and to use or not use other libraries

2018-06-12 Thread Viktor Dukhovni



> On Jun 12, 2018, at 3:39 PM, Richard Levitte  wrote:
> 
>> The flags I'd like to see are:
>> 
>>   -latin1:  Passphrase is a stream of octets, each of which is a single 
>> unicode
>> character in the range 0-255.
> 
> I would prefer to call it -binary or something like that...  it
> certainly comes down to the same thing in practice, and should
> translate exactly to the pre-1.1.0 behaviour.

I won't quibble over the name.

> 
>>   -utf8:Passphrase is already utf-8 encoded
>> 
>>   -ascii:   Passphrase must be ASCII, reject inadvertent 8-bit input.
> 
> ... and if none of these are given?

Not sure.  We could opt for "-binary" by default, which is backwards
compatible, but it produces non-standard outputs, which is a disservice
to new users.  We could go with "-ascii" as a default, forcing failure
for non-ascii passwords without an explicit indication of encoding.
The second seems more appealing to me.

>> And as available:
>> 
>>   -toutf8:   Convert passphrase from the input encoding to UTF-8.
>>   Either using the locale-specific encoding, or yet
>>  another flag:
>> 
>>   -encoding: A platform-specific name for the input encoding understood
>>  by the system's encoding conversion library (iconv on Unix).
> 
> If the availability of -toutf8 depends on the presumed presence of
> iconv(), then we can assume that nl_langinfo() is present as well.
> That renders -encoding unnecessary, unless you want to use it to
> override the locale-specific encoding.

The purpose is specifically to override the encoding when it is wrong
for some reason.  The iconv library takes the empty string as the
locale-specific encoding, so we should not need nl_langinfo(), unless
that's known to produce better results.

-- 
Viktor.



Re: [openssl-project] To use or not use the iconv API, and to use or not use other libraries

2018-06-12 Thread Viktor Dukhovni



> On Jun 7, 2018, at 3:40 PM, Salz, Rich  wrote:
> 
> I think you forgot that this is not what I suggested.  One flag indicates 
> it's utf-8 encoded, don't touch it.  The other flag indicates it might have 
> high-bit chars, don't touch it.

The flags I'd like to see are:

  -latin1:  Passphrase is a stream of octets, each of which is a single unicode
character in the range 0-255.

  -utf8:Passphrase is already utf-8 encoded

  -ascii:   Passphrase must be ASCII, reject inadvertent 8-bit input.

And as available:

  -toutf8:   Convert passphrase from the input encoding to UTF-8.
 Either using the locale-specific encoding, or yet
 another flag:

  -encoding: A platform-specific name for the input encoding understood
 by the system's encoding conversion library (iconv on Unix).

None of these flags change semantics after introduction.

-- 
Viktor.



Re: [openssl-project] To use or not use the iconv API, and to use or not use other libraries

2018-06-07 Thread Viktor Dukhovni



> On Jun 7, 2018, at 3:59 PM, Salz, Rich  wrote:
> 
> If B<-pass8bit> is given, the password is taken to be encoded in the current
> locale, but is still used directly.
> A future release might automatically convert the password to valid UTF-8
> encoding if this flag is given.

I would propose that "-pass8bit" means that each byte of the input is
a unicode code point value (i.e. ASCII or LATIN1 supplement) and we'll
convert to UCS-2 by prepending 0x00 to each one.  If so, I would expect
this flag to NOT ever change its meaning.
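
Concretely, that conversion is trivial (a sketch; the function name is
made up):

    #include <stdlib.h>

    /* Each input byte becomes one big-endian UCS-2 (BMPString) code unit. */
    static unsigned char *latin1_to_bmp(const unsigned char *in, size_t len,
                                        size_t *outlen)
    {
        unsigned char *out = malloc(2 * len);
        size_t i;

        if (out == NULL)
            return NULL;
        for (i = 0; i < len; i++) {
            out[2 * i] = 0x00;          /* prepended high byte */
            out[2 * i + 1] = in[i];     /* the 0-255 code point itself */
        }
        *outlen = 2 * len;
        return out;
    }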

We may internally convert to UTF-8 and then to UTF-16 largely undoing
the first conversion, but that is just internal API gymnastics, not user-
observable.

-- 
Viktor.



Re: [openssl-project] Help deciding on PR 6341 (facilitate reading PKCS#12 objects in OSSL_STORE)

2018-06-06 Thread Viktor Dukhovni
https://tools.ietf.org/html/draft-mavrogiannopoulos-pkcs5-passwords-02#section-4
https://tools.ietf.org/html/draft-mavrogiannopoulos-pkcs5-passwords-02#section-5.2

> On Jun 6, 2018, at 11:23 AM, David Benjamin  wrote:
> 
> Is there a spec citation for this, or some documented experiments against 
> other implementations' behavior? (What do Microsoft and NSS do here?) I was 
> pondering something similar recently, but things do seem to point at UCS-2 
> right now. UCS-2 is indeed an unfortunate historical wart, but X.680 says:
> 
> > BMPString is a subtype of UniversalString that has its own unique tag and 
> > contains only the characters in the Basic Multilingual Plane (those 
> > corresponding to the first 64K-2 cells, less cells whose encoding is used 
> > to address characters outside the Basic Multilingual Plane) of ISO/IEC 
> > 10646.
> 
> RFC 7292 just says to use a BMPString. That doesn't suggest anyone has 
> actually updated it for UTF-16. This is fine for X.509 where BMPString is one 
> of many possible string types and folks can use UTF8String for this anyway. 
> For PKCS#12, yeah, this introduces limitations that may be worth resolving, 
> UTF-16 being the obvious fix. But if it's not in a spec, we should get it 
> into one and also be clear on if this is OpenSSL inventing a behavior or 
> following de facto behavior established elsewhere.

-- 
Viktor.



Re: [openssl-project] Help deciding on PR 6341 (facilitate reading PKCS#12 objects in OSSL_STORE)

2018-06-05 Thread Viktor Dukhovni



> On Jun 3, 2018, at 4:45 AM, Richard Levitte  wrote:
> 
> Yeah, I just learned that myself.  Somehow, I thought wchar_t would be
> Unicode characters.  So ok, with this information, UTF-8 makes
> sense...

Nico has convinced me that the mapping from UTF-8 to BMPString should
be UTF-16, which agrees with the BMP representation for code
points in the Basic Multinational Plane, but also supports surrogate
pairs for code points outside the plane, so that if someone wanted
to use "emoji" (or more traditional glyph outside the BMP) for their
password, they could.  This is a strict superset of UCS-2 and avoids
having to reject some UTF-8 codepoints.
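
The arithmetic for the non-BMP case is simple enough (a sketch; the
helper name is made up, and a real implementation would of course also
decode and validate the UTF-8 first):

    #include <stddef.h>

    /*
     * Emit one code point as big-endian UTF-16: two bytes within the BMP,
     * otherwise a four-byte surrogate pair.  Returns the bytes written.
     */
    static size_t cp_to_utf16be(unsigned long cp, unsigned char out[4])
    {
        if (cp <= 0xFFFF) {
            out[0] = (unsigned char)(cp >> 8);
            out[1] = (unsigned char)(cp & 0xFF);
            return 2;
        }
        cp -= 0x10000;                  /* 20 bits, split 10/10 over the pair */
        out[0] = (unsigned char)(0xD8 | (cp >> 18));          /* high surrogate */
        out[1] = (unsigned char)((cp >> 10) & 0xFF);
        out[2] = (unsigned char)(0xDC | ((cp >> 8) & 0x03));  /* low surrogate */
        out[3] = (unsigned char)(cp & 0xFF);
        return 4;
    }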

-- 
Viktor.



Re: [openssl-project] Help deciding on PR 6341 (facilitate reading PKCS#12 objects in OSSL_STORE)

2018-06-02 Thread Viktor Dukhovni



> On Jun 2, 2018, at 2:36 AM, Richard Levitte  wrote:
> 
>> Canonicalize when importing for use with the store API.
> 
> Yup.
> 
>> Not sure whether wchar_t though, just octet string in UTF-8 seems saner.
> 
> Dunno about that, really.  The aim, to quote David W, is to make it
> *hard* for applications to get it wrong, and we all know that an octet
> string is merely an octet string...

Octet strings are by *definition* not wide characters, they are an
opaque string of *octets* (an array of uint8).  The purpose of
wchar_t and friends is to process non-ASCII *character strings*,
with the wide versions of strlen(), strchr(), ...  We do none of
this.  We pass the opaque input to a key-derivation function that
treats it as an opaque octet-string.

> We cannot know with absolute certainty that it's UTF-8 encoded.

Indeed someone could pass us an octet string that is not derived
from the UTF-8 encoding of some actual character string entered
by a user.  That does not matter.  What matters is that all
user input is canonically encoded, in a *platform-independent*
way.  And for that the application is responsible for converting
user input to UTF-8.  If the application does not do it right,
it will get incorrect (fail to decrypt) or non-portable (fail
to decrypt in the future on other platforms) behaviour.


> The way I saw it is that UTF-8
> really means Unicode, and a way to codify that is wchar_t.

NO.  That's not the point.  UTF-8 yields a canonical encoding
of what the user typed to an opaque octet string.  That
encoding is the application's responsibility.  We must not
treat the password as a character string, that's not portable.

> openssl-users> That is the password is an opaque byte string, not a character
> openssl-users> string in the platform's encoding of i18n strings.
> 
> Here is, unfortunately, where standards differ.  PKCS#12 has a
> requirement that makes the pass phrase anything but opaque.

OK, looking at:

  https://tools.ietf.org/html/rfc7292#appendix-B.1

we see that PKCS#5 v2.1 sensibly defines passwords as opaque strings
in some unspecified standard encoding (ASCII or UTF-8 for example).

PKCS#12 however, is sadly requiring a 16-bit BMPString encoding
(instead of UTF-8), presumably for backwards compatibility.

> With that, the characters have meaning and need to be interpreted
> correctly to form a standard compliant BMPString.

Well, in that case for PKCS#12 we must require a well-formed
UTF-8 input, which we can convert to BMPString without any
need for locale-specific information.  The ASN.1 library
presumably can convert from UTF-8 to BMP, or code can be
added to do that if missing.

> (it would have been smarter to have the PKCS12 routines take wchar_t
> strings rather than char strings...  hindsight is what it is...)

No, wchar_t is not defined to be a 16-bit BMPString compatible
encoding.  It is AFAIK a platform-specific string representation
that is not canonical.

-- 
Viktor.



Re: [openssl-project] Help deciding on PR 6341 (facilitate reading PKCS#12 objects in OSSL_STORE)

2018-06-01 Thread Viktor Dukhovni



> On Jun 1, 2018, at 6:47 PM, Richard Levitte  wrote:
> 
> Ah, forgot one important detail:  it is well understood that *all*
> file based objects will get the same requirements, right?  That goes
> for anything protected through PKCS#5 as well (good ol' PEM
> encryption, PKCS#8 objects and whatever else I forget...)

Canonicalize when importing for use with the store API.  Not sure
whether wchar_t though, just octet string in UTF-8 seems saner.
That is the password is an opaque byte string, not a character
string in the platform's encoding of i18n strings.

-- 
Viktor.



Re: [openssl-project] Help deciding on PR 6341 (facilitate reading PKCS#12 objects in OSSL_STORE)

2018-06-01 Thread Viktor Dukhovni



> On Jun 1, 2018, at 6:16 PM, Richard Levitte  wrote:
> 
> (I'm currently looking into alternatives where a UI_METHOD can present
> several variants of the same pass phrase, thus making it possible for
> the application to virtually say "hey, try one of these" instead of
> "hey, try this one"...  that would be a way to have the application
> provide the variants rather than libcrypto, and still only have to
> know the bare minimum of what the URI represents (preferably nothing
> at all))

If they're using a new API with a new store abstraction, I rather
think it'd be better for the PKCS#12 data to be re-built with
a UTF-8 password before it is exposed as a store URI.

They should be able to decode the old file using legacy tooling,
but the new tools should simply require canonical data.  Please
DO NOT implement password variants.

-- 
Viktor.



Re: [openssl-project] Help deciding on PR 6341 (facilitate reading PKCS#12 objects in OSSL_STORE)

2018-06-01 Thread Viktor Dukhovni



> On Jun 1, 2018, at 5:51 PM, Kurt Roeckx  wrote:
> 
> That would then just mean that the apps need to do the correct
> thing and convert it to UTF-8.

Modulo legacy files, with a passphrase in some other encoding.
For those the applications will have to provide the right
non-UTF8 octet string, and I assume we'll just use that
verbatim.

-- 
Viktor.



Re: [openssl-project] Is Mac a supported platform?

2018-06-01 Thread Viktor Dukhovni



> On Jun 1, 2018, at 5:26 PM, Salz, Rich  wrote:
> 
> So maybe I should just create a PR to update INSTALL with the Mac recipe?

I just use:

./Configure --prefix=/some/where [options] shared darwin64-x86_64-cc

-- 
Viktor.



[openssl-project] Some failing builds in travis?

2018-05-23 Thread Viktor Dukhovni

https://travis-ci.org/openssl/openssl/jobs/382694134
https://api.travis-ci.org/v3/job/382694134/log.txt

Test Summary Report
---
../test/recipes/70-test_comp.t   (Wstat: 26624 Tests: 0 Failed: 
0)
  Non-zero exit status: 104
  Parse errors: No plan found in TAP output
../test/recipes/70-test_key_share.t  (Wstat: 26624 Tests: 0 Failed: 
0)
  Non-zero exit status: 104
  Parse errors: No plan found in TAP output
../test/recipes/70-test_sslrecords.t (Wstat: 26624 Tests: 17 
Failed: 0)
  Non-zero exit status: 104
  Parse errors: Bad plan.  You planned 18 tests but ran 17.
../test/recipes/70-test_sslsigalgs.t (Wstat: 26624 Tests: 0 Failed: 
0)
  Non-zero exit status: 104
  Parse errors: No plan found in TAP output
../test/recipes/70-test_sslsignature.t   (Wstat: 26624 Tests: 0 Failed: 
0)
  Non-zero exit status: 104
  Parse errors: No plan found in TAP output
../test/recipes/70-test_sslversions.t(Wstat: 26624 Tests: 4 Failed: 
0)
  Non-zero exit status: 104
  Parse errors: Bad plan.  You planned 7 tests but ran 4.
../test/recipes/70-test_tls13cookie.t(Wstat: 26624 Tests: 0 Failed: 
0)
  Non-zero exit status: 104
  Parse errors: No plan found in TAP output
../test/recipes/70-test_tls13kexmodes.t  (Wstat: 19712 Tests: 0 Failed: 
0)
  Non-zero exit status: 77
  Parse errors: No plan found in TAP output
../test/recipes/70-test_tls13messages.t  (Wstat: 8192 Tests: 1 Failed: 
0)
  Non-zero exit status: 32
  Parse errors: Bad plan.  You planned 16 tests but ran 1.
../test/recipes/70-test_tls13psk.t   (Wstat: 19712 Tests: 0 Failed: 
0)
  Non-zero exit status: 77
  Parse errors: No plan found in TAP output
../test/recipes/70-test_tlsextms.t   (Wstat: 26624 Tests: 9 Failed: 
0)
  Non-zero exit status: 104
  Parse errors: Bad plan.  You planned 10 tests but ran 9.
Files=147, Tests=1249, 358 wallclock secs ( 5.94 usr  1.09 sys + 287.60 cusr 
53.16 csys = 347.79 CPU)
Result: FAIL
make[1]: *** [_tests] Error 1
make[1]: Leaving directory `/home/travis/build/openssl/openssl'
make: *** [tests] Error 2
+/ MAKE TEST FAILED

-- 
Viktor.



Re: [openssl-project] build/test before merging

2018-05-22 Thread Viktor Dukhovni


> On May 22, 2018, at 8:43 PM, Salz, Rich  wrote:
> 
> So do you guys use the ghmerge script or own procedures?  I'm curious.

Good point, I've not yet had a chance to look at ghmerge and figure
out how/whether to use it.  If that continues, ... my preferences for
its implementation don't carry much weight!  [ Though some changes might
prolong my state of indifference... ]

-- 
Viktor.



Re: [openssl-project] OpenSSL 1.1.1 library(OpenSSL 1.1.0 compile) Postfix to Postfix test

2018-04-23 Thread Viktor Dukhovni


> On Apr 22, 2018, at 9:49 PM, Viktor Dukhovni <openssl-us...@dukhovni.org> 
> wrote:
> 
> - Client-side diagnostics -

On the server side I see that even when the ticket callback returns "0" to 
accept and not re-issue the ticket, a new ticket is requested anyway.  I'd like 
to be able to control this, and not issue new tickets when the present ticket 
is acceptable.  If this requires new API entry points, I can condition them on 
a suitable min library version.  But ideally the callback return value will be 
honoured, I don't yet see why we would not do that.

- Server-side diagnostics -
Initial session:


SSL_accept:before SSL initialization
SSL_accept:before SSL initialization
SSL_accept:SSLv3/TLS read client hello
SSL_accept:SSLv3/TLS write server hello
SSL_accept:SSLv3/TLS write change cipher spec
SSL_accept:TLSv1.3 write encrypted extensions
SSL_accept:SSLv3/TLS write certificate
SSL_accept:TLSv1.3 write server certificate verify
SSL_accept:SSLv3/TLS write finished
SSL_accept:TLSv1.3 early data
SSL_accept:TLSv1.3 early data
SSL_accept:SSLv3/TLS read finished
>>> Callback log entry, create initial ticket:
  Issuing session ticket, key expiration: 1524534619
SSL_accept:SSLv3/TLS write session ticket
>>> Post-handshake SMTP server log entry:
  Anonymous TLS connection established from localhost[127.0.0.1]:
TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)

Resumed session:

SSL_accept:before SSL initialization
SSL_accept:before SSL initialization
>>> Callback log entry, decrypting presented ticket:
  Decrypting session ticket, key expiration: 1524534619
SSL_accept:SSLv3/TLS read client hello
SSL_accept:SSLv3/TLS write server hello
SSL_accept:SSLv3/TLS write change cipher spec
SSL_accept:TLSv1.3 write encrypted extensions
SSL_accept:SSLv3/TLS write finished
SSL_accept:TLSv1.3 early data
SSL_accept:TLSv1.3 early data
SSL_accept:SSLv3/TLS read finished
>>> Callback asked to create a new ticket:
  Issuing session ticket, key expiration: 1524534619
SSL_accept:SSLv3/TLS write session ticket
>>> Post-handshake application logging:
  Reusing old session (RFC 5077 session ticket)
  Anonymous TLS connection established from localhost[127.0.0.1]:
  TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
- End -

-- 
Viktor.



[openssl-project] OpenSSL 1.1.1 library(OpenSSL 1.1.0 compile) Postfix to Postfix test

2018-04-23 Thread Viktor Dukhovni

I tested a Postfix server and client built against OpenSSL 1.1.0,
using 1.1.1 run-time libraries.  This exercised peer certificate
fingerprint matching and session resumption.  No major issues.

The only interesting observations are:

  * With TLS 1.3 a new session is generated even when sessions are
resumed, because the server responds with a new ticket
in the event of session resumption.  With TLS 1.2 sessions
that had sufficient remaining lifetime did not trigger new
ticket generation on the server, and no new session was
stored on the client.  This causes needless wear-and-tear
on the external session cache in Postfix, since each
connection writes out a new session, replacing the one
it just used.  Some might consider this a security feature,
but it is not especially desirable with SMTP.  Any thoughts
about whether this could be tunable?  It would have to be
server-side tuning I think, since the client does not know
why the server issued a new session, perhaps the old one
was not (or will soon not) be valid for re-use.


  * Postfix logs a warning when the compile-time and runtime
libraries are not exactly the same (once per process start),
this is expected.  Perhaps we should provide a means for
users to turn that off.

  * The Postfix logging from the new session callback precedes
the OpenSSL message callback indicating that a session ticket was
received from the server.  It seems that the OpenSSL message
callback happens at the completion of session ticket processing,
but this results in slightly surprising ordering of the logs.
It seems as though the session is stored before the ticket
arrives.  I think this "cosmetic" issue may be worth addressing.

- Client-side diagnostics -
posttls-finger: warning: run-time library vs. compile-time header version 
mismatch: OpenSSL 1.1.1 may not be compatible with OpenSSL 1.1.0

posttls-finger: SSL_connect:before SSL initialization
posttls-finger: SSL_connect:SSLv3/TLS write client hello
posttls-finger: SSL_connect:SSLv3/TLS write client hello
posttls-finger: SSL_connect:SSLv3/TLS read server hello
posttls-finger: SSL_connect:TLSv1.3 read encrypted extensions
posttls-finger: SSL_connect:SSLv3/TLS read server certificate
posttls-finger: SSL_connect:TLSv1.3 read server certificate verify
posttls-finger: SSL_connect:SSLv3/TLS read finished
posttls-finger: SSL_connect:SSLv3/TLS write change cipher spec
posttls-finger: SSL_connect:SSLv3/TLS write finished
posttls-finger: Verified TLS connection established to 127.0.0.1[127.0.0.1]:25: 
TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
posttls-finger: SSL_connect:SSL negotiation finished successfully
posttls-finger: SSL_connect:SSL negotiation finished successfully
posttls-finger: save session 
[127.0.0.1]:25&340DAEC7D4C243D38B19F31A405375B4DF69D1A1E5FB70B81C38E9EDC190976D 
to memory cache
posttls-finger: SSL_connect:SSLv3/TLS read server session ticket

posttls-finger: Reconnecting after 1 seconds
posttls-finger: looking for session 
[127.0.0.1]:25&340DAEC7D4C243D38B19F31A405375B4DF69D1A1E5FB70B81C38E9EDC190976D 
in memory cache
posttls-finger: reloaded session 
[127.0.0.1]:25&340DAEC7D4C243D38B19F31A405375B4DF69D1A1E5FB70B81C38E9EDC190976D 
from memory cache
posttls-finger: SSL_connect:before SSL initialization
posttls-finger: SSL_connect:SSLv3/TLS write client hello
posttls-finger: SSL_connect:SSLv3/TLS write client hello
posttls-finger: SSL_connect:SSLv3/TLS read server hello
posttls-finger: SSL_connect:TLSv1.3 read encrypted extensions
posttls-finger: SSL_connect:SSLv3/TLS read finished
posttls-finger: SSL_connect:SSLv3/TLS write change cipher spec
posttls-finger: SSL_connect:SSLv3/TLS write finished
posttls-finger: 127.0.0.1[127.0.0.1]:25: Reusing old session
posttls-finger: Verified TLS connection established to 127.0.0.1[127.0.0.1]:25: 
TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
posttls-finger: SSL_connect:SSL negotiation finished successfully
posttls-finger: SSL_connect:SSL negotiation finished successfully
posttls-finger: save session 
[127.0.0.1]:25&340DAEC7D4C243D38B19F31A405375B4DF69D1A1E5FB70B81C38E9EDC190976D 
to memory cache
posttls-finger: SSL_connect:SSLv3/TLS read server session ticket
- End -


-- 
Viktor.



[openssl-project] When to enable TLS 1.3 (was: Google's SNI hurdle)

2018-04-19 Thread Viktor Dukhovni


> On Apr 19, 2018, at 1:48 PM, Matt Caswell  wrote:
> 
>> I might suggest conditioning it on the compile-time version of OpenSSL
>> headers. This is a common transition strategy for systems working
>> through ABI constraints. (In some systems, this is implemented as some
>> target SDK version.)
> 
> This is exactly what Richard proposed in this PR:
> 
> https://github.com/openssl/openssl/pull/5945

So we should get back to what to do about the larger question.

I am skeptical that just the compile-time header version is a
sufficiently good indicator of which applications are prepared
for TLS 1.3.  For most applications integration into a new
release involves recompiling the existing code and running some
tests.

If the tests don't cover interoperability with a sufficiently
diverse set of remote peers, the application will be no more
prepared for TLS 1.3 after compilation against OpenSSL 1.1.1
than it would have been had it been compiled against 1.1.0.

So ideally we (collectively, the OpenSSL, Google, other
TLS toolkits and service providers) will work to reduce
friction so that more applications can use TLS 1.3 without
running into any issues.

But not all the friction can be eliminated, and likely not
all providers can be persuaded to be more accommodating.
Which leaves us with some difficult judgement calls:

  * Restrict TLS 1.3 support to just applications compiled
against 1.1.1?  A weak signal, but likely correlates at
least somewhat with the application being ready.

  * Determine whether the application is likely to be compatible
at runtime by looking at the provided configuration.  Is SNI
enabled?  Is the certificate chain weird enough to break with
TLS 1.3.  Has the application turned off critical algorithms?

  * Do nothing, let the applications adapt or stick with older
libraries?

  * Something else?

We don't have much time before release, what do we do?

-- 
Viktor.



Re: [openssl-project] Potentially bad news on TLS 1.3 compatibility (sans SNI)

2018-04-19 Thread Viktor Dukhovni


> On Apr 19, 2018, at 4:24 PM, Salz, Rich  wrote:
> 
> Viktor found my comment offensive, which was not my intent.  I was trying to 
> be light-hearted in commenting on how Viktor dismissed all the issues David 
> raised.
> 
> If, in doing so, I went beyond our code of conduct and offended, I am truly 
> truly sorry.

Thanks.  Much appreciated...

Yes, there are other potential obstacles when enabling TLS 1.3 in applications 
not specifically designed for it.  Some substantial, others less so.

Without going into a length analysis, I think that most of the issues are 
minor, but authentication failure when an unexpected certificate appears with 
1.3 that one would not see with 1.2 seems like a substantially more major 
hurdle, and one that sure seems avoidable.  I hope it will be looked at more 
closely and in the not too distant future deployed less broadly (if at all).

-- 
Viktor.



Re: [openssl-project] Potentially bad news on TLS 1.3 compatibility (sans SNI)

2018-04-19 Thread Viktor Dukhovni


> On Apr 19, 2018, at 2:54 PM, Salz, Rich  wrote:
> 
> I am not fond of Viktor's reply, which comes across as "pshaw silly ninny" or 
> something like that.

You'll need to retract that.

-- 
Viktor.



Re: [openssl-project] Potentially bad news on TLS 1.3 compatibility (sans SNI)

2018-04-19 Thread Viktor Dukhovni


> On Apr 19, 2018, at 3:15 PM, Kurt Roeckx  wrote:
> 
> I think there might be some disagreement on how to go forward with
> having proper TLS in SMTP. I think Google might want to go with
> how it works for https, and so have certificates issued by a CA
> for hostname you try to connect to. I think you would like to use
> DANE instead. But I don't see DNSSEC or DANE getting wide adoption.

NO.  That's simply not the case, in fact I've contributed significantly
to MTA-STS, and the use-case that fails here is NOT the DANE one (where
SNI is already specified), but rather legacy WebPKI auth for SMTP.

Please don't jump to conclusions or impute motives.

-- 
Viktor.



Re: [openssl-project] Potentially bad news on TLS 1.3 compatibility (sans SNI)

2018-04-19 Thread Viktor Dukhovni


> On Apr 19, 2018, at 1:31 PM, David Benjamin  wrote:
> 
> Consider a caller using a PKCS#1-only ENGINE-backed private key. PKCS#1 does 
> not work in TLS 1.3, only PSS.

That's a local matter, and easy to resolve locally.

> Consider a caller which calls SSL_renegotiate.

Ditto.  And sufficiently uncommon to not worry about.

> A client which expects the session to be available immediately after the 
> handshake will also break.

Sessions are not always offered by the server, clients already have to deal 
with this.

> Or someone who listens to the message callback.

Not worth worrying about.

> Or someone who only installed CBC-mode ciphers in initialization.

Not a problem, OpenSSL 1.1.1 has separate cipher controls for TLS 1.3

> Or just someone who calls SSL_version and checks that it is TLS1_2_VERSION.

They can set the max version. ...

The above are local edge cases.  The SNI interoperability trap is random damage 
imposed by apparently capricious remote servers.  I plead with you to reconsider 
this *particular* additional hoop for TLS 1.3 clients to jump through; just do 
whatever you did with TLS 1.2.  If TLS 1.2 failed with SNI, fine, do the same 
with TLS 1.3; if not, then return the same chain.
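
For reference, the two knobs mentioned above (a sketch; return-value
checks omitted, function name made up):

    #include <openssl/ssl.h>

    static SSL_CTX *make_ctx(int want_tls12_max)
    {
        SSL_CTX *ctx = SSL_CTX_new(TLS_method());

        if (ctx == NULL)
            return NULL;
        /* TLSv1.3 ciphersuites are configured separately from the <=1.2 list: */
        SSL_CTX_set_ciphersuites(ctx,
            "TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256");
        SSL_CTX_set_cipher_list(ctx, "HIGH:!aNULL");  /* <= TLSv1.2, unchanged */

        /* And a caller that insists on TLS 1.2 can simply pin it: */
        if (want_tls12_max)
            SSL_CTX_set_max_proto_version(ctx, TLS1_2_VERSION);
        return ctx;
    }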

-- 
Viktor.



Re: [openssl-project] Potentially bad news on TLS 1.3 compatibility (sans SNI)

2018-04-19 Thread Viktor Dukhovni


> On Apr 19, 2018, at 1:49 PM, Viktor Dukhovni <openssl-us...@dukhovni.org> 
> wrote:
> 
> There is no "the name that is being verified".  The Postfix SMTP client 
> accepts multiple (configurable as a set) names for the peer endpoint.  This 
> may be the next-hop domain or the MX hostname, or a sub-domain wildcard, or 
> some fixed hardcoded-name, or a mixture of these...

Furthermore, with SMTP servers we can't be sure whether the peer even tolerates 
SNI; it may decide that it has no certificate exactly matching the client's 
guess, and hang up, even though the client would be happy with the default 
certificate.

I'm reluctant to start sending SNI in configurations that work fine without 
SNI, and could well break when it is introduced.  So if you're at all in touch 
with the Gmail folks, please work with them to undo the ratchet in question, at 
least SMTP MUST NOT suddenly stop yielding the expected default certificate for 
lack of SNI.

And just recompiling against OpenSSL 1.1.1 headers should not suddenly change 
behaviour.

On the server side there needs to be some recognition of application context, 
with HTTP servers requiring SNI (where appropriate), but SMTP and other similar 
applications not doing so.

I'd like to use TLS 1.3 in SMTP, even by default on a recompile or run-time 
relink with no code changes to explicitly enable TLS 1.3.  But if servers are 
going to put up unnecessary roadblocks, TLS 1.3 is not going to get much 
traction.

Please reconsider this particular ratchet (for at least SMTP).  It *is* 
counter-productive.

Not sure what OpenSSL should do at this point... :-(

-- 
Viktor.



Re: [openssl-project] Potentially bad news on TLS 1.3 compatibility (sans SNI)

2018-04-18 Thread Viktor Dukhovni


> On Apr 18, 2018, at 10:43 AM, Andy Polyakov  wrote:
> 
> It can either be a probe just to see if it's reasonable to demand it, or
> establish a precedent that they can refer to saying "it was always like
> that, *your* application is broken, not ours." Also note that formally
> speaking you can't blame them for demanding it. But you can blame them
> for demanding it wrong. I mean they shouldn't try to communicate through
> OU of self-signed certificate, but by terminating connection with
> missing_extension alert, should they?

What I can blame them for is being counter-productively pedantic. Forget the 
RFC language, does what they're doing make sense and improve security or is it 
just a pointless downgrade justified by RFC text lawyering?

-- 
Viktor.



Re: [openssl-project] Potentially bad news on TLS 1.3 compatibility (sans SNI)

2018-04-18 Thread Viktor Dukhovni


> On Apr 18, 2018, at 10:12 AM, Andy Polyakov  wrote:
> 
> With this in mind, wouldn't it be more
> appropriate to simply not offer 1.3 capability if application didn't
> provide input for SNI?

That's what Rich suggested, and it makes sense, but what does not make any 
sense to me is what Google is doing.  Snatching defeat from the jaws of victory 
by needlessly forcing clients to downgrade to TLS 1.2.  Is there a 
justification for this?

-- 
Viktor.



Re: [openssl-project] The problem of (implicit) relinking and changed behaviour

2018-04-17 Thread Viktor Dukhovni


> On Apr 17, 2018, at 11:27 PM, Salz, Rich  wrote:
> 
> So far, if there's no SNI then we shouldn't do TLS 1.3 (as a client).  That 
> seems easy to code.

That might be a sensible work-around, with a bit of care to make sure that the 
user has not also disabled TLS 1.2 (i.e. try TLS 1.3 without SNI if that's all 
that is enabled).
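
Something along these lines, from the application's point of view (a
sketch; the helper name is made up, and "servername" is whatever the
application would have used for SNI, or NULL):

    #include <openssl/ssl.h>

    static SSL *new_client_ssl(SSL_CTX *ctx, const char *servername)
    {
        SSL *ssl = SSL_new(ctx);

        if (ssl == NULL)
            return NULL;
        if (servername != NULL) {
            SSL_set_tlsext_host_name(ssl, servername);
        } else if (!(SSL_get_options(ssl) & SSL_OP_NO_TLSv1_2)) {
            /* No SNI: stay at TLSv1.2, unless TLSv1.2 itself is disabled. */
            SSL_set_max_proto_version(ssl, TLS1_2_VERSION);
        }
        return ssl;
    }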

Would still like to know what's motivating Google's insistence on SNI...
Sounds like a rather unnecessary downgrade.

-- 
Viktor.



[openssl-project] Potentially bad news on TLS 1.3 compatibility (sans SNI)

2018-04-17 Thread Viktor Dukhovni

Applications that have hitherto used TLS <= 1.2 have often not needed to use 
SNI.  The extension, though useful for virtual-hosting on the Web, was optional.

TLS 1.3 has raised the status of SNI from optional to "mandatory to implement".
What this means is that implementations must support it, but it stops short of
mandating SNI outright.  Clients SHOULD send SNI, when applicable, and servers
MAY require SNI:

  https://tools.ietf.org/html/draft-ietf-tls-tls13-28#section-4.4.2.2

  -  The "server_name" [RFC6066] and "certificate_authorities"
 extensions are used to guide certificate selection.  As servers
 MAY require the presence of the "server_name" extension, clients
 SHOULD send this extension, when applicable.

  [...]

  Additionally, all implementations MUST support use of the
  "server_name" extension with applications capable of using it.
  Servers MAY require clients to send a valid "server_name" extension.
  Servers requiring this extension SHOULD respond to a ClientHello
  lacking a "server_name" extension by terminating the connection with
  a "missing_extension" alert.

In the world of SMTP, with SMTP server names determined indirectly
and generally insecurely from MX records, it is not generally clear
what name one would use in SNI, and many SMTP clients don't send it
at all.  Some authenticate servers against the nexthop domain (the
envelope recipient domain), others might authenticate the MX host,
or just do unauthenticated opportunistic TLS.  This has worked
acceptably well with TLS <= 1.2

Along comes 1.3, and suddenly some server operators have become
particularly keen on enforcing all sorts of constraints that at
first blush look rather aggressive.  Specifically, the Google
SMTP servers serving millions of domains (including gmail.com),
now only do TLS 1.3 when SNI is presented, and when SNI is missing,
not only negotiate TLS 1.2, but use an unexpected self-signed cert
chain that validating senders will fail to authenticate, and others
may find perplexing in their logs.  (Thanks to Phil Pennock, Bcc'd
for reporting this on the exim-dev list).

When I link posttls-finger with the OpenSSL 1.1.1 library, I see:

  posttls-finger: gmail-smtp-in.l.google.com[173.194.66.26]:25 
CommonName invalid2.invalid
  posttls-finger: certificate verification failed for
gmail-smtp-in.l.google.com[173.194.66.26]:25:
self-signed certificate
  posttls-finger: gmail-smtp-in.l.google.com[173.194.66.26]:25:
subject_CN=invalid2.invalid, issuer_CN=invalid2.invalid
  posttls-finger: Untrusted TLS connection established to
gmail-smtp-in.l.google.com[173.194.66.26]:25:
TLSv1.2 with cipher ECDHE-RSA-CHACHA20-POLY1305 (256/256 bits)

The same command with OpenSSL 1.1.0 yields (no CAfile/CApath
so authentication fails where it would typically succeed):

  posttls-finger: certificate verification failed for
gmail-smtp-in.l.google.com[173.194.66.27]:25:
untrusted issuer /OU=GlobalSign Root CA - R2/O=GlobalSign/CN=GlobalSign
  posttls-finger: gmail-smtp-in.l.google.com[173.194.66.27]:25:
subject_CN=gmail-smtp-in.l.google.com,
issuer_CN=Google Internet Authority G3,
  posttls-finger: Untrusted TLS connection established to
gmail-smtp-in.l.google.com[173.194.66.27]:25:
TLSv1.2 with cipher ECDHE-RSA-CHACHA20-POLY1305 (256/256 bits)

This is a substantial behaviour change from TLS 1.2, and a rather
poor decision on Google's part IMHO, though I'm eager to learn why
this might have been a good idea.

In the mean-time, Richard's objection to automatic TLS 1.3 use
after shared-library upgrade is starting to look more compelling?

Comments?  [ Especially from David Benjamin, if he's in the loop
on the thinking that might have led to the new behaviour ]

-- 
Viktor.



[openssl-project] TLS 1.3 and SNI

2018-04-17 Thread Viktor Dukhovni

Just wanted to check.  The TLS 1.3 draft lists SNI as mandatory to implement, 
but not mandatory to use.  Clients should, but do not have to, send SNI, and 
servers may require SNI, but can just use some default chain instead.

Does OpenSSL's TLS 1.3 support mandate SNI in either the client or server?

-- 
Viktor.



Re: [openssl-project] The problem of (implicit) relinking and changed behaviour

2018-04-17 Thread Viktor Dukhovni


> On Apr 17, 2018, at 2:15 PM, Richard Levitte  wrote:
> 
> Depends on what "the best thing you know to do" is.  In my mind,
> simply refusing to run as before because the new kid in town didn't
> like the environment (for example a cert that's perfectly valid for
> TLSv1.2 but invalid for TLSv1.3) it ended up in isn't "the best thing
> you know to do".
> 
> But I get you, your idea of "the best thing you know to do" is to run
> the newest protocol unconditionally unless the user / application says
> otherwise, regardless of if it's at all possible given the environment
> (like said cert).

If there were a non-negligible use of certificates that work with TLS 1.2,
and that (implementation bugs aside) can't work with TLS 1.3, I'd support
your position strongly.  As it stands, I think you're right in principle,
but not yet in practice.  If we find no show-stopper issues, we should
allow TLS 1.3 to happen.

I'm far more concerned about lingering middle-box issues, than about some
edge-case certificates...

-- 
Viktor.



Re: [openssl-project] The problem of (implicit) relinking and changed behaviour

2018-04-16 Thread Viktor Dukhovni


> On Apr 16, 2018, at 6:00 AM, Matt Caswell  wrote:
> 
> That's not entirely true. This works:
> 
> $ openssl s_server -cert dsacert.pem -key dsakey.pem -cipher ALL:@SECLEVEL=0
> $ openssl s_client -no_tls1_3 -cipher ALL@SECLEVEL=0
> 
> This doesn't:
> 
> $ openssl s_server -cert dsacert.pem -key dsakey.pem -cipher ALL:@SECLEVEL=0
> $ openssl s_client -cipher ALL@SECLEVEL=0
> 
> 139667082474432:error:14201076:SSL routines:tls_choose_sigalg:no
> suitable signature algorithm:ssl/t1_lib.c:2484:
> 
> We do not allow DSA certs in TLSv1.3.

It is probably time we disallowed them in TLS 1.2 as well: nobody
uses them, but perhaps "nobody" == USG?

-- 
Viktor.



Re: [openssl-project] The problem of (implicit) relinking and changed behaviour

2018-04-15 Thread Viktor Dukhovni


> On Apr 15, 2018, at 12:59 PM, Salz, Rich  wrote:
> 
> Let me turn the question around because we'll never know "everything" just 
> works. Except for our tests, what programs work with 1.1.0 and *fail* to work 
> with 1.1.1?  Any? For various reasons that Viktor and I have detailed, *our 
> tests* do not count.

I would not go as far as that.  Our tests do count, and should be paid 
attention to, but we need to be careful about interpreting the results.  
Sometimes the answer is to tune the test to make it portable to later library 
versions.  So far, we've not seen issues that warrant a library version bump, 
but testing this is a good idea, and Richard is doing good work.

-- 
Viktor.


Re: [openssl-project] The problem of (implicit) relinking and changed behaviour

2018-04-15 Thread Viktor Dukhovni


> On Apr 15, 2018, at 12:55 PM, Salz, Rich  wrote:
> 
> Do our 1.1.0 tests work when linked against the 1.1.1 library?

Our tests don't, but Richard (valiantly I must say) went to the trouble
of doing just that.  And found some tests that failed, ...

> Even then, there might be some failures because some of those tests probably 
> say "pick any protocol" and they were written at a time when 1.3 was not 
> available so might explicitly test, for example, that "any protocol" meant 
> "got 1.2"

in particular this type of failure.

> It would be interesting to test 1.1.0 against the 1.1.1 library, and then 
> analyze the failures and see which, if any, indicate bugs in the 1.1.1 
> compatibility.

This is what Richard was doing, and I commend his efforts.

> Again, to repeat myself, we have datapoints that 1.1.0 programs can use 1.1.1 
> library with no problems. We do not have any datapoints that typical 1.1.0 
> programs fail when using 1.1.1 library. 

I think that tests of this sort are valuable, and that in some cases we 
should make a test's assumptions more explicit, so that it will also pass 
with later libraries.  Then we can focus on any "real" issues that come up, 
and decide whether they are bugs, significant incompatibilities or 
"artificial" issues not substantive for "real" applications.
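
(Concretely, the sort of tweak I have in mind is turning an exact-version
check into a floor.  A hypothetical before/after sketch, not a quote from
our test suite:)

    #include <assert.h>
    #include <openssl/ssl.h>

    /* Hypothetical assertion on a negotiated session "ssl": the old form
     * encodes "TLSv1.2 is the highest possible outcome"; the new form
     * states the real assumption and also passes with TLSv1.3.
     */
    static void check_negotiated_version(SSL *ssl)
    {
        /* old: assert(SSL_version(ssl) == TLS1_2_VERSION); */
        assert(SSL_version(ssl) >= TLS1_2_VERSION);
    }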

-- 
Viktor.


Re: [openssl-project] The problem of (implicit) relinking and changed behaviour

2018-04-15 Thread Viktor Dukhovni


> On Apr 15, 2018, at 2:24 AM, Bernd Edlinger  wrote:
> 
> One possible example of application failure that I am aware of is #5743:
> A certificate that is incompatible with TLS1.3 but works with TLS1.2.
> Admittedly that I did come up with that scenario only because I saw
> a possible issue per code inspection.

[ Repeating in part my response to Richard's message, also in this thread ]

This is a bug that needs to be fixed: the TLS point-format restrictions
have no bearing on X.509.  There's no such thing as a certificate that is
incompatible with TLS 1.3 yet compatible with TLS 1.2.
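
(For anyone who wants to check whether a given certificate's EC key is
encoded with compressed points, which is my understanding of the case
behind #5743, a rough sketch follows; the helper name is mine:)

    #include <openssl/x509.h>
    #include <openssl/evp.h>
    #include <openssl/ec.h>

    /* Rough sketch (hypothetical helper): 1 if the cert's EC public key
     * uses compressed point encoding, 0 if not, -1 if it isn't an EC key.
     */
    static int cert_has_compressed_ec_point(X509 *cert)
    {
        EVP_PKEY *pkey = X509_get0_pubkey(cert);
        EC_KEY *ec = pkey ? EVP_PKEY_get0_EC_KEY(pkey) : NULL;

        if (ec == NULL)
            return -1;
        return EC_KEY_get_conv_form(ec) == POINT_CONVERSION_COMPRESSED;
    }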

-- 
Viktor.


Re: [openssl-project] The problem of (implicit) relinking and changed behaviour

2018-04-14 Thread Viktor Dukhovni


> On Apr 15, 2018, at 1:38 AM, Richard Levitte  wrote:
> 
> Errr, are we?  Please inform me, because I cannot remember having seen
> tests that specifically targets the case of programs built with 1.1.0
> that get implicitly relinked with 1.1.1 libraries (that's what you
> call "going forward", isn't it?), or data collection for that matter.
> I may have missed something, but I am interested.

I think it is most prudent not to fall into the trap of debating this
particular side-issue.  I commend your initiative of running the 1.1.0
tests against the 1.1.1 libraries, that's fantastic.  And I further
commend attention to the failure cases.  Thank you.

With that out of the way, it seems to me that apart from some fixes in
the test framework, and tests that did not expect protocol versions
higher than TLS 1.2, no *interesting* issues have turned up.

If such issues did or do turn up, let's fix them, but there should not
be fundamental obstacles to an ABI-compatible 1.1.1 library with the
same SONAME as its 1.1.0 predecessor.  The new library may negotiate
TLS 1.3, which 1.1.0 did not, but I don't see that as an incompatibility
that requires an SONAME version bump.

Which is not to say I could not be convinced otherwise, but at present
I don't see a need for the bump, or for work-arounds to limit the
negotiated protocols for code compiled against 1.1.0 that happens to
run against 1.1.1.

Let's stay alert, but not overreact to minor issues we can resolve.

-- 
Viktor.


Re: [openssl-project] The problem of (implicit) relinking and changed behaviour

2018-04-14 Thread Viktor Dukhovni


> On Apr 14, 2018, at 5:09 PM, Richard Levitte  wrote:
> 
>> I just tested posttls-finger compiled for 1.1.0 running with a 1.1.1
>> library against a TLS 1.2 server and it worked fine.
> 
> Does this answer the whole question, or do they just do the most basic
> stuff that our public headers make available?

No mere test constitutes a formal proof of correctness.  I'm just saying
that compile-time 1.1.0 runs fine in routine SSL sessions with 1.1.1 as
the underlying library.  The posttls-finger program is comparatively
sophisticated in its use of SSL, but by no means tests the entire API.

> To put it another way, I would absolutely hate it if, after 1.1.1
> (assuming that's what we go for) is released, people came back
> screaming at us because their program toppled over or bailed out in a
> virtual panic attack just because of a shared library upgrade.

When support for TLS 1.2 appeared in OpenSSL, some Postfix users ran
into some trouble with middle-boxes or some such, and had to cap the
TLS version at TLS 1.0.  This happened some time between 1.0.0 and
1.0.2 IIRC, with the library ABI at 1.0.  This is to be expected.
No matter what we do, some users will upgrade their applications and/or
OpenSSL library and find that they run into some friction with TLS 1.3.
None of our work-arounds will make the problem go away.  They'll just
have to deal with it.

> openssl-users> What version of OpenSSL is Postfix linked against on 
> mta.openssl.org?
> openssl-users> Care to upgrade it to 1.1.0 if not already?  Then replace the 
> libraries
> openssl-users> with the 1.1.1 versions?  I can then retest...
> 
> But tell you what, there's a test machine as well, which I did set up
> specifically for trying this sort of thing.  I can certainly screw
> around with all of that there.

A test machine would be great.

-- 
Viktor.


Re: [openssl-project] The problem of (implicit) relinking and changed behaviour

2018-04-14 Thread Viktor Dukhovni


> On Apr 14, 2018, at 4:40 PM, Richard Levitte  wrote:
> 
> Would you say that it's an application bug if it stumbles on a change
> in API behavior that isn't due to a bug fix?  (and even better, if it
> worked according to documentation?)

Negotiating a new version of TLS is not a change in API behaviour.  The
application asks for a TLS session (of no particular maximum version),
and it gets one that both the client library and the peer support.

I just tested posttls-finger compiled for 1.1.0 running with a 1.1.1
library against a TLS 1.2 server and it worked fine.

What version of OpenSSL is Postfix linked against on mta.openssl.org?
Care to upgrade it to 1.1.0 if not already?  Then replace the libraries
with the 1.1.1 versions?  I can then retest...

Running an MTA built for 1.1.0 against 1.1.1 libraries might be a reasonable
way to "eat our own dog food".

-- 
Viktor.


Re: [openssl-project] The problem of (implicit) relinking and changed behaviour

2018-04-14 Thread Viktor Dukhovni


> On Apr 14, 2018, at 3:32 PM, Richard Levitte  wrote:
> 
> So regarding assumptions, there's only one assumption that I'm ready
> to make: a program that worked correctly with libssl 1.1.0 and uses
> its functionality as advertised should work the same with libssl
> 1.1.1.  Note that I'm not saying that this excludes new features
> "under the hood", but in that case, those new features should work
> transparently enough that a program doesn't need to be changed because
> of them.  Also, note again that I'm not talking about recompilation,
> but the implicit relinking that is what happens when a shared library
> is upgraded but keeps the same library version number (no "bump").
> (mind you, explicit relinking would make no different in this regard).
> 
> Does anyone disagree with that assumption?

It must be possible to upgrade from 1.1.0 to 1.1.1 without source
code changes, or relinking the program.  From what you describe,
it seems that source code changes might be needed to adapt to
a TLS-1.3-capable library.  That should not happen.
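
(As an aside, a program can at least tell at run time that it is now
sitting on top of a newer library than it was built against.  A minimal
sketch, purely illustrative:)

    #include <stdio.h>
    #include <openssl/crypto.h>
    #include <openssl/opensslv.h>

    /* Minimal sketch: compare compile-time and run-time library versions. */
    static void report_relink(void)
    {
        if (OpenSSL_version_num() != OPENSSL_VERSION_NUMBER)
            printf("built against 0x%lx, running with 0x%lx (%s)\n",
                   (unsigned long)OPENSSL_VERSION_NUMBER,
                   OpenSSL_version_num(),
                   OpenSSL_version(OPENSSL_VERSION));
    }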

> 1. There's the option of making the new release 1.2.0 instead of 1.1.1.
>   I think most of us aren't keen on this, but it has to be said.

This does not address the issue of yet another compatibility break, with
many distributions not yet done adopting 1.1.0.  So I don't see that
as a solution.

> 2. Make TLSv1.2 the absolutely maximum TLS version available for
>   programs linked with libssl 1.1.0.  This is what's done in this PR:
>   https://github.com/openssl/openssl/pull/5945
>   This makes sense insofar that it's safe, it works within the known
>   parameters for the library these programs were built for.
>   It also makes sense if we view TLSv1.3 as new functionality, and
>   new functionality is usually only available to those who
>   explicitely build their programs for the new library version.
>   TLSv1.3 is unusual in this sense because it's at least it great
>   part "under the hood", just no 100% transparently so.

This should NOT be necessary.  What is it about enabling TLS 1.3
that breaks existing code?  Let's fix that.

> 3.   I dunno, please share ideas if you have them.

We need to make sure that the introduction of TLS 1.3 is transparent,
aside from occasionally leading to a connection that uses TLS 1.3.

If all that's failing is our test suite, which is too sensitive to the
underlying implementation details, that's fine; not all the tests are
designed to run cross-library.

Will real applications run into any meaningful problems?

While we can artificially limit the maximum protocol in applications
compiled for 1.1.0, I don't think that's a compelling design choice.  We
already have min/max protocol support in 1.1.0; applications can use those
controls explicitly.
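
(That is, an application that genuinely wants to stay at TLS 1.2 for now
can already say so itself.  A minimal sketch using only the 1.1.0 API:)

    #include <openssl/ssl.h>

    /* Minimal sketch: a context that opts out of TLS 1.3 explicitly. */
    static SSL_CTX *tls12_capped_ctx(void)
    {
        SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());

        if (ctx != NULL) {
            SSL_CTX_set_min_proto_version(ctx, TLS1_VERSION);
            SSL_CTX_set_max_proto_version(ctx, TLS1_2_VERSION);
        }
        return ctx;
    }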

In any case, in order of preference, I'd like to see:

  1. Fix any issues so that it is safe to upgrade.
  2. Make the library version 1.2
  3. Hack the API to cap the protocol version based on compile-time
 maximum.

-- 
Viktor.


Re: [openssl-project] Speeding up the fuzz test...

2018-03-27 Thread Viktor Dukhovni


> On Mar 27, 2018, at 3:28 PM, Richard Levitte  wrote:
> 
> Now, I wonder how that will impact on Kurt, who sometimes produce
> these files, and on Google's oss-fuzz project, who do use this.
> My desire is to replace the current corpora with the corresponding
> cpio files, one for each test program (i.e. fuzz/corpora/asn1/* gets
> archived into fuzz/corpora/asn1.cpio, and so on and so forth).
> 
> I've changed fuzz/test-corpus.c so it can take a flag '-cpio' to tell
> it to read the files as cpio archives, otherwise it read the file raw,
> as before.
> 
> Thoughts?  Comments?  (Kurt?)

Naïve question:

Why a cpio (or any kind of) archive, and not a directory full of files?
Surely a program that reads a cpio archive can also traverse the files
in a directory?
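
(The traversal itself is only a handful of lines.  A rough sketch,
assuming POSIX dirent and a stand-in name for the existing per-file
handler:)

    #include <stdio.h>
    #include <dirent.h>

    extern void test_one_file(const char *path);   /* stand-in name */

    /* Rough sketch: feed every file in "dir" to the per-file handler. */
    static void test_corpus_dir(const char *dir)
    {
        DIR *d = opendir(dir);
        struct dirent *de;
        char path[4096];

        if (d == NULL)
            return;
        while ((de = readdir(d)) != NULL) {
            if (de->d_name[0] == '.')
                continue;           /* skip "." and ".." (and dotfiles) */
            snprintf(path, sizeof(path), "%s/%s", dir, de->d_name);
            test_one_file(path);
        }
        closedir(d);
    }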

-- 
Viktor.


Re: [openssl-project] Applying system defaults to TLS config

2018-03-15 Thread Viktor Dukhovni


> On Mar 15, 2018, at 8:12 AM, Salz, Rich  wrote:
> 
> https://github.com/openssl/openssl/pull/4848

I am also concerned about the performance implications of applying
the system settings at every SSL_CTX_new() (if that's the mechanism).

How does this interact with the creation of SSL contexts used for
server-side SNI support?
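
(For context, the usual SNI pattern is one long-lived SSL_CTX per
certificate or name, switched in the servername callback; if
SSL_CTX_new() gets noticeably heavier, servers that create many such
contexts will feel it.  A minimal sketch, with a made-up lookup helper:)

    #include <openssl/ssl.h>

    extern SSL_CTX *lookup_ctx_by_name(const char *name);  /* made-up helper */

    /* Minimal sketch: switch to a per-name context during the handshake. */
    static int servername_cb(SSL *ssl, int *alert, void *arg)
    {
        const char *name = SSL_get_servername(ssl, TLSEXT_NAMETYPE_host_name);
        SSL_CTX *per_name = name ? lookup_ctx_by_name(name) : NULL;

        (void)alert;
        (void)arg;
        if (per_name != NULL)
            SSL_set_SSL_CTX(ssl, per_name);
        return SSL_TLSEXT_ERR_OK;
    }
    /* installed once with:
     * SSL_CTX_set_tlsext_servername_callback(base_ctx, servername_cb);
     */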

Finally, how is the configuration file specified? Do environment
variables influence the file location, and if so is that still
the case for setuid/setgid processes?

-- 
Viktor.


Re: [openssl-project] Removing assembler for outdated algorithms

2018-02-11 Thread Viktor Dukhovni


> On Feb 11, 2018, at 2:20 AM, Richard Levitte  wrote:
> 
> Those same systems will probably not have the newest OpenSSL either,
> and OpenSSH on those machines will certainly not be linked with a
> newer OpenSSL...

It is not those systems, but other systems that need to communicate
with them (various "appliances" that may not see an SSH implementation
update in years), that may need ongoing Blowfish support.

So we should tread with some care.  Perhaps the software-only Blowfish
is fast enough, but my point is that Blowfish is much less obviously an
outdated cipher than the others...
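
(Most callers reach Blowfish through EVP rather than the assembler
directly, so dropping only the ASM would leave them on the C
implementation.  A minimal sketch, with zero-filled placeholder key/IV
and no error handling:)

    #include <openssl/evp.h>

    /* Minimal sketch: one Blowfish-CBC block through EVP; removing the
     * assembler changes the speed of this path, not its availability.
     */
    static void bf_cbc_demo(void)
    {
        unsigned char key[16] = {0}, iv[8] = {0}, in[8] = {0}, out[16];
        int outl = 0;
        EVP_CIPHER_CTX *c = EVP_CIPHER_CTX_new();

        EVP_EncryptInit_ex(c, EVP_bf_cbc(), NULL, key, iv);
        EVP_EncryptUpdate(c, out, &outl, in, sizeof(in));
        EVP_CIPHER_CTX_free(c);
    }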

-- 
Viktor.


Re: [openssl-project] Removing assembler for outdated algorithms

2018-02-10 Thread Viktor Dukhovni
On Sat, Feb 10, 2018 at 10:19:20PM +, Salz, Rich wrote:

> > Is blowfish actually outdated?  I thought it had some significant use,
> > and don't recall any major weakness...
> 
> In particular, IIRC OpenSSH uses blowfish, and links to OpenSSL for
> the underlying cipher...
> 
> PGP use to be a heavy user, but now it only decrypts or does key-wrapping for 
> compatibility; it no longer uses blowfish to encrypt data.
> 
> SSH uses it, but according to 
> https://bbs.archlinux.org/viewtopic.php?id=188613 it has been removed, circa 
> 2014.
> Schneier recommends not using it, and use its successor(s) instead, which we 
> don't implement.

Removed in 2014 is much too recent; there are still LTS systems
with older SSH versions, and modern platforms that may want to
interoperate.  So I'm very reluctant to support removal of the Blowfish
ASM at this time...

-- 
Viktor.