> On Apr 15, 2018, at 8:07 AM, Richard Levitte <levi...@openssl.org> wrote:
> This touches an issue that's already mentioned in Matt's blog, and I
> gotta ask how the protocols to be presented for negotiation are chosen
> (yes, I know, I could dive into the code...  and I will unless there's
> a quick answer).  Does libssl just pick the max version chosen (within
> the range that we support unless the application has narrowed it
> down), or does it also look at other facts, such as chosen server or
> client certs to see what protocol version range would actually work
> with those collected facts?  #5743 seems to say that libssl doesn't
> look at such facts, and can end up in the absurd situation that things
> stop working because it selected TLSv1.3 over TLSv1.2 when the latter
> couldn't possibly work right, while TLSv1.2 does.
> I can't really say what's right or wrong in this case, this really is
> a philosophical question more than anything else.  Is it all right to
> just pick a proto version that cannot work and then virtually flip it
> to the unsuspecting application that wasn't prepared with better data
> (such as a cert that's also valid in TLSv1.3) or is that essentially
> wrong, even though easier to deal with in code?  Is that what libssl
> is doing, or does it have more of a "look at all the facts" approach
> before choosing the proto range to negotiate with the other end?

I'd support choosing a lower protocol version when no interoperable
parameters are available to complete the handshake at a higher version,
but that's rather complex and we've never done it, so I don't see it
happening on short notice; it would require design, review, and testing
time we don't have before the upcoming release.

That said, I'm puzzled by the notion of "a certificate that is incompatible
with TLS 1.3".  A certificate is a certificate; introducing TLS 1.3 does
not in any way change the validity of a certificate, and TLS 1.3 did not
rewrite RFC 5280.  So if there's a certificate we're disallowing with
TLS 1.3, that's a bug we need to fix.  If anything, TLS 1.3 is supposed to
be more liberal in how certificates are processed, since it no longer
mandates (a bug in the TLS 1.2 spec, since fixed) that the server hang up
when the client's sigalgs don't match the server's chain: the server just
presents what it has, and lets the client decide.

The specific issue cited is a bug in our TLS 1.3 implementation: the server
must NEVER refuse to present at least some default certificate chain when
it can't find one that exactly matches the client's signalled algorithms.

The EC point representation in the TLS 1.3 ECDHE key exchange messages must
be uncompressed, but this has no bearing on PKIX; any TLS messages that
require signing just get signed per X.509, and the TLS point formats have
nothing to do with that.  In Section 4.2.3 of the TLS 1.3 draft we have:

   ECDSA algorithms  Indicates a signature algorithm using ECDSA
      [ECDSA], the corresponding curve as defined in ANSI X9.62 [X962]
      and FIPS 186-4 [DSS], and the corresponding hash algorithm as
      defined in [SHS].  The signature is represented as a DER-encoded
      [X690] ECDSA-Sig-Value structure.

We should always keep in mind the semantic separation between PKIX and
TLS.  For a memorable if poorly fitting quip:

        Render unto Caesar the things that are Caesar's,
        and unto God the things that are God's

now you just have to figure out which is which... :-)


openssl-project mailing list
