On Thu, Sep 22, 2016 at 05:11:39AM +0000, Peter Gutmann wrote:
> Martin Thomson <martin.thom...@gmail.com> writes:
> >The advantage with deploying a new protocol is that you can be strict. If,
> >for example, all of the browsers implement TLS 1.3 and are strict, then
> >Amazon won't be able to deploy a buggy 1.3 implementation without noticing
> >pretty quickly. You might suggest that that's aspiration to the point of
> >delusion, but in fact it worked out pretty well with HTTP/2 deployment. We
> >didn't squash ALL of the nasty bugs, but we got most of them.
> It also means you're going to be in for a rude shock when you encounter the
> ocean of embedded/SCADA/IoT devices with non-mainstream TLS implementations.
Implementations that were never checked for interop against any mainstream TLS library?
Also, code to "recover" from peer bugs tends to introduce security issues
when used in security protocols. Just because I don't have to deal with
simple bugs like buffer overflows leading to RCE, or data races, does not
mean I can do whatever I want and expect the code to have a low number of
security issues. The existing sources of issues are already more than enough.
(Just fixed a bug where the NotBefore/NotAfter of a dedicated OCSP responder
certificate were not validated... Not related to recovery in any way,
but still some special-case code).
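The fix for that bug amounts to applying the ordinary validity-period check
to the responder certificate as well. A minimal sketch of that check in
Python, with illustrative names (`check_validity_period` is not from any
particular library):

```python
from datetime import datetime, timezone

def check_validity_period(not_before, not_after, now=None):
    """Return True iff `now` lies within [not_before, not_after].

    A dedicated OCSP responder certificate must pass this check just
    like any other certificate in the chain; the bug described above
    was that this step was skipped for the responder cert.
    """
    if now is None:
        now = datetime.now(timezone.utc)
    return not_before <= now <= not_after
```

The point being that even this trivially simple special-case code path can
be silently missing, which is exactly why extra "recovery" paths are risky.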
> The reason why HTTP/2 "works" is that it essentially forked HTTP, HTTP/2 for
> Google, Amazon, etc, and the browser vendors, and HTTP 1.1 for everything
> else that uses HTTP as its universal substrate. As a result there will be
> two versions of HTTP in perpetuity, HTTP 1.1 and HTTP-whatever-the-current-
Well, the problem you encounter first with HTTP/2 is that it really
dislikes unencrypted operation. Which implies you pretty much need
encryption. Which implies you pretty much need the WebPKI certificate
model... Which tends to be a poor match for anything except named servers
on the internet, which tends not to be suitable for IoT stuff...
> (Should I mention TLS-LTS here? :-).
TLS mailing list