You describe the observation that leads to Postel's maxim, namely that
if you found the internet in a mess when you got there, then you have
to be tolerant of rubbish.

The advantage with deploying a new protocol is that you can be strict.
If, for example, all of the browsers implement TLS 1.3 and are strict,
then Amazon won't be able to deploy a buggy 1.3 implementation without
noticing pretty quickly.  You might suggest that that's aspirational to
the point of delusion, but in fact it worked out pretty well with
HTTP/2 deployment.  We didn't squash ALL of the nasty bugs, but we got
most of them.
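To make the strictness concrete, here's a hypothetical sketch (names and
structure are illustrative, not from any real implementation) of the kind
of length check the 1.3 draft mandates: a TLS-style opaque vector with a
one-byte length prefix is rejected outright when the length runs past the
message boundary, rather than being silently truncated.

```python
class DecodeError(Exception):
    """Raised when a message cannot be parsed; maps to a fatal
    decoding_error alert per the TLS 1.3 draft."""


def parse_opaque8(buf: bytes, offset: int = 0) -> tuple[bytes, int]:
    """Parse an opaque value<0..255> (one-byte length prefix) starting
    at offset; return (value, offset past the value)."""
    if offset >= len(buf):
        # No room for the length byte itself.
        raise DecodeError("length byte beyond message boundary")
    length = buf[offset]
    end = offset + 1 + length
    if end > len(buf):
        # Declared length extends beyond the message boundary:
        # a strict parser terminates the connection here.
        raise DecodeError("value extends beyond message boundary")
    return buf[offset + 1:end], end
```

A tolerant parser would clamp the length and limp along; the strict one
above surfaces the peer's bug immediately, which is exactly what makes
buggy implementations noticeable during deployment.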

On 22 September 2016 at 01:53, Peter Gutmann <> wrote:
> Andreas Walz <> writes:
>>Actually, I wasn't aware of the fact that the TLS 1.3 draft now explicitly
>>addresses this in the Presentation Language section:
>>  "Peers which receive a message which cannot be parsed according to the
>>  syntax (e.g., have a length extending beyond the message boundary or
>>  contain an out-of-range length) MUST terminate the connection with a
>>  "decoding_error" alert."
> And how many implementations are going to do this?  Consider the error-message
> litmus test I proposed earlier, reasons for failing to connect to (say)
>   Error: Couldn't connect to Amazon because its certificate isn't valid.
>   Error: Couldn't connect to Amazon because no suitable encryption was
>          available.
>   Error: Couldn't connect to Amazon because <explanation for
>          decoding_error alert>.
> What would you put for the explanation for this case?  And if you say "decode
> error" the user's response will be to switch to some less buggy software that
> doesn't have problems connecting.
> If you're writing a strict validating protocol parser then disconnecting in
> this case is a valid response, but if it's software that will be used by
> actual humans then failing a connect based on something like this makes no
> sense.
> Peter.

TLS mailing list
