If the plaintext length indicates a message type, then this could lead to the
issue raised in the original query: an observer might learn which message type
was passed. TLS padding is supposed to prevent this (but it doesn't).
However, I argue that having TLS do significant padding for a protocol is bad
design for that protocol. It's one thing if it's a few padding bytes, but the
example given was 1023 bytes of padding.
Also, as Andrei Popov pointed out, the application needs to tell TLS how much
padding to apply, so either way the application has to deal with determining
the padding length. Why not just make it part of the protocol in the first
place?
OpenSSL has a callback scheme and a block-based scheme for determining the
amount of padding. Either way, the application is involved.
But my final point is that we are ignoring the amount of non-TLS processing
that must be done on various message types (before the response is sent), and
THAT might be even more of a giveaway than the minuscule timing difference due
to counting padding in TLS.
// "One if by land, two if by sea, three if by the Internet."
On Aug 11, 2017, at 1:20 PM, Eric Rescorla wrote:
On Fri, Aug 11, 2017 at 9:47 AM, Nikos Mavrogiannopoulos wrote:
On Fri, Aug 11, 2017 at 5:57 PM, Eric Rescorla wrote:
On Fri, Aug 11, 2017 at 7:11 AM, Nikos Mavrogiannopoulos wrote:
Imagine the following scenario, where the server and client have this
repeated communication N times per day:
in message X the client puts either message A of 1 byte or message B of 1024
bytes, and pads it to the maximum TLS record size. The server replies with
message Y, the string "ok" (the same every time), padded to the maximum size,
just after it reads X.
However, TLS 1.3 detects the message size by iterating through all the
padding bytes, and thus there is a timing leak, observable as the time
difference between receiving X and sending Y. So, as an adversary, I could
take enough measurements to distinguish whether X carried the value A or B.
While I'd expect these iterations to be unmeasurable on desktop or
server hardware, I am not sure about the situation on low-end IoT
hardware. Is the design choice of having the padding removal depend
on the padding length intentional?
Yes, we're aware of this, and it's an intentional design choice. The reasoning
was that once you have the padding removed, you'll need to operate on/copy
the unpadded content somewhere, and that's timing dependent anyway.
That is certainly an incorrect assumption. gnutls, for example, provides a
zero-copy API, and I'd guess it is not the only implementation to have one.
And then the next thing that will happen is that the application will read the
data, which is length-dependent. The problem is that the plaintext is variable
length.
There is mention of possible timing channels in:
However, I don't quite understand how this section is intended to be
read. Take, for example, the sentence: "Because the padding is encrypted
alongside the actual content, an attacker cannot directly determine the
length of the padding, but may be able to measure it indirectly by the
use of timing channels exposed during record processing." What is its
intention? Is it to acknowledge the above timing leak?
I am not sure that text is sufficient to cover the issue. It seems as if
the CBC timing attack is being re-introduced here, with the fix pushed onto
implementers. It may be better not to provide padding functionality with this
"feature", as unfortunately it will be used by applications.
I don't believe that this analysis is correct. This timing channel only
applies to the data after message integrity has been established (i.e., after
AEAD processing), which is different from the situation in Lucky 13. It seems
like what leaks here is the length of the plaintext, which is also what would
be leaked if we simply did not have padding.
TLS mailing list