I have one editorial comment and one technical comment on this draft.

The limit here is defined as:

   LargeRecordSizeLimit denotes the maximum size, in bytes, of inner
   plaintexts that the endpoint is willing to receive.  It includes the
   content type and padding (i.e., the complete length of
   TLSInnerPlaintext).  AEAD expansion is not included.

I believe that this is the same value as RFC 8449:

   This value is the length of the plaintext of a protected record.  The
   value includes the content type and padding added in TLS 1.3 (that
   is, the complete length of TLSInnerPlaintext).  In TLS 1.2 and
   earlier, the limit covers all input to compression and encryption
   (that is, the data that ultimately produces TLSCiphertext.fragment).
   Padding added as part of encryption, such as that added by a block
   cipher, is not included in this count (see Section 4.1).

IMO it would be good to explicitly state that these are the same
value, so people don't have to decode it.


      struct {
          select (Length.type) {
              case u16: uint16;
              case u24: uint24;
              case u32: uint32;
          };
      } VarLength;


As I understand the situation, this means that if you are willing
to accept large records, you pay 2 extra bytes of length overhead
on every record, including small ones. Why not instead use a
variable-length integer, perhaps the QUIC or cTLS construction?
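For reference, QUIC's variable-length integer (RFC 9000, Section 16)
uses the top two bits of the first byte to select a 1-, 2-, 4-, or
8-byte encoding, so small values only pay one byte. A rough sketch of
that construction (my own illustration, not from the draft):

```python
def quic_varint_encode(v: int) -> bytes:
    """Encode v as a QUIC variable-length integer (RFC 9000, Section 16).

    The two high bits of the first byte give the total length:
    00 -> 1 byte, 01 -> 2 bytes, 10 -> 4 bytes, 11 -> 8 bytes.
    """
    if v < 1 << 6:
        return v.to_bytes(1, "big")
    if v < 1 << 14:
        return (v | (0b01 << 14)).to_bytes(2, "big")
    if v < 1 << 30:
        return (v | (0b10 << 30)).to_bytes(4, "big")
    if v < 1 << 62:
        return (v | (0b11 << 62)).to_bytes(8, "big")
    raise ValueError("value exceeds 2^62 - 1")

def quic_varint_decode(data: bytes) -> tuple[int, int]:
    """Return (value, bytes consumed) for a varint at the start of data."""
    length = 1 << (data[0] >> 6)
    mask = (1 << (8 * length - 2)) - 1  # strip the 2-bit length prefix
    return int.from_bytes(data[:length], "big") & mask, length
```

With this scheme a length under 2^14 still fits in 2 bytes, so records
up to 16383 bytes pay no more than today, and only genuinely large
records pay 4 bytes.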

-Ekr
_______________________________________________
TLS mailing list -- tls@ietf.org
To unsubscribe send an email to tls-le...@ietf.org