Re: [TLS] dispatching DTLS 1.2 errata

2024-03-19 Thread Salz, Rich
> Based on the IESG statement, please let me know by 3 April if you disagree
> with the following proposed resolutions:

They all look good to me.


___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Next steps for Large Record Sizes for TLS and DTLS

2024-03-19 Thread John Mattsson
Sounds good to me. That makes the solution very simple.

The new extension would then work very similarly to RFC 8449.

The ExtensionData of the "large_record_size" extension is

  uint32 LargeRecordSizeLimit;

When negotiated, all records protected with application_traffic_secret are 
changed:

OLD:
  struct {
      ContentType opaque_type = application_data; /* 23 */
      ProtocolVersion legacy_record_version = 0x0303; /* TLS v1.2 */
      uint16 length;
      opaque encrypted_record[TLSCiphertext.length];
  } TLSCiphertext;

NEW:
  struct {
      uint32 length;
      opaque encrypted_record[TLSLargeCiphertext.length];
  } TLSLargeCiphertext;

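If adopted, the NEW struct above would serialize as a 4-byte length followed by the ciphertext. A minimal sketch (hypothetical helper names; assumes the proposed uint32 length is big-endian, like other TLS integers):

```python
import struct

def encode_large_record(encrypted_record: bytes) -> bytes:
    """Serialize a TLSLargeCiphertext: 4-byte big-endian length,
    then the opaque encrypted record (sketch of the proposed format)."""
    return struct.pack("!I", len(encrypted_record)) + encrypted_record

def decode_large_record(data: bytes) -> bytes:
    """Parse one TLSLargeCiphertext from the front of data."""
    (length,) = struct.unpack("!I", data[:4])
    record = data[4:4 + length]
    if len(record) != length:
        raise ValueError("truncated record")
    return record
```

Unlike the uint16 field in TLSCiphertext, this representation has no trouble with records above 2^16 bytes.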
OLD:
   0 1 2 3 4 5 6 7
  +-+-+-+-+-+-+-+-+
  |0|0|1|C|S|L|E E|
  +-+-+-+-+-+-+-+-+
  | Connection ID |   Legend:
  | (if any,      |
  /  length as    /   C   - Connection ID (CID) present
  |  negotiated)  |   S   - Sequence number length
  +-+-+-+-+-+-+-+-+   L   - Length present
  |  8 or 16 bit  |   E   - Epoch
  |Sequence Number|
  +-+-+-+-+-+-+-+-+
  | 16 bit Length |
  | (if present)  |
  +-+-+-+-+-+-+-+-+

NEW:
   0 1 2 3 4 5 6 7
  +-+-+-+-+-+-+-+-+
  |0|0|1|C|S|L|E E|
  +-+-+-+-+-+-+-+-+
  | Connection ID |   Legend:
  | (if any,      |
  /  length as    /   C   - Connection ID (CID) present
  |  negotiated)  |   S   - Sequence number length
  +-+-+-+-+-+-+-+-+   L   - Length present
  |  8 or 16 bit  |   E   - Epoch
  |Sequence Number|
  +-+-+-+-+-+-+-+-+
  | 32 bit Length |
  | (if present)  |
  +-+-+-+-+-+-+-+-+
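The first byte of the unified header is identical in both variants. A sketch of assembling it per the RFC 9147 layout (hypothetical helper, not from the draft):

```python
def dtls13_first_byte(cid_present: bool, seq_16bit: bool,
                      length_present: bool, epoch: int) -> int:
    """Assemble the DTLS 1.3 unified header first byte, laid out as
    0 0 1 C S L E E (RFC 9147): fixed 001 prefix, then the C/S/L flag
    bits, with the low two bits carrying the epoch."""
    assert 0 <= epoch <= 3, "only two epoch bits are carried"
    b = 0b00100000            # fixed 001 prefix
    if cid_present:           # C: Connection ID present
        b |= 0b00010000
    if seq_16bit:             # S: 1 = 16-bit sequence number, 0 = 8-bit
        b |= 0b00001000
    if length_present:        # L: Length field present
        b |= 0b00000100
    return b | epoch          # E E: low bits of the epoch
```

For example, a record with a 16-bit sequence number, an explicit length, no CID, and epoch 3 gets first byte 0b00101111.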

From: TLS  on behalf of Martin Thomson 

Date: Wednesday, 20 March 2024 at 13:47
To: tls@ietf.org 
Subject: Re: [TLS] Next steps for Large Record Sizes for TLS and DTLS
In offline discussion I was convinced that a bigger change might be needed.
The shifting is cute, but we might be able to do better.

This won't be wire compatible with the existing protocol, so maybe just embrace 
that and change the record header.  This would happen when switching from 
handshake protection to application data protection.  We can drop the version 
and content type and reclaim some of the savings for a longer length field.

On Wed, Mar 20, 2024, at 13:42, John Mattsson wrote:
> Hi,
>
> My summary from the TLS WG session yesterday:
>
> - Let’s adopt and figure out the final details later.
> - Show performance data.
> - Should be new extension, i.e., not used together with "record size
> limit".
> - The new extension should redefine the meaning of the uint16 length
> field in the TLSCiphertext to allow records larger than 2^16 bytes.
>
> Simple suggestion:
>
> In the new extension the client and server negotiate a uint8 value n.
> The client suggests a value n_max. The server selects n where 0 <= n <= n_max
> or rejects the extension. Agreeing on a value n means:
>
> - The length field in the record means 2^n * length bytes instead of
> length bytes, i.e., left-shifted, similar to the TCP window scale option.
> - The client and server are willing to receive records of size 2^n *
> (2^16 - 1) bytes.
> - Up to 2^n - 1 bytes of padding might be required.
> - AEAD limits are reduced by a factor of 2^(n+2).
>
> Thoughts?
>
> Cheers,
> John Preuß Mattsson
>
> *From: *internet-dra...@ietf.org 
> *Date: *Tuesday, 5 March 2024 at 06:16
> *To: *John Mattsson , Michael Tüxen
> , Hannes Tschofenig ,
> Hannes Tschofenig , John Mattsson
> , Michael Tuexen 
> *Subject: *New Version Notification for
> draft-mattsson-tls-super-jumbo-record-limit-02.txt
> A new version of Internet-Draft
> draft-mattsson-tls-super-jumbo-record-limit-02.txt has been successfully
> submitted by John Preuß Mattsson and posted to the
> IETF repository.
>
> Name: draft-mattsson-tls-super-jumbo-record-limit
> Revision: 02
> Title:Large Record Sizes for TLS and DTLS
> Date: 2024-03-04
> Group:Individual Submission
> Pages:6
> URL:
> https://www.ietf.org/archive/id/draft-mattsson-tls-super-jumbo-record-limit-02.txt
> Status:
> 

[TLS] dispatching DTLS 1.2 errata

2024-03-19 Thread Sean Turner
Hi! We’ve got 8 reported errata on DTLS 1.2 (RFC 6347):
https://www.rfc-editor.org/errata_search.php?rfc=6347&rec_status=15&presentation=records
that we, the royal we where we is the WG, need to dispatch.  By way of 
background, the
IESG has the following statement about processing errata on the IETF stream:
https://datatracker.ietf.org/doc/statement-iesg-iesg-processing-of-rfc-errata-for-the-ietf-stream-20210507/
Based on the IESG statement, please let me know by 3 April if you disagree with 
the following proposed
resolutions:

1. https://www.rfc-editor.org/errata/eid3917

Proposed dispatch: reject
Rationale: RFC 9147 obsoletes RFC 6347 and an extensions field is added to the
ClientHello struct (see s5.3).

2. https://www.rfc-editor.org/errata/eid4103

Proposed dispatch: reject
Rationale: RFC 9147 obsoletes RFC 6347 and HelloVerifyRequest is no longer 
applicable to DTLS 1.3.

3. https://www.rfc-editor.org/errata/eid5186

Proposed dispatch: reject
Rationale: RFC 9147 obsoletes RFC 6347 and the section in question was 
extensively revised; the offending text is removed or no longer applies.

4. https://www.rfc-editor.org/errata/eid4104

Proposed dispatch: reject
Rationale: RFC 9147 obsoletes RFC 6347 and the paragraph in question was
extensively revised; the offending text is removed.

5. https://www.rfc-editor.org/errata/eid4105

Proposed dispatch: reject
Rationale: RFC 9147 obsoletes RFC 6347 and the two sections were merged into 
one.

6. https://www.rfc-editor.org/errata/eid4642

Proposed dispatch: reject
Rationale: RFC 9147 obsoletes RFC 6347, the field has been renamed, and the 
field’s explanation updated.

7. https://www.rfc-editor.org/errata/eid5903

Proposed dispatch: reject
Rationale: RFC 9147 obsoletes RFC 6347 and the paragraph in question was
extensively revised; the offending text is reworded.

8. https://www.rfc-editor.org/errata/eid5026

Proposed dispatch: reject
Rationale: RFC 9147 obsoletes RFC 6347 and the RFC 2119 language for the length
is no longer in RFC 9147.

Cheers,
spt



Re: [TLS] Next steps for Large Record Sizes for TLS and DTLS

2024-03-19 Thread Martin Thomson
In offline discussion I was convinced that a bigger change might be needed.
The shifting is cute, but we might be able to do better.

This won't be wire compatible with the existing protocol, so maybe just embrace 
that and change the record header.  This would happen when switching from 
handshake protection to application data protection.  We can drop the version 
and content type and reclaim some of the savings for a longer length field.

On Wed, Mar 20, 2024, at 13:42, John Mattsson wrote:
> Hi,
> 
> My summary from the TLS WG session yesterday:
> 
> - Let’s adopt and figure out the final details later.
> - Show performance data.
> - Should be new extension, i.e., not used together with "record size 
> limit".
> - The new extension should redefine the meaning of the uint16 length 
> field in the TLSCiphertext to allow records larger than 2^16 bytes.
> 
> Simple suggestion:
> 
> In the new extension the client and server negotiate a uint8 value n.
> The client suggests a value n_max. The server selects n where 0 <= n <= n_max
> or rejects the extension. Agreeing on a value n means:
> 
> - The length field in the record means 2^n * length bytes instead of
> length bytes, i.e., left-shifted, similar to the TCP window scale option.
> - The client and server are willing to receive records of size 2^n *
> (2^16 - 1) bytes.
> - Up to 2^n - 1 bytes of padding might be required.
> - AEAD limits are reduced by a factor of 2^(n+2).
>
> Thoughts?
> 
> Cheers,
> John Preuß Mattsson
> 
> *From: *internet-dra...@ietf.org 
> *Date: *Tuesday, 5 March 2024 at 06:16
> *To: *John Mattsson , Michael Tüxen 
> , Hannes Tschofenig , 
> Hannes Tschofenig , John Mattsson 
> , Michael Tuexen 
> *Subject: *New Version Notification for 
> draft-mattsson-tls-super-jumbo-record-limit-02.txt
> A new version of Internet-Draft
> draft-mattsson-tls-super-jumbo-record-limit-02.txt has been successfully
> submitted by John Preuß Mattsson and posted to the
> IETF repository.
>
> Name: draft-mattsson-tls-super-jumbo-record-limit
> Revision: 02
> Title:Large Record Sizes for TLS and DTLS
> Date: 2024-03-04
> Group:Individual Submission
> Pages:6
> URL:  
> https://www.ietf.org/archive/id/draft-mattsson-tls-super-jumbo-record-limit-02.txt
> Status:   
> https://datatracker.ietf.org/doc/draft-mattsson-tls-super-jumbo-record-limit/
> HTML: 
> https://www.ietf.org/archive/id/draft-mattsson-tls-super-jumbo-record-limit-02.html
> HTMLized: 
> https://datatracker.ietf.org/doc/html/draft-mattsson-tls-super-jumbo-record-limit
> Diff: 
> https://author-tools.ietf.org/iddiff?url2=draft-mattsson-tls-super-jumbo-record-limit-02
>
> Abstract:
>
>RFC 8449 defines a record size limit extension for TLS and DTLS
>allowing endpoints to negotiate a record size limit smaller than the
>protocol-defined maximum record size, which is around 2^14 bytes.
>This document specifies a TLS flag extension to be used in
>combination with the record size limit extension allowing endpoints
>to use a record size limit larger than the protocol-defined maximum
>record size, but not more than about 2^16 bytes.
>
>
>
> The IETF Secretariat


[TLS] Next steps for Large Record Sizes for TLS and DTLS

2024-03-19 Thread John Mattsson
Hi,

My summary from the TLS WG session yesterday:

- Let’s adopt and figure out the final details later.
- Show performance data.
- Should be new extension, i.e., not used together with "record size limit".
- The new extension should redefine the meaning of the uint16 length field in 
the TLSCiphertext to allow records larger than 2^16 bytes.

Simple suggestion:

In the new extension the client and server negotiate a uint8 value n. The client
suggests a value n_max. The server selects n where 0 <= n <= n_max or rejects the
extension. Agreeing on a value n means:

- The length field in the record means 2^n * length bytes instead of length
bytes, i.e., left-shifted, similar to the TCP window scale option.
- The client and server are willing to receive records of size 2^n * (2^16 - 1)
bytes.
- Up to 2^n - 1 bytes of padding might be required.
- AEAD limits are reduced by a factor of 2^(n+2).
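The arithmetic of the shifted-length scheme can be sketched as follows (hypothetical helper names; a record is padded to a multiple of 2^n so its size is representable as length_field * 2^n):

```python
def encode_scaled_length(record_len: int, n: int) -> tuple[int, int]:
    """Return (length_field, padding) for a record of record_len bytes
    under the proposed scheme, where the uint16 length field means
    2^n * length bytes. Up to 2^n - 1 bytes of padding are added so the
    padded size is an exact multiple of 2^n."""
    padding = (-record_len) % (1 << n)
    padded = record_len + padding
    length_field = padded >> n
    assert length_field < (1 << 16), "record too large for this n"
    return length_field, padding

def decode_scaled_length(length_field: int, n: int) -> int:
    """Recover the padded record size: a left shift, analogous to the
    TCP window scale option."""
    return length_field << n
```

With n = 0 the scheme degenerates to today's behavior; with n = 4 the largest receivable record is 2^4 * (2^16 - 1) bytes, just over 1 MiB.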

Thoughts?

Cheers,
John Preuß Mattsson

From: internet-dra...@ietf.org 
Date: Tuesday, 5 March 2024 at 06:16
To: John Mattsson , Michael Tüxen 
, Hannes Tschofenig , Hannes 
Tschofenig , John Mattsson 
, Michael Tuexen 
Subject: New Version Notification for 
draft-mattsson-tls-super-jumbo-record-limit-02.txt
A new version of Internet-Draft
draft-mattsson-tls-super-jumbo-record-limit-02.txt has been successfully
submitted by John Preuß Mattsson and posted to the
IETF repository.

Name: draft-mattsson-tls-super-jumbo-record-limit
Revision: 02
Title:Large Record Sizes for TLS and DTLS
Date: 2024-03-04
Group:Individual Submission
Pages:6
URL:  
https://www.ietf.org/archive/id/draft-mattsson-tls-super-jumbo-record-limit-02.txt
Status:   
https://datatracker.ietf.org/doc/draft-mattsson-tls-super-jumbo-record-limit/
HTML: 
https://www.ietf.org/archive/id/draft-mattsson-tls-super-jumbo-record-limit-02.html
HTMLized: 
https://datatracker.ietf.org/doc/html/draft-mattsson-tls-super-jumbo-record-limit
Diff: 
https://author-tools.ietf.org/iddiff?url2=draft-mattsson-tls-super-jumbo-record-limit-02

Abstract:

   RFC 8449 defines a record size limit extension for TLS and DTLS
   allowing endpoints to negotiate a record size limit smaller than the
   protocol-defined maximum record size, which is around 2^14 bytes.
   This document specifies a TLS flag extension to be used in
   combination with the record size limit extension allowing endpoints
   to use a record size limit larger than the protocol-defined maximum
   record size, but not more than about 2^16 bytes.



The IETF Secretariat


Re: [TLS] A suggestion for handling large key shares

2024-03-19 Thread Kampanakis, Panos
ACK, thx, I had missed the discussions. It looks like what I was referring to 
is addressed more prescriptively in rfc8446bis. That is great.

From: David Benjamin 
Sent: Tuesday, March 19, 2024 4:42 PM
To: Kampanakis, Panos 
Cc: Scott Fluhrer (sfluhrer) ; TLS@ietf.org
Subject: RE: [EXTERNAL] [TLS] A suggestion for handling large key shares


CAUTION: This email originated from outside of the organization. Do not click 
links or open attachments unless you can confirm the sender and know the 
content is safe.


I think you're several discussions behind here. :-P I don't think 
draft-ietf-tls-hybrid-design makes sense here. This has nothing to do with 
hybrids, but anything with large key shares. If one were to do Kyber on its 
own, this would apply too. Rather, per the discussion at IETF 118, the WG opted 
to add some clarifications to rfc8446bis in light of draft-00.

It has also turned out that:
a) RFC 8446 actually already defined the semantics (when I wrote draft-00, I'd 
thought it was ambiguous), though the clarification definitely helped
b) The implementation that motivated the downgrade concern says this was not a
bug from misunderstanding the protocol, but an intentional design decision

Given that, the feedback on the list and 
https://github.com/davidben/tls-key-share-prediction/issues/5, I concluded 
past-me was overthinking this and we can simply define the DNS mechanism and 
say it is the server's responsibility to interpret the preexisting TLS spec 
text correctly and pick what it believes is a coherent selection policy. So 
draft-01 now simply defines the DNS mechanism without any complex codepoint 
classification and includes some discussion of the situation in Security 
Considerations, as you noted.

Of what remains in Security Considerations, the random client MAY is specific 
to this draft and does not make sense to move. The server NOT RECOMMENDED is 
simply restating the preexisting implications of RFC 8446 and the obvious 
implications of believing some options are more secure than others. If someone 
wishes to replicate it into another document, they're welcome to, but I 
disagree with moving it. In the context of the discussion in that section, it 
makes sense to restate this implication because this is very relevant to why 
it's okay for the client to use DNS to influence key shares.

David

On Wed, Mar 20, 2024 at 6:08 AM Kampanakis, Panos  wrote:
Hi Scott, David,

I think it would make more sense for the normative language about Client and 
Server behavior (section 3.2, 3.4) in 
draft-davidben-tls-key-share-prediction-00 
(https://www.ietf.org/archive/id/draft-davidben-tls-key-share-prediction-00.html
 ) to go in draft-ietf-tls-hybrid-design. These are now discussed in the Sec 
Considerations of draft-davidben-tls-key-share-prediction-01, but the “SHOULD” 
and “SHOULD NOT” language from -00 section 3.2 and 3.4 ought to be in 
draft-ietf-tls-hybrid-design.

I definitely want to see draft-davidben-tls-key-share-prediction move forward 
too.

Rgs,
Panos

From: TLS  On Behalf Of David Benjamin
Sent: Tuesday, March 19, 2024 1:26 AM
To: Scott Fluhrer (sfluhrer) 
Cc: TLS@ietf.org
Subject: RE: [EXTERNAL] [TLS] A suggestion for handling large key shares


> If the server supports P256+ML-KEM, what Matt suggested is that, instead of 
> accepting P256, it instead a ClientHelloRetry with P256+ML_KEM.  We then 
> continue as expected and end up negotiating things in 2 round trips.

I assume ClientHelloRetry was meant to be HelloRetryRequest? If so, yeah, a 
server which aims to prefer P256+ML-KEM over P256 should, well, prefer 
P256+ML-KEM over P256. :-) See the discussions around 
draft-davidben-tls-key-share-prediction. In particular, RFC 8446 is clear on 
the semantics of such a ClientHello:

   This vector MAY be empty if the client is requesting a
   HelloRetryRequest.  Each KeyShareEntry value MUST correspond to a
   group offered in the "supported_groups" extension and MUST appear in
   the same order.  However, the values MAY be a non-contiguous subset
   of the "supported_groups" extension and MAY omit the most preferred
   groups.  Such a situation could arise if the most preferred groups
   are new and unlikely to be supported in enough places to make
   pregenerating key shares for them efficient.

rfc8446bis contains further clarifications: 
https://github.com/tlswg/tls13-spec/pull/1331

Now, some servers (namely OpenSSL) will instead unconditionally select from 
key_share first. This isn't wrong, per se. It is how you implement a server 
which believes all of its supported groups are of comparable security level and 
therefore prioritizes round trips. Such a policy is plausible when you only 

Re: [TLS] Question about Large Record Sizes draft and the TLS design

2024-03-19 Thread Jan-Frederik Rieckers

On 20.03.24 11:08, David Benjamin wrote:
> I can't say what was going on in the SSLv3 days, but yes record size
> limits are important for memory. Whatever the maximum record size is,
> the peer can force you to buffer that many bytes in memory. That means
> the maximum record size is actually a DoS parameter for the protocol.


Ok, that at least confirms the theory.
The difference between 16KiB and 64KiB seems small with current
computers, but I suppose back in the SSL days this was a huge difference,
and it remains a nice side effect for current embedded systems with
limited memory.



I think what puzzled me most was that there was no explanation at all 
about why that limit is there. It seemed a bit random (why 2^14 and not 
2^15 or 2^10), and whenever there is a restriction like that, IMHO there 
should be some explanation as to why it is that way.


Looking at specs from an implementer's and researcher's view, 
implementers want to understand why they need to follow specific 
restrictions, researchers want to know what assumptions were made, so 
they can prove that the assumptions were correct (or use that assumption 
in their research).


Also for future specifications, it's always good to have rationales so 
you can understand if a proposal that would change that (like the Large 
Record Sizes draft) would break things or not.
Maybe there is a not immediately obvious reason as to why a 
seemingly-arbitrary restriction is put in place.


But I guess this is not an issue unique to TLS, but one for all IETF documents.


Cheers,
Janfred

--
Herr Jan-Frederik Rieckers
Security, Trust & Identity Services

E-Mail: rieck...@dfn.de | Fon: +49 30884299-339 | Fax: +49 30884299-370
Pronomen: er/sein | Pronouns: he/him
__

DFN - Deutsches Forschungsnetz | German National Research and Education 
Network

Verein zur Förderung eines Deutschen Forschungsnetzes e.V.
Alexanderplatz 1 | 10178 Berlin
https://www.dfn.de

Vorstand: Prof. Dr.-Ing. Stefan Wesner | Prof. Dr. Helmut Reiser | 
Christian Zens

Geschäftsführung: Dr. Christian Grimm | Jochem Pattloch
VR AG Charlottenburg 7729B | USt.-ID. DE 136623822




Re: [TLS] Question about Large Record Sizes draft and the TLS design

2024-03-19 Thread Salz, Rich
> Whatever the maximum record size is, the peer can force you to buffer
> that many bytes in memory. That means the maximum record size is actually
> a DoS parameter for the protocol.

Absolutely true. If you have a limit, attackers will try to push your server up 
to and over the limit and try to bring you down.  Unfortunately.


Re: [TLS] Question about Large Record Sizes draft and the TLS design

2024-03-19 Thread David Benjamin
I can't say what was going on in the SSLv3 days, but yes record size limits
are important for memory. Whatever the maximum record size is, the peer can
force you to buffer that many bytes in memory. That means the maximum
record size is actually a DoS parameter for the protocol.
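The DoS bound follows directly from the length prefix: a receiver must refuse lengths above the negotiated maximum before buffering. A minimal sketch (hypothetical parser; 16384 + 256 approximates the TLS 1.3 TLSCiphertext ceiling of 2^14 + 256 bytes):

```python
def read_record(data: bytes, max_record_size: int = 16384 + 256) -> bytes:
    """Parse one length-prefixed record from data, rejecting lengths
    above the negotiated maximum so a peer cannot force the receiver
    to buffer an unbounded amount (2-byte length, as in TLS)."""
    if len(data) < 2:
        raise ValueError("truncated header")
    length = int.from_bytes(data[:2], "big")
    if length > max_record_size:
        # A real TLS stack sends a record_overflow alert here.
        raise ValueError("record_overflow")
    if len(data) < 2 + length:
        raise ValueError("truncated record")
    return data[2:2 + length]
```

The `max_record_size` parameter is exactly the memory the peer can force you to commit per record, which is why the protocol caps it.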

On Wed, Mar 20, 2024 at 10:35 AM Jan-Frederik Rieckers 
wrote:

> Hi to all,
>
> during the presentation of the Large Record Sizes draft at the tls
> session yesterday, I wondered why the length restriction is in TLS in
> the first place.
>
> I have gone back to the TLS1.0 RFC, as well as SSLv3, TLS1.3 and TLS1.2
> and have found the restriction in all of them, but not a rationale why
> the length is artificially shortened, when the length is encoded as uint16.
>
> Does someone know what the rationale behind it is?
> One educated guess we came up with was that the limit was put there to
> ensure that implementations can make sure to not use too much memory,
> and using 2^14 was deemed a good compromise between memory usage and
> message length, but in my short research I haven't found any evidence
> that would confirm that guess.
>
>
> Cheers,
> Janfred
>


[TLS] Question about Large Record Sizes draft and the TLS design

2024-03-19 Thread Jan-Frederik Rieckers

Hi to all,

during the presentation of the Large Record Sizes draft at the tls 
session yesterday, I wondered why the length restriction is in TLS in 
the first place.


I have gone back to the TLS1.0 RFC, as well as SSLv3, TLS1.3 and TLS1.2 
and have found the restriction in all of them, but not a rationale why 
the length is artificially shortened, when the length is encoded as uint16.


Does someone know what the rationale behind it is?
One educated guess we came up with was that the limit was put there to 
ensure that implementations can make sure to not use too much memory, 
and using 2^14 was deemed a good compromise between memory usage and 
message length, but in my short research I haven't found any evidence 
that would confirm that guess.



Cheers,
Janfred



Re: [TLS] A suggestion for handling large key shares

2024-03-19 Thread David Benjamin
I think you're several discussions behind here. :-P I don't think
draft-ietf-tls-hybrid-design makes sense here. This has nothing to do with
hybrids, but anything with large key shares. If one were to do Kyber on its
own, this would apply too. Rather, per the discussion at IETF 118, the WG
opted to add some clarifications to rfc8446bis in light of draft-00.

It has also turned out that:
a) RFC 8446 actually already defined the semantics (when I wrote draft-00,
I'd thought it was ambiguous), though the clarification definitely helped
b) The implementation that motivated the downgrade concern says this was
not a bug from misunderstanding the protocol, but an intentional design
decision

Given that, the feedback on the list and
https://github.com/davidben/tls-key-share-prediction/issues/5, I concluded
past-me was overthinking this and we can simply define the DNS mechanism
and say it is the server's responsibility to interpret the preexisting TLS
spec text correctly and pick what it believes is a coherent selection
policy. So draft-01 now simply defines the DNS mechanism without any
complex codepoint classification and includes some discussion of the
situation in Security Considerations, as you noted.

Of what remains in Security Considerations, the random client MAY is
specific to this draft and does not make sense to move. The server NOT
RECOMMENDED is simply restating the preexisting implications of RFC 8446
and the obvious implications of believing some options are more secure than
others. If someone wishes to *replicate* it into another document, they're
welcome to, but I disagree with *moving* it. In the context of the
discussion in that section, it makes sense to restate this implication
because this is very relevant to why it's okay for the client to use DNS to
influence key shares.

David

On Wed, Mar 20, 2024 at 6:08 AM Kampanakis, Panos  wrote:

> Hi Scott, David,
>
>
>
> I think it would make more sense for the normative language about Client
> and Server behavior (section 3.2, 3.4) in
> draft-davidben-tls-key-share-prediction-00 (
> https://www.ietf.org/archive/id/draft-davidben-tls-key-share-prediction-00.html
> ) to go in draft-ietf-tls-hybrid-design. These are now discussed in the Sec
> Considerations of draft-davidben-tls-key-share-prediction-01, but the
> “SHOULD” and “SHOULD NOT” language from -00 section 3.2 and 3.4 ought to be
> in draft-ietf-tls-hybrid-design.
>
>
>
> I definitely want to see draft-davidben-tls-key-share-prediction move
> forward too.
>
>
>
> Rgs,
>
> Panos
>
>
>
> *From:* TLS  *On Behalf Of * David Benjamin
> *Sent:* Tuesday, March 19, 2024 1:26 AM
> *To:* Scott Fluhrer (sfluhrer) 
> *Cc:* TLS@ietf.org
> *Subject:* RE: [EXTERNAL] [TLS] A suggestion for handling large key shares
>
>
>
>
> > If the server supports P256+ML-KEM, what Matt suggested is that, instead
> of accepting P256, it instead a ClientHelloRetry with P256+ML_KEM.  We then
> continue as expected and end up negotiating things in 2 round trips.
>
>
>
> I assume ClientHelloRetry was meant to be HelloRetryRequest? If so, yeah,
> a server which aims to prefer P256+ML-KEM over P256 should, well, prefer
> P256+ML-KEM over P256. :-) See the discussions around
> draft-davidben-tls-key-share-prediction. In particular, RFC 8446 is clear
> on the semantics of such a ClientHello:
>
>
>
>This vector MAY be empty if the client is requesting a
>HelloRetryRequest.  Each KeyShareEntry value MUST correspond to a
>group offered in the "supported_groups" extension and MUST appear in
>the same order.  However, the values MAY be a non-contiguous subset
>of the "supported_groups" extension and MAY omit the most preferred
>groups.  Such a situation could arise if the most preferred groups
>are new and unlikely to be supported in enough places to make
>pregenerating key shares for them efficient.
>
>
>
> rfc8446bis contains further clarifications:
> https://github.com/tlswg/tls13-spec/pull/1331
>
>
>
> Now, some servers (namely OpenSSL) will instead unconditionally select
> from key_share first. This isn't wrong, per se. It is how you implement a
> server which believes all of its supported groups are of comparable
> security level and therefore prioritizes round trips. Such a policy is
> plausible when you only support, say, ECDH curves. It's not so reasonable
> if you support both ECDH and a PQ KEM. But all the spec text for that is in
> place, so all that is left is that folks keep this in mind when adding PQ
> KEMs to a TLS implementation. A TLS stack that always looks at key_share
> first is not PQ-ready and will need some changes before adopting PQ KEMs.
>
>
>
> Regarding the other half of this:
>
>
>
> > Suppose we have a client that supports both P-256 and P256+ML-KEM.  What
> the client does is send a key share for 

Re: [TLS] A suggestion for handling large key shares

2024-03-19 Thread Rob Sayre
On Tue, Mar 19, 2024 at 12:41 AM Bas Westerbaan  wrote:

> Hi Scott,
>
> I generally agree with David, in particular that the keyshare prediction
> draft is the way forward.
>

Hi,

David did not like this idea, but it's also possible to bake this
preference into ECH. If your ECHConfig requires a certain keyshare,
say X25519+Kyber, then you can enforce this choice.* This way is more
brittle for sure, but you can enforce the requirement.

thanks,
Rob

*
https://datatracker.ietf.org/doc/html/draft-ietf-tls-esni#config-extensions-guidance


Re: [TLS] A suggestion for handling large key shares

2024-03-19 Thread Kampanakis, Panos
Hi Scott, David,

I think it would make more sense for the normative language about Client and 
Server behavior (section 3.2, 3.4) in 
draft-davidben-tls-key-share-prediction-00 
(https://www.ietf.org/archive/id/draft-davidben-tls-key-share-prediction-00.html
 ) to go in draft-ietf-tls-hybrid-design. These are now discussed in the Sec 
Considerations of draft-davidben-tls-key-share-prediction-01, but the “SHOULD” 
and “SHOULD NOT” language from -00 section 3.2 and 3.4 ought to be in 
draft-ietf-tls-hybrid-design.

I definitely want to see draft-davidben-tls-key-share-prediction move forward 
too.

Rgs,
Panos

From: TLS  On Behalf Of David Benjamin
Sent: Tuesday, March 19, 2024 1:26 AM
To: Scott Fluhrer (sfluhrer) 
Cc: TLS@ietf.org
Subject: RE: [EXTERNAL] [TLS] A suggestion for handling large key shares


> If the server supports P256+ML-KEM, what Matt suggested is that, instead of 
> accepting P256, it instead a ClientHelloRetry with P256+ML_KEM.  We then 
> continue as expected and end up negotiating things in 2 round trips.

I assume ClientHelloRetry was meant to be HelloRetryRequest? If so, yeah, a 
server which aims to prefer P256+ML-KEM over P256 should, well, prefer 
P256+ML-KEM over P256. :-) See the discussions around 
draft-davidben-tls-key-share-prediction. In particular, RFC 8446 is clear on 
the semantics of such a ClientHello:

   This vector MAY be empty if the client is requesting a
   HelloRetryRequest.  Each KeyShareEntry value MUST correspond to a
   group offered in the "supported_groups" extension and MUST appear in
   the same order.  However, the values MAY be a non-contiguous subset
   of the "supported_groups" extension and MAY omit the most preferred
   groups.  Such a situation could arise if the most preferred groups
   are new and unlikely to be supported in enough places to make
   pregenerating key shares for them efficient.

rfc8446bis contains further clarifications: 
https://github.com/tlswg/tls13-spec/pull/1331
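To make the quoted rule concrete, here is a minimal sketch (hypothetical group names and structures, not any real TLS library's API) of the ClientHello shape RFC 8446 permits: "supported_groups" lists every group the client accepts, while "key_share" carries shares for only a subset of them, in the same relative order.

```python
# Sketch of RFC 8446's key_share/supported_groups correspondence rule.
# Group names are illustrative placeholders, not real codepoint names.

supported_groups = ["P256+ML-KEM", "P256", "X25519"]  # full preference list
predicted = {"P256"}  # only pregenerate the cheap, widely supported share

# Shares MUST correspond to offered groups and MUST preserve their order,
# but MAY be a non-contiguous subset and MAY omit the most preferred groups:
key_share_groups = [g for g in supported_groups if g in predicted]

assert all(g in supported_groups for g in key_share_groups)
print(key_share_groups)  # ['P256'] -- the most preferred group is omitted
```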

Now, some servers (namely OpenSSL) will instead unconditionally select from 
key_share first. This isn't wrong, per se. It is how you implement a server 
which believes all of its supported groups are of comparable security level and 
therefore prioritizes round trips. Such a policy is plausible when you only 
support, say, ECDH curves. It's not so reasonable if you support both ECDH and 
a PQ KEM. But all the spec text for that is in place, so all that is left is 
that folks keep this in mind when adding PQ KEMs to a TLS implementation. A TLS 
stack that always looks at key_share first is not PQ-ready and will need some 
changes before adopting PQ KEMs.
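The two selection policies can be contrasted in a short sketch (hypothetical helper names, not any real stack's API), given a ClientHello that offers P256+ML-KEM in supported_groups but predicts only a P256 key share:

```python
server_prefs = ["P256+ML-KEM", "P256"]   # server's order of preference
offered = ["P256+ML-KEM", "P256"]        # client's supported_groups
shares = ["P256"]                        # groups a key share was sent for

def key_share_first(prefs, offered, shares):
    # Prioritize round trips: take any supported group that already has a
    # share. Defensible for all-ECDH servers; not PQ-ready.
    for g in shares:
        if g in prefs:
            return g
    return next((g for g in prefs if g in offered), None)

def preference_first(prefs, offered, shares):
    # PQ-ready: honor server preference order over supported_groups, even
    # if the chosen group lacks a share and costs a HelloRetryRequest.
    return next((g for g in prefs if g in offered), None)

print(key_share_first(server_prefs, offered, shares))   # P256
print(preference_first(server_prefs, offered, shares))  # P256+ML-KEM
```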

Regarding the other half of this:

> Suppose we have a client that supports both P-256 and P256+ML-KEM.  What the 
> client does is send a key share for P-256, and also indicate support for 
> P256+ML-KEM.  Because we’re including only the P256 key share, the client 
> hello is short

I don't think this is a good tradeoff and would oppose such a SHOULD here. PQ 
KEMs are expensive as they are. Adding a round-trip to it will only make it 
worse. Given the aim is to migrate the TLS ecosystem to PQ, penalizing the 
desired state doesn't make sense. Accordingly, Chrome's Kyber deployment 
includes X25519Kyber768 in the initial ClientHello. While this does mean paying 
an unfortunate upfront cost, this alternative would instead disincentivize 
servers from deploying post-quantum protections.

If you're interested in avoiding the upfront cost, see 
draft-davidben-tls-key-share-prediction-01. That provides a mechanism for 
clients to predict more accurately, though it's yet to even be adopted, so it's 
a bit early to rely on that one. Note also the Security Considerations section, 
which further depends on the server expectations above.

David

On Tue, Mar 19, 2024 at 2:47 PM Scott Fluhrer (sfluhrer)  wrote:
Recently, Matt Campagna emailed the hybrid KEM group (Douglas, Shay and me) 
about a suggestion about one way to potentially improve the performance (in the 
‘the server hasn’t upgraded yet’ case), and asked if we should add that 
suggestion to our draft.  It occurs to me that this suggestion is equally 
applicable to the pure ML-KEM draft (and future PQ drafts as well); hence 
putting it in our draft might not be the right spot.

Here’s the core idea (Matt’s original scenario was more complicated):


  *   Suppose we have a client that supports both P-256 and P256+ML-KEM.  What 
the client does is send a key share for P-256, and also indicate support for 
P256+ML-KEM.  Because we’re including only the P256 key share, the client hello 
is short
  *   If the server supports only P256, it accepts it, and life goes on as 
normal.
  *   If the server supports P256+ML-KEM, what Matt suggested is that, instead 
of accepting P256, it instead a ClientHelloRetry with P256+ML_KEM.  We then 
continue as expected and end up negotiating things in 2 round trips.
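The server-side decision these bullets describe can be sketched as follows (hypothetical names, not a real stack's API): a non-upgraded server accepts P256 as usual, while an upgraded one answers with a retry naming P256+ML-KEM.

```python
def respond(server_groups, supported_groups, key_share_groups):
    """Return ('accept', group) or ('hello_retry_request', group)."""
    for g in server_groups:  # server preference order, best first
        if g in supported_groups:
            if g in key_share_groups:
                return ("accept", g)            # 1-RTT path
            return ("hello_retry_request", g)   # costs one extra round trip
    raise ValueError("no mutually supported group")

client_groups, client_shares = ["P256+ML-KEM", "P256"], ["P256"]
print(respond(["P256"], client_groups, client_shares))
# ('accept', 'P256'): non-upgraded server, no performance hit
print(respond(["P256+ML-KEM", "P256"], client_groups, client_shares))
# ('hello_retry_request', 'P256+ML-KEM'): upgraded server, 2 round trips
```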

Re: [TLS] [EXT] Re: ML-KEM key agreement for TLS 1.3

2024-03-19 Thread Blumenthal, Uri - 0553 - MITLL
I support Kris, and would like to see codepoints added for MLKEM-512, 
MLKEM-768, and MLKEM-1024.

-- 
V/R, 
Uri 




On 3/19/24, 00:11, "TLS" <tls-boun...@ietf.org> on behalf of Kris Kwiatkowski <k...@amongbytes.com> wrote:




Hello,


I would like to express my support for getting a codepoint for ML-KEM (the 
queue was closed quicker than I expected, so didn’t have a chance to do it at 
the meeting). 


The motivation:
* First of all the integration is rather straightforward.
* MLKEM already got a large amount of research from the crypto community, from 
a large number of various research groups - theorists, designers, implementers 
as well as experts in side-channel protection. Deirdre mentioned that schemes 
were studied for the last 7 years, but it is worth remembering that Kyber is a 
modification of the LPR cryptosystem, introduced already in 2010. 
* There is a cost to a 2-step migration (to hybrid and then to pure PQ); I don't 
believe it's good to force everyone to pay that cost.


Additionally, I think we should also get a codepoint for MLKEM-512.


-- 
Kris Kwiatkowski
Cryptography Dev








___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] A suggestion for handling large key shares

2024-03-19 Thread Bas Westerbaan
Hi Scott,

I generally agree with David, in particular that the keyshare prediction
draft is the way forward.

There is a different use-case for your mechanism, which you didn't mention:
it's less likely to trip over buggy servers / middleboxes. We use it as the
default mechanism to talk to our customer's origins.
https://blog.cloudflare.com/post-quantum-to-origins

Thanks to Chrome's efforts, for browsers, we luckily don't need to use this
slower mechanism.

One inline comments down below.

Best,

 Bas


On Tue, Mar 19, 2024 at 5:47 AM Scott Fluhrer (sfluhrer)  wrote:

> Recently, Matt Campagna emailed the hybrid KEM group (Douglas, Shay and
> me) about a suggestion about one way to potentially improve the performance
> (in the ‘the server hasn’t upgraded yet’ case), and asked if we should add
> that suggestion to our draft.  It occurs to me that this suggestion is
> equally applicable to the pure ML-KEM draft (and future PQ drafts as well);
> hence putting it in our draft might not be the right spot.
>
>
>
> Here’s the core idea (Matt’s original scenario was more complicated):
>
>
>
>- Suppose we have a client that supports both P-256 and P256+ML-KEM.
>What the client does is send a key share for P-256, and also indicate
>support for P256+ML-KEM.  Because we’re including only the P256 key share,
>the client hello is short
>- If the server supports only P256, it accepts it, and life goes on as
>normal.
>- If the server supports P256+ML-KEM, what Matt suggested is that,
>instead of accepting P256, it instead a ClientHelloRetry with P256+ML_KEM.
>We then continue as expected and end up negotiating things in 2 round 
> trips.
>
>
>
> Hence, the non-upgraded scenario has no performance hit; the upgraded
> scenario does (because of the second round trip), but we’re transmitting
> more data anyways
>

The roundtrip is quite a bit more costly than the extra kilobyte of upload
with respect to latency.
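As a rough illustration of that point (all numbers assumed, purely illustrative, not measurements):

```python
# Back-of-envelope comparison: one extra round trip vs ~1 KB of extra upload.

rtt_ms = 50.0        # assumed client<->server round-trip time
uplink_bps = 10e6    # assumed 10 Mbit/s uplink
extra_bytes = 1184   # ML-KEM-768 encapsulation key, the bulk of the growth

extra_upload_ms = extra_bytes * 8 / uplink_bps * 1000
print(f"extra upload:     {extra_upload_ms:.2f} ms")  # well under 1 ms
print(f"extra round trip: {rtt_ms:.2f} ms")           # 50.00 ms
```

Even on a slow uplink, the serialization cost of the larger ClientHello is dwarfed by a full extra round trip.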


> (and the client could, if it communicates with the server again, lead off
> with the proposal that was accepted last time).
>
>
>
> Matt’s suggestion was that this should be a SHOULD in our draft.
>
>
>
> My questions to you: a) do you agree with this suggestion, and b) if so,
> where should this SHOULD live?  Should it be in our draft?  The ML-KEM
> draft as well (assuming there is one, and it’s not just a codepoint
> assignment)?  Another RFC about how to handle large key shares in general
> (sounds like overkill to me, unless we have other things to put in that
> RFC)?
>
>
>
> Thank you.
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] A suggestion for handling large key shares

2024-03-19 Thread tirumal reddy
On Tue, 19 Mar 2024 at 15:27, David Benjamin  wrote:

> > If the server supports P256+ML-KEM, what Matt suggested is that, instead
> of accepting P256, it instead a ClientHelloRetry with P256+ML_KEM.  We then
> continue as expected and end up negotiating things in 2 round trips.
>
> I assume ClientHelloRetry was meant to be HelloRetryRequest? If so, yeah,
> a server which aims to prefer P256+ML-KEM over P256 should, well, prefer
> P256+ML-KEM over P256. :-) See the discussions around
> draft-davidben-tls-key-share-prediction. In particular, RFC 8446 is clear
> on the semantics of such a ClientHello:
>
>This vector MAY be empty if the client is requesting a
>HelloRetryRequest.  Each KeyShareEntry value MUST correspond to a
>group offered in the "supported_groups" extension and MUST appear in
>the same order.  However, the values MAY be a non-contiguous subset
>of the "supported_groups" extension and MAY omit the most preferred
>groups.  Such a situation could arise if the most preferred groups
>are new and unlikely to be supported in enough places to make
>pregenerating key shares for them efficient.
>
> rfc8446bis contains further clarifications:
> https://github.com/tlswg/tls13-spec/pull/1331
>
> Now, some servers (namely OpenSSL) will instead unconditionally select
> from key_share first. This isn't wrong, per se. It is how you implement a
> server which believes all of its supported groups are of comparable
> security level and therefore prioritizes round trips. Such a policy is
> plausible when you only support, say, ECDH curves. It's not so reasonable
> if you support both ECDH and a PQ KEM. But all the spec text for that is in
> place, so all that is left is that folks keep this in mind when adding PQ
> KEMs to a TLS implementation. A TLS stack that always looks at key_share
> first is not PQ-ready and will need some changes before adopting PQ KEMs.
>
> Regarding the other half of this:
>
> > Suppose we have a client that supports both P-256 and P256+ML-KEM.  What
> the client does is send a key share for P-256, and also indicate support
> for P256+ML-KEM.  Because we’re including only the P256 key share, the
> client hello is short
>
> I don't think this is a good tradeoff and would oppose such a SHOULD here.
> PQ KEMs are expensive as they are. Adding a round-trip to it will only make
> it worse. Given the aim is to migrate the TLS ecosystem to PQ, penalizing
> the desired state doesn't make sense. Accordingly, Chrome's Kyber
> deployment includes X25519Kyber768 in the initial ClientHello. While this
> does mean paying an unfortunate upfront cost, this alternative would
> instead disincentivize servers from deploying post-quantum protections.
>
> If you're interested in avoiding the upfront cost, see
> draft-davidben-tls-key-share-prediction-01. That provides a mechanism for
> clients to predict more accurately, though it's yet to even be adopted, so
> it's a bit early to rely on that one. Note also the Security Considerations
> section, which further depends on the server expectations above.
>

The scenario of handling large key sizes is discussed in
https://www.ietf.org/archive/id/draft-reddy-uta-pqc-app-02.html#section-4

-Tiru


>
> David
>
> On Tue, Mar 19, 2024 at 2:47 PM Scott Fluhrer (sfluhrer)  wrote:
>
>> Recently, Matt Campagna emailed the hybrid KEM group (Douglas, Shay and
>> me) about a suggestion about one way to potentially improve the performance
>> (in the ‘the server hasn’t upgraded yet’ case), and asked if we should add
>> that suggestion to our draft.  It occurs to me that this suggestion is
>> equally applicable to the pure ML-KEM draft (and future PQ drafts as well);
>> hence putting it in our draft might not be the right spot.
>>
>>
>>
>> Here’s the core idea (Matt’s original scenario was more complicated):
>>
>>
>>
>>- Suppose we have a client that supports both P-256 and P256+ML-KEM.
>>What the client does is send a key share for P-256, and also indicate
>>support for P256+ML-KEM.  Because we’re including only the P256 key share,
>>the client hello is short
>>- If the server supports only P256, it accepts it, and life goes on
>>as normal.
>>- If the server supports P256+ML-KEM, what Matt suggested is that,
>>instead of accepting P256, it instead a ClientHelloRetry with P256+ML_KEM.
>>We then continue as expected and end up negotiating things in 2 round 
>> trips.
>>
>>
>>
>> Hence, the non-upgraded scenario has no performance hit; the upgraded
>> scenario does (because of the second round trip), but we’re transmitting
>> more data anyways (and the client could, if it communicates with the server
>> again, lead off with the proposal that was accepted last time).
>>
>>
>>
>> Matt’s suggestion was that this should be a SHOULD in our draft.
>>
>>
>>
>> My questions to you: a) do you agree with this suggestion, and b) if so,
>> where should this SHOULD live?  Should it be in our