[TLS]Re: WG Adoption for TLS Trust Expressions

2024-05-23 Thread David Benjamin
On Thu, May 23, 2024 at 11:09 AM Dennis Jackson  wrote

>
> > I think we have to agree that Trust Expressions enables websites to
> adopt new CA chains regardless of client trust and even builds a
> centralized mechanism for doing so. It is a core feature of the design.
>
> No one has to agree to this because you have not backed this claim at all.
> Nick sent two long emails explaining why this was not the case, both of
> which you have simply dismissed [...]
>
> This is something that I believe David Benjamin and the other draft
> authors, and I all agree on. You and Nick seem to have misunderstood either
> the argument or the draft.
>
> David Benjamin, writing on behalf of Devon and Bob as well:
>
> By design, a multi-certificate model removes the ubiquity requirement for
> a trust anchor to be potentially useful for a server operator.
>
> [...]
>
> Server operators, once software is in place, not needing to be concerned
> about new trust expressions or changes to them. The heavy lifting is
> between the root program and the CA.
>
> From the Draft (Section 7):
>
> Subscribers SHOULD use an automated issuance process where the CA
> transparently provisions multiple certification paths, without changes to
> subscriber configuration.
>
> The CA can provision whatever chains it likes without the operator's
> involvement. These chains do not have to be trusted by any clients. This is
> a centralized mechanism which allows one party (the CA) to ship multiple
> chains of its choice to all of its subscribers. This obviously has
> beneficial use cases, but there are also cases where this can be abused.
>

Hi Dennis,

Since you seem to be trying to speak on my behalf, I'm going to go ahead
and correct this now. This is not true. I think you have misunderstood how
this extension works. In fact, the extension with this property is the
certificate_authorities extension, already standardized by the TLSWG, and
with an even longer history as a non-extension field of the
CertificateRequest message.

At the end of the day, the TLS components of trust expressions are simply a
more size-efficient form of the certificate_authorities field. The rest is
working through the deployment implications to reduce server operator
burden. However, we achieve this size efficiency by *not* sending the CAs'
names. Instead, the CA sets are indirected through named and versioned
"trust stores". The price one inherently pays for this is that servers need
to know how to map from those trust stores back to the certificates. We
solve this with the TrustStoreInclusionList metadata from the CA.

That TrustStoreInclusionList structure is necessarily a point-in-time
snapshot of the state of the world. If a root program has not included a CA
yet, the CA cannot claim it in the metadata, or connections will fail. If
the CA is included in zero root programs, the only viable
TrustStoreInclusionList (i.e., one that is correct and does not cause
interop issues) is the empty list, in which case the certificate will never
be presented.
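
To make that concrete, here is a rough sketch of the server-side selection,
in illustrative Python. The names approximate the draft's concepts rather
than its exact structures, so treat this as a sketch of the idea, not the
specification:

from collections import namedtuple

# What the client advertises: a named, versioned trust store, plus labels
# it has since removed.
TrustExpression = namedtuple(
    "TrustExpression", "trust_store version excluded_labels")

# What the CA ships alongside each certification path: which trust stores
# (and version ranges) that path is known to be included in.
Inclusion = namedtuple(
    "Inclusion", "trust_store low_version high_version label")

def select_path(client_exprs, candidate_paths):
    # candidate_paths: list of (path, [Inclusion, ...]) pairs, where the
    # Inclusion metadata comes from the CA, not the server operator.
    for path, inclusions in candidate_paths:
        for inc in inclusions:
            for expr in client_exprs:
                if inc.trust_store != expr.trust_store:
                    continue
                # The CA can only claim versions it has actually been
                # added to. A CA in zero root programs ships an empty
                # inclusion list, so its path never matches here.
                if not (inc.low_version <= expr.version <= inc.high_version):
                    continue
                # excluded_labels handles version skew for removals.
                if inc.label in expr.excluded_labels:
                    continue
                return path
    # No trust expression matched: send the default chain, as today.
    return candidate_paths[0][0] if candidate_paths else None

Nothing in that loop can ever select a path whose inclusion list is empty,
no matter what the server operator has deployed.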

If the root program were to add that CA later, the server *still will not
send those certificates* to updated clients. It takes a
new TrustStoreInclusionList from the CA for the certificate to be sent. We
can (and must, for interop) efficiently solve version skew for
removals, hence the excluded_labels machinery. But version skew for
additions requires sending *something* preexisting that names the CA. Now,
if the client really, really wanted to trigger that certificate, it could
do so today: it could send the name of the CA in the
certificate_authorities extension. I wouldn't expect clients to want to
waste bandwidth on this. Regardless, trust expressions has no involvement
in that process; the tool you use there is the certificate_authorities
extension. Now, _after_ a root program has made an addition, trust
expressions' deployment model allows for reduced server operator burden as
in the text you quoted, but at that point the CA has already been trusted.

Of course, whether this property (servers being able to usefully pre-deploy
not-yet-added trust anchors), which trust expressions does not have, even
matters boils down to whether a root program would misinterpret availability
in servers as a sign of CA trustworthiness, when the two are clearly
unrelated. Ultimately, the trustworthiness of CAs is a subjective social
question: do we believe this CA has only signed, *and will continue to only
sign*, true things? We can build measures like Certificate Transparency to
retroactively catch issues, but the key question is fundamentally
forward-looking. The role of a root program is to make judgement calls on
this question. A root program that so misunderstands its role in this
system that it conflates these two isn't going to handle its other
load-bearing responsibilities either.

David
___
TLS mailing list -- tls@ietf.org
To unsubscribe send an email to tls-le...@ietf.org


[TLS]Re: Working Group Last Call for Legacy RSASSA-PKCS1-v1_5 codepoints for TLS 1.3

2024-05-22 Thread David Benjamin
On Wed, May 22, 2024 at 10:27 AM Salz, Rich  wrote:

> > This email starts the working group last call for "Legacy
> RSASSA-PKCS1-v1_5 codepoints for TLS 1.3” I-D, located here:
>
> No comments, ship it.
>
> > The only comment/question I have about this I-D (and I hope this is not
> too much of a bikeshed) is whether the Recommended column should be “D”
> instead of “N”.
>
> I think that would be a mistake as it makes the vast deployment of
> existing TPM machines nonconformant.  In a few years, maybe. For now,
> not-recommended is strong enough.
>

(I don't have strong feelings on this and am happy to defer to whatever
everyone else wants. Just briefly noting that "N" in the document isn't an
explicit preference here; "D" simply didn't exist at the time the document
was written.)

David
___
TLS mailing list -- tls@ietf.org
To unsubscribe send an email to tls-le...@ietf.org


[TLS]Re: WG Adoption for TLS Trust Expressions

2024-05-21 Thread David Benjamin
Hi Richard. Thanks for the comments! Replies inline.

On Mon, May 6, 2024 at 10:23 AM Richard Barnes  wrote:

> Hi all,
>
> Coming in late here.  Appreciate the discussion so far.  FWIW, here's how
> I'm thinking through this:
>
> I would frame the basic problem here as follows, since I think the use
> cases presented are all basically corollaries: Root store fragmentation
> makes it hard for server operators to make sure they can authenticate to
> all of the clients they want to connect with.  Note that the pain is
> non-zero regardless of technology.  The more clients have differing
> requirements, the more work servers are going to have to do to support them
> all.
>
> Pain = (Amount of fragmentation) * (Pain per fragmentation)
>
> The question at issue here is how trust expressions affect the inputs to
> this equation.
>
> Shifting from a single-certificate to a multi-certificate world shifts the
> pain, from "How do I pick the most widely accepted cert?" to "How do I make
> sure I have the right selection of certificates?"  I probably agree that
> this is a net reduction in pain for a given level of fragmentation.
>

I think we’re broadly in agreement here. Fragmentation exists today, both
between different root programs and between versions of a given client, and
there is a significant amount of pain for affected server operators, who
have no option but to find a new ubiquitously trusted CA and reissue.

We’re particularly concerned about this server operator pain because it
translates to security risks for billions of users. If root program actions
cause server operator pain, there is significant pressure against those
actions. The end result of this is that root store changes happen
infrequently, with the ultimate cost being that user security cannot
benefit from PKI improvements.

It’s worth noting that, for a given set of target clients, picking the most
widely accepted certificate is not merely painful but potentially
infeasible. Picking a larger selection of certificates allows the server
operator to meet their needs. There is still some cost to selecting from
too many certificates, but trust expressions greatly relieves the pressures
whose costs, again, are ultimately paid in user security.

We also anticipate many of those costs can be mitigated by instead imposing
smaller costs on CAs, who already have existing relationships with root
programs. Indeed, CAs already make decisions about supported clients, by
deciding which cross-signs and intermediates to include and which to
retire. Trust expressions makes these decisions more explicit.


> I probably also agree with Dennis, though, that reducing the pain at a
> given level of fragmentation will increase the temptation to more
> fragmentation.  The country-level stuff is there, but even some of the
> putative use cases look like more fragmentation -- more algorithms,
> changing root store policies more frequently.  Playing the combinatorics
> out here, how many certs is a server going to have to maintain?
>

To some degree, yes, we want to increase fragmentation *when it is
necessary to benefit user security*. Fragmentation is an inherent
consequence of root program changes, and root programs often need changes
to meet user security (and, with post-quantum, performance) needs, but the
costs today are prohibitive to the point that root programs cannot
effectively meet those needs.

Of course, unnecessary fragmentation is undesirable. Trust expressions
fixes the prohibitive costs but, as you allude to, there are still costs.
We don’t want servers to need to maintain unboundedly many certificates.
However, note that these same costs are pressure against excessive,
unnecessary fragmentation.

It’s hard to say exact numbers at this stage. We can only learn with
deployment experience, hence our desire to progress toward adoption and
experimentation.


> As an aside here, speaking as someone who used to run a root program, I'm
> not sure that reducing the barriers to adding new CAs to a root program is
> a net benefit.  Obviously we don't want things to ossify, but it seems like
> experience has shown that small, niche CAs cause more trouble in terms of
> compliance checking and misissuance than the benefit that they bring in
> terms of diversity.
>

This is an important point; most modern root programs, including Chrome and
Mozilla, are trending towards increased requirements on CAs to become
trusted, including greater agility among trust anchors. This agility reduces
the risk of powerful, long-lived keys and allows for faster adoption of
security improvements, but it imposes additional pain on subscribers, who
can only deploy one certificate to meet the needs of a set of clients that
are changing faster than ever before.

We do not expect that to change. Trust expressions *do not remove any
barriers from including a CA 

[TLS]Re: WG Adoption for TLS Trust Expressions

2024-05-21 Thread David Benjamin
(replies inline)

On Sun, May 5, 2024 at 6:48 PM Dennis Jackson  wrote:

> Hi David, Devon, Bob,
>
> I feel much of your response talks past the issue that was raised at IETF
> 118.
>
> The question we're evaluating is NOT "If we were in a very unhappy world
> where governments controlled root certificates on client devices and used
> them for mass surveillance, does Trust Expressions make things worse?".
> Although Watson observed that the answer to this is at least 'somewhat',
> I agree such a world is already maxed at 10/10 on the bad worlds to live
> in scale and so it's not by itself a major problem in my view.
>
> The actual concern is: to what extent do Trust Expressions increase the
> probability that we end up in this unhappy world of government CAs used for
> mass surveillance?
>
> The case made earlier in the thread is that it increases the probability
> substantially because it provides an effective on-ramp for new CAs even
> if they exist entirely outside of existing root stores. Websites can
> adopt such a CA without being completely broken and unavailable as they
> would be today. Although I think it's unlikely anyone would independently
> do this, it's easy to see a website choosing to add such a certificate
> (which is harmless by itself) if a government incentivized or required
> it.  Trust Expressions also enables existing CAs to force-push a cert chain
> from a new CA to a website,  without the consent or awareness of the
> website operator, further enabling the proliferation of untrusted (and
> presumably unwanted) CAs.
>
> These features neatly solve the key challenges of deploying a government
> CA, which as discussed at length in the thread, are to achieve enough
> legitimacy through website adoption to have a plausible case for enforcing
> client adoption. The real problem here is that you've (accidentally?)
> built a system that makes it much easier to adopt and deploy any new CA
> regardless of trust, rather than a system that makes it easier to deploy
> & adopt any new *trusted* CA. If you disagree with this assessment, it
> would be great to hear your thoughts on why. Unfortunately, none of the
> arguments in your email come close to addressing this point and the text
> in the draft pretty much tries to lampshade these problems as a feature.
>
Our understanding of your argument is that it will be easier for
governments to force clients to trust a CA if a sufficient number of
websites have deployed certificates from that CA. We just don’t agree with
this assertion and don’t see websites’ deployment as a factor in trust
store inclusion decisions in this scenario.


> The other side of this risk evaluation is assessing how effectively Trust
> Expressions solves real problems.
>
> Despite a lot of discussion, I've only seen one compelling unsolved
> problem which Trust Expressions is claimed to be able to solve. That is
> the difficulty large sites have supporting very old clients with
> out-of-date root stores (as described by Kyle). This leads to sites using
> complex & brittle TLS fingerprinting to decide which certificate chain to
> send or to sites using very particular CAs designed to maximize
> compatibility (e.g. Cloudflare's recent change).
>
> However, it's unclear how Trust Expressions solves either fingerprinting
> or the new trusted root ubiquity challenge. To solve the former, we're
> relying on the adoption of Trust Expressions by device manufacturers who
> historically have not been keen to adopt new TLS extensions. For the
> latter, Trust Expressions doesn't seem to solve anything. Sites / CDNs are
> still forced to either have a business arrangement with a single suitably
> ubiquitous root or to conclude multiple such arrangements (which come with
> considerable baggage) with both new and ubiquitous roots - in return for no
> concrete benefit. If we had Trust Expressions deployed today, how would
> life be better for LE / Cloudflare or other impacted parties?
>
It isn’t necessary for older device manufacturers to adopt Trust
Expressions. Rather, Trust Expressions would be adopted by modern clients,
allowing them to improve user security without being held back by older
clients that don’t update. Servers may still need to navigate intersections
and fingerprinting for older clients, but this will be unconstrained by
modern ones. It will also become simpler, with fingerprinting less
prevalent, as older clients fall out of the ecosystem.


> I won't detail them here, but it seems like there are simpler and more
> effective alternatives that would address the underlying problem, e.g.
> through root stores encouraging cross-signing or offering cross-signing
> services themselves and using existing techniques to avoid any impact at
> the TLS layer.
>
> I'm struggling to see it being an even partially effective solution for any
> of the other proposed use cases. To pick an example you've repeatedly
> highlighted, can you clarify how Trust Expressions will speed the
> 

[TLS]Re: Adoption Call for draft-davidben-tls-key-share-prediction

2024-05-21 Thread David Benjamin
Servers using DNSSEC won't help unless the client only honors the hint over
DNSSEC, and we do not live in a universe where DNSSEC succeeded to the
point that that's remotely viable.

I think that too can be discussed in detail post adoption, but I think such
a change would negate the value of this whole draft.

On Tue, May 21, 2024, 09:56 A A  wrote:

> In my opinion, to prevent downgrade attack, server MUST or SHOULD using
> DNSSEC when announcing DNS record.
>
>
> 21.05.2024, 21:48, "David Benjamin" :
>
> Off the cuff, folding it into the transcript sounds tricky, since existing
> TLS servers won't know to do it, and, as with any other DNS hints, we need
> to accommodate the DNS being out of sync with the server. It'll also be
> more difficult to deploy due to needing changes in the TLS stack and
> generally require much, much tighter coordination between DNS and TLS. I'd
> like for that coordination to be more viable (see my comments on the
> .well-known draft), but I don't think we're there yet.
>
> But I'm certainly open to continue discussing it and this problem space!
> The original version of the draft actually tried a lot harder to handle the
> downgrade story. Rather than mess with the transcript, it defined away all
> the negotiation algorithms where this would be a problem and keyed the
> NamedGroup codepoints to know when you could be guaranteed of the narrower
> server behavior.
>
> My read of the feedback was that people thought this was an unnecessary
> complication and that servers doing a key-share-first selection were doing
> so intentionally because they believed the options roughly equivalent. So I
> took all that out and replaced it with text to that effect.
>
> David
>
>
> On Tue, May 21, 2024, 08:54 Eric Rescorla  wrote:
>
> I agree that it's attractive to be able to hint in the HTTPS RR, but I'm
> less sure about addressing the basic insecurity of the DNS channel with the
> approach this draft takes. I don't have a complete thought here, but what
> if we were to somehow fold the hint into the handshake transcript? I
> suppose we can sort this out post-adoption, but I'd like the question to be
> on the table.
>
> -Ekr
>
>
> On Fri, May 3, 2024 at 3:05 PM Joseph Salowey  wrote:
>
> This is a working group call for adoption
> for draft-davidben-tls-key-share-prediction.  This document was presented
> at IETF 118 and has undergone some revision based on feedback since then.
> The current draft is available here:
> https://datatracker.ietf.org/doc/draft-davidben-tls-key-share-prediction/.
> Please read the document and indicate if and why you support or do not
> support adoption as a TLS working group item. If you support adoption
> please, state if you will help review and contribute text to the document.
> Please respond to this call by May 20, 2024.
>
> Thanks,
>
> Joe, Deidre, and Sean
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
> ___
> TLS mailing list -- tls@ietf.org
> To unsubscribe send an email to tls-le...@ietf.org
>
> ,
>
> ___
> TLS mailing list -- tls@ietf.org
> To unsubscribe send an email to tls-le...@ietf.org
>
>
___
TLS mailing list -- tls@ietf.org
To unsubscribe send an email to tls-le...@ietf.org


[TLS]Re: Adoption Call for draft-davidben-tls-key-share-prediction

2024-05-21 Thread David Benjamin
Off the cuff, folding it into the transcript sounds tricky, since existing
TLS servers won't know to do it, and, as with any other DNS hints, we need
to accommodate the DNS being out of sync with the server. It'll also be
more difficult to deploy due to needing changes in the TLS stack and
generally require much, much tighter coordination between DNS and TLS. I'd
like for that coordination to be more viable (see my comments on the
.well-known draft), but I don't think we're there yet.

But I'm certainly open to continue discussing it and this problem space!
The original version of the draft actually tried a lot harder to handle the
downgrade story. Rather than mess with the transcript, it defined away all
the negotiation algorithms where this would be a problem and keyed the
NamedGroup codepoints to know when you could be guaranteed of the narrower
server behavior.

My read of the feedback was that people thought this was an unnecessary
complication and that servers doing a key-share-first selection were doing
so intentionally because they believed the options roughly equivalent. So I
took all that out and replaced it with text to that effect.
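
For anyone less steeped in the details, the two server behaviors being
contrasted are roughly these (illustrative Python, not any particular
stack's API):

def select_group_by_preference(server_prefs, client_groups, client_shares):
    # Pick the server's most preferred mutually supported group, accepting
    # a HelloRetryRequest if the client didn't predict a share for it.
    for group in server_prefs:
        if group in client_groups:
            return group, group not in client_shares  # (group, needs_hrr)
    return None, False

def select_group_key_share_first(server_prefs, client_groups, client_shares):
    # Prefer any acceptable group the client already sent a share for
    # (key shares are a subset of the client's supported groups), to avoid
    # the extra round trip; otherwise fall back to the above.
    for group in server_prefs:
        if group in client_shares:
            return group, False
    return select_group_by_preference(server_prefs, client_groups,
                                      client_shares)

The second strategy is the one where a hint that steers the client's
key_share can also steer which group is negotiated, which is what the
earlier design tried to carve out.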

David


On Tue, May 21, 2024, 08:54 Eric Rescorla  wrote:

> I agree that it's attractive to be able to hint in the HTTPS RR, but I'm
> less sure about addressing the basic insecurity of the DNS channel with the
> approach this draft takes. I don't have a complete thought here, but what
> if we were to somehow fold the hint into the handshake transcript? I
> suppose we can sort this out post-adoption, but I'd like the question to be
> on the table.
>
> -Ekr
>
>
> On Fri, May 3, 2024 at 3:05 PM Joseph Salowey  wrote:
>
>> This is a working group call for adoption
>> for draft-davidben-tls-key-share-prediction.  This document was presented
>> at IETF 118 and has undergone some revision based on feedback since then.
>> The current draft is available here:
>> https://datatracker.ietf.org/doc/draft-davidben-tls-key-share-prediction/.
>> Please read the document and indicate if and why you support or do not
>> support adoption as a TLS working group item. If you support adoption
>> please, state if you will help review and contribute text to the document.
>> Please respond to this call by May 20, 2024.
>>
>> Thanks,
>>
>> Joe, Deidre, and Sean
>> ___
>> TLS mailing list
>> TLS@ietf.org
>> https://www.ietf.org/mailman/listinfo/tls
>>
> ___
> TLS mailing list -- tls@ietf.org
> To unsubscribe send an email to tls-le...@ietf.org
>
___
TLS mailing list -- tls@ietf.org
To unsubscribe send an email to tls-le...@ietf.org


[TLS]Re: WG Adoption for TLS Trust Expressions

2024-05-10 Thread David Benjamin
Resending this since it seems IETF lists were broken recently (
https://www.ietf.org/blog/ietf-mailing-list-delivery-issues/). Hopefully it
works this time.

On Thu, May 9, 2024 at 10:45 AM David Benjamin 
wrote:

> Hi Richard. Thanks for the comments! Replies inline.
>
> On Mon, May 6, 2024 at 10:23 AM Richard Barnes  wrote:
>
>> Hi all,
>>
>> Coming in late here.  Appreciate the discussion so far.  FWIW, here's how
>> I'm thinking through this:
>>
>> I would frame the basic problem here as follows, since I think the use
>> cases presented are all basically corollaries: Root store fragmentation
>> makes it hard for server operators to make sure they can authenticate to
>> all of the clients they want to connect with.  Note that the pain is
>> non-zero regardless of technology.  The more clients have differing
>> requirements, the more work servers are going to have to do to support them
>> all.
>>
>> Pain = (Amount of fragmentation) * (Pain per fragmentation)
>>
>> The question at issue here is how trust expressions affect the inputs to
>> this equation.
>>
>> Shifting from a single-certificate to a multi-certificate world shifts
>> the pain, from "How do I pick the most widely accepted cert?" to "How do I
>> make sure I have the right selection of certificates?"  I probably agree
>> that this is a net reduction in pain for a given level of fragmentation.
>>
>
> I think we’re broadly in agreement here. Fragmentation exists today, both
> between different root programs and between versions of a given client and
> there is a significant amount of pain involved for affected server
> operators with no option but to find a new ubiquitously-trusted CA and
> reissue.
>
> We’re particularly concerned about this server operator pain because it
> translates to security risks for billions of users. If root program actions
> cause server operator pain, there is significant pressure against those
> actions. The end result of this is that root store changes happen
> infrequently, with the ultimate cost being that user security cannot
> benefit from PKI improvements.
>
> It’s worth noting that, for a given set of target clients, picking the
> most widely accepted certificate is not merely painful but potentially
> infeasible. Picking a larger selection of certificates allows the server
> operator to meet their needs. There is still some cost to selecting from
> too many certificates, but trust expressions greatly relieves the pressures
> that, again, ultimately are paid by user security.
>
> We also anticipate many of those costs can be mitigated by instead
> imposing smaller costs on CAs, who already have existing relationships with
> root programs. Indeed, CAs already make decisions about supported clients,
> by deciding which cross-signs and intermediates to include and which to
> retire. Trust expressions makes these decisions more explicit.
>
>
>> I probably also agree with Dennis, though, that reducing the pain at a
>> given level of fragmentation will increase the temptation to more
>> fragmentation.  The country-level stuff is there, but even some of the
>> putative use cases look like more fragmentation -- more algorithms,
>> changing root store policies more frequently.  Playing the combinatorics
>> out here, how many certs is a server going to have to maintain?
>>
>
> To some degree, yes, we want to increase fragmentation *when it is
> necessary to benefit user security*. Fragmentation is an inherent
> consequence of root program changes, and root programs often need changes
> to meet user security (and, with post-quantum, performance) needs, but the
> costs today are prohibitive to the point that root programs cannot
> effectively meet those needs.
>
> Of course, unnecessary fragmentation is undesirable. Trust expressions
> fixes the prohibitive costs but, as you allude to, there are still costs.
> We don’t want servers to need to maintain unboundedly many certificates.
> However, note that these same costs are pressure against excessive,
> unnecessary fragmentation.
>
> It’s hard to say exact numbers at this stage. We can only learn with
> deployment experience, hence our desire to progress toward adoption and
> experimentation.
>
>
>> As an aside here, speaking as someone who used to run a root program, I'm
>> not sure that reducing the barriers to adding new CAs to a root program is
>> a net benefit.  Obviously we don't want things to ossify, but it seems like
>> experience has shown that small, niche CAs cause more trouble in terms of
>> compliance checking and misissuance than the benefit that they bring in
>> terms

[TLS]Re: WG Adoption for TLS Trust Expressions

2024-05-10 Thread David Benjamin
Resending this since it seems IETF lists were broken recently (
https://www.ietf.org/blog/ietf-mailing-list-delivery-issues/). Hopefully it
works this time.

On Thu, May 9, 2024 at 10:40 AM David Benjamin 
wrote:

> (replies inline)
>
> On Sun, May 5, 2024 at 6:48 PM Dennis Jackson  40dennis-jackson...@dmarc.ietf.org> wrote:
>
>> Hi David, Devon, Bob,
>>
>> I feel much of your response talks past the issue that was raised at
>> IETF 118.
>>
>> The question we're evaluating is NOT "If we were in a very unhappy world
>> where governments controlled root certificates on client devices and used
>> them for mass surveillance, does Trust Expressions make things worse?".
>> Although Watson observed that the answer to this is at least 'somewhat',
>> I agree such a world is already maxed at 10/10 on the bad worlds to live
>> in scale and so it's not by itself a major problem in my view.
>>
>> The actual concern is: to what extent do Trust Expressions increase the
>> probability that we end up in this unhappy world of government CAs used for
>> mass surveillance?
>>
>> The case made earlier in the thread is that it increases the probability
>> substantially because it provides an effective on-ramp for new CAs even
>> if they exist entirely outside of existing root stores. Websites can
>> adopt such a CA without being completely broken and unavailable as they
>> would be today. Although I think it's unlikely anyone would
>> independently do this, it's easy to see a website choosing to add such a
>> certificate (which is harmless by itself) if a government incentivized or
>> required it.  Trust Expressions also enables existing CAs to force-push a
>> cert chain from a new CA to a website,  without the consent or awareness
>> of the website operator, further enabling the proliferation of untrusted
>> (and presumably unwanted) CAs.
>>
>> These features neatly solve the key challenges of deploying a government
>> CA, which as discussed at length in the thread, are to achieve enough
>> legitimacy through website adoption to have a plausible case for enforcing
>> client adoption. The real problem here is that you've (accidentally?)
>> built a system that makes it much easier to adopt and deploy any new CA
>> regardless of trust, rather than a system that makes it easier to deploy
>> & adopt any new *trusted* CA. If you disagree with this assessment, it
>> would be great to hear your thoughts on why. Unfortunately, none of the
>> arguments in your email come close to addressing this point and the text
>> in the draft pretty much tries to lampshade these problems as a feature.
>>
> Our understanding of your argument is that it will be easier for
> governments to force clients to trust a CA if a sufficient number of
> websites have deployed certificates from that CA. We just don’t agree with
> this assertion and don’t see websites’ deployment as a factor in trust
> store inclusion decisions in this scenario.
>
>
>> The other side of this risk evaluation is assessing how effectively Trust
>> Expressions solves real problems.
>>
>> Despite a lot of discussion, I've only seen one compelling unsolved
>> problem which Trust Expressions is claimed to be able to solve. That is
>> the difficulty large sites have supporting very old clients with
>> out-of-date root stores (as described by Kyle). This leads to sites
>> using complex & brittle TLS fingerprinting to decide which certificate
>> chain to send or to sites using very particular CAs designed to maximize
>> compatibility (e.g. Cloudflare's recent change).
>>
>> However, it's unclear how Trust Expressions solves either fingerprinting
>> or the new trusted root ubiquity challenge. To solve the former, we're
>> relying on the adoption of Trust Expressions by device manufacturers who
>> historically have not been keen to adopt new TLS extensions. For the
>> latter, Trust Expressions doesn't seem to solve anything. Sites / CDNs are
>> still forced to either have a business arrangement with a single
>> suitably ubiquitous root or to conclude multiple such arrangements (which
>> come with considerable baggage) with both new and ubiquitous roots - in
>> return for no concrete benefit. If we had Trust Expressions deployed
>> today, how would life be better for LE / Cloudflare or other impacted
>> parties?
>>
> It isn’t necessary for older device manufacturers to adopt Trust
> Expressions. Rather, Trust Expressions would be adopted by modern clients,
> allowing them to improve user security without being held back by older
> clients that don’t update. Ser

[TLS]Re: HTTPS-RR and TLS

2024-05-08 Thread David Benjamin
On Wed, May 8, 2024 at 3:50 PM Watson Ladd  wrote:

> On Tue, May 7, 2024 at 8:07 AM David Benjamin 
> wrote:
> >
> > [changing the subject since I expect this to mostly be a tangential
> discussion]
> >
> > On Sat, May 4, 2024, 09:12 Stephen Farrell 
> wrote:
> >>
> >> I hope, as the WG are processing this
> [draft-davidben-tls-key-share-prediction], we consider what,
> >> if anything, else could be usefully added to HTTPS RRs
> >> to make life easier.
> >
> >
> > Actually, I think one thing that could help is one of your drafts! One
> barrier with trying to use HTTPS RR for TLS problems is keeping the DNS and
> TLS sides in sync on the server deployment. Prior to ECH, this hasn't been
> done before, so I wouldn't expect any deployments to have a robust path
> from their TLS configuration to their DNS records.
> >
> > draft-ietf-tls-wkech seems like a good model for this, but it is
> currently written specifically for ECH. What are your thoughts on
> generalizing that document to cover other cases as well?
> > https://github.com/sftcd/wkesni/issues/14
> >
> > We might also think about the extension model for that document. Does
> each SvcParamKey opt into use with the document, with its own JSON key and
> text describing how to map it, or should we just use the presentation
> syntax and import it all en masse? (I'm not sure. The latter would
> certainly be less work for new SvcParamKeys, but I'm not sure what the
> implications would be of the DNS frontend picking up SvcParamKeys it
> doesn't understand. Then again, we seem to have happily imported basically
> all the existing keys anyway.)
>
> The one reason I could see this being a problem is that the HTTPS RR
> RFC specifies per-key wire format encodings. If we use presentation
> format and import it all en masse we need to deal with what happens
> when the DNS infrastructure doesn't understand a ScvParamKey. If we
> encoded wire format somehow in the JSON, people making these would be
> sad. But I like the idea of extending this mechanism vs. making a new
> one.
>

I think there's a generic form for the presentation format (which is
basically the wire format but with more ASCII), but yeah the person
assembling the .well-known file has no good way to tell when something was
dropped. I think that concern applies to every option short of using the
wire format though. Even using string names for the keys must account for
the DNS frontend being unaware of the key.
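
To make the two options concrete, here are two purely hypothetical shapes,
written as Python literals for readability; neither is what
draft-ietf-tls-wkech actually specifies, and "tls-supported-groups" is just
an example key:

# Option A: each SvcParamKey opts in with its own JSON key, and the spec
# says how to map it.
per_key_style = {
    "endpoints": [{
        "priority": 1,
        "ech": "AEj+DQBF...",                # base64 ECHConfigList
        "tls-supported-groups": [29, 4588],  # NamedGroup codepoints
    }]
}

# Option B: import the whole record via presentation syntax; the DNS
# frontend copies it through, understood or not.
presentation_style = {
    "endpoints": [{
        "priority": 1,
        "params": 'ech="AEj+DQBF..." tls-supported-groups=29,4588',
    }]
}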

David
___
TLS mailing list -- tls@ietf.org
To unsubscribe send an email to tls-le...@ietf.org


[TLS]HTTPS-RR and TLS

2024-05-07 Thread David Benjamin
[changing the subject since I expect this to mostly be a tangential
discussion]

On Sat, May 4, 2024, 09:12 Stephen Farrell 
wrote:

> I hope, as the WG are processing this
> [draft-davidben-tls-key-share-prediction], we consider what,
> if anything, else could be usefully added to HTTPS RRs
> to make life easier.
>

Actually, I think one thing that could help is one of your drafts! One
barrier with trying to use HTTPS RR for TLS problems is keeping the DNS and
TLS sides in sync on the server deployment. Prior to ECH, this hadn't been
done, so I wouldn't expect any deployments to have a robust path
from their TLS configuration to their DNS records.

draft-ietf-tls-wkech seems like a good model for this, but it is currently
written specifically for ECH. What are your thoughts on generalizing that
document to cover other cases as well?
https://github.com/sftcd/wkesni/issues/14

We might also think about the extension model for that document. Does each
SvcParamKey opt into use with the document, with its own JSON key and text
describing how to map it, or should we just use the presentation syntax and
import it all en masse? (I'm not sure. The latter would certainly be less
work for new SvcParamKeys, but I'm not sure what the implications would be
of the DNS frontend picking up SvcParamKeys it doesn't understand. Then
again, we seem to have happily imported basically all the existing keys
anyway.)

David
___
TLS mailing list -- tls@ietf.org
To unsubscribe send an email to tls-le...@ietf.org


[TLS]Re: Adoption Call for draft-davidben-tls-key-share-prediction

2024-05-06 Thread David Benjamin
This document does not make any changes to the DNS queries made. It merely
adds a parameter to the existing HTTPS-RR/SVCB record, with pre-existing
rules on who queries it and when, and describes how TLS can use it.

The interaction between HTTPS-RR and proxies is complex and all already
covered by RFC 9460. It sounds like you may be assuming that the TLS client
would start querying this in an otherwise unmodified proxy connection flow,
but the reality is more complex than that, due to the need to support, e.g.
multi-CDN flows.

Rather, I would expect many proxy deployments to lose the record and
fail to apply this optimization altogether (with all the costs that entails
during PQ KEM transitions) because no one has yet defined a way for client
and proxy to coordinate on the split responsibilities. (Unless someone has
started this already and I missed it?) If this is a space you're interested
in, I think there is work to be done here.

Regardless, it's orthogonal to this document, which merely allocates a
parameter type.
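
For reference, the client-side use of the parameter is roughly the
following. This is a sketch only; the names and exact rules are illustrative
rather than quoted from the draft:

def choose_key_share_groups(client_groups, dns_hint, default_count=1):
    # client_groups: the client's supported groups, most preferred first.
    # dns_hint: groups advertised for the server via the HTTPS/SVCB
    # parameter, or None if the record was absent or lost (e.g. dropped
    # by a proxy that doesn't forward it).
    if not dns_hint:
        # No hint: predict the default most-preferred group(s), as today.
        return client_groups[:default_count]
    # Predict the client's most preferred group the server is said to
    # support. supported_groups in the ClientHello still carries the full
    # preference list, so a stale hint costs at most a HelloRetryRequest.
    for group in client_groups:
        if group in dns_hint:
            return [group]
    return client_groups[:default_count]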

David

On Mon, May 6, 2024, 08:31 Roelof duToit  wrote:

> The concept does indeed solve an important problem, but also introduces a
> new dependency in an environment that uses explicit proxies (mostly
> enterprise networks). In that environment this proposal, alongside ECH,
> introduces DNS queries at the TLS client endpoint where previously the DNS
> control point was limited to the proxy.  It would be good to mention that
> in the document.
>
> —Roelof
>
>
> On May 3, 2024, at 6:09 PM, David Benjamin  wrote:
>
> Unsurprisingly, I support adoption. :-)
>
> On Fri, May 3, 2024 at 6:05 PM Joseph Salowey  wrote:
>
>> This is a working group call for adoption
>> for draft-davidben-tls-key-share-prediction.  This document was presented
>> at IETF 118 and has undergone some revision based on feedback since then.
>> The current draft is available here:
>> https://datatracker.ietf.org/doc/draft-davidben-tls-key-share-prediction/.
>> Please read the document and indicate if and why you support or do not
>> support adoption as a TLS working group item. If you support adoption
>> please, state if you will help review and contribute text to the document.
>> Please respond to this call by May 20, 2024.
>>
>> Thanks,
>>
>> Joe, Deidre, and Sean
>> ___
>> TLS mailing list
>> TLS@ietf.org
>> https://www.ietf.org/mailman/listinfo/tls
>>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
>
>
___
TLS mailing list -- tls@ietf.org
To unsubscribe send an email to tls-le...@ietf.org


Re: [TLS] Adoption Call for draft-davidben-tls-key-share-prediction

2024-05-03 Thread David Benjamin
Slight clarification: this is an adoption call for a DNS hint for which key
shares to send in the ClientHello, not trust expressions. :-)

On Fri, May 3, 2024, 20:33 Salz, Rich 
wrote:

> I think it might be trying to be a cure-all for all PKI transition
> problems/issues, but I support adoption and hope we’ll narrow down the
> scope a bit.
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Adoption Call for draft-davidben-tls-key-share-prediction

2024-05-03 Thread David Benjamin
Unsurprisingly, I support adoption. :-)

On Fri, May 3, 2024 at 6:05 PM Joseph Salowey  wrote:

> This is a working group call for adoption
> for draft-davidben-tls-key-share-prediction.  This document was presented
> at IETF 118 and has undergone some revision based on feedback since then.
> The current draft is available here:
> https://datatracker.ietf.org/doc/draft-davidben-tls-key-share-prediction/.
> Please read the document and indicate if and why you support or do not
> support adoption as a TLS working group item. If you support adoption
> please, state if you will help review and contribute text to the document.
> Please respond to this call by May 20, 2024.
>
> Thanks,
>
> Joe, Deidre, and Sean
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] WG Adoption for TLS Trust Expressions

2024-05-02 Thread David Benjamin
 acknowledge that achieving this level of agility requires a significant
amount of design and implementation work for web servers, certificate
automation clients/servers, and clients to support, but we believe the
improvements called out in some of the discussion on this thread strongly
outweigh these costs, especially when we remove outcomes that are not
causally related to this mechanism. Additionally, while the design changes
what is communicated between relying parties, subscribers, CAs, and root
programs, it reuses the existing communication flows, making an incremental
transition more plausible. And finally, we think this will drastically
improve the ability to migrate the Internet to PQC—not just in terms of a
faster timeline, but because trust anchor agility will enable the community
to develop fundamentally better solutions for authentication, through
reduced experimentation costs and shorter-lived roots of trust.

Sincerely,

David, Devon, and Bob


On Mon, Apr 29, 2024 at 7:20 PM Dennis Jackson  wrote:

> When this work was presented at IETF 118 in November, several participants
> (including myself, Stephen Farrell and Nicola Tuveri) came to the mic to
> highlight that this draft's mechanism comes with a serious potential for
> abuse by governments (meeting minutes
> <https://notes.ietf.org/notes-ietf-118-tls#TLS-Trust-Expressions---Devon-O%E2%80%99Brien-David-Benjamin-Bob-Beck---30-min>).
>
>
> Although the authors acknowledged the issue in the meeting, no changes
> have been made since to either address the problem or document it as an
> accepted risk. I think its critical one of the two happens before this
> document is considered for adoption.
>
> Below is a brief recap of the unaddressed issue raised at 118 and some
> thoughts on next steps:
>
> Some governments (including, but not limited to Russia
> <https://www.eff.org/deeplinks/2022/03/you-should-not-trust-russias-new-trusted-root-ca>
> , Kazakhstan
> <https://en.wikipedia.org/wiki/Kazakhstan_man-in-the-middle_attack>,
> Mauritius
> <https://www.internetsociety.org/resources/internet-fragmentation/mauritius-ictas-threat-to-encryption/>)
> have previously established national root CAs in order to enable mass
> surveillance and censorship of their residents' web traffic. This requires
> trying to force residents to install these root CAs or adopt locally
> developed browsers which have them prepackaged. This is widely regarded as
> a bad thing (RFC 7258 <https://datatracker.ietf.org/doc/html/rfc7258>).
>
> Thankfully these efforts have largely failed because these national CAs
> have no legitimate adoption or use cases. Very few website operators would
> voluntarily use certificates from a national root CA when it means shutting
> out the rest of the world (who obviously do not trust that root CA) and
> even getting adoption within the country is very difficult since adopting
> sites are broken for residents without the national root cert.
>
> However, this draft provides a ready-made solution to this adoption
> problem: websites can be forced to adopt the national CA in addition to,
> rather than replacing, their globally trusted cert. This policy can even be
> justified in terms of security from the perspective of the government,
> since the national CA is under domestic supervision (see
> https://last-chance-for-eidas.org). This enables a gradual roll out by
> the government who can require sites to start deploying certs from the
> national CA in parallel with their existing certificates without any risk
> of breakage either locally or abroad, solving their adoption problem.
>
> Conveniently, post-adoption governments can also see what fraction of
> their residents' web traffic is using their national CA via the unencrypted
> trust expressions extension, which can inform their decisions about whether
> to block connections which don't indicate support for their national CA and
> as well advertising which connections they can intercept (using existing
> methods like mis-issued certs, key escrow) without causing a certificate
> error. This approach also scales so multiple countries can deploy national
> CAs with each being able to surveil their own residents but not each
> others.
>
> Although this may feel like a quite distant consequence of enabling trust
> negotiation in TLS, the on-ramp is straightforward:
>
>- Support for trust negotiation gets shipped in browsers and servers
>for ostensibly good reasons.
>- A large country or federation pushes for recognition of their
>domestic trust regime as a distinct trust store which browsers must
>advertise. Browsers agree because the relevant market is too big to leave.
>- Other countries push for the same recognition now that the dam is
>breached

Re: [TLS] WG Adoption for TLS Trust Expressions

2024-04-30 Thread David Benjamin
Hi all. Thanks for the discussion! While we're digesting it all, one quick
comment regarding the feedback in Prague:

From talking with folks at the meeting, it seemed part of this was due to a
misunderstanding. Trust expressions are not intended to capture per-user
customizations to root stores, as that has a number of issues. The intent
was to capture only what is implied by the browser + version. (Or the
analog for other kinds of TLS deployments. More precisely, base it on your
desired anonymity set.) We rewrote the privacy considerations section after
that meeting in response to this, to try to make that clearer.

On Tue, Apr 30, 2024 at 5:34 PM Brendan McMillion <
brendanmcmill...@gmail.com> wrote:

> This doesn't apply in case we're distrusting a CA because it's failed. In
>> 9.1 we're rotating keys. As I laid out in my initial mail, we can already
>> sign the new root with the old root to enable rotation. There's no size
>> impact to up-to-date clients using intermediate suppression or abridged
>> certs.
>>
>
> The approach you describe requires the cooperation of the CA, in signing
> the new root with the old root. My understanding is that CAs (especially
> CAs in trouble with their root program) are often uncooperative or
> absentee. It also requires the CA's customers to go through a full issuance
> cycle before they get certificates with the new root, which could take over
> a year, during which time the compromised root will still need to be
> trusted.
>
> This draft is substantially better than that. It normalizes websites
> having multiple certificates from different CAs. In a future world with
> widespread adoption of Trust Expressions and ACME, a root could be
> distrusted immediately without warning and nothing would break because
> websites would transparently switch to their alternate CA. During the very
> very long period in which this is being incrementally deployed by clients
> and servers, Trust Expressions is still substantially better than the
> approach you describe because it creates the possibility for clients to
> negotiate away from a compromised CA where possible (i.e., even if a
> website operator has taken no action but presents multiple certificates, a
> client can choose a certificate with a non-compromised root).
>
> If we want to say that, we should have an extension that actually says you
>> have an accurate clock.
>>
>
> As has been mentioned, it takes a very long time for TLS extensions to
> gain adoption by a broad set of client implementations, server
> implementations, and website operators. If we built an extension that just
> said "I have an accurate clock, you can send me short-lived certificates"
> then it would need adoption by client implementations, server
> implementations, and website operators, and this would take a long time.
> Trust Expressions creates a happy path where 1.) clients indicate support
> for a feature by trusting a fancy new CA, and 2.) website operators support
> that feature simply by configuring their ACME client to get a certificate
> from that CA. Changing the server implementation isn't necessary. This
> happy path seems quite nice and useful to me
>
>
> On Tue, Apr 30, 2024 at 8:38 AM Dennis Jackson  40dennis-jackson...@dmarc.ietf.org> wrote:
>
>> As mentioned above, we have such an extension already insofar as
>> indicating support for Delegated Credentials means indicating a desire for
>> a very short credential lifetime and an acceptance of the clock skew risks.
>>
>> Given how little use its seen, I don't know that its a good motivation
>> for Trust Expressions.
>> On 30/04/2024 16:33, Eric Rescorla wrote:
>>
>>
>>
>> On Tue, Apr 30, 2024 at 8:29 AM Watson Ladd 
>> wrote:
>>
>>> On Tue, Apr 30, 2024 at 8:25 AM Eric Rescorla  wrote:
>>> >
>>> >
>>> > On the narrow point of shorter lifetimes, I don't think the right way
>>> to advertise that you have an accurate clock is to advertise that you
>>> support some set of root certificates.
>>> >
>>> > If we want to say that, we should have an extension that actually says
>>> you have an accurate clock.
>>>
>>> That says you *think* you have an accurate clock.
>>>
>>
>> Quite so. However, if servers gate the use of some kind of short-lived
>> credential
>> on a client signal that the client thinks it has an accurate clock
>> (however that
>> signal is encoded) and the clients are frequently wrong about that, we're
>> going
>> to have big problems.
>>
>> -Ekr
>>
>>
>>
>>
>>> Sincerely,
>>> Watson
>>>
>>> --
>>> Astra mortemque praestare gradatim
>>>
>>
>> ___
>> TLS mailing list
>> TLS@ietf.org
>> https://www.ietf.org/mailman/listinfo/tls
>>
>> ___
>> TLS mailing list
>> TLS@ietf.org
>> https://www.ietf.org/mailman/listinfo/tls
>>
> ___
> TLS mailing 

Re: [TLS] Issues with buffered, ACKed KeyUpdates in DTLS 1.3

2024-04-27 Thread David Benjamin
What should the next steps be here? Is this a bunch of errata, or something
else?

On Wed, Apr 17, 2024 at 10:08 AM David Benjamin 
wrote:

> > Sender implementations should already be able to retransmit messages
> with older epochs due to the "duplicated" post-auth state machine
>
> The nice thing about option 7 is that the older epochs retransmit problem
> becomes moot in updated senders, I think. If the sender doesn't activate
> epoch N+1 until KeyUpdate *and prior messages* are ACKed and if KeyUpdate
> is required to be the last handshake message in epoch N, then the previous
> epoch is guaranteed to be empty by the time you activate it.
>
> On Wed, Apr 17, 2024, 09:27 Marco Oliverio  wrote:
>
>> Hi David,
>>
>> Thanks for pointing this out. I also favor solution 7 as it's the simpler
>> approach and it doesn't require too much effort to add in current
>> implementations.
>> Sender implementations should already be able to retransmit messages with
>> older epochs due to the "duplicated" post-auth state machine.
>>
>> Marco
>>
>> On Tue, Apr 16, 2024 at 3:48 PM David Benjamin 
>> wrote:
>>
>>> Thanks, Hannes!
>>>
>>> Since it was buried in there (my understanding of the issue evolved as I
>>> described it), I currently favor option 7. I.e. the sender-only fix to the
>>> KeyUpdate criteria.
>>>
>>> At first I thought we should also change the receiver to mitigate
>>> unfixed senders, but this situation should be pretty rare (most senders
>>> will send NewSessionTicket well before they KeyUpdate), DTLS 1.3 isn't very
>>> widely deployed yet, and ultimately, it's on the sender implementation to
>>> make sure all states they can get into are coherent.
>>>
>>> If the sender crashed, that's unambiguously on the sender to fix. If the
>>> sender still correctly retransmits the missing messages, the connection
>>> will perform suboptimally for a blip but still recover.
>>>
>>> David
>>>
>>>
>>> On Tue, Apr 16, 2024, 05:19 Tschofenig, Hannes <
>>> hannes.tschofe...@siemens.com> wrote:
>>>
>>>> Hi David,
>>>>
>>>>
>>>>
>>>> this is great feedback. Give me a few days to respond to this issue
>>>> with my suggestion for moving forward.
>>>>
>>>>
>>>>
>>>> Ciao
>>>>
>>>> Hannes
>>>>
>>>>
>>>>
>>>> *From:* TLS  *On Behalf Of *David Benjamin
>>>> *Sent:* Saturday, April 13, 2024 7:59 PM
>>>> *To:*  
>>>> *Cc:* Nick Harper 
>>>> *Subject:* Re: [TLS] Issues with buffered, ACKed KeyUpdates in DTLS 1.3
>>>>
>>>>
>>>>
>>>> Another issues with DTLS 1.3's state machine duplication scheme:
>>>>
>>>>
>>>>
>>>> Section 8 says implementation must not send new KeyUpdate until the
>>>> KeyUpdate is ACKed, but it says nothing about other post-handshake
>>>> messages. Suppose KeyUpdate(5) in flight and the implementation decides to
>>>> send NewSessionTicket. (E.g. the application called some
>>>> "send NewSessionTicket" API.) The new epoch doesn't exist yet, so naively
>>>> one would start sending NewSessionTicket(6) in the current epoch. Now the
>>>> peer ACKs KeyUpdate(5), so we transition to the new epoch. But
>>>> retransmissions must retain their original epoch:
>>>>
>>>>
>>>>
>>>> > Implementations MUST send retransmissions of lost messages using the
>>>> same epoch and keying material as the original transmission.
>>>>
>>>> https://www.rfc-editor.org/rfc/rfc9147.html#section-4.2.1-3
>>>>
>>>>
>>>>
>>>> This means we must keep sending the NST at the old epoch. But the peer
>>>> may have no idea there's a message at that epoch due to packet loss!
>>>> Section 8 does ask the peer to keep the old epoch around for a spell, but
>>>> eventually the peer will discard the old epoch. If NST(6) didn't get
>>>> through before then, the entire post-handshake stream is now wedged!
>>>>
>>>>
>>>>
>>>> I think this means we need to amend Section 8 to forbid sending *any*
>>>> post-handshake message after KeyUpdate. That is, rather than saying you
>>>> cannot send a new KeyUpdate, a KeyUpdate terminates the post-handshake

Re: [TLS] [EXT] Re: Deprecating Static DH certificates in the obsolete key exchange document

2024-04-23 Thread David Benjamin
I'll add that if we're wrong and someone *does* need these, it is all the
more important that we communicate our intentions! The current situation is
that we have effectively deprecated this by not adding a way to use those
certificates in TLS 1.3, but we forgot to say so. A hypothetical deployment
relying on these certificates would be unable to migrate to TLS 1.3, but
may not realize it yet if they're slow to upgrade.

That conflict is there whether we fix the registrations or not. Fixing the
registrations makes the conflict visible, so folks who need these can show
up and provide input.

On Tue, Apr 23, 2024 at 11:31 AM David Benjamin 
wrote:

> Having worked on a TLS implementation and removed code for this, I can
> tell you that is *not* simply a natural side-effect of supporting DH
> certificates. These modes interact with the TLS handshake logic a fair bit.
> They omit the ServerKeyExchange message and change the ClientKeyExchange
> message. The latter is extra fun because it's not determined by the cipher
> suite, but by what client certificate you got. (This is why TLS 1.2's
> message order needs to be a somewhat funny Certificate, ClientKeyExchange,
> CertificateVerify.) It's just code, and it's implementable, but as it is
> unused, there is no point in expending anyone's complexity budget on it.
>
> I support removing these and would echo everything Filippo said.
>
> Not every TLS implementor follows IETF discussions carefully, or is as
> well-connected as we are to the TLS ecosystem. We owe it to them to
> communicate our understanding and intentions with the protocol as clearly
> as we can. That includes marking things as a dead end when we believe them
> to be. If someone were to use one of these today, they would be in for a
> headache, between security issues, inability to upgrade to TLS 1.3, and
> interop failures with other stacks. At best, they waste their time. It is
> thus worth our time to document this, even if, yes, it means we have to do
> this kind of spring cleaning work. I'd like to thank the folks driving this
> for being willing to put time into this.
>
> We could make that work less time-consuming if we stopped repeating this
> same discussion every time we do this necessary and responsible task. It
> needn't be so much fuss to deprecate a thing that no one uses, and that we
> have already tacitly disavowed by not carrying forward to TLS 1.3.
>
> On Tue, Apr 23, 2024 at 6:08 AM Peter Gutmann 
> wrote:
>
>> Blumenthal, Uri - 0553 - MITLL  writes:
>>
>> >Nobody in the real world employs static DH anymore – in which case this
>> draft
>> >is useless/pointless
>>
>> It's not "any more", AFAICT from my inability to find any evidence of the
>> certificates needed for it in 25-odd years it's "nobody has ever used
>> static
>> DH" (with the absence-of-evidence caveat).
>>
>> >I’m amazed by drafts like this one. Is nothing constructive remains out
>> there
>> >to spend time and efforts on?
>>
>> Slow news day?  End-of-financial-year clearout?  Quota to fill?  Someone
>> lost
>> a bet?  Could be all sorts of things.
>>
>> Someone else commented on having seen code to support this, that's just a
>> natural side-effect of having code that supports DH and code that supports
>> certificates, you end up with code that probably supports DH certificates,
>> probably because without ever having seen one to test your code with you
>> can't
>> be 100% sure there isn't some glitch somewhere.  For example my code
>> happens
>> to support Elgamal certificates because there's Elgamal code in there for
>> PGP
>> support and so if you use an Elgamal key in a certificate you'll get an
>> Elgamal certificate.  As with the DH-cert code it's never been tested
>> because
>> I don't think such a thing as an Elgamal X.509 certificate exists, but in
>> theory there's support for them in there.
>>
>> Peter.
>>
>


Re: [TLS] [EXT] Re: Deprecating Static DH certificates in the obsolete key exchange document

2024-04-23 Thread David Benjamin
Having worked on a TLS implementation and removed code for this, I can tell
you that it is *not* simply a natural side-effect of supporting DH
certificates. These modes interact with the TLS handshake logic a fair bit.
They omit the ServerKeyExchange message and change the ClientKeyExchange
message. The latter is extra fun because it's not determined by the cipher
suite, but by what client certificate you got. (This is why TLS 1.2's
message order needs to be a somewhat funny Certificate, ClientKeyExchange,
CertificateVerify.) It's just code, and it's implementable, but as it is
unused, there is no point in expending anyone's complexity budget on it.
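
To make the coupling concrete, here is a rough sketch (illustrative only; the
names and structure are invented, not from any real stack) of how a TLS 1.2
client has to shape its second flight based on the kind of certificate it was
issued, not just the cipher suite:

    from dataclasses import dataclass

    @dataclass
    class ClientCert:
        static_dh: bool   # certified key is a static (EC)DH key
        can_sign: bool    # certified key can produce signatures

    def build_client_second_flight(client_cert):
        # Message names follow RFC 5246; everything else is illustrative.
        flight = []
        if client_cert is not None:
            flight.append("Certificate")
        if client_cert is not None and client_cert.static_dh:
            # Fixed-DH client certificate: the premaster secret comes from
            # the certified DH key, so ClientKeyExchange is empty/implicit
            # and CertificateVerify is omitted entirely.
            flight.append("ClientKeyExchange (implicit, empty)")
        else:
            flight.append("ClientKeyExchange")
            if client_cert is not None and client_cert.can_sign:
                # CertificateVerify only exists for signing-capable certs
                # and must follow ClientKeyExchange, hence the funny order.
                flight.append("CertificateVerify")
        flight += ["[ChangeCipherSpec]", "Finished"]
        return flight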

I support removing these and would echo everything Filippo said.

Not every TLS implementor follows IETF discussions carefully, or is as
well-connected as we are to the TLS ecosystem. We owe it to them to
communicate our understanding and intentions with the protocol as clearly
as we can. That includes marking things as a dead end when we believe them
to be. If someone were to use one of these today, they would be in for a
headache, between security issues, inability to upgrade to TLS 1.3, and
interop failures with other stacks. At best, they waste their time. It is
thus worth our time to document this, even if, yes, it means we have to do
this kind of spring cleaning work. I'd like to thank the folks driving this
for being willing to put time into this.

We could make that work less time-consuming if we stopped repeating this
same discussion every time we do this necessary and responsible task. It
needn't be so much fuss to deprecate a thing that no one uses, and that we
have already tacitly disavowed by not carrying it forward to TLS 1.3.

On Tue, Apr 23, 2024 at 6:08 AM Peter Gutmann 
wrote:

> Blumenthal, Uri - 0553 - MITLL  writes:
>
> >Nobody in the real world employs static DH anymore – in which case this
> draft
> >is useless/pointless
>
> It's not "any more", AFAICT from my inability to find any evidence of the
> certificates needed for it in 25-odd years it's "nobody has ever used
> static
> DH" (with the absence-of-evidence caveat).
>
> >I’m amazed by drafts like this one. Is nothing constructive remains out
> there
> >to spend time and efforts on?
>
> Slow news day?  End-of-financial-year clearout?  Quota to fill?  Someone
> lost
> a bet?  Could be all sorts of things.
>
> Someone else commented on having seen code to support this, that's just a
> natural side-effect of having code that supports DH and code that supports
> certificates, you end up with code that probably supports DH certificates,
> probably because without ever having seen one to test your code with you
> can't
> be 100% sure there isn't some glitch somewhere.  For example my code
> happens
> to support Elgamal certificates because there's Elgamal code in there for
> PGP
> support and so if you use an Elgamal key in a certificate you'll get an
> Elgamal certificate.  As with the DH-cert code it's never been tested
> because
> I don't think such a thing as an Elgamal X.509 certificate exists, but in
> theory there's support for them in there.
>
> Peter.
>


Re: [TLS] Issues with buffered, ACKed KeyUpdates in DTLS 1.3

2024-04-17 Thread David Benjamin
> Sender implementations should already be able to retransmit messages with
older epochs due to the "duplicated" post-auth state machine

The nice thing about option 7 is that the older epochs retransmit problem
becomes moot in updated senders, I think. If the sender doesn't activate
epoch N+1 until KeyUpdate *and prior messages* are ACKed and if KeyUpdate
is required to be the last handshake message in epoch N, then the previous
epoch is guaranteed to be empty by the time you activate the new one.
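
Concretely, a sender implementing that rule might gate activation of the new
epoch on something like the following sketch (purely illustrative; the names
are invented):

    from collections import namedtuple

    Sent = namedtuple("Sent", "seq acked")

    def may_activate_next_epoch(sent, keyupdate_seq):
        # Option-7 sketch: activate epoch N+1 only once the KeyUpdate and
        # all earlier handshake messages in epoch N have been ACKed. Since
        # the KeyUpdate is also the last message allowed in epoch N, nothing
        # is left to retransmit at the old epoch once this returns True.
        return all(m.acked for m in sent if m.seq <= keyupdate_seq)

    # may_activate_next_epoch([Sent(5, True), Sent(6, False), Sent(7, True)], 7)
    # -> False: message 6 still needs to be retransmitted in the old epoch.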

On Wed, Apr 17, 2024, 09:27 Marco Oliverio  wrote:

> Hi David,
>
> Thanks for pointing this out. I also favor solution 7 as it's the simpler
> approach and it doesn't require too much effort to add in current
> implementations.
> Sender implementations should already be able to retransmit messages with
> older epochs due to the "duplicated" post-auth state machine.
>
> Marco
>
> On Tue, Apr 16, 2024 at 3:48 PM David Benjamin 
> wrote:
>
>> Thanks, Hannes!
>>
>> Since it was buried in there (my understanding of the issue evolved as I
>> described it), I currently favor option 7. I.e. the sender-only fix to the
>> KeyUpdate criteria.
>>
>> At first I thought we should also change the receiver to mitigate unfixed
>> senders, but this situation should be pretty rare (most senders will send
>> NewSessionTicket well before they KeyUpdate), DTLS 1.3 isn't very widely
>> deployed yet, and ultimately, it's on the sender implementation to make
>> sure all states they can get into are coherent.
>>
>> If the sender crashed, that's unambiguously on the sender to fix. If the
>> sender still correctly retransmits the missing messages, the connection
>> will perform suboptimally for a blip but still recover.
>>
>> David
>>
>>
>> On Tue, Apr 16, 2024, 05:19 Tschofenig, Hannes <
>> hannes.tschofe...@siemens.com> wrote:
>>
>>> Hi David,
>>>
>>>
>>>
>>> this is great feedback. Give me a few days to respond to this issue with
>>> my suggestion for moving forward.
>>>
>>>
>>>
>>> Ciao
>>>
>>> Hannes
>>>
>>>
>>>
>>> *From:* TLS  *On Behalf Of *David Benjamin
>>> *Sent:* Saturday, April 13, 2024 7:59 PM
>>> *To:*  
>>> *Cc:* Nick Harper 
>>> *Subject:* Re: [TLS] Issues with buffered, ACKed KeyUpdates in DTLS 1.3
>>>
>>>
>>>
>>> Another issues with DTLS 1.3's state machine duplication scheme:
>>>
>>>
>>>
>>> Section 8 says implementation must not send new KeyUpdate until the
>>> KeyUpdate is ACKed, but it says nothing about other post-handshake
>>> messages. Suppose KeyUpdate(5) in flight and the implementation decides to
>>> send NewSessionTicket. (E.g. the application called some
>>> "send NewSessionTicket" API.) The new epoch doesn't exist yet, so naively
>>> one would start sending NewSessionTicket(6) in the current epoch. Now the
>>> peer ACKs KeyUpdate(5), so we transition to the new epoch. But
>>> retransmissions must retain their original epoch:
>>>
>>>
>>>
>>> > Implementations MUST send retransmissions of lost messages using the
>>> same epoch and keying material as the original transmission.
>>>
>>> https://www.rfc-editor.org/rfc/rfc9147.html#section-4.2.1-3
>>>
>>>
>>>
>>> This means we must keep sending the NST at the old epoch. But the peer
>>> may have no idea there's a message at that epoch due to packet loss!
>>> Section 8 does ask the peer to keep the old epoch around for a spell, but
>>> eventually the peer will discard the old epoch. If NST(6) didn't get
>>> through before then, the entire post-handshake stream is now wedged!
>>>
>>>
>>>
>>> I think this means we need to amend Section 8 to forbid sending *any*
>>> post-handshake message after KeyUpdate. That is, rather than saying you
>>> cannot send a new KeyUpdate, a KeyUpdate terminates the post-handshake
>>> stream at that epoch and all new post-handshake messages, be they KeyUpdate
>>> or anything else, must be enqueued for the new epoch. This is a little
>>> unfortunate because a TLS library which transparently KeyUpdates will then
>>> inadvertently introduce hiccups where post-handshake messages triggered by
>>> the application, like post-handshake auth, are blocked.
>>>
>>>
>>>
>>> That then suggests some more options for fixing the original problem.
>>>

Re: [TLS] Issues with buffered, ACKed KeyUpdates in DTLS 1.3

2024-04-16 Thread David Benjamin
Thanks, Hannes!

Since it was buried in there (my understanding of the issue evolved as I
described it), I currently favor option 7. I.e. the sender-only fix to the
KeyUpdate criteria.

At first I thought we should also change the receiver to mitigate unfixed
senders, but this situation should be pretty rare (most senders will send
NewSessionTicket well before they KeyUpdate), DTLS 1.3 isn't very widely
deployed yet, and ultimately, it's on the sender implementation to make
sure all states they can get into are coherent.

If the sender crashed, that's unambiguously on the sender to fix. If the
sender still correctly retransmits the missing messages, the connection
will perform suboptimally for a blip but still recover.

David


On Tue, Apr 16, 2024, 05:19 Tschofenig, Hannes <
hannes.tschofe...@siemens.com> wrote:

> Hi David,
>
>
>
> this is great feedback. Give me a few days to respond to this issue with
> my suggestion for moving forward.
>
>
>
> Ciao
>
> Hannes
>
>
>
> *From:* TLS  *On Behalf Of *David Benjamin
> *Sent:* Saturday, April 13, 2024 7:59 PM
> *To:*  
> *Cc:* Nick Harper 
> *Subject:* Re: [TLS] Issues with buffered, ACKed KeyUpdates in DTLS 1.3
>
>
>
> Another issues with DTLS 1.3's state machine duplication scheme:
>
>
>
> Section 8 says implementation must not send new KeyUpdate until the
> KeyUpdate is ACKed, but it says nothing about other post-handshake
> messages. Suppose KeyUpdate(5) in flight and the implementation decides to
> send NewSessionTicket. (E.g. the application called some
> "send NewSessionTicket" API.) The new epoch doesn't exist yet, so naively
> one would start sending NewSessionTicket(6) in the current epoch. Now the
> peer ACKs KeyUpdate(5), so we transition to the new epoch. But
> retransmissions must retain their original epoch:
>
>
>
> > Implementations MUST send retransmissions of lost messages using the
> same epoch and keying material as the original transmission.
>
> https://www.rfc-editor.org/rfc/rfc9147.html#section-4.2.1-3
>
>
>
> This means we must keep sending the NST at the old epoch. But the peer may
> have no idea there's a message at that epoch due to packet loss! Section 8
> does ask the peer to keep the old epoch around for a spell, but eventually
> the peer will discard the old epoch. If NST(6) didn't get through before
> then, the entire post-handshake stream is now wedged!
>
>
>
> I think this means we need to amend Section 8 to forbid sending *any*
> post-handshake message after KeyUpdate. That is, rather than saying you
> cannot send a new KeyUpdate, a KeyUpdate terminates the post-handshake
> stream at that epoch and all new post-handshake messages, be they KeyUpdate
> or anything else, must be enqueued for the new epoch. This is a little
> unfortunate because a TLS library which transparently KeyUpdates will then
> inadvertently introduce hiccups where post-handshake messages triggered by
> the application, like post-handshake auth, are blocked.
>
>
>
> That then suggests some more options for fixing the original problem.
>
>
>
> *7. Fix the sender's KeyUpdate criteria*
>
>
>
> We tell the sender to wait for all previous messages to be ACKed too. Fix
> the first paragraph of section 8 to say:
>
>
>
> > As with other handshake messages with no built-in response, KeyUpdates
> MUST be acknowledged. Acknowledgements are used to both control
> retransmission and transition to the next epoch. Implementations MUST NOT
> send records with the new keys until the KeyUpdate *and all preceding
> messages* have been acknowledged. This facilitates epoch reconstruction
> (Section 4.2.2) and avoids too many epochs in active use, by ensuring the
> peer has processed the KeyUpdate and started receiving at the new epoch.
>
> >
>
> > A KeyUpdate message terminates the post-handshake stream in an epoch.
> After sending KeyUpdate in an epoch, implementations MUST NOT send any new
> post-handshake messages in that epoch. Note that, if the implementation has
> sent KeyUpdate but is waiting for an ACK, the next epoch is not yet active.
> In this case, subsequent post-handshake messages may not be sent until
> receiving the ACK.
>
>
>
> And then on the receiver side, we leave things as-is. If the sender
> implemented the old semantics AND had multiple post-handshake transactions
> in parallel, it might update keys too early and then we get into the
> situation described in (1). We then declare that, if this happens, and the
> sender gets confused as a result, that's the sender's fault. Hopefully this
> is not rare enough (did anyone even implement 5.8.4, or does everyone just
> serialize their post-handshake transitions?) to not be a serious protocol break?

Re: [TLS] DTLS 1.3 sequence number lengths and lack of ACKs

2024-04-16 Thread David Benjamin
Regarding UTA or elsewhere, let's see how the buffered KeyUpdates issue
pans out. If I haven't missed something, that one seems severe enough to
warrant an rfc9147bis, or at least a slew of significant errata, in which
case we may as well put the fixups into the main document where they'll be
easier for an implementer to find.

Certainly, as someone reading the document now to plan an implementation, I
would have found it much, much less helpful to put crucial information like
this in a separate UTA document instead of the main one, as these details
influence how and whether to expose the 8- vs 16-bit choice to Applications
Using TLS at all.

David



On Tue, Apr 16, 2024, 05:17 Tschofenig, Hannes <
hannes.tschofe...@siemens.com> wrote:

> Hi David,
>
>
>
> thanks again for these comments.
>
>
>
> Speaking for myself, this exchange was not designed based on QUIC. I
> believe it pre-dated the corresponding work in QUIC.
>
>
>
> Anyway, there are different usage environments and, as you said, there is
> a difference in the amount of messages that may be lost. For some
> environments the loss 255 messages amounts to losing the message exchanges
> of several days, potentially weeks. As such, for those use cases the
> shorter sequence number space is perfectly fine. For other environments
> this is obviously an issue and you have to select the bigger sequence
> number space.
>
>
>
> More explanation about this aspect never hurts. Of course, nobody raised
> the need for such text so far and hence we didn’t add anything. As a way
> forward, we could add text to the UTA document. In the UTA document(s) we
> already talk about other configurable parameters, such as the timeout.
>
>
>
> Ciao
>
> Hannes
>
>
>
> *From:* TLS  *On Behalf Of *David Benjamin
> *Sent:* Friday, April 12, 2024 11:36 PM
> *To:*  
> *Cc:* Nick Harper 
> *Subject:* [TLS] DTLS 1.3 sequence number lengths and lack of ACKs
>
>
>
> Hi all,
>
>
>
> Here's another issue we noticed with RFC 9147: (There's going to be a few
> of these emails. :-) )
>
>
>
> DTLS 1.3 allows senders to pick an 8-bit or 16-bit sequence number. But,
> unless I missed it, there isn't any discussion or guidance on which to use.
> The draft simply says:
>
>
>
> > Implementations MAY mix sequence numbers of different lengths on the
> same connection
>
>
>
> I assume this was patterned after QUIC, but looking at QUIC suggests an
> issue with the DTLS 1.3 formulation. QUIC uses ACKs to pick the minimum
> number of bytes needed for the peer to recover the sequence number:
>
> https://www.rfc-editor.org/rfc/rfc9000.html#packet-encoding
>
>
>
> But the bulk of DTLS records, app data, are unreliable and not ACKed. DTLS
> leaves all that to application. This means a DTLS implementation does not
> have enough information to make this decision. It would need to be
> integrated into the application-protocol-specific reliability story, if the
> application protocol even maintains that information.
>
>
>
> Without ACK feedback, it is hard to size the sequence number safely.
> Suppose a DTLS 1.3 stack unconditionally picked the 1-byte sequence number
> because it's smaller, and the draft didn't say not to do it. That means
> after getting out of sync by 256 records, either via reordering or loss,
> the connection breaks. For example, if there was a blip in connectivity and
> you happened to lose 256 records, your connection is stuck and cannot
> recover. All future records will be at higher and higher sequence numbers.
> A burst of 256 lost packets seems within the range of situations one would
> expect an application to handle.
>
>
>
> (The 2-byte sequence number fails at 65K losses, which is hopefully high
> enough to be fine?  Though it's far far less than what QUIC's 1-4-byte
> sequence number can accommodate. It was also odd to see no discussion of
> this anywhere.)
>
>
>
> David
>


Re: [TLS] IANA Recommendations for Obsolete Key Exchange

2024-04-15 Thread David Benjamin
From the meeting, I remember there being some confusion around a table that
split things up between TLS 1.2 and TLS 1.3, and differences in how they
negotiate things, which makes this listing a bit ambiguous. In particular,
there aren't any *cipher suites* with FFDH or FFDHE in their name in TLS
1.2. Also some of these constructions have analogs in TLS 1.3 and some
don't, but none as cipher suites.

Agreed with the proposal, but specifically, this is what I understand the
proposal to mean:

TLS 1.2 RSA cipher suites (TLS_RSA_WITH_*) should be marked with a "D"
-- These lack forward secrecy and use a broken encryption scheme.
-- There is no analog to RSA key exchange in TLS 1.3, so leave it alone.

TLS 1.2 static DH cipher suites (TLS_DH_*_WITH_*) should be marked with a
"D"
-- These lack forward secrecy and the DH(E) construction in TLS 1.2 is
broken. It lacks parameter negotiation, and uses a variable-length secret
that is vulnerable to the Raccoon attack, particularly in this context with
a static DH key.
-- There is no analog to static DH in TLS 1.3, so leave it alone.

TLS 1.2 DHE cipher suites (TLS_DHE_*_WITH_*) should be marked with a "D"
-- While these do provide forward secrecy, the DH(E) construction in TLS
1.2 is broken. It lacks parameter negotiation, and uses a variable-length
secret that is vulnerable to the Raccoon attack. In this context, the
Raccoon attack is less fatal, but it is still a side-channel leak that
the protocol should have avoided.
-- In theory RFC 7919 addressed the lack of parameter negotiation, but by
reusing the old construction's cipher suites, it is undeployable. It also
does not fix the side channel.
-- There *is* an analog in TLS 1.3 (the ffdhe* named groups). However, they
do not share the TLS 1.2 construction's problems, so we can leave them
alone. They're just slow and kinda pointless given ECC exists.

TLS 1.2 static ECDH cipher suites (TLS_ECDH_*_WITH_*) should be marked with
a "D"
-- These lack forward secrecy
-- There is no analog to static ECDH in TLS 1.3, so leave it alone.



On Mon, Apr 15, 2024 at 1:30 PM Joseph Salowey  wrote:

> At IETF 119 we had discussion on how to mark the ciphersuites deprecated
> by draft-ietf-tls-deprecate-obsolete-kex in the IANA Registry. At the
> meeting there was support for ('D' means discouraged):
>
> RSA ciphersuites should be marked with a "D"
> FFDH ciphersuites should be marked with a "D"
> FFDHE ciphersuites should be marked with a "D"
> ECDH ciphersuites should be marked with a "D"
>
> This aligns with the deprecation intent of the draft. The draft states
> ECDH are a SHOULD NOT instead of a MUST NOT, but the sentiment was they
> should be generally discouraged.
>
> Please respond with any comments on this proposal by April 30,2024.
>
> Thanks,
>
> Sean, Deirdre and Joe


Re: [TLS] Issues with buffered, ACKed KeyUpdates in DTLS 1.3

2024-04-13 Thread David Benjamin
Another issue with DTLS 1.3's state machine duplication scheme:

Section 8 says implementations must not send a new KeyUpdate until the
previous KeyUpdate is ACKed, but it says nothing about other post-handshake
messages. Suppose KeyUpdate(5) is in flight and the implementation decides to
send NewSessionTicket. (E.g. the application called some
"send NewSessionTicket" API.) The new epoch doesn't exist yet, so naively
one would start sending NewSessionTicket(6) in the current epoch. Now the
peer ACKs KeyUpdate(5), so we transition to the new epoch. But
retransmissions must retain their original epoch:

> Implementations MUST send retransmissions of lost messages using the same
epoch and keying material as the original transmission.
https://www.rfc-editor.org/rfc/rfc9147.html#section-4.2.1-3

This means we must keep sending the NST at the old epoch. But the peer may
have no idea there's a message at that epoch due to packet loss! Section 8
does ask the peer to keep the old epoch around for a spell, but eventually
the peer will discard the old epoch. If NST(6) didn't get through before
then, the entire post-handshake stream is now wedged!

I think this means we need to amend Section 8 to forbid sending *any*
post-handshake message after KeyUpdate. That is, rather than saying you
cannot send a new KeyUpdate, a KeyUpdate terminates the post-handshake
stream at that epoch and all new post-handshake messages, be they KeyUpdate
or anything else, must be enqueued for the new epoch. This is a little
unfortunate because a TLS library which transparently KeyUpdates will then
inadvertently introduce hiccups where post-handshake messages triggered by
the application, like post-handshake auth, are blocked.

That then suggests some more options for fixing the original problem.

*7. Fix the sender's KeyUpdate criteria*

We tell the sender to wait for all previous messages to be ACKed too. Fix
the first paragraph of section 8 to say:

> As with other handshake messages with no built-in response, KeyUpdates
MUST be acknowledged. Acknowledgements are used to both control
retransmission and transition to the next epoch. Implementations MUST NOT
send records with the new keys until the KeyUpdate *and all preceding
messages* have been acknowledged. This facilitates epoch reconstruction
(Section 4.2.2) and avoids too many epochs in active use, by ensuring the
peer has processed the KeyUpdate and started receiving at the new epoch.
>
> A KeyUpdate message terminates the post-handshake stream in an epoch.
After sending KeyUpdate in an epoch, implementations MUST NOT send any new
post-handshake messages in that epoch. Note that, if the implementation has
sent KeyUpdate but is waiting for an ACK, the next epoch is not yet active.
In this case, subsequent post-handshake messages may not be sent until
receiving the ACK.

And then on the receiver side, we leave things as-is. If the sender
implemented the old semantics AND had multiple post-handshake transactions
in parallel, it might update keys too early and then we get into the
situation described in (1). We then declare that, if this happens, and the
sender gets confused as a result, that's the sender's fault. Hopefully this
is not rare enough (did anyone even implement 5.8.4, or does everyone just
serialize their post-handshake transitions?) to not be a serious protocol
break? That risk aside, this option seems the most in spirit with the
current design to me.
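
To make the sender-side consequence concrete, here is a small sketch (invented
names, not from any implementation) of a send path that enforces "KeyUpdate
terminates the post-handshake stream in an epoch":

    class PostHandshakeSender:
        # Illustrative sketch: once a KeyUpdate has been sent in the current
        # epoch, any new post-handshake message (NewSessionTicket, another
        # KeyUpdate, post-handshake auth, ...) is queued and only flushed
        # after the KeyUpdate and all preceding messages have been ACKed.
        def __init__(self):
            self.key_update_pending = False
            self.next_epoch_queue = []

        def send_post_handshake(self, msg):
            if self.key_update_pending:
                self.next_epoch_queue.append(msg)  # wait for the new epoch
            else:
                self.transmit_in_current_epoch(msg)

        def on_key_update_sent(self):
            self.key_update_pending = True

        def on_key_update_and_predecessors_acked(self):
            self.activate_next_epoch()
            self.key_update_pending = False
            for msg in self.next_epoch_queue:
                self.transmit_in_current_epoch(msg)
            self.next_epoch_queue.clear()

        def transmit_in_current_epoch(self, msg):
            pass  # hand off to the record layer (stubbed out here)

        def activate_next_epoch(self):
            pass  # install and switch to the new keys (stubbed out here)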

*8. Decouple post-handshake retransmissions from epochs*

If we instead say that the same epoch rule only applies for the handshake,
and not post-handshake messages, I think option 5 (process KeyUpdate out of
order) might become viable? I'm not sure. Either way, this seems like a
significant protocol break, so I don't think this is an option until some
hypothetical DTLS 1.4.


On Fri, Apr 12, 2024 at 6:59 PM David Benjamin 
wrote:

> Hi all,
>
> This is going to be a bit long. In short, DTLS 1.3 KeyUpdates seem to
> conflate the peer *receiving* the KeyUpdate with the peer *processing* the
> KeyUpdate, in ways that appear to break some assumptions made by the
> protocol design.
>
> *When to switch keys in KeyUpdate*
>
> So, first, DTLS 1.3, unlike TLS 1.3, applies the KeyUpdate on the ACK, not
> when the KeyUpdate is sent. This makes sense because KeyUpdate records are
> not intrinsically ordered with app data records sent after them:
>
> > As with other handshake messages with no built-in response, KeyUpdates
> MUST be acknowledged. In order to facilitate epoch reconstruction (Section
> 4.2.2), implementations MUST NOT send records with the new keys or send a
> new KeyUpdate until the previous KeyUpdate has been acknowledged (this
> avoids having too many epochs in active use).
> https://www.rfc-editor.org/rfc/rfc9147.html#section-8-1
>
> Now, the parenthetical says this is to avoid having too many epochs in
> active use, but it appears that there are stronger assumptions on this.

[TLS] Issues with buffered, ACKed KeyUpdates in DTLS 1.3

2024-04-12 Thread David Benjamin
Hi all,

This is going to be a bit long. In short, DTLS 1.3 KeyUpdates seem to
conflate the peer *receiving* the KeyUpdate with the peer *processing* the
KeyUpdate, in ways that appear to break some assumptions made by the
protocol design.

*When to switch keys in KeyUpdate*

So, first, DTLS 1.3, unlike TLS 1.3, applies the KeyUpdate on the ACK, not
when the KeyUpdate is sent. This makes sense because KeyUpdate records are
not intrinsically ordered with app data records sent after them:

> As with other handshake messages with no built-in response, KeyUpdates
MUST be acknowledged. In order to facilitate epoch reconstruction (Section
4.2.2), implementations MUST NOT send records with the new keys or send a
new KeyUpdate until the previous KeyUpdate has been acknowledged (this
avoids having too many epochs in active use).
https://www.rfc-editor.org/rfc/rfc9147.html#section-8-1

Now, the parenthetical says this is to avoid having too many epochs in
active use, but it appears that there are stronger assumptions on this:

> After the handshake is complete, if the epoch bits do not match those
from the current epoch, implementations SHOULD use the most recent **past**
epoch which has matching bits, and then reconstruct the sequence number for
that epoch as described above.
https://www.rfc-editor.org/rfc/rfc9147.html#section-4.2.2-3
(emphasis mine)

> After the handshake, implementations MUST use the highest available
sending epoch [to send ACKs]
https://www.rfc-editor.org/rfc/rfc9147.html#section-7-7

These two snippets imply the protocol wants the peer to definitely have
installed the new keys before you start using them. This makes sense
because sending stuff the peer can't decrypt is pretty silly. As an aside,
DTLS 1.3 retains this text from DTLS 1.2:

> Conversely, it is possible for records that are protected with the new
epoch to be received prior to the completion of a handshake. For instance,
the server may send its Finished message and then start transmitting data.
Implementations MAY either buffer or discard such records, though when DTLS
is used over reliable transports (e.g., SCTP [RFC4960]), they SHOULD be
buffered and processed once the handshake completes.
https://www.rfc-editor.org/rfc/rfc9147.html#section-4.2.1-2

The text from DTLS 1.2 talks about *a* handshake, which presumably refers
to rekeying via renegotiation. But in DTLS 1.3, the epoch reconstruction
rule and the KeyUpdate rule mean this is only possible during the
handshake, when you see epoch 4 and expect epoch 0-3. The steady state
rekeying mechanism never hits this case. (This is a reasonable change
because there's no sense in unnecessarily introducing blips where the
connection is less tolerant of reordering.)

*Buffered handshake messages*

Okay, so KeyUpdates want to wait for the recipient to install keys, except
we don't seem to actually achieve this! Section 5.2 says:

> DTLS implementations maintain (at least notionally) a next_receive_seq
counter. This counter is initially set to zero. When a handshake message is
received, if its message_seq value matches next_receive_seq,
next_receive_seq is incremented and the message is processed. If the
sequence number is less than next_receive_seq, the message MUST be
discarded. If the sequence number is greater than next_receive_seq, the
implementation SHOULD queue the message but MAY discard it. (This is a
simple space/bandwidth trade-off).
https://www.rfc-editor.org/rfc/rfc9147.html#section-5.2-7

I assume this is intended to apply to post-handshake messages too. (See
below for a discussion of the alternative.) But that means that, when you
receive a KeyUpdate, you might not immediately process it. Suppose
next_receive_seq is 5, and the peer sends NewSessionTicket(5),
NewSessionTicket(6), and KeyUpdate(7). 5 is lost, but 6 and 7 come in,
perhaps even in the same record which means that you're forced to ACK both
or neither. But suppose the implementation is willing to buffer 3 messages
ahead, so it ACKs the 6+7 record, by the rules in section 7, which permits
ACKing fragments that were buffered and not yet processed.

That means the peer will switch keys and now all subsequent records from
them will come from epoch N+1. But the ACKing side has not actually processed
the KeyUpdate and is not ready for N+1 yet, so
we contradict everything above. We also contradict this parenthetical in
section 8:

> Due to loss and/or reordering, DTLS 1.3 implementations may receive a
record with an older epoch than the current one (the requirements above
preclude receiving a newer record).
https://www.rfc-editor.org/rfc/rfc9147.html#section-8-2

I assume then that this was not actually what was intended.
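
For concreteness, the Section 5.2 receiver bookkeeping looks roughly like the
sketch below (invented names). The trouble is that, in the NST(5)/NST(6)/
KeyUpdate(7) example above, KeyUpdate(7) lands in the "buffer" branch and is
never processed, yet Section 7 still permits ACKing the record that carried it:

    def process(msg):
        pass  # install keys for KeyUpdate, store NewSessionTicket, etc. (stub)

    def on_handshake_message(state, msg):
        # state.next_receive_seq starts at zero; state.buffer holds
        # out-of-order messages the implementation chose to keep.
        if msg.seq < state.next_receive_seq:
            return "discard"
        if msg.seq > state.next_receive_seq:
            state.buffer[msg.seq] = msg   # buffered, but NOT processed
            return "buffer"
        while msg is not None:            # in order: process and drain buffer
            process(msg)
            state.next_receive_seq += 1
            msg = state.buffer.pop(state.next_receive_seq, None)
        return "processed"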

*Options (and non-options)*

Assuming I'm reading this right, we seem to have made a mess of things. The
sender could avoid this by only allowing one active post-handshake
transaction at a time and serializing them, at the cost of taking a
round-trip for each. But the receiver needs to account for all possible
senders, so that doesn't help. Some 

[TLS] DTLS 1.3 sequence number lengths and lack of ACKs

2024-04-12 Thread David Benjamin
Hi all,

Here's another issue we noticed with RFC 9147: (There's going to be a few
of these emails. :-) )

DTLS 1.3 allows senders to pick an 8-bit or 16-bit sequence number. But,
unless I missed it, there isn't any discussion or guidance on which to use.
The draft simply says:

> Implementations MAY mix sequence numbers of different lengths on the same
connection

I assume this was patterned after QUIC, but looking at QUIC suggests an
issue with the DTLS 1.3 formulation. QUIC uses ACKs to pick the minimum
number of bytes needed for the peer to recover the sequence number:
https://www.rfc-editor.org/rfc/rfc9000.html#packet-encoding

But the bulk of DTLS records, app data, are unreliable and not ACKed. DTLS
leaves all that to the application. This means a DTLS implementation does not
have enough information to make this decision. It would need to be
integrated into the application-protocol-specific reliability story, if the
application protocol even maintains that information.

Without ACK feedback, it is hard to size the sequence number safely.
Suppose a DTLS 1.3 stack unconditionally picked the 1-byte sequence number
because it's smaller, and the draft didn't say not to do it. That means
after getting out of sync by 256 records, either via reordering or loss,
the connection breaks. For example, if there was a blip in connectivity and
you happened to lose 256 records, your connection is stuck and cannot
recover. All future records will be at higher and higher sequence numbers.
A burst of 256 lost packets seems within the range of situations one would
expect an application to handle.
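
To make the failure mode concrete: record number reconstruction (in the spirit
of RFC 9147, Section 4.2.2) picks the candidate closest to the next expected
value whose low bits match what's on the wire, roughly like this sketch:

    def reconstruct_seq(expected_next, low_bits, bits):
        # bits is 8 or 16, matching the wire encoding the sender chose.
        window = 1 << bits
        candidate = (expected_next & ~(window - 1)) | low_bits
        if candidate < expected_next - window // 2:
            candidate += window
        elif candidate > expected_next + window // 2:
            candidate -= window
        return candidate

    # With 8-bit sequence numbers, a gap of 256+ records puts the true value
    # outside the reconstruction window, so every later record is
    # reconstructed (and thus decrypted) with the wrong sequence number:
    assert reconstruct_seq(10, (10 + 300) % 256, bits=8) != 310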

(The 2-byte sequence number fails at 65K losses, which is hopefully high
enough to be fine?  Though it's far far less than what QUIC's 1-4-byte
sequence number can accommodate. It was also odd to see no discussion of
this anywhere.)

David


Re: [TLS] DTLS 1.3 epochs vs message_seq overflow

2024-04-11 Thread David Benjamin
On Thu, Apr 11, 2024 at 7:12 PM David Benjamin 
wrote:

> Hi all,
>
> In reviewing RFC 9147, I noticed something a bit funny. DTLS 1.3 changed
> the epoch number from 16 bits to 64 bits, though with a requirement that
> you not exceed 2^48-1. I assume this was so that you're able to rekey more
> than 65K times if you really wanted to.
>
> I'm not sure we actually achieved this. In order to change epochs, you
> need to do a KeyUpdate, which involves sending a handshake message. That
> means burning a handshake message sequence number. However, section 5.2
> says:
>
> > Note: In DTLS 1.2, the message_seq was reset to zero in case of a
> rehandshake (i.e., renegotiation). On the surface, a rehandshake in DTLS
> 1.2 shares similarities with a post-handshake message exchange in DTLS 1.3.
> However, in DTLS 1.3 the message_seq is not reset, to allow distinguishing
> a retransmission from a previously sent post-handshake message from a newly
> sent post-handshake message.
>
> This means that the message_seq space is never reset for the lifetime of
> the connection. But message_seq is a 16-bit field! So I think you would
> overflow message_seq before you manage to overflow a 16-bit epoch.
>
> Now, I think the change here was correct because DTLS 1.2's resetting on
> rehandshake was a mistake. In DTLS 1.2, the end of the previous handshake
> and the start of the next handshake happen in the same epoch, which meant
> that things were ambiguous and you needed knowledge of the handshake state
> machine to resolve things. However, given the wider epoch, perhaps we
> should have said that message_seq resets on each epoch or something. (Too
> late now, of course... DTLS 1.4 I suppose?)
>

Alternatively, if we think 65K epochs should be enough for anybody, perhaps
DTLS 1.4 should update the RecordNumber structure accordingly and save a
few bytes in the ACKs. :-)


> Does all that check out, or did I miss something?
>
> David
>


[TLS] DTLS 1.3 epochs vs message_seq overflow

2024-04-11 Thread David Benjamin
Hi all,

In reviewing RFC 9147, I noticed something a bit funny. DTLS 1.3 changed
the epoch number from 16 bits to 64 bits, though with a requirement that
you not exceed 2^48-1. I assume this was so that you're able to rekey more
than 65K times if you really wanted to.

I'm not sure we actually achieved this. In order to change epochs, you need
to do a KeyUpdate, which involves sending a handshake message. That means
burning a handshake message sequence number. However, section 5.2 says:

> Note: In DTLS 1.2, the message_seq was reset to zero in case of a
rehandshake (i.e., renegotiation). On the surface, a rehandshake in DTLS
1.2 shares similarities with a post-handshake message exchange in DTLS 1.3.
However, in DTLS 1.3 the message_seq is not reset, to allow distinguishing
a retransmission from a previously sent post-handshake message from a newly
sent post-handshake message.

This means that the message_seq space is never reset for the lifetime of
the connection. But message_seq is a 16-bit field! So I think you would
overflow message_seq before you manage to overflow a 16-bit epoch.
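
A quick back-of-the-envelope sketch of that arithmetic:

    MESSAGE_SEQ_LIMIT = 2**16   # message_seq is a 16-bit field, never reset
    EPOCH_LIMIT = 2**48 - 1     # cap on DTLS 1.3's widened 64-bit epoch

    # Each post-handshake epoch bump costs at least one KeyUpdate, i.e. one
    # message_seq value, so the number of reachable epochs is bounded by the
    # message_seq space: short of even the old 16-bit epoch limit, let alone
    # 2^48 - 1.
    print(f"reachable epochs < {MESSAGE_SEQ_LIMIT}, epoch cap = {EPOCH_LIMIT}")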

Now, I think the change here was correct because DTLS 1.2's resetting on
rehandshake was a mistake. In DTLS 1.2, the end of the previous handshake
and the start of the next handshake happen in the same epoch, which meant
that things were ambiguous and you needed knowledge of the handshake state
machine to resolve things. However, given the wider epoch, perhaps we
should have said that message_seq resets on each epoch or something. (Too
late now, of course... DTLS 1.4 I suppose?)

Does all that check out, or did I miss something?

David


Re: [TLS] -draft8447bis: rename Support Group Elliptic curve groups space

2024-03-28 Thread David Benjamin
I don't believe we need a separate range, just to unmark the elliptic
curve range. TLS does not usually ascribe meaning to ranges of codepoints
because TLS implementations do not need to categorize codepoints they don't
understand.

The only reason supported groups was partitioned was because of a goofy
thing RFC 7919 did for FFDH. That goofy thing also was what made RFC 7919
undeployable anyway, so it didn't work out. :-)

On Thu, Mar 28, 2024 at 5:08 PM Russ Housley  wrote:

> Sean:
>
> I observe that ML-KEM is not a Elliptic curve group or a Finite Field
> Diffie-Hellman group.  I really think we want to include sepport for KEMs.
> but a separate range is needed.  I assume it will be carved out of the
> Elliptic curve group range.
>
> KEMs are not "key agreement" algorithms as suggested by this draft name.
> In a key agreement algorithm, both parties contribute to the shared
> secret.  With a KEM, only one party generates the the shared secreat value.
>
> Russ
>
> > On Mar 28, 2024, at 10:52 AM, Sean Turner  wrote:
> >
> > 
> >
> > **WARNING: Potential bikeshed**
> >
> > -connolly-tls-mlkem-key-agreement has suggested that code points for the
> NIST PQ be registered in the TLS Supported Groups IANA registry [1].
> Currently [2], the registry is carved up into three blocks as follows:
> >
> > Range: 0-255, 512-65535
> > Registration Procedures: Specification Required
> > Note: Elliptic curve groups
> >
> > Range 256-511
> > Registration Procedures: Specification Required
> > Note: Finite Field Diffie-Hellman groups
> >
> > Assuming that the proposal in -connolly-tls-mlkem-key-agreement is the
> path for PQ KEM algorithms (and maybe regardless of whether this is the
> path), we should really replace the “Elliptic curve groups” note in the
> 0-255, 512-65535 range row with something else.  I am open to suggestions,
> but would like to propose “unallocated”. I have submitted the following
> issue:
> > https://github.com/tlswg/rfc8447bis/issues/54
> > and this PR:
> > https://github.com/tlswg/rfc8447bis/pull/55
> > to address this.
> >
> > spt
> >
> > [1]
> https://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml#tls-parameters-8
> >
> > [2] Originally, RFC 8442 defined the name of the registry as "EC Named
> Curve Registry” and then RFC 7919 re-named it “Supported Groups” and carved
> out the FFDH space.
>


Re: [TLS] -draft8447bis: rename Support Group Elliptic curve groups space

2024-03-28 Thread David Benjamin
+1 to removing the "Elliptic curve groups" note. That partition came out of
RFC 7919's (unfortunate) decision to repurpose the existing DHE cipher
suites (see RFC 7919, section
4), so we're stuck treating 256-511 as special. But I don't believe we need
to treat the remainder as special.

Regarding renaming, I'm torn. "Group" was a truly horrible rename. The
names we pick make their way into APIs and even sometimes UI surfaces for
developers. Every time I've plumbed TLS named groups into another system,
I've been met with confusion about what in the world a "group" is, and I've
had to embarrassingly explain that yes, it is a term of art, short for
"Diffie-Hellman group", no, it doesn't even make sense with PQC, and I'm
truly very sorry that TLS chose such a needlessly confusing name, but it's
the name we've got. Sometimes I just give up on the TLSWG's naming and just
saying "key exchange" or "key agreement", but that gets a little tricky
because that can also mean the left half of a TLS 1.2 cipher suite
(ECDHE_RSA / ECDHE_ECDSA / RSA). At one point, we tried "key exchange
group" to avoid that, but that's also problematic as one needs to explain
to translators that this does not mean "primary trade collection".

This name is bad enough that I needed to make a pre-written explanation for
this, so I can save time and link to it every time it comes up.

At the same time, we've already renamed this once. These names we pick make
their way everywhere; each rename we do is costly. All the old "curve" APIs
had to be doubled up and deprecated in systems, with the old ones forever
stuck around. And then some systems (probably correctly) decided to stick
with the old "curve" name. Renaming again will add a third, and repeat this
costly cycle.

Had we not renamed, I would say we just keep it at "curves". While "curves"
is also wrong for PQC, it is less generic of a name than "group" and, in my
experience, reads more clearly as a random term of art. It's a pity that we
then changed it to one of the most overloaded words in English imaginable.
:-(

David

On Thu, Mar 28, 2024 at 11:32 AM John Mattsson  wrote:

> Hi,
>
>
>
> It would actually be good to change the name of the registry from
> “Supported Groups” as the new PQC key exchange algorithms are not groups.
>
>
>
> Cheers,
>
> John Preuß Mattsson
>
>
>
> *From: *TLS  on behalf of Sean Turner <
> s...@sn3rd.com>
> *Date: *Thursday, 28 March 2024 at 15:53
> *To: *TLS List 
> *Subject: *[TLS] -draft8447bis: rename Support Group Elliptic curve
> groups space
>
> 
>
> **WARNING: Potential bikeshed**
>
> -connolly-tls-mlkem-key-agreement has suggested that code points for the
> NIST PQ be registered in the TLS Supported Groups IANA registry [1].
> Currently [2], the registry is carved up into three blocks as follows:
>
> Range: 0-255, 512-65535
> Registration Procedures: Specification Required
> Note: Elliptic curve groups
>
> Range 256-511
> Registration Procedures: Specification Required
> Note: Finite Field Diffie-Hellman groups
>
> Assuming that the proposal in -connolly-tls-mlkem-key-agreement is the
> path for PQ KEM algorithms (and maybe regardless of whether this is the
> path), we should really replace the “Elliptic curve groups” note in the
> 0-255, 512-65535 range row with something else.  I am open to suggestions,
> but would like to propose “unallocated”. I have submitted the following
> issue:
>
> https://github.com/tlswg/rfc8447bis/issues/54
> 
> and this PR:
>
> https://github.com/tlswg/rfc8447bis/pull/55
> 
> to address this.
>
> spt
>
> [1]
> https://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml#tls-parameters-8
> 
>
> [2] 

[TLS] KeyUpdate storms from buggy update_requested logic

2024-03-25 Thread David Benjamin
Hi all,

MT and I were discussing KeyUpdate at the meeting, and I realized I never
got around to writing up an issue we'd observed in real world TLS 1.3
deployments. rfc8446bis is pretty far along now, but MT suggested I write
this up anyway:

We ran into a fun bug with a major TLS implementation that would send an
update_requested KeyUpdate once the other side sent more than N records.
However, they were broken and *sent it on every record that came in after
the threshold was crossed*. Now, imagine the channel is asymmetric and you
are sending at a much faster rate than the buggy peer. Or, even worse,
imagine if this is a simple, non-multiplexed request/response application
protocol, and you're in the middle of sending a very, very large response.
You won't read from the peer until you're done with that response.

In the time it takes for you to finish sending your response, resume
reading, pick up the KeyUpdate, and reply, there will have been many, many
records past the peer's threshold. Once you go back to reading, you wake
up to a huge storm of KeyUpdates. That's assuming the peer hasn't managed
to deadlock itself by filling the TCP send buffer with KeyUpdates and
blocking, or DoS itself by buffering those writes up instead and burning
memory on queued writes.

This is clearly undesirable. There is no point in sending an
update_requested past the first one because repeat requests won't change
anything. The spec already has text that says:

> Note that implementations may receive an arbitrary
> number of messages between sending a KeyUpdate with request_update set
> to "update_requested" and receiving the
> peer's KeyUpdate, because those messages may already be in flight.

However, one needs to read between the lines to then realize "and therefore
I should not keep sending request_update because it's pointless". I've
filed https://github.com/tlswg/tls13-spec/issues/1341 describing the issue
and uploaded a PR at https://github.com/tlswg/tls13-spec/pull/1343 to
clarify this point.
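
For what it's worth, the fix on the sending side amounts to one extra bit of
state; something like this sketch (invented names) of "don't re-request while
a request is outstanding":

    class KeyUpdateRequester:
        # Illustrative sketch: send update_requested at most once per
        # threshold crossing, and not again until the peer's KeyUpdate
        # arrives. Repeating the request changes nothing; the peer may
        # legitimately have many records in flight before reading the first.
        def __init__(self, threshold):
            self.threshold = threshold
            self.records_received = 0
            self.request_outstanding = False

        def on_record_received(self):
            self.records_received += 1
            if (self.records_received >= self.threshold
                    and not self.request_outstanding):
                self.request_outstanding = True
                self.send_key_update(request_update="update_requested")

        def on_peer_key_update(self):
            self.request_outstanding = False
            self.records_received = 0

        def send_key_update(self, request_update):
            pass  # hand off to the TLS stack (stubbed out here)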

Thoughts?

David


Re: [TLS] Question about Large Record Sizes draft and the TLS design

2024-03-19 Thread David Benjamin
I can't say what was going on in the SSLv3 days, but yes record size limits
are important for memory. Whatever the maximum record size is, the peer can
force you to buffer that many bytes in memory. That means the maximum
record size is actually a DoS parameter for the protocol.
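
Put differently, the record parser's length check is what bounds that buffer.
A minimal sketch, assuming the TLS 1.3 limits (2^14 of plaintext plus up to
256 bytes of protection overhead):

    MAX_PLAINTEXT = 2**14   # TLSPlaintext.length limit
    MAX_EXPANSION = 256     # allowed ciphertext expansion in TLS 1.3

    def check_record_length(declared_length):
        # A receiver that enforces this never buffers more than ~16.25 KiB
        # per record, no matter what length the peer declares.
        if declared_length > MAX_PLAINTEXT + MAX_EXPANSION:
            raise ValueError("record_overflow")
        return declared_length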

On Wed, Mar 20, 2024 at 10:35 AM Jan-Frederik Rieckers 
wrote:

> Hi to all,
>
> during the presentation of the Large Record Sizes draft at the tls
> session yesterday, I wondered why the length restriction is in TLS in
> the first place.
>
> I have gone back to the TLS1.0 RFC, as well as SSLv3, TLS1.3 and TLS1.2
> and have found the restriction in all of them, but not a rationale why
> the length is artificially shortened, when the length is encoded as uint16.
>
> Does someone know what the rationale behind it is?
> One educated guess we came up with was that the limit was put there to
> ensure that implementations can make sure to not use too much memory,
> and using 2^14 was deemed a good compromise between memory usage and
> message length, but in my short research I haven't found any evidence
> that would confirm that guess.
>
>
> Cheers,
> Janfred
>
> --
> Herr Jan-Frederik Rieckers
> Security, Trust & Identity Services
>
> E-Mail: rieck...@dfn.de | Fon: +49 30884299-339 | Fax: +49 30884299-370
> Pronomen: er/sein | Pronouns: he/him
>
> __
>
> DFN - Deutsches Forschungsnetz | German National Research and Education
> Network
> Verein zur Förderung eines Deutschen Forschungsnetzes e.V.
> Alexanderplatz 1 | 10178 Berlin
> https://www.dfn.de
>
> Vorstand: Prof. Dr.-Ing. Stefan Wesner | Prof. Dr. Helmut Reiser |
> Christian Zens
> Geschäftsführung: Dr. Christian Grimm | Jochem Pattloch
> VR AG Charlottenburg 7729B | USt.-ID. DE 136623822


Re: [TLS] A suggestion for handling large key shares

2024-03-19 Thread David Benjamin
I think you're several discussions behind here. :-P I don't think
draft-ietf-tls-hybrid-design makes sense here. This has nothing to do with
hybrids; it applies to anything with large key shares. If one were to do
Kyber on its own, this would apply too. Rather, per the discussion at IETF
118, the WG
opted to add some clarifications to rfc8446bis in light of draft-00.

It has also turned out that:
a) RFC 8446 actually already defined the semantics (when I wrote draft-00,
I'd thought it was ambiguous), though the clarification definitely helped
b) The implementation that motivated the downgrade concern says this was
not a bug from misunderstanding the protocol, but an intentional design
decision

Given that, the feedback on the list and
https://github.com/davidben/tls-key-share-prediction/issues/5, I concluded
past-me was overthinking this and we can simply define the DNS mechanism
and say it is the server's responsibility to interpret the preexisting TLS
spec text correctly and pick what it believes is a coherent selection
policy. So draft-01 now simply defines the DNS mechanism without any
complex codepoint classification and includes some discussion of the
situation in Security Considerations, as you noted.

Of what remains in Security Considerations, the random client MAY is
specific to this draft and does not make sense to move. The server NOT
RECOMMENDED is simply restating the preexisting implications of RFC 8446
and the obvious implications of believing some options are more secure than
others. If someone wishes to *replicate* it into another document, they're
welcome to, but I disagree with *moving* it. In the context of the
discussion in that section, it makes sense to restate this implication
because this is very relevant to why it's okay for the client to use DNS to
influence key shares.

David

On Wed, Mar 20, 2024 at 6:08 AM Kampanakis, Panos  wrote:

> Hi Scott, David,
>
>
>
> I think it would make more sense for the normative language about Client
> and Server behavior (section 3.2, 3.4) in
> draft-davidben-tls-key-share-prediction-00 (
> https://www.ietf.org/archive/id/draft-davidben-tls-key-share-prediction-00.html
> ) to go in draft-ietf-tls-hybrid-design. These are now discussed in the Sec
> Considerations of draft-davidben-tls-key-share-prediction-01, but the
> “SHOULD” and “SHOULD NOT” language from -00 section 3.2 and 3.4 ought to be
> in draft-ietf-tls-hybrid-design.
>
>
>
> I definitely want to see draft-davidben-tls-key-share-prediction move
> forward too.
>
>
>
> Rgs,
>
> Panos
>
>
>
> *From:* TLS  *On Behalf Of * David Benjamin
> *Sent:* Tuesday, March 19, 2024 1:26 AM
> *To:* Scott Fluhrer (sfluhrer) 
> *Cc:* TLS@ietf.org
> *Subject:* RE: [EXTERNAL] [TLS] A suggestion for handling large key shares
>
>
>
> *CAUTION*: This email originated from outside of the organization. Do not
> click links or open attachments unless you can confirm the sender and know
> the content is safe.
>
>
>
> > If the server supports P256+ML-KEM, what Matt suggested is that, instead
> of accepting P256, it instead a ClientHelloRetry with P256+ML_KEM.  We then
> continue as expected and end up negotiating things in 2 round trips.
>
>
>
> I assume ClientHelloRetry was meant to be HelloRetryRequest? If so, yeah,
> a server which aims to prefer P256+ML-KEM over P256 should, well, prefer
> P256+ML-KEM over P256. :-) See the discussions around
> draft-davidben-tls-key-share-prediction. In particular, RFC 8446 is clear
> on the semantics of such a ClientHello:
>
>
>
>This vector MAY be empty if the client is requesting a
>HelloRetryRequest.  Each KeyShareEntry value MUST correspond to a
>group offered in the "supported_groups" extension and MUST appear in
>the same order.  However, the values MAY be a non-contiguous subset
>of the "supported_groups" extension and MAY omit the most preferred
>groups.  Such a situation could arise if the most preferred groups
>are new and unlikely to be supported in enough places to make
>pregenerating key shares for them efficient.
>
>
>
> rfc8446bis contains further clarifications:
> https://github.com/tlswg/tls13-spec/pull/1331
>
>
>
> Now, some servers (namely OpenSSL) will instead unconditionally select
> from key_share first. This isn't wrong, per se. It is how you implement a
> server which believes all of its supported groups are of comparable
> security level and therefore prioritizes round trips. Such a policy is
> plausible when you only support, say, ECDH curves. It's not so reasonable
> if you support both ECDH and a PQ KEM. But all the spec text for that is in
> place, so all that is left is that folks keep this in mind when adding PQ
> KEMs to a TLS implementation. A TLS stack that always looks at key_share
> first is not PQ-ready and will need some changes before adopting PQ KEMs.

Re: [TLS] A suggestion for handling large key shares

2024-03-18 Thread David Benjamin
> If the server supports P256+ML-KEM, what Matt suggested is that, instead
of accepting P256, it instead a ClientHelloRetry with P256+ML_KEM.  We then
continue as expected and end up negotiating things in 2 round trips.

I assume ClientHelloRetry was meant to be HelloRetryRequest? If so, yeah, a
server which aims to prefer P256+ML-KEM over P256 should, well, prefer
P256+ML-KEM over P256. :-) See the discussions around
draft-davidben-tls-key-share-prediction. In particular, RFC 8446 is clear
on the semantics of such a ClientHello:

   This vector MAY be empty if the client is requesting a
   HelloRetryRequest.  Each KeyShareEntry value MUST correspond to a
   group offered in the "supported_groups" extension and MUST appear in
   the same order.  However, the values MAY be a non-contiguous subset
   of the "supported_groups" extension and MAY omit the most preferred
   groups.  Such a situation could arise if the most preferred groups
   are new and unlikely to be supported in enough places to make
   pregenerating key shares for them efficient.

rfc8446bis contains further clarifications:
https://github.com/tlswg/tls13-spec/pull/1331

Now, some servers (namely OpenSSL) will instead unconditionally select from
key_share first. This isn't wrong, per se. It is how you implement a server
which believes all of its supported groups are of comparable security level
and therefore prioritizes round trips. Such a policy is plausible when you
only support, say, ECDH curves. It's not so reasonable if you support both
ECDH and a PQ KEM. But all the spec text for that is in place, so all that
is left is that folks keep this in mind when adding PQ KEMs to a TLS
implementation. A TLS stack that always looks at key_share first is not
PQ-ready and will need some changes before adopting PQ KEMs.
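
As a sketch of what a PQ-ready selection policy looks like (illustrative only;
the group names below are just examples): walk the server's own preference
order over the mutually supported groups first, and only then look at whether
the chosen group came with a key share, falling back to HelloRetryRequest if
it didn't.

    def select_group(server_prefs, client_supported_groups, client_key_shares):
        # Prefer group strength over saving a round trip: pick the first
        # mutually supported group in the *server's* order, then send
        # HelloRetryRequest if the client didn't predict a key share for it.
        for group in server_prefs:
            if group in client_supported_groups:
                if group in client_key_shares:
                    return ("accept", group)
                return ("hello_retry_request", group)
        return ("alert", "handshake_failure")

    # e.g. server_prefs = ["x25519_kyber768", "x25519"]; a client that
    # supports both but only predicts "x25519" gets
    # ("hello_retry_request", "x25519_kyber768"). A key_share-first server
    # would have returned ("accept", "x25519") here instead.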

Regarding the other half of this:

> Suppose we have a client that supports both P-256 and P256+ML-KEM.  What
the client does is send a key share for P-256, and also indicate support
for P256+ML-KEM.  Because we’re including only the P256 key share, the
client hello is short

I don't think this is a good tradeoff and would oppose such a SHOULD here.
PQ KEMs are expensive as they are. Adding a round-trip to it will only make
it worse. Given the aim is to migrate the TLS ecosystem to PQ, penalizing
the desired state doesn't make sense. Accordingly, Chrome's Kyber
deployment includes X25519Kyber768 in the initial ClientHello. While this
does mean paying an unfortunate upfront cost, this alternative would
instead disincentivize servers from deploying post-quantum protections.

If you're interested in avoiding the upfront cost, see
draft-davidben-tls-key-share-prediction-01. That provides a mechanism for
clients to predict more accurately, though it's yet to even be adopted, so
it's a bit early to rely on that one. Note also the Security Considerations
section, which further depends on the server expectations above.

David

On Tue, Mar 19, 2024 at 2:47 PM Scott Fluhrer (sfluhrer)  wrote:

> Recently, Matt Campagna emailed the hybrid KEM group (Douglas, Shay and
> me) about a suggestion about one way to potentially improve the performance
> (in the ‘the server hasn’t upgraded yet’ case), and asked if we should add
> that suggestion to our draft.  It occurs to me that this suggestion is
> equally applicable to the pure ML-KEM draft (and future PQ drafts as well);
> hence putting it in our draft might not be the right spot.
>
>
>
> Here’s the core idea (Matt’s original scenario was more complicated):
>
>
>
>- Suppose we have a client that supports both P-256 and P256+ML-KEM.
>What the client does is send a key share for P-256, and also indicate
>support for P256+ML-KEM.  Because we’re including only the P256 key share,
>the client hello is short
>- If the server supports only P256, it accepts it, and life goes on as
>normal.
>- If the server supports P256+ML-KEM, what Matt suggested is that,
>instead of accepting P256, it instead sends a HelloRetryRequest with
>P256+ML-KEM. We then continue as expected and end up negotiating things in
>2 round trips.
>
>
>
> Hence, the non-upgraded scenario has no performance hit; the upgraded
> scenario does (because of the second round trip), but we’re transmitting
> more data anyways (and the client could, if it communicates with the server
> again, lead off with the proposal that was accepted last time).
>
>
>
> Matt’s suggestion was that this should be a SHOULD in our draft.
>
>
>
> My questions to you: a) do you agree with this suggestion, and b) if so,
> where should this SHOULD live?  Should it be in our draft?  The ML-KEM
> draft as well (assuming there is one, and it’s not just a codepoint
> assignment)?  Another RFC about how to handle large key shares in general
> (sounds like overkill to me, unless we have other things to put in that
> RFC)?
>
>
>
> Thank you.
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls

Re: [TLS] [EXTERNAL] Re: Next steps for key share prediction

2024-03-18 Thread David Benjamin
> …and now I'm coming around to the idea that we don't need to do anything
special to account for the "wrong" server behavior. Since RFC8446 already
explicitly said that clients are allowed to not predict their most
preferred groups, we can already reasonably infer that such servers
actively believe that all their groups are comparable in security.

I've now updated the draft to do this.
https://www.ietf.org/archive/id/draft-davidben-tls-key-share-prediction-01.html

It is now considerably simpler and just contains the DNS mechanism, plus a
discussion in Security Considerations why this is OK.

On Tue, Mar 12, 2024 at 10:04 AM Andrei Popov 
wrote:

>
>- …and now I'm coming around to the idea that we don't need to do
>anything special to account for the "wrong" server behavior. Since RFC8446
>already explicitly said that clients are allowed to not predict their most
>preferred groups, we can already reasonably infer that such servers
>actively believe that all their groups are comparable in security.
>
> It makes sense for some services to prioritize RTT reduction; others may
> prioritize group strength. There are a lot of services out there
> prioritizing weaker groups for CPU savings (e.g., this is one of the
> reasons why Curve25519 is so popular).
>
>
>
>- I... question whether taking that position is wise, given the
>ongoing postquantum transition, but so it goes
>
> Servers will have to be updated and reconfigured for PQC/hybrid support,
> at which time they will likely apply a different policy.
>
>
>
>- Hopefully your TLS server software, if it advertises pluggable
>cryptography with a PQ use case, and yet opted for a PQ-incompatible
>selection criterion, has clearly documented this so it isn't a surprise to
>you. ;-)
>
> Correct.
>
>
>
>- Between all that, we probably can reasonably say that's the server
>operator's responsibility?
>
> Yes.
>
>
>
> Cheers,
>
>
>
> Andrei
>
>
>
> *From:* TLS  *On Behalf Of *David Benjamin
> *Sent:* Friday, March 8, 2024 3:25 PM
> *To:* Watson Ladd 
> *Cc:*  
> *Subject:* [EXTERNAL] Re: [TLS] Next steps for key share prediction
>
>
>
> On Thu, Mar 7, 2024 at 6:34 PM Watson Ladd  wrote:
>
> On Thu, Mar 7, 2024 at 2:56 PM David Benjamin 
> wrote:
> >
> > Hi all,
> >
> > With the excitement about, sometime in the far future, possibly
> transitioning from a hybrid, or to a to-be-developed better PQ algorithm, I
> thought it would be a good time to remind folks that, right now, we have no
> way to effectively transition between PQ-sized KEMs at all.
> >
> > At IETF 118, we discussed draft-davidben-tls-key-share-prediction, which
> aims to address this. For a refresher, here are some links:
> >
> https://davidben.github.io/tls-key-share-prediction/draft-davidben-tls-key-share-prediction.html
> >
> https://datatracker.ietf.org/meeting/118/materials/slides-118-tls-key-share-prediction-00
> > (Apologies, I forgot to cut a draft-01 with some of the outstanding
> changes in the GitHub, so the link above is probably better than draft-00.)
> >
> > If I recall, the outcome from IETF 118 was two-fold:
> >
> > First, we'd clarify in rfc8446bis that the "key_share first" selection
> algorithm is not quite what you want. This was done in
> https://github.com/tlswg/tls13-spec/pull/1331
> >
> > Second, there was some discussion over whether what's in the draft is
> the best way to resolve a hypothetical future transition, or if there was
> another formulation. I followed up with folks briefly offline afterwards,
> but an alternative never came to fruition.
> >
> > Since we don't have another solution yet, I'd suggest we move forward
> with what's in the draft as a starting point. (Or if this email inspires
> folks to come up with a better solution, even better! :-D) In particular,
> whatever the rfc8446bis guidance is, there are still TLS implementations
> out there with the problematic selection algorithm. Concretely, OpenSSL's
> selection algorithm is incompatible with this kind of transition. See
> https://github.com/openssl/openssl/issues/22203
>
> Is that asking whether or not we want adoption? I want adoption.
>
>
>
> I suppose that would be the next step. :-) I think, last meeting, we were
> a little unclear what we wanted the document to be, so I was trying to take
> stock first. Though MT prompted me to ponder this a bit more in
> https://github.com/davidben/tls-key-share-prediction/issues/5, and now
> I'm coming around to the idea that we don't need to do anything special to
> account for the "wrong" server behavior. Since

Re: [TLS] TLSFlags ambiguity

2024-03-18 Thread David Benjamin
Oh, perfect! I was trying to find the GitHub repo to make the PR but missed
it somehow. Here's a PR: https://github.com/tlswg/tls-flags/pull/37

On Mon, Mar 18, 2024 at 5:01 PM Sean Turner  wrote:

> I just threw in a couple of PRs to align this I-D with 8446bis & 8447bis,
> but forgot to add this issue.  I have corrected this now so that we won’t
> forget again:
> https://github.com/tlswg/tls-flags/issues/36
>
> spt
>
> > On Mar 17, 2024, at 13:53, David Benjamin  wrote:
> >
> > Did this ever get resolved? I noticed that there was a draft-13 cut, but
> the issue Jonathan pointed out was still there.
> >
> > Looking at Section 2 again, it's actually even goofier than the original
> email suggests. Section 2 first says:
> >
> > > The FlagExtensions field contains 8 flags in each octet. The length of
> the extension is the minimal length that allows it to encode all of the
> present flags. Within each octet, the bits are packed such that the first
> bit is the least significant bit and the eighth bit is the most significant.
> >
> > This is LSB first. Then there's an example, which is also LSB first:
> >
> > > For example, if we want to encode only flag number zero, the
> FlagExtension field will be 1 octet long, that is encoded as follows:
> > >
> > >0001
> >
> > So that's all consistent. But then the last paragraph of section 2 says:
> >
> > > Note that this document does not define any particular bits for this
> string. That is left to the protocol documents such as the ones in the
> examples from the previous section. Such documents will have to define
> which bit to set to show support, and the order of the bits within the bit
> string shall be enumerated in network order: bit zero is the high-order bit
> of the first octet as the flags field is transmitted.
> >
> > This says it's MSB first for some reason. But this is not only
> inconsistent, but also redundant with the text at the start of section 2.
> It seems to me we could simply delete the redundant text and move on:
> >
> > > Note that this document does not define any particular bits for this
> string. That is left to the protocol documents such as the ones in the
> examples from the previous section. Such documents will have to define
> which bit to set to show support.
> >
> > David
> >
> > On Wed, Sep 27, 2023, 17:50 David Benjamin 
> wrote:
> > Nice catch! I agree those don't match. I think bit zero should be the
> least-significant bit. That is, we should leave the examples as-is and then
> fix the specification text.
> >
> > Ordering bits MSB first doesn't make much sense. Unlike bytes, there is
> no inherent order to bits in memory, so the most natural order is the power
> of two represented by the bit. Put another way, everyone accesses bit N by
> ANDing with 1 << N and that's least-significant bits first. I can think of
> a couple systems (DER, GCM) that chose to order bits most-significant first
> and both have caused endless confusion and problems. (It's particularly bad
> for GCM which is actually representing a polynomial, but then messed up the
> order. Let's not repeat this blunder.)
> >
> > On Fri, Sep 15, 2023 at 1:37 PM Jonathan Hoyland <
> jonathan.hoyl...@gmail.com> wrote:
> > Hi TLSWG,
> >
> > I'm working on implementing the TLS Flags extension, and I just wanted
> to clarify a potential ambiguity in the spec.
> >
> > In Section 2 the spec says:
> > Such documents will have to define which bit to set to show support, and
> the order of the bits within the bit string shall be enumerated in network
> order: bit zero is the high-order bit of the first octet as the flags field
> is transmitted.
> >
> > And also gives the example for encoding bit zero:
> > For example, if we want to encode only flag number zero, the
> FlagExtension field will be 1 octet long, that is encoded as follows:
> >0001
> > In which it seems that the low-order bit of the first octet represents
> zero.
> >
> > I have no preference either way, but when transmitted on the wire,
> should flag 0 be transmitted as
> > 0x01 or 0x80?
> >
> > Regards,
> >
> > Jonathan
> > ___
> > TLS mailing list
> > TLS@ietf.org
> > https://www.ietf.org/mailman/listinfo/tls
> > ___
> > TLS mailing list
> > TLS@ietf.org
> > https://www.ietf.org/mailman/listinfo/tls
>
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] TLSFlags ambiguity

2024-03-16 Thread David Benjamin
Did this ever get resolved? I noticed that there was a draft-13 cut, but
the issue Jonathan pointed out was still there.

Looking at Section 2 again, it's actually even goofier than the original
email suggests. Section 2 first says:

> The FlagExtensions field contains 8 flags in each octet. The length of
the extension is the minimal length that allows it to encode all of the
present flags. Within each octet, the bits are packed such that the first
bit is the least significant bit and the eighth bit is the most significant.

This is LSB first. Then there's an example, which is also LSB first:

> For example, if we want to encode only flag number zero, the
FlagExtension field will be 1 octet long, that is encoded as follows:
>
>0001

So that's all consistent. But then the last paragraph of section 2 says:

> Note that this document does not define any particular bits for this
string. That is left to the protocol documents such as the ones in the
examples from the previous section. Such documents will have to define
which bit to set to show support, and the order of the bits within the bit
string shall be enumerated in network order: bit zero is the high-order bit
of the first octet as the flags field is transmitted.

This says it's MSB first for some reason. But this is not only
inconsistent, but also redundant with the text at the start of section 2.
It seems to me we could simply delete the redundant text and move on:

> Note that this document does not define any particular bits for this
string. That is left to the protocol documents such as the ones in the
examples from the previous section. Such documents will have to define
which bit to set to show support.
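
For what it's worth, here's a small sketch (Python, not from the draft) of the
LSB-first packing that the first paragraph and the example both describe; flag
0 ends up as the low-order bit of the first octet, i.e. 0x01 on the wire:

    def encode_flags(flags):
        """Pack flag numbers LSB-first: flag N is bit (N % 8) of octet N // 8, and
        the encoding uses the minimal length that covers all present flags."""
        if not flags:
            return b""
        out = bytearray(max(flags) // 8 + 1)
        for n in flags:
            out[n // 8] |= 1 << (n % 8)
        return bytes(out)

    def decode_flags(data):
        return {i * 8 + bit for i, octet in enumerate(data)
                for bit in range(8) if octet & (1 << bit)}

    assert encode_flags({0}) == b"\x01"         # flag 0 -> 00000001 -> 0x01 on the wire
    assert encode_flags({1, 8}) == b"\x02\x01"  # flag 8 is the low bit of the second octet
    assert decode_flags(b"\x02\x01") == {1, 8}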

David

On Wed, Sep 27, 2023, 17:50 David Benjamin  wrote:

> Nice catch! I agree those don't match. I think bit zero should be the
> least-significant bit. That is, we should leave the examples as-is and then
> fix the specification text.
>
> Ordering bits MSB first doesn't make much sense. Unlike bytes, there is no
> inherent order to bits in memory, so the most natural order is the power of
> two represented by the bit. Put another way, everyone accesses bit N by
> ANDing with 1 << N and that's least-significant bits first. I can think of
> a couple systems (DER, GCM) that chose to order bits most-significant first
> and both have caused endless confusion and problems. (It's particularly bad
> for GCM which is actually representing a polynomial, but then messed up the
> order. Let's not repeat this blunder.)
>
> On Fri, Sep 15, 2023 at 1:37 PM Jonathan Hoyland <
> jonathan.hoyl...@gmail.com> wrote:
>
>> Hi TLSWG,
>>
>> I'm working on implementing the TLS Flags extension
>> <https://datatracker.ietf.org/doc/html/draft-ietf-tls-tlsflags-12>, and
>> I just wanted to clarify a potential ambiguity in the spec.
>>
>> In Section 2 the spec says:
>> Such documents will have to define which bit to set to show support, and
>> the order of the bits within the bit string shall be enumerated in network
>> order: bit zero is the high-order bit of the first octet as the flags field
>> is transmitted.
>>
>> And also gives the example for encoding bit zero:
>> For example, if we want to encode only flag number zero, the
>> FlagExtension field will be 1 octet long, that is encoded as follows:
>>
>>0001
>>
>> In which it seems that the low-order bit of the first octet represents zero.
>>
>> I have no preference either way, but when transmitted on the wire, should 
>> flag 0 be transmitted as
>>
>> 0x01 or 0x80?
>>
>> Regards,
>>
>> Jonathan
>>
>> ___
>> TLS mailing list
>> TLS@ietf.org
>> https://www.ietf.org/mailman/listinfo/tls
>>
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Next steps for key share prediction

2024-03-08 Thread David Benjamin
On Thu, Mar 7, 2024 at 6:34 PM Watson Ladd  wrote:

> On Thu, Mar 7, 2024 at 2:56 PM David Benjamin 
> wrote:
> >
> > Hi all,
> >
> > With the excitement about, sometime in the far future, possibly
> transitioning from a hybrid, or to a to-be-developed better PQ algorithm, I
> thought it would be a good time to remind folks that, right now, we have no
> way to effectively transition between PQ-sized KEMs at all.
> >
> > At IETF 118, we discussed draft-davidben-tls-key-share-prediction, which
> aims to address this. For a refresher, here are some links:
> >
> https://davidben.github.io/tls-key-share-prediction/draft-davidben-tls-key-share-prediction.html
> >
> https://datatracker.ietf.org/meeting/118/materials/slides-118-tls-key-share-prediction-00
> > (Apologies, I forgot to cut a draft-01 with some of the outstanding
> changes in the GitHub, so the link above is probably better than draft-00.)
> >
> > If I recall, the outcome from IETF 118 was two-fold:
> >
> > First, we'd clarify in rfc8446bis that the "key_share first" selection
> algorithm is not quite what you want. This was done in
> https://github.com/tlswg/tls13-spec/pull/1331
> >
> > Second, there was some discussion over whether what's in the draft is
> the best way to resolve a hypothetical future transition, or if there was
> another formulation. I followed up with folks briefly offline afterwards,
> but an alternative never came to fruition.
> >
> > Since we don't have another solution yet, I'd suggest we move forward
> with what's in the draft as a starting point. (Or if this email inspires
> folks to come up with a better solution, even better! :-D) In particular,
> whatever the rfc8446bis guidance is, there are still TLS implementations
> out there with the problematic selection algorithm. Concretely, OpenSSL's
> selection algorithm is incompatible with this kind of transition. See
> https://github.com/openssl/openssl/issues/22203
>
> Is that asking whether or not we want adoption? I want adoption.
>

I suppose that would be the next step. :-) I think, last meeting, we were a
little unclear what we wanted the document to be, so I was trying to take
stock first. Though MT prompted me to ponder this a bit more in
https://github.com/davidben/tls-key-share-prediction/issues/5, and now I'm
coming around to the idea that we don't need to do anything special to
account for the "wrong" server behavior. Since RFC8446 already explicitly
said that clients are allowed to not predict their most preferred groups,
we can already reasonably infer that such servers actively believe that all
their groups are comparable in security. OpenSSL, at least, seems to be
taking that position. I... question whether taking that position is wise,
given the ongoing postquantum transition, but so it goes. Hopefully your
TLS server software, if it advertises pluggable cryptography with a PQ use
case, and yet opted for a PQ-incompatible selection criterion, has clearly
documented this so it isn't a surprise to you. ;-)

Between all that, we probably can reasonably say that's the server
operator's responsibility? I'm going to take some time to draft a hopefully
simpler version of the draft that only defines the DNS hint, and just
includes some rough text warning about the implications. Maybe also some
SHOULD level text to call out that servers should be sure their policy is
what they want. Hopefully, in drafting that, it'll be clearer what the
options are. If nothing else, I'm sure writing it will help me crystallize
my own preferences!


> > Given that, I don't see a clear way to avoid some way to separate the
> old behavior (which impacts the existing groups) from the new behavior. The
> draft proposes to do it by keying on the codepoint, and doing our future
> selves a favor by ensuring that the current generation of PQ codepoints are
> ready for this. That's still the best solution I see right now for this
> situation.
> >
> > Thoughts?
>
> I think letting the DNS signal also be an indicator the server
> implements the correct behavior would be a good idea.


I'm afraid DNS is typically unauthenticated. In most TLS deployments, we
have to assume that the attacker has influence over DNS, which makes it
unsuitable for such a signal. Of course, if we end up settling on not
needing a signal, this is moot.

David
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


[TLS] Next steps for key share prediction

2024-03-07 Thread David Benjamin
Hi all,

With the excitement about, sometime in the far future, possibly
transitioning from a hybrid, or to a to-be-developed better PQ algorithm, I
thought it would be a good time to remind folks that, right now, *we have
no way to effectively transition between PQ-sized KEMs at all*.

At IETF 118, we discussed draft-davidben-tls-key-share-prediction, which
aims to address this. For a refresher, here are some links:
https://davidben.github.io/tls-key-share-prediction/draft-davidben-tls-key-share-prediction.html
https://datatracker.ietf.org/meeting/118/materials/slides-118-tls-key-share-prediction-00
(Apologies, I forgot to cut a draft-01 with some of the outstanding changes
in the GitHub, so the link above is probably better than draft-00.)

If I recall, the outcome from IETF 118 was two-fold:

First, we'd clarify in rfc8446bis that the "key_share first" selection
algorithm is not quite what you want. This was done in
https://github.com/tlswg/tls13-spec/pull/1331

Second, there was some discussion over whether what's in the draft is the
best way to resolve a hypothetical future transition, or if there was
another formulation. I followed up with folks briefly offline afterwards,
but an alternative never came to fruition.

Since we don't have another solution yet, I'd suggest we move forward with
what's in the draft as a starting point. (Or if this email inspires folks
to come up with a better solution, even better! :-D) In particular,
whatever the rfc8446bis guidance is, there are still TLS implementations
out there with the problematic selection algorithm. Concretely, OpenSSL's
selection algorithm is incompatible with this kind of transition. See
https://github.com/openssl/openssl/issues/22203

Given that, I don't see a clear way to avoid *some* way to separate the old
behavior (which impacts the existing groups) from the new behavior. The
draft proposes to do it by keying on the codepoint, and doing our future
selves a favor by ensuring that the current generation of PQ codepoints are
ready for this. That's still the best solution I see right now for this
situation.

Thoughts?

David
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Time to first byte vs time to last byte

2024-03-07 Thread David Benjamin
This is good work, but we need to be wary of getting too excited about
TTLB, and then declaring performance solved. Ultimately, TTLB simply
dampens the impact of postquantum by mixing in the (handshake-independent)
time to do the bulk transfer. The question is whether that reflects our
goals.

Ultimately, the thing that matters is overall application
performance, which can be complex to measure because you actually have to
try that application. Metrics like TTLB, TTFB, etc., are isolated to one
connection and thus easier to measure without checking each
application one by one. But they're only as valuable as they are predictors
of overall application performance. For TTLB, both the magnitude and
desirability of the dampening effect are application-specific:

If your goal is transferring a large file on the backend, such that you
really only care when the operation is complete, then yes, TTLB is a good
proxy for application system performance. You just care about throughput in
that case. Moreover, in such applications, if you are transferring a lot of
data, the dampening effect not only reflects reality but is larger.

However, interactive, user-facing applications are different. There, TTLB
is a poor proxy for application performance. For example, on the web,
performance is determined more by how long it takes to display a meaningful
webpage to the user. (We often call this the time to "first contentful
paint".) Now, that is a very high-level metric that is impacted by all
sorts of things, such as whether this is a repeat visit, page structure,
etc. So it is hard to immediately translate that back down to TLS. But it
is frequently much closer to the TTFB side of the spectrum than the TTLB
side. And indeed, we have been seeing impacts from PQ to our high-level
metrics on mobile.

There's also a pretty natural intuition for this: since there is much more
focus on latency than throughput, optimizing an interactive application
often involves trying to reduce the amount of traffic on the critical path.
The more the application does so, the less accurate TTLB's dampening effect
is, and the closer we trend towards TTFB. (Of course, some optimizations in
this space involve making fewer connections, etc. But the point here was to
give a rough intuition.)
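
To put rough numbers on the dampening effect (purely illustrative figures, not
measurements): with a fixed handshake penalty, the relative TTLB hit shrinks
as the transfer grows, while the relative TTFB hit does not:

    # Illustrative arithmetic only; all numbers are made up, not measurements.
    def relative_overhead(extra_handshake_ms, baseline_handshake_ms, transfer_ms):
        ttfb_increase = extra_handshake_ms / baseline_handshake_ms
        ttlb_increase = extra_handshake_ms / (baseline_handshake_ms + transfer_ms)
        return ttfb_increase, ttlb_increase

    for transfer in (10, 100, 1000, 10000):  # ms spent on the bulk transfer
        ttfb, ttlb = relative_overhead(extra_handshake_ms=30,
                                       baseline_handshake_ms=100,
                                       transfer_ms=transfer)
        print(f"transfer={transfer:>5} ms  TTFB +{ttfb:.0%}  TTLB +{ttlb:.0%}")
    # TTFB stays at +30% regardless, while TTLB drops from +27% toward +0% as the
    # transfer dominates; TTLB increasingly reflects throughput, not the handshake.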

On Thu, Mar 7, 2024 at 2:58 PM Deirdre Connolly 
wrote:

> "At the 2024 Workshop on Measurements, Attacks, and Defenses for the Web
> (MADweb), we presented a paper¹ advocating time to last byte (TTLB) as a
> metric for assessing the total impact of data-heavy, quantum-resistant
> algorithms such as ML-KEM and ML-DSA on real-world TLS 1.3 connections. Our
> paper shows that the new algorithms will have a much lower net effect on
> connections that transfer sizable amounts of data than they do on the TLS
> 1.3 handshake itself."
>
>
> https://www.amazon.science/blog/delays-from-post-quantum-cryptography-may-not-be-so-bad
>
> ¹
> https://www.amazon.science/publications/the-impact-of-data-heavy-post-quantum-tls-1-3-on-the-time-to-last-byte-of-real-world-connections/
>
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Trust Expressions Follow-up

2024-03-02 Thread David Benjamin
rint(A1)}
> ->  all versions of a trust store by name containing A1.
>
> Obviously these names are not cross protocol friendly, we looked into URNs
> for that (don't throw things at me, I know this is a sensitive subject in
> the IETF).
>
> The concept of SCITT Receipts seems close to the purpose of intermediate
> elision here:
> https://davidben.github.io/tls-trust-expressions/draft-davidben-tls-trust-expr.html#name-intermediate-elision
>
> The idea was that if you had stable receipts from a transparency service
> you trusted, you might not need to care about all the parts of a cert chain
> that they validated before including the cert in their transparency log.
>
> I'd be interested in prototyping a SCITT mapping for this, if I can wrap
> my head around the use case a bit more.
>
> OS
>
>
>
>
>
> On Thu, Feb 29, 2024 at 6:31 PM David Benjamin 
> wrote:
>
>> Oh, I should have added: I put together an informal "explainer"-style
>> document to try to describe the high-level motivations and goals a bit
>> better. The format is adapted more from the web platform end, which likes
>> to have separate explainer and specification documents, but it seemed a
>> good style for capturing, at a high level, what we're trying to accomplish.
>> https://github.com/davidben/tls-trust-expressions/blob/main/explainer.md
>>
>> It's largely a copy of the start of this email thread, but I figured it'd
>> be useful to have a more canonical home for it. (We'll probably adapt some
>> of that text back into the draft, after some more wordsmithing.)
>>
>>
>> On Thu, Feb 29, 2024 at 7:18 PM David Benjamin 
>> wrote:
>>
>>> Circling back to this thread, we're now looking at prototyping the TLS
>>> parts in BoringSSL, on both the client (Chrome) and the server side. Let us
>>> know if you have any thoughts on the proposal!
>>>
>>> (Nothing that would prevent us from changing details, of course. But as
>>> there are a lot of pieces here, we'd like to get some experience with
>>> things.)
>>>
>>> On Thu, Jan 25, 2024 at 5:00 PM David Benjamin 
>>> wrote:
>>>
>>>> Hi all,
>>>>
>>>> Now that the holidays are over, I wanted to follow up on
>>>> draft-davidben-tls-trust-expr and continue some of the discussions from
>>>> Prague:
>>>>
>>>>
>>>> https://davidben.github.io/tls-trust-expressions/draft-davidben-tls-trust-expr.html
>>>>
>>>>
>>>> https://datatracker.ietf.org/meeting/118/materials/slides-118-tls-tls-trust-expressions-00
>>>>
>>>> First, I wanted to briefly clarify the purpose of excluded_labels
>>>> <https://davidben.github.io/tls-trust-expressions/draft-davidben-tls-trust-expr.html#name-trust-expressions>:
>>>> it is primarily intended to address version skew: if the certificate is
>>>> known to match (example, v10) and the client says (example, v11), the
>>>> server doesn’t know whether v11 distrusted or retained the CA. We resolve
>>>> that with a combination of excluded_labels and TrustStoreStatus.
>>>> excluded_labels is not intended for user-specific removals. I’ve
>>>> reworked the Privacy Considerations
>>>> <https://davidben.github.io/tls-trust-expressions/draft-davidben-tls-trust-expr.html#name-privacy-considerations>
>>>> section to discuss this more clearly.
>>>>
>>>> Second, I wanted to take a step back and try to better articulate our
>>>> goals. I think the best way to look at this draft is in three parts:
>>>>
>>>> 1. A new multi-certificate deployment model (the overall goal)
>>>>
>>>> 2. Selecting certificates within that model (the TLS parts of the draft)
>>>>
>>>> 3. Provisioning certificates (the ACME parts of the draft)
>>>>
>>>> We’d most like to get consensus on the first, as it’s the most
>>>> important. Trust expressions are a way to achieve that goal, but we’re not
>>>> attached to the specific mechanism if there’s a better one. We briefly
>>>> discuss this in the introduction
>>>> <https://davidben.github.io/tls-trust-expressions/draft-davidben-tls-trust-expr.html#name-introduction>,
>>>> but I think it is worth elaborating here:
>>>>
>>>> The aim is to get to a more flexible and robust PKI, by allowing
>>>> servers to select between multiple certificates. To do this, we need a way
>>>> for the servers to reliably selec

Re: [TLS] Trust Expressions Follow-up

2024-02-29 Thread David Benjamin
Oh, I should have added: I put together an informal "explainer"-style
document to try to describe the high-level motivations and goals a bit
better. The format is adapted more from the web platform end, which likes
to have separate explainer and specification documents, but it seemed a
good style for capturing, at a high level, what we're trying to accomplish.
https://github.com/davidben/tls-trust-expressions/blob/main/explainer.md

It's largely a copy of the start of this email thread, but I figured it'd
be useful to have a more canonical home for it. (We'll probably adapt some
of that text back into the draft, after some more wordsmithing.)


On Thu, Feb 29, 2024 at 7:18 PM David Benjamin 
wrote:

> Circling back to this thread, we're now looking at prototyping the TLS
> parts in BoringSSL, on both the client (Chrome) and the server side. Let us
> know if you have any thoughts on the proposal!
>
> (Nothing that would prevent us from changing details, of course. But as
> there are a lot of pieces here, we'd like to get some experience with
> things.)
>
> On Thu, Jan 25, 2024 at 5:00 PM David Benjamin 
> wrote:
>
>> Hi all,
>>
>> Now that the holidays are over, I wanted to follow up on
>> draft-davidben-tls-trust-expr and continue some of the discussions from
>> Prague:
>>
>>
>> https://davidben.github.io/tls-trust-expressions/draft-davidben-tls-trust-expr.html
>>
>>
>> https://datatracker.ietf.org/meeting/118/materials/slides-118-tls-tls-trust-expressions-00
>>
>> First, I wanted to briefly clarify the purpose of excluded_labels
>> <https://davidben.github.io/tls-trust-expressions/draft-davidben-tls-trust-expr.html#name-trust-expressions>:
>> it is primarily intended to address version skew: if the certificate is
>> known to match (example, v10) and the client says (example, v11), the
>> server doesn’t know whether v11 distrusted or retained the CA. We resolve
>> that with a combination of excluded_labels and TrustStoreStatus.
>> excluded_labels is not intended for user-specific removals. I’ve
>> reworked the Privacy Considerations
>> <https://davidben.github.io/tls-trust-expressions/draft-davidben-tls-trust-expr.html#name-privacy-considerations>
>> section to discuss this more clearly.
>>
>> Second, I wanted to take a step back and try to better articulate our
>> goals. I think the best way to look at this draft is in three parts:
>>
>> 1. A new multi-certificate deployment model (the overall goal)
>>
>> 2. Selecting certificates within that model (the TLS parts of the draft)
>>
>> 3. Provisioning certificates (the ACME parts of the draft)
>>
>> We’d most like to get consensus on the first, as it’s the most important.
>> Trust expressions are a way to achieve that goal, but we’re not attached to
>> the specific mechanism if there’s a better one. We briefly discuss this in
>> the introduction
>> <https://davidben.github.io/tls-trust-expressions/draft-davidben-tls-trust-expr.html#name-introduction>,
>> but I think it is worth elaborating here:
>>
>> The aim is to get to a more flexible and robust PKI, by allowing servers
>> to select between multiple certificates. To do this, we need a way for the
>> servers to reliably select the correct certificate to use with each client.
>> In doing so, we wish to minimize manual changes by server operators in the
>> long run. Most ongoing decisions should instead come from TLS software,
>> ACME client software, and ACME servers.
>>
>> Why does this matter? PKIs need to evolve over time to meet user security
>> needs: CAs that add net value to the ecosystem may be added, long-lived
>> keys should be rotated to reduce risk, and compromised or untrustworthy CAs
>> are removed. Even a slow-moving, mostly aligned ecosystem is still made of
>> individual decisions that roll out to individual clients. This means,
>> whether we like it or not, trust divergence is a fact of life, if for no
>> other reason than out-of-date clients in the ecosystem. These clients could
>> range from unupdatable TV set-top boxes to some IoT device to a browser
>> that could not communicate with its update service.
>>
>> Today, the mere existence of old clients limits security improvements for
>> other, unrelated clients. Consider a TLS client making some trust change
>> for user security. For availability, TLS servers must have some way to
>> satisfy it. Some server, however, may also support an older client. If the
>> server uses a single certificate, that certificate is limited to the
>> intersection of both clients.
>>
>> At the scale of the Internet, th

Re: [TLS] Trust Expressions Follow-up

2024-02-29 Thread David Benjamin
Circling back to this thread, we're now looking at prototyping the TLS
parts in BoringSSL, on both the client (Chrome) and the server side. Let us
know if you have any thoughts on the proposal!

(Nothing that would prevent us from changing details, of course. But as
there are a lot of pieces here, we'd like to get some experience with
things.)

On Thu, Jan 25, 2024 at 5:00 PM David Benjamin 
wrote:

> Hi all,
>
> Now that the holidays are over, I wanted to follow up on
> draft-davidben-tls-trust-expr and continue some of the discussions from
> Prague:
>
>
> https://davidben.github.io/tls-trust-expressions/draft-davidben-tls-trust-expr.html
>
>
> https://datatracker.ietf.org/meeting/118/materials/slides-118-tls-tls-trust-expressions-00
>
> First, I wanted to briefly clarify the purpose of excluded_labels
> <https://davidben.github.io/tls-trust-expressions/draft-davidben-tls-trust-expr.html#name-trust-expressions>:
> it is primarily intended to address version skew: if the certificate is
> known to match (example, v10) and the client says (example, v11), the
> server doesn’t know whether v11 distrusted or retained the CA. We resolve
> that with a combination of excluded_labels and TrustStoreStatus.
> excluded_labels is not intended for user-specific removals. I’ve reworked
> the Privacy Considerations
> <https://davidben.github.io/tls-trust-expressions/draft-davidben-tls-trust-expr.html#name-privacy-considerations>
> section to discuss this more clearly.
>
> Second, I wanted to take a step back and try to better articulate our
> goals. I think the best way to look at this draft is in three parts:
>
> 1. A new multi-certificate deployment model (the overall goal)
>
> 2. Selecting certificates within that model (the TLS parts of the draft)
>
> 3. Provisioning certificates (the ACME parts of the draft)
>
> We’d most like to get consensus on the first, as it’s the most important.
> Trust expressions are a way to achieve that goal, but we’re not attached to
> the specific mechanism if there’s a better one. We briefly discuss this in
> the introduction
> <https://davidben.github.io/tls-trust-expressions/draft-davidben-tls-trust-expr.html#name-introduction>,
> but I think it is worth elaborating here:
>
> The aim is to get to a more flexible and robust PKI, by allowing servers
> to select between multiple certificates. To do this, we need a way for the
> servers to reliably select the correct certificate to use with each client.
> In doing so, we wish to minimize manual changes by server operators in the
> long run. Most ongoing decisions should instead come from TLS software,
> ACME client software, and ACME servers.
>
> Why does this matter? PKIs need to evolve over time to meet user security
> needs: CAs that add net value to the ecosystem may be added, long-lived
> keys should be rotated to reduce risk, and compromised or untrustworthy CAs
> are removed. Even a slow-moving, mostly aligned ecosystem is still made of
> individual decisions that roll out to individual clients. This means,
> whether we like it or not, trust divergence is a fact of life, if for no
> other reason than out-of-date clients in the ecosystem. These clients could
> range from unupdatable TV set-top boxes to some IoT device to a browser
> that could not communicate with its update service.
>
> Today, the mere existence of old clients limits security improvements for
> other, unrelated clients. Consider a TLS client making some trust change
> for user security. For availability, TLS servers must have some way to
> satisfy it. Some server, however, may also support an older client. If the
> server uses a single certificate, that certificate is limited to the
> intersection of both clients.
>
> At the scale of the Internet, the oldest clients last indefinitely. As
> servers consider older and older clients, that intersection becomes
> increasingly constraining, causing availability and security to conflict.
> As a community of security practitioners, I wish I could tell you that
> security wins, that those servers can simply be convinced to drop the old
> clients, and that this encourages old clients to update. I think we all
> know this is not what happens. Availability almost always beats security. The
> result of this conflict is not that old clients get updates, it is that
> newer clients cannot improve user security. It takes just one important
> server with one important old client to jam everything, with user
> security paying the cost.
>
> We wish to remove this conflict with certificate negotiation, analogous to
> TLS version negotiation and cipher suite negotiation. Certificate
> negotiation, via trust expressions, means security improvements in new
> clients do not conflict wit

Re: [TLS] Trust Expressions Follow-up

2024-01-26 Thread David Benjamin
On Fri, Jan 26, 2024 at 5:15 AM Ilari Liusvaara 
wrote:

> On Thu, Jan 25, 2024 at 05:00:19PM -0500, David Benjamin wrote:
> >
> > Second, I wanted to take a step back and try to better articulate our
> > goals. I think the best way to look at this draft is in three parts:
> >
> > 1. A new multi-certificate deployment model (the overall goal)
> >
> > 2. Selecting certificates within that model (the TLS parts of the draft)
> >
> > 3. Provisioning certificates (the ACME parts of the draft)
>
> I think a bit differently:
>
> a. What information does the server have, what information it
> dynamically receives from the client?
>
> b. How does this drive certificate chain selection?
>
> c. How the information from client is encoded?
>
> d. How the information server has is provisioned?
>
>
> The reason for splitting it this way is that b., c. and d. are all
> important problems, all three depend on a., but only b. and c. are in
> remit of TLS. Oh, and I regard d. as formidable challenge, by far the
> most difficult part.
>

Ah sure. I was mostly thinking a step before that split. From Prague, I got
the sense it'd be useful to focus the initial discussion about why a
multi-certificate model is useful, and perhaps the high-level shape of the
solution, separate from hashing out the protocol details. I suspect if
"what are we trying to do and why" vs "here's how to provision the server"
are lumped into one discussion thread, it'll be very difficult to keep
track of things. Also the former seems more useful for questions like
whether to adopt, while the latter seems like something we can hash out
later.

Beyond that, my division between 2 and 3 was perhaps a bit sloppy
there, yeah. I was just trying to capture that we've been focusing a bit
less on the ACME bits for now. Not because they aren't important or tricky,
but because I think they're not *strongly* impacted by the rest of the
design. "Some way to get multiple certs" and "some metadata to attach to
the certs" is a pretty general thing to design for. Maybe some variability
around whether we need to believe metadata update and certificate refresh
are the same operation or different.

Anyway, this is just me giving my best attempt at organizing the discussion
a bit. It's a pretty large problem and design space to navigate without
some framing. But if other corners catch people's eye instead, I'm happy to
discuss whatever. :-)


> > We’d most like to get consensus on the first, as it’s the most important.
> > Trust expressions are a way to achieve that goal, but we’re not attached
> to
> > the specific mechanism if there’s a better one.
>
> Well, I certainly do not have ideas for solving the problem that are
> dramatically different from what is in there currently.
>
>
> > The aim is to get to a more flexible and robust PKI, by allowing servers
> to
> > select between multiple certificates. To do this, we need a way for the
> > servers to reliably select the correct certificate to use with each
> client.
> > In doing so, we wish to minimize manual changes by server operators in
> the
> > long run. Most ongoing decisions should instead come from TLS software,
> > ACME client software, and ACME servers.
>
> The thing that makes provisioning challenging is that there is fourth
> party involved: Application terminating TLS on server side.
>
> I am not aware of any current (I know one that existed in the past)
> deployments that have ACME client software directly interface with TLS
> software.
>
> And I have never encountered an application configuration interface that I
> could easily see how to make work with something like this. Mostly because
> certificate lists are either static or unordered.
>
> A more reduced scope that is likely feasible with more applications is
> selecting among chains for single end-entity certificate. However, such
> restrictions do not affect the TLS-visible parts.
>

Yeah the ACME client <-> server configuration interface is definitely on
the interesting side. I think it's more important to preserve the graph of
what components talk to each other (i.e. the operational considerations),
than to preserve the exact interfaces between them. Obviously, the less we
have to change, the better, but I think it's also okay to have to extend
those interfaces if push comes to shove.

And yeah, reduced-scope versions for some cases could also be useful.
Although the single-leaf restriction does remove a lot of the flexibility
that we'd otherwise afford the server operator, so I think we should still
design for the general case.

In particular, I think these reductions don't have to affect the TLS-visible
parts, or even most of the ACME-visible parts. Unless

[TLS] Trust Expressions Follow-up

2024-01-25 Thread David Benjamin
Hi all,

Now that the holidays are over, I wanted to follow up on
draft-davidben-tls-trust-expr and continue some of the discussions from
Prague:

https://davidben.github.io/tls-trust-expressions/draft-davidben-tls-trust-expr.html

https://datatracker.ietf.org/meeting/118/materials/slides-118-tls-tls-trust-expressions-00

First, I wanted to briefly clarify the purpose of excluded_labels
<https://davidben.github.io/tls-trust-expressions/draft-davidben-tls-trust-expr.html#name-trust-expressions>:
it is primarily intended to address version skew: if the certificate is
known to match (example, v10) and the client says (example, v11), the
server doesn’t know whether v11 distrusted or retained the CA. We resolve
that with a combination of excluded_labels and TrustStoreStatus.
excluded_labels is not intended for user-specific removals. I’ve reworked
the Privacy Considerations
<https://davidben.github.io/tls-trust-expressions/draft-davidben-tls-trust-expr.html#name-privacy-considerations>
section to discuss this more clearly.
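
To sketch how that plays out on the server (very loosely, with simplified
invented field names rather than the draft's actual wire structures), a
candidate chain is usable for a client's advertised trust store if the CA's
inclusion metadata names that store at or below the client's version, and the
client has not excluded the corresponding label:

    # Loose sketch only; field names are invented and this is not the draft's
    # exact matching algorithm.
    def chain_matches(client_expr, chain_inclusions):
        """client_expr: {"trust_store": str, "version": int, "excluded_labels": set}
        chain_inclusions: CA-provided metadata for one chain, a list of
            {"trust_store": str, "since_version": int, "label": int}"""
        for entry in chain_inclusions:
            if (entry["trust_store"] == client_expr["trust_store"]
                    and client_expr["version"] >= entry["since_version"]
                    and entry["label"] not in client_expr["excluded_labels"]):
                return True
        return False

    # Version skew: the metadata was produced at v10, the client is at v11 and has
    # not excluded the CA's label, so the chain is still offered.
    client = {"trust_store": "example", "version": 11, "excluded_labels": set()}
    chain = [{"trust_store": "example", "since_version": 10, "label": 7}]
    print(chain_matches(client, chain))  # True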

Second, I wanted to take a step back and try to better articulate our
goals. I think the best way to look at this draft is in three parts:

1. A new multi-certificate deployment model (the overall goal)

2. Selecting certificates within that model (the TLS parts of the draft)

3. Provisioning certificates (the ACME parts of the draft)

We’d most like to get consensus on the first, as it’s the most important.
Trust expressions are a way to achieve that goal, but we’re not attached to
the specific mechanism if there’s a better one. We briefly discuss this in
the introduction
<https://davidben.github.io/tls-trust-expressions/draft-davidben-tls-trust-expr.html#name-introduction>,
but I think it is worth elaborating here:

The aim is to get to a more flexible and robust PKI, by allowing servers to
select between multiple certificates. To do this, we need a way for the
servers to reliably select the correct certificate to use with each client.
In doing so, we wish to minimize manual changes by server operators in the
long run. Most ongoing decisions should instead come from TLS software,
ACME client software, and ACME servers.

Why does this matter? PKIs need to evolve over time to meet user security
needs: CAs that add net value to the ecosystem may be added, long-lived
keys should be rotated to reduce risk, and compromised or untrustworthy CAs
are removed. Even a slow-moving, mostly aligned ecosystem is still made of
individual decisions that roll out to individual clients. This means,
whether we like it or not, trust divergence is a fact of life, if for no
other reason than out-of-date clients in the ecosystem. These clients could
range from unupdatable TV set-top boxes to some IoT device to a browser
that could not communicate with its update service.

Today, the mere existence of old clients limits security improvements for
other, unrelated clients. Consider a TLS client making some trust change
for user security. For availability, TLS servers must have some way to
satisfy it. Some server, however, may also support an older client. If the
server uses a single certificate, that certificate is limited to the
intersection of both clients.

At the scale of the Internet, the oldest clients last indefinitely. As
servers consider older and older clients, that intersection becomes
increasingly constraining, causing availability and security to conflict.
As a community of security practitioners, I wish I could tell you that
security wins, that those servers can simply be convinced to drop the old
clients, and that this encourages old clients to update. I think we all
know this is not what happens. Availability almost always beats security. The
result of this conflict is not that old clients get updates, it is that
newer clients cannot improve user security. It takes just one important
server with one important old client to jam everything, with user security
paying the cost.

We wish to remove this conflict with certificate negotiation, analogous to
TLS version negotiation and cipher suite negotiation. Certificate
negotiation, via trust expressions, means security improvements in new
clients do not conflict with availability for older clients. Servers can,
with the aid of their ACME servers, deliver different chains to different
clients as needed.

Now, some of these problems can be addressed with cross-signs between CAs,
but this is a partial solution at best. Without negotiation, this still
means sending certificates for the lowest common denominator to all
clients. This wastes bandwidth, particularly with post-quantum
cryptography, where every signature costs kilobytes. Additionally, an
individual server operator cannot unilaterally configure this. Cross-signs
apply to entire intermediate CAs, not just the individual server’s
certificate.
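
As a rough sense of scale (back-of-the-envelope arithmetic with approximate
sizes; the ML-DSA-44 figures are from FIPS 204, and this counts only keys and
signatures, not names, extensions, SCTs, or the CertificateVerify):

    # Rough arithmetic only; sizes are approximate and everything else in a
    # certificate (names, extensions, SCTs, ...) is ignored.
    SIZES = {
        "ecdsa_p256": {"public_key": 65, "signature": 72},     # uncompressed point, DER sig
        "ml_dsa_44":  {"public_key": 1312, "signature": 2420}, # FIPS 204
    }

    def chain_crypto_bytes(alg, num_certs=2):
        """Keys + signatures for `num_certs` transmitted certs (leaf + one
        intermediate), each carrying one subject key and one issuer signature."""
        s = SIZES[alg]
        return num_certs * (s["public_key"] + s["signature"])

    for alg in SIZES:
        print(alg, chain_crypto_bytes(alg), "bytes")
    # ~274 bytes classical vs ~7464 bytes with ML-DSA-44: several extra kilobytes
    # per chain, so always sending the lowest-common-denominator chain adds up.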

There’s more to say on this topic, as relieving this conflict solves many
other PKI problems and enables new solutions for relying parties, CAs, and

Re: [TLS] [Errata Held for Document Update] RFC8446 (5682)

2024-01-22 Thread David Benjamin
On Thu, Jan 18, 2024 at 5:25 PM Rob Sayre  wrote:

> On Thu, Jan 18, 2024 at 1:26 PM David Benjamin 
> wrote:
>
>>
>> I think sometimes we spend a little more energy than is actually useful
>> in figuring out these implied lower bounds. :-) In practice, the only
>> decision we actually care about is whether 0 is allowed, and even then it's
>> often irrelevant (like here).
>>
>
> FWIW, I find these really confusing in TLS notation.
>
> I usually end up checking what NSS or OpenSSL does to get the answer. So,
> I don't think there's an operational problem, but it could be better.
>

I just mentally replace all non-zero minimums to 1 when reading. I can't
think of any structure where a non-zero minimum value was not just some
attempt to figure out the minimum possible byte count of a single object.

David
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Errata Held for Document Update] RFC8446 (5682)

2024-01-18 Thread David Benjamin
The extension list cannot actually be empty because we also say:

> The "signature_algorithms" extension
> MUST be specified, and other extensions may optionally be included
> if defined for this message.

That said, enforcing this by rejecting the empty list doesn't do much
because the receiver will need to look for specifically the sigalgs
extension anyway. So I'm on board with making this say either 0 or 4.
Honestly, 2 is fine too... if you check for 2 instead of 4, you'll still
have the exact same behavior. In fact I think the sigalgs rule implies the
true minimum is 8. But let's not put 8 because it will confuse everyone.
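
Spelling out that byte counting as a quick sketch (informal, field sizes per
RFC 8446):

    # The byte counting behind "the true minimum is 8" (informal sketch).
    extension_type = 2         # ExtensionType
    extension_length = 2       # extension_data length prefix
    sigalgs_vector_length = 2  # supported_signature_algorithms length prefix
    one_signature_scheme = 2   # a single SignatureScheme code point

    minimal_sigalgs_extension = (extension_type + extension_length
                                 + sigalgs_vector_length + one_signature_scheme)
    print(minimal_sigalgs_extension)  # 8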

I think sometimes we spend a little more energy than is actually useful in
figuring out these implied lower bounds. :-) In practice, the only decision
we actually care about is whether 0 is allowed, and even then it's often
irrelevant (like here).

On Thu, Jan 18, 2024 at 3:47 PM Benjamin Kaduk  wrote:

> I think if the errata report is moved back into the "reported" state by
> the RFC Editor staff, the AD should be able to edit the report to reflect
> the intent as opposed to having the diff appear.
>
> -Ben
>
> On Tue, Jan 16, 2024 at 07:07:19PM -0800, RFC Errata System wrote:
> > The following errata report has been held for document update
> > for RFC8446, "The Transport Layer Security (TLS) Protocol Version 1.3".
> >
> > --
> > You may review the report below and at:
> >
> https://www.rfc-editor.org/errata/eid5682
> >
> > --
> > Status: Held for Document Update
> > Type: Technical
> >
> > Reported by: Richard Barnes 
> > Date Reported: 2019-04-01
> > Held by: Paul Wouters (IESG)
> >
> > Section: 4.3.2, B.3.2
> >
> > Original Text
> > -
> > --- rfc8446.txt   2018-08-10 20:12:08.0 -0400
> > +++ rfc8446.erratum.txt   2019-04-01 15:44:54.0 -0400
> > @@ -3341,7 +3341,7 @@
> >
> >struct {
> >opaque certificate_request_context<0..2^8-1>;
> > -  Extension extensions<2..2^16-1>;
> > +  Extension extensions<0..2^16-1>;
> >} CertificateRequest;
> >
> >
> > @@ -7309,7 +7309,7 @@
> >
> >struct {
> >opaque certificate_request_context<0..2^8-1>;
> > -  Extension extensions<2..2^16-1>;
> > +  Extension extensions<0..2^16-1>;
> >} CertificateRequest;
> >
> >
> >
> >
> > Corrected Text
> > --
> > --- rfc8446.txt   2018-08-10 20:12:08.0 -0400
> > +++ rfc8446.erratum.txt   2019-04-01 15:44:54.0 -0400
> > @@ -3341,7 +3341,7 @@
> >
> >struct {
> >opaque certificate_request_context<0..2^8-1>;
> > -  Extension extensions<2..2^16-1>;
> > +  Extension extensions<0..2^16-1>;
> >} CertificateRequest;
> >
> >
> > @@ -7309,7 +7309,7 @@
> >
> >struct {
> >opaque certificate_request_context<0..2^8-1>;
> > -  Extension extensions<2..2^16-1>;
> > +  Extension extensions<0..2^16-1>;
> >} CertificateRequest;
> >
> >
> >
> >
> > Notes
> > -
> > The length of this vector can never be 2.  It is either 0, if the vector is
> empty, or >=4, if the vector has at least one extension.  Nothing elsewhere
> in the spec requires a non-zero number of extensions here, so this syntax
> should allow a zero-length vector.
> >
> > Paul Wouters (AD): Richard meant the diff to be the fix, not the
> original/corrected text. The diff is not in the RFC itself. There are two
> places in the mentioned sections that need this one liner fix.
> >
> > --
> > RFC8446 (draft-ietf-tls-tls13-28)
> > --
> > Title   : The Transport Layer Security (TLS) Protocol
> Version 1.3
> > Publication Date: August 2018
> > Author(s)   : E. Rescorla
> > Category: PROPOSED STANDARD
> > Source  : Transport Layer Security
> > Area: Security
> > Stream  : IETF
> > Verifying Party : IESG
> >
> > ___
> > TLS mailing list
> > TLS@ietf.org
> >
> https://www.ietf.org/mailman/listinfo/tls
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] TLS Suite Naming Conventions

2024-01-06 Thread David Benjamin
On Sat, Jan 6, 2024 at 12:23 PM David Benjamin 
wrote:

> I think this thread stems from a misunderstanding of what TLS is doing,
> and what "Ed25519" means.
>
> > In the thread, Neil said that it is better to negotiate for key
> (representations), instead of algorithms, and that TLS has been moving away
> from fully specifying things.
>
> This is the exact opposite of what TLS 1.3 has done. I see the JOSE email
> claims we've moved away from fully specifying things by citing the cipher
> suite change, but that is misunderstanding what changed.
>
> The SignatureScheme enum negotiates a signature algorithm, which then
> implies a key type. It used to be that this information was scattered. To
> see if an ECDSA key was viable in TLS 1.2, you had to look in...
> - Cipher suites, to see if there was a TLS_ECDHE_ECDSA_*
> - Supported curves, to see if P-256 was in there
> - EC point formats, to see if your point format was in there
> - Signature algorithms, to see if (ECDSA, SHA-256) (or some other hash
> your key supported) was in there.
>
> This was a huge mess. Especially when you consider that this key is a
> long-lived credential, and thus at the boundary between the TLS
> implementation and deployment-specific requirements (sticking keys in some
> hardware thing), this selection information often needs to be exported out
> of the library. TLS 1.3 does away with all this and collapses all the
> information into a *single* enum, ecdsa_secp256r1_sha256.
>
> Now, the email mentions cipher suites. We did indeed take the
> "ECDHE_ECDSA" half out of the cipher suite and left only the AEAD and the
> PRF hash. That was *not* about fully specifying things. Rather, it was
> about keeping atomic things atomic. "ECDHE_ECDSA" was not useful. "ECDSA"
> could not be evaluated without checking the above extensions. Likewise,
> "ECDHE" would not be evaluated without checking many of those same
> extensions. Once we shifted to SignatureScheme being the long-lived
> credential and NamedGroup (formerly supported curves) being the ephemeral
> key exchange, that partial information is redundant.
>
> In TLS 1.2, "ECDHE_ECDSA" served a second purpose, which was to say we
> were doing a Diffie-Hellman + signature handshake, not an RSA-key-exchange
> style handshake. However, it did so in a very awkward way. In any case, we
> removed the latter in TLS 1.3. Once that was gone, this information was
> completely redundant and only made negotiation more complicated. Thus, we
> removed it.
>
> Now, we could have decided that we actually like having a single enum,
> rather than a few orthogonal enums, and replaced SignatureScheme +
> NamedGroup with a single "cipher suite" that specified (ECDSA-P256-SHA256,
> ECDH-P256, AES-128-GCM, HKDF-SHA256). That would have been more in keeping
> with the original (SSL3-era) naming of "cipher suite", and still satisfied
> the "keep atomic things atomic" goal. And perhaps having a single enum to
> specify the TLS settings would have been nice. (No one remembers there
> have been multiple configuration points in TLS.) However, for better or
> worse, TLS *already* eroded the "cipher suite" as a fully specified enum.
> I think it started with RFC 4492, which opted for a couple side extensions
> rather than putting the ECDSA and ECDH variant into the cipher suite. It
> would have been a *bigger* departure from TLS 1.2 to do that, than what
> we actually did. Thus, we stuck with the orthogonal enums model and
> finished the evolution started by TLS 1.2.
>
> > I see that "ed25519(0x0807)," could have been "eddsa_ed25519", and I
> assume "0x0807" actually means "eddsa with ed25519", and "0x0808" actually
> means "eddsa with ed448".
>
> When you say "Ed25519", it *already* implies EdDSA. EdDSA is a family of
> signature schemes. That is, it is a way to construct a signature scheme
> given some parameters. Ed25519 is a particular instantiation of that.
> Saying "eddsa_ed25519" would have been redundant, and would not match
> existing naming conventions.
> https://datatracker.ietf.org/doc/html/rfc8032#section-5.1
>

Since I suspect this is where some of the confusion comes from, let me
point something out from RFC 8032: Ed25519 is *not* an elliptic curve in
the way that P-256 is. RFC 8032 uses edwards25519 to refer to the elliptic
curve in Ed25519. This is a name you've probably never seen used because it
is an implementation detail of Ed25519. When using the primitive, you don't
actually care how it is defined, just the name (Ed25519) and interface.

Now, sometimes people are a bit sloppy and say "Ed25519" t

Re: [TLS] TLS Suite Naming Conventions

2024-01-06 Thread David Benjamin
I think this thread stems from a misunderstanding of what TLS is doing, and
what "Ed25519" means.

> In the thread, Neil said that it is better to negotiate for key
(representations), instead of algorithms, and that TLS has been moving away
from fully specifying things.

This is the exact opposite of what TLS 1.3 has done. I see the JOSE email
claims we've moved away from fully specifying things by citing the cipher
suite change, but that is misunderstanding what changed.

The SignatureScheme enum negotiates a signature algorithm, which then
implies a key type. It used to be that this information was scattered. To
see if an ECDSA key was viable in TLS 1.2, you had to look in...
- Cipher suites, to see if there was a TLS_ECDHE_ECDSA_*
- Supported curves, to see if P-256 was in there
- EC point formats, to see if your point format was in there
- Signature algorithms, to see if (ECDSA, SHA-256) (or some other hash your
key supported) was in there.

This was a huge mess. Especially when you consider that this key is a
long-lived credential, and thus at the boundary between the TLS
implementation and deployment-specific requirements (sticking keys in some
hardware thing), this selection information often needs to be exported out
of the library. TLS 1.3 does away with all this and collapses all the
information into a *single* enum, ecdsa_secp256r1_sha256.
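(To make that concrete, here is a rough Go sketch of the difference. The
helper functions and the shape of the inputs are invented purely for
illustration; only the constants come from Go's crypto/tls. It is not how
any real stack structures negotiation.)

```go
// Rough illustration only: the "viable" helpers and input slices are made
// up for this sketch; the constants are Go's crypto/tls names for the
// TLS code points discussed above.
package main

import (
	"crypto/tls"
	"fmt"
	"slices"
)

// TLS 1.2 style: an ECDSA P-256 key is only usable if several independent
// lists all line up (EC point formats omitted here, which was yet another
// list to check).
func ecdsaP256ViableTLS12(cipherSuites []uint16, curves []tls.CurveID,
	sigAlgs []tls.SignatureScheme) bool {
	return slices.Contains(cipherSuites, tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256) &&
		slices.Contains(curves, tls.CurveP256) &&
		slices.Contains(sigAlgs, tls.ECDSAWithP256AndSHA256)
}

// TLS 1.3 style: one entry in signature_algorithms answers the question.
func ecdsaP256ViableTLS13(sigAlgs []tls.SignatureScheme) bool {
	return slices.Contains(sigAlgs, tls.ECDSAWithP256AndSHA256)
}

func main() {
	fmt.Println(ecdsaP256ViableTLS12(
		[]uint16{tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256},
		[]tls.CurveID{tls.CurveP256},
		[]tls.SignatureScheme{tls.ECDSAWithP256AndSHA256})) // true
	fmt.Println(ecdsaP256ViableTLS13(
		[]tls.SignatureScheme{tls.ECDSAWithP256AndSHA256})) // true
}
```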

Now, the email mentions cipher suites. We did indeed take the "ECDHE_ECDSA"
half out of the cipher suite and left only the AEAD and the PRF hash. That
was *not* about fully specifying things. Rather, it was about keeping
atomic things atomic. "ECDHE_ECDSA" was not useful. "ECDSA" could not be
evaluated without checking the above extensions. Likewise, "ECDHE" could
not be evaluated without checking many of those same extensions. Once we
shifted to SignatureScheme being the long-lived credential and NamedGroup
(formerly supported curves) being the ephemeral key exchange, that partial
information is redundant.

In TLS 1.2, "ECDHE_ECDSA" served a second purpose, which was to say we were
doing a Diffie-Hellman + signature handshake, not an RSA-key-exchange style
handshake. However, it did so in a very awkward way, and we removed RSA key
exchange in TLS 1.3 anyway. Once that was gone, this information was completely
redundant and only made negotiation more complicated. Thus, we removed it.

Now, we could have decided that we actually like having a single enum,
rather than a few orthogonal enums, and replaced SignatureScheme +
NamedGroup with a single "cipher suite" that specified (ECDSA-P256-SHA256,
ECDH-P256, AES-128-GCM, HKDF-SHA256). That would have been more in keeping
with the original (SSL3-era) naming of "cipher suite", and still satisfied
the "keep atomic things atomic" goal. And perhaps having a single enum to
specify the TLS settings would have been nice. (No one remembers that there
are multiple configuration points in TLS.) However, for better or
worse, TLS *already* eroded the "cipher suite" as a fully specified enum. I
think it started with RFC 4492, which opted for a couple side extensions
rather than putting the ECDSA and ECDH variant into the cipher suite. It
would have been a *bigger* departure from TLS 1.2 to do that, than what we
actually did. Thus, we stuck with the orthogonal enums model and finished
the evolution started by TLS 1.2.

> I see that "ed25519(0x0807)," could have been "eddsa_ed25519", and I
assume "0x0807" actually means "eddsa with ed25519", and "0x0808" actually
means "eddsa with ed448".

When you say "Ed25519", it *already* implies EdDSA. EdDSA is a family of
signature schemes. That is, it is a way to construct a signature scheme
given some parameters. Ed25519 is a particular instantiation of that.
Saying "eddsa_ed25519" would have been redundant, and would not match
existing naming conventions.
https://datatracker.ietf.org/doc/html/rfc8032#section-5.1
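(For concreteness, a few of the RFC 8446 code points, written out as a small
constant block purely for illustration; the names deliberately mirror the
RFC rather than Go naming conventions.)

```go
// A few SignatureScheme code points as named in RFC 8446, Section 4.2.3,
// written as Go constants purely for illustration. Each name is a complete
// signature scheme: "ed25519" already implies EdDSA, just as the ECDSA and
// RSA entries pin the curve, padding, and hash.
package sigscheme

const (
	rsa_pkcs1_sha256       uint16 = 0x0401
	ecdsa_secp256r1_sha256 uint16 = 0x0403
	rsa_pss_rsae_sha256    uint16 = 0x0804
	ed25519                uint16 = 0x0807
	ed448                  uint16 = 0x0808
)
```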

This separation between signature scheme and key type is mostly a
historical quirk of older cryptographic algorithms being mis-specified. A
key is a thing you get out of a signature scheme, not the other way around.
Had we defined RSA and X9.62-style EC in a way that better matched formal
expectations, there would have been no such thing as an "RSA" key. Rather,
RSASSA-PKCS1-v1_5 would have been a family of signature schemes, and
RSASSA-PKCS1-v1_5-SHA256 would have been a particular signature scheme.
From there, you could have had an RSASSA-PKCS1-v1_5-SHA256 key, which would
have been a distinct animal from an RSASSA-PKCS1-v1_5-SHA384 key, or an
RSAES-OAEP-SHA256 key. Likewise, ECDSA-P256-SHA256 would have been a
completely disjoint algorithm from ECDSA-P384-SHA256 or ECDSA-P256-SHA384
or ECDH-P256. (And we probably wouldn't have wasted our time defining all
hash/curve pairs.)

However, RSA and X9.62-style EC weren't defined that way. Instead we have
an "RSA" key and an "EC" key that can be used with all manner of
algorithms, nevermind that not all pairs of algorithms have been 

Re: [TLS] Key Update for TLS/DTLS 1.3

2024-01-04 Thread David Benjamin
Skimming the draft, I am not following the timing of this process. Suppose
the client initiates an extended key update. It cannot update the keys yet,
because it does not know the server's response. It needs to keep reading
from the server. In doing so, it will hopefully see a responding
ExtendedKeyUpdate, but it may see something else that forces it to send
data, such as an application protocol message or an update_requested
KeyUpdate. (Or perhaps an update_requested ExtendedKeyUpdate!)

Are you envisioning that the client is unable to send anything until it
receives the server's response, or that this exchange flows in parallel
with the rest of the connection?

If the client is unable to send anything, this seems like it would cause
problems. Certainly it would not be something the TLS library can do
automatically, because it can only run at a quiet point in the application
protocol. A priori, you may receive an unbounded amount of application data
while waiting for ExtendedKeyUpdate. You need to do *something* with that
data, but all options result in either an unbounded buffer or a deadlock
somewhere.

If the exchange flows in parallel, how does the server know where in the
client stream the client switched keys? I think you'd need a third message
to mark this point. Though we then need to reason through what happens if
that third message doesn't arrive for a long while, because the server
can't release state from that key update until then.

To that end, what happens if someone sends a storm of ExtendedKeyUpdate
messages with update_requested in a row? Over TCP, we have to worry about a
DoS issue caused by asymmetric rates on the two sides. (If I send you a
storm of update_requested but refuse to read from the socket, at some point
backpressure will stop you from writing responses. At that point, you need
to know to stop reading or you'll buffer up unbounded data.) For plain
KeyUpdate, we said the requests can be coalesced, but ExtendedKeyUpdate
messages contain different key shares. I suspect you need to say that you
cannot send a new update_requested until after you've sent the third
message for the previous one.
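(A minimal sketch of what I mean by that rule, assuming the three-message
design suggested above; ExtendedKeyUpdate is still a draft, so the types,
message names, and methods here are hypothetical.)

```go
// Hypothetical sketch of the rule: do not start a new update_requested
// exchange until the third (key-switch) message of the previous one has
// been sent. ExtendedKeyUpdate and the three-message flow come from the
// draft and the suggestion above, not from any shipping API.
package keyupdate

import "errors"

type extendedKeyUpdateSender struct {
	// exchangeOpen is true from sending update_requested until the final
	// key-switch message for that exchange has been sent.
	exchangeOpen bool
}

func (s *extendedKeyUpdateSender) StartUpdateRequested() error {
	if s.exchangeOpen {
		// Refusing here bounds the state (and pending key shares) a peer
		// can force us to hold, addressing the storm concern above.
		return errors.New("previous extended key update still in progress")
	}
	s.exchangeOpen = true
	// ... send ExtendedKeyUpdate(update_requested) with a fresh key share ...
	return nil
}

func (s *extendedKeyUpdateSender) sendKeySwitch() {
	// ... send the third message and switch the outgoing traffic keys ...
	s.exchangeOpen = false
}
```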

Relatedly, this seems tricky:

> If implementations independently send their own ExtendedKeyUpdate
messages, and they cross in flight, the result is that each side increments
keys by two generations.

Since ExtendedKeyUpdate incorporates new key material into the new secret,
you will get a different result depending on which exchange is processed
first. But the two sides may see each exchange resolving in a different
order when crossed like this. (It *might* work with a three-message design?
Then there's an in-band signal for when the keys are applied on each side.
Though it means this cross case can actually resolve in different orders
for the two streams, which is kind of interesting.)

On Thu, Jan 4, 2024 at 6:42 AM Tschofenig, Hannes  wrote:

> Hi all,
>
>
>
> we have just submitted a draft that extends the key update functionality
> of TLS/DTLS 1.3.
>
> We call it the “extended key update” because it performs an ephemeral
> Diffie-Hellman as part of the key update.
>
>
>
> The need for this functionality surfaced in discussions in a design team
> of the TSVWG. The need for it has, however, already been discussed years
> ago on the TLS mailing list in the context of long-lived TLS connections in
> industrial IoT environments.
>
> Unlike the TLS 1.3 Key Update message, which is a one-shot message, the
> extended Key Update message requires a full roundtrip.
>
>
>
> Here is the link to the draft:
>
> https://datatracker.ietf.org/doc/draft-tschofenig-tls-extended-key-update/
>
>
>
> I am curious what you think.
>
>
>
> Ciao
> Hannes
>
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [EXT] Re: Adoption call for 'TLS 1.2 Feature Freeze'

2024-01-02 Thread David Benjamin
I agree that IANA registrations aren't a great place to constrain this.

First, constraining the registry for TLS 1.2 and not DTLS 1.2 makes a lot
of things very weird. For a feature that's TLS/DTLS-agnostic, like
post-quantum, it helps no one to define it for DTLS 1.2 and not TLS 1.2.
Most of the code and specification work is shared. Realistically, whether
or not we formally freeze DTLS 1.2, we shouldn't do post-quantum for DTLS 1.2.
Part of the PQ transition for DTLS will be to get folks to DTLS 1.3.
(Indeed Chrome uses DTLS for WebRTC and has not yet implemented DTLS 1.3,
yet I have no interest in PQ for DTLS 1.2. For WebRTC PQ, we'll just do
DTLS 1.3 first.)

I expect the DTLS waffling is transitory and attitudes to DTLS 1.2 vs 1.3
will catch up to TLS 1.2 vs 1.3 soon enough. But, while we're in that
state, something as rigid as an IANA restriction is awkward.

Second, while we don't intend to define new features for TLS 1.2, the draft
says we may still apply "urgent security fixes". Restricting the IANA
registration also restricts our ability to do that. Realistically, anything
that involves a new extension will run into the usual considerations around
existing TLS 1.2 servers not implementing it. But I could imagine, if we
find another 3SHAKE, maybe deciding it's worth doing another EMS? (Maybe??
Honestly I'd probably just say, since you need a protocol change anyway,
the fix is TLS 1.3.)

On Tue, Jan 2, 2024 at 12:16 PM Eric Rescorla  wrote:

> On Tue, Jan 2, 2024 at 6:20 AM Salz, Rich  wrote:
>
>> I'm not Martin, but I believe that his point is that both TLS
>> ciphersuites and TLS supported groups/EC curves permit registration outside
>> of the IETF process based on the existence of a specification. As long as
>> PQC can fit into new ciphersuites and group types, then anyone can specify
>> it for TLS 1.2, and it would in fact be TLS, just not standardized or
>> Recommended.
>>
>>
>>
>> That is not what the just-adopted draft says.  It says that, except for
>> ALPN and exporters, no new registrations will be accepted for TLS 1.2
>> and that new entries should have a Note comment that says “for TLS 1.3 or
>> later”. This is a change in the current policy.  It has always said this;
>> see page 3 of [1].
>>
>
> I agree that's clear. Not sure how I misunderstood that, but in that case,
> I think that this may be going too far, for the usual reasons why it's not
> helpful to restrict IANA registrations of new stuff.
>
> Don't we expect this just to result in squatting?
>
> -Ekr
>
>
>>
>>
>> At the last meeting we decided NOT to freeze DTLS 1.2 since DTLS 1.3 has
>> so little deployment[4]. This has complicated the wording of the above
>> statement, which was raised at [2] and [3]
>>
>>
>>
>> [1]
>> https://datatracker.ietf.org/meeting/117/materials/slides-117-tls-new-draft-tls-12-is-frozen-00
>>
>> [2] https://github.com/richsalz/tls12-frozen/issues/10
>>
>> [3] https://github.com/richsalz/tls12-frozen/pull/13
>>
>> [4] https://datatracker.ietf.org/doc/minutes-118-tls-202311060830/
>>
>>
>>
>>
>>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Adoption call for 'TLS 1.2 Feature Freeze'

2023-12-11 Thread David Benjamin
I don't think that quite captures the tradeoffs. Sure, TLS 1.2 will be
around for quite some time, but that *does not mean it is worth adding new
features to TLS 1.2*. Those two statements are not directly related.

Protocol changes generally require both client and server changes to take
effect. Pre-existing deployments, by simply pre-existing, will not have
those changes. If we add, say, post-quantum options for TLS 1.2, it will
benefit zero existing TLS 1.2 deployments. If we add post-quantum options
for TLS 1.3, it will benefit zero existing TLS 1.3 deployments. That's not
why we make these changes. We make them to benefit *future* TLS
deployments, e.g. when server operators update their software. Although it
can be a slow process, pre-existing deployments gradually cycle into
updated deployments.

So when we decide whether to make a change to TLS 1.2 or TLS 1.3, the
pre-existing deployments of both protocols are irrelevant. What matters is
what will be used in new TLS software. At this point, now that TLS 1.3 is
well-established, we should broadly expect new TLS software to be
TLS-1.3-capable. Thus is it not worth our time to backport such changes to
TLS 1.2. When I say "our", I don't mean just this working group, but also
TLS implementers, application software that configures TLS implementations,
and server operators who must somehow navigate the sea of options that
comes from everyone else's indecision. Together, those costs are
significant.

More than that, we (the WG) should communicate this expectation. We did it
once by publishing RFC 8446 and obsoleting RFC 5246. But communication is
hard, and now that the expectation is stronger, we should repeat it more
strongly; hence this document. There will always be stragglers and
misunderstandings. Perhaps some more obscure TLS implementation has yet to
implement TLS 1.3. (Or DTLS 1.3.) Perhaps some application has a hardcoded
TLS 1.2 setting that needs to be updated. Perhaps some config files are
stale.

Publishing this document helps clear up what was already the WG's
expectation. If it reminds a chunk of those folks to move to TLS 1.3, it
will have been worthwhile. That is also why mentioning PQC is valuable, as
it is the extension that is most likely to be on server operators' minds.

Finally, communicating this is useful to us. Perhaps some new deployments
are TLS 1.2 not out of inertia, but because something in TLS 1.3
inadvertently made migration difficult for those folks. In that case, us
publishing this document helps invite such feedback. For example,
draft-ietf-tls-tls13-pkcs1 addresses a migration challenge. (That specific
example long predates this and, judging by the list discussion in 2019, it
was perhaps a little ahead of its time, but we all got there eventually.
:-D)

This has nothing to do with raising the floor. This is about not bothering
to start a new, shorter tower on the side while we raise the main ceiling.

On Mon, Dec 11, 2023, 16:58 Viktor Dukhovni  wrote:

> On Mon, Dec 11, 2023 at 12:32:36PM -0800, Rob Sayre wrote:
>
> > PS - I have to say, not in this message, but sometimes it seems like the
> > goal of TLS 1.2 advocates is weaker encryption. So, for them, the flaws
> in
> > TLS 1.2 that the draft describes are desirable. If that's the case,
> > participants are not working toward the same goal. Writing down the
> > consensus seems worth it.
>
> For what it is worth, my agenda/perspective has never been to weaken
> encryption.  Rather, it has always been about making usable encryption
> ubiquitous.  While we continue work on raising the ceiling, one can be
> legitimately wary of raising the floor so high that encryption is
> unusable, and communication happens in the clear instead.
>
> Given that TLS 1.2 will be around for quite some time, it is not obvious
> that a feature freeze will in practice improve security.  It is good
> that there's ongoing effort to make TLS 1.3 better, and I accept that it
> may well not be possible to deliver on required TLS 1.3 work and to also
> make some occasional modest improvements to TLS 1.2, but if the goal is
> to deliver secure products to users, a realist might accept that TLS 1.2
> is likely to continue to be used for some time, and that those users
> could be better served if some improvements continued to take place.
>
> The contrarian position of course assumes that such improvements
> wouldn't be a significant drain on scarce resources.  That assumption is
> a matter for debate, and the "right" trade-offs are not completely
> obvious.  Some difference of perspectives can be expected.
>
> Whatever else we do, we should not default to questioning the motives of
> others who would make somewhat different tradeoffs.  Worry more when
> everyone is in violent agreement, perhaps something is then being
> missed.
>
> --
> Viktor.
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>

Re: [TLS] Adoption call for 'TLS 1.2 Feature Freeze'

2023-12-06 Thread David Benjamin
I support adoption and am willing to review.

On Wed, Dec 6, 2023 at 12:34 AM Deirdre Connolly 
wrote:

> At the TLS meeting at IETF 118 there was significant support for the draft
> 'TLS 1.2 is in Feature Freeze' (
> https://datatracker.ietf.org/doc/draft-rsalz-tls-tls12-frozen/)  This
> call is to confirm this on the list.  Please indicate if you support the
> adoption of this draft and are willing to review and contribute text. If
> you do not support adoption of this draft please indicate why.  This call
> will close on December 20, 2023.
>
> Thanks,
> Deirdre, Joe, and Sean
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Adoption call for Legacy RSASSA-PKCS1-v1_5 codepoints for TLS 1.3

2023-11-30 Thread David Benjamin
Whoops, I thought something seemed off! Here it is under the new name:
https://datatracker.ietf.org/doc/draft-ietf-tls-tls13-pkcs1/

On Thu, Nov 30, 2023 at 11:54 AM Joseph Salowey  wrote:

> I misdirected you with the name; it should be draft-ietf-tls-tls13-pkcs1.
> Can you please submit under this name?  It would be better to have it in the
> tlswg repo, we'll follow up offline.
>
> Thanks,
>
> Joe
>
> On Wed, Nov 29, 2023 at 9:41 AM David Benjamin 
> wrote:
>
>> Done, although I'm not sure if I got all the metadata right. (How does
>> one mark it as replacing the old one?)
>> https://datatracker.ietf.org/doc/draft-tls-tls13-pkcs1/
>>
>> The GitHub is still under my account, but happy to move it to the TLSWG
>> if preferred. (How would we go about doing that?)
>>
>> On Wed, Nov 29, 2023 at 11:07 AM Joseph Salowey  wrote:
>>
>>> The adoption call for this draft has completed.  There is sufficient
>>> interest in the draft and no objections. Authors, please submit this draft
>>> with the file name draft-tls-tls13-pkcs1-00.txt.
>>>
>>> Cheers,
>>> Joe
>>>
>>> On Mon, Nov 6, 2023 at 9:25 AM Joseph Salowey  wrote:
>>>
>>>> At the TLS meeting at IETF 118 there was significant support for the
>>>> draft  Legacy RSASSA-PKCS1-v1_5 codepoints for TLS 1.3
>>>> <https://datatracker.ietf.org/doc/draft-davidben-tls13-pkcs1/01/> (
>>>> https://datatracker.ietf.org/doc/draft-davidben-tls13-pkcs1/01/)  This
>>>> call is to confirm this on the list.  Please indicate if you support the
>>>> adoption of this draft and are willing to review and contribute text.  If
>>>> you do not support adoption of this draft please indicate why.  This call
>>>> will close on November 27, 2023.
>>>>
>>>> Thanks,
>>>>
>>>> Sean, Chris and Joe
>>>>
>>> ___
>>> TLS mailing list
>>> TLS@ietf.org
>>> https://www.ietf.org/mailman/listinfo/tls
>>>
>>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Adoption call for Legacy RSASSA-PKCS1-v1_5 codepoints for TLS 1.3

2023-11-29 Thread David Benjamin
Done, although I'm not sure if I got all the metadata right. (How does one
mark it as replacing the old one?)
https://datatracker.ietf.org/doc/draft-tls-tls13-pkcs1/

The GitHub is still under my account, but happy to move it to the TLSWG if
preferred. (How would we go about doing that?)

On Wed, Nov 29, 2023 at 11:07 AM Joseph Salowey  wrote:

> The adoption call for this draft has completed.  There is sufficient
> interest in the draft and no objections. Authors, please submit this draft
> with the file name draft-tls-tls13-pkcs1-00.txt.
>
> Cheers,
> Joe
>
> On Mon, Nov 6, 2023 at 9:25 AM Joseph Salowey  wrote:
>
>> At the TLS meeting at IETF 118 there was significant support for the
>> draft  Legacy RSASSA-PKCS1-v1_5 codepoints for TLS 1.3
>>  (
>> https://datatracker.ietf.org/doc/draft-davidben-tls13-pkcs1/01/)  This
>> call is to confirm this on the list.  Please indicate if you support the
>> adoption of this draft and are willing to review and contribute text.  If
>> you do not support adoption of this draft please indicate why.  This call
>> will close on November 27, 2023.
>>
>> Thanks,
>>
>> Sean, Chris and Joe
>>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Design Rational for Key Exporter

2023-11-29 Thread David Benjamin
An unhelpful answer is that the key exporter interface was already set by
prior versions of TLS and any TLS 1.3 key exporter needs to remain
analogous. :-)

A more helpful answer is that we cannot simultaneously believe that key
update is a transparent feature of TLS, and that exporters are sensitive to
key update. Every use of key exporters necessarily involves both client and
server computing the value and doing something with it (otherwise you could
have just generated random bytes and moved on). That means both sides need
to sample it at an analogous point in the connection, so the application
protocol needs to be very aware of when all the key updates happen, and
take care that corresponding uses of the exporter are sampled at
corresponding epochs.

That means, if we want to build an exporter update mechanism, it needs to
be some operation exported to the application protocol and driven by the
application. Key updates are not that.

But I also don't think we need to or should do anything to support that use
case. It is sufficient to have a single API, discard exporter secret, for
the application to tell TLS it is done calling the exporter (and thus
discard the root exporter secret). That API requires no new protocol
machinery. An application that wants a ratcheting behavior in the exporter
would then simply do:

1. After the connection is established, export the initial secret for their
application-specific use of exporters and save it somewhere.
2. Discard the exporter secret
3. At whatever points in the application protocol make sense, have both
sides run their application-specific ratcheting operation on the
application-specific running secret.
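(A minimal sketch of those three steps, using Go's crypto/tls exporter; the
label, the HMAC-based ratchet construction, and when to ratchet are
application-specific choices made up for this example.)

```go
// Minimal sketch, assuming the application picks its own label and ratchet
// construction (both invented here). The only TLS-provided piece is the
// single ExportKeyingMaterial call from Go's crypto/tls.
package exporterratchet

import (
	"crypto/hmac"
	"crypto/sha256"
	"crypto/tls"
)

type Ratchet struct {
	secret []byte // application-specific running secret
}

// New is step 1: export the initial secret once, right after the handshake.
// After this, the application never touches the TLS exporter again
// (step 2: conceptually, discard it).
func New(cs tls.ConnectionState) (*Ratchet, error) {
	secret, err := cs.ExportKeyingMaterial("EXPERIMENTAL app ratchet", nil, 32)
	if err != nil {
		return nil, err
	}
	return &Ratchet{secret: secret}, nil
}

// Advance is step 3: both sides call this at whatever points in the
// application protocol make sense, keeping their running secrets in sync.
func (r *Ratchet) Advance() {
	mac := hmac.New(sha256.New, r.secret)
	mac.Write([]byte("ratchet"))
	r.secret = mac.Sum(nil)
}

// Current returns the current generation's secret for application use.
func (r *Ratchet) Current() []byte {
	return append([]byte(nil), r.secret...)
}
```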

Not only that, we even intentionally designed the exporters to have a
two-step derivation: first by label, then by context. This means "discard
exporter secret" can instead be done by extracting label-specific
exporters, in case one label wants to continue deriving values and another
one is done and wants to ratchet. If there were some exporter-wide update
operation, this would require coordination across all uses of exporters
across the entire connection. So leaving the protocol as-is is the best way
to meet this use case.

On Wed, Nov 29, 2023 at 3:31 AM Tschofenig, Hannes  wrote:

> Hi all,
>
>
>
> I was wondering why the design of the key exporter is such that it is
> based on the early_exporter_master_secret or the exporter_master_secret and
> no new key export is triggered at a later point in time, for example when a
> key update is performed. RFC 5705, which is used as a basis for the key
> exporter design in TLS 1.3, just states that there are protocols that want
> to obtain keying material from the TLS exchange. Neither RFC 5705 nor the
> TLS 1.3 spec indicates the design rationale for why no later events (e.g.
> post-handshake authentication or key updates) trigger a new key export. Was
> this done on purpose or was there just no use case for it at that time?
>
>
>
> Ciao
>
> Hannes
>
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Request mTLS Flag

2023-11-07 Thread David Benjamin
I realized I used the word "context" in two different, uh, contexts, so
that was probably very confusing.

What I meant to say is that TLS client certificate decisions need to be
remembered session-wide, for some appropriate notion of session. So
browsing profile or something of that nature. But within that scope, some
contexts are capable of prompting (user-visible tab) and others are not (a
service worker script, or a tab in the process being closed). It would be a
problem for the latter to poison the former, but they also rather
thoroughly share an HTTP session. It's a mess. Client certificates are the
bane of my existence. :-)

On Tue, Nov 7, 2023 at 10:46 PM David Benjamin 
wrote:

> On Tue, Nov 7, 2023 at 3:43 PM Ilari Liusvaara 
> wrote:
>
>> On Mon, Oct 23, 2023 at 01:37:55PM -0400, Viktor Dukhovni wrote:
>> >
>> > - Some Java TLS libraries (used to?) fail the handshake when the
>> >   client has no configured certs, or the list of issuer CA DN hints
>> >   does not include any of its available (typically just zero or one)
>> >   certificates.
>> >
>> >   They could just proceed without a certificate, or return a default
>> >   one, but they don't.
>>
>> A colleague discovered a case where sending CertificateRequest to Chrome
>> causes it to fail, instead of just proceeding without a certificate
>> (which would have worked).
>>
>
> I don't know the details of that case, but this would not surprise me. If
> we are in a context where we are unable to prompt the user (e.g. in a
> background context where we cannot show UI), that is the only viable
> option. TLS client certificate decisions in a browser, and generally in an
> HTTPS stack, have to be remembered context-wide. Otherwise we cannot do
> socket reuse or session resumption, and have to prompt the user on every
> HTTP request.
>
> But even continuing without a certificate is a decision. That means, if we
> automatically proceed without a certificate in unpromptable contexts, we
> poison the context and prevent the user from making an actual decision
> later. Moreover, this can even apply when there are zero matching
> certificates. On some platforms (e.g. Android), where the platform's
> security model (quite reasonably) prevents applications from directly
> enumerating the client certificate store, querying and prompting is a
> single combined operation. But this means we cannot even check for zero or
> non-zero certificates without triggering a prompt in the non-zero case,
> which again means we cannot trigger this code from background contexts.
>
> Moreover, on such platforms, the application is entirely at the mercy of
> the platform as to what kind of filtering or prompt suppressions are
> applied. On older Androids, this single combined operation does not
> automatically suppress empty prompts, and does not filter based on the CA
> list. Newer ones are capable, but it further makes optional
> CertificateRequests incoherent.
>
> Ultimately, TLS client certificates, and how they're used in practice,
> are just not suitable for user-facing HTTPS applications. But there is such
> a long history of existing usage, that the ship has not only sailed but
> also circumnavigated the globe a couple times. It's too late to simply say
> "oh this was a mistake, let's not do that anymore". :-(
>
> David
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Request mTLS Flag

2023-11-07 Thread David Benjamin
On Tue, Nov 7, 2023 at 3:43 PM Ilari Liusvaara 
wrote:

> On Mon, Oct 23, 2023 at 01:37:55PM -0400, Viktor Dukhovni wrote:
> >
> > - Some Java TLS libraries (used to?) fail the handshake when the
> >   client has no configured certs, or the list of issuer CA DN hints
> >   does include any of its available (typically just zero or one)
> >   certificates.
> >
> >   They could just proceed without a certificate, or return a default
> >   one, but they don't.
>
> A colleague discovered a case where sending CertificateRequest to Chrome
> causes it to fail, instead of just proceeding without a certificate
> (which would have worked).
>

I don't know the details of that case, but this would not surprise me. If
we are in a context where we are unable to prompt the user (e.g. in a
background context where we cannot show UI), that is the only viable
option. TLS client certificate decisions in a browser, and generally in an
HTTPS stack, have to be remembered context-wide. Otherwise we cannot do
socket reuse or session resumption, and have to prompt the user on every
HTTP request.

But even continuing without a certificate is a decision. That means, if we
automatically proceed without a certificate in unpromptable contexts, we
poison the context and prevent the user from making an actual decision
later. Moreover, this can even apply when there are zero matching
certificates. On some platforms (e.g. Android), where the platform's
security model (quite reasonably) prevents applications from directly
enumerating the client certificate store, querying and prompting is a
single combined operation. But this means we cannot even check for zero or
non-zero certificates without triggering a prompt in the non-zero case,
which again means we cannot trigger this code from background contexts.

Moreover, on such platforms, the application is entirely at the mercy of
the platform as to what kind of filtering or prompt suppressions are
applied. On older Androids, this single combined operation does not
automatically suppress empty prompts, and does not filter based on the CA
list. Newer ones are capable, but it further makes optional
CertificateRequests incoherent.

Ultimately, TLS client certificates, and how they're used in practice,
are just not suitable for user-facing HTTPS applications. But there is such
a long history of existing usage, that the ship has not only sailed but
also circumnavigated the globe a couple times. It's too late to simply say
"oh this was a mistake, let's not do that anymore". :-(

David
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Adoption call for Legacy RSASSA-PKCS1-v1_5 codepoints for TLS 1.3

2023-11-06 Thread David Benjamin
I support adoption and am willing to contribute text, but this is perhaps
not surprising. :-)

On Mon, Nov 6, 2023 at 12:25 PM Joseph Salowey  wrote:

> At the TLS meeting at IETF 118 there was significant support for the
> draft  Legacy RSASSA-PKCS1-v1_5 codepoints for TLS 1.3
>  (
> https://datatracker.ietf.org/doc/draft-davidben-tls13-pkcs1/01/)  This
> call is to confirm this on the list.  Please indicate if you support the
> adoption of this draft and are willing to review and contribute text.  If
> you do not support adoption of this draft please indicate why.  This call
> will close on November 27, 2023.
>
> Thanks,
>
> Sean, Chris and Joe
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Fwd: New Version Notification for draft-davidben-tls-key-share-prediction-00.txt

2023-11-06 Thread David Benjamin
Yup, that's right!

(Ah yeah, it was confusing to talk about key shares reflecting preferences
because we might be talking about either the relative order or which groups were included
or omitted. I was thinking the latter since the relative order already
comes from supported_groups. I.e. I was thinking of the key_share order as
just a syntactic restriction.)
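(To illustrate the two server behaviours Ekr lays out below, here is a toy
model; the types are invented and this is not how any real stack is
structured.)

```go
// Toy model with invented types; it only illustrates the two selection
// behaviours, not real ClientHello processing.
package groupselect

type Group string

type clientHello struct {
	supportedGroups []Group // client preference order
	keyShares       []Group // groups the client predicted shares for
}

func contains(gs []Group, g Group) bool {
	for _, x := range gs {
		if x == g {
			return true
		}
	}
	return false
}

// keyShareFirst: pick the most server-preferred group that already has a
// key share. Never needs HRR, but can land on a less preferred group.
// (Toy: assumes some group matches.)
func keyShareFirst(serverPrefs []Group, ch clientHello) (group Group, needHRR bool) {
	for _, g := range serverPrefs {
		if contains(ch.keyShares, g) {
			return g, false
		}
	}
	return "", false
}

// supportedGroupsFirst: pick the most server-preferred group in
// supported_groups, issuing an HRR when the client did not predict it.
func supportedGroupsFirst(serverPrefs []Group, ch clientHello) (group Group, needHRR bool) {
	for _, g := range serverPrefs {
		if contains(ch.supportedGroups, g) {
			return g, !contains(ch.keyShares, g)
		}
	}
	return "", false
}
```

With supported_groups preferring a hypothetical PQ group but key_shares
containing only X25519, keyShareFirst lands on X25519 even though both sides
prefer the PQ group, while supportedGroupsFirst picks the PQ group at the
cost of an HRR.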

On Mon, Nov 6, 2023, 05:55 Eric Rescorla  wrote:

> Hi David,
>
> Thanks for posting this and for the discussion on the list.
>
> Before commenting on this proposal, I'd like to make sure we're all
> on the same page about the situation.
>
>
> # Background
>
> 1. RFC 8446 states that both supported_groups and key_shares
>are in client's preference order but doesn't say what it
>means for a value to be omitted from key_shares.
>
>When sent by the client, the "supported_groups" extension indicates
>the named groups which the client supports for key exchange, ordered
>from most preferred to least preferred.
>
>...
>
>client_shares:  A list of offered KeyShareEntry values in descending
>   order of client preference.
>
> 2. Some clients only send a subset of their supported groups in key
>shares, either to save space/computation or because they are
>concerned about compatibility.
>
> 3. Some servers negotiate by picking the most preferred key_share
>that is present rather than the most preferred common group in
>supported groups.
>
>
> When we combine (2) and (3), the result is that the server may
> choose a group which is less preferred by both the client and
> server because it is included in the key_shares whereas the
> more preferred group is not.
>
>
> # Downgrade
>
> The draft here talks about there being a downgrade issue. ISTM
> that there are two things going on here.
>
> 1. If the client chooses its key_shares based solely on authenticated
> signals, or, as is presently common, consistently for every server,
> then the server may choose a suboptimal combination (from the perspective
> of group selection, though not from the perspective of latency), but
> the attacker cannot influence this selection.
>
> 2. If the client chooses its key_shares based on unauthenticated
> signals, such as DNS or falling back on apparent network error
> (e.g., due to apparent intolerance of large CH), then the attacker
> can exploit the behavior described in (3) to downgrade the selected
> groups.
>
>
> Is this a reasonably accurate summary of the situation?
>
> -Ekr
>
>
>
>
> On Tue, Sep 26, 2023 at 9:46 AM David Benjamin 
> wrote:
>
>> Hi all,
>>
>> A while back, we discussed using a DNS hint to predict key shares and
>> reduce HelloRetryRequest, but this was dropped due to downgrade issues. In
>> thinking through post-quantum KEMs and the various transitions we'll have
>> in the future, I realized we actually need to address those downgrade
>> issues now. I've published a draft below which is my attempt at resolving
>> this.
>>
>> We don't need a DNS hint for the *current* PQ transition—most TLS
>> ecosystems should stick to the one preferred option, and then clients can
>> predict that one and move on. However, I think we need to lay the
>> groundwork for it now. If today's round of PQ supported groups cannot be
>> marked "prediction-safe" (see document for what I mean by that),
>> transitioning to the *next* PQ KEM (e.g. if someone someday comes up
>> with a smaller one that we're still confident in!) will be extremely
>> difficult without introducing downgrades.
>>
>> Thoughts?
>>
>> David
>>
>> -- Forwarded message -
>> From: 
>> Date: Mon, Sep 25, 2023 at 6:56 PM
>> Subject: New Version Notification for
>> draft-davidben-tls-key-share-prediction-00.txt
>> To: David Benjamin 
>>
>>
>> A new version of Internet-Draft
>> draft-davidben-tls-key-share-prediction-00.txt
>> has been successfully submitted by David Benjamin and posted to the
>> IETF repository.
>>
>> Name: draft-davidben-tls-key-share-prediction
>> Revision: 00
>> Title:TLS Key Share Prediction
>> Date: 2023-09-25
>> Group:Individual Submission
>> Pages:11
>> URL:
>> https://www.ietf.org/archive/id/draft-davidben-tls-key-share-prediction-00.txt
>> Status:
>> https://datatracker.ietf.org/doc/draft-davidben-tls-key-share-prediction/
>> HTML:
>> https://www.ietf.org/archive/id/draft-davidben-tls-key-share-prediction-00.html
>> HTMLized:
>> https://datatracker.ietf.org/doc/html/draft-davidben-tls-key-shar

Re: [TLS] Legacy RSASSA-PKCS1-v1_5 codepoints for TLS 1.3

2023-10-27 Thread David Benjamin
On Fri, Oct 27, 2023 at 2:07 PM Benjamin Kaduk  wrote:

> On Tue, Oct 24, 2023 at 10:12:56PM -0400, David Benjamin wrote:
> >Additionally I want to emphasize that, because of the negotiation
> order
> >between versions and client certificates, there is no way to do an
> >incremental transition here. Saying deployments stick with 1.2 not
> only
> >impacts the relevant hardware but also *any other connections that the
> >server makes*. Essentially the server cannot enable TLS 1.3 until
> *every*
> >client has stopped using one of these PSS-incapable signers. This is
> not a
> >good transition plan.
>
> I think we should probably think out the transition plan here a bit more.
> Sure, if we can have updated clients offer new SignatureSchemes and the
> server
> notice that to let them use TLS 1.3.  But how does the server get to a
> place
> where it can use TLS 1.3 with every client that offers it?  It seems like
> it
> has to know that all clients with old hardware tokens have updated, which
> would
> require knowing about and tracking exactly which clients those are, since
> other
> clients would not be sending the new SignatureSchemes in the first place.
> I
> see this getting a small win for the legacy clients but no improvement for
> other clients or the server's default behavior.  Am I missing something?
>

Good question. You're right that, because we didn't do this from day
one[*], the transition plan is not ideal.

Updating software is a lot easier than replacing hardware, so I think
waiting for clients with old hardware tokens to update (at least those that
have enabled TLS 1.3) can be viable. Most client certificate deployments
that stick keys in interesting hardware tokens are relatively closed
ecosystems on the client half, such as a managed enterprise deployment. You
need to have a provisioning process that knows to use the TPMs. In those
cases, it is viable for the enterprise to roll out client support for these
legacy codepoints, wait a bit, and then start enabling TLS 1.3 on the
servers.

Andrei is probably better placed to speak to how commonly that plan would and
wouldn't be viable. If there are enough deployments where this doesn't
work, I suppose we could define a ClientHello extension that means "I will
either speak the legacy RSASSA-PKCS1-v1_5 codepoints, or it is not relevant
to me". Those semantics are pretty messy though, and it makes
the server-random downgrade hack much more complex. We can always do it
later if enough folks need it, so I'm inclined to defer it for now.

David

[*] As I recall, TLS 1.3 was broadly intended to be deployable with the
same keys as TLS 1.2, otherwise we probably needn't have bothered with RSA
at all. We switched from PKCS#1 v1.5 to PSS mostly because it was perceived
to cost us nothing. This was broadly true for server certificates. Client
certificates not so much. In hindsight, I think banning PKCS#1 v1.5 for
client signatures was a tad too ambitious for TLS 1.3.
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [EXTERNAL] Re: Fwd: New Version Notification for draft-davidben-tls-key-share-prediction-00.txt

2023-10-27 Thread David Benjamin
The "downgrade" here is to not pick a postquantum option when we should have. This
means this suboptimal traffic is not protected against store-and-decrypt
attacks.

This would impact more traffic than you may think. All of modern protocol
design (see HTTP/2 and HTTP/3) has been centered on reuse of connections.
This amortizes connection setup costs and gives time for congestion control
algorithms to stabilize. With everyone, correctly, putting all this effort
into reusing connections, your "first connection" actually comprises quite
a lot of traffic. Additionally, any 0-RTT traffic, as well as psk_ke
resumptions (for folks that implement that), in subsequent connections
would depend on that first connection's secret.

Additionally, the guidance in 4.2.7 requires feeding information from one
connection to another. This, like any other state, is a tracking vector,
so, to protect user privacy, any effects here will be partitioned in both
scope (e.g. the top-level set of a browser) and time (e.g. across users
clearing state). That means that there will be far more "first connections"
than one may think.


> As a way forward, would it be worth working on this in rfc8446bis to
> clarify the desired behaviour? An example change would be to Section 2.1
> which implies preference for key_share first selection.
>
I have no particular feelings about which document takes what text. It is
presented as one document right now because that was the clearest way to
present all the changes together.

If rfc8446bis is still open for substantive changes (though my impression
was it isn't?), I don't mind putting things in there. Though we'd still
need to expend a lot of text to define prediction-safe and
prediction-unsafe groups, precisely because we do *not* want to define
duplicate groups.


> Thanks,
> Michael
>
>
>
>
>
> *From:* Rob Sayre 
> *Sent:* Tuesday, October 17, 2023 9:08 PM
> *To:* David Benjamin 
> *Cc:* Andrei Popov ; tls@ietf.org
> *Subject:* Re: [TLS] [EXTERNAL] Re: Fwd: New Version Notification for
> draft-davidben-tls-key-share-prediction-00.txt
>
>
>
> On Tue, Oct 17, 2023 at 12:32 PM David Benjamin 
> wrote:
>
>
>
> > Server-side protection against [clients adjusting HRR predictions on
> fallback] is not effective. Especially when we have both servers that
> cannot handle large ClientHello messages and servers that have buggy HRR.
>
>
>
> I think the discussion about buggy HRR is a red herring.
>
>
>
> I agree with almost everything in the email except for this part. It's
> even worse than HRR, isn't it? The initial ClientHello will fail if spread
> across too many packets on some implementations, and then a new ClientHello
> will be sent using X25519 unless you want to lose customers. The client
> won't get an HRR back on the first try, the stuff just breaks (it's their
> bug, but it must be dealt with). But, if the DNS says it should work, it
> should be ok to fail there. The trustworthiness of this hint must also be
> weighed with ECH. So, if you're using SVCB with this idea and ECH, it seems
> pretty reasonable to me.
>
>
>
> thanks,
>
> Rob
>
>
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Legacy RSASSA-PKCS1-v1_5 codepoints for TLS 1.3

2023-10-24 Thread David Benjamin
Some changes from the last time this was posted:
- Apparently we got early codepoint allocation for this and I forgot about
it? Anyway the allocated codepoints are now in the draft.
- We've crisped up the motivation a bit.

One thing I'll call out is that the previous discussion mentioned one could
stick with TLS 1.2. I think this was already not reasonable back then,
because TLS 1.3 fixes a serious privacy and integrity issue with client
certificates. (Renegotiation is not a good mitigation. There are loads of
problems with renegotiation, which I think this WG is very familiar with,
including incompatibility with modern multiplexed protocols.) But, more
importantly, new security improvements like post-quantum KEMs to protect
against store-and-decrypt-later attacks (rightfully) require TLS 1.3.

Additionally I want to emphasize that, because of the negotiation order
between versions and client certificates, there is no way to do an
incremental transition here. Saying deployments stick with 1.2 not only
impacts the relevant hardware but also *any other connections that the
server makes*. Essentially the server cannot enable TLS 1.3 until *every*
client has stopped using one of these PSS-incapable signers. This is not a
good transition plan.

It saddens me to have to allow RSASSA-PKCS1-v1_5 here, but I think, in the
limited scope that this draft covers (client certs only), this is clearly
the right move.

David

On Tue, Oct 24, 2023, 20:48 Andrei Popov  wrote:

> Hi TLS,
>
>
>
> We would like to re-introduce
> https://datatracker.ietf.org/doc/draft-davidben-tls13-pkcs1/
>
> (it’s intended for the TLS WG and the Standards track, despite what the
> document says at the top; we’ll fix it as soon as the submission tool
> reopens).
>
>
>
> In the course of TLS 1.3 deployment, it became apparent that a lot of
> hardware cryptographic devices used to protect TLS client certificate
> private keys cannot produce RSA-PSS signatures compatible with TLS.
>
> This draft would allow RSA-PKCS signatures in the client CertificateVerify
> messages (and not in any other contexts), as a way to unblock TLS 1.3
> deployments.
>
> This is an unfortunate situation, and work is being done with hardware
> vendors to reduce the likelihood of similar issues in the future, but
> existing devices tend to stay around for years.
>
>
>
> Comments/suggestions are welcome,
>
>
>
> Cheers,
>
>
>
> Andrei
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [EXTERNAL] Re: Request mTLS Flag

2023-10-24 Thread David Benjamin
On Tue, Oct 24, 2023, 13:07 Viktor Dukhovni  wrote:

> On Tue, Oct 24, 2023 at 12:54:08PM -0400, David Benjamin wrote:
>
> > Is the concern here errors or prompting? From the original email, it
> > sounded like the issue was that requesting client certificates showed
> > undesirable UI to human-backed clients.
>
> My concern is errors; browser UX concerns are not my bailiwick.  I
> typically look at TLS from the perspective of SMTP, where all the
> communication is bot-to-bot (MTA to MTA).
>
> But, you're right that prompting could also be an issue, since in this
> case the use-case was MUA to MSA, so it would apply to Thunderbird,
> Outlook, ... where there's a user behind the keyboard.
>
> I don't recall seeing prompting as an issue reported by MUA users, since
> the MUA authentication method is typically pre-configured as part of the
> "server settings".  MUAs have the luxury of a static set of servers they
> talk to, where pre-configuration is the norm.
>

Ah yeah, I should have been clearer that I was specifically talking about
HTTPS human clients. Hopefully MUAs didn't make quite the same set of
historical deployment mistakes that HTTPS UAs did around client certs! :-)



-- 
> Viktor.
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [EXTERNAL] Re: Request mTLS Flag

2023-10-24 Thread David Benjamin
Is the concern here errors or prompting? From the original email, it
sounded like the issue was that requesting client certificates showed
undesirable UI to human-backed clients.

That one is a bit harder to avoid since no one is acting incorrectly per
se. Clients for humans need to ask the human for consent before revealing
identity information. They also need to be mindful of things like
background contexts, in which prompting isn't possible. Also some platforms
are such that querying for certs and prompting is one operation, which
limits the solution space.

All this together means that optional client certificates, for HTTPS
services that are accessed by humans, basically does not work, even though
everything works fine at the protocol level.

Really the problem is that authentication for robots and authentication for
humans have different UX requirements. But we're kind of stuck because this
particular mechanism, years ago, got used for human authentication despite
not actually being terribly suitable for it.

On Tue, Oct 24, 2023, 12:37 Viktor Dukhovni  wrote:

> On Tue, Oct 24, 2023 at 04:11:53PM +, Andrei Popov wrote:
>
> > > At least as a client, you can't read anything into seeing a cert
> request from the server, it's just a standard part of the handshake, like a
> keyex or a finished.
> >
> > This is exactly my argument: when a client receives a cert request the
> > client cannot satisfy, the RFC says send an empty Certificate and
> > continue with the handshake...
>
> Sadly, that's not what actually reliably happens in practice, or at
> least that was the case when I last looked.
>
> Perhaps all the guilty TLS stacks were fixed in the meantime?  I am not
> well placed to determine how much "friction" remains.
>
> --
> Viktor.
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Fwd: New Version Notification for draft-davidben-tls-trust-expr-00.txt

2023-10-23 Thread David Benjamin
On Sat, Oct 21, 2023 at 5:41 AM Ilari Liusvaara 
wrote:

> On Fri, Oct 20, 2023 at 04:07:21PM -0400, David Benjamin wrote:
> > On Thu, Oct 19, 2023 at 3:17 PM Ilari Liusvaara <
> ilariliusva...@welho.com>
> > wrote:
> > > - The multiple certificates from one ACME order really scares me. It
> > >   seems to me that can lead to all sorts of trouble.
> > >
> >
> > Certainly open to different mechanisms, though could you elaborate on
> > the trouble? We started with this one because it's actually just fixing a
> > mechanism ACME *already has*. RFC 8555, 7.4.2 has this bit:
> >
> >The server MAY provide one or more link relation header fields
> >[RFC8288] with relation "alternate".  Each such field SHOULD express
> >an alternative certificate chain starting with the same end-entity
> >certificate.  This can be used to express paths to various trust
> >anchors.  Clients can fetch these alternates and use their own
> >heuristics to decide which is optimal.
>
> Note the part "the same end-entity certificate.". The way I interpretted
> the draft, returning different end-entity certificates would be allowed.
>
>
> > Whether anyone has ever used this, I don't know. The "and use their own
> > heuristics to decide which is optimal" bit is quite absurd. :-) Relative
> to
> > that, all we've done is:
>
> Yes, that gets sometimes used.
>
>
> > But I suspect we'll want to define one where you make multiple orders
> too.
> > That would probably work better for, e.g., Merkle Tree certs where the
> two
> > issuances complete at very different times. But then, conversely, when
> the
> > two paths actually share an end-entity certificate, I imagine a single
> > order would be better so the CA knows it only needs to generate one
> > signature. And then when they don't share an end-entity certificate but
> are
> > similar enough in lifetime and issuance time, either seems fine, so we
> > figured this was a reasonable starting point.
>
> Well, I do not think it is feasible to use the normal ACME issuance
> mechanism for Merkle Tree certificates. The issuance is just too slow.
>
> And things like ACME ARI (which is required for actually handling
> revocations) inherently assume each order can only result in one
> certificate.
>

The note about sharing an EE cert is just a SHOULD, not a MUST. RFC 8555
doesn't say why, but our interpretation was, like you note, this was mostly
a concern for things like accounting for renewals and revocations. We tried
to firm that up a bit by saying this makes sense when you're willing to
issue and renew all the variants together. For something like ARI, I was
imagining the ACME client would just check all of them (we're already
assuming the ACME client has been updated) and, if it needs to renew any of
them, it goes ahead and renews all of them. Slightly wasteful if renewal
was triggered by one of them getting revoked, rather than them all expiring
together. But I expect that's not common enough to be worth optimizing for.

Do you think multiple orders would be better? A multi-order flow is
probably more complex than fits in this document (this ACME change is
pretty small), so we didn't start with it. Plus this initial version seemed
natural to us based on what ACME had already defined. But we're much more
interested in making this kind of multi-certificate deployment model
possible than any of the particular details. Happy to adjust things based
on what turns out to work best. (One nuisance with a multi-order flow is
that the CA will have a harder time linking the requests together, which
opens a can of worms around whether they do separate validations or not.)



> > > - If there can be only one certificate, one could send all the chains
> > >   in one go via fist sending the certificate, then issuer chains each
> > >   ended by entry describing the trust anchor.
> > >
> >
> > I'm not quite sure if I've parsed this right, but are you thinking of one
> > file that somehow describes all alternatives together? That's plausible
> > too. Like I said, we mostly did this one because ACME already did it, so
> we
> > inferred that was The ACME Way. :-)
>
> Yes, one file describing all alternatives together. I think that is
> easier to work with than the existing alternatives mechanism (I don't
> think most clients even support that).
>

I think having to do a round of updating existing clients is fine here.
We'd already need to teach them to implement this thing. But if we think
putting them in one bundle is more convenient, that's fine by me too. I
don't actually care.

Exploring that direction a bit, won't that make issues

Re: [TLS] Fwd: New Version Notification for draft-davidben-tls-trust-expr-00.txt

2023-10-23 Thread David Benjamin
y many words to say.
:-)

But I'm fine with whatever order. The goal was to just pick *some* defined
sort order, so that it's easy to find duplicates, and to hint that maybe
you should put this into some structure that can be binary-searched easily.
Happy to do "a" before "ab" before "b" instead. Is there a canonical way to
say that in IETF-ese without expending a lot of text?
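(For what it's worth, that is just ordinary byte-wise lexicographic order;
e.g. Go's built-in string ordering already behaves this way.)

```go
// Go's built-in string ordering is already that byte-wise lexicographic
// order, where "a" sorts before "ab", which sorts before "b".
package main

import (
	"fmt"
	"sort"
)

func main() {
	labels := []string{"b", "ab", "a"}
	sort.Strings(labels)
	fmt.Println(labels) // [a ab b]
}
```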


> On Thu, Oct 19, 2023 at 8:38 AM David Benjamin 
> wrote:
> >
> > Hi all,
> >
> > We just published a document on certificate negotiation. It's a TLS
> extension, which allows the client to communicate which trust anchors it
> supports, primarily focused on use cases like the Web PKI where trust
> stores are fairly large. There is also a supporting ACME extension, to
> allow CAs to provision multiple certificate chains on a server, with enough
> metadata to match against what the client sends. (It also works in the
> other direction for client certificates.)
> >
> > The hope is this can build towards a more agile and flexible PKI. In
> particular, the Use Cases section of the document details some scenarios
> (e.g. root rotation) that can be made much more robust with it.
> >
> > It's very much a draft-00, but we're eager to hear your thoughts on it!
> >
> > David, Devon, and Bob
> >
> > -- Forwarded message -
> > From: 
> > Date: Thu, Oct 19, 2023 at 11:36 AM
> > Subject: New Version Notification for
> draft-davidben-tls-trust-expr-00.txt
> > To: Bob Beck , David Benjamin ,
> Devon O'Brien 
> >
> >
> > A new version of Internet-Draft draft-davidben-tls-trust-expr-00.txt has
> been
> > successfully submitted by David Benjamin and posted to the
> > IETF repository.
> >
> > Name: draft-davidben-tls-trust-expr
> > Revision: 00
> > Title:TLS Trust Expressions
> > Date: 2023-10-19
> > Group:Individual Submission
> > Pages:35
> > URL:
> https://www.ietf.org/archive/id/draft-davidben-tls-trust-expr-00.txt
> > Status:
> https://datatracker.ietf.org/doc/draft-davidben-tls-trust-expr/
> > HTML:
> https://www.ietf.org/archive/id/draft-davidben-tls-trust-expr-00.html
> > HTMLized:
> https://datatracker.ietf.org/doc/html/draft-davidben-tls-trust-expr
> >
> >
> > Abstract:
> >
> >This document defines TLS trust expressions, a mechanism for relying
> >parties to succinctly convey trusted certification authorities to
> >subscribers by referencing named and versioned trust stores.  It also
> >defines supporting mechanisms for subscribers to evaluate these trust
> >expressions, and select one of several available certification paths
> >to present.  This enables a multi-certificate deployment model, for a
> >more agile and flexible PKI that can better meet security
> >requirements.
> >
> >
> >
> > The IETF Secretariat
> >
> >
> > ___
> > TLS mailing list
> > TLS@ietf.org
> > https://www.ietf.org/mailman/listinfo/tls
>
>
>
> --
> Colm
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] New Version Notification for draft-davidben-tls-trust-expr-00.txt

2023-10-23 Thread David Benjamin
Quick update: we pushed a draft-01. It's basically the same, but we noticed
we referred to the wrong name of some structs in places and figured it was
worth a draft-01 to be less confusing. :-)

On Thu, Oct 19, 2023 at 11:38 AM David Benjamin 
wrote:

> Hi all,
>
> We just published a document on certificate negotiation. It's a TLS
> extension, which allows the client to communicate which trust anchors it
> supports, primarily focused on use cases like the Web PKI where trust
> stores are fairly large. There is also a supporting ACME extension, to
> allow CAs to provision multiple certificate chains on a server, with enough
> metadata to match against what the client sends. (It also works in the
> other direction for client certificates.)
>
> The hope is this can build towards a more agile and flexible PKI. In
> particular, the Use Cases section of the document details some scenarios
> (e.g. root rotation) that can be made much more robust with it.
>
> It's very much a draft-00, but we're eager to hear your thoughts on it!
>
> David, Devon, and Bob
>
> -- Forwarded message -
> From: 
> Date: Thu, Oct 19, 2023 at 11:36 AM
> Subject: New Version Notification for draft-davidben-tls-trust-expr-00.txt
> To: Bob Beck , David Benjamin ,
> Devon O'Brien 
>
>
> A new version of Internet-Draft draft-davidben-tls-trust-expr-00.txt has
> been
> successfully submitted by David Benjamin and posted to the
> IETF repository.
>
> Name: draft-davidben-tls-trust-expr
> Revision: 00
> Title: TLS Trust Expressions
> Date: 2023-10-19
> Group: Individual Submission
> Pages: 35
> URL:
> https://www.ietf.org/archive/id/draft-davidben-tls-trust-expr-00.txt
> Status:   https://datatracker.ietf.org/doc/draft-davidben-tls-trust-expr/
> HTML:
> https://www.ietf.org/archive/id/draft-davidben-tls-trust-expr-00.html
> HTMLized:
> https://datatracker.ietf.org/doc/html/draft-davidben-tls-trust-expr
>
>
> Abstract:
>
>This document defines TLS trust expressions, a mechanism for relying
>parties to succinctly convey trusted certification authorities to
>subscribers by referencing named and versioned trust stores.  It also
>defines supporting mechanisms for subscribers to evaluate these trust
>expressions, and select one of several available certification paths
>to present.  This enables a multi-certificate deployment model, for a
>more agile and flexible PKI that can better meet security
>requirements.
>
>
>
> The IETF Secretariat
>
>
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Request mTLS Flag

2023-10-23 Thread David Benjamin
> So in my mind this is something that will (almost) never be sent by
browsers.

What cases would the "(almost)" kick in? This extension's model just doesn't
match how client certificates work in browsers. I'm not seeing any
interpretation beyond "always send" or "never send".

> For example identifying a web crawler, and either allowing or disallowing
it.

I'm not following how this identifies web crawlers, unless perhaps we're
using the term to mean different things? I would expect web crawlers to
typically not do much with client certificates, and to typically *want* to
index the web in the same way that humans with web browsers see it.

> I don't think this leaks more info than a dedicated endpoint (even
accounting for ECH), and from a security perspective is just a hint.

The difference is the dedicated endpoint case only kicks in once you are
actually talking to a site that is deployed that way. A ClientHello flag
would likely be sent unconditionally, because it's too early to condition
it on much.

On Mon, Oct 23, 2023 at 11:58 AM Jonathan Hoyland <
jonathan.hoyl...@gmail.com> wrote:

> Hi David,
>
> So in my mind this is something that will (almost) never be sent by
> browsers.
>
> This is aimed at bots, both internal and external. For example identifying
> a web crawler, and either allowing or disallowing it.
>
> Currently we identify many bots by IP range and user agent (and a bunch of
> ML), which isn't always reliable.
>
> The web crawler case is where the dedicated endpoint falls over, because
> crawlers are indexing the human visible web.
>
> I don't think this leaks more info than a dedicated endpoint (even
> accounting for ECH), and from a security perspective is just a hint.
>
>
> Regards,
>
> Jonathan
>
>
> On Mon, 23 Oct 2023, 16:36 David Benjamin,  wrote:
>
>> Would you expect a browser user to send this flag? On the browser side,
>> we don't know until the CertificateRequest whether a client certificate is
>> configured. We have to do a moderately expensive query of the OS's cert
>> and key stores, dependent on information in the CertificateRequest, to get
>> this information. This query may even call into things like 3p
>> smartcard drivers, which may do arbitrarily disruptive things like showing
>> UI.
>>
>> And if we could somehow predict this information, this would leak into
>> the cleartext ClientHello when, starting with TLS 1.3, the whole client
>> certificate flow is in the encrypted portion of the handshake.
>>
>> So, practically speaking, I don't think browsers could do anything
>> meaningful with this extension. We'd either always send it, on grounds that
>> we have code to rummage for client certs on request, or never send it on
>> grounds that we're not preconfigured with a client cert at the time of
>> ClientHello. Either way, it seems likely to interfere with someone's
>> assumptions here.
>>
>> The dedicated endpoint strategy seems more straightforward.
>>
>> David
>>
>>
>> On Mon, Oct 23, 2023, 11:22 Jonathan Hoyland 
>> wrote:
>>
>>> Hey TLSWG,
>>>
>>> I've just posted a new draft
>>> <https://www.ietf.org/archive/id/draft-jhoyla-req-mtls-flag-00.html>
>>> that defines a TLS Flag
>>> <https://www.ietf.org/archive/id/draft-ietf-tls-tlsflags-12.html> that
>>> provides a hint to the server that the client supports mTLS / is configured
>>> with a client certificate.
>>>
>>> Usually the server has no way to know in advance whether a given inbound
>>> connection is from a client with a certificate. If the server unexpectedly
>>> requests a certificate from a human user, most users wouldn’t know what to
>>> do. To avoid this many servers never send the CertificateRequest message in
>>> the server’s first flight, or set up dedicated endpoints used only by bots.
>>> If client authentication is necessary it can be negotiated later using a
>>> higher layer either through post-handshake auth or with an Exported
>>> Authenticator, but both of those options add round trips to the connection.
>>>
>>> At Cloudflare we’re exploring ways to quickly identify clients. Having
>>> an explicit signal from the client that it has an mTLS certificate on offer
>>> reduces round-trips to find out, avoids unnecessarily probing clients that
>>> have no certificate, etc. I think this would be an ideal use case for the
>>> TLS Flags extension.
>>>
>>> I have a pair of interoperable implementations (one based on boringssl
>>> and one based on Go TLS) which I plan to open source before Prague.
>>> Obviously these include implementations of the TLS Flags extension, which
>>> hopefully will help drive that work forward too.
>>>
>>> Regards,
>>>
>>> Jonathan
>>> ___
>>> TLS mailing list
>>> TLS@ietf.org
>>> https://www.ietf.org/mailman/listinfo/tls
>>>
>>>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Request mTLS Flag

2023-10-23 Thread David Benjamin
Would you expect a browser user to send this flag? On the browser side, we
don't know until the CertificateRequest whether a client certificate is
configured. We have to do a moderately expensive query of the OS's cert and
key stores, dependent on information in the CertificateRequest, to get this
information. This query may even call into things like 3p
smartcard drivers, which may do arbitrarily disruptive things like showing
UI.

And if we could somehow predict this information, this would leak into the
cleartext ClientHello when, starting with TLS 1.3, the whole client
certificate flow is in the encrypted portion of the handshake.

So, practically speaking, I don't think browsers could do anything
meaningful with this extension. We'd either always send it, on grounds that
we have code to rummage for client certs on request, or never send it on
grounds that we're not preconfigured with a client cert at the time of
ClientHello. Either way, it seems likely to interfere with someone's
assumptions here.

The dedicated endpoint strategy seems more straightforward.

David


On Mon, Oct 23, 2023, 11:22 Jonathan Hoyland 
wrote:

> Hey TLSWG,
>
> I've just posted a new draft
> <https://www.ietf.org/archive/id/draft-jhoyla-req-mtls-flag-00.html> that
> defines a TLS Flag
> <https://www.ietf.org/archive/id/draft-ietf-tls-tlsflags-12.html> that
> provides a hint to the server that the client supports mTLS / is configured
> with a client certificate.
>
> Usually the server has no way to know in advance whether a given inbound
> connection is from a client with a certificate. If the server unexpectedly
> requests a certificate from a human user, most users wouldn’t know what to
> do. To avoid this many servers never send the CertificateRequest message in
> the server’s first flight, or set up dedicated endpoints used only by bots.
> If client authentication is necessary it can be negotiated later using a
> higher layer either through post-handshake auth or with an Exported
> Authenticator, but both of those options add round trips to the connection.
>
> At Cloudflare we’re exploring ways to quickly identify clients. Having an
> explicit signal from the client that it has an mTLS certificate on offer
> reduces round-trips to find out, avoids unnecessarily probing clients that
> have no certificate, etc. I think this would be an ideal use case for the
> TLS Flags extension.
>
> I have a pair of interoperable implementations (one based on boringssl and
> one based on Go TLS) which I plan to open source before Prague. Obviously
> these include implementations of the TLS Flags extension, which hopefully
> will help drive that work forward too.
>
> Regards,
>
> Jonathan
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Fwd: New Version Notification for draft-davidben-tls-trust-expr-00.txt

2023-10-20 Thread David Benjamin
On Fri, Oct 20, 2023 at 4:07 PM David Benjamin 
wrote:

> Thanks for the comments! Responses inline.
>
> On Thu, Oct 19, 2023 at 3:17 PM Ilari Liusvaara 
> wrote:
>
>> Some quick thoughts:
>>
>> - The multiple certificates from one ACME order really scares me. It
>>   seems to me that can lead to all sorts of trouble.
>>
>
> Certainly open to different mechanisms, though could you elaborate on
> the trouble? We started with this one because it's actually just fixing a
> mechanism ACME *already has*. RFC 8555, 7.4.2 has this bit:
>
>The server MAY provide one or more link relation header fields
>[RFC8288] with relation "alternate".  Each such field SHOULD express
>an alternative certificate chain starting with the same end-entity
>certificate.  This can be used to express paths to various trust
>anchors.  Clients can fetch these alternates and use their own
>heuristics to decide which is optimal.
>
> https://datatracker.ietf.org/doc/html/rfc8555#section-7.4.2
>
> Whether anyone has ever used this, I don't know. The "and use their own
> heuristics to decide which is optimal" bit is quite absurd. :-) Relative to
> that, all we've done is:
>
> - Go ahead and define some format for a chain + properties... I like PEM
> much, but hey if ACME does that, we can just mimic that.
>

Sorry, I meant to write "I *don't* like PEM much". :-)


> - Fix the heuristic. You don't need heuristics when you have robust
> negotiation.
>
> But I suspect we'll want to define one where you make multiple orders too.
> That would probably work better for, e.g., Merkle Tree certs where the two
> issuances complete at very different times. But then, conversely, when the
> two paths actually share an end-entity certificate, I imagine a single
> order would be better so the CA knows it only needs to generate one
> signature. And then when they don't share an end-entity certificate but are
> similar enough in lifetime and issuance time, either seems fine, so we
> figured this was a reasonable starting point.
>
>
>> - If there can be only one certificate, one could send all the chains
>>   in one go via fist sending the certificate, then issuer chains each
>>   ended by entry describing the trust anchor.
>>
>
> I'm not quite sure if I've parsed this right, but are you thinking of one
> file that somehow describes all alternatives together? That's plausible
> too. Like I said, we mostly did this one because ACME already did it, so we
> inferred that was The ACME Way. :-)
>
>
>> - The latest version and previous version stuff seems pretty confusing
>>   to me.
>>
>
> Yeah, it took us many iterations to find a good way to describe it, and
> I'm sure we haven't gotten it right yet. It's all to deal with version skew
> cleanly, when the relying party references a newer trust store than what
> the subscriber knew about at the time the certificate was issued. Since it
> seems your suggestion below relates to this, maybe some of the discussion
> below will help us clear it up and get to a better description?
>
>
>> - I am not sure this is useful for the client->server direction.
>>
>
> Eh, it costs ~nothing to define it in both directions, just a global
> s/client/relying party/ and s/server/subscriber/ across the document. :-) I
> figure we may as well define it in both directions, and if some client
> certificate deployments find it useful, cool. On the Chrome side, if the
> operating systems could give us something like this, with pre-made paths
> and unambiguous rules for when to send each, I would be overjoyed. We spend
> quite a lot of time helping people debug misconfigurations and quirks
> around client certificate selection.
>
>
>> What I think is a simpler version that might work:
>>
>>
>> Information from root program to CA:
>>
>> - Root program name.
>> - For each trust anchor:
>>   * Trust anchor certificate.
>>   * First version TA appeared in.
>>   * Expiry time
>>   * List of indices.
>>
>> Indices can be reused after all TAs using those have expired.
>>
>>
>> Information from CA to TLS server for each TA:
>>
>> - For each root program:
>>   * Root program name
>>   * The first version TA appeared in.
>>   * List of indices.
>>
>> CA MUST NOT include entries that expire before the certificate.
>>
>>
>> Information from TLS client to TLS server:
>>
>> - Root program name.
>> - Root program version.
>> - List of revoked indices.
>>
>> The revoked indices specifies TAs that have been recently removed
>

Re: [TLS] Fwd: New Version Notification for draft-davidben-tls-trust-expr-00.txt

2023-10-20 Thread David Benjamin
ficates:

Suppose some CA is in v2 and then was removed in v3. We need to ensure
certificates issued by that CA don't match a v3 client, so the server will
send a different one. Immediately after removal, there are plenty of
existing certs that predate v3's definition, so the relying party needs to
ship in exclusion. However, we would like the exclusion to eventually fall
off, or every historical removal will be sent in every ClientHello ever.

If the CA ceases operation, the exclusions can be dropped then. But the CA
may have good reasons to keep issuing. Consider root rotation. Unupdated
clients won't trust the new root, and servers may still need to work with
those clients for a long period of time. The CA may quite reasonably wish
to continue issuing from that root until that is no longer the case. This
may even happen during a distrust: it could be that one population of
clients no longer trusts the CA, while another population of clients
(perhaps some unupdatable devices somewhere) *only* trusts that CA. Servers
that need to serve both populations could then deploy a different
certificate for each, in which case the removed CA might continue issuing.
It only takes *one* population of relying parties for it to be useful to
keep issuing from that CA. And as long as *any* server has certificates
from that CA installed, we need to account for them in cert selection
somehow.

The way to square this is to have both a lower bound *and* an optional
upper bound. If the CA knows it is in v1 and v2, but not v3 onwards, it can tell
the server this. Once you add the upper bound, I *think* your sketch is
basically our design (though I may have missed something). The main
difference is rather than store a range in manifest and inclusion list,
we've just listed each version individually. If you've got a
latest_version_at_issuance entry, it means the range has no upper bound.

We listed them out because that makes versions a bit more independent,
which allows the root program to adjust its label allocations over time.
For example, maybe the root program later decides it'd be useful to mark
CAs that use some algorithm. Or perhaps some keys changed hands and we'd
like to reflect that in the labels. Now, there's a little subtlety here
because label changes take some time to be reliable. But after max_age +
max_lifetime, all certificates will have the new information available.
(Before then, the trust expression creation process will just tell you that
you need to account for both old and new entries, so you may need to add a
few more labels.) Also, when we went to describe the root program
operation, it was much more straightforward to just talk about versions as
independent, with minimal cross-talk between them.

Of course, labels probably won't change between versions much, so a range
scheme would make the list more compact. We didn't go with that just
because the exploded one was less complicated. We did some estimates and it
didn't look like the compression was actually needed, so we omitted it.
(These are not sent in TLS connections, just in the root program -> CA ->
subscriber flow. We don't want them to be *humongous*, but we don't need to
squeeze them that tightly.) But if folks prefer ranges, that's easy enough
to add.
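
To make the lower/upper bound point concrete, here is a rough sketch of the
server-side check. The field names and shapes are illustrative placeholders,
not the draft's actual structures:

def path_usable(path_snapshot, client):
    """path_snapshot: the CA's issuance-time claims for one candidate path,
    e.g. [{"store": "example", "versions": {1, 2}, "open_ended": True,
           "label": 17}], where open_ended means "and every later version"
    (the latest_version_at_issuance case).
    client: {"store": "example", "version": 3, "excluded_labels": {17}}."""
    for entry in path_snapshot:
        if entry["store"] != client["store"]:
            continue
        claimed = client["version"] in entry["versions"] or (
            entry["open_ended"] and client["version"] > max(entry["versions"]))
        # Version skew for removals: a newer client can exclude a label that
        # the issuance-time snapshot still claims.
        if claimed and entry["label"] not in client["excluded_labels"]:
            return True
    return False

# The server would walk its candidate paths in its own preference order and
# send the first one for which path_usable(...) returns True.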



> > -- Forwarded message -
> > From: 
> > Date: Thu, Oct 19, 2023 at 11:36 AM
> > Subject: New Version Notification for
> draft-davidben-tls-trust-expr-00.txt
> > To: Bob Beck , David Benjamin ,
> Devon
> > O'Brien 
> >
> > Name: draft-davidben-tls-trust-expr
> > Revision: 00
> > Title: TLS Trust Expressions
> > Date: 2023-10-19
> > Group: Individual Submission
> > Pages: 35
> > URL:
> > https://www.ietf.org/archive/id/draft-davidben-tls-trust-expr-00.txt
> > Status:
> https://datatracker.ietf.org/doc/draft-davidben-tls-trust-expr/
> > HTML:
> > https://www.ietf.org/archive/id/draft-davidben-tls-trust-expr-00.html
> > HTMLized:
> > https://datatracker.ietf.org/doc/html/draft-davidben-tls-trust-expr
>
>
>
>
> -Ilari
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


[TLS] Fwd: New Version Notification for draft-davidben-tls-trust-expr-00.txt

2023-10-19 Thread David Benjamin
Hi all,

We just published a document on certificate negotiation. It's a TLS
extension, which allows the client to communicate which trust anchors it
supports, primarily focused on use cases like the Web PKI where trust
stores are fairly large. There is also a supporting ACME extension, to
allow CAs to provision multiple certificate chains on a server, with enough
metadata to match against what the client sends. (It also works in the
other direction for client certificates.)

The hope is this can build towards a more agile and flexible PKI. In
particular, the Use Cases section of the document details some scenarios
(e.g. root rotation) that can be made much more robust with it.

It's very much a draft-00, but we're eager to hear your thoughts on it!

David, Devon, and Bob

-- Forwarded message -
From: 
Date: Thu, Oct 19, 2023 at 11:36 AM
Subject: New Version Notification for draft-davidben-tls-trust-expr-00.txt
To: Bob Beck , David Benjamin , Devon
O'Brien 


A new version of Internet-Draft draft-davidben-tls-trust-expr-00.txt has
been
successfully submitted by David Benjamin and posted to the
IETF repository.

Name: draft-davidben-tls-trust-expr
Revision: 00
Title: TLS Trust Expressions
Date: 2023-10-19
Group: Individual Submission
Pages: 35
URL:
https://www.ietf.org/archive/id/draft-davidben-tls-trust-expr-00.txt
Status:   https://datatracker.ietf.org/doc/draft-davidben-tls-trust-expr/
HTML:
https://www.ietf.org/archive/id/draft-davidben-tls-trust-expr-00.html
HTMLized:
https://datatracker.ietf.org/doc/html/draft-davidben-tls-trust-expr


Abstract:

   This document defines TLS trust expressions, a mechanism for relying
   parties to succinctly convey trusted certification authorities to
   subscribers by referencing named and versioned trust stores.  It also
   defines supporting mechanisms for subscribers to evaluate these trust
   expressions, and select one of several available certification paths
   to present.  This enables a multi-certificate deployment model, for a
   more agile and flexible PKI that can better meet security
   requirements.



The IETF Secretariat
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] weird DHE params p length in TLSv1.2

2023-10-18 Thread David Benjamin
If I recall, TLS 1.2 was ambiguous on this point, so it's unclear what the
sender is expected to do.

I believe there were some implementations that expected a fixed-width
public key (which would have been the better option to pick, but we don't
have a time machine), so zero-padding on send is prudent. But since the
spec doesn't say, the receiver probably is stuck accepting both. Not all
senders zero pad, so it doesn't surprise me too much that you've observed one
that doesn't.

But keep in mind that TLS 1.2 DHE ciphers use a flawed construction in many
other ways too.
https://datatracker.ietf.org/doc/draft-ietf-tls-deprecate-obsolete-kex/

I'd recommend moving to ECDHE and, better yet, TLS 1.3.
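
As a rough sketch of that "pad on send, be lenient on receive" advice
(illustrative Python, not tied to any particular stack; padding to the
advertised p length mirrors what the client in your capture did):

def encode_dh_public(y: int, encoded_p: bytes) -> bytes:
    # Fixed-width encoding: pad Y to the length of the encoded p, since some
    # peers expect the two lengths to match.
    return y.to_bytes(len(encoded_p), "big")

def decode_dh_public(data: bytes, p: int) -> int:
    # Receivers are stuck being liberal: accept padded and unpadded values.
    y = int.from_bytes(data, "big")
    if not 1 < y < p - 1:
        raise ValueError("invalid DH public value")
    return y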


On Wed, Oct 18, 2023, 10:15 M K Saravanan  wrote:

> one correction:
>
> > cipher suite used: TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 (0x009f)
>
> It is actually TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 (0x009e)
>
>
> On Tue, 17 Oct 2023 at 13:55, M K Saravanan  wrote:
>
>> Hi,
>>
>> I found a weird packet capture of DHE key exchange.
>>
>> C --> S
>> TLSv1.2
>> cipher suite used: TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 (0x009f)
>>
>> ServerKeyExchange message is sending:
>>
>> p length: 257 whereas pubkey length is: 256
>>
>> 256 means 256*8 = 2048 bit DHE key size.
>>
>> I am assuming, generally when using DHE, the p length and pubkey length
>> should match.
>>
>> Here p length = 257*8 = 2056 bits whereas pubkey len is 2048 bits, which
>> is unusual.
>>
>> Since SKE msg advertised a p len of 257, the client promptly responded
>> with a client public key size of 257 in its CKE msg to match the p len
>> advertised by SKE. Thus I feel the client behaviour is correct here.
>>
>> Can I know whether using diff p len and pubkey len allowed in DHE key
>> exchange?
>>
>> with regards,
>> Saravanan
>>
>> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [EXTERNAL] Re: Fwd: New Version Notification for draft-davidben-tls-key-share-prediction-00.txt

2023-10-17 Thread David Benjamin
Answering a few questions that have come up thus far:

> Downgrade by attacker is only possible if the client attempts insecure
fallback (e.g., offer PQ key share, connection failed, retry without PQ key
share)?
> Or am I missing some other possible downgrade attack?

A fallback is certainly one possible downgrade trigger, but there are
others in the section 3.1 subsections. First, suppose we decide to do a DNS
hint, as the document suggests. DNS is broadly unauthenticated, so an
attacker could easily claim the server prefers a weaker algorithm than it
actually does. (As for why we might want a DNS hint, PQ's large key sizes
means clients will be far less willing to just predict multiple PQ KEMs
just in case. But, as much as we need to cut down on unnecessary options in
the PQ space, I don't think we can bank on never wanting to transition
between PQ KEMs ever. If AwesomeNewKEM comes along that's half the size,
that's definitely worth a transition.)

You could also have a non-attacker-triggered downgrade. Suppose we're
picking between PQ1, PQ2, and X25519. If I predict {PQ1, X25519} on grounds
that PQ1 is more likely than PQ2, and X25519 is free, a server that
supports {PQ2, X25519} and implements a key-share-first selection algorithm
will pick the wrong one.

(Also, to clarify, I very much do not want to implement a fallback for
Chrome and we don't currently plan to. So far we're running doing our
initial Kyber rollout without one. We've run into some compatibility
issues, but have been able to clear them so far. But Bas has described
Cloudflare needing some workaround here. I think, independent of this
fallback possibility, there are enough other forward-looking needs to
justify doing something here. But securing this option is a nice bonus. If
you believe the initial list is largely a prediction, it's intuitive that
this would be safe to do, yet it currently isn't.)

> Servers accepting other than the server’s top-priority group in order to
avoid HRR aren’t necessarily doing this because they honor client
preferences or assume that key_share reflects client preferences.
> They may simply find several groups acceptable and consider RTT reduction
more important than the strength difference between certain groups. I’m not
convinced that this is necessarily wrong, generally.

Yeah, if the server considers all groups equally acceptable, then yeah
that's perfectly okay. And indeed if the server is key-share-first but
believes all currently-implemented groups are equally preferable, that's
fine. I tried to capture that with this paragraph here, but it's certainly
possible I've phrased it badly! (In discussing this space with others at
Google, I found it surprisingly difficult to characterize the issue in a
way that people could understand.)
https://davidben.github.io/tls-key-share-prediction/draft-davidben-tls-key-share-prediction.html#section-3.2-2

The thing that I think is *not* okay is if it implements a key-share-first
selection algorithm *without* affirmatively preferring the RTT reduction
over the strength difference. E.g. OpenSSL has gotten very excited about
pluggable cryptography (so it cannot possibly know all possible named
groups are equally acceptable), explicitly documents that the relevant
configuration on the server is a preference list, and still implements a
key-share-first selection algorithm. That is clearly unreasonable, yet RFC
8446 does not make that clear. This draft is ultimately trying to clarify
that, and draw bounds on future hinting schemes (e.g. DNS) to account for
the past lack of clarity.
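
In rough pseudocode, the selection behavior this draft is trying to pin down
looks like this. It's an illustrative sketch, not any particular library's
API:

def select_group(server_preferences, client_supported_groups, client_key_shares):
    # Walk the server's preference list, not the client's key_share list.
    for group in server_preferences:
        if group in client_supported_groups:
            needs_hrr = group not in client_key_shares
            return group, needs_hrr
    return None, False  # no group in common

# A "key-share-first" server effectively loops over client_key_shares
# instead, which silently picks X25519 when the client predicted only X25519
# even though both sides prefer the PQ group.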

> Server-side protection against [clients adjusting HRR predictions on
fallback] is not effective. Especially when we have both servers that
cannot handle large ClientHello messages and servers that have buggy HRR.

I think the discussion about buggy HRR is a red herring. Cloudflare could
easily have avoided that by simply sending key_share={Kyber, X25519}, not
key_share={Kyber}. This issue has nothing to do with that. It's about
key_share={X25519}; supported_groups={Kyber, X25519}. If the client sends
that, the server picks X25519, and all parties agree Kyber is in a
different strength class from X25519, whether the server was wrong for not
honoring its preference, or whether the client was wrong for predicting an
option that wasn't its most preferred.

If we believe that key_share is a prediction, not a preference, then we
should believe that the server is in the wrong here. We should then also
believe that this fallback is actually secure, whether it's desirable or
not. (Like I said, I don't think it's desirable, and I hope we can stick to
that. I'm more concerned with the other desirable scenarios where this
matters.)

> If this is the concern, would it be better to just say that TLS clients
SHOULD NOT/MUST NOT implement insecure fallbacks to weaker TLS parameters?

See above. This isn't *the* concern, or even the primary one. It's, IMO,
just an added bonus. I'm much more concerned about 

Re: [TLS] Fwd: New Version Notification for draft-davidben-tls-key-share-prediction-00.txt

2023-10-16 Thread David Benjamin
On Fri, Oct 13, 2023 at 1:29 AM Rob Sayre  wrote:

> On Wed, Oct 11, 2023 at 8:43 AM David Benjamin 
> wrote:
>
>>  Tossed onto GitHub and removed the discussion of authenticated records
>> in
>> https://github.com/davidben/tls-key-share-prediction/commit/cabd76f7b320ab4f970f396db3d962ca9f510875
>>
>
> Apologies in advance for this one, but what is the document trying to say
> here?
>
> It says the client "MAY" use the result, Otherwise, it "SHOULD" ignore it?
> It is probably better to get more direct:
>
> "If the resulting prediction is consistent with client preferences, as
> described in {{tls-client-behavior}}, the client MAY use the result to
> predict key shares in the initial ClientHello."
>
> That's probably the way to go, since I think the goal is to avoid obsolete
> negotiations. I think this one works, because the server can always insist
> on an algorithm, and the client can ignore the DNS recommendation. But, a
> flaw of RFC 2119 is that a "SHOULD" ropes in "there may exist valid reasons
> in particular circumstances". So the circumstances would be troubling!
> Use bad encryption due to reasons? It's probably better not to put that
> sentence in.
>

Thanks! I agree that is weird. I went to rewrite it and, as I did, I
realized that text is actually kinda weird in lots of ways. It's written as
if the client might predict multiple named groups, but once we've gotten a
decent signal of what the server supports, I can't imagine why anyone would
bother with multiple. I've thus rephrased it in terms of just one group,
which I think is much tidier. How does this look to you?
https://github.com/davidben/tls-key-share-prediction/commit/310fa7bbddd1fe0c81e3a6865a59880efc901b33

David
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Fwd: New Version Notification for draft-davidben-tls-key-share-prediction-00.txt

2023-10-10 Thread David Benjamin
On Tue, Oct 10, 2023 at 1:24 PM Bas Westerbaan  wrote:

> OK, I see. It's worse than a compatibility risk, though, isn't it? If you
>> just let them break in case (a), and then maybe try again with (b), that
>> opens up a downgrade attack. Intermediaries can observe the size of the
>> Client Hello and make it break
>>
>
> Exactly.
>

Yup! The draft fixes that downgrade, should any clients take such an (a) +
(b) fallback strategy. I would very much prefer not needing such a strategy
(so Chrome's current rollout attempt simply does (a)), since such fallbacks
have other bad consequences. But if we can at least make it secure, that
gives us a bit more breathing room in case anyone needs it.
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Fwd: New Version Notification for draft-davidben-tls-key-share-prediction-00.txt

2023-10-10 Thread David Benjamin
On Tue, Oct 10, 2023 at 12:42 PM Rob Sayre  wrote:

> On Tue, Oct 10, 2023 at 8:28 AM David Benjamin 
> wrote:
>
>> On Tue, Oct 10, 2023 at 6:06 AM Dennis Jackson > 40dennis-jackson...@dmarc.ietf.org> wrote:
>>
>>> To make sure I've understood correctly, we're trying to solve three
>>> problems:
>>>
>>>- Some servers don't tolerate large Client Hellos
>>>- Some servers don't implement HelloRetryRequest correctly
>>>- Some servers prefer fewer round trips and accept an offered key
>>>share even if both client and server would prefer a different group (for
>>>which no key share was sent). This is especially troubling if we have to
>>>migrate between PQ KEMs since we can't afford multiple key shares in the
>>>ClientHello.
>>>
>>>
>> First and third, yeah. I was mostly focused on the third one, but yeah
>> we'll also need this if the first can't be cleared. (I hope we can just
>> clear through it though. Even if we solve the downgrade problem,
>> compatibility hacks for the large ClientHello will be bad for the ecosystem
>> and very hard to remove later. But if we need it, we need it.)
>>
>
> The impression I got in reading the various PQ experiment reports (and I
> think David Benjamin did some of them...) was that the issues with large
> Client Hellos *will* arise with PQ Client Hello messages. So... if a server
> adds support for PQ, they will have to fix any underlying issues with large
> Client Hello messages as a prerequisite, right? Can we cross the first
> point off the list here? I'm a little confused about that point.
>

No, issues with large ClientHellos cannot be crossed off so simply. Keep in
mind we cannot update the internet all at once. The client does not have a
priori knowledge that the server implements PQ, but it needs to construct a
ClientHello. It can choose to either:

a) Send a ClientHello with Kyber in key_shares
b) Send a ClientHello without Kyber in key_shares

If it picks (a), any *non-Kyber-supporting* servers that break with large
ClientHellos will break. If there are sufficiently few of these, we can
maybe clear through it, but it is a compatibility risk we need to deal
with. But the important thing is that we are precisely concerned with the
non-Kyber servers here. It's not as simple as saying you fix that when you
deploy Kyber.

If it picks (b), non-Kyber-supporting servers behave as before, buggy or
not. However, Kyber-supporting servers not only suffer a round-trip but
also may not even pick Kyber. For example, if you were to add Kyber to
OpenSSL today, it would pick X25519 when presented with (b). See
https://github.com/openssl/openssl/issues/22203. BoringSSL will behave
correctly, because we anticipated this issue when we first implemented TLS
1.3. I think NSS also knows about group preferences? But clearly the spec
wasn't clear enough here. Thus this draft exists to resolve this. *This* server
fix *can* be done as servers add Kyber support, but only if we remember to
tell them that.

Now, I'm hoping we can just make (a) work. Sending (b) also makes the
preferred option (Kyber) take a round-trip and more complex schemes like
fallbacks have a tendency to overstay their welcome. But different folks
will have different risk preferences and deployment strategies, so I think
it is worth making sure strategies involving (b) remain viable. And, of
course, there's still the third problem.
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] TLSFlags ambiguity

2023-09-27 Thread David Benjamin
Nice catch! I agree those don't match. I think bit zero should be the
least-significant bit. That is, we should leave the examples as-is and then
fix the specification text.

Ordering bits MSB first doesn't make much sense. Unlike bytes, there is no
inherent order to bits in memory, so the most natural order is the power of
two represented by the bit. Put another way, everyone accesses bit N by
ANDing with 1 << N and that's least-significant bits first. I can think of
a couple systems (DER, GCM) that chose to order bits most-significant first
and both have caused endless confusion and problems. (It's particularly bad
for GCM which is actually representing a polynomial, but then messed up the
order. Let's not repeat this blunder.)
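
Concretely, under the least-significant-bit-first reading, the encoding of
the example works out as below. This is a sketch of the bit numbering only,
not a full encoder:

def set_flag(flags: bytearray, n: int) -> None:
    byte_index, bit_index = n // 8, n % 8
    if byte_index >= len(flags):
        flags.extend(b"\x00" * (byte_index + 1 - len(flags)))
    flags[byte_index] |= 1 << bit_index  # bit N is (1 << N) within its octet

flags = bytearray()
set_flag(flags, 0)
assert bytes(flags) == b"\x01"  # flag 0 on the wire is 0x01, not 0x80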

On Fri, Sep 15, 2023 at 1:37 PM Jonathan Hoyland 
wrote:

> Hi TLSWG,
>
> I'm working on implementing the TLS Flags extension
> , and I
> just wanted to clarify a potential ambiguity in the spec.
>
> In Section 2 the spec says:
> Such documents will have to define which bit to set to show support, and
> the order of the bits within the bit string shall be enumerated in network
> order: bit zero is the high-order bit of the first octet as the flags field
> is transmitted.
>
> And also gives the example for encoding bit zero:
> For example, if we want to encode only flag number zero, the FlagExtension
> field will be 1 octet long, that is encoded as follows:
>
>0001
>
> In which it seems that the low-order bit of the first octet represents zero.
>
> I have no preference either way, but when transmitted on the wire, should 
> flag 0 be transmitted as
>
> 0x01 or 0x80?
>
> Regards,
>
> Jonathan
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


[TLS] Fwd: New Version Notification for draft-davidben-tls-key-share-prediction-00.txt

2023-09-26 Thread David Benjamin
Hi all,

A while back, we discussed using a DNS hint to predict key shares and
reduce HelloRetryRequest, but this was dropped due to downgrade issues. In
thinking through post-quantum KEMs and the various transitions we'll have
in the future, I realized we actually need to address those downgrade
issues now. I've published a draft below which is my attempt at resolving
this.

We don't need a DNS hint for the *current* PQ transition—most TLS
ecosystems should stick to the one preferred option, and then clients can
predict that one and move on. However, I think we need to lay the
groundwork for it now. If today's round of PQ supported groups cannot be
marked "prediction-safe" (see document for what I mean by that),
transitioning to the *next* PQ KEM (e.g. if someone someday comes up with a
smaller one that we're still confident in!) will be extremely difficult
without introducing downgrades.

Thoughts?

David

-- Forwarded message -
From: 
Date: Mon, Sep 25, 2023 at 6:56 PM
Subject: New Version Notification for
draft-davidben-tls-key-share-prediction-00.txt
To: David Benjamin 


A new version of Internet-Draft
draft-davidben-tls-key-share-prediction-00.txt
has been successfully submitted by David Benjamin and posted to the
IETF repository.

Name: draft-davidben-tls-key-share-prediction
Revision: 00
Title: TLS Key Share Prediction
Date: 2023-09-25
Group: Individual Submission
Pages: 11
URL:
https://www.ietf.org/archive/id/draft-davidben-tls-key-share-prediction-00.txt
Status:
https://datatracker.ietf.org/doc/draft-davidben-tls-key-share-prediction/
HTML:
https://www.ietf.org/archive/id/draft-davidben-tls-key-share-prediction-00.html
HTMLized:
https://datatracker.ietf.org/doc/html/draft-davidben-tls-key-share-prediction


Abstract:

   This document clarifies an ambiguity in the TLS 1.3 key share
   selection, to avoid a downgrade when server assumptions do not match
   client behavior.  It additionally defines a mechanism for servers to
   communicate key share preferences in DNS.  Clients may use this
   information to reduce TLS handshake round-trips.



The IETF Secretariat
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] SVCB codepoint for ECH

2023-09-21 Thread David Benjamin
How do we want to handle the rest of draft-sbn-tls-svcb-ech? It got WG
adoption in May, but I don't think anything's happened with it since.
(Unless we decided something and I forgot?) In particular, the section on
switching to SVCB-reliant mode is important for a client:
https://www.ietf.org/archive/id/draft-sbn-tls-svcb-ech-00.html#section-4.1

Whether it's the same document or a separate one, I think the SVCB
codepoint should be allocated in the same document that discusses how to
use the SVCB codepoint. Since there's movement towards putting it in
the ECH one and no movement on draft-sbn, just folding it all in and making
one document is tempting...

On Thu, Sep 21, 2023 at 11:01 AM Salz, Rich  wrote:

>
>
> >   https://github.com/tlswg/draft-ietf-tls-esni/pull/553
>
> Looks good to me.
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Early IANA Allocations for draft-ietf-tls-esni

2023-09-20 Thread David Benjamin
To clarify, when you say "the draft" do you mean draft-ietf-tls-esni
or draft-sbn-tls-svcb-ech? draft-ietf-tls-esni doesn't actually define a
format for it in the first place. draft-sbn-tls-svcb-ech does... that got
adopted, right? Is there a TLSWG version?

Messiness around the status of the draft aside, the format for ech (5) is
stable. In the unlikely event that we want to change the format, we will
also need to change the codepoint to avoid breaking things. So ech (5) will
forever mean what it currently means: an ECHConfigList of
internally-versioned ECHConfigs.

On Wed, Sep 20, 2023 at 3:08 PM Erik Nygren  wrote:

> The registry already exists with the pointer to ech (5) :
>
> https://www.iana.org/assignments/dns-svcb/dns-svcb.xhtml
>
> so no action is needed to make sure it isn't allocated for something
> else.  (Removing it would be more effort and more problematic.)
> Do we believe the draft is stable enough that we should reference it
> informationally for that code point?
>
>
> On Wed, Sep 20, 2023 at 3:01 PM David Benjamin 
> wrote:
>
>> I don't think what we do with the registry has any bearing on whether the
>> codepoint is burned. There are already draft ECH deployments today, on both
>> the client and server side, independent of what we later put in the
>> registry. Rather, the ECHConfigList structure is internally versioned, so
>> as long as we keep that structure, the codepoint isn't burned. If we find
>> we need to change the versioning scheme, that will indeed be incompatible,
>> and we'll need to switch codepoints. I wouldn't expect that to happen, but
>> I don't think we need to be deathly worried about it either. Codepoints are
>> plentiful.
>>
>> So I'd suggest that reserving it makes sense (to make sure no one
>> allocates it for something unrelated to ECH), and we can
>> leave draft-sbn-tls-svcb-ech to sort out the true allocation. If that
>> doesn't work procedurally, it's probably not worth the energy and we can
>> just omit the entry from the SVCB spec. We'd just then be relying on the
>> expert review to not accidentally use the value for something else.
>>
>> On Wed, Sep 20, 2023 at 2:44 PM Erik Nygren  wrote:
>>
>>> We're going through AUTH48 with SVCB right now and reviewing edits from
>>> the RFC Editor.  I think there is a good question of how to handle this.
>>> Right now it is "RESERVED (will be used for ECH)" for SvcParamKey "ech" (5)
>>> but we also say:
>>>
>>> New entries in this registry are subject to an Expert Review
>>> registration policy ([RFC8126
>>> <https://www.ietf.org/archive/id/draft-ietf-dnsop-svcb-https-12.html#RFC8126>],
>>> Section 4.5 <https://rfc-editor.org/rfc/rfc8126#section-4.5>). The
>>> designated expert MUST ensure that the Format Reference is stable and
>>> publicly available, and that it specifies how to convert the
>>> SvcParamValue's presentation format to wire format. The Format Reference
>>> MAY be any individual's Internet-Draft, or a document from any other source
>>> with similar assurances of stability and availability. An entry MAY specify
>>> a Format Reference of the form "Same as (other key Name)" if it uses the
>>> same presentation and wire formats as an existing key.
>>>
>>> This puts this in a weird state given that the ECH specification is not
>>> stable yet and did have some changes.
>>> Perhaps a question for the dnsops chairs and Warren as well?
>>>
>>> Should draft-ietf-tls-esni be referenced informationally?  It seems
>>> like there's a risk of "ech=" (5) getting burned as a codepoint
>>> given that implementations may exist with different interpretations...
>>>
>>>   Erik
>>>
>>>
>>>
>>> On Tue, Sep 19, 2023 at 11:22 AM Sean Turner  wrote:
>>>
>>>>
>>>>
>>>> > On Sep 18, 2023, at 21:39, Stephen Farrell 
>>>> wrote:
>>>> >
>>>> > I wonder if we also need to say something about the ech= SVCB
>>>> > parameter value 5 that's reserved at [1]? Not sure, but maybe
>>>> > no harm to make that "official" at the same time if possible.
>>>> > (There could be current code that assumes that 5 in a wire-
>>>> > format HTTPS RR value maps to 0xff0d within an ECHConfigList
>>>> > even if that isn't really right.)
>>>>
>>>> I’ll check with the dnsops chairs.
>>>>
>>>> spt
>>>> ___
>>>> TLS mailing list
>>>> TLS@ietf.org
>>>> https://www.ietf.org/mailman/listinfo/tls
>>>>
>>> ___
>>> TLS mailing list
>>> TLS@ietf.org
>>> https://www.ietf.org/mailman/listinfo/tls
>>>
>>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Early IANA Allocations for draft-ietf-tls-esni

2023-09-20 Thread David Benjamin
I don't think what we do with the registry has any bearing on whether the
codepoint is burned. There are already draft ECH deployments today, on both
the client and server side, independent of what we later put in the
registry. Rather, the ECHConfigList structure is internally versioned, so
as long as we keep that structure, the codepoint isn't burned. If we find
we need to change the versioning scheme, that will indeed be incompatible,
and we'll need to switch codepoints. I wouldn't expect that to happen, but
I don't think we need to be deathly worried about it either. Codepoints are
plentiful.

So I'd suggest that reserving it makes sense (to make sure no one allocates
it for something unrelated to ECH), and we can leave draft-sbn-tls-svcb-ech
to sort out the true allocation. If that doesn't work procedurally, it's
probably not worth the energy and we can just omit the entry from the SVCB
spec. We'd just then be relying on the expert review to not accidentally
use the value for something else.

On Wed, Sep 20, 2023 at 2:44 PM Erik Nygren  wrote:

> We're going through AUTH48 with SVCB right now and reviewing edits from
> the RFC Editor.  I think there is a good question of how to handle this.
> Right now it is "RESERVED (will be used for ECH)" for SvcParamKey "ech" (5)
> but we also say:
>
> New entries in this registry are subject to an Expert Review registration
> policy ([RFC8126
> ],
> Section 4.5 ). The
> designated expert MUST ensure that the Format Reference is stable and
> publicly available, and that it specifies how to convert the
> SvcParamValue's presentation format to wire format. The Format Reference
> MAY be any individual's Internet-Draft, or a document from any other source
> with similar assurances of stability and availability. An entry MAY specify
> a Format Reference of the form "Same as (other key Name)" if it uses the
> same presentation and wire formats as an existing key.
>
> This puts this in a weird state given that the ECH specification is not
> stable yet and did have some changes.
> Perhaps a question for the dnsops chairs and Warren as well?
>
> Should draft-ietf-tls-esni be referenced informationally?  It seems like
> there's a risk of "ech=" (5) getting burned as a codepoint
> given that implementations may exist with different interpretations...
>
>   Erik
>
>
>
> On Tue, Sep 19, 2023 at 11:22 AM Sean Turner  wrote:
>
>>
>>
>> > On Sep 18, 2023, at 21:39, Stephen Farrell 
>> wrote:
>> >
>> > I wonder if we also need to say something about the ech= SVCB
>> > parameter value 5 that's reserved at [1]? Not sure, but maybe
>> > no harm to make that "official" at the same time if possible.
>> > (There could be current code that assumes that 5 in a wire-
>> > format HTTPS RR value maps to 0xff0d within an ECHConfigList
>> > even if that isn't really right.)
>>
>> I’ll check with the dnsops chairs.
>>
>> spt
>> ___
>> TLS mailing list
>> TLS@ietf.org
>> https://www.ietf.org/mailman/listinfo/tls
>>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Question about DTLS for the "no new features" draft

2023-08-08 Thread David Benjamin
On Mon, Aug 7, 2023 at 9:27 PM Eric Rescorla  wrote:

> On Mon, Aug 7, 2023 at 2:50 PM Jonathan Lennox 
> wrote:
>
>> On Aug 6, 2023, at 5:22 PM, Rob Sayre  wrote:
>>
>> On Sun, Aug 6, 2023 at 2:14 PM Eric Rescorla  wrote:
>>
>>> Sure. Though with that said, DTLS-SRTP should use the same code points
>>> for 1.2 and 1.3, so I don't actually know if this is an exception after all.
>>>
>>
>> I think the issue is still there in a spec lawyer kind of way. So, after
>> this draft is published, would we say a new DTLS-SRTP cipher suite is
>> defined for use in DTLS 1.2?
>>
>> That seems like the goal of the Github issue.
>>
>>
>> That was indeed the goal of my initial Github issue, but on further
>> reflection, I’m more concerned.
>>
>> As Achim’s mail indicated, as far as I know wolfSSL is the only library
>> currently with a released DTLS 1.3 implementation, and many of the other
>> common TLS libraries — most notably including the OpenSSL family — don’t
>> seem to have any current plans to do so.
>>
>> If this situation doesn’t change before we need PQ KEMs to enter
>> production, then there’ll be no way to protect DTLS-protected traffic —
>> notably including WebRTC and other DTLS-SRTP traffic — from
>> harvest-now-decrypt-later attacks.
>>
>> Hopefully the eventual need for PQ support will incentivize stack
>> developers to work on DTLS 1.3, but I’m worried.
>>
>> I don’t know if this would warrant actually defining PQ KEMs for DTLS 1.2
>> — will stack implementors implement that if they won’t do DTLS 1.3? — but
>> it’s something to think about.
>>
>
> These seem like good reasons for stack implementors to do DTLS 1.3.
>

Agreed. BoringSSL has not yet implemented DTLS 1.3, but we don't consider
this a reason to backport PQ KEMs to (D)TLS 1.2. The right path for PQ KEMs
in DTLS is DTLS 1.3. When we go to add PQ KEMs to our DTLS uses, we'll take
that route.

As a practical matter, PQ KEMs in (D)TLS 1.2 is messier than it sounds.
ECDHE ServerKeyExchange and ClientKeyExchange have length prefixes that are
too short. Our original CECPQ1 experiment, which predated TLS 1.3, had to
define new cipher suites to get around this. Also keep in mind that,
whether it's 1.2 or 1.3, getting to PQ KEMs (or any other new (D)TLS
feature) will require updating client and server software regardless. The
existing DTLS 1.2 + no PQ deployment will not magically become PQ-ready if
we do a backport.
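
To put numbers on the length-prefix problem, assuming RFC 8422's one-byte
ECPoint length prefix and Kyber768's sizes (both stated here as assumptions,
not taken from this thread):

ECPOINT_MAX = 2**8 - 1        # RFC 8422 ECPoint: opaque point <1..2^8-1>
KYBER768_PUBLIC_KEY = 1184    # bytes
KYBER768_CIPHERTEXT = 1088    # bytes

assert KYBER768_PUBLIC_KEY > ECPOINT_MAX
assert KYBER768_CIPHERTEXT > ECPOINT_MAX
# Neither value fits behind a one-byte length prefix, so reusing the TLS 1.2
# ECDHE key exchange messages as-is isn't an option.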


> -Ekr
>
>
>>
>>
>> -Ekr
>>>
>>>
>>> On Sun, Aug 6, 2023 at 1:59 PM Rob Sayre  wrote:
>>>
 On Sun, Aug 6, 2023 at 11:48 AM Eric Rescorla  wrote:

>
>
> On Sun, Aug 6, 2023 at 9:58 AM Rob Sayre  wrote:
>
>> There's also the fact that the TLS 1.3 was published in August 2018,
>> but DTLS 1.3 wasn't published until April 2022. So, it is kind of
>> reasonable to allow some extra time here.
>>
>> The WG could say this document doesn't apply to DTLS. Another choice
>> would be to say that it does apply to DTLS, but the WG will continue to
>> accept work for DTLS 1.2 that is DTLS-specific. The aim here being that
>> DTLS is not used as an excuse to continue to work on 1.2.
>>
>
> This seems like a fine proposal. However, as a practical matter, there
> are very few changes one could make to DTLS that would not also apply to
> TLS, so aside from DTLS-SRTP cipher suites, I'm not sure how much
> difference it makes.
>

 Makes sense, let's just not try to prove a negative in insisting that
 DTLS-SRTP cipher suites are the only such thing.

 "Further, TLS 1.3 use is widespread, and new protocols should require
 and assume its existence. DTLS 1.3 is a newer specification. New
 algorithms or extensions that apply solely to DTLS, such as DTLS-SRTP
 cipher suites, will be considered for DTLS 1.2."

 thanks,
 Rob


>> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Merkle Tree Certificates

2023-06-05 Thread David Benjamin
On Wed, Mar 22, 2023 at 11:22 AM Ilari Liusvaara 
wrote:

> On Wed, Mar 22, 2023 at 01:54:22PM +0100, Bas Westerbaan wrote:
> > >
> > > Unpopular pages are much more likely to deploy a solution that
> > > doesn't require a parallel CA infrastructure and a cryptographer
> > > on staff.
>
> I don't think the server-side deployment difficulties with this have
> anything to do with parallel CA infrastructure or admins having to
> understand cryptography.
>
>
> > CAs, TLS libraries, certbot, and browsers would need to make changes,
> > but I think we can deploy this without webservers or relying parties
> > having to make any changes if they're already using an ACME client
> > except upgrading their dependencies, which they would need to do
> > anyway to get plain X.509 PQ certs.
>
> I don't agree.
>
> I think deploying this is much much harder than deploying X.509 PQ
> certificates. X.509 PQ certificates are mostly dependency update. This
> looks to require some nontrivial configuration work that can not be
> done completely automatically.
>
> And then in present form, this could be extremely painful for ACME
> clients to implement (on level of complete rewrite for many).
>

It’s true that this would require code changes in more components. But TLS,
ACME, etc., are deployed many more times than they are implemented. As the
code changes happen per software package, hopefully the per-deployment cost
beyond that can be minimal. (Though, of course, that will depend on exactly
how each package's existing configuration interface looks, and how/whether
they apply it to the new thing. Understanding what protocol properties
would make this easy or hard would be very useful, but I also suspect it
depends on a lot of details we've still left as placeholders right now.)

These things also don’t have to happen all at once. It can be a transition
over time, or perhaps some sites just stay with the fast-issuance mechanism
(be it X.509 PQ or something else) if they’re happy with it. Merkle Tree
Certificates themselves cannot be your only certificate type anyway, since
they only work with up-to-date RPs.

To ACME specifically, we definitely don’t want it to be painful for ACME
clients to implement! It’s probably a bit hard to discuss that in the
abstract, with our ACME section being just a placeholder. Perhaps, when
we’ve gotten an initial draft of that, we can figure out which bits we got
wrong and iterate on that?

David
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Merkle Tree Certificates

2023-06-05 Thread David Benjamin
Thanks for such detailed feedback! Responses inline.

On Wed, Mar 22, 2023 at 12:49 PM Ilari Liusvaara 
wrote:

> Some quick comments / ideas:
>
> - I think it would be easier for subscribers to get inclusion proofs
>   from transparency service than certificate authority.
>
>   This is because issuance is heavily asynchronous, whereas most
>   servers assume ACME is essentially synchronous.
>
>   If certificates are canonicalized (this is mostly a matter of ensuring
>   the names are always sorted), this could be endpoint to download known
>   inclusion proofs by certificate hash.
>
>   Or maybe even have both, and subscribers can use whichever is more
>   convenient.
>

We’re currently envisioning that the transparency services will potentially
vary by RP. They’re effectively the RP’s CT policy. The different TSs all
see the same trees, just later than the CA. It seems simpler then to get it
from the CA, which will be the least behind. This also means RPs can adjust
TS preferences without impacting subscribers or CAs at all. The equivalent
of a CT log migration and distrust is much less invasive.

Also, subscribers already talk to CAs via, e.g., ACME, so it seemed natural
to rely on that existing relationship. Especially as subscribers will need
a fallback credential from a CA anyway.

I suppose there’s no reason why the subscriber couldn’t fetch from the TS.
Though I’m not seeing how it would be more convenient. Could you elaborate?


> - I don't think there are any sane uses for >64kB claims, so the
>   claim_info length could be shortened to 16 bits.
>

Works for me. https://github.com/davidben/merkle-tree-certs/pull/29


>   I don't see rule for how claims are sorted within each type,
>   only how different types are sorted.
>
> - If each claim was in its own Claim, then one could maybe even
>   shorten it to 8 bits. Similarly, one could merge ipv4/ipv6 and
>   dns/dns_wildcard.
>
>   This could also simplify sorting: Sort by type, lexicographic
>   sort by claim contents.
>

Thanks! Yeah, the actual Claim bits were just a naive transcription of
X.509 SANs for now without much thought. I filed
https://github.com/davidben/merkle-tree-certs/issues/31 to track those.


> - I don't think anybody is going to use signatures with >64kB keys,
>   so subject_info length could be shortened to 16 bits.
>

Added to https://github.com/davidben/merkle-tree-certs/pull/29


> - What does it mean that in this document the hash is always SHA-256?
>

Just that the hash function used in building the trees, etc. is SHA-256.
(It’s the only ProofType we’ve defined. One could define others, but I
don’t particularly care to.)


> - Apparently issuer id is limited to 32 octets. This could be noted in
>   the definition.
>

Also added to https://github.com/davidben/merkle-tree-certs/pull/29


> - I think it would be easier if lifetime was expressed in batch
>   durations. Then one would not need window size, and especially not
>   handle lifetime / batch_duration not being an integer!
>

I think we’d still need to be able to measure it in both units, but maybe
I’m missing something?

We need something in units of batch duration (currently window size) to
size the signed windows, etc.

But the RP can’t just use the window in lieu of expiry: it can’t simply
assume all batches are valid, because it may be unable to fetch new windows
for a long period of time, such that the old (or all!) batches have fallen
off.

We could do that calculation in batch durations, but then we’d need to
measure the current time in batch numbers, which seemed unintuitive to me.
And once that was in seconds, aligning it on batch duration didn’t seem to
buy much.


> - The root hash being dependent on issuer and batch number iff there
>   are multiple assertions looks very odd.
>
>   Empty assertion list might be special. But this also happens for
>   one assertion.
>

Thanks! That’s indeed inconsistent, we’ll fix it.
https://github.com/davidben/merkle-tree-certs/issues/32


> - I think LabeledWindow should add 64 spaces in front, so it
>   reuses the TLS 1.3 signature format.
>
>   This reduces risks of cross-protocol attack if the key gets
>   reused anyway (despite there being MUST NOT requirement).
>

Filed https://github.com/davidben/merkle-tree-certs/issues/30. I’m slightly
torn on this. The part of me that doesn’t trust people to keep keys
separate wants to do it. But the part of me that’s sick of chasing this
down in every new protocol would rather we just stop pretending using the
same key for multiple things is remotely sensible. :-)
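
For reference, the RFC 8446, Section 4.4.3 layout Ilari is pointing at
would look roughly like this as a sketch (the context string below is a
made-up placeholder, not something the draft defines):

    def labeled_window_signature_input(window_bytes):
        # 64 bytes of 0x20, a context string, a zero separator, then the
        # content, mirroring the TLS 1.3 CertificateVerify input.
        context = b"Merkle Tree Certificates LabeledWindow"  # hypothetical
        return b" " * 64 + context + b"\x00" + window_bytes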


> - Is there reason for type of trust_anchor_data to vary by proof_type?
>   Why not always have MerkleTreeTrustAnchor there?
>

The thinking was to be extensible for other proof types that may not have
the (issuer, number) structure. E.g. perhaps a fast-issuance model that’s
more analogous to X.509 with CT, if we get a meaningful enough improvement?

This isn’t strictly necessary. We could simply 

Re: [TLS] Merkle Tree Certificates

2023-06-05 Thread David Benjamin
On Tue, Mar 14, 2023 at 1:47 PM Watson Ladd  wrote:

> Come embrace the temptations of the Sea-SIDH!
>
> Intermediate certs are rarely used, so that would achieve 204 byte sig
> on intermediate + 64 byte intermediate key + 204 byte sig of EE cert
> since the signing time doesn't matter. Then with SCT and OCSP, it's
> 204 bytes each.
>

I wasn’t able to find a reference to a Sea-SIDH signature scheme. Do you
have a pointer? Do you mean this thing?
https://eprint.iacr.org/2020/1240.pdf

Taking 2 seconds to generate a signature is... certainly a constraint. :-)


> As for the actual proposal, I like the idea of per-protocol subjects.
> I am worried about the way this makes the PKI a more distributed
> system, in the Lamportian sense. A certificate being used successfully
> depends now on the transparency service propagating the batch from the
> CA and the CA creating the batch, and the user-agent, not the site,
> determines what transparency service is used. This makes it much more
> difficult for sites to be sure their certificates will actually work.
>

To some degree, subscribers already rely on this. RPs have various
requirements, and it is up to the CA to provide subscribers with
certificates that meet the requirements. Some requirements can be checked
by the subscriber, but some cannot.

In X.509, the certificate chain and signatures can be checked by the
subscriber. (Although X.509 is so complex and variable that it’s unlikely
the subscriber’s checks exactly matched the RP’s!) But other requirements,
notably future policy actions, cannot. The certificate may later be
revoked, the CA or CT log may be distrusted, etc.

In this proposal, we could have the CA pass the subscriber the signed
window alongside the proof (not currently in the draft, but this is a good
reason to include it). The subscriber can then check the inclusion proof,
hopefully with much less implementation variability than X.509.
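
For a sense of what that check involves, a generic Merkle audit-path
verification is only a few lines (sketch only; the draft's actual tree
hashes structured nodes with issuer/batch context and domain separation,
which this omits):

    import hashlib

    def verify_inclusion(leaf, index, path, root):
        # Recompute the root from the leaf and its audit path, pairing with
        # the sibling on the correct side at each level.
        node = hashlib.sha256(b"\x00" + leaf).digest()
        for sibling in path:
            if index % 2 == 0:
                node = hashlib.sha256(b"\x01" + node + sibling).digest()
            else:
                node = hashlib.sha256(b"\x01" + sibling + node).digest()
            index //= 2
        return node == root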

The subscriber still needs to know if the RP recognizes that batch. These
are more dynamic than X.509 roots, but are covered by the negotiation
mechanism. On mismatch, you just pick another certificate, such as an X.509
one which is as checkable as before. So this part isn’t so much checked as
made moot. (A negotiation mechanism is ultimately “tell me if the RP will
accept this cert, so I can filter down to the ones that work”.)

Finally, subscriber and RP must agree on what, e.g., root hash #42 was. This
flows from CA to TS to RP, which is indeed hard for the subscriber to
directly check. (Though the subscriber could always fetch hashes from known
TSs to compare.) However, a mismatch here means the CA produced a split
view. The responsibilities have shifted, but analogous misbehavior in X.509
+ CT would typically result in a policy action, something the subscriber
already cannot check offline.

David
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Merkle Tree Certificates

2023-06-05 Thread David Benjamin
Hi all,

Sorry for the late reply on all these, and thanks for the feedback so far!
I lost track of this thread as I was putting together slides for IETF 116
and whatnot. I’ll reply to various outstanding emails individually...

On Sat, Mar 11, 2023 at 2:43 PM Stephen Farrell 
wrote:

>
> Hiya,
>
> I had a read and think this is a great topic for
> discussion.
>
> A few points:
>
> - I think we'd benefit from trying to think through
> the dynamics of this, e.g. how many of each entity
> might we see and how'd that differ from the current
> web PKI and possibly affect the web? (It's fine that
> that analysis emerge in time, not asking for it now.)
>

Yup. I think how deployments end up looking will definitely be interesting
to figure out. As you say, how this shakes out will emerge in time, but
sections 9 and 11 contain some initial thoughts.

One thing I think we could have conveyed more clearly is the relationship
between an overall certificate negotiation framework and this draft. We’re
interested in certificate negotiation because we think it’s a good fit for
a host of problems in the PKI, particularly around agility. Notable for
this draft is it gives more room to explore the tradeoff space, since we
can deploy different solutions for different requirements. Merkle Tree
Certificates represent one point in the tradeoff space.

We started with this draft because it was fairly self-contained. I’m hoping
we’ll have a more refined and concrete negotiation write-up next, which
might make some of this clearer. (What’s in there now is somewhat of a
placeholder.)


> - I do think the trust_anchors extension values might
> be better off as e.g. truncated hashes of public keys
> or something like that.
>

That doesn’t quite fit with some directions we’re envisioning, but I agree
having the IDs specified tightly would be nice. How about we put a pin in
this, and when we’ve got the write-up above ready, we can ponder this?


> - Aside from better on-the-wire efficiency, I think
> another reason to examine designs like this is that
> adding multiple public keys and signatures to x.509
> certs (one of the alternative designs) seems like it
> might be a bit of a nightmare, as PKI libraries are
> buggily updated to try to handle that - designs like
> this seem better in terms of keeping the new code in
> a less risky place.
>
> Cheers,
> S.
>
> On 10/03/2023 22:09, David Benjamin wrote:
> > Hi all,
> >
> > I've just uploaded a draft, below, describing several ideas we've been
> > mulling over regarding certificates in TLS. This is a draft-00 with a lot
> > of moving parts, so think of it as the first pass at some of ideas that
> we
> > think fit well together, rather than a concrete, fully-baked system.
> >
> > The document describes a new certificate format based on Merkle Trees,
> > which aims to mitigate the many signatures we send today, particularly in
> > applications that use Certificate Transparency, and as post-quantum
> > signature schemes get large. Four signatures (two SCTs, two X.509
> > signatures) and an intermediate CA's public key gets rather large,
> > particularly with something like Dilithium3's 3,293-byte signatures. This
> > format uses a single Merkle Tree inclusion proof, which we estimate at
> > roughly 600 bytes. (Note that this proposal targets certificate-related
> > signatures but not the TLS handshake signature.)
> >
> > As part of this, it also includes an extensibility and certificate
> > negotiation story that we hope will be useful beyond this particular
> scheme.
> >
> > This isn't meant to replace existing PKI mechanisms. Rather, it's an
> > optional optimization for connections that are able to use it. Where they
> > aren't, you negotiate another certificate. I work on a web browser, so
> this
> > has browsers and HTTPS over TLS in mind, but we hope it, or some ideas in
> > it, will be more broadly useful.
> >
> > That said, we don't expect it's for everyone, and that's fine! With a
> > robust negotiation story, we don't have to limit ourselves to a single
> > answer for all cases at once. Even within browsers and the web, it cannot
> > handle all cases, so we're thinking of this as one of several sorts of
> PKI
> > mechanisms that might be selected via negotiation.
> >
> > Thoughts? We're very eager to get feedback on this.
> >
> > David
> >
> > On Fri, Mar 10, 2023 at 4:38 PM  wrote:
> >
> >>
> >> A new version of I-D, draft-davidben-tls-merkle-tree-certs-00.txt
> >> has been successfully submitted by David Benjamin and posted to the
> >> IETF repository.
> >>
> >> Name:   draft-davidben-tls

Re: [TLS] Servers sending CA names

2023-04-12 Thread David Benjamin
Chrome also uses this to filter the set of client certificates when asking
the user to pick one. We also sometimes use this to figure out what
intermediates to send, in cases where the server does not already have all
its intermediates available. (Though this is not very reliable and
OS-dependent. Client certificate deployments are a bit of a mess.)
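
Schematically, that filtering step is just an intersection against the
advertised CA names (illustrative only; real clients compare DER-encoded
distinguished names and may match any CA in a candidate's chain, not just
the direct issuer):

    def filter_client_certs(candidates, advertised_ca_names):
        # candidates: list of (certificate, DER-encoded issuer name) pairs.
        advertised = set(advertised_ca_names)
        return [cert for cert, issuer in candidates if issuer in advertised]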

Omitting it may be fine in contexts where you expect clients to have only
one possible certificate chain and the a priori knowledge to send only that
one. That can make sense in machine-to-machine
communication, and makes less sense when the client is a human that needs
to make decisions about identities to use.

I agree with Viktor that this isn't any more optional in TLS 1.2 than TLS
1.3. "Optional, and non-empty if present" vs. "mandatory, but possibly
empty" express the same set of states. It's just an encoding difference,
motivated by extensibility and client/server symmetry, not changing client
certificate expectations.

On Wed, Apr 12, 2023 at 4:59 PM Viktor Dukhovni 
wrote:

> On Wed, Apr 12, 2023 at 08:41:31PM +, Salz, Rich wrote:
>
> > Is this generally used?  Would things go badly if we stopped sending
> them?
>
> I take you mean sending CA names as part of a certificate request.
>
> https://datatracker.ietf.org/doc/html/rfc8446#section-4.3.2
> https://datatracker.ietf.org/doc/html/rfc8446#section-4.2.4
>
> Yes, many servers send a non-empty list of CA names as part of
> certificate request, and some clients (notably some Java-based clients)
> fail to complete the handshake if the request does not list an issuer
> associated with any of the client's available certificates.
>
> So servers historically have been able to get away with an empty list,
> hoping that the client will then send the only/default certificate it
> typically has on hand (or not send any, but still continue the
> handshake).
>
> It looks perhaps like CA name lists are "more optional" in TLS 1.3 than
> they were in TLS 1.2, but this impression may be just an artefact of the
> separation of the CA names to a separate extension, rather than an
> actual change of expected client behaviour.
>
> --
> Viktor.
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] TLS 1.2 deprecation

2023-03-30 Thread David Benjamin
post_handshake_auth was only in TLS 1.3 because some folks relied on an
existing (and terrible :-) ) corresponding mechanism in TLS 1.2: trigger a
renegotiation and request a client certificate in the new handshake. I
don't think it makes sense to backport post_handshake_auth to TLS 1.2. Such
a backport would also require much more analysis than the average
extension, since it concerns authentication.

On Fri, Mar 31, 2023 at 5:27 AM Rob Sayre  wrote:

> Hi,
>
> What I noticed is that something close to "post_handshake_auth" has been
> asked for in TLS 1.2.
>
> If you go look at the registry, which of course some people here know
> well, there are a bunch of them that are only defined for TLS 1.3.
>
>
> https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml
>
> Some of them would not make sense in a TLS 1.2 handshake, by my reading.
> So, the drift is already happening, quite apart from new feature
> development.
>
> thanks,
> Rob
>
>
> On Wed, Mar 29, 2023 at 10:05 PM Rob Sayre  wrote:
>
>> Hi,
>>
>> I watched the conversation at the end of this conference here:
>> https://youtu.be/u_sFyz4F7dc
>>
>> It was good. The only thing I would add is that I think client
>> authentication is already much different in 1.3, and that new extensions
>> such as ECH are already not being done for 1.2.
>>
>> The thing to do if you have a strong opinion is to not serve 1.2 traffic.
>> The clients will always have to be accepting for a while. But, if you've
>> worked on the internet for any amount of time, you'll quickly figure out
>> that not serving 1.2 will save you money, unless you are Google scale. Not
>> because it is way slower, but because you can drop old clients.
>>
>> thanks,
>> Rob
>>
>>
>>
>> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] WG Adoption call for draft-sbn-tls-svcb-ech

2023-03-27 Thread David Benjamin
I support adoption.

On Tue, Mar 28, 2023 at 2:20 PM Stephen Farrell 
wrote:

>
>
> On 28/03/2023 05:57, Salz, Rich wrote:
> >> At TLS@IETF116, the sense of the room was that there was WG support to
> adopt draft-sbn-tls-svcb-ech [1]. This message is to confirm the consensus
> in the room. Please indicate whether you do or do not support adoption of
> this I-D by 2359UTC on 18 April 2023. If do not support adoption, please
> indicate why.
> >
> > Strong support.
>
> Yep. No-brainer that one.
>
> S.
>
> >
> > ___
> > TLS mailing list
> > TLS@ietf.org
> > https://www.ietf.org/mailman/listinfo/tls
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Merkle Tree Certificates

2023-03-21 Thread David Benjamin
On Tue, Mar 21, 2023 at 8:01 AM Hubert Kario  wrote:

> On Monday, 20 March 2023 19:54:24 CET, David Benjamin wrote:
> > I don't think flattening is the right way to look at it. See my
> > other reply for a discussion about flattening, and how this does
> > a bit more than that. (It also handles SCTs.)
> >
> > As for RFC 7924, in this context you should think of it as a
> > funny kind of TLS resumption. In clients that talk to many
> > servers[0], the only plausible source of cached information is a
> > previous TLS exchange. Cached info is then: if I previously
> > connected to you and I am willing to correlate that previous
> > connection to this new one, we can re-connect more efficiently.
> > It's a bit more flexible than resumption---it doesn't replace
> > authentication, so we could conceivably use larger lifetimes.
> > But it's broadly the same w.r.t. when it can be used. It doesn't
> > help the first connection to a service, or a service that was
> > connected long enough ago that it's fallen off the cache. And it
> > doesn't help across contexts where we don't want correlation.
> > Within a web browser, things are a bit more partitioned these
> > days,
> > see
> https://github.com/MattMenke2/Explainer---Partition-Network-State/blob/main/README.md
> > and https://github.com/privacycg/storage-partitioning.
>
> Sorry, but as long as the browsers are willing to perform session
> resumption
> I'm not buying the "cached info is a privacy problem".
>

I'm not seeing where this quote comes from. I said it had analogous
properties to resumption, not that it was a privacy problem in the absolute.

The privacy properties of resumption and cached info depend on the
situation. If you were okay correlating the two connections, both are okay
in this regard. If not, then neither is. rfc8446bis discusses this:
https://tlswg.org/tls13-spec/draft-ietf-tls-rfc8446bis.html#appendix-C.4

In browsers, the correlation boundaries (across *all* state, not just TLS)
were once browsing-profile-wide, but they're shifting to this notion of
"site". I won't bore the list with the web's security model, but roughly
the domain part of the top-level (not the same as destination!) URL. See
the links above for details.

That equally impacts resumption and any hypothetical deployment of cached
info. So, yes, within those same bounds, a browser could deploy cached
info. Whether it's useful depends on whether there are many cases where
resumption wouldn't work, but cached info would. (E.g. because resumption
has different security properties than cached info.)


> It also completely ignores the encrypted client hello
>

ECH helps with outside observers correlating your connections, but it
doesn't do anything about the server correlating connections. In the
context of correlation boundaries within a web browser, we care about the
latter too.


> Browser doesn't have to cache the certs since the beginning of time to be
> of benefit, a few hours or even just current boot would be enough:
>
> 1. if it's a page visited once then all the tracking cookies and javascript
>will be an order of magnitude larger download anyway
> 2. if it's a page visited many times, then optimising for the subsequent
>connections is of higher benefit anyway
>

I don't think that's quite the right dichotomy. There are plenty of reasons
to optimize for the first connection, time to first bytes, etc. Indeed,
this WG did just that with False Start and TLS 1.3 itself. (Prior to those,
TLS 1.2 was 2-RTT for the first connection and 1-RTT for resumption.)

I suspect caching for a few hours would not justify cached info because
you may as well use resumption at that point.

> In comparison, this design doesn't depend on this sort of
> > per-destination state and can apply to the first time you talk
> > to a server.
>
> it does depend on complex code instead, that effectively duplicates the
> functionality of existing code
>
> > David
> >
> > [0] If you're a client that only talks to one or two servers,
> > you could imagine getting this cached information pushed
> > out-of-band, similar to how this document pushes some valid tree
> > heads out-of-band. But that doesn't apply to most clients,
> > certainly not a web browser.
>
> web browser could get a list of most commonly accessed pages/cert pairs,
> randomised to some degree by addition of not commonly accessed pages to
> hide if
> the connection is new or not, and make inference about previous visits
> worthless
>

True, we could preload cached info for a global list of common
certificates. I'm personally much more interested in mechanisms that
benefit popular and unpopular pages alike.


> > On Tue, Mar 14, 2023 at 9:46 AM Kampan

Re: [TLS] Merkle Tree Certificates

2023-03-20 Thread David Benjamin
I don't think flattening is the right way to look at it. See my other reply
for a discussion about flattening, and how this does a bit more than that.
(It also handles SCTs.)

As for RFC 7924, in this context you should think of it as a funny kind of
TLS resumption. In clients that talk to many servers[0], the only
plausible source of cached information is a previous TLS exchange. Cached
info is then: if I previously connected to you *and I am willing to
correlate that previous connection to this new one*, we can re-connect more
efficiently. It's a bit more flexible than resumption---it doesn't replace
authentication, so we could conceivably use larger lifetimes. But it's
broadly the same w.r.t. when it can be used. It doesn't help the first
connection to a service, or a service that was connected long enough ago
that it's fallen off the cache. And it doesn't help across contexts where
we don't want correlation. Within a web browser, things are a bit more
partitioned these days, see
https://github.com/MattMenke2/Explainer---Partition-Network-State/blob/main/README.md
and https://github.com/privacycg/storage-partitioning.

In comparison, this design doesn't depend on this sort of per-destination
state and can apply to the first time you talk to a server.

David

[0] If you're a client that only talks to one or two servers, you could
imagine getting this cached information pushed out-of-band, similar to how
this document pushes some valid tree heads out-of-band. But that doesn't
apply to most clients, certainly not a web browser.

On Tue, Mar 14, 2023 at 9:46 AM Kampanakis, Panos  wrote:

> Hi Hubert,
>
> I am not an author of draft-davidben-tls-merkle-tree-certs, but I had some
> feedback on this question:
>
> RFC7924 was a good idea but I don’t think it got deployed. It has the
> disadvantage that it allows for connection correlation and it is also
> challenging to demand a client to either know all its possible destination
> end-entity certs or be able to have a caching mechanism that keeps getting
> updated. Given these challenges and that CAs are more static and less
> (~1500 in number) than leaf certs, we have proposed suppressing the ICAs in
> the chain (draft-kampanakis-tls-scas-latest which replaced
> draft-thomson-tls-sic ) , but not the server cert.
>
> I think draft-davidben-tls-merkle-tree-certs is trying to achieve
> something similar by introducing a Merkle tree structure for certs signed
> by a CA. To me it seems to leverage a Merkle tree structure which "batches
> the public key + identities" the CA issues. Verifiers can just verify the
> tree and thus assume that the public key of the peer it is talking to is
> "certified by the tree CA". The way I see it, this construction flattens
> the PKI structure, and issuing CA's are trusted now instead of a more
> limited set of roots. This change is not trivial in my eyes, but the end
> goal is similar, to shrink the amount of auth data.
>
>
>
> -Original Message-
> From: TLS  On Behalf Of Hubert Kario
> Sent: Monday, March 13, 2023 11:08 AM
> To: David Benjamin 
> Cc:  ; Devon O'Brien 
> Subject: RE: [EXTERNAL][TLS] Merkle Tree Certificates
>
> CAUTION: This email originated from outside of the organization. Do not
> click links or open attachments unless you can confirm the sender and know
> the content is safe.
>
>
>
> Why not rfc7924?
>
> On Friday, 10 March 2023 23:09:10 CET, David Benjamin wrote:
> > Hi all,
> >
> > I've just uploaded a draft, below, describing several ideas we've been
> > mulling over regarding certificates in TLS. This is a
> > draft-00 with a lot of moving parts, so think of it as the first pass
> > at some of ideas that we think fit well together, rather than a
> > concrete, fully-baked system.
> >
> > The document describes a new certificate format based on Merkle Trees,
> > which aims to mitigate the many signatures we send today, particularly
> > in applications that use Certificate Transparency, and as post-quantum
> > signature schemes get large. Four signatures (two SCTs, two X.509
> > signatures) and an intermediate CA's public key gets rather large,
> > particularly with something like Dilithium3's 3,293-byte signatures.
> > This format uses a single Merkle Tree inclusion proof, which we
> > estimate at roughly 600 bytes. (Note that this proposal targets
> > certificate-related signatures but not the TLS handshake signature.)
> >
> > As part of this, it also includes an extensibility and certificate
> > negotiation story that we hope will be useful beyond this particular
> > scheme.
> >
> > This isn't meant to replace existing PKI mechanisms. Rather, it's an
> > optional optimization for connections that are a

Re: [TLS] Merkle Tree Certificates

2023-03-20 Thread David Benjamin
re in similar contexts.
>
>
>
> - To me this draft eliminates the need for a PKI and basically makes the
> structure flat. Each CA issues certs in the form of a batched tree. Relying
> parties that “trust and are aware” of this issuing CA’s tree can verify the
> signed window structure and then trust it. So in a TLS handshake we would
> have (1 subscriber public key + 2 signatures + some relatively small tree
> structure) compared to (1 signature + (3 sigs + 1 public key) for server
> cert + (1 Sig + 1 Public key) per ICA cert in the chain). If we borrowed
> the same flat PKI logic though and started “trusting” on a per issuer CA
> basis then the comparison becomes (1 public key + 2 signatures + some small
> tree structure) vs (1 public key + 4 sigs). So we are saving 2 PQ sig minus
> the small tree structure size . Am I misunderstanding the premise here?
>
>
>
>
>
>
>
> *From:* TLS  *On Behalf Of * David Benjamin
> *Sent:* Friday, March 10, 2023 5:09 PM
> *To:*  
> *Cc:* Devon O'Brien 
> *Subject:* [EXTERNAL] [TLS] Merkle Tree Certificates
>
>
>
> *CAUTION*: This email originated from outside of the organization. Do not
> click links or open attachments unless you can confirm the sender and know
> the content is safe.
>
>
>
> Hi all,
>
> I've just uploaded a draft, below, describing several ideas we've been
> mulling over regarding certificates in TLS. This is a draft-00 with a lot
> of moving parts, so think of it as the first pass at some of ideas that we
> think fit well together, rather than a concrete, fully-baked system.
>
> The document describes a new certificate format based on Merkle Trees,
> which aims to mitigate the many signatures we send today, particularly in
> applications that use Certificate Transparency, and as post-quantum
> signature schemes get large. Four signatures (two SCTs, two X.509
> signatures) and an intermediate CA's public key gets rather large,
> particularly with something like Dilithium3's 3,293-byte signatures. This
> format uses a single Merkle Tree inclusion proof, which we estimate at
> roughly 600 bytes. (Note that this proposal targets certificate-related
> signatures but not the TLS handshake signature.)
>
> As part of this, it also includes an extensibility and certificate
> negotiation story that we hope will be useful beyond this particular scheme.
>
> This isn't meant to replace existing PKI mechanisms. Rather, it's an
> optional optimization for connections that are able to use it. Where they
> aren't, you negotiate another certificate. I work on a web browser, so this
> has browsers and HTTPS over TLS in mind, but we hope it, or some ideas in
> it, will be more broadly useful.
>
> That said, we don't expect it's for everyone, and that's fine! With a
> robust negotiation story, we don't have to limit ourselves to a single
> answer for all cases at once. Even within browsers and the web, it cannot
> handle all cases, so we're thinking of this as one of several sorts of PKI
> mechanisms that might be selected via negotiation.
>
> Thoughts? We're very eager to get feedback on this.
>
> David
>
>
>
> On Fri, Mar 10, 2023 at 4:38 PM  wrote:
>
>
> A new version of I-D, draft-davidben-tls-merkle-tree-certs-00.txt
> has been successfully submitted by David Benjamin and posted to the
> IETF repository.
>
> Name:   draft-davidben-tls-merkle-tree-certs
> Revision:   00
> Title:  Merkle Tree Certificates for TLS
> Document date:  2023-03-10
> Group:  Individual Submission
> Pages:  45
> URL:
> https://www.ietf.org/archive/id/draft-davidben-tls-merkle-tree-certs-00.txt
> Status:
> https://datatracker.ietf.org/doc/draft-davidben-tls-merkle-tree-certs/
> Html:
> https://www.ietf.org/archive/id/draft-davidben-tls-merkle-tree-certs-00.html
> Htmlized:
> https://datatracker.ietf.org/doc/html/draft-davidben-tls-merkle-tree-certs
>
>
> Abstract:
>This document describes Merkle Tree certificates, a new certificate
>type for use with TLS.  A relying party that regularly fetches
>information from a transparency service can use this certificate type
>as a size optimization over more conventional mechanisms with post-
>quantum signatures.  Merkle Tree certificates integrate the roles of
>X.509 and Certificate Transparency, achieving comparable security
>properties with a smaller message size, at the cost of more limited
>applicability.
>
>
>
>
> The IETF Secretariat
>
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


[TLS] Merkle Tree Certificates

2023-03-10 Thread David Benjamin
Hi all,

I've just uploaded a draft, below, describing several ideas we've been
mulling over regarding certificates in TLS. This is a draft-00 with a lot
of moving parts, so think of it as the first pass at some of the ideas that we
think fit well together, rather than a concrete, fully-baked system.

The document describes a new certificate format based on Merkle Trees,
which aims to mitigate the many signatures we send today, particularly in
applications that use Certificate Transparency, and as post-quantum
signature schemes get large. Four signatures (two SCTs, two X.509
signatures) and an intermediate CA's public key gets rather large,
particularly with something like Dilithium3's 3,293-byte signatures. This
format uses a single Merkle Tree inclusion proof, which we estimate at
roughly 600 bytes. (Note that this proposal targets certificate-related
signatures but not the TLS handshake signature.)
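
The ~600-byte figure is roughly what a SHA-256 audit path over a large batch
works out to. Back of the envelope, with the batch size being an assumption
purely for illustration:

    import math

    HASH_LEN = 32           # SHA-256
    BATCH_SIZE = 1_000_000  # assumed batch size, for illustration only

    proof_len = math.ceil(math.log2(BATCH_SIZE)) * HASH_LEN
    print(proof_len)        # 20 levels * 32 bytes = 640 bytes, before framing

    # Versus the X.509 + CT path with Dilithium3's 3,293-byte signatures:
    # two SCTs plus two X.509 signatures alone come to 4 * 3293 = 13,172 bytes.
    print(4 * 3293)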

As part of this, it also includes an extensibility and certificate
negotiation story that we hope will be useful beyond this particular scheme.

This isn't meant to replace existing PKI mechanisms. Rather, it's an
optional optimization for connections that are able to use it. Where they
aren't, you negotiate another certificate. I work on a web browser, so this
has browsers and HTTPS over TLS in mind, but we hope it, or some ideas in
it, will be more broadly useful.

That said, we don't expect it's for everyone, and that's fine! With a
robust negotiation story, we don't have to limit ourselves to a single
answer for all cases at once. Even within browsers and the web, it cannot
handle all cases, so we're thinking of this as one of several sorts of PKI
mechanisms that might be selected via negotiation.

Thoughts? We're very eager to get feedback on this.

David

On Fri, Mar 10, 2023 at 4:38 PM  wrote:

>
> A new version of I-D, draft-davidben-tls-merkle-tree-certs-00.txt
> has been successfully submitted by David Benjamin and posted to the
> IETF repository.
>
> Name:   draft-davidben-tls-merkle-tree-certs
> Revision:   00
> Title:  Merkle Tree Certificates for TLS
> Document date:  2023-03-10
> Group:  Individual Submission
> Pages:  45
> URL:
> https://www.ietf.org/archive/id/draft-davidben-tls-merkle-tree-certs-00.txt
> Status:
> https://datatracker.ietf.org/doc/draft-davidben-tls-merkle-tree-certs/
> Html:
> https://www.ietf.org/archive/id/draft-davidben-tls-merkle-tree-certs-00.html
> Htmlized:
> https://datatracker.ietf.org/doc/html/draft-davidben-tls-merkle-tree-certs
>
>
> Abstract:
>This document describes Merkle Tree certificates, a new certificate
>type for use with TLS.  A relying party that regularly fetches
>information from a transparency service can use this certificate type
>as a size optimization over more conventional mechanisms with post-
>quantum signatures.  Merkle Tree certificates integrate the roles of
>X.509 and Certificate Transparency, achieving comparable security
>properties with a smaller message size, at the cost of more limited
>applicability.
>
>
>
>
> The IETF Secretariat
>
>
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] consensus call: deprecate all FFDHE cipher suites

2022-12-17 Thread David Benjamin
It is, however, mentioned throughout the actual text of the document,
assuming we're both looking at draft-ietf-tls-deprecate-obsolete-kex-01. I
think the document describes its current change just fine. I asked only
because I wasn't sure which the consensus call was about, since that isn't
yet reflected in the document. (The document just says the group size must
be at least 2048 bits in TLS 1.2, and then notes that TLS 1.3 satisfies
that already. We're now talking here about deprecating the DHE ciphers
altogether because, without negotiation, constraining group size isn't
actually viable.)

It sounds like this consensus call was indeed just about TLS 1.2,
the more narrowly-scoped option, so I consider my question satisfied. :-)

On Sat, Dec 17, 2022 at 12:03 PM Yaron Sheffer 
wrote:

> Hi Carrick,
>
>
>
> While this is clear to the authors, it is not very clear in the document.
> Even though the document only applies to TLS 1.2, TLS 1.2 (the version
> number) is not mentioned in the doc title, in the abstract or in the
> introduction.
>
>
>
> Thanks,
>
> Yaron
>
>
>
> *From: *TLS  on behalf of Carrick Bartle <
> cbartle...@gmail.com>
> *Date: *Thursday, 15 December 2022 at 20:15
> *To: *David Benjamin 
> *Cc: *TLS List 
> *Subject: *Re: [TLS] consensus call: deprecate all FFDHE cipher suites
>
>
>
> Hi David,
>
>
>
> My understanding is that we're only discussing deprecating DHE for 1.2.
> 1.3 is out of scope for this document.
>
>
>
> Carrick
>
>
>
>
>
> On Tue, Dec 13, 2022 at 10:06 AM David Benjamin 
> wrote:
>
> Small clarification question: is this about just FFDHE in TLS 1.2, i.e.
> the TLS_DHE_* cipher suites, or also the ffdhe* NamedGroup values as used
> in TLS 1.3?
>
> I support deprecating the TLS_DHE_* ciphers in TLS 1.2. Indeed, we removed
> them from Chrome back in 2016
> <https://groups.google.com/a/chromium.org/g/blink-dev/c/ShRaCsYx4lk/m/46rD81AsBwAJ>
>  and
> from BoringSSL not too long afterwards.
>
>
>
> The DHE construction in TLS 1.2 was flawed in failing to negotiate groups.
> The Logjam <https://weakdh.org/> attack should not have mattered and
> instead was very difficult to mitigate without just dropping DHE entirely.
> The lack of negotiation also exacerbates the DoS risks with DHE in much the
> same way. It is also why the client text in the current draft
> <https://www.ietf.org/archive/id/draft-ietf-tls-deprecate-obsolete-kex-01.html#section-3-2.2>
> ("The group size is at least 2048 bits"), and the previous one
> <https://www.ietf.org/archive/id/draft-ietf-tls-deprecate-obsolete-kex-00.html#section-3-2.2>
> ("The group is one of the following well-known groups described in
> [RFC7919]: ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192") are not
> easily implementable. By the time we've gotten an unsatisfying group from
> the server, it's too late to change parameters. Trying with a new
> connection and different parameters is also problematic because of
> downgrade attacks. A correct scheme would have been defined to only use
> NamedGroup values, and so the server could pick another option if no groups
> were in common.
>
>
>
> RFC 7919 should have fixed this, but it too was flawed: it reused the
> cipher suites before, making it impossible to filter out old servers. See
> these discussions:
>
> https://mailarchive.ietf.org/arch/msg/tls/I3hATzFWwkc2GZqt3hB8QX4c-CA/
>
> https://mailarchive.ietf.org/arch/msg/tls/DzazUXCUZDUpVgBPVHOwatb65dA/
>
> https://mailarchive.ietf.org/arch/msg/tls/bAOJD281iGc2HuEVq0uUlpYL2Mo/
>
>
>
> Additionally, the shared secret drops leading zeros, which leaks a timing
> side channel as a result. Secrets should be fixed-width. See
> https://raccoon-attack.com/ and
> https://github.com/tlswg/tls13-spec/pull/462
>
>
>
> At this point, fixing all this with a protocol change no longer makes
> sense. Any change we make now won't affect existing deployments. Any update
> that picks up the protocol change may as well pick up TLS 1.3 with the ECDH
> groups. Thus the best option is to just deprecate them, so deployments can
> know this is not the direction to go.
>
>
>
> Of course, some deployments may have different needs. I'm sure there are
> still corners of the world that still need to carry SSL 3.0 with RC4
> despite RFC 7465 and RFC 7568. For instance, during the meeting, we
> discussed how opportunistic encryption needs are sometimes different, which
> is already generically covered by RFC 7435
> <https://www.rfc-editor.org/rfc/rfc7435#section-3> ("OSS protocols may
> employ more liberal settings than would be best practice [...]"

Re: [TLS] consensus call: deprecate all FFDHE cipher suites

2022-12-13 Thread David Benjamin
Small clarification question: is this about just FFDHE in TLS 1.2, i.e. the
TLS_DHE_* cipher suites, or also the ffdhe* NamedGroup values as used in
TLS 1.3?

I support deprecating the TLS_DHE_* ciphers in TLS 1.2. Indeed, we removed
them from Chrome back in 2016

and
from BoringSSL not too long afterwards.

The DHE construction in TLS 1.2 was flawed in failing to negotiate groups.
The Logjam  attack should not have mattered and
instead was very difficult to mitigate without just dropping DHE entirely.
The lack of negotiation also exacerbates the DoS risks with DHE in much the
same way. It is also why the client text in the current draft

("The group size is at least 2048 bits"), and the previous one

("The group is one of the following well-known groups described in
[RFC7919]: ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192") are not
easily implementable. By the time we've gotten an unsatisfying group from
the server, it's too late to change parameters. Trying with a new
connection and different parameters is also problematic because of
downgrade attacks. A correct scheme would have been defined to only use
NamedGroup values, and so the server could pick another option if no groups
were in common.
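
Schematically, that is nothing more than the supported_groups-style
intersection TLS 1.3 already does (sketch only):

    def pick_group(client_offered, server_supported):
        # With NamedGroup-based negotiation the server intersects the
        # client's offered groups with its own and can fall back to another
        # key exchange if nothing overlaps, rather than committing to
        # arbitrary server-chosen DH parameters.
        for group in server_supported:  # server preference order
            if group in client_offered:
                return group
        return None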

RFC 7919 should have fixed this, but it too was flawed: it reused the same
cipher suites as before, making it impossible to filter out old servers. See
these discussions:
https://mailarchive.ietf.org/arch/msg/tls/I3hATzFWwkc2GZqt3hB8QX4c-CA/
https://mailarchive.ietf.org/arch/msg/tls/DzazUXCUZDUpVgBPVHOwatb65dA/
https://mailarchive.ietf.org/arch/msg/tls/bAOJD281iGc2HuEVq0uUlpYL2Mo/

Additionally, the shared secret drops leading zeros, which leaks a timing
side channel as a result. Secrets should be fixed-width. See
https://raccoon-attack.com/ and https://github.com/tlswg/tls13-spec/pull/462
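
The fix is simply to encode the shared secret at the full width of the
prime instead of stripping leading zeros; a sketch of the difference:

    def shared_secret_stripped(z):
        # TLS 1.2 FFDHE style: leading zero bytes are dropped, so the length
        # (and thus downstream timing) can vary with the secret.
        return z.to_bytes((z.bit_length() + 7) // 8, "big")

    def shared_secret_fixed_width(z, p):
        # Fixed width: always the byte length of the prime p, as TLS 1.3
        # does for its key shares.
        return z.to_bytes((p.bit_length() + 7) // 8, "big")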

At this point, fixing all this with a protocol change no longer makes
sense. Any change we make now won't affect existing deployments. Any update
that picks up the protocol change may as well pick up TLS 1.3 with the ECDH
groups. Thus the best option is to just deprecate them, so deployments can
know this is not the direction to go.

Of course, some deployments may have different needs. I'm sure there are
still corners of the world that need to carry SSL 3.0 with RC4
despite RFC 7465 and RFC 7568. For instance, during the meeting, we
discussed how opportunistic encryption needs are sometimes different, which
is already generically covered by RFC 7435
 ("OSS protocols may
employ more liberal settings than would be best practice [...]"). All that
is fine and does not conflict with deprecating. These modes do not meet the
overall standard expected for TLS modes, so this WG should communicate that.

I'm somewhere between supportive and ambivalent on the ffdhe* NamedGroup
values in TLS 1.3. We do not expect to ever implement them in BoringSSL,
and their performance would be quite a DoS concern if we ever did. But that
construction is not as deeply flawed as the TLS 1.2 construction.

On Tue, Dec 13, 2022 at 9:46 AM Sean Turner  wrote:

> During the tls@IETF 115 session topic covering
> draft-ietd-tls-deprecate-obsolete-kex, the sense of the room was that there
> was support to deprecate all FFDHE cipher suites including well-known
> groups. This message starts the process to judge whether there is consensus
> to deprecate all FFDHE cipher suites including those well-known groups.
> Please indicate whether you do or do not support deprecation of FFDHE
> cipher suites by 2359UTC on 6 January 2023. If do not support deprecation,
> please indicate why.
>
> NOTE: We had an earlier consensus call on this topic when adopting
> draft-ietd-tls-deprecate-obsolete-kex, but the results were inconclusive.
> If necessary, we will start consensus calls on other issues in separate
> threads.
>
> Cheers,
> Chris, Joe, and Sean
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] RFC 5746 applicable for session resumption?

2022-09-18 Thread David Benjamin
The exact contents and structure of StatePlaintext and the ticket itself are up
to the implementation to decide. This format is merely a recommendation or
example. The only interop requirements are that the server maintain enough
state that it can correctly resume a session on the subsequent request.
OpenSSL, for example, uses a different serialization that includes bits of
ASN.1. Indeed the spec specifically says:

   Other TLS extensions may require the inclusion of additional data in
   the StatePlaintext structure.

So, no, you are not intended to take that structure as the literal,
complete format.

However, while other TLS extensions may require additional data, I believe
you are also misreading RFC 5746. There is no such requirement to retain
client_verify_data. client_verify_data is remembered across
*renegotiations* within
a *single connection*, not for *resumptions* across *different* connections.
Indeed RFC 5746, section 3.1 explicitly says:

   Both client and server need to store three additional values for each
   TLS connection state (see RFC 5246, Section 6.1).  Note that these
   values are specific to connection (not a TLS session cache entry).

Renegotiation and resumption are not the same thing. Renegotiation is when
you perform multiple handshakes *within a single connection*. Resumption is
when, for an individual handshake, you carry over key material and other
state from a previous connection as an optimization. It is possible for a
renegotiation handshake to be a full or resumption handshake, but RFC 5746
applies independently of this. Sections 3.4 and 3.6 say:

   Note that this section [3.4] and Section 3.5 apply to both full
handshakes
   and session resumption handshakes.

   Note that this section [3.6] and Section 3.7 apply to both full
handshakes
   and session-resumption handshakes.

This applies to both session-ID-based resumption and session-ticket-based
resumption. However, this does *not* mean you retain client_verify_data and
server_verify_data in the session state. You maintain it in the *connection*.
Whatever the previous handshake at the connection was, you use its Finished
messages as the next handshake's renegotiation_info values. (All applicable
handshakes, full or resumption, ticket or ID, have Finished messages.)
Maintaining it in the session state wouldn't be useful because a session
may span connections, and that's not the binding RFC 5746 is intended to
provide. That is, suppose connection C1 handshakes and establishes session
S1, sending Finished messages F1. If connection C2 handshakes and happens
to resume session S1, you *do not* use F1 as the renegotiation_info for C2. It
is even possible that, within a single connection, handshake 3 resumes a
session established by handshake 1. Even so, you use handshake 2's Finished
messages in handshake 3, not handshake 1.
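
A sketch of the bookkeeping that implies, with verify_data tracked per
connection rather than per session (names are illustrative):

    class Connection:
        def __init__(self):
            # Empty before the first handshake on this connection.
            self.client_verify_data = b""
            self.server_verify_data = b""

        def renegotiation_info(self):
            # What the next handshake on this connection uses, regardless of
            # which session (if any) that handshake resumes.
            return self.client_verify_data + self.server_verify_data

        def handshake_complete(self, client_finished, server_finished):
            # Updated from this handshake's Finished messages, whether it was
            # full or resumption, ticket- or session-ID-based.
            self.client_verify_data = client_finished
            self.server_verify_data = server_finished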


On Fri, Sep 16, 2022 at 7:15 AM Fries, Steffen 
wrote:

> Hi Viktor,
>
> Thank you for the info. Regarding the information in the ticket, I was
> looking at the recommended ticket structure in RFC 5077 section 4 (
> https://datatracker.ietf.org/doc/html/rfc5077#section-4). There is the
> encrypted_state mentioned, which contains the encrypted information stated
> in the structures in section 4. For the renegotiation extension
> verification from RFC 5746 section 3.7 (
> https://datatracker.ietf.org/doc/html/rfc5746#section-3.7), the server
> must have the client_verify_data, which is not part of the ticket in the
> StatePlaintext structure. That was the reason for assuming that the
> renegotiation extension may not be used in the case of ticket based
> resumption. If the server puts this information (from the Finish message)
> into the ticket, it could reconstruct it. Maybe I was taking the section 4
> of RFC 5077  to literally.
>
> Best regards
> Steffen
>
> > -Original Message-
> > From: TLS  On Behalf Of Viktor Dukhovni
> > Sent: Donnerstag, 15. September 2022 15:42
> > To: tls@ietf.org
> > Subject: Re: [TLS] RFC 5746 applicable for session resumption?
> >
> > On Thu, Sep 15, 2022 at 01:16:33PM +, Fries, Steffen wrote:
> >
> > > I was just double checking if there was an answer to the question of
> > > using the TLS renegotiation extension from RFC 5746 in the context of
> > > TLS session resumption. As stated below, based on the RFC it is not
> > > crystal clear if it applies. In general I would think yes, but only
> > > for session resumption based on the sessionID, not based on tickets.
> >
> > There should be no difference between (server-side) stateful and
> stateless
> > resumption.  The server should serialise into the session ticket
> sufficient
> > information to allow it to fully recover the session, as though it were
> cached
> > locally to facilitate stateful resumption.
> >
> > This is the case at least with OpenSSL, the session ticket contains and
> encrypted
> > and MACed serialised SSL_SESSION object, in exactly the same form as it
> would
> > have in a server-side 

Re: [TLS] [EXTERNAL] Opt-in schema for client identification

2022-09-16 Thread David Benjamin
Depending on what the server does, there may also be downgrade
implications, if the server uses some unauthenticated prior state to
influence parameter selection.

On Fri, Sep 16, 2022 at 11:53 AM David Benjamin 
wrote:

> I too am not seeing the use case here. Could you elaborate?
>
> Since browsers were mentioned as an example, when Chrome makes several
> connections in a row (e.g. to measure impacts of a removal more
> accurately), we generally do *not* expect the server to change its
> selection algorithm across the two connections. A cleartext correlator
> between different requests like this would also be a privacy concern and
> seems to run counter to the work in RFC 8446, appendix C.4.
>
> On Fri, Sep 16, 2022 at 10:09 AM Andrei Popov  40microsoft@dmarc.ietf.org> wrote:
>
>>
>>- Server can distinguish the client and alter some parameters in
>>response to make the new connection successful.
>>
>> A TLS server would typically choose either server-preferred parameters
>> (cipher suite, EC curve, etc.) among those advertised by the client, or
>> honor the client’s preferences.
>>
>> Can you give some examples of what a TLS server would alter, to make the
>> new connection successful, assuming the 2nd ClientHello has the same list
>> of options as the 1st one?
>>
>> Basically, what types of interop failures is this cookie intended to
>> resolve?
>>
>>
>>
>>- Modern real-life applications (e.g. browsers) may perform
>>several handshakes in a row until the connection to the server is finally
>>rejected.
>>
>> Some TLS clients will vary their offered TLS parameters between these
>> connection attempts.
>>
>>
>>
>> Cheers,
>>
>>
>>
>> Andrei
>>
>>
>>
>> *From:* TLS  *On Behalf Of * Dmitry Belyavsky
>> *Sent:* Friday, September 16, 2022 4:32 AM
>> *To:* TLS Mailing List 
>> *Subject:* [EXTERNAL] [TLS] Opt-in schema for client identification
>>
>>
>>
>> Dear colleagues,
>>
>>
>>
>> I'd like to suggest an opt-in cookie-style schema allowing the server to
>> identify the client in case when a client performs several unsuccessful
>> connection attempts.
>>
>>
>>
>> Modern real-life applications (e.g. browsers) may perform
>> several handshakes in a row until the connection to the server is finally
>> rejected. It may make sense to provide different handshake parameters on
>> the server side on the consequent attempts.
>>
>>
>>
>> To distinguish the same client from several different clients, it may be
>> useful to add a cookie-style extension in ClientHello. The server responds
>> with an encrypted extension containing a random value in a ServerHello. If
>> the connection fails, a client may send a value received from the server in
>> the next connection attempt. Server can distinguish the client and alter
>> some parameters in response to make the new connection successful.
>>
>>
>>
>> The schema differs from the current session/tickets mechanism because the
>> current mechanism implies session resumption only for successfully
>> completed handshakes.
>>
>>
>>
>> As the schema is opt-in, it doesn't provide any extra surveillance
>> opportunities.
>>
>>
>>
>> I understand that the proposed schema may badly work with CDNs.
>>
>>
>>
>> If there is an interest to my proposal, I could draft it and present on
>> the upcoming IETF meeting.
>>
>>
>>
>> --
>>
>> SY, Dmitry Belyavsky
>> ___
>> TLS mailing list
>> TLS@ietf.org
>> https://www.ietf.org/mailman/listinfo/tls
>>
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [EXTERNAL] Opt-in schema for client identification

2022-09-16 Thread David Benjamin
I too am not seeing the use case here. Could you elaborate?

Since browsers were mentioned as an example, when Chrome makes several
connections in a row (e.g. to measure impacts of a removal more
accurately), we generally do *not* expect the server to change its
selection algorithm across the two connections. A cleartext correlator
between different requests like this would also be a privacy concern and
seems to run counter to the work in RFC 8446, appendix C.4.

On Fri, Sep 16, 2022 at 10:09 AM Andrei Popov  wrote:

>
>- Server can distinguish the client and alter some parameters in
>response to make the new connection successful.
>
> A TLS server would typically choose either server-preferred parameters
> (cipher suite, EC curve, etc.) among those advertised by the client, or
> honor the client’s preferences.
>
> Can you give some examples of what a TLS server would alter, to make the
> new connection successful, assuming the 2nd ClientHello has the same list
> of options as the 1st one?
>
> Basically, what types of interop failures is this cookie intended to
> resolve?
>
>
>
>- Modern real-life applications (e.g. browsers) may perform
>several handshakes in a row until the connection to the server is finally
>rejected.
>
> Some TLS clients will vary their offered TLS parameters between these
> connection attempts.
>
>
>
> Cheers,
>
>
>
> Andrei
>
>
>
> *From:* TLS  *On Behalf Of * Dmitry Belyavsky
> *Sent:* Friday, September 16, 2022 4:32 AM
> *To:* TLS Mailing List 
> *Subject:* [EXTERNAL] [TLS] Opt-in schema for client identification
>
>
>
> Dear colleagues,
>
>
>
> I'd like to suggest an opt-in cookie-style schema allowing the server to
> identify the client in case when a client performs several unsuccessful
> connection attempts.
>
>
>
> Modern real-life applications (e.g. browsers) may perform
> several handshakes in a row until the connection to the server is finally
> rejected. It may make sense to provide different handshake parameters on
> the server side on the consequent attempts.
>
>
>
> To distinguish the same client from several different clients, it may be
> useful to add a cookie-style extension in ClientHello. The server responds
> with an encrypted extension containing a random value in a ServerHello. If
> the connection fails, a client may send a value received from the server in
> the next connection attempt. Server can distinguish the client and alter
> some parameters in response to make the new connection successful.
>
>
>
> The schema differs from the current session/tickets mechanism because the
> current mechanism implies session resumption only for successfully
> completed handshakes.
>
>
>
> As the schema is opt-in, it doesn't provide any extra surveillance
> opportunities.
>
>
>
> I understand that the proposed schema may badly work with CDNs.
>
>
>
> If there is an interest to my proposal, I could draft it and present on
> the upcoming IETF meeting.
>
>
>
> --
>
> SY, Dmitry Belyavsky
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] draft-deprecate-obsolete-kex - Comments from WG Meeting

2022-08-01 Thread David Benjamin
Solutions which require software changes to both sides may as well apply
that software change to TLS 1.3, or even just TLS 1.2 ECDHE. (RFC 7919
could also have been such an option but, per the meeting discussion, it was
defined wrong, so it is not. So it goes.)

Skimming the TLS-LTS formulation, it seems like it'd have the same problem
as 7919 in this context anyway. Any negotiation-based solution must work
correctly when the feature is *and isn't* negotiated. Reusing the same
cipher suites forces the client to offer DHE in the problematic mode too.
(Also if we're making a new construction, it should be NamedGroup code
points, not spelled out params.)

Regardless, I don't think it's worth the time to define and deploy a fixed
variant of TLS 1.2 DHE. We've already defined a successor twice over.

On Sun, Jul 31, 2022 at 3:28 AM Peter Gutmann 
wrote:

> Ilari Liusvaara  writes:
>
> >Unfortunately, that does not work because it would require protocol
> >modifications requiring coordinated updates to both clients and servers.
>
> I was thinking of it more as a smoke-em-if-you-got-em option, since -LTS
> is by
> negotiation it'd be something to the effect that if you're using -LTS then
> you're covered, otherwise do X.
>
> Peter.
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


  1   2   3   4   5   >