[TLS] Re: Curve-popularity data?

2024-06-05 Thread Dennis Jackson

Hi Peter, Mike

Peter Gutmann wrote:


Just because it's possible to rules-lawyer your way around something doesn't
make it valid (I also see nothing in the spec saying a TLS 1.3 implementation
can't reformat your hard drive, for example, so presumably that's OK too).
The point is that P256 is a MTI algorithm and Chrome doesn't provide any MTI
keyex in its client hello, making it a noncompliant TLS 1.3 implementation.


As Nick quoted from the spec:


A TLS-compliant application MUST support key exchange with secp256r1 (NIST 
P-256)
Chrome advertises support for P-256 in the supported groups extension. 
As a factual matter, Chrome can successfully connect to a site that only 
implements support for P-256. I cannot find any basis for Peter's claims 
in the spec.
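For what it's worth, the interoperability mechanism here is just RFC 8446's
HelloRetryRequest (Section 4.1.4). A minimal sketch in Python (group names
are illustrative labels rather than real wire codepoints, and this is a toy,
not a TLS stack):

    CLIENT_SUPPORTED_GROUPS = ["X25519Kyber768", "x25519", "secp256r1"]
    CLIENT_INITIAL_KEY_SHARES = {"X25519Kyber768", "x25519"}

    def server_select(server_groups, offered_groups, offered_shares):
        # Walk the client's preference list; a real server may apply its own.
        for group in offered_groups:
            if group in server_groups:
                if group in offered_shares:
                    return ("accept", group)           # key exchange completes now
                return ("hello_retry_request", group)  # ask client for this share
        return ("alert_handshake_failure", None)       # no common group at all

    # A server that only implements P-256 still negotiates successfully:
    action, group = server_select({"secp256r1"},
                                  CLIENT_SUPPORTED_GROUPS,
                                  CLIENT_INITIAL_KEY_SHARES)
    assert (action, group) == ("hello_retry_request", "secp256r1")
    # The client then resends its ClientHello with a secp256r1 key_share.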


Ekr wrote:


One more thing: we are finalizing RFC 8446-bis right now, so if there is
WG consensus to require that clients offer all MTI curves in the key_shares
of their initial CH, then that would be a straightforward text change.


I think we are closer to going in the other direction: allowing TLS 1.3 
spec-compliant implementations aiming at post-quantum support to drop 
support for P-256 entirely.


Best,
Dennis

On 05/06/2024 14:34, Peter Gutmann wrote:

Mike Shaver  writes:


You mentioned in another message that some embedded TLS implementations also
omit MTI support for code size or attack surface reasons.

They don't omit MTI support, they *only* support MTI (think Grigg's Law,
"there is only one mode and that is secure").  So when faced with an
implementation that doesn't, they can't talk to each other.


do you have any sense of why Chrome chose to omit this MTI support?

I suspect it's just because Google does whatever Google wants to (see e.g.
https://fy.blackhats.net.au/blog/2024-04-26-passkeys-a-shattered-dream/,
section "The Warnings").  This may not be politically expedient to say out
loud :-).

Peter.


[TLS] Re: Curve-popularity data?

2024-06-04 Thread Dennis Jackson

On 03/06/2024 17:25, D. J. Bernstein wrote:

I'm still puzzled as to what led to the statement that I quoted at the 
beginning:

P 256 is the most popular curve in the world besides the bitcoin
curve. And I don’t have head to head numbers, and the bitcoin curve
is SEC P, but P 256 is most popular curve on the internet. So
certificates, TLS, handshakes, all of that is like 70 plus percent
negotiated with the P 256 curve.

Maybe the TLS co-chair has a comment?


On 03/06/2024 22:19, D. J. Bernstein wrote:

As I said, the statement is from one of the current TLS co-chairs, a
month before the co-chair appointment. The position as co-chair adds to
the importance of ensuring accurate information.


Dan, this is unsavory conduct. We are here to have reasoned, impersonal 
discussions. Please see in particular the second guideline for conduct 
at the IETF (RFC 7154).


Trying to call out an individual for a comment made informally, in some 
other corner of the internet, some time ago, is rather unbecoming of you 
and looks as though you're trying to use the working group's time and 
energy to settle a playground squabble. Especially when the referenced 
comment was unconnected to any active discussion within the WG or 
decisions made by the chairs.


Your thread has raised two technical & impersonal questions relevant to 
the TLS WG. Let's keep the focus on them:


1) What cryptographic algorithms are popularly used with TLS today?

2) Does this popularity matter for deciding which PQ hybrids to 
standardize in TLS?


[TLS] Re: Curve-popularity data?

2024-06-03 Thread Dennis Jackson

On 02/06/2024 22:02, Filippo Valsorda wrote:

Third, we learned to make key shares always ephemeral which makes 
invalid curve attacks irrelevant.


Although using ephemeral keys does effectively prevent key recovery 
through invalid points, you can still use invalid points to perform 
confinement attacks on an otherwise prime order curve.


This was used by Eli Biham and Lior Neumann to break the Bluetooth 
pairing standard back in 2018 [1]. The Bluetooth standard previously said 
implementers could choose to do full point validation or always use 
ephemeral keys, and folks opted for the less complex choice. This isn't 
a clear separator between X25519 and P-256 though, since X25519 would 
also need to reject small-order points in order to avoid the same attack.
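For illustration, the two checks in question amount to something like the
sketch below (Python; the constants are the standard P-256 domain
parameters, but this is no substitute for a vetted library):

    # P-256: y^2 = x^3 - 3x + b (mod p). An invalid/confining point fails this.
    P = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
    B = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b

    def p256_point_is_valid(x: int, y: int) -> bool:
        # Reject coordinates outside the field, then check the curve equation.
        if not (0 <= x < P and 0 <= y < P):
            return False
        return (y * y - (x * x * x - 3 * x + B)) % P == 0

    def x25519_output_is_valid(shared_secret: bytes) -> bool:
        # RFC 7748: a small-order peer key yields an all-zero output; reject it.
        return shared_secret != bytes(32)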


Best,
Dennis

[1] 
https://biham.cs.technion.ac.il/BT/bt-fixed-coordinate-invalid-curve-attack.pdf 



(Also summarized in Section 7.2 of "Prime, Order Please!", 
https://eprint.iacr.org/2019/526.pdf)


[TLS] Re: Transitioning to PQC Certificates & Trust Expressions

2024-05-28 Thread Dennis Jackson

Hi Ryan,

On 27/05/2024 19:23, Ryan Hurst wrote:
I don't understand your position on the verifier, the faith one can 
put in the chain of signatures is only the faith appropriate for the 
weakest signature. As such if a classical key is used to sign a PQ 
chain, an attacker would go after the classical signature ignoring the 
others.


That's not quite right.

Let's imagine we have a leaf public key L1, a PQ Public Key M1 and a 
Classical Public Key N1 and use <- to indicate 'signed by'. Consider the 
certificate chains:


    (1) L1 <- M1

    (2) N1 -> L1 <- M1  (N1 and M1 are both intermediates signing the 
same leaf)


    (3) L1 <- M1 <- N1 (N1 cross-signs M1).

Have we made things worse in (2) by adding a classical signature? No. 
Any verifier that would output accept on (1) will also output accept on 
(2) without even checking N1, so we cannot have made security worse for 
anyone who would accept (1). The opposite is also true: anyone who 
trusts N1 does not need to verify M1. So (2) strictly improves 
availability without reducing security for anyone. (This was my proposed 
design in the initial mail).


For (3), we still have the property that anyone that would output accept 
on (1) would output accept on (3) without checking N1, so security for 
PQ users has not been reduced at all. The reverse direction for whether 
we hurt classical users requires us to trust that M1 is at least as 
secure as N1 - otherwise we're hurting security for the folks that trust 
N1 but not M1. In the context where M1 is a PQ-Hybrid and N1 is 
classical and both are operated by the same CA, this is perfectly fine. 
(This was my alternate design in the initial mail).
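To make the acceptance argument concrete, here's a toy model (Python,
purely illustrative): a verifier accepts a leaf iff some 'signed by' path
reaches a root it trusts, so adding signatures can only grow the set of
verifiers that accept.

    def accepts(trusted_roots, edges, leaf):
        # edges is a set of (subject, issuer) pairs; walk 'signed by' links up.
        issuers = {}
        for subject, issuer in edges:
            issuers.setdefault(subject, set()).add(issuer)
        stack, seen = [leaf], set()
        while stack:
            node = stack.pop()
            if node in trusted_roots:
                return True
            if node not in seen:
                seen.add(node)
                stack.extend(issuers.get(node, ()))
        return False

    chain2 = {("L1", "M1"), ("L1", "N1")}   # (2): both intermediates sign the leaf
    chain3 = {("L1", "M1"), ("M1", "N1")}   # (3): N1 cross-signs M1

    # PQ verifiers (trust M1) and classical verifiers (trust N1) both accept (2):
    assert accepts({"M1"}, chain2, "L1") and accepts({"N1"}, chain2, "L1")
    # In (3), PQ verifiers accept without ever relying on N1:
    assert accepts({"M1"}, chain3, "L1") and accepts({"N1"}, chain3, "L1")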


The important nuance is this: we can *add* as many certificates / 
signatures as we like to the leaf node of a chain in order to improve 
availability without hurting security. We can also extend PQ chains with 
classical roots, without degrading the security of PQ users in any way.


As a further thought experiment, imagine we had a fully PQ PKI set up 
and established, and some attacker used their quantum computer to break 
a classical root and started adding a classical signature to all our 
secure PQ chains. This action would not impact security for any client 
which only trusted PQ signatures: they simply don't care about the 
classical signature, and the attacker can't produce any new chains. 
Clients which did trust the classical signing algorithm are doomed no 
matter what, because the attacker can just make valid chains of their 
choice.


Security all comes down to the roots and signatures the verifier 
accepts, not the chains we make :-).


Best,
Dennis




Ryan

On Mon, May 27, 2024 at 11:15 AM Dennis Jackson 
 wrote:


Hi Ryan,

I wonder if the IETF mail servers are having a bad day again. I
only see your reply to me, no other messages and currently the
archives are only showing my initial email [1] with no replies.

[1] https://mailarchive.ietf.org/arch/browse/tls/

On 27/05/2024 18:51, Ryan Hurst wrote:

However, doing so with a hybrid chain weakens the security of the
chain to the security properties of the certificate and keys
being used for the cross-signing.


I don't think there's any such thing as the security of the chain.
Only the security of the *verifier* of the chain. If they trust a
classical root, they can't do better than classical security. If
they trust only PQ roots, it doesn't matter how many extraneous
classical certs are in the chain if there's a valid PQ path from
leaf to a known PQ root or intermediate. In both cases, having
a classical signature on a PQ root or intermediate doesn't change
security for anybody; it only improves availability.

Best,
Dennis




On Mon, May 27, 2024 at 9:51 AM Dennis Jackson
 wrote:

Hi Ryan,

On 27/05/2024 16:39, Ryan Hurst wrote:


[...]

Moreover, there's the liability issue: a CA that cross-signs
another CA exposes its business to distrust based on the
practices of the CA it cross-signs.

[...]

As someone who has both provided said cross-signs and
received them I really don't see them as the silver bullet
others seem to in this thread.


This thread is purely talking about cross-signs between two
roots operated by the same CA, which is the case when an
existing CA with a classical root is generating a new PQ root.

This is completely standard practice, as exemplified by Let's
Encrypt, DigiCert and Sectigo's pages describing their cross
signs between the roots they operate [1,2,3]. There are no
commercial relationships or sensitivities involved because
the same organization controls both the signing and the
cross-signed root.

I guess you assumed the alternative scenario where the roots
belong to two different CAs. The standard terminology of referring
to both as a cross-sign is regrettably vague.

[TLS] Re: Transitioning to PQC Certificates & Trust Expressions

2024-05-28 Thread Dennis Jackson

Hi Ryan,

I wonder if the IETF mail servers are having a bad day again. I only see 
your reply to me, no other messages and currently the archives are only 
showing my initial email [1] with no replies.


[1] https://mailarchive.ietf.org/arch/browse/tls/

On 27/05/2024 18:51, Ryan Hurst wrote:
However, doing so with a hybrid chain weakens the security of the 
chain to the security properties of the certificate and keys being 
used for the cross-signing.


I don't think there's any such thing as the security of the chain. Only 
the security of the *verifier* of the chain. If they trust a classical 
root, they can't do better than classical security. If they trust only 
PQ roots, it doesn't matter how many extraneous classical certs are in 
the chain if there's a valid PQ path from leaf to a known PQ root or 
intermediate. In both cases, having a classical signature on a PQ 
root or intermediate doesn't change security for anybody; it only 
improves availability.


Best,
Dennis




On Mon, May 27, 2024 at 9:51 AM Dennis Jackson 
 wrote:


Hi Ryan,

On 27/05/2024 16:39, Ryan Hurst wrote:


[...]

Moreover, there's the liability issue: a CA that cross-signs
another CA exposes its business to distrust based on the
practices of the CA it cross-signs.

[...]

As someone who has both provided said cross-signs and received
them I really don't see them as the silver bullet others seem to
in this thread.


This thread is purely talking about cross-signs between two roots
operated by the same CA, which is the case when an existing CA
with a classical root is generating a new PQ root.

This is completely standard practice, as exemplified by Let's
Encrypt, DigiCert and Sectigo's pages describing their cross signs
between the roots they operate [1,2,3]. There are no commercial
relationships or sensitivities involved because the same
organization controls both the signing and the cross-signed root.

I guess you assumed the alternative scenario where the roots
belong to two different CAs. The standard terminology of referring
to both as a cross-sign is regrettably vague.

Best,
Dennis

[1] Let's Encrypt X2 is cross signed by Let's Encrypt X1
https://letsencrypt.org/certificates/

[2] Digicert G5 by the Digicert Global Root CA

https://knowledge.digicert.com/tutorials/install-the-digicert-g5-cross-signed-root-ca-certificate

[3] Sectigo UserTrust is cross signed by Sectigo AAA

https://support.sectigo.com/articles/Knowledge/Sectigo-Chain-Hierarchy-and-Intermediate-Roots



Ryan Hurst

On Mon, May 27, 2024 at 2:31 AM Dennis Jackson
 wrote:

One of the key use cases proposed for Trust Expressions is
enabling a speedy deployment of PQC Certificates. I agree
this is an important use case to address, but I think a
closer inspection of the existing deployment options shows
that Trust Expressions does not provide any improvement or
new functionality over existing, already widely deployed
solutions.

In particular, having each CA cross-sign their new PQC root
with their existing classical root. This does not require any
new functionalities or code changes in TLS clients or
servers, does not require coordination between CAs / Root
Programs / Clients and does not impose any performance impact
on the connection (perhaps surprisingly).

The rest of this message details the Trust Expressions
proposal for a PQC transition and compares the security and
performance to existing solutions.

*The Trust Expressions Proposal for the PQC Transition*
When we come to transition to PQC Certificates, the various
Root Programs will include various PQC Roots and start
distributing them to their clients. They will also configure
their clients to start advertising the relevant PQC / hybrid
signature algorithms in their signature_algorithms_cert TLS
Extensions. TLS Servers will decide whether to send their
classical chain or their PQC chain according to this extension.

The Trust Expressions authors plus quite a few folks on the
list have stated that this approach will require us to wait
for all major root programs to accept a given PQC Root and
then for that PQC root to be ubiquitously supported by all
clients which also advertise PQC Signature Support.
Otherwise, we might send our new PQ Chain to a client who
only has an older set of PQ Roots, which would cause a
connection failure. This wait could take a long time, even a
year or more.

Trust Expressions proposes that by having clients indicate
their trust store label and version, we can mostly skip
waiting for ubiquity. Through the Trust Expressions
negotiation, we can be sure that we only send the PQC Root
Certificate Chain to clients that have already updated to
trust it.

[TLS] Re: Transitioning to PQC Certificates & Trust Expressions

2024-05-28 Thread Dennis Jackson

Hi Ryan,

On 27/05/2024 16:39, Ryan Hurst wrote:


[...]

Moreover, there's the liability issue: a CA that cross-signs another 
CA exposes its business to distrust based on the practices of the CA 
it cross-signs.


[...]

As someone who has both provided said cross-signs and received them I 
really don't see them as the silver bullet others seem to in this thread.


This thread is purely talking about cross-signs between two roots 
operated by the same CA, which is the case when an existing CA with 
a classical root is generating a new PQ root.


This is completely standard practice, as exemplified by Let's Encrypt, 
DigiCert and Sectigo's pages describing their cross signs between the 
roots they operate [1,2,3]. There are no commercial relationships or 
sensitivities involved because the same organization controls both the 
signing and the cross-signed root.
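Mechanically, a same-CA cross-sign is just the new root's key certified a 
second time by the old root. A minimal sketch with Python's 'cryptography' 
package (toy names, EC keys standing in for whatever the roots actually 
use, and none of the extensions a real root would carry):

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    old_root_key = ec.generate_private_key(ec.SECP256R1())  # existing root
    new_root_key = ec.generate_private_key(ec.SECP256R1())  # new root

    def make_cert(subject, issuer, subject_public_key, signing_key):
        now = datetime.datetime.now(datetime.timezone.utc)
        return (
            x509.CertificateBuilder()
            .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, subject)]))
            .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, issuer)]))
            .public_key(subject_public_key)
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=3650))
            .sign(signing_key, hashes.SHA256())
        )

    # The same new-root public key, certified twice: once self-signed and
    # once cross-signed by the old root. Clients trusting either root can
    # build a valid path.
    self_signed = make_cert("New Root", "New Root",
                            new_root_key.public_key(), new_root_key)
    cross_signed = make_cert("New Root", "Old Root",
                             new_root_key.public_key(), old_root_key)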


I guess you assumed the alternative scenario where the roots belong to 
two different CAs. The standard terminology of referring to both as a 
cross-sign is regrettably vague.


Best,
Dennis

[1] Let's Encrypt X2 is cross signed by Let's Encrypt X1 
https://letsencrypt.org/certificates/


[2] Digicert G5 by the Digicert Global Root CA 
https://knowledge.digicert.com/tutorials/install-the-digicert-g5-cross-signed-root-ca-certificate


[3] Sectigo UserTrust is cross signed by Sectigo AAA 
https://support.sectigo.com/articles/Knowledge/Sectigo-Chain-Hierarchy-and-Intermediate-Roots




Ryan Hurst

On Mon, May 27, 2024 at 2:31 AM Dennis Jackson 
 wrote:


One of the key use cases proposed for Trust Expressions is
enabling a speedy deployment of PQC Certificates. I agree this is
an important use case to address, but I think a closer inspection
of the existing deployment options shows that Trust Expressions
does not provide any improvement or new functionality over
existing, already widely deployed solutions.

In particular, having each CA cross-sign their new PQC root with
their existing classical root. This does not require any new
functionalities or code changes in TLS clients or servers, does
not require coordination between CAs / Root Programs / Clients and
does not impose any performance impact on the connection (perhaps
surprisingly).

The rest of this message details the Trust Expressions proposal
for a PQC transition and compares the security and performance to
existing solutions.

*The Trust Expressions Proposal for the PQC Transition*
When we come to transition to PQC Certificates, the various Root
Programs will include various PQC Roots and start distributing
them to their clients. They will also configure their clients to
start advertising the relevant PQC / hybrid signature algorithms
in their signature_algorithms_cert TLS Extensions. TLS Servers
will decide whether to send their classical chain or their PQC
chain according to this extension.

The Trust Expressions authors plus quite a few folks on the list
have stated that this approach will require us to wait for all
major root programs to accept a given PQC Root and then for that
PQC root to be ubiquitously supported by all clients which also
advertise PQC Signature Support. Otherwise, we might send our new
PQ Chain to a client who only has an older set of PQ Roots, which
would cause a connection failure. This wait could take a long
time, even a year or more.

Trust Expressions proposes that by having clients indicate their
trust store label and version, we can mostly skip waiting for
ubiquity. Through the Trust Expressions negotiation, we can be
sure that we only send the PQC Root Certificate Chain to clients
that have already updated to trust it. Meanwhile, clients that
don't have PQC Signature support or do support the signatures but
don't have the new PQC root will continue to receive the old
classical chain and not enjoy any PQ Authentication.

*The Existing Alternative*
I believe this argument for the use of Trust Expressions
overlooks existing widely available deployment options for PQC
Certificates, which mean that we do not need to wait for multiple
root stores to include new PQC certs or for them to become
ubiquitous in clients. We will see how we can achieve the exact
same properties as Trust Expressions (no waiting for ubiquity, no
connection failures and PQ-Auth for all clients with the PQ Root)
without the need for any new designs or deployments.

When CAs create roots with new signature algorithms (e.g. ECDSA
Roots), it is common practice to cross-sign the new root with the
existing root (e.g. an RSA Root). This is the approach taken by
Let's Encrypt today, who have an older RSA Root (ISRG X1) and a
newer ECDSA Root (ISRG X2). X2 is cross signed by X1, and each of
the new ECDSA Intermediates are also cross-signed by X1 [1]. In
the context of RSA vs ECDSA

[TLS] Transitioning to PQC Certificates & Trust Expressions

2024-05-27 Thread Dennis Jackson
One of the key use cases proposed for Trust Expressions is enabling a 
speedy deployment of PQC Certificates. I agree this is an important use 
case to address, but I think a closer inspection of the existing 
deployment options shows that Trust Expressions does not provide any 
improvement or new functionality over existing, already widely deployed 
solutions.


In particular, having each CA cross-sign their new PQC root with their 
existing classical root. This does not require any new functionalities 
or code changes in TLS clients or servers, does not require coordination 
between CAs / Root Programs / Clients and does not impose any 
performance impact on the connection (perhaps surprisingly).


The rest of this message details the Trust Expressions proposal for a 
PQC transition and compares the security and performance to existing 
solutions.


*The Trust Expressions Proposal for the PQC Transition*
When we come to transition to PQC Certificates, the various Root 
Programs will include various PQC Roots and start distributing them to 
their clients. They will also configure their clients to start 
advertising the relevant PQC / hybrid signature algorithms in their 
signature_algorithms_cert TLS Extensions. TLS Servers will decide 
whether to send their classical chain or their PQC chain according to 
this extension.
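In pseudocode, that server-side decision is roughly the following sketch
(Python; the algorithm labels are made up for illustration, not real IANA
codepoints):

    PQC_SIG_ALGS = {"mldsa65", "ecdsa_p256_mldsa65"}  # illustrative labels

    def select_chain(sig_algs_cert, classical_chain, pqc_chain):
        # Send the PQC chain only to clients that can verify PQC signatures.
        if PQC_SIG_ALGS & set(sig_algs_cert):
            return pqc_chain
        return classical_chain

    assert select_chain(["ecdsa_secp256r1_sha256"], "classical", "pqc") == "classical"
    assert select_chain(["mldsa65", "ecdsa_secp256r1_sha256"], "classical", "pqc") == "pqc"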


The Trust Expressions authors plus quite a few folks on the list have 
stated that this approach will require us to wait for all major root 
programs to accept a given PQC Root and then for that PQC root to be 
ubiquitously supported by all clients which also advertise PQC Signature 
Support. Otherwise, we might send our new PQ Chain to a client who only 
has an older set of PQ Roots, which would cause a connection failure. 
This wait could take a long time, even a year or more.


Trust Expressions proposes that by having clients indicate their trust 
store label and version, we can mostly skip waiting for ubiquity. 
Through the Trust Expressions negotiation, we can be sure that we only 
send the PQC Root Certificate Chain to clients that have already updated 
to trust it. Meanwhile, clients that don't have PQC Signature support or 
do support the signatures but don't have the new PQC root will continue 
to receive the old classical chain and not enjoy any PQ Authentication.
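The negotiation being described reduces to something like this sketch
(Python; the tuple structure and store name are invented for illustration,
not the draft's actual wire format):

    provisioned = [
        # (chain, CA-asserted inclusions: (trust store label, included-since version))
        ("pqc_chain",       [("example_store", 25)]),
        ("classical_chain", [("example_store", 1)]),
    ]

    def negotiate(client_label, client_version):
        for chain, inclusions in provisioned:
            for label, since in inclusions:
                if label == client_label and client_version >= since:
                    return chain
        return "classical_chain"  # fallback for clients without the extension

    assert negotiate("example_store", 30) == "pqc_chain"        # updated client
    assert negotiate("example_store", 10) == "classical_chain"  # stale client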


*The Existing Alternative*
I believe this argument for the use of Trust Expressions overlooks 
existing widely available deployment options for PQC Certificates, which 
mean that we do not need to wait for multiple root stores to include new 
PQC certs or for them to become ubiquitous in clients. We will see how 
we can achieve the exact same properties as Trust Expressions (no 
waiting for ubiquity, no connection failures and PQ-Auth for all clients 
with the PQ Root) without the need for any new designs or deployments.


When CAs create roots with new signature algorithms (e.g. ECDSA Roots), 
it is common practice to cross-sign the new root with the existing root 
(e.g. an RSA Root). This is the approach taken by Let's Encrypt today, 
who have an older RSA Root (ISRG X1) and a newer ECDSA Root (ISRG X2). 
X2 is cross signed by X1, and each of the new ECDSA Intermediates are 
also cross-signed by X1 [1]. In the context of RSA vs ECDSA, this isn't 
especially interesting because there's purely a tradeoff between a 
smaller chain (ECDSA/X2) and more ubiquity (RSA/X1). However, we'll see 
this approach has much more substantial benefits with PQC Signatures.


When the time comes to ship a PQC Root (which we'll call X3 for 
convenience), we'll make some PQC Intermediates (F1, F2, F3). We will 
also cross-sign these intermediates with our X2 (ECDSA) Root, producing 
certificates we'll call H1, H2, H3. So both F1 and H1 are certificates 
on the same intermediate PQC Public Key, with F1 carrying a PQC 
signature and H1 an ECDSA signature from their respective roots.


When we provision servers with their certificate chains, we'll provision 
the PQC Chain as their leaf (PQC Public Key + PQC Signature), plus both 
F1 and H1. Clients that don't indicate support for PQC Signatures in their 
signature_algorithms_cert extension will receive the usual classical 
chain. Clients that support PQC and have the new root will verify the 
leaf + F1 and so enjoy PQ-Auth. Clients that support PQC and don't have 
the new root will verify the leaf and H1 and not receive PQ-Auth.
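Summarizing the three client outcomes as a sketch (Python; names from the
text, purely illustrative):

    def client_outcome(supports_pqc_sigs: bool, has_x3_root: bool) -> str:
        # PQC-capable clients receive the PQ leaf plus both F1 and H1.
        if not supports_pqc_sigs:
            return "served the classical chain (no PQ auth)"
        if has_x3_root:
            return "verifies leaf + F1 up to X3 (full PQ auth)"
        return "verifies leaf + H1 up to X2 (PQ leaf, classical root)"

    for pqc, x3 in [(False, False), (True, True), (True, False)]:
        print(client_outcome(pqc, x3))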


This achieves identical properties to Trust Expressions in terms of 
client security and doesn't involve any waiting for PQC Root Ubiquity or 
Root Store Approval. The only impact is the extra certificate in the 
chain. Happily, we can cut the overhead of H1 to be a mere 32 bytes with 
existing TLS Certificate Compression Algorithms like zlib / zstd / 
brotli (since H1 and F1 encode the same PQC Public Key). This is tiny 
compared to the necessary PQC Public Key and Signature already in the 
chain. With new schemes like Abridged Certs, we can go even further and 
replace all but 
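A toy demonstration of the compression point above (Python stdlib zlib;
the sizes are stand-ins, e.g. ML-DSA-65's 1952-byte public key, and the
marginal figure here will be larger than the 32 bytes quoted above, but
the mechanism is the same: the duplicated PQC public key deduplicates to
a short back-reference):

    import os
    import zlib

    # Random bytes model the incompressible cryptographic material.
    pqc_key = os.urandom(1952)
    f1 = b"F1|" + pqc_key + b"|pqc-sig|" + os.urandom(3309)
    h1 = b"H1|" + pqc_key + b"|ecdsa-sig|" + os.urandom(72)

    alone = len(zlib.compress(f1))
    together = len(zlib.compress(f1 + h1))
    # The repeated 1952-byte key compresses to a short back-reference, so
    # H1's marginal cost is far below its ~2 KB nominal size.
    print(f"H1 costs ~{together - alone} extra bytes (vs {len(h1)} raw)")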

[TLS] Re: WG Adoption for TLS Trust Expressions

2024-05-24 Thread Dennis Jackson

Hi Ryan,

On 23/05/2024 19:01, Ryan Hurst wrote:
Regarding the concern about government-mandated adoption of root 
certificates, I also care deeply about this issue. This is why I am 
disappointed by the one-sided nature of the conversation. I see no 
mechanism in this proposal that bypasses operator consent, and while 
governments do and will continue to force operators to make changes 
not in their interest, this proposal doesn't change that reality. 
Continuing to focus on this issue in this way detracts from the more 
constructive discussions happening in this thread.

The problem here is that fragmentation (which you acknowledge as a 
compelling concern), combined with making it much easier to deploy new 
CAs (a core goal of the draft), alters the power balance between the 
security community and governments in rather the wrong direction. I am 
not going to spend any further words on it here, but I'm disappointed 
you've engaged so dismissively in what has become a long discussion on a 
deeply complex topic.


The most compelling argument against Trust Expressions I see in the 
thread is the potential for fragmentation. However, the current 
conversation on this topic overlooks the existing fragmentation 
challenges faced by site operators. The primary concern I hear from 
these operators is the anxiety over changes related to cipher suites 
and certificates, fearing that these changes might inadvertently break 
services in exchange for security properties that their leadership 
isn’t asking for. Trust Expressions, in my view, offers a net-positive 
solution to this problem by allowing site operators to manage the 
trust anchor portion of this risk, which they cannot do effectively today.


Is it not rather the opposite, that this draft will give server 
operators an additional product space of choices and incompatibilities? 
With Trust Expressions, we must anticipate that CAs will need to jump 
through at least four separate hoops to provision a chain for each of 
the major root stores, as well as provision a chain for any past 
incompatible versions which are still popular. History does not suggest 
that CAs are particularly capable of managing these considerably more 
complex structures, given existing struggles with getting single chains 
correct.


Worse, we cannot expect Trust Expressions to become universal in any 
reasonable timeframe, meaning we still have the compatibility headache 
of existing and new clients with no Trust Expressions support.


At the same time, we are seeing more embedded devices being deployed 
without long-term maintenance strategies, automatic updates, or 
mechanisms to manage root store lists.


Can you evidence this? The UK and the EU have passed fairly sweeping 
laws over the past year to require that manufacturers provide long term 
maintenance and security updates for the entire lifecycle of new digital 
products [1,2]. The USA usually catches up eventually.


Better still, with the shift to PQ we have the best opportunity to adopt 
the Chrome Root Program's proposal for 7 year certificates. This would 
entirely fix this issue by pushing it from website operators to device 
manufacturers, where it should belong anyway :-), rather than creating a 
fragmented compatibility nightmare by embracing it and putting labels on 
it.


Best,
Dennis

[1] https://digital-strategy.ec.europa.eu/en/policies/cyber-resilience-act
[2] 
https://www.gov.uk/government/news/new-laws-to-protect-consumers-from-cyber-criminals-come-into-force-in-the-uk





On Thu, May 23, 2024 at 10:40 AM Watson Ladd  
wrote:


On Thu, May 23, 2024 at 12:42 PM David Benjamin
 wrote:

>
> Of course, whether this property (whether servers can usefully
pre-deploy not-yet-added trust anchors), which trust expressions
does not have, even matters boils down to whether a root program would
misinterpret availability in servers as a sign of CA
trustworthiness, when those two are clearly unrelated to each
other. Ultimately, the trustworthiness of CAs is a subjective
social question: do we believe this CA has signed *and will continue* to
only sign true things? We can build measures to retroactively
catch issues like Certificate Transparency, but the key question
is fundamentally forward-looking. The role of a root program is to
make judgement calls on this question. A root program that so
misunderstands its role in this system that it conflates these two
isn't going to handle its other load-bearing responsibilities either.

As the old saw goes "past performance is no guarantee of future
results, but it sure helps". Moreover root programs have to balance
the benefits of including a CA against the costs. One of those
benefits is the number of sites that use it.

Sincerely,
Watson

>
> David

[TLS] Re: WG Adoption for TLS Trust Expressions

2024-05-24 Thread Dennis Jackson

On 23/05/2024 17:41, David Benjamin wrote:

On Thu, May 23, 2024 at 11:09 AM Dennis Jackson 
 wrote


This is something that I believe David Benjamin, the other
draft authors, and I all agree on. You and Nick seem to have
misunderstood either the argument or the draft.

David Benjamin, writing on behalf of Devon and Bob as well:


By design, a multi-certificate model removes the ubiquity
requirement for a trust anchor to be potentially useful for a
server operator.

[...]

Server operators, once software is in place, not needing to be
concerned about new trust expressions or changes to them. The
heavy lifting is between the root program and the CA.

From the Draft (Section 7):


Subscribers SHOULD use an automated issuance process where the CA
transparently provisions multiple certification paths, without
changes to subscriber configuration.

The CA can provision whatever chains it likes without the
operator's involvement. These chains do not have to be trusted by
any clients. This is a centralized mechanism which allows one
party (the CA) to ship multiple chains of its choice to all of its
subscribers. This obviously has beneficial use cases, but there
are also cases where this can be abused.


Hi Dennis,

Since you seem to be trying to speak on my behalf, I'm going to go 
ahead and correct this now. This is not true. I think you have 
misunderstood how this extension works. [...]



Hi David,

The certification chains issued to the server by the CA come tagged 
with a list of trust stores they're included in. The named trust stores are 
completely opaque to the server. These chains and names may not be 
trusted by any client nor approved by any server; they are issued solely 
by the CA as opaque labels. These chains sit on the server and will not 
be used unless a client connects with the right trust store label, but 
they can obviously be scanned for by anyone looking to check how 
pervasively deployed the alternate trust store is.


Do you dispute any part of that? Most of what you wrote went off on a 
completely different tangent.


Of course, whether this property (whether servers can usefully 
pre-deploy not-yet-added trust anchors), which trust expressions does 
not have, even matters boils down to whether a root program would 
misinterpret availability in servers as a sign of CA trustworthiness, 
when those two are clearly unrelated to each other.


Again, my primary concern here is not around the behavior of individual 
root stores; that is not relevant to the concern I'm trying to 
communicate to you. I know folks from all of the major root stores have 
great faith in their judgement and technical acumen.


My concern is that Trust Expressions upsets a fragile power balance 
which exists outside of the individual root stores. There is an eternal 
war between governments pushing to take control of root stores and the 
security community pushing back. This battle happens in parliaments and 
governments, between lawmakers and officials, not within root stores and 
their individual judgement largely does not matter to this war. The 
major advantages we as the security community have today are that:


    a) These attempts to take control for surveillance are nakedly 
obvious to the local electorate, because crappy domestic roots have no 
legitimate purpose and can never achieve any real adoption.


    b) If a root store were to bow to pressure and let in a root CA 
used for interception, every other country has an interest in preventing 
that. An international WebPKI means that we are either all secure, or 
all insecure, together.


Trust Expressions, though intended to solve completely different 
problems, will accidentally eradicate both of these advantages. Firstly, 
it provides a nice on-ramp for a new domestic trust store, mostly 
through the negotiation aspect but also through the CA pre-distribution. 
Secondly, by enabling fragmentation along geographic boundaries whilst 
maintaining availability of websites. Without Trust Expressions, we 
cannot balkanize TLS. With Trust Expressions, we can and we know people 
who want to (not anyone in this thread).


If you still do not understand this wider context within which all of 
our work sits, I do not think further discussion between you and me is 
going to help matters.


I would suggest we focus our discussion on the use cases of Trust 
Expressions and how exactly it would work in practice - these concerns I 
shared earlier in the thread are solely technical and operational and 
you and I might be able to make better progress towards a common 
understanding.


Best,
Dennis


[TLS] Re: WG Adoption for TLS Trust Expressions

2024-05-23 Thread Dennis Jackson

Hi David,

On 23/05/2024 14:07, David Adrian wrote:
There is certainly a discussion to be had about how well Trust 
Expressions solves problems experienced by the HTTPS ecosystem and the 
Web PKI today. However, that requires moving past repeated, 
unsubstantiated claims about how Trust Expressions enables government 
surveillance, something that has been repeatedly debunked by multiple 
people in this thread, all of whom are attempting to discuss in good 
faith. And yet, each time someone does this, you change the shape of 
your argument, claim there is more nuance that no one except you can 
see, and describe some easily debunked partial scenario that you 
believe to be worse.


This is, politely, hogwash and a rather shabby attempt to portray this 
as a one-sided discussion.


I have presented a single consistent argument about how Trust 
Expressions solves a key deployment challenge for parties trying to 
perform this kind of abuse. This argument has not changed over the 
course of the thread, as I noted in my latest reply to Nick, this was 
just a summary of the previous discussion.


This argument has been supported by others in the thread, in particular 
by Stephen Farrell:


Having read the draft and the recent emails, I fully agree with 
Dennis' criticisms of this approach. I think this is one that'd best 
be filed under "good try, but too many downsides" and left at that. 


Meanwhile, the four loudest voices who deny there are legitimate 
concerns around this proposal all work for the same team at Google and 
have announced their intent to prototype this technology already [1].


The majority of the participants in this thread have engaged with these 
discussions with curiosity and have yet to voice any conclusion. I am 
sure they will do so when they have made up their minds.


My personal reading has been that folks who have engaged in the 
discussion would agree both that there is reasonable concern about how 
effective T.E. is at solving the problems it claims to solve, and that 
the risks of abuse cannot be dismissed as easily as the authors would like.


It may be worth taking a step back, and considering if the people you 
have worked with for nearly a decade or more, and who have been 
instrumental in improving TLS over the years, have truly suddenly 
decided to pivot to attempting to backdoor mass surveillance through 
the IETF.


I have noted throughout that this is a complex topic which reasonable 
people may disagree on. I have a great deal of respect for the authors, 
who I know are acting out of genuine intent to improve the world. We 
simply disagree on whether the proposed design is likely to be effective 
at solving the problems it sets out to solve and how seriously it could 
be abused by others.




A few replies relating to surveillance are inline.

-dadrian

> I think we have to agree that Trust Expressions enables websites to 
adopt new CA chains regardless of client trust and even builds a 
centralized mechanism for doing so. It is a core feature of the design.


No one has to agree to this because you have not backed this claim at 
all. Nick sent two long emails explaining why this was not the case, 
both of which you have simply dismissed [...]


This is something that I believe David Benjamin, the other draft 
authors, and I all agree on. You and Nick seem to have misunderstood 
either the argument or the draft.


David Benjamin, writing on behalf of Devon and Bob as well:

By design, a multi-certificate model removes the ubiquity requirement 
for a trust anchor to be potentially useful for a server operator.


[...]

Server operators, once software is in place, not needing to be 
concerned about new trust expressions or changes to them. The heavy 
lifting is between the root program and the CA.

From the Draft (Section 7):

Subscribers SHOULD use an automated issuance process where the CA 
transparently provisions multiple certification paths, without changes 
to subscriber configuration.

The CA can provision whatever chains it likes without the operator's 
involvement. These chains do not have to be trusted by any clients. This 
is a centralized mechanism which allows one party (the CA) to ship 
multiple chains of its choice to all of its subscribers. This obviously 
has beneficial use cases, but there are also cases where this can be abused.


Can you explain, specifically, the government and site action that you 
expect that will result in surveillance, keeping in mind that ACME 
_already_ allows the CA to provide a bundle of untrusted 
intermediates? What is the chain of events here? What are the actions 
you see taken by a government, a CA, site owners, and root programs?


CA-provided intermediates don't offer any long-term transition without 
Trust Expressions. You could absolutely stuff the domestic CA in there 
on some short-term basis, but you're never going to be able to take out 
the WebPKI-recognized intermediate (for all the folks connecting without 
the domestic CA). As a 

[TLS] Re: WG Adoption for TLS Trust Expressions

2024-05-23 Thread Dennis Jackson
s on the wire. Somewhat smaller than the Trust Expressions extension 
:-).


The server-side storage space required is negligible, 34 bytes per 
intermediate certificate (32 byte hash, 2 byte identifier). The 
client-side storage space required is 2 bytes per intermediate 
certificate (since we already ship the full set of intermediates to 
clients). The dictionary format has also been ordered to support 
multiple versions without needing to store the redundant data. So in 
short, much less than the multiple extra certificate chains that Trust 
Expressions demands.
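As a back-of-envelope check of those figures (Python; the intermediate
count is a rough assumption for illustration, not a measured number):

    n_intermediates = 2000
    server_bytes = n_intermediates * (32 + 2)  # 32-byte hash + 2-byte identifier
    client_bytes = n_intermediates * 2         # identifier only; certs already shipped
    print(f"server: ~{server_bytes // 1024} KiB, client: ~{client_bytes // 1024} KiB")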


The second example is when a new CA wants to launch. The existing 
mechanism requires a new entrant to the space to make a business deal 
with one of its potential competitors to be able to usefully enter the 
market. This sounds like a bug, not a feature, of the system.


Firstly, Trust Expressions also requires these business deals. Website 
Operators need both a chain from the new CA and one from an existing trusted CA. 
This requires either the subscriber to have a business relationship with 
the existing CA, or the new CA to have a business relationship with them.


Secondly, unlike Trust Expressions, cross signing actually solves this. 
Further, root store operators can encourage cross signing if they so wish.




In [12], you suggest that Trust Expressions encourages CA operators to 
more slowly transition to a PQC PKI, and that this is undesirable. In 
the transition to a PQC PKI, there will be experiments with different 
approaches, and different CA operators may choose different approaches 
to implement on different schedules. The incentive for CA operators to 
move quickly instead of waiting for others to pave the way is to meet 
the demand of servers that want PQC certs early.

We're not worried about the CAs at the front of the pack, who will look 
to innovate no matter what. We're worried about the CAs that make up the 
long tail.


 Servers that use trust expressions can be more confident that they 
are using a certificate chain trusted by their diverse set of clients 
(and monitor which trust expressions they don't support so they can 
adjust their certificate order configuration to close gaps if they 
want to support those clients).


What do you mean by 'more confident'? They already know whether the 
chain is accepted or not based on whether the connection goes through. 
You seem to be imagining that having to maintain relationships with 
multiple CAs to be present in multiple trust stores is a feature for 
website operators, rather than a huge burden. Best case is that they 
will do much as they do today and simply pick one CA which is 
ubiquitous, so Trust Expressions doesn't help.


CA operators benefit by being able to issue certs from their new roots 
sooner without needing to wait for ubiquity or having server operators 
complain that they don't work.

Only by carrying out a business agreement with a more widely trusted CA, 
just as they would for a cross sign.


Root programs benefit by being able to more easily make changes with 
less risk of breakage (e.g. policy changes like root key lifetimes, 
and breakage due to the inability for servers to provide a set of 
certificates that both old and new clients can build valid paths for).


Cross Signing actually solves both of these problems as we've already 
discussed. Trust Expressions does not, it relies either on CAs doing the 
legwork to keep these things working (as they already do today) or 
website operators to do a complex dance with multiple CAs and 
fingerprinting (as they already do today).


On Tue, May 21, 2024 at 3:00 PM Dennis Jackson 
 wrote:



Although Watson observed that the answer to this is at least
'somewhat', I agree such a world is already maxed at 10/10 on the
bad worlds to live in scale and so it's not by itself a major
problem in my view.


I don't see Watson saying that in any of his messages [5] [6] [7].

On 01/05/2024 00:07, Watson Ladd wrote:


I think the sharper concern is that you could block traffic without the cert 
included.

Watson can of course speak for himself in this matter.


[TLS] Re: WG Adoption for TLS Trust Expressions

2024-05-21 Thread Dennis Jackson

Hi Nick,

On 21/05/2024 19:05, Nick Harper wrote:

[...]

Perhaps there are additional ways to use Trust Expressions to censor 
the web that are more practical and more useful than the existing 
techniques that I didn't consider. There are most certainly other 
forms of domestic control of the Web that I didn't consider. From my 
analysis, if I were a government looking to enable surveillance and 
domestic control of the Web, I don't see Trust Expressions as 
something that unlocks new options or makes existing techniques 
easier/more reliable. It is at most something to keep in mind as 
technology evolves. Maybe I'm not very imaginative, and you've 
imagined much more interesting ways a government might surveil the web 
or attempt to control it using Trust Expressions.


This thread is now 40+ messages deep and I guess you might not have seen 
much of the previous discussion. I actually agree with much of your 
analysis, but it focused on the wrong question, as I wrote earlier in 
this thread:


The question we're evaluating is NOT "If we were in a very unhappy 
world where governments controlled root certificates on client devices 
and used them for mass surveillance, does Trust Expressions make 
things worse?" Although Watson observed that the answer to this is at 
least 'somewhat', I agree such a world is already maxed at 10/10 on 
the bad worlds to live in scale and so it's not by itself a major 
problem in my view.


The actual concern is: to what extent do Trust Expressions increase 
the probability that we end up in this unhappy world of government CAs 
used for mass surveillance?


On 21/05/2024 19:05, Nick Harper wrote:


I'd be interested to hear details on what those are.

Messages [1,2,3,4] of this thread lay out these details at length.

Besides these concerns which are unaddressed so far, much of the recent 
discussion has focused on establishing what problem(s) Trust Expressions 
actually solves and how effective a solution it actually is.


Looking forward to your thoughts on either or both aspects.

Best,
Dennis

[1] https://mailarchive.ietf.org/arch/msg/tls/LaUJRHnEJds2Yfc-t-wgzkajXec/

[2] https://mailarchive.ietf.org/arch/msg/tls/zwPvDn9PkD5x9Yw1qul0QV4LoD8/

[3] https://mailarchive.ietf.org/arch/msg/tls/9AyqlbxiG7BUYP0UD37253MeK6s/

[4] https://mailarchive.ietf.org/arch/msg/tls/fxM4zkSn0b8zOs59xlH6uy8P7cE/





[TLS] Re: WG Adoption for TLS Trust Expressions

2024-05-20 Thread Dennis Jackson

Hi David, Devon, Bob,

Response to both your recent mails below:

On Thu, May 9, 2024 at 10:45 AM David Benjamin  
wrote:
We’re particularly concerned about this server operator pain because 
it translates to security risks for billions of users. If root program 
actions cause server operator pain, there is significant pressure 
against those actions. The end result of this is that root store 
changes happen infrequently, with the ultimate cost being that user 
security cannot benefit from PKI improvements.


The claim here is that Trust Expressions is going to make it easier to 
remove CAs by reducing the pain to server operators of CA distrust? This 
seems to be incompatible with the draft's intent for CAs to provision 
the server's certificate chains without interaction with the website 
operator. Why is the CA you're distrusting ever going to voluntarily 
enable their own removal by transitioning their subscribers to a 
different CA's cert chain?


It’s hard to say exact numbers [of trust store labels / versions] at 
this stage. We can only learn with deployment experience, hence our 
desire to progress toward adoption and experimentation.
Leaving aside the concerns about what other parties may abuse this for, 
can you talk concretely about how *you* would use it? Would Chrome and 
Android Webview share the same trust expressions label? Would desktop 
Chrome and mobile Chrome? Would Chrome Canary and Chrome Release? Would 
Chrome and Chromium? Would you expose the Trust Expressions API to 
Android applications? I don't think the answer matters for the concerns 
that I'm articulating around both risks and effectiveness, but I have a 
slightly morbid curiosity about how far through you've thought this 
proposal.


This is an important point; most modern root programs including Chrome 
and Mozilla are 
trending towards increased requirements on CAs to become trusted, 
including greater agility among trust anchors. This agility reduces 
the risk of powerful, long-lived keys and allows for faster adoption 
of security improvements, but poses additional pain on subscribers who 
can only deploy one certificate to meet the needs of a set of clients 
that are changing faster than ever before.


Are those requirements truly changing faster than ever before? The vast 
majority of the pain in this space has been caused by the fact that the 
Android Root Store could only be updated by an OTA update from the OEM 
and so was effectively abandonware. I understand that Google finally 
fixed this in Android 14, released October 2023. Meanwhile, downloading 
Firefox yields a modern root store for any Android device released in 
the past 10 years (Android 5 - Lollipop, 2014).


Momentum very much seems to be pointing in the opposite direction to 
what you claim, as the old abandoned devices age out, they're being 
replaced by devices that are much easier to keep up to date. Similarly, 
various countries now have laws on the books requiring a minimum number 
of years of security updates for devices, and manufacturers are 
responding by vastly improving their supported lifetimes. This situation 
improves even further with the coming shift to PQ (described below).


There are indeed costs to fragmentation, but those costs themselves 
provide the pressure against unnecessary fragmentation. We’re not 
aware of any more limited solution that would still meet the 
requirements here. Did you mean the ones in 
https://mailarchive.ietf.org/arch/msg/tls/XXPVFcK6hq3YsdZ5D-PW9g-l8fY/


Looking at these:
- Cross-signing is a broad space. We discuss it briefly in the 
explainer, but it would need to be more concrete to be a sketch of a solution. 
Was this the option you had in mind?

Cross-signing is well understood.

The easiest case is when an already trusted CA wishes to transition to 
new key material. The CA can cross sign both the new root and the new 
intermediates as Let's Encrypt has for ISRG X1 and ISRG X2 [1]. The main 
drawback is increased certificate chain size. Adopted drafts like 
Abridged Certs [2] completely eliminate it.


The alternative use case is when a new CA wants to launch. Currently, 
it's left to the CA to negotiate with incumbents to obtain a cross 
signed certificate and many do, e.g. Let's Encrypt from IdenTrust [3], 
SSL.com from Certum [4]. If you feel that it should be made even easier, 
an argument I'm sympathetic to, that's an easy policy decision for Root 
Programs to make.


On Thu, May 9, 2024 at 10:40 AM David Benjamin  
wrote:



   Our understanding of your argument is that it will be easier for
   governments to force clients to trust a CA if a sufficient number of
   websites have deployed certificates from that CA. We just don’t
   agree with this assertion and 

Re: [TLS] WG Adoption for TLS Trust Expressions

2024-05-05 Thread Dennis Jackson

Hi David, Devon, Bob,

I feel much of your response talks past the issue that was raised at 
IETF 118.


The question we're evaluating is NOT "If we were in a very unhappy world 
where governments controlled root certificates on client devices and 
used them for mass surveillance, does Trust Expressions make things 
worse?". Although Watson observed that the answer to this is at least 
'somewhat', I agree such a world is already maxed at 10/10 on the 
bad worlds to live in scale and so it's not by itself a major problem in 
my view.


The actual concern is: to what extent do Trust Expressions increase the 
probability that we end up in this unhappy world of government CAs used 
for mass surveillance?


The case made earlier in the thread is that it increases the probability 
substantially because it provides an effective on-ramp for new CAs even 
if they exist entirely outside of existing root stores. Websites can 
adopt such a CA without being completely broken and unavailable as they 
would be today. Although I think it's unlikely anyone would 
independently do this, it's easy to see a website choosing to add such a 
certificate (which is harmless by itself) if a government 
incentivized or required it.  Trust Expressions also enables existing 
CAs to force-push a cert chain from a new CA to a website, without the 
consent or awareness of the website operator, further enabling the 
proliferation of untrusted (and presumably unwanted) CAs.


These features neatly solve the key challenges of deploying a government 
CA, which, as discussed at length in the thread, are to achieve enough 
legitimacy through website adoption to have a plausible case for 
enforcing client adoption. The real problem here is that you've 
(accidentally?) built a system that makes it much easier to adopt and 
deploy any new CA regardless of trust, rather than a system that makes 
it easier to deploy & adopt any new *trusted* CA. If you disagree with 
this assessment, it would be great to hear your thoughts on why. 
Unfortunately, none of the arguments in your email come close to 
addressing this point and the text in the draft pretty much tries to 
lampshade these problems as a feature.


The other side of this risk evaluation is assessing how effectively 
Trust Expressions solves real problems.


Despite a lot of discussion, I've only seen one compelling unsolved 
problem which Trust Expressions is claimed to be able to solve. That is 
the difficulty large sites have supporting very old clients with 
out-of-date root stores (as described by Kyle). This leads to sites 
using complex & brittle TLS fingerprinting to decide which certificate 
chain to send or to sites using very particular CAs designed to 
maximize compatibility (e.g. Cloudflare's recent change).


However, it's unclear how Trust Expressions solves either fingerprinting 
or the new trusted root ubiquity challenge. To solve the former, we're 
relying on the adoption of Trust Expressions by device manufacturers who 
historically have not been keen to adopt new TLS extensions. For the 
latter, Trust Expressions doesn't seem to solve anything. Sites / CDNs 
are still forced to either have a business arrangement with a single 
suitably ubiquitous root or to conclude multiple such arrangements 
(which come with considerable baggage) with both new and ubiquitous 
roots - in return for no concrete benefit. If we had Trust 
Expressions deployed today, how would life be better for LE / Cloudflare 
or other impacted parties?


I won't detail them here, but it seems like there are simpler and more 
effective alternatives that would address the underlying problem, e.g. 
through root stores encouraging cross-signing or offering cross-signing 
services themselves and using existing techniques to avoid any impact at 
the TLS layer.


I'm struggling to see it being an even partially effective solution for 
any of the other proposed use cases. To pick an example you've 
repeatedly highlighted, can you clarify how Trust Expressions will speed 
the transition to a PQ PKI? Specifically, how much earlier do you expect 
a given site to be able to deploy a PQ cert chain in the case of TE 
adoption vs without TE adoption (and why)?


David, Devon & Bob wrote:

We acknowledge that achieving this level of agility requires a 
significant amount of design and implementation work for web servers, 
certificate automation clients/servers, and clients to support, but 
we believe the improvements called out in some of the discussions on 
this thread strongly outweigh these costs [...]


[...] We think this will drastically improve the ability to migrate 
the Internet to PQC—not just in terms of a faster timeline, but 
because trust anchor agility will enable the community to develop 
fundamentally better solutions for authentication, through reduced 
experimentation costs


I can completely understand why Trust Expressions seems to bring 
substantial benefits to *you* (as root store operators) but I'm 

Re: [TLS] Adoption Call for draft-davidben-tls-key-share-prediction

2024-05-03 Thread Dennis Jackson
This looks great. I support adoption and am happy to implement & review. 

On May 3, 2024 10:05:01 PM UTC, Joseph Salowey  wrote:
>This is a working group call for adoption
>for draft-davidben-tls-key-share-prediction.  This document was presented
>at IETF 118 and has undergone some revision based on feedback since then.
>The current draft is available here:
>https://datatracker.ietf.org/doc/draft-davidben-tls-key-share-prediction/.
>Please read the document and indicate if and why you support or do not
>support adoption as a TLS working group item. If you support adoption
>please, state if you will help review and contribute text to the document.
>Please respond to this call by May 20, 2024.
>
>Thanks,
>
>Joe, Deidre, and Sean


Re: [TLS] WG Adoption for TLS Trust Expressions

2024-04-30 Thread Dennis Jackson

On 01/05/2024 00:07, Watson Ladd wrote:


On Tue, Apr 30, 2024 at 3:26 PM Dennis Jackson
  wrote:


Let's assume for a moment we could a) get most of the world to use ACME (a 
worthy but challenging goal) and b) get them to configure multiple CAs and 
receive multiple certificates. We don't need trust expressions to be able to do 
quick rotations - because we don't ever want to use the old CA. It's just a 
case of swapping to the new one. There's no need for negotiation.

We've already seen a serious problem with cross-signing happen, where
Cloudflare is planning to stop using Lets Encrypt. That's because the
cross-signed cert that let Lets Encrypt work with old Android devices
expired, with no replacement. Having websites present one chain
creates a lot of thorny tradeoffs. Either you present a cross-signed
certificate, or a few, and take the bandwidth hit, or you don't and
suffer a compatibility hit. This was manageable when signatures were
small. When they get chonky it will be much less fun.

There's a huge and unexplored design space here that does not require 
trust negotiation. I don't claim any of these ideas are optimal:


 * Techniques like abridged certs + cross signs let you mitigate any
   bandwidth impact for recent-ish clients. Given the older clients are
   going to be missing a lot of performance improvements anyway, this
   doesn't seem unacceptable.
 * Root Programs could introduce specific root certs they operate for
   the sole purpose of cross-signing new CAs to speed up their adoption.
 * Clients could have a TLS flag 'My Root Store is very old' which is
   set when X years without a root store update have passed.
   Alternatively, they advertise an explicit age for their root store
   or the TLS Flag 'My Root Store was updated in the past year'.
 * I think there may also be some interesting ways to improve Merkle
   Tree Certs to support these use cases without needing trust
   negotiation but that'll have to wait for another thread.


As far as I'm aware there is no need for cooperation in creating a
cross-signed intermediate: it's a certificate with a public key and
just a different signer. So Country X could force its sites to include
that cross-signed intermediate in the grab bag handed to browsers, and
everything would work as now. Browsers have to tolerate all sorts of
slop there anyway. I think the sharper concern is that you could block
traffic without the cert included.

Sincerely,
Watson Ladd


Re: [TLS] WG Adoption for TLS Trust Expressions

2024-04-30 Thread Dennis Jackson
nable acts that a CA might be 
incentivised to do is provision a certificate chain from a domestic 
government operated CA alongside the intended one. Helpfully bypassing 
the need to convince website operators to use them.


This 3rd problem is exactly what I've been highlighting. If you're 
trying to build a domestic government controlled PKI, you run into a 
huge adoption challenge:


 * No website wants to change to use your government-controlled PKI,
   because users don't support it and the rest of the world won't ever
   have it, so the site is effectively blocked.
 * There's no legitimate reason for a user to want to install the
   government-controlled root, because there are no sites using it.

This draft very neatly solves that challenge. You can provision a large 
number of sites with the government cert chain with little work. You 
don't even need consent if you use the CA method as you described. Those 
sites remain available to both local and international visitors so they 
don't really mind unless they're the kind of site with strong opinions 
about internet privacy. You can start to push your domestic users onto 
using your domestic root store with the argument that a) it's locally 
supervised so it's great and b) loads of local sites already support it. 
Later, you have the latitude to start misbehaving or blocking traffic 
that doesn't advertise your root store.


Reading between the lines, you seem like a person who would not 
ordinarily be in favour of trusting CAs with TLS configuration 
decisions, or making it easier to spin up a CA and so increasing the 
number of trusted CAs in the world, or having a fragmented PKI along 
national borders. This draft directly enables all three and you've yet 
to identify a feature that Trust Expressions actually delivers that 
isn't already available through simpler, already-deployed means.


On Tue, Apr 30, 2024 at 8:38 AM Dennis Jackson wrote:


As mentioned above, we have such an extension already insofar as
indicating support for Delegated Credentials means indicating a
desire for a very short credential lifetime and an acceptance of
the clock skew risks.

Given how little use it's seen, I don't know that it's a good
motivation for Trust Expressions.

On 30/04/2024 16:33, Eric Rescorla wrote:



On Tue, Apr 30, 2024 at 8:29 AM Watson Ladd
 wrote:

On Tue, Apr 30, 2024 at 8:25 AM Eric Rescorla 
wrote:
>
>
> On the narrow point of shorter lifetimes, I don't think the
right way to advertise that you have an accurate clock is to
advertise that you support some set of root certificates.
>
> If we want to say that, we should have an extension that
actually says you have an accurate clock.

That says you *think* you have an accurate clock.


Quite so. However, if servers gate the use of some kind of
short-lived credential
on a client signal that the client thinks it has an accurate
clock (however that
signal is encoded) and the clients are frequently wrong about
that, we're going
to have big problems.

-Ekr




Sincerely,
Watson

-- 
Astra mortemque praestare gradatim





Re: [TLS] WG Adoption for TLS Trust Expressions

2024-04-30 Thread Dennis Jackson

On 30/04/2024 16:13, Brendan McMillion wrote:


Of course this is possible in theory, there are no standards
police, but this argument overlooks the gargantuan technical and
economic costs of deploying this kind of private extension. You'd
need to convince a diverse population of implementers on both the
client and server side to adopt and enable your thing.


I don't believe my hypothetical private extension would need to be 
adopted by any servers, just clients. And due to power laws, adoption 
by one or two clients would provide visibility into a substantial 
section of Internet traffic.
This is just an observation that unilateral actions taken by major 
players can screw things up for many people. It's true, but it has 
little bearing on what we're weighing up here.


Can you expand on how this draft enables the more rapid distrust
of failed CAs?


 This is described in more detail in section 9.1 of the draft. 
Currently we have the problem that, as long as any older RP relies on 
a given root, subscribers have to keep using it, which means newer RPs 
have to keep trusting it.


This doesn't apply in the case where we're distrusting a CA because it's failed. 
In 9.1 we're rotating keys. As I laid out in my initial mail, we can 
already sign the new root with the old root to enable rotation. There's 
no size impact to up-to-date clients using intermediate suppression or 
abridged certs.




I'm confused by this remark. Are there clients which would
tolerate a certificate if only it had a longer lifetime? Is there
any circumstance in which you would have both a long-life and
short-life certificate available, and you would prefer to serve
the long-life cert?


 Yes, especially when you push to shorter and shorter lifetimes (say, 
1 day, 1 hour), you start to have the issue that not all clients will 
have sufficiently accurate clocks to verify them. Clients with 
accurate clocks can advertise support for a root that issues 
short-lived certs while clients that don't will not advertise support 
for this root, and with TE we can support both.


Firefox already supports Delegated Credentials for exactly this use case, 
which addresses clock skew, but DCs have never seen much use as far as I 
know. In any case, this is an already-solved problem.


Re: [TLS] WG Adoption for TLS Trust Expressions

2024-04-30 Thread Dennis Jackson

Hi Brendan, Bas,

On 30/04/2024 05:17, Brendan McMillion wrote:
It seems like, with or without this extension, the path is still the 
same: you'd need to force a browser to ship with a government-issued 
CA installed. Nothing about this makes that easier. It /is/ somewhat 
nice to already have a way to signal that a given client does/doesn't 
support the government CA -- but you could just as easily do this with 
a simple extension from the private range, so I'm not sure that was a 
big blocker.

On 30/04/2024 09:13, Bas Westerbaan wrote:

No need for a new extension: a government can use a specific signature 
algorithm for that (say, a national flavour of elliptic curve, or a 
particular PQ/T hybrid).


Of course this is possible in theory, there are no standards police, but 
this argument overlooks the gargantuan technical and economic costs of 
deploying this kind of private extension. You'd need to convince a 
diverse population of implementers on both the client and server side to 
adopt and enable your thing. This draft, if widely implemented as-is, 
would effectively solve that problem for governments by removing a huge 
architectural roadblock. This is the power of the IETF and why decisions 
about what TLS extensions to adopt are important. Mark Nottingham has a 
longer piece on this view.


On 30/04/2024 05:17, Brendan McMillion wrote:

On the other hand, this draft solves a number of existing security 
issues, by allowing more rapid distrust of failed CAs,


Can you expand on how this draft enables the more rapid distrust of 
failed CAs?


The roadblock to faster distrust has always been how quickly subscribers 
of the failed CA are able to migrate. ACME makes this process much 
easier, but still requires server operators to reconfigure their ACME 
clients. This draft doesn't improve that situation.


An effective technique long-used by Microsoft and Mozilla when 
distrusting CAs is to ship a distrust-certs-issued-after signal rather 
than an immediate distrust of all issued certificates. This allows 
server operators to gradually migrate in line with their usual schedule 
of certificate renewals rather than forcing a flag day on the world. I 
understand that at least one further major root program is looking at 
supporting the same feature.
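
(Mechanically the check is tiny - a sketch, with names invented purely 
for illustration:)

    # Sketch: under a "distrust certificates issued after date D" policy,
    # already-issued certificates keep validating; only later ones fail.
    from datetime import date

    def issuance_acceptable(leaf_not_before: date, distrust_after: date) -> bool:
        return leaf_not_before <= distrust_after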



by allowing clients to signal support for short-lived certificates, etc.
I'm confused by this remark. Are there clients which would tolerate a 
certificate if only it had a longer lifetime? Is there any circumstance 
in which you would have both a long-life and short-life certificate 
available, and you would prefer to serve the long-life cert?


Best,
Dennis



Re: [TLS] WG Adoption for TLS Trust Expressions

2024-04-29 Thread Dennis Jackson
Thanks <https://last-chance-for-eidas.org/>, I 
<https://security.googleblog.com/2023/11/qualified-certificates-with-qualified.html> 
am aware.



On 30/04/2024 01:39, S Moonesamy wrote:

Hi Dennis,
At 04:20 PM 29-04-2024, Dennis Jackson wrote:
Thankfully these efforts have largely failed because these national 
CAs have no legitimate adoption or use cases. Very few website 
operators would voluntarily use certificates from a national root CA 
when it means shutting out the rest of the world (who obviously do 
not trust that root CA) and even getting adoption within the country 
is very difficult since adopting sites are broken for residents 
without the national root cert.


There are ways to promote adoption of a government-mandated CA. The 
stumbling point is usually browser vendors, e.g. 
https://blog.mozilla.org/netpolicy/files/2021/05/Mozillas-Response-to-the-Mauritian-ICT-Authoritys-Consultation.pdf


I see that you already mentioned BCP 188.

Regards,
S. Moonesamy


Re: [TLS] WG Adoption for TLS Trust Expressions

2024-04-29 Thread Dennis Jackson
When this work was presented at IETF 118 in November, several 
participants (including myself, Stephen Farrell and Nicola Tuveri) came 
to the mic to highlight that this draft's mechanism comes with a serious 
potential for abuse by governments (meeting minutes).



Although the authors acknowledged the issue in the meeting, no changes 
have been made since to either address the problem or document it as an 
accepted risk. I think it's critical that one of the two happens before this 
document is considered for adoption.


Below is a brief recap of the unaddressed issue raised at 118 and some 
thoughts on next steps:


Some governments (including, but not limited to, Russia, Kazakhstan and 
Mauritius) 
have previously established national root CAs in order to enable mass 
surveillance and censorship of their residents' web traffic. This 
requires trying to force residents to install these root CAs or adopt 
locally developed browsers which have them prepackaged. This is widely 
regarded as a bad thing (RFC 7258).


Thankfully these efforts have largely failed because these national CAs 
have no legitimate adoption or use cases. Very few website operators 
would voluntarily use certificates from a national root CA when it means 
shutting out the rest of the world (who obviously do not trust that root 
CA) and even getting adoption within the country is very difficult since 
adopting sites are broken for residents without the national root cert.


However, this draft provides a ready-made solution to this adoption 
problem: websites can be forced to adopt the national CA in addition to, 
rather than replacing, their globally trusted cert. This policy can even 
be justified in terms of security from the perspective of the 
government, since the national CA is under domestic supervision (see 
https://last-chance-for-eidas.org). This enables a gradual roll out by 
the government who can require sites to start deploying certs from the 
national CA in parallel with their existing certificates without any 
risk of breakage either locally or abroad, solving their adoption problem.


Conveniently, post-adoption governments can also see what fraction of 
their residents' web traffic is using their national CA via the 
unencrypted trust expressions extension, which can inform their 
decisions about whether to block connections which don't indicate 
support for their national CA, as well as advertising which connections 
they can intercept (using existing methods like mis-issued certs, key 
escrow) without causing a certificate error. This approach also scales, 
so multiple countries can deploy national CAs, with each being able to 
surveil their own residents but not each other's.


Although this may feel like a quite distant consequence of enabling 
trust negotiation in TLS, the on-ramp is straightforward:


 * Support for trust negotiation gets shipped in browsers and servers
   for ostensibly good reasons.
 * A large country or federation pushes for recognition of their
   domestic trust regime as a distinct trust store which browsers must
   advertise. Browsers agree because the relevant market is too big to
   leave.
 * Other countries push for the same recognition now that the dam is
   breached.
 * Time passes and various local cottage industries of domestic CAs are
   encouraged out of national interest and government-enabled
   rent-seeking.
 * One or more countries start either withholding certificates for
   undesirable sites, blocking connections which don't use their trust
   store, requiring key escrow to enable interception, etc etc.

Besides the above issue which merits some considered discussion, I would 
also suggest fleshing out the use cases & problems that this draft is 
trying to solve.


Firstly, because it's not clear why simpler solutions don't suffice. For 
example, backwards-compatible root key rotation could be solved by signing 
the new root with the old root, then using existing drafts like 
intermediate suppression or abridged certs to eliminate the overhead of 
both certs for up-to-date clients. This would largely eliminate the 
problems raised around support for old devices.
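
(For concreteness: a cross-sign is just a second certificate over the new 
root's existing subject and public key, signed by the old root. A rough 
sketch using Python's cryptography package, with the validity period 
picked arbitrarily:)

    from datetime import datetime, timedelta
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes

    def cross_sign(new_root, old_root, old_root_key):
        # Same subject and key as the new root, but issued by the old
        # root, so old clients can chain through the old trust anchor.
        return (
            x509.CertificateBuilder()
            .subject_name(new_root.subject)
            .issuer_name(old_root.subject)
            .public_key(new_root.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(datetime.utcnow())
            .not_valid_after(datetime.utcnow() + timedelta(days=3 * 365))
            .sign(old_root_key, hashes.SHA256())
        )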


Secondly, the current proposal requires a fairly substantial amount of 
coordination between server operators, ACME vendors, CAs, browsers and 
browser providers, and it's unclear whether there are enough incentives in 
place to actually see these folks deploy the technology in an effective 
way. Sketching out a couple of key deployment scenarios and what 
fraction of each population would need to 

Re: [TLS] Adoption call for TLS Flag - Request mTLS

2024-04-04 Thread Dennis Jackson
Ah, I wonder what the motivation was for it being a MUST rather than a 
SHOULD.


That still leaves open sending a private-use value, which would allow 
you to de-conflict with other uses.



On 04/04/2024 17:11, Jonathan Hoyland wrote:

Hi Dennis,

RFC 7250 Section 4.1 says:
If the client has no remaining certificate types to send in
the client hello, other than the default X.509 type, it MUST omit the
client_certificate_type extension in the client hello.
That seems to explicitly exclude sending the single entry 0 to me.
Regards,
Jonathan


On Thu, 4 Apr 2024, 16:43 Dennis Jackson wrote:


Hi Jonathan,

My reading of RFC 7250 is the same as Mohits. Although the RFC
talks about raw public keys and a new codepoint for them, it is
building on RFC 6091 which defined a similar extension and the
X509 codepoint.

It seems fine for you to send the client_certificate_type
extension with the single entry 0 (X509). You also have the option
of using a value assigned for private use (224 and up) for your
specific use case of indicating a search engine crawler willing to
provide a client cert.

Best,
Dennis


Re: [TLS] Adoption call for TLS Flag - Request mTLS

2024-04-04 Thread Dennis Jackson

Hi Jonathan,

My reading of RFC 7250 is the same as Mohits. Although the RFC talks 
about raw public keys and a new codepoint for them, it is building on 
RFC 6091 which defined a similar extension and the X509 codepoint.


It seems fine for you to send the client_certificate_type extension with 
the single entry 0 (X509). You also have the option of using a value 
assigned for private use (224 and up) for your specific use case of 
indicating a search engine crawler willing to provide a client cert.
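
(For concreteness, the wire encoding of that first option is tiny - a 
sketch, assuming the IANA code point 19 for client_certificate_type:)

    # Sketch: a client_certificate_type extension offering only X.509 (0).
    import struct

    EXT_CLIENT_CERT_TYPE = 19     # IANA: client_certificate_type
    body = bytes([1, 0])          # one-byte list length, single entry: X.509
    ext = struct.pack("!HH", EXT_CLIENT_CERT_TYPE, len(body)) + body
    assert ext.hex() == "001300020100"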


Best,
Dennis

On 04/04/2024 11:17, Jonathan Hoyland wrote:

Hi all,

Thanks for the feedback here.

With respect to RFC 7250, as I mentioned earlier on list, there seem 
to be two issues. First it changes the semantics of the extension 
slightly, and second the RFC explicitly excludes x.509 certs.


IIUC the semantics of the extension are "I have a weird client cert", 
not "I have a client cert".


With respect to whether this needs to be a working group item, I'm not 
particularly averse to this being an independent document if that's 
where the WG thinks it should go.
In my opinion, however, there are two things that it would be good to 
get input from the TLS WG on.


One, this is a change from all previous versions of TLS in which the 
client cannot induce auth, does enabling this break anyone's assumptions?


Two, I'd like a low flag number because it saves bytes on the wire, 
but there is a discussion to be had as to how common this flag will be 
versus other flags.
(Non-attack) Bot traffic is very common, but it's not the majority of 
traffic by any means.


Regards,

Jonathan



On Thu, 4 Apr 2024, 01:17 Christopher Patton wrote:


It would be great to hear from Jonathan (the author) if RFC 7250
is already sufficient for this use case.

On Tue, Apr 2, 2024 at 10:23 PM Mohit Sethi  wrote:

Please see my earlier comment regarding this draft:
https://mailarchive.ietf.org/arch/msg/tls/g3tImSVXO8AEmPH1UlwRB1c1TLs/

In summary: the functionality of this draft is already
achievable by
using the client_certificate_type extension defined in RFC 7250:
https://datatracker.ietf.org/doc/html/rfc7250 with certificate
type
value = 0:

https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#tls-extensiontype-values-3.

The table in section 4.2 of RFC8446 even mentions that the
extension can
be included in the ClientHello:
https://datatracker.ietf.org/doc/html/rfc8446#section-4.2,
thereby
ensuring that the server sends a CertificateRequest message in
response
to the ClientHello received.

OpenSSL already implements this extension since it was needed for
support raw public keys (RPKs).

As stated earlier: if it is indeed the case that the
client_certificate_type extension is suitable for the
use-case, then
perhaps it is preferable to not have a separate flag.
Otherwise, it
would make the state machine at the server more complicated (for
example: handling a ClientHello with both the mTLS flag and the
client_certificate_type extension).

Therefore, like Ekr, I am mildly negative on adopting this
document but
for different reasons.

--Mohit

On 4/3/24 00:52, Sean Turner wrote:
> At the IETF 119 TLS session there was some interest in the
mTLS Flag I-D

(https://datatracker.ietf.org/doc/draft-jhoyla-req-mtls-flag/);
also, see previous list discussions at [0]. This message is to
judge consensus on whether there is sufficient support to
adopt this I-D.  If you support adoption and are willing to
review and contribute text, please send a message to the
list.  If you do not support adoption of this I-D, please send
a message to the list and indicate why.  This call will close
on 16 April 2024.
>
> Thanks,
> Deirdre, Joe, and Sean
>
> [0]


Re: [TLS] TLS 1.3, Raw Public Keys, and Misbinding Attacks

2024-03-28 Thread Dennis Jackson

Hi John,

It depends what you mean by an identity. TLS1.3 ensures the peers agree 
on the used RPKs and it doesn't rely on any external proof of possession 
to achieve that property.


How the peers come to trust the RPKs or their corresponding identity is 
of necessity left to the application - not dissimilar to how the 
application has to decide which root certificates to trust and whether 
the leaf certificate is appropriate for the intended connection (e.g. 
browsers extract the valid identities from the SAN).


Best,
Dennis

On 28/03/2024 15:22, John Mattsson wrote:


Hi,

I looked into what RFC 8446(bis) says about Raw Public Keys. As 
correctly stated in RFC 8446, TLS 1.3 with signatures and certificates 
is an implementation of SIGMA-I:


SIGMA does however require that the identities of the endpoints 
(called A and B in [SIGMA]) are included in the messages. This is not 
true for TLS 1.3 with RPKs and TLS 1.3 with RPKs is therefore not 
SIGMA. TLS 1.3 with RPKs is vulnerable to what Krawczyk’s SIGMA paper 
calls misbinding attacks:


“This attack, to which we refer as an “identity misbinding attack”, 
applies to many seemingly natural and intuitive protocols. Avoiding 
this form of attack and guaranteeing a consistent binding between a 
session key and the peers to the session is a central element in the 
design of SIGMA.”


“Even more significantly we show here that the misbinding attack 
applies to this protocol in any scenario where parties can register 
public keys without proving knowledge of the corresponding signature key.”


As stated in Appendix E.1, at the completion of the handshake, each 
side outputs its view of the identities of the communicating parties. 
On of the TLS 1.3 security properties are “Peer Authentication”, which 
says that the client’s and server’s view of the identities match. TLS 
1.3 with PRKs does not fulfill this unless the out-of-band mechanism 
to register public keys proved knowledge of the private key. RFC 7250 
does not say anything about this either.


I think this needs to be clarified in RFC8446bis. The only reason to 
ever use an RPK is in constrained IoT environments. Otherwise a 
self-signed certificate is a much better choice. TLS 1.3 with 
self-signed certificates is SIGMA-I.


It is worrying to find comments like this:

“I'd like to be able to use wireguard/ssh-style authentication for my 
app. This is possible currently with self-signed certificates, but the 
proper solution is RFC 7250, which is also part of TLS 1.3.”


https://github.com/openssl/openssl/issues/6929

RPKs are not the proper solution.

(Talking about misbinding, does RFC 8446 say anything about how to 
avoid selfie attacks where an entity using PSK authentication ends up 
talking to itself?)


Cheers,

John Preuß Mattsson

[SIGMA] https://link.springer.com/chapter/10.1007/978-3-540-45146-4_24




[TLS] Feedback on draft-tschofenig-tls-extended-key-update-01

2024-03-18 Thread Dennis Jackson
A new version of this draft was published a few weeks ago with an 
entirely new design. Unless I missed it, the new version hasn't yet been 
discussed on the TLS list and I was unaware of the changes until I came 
to prepare for the meeting. I have quite a few concerns - I'm sorry to 
bring them up so close to the meeting.


Firstly, the draft as specified does not achieve the claimed security goal:


Security Considerations:

To perform public key encryption the sender needs to have access to 
the public key of the recipient. This document makes the assumption 
that the public key in the exchanged end-entity certificate can be 
used with the HPKE KEM. The use of HPKE, and the recipients long-term 
public key, in the ephemeral-static Diffie-Hellman exchange provides 
perfect forward secrecy of the ongoing connection and demonstrates 
possession of the long-term secret key.


An ephemeral-static Diffie-Hellman exchange does not provide forward 
secrecy. If the attacker can later compromise the endpoint's static 
private key, they can decrypt all previously transmitted ciphertexts to 
this peer and so recover all past keys, violating forward secrecy. This 
wasn't an issue in the old draft where ephemeral-ephemeral DH exchanges 
were used.


Secondly, I think there is some confusion about what forward secrecy is. 
Forward secrecy means that compromise in the future will not enable the 
decryption of past messages. The existing KeyUpdate mechanism in TLS1.3 
achieves forward secrecy by ratcheting forwards the used keys and 
throwing away the old ones. So no changes are required to TLS1.3 to 
enjoy forward secrecy in long-lived connections, just do the existing 
key update and be sure to throw away the old keys correctly.
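
(For reference, the existing ratchet is a single one-way HKDF step, RFC 
8446 Section 7.2 - a minimal Python sketch, assuming SHA-256 as the hash:)

    # application_traffic_secret_N+1 =
    #   HKDF-Expand-Label(application_traffic_secret_N, "traffic upd", "", 32)
    import hmac

    def hkdf_expand(prk, info, length):
        # RFC 5869 HKDF-Expand with HMAC-SHA256.
        out, block, counter = b"", b"", 1
        while len(out) < length:
            block = hmac.new(prk, block + info + bytes([counter]),
                             "sha256").digest()
            out += block
            counter += 1
        return out[:length]

    def hkdf_expand_label(secret, label, context, length):
        # RFC 8446 Section 7.1 HkdfLabel encoding.
        full = b"tls13 " + label
        info = (length.to_bytes(2, "big") + bytes([len(full)]) + full
                + bytes([len(context)]) + context)
        return hkdf_expand(secret, info, length)

    def next_traffic_secret(secret):
        # Deleting secret_N after this step is what yields forward secrecy:
        # the expansion cannot be inverted to recover secret_N.
        return hkdf_expand_label(secret, b"traffic upd", b"", 32)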



Introduction:

If a traffic secret (referred as application_traffic_secret_N) has 
been compromised, an attacker can passively eavesdrop on all future 
data sent on the connection, including data encrypted with 
application_traffic_secret_N+1, application_traffic_secret_N+2, etc.


This is not forward secrecy but post-compromise security (PCS) [1] 
(sometimes called Backwards Secrecy as it is the complement of Forward 
Secrecy). As the draft identifies, a fresh key exchange is needed to 
ensure PCS. However, as mentioned earlier in the PFS case, this key 
exchange needs to be with freshly generated ephemeral keys. It does no 
good to use an existing static key since the attacker might have already 
compromised it.


Finally, I'm really not sure about the decision to mix the TLS and 
Application layers by having the application carry the HPKE ciphertexts. 
This seems rather complex and likely to go wrong. The original version 
of this draft where the key exchange was carried in the extended key 
update message seems much simpler to implement and easier to analyse.


If the authors do want to go with some kind of application specific key 
exchange, I would suggest rethinking this draft as purely a way to bring 
entropy into the TLS1.3 key exchange, a TLS1.3 Key Importer if you will. 
This would work by having the application signal to the TLS1.3 layer 
that a key was ready to be imported (with a particular key-id and key 
material). The TLS library would communicate this to the peer with a 
message similar to the one currently defined in the draft carrying the 
key-id. The new key material would be mixed into the current secret when 
the peer confirmed it had also been passed the key-id and material by its 
application. The details of some kind of application-layer key 
exchange would then need to go in a different document and use 
ephemeral-ephemeral exchange as highlighted.
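
(A sketch of that mixing step - everything here is a hypothetical 
illustration of the suggestion above, not anything in the draft:)

    # Both peers run this only once each has confirmed the other holds
    # the same (key_id, key_material) pair.
    import hashlib, hmac

    def mix_imported_key(current_secret, key_id, key_material):
        # HKDF-Extract: current secret as salt, imported material as IKM,
        # with the key_id bound in.
        return hmac.new(current_secret, key_id + key_material,
                        hashlib.sha256).digest()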


Given the complexities around the use of middleboxes, which may not be 
visible to the peers, it might be necessary to use an exported 
authenticator so the applications could confirm they were sharing a 
single TLS connection and not two duct-taped together (which would be 
unable to successfully import new keys). This seems like a lot of 
complexity compared to the initial draft.


Best,
Dennis

[1] https://eprint.iacr.org/2016/221.pdf



Re: [TLS] I-D Action: draft-ietf-tls-cert-abridge-01.txt

2024-03-16 Thread Dennis Jackson
Just tagging editorial changes which were shared on the list a couple 
of weeks ago [1] to avoid document expiry.


No discussion is planned at IETF 119.

Best,
Dennis

On 16/03/2024 14:09, internet-dra...@ietf.org wrote:

Internet-Draft draft-ietf-tls-cert-abridge-01.txt is now available. It is a
work item of the Transport Layer Security (TLS) WG of the IETF.

Title:   Abridged Compression for WebPKI Certificates
Author:  Dennis Jackson
Name:draft-ietf-tls-cert-abridge-01.txt
Pages:   21
Dates:   2024-03-16

Abstract:

This draft defines a new TLS Certificate Compression scheme which
uses a shared dictionary of root and intermediate WebPKI
certificates.  The scheme smooths the transition to post-quantum
certificates by eliminating the root and intermediate certificates
from the TLS certificate chain without impacting trust negotiation.
It also delivers better compression than alternative proposals whilst
ensuring fair treatment for both CAs and website operators.  It may
also be useful in other applications which store certificate chains,
e.g.  Certificate Transparency logs.

The IETF datatracker status page for this Internet-Draft is:
https://datatracker.ietf.org/doc/draft-ietf-tls-cert-abridge/

There is also an HTML version available at:
https://www.ietf.org/archive/id/draft-ietf-tls-cert-abridge-01.html

A diff from the previous version is available at:
https://author-tools.ietf.org/iddiff?url2=draft-ietf-tls-cert-abridge-01

Internet-Drafts are also available by rsync at:
rsync.ietf.org::internet-drafts




Re: [TLS] Working Group Last Call for SSLKEYLOG File

2024-03-14 Thread Dennis Jackson
I have a suggestion which keeps things technical but hopefully addresses 
Stephen's concern:


In Security Considerations:

"TLS1.3 requires the use of forward secret key exchanges (RFC 8446, 1.2, 
E.1). Using SSLKEYLOGFILE breaks this security property as it records 
the used session key and so invalidates many of the security claims made 
in RFC 8446. If SSLKEYLOGFILE is in use, the transferred data does not 
benefit from the security protections offered by RFC 8446 and systems 
using SSLKEYLOGFILE cannot be considered compliant with RFC 8446 or 
offering similar security to the protocol outlined in that draft."


I don't think the wording there is quite right, but I do think the 
Security Considerations should clearly call out the impact on forward 
secrecy and RFC 8446 in general and so dissuade use.
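
For context, each line of an SSLKEYLOGFILE pairs a session's ClientHello 
random with one of its derived secrets (NSS key log format), along the 
lines of:

    CLIENT_HANDSHAKE_TRAFFIC_SECRET <client_random_hex> <secret_hex>
    SERVER_HANDSHAKE_TRAFFIC_SECRET <client_random_hex> <secret_hex>
    CLIENT_TRAFFIC_SECRET_0 <client_random_hex> <secret_hex>
    SERVER_TRAFFIC_SECRET_0 <client_random_hex> <secret_hex>

Anyone holding these lines plus a packet capture can decrypt the recorded 
session after the fact, which is exactly the property forward secrecy is 
meant to rule out.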


Best,
Dennis

On 12/03/2024 23:07, Eric Rescorla wrote:



On Tue, Mar 12, 2024 at 4:04 PM Stephen Farrell wrote:



I'll argue just a little more then shut up...

On 12/03/2024 22:55, Martin Thomson wrote:
>
>> Sorry also for a late suggestion, but how'd we feel about adding
>> some text like this to 1.1?
>>
>> "An implementation, esp. a server, emitting a log file such as this
>> in a production environment where the TLS clients are unaware that
>> logging is happening, could fall afoul of regulatory requirements
>> to protect client data using state-of-the-art mechanisms."

> I agree with Ekr.  That risk is not appreciably changed by the
> existence of a definition for a file format.
I totally do consider our documenting this format increases
the risk that production systems have such logging enabled,
despite our saying "MUST NOT." So if there's a way to further
disincentivise doing that, by even obliquely referring to
potential negative consequences of doing so, then I'd be for
doing that. 



Aside from this particular case, I don't think technical specifications
should "obliquely" refer to things. Technical specifications should be
clear.

-Ekr

Hence my suggestion.

S.


Re: [TLS] ML-KEM key agreement for TLS 1.3

2024-03-14 Thread Dennis Jackson

On 14/03/2024 01:41, Deirdre Connolly wrote:

Oh and one more consideration: hybrid brings complexity, and 
presenting the pure-PQ solutions and their strictly lesser complexity 
(at the tradeoff of maybe taking more risk against newer schemes no 
matter how good we feel about their fundamental cryptographic 
foundations) is worthwhile in my opinion.


On Wed, Mar 13, 2024 at 9:39 PM Deirdre Connolly wrote:


[...] Shaking out all the negotiation decisions is desirable as
well as 'drawing the rest of the owl' for the pure PQ option
implied in the negotiation (are we going to copy the same ~1000
bytes for the PQ and hybrid name groups, when sharing an ephemeral
KEM keypair?).

This is an argument that supporting PQ-only and PQ-hybrid simultaneously 
will be more complex than just PQ-hybrids and require further changes at 
the TLS layer.


Given we've already paid the (minimal) complexity cost of hybrids and 
switching to PQ-only seems strictly less secure, I'm really struggling 
to see the motivation at this point in time. Once we're in a position to 
remove the classical key exchanges from TLS entirely because we know 
they're ineffective, switching to PQ-only might then have more benefit 
since we could retire a lot of old code.
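
(To make the 'minimal complexity cost' concrete: the hybrid is just two 
independent key exchanges whose shared secrets are concatenated before 
entering the unmodified key schedule, per draft-ietf-tls-hybrid-design. A 
sketch, with the real X25519/ML-KEM operations stubbed out:)

    import os

    # Stand-ins for the actual X25519 and ML-KEM-768 shared secrets.
    ss_ecdhe = os.urandom(32)
    ss_mlkem = os.urandom(32)

    # The combiner is plain concatenation (ordering fixed per NamedGroup);
    # the result feeds the TLS 1.3 key schedule as a single share would.
    shared_secret = ss_ecdhe + ss_mlkem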




For CNSA 2.0, it is cited not as a compatibility _requirement_ of
TLS, but a note that a non-trivial segment of users of standard
TLS that have been traditionally compliant will not be in a few
years, and they will come knocking anyway. This is trying to skate
where the puck is going.

But also, the fact that CNSA 2.0 explicitly requires ML-KEM _only_
key agreement in the next ~6-9 years is a strong vote of
confidence in any protocol doing this at all, so, TLS wouldn't be
out on a limb to support this, basically.

I don't have a strong opinion on whether this should be
Recommended = Y.

On Wed, Mar 13, 2024 at 6:42 PM Eric Rescorla  wrote:



On Wed, Mar 13, 2024 at 2:36 PM Rebecca Guthrie wrote:

Greetings Deirdre and TLS,

I read through draft-connolly-tls-mlkem-key-agreement-00
(and

https://github.com/dconnolly/draft-connolly-tls-mlkem-key-agreement/blob/main/draft-connolly-tls-mlkem-key-agreement.md)
and I have a few comments. First, though, I want to say
thank you for writing this draft. I'll echo some of what
has already been voiced on this thread and say that, while
some plan to use composite key establishment, it makes
sense to also specify the use of standalone ML-KEM in TLS
1.3 as another option. Other WGs (lamps and ipsecme) have
already begun to specify the use of standalone FIPS 203,
204, and 205 in various protocols. With respect to this
draft, there is certainly interest from National Security
System vendors in using standalone ML-KEM-1024 in TLS 1.3
for CNSA 2.0 compliance (as CNSA 2.0 does not require nor
recommend hybrid solutions for security).


I wanted to address this CNSA 2.0 point, as I've now seen it
brought up a couple of times.

The IETF, together with the IRTF, needs to make an independent
judgement on whether using PQ-only algorithms is advisable,
and if we do not think so, then we should not standardize
them, regardless of what CNSA 2.0 requires. The supported
groups registry policies are designed explicitly to allow
people to register non Standards Track algorithms, so nothing
precludes the creation of an ML-KEM only code point if some
vendors find that necessary, without the IETF standardizing
them or marking them as Recommended=Y.
-Ekr



A few specific comments:

1. In Section 1.1 (or Introduction - Motivation in the
github version), I would suggest that the second sentence
("Having a fully post-quantum...") is not needed, i.e.
that there need not be a justification for why it is
necessary to specify how to use ML-KEM in TLS 1.3 (vs.
hybrid). It could be appropriate to contextualize the
specification of ML-KEM w.r.t the advent of a CRQC, though
I also don't think that is necessary. As an example, we
can compare to the Introduction in
draft-ietf-lamps-cms-kyber-03.

2. Section 3 (Construction on github) currently reads, "We
align with [hybrid] except that instead of joining ECDH
options with a KEM, we just have the KEM as a NamedGroup."
I think it is a more useful framing to base this
specification (for the use of a standalone algorithm) off
of RFC 8446 instead of the draft spec for a hybrid
solution. I understand wanting to align the approach with

Re: [TLS] Working Group Last Call for ECH

2024-03-14 Thread Dennis Jackson

+1 to shipping it.

On 11/03/2024 22:00, Joseph Salowey wrote:
This is the working group last call for TLS Encrypted Client Hello 
[1].  Please indicate if you think the draft is ready to progress to 
the IESG and send any comments to the list by 31 March 2024.  The 
comments sent by Watson Ladd to the list [2] on 17 February 2024 will 
be considered last call comments.


Thanks,

Joe, Deirdre, and Sean

[1] https://datatracker.ietf.org/doc/draft-ietf-tls-esni/
[2] https://mailarchive.ietf.org/arch/msg/tls/XUCFuNBSQfSJclkhLW-14DZ0ETg/





Re: [TLS] ML-KEM key agreement for TLS 1.3

2024-03-07 Thread Dennis Jackson

On 07/03/2024 03:57, Bas Westerbaan wrote:

We think it's worth it now, but of course we're not going to keep 
hybrids around when the CRQC arrives.


Sure, but for now we gain substantial security margin* against 
implementation mistakes, advances in cryptography, etc.


On the perf/cost side, we're already making a large number of 
sub-optimal choices (use of SHA-3, use of Kyber in TLS rather than a CPA 
scheme, picking 768 over 512, etc), so we can easily 'pay' for X25519 if 
we really wanted to. I think if handshake cycles really mattered then we'd 
have shown RSA the door much more quickly [1].


Best,
Dennis

* As in, actual security from combination of independent systems, not 
the mostly useless kind from using over-size primitives.


[1] https://blog.cloudflare.com/how-expensive-is-crypto-anyway



Best,

 Bas

On Thu, Mar 7, 2024 at 1:56 AM Dennis Jackson wrote:


I'd like to understand the argument for why a transition back to
single
schemes would be desirable.

Having hybrids be the new standard seems to be a nice win for
security
and pretty much negligible costs in terms of performance,
complexity and
bandwidth (over single PQ schemes).

On 07/03/2024 00:31, Watson Ladd wrote:
> On Wed, Mar 6, 2024, 10:48 AM Rob Sayre  wrote:
>> On Wed, Mar 6, 2024 at 9:22 AM Eric Rescorla  wrote:
>>>
>>>
>>> On Wed, Mar 6, 2024 at 8:49 AM Deirdre Connolly
 wrote:
>>>>> Can you say what the motivation is for being "fully
post-quantum" rather than hybrid?
>>>> Sure: in the broad scope, hybrid introduces complexity in the
short-term that we would like to move off of in the long-term -
for TLS 1.3 key agreement this is not the worst thing in the world
and we can afford it, but hybrid is by design a hedge, and
theoretically a temporary one.
>>>
>>> My view is that this is likely to be the *very* long term.
>>
>> Also, the ship has sailed somewhat, right? Like Google Chrome,
Cloudflare, and Apple iMessage already have hybrids shipping (I'm
sure there many more, those are just really popular examples). The
installed base is already very big, and it will be around for a
while, whatever the IETF decides to do.
> People can drop support in browsers fairly easily especially for an
> experimental codepoint. It's essential that this happen: if
everything
> we (in the communal sense) tried had to be supported in
perpetuity, it
> would be a recipe for trying nothing.
>
>> thanks,
>> Rob
>>


Re: [TLS] draft-ietf-tls-cert-abridge Update

2024-03-06 Thread Dennis Jackson

Hi Panos,

On 05/03/2024 04:14, Kampanakis, Panos wrote:


Hi Dennis,

> I can see two different ways to handle it. Either as you suggest, we 
have it be a runtime decision and we just prefix the compressed form 
with a byte to indicate whether pass 2 has been used. Alternatively, 
we can define two codepoints, (pass 1 + pass 2, pass 1).


> I'd like to experiment with both operations and measure what the 
real world difference is first, then we can make a decision on how to 
proceed. There's also been more interest in the non-webpki use case 
than I expected, so that needs to factor in to whichever option we pick.


Maybe these will not matter for the scenario I am considering. Let’s 
say the client advertised support for draft-ietf-tls-cert-abridge. And 
the server sent back
- CompressedCertificate which includes the 2 identifiers for the ICA 
and RootCA from Pass 1.


- uncompressed, traditional CertificateEnty of the end-entity certificate

Or it sent back

- uncompressed, traditional CertificateEnties for the  ICA and RootCA 
certs


- CompressedCertificate which includes the ZStandard compressed (based 
on the Pass2 dictionary) end-entity cert


My point is that nothing should prevent the client from being able to 
handle these two scenarios and normative language should point that 
out. Any software that can parse certs in compressed form, ought to be 
able to parse them in regular form if the server did not support Pass1 
(CA cers were not available for some reason) or Pass2 (eg. if CT Logs 
were not available for some reason).


Am I overlooking something?

Yes I think so. TLS1.3 Certificate Compression applies to the entire 
Certificate Message, not individual CertificateEntries in that message. 
Those individual entries don't currently carry identifiers about what 
type they are; their type is negotiated earlier in the 
EncryptedExtensions extension.


So to handle this as you propose, we'd need to define a type field for 
each entry to specify whether that particular entry had undergone a 
particular pass (or both). In my message, I was suggesting either having 
it be a single label for the entire message or putting the label into 
the TLS1.3 Cert Compression codepoint.
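
(For anyone following along, a very rough sketch of the per-entry variant 
being discussed - the identifier table, dictionary contents and framing 
are illustrative, not the draft's actual encoding:)

    import zstandard as zstd  # pip install zstandard

    PASS1_IDS = {}  # DER of listed CA certs -> short identifier, built
                    # from a fixed, versioned snapshot of the WebPKI
    PASS2_DICT = zstd.ZstdCompressionDict(b"shared end-entity byte patterns")

    def compress_entry(der):
        # Pass 1: substitute a known root/intermediate with its identifier.
        if der in PASS1_IDS:
            return PASS1_IDS[der]
        # Pass 2: Zstandard with the shared dictionary, typically applied
        # to the end-entity certificate.
        return zstd.ZstdCompressor(dict_data=PASS2_DICT).compress(der)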


Best,
Dennis


*From:* Dennis Jackson 
*Sent:* Monday, March 4, 2024 10:47 AM
*To:* Kampanakis, Panos ; TLS List 
*Subject:* RE: [EXTERNAL] [TLS] draft-ietf-tls-cert-abridge Update



Hi Panos,

On 02/03/2024 04:09, Kampanakis, Panos wrote:

Hi Dennis,

I created a git issue
https://github.com/tlswg/draft-ietf-tls-cert-abridge/issues/23 but
I am pasting it here for the sake of the discussion:

What does the client do if the server only does Pass 1 and
compresses / omits the chain certs but does not compress the
end-entity certs (Pass 2)?

The client should be fine with that. It should be able to
reconstruct the chain and used the uncompressed end-entity cert.
It should not fail the handshake. I suggest the Implementation
Complexity Section to say something like

I can see two different ways to handle it. Either as you suggest, we 
have it be a runtime decision and we just prefix the compressed form 
with a byte to indicate whether pass 2 has been used. Alternatively, 
we can define two codepoints, (pass 1 + pass 2, pass 1).


I'd like to experiment with both operations and measure what the real 
world difference is first, then we can make a decision on how to 
proceed. There's also been more interest in the non-webpki use case 
than I expected, so that needs to factor in to whichever option we pick.


Best,
Dennis

/> Servers MAY chose to compress just the cert chain or the
end-certificate depending on their ability to perform Pass 1 or 2
respectively. Client MUST be able to process a compressed chain or
an end-entity certificate independently./

Thanks,

Panos

*From:* TLS  <mailto:tls-boun...@ietf.org>
*On Behalf Of *Dennis Jackson
*Sent:* Friday, March 1, 2024 8:03 AM
*To:* TLS List  <mailto:tls@ietf.org>
*Subject:* [EXTERNAL] [TLS] draft-ietf-tls-cert-abridge Update


Hi all,

I wanted to give a quick update on the draft.

On the implementation side, we have now landed support for TLS
Certificate Compression in Firefox Nightly which was a
prerequisite for experimenting with this scheme (thank you to Anna
Weine). We're working on a rust crate implementing the current
draft and expect to start experimenting with abridged certs in
Firefox (with a server-side partner) ahead of IETF 120.

On the editorial side, I've addressed the com

Re: [TLS] draft-ietf-tls-cert-abridge Update

2024-03-04 Thread Dennis Jackson

Hi Panos,

On 02/03/2024 04:09, Kampanakis, Panos wrote:


Hi Dennis,

I created a git issue 
https://github.com/tlswg/draft-ietf-tls-cert-abridge/issues/23 but I 
am pasting it here for the sake of the discussion:


What does the client do if the server only does Pass 1 and compresses 
/ omits the chain certs but does not compress the end-entity certs 
(Pass 2)?


The client should be fine with that. It should be able to reconstruct 
the chain and used the uncompressed end-entity cert. It should not 
fail the handshake. I suggest the Implementation Complexity Section to 
say something like


I can see two different ways to handle it. Either as you suggest, we 
have it be a runtime decision and we just prefix the compressed form 
with a byte to indicate whether pass 2 has been used. Alternatively, we 
can define two codepoints, (pass 1 + pass 2, pass 1).


I'd like to experiment with both operations and measure what the real 
world difference is first, then we can make a decision on how to 
proceed. There's also been more interest in the non-webpki use case than 
I expected, so that needs to factor in to whichever option we pick.


Best,
Dennis

/> Servers MAY chose to compress just the cert chain or the 
end-certificate depending on their ability to perform Pass 1 or 2 
respectively. Client MUST be able to process a compressed chain or an 
end-entity certificate independently./


Thanks,

Panos

*From:* TLS  *On Behalf Of * Dennis Jackson
*Sent:* Friday, March 1, 2024 8:03 AM
*To:* TLS List 
*Subject:* [EXTERNAL] [TLS] draft-ietf-tls-cert-abridge Update

*CAUTION*: This email originated from outside of the organization. Do 
not click links or open attachments unless you can confirm the sender 
and know the content is safe.


Hi all,

I wanted to give a quick update on the draft.

On the implementation side, we have now landed support for TLS 
Certificate Compression in Firefox Nightly which was a prerequisite 
for experimenting with this scheme (thank you to Anna Weine). We're 
working on a rust crate implementing the current draft and expect to 
start experimenting with abridged certs in Firefox (with a server-side 
partner) ahead of IETF 120.


On the editorial side, I've addressed the comments on presentation and 
clarification made since IETF 117 which are now in the editors copy - 
there's an overall diff here [1] and atomic changes here [2] . There 
are two small PRs I've opened addressing minor comments by Ben Schwarz 
on fingerprinting considerations [3] and Jared Crawford on the 
ordering of certificates [4]. Feedback is welcome via mail or on the 
PRs directly.


Best,
Dennis

[1] 
https://author-tools.ietf.org/api/iddiff?doc_1=draft-ietf-tls-cert-abridge_2=https://tlswg.github.io/draft-ietf-tls-cert-abridge/draft-ietf-tls-cert-abridge.txt 


[2] https://github.com/tlswg/draft-ietf-tls-cert-abridge/commits/main/

[3] https://github.com/tlswg/draft-ietf-tls-cert-abridge/pull/21/files

[4] https://github.com/tlswg/draft-ietf-tls-cert-abridge/pull/19/files
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] I-D Action: draft-ietf-tls-cert-abridge-00.txt

2024-03-04 Thread Dennis Jackson

Hey Ilari,

I think you are still misunderstanding the scheme. To clarify:

On 01/03/2024 18:01, Ilari Liusvaara wrote:

The unrecognized identifier issue is a bit more subtle.
Suppose that a client:

- Has only partial list of certificates (enough to cover the built-in
   trust store).
- Allows an admin to add a new trust anchor, or to override validation
   error.

Then such client can get into situation where server sends chain that
should be valid, but instead references a certificate the client does
not have. Which is a hard error.


As laid out in 3.1, this draft works with a fixed list of certificates. 
Clients cannot use the scheme unless they are willing to hold the full 
list of certificates. Clients can trust a superset or subset of the 
roots present on the list; any certificates not in the fixed pass 1 
list are simply not compressed in that pass. A key goal of this draft 
is not to risk any breakage (unlike with suppressing intermediates).


If you have any editorial feedback on where you think this part of the 
draft is unclear, suggestions are welcome. I'm not sure where you've got 
the idea that only partial lists of certificates are possible.


Best,
Dennis
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


[TLS] draft-ietf-tls-cert-abridge Update

2024-03-01 Thread Dennis Jackson

Hi all,

I wanted to give a quick update on the draft.

On the implementation side, we have now landed support for TLS 
Certificate Compression in Firefox Nightly which was a prerequisite for 
experimenting with this scheme (thank you to Anna Weine). We're working 
on a rust crate implementing the current draft and expect to start 
experimenting with abridged certs in Firefox (with a server-side 
partner) ahead of IETF 120.


On the editorial side, I've addressed the comments on presentation and 
clarification made since IETF 117 which are now in the editors copy - 
there's an overall diff here [1] and atomic changes here [2]. There are 
two small PRs I've opened addressing minor comments by Ben Schwarz on 
fingerprinting considerations [3] and Jared Crawford on the ordering of 
certificates [4]. Feedback is welcome via mail or on the PRs directly.


Best,
Dennis

[1] 
https://author-tools.ietf.org/api/iddiff?doc_1=draft-ietf-tls-cert-abridge_2=https://tlswg.github.io/draft-ietf-tls-cert-abridge/draft-ietf-tls-cert-abridge.txt


[2] https://github.com/tlswg/draft-ietf-tls-cert-abridge/commits/main/

[3] https://github.com/tlswg/draft-ietf-tls-cert-abridge/pull/21/files

[4] https://github.com/tlswg/draft-ietf-tls-cert-abridge/pull/19/files
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] I-D Action: draft-ietf-tls-cert-abridge-00.txt

2024-03-01 Thread Dennis Jackson

Hi Ilari,

Thank you for the quick review. I've been integrating all of the 
editorial feedback in the draft (separate mail to follow to the group). 
Regarding your feedback:


On 06/09/2023 17:46, Ilari Liusvaara wrote:

Doing quick review:

Section 3.1.2:

- It is not clear what exactly is replaced if cert_data is known.
   Obviously overriding the length field would be more compact, but it
   also can be interpreted as replacing the value, wasting 3 bytes.

   (Reminds me of RFC 8879, which is not clear about similar things.)

- CertificateEntry and Certificate length fields are just waste of
   space, since both can be recovered in other ways when decoding.

- RFC 8879 does not allow ignoring unrecognized three-byte identifiers.
   Instead, the connection MUST be terminated with bad_certificate alert.

   This has consequence that any client that can ever add a custom trust
   anchor via any means must have the complete certificate list (whereas
   partial list would be enough if no custom trust anchors can ever be
   added).

   And I find the last comment about transcript validation failing very
   scary.


I've improved the language in the draft to clarify exactly how this pass 
works. A TLS 1.3 Certificate message carrying X.509 certificates is 
structured as follows:


  struct {
  opaque cert_data<1..2^24-1>;
  Extension extensions<0..2^16-1>;
  } CertificateEntry;

  struct {
  opaque certificate_request_context<0..2^8-1>;
  CertificateEntry certificate_list<0..2^24-1>;
  } Certificate;

When compressing during this first pass, we swap the opaque cert_data 
fields for short three-byte identifiers and correct the lengths. The 
extension and certificate_request_context fields are not adjusted. 
Pass 2 then compresses the result with Zstd using a provided dictionary.


When decompressing, after the Zstd pass, we use the length fields to 
parse the message and replace any recognized three-byte identifiers 
with the corresponding certificate, again leaving the extensions and 
the certificate_request_context untouched.
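
To make the two passes concrete, here's a rough Python sketch using the 
zstandard bindings. The identifier mapping and dictionary contents are 
placeholders, and a real implementation operates on the parsed TLS 
structures rather than raw byte strings:

  import zstandard

  CERT_TO_ID = {}  # bytes(cert_data) -> three-byte identifier (pass 1 list)
  ID_TO_CERT = {v: k for k, v in CERT_TO_ID.items()}
  SHARED_DICT = zstandard.ZstdCompressionDict(b"placeholder dictionary")

  def pass1(cert_data_fields):
      # Swap each recognised cert_data for its identifier; length fields
      # are recomputed when the Certificate message is re-serialised.
      return [CERT_TO_ID.get(cd, cd) for cd in cert_data_fields]

  def pass2_compress(serialised_msg: bytes) -> bytes:
      return zstandard.ZstdCompressor(dict_data=SHARED_DICT).compress(
          serialised_msg)

  def pass2_decompress(blob: bytes, max_size: int) -> bytes:
      # Cap the output size so a malicious peer can't inflate memory use.
      return zstandard.ZstdDecompressor(dict_data=SHARED_DICT).decompress(
          blob, max_output_size=max_size)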


I hope that clarifies why transmitting the lengths is necessary. 
Regarding the decompression errors, I've updated the draft to say that 
decompression must fail and the correct alert sent if the length fields 
are incorrect as you suggested. I've filed an issue to discuss the case 
of unrecognized identifiers specifically [1].


Regarding the general security considerations, I want to clarify the 
situation. At the point we receive a Certificate (or a 
CompressedCertificate) message in TLS 1.3, we don't know who we're 
talking to; the point of the message is to provide that information. 
Obviously, a peer can send any payload it likes in a Certificate message 
already; it's the job of the receiver to parse the Certificate message 
and establish whether it trusts the presented certificate (chain).


Using Certificate Compression with any scheme (including this draft) 
doesn't change the fundamentals. The result of decompressing the message 
is handled exactly like a Certificate message. So with the exception of 
memory safety bugs during decompression, there is no additional attack 
surface. An attacker can already send arbitrary content in a Certificate 
message and so implementations receiving a Certificate message already 
have to be able to robustly validate it. Any attack which somehow 
leveraged the decompression aspect is also possible by the attacker just 
sending the output of the decompression directly as a Certificate 
message. Nothing in this draft changes the situation from existing uses 
of Certificate Compression.




Section 3.2.:

- Using alternate scheme could result drastically reduced implementation
   complexity.

   Furthermore, one can't even use standard zstd decoder with this due to
   security reasons. One needs special one (but seems like reference zstd
   library ships that as alternative API).


Can you clarify what you mean by this? The standard zstd decoder works fine.


Section 3.2.1.:

- I suspect that having CA-specific dictionaries would make it much
   easier to be equitable and improve compression.

   Then I don't think the dictionary construction method is good:
  
   * Using just one certificate is very dubious.

   * It is more optimal to consider byte substrings with no structure.


This is tracked in [2] and [3]. Depending on the experimental data we 
get when evaluating [3], we might omit pass 2 entirely.




Section 3.2.1.1.:

- Caching monolithic compression from startup does not work because of
   extension fields.

   For caching to work, one would have to compress the certificate
   entries independently and leave the extension fields in between
   uncompressed.
The draft currently preserves the extension fields. Existing TLS 
Certificate Compression APIs perform the caching at the level of the 
entire message and so the cache is only used if 

Re: [TLS] Key Update for TLS/DTLS 1.3

2024-01-04 Thread Dennis Jackson
From a security perspective, this would be equivalent to having the 
client open a new connection to the server using a session ticket from 
the existing connection with psk_dhe_ke mode?


I guess the ergonomics of that approach perhaps aren't as neat, but it 
would only require client-side implementation changes and no spec or 
server-side changes to deploy.


Best,
Dennis

On 04/01/2024 11:42, Tschofenig, Hannes wrote:


Hi all,

we have just submitted a draft that extends the key update 
functionality of TLS/DTLS 1.3.


We call it the “extended key update” because it performs an ephemeral 
Diffie-Hellman as part of the key update.


The need for this functionality surfaced in discussions in a design 
team of the TSVWG. The need for it has, however, already been 
discussed years ago on the TLS mailing list in the context of 
long-lived TLS connections in industrial IoT environments.


Unlike the TLS 1.3 Key Update message, which is a one-shot message, 
the extended Key Update message requires a full roundtrip.


Here is the link to the draft:

https://datatracker.ietf.org/doc/draft-tschofenig-tls-extended-key-update/

I am curious what you think.

Ciao
Hannes


___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Call to Move RFC 8773 from Experimental to Standards Track

2023-12-11 Thread Dennis Jackson

RFC 8773 S3:

> In the near term, this document describes a TLS 1.3 extension to 
protect today's communications from the future invention of a 
large-scale quantum computer by providing a strong external PSK as an 
input to the TLS 1.3 key schedule while preserving the authentication 
provided by the existing certificate and digital signature mechanisms.


I don't see anything specifically alarming about the design, but I'm 
very uncomfortable about any standards-track document making a strong 
security claim like this if it's not backed by some kind of formal 
analysis.


The document could also be more explicit about the security properties 
it achieves and when: e.g. that they break down once a large-scale QC 
is actually available, and that clients & servers need to reject 
connections which do not negotiate the extension in order to actually 
benefit from its protection.


On the issue of tracking via external PSKs: it's easy to imagine a 
scheme where client and server divide time into epochs and derive 
per-epoch keys to prevent tracking between epochs (a sketch follows). 
I'm sure there must be some prior art that could be referenced as a 
recommendation?
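
Purely to illustrate the shape of such a scheme (the labels and epoch 
length here are invented for this example), both sides could derive 
each epoch's external PSK from a long-term secret with HKDF-Expand:

  import hashlib, hmac, time

  EPOCH_SECONDS = 24 * 3600  # illustrative epoch length

  def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
      # HKDF-Expand as in RFC 5869.
      out, block, counter = b"", b"", 1
      while len(out) < length:
          block = hmac.new(prk, block + info + bytes([counter]),
                           hashlib.sha256).digest()
          out += block
          counter += 1
      return out[:length]

  def epoch_psk(long_term_secret: bytes) -> bytes:
      epoch = int(time.time() // EPOCH_SECONDS)
      return hkdf_expand(long_term_secret,
                         b"example epoch psk" + epoch.to_bytes(8, "big"))

Connections within the same epoch still share a PSK identity, but an 
observer can't link identities across epochs without the long-term key.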


Best,
Dennis

On 29/11/2023 15:51, Joseph Salowey wrote:
RFC 8773 (TLS 1.3 Extension for Certificate-Based Authentication with 
an External Pre-Shared Key) was originally published as experimental 
due to lack of implementations. As part of implementation work for the 
EMU workitem draft-ietf-emu-bootstrapped-tls which uses RFC 8773 there 
is ongoing implementation work. Since the implementation status of RFC 
8773 is changing, this is a consensus call to move RFC 8773 to 
standards track as reflected in 
[RFC8773bis](https://datatracker.ietf.org/doc/draft-ietf-tls-8773bis). 
This will also help avoid downref for the EMU draft.  Please indicate 
if you approve of or object to this transition to standards track 
status by December 15, 2023.


Thanks,

Joe, Sean, and Deirdre

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Adoption call for 'TLS 1.2 Feature Freeze'

2023-12-11 Thread Dennis Jackson

I support adoption, and am happy to review.

Best,
Dennis

On 06/12/2023 12:50, Salz, Rich wrote:


At the TLS meeting at IETF 118 there was significant support for the 
draft 'TLS 1.2 is in Feature Freeze' 
(https://datatracker.ietf.org/doc/draft-rsalz-tls-tls12-frozen/). 
This call is to confirm this on the list. Please indicate if you 
support the adoption of this draft and are willing to review and 
contribute text. If you do not support adoption of this draft please 
indicate why.  This call will close on December 20, 2023.


As the co-author, I support this and am willing to continue working on 
it as needed.



___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] ECH: What changed?

2023-11-14 Thread Dennis Jackson

Hi Rich,

During 117, both Firefox and Chrome were just starting to roll out ECH 
to release users. We had no sense of how it would go, and I at least 
didn't feel we should progress without some deployment experience. 
These roll-outs finished a few weeks later (see e.g. [1,2]) and went 
fairly smoothly. Today it's deployed at 100% in both Firefox and 
Chrome, with ECH GREASEing enabled as well.


Best,
Dennis

[1] https://blog.mozilla.org/en/products/firefox/encrypted-hello/

[2] https://chromestatus.com/feature/6196703843581952

On 14/11/2023 15:02, Salz, Rich wrote:


So at IETF 118 it appears that the TLS ECH draft is headed for WGLC. 
What changed since IETF 117, when it wasn't ready and we needed more 
"something"? (I asked if we had measurable criteria and we did not.)



___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [EXTERNAL] New Version Notification for draft-ounsworth-lamps-pq-external-pubkeys-00.txt

2023-10-10 Thread Dennis Jackson

On 10/10/2023 17:53, Russ Housley wrote:


Dennis:

If we are going to allow a certificate to include pointers to 
externally stored public keys, I think we need a solution that works 
for the Web PKI and other PKI environments as well.


I'm trying to understand the use case of certificates with pointers to 
externally stored public keys. What's the value in splitting these 
objects? If you're going to cache a public key, why not cache the whole 
certificate?


The suggestion of Abridged Certs is just one way to do that caching. If 
the external fetching via URL is the key feature, you could define a 
certificate compression scheme which compresses a certificate to a URL 
and decompresses the URL back to the certificate.


I skimmed the LAMPS list as well, but I did not see any discussion of 
the rationale there.



___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] FW: [EXTERNAL] New Version Notification for draft-ounsworth-lamps-pq-external-pubkeys-00.txt

2023-10-10 Thread Dennis Jackson

Hi Mike,

On 30/09/2023 23:19, Mike Ounsworth wrote:



Consider the following two potential use-cases:

1. Browsers

Browsers already have mechanisms to cache intermediate CA 
certificates. It does not seem like a big leap to also cache external 
public keys for the server certs of frequently-visited websites. (yes, 
yes, I know that the idea of caching server public keys runs counter 
to the desire for the Internet to move to 14-day certs. Shrug)


I think a bigger objection would be that caching the public keys is a 
tracking vector and seems a bit redundant if the browser and server can 
do session resumption instead (with less storage overhead).


2. Mutual-auth TLS within a cluster

Consider a collection of docker containers within a kubernetes 
cluster. Consider that each container has a local volume mount of a 
read-only database of the public keys of all containers in the 
cluster. Then container-to-container mutual-auth TLS sessions could 
use much smaller certificates that contain references to public key 
objects in the shared database, instead of the large PQ public keys 
themselves.


You could easily use abridged certs here, just with an 
enterprise-specific dictionary rather than the external public keys. 
That would let you reduce the entire certificate chain to a few bytes, 
without having to implement any key-fetching logic in the TLS client. 
Would you be interested in that? It would be easy enough to have a 
generic IETF draft for abridged certs schemes and then a particular 
draft defining the WebPKI instantiation. I think Panos may also be 
interested in a similar enterprise use case?


I'm a little confused about why caching the public key independently of 
the certificate is a good idea. It seems like a recipe for encouraging 
people to issue new certificates without rolling new public keys, which 
would not be great for security.


Best,
Dennis


---

*Mike*Ounsworth

*From:*Spasm  *On Behalf Of *Mike Ounsworth
*Sent:* Saturday, September 30, 2023 5:16 PM
*To:* 'LAMPS' 
*Cc:* John Gray ; Markku-Juhani O. Saarinen 
; David Hook 
*Subject:* [lamps] FW: [EXTERNAL] New Version Notification for 
draft-ounsworth-lamps-pq-external-pubkeys-00.txt


Hi LAMPS!

This is both a new draft announcement, and a request for a short (5 
min?) speaking slot at 118.


Actually, this is not a new draft. Back in 2021 Markku and I put 
forward a draft for External Public Key -- 
draft-ounsworth-pq-external-pubkeys-00 (the only reason this is an -00 
is because I included "lamps" in the draft name). The idea is that 
instead of a putting the full public key in a cert, you just put a 
hash and pointer to it:


ExternalValue ::= SEQUENCE {
    location GeneralName,
    hashAlg  AlgorithmIdentifier,
    hashVal  BIT STRING
}

That allows super small PQ certs in cases where you can pass the 
public key blob through some out-of-band mechanism.


Here's the mail list discussion from 2021:

https://mailarchive.ietf.org/arch/msg/spasm/yv7mbMMtpSlJlir8H8_D2Hjr99g/ 



It turns out that BouncyCastle has implemented this at the request of 
one of their customers as a way around megabyte-sized Classic McEliece 
certs; it is especially useful for use cases where clients have a way 
to fetch-and-cache or be pre-provisioned with their peers' public keys 
out-of-band. As such, Entrust and KeyFactor are reviving this draft.


We suspect this might also be of interest to the TLS WG, but I will 
start a separate thread on the TLS list.


---

*Mike*Ounsworth

*From:*internet-dra...@ietf.org 
*Sent:* Saturday, September 30, 2023 5:12 PM
*To:* D. Hook ; John Gray 
; Markku-Juhani O. Saarinen 
; John Gray ; Markku-Juhani 
Saarinen ; Mike Ounsworth 
*Subject:* [EXTERNAL] New Version Notification for 
draft-ounsworth-lamps-pq-external-pubkeys-00.txt


A new version of Internet-Draft
draft-ounsworth-lamps-pq-external-pubkeys-00.txt has been successfully
submitted by Mike Ounsworth and posted to the
IETF repository.
Name: draft-ounsworth-lamps-pq-external-pubkeys
Revision: 00
Title:    External Keys For Use In Internet X.509 Certificates
Date: 2023-09-30
Group:    Individual Submission
Pages:    9
URL: 

Re: [TLS] Encrypted Client Hello - SNI leaks via public name?

2023-10-06 Thread Dennis Jackson

Hi Raghu,

On 06/10/2023 10:45, Raghu Saxena wrote:
Specifically, I am concerned about the "public name" field in the 
ECHConfig. For services such as cloudflare, they can "hide" everything 
behind a single domain (e.g. "cloudflare-ech.com"). However, for 
someone who just owns a single domain (e.g. "hub.com"), what would the 
"suggested value" be?


Section 6.1.7 implies it should NOT be an IPv4 address. If I do not 
wish to leak the real domain, is it "acceptable" to use something like 
"fakedomain.com"?

The server needs to be able to answer a TLS connection for the ECH 
public name, or you risk an outage if the ECHConfigs get out of sync: 
clients will get a TLS error page if they can't get through to the 
public name.


If the public_name leaks domain in anyway, I think it would be quite 
unfortunate, at least for bypassing DPI-blocks. From what I 
understand, the purpose of public_name is only if the server doesn't 
support ECH, but if a client retrieved an ECHConfig, why shouldn't the 
client just skip this field? I fear it will become a situation like 
the initial SNI extension - even when websites do not need it, 
browsers' TLS stacks send it anyway, causing leakage.


For instance, in India, a popular website, let's call it "hub.com", is 
blocked via SNI. However, the website itself does NOT rely on SNI, It 
is possible to open a pure TLS connection to it via IP, it serves the 
TLS cert for "hub.com" so the handshake can be completed, and then the 
website will load as normal. I verified this by manually using 
"openssl s_client", WITHOUT SNI. But since Firefox/Chrome will always 
send SNI, the ISPs can block it.


If the server can answer with the correct certificate without SNI, then 
it must be the only site hosted at that IP. This means that the ISP can 
also just block the IP without any collateral damage. The same would be 
true if ECH connections didn't expose an SNI at all: they'd stick out 
and could be blocked for that reason.


I would describe ECH primarily as a technology for improving privacy. 
Unfortunately, I don't think it's any kind of silver bullet for 
censorship resistance.


Best,
Dennis



___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] RFC7924 (Cached Information Extension) Update for TLS 1.3

2023-08-15 Thread Dennis Jackson

Hi Simon,

On 15/08/2023 03:41, Simon Mangel wrote:

We believe it to be useful in cases where the network bandwidth is
severely restricted, such that one would want to keep the number of
"full" handshakes as small as possible.

Session resumption ticket lifetimes are limited to 7 days in TLS 1.3
[RFC8446, sec. 4.6.1], and are often configured to be even shorter
[Sy2018Tracking, sec. 5.1.2].


Two observations:

   1. The reason ticket lifetimes are often shorter than 7 days is
   because they can be used to track user visits. Caching end-entity
   certificates as in RFC 7924 over a long period of time is
   problematic for the same reason.

   2. Although each individual session ticket lifetime is capped at 7
   days, you can resume a TLS 1.3 session within the 7 day window and
   receive new session tickets over that connection and so extend the
   resumption window up till the certificate lifetime (subject to the
   tracking risk in (1)).

So if you aren't concerned about tracking and you expect to make at 
least one connection per week, session resumption is strictly better.


If you're making fewer than one connection a week, RFC 7924 is better, 
but then you're saving at most 52 handshakes per year per device (as the 
max certificate lifetime is ~1 year), so I'd argue the implementation 
benefit is pretty small.


And if you're concerned about tracking then neither RFC 7924 nor long 
term session ticket renewal is appropriate.


Best,
Dennis


As X.509 certificates (as the most interesting type of cached object)
typically have a much longer validity period, the idea is to further
bring down the frequency of "full" handshakes (including the server
certificate chain) by either opting to uses RFC7924 instead of session
resumption, or even combining the two techniques.
When combining the two, the full TLS handshake after a session
resumption ticket has expired could then be made more efficient using
the cached Certificate and/or CertificateRequest message, at the cost
of a second client-side cache.

Best wishes,
Simon


References
[RFC8446] The Transport Layer Security (TLS) Protocol Version 1.3,
https://datatracker.ietf.org/doc/html/rfc8446
[Sy2018Tracking] Tracking Users across the Web via TLS Session
Resumption, https://doi.org/10.1145/3274694.3274708___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] RFC7924 (Cached Information Extension) Update for TLS 1.3

2023-08-14 Thread Dennis Jackson

Hi Simon,

Can you expand more on the intended use case? When would it make sense 
to use a RFC7924-like mechanism over TLS 1.3's session resumption?


I skimmed RFC 7924 and session resumption seems strictly better as it's 
already widely deployed, allows for the DH handshake to be optionally 
elided and has the exact same storage requirements (some space required 
on the client, none required on the server).


Best,
Dennis

On 12/08/2023 06:58, Simon Mangel wrote:

tl;dr: We plan on updating RFC 7924 for TLS 1.3 and would like to check
whether there is interest in the TLS wg.

The TLS Cached Information extension [RFC7924] has not seen significant
adoption since its specification.
However, we still believe it to be an interesting candidate in upcoming
IoT application scenarios.


___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Abridged Certificate Compression (dictionary versioning)

2023-07-14 Thread Dennis Jackson

On 13/07/2023 02:31, Kampanakis, Panos wrote:


Btw, in 3.1.1 I noticed
- "Remove all intermediate certificates which are not signed by root certificates 
still in the listing."

That could eliminate some 2+ ICA cert chains. Any reason why?

Whoops, that's a good spot. The intent here was just to remove any 
intermediates which no longer chained back to trusted roots, so I'll fix 
the wording.___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Abridged Certificate Compression

2023-07-14 Thread Dennis Jackson

On 13/07/2023 10:13, Rob Stradling wrote:

How about also including in the shared dictionary the SHA-256 hashes 
of the public keys of all the known CTv1 logs, so that the 32-byte 
LogID field of each SCT will be compressed?


This is already step 2 of the shared dictionary construction :-) (link: 
https://www.ietf.org/archive/id/draft-jackson-tls-cert-abridge-00.html#section-3.2.1-4).




Best,
Dennis



FWIW, RFC9162 (CTv2) tackles the same SCT bloat by changing the LogID 
type from a (32-byte) SHA-256 hash of the log's public key to a 
(minimum 4-byte) DER-encoded OID (excluding the tag and length octets).



*From:* TLS  on behalf of Tim Hollebeek 


*Sent:* 12 July 2023 19:29
*To:* Kampanakis, Panos ; Dennis 
Jackson ; TLS List 

*Subject:* Re: [TLS] Abridged Certificate Compression

SCTs have always seemed surprisingly large to me, and it has always 
seemed like there should be a more compact representation that has the 
same security properties, but I've never found the time to look more 
closely at it. If someone does have the time, figuring out how to 
reduce the size of SCTs would be quite helpful.

-Tim

> -Original Message-
> From: TLS  On Behalf Of Kampanakis, Panos
> Sent: Wednesday, July 12, 2023 2:23 PM
> To: Dennis Jackson ; TLS List
> 
> Subject: Re: [TLS] Abridged Certificate Compression
>
> > The performance benefit isn't purely in the ~1KB saved, it's 
whether it brings
> the chain under the QUIC amplification limit or shaves off an 
additional packet
> and so avoids a loss+retry. There's essentially no difference in 
implementation
> complexity, literally just a line of code, so the main tradeoff is 
the required disk

> space on the client & server.
>
> Fair. I would add one more tradeoff which is pulling the end-entity 
certs in the
> CT window for pass 2. This is an one time cost for each dictionary 
version, so

> maybe not that bad.
>
> Regardless, would compressing the leaf bring us below the QUIC 3.6KB
> threshold for Dilithium 2 or 3 certs whereas not suppressing would 
keep us
> above? I think it is not even close if we are talking WebPKI. 
Without SCTs,
> maybe compressing could keep us below 4KB for Dilithium 2 leaf 
certs. But
> even then, if we add the CertVerify signature size we will be well 
over 4KB.

>
> Additionally, would compressing the leaf bring us below the 9-10KB 
threshold
> that Bas had tested to be an important inflection point? For WebPKI, 
it may
> the 8-9KB cert below 9KB if we add the CertVerify signature size. 
Maybe not. It
> would need to tested. For Dilithium 3, maybe compression could 
render the

> 11-12KB cert below 9KB if we got lucky, maybe not, but if we add the
> CertVerify signature we won’t make it. For non-WebPKI, they will 
already be

> below 9-10KB.
>
> So, I am arguing that we can't remain below the QUIC threshold by
> compressing the leaf Dilithium cert. And we could remain below the 
9-10KB
> only for Dilithium2 leaves.  I could be proven wrong if you have 
implemented

> it.
>
> One more argument for making pass 2 optional or allowing for just pass 1
> dictionaries is that if we are not talking about WebPKI we don't 
have the
> luxury of CT logs. But we would still want to option of compressing 
/ omitting

> the ICAs by using CCADB.
>
>
>
>
> -Original Message-
> From: Dennis Jackson 
> Sent: Wednesday, July 12, 2023 12:39 PM
> To: Kampanakis, Panos ; TLS List 
> Subject: RE: [EXTERNAL][TLS] Abridged Certificate Compression
>
> CAUTION: This email originated from outside of the organization. Do 
not click

> links or open attachments unless you can confirm the sender and know the
> content is safe.
>
>
>
> On 12/07/2023 04:34, Kampanakis, Panos wrote:
>
> > Thanks Dennis. Your answers make sense.
> >
> > Digging a little deeper on the benefit of compressing (a la Abridged
> > Certs draft) the leaf cert or not. Definitely this draft improves vs
> > plain certificate compression, but I am trying to see if it is worth
> > the complexity of pass 2. So, section 4 shows a 2.5KB improvement over
> > plain compression which would be even more significant for Dilithium
> > certs, but I am trying to find if the diff between ICA
> > suppression/Compression vs ICA suppression/Compression+leaf
> > compression is significant. [/n]
> >
> > I am arguing that the table 4 numbers would be much different when
> > talking about Dilithium certs because all of these numbers would be
> > inflated and any compression would have a

Re: [TLS] Abridged Certificate Compression

2023-07-14 Thread Dennis Jackson

On 12/07/2023 19:23, Kampanakis, Panos wrote:


One more argument for making pass 2 optional or allowing for just pass 1 
dictionaries is that if we are not talking about WebPKI we don't have the 
luxury of CT logs. But we would still want to option of compressing / omitting 
the ICAs by using CCADB.


Using the CT logs to extract the end-entity extensions is a bit of a 
stop-gap measure. I think in the long run we'd like to add a field to 
the CCADB where CAs could provide their own compression data (up to some 
budget).


Whilst I think pass 2 offers a marked improvement for classical cert 
chains, in some cases fitting the entirety of the server's response in 
one packet, I agree we should measure carefully before deciding whether 
it should be mandatory for PQ certs.


Best,
Dennis






-Original Message-
From: Dennis Jackson 
Sent: Wednesday, July 12, 2023 12:39 PM
To: Kampanakis, Panos ; TLS List 
Subject: RE: [EXTERNAL][TLS] Abridged Certificate Compression


On 12/07/2023 04:34, Kampanakis, Panos wrote:


Thanks Dennis. Your answers make sense.

Digging a little deeper on the benefit of compressing (a la Abridged
Certs draft) the leaf cert or not. Definitely this draft improves vs
plain certificate compression, but I am trying to see if it is worth
the complexity of pass 2. So, section 4 shows a 2.5KB improvement over
plain compression which would be even more significant for Dilithium
certs, but I am trying to find if the diff between ICA
suppression/Compression vs ICA suppression/Compression+leaf
compression is significant. [/n]

I am arguing that the table 4 numbers would be much different when
talking about Dilithium certs because all of these numbers would be
inflated and any compression would have a small impact. Replacing a CA
cert (no SCTs) with a dictionary index would save us ~4KB (Dilithium2)
or 5.5KB (Dilithium3). That is significant. [/n]

Compressing the leaf (of size 8-9KB (Dilithium2) or 11-12 KB (Dilithium 3)) 
using any mechanism would trim down ~0.5-1KB compared to not compressing. That 
is because the PK and Sig can't be compressed and these account for most of the 
PQ leaf cert size. So, I am trying to see if pass 2 and compression of the leaf 
cert benefit us much.

I think there's a fairly big difference between suppressing CA certs in SCA and 
compressing CA certs with pass 1 of this draft. But I do agree its fair to ask 
if pass 2 is worth the extra effort.

The performance benefit isn't purely in the ~1KB saved, it's whether it brings the 
chain under the QUIC amplification limit or shaves off an additional packet and so 
avoids a loss+retry. There's essentially no difference in implementation 
complexity, literally just a line of code, so the main tradeoff is the required 
disk space on the client & server.

Best,
Dennis



___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Abridged Certificate Compression (server participation)

2023-07-12 Thread Dennis Jackson

On 12/07/2023 05:02, Kampanakis, Panos wrote:


The abridged certs draft requires a server who participates and fetches 
dictionaries in order to make client connections faster. As Bas has pointed out 
before, this paradigm did not work well with OSCP staples in the past. Servers 
did not chose to actively participate and go fetch them.

Are we confident that servers would deploy the dictionary fetching mechanism to 
benefit their connecting clients?


I think OCSP stapling is quite a bit different from this draft. OCSP 
stapling requires the server to fetch new data from the CA every day or 
week. It's inherently hard to do this reliably, especially with the 
large number of poor quality or poorly maintained OCSP servers and the 
large fraction of operators who do not want their servers making 
outbound connections. Besides these barriers, I don't think the benefit 
was huge: clients already cached OCSP responses for up to a week, so at 
most it was speeding up one connection per client per week (this was 
before network partitioning in browsers), and at worst it was breaking 
your website entirely.


In contrast, this draft aims to speed up every connection that isn't 
using session tickets, to cause no harm if it's misconfigured or out of 
date, and to be slow-moving enough that the dictionaries can be shipped 
as part of a regular software release, making it suitable for anyone 
willing to update their server software once a year (or less). 
Similarly, these updates aren't going to involve code changes, just 
changes to the static dictionaries, so they are suitable for backporting 
or ESR releases.


It would definitely be good to hear from maintainers or smaller 
operators if they have concerns though!


___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Abridged Certificate Compression (dictionary versioning)

2023-07-12 Thread Dennis Jackson

On 12/07/2023 04:54, Kampanakis, Panos wrote:


Hi Dennis,

Appendix B.1 talks about 100-200 new ICA and 10 Root certs per year. In the 
past I had looked at fluctuations of CCADB and there are daily changes. When 
checking in the past, I did not generate the ordered list as per pass 1 on a 
daily basis to confirm it, but I confirmed the fluctuations. The commits in 
https://github.com/FiloSottile/intermediates/commits/main  show it too. Given 
that, I am wondering if CCADB is not that stable. Are you confident that ICA 
dictionaries (based on CCADB) won't materially change often?


I checked the historical data for the last few years to ballpark a rate 
of 100-200 new intermediates per year. A uniform distribution of 
arrivals would mean 2 to 4 changes a week, which matches Filippo's 
commit frequency [1]. In practice Filippo's commits include removals 
(which we don't care about) and batched additions (which we do), but the 
numbers seem about right.


In terms of impact, the question is how much usage those new ICAs see 
in their first year. If we expect websites to adopt them as readily as 
existing ICAs, then they should make up <5% of the population. In 
practice I think they see much slower adoption, so the impact is even 
lower; for example, a reasonable proportion are vanity certificates 
with limited applicability or certificates intended to replace an 
existing cert in the future. If we wanted to confirm this we could 
build the abridged cert dictionaries for '22 and then use CT to sample 
the cert chains used by websites that year. I'll see if I can find the 
time to put that together.


If there was an appetite for a faster moving dictionary, we could use 
the scheme I sketched in the appendix to the draft. But I think we 
should try to avoid that complexity if we can.


Best,
Dennis

[1] https://github.com/FiloSottile/intermediates/graphs/commit-activity

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Abridged Certificate Compression

2023-07-12 Thread Dennis Jackson

On 12/07/2023 04:34, Kampanakis, Panos wrote:


Thanks Dennis. Your answers make sense.

Digging a little deeper on the benefit of compressing (a la Abridged Certs 
draft) the leaf cert or not. Definitely this draft improves vs plain 
certificate compression, but I am trying to see if it is worth the complexity 
of pass 2. So, section 4 shows a 2.5KB improvement over plain compression which 
would be even more significant for Dilithium certs, but I am trying to find if 
the diff between ICA suppression/Compression vs ICA 
suppression/Compression+leaf compression is significant. [/n]

I am arguing that the table 4 numbers would be much different when talking 
about Dilithium certs because all of these numbers would be inflated and any 
compression would have a small impact. Replacing a CA cert (no SCTs) with a 
dictionary index would save us ~4KB (Dilithium2) or 5.5KB (Dilithium3). That is 
significant. [/n]

Compressing the leaf (of size 8-9KB (Dilithium2) or 11-12 KB (Dilithium 3)) 
using any mechanism would trim down ~0.5-1KB compared to not compressing. That 
is because the PK and Sig can't be compressed and these account for most of the 
PQ leaf cert size. So, I am trying to see if pass 2 and compression of the leaf 
cert benefit us much.


I think there's a fairly big difference between suppressing CA certs in 
SCA and compressing CA certs with pass 1 of this draft. But I do agree 
its fair to ask if pass 2 is worth the extra effort.


The performance benefit isn't purely in the ~1KB saved, it's whether it 
brings the chain under the QUIC amplification limit or shaves off an 
additional packet and so avoids a loss+retry. There's essentially no 
difference in implementation complexity, literally just a line of code, 
so the main tradeoff is the required disk space on the client & server.


Best,
Dennis

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Abridged Certificate Compression

2023-07-12 Thread Dennis Jackson

On 12/07/2023 11:01, Ilari Liusvaara wrote:


On Tue, Jul 11, 2023 at 09:37:19PM +0100, Dennis Jackson wrote:

TLS Certificate Compression influences the transcript for the decompressing
party, as the output is the Certificate message which is used in the
transcript.

RFC 8879 does not alter how transcript is computed in any way.


Firstly, all extensions added to the ClientHello influence the 
transcript as the body of the CH message is included in the transcript.


Secondly, RFC 8879 specifies a CompressedCertificate message which is 
the result of applying the negotiated compression algorithm to the 
original Certificate message. The receiver of the CompressedCertificate 
message will decompress it and include the resulting Certificate message 
in their transcript. Consequently, for one party use of RFC 8879 will 
influence the transcript.
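
A toy illustration of the consequence (invented messages, not real TLS 
serialisation): both sides hash the uncompressed Certificate message 
into the transcript, so if decompression yields anything other than the 
sender's original message, the transcript hashes diverge and the 
handshake fails at the Finished check.

  import hashlib

  def transcript_hash(*handshake_msgs: bytes) -> bytes:
      # Concatenate-and-hash stand-in for the TLS 1.3 transcript hash.
      h = hashlib.sha256()
      for m in handshake_msgs:
          h.update(m)
      return h.digest()

  cert = b"...serialised Certificate message..."
  sender = transcript_hash(b"CH", b"SH", cert)
  receiver = transcript_hash(b"CH", b"SH", b"...bad decompression output...")
  assert sender != receiver  # the two sides can't complete the handshake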



An extension altering computation of transcript would be truly
extraordinary.


You might find 6.1.5 and 7.2 of 
https://datatracker.ietf.org/doc/draft-ietf-tls-esni/ an interesting 
read :-).


Best, Dennis
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Abridged Certificate Compression

2023-07-11 Thread Dennis Jackson

On 11/07/2023 21:17, Eric Rescorla wrote:


I wouldn't want to 'permanently' encode the root programs, CT
trusted log lists or end entity compressed extensions for example.


Arguably it will be necessary to encode the database in the final RFC.
Otherwise, you have what is effectively a normative reference to the
contents of the CCADB.

I haven't thought through this completely, but I mention it because it
may affect the rest of the design decisions if we end up with the
WG having to produce the database.


To clarify: I'm fine with encoding things permanently in an RFC for use 
with a specific code point. I just wouldn't want to do that for multiple 
future code points to be used in future years since predicting 
developments is inherently hard.
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Abridged Certificate Compression

2023-07-11 Thread Dennis Jackson

Hi Ilari,

On 10/07/2023 20:19, Ilari Liusvaara wrote:

What does "Note that the connection will fail regardless even if this
step is not taken as neither certificate validation nor transcript
validation can succeed." mean? TLS certificate compression does not
do anything special with transcript, so transcript validation should
always succeed.


TLS Certificate Compression influences the transcript for the 
decompressing party, as the output is the Certificate message which is 
used in the transcript. So if the Certificate message is incorrectly 
formatted, then the decompressing party will likely bail when passing 
it to the TLS library. Even if they succeed and try to use it in a 
transcript calculation, the compressing party's transcript includes the 
uncompressed certificate directly and so will differ.



And are there zstd decoders that can reuse output buffer in oneshot
decompression for lookback? The zstd command line tool manual page
mentions default 128MB memory limit for decompression. I presume
mostly for lookback. Such limit is way too large.
Zstd is already supported without a dictionary for TLS Certificate 
Compression so others with deployment experience may be able to give an 
authoritative answer. That said, Facebook's Zstd implementation is 
permissively licensed, used in the Linux Kernel and their discussion 
here  suggests much 
smaller limits are fine.

And an alternative idea:

[...]

1) Where if next certificate in chain is also not found, zstd uses
empty dictionary. Otherwise it uses dictionary associated with the
next certificate in chain.

[...]

This allows dictionaries to be specific to CA, avoiding tradeoffs
between CAs.


Interesting idea! Can you share more about the motivation for using many 
small dictionaries rather than a single combined one? Is it purely for 
supporting memory-constrained devices? We can already ensure that each 
CA contributes an equal number of bytes to the pass 2 dictionary.


One drawback is that some of the data isn't unique to a particular 
issuer (e.g. the CT log ids) and so would either have to be handled in 
its own pass or be included as redundant data in each individual 
dictionary.


Best,
Dennis
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Abridged Certificate Compression

2023-07-11 Thread Dennis Jackson

On 11/07/2023 15:48, Thom Wiggers wrote:

I enjoyed reading this draft. I think it is well-written. Aside from 
some to-be-figured-out details that have already been pointed out, it 
seems very practical, which is rather nice.

Thanks!


The one thing that makes me frown a bit is the intended versioning 
scheme. I don't think consuming identifiers is a problem, but perhaps 
we can pre-define the code points and variables for the next, say, 
N=0xff years? Then the versioning mechanism is set for the foreseeable 
future.


I like the reduction of bookkeeping but I think we would need to work 
out which parts of the construction to make dynamic with an IANA 
registry. I wouldn't want to 'permanently' encode the root programs, CT 
trusted log lists or end entity compressed extensions for example.


I don't really have a sense of what the idiomatic IETF solution is for 
this problem, so I settled for what seemed like the least-commitment 
method in the draft.



(You could even say that we wrap the code points after N years).


I don't know whether there'll be interest in using this scheme outside 
TLS (e.g. reducing storage / bandwidth costs in CT) but if there is then 
we'll probably want identifiers which are unambiguous over long timescales.


Best,
Dennis

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Abridged Certificate Compression

2023-07-11 Thread Dennis Jackson

On 11/07/2023 01:00, Eric Rescorla wrote:

My sense is that we would be better off getting the data from CCADB, 
as CAs will have a clear incentive to populate it, as their customers 
will get better performance.


However, I think this is a question the WG is well suited to resolve 
and that we could adopt the document as-is and sort this out later.


Thanks, I filed #7 to keep track of this.
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Abridged Certificate Compression

2023-07-10 Thread Dennis Jackson


On 07/07/2023 21:28, Eric Rescorla wrote:



S 3.2.1
How much value are you getting from the CT logs? It seems like
additional complexity. I agree with your comment about having
this submitted to CCADB.


It seemed the fairest repeatable way to check whether a CA was
offering certificates to WebPKI clients without having to write a
lot of emails. I agree it's not desirable to keep as a dependency
in the long run.

Can you elaborate on the concern here? I.e., is it that we will 
overinclude or underinclude if we just use CCADB?


Sorry, this answer came out garbled. Using CT gives two things:

1) There are large extensions in end-entity certs which are specific to 
the issuer and change little between certs. For example, the URLs for 
OCSP, CRL and the practice statement are typically the same. Using CT 
logs lets me pull out an example of each extension for that CA without 
having to write a bunch of mails to ask them to produce them in the 
right format.


2) I don't personally have concerns about the dictionary size and would 
prefer to include every CA. However, if someone were to have strong 
feelings about this, then using CT to measure popularity is much fairer 
than, say, scanning popular domains from the Tranco list.


In the long term, I hope this could just be removed; ideally the CAs 
themselves would provide a fixed-size binary blob via CCADB that they'd 
like compressed out of their certs.


Best,
Dennis



Thanks,
-Ekr


S 5.1.
ISTM that there are plenty of code points available.


Thanks!

Best,
Dennis









On Thu, Jul 6, 2023 at 3:18 PM Dennis Jackson
 wrote:

Hi all,

I've submitted the draft below that describes a new TLS
certificate
compression scheme that I'm calling 'Abridged Certs' for now.
The aim is
to deliver excellent compression for existing classical
certificate
chains and smooth the transition to PQ certificate chains by
eliminating
the root and intermediate certificates from the bytes on the
wire. It
uses a shared dictionary constructed from the CA certificates
listed in
the CCADB [1] and the associated extensions used in end entity
certificates.

Abridged Certs compresses the median certificate chain from
~4000 to
~1000 bytes based on a sample from the Tranco Top 100k. This
beats
traditional TLS certificate compression which produces a
median of ~3200
bytes when used alone and ~1400 bytes when combined with the
outright
removal of CA certificates from the certificate chain. The draft
includes a more detailed evaluation.

There were a few other key considerations. This draft doesn't
impact
trust decisions, require trust in the certificates in the shared
dictionary or involve extra error handling. Nor does the
draft favor
popular CAs or websites due to the construction of the shared
dictionary. Finally, most browsers already ship with a
complete list of
trusted intermediate and root certificates that this draft
reuses to
reduce the client storage footprint to a few kilobytes.

I would love to get feedback from the working group on
whether the draft
is worth developing further.

For those interested, a few issues are tagged DISCUSS in the
body of the
draft, including arrangements for deploying new versions with
updated
dictionaries and the tradeoff between equitable CA treatment
and the
disk space required on servers (currently 3MB).

Best,
Dennis

[1] Mozilla operates the Common CA Database on behalf of Apple,
Microsoft, Google and other members.

On 06/07/2023 23:11, internet-dra...@ietf.org wrote:
> A new version of I-D, draft-jackson-tls-cert-abridge-00.txt
> has been successfully submitted by Dennis Jackson and
posted to the
> IETF repository.
>
> Name:         draft-jackson-tls-cert-abridge
> Revision:     00
> Title:                Abridged Compression for WebPKI
Certificates
> Document date:        2023-07-06
> Group:                Individual Submission
> Pages:                19
> URL:
https://www.ietf.org/archive/id/draft-jackson-tls-cert-abridge-00.txt
> Status:
https://datatracker.ietf.org/doc/draft-jackson-tls-cert-abridge/
> Html:
https://www.ietf.org/archive/id/draft-jackson-tls-cert-abridge-00.html
> Htmlized:
https://datatracker.ietf.org/doc/html/draft-jackson-tls-cert-abridge
>
>
> Abstract:
>     This draft defines a new TLS Certificate Compre

Re: [TLS] Abridged Certificate Compression

2023-07-10 Thread Dennis Jackson

Hi Panos,

On 08/07/2023 02:49, Kampanakis, Panos wrote:

Hi Dennis,

This is an interesting draft.

Thanks!


The versioned dictionary idea for ICA and Root CAs especially was something I 
was considering for the ICA Suppression draft [1] given the challenges brought 
up before about outages with stale dictionary caches.

Btw, if we isolated the ICA and Root CA dictionary, I don't think you need pass 
1, assuming the parties can agree on a dictionary version. They could just 
agree on the dictionary and be able to build the cert chain, but providing the 
identifiers probably simplifies the process. This could be simplified further I 
think.


Ah I hadn't seen, thank you for the link to [1].

I thought a bit about suppressing pass 1 as well, but I don't think it's 
desirable.


A key selling point of the current Abridged Certs draft is that it can 
be enabled by default without the risk of connection failures or 
requiring retries, even if the server / client fall out of date. This 
keeps the deployment story very simple as you can just turn it on 
knowing it can only make things better and never make things worse.


Suppressing pass 1 could be used to reduce the storage requirements on 
the server, but then the server wouldn't know whether a particular ICA 
was in the dictionary and so the operator would have to configure that, 
leading to the same kind of error-handling flows as in the CA Cert 
Suppression draft. Similarly, the bytes-on-the-wire saving isn't 
significant, and it would make it harder to use Abridged Certs in other 
contexts as it would no longer be a lossless compression scheme.



I also think one thing missing from the draft is how the client negotiates this 
compression with the server as the CertificateCompressionAlgorithms from 
RFC8879 will not be the same.


Sorry, I'm afraid I don't follow.

Abridged Certs would be negotiated just the same as any other 
certificate compression algorithm. The client indicates support by 
including the Abridged Certs identifier in its Certificate Compression 
extension in the ClientHello (along with the existing algorithms like 
plain Zstd). The server has the choice of whether to use it in its 
CompressedCertificate message. If a new version of Abridged Certs were 
minted in a few years with newer dictionaries, then it would have its 
own algorithm identifier and would coexist with or replace the existing 
one.



About the end-entity compression, I wonder if compression, decompression 
overhead is significant and unbalanced. RFC8879 did not want to introduce a DoS 
threat by offering a cumbersome compression/decompression. Any data on that?


Abridged Certs is just a thin wrapper around Zstd, which is already 
deployed as a TLS Certificate Compression algorithm, so the same 
considerations apply. According to Facebook's numbers [ZSTD], 
decompression is more than 4x faster than Brotli and insensitive to the 
compression level used. TLS Certificate Compression schemes aren't 
sensitive to compression speed as the compression can be done once and 
cached at startup, but I ran a benchmark [COMPARE] using both the 
maximum and minimal Zstd compression levels and the outcome was within 
100 bytes, so servers wanting to do just-in-time compression can use a 
minimal level without difficulty. I hope to have some proper benchmarks 
using a Rust implementation by the end of the week.
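
For anyone wanting to reproduce the comparison, here's a rough sketch 
with the Python zstandard bindings. The input file names are 
placeholders, and this is not the benchmark code behind [COMPARE]:

  import zstandard

  chain = open("chain.bin", "rb").read()  # a serialised certificate chain
  dict_data = zstandard.ZstdCompressionDict(
      open("abridged.dict", "rb").read())

  # Compare the maximum and minimal compression levels with the same
  # shared dictionary.
  for level in (zstandard.MAX_COMPRESSION_LEVEL, 1):
      cctx = zstandard.ZstdCompressor(level=level, dict_data=dict_data)
      print(level, len(cctx.compress(chain)))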



About your data in section 4, I think these are classical cert chains and it 
looks to be they improve 0.5-1KB from RFC8879 compression.

They are classical cert chains, but I think you might be misreading the 
table. The improvement over regular TLS Certificate Compression is 1KB 
at the 5th percentile, 2.2 KB at the 50th percentile and 2.5 KB at the 
95th percentile.

In a WebPKI Dilithium2 cert with 2 SCTs the end-entity cert size will amount to ~7-8KB. 
85% of that will be the "random" Dilithium public key and signatures which will 
not get much compression. So, do we get any benefit from compressing 7-8KB certs to 
6-7KB? Is it worth the compression/decompression effort?


A 2.5 KB saving gives back a whole Dilithium2 signature, which I think 
is fair to say is substantial.


The overall size of a PQ end-entity cert is still uncertain and may not 
be as large as your calculation. If proofs of inclusion were used 
instead of SCTs, as suggested in Merkle Tree Certs [MTC], then each 
SCT-equivalent would be under 1 KB. Abridged Certs would then be saving 
around 2.5 KB of a 6 KB certificate, fitting the entire chain in roughly 
4KB. So we'd be shipping a full PQ cert chain without any size inflation 
on today's uncompressed classical chains.


Best,
Dennis

[ZSTD] https://github.com/facebook/zstd

[COMPARE] 
https://gist.github.com/dennisjackson/e1dccfef104cabc1e4151c47338bc9b2


[MTC] 
https://davidben.github.io/merkle-tree-certs/draft-davidben-tls-merkle-tree-certs.html




-Original Message-
From: TLS  On Behalf Of Dennis Jackson
Sent: Thursday, July 6, 2023 6:18 PM
To

Re: [TLS] Abridged Certificate Compression

2023-07-07 Thread Dennis Jackson
Thank you for the comments. I'll fix most of them; responses inline for 
the rest:


On 07/07/2023 17:38, Eric Rescorla wrote:


S 3.1.2.
   7.  Order the list by the date each certificate was included in the
       CCADB, breaking ties with the lexicographic ordering of the
       SHA256 certificate fingerprint.

Would it be simpler to just sort by the hash?

Possibly a premature optimization, but I was thinking that if a new 
version only included new certificates, it'd be nice to only have to 
append the new data to the existing dictionaries. I haven't yet worked 
out if that's actually going to deliver anything useful though.


       1.  If so, replace the opaque cert_data member of
           CertificateEntry with its adjusted three byte identifier and
           copy the CertificateEntry structure with corrected lengths to
           the output.

It seems like this is not injective in the face of certificates
whose length is greater than or equal to 0xff. That's probably
not a problem, but I think you should make it clear and have some
way to manage it.


If the length is corrected, isn't the only risk a collision with a 
certificate which is exactly three bytes and starts with 0xff?
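For reference, the substitution pass itself is tiny; a sketch, with the 
identifier table assumed and the CertificateEntry length rewriting left 
implicit:

    // Pass 1: swap known CA certificates for their three-byte identifiers;
    // anything not in the table (e.g. the end-entity cert) passes through to
    // the Zstd pass unchanged.
    use std::collections::HashMap;

    fn abridge(chain: &[Vec<u8>], ids: &HashMap<Vec<u8>, [u8; 3]>) -> Vec<Vec<u8>> {
        chain
            .iter()
            .map(|cert_data| match ids.get(cert_data) {
                Some(id) => id.to_vec(), // length field becomes 3
                None => cert_data.clone(),
            })
            .collect()
    }

    // The inverse is only ambiguous for a real certificate whose DER is
    // exactly three bytes and collides with an identifier - hence the
    // question above.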



S 3.2.1
How much value are you getting from the CT logs? It seems like
additional complexity. I agree with your comment about having
this submitted to CCADB.


It seemed the fairest repeatable way to check whether a CA was offering 
certificates to WebPKI clients without having to write a lot of emails. 
I agree it's not desirable to keep as a dependency in the long run.



S 5.1.
ISTM that there are plenty of code points available.


Thanks!

Best,
Dennis

Re: [TLS] Abridged Certificate Compression

2023-07-07 Thread Dennis Jackson


On 07/07/2023 17:42, Salz, Rich wrote:

I would love to get feedback from the working group on whether the draft is 
worth developing further.

I read your document [1] and found it very interesting.


Thanks Rich!


I found the handling of extensions complicated, although I admit to reading 
that part very quickly.

How much simpler would things be if the identifier were just a SHA256 hash of 
the CA, perhaps truncated?  You send an array of (url, timestamp) as an 
extension, and then the server just sends the digest of its cert chain, perhaps 
even its own cert.


So this draft is doing two different things: building the dictionaries 
in a fair way, and then specifying how to use them as part of the 
existing TLS Certificate Compression extension. Implementations only 
care about the second part, which involves just a bit of string 
substitution and a call to Zstd. They don't have to know or care about 
how the dictionaries were built, or do any new kind of negotiation.


I don't follow your comment about the handling of extensions. The code 
doing the compression and decompression isn't aware of what an extension 
is and doesn't handle them specially; it's just swapping strings. In 
order to compress the larger strings which issuers add to end-entity 
certificates (e.g. OCSP & CRL URLs, practice statements), the dictionary 
does include some extensions used by each issuer, but these are just 
concatenated binary strings.
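The receiving side is the same two steps in reverse: one Zstd call with 
the shared dictionary, then expanding identifiers. A sketch of the 
expansion step (message parsing elided; the CA table is assumed to be 
built once from the published dictionary):

    use std::collections::HashMap;

    // After the Zstd call, expand any three-byte cert_data back into the
    // full CA certificate; everything else is already literal.
    fn expand(entries: Vec<Vec<u8>>, ca: &HashMap<[u8; 3], Vec<u8>>) -> Vec<Vec<u8>> {
        entries
            .into_iter()
            .map(|e| match <[u8; 3]>::try_from(e.as_slice()) {
                Ok(id) => ca.get(&id).cloned().unwrap_or(e),
                Err(_) => e,
            })
            .collect()
    }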


Best,
Dennis





[TLS] Abridged Certificate Compression

2023-07-06 Thread Dennis Jackson

Hi all,

I've submitted the draft below that describes a new TLS certificate 
compression scheme that I'm calling 'Abridged Certs' for now. The aim is 
to deliver excellent compression for existing classical certificate 
chains and smooth the transition to PQ certificate chains by eliminating 
the root and intermediate certificates from the bytes on the wire. It 
uses a shared dictionary constructed from the CA certificates listed in 
the CCADB [1] and the associated extensions used in end entity 
certificates.


Abridged Certs compresses the median certificate chain from ~4000 to 
~1000 bytes based on a sample from the Tranco Top 100k. This beats 
traditional TLS certificate compression which produces a median of ~3200 
bytes when used alone and ~1400 bytes when combined with the outright 
removal of CA certificates from the certificate chain. The draft 
includes a more detailed evaluation.


There were a few other key considerations. This draft doesn't impact 
trust decisions, require trust in the certificates in the shared 
dictionary or involve extra error handling. Nor does the draft favor 
popular CAs or websites due to the construction of the shared 
dictionary. Finally, most browsers already ship with a complete list of 
trusted intermediate and root certificates that this draft reuses to 
reduce the client storage footprint to a few kilobytes.


I would love to get feedback from the working group on whether the draft 
is worth developing further.


For those interested, a few issues are tagged DISCUSS in the body of the 
draft, including arrangements for deploying new versions with updated 
dictionaries and the tradeoff between equitable CA treatment and the 
disk space required on servers (currently 3MB).


Best,
Dennis

[1] Mozilla operates the Common CA Database on behalf of Apple, 
Microsoft, Google and other members.


On 06/07/2023 23:11, internet-dra...@ietf.org wrote:

A new version of I-D, draft-jackson-tls-cert-abridge-00.txt
has been successfully submitted by Dennis Jackson and posted to the
IETF repository.

Name:   draft-jackson-tls-cert-abridge
Revision:   00
Title:  Abridged Compression for WebPKI Certificates
Document date:  2023-07-06
Group:  Individual Submission
Pages:  19
URL:
https://www.ietf.org/archive/id/draft-jackson-tls-cert-abridge-00.txt
Status: https://datatracker.ietf.org/doc/draft-jackson-tls-cert-abridge/
Html:   
https://www.ietf.org/archive/id/draft-jackson-tls-cert-abridge-00.html
Htmlized:   
https://datatracker.ietf.org/doc/html/draft-jackson-tls-cert-abridge


Abstract:
This draft defines a new TLS Certificate Compression scheme which
uses a shared dictionary of root and intermediate WebPKI
certificates.  The scheme smooths the transition to post-quantum
certificates by eliminating the root and intermediate certificates
from the TLS certificate chain without impacting trust negotiation.
It also delivers better compression than alternative proposals whilst
ensuring fair treatment for both CAs and website operators.  It may
also be useful in other applications which store certificate chains,
e.g.  Certificate Transparency logs.

   



The IETF Secretariat






Re: [TLS] CRYSTALS Kyber and TLS

2023-06-19 Thread Dennis Jackson
If you have access to an uncompromised signing key, you can fix a 
compromised CSRNG generically without having to change the protocol. [1]
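Concretely, the construction in [1] is a single extract-and-expand 
wrapped around the CSPRNG output; a sketch using the hkdf and sha2 crates 
(the tag is illustrative, and Sig(sk, tag1) can be computed once and 
reused):

    use hkdf::Hkdf;
    use sha2::Sha256;

    // RFC 8937: G'(n) = HKDF-Expand(HKDF-Extract(G(L), Sig(sk, tag1)), tag2, n).
    // Even if the CSPRNG output G(L) is predictable, an attacker without the
    // signing key cannot compute Sig(sk, tag1), and so cannot predict G'(n).
    fn wrapped_randomness(g_l: &[u8], sig_tag1: &[u8], n: usize) -> Vec<u8> {
        let hk = Hkdf::<Sha256>::new(Some(g_l), sig_tag1);
        let mut out = vec![0u8; n];
        hk.expand(b"tag2: app-specific info", &mut out)
            .expect("output length within HKDF limits");
        out
    }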


Best,
Dennis

[1] https://datatracker.ietf.org/doc/html/rfc8937

On 19/06/2023 16:41, Bas Westerbaan wrote:
I do have to add to Thom's remarks that KEMTLS (a.k.a. AuthKEM) offers 
an advantage here. If the private key of the leaf cert is not 
compromised (for instance when it was generated elsewhere), then the 
attacker Stephan describes cannot learn the shared secret.



On Mon, Jun 19, 2023 at 5:02 PM Thom Wiggers  wrote:

Hi all,

The attack that is described by Stephan is something that we
considered while we were initially designing KEMTLS (in the
papers, we also covered the ephemeral key exchange). I'll quickly
write what we were thinking of and why we did not choose to do
anything similar to what Stephan proposes.

I will first correct a misunderstanding in the document put 
forward by Stephan, which suggests that Kyber is an asymmetric 
encryption scheme. Encapsulation and decapsulation should not be 
confused with encryption and decryption, which are not part of the 
public API of Kyber and will not be part of the NIST standard as 
far as I'm aware.

I believe we can summarize the argument as follows: in the
straightforward replacement of ECDH by a KEM, the client generates
a keypair and the server (through the encapsulate operation)
computes a shared secret and a ciphertext. If either the secret
key or the shared secret are made public, for example, due to an
implementation flaw of either keygen or encapsulation, then the
ephemeral handshake key is no longer secret.
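To make that flow concrete, a sketch of the straightforward replacement 
using the pqcrypto-kyber crate (round-3 naming; final ML-KEM APIs may 
differ):

    use pqcrypto_kyber::kyber768::{decapsulate, encapsulate, keypair};
    use pqcrypto_traits::kem::SharedSecret as _;

    fn main() {
        // Client: ephemeral keypair; the public key rides in the ClientHello.
        let (pk, sk) = keypair();
        // Server: encapsulation yields the ciphertext for the ServerHello and
        // the shared handshake secret. Leaking either sk or ss_server (e.g.
        // via a broken RNG) exposes the handshake key, as described above.
        let (ss_server, ct) = encapsulate(&pk);
        // Client: decapsulate to recover the same secret.
        let ss_client = decapsulate(&ct, &sk);
        assert_eq!(ss_client.as_bytes(), ss_server.as_bytes());
    }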

Bas correctly points out that this is no different from ECDH,
where compromise of one of the two exponents allows computing the
shared secret, but that in itself shouldn't necessarily be a
reason not to investigate whether we can do better.

But, in my view, the proposed defense and the argument put forward
assume that the flaw affecting encapsulation does not affect
the key generation (or vice versa); in particular, in the scenario
of the broken server-side random number generator, it seems
far-fetched that the busted random number generator or
implementation flaw affecting encapsulation won't *also* affect
the keygen (or, in other scenarios such as side-channel
vulnerabilities, the decapsulate) operation of the server. This, in my
view, makes the additional security offered by the additional key
exchange very marginal.

The reason why we were investigating this issue was a bit
different: having two KEM key exchanges gives the server more
control to ensure that there will be at least one
freshly-generated KEM keypair in the mix. This could improve the
forward secrecy for handshakes (modeled via secret key exposure)
in which the client just re-uses the ephemeral keypair every
single time. But we also saw this as not significant enough to
suffer the additional, significant transmission requirement of
another full Kyber key exchange. Hopefully, we now have enough
experience with evaluating implementations of TLS to find and fix
these sorts of key-reuse flaws more easily, earlier, and in
automated ways [1]. And again, this is the same situation with
ECDH today.

Cheers,

Thom Wiggers
PQShield

[1] see e.g. https://github.com/tls-attacker/TLS-Scanner. Relying
on implementers not to make mistakes is a dangerous game, but I do
believe that it needs to factor into the cost/benefit analysis.

PS: for marketing reasons I oppose comparisons between the
post-quantum KEM schemes (which are primitives that easily can be
used in fully ephemeral ways) and RSA key wrapping (which pretty
much exclusively refers to the much-derided non-forward-secure RSA
transport in TLS-old). ;-)

On Mon, 19 Jun 2023 at 16:01, Stephan Mueller wrote:

On Monday, 19 June 2023, 15:56:57 CEST, Scott Fluhrer (sfluhrer) wrote:

Hi Scott,

> I do not believe that Müller is correct - we do not intend to use
> the Kyber CPA public key encryption interface, but instead the Kyber
> CCA KEM interface. And, with that interface, the server does
> contribute to the shared secret:
>
> The shared secret that Kyber KEM (round 3) generates on success is:
>
> KDF( G( m || H(pk)) || H(c) )
>
> where:
>       - m is the hash of a value that the server selects
>       - pk is the public key selected by the client
>       - c is the server's keyshare
>       - H is SHA3-256, G is SHA3-512, KDF - SHAKE-256
>
> Note that this formula includes a value (pk) that is selected solely
> by the client; hence we cannot say that this value contains only
  

Re: [TLS] Securely disabling ECH

2022-10-10 Thread Dennis Jackson
You and "SB" are in agreement. There is a middlebox terminating the TLS 
connection with a cert chain signed by a root that is also installed on 
the client. The middlebox in turn connects to a TLS server whose cert 
chains back to a WebPKI root. The middlebox handles the termination and 
re-encryption of the client's traffic.

In any case, SB's question was about whether this would trigger the ECH 
retry behavior (yes, since it appears to the client as though the 
middlebox is the server) and whether at least one client already 
implemented it (yes, Firefox).


Best,
Dennis

On 10/10/2022 14:04, Salz, Rich wrote:


  * In other words, the middlebox serves a cert to the client that is
cryptographically valid for the said public name of the client
facing server.

The only way that happens is if the middlebox **terminates the TLS 
connection**. In this case it is like my client<>cdn<>origin picture.  
The middlebox cannot present a certificate and then hand off a 
connection to the server.


I must not be getting something important to you.




[TLS] ECH-13 HRR Signal Derivation

2021-09-02 Thread Dennis Jackson
I have two questions about the transcript for the confirmation signal 
for HelloRetryRequests in ECH Draft 13:


1. Should ClientHelloInner1 be replaced with a message_hash message as 
in TLS?


2. Is the entire HelloRetryRequest (with overwritten placeholder value) 
included in the transcript or is the HRR only included up to the end of 
the placeholder value?


I had assumed 1. yes and 2. the entire HRR, but an off-list conversation 
left me unsure.


Best,
Dennis


Re: [TLS] TLS Impact on Network Security draft updated

2019-07-23 Thread Dennis Jackson


On 24/07/2019 04:13, Benjamin Kaduk wrote:
> On Wed, Jul 24, 2019 at 03:35:43AM +0100, Dennis Jackson wrote:
>> On 24/07/2019 02:55, Bret Jordan wrote:
>>> As a professional organization and part of due diligence, we need to try
>>> and understand the risks and ramifications on the deployments of our
>>> solutions. This means, understanding exactly how the market uses and
>>> needs to use the solutions we create. When we remove or change some
>>> technology, we should try hard to provide a work around. If a work
>>> around is not possible, we need to cleanly document how these changes
>>> are going to impact the market so it can prepare. This is the
>>> responsible and prudent thing to do in a professional organization like
>>> the IETF. 
>>>
>>
>> The IETF is for development of Internet Standards. If you want to
>> publish your (subjective) analysis of how a particular standard is going
>> to impact your market segment, there are any number of better venues:
>> trade magazines, industry associations, your company website, etc.
> 
> Actually, the Independent stream of the RFC series is purpose-built for
> individual commentary on the consequences of a particular standard
> [including in a particular segment], and would be superior (at least in
> my opinion) to any of the venues you list.  (See RFC 4846.)  But I
> believe the current ISE asks authors to try fairly hard to publish their
> work in the IETF before accepting it to the Independent stream.

I was thinking of 'published by the IETF' to mean the IETF stream.
Publishing in the Independent stream, without any proper review,
consensus, or claim of fitness, is a different matter altogether.

>>> The draft that Nancy and others have worked on is a great start to
>>> documenting how these new solutions are going to impact organizational
>>> networks. Regardless of whether you like the use-cases or regulations
>>> that some organizations have, they are valid and our new solutions are
>>> going to impact them. 
>>
>> This isn't a question of quality. The IETF simply doesn't publish
>> documents of this nature (to my knowledge).
> 
> The IETF can publish whatever there is IETF consensus to publish.  (And
> a little bit more, besides, though that is probably not relevant to the
> current discussion.)
> 
> I don't have a great sense of what you mean by "documents of this
> nature".  If you were to say "the IETF does not publish speculative and
> subjective discussion of possible future impact", I'd be fairly likely
> to agree with you (but I have also seen a fair bit of speculation get
> published).  

This was my intended meaning.

> I'd feel rather differently about "the IETF does not
> publish objective analysis of the consequences of protocol changes on
> previously deployed configurations", and would ask if you think a
> document in the latter category is impossible for the TLS 1.2->1.3
> transition.  (My understanding is that the latter category of document
> is the desired proposal, regardless of the current state of the draft in
> question.)

The authors initiated this discussion by stating their draft was stable
and requesting publication. Consequently, I think it must be judged on
the current state, rather than the desired outcome.

Even considering your more generous interpretation, the objective
discussion is only 3 of the 15 pages, and none of the 5 claims appears
to be correct (as others have pointed out).

Best,
Dennis

> -Ben


Re: [TLS] TLS Impact on Network Security draft updated

2019-07-23 Thread Dennis Jackson
On 24/07/2019 02:55, Bret Jordan wrote:
> As a professional organization and part of due diligence, we need to try
> and understand the risks and ramifications on the deployments of our
> solutions. This means, understanding exactly how the market uses and
> needs to use the solutions we create. When we remove or change some
> technology, we should try hard to provide a work around. If a work
> around is not possible, we need to cleanly document how these changes
> are going to impact the market so it can prepare. This is the
> responsible and prudent thing to do in a professional organization like
> the IETF. 
> 

The IETF is for development of Internet Standards. If you want to
publish your (subjective) analysis of how a particular standard is going
to impact your market segment, there are any number of better venues:
trade magazines, industry associations, your company website, etc.

> The draft that Nancy and others have worked on is a great start to
> documenting how these new solutions are going to impact organizational
> networks. Regardless of whether you like the use-cases or regulations
> that some organizations have, they are valid and our new solutions are
> going to impact them. 

This isn't a question of quality. The IETF simply doesn't publish
documents of this nature (to my knowledge).

> Thanks,
> Bret
> PGP Fingerprint: 63B4 FC53 680A 6B7D 1447  F2C0 74F8 ACAE 7415 0050
> "Without cryptography vihv vivc ce xhrnrw, however, the only thing that
> can not be unscrambled is an egg."

Best,
Dennis

>> On Jul 23, 2019, at 7:44 PM, Dennis Jackson
>> <dennis.jack...@cs.ox.ac.uk> wrote:
>>
>> RFC 791 is nearly 40 years old.
>> RFC 4074 lists 5 forms of deviations from RFC 1034 and explains 
>> the correct behavior. 
>> RFC 7021 describes a series of objective tests of RFC 6333 and 
>> the results. 
>>
>>
>> The above RFCs describe objective test results and how they 
>> relate to earlier RFCs. In contrast, this document offers a 
>> speculative and subjective discussion of possible future impact.
>>
>>
>> I do not believe there is any precedent supporting publication.
>>
>>
>> Best,
>> Dennis


Re: [TLS] TLS Impact on Network Security draft updated

2019-07-23 Thread Dennis Jackson
RFC 791 is nearly 40 years old.
RFC 4074 lists 5 forms of deviations from RFC 1034 and explains 
the correct behavior. 
RFC 7021 describes a series of objective tests of RFC 6333 and 
the results. 


The above RFCs describe objective test results and how they 
relate to earlier RFCs. In contrast, this document offers a 
speculative and subjective discussion of possible future impact.


I do not believe there is any precedent supporting publication.


Best,
Dennis

> On Tue, Jul 23, 2019, 3:47 PM Filippo Valsorda wrote:
>
> > Before any technical or wording feedback, I am confused as to the nature
> > of this document. It does not seem to specify any protocol change or
> > mechanism, and it does not even focus on solutions to move the web further.
> >
> > Instead, it looks like a well edited blog post, presenting the perspective
> > of one segment of the industry. (The perspective seems to also lack
> > consensus, but I believe even that is secondary.) Note how as of
> > draft-camwinget-tls-use-cases-05 there are no IANA considerations, no
> > security considerations, and no occurrences of any of the BCP 14 key words
> > (MUST, SHOULD, etc.).
> >
> > Is there precedent for publishing such a document as an RFC?
> >
>
> I was going to say RFC 691 but no, it recommends changes to the protocol
> (as well as being quite amusing). RFC 4074 comes close describing bad
> behavior without an explicit plea to stop doing it, but has a security
> considerations section. RFC 7021 describes the impact of a particular
> networking technique on applications.
>
> So there is precedent.
>
> Sincerely,
> Watson
