On 2025-06-17 20:16:19, David Benjamin wrote:
(As always, wearing an individual hat here. In particular, I am *not*
speaking on behalf of the Chrome Root Program.)

This draft is not the way to solve this problem.

The point of markers like EKUs is to avoid cross-protocol attacks. Client
and server here do not refer to abstract classifications of entities. They
are specific, well-defined roles in the TLS protocol. Whether the TLS
client represents a human or a backend service, it is a client as far as
TLS is concerned. This draft breaks this. TLS stacks that implement it risk
cross-protocol attacks.
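To make that concrete: a TLS endpoint requires the EKU matching the peer's *protocol role*, not the kind of entity behind it. A minimal sketch (illustrative function names, real OID values):

```python
# Hypothetical sketch of how a TLS stack maps its own role to the EKU
# it must require from the peer's certificate. The OIDs are the real
# id-kp-serverAuth / id-kp-clientAuth values; the helpers are
# illustrative, not from any particular implementation.

ID_KP_SERVER_AUTH = "1.3.6.1.5.5.7.3.1"
ID_KP_CLIENT_AUTH = "1.3.6.1.5.5.7.3.2"

def required_peer_eku(local_role: str) -> str:
    # A TLS client authenticates a server, so it requires serverAuth
    # in the peer's certificate, and vice versa.
    return ID_KP_SERVER_AUTH if local_role == "client" else ID_KP_CLIENT_AUTH

def peer_cert_acceptable(local_role: str, peer_ekus: list[str]) -> bool:
    return required_peer_eku(local_role) in peer_ekus
```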

I thought I was only re-aligning with your interpretation of a client being an end device? Considering all of the discussions regarding this topic, the common theme appears to be that you at the Chromium project (and CAs that started to follow your policy change) ONLY consider clientAuth to be used for end-user authentication, like with smart cards. Server-to-server was overlooked everywhere. And within the discussions on this topic, way too much time was spent trying to point out that a TLS client is the initiator of a TLS connection and not a device held by an end user.

Frankly, if there is no way to get a clientAuth certificate for a server to be used in a similar way to a server certificate, and the understanding at the CA level appears to be "server means device" and "client means end user", then the only logical step is to re-align the definitions within the RFCs to unbreak all of these use cases.

At least to me that is a far better approach than just disabling TLS validation entirely, which is basically what people will be forced to do. Changing the protocols takes time. "Just" allowing a server cert to be considered valid by a TLS server that expects to communicate in a P2P fashion within a decentralized network, only with other servers and never with end users, is way better. It also allows you at Chromium to simply not care about them at all, as this special kind of validation would be limited to software expecting server-to-server connections only.

Also, it shouldn't cause any additional impact, as the required assurance is exactly the one already being provided: "there is a server with this DNS name or IP address".

And regarding the cross-protocol attack part: well, if the only commonly trusted certificates carry EKU serverAuth, then that is basically the only EKU in existence. Everything else may as well not exist at all at that point, as it will fail validation anyway.

As for PKI hierarchies, the Web PKI, as curated by web clients,
authenticates web servers. All the policies and expectations around it,
from monitoring to domain validation to Certificate Transparency, are
maintained with web servers in mind. This blog discusses some of this:
https://blog.mozilla.org/security/2021/05/10/beware-of-applications-misusing-root-stores/
I understand your thought process. However, that still doesn't change the reality that it is basically the **only** PKI hierarchy in existence.

It would be a different thing if literally everything in existence didn't share the Web PKI browsers use.

That is not to say that backend services shouldn’t authenticate as TLS
clients with certificates. If a backend service needs to take the role of a
TLS client, a certificate from some PKI is a fine way to solve that. But
there is no reason for that to use the same PKI hierarchies as web server
authentication.

Ok, I'll play along for now. Name a single CA that I can go to right now to get a certificate with EKU clientAuth issued for dNSName=myserver.frank.fyi that your devices would trust without additional configuration. There is none. Therefore this argument is invalid.

In addition, having clientAuth and serverAuth within the same certificate was also used as a cheap way to do channel binding. When you have two systems where either side can initiate a connection, like with e.g. IPsec, then a single certificate with both clientAuth and serverAuth in it allows you to securely mutually authenticate the connection regardless of which side initiated it. And that is the common scenario here. Well, now you may say EKU 1.3.6.1.5.5.7.3.17 exists; however, what CA could I go to to get such a certificate that you trust, without having to install a new root CA into your operating system's trust store and thereby also granting it the right to issue e.g. 1.3.6.1.4.1.311.20.2.2, 1.3.6.1.5.2.3.5, or 1.3.6.1.5.5.7.3.1? Right, none. Hence why almost everything built around this problem started to use "serverAuth" 1.3.6.1.5.5.7.3.1 (which in the RFC refers to a WWW server, but which CA documentation only calls "Server authentication") and "clientAuth" (which the RFC likewise specifies for WWW clients, but which CAs commonly list as "Client authentication" certificates). See e.g. DigiCert's documentation: https://docs.digicert.com/en/trust-lifecycle-manager/inventory/certificate-attributes-and-extensions/extended-key-usage.html

Splitting that into two certs not only often isn't supported without a software change (which on some platforms will land in stable releases in 2027 at the earliest), it would also break that kind of channel binding and require a protocol change to rework the authentication, as the TLS server could no longer be sure it is talking to the same system on an inbound connection as it would on an outbound one. To be fair, that drawback is minimal, but so is the drawback of having clientAuth and serverAuth within the same certificate for a web browser...
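The "one certificate, both roles" pattern described here can be sketched like this (illustrative helper; the OIDs are the standard EKU values):

```python
# Sketch only: a peer that may be either initiator (TLS client) or
# responder (TLS server) is accepted only if its certificate carries
# both EKUs, so the same certificate authenticates it in either
# direction of the connection.

DUAL_ROLE_EKUS = {"1.3.6.1.5.5.7.3.1",   # id-kp-serverAuth
                  "1.3.6.1.5.5.7.3.2"}   # id-kp-clientAuth

def usable_for_either_role(peer_ekus: list[str]) -> bool:
    return DUAL_ROLE_EKUS.issubset(peer_ekus)
```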

You could use entirely private hierarchies, hierarchies
shared across a large class of applications (e.g. all of S/MIME), or
anything in between.

One of the reasons why mutual TLS between all of the MTAs failed was the lack of a common trust store. Others worked around this by using the operating system's certificate store, a.k.a. the Web PKI (plus maybe one or two irrelevant additions).

I'll give you the point that people probably shouldn't have integrated the Web PKI into operating systems that deeply and allowed it to be used for any and all CA validation without restrictions. And I would also give you the point that it was kind of lazy for people with this use case not to go through the trouble of setting up an entire PKI infrastructure and getting it into all operating systems in existence to gain the same kind of universal trust as the Web PKI. However, the Web PKI allowed for this use case, and this policy change cuts all of it off without enough time to deal with it.

Heck, I probably would even give you the point that relying upon Google was a bad idea from the start and that pushing something decentralized that isn't controlled by a single entity, like DANE, would have been a better design. But here we are...

As noted elsewhere in this thread, and in the
references you linked, this is the standard way you build applications with
TLS and X.509:
https://github.com/cabforum/servercert/issues/599#issuecomment-2954343849
https://community.letsencrypt.org/t/do-not-remove-tls-client-auth-eku/237427/70
https://community.letsencrypt.org/t/do-not-remove-tls-client-auth-eku/237427/92

Separate hierarchies allow the policies around each to be tailored to the
needs of specific relying parties that support them. In the case of client
and server certificates, they’re already entirely separate roles in the
protocol, so there is no benefit in tying them together.

Again, you're right in theory. However (and this is the common theme of all discussions on this topic), it doesn't change the reality that it was used differently. The CA/Browser Forum, for example, INTENTIONALLY, after long discussions, put a "MAY" into their policy for exactly this reason. Your policy turns this into a very clear "fuck you, freeloaders, we dictate it's now a 'MUST NOT'". Yes, shot taken, we kind of were freeloaders. However, you know that we exist; we used it that way for decades (even well before Let's Encrypt existed); it was so universally used in so many different areas that this is just far too inconsiderate on your part. Don't get me wrong, the issue isn't that you made this change. The HOW is the main issue here.

Literally any of these would have made it less painful:
* Not demanding the entire tree contain only CAs and "serverAuth" leaf certificates, i.e. allowing a CA to have a sub-CA within the same tree that issues clientAuth certificates to devices.
* Ensuring that at least one similarly widely trusted CA outside of the Web PKI tree issues domain-validated clientAuth certificates before pushing such a policy change.
* Setting a deadline after e.g. 2028, to allow the changes to land in each and every stable distro before it hits, and writing into the policy that CAs have to actively inform their customers about this upcoming change with each and every renewal until that date. Changes like e.g. DANE support, or the (almost) impossible task of getting all kinds of different projects that never talked to each other before, but implement a common protocol, to come together and spin up their own PKI tree.
* Making the final deadline 2028 and, for now, just no longer including clientAuth by default BUT allowing CAs to include it upon explicit request.
* ...

My draft is just another way to do this transition. In my eyes, you started the end of the PKI tree with this policy change. I think within my draft I also made it quite clear that DANE without a pre-populated trust store is the future: either specify the issuing CA using certificate usage type 2 or pin the self-signed certificate directly using type 3.
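To illustrate, such TLSA records would look roughly like this (hypothetical zone fragment for a made-up mail host; hashes elided):

```
; Usage 2 (DANE-TA): trust anchor assertion - pin the issuing CA.
_25._tcp.mx1.example.net. IN TLSA 2 1 1 <sha256-of-issuing-CA-SPKI>
; Usage 3 (DANE-EE): domain-issued certificate - pin the (possibly
; self-signed) end-entity certificate directly.
_25._tcp.mx1.example.net. IN TLSA 3 1 1 <sha256-of-server-SPKI>
```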

This goes in both directions. The Web PKI allows a service that speaks HTTP
on port 80 to obtain a certificate. (This is the ACME http-01 challenge.)

Or DNS-01 using a TXT RR, or TLS-ALPN-01 using port 443, ...

It never was "just HTTP on port 80". Many of these mutual TLS use cases either ran a certbot exclusively on port 80/443 to get these certificates or used the DNS-01 challenge.
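For reference, the value that ends up in the dns-01 TXT record is derived from the challenge token and the ACME account key thumbprint (RFC 8555, section 8.4); a rough sketch with made-up inputs:

```python
# Sketch of the ACME dns-01 derivation: the TXT record holds the
# base64url-encoded (unpadded) SHA-256 of the key authorization,
# which is "<token>.<account-key-thumbprint>".
import base64
import hashlib

def dns01_txt_value(token: str, account_key_thumbprint: str) -> str:
    key_authorization = f"{token}.{account_key_thumbprint}"
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# The CA then checks a record like:
#   _acme-challenge.example.com. IN TXT "<dns01_txt_value(...)>"
```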

That makes sense for web servers. It might not make sense for arbitrary
backend-to-backend applications, where port 80 may be served by a less
trusted process. Web serving and backend-to-backend communication are
different applications, with different needs.
Then one could still use one of the other challenge types, or buy a certificate from a CA that didn't use ACME for the validation to begin with.

Hope that helps give some background on how TLS uses certificates!

Hope my comments give you some background on how we used to use TLS in the field for literally decades.

This condescending attitude is also a common theme, and it doesn't make discussing the issues any easier. It is also quite hard to keep things professional when faced with this kind of attitude combined with any and all arguments being discarded. Frankly speaking, it appears I'm not the only one feeling this way, as I've already seen a bunch of people writing conspiracy posts claiming this is a desperate move by Google to kneecap the competition from decentralized and P2P alternatives and to remain one of the dominant forces on the internet. Especially considering that it was timed exactly as a lot of people here in Europe are considering their options to reduce US dependencies, like moving from YouTube to PeerTube or from Twitter to Mastodon and the like.

David

On Tue, Jun 17, 2025 at 9:17 AM Klaus Frank
<draft-frank-mtls-via-serverauth-extension=40frank....@dmarc.ietf.org>
wrote:


Hi,

because of recent events with the policies of public CAs and the associated
"fallout" that removed the ability to get publicly trusted certificates
with clientAuth for dNSName and ipAddress, I've written this early draft of
a new standard I'd like to propose. As this breaks any and all mutual TLS
authentication between systems relying upon publicly trusted certificates
(like XMPP, AD FS, or SMTP with mutual authentication, as e.g. the
Microsoft Exchange Hybrid deployment uses), some solution is required.

Within this draft I basically sharpen the definitions of "id-kp-clientAuth"
and "id-kp-serverAuth". "id-kp-clientAuth" should no longer be used for
dNSName and ipAddress (i.e. device) certificates; instead, a TLS server
should be allowed to accept an "id-kp-serverAuth" certificate, as long as
it expects device-to-device authenticated sessions. This should also
address the obscure and unspecified "security concerns" the Google Chrome
team stated as the reason for the policy change that caused any and all CAs
to drop issuing clientAuth certificates. Even the ones that still issue
certificates with the clientAuth flag set do NOT do so for devices, only
for end users and organizations (EV and OV).

Besides that, as Let's Encrypt also stopped including the clientAuth flag,
the only free and publicly trusted certificates available are of type
"id-kp-serverAuth". Therefore, to keep systems relying upon the mutual
authentication of TLS between servers operational, it is necessary to allow
servers to use id-kp-serverAuth certificates for TLS client authentication.
In addition, I also tried to outline how a server should validate the
received TLS certificates.

The validation of the client certificate may currently be a bit too
verbose and complex. The main goal is to do a forward-confirmed reverse DNS
lookup and to allow for DANE/TLSA-provided certificates, as well as
services behind SVCB and SRV records (this part may be unnecessary and can
maybe be scratched from the draft, I'm not entirely sure). I also provided
steps for verifying the source port, which may also be unnecessary.
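As a rough illustration of that FCrDNS step (pure-logic sketch; the lookup functions are injected stand-ins for a real resolver, so this is not a complete implementation of the draft's flow):

```python
# Forward-confirmed reverse DNS: resolve the peer IP to names via PTR,
# then resolve each name forward (A/AAAA) and require the original IP
# to appear among the results.

def fcrdns_ok(peer_ip: str, ptr_lookup, addr_lookup) -> bool:
    for name in ptr_lookup(peer_ip):       # PTR: IP -> names
        if peer_ip in addr_lookup(name):   # A/AAAA: name -> IPs
            return True                    # forward-confirmed
    return False
```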

I would have hoped that we'd have more time on this matter, but as the
first CAs have already stopped issuing such certificates, and the commonly
used Let's Encrypt certificates are not valid that long either, this topic
is kind of urgent. I'm open to alternative solutions, though. It would be
great to find some people to push this forward, to at least have a standard
to move over to instead of being left with a bunch of broken services.

Sincerely,
Klaus Frank


Further links for the background of the above referenced policy change:
* https://googlechrome.github.io/chromerootprogram/
*
https://community.letsencrypt.org/t/do-not-remove-tls-client-auth-eku/237427
* https://github.com/processone/ejabberd/issues/4392
* https://github.com/cabforum/servercert/issues/599


A new version of Internet-Draft
draft-frank-mtls-via-serverauth-extension-00.txt has been successfully
submitted by Klaus Frank and posted to the
IETF repository.

Name:     draft-frank-mtls-via-serverauth-extension
Revision: 00
Title:    Allow using serverAuth certificates for mutual TLS (mTLS)
authentication in server-to-server usages.
Date:     2025-06-16
Group:    Individual Submission
Pages:    10
URL:
https://www.ietf.org/archive/id/draft-frank-mtls-via-serverauth-extension-00.txt
Status:
https://datatracker.ietf.org/doc/draft-frank-mtls-via-serverauth-extension/
HTML:
https://www.ietf.org/archive/id/draft-frank-mtls-via-serverauth-extension-00.html
HTMLized:
https://datatracker.ietf.org/doc/html/draft-frank-mtls-via-serverauth-extension


Abstract:

    This document aims to standardize the validation of mutual TLS
    authentication between servers (server-to-server).  It outlines
    recommended validation flows as well as provides practical design
    recommendations.  Basically, the EKUs id-kp-clientAuth and id-kp-
    serverAuth get more precisely defined to represent their common
    understanding by issuing CAs and browsers.  id-kp-clientAuth, a.k.a.
    "TLS WWW client authentication", SHOULD mean authentication of a
    natural or legal entity.  id-kp-serverAuth, a.k.a. "TLS WWW server
    authentication", SHOULD mean authentication of a device.  When two
    id-kp-clientAuth certificates are used, this means E2E authentication
    between two users, whereas two id-kp-serverAuth certificates being
    used means server-to-server authentication, and one user and one
    server certificate within one TLS connection means client-to-server
    (or technically also server-to-client).  The term "TLS-Client" SHOULD
    no longer be used to mean the party sending the initial packet while
    establishing a TLS connection.  This helps to avoid design issues
    moving forward, as currently some people thought TLS client auth was
    only ever used in a "client-to-server" and never in a
    "server-to-server" context, which sparked the demand for this
    document to begin with: to keep server-to-server auth with publicly
    trusted certificates working.




_______________________________________________
TLS mailing list -- tls@ietf.org
To unsubscribe send an email to tls-le...@ietf.org

