On Thu, Jul 8, 2021 at 5:47 PM Viktor Dukhovni <ietf-d...@dukhovni.org>
wrote:

> Can "the industry" (CAs, software vendors, ...) unite behind getting the
> users to accept the right, but arguably less convenient, tradeoff?


No. I think deprecating wildcards would be a bad outcome for users and for
server operators.

While I agree there are legitimate concerns about wildcards, I would not be
supportive of trying to remove them; that's the same position I shared when
this came up years ago. The concerns being highlighted would be better
addressed via ALPN and SRVNames: the former can be (and is) deployed today,
while the latter suffers from a number of (tractable) issues [1]. The
discussions around wildcards many years ago focused fairly heavily on "DNS
used for phishing", which is an important problem, but certificates are not
the right layer to address it. For those wanting to argue against wildcards,
I think it would be beneficial to more clearly articulate the risks; not
because I disagree that there are risks, but because the discussion benefits
from a shared understanding.

While not disagreeing that there are legitimate concerns, there are also very
legitimate use cases in which wildcards play an important role in some
protocols, most notably HTTPS in browsers. The core security principle of the
Web is the notion of the Origin (RFC 6454, although supplanted/replaced in
implementations by https://fetch.spec.whatwg.org/#origin-header ). Concisely,
the "scheme, host, port" tuple makes up the boundary for much of the Web
security model.
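
To make that concrete, here is a minimal Go sketch of the same-origin
comparison described above. It is only an illustration (it ignores
default-port normalization and IDNA details), and the hostnames are made up:

  package main

  import (
      "fmt"
      "net/url"
  )

  // sameOrigin reports whether two URLs share the (scheme, host, port)
  // tuple that defines a Web origin. Simplified: real implementations also
  // normalize default ports and handle opaque origins.
  func sameOrigin(a, b string) bool {
      ua, errA := url.Parse(a)
      ub, errB := url.Parse(b)
      if errA != nil || errB != nil {
          return false
      }
      // url.URL.Host includes the port, if one was given.
      return ua.Scheme == ub.Scheme && ua.Host == ub.Host
  }

  func main() {
      fmt.Println(sameOrigin("https://app.example/a", "https://app.example/b"))       // true
      fmt.Println(sameOrigin("https://app.example/", "https://usercontent.example/")) // false
  }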

If you're developing a complex application on the Web, you inevitably need
different isolation boundaries for different elements: for example,
user-generated content may have a distinct origin and sandbox policy applied
to it versus developer-generated content. Or you may have disparate teams
working on the same logical application, and need ways to keep their content
from interfering with each other. Since schemes are few, and ports are, for
better or worse, a lost cause due to Enterprise firewalls, your only practical
option is different hosts. Different hosts, even subdomains, reflect different
origins.

Thus, it's not uncommon for websites to have many subdomains, all fronted by
the same logical server, where the purpose is not transport-level isolation
but rather logical separation at the Web application boundary. Wildcards make
that deployable at a scale that enumerating subjectAltNames or issuing
distinct certificates simply cannot match.
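
As a rough illustration of why a single wildcard scales here, this Go sketch
shows the usual public-Web matching rule for wildcard names (the "*" may only
be the entire leftmost label, and it matches exactly one label). It is not any
particular validator's implementation, just the semantics:

  package main

  import (
      "fmt"
      "strings"
  )

  // matchesWildcard sketches the common wildcard rule: "*.example.com"
  // covers "app.example.com" but not "a.b.example.com" or "example.com".
  // Real implementations (crypto/x509, NSS, etc.) apply additional checks.
  func matchesWildcard(pattern, host string) bool {
      if !strings.HasPrefix(pattern, "*.") {
          return strings.EqualFold(pattern, host)
      }
      patternLabels := strings.Split(pattern, ".")
      hostLabels := strings.Split(host, ".")
      if len(patternLabels) != len(hostLabels) {
          return false
      }
      for i := 1; i < len(patternLabels); i++ {
          if !strings.EqualFold(patternLabels[i], hostLabels[i]) {
              return false
          }
      }
      return true
  }

  func main() {
      fmt.Println(matchesWildcard("*.example.com", "usercontent.example.com")) // true
      fmt.Println(matchesWildcard("*.example.com", "a.b.example.com"))         // false: one label only
      fmt.Println(matchesWildcard("*.example.com", "example.com"))             // false
  }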

Or consider hosted websites, such as "github.io" - where many IETF WGs host
their own content, under distinct origins, all fronted by the same service and
certificate. There is no legitimate technical reason to require that those
different hosts have different certificates; they fit within the same
operational and security threat boundary. Nor would having millions of SANs be
in any way beneficial to users, or to the evolution of protocols such as QUIC,
which are rightfully cautious about the size of certificates.

Although those are examples of the single-developer and multi-developer
models, wildcards also benefit users directly. Consider solutions like Plex's
home streaming; Plex was (to my knowledge) the first provider in the space to
fully encrypt all of its users' streaming. If you're not familiar with Plex
[2], a user sets up a media server on one of their devices, and can stream to
their other devices, whether on the same local network or from anywhere on the
Internet. Plex achieved this through a rather innovative DNS naming scheme and
the use of wildcards, which work around limitations in the deployed reality of
consumer networking equipment. In short, each user is given a (single)
wildcard certificate, which is valid for all names of the form
"*.[user-identifier].plex.tv". When they want to stream, whether on the LAN or
remotely, Plex can offer names of the form
"192-168-0-1.[user-identifier].plex.tv" that resolve to 192.168.0.1 (for
example). This is an alternative to IP certificates: it offers a unique
per-user binding, works for both local and remote IPs, ensures full HTTPS, and
works around issues with users' networking equipment (as best it can). A
non-wildcard approach would require regularly minting certificates on demand,
for questionable benefit. Since Plex started doing this, a number of other
providers of IoT and IoT-like devices have adopted similar solutions,
including Western Digital.
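
For concreteness, here is a minimal Go sketch of the naming transform
described above. It is not Plex's actual code, and the user identifier shown
is made up; it only illustrates how a LAN IP maps onto a name covered by the
per-user wildcard:

  package main

  import (
      "fmt"
      "strings"
  )

  // lanHostname maps a local IP such as 192.168.0.1 onto a hostname of the
  // form "192-168-0-1.<user-identifier>.plex.tv", which the per-user
  // wildcard certificate for "*.<user-identifier>.plex.tv" covers.
  func lanHostname(ip, userID string) string {
      return strings.ReplaceAll(ip, ".", "-") + "." + userID + ".plex.tv"
  }

  func main() {
      // "abcd1234" is a hypothetical user identifier.
      fmt.Println(lanHostname("192.168.0.1", "abcd1234"))
      // Output: 192-168-0-1.abcd1234.plex.tv
  }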

I agree it's concerning that "mail.example.com" can share a certificate with
"www.example.com", but that's no less true with wildcards than it is with
subjectAltNames. The solution for those cross-protocol considerations is not
to forbid wildcards, which does nothing to address the problem, but to use
SRVNames more widely in implementations, along with wider deployment of ALPN
to reduce cross-protocol / cross-boundary confusion.
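
As an example of the ALPN half of that, here is a minimal Go crypto/tls
sketch (the certificate and key file names are placeholders): the server
advertises only the application protocols this endpoint actually speaks, so a
client whose ALPN list has no overlap is rejected at the handshake rather
than allowed to talk some other protocol to the same certificate.

  package main

  import (
      "crypto/tls"
      "log"
  )

  func main() {
      // Placeholder paths; any server certificate/key pair will do.
      cert, err := tls.LoadX509KeyPair("cert.pem", "key.pem")
      if err != nil {
          log.Fatal(err)
      }

      cfg := &tls.Config{
          Certificates: []tls.Certificate{cert},
          // Only the HTTP protocols this endpoint speaks. A client whose
          // ALPN list has no overlap gets a no_application_protocol alert.
          NextProtos: []string{"h2", "http/1.1"},
      }

      ln, err := tls.Listen("tcp", ":8443", cfg)
      if err != nil {
          log.Fatal(err)
      }
      defer ln.Close()
      // ... accept and serve connections as usual ...
  }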



[1] There are several reasons that SRVNames are not widely supported in
client software, and in particular in web browsers. Some of these are issues
that the IETF can tackle; it has merely been a matter of resourcing and
prioritization, with responding to CA incidents and other more critical
priorities taking precedence over proactive improvements.

1. The specification (RFC 4985) is under-specified in key security-critical
aspects, namely, how to express "you cannot issue SRVNames at all" in
nameConstraints. RFC 5280 suffers from some of this as well, in that URI
nameConstraints use a different syntax for subdomains than DNS
nameConstraints, but SRVNames fall into an uncanny valley where the language
is ambiguous as to whether the proper expression for "no issuance" is a
zero-length field (like DNS name constraints) or a single "." (like URI name
constraints).

2. The specification of nameConstraints (RFC 5280) itself makes it dangerous
to introduce new name types, because all existing nameConstrained CAs will
automatically be able to issue certificates using the new name type, even if
it is synonymous with an existing constraint. Concretely, if a DNS
nameConstraint restricts a CA to "foo.example", clients supporting SRVNames
would, per the spec, allow that CA to issue certificates for any arbitrary
service on "bar.example", because SRVNames themselves are not constrained.
Put differently, nameConstraints are "default allow" (if unspecified) rather
than "default deny", a decision that is security relevant; the sketch below
illustrates the behaviour.
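
Since no mainstream stack implements SRVName checking, the following Go
sketch shows the analogous cross-type gap with a URI SAN: a CA constrained by
a DNS nameConstraint to "foo.example" still chains a leaf naming a service on
"bar.example" via a URI, because no URI constraint exists. All names and keys
are generated on the fly purely for illustration; error handling is elided.

  package main

  import (
      "crypto/ecdsa"
      "crypto/elliptic"
      "crypto/rand"
      "crypto/x509"
      "crypto/x509/pkix"
      "fmt"
      "math/big"
      "net/url"
      "time"
  )

  func main() {
      // A CA constrained, via a DNS name constraint, to "foo.example".
      caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
      caTmpl := &x509.Certificate{
          SerialNumber:                big.NewInt(1),
          Subject:                     pkix.Name{CommonName: "Constrained CA"},
          NotBefore:                   time.Now().Add(-time.Hour),
          NotAfter:                    time.Now().Add(24 * time.Hour),
          IsCA:                        true,
          BasicConstraintsValid:       true,
          KeyUsage:                    x509.KeyUsageCertSign,
          PermittedDNSDomainsCritical: true,
          PermittedDNSDomains:         []string{"foo.example"},
      }
      caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
      caCert, _ := x509.ParseCertificate(caDER)

      // A leaf whose only SAN is a URI naming a service on "bar.example":
      // a different name type, so the DNS constraint never applies to it.
      leafKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
      leafTmpl := &x509.Certificate{
          SerialNumber: big.NewInt(2),
          Subject:      pkix.Name{CommonName: "leaf"},
          NotBefore:    time.Now().Add(-time.Hour),
          NotAfter:     time.Now().Add(24 * time.Hour),
          ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
          URIs:         []*url.URL{{Scheme: "https", Host: "bar.example"}},
      }
      leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
      leafCert, _ := x509.ParseCertificate(leafDER)

      roots := x509.NewCertPool()
      roots.AddCert(caCert)
      _, err := leafCert.Verify(x509.VerifyOptions{Roots: roots})
      fmt.Println("verification error:", err) // expected: <nil> -- default allow
  }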

Both of these are tractable problems, but by no means trivial to solve.
However, combined, they're why client implementations have not yet introduced
support for SRVNames: not wanting to allow otherwise constrained CAs today to
issue names they're not authorized for. This is the very problem that began
this work (the commonName interaction with nameConstraints, which is likewise
unspecified), and while I'd be very supportive of efforts to resolve it, it's
not something I have the bandwidth to write drafts for.

[2] https://blog.filippo.io/how-plex-is-doing-https-for-all-its-users/