On Thu, Feb 6, 2020 at 12:08 PM Eric Rescorla <[email protected]> wrote:
On Thu, Feb 6, 2020 at 12:04 PM Brian Dickson <[email protected]> wrote:
Top-top-top reply:
The Internet Threat Model you are using for web client-server
is fine.
However, for DNS, that is the wrong threat model, for several
reasons.
* The threat for DNS cache poisoning is recursive-to-authoritative, not client-to-recursive (resolver)
* The DNS path will not generally be related to the data path, and for any parent zone will almost certainly be totally unrelated
* DNS recursive-to-authoritative is UDP
* UDP DNS does not require that the attacker be on-path
* Compromise of DNS caches via poisoning (by potentially off-path attackers), leading to compromise of user data, is not exaggerated
* The compromise risk is per-cache, as well as per-authority-server and/or per-DNS-record
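To make the off-path point concrete, here is a rough back-of-the-envelope sketch of the search space a blind spoofer faces (idealized numbers for illustration, not measurements from any particular resolver):

```python
# Back-of-the-envelope numbers for off-path (blind) DNS cache poisoning.
# A spoofed UDP reply is accepted if it matches the outstanding query's
# 16-bit transaction ID and, with source-port randomization, its 16-bit
# ephemeral source port as well.

TXID_SPACE = 2 ** 16   # possible DNS transaction IDs
PORT_SPACE = 2 ** 16   # possible source ports (idealized upper bound)

# Without port randomization, guessing the TXID alone suffices:
print(TXID_SPACE)                # 65536

# With port randomization, the combined space is far larger:
print(TXID_SPACE * PORT_SPACE)   # 4294967296
```

Even the larger figure is brute-forceable by an off-path attacker who can trigger many queries and flood candidate replies, which is why the entropy of the transport alone is not treated as a sufficient defense.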
First, all of these are just consequences of the 3552 "attacker
completely controls the network" threat model.
Sorry, I'm not clear on what this statement means in this context, or what implication should be inferred from it.
Are you saying:
* It should be assumed (per the threat model) that any/every attacker completely controls every network segment everywhere?
* or, that only attackers who DO control some specific network segment are a threat?
These have vastly different implications, clearly.
If the first one is the case, are you conceding the precondition that attackers can poison DNS caches arbitrarily by manipulating all DNS traffic? If so, that argues in favor of DNSSEC validation by the resolver in all cases, as that is the only way the attack can be blocked.
If the second one is the case, the bullet points you quote contradict that assertion: specifically, off-path attackers do not need to control any network segment (let alone all network segments) to successfully poison a DNS cache. This also argues in favor of DNSSEC validation.
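(As a side note on what validation looks like on the wire: a validating resolver signals success to its clients via the AD ("authentic data") flag in the DNS response header. A minimal stdlib-only sketch of checking that flag, using fabricated header bytes purely for illustration:

```python
import struct

AD_BIT = 0x0020  # AD ("authentic data") flag in the DNS header flags word

def dnssec_validated(response: bytes) -> bool:
    """Return True if the AD flag is set in a raw DNS response's 12-byte header."""
    if len(response) < 12:
        raise ValueError("truncated DNS message")
    (flags,) = struct.unpack_from("!H", response, 2)  # flags are bytes 2-3
    return bool(flags & AD_BIT)

# Fabricated headers: ID=0x1234, QR=1 (response), 1 question, 1 answer.
validated = struct.pack("!HHHHHH", 0x1234, 0x8000 | AD_BIT, 1, 1, 0, 0)
unvalidated = struct.pack("!HHHHHH", 0x1234, 0x8000, 1, 1, 0, 0)
print(dnssec_validated(validated), dnssec_validated(unvalidated))  # True False
```

Of course the AD flag only tells the client what the resolver claims; the point above stands that the validation itself must happen at the resolver.)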
If you mean something else, could you explain what you mean?
Second, the text in question is about the effect of attacks on DNS on the Web: "Users may be directed to bogus IP addresses for e.g. websites where they might reveal personal information to attackers."
Correct, and I don't see anything you say here refuting the concern
over DNS cache poisoning attacks, which result in bogus IP addresses
directing users to malicious servers, etc.
If users are sent to the wrong IP address, this substantially weakens
the argument that WebPKI is sufficient protection.
Why are CRL and/or OCSP needed, if not to respond to compromised certificates (meaning leaked private keys)?
Am I missing something about WebPKI, beyond the private key proof of
possession model?
(Everything else about WebPKI is about validating the requestor's
authority and identity, but that is all orthogonal to key control.)
A web server using a compromised key will only ever be visible to a (potential) victim, and never to third parties (including the legitimate certificate holder), except incidentally to the operation of the rogue server.
Brian
-Ekr
I haven't written up the details of the more effective cache poisoning attacks, but have been sharing summary information for several years now.
(The underlying issue is IP fragmentation of UDP packets. This is one of the reasons the DNS Flag Day for 2020 will include recommendations/requirements to avoid fragmentation.)
I'd be willing to write up those more effective attacks,
including a PoC, but that won't likely happen for a few months.
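(For reference, the flag-day mitigation is mechanically simple: advertise an EDNS0 UDP buffer size small enough that replies are never fragmented. A minimal sketch of the wire encoding per RFC 6891, where the OPT pseudo-RR's CLASS field carries the requestor's UDP payload size; 1232 bytes is the commonly cited flag-day figure, derived from the IPv6 minimum MTU of 1280 minus headers:

```python
import struct

# DNS Flag Day 2020 recommended EDNS0 UDP buffer size, chosen to avoid
# IP fragmentation (IPv6 minimum MTU 1280 minus IPv6 + UDP header overhead).
FLAG_DAY_BUFSIZE = 1232

def edns0_opt_rr(payload_size: int = FLAG_DAY_BUFSIZE) -> bytes:
    """Encode a minimal EDNS0 OPT pseudo-RR (RFC 6891).

    In an OPT record the CLASS field carries the requestor's UDP payload
    size rather than a real DNS class; TTL holds extended RCODE/flags.
    """
    name = b"\x00"   # root domain name
    rtype = 41       # OPT
    ttl = 0          # extended RCODE, version, and flags all zero
    rdlen = 0        # no EDNS options attached
    return name + struct.pack("!HHIH", rtype, payload_size, ttl, rdlen)

rr = edns0_opt_rr()
print(len(rr))  # 11 bytes: 1 (name) + 2 + 2 + 4 + 2 (fixed fields)
```

This doesn't fix poisoning by itself, but it removes the fragmentation vector described above.)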
Brian
On Thu, Feb 6, 2020 at 11:22 AM Eric Rescorla <[email protected]> wrote:
Thanks. I am just looking at this text, and I think it's
inappropriate. To recap something I seem to be saying a
lot lately, the Internet Threat Model assumes a
Dolev-Yao-style attacker who controls the network between
the client and the server. TLS is designed to be secure in
this environment, and while the WebPKI is imperfect,
suggesting that compromise of local DNS lookups leads to
compromise of user data seems exaggerated, at least in the
case of Web traffic.
-Ekr
On Thu, Feb 6, 2020 at 10:22 AM Adam Roach <[email protected]> wrote:
Top-posting because I agree with the facts as you
present them. I just reach a different conclusion
based on these facts. To be clear, I think a
belt-and-suspenders approach is generally preferable.
I am merely suggesting that the "must" statement I
cite may be stronger than is actually advisable given
that such an approach is merely a small increment of
security for protocols that are otherwise secured
(e.g., HTTP, which is the example the document
chooses), rather than the sole defense, as may be the
case with other protocols.
My top-line suggestion here is to choose a different
example than HTTP.
Secondary to that is a suggestion that the "must"
statement really only makes sense when it is a sole
counter-measure, and that a softer recommendation
("should") makes more sense otherwise.
These are non-blocking comments, so I'm going to
reiterate that the WG can ignore them -- I just wanted
to make sure they were considered. It would be nice to
hear from other folks on the topic as well.
/a
On 2/6/20 11:57, Brian Dickson wrote:
On Thu, Feb 6, 2020 at 9:31 AM Adam Roach <[email protected]> wrote:
On 2/6/20 09:08, Adam Roach wrote:
>
> For the specific example chosen, it's been made pretty clear over the
> years that at least the clients for the specific service you cite have
> no interest in incurring this additional cost. If that's the working
> group consensus, then I have no interest in over-riding it. But
> ignoring operational realities seems kind of ivory tower-ish, which
> feels like the kind of thing that undermines the general credibility
> of the rest of the document.
>
Could you please be more specific?
When you say "for the specific service", do you mean
DNSSEC?
And do you mean the signing of DNS zones using
DNSSEC, when you refer to clients of that service?
Perhaps you missed my microphone comments at the last
IETF?
Specifically that GoDaddy will be turning on DNSSEC
for the vast majority of its DNS hosting customers?
This represents about 40% of the DNS zones on the
Internet.
(The exact time frame is not set in stone, but we
expect this to be done in the first half of 2020.)
Given that this significantly alters the calculus, I
don't think that is a good enough reason to object in
and of itself anymore.
The other aspect of this is the asymmetry involved in the defenses against impersonation:
* The choice to sign a DNS zone is under control of the zone owner
* The choice to deploy RPKI on routes (to protect against BGP hijacking) is under control of the IP prefix holder
* Both methods rely on third parties to cooperate to achieve the protections offered
* RPKI routing filters are now widely deployed, and RPKI registrations are substantial
* The remaining issue is DNSSEC validation; many (most?) of the public recursive operators do this already
The logic should be: defend against all feasible attacks, rather than justifying the non-defense in one area (DNSSEC for DNS) based on the assertion that another area is not being defended (RPKI for BGP).
Brian
I realize that my editing made one of the pronoun antecedents here go away. The second sentence should have said something more like "If keeping the current text is the working group consensus..."
/a
_______________________________________________
dns-privacy mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dns-privacy