This thread is starting to get hard for me to follow.  Apologies if something 
goes missing.

On Wed, Jan 8, 2020, at 05:39, Sara Dickinson wrote:
> Propose using text suggested by Ekr here: "The privacy risks associated 
> with other protocols that make use of DNS information are not 
> considered here"

ACK.  WFM.

> Several DNS implementations support TFO which was one of the reasons to 
> include this. 

Let me try to restate my point more clearly then: the choice to enable TFO 
implies a policy decision regarding linkability on the part of those who enable 
it.  And to the extent that TFO is deployed, it deserves separate mention, as 
it is somewhat exceptional.  The current text implies that TFO is exemplary of 
a class of linkability problems that you inherit when you move to something 
other than UDP, but that is not at all the case.
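
To make the objection concrete, here is a toy model of why TFO is exceptional 
(the `TfoCookieCache` class and addresses are hypothetical illustrations; the 
real cookie cache lives in the OS kernel, keyed by server address):

```python
import os

# Toy model of a client-side TCP Fast Open cookie cache.  Purely
# illustrative: in reality the kernel stores the cookie the server
# issued on a prior connection and replays it on later ones.
class TfoCookieCache:
    def __init__(self):
        self._cookies = {}  # server address -> cookie bytes

    def cookie_for(self, server):
        # The same cookie is presented on every subsequent connection
        # to this server, so an on-path observer who sees it twice can
        # link the two connections to the same client.
        if server not in self._cookies:
            self._cookies[server] = os.urandom(8)  # server-issued in reality
        return self._cookies[server]

cache = TfoCookieCache()
first = cache.cookie_for("203.0.113.53")
second = cache.cookie_for("203.0.113.53")
print(first == second)  # True: the replayed cookie is a linkable identifier
```

The point being that this linkage is a deliberate design property of TFO, not 
an incidental quirk inherited from moving off UDP.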

As I previously mentioned, most of the visible behaviour comes from a small 
set of implementations, because it is often OS-level TCP implementation quirks 
that give an observer some fingerprinting entropy.  The current framing 
creates a biased perception of the nature of the problem.

> That works for HTTPS folks but I think this text needs to speak to the 
> DNS community who are from a UDP mindset. The current level of 
> treatment was sufficient for the draft to pass WGLC so I would be 
> minded to keep it at that level. 

That was an aside explaining motivation.  I would rather talk about the 
substance of the suggestion and the need for a more nuanced treatment of the 
subject.  One of the strengths of IETF LC over WGLC is that it allows us to 
draw on wider perspectives, and this is clearly a case that would benefit from 
that.

> “The majority of currently deployed stub resolvers use DHCP to discover 
> the identities of resolvers provided by the local network and use no 
> additional authentication mechanism to validate the resolver. The stub 
> therefore places trust in the network to both operate a recursive 
> resolver and to secure the discovery process. Note that DHCP assumes 
> that the network provides certain safeguards; see Section 22 of 
> [RFC8415]. Vulnerabilities in the discovery process might allow an 
> attacker to interpose their own resolver as described below. "

As I said below:

> > I would point out that the citation for dnschanger violates the standard 
> > assumptions in RFC 3514, so I wouldn't rely on that so much.  The ARP/NDP 
> > examples are in direct contradiction to the additional assumptions that 
> > DHCP makes.

This isn't a question of framing so much as an observation that the specific 
text on vulnerabilities is directly at odds with the security models assumed 
by the discovery protocols.

That makes the expansion of these details of questionable value, beyond 
saying that not all network deployments meet the requirements for DHCP 
deployment and so are vulnerable to attack on the discovery process, which is 
what enables things like dnschanger.  I'd be happy to add that even if 
discovery is secured, failing to secure interactions with the DNS server adds 
further vulnerability to attack.  But this doesn't need to say how, because 
there isn't one single way.

> “As a matter of policy, some recursive resolvers use their position in 
> the query path to selectively block access to certain DNS records.  
> This is a form of Rendezvous-Based Blocking as described in Section 4.3 
> of [RFC7754]. Such blocklists often include servers known to be used for 
> malware, bots or other security risks. In order to prevent 
> circumvention of their blocking policies, some networks also block 
> access to resolvers with incompatible policies. "

WFM, and the title change.

> >>> Section 3.5.1.5.2
> >> NEW:
> >> “Users should be aware that the particular choice of HTTPS 
> >> functionality vs data minimisation (for example, whether to include the 
> >> user-agent header) is an implementation specific choice in DoH, not one 
> >> defined in RFC8484.”
> > 
> > Who is the audience for this document again?
> 
> I think it has a very wide audience, anyone who want to understand the 
> privacy considerations of actually using the DNS on the Internet. 

Sorry, that was a little snarky.  There are two problems here.  The first is 
the assumption that user awareness is the right approach to addressing privacy 
concerns: that I am aware of a terrible practice is not always sufficient, 
though it might be necessary.  More seriously, there is the broad-strokes 
implication that HTTP capabilities naturally require trading some aspect of 
functionality (performance, operational advantages, whatever) against privacy. 
That's a thematic problem with this document.  I don't think it is fair to 
call it FUD, but it could do better to recognize that there are effective 
tools for managing this particular class of problem.

Yes, there are cases where functionality has been added that depends on 
providing some degree of linkability between requests.  Connection reuse is one 
place where DNS and HTTP have this property in common.

But it is true that HTTP has grown many similar features.  You could - as 
this document strongly implies - suggest that this multitude of options makes 
it a risky proposition to use HTTP because of the surprising ways in which 
linkability manifests.  Or you could recognize that you need a framework 
within which to simplify the analysis, and that these features were developed 
within existing frameworks that greatly simplified any privacy assessment.

That's considerably simpler than this document would seem to suggest: you just 
need to decide how to create clear separation between queries (or groups of 
queries).  For reference, to a first order approximation, the web segregates by 
origin in space and uses cookie clearing to segment in time.  That has enabled 
us to develop these capabilities without changing the privacy profile of the 
resulting system substantially.  Indeed, recognizing and formalizing this 
model has allowed us to make some significant privacy improvements on the web.
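
For illustration, a toy sketch of that separation (the `PartitionedCache` 
class and example names are hypothetical, not any existing resolver API): 
keying cached state by the requesting context, the way the web keys by 
origin, bounds linkability by construction rather than by case-by-case audit.

```python
class PartitionedCache:
    """Toy DNS cache keyed by (partition, qname).  State accumulated on
    behalf of one context is invisible to every other context, so the
    cache cannot be used to link activity across partitions."""

    def __init__(self):
        self._entries = {}

    def put(self, partition, qname, answer):
        self._entries[(partition, qname)] = answer

    def get(self, partition, qname):
        return self._entries.get((partition, qname))

cache = PartitionedCache()
cache.put("site-a.example", "tracker.example", "192.0.2.1")
# The same name looked up from a different partition misses:
print(cache.get("site-b.example", "tracker.example"))  # None
print(cache.get("site-a.example", "tracker.example"))  # 192.0.2.1
```

Any feature added inside such a framework inherits its privacy bound, which 
is what makes the per-feature analysis tractable.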

A privacy analysis of DNS that aimed to formalize a similar framework might be 
useful.  This might start by recognizing that source IP dominates any 
analysis.  This is recognized in the document, but it isn't developed beyond 
that.  It 
might be possible to look at existing practice here (client source port 
randomization seems to ensure that NAT provides some anonymity when UDP is 
used) and - based on that - to develop a framework for understanding the 
current bounds on linkability.  Then you could describe how aligning breaks in 
linkability all the way up the stack can be used to provide privacy gains: see 
https://tools.ietf.org/html/draft-wood-linkable-identifiers-01 for more 
background there.
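
The NAT observation can be sketched as a toy model (names and addresses are 
illustrative, not drawn from any implementation) of what an upstream resolver 
actually sees from clients behind a port-randomizing NAT:

```python
import random

NAT_IP = "198.51.100.1"  # illustrative address for the shared NAT

def nat_rewrite(client_id):
    # RFC 6056-style source port randomization: each UDP query leaves
    # the NAT with the shared address and an unpredictable ephemeral
    # port, so the resolver cannot attribute it to an internal client.
    return (NAT_IP, random.randrange(1024, 65536))

# Every client behind the NAT presents the same visible address,
# forming a single anonymity set from the resolver's perspective:
visible = {nat_rewrite(c)[0] for c in ["alice", "bob", "carol"]}
print(visible)  # {'198.51.100.1'}
```

A framework for linkability bounds would need to say where, up the stack, 
that anonymity set survives and where it is broken.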

For this document, at this point in time, it might be better to recognize that 
the problem exists and that current approaches for dealing with linkability are 
ad hoc and uncoordinated.  Then maybe say that the introduction of new 
transports requires a stronger shared understanding of what the exposure is and 
how that might be best managed.  My intuition is that there are more gains to 
be had from this than there are new risks introduced by the new transports.

> “the wide practice in HTTP to use various headers to optimize HTTP 
> connections, functionality and behaviour which can introduce a 
> trade-off between functionality and privacy (since it can facilitate 
> user identification and tracking)”

How is that materially different?  See above regarding the implications about 
this trade-off.

_______________________________________________
dns-privacy mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dns-privacy
