Re: [DNSOP] various approaches to dns channel secrecy

2014-07-06 Thread Matthäus Wander
* Paul Vixie [7/5/2014 7:47 PM]:
 Matthäus Wander wrote:
 DTLS works on top of UDP (among others) and thus can pass CPE devices.
 
 no, it cannot. DTLS does not look like something that the CPE was programmed
 to accept; thus in many cases it is silently dropped.
 

DTLS can be used on top of UDP. CPE devices allow outgoing UDP sessions
to arbitrary ports. If they didn't, many online games and VoIP
applications would not work.

Here's an example DTLS session passing my DSL router at home:
 https://www.cloudshark.org/captures/7d2ae4cfe155

Source code found here:
 http://marc.info/?l=openssl-users&m=113009464321966&w=3

Regards,
Matt



___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] various approaches to dns channel secrecy

2014-07-06 Thread Phillip Hallam-Baker
This is really a design question.

As far as I am concerned, DNS is and always will be a first-class Internet
protocol. It is the foundation for everything else. The syntax etc. can
change, but it is a building block that other things should build on, not
something that can leverage other facilities.

So the approach I would take to dealing with legacy infrastructure is
two-pronged:

1) A principled approach that does not make allowance for network
deployment constraints.

2) One or more mechanisms to ensure service is available in restricted
networks.


So a browser would need to implement (1) and (2), but an Internet-connected
coffee pot might only support (1) plus legacy DNS, because it isn't a device
that would require the connectivity guarantees that (2) provides.

Having experimented, it seems that a UDP service plus a web service over
HTTP are the best choice. I have tried using DNS as a tunnelling protocol
(TXT lookups), but that does not seem to be worth the hassle.


DTLS looks like a good idea at first, but it is a bolt-on to TLS, which is
already a thick stack. DTLS is really designed to secure protocols that are
essentially emulating TCP in UDP.

There are times to stick with the existing standards and time to make a
fresh start. I think DNS is a case where a fresh start is appropriate.


Re: [DNSOP] various approaches to dns channel secrecy

2014-07-06 Thread Paul Vixie


Matthäus Wander wrote:
 * Paul Vixie [7/5/2014 7:47 PM]:
 Matthäus Wander wrote:
 DTLS works on top of UDP (among others) and thus can pass CPE devices.
 no, it cannot. DTLS does not look like something that the CPE was programmed
 to accept; thus in many cases it is silently dropped.


 DTLS can be used on top of UDP. CPE devices allow outgoing UDP sessions
 to arbitrary ports. If they didn't, many online games and VoIP
 applications would not work.

it's possible to find single counter examples to almost any assertion.
however, consider RFC 2671 (EDNS), published fifteen years ago. because
it changes the format of a UDP/53 datagram, there is silent loss across
most CPE boundaries. implementers of EDNS have had to investigate and
deploy about a dozen different fallback strategies since then, not to
make EDNS work, but to make it fail reliably enough so that normal
non-EDNS can be tried. since DNSSEC relies on EDNS0, this is a real
problem. to the extent that it's gotten any better it's because someone
changed this CPE logic:

if (normal dns packet)
intercept it and answer inappropriately, 30% of the time;
let it get where it's going, 70% of the time;
else
drop;

to this:

if (normal dns packet)
intercept it and answer inappropriately, 30% of the time;
let it get where it's going, 70% of the time;
else if (normal edns packet)
intercept it and answer inappropriately, 30% of the time;
let it get where it's going, 70% of the time;
else
drop;

in other words what fixes have been made have been EDNS specific, where
the real fix is:

if (packet addressed to you)
handle it or send ICMP;
else
let it get where it's going;

that fix is not going into the O(10^9) CPE devices now in place, ever.
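
the CPE behaviour described above can be modelled as a small simulation (an
editorial sketch: the 30%/70% split and the packet classes come straight
from the illustration above, not from measurement):

```python
import random

def legacy_cpe(packet, rng):
    """The first CPE logic above: plain DNS is intercepted 30% of the
    time and forwarded 70% of the time; everything else (EDNS, DTLS,
    ...) is silently dropped."""
    if packet == "normal dns":
        return "intercepted" if rng.random() < 0.3 else "forwarded"
    return "dropped"

def patched_cpe(packet, rng):
    """The EDNS-specific patch: EDNS now gets the same 30/70
    treatment, but any still-unknown packet type is still dropped."""
    if packet in ("normal dns", "normal edns"):
        return "intercepted" if rng.random() < 0.3 else "forwarded"
    return "dropped"

def correct_cpe(packet, addressed_to_me=False):
    """The real fix: handle only packets addressed to the CPE itself
    and forward everything else untouched."""
    return "handled" if addressed_to_me else "forwarded"

rng = random.Random(1)
# a hypothetical new datagram protocol is dropped by both broken
# variants, but passes the correctly behaving device
assert legacy_cpe("dtls datagram", rng) == "dropped"
assert patched_cpe("dtls datagram", rng) == "dropped"
assert correct_cpe("dtls datagram") == "forwarded"
```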

if we can't get this right for EDNS in 15 years, my bet is that another
15 (or 150) years of trying won't produce better results. in fact, by
jim gettys and dave taht i've been made to understand that the world's
CPE problem is much worse than i knew. we might be able to fix it for
the next billion devices some day, but the devices shipping today are
still crippled.

incentives are such that a CPE provider hopes to sell web access, not
internet access.

your counter-example of DNS gaming does not change the treatment now
accorded UDP/53 at the internet edge. if you seriously think that a DTLS
solution can be universally deployed, including in hotel rooms, home CPE
environments, coffee shops, and mobile, then you and i are having a
"same planet, different worlds" experience, and i wish you well on your
walk.

vixie



Re: [DNSOP] draft-wkumari-dnsop-dist-root-01.txt

2014-07-06 Thread Ralf Weber
Moin!

On 05 Jul 2014, at 18:11, Joe Abley jab...@hopcount.ca wrote:
 TL;DR: there are way more cons than pros to this proposal. The pros listed 
 are weak; the cons listed are serious. I don't see a net advantage to the DNS 
 (or to perceived performance of the DNS for any client) here. This proposal, 
 if implemented, would represent non-trivial additional complexity with 
 minimal or no benefit. I am not in favour of it, if that's not obvious.
 
 As noted previously, I am not against documenting and discussing the merits 
 of slaving the root zone on resolvers (in some fashion). My preference would 
 be for a draft called something like 
 draft-ietf-dnsop-slaving-root-on-resolvers-harmful, which could borrow much 
 of your section 5.1 and 5.2 to make its argument.
Oh, like draft-ietf-dnsop-reflectors-are-evil, which became RFC 5358 but still
hasn't stopped Google and others from offering open resolving service to the
internet. Granted, there are a lot of open DNS proxies out there that should be
taken down, but I assume there are some companies offering resolving services
that are valuable to Internet users.

 I remain very much *not* in favour of making changes to the DNS specification 
 that don't have a clear benefit to balance their costs.
I think there is a difference between the precise specification and what you
can do with your DNS software. While it may not be within the spec, you can
set up an auth server today that slaves the root zone and use a stub on your
resolver to point to that root zone. That's how I run my setup at home,
because I don't want my queries to be part of the DITL collection, and I know
that others do it because they have very bad connectivity to the root servers,
or just bad connectivity in general.
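
The setup described above looks roughly like this in BIND-style configuration
(a sketch only; the transfer source 192.0.2.1 is a placeholder, not a
recommendation of any particular root zone distribution point):

```
// auth instance: slave the root zone locally
zone "." {
    type slave;
    file "root.zone";
    masters { 192.0.2.1; };    // placeholder: any server offering root AXFR
};

// resolver instance: send root queries to the local auth instance
zone "." {
    type static-stub;
    server-addresses { 127.0.0.1; };
};
```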

I think if we view the resolver as having another auth root server at
localhost, the logic is easier to understand and makes much more sense, as
DNSSEC protections would kick in even if someone managed to inject a bad zone.

The draft doesn't require every resolver to slave the zone, but merely
documents that this is a possible way to do it. I assume that large resolver
operators would benefit from it, and the CPEs you mention that haven't even
implemented EDNS0 wouldn't matter anyway.

In the end, of course, it comes down to: do we want to document what is out
there anyway, or do we want to hide our heads in the sand? Especially for an
operational group, I would prefer the former.

So long
-Ralf



Re: [DNSOP] draft-wkumari-dnsop-dist-root-01.txt

2014-07-06 Thread Mark Andrews

In message etpan.53b82396.4353d0cd.3...@walrus.hopcount.ca, Joe Abley writes:
 Hi Paul, Warren,
 
 On 4 July 2014 at 16:50:08, Paul Hoffman (paul.hoff...@vpnc.org) wrote:
 
  Greetings. Warren and I have done a major revision on this draft, 
 narrowing the design  
  goals, and presenting more concrete proposals for how the mechanism 
 would work. We welcome  
  more feedback, and hope to discuss it in the WG in Toronto.
 
 I think there is much in the language of this draft that could be 
 tightened up, but this is an idea for discussion so I'll avoid a pedantic 
 line-by-line dissection. But I can give you the full pedantry if you like 
 :-)
 
 On the pros and cons, however (crudely pasted below), see below.
 
 TL;DR: there are way more cons than pros to this proposal. The pros 
 listed are weak; the cons listed are serious. I don't see a net advantage 
 to the DNS (or to perceived performance of the DNS for any client) here. 
 This proposal, if implemented, would represent non-trivial additional 
 complexity with minimal or no benefit. I am not in favour of it, if 
 that's not obvious.
 
 As noted previously, I am not against documenting and discussing the 
 merits of slaving the root zone on resolvers (in some fashion). My 
 preference would be for a draft called something like 
 draft-ietf-dnsop-slaving-root-on-resolvers-harmful, which could borrow 
 much of your section 5.1 and 5.2 to make its argument.
 
 I remain very much *not* in favour of making changes to the DNS 
 specification that don't have a clear benefit to balance their costs.
 
 ---
 
 5.1. Pros
 
  o Junk queries / negative caching - Currently, a significant number
of queries to the root servers are junk queries. Many of these
    queries are TLDs that do not (and may never) exist in the root.
    Another significant source of junk is queries where the negative
TLD answer did not get cached because the queries are for second-
level domains (a negative cache entry for foo.example will not
cover a subsequent query for bar.example).
 
 I think a better way to accommodate the second point is to implement 
 qname minimisation in recursive server logic.

When you can get rid of all the servers in the world which followed
RFC 2535 and return NXDOMAIN for empty non-terminals, qname
minimisation and this sort of logic will be viable, though it won't
do anywhere near as good a job as having a local copy of the
root zone.
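
qname minimisation, as discussed here, changes which names each zone gets to
see; a sketch of the resulting query sequence (illustrative logic, not any
particular implementation):

```python
def minimised_queries(qname):
    """Return the sequence of (name, qtype) a minimising resolver
    would send while walking down from the root, exposing only one
    additional label to each zone instead of the full qname."""
    labels = qname.rstrip(".").split(".")
    queries = []
    for i in range(len(labels) - 1, -1, -1):
        name = ".".join(labels[i:]) + "."
        # NS probes while walking down; the real qtype only at the end
        qtype = "NS" if i > 0 else "A"
        queries.append((name, qtype))
    return queries

# the root (and each parent zone) sees only the next label,
# never the full "bar.foo.example." qname
assert minimised_queries("bar.foo.example.") == [
    ("example.", "NS"),
    ("foo.example.", "NS"),
    ("bar.foo.example.", "A"),
]
```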

 I don't know that the first point is much of a pro. Root server operators 
 need to provision significant spare capacity in order to accommodate 
 flash crowds and attack traffic, and compared to that spare capacity the 
 volume of junk queries is extremely small. There's no obvious operational 
 benefit to root server operators in reducing their steady-state query 
 load (in fact, it would make it harder in some cases to obtain the 
 exchange point capacity you need to accommodate flash crowds, on 
 exchanges where higher-capacity ports are reserved for those that have 
 demonstrable need based on steady-state traffic.)

But there is a big benefit to cache operators.  The bigger the client base,
the bigger the benefit.
 
 I'm also a little concerned about the word junk. It's a pejorative term 
 that implies assumptions about the intent of the original query. If my 
 intent is to confirm that a top-level label doesn't exist, then 
 BLAH/IN/SOA is a perfectly reasonable query for me to send to a root 
 server. We might assume that a query Joe's iPhone/IN/SOA sent to a root 
 server is not reasonable, but we're only assuming; we don't actually have 
 a way of gauging the actual intent with any accuracy.
 
  o DoS against the root service - By distributing the contents of the
    root to many recursive resolvers, the DoS protection for customers
    of the root servers is significantly increased. A DDoS may still
    be able to take down some recursive servers, but there is much
    more root service infrastructure to attack in order to be
    effective. Of course, there is still a zone distribution system
    that could be attacked (but it would need to be kept down for a
    much longer time to cause significant damage), and so far the root
    has stood up just fine to DDoS.
 
 If I was to paraphrase this advantage with malicious intent :-), you mean 
 that we don't have to rely upon the root server system to continue to 
 perform under attack, because we don't need the root server system any 
 more, although we do need the new bits of the root server system we are 
 specifying, and if those bits are not available we do need the 
 conventional root server system after all, but that's probably ok because 
 the root server system is pretty resilient. That sounds a bit circular.
 
  o Small increase to privacy of requests - This also removes a place
where attackers could collect information. Although query name
minimization also achieves some of this, it does still leak the
TLDs that people behind a 

Re: [DNSOP] [Int-area] various approaches to dns channel secrecy

2014-07-06 Thread Eliot Lear
Paul,

This seems like a fine and modular approach that doesn't boil the ocean.

Eliot

On 7/5/14, 5:04 AM, Paul Vixie wrote:
 i've now seen a number of proposals reaction to the snowden
 disclosures, seeking channel encryption for dns transactions. i have
 some thoughts on the matter which are not in response to any specific
 proposal, but rather, to the problem statement and the context of any
 solution.

 first, dns data itself is public -- the data is there for anybody to
 query for it, if you know what to query for. only the question,
 questioner, and time can be kept secret. answers are only worth keeping
 secret because they identify the question, questioner, and time.

 second, dns transactions are not secret to protocol agents. whether stub
 resolver, full resolver, forwarder, proxy, or authority server -- the
 full identity of the question must be knowable to the agent in order to
 properly process that question. if the agent does logging, then the
 question, questioner, and time will be stored and potentially shared or
 analyzed.

 by implication, then, the remainder of possible problem statement
 material is hide question from on-wire surveillance, there being no
 way to hide the questioner or the time. to further narrow this, the
 prospective on-wire surveillance has to be from third parties who are
 not also operators of on-path dns protocol agents, because any second
 party could be using on-wire surveillance as part of their logging
 solution, and by (2) above there is no way to hide from them. so we're
 left with hide question from on-wire surveillance by third parties.

 this is extremely narrow but i can envision activists and dissidents who
 rightly fear for their safety based on this narrowly defined threat, so
 i'm ready to agree that there should be some method in DNS of providing
 this secrecy. and as we know from the history of secrecy, if you only
 encrypt the things you care about, then they stand out. therefore,
 secrecy of this kind must become ubiquitous.

 datagram level channel secrecy (for example, DTLS or IPSEC) offers a
 solution which matches the existing datagram level UDP transport used
 primarily by DNS. however, the all-pervasive middleboxes (small plastic
 CPE devices installed by the hundreds of millions by DSL and Cable and
 other providers) have been shown to be more powerful than IPv6, DNSSEC,
 and EDNS -- we could expect them to prevent any new datagram level
 channel secrecy protocol we might otherwise wish to employ.

 TCP/53 is less prone to middlebox data inspection, and may seem to be an
 attractive solution here. i think not for two reasons. first, TCP/53
 is often blocked outright, and second, because TCP/53 as defined in RFC
 1035 has a connection management scheme that prohibits persistent TCP/53
 connections at Internet scale, and we cannot afford the setup/teardown
 costs of a non-persistent TCP-based channel secrecy protocol for DNS. to
 those who suggest redefining TCP/53 and upgrading the entire physical
 plant and all software and operating systems, i challenge you to first
 show how this is less global effort than other proposals now on the
 table, and then show how you would handle the long-tail problem, since
 many agents will never be upgraded, or will only be upgraded on a scale
 of half-decades. DNS works today because TCP/53 is a fallback for
 UDP/53. its definition and deployment makes it unsuitable either
 currently or as-would-be upgraded to become the primary transport.
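
the UDP-with-TCP-fallback behaviour mentioned above hinges on the TC
(truncation) bit in the DNS header defined by RFC 1035; a sketch of the
check a resolver performs before retrying over TCP/53:

```python
import struct

def needs_tcp_fallback(response):
    """Inspect the flags word of a DNS response header (RFC 1035,
    section 4.1.1) and report whether the TC bit requires a retry
    over TCP/53."""
    if len(response) < 12:
        raise ValueError("short DNS header")
    flags, = struct.unpack("!H", response[2:4])
    return bool(flags & 0x0200)  # TC is bit 9 of the flags word

# hand-built 12-byte headers: QR set, with and without TC
truncated = struct.pack("!6H", 0x1234, 0x8200, 1, 0, 0, 0)
complete  = struct.pack("!6H", 0x1234, 0x8000, 1, 0, 0, 0)
assert needs_tcp_fallback(truncated) is True
assert needs_tcp_fallback(complete) is False
```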

 i suggest that any channel secrecy protocol we wish to add to the DNS
 system must be suitable as the primary transport, to which the existing
 UDP/53 and TCP/53 protocols are fallbacks. i further suggest that any
 new transport be operable at internet scale, which demands connection
 persistence. finally i suggest that this be done using a protocol that
 the internet's middle boxes (cheap CPE devices who think they know
 what all valid traffic must look like) will allow to pass without comment.

 one candidate for this would be RESTful JSON carried over HTTPS. because
 of its extensive use in e-commerce and web API applications, HTTPS
 works everywhere. because HTTPS currently depends on X.509 keys, other
 groups in the IETF world are already working to make HTTPS proof against
 on-path surveillance (google for "perfect forward secrecy" to learn
 more), and others are working to defend the internet user population
 against wildcard or targeted SSL certificates issued by governments and
 other anti-secrecy agents with on-path capabilities.

 stephane bortzmeyer has already shown us that JSON representation of DNS
 transactions is possible. i have heard from another protocol engineer
 who is also working in this area (and who credits bortzmeyer for
 informing his work).
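
a JSON representation of a DNS transaction might look like the following
(the field names here are purely illustrative, not bortzmeyer's actual
schema):

```python
import json

# illustrative field names only -- not a published schema
answer = {
    "Question": [{"name": "www.example.com.", "type": "AAAA"}],
    "Answer": [{"name": "www.example.com.", "type": "AAAA",
                "TTL": 3600, "data": "2001:db8::1"}],
    "TC": False,   # truncation flag
    "AD": True,    # DNSSEC authenticated-data flag
}

wire = json.dumps(answer)  # what would travel inside the HTTPS channel
assert json.loads(wire)["Answer"][0]["data"] == "2001:db8::1"
```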

 the special advantage of TCP/443 as a primary transport for persistent
 DNS with channel secrecy is that HTTPS's connection management permits
 massive scale, as in, a single protocol agent with tens 

Re: [DNSOP] draft-wkumari-dnsop-dist-root-01.txt

2014-07-06 Thread Mark Andrews

In message 53ba1e98.9030...@redbarn.org, Paul Vixie writes:
 
 i am not joe, but i strongly +1'd his response on this thread, so i'm
 putting my oar back into the water now.
 
 Mark Andrews wrote:
  In message etpan.53b82396.4353d0cd.3...@walrus.hopcount.ca, Joe Abley wri
 tes:
 
  5.1. Pros
 
   o Junk queries / negative caching - Currently, a significant number
 of queries to the root servers are junk queries. Many of these
  queries are TLDs that do not (and may never) exist in the root.
  Another significant source of junk is queries where the negative
 TLD answer did not get cached because the queries are for second-
 level domains (a negative cache entry for foo.example will not
 cover a subsequent query for bar.example).
 
  I think a better way to accommodate the second point is to implement 
  qname minimisation in recursive server logic.
 
  When you can get rid of all the servers in the world which followed
  RFC 2535 and return NXDOMAIN for empty non-terminals, qname
  minimisation and this sort of logic will be viable
 
 query minimization is very much worth having for its own sake. RFC 2535
 style authorities were never numerous. we can cope.

Even with query minimization you will still get lots of junk queries
to the roots.

   though it won't
  do anywhere near as good a job as having a local copy of the
  root zone.
 
 there are far more errors encountered below .com or .de than by their
 siblings in the root. any argument in favour of wide scale slaving of
 the root zone begs the question, why not every tld and every pseudo-tld
 (such as no-ip.org)? the root isn't special in regards to a goal of
 preventing junk queries. that's why query minimization is the preferred
 solution to this problem.

The root scales at present.

  ...
 
  There's an implication here that a recursive resolver sending a query to 
  a root server is potentially impinging upon the privacy of its anonymous 
  clients. I find that a bit difficult to swallow.
 
  Given the intelligence that root server operators have gleaned in the past,
  there is a degree of credibility here.
 
 if it were possible to put in place agreements between the root name
 server operators and the internet community, one of the things i'd ask
 for is a no data mining rule. that is, i would want to be sure that
 verisign's security business was in no way commercially advantaged by
 its exclusive access to the a/j root query stream (or the .com query
 stream). alas, that's the third rail of internet politics, and i have no
 wish to place a moist body part up against it.
 
 to your actual point, query minimization is the solution to data leaks
 into root, and tld, and pseudo-tld authorities. slaving the root zone
 only solves this problem for the root name server operators, who are
 nowhere near our full problem statement with regard to long-qname
 surveillance.

query minimization only addresses part of the issue, usually that
of having .corp in a search list (or similar) when not talking to
a recursive server with a .corp configured.  It does not address
the issue of stub resolvers appending . to single labels when
searching.
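
the stub-resolver search behaviour Mark describes can be sketched as follows
(illustrative of typical ndots-style search-list logic, not any specific stub
implementation):

```python
def candidate_names(name, search_list, ndots=1):
    """Model of typical stub-resolver search behaviour: names with
    fewer than `ndots` dots are tried with each search-list suffix,
    and a bare single label is ultimately also tried as-is with a
    trailing root dot -- which is the query leak described above."""
    if name.endswith("."):
        return [name]               # already fully qualified
    tried = []
    if name.count(".") < ndots:
        tried += [f"{name}.{suffix}." for suffix in search_list]
    tried.append(name + ".")        # the leak to the root
    return tried

# "corp" leaks to the root as "corp." once the search list fails
assert candidate_names("corp", ["example.com"]) == ["corp.example.com.", "corp."]
```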

  A new root zone is published usually two (but sometimes more) times per 
  day. The semantics specified in the draft for refreshing a local copy of 
  the root zone say keep re-using the copy you have until it expires. If 
  I assume that expire means survives beyond SOA.EXPIRE seconds of when 
  we originally fetched it, then there's the potential for stale data to 
  be published for a week plus however old the originally-retrieved file 
  was (which is difficult to determine, in contrast to the traditional root 
  zone distribution scheme). I think this disadvantage is more serious than 
  is presented.
 
 
  Slaves perform refresh queries every 30 minutes (refresh = 1800).
  Oopses actually clear up faster with slaves than without, as many of
  the responses are now direct to stub rather than cached responses,
  which have much higher TTLs.

  If one was really worried, one could keep a log of the last 24 hours
  of zone transfers and issue a NOTIFY to all of the sources that
  transferred the zone.  Normal refresh logic would then kick in for
  a large percentage of slaves.  This is permitted by the RFCs.
  Machines are actually good at doing this sort of thing.
 
  This is actually a pro not a con.
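
the freshness arithmetic under discussion can be made concrete (a sketch
using the refresh value quoted above, 1800 s, and the root zone's one-week
expire of 604800 s):

```python
# root SOA timers as discussed above: refresh 1800 s, expire 604800 s
REFRESH = 1800
EXPIRE = 604800

def worst_case_staleness(initial_age, refresh_works=True):
    """How stale can a slaved copy of the root get?  With working
    refresh queries every 30 minutes, staleness is bounded by the
    refresh interval; if refresh fails, the copy may be served until
    it expires, plus however old the originally-fetched file was."""
    if refresh_works:
        return initial_age + REFRESH
    return initial_age + EXPIRE

assert worst_case_staleness(0) == 1800                      # half an hour
assert worst_case_staleness(3600, refresh_works=False) == 608400
```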
 
 right now, root name servers are part of an explicit, hand-maintained
 NOTIFY tree. thus, all internet actions depending on root zone content
 have up-to-the-minute data if not up-to-the-second data in many cases.
 we should treat this as an invariant, which means any IETF
 recommendation for root zone slave service should include an explicit
 NOTIFY tree, though i doubt it can either be hand maintained as the
 current one is or remember everybody who has fetched it and NOTIFY all
 of them, as you suggest. (since many RDNS servers are behind firewalls
 or NAT or both, it's fair to say that most 

Re: [DNSOP] draft-wkumari-dnsop-dist-root-01.txt

2014-07-06 Thread Terry Manderson
Hi Paul,

No oars - just a bit of a broken paddle.

On 7/07/2014 2:14 pm, Paul Vixie p...@redbarn.org wrote:



right now, root name servers are part of an explicit, hand-maintained
NOTIFY tree. thus, all internet actions depending on root zone content
have up-to-the-minute data if not up-to-the-second data in many cases. we
should treat this as an invariant, which means
 any IETF recommendation for root zone slave service should include an
explicit NOTIFY tree, though i doubt it can either be hand maintained
as the current one is or remember everybody who has fetched it and
NOTIFY all of them, as you suggest. (since many
 RDNS servers are behind firewalls or NAT or both, it's fair to say that
most could never hear a NOTIFY.)

Terry thinking aloud: it might be palatable to have an automated
registration process where a 'root zone enabled resolver' registers with a
'zone distribution service' and issues a keep-alive, such that the ZDS
might be able to issue a notify (of sorts) to the resolver.

But getting ahead of myself, the underlying issue of zone freshness (as
I've mentioned before) is my break-point. And the environment we have now
simply doesn't help that, inclusive of NAT effects.


thus my preference for the root server anycast proposal first described
in 2005 at


https://ss.vix.su/~vixie/alternate-rootism.pdf

(btw this needs to be http, https asks for auth!)


and then described again this year for the ICANN ITI panel report (see
section 9.4) at


https://www.icann.org/en/system/files/files/report-21feb14-en.pdf

and then described again this week for the IETF DNSOP wg at


https://tools.ietf.org/id/draft-lee-dnsop-scalingroot-00.txt


I'm not going to make statements on the technical feasibility of the above,
but I am more than concerned about (sec 3.): "The proposed architecture is
strongly based on the widely deployed DNSSEC." Not that it is based on
DNSSEC, but the premise of 'widely', and that this serves as a unilateral go
signal.

I would suggest that timing might well be the impeding factor here. I
still see some 6-10K queries per second (diurnal pattern) on L-root without
the DO bit set. That isn't insignificant, as L is only 1 of the 13. So
while 'widely deployed' could be argued (23K-31K qps with the DO bit), I
think 'near pervasive' is the buy-in point. I might well be with you if the
omission of the DO bit was at similar or lower levels to the v6 query rate
- 2K qps, non-diurnal. ;)

Or maybe the new service addresses in scalingroot-00 MUST only answer
DNSSEC queries... but that is a stretch IMHO.


Cheers
Terry

