Re: [DNSOP] Stupid thought: why not an additional DNSKEY record flag: NSEC* only...

2017-01-04 Thread Nicholas Weaver

> On Jan 4, 2017, at 10:24 AM, Mukund Sivaraman <m...@isc.org> wrote:
> 
> Hi Nicholas
> 
> On Wed, Jan 04, 2017 at 09:33:04AM -0800, Nicholas Weaver wrote:
>> This way, you can deploy this solution today using white lies, and as
>> resolvers are updated, this reduces the potential negative consequence
>> of a key compromise to “attacker can only fake an NXDOMAIN”, allowing
>> everything else to still use offline signatures.
>> 
>> Combine with caching of the white lies to resist DOS attacks and you
>> have a workable solution that prevents zone enumeration that is
>> deployable today and has improved security (key can only fake
>> NXDOMAIN) tomorrow.
> 
> Assume an attacker is able to spoof answers, which is where DNSSEC
> validation helps. If a ZSK is leaked, it becomes a problem only when an
> attacker is able to spoof answers (i.e., perform the attack).
> 
> What you're saying is that with a special NSEC3-only DNSKEY compromise,
> "attacker can only fake an NXDOMAIN". If an attacker can fake NXDOMAINs
> and get the resolver to accept them, that's as bad. The attacker can
> deny all answers in the zone by presenting valid negative answers. This
> is why we have proof of non-existence that needs to be securely
> validated. A special NSEC3-only-DNSKEY's compromise isn't a better
> situation than a ZSK compromise.

An attacker in that position can just put in garbage, and you get SERVFAIL 
instead of NXDOMAIN, regardless of whether the attacker has compromised the key 
or not.

This is partly why provable denial is somewhat silly: I can achieve the same 
denial-of-service effect through other mechanisms that the cryptography 
doesn't stop.

So having a key that can only be used for provable denial compromised by an 
attacker is, yeah, not great, but not some horrid catastrophe. 

--
Nicholas Weaver              it is a tale, told by an idiot,
nwea...@icsi.berkeley.edu    full of sound and fury,
510-666-2903                 .signifying nothing
PGP: http://www1.icsi.berkeley.edu/~nweaver/data/nweaver_pub.asc



___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


[DNSOP] Stupid thought: why not an additional DNSKEY record flag: NSEC* only...

2017-01-04 Thread Nicholas Weaver
Any system which prevents zone enumeration requires online signing:
https://www.cs.bu.edu/~goldbe/papers/nsec5faq.html

But NSEC5 is almost certainly not going to be adopted, simply because of the 
partial deployment problem.

NSEC3 white lies work today, but people worry that with them, a server 
compromise also compromises the ZSK.



So why not simply add a new DNSKEY record flag: NSEC3-only.  This flag means 
that the key in question can only be used to sign an NSEC* record when 
presenting NXDOMAIN.

This way, you can deploy this solution today using white lies, and as resolvers 
are updated, this reduces the potential negative consequence of a key 
compromise to “attacker can only fake an NXDOMAIN”, allowing everything else to 
still use offline signatures.

Combine this with caching of the white lies to resist DOS attacks and you have 
a workable solution that prevents zone enumeration, is deployable today, and 
has improved security (the key can only fake NXDOMAIN) tomorrow.
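Sketched as resolver-side logic, the split looks like this; the NSEC*-only 
flag bit value below is purely hypothetical (no such bit is assigned in the 
DNSKEY flags registry), and a real deployment would need a registered code 
point:

```python
# Hypothetical "NSEC*-only" DNSKEY flag bit (unassigned; illustration only).
NSEC_ONLY = 0x0040
ZONE_KEY = 0x0100   # the real Zone Key flag from RFC 4034

def rrsig_acceptable(covered_type: str, dnskey_flags: int) -> bool:
    """An upgraded resolver accepts a signature made with an NSEC*-only
    key only when it covers a negative-proof record type; an old resolver
    skips this test and treats the key like any other DNSKEY."""
    if dnskey_flags & NSEC_ONLY:
        return covered_type in ("NSEC", "NSEC3")
    return True

# The online key can prove nonexistence but cannot fake positive data;
# everything else is still signed by the offline ZSK.
assert rrsig_acceptable("NSEC3", ZONE_KEY | NSEC_ONLY)
assert not rrsig_acceptable("A", ZONE_KEY | NSEC_ONLY)
assert rrsig_acceptable("A", ZONE_KEY)
```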



Re: [DNSOP] New usage for TXT RR type on radar: Kerberos service discovery

2016-05-31 Thread Nicholas Weaver

> On May 31, 2016, at 12:13 PM, John R Levine <jo...@taugh.com> wrote:
> 
>> It is a big failure and problem for the Internet that there is no support 
>> for unknown resource record types.
> 
> No kidding. The problem isn't with DNS server software like BIND and NSD, 
> which are updated regularly.  The problem is the Web Crudware(tm) that most 
> people use to manage their zones.
> 
> See https://datatracker.ietf.org/doc/draft-levine-dnsextlang/
> 
> I think I have funding to revise and implement this, by the way.

Overall, it's the crudware for configuration, not the path.  Unknown record 
types seem to be well supported from a transport viewpoint.



Re: [DNSOP] are there recent studies of client side/ISP firewalls interfering with EDNS?

2015-11-12 Thread Nicholas Weaver

> On Nov 12, 2015, at 7:59 AM, Wiley, Glen <gwi...@verisign.com> wrote:
> 
> I have seen the ISC EDNS compliance report (beautiful thing really), but it 
> looks as though the focus is really on the name servers and name server 
> operators.  Has a recent study been done to examine whether client side/ISP 
> firewalls are interfering with EDNS?

We've done some of this in Netalyzr.  Captive portals in particular are a 
problem, with about 1% of systems measured in Netalyzr unable to use EDNS0 to 
get DNSSEC information either from the recursive resolver OR directly from the 
roots.




signature.asc
Description: Message signed with OpenPGP using GPGMail


Re: [DNSOP] are there recent studies of client side/ISP firewalls interfering with EDNS?

2015-11-12 Thread Nicholas Weaver

> On Nov 12, 2015, at 8:43 AM, John Kristoff <j...@cymru.com> wrote:
> 
> On Thu, 12 Nov 2015 08:00:50 -0800
> Nicholas Weaver <nwea...@icsi.berkeley.edu> wrote:
> 
> After a DNS over TCP discussion a student of mine indicated that they
> recently fixed a problem in their network where DNS messages over 512
> bytes were not being relayed.  It appears the root cause has to do with
> some defaults being set on common gear that simply drops messages over 512
> bytes.  For example:

This is an issue but it's relatively rare.  Often the bigger problem is 
fragmentation support.



Re: [DNSOP] Comments regarding the NSEC5

2015-03-24 Thread Nicholas Weaver

> On Mar 24, 2015, at 11:11 AM, Warren Kumari <war...@kumari.net> wrote:
>> There is a paper "Stretching NSEC3 to the Limit: Efficient Zone
>> Enumeration Attacks on NSEC3 Variants" by Sharon Goldberg et al, which
>> covers some of the trivial solutions and explains why they won't work:
>>
>> http://www.cs.bu.edu/~goldbe/papers/nsec3attacks.pdf
>
> Yes, this was presented at (IIRC) DNS-OARC in Los Angeles. While the
> paper is correct, my view of the response was "shrug, and this is
> not a problem worth spending resources to solve". While some zone
> operators want to minimize zone enumeration, it's not really viewed as
> a huge issue. This is like buying a triple hardened bank vault door to
> protect a slice of cake.

And if you REALLY want this, TODAY, get an HSM (optional), program it to ONLY 
sign NSEC3 records, and just dynamically sign (and cache) NSEC3 records for 
your NXDOMAINs.

You use the HSM to protect the key if you are paranoid, and you get no 
enumerable records.  By caching the responses, you prevent a DOS from keeping 
you from serving up common NXDOMAIN records, and the DOS only affects the 
NXDOMAIN side anyway: you can probably get the same result in most cases by 
serving up an NXDOMAIN without an NSEC3 RRset, as the resolver will conclude 
"this doesn't validate" and return SERVFAIL anyway.
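The dynamic-signing step can be sketched with the RFC 5155 iterated SHA-1 
hash; the actual signing, caching, and wire-format handling are assumed to 
happen elsewhere, and the empty-salt, zero-iteration parameters are only 
illustrative:

```python
import hashlib

def nsec3_hash(owner_wire: bytes, salt: bytes, iterations: int) -> bytes:
    # RFC 5155: IH(0) = H(name || salt); IH(k) = H(IH(k-1) || salt).
    # owner_wire is the canonical (lowercased) wire-format owner name.
    digest = hashlib.sha1(owner_wire + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    return digest

def white_lie(qname_wire: bytes, salt: bytes = b"", iterations: int = 0):
    """Return (owner_hash, next_hash) = (H(qname)-1, H(qname)+1): a
    minimally covering NSEC3 interval for exactly the nonexistent qname,
    to be signed online and cached for the negative TTL."""
    h = int.from_bytes(nsec3_hash(qname_wire, salt, iterations), "big")
    lo = (h - 1) % (1 << 160)   # NSEC3 owner-name hash
    hi = (h + 1) % (1 << 160)   # "next hashed owner name" field
    return lo.to_bytes(20, "big"), hi.to_bytes(20, "big")
```

Because the interval is only two wide, it denies exactly the queried name and 
leaks nothing else that could aid enumeration.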



Re: [DNSOP] [dns-operations] dnsop-any-notimp violates the DNS standards

2015-03-14 Thread Nicholas Weaver

> On Mar 13, 2015, at 7:59 PM, Paul Vixie <p...@redbarn.org> wrote:
>
>> Nicholas Weaver  Saturday, March 14, 2015 5:07 AM
>>
>> ...
>>
>> Overall, unless you are validating on the end host rather than the
>> recursive resolver, DNSSEC does a lot of harm from misconfiguration-DOS,
>> but almost no good.
>
> several of us jumped for joy in 2008 when kaminsky showed rdns poisoning to
> be a trivial exercise, because it finally provided justification for what was
> at that time 12 years of apparently-wasted effort on DNSSEC.

But it didn't justify DNSSEC, even at the time.

Between adding a bit more entropy to the request through 0x20 encoding and 
port randomization, and more importantly cleaning up the glue policy for 
recursive resolvers (which Unbound did), you close the door on off-path 
attackers: both making races harder AND eliminating the race-until-win 
property.
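For reference, the 0x20 trick amounts to randomizing the case of the query 
name and checking that the authority echoes the exact casing back; a toy 
sketch, not a full resolver change:

```python
import random

def encode_0x20(qname: str, rng: random.Random) -> str:
    # DNS name matching is case-insensitive, so an authority echoes the
    # question back with casing intact; randomizing the case adds roughly
    # one bit of anti-spoofing entropy per letter, on top of the 16-bit
    # transaction ID and the randomized source port.
    return "".join(c.upper() if rng.getrandbits(1) else c.lower()
                   for c in qname)

def response_matches(sent: str, echoed: str) -> bool:
    # An off-path spoofer who can't see the query must guess the casing.
    return sent == echoed
```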

In fact, several have viewed the glue-policy cleanup, which gets to the root 
cause of the Kaminsky problem, as detrimental specifically because of the 
desire to force DNSSEC adoption.

> so we'll keep pushing the crap system we have, uphill all the way, noone
> loving it, and almost everyone in fact hating it. we've now spent more
> calendar- and person-years on DNSSEC than was spent on the entire IPv4
> protocol suite (including DNS itself) as of 1996 when the DNSSEC effort
> began. ugly, ugly, ugly.

At what point is it sunk cost fallacy?

"DNS is insecure, live with it" may be the best answer.  Why keep throwing 
good effort after bad?

It certainly is a hell of a lot better than the DOS attack that is 
recursive-resolver validation, which provides almost no meaningful security 
gain.

If I were Comcast, after the HBO DNSSEC mess-up, on top of previous mess-ups 
where Comcast inevitably gets the blame, I'd be really, really tempted to 
turn OFF DNSSEC validation.  It has failed.



Re: [DNSOP] [dns-operations] dnsop-any-notimp violates the DNS standards

2015-03-13 Thread Nicholas Weaver

> On Mar 13, 2015, at 10:21 AM, Morizot Timothy S <timothy.s.mori...@irs.gov>
> wrote:
> It’s been steadily increasing for years now and gives me an idea what
> percentage of the US public is protected against certain types of attacks
> involving our zones. DNSSEC validation is not a panacea, but in a layered
> approach toward combating fraud and certain sorts of attacks, it does provide
> a particular sort of protection not available through any other means.
> Whether or not ISPs sign their authoritative zones matters much less to us
> than whether or not they implement DNSSEC validation on their recursive
> nameservers. And that’s not a failure at all. By the measure above (which
> isn’t perfect, but the best one available) roughly a fifth to a quarter of
> the US public, the primary consumers of our zones, exclusively use validating
> nameservers. That’s significant. Would I like to see it higher? Sure. But
> I’ll take it.
 

The problem is that validation by the recursive resolver is nearly useless 
for security, but one heck of an effective DOS attack (NASA, HBO, etc.)...

Let's look at what real-world attacks on DNS are.

a:  Corrupt the registrar.  DNSSEC do any good?  Nope.

b:  Corrupt the traffic in-flight (on-path or in-path).  DNSSEC do any good?  
Only if the attacker is not on the path for the final traffic, but just the DNS 
request.

c:  The recursive resolver lies.  Why would you trust it to validate?

d:  The NAT or a device between the recursive resolver and the user lies.  
Again, validation from the recursive resolver works how?


Overall, unless you are validating on the end host rather than the recursive 
resolver, DNSSEC does a lot of harm from misconfiguration-DOS, but almost no 
good.



Re: [DNSOP] Comments regarding the NSEC5

2015-03-12 Thread Nicholas Weaver

> On Mar 11, 2015, at 9:39 AM, Jan Včelák <jan.vce...@nic.cz> wrote:
>
> NSEC5 proof is the FDH of domain name.
> NSEC5 hash is SHA-256 of NSEC5 proof.
>
> I will clarify that.

Why not just do something simpler?  The only way NSEC5 differs that actually 
counts is not the NSEC record itself but the DNSKEY handling: having a 
separate key used for signing the NSEC* records.

So why define NSEC5 at all?


Instead, just specify a separate flag for the DNSKEY record, NSEC-only, sign 
the NSEC3 dynamically, bada bing, bada boom, done!


Old resolvers just ignore the flag and treat it like any other DNSKEY record; 
since the valid names are signed with the other key, while the NSEC* records 
are signed with this key, it works just fine.

Upgraded resolvers follow the convention and will only accept RRSIGs made 
with that DNSKEY for NSEC/NSEC3 records.

And then on the authority side, you just dynamically generate an NSEC3 record 
saying that H(name)-1 to H(name)+1 contains no valid record, and sign it with 
the NSEC-only key.



This way, you gain protection against enumeration plus limited damage on key 
compromise when validated by upgraded resolvers; you still get protection 
against enumeration when the resolver isn't upgraded; and you don't need to 
upgrade the resolver in order for this to be deployed.



Re: [DNSOP] DNSKEY RRset size and the root

2015-01-23 Thread Nicholas Weaver

> On Jan 23, 2015, at 10:01 AM, Paul Hoffman <paul.hoff...@vpnc.org> wrote:
>
> What is the problem with #2? IP fragmentation happens, and The Internet is
> expected to work with it. That is, of what possible value is "inform their
> customers"?

The Internet has unfortunately decreed that Fragmentation Does Not Work with 
IPv4, and Really Does Not Work with IPv6.

This will cause timeouts until the resolver realizes it should use a smaller 
EDNS0 MTU; at that point the resolver will fail over to TCP for that query, 
which some in the DNS community view as anathema...




[DNSOP] Enough latency obsession Re: Review of draft-ietf-dnsop-cookies-00

2014-12-16 Thread Nicholas Weaver

It's time to stop obsessing over latency in DNS!

DNS doesn't exist in a vacuum: a lookup is followed by, at minimum, a TCP 
handshake, and who knows what else beyond it.  Amdahl's law matters.

How many headaches would go away if all DNS were over TCP?  And how much 
difference would it really make in latency?


> On Dec 16, 2014, at 12:20 PM, Paul Vixie <p...@redbarn.org> wrote:
>
> 3 round trips, 7 packets, for an isolated tcp/53 query.
>
> s   ->
>     <- s+a
> a   ->
> q   ->
>     <- r+a
> f+a ->
>     <- f+a
>
> obviously, the dickenson tcp change proposal bears on this, but today, it's 3
> rtt, 7 pkt, and that's assuming that the response fits in one window. axfr is
> obviously much longer.

And this is wrong: the f+a RTT doesn't matter, that's AFTER the result has 
been received.  TCP is 2 RTT to actually get the result.





Re: [DNSOP] [dns-operations] hong kong workshop, day 2, live link

2014-12-09 Thread Nicholas Weaver

> On Dec 9, 2014, at 9:12 AM, Randy Bush <ra...@psg.com> wrote:
>
>> Complementing what Edmon Chung mentioned that root-servers was already
>> reserved in the last new gTLD round, here follows the complete list of
>> reserved names:
>>
>> AFRINIC
>> ...
>
> this is an amusing list.  i can understand EXAMPLE, LOCALHOST, and TEST.
> maybe even WHOIS and WWW.  but the rest sure look as if lawyers wanted
> and got what is in effect a super trademark.

It's also missing one that's actually really important to be reserved: .onion.




Re: [DNSOP] [homenet] ip6.arpa reverse delegation

2014-11-24 Thread Nicholas Weaver

> On Nov 24, 2014, at 9:04 AM, Ted Lemon <mel...@fugue.com> wrote:
>
> On Nov 24, 2014, at 10:56 AM, Juliusz Chroboczek
> <j...@pps.univ-paris-diderot.fr> wrote:
>> I'm a little ashamed to admit that I don't understand the purpose of
>> reverse DNS.
>
> Reverse DNS is useful for logging, so that you can associate a name with a
> host.  You don't necessarily want to (and may not be able to) send a request
> to the host, but the reverse tree is pretty easy to populate if everybody
> does the right thing.  With DNSSEC, the reverse tree also becomes a place
> where you can hang keys that associate with the IP address.  And, again
> given that the host itself might not be entirely reachable, being able to
> look up its name in the reverse tree can tell you something about it.

A nice mechanism I've seen for IPv6 that is remarkably useful along these 
lines (first seen by me while looking at Comcast's DNS infrastructure):

In the lower 64 bits of the IPv6 address, encode the IPv4 address in 
human-readable form.  So, for example, if a machine's v4 address is 10.1.2.4, 
the IPv6 address is 2101:{...}:10:1:2:4
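A sketch of that encoding; the `2001:db8::` documentation prefix below stands 
in for the elided operator prefix.  The trick is that each decimal octet 
occupies its own 16-bit group, so the printed v6 form shows the v4 address 
verbatim (the groups are then hex values that merely look decimal):

```python
import ipaddress

def readable_v4_suffix(prefix: str, v4: str) -> str:
    # Put each decimal octet of the IPv4 address into its own 16-bit
    # group of the interface identifier; the result is a legal IPv6
    # address whose printed form shows the v4 address verbatim.
    return prefix + ":".join(v4.split("."))

addr = readable_v4_suffix("2001:db8::", "10.1.2.4")
assert addr == "2001:db8::10:1:2:4"
ipaddress.IPv6Address(addr)  # parses as a valid IPv6 address
```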



Re: [DNSOP] Secure Unowned Hierarchical Anycast Root Name Service - And an Apologia (circleid)

2014-11-10 Thread Nicholas Weaver

> On Nov 10, 2014, at 12:13 AM, John Levine <jo...@taugh.com> wrote:
>
>> And isn't there some danger that this parallel root becomes an
>> attractive target for those who want things to be different than
>> what's in the official root?  That is, in effect, isn't this a plain
>> old alternative root?
>
> I would assume the plan is that the clients use DNSSEC to validate
> the responses.
>
> This doesn't seem notably less secure than the current scheme, given
> how many networks helpfully reroute DNS traffic already.  But my
> question about why not just hijack the address of an existing root
> stands.

This happens in China (on CERNET, I believe): there is a set of root mirrors 
that hijack most (but not all) of the root IPs.  As far as we can tell, the 
servers are legitimate, returning the proper responses, except that the 
mirror servers don't support DNSSEC.



Re: [DNSOP] Possible slower response with minimization

2014-11-06 Thread Nicholas Weaver


Paul Vixie wrote:
> the internet has
> hundreds of years to run yet, and these broken implementations are
> (a) shrinking not growing, and (b) subject to rapid replacement when
> they start to encounter problems with correct enhancements to their
> habitat.


Hh. Hahahah.  HAHAHA. MUHAAHHAHAHAHAHAHAHA.

Sorry, where was I.

Oh yeah.  There is far too much bad undergrowth on the Internet: old code and 
old systems that still work and are never upgraded.  For example, we see 
plenty of instances of dnsmasq which are so old that the version date is 
before the developer started keeping a changelog.

Short of setting deliberate viral brush fires designed to brick old devices, 
we're stuck with them and need to plan around them.



Re: [DNSOP] DNS, fragmentation, and IPv6 extension headers

2014-07-28 Thread Nicholas Weaver

On Jul 28, 2014, at 8:42 AM, Stephane Bortzmeyer <bortzme...@nic.fr> wrote:
>> Quite a few folks usually argue "oh, that's simple: we'll use TCP",
>
> There are many good reasons to use TCP but, in that case, I do not see
> why we need it. First, IPv6 users typically don't use extension
> headers and, second, if the problem is in IP, why would changing from
> UDP to TCP work?

Because the big issue is fragments: the IPv4 net decreed “Fragments don’t 
really work”.  The IPv6 net has decreed “No, really, FRAGMENTS DO NOT WORK”.

The solution is to detect this, fall back on the EDNS0 MTU by retrying at 
1400B first (rather than dropping directly down to 512B), and properly handle 
truncation.  But do that, and things do work.
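That detect-and-fall-back ladder, sketched with stand-in transport callables 
(these are illustrative placeholders, not any real resolver API):

```python
class Timeout(Exception):
    pass

def resolve(send_udp, send_tcp, bufsizes=(4096, 1400, 512)):
    # Advertise a large EDNS0 buffer first; a timeout likely means
    # fragments were dropped, so retry advertising a smaller buffer.
    for size in bufsizes:
        try:
            reply = send_udp(edns_payload=size)
        except Timeout:
            continue
        if reply.get("tc"):      # truncated: answer didn't fit in UDP
            return send_tcp()
        return reply
    return send_tcp()            # UDP never produced an answer

# Demo: 4096 times out (fragment loss), 1400 arrives but is truncated,
# so the query completes over TCP.
calls = []
def fake_udp(edns_payload):
    calls.append(edns_payload)
    if edns_payload == 4096:
        raise Timeout()
    return {"tc": True}

reply = resolve(fake_udp, lambda: {"via": "tcp"})
```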



Re: [DNSOP] various approaches to dns channel secrecy

2014-07-07 Thread Nicholas Weaver

On Jul 7, 2014, at 8:24 AM, John Kristoff <j...@cymru.com> wrote:
>> by implication, then, the remainder of possible problem statement
>> material is "hide question from on-wire surveillance", there being no
>> way to hide the questioner or the time. to further narrow this, the
>> prospective on-wire surveillance has to be from third parties who are
>> not also operators of on-path dns protocol agents, because any second
>> party could be using on-wire surveillance as part of their logging
>> solution, and by (2) above there is no way to hide from them. so we're
>> left with "hide question from on-wire surveillance by third parties".
>
> This sounds like DNSCurve's approach.

One important observation: in the classic model, ONLY the path between the 
client and the recursive resolver substantially benefits from channel 
security.

Even if you wave a magic wand and all resolver-authority communication 
becomes protected with zero-cost, 100% perfect data encryption, basic traffic 
analysis will largely be able to determine which domains are being looked up.  
Individual names within a domain are protected, but that is relatively minor.

The other problem is that DNS is used to guide endpoint communication.  
Between the resolver-authority information leak and the actual IP selected by 
the endpoint itself for communication, a nation-state observer adversary can 
pretty much recover the hostname in question in many cases, and at least the 
domain in almost all cases.



Re: [DNSOP] various approaches to dns channel secrecy

2014-07-07 Thread Nicholas Weaver

On Jul 7, 2014, at 9:52 AM, Paul Vixie <p...@redbarn.org> wrote:
> i wish it noted that i am responding to the general post-snowden call for
> channel secrecy, and that i don't myself see much need for it in the case of
> DNS, but that the proposals i've seen come out of the security community for
> how to add channel secrecy to DNS are alarming in their lack of understanding
> of what DNS is, how large DNS is, and how DNS works. therefore, i'm
> attempting to isolate the cases which might be relevant to somebody, i am
> drumming up a definition of "dissident", and crafting a proposal that would
> protect that mythical person's interests.
>
> the fact that the QNAME can be recovered in many cases by a well resourced
> nation-state actor is meaningless here, since that surveillance would have to
> be targeted, and would be both inaccurate and expensive; whereas the
> surveillance i'm solving for is the ubiquitous kind, which is presently very
> accurate and very cheap.

No, it's ubiquitous and cheap, and reasonably accurate.

This type of traffic-analysis correlation is bread and butter for a 
nation-state adversary running a pretty conventional real-time or even 
near-real-time IDS.  Doing it on the backbone is not hard, and overall it's 
no more complex than analyses we know they run, like identifying users based 
on cookies and HTTP replies.

It is AMAZING the IDS analyses you can run on a 10 Gbps link when you are using 
a 20-system cluster.



Re: [DNSOP] NOTE RR type for confidential zone comments

2014-05-27 Thread Nicholas Weaver

On May 27, 2014, at 12:29 PM, Evan Hunt <e...@isc.org> wrote:

> One of our operations staff made what I thought was a clever suggestion
> the other day:  That it would be nice, from an operational standpoint,
> to have a way to encode comments into a zone so that they wouldn't get
> obliterated when a dynamic zone was dumped to disk, but couldn't be read
> by just anybody with access to dig.
>
> This draft proposes such a beast.  Feedback would be lovely.
>
> http://www.ietf.org/internet-drafts/draft-hunt-note-rr-00.txt
 

I think the record type makes sense, as does the encoding.

Using an EDNS0 flag bit, however, does not make sense to me.  Flag bits are 
rare and precious, while 16-bit option codes are not.

Thus, instead I think the "NOTE OK" signal should be an EDNS0 option, with a 
new option code, an option length of 0, and no option data.

Especially since the bits themselves are not precious (DNS requests are 
nowhere near 512B, let alone the ~1500B where fragmentation is an issue), and 
this is primarily for zone-transfer queries anyway, the overhead is going to 
be near zero.
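For concreteness, such an option costs four bytes of OPT RDATA: a 16-bit 
option-code and a 16-bit zero option-length, per the RFC 6891 wire format.  
The code point below is picked from the local/experimental range purely for 
illustration; a real deployment would use an IANA-assigned value:

```python
import struct

NOTE_OK = 65001  # illustrative code point from the local/experimental range

def note_ok_option() -> bytes:
    # RFC 6891 OPT RDATA entry: 16-bit option-code, 16-bit option-length.
    # Length is zero because the option's mere presence is the signal.
    return struct.pack("!HH", NOTE_OK, 0)

assert note_ok_option() == b"\xfd\xe9\x00\x00"
```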




Re: [DNSOP] NOTE RR type for confidential zone comments

2014-05-27 Thread Nicholas Weaver

On May 27, 2014, at 1:32 PM, Miek Gieben <m...@miek.nl> wrote:

> [ Quoting e...@isc.org in [DNSOP] NOTE RR type for confidenti... ]
>> One of our operations staff made what I thought was a clever suggestion
>> the other day:  That it would be nice, from an operational standpoint,
>> to have a way to encode comments into a zone so that they wouldn't get
>> obliterated when a dynamic zone was dumped to disk, but couldn't be read
>> by just anybody with access to dig.
>>
>> This draft proposes such a beast.  Feedback would be lovely.
>>
>> http://www.ietf.org/internet-drafts/draft-hunt-note-rr-00.txt
>
> Interesting idea!
>
> What happens if a server gets these records and doesn't know about NOTE
> and treats them as unknown records?

That's why the EDNS0 signaling is particularly clever in this proposal: a 
server would have to know about the NOTE record to receive it in a zone 
transfer, so as long as the source knows what it's doing, the recipient will 
only receive NOTE records if it knows what they are.

The only remaining case is a server reading a zone file rather than taking a 
transfer, in which case it won't know the NOTE RRTYPE and will fail to load 
the record.



Re: [DNSOP] call to work on edns-client-subnet

2014-05-16 Thread Nicholas Weaver

On May 16, 2014, at 7:29 AM, Colm MacCárthaigh <c...@allcosts.net> wrote:
>> And even 4096b RSA signatures only take a handful of milliseconds to
>> construct on the fly, you can cache signature validity for minutes even in
>> the very dynamic case, and this is one of those operations that parallelize
>> obscenely well.
>
> You won't survive a trivial DOS from a wristwatch computer with that approach
> :) Having static answers around greatly increases capacity, by many orders of
> magnitude.

Actually, you can.  You prioritize non-NSEC3 records, since that's a finite, 
identifiable priority set, and cache the responses.  Thus if you have 10k 
valid names, each with 100 different possible responses, and a max 1-minute 
TTL on signatures, that's only ~16k signatures/s in the absolute worst case, 
which you can do on a single 16-core computer.



Re: [DNSOP] call to work on edns-client-subnet

2014-05-16 Thread Nicholas Weaver

On May 16, 2014, at 7:44 AM, Colm MacCárthaigh c...@allcosts.net wrote:

 Actually, you can.  You prioritize non-NSEC3 records, since thats a finite, 
 identifiable, priority set, and cache the responses.  Thus if you have 10k 
 valid names, each with 100 different possible responses, and have a max 1 
 minute TTL on signatures, thats only 16k signatures/s in the absolute worst 
 case, which you can do on a single, 16 core computer.
 
 16k/second is nothing, and I can generate that from a wristwatch computer. 
 Caching doesn't help, as the attackers can (and do) bust caches with 
 nonce-names and so on :/  A 16 core machine can do a million QPS relatively 
 easily - so it's a big degradation.

You miss my point.  That server is doing a million QPS, but it's only 
providing ~16k/s distinct answers.

Your wristwatch computer can only cause a dynamic server a problem if it's 
competing with the legitimate query stream's priority category.  The priority 
category, assuming 10k names, 100 options/name, and a 1-minute max TTL, 
requires only a single system to support.

Thus your wristwatch loaders can only act to load the non-priority category, 
which would be NSEC3.  If you actually care about zone enumeration, you MUST 
generate NSEC3 records on the fly, because let's face it, NSEC3 in the static 
case doesn't stop trivial enumeration of the zone.

Basically, it's observing that what you really want is semi-online: the names 
you care about have at least some history/cacheability and some level of 
finite space, but only on the order of a minute.  Once that property is there, 
you can do dynamic signing to your heart's content.
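A minimal sketch of that semi-online idea: a TTL-bounded cache in front of the signer, so each distinct answer is signed at most once per window. `sign_fn` here is a hypothetical stand-in for the real signing routine.

```python
import time

class SignatureCache:
    """Cache dynamically generated signatures for a bounded TTL, so a
    dynamic signer only re-signs each distinct answer once per window."""

    def __init__(self, sign_fn, ttl=60.0):
        self.sign_fn = sign_fn
        self.ttl = ttl
        self._cache = {}          # answer -> (signature, expiry time)

    def get(self, answer):
        now = time.monotonic()
        hit = self._cache.get(answer)
        if hit and hit[1] > now:
            return hit[0]         # still fresh: no crypto needed
        sig = self.sign_fn(answer)
        self._cache[answer] = (sig, now + self.ttl)
        return sig

# Demo with a fake signer that records how often it is invoked.
calls = []
cache = SignatureCache(lambda a: calls.append(a) or f"sig({a})", ttl=60)
cache.get("www.example.com/A")
cache.get("www.example.com/A")   # served from cache; sign_fn ran only once
```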



Re: [DNSOP] call to work on edns-client-subnet

2014-05-07 Thread Nicholas Weaver

On May 7, 2014, at 10:23 AM, P Vixie p...@redbarn.org wrote:

 Joe... To clarify... Client subnet is not what I am complaining about. It's 
 wide area rdns itself that I think is a bad idea. One reason wide area rdns 
 is a bad idea is that it needs client subnet options.
 
 Centralized rdns is not necessary and it makes the internet brittle. Better 
 alternatives exist. The architecture of DNS assumes localized rdns. If we're 
 going to document client subnet then all that advice will have to go into it.

Not necessarily: centralized is often really anycast.

E.g. if you look at Comcast there are multiple anycast responders in their own 
internal network for 75.75.75.75. Likewise, '8.8.8.8' is insanely anycasted.  
This is not brittle, but remarkably robust.

In this case, still, edns client subnet is very useful.  It is, frankly, a mess 
to map client subnet to recursive resolver, but it is an insanely powerful 
optimization when you can.  

edns_client_subnet makes this mapping trivial, and therefore acts to 
significantly improve end user performance.



Re: [DNSOP] DNS over DTLS (DNSoD)

2014-04-23 Thread Nicholas Weaver

On Apr 23, 2014, at 6:47 AM, Dan Wing d...@danwing.org wrote:

 For discussion.
 
   DNS queries and responses are visible to network elements on the path
   between the DNS client and its server.  These queries and responses
   can contain privacy-sensitive information which is valuable to
   protect.  An active attacker can send bogus responses causing
   misdirection of the subsequent connection.
 
   To counter passive listening and active attacks, this document
   proposes the use of Datagram Transport Layer Security (DTLS) for DNS,
   to protect against passive listeners and certain active attacks.  As
   DNS needs to remain fast, this proposal also discusses mechanisms to
   reduce DTLS round trips and reduce DTLS handshake size.  The proposed
   mechanism runs over the default DNS port and can also run over an
   alternate port.
 
 http://tools.ietf.org/html/draft-wing-dnsop-dnsodtls

a:  With the need to do all the handshaking, you gain only a little from doing 
DTLS over UDP rather than TLS over TCP.  So why use UDP with all its headaches? 
Just use TCP and conventional TLS rather than DTLS, especially when you are 
talking about mucking with the handshake.

b:  DO NOT USE PORT 53 for this: there are far, far too many networks (1%+) 
that reinterpret DNS requests or outright block all DNS to non-approved 
servers, and more still that block non-DNS traffic on port 53.



Re: [DNSOP] DNS over DTLS (DNSoD)

2014-04-23 Thread Nicholas Weaver

On Apr 23, 2014, at 1:00 PM, Paul Wouters p...@nohats.ca wrote:
 No, I fully disagree with this. Port 53 TCP has a much better chance at
 working these days than a random other newly assigned port.

Not true.  Port 53 is far more molested than random:  INBOUND firewall rules 
prevent you from running new services without firewall rule modifications, but 
outbound blocking is far less common.  (Our test port for this is TCP 1947 with 
Netalyzr).




Re: [DNSOP] Review of draft-ietf-dnsop-respsize-15

2014-04-06 Thread Nicholas Weaver

On Apr 6, 2014, at 7:06 AM, Stephane Bortzmeyer bortzme...@nic.fr wrote:
 
 Second issue, the pessimistic tone. The draft could be read as a
 warning that horrible things will happen if the size of the answer is
 not kept well below 512 bytes. But, since the version -00 has been
 published, typical response sizes have increased and nothing
 happened. The respsize.pl tool flags .com as always red, for maximum
 size queries. But we observe daily that .com works. 

To add to this: the horribles, when they do occur, occur not at 512B but at 
~1400B, when things start fragmenting, as far too many devices (and it's worse 
on IPv6) have decreed that fragments don't work.

This also means that the recursive resolver, unless it's doing raw packet 
reception, does not know that this is the problem source.

Yet even in this case, it's only a few percent (single digits) of recursive 
resolvers that are affected (more when you include the path to the clients).

My belief is that EDNS0 fallback (like what BIND does) should first try an MTU 
of 1400B and then 1200B (which guarantees no fragmentation on IPv6, even in 
the face of tunnels, and practically guarantees no fragments on IPv4), rather 
than skipping directly down to EDNS with an MTU of 512B.  If multiple 
authority servers require this fallback, the recursive resolver should use 
this MTU for all authorities and just accept the minor latency hit from 
shifting to TCP more often in the very rare case where it matters.
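That fallback ladder can be sketched in a few lines. `query` here is a hypothetical stand-in for a resolver's send-and-wait routine, and the simulated network only passes UDP answers advertised at 1200 bytes or less:

```python
# Suggested EDNS0 fallback ladder: step through fragmentation-safe sizes
# before giving up and going all the way down (or over to TCP).
FALLBACK_LADDER = [4096, 1400, 1200, 512]

def resolve_with_fallback(query, name):
    for bufsize in FALLBACK_LADDER:
        reply = query(name, edns_bufsize=bufsize)
        if reply is not None:            # got an answer at this size
            return reply, bufsize
    return query(name, tcp=True), 0      # last resort: TCP

# Simulated path where only <=1200-byte UDP advertisements get answers through:
def fake_query(name, edns_bufsize=None, tcp=False):
    if tcp or (edns_bufsize is not None and edns_bufsize <= 1200):
        return "reply"
    return None

reply, size = resolve_with_fallback(fake_query, "example.org")
# settles at the 1200-byte rung without ever dropping to 512 or TCP
```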



Re: [DNSOP] Current DNSOP thread and why 1024 bits

2014-04-02 Thread Nicholas Weaver
The profanity is deliberate.  The same discredited performance arguments have 
come up for a decade+.  It gets very frustrating to see the same ignorance, 
again and again.


On Apr 2, 2014, at 6:30 AM, Edward Lewis edlewis.subscri...@cox.net wrote:
 From these two main reasons (and you’ll notice nothing about cryptographic 
 strength in there) a third very important influence must be understood - the 
 tools operators use more or less nudge operators to the 1024 bit size.  
 Perhaps via the default settings or perhaps in the tutorials and 
 documentation that is read.
 
 Why do operators seem to ignore the input of cryptographers?  I can tell you 
 that from personal experience.  Cryptographers, whenever given a straight 
 question on DNSSEC have failed to give straight answers.  As is evident in 
 the thread, theoretical statements are made and the discussion will veer off 
 into recursive (really cache-protection) behaviors, but never wind up with a 
 result that is clearly presented and defended.  In my personal experience, 
 when I recommended 1024 bits, it was after consulting cryptographic experts 
 who would just waffle on what size is needed and then relying on what we did 
 in workshops 15 years ago.

Well, it's because, for the most part, cryptographers do seem to understand 
that DNSSEC is a bit of a joke when it comes to actually securing conventional 
DNS records.

And the NIST crypto recommendations have existed for years: 1024b RSA was 
deprecated in 2010 and eliminated completely in 2013.  There may be doubt 
about NIST now, but two years ago, ignoring the standard recommendations was 
foolish.

 What does it matter from a security perspective?  DNS messages are short 
 lived.  It’s not like we are encrypting a novel to be kept secret for 100 
 years.  With zone signing keys lasting a month, 6 months, or so, and the 
 ability to disallow them fairly quickly, what’s the difference between this 
 so-called 80 or 112 bit strength difference?  Yes, I understand the doomsday 
 scenario that someone might “guess” my private key and forge messages.  But 
 an attack is not as simple as forging messages, it takes the ability to 
 inject them too.  That can be done - but chaining all these things together 
 just makes the attack that much less prevalent.

Do your resolvers have protection against roll-back-the-clock attacks?  If 
not, you do not gain protection from the short-lived (well, really, a few 
months; they don't roll the actual key every 2 weeks) nature of the ZSK for 
root, .com, etc.

 Saving space and time does matter.  Roughly half the operators I studied 
 would include a backup key on-line because “they could” with the shorted 
 length.  And performance does matter - ask the web browser people.

Amdahl's law is something that computer science in general always seems to 
forget.  The performance impact, both in size and cryptographic overhead, of 
shifting to 2048b keys is negligible in almost all cases.

And the step function in DNS cost, the "Internet can't do fragments" problem, 
doesn't really come into play at 2048b.

 It nets to this - cryptographers urge for longer lengths but can’t come up 
 with a specific, clearly rational, recommendation.  

Yes they have.  2048b.

 DNS operators want smaller, web performance wants quicker.  Putting all that 
 together, the smaller key size makes sense.  In operations.

The real dirty secret.  

DNSSEC is actually useless for, well, DNS.  A records and the like do not 
benefit from cryptographic protection against a MitM adversary, as that 
adversary can just as easily attack the final protocol.

Thus the only actual use for DNSSEC is not protecting A records, but protecting 
cryptographic material and other similar operations: DANE is probably the best 
example to date, but there is also substantial utility in, e.g., email keys.  

DNSSEC is unique in that it is a PKI with constrained and enforced path of 
trust along existing business relationships.

Building the root of this foundation on the sand of short keys, keys that we 
know are well within range of nation-state adversaries, at the root and TLDs, 
is a recipe to ensure that DNSSEC is, rightly, treated by the rest of the 
world as a pointless joke.


 PS - Yes, some operators do use longer keys.  Generally, those that do have 
 decent “excuses” (read: unusual use cases) and so they are not used in the 
 peer pressure arguments.

And that does no good unless the upstream path of trust, starting at the 
root, actually uses real-length keys.

The difference between 2^80 and 2^100 effort is huge.  2^80 is in range today 
of nation states, and near the range of academics.


Re: [DNSOP] key lengths for DNSSEC

2014-04-02 Thread Nicholas Weaver

On Apr 2, 2014, at 11:19 AM,  Roy Arends r...@dnss.ec wrote:
 
 Just a thought that occured to me. Crypto-maffia folk are looking for a 
 minimum (i.e. at least so many bits otherwise its insecure). DNS-maffia folk 
 are looking for a maximum (i.e. at most soo many bits otherwise 
 fragmentation/fallback to tcp). It seems that the cryptomaffia’s minimum 
 might actually be larger than the DNS-maffia’s maximum.

The problem from the dns-op maximalist viewpoint is that there are basically 
two magic numbers: 512B and ~1400B.  As someone who's measured this, the 512B 
one is not a problem, but the ~1400B "here be fragments" one is.  Yet at the 
same time, the current 1024b ZSK/2048b KSK configuration on TLDs does blow 
through it: I reported in the previous thread how org's DNSKEY response 
already blew past that limit.


And even in that case, resolvers can handle "fragments don't work", albeit 
with a latency penalty.  So it's not a "DNSSEC fails" point but simply 
degraded performance.


So the real question is whether the common answers, the ones with short TTLs 
that are accessed a lot, have a fragment problem.  With 2048b keys they don't: 
the one that gets you is NSEC3, and that only blows up in your face with 4096b 
keys.  (But boy does it; those three RRSIGs get big when you're using 4096b 
keys.)


And please don't discount the psychology of the issue.  If DNSSEC wants to be 
taken seriously, it needs to show it.  Using short keys for the root and major 
TLDs, under the assumptions that they can't be cracked quickly (IMO, we have 
to assume 1024b can be) and that old keys don't matter [1], is something that 
really does draw criticism.



[1] IMO they do matter, until validators record and use a "root key ratchet": 
never accept a key whose expiration is older than the inception date of the 
RRSIG on the youngest root ZSK seen, or have some other defense against 
roll-back-the-clock attacks.
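A minimal sketch of that ratchet idea, with UNIX-timestamp placeholders for the RRSIG inception and key expiration times:

```python
# "Root key ratchet": a validator remembers the newest RRSIG inception it
# has ever seen on a root ZSK, and refuses any key whose expiration
# predates that high-water mark, so expired keys cannot be replayed by
# an attacker who also rolls back the victim's clock.
class KeyRatchet:
    def __init__(self):
        self.newest_inception_seen = 0

    def observe_rrsig(self, inception):
        # Monotonically advance the high-water mark; never move it back.
        self.newest_inception_seen = max(self.newest_inception_seen, inception)

    def key_acceptable(self, key_expiration):
        return key_expiration > self.newest_inception_seen

r = KeyRatchet()
r.observe_rrsig(1_700_000_000)             # legitimate, recent signature seen
ok = r.key_acceptable(1_710_000_000)       # still-valid key: accepted
stale = r.key_acceptable(1_600_000_000)    # expired before that point: rejected
```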



Re: [DNSOP] Whiskey Tango Foxtrot on key lengths...

2014-04-01 Thread Nicholas Weaver

On Apr 1, 2014, at 5:39 AM, Olafur Gudmundsson o...@ogud.com wrote:
 
 Doing these big jumps is the wrong thing to do, increasing the key size 
 increases three things:
   time to generate signatures  
   bits on the wire
   verification time. 
 
 I care more about verification time than bits on the wire (as I think that is 
 a red herring).
 Signing time increase is a self inflicted wound so that is immaterial. 
 
                sign       verify     sign/s   verify/s
 rsa 1024 bits  0.000256s  0.000016s  3902.8   62233.2
 rsa 2048 bits  0.001722s  0.000053s   580.7   18852.8
 rsa 4096 bits  0.012506s  0.000199s    80.0    5016.8
 
 Thus doubling the key size decreases the verification performance by roughly 
 70%. 
 
 KSK's verification times affect the time to traverse the DNS tree, thus 
 If 1024 is too short 1280 is fine for now
 If 2048 is too short 2400 bit key is much harder to break thus it should be 
 fine. 
 
 just a plea for key use policy sanity not picking on Bill in any way.

NO!  FUCK THAT SHIT.  Seriously.

There is far far far too much worrying about performance of crypto, in cases 
like this where the performance just doesn't matter!

Yes, you can only do 18K verifies per CPU per second for 2048b keys.  Cry me a 
river.  Bite the bullet, go to 2048 bits NOW, especially since the servers do 
NOT have resistance to roll-back-the-clock attacks.



In a major cluster validating recursive resolver, like what Comcast runs with 
Nominum or Google uses with Public DNS, the question is not how many verifies 
it can do per second per CPU core, but how many verifies it needs to do per 
second per CPU core.

And at the same time, this is a problem we already know how to parallelize, and 
which is obscenely parallel, and which also caches...

Let's assume a typical day of 1 billion external lookups for a major ISP 
centralized resolver, and that all are verified.  That's less than one CPU 
core-day to validate every DNSSEC lookup that day with 2048b keys.
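The arithmetic, using the verify rate from the openssl table quoted above:

```python
# One billion validated lookups per day at ~18.8k RSA-2048 verifies/s/core.
lookups_per_day = 1_000_000_000
verifies_per_core_second = 18_852        # from the quoted openssl speed table

core_seconds = lookups_per_day / verifies_per_core_second   # ~53,000 s
core_days = core_seconds / 86_400                           # ~0.6 of a core-day
```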

And yeah, DNS is peaky, but that's also why this task is being run on a cluster 
already, and each cluster node has a lot of CPUs.




Re: [DNSOP] Whiskey Tango Foxtrot on key lengths...

2014-04-01 Thread Nicholas Weaver

On Apr 1, 2014, at 6:39 AM, Phillip Hallam-Baker hal...@gmail.com wrote:
 
 Yes, I agree, but you are proposing a different DNSSEC model to the one they 
 believe in.
 
 The DNS world has put all their eggs into the DNSSEC from Authoritative to 
 Stub client model. They only view the Authoritative to Resolver as a 
 temporary deployment hack.

And in that case (which is, I agree, what is needed), the time to verify really 
doesn't matter one fucking bit, since those clients really won't care about an 
extra 50 MICROseconds to validate the crypto.  Heck, they won't notice 50 
milliseconds...

 Weakening the crypto algorithms to make the architecture work is always a 
 sign that the wrong architecture is being applied.

And weakening the crypto needlessly like this is even worse.  IMO, all DNSSEC 
software should simply refuse to generate RSA keys shorter than 2048b.



Re: [DNSOP] CD (Re: Whiskey Tango Foxtrot on key lengths...)

2014-04-01 Thread Nicholas Weaver

On Apr 1, 2014, at 10:24 PM, Colm MacCárthaigh c...@allcosts.net wrote:
  
 I don't think this makes much sense for a coherent resolver. If I were 
 writing a resolver, the behaviour would instead be;  try really hard to find 
 a valid response, exhaust every reasonable possibility. If it can't get a 
 valid response, then if CD=1 it's ok to pass back the invalid response and 
 its supposed signatures - maybe the stub will know better, at least fail open. 
 If CD=0, then SERVFAIL, fail closed. 

The bigger problem is not the CD case, but getting the data at all to validate 
locally.  

A lot (and I mean a LOT) of NATs give a DNS proxy that doesn't understand or 
forward requests for DNSSEC information. Heck, even Apple (which in my opinion 
makes the best overall CPE) doesn't do this right.  These NATs don't give the 
IP of the real recursive resolver, which often does support DNSSEC (and, in the 
case of Comcast, even validates).

Which means you have to go around and do a full local fetch, starting at the 
root and going down from there to validate on the client.

And then, to make matters worse, you have the hotspots and similar cases that 
force the user to use the configured recursive resolver.  Fortunately, most of 
those support fetching DNSSEC records.  But note that I said most, not all...



Re: [DNSOP] One more bit of Whiskey Tango Foxtrot on key lengths...

2014-03-28 Thread Nicholas Weaver


On Mar 28, 2014, at 1:34 AM, Stephane Bortzmeyer bortzme...@nic.fr wrote:

 On Thu, Mar 27, 2014 at 01:15:00PM -0700,
 Nicholas Weaver nwea...@icsi.berkeley.edu wrote 
 a message of 75 lines which said:
 
 But fixing this going forward requires a 1-line change in the ZSK
 script:
 
 I have nothing against longer keys but this sort of sentences (DNSSEC
 is simple, anyone can do it in five minutes) is a sure way to inflame
 me. It is not sufficient to change the script, you also have to search
 if it can break things later. A typical example would be the larger
 response to the DNSKEY query. If changing the key size make it larger
 than the MTU, it _may_ create problems.

It doesn't.  With 2 DNSKEYs and one RRSIG (a 2048b KSK plus one ZSK), the 
response goes from 750B with a 1024b ZSK to 880B with a 2048b one.  With two 
ZSKs it goes to 1100B.  You only have an issue with 3+ valid ZSKs.

Or you have cases like .org, which uses 1024b ZSKs but, with enough of them 
plus a KSK roll and other crud going on right now, is ALREADY busting the MTU 
limit: two 1024b keys, two 2048b keys, and two 2048b RRSIGs.  So if MTU were 
an issue, it would already be up and biting people...

 dig +dnssec DNSKEY org @199.19.56.1
...
;; Query time: 123 msec
;; SERVER: 199.19.56.1#53(199.19.56.1)
;; WHEN: Fri Mar 28 05:16:16 2014
;; MSG SIZE  rcvd: 1625


Yes, it's deliberately inflammatory on my part to say it's just a 1-line 
change, but sweet jeebus, people: if DNSSEC wants to actually be taken, you 
know, seriously as crypto, using 1024b signatures in the key positions of the 
root and the TLDs is not gonna cut it.  It is safe to assume that 1024b RSA is 
broken by nation-state adversaries.  NIST recommended it be deprecated in 
2010, with all use stopped by 2013.

And the code paths are well tested: resolvers already hit fragments often 
enough on DNSSEC that any validating resolver with fragment issues sees sucky 
performance once it drops its MTU to 512B [1], and since the KSKs are already 
2048b, 2048b crypto is already flowing through those paths.



[1] Yes, I've many times pointed out that the first-stage EDNS0 fallback 
should be to 1400B, but I doubt that code has changed at all yet...



[DNSOP] Whiskey Tango Foxtrot on key lengths...

2014-03-27 Thread Nicholas Weaver
Bits are not precious:  Until a DNS reply hits the fragmentation limit of 
~1500B, size-matters-not (tm, Yoda Inc).  

So why are both root and com and org and, well, just about everyone else using 
1024b keys for the actual signing?

The biggest blobs of typical DNSSEC data are NSEC3 responses, and upping the 
key size to 2048b everywhere will not cause widespread fragmentation issues.  
(4096b will... but only on those NSEC3 blobbies that require three RRSIGs; 
non-NSEC3 responses mostly fit under the limit, as they require only one or 
perhaps two RRSIGs.)



1024b is unquestionably too weak: 768-bit RSA was factored in 2010 as a 
low-resource academic project:
http://eprint.iacr.org/2010/006.pdf

and 1024b is estimated to be only about a thousand times harder.

RSA-768 took just 1,500 CPU-years on the fully parallelizable sieving step, 
and 4 days of total time (but only 12 hours of successful computation) on a 
couple of ~35-node clusters.

And, frankly speaking, a 3500 node cluster for a day is $75K thanks to EC2.

Do you really want someone like me to try to get an EC2 academic grant for the 
cluster and a big slashdot/boingboing crowd for the sieving to factor the root 
ZSK?



So why the hell do the real operators of the DNSSEC that matters, notably com 
and the root, use 1024b RSA keys?

And don't give me that key-roll BS: give me an out-of-date key for the root 
and a MitM position, and I can basically create a false world for many 
DNSSEC-validating devices by also providing bogus time data with a MitM on 
NTP...



IMO, it is time for DNSSEC software to refuse to generate new RSA keys less 
than 2048b in length, and for the TLD and root operators to ditch short keys 
into the trash heap of history.  Well, the time was actually a decade ago, but 
hey...


If people actually want DNSSEC to be taken seriously as a PKI-type resource 
(à la DANE), the DNS community needs to actually, well, use secure crypto.  
1024b RSA is not secure.  Go Big or Go Home.



Re: [DNSOP] Whiskey Tango Foxtrot on key lengths...

2014-03-27 Thread Nicholas Weaver

On Mar 27, 2014, at 6:56 AM, Nicholas Weaver nwea...@icsi.berkeley.edu wrote:
 And, frankly speaking, a 3500 node cluster for a day is $75K thanks to EC2.
 
 Do you really want someone like me to try to get an EC2 academic grant for 
 the cluster and a big slashdot/boingboing crowd for the sieving to factor the 
 root ZSK?

Crud, blew my math: you'd want a 35,000-node cluster, but that's still only 
~$1M...  The point stands: 1024b RSA is unsafe.
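One way to reconcile the corrected numbers, with the node size and daily price as labeled assumptions (not actual EC2 quotes):

```python
# ~1,500 CPU-years of RSA-768 sieving, compressed into a single day.
cpu_years = 1_500
core_days = cpu_years * 365                       # ~547,500 core-days of work

cores_per_node = 16                               # assumed instance size
nodes_for_one_day = core_days // cores_per_node   # ~34,000 nodes

price_per_node_day = 30                           # assumed $/node/day
cost = nodes_for_one_day * price_per_node_day     # ~$1M, matching the post
```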



Re: [DNSOP] Whiskey Tango Foxtrot on key lengths...

2014-03-27 Thread Nicholas Weaver

On Mar 27, 2014, at 7:22 AM, Joe Abley jab...@hopcount.ca wrote:

 
 On 27 Mar 2014, at 22:56, Nicholas Weaver nwea...@icsi.berkeley.edu wrote:
 
 Bits are not precious:  Until a DNS reply hits the fragmentation limit of 
 ~1500B, size-matters-not (tm, Yoda Inc).  
 
 So why are both root and com and org and, well, just about everyone else 
 using 1024b keys for the actual signing?
 
 Those requirements (for the root zone keys) came from NTIA via NIST:
 
 http://www.ntia.doc.gov/files/ntia/publications/dnssec_requirements_102909.pdf
  (9)(a)(i)
 
 (well, NIST specified a minimum key size, but the implication at the time was 
 that that was a safe minimum).

Obligatory Snarky Note: these being the same people who, after 2007, said that, 
although you can create your own constants, you MUST still use the specified 
magic constants for Dual_EC_DRBG if you wanted certification, even though it 
was shown that whoever generated the magic constants could have placed a 
backdoor in them...


But seriously: it was clear back a decade ago that 1024b RSA should be 
deprecated in 2010:

(current)
http://csrc.nist.gov/publications/nistpubs/800-131A/sp800-131A.pdf

(historical)
http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57-Part1-revised2_Mar08-2007.pdf

1024b RSA is really considered by NIST as only ~80 bits symmetric strength 
equivalent.

 Bear in mind, I guess, that these keys have a publication lifetime that is 
 relatively short. The window in which a factoring attack has an opportunity 
 to find a result that can be exploited as a compromise is fairly narrow.

Except that if I'm in a position to actually use an old factored root key, 
I'm probably also in a position to F-up your NTP.  How many computers complain 
bloody murder if the NTP server says "oh, your clock is wrong by 20 days (or 
200 days), here you go"?  And even if they do, how many users understand what 
that would mean?


And "relatively short" is still two weeks.  That is well within range of a 
nation-state adversary willing to build a custom sieving machine.  Look at how 
much SHA256 power has been generated with well under $50M of aggregate 
spending: it's 35 PHash/s!

We do want DNSSEC to work in the face of a nation-state adversary, no?  Do 
you want to bet that the NSA has not already built a 1024b RSA factoring 
machine?

Likewise, we do want the ability to do historical things, no?  E.g., a DNSSEC 
signature at time T attesting to a fact, using the captured DNSSEC validation 
chain at that time?


Frankly speaking, since the root uses NSEC rather than NSEC3, IMO it should be 
4096b for both the KSK and ZSK.  But I'd be happy with 2048b.  Using 1024b is a 
recipe to ensure that DNSSEC is not taken seriously.



Re: [DNSOP] Whiskey Tango Foxtrot on key lengths...

2014-03-27 Thread Nicholas Weaver

On Mar 27, 2014, at 11:18 AM, Christopher Morrow christopher.mor...@gmail.com 
wrote:

 On Thu, Mar 27, 2014 at 10:52 AM, Paul Hoffman paul.hoff...@vpnc.org wrote:
 Yes. If doing it for the DNS root key is too politically challenging, maybe 
 do it for one of the 1024-bit trust anchors in the browser root pile.
 
 why would this be politically sensitive?

Because the browsers have already decided that killing off 1024b CAs is a 
good idea, and they could revoke just those CAs once someone breaks a 1024b 
example, since the browser vendors already have good experience revoking bad 
CAs (cue DigiNotar...)


In contrast, DNSSEC seems mired in a 1024b swamp at the root, and when you can 
use an old key (which you can for the root, since you can fake everything up 
below that dynamically and fake NTP so that your bad key is still kosher), 
breaking a root key really would be breaking DNSSEC.



[DNSOP] One more bit of Whiskey Tango Foxtrot on key lengths...

2014-03-27 Thread Nicholas Weaver

The overall problem of using old root keys will persist (and eventually I think 
DNSSEC resolvers need to refuse ZSKs for . and tlds that are less than 2048b in 
length, or barring that, need to add a clock ratchet to keep old keys from 
being reused).

But fixing this going forward requires a 1-line change in the ZSK script:

-b 2048

That's it.  Since the KSKs are already 2048b, and therefore there are 
appropriate RRSIGs, it's clear that the server side is set up to handle real 
key lengths.



[DNSOP] Any suggestion on what I'm doing that is stupid here on NSEC3?

2014-02-12 Thread Nicholas Weaver
I'm trying to do my own implementation of NSEC3 as part of my dynamic DNSSEC 
server (in order to do NSEC3 lies for NXDOMAIN, since you can't do such a lie 
with NSEC: NSEC lies only allow a 0-answer NOERROR, which is unfortunately NOT 
the same)

But I appear to be doing something stupid, and am not operating the hash right:



Looking at com, the NSEC3 for com is:
CK0POJMG874LJREF7EFN8430QVIT8BSM.com. 86400 IN NSEC3 1 1 0 - ...

(Algorithm 1 - SHA-1, flag = 1, iterations = 0, salt = None, fetched by dig 
+dnssec MX com @a.gtld-servers.net)

Reading RFC5155, the calculation of the hash is:

The hash calculation uses three of the NSEC3 RDATA fields: Hash
Algorithm, Salt, and Iterations.
 
Define H(x) to be the hash of x using the Hash Algorithm selected by
the NSEC3 RR, k to be the number of Iterations, and || to indicate
concatenation.  Then define:
 
   IH(salt, x, 0) = H(x || salt), and
 
   IH(salt, x, k) = H(IH(salt, x, k-1) || salt), if k > 0
 
Then the calculated hash of an owner name is
 
   IH(salt, owner name, iterations),
 
where the owner name is in the canonical form, defined as:
 
The wire format of the owner name where:
 
1.  The owner name is fully expanded (no DNS name compression) and
fully qualified;
 
2.  All uppercase US-ASCII letters are replaced by the corresponding
lowercase US-ASCII letters;
 
3.  If the owner name is a wildcard name, the owner name is in its
original unexpanded form, including the * label (no wildcard
substitution);

So it should be the base32 encoding of the SHA1 hash of the wire format for 
"com" (since there is no salt), which in python is:

\x03com\x00 (3 characters, the string "com", and 0 as a terminator in wire 
format.  This matches the wire format I get from my name packer in my DNS 
server)

Yet when I try to calculate the SHA1 hash in python's library, I get:

>>> m = hashlib.sha1()
>>> m.update("\x03com\x00")  # There is no salt and 0 additional iterations
>>> base64.b32encode(m.digest())
'MUAZYTWQIHEVT3OPHOPXIEDA27S5IL4W'
>>> m.hexdigest()
'65019c4ed041c959edcf3b9f741060d7e5d42f96'

But at the same time, this matches the sha1sum for a file containing just the 
string \x03com\x00, so the hash is correct for sha1.


So the conclusion is I'm not putting in the right input into the hash function. 
 Thoughts on what I'm doing wrong?



Re: [DNSOP] Any suggestion on what I'm doing that is stupid here on NSEC3?

2014-02-12 Thread Nicholas Weaver
Thanks.  Indeed I was stupid: wrong base32 encoding
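For the record, the mismatch comes from the alphabet: RFC 5155 specifies the RFC 4648 base32hex encoding (digits first, 0-9A-V), while Python's base64.b32encode produces the standard base32 alphabet. A sketch of the corrected computation; the translate step is one way to get base32hex (newer Pythons also offer base64.b32hexencode):

```python
import base64
import hashlib

# RFC 4648 "base32hex" alphabet (0-9A-V), mandated by RFC 5155, versus the
# standard base32 alphabet that base64.b32encode() emits.
_STD = b"ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"
_HEX = b"0123456789ABCDEFGHIJKLMNOPQRSTUV"
_TO_HEX = bytes.maketrans(_STD, _HEX)

def nsec3_hash(wire_name: bytes, salt: bytes = b"", iterations: int = 0) -> bytes:
    """NSEC3 hash per RFC 5155: iterated SHA-1 over the canonical wire-format
    owner name, base32hex-encoded (a 20-byte digest needs no '=' padding)."""
    digest = hashlib.sha1(wire_name + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    return base64.b32encode(digest).translate(_TO_HEX)

# "com" in canonical wire format: length 3, the label, root terminator.
print(nsec3_hash(b"\x03com\x00"))  # b'CK0POJMG874LJREF7EFN8430QVIT8BSM'
```

This reproduces the NSEC3 owner name for com quoted earlier in the thread.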



Re: [DNSOP] On squatting and draft-grothoff-iesg-special-use-p2p-names

2014-01-06 Thread Nicholas Weaver

On Jan 6, 2014, at 12:54 PM, Andrew Sullivan a...@anvilwalrusden.com wrote:

 On Mon, Jan 06, 2014 at 01:48:04PM -0500, Ted Lemon wrote:
 It seems to me that TOR is a pretty vital application, even if it's
 not as popular as .local (which, let's be honest, is almost never
 seen, much less typed, by an end user). 
 
   Addresses in .onion are opaque, non-mnemonic, alpha-semi-numeric
   hashes corresponding to an 80-bit truncated SHA1 hash over a given
   Tor hidden service's public key. 
 
 I'm pretty sure things in .onion are never supposed to be seen, much
 less typed, by an end user too.

You'd like to think that, but sorry no.  They are seen all the time:

If you don't already have it bookmarked, you are going to have to type in or 
cut and paste or the like http://silkroad6ownowfk.onion if you want to visit 
the current incarnation of the Silk Road in order to invest in your future 
prosecution...





Re: [DNSOP] Strange EDNS Header Flags in DNS packet

2013-12-17 Thread Nicholas Weaver

On Dec 17, 2013, at 12:47 AM, Stephane Bortzmeyer bortzme...@nic.fr wrote:

 On Tue, Dec 17, 2013 at 04:21:35PM +0800,
 Jianjun Ning t...@arey.cn wrote 
 a message of 61 lines which said:
 
 ;; OPT PSEUDOSECTION:
 ; EDNS: version: 0, flags:; MBZ: 0005 , udp: 512
 ;; QUESTION SECTION:
 ;www.google.com.hk. IN  A
 
 The value of field MBZ is 0x0005!!
 
 The Google authoritative name servers do not seem to return an EDNS
 section in their answers. Therefore, this section has probably been
 added by your resolver, or by a middleman (something which is quite
 common in China).

The great firewall packet injector is easy to detect.  Because it only responds 
to queries, target your dig (using @) to an IP that isn't hosting a DNS server.

So, eg,

dig +norecurse +bufsize=768 www.google.com.hk @192.150.187.1

(Sends to ICSI, but not our DNS server, so you know the route goes to the west 
coast of the US)




Re: [DNSOP] [dnsext] DNS vulnerabilities

2013-11-01 Thread Nicholas Weaver

On Nov 1, 2013, at 7:57 AM, Derek Atkins de...@ihtfp.com wrote:
 It is unclear to me that ECC as a generic technology is bad, although
 any specific curves created by NIST/NSA are certainly suspect.
 
 Having said that, Dual-EC-DRBG is a Random Number Generator, not a Hash,
 Public Key, or Cipher algorithm, and we don't use it in DNS for
 anything, AFAIK.


Random Number Generators are used to generate the key material, since bare 
entropy is often not enough, so you use your entropy pool to seed a pRNG.  
Bind, for example, ends up using OpenSSL.

Certified versions of OpenSSL do include Dual_EC_DRBG, although it's not 
enabled by default (or is it?). 


The threat is probably a lot less, however, since everything else signed in 
DNSSEC-land is deterministic, and even if Dual_EC_DRBG was used, hopefully the 
raw stream doesn't leak (the backdoor requires seeing some of the random output 
to make it predictable).



[DNSOP] Mia Culpa: Recursive resolver DNSSEC validation is necessary...

2013-10-08 Thread Nicholas Weaver

I've in general advocated client-side, rather than recursive resolver, 
validation, with the client doing the iterative fetch and accepting on all 
DNSSEC failures.  

With the recent revelation that the NSA/GCHQ is doing packet injection on the 
backbone, at scale, and even using this to target NATO allies, I've changed my 
tune.  Even forgetting about the NSA/GCHQ directly, they've now implicitly said 
that hey, it's OK for everyone else to do it, too.

Backbone DNS injection allows converting a man-on-the-side attacker (who, eg, 
even with a certificate, can't intercept TLS using perfect forward secrecy, and 
who when attacking HTTP can only see requests before deciding what to do) into 
a full man-in-the-middle, as long as the attacker knows the target's recursive 
resolver.


Thus I've changed my tune:

1:  Recursive resolvers MUST validate DNSSEC as well as clients.  Not because I 
trust the recursive resolver, but because there is now an adversary set where 
recursive resolver validation does help, and it's an easier place to do it.

2:  Validation failures due to bad signatures/etc MUST result in a failure 
unless specifically whitelisted.

3:  Future protocols MUST support Connect by multiple name semantics:  Given 
MULTIPLE names, only connect if all K names have the same IP after resolution.  
(This enables multiple-validation-path DNSSEC, which is a pretty uni).
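A minimal sketch of point 3's "connect by multiple names" check, with the resolver function injected so it could be backed by any validating lookup; the names and addresses below are illustrative stand-ins, not real lookups:

```python
def multi_name_connect_ok(names, resolve):
    """Proceed with a connection only if every name resolves to an identical,
    non-empty set of addresses.  `resolve` maps a name to a set of IP address
    strings (e.g. a DNSSEC-validating stub lookup)."""
    if not names:
        return False
    addr_sets = [frozenset(resolve(n)) for n in names]
    first = addr_sets[0]
    return bool(first) and all(s == first for s in addr_sets[1:])

# Stand-in resolver table for illustration:
table = {
    "www.example.com": {"192.0.2.10"},
    "www.example.net": {"192.0.2.10"},
    "evil.example.org": {"203.0.113.7"},
}
print(multi_name_connect_ok(["www.example.com", "www.example.net"],
                            table.__getitem__))   # True
print(multi_name_connect_ok(["www.example.com", "evil.example.org"],
                            table.__getitem__))   # False
```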




Re: [DNSOP] Practical issues deploying DNSSEC into the home.

2013-09-12 Thread Nicholas Weaver


On Sep 12, 2013, at 7:24 AM, Theodore Ts'o ty...@mit.edu wrote:
 It is still a hierarchical model of trust.  So at the top, if you
 don't trust Verisign for the .COM domain and PIR for the .ORG domain
 (and for people who are worried about the NSA, both of these are US
 corporations), the whole system falls apart.


It's also a constrained path of trust, and you can actually choose who you trust.

E.g. your application could be constructed to look up both 
{data}.dnssec-info-domain.com and {data}.dnssec-info-domain.ru.  Only if 
both use the same validated key is the key accepted.

That way, the trust becomes:

1:  The root is trusted

2:  The registries for .com and .ru don't collaborate, since both must 
collaborate for the trust to affect the results.


This is a huge difference from SSL, where unless you pin your application to 
trust only a single CA, you end up having to trust the entire universe of 
certificate authorities.
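The dual-lookup acceptance rule above can be sketched as follows; the lookup function stands in for a DNSSEC-validated fetch, and the domain names are the message's own illustrative ones (key strings are invented):

```python
def cross_validated_key(data_label, lookup):
    """Accept key material only when two independently delegated lookups
    agree.  Both the .com and .ru registries would have to collude to forge
    an accepted result."""
    k1 = lookup(data_label + ".dnssec-info-domain.com")
    k2 = lookup(data_label + ".dnssec-info-domain.ru")
    return k1 if k1 is not None and k1 == k2 else None

# Stand-in validated-lookup table:
records = {
    "alice.dnssec-info-domain.com": "KEY-A",
    "alice.dnssec-info-domain.ru": "KEY-A",
    "mallory.dnssec-info-domain.com": "KEY-M",
    "mallory.dnssec-info-domain.ru": "KEY-FORGED",
}
print(cross_validated_key("alice", records.get))    # KEY-A
print(cross_validated_key("mallory", records.get))  # None
```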



Re: [DNSOP] Practical issues deploying DNSSEC into the home.

2013-09-11 Thread Nicholas Weaver

On Sep 11, 2013, at 9:18 AM, Phillip Hallam-Baker hal...@gmail.com wrote:
 
 The DNS is the naming infrastructure of the Internet. While it is in theory 
 possible to use the DNS to advertise very rapid changes to Internet 
 infrastructure, the practice is that the Internet infrastructure will look 
 almost exactly the same in one hour's time as it does right now.
  
 Using DNS data from 24 hours earlier might create reliability issues but 
 should never introduce a security risk. Anyone who is relying on the DNS for 
 data that is more time sensitive than 1 hour is doing it wrong.

I disagree.  DNSSEC is not just DNS: it's the only available, deployed, and 
(mostly) accessible global PKI currently in existence, and it also includes a 
constrained path of trust that follows already-established business 
relationships.

Dynamic DNSSEC applications, where signatures are generated on the fly, are 
almost certainly going to be developed to utilize this infrastructure.



Re: [DNSOP] Practical issues deploying DNSSEC into the home.

2013-09-11 Thread Nicholas Weaver

On Sep 11, 2013, at 12:38 PM, Phillip Hallam-Baker hal...@gmail.com wrote:
 
 I disagree.  DNSSEC is not just DNS: its the only available, deployed, and 
 (mostly) accessible global PKI currently in existence which also includes a 
 constrained path of trust which follows already established business 
 relationships.
 
 Except that virtually nobody uses DNSSEC and most of the registrars don't 
 support it.

I strongly disagree:

I had an easier time registering my DNSSEC test domain's DS records with the 
registrar than the nameservers themselves, using an obnoxious company that 
sponsors a NASCAR driver and has obnoxious TV ads.

Comcast and Google Public DNS both validate DNSSEC on all requests.

A small minority of clients can't fetch DNSSEC records, but most actually can, 
either through one of the recursive resolvers or over the Internet.

 And then there is that other PKI that is actually used to support a trillion 
 odd dollars worth of global e-commerce per year.

Which the NSA is man-in-the-middling with abandon, due in no small part to the 
lack of a constrained path of trust.  Google has effectively given up on the 
TLS PKI for their own use in Chrome: they hardcode the Google sub-CA.



[DNSOP] SEA DNS w DNSSEC Hack thought...

2013-08-28 Thread Nicholas Weaver
One thought on DNSSEC and this attack.

DNSSEC couldn't have prevented this attack, as anyone authorized to update the 
.com zone for a domain can update the DS records just as easily as the NS+glue 
records.  And the attack could have done orders of magnitude more damage.


Yet DNSSEC can create an anomaly that may prove useful:

If the DS changes but the NS+glue does not, or the NS changes but the DS does 
not, this is a legit change from the registrar viewpoint as someone needs to 
change BOTH to be more than just a DOS on the domain.

But if BOTH the DS and NS+glue records for a domain change in a single event, 
this is NECESSARY for an attack that is more than a DOS, yet it is NOT 
NECESSARY for a migration (as a migration can change one and then the other).



How does the following policy strike people for DNSSEC recursive resolvers 
which perform validation:

Keep all seen DS and PARENT NS+glue RRSETs in a much-longer-than-normal (2 day 
timeout) cache.

When the DS or parent NS+glue RRSET changes, record that change (but still note 
the old version) in the cache.  

If the other one changes, mark that domain as bogus until either 2 days pass 
from the first change OR one or the other changes back to the older value.



What this accomplishes:

Registrar hijacks are no longer silent in the face of DNSSEC: it will result in 
a DOS on the domain rather than full control.

The protection is temporary, and is assuming that the registrar will be 
straightened out in two days.  

Which is probably a reasonable assumption, since a registrar hijack now 
produces a DOS on the domain (making it visible) if it's all done at once.  If 
the attacker first changes one and then the other, there is a two-day window 
where the site operator can notice the attack by monitoring the TLD status for 
the site's domain.

While proper migrations under the scheme (2 days between DS change and NS 
change) are always good, and improper migrations do produce a DOS, the DOS is 
limited to 2 days.



In terms of deployment, if Nominum would do this, basically everyone gets 
protected: Nominum's use by Comcast for recursive resolver validation 
guarantees that there is a large customer base behind such protection, making 
this very visible.

Thoughts?  Comments?




Re: [DNSOP] SEA DNS w DNSSEC Hack thought...

2013-08-28 Thread Nicholas Weaver

On Aug 28, 2013, at 8:37 AM, Paul Wouters p...@cypherpunks.ca wrote:
...

 Sounds like certificate pinning or CT-DNSSEC. It has the same problems.
 There will be more false positives then actual attacks, and people will
 disable it.


Of course, that argument also says "ditch DNSSEC altogether, bypass the 
recursive resolver, and have a nice day":  how many attacks has DNSSEC stopped 
to date, vs. false positives due to misconfiguration?

That's also why it's temporary (unlike most cert pinning, which is far more 
semi-permanent).  And there is the hurdle of "if you are actually configuring 
DNSSEC, there is an assumed minimum clue level".


Has anyone yet studied whether the DS and NS RRSETs tend to change at the same 
time for major domains?




Re: [DNSOP] BIG RRSETS EDNS0 and ipv6 framentation.

2013-06-18 Thread Nicholas Weaver

On Jun 18, 2013, at 8:22 AM, Mark Andrews ma...@isc.org wrote:
 My goal as it were was to look at if fragmentation were expected to work 
 that I don't really want to expose myself to receiving a 4k response (via 
 UDP) because the risk of an amplification attack becomes very large 
 indeed. Even if I filter fragments (because I have to or as a product of 
 limitations) such an attack may be targeted at the infrastructure rather 
 than the endpoint that's the notional target.
 
 Yet fragmented packets work fine if you don't put a middle box in the
 middle that has a conniption when it sees a fragmented packet.

This is practically every box on IPv6.  Fragments REALLY don't work on IPv6.

 As for being exposed you really can't prevent being exposed.
 
 As for not replying with fragmented packets, that itself causes
 operational problems as you move the traffic to TCP which unless
 you have taken measures to reduce the segment sizes runs the risk
 of PMTUD problems.  Some of the ORG servers limit the UDP size then
 don't do PMTUD well which is a real pain if you are behind a tunnel.

IPv6 is much better at PMTU discovery than IPv4, and with IPv6 you can always 
just set the minimum IPv6 MTU (1280B) and bypass all PMTU discovery anyway.



Re: [DNSOP] BIG RRSETS EDNS0 and ipv6 framentation.

2013-06-17 Thread Nicholas Weaver
Let's just say that if you think the IPv4 fragmentation problem is bad, IPv6 
makes it look positively benign by comparison on the IPv6 kit deployed today.

Basically, use a 1400B EDNS0 MTU, and fail over to TCP.



Re: [DNSOP] lost key rollovers considered harmful

2013-04-04 Thread Nicholas Weaver

On Apr 4, 2013, at 1:19 PM, Paul Hoffman paul.hoff...@vpnc.org wrote:
 I think nothing is needed here except perhaps a statement of the bleeding 
 obvious: if you miss too many key rollovers, Very Bad Things will happen so 
 make sure you have a foolproof way of recovering from that.
 
 We need that statement because it's *not* bleeding obvious. I cannot think of 
 a single thing built into a 2007-era ISO of a Linux distro that would have 
 the property similar to it will automatically give mysterious results for 
 DNS service. It might have lots of unsafe software turned on, but none that 
 will say I'll serve you but then it doesn't.

Also, there is a LOT of old, NEVER updated, 5 year old networking kit out 
there.  Well, fortunately they are often clueless about DNSSEC, but still...



[DNSOP] Question: unknown EDNS0 options and recursive resolvers?

2012-11-08 Thread Nicholas Weaver

How do recursive resolvers react to unknown EDNS0 options?

Are the requests simply dropped?  
Is the unknown option removed and ignored?  
Passed to the authority unchanged?




Re: [DNSOP] A good chance to get all riled up - draft-wkumari-dnsop-omniscient-as112-00

2012-06-12 Thread Nicholas Weaver

On Jun 12, 2012, at 7:40 AM, Warren Kumari wrote:

 Hi there all,
 
 So, back in (AFAIR) Taipei I proposed making AS112 instances simply be 
 authoritative for *everything*, and then simply delegating and undelegating 
 things to it as appropriate (this would make things much simpler as there 
 would be very little coordination needed). At the time I realized that this 
 would require synthesizing answers (always a bit of a controversial topic), 
 but it turns out that there are a number of other things that may be equally 
 contentious, such as (thanks to Joe for this partial list):

To be honest, it seems like almost a no-brainer good idea to me.  And what's 
wrong with synthesizing answers?

The only question I have is DNSSEC.  I take it that since the model is "this is 
all bogus traffic", you just have an NSEC above it saying "no DNSSEC 
information", but I just want to be sure that doesn't change.



Stupid question on the SOA record however: why not dynamically generate an 
exact match SOA?  


So if the query is, say

121.14.34.10.in-addr.arpa. IN  PTR

Instead of returning


10.in-addr.arpa.300 IN  SOA prisoner.iana.org. 
hostmaster.root-servers.org. 2002040800 1800 900 604800 604800
(current)

or

.   300 IN  SOA a.root-servers.net. 
nstld.verisign-grs.com. 2012061200 1800 900 604800 86400
(omniscient)


Why not return

121.14.34.10.in-addr.arpa.  300 IN  SOA prisoner.iana.org. 
hostmaster.root-servers.org. 2002040800 1800 900 604800 604800

as the SOA?
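Synthesizing the exact-match SOA is indeed trivial; a sketch in presentation format, with the SOA field values copied from the prisoner.iana.org example above:

```python
def synthesize_soa(qname: str) -> str:
    """Return an exact-match SOA line for an AS112-style negative response:
    the owner is the query name itself rather than a broad zone cut, so the
    record cannot be misinterpreted as covering a wider zone."""
    return (qname + " 300 IN SOA prisoner.iana.org. "
            "hostmaster.root-servers.org. 2002040800 1800 900 604800 604800")

print(synthesize_soa("121.14.34.10.in-addr.arpa."))
```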


Re: [DNSOP] A good chance to get all riled up - draft-wkumari-dnsop-omniscient-as112-00

2012-06-12 Thread Nicholas Weaver

On Jun 12, 2012, at 8:17 AM, Joe Abley wrote:

 121.14.34.10.in-addr.arpa.   300 IN  SOA prisoner.iana.org. 
 hostmaster.root-servers.org. 2002040800 1800 900 604800 604800
 
 as the SOA?
 
 That would involve custom software. At present, anybody can run an AS112 
 server using whatever choice of platform and DNS code they feel like. 
 Requiring custom code for an AS112 server and expecting it to be maintained 
 on multiple platforms seems unlikely, but no doubt it could be done.

OTOH, it eliminates all worries about the SOA being misinterpreted, and really, 
this would be remarkably small:

E.g., I know I can code this up in prototype form in my (hackish) Python DNS 
library in about an hour, and the total codebase would be on the order of 600 
total LOC, including the DNS library.

So synthesizing an exact-match SOA should at least be considered, since it 
addresses the only interoperability concern I see with that of an overly-broad 
SOA record.



Re: [DNSOP] on Negative Trust Anchors

2012-04-13 Thread Nicholas Weaver

On Apr 13, 2012, at 1:24 PM, Patrik Fältström wrote:

 
 On 13 apr 2012, at 22:09, Evan Hunt wrote:
 
 On Fri, Apr 13, 2012 at 05:43:42PM +, Paul Vixie wrote:
 i'm opposed to negative trust anchors, both for their security
 implications if there were secure applications in existence, and for
 their information economics implications.
 
 +1
 
 +1

-1

Simply put, I'm not a huge believer in recursive resolver (rather than client) 
validation.  But if you are going to do it...

There are a few cases where it is valuable [1], but for every 'validate is the 
right answer', there are hundreds of cases, like the NASA case, where the 
authority is just screwing up.  And in those cases, the economics are that 
DNSSEC is creating a DOS, and it is the one who's validating that's at least 
partially responsible because it is both validating and deciding that its 
clients should suffer.

This is especially true for ISPs.  If you want any other ISP to validate 
DNSSEC, they need a mechanism like this so they don't suffer through the 
problems that Comcast has already experienced.

Because practice has shown that it is the recursive resolver, not the 
authority, that gets blamed.  Lurk on the Google Public DNS mailing list, and 
you realize that even without DNSSEC, the resolver operator faces the blame for 
brokenness.  Thus, at least for DNSSEC, resolver operators need to be able to 
override validation easily and efficiently.



[1] And these cases require 'listen until you can get something that 
validates': Just accept then validate gives the wrong answer in these cases.



Re: [DNSOP] Batch Multiple Query Packet

2012-03-01 Thread Nicholas Weaver

On Mar 1, 2012, at 6:08 AM, John R Levine wrote:

 the additional section. MX queries already have their kludge,
 returning A and AAAA records.
 
 I'm pretty sure MX queries do NOT have a kludge. I don't believe that
 the additional section is actually used by any servers these days,
 
 Really?  Caches throw away additional sections even when they're 
 authoritative?  I find that rather surprising.
 
 Why would it be a productive use of time to add all this new mechanism, 
 rather than fix caches to use the exact same data that they're already 
 getting?
 
 I know about cache poisoning, but the logic to deal with it is exactly the 
 same logic you'd have to use to decide which of multiple answers to accept.
 
Yes, the logic should be NEVER cache anything you didn't directly ask for, 
absent DNSSEC validation.  

Which means you (IMO) MUST throw away non-authoritative information in the 
additional section, and most resolvers will only even attempt to cache 
additional records when they are the records associated with the authority 
section.


It is the caching of non-asked-for data, be it Auth, Additional, CNAME chains, 
etc, which enables race-until-win attacks like the Kaminsky attack.

Thus a resolver MUST NEVER cache data that wasn't specifically asked for if it 
can't DNSSEC validate this information.  It can use the additional data 
received to indicate that it SHOULD ask for the information, but it shouldn't 
ever cache it in a general context.

Or, if it does cache it, it should validate that the entry would be valid by 
performing an independent lookup, à la Unbound, and replace the information 
with the new version.
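The caching rule in the last few paragraphs can be sketched as follows; the data structures are illustrative, and a real resolver would also track TTLs, classes, and bailiwick:

```python
class StrictCache:
    """Cache ONLY the RRset that answers the question actually asked.
    Authority/additional data is never cached, only returned as hints that a
    resolver may re-query independently instead of trusting."""

    def __init__(self):
        self.store = {}   # (name, rtype) -> rdata

    def insert_response(self, question, answers, extras):
        """`answers` and `extras` map (name, rtype) -> rdata.  Returns the
        keys worth an independent follow-up lookup."""
        if question in answers:
            self.store[question] = answers[question]
        return [key for key in extras if key not in self.store]
```

With this policy, a poisoned record for www.victim.com riding along in a response for 1.victim.com never enters the cache.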


 My personal goal is to improve resolution times for A/AAAA lookups by
 collecting them into a single query. Perhaps it makes more sense to
 simply add Yet Another DNS Hack and add a special QTYPE like ANY
 meaning any address, that is actually well-defined and usable.
 
 Add a new RR called AOR that returns both, with a field that has flag bits 
 to say which is valid.  And, of course, the A and AAAA records in the 
 additional section.

Or don't bother.  As I mentioned earlier, this is a clear parallelization case, 
so the only thing you are saving in this optimization is ~100B of traffic, and 
the latency involved in transmitting an additional 100B through the network.



Re: [DNSOP] Batch Multiple Query Packet

2012-03-01 Thread Nicholas Weaver

On Mar 1, 2012, at 7:59 AM, John R Levine wrote:

 It is the caching of non-asked-for data, be it Auth, Additional, CNAME 
 chains, etc, which enables race-until-win attacks like the Kaminsky attack.
 
 Thus a resolver MUST NEVER cache data that wasn't specifically asked for if 
 it can't DNSSEC validate this information.  It can use the additional data 
 received to indicate that it SHOULD ask for the information, but it 
 shouldn't ever cache it in a general context.
 
 Or, if it does cache it, it should validate that the entry would be valid by 
 performing an independent lookup, a'la Unbound and replace the information 
 with the new version.
 
 This makes no sense.  Assuming you ignore records for which the server isn't 
 authoritative (which we all do since Kashpureff), why wouldn't you use the 
 records in the additional section?
 
 Or to put it another way, if you're worried that the authoritative additional 
 records are fake, why aren't the authoritative answer records equally fake?  
 Same server, same authority.

Because caching un-asked-for records is what allows race until win: the 
ability of an attacker implementing blind cache poisoning to keep retrying the 
attack until successful.


If only the requested information is cached, an attacker targeting 
www.victim.com can only try to poison once per TTL, because after that, the 
resolver doesn't generate queries.

But if ANY sort of authoritatitive or additional information is cached 
unasked-for and unverified with a second request (INCLUDING the NS RRSET from 
the authoritative field), the attacker can start with 1.victim.com, with the 
additional information containing the poisoned record for www.victim.com.  If 
success?  Great.  If failure?  Retry with 2.victim.com, and so on.


Even with halfway-decent port randomization, Race Until Win can be a problem:  
Poison the NS entry for .com on a major resolver using your botnet and, ohh, 
boy, can an attacker have some fun.



Re: [DNSOP] Batch Multiple Query Packet

2012-02-28 Thread Nicholas Weaver


Just some back-of-the-envelope math:  overhead for each DNS datagram (on an 
Ethernet) is 8B for the Ethernet preamble, 14B for the Ethernet header, 20B for 
the IP header, and 8B for the UDP header, for a total of 50B.

Assuming the question is 50B and the answer is 150B, the overhead saved by 
coalescing two requests into one packet is Not That Much: you only save 100B of 
overhead for communicating 400B of data, which IS non-zero, but not enough to 
really worry about.
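The arithmetic, spelled out (the message sizes are the assumed values from the text):

```python
# Per-datagram overhead on Ethernet, from the message: preamble + Ethernet
# header + IP header + UDP header.
OVERHEAD = 8 + 14 + 20 + 8          # = 50 bytes
QUESTION, ANSWER = 50, 150          # assumed DNS message sizes

separate = 4 * OVERHEAD             # two queries + two responses
batched = 2 * OVERHEAD              # one coalesced query + one response
payload = 2 * QUESTION + 2 * ANSWER

print(separate - batched)           # 100 bytes of overhead saved...
print(payload)                      # ...for 400 bytes of DNS data
```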


But since if you could coalesce you could do parallel, this savings doesn't 
help latency much at all: just the transit time for 50B out and 50B back.  If 
anything, parallel will be BETTER on the latency since batching would probably 
require a coalesced response, while parallel is just that, parallel, so if the 
first record is more useful, it can be acted on immediately.




Re: [DNSOP] Batch Multiple Query Packet

2012-02-28 Thread Nicholas Weaver

On Feb 28, 2012, at 10:04 AM, Paul Vixie wrote:

 On 2/28/2012 5:57 PM, Nicholas Weaver wrote:
 But since if you could coalesce you could do parallel, this savings doesn't 
 help latency much at all: just the transit time for 50B out and 50B back.  
 If anything, parallel will be BETTER on the latency since batching would 
 probably require a coalesced response, while parallel is just that, 
 parallel, so if the first record is more useful, it can be acted on 
 immediately.
 
 parallel (what i called blast) is great solution for things that don't
 run at scale. but the lock-step serialization that we get from the MX
 approach (where the A/AAAA RRset may be in the additional data section
 and so we have to wait for the first answer before we ask the second or
 third question) has a feedback loop effect on rate limiting. it's not as
 good as tcp windowing but it does tend to avoid WRED in core and edge
 router egress buffers.
 
 all i'm saying is, we have to be careful about too much parallelism
 since UDP unlike TCP has no windowing of its own.

We don't need to be careful about this until you are talking about ~10 KB of 
data or more in a single transaction with no interactions, because below that 
TCP has the same dynamics due to the initial window size (with browsers opening 
4+ connections!), and this doesn't seem to bother people.
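
To put a rough number on that comparison, here is a sketch assuming a typical 
1460 B MSS, an initial congestion window of 4 segments, and 4 browser 
connections; all of these are illustrative values, not measurements:

```python
# How big is a burst of parallel DNS answers next to TCP's opening burst?
MSS = 1460          # assumed Ethernet-path segment size, bytes
INITCWND = 4        # assumed initial congestion window, segments
CONNECTIONS = 4     # browsers opening 4+ parallel connections

tcp_burst = MSS * INITCWND * CONNECTIONS
dns_burst = 20 * 150  # even 20 parallel 150 B DNS answers

print(f"TCP start-of-connection burst: ~{tcp_burst} B, no feedback yet")
print(f"20 parallel DNS answers:       ~{dns_burst} B")
assert dns_burst < tcp_burst  # parallel DNS stays well under TCP's burst
```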



Re: [DNSOP] Data model and field names for DNS in JSON or XML

2012-01-18 Thread Nicholas Weaver

On Jan 18, 2012, at 11:14 AM, Paul Vixie wrote:

 On 1/18/2012 7:06 PM, W.C.A. Wijngaards wrote:
 this sounds very cool; is there an internet draft or tech note
 describing the protocol so that others may also implement this?
 
 It exists to bypass deep inspection firewalls, and it works.  The plain
 DNS format as you would use over TCP, but then on an SSL connection, so
 its encrypted by SSLv3.  Uses port number 443 (the https port, no other
 use of that protocol, but then, because of SSL the firewall should not
 be able to tell).
 
 alas, DPI can tell the difference between HTTPS and TLS in a TCP/443
 stream. (the Tor guys told me this.)

However, a DNS query over 443 CAN be made to look fully like HTTPS for the 
purpose of traffic analysis, since the query can easily be constructed in a URL 
with the results returned as an XML or JSON blob.

An active adversary could probe the server and check, but the point is probably 
to evade ignorant adversaries (misconfigurations), not active censorship.
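
A minimal sketch of the query-in-a-URL idea. The endpoint, parameter names, 
and JSON fields below are hypothetical, invented for illustration; they are 
not any deployed service's API:

```python
# The DNS question travels as an ordinary-looking HTTPS GET; the answer
# comes back as a JSON blob.  Endpoint and field names are made up.
import json
from urllib.parse import urlencode

def query_url(name, rrtype="A"):
    return "https://resolver.example/lookup?" + urlencode(
        {"name": name, "type": rrtype})

print(query_url("www.ietf.org"))
# → https://resolver.example/lookup?name=www.ietf.org&type=A

# An illustrative response blob (documentation-range address), and how a
# client would consume it:
blob = '{"name": "www.ietf.org", "type": "A", "ttl": 300, "data": ["192.0.2.1"]}'
answer = json.loads(blob)
print(answer["data"][0], "TTL", answer["ttl"])
```

To passive traffic analysis this is indistinguishable from any other HTTPS 
fetch; only an active prober that queries the server itself could tell.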




Re: [DNSOP] A new appoarch for identifying anycast name server instance

2011-09-30 Thread Nicholas Weaver



Your technical comments:

 b:  These should have a TTL of 0 seconds and/or support a
 prepended, cache-busting wildcard.
 
 Loss of synchronization can occur between cached normal data
 and uncached identification data. And, as I already mentioned
 what is broken may be the caching server.

Correct.  But this is why you need to have queries that check the
caching server AS WELL.  The CHAOS queries are useful here, as
are queries for the cached normal data, and queries which infer
glue policy, so you can know whether and which cached normal data
is being used.


 OTOH, identification by ICMP is up to date save RTT.
 
 How can one generate an ICMP on the path from the resolver's
 outbound interface to the authority, and receive the
 response, without access to the resolver?
 
 Ask one's resolver operator to do so. He will investigate
 what's wrong and may contact an anycast server operator
 if he think there is a real problem for which his resolver
 is not responsible.

How am I, in building an automated tool designed to diagnose
as many problems as possible, supposed to ask a HUMAN at
A SEPARATE SITE to conduct MANUAL queries on my tool's behalf?

?

The major use I see for these queries is in automatic debugging, 
not human intervention, to understand if there is a problem which
will require human intervention.

Your IP layer solution only works for human intervention, and
only starting with humans which aren't initially in the debugging
process.


 As Paul Vixie can not accept all the reports from all the end
 users, aggregation through resolver operator path is the way
 for scalable operation.

But until you can generate queries to test the path, how are
you supposed to know where to start looking for the problem?

As a builder of tools, I need to be able to test all the paths
I possibly can that might affect a user's traffic: directly when
possible, and by inference through queries like these when not.



E.g., I already have tests that can determine whether it is the recursive 
resolver that has problems with fragmentation or large responses.  

Yes, from the client standpoint this limits the fixes that can be 
applied, but automated tools on the client need to know this 
information in order to know how to react to the problem.

(If I wanted to, I could even identify the hop for the firewall on the
resolver that has this problem, but I don't want to build that test,
because I'm lazy and it's sufficient, for my current purposes, to know
it's the resolver that is broken.)


This is similar information:  Information a CLIENT can use to
ATTEMPT to diagnose problems elsewhere in the network by inference.
It may not be perfect, but it will tell the client enough to know
who to talk to.


E.g., per your criticism, it could be the resolver, not the path
between the resolver and the authority, that's broken.  But even that
is useful information, AND additional queries may help: e.g., what
is the TTL on the cached information the resolver is returning
when you query it?







The Old Fashioned Mail Reader Flame War below:  Everyone else
just stop reading now and save your eyeballs.

I'm including it so it is on the record, but it's below so everyone
else can ignore it.

 As an addition, your headers suggest you are using
 Thunderbird.  I checked Thunderbird 6 on OS-X this morning,
 it word wraps unstructured text flawlessly.  Please ensure
 that you haven't mistakenly turned on a mis-formatting
 feature.
 
 I use my mail readers with my own configuration both for
 English and Japanese (where ASCII space characters are
 basically not used) mails.

You have deliberately misconfigured your tool to ignore critical
formatting information for ASCII text.

My suggestion is to reinclude spaces, but have the spaces be
in a much smaller font.  Your on-screen presentation should
be the same, but it is likely that your displayer will
break on the mini-spaces, providing a word-wrap to your
desired window width.



Also, note that Format=flowed causes more problems than it solves:

a)  It messes up presentation on a much LARGER population of
mail clients.

b)  It incorrectly modifies formatting!  You can not properly cut and
paste blocks of code or other items into mail messages, as it ends up
destroying the real formatting.

c)  Format=flowed is NOT required.  It is an optional feature that
only some mail composers bother with.  Notably the biggest clients
DO NOT send it that way:


Outlook neither sends nor receives it properly to my knowledge.  So
sending such email breaks a HUGE number of clients.

Mac mail receives it properly but does not send it after concluding
that it was breaking more than it was fixing.

Gmail's web client does not send it, instead mangling plain text
to 72 columns by default.  This is even worse: their solution
will ensure that not only will the text not display wide when
the user is on a wide device, but that it will display badly and
narrow when the user is on a device like a smartphone.


Standards which 

Re: [DNSOP] A new appoarch for identifying anycast name server instance

2011-09-30 Thread Nicholas Weaver

On Sep 30, 2011, at 5:56 PM, Masataka Ohta wrote:

 Nicholas Weaver wrote:
 
 Correct.  But this is why you need to have queries that check the
 caching server AS WELL.  The CHAOS queries are useful here, as
 are queries for the cached normal data, and queries which infer
 glue policy so you can know if/what the cached normal data is being
 used.
 
 Don't solve a simple problem in such a complex way.

If it can affect users, it should be testable from the user's system if 
possible.

 Ask one's resolver operator to do so. He will investigate
 what's wrong and may contact an anycast server operator
 if he think there is a real problem for which his resolver
 is not responsible.
 
 How am I, in building an automated tool designed to diagnose
 as many problems as possible,
 
 That's your fundamental misunderstanding.
 
 Professionals use simple tools. That's the only way to solve
 complex, beyond tool builder's imagination, problems in the
 real world.

No, professional tool-builders benefit from building a full, rich suite of 
tools which combine many tests.  And, if done right, these become favorite 
tools of professionals as well as amateurs.  [1]


Each test, on its own, can be done as a simple tool: e.g., what's the exact 
DNS PMTU for the authority-to-resolver path?  You would be welcome to build a 
special command-line client to do so.  


But in the end, you really do have to test as many things as possible, in as 
automatic a way as possible, if you want to

a:  Find out what problems an arbitrary end user is facing.  E.g., your 
mother is complaining about her Internet connection.  Do you really want to 
walk her through a set of 60+ separate tests in the debugging flow?

or

b:  Make the network fix itself.

 As Paul Vixie can not accept all the reports from all the end
 users, aggregation through resolver operator path is the way
 for scalable operation.
 
 But until you can generate queries to test the path how are
 you supposed to know where to start looking for the problem?
 
 First, login to the caching server. Rest depends on internal
 details of the server. There may be some tools available
 on the server.

Your attitude seems to be: "This is ONLY a problem from the point of view of 
the resolver operator."

I disagree.


For all anycasted authorities who don't have your attitude, this seems a very 
simple and easy convention: it costs a trivial amount of effort, and may be 
quite useful.  There are no new RRTYPEs and no major changes, just a few 
records customized for each instance by a startup script.

Those anycast authorities which take your viewpoint can ignore it.


[1] A note on Netalyzr: we do take requests.  E.g., we're looking at adding 
tests for Olafur's child-sticky problem and tests for induced stickiness, and 
we have added port-filtering tests to address specific VPN tools used by 
colleagues.
 

So if you want us to roll in additional tests, we do consider it, for all those 
who actually find a multi-function tool a useful service.



Re: [DNSOP] A new appoarch for identifying anycast name server instance

2011-09-29 Thread Nicholas Weaver

On Sep 29, 2011, at 4:05 AM, Masataka Ohta wrote:
 that happens sometimes.  however, i often end up in an email conversation 
 with
 a problem reporter, and i often ask them to run certain dig commands.  so,
 even if i can't reach a recursive server, a feature like this can still help
 me.
 
 It may work for you if you don't receive too much wrong requests.
 
 For scalable management, however, what you need is call center
 operators as a firewall.

And we're already seeing today, and expect more in the future, systems where 
the front-line support instructions include "run a one-click or two-click 
tool", rather than "run dig".


As an author of such tools, I strongly support this proposal, as the basic 
philosophy of these tools are:

1:  Discover a common problem

2:  Develop a manual test that understands that problem

{this is the "ask the user to run dig" method.}

3:  Wrap up an automatic version of the test into the comprehensive suite...

We have already seen that 3 is very powerful with Netalyzr: at least one 
on-line game has adopted Netalyzr as their debugging tool of choice for more 
advanced problems.


The information that this proposal provides, through the use of a very simple 
convention, would be an aid in debugging subtle anycast problems over paths 
that neither the user NOR the anycast operator can easily access otherwise.  

Yes, the end USER probably doesn't, and shouldn't, care about such information, 
but it must be obtainable from the end user's vantage point if you want to 
enable such tools to debug DNS anycast issues.



The only additions I'd make are two extra keywords, medium- and long-, 
prepended to the query, and unicast-ip.  

The length keywords should return the same information, but with padding in 
the TXT records to packet lengths of approximately 1100 B and 1800 B.

The reason for this addition is to enable debugging of individual paths for DNS 
MTU issues.
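
A sketch of how an authority's startup script might generate such padding. 
The 100 B allowance for headers, name, and other overhead is a placeholder 
assumption; a real script would measure its actual response sizes:

```python
# Build TXT character-strings that pad a response up to a target packet
# size.  TXT RDATA is a sequence of length-prefixed strings of at most
# 255 bytes each, so the padding has to be chunked.

def padded_txt_strings(target_packet, fixed_overhead=100):
    """Chunks of padding so the response lands near target_packet bytes."""
    pad = max(target_packet - fixed_overhead, 0)
    chunks = []
    while pad > 0:
        n = min(pad, 255)
        chunks.append(b"x" * n)
        pad -= n
    return chunks

medium = padded_txt_strings(1100)   # the medium- probe
long_ = padded_txt_strings(1800)    # the long- probe

# Each character-string also costs one length byte on the wire.
wire = sum(len(c) + 1 for c in medium)
print(len(medium), "strings,", wire, "B of TXT RDATA for the medium probe")
```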


Meanwhile, unicast-ip should return an A record for a unicast IP address of 
the server.

The reason for this is to assist tools which can look up A records but not TXT 
records.  (E.g., the Java API allows easy lookup of IP addresses but doesn't 
allow grabbing of TXT records, so unless the Java applet is contacting the 
server directly, server identification can be harder.)



Re: [DNSOP] A new appoarch for identifying anycast name server instance

2011-09-29 Thread Nicholas Weaver

On Sep 29, 2011, at 7:40 AM, Masataka Ohta wrote:
 
 And we're already seeing today, and expect more in the future,
 systems where the front-line support instructions include
 run a one-click or two-click tool, rather than run dig.
 
 It means those who can use "run a one-click or two-click tool"
 have no idea on how to bypass intermediate entities, which
 means "call center operators as a firewall" is definitely
 necessary.

I think you're missing something subtle here, both in this comment and in the 
"do it in the IP layer" comment.

Well-constructed tools like Netalyzr are able to infer properties of paths 
that they don't have direct access to, based on how traffic is passed through 
them: making sure that the traffic will reveal the desired information.

This proposal allows debugging of the recursive-resolver-TO-anycast-authority 
path, a path which the user AND the anycast operator do not otherwise have 
direct access to.  Only the recursive resolver operator has access to that 
path, and for much debugging, the recursive resolver OPERATOR is not a 
participant in the process.



And the end user running the tool, combined with the tech support person on 
the other end who sees the results, CAN often bypass the broken intermediate 
entities, depending on what results the tool spits out:


a:  If it's their NAT or local CPE being very lame (blocking requests AND 
providing a bad proxy), tell the user to replace it.


b:  If the CPE is giving its own lame proxy but can be bypassed, instruct the 
user how to use Google Public DNS: problem solved for the user without needing 
a forklift upgrade.


c:  If the recursive resolver itself is being lame (e.g., how Earthlink's was 
the other day), either instruct the user how to bypass it, OR start applying 
various pressures to the resolver operator to get it fixed ASAP.


d:  If it's a path problem between the recursive resolver and the authority, 
tech support escalates it internally.


a, b, and c you can get today with some care (we don't package it up in 
Netalyzr, but we can distinguish between the three cases in the data), but d 
is hard, and that requires tools such as this proposal.



 PS
 
 Before developing tools, you should better learn to wrap
 your lines well below 72 characters.


That your mail reader can't word wrap properly on received messages is not my 
problem.  

Word wrapping MUST be done on the recipient side, not on the sender side, 
unless you want to maintain ridiculous conventions like "text lines are at 
most 72 characters, monospaced", which were obsolete two decades ago.


And in this particular case, blame Microsoft.


Apple, in their mailer, for the longest time implemented a standard method, 
format=flowed, intended to please BOTH mail readers that can word-wrap and 
mail readers that can't.  But they dropped this back in 10.6.2, because 
Microsoft never handled it correctly.

Given the choice between pleasing a few recipients who cling to an obsolete 
convention with obsolete tools and pleasing the very large population of 
recipients with a tool unwilling to accept a standard which could please both, 
Apple went with the natural choice: it is the mail reader's responsibility to 
word wrap to the reader's own display parameters.



Re: [DNSOP] A new appoarch for identifying anycast name server instance

2011-09-29 Thread Nicholas Weaver

Note:  The following is manually formatted because
you are incapable of using a modern mail reader OR
have deliberately misconfigured your modern mail reader
OR are reading the message using a buggy archive:

On Sep 29, 2011, at 9:34 AM, Masataka Ohta wrote:

 Nicholas Weaver wrote:
 
 I think you're missing something subtle here, both in this comment and in 
 the Do it in IP layer comment.
 
 I'm afraid it's you.
 
 This proposal allows debuging information about the recursive
 resolver TO anycast authority path, a path which the user
 AND anycast operator do not otherwise have direct access to.
 
 As for subtlety, what if, the information is cached and stale?

Good point, but there are easy solutions:

a:  Do you honestly expect these queries to be common enough 
to be cached?

b:  These should have a TTL of 0 seconds and/or support a 
prepended, cache-busting wildcard.
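
The cache-busting wildcard in (b) could be exercised like this; the base name 
is a placeholder, and this only sketches the client side of the idea:

```python
# Prefix a fresh nonce label so every probe misses the cache and has to
# traverse the full resolver-to-authority path.  The base name is a
# placeholder; the authority would serve it via a wildcard.
import secrets

def cache_busting_name(base="whoami.example.net"):
    return f"{secrets.token_hex(8)}.{base}"   # 16 hex chars, unique per probe

probe_a, probe_b = cache_busting_name(), cache_busting_name()
print(probe_a)
assert probe_a != probe_b  # no two probes share a cache entry
```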


 OTOH, identification by ICMP is up to date save RTT.

How can one generate an ICMP on the path from the resolver's
outbound interface to the authority, and receive the
response, without access to the resolver?

Please tell me how to do so, in a way that is expected
to work, so I can use this in automating some significant 
problem solving.


 I skipped to read rest of your mail, because you have not learned
 to wrap lines properly.

Your inability to use a modern mail client is NOT MY PROBLEM!


Let me reiterate, manually formatted to please you: 


That your mail reader can't word wrap properly on 
received messages is not my problem. 

Word wrapping MUST be done on the recipient side, 
not on the sender side, unless you want to maintain 
ridiculous conventions like text lines are at most 72 
characters, monospaced which were obsolete two 
decades ago.


And in this particular case, blame Microsoft.


Apple, in their mailer, for the longest time implemented 
a standard method, format=flowed, intended to please 
BOTH mail readers that can word-wrap and mail readers 
that can't.  But they dropped this back in 10.6.2, 
because Microsoft never handled it correctly.

Given the choice between pleasing a few recipients who 
cling to an obsolete convention with obsolete tools and 
pleasing the very large population of recipients with 
a tool unwilling to accept a standard which could please 
both, Apple went with the natural choice: it is the mail
reader's responsibility to word wrap to the reader's 
own display parameters.



As an addition, your headers suggest you are using
Thunderbird.  I checked Thunderbird 6 on OS-X this morning,
it word wraps unstructured text flawlessly.  Please ensure
that you haven't mistakenly turned on a mis-formatting 
feature.

If you are complaining because a particular web mail 
archive you are using is not formatting properly, it 
is a bug in the archive generation tool's HTML 
formatting, and should be reported as such.



Re: [DNSOP] A new appoarch for identifying anycast name server instance

2011-09-28 Thread Nicholas Weaver

On Sep 28, 2011, at 5:47 AM, Joe Abley wrote:

 
 On 2011-09-27, at 14:21, Edward Lewis wrote:
 
 We respond honestly to queries for HOSTNAME.BIND, VERSION.BIND, ID.SERVER,
 VERSION.SERVER as well as RFC5001/NSID on L-Root, for example.
 
 It's not a matter of honesty.
 
 No inference intended; what I meant was we let the software report its actual 
 version number, and the actual hostname of the server rather than overriding 
 them (as I've seen some people do).

Just a sampling of some of the version strings we've seen in scanning DNS 
resolvers and authorities:

The name is BIND, James BIND
Enterprise I don't think so captain
13:54 @zarkdav well, one could write a zone file so that it returns a joke
666 the number of the beast...!
A kinky version of course
ALL YOUR BASE ARE BELONG TO US
All we are is dust in the wind
Are you still shivering? Are you still cold? Are you loathsome tonight? Does 
your madness shine bright?
Ash nazg durbatuluk, ash nazg gimbatul, ash nazg thrakatuluk agh burzum-ishi 
krimpatul.
Aye, Carumba!  He's looking at me version string!


(We have even received abuse complaints for querying for version strings!)



Re: [DNSOP] draft-savolainen-mif-dns-server-selection-06.txt

2011-01-17 Thread Nicholas Weaver

On Jan 17, 2011, at 6:25 AM, Ted Lemon wrote:

 On Jan 17, 2011, at 9:22 AM, Andrew Sullivan wrote:
 (RFC 4035, section 4.9.3).  Presumably, then, the stub needs somehow
 to have authenticated the DNS server in question otherwise before
 accepting the claims about signature validation.  I can't think of any
 way to do this under DHCP, but maybe I don't know the protocol well enough.
 
 No, you know the protocol well enough.   This discussion has been making more 
 and more clear to me the need for a DHCP security architecture document.

Why bother?  IMO A better approach is NOT to try to patch DHCP/IPv6 Route 
Advertisements/ARP etc, but just ACCEPT these as insecure.  [1]

On a local broadcast network, the attacker is always a trivial MitM.  Securing 
DHCP, IPv6 Router Advertisements, ARP, etc, won't stop the attacker anyway.  
The only thing that stops the attacker is application protocols which work in 
the face of a MitM.

Yet now its the same issue with DNSSEC for A records against a MitM who's also 
a MitM on final traffic:  The application is a customer of the final result 
from DNS/DHCP/ARP/etc  And that application is either trivially vulnerable 
to a MitM OR doesn't actually need to care that DHCP, ARP, DNS, etc are secure, 
because such insecurity is no different than any other MitM.


[1] Not to mention it may be impossible to make these secure, since they are 
usually 'no initial point of trust bootstrap protocols'



[DNSOP] Question on recursive resolvers to test against..

2010-12-01 Thread Nicholas Weaver
One thing we've observed in Netalyzr is that RFC3597 (handling unknown 
RRTYPEs as opaque binary data) is almost universally ignored.

Does anyone have a good set of open recursive resolvers from different 
vendors that can be queried against for testing?




[DNSOP] Neglect for RFC3597 for 128 <= RTYPES < 256. Should such RTYPES be off limits to allocation?

2010-12-01 Thread Nicholas Weaver

Much to my embarrassment, our Netalyzr test for RFC3597 (unknown RRTYPE 
handling; we used RRTYPE=169 for our testing, as it's unassigned yet a 
convenient mnemonic) was broken for us by the upstream authorities in our path 
without my realizing it:

Bad enough is that all of the authorities for icsi.berkeley.edu and 
berkeley.edu are running BIND of various versions (including the latest), which 
it turns out all return FORMERR for unknown RRTYPE requests where 
128 <= RRTYPE < 256.  [1]  

Now, true, these are 'meta' RRTYPEs (my screwup for using one; only now do I 
realize that), but RFC 5395 does state that meta types may sometimes be queried 
directly (with any processing optional), so you'd hope that, between that and 
RFC3597, it would work.  It didn't.


That was bad.  But it gets worse.  Namely, all ROOTS but h, k, and l 
fail to properly handle an unknown RTYPE in that range:

dig +norecurse TYPE169 txt.aoeuauoe.netalyzr.icsi.berkeley.edu 
@c.root-servers.net

Compare with

dig +norecurse TYPE169 txt.aoeuauoe.netalyzr.icsi.berkeley.edu 
@h.root-servers.net
or
dig +norecurse TYPE256 txt.aoeuauoe.netalyzr.icsi.berkeley.edu 
@c.root-servers.net


As far as I can tell, this lovely bug came about because somewhere a decision 
was made that RFC3597 should not apply to meta RRTYPEs (128 - 255).


So the question becomes:  Should all 128 <= RRTYPE < 256 be marked as 
forbidden for subsequent allocation unless transparent relaying is not 
required?  (That is, ONLY used for DIRECT, UNPROXIED, UNFORWARDED 
communication between two DNS speakers.)
 
Because otherwise these meta RRTYPEs clearly don't work: the installed base of 
BIND is too large to begin with, plus this behavior even extends to the roots!
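
For anyone wanting to reproduce the dig tests above from code, here is a 
minimal sketch of hand-building the TYPE169 query with no DNS library; the 
query ID is arbitrary:

```python
# Hand-build a DNS query packet for an arbitrary QTYPE (here TYPE169),
# suitable for sending over UDP/53 to any server under test.
import struct

def build_query(name, qtype, qid=0x1234):
    # Header: ID, flags 0x0000 (RD off, matching `dig +norecurse`),
    # QDCOUNT=1, AN/NS/AR counts 0.
    header = struct.pack("!HHHHHH", qid, 0x0000, 1, 0, 0, 0)
    qname = b"".join(
        struct.pack("!B", len(label)) + label.encode("ascii")
        for label in name.split(".")) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, 1)  # QCLASS=IN

pkt = build_query("txt.aoeuaoeu.netalyzr.icsi.berkeley.edu", 169)
print(len(pkt), "byte query for QTYPE 169")
# An RFC3597-compliant server answers NOERROR/NXDOMAIN; the broken ones
# return FORMERR (RCODE 1, the low nibble of byte 3 of the response).
```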



[1]  Compare (trimmed for readability):
nweaver% dig +norecurse type169 txt.aoeuaoeu.netalyzr.icsi.berkeley.edu 
@adns1.berkeley.edu

; <<>> DiG 9.6.0-APPLE-P2 <<>> +norecurse type169 
txt.aoeuaoeu.netalyzr.icsi.berkeley.edu @adns1.berkeley.edu
;; ->>HEADER<<- opcode: QUERY, status: FORMERR, id: 39144
;; flags: qr; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;txt.aoeuaoeu.netalyzr.icsi.berkeley.edu. IN TYPE169


with
nweaver% dig +norecurse type16 txt.aoeuaoeu.netalyzr.icsi.berkeley.edu 
@adns1.berkeley.edu

;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43151
;; flags: qr; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; QUESTION SECTION:
;txt.aoeuaoeu.netalyzr.icsi.berkeley.edu. IN TXT

;; AUTHORITY SECTION:
netalyzr.icsi.berkeley.edu. 3600 IN NS  roland.icir.org.

;; ADDITIONAL SECTION:
roland.icir.org.3600IN  A   192.150.187.31

and
dig +norecurse type256 txt.aoeuaoeu.netalyzr.icsi.berkeley.edu 
@adns1.berkeley.edu
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2103
;; flags: qr; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; QUESTION SECTION:
;txt.aoeuaoeu.netalyzr.icsi.berkeley.edu. IN TYPE256

;; AUTHORITY SECTION:
netalyzr.icsi.berkeley.edu. 3600 IN NS  roland.icir.org.

;; ADDITIONAL SECTION:
roland.icir.org.3600IN  A   192.150.187.31


This particular system is running the latest version of bind:

nweaver% dig +short chaos txt version.bind @adns1.berkeley.edu
9.7.2-P2


Re: [DNSOP] Ugly Hack, Step 2: Detection of Problem

2010-04-01 Thread Nicholas Weaver

On Apr 1, 2010, at 7:51 AM, Jason Livingood wrote:

 2 - Describe all the various methods and tactics by which end user 
 brokenness can be detected.  This may include website-based detection, 
 DNS-query-based detection, or a variety of other methods.
 
 Suggestions: (Nick Weaver I think had one – pasted below)
 
 Also, you can make an EASY in-browser Javascript check.
 
 Load 3 images in a hidden DIV.  These images should ideally be set to be 
 non-cached and have a cache-buster in the URL (akin to how Google Analytics' 
 hidden GIF is loaded: it contains a cache-buster in the URL).
 
 One is hosted on an IPv4 only site, one on a IPv4/IPv6 dual stack, and one on 
 a IPv6 only site, and bind onload/onerror to Javascript for reporting.
 
 If the first two load, the host is a successful V4 host.
 
 If only the first loads, the host has the described problem with a link local 
 V6, and needs to be patched: have the Javascript notify the user.


Addition I just realized:

Additionally, if the first loads immediately but the second loads delayed by 
1-2+ seconds (this can be measured by the server, if not directly in Javascript 
alone), then it is a timeout problem rather than a complete-failure problem, 
and should also be reported.



Re: [DNSOP] FYI: DNSOPS presentation

2010-03-31 Thread Nicholas Weaver

On Mar 31, 2010, at 6:42 AM, Edward Lewis wrote:

 At 3:28 -0400 3/31/10, Igor Gashinsky wrote:
 
 You are absolutely right -- it's not a DNS problem, it *is* a host
 behavior problem. The issue is that it takes *years* to fix a host
 behavior problem, and we need to engineer and deploy a fix much sooner
 then that (hopefully about a year before the v4 exhaustion date). Given
 that, is there something other then DNS that can address it better/faster?
 
 On topic of DNSOP: Reversing a fix slipped into the DNS for some other 
 segment's problem will take years to remove.  We still have round-robin in 
 the DNS, for example, added to help load balancing for an application (mail?) 
 way way back in time.  Today round-robin is a pain for DNSSEC (for example).
 
 This is off-topic for DNSOP: I don't believe it takes years to fix host 
 behavior problems.  Yes, some hosts will run the same software they have 
 today for years to come.  But most won't last that long. If you don't fight 
 the problem in the right place, you won't eradicate the issue.

Also, you can make an EASY in-browser Javascript check.

Load 3 images in a hidden DIV.  These images should ideally be set to be 
non-cached and have a cache-buster in the URL (akin to how Google Analytics' 
hidden GIF is loaded: it contains a cache-buster in the URL).

One is hosted on an IPv4-only site, one on an IPv4/IPv6 dual-stack site, and 
one on an IPv6-only site, and bind onload/onerror to Javascript for reporting.

If the first two load, the host is a successful V4 host.

If only the first loads, the host has the described problem with a link local 
V6, and needs to be patched: have the Javascript notify the user.

If only the second and third loads, the host is V6 only (HA!).


Put such a Javascript test on the Yahoo or Google homepage: it is small (you 
could compress it down to probably a couple hundred bytes), and you can make it 
effectively non-blocking on rendering (put it in an infinitesimal iframe), so 
it doesn't impact page load times.

Voila, you not only FIND the hosts that are the problem, but notify them to FIX 
the problem, especially since it is Yahoo that is specifically worried about 
clients with this problem.



Re: [DNSOP] FYI: DNSOPS presentation

2010-03-31 Thread Nicholas Weaver
A far better solution would be to instead segregate with different DNS server 
IPs.  

ISPs already have multiple DNS resolvers (e.g., non-wildcarding resolvers, 
DNSSEC test resolvers).  And the ISP knows whether it's giving out a v6 address 
to a client and routing IPv6 for that client.

And even then, I really wonder about the benefit.


I also object somewhat to the claim that you can't necessarily diagnose the 
cause.  With a combination of Java and JavaScript, plus user-agent examination, 
you probably can to a great degree, especially if you can convince the user to 
say OK to the signed applet.  But even without that, I'd suspect that with 
only a few root causes you could build a nice auto-diagnoser.




Re: [DNSOP] FYI: DNSOPS presentation

2010-03-30 Thread Nicholas Weaver

On Mar 30, 2010, at 8:56 AM, Mohacsi Janos wrote:

 Dear All,
 
 Sorry for crossposting.
 
 
 This proposal is the opposite with the principle how the DNS is developed a 
 while ago. The DNS is a highly distributed, hierarchical, autonomous, 
 reliable database with very useful extensions. This modification is proposing 
 lying about the existence of the record
 
 The modification is proposed to hide the database record that is used for 
 communication. I am not favor such a modification since:
 
 1. I think we need evidence that the majority of the AAAA queries are going 
 via IPv6 (if the client has working IPv6 and the DNS zones have the necessary 
 AAAA records for the zones).

This is clearly not the case.  

Linux clients in particular seem to always do AAAA as well as A queries: 
Netalyzr is an IPv4 service (we detect IPv6 usage; it's only a couple of 
percent, and Netalyzr has a very geek-biased dataset), but there are a LOT of 
clients which are doing v6 (AAAA) queries on DNS as well as v4.

E.g., a colleague had very problematic network connections from his parents' 
home back to ICSI, because the stupid NAT's built-in DNS proxy (until he 
reflashed an update) was blocking AAAA queries.  

His Linux host would do an A and an AAAA query and, until the AAAA query timed 
out, delay creating connections, e.g., through SSH, web browsing, etc.  An 
amazingly painful experience for him until he diagnosed it.



Re: [DNSOP] FYI: DNSOPS presentation

2010-03-30 Thread Nicholas Weaver

On Mar 30, 2010, at 9:15 AM, Andrew Sullivan wrote:
 I am not among those who think that the number of clients involved
 with this is insignificant.  I know that's something people sometimes
 hear, but the absolute number of people involved does make this a real
 problem.  I just don't think that the right answer is to break
 perfectly well-functioning systems for everyone else in order to work
 around clients that are implemented wrong.

Agreed on both counts.  

All I was really pointing out is that most v6 (AAAA) queries are done over v4 
transport, and you can't tell who does or does not use v6 to reach the DNS server.



Re: [DNSOP] Should root-servers.net be signed

2010-03-20 Thread Nicholas Weaver

On Mar 20, 2010, at 1:50 AM, George Barwood wrote:
 Enshrining "thou shalt never fragment" into the Internet architecture is 
 dangerous, and will cause far MORE problems.  If we had something critical to 
 the infrastructure which regularly exercised fragmentation, we wouldn't have 
 this problem where 10% of the resolvers are broken WRT fragmentation.
 
 I'm not suggesting that.  If the higher-level protocol has definite security 
 checks, or security is not important, fragmentation is OK.  But for DNSSEC 
 neither of these is true.

Then what you're arguing here is "don't request stuff with DO unless you are 
willing to validate."  Given that the exercise of DO requesting is done (the 
firewalls have figured it out): drop DO on unvalidated traffic, don't drop 
fragmentation.



Re: [DNSOP] Should root-servers.net be signed

2010-03-19 Thread Nicholas Weaver

On Mar 19, 2010, at 12:21 AM, George Barwood wrote:
 I suggest the default value in BIND for max-udp-size should be 1450.
 This appears to be best practice.
 Since few zones are currently signed, it's not too late to make this change.
 Later on it may be more difficult.


Actually, I'd say this ONLY for the root and TLDs.  For the rest, the onus 
should be on the resolver to discover that it can't handle fragmentation and 
adjust the MTU appropriately.



Re: [DNSOP] Should root-servers.net be signed

2010-03-19 Thread Nicholas Weaver

On Mar 19, 2010, at 6:09 AM, George Barwood wrote:

 
 - Original Message - 
 From: Nicholas Weaver nwea...@icsi.berkeley.edu
 To: George Barwood george.barw...@blueyonder.co.uk
 Cc: Nicholas Weaver nwea...@icsi.berkeley.edu; Matt Larson 
 mlar...@verisign.com; dnsop@ietf.org
 Sent: Friday, March 19, 2010 12:33 PM
 Subject: Re: [DNSOP] Should root-servers.net be signed
 
 On Mar 19, 2010, at 12:21 AM, George Barwood wrote:
 I suggest the default value in BIND for max-udp-size should be 1450.
 This appears to be best practice.
 Since few zones are currently signed, it's not too late to make this change.
 Later on it may be more difficult.
 
 
 Actually, I'd say this ONLY for the root and TLDs.  For the rest, the onus 
 should be on the resolver to discover that it can't handle fragmentation and 
 adjust the MTU appropriately.
 
 There are advantages besides avoiding lost messages.
 It also prevents spoofing of fragments, and limits amplification attacks.

It doesn't limit amplification attacks by much if at all, and spoofing of 
fragments is not likely to be happening in large responses, because large 
responses will almost invariably be due to DNSSEC.

Since 90% CAN handle fragments, those 90% SHOULD be able to use fragments, 
especially since the broken 10% will see higher lookup latency, NOT full 
failure to resolve.



Re: [DNSOP] Should root-servers.net be signed

2010-03-19 Thread Nicholas Weaver

On Mar 19, 2010, at 12:01 PM, George Barwood wrote:
 
 Anyway, do we yet agree that 1450 is the best default for max-udp-size, and 
 that higher values are dangerous?

No:  I agree it is the proper default for the TLD authorities and roots, but 
for everything else, the higher value should be what the resolver requests.

Enshrining "thou shalt never fragment" into the Internet architecture is 
dangerous, and will cause far MORE problems.  If we had something critical to 
the infrastructure which regularly exercised fragmentation, we wouldn't have 
this problem where 10% of the resolvers are broken WRT fragmentation.



Re: [DNSOP] m.root-servers.net DNSSEC TCP failures

2010-03-17 Thread Nicholas Weaver

On Mar 17, 2010, at 5:23 AM, Jim Reid wrote:

 On 17 Mar 2010, at 11:28, George Barwood wrote:
 
 It seems that  m.root-servers.net is now serving DNSSEC, but does not have 
 TCP, so the following queries all fail
 
 Well these queries work just fine for me. Perhaps your problems are caused by 
 local misconfiguration such as a broken CPE/middleware box or DNS proxy?

I think it's that it's aggressively multihomed, and ONE of the instances is not 
working with TCP.

My home net happily lets through anything on port 53, TCP or UDP, and I'm 
seeing the same symptoms, but a little more data:

I think there may be something more wrong with that instance that's causing the 
TCP failures, so it might be something more general:

--- m.root-servers.net ping statistics ---
16 packets transmitted, 5 packets received, 68.8% packet loss
round-trip min/avg/max/stddev = 223.651/1423.662/.722/747.819 ms

--- l.root-servers.net ping statistics ---
7 packets transmitted, 7 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 85.971/87.705/89.645/1.164 ms



Re: [DNSOP] Should root-servers.net be signed

2010-03-08 Thread Nicholas Weaver

On Mar 8, 2010, at 7:27 AM, Paul Wouters wrote:

 On Mon, 8 Mar 2010, Joe Abley wrote:
 
 Our[*] reasoning so far with respect to signing ROOT-SERVERS.NET can I think 
 be paraphrased as follows:
 
  - if we sign ROOT-SERVERS.NET it will trigger large responses (the RRSIGs 
  over the A and AAAA RRSets) which is a potential disadvantage
 
 Is it? Is DNSSEC that bad then? Why did we design it that way?
 
 - however, since the root zone is signed, validators can already tell when 
 they are talking to a root server that serves bogus information
 
 How does that work without ROOT-SERVERS.NET being signed with a known trust 
 anchor?
  How does my validating laptop know that the current wifi is not spoofing 
  a.ROOT-SERVERS.NET to some local IP?



If your ISP is acting as a MitM on DNS, it's acting as a MitM on everything, so 
DNSSEC buys you f-all if you are using it for A records, because any app using 
that A record either doesn't trust the net or is trivially p0wned by the ISP.

DNSSEC is ONLY useful for things like TXT and CERT records fetched by a 
DNSSEC-aware cryptographic application, and that would require a valid 
signature chain from the root(s) of trust (either preconfigured or on a path 
from the signed root) validated on the client, so an imitation 
a.root-servers.net won't matter, as it won't be able to provide improper data.


So in your example, root-servers.net doesn't need to be signed, and signing it 
buys no increase in trust: even if it IS signed, that conveys no value about 
the results returned from it, because the signatures are not along the trust 
hierarchy for DNSSEC, which follows the name path, not the lookup path.



Remember, DNSSEC is a PKI, with only one path of trust which matches the name 
path (so, for *.foo.bar.com, the trust path is foo.bar.com, bar.com, .com, ., 
either to a signed root, a signed TLD, or a trust anchor configured for either 
bar.com or foo.bar.com) [1].  You MUST be able to validate along the path (the 
transitive trust of a PKI), but you ONLY need to validate along the path (the 
limited trust of a PKI).

Thus although root-servers.net is a domain involved in the resolution of 
anything for *.foo.bar.com (it's on the resolution path), it is not on the trust 
path, so whether it is signed or not has no impact on whether the chain up will 
validate cryptographically.
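That name-path/lookup-path distinction can be sketched in a few lines (a 
hypothetical helper, not any validator's real API): the chain a validator must 
verify climbs the name, label by label, up to the root, and root-servers.net 
never appears on it:

```python
def trust_path(name: str) -> list[str]:
    """Ancestor zones a DNSSEC validator must trust for `name`.

    This is the NAME path: for www.foo.bar.com it is
    foo.bar.com -> bar.com -> com -> .   Servers consulted during
    resolution (e.g. root-servers.net) never appear here.
    """
    labels = name.rstrip(".").split(".")
    chain = [".".join(labels[i:]) for i in range(1, len(labels))]
    chain.append(".")  # terminates at the signed root (or a trust anchor)
    return chain
```

So whether root-servers.net carries signatures changes nothing in this list.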



QED:  Signing root-servers.net should be done for completeness, but only AFTER 
.net is signed, because really, it's a signature path that doesn't actually 
matter and SHOULDN'T actually be validated for normal lookups [2], but only 
when the values are directly requested by a client!



[1] And this is why I want DNSSEC: it IS a PKI and should be used as such, but 
one with a much cleaner trust path than the SSL-model PKI, and without adding 
any NEW trust paths to the system as this is the same trust path needed for 
normal DNS.

[2] I really don't like DNSSEC's reliance on the recursive resolver to do 
signature validations, because there really is no right answer for what the 
recursive resolver should do on cryptographic failures (contrast with the 
client where there are good answers).  

But if the recursive resolver IS validating DNSSEC, it MUST ONLY validate the 
path of trust for the names requested by the client, simply to minimize 
spurious and irrelevant cryptographic failures.  If the recursive resolver is 
validating the signatures of root-servers.net for internal use, it is doing it 
wrong: something which reduces reliability but doesn't increase security.




Re: [DNSOP] Should root-servers.net be signed

2010-03-08 Thread Nicholas Weaver

On Mar 8, 2010, at 9:31 AM, Thierry Moreau wrote:

 Joe Abley wrote:
 On 2010-03-08, at 10:27, Paul Wouters wrote:
 On Mon, 8 Mar 2010, Joe Abley wrote:
 
 Our[*] reasoning so far with respect to signing ROOT-SERVERS.NET can I 
 think be paraphrased as follows:
 
 - however, since the root zone is signed, validators can already tell when 
 they are talking to a root server that serves bogus information
 How does that work without ROOT-SERVERS.NET being signed with a known trust 
 anchor?
 Because validators are equipped with a trust anchor for the root zone's KSK.
 An unsigned ROOT-SERVERS.NET might leave validators talking to a bogus root 
 server, but they won't believe any of the signed replies they get from it.
 
 That is a narrow view of what a bogus root server may do.  It may also 
 replicate every official root signature (basically signed delegations) and 
 spoof unsigned delegations.
 
 Your enemy may make a bogus signed TLD nameserver with the same strategy so 
 that unsigned delegations to SLD can also be spoofed.
 
 If DNSSEC usage includes validation of A/AAAA, then signed A/AAAA records for 
 nameservers at the root and TLD seem to provide some (arguably marginal but 
 not null) integrity assurance for unsigned domains.
 
 That's just an observation on the above reasoning. A full pros and cons 
 analysis is obviously more encompassing.

But in order to BECOME the bogus nameserver, the attacker must already be a 
MitM, so the attacker can just directly spoof any non-validated reply: they 
don't need to spoof replies in order to become the bogus nameserver, they can 
spoof the unsigned replies directly.



Re: [DNSOP] Should root-servers.net be signed

2010-03-07 Thread Nicholas Weaver

On Mar 7, 2010, at 4:47 AM, Masataka Ohta wrote:

 Jim Reid wrote:
 
 The Bad Guy won't have the private keys,
 
 Wrong.
 
 While the Bad Guy as an ISP administrator won't have the private
 keys, the Bad Guy as a zone administrator will have the private
 keys.
 
 That is, DNSSEC is not secure cryptographically, which is another
 reason why not to deploy DNSSEC.

I don't see what your argument here is.

DNSSEC is a PKI in disguise, and like ANY PKI, you still depend on trust up 
the hierarchy, as that is exactly how a PKI is supposed to work: one level up 
says something about the levels down.

But DNS has ALWAYS depended on trust-up-the-hierarchy anyway, so this aspect of 
DNSSEC doesn't increase the level of trust required in DNS; it just codifies it 
in cryptographic terms, so there is no trust (that isn't made explicit) beyond 
the scope up the hierarchy.

This is actually why DNSSEC is useful: it IS a PKI, whose hierarchical nature 
already matches the existing naming hierarchy.  In the end, signing A 
records is useless.  But signing TXT and CERT records will be incredibly 
useful, if validated on the end-host application.

Additionally, since it would be the end-host application validating those 
signatures, it can enforce that there must exist a signature path from the 
root (aka, it is actually a PKI). [1]


But since you can't easily establish trust without trust above (short of 
manual configuration or some other finagling), root-servers.net should only be 
signed after .net is signed, at this point in the rollout.  And any PROPER 
usage of DNSSEC won't rely on root-servers.net ever being signed at all, 
because it's only on the resolution path for resolvers.


[1] Thus, you don't have to worry about also needing the resolvers' resolution 
path signed, or about the DoS attack where a MitM strips signatures as part of 
changing DNS results.


Re: [DNSOP] L-Root Maintenance 2010-01-27 1800 UTC - 2000 UTC

2010-01-28 Thread Nicholas Weaver

On Jan 28, 2010, at 8:59 AM, Matt Larson wrote:

 On Thu, 28 Jan 2010, Mark Andrews wrote:
  The DNSKEY RRset size seems small for testing.  We really should
  be looking at the biggest key set sizes that occur during
  simultaneous ZSK/KSK rollovers.  Hopefully that is in the planning.
 
  The design allows for ZSK rollovers at calendar quarter boundaries and
  KSK rollovers in the middle of a quarter, which are intentionally
  non-overlapping so that there are never more than three keys in the root
  DNSKEY RRset.  (Please see the diagram on page five of
 http://www.root-dnssec.org/wp-content/uploads/2009/12/draft-icann-dnssec-arch-v1dot2dot1.pdf,
 which Tony already referred to.)
 


Stupid question on Figure 2:  What is the approximate size of responses during 
these different periods?  In particular, do any particular magic limits in 
the network (namely the 1500B Ethernet MTU, the 1492B PPPoE MTU, the likely- 
to-be-in-path MTU hole of 1480-1500B, or the somewhat common 1280B EDNS buffer 
size) get hit?
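As a back-of-the-envelope check (plain IPv4/UDP header arithmetic, not 
measurements of the actual signed root): subtract the 20-byte IPv4 and 8-byte 
UDP headers from each link MTU to get the largest unfragmented DNS message:

```python
IP_HDR, UDP_HDR = 20, 8  # IPv4 + UDP header bytes (no IP options)

def max_dns_payload(link_mtu: int) -> int:
    """Largest DNS/UDP message that avoids IP fragmentation on this link."""
    return link_mtu - IP_HDR - UDP_HDR

ethernet = max_dns_payload(1500)  # plain Ethernet -> 1472
pppoe = max_dns_payload(1492)     # PPPoE -> 1464
# Note the 1280 EDNS figure is different in kind: EDNS advertises a DNS
# payload size directly, so it already excludes the IP/UDP headers.
```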




Re: [DNSOP] Priming query transport selection

2010-01-14 Thread Nicholas Weaver

On Jan 14, 2010, at 7:58 AM, Patrik Fältström wrote:
 
 Please do not start talking about enforcing some fixed limit that we will 
 laugh about 10 years from now... And if you talk about a limit, pick 
 something very large (like 65535 that seems to be already chosen).
 
 It is enough problems with the 512 limit of today. I do not want to have the 
 same problems when we pass 4096.
 
 Implementations should be free to choose an implementation limit smaller if 
 they want to (and signal that in the EDNS0 size), but please do not say that 
 max value on EDNS0 size will forever be 4096 or something similar.
 
 Be careful with the wording...

Except that EDNS0 MTU is closely coupled with the UDP protocol and its 
unreliable nature: this message MTU is irrelevant for TCP or another reliable 
protocol.

It is highly unlikely that the network's MTU will expand beyond 1500B:  There 
is too much Ethernet, and larger MTUs don't really benefit things anyway, 
because the overhead reductions of going to a higher MTU are near zero 
(Amdahl's law).  

Which means the number of fragments, which ALL need to be received correctly, 
goes up linearly with the size of the message.

Even WITH a larger MTU, bit errors become more common as datagrams grow.  So, 
at a minimum, you'd expect many more failures, dropped packets, etc., with a 
40,000B datagram than with a 4,000B datagram.  And DNS over UDP is already 
unreliable enough, at least when you consider it all the way to the end host 
with a reasonable timeout on lookups.

Thus given the nature of the UDP protocol, it is highly unlikely that you'd 
ever want to do ~10K+ byte UDP datagrams.
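To put a number on the linear-fragments point (illustrative figures only, not 
measurements from this thread): if each fragment survives independently with 
probability p, an n-fragment datagram only arrives when all n do:

```python
def delivery_prob(msg_bytes: int, frag_payload: int, p_frag: float) -> float:
    """P(an entire UDP datagram arrives), assuming each of its fragments
    survives independently with probability p_frag."""
    n_frags = -(-msg_bytes // frag_payload)  # ceiling division
    return p_frag ** n_frags

# With a made-up 1% per-fragment loss and ~1480B fragment payloads:
p_4k = delivery_prob(4_000, 1480, 0.99)    # 3 fragments must all arrive
p_40k = delivery_prob(40_000, 1480, 0.99)  # 28 fragments must all arrive
```

The 40,000B datagram fails several times as often as the 4,000B one under the 
same per-fragment loss rate.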



Re: [DNSOP] Priming query transport selection

2010-01-13 Thread Nicholas Weaver

On Jan 13, 2010, at 2:41 PM, Olafur Gudmundsson wrote:

 At 16:16 13/01/2010, Jim Reid wrote:
 On 13 Jan 2010, at 20:49, Alex Bligh wrote:
 
 Current operational practice would result in DO clear packets
 fitting within 4096 bytes, so no need for TCP when DO is clear.
 
 I don't think that's always the case Alex. See the lengthy discussion
 in this list about datagram fragmentation and broken middleware boxes
 that don't grok EDNS0. [Or do EDNS0 with a 512 byte buffer size.
 Sigh.] Mind you, some of those boxes will also barf on TCP DNS traffic.
 
  The EDNS0 RFC restricts EDNS0 to 4096 bytes; a number of implementations
  will not send more even if the client asks for it.  Firewalls will
  enforce this.

We should have some additional numbers for this with the new run (we just 
released an updated version of Netalyzr, http://netalyzr.icsi.berkeley.edu ).  
Among the new tests is a detailed check for actual DNS MTU rather than 
advertised DNS MTU.

Basically, you can't RELY on UDP packets over 1500B being received by DNS 
resolvers when requested, but it works a large amount of the time.

So basically, I'd have the model of "try at EDNS 4000, fall back to EDNS 1280, 
fall back to TCP," and cache whether the resolver needs to do this for all 
authorities (because its own side is fragment-broken) or just for particular 
remote authorities.
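A minimal sketch of that fallback model, with hypothetical `send_udp` / 
`send_tcp` transport hooks (injected here purely so the logic is 
self-contained; this is not any real resolver's API):

```python
def resolve_with_fallback(qname, send_udp, send_tcp):
    """Try a large EDNS buffer over UDP, shrink it on failure, then TCP.

    send_udp(qname, edns_bufsize=...) returns a reply dict or None on
    timeout (e.g. a lost fragment); a reply with truncated=True (TC=1)
    also forces a step down.
    """
    for bufsize in (4000, 1280):
        reply = send_udp(qname, edns_bufsize=bufsize)
        if reply is not None and not reply.get("truncated"):
            return reply, f"udp{bufsize}"
        # Timeout or TC=1: step down to the next transport.
    return send_tcp(qname), "tcp"
```

A real implementation would also cache which step succeeded, per remote 
authority (or globally, when the resolver's own side is fragment-broken), as 
suggested above.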



Re: [DNSOP] Computerworld apparently has changed DNS protocol

2009-11-04 Thread Nicholas Weaver
Question:  Have people been able to estimate how large the signed root 
zone response will be?


I'm assuming it's below the magic 1500B level for standard queries.  Is 
this correct?


Oh, and one thing to watch out for:  Some IP stacks I've noticed will 
set DF on UDP datagrams, even if the datagram is too small to require 
fragmentation onto the local network!


Add this to the list of things DNS operators need to watch out for  
when turning on DNSSEC.
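On Linux, the knob in question is per-socket path-MTU discovery; a hedged 
sketch of how tooling might ask the kernel NOT to set DF (the 
`IP_MTU_DISCOVER` / `IP_PMTUDISC_DONT` names are Linux-specific; other stacks 
differ):

```python
import socket

def udp_socket_no_df():
    """UDP socket that asks the kernel not to set DF on outgoing datagrams.

    The IP_MTU_DISCOVER / IP_PMTUDISC_DONT constants are Linux-only; on
    platforms without them we just return the plain socket unchanged.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    mtu_discover = getattr(socket, "IP_MTU_DISCOVER", None)
    pmtudisc_dont = getattr(socket, "IP_PMTUDISC_DONT", None)
    if mtu_discover is not None and pmtudisc_dont is not None:
        # PMTUDISC_DONT: fragment locally if needed, never set DF.
        s.setsockopt(socket.IPPROTO_IP, mtu_discover, pmtudisc_dont)
    return s
```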




Re: [DNSOP] [dnsext] Re: Computerworld apparently has changed DNS protocol

2009-11-04 Thread Nicholas Weaver


On Nov 4, 2009, at 11:41 AM, Matthew Dempsky wrote:

 On Wed, Nov 4, 2009 at 11:26 AM, bmann...@vacation.karoshi.com wrote:
   The current deployment plan is to stage things to push out large
   responses early - prior to having any actual DNSSEC usable data ...
   ostensibly to flush out DNS MTU problems.


Is this plan to push out large responses indiscriminately, or only in
response to queries with DO=1?


Also, has someone done a study of what the major recursive resolvers do 
on response failures from a root?  Do they go to another root first, or do 
they try a smaller EDNS MTU?


