Re: [DNSOP] on Negative Trust Anchors

2012-04-20 Thread Marc Lampo
Hello,

(at the risk of launching a lot of weekend messages, again,
 - which I read with great interest after some days of absence ...)

The draft on Negative Trust Anchors, section 7 (Use of a NTA),
seems incomplete!

Actually, the validating caching name server is itself only an
intermediate step between
- the authoritative name servers (whose admins may commit errors)
and
- the forwarding name server or resolver on an end device.

And what if that forwarding name server,
or that resolver on an end device, performs validation itself?

If the end client performs validation and is unaware of the NTA,
it is in trouble again!
And the validating caching name server that implements the NTA
cannot pretend the DS record, in the parent, does not exist,
because it cannot provide the appropriate Next Secure data (DNSSEC ...).
(the only one that can remove the DS record is the parent
 - it has the private key to provide the correctly signed Next Secure
data)
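
To illustrate the point: a downstream validator can always fetch the DS RRset
directly from the parent, where it remains present and validly signed no matter
what NTAs an intermediate cache has configured. A minimal sketch, assuming
Python with the dnspython library (nasa.gov and 8.8.8.8 are only placeholder
examples of a signed zone and a resolver):

import dns.message
import dns.query
import dns.rdatatype

# Ask for the DS of a signed delegation, with DNSSEC records included.
query = dns.message.make_query("nasa.gov.", dns.rdatatype.DS, want_dnssec=True)
response = dns.query.udp(query, "8.8.8.8", timeout=5)

for rrset in response.answer:
    # The DS RRset and its RRSIG come back signed by the parent (.gov); a
    # validating end client that sees this DS will keep requiring a valid
    # chain of trust, regardless of any NTA on an intermediate cache.
    print(rrset)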


While I do acknowledge the concern of ISPs that offer validation
to somehow protect their customers in case of a (DNSSEC-only)
problem with one domain or another,
I'm afraid Negative Trust Anchors may introduce other problems.

Together with other commenters on this subject,
I do think there should be some best-practice recommendation
about how to cope with this kind of problem.


Kind regards,

Marc Lampo


-Original Message-
From: Livingood, Jason [mailto:jason_living...@cable.comcast.com] 
Sent: 16 April 2012 07:40 PM
To: Marc Lampo; dnsop
Cc: ralf.we...@nominum.com; Nick Weaver
Subject: Re: on Negative Trust Anchors

Inline.
- JL


On 4/12/12 8:21 AM, Marc Lampo marc.la...@eurid.eu wrote:
The draft of Negative Trust Anchors does not mention anything about 
informing the operator of the failing domain.

I'll make a note to call this out in the next version. Something about
making reasonable attempts to notify the domain of the issue and any
action taken (such as using an NTA and when it expires, how to contact the
party adding the NTA, etc.).

The advantage over a negative trust anchor would be that this is more 
centrally managed: the action by the parent (remove DS) is visible (TTL 
permitting) to any validating name server.
 (the negative trust anchor needs to be configured by every validating NS 
whose administrators bother to do so)

I see the advantages, but I'm reluctant to see this made more automated / easy.

Thanks,
Jason



Re: [DNSOP] on Negative Trust Anchors

2012-04-17 Thread Murray S. Kucherawy
 -Original Message-
 From: dnsop-boun...@ietf.org [mailto:dnsop-boun...@ietf.org] On Behalf Of 
 Warren Kumari
 Sent: Monday, April 16, 2012 11:55 PM
 To: Livingood, Jason
 Cc: Joe Abley; Nick Weaver; Tony Finch; dnsop; Paul Vixie; Evan Hunt
 Subject: Re: [DNSOP] on Negative Trust Anchors
 
 I think that this is a useful document and is *really* needed to help
 with DNSSEC deployment, and fully support its publication (especially
 if it comes with examples!)...
 
 The IETF publishing an RFC on this doesn't mean that folk will be
 forced to use it (and not publishing it doesn't mean that folk won't) -
 - if this were not true, we would have universal adoption of BCP 38,
 everyone would be running v6 and all packets involved in SPAM / DoS
 would have the Evil bit set...

Some of this conversation reminds me of the "X-" debate just wrapping up in 
apps, in that things that are supposed to be temporary often become permanent 
even if we tag them very explicitly as temporary.  In that sense, I'm more 
sympathetic to the "no" side so far, but not enough to object.

Also, a lot of things in apps space like to call out MUST/SHOULD "except if 
local policy says otherwise", and this strikes me as, basically, a kind of 
local policy tool.  I presume the idea is to describe a mechanism that works 
and is minimally destructive, to encourage people away from more broken methods. 
In that sense, this is the right thing to do.

So if this is going to go forward to publication, I would urge ample 
explanation of why we think this is necessary to document, but also advocate 
strongly in the document that the technique is meant to be a short-term 
solution for a problem that exists during gradual and un-coordinated DNSSEC 
rollout, and that support for it be dropped once DNSSEC has reached critical 
mass.  Or something like that.

-MSK



Re: [DNSOP] on Negative Trust Anchors

2012-04-16 Thread SM

At 21:03 15-04-2012, Ralf Weber wrote:
If the IETF or this group wants to ignore these operational facts 
and not give new people guidance on how to deal with them, and "do 
nothing" is not acceptable advice here, I doubt that a lot of 
people will adopt DNSSEC or move back after the first or second 
failure, and that would not be the outcome I would want.


From draft-livingood-negative-trust-anchors-01:

   A Negative Trust Anchor should be considered a transitional and
   temporary tactic which is not particularly scalable and should not be
   used in the long-term.  Over time, however, the use of Negative Trust
   Anchors will become less necessary as DNSSEC-related domain
   administration becomes more resilient.

The parallel here would be 
draft-ietf-v6ops-v6-aaaa-whitelisting-implications-11.  The 
significant difference is that it is not about a technological 
choice.  There are different angles to the problem discussed in 
draft-livingood-negative-trust-anchors-01.  I could look at it as follows:


   A Negative Trust Anchor should be considered even though the tactic is
   not particularly scalable.

Regards,
-sm 




Re: [DNSOP] on Negative Trust Anchors

2012-04-16 Thread Scott Schmit
On Sun, Apr 15, 2012 at 09:24:35PM -0700, David Conrad wrote:
 On Apr 15, 2012, at 6:28 PM, Scott Schmit wrote:
  It's manual for now...until the utter lack of consequences for screwing
  up (everybody can still get to the broken zones just fine) junks up the
  NTA lists.  
 
 Given the implicit assertions associated with NTA (specifically, that
 the validator operator is asserting that the zone in question is not
 being spoofed despite the fact that validation is failing), I have
 some skepticism that folks will let stuff like this 'junk up NTA
 lists'.

Please explain how operators will prevent this, and why they can afford
to remove a zone from the NTA list (while it is still failing) but
couldn't afford to leave it off the list in the first place.

  If the resolver is unable to validate the domain, it MAY return a false
  result leading the user to a host that will explain the error and how to
  notify the domain owner of the problem.
 
 Not sure I follow -- are you proposing additional error codes in stub
 resolver responses?

No, I'm talking about a targeted use of the controversial practice of
returning spoofed results redirecting the user to another host. Since
the usual protocol in use is HTTP or HTTPS, that host presents a web
page with the desired content (usually a search page with embedded ads,
or a portal page requesting payment and/or providing terms of use that
must be accepted before continuing). It may be possible to provide
application-level error messages for other protocols to be served by
that host in support of non-HTTP/HTTPS traffic (email bounces, etc).

In this case, the page would educate the user that there's something
wrong with the site, and offer a way to let the user let the site know
about the problem. This shifts the perception of brokenness back toward
the site causing the problem (or at least attempts to).

If desired, one could also go the captive portal route and let the user
through after they've seen/acknowledged the error page (for some amount
of time).

References:
https://en.wikipedia.org/wiki/DNS_hijacking
https://en.wikipedia.org/wiki/Captive_portal

-- 
Scott Schmit




Re: [DNSOP] on Negative Trust Anchors

2012-04-16 Thread David Conrad
Scott,

On Apr 16, 2012, at 4:52 AM, Scott Schmit wrote:
 Given the implicit assertions associated with NTA (specifically, that
 the validator operator is asserting that the zone in question is not
 being spoofed despite the fact that validation is failing), I have
 some skepticism that folks will let stuff like this 'junk up NTA
 lists'.
 
 Please explain how operators will prevent this, and why they can afford
 to remove a zone from the NTA list (while it is still failing) but
 couldn't afford to leave it off the list in the first place.

I would assume operators will keep NTAs alive until the zone owner fixes 
things. 

You appear to be assuming zone owners will leave brokenness in place. In the 
case of popular zones (which would be the most likely candidates for NTAs since 
end users would notice the brokenness and complain to the validator operator), 
I'd imagine there would be some pressure to fix things, either by pulling the 
DS or by remedying whatever booboo caused the problem to begin with. My 
impression is that those who are arguing against NTAs believe that NTAs reduce 
that pressure.  I'd agree with this to some extent, however I suspect because 
of the indirect nature of the failures, the vast majority of complaints the 
zone owner will receive will come from validator operators, not end users.

 No, I'm talking about a targeted use of the controversial practice of
 returning spoofed results redirecting the user to another host. 

An interesting idea, albeit I'm actually unsure which is less appealing 
architecturally speaking. For others against NTAs, is the use of redirection as 
Scott suggests preferable?

Regards,
-drc



Re: [DNSOP] on Negative Trust Anchors

2012-04-16 Thread Joe Abley

On 2012-04-15, at 20:02, Scott Schmit wrote:

 On Fri, Apr 13, 2012 at 04:38:10PM -0700, David Conrad wrote:
 On Apr 13, 2012, at 3:30 PM, Jaap Akkerhuis wrote:
 More pragmatically, while I understand the theory behind rejecting NTAs,
 I have to admit it feels a bit like the IETF rejecting NATs and/or DNS
 redirection. I would be surprised if folks who implement NTAs will stop
 using them if they are not accepted by the IETF.
 
 it is still not a reason for the IETF to standardize this.
 
 With the implication that multiple vendors go and implement the same
 thing in incompatible ways. I always get a headache when this sort of
 thing happens as the increased operational costs of non-interoperable
 implementations usually seems more damaging to me than violations of
 architectural purity. Different perspectives I guess.
 
 What's to standardize (or be incompatible)?

Details like:

 - what data ought to be recorded with the NTA (e.g. reason, instantiation 
timestamp, expiration timestamp)
 - whether other available trust anchors to domains under an NTA should also be 
invalidated
 - whether there ought to be any signalling to a client to let them know that 
they're getting an answer despite a validation failure

 Each recursive resolver
 already has different mechanisms for configuring it, and I'd imagine
 that the list of NTAs would be configured similarly to (for example)
  its TAs & DLVs.

 - if NTAs were to be published as RRs, a bit like DLV
   - what RRs should be used?
   - should NTAs read from the DNS be cached?
   - are there requirements that the zone data be signed?

I think there's more to standardise here than you think. It's not that any of 
this is hard; it's just that it'd be so much less pain operationally if 
everybody's validator was configured along similar lines. If we did it right I 
can imagine subscription services for ISPs run by reliable people that ISPs 
could opt-in to in order to get an automatic whitelist. (Look at that! I just 
made all the security people on this list bang their fists on the table.)
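
As a purely hypothetical illustration of what a shared shape for that
configuration could look like (the field names below are invented here, not
taken from the draft or from any implementation), a short Python sketch:

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class NegativeTrustAnchor:
    zone: str             # e.g. "example.com."
    reason: str           # free-text justification, for the audit trail
    added_by: str         # contact for the party that added the NTA
    added_at: datetime    # instantiation timestamp
    expires_at: datetime  # NTAs are meant to be temporary, so expiry is mandatory

    def is_active(self, now=None):
        now = now or datetime.now(timezone.utc)
        return now < self.expires_at

# Example: an NTA that lapses automatically after 48 hours.
nta = NegativeTrustAnchor(
    zone="example.com.",
    reason="RRSIGs expired; confirmed with zone admin by phone",
    added_by="noc@isp.example",
    added_at=datetime.now(timezone.utc),
    expires_at=datetime.now(timezone.utc) + timedelta(hours=48),
)
print(nta.zone, "still active:", nta.is_active())

If every validator recorded at least that much, the subscription-service idea
above would at least have a common vocabulary to exchange.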

Clients are very touchy about the performance of the caches they use. I have 
some dealings with a large (i.e. tiny by comparison with Comcast) residential 
ISP here in Canada, and I've seen first-hand the dramatic traffic shifts from 
the ISP-operated resolvers to OpenDNS and Google DNS if the ISP caches ever 
malfunction. Comcast's experience (as I heard about it in Teddington) rang very 
true -- unless you have the ops to mitigate signing failures, there is no way 
in hell you should validate in your cache.

Another thought from Teddington via Paris: if Comcast hadn't whitelisted NZ 
during the period when the NZ zone was tripping validation failures on CNS due 
to the BAABAA encoding oddity, there would have been a whole country's worth of 
content off the air to Comcast customers for a prolonged period. Someone might 
argue that the right thing to do was to suppress resolution of all names under 
NZ for reasons of architectural purity, but I would have a hard time agreeing 
with them (and I doubt they'd find many kindred spirits amongst kiwi expats 
living in Comcast service areas).

(And as I hope is obvious, even to those who didn't see Sebastian's talk about 
it, the NZ people actually know what they're doing. DNSSEC amateurs who are 
just blindly clicking "sign" have no hope. And most of the DNS isn't even 
signed -- the longer and wrigglier the entrails of trust become, the bigger 
problem this is, and I don't think it's necessarily the case that more 
deployment of signatures in the namespace will make things more reliable.)

I understand the reluctance to appear to sanction selective tolerance of 
validation failures -- from a security perspective it's ugly, it muddies the 
whole DNSSEC message, it smacks of "click OK to continue" certificate failures. 
But I do not see how we can expect validation in the cache to ever make sense 
to ISPs in any general, pervasive sense without some mechanism to mitigate 
signing failures. And if there's a mechanism, I think it should be standardised.

If all that sounds horrible, then the alternative is to issue new guidance that 
nobody expects caches to do validation anyway, ever, and that validation 
properly belongs in or near the application, on hosts. Reduce the advice to 
ISPs that they should make sure they can receive and generate large responses 
reliably and respond properly to clients that are willing and able to do their 
own validation. Re-point the energy currently directed at ISPs to Microsoft, 
Apple, Google and Mozilla.

That way the practical problems surrounding the use of a remote validator (the 
support cost of validation, the lack of benefit from validation from the 
perspective of the naïve end-user, the unfortunate user comparisons between a 
BROKEN! validating ISP and a WORKING! non-validating one next door, dealing 
with NTAs/whitelists/whatever, the direction of the user's anger when broken 
zones fail to 

Re: [DNSOP] on Negative Trust Anchors

2012-04-16 Thread Livingood, Jason
Inline.
- JL


On 4/12/12 8:21 AM, Marc Lampo marc.la...@eurid.eu wrote:
The draft of Negative Trust Anchors does not mention anything about
informing the operator of the failing domain.

I'll make a note to call this out in the next version. Something about
making reasonable attempts to notify the domain of the issue and any
action taken (such as using an NTA and when it expires, how to contact the
party adding the NTA, etc.).

The advantage over a negative trust anchor would be that this is more
centrally managed: the action by the parent (remove DS) is visible (TTL
permitting) to any validating name server.
 (the negative trust anchor needs to be configured by every validating NS
whose administrators bother to do so)

I see the advantages, but I'm reluctant to see this made more automated / easy.

Thanks,
Jason



Re: [DNSOP] on Negative Trust Anchors

2012-04-16 Thread Livingood, Jason
On 4/13/12 1:43 PM, Paul Vixie p...@redbarn.org wrote:


we need to move quickly to the point where lots of large eyeball-facing
network operators are validating, such that any failure to properly
maintain signatures and keys and DS records, is felt most severely by
whomever's domain is thus rendered unreachable.

+100

i'm opposed to negative trust anchors, both for their security
implications if there were secure applications in existence, and for
their information economics implications.

But then what you might get is the request to turn off validation across
*all* domains until example.com is fixed and the call center pain stops.
This problem of course goes away once there are (many) more recursive
operators validating but there's the challenge of how we get from here to
there. 

- Jason



Re: [DNSOP] on Negative Trust Anchors

2012-04-16 Thread Livingood, Jason
On 4/13/12 5:00 PM, Patrik Fältström p...@frobbit.se wrote:
In a private chat I am asked to explain my +1.

Let me explain why.

Today, before negative trust anchors, the responsibility for whether the 
resolution that is the basis for a connection establishment succeeds lies with 
the zone owner. If the signature fails, it fails, resolution fails, and the 
connection cannot be established.

Now, if we have negative trust anchors that the validator is controlling, then 
I interpret it as if this choice of ability to resolve a name moves from the 
zone owner to the validator (or as in the case of X.509 certs to the client).

What I am against is this *CHANGE* in who is responsible.

It is indeed a concern (see a section dedicated to this @ 
http://tools.ietf.org/html/draft-livingood-negative-trust-anchors-01#section-5).
 But I argue that the design of DNSSEC or the way that incremental deployment 
was envisioned shifted this model by making it something that a recursive 
operator had to take action to turn on. This creates the situation where 
recursive operators get the costs of adoption errors initially.

But, all of this thinking leads me to think that DNSSEC validation risks are 
very similar to the risks of deploying IPv6. We have an IPv6 day, but why not 
a DNSSEC day? One day where *many* players at the same time turn on DNSSEC 
validation?

+1

- Jason


Re: [DNSOP] on Negative Trust Anchors

2012-04-16 Thread Livingood, Jason
On 4/13/12 5:18 PM, Patrik Fältström p...@frobbit.se wrote:

On 13 apr 2012, at 22:44, Nicholas Weaver wrote:

Because practice has shown that it is the recursive resolver, not the 
authority, that gets blamed.

As you saw in my mail, I completely disagree from my own personal experience.

If I look at the number of failures, the number of cases where the validator is 
blamed is exactly one -- Comcast in the NASA case. Compared to the about 50 
cases or so when the zone owner/signer is blamed. Yes, we have been running 
DNSSEC validation in Sweden a bit longer than in the USA.

Can you please comment on that mail that uses a few more characters than '+1' 
please?

Maybe what we should do is publicize all the escalations and failures we see so 
others have some sense of this (assuming we have the cycles for that)? Here are 
a few complaints by customers that I found in a quick search:

http://forums.comcast.com/t5/Web-Browsers/Cannot-connect-to-NOAA-gov-and-related-sites/m-p/1211707/highlight/true#M23142

http://forums.comcast.com/t5/Connectivity-and-Modem-Help/DNS-issues-with-gov-addresses-Proven-Comcast-issue/m-p/1241301/highlight/true#M150167

http://forums.comcast.com/t5/Connectivity-and-Modem-Help/DNS-Issue-Again/m-p/1209289/highlight/true#M148556

http://forums.comcast.com/t5/Connectivity-and-Modem-Help/DNS-can-t-find-NOAA-Hurricane-Center-other-major-sites/m-p/1084603/highlight/true#M141297

http://forums.comcast.com/t5/Connectivity-and-Modem-Help/Why-is-Comcast-unable-to-keep-DNS-working-No-dot-gov-resolution/m-p/908009/highlight/true#M131067

And the sites here (http://www.dnsops.gov/USAdotGOV-status.html) are ones we 
usually hear about.

- Jason



Re: [DNSOP] on Negative Trust Anchors

2012-04-16 Thread Livingood, Jason
On 4/13/12 9:51 PM, Doug Barton do...@dougbarton.us wrote:


The problem, and I cannot emphasize this highly enough, is that there is
absolutely no way for an ISP (or other end-user site doing
recursion/validation) to determine conclusively that the failure they
are seeing is due to a harmless stuff-up, vs. an actual security incident.

It is admittedly manual and not scalable. It generally involves some DNS
admin checking to verify it with the authoritative admin. If not, you are
correct that if you cannot confirm it is a misconfiguration then it is
questionable if you should take action (as a recursive operator).

- Jason



Re: [DNSOP] on Negative Trust Anchors

2012-04-16 Thread Livingood, Jason
On 4/14/12 9:23 PM, Warren Kumari war...@kumari.net wrote:

Yes, but AT&T, Verizon, Cox, BestWeb, RR, TW, etc. are currently *not*
doing validation, and currently don't have much in the way of incentives
to start -- yes, NASA was an unusual event, but what was the standard
advice that kept popping up on twitter / forums / fb, etc?
"Change your resolver to be 8.8.8.8 and the problem is fixed" -- now, I'm
all for folk changing to use Google's resolvers, but to avoid validation
isn't the right reason…

Yes, NTAs suck and have some really bad security implications, but I
believe that the alternative is worse. Without a way for validating
resolver operators to avoid users jumping ship to non-validating resolver
operators, we delay adoption (imo significantly) and users are at a much
larger risk for a much longer time.

Once most ISPs are performing validation there should be fewer screwups,
and NTAs should be almost never needed -- but until we get to that point
I think that they are needed, and the net security wins outweigh the
costs…

+1

- Jason



Re: [DNSOP] on Negative Trust Anchors

2012-04-16 Thread Livingood, Jason
On 4/15/12 10:42 AM, Joe Abley joe.ab...@icann.org wrote:

Patrik,

Nobody is talking about creating NTAs. NTAs already exist. The question
for this group is whether or not they are worth standardising.

Joe

Quite true, Joe! We'll keep using NTAs as needed. But I've had enough
people ask me to document what it was we were doing and why, and other
ISPs ask about it that I figured an informational document certainly
couldn't hurt. 

Jason



Re: [DNSOP] on Negative Trust Anchors

2012-04-16 Thread Scott Schmit
On Mon, Apr 16, 2012 at 08:32:59AM -0700, David Conrad wrote:
 On Apr 16, 2012, at 4:52 AM, Scott Schmit wrote:
  Please explain how operators will prevent this, and why they can
  afford to remove a zone from the NTA list (while it is still
  failing) but couldn't afford to leave it off the list in the first
  place.
 
 I would assume operators will keep NTAs alive until the zone owner
 fixes things. 
 
 You appear to be assuming zone owners will leave brokenness in place.

Not an assumption--a reality. See Jason's recent posts for
documentation.

  No, I'm talking about a targeted use of the controversial practice of
  returning spoofed results redirecting the user to another host. 
 
 An interesting idea, albeit I'm actually unsure which is less
 appealing architecturally speaking. For others against NTAs, is the
 use of redirection as Scott suggests preferable?

Believe me, I feel dirty even suggesting it. But its benefits could
outweigh the ugliness of it, so I figured I'd offer it.

Another approach would be to bless client-configured/non-automated NTAs
for now...until there are enough resolvers validating. Then do a 'Turn
Off All NTAs Forever Day.' And hope that the world follows through & 
blessing NTAs doesn't backfire instead.

-- 
Scott Schmit




Re: [DNSOP] on Negative Trust Anchors

2012-04-15 Thread Patrik Fältström
On 15 apr 2012, at 03:23, Warren Kumari wrote:

 Once most ISPs are performing validation there should be fewer screwups, and 
 NTAs should be almost never needed -- but until we get to that point I think 
 that they are needed, and the net security wins outweigh the costs…

...and my point is that the effort should be spent on convincing AT&T, Cox and 
others to do validation just like Comcast. And to inform the users, press and 
others that for example it was NASA and not Comcast that had problems.

The solution is not to do a workaround in the IETF that has all different kinds 
of security implications, similar to the ones Doug describes.

Creating NTAs so that people, as Doug says, can turn off validation per zone 
without interaction with whoever is responsible for the zone, without 
interaction with whoever *decided* that the zone should be signed, and without 
knowing whether it is a security incident or just a management mistake, is I 
think the end of DNSSEC.

So, I would rather see those that do not feel comfortable taking the discussion 
with the press and their customers (and of course this is also due to zone owners 
not doing enough press and help when they screw up) turn off validation 
completely, and then work together in whatever community they operate with 
other resolver operators to turn on validation on the same day, with the help 
of ISOC and whoever, and have a DNSSEC validation launch day. Similar to the 
work that you at Google did for IPv6.

Much better than creating NTAs.

I see *today* many mistakes we have made that show the need for DNSSEC, and we 
could, and still can, learn from the IPv6 advocates on how to deploy something 
new. Easy to say afterwards though.

   Patrik



Re: [DNSOP] on Negative Trust Anchors

2012-04-15 Thread Joe Abley
Patrik,

Nobody is talking about creating NTAs. NTAs already exist. The question for 
this group is whether or not they are worth standardising.


Joe

Sent from my Ono-Sendai Cyberspace 7



Re: [DNSOP] on Negative Trust Anchors

2012-04-15 Thread David Conrad
Doug,

On Apr 13, 2012, at 6:51 PM, Doug Barton wrote:
 The problem is that there is absolutely no way for an ISP to determine 
 conclusively that the failure they are seeing is due to a harmless stuff-up, 
 vs. an actual security incident.

I suspect that in most cases, grepping through logs and comparing past 
(validated) results with current (unvalidated) results can provide sufficient 
information to ensure to an arbitrary level of certainty that the bad thing 
either is or is not happening.  For example, if the logs show the IP address 
for mail.example.com maps via whois into a block owned by Example, LLC. and the 
current IP address maps into a block owned by a dialup provider in Tajikistan, 
it's probably safe to assume the address shouldn't be trusted.
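
A rough sketch of that kind of sanity check, assuming Python; it stands in for
the whois comparison by testing current answers against address blocks recorded
from previously validated answers (the prefixes and addresses are invented):

import ipaddress

# Prefixes extracted from logged, previously validated answers for the zone.
known_good_prefixes = [ipaddress.ip_network(p)
                       for p in ("192.0.2.0/24", "198.51.100.0/24")]

def looks_consistent(current_addresses):
    # True if every address in the current (unvalidated) answer falls inside
    # a block we have seen in validated answers before.
    return all(any(ipaddress.ip_address(a) in net for net in known_good_prefixes)
               for a in current_addresses)

print(looks_consistent(["192.0.2.25"]))   # True: same block as before
print(looks_consistent(["203.0.113.9"]))  # False: unfamiliar block, stay suspicious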

 IOW, if we do this, we might as well just abandon DNSSEC altogether.

Joe has pointed out that folks are already doing this. The question before us 
is whether or not there is a standard way of doing it.

 I would be surprised if folks who implement
 NTAs will stop using them if they are not accepted by the IETF.
 
 Actually I think what's more likely to happen is that organizations
 conclude that validation is not ready for prime time, and turn it off.

I guess I have less faith in the power of the IETF.

Regards,
-drc



Re: [DNSOP] on Negative Trust Anchors

2012-04-15 Thread Paul Vixie
On 2012-04-15 4:09 PM, Patrik Fältström wrote:
 On 15 apr 2012, at 16:42, Joe Abley wrote:

 Nobody is talking about creating NTAs. NTAs already exist. The question for 
 this group is whether or not they are worth standardising.
 Fair. I am the one that extrapolates from standardizing to wide deployment.

extrapolating further, standardizing is a form of legitimization. i
argue that this would do more harm than good.



Re: [DNSOP] on Negative Trust Anchors

2012-04-15 Thread Stephan Lagerholm
Patrik Fältström, Sunday, April 15, 2012 1:34 AM:
 
 ...and my point is that the effort should be spent on convincing AT&T,
 Cox and others to do validation just like Comcast. And to inform the
 users, press and others that for example it was NASA and not Comcast
 that had problems.

Convincing other service providers is a good long-term idea. That would 
increase the pressure on the zone operators to do DNSSEC properly. However, the 
open resolvers that don't support DNSSEC are a bigger issue, since a service 
provider's customer can instantly get what he believes is a better-working 
resolver by switching to one of those. If we could convince 8.8.8.8 
(208.67.222.222 and 4.2.2.1) to turn on DNSSEC validation, then there would not 
be any need for NTAs.

/S


Re: [DNSOP] on Negative Trust Anchors

2012-04-15 Thread Patrik Wallstrom

On Apr 14, 2012, at 1:38 AM, David Conrad wrote:

 On Apr 13, 2012, at 3:30 PM, Jaap Akkerhuis wrote:
 More pragmatically, while I understand the theory behind rejecting NTAs,
 I have to admit it feels a bit like the IETF rejecting NATs and/or DNS
 redirection. I would be surprised if folks who implement NTAs will stop
 using them if they are not accepted by the IETF.
 
 it is still not a reason for the IETF to standardize this.
 
 With the implication that multiple vendors go and implement the same thing in 
 incompatible ways. I always get a headache when this sort of thing happens as 
 the increased operational costs of non-interoperable implementations usually 
 seems more damaging to me than violations of architectural purity. Different 
 perspectives I guess.

Then I should probably go ahead and write another draft with just one statement 
in it, maybe something like "do not put NTAs in a resolver". Problem solved?

But I guess that it would have the same effect as RFC 5966.



Re: [DNSOP] on Negative Trust Anchors

2012-04-15 Thread Stephan Lagerholm

David Conrad, Sunday, April 15, 2012 11:15 AM: 
 I suspect that in most cases, grepping through logs and comparing past
 (validated) results with current (unvalidated) results can provide
 sufficient information to ensure to an arbitrary level of certainty
 that the bad thing either is or is not happening.  For example, if the
 logs show the IP address for mail.example.com maps via whois into a
 block owned by Example, LLC. and the current IP address maps into a
 block owned by a dialup provider in Tajikistan, it's probably safe to
 assume the address shouldn't be trusted.
 
-1 on that approach.

You are turning off DNSSEC for the entire domain. Just spot checking
mail.example.com and www.example.com is not sufficient. They might still
be valid whereas moneytransfersystem.example.com is not. And your dig
checks are not secure since you can't validate (since DNSSEC is broken).

The requirement for an NTA (or for removing a DS from a parent) should
be as strict as for adding a DS. That is, it must be done with a secure
out-of-band mechanism.

/S


Re: [DNSOP] on Negative Trust Anchors

2012-04-15 Thread Paul Vixie
On 2012-04-15 4:20 PM, Stephan Lagerholm wrote:
 ... However, the open resolvers that don't support DNSSEC are a bigger
 issue since the Service Providers' customer instantly can get what
 he believes is a better working resolver by switching to one of those.
 If we could convince 8.8.8.8 (208.67.222.222 and 4.2.2.1) to turn on
 DNSSEC validation, then there would not be any need for NTAs. ...

dnssec makes dns less reliable. that's because it's a security
technology not a resiliency technology. security makes things harder to
use and makes failures more apparent. in the real world if you put a
lock on your house or your car then you're at risk for forgetting or
losing your key and not being able to enter or use your own property.
you could mitigate that risk by removing the lock but then you'd have
other risks.

what's unique about dnssec is that if you lose your key or use the wrong
key or forget to re-sign your zones then it's not you (the property
owner) who cannot enter or use the zones, but rather, everybody else. i
don't think that asymmetry of cost changes the fundamentals. the
operators of 8.8.8.8 and so on (from the above examples) may have done a
cost:benefit analysis leading them to not deploy dnssec validation at
this time. that will make their systems more reliable in the eyes of end
users. that's true and that's inevitable.

i'm imagining that some NTA-using party notices via their syslogs that
validation is failing for some zone, and believing that access to this
zone in an unsecure way has a better cost:benefit ratio to them and to
their own customers than letting the failure propagate and so enters the
zone into the NTA... only to have it turn out later that there was a key
compromise and the failure was due to a credential or key or system
attack rather than due to someone forgetting or losing their key.

people who sign their zones must be told, early and often, that signing adds
risk. they will have to re-sign their zones before signatures expire,
they will have to carefully coordinate key changes with their parent
zone, they will have to keep their signing key under secure but
redundant control... and any failure by them to do any of those things
will make their zones unreachable by a large and growing audience. if
they sign anyway then any resulting failures have to be laid at their
doorstep, not at validating server operators' doorsteps.

if a secure application knows that a zone is supposed to be signed
(because there are DS RRs in the parent and the parent is signed) then
an upstream NTA will just look like a signature-stripping attacker. let
us not imagine that there will never be secure applications other than
recursive name servers (and here i'm thinking of DANE), and let us also
not imagine that every secure application (here i'm again thinking of
DANE) will have to subscribe to a trusted NTA.

dnssec is end to end, including dnssec failures, which are statistically
inevitable. i'd tell validator operators who think they need NTAs in
order to control the risks posed by zone owner errors, "if you can't
stand the heat then stay out of the kitchen."

see also
http://www.circleid.com/posts/defense_in_depth_for_dnssec_applications/.

paul


Re: [DNSOP] on Negative Trust Anchors

2012-04-15 Thread Patrik Fältström
On 15 apr 2012, at 17:46, David Conrad wrote:

 The decisions as to whether to deploy an NTA vs. whether to deploy DNSSEC are 
 made a different times and (I suspect) different places within the 
 organization.  Obviously, an organization must decide to deploy DNSSEC before 
 the question of whether to deploy NTAs becomes relevant. My impression is 
 that NTAs are/can be argued to reduce the risks of deploying validating 
 resolvers.

This all depends on the arguments in favor of, and against, validating 
resolvers.

What I think you say is "...can be argued to reduce the risk of being blamed if 
someone is making mistakes with their keys."

I.e. with the help of standardized NTAs, it will be easier for parties to 
always be able to give back responses, regardless of whether they validate or 
not. Which I think is the wrong view. You (and many more) have a different view.

In Sweden, for some reason, we managed to get people to deploy validators 
without being afraid of the risk of being blamed. Or rather, the risk of 
being blamed was much lower in the calculation that led to so many access 
providers turning on validation.

 So, I rather see those that do not feel comfortable taking the discussion 
 with the press and their customers (and of course this is also due to zone 
 owners not doing enough press and help when they screw up)
 
 I don't think it is a question of comfort. My impression is that it is a 
 question of not losing money due to customers being unable to get their pr0n 
 because validation has been turned on (whereas the customers' friends at 
 another ISP can get the same pr0n with no problems). I suspect the vast 
 majority of end users will simply not believe the response of "the zone owner 
 screwed up and our competition is not doing the right thing."

Ok, I agree with this, and it was sort of what I said as well.

 turn off validation completely,
 
 This strikes me as far more detrimental to DNSSEC-enabled security than NTAs. 
  The implication of this approach is that a mistake of a single zone owner 
 would mean DNSSEC is disabled for everyone, everywhere, regardless of how all 
 the other signed zones are operating.  NTAs mean that validation is disabled 
 for the offender only.

Ok.

 and then work together in whatever community they operate with other 
 resolver operators to turn on validation on the same day, with the help of 
 ISOC and whoever and have a DNSSEC validation launch day. Similar work that 
 you at Google did for IPv6.
 
 While I think a DNSSEC validation day is a good idea, the implication here is 
 that zone owners won't make mistakes after the DNSSEC validation launch day.

Well, we of course must first decide what the problem is, and what problem we 
might resolve with a validation deployment day.

I think one thing it can help with is to make it more understandable to /. 
people who is to blame for the inability to reach whatever is to be reached.

And to some degree that is what I heard from for example Google before the 
initiative with IPv6 day started.

 I see *today* many mistakes we have made that see the need for DNSSEC, and 
 we could, and still can, learn from the IPv6 advocates on how to deploy 
 something new. Easy to say afterwards though.
 
 Given the stunning level of IPv6 deployment after more than a decade, I'm not 
 sure I see emulating IPv6 in this regard as the best idea.

Well, I was more thinking of the (un-) happy eyeballs problem.

Or let me ask you differently, if many access providers and not only Comcast 
started to do validation at the same time, would we be in a different situation?

   Patrik



Re: [DNSOP] on Negative Trust Anchors

2012-04-15 Thread David Conrad
Stephan,

On Apr 15, 2012, at 9:36 AM, Stephan Lagerholm wrote:
 David Conrad, Sunday, April 15, 2012 11:15 AM: 
 I suspect that in most cases, grepping through logs and comparing past
 (validated) results with current (unvalidated) results can provide
 sufficient information to ensure to an arbitrary level of certainty
 that the bad thing either is or is not happening.
 
 -1 on that approach.

The alternative is for names in the zone in question to not exist, resulting in 
your customer support center receiving some number of calls and/or customers 
bolting to service providers that don't do validation.

At the current state of DNSSEC deployment, I suspect it is far more likely that 
the zone owner has screwed something up. Validator operators that deploy NTAs 
are implicitly assuring their customers that the zone in question is actually 
safe. This presumably implies they will do some level of due diligence to 
ensure that names in that zone are indeed safe. How much due diligence they do 
is, of course, their own business decision.

 You are turning off DNSSEC for the entire domain. Just spot checking
 mail.example.com and www.example.com is not sufficient. They might still
 be valid whereas moneytransfersystem.example.com is not. And your dig
 checks are not secure since you can't validate (since DNSSEC is broken).

DNSSEC is broken for that zone.  Without NTAs, you have the choice of either 
the alternative above or turning off DNSSEC for all zones.  Which would you 
prefer?

Regards,
-drc



Re: [DNSOP] on Negative Trust Anchors

2012-04-15 Thread David Conrad
On Apr 15, 2012, at 9:37 AM, Paul Vixie wrote:
 i'd tell validator operators who think they need NTA's in
 order to control the risks posed by zone owner errors, if you can't
 stand the heat then stay out of the kitchen.

Given the benefits provided by DNSSEC (to date) are largely invisible and the 
costs quite non-trivial, I'd think this would ensure DNSSEC validation never 
gets deployed, thus secure applications (such as DANE) will never exist.

I thought we'd learned that flag day deployments don't work on the Internet 
anymore.

Regards,
-drc



Re: [DNSOP] on Negative Trust Anchors

2012-04-15 Thread David Conrad
Patrik,

On Apr 15, 2012, at 10:19 AM, Patrik Fältström wrote:
 I.e. with the help of standardized NTAs, it will be easier for parties to 
 always be able to give back responses, regardless of whether they validate or 
 not. 

Or, more specifically, will be easier for parties to give back the same 
responses that their (non-validating) competitors do.

 In Sweden, for some reason, we managed to get people to deploy validators 
 without being afraid of the risk of being blamed.

I'm guessing the reason has to do with scale. It might be helpful to get an 
idea from ISPs in Sweden how many validation failures they have seen (and how 
many calls they get as a result of those validation failures). Given the lack 
of signed zones, I'm guessing the number of validation failures is in the 
noise. It's only when you get a lot of (popular) zones signed and a lot of 
people behind validators that false positive validation failure becomes an 
issue.

 I think one thing it can help with is to make it more understandable to /. 
 people who is to blame for the inability to reach whatever is to be reached.

I don't think /. folks are significant in the decision making. I suspect what 
is more significant are the number of paying customers impacted by validation 
failure.

 Or let me ask you differently, if many access providers and not only Comcast 
 started to do validation at the same time, would we be in a different 
 situation?

Potentially.  However, I suspect it has more to do with the number of false 
positives caused by the relative immaturity of the available tools. As the 
tools get more mature and validation failures become caused more by malicious 
intent than all too easily caused mistakes, the desire for NTAs will wane 
(particularly given the implicit risks an ISP takes on when deploying an NTA).

Regards,
-drc



Re: [DNSOP] on Negative Trust Anchors

2012-04-15 Thread Paul Vixie
On 2012-04-15 7:04 PM, David Conrad wrote:
 On Apr 15, 2012, at 9:37 AM, Paul Vixie wrote:
 i'd tell validator operators who think they need NTA's in
 order to control the risks posed by zone owner errors, if you can't
 stand the heat then stay out of the kitchen.
 Given the benefits provided by DNSSEC (to date) are largely invisible and the 
 costs quite non-trivial, I'd think this would ensure DNSSEC validation never 
 gets deployed, thus secure applications (such as DANE) will never exist.

 I thought we'd learned that flag day deployments don't work on the Internet 
 anymore.

i thought so too until we had world ipv6 day last year. noting that
adding a AAAA record to www.{facebook,yahoo,google}.com has been seen to
hit all kinds of roadblocks due to teredo and other failed tunneling
mechanisms, the only way big companies will feel safe turning it on
(knowing that they'll lose 0.3% of unique eyeballs when they do) is if
they're traveling in a pack with other big companies.

so it apparently will be for dnssec. nobody should validate until
everybody validates, because otherwise the failures at the social
security administration or nasa to sign and re-sign their zones, and to
properly maintain the relationship between the keys they use and the DS
RRs their parent zones have for them, will be felt by early adopters.

ipv6 and dnssec both have incredibly strong early adopter penalties:
"you can break me now, or you can break me later." i seek to avoid
legitimizing the AAAA igor hack in bind9, and negative trust anchors.
i know that people will do this stuff but i also know that IETF should
not give either one an implicit +1.


Re: [DNSOP] on Negative Trust Anchors

2012-04-15 Thread David Conrad
Paul,

On Apr 15, 2012, at 1:12 PM, Paul Vixie wrote:
 I thought we'd learned that flag day deployments don't work on the Internet 
 anymore.
 i thought so too until we had world ipv6 day last year. 

I'm not sure how world IPv6 day changes anything. It's not like IPv6 saw a 
significant (relative to IPv4) increase in usage.

 i seek to avoid legitimizing the AAAA igor hack in bind9, and negative 
 trust anchors.

My impression is that what is being proposed is standardizing on the moral 
equivalent of a manual version of happy eyeballs (is that the igor hack?). I 
suspect the only way we're going to see real deployment of DNSSEC validation is 
if we make it essentially transparent to end users which is what I see happy 
eyeballs doing. Like IPv6 with happy eyeballs, the implication of new 
technology failure is to transparently fall back to (working) old technology.  
Unlike IPv6 with happy eyeballs, DNSSEC with NTA has security implications. I 
would imagine those security implications will be an incentive not to deploy 
unless there is no real alternative (and to remove as soon as possible), so I 
can't get too worked up about the abuse of the architecture.

I guess in the end it boils down to the philosophical question of the role of 
the IETF. If DNSOP declines to accept this topic, I suspect it merely means 
each vendor will come up with their own implementation with their own quirks 
that operators will have to wade through. I fail to see how this improves 
anything.

Regards,
-drc




Re: [DNSOP] on Negative Trust Anchors

2012-04-15 Thread Scott Schmit
On Fri, Apr 13, 2012 at 04:38:10PM -0700, David Conrad wrote:
 On Apr 13, 2012, at 3:30 PM, Jaap Akkerhuis wrote:
  More pragmatically, while I understand the theory behind rejecting NTAs,
  I have to admit it feels a bit like the IETF rejecting NATs and/or DNS
  redirection. I would be surprised if folks who implement NTAs will stop
  using them if they are not accepted by the IETF.
  
  it is still not a reason for the IETF to standardize this.
 
 With the implication that multiple vendors go and implement the same
 thing in incompatible ways. I always get a headache when this sort of
 thing happens as the increased operational costs of non-interoperable
 implementations usually seems more damaging to me than violations of
 architectural purity. Different perspectives I guess.

What's to standardize (or be incompatible)? Each recursive resolver
already has different mechanisms for configuring it, and I'd imagine
that the list of NTAs would be configured similarly to (for example)
its TAs & DLVs.

If you're thinking of some kind of DLNV (similar to the DNS-based spam
blacklists), then there's something to talk about, but in that case
I'd want it to be secured via DNSSEC, and let's hope the operators of
those don't screw up or start blacklisting each other. (Either on
purpose or due to unfortunate timing.)

Attacking DLNV zones would have a nice amplifying effect, too, if the
need for them is widespread enough to be worth standardizing.
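
For what it's worth, the lookup side of such a (purely hypothetical) DLNV zone
could follow the familiar DNSBL pattern. The zone dlnv.example and the listing
convention below are invented here just to illustrate that pattern, assuming
Python with dnspython:

import dns.exception
import dns.resolver

def dlnv_entries(zone, dlnv_suffix="dlnv.example."):
    # A TXT record at <zone>.dlnv.example. would mean "NTA in effect", with
    # the TXT text carrying the reason and expiry.
    name = zone.rstrip(".") + "." + dlnv_suffix
    try:
        answer = dns.resolver.resolve(name, "TXT")
        return [txt.to_text() for txt in answer]
    except dns.exception.DNSException:
        return []   # not listed (or the lookup itself failed)

print(dlnv_entries("broken-zone.example"))

Which, as noted above, would itself need to be signed and would make a tempting
target.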

-- 
Scott Schmit




Re: [DNSOP] on Negative Trust Anchors

2012-04-15 Thread Scott Schmit
On Sun, Apr 15, 2012 at 02:07:20PM -0700, David Conrad wrote:
 On Apr 15, 2012, at 1:12 PM, Paul Vixie wrote:
  i seek to avoid legitimizing the AAAA igor hack in bind9, and
  negative trust anchors.
 
 My impression is that what is being proposed is standardizing on the
 moral equivalent of a manual version of happy eyeballs (is that the
 igor hack?).

It's manual for now...until the utter lack of consequences for screwing
up (everybody can still get to the broken zones just fine) junks up the
NTA lists.  As the failures start to build up, what's to keep operators
from responding to these much like most users respond to browser SSL
validation failures?  Eventually it'll become automatic, and then it'll
be automated. Once that happens, you may as well turn it off.

I suppose we could add heuristics, but then debugging will be even more
difficult, because different validators will have different heuristics
(and there will be no transparency as to what they are). And if we could
standardize the heuristics to some reasonably secure set, they'd be part
of DNSSEC!

I admit, there are certainly some disturbing parallels to the IPv6
transition--but one difference in this case is that it's pretty clear
who is at fault when things go wrong. That is, until validators start
trying to "fix" things, and get it wrong (not saying that that's
happened so far, but give it time...).

 I guess in the end it boils down to the philosophical question of the
 role of the IETF. If DNSOP declines to accept this topic, I suspect it
 merely means each vendor will come up with their own implementation
 with their own quirks that operators will have to wade through. I fail
 to see how this improves anything.

I suppose one way we could go is to mix this with the NXDOMAIN
interception/typo correction/captive portal mechanism:

If the resolver is unable to validate the domain, it MAY return a false
result leading the user to a host that will explain the error and how to
notify the domain owner of the problem. If operators decide to do
this, the implementation MUST make it possible for the chosen
notification mechanism to get through. The host SHOULD be able to notify
users who are not attempting to connect to the site via HTTP/HTTPS.
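
A toy sketch of that substitution, independent of any real resolver's hook API
(the portal address 192.0.2.53 and the status labels are invented for
illustration, in Python):

WALLED_GARDEN_IP = "192.0.2.53"   # host serving the "this zone is broken" page

def answer_for_client(validation_status, validated_addresses):
    # Addresses to hand to a non-validating client for an A query.
    if validation_status in ("secure", "insecure"):
        # Signed-and-valid, or unsigned: behave exactly as before DNSSEC.
        return validated_addresses
    # "bogus": validation failed. Instead of SERVFAIL, steer the user to a
    # host that explains the failure and how to report it to the zone owner.
    return [WALLED_GARDEN_IP]

print(answer_for_client("bogus", []))                 # ['192.0.2.53']
print(answer_for_client("secure", ["198.51.100.7"]))  # ['198.51.100.7']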

End result? If your domain is insecure, it's as it was pre-DNSSEC. If
your domain is secure, all is well. If your domain is bogus, the ISP
gets a chance to tell the user what went wrong and why, and how to
complain to the people that caused the problem in the first place. If
the user is validating the results themselves, they don't see the ISP
page, but all DNS resolvers look the same to them (i.e., switching to
8.8.8.8 won't fix it) so they'll eventually realize it's their own
machine, etc.

I'm not entirely sure I like that solution, but I think I like it
better.

-- 
Scott Schmit




Re: [DNSOP] on Negative Trust Anchors

2012-04-15 Thread Ralf Weber
Moin!

Not to answer anyone specific, as a lot of people seemed to spend their weekend 
commenting on this and I don't want to increase the incoming mail folders too 
much.

In general I agree with what David Conrad said, but want to spare you of a 
couple of +1 mails and just want to add some remarks.

We are still talking about the draft submitted by Jason Livingood which just 
describes negative trust anchors which are available in validators today. We 
are not talking about creating a list or technology to distribute negative 
trust anchors, although this may be an idea at least as good as DLV.

The current use case for DNSSEC is protection against cache poisoning for 
resolvers (been there and therefore want DNSSEC), and while there is a great 
potential for security applications using DNSSEC once it is widely deployed, we 
need to have that deployment first, and that will not happen in one day, but 
rather will be a gradual rollout when looked at from a global perspective, or 
even within regional markets. So operators rolling out DNSSEC will always be in 
competition with operators who don't have it. And the average end user will 
never understand the difference between the two, no matter how well you market 
it. He will only understand "I can get there with X and can't get there with Y."

Which means that in order not to lose customers, operators have to install 
procedures for dealing with failures, and if the procedure is "turn it off 
completely" and this happens a couple of times, upper management will ask why a 
technology that has to be turned off from time to time and does work when 
turned off is needed at all. And turning it off completely will get more 
management attention than deploying an NTA for just a failed domain, especially 
if there is a worked-out process in operations.

If I look at what failures have happened during DNSSEC deployment, which 
granted is early, but so far also mostly done by professionals earning their 
money with DNS software or services, I see the following (I'm not claiming this 
list is complete):
- TLD failures more than once
- Interoperability problems
  - Different interpretations of RFCs
  - Different levels of liberalism in what to accept
- some public visible domains failing
I don't believe that further deployment will be without errors, and as said a 
lot of times the cost of these errors will be on the validator operators. So in 
order to get them to deploy DNSSEC we have to give them tools to deal with 
errors.

If the IETF or this group wants to ignore these operational facts and not give 
new people guidance on how to deal with them, and "do nothing" is not acceptable 
advice here, I doubt that a lot of people will adopt DNSSEC or move 
back after the first or second failure, and that would not be the outcome I 
would want.

So long
-Ralf
---
Ralf Weber
Senior Infrastructure Architect
Nominum Inc.
2000 Seaport Blvd. Suite 400 
Redwood City, California 94063
ralf.we...@nominum.com





Re: [DNSOP] on Negative Trust Anchors

2012-04-15 Thread Paul Vixie
On 2012-04-15 9:07 PM, David Conrad wrote:
 I guess in the end it boils down to the philosophical question of the
 role of the IETF. If DNSOP declines to accept this topic, I suspect it
 merely means each vendor will come up with their own implementation
 with their own quirks that operators will have to wade through. I fail
 to see how this improves anything.

if you don't want something to live forever and get turned on by default
or left on across sysadmin churn, don't put it in an RFC.

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] on Negative Trust Anchors

2012-04-15 Thread David Conrad
On Apr 15, 2012, at 9:05 PM, Paul Vixie wrote:
 if you don't want something to live forever and get turned on by default
 or left on across sysadmin churn, don't put it in an RFC.

Weren't we talking about requiring an explicit lifetime on NTAs? I'd be all for 
SHOULDs/MUSTs that increase the touch factor to ensure people pull NTAs 
instead of letting them ossify.  
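
A minimal sketch of what such an explicit lifetime could look like inside a
validator's NTA table, in Python and assuming nothing beyond the standard
library; the domain, reason, and contact strings are made-up examples, not
anything taken from the draft:

import time

class NegativeTrustAnchors:
    """Per-domain validation overrides, each with a hard expiry time."""

    def __init__(self):
        self._entries = {}  # domain -> (expiry_epoch, reason, contact)

    def add(self, domain, lifetime_seconds, reason, contact):
        # The mandatory lifetime is the "touch factor": the operator has
        # to come back and re-add the NTA, or it falls away on its own.
        expiry = time.time() + lifetime_seconds
        self._entries[domain.lower()] = (expiry, reason, contact)

    def active(self, domain):
        entry = self._entries.get(domain.lower())
        if entry is None:
            return False
        expiry, _reason, _contact = entry
        if time.time() >= expiry:
            # Expired entries are dropped, so a forgotten NTA cannot ossify.
            del self._entries[domain.lower()]
            return False
        return True

# Illustrative use: suppress validation failures for a broken zone for at
# most 24 hours.
ntas = NegativeTrustAnchors()
ntas.add("broken.example", 24 * 3600, "expired RRSIGs", "noc@isp.example")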

Regards,
-drc

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] on Negative Trust Anchors

2012-04-15 Thread David Conrad
Scott,

On Apr 15, 2012, at 6:28 PM, Scott Schmit wrote:
 It's manual for now...until the utter lack of consequences for screwing
 up (everybody can still get to the broken zones just fine) junks up the
 NTA lists.  

Given the implicit assertions associated with NTA (specifically, that the 
validator operator is asserting that the zone in question is not being spoofed 
despite the fact that validation is failing), I have some skepticism that folks 
will let stuff like this 'junk up NTA lists'.

 If the resolver is unable to validate the domain, it MAY return a false
 result leading the user to a host that will explain the error and how to
 notify the domain owner of the problem.

Not sure I follow -- are you proposing additional error codes in stub resolver 
responses?

Regards,
-drc


___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] on Negative Trust Anchors

2012-04-14 Thread Patrik Fältström

On 14 apr 2012, at 00:30, Jaap Akkerhuis wrote:

   (paf)
But, all of this thinking leads me to think about DNSSEC validation
risks are very similar to the risk with deploying IPv6?
We have an IPv6 day, but why not a DNSSEC day? One day where
*many* players at the same time turn on DNSSEC validation?
 
   (drc)
   Definitely a good idea.
 
 It seems a nice idea, but a problem is that a single day is
 probably not enough.  IPv6 problems are (nearly) instantaneous, but
 with DNSSEC, problems start to arise when things expire.

I was more thinking of the IPv6 Launch Day that we have this year, where IPv6 
is turned on and not off.

DNSSEC validation launch day.

   Patrik

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] on Negative Trust Anchors

2012-04-14 Thread Patrik Fältström
On 14 apr 2012, at 01:50, Mark Andrews ma...@isc.org wrote:

 What one needs to do is validate answers from one's own zones
 internally as well as answers from the rest of the world.

Unfortunately too many of the broken zones we have in Sweden are ones where 
split DNS is in use and the external zone is broken while the internal one is 
not signed at all. Not until Microsoft has fully working support for DNSSEC 
(which is coming now...) will this be resolved, and then many more zones will 
be signed.

But as soon as someone actually goes to the home page of a city, it will fail, 
as large ISPs do validate in Sweden. The next step is that a complaint is filed 
and the problem solved.

Unfortunately this process is often repeated the next time there is a key 
rollover, unless the city starts having interesting stuff on their web site... :-P

 Patrik

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] on Negative Trust Anchors

2012-04-14 Thread Jaap Akkerhuis

On Apr 13, 2012, at 3:30 PM, Jaap Akkerhuis wrote:
 More pragmatically, while I understand the theory behind rejecting NTAs,
 I have to admit it feels a bit like the IETF rejecting NATs and/or DNS
 redirection. I would be surprised if folks who implement NTAs will stop
 using them if they are not accepted by the IETF.
 
 it is still not a reason for the IETF to standardize this.

With the implication that multiple vendors go and implement the
same thing in incompatible ways. I always get a headache when
this sort of thing happens as the increased operational costs
of non-interoperable implementations usually seems more damaging
to me than violations of architectural purity. Different
perspectives I guess.

If people have to do the hack themselves, they are more likely to
understand what they are doing. If you give them a standard tool
they might apply it just because it is there. It is like the BIND
(temporary?) "delegation-only" hack. Lots of people applied it
without understanding it.

As an example, when some authoritative domains brought all their
name servers in-bailiwick, it broke the lookup for those domains and
people couldn't figure out why.

And apart from this operational problem, there are more principled
objections, such as those pointed out by Doug.

jaap
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] on Negative Trust Anchors

2012-04-14 Thread Paul Vixie
On 2012-04-14 1:51 AM, Doug Barton wrote:
 ... The problem, and I cannot emphasize this highly enough, is that
 there is absolutely no way for an ISP (or other end-user site doing
 recursion/validation) to determine conclusively that the failure they
 are seeing is due to a harmless stuff-up, vs. an actual security
 incident. IOW, if we do this, we might as well just abandon DNSSEC
 altogether.

this is what i was alluding to in some text up-thread:

On 2012-04-13 5:43 PM, Paul Vixie wrote:
 ... i'm opposed to negative trust anchors, ... for their security 
 implications if there were secure applications in existence, ...

because a secure application must be able to fail reliably under attack.
introducing third party bogosity breaks that failure, and it won't
matter whether it's SOPA or NTA that breaks it. if i can leave you all
with one thought it's that dnssec failure must be reliable, end to end.

see also
http://www.circleid.com/posts/20121012_dns_policy_is_hop_by_hop_dns_security_is_end_to_end/.

paul
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] on Negative Trust Anchors

2012-04-14 Thread Warren Kumari

On Apr 13, 2012, at 6:02 PM, Patrik Fältström wrote:

 
 On 13 apr 2012, at 23:43, Nicholas Weaver wrote:
 
 Likewise, comcast being blamed for...
 
 Because (1) they seem to be the only large resolver operator that does 
 validation(?) and (2) people like us on this list try to work out end runs 
 around the standards we created instead of helping Comcast.
 
 Yes, I blame myself personally there as well for not doing enough. I was 
 working hard when we deployed DNSSEC in Sweden to counter attack all such 
 arguments in the press that you refer to, and I thought, naively, that as we 
 managed to go through those issues, other people would as well.
 
 But as I said in an earlier message, maybe we where lucky in Sweden that all 
 major ISPs did deploy validation at the same time. In the US it seems to be 
 Comcast only(?).
 
 What would have happened if ATT and Comcast and Verizon started validation 
 basically the same week?

Yes, but ATT, Verizon, Cox, BestWeb, RR, TW, etc are currently *not* doing 
validation, and currently don't have much in the way of incentives to start -- 
yes, NASA was an unusual event, but what was the standard advice that kept 
popping up on twitter / forums / fb, etc?
"Change your resolver to be 8.8.8.8 and the problem is fixed" -- now, I'm all 
for folk changing to use Google's resolvers, but avoiding validation isn't the 
right reason…

Yes, NTAs suck and have some really bad security implications, but I believe 
that the alternative is worse. Without a way for validating resolver operators 
to avoid users jumping ship to non-validating resolver operators, we delay 
adoption (imo significantly) and users are at a much larger risk for a much 
longer time.

Once most ISPs are performing validation there should be fewer screwups, and 
NTAs should be almost never needed -- but until we get to that point I think 
that they are needed, and the net security wins outweigh the costs…

W

[ Written on a plane, will send when I land. The conversation may have moved on 
since then… ] 

 
 Now of course we can not turn back clock, but I think still we give up too 
 early if we go down this path.
 
 That is my reason for a +1.
 
 Now I will go to sleep. It is Friday, and I feel I am hijacking this thread. 
 Violating principles of IETF lists I like myself.
 
 More people should say what they want to say.
 
   Patrik
 
 ___
 DNSOP mailing list
 DNSOP@ietf.org
 https://www.ietf.org/mailman/listinfo/dnsop
 

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] on Negative Trust Anchors

2012-04-13 Thread Stephan Lagerholm

Mark Andrews, Thursday, April 12, 2012 11:43 PM:
 -Original Message-
 From: Mark Andrews [mailto:ma...@isc.org]
 Sent: Thursday, April 12, 2012 11:43 PM
 To: Stephan Lagerholm
 Cc: Ralf Weber; Marc Lampo; Nicholas Weaver; dnsop@ietf.org;
Livingood,
 Jason
 Subject: Re: [DNSOP] on Negative Trust Anchors
 
 In message
 dd056a31a84cfc4ab501bd56d1e14bbbd2e...@exchange.secure64.com, Stephan
 Lagerholm writes:
  Mark Andrews Thursday, April 12, 2012 6:10 PM:
 
On 12.04.2012, at 14:21, Marc Lampo wrote:
  It holds an alternative possibility to overcome the problem
  - for operators of validating name servers - of failing
  domains because of DNSSEC.
 
  The alternative is to have a parent regularly (no frequency
  defined) check the coherence of DS information they have
  against DNSKEY information it finds published.
  If the parent detects security lameness (term used in
  RFC4641bis) its possible reaction could be to remove the DS
   information.
   
    = From my experience, active parenting is not a good
 practice.
Specifically in this case, you are assuming that the parent
knows
  the
algorithms used for the DS record. He would have to in order to
validate. That might not always hold true. Additionally, the
record
   is
not 'yours', it is just parked in your zone by the child. For
the
parent to Tamper with either the NS or DS is IMHO not a good
   practice.
   There is a difference between Tamper and Hey, you stuffed up.
   You need to fix the delegation or we will remove it as it is
 causing
  operational problems and yes there *are* RFCs that permit this to
  happen.
 
  Being Insecure is not necessary better than being Bogus. Hey you
got
  hacked, so we will remove the DS so that people can get to that
bogus
  site
 
 I said remove the delegation.  Get their attention as doing anything
 else doesn't work.

I have yet to understand how your parent-to-child DNS probes work.
Specifically, can you explain to me how you are distinguishing between:
A) The DS and the DNSKEY do not match because there is an operational
error at the child
And
B) The DS and the DNSKEY do not match because somebody is doing a man
in the middle on my probes.

Going back to the original claim that the "alternative is to have a parent
regularly check the coherence of DS information" and that a "possible
reaction could be to remove the DS":

I'm not supportive of such an active parenting idea.

I find the idea of negative trust anchors useful for recursive resolvers,
given that the operator of such a recursive resolver uses a secure out of
band mechanism to make sure that the failure is an operational mistake.
 

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] on Negative Trust Anchors

2012-04-13 Thread Marc Lampo
Stephan,

An interesting approach:
if a parent removes DS information for a child when it finds the child
to be in error, then can an attacker make the check fail (in order to
get the DS removed)?

At least one thing:
unlike the Dan Kaminsky flavour of cache poisoning attack,
there is no way the attacker could trigger the verification by the
parent, nor can the attacker know when the parent actually does the
verification.


The drawback of negative trust anchors seems to be that they must be
implemented on the validating name server side -- not centrally like the DS
record.
(and what if, to overcome the "not centrally managed" observation, there is
a demand for some blacklist of domains that should not be validated?)


Otherwise, active parenting and negative trust anchors are
two approaches to cope with the same problem.
And they are not mutually exclusive.

Kind regards,

Marc


-Original Message-
From: Stephan Lagerholm [mailto:stephan.lagerh...@secure64.com] 
Sent: 13 April 2012 04:21 PM
To: Mark Andrews
Cc: Ralf Weber; Marc Lampo; Nicholas Weaver; dnsop@ietf.org; Livingood,
Jason
Subject: RE: [DNSOP] on Negative Trust Anchors


Mark Andrews, Thursday, April 12, 2012 11:43 PM:
 -Original Message-
 From: Mark Andrews [mailto:ma...@isc.org]
 Sent: Thursday, April 12, 2012 11:43 PM
 To: Stephan Lagerholm
 Cc: Ralf Weber; Marc Lampo; Nicholas Weaver; dnsop@ietf.org;
Livingood,
 Jason
 Subject: Re: [DNSOP] on Negative Trust Anchors
 
 In message
 dd056a31a84cfc4ab501bd56d1e14bbbd2e...@exchange.secure64.com, Stephan
 Lagerholm writes:
  Mark Andrews Thursday, April 12, 2012 6:10 PM:
 
On 12.04.2012, at 14:21, Marc Lampo wrote:
  It holds an alternative possibility to overcome the problem
  - for operators of validating name servers - of failing 
  domains because of DNSSEC.
 
  The alternative is to have a parent regularly (no frequency
  defined) check the coherence of DS information they have 
  against DNSKEY information it finds published.
  If the parent detects security lameness (term used in
  RFC4641bis) its possible reaction could be to remove the DS
   information.
   
    = From my experience, active parenting is not a good
 practice.
Specifically in this case, you are assuming that the parent
knows
  the
algorithms used for the DS record. He would have to in order to 
validate. That might not always hold true. Additionally, the 
record
   is
not 'yours', it is just parked in your zone by the child. For
the
parent to Tamper with either the NS or DS is IMHO not a good
   practice.
   There is a difference between Tamper and Hey, you stuffed up.
   You need to fix the delegation or we will remove it as it is
 causing
  operational problems and yes there *are* RFCs that permit this to 
  happen.
 
  Being Insecure is not necessary better than being Bogus. Hey you
got
  hacked, so we will remove the DS so that people can get to that
bogus
  site
 
 I said remove the delegation.  Get their attention as doing anything 
 else doesn't work.

I have yet to understand how your parent to child DNS probes works.
Specifically, if you can explain to me how you are distinguishing
between:
A) The DS and the DNSKEY does not match because there is an operational
error at the child And
B) The DS and the DNSKEY does not match because somebody is doing a man
in the middle on my probes.

Going back to the original claim that  alternative is to have a parent
regularly check the coherence of DS information and that a possible
reaction could be to remove the DS

I'm not supportive of such active parenting idea.

I find the idea of negative trust anchors useful for recursive resolvers
given that the operator of such recursive resolver uses a secure out of
band technology to make sure that there is an operational mistake. 
 

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] on Negative Trust Anchors

2012-04-13 Thread Doug Barton
Responding to a message at random ...

I skimmed the draft, and with respect to the authors this is a terrible
idea.

DNSSEC is pointless if it's not used as designed. Providing an easy way
to bypass validation makes many things worse instead of better ... not
the least of which is that if an attacker has actually compromised the
authoritative name servers for the domain you've just made their job
100% easier (and thereby removed all the protection that DNSSEC is
supposed to provide).

Furthermore, the mechanism is not necessary, since if you somehow had
knowledge that it was safe to use the data even if it doesn't validate
you can temporarily set up a forward zone that points to a
non-validating resolver.

The mentality that we need to provide crutches and bandages to paper
over the mistakes by DNS admins is exactly what has perpetuated the long
history of bad habits and zomg I can't believe that something so badly
configured ever actually worked that is one of the reasons DNSSEC
rollouts are failing in the first place. Providing more crutches and
bandages is not the answer.

Doug

-- 
If you're never wrong, you're not trying hard enough
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] on Negative Trust Anchors

2012-04-13 Thread Tony Finch
Doug Barton do...@dougbarton.us wrote:

 Furthermore, the mechanism is not necessary, since if you somehow had
 knowledge that it was safe to use the data even if it doesn't validate
 you can temporarily set up a forward zone that points to a
 non-validating resolver.

AFAIK that doesn't work in BIND.

Tony.
-- 
f.anthony.n.finch  d...@dotat.at  http://dotat.at/
Northwest Forties, Cromarty, Forth: Northerly or northeasterly 5 or 6,
occasionally 7 at first. Moderate or rough. Showers, becoming wintry. Good,
occasionally moderate.
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] on Negative Trust Anchors

2012-04-13 Thread Paul Vixie
the information economics of this draft are all wrong. with all possible
respect for the comcast team who is actually validating signatures for
18 million subscribers and is therefore way ahead of the rest of the
industry and is encountering the problems of pioneers... this is not
supposed to be comcast's problem.

if someone that comcast's customers want to reach, blows their dnssec
out by using the wrong keys or using expired signatures or whatever,
then the problem ownership should rest with whosoever blows their dnssec
-- not with comcast. it's only because comcast is first that comcast has
to watch out for the deleterious effects of OPM (other people's
mistakes) on comcast's own customers. comcast can't afford the help desk
call volume that would come from another wrong-key failure at the social
security administration's domain.

but that doesn't make it comcast's problem. it would remain the social
security administration's problem.

we need to move quickly to the point where lots of large eyeball-facing
network operators are validating, such that any failure to properly
maintain signatures and keys and DS records, is felt most severely by
whomever's domain is thus rendered unreachable.

if everyone interested in working on this draft would take the time to
turn on validation, then we could avoid inverting the information
economics here. people who can't manage their keys and signatures
properly (and i include my own vanity zones in that, since i'm not yet
converted to the bind 9.9 way of doing things) should either expect
trouble or uplevel their game or stay out of the game.

i'm opposed to negative trust anchors, both for their security
implications if there were secure applications in existence, and for
their information economics implications.

paul
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] on Negative Trust Anchors

2012-04-13 Thread Evan Hunt
On Fri, Apr 13, 2012 at 05:43:42PM +, Paul Vixie wrote:
 i'm opposed to negative trust anchors, both for their security
 implications if there were secure applications in existence, and for
 their information economics implications.

+1

-- 
Evan Hunt -- e...@isc.org
Internet Systems Consortium, Inc.
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] on Negative Trust Anchors

2012-04-13 Thread Patrik Fältström

On 13 apr 2012, at 22:09, Evan Hunt wrote:

 On Fri, Apr 13, 2012 at 05:43:42PM +, Paul Vixie wrote:
 i'm opposed to negative trust anchors, both for their security
 implications if there were secure applications in existence, and for
 their information economics implications.
 
 +1

+1

   Patrik

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] on Negative Trust Anchors

2012-04-13 Thread Nicholas Weaver

On Apr 13, 2012, at 1:24 PM, Patrik Fältström wrote:

 
 On 13 apr 2012, at 22:09, Evan Hunt wrote:
 
 On Fri, Apr 13, 2012 at 05:43:42PM +, Paul Vixie wrote:
 i'm opposed to negative trust anchors, both for their security
 implications if there were secure applications in existence, and for
 their information economics implications.
 
 +1
 
 +1

-1

Simply put, I'm not a huge believer in recursive resolver (rather than client) 
validation.  But if you are going to do it...

There are a few cases where it is valuable [1], but for every 'validate is the 
right answer' case, there are hundreds of cases, like the NASA case, where the 
authority is just screwing up.  And in those cases, the economics are that 
DNSSEC is creating a DoS, and it is the one who is validating that is at least 
partially responsible, because it is both validating and deciding that its 
clients should suffer.

This is especially true for ISPs.  If you want any other ISP to validate 
DNSSEC, they need a mechanism like this so they don't suffer through the 
problems that Comcast has already experienced.

Because practice has shown that it is the recursive resolver, not the 
authority, that gets blamed.  Lurk on the Google Public DNS mailing list, and 
you realize that even without DNSSEC, the resolver operator faces the blame for 
brokenness.  Thus, at least for DNSSEC, resolver operators need to be able to 
override validation easily and efficiently.



[1] And these cases require 'listen until you can get something that 
validates': just 'accept, then validate' gives the wrong answer in these cases.

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] on Negative Trust Anchors

2012-04-13 Thread Patrik Fältström
On 13 apr 2012, at 22:24, Patrik Fältström wrote:

 +1

In a private chat I am asked to explain my +1.

Let me explain why.

Today, before negative trust anchors, the responsibility for whether the 
resolution that is the basis for a connection establishment succeeds lies with 
the zone owner. If the signature fails, it fails, resolution fails, and the 
connection can not be established.

Now, if we have negative trust anchors that the validator is controlling, then 
I interpret it as if this choice of ability to resolve a name moves from the 
zone owner to the validator (or as in the case of X.509 certs to the client).

What I am against is this *CHANGE* in who is responsible.


Further, I think for .COM (and in the US) we are extremely unlucky that more or 
less only one large validator started validating, and then one zone owner made 
mistakes with their DNSSEC data. This made the press and community blame the 
one that did right, the validator, when in fact the one that validated and 
rejected some RRs did the right thing.

In Sweden, where we also had such incidents, we did not give up that easily. 
But we succeeded, I think, because of two things:

- We managed to have more than one major ISP/resolver start validating on the 
same date, so as far as I know, no incident, regardless of how bad it was, was 
ever one that blamed the validator.

- We managed to educate the press and whoever else could help put a wet blanket 
over all rumors that the validator was the one to blame when validation did not 
work.

Of course this was MUCH easier in Sweden, which is a much smaller country than 
the group of entities that uses .COM.

But all of this thinking leads me to think that DNSSEC validation risks are 
very similar to the risks of deploying IPv6. We have an IPv6 day, but why not 
a DNSSEC day? One day where *many* players at the same time turn on DNSSEC 
validation?

If we did, then maybe it would be easier for parties to turn on validation, 
because it would be easier for them to explain that, when failures occur, it is 
not the validator that is making mistakes but the zone owner.


And to go back to the +1: I say strongly +1 because alternatives (like what I 
just described) to changing who is responsible for deciding whether validation 
should work or not are not explored enough. Definitely not.

I am not giving up yet, although after my work in a role responsible for many 
products at an ISP in 1996-2000, I definitely understand the cost of negative 
press and an increased number of calls to customer service.

Patrik

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] on Negative Trust Anchors

2012-04-13 Thread Patrik Fältström

On 13 apr 2012, at 22:44, Nicholas Weaver wrote:

 Because practice has shown that it is the recursive resolver, not the 
 authority, that gets blamed.

As you saw in my mail, I completely disagree from my own personal experience.

If I look at the number of failures, the number of cases where the validator is 
blamed is exactly one -- Comcast in the NASA case -- compared to the 50 or so 
cases where the zone owner/signer is blamed. Yes, we have been running DNSSEC 
validation in Sweden a bit longer than in the USA.

Can you please comment on that mail that uses a few more characters than '+1' 
please?

   Patrik

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] on Negative Trust Anchors

2012-04-13 Thread Jaap Akkerhuis
...
More pragmatically, while I understand the theory behind rejecting NTAs,
I have to admit it feels a bit like the IETF rejecting NATs and/or DNS
redirection. I would be surprised if folks who implement NTAs will stop
using them if they are not accepted by the IETF.

Doing the validation on my machine makes it easy for me to realize
who to blame when things break but I realize others don't have that
insight or run validators, so I see the pain for the validating
ISP. However, it is still not a reason for the IETF to standardize
this.

(paf)
 But, all of this thinking leads me to think about DNSSEC validation
 risks are very similar to the risk with deploying IPv6?
 We have an IPv6 day, but why not a DNSSEC day? One day where
 *many* players at the same time turn on DNSSEC validation?

(drc)
Definitely a good idea.

It seems a nice idea, but a problem is that a single day is
probably not enough.  IPv6 problems are (nearly) instantaneous, but
with DNSSEC, problems start to arise when things expire.

jaap
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] on Negative Trust Anchors

2012-04-13 Thread Mehmet Akcin

On Apr 13, 2012, at 2:39 PM, Patrik Fältström wrote:

 http://kommunermeddnssec.se/maps.php

This is one of the coolest things I have clicked in a long time... thanks for 
sharing

mehmet
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] on Negative Trust Anchors

2012-04-13 Thread David Conrad
On Apr 13, 2012, at 3:30 PM, Jaap Akkerhuis wrote:
 More pragmatically, while I understand the theory behind rejecting NTAs,
 I have to admit it feels a bit like the IETF rejecting NATs and/or DNS
 redirection. I would be surprised if folks who implement NTAs will stop
 using them if they are not accepted by the IETF.
 
 it is still not a reason for the IETF to standardize this.

With the implication that multiple vendors go and implement the same thing in 
incompatible ways. I always get a headache when this sort of thing happens as 
the increased operational costs of non-interoperable implementations usually 
seems more damaging to me than violations of architectural purity. Different 
perspectives I guess.

 It is seems a nice idea but a problem is that a single day is
 probably not enough.  IPv6 problems are (nearly) instantaneous but
 with DNSSEC problems start to arise when things expire.

Crawl before running a marathon. If we get to a point where people actually 
deploy signing and/or validation systems, I'd call it success.

Regards,
-drc
 

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] on Negative Trust Anchors

2012-04-12 Thread Marc Lampo
Hello,

this is not a reply to any comment already made on this approach
of negative trust anchors.
(I just posted a reply with RFC4641bis in the subject, about this)


It holds an alternative possibility to overcome the problem
- for operators of validating name servers - of failing domains
because of DNSSEC.

The alternative is to have a parent regularly (no frequency defined)
check the coherence of the DS information it has against the DNSKEY
information it finds published.
If the parent detects security lameness (a term used in RFC4641bis), its
possible reaction could be to remove the DS information.
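
As a rough illustration of the kind of periodic coherence check described
above (a sketch only, assuming the dnspython 2.x library; not anything the
draft or any registry actually specifies), a parent could flag a delegation
as security-lame when none of its DS records match a DNSKEY currently
published by the child:

import dns.dnssec
import dns.resolver

# Digest types the parent can be expected to know; unknown types are skipped.
KNOWN_DIGESTS = {1: "SHA1", 2: "SHA256"}

def security_lame(child_zone):
    """True when no parent-side DS record matches a DNSKEY published by the
    child (assumes the delegation actually has DS records)."""
    ds_rrset = dns.resolver.resolve(child_zone, "DS")          # parent-side data
    dnskey_rrset = dns.resolver.resolve(child_zone, "DNSKEY")  # child-side data
    for ds in ds_rrset:
        digest = KNOWN_DIGESTS.get(ds.digest_type)
        if digest is None:
            continue
        for key in dnskey_rrset:
            if dns.dnssec.make_ds(child_zone, key, digest) == ds:
                return False  # at least one DS still matches a DNSKEY
    return True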


The draft of Negative Trust Anchors does not mention anything about
informing the operator of the failing domain.
But since a parent domain operator should know who operates the
child domains, they can also actively inform (e.g. send email to the
registered contact person).  That way, somebody can start working on
correcting the root cause.


The advantage over a negative trust anchor would be that this is more
centrally managed: the action by the parent (remove DS) is visible (TTL
permitting) to any validating name server.
 (A negative trust anchor needs to be configured by every validating NS
  whose administrators bother to do so.)

I acknowledge that a negative trust anchor applies however the
chain-of-trust starts (from the root zone, or from some DLV).  But since
the root has been DNSSEC-signed for close to 2 years now, I expect the DLV
approach is less important anyway.


While my company, as registry for .eu, does not do this (yet?), we do
validate DNSSEC information when submitted, prior to publishing it as DS
records (only if validation yields success).
So, we already decide not to publish wrong information (at one point in
time).
The suggestion to regularly verify again extends that behaviour from
  not publishing if wrong (at submission time)
to
  stop publishing if wrong (as part of normal operation).
(The contract with registrars states that information provided by
registrars must be correct.  If not, the contract allows for some
reactions - like blocking a domain in case of fake identities - and, by
extension, this applies to the correctness of DNSSEC information.)
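
A minimal sketch of the submission-time check mentioned above, assuming
dnspython (with a crypto backend for signature verification) and that the
caller supplies the address of one of the child's authoritative servers
(server_ip is an assumption of this sketch, not part of any registry
procedure); it verifies that the child's DNSKEY RRset is validly signed by
its own keys before a DS would be published:

import dns.dnssec
import dns.message
import dns.name
import dns.query
import dns.rdataclass
import dns.rdatatype

def dnskey_rrset_self_signed(zone, server_ip):
    """Fetch the child's DNSKEY RRset plus its covering RRSIG and check that
    the RRset validates against its own keys."""
    name = dns.name.from_text(zone)
    query = dns.message.make_query(name, dns.rdatatype.DNSKEY, want_dnssec=True)
    response = dns.query.tcp(query, server_ip, timeout=5)  # TCP avoids truncation
    keys = response.get_rrset(response.answer, name,
                              dns.rdataclass.IN, dns.rdatatype.DNSKEY)
    sigs = response.get_rrset(response.answer, name,
                              dns.rdataclass.IN, dns.rdatatype.RRSIG,
                              dns.rdatatype.DNSKEY)
    if keys is None or sigs is None:
        return False
    try:
        dns.dnssec.validate(keys, sigs, {name: keys})
        return True
    except dns.dnssec.ValidationFailure:
        return False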


Marc Lampo
Security Officer
EURid (for .eu)

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] on Negative Trust Anchors

2012-04-12 Thread Ralf Weber
Moin!

On 12.04.2012, at 14:21, Marc Lampo wrote:
 It holds an alternative possibility to overcome the problem
 - for operators of validating name servers - of failing domains
 because of DNSSEC.
 
 The alternative is to have a parent regularly (no frequency defined)
 check the coherence of DS information they have against DNSKEY information
 it finds published.
 If the parent detects security lameness (term used in RFC4641bis) its
 possible reaction could be to remove the DS information.
It is something completely different and I certainly welcome TLDs doing that. 
But it's not an alternative, it's an addition. Someone who wants to operate 
DNSSEC-aware resolvers that validate today must have the ability to deploy 
negative trust anchors IMHO.

 The draft of Negative Trust Anchors does not mention anything about
 informing the operator of the failing domain.
 But since a parent domain operator should know who operates the
 child domains, they can also actively inform (eg. send email to registered
 contact person).  That way, somebody can start working on correcting
 the root cause.
Agreed, we should amend section 7 with steps to take when the need for a 
negative trust anchor is discovered, and that should be one of them.

So long
-Ralf
---
Ralf Weber
Senior Infrastructure Architect
Nominum Inc.
2000 Seaport Blvd. Suite 400 
Redwood City, California 94063
ralf.we...@nominum.com



___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] on Negative Trust Anchors

2012-04-12 Thread Stephan Lagerholm
Mark,

On 12.04.2012, at 14:21, Marc Lampo wrote:
  It holds an alternative possibility to overcome the problem
  - for operators of validating name servers - of failing domains
  because of DNSSEC.
 
  The alternative is to have a parent regularly (no frequency defined)
  check the coherence of DS information they have against DNSKEY
  information it finds published.
  If the parent detects security lameness (term used in RFC4641bis)
  its possible reaction could be to remove the DS information.

= From my experience, active parenting is not a good practice.
Specifically in this case, you are assuming that the parent knows the
algorithms used for the DS record. He would have to in order to
validate. That might not always hold true. Additionally, the record is
not 'yours', it is just parked in your zone by the child. For the parent
to Tamper with either the NS or DS is IMHO not a good practice.

/S
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] on Negative Trust Anchors

2012-04-12 Thread Andrew Sullivan
On Thu, Apr 12, 2012 at 08:27:21AM -0600, Stephan Lagerholm wrote:

 Specifically in this case, you are assuming that the parent knows the
 algorithms used for the DS record. He would have to in order to
 validate. That might not always hold true. Additionally, the record is
 not 'yours', it is just parked in your zone by the child. For the parent
 to Tamper with either the NS or DS is IMHO not a good practice.

The DS is _not_ parked in the parent zone by the child.  Unlike the NS
record, the DS record is authoritative data at the parent, and never
at the child.  As I read the RFCs, the DS record is fully and
completely parent-side data, and is the parent's assertion of its
relationship to the child.  

I really think we have to get over the idea that the DS record is
somehow the child's data that is merely represented in the parent
side.  That way of thinking about this is a good way, IMO, to get
failed chains across the zone cut.  IMO it is better to think of the
DS/DNSKEY pair as a way of expressing accord across a zone cut, with
each side contributing a portion of the effort and holding a portion
of the responsibility.

Best,

A

-- 
Andrew Sullivan
a...@anvilwalrusden.com
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] on Negative Trust Anchors

2012-04-12 Thread Mark Andrews

In message dd056a31a84cfc4ab501bd56d1e14bbbd2e...@exchange.secure64.com, 
Stephan Lagerholm writes:
 Mark,
 
 On 12.04.2012, at 14:21, Marc Lampo wrote:
   It holds an alternative possibility to overcome the problem
   - for operators of validating name servers - of failing domains
   because of DNSSEC.
  
   The alternative is to have a parent regularly (no frequency defined)
   check the coherence of DS information they have against DNSKEY
   information it finds published.
   If the parent detects security lameness (term used in RFC4641bis)
   its possible reaction could be to remove the DS information.
 
 = From my experience, active parenting is not a good practice.
 Specifically in this case, you are assuming that the parent knows the
 algorithms used for the DS record. He would have to in order to
 validate. That might not always hold true. Additionally, the record is
 not 'yours', it is just parked in your zone by the child. For the parent
 to Tamper with either the NS or DS is IMHO not a good practice.

There is a difference between "tamper" and "Hey, you stuffed up.
You need to fix the delegation or we will remove it, as it is causing
operational problems" -- and yes, there *are* RFCs that permit this to
happen.

Parents are already REQUIRED to make these sorts of checks of the
records involved in the delegation according to RFC 1034.

As for not knowing the DS algorithm, that is just garbage.  For DS
records to be useful the algorithms need to be well known.  There
are no private DS algorithms.

Mark

 /S
 ___
 DNSOP mailing list
 DNSOP@ietf.org
 https://www.ietf.org/mailman/listinfo/dnsop
-- 
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] on Negative Trust Anchors

2012-04-12 Thread Stephan Lagerholm
Mark Andrews Thursday, April 12, 2012 6:10 PM:

  On 12.04.2012, at 14:21, Marc Lampo wrote:
It holds an alternative possibility to overcome the problem
- for operators of validating name servers - of failing domains
because of DNSSEC.
   
The alternative is to have a parent regularly (no frequency
defined) check the coherence of DS information they have against
DNSKEY information it finds published.
If the parent detects security lameness (term used in
RFC4641bis) its possible reaction could be to remove the DS
 information.
 
  = From my experience, active parenting is not a good practice.
  Specifically in this case, you are assuming that the parent knows
the
  algorithms used for the DS record. He would have to in order to
  validate. That might not always hold true. Additionally, the record
 is
  not 'yours', it is just parked in your zone by the child. For the
  parent to Tamper with either the NS or DS is IMHO not a good
 practice.
 
 There is a difference between Tamper and Hey, you stuffed up.
 You need to fix the delegation or we will remove it as it is causing
 operational problems and yes there *are* RFCs that permit this to
 happen.

Being Insecure is not necessarily better than being Bogus. "Hey, you got
hacked, so we will remove the DS so that people can get to that bogus
site"

 Parents are already REQUIRED to make these sorts of checks of the
 records involved in the delegation according to RFC 1034.

If you do an algorithm rollover, then you will have two DS at the
parent but only one will correspond to a DNSKEY at the zone.

 As for not knowing the DS algorithm what is just garbage.  For DS
 records to be useful the algorithms need to be well known.  There are
 no private DS algorithms.

That is not how I read RFC 4034, section 5.1.2 and appendix A.1. 
What am I missing?

The algorithm number used by the DS RR is identical to the algorithm
number used by RRSIG and DNSKEY RRs.  Appendix A.1 lists the 
algorithm number types.

  5   RSA/SHA-1 [RSASHA1]  y  [RFC3110]  MANDATORY
252   Indirect [INDIRECT]  n  -
253   Private [PRIVATEDNS] y  see below  OPTIONAL
254   Private [PRIVATEOID] y  see below  OPTIONAL
255   reserved


/S
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] on Negative Trust Anchors

2012-04-12 Thread Mark Andrews

In message dd056a31a84cfc4ab501bd56d1e14bbbd2e...@exchange.secure64.com, 
Stephan Lagerholm writes:
 Mark Andrews Thursday, April 12, 2012 6:10 PM:
 
   On 12.04.2012, at 14:21, Marc Lampo wrote:
 It holds an alternative possibility to overcome the problem
 - for operators of validating name servers - of failing domains
 because of DNSSEC.

 The alternative is to have a parent regularly (no frequency
 defined) check the coherence of DS information they have against
 DNSKEY information it finds published.
 If the parent detects security lameness (term used in
 RFC4641bis) its possible reaction could be to remove the DS
  information.
  
   = From my experience, active parenting is not a good practice.
   Specifically in this case, you are assuming that the parent knows
 the
   algorithms used for the DS record. He would have to in order to
   validate. That might not always hold true. Additionally, the record
  is
   not 'yours', it is just parked in your zone by the child. For the
   parent to Tamper with either the NS or DS is IMHO not a good
  practice.
  There is a difference between Tamper and Hey, you stuffed up.
  You need to fix the delegation or we will remove it as it is causing
  operational problems and yes there *are* RFCs that permit this to
  happen.
 
 Being Insecure is not necessary better than being Bogus. Hey you got
 hacked, so we will remove the DS so that people can get to that bogus
 site

I said "remove the delegation".  Get their attention, as doing anything
else doesn't work.

  Parents are already REQUIRED to make these sorts of checks of the
  records involved in the delegation according to RFC 1034.
 
 If you do an algorithm rollover, then you will have two DS at the
 parent but only one will correspond to a DNSKEY at the zone.

Garbage.  You don't add DS records until there are DNSKEYs for the
algorithm and you remove all DS records for the old algorithm *before* 
you remove the DNSKEY for the old algorithm.  If you fail to do this
you will introduce validation failures.

You can have DS records that don't correspond to a DNSKEY but the
algorithm MUST match one that does correspond to a DNSKEY.  This
lets you publish a single KSK DNSKEY per algorithm.
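
A minimal sketch of the invariant stated above -- every algorithm that
appears in the parent's DS RRset must also appear in the child's DNSKEY
RRset -- assuming dnspython and a signed, securely delegated zone (both
queries return answers):

import dns.resolver

def ds_algorithms_covered(zone):
    """True when every DS algorithm at the parent has at least one DNSKEY of
    the same algorithm at the child."""
    ds_algorithms = {ds.algorithm for ds in dns.resolver.resolve(zone, "DS")}
    key_algorithms = {key.algorithm
                      for key in dns.resolver.resolve(zone, "DNSKEY")}
    # Extra DNSKEY algorithms are harmless; a DS algorithm with no matching
    # DNSKEY algorithm is the failure mode described above.
    return ds_algorithms <= key_algorithms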

  As for not knowing the DS algorithm what is just garbage.  For DS
  records to be useful the algorithms need to be well known.  There are
  no private DS algorithms.
 
 That is not how I read RFC 4034, section 5.1.2 and appendix A.1.
 What am I missing?
 
 The algorithm number used by the DS RR is identical to the algorithm
 number used by RRSIG and DNSKEY RRs.  Appendix A.1 lists the
 algorithm number types.
 
   5   RSA/SHA-1 [RSASHA1]  y  [RFC3110]  MANDATORY
 252   Indirect [INDIRECT]  n  -
 253   Private [PRIVATEDNS] y  see below  OPTIONAL
 254   Private [PRIVATEOID] y  see below  OPTIONAL
 255   reserved

DS records have a DNSSEC algorithm which matches the DNSKEY and a
DS hash algorithm.  You can compute a DS record without knowing the
DNSSEC algorithm.  You can match DS records to DNSKEY records without
knowing the DNSSEC algorithm.  You do need to know the DS hash
algorithm and there are no private DS hash algorithms.
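
A short sketch of that point, assuming dnspython: the DS digest is computed
from the owner name and the DNSKEY RDATA with a well-known hash algorithm
passed in by the caller, while the DNSKEY's own signing algorithm number is
merely copied into the DS record, never evaluated (the flags == 257 filter
for KSKs is illustrative, not required by the protocol):

import dns.dnssec
import dns.resolver

def ds_records_for(zone, digest="SHA256"):
    """Compute DS records for a zone's SEP-flagged keys without ever having
    to implement the keys' signing algorithms."""
    keys = dns.resolver.resolve(zone, "DNSKEY")
    return [dns.dnssec.make_ds(zone, key, digest)
            for key in keys if key.flags == 257]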

Mark
-- 
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop