Re: [DNSOP] DNS for Cloud Resources in draft-ietf-rtgwg-net2cloud-problem-statement-08

2020-03-11 Thread Morizot Timothy S
Yes, I believe that suggestion is much stronger and expresses the intent and 
meaning better.

Thanks,

Scott

-----Original Message-----
From: DNSOP  On Behalf Of Hollenbeck, Scott
Sent: Wednesday, March 11, 2020 1:19 PM
To: linda.dun...@futurewei.com
Cc: dnsop@ietf.org
Subject: [DNSOP] DNS for Cloud Resources in 
draft-ietf-rtgwg-net2cloud-problem-statement-08

Could we make the last sentence stronger, perhaps with a statement like this 
from the US-CERT WPAD Name Collision Vulnerability alert dated May 23, 2016?

"Globally unique names do prevent any possibility of collision at the present 
or in the future and they make DNSSEC trust manageable. Consider using a 
registered and fully qualified domain name (FQDN) from global DNS as the root 
for enterprise and other internal namespaces."



Re: [DNSOP] Solicit feedback on the problems of DNS for Cloud Resources described by the draft-ietf-rtgwg-net2cloud-problem-statement

2020-02-13 Thread Morizot Timothy S
Linda Dunbar wrote:
>Thank you very much for suggesting using the Globally unique domain name and 
>having subdomains not resolvable outside the organization.
>I took some of your wording into the section. Please let us know if the 
>description can be improved.

Thanks. I think that covers a reasonable approach to avoid collisions and 
ensure resolution and validation occur as desired by the organization with 
administrative control over the domains used.

I realized I accidentally omitted a 'when' that makes the last sentence scan 
properly. In the process, I noticed what looked like a couple of other minor 
edits that could improve readability. I did not see any substantive issues with 
the revised text but did include those minor proposed edits below.

Scott


3.4. DNS for Cloud Resources
DNS name resolution is essential for both on-premises and cloud-based 
resources. For customers with hybrid workloads spanning the two environments, 
extra steps are necessary to configure DNS to work seamlessly across both.
Cloud operators have their own DNS to resolve resources within their Cloud DCs 
and to resolve well-known public domains. The cloud's DNS can be configured to 
forward queries to customer-managed authoritative DNS servers hosted 
on-premises, and to respond to DNS queries forwarded by on-premises DNS servers.
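
As a rough sketch of that arrangement (BIND-style syntax; the zone name and 
forwarder address here are hypothetical), the on-premises resolver might carry 
something like:

    // Send queries for the cloud-hosted namespace to the cloud DNS endpoint
    zone "cloud.example.com" {
        type forward;
        forward only;
        forwarders { 10.1.0.2; };   // cloud provider's inbound resolver address
    };

with the mirror-image rule configured on the cloud side for the on-premises 
namespace.
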
For enterprises using cloud services from multiple cloud operators, it is 
necessary to establish policies and rules on how and where to forward DNS 
queries. When applications in one cloud need to communicate with applications 
hosted in another cloud, DNS queries from one Cloud DC may be forwarded to the 
enterprise's on-premises DNS, which in turn can forward them to the DNS service 
in the other cloud. Needless to say, the configuration can become complex 
depending on the application communication patterns.
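
For example, an enterprise resolver sitting between two clouds might carry one 
forward zone per provider (again a hypothetical BIND-style sketch; names and 
addresses are illustrative):

    zone "app-a.example.com" {
        type forward;
        forward only;
        forwarders { 10.1.0.2; };   // resolver endpoint in cloud A
    };
    zone "app-b.example.com" {
        type forward;
        forward only;
        forwarders { 10.2.0.2; };   // resolver endpoint in cloud B
    };

With this in place, a query from an application in cloud A for a name in cloud 
B traverses cloud A's DNS, the enterprise resolver, and then cloud B's DNS, 
which is the multi-hop path described above.
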
However, even with carefully managed policies and configurations, collisions 
can still occur. If an organization uses an internal name such as .cloud and 
later wants its services to be reachable through, or hosted within, another 
cloud provider that also uses .cloud, resolution cannot work. It is therefore 
better to use a globally unique domain name, even when an organization does not 
make all of its namespace globally resolvable. An organization's globally 
unique DNS can include subdomains that cannot be resolved at all outside 
certain restricted paths, zones that resolve differently based on the origin of 
the query, and zones that resolve the same globally for all queries from any 
source.
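
A minimal BIND views sketch of that split (names and address ranges are 
hypothetical): the internal subdomain stays under the organization's 
registered, globally unique name but only resolves for internal clients:

    view "internal" {
        match-clients { 10.0.0.0/8; };            // internal clients only
        zone "corp.example.com" {
            type master;
            file "db.corp.example.com.internal";  // full internal namespace
        };
    };
    view "external" {
        match-clients { any; };
        zone "example.com" {
            type master;
            file "db.example.com.public";         // public names; no corp data
        };
    };

Outside queries for corp.example.com simply receive NXDOMAIN from the public 
example.com zone, yet the names remain globally unique because example.com is 
registered.
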
Globally unique names do not equate to globally resolvable names or even global 
names that resolve the same way from every perspective. Globally unique names 
do, however, prevent any possibility of collision now or in the future, and 
they make DNSSEC trust manageable. It's not as if there is, or even could be, 
some sort of shortage in available names, especially when subdomains and the 
ability to delegate administrative boundaries are considered.



Re: [DNSOP] Solicit feedback on the problems of DNS for Cloud Resources described by the draft-ietf-rtgwg-net2cloud-problem-statement

2020-02-12 Thread Morizot Timothy S
Paul Vixie wrote:
>if the names are global then they will be unique and DNS itself will handle 
>the decision of how to route questions to the right authority servers.
>...
>first i hope you can explain why the simpler and existing viral DNS paradigm 
>(all names are global and unique) is unacceptable for your purpose.

I wanted to highlight the central point Paul Vixie made and note that it 
applies even when an organization does not make all its namespace globally 
resolvable. An organization's globally unique DNS can include subdomains that 
cannot be resolved at all outside certain restricted paths, zones that resolve 
differently based on the origin of the query and zones that resolve the same 
globally for all queries from any source. Globally unique names do not equate 
to globally resolvable names or even global names that resolve the same way 
from every perspective. Globally unique names do prevent any possibility of 
collision at the present or in the future and they make DNSSEC trust 
manageable. (Both of those are significant concerns for my organization.) It's 
not as if there is or even could be some sort of shortage in available names 
that can be used, especially subdomains and the ability to delegate 
administrative boundaries are considered.

I would also like to understand why global and unique names are unacceptable.

Thanks,

Scott



Re: [DNSOP] [dns-operations] dnsop-any-notimp violates the DNS standards

2015-03-13 Thread Morizot Timothy S
Nonsense.

I'm not sure exactly what sort of attack profile you have in mind at the 
registrar with (a), but given that the TTL for DS records is generally 24 
hours, most attacks at that level will create pretty widespread DNSSEC 
validation errors for at least that initial day. So DNSSEC validation still 
helps a great deal there.
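
(As a quick illustration of that 24-hour window: the DS TTL is visible at the 
parent with a simple query; the zone, key tag, and digest below are 
hypothetical.)

    $ dig example.gov DS +noall +answer
    example.gov.    86400   IN   DS   12345 8 2 3A1B...

A cached DS with that 86400-second TTL is why a key swapped at the registrar 
produces widespread, visible validation failures for roughly a day.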

(b) and (d) are issues of securing the first hop if validation is not done on 
the endpoint itself. Those are valid concerns, but they do not mean that DNSSEC 
validation provides no protection. It certainly protects against an array of 
cache poisoning attacks even in that configuration. And that's protection the 
clients would not otherwise have. It definitely makes it a lot harder to use 
DNS as an attack vector with nobody noticing. It's one layer of a layered 
approach.
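
For illustration: when a validating recursive resolver answers a non-validating 
stub, the only signal the stub receives is the AD (authenticated data) bit in 
the response header (hypothetical dig output, assuming a validating resolver at 
192.0.2.53 and a signed zone):

    $ dig @192.0.2.53 example.gov SOA +dnssec
    ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

The "ad" flag asserts that the resolver validated the answer; anything on the 
path between stub and resolver can strip or forge that bit, which is exactly 
the gap (b) and (d) describe.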

(c) is certainly a problem if you don't validate on the endpoint and instead 
trust any random nameserver on any network to which you connect.

However, most enterprise clients and ISP users do tend to have a reliable and 
reasonably secure path to their first-hop recursive nameserver. It's not nearly 
as secure as validating on the client, but it's much more secure than having no 
validation whatsoever.
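
For context, enabling validation on the recursive side is a small change. On 
BIND, for example, it is a single option (a sketch; BIND 9.8 or later):

    options {
        dnssec-validation auto;   // validate using the built-in root trust anchor
    };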

Nor is DNSSEC validation a DoS vector. That's a non sequitur and frankly a 
pretty silly assertion. Yes, an organization can break its own authoritative 
DNS (which is a matter of signing, not validation), but frankly DNSSEC is just 
one of many ways an organization can screw up DNS or anything else in its 
network. It's best to know what you're doing, and organizations will learn. If 
you haven't implemented DNSSEC validation yourself, you may not have noticed, 
but US government agency management of DNSSEC has improved greatly with 
experience. Outages due to error are less and less common and usually limited 
in scope when they occur. Since we've been validating all Internet responses 
for four years and counting now (and tend to interact quite a bit with other 
agencies), we have noticed the improvement. Refusing to return results when the 
authoritative DNS response fails validation is a good thing, not a bad thing, 
even when it's the authoritative zone administrators who screwed up their own 
zone.

DNSSEC validation is not a panacea, but if you refuse to implement it you are 
denying your users one layer of protection you could pretty easily provide. And 
given that in the US the large majority of federal agency DNS authoritative 
zones are signed, you also can't claim there's no benefit to the US public from 
validation. Implementing validation on recursive nameservers does not protect 
users from every attack. Nothing does. Nor is it as good as performing 
validation at the client. But it is a solid first step with real security 
benefits. And it's a step that can be followed and built upon with further 
enhancements later.

Scott

-----Original Message-----
From: Nicholas Weaver [mailto:nwea...@icsi.berkeley.edu] 
Sent: Friday, March 13, 2015 3:08 PM
To: Morizot Timothy S
Cc: Nicholas Weaver; dnsop@ietf.org
Subject: Re: [DNSOP] [dns-operations] dnsop-any-notimp violates the DNS 
standards


> On Mar 13, 2015, at 10:21 AM, Morizot Timothy S timothy.s.mori...@irs.gov
> wrote:
>
> It’s been steadily increasing for years now and gives me an idea what
> percentage of the US public is protected against certain types of attacks
> involving our zones. DNSSEC validation is not a panacea, but in a layered
> approach toward combating fraud and certain sorts of attacks, it does provide
> a particular sort of protection not available through any other means.
> Whether or not ISPs sign their authoritative zones matters much less to us
> than whether or not they implement DNSSEC validation on their recursive
> nameservers. And that’s not a failure at all. By the measure above (which
> isn’t perfect, but the best one available) roughly a fifth to a quarter of
> the US public, the primary consumers of our zones, exclusively use validating
> nameservers. That’s significant. Would I like to see it higher? Sure. But
> I’ll take it.

The problem is that validation by the recursive resolver is nearly useless for 
security, but one heck of an effective DoS attack (NASA, HBO, etc.)...

Let's look at what real-world attacks on DNS are.

a:  Corrupt the registrar.  DNSSEC do any good?  Nope.

b:  Corrupt the traffic in-flight (on-path or in-path).  DNSSEC do any good?  
Only if the attacker is not on the path for the final traffic, but just the DNS 
request.

c:  The recursive resolver lies.  Why would you trust it to validate?

d:  The NAT or a device between the recursive resolver and the user lies.  
Again, validation from the recursive resolver works how?


Overall, unless you are validating on the end host rather than the recursive 
resolver, DNSSEC does a lot of harm from misconfiguration-DOS, but almost no 
good.

--
Nicholas Weaver                      it is a tale, told by an idiot,
nwea...@icsi.berkeley.edu            full of sound and fury,
510-666-2903

Re: [DNSOP] Fwd: New Version Notification for draft-livingood-dnsop-negative-trust-anchors-01.txt

2014-10-29 Thread Morizot Timothy S
Warren Kumari wrote:
> Over on the BIND-Users list there is currently a discussion of
> fema.net (one of the Federal Emergency Management Agency domains)
> being DNSSEC borked
> (https://lists.isc.org/pipermail/bind-users/2014-October/094142.html)
>
> This is an example of the sort of issues that an NTA could address --
> I'd like to note that currently neither Google Public DNS (8.8.8.8)
> nor Comcast (75.75.75.75) have put in an NTA for it, but if it were
> fema.gov, and this were during some sort of national disaster in the
> US, things might be different...

If an authoritative domain (e.g. irs.gov) screwed up its delegation NS records 
so it effectively went dark or made some similar sort of authoritative DNS or 
nameserver error, we wouldn't expect the recursive, caching side to resolve 
those sorts of errors. The domain's DNS would simply be unavailable until they 
resolved their problem.

I'm not sure I understand why DNSSEC is somehow different. If a domain owner 
chooses to sign its authoritative zones and at some point screws up either its 
signing or its chain of trust, it should reasonably expect its DNS to go dark 
to a certain percentage of the world. (I believe that in the United States, 
that's currently around a quarter of the population, at least according to 
APNIC Labs numbers. That tends to be the part of the world I watch most 
closely.)
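
A quick way to tell a DNSSEC failure apart from an ordinary outage is to repeat 
the query with checking disabled (the CD bit); a sketch against a hypothetical 
validating resolver at 192.0.2.53, using the broken zone discussed in this 
thread:

    $ dig @192.0.2.53 fema.net SOA
    ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL

    $ dig @192.0.2.53 fema.net SOA +cd
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR

If the +cd query succeeds where the normal one returns SERVFAIL, the zone is 
reachable but failing validation: it has gone dark only to validating 
resolvers, exactly as described above.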

I do understand the need ISPs had to manage customer perceptions, especially 
for the earliest adopters like Comcast. Support calls cost money and in some 
instances, an irate customer may choose to switch providers. That likely 
persists to some extent today, but with Google on board the pressure is, at 
least, less than it was before. And as those implementing DNSSEC validation 
continue to increase, that pressure will continue to drop.

Outside of the ISP early-adopter use case, though, I'm not sure I understand 
the need for NTAs. We've had DNSSEC validation of Internet queries enabled for 
our enterprise since 2011. On the enterprise side, we simply explain that the 
problem is on the domain provider's end and that it's their responsibility to 
fix it. Until they do, we won't be able to resolve their domain. We've never 
viewed it as our responsibility to try to fix problems on the authoritative 
side of DNS for domains we don't own or manage. Truthfully, we don't encounter 
as many issues as we once did.

Given the limited nature of the use case, I'm not convinced it matters if 
there's a single specification for implementing it or not. I'm not really 
opposed to the idea either, nor do I have any issues with the draft. But after 
several years of experience without NTAs from a non-ISP perspective, I do know 
it hasn't been a burden or major issue.

Scott
