Re: [DNSOP] [Ext] Call for Adoption: draft-hardaker-dnsop-rfc8624-bis, must-not-sha1, must-not-ecc-gost
On Thu, May 2, 2024 at 11:38 AM John R Levine wrote:
> I think we're agreeing that it would be a good idea to continue to discourage SHA1, but not a good idea to surprise people by making it suddenly stop working, a la Redhat.

Yep. Conceptually I agree with that. I also realized it's inherent in RFC 8624 that it only makes sense if interpreted as guidance to those developing the software and tools that implement DNSSEC signing and/or validation. A DNS operator is only going to sign a specific zone with a single algorithm except during an algorithm roll. And there's no choice of algorithms when deploying a validating resolver. On any platform I've ever encountered, for the most part turning DNSSEC validation on or off is a binary choice. The algorithms that are or aren't supported are built into the software.

With that noted, the three drafts are suitable for working group adoption. I support the idea of expressing the table in RFC 8624 in the IANA registry and outlining that future recommendation changes can be applied there, in a consolidated location. I do think that should be the sole focus of that draft, and the rest of the text and the table of initial recommendations should reflect the current RFC 8624 text. Then the discussion of the other two drafts can focus on whether the recommendations in the current RFC 8624 table should be changed.

Thanks, Scott

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop
Re: [DNSOP] [Ext] Call for Adoption: draft-hardaker-dnsop-rfc8624-bis, must-not-sha1, must-not-ecc-gost
On Thu, May 2, 2024 at 9:19 AM John R Levine wrote:
> On Thu, 2 May 2024, Scott Morizot wrote:
> > ??? RFC 8624 is explicitly guidance to implementers not operators. The "MUST NOT" means MUST NOT implement in a conforming implementation of either signing or validation software. That's not an opinion. It's what the text says.
>
> The word "software" does not appear in RFC 8624. I think it is evident from the text that the implementers are the people using DNS software and signing the zones.
>
> Ondřej and Paul wrote the RFC so perhaps they can tell us what they meant.

I would be curious about that, since it's not how I'm used to "implementer" being used in any DNS context. It also would mean this sentence in the audience section would then make no sense: "This perspective may differ from that of a user who wishes to deploy and configure DNSSEC with only the safest algorithm."

I think we need a clean update to RFC 8624 first that includes instructions to IANA to update the table. I don't think the current draft does that very well. And since the IANA table already has a Zone Signing column, I think we just want to change that one so it has more than a yes/no option per algorithm and then add a Validation column. Once that has been adopted, there will actually be columns to update published at IANA.

But in any context I've seen over the years, the DNS implementers have always been the ones who develop and maintain the supporting tools and software. Users, operators, and terms like that have referred to those of us who deploy and administer authoritative zones and recursive resolvers.

Scott
Re: [DNSOP] [Ext] Call for Adoption: draft-hardaker-dnsop-rfc8624-bis, must-not-sha1, must-not-ecc-gost
On Thu, May 2, 2024 at 7:32 AM John R Levine wrote:
> MUST NOT is advice on how to interoperate, not on how to write software tools. It's up to the zone operator to follow the advice, not to the tool provider to hold them hostage.

??? RFC 8624 is explicitly guidance to implementers, not operators. The "MUST NOT" means MUST NOT implement in a conforming implementation of either signing or validation software. That's not an opinion. It's what the text says. It does acknowledge it can be useful guidance to others, but its audience is expressly DNSSEC implementers, not users of DNSSEC. Sure, an implementer can choose to ignore the guidance. But creating an environment where implementers have to do that sort of seems to defeat the purpose of RFC 8624.

1.3. Document Audience

The recommendations of this document mostly target DNSSEC implementers, as implementations need to meet both high security expectations as well as high interoperability between various vendors and with different versions. Interoperability requires a smooth transition to more secure algorithms. This perspective may differ from that of a user who wishes to deploy and configure DNSSEC with only the safest algorithm. On the other hand, the comments and recommendations in this document are also expected to be useful for such users.
Re: [DNSOP] [Ext] Call for Adoption: draft-hardaker-dnsop-rfc8624-bis, must-not-sha1, must-not-ecc-gost
On Thu, May 2, 2024 at 6:44 AM John Levine wrote:
> It appears that Philip Homburg said:
> > In your letter dated Thu, 2 May 2024 10:27:17 +0200 you wrote:
> > > I'm not following what breaks based on the wording I suggested, and I'm not sure why you keep bringing that up. :-)
> >
> > Then an RFC gets published that signers MUST NOT support signing using SHA1, so ldns removes those algorithms. Then a software update brings the new version of ldns to my system. Now an unsigned zone gets deployed,
>
> On the other hand, if it issued annoying warning messages every time it used a SHA1 key, I'd eventually notice and probably rotate the keys.
>
> I'm with Peter, I do not see a MUST NOT as requiring vendors or operators to do stupid stuff.

A MUST NOT in RFC 8624 directs implementations to remove their implementation of an algorithm. The current NOT RECOMMENDED is the appropriate level, per the text of the RFC, for an implementation to issue a warning that the algorithm is deprecated and should not be used for signing. Here's the description of the intent in the deprecation process outlined in RFC 8624. It seems to me this discussion has strayed from that core process to various perspectives about whether or not SHA-1 remains "secure enough".

"It is expected that deprecation of an algorithm will be performed gradually in a series of updates to this document. This provides time for various implementations to update their implemented algorithms while remaining interoperable. Unless there are strong security reasons, an algorithm is expected to be downgraded from MUST to NOT RECOMMENDED or MAY, instead of to MUST NOT. Similarly, an algorithm that has not been mentioned as mandatory-to-implement is expected to be introduced with a RECOMMENDED instead of a MUST.
Since the effect of using an unknown DNSKEY algorithm is that the zone is treated as insecure, it is recommended that algorithms downgraded to NOT RECOMMENDED or lower not be used by authoritative nameservers and DNSSEC signers to create new DNSKEYs. This will allow for deprecated algorithms to become less and less common over time. Once an algorithm has reached a sufficiently low level of deployment, it can be marked as MUST NOT so that recursive resolvers can remove support for validating it. Recursive nameservers are encouraged to retain support for all algorithms not marked as MUST NOT."

I have seen absolutely no "strong security reasons" presented in this discussion for altering that deprecation model when it comes to DNSSEC algorithms 5 and 7. (I would consider 5 less widely deployed, but since the only difference between the two is support for NSEC3, I don't see a reason to treat them differently.) Algorithm 7 remains widely used, and by zones most people would consider significant. In the US, for instance, that includes cdc.gov.

Moreover, as others have pointed out, the following assertion in the draft is factually wrong. I'm not going to support a standards document that can't even accurately state facts.

"This document retires the use of SHA-1 within DNSSEC."

Well, no. It doesn't. NSEC3 will continue to require SHA-1 until such time as that standard is also changed. The draft instructs implementations to no longer support algorithms 5 and 7 for both signing and validation, and by changing both to MUST NOT at the same time it contravenes the deprecation model outlined in RFC 8624 without providing any justification beyond some security hand-waving. RFC 8624 provides the correct guidance to implementations for the current state of use of algorithms 5 and 7. There are no "strong security reasons" for deviating from the deprecation model outlined in the RFC. "NOT RECOMMENDED" for signing means the same as "SHOULD NOT".
(That's also included with the RFC reference in the text of RFC 8624.) That makes it appropriate for signing implementations to issue warnings, if they choose, that the algorithms are deprecated for signing. The guidance for support in validators is not supposed to move to MUST NOT until an algorithm has reached a low enough level of deployment. I don't see how anyone can reasonably argue that is the case today for algorithm 7.

In my opinion, this draft should not move forward.

Scott
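The deprecation process quoted from RFC 8624 above can be sketched as a small state model. This is a toy encoding of the quoted paragraphs for illustration only, not normative text; the recommendation level names come from the RFC, everything else is my own framing:

```python
# Toy model of the RFC 8624 deprecation process quoted above.
# Level names are from the RFC; the encoding itself is illustrative.

def allowed_downgrades(current: str, widely_deployed: bool) -> set:
    """Recommendation levels an algorithm may reasonably move to next,
    absent 'strong security reasons' for faster deprecation."""
    if current == "MUST":
        # "...expected to be downgraded from MUST to NOT RECOMMENDED
        # or MAY, instead of to MUST NOT."
        return {"NOT RECOMMENDED", "MAY"}
    if current in {"NOT RECOMMENDED", "MAY"}:
        # Only once deployment is sufficiently low can it be marked
        # MUST NOT, so that validators may drop support.
        return set() if widely_deployed else {"MUST NOT"}
    return set()

def validator_may_drop_support(level: str) -> bool:
    # "Recursive nameservers are encouraged to retain support for all
    # algorithms not marked as MUST NOT."
    return level == "MUST NOT"

# Algorithm 7 (RSASHA1-NSEC3-SHA1) is NOT RECOMMENDED for signing but
# still widely deployed, so in this model it cannot yet move to MUST NOT.
assert allowed_downgrades("NOT RECOMMENDED", widely_deployed=True) == set()
assert not validator_may_drop_support("NOT RECOMMENDED")
```

The point of the sketch is the ordering constraint: jumping an algorithm straight from its current level to MUST NOT for both signing and validation skips the intermediate states the RFC's process describes.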
Re: [DNSOP] Call for Adoption: draft-arends-private-use-tld
On Mon, Jun 15, 2020 at 12:59 PM Tim Wicinski wrote:
> On Mon, Jun 15, 2020 at 1:48 PM John Levine wrote:
>> In article <cah1iciouffmryorewhhtbqfnnserw3rvups8pzc8cvnehys...@mail.gmail.com> you write:
>> > E.g. use an FQDN belonging to you (or your company), so the namespace would be example.com.zz under which your private names are instantiated.
>>
>> The obvious question is if an organization is willing to use example.com.zz, why wouldn't they use zz.example.com with split horizon DNS to keep that subtree on their local network?
>
> or since domains are cheap, why not buy a new domain, and use that for the namespace?
> A wise person liked to remind me "Namespaces are architecture decisions".
> tim

Or use a combination of both approaches (a separate second level domain and distinct subdomains in a shared public/private domain tree) if that fits your needs. Each approach addresses distinct needs. At work, for instance, we use a completely separate second level domain tree for many of our primary Active Directory forests and their constituent domains. We use private subdomain trees under our public second level domain for many other things. The appropriate internal/external boundaries require some thought and ongoing management, but it's not especially difficult.

Scott
Re: [DNSOP] Solicit feedback on the problems of DNS for Cloud Resources described by the draft-ietf-rtgwg-net2cloud-problem-statement
Ah. Should have used the Oxford comma for clarity. I'm normally one of the people who always uses it, so that was probably an accidental omission. There should be a comma before that last 'and'. I was describing the three possible states for any query and response. We have all three scenarios in production, so it's critical to understand which one covers a given name when troubleshooting issues. In each scenario, though, the name itself is unique and in a domain tree over which we have global administrative control.

On Fri, Feb 14, 2020, 10:22 Linda Dunbar wrote:
> Scott,
>
> Thank you very much for the suggested changes.
>
> For the following sentence, do you mean that different paths/zones can resolve differently based on the origin of the query? Then what do you mean by adding the subphrase "that resolve the same globally for all queries from any source"?
>
> An organization's globally unique DNS can include subdomains that cannot be resolved at all outside certain restricted paths, zones that resolve differently based on the origin of the query and zones that resolve the same globally for all queries from any source.
>
> Thank you,
>
> Linda
>
> From: Morizot Timothy S
> Sent: Thursday, February 13, 2020 6:23 AM
> To: Linda Dunbar ; Paul Vixie <p...@redbarn.org>; dnsop@ietf.org; Paul Ebersman
> Cc: RTGWG
> Subject: RE: [DNSOP] Solicit feedback on the problems of DNS for Cloud Resources described by the draft-ietf-rtgwg-net2cloud-problem-statement
>
> Linda Dunbar wrote:
> > Thank you very much for suggesting using the Globally unique domain name and having subdomains not resolvable outside the organization.
> > I took some of your wording into the section. Please let us know if the description can be improved.
>
> Thanks.
I think that covers a reasonable approach to avoid collisions and ensure resolution and validation occur as desired by the organization with administrative control over the domains used.

I realized I accidentally omitted a 'when' that makes the last sentence scan properly. In the process, I noticed what looked like a couple of other minor edits that could improve readability. I did not see any substantive issues with the revised text but did include those minor proposed edits below.

Scott

3.4. DNS for Cloud Resources

DNS name resolution is essential for on-premises and cloud-based resources. For customers with hybrid workloads, which include on-premises and cloud-based resources, extra steps are necessary to configure DNS to work seamlessly across both environments.

Cloud operators have their own DNS to resolve resources within their Cloud DCs and to well-known public domains. Cloud's DNS can be configured to forward queries to customer managed authoritative DNS servers hosted on-premises, and to respond to DNS queries forwarded by on-premises DNS servers.

For enterprises utilizing Cloud services by different cloud operators, it is necessary to establish policies and rules on how/where to forward DNS queries. When applications in one Cloud need to communicate with applications hosted in another Cloud, there could be DNS queries from one Cloud DC being forwarded to the enterprise's on premise DNS, which in turn can be forwarded to the DNS service in another Cloud. Needless to say, configuration can be complex depending on the application communication patterns.

However, even with carefully managed policies and configurations, collisions can still occur. If you use an internal name like .cloud and then want your services to be available via or within some other cloud provider which also uses .cloud, then it can't work.
Therefore, it is better to use the global domain name even when an organization does not make all its namespace globally resolvable. An organization's globally unique DNS can include subdomains that cannot be resolved at all outside certain restricted paths, zones that resolve differently based on the origin of the query, and zones that resolve the same globally for all queries from any source.

Globally unique names do not equate to globally resolvable names or even global names that resolve the same way from every perspective. Globally unique names do prevent any possibility of collision at present or in the future, and they make DNSSEC trust manageable. It's not as if there is or even could be some sort of shortage in available names that can be used, especially when subdomains and the ability to delegate administrative boundaries are considered.
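The three classes of subdomains described above can be illustrated with a small sketch. All names and addresses here are hypothetical (192.0.2.0/24 is a documentation range); `None` stands in for a name that simply does not resolve outside the restricted path:

```python
# Hypothetical sketch of the three classes of names described above,
# all under one globally unique domain. Names/addresses are invented.
ZONES = {
    "ad.internal.example.com.": {"internal": "10.0.0.5",  "external": None},        # unresolvable outside
    "portal.example.com.":      {"internal": "10.0.0.8",  "external": "192.0.2.8"}, # differs by query origin
    "www.example.com.":         {"internal": "192.0.2.1", "external": "192.0.2.1"}, # same answer everywhere
}

def resolve(name: str, view: str):
    """Return the answer a client in the given view ('internal' or
    'external') would receive; None models a name that does not
    resolve from that vantage point."""
    return ZONES[name][view]

# All three names are globally unique, but only one resolves the same
# way for every query source.
assert resolve("ad.internal.example.com.", "external") is None
assert resolve("portal.example.com.", "internal") != resolve("portal.example.com.", "external")
assert resolve("www.example.com.", "internal") == resolve("www.example.com.", "external")
```

Because every name lives under a domain the organization registered, none of the three classes can ever collide with someone else's namespace, which is the point of the paragraph above.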
Re: [DNSOP] Favor: Weigh in on draft-ietf-ipsecme-split-dns?
I guess I'll speak up as someone who has been managing the DNS/DNSSEC design and implementation of a large organization with a complex set of DNS requirements (operational and security-related) since we began the process of signing our zones in 2011.

We have universal DNSSEC validation in place across our enterprise recursive layers, and almost all zones, internal and external, are DNSSEC signed. I forget the exact count without checking, but we have 100+ 3rd level and lower internal zones. Outside of public placeholder zones, required for proper delegation and chain of trust at the appropriate level where internal/external divisions are made (typically the 3rd level), we do not normally maintain differing versions of external/internal zones. We have a few explicitly defined for that purpose because there is a legitimate requirement for differing internal/external resolution. Those needs are always segregated into zones defined for that purpose to simplify ongoing administration. (We never want to be in a situation where normal, routine record updates require changes in multiple versions of a zone. That would be an administrative nightmare.)

We employed manual trust anchors at different phases of our implementation, but now have very few in place. One is to establish trust for an internal zone for which we do not control the administration of the parent zone. Off the top of my head, I believe that's the only significant one left. Maintaining trust anchors was unpleasant and always viewed as a transitional mechanism, not something we wanted to do on a lasting basis.

On Fri, Nov 30, 2018 at 11:22 AM Paul Wouters wrote:
> On Fri, 30 Nov 2018, Ted Lemon wrote:
> That means your public DNS zone must be signed for your internal DNS zone to be signed? Otherwise you cannot have this signed delegation? That would mean you cannot run DNSSEC internally before you run it externally.

Outside of placing trust anchors on every device performing validation, that's true.
Establishing trust in a zone when the parent zone is insecure is a pain. I guess DLV could still be used. I considered it at one point. I forget why I decided not to go that route. It was too many years ago. Then again, from the signing side, I'm hard-pressed to imagine a reason to start signing internal zones without signing their external parents except in the situation where you do not control the public parent zone. Normally you want to start at the top of the hierarchy and work your way down. > > When you look things up in that zone outside the firewall, you get > NXDOMAIN for everything but the SOA on the zone, which returns an old > serial number. Inside the firewall, they get answers, which > > are signed, and which validate. There's no need for a special trust > anchor here. > > This scheme seems to require both inside and outside zones are signed > with the same key, or as Mark pointed out, both internal and external > zones need to share their DS records and keep these in sync. As these > are usually different organisations/groups/vendors/services, that is > an actual management problem. Or you can do what we did and place DS records for both the internal and external KSKs (using public placeholder versions of the internal child zones just to establish the delegation point) in the parent zone. That establishes chain of trust whichever version of the zone is resolved. That works whether the public version is just a placeholder with nothing else of significance in it or if it is a zone where both the external and internal records matter and are used by the appropriate source for the query, but differ. > > There are two ways to approach this. One is to assume that the > validator never checks the SOA on the zone. This is almost certainly the > case for nearly any use case. In that case, you just > > run the internal and external name servers with the same ZSK, and have a > delegation above it. 
You don't worry about zone serial numbers, because they don't affect validation. When you're inside the VPN, you get answers for the internal zone; when you're outside the VPN, you get answers for the outside. Both validate, because the DS record(s) are referring to the same ZSK(s).

> Coordinating shared ZSKs is even harder than requiring sharing DS records! ZSKs roll every month, and a lot of software auto-generates and performs the roll without any humans involved. It seems extremely fragile to need to coordinate ZSKs between different organisations, be ready for the same algorithm rollovers, etc etc.

Yes, absolutely. I wouldn't want to try to coordinate sharing signing keys (KSK or ZSK) across different zones under any circumstance. I know some places that do it with KSKs, because the product they use supports it as long as both versions of the zone are signed on the same device, but we don't even sign internal and external zones in the same place. Or maintain the master data in the same place for
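The "publish DS records for both KSKs" approach described earlier in this message can be sketched abstractly. This is a simplified illustration with placeholder key bytes: a real DS digest covers the owner name plus canonical DNSKEY RDATA (RFC 4034), and real validation checks RRSIG signatures, not just digests:

```python
import hashlib

# Simplified illustration of publishing DS records for BOTH the
# internal and external KSKs in the public parent zone. A validator
# then accepts whichever version of the child zone it is served,
# because a matching DS exists either way. Keys are placeholder bytes.

def ds_digest(dnskey_rdata: bytes) -> str:
    # A real DS digest covers owner name + canonical DNSKEY RDATA
    # (RFC 4034); this sketch just hashes the placeholder key bytes.
    return hashlib.sha256(dnskey_rdata).hexdigest()

internal_ksk = b"placeholder-internal-ksk"
external_ksk = b"placeholder-external-ksk"

# The parent zone carries a DS RRset with digests of both KSKs.
parent_ds_rrset = {ds_digest(internal_ksk), ds_digest(external_ksk)}

def chain_of_trust_links(served_ksk: bytes) -> bool:
    """Does the KSK the client was served match some DS in the parent?"""
    return ds_digest(served_ksk) in parent_ds_rrset

assert chain_of_trust_links(internal_ksk)   # internal view validates
assert chain_of_trust_links(external_ksk)   # external/placeholder view validates
assert not chain_of_trust_links(b"rogue-key")
```

The design point is that neither side has to share private key material or coordinate rollovers with the other; they only have to get their own DS record into the common parent.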
Re: [DNSOP] howto "internal"
On wrote:
> On 07/25/2018 05:18 AM, Tony Finch wrote:
>> I recommend having an empty public view of your private zone, so that external queries succeed with NXDOMAIN / NODATA.
>
> ACK.
>
> What is your opinion on blindly grafting the sub-domain onto the parent zone without proper delegation? I.e. internal DNS server hosts internal.example.net and external DNS server returns NXDOMAIN for internal.example.net.
>
> I have my doubts about this sort of scheme supporting DNSSEC. I think it would be better to have a mostly empty zone that is properly delegated that re-uses the same DNSSEC keys.
>
> I might even go so far as to have the external server be a slave for a specific empty view transferred from the internal server. That way the keys stay internal.
>
>> It may leak some information, but I do think that the hard NXDOMAIN / NODATA is likely cleanest for the DNS protocol.

A true "internal" enterprise network with entirely private DNS zones will often have distinct authoritative nameservers for the private versions of zones, distinct internal recursive nameservers, and will restrict clients on the enterprise network from accessing any recursive nameservers outside the enterprise network. The decisions I've made for my employer's architecture reflect those requirements and preconditions. We also DNSSEC sign most authoritative zones and DNSSEC validate responses on all recursive nameservers. With those conditions, every zone needs to be rooted in an officially registered and delegated domain to support proper chains of trust, with valid secure or insecure delegations as appropriate.

With those conditions in mind, most zones (domains and subdomains within a tree) are designated as either public or internal/private. At the point where a delegated subdomain shifts to internal, a public placeholder version of the zone is created.
While we have multiple registered second level domains, we have two primary second level domains that support our enterprise. One of them is, for all practical purposes, entirely private, so only a second level zone placeholder is public. The other is public at the second level, and third level (and lower) domains are a mix of public and private. Third level zones that represent the top of an internal zone hierarchy off that second level domain tree all have a public placeholder.

At the demarcation point where a domain in the tree shifts to private (typically the second level domain in one case and a number of the third level domains in the other), the DS records for both the public placeholder and the private version of the zone are placed in the public parent zone. Whether the placeholder or the real version of the zone is resolved, an appropriately signed DS record for a key signing the DNSKEY RRset can always be resolved.

Given the complex, layered nature of our recursive DNS infrastructure, we use forward zones and default forwarders within the enterprise itself. Our Internet/extranet recursive configuration is also more complex than that of most enterprises. Yes, trust anchors are an alternative within the standard in the absence of valid delegations or when "fake" TLDs are used, but those are not really manageable at a large enterprise scale. We do use them to anchor RFC 1918 reverse arpa zones with our own versions. That's relatively straightforward, but I wouldn't want to attempt it on any broader basis.

Admittedly, the above represents a DNS architecture supporting a large, highly restricted enterprise network. But in any architecture where DNSSEC validation is a factor, the chain of trust will always need to be considered. We also employ RPZ. And we do break DNSSEC (and have encountered at least one malware domain that was DNSSEC signed). We're fine with DNSSEC validation failing for responses we've rewritten to block with RPZ.
Our key considerations are that the query will not go out to the Internet and the client will not get a response from the Internet. Failed validation at the client (if it validates) or a SERVFAIL from a lower-level nameserver are perfectly acceptable results from our perspective.

Scott
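The behaviour described above — the rewritten answer matters less than keeping the query inside the enterprise — can be modeled in a few lines. This is a toy model, not real resolver or RPZ code, and the blocked name is invented:

```python
# Toy model of RPZ-style policy rewriting as described above: a name on
# the policy list is answered locally (here as NXDOMAIN) and the query
# never leaves the enterprise, even though a downstream validator would
# then see the rewritten answer as bogus for a signed zone.
BLOCKED = {"evil-c2.example.net."}   # hypothetical policy feed entry

def resolve(qname, forward):
    if qname in BLOCKED:
        return ("NXDOMAIN", "local-rpz")   # rewritten; query not forwarded
    return forward(qname)                  # normal recursion/forwarding

sent_upstream = []

def fake_upstream(qname):
    """Stand-in for recursion to the Internet; records what went out."""
    sent_upstream.append(qname)
    return ("NOERROR", "upstream")

# The blocked query is answered locally and nothing leaves the network.
assert resolve("evil-c2.example.net.", fake_upstream) == ("NXDOMAIN", "local-rpz")
assert sent_upstream == []
```

In command-and-control or exfiltration scenarios the query name itself can be the payload, which is why the model treats "nothing appended to `sent_upstream`" as the success condition, not the response code.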
Re: [DNSOP] I-D Action: draft-vixie-dns-rpz-04.txt
Speaking as a large enterprise operator (over 100,000 employees and contractors at over 600 sites, as well as a significant public Internet presence) that has DNSSEC signed all public zones and the majority of internal zones, has DNSSEC validation enabled at all levels throughout our recursive DNS infrastructure (not just at our Internet access layer), and also employs RPZ based protections: I don't see a lot of overlap in the threats against which each protects.

The primary DNS based threats about which we are concerned when it comes to RPZ are the vectors that utilize malicious authoritative DNS zones for botnet command & control and data exfiltration. In those scenarios especially, we do not even want the query leaving the enterprise, as the queries themselves are often the payload. (We are also planning to use our own RPZ zones, in addition to our current subscription feeds, to block malicious domains not in the feeds when identified. And we are looking at mechanisms to perhaps automatically detect particular malicious query traffic patterns, especially those associated with data exfiltration.)

If a malicious zone is DNSSEC signed, BOGUS validation of the RPZ responses by other internal validating recursive nameservers, stub resolvers, or applications is perfectly acceptable. Either way, the query is stopped from going out, which is a primary goal. There's nothing that stops the operator of a malicious authoritative zone from properly DNSSEC signing it. The traffic itself would still be malicious. RPZ is widely used because there isn't really another mechanism that effectively addresses that specific attack vector. Virtually every protocol standardized by the IETF either has been abused by malicious actors or has the potential for abuse.

Yes, if an authoritarian nation state has the ability to restrict its citizens to state operated recursive nameservers, then RPZ gives them another tool they can use to forge responses.
But such nation states were forging responses long before RPZ existed. In such an environment, they have considerable ability to abuse any protocol. If the IETF has any ideas on ways to improve RPZ to better protect against those DNS attack vectors in particular while reducing the potential for abuse or if anyone has a proposal for an alternative standard, I doubt those of us in the community that actually rely on it now would object. It would be helpful if there were an agreed reference standard for interoperability, but absent anything else that addresses the need, we'll keep using the tool we have. As an operator, dnsop certainly looks like the appropriate IETF working group for this draft. I'm not sure I understand the rationale behind Informational as opposed to Proposed Standard, but if the IETF wishes to have any input on the mechanism, this would seem to be the place to discuss it. I'm in favor of adopting it as a working group draft. Scott Morizot On Wed, Dec 21, 2016 at 8:54 AM, Ted Lemon <mel...@fugue.com> wrote: > William, I think the exit strategy for RPZ is DNSSEC. We really need to > figure out how to get people to be able to reliably and safely set up > DNSSEC. Despite Olaf’s excellent documents, we don’t really have that > yet. I don’t think that operating DNSSEC should be as scary as it is, but > right now all the IETF advice on this topic is too general, requiring the > installer to make decisions about their setup that the average IT person > doesn’t know how to make. > > We should have a document that says "look, if you don’t know any better, > here is a way to set up DNSSEC that will make your users more secure than > they are without it, and that will not blow up in your face (assuming you > do it)." I’ve seen a few documents like that, but nothing out of the > IETF; they are generally on someone’s personal web site, and don’t see wide > distribution. 
> I think we need to stop thinking that there will be some shining day when the Internet is a safe place. The internet is an ecosystem, and ecosystems have predators and parasites. We may not like it, it may violate our ideals, but it is reality, and denying reality doesn’t make it go away. What we should be doing is thinking like gardeners, not like machinists. Gardeners sometimes have to use methods for dealing with pests that allow us to have yummy food but aren’t so good for the pests. The same is true with the Internet.
>
> (FWIW, I’m in favor of adoption, for precisely this reason.)
Re: [DNSOP] Some comments on draft-hoffman-dns-terminology
On Sat, Apr 4, 2015 at 12:28 AM, Ralf Weber <d...@fl1ger.de> wrote:
> Yes. I used the term hidden primary in the past, and technically there would be no reason for a setup hidden primary - primary - secondaries, as you have two single points of failure (SPOF) there. I wouldn't deploy that. For me these words (master/primary, slave/secondary) always have been synonyms.

hidden primary (with failover in place) -- signer (again, with failover in place) -- published authoritative nameservers

Just to provide a technical context in which that configuration does make sense.

Scott