Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
> These comments might be more usefully said in the relevant ICANN forums.

If you could suggest which ones those are, I can certainly send a note. Like Paul, I just don't have time to follow all of the ever-larger set of ICANN processes.

> Steve
>
> On May 17, 2015, at 7:07 PM, Paul Vixie <p...@redbarn.org> wrote:
>> John Levine wrote:
>>> ... I would be much happier with a statement that said "the names are blocked indefinitely, and here's the plan for the $4 million in application fees we accepted for those names."
>> +1.
>> --
>> Paul Vixie

Regards,
John Levine, jo...@taugh.com, Taughannock Networks, Trumansburg NY
Please consider the environment before reading this e-mail.

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On 5/18/15, 7:12 AM, John R Levine <jo...@taugh.com> wrote:
>> These comments might be more usefully said in the relevant ICANN forums.
> If you could suggest which ones those are, I can certainly send a note. Like Paul, I just don't have time to follow all of the ever-larger set of ICANN processes.

I don't think there is currently an open forum for this topic at ICANN, since it is not actively under consideration. If this were to change in the future, I'll be happy to send a pointer to this list (provided I'm still at ICANN).

In the meantime, during the upcoming ICANN meeting (as is the case at every ICANN meeting) there is a public forum where any member of the community can raise any topic with the ICANN Board while facing the community. You don't have to travel to participate; there is remote participation. Alternatively, correspondence can be sent to ICANN leadership for consideration, and it will be posted publicly in the related section of the website. Finally, there is the option of working within the GNSO to develop a policy in this regard.

Just my two cents.
--
Francisco.
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
In message <1a3d420a-32cc-464f-ada5-401a9dc76...@nic.br>, Rubens Kuhl writes:
> Besides ccTLDs, which are out of ICANN contractual reach, it looks like TLDs from Uniregistry (including ISC servers) and Neustar are the ones most mentioned here. Any outreach attempt, successful or otherwise, with Uniregistry, ISC and Neustar?

The timeouts on tld.isc-sns.info are being addressed. I'd already complained to ops about them, and it looks like bad traffic shaping in front of that server.

I'm more worried about getting the checks built into the delegation process so that servers are correct from the get-go. Next is getting the existing servers fixed.

One can also add unexpected-opcode handling, Z-flag handling (the last unassigned DNS header flag), and AD-flag handling to the EDNS checks. All of these have resulted in servers not responding, which is really bad given that DNS is a query/response protocol. For an unexpected opcode I would expect to see NOTIMP. BIND 9.11's dig will be able to test this (dig +opcode=value).

Mark
--
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742  INTERNET: ma...@isc.org
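[Editor's note: a minimal sketch of the kind of probe set Mark describes. The exact eight queries used by the ednscomp.isc.org report are not listed in this thread, so the variants below are illustrative only, and some dig options (+ednsopt, +ednsflags) require a reasonably recent BIND dig. ZONE and SERVER are hypothetical placeholders.]

```python
# Illustrative sketch: assemble eight dig command lines that probe the
# DNS/EDNS behaviours discussed in this thread (EDNS version negotiation,
# unknown options/flags, DO bit, truncation, AD bit). Not the actual
# ednscomp query set -- just the same spirit.

ZONE = "example."        # zone served by the server under test (hypothetical)
SERVER = "ns1.example."  # nameserver under test (hypothetical)

def compliance_queries(zone, server):
    """Return eight dig command lines forming a basic compliance probe set."""
    base = ["dig", "+norecurse", "+time=2", "@" + server, zone, "soa"]
    variants = [
        [],                                       # 1. plain DNS query
        ["+edns=0"],                              # 2. EDNS version 0
        ["+edns=1", "+noednsneg"],                # 3. unknown EDNS version -> expect BADVERS
        ["+edns=0", "+ednsopt=100"],              # 4. unknown EDNS option -> should be ignored
        ["+edns=0", "+ednsflags=0x80"],           # 5. unknown EDNS flag -> should be ignored
        ["+edns=0", "+dnssec"],                   # 6. DO bit set
        ["+edns=0", "+bufsize=512", "+ignore"],   # 7. small buffer -> truncation handling
        ["+edns=0", "+ad"],                       # 8. AD bit set in the query
    ]
    return [" ".join(base + v) for v in variants]

for cmd in compliance_queries(ZONE, SERVER):
    print(cmd)
```

A compliant server should answer every one of these; the failure mode the report flags is servers that silently drop the less common variants.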
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
Can we get DNS and EDNS protocol compliance added to the acceptance criteria for nameservers for TLDs? http://ednscomp.isc.org/compliance/tld-report.html shows this is NOT happening. It isn't hard to test for: eight dig queries per server is all that was required to generate this report.

Mark
--
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742  INTERNET: ma...@isc.org
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On May 18, 2015, at 4:50 PM, Mark Andrews <ma...@isc.org> wrote:
> Can we get DNS and EDNS protocol compliance added to the acceptance criteria for nameservers for TLDs? http://ednscomp.isc.org/compliance/tld-report.html shows this is NOT happening. It isn't hard to test for: eight dig queries per server is all that was required to generate this report.

How is this related to special-use names? The purpose of those names is to not be resolved in the root.

--Paul Hoffman
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
Thanks for this, Mark. I'll take a look at the report.

Regards,
--
Francisco.

On 5/18/15, 4:50 PM, Mark Andrews <ma...@isc.org> wrote:
> Can we get DNS and EDNS protocol compliance added to the acceptance criteria for nameservers for TLDs? http://ednscomp.isc.org/compliance/tld-report.html shows this is NOT happening. It isn't hard to test for: eight dig queries per server is all that was required to generate this report.
>
> Mark
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
Besides ccTLDs, which are out of ICANN contractual reach, it looks like TLDs from Uniregistry (including ISC servers) and Neustar are the ones most mentioned here. Any outreach attempts, successful or otherwise, with Uniregistry, ISC and Neustar?

Rubens

On May 18, 2015, at 8:50 PM, Mark Andrews <ma...@isc.org> wrote:
> Can we get DNS and EDNS protocol compliance added to the acceptance criteria for nameservers for TLDs? http://ednscomp.isc.org/compliance/tld-report.html shows this is NOT happening. It isn't hard to test for: eight dig queries per server is all that was required to generate this report.
>
> Mark
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
> ... and you need therefore not to delegate it. But in the former case, one needs a pretty good argument why we need anything stronger than ICANN's policy statement that the names are blocked indefinitely -- certainly, one needs a better argument than "I don't trust ICANN," because it's already got the policy token.

I would be much happier with a statement that said "the names are blocked indefinitely, and here's the plan for the $4 million in application fees we accepted for those names."

R's,
John
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
John Levine wrote:
> ... I would be much happier with a statement that said "the names are blocked indefinitely, and here's the plan for the $4 million in application fees we accepted for those names."

+1.

--
Paul Vixie
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
These comments might be more usefully said in the relevant ICANN forums.

Steve

On May 17, 2015, at 7:07 PM, Paul Vixie <p...@redbarn.org> wrote:
> John Levine wrote:
>> ... I would be much happier with a statement that said "the names are blocked indefinitely, and here's the plan for the $4 million in application fees we accepted for those names."
> +1.
> --
> Paul Vixie
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On Wed, May 13, 2015 at 08:51:35PM -0000, John Levine wrote:
> .corp, .home, and .mail, they've only said they're deferred and I just don't believe that ICANN has the institutional maturity to say no permanently.

The point that I keep trying to make is that, if that's what we think, we should _not_ be attempting to use DNSOP or the special names registry as a policy-preference enforcement body. If the issue is that you don't think ICANN will do the right thing in managing the policies of the root zone, then you need to go work on ICANN, not try to use the IETF as a second control. Doing that puts the IETF itself in jeopardy.

> So this isn't an ICANN issue, it's an IANA issue. ICANN can't sell .corp, .home, and .mail for the same reason they can't sell .arpa or .invalid: they're already spoken for.

But they're _not_ spoken for. That's the point. Since the very first time I even heard of the DNS, everything I ever read said, "Hey, you can't just pick any name you like. You need a name you actually control, and that needs to be registered." For years, many people assumed the root was different, because it was not really a moving target. But that assumption turned out to be a bad one.

I am susceptible to the argument that there is an operations problem on the Internet because these things are in wide use and therefore they ought to be set aside. And I think it is just fine if the IETF says, "No, we decided to use this name as a protocol switch and you need therefore not to delegate it." But in the former case, one needs a pretty good argument why we need anything stronger than ICANN's policy statement that the names are blocked indefinitely -- certainly, one needs a better argument than "I don't trust ICANN," because it's already got the policy token.

Best regards,
A
--
Andrew Sullivan
a...@anvilwalrusden.com
Awkward access to mail. Please forgive formatting problems.
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
Ted Lemon wrote:
> What you are saying is a really good argument against us reserving names simply because they have been squatted on. I agree we should not use that as a reason to reserve a special use name. ICANN already has a process for that. If we want to reserve a special use name, we should have a technical argument in favor of doing so. But in the case of .onion, .corp and .home, we _do_ have such a reason. ... If .onion were being proposed today, and had no previous implementation, its proponents would rightly be arguing for .onion, not for .onion.alt, because how names read _matters_, and it makes sense for .onion to be a special use TLD, as it does for .corp and .home.

In the virtual meeting, I stated that if I were developing an app like I2P *now* that needed a non-DNS TLD, and .alt existed, I would use it. I should clarify that I would certainly prefer .TLD over .TLD.alt, because I agree with Ted that how a name reads matters. But if there were an _easy_ process for securing .TLD.alt, _and I knew about it_, I would probably opt for that. It isn't as pretty, but it's not much harder to educate users that .TLD.alt is the special identifier for this new app than it was to educate users back in 2003 that .i2p is the special identifier for I2P addresses.

> DNS has had a long run as the only name database that is taken seriously on the Internet, and so we no longer think of names as being something that has an existence independent of the DNS hierarchy, but that is not an inherent truth of domain names. It is just the status quo. I would not want to have to use a different name hierarchy designator in order to use mDNS, and that being the case, I don't think you can make the argument that .onion is qualitatively different from .local.

+1. The domain name is a concept that pervades all internet-using applications now, and any alternative non-DNS naming system that wants to maintain interoperability with existing apps is forced to use it. That is the primary reason why I2P chose .i2p and (IMHO) why Tor chose .onion.

The biggest barrier to the rise of hidden services like those I2P and Tor provide is content availability. If the I2P and Tor devs had needed to re-implement _every_ client application they wanted to work over their networks, neither network would be as large as it is today. That doesn't mean that new application developers should not write network-aware applications, or that existing applications can be used without any potential for privacy-compromising leaks; there are definite technical and usability benefits to an application setting up its own I2P tunnel. But it is much easier to e.g. run TZ=UTC git, or point an I2P server tunnel at Apache and configure vhosts, than it is to implement network-level support. Moreover, supporting a non-domain-name system would very likely require extensive, expensive modifications throughout the app to e.g. remove any and all dependencies on gethostbyname(). The Tor Browser Bundle is IMHO testament to this: its developers have added a slew of privacy-enhancing patches, but the browser still handles domain names in the same way as upstream Firefox does.

str4d
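[Editor's note: a hypothetical sketch of the "network-aware" handling str4d describes, showing why a stock resolver call leaks special-use names. The suffix list and function names are illustrative, not from any real application.]

```python
import socket

# Illustrative list of special-use suffixes; a real application would use
# the IANA Special-Use Domain Names registry (RFC 6761) as its source.
SPECIAL_USE_SUFFIXES = (".onion", ".i2p", ".local")

def is_special_use(hostname):
    """True if the name belongs to a non-DNS (or non-unicast-DNS) namespace."""
    name = hostname.rstrip(".").lower()
    return name.endswith(SPECIAL_USE_SUFFIXES)

def resolve(hostname):
    """Resolve via DNS only when the name is safe to hand to a resolver.

    An unmodified app would call socket.gethostbyname() unconditionally,
    sending e.g. foo.onion straight to the configured DNS servers -- the
    privacy leak discussed above. A network-aware app intercepts first.
    """
    if is_special_use(hostname):
        # Route to the app's own transport (Tor/I2P tunnel) instead.
        raise ValueError("refusing to leak %s to the DNS" % hostname)
    return socket.gethostbyname(hostname)
```

The point of the sketch is how small the guard is compared to the cost of retrofitting it into every name-handling path of an existing codebase, which is exactly the modification burden described above.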
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
In article <0ee18e9e-e7d2-42e3-aee8-9a43c4032...@nominum.com> you write:
> On May 14, 2015, at 1:03 AM, David Conrad <d...@virtualized.org> wrote:
>> What qualitative difference do you see between those uses of numbers and the use of TLDs like CORP?
> Lack of scarcity.

+1

R's,
John
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
I got to air my view. I concur it's not a majority view. I don't feel I have to have the last word, and I respect that you really do think this is a good idea, and that it even meets the technical-merit consideration for the process as designed. So I'm pretty OK with people weighing this up on the strengths and merits of the argument as seen, and I suspect most will agree with you.

cheers
-George

On Thu, May 14, 2015 at 4:19 PM, Ted Lemon <ted.le...@nominum.com> wrote:
> George, I didn't get into your game theory because I think it's irrelevant. ... If we want to reserve a special use name, we should have a technical argument in favor of doing so. But in the case of .onion, .corp and .home, we _do_ have such a reason. ... I would not want to have to use a different name hierarchy designator in order to use mDNS, and that being the case, I don't think you can make the argument that .onion is qualitatively different from .local.
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
George, I didn't get into your game theory because I think it's irrelevant. The IETF process is not a fast process. If parasitical organizations decide to try to get the calories they need from us rather than from ICANN, I am pretty sure they will quickly learn that this is futile. It might briefly suck for us while they learn that it won't work, but I don't think so; we already know how to deal with useless proposals. So with that in mind, I think we really are free to do the technically right thing without concern that it will encourage badness in the future.

As to the topic of fairness, that is inherently political, and we should steer well clear of it. There is no way we can reach consensus on it, and whether you want to admit it or not, by advancing the argument you are advancing, that is what you are asking us to do.

What you are saying is a really good argument against us reserving names simply because they have been squatted on. I agree we should not use that as a reason to reserve a special use name; ICANN already has a process for that. If we want to reserve a special use name, we should have a technical argument in favor of doing so. But in the case of .onion, .corp and .home, we _do_ have such a reason. So there is no need to resort to the argument that these names should be documented in the special use registry because they were squatted on.

If .onion were being proposed today, and had no previous implementation, its proponents would rightly be arguing for .onion, not for .onion.alt, because how names read _matters_, and it makes sense for .onion to be a special use TLD, as it does for .corp and .home.

DNS has had a long run as the only name database that is taken seriously on the Internet, and so we no longer think of names as being something that has an existence independent of the DNS hierarchy, but that is not an inherent truth of domain names. It is just the status quo. I would not want to have to use a different name hierarchy designator in order to use mDNS, and that being the case, I don't think you can make the argument that .onion is qualitatively different from .local.
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
We agree it's a view well out of scope. I don't agree we should imagine this _decision_ is devoid of consequence in the real world and can be treated as a technical question with no other consideration. I think almost any decision made on technical merit facing questions of naming and addressing faces tough questions outside of the narrow domain. That's why people have brains: to think about consequences.

It's not unlike questions around GM in the biosciences world. You think biologists don't want to say "oh, nobody would be insane enough to release a modified virus which we changed from contact to airborne infection into the world; this is just science, we should stick to our domain and not get involved in this morality issue"? Well, the backlash from science funding, and even within Science, on that question was massive. No, it is NOT a good idea to put the genome of an airborne virus into the public domain. It's bad social policy.

So it's not that we can't make technologically narrow decisions about names: I am asking if it's even a good idea. I don't think it's a good idea. I don't much care that it's been blessed by an RFC. I think it's a mistake, with consequences. I think it legitimises squatting, and makes other decisions on names and addresses harder in the future. It's special pleading. I have no technical grounds other than a sense that the decision to "cross the beams" and place Tor at peril of gethostbyname() was a fundamental mistake which should not be blessed by reserving a word in gethostbyname() space to stop the problem.

-G

On Thu, May 14, 2015 at 1:06 PM, Ted Lemon <ted.le...@nominum.com> wrote:
> On May 14, 2015, at 3:42 AM, George Michaelson <g...@algebras.org> wrote:
>> I have a lot of agreement for what David is saying. What I say below may not of course point there, and he might not agree with me because this isn't a bilaterally equal thing, to agree with someone, but I do. I think I do agree with what he just said. I think that prior use by private decision on something which was demonstrably an administered commons, with a body of practice around how it is managed, is a-social behaviour.
>
> I think this is completely out of scope for the IETF. The IETF has the job of deciding what works, not adjudicating what is fair. We could never get consensus on what is fair here -- for example, I find your position on this upsetting, because from a technical perspective what the onion folks, the corp folks, Apple, and for that matter Hamachi did was simply expedient and sensible in the context of the time in which it was done, and not anti- or a-social, as you suggest. I do not mean to say that you are wrong, but simply to illustrate that this is not something about which we are likely to ever achieve consensus. Nor should we. We simply need to do our job and decide on a technical level whether we want to add these names to the special use registry. We should stop arguing about morality and just do that.
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On May 14, 2015, at 11:21 AM, David Conrad wrote:
> [snip]
> However, as I said, how it is labeled is somewhat irrelevant. What matters to me is figuring out the objective criteria by which we can determine whether and/or how a particular label is being used so much that its delegation in the DNS would damage the Internet's security/stability. So far, all the criteria I've seen to date boil down to Justice Stewart's "I know it when I see it," which makes me uncomfortable.

I understand the desire to have objective criteria, but in this case your call for a bright-line distinction between "dangerous" and "not dangerous" labels is an obvious red herring. Among other things, it assumes a default mindset that is the opposite of what I hope the IETF would embrace. You seem to be saying that unless we can prove, using a numerical algorithm that could be published in an RFC, that a particular label is dangerous, then by default it should be permitted in the root as a TLD name. My argument is that the burden of proof should run in the other direction: unless we are very sure that putting something in the root will *not* cause stability or security problems, we should keep it out.

The "prove that it's dangerous or we'll put it in the root" mindset is explicitly commercial. It assumes that the default value is the economic benefit to a potential registry operator, and that in order to justify denying that benefit, we must be convinced that allowing it would cause damage to the Internet. (The actual criterion promoted by ICANN is "clear and present danger to human life," which sets the bar rather higher than "damage to the Internet.") The "prove that it's *not* dangerous or we won't allow it in the root" mindset is explicitly operational. It assumes that the default value is the stable operation of the Internet for the benefit of its users, and that in order to justify risking that benefit, we must be convinced that the gain outweighs the potential loss.

My sense is that for most people, particularly those in the IETF who are directly concerned with the engineering and operation of the Internet, the gain (to be generous) from adding a particular new TLD to the root is not even close to commensurate with the potential loss if that new TLD arrives with the risks that corp/home/mail (at least) carry. It's not about quantifying how many SSL certs or how much pre-delegation traffic to the root constitutes "danger"; it's about assessing all of the dimensions of operational risk and deciding in favor of stability as the core value. Of course that shouldn't be done frivolously, and of course there are ways to artificially promote other strings as "risky" -- but sorting those concerns is what the IETF's consensus development process does.

I realize that if the answer to the question "why can't we delegate CORP as a new TLD?" is "because the consensus of the IETF is that doing so would present an unjustifiable risk to the operational stability of the Internet," the people who stand to benefit from that delegation will be less happy than if the answer were "because we have measured X, Y, and Z, and have found that X and Y both exceed the thresholds established by RFC 9000 for acceptable stability risk." I agree that that is a legitimate concern for ICANN. But I don't think it's a legitimate concern for the IETF.

-- Lyman
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
Ted,

> But in the case of .onion, .corp and .home, we _do_ have such a reason.

Great! What is that reason, so that it can be encoded into an RFC, can be measured, and there can be an objective evaluation as to whether a prospective name can be placed into the Special Use Names registry?

Thanks,
-drc
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On May 14, 2015, at 11:21 AM, David Conrad <d...@virtualized.org> wrote:
> However, as I said, how it is labeled is somewhat irrelevant. What matters to me is figuring out the objective criteria by which we can determine whether and/or how a particular label is being used so much that its delegation in the DNS would damage the Internet's security/stability. So far, all the criteria I've seen to date boil down to Justice Stewart's "I know it when I see it," which makes me uncomfortable.

I think the idea that there could be any such criterion is aptly refuted by the existence of an adjudication process in ICANN, as well as the difficulty ICANN has had in actually selling TLDs. Despite our wishes to the contrary, processes of this sort are not like protocols, and typically can't run themselves. That is why the human element is so heavily relied on. IETF process in particular absolutely cannot work without this human element, which is baked in and referred to as "rough consensus."

>> But in the case of .onion, .corp and .home, we _do_ have such a reason.
> Great! What is that reason, so that it can be encoded into an RFC, can be measured, and there can be an objective evaluation as to whether a prospective name can be placed into the Special Use Names registry?

The technical argument is different in each case. In the case of .onion, I refer you to the Tor documentation as well as the two drafts that are being discussed. In the case of .corp and .home, the organizations that started using these names had reasons for using them. If those reasons were documented in drafts and presented to the working group, I would expect the working group to consider them, and either reach consensus to publish, or not. I would expect that consensus to be arrived at on the technical merits of the proposal, not on the basis of various participants' opinions about fairness or the amount of traffic at the root.

Until that happens, the IETF's position on both .corp and .home is nonexistent, and they should not be put in the special use names registry. Irrespective of the technical merits of .corp and .home, of course, the DNSOP working group might well publish a document discussing the operational implications of .corp and .home with respect to the root, but I personally see very little value in doing so, since the leaks of these names are likely coming from devices operated by people who would never read such a document.
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
Ted,

On May 14, 2015, at 1:03 AM, David Conrad <d...@virtualized.org> wrote:
>> What qualitative difference do you see between those uses of numbers and the use of TLDs like CORP?
> Lack of scarcity.

Sorry, I don't understand this response in the context of whether or not the folks making use of the space own, rent, or otherwise have lawful permission to use that space (be it number space or name space). In the context of names, sure, there are 37^63 (+/- a few) possible TLDs, yet we see huge spikes of traffic for CORP/HOME/MAIL at the root, so it would seem blatantly obvious that there is scarcity, albeit perhaps in imagination rather than in the actual resource.

However, as I said, how it is labeled is somewhat irrelevant. What matters to me is figuring out the objective criteria by which we can determine whether and/or how a particular label is being used so much that its delegation in the DNS would damage the Internet's security/stability. So far, all the criteria I've seen to date boil down to Justice Stewart's "I know it when I see it," which makes me uncomfortable.

Regards,
-drc
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
Lyman, I understand the desire to have objective criteria, but in this case your call for a bright-line distinction between dangerous and not dangerous labels is an obvious red herring. It's not so obvious to me that dangerous/not is a red herring, particularly since that was one of the primary rationales for (appropriately, IMHO) holding up CORP/MAIL/HOME. My argument is that the burden of proof should run in the other direction: that unless we are very sure that putting something in the root will *not* cause stability or security problems, we should keep it out. Ignoring the question of how one proves a negative, that would seem to run contrary to the permissionless innovation theory of why Internet protocols are good. The prove that it's dangerous or we'll put it in the root mindset is explicitly commercial. Talk about red herring. In my view, whether it is commercial or not is not relevant here. AFAICT, we're talking about bucketizing labels, either OK to be in the DNS or Placed on the Special Use Registry. I believe that if you want to put something in the latter bucket, there should be a reason, and preferably one that can be objectively measured. For example, in the case of .ONION, it seems to me that: 1) there is a well defined and non-DNS spec for the protocol 2) there are multiple independent implementations in active use 3) there is a large and growing installed base already using the protocol 4) the public exposure of queries for the ONION label could be considered a privacy/operational risk The first two and last of these are objectively measurable. The third is a bit sticky since large isn't well defined or measurable, and thus you get into a subjective evaluation. I'd personally like to come up with something concrete for (3) to avoid stupid rathole arguments, but due to the facts of (1) and (2), along with my personal assumptions about (3), I support putting ONION into the special use names registry. 
If we apply the above criteria to .CORP: 1) there is sort of a spec, in the sense that folks documented .CORP as a recommendation for private namespaces 2) there are multiple independent implementations in the sense that lots of folks have independently made use of .CORP 3) there is a large installed base, albeit hopefully a shrinking one 4) the public exposure of queries for the CORP label could be considered a privacy/operational risk The first and last are objectively measurable. The second and third are a bit unclear, but based on the quantity of queries to the root over a long period of time (at least as far as we can tell from the yearly DITL samples), I personally am comfortable that .CORP would fall into the special use names registry. I would be much happier if we actually had some clear threshold that we could point to, but the above would be a good start. You'll note that none of the above takes into consideration any form of commercialization. The prove that it's *not* dangerous or we won't allow it in the root mindset is explicitly operational. As far as I am aware, that is not what we're discussing here. For good or ill, my understanding is that RFC 2860 assigned the policy role of what goes into the root to ICANN. What I thought we were talking about was the identification of labels that are to be preempted from consideration of delegation, similar to the IETF reserving 10/8. It assumes that the default value is the stable operation of the Internet for the benefit of its users, and that in order to justify risking that benefit, we must be convinced that the gain outweighs the potential loss. 
This seems unrelated to the Special Use Names registry, but if this mindset were applied to the routing system, the email system, or pretty much any protocol system operationally deployed on the Internet, we might as well close down the IETF because there will always be someone who will argue that the risk associated with introducing new technology outweighs the benefit. Regards, -drc ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
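David's four criteria above read as a rough decision checklist. A sketch of mine (not anything proposed in the thread; all names and the implementation-count threshold are illustrative) of how they might be encoded, with criterion (3) left as the subjective judgment the thread identifies as the sticking point:

```python
from dataclasses import dataclass

# Illustrative only: encoding the four criteria listed above for a
# candidate Special Use name. Criterion 3 ("large installed base") has
# no agreed threshold in the thread, so it stays a boolean judgment.


@dataclass
class SpecialUseCandidate:
    label: str
    has_non_dns_spec: bool             # 1) well-defined, non-DNS spec
    independent_implementations: int   # 2) independent implementations in use
    large_installed_base: bool         # 3) subjective, per the discussion
    query_exposure_is_risky: bool      # 4) privacy/operational risk of leaks


def qualifies(c: SpecialUseCandidate) -> bool:
    """All four criteria must hold for the candidate to qualify."""
    return (c.has_non_dns_spec
            and c.independent_implementations >= 2
            and c.large_installed_base
            and c.query_exposure_is_risky)


# .onion as argued in the thread: spec exists, multiple implementations,
# large installed base (assumed), and leaked queries are a privacy risk.
onion = SpecialUseCandidate("onion", True, 2, True, True)
```

As the thread notes, .corp is murkier: criterion (1) is only "sort of" satisfied, so any such checklist still needs human judgment at the edges.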
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On May 14, 2015, at 4:10 PM, David Conrad wrote: Lyman, I understand the desire to have objective criteria, but in this case your call for a bright-line distinction between dangerous and not dangerous labels is an obvious red herring. It's not so obvious to me that dangerous/not is a red herring, particularly since that was one of the primary rationales for (appropriately, IMHO) holding up CORP/MAIL/HOME. No, I meant the call for a bright-line distinction - one that could be objectively determined from measurements. The distinction between dangerous and not is of course relevant. My argument is that the burden of proof should run in the other direction: that unless we are very sure that putting something in the root will *not* cause stability or security problems, we should keep it out. Ignoring the question of how one proves a negative, that would seem to run contrary to the permissionless innovation theory of why Internet protocols are good. Again, I'm not talking about proving in an algorithmic sense (of course proving a negative in that sense would be a meaningless directive). And I hope it's obvious that permissionless innovation isn't a license for anyone to harm others. You're welcome to try out any new idea you like on the Internet, without asking for anyone's permission - unless your new idea breaks something important that other Internet users depend on, in which case, not so much. The prove that it's dangerous or we'll put it in the root mindset is explicitly commercial. Talk about red herring. In my view, whether it is commercial or not is not relevant here. AFAICT, we're talking about bucketizing labels, either OK to be in the DNS or Placed on the Special Use Registry. The mindset is commercial in the way in which it prioritizes values in the discussion that we are having about bucketizing labels. I agree that that's what we're talking about. 
I believe that if you want to put something in the latter bucket, there should be a reason and preferably one that can be objectively measured. I believe that if you want to put something in the former bucket, there should be a reason :-) Really, what we're arguing about is the default condition. The two alternatives are OK to be in the DNS and Not OK to be in the DNS; Placed on the Special Use Registry is a mechanism that is available to implement the second alternative. You are saying that the default is everything is in the first bucket, and there has to be a good reason to move it into the second bucket. I'm saying that the default should be everything is in the second bucket, and there has to be a good reason to move it into the first bucket. For example, in the case of .ONION, it seems to me that: 1) there is a well defined and non-DNS spec for the protocol 2) there are multiple independent implementations in active use 3) there is a large and growing installed base already using the protocol 4) the public exposure of queries for the ONION label could be considered a privacy/operational risk The first two and last of these are objectively measurable. The third is a bit sticky since large isn't well defined or measurable, and thus you get into a subjective evaluation. I'd personally like to come up with something concrete for (3) to avoid stupid rathole arguments, but due to the facts of (1) and (2), along with my personal assumptions about (3), I support putting ONION into the special use names registry. 
If we apply the above criteria to .CORP: 1) there is sort of a spec, in the sense that folks documented .CORP as a recommendation for private namespaces 2) there are multiple independent implementations in the sense that lots of folks have independently made use of .CORP 3) there is a large installed base, albeit hopefully a shrinking one 4) the public exposure of queries for the CORP label could be considered a privacy/operational risk The first and last are objectively measurable. The second and third are a bit unclear, but based on the quantity of queries to the root over a long period of time (at least as far as we can tell from the yearly DITL samples), I personally am comfortable that .CORP would fall into the special use names registry. I would be much happier if we actually had some clear threshold that we could point to, but the above would be a good start. I would be too. You'll note that none of the above takes into consideration any form of commercialization. No, it doesn't. My original reference was to a commercial mindset, without which we wouldn't be arguing about whether or not CORP should be in the root. The prove that it's *not* dangerous or we won't allow it in the root mindset is explicitly operational. As far as I am aware, that is not what we're discussing here. For good or ill, my understanding is that RFC 2860 assigned the policy role of what goes into the root to ICANN.
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
I have a lot of agreement for what David is saying. What I say below may not of course point there, and he might not agree with me, because this isn't a bilaterally equal thing, to agree with someone, but I do. I think I do agree with what he just said. I think that prior use by private decision on something which was demonstrably an administered commons, with a body of practice around how it is managed, is a-social behaviour. And I think drawing some distinction about TOR/Onion 'because we like it' compared to the VPN squats is to commit a faux pas, with two different considerations. In some equity sense, it's the 'two wrongs don't make a right' thing. TOR will of course be affected by a migration cost, but if they accept that cost, and move their dependency into .ALT or some other space, they respect the community process better than what is basically a squat-claim of 'I got there first'. It was wrong to take the label, and it would be wrong to simply accede. It dis-empowers future rights to use the label without some process in the community eye. In some process sense, if you say names get decided over HERE and then say ...oh wait, except when we feel like it, you invite many people to say that class of reservation against process is the distortion which makes us very uncomfortable with working in your space. Why should we believe what you say you do, if you do this? If we want a reserved names process, then even the CORP case is .. hard. It's a pragmatic decision, not a reflection of some dependency we consciously wanted. The technology drivers on this are (for me) pretty thin. It's a do-no-harm choice. I know asking TOR to recode off ONION is not a do-no-harm choice, but it's a lesser harm to the process and equity for me. I also think there is a quality to we don't mean it to be in the DNS which makes me want to ask: Why do you let it exist in URL/URI/Omnibox-input space? 
If it's typed into anything which heads to a field which we already use to do gethostbyname() calls, what did you think was going to happen? -G On Thu, May 14, 2015 at 7:03 AM, David Conrad d...@virtualized.org wrote: Lyman, It is neither: it is a DNS operational issue. A large number of people are apparently squatting on CORP/HOME/MAIL. Delegation of those TLDs would thus impact that large number of people. I think it is inaccurate (and unhelpful) to refer to the people who have been using corp/home/mail as squatters; most of them have simply been following what textbooks, consultants, and best practice guidelines have been advocating for a long time. Somewhat irrelevant, but I'll admit I don't see a whole lot of difference between folks using .CORP and folks like those who came up with the Hamachi VPN using 5.0.0.0/8 (before it had been allocated by IANA -- as an aside, I find it sadly ironic that their solution to 5.0.0.0/8 being allocated was to move to 25.0.0.0/8, at least according to http://en.wikipedia.org/wiki/LogMeIn_Hamachi). I recall the Hamachi folks' choice to use 5.0.0.0/8 being described as squatting. I recall a number of people on NANOG having suggested using 7.0.0.0/8 (etc.) to deal with the lack of IPv4 address space. And then there is the use of 1.0.0.0/8. What qualitative difference do you see between those uses of numbers and the use of TLDs like CORP? (I'm told that squatting does not necessarily have negative connotations, particularly outside the US) The security/stability concerns do not prevent ICANN from selling them. As I understand it, it does prevent them from being delegated, thus resulting in the situation where the applicants have the ability (so I understand) to request a refund. I'm saying that the IETF's core interest in a stable, operating Internet is the context in which the issue should be resolved. 
I agree and as I've said before, I think it would be really nice if the IETF could move CORP/HOME/MAIL to reserved like the TLDs in 2606. However, the question I still have: what criteria do you use to decide that delegating a TLD would negatively impact the stable operation of the Internet? Regards, -drc ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
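George's gethostbyname() point above is the crux of the leak problem: any input field wired to the system resolver will send whatever is typed into it toward the DNS, special-use or not. A minimal sketch of mine (not from the thread; the function name and the exact label set are illustrative, drawing on RFC 6761 and the later RFC 7686 reservation of .onion) of how a registry-aware stub could refuse such names before they ever reach the root:

```python
# Sketch: a front-end check a stub resolver could run before issuing a
# DNS query, so that names reserved in the Special-Use Domain Names
# registry never leak to the global DNS. The label set below is an
# illustrative subset, not the full IANA registry.
SPECIAL_USE_TLDS = {"onion", "invalid", "test", "localhost"}


def should_query_dns(name: str) -> bool:
    """Return False for names whose rightmost label is reserved for
    special use and so must not be resolved via the global DNS."""
    tld = name.rstrip(".").rsplit(".", 1)[-1].lower()
    return tld not in SPECIAL_USE_TLDS
```

A naive gethostbyname() wrapper resolves everything it is handed; the whole point of a Special Use Names entry is to give implementers a standing instruction to short-circuit lookups like this one.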
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On May 14, 2015, at 1:03 AM, David Conrad d...@virtualized.org wrote: What qualitative difference do you see between those uses of numbers and the use of TLDs like CORP? Lack of scarcity. ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On May 14, 2015, at 3:42 AM, George Michaelson g...@algebras.org wrote: I have a lot of agreement for what David is saying. What I say below may not of course point there, and he might not agree with me because this isn't a bilaterally equal thing, to agree with someone, but I do. I think I do agree with what he just said. I think that prior use by private decision on something which was demonstrably an administered commons, with a body of practice around how it is managed, is a-social behaviour. I think this is completely out of scope for the IETF. The IETF has the job of deciding what works, not adjudicating what is fair. We could never get consensus on what is fair here -- for example, I find your position on this upsetting, because from a technical perspective what the onion folks, the corp folks, Apple, and for that matter Hamachi did was simply expedient and sensible in the context of the time in which it was done, and not anti- or a-social, as you suggest. I do not mean to say that you are wrong, but simply to illustrate that this is not something about which we are likely to ever achieve consensus. Nor should we. We simply need to do our job and decide on a technical level whether we want to add these names to the special use registry. We should stop arguing about morality and just do that. ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On 05/13/2015 05:51 PM, John Levine wrote: which means that ICANN is sitting on $3.7 million in application fees which they will presumably have to refund, as well as five withdrawn applications from parties who got partial refunds and would likely expect the rest of their money back, so we can round it to $4 million riding on selling those domains. *** Maybe that means the business model of ICANN is not fit for the technical model of the Internet? I don't think it's the IETF's role to lower technical expectations and requirements for the Internet to fit anyone's business model. That. I said it. == hk ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
Lyman, It is neither: it is a DNS operational issue. A large number of people are apparently squatting on CORP/HOME/MAIL. Delegation of those TLDs would thus impact that large number of people. I think it is inaccurate (and unhelpful) to refer to the people who have been using corp/home/mail as squatters; most of them have simply been following what textbooks, consultants, and best practice guidelines have been advocating for a long time. Somewhat irrelevant, but I'll admit I don't see a whole lot of difference between folks using .CORP and folks like those who came up with the Hamachi VPN using 5.0.0.0/8 (before it had been allocated by IANA -- as an aside, I find it sadly ironic that their solution to 5.0.0.0/8 being allocated was to move to 25.0.0.0/8, at least according to http://en.wikipedia.org/wiki/LogMeIn_Hamachi). I recall the Hamachi folks' choice to use 5.0.0.0/8 being described as squatting. I recall a number of people on NANOG having suggested using 7.0.0.0/8 (etc.) to deal with the lack of IPv4 address space. And then there is the use of 1.0.0.0/8. What qualitative difference do you see between those uses of numbers and the use of TLDs like CORP? (I'm told that squatting does not necessarily have negative connotations, particularly outside the US) The security/stability concerns do not prevent ICANN from selling them. As I understand it, it does prevent them from being delegated, thus resulting in the situation where the applicants have the ability (so I understand) to request a refund. I'm saying that the IETF's core interest in a stable, operating Internet is the context in which the issue should be resolved. I agree and as I've said before, I think it would be really nice if the IETF could move CORP/HOME/MAIL to reserved like the TLDs in 2606. However, the question I still have: what criteria do you use to decide that delegating a TLD would negatively impact the stable operation of the Internet? 
Regards, -drc ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
The distinction I'm making suggests why corp and onion seem different. They are, in this fundamental resolution nature. I was under the impression that part of the problem with .corp was that there were a lot of SSL certificates floating around. The CAs are supposed to have stopped issuing them a while ago, but who knows. With regard to the theory that ICANN has said they won't delegate .corp, .home, and .mail, they've only said they're deferred, and I just don't believe that ICANN has the institutional maturity to say no permanently. There are still 20 active applications for those three names, which means that ICANN is sitting on $3.7 million in application fees which they will presumably have to refund, as well as five withdrawn applications from parties who got partial refunds and would likely expect the rest of their money back, so we can round it to $4 million riding on selling those domains. Having been to various name collision and universal acceptance events, I have seen way too many people in and around ICANN eager to brush away technical issues if they interfere in the least with making money. You doubtless recall Kurt Pritz saying with a straight face that all TLD name acceptance problems could be cleared up in a couple of months. So this isn't an ICANN issue, it's an IANA issue. ICANN can't sell .corp, .home, and .mail for the same reason they can't sell .arpa or .invalid: they're already spoken for. R's, John ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
But I suspect you know this, so I'm unclear why you claim they're already spoken for. I wasn't clear -- the IETF should document that they're unavailable, just like .ARPA and .TEST aren't available. Regards, John Levine, jo...@taugh.com, Taughannock Networks, Trumansburg NY Please consider the environment before reading this e-mail. ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
John, On May 13, 2015, at 1:51 PM, John Levine jo...@taugh.com wrote: The distinction I'm making suggests why corp and onion seem different. They are, in this fundamental resolution nature. I was under the impression that part of the problem with .corp was that there were a lot of SSL certificates floating around. The SSL cert aspect of CORP usage was a component of the concern, but not the sole problem. With regard to the theory that ICANN has said they won't delegate .corp, .home, and .mail, they've only said they're deferred I believe this is true. So this isn't an ICANN issue, it's an IANA issue. It is neither: it is a DNS operational issue. A large number of people are apparently squatting on CORP/HOME/MAIL. Delegation of those TLDs would thus impact that large number of people. ICANN can't sell .corp, .home, and .mail for the same reason they can't sell .arpa or .invalid: they're already spoken for. This is not true. ARPA is defined in RFC 3172 and the IAB in cooperation with ICANN are responsible for it. INVALID is defined in RFC 2606 which reserves its use. CORP/HOME/MAIL are not defined anywhere (other than drafts). But I suspect you know this, so I'm unclear why you claim they're already spoken for. ICANN can't sell CORP/HOME/MAIL because there are concerns related to security/stability with those TLDs that are, as yet, unresolved. But I suspect you know that too. Regards, -drc ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On May 13, 2015, at 2:05 PM, Andrew Sullivan a...@anvilwalrusden.com wrote: I think you're missing a distinction I was making, however, which is that we should not be poaching on turf already handed to someone else. Managing top-level domains that are intended to be looked up in the DNS -- even if people expect them to be part of a local root or otherwise not actually part of the DNS -- is, I increasingly think, part of ICANN's remit. Managing things that are domain names that are by definition _never_ to be looked up in the DNS is different, and we have a legitimate claim (I'm arguing. I should note I'm not sure I completely buy the distinction I'm making, but I want to keep testing it). The distinction I'm making suggests why corp and onion seem different. They are, in this fundamental resolution nature. Right. I agree that it's ICANN's decision whether to do what we say when we make suggestions about how to handle special top-level zones that should be delegated or repudiated. I agreed with you on this earlier, so I saw what you said two messages ago as making a different point. ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On May 13, 2015, at 6:05 PM, David Conrad wrote: John, On May 13, 2015, at 1:51 PM, John Levine jo...@taugh.com wrote: The distinction I'm making suggests why corp and onion seem different. They are, in this fundamental resolution nature. I was under the impression that part of the problem with .corp was that there were a lot of SSL certificates floating around. The SSL cert aspect of CORP usage was a component of the concern, but not the sole problem. I think it's important to recognize that the issues with corp/home/mail have to do with *scope*, not with whether or not the DNS should be involved in resolving them. The established usage of corp/home/mail depends (conceptually and in practice) on using the DNS for name resolution, but it also depends on the historical assumption that those names would never resolve outside of their local scope because they were not globally-valid TLDs. That's not the case for onion (for example). With regard to the theory that ICANN has said they won't delegate .corp, .home, and .mail, they've only said they're deferred I believe this is true. So this isn't an ICANN issue, it's an IANA issue. It is neither: it is a DNS operational issue. A large number of people are apparently squatting on CORP/HOME/MAIL. Delegation of those TLDs would thus impact that large number of people. I think it is inaccurate (and unhelpful) to refer to the people who have been using corp/home/mail as squatters; most of them have simply been following what textbooks, consultants, and best practice guidelines have been advocating for a long time. They are not trying to claim or usurp territory that they (should) know doesn't belong to them; they have been playing by the rules that they learned when they studied for their Microsoft certification exams. We could go back in time and warn everyone that using a non-delegated name as the TLD anchor for an AD tree would someday turn out to be a problem, but absent that we have little justification for blaming them. 
That having been said, I understand that you're agreeing with me that this is a DNS operational issue :-) ICANN can't sell .corp, .home, and .mail for the same reason they can't sell .arpa or .invalid: they're already spoken for. This is not true. It's not, and John knows that, but it should be. ARPA is defined in RFC 3172 and the IAB in cooperation with ICANN are responsible for it. INVALID is defined in RFC 2606 which reserves its use. CORP/HOME/MAIL are not defined anywhere (other than drafts). But I suspect you know this, so I'm unclear why you claim they're already spoken for. ICANN can't sell CORP/HOME/MAIL because there are concerns related to security/stability with those TLDs that are, as yet, unresolved. The security/stability concerns do not prevent ICANN from selling them. A decision by the IETF to reserve them would. My point from the beginning [1] has been that the operational stability of the Internet is the proper concern of the IETF; it is not a policy issue, or a domain name registry competition issue, or any of the other issues that are the proper concern of ICANN. I don't intend that as a negative comment about ICANN - I'm not saying those guys would sell their .grandmother if they thought they could make a buck out of it, security and stability be damned. I'm saying that the IETF's core interest in a stable, operating Internet is the context in which the issue should be resolved. - Lyman [1] https://tools.ietf.org/id/draft-chapin-rfc2606bis-00.txt ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
I think you're missing a distinction I was making, however, which is that we should not be poaching on turf already handed to someone else. Managing top-level domains that are intended to be looked up in the DNS -- even if people expect them to be part of a local root or otherwise not actually part of the DNS -- is, I increasingly think, part of ICANN's remit. Managing things that are domain names that are by definition _never_ to be looked up in the DNS is different, and we have a legitimate claim (I'm arguing. I should note I'm not sure I completely buy the distinction I'm making, but I want to keep testing it). The distinction I'm making suggests why corp and onion seem different. They are, in this fundamental resolution nature. A -- Andrew Sullivan Please excuse my clumbsy thums. On May 12, 2015, at 19:16, Ted Lemon ted.le...@nominum.com wrote: On May 12, 2015, at 12:36 PM, Andrew Sullivan a...@anvilwalrusden.com wrote: This is a bizarre argument. You don't get to kind-of delegate policy authority this way. Authority was delegated, and if we don't like the outcome we can go pound sand. I think the IETF can develop a position on whether we think what ICANN is doing with the authority we delegated to them makes sense. You are right that we may not be able to do anything about this position other than state it, but we could state it, if we chose. But that wasn't the argument I was making. The argument I was making is that it's pretty clear that what ICANN has done is bad for the Internet, and that we should not decide whether or not to allocate special use names, or how many to allocate, based on an attitude of deferring to ICANN's greater wisdom on the topic. If in fact we have any basis for claiming to be able to allocate special use names, then we should just do that, not without taking care to avoid creating unnecessary conflicts, but not with trepidation either. 
If we don't, then we should figure that out, and figure out what to do about it, because this whole conversation appears to be based on the premise that we can in fact allocate special use names. ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On 05/13/2015 03:05 PM, Andrew Sullivan wrote: we should not be poaching on turf already handed to someone else. Managing top-level domains that are intended to be looked up in the DNS -- even if people expect them to be part of a local root or otherwise not actually part of the DNS -- is, I increasingly think, part of ICANN's remit. Managing things that are domain names that are by definition _never_ to be looked up in the DNS is different, and we have a legitimate claim (I'm arguing. I should note I'm not sure I completely buy the distinction I'm making, but I want to keep testing it). *** I better understand now your reluctance on P2PNames: .bit has exactly one foot on each side of your argument. I was waiting for the minutes of the Interim Meeting (especially the parts concerning the adoption of OnionTLD by this WG, and the part concerning the P2PNames draft), but I must say that the P2PNames party is already contemplating working on split drafts to facilitate the proceeding in a reasonable time of non-controversial strings, including to support draft-appelbaum-dnsop-onion-tld (albeit with some reserves), and clarify the discussion so that such arguments become apparent. Andrew, you will have an opportunity to test your assumption more practically soon. :) I'm looking for financial support to pursue this task: I am not employed by a generous corporation and most of my colleagues do not have mandates for that either. As usual, the fringe of innovation on the end-to-end Internet matches the hi-tech-lo-life style of cyberpunk novels. 
Regards, == hk ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On Tue, May 12, 2015 at 11:28:36AM -0400, Ted Lemon wrote: The use that ICANN has chosen to sell for TLDs isn't something the IETF intended when we delegated that authority to them, so while I think we should try to be good citizens, we don't need to feel particularly guilty about taking special-use names if we have a valid reason for doing so and there is no pre-existing conflict. This is a bizarre argument. You don't get to kind-of delegate policy authority this way. Authority was delegated, and if we don't like the outcome we can go pound sand. A -- Andrew Sullivan a...@anvilwalrusden.com
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
Hi, And with a fully up to date RedHat Fedora Core 20 including latest Java and latest Google Chrome ... I get a Java out of date / not working error. (Same error with fully up to date Firefox). Interestingly, with TOR I get a Please download this *.exe thing. LOL! /Hugo From: DNSOP [dnsop-boun...@ietf.org] on behalf of hellekin [helle...@gnu.org] Sent: Tuesday, 12 May 2015 17:54 To: dnsop@ietf.org Subject: Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material How does one join the meeting with XMPP? I confirm that the WebEx software is not compatible with my OS. == hk
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On May 12, 2015, at 12:36 PM, Andrew Sullivan a...@anvilwalrusden.com wrote: This is a bizarre argument. You don't get to kind-of delegate policy authority this way. Authority was delegated, and if we don't like the outcome we can go pound sand. I think the IETF can develop a position on whether we think what ICANN is doing with the authority we delegated to them makes sense. You are right that we may not be able to do anything about this position other than state it, but we could state it, if we chose. But that wasn't the argument I was making. The argument I was making is that it's pretty clear that what ICANN has done is bad for the Internet, and that we should not decide whether or not to allocate special use names, or how many to allocate, based on an attitude of deferring to ICANN's greater wisdom on the topic. If in fact we have any basis for claiming to be able to allocate special use names, then we should just do that, not without taking care to avoid creating unnecessary conflicts, but not with trepidation either. If we don't, then we should figure that out, and figure out what to do about it, because this whole conversation appears to be based on the premise that we can in fact allocate special use names.
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On May 11, 2015, at 9:06 PM, Andrew Sullivan a...@anvilwalrusden.com wrote: This makes me think that what we ought to offer ICANN is a mechanism to make insertions into the special-names registry by different criteria than the protocol-shift cases. The latter all fit neatly into 6761's 7 questions, but policy-based ones sort of don't. I see your point here, but is that really the right thing to do? It seems to me that the distinction you have drawn is a good one, but it argues for a separate registry for "please don't send these to the root" rather than for "these have special non-DNS protocol uses," with pointers to the documents describing those uses.
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On May 12, 2015, at 8:24 AM, Andrew Sullivan a...@anvilwalrusden.com wrote: That'd be another answer; but given that the _result_ of the registration in both cases would be the same, I'm inclined to say that the registry we use ought to be the same one. I don't feel strongly about it, but fewer registries is probably better in this case. Sure. In my mind it depends on how many of the sort of entries you're talking about get added. If it's a small number, a single registry would work better. If it's enough to swamp the protocol uses, then I think it would be a problem.
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
I’ve been reading this whole discussion with great interest over the past while and do intend on joining today’s call. In the midst of all of this I think two points from Andrew and Ed have been helpful to my thinking: On May 11, 2015, at 9:06 PM, Andrew Sullivan a...@anvilwalrusden.com wrote: It seems to me that making new reservations solely on _policy_ grounds is overstepping our role, because we actually gave that management function away to someone else many years ago. But if there are additional protocol-shift registrations, it would be appropriate to do that. I’m not sure I’m 100% on board with Andrew’s use of the term “protocol-shift” to explain the difference, but I do agree with his statement that reservations should not be made based *solely* on policy grounds and that there needs to be some true protocol-based reason for the reservation. Even better, I like Ed’s distinction: On May 9, 2015, at 7:29 AM, Edward Lewis edward.le...@icann.org wrote: The problem (the topic of discussion here) I see is that there is a class of strings that are intended not to be active in the DNS and, furthermore, the DNS isn't even meant to be consulted. This to me is the key point. Reserving names like .ONION makes sense to me because there is existing Internet infrastructure that is widely deployed and uses that TLD-like name in its operation…. but has no expectation that the name would be active in DNS. Were such a TLD ever to be delegated in DNS, it could conceivably *break* these existing services and applications. Those are the kind of names that make sense to be reserved. I do realize that there is a challenge with determining when something is “widely deployed” enough to merit this consideration. Just because I may have some service I created that uses a pseudo-TLD of “.YYY”[1] probably doesn’t really rise to that level if only I and 5 other friends use it. What number makes sense? 
I don’t know because, as others have commented, such numbers can be easy to game with automated scripts, bots, etc. My 2 cents, Dan (as an individual, not as any statement from ISOC) [1] I was going to use “.FOO” here but of course someone (Google, in this case, maybe at Warren’s request!) did actually register .FOO through the new gTLD process.
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On Sat, May 09, 2015 at 11:08:00AM +, Edward Lewis edward.le...@icann.org wrote a message of 157 lines which said: [0] As in name.onion. isn't a domain name, it's a string that happens to have dots in it. More than that: it also follows domain name semantics (for instance, it is big-endian). So, yes, it is a domain name, it just does not use the DNS. (domain names ≠ DNS, like Email ≠ SMTP)
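Stéphane's "big-endian" point can be shown concretely. A minimal sketch (the helper name is mine, not from the thread): the most significant label of a domain name is the rightmost one, so name.onion follows the same label semantics as any DNS name even though it is never resolved in the DNS.

```python
# Illustrative sketch: domain names are "big-endian" in the sense that the
# most significant label comes last.  "name.onion" shares this syntax with
# "www.example.com" even though only one of them is ever looked up in DNS.

def labels_most_significant_first(name: str) -> list:
    """Split a domain name into labels, most significant label first."""
    return name.rstrip(".").split(".")[::-1]

print(labels_most_significant_first("name.onion."))      # ['onion', 'name']
print(labels_most_significant_first("www.example.com"))  # ['com', 'example', 'www']
```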
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On Tue, May 12, 2015 at 08:08:55AM -0400, Ted Lemon wrote: I see your point here, but is that really the right thing to do? It seems to me that the distinction you have drawn is a good distinction, but argues for a separate registry for please don't send these to the root than for these have special non-DNS protocol uses with points to the documents describing those uses. That'd be another answer; but given that the _result_ of the registration in both cases would be the same, I'm inclined to say that the registry we use ought to be the same one. I don't feel strongly about it, but fewer registries is probably better in this case. A -- Andrew Sullivan a...@anvilwalrusden.com
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On Tue, May 12, 2015 at 2:49 PM, Dan York y...@isoc.org wrote: I do realize that there is a challenge with determining when something is “widely deployed” enough to merit this consideration. Just because I may have some service I created that uses a pseudo-TLD of “.YYY”[1] probably doesn’t really rise to the level if only I and 5 other friends use it. What number makes sense? I don’t know because as others have commented such numbers can be easy to game with automated scripts, bots, etc. ... and this is some of the point of the .ALT pseudo-TLD -- if you want to use a TLD that does not get resolved in the DNS, make your namespace look like YYY.ALT. This *will* leak into the DNS, but should be dropped (NXD) at the first resolver (helping with privacy and general pollution issues). Now, if 5 people or 5,000,000 people use it, it doesn't matter -- it never needs to be made a special use name, because it isn't really in the DNS name space. [1] I was going to use “.FOO” here but of course someone (Google, in this case, maybe at Warren’s request!) Good gods no. Them's fighting words. Take that back :-P W did actually register .FOO through the new gTLD process. -- I don't think the execution is relevant when it was obviously a bad idea in the first place. This is like putting rabid weasels in your pants, and later expressing regret at having chosen those particular rabid weasels and that pair of pants. ---maf
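Warren's "dropped (NXD) at the first resolver" behavior can be sketched as follows. The function names are hypothetical and the logic is a deliberate simplification of what the .alt proposal describes: a recursive resolver synthesizes NXDOMAIN for anything under .alt instead of recursing toward the root.

```python
# Hypothetical sketch of a recursive resolver that answers NXDOMAIN locally
# for any query under .alt, so leaked queries never travel toward the root.

def recurse_to_authoritative(qname: str) -> str:
    # Placeholder for the resolver's normal iterative lookup path.
    return "NOERROR"

def handle_query(qname: str) -> str:
    labels = qname.rstrip(".").lower().split(".")
    if labels and labels[-1] == "alt":
        return "NXDOMAIN"  # synthesized locally; never forwarded upstream
    return recurse_to_authoritative(qname)

print(handle_query("myns.yyy.alt."))    # NXDOMAIN
print(handle_query("www.example.com"))  # NOERROR
```

Whether 5 or 5,000,000 people use YYY.ALT, the resolver's answer is the same, which is exactly Warren's point about not needing a special-use registration for such names.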
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On Tue, May 12, 2015 at 3:17 PM, Ted Lemon ted.le...@nominum.com wrote: On May 12, 2015, at 9:12 AM, Warren Kumari war...@kumari.net wrote: ... and this is some of the point of the .ALT pseudo-TLD -- if you want to use a TLD that does not get resolved in the DNS, make your namespace look like YYY.ALT. This *will* leak into the DNS, but should be dropped (NXD) at the first resolver (helping with privacy and general pollution issues). Now, if 5 people or 5,000,000 people use it, it doesn't matter -- it never needs to be made a special use name, because it isn't really in the DNS name space. .alt is good for experiments, Yes -- and I originally had some text in my mail about that, then removed it because I didn't want to open this can of worms. One of the uses is: make your new namespace YYY.ALT and get some folk using it. Once you can demonstrate that you have a bunch of users (like Onion / Tor), you will have a much, much easier time convincing the IESG that you should get YYY as a special use name, and slowly migrate over to that. If you have designed your protocol / system cleverly, the migration may be easy^w not horrendous... but I don't see it gaining popularity as a replacement for genuine special-use names. Compare .home to .home.alt, for example. There is elegance in the implementation, and there is elegance in the presentation, and I think the latter inevitably wins, whether we want it to or not. Yup. But having 300 people all asking for $cool_string gets, um, tedious. Having a first pass, so that only those who can actually demonstrate that someone wants to use it get through, means that (hopefully) you end up with fewer applicants. Now, you still have the metric problem. How do you know that there really are enough users of YYY.ALT to justify reserving YYY (or YYZ, if YYY is already in use)? Dunno - but you already have this issue. I think a large amount of it comes down to humans making a decision -- I, you, and my auntie Eve have all heard of Onion. 
It's clear that *someone* is using it. Perhaps that's the best we can do... W
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On May 12, 2015, at 9:12 AM, Warren Kumari war...@kumari.net wrote: ... and this is some of the point of the .ALT pseudo-TLD -- if you want to use a TLD that does not get resolved in the DNS, make your namespace look like YYY.ALT. This *will* leak into the DNS, but should be dropped (NXD) at the first resolver (helping with privacy and general pollution issues). Now, if 5 people or 5,000,000 people use it, it doesn't matter -- it never needs to be made a special use name, because it isn't really in the DNS name space. .alt is good for experiments, but I don't see it gaining popularity as a replacement for genuine special-use names. Compare .home to .home.alt, for example. There is elegance in the implementation, and there is elegance in the presentation, and I think the latter inevitably wins, whether we want it to or not.
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On 05/12/2015 02:49 PM, Dan York wrote: This to me is the key point. Reserving names like .ONION makes sense to me because there is existing Internet infrastructure that is widely deployed and uses that TLD-like-name in its operation…. but has no expectation that the name would be active in DNS. Were such a TLD ever to be delegated in DNS, it could conceivably *break* these existing services and applications. Those are the kind of names that make sense to be reserved. +1 [snip]. /Hugo Connery
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On May 12, 2015, at 9:41 AM, Warren Kumari war...@kumari.net wrote: Now, you still have the metric problem. How do you know that there really are enough users of YYY.ALT to justify reserving YYY (or, YYZ if YYY is already in use)? Dunno - but, you already have this issue. I think a large amount of it comes down to humans making a decision -- I, you, and my auntie Eve have all heard of Onion. It's clear that *someone* is using it. Perhaps that's the best we can do... We never do this perfectly in the IETF. We've done plenty of protocol specs that never achieved any popularity. I think the test here needs to be that we have consensus to do the thing, and reasonably believe that it might catch on, and that there's no existing conflict with existing TLDs that ICANN has already assigned. The use that ICANN has chosen to sell for TLDs isn't something the IETF intended when we delegated that authority to them, so while I think we should try to be good citizens, we don't need to feel particularly guilty about taking special-use names if we have a valid reason for doing so and there is no pre-existing conflict.
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
How does one join the meeting with XMPP? I confirm that the WebEx software is not compatible with my OS. == hk
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On Fri, May 08, 2015 at 08:10:41PM -0700, Paul Hoffman wrote: - Will the IETF require some specific metrics for RFC 6761 reservations? - If yes, what are those metrics? - If no, who makes the non-specific decision? Increasingly it strikes me that RFC 6761 is trying to do too much. It was, as we all know, created as a _post hoc_ rule to permit the creation of "local", on which Apple squatted to create Bonjour. That was one particular use of a namespace: the namespace amounts to a protocol-shifting mechanism. We can expect to see this again (and indeed, several of the names that are in the p2pnames draft are effectively in this class). In order to create that set of rules, however, the special use names registry came about. That registry includes all sorts of other special-use names that are in fact reserved for _policy_ reasons. For instance, "example", "test", and "invalid" are there because of the policy of needing certain names for testing, documentation, and so on. Similar arguments can be made about the RFC1918 addresses in arpa: these are special only because of a different policy decision to reserve chunks of v4 space for local use. The interesting case is "localhost", which I think might have elements of each of these properties, though because of the binding to the 127.0.0 network you might be able to argue that the policy is actually elsewhere. It seems to me that making new reservations solely on _policy_ grounds is overstepping our role, because we actually gave that management function away to someone else many years ago. But if there are additional protocol-shift registrations, it would be appropriate to do that. This makes me think that what we ought to offer ICANN is a mechanism to make insertions into the special-names registry by different criteria than the protocol-shift cases. The latter all fit neatly into 6761's 7 questions, but policy-based ones sort of don't. Anyway, that's a suggestion. 
A -- Andrew Sullivan a...@anvilwalrusden.com
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On 5/7/15, 11:41, John Levine jo...@taugh.com wrote: ICANN has a whole bunch of rules that mandate that once you've paid the $185,000, you have to deploy a DNSSEC signed zone on multiple servers, implement elaborate reservation and trademark claiming rules, takedown processes, WHOIS servers, and so forth. In the recent TLD application round there was one applicant that only wanted to reserve the domain (they were apparently concerned that someone else would squat ... My thought (as I wasn't in ICANN when the 2012 new TLD program was established, etc.) is that the process in place was built to allow the establishment of operating TLDs, with all of those concerns, as opposed to being a process by which to reserve strings (labels) that would not be in the root zone (delegated / have whois servers / registration interfaces / etc.), i.e., for the purpose of squatting to prevent collisions. The problem (the topic of discussion here) I see is that there is a class of strings that are intended not to be active in the DNS and, furthermore, the DNS isn't even meant to be consulted. The closest process in place today to achieving this is the Special Use Names registry - and I emphasize closest, recognizing that if it were perfect we wouldn't be having an interim meeting in a few days' time. I may be wrong... but this is how I view the topic. (I.e., not a statement on behalf of ICANN.)
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On 5/9/15, 1:10, Suzanne Woolf suzworldw...@gmail.com wrote: I share David’s reservations about this— how do we objectively and reproducibly distinguish “people are using these in private networks” from “people are generating arbitrary traffic to the roots for these”? One good characterization of the technical problem, although I'd modify the former to "people are using these in private networks and leaking the queries to the root." A recipient of a DNS query cannot know why it was asked (no context), so whether this is a leak or gaming cannot be determined in-band. Is there any concern for the IETF in a policy that says “If you start using an arbitrary name that isn’t currently in the root zone, you can just get the IETF to protect it for you”? I find the above statement a little unclear. Whose policy (as in ICANN policy / IETF policy / someone else's policy)? Furthermore, given that ICANN has already said they won’t delegate these names in particular, how is it helpful for the IETF to also add them to the Special Use Names registry? I'll throw out what is in my personal mental model on this topic (as opposed to something explicitly documented elsewhere), for the non-DNS software using identifiers: if a layer in the software stack sees an identifier matching an entry in the Special Use Names registry, it should avoid trying to resolve the name using DNS. This issue is about more than the DNS. As far as the DNS, I believe it's really only about how it can be kept from harming other identifier systems[0] and not meant to be a way to shape/prune the DNS name space tree. [0] As in "name.onion." isn't a domain name, it's a string that happens to have dots in it (and at the end of it) and otherwise appears to conform with the BNF in RFC 103-something - but that's just coincidental.
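Ed's mental model, a layer checking an identifier against the Special Use Names registry before ever touching the DNS, might look like the sketch below. The table and function name are illustrative assumptions, not from the thread; the registry snapshot is partial, and the .onion disposition reflects what draft-appelbaum-dnsop-onion-tld was proposing at the time.

```python
# Hypothetical sketch: a resolution layer consults a local snapshot of the
# Special-Use Domain Names registry before handing a name to the DNS.
# The table below is a partial, illustrative snapshot.

SPECIAL_USE = {
    "localhost": "resolve to the loopback interface (RFC 6761)",
    "invalid":   "always return an error, never query the DNS (RFC 6761)",
    "local":     "use Multicast DNS, not unicast DNS (RFC 6762)",
    "onion":     "hand off to the Tor software, never query the DNS",
}

def special_use_disposition(name: str):
    """Return the special-use handling for a name, or None for ordinary
    DNS names.  Matching only the rightmost label is a simplification."""
    labels = name.rstrip(".").lower().split(".")
    return SPECIAL_USE.get(labels[-1])

print(special_use_disposition("example.com"))  # None -> use the DNS
print(special_use_disposition("name.onion."))  # Tor disposition, DNS untouched
```

The key property is that the check happens before any resolution: a name like name.onion never generates a DNS query, so the DNS cannot harm (or be polluted by) the other identifier system.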
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
Playing devil's advocate (http://en.wikipedia.org/wiki/Devil%27s_advocate): On 5/9/15, 3:54, John R Levine jo...@taugh.com wrote: Let's say we found that there's some online thing we never heard of before, but it turns out that 100,000,000 people in India and China use it, it uses private names in .SECRET, and people looking at DNS logs confirm that they're seeing leakage of .SECRET names. Beyond rolling our eyes and saying we wish they hadn't done that, what else should we do? Why shouldn't we reserve it? The number of possible TLDs is effectively unlimited; striking one more off the list that might be sold in the future doesn't matter. This is engineering, not ideally what we might have done with a blank slate, but the best we can do under the circumstances. Besides Paul's valid what if it's 100,000?, how does an engineer distinguish between 100x people and 100x organized bots? My question adds to what David is saying - we need solid criteria. (Just to be clear, he is my boss but this does not represent any opinion on behalf of our employer.) The criterion of just seeing queries is, I'll say, naive, because it's so obviously vulnerable to gaming. (Not saying the data to date has evidence of being gamed, but it wouldn't be hard to pull this off.) (And this is why data collection efforts are not publicly announced: to limit anyone from prepping to game.) If there is a group of people using an identifier as you describe, then I'd suspect there would be other evidence than just the log of leaked queries. (What if they don't leak?) Criteria based on the other evidence would likely be stronger than just counts of leaked queries.
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On May 8, 2015, at 7:10 PM, Suzanne Woolf suzworldw...@gmail.com wrote: I share David’s reservations about this— how do we objectively and reproducibly distinguish “people are using these in private networks” from “people are generating arbitrary traffic to the roots for these”? I think doing so would be a fool's errand. It probably could be done, but you asked some good questions which I think show that it's pointless to try. Is there any concern for the IETF in a policy that says “If you start using an arbitrary name that isn’t currently in the root zone, you can just get the IETF to protect it for you”? I think clearly we should not have such a policy. The point of special use names is to serve some purpose, not to put a bandaid on misuses. I think there is some value in things like .corp, .home and .lan, which can be justified without resorting to data collection, and that is how we should approach it. Furthermore, given that ICANN has already said they won’t delegate these names in particular, how is it helpful for the IETF to also add them to the Special Use Names registry? As you clearly intended to imply with your rhetorical question, there is no point in the IETF doing any such thing, to which I will add one slight caveat: unless there is some reason why writing up how these names are used would actually be useful and beneficial. If it would be useful and beneficial, and someone wants to do it, then I think that should be allowed, and if consensus can be achieved, then we can add such names as are described in this document to the special names registry. But absent such a document, we should not. I hasten to add that consensus ought not to mean nobody is offended, but rather there is a clear protocol-related use case for doing it, and nobody can raise a clear technical reason _not_ to do it.
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
Besides Paul's valid what if it's 100,000?, how does an engineer distinguish between 100x people and 100x organized bots? I dunno. How do we know that the traffic for .corp and .home is from people rather than botnets? If there is a group of people using an identifier as you describe, then I'd suspect there would be other evidence than just the log of leaked queries. (What if they don't leak?) Criteria based on the other evidence would likely be stronger than just counts of leaked queries. If that wasn't clear, of course I agree with you. But we are writing policy, not software. We're looking for evidence of substantial private use, which is something we decide by making human decisions, not by some mechanical packet counting formula. Having said all that, I'm certainly not opposed to collecting more data. It's just not a substitute for making decisions. R's, John
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On 5/9/15, 18:27, John Levine jo...@taugh.com wrote: Besides Paul's valid what if it's 100,000?, how does an engineer distinguish between 100x people and 100x organized bots? I dunno. How do we know that the traffic for .corp and .home is from people rather than botnets? Through forensic analysis. E.g., finding that certificate authorities issued certificates with .corp names, and that some CPEs defaulted to .home. Not saying that in a confrontational way. Just that this makes it pretty certain that the high query counts for those two were from non-bots. (Citing a report by Interisle: https://www.icann.org/en/system/files/files/name-collision-02aug13-en.pdf) If that wasn't clear, of course I agree with you. But we are writing policy, not software. We're looking for evidence of substantial private use, which is something we decide by making human decisions, not by some mechanical packet counting formula. Having said all that, I'm certainly not opposed to collecting more data. It's just not a substitute for making decisions. And not just more, but the right data. Keep in mind that there are two cases: names that are already polluted, and names that someone wants to innovate with. In the former case, a definition of polluted needs to be made, being careful not to fall victim to gaming. For the latter case, the criteria would need to be different. Assuming both cases are accommodated.
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
Mark, home, corp and perhaps mail need special handling if we really want to not cause problems for those using those tlds internally. Why? What objective criteria makes those TLDs special? (note that I am not disagreeing, just asking for the methodology by which we can declare some TLDs as not delegatable) Regards, -drc (ICANN CTO, but speaking only for myself. Really.) ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On 05/08/2015 01:48 PM, David Conrad wrote: Mark, home, corp and perhaps mail need special handling if we really want to not cause problems for those using those tlds internally. Why? *** Citing IETF92 slides by Lyman Chapin and Mark McFadden [0]: these are the 3 names that were identified as posing operational hazards by SSAC and both ICANN name collision studies. Why?
• operational and engineering reasons only
• problems related to potential delegation of previously invalid labels that have frequently appeared as queries to the root
  • lots of evidence here: https://www.icann.org/en/system/files/files/sac-045-en.pdf
  • affected labels: home, corp
• name collision problem
  • lots of evidence here as well: https://www.icann.org/en/system/files/files/sac-062-en.pdf and https://www.icann.org/en/about/staff/security/ssr/name-collision-mitigation-26feb14-en.pdf
  • affected label: mail
== hk [0]: https://www.ietf.org/proceedings/92/slides/slides-92-dnsop-9.pdf ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
What objective criteria makes those TLDs special? Data reportedly shows extensive off-the-books use in private networks. What data? It's an obvious stability issue. Agreed. I'd probably put lan into the same group, no doubt to the dismay of the South American airline group. That's sort of what I'm getting at: I might agree (or might not, haven't looked at root query stats recently), but if the IETF is going to declare some TLDs as unusable for stability or other reasons, I believe we need some sort of objective criteria. Regards, -drc ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
Hellekin, On May 8, 2015, at 10:50 AM, hellekin helle...@gnu.org wrote: home, corp and perhaps mail need special handling if we really want to not cause problems for those using those tlds internally. Why? these are the 3 names that were identified as posing operational hazards by SSAC and both ICANN name collision studies. Yes, quite aware of Lyman's and Mark's draft, in fact I commented on it earlier on this mailing list (http://www.ietf.org/mail-archive/web/dnsop/current/msg13604.html). The justification for removing home/corp/mail primarily appears to be because they showed up 'a lot' at the root servers. Without characterizing this a bit better, it seems to me it would be trivial to set up situations to move pretty much any undelegated name to the Special Names registry -- just fire up a few thousand zombies to query names in the TLD you want removed using random source addresses. Perhaps something like two or three standard deviations over normal noise at the root servers for undelegated TLDs over a period of months? Of course, that would require an ability to actually collect that sort of data over long periods of time and wouldn't completely protect against the trivial attack above, but I figure it'd be better than subjective evaluations of 'a lot'... Regards, -drc (ICANN CTO, but speaking for myself only. Really.) ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
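[Editor's illustration] David's "two or three standard deviations over normal noise" idea could be sketched roughly as below. This is purely illustrative: the function name `flag_anomalous` and the baseline query counts are invented for the example, and this is not any actual root-server measurement methodology.

```python
import statistics

def flag_anomalous(counts, candidate_count, k=3.0):
    """Flag a TLD whose query count sits more than k standard
    deviations above the mean of 'normal noise' counts.

    counts: query totals for a baseline of undelegated TLDs
    candidate_count: total for the TLD being evaluated
    """
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return candidate_count > mean
    z = (candidate_count - mean) / stdev
    return z > k

# Hypothetical monthly query counts for a baseline of ordinary
# undelegated TLDs (pure noise).
baseline = [120, 95, 143, 110, 87, 131, 102, 99, 118, 125]

print(flag_anomalous(baseline, 50_000))  # → True (heavy .home/.corp-style traffic)
print(flag_anomalous(baseline, 140))     # → False (within normal noise)
```

As the thread notes, such a count-based criterion remains gameable: a botnet can inflate `candidate_count`, so this could only complement, never replace, the forensic evidence (certificates, CPE defaults) discussed above.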
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
I'm not, but name leaking is different to name use. I suspect mail ends up being qualified whereas home and corp are actually used as private tlds. This difference requires different handling. From the viewpoint of the outside world, what would be different? Regards, John Levine, jo...@taugh.com, Taughannock Networks, Trumansburg NY Please consider the environment before reading this e-mail. PS: I'm not being deliberately obtuse, I'm being actually obtuse. ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
In message 20150508194223.55320.qm...@ary.lan, John Levine writes: The justification for removing home/corp/mail primarily appears to be because they showed up 'a lot' at the root servers. Without characterizing this a bit better, it seems to me it would be trivial to set up situations to move pretty much any undelegated name to the Special Names registry -- just fire up a few thousand zombies to query names in the TLD you want removed using random source addresses. Hmmn. Is this a serious accusation, or is this just channelling the usual domainers whinging about their business plans? Does anyone seriously argue that those domains aren't widely used in private networks, and that nominally private DNS names leak all the time? R's, John I'm not, but name leaking is different to name use. I suspect mail ends up being qualified whereas home and corp are actually used as private tlds. This difference requires different handling. -- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
Depends on the name. Why do you call out home/corp/mail? Should LAN be reserved or not? What's your criteria? If you put it that way, it's a reasonable question. Will reply when I get done digging. Regards, John Levine, jo...@taugh.com, Taughannock Networks, Trumansburg NY Please consider the environment before reading this e-mail. ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
In message alpine.osx.2.11.1505081704140.30...@ary.lan, John R Levine writes: For mail a secure NXDOMAIN response saying that mail. doesn't exist should be fine. For foo.home you actually want an insecure response with an insecure referral, or at least you want DS home to come back as a secure NODATA rather than a secure NXDOMAIN. This assumes we want to formalise the de facto use of .home for names in the home. I'm thinking that if a query for foo.home shows up at the roots, that is evidence of a configuration error. So how about doing a secure NXDOMAIN, and tell people that if they want to use DNSSEC and their own .home names, it's up to them to put their own local .home trust anchor into their cache and a local DNS server to serve it. Really, you want to force all home users to sign their own zones and to securely distribute trust anchors (something we don't know how to do yet) to every machine that connects to the network (yes, validation happens in applications as well as in the recursive servers) just to avoid installing an insecure delegation for .home in the public internet? We already have insecure delegations for RFC 1918 and ULA reverse namespaces so we don't stuff up validators looking up PTR records. Seeing foo.home just means that a search list with .home in it is in use outside of the home. Think of a laptop moving between home and the office. A validator, with just the public root's trust anchor configured on it, will validate foo.home without needing to be reconfigured at home or at work if there is an insecure delegation for .home. DS home on the other hand is a normal artifact of doing validation, and if we want to formalise .home then that stops getting an NXDOMAIN response. Your typical home router is running linux anyway, so it doesn't seem unduly cruel to say that if it's going to run a validating cache, it needs to poke its own holes for private names since it's all off the shelf software. And home routers are not the only place where validation occurs. Regards, John Levine, jo...@taugh.com, Taughannock Networks, Trumansburg NY Please consider the environment before reading this e-mail. -- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
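[Editor's illustration] The "poke its own holes" approach discussed here can be shown with a resolver configuration. The fragment below is a hypothetical Unbound example: the directive names (`domain-insecure`, `local-zone`, `local-data`) are Unbound's, but the zone contents are invented. It makes a validating cache treat .home as insecure (mirroring an insecure delegation) and answer .home names locally instead of leaking them to the roots:

```
# unbound.conf fragment (illustrative only)
server:
    # Break the DNSSEC chain of trust below .home, so locally served
    # names don't fail validation against the signed root.
    domain-insecure: "home."

    # Answer .home queries locally rather than forwarding them upstream.
    local-zone: "home." static
    local-data: "router.home. IN A 192.168.1.1"
    local-data: "nas.home. IN A 192.168.1.2"
```

Note this only helps on the network's own resolver; as Mark points out, a roaming laptop validating for itself would still need either an insecure delegation in the root or equivalent local configuration to avoid validation failures.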
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
Is there any concern for the IETF in a policy that says “If you start using an arbitrary name that isn’t currently in the root zone, you can just get the IETF to protect it for you”? It's a reasonable question, but I think a reasonable answer in some circumstances is yes. Let's say we found that there's some online thing we never heard of before, but it turns out that 100,000,000 people in India and China use it, it uses private names in .SECRET, and people looking at DNS logs confirm that they're seeing leakage of .SECRET names. Beyond rolling our eyes and saying we wish they hadn't done that, what else should we do? Why shouldn't we reserve it? The number of possible TLDs is effectively unlimited; striking one more off the list that might be sold in the future doesn't matter. This is engineering, not ideally what we might have done with a blank slate, but the best we can do under the circumstances. Furthermore, given that ICANN has already said they won’t delegate these names in particular, how is it helpful for the IETF to also add them to the Special Use Names registry? I believe that they're currently blocked in the current new gTLD round, but not necessarily beyond that. I don't see any evidence that the six applicants who paid $185,000 to apply for .CORP or the ten remaining applicants for .HOME or the five remaining applicants for .MAIL have given up. They certainly haven't gotten their money back. Regards, John Levine, jo...@taugh.com, Taughannock Networks, Trumansburg NY Please consider the environment before reading this e-mail.___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
And home routers are not the only place where validation occurs. Ah. I said I was being obtuse. Regards, John Levine, jo...@taugh.com, Taughannock Networks, Trumansburg NY Please consider the environment before reading this e-mail. ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
In the interests of maybe taking this argument a little further than we have the previous n times…. On May 8, 2015, at 8:34 PM, John Levine jo...@taugh.com wrote: home, corp and perhaps mail need special handling if we really want to not cause problems for those using those tlds internally. Why? What objective criteria makes those TLDs special? Data reportedly shows extensive off-the-books use in private networks. It's an obvious stability issue. I share David’s reservations about this— how do we objectively and reproducibly distinguish “people are using these in private networks” from “people are generating arbitrary traffic to the roots for these”? Is there any concern for the IETF in a policy that says “If you start using an arbitrary name that isn’t currently in the root zone, you can just get the IETF to protect it for you”? Furthermore, given that ICANN has already said they won’t delegate these names in particular, how is it helpful for the IETF to also add them to the Special Use Names registry? thanks, Suzanne ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
As long as we are taking this path, will the Special Use Names folks please remove MX from the ISO 3166 list and delete the TLD so as not to confuse email … MX is so overloaded. manning bmann...@karoshi.com PO Box 12317 Marina del Rey, CA 90295 310.322.8102 On 8May2015Friday, at 16:10, Suzanne Woolf suzworldw...@gmail.com wrote: In the interests of maybe taking this argument a little further than we have the previous n times…. On May 8, 2015, at 8:34 PM, John Levine jo...@taugh.com wrote: home, corp and perhaps mail need special handling if we really want to not cause problems for those using those tlds internally. Why? What objective criteria makes those TLDs special? Data reportedly shows extensive off-the-books use in private networks. It's an obvious stability issue. I share David’s reservations about this— how do we objectively and reproducibly distinguish “people are using these in private networks” from “people are generating arbitrary traffic to the roots for these”? Is there any concern for the IETF in a policy that says “If you start using an arbitrary name that isn’t currently in the root zone, you can just get the IETF to protect it for you”? Furthermore, given that ICANN has already said they won’t delegate these names in particular, how is it helpful for the IETF to also add them to the Special Use Names registry? thanks, Suzanne ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
registering something in a registry does not prevent someone from using it. it suggests that there is already a use for the string. it does not matter if that is an IANA registry or the registry that is the DNS. There are -LOTS- of registries for strings. A classic case was Bell Northern Research and Burlington Northern Railroad. Bell registered BNR.com and was challenged (successfully) by Burlington over the domain name, since the railroad predated the telecom lab. Will the Special Names Registry be used for legal challenges? manning bmann...@karoshi.com PO Box 12317 Marina del Rey, CA 90295 310.322.8102 On 7May2015Thursday, at 7:15, Bob Harold rharo...@umich.edu wrote: On Thu, May 7, 2015 at 9:56 AM, Livingood, Jason jason_living...@cable.comcast.com wrote: On 5/6/15, 2:07 PM, Suzanne Woolf suzworldw...@gmail.com wrote: c) The requests we're seeing for .onion and the other p2p names already in use are arguing that they should get their names to enable their technologies with minimal disruption to their installed base. While the requesters may well have valid need for the names to be recognized, there is still a future risk of name collision or other ambiguity. The IETF is being asked to recognize the pre-existing use of these names. Does this scale to future requests? Beyond that, does it end up being a cheap way to avoid the ICANN process of creating a new gTLD. For example, I am not aware that anything prevents the Tor project from applying to ICANN for the .onion gTLD. So from one perspective, would more people just deploy into an unused namespace and then later lay claim to the namespace retroactively based on their use (gTLD-squatting)? This could be quite messy at scale, and I am not sure the IETF has a process to deal with and consider competing uses. Registering .onion would prevent others from using it. 
But the other thing that they really want is for .onion names never to be sent to DNS, for privacy reasons, and registering the name does not solve that. In a sense the special-use registry is the opposite of registering a domain name - it says this name should never be sent to DNS. -- Bob Harold ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
John, On May 8, 2015, at 12:42 PM, John Levine jo...@taugh.com wrote: The justification for removing home/corp/mail primarily appears to be because they showed up 'a lot' at the root servers. Without characterizing this a bit better, it seems to me it would be trivial to set up situations to move pretty much any undelegated name to the Special Names registry -- just fire up a few thousand zombies to query names in the TLD you want removed using random source addresses. Hmmn. Is this a serious accusation, Accusation? or is this just channelling the usual domainers whinging about their business plans? Neither. It is an honest question as to how to objectively identify which non-delegated TLDs are receiving sufficient traffic as to justify never delegating them. You know, sort of like how one would be able to justify a statement like: I'd probably put lan into the same group, no doubt to the dismay of the South American airline group. But thanks for debasing the conversation. Does anyone seriously argue that those domains aren't widely used in private networks, and that nominally private DNS names leak all the time? Depends on the name. Why do you call out home/corp/mail? Should LAN be reserved or not? What's your criteria? Regards, -drc ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
For mail a secure NXDOMAIN response saying that mail. doesn't exist should be fine. For foo.home you actually want an insecure response with an insecure referral, or at least you want DS home to come back as a secure NODATA rather than a secure NXDOMAIN. This assumes we want to formalise the de facto use of .home for names in the home. I'm thinking that if a query for foo.home shows up at the roots, that is evidence of a configuration error. So how about doing a secure NXDOMAIN, and tell people that if they want to use DNSSEC and their own .home names, it's up to them to put their own local .home trust anchor into their cache and a local DNS server to serve it. Your typical home router is running linux anyway, so it doesn't seem unduly cruel to say that if it's going to run a validating cache, it needs to poke its own holes for private names since it's all off the shelf software. Regards, John Levine, jo...@taugh.com, Taughannock Networks, Trumansburg NY Please consider the environment before reading this e-mail. ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
In message d170e3e4.1011f2%jason_living...@cable.comcast.com, Livingood, Jason writes: On 5/6/15, 2:07 PM, Suzanne Woolf suzworldw...@gmail.com wrote: 2. In the particular cases of home/corp/mail, ICANN has studied the possibilities of name collisions, and decided not to delegate those names at this time. The proposal is that the IETF reserve those names for unspecified special use permanently. It seems that an IETF action on those names is redundant, unless it's in opposition to some action contemplated under ICANN policy (for which there is no apparent mechanism). Is the possibility of the same names considered under multiple policies a problem? home, corp and perhaps mail need special handling if we really want to not cause problems for those using those tlds internally. To do this there needs to be an insecure delegation to break the DNSSEC chain of trust. This will allow any server to filter leaked queries without causing validation failures. It will also allow DNSSEC validators to work without special knowledge of these tlds. By `redundant' do you mean the IETF should take no action? That seems to leave those names in a no-mans-land that could be problematic in the long term, and the uncertainty could inhibit experimentation/investment in the home networking space. I'd rather see the IETF consider these names which are widely used and possibly add them to a new RFC, which then can be entered into and referred to from the IANA special-use domain name registry at http://www.iana.org/assignments/special-use-domain-names/special-use-domain-names.xhtml Mark -- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On 05/06/2015 03:07 PM, Suzanne Woolf wrote: Logistics details will follow shortly, but we have a webex URL *** As far as I understand, WebEx requires non-free software to work, which is a problem that will certainly make my participation more than difficult--as I do not have access to a system running such software. It's not the first time I notice that participation in Internet governance requires non-free software (e.g., in ISOC as well). Given that the process mentions XMPP, would it be possible to use that instead? http://datatracker.ietf.org/doc/draft-appelbaum-dnsop-onion-tld/ http://datatracker.ietf.org/doc/draft-grothoff-iesg-special-use-p2p-names/ *** I am concerned that the newer draft's IANA considerations are very different from the older one's, to the point that they might conflict. I already pointed that out to the authors, who seem to have ignored it. I feel it would be critical for them to lay down the reasons for which they introduced such incompatibilities that, to my reading, demote privacy considerations from primary to secondary concerns. == hk ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
Beyond that, does it end up being a cheap way to avoid the ICANN process of creating a new gTLD. For example, I am not aware that anything prevents the Tor project from applying to ICANN for the .onion gTLD. ICANN has a whole bunch of rules that mandate that once you've paid the $185,000, you have to deploy a DNSSEC signed zone on multiple servers, implement elaborate reservation and trademark claiming rules, takedown processes, WHOIS servers, and so forth. In the recent TLD application round there was one applicant that only wanted to reserve the domain (they were apparently concerned that someone else would squat on .CONNECTORS) but they dropped out early, so it's unclear what would have happened if they had tried to move ahead. I was on one of the technical evaluation panels and I believe we failed them due to their lack of any plan to comply with the rules. The only special purpose TLD that resolves globally is .ARPA, and everyone agrees what it does. The rest of them by design don't resolve globally. Some resolve locally (.local), some not at all (.test, .example, .invalid). In this case, .onion falls on the IETF side of the line since it's definitely not supposed to resolve globally. R's, John ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop
Re: [DNSOP] Interim DNSOP WG meeting on Special Use Names: some reading material
On May 7, 2015, at 9:56 AM, Livingood, Jason jason_living...@cable.comcast.com wrote: Beyond that, does it end up being a cheap way to avoid the ICANN process of creating a new gTLD. For example, I am not aware that anything prevents the Tor project from applying to ICANN for the .onion gTLD. So from one perspective, would more people just deploy into an unused namespace and then later lay claim to the namespace retroactively based on their use (gTLD-squatting)? This could be quite messy at scale, and I am not sure the IETF has a process to deal with and consider competing uses. I think this is an unfortunate way to look at the issue. We have a clear process for allocating special-use domain names. If Tor had come to us and asked for one, would you argue that they should pay ICANN $180k to get it? Where would that money come from? They don't need a delegation. They just need for the name to be registered as a special-use name. This is not at all the same situation as someone coming to us asking to get a _delegation_ for a TLD based on the special-use domain name process. Special-use doesn't apply in that case, and we would reject it. So your argument amounts to a straw man. I think part of the reaction to this proposal at the moment is that the process _wasn't_ followed. And so we are rightly concerned that future candidates for special-use names will also not follow the process, leading us to have to revisit this conversation. However, that is actually exactly wrong. In reality, the more pushback we give for a reasonable and legitimate request for a special-use domain now, the more likely it is that when someone needs one in the future, they will give up before they try, as the Tor people did. What we should be doing is judging those requests that seem legitimate and responding expeditiously, not creating a huge process black hole into which such requests will be swallowed. ___ DNSOP mailing list DNSOP@ietf.org https://www.ietf.org/mailman/listinfo/dnsop