On 11 Mar 2015, at 00:08, David Conrad <[email protected]> wrote:

> While true, these values will vary over time, location of collection, and 
> myriad other reasons, probably including phase of moon. If we're going to 
> reserve strings from ever being delegated, I believe we need to come up with 
> some rationale beyond "because they showed up a lot at some root servers at 
> this point in time."

In the absence of other objective, measurable data (sigh), going with what's 
seen at the root for some arbitrary snapshot(s) on arbitrary date(s) seems to 
be the least-worst option. It might even be the only option.

> If that is the only criteria, it would be relatively easy to game the stats 
> by hiring a few botnet zombies to pump queries with names you'd like to 
> reserve with spoofed source addresses.

The same would no doubt hold for whatever other criteria were chosen.

IMO the draft is a reasonable attempt at harm reduction, for some definition of 
harm. It seems ICANN's policy-making fora want the IETF to produce a list of 
"dangerous" TLD strings (on technical grounds) so that they can point at some 
RFC and say everything else can be classed as "safe to delegate". Which is at 
best a very misguided approach IMO.

That ship has already sailed, so the WG just has to do the best job it can with 
the cards it has been dealt. If anyone here has better ideas on how to do that, 
please speak up.

I think it would be prudent to assume any new TLD strings (if there ever are 
any after the current round has been processed) are harmful unless the 
applicant can prove otherwise to a reasonable standard. But nobody's ever going 
to accept that.
_______________________________________________
DNSOP mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dnsop