Hello Paul,

On Sat, 08 Nov 2008 03:10:23 +0100, Paul Wouters <[EMAIL PROTECTED]> wrote:

> On Sat, 8 Nov 2008, Yngve N. Pettersen (Developer Opera Software ASA) wrote:

>> Opera is already downloading root certificates from a digitally signed online repository using a secure connection, and the download is done in such a fashion that the user will NOT be asked about certificates concerning that download. A dynamic download of the TLD structure information would be done in a similar manner, without user interaction.

> So in this case, Opera is creating and maintaining a list, and offering it
> to their clients. Other software, including non-browser software, will
> have to do the same. The problem is that if TLDs do not implement a
> universal discovery method, any fancy protocol will come down to "someone is
> making a list somewhere and you need to know this person and trust them". It also

Which is what is currently happening, because the TLDs are not providing the information (in a machine-readable fashion, at least), and alternative discovery and database-building methods (crowdsourcing) are being used.

> scales poorly. See the non-standard scheme of naming the whois sites, and
> software like 'jwhois' with a 32kb config file maintained by "someone
> somewhere", that requires updating all the time. We might as well stick
> sub-tld information in /etc/jwhois.conf while we're at it - we don't need a
> new protocol for that.
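Coming back to the crowdsourced database mentioned above: the rule format Mozilla's list uses is a simple line-based one ("//" comments, one rule per line, "*" for a wildcard label, "!" for an exception to a wildcard). Here is a minimal matcher, purely as a sketch of how a client consumes such data; the rule set and the code are illustrative, not an excerpt of the real list or of Mozilla's implementation:

```python
# Sketch of matching a hostname against public-suffix-style rules.
# The rule syntax ("//" comments, "*" wildcards, "!" exceptions) is
# Mozilla's; the matching code is my own illustration, not theirs.

RULES = """
// A tiny illustrative rule set, NOT the real list
com
uk
co.uk
// "*" matches any single label, "!" marks an exception to a wildcard
*.jp
!metro.tokyo.jp
"""

def parse_rules(text):
    rules = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("//"):
            continue
        exception = line.startswith("!")
        rules.append((exception, line.lstrip("!").split(".")))
    return rules

def rule_matches(labels, rule_labels):
    if len(labels) < len(rule_labels):
        return False
    for label, rule in zip(reversed(labels), reversed(rule_labels)):
        if rule != "*" and rule != label:
            return False
    return True

def public_suffix(hostname, rules):
    labels = hostname.lower().split(".")
    best = ["?"]  # default: the rightmost label alone is a public suffix
    for exception, rule_labels in rules:
        if rule_matches(labels, rule_labels):
            if exception:
                # An exception rule wins and drops its leftmost label.
                return ".".join(labels[-(len(rule_labels) - 1):])
            if len(rule_labels) > len(best):
                best = rule_labels
    return ".".join(labels[-len(best):])
```

So "www.example.co.uk" yields "co.uk", and the "!metro.tokyo.jp" exception carves "tokyo.jp" out of the "*.jp" wildcard.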

> That's why to me it would make more sense to have something like "tld.nic.<TLD>"

Which is similar to what previous versions of the draft have suggested: a well-known location specified by IANA.

Alternatively, the central repository can, for example, be hosted by IANA, and be open for vendors to mirror.

> with information, either in DNS records that can be signed, or via some
> https method with an SSL cert.
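Both suggestions reduce to a deterministic mapping from a TLD to a lookup location, which is what makes them attractive; a sketch (the exact names below are illustrative suggestions from this thread, nothing here is standardized):

```python
# Both discovery conventions map a TLD to a fixed lookup location.
# The names below (tld.nic.<TLD>, the IANA host) are hypothetical
# illustrations of the thread's suggestions, not standardized values.

def dns_discovery_name(tld):
    # A reserved owner name inside the TLD's own zone, e.g. holding TXT
    # or DNSSEC-signed records describing the TLD's structure.
    return "tld.nic.%s" % tld.strip(".").lower()

def well_known_uri(tld, base="https://tld-structure.iana.example"):
    # A well-known HTTPS location, here with a hypothetical IANA host.
    return "%s/%s/domainlist" % (base, tld.strip(".").lower())
```

Either way the client needs no per-TLD configuration, only the convention itself.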

> In any case: Google, Microsoft and Mozilla are ALL, right now, deploying browsers using Mozilla's PublicSuffix database, and two of them have been doing so for more than 6 months.

> So what will a new protocol add to this? The database will be required anyway, because one does not know where to find this information for a random TLD.

The protocol defines the format that can be used to distribute the data, not just from vendor to client, but from central repositories (however those are defined) to the vendor. It also specifies the information provided (and can be expanded for future requirements).

My original thought was to have a central repository accessed by all clients, although vendor-hosted mirrors of such a central repository would work just as well, or better. Either arrangement requires, for ease of use at least, that the data are coded in a common format, reducing the need to convert between different representations.

> IMO, an even better approximation to what our applications need for various
> automatic security features would be a list of registry-like domains
> provided by the TLD registries.

> And probably the only common zone that all registries reserved for themselves,
> and that was not allowed to be used by registrants, is 'nic.TLD'.

As mentioned above, a common repository is an alternative.

If there are better approximations, the DNSOp WG and other interested parties are encouraged to come forward with them.

> the real problem is not the method of presenting or transferring the
> information, the problem is finding it. Section 2 of this draft starts with:

>         The client retrieves the domain list for the Top Level Domain
>         "tld" from the vendor specified URI
>         https://tld-structure.example.com/tld/domainlist .

That URL may be an opera.com, microsoft.com or mozilla.com URL, for example, or a central repository URL if somebody is willing to host a list for all clients.
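The client step from Section 2 is then just an HTTPS fetch from whichever base URI the vendor (or central repository) has configured; a rough sketch, using the draft's example host name and with error handling deliberately left minimal:

```python
# Sketch of the client step from Section 2: fetch the per-TLD domain
# list from a vendor-chosen base URI over HTTPS. The host name is the
# draft's example; a real client would add caching and retry logic.
import urllib.request

BASE = "https://tld-structure.example.com"  # vendor- or repository-chosen

def domainlist_uri(tld, base=BASE):
    return "%s/%s/domainlist" % (base, tld.strip(".").lower())

def fetch_domainlist(tld, base=BASE):
    # A real client would also verify any signature on the data itself,
    # on top of the TLS certificate check urllib performs by default.
    with urllib.request.urlopen(domainlist_uri(tld, base)) as resp:
        return resp.read().decode("utf-8")
```

The vendor only has to ship the base URI; everything per-TLD is derived.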

> However, before that comes another unlisted step:

>         The client retrieves the meta information on where to
>         find the list of "TLD Structure" servers for each TLD from
>         some to be pre-determined location.

> That's the problem not addressed in draft-pettersen-subtld-structure-04.txt.

Previous versions of the draft did require an IANA-specified URI to be used, and that the information had to be provided by the TLD registries. An archive copy is available at <URL: http://files.myopera.com/yngve/blog/draft-pettersen-subtld-structure-03.txt >


As I was starting to wonder whether the requirement for information from registries might be holding the rest of the draft back, and with the emergence of Mozilla's PublicSuffix list (see Gerv's announcement and request for TLD registry assistance on this list earlier this year), I decided that it might be best to move the requirements for how the database should be populated, and by whom, out of the data file format specification.

There are a couple of suggestions in Appendix A about how a database may be populated, and Appendix B mentions a couple of alternative discovery methods that have come up during the occasional discussions about this topic.

I have yet to start looking at how a specification for this should be formulated, but with the PublicSuffix database available, the need for such a specification may, perhaps, not be as immediate as it was. However, to be clear: I still think the original structure specification for each TLD should be provided by the registries, as they are, AFAIK, the ones with the best knowledge about their domain.
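To make concrete why the quality of this data matters: the classic consumer is the cookie-domain check, where a wrong or missing entry either permits a registry-wide "supercookie" or breaks a legitimate site. A toy version of the check, with a hardcoded stand-in for whatever database (registry-provided or crowdsourced) the client actually ships:

```python
# Why the registry data matters in practice: the cookie-domain check.
# The suffix set below is a hardcoded stand-in for the client's shipped
# database; the check itself is a simplified illustration.
PUBLIC_SUFFIXES = {"com", "uk", "co.uk"}

def may_set_cookie_for(hostname, cookie_domain):
    host = hostname.lower().strip(".")
    domain = cookie_domain.lower().strip(".")
    if domain in PUBLIC_SUFFIXES:
        return False  # a "supercookie" covering a whole registry zone
    # Otherwise the cookie domain must be the host itself or a parent.
    return host == domain or host.endswith("." + domain)
```

If "co.uk" were missing from the database, www.example.co.uk could set a cookie readable by every other .co.uk site, which is exactly the failure mode registry-provided data would prevent.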


--
Sincerely,
Yngve N. Pettersen
 
********************************************************************
Senior Developer                     Email: [EMAIL PROTECTED]
Opera Software ASA                   http://www.opera.com/
Phone:  +47 24 16 42 60              Fax:    +47 24 16 40 01
********************************************************************
_______________________________________________
DNSOP mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dnsop
