Bijan Parsia wrote:
I suspect there's a difference in kind. E.g., "your website is down" is
really quite different than "You don't use HTTP uris."
From the point of view of someone who wants to resolve a URI, both cases
amount to "it doesn't work"; I don't think it matters much what the excuse is.
With an HTTP URI, at least you'll know right away that you most likely won't
be able to resolve it (either because the domain name doesn't even exist, or,
if it does, you might at least get a helpful error message). There's no need
to spend an afternoon researching some new scheme, only to discover after
setting everything up that it still doesn't work (e.g. UniProt LSIDs...)
I'd be interested in what sort of complaints. Having a recommendation for
HTTP URIs will definitely up the number of "please use HTTP URIs"
complaints.
Surprisingly, most of the complaints come from people who are either
building or using some generic semantic web tools, rather than from W3C
vigilantes :-) The more popular such tools get, the more complaints you can
expect, regardless of any recommendations. (Though if the recommendations
have any impact, the motivation to write such tools may increase...)
I identified 3 problems, and this is only one. However, DNS doesn't even
do that *if I reuse your URIs*, or if I reuse your URI space (which you
may want me to do). E.g., I say
http://ex.org/#Bijan a Philosopher.
and you say
http://ex.org/#Bijan a PerfumeMaker.
That's not accidental reuse, as could happen with e.g. urn:bm:ipi:12, where
someone who has never heard of Banff might end up with the same identifier
for something completely unrelated (e.g. hotels in the Bahamas).
If there are conflicting statements about http://ex.org/#Bijan, at least
it's clear who the authority is (the owner of the domain ex.org), so no one
can hijack your precious URIs. (Of course someone may still prefer the
third-party statements, if they trust that third party more.)
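The point about authority falls out of the URI syntax itself: for an HTTP URI, the naming authority is simply the domain in the URI, which DNS ties to a registered owner. A minimal sketch (the statements and helper are my own illustration of the example above):

```python
from urllib.parse import urlparse

# Two parties making conflicting statements about the same HTTP URI,
# as in the example above.
statements = [
    ("http://ex.org/#Bijan", "a", "Philosopher"),   # from one source
    ("http://ex.org/#Bijan", "a", "PerfumeMaker"),  # from another
]

def authority(uri):
    """For an HTTP URI, the domain owner is the naming authority."""
    return urlparse(uri).netloc

# Both statements concern a name minted under ex.org, so a dispute about
# what the URI denotes can be referred to whoever controls that domain.
print(authority("http://ex.org/#Bijan"))
```

Nothing stops the second party from publishing their triple, but the machinery makes it unambiguous whose description is authoritative for the name itself.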
Well, people overload Google search terms all the time, because Google
search terms are natural language. And those search terms work really
well in a lot of cases. So I don't understand your point.
They may work well in many cases, but not so well in others -- if they
worked everywhere, there'd be no need to bother with most of this semantic
web stuff!
There seem to be several companies working on better term disambiguation
(potential Google-killer), but so far I'd say the results aren't great, see
<http://eric.jain.name/2007/01/22/clustering-kiwis/> for an example :-)
Is there a list I can see of what you have in mind? I mean, that are
being used now by people interested in life science terminology. Even a
few examples would help me out here.
Here are two typical applications that I know people have been playing with
(and that don't work quite as well if URIs are non-HTTP or unresolvable):
1. Piggy Bank <http://simile.mit.edu/wiki/Piggy_Bank>, a semantic web
browser and data collection tool.
2. Swoogle <http://swoogle.umbc.edu/>, a semantic web crawler.
Isn't this what's in dispute? In any case, doesn't PURLing sorta kill
the DNS argument? I'm so confused and I fear for the kittens!
I think you're mixing up the HCLS PURL resolver, which was set up to
provide identifiers for resources that do not have their own proper URLs
(yet, or anytime soon), with the general idea of using HTTP URIs (which may
or may not deserve the label "PURL")!
This is one of those things that one needs to make a decision on if one
is going to use HTTP URIs. It seems mean to recommend people use HTTP
URIs and punt on this crucial point.
I don't quite see the "crucial", but agree that some guidelines on when to
use each approach (if there are arguments for both) would help.
Yep. So if the benefit is *non-obvious* or otherwise diffuse, it's not
going to be a great selling point.
Let's just forward this to the marketing department :-)