There's a lot to be said for keeping things very simple.

Over in the biodiversity informatics community we've adopted Life Science Identifiers (LSIDs) as our identifier of choice, which require special software both to serve and to resolve, plus the added complication of convincing your friendly sysadmin to add SRV records to the DNS (and no, I had no idea such things existed until I got involved with LSIDs).
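For anyone who hasn't met LSIDs, the "special software" starts with something like this: the resolver splits the URN into authority, namespace, and object parts, then queries a DNS SRV record for the authority to locate the resolution service. A minimal sketch in Python — the LSID and the `_lsid._tcp` SRV naming convention follow the LSID specification as I understand it, and the example identifier is purely illustrative:

```python
# Sketch of the first steps an LSID resolver performs: split the URN
# into its parts, then build the DNS SRV name used to discover the
# authority's resolution service. The example LSID is illustrative.

def parse_lsid(lsid: str):
    """Split an LSID URN into (authority, namespace, object_id, revision)."""
    parts = lsid.split(":")
    if parts[:2] != ["urn", "lsid"] or len(parts) < 5:
        raise ValueError(f"not a well-formed LSID: {lsid!r}")
    authority, namespace, object_id = parts[2], parts[3], parts[4]
    revision = parts[5] if len(parts) > 5 else None
    return authority, namespace, object_id, revision

def srv_name(authority: str) -> str:
    """DNS SRV record name a client queries to find the resolver."""
    return f"_lsid._tcp.{authority}"

authority, ns, obj, rev = parse_lsid("urn:lsid:ubio.org:namebank:11815")
print(authority, ns, obj)   # ubio.org namebank 11815
print(srv_name(authority))  # _lsid._tcp.ubio.org
```

Compare that with an HTTP URI, where the hostname alone is enough for any client to find the server — which is rather the point of the complaint below.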

The result has been rather limited uptake by data providers (albeit millions of records now have LSIDs), many broken LSIDs (resolvers not working properly), and pretty much no client applications making use of them.

Some in our community think LSIDs were a mistake, and yearn for the simplicity of HTTP URIs. It's ironic that they, too, are not so simple after all. Or perhaps they are, but we're determined to make them more complicated than they need to be.

Regards

Rod

On 10 Jul 2009, at 01:22, Hugh Glaser wrote:

I am finding the current discussion really difficult.
Those who do not learn from history are condemned to repeat it.

As an example:
In the 1980s there were a load of hypertext systems that required the users to do a bunch of stuff to buy into them. They had great theoretical bases, and their proponents had unassailable arguments as to why their way of doing
things was right. And they really were unassailable - they were right.

They essentially died.

The web came along - I could publish a bunch of HTML pages about whatever I wanted, simply by putting them in some directory somewhere that I had access to (name told to me by my sysprog guru), and suddenly I was "on the web". If the HTML syntax was wrong, it was the browser's problem - don't come back and
tell me I did it wrong; make what sense of it you can, it's your problem.

Such simplicity, which was understandable by a huge swathe of people who were using computers, and acceptable to their support staff, simply swept
all before it (including WAIS, ftp, gopher).
Arguments about how "broken" the model was because of things like links
breaking and security problems were just ignored, and now seem almost
archaic to most of us.

I want the same for the Semantic Web/Linked Data.

Discussions of 303 and hash just don't cut the mustard in comparison. So I
find it hard to engage in an extended discussion about them.
Discussion:
Q: "How do I do x?"
Me: "Try this."
Q: "This doesn't work, what now?"
Immediately says to me that "this" must be wrong - we should go away and
think of something better.

So would it really be so bad if people just started putting documents containing RDF on the web, where the URI for the document and the URI for the thing it was
about (the non-information resource, or NIR) got confused?
All I actually want is a URI that resolves to some RDF.
And even perhaps people would not run off to RDFa so quickly?
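Part of the appeal of the hash form is purely mechanical: HTTP clients strip the fragment before making a request, so a URI like `http://example.org/rod/dog.rdf#fido` (a hypothetical example) can name the dog while resolving to the document that describes it, with no 303 round trip. A sketch using only the Python standard library:

```python
from urllib.parse import urldefrag

# Hypothetical URIs: the fragment names the thing (the NIR, here a dog),
# while the fragment-less URI names the RDF document describing it.
thing_uri = "http://example.org/rod/dog.rdf#fido"

# An HTTP client drops the fragment before making the request, so the
# thing's URI and the document's URI resolve to the same file...
document_uri, fragment = urldefrag(thing_uri)
print(document_uri)  # http://example.org/rod/dog.rdf
print(fragment)      # fido

# ...yet the two URIs remain distinct strings, so RDF statements about
# the dog are not accidentally statements about the document.
assert thing_uri != document_uri
```

Whether that distinction matters enough to justify the machinery is, of course, exactly what's in dispute.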

If I can't simply publish some RDF about something like my dog, by
publishing a file of triples that say what I want at my standard web site,
we have broken the system.

<3 hours flame resistance starts />

Best
Hugh




---------------------------------------------------------
Roderic Page
Professor of Taxonomy
DEEB, FBLS
Graham Kerr Building
University of Glasgow
Glasgow G12 8QQ, UK

Email: [email protected]
Tel: +44 141 330 4778
Fax: +44 141 330 2792
AIM: [email protected]
Facebook: http://www.facebook.com/profile.php?id=1112517192
Twitter: http://twitter.com/rdmpage
Blog: http://iphylo.blogspot.com
Home page: http://taxonomy.zoology.gla.ac.uk/rod/rod.html






