We must consider all use cases.
(1) The KISS case you present is the easy one: URIs natively map to their URLs.
(2) The redirection case, with admin rights:
Oops, I had to rename my files on the server and now my URIs no longer
match their URLs.
Fortunately, I have access to a redirection feature (symbolic links on
the server, .htaccess for 303, mod_rewrite, etc.).
(3) The redirection case, without admin rights:
Same as (2), but with no workaround available. The URI-to-URL scheme is
*definitely* broken, and I am now a happy provider of 404 errors.
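To make case (2) concrete, here is a minimal sketch of what the .htaccess workaround can look like on Apache (the paths are hypothetical, and it assumes mod_rewrite is enabled and allowed in .htaccess):

```apache
# .htaccess -- hypothetical example: the file behind the URI was renamed,
# so answer requests for the old location with a 303 See Other pointing
# at the new URL, keeping the URI itself stable.
RewriteEngine On
RewriteRule ^hg$ /data/hg.rdf [R=303,L]
```

This is exactly the kind of fix that is unavailable in case (3), where you cannot touch the server configuration at all.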


We need to establish *one* best practice to manage all 3 cases, with
only minor additional work when you need to switch from one case to
another.

Discarding cases (2) and (3), or considering that 404s are a non-issue,
is, imho, a very short-sighted position.

But I strongly agree with you that "simple things should be simple".
Let's not forget the second part of the mantra: "but complex things
should be possible".

That is why we need to find the *one* recipe for all 3 cases.
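The three cases really boil down to what happens when a consumer dereferences a URI. A small, self-contained sketch of that consumer-side view (all URIs are hypothetical, and the in-memory tables stand in for a real server's files and redirect rules):

```python
# Hypothetical sketch of URI dereferencing across the three cases.
# `documents` stands in for files the server actually serves; `redirects`
# stands in for a server-side redirection feature (e.g. a 303 via .htaccess).

def resolve(uri, documents, redirects, max_hops=5):
    """Follow redirects until a document is found; return (status, location)."""
    location = uri
    for _ in range(max_hops):
        if location in documents:          # case (1): URI maps straight to its URL
            return 200, location
        if location in redirects:          # case (2): a redirect rescues the URI
            location = redirects[location]
            continue
        return 404, location               # case (3): no workaround, URI is broken
    return 508, location                   # bail out on a redirect loop

documents = {"http://example.org/data/hg.rdf"}                         # hypothetical
redirects = {"http://example.org/hg": "http://example.org/data/hg.rdf"}

print(resolve("http://example.org/hg", documents, redirects))   # case (2): 200 via redirect
print(resolve("http://example.org/old", documents, redirects))  # case (3): 404
```

The point of the sketch: a recipe only covers all three cases if the consumer still reaches a document when the publisher loses control of the URI-to-URL mapping.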

On Friday, July 10, 2009, Hugh Glaser <[email protected]> wrote:
> Thank you all for not (yet) incinerating me.
> Some responses:
>
> I'm not really fussed about html documents - to me they aren't really "in" 
> the semantic web, other than the URL is a string, which can be interpreted as 
> something that can use the same access mechanisms as my URIs to find some 
> more text. I do publish html documents of my RDF, but that is only to permit 
> aficionados to browse the raw data.
> If I actually have html documents, then something like RDFa is probably a 
> great way of doing things.
>
> Many people worry about the modelling, which is great and why RDF is so good.
> But I start more from the consumer's end rather than the modeller's, and work 
> back through the publisher.
> Does anyone actually have a real application (and I am afraid I don't really 
> count semantic web browsers as applications) that has a problem getting the 
> RDF if I have a file at
> http://wrpmo.com/hg
> which contains
> <http://wrpmo.com/hg> <http://xmlns.com/foaf/0.1/name> "Hugh Glaser" .
> <http://wrpmo.com/hg> <http://www.aktors.org/ontology/portal#has-web-address> 
> "http://www.ecs.soton.ac.uk/people/hg" .
> and I then use http://wrpmo.com/hg as one of "my" URIs?
> Certainly doesn't bother my applications.
> And your average enthusiastic sysprog or geek can understand and do this, I 
> think - that's why RDFa is getting popular.
> I know that things like dc:creator can be a little problematic, but we are 
> paying a high price, I think.
>
> Steve's comments on using vi are interesting.
> Yes, we used vi and hackery.
> In fact I still generate those old web pages by running a Makefile which 
> finds a .c source and calls the C preprocessor to generate the .html pages, 
> and I certainly started this in the early 90s.
> At the moment I use all sorts of hackery to generate the millions of triples, 
> but the deployment is complex.
>
> Is it really such a Bad Thing if I do http://wrpmo.com/hg, if the alternative 
> is that I won't publish anything?
> Surely something is better than nothing?
> In any case, just like html browsers, linked data consumers should deal with 
> broken RDF and get the best they can out of it, as going back and telling the 
> server that the document was malformed, or reporting to a "user" is no more 
> an option in the linked data world than it is in the current web.
>
> Of course, as a good citizen (subject? - footsoldier?) of linked data and the 
> semantic web, I hope I do all the stuff expected of me, but it doesn't mean I 
> think it is the right way.
>
> Thank you very much for the considered responses to such an old issue.
> Best
> Hugh
>
> On 10/07/2009 11:13, "Steve Harris" <[email protected]> wrote:
>
> On 10 Jul 2009, at 10:56, Richard Light wrote:
>> In message <[email protected]>, Steve
>> Harris <[email protected]> writes
>>> On 10 Jul 2009, at 01:22, Hugh Glaser wrote:
>>>> If I can't simply publish some RDF about something like my dog, by
>>>> publishing a file of triples that say what I want at my standard
>>>> web site,
>>>> we have broken the system.
>>>
>>> I couldn't agree more.
>>>
>>> <rant subject="off-topic syntax rant of the decade">
>>> Personally I think that RDF/XML doesn't help, it's too hard to
>>> write by hand. None of the other syntaxes for RDF triples really
>>> have the stamp of legitimacy. I think that's something that could
>>> really help adoption, the same way that strict XHTML, in the
>>> early 1990's wouldn't have been so popular with people (like me)
>>> who just wanted to bash out some text in vi.
>>> </>
>>
>> Well, in my view, when we get to "bashing out" triples it isn't the
>> holding syntax which will be the main challenge, it's the Linked
>> Data URLs. Obviously, in a Linked Data resource about your dog, you
>> can invent the URL for the subject of your triples, but if your Data
>> is to be Linked in any meaningful way, you also need URLs for their
>> predicates and objects.
>>
>> This implies that, without a sort of Semantic FrontPage (TM) with
>> powerful and user-friendly lookup facilities, no-one is going to
>> bash out usable Linked Data.  Certainly not with vi.  And if you
>> have such authoring software, the easiest part of its job will be
>> rendering your statements into as many syntaxes as you want.
>
> I think that's a fallacy. The web wasn't bootstrapped by people
> wielding Frontpage*. It was people like Hugh and I, churning out HTML
> by hand (or shell script often), mostly by "cargo cult" copying
> existing HTML we found on the Web. That neatly sidesteps the schema
> question, as people will just use whatever other people use, warts,
> typos, and all.
>
> The tools for non-geeks phase comes along much later, IMHO. First we
> have to make an environment interesting enough for non-geeks to want
> to play in.
>
> Happy to be demonstrated wrong of course.
>
> - Steve
>
> * Frontpage wasn't released until late '95, and wasn't widely known
> until late '96 when it was bought by MS. By which time the Web was a
> done deal.
>
> --
> Steve Harris
> Garlik Limited, 2 Sheen Road, Richmond, TW9 1AE, UK
> +44(0)20 8973 2465  http://www.garlik.com/
> Registered in England and Wales 535 7233 VAT # 849 0517 11
> Registered office: Thames House, Portsmouth Road, Esher, Surrey, KT10
> 9AD
>
