Re: Content negotiation for Turtle files
I feel we should be crisp about these things. It's not a question of thinking of what kinds of things tend to enhance interoperability; it is defining a protocol which 100% guarantees interoperability. Here are three distinct protocols which work, i.e. guarantee each client can understand each server.

A) Client accepts various formats including RDF/XML. Server provides various formats including RDF/XML.
B) Client accepts various formats including RDF/XML AND Turtle. Server provides various formats including either RDF/XML OR Turtle.
C) Client accepts various formats including Turtle. Server provides various formats including Turtle.

These may not ever have been named. The RDF world in fact used A for a while, but the Linked Data Platform at last count was using C. Obviously B has its own advantages, but I think that we need lightweight clients more than we need lightweight servers, and so being able to build a client without an XML parser is valuable. Obviously there is a conservative middle ground D in which all clients and servers support both formats, which could be defined as a practical best practice, but we should have a name for, say, C. We should see whether the LDP group will define a word for compliance with C. I hope so, and then we can all provide that and test for it. Tim

On 2013-02-06, at 11:38, Leigh Dodds wrote: From an interoperability point of view, having a default format that clients can rely on is reasonable. Until now, RDF/XML has been the standardised format that we can all rely on, although shortly we may all collectively decide to prefer Turtle. So ensuring that RDF/XML is available seems like a reasonable thing for a validator to try and test for. But there are several ways that test could have been carried out. E.g. Vapour could have checked that there was an RDF/XML version and provided you with some reasons why that would be useful. Perhaps as a warning, rather than a fail.
The explicit check for RDF/XML being available AND being the default preference of the server is raising the bar slightly, but it's still trying to aim for interop. Personally I think I'd implement this kind of check as "ensure there is at least one valid RDF serialisation available, either RDF/XML or Turtle". I wouldn't force a default on a server, particularly as we know that many clients can consume multiple formats. This is where automated validation tools have to tread carefully: while they play an excellent role in encouraging consistency, the tests they perform and the feedback they give need to have some nuance.
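[Editorial note: the "at least one valid RDF serialisation" check Leigh describes reduces to ordinary Accept-header matching. The sketch below is purely illustrative (it is not Vapour's code, and the function names are invented); it parses a client's Accept header into (media type, q) pairs and asks whether any serialisation the server offers is acceptable, which is exactly why a Turtle-only server under protocol C still interoperates with a Turtle-capable client.]

```python
# Illustrative sketch only -- not Vapour's implementation, and deliberately
# simplified (no type/* ranges, no parameter matching beyond q).

def parse_accept(header):
    """Return a list of (media_type, q) pairs from an Accept header."""
    prefs = []
    for part in header.split(","):
        pieces = [p.strip() for p in part.split(";")]
        media_type, q = pieces[0], 1.0
        for param in pieces[1:]:
            if param.startswith("q="):
                q = float(param[2:])
        prefs.append((media_type, q))
    return prefs

def choose_variant(accept_header, offered):
    """Pick the offered media type the client rates highest, or None."""
    best, best_q = None, 0.0
    for media_type, q in parse_accept(accept_header):
        for candidate in offered:
            if media_type in (candidate, "*/*") and q > best_q:
                best, best_q = candidate, q
    return best

# A Turtle-only server (protocol C) still satisfies this client:
print(choose_variant("text/turtle;q=0.8, application/rdf+xml;q=0.5",
                     ["text/turtle"]))  # text/turtle
```

A validator built on this logic could report a warning, rather than a failure, when `choose_variant` succeeds for Turtle but not for RDF/XML.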
Re: Content negotiation for Turtle files
Hi all

While I promised a response, time is never my friend despite best intentions. +1 to Tim on crispness, and on a protocol. I note that the content-negotiation error which was at the core of this discussion hasn't really been talked about, and it was where I was planning to comment. So, noting the latest in the discussion, I'll fast-track and suggest this as an interim measure (note - this is a drive-by comment, and possibly at the risk of over-simplifying): cannot a lot of this discussion be solved if a server is correctly configured to return a 300 response in these cases? (Multiple Choices, or "there is more than one format available, Mr. Client - please choose which one you'd like".) We can't assume that clients or users will ask for something we have, or in a correct manner, which is the reason 300 and other not-often-used responses exist. Cheers Chris
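[Editorial note: for concreteness, a 300 response of the kind Chris describes might look like the following. The `Alternates` header is the variant-list syntax from RFC 2295 (transparent content negotiation); the file names are invented for the sketch.]

```
HTTP/1.1 300 Multiple Choices
Alternates: {"ontology.ttl" 0.5 {type text/turtle}},
            {"ontology.rdf" 0.4 {type application/rdf+xml}}
Vary: negotiate, accept
Content-Type: text/html

<ul>
  <li><a href="ontology.ttl">Turtle</a></li>
  <li><a href="ontology.rdf">RDF/XML</a></li>
</ul>
```

In practice very few clients negotiate transparently on a 300, which is presumably why server-driven negotiation (the server picks a variant and answers 200 or 303) remains the common approach in the rest of this thread.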
Re: Content negotiation for Turtle files
Bernard,

(forget my W3C hat, I am not authoritative on Apache tricks, for example...) When I put up a vocabulary onto www.w3.org/ns/, for example, I publish it both in ttl and rdf/xml. Actually, we also publish the file in HTML+RDFa (which very often is the master copy; I convert it into ttl and rdf/xml before publishing). Additionally, we put there a .var file. This is the .var file for http://www.w3.org/ns/r2rml:

r2rml.var
-
URI: r2rml

URI: r2rml.html
Content-Type: text/html

URI: r2rml.rdf
Content-Type: application/rdf+xml; qs=0.4

URI: r2rml.ttl
Content-Type: text/turtle; qs=0.5

That seems to work well; at least I have not heard complaints :-) One can do a further trick by adding .htaccess entries to convert, say, r2rml.html to r2rml.ttl on the fly; I did not do that, to reduce the load on our servers. There is somewhere a flag in the Apache configuration allowing Apache to handle these .var files; I am not sure it is there by default. I hope this helps

Ivan

On Feb 6, 2013, at 24:49, Bernard Vatant bernard.vat...@mondeca.com wrote: Hello all Back in 2006, I thought I had understood, with the help of folks around here, how to configure my server for content negotiation at lingvoj.org. Both vocabulary and instances were published in RDF/XML. I updated the ontology last week, and since after years of happy living with RDF/XML people eventually convinced me that it was a bad, prehistoric and ugly syntax, I decided to be trendy and published the new version in Turtle at http://www.lingvoj.org/ontology_v2.0.ttl The vocabulary URI is still the same : http://www.lingvoj.org/ontology, and the namespace http://www.lingvoj.org/ontology# (cool URIs don't change). Then I turned to Vapour to test this new publication, and found out that to be happy with the vocabulary URI it has to find some answer when requesting application/rdf+xml. But since I have no more RDF/XML file for this version, what should I do?
I turned to the best practices document at http://www.w3.org/TR/swbp-vocab-pub, but it does not provide examples with Turtle, only RDF/XML. So I blindly put the following in the .htaccess : AddType application/rdf+xml .ttl I found it a completely stupid and dirty trick ... but amazingly it makes Vapour happy. But now Firefox chokes on http://www.lingvoj.org/ontology_v2.0.ttl because it seems to expect an XML file. Chrome does not have this issue. The LOV-Bot says there is a content negotiation issue and can't get the file. So does Parrot. I feel dumb, but I'm certainly not the only one; I've stumbled upon a certain number of vocabularies published in Turtle for which the conneg does not seem to be perfectly clear either. What do I miss, folks? Should I forget about it, and switch back to good ol' RDF/XML? Bernard -- Bernard Vatant Vocabularies Data Engineering Tel : + 33 (0)9 71 48 84 59 Skype : bernard.vatant Blog : the wheel and the hub Mondeca 3 cité Nollez 75018 Paris, France www.mondeca.com Follow us on Twitter : @mondecanews -- Meet us at Documation in Paris, March 20-21

Ivan Herman, W3C Semantic Web Activity Lead Home: http://www.w3.org/People/Ivan/ mobile: +31-641044153 FOAF: http://www.ivan-herman.net/foaf.rdf
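[Editorial note: applied to Bernard's Turtle-only case, Ivan's type-map mechanism works with a single variant. A minimal sketch, with invented file and URI names, assuming Apache's type-map handler is enabled:]

```apache
# Hypothetical ontology.var for a Turtle-only vocabulary.
# Apache needs the type-map handler enabled for .var files,
# e.g. in .htaccess:
#   AddHandler type-map .var

URI: ontology

URI: ontology.ttl
Content-Type: text/turtle; charset=utf-8; qs=1.0
```

With only one variant, every negotiated request for the vocabulary URI resolves to the Turtle file, and the correct `Content-Type: text/turtle` goes out without mis-declaring the file as application/rdf+xml.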
Re: Content negotiation for Turtle files
I use the following .htaccess file:

AddType text/turtle .ttl
AddType application/rdf+xml .rdf
AddType application/ld+json .jsonld
AddType application/n-triples .nt
AddType application/owl+xml .owl
AddType text/trig .trig
AddType application/n-quads .nq

Andy
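[Editorial note: the AddType lines above only fix the media type of the files themselves. The 303 redirect Bernard's curl trace shows for http://www.lingvoj.org/ontology would come from a rewrite rule; a hypothetical mod_rewrite sketch (file names and paths are assumptions, not Bernard's actual config):]

```apache
# Hypothetical .htaccess sketch for a hash-namespace vocabulary URI.
RewriteEngine On

# Clients asking for Turtle -- or sending */* or no Accept header --
# get a 303 See Other to the Turtle document.
RewriteCond %{HTTP_ACCEPT} text/turtle [OR]
RewriteCond %{HTTP_ACCEPT} \*/\* [OR]
RewriteCond %{HTTP_ACCEPT} ^$
RewriteRule ^ontology$ /ontology_v2.0.ttl [R=303,L]
```

Together with `AddType text/turtle .ttl`, this yields the 303-then-200 exchange with `Content-Type: text/turtle` that Bernard reports below.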
Re: Content negotiation for Turtle files
Thanks all for your precious help! ... which takes me back to my first options, the ones I had set before looking at the Vapour results which misled me - more below.

AddType text/turtle;charset=utf-8 .ttl
AddType application/rdf+xml .rdf

Plus a Rewrite for html etc. I now get this from cURL:

curl -IL http://www.lingvoj.org/ontology
HTTP/1.1 303 See Other
Date: Wed, 06 Feb 2013 09:28:45 GMT
Server: Apache
Location: http://www.lingvoj.org/ontology_v2.0.ttl
Content-Type: text/html; charset=iso-8859-1

HTTP/1.1 200 OK
Date: Wed, 06 Feb 2013 09:28:45 GMT
Server: Apache
Last-Modified: Wed, 06 Feb 2013 09:19:34 GMT
ETag: 60172428-5258-4d50ad316b5b2
Accept-Ranges: bytes
Content-Length: 21080
Content-Type: text/turtle; charset=utf-8

... at which Kingsley should not frown anymore (hopefully). But what I still don't understand is the answer of Vapour when requesting RDF/XML :

- 1st request while dereferencing resource URI without specifying the desired content type (HTTP response code should be 303 (redirect)): Passed
- 2nd request while dereferencing resource URI without specifying the desired content type (Content type should be 'application/rdf+xml'): Failed
- 2nd request while dereferencing resource URI without specifying the desired content type (HTTP response code should be 200): Passed

Of course this request is bound to fail somewhere since there is no RDF/XML file, but the second bullet point is confusing : why should the content type be 'application/rdf+xml' when the desired content type is not specified? And should not a Linked Data validator handle the case where there is no RDF/XML file, but only Turtle or N3? The not-so-savvy linked data publisher (me), as long as he sees something flashing RED in the results, thinks he has not made things right, and is led to make blind tricks just to have everything green (such as contradictory MIME type declarations). At least if the validator does not handle this case it should say so.
The current answer does not help adoption of Turtle, to say the least! Hoping someone behind Vapour is lurking here and will answer :) Thanks again for your time Bernard
Re: Content negotiation for Turtle files
Bernard, Ivan

(At last! Something I can speak semi-authoritatively on ;P )

@ Bernard - no - there is no reason to go back if you do not want to, and every reason to serve both formats plus more. Your comment about UAs complaining about a content negotiation issue is key to what you're trying to do here. I'd like to provide some clear guidance or suggestions back, but first, if possible, can you please post the HTTP request headers for the four (and any others you have) user agents you've used to attempt to request your rdf+xml files and which have either choked or accepted the .ttl file. Extra points if you can also post the server's response headers.

@ Ivan - while I wince a little at the trick - the question comes down to the same thing - what is the HTTP response header that is sent back to the client - would be interested to see if in fact what you're doing ISN'T a trick but in fact a compliant way to approach this. Personally I think you shouldn't actually need to resort to using .var (which is Apache specific) when what is essentially a content negotiation issue can simply be configured properly at the server level, and thus a single approach could be used by IIS, Apache, nginx etc.

Look forward to the responses (excuse the pun) Cheers Chris -- Chris Beer Manager - Online Services Department of Regional Australia, Local Government, Arts and Sport
Re: Content negotiation for Turtle files
Hi Chris

2013/2/6 Chris Beer ch...@codex.net.au: @ Bernard - no - there is no reason to go back if you do not want to, and every reason to serve both formats plus more.

More ??? Well, I was actually heading the other way round, for the sake of simplicity. As said before, I've used RDF/XML for years despite all criticisms, and was happy with it (the devil you know, etc.). What I understand of the current trend is that to ease RDF and linked data adoption we should now promote this simple, both human-readable and machine-friendly publication syntax (Turtle). And having tried it for a while, I now begin to be convinced enough to adopt it in publication - thanks to continuing promotion by Kingsley among others :) And now you tell me I should still bother to provide n other formats, RDF/XML and more. I thought I was about to simplify my life; you tell me I have to make the simple things, *plus* the more complex ones as before. Hmm.

Your comment about UAs complaining about a content negotiation issue is key to what you're trying to do here. I'd like to provide some clear guidance or suggestions back, but first, if possible, can you please post the HTTP request headers for the four (and any others you have) user agents you've used to attempt to request your rdf+xml files and which have either choked or accepted the .ttl file.

I can try to find out how to do that, though I remind you I can discuss languages, ontologies, syntax and semantics of data at will; when it comes to protocols and Webby things it's not really my story, so I don't promise anything. AND : there's NO rdf+xml file in that case, only text/turtle. And that's exactly the point : can/should one do that, or not?
Do I have to pass the message to adopters : publish RDF in Turtle, it's a very cool and simple syntax (oh, but BTW don't forget to add HTML documentation, and also RDF/XML, and JSON, and multilingual variants, and proper content negotiation ...) ... well, OK, let's be clear about it if we have to do that ... but it looks like a non-starter for adoption of Turtle.

Extra points if you can also post the server's response headers.

Same remark as above. Thanks for your time Bernard
Re: Content negotiation for Turtle files
Bernard,

On Feb 6, 2013, at 11:59, Bernard Vatant bernard.vat...@mondeca.com wrote: AND : there's NO rdf+xml file in that case, only text/turtle. And that's exactly the point : can/should one do that, or not?

I do not believe the .var mechanism is bound to the necessity of publishing both RDF/XML and Turtle; I just described what we do. If you use it only for Turtle, that should work, too. B.t.w., you (and Chris) asked for the curl outputs for the setup; I added them below. The default returns HTML; I think if the only thing you had was Turtle, then that would be fine, too.

Ivan

12:11 Desktop curl --head http://www.w3.org/ns/r2rml/
HTTP/1.1 200 OK
Date: Wed, 06 Feb 2013 11:11:18 GMT
Server: Apache/2
Vary: accept
Last-Modified: Mon, 17 Sep 2012 15:21:58 GMT
ETag: 1818d-4c9e7559fb180;a0-4939a0734f380
Accept-Ranges: bytes
Content-Length: 98701
Cache-Control: max-age=21600
Expires: Wed, 06 Feb 2013 17:11:18 GMT
P3P: policyref=http://www.w3.org/2001/05/P3P/p3p.xml;
Connection: close
Content-Type: text/html; charset=utf-8

12:11 Desktop curl --head --header "Accept: text/turtle" http://www.w3.org/ns/r2rml/
HTTP/1.1 200 OK
Date: Wed, 06 Feb 2013 11:11:45 GMT
Server: Apache/2
Vary: accept
Last-Modified: Mon, 30 Jul 2012 13:04:36 GMT
ETag: 5292-4c60bb4236100;a0-4939a0734f380
Accept-Ranges: bytes
Content-Length: 21138
Cache-Control: max-age=21600
Expires: Wed, 06 Feb 2013 17:11:45 GMT
P3P: policyref=http://www.w3.org/2001/05/P3P/p3p.xml;
Connection: close
Content-Type: text/turtle; charset=utf-8

12:11 Desktop curl --head --header "Accept: text/turtle, application/rdf+xml" http://www.w3.org/ns/r2rml/
HTTP/1.1 200 OK
Date: Wed, 06 Feb 2013 11:12:16 GMT
Server: Apache/2
Vary: accept
Last-Modified: Mon, 30 Jul 2012 13:04:36 GMT
ETag: 5292-4c60bb4236100;a0-4939a0734f380
Accept-Ranges: bytes
Content-Length: 21138
Cache-Control: max-age=21600
Expires: Wed, 06 Feb 2013 17:12:16 GMT
P3P: policyref=http://www.w3.org/2001/05/P3P/p3p.xml;
Connection: close
Content-Type: text/turtle; charset=utf-8

12:12 Desktop
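[Editorial note: the third curl run shows why Turtle wins when the client accepts both formats. In Apache's server-driven negotiation the client's q value for a variant is multiplied by the server's qs value from the type map (0.5 for text/turtle vs 0.4 for application/rdf+xml in Ivan's r2rml.var), and the highest-scoring variant is served. A rough sketch of that arithmetic; the function name is invented and Apache's real algorithm has further tie-breakers:]

```python
# Simplified sketch of Apache's negotiation arithmetic: score each
# variant as (client q) * (server qs) and serve the highest scorer.
# Not Apache's full algorithm -- ties, wildcards, etc. are ignored.

def best_variant(client_q, server_qs):
    """client_q and server_qs map media type -> quality in [0, 1]."""
    scores = {mt: client_q.get(mt, 0.0) * qs
              for mt, qs in server_qs.items()}
    return max(scores, key=scores.get)

# Ivan's r2rml.var, with a client that accepts both formats equally:
server_qs = {"text/turtle": 0.5, "application/rdf+xml": 0.4}
client_q = {"text/turtle": 1.0, "application/rdf+xml": 1.0}
print(best_variant(client_q, server_qs))  # text/turtle (0.5 > 0.4)
```

A client that strongly prefers RDF/XML (say q=1.0 against q=0.5 for Turtle) would still get RDF/XML, since 1.0 × 0.4 beats 0.5 × 0.5; the qs values only break near-ties in the client's favorites.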
Re: Content negotiation for Turtle files
On Feb 6, 2013, at 10:56, Chris Beer ch...@codex.net.au wrote: @ Ivan - while I wince a little at the trick - the question comes down to the same thing - what is the HTTP response header that is sent back to the client -

See my separate answer to Bernard.

would be interested to see if in fact what you're doing ISN'T a trick but in fact a compliant way to approach this.

Well, o.k. The term 'trick' may not be well chosen; it is probably the standard way of doing this on Apache.

Personally I think you shouldn't actually need to resort to using .var (which is Apache specific) when what is essentially a content negotiation issue can simply be configured properly at the server level and thus a single approach could be used by IIS, Apache, nginx etc.

That, unfortunately, I do not know.

Ivan
Re: Content negotiation for Turtle files
Hi,

On Wed, Feb 6, 2013 at 9:54 AM, Bernard Vatant bernard.vat...@mondeca.com wrote: ... But what I still don't understand is the answer of Vapour when requesting RDF/XML :

- 1st request while dereferencing resource URI without specifying the desired content type (HTTP response code should be 303 (redirect)): Passed
- 2nd request while dereferencing resource URI without specifying the desired content type (Content type should be 'application/rdf+xml'): Failed
- 2nd request while dereferencing resource URI without specifying the desired content type (HTTP response code should be 200): Passed

From a purely HTTP and content negotiation point of view, if a client doesn't specify an Accept header then it's perfectly legitimate for a server to return a default format of its choosing. I think it could also decide to serve a 300 status code and prompt the client to choose an option that's available.

From an interoperability point of view, having a default format that clients can rely on is reasonable. Until now, RDF/XML has been the standardised format that we can all rely on, although shortly we may all collectively decide to prefer Turtle. So ensuring that RDF/XML is available seems like a reasonable thing for a validator to try and test for. But there are several ways that test could have been carried out. E.g. Vapour could have checked that there was an RDF/XML version and provided you with some reasons why that would be useful. Perhaps as a warning, rather than a fail.

The explicit check for RDF/XML being available AND being the default preference of the server is raising the bar slightly, but it's still trying to aim for interop. Personally I think I'd implement this kind of check as "ensure there is at least one valid RDF serialisation available, either RDF/XML or Turtle". I wouldn't force a default on a server, particularly as we know that many clients can consume multiple formats.
This is where automated validation tools have to tread carefully: while they play an excellent role in encouraging consistency, the tests they perform and the feedback they give need to have some nuance. Cheers, L. -- Leigh Dodds Freelance Technologist Open Data, Linked Data Geek t: @ldodds w: ldodds.com e: le...@ldodds.com
Re: Content negotiation for Turtle files
On 06/02/2013 10:59, Bernard Vatant wrote: And now you tell me I should still bother to provide n other formats, RDF/XML and more. I thought I was about to simplify my life; you tell me I have to make the simple things, *plus* the more complex ones as before. Hmm.

Well, I for one would make a plea to keep RDF/XML in the portfolio. Turtle is only machine-processible if you happen to have a Turtle parser in your toolbox. I'm quite happily processing Linked Data resources as XML, using only XSLT and a forwarder which adds Accept headers to an HTTP request. It thereby allows me to grab and work with LD content (including SPARQL query results) using the standard XSLT document() function. In a web development context, JSON would probably come second for me as a practical proposition, in that it ties in nicely with widely-supported JavaScript utilities. To me, Turtle is symptomatic of a world in which people are still writing far too many Linked Data examples and resources by hand, and want something that is easier to hand-write than RDF/XML. I don't really see how that fits in with the promotion of the idea of machine-processible web-based data. Richard -- *Richard Light*
Re: Content negotiation for Turtle files
On 2/6/13 4:54 AM, Bernard Vatant wrote: Thanks all for your precious help! ... which takes me back to my first options, the ones I had set before looking at the Vapour results which misled me - more below.

AddType text/turtle;charset=utf-8 .ttl
AddType application/rdf+xml .rdf

Plus a Rewrite rule for HTML etc. I now get this with cURL:

curl -IL http://www.lingvoj.org/ontology
HTTP/1.1 303 See Other
Date: Wed, 06 Feb 2013 09:28:45 GMT
Server: Apache
Location: http://www.lingvoj.org/ontology_v2.0.ttl
Content-Type: text/html; charset=iso-8859-1

HTTP/1.1 200 OK
Date: Wed, 06 Feb 2013 09:28:45 GMT
Server: Apache
Last-Modified: Wed, 06 Feb 2013 09:19:34 GMT
ETag: 60172428-5258-4d50ad316b5b2
Accept-Ranges: bytes
Content-Length: 21080
Content-Type: text/turtle; charset=utf-8

... to which Kingsley should not frown anymore (hopefully). But what I still don't understand is Vapour's answer when requesting RDF/XML:

* 1st request while dereferencing resource URI without specifying the desired content type (HTTP response code should be 303 (redirect)): Passed
* 2nd request while dereferencing resource URI without specifying the desired content type (Content type should be 'application/rdf+xml'): Failed
* 2nd request while dereferencing resource URI without specifying the desired content type (HTTP response code should be 200): Passed

Of course this request is bound to fail somewhere since there is no RDF/XML file, but the second bullet point is confusing: why should the content type be 'application/rdf+xml' when the desired content type is not specified? And should not a Linked Data validator handle the case where there is no RDF/XML file, but only Turtle or N3? Yes, in a nutshell. The RDF/XML specificity is a relic from the past :-) The not-so-savvy linked data publisher (me), as long as he sees something flashing RED in the results, thinks he has not made things right, and is led to make blind tricks just to have everything green (such as contradictory MIME type declarations).
At least if the validator does not handle this case it should say so. The current answer does not help adoption of Turtle, to say the least! Correct. Anyway, the product is Open Source [1] so anyone can fix it etc. Links: [1] https://bitbucket.org/fundacionctic/vapour/wiki/Home -- project home page . Hoping someone behind Vapour is lurking here and will answer :) Thanks again for your time Bernard -- Regards, Kingsley Idehen Founder CEO OpenLink Software Company Web: http://www.openlinksw.com Personal Weblog: http://www.openlinksw.com/blog/~kidehen Twitter/Identi.ca handle: @kidehen Google+ Profile: https://plus.google.com/112399767740508618350/about LinkedIn Profile: http://www.linkedin.com/in/kidehen
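[Editor's aside: the Accept-header selection being debated above can be sketched as a small, purely illustrative server-side function. The function names and the offered-formats list are hypothetical, not Vapour's or Apache's actual logic.]

```python
# Minimal sketch of server-side content negotiation for a Linked Data URI.
# Hypothetical helper, not Apache's or Vapour's real algorithm.

def parse_accept(header):
    """Parse an Accept header into a list of (media_type, q) pairs."""
    prefs = []
    for part in header.split(","):
        fields = part.strip().split(";")
        media_type = fields[0].strip()
        q = 1.0
        for param in fields[1:]:
            name, _, value = param.strip().partition("=")
            if name == "q":
                try:
                    q = float(value)
                except ValueError:
                    q = 0.0
        prefs.append((media_type, q))
    return prefs

def negotiate(accept_header, offered, default="text/turtle"):
    """Pick the best representation the server offers for this Accept header."""
    if not accept_header:
        return default  # no preference stated: serve the server's default
    best, best_q = None, 0.0
    for media_type, q in parse_accept(accept_header):
        for candidate in offered:
            matches = media_type in ("*/*", candidate) or \
                media_type == candidate.split("/")[0] + "/*"
            if matches and q > best_q:
                best, best_q = candidate, q
    return best  # None would mean 406 Not Acceptable

offered = ["text/turtle", "application/rdf+xml"]
print(negotiate("application/rdf+xml;q=0.9,text/turtle", offered))  # text/turtle
print(negotiate("", offered))                                       # text/turtle
```

Note how the second call illustrates exactly the point under discussion: when no desired content type is specified, the result is whatever default the publisher chose, so a validator has no grounds to insist it be application/rdf+xml.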
Re: Content negotiation for Turtle files
On Wed, 06 Feb 2013 11:45:10 +, Richard Light rich...@light.demon.co.uk said: In a web development context, JSON would probably come second for me as a practical proposition, in that it ties in nicely with widely-supported javascript utilities. If it were up to me, XML with all the pointy brackets that make my eyes bleed would be deprecated everywhere. Most or all modern programming languages have good support for JSON, the web browsers do natively as well, and it's much easier to work with since it mostly maps directly to built-in datatypes. To me, Turtle is symptomatic of a world in which people are still writing far too many Linked Data examples and resources by hand, and want something that is easier to hand-write than RDF/XML. I don't really see how that fits in with the promotion of the idea of machine-processible web-based data. Kind of agree. Turtle is a relic of trying to make a machine readable quasi-prose representation of data, which is suitable for both machines and people. But it's not general enough -- you can only use it to write RDF, which means you need specialised tools. It's saddening because (especially with some of the N3 enhancements) it's quite an elegant approach. Cheers, -w
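[Editor's aside: William's claim that JSON "mostly maps directly to built-in datatypes" can be seen in two lines of stdlib Python; the document content below is a made-up illustration, not data from the thread.]

```python
import json

# A JSON document deserialises straight into native dicts, lists, and strings,
# with no separate parser library or tree-walking API required.
doc = json.loads('{"name": "lingvoj", "formats": ["text/turtle", "application/rdf+xml"]}')

print(type(doc).__name__)   # dict
print(doc["formats"][0])    # text/turtle
```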
Re: Content negotiation for Turtle files
JSON is not a silver bullet. By only providing JSON, you cut off access for the whole XML toolchain. My related post on HackerNews: http://news.ycombinator.com/item?id=4417111 Martynas graphity.org On Wed, Feb 6, 2013 at 2:23 PM, William Waites w...@styx.org wrote: On Wed, 06 Feb 2013 11:45:10 +, Richard Light rich...@light.demon.co.uk said: In a web development context, JSON would probably come second for me as a practical proposition, in that it ties in nicely with widely-supported javascript utilities. If it were up to me, XML with all the pointy brackets that make my eyes bleed would be deprecated everywhere. Most or all modern programming languages have good support for JSON, the web browsers do natively as well, and it's much easier to work with since it mostly maps directly to built-in datatypes. To me, Turtle is symptomatic of a world in which people are still writing far too many Linked Data examples and resources by hand, and want something that is easier to hand-write than RDF/XML. I don't really see how that fits in with the promotion of the idea of machine-processible web-based data. Kind of agree. Turtle is a relic of trying to make a machine readable quasi-prose representation of data, which is suitable for both machines and people. But it's not general enough -- you can only use it to write RDF, which means you need specialised tools. It's saddening because (especially with some of the N3 enhancements) it's quite an elegant approach. Cheers, -w
Re: Content negotiation for Turtle files
On 2/6/13 6:45 AM, Richard Light wrote: [ . . . ] Well I for one would make a plea to keep RDF/XML in the portfolio. Turtle is only machine-processible if you happen to have a Turtle parser in your tool box. [ . . . ] To me, Turtle is symptomatic of a world in which people are still writing far too many Linked Data examples and resources by hand, and want something that is easier to hand-write than RDF/XML. I don't really see how that fits in with the promotion of the idea of machine-processible web-based data. Richard -- *Richard Light* If people can't express data by hand we are on a futile mission. The era of overbearing applications placing artificial barriers between users and their data is over.
Just as the same applies to overbearing schemas and database management systems. This isn't about technology for programmers. It's about technology for everyone, just as everyone is able to write on a piece of paper today, as a mechanism for expressing and sharing data, information, and knowledge. It is absolutely mandatory that folks be able to express triple-based statements (propositions) by hand. This is the key to making Linked Data and the broader Semantic Web vision a natural reality. We have to remember that content negotiation (implicit or explicit) is a part of this whole deal. Vapour was built at a time when RDF/XML was the default format of choice. That's no longer the case, but it doesn't mean RDF/XML is dead either; it just means it's no longer the default. As I've said many times, RDF/XML is the worst and best thing that ever happened to the Semantic Web vision. Sadly, the worst aspect has dominated the terrain for years and created artificial inertia by way of concept obfuscation. If your consumer prefers data in RDF/XML format then it can do one of the following: 1. Locally transform the Turtle to RDF/XML -- assuming this is all you can de-reference from a given URI 2. Transform the Turtle to RDF/XML via a transformation service (these exist and they are RESTful) -- if your user agent can't perform the transformation. The subtleties of Linked Data are best understood via Turtle. -- Regards, Kingsley Idehen
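[Editor's aside: Kingsley's option 1, a local Turtle-to-RDF/XML transform, is typically a couple of library calls (e.g. rdflib's Graph.parse with format='turtle' followed by serialize(format='xml')). Since not every toolbox has an RDF library, here is a deliberately toy stdlib sketch that serialises a few already-parsed triples to RDF/XML. The function name is made up, it is not a Turtle parser, and it assumes all objects are plain literals.]

```python
import xml.etree.ElementTree as ET

RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

def triples_to_rdfxml(triples):
    """Serialise (subject, predicate_uri, object_literal) triples to RDF/XML.

    Toy sketch: assumes objects are plain literals and that predicate URIs
    split into namespace + local name on '#' or the last '/'.
    """
    root = ET.Element("{%s}RDF" % RDF_NS)
    for subject, predicate, obj in triples:
        desc = ET.SubElement(root, "{%s}Description" % RDF_NS)
        desc.set("{%s}about" % RDF_NS, subject)
        # Split the predicate URI into a namespace and a local name.
        if "#" in predicate:
            ns, local = predicate.rsplit("#", 1)
            ns += "#"
        else:
            ns, local = predicate.rsplit("/", 1)
            ns += "/"
        prop = ET.SubElement(desc, "{%s}%s" % (ns, local))
        prop.text = obj
    return ET.tostring(root, encoding="unicode")

xml_out = triples_to_rdfxml([
    ("http://www.lingvoj.org/ontology", "http://purl.org/dc/terms/title", "Lingvoj Ontology"),
])
print(xml_out)
```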
Re: Content negotiation for Turtle files
Thanks Kingsley! Was about to answer but you beat me to it :) But Richard, could you elaborate on this view that hand-written and machine-processible data would not fit together? I don't feel like people are still writing far too many Linked Data examples and resources by hand. On the contrary, it seems to me we have so far seen too much linked data produced by (more or less dumb or smart) programs, without their human productors (so to speak) always checking too much for quality in the process, provided they can proudly announce that they have produced so many billions of triples ... so many, actually, that nobody will ever be able to assess their quality whatsoever :) Of course migrating automagically heaps of legacy data and making them available as linked data is great, but as Kingsley puts it, linked data are not only about machines talking to machines, it's also about enabling people to talk to machines as simply as possible, and the other way round. That's where Turtle fits. Bernard 2013/2/6 Kingsley Idehen kide...@openlinksw.com [ . . . ]
-- *Bernard Vatant * Vocabularies Data Engineering Tel : + 33 (0)9 71 48 84 59 Skype : bernard.vatant Blog : the wheel and the hub http://blog.hubjects.com/
Re: Content negotiation for Turtle files
On 2/6/13 7:23 AM, William Waites wrote: [ . . . ] Kind of agree. Turtle is a relic of trying to make a machine readable quasi-prose representation of data, which is suitable for both machines and people. I disagree for the following reasons, as already stated in an earlier response: 1. Linked Data isn't about the needs of programmers, solely -- it is about giving everyone the ability to create and share webby structured data 2. Turtle is the only RDF syntax notation that satisfies the basic needs of end-users and programmers. But it's not general enough -- you can only use it to write RDF, which means you need specialised tools. Not true! People should be able to save the following to a local or publicly accessible file denoted using a file: or http: scheme URI:

# Document Start #
<> a <#Document> .
<> rdfs:label "My Document About Whatever" .
<> dcterms:created "2013-02-06"^^xsd:date .
<> dcterms:hasFormat "text/turtle" .
<#i> <http://xmlns.com/foaf/0.1/name> "Kingsley Uyi Idehen" .
<#i> <http://xmlns.com/foaf/0.1/nick> "@kidehen" .
<#i> <http://xmlns.com/foaf/0.1/homepage> <https://plus.google.com/112399767740508618350/about> .
<#i> <http://xmlns.com/foaf/0.1/weblog> <http://www.openlinksw.com/blog/~kidehen> .
<#i> <http://xmlns.com/foaf/0.1/workplaceHomepage> <http://www.openlinksw.com> .
<#i> a <http://xmlns.com/foaf/0.1/PersonalProfileDocument> .
# Document End #

It's saddening because (especially with some of the N3 enhancements) it's quite an elegant approach. I am elated about Turtle. You are expressing a specialized world view. The Web is for everyone and we should do everything in our power to accentuate that aspect of this powerful system, via Linked Data. Links:
1. http://bit.ly/RJzd9S -- Why Turtle Matters .
2. http://bit.ly/Xk333m -- Simple Turtle Introduction (for end-users) .
3. http://kingsley.idehen.net/DAV/home/kidehen/Public/ -- my public directory which contains some of my Turtle files (basically demonstrating that the file create, save, and share pattern will work once the Read-Write aspect of the Web emerges from its nascent state) .
4. http://bit.ly/UydU9t -- Simple SPARQL integration demo based on Turtle data sources (with prefixes deliberately kept out of the mix for simplicity and clarity).
5. http://bit.ly/VaX0zx -- Turtle tutorials collection .
6. http://bit.ly/UcnEGp -- What is Data? What is a Datum? (Ontolog forum discussion) .
Kingsley -- Regards, Kingsley Idehen
Re: Content negotiation for Turtle files
My view: On Wed, 2013-02-06 at 11:59 +0100, Bernard Vatant wrote: [ . . . ] Do I have to pass the message to adopters: publish RDF in Turtle, it's a very cool and simple syntax (oh but BTW don't forget to add HTML documentation, and also RDF/XML, . . . ). Please promote Turtle and actively discourage RDF/XML. Turtle is now far enough along in its uptake and tooling to displace RDF/XML as a common denominator format for RDF, and there is real harm in doing anything that promotes RDF/XML, as: (a) RDF/XML is much harder for humans to read; and (b) RDF/XML misleads people into thinking that RDF is a form of XML. I and others have many times seen people fall into the trap of thinking that they can use familiar XML approaches to RDF, and the result is painful disaster, because they have the wrong mental model of RDF. I think it is fine to quietly continue to serve RDF/XML if you have already been doing so, but please do not serve any new data in RDF/XML, and please do not use RDF/XML in published examples of RDF. BTW, the old Semantic Web Layer Cake http://www.w3.org/2001/09/06-ecdl/slide17-0.html is flat out wrong in showing XML at the base, as RDF/XML is merely one serialization of RDF, which is syntax independent. Here is a much better version: http://www.w3.org/Consortium/Offices/Presentations/RDFTutorial/figures/TwoTowers.png It appears in this slide set from Ivan: http://www.w3.org/2005/Talks/1214-Trento-IH/#%28160%29 -- David Booth, Ph.D. http://dbooth.org/ Aaron's Law, in memory of Web prodigy and open information advocate Aaron Swartz: http://bit.ly/USR4rx Opinions expressed herein are my own and do not necessarily reflect those of my employer.
Re: Content negotiation for Turtle files
On 2/6/13 9:52 AM, David Booth wrote: [ . . . ] +1 -- Regards, Kingsley Idehen
Re: Content negotiation for Turtle files
Bernard, I'm more than happy for Turtle to have a place in the LD ecosystem. My concern is the suggestion that it should become seen as the primary/the only delivery format for Linked Data resources such as www.lingvoj.org. Also, like you, I'm not particularly impressed by massive dumps of low-quality data. In my area of interest (cultural history) what we need are reliable sources of data which is sufficiently richly structured to be useful. This usually means they need to be more structurally complex than a simple list of properties for an entity (the dbpedia fallacy). One issue that Turtle will need to address (it may do so already) is software support for free-hand data entry. While the format is seductively simple-looking (well, it is to the likes of us who grew up on XML/SGML*) it is very easy to make mistakes. I followed Kingsley's reference to his file space (see his separate reply) and grabbed the file jordan.ttl. It contains variations in spelling which will mean that some predicate - subject links will fail (e.g. New England Patriots), as will one sameAs link (USA). There is (I guess) an intended link from USA to N. America, but again this won't fly because USA's continent property is expressed as a string. If case matters, most of the sameAs references won't work. The properties (predicates) are all local to the document and none of them is defined. Integer values are typed as strings. Two of the dates are wrong (e.g. Sept 31 783). This is not to criticise Kingsley's typing, but rather to point out that if you are encouraging users to hand-type resources which are to be interpreted as data, then they are going to need some software support if they are not going to be mightily let down by the whole process. It's a bit like authoring web pages: it doesn't go too badly if you're working in a rich edit box and don't have to add HTML markup yourself. 
Richard * the Turtle tyro might disagree, on being asked to type e.g.: :Championships "3 (2001,2003,2004)"^^xsd:string ; On 06/02/2013 13:30, Bernard Vatant wrote: [ . . . ]
Re: Content negotiation for Turtle files
On 2/6/13 10:00 AM, Richard Light wrote: [ . . . ] I followed Kingsley's reference to his file space (see his separate reply) and grabbed the file jordan.ttl. Now, you really have to put my directory listing example in context. This isn't about perfect data (such doesn't exist); it is all about the ability to create and share data. FWIW -- of all the files to pick, you picked the one created by my 12 year old son :-) It contains variations in spelling which will mean that some predicate - subject links will fail (e.g. New England Patriots), as will one sameAs link (USA). Yes, he is a Pats fan, so I used that to pique his interest en route to teaching him Turtle. There is (I guess) an intended link from USA to N. America, but again this won't fly because USA's continent property is expressed as a string. If case matters, most of the sameAs references won't work. The properties (predicates) are all local to the document and none of them is defined. Integer values are typed as strings. Two of the dates are wrong (e.g. Sept 31 783). This is not to criticise Kingsley's typing, but rather to point out that if you are encouraging users to hand-type resources which are to be interpreted as data, then they are going to need some software support if they are not going to be mightily let down by the whole process. It's a bit like authoring web pages: it doesn't go too badly if you're working in a rich edit box and don't have to add HTML markup yourself. As I said, you somehow stumbled across the Turtle doc produced by a 12 year old. That file was all about getting him going and then showing him the implications of his mistakes etc. My other Turtle tutorials include sample links to profile documents, stuff I like etc.
Richard -- Regards, Kingsley Idehen
Re: Content negotiation for Turtle files
On 2/6/13 1:08 PM, Colin wrote: Hi all, Fascinating thread, all arguments being quite valid and it seems it all depends on what you want to achieve with Linked Data. I was about to write a lengthy text to explain my view, but I'll start with a table to save time and improve readability:

You want to…   | You are: *Human*  | You are: *Machine*
*Write data*   | Turtle            | RDF/XML, JSON, N-Triples
*Read data*    | HTML, /Turtle/    | RDF/XML, JSON, Turtle, N-Triples

*Turtle*: like Kingsley pointed out many times, it's easy to write by hand. Like Richard pointed out, users should use a decent editor, with syntax checking, possibilities to import objects, classes, properties etc. easily, maybe a preview feature that would show a graphical view of the written graph. However, if reading Turtle is possible, I don't think it's what users would like in the end. With a plain Turtle file you get the meaning, but zero usability. You get maximum usability via the most simple of patterns i.e., the following steps: 1. Create a file 2. Add Turtle content 3. Save the file 4. Share the file. 1-4 are vital. You don't need any syntax highlighters for that. For instance, do you need any kind of aid from an editor to express: This is a Document. This Document was created today. It was created by me. I am a Person. etc. Basically simple sentences that flow from your conventional intuition all the way over to the digital realm of the Web. The sad story here is that the nature of Data has been compromised by the overbearing nature of software, in general. The Web is a massive jigsaw puzzle game where every puzzle piece is a Web resource. Thus, we need to have a mechanism (one that is superior to HTML) for rapidly producing and sharing data, information, and knowledge. With so much interlinked data you want to browse, not to get Turtle files one by one by manually concatenating the base URIs with the entity names.
It's like comparing the RFC text files (example: http://www.rfc-editor.org/rfc/rfc5646.txt) with the W3C recommendation pages (example: http://www.w3.org/TR/xquery/), full of links, or something even more powerful, something like Graphity (my company's data portal: http://data.nxp.com). No, that's not the case. Again, it's about puzzle pieces. Each Web user contributes their little pieces to the bigger puzzle. I can't think of a situation where a machine would write Turtle. Lost me on that one. *RDF/XML*: Not readable by humans. Since XML is quite common for storing data, the easiest way to produce RDF from XML is to serialize RDF/XML. As pointed out earlier, it's so easy that some people produce millions of rubbish triples. But don't blame the tool; crappy data was in MySQL DBs, in XML, and will be in RDF too. Instead of banning or advising against using it, it would be more productive to shine a light on the pitfalls, the most common mistakes that an XML developer would make when producing RDF. *JSON*: Not readable by humans. I'm not very familiar with Javascript development. However I know enough to know that providing a JS developer with JSON is a treat, certainly for reading, probably for writing too. Yes, and that's the issue: JSON is good for you, but the Web is for everyone (programmers and non-programmers) :-) Kingsley Best regards, and retro-thanks for all the previous threads! Colin Maudry @colinmaudry Product Data Analyst NXP Semiconductors On Wed, Feb 6, 2013 at 4:24 PM, Kingsley Idehen kide...@openlinksw.com wrote: [ . . . ]
Now, you really have to put my directory listing example in context. This isn't about perfect data (such doesn't exist); it is all about the ability to create and share data. FWIW -- of all the files to pick, you picked the one created by my 12 year old son :-)

It contains variations in spelling which will mean that some predicate-subject links will fail (e.g. New England Patriots), as will one sameAs link (USA).

Yes, he is a Pats fan, so I used that to pique his interest en route to teaching him Turtle.

There is (I guess) an intended link from USA to N. America, but again this won't fly because USA's continent property is expressed as a string. If case matters, most of the sameAs references won't work. The properties (predicates) are all local to the document and none of them is defined. Integer values are typed as strings. Two of the dates are wrong (e.g. Sept 31 783).
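[The kinds of slips Richard lists can be illustrated with a tiny before/after sketch; these are hypothetical triples, not the actual contents of jordan.ttl.]

```turtle
# String objects can't be followed anywhere, and "12" is not a number:
<#USA>  <#continent> "N. America" .
<#Mike> <#age>       "12" .

# A URI object gives you something to dereference, and a bare 12 is a
# typed xsd:integer literal:
<#USA>  <#continent> <#NorthAmerica> .
<#Mike> <#age>       12 .
```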
Re: Content negotiation for Turtle files
My apologies, I hit the Send button a bit too early. Please read: With so much interlinked data you want to browse, not to get Turtle files one by one by manually concatenating the base URIs with the entities' names.

Best regards, Colin
Re: Content negotiation for Turtle files
Hi all, Fascinating thread, all arguments being quite valid; it seems it all depends on what you want to achieve with Linked Data. I was about to write a lengthy text to explain my view, but I'll start with a table to save time and improve readability:

You are...    Human             Machine
Write data    Turtle            RDF/XML, JSON, N-Triples
Read data     HTML, (Turtle)    RDF/XML, JSON, Turtle, N-Triples

*Turtle*: as Kingsley has pointed out many times, it's easy to write by hand. As Richard pointed out, users should use a decent editor, with syntax checking, easy ways to import objects, classes, properties, etc., and maybe a preview feature that would show a graphical view of the written graph. However, even if reading Turtle is possible, I don't think it's what users will want in the end. With a plain Turtle file you get the meaning, but zero usability. With so much interlinked data you want to browse, not to fetch Turtle files one by one by manually concatenating the base URIs with the entity names. It's like comparing the RFC text files (example: http://www.rfc-editor.org/rfc/rfc5646.txt) with the W3C Recommendation pages (example: http://www.w3.org/TR/xquery/), full of links, or something even more powerful, something like Graphity (my company's data portal: http://data.nxp.com).

I can't think of a situation where a machine would write Turtle.

*RDF/XML*: not readable by humans. Since XML is quite commonly used to store data, the easiest way to produce RDF from XML is to serialize RDF/XML. As pointed out earlier, it's so easy that some people produce millions of rubbish triples. But don't blame the tool: crappy data was in MySQL DBs and in XML, and it will be in RDF too. Instead of banning or advising against using it, it would be more productive to shed light on the pitfalls, the most common mistakes that an XML developer would make when producing RDF.

*JSON*: not readable by humans. I'm not very familiar with JavaScript development. However, I know enough to know that providing a JS developer with JSON is a treat, certainly for reading, probably for writing too.

Best regards, and retro-thanks for all the previous threads! Colin Maudry @colinmaudry Product Data Analyst NXP Semiconductors

On Wed, Feb 6, 2013 at 4:24 PM, Kingsley Idehen kide...@openlinksw.com wrote:

On 2/6/13 10:00 AM, Richard Light wrote: One issue that Turtle will need to address (it may do so already) is software support for free-hand data entry. While the format is seductively simple-looking (well, it is to the likes of us who grew up on XML/SGML*) it is very easy to make mistakes. I followed Kingsley's reference to his file space (see his separate reply) and grabbed the file jordan.ttl.

Now, you really have to put my directory listing example in context. This isn't about perfect data (such doesn't exist); it is all about the ability to create and share data. FWIW -- of all the files to pick, you picked the one created by my 12 year old son :-)

It contains variations in spelling which will mean that some predicate-subject links will fail (e.g. New England Patriots), as will one sameAs link (USA).

Yes, he is a Pats fan, so I used that to pique his interest en route to teaching him Turtle.

There is (I guess) an intended link from USA to N. America, but again this won't fly because USA's continent property is expressed as a string. If case matters, most of the sameAs references won't work. The properties (predicates) are all local to the document and none of them is defined. Integer values are typed as strings. Two of the dates are wrong (e.g. Sept 31 783). This is not to criticise Kingsley's typing, but rather to point out that if you are encouraging users to hand-type resources which are to be interpreted as data, then they are going to need some software support if they are not going to be mightily let down by the whole process. It's a bit like authoring web pages: it doesn't go too badly if you're working in a rich edit box and don't have to add HTML markup yourself.

As I said, somehow you stumbled across the Turtle doc produced by a 12 year old. That file was all about getting him going and then showing him the implications of his mistakes, etc. My other Turtle tutorials include sample links to profile documents, stuff I like, etc.

Richard

-- Regards, Kingsley Idehen Founder & CEO OpenLink Software Company Web: http://www.openlinksw.com Personal Weblog: http://www.openlinksw.com/blog/~kidehen Twitter/Identi.ca handle: @kidehen Google+ Profile: https://plus.google.com/112399767740508618350/about LinkedIn Profile: http://www.linkedin.com/in/kidehen
Re: Content negotiation for Turtle files
Hi Kingsley,

Thanks for saving my contribution from oblivion! When I said zero usability, I referred to reading Turtle, not writing. I think Turtle is great for hand-writing, as you explain, and we just miss some ways to insert existing entities more easily and prevent typos. For reading, it is by far better than JSON, RDF/XML and N-Triples, but I don't think it is satisfactory. I read previously that you consider that non-programmers should learn how to read Turtle, and I agree, though I don't think people will be happy to leave the colorful and groovy UIs they have been used to. That's a matter for debate.

"Lost me on that one."

With my contribution I tried to explore, format by format, the different combinations of human/machine and reading/writing. Coming to the machine/writing/Turtle combination, I couldn't think of a situation where it would make sense.

Best regards, Colin
Re: Content negotiation for Turtle files
On 2/6/13 1:16 PM, Colin wrote: My apologies, I hit the Send button a bit too early. Please read: With so much interlinked data you want to browse, not to get Turtle files one by one by manually concatenating the base URIs with the entities' names.

You don't have to do any such thing. See:

1. http://kingsley.idehen.net/DAV/home/kidehen/Public/AmazonS3/Profile/ThingsILike.ttl -- note the use of relative document URLs and how they simplify the production of Linked Data URIs.

2. http://linkeddata.uriburner.com/about/html/http/kingsley.idehen.net/DAV/home/kidehen/Public/AmazonS3/Profile/ThingsILike.ttl -- via Linked Data Browser.

3. http://linkeddata.uriburner.com/describe/?url=http%3A%2F%2Fkingsley.idehen.net%2FDAV%2Fhome%2Fkidehen%2FPublic%2FAmazonS3%2FProfile%2FThingsILike.ttl -- via Faceted Linked Data Browser.

Re. #2 and #3: each time you click on a link, the entity-description-oriented data is retrieved and presented to you via an HTML document that provides clear context for the follow-your-nose pattern that underlies Web navigation in general.

Kingsley
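[The relative-URL point in #1 above can be sketched as follows; the document URL and file names here are hypothetical, not Kingsley's actual files. A relative reference in Turtle resolves against the URL the document is served from, so no base URIs need to be concatenated by hand.]

```turtle
# Inside a document served from, say, http://example.org/Profile/Me.ttl:
# <#me> resolves to http://example.org/Profile/Me.ttl#me, and
# <ThingsILike.ttl> resolves to http://example.org/Profile/ThingsILike.ttl.
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<#me> a foaf:Person ;
      foaf:made <ThingsILike.ttl> .
```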
Re: Content negotiation for Turtle files
On 2/6/13 1:38 PM, Colin wrote: Hi Kingsley, Thanks for saving my contribution from oblivion! When I said zero usability, I referred to reading Turtle, not writing. I think Turtle is great for hand-writing, as you explain, and we just miss some ways to insert existing entities more easily and prevent typos. For reading, it is by far better than JSON, RDF/XML and N-Triples, but I don't think it is satisfactory.

That's why I say RDF-based Linked Data is "horses for courses" compliant :-) Each RDF syntax notation best serves a specific user profile. We only get into trouble when we try to force an inappropriate syntax notation upon the wrong user profile.

I read previously that you consider that non-programmers should learn how to read Turtle, and I agree, though I don't think people will be happy to leave the colorful and groovy UIs they have been used to. That's a matter for debate.

They will actually appreciate the value of triple-curation UIs once they understand the basics. Our problem in the past is that we skipped this vital first step (myself included).

"Lost me on that one." With my contribution I tried to explore, format by format, the different combinations of human/machine and reading/writing. Coming to the machine/writing/Turtle combination, I couldn't think of a situation where it would make sense.

If the output is to be read and easily understood by a human :-)

Kingsley
Re: Content negotiation for Turtle files
On 2/5/13 6:49 PM, Bernard Vatant wrote:

Hello all,

Back in 2006 I thought I had understood, with the help of folks around here, how to configure my server for content negotiation at lingvoj.org. Both vocabulary and instances were published in RDF/XML. I updated the ontology last week, and since, after years of happy living with RDF/XML, people eventually convinced me that it was a bad, prehistoric and ugly syntax, I decided to be trendy and published the new version in Turtle at http://www.lingvoj.org/ontology_v2.0.ttl

The vocabulary URI is still the same, http://www.lingvoj.org/ontology, and the namespace is http://www.lingvoj.org/ontology# (cool URIs don't change).

Then I turned to Vapour to test this new publication, and found out that for it to be happy with the vocabulary URI, it has to find some answer when requesting application/rdf+xml. But since I no longer have an RDF/XML file for this version, what should I do? I turned to the best-practices document at http://www.w3.org/TR/swbp-vocab-pub, but it does not provide examples with Turtle, only RDF/XML. So I blindly put the following in the .htaccess:

AddType application/rdf+xml .ttl

I found it a completely stupid and dirty trick... but amazingly it makes Vapour happy. But now Firefox chokes on http://www.lingvoj.org/ontology_v2.0.ttl because it seems to expect an XML file. Chrome does not have this issue. The LOV-Bot says there is a content negotiation issue and can't get the file. So does Parrot. I feel dumb, but I'm certainly not the only one; I've stumbled upon a number of vocabularies published in Turtle for which the conneg does not seem to be perfectly clear either. What do I miss, folks? Should I forget about it and switch back to good ol' RDF/XML?

Hoping you see the contradiction re.
content-type:

curl -IL http://www.lingvoj.org/ontology

HTTP/1.1 303 See Other
Date: Wed, 06 Feb 2013 00:54:13 GMT
Server: Apache
Location: http://www.lingvoj.org/ontology_v2.0.ttl
Content-Type: text/html; charset=iso-8859-1

HTTP/1.1 200 OK
Date: Wed, 06 Feb 2013 00:54:13 GMT
Server: Apache
Last-Modified: Fri, 01 Feb 2013 19:16:02 GMT
ETag: 60172428-5258-4d4ae92f95333
Accept-Ranges: bytes
Content-Length: 21080
Content-Type: application/rdf+xml

Kingsley

Bernard -- Bernard Vatant, Vocabularies & Data Engineering. Tel: +33 (0)9 71 48 84 59. Skype: bernard.vatant. Blog: the wheel and the hub http://blog.hubjects.com/ Mondeca, 3 cité Nollez, 75018 Paris, France, www.mondeca.com Follow us on Twitter: @mondecanews (http://twitter.com/#%21/mondecanews) -- Meet us at Documation (http://www.documation.fr/) in Paris, March 20-21

-- Regards, Kingsley Idehen Founder & CEO OpenLink Software Company Web: http://www.openlinksw.com Personal Weblog: http://www.openlinksw.com/blog/~kidehen Twitter/Identi.ca handle: @kidehen Google+ Profile: https://plus.google.com/112399767740508618350/about LinkedIn Profile: http://www.linkedin.com/in/kidehen
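[Editorially, the usual fix for Bernard's original problem is not to relabel Turtle as RDF/XML but to map the .ttl extension to Turtle's registered media type, text/turtle. A minimal .htaccess sketch, assuming an Apache server like the one shown in the curl output above; the MultiViews line is an optional assumption for negotiating between co-located .rdf and .ttl variants, and is not something the thread itself prescribes.]

```apache
# Serve Turtle files with the media type registered for Turtle: text/turtle.
# This stops browsers (e.g. Firefox above) from trying to parse .ttl as XML.
AddType text/turtle .ttl

# Optional: let Apache pick between ontology.rdf and ontology.ttl
# based on the client's Accept header when /ontology is requested.
Options +MultiViews
```

With something like this in place, a client sending "Accept: text/turtle" gets the Turtle file with a matching Content-Type, while the RDF/XML variant (if one is published alongside) remains available to clients that prefer application/rdf+xml.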