Re: Data sets of LOD
I would like to ask you if you can give me the information, in the Linked Open Data project, which data sets make reference to which data sets and how many links there are between them. http://lod-cloud.net/state/ Cheers, Michael -- Dr. Michael Hausenblas, Research Fellow DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel.: +353 91 495730 http://mhausenblas.info/ On 19 Nov 2012, at 15:42, Mary Koutraki wrote: Dear all, I would like to ask you if you can give me the information, in the Linked Open Data project, which data sets make reference to which data sets and how many links there are between them. Thank you in advance. -- Mary Koutraki PhD Student on Semantic Web UVSQ - ETIS Lab
Re: Data sets of LOD
What's the update frequency of this effort? AFAIK roughly once per year up to now but Richard would be the more competent person to provide you with an answer ;) Cheers, Michael -- Dr. Michael Hausenblas, Research Fellow DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel.: +353 91 495730 http://mhausenblas.info/ On 20 Nov 2012, at 13:48, Kingsley Idehen wrote: On 11/20/12 7:59 AM, Michael Hausenblas wrote: I would like to ask you if you can give me the information, in the Linked Open Data project, which data sets make reference to which data sets and how many links there are between them. http://lod-cloud.net/state/ Michael, What's the update frequency of this effort? Kingsley Cheers, Michael -- Dr. Michael Hausenblas, Research Fellow DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel.: +353 91 495730 http://mhausenblas.info/ On 19 Nov 2012, at 15:42, Mary Koutraki wrote: Dear all, I would like to ask you if you can give me the information, in the Linked Open Data project, which data sets make reference to which data sets and how many links there are between them. Thank you in advance. -- Mary Koutraki PhD Student on Semantic Web UVSQ - ETIS Lab -- Regards, Kingsley Idehen Founder CEO OpenLink Software Company Web: http://www.openlinksw.com Personal Weblog: http://www.openlinksw.com/blog/~kidehen Twitter/Identi.ca handle: @kidehen Google+ Profile: https://plus.google.com/112399767740508618350/about LinkedIn Profile: http://www.linkedin.com/in/kidehen
RDB2RDF Recommendations are published
http://www.w3.org/TR/2012/REC-r2rml-20120927/ http://www.w3.org/TR/2012/REC-rdb-direct-mapping-20120927/ http://semanticweb.com/transforming-relational-data-to-rdf-r2rml-becomes-official-w3c-recommendation_b32395 Thank you very much, everyone involved! A big kudos to the wonderful Editors of R2RML and DM, my co-chair and all the WG members, early ones and the ones who pulled through to the very end! Now, the real work starts: the success of a standard is, IMHO, measured by the uptake. We have now a stable proposal on the table and need to convince industry players and end-users alike that it is worth investing in this piece of infrastructure. Link long and prosper! Cheers, Michael -- Dr. Michael Hausenblas, Research Fellow DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel.: +353 91 495730 http://mhausenblas.info/
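The Direct Mapping idea that the second REC standardises can be illustrated with a small, stdlib-only sketch: each row becomes a subject IRI, each column a predicate IRI. This is not the REC algorithm — typed literals, foreign keys and blank nodes are omitted, and the base IRI and table are invented for illustration:

```python
import sqlite3
from urllib.parse import quote

def direct_map(conn, table, pk, base="http://example.com/"):
    # Row -> subject IRI <base/Table/pk=value>; column -> predicate IRI
    # <base/Table#column>. Values are emitted as plain literals for
    # brevity (the REC derives typed literals from the SQL column types).
    cur = conn.execute("SELECT * FROM " + table)
    cols = [d[0] for d in cur.description]
    triples = []
    for row in cur:
        rec = dict(zip(cols, row))
        subj = "<%s%s/%s=%s>" % (base, table, pk, quote(str(rec[pk])))
        for col, val in rec.items():
            triples.append('%s <%s%s#%s> "%s" .' % (subj, base, table, quote(col), val))
    return triples

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE People (ID INTEGER PRIMARY KEY, fname TEXT)")
conn.execute("INSERT INTO People VALUES (7, 'Bob')")
for t in direct_map(conn, "People", "ID"):
    print(t)
```

For production use one would of course reach for an R2RML processor rather than a hand-rolled mapper, since R2RML also lets you customise the generated IRIs and vocabulary.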
Re: Finding a SPARQL endpoint by an LOD URI?
Is that still the state of affairs? Are there any practical workarounds? If I'm not totally misunderstanding what you're trying to achieve I'd argue that VoID [1] and the SPARQL SD vocabulary [2] should be capable of doing the job. Tim, care to update the respective sentence in your document? Cheers, Michael [1] http://www.w3.org/TR/void/#backlinks [2] http://www.w3.org/TR/sparql11-service-description/ -- Dr. Michael Hausenblas, Research Fellow DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel.: +353 91 495730 WebID: http://sw-app.org/mic.xhtml#i On 4 Jul 2012, at 07:45, Heiko Paulheim wrote: Hi all, I am wondering whether there is a way of finding a SPARQL endpoint for an LOD URI, i.e., a function f that behaves like f(http://dbpedia.org/resource/Darmstadt) = http://dbpedia.org/sparql TimBL's design issues document [1] says: To make the data be effectively linked, someone who only has the URI of something must be able to find their way to the SPARQL endpoint. [...] Vocabularies for doing this have not yet been standardized. Is that still the state of affairs? Are there any practical workarounds? Best, Heiko [1] http://www.w3.org/DesignIssues/LinkedData.html -- Dr. Heiko Paulheim Knowledge Engineering Group Technische Universität Darmstadt Phone: +49 6151 16 6634 Fax: +49 6151 16 5482 http://www.ke.tu-darmstadt.de/staff/heiko-paulheim
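The VoID-based lookup suggested above boils down to: obtain a VoID description, find the dataset whose void:uriSpace covers the resource URI, and read off its void:sparqlEndpoint. A rough sketch against an invented, hard-coded N-Triples snippet (real code would dereference and properly parse actual RDF; the simplistic line-splitting here only handles this tidy input):

```python
# Hypothetical VoID description in N-Triples; void:sparqlEndpoint and
# void:uriSpace are real VoID terms, but this snippet is made up.
VOID_NT = """
<http://dbpedia.org/void/Dataset> <http://rdfs.org/ns/void#sparqlEndpoint> <http://dbpedia.org/sparql> .
<http://dbpedia.org/void/Dataset> <http://rdfs.org/ns/void#uriSpace> "http://dbpedia.org/resource/" .
"""

def find_endpoint(nt, resource_uri):
    # Return the void:sparqlEndpoint of the dataset whose void:uriSpace
    # is a prefix of resource_uri (naive whitespace-based triple split).
    spaces, endpoints = {}, {}
    for line in nt.strip().splitlines():
        s, p, o = line.rstrip(" .").split(None, 2)
        if p == "<http://rdfs.org/ns/void#uriSpace>":
            spaces[s] = o.strip('"')
        elif p == "<http://rdfs.org/ns/void#sparqlEndpoint>":
            endpoints[s] = o.strip("<>")
    for dataset, space in spaces.items():
        if resource_uri.startswith(space):
            return endpoints.get(dataset)

print(find_endpoint(VOID_NT, "http://dbpedia.org/resource/Darmstadt"))
```

The missing piece in practice — which the thread is really about — is discovering the VoID description itself from nothing but the resource URI (e.g. via backlinks or a well-known location).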
Re: Sorry, I don’t speak SPARQL - A Survey
Jens, Good stuff. I got as far as question 7 out of 30 when I saw the following error message: [[ Authentication problem Take note of any unsaved data, and click here to continue. UIDL could not be read from server. Check servlets mappings. Error code: 404 ]] … and stopped doing it. KUTGW! Cheers, Michael -- Dr. Michael Hausenblas, Research Fellow DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel.: +353 91 495730 WebID: http://sw-app.org/mic.xhtml#i On 25 May 2012, at 12:12, Jens Lehmann wrote: Hello Pierre, On 24.05.2012 19:11, Pierre-Yves Vandenbussche wrote: Hello Jens, That's a good initiative. Thanks for your feedback. you should give the prefixes in your survey so one can verify the labels in the ontology and understand the resources, properties and classes used in the query... As I understand it, it is all about DBpedia, so: PREFIX res: <http://dbpedia.org/resource/> PREFIX dbp: <http://dbpedia.org/property/> We added the prefixes everywhere now (they were in the questions already, but not in the explanation of each of the three different tasks). Also thanks for the many mails people sent me. We have quite valuable feedback for our work already. Kind regards, Jens
Re: VoID and XML sitemap visual viewers
Yury, Hi! Are there any pretty-looking applications or websites that allow one to view VoID files and XML sitemaps? For VoID, see http://semanticweb.org/wiki/VoiD where we try to keep track of such tools and I encourage the community at large to record more VoID browsers etc. there ... Cheers, Michael -- Dr. Michael Hausenblas, Research Fellow DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel.: +353 91 495730 WebID: http://sw-app.org/mic.xhtml#i On 2 Apr 2012, at 09:14, Yury Katkov wrote: Hi! Are there any pretty-looking applications or websites that allow one to view VoID files and XML sitemaps? For VoID I certainly can use a generic RDF browser but is there something more handy? Sincerely yours, - Yury Katkov
Re: Semantic Web Dogfood
Sorry, I have been here before, and can't remember who to email (ad...@data.semanticweb.org bounces). And I know some brave people were trying to sort it out. Thanks for reporting this, Hugh. In transition - Knud, are you able to check this quickly? Cheers, Michael -- Dr. Michael Hausenblas, Research Fellow DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel.: +353 91 495730 WebID: http://sw-app.org/mic.xhtml#i On 28 Mar 2012, at 15:56, Hugh Glaser wrote: Sorry, I have been here before, and can't remember who to email (ad...@data.semanticweb.org bounces). And I know some brave people were trying to sort it out. Anyway: Hi there, Sorry to report, but it seems things are a bit broken. Eg Resource URI on the dog food server: http://data.semanticweb.org/person/dan-brickley Email Hash: 748934f32135cfcf6f8c06e253c53442721e15e7 Eg transcript: hg@cohen [2012-03-28T15:43:32] acm.rkbexplorer.com/acquisition rdfget http://data.semanticweb.org/person/libby-miller HTTP/1.1 303 See Other Date: Wed, 28 Mar 2012 16:15:23 GMT Server: Apache/2.2.3 (Debian) DAV/2 SVN/1.4.2 PHP/5.2.0-8+etch16 mod_ssl/2.2.3 OpenSSL/0.9.8c X-Powered-By: PHP/5.2.0-8+etch16 Set-Cookie: SESS002fbfc63133341c13dbc400422ca44a=40e15aa64d8febbf4530d9d3bd778487; expires=Fri, 20 Apr 2012 19:48:43 GMT; path=/; domain=.data.semanticweb.org Expires: Sun, 19 Nov 1978 05:00:00 GMT Last-Modified: Wed, 28 Mar 2012 16:15:23 GMT Cache-Control: store, no-cache, must-revalidate Cache-Control: post-check=0, pre-check=0 Location: http://data.semanticweb.org/person/libby-miller/rdf Access-Control-Allow-Origin: * Transfer-Encoding: chunked Content-Type: text/html; charset=utf-8 HTTP/1.1 200 OK Date: Wed, 28 Mar 2012 16:15:23 GMT Server: Apache/2.2.3 (Debian) DAV/2 SVN/1.4.2 PHP/5.2.0-8+etch16 mod_ssl/2.2.3 OpenSSL/0.9.8c X-Powered-By: PHP/5.2.0-8+etch16 Set-Cookie: SESS002fbfc63133341c13dbc400422ca44a=a6cd8a43718d688ec6192079abe7a400; expires=Fri, 20 Apr 2012 19:48:43 
GMT; path=/; domain=.data.semanticweb.org Expires: Sun, 19 Nov 1978 05:00:00 GMT Last-Modified: Wed, 28 Mar 2012 16:15:23 GMT Cache-Control: store, no-cache, must-revalidate Cache-Control: post-check=0, pre-check=0 Access-Control-Allow-Origin: * Content-Length: 186 Content-Type: application/rdf+xml; charset=utf-8 <br /><b>Fatal error</b>: Call to a member function writeRdfToString() on a non-object in <b>/var/www/drupal-6.22/sites/all/modules/dogfood/dogfood.module</b> on line <b>171</b><br /> It only gives the 200 response after a very looong time. Best Hugh -- Hugh Glaser, Web and Internet Science Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ Work: +44 23 8059 3670, Fax: +44 23 8059 3045 Mobile: +44 75 9533 4155 , Home: +44 23 8061 5652 http://www.ecs.soton.ac.uk/~hg/
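Incidentally, the 303-then-200 dance in the transcript is the usual Linked Data dereferencing pattern: ask for application/rdf+xml and follow the See Other redirect to the data document. A minimal sketch of building such a request with the stdlib (the URI is taken from the transcript; nothing is actually sent here — urllib follows the 303 automatically when the request is opened with urlopen()):

```python
import urllib.request

def rdf_request(uri):
    # Request the RDF variant of a resource via content negotiation.
    return urllib.request.Request(uri, headers={"Accept": "application/rdf+xml"})

req = rdf_request("http://data.semanticweb.org/person/libby-miller")
print(req.get_header("Accept"))
```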
Re: Conversion to RDF/Linked Data
Mika, Would somebody guide me how I can convert such a record into RDF/Linked Data? There are two aspects to it: 1. the converter (for example, if your data source is a relational DB you might want to use an RDB2RDF mapper [1]), and 2. the schema level, for which I would (totally unbiased of course ;) suggest that you have a look at the work we're doing in the W3C Government Linked Data WG [2] ... still early days, though ;) Cheers, Michael [1] http://www.w3.org/2001/sw/rdb2rdf/ [2] https://dvcs.w3.org/hg/gld/raw-file/default/people/index.html -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 14 Feb 2012, at 09:16, Mika Singh wrote: I want to convert persons' data to RDF/Linked Data. I have data like this: Person_ID has_name N has_surname S has_hobbies h1, h2, ..., hn has_friends f1, f2, ..., fn countries_visited c1, c2, ..., cn date_of_birth dob height H centimetres weight W kgs favourite_books b1, b2, ..., bn favourite_movies m1, m2, ..., mn Would somebody guide me how I can convert such a record into RDF/Linked Data?
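One way to picture the conversion being asked about: mint a subject IRI per person, a predicate per field, and one triple per value for the multi-valued fields. A hand-rolled, stdlib-only sketch — the vocabulary and base IRIs are invented for illustration; in practice one would reuse FOAF (foaf:name, foaf:knows, ...) where possible:

```python
EX = "http://example.org/vocab#"  # hypothetical vocabulary namespace

record = {  # a small subset of the fields in the example
    "has_name": "N",
    "has_surname": "S",
    "has_hobbies": ["h1", "h2"],
    "height": "180",
}

def to_ntriples(person_id, record, base="http://example.org/person/"):
    subj = "<%s%s>" % (base, person_id)
    triples = []
    for key, value in record.items():
        values = value if isinstance(value, list) else [value]
        for v in values:  # one triple per value for multi-valued fields
            triples.append('%s <%s%s> "%s" .' % (subj, EX, key, v))
    return triples

for t in to_ntriples("P1", record):
    print(t)
```

All objects are plain literals here; a real conversion would use IRIs for friends (so they can be linked to) and typed literals for dates and measurements.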
Re: Conversion to RDF/Linked Data
I would also recommend @timrdf's csv2rdf4lod conversion automation, the basis for our conversion at TWC RPI: Sure, Tim has done a great job there - absolutely worth using this. One problem though - Mika didn't really specify the source format, so it is hard to provide concrete suggestions - any of http://www.w3.org/wiki/ConverterToRdf could fit his bill ;) Cheers, Michael -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 14 Feb 2012, at 12:37, John Erickson wrote: I would also recommend @timrdf's csv2rdf4lod conversion automation, the basis for our conversion at TWC RPI: https://github.com/timrdf/csv2rdf4lod-automation/wiki Given CSV as a starting point, you should be doing conversions in under 30 mins ;) BTW: We recently started a project, Elixir http://bit.ly/wRjQTI, to create an easy-to-use Web portal front end for csv2rdf4lod. Watch this space... On Tue, Feb 14, 2012 at 7:24 AM, Michael Hausenblas michael.hausenb...@deri.org wrote: Mika, Would somebody guide me how I can convert such a record into RDF/Linked Data? There are two aspects to it: 1. the converter (for example, if your data source is a relational DB you might want to use an RDB2RDF mapper [1]), and 2. the schema level, for which I would (totally unbiased of course ;) suggest that you have a look at the work we're doing in the W3C Government Linked Data WG [2] ... still early days, though ;) Cheers, Michael [1] http://www.w3.org/2001/sw/rdb2rdf/ [2] https://dvcs.w3.org/hg/gld/raw-file/default/people/index.html -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel.
+353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 14 Feb 2012, at 09:16, Mika Singh wrote: I want to convert persons' data to RDF/Linked Data. I have data like this: Person_ID has_name N has_surname S has_hobbies h1, h2, ..., hn has_friends f1, f2, ..., fn countries_visited c1, c2, ..., cn date_of_birth dob height H centimetres weight W kgs favourite_books b1, b2, ..., bn favourite_movies m1, m2, ..., mn Would somebody guide me how I can convert such a record into RDF/Linked Data? -- John S. Erickson, Ph.D. Director, Web Science Operations Tetherless World Constellation (RPI) http://tw.rpi.edu olyerick...@gmail.com Twitter Skype: olyerickson
FYI: European Data Forum 2012
FYI: we're co-organising the European Data Forum 2012 [1] on June 6-7, 2012 in Copenhagen (Denmark) - consider participating! Cheers, Michael [1] http://www.data-forum.eu/ -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html
Re: [Ann] LODStats - Real-time Data Web Statistics
We are happy to announce the first public *release of LODStats*. Very nice! Does it output VoID [1]? Didn't find it skimming the source ... Cheers, Michael [1] http://www.w3.org/TR/void/ -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 2 Feb 2012, at 11:04, Sören Auer wrote: Dear all, We are happy to announce the first public *release of LODStats*. LODStats is a statement-stream-based approach for gathering comprehensive statistics about datasets adhering to the Resource Description Framework (RDF). LODStats was implemented in Python and integrated into the CKAN dataset metadata registry [1]. Thus it helps to obtain a comprehensive picture of the current state of the Data Web. More information about LODStats (including its open-source implementation) is available from: http://aksw.org/projects/LODStats A demo installation collecting statistics from all LOD datasets registered on CKAN is available from: http://stats.lod2.eu We would like to thank the AKSW research group [2] and LOD2 project [3] members for their suggestions. The development of LODStats was supported by the FP7 project LOD2 (GA no. 257943). On behalf of the LODStats team, Sören Auer, Jan Demter, Michael Martin, Jens Lehmann [1] http://ckan.net [2] http://aksw.org [3] http://lod2.eu
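The statement-stream idea behind LODStats — a single pass over the triples, keeping only counters and small sets in memory rather than loading the dataset — can be sketched as follows (an illustration of the general approach, not the LODStats code; the triples are invented):

```python
from collections import Counter

def stream_stats(triples):
    # One pass over a stream of (s, p, o) triples; only aggregates are
    # kept, so memory stays bounded by the number of distinct terms seen.
    stats = {"triples": 0, "predicates": Counter(), "subjects": set()}
    for s, p, o in triples:
        stats["triples"] += 1
        stats["predicates"][p] += 1
        stats["subjects"].add(s)
    return stats

stream = [
    ("ex:a", "rdf:type", "ex:C"),
    ("ex:a", "ex:name", '"A"'),
    ("ex:b", "rdf:type", "ex:C"),
]
s = stream_stats(stream)
print(s["triples"], len(s["subjects"]), s["predicates"].most_common(1))
```

In a real setting the stream would come from an N-Triples parser reading a dump or a SPARQL result iterator, and the distinct-subject set would typically be replaced by a sketch (e.g. a cardinality estimator) for very large datasets.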
3rd CFP: Linked Data on the Web (LDOW2012) Workshop at WWW2012
= Call for Papers: Linked Data on the Web (LDOW2012) Workshop at WWW2012
http://events.linkeddata.org/ldow2012/
16 April, 2012, Lyon, France

= Objectives

The Web is continuing to develop from a medium for publishing textual documents into a medium for sharing structured data. In 2011, the Web of Linked Data grew to a size of about 32 billion RDF triples, with contributions coming increasingly from companies, governments and other public sector bodies such as libraries, statistical bodies or environmental agencies. In parallel, Google, Yahoo and Bing have established the schema.org initiative, a shared set of schemata for publishing structured data on the Web that focuses on vocabulary agreement and low barriers of entry for data publishers. These developments create a positive feedback loop for data publishers and highlight new opportunities for commercial exploitation of Web data.

In this context, the LDOW2012 workshop provides a forum for presenting the latest research on Linked Data and driving forward the research agenda in this area. We expect submissions that discuss the deployment of Linked Data in different application domains and explore the motivation, value proposition and business models behind these deployments, especially in relation to complementary and alternative techniques for data provision (e.g. Web APIs, Microdata, Microformats) and proprietary data sharing platforms (e.g. Facebook, Twitter, Flickr, LastFM).

= Topics of Interest

Topics of interest for the LDOW2012 workshop include, but are not limited to:

* Linked Data Deployment
  * case studies of Linked Data deployment and value propositions in different application domains
  * application showcases including aggregators, search engines and marketplaces for Linked Data
  * business models for Linked Data publishing and consumption
  * analyzing and profiling the Web of Data
* Linked Data and alternative Data Provisioning and Sharing Techniques
  * comparison of Linked Data to alternative data provisioning and sharing techniques
  * implications and limitations of a public data commons on the Web versus company-owned sharing platforms
  * increasing the value of Schema.org and OpenGraphProtocol data through data linking
* Linked Data Infrastructure
  * crawling, caching and querying Linked Data on the Web
  * linking algorithms and identity resolution
  * Web data integration and data fusion
  * Linked Data mining and data space profiling
  * tracking provenance and usage of Linked Data
  * evaluating quality and trustworthiness of Linked Data
  * licensing issues in Linked Data publishing
  * interface and interaction paradigms for Linked Data applications
  * benchmarking Linked Data tools

= Submissions

We seek the following kinds of submissions:

1. Full scientific papers: up to 10 pages in ACM format
2. Short scientific and position papers: up to 5 pages in ACM format

Submissions must be formatted using the ACM SIG template (as per the WWW2012 Research Track) available at http://www.acm.org/sigs/publications/proceedings-templates . Please note that the author list does not need to be anonymized, as we do not operate a double-blind review process. Submissions will be peer reviewed by at least three independent reviewers. Accepted papers will be presented at the workshop and included in the workshop proceedings. At least one author of each paper is expected to register for the workshop and attend to present the paper.

Please submit papers via EasyChair at https://www.easychair.org/conferences/?conf=ldow2012

= Important Dates

* Submission deadline: 13 February, 2012, 23:59 CET
* Notification of acceptance: 7 March, 2012
* Camera-ready versions of accepted papers: 23 March, 2012
* Workshop date: 16 April, 2012

= Organising Committee

* Christian Bizer, Freie Universität Berlin, Germany
* Tom Heath, Talis Systems Ltd, UK
* Tim Berners-Lee, MIT CSAIL, USA
* Michael Hausenblas, DERI, NUI Galway, Ireland

= Programme Committee

* Alexandre Passant, DERI, NUI Galway, Ireland
* Andreas Harth, Karlsruhe Institute of Technology, Germany
* Andreas Langegger, University of Linz, Austria
* Andy Seaborne, Epimorphics, UK
* Anja Jentzsch, Freie Universität Berlin, Germany
* Axel-Cyrille Ngonga Ngomo, University of Leipzig, Germany
* Bernhard Schandl, University of Vienna, Austria
* Christopher Brewster, Aston Business School, UK
* Daniel Schwabe, PUC-RIO
Re: status and problems on semanticweb.org
1. Try to remove the recent spam 2. Enforce a strict registration schema and allow edits only to registered participants. I think the community is small enough so that we could easily determine eligibility of new people. +1 Cheers, Michael -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 13 Jan 2012, at 08:52, Martin Hepp wrote: Hi Markus, all: I think it would be sufficient to 1. Try to remove the recent spam 2. Enforce a strict registration schema and allow edits only to registered participants. I think the community is small enough so that we could easily determine eligibility of new people. Best Martin On Jan 12, 2012, at 6:43 PM, Markus Krötzsch wrote: Hi Yuri, let us take this to one mailing list semantic-...@w3.org, as this is the list that is most involved (please drop the others when you reply). As the technical maintainer of the site, I largely agree with your assessment. In spite of the very high visibility of the site (and perceived authority), the active editing community is not big. This is a problem especially given the significant and continued spam attacks that the site is under due to its high visibility (I just recently changed the captcha system and rolled back thousands of edits, yet it seems they are already breaking through again, though in smaller numbers). I do not want to blame anybody for the state of affairs: most of us do not have the time to contribute significant content to such sites. However, given the extraordinary visibility of the site, we should all perceive this as a major problem (to the extent that we attach our work to the label semantic web in any way). So what can be done? (1) Freeze the wiki. 
A weaker version of this is: allow users only to edit after they were manually added to a group of trusted users (all humans welcome). This would require somebody to manage these permissions but would allow existing projects/communities to continue to use the site. (2) Re-enforce spam protection on the wiki. Maybe this could be done, but the site is targeted pretty heavily. Standard captchas like ReCaptcha are thus getting broken (spammers do have an effective infrastructure for this), but maybe non-standard captchas could work better. This is a task for the technical maintainers (i.e., me and the folks at AIFB Karlsruhe where the site is hosted). (3) Clean the wiki. Whether frozen or not, there is a lot of spam already. Something needs to be done to get rid of it. This requires (easy but tedious) manual effort. Some stakeholders need to be found to provide basic workforce (e.g., by hiring a student to help with spam deletion). (4) Restore the wiki. Update the main pages (about technologies and active projects) to reflect a current and/or timeless state that we would like new readers to see. This again needs somebody to push it, and for writing pages about topics like SPARQL one would need some expertise. This is a challenge for the community. I am willing to invest /some/ time here to help with the above, but (3) and (4) requires support from more people. On the other hand, there are probably hardly more than 20 or 30 *essential* content pages that we are talking about here, plus many pages about projects and people that one should ask the stakeholders to review. So one might be able to make this into a shining entry point to the semantic web in a week of work ... together with (1) and (2) above, the invested work would remain valuable for a long time. Cheers Markus On 12/01/12 10:43, Yury Katkov wrote: Hi everyone! What is the current status of the semanticweb.org http://semanticweb.org website? 
It used to be the main wiki about the Semantic Web; it has a lot of cool and useful information about everything. But now it seems abandoned. I mean, there are about 30 real writers who update the information about their projects and write articles, but they account for something like 30% of changes. The other 70% is spam! Are there people who support the website? Who manages the community, and are there any plans for creating projects and articles about SW? Is there a community at all? In my opinion, if this great website is supposed to stay alive, the first goal is to find volunteers who'll help the administrator to combat spam (with bots, extensions and editing policies) and support the new activities and projects on the wiki. (I'm ready to be one of them.) If this wiki lived only in the past, when there was a big hype around Semantic Web topics, and now without big funding nobody wants to use it - wouldn't it be better to freeze it? I appreciate and admire the people who started up the wiki. Please don't let it become a rotting memorial to the past.
Linked Open Data Around-The-Clock news
All, FYI: we have re-launched the LATC (Linked Open Data Around-The-Clock) project homepage [1]. Check out the freely available reports on best practices for Linked Data publishing and consuming, the Publication Consumption Tools Library and the 24/7 Interlinking Platform. Note that our ongoing work, sponsored by the EC under the FP7 Programme, is available via the project's repository [2]. Cheers, Michael - LATC co-ordinator [1] http://latc-project.eu/ [2] https://github.com/LATC -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html
Re: Linked Open Data Around-The-Clock news
Gannon, Thanks for your feedback. As usual, very interesting! I'll have a deeper look into it and maybe we can follow up at the eGov IG meetings? Cheers, Michael -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 9 Sep 2011, at 16:19, Gannon Dick wrote: Hi Michael, Thank you for using a lower-case n. My first thought was "Oh {expletive deleted}, here we go again!", but the "n" made me click. Around-The-Clock News (and Weather Community Culture) are something entirely different from Around-The-Clock data [1,2]. An always-on/off user schedule assumption works for appliances, but a cadastral map, even coarse grained, is necessary to prevent encroachment on the personal privacy of human users. A reference from the GPS on an appliance to a cadastral map renders anonymous the location of a human appliance user. Also known as hide in plain sight :o) INSPIRE Spatial Things, Spatial Objects, and Theme=CP (Cadastral parcels) help quite a bit. The US Library of Congress Country URI (Spatial Things) and Geographic Area URI (Spatial Objects) help too, although a PURL [3] could be used to reconcile LOC-ID and INSPIRE URI formats. The complete data sets, unfortunately, are very big. An LDAP Address Book tool to hold map fragments off-line is a good idea. I have US and Australian Weather Stations as a test case in an OpenOffice DB. It's a slow monstrosity and hard to move. The extracts (with links) are a bit better, but still large files.
--Gannon [1] Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed http://yalepress.yale.edu/book.asp?isbn=9780300078152 [2] The Latitude Effect http://tinyurl.com/white-nights-forever [3] PURL Home Page http://purl.org/docs/index.html --- On Fri, 9/9/11, Michael Hausenblas michael.hausenb...@deri.org wrote: From: Michael Hausenblas michael.hausenb...@deri.org Subject: Linked Open Data Around-The-Clock news To: Linked Data community public-lod@w3.org Date: Friday, September 9, 2011, 7:20 AM All, FYI: we have re-launched the LATC (Linked Open Data Around-The-Clock) project homepage [1]. Check out the freely available reports on best practices for Linked Data publishing and consuming, the Publication Consumption Tools Library and the 24/7 Interlinking Platform. Note that our ongoing work, sponsored by the EC under the FP7 Programme, is available via the project's repository [2]. Cheers, Michael - LATC co-ordinator [1] http://latc-project.eu/ [2] https://github.com/LATC -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html
Re: Document fragment vocabulary
It is not really Linked Data friendly. Why? @Michael: is there some standardisation regarding URIs for text going on? As you've rightly identified, an RFC already exists. What would this new standardisation activity be chartered for? As an aside, this reminds me a bit of http://xkcd.com/927/ The approach by Wilde and Dürst [1] seems to lack stability. I don't know what you mean by this. Lack of take-up, yes. Stability, what's that? Do you think we could do such standardisation for document fragments and text fragments within the Media Fragments Group [3]? No. Disclaimer: I'm a MF WG member. Look at our charter [1] ... Maybe this thread should slowly be moved over to u...@w3.org [2]? Cheers, Michael [1] http://www.w3.org/2008/01/media-fragments-wg.html [2] http://lists.w3.org/Archives/Public/uri/ -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 16 Aug 2011, at 05:40, Sebastian Hellmann wrote: Hi Michael and Alex, sorry to answer so late, I was on holiday in France. I looked at the three provided resources [1,2,3] and there are still some comments and questions I have. 1. The part after the # is actually not sent to the server. Are there any solutions for this? It is not really Linked Data friendly. Compare http://linkedgeodata.org/triplify/near/51.03,13.73/1000/class/Amenity (Currently not working, but it gives all points within a 1000m radius) The client would be required to calculate the subset of triples from the resource that are addressed. 2. [1] is quite basic and they are basically using position and lines. I made a qualitative comparison of different fragment id approaches for text in [4], slide 7. I was wondering if anybody has researched such properties of URI fragments.
Currently, I am benchmarking the stability of these URIs using Wikipedia changes. Has such work been done before? 3. @Alex: In my opinion, your proposed fragment ontology can only be used to provide documentation for different fragments. I would rather propose to just use one triple: http://www.w3.org/DesignIssues/LinkedData.html#offset__14406-14418 a http://nlp2rdf.lod2.eu/schema/string/OffsetBasedString The ontology I made for Strings might be generalized for formats other than text based [5]. One triple is much shorter. As you can see, I also tried to encode the type of fragment right into the fragment offset, although a notation like type=offset might be better. 4. @Michael: is there some standardisation regarding URIs for text going on? I heard there would be a Language Technology W3C group. The approach by Wilde and Dürst [1] seems to lack stability. Do you think we could do such standardisation for document fragments and text fragments within the Media Fragments Group [3]? I really thought the liveUrl project was quite good, but it seems dead [6]. In LOD2 [7] and NIF [8] we will need some fragment identifiers to standardize NLP tools for the LOD2 stack. It would be great to reuse stuff instead of starting from scratch. I had to extend [1], for example, because it did not produce stable URIs and also it did not contain the type of algorithm used to produce the URI. All the best, Sebastian [1] http://tools.ietf.org/html/rfc5147 [2] http://tools.ietf.org/html/draft-hausenblas-csv-fragment [3] http://www.w3.org/TR/media-frags/ [4] http://www.slideshare.net/kurzum/nif-nlp-interchange-format [5] http://nlp2rdf.lod2.eu/schema/string/ [6] http://liveurls.mozdev.org/index.html [7] http://lod2.eu [8] http://aksw.org/Projects/NIF On 04.08.2011 22:37, Michael Hausenblas wrote: Alex, Has something already done this? Is it even (mostly?) sane? Sane yes, IMO.
Done, sort of, see: + URI Fragment Identifiers for the text/plain media type [1] + URI Fragment Identifiers for the text/csv media type [2] Cheers, Michael [1] http://tools.ietf.org/html/rfc5147 [2] http://tools.ietf.org/html/draft-hausenblas-csv-fragment -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 4 Aug 2011, at 14:22, Alexander Dutton wrote: Hi all, Say I have an XML document, http://example.org/something.xml, and I want to talk about some part of it in RDF. As this is XML, being able to point into it using XPath sounds ideal, leading to something like: #fragment a fragment:Fragment ; fragment:within http://example.org/something.xml ; fragment:locator /some/path[1]^^fragment:xpath . (For now we can ignore whether we wanted a nodeset or a single node, and how to handle XML namespaces.)
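Since the thread keeps returning to RFC 5147, a concrete illustration may help: a minimal parser for its char=/line= fragment forms. This is only a sketch under my own naming; the RFC's integrity checks (";length=...", ";md5=...") are recognized but deliberately ignored, and a single position is returned as a zero-length range.

```python
# Minimal sketch of an RFC 5147 text/plain fragment parser.
# Handles the position/range forms "char=10", "char=10,20", "line=5,9".
import re

_FRAG = re.compile(r"^(char|line)=(\d+)(?:,(\d+))?(?:;.*)?$")

def parse_text_fragment(fragment):
    """Return (scheme, start, end) for an RFC 5147 fragment string.

    For a single position ("char=10") end equals start. Raises
    ValueError for anything that does not match the grammar.
    """
    m = _FRAG.match(fragment)
    if m is None:
        raise ValueError("not an RFC 5147 text fragment: %r" % fragment)
    scheme, start, end = m.group(1), int(m.group(2)), m.group(3)
    end = int(end) if end is not None else start
    if end < start:
        raise ValueError("range end precedes start")
    return scheme, start, end

if __name__ == "__main__":
    print(parse_text_fragment("char=14406,14418"))
```

Note that, as point 1 of Sebastian's mail says, this parsing necessarily happens on the client: the fragment is never sent to the server.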
Re: Document fragment vocabulary
Alex, Has something already done this? Is it even (mostly?) sane? Sane yes, IMO. Done, sort of, see: + URI Fragment Identifiers for the text/plain media type [1] + URI Fragment Identifiers for the text/csv media type [2] Cheers, Michael [1] http://tools.ietf.org/html/rfc5147 [2] http://tools.ietf.org/html/draft-hausenblas-csv-fragment -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 4 Aug 2011, at 14:22, Alexander Dutton wrote: Hi all, Say I have an XML document, http://example.org/something.xml, and I want to talk about some part of it in RDF. As this is XML, being able to point into it using XPath sounds ideal, leading to something like: #fragment a fragment:Fragment ; fragment:within http://example.org/something.xml ; fragment:locator /some/path[1]^^fragment:xpath . (For now we can ignore whether we wanted a nodeset or a single node, and how to handle XML namespaces.) More generally, we might want other ways of locating fragments (probably with a datatype for each): * character offsets / ranges * byte offsets / ranges * line numbers / ranges * some sub-rectangle of an image * XML node IDs * page ranges of a paginated document Some of these will be IMT-specific and may need some more thinking about, but the idea is there. Has something already done this? Is it even (mostly?) sane? Yours, Alex NB. Our actual use-case is having pointers into an NLM XML file (embodying a journal article) so we can hook up our in-text reference pointer¹ URIs to the original XML elements (xref/s) they were generated from. This will allow us to work out the context of each citation for use in further analysis of the relationship between the citing and cited articles. 
¹ See http://opencitations.wordpress.com/2011/07/01/nomenclature-for-citations-and-references/ for an explanation of the terminology. -- Alexander Dutton Developer, data.ox.ac.uk, InfoDev, Oxford University Computing Services Open Citations Project, Department of Zoology, University of Oxford
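Alex's proposed triples can be generated mechanically. Here is a sketch; note that the fragment: namespace URI and the helper name are hypothetical (the email proposes the vocabulary but assigns it no namespace), and the serializer is deliberately naive.

```python
# Sketch: emitting Alex's proposed fragment description as Turtle text.
FRAGMENT_NS = "http://example.org/ns/fragment#"  # assumed, not a real vocabulary

def describe_fragment(doc_uri, xpath, frag_id="#fragment"):
    """Build the three-triple description from the email as a Turtle string."""
    return "\n".join([
        "@prefix fragment: <%s> ." % FRAGMENT_NS,
        "",
        "<%s> a fragment:Fragment ;" % frag_id,
        "    fragment:within <%s> ;" % doc_uri,
        '    fragment:locator "%s"^^fragment:xpath .' % xpath,
    ])

if __name__ == "__main__":
    print(describe_fragment("http://example.org/something.xml", "/some/path[1]"))
```

For other locator kinds (byte ranges, page ranges, image regions), the same shape would simply swap the datatype on fragment:locator.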
Re: Browser Extension for setting HTTP headers
Does anyone know a browser extension that will allow one to set the 'Accept:' HTTP header and follow redirects (a la curl -L), but actually show what it's done (a la curl -i)? Hopefully one that works in both Firefox and Chrome (a la Poster, but without this lack). Why a browser extension? :) I typically use http://redbot.org/ or http://hurl.it/ with a slight preference for the former ... Cheers, Michael -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 31 Jul 2011, at 10:34, Barry Norton wrote: Does anyone know a browser extension that will allow one to set the 'Accept:' HTTP header and follow redirects (a la curl -L), but actually show what it's done (a la curl -i)? Hopefully one that works in both Firefox and Chrome (a la Poster, but without this lack). Barry
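For completeness, what Barry describes can also be scripted with Python's standard library rather than a browser extension: set the Accept header, let urllib follow redirects, and print the final URL, status, and response headers, roughly curl -L -i. The function names here are my own.

```python
# Sketch: Accept-header request with redirect following, stdlib only.
import urllib.request

def rdf_request(url, accept="application/rdf+xml"):
    """Build a GET request carrying the given Accept header."""
    return urllib.request.Request(url, headers={"Accept": accept})

def fetch(url):
    """Network access -- run manually; urlopen follows redirects by default."""
    with urllib.request.urlopen(rdf_request(url)) as resp:
        print(resp.status, resp.geturl())  # final URL after any redirects
        print(resp.headers)

if __name__ == "__main__":
    fetch("http://dbpedia.org/resource/Galway")
```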
Re: Dataset URIs and metadata.
Frans, I had a quick look. But I could not find it. I had a closer look now and I see the URI probably is http://dbpedia.org/void/Dataset. I have tried it. Redirection to either HTML or RDF seems to be in place. An HTML request leads to http://dbpedia.org/void/page/Dataset, which shows a table of VoID properties. The RDF redirection (to http://dbpedia.org/void/data/Dataset.rdf) does not seem to work; I get an error. Sorry, you lost me here. Did you curl it or how did you get your findings? Apart from the error, somehow this is not what I expected. I assumed that the dataset URI is the URI of a dataset. It is the key to all other data. If you want something from a dataset, you only need to know this URI. So why is the dataset URI hard to find? Why isn't it used when references are made to DBpedia? Why isn't it the same as the base URI (http://dbpedia.org)? http://lod-cloud.net/dbpedia a void:Dataset; foaf:homepage http://dbpedia.org/; Says everything, or? Probably VoID metadata/dataset URIs will be easier to discover once the /.well-known/void trick (described in paragraph 7.2 of the W3C VoID document) is widely adopted. Agreed. But it's not a 'trick'. It's called a standard. Cheers, Michael -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 22 Jul 2011, at 09:42, Frans Knibbe wrote: On 2011-07-21 16:27, Michael Hausenblas wrote: But is this really common practice nowadays? Take DBpedia for example. What is the URI of the DBpedia dataset? Is it http://dbpedia.org? That does not seem to resolve to a set of metadata. Did you have a look at the URI I gave you? I mean http://lod-cloud.net/void.ttl I had a quick look. But I could not find it. I had a closer look now and I see the URI probably is http://dbpedia.org/void/Dataset. I have tried it. 
Redirection to either HTML or RDF seems to be in place. An HTML request leads to http://dbpedia.org/void/page/Dataset, which shows a table of VoID properties. The RDF redirection (to http://dbpedia.org/void/data/Dataset.rdf) does not seem to work; I get an error. Apart from the error, somehow this is not what I expected. I assumed that the dataset URI is the URI of a dataset. It is the key to all other data. If you want something from a dataset, you only need to know this URI. So why is the dataset URI hard to find? Why isn't it used when references are made to DBpedia? Why isn't it the same as the base URI (http://dbpedia.org)? Probably VoID metadata/dataset URIs will be easier to discover once the /.well-known/void trick (described in paragraph 7.2 of the W3C VoID document) is widely adopted. BTW, some 30% [1] of the LOD cloud datasets are using VoID ... Is there a general way of obtaining dataset URIs? Not to my knowledge. We're working on it in LATC [2] - Keith? Cheers, Michael [1] http://www4.wiwiss.fu-berlin.de/lodcloud/state/#data-set-level-metadata [2] http://latc-project.eu/ -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 21 Jul 2011, at 15:19, Frans Knibbe wrote: Thanks for the replies. It seems that there is agreement that a dataset should have a URI and that dereferencing that URI should return metadata about the dataset. That is good to know. But is this really common practice nowadays? Take DBpedia for example. What is the URI of the DBpedia dataset? Is it http://dbpedia.org? That does not seem to resolve to a set of metadata. Is there a general way of obtaining dataset URIs? I can imagine an RDF dataset comprising all known dataset URIs. And of course that dataset will have a URI itself. Does such a dataset exist at the moment? 
Regards, Frans On 2011-07-21 12:35, Frans Knibbe wrote: Hello, I have just placed a Linked Data dataset online and now I am struggling with finding the best way to publish the metadata of the dataset. I wonder if there are best practices for referencing a dataset and its metadata, and for linking the two. I did find out that using the Vocabulary of Interlinked Data (VoID) is a good way to publish the metadata of a dataset. But I still need some guidance. I have come up with three questions: 1) Is it common practice/recommendable to regard a dataset as a resource? If it is, then all datasets should have a URI, right? 2) If having a dataset URI is a good thing, what should be behind the URI? Should dereferencing the URI lead to the dataset metadata (a VoID file for example)? 3) If dereferencing a dataset URI leads to the dataset metadata, should there be separate HTML and RDF versions of the metadata? Or is it better to have an HTML page with embedded (RDFa) RDF data?
Re: Dataset URIs and metadata.
So, does this mean that the URI of the dataset (DBpedia) is http://lod-cloud.net/dbpedia ? It is one URI identifying the DBpedia dataset, yes. It is likely not the authoritative one as it is not in the dbpedia.org namespace, so there may be others ... Cheers, Michael -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 22 Jul 2011, at 11:16, Frans Knibbe wrote: Hello Michael, On 2011-07-22 10:59, Michael Hausenblas wrote: Frans, I had a quick look. But I could not find it. I had a closer look now and I see the URI probably is http://dbpedia.org/void/Dataset. I have tried it. Redirection to either HTML or RDF seems to be in place. An HTML request leads to http://dbpedia.org/void/page/Dataset, which shows a table of VoID properties. The RDF redirection (to http://dbpedia.org/void/data/Dataset.rdf) does not seem to work; I get an error. Sorry, you lost me here. Did you curl it or how did you get your findings? Yes, I tried curl. I have just tried it again: curl -H "Accept: application/rdf+xml" http://dbpedia.org/void/data/Dataset.rdf The response seems to be different from yesterday. Yesterday I immediately got an error message. I am afraid I don't have the exact message any more. The response I get now is different, a transaction time out, which I get after waiting a bit. Probably this is just a temporary situation and besides the point too. Apart from the error, somehow this is not what I expected. I assumed that the dataset URI is the URI of a dataset. It is the key to all other data. If you want something from a dataset, you only need to know this URI. So why is the dataset URI hard to find? Why isn't it used when references are made to DBpedia? Why isn't it the same as the base URI (http://dbpedia.org)? 
http://lod-cloud.net/dbpedia a void:Dataset; foaf:homepage http://dbpedia.org/; Says everything, or? Well, let me see... First of all, please know that I am new to Linked Data and RDF, so there is a chance I don't fully understand everything. I think what it says is that there is a thing identified by the URI http://lod-cloud.net/dbpedia , that that thing is a dataset and that its home page is http://dbpedia.org/ . So, does this mean that the URI of the dataset (DBpedia) is http://lod-cloud.net/dbpedia ? Sorry if I seem to be stupid, it is not my intention. Probably VoID metadata/dataset URIs will be easier to discover once the /.well-known/void trick (described in paragraph 7.2 of the W3C VoID document) is widely adopted. Agreed. But it's not a 'trick'. It's called a standard. Sorry about that! Regards, Frans Cheers, Michael -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 22 Jul 2011, at 09:42, Frans Knibbe wrote: On 2011-07-21 16:27, Michael Hausenblas wrote: But is this really common practice nowadays? Take DBpedia for example. What is the URI of the DBpedia dataset? Is it http://dbpedia.org? That does not seem to resolve to a set of metadata. Did you have a look at the URI I gave you? I mean http://lod-cloud.net/void.ttl I had a quick look. But I could not find it. I had a closer look now and I see the URI probably is http://dbpedia.org/void/Dataset. I have tried it. Redirection to either HTML or RDF seems to be in place. An HTML request leads to http://dbpedia.org/void/page/Dataset, which shows a table of VoID properties. The RDF redirection (to http://dbpedia.org/void/data/Dataset.rdf) does not seem to work; I get an error. Apart from the error, somehow this is not what I expected. I assumed that the dataset URI is the URI of a dataset. 
It is the key to all other data. If you want something from a dataset, you only need to know this URI. So why is the dataset URI hard to find? Why isn't it used when references are made to DBpedia? Why isn't it the same as the base URI (http://dbpedia.org)? Probably VoID metadata/dataset URIs will be easier to discover once the /.well-known/void trick (described in paragraph 7.2 of the W3C VoID document) is widely adopted. BTW, some 30% [1] of the LOD cloud datasets are using VoID ... Is there a general way of obtaining dataset URIs? Not to my knowledge. We're working on it in LATC [2] - Keith? Cheers, Michael [1] http://www4.wiwiss.fu-berlin.de/lodcloud/state/#data-set-level-metadata [2] http://latc-project.eu/ -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland
Re: Dataset URIs and metadata.
Probably VoID metadata/dataset URIs will be easier to discover once the /.well-known/void trick (described in paragraph 7.2 of the W3C VoID document) is widely adopted. Agreed. But it's not a 'trick'. It's called a standard. Is it? Yes, I think that RFC 5785 [1] can be considered a standard. Unless you want to suggest that RFCs are sorta not real standards :P Cheers, Michael [1] http://tools.ietf.org/html/rfc5785 -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 22 Jul 2011, at 15:39, Dave Reynolds wrote: On Fri, 2011-07-22 at 09:59 +0100, Michael Hausenblas wrote: Frans, [snip] Probably VoID metadata/dataset URIs will be easier to discover once the /.well-known/void trick (described in paragraph 7.2 of the W3C VoID document) is widely adopted. Agreed. But it's not a 'trick'. It's called a standard. Is it? There was me thinking it was an Interest Group Note. Is there a newer version than: http://www.w3.org/TR/2011/NOTE-void-20110303/ ? Dave
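The /.well-known/void mechanism the thread debates is mechanical enough to sketch: given any resource URI, a client looks for the VoID description at /.well-known/void on the same authority, in the RFC 5785 style the VoID Note builds on. The helper name below is my own.

```python
# Sketch: deriving the /.well-known/void URI for any resource URI.
from urllib.parse import urlsplit, urlunsplit

def well_known_void(resource_uri):
    """Map a resource URI to its authority's /.well-known/void URI."""
    parts = urlsplit(resource_uri)
    return urlunsplit((parts.scheme, parts.netloc, "/.well-known/void", "", ""))

if __name__ == "__main__":
    print(well_known_void("http://dbpedia.org/resource/Berlin"))
```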
Re: Dataset URIs and metadata.
It was the claim that /.well-known/void is a standard that I was surprised by. It's the sort of thing that could easily be on a Rec track somewhere, I just wasn't aware of it. Sorry if I somehow gave the impression that VoID is a W3C Recommendation. I would consider it as a de-facto standard in the Linked Data community. Formally, though, it is a W3C Note, yes. Cheers, Michael -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 22 Jul 2011, at 16:12, Dave Reynolds wrote: On Fri, 2011-07-22 at 15:42 +0100, Michael Hausenblas wrote: Probably VoID metadata/dataset URIs will be easier to discover once the /.well-known/void trick (described in paragraph 7.2 of the W3C VoID document) is widely adopted. Agreed. But it's not a 'trick'. It's called a standard. Is it? Yes, I think that RFC 5785 [1] can be considered a standard. Unless you want to suggest that RFCs are sorta not real standards :P :) I'm aware that /.well-known is standardized in RFC 5785. It was the claim that /.well-known/void is a standard that I was surprised by. It's the sort of thing that could easily be on a Rec track somewhere, I just wasn't aware of it. FWIW I'm perfectly happy with VoID's current status as an Interest Group note. Cheers, Dave On 22 Jul 2011, at 15:39, Dave Reynolds wrote: On Fri, 2011-07-22 at 09:59 +0100, Michael Hausenblas wrote: Frans, [snip] Probably VoID metadata/dataset URIs will be easier to discover once the /.well-known/void trick (described in paragraph 7.2 of the W3C VoID document) is widely adopted. Agreed. But it's not a 'trick'. It's called a standard. Is it? There was me thinking it was an Interest Group Note. Is there a newer version than: http://www.w3.org/TR/2011/NOTE-void-20110303/ ? Dave
Re: Dataset URIs and metadata.
Patrick, So, perhaps one day it will be a standard, but not today. Good catch! Did you join the Pedantic Web [1] group, yet? We need more people like you. Hope you are nearing a great weekend! Yes, indeed, I plan to go to DERI FAWM now and allow my brain to be off-line till 15:00 UTC tomorrow, in case anyone cares ... Cheers, Michael [1] http://pedantic-web.org/ -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 22 Jul 2011, at 16:11, Patrick Durusau wrote: Michael, On 7/22/2011 10:42 AM, Michael Hausenblas wrote: Probably VoID metadata/dataset URIs will be easier to discover once the /.well-known/void trick (described in paragraph 7.2 of the W3C VoID document) is widely adopted. Agreed. But it's not a 'trick'. It's called a standard. Is it? Yes, I think that RFC 5785 [1] can be considered a standard. Unless you want to suggest that RFCs are sorta not real standards :P RFCs can be standards, but there is a path by which RFCs become standards. As of today, the RFC 5785 header reads PROPOSED STANDARD. So, perhaps one day it will be a standard, but not today. Hope you are nearing a great weekend! Patrick Cheers, Michael [1] http://tools.ietf.org/html/rfc5785 -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 22 Jul 2011, at 15:39, Dave Reynolds wrote: On Fri, 2011-07-22 at 09:59 +0100, Michael Hausenblas wrote: Frans, [snip] Probably VoID metadata/dataset URIs will be easier to discover once the /.well-known/void trick (described in paragraph 7.2 of the W3C VoID document) is widely adopted. Agreed. But it's not a 'trick'. It's called a standard. 
Is it? There was me thinking it was an Interest Group Note. Is there a newer version than: http://www.w3.org/TR/2011/NOTE-void-20110303/ ? Dave -- Patrick Durusau patr...@durusau.net Chair, V1 - US TAG to JTC 1/SC 34 Convener, JTC 1/SC 34/WG 3 (Topic Maps) Editor, OpenDocument Format TC (OASIS), Project Editor ISO/IEC 26300 Co-Editor, ISO/IEC 13250-1, 13250-5 (Topic Maps) Another Word For It (blog): http://tm.durusau.net Homepage: http://www.durusau.net Twitter: patrickDurusau
Re: Dataset URIs and metadata.
Frans, Please refer to http://www.w3.org/TR/void/ as this is the official Note ... 1) Is it common practice/recommendable to regard a dataset as a resource? If it is, then all datasets should have a URI, right? Yes, all datasets (and sub-sets) should have a URI. 2) If having a dataset URI is a good thing, what should be behind the URI? Should dereferencing the URI lead to the dataset metadata (a VoID file for example)? As described in http://www.w3.org/TR/void/#discovery 3) If dereferencing a dataset URI leads to the dataset metadata, should there be separate HTML and RDF versions of the metadata? Or is it better to have an HTML page with embedded (RDFa) RDF data? Up to you. If you want to be Linked Data compliant (remember the 3rd principle ;) then you'll serve *some* structured data from the URI. RDFa is just as fine as anything else there, really. You might be interested to learn about the 'bigger' picture via http://linked-data-life-cycles.info Cheers, Michael -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 21 Jul 2011, at 11:35, Frans Knibbe wrote: Hello, I have just placed a Linked Data dataset online and now I am struggling with finding the best way to publish the metadata of the dataset. I wonder if there are best practices for referencing a dataset and its metadata, and for linking the two. I did find out that using the Vocabulary of Interlinked Data (VoID) is a good way to publish the metadata of a dataset. But I still need some guidance. I have come up with three questions: 1) Is it common practice/recommendable to regard a dataset as a resource? If it is, then all datasets should have a URI, right? 2) If having a dataset URI is a good thing, what should be behind the URI? 
Should dereferencing the URI lead to the dataset metadata (a VoID file for example)? 3) If dereferencing a dataset URI leads to the dataset metadata, should there be separate HTML and RDF versions of the metadata? Or is it better to have an HTML page with embedded (RDFa) RDF data? Thanks in advance for your help, Frans
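As an illustration of Michael's advice, here is the shape of a minimal VoID description Frans could serve from his dataset URI. All URIs below are placeholders, and a real description would carry more properties (dcterms:title, void:sparqlEndpoint, ...); the string-based serializer is only a sketch.

```python
# Sketch: a minimal VoID dataset description, emitted as Turtle text.
def minimal_void(dataset_uri, homepage, data_dump):
    """Describe one void:Dataset with a homepage and a data dump link."""
    return "\n".join([
        "@prefix void: <http://rdfs.org/ns/void#> .",
        "@prefix foaf: <http://xmlns.com/foaf/0.1/> .",
        "",
        "<%s> a void:Dataset ;" % dataset_uri,
        "    foaf:homepage <%s> ;" % homepage,
        "    void:dataDump <%s> ." % data_dump,
    ])

if __name__ == "__main__":
    print(minimal_void("http://example.org/dataset",
                       "http://example.org/",
                       "http://example.org/dump.nt"))
```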
Re: Dataset URIs and metadata.
Frans, Forgot two things, sorry: http://lod-cloud.net/void.ttl might provide you with some URIs for interlinking descriptions and we have a separate VoID discussion group [1] if you want to go into greater details ;) Cheers, Michael [1] https://groups.google.com/forum/?pli=1#!forum/void-discussion -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 21 Jul 2011, at 11:43, Michael Hausenblas wrote: Frans, Please refer to http://www.w3.org/TR/void/ as this is the official Note ... 1) Is it common practice/recommendable to regard a dataset as a resource? If it is, then all datasets should have a URI, right? Yes, all datasets (and sub-sets) should have a URI. 2) If having a dataset URI is a good thing, what should be behind the URI? Should dereferencing the URI lead to the dataset metadata (a VoID file for example)? As described in http://www.w3.org/TR/void/#discovery 3) If dereferencing a dataset URI leads to the dataset metadata, should there be separate HTML and RDF versions of the metadata? Or is it better to have an HTML page with embedded (RDFa) RDF data? Up to you. If you want to be Linked Data compliant (remember the 3rd principle ;) then you'll serve *some* structured data from the URI. RDFa is just as fine as anything else there, really. You might be interested to learn about the 'bigger' picture via http://linked-data-life-cycles.info Cheers, Michael -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. 
+353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 21 Jul 2011, at 11:35, Frans Knibbe wrote: Hello, I have just placed a Linked Data dataset online and now I am struggling with finding the best way to publish the metadata of the dataset. I wonder if there are best practices for referencing a dataset and its metadata, and for linking the two. I did find out that using the Vocabulary of Interlinked Data (VoID) is a good way to publish the metadata of a dataset. But I still need some guidance. I have come up with three questions: 1) Is it common practice/recommendable to regard a dataset as a resource? If it is, then all datasets should have a URI, right? 2) If having a dataset URI is a good thing, what should be behind the URI? Should dereferencing the URI lead to the dataset metadata (a VoID file for example)? 3) If dereferencing a dataset URI leads to the dataset metadata, should there be separate HTML and RDF versions of the metadata? Or is it better to have an HTML page with embedded (RDFa) RDF data? Thanks in advance for your help, Frans
Re: Dataset URIs and metadata.
But is this really common practice nowadays? Take DBpedia for example. What is the URI of the DBpedia dataset? Is it http://dbpedia.org? That does not seem to resolve to a set of metadata. Did you have a look at the URI I gave you? I mean http://lod-cloud.net/void.ttl BTW, some 30% [1] of the LOD cloud datasets are using VoID ... Is there a general way of obtaining dataset URIs? Not to my knowledge. We're working on it in LATC [2] - Keith? Cheers, Michael [1] http://www4.wiwiss.fu-berlin.de/lodcloud/state/#data-set-level-metadata [2] http://latc-project.eu/ -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 21 Jul 2011, at 15:19, Frans Knibbe wrote: Thanks for the replies. It seems that there is agreement that a dataset should have a URI and that dereferencing that URI should return metadata about the dataset. That is good to know. But is this really common practice nowadays? Take DBpedia for example. What is the URI of the DBpedia dataset? Is it http://dbpedia.org? That does not seem to resolve to a set of metadata. Is there a general way of obtaining dataset URIs? I can imagine an RDF dataset comprising all known dataset URIs. And of course that dataset will have a URI itself. Does such a dataset exist at the moment? Regards, Frans On 2011-07-21 12:35, Frans Knibbe wrote: Hello, I have just placed a Linked Data dataset online and now I am struggling with finding the best way to publish the metadata of the dataset. I wonder if there are best practices for referencing a dataset and its metadata, and for linking the two. I did find out that using the Vocabulary of Interlinked Data (VoID) is a good way to publish the metadata of a dataset. But I still need some guidance. 
I have come up with three questions: 1) Is it common practice/recommendable to regard a dataset as a resource? If it is, then all datasets should have a URI, right? 2) If having a dataset URI is a good thing, what should be behind the URI? Should dereferencing the URI lead to the dataset metadata (a VoID file for example)? 3) If dereferencing a dataset URI leads to the dataset metadata, should there be separate HTML and RDF versions of the metadata? Or is it better to have an HTML page with embedded (RDFa) RDF data? Thanks in advance for your help, Frans
A proposal for handling bulk data requests
Kingsley, Gio, All, An idea that arose out of a recent discussion with Juergen (in CC): how about providing a sort of 'bulk data request' facility for your SPARQL endpoints [1] [2] (as they are, I gather, the more popular ones on the WoD ;)? It could work as follows: 1. Someone uploads a VoID description [3] of the targeted datasets and provides an email, Twitter, G+ handle or a WebID 2. You could generate the 'customized' dataset internally in a very efficient manner. 3. Once available, the requester is notified by means of the provided back-channel from 1. I believe such a system in place would lower the crawling and bulk-query costs re bandwidth, etc. on your end, and opens up a business opportunity as well (think: WebID - Web Payments). What do you think? Cheers, Michael [1] http://lod.openlinksw.com/sparql [2] http://sparql.sindice.com/ [3] http://www.w3.org/TR/void/ -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html
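The three steps could be modelled as a tiny request lifecycle. Everything below (class and field names, the status values) is hypothetical; the email only outlines the flow, not an implementation.

```python
# Sketch of the proposed bulk-request lifecycle as a small state holder.
from dataclasses import dataclass
from typing import Optional

@dataclass
class BulkRequest:
    void_url: str                   # step 1: VoID description of the wanted subset
    callback: str                   # step 1: email / Twitter / G+ handle or WebID
    status: str = "submitted"
    result_url: Optional[str] = None

    def fulfil(self, result_url):
        """Steps 2-3: the customized dump is ready; notify the requester."""
        self.result_url = result_url
        self.status = "ready"
        return "notify %s: dump available at %s" % (self.callback, result_url)
```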
Re: HTTP status for timed-out SPARQL query
Should we be returning 500 instead? Yes. To be more precise, I'd think that 503 [1] is appropriate. A 4xx is not appropriate IMHO, because [2]: [[ The 4xx class of status code is intended for cases in which the client seems to have erred. ]] Cheers, Michael [1] http://tools.ietf.org/html/rfc2616#section-10.5.4 [2] http://tools.ietf.org/html/rfc2616#section-10.4 -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 28 Jun 2011, at 07:21, Bill Roberts wrote: Looking for some advice from the community. If we time out a slow-running SPARQL query, what is the most appropriate HTTP status code to return to the client? We had been trying 408, but the problem with that is that some clients (notably Firefox) take it on themselves to keep retrying the request, which isn't really what we want. Should we be returning 500 instead? Thanks Bill
Re: HTTP status for timed-out SPARQL query
Seriously, I think that 413 Request Entity Too Large would be a good solution: I disagree. Just checked back w/ colleagues on the #rest IRC channel, they also agree with 503. Cheers, Michael -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 28 Jun 2011, at 11:10, Martin Hepp wrote: Looking for some advice from the community. If we time out a slow-running SPARQL query, what is the most appropriate HTTP status code to return to the client? We had been trying 408, but the problem with that is that some clients (notably Firefox) take it on themselves to keep retrying the request, which isn't really what we want. Should we be returning 500 instead? What about 402 Payment Required? ;-) Seriously, I think that 413 Request Entity Too Large would be a good solution: The server is refusing to process a request because the request entity is larger than the server is willing or able to process. The server MAY close the connection to prevent the client from continuing the request. If the condition is temporary, the server SHOULD include a Retry-After header field to indicate that it is temporary and after what time the client MAY try again. 500 Internal Server Error was also my first guess, but this may not stop clients from trying again. Martin On Jun 28, 2011, at 8:21 AM, Bill Roberts wrote: Looking for some advice from the community. If we time out a slow-running SPARQL query, what is the most appropriate HTTP status code to return to the client? We had been trying 408, but the problem with that is that some clients (notably Firefox) take it on themselves to keep retrying the request, which isn't really what we want. Should we be returning 500 instead? Thanks Bill
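The position the thread converges on (503 with a Retry-After header for a temporary, server-side condition, rather than 408, which Firefox retries, or a bare 500) can be captured in a few lines. This is a sketch with names of my own; it would be wired into whatever server framework the endpoint actually uses.

```python
# Sketch: the HTTP response a SPARQL endpoint could send on query timeout.
def timeout_response(retry_after_seconds=60):
    """Return (status, headers, body) for a timed-out query."""
    status = 503  # Service Unavailable: server-side and temporary
    headers = {
        "Retry-After": str(retry_after_seconds),  # hint when to try again
        "Content-Type": "text/plain",
    }
    body = "SPARQL query timed out; please retry later.\n"
    return status, headers, body
```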
Re: Java Language Ontology and .java to RDF parser?
Aldo, Does anyone know of a Java language ontology? (with JavaClass, JavaMethod, JavaField, etc. classes, for example). And a parser for such an ontology? (that takes .java sources as input). Yes, see [1]. Ping Aftab (one of my PhD students, in CC) if you need more details ... Cheers, Michael [1] Aftab Iqbal, Oana Ureche, Michael Hausenblas, Giovanni Tummarello. LD2SD: Linked Data Driven Software Development, 21st International Conference on Software Engineering and Knowledge Engineering, 2009. http://sw-app.org/pub/seke09-ld2sd.pdf -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 26 Jun 2011, at 08:28, Aldo Bucchi wrote: Hi, Does anyone know of a Java language ontology? (with JavaClass, JavaMethod, JavaField, etc. classes, for example). And a parser for such an ontology? (that takes .java sources as input). I need to analyze some Java codebases and it would be really useful to create an in-memory graph of the language constructs, particularly in RDF, so I could use some of the amazing tools that we all know and love ;) Thanks! A -- Aldo Bucchi @aldonline skype:aldo.bucchi http://facebook.com/aldo.bucchi ( -- add me * ) http://aldobucchi.com/ * I prefer Facebook as a networking and communications tool.
Re: URI Owners
Do URIs have owners? I don't think owner is the correct term. A URI has an agent (person, group) who controls what it resolves to, but I'm not sure you can own an identifier. http://www.w3.org/TR/webarch/#uri-assignment Cheers, Michael -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 15 Jun 2011, at 11:58, Christopher Gutteridge wrote: Picking up on a comment by Richard, but forking the thread: Would you agree that Facebook are the owners of this URI? Do URIs have owners? I don't think owner is the correct term. A URI has an agent (person, group) who controls what it resolves to, but I'm not sure you can own an identifier. -- Christopher Gutteridge -- http://id.ecs.soton.ac.uk/person/1248 / Lead Developer, EPrints Project, http://eprints.org/ / Web Projects Manager, ECS, University of Southampton, http://www.ecs.soton.ac.uk/ / Webmaster, Web Science Trust, http://www.webscience.org/
Re: Schema.org in RDF ...
Alan, Again, this strikes me as speaking from very little experience. I spend a good deal of my time collaboratively developing ontologies and working with users of them. I've yet to encounter a person who didn't understand the difference between a book about Obama and Obama. Welcome to the real world. Cheers, Michael -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 12 Jun 2011, at 11:12, Alan Ruttenberg wrote: On Sunday, June 12, 2011, Lin Clark lin.w.cl...@gmail.com wrote: David, as you know, it is trivial to distinguish in representation the difference between an information object and a person. I don't understand why you keep repeating this misinformation. -Alan It is trivial to distinguish between an information resource and the resource it talks about. There is no if. In the below you are talking about matters other than being able to make the distinction. if you are 1) developing a custom system under your control for your own needs, which is not extensible and does not have to integrate code published by developers with a different knowledge base than you. Please give me some evidence for this. My experience (not insignificant) is otherwise. -and- 2) do not have end users who you have to educate in the distinction between an info resource and another web resource so that they can effectively add content to your system. Again, this strikes me as speaking from very little experience. I spend a good deal of my time collaboratively developing ontologies and working with users of them. I've yet to encounter a person who didn't understand the difference between a book about Obama and Obama.
However, it is not trivial to add this distinction when you are working in an extensible system which you do not control. It depends on the manner in which the system is made extensible. Architecture and good design matter. However, it is this attitude that has led, in part, to the promulgation of schema.org as a closed architecture. or when you do not have the resources to invest in reeducation camps to change the way end users and other developers think. As an educator, in part, I do not consider educating people to require investing in reeducation camps. In my opinion, if you want to build a system by which data can be effectively aggregated and put to novel use by machines (this is what I thought we were doing), then I think you will fail if you think that will come by continuing to set no standards for how these systems communicate meaning and what kind of knowledge someone needs to have to work with them correctly. I cite the experience of the last 50 years of computer technology as evidence. -Alan I invite anyone who disagrees and who believes this is trivial to actually try effectively communicating the distinction made by httpRange-14 to an outside technology community and to attempt the social change necessary to make it work consistently in practice. Best, Lin
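[Editor's note] Whatever one thinks of its usability, the distinction under discussion is mechanically simple. Under the httpRange-14 convention, the URI naming the person and the URI naming a document about the person differ, and dereferencing the former answers 303 See Other pointing at the latter. A toy sketch, with entirely made-up URIs and a hypothetical /id/ vs. /doc/ layout:

```python
def dereference(uri):
    """Return (status, location): what a Linked Data server could answer.
    A 'thing' URI 303-redirects to an information resource about it;
    a document URI is served directly with 200."""
    if uri.startswith("http://example.org/id/"):
        return 303, uri.replace("/id/", "/doc/", 1)
    return 200, uri

# The person Obama and a document about Obama get distinct URIs:
obama = "http://example.org/id/obama"            # the person
page_about_obama = "http://example.org/doc/obama"  # an information resource
```

Triples about the person attach to the /id/ URI, triples about the web page (author, format, licence) attach to the /doc/ URI, which is exactly the book-about-Obama vs. Obama distinction in the exchange above.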
Re: ANN: alpha version of Schema.org terms-to-RDF translator 'omnidator' available
Great job! Thanks. Not a real competitor to URIBurner, though ;) Little note, please tweak your Microdata tools description of the Virtuoso Sponger since URIBurner.com [1] delivers the same functionality of omnidator across the formats you mention + OData etc.. Alternatively, you can add URIBurner to the Microdata tools list [2]. Done, see http://schema.rdfs.org/tools.html Cheers, Michael -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 11 Jun 2011, at 23:35, Kingsley Idehen wrote: On 6/11/11 6:08 PM, Michael Hausenblas wrote: All, The alpha version of omnidator [1] (omnipotent data translator), an online tool and (CORS-enabled) API to translate formats that use Schema.org terms into RDF is now available. Currently only microdata and CSV as input formats are supported, but others (such as OData) are in the queue. Let us know what other formats you want omnidator to support, or, if you fancy chiming in, clone the repo [2] and send us a pull request. Great job! Little note, please tweak your Microdata tools description of the Virtuoso Sponger since URIBurner.com [1] delivers the same functionality of omnidator across the formats you mention + OData etc.. Alternatively, you can add URIBurner to the Microdata tools list [2]. Links: 1. http://uriburner.com -- note the URL input field (this has always been a translation service i.e., Virtuoso Sponger behind a domain) and the fact that it returns a URL for a Linked Data resource 2. http://schema.rdfs.org/tools.html -- Microdata tools page . Kingsley Cheers, Michael [1] http://omnidator.appspot.com/ [2] https://github.com/mhausenblas/omnidator -- Dr. 
Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html -- Regards, Kingsley Idehen President CEO OpenLink Software Web: http://www.openlinksw.com Weblog: http://www.openlinksw.com/blog/~kidehen Twitter/Identi.ca: kidehen
ANN: alpha version of Schema.org terms-to-RDF translator 'omnidator' available
All, The alpha version of omnidator [1] (omnipotent data translator), an online tool and (CORS-enabled) API to translate formats that use Schema.org terms into RDF is now available. Currently only microdata and CSV as input formats are supported, but others (such as OData) are in the queue. Let us know what other formats you want omnidator to support, or, if you fancy chiming in, clone the repo [2] and send us a pull request. Cheers, Michael [1] http://omnidator.appspot.com/ [2] https://github.com/mhausenblas/omnidator -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html
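[Editor's note] As an illustration of what the CSV case of such a translation involves, here is a hedged sketch of the general idea, not omnidator's actual code: it assumes a column convention in which the header row carries Schema.org property names and every data row describes one schema:Person.

```python
import csv
import io

def csv_to_turtle(text, base="http://example.org/row/"):
    """Turn CSV whose header row uses Schema.org property names into
    Turtle, one schema:Person per data row. Values are naively quoted;
    real code would escape strings and pick appropriate datatypes."""
    out = ["@prefix schema: <http://schema.org/> ."]
    for i, row in enumerate(csv.DictReader(io.StringIO(text))):
        props = " ;\n  ".join('schema:%s "%s"' % (k, v) for k, v in row.items())
        out.append("<%s%d> a schema:Person ;\n  %s ." % (base, i, props))
    return "\n".join(out)
```

For example, `csv_to_turtle("name,email\nMary,mary@example.org\n")` yields one schema:Person with schema:name and schema:email properties, minted under the (hypothetical) http://example.org/row/ base.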
Re: Schema.org in RDF ...
For how little this matters really - I'd really advise anyone wanting to produce RDFa of schema to live with it and use direct http://schema.org uris as per their example in RDFa. +1 Cheers, Michael -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 9 Jun 2011, at 09:54, Giovanni Tummarello wrote: My 2c: I would seriously advise against using triples with http://schema.rdfs.org . That would be totally and entirely validating their claim that either you impose things or fragmentation will destroy everything and that talking to the community is a waste of time. For how little this matters really - I'd really advise anyone wanting to produce RDFa of schema to live with it and use direct http://schema.org uris as per their example in RDFa. Gio On Tue, Jun 7, 2011 at 9:49 AM, Patrick Logan patrickdlo...@gmail.com wrote: Would it be reasonable to use http://schema.rdfs.org rather than http://schema.org in the URIs? Essentially mirror what one might hope for schema.org to become. Then if it does become that, link the two together? On Tue, Jun 7, 2011 at 1:22 AM, Michael Hausenblas michael.hausenb...@deri.org wrote: Something I don't understand. If I read well all savvy discussions so far, publishers behind http://schema.org URIs are unlikely to ever provide any RDF description, What makes you so sure that the Schema.org URIs won't, one day in the (near?) future, serve RDF or JSON, FWIW, in addition to HTML? ;) Cheers, Michael -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel.
+353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 7 Jun 2011, at 08:44, Bernard Vatant wrote: Hi all Something I don't understand. If I read well all savvy discussions so far, publishers behind http://schema.org URIs are unlikely to ever provide any RDF description, so why are those URIs declared as identifiers of RDFS classes in http://schema.rdfs.org/all.rdf? For all I can see, http://schema.org/Person is the URI of an information resource, not of a class. So I would rather have expected mirroring of the schema.org URIs by schema.rdfs.org URIs, the latter fully dereferenceable proper RDFS classes making explicit the semantics of the former, while keeping the reference to the source in some dcterms:source element. Example, instead of ...

<rdf:Description rdf:about="http://schema.org/Person">
  <rdf:type rdf:resource="http://www.w3.org/2000/01/rdf-schema#Class"/>
  <rdfs:label xml:lang="en">Person</rdfs:label>
  <rdfs:comment xml:lang="en">A person (alive, dead, undead, or fictional).</rdfs:comment>
  <rdfs:subClassOf rdf:resource="http://schema.org/Thing"/>
  <rdfs:isDefinedBy rdf:resource="http://schema.org/Person"/>
</rdf:Description>

where I see a clear abuse of rdfs:isDefinedBy, since if you dereference the said URI, you don't find any explicit RDF definition ... I would rather have the following

<rdf:Description rdf:about="http://schema.rdfs.org/Person">
  <rdf:type rdf:resource="http://www.w3.org/2000/01/rdf-schema#Class"/>
  <rdfs:label xml:lang="en">Person</rdfs:label>
  <rdfs:comment xml:lang="en">A person (alive, dead, undead, or fictional).</rdfs:comment>
  <rdfs:subClassOf rdf:resource="http://schema.rdfs.org/Thing"/>
  <dcterms:source rdf:resource="http://schema.org/Person"/>
</rdf:Description>

To the latter declaration, one could safely add statements like schema.rdfs:Person rdfs:subClassOf foaf:Person etc. Or do I miss the point? Bernard 2011/6/3 Michael Hausenblas michael.hausenb...@deri.org http://schema.rdfs.org ... is now available - we're sorry for the delay ;) Cheers, Michael -- Dr.
Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html -- Bernard Vatant Senior Consultant Vocabulary Data Integration Tel: +33 (0) 971 488 459 Mail: bernard.vat...@mondeca.com Mondeca 3, cité Nollez 75018 Paris France Web:http://www.mondeca.com Blog:http://mondeca.wordpress.com
Schema.RDFS.org updates
All, Richard and I tried our best to capture all input so far [1] - please let us know if we've forgotten anything or anyone (if so, my bad, yeah, I'm a slacker ;) - further input re the mapping is very much appreciated via the tracker [1]! We're currently rolling out the live-sync to Schema.org along with some other improvements - there is a lot to do and if you want to contribute via the repo, you're more than welcome. Also, the tools section [2] has been updated: the new Drupal module, Virtuoso Sponger, TopBraid Composer and Ed Summers' rdflib plugin are now listed. Cheers, Michael [1] https://github.com/mhausenblas/schema-org-rdf/issues [2] http://schema.rdfs.org/tools.html -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html
Re: Schema.org in RDF ...
Something I don't understand. If I read well all savvy discussions so far, publishers behind http://schema.org URIs are unlikely to ever provide any RDF description, What makes you so sure that the Schema.org URIs won't, one day in the (near?) future, serve RDF or JSON, FWIW, in addition to HTML? ;) Cheers, Michael -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 7 Jun 2011, at 08:44, Bernard Vatant wrote: Hi all Something I don't understand. If I read well all savvy discussions so far, publishers behind http://schema.org URIs are unlikely to ever provide any RDF description, so why are those URIs declared as identifiers of RDFS classes in http://schema.rdfs.org/all.rdf? For all I can see, http://schema.org/Person is the URI of an information resource, not of a class. So I would rather have expected mirroring of the schema.org URIs by schema.rdfs.org URIs, the latter fully dereferenceable proper RDFS classes making explicit the semantics of the former, while keeping the reference to the source in some dcterms:source element. Example, instead of ...

<rdf:Description rdf:about="http://schema.org/Person">
  <rdf:type rdf:resource="http://www.w3.org/2000/01/rdf-schema#Class"/>
  <rdfs:label xml:lang="en">Person</rdfs:label>
  <rdfs:comment xml:lang="en">A person (alive, dead, undead, or fictional).</rdfs:comment>
  <rdfs:subClassOf rdf:resource="http://schema.org/Thing"/>
  <rdfs:isDefinedBy rdf:resource="http://schema.org/Person"/>
</rdf:Description>

where I see a clear abuse of rdfs:isDefinedBy, since if you dereference the said URI, you don't find any explicit RDF definition ...
I would rather have the following

<rdf:Description rdf:about="http://schema.rdfs.org/Person">
  <rdf:type rdf:resource="http://www.w3.org/2000/01/rdf-schema#Class"/>
  <rdfs:label xml:lang="en">Person</rdfs:label>
  <rdfs:comment xml:lang="en">A person (alive, dead, undead, or fictional).</rdfs:comment>
  <rdfs:subClassOf rdf:resource="http://schema.rdfs.org/Thing"/>
  <dcterms:source rdf:resource="http://schema.org/Person"/>
</rdf:Description>

To the latter declaration, one could safely add statements like schema.rdfs:Person rdfs:subClassOf foaf:Person etc. Or do I miss the point? Bernard 2011/6/3 Michael Hausenblas michael.hausenb...@deri.org http://schema.rdfs.org ... is now available - we're sorry for the delay ;) Cheers, Michael -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html -- Bernard Vatant Senior Consultant Vocabulary Data Integration Tel: +33 (0) 971 488 459 Mail: bernard.vat...@mondeca.com Mondeca 3, cité Nollez 75018 Paris France Web: http://www.mondeca.com Blog: http://mondeca.wordpress.com
Re: Schema.org in RDF ...
All, Thanks a lot for the comments we received so far, both here and (even more) off-list. Now, to make our life a bit easier, may I ask you to provide suggestions concerning the mapping (or feature requests alike) directly to the Github [1]? Of course, if you're more into it, feel free to clone the repo and issue a pull request. As you can imagine, this is a community endeavour - we just happened to kick it off ;) Cheers, Michael [1] https://github.com/mhausenblas/schema-org-rdf/issues -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 3 Jun 2011, at 22:06, Michael Hausenblas wrote: http://schema.rdfs.org ... is now available - we're sorry for the delay ;) Cheers, Michael -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html
Re: See UK
Hugh, Impressive! Or to put it a bit more formally: :me <http://ontologi.es/like#likes> <http://apps.seme4.com/see-uk> . Hmmm ... I guess it's gonna be a quite competitive Open Data Challenge then ;) Cheers, Michael -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 21 May 2011, at 11:51, Hugh Glaser wrote: Hi, Ian Millard and I have recently made public a web page that is perhaps quite a nice demonstrator. It makes use of quite a lot of public UK linked data, plus data enrichment and cross-dataset linkage provided by the EnAKTing team and Seme4. Some people would call it a Semantic Web App. Hopefully self-explanatory and of interest to the lists: http://apps.seme4.com/see-uk There is an about page at http://apps.seme4.com/see-uk/about.html Best Hugh http://www.ecs.soton.ac.uk/~hg/ http://www.seme4.com/who-we-are/profile/hugh-glaser/
[CfP] IEEE Intelligent Systems Special Issue on 'Linked Open Government Data'
All, This is the first CfP for the IEEE Intelligent Systems Special Issue on 'Linked Open Government Data' [1] with a submission deadline of 1 September 2011.

Topics
==
1 Interoperable and meaningful LOGD representation
+ Space management of uniform resource identifiers and identifiers
+ Catalogs and registries for LOGD datasets
+ Ontologies, vocabularies, and semantic annotation for large and/or dynamic LOGD data
+ Vocabulary management for LOGD metadata reuses and specializations
+ Context, provenance, quality, uncertainty, and trustworthiness of LOGD
2 Scalable semantic data management and processing for LOGD
+ Smart integration with legacy systems, barriers, formats
+ Extensible infrastructure for collaborative LOGD data management and processing
+ Smart link generation, learning, validation, and reasoning
+ Scalable LOGD data discovery, access, query, and search
+ Persistence, version freshness, and obsolescence of LOGD
3 LOGD deployment and society
+ Deployment cost and benefits
+ Transparency vs. privacy
+ Free, open data vs. business models
+ License, policy, and legal issues
+ Community engagement, best practices, and lessons learned
4 Innovative and intelligent LOGD consumption
+ User interaction models: cost reduction and usability improvements
+ Social LOGD mashups: personalization, collaboration, and trust
+ Mobile applications and mGovernment
+ Intelligent web applications using LOGD as a data source
+ Use-cases for scientific discovery, business analysis, and administrative decision making

Guest Editors
=
+ Vassilios Peristeras, European Commission, Directorate-General for Informatics, Interoperability Solutions for European Public Administrations (ISA) Unit, Belgium
+ Michael Hausenblas, Linked Data Research Centre, DERI, NUI Galway, Ireland
+ Li Ding, Tetherless World Constellation, Rensselaer Polytechnic Institute, USA

For submission details and further questions, please visit [1].
Cheers, Michael (on behalf of the Guest Editors) [1] http://www.computer.org/intelligent/cfp2 -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html
Re: Linked Media: Extending Linked Data for Updates and arbitrary Media Formats using the REST Principles
Sebastian, Good stuff and timely, indeed. Can you please tell me how this relates to TimBL's notes [1] [2] (if it does)? I'm especially interested in the following: + How exactly is SPARQL utilised in your proposal? See also [3] and [4] for related work. + How are authentication and authorisation handled (like WebID [5] and WAC [6])? Cheers, Michael [1] http://www.w3.org/DesignIssues/ReadWriteLinkedData.html [2] http://www.w3.org/DesignIssues/CloudStorage.html [3] http://www.w3.org/TR/sparql11-http-rdf-update/ [4] http://portal.acm.org/citation.cfm?id=1645412 [5] http://www.w3.org/2005/Incubator/webid/spec/ [6] http://www.w3.org/wiki/WebAccessControl -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 5 May 2011, at 09:12, Sebastian Schaffert wrote: Dear all, in the context of our work in Salzburg NewMediaLab and the KiWi EU project before, we had an idea on which I would like to get feedback from the Linked Data community. We are also writing an article about it (probably for ISWC), but I think it makes sense to discuss the idea in advance. Maybe there is also a bit of related work that we are not yet aware of. Salzburg NewMediaLab is a close-to-industry research project in the media/broadcasting and the enterprise knowledge management domain. The goal of the current phase is to connect enterprise archives (multimedia but also other) with Linked (Open) Data sources to provide added value. In this context, it is not only relevant to publish and consume Linked Data; we also had the requirement of being able to easily update Linked Data and to manage content and metadata in a uniform way. We therefore call our extension Linked Media, and I am going to briefly describe it in a rather informal way.
Background
--
The idea is a kind of combination of concepts from Linked Data, Media Management and Enterprise Knowledge Management (from KiWi). Up till now, the Linked Data world is read-only and primarily concerned with the structured data associated with a resource (regardless of whether this data is represented in RDF or visualised in HTML). However, in order to build more interactive mashups, it would make sense to also allow updates to the data in Linked Data servers. And in enterprise settings, it makes sense to have a unified means to manage both structured data and human-readable content for a resource. For example, a resource might represent a video on the internet, and depending on how I access the video I want to get either the video itself or the structured metadata about the video (e.g. a list of RDF links to DBPedia for all persons depicted in the video). Our Linked Media idea tries to address both issues:
- it extends the Linked Data principles with RESTful principles for addition, modification, and deletion of resources
- it extends the Linked Data principles by means to manage content and meta-data alike using MIME to URL mapping

Linked Media Idea
-
1. extending the Linked Data principles for updates using REST
Linked Data is currently read-only and, depending on Accept headers in the HTTP request, it redirects a request to the appropriate representation (RDF or HTML). For supporting updates in Linked Data, a consequent extension of Linked Data is to apply REST and otherwise use the same or analogous principles. This means that GET is used to retrieve a resource, POST is used to create a resource, PUT is used to update a resource, and DELETE is used to remove a resource. In case of GET, the Accept header determines what to retrieve and redirects to the appropriate URL; in case of PUT, the Content-Type header determines what to update and also redirects to the appropriate URL. This extension is therefore fully backwards compatible with Linked Data, i.e.
each Linked Media server is a Linked Data server.
2. extending the Linked Data principles for arbitrary content using MIME mapping and rel Content Type
Linked Data currently distinguishes between an RDF representation and a human-readable representation in the GET request. The GET request then redirects either to the URL of the RDF representation or to the URL of the human-readable (HTML) representation. We extended this principle so that it can handle arbitrary formats based on the MIME type in Accept/Content-Type headers and so that it can still distinguish between content and metadata based on the rel extension for Accept/Content-Type headers. The basic idea is to rewrite resource URLs of the form http://localhost/resource/1234 depending on the MIME type as follows: - if the Accept/Content-Type header is of the form Accept: type/subtype; rel=content, then the redirect URL
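[Editor's note] The dispatch rule Sebastian describes can be sketched as follows. This is a rough sketch under assumed URL conventions: the /content and /meta.* suffixes are made up for illustration and are not part of the proposal, whose text is cut off above.

```python
def redirect_target(resource_url, header_value):
    """Pick a representation URL for http://.../resource/NNNN based on an
    Accept (GET) or Content-Type (PUT) header, honouring the proposed
    'rel=content' parameter that selects the content over the metadata."""
    media_type = header_value.split(";")[0].strip()
    if "rel=content" in header_value.replace(" ", ""):
        return resource_url + "/content"      # the media item itself
    if media_type in ("application/rdf+xml", "text/turtle"):
        return resource_url + "/meta.rdf"     # structured metadata
    return resource_url + "/meta.html"        # human-readable default
```

So `Accept: video/mp4; rel=content` fetches the video itself, while `Accept: text/turtle` fetches RDF metadata about it, which is the content-vs-metadata split the proposal is after.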
Re: Minting URIs: how to deal with unknown data structures
Frans, Great to hear that you're interested in applying Linked Data and promoting it in the Netherlands - certainly a very active area ;) I would welcome any advice on this topic from people who have had some more experience with publishing Linked Data. I find [1] a very useful page from a pragmatic perspective. If you're more into books and not only focusing on the data side (see 'REST and Linked Data: a match made for domain driven development?' [2] for more details on data vs. API), I can also recommend [3], which offers some more practical guidance in terms of URI space management. Cheers, Michael [1] http://data.gov.uk/resources/uris [2] http://ws-rest.org/2011/proc/a5-page.pdf [3] http://oreilly.com/catalog/9780596529260 -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 15 Apr 2011, at 13:48, Frans Knibbe wrote: Hello, Some newbie questions here... I have recently come in contact with the concept of Linked Data and I have become enthusiastic. I would like to promote the idea within my company (we specialize in geographical data) and within my country. I have read the excellent Linked Data book (“Linked Data: Evolving the Web into a Global Data Space”) and I think I am almost ready to start publishing Linked Data. I understand that it is important to get the URIs right, and not have to change them later. That is what my questions are about. I have acquired the first part (authority) of my URIs, let's say it is lod.mycompany.com. Now I am faced with the question: How do I come up with a URI scheme that will stand the test of time? I think I will start with publishing some FOAF data of myself and co-workers. And then hopefully more and more data will follow. At this moment I cannot possibly imagine which types of data we will publish.
They are likely to have some kind of geographical component, but that is true for a lot of data. I believe it is not possible to come up with any hierarchical structure that will accommodate all types of data that might ever be published. So I think it is best to leave out any indication of data organization in the path element of the URI (i.e. http://lod.mycompany.com/people is a bad idea). In my understanding, I could use base URIs like http://lod.mycompany.com/resource, http://lod.mycompany.com/page and http://lod.mycompany.com/data, and then use unique identifiers for all the things I want to publish something about. If I understand correctly, I don't need the URI to describe the hierarchy of my data because all Linked Data are self-describing. Nice. But then I am faced with the problem: What method do I use to mint my identifiers? Those identifiers need to be unique. Should I use a number sequence, or a hash function? In those cases the URIs would be uniform and give no indication of the type of data. But a number sequence seems unsafe, and in the case of a hash function I would still need to make some kind of structured choice of input values. I would welcome any advice on this topic from people who have had some more experience with publishing Linked Data. Regards, Frans Knibbe
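[Editor's note] One concrete way to get the opaque, hierarchy-free identifiers Frans is after is random UUIDs, which are collision-safe without coordinating a central number sequence. A sketch only; whether UUIDs suit a given dataset is a judgement call, and the base URI is taken from the question above:

```python
import uuid

BASE = "http://lod.mycompany.com/resource/"  # the authority from the question

def mint_uri():
    """Mint an opaque identifier: no type, no hierarchy encoded, so the
    URI never has to change when new kinds of data are published later."""
    return BASE + uuid.uuid4().hex
```

Each call yields something like http://lod.mycompany.com/resource/3f2c... with 32 hex characters; the /page and /data variants of the same identifier can then serve the human-readable and raw-data representations.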
Re: Exciting changes at Data.Southampton.ac.uk!
After some heated debate after the backlash against me for my recent comments about PDF, I've been forced to shift to recommending PDF as the preferred format for the data.southampton.ac.uk site, both for publishing and importing data. If today wasn't April Fool's Day I would have been worried. Nice one, Chris. Cheers, Michael -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 1 Apr 2011, at 08:23, Christopher Gutteridge wrote: After some heated debate after the backlash against me for my recent comments about PDF, I've been forced to shift to recommending PDF as the preferred format for the data.southampton.ac.uk site, both for publishing and importing data. There are some issues with this and I know not everyone will be happy with the decision; it wasn't easy to make... but on reflection, however, it's the right one. It is much easier for non-programmers (the majority of people) to work with PDF documents and they are supported by pretty much every platform you can think of, with a choice of tools and the benefit of familiarity.
We've provided a wrapper around 4store to make PDF the default output mode: http://sparql.data.southampton.ac.uk/?query=PREFIX+soton%3A+%3Chttp%3A%2F%2Fid.southampton.ac.uk%2Fns%2F%3E%0D%0APREFIX+foaf%3A+%3Chttp%3A%2F%2Fxmlns.com%2Ffoaf%2F0.1%2F%3E%0D%0APREFIX+skos%3A+%3Chttp%3A%2F%2Fwww.w3.org%2F2004%2F02%2Fskos%2Fcore%23%3E%0D%0APREFIX+geo%3A+%3Chttp%3A%2F%2Fwww.w3.org%2F2003%2F01%2Fgeo%2Fwgs84_pos%23%3E%0D%0APREFIX+rdfs%3A+%3Chttp%3A%2F%2Fwww.w3.org%2F2000%2F01%2Frdf-schema%23%3E%0D%0APREFIX+org%3A+%3Chttp%3A%2F%2Fwww.w3.org%2Fns%2Forg%23%3E%0D%0APREFIX+spacerel%3A+%3Chttp%3A%2F%2Fdata.ordnancesurvey.co.uk%2Fontology%2Fspatialrelations%2F%3E%0D%0APREFIX+ep%3A+%3Chttp%3A%2F%2Feprints.org%2Fontology%2F%3E%0D%0APREFIX+dct%3A+%3Chttp%3A%2F%2Fpurl.org%2Fdc%2Fterms%2F%3E%0D%0APREFIX+bibo%3A+%3Chttp%3A%2F%2Fpurl.org%2Fontology%2Fbibo%2F%3E%0D%0APREFIX+owl%3A+%3Chttp%3A%2F%2Fwww.w3.org%2F2002%2F07%2Fowl%23%3E%0D%0A%0D%0ASELECT+%3Fs+WHERE+ {%0D%0A%3Fs+%3Fp+%3Fo+.%0D%0A}+LIMIT +10output=pdfjsonp=#results_table And most information URIs can now be resolved to PDF, but we are sticking to HTML as the default (for now) http://data.southampton.ac.uk/products-and-services/FreshFruit.pdf The full details and rationale are on our data blog http://blogs.ecs.soton.ac.uk/data/2011/04/01/pdf-selected-as-interchange-format/E -- Christopher Gutteridge -- http://id.ecs.soton.ac.uk/person/1248 You should read the ECS Web Team blog: http://blogs.ecs.soton.ac.uk/webteam/
LOD community gathering at WWW2011
All, As part of the WWW2011, we'd like to invite you to join the Linked Open Data gathering [1] on Tue 29 March, after the LDOW workshop [2]. The exact time and location has yet to be determined. If you plan to attend, please leave your name at the Wiki page [1] (or if you don't have an account, let me know so I can put it there). Suggestions for the venue are welcome. The LOD gatherings have a rather long tradition (this is the 18th gathering since 2007) and are well-known to be both socially and content-wise very attractive. Cheers, Michael [1] http://www.w3.org/wiki/SweoIG/TaskForces/CommunityProjects/LinkingOpenData/HyderabadGathering [2] http://events.linkeddata.org/ldow2011/ -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html
Re: The truth about SPARQL Endpoint availability
I'm no SPARQL or voiD guru, but I think you need a bit more wrapping in the scovo stuff, so more like: ROTFL, reading that Hugh claims to be *not* a VoID guru ;) Note that SCOVO modelling of stats in VoID has been deprecated and simplified [1]. Fancy the challenge, it is the weekend?! :-) Indeed! Cheers, Michael [1] http://www.w3.org/TR/void/#statistics -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 5 Mar 2011, at 15:14, Hugh Glaser wrote: Hi, On 5 Mar 2011, at 14:22, Andrea Splendiani wrote: Hi, I think it depends on the store. I've tried some (from the endpoint list) and some return an answer pretty quickly. Some don't, and some don't support count. However, one could have this information only for the stores that answer the count query, so there is no need to try every time. I am happy for a store implementor or owner to disagree, but I find it very unlikely that the owner of a store with a decent chunk of data (> 1M triples, say) would be happy for someone to keep issuing such a query, even if they did decide to give enough resources to execute it. I would quickly blacklist such a site. VoID: is this a good query: select * where {?s <http://rdfs.org/ns/void#numberOfTriples> ?o } I'm no SPARQL or voiD guru, but I think you need a bit more wrapping in the scovo stuff, so more like: SELECT DISTINCT ?endpoint ?uri ?triples WHERE { ?ds a void:Dataset . ?ds void:sparqlEndpoint ?uri . ?ds rdfs:label ?endpoint . ?ds void:statItem [ scovo:dimension void:numberOfTriples ; rdf:value ?triples ] . } Try it at http://kwijibo.talis.com/voiD/ or http://void.rkbexplorer.com/ I guess Pierre-Yves might like to enhance his page by querying a voiD store to also give basic stats.
Or someone might like to do a store reporter that uses (a) voiD endpoint(s) plus Pierre-Yves's data (he has a SPARQL endpoint), to do so. And maybe the CKAN endpoint would have extra useful data as well. A real Semantic Web application that queried more than one SPARQL endpoint - now that would be a novelty! Fancy the challenge, it is the weekend?! :-) ciao Hugh it doesn't seem viable if so. ciao, Andrea On 05/03/2011, at 13.49, Hugh Glaser wrote: Nice idea, but,... :-) SELECT (count(*) as ?c) WHERE {?s ?p ?o} is a pretty anti-social thing to do to a store. At best, a store of any size will spend a while thinking, and then quite rightly decide they have burnt enough resources, and return some sort of error. For a properly maintained site, of course, the VoiD description will give lots of similar information. Best Hugh On 5 Mar 2011, at 13:06, Andrea Splendiani wrote: Hi, very nice! I have a small suggestion: why don't you ask count(*) where {?s ?p ?o} of the endpoint? Or ask for the number of graphs? Both pieces of information, the number of triples and the number of graphs, if logged and compared over time, can give a practical view of the liveliness of the content of the endpoint. best, Andrea Splendiani On 28/02/2011, at 18.55, Pierre-Yves Vandenbussche wrote: Hello all, have you already encountered problems of SPARQL endpoint accessibility? Do you feel frustrated that they are never available when you need them? Do you develop an application using these services but wonder whether they are reliable? Here is a tool [1] that allows you to check the availability of public SPARQL endpoints and monitor them over the last hours/days. Stay informed of status changes of a particular (or all) endpoint through RSS feeds. All availability information generated by this tool is accessible through a SPARQL endpoint. This tool fetches public SPARQL endpoints from CKAN open data. From this list, it runs availability tests every hour.
[1] http://labs.mondeca.com/sparqlEndpointsStatus/index.html [2] http://ckan.net/ Pierre-Yves Vandenbussche. Andrea Splendiani Senior Bioinformatics Scientist Centre for Mathematical and Computational Biology +44(0)1582 763133 ext 2004 andrea.splendi...@bbsrc.ac.uk -- Hugh Glaser, Intelligence, Agents, Multimedia School of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ Work: +44 23 8059 3670, Fax: +44 23 8059 3045 Mobile: +44 78 9422 3822, Home: +44 23 8061 5652 http://www.ecs.soton.ac.uk/~hg/
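Hugh's "store reporter" suggestion can be sketched in a few lines: probe an endpoint with a deliberately cheap LIMIT 1 query rather than the anti-social COUNT(*), and ask a voiD store for declared dataset sizes. A hedged sketch in stdlib Python; the endpoint URLs are illustrative assumptions, and the use of void:triples instead of the deprecated SCOVO pattern follows the simplified VoID statistics modelling Michael points to:

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

# Cheap liveness probe: unlike SELECT (COUNT(*) ...), a LIMIT 1 pattern
# costs the store almost nothing.
PROBE_QUERY = "SELECT ?s WHERE { ?s ?p ?o } LIMIT 1"

# Declared sizes from a voiD store, using the simplified void:triples
# property rather than the deprecated void:statItem/scovo:dimension form.
SIZE_QUERY = """\
PREFIX void: <http://rdfs.org/ns/void#>
SELECT ?uri ?triples WHERE {
  ?ds a void:Dataset ;
      void:sparqlEndpoint ?uri ;
      void:triples ?triples .
}"""

def request_for(endpoint: str, query: str) -> Request:
    """Build a SPARQL protocol GET request for the given endpoint."""
    return Request(endpoint + "?" + urlencode({"query": query}),
                   headers={"Accept": "application/sparql-results+xml"})

def is_alive(endpoint: str, timeout: float = 10.0) -> bool:
    """True if the endpoint answers the trivial probe query in time."""
    try:
        with urlopen(request_for(endpoint, PROBE_QUERY), timeout=timeout) as r:
            return r.status == 200
    except Exception:
        return False

# Example (would hit the network):
# is_alive("http://dbpedia.org/sparql")
```

Logged hourly, the probe results give exactly the kind of availability history Pierre-Yves's tool publishes.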
Re: The truth about SPARQL Endpoint availability
Pierre-Yves, Great contribution to the eco-system, congrats! Where applicable and if possible you may want to consider using the SD vocab as described in [1]. KUTGW! Cheers, Michael [1] http://www.w3.org/2001/sw/interest/void/#sparql-sd -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 28 Feb 2011, at 23:03, Pierre-Yves Vandenbussche wrote: Kingsley, Thanks for your support. All data available through the SPARQL endpoint is compliant with this vocabulary: http://labs.mondeca.com/vocab/endpointStatus which relies on VoID. What Robert suggested to me is to integrate RDFa in the RSS feed, which is a great idea. I will for sure improve the endpoint accessibility for humans. Pierre-Yves Vandenbussche Research & Development Mondeca 3, cité Nollez 75018 Paris France Tel. +33 (0)1 44 92 35 07 - fax +33 (0)1 44 92 02 59 Mail: pierre-yves.vandenbuss...@mondeca.com Website: www.mondeca.com Blog: Leçons de choses On Mon, Feb 28, 2011 at 11:54 PM, Kingsley Idehen kide...@openlinksw.com wrote: On 2/28/11 5:43 PM, Pierre-Yves Vandenbussche wrote: Robert, Do you have an example of inline Semantic Web Linked Data in the feeds? My SPARQL endpoint has no human clients for the moment ... computers first!! Do you have a graph representation of the data describing SPARQL endpoint availability using terms from the vocabulary you devised for this effort? Basically, do you have HTML+RDFa, RDF/XML, N-Triples, Turtle etc. representations of your entity description graphs? You could place SPARQL protocol URLs in <link/> within <head/>, as we do re. DBpedia pages, for instance. Kingsley Pierre-Yves Vandenbussche Research & Development Mondeca 3, cité Nollez 75018 Paris France Tel.
+33 (0)1 44 92 35 07 - fax +33 (0)1 44 92 02 59 Mail: pierre-yves.vandenbuss...@mondeca.com Website: www.mondeca.com Blog: Leçons de choses On Mon, Feb 28, 2011 at 11:35 PM, Bob Ferris z...@elbklang.net wrote: Oh sorry, I overlooked this for some reason. What a pity. However, I thought more about some inline Semantic Web Linked Data in the feeds. Would that be an option? Cheers, Bob PS: http://labs.mondeca.com/repositories/ENDPOINT_STATUS delivers me a "Missing parameter: query" error. So I guess I have to parametrise the request. Instructions for that might be useful then ;) On 28.02.2011 23:25, Pierre-Yves Vandenbussche wrote: Hello Robert, All information produced by this service is stored in a SPARQL endpoint: http://labs.mondeca.com/sparqlEndpointsStatus/endpoint/endpoint.html These open data are linked to the CKAN ones. You can already access them. best, Pierre-Yves Vandenbussche Research & Development Mondeca 3, cité Nollez 75018 Paris France Tel. +33 (0)1 44 92 35 07 - fax +33 (0)1 44 92 02 59 Mail: pierre-yves.vandenbuss...@mondeca.com Website: http://www.mondeca.com/ Blog: Leçons de choses http://mondeca.wordpress.com/ On Mon, Feb 28, 2011 at 10:45 PM, Bob Ferris z...@elbklang.net wrote: Congrats Pierre, well done! This might hopefully become a quite useful resource. Any plans to publish this information itself as Semantic Web Linked Data? Cheers, Bob On 28.02.2011 19:55, Pierre-Yves Vandenbussche wrote: Hello all, have you already encountered problems of SPARQL endpoint accessibility? Do you feel frustrated that they are never available when you need them? Do you develop an application using these services but wonder whether they are reliable? Here is a tool [1] that allows you to check the availability of public SPARQL endpoints and monitor them over the last hours/days.
Stay informed of status changes of a particular (or all) endpoint through RSS feeds. All availability information generated by this tool is accessible through a SPARQL endpoint. This tool fetches public SPARQL endpoints from CKAN [2] open data. From this list, it runs availability tests every hour. [1] http://labs.mondeca.com/sparqlEndpointsStatus/index.html [2] http://ckan.net/ Pierre-Yves Vandenbussche. -- Regards, Kingsley Idehen President & CEO OpenLink Software Web: http://www.openlinksw.com Weblog: http://www.openlinksw.com/blog/~kidehen Twitter/Identi.ca: kidehen
Re: CORS question (was Re: Proposal to assess the quality of Linked Data sources)
I tried this recently and it didn't work on either Safari or Chrome (iirc) without adding: Access-Control-Allow-Methods: GET Has anyone else had this issue? Hmmm. Unsure, but at least the script I wrote for [1] doesn't seem to require it and I *think* works fine. Would be glad to learn if this is not the case and adapt it respectively. Cheers, Michael [1] http://enable-cors.org/#check -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html On 25 Feb 2011, at 12:22, Damian Steer wrote: Sorry for changing the topic (and, indeed, sailing off list topic). On 24/02/11 18:28, Melvin Carvalho wrote: http://www.w3.org/wiki/CORS_Enabled [reproduced for convenience] ... To give Javascript clients basic access to your resources requires adding one HTTP Response Header, namely: Access-Control-Allow-Origin: * I tried this recently and it didn't work on either Safari or Chrome (iirc) without adding: Access-Control-Allow-Methods: GET Has anyone else had this issue? Damian
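For context on Damian's observation: per the CORS spec, Access-Control-Allow-Origin alone should cover simple GET requests, while Access-Control-Allow-Methods is only consulted on preflighted (OPTIONS) requests, so serving both is harmless even where a browser unexpectedly demands the second. A minimal sketch of the two response shapes, in Python purely for illustration:

```python
# Sketch of the response headers discussed above. For a simple GET,
# Access-Control-Allow-Origin alone should suffice per the CORS spec;
# Access-Control-Allow-Methods only matters in a preflight (OPTIONS)
# response, but adding it costs nothing if some browsers insist.
def cors_headers(preflight: bool = False) -> dict:
    headers = {"Access-Control-Allow-Origin": "*"}
    if preflight:
        headers["Access-Control-Allow-Methods"] = "GET"
    return headers

print(cors_headers())                # simple request
print(cors_headers(preflight=True))  # preflight response
```

A server would emit these as literal HTTP response headers; the dict here just makes the two cases easy to compare.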
3rd and last CfP: Linked Data on the Web (LDOW2011) Workshop at WWW2011
All, Due to multiple requests, we have decided to extend the submission deadline of the 4th International Workshop on Linked Data on the Web (LDOW2011) at WWW2011 to: Submission deadline: Sunday 13th February, 2011, 23:59 CET Note that this is a hard deadline; no further extensions will be made. The full CfP is available via the LDOW2011 website: http://events.linkeddata.org/ldow2011/ We are looking forward to seeing you at LDOW2011 in Hyderabad, India. Cheers, Chris Bizer Tom Heath Tim Berners-Lee Michael Hausenblas -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html
Re: How to declare in a web app's interface which kind of app/version/features and or interfaces or formats it exposes
Olivier, I'm considering the different options that could help embed (with the slightest modifications possible), in the HTML interface of a Web app, a description of which app it is and/or which interfaces it exposes, so that this would be discoverable and lead to exploitation of such data by SemWeb apps, or existing harvesters. You might find my blog post 'Announcing Application Metadata on the Web of Data' [1] along with the template [2] useful for this purpose. Cheers, Michael [1] http://webofdata.wordpress.com/2010/01/06/announcing-application-metadata [2] http://lab.linkeddata.deri.ie/2010/res/web-app-metadata-template.html -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Olivier Berger olivier.ber...@it-sudparis.eu Date: Thu, 20 Jan 2011 16:42:16 +0100 To: Linked Data community public-lod@w3.org Subject: How to declare in a web app's interface which kind of app/version/features and or interfaces or formats it exposes Resent-From: Linked Data community public-lod@w3.org Resent-Date: Thu, 20 Jan 2011 15:43:59 + Hi. I'm considering the different options that could help embed (with the slightest modifications possible), in the HTML interface of a Web app, a description of which app it is and/or which interfaces it exposes, so that this would be discoverable and lead to exploitation of such data by SemWeb apps, or existing harvesters. Which SemWeb standards could be used to do so? Thanks in advance. Best regards, -- Olivier BERGER olivier.ber...@it-sudparis.eu http://www-public.it-sudparis.eu/~berger_o/ - OpenPGP-Id: 2048R/5819D7E8 Ingénieur Recherche - Dept INF Institut TELECOM, SudParis (http://www.it-sudparis.eu/), Evry (France)
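As background to the discovery question: one lightweight, harvester-friendly pattern is a <link> element in the HTML head pointing at a machine-readable description of the app. The rel value and metadata URL below are illustrative assumptions, not necessarily what the template in [2] prescribes; the sketch just shows such links are trivially extractable with standard tooling:

```python
from html.parser import HTMLParser

# Illustrative HTML head advertising an RDF description of the app.
# The rel value and /about.rdf URL are assumptions for this sketch.
SAMPLE = """<html><head>
<link rel="meta" type="application/rdf+xml" href="/about.rdf" />
</head><body></body></html>"""

class LinkCollector(HTMLParser):
    """Collect the attributes of every <link> element encountered."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "link":
            self.links.append(dict(attrs))

collector = LinkCollector()
collector.feed(SAMPLE)
print(collector.links)
# [{'rel': 'meta', 'type': 'application/rdf+xml', 'href': '/about.rdf'}]
```

A harvester that finds such a link can then dereference the href to pick up the app's self-description.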
Re: Linked Open Data star badges
1. Let's try to avoid cross-posting if not necessary. This discussion is already ongoing on CKAN discuss [1]. 2. Antoine, you and others (incl. dataset publishers) are free to use it - or ignore it. If you want to see it happen at [2], then I'd suggest you just go there and implement it. 3. The badges, just as TimBL's original star scheme, are a marketing vehicle. Something CTOs, Web developers, content owners such as government agencies, etc. should be able to comprehend rather easily. Let's not try to make a science out of it. /me back to some work with concrete output and potential impact ;) Cheers, Michael [1] http://lists.okfn.org/pipermail/ckan-discuss/2010-December/000819.html [2] http://esw.w3.org/DataSetRDFDumps -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Pablo Mendes pablomen...@gmail.com Date: Sat, 11 Dec 2010 10:20:00 +0100 To: Antoine Zimmermann antoine.zimmerm...@insa-lyon.fr, ckan-disc...@lists.okfn.org Cc: Michael Hausenblas michael.hausenb...@deri.org, Linked Data community public-lod@w3.org Subject: Re: Linked Open Data star badges Maybe there is potential for interaction between the CKAN Curation Tool (http://packages.python.org/curate/overview.html), "a tool that looks at packages on CKAN, applies some rules, and produces some output", and the LOD badges?
The output might be instructions to add a tag to a package, or it might be to add a package to a group. The LOD badges are based on TimBL's 5-star data scheme: http://lab.linkeddata.deri.ie/2010/lod-badges/ Cheers, Pablo On Fri, Dec 10, 2010 at 4:23 PM, Antoine Zimmermann antoine.zimmerm...@insa-lyon.fr wrote: Michael, Good job, I like the look of these badges. However, I'm wondering: will the people who have a 0- or 1-star dataset put a badge on their Web page? It's like putting a badge saying "HTML page /almost/ valid: 3 errors only!" In the end, I guess only the 5-star badge will be proudly displayed by dataset owners. Still, I think those badges have a utility, but not for the dataset owners. I can imagine a web page listing existing, independent datasets (such as [1]) where each row has the corresponding badge. This way, one can immediately and visually determine the level of interoperability of the dataset. [1] http://esw.w3.org/DataSetRDFDumps Cheers, AZ. On 07/12/2010 11:09, Michael Hausenblas wrote: If you want to express your support for LOD data on your dataset Web page, you can now use the LOD badges [1] to do so. The LOD badges are based on TimBL's 5-star data scheme, which has been made available via [2]. Cheers, Michael [1] http://lab.linkeddata.deri.ie/2010/lod-badges/ [2] http://www.w3.org/DesignIssues/LinkedData.html -- Antoine Zimmermann Researcher at: Laboratoire d'InfoRmatique en Image et Systèmes d'information Database Group 7 Avenue Jean Capelle 69621 Villeurbanne Cedex France Lecturer at: Institut National des Sciences Appliquées de Lyon 20 Avenue Albert Einstein 69621 Villeurbanne Cedex France antoine.zimmerm...@insa-lyon.fr http://zimmer.aprilfoolsreview.com/
Linked Open Data star badges
If you want to express your support for LOD data on your dataset Web page, you can now use the LOD badges [1] to do so. The LOD badges are based on TimBL's 5-star data scheme, which has been made available via [2]. Cheers, Michael [1] http://lab.linkeddata.deri.ie/2010/lod-badges/ [2] http://www.w3.org/DesignIssues/LinkedData.html -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html
Re: linked data about the W3C?
Pierre-Antoine Champin, You mean something like [1]? Cheers, Michael [1] http://www.w3.org/2002/01/tr-automation/tr.rdf -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Pierre-Antoine Champin swlists-040...@champin.net Date: Wed, 24 Nov 2010 12:40:33 +0100 To: Linked Data community public-lod@w3.org Subject: linked data about the W3C? Resent-From: Linked Data community public-lod@w3.org Resent-Date: Wed, 24 Nov 2010 11:42:38 + Hi, is there any linked data published about the W3C? Semantic Radar does not see any on their webpages. It is a shame, because I would like to get some statistics about the members, and I'm condemned to manually go through the 324 links of [1]. Talking about eating one's own dog food :-P pa [1] http://www.w3.org/Consortium/Member/List
Re: Is 303 really necessary?
Ian, (trying to keep up with this thread, maybe missed one point or the other) I'd like to understand what we can agree on here. It seems that having a URI for a thing and another URI for the document describing it is something most people would acknowledge to be useful. Two questions come immediately to mind: Who cares? What are the costs and what are the benefits? First, I'd reckon that a certain number of tool and library developers (incl. Tabulator, RAP, RDB2RDF mapping tools, etc.) will have to care about this in the first place. Given the relatively small size of the community compared to the Web at large, this seems doable. Second, as already pointed out, the 303 issue mainly affects setups where the RDF representation is detached from the HTML (such as RDF/XML, Turtle, etc.), which means that the emerging and increasing part of the RDFa-based Linked Data world is not affected per se. Third, given that we're still a small community and find certain things to be sub-optimal, the cost of changing it now is likely less than changing it in, say, 5 years' time. I think I can hence sympathise with your proposal to (carefully) revisit the issue and think about alternatives. Now, having said this, although I think one should contemplate the 303 issue, I don't agree with your proposed plan ahead; certain items on your list are rather simple to achieve (define the :isDescribedBy, update the LD guide, etc.), others not. It occurs to me that one of the main features of the Linked Data community is that we *do* things rather than having endless conversations about what would be best for the world out there. Heck, this is how the whole thing started. A couple of people defining a set of good practices and providing data following these practices and tools for it. Concluding. If you are serious about this, please go ahead. You have a very popular and powerful platform at your hand.
Implement it there (and in your libraries, such as Moriarty), document it, and others may/will follow. Cheers, Michael -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Ian Davis m...@iandavis.com Date: Thu, 4 Nov 2010 13:22:09 + To: Linked Data community public-lod@w3.org Subject: Is 303 really necessary? Resent-From: Linked Data community public-lod@w3.org Resent-Date: Thu, 04 Nov 2010 13:22:46 + Hi all, The subject of this email is the title of a blog post I wrote last night questioning whether we actually need to continue with the 303 redirect approach for Linked Data. My suggestion is that replacing it with a 200 is in practice harmless and that nothing actually breaks on the web. Please take a moment to read it if you are interested. http://iand.posterous.com/is-303-really-necessary Cheers, Ian
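For readers new to the debate: the 303 pattern being questioned reduces to a small routing rule, where the URI for the (non-information) thing answers 303 See Other pointing at the URI of the document about it, and the document URI answers 200. A toy sketch with hypothetical URIs; Ian's proposal would, roughly, answer 200 at the thing URI directly:

```python
# Toy sketch of the 303 pattern under discussion, with hypothetical URIs:
# the URI for the thing (a person) 303-redirects to the URI of the
# document describing it. This is an illustration, not anyone's actual
# server configuration.
def respond(path: str):
    """Return (status, headers) for a toy server with one 'thing' URI."""
    if path == "/id/alice":                      # non-information resource
        return 303, {"Location": "/doc/alice"}   # See Other -> its description
    if path == "/doc/alice":                     # information resource
        return 200, {"Content-Type": "text/turtle"}
    return 404, {}

print(respond("/id/alice"))   # (303, {'Location': '/doc/alice'})
print(respond("/doc/alice"))  # (200, {'Content-Type': 'text/turtle'})
```

The extra round trip per dereferenced thing URI is exactly the cost that motivates the "is 303 really necessary?" question.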
[Request for Input] Linked Data Specifications
All, There are quite a few specs beyond the core specs (HTTP, URIs, RDF) that are relevant to Linked Data. In order to document this, we've set up a Web page [1] collecting these specs. The page is primarily targeting Linked Data newbies but should, IMHO, also be able to offer some gems for advanced Linked Data folks. I'd appreciate suggestions via the ESW Wiki page [2] and hope that this is useful for the community. Cheers, Michael [1] http://linkeddata-specs.info/ [2] http://esw.w3.org/SweoIG/TaskForces/CommunityProjects/LinkingOpenData/Specifications -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html
Re: [Request for Input] Linked Data Specifications
Dave, A good idea. Thanks. Could I request you more clearly separate the formal specifications from the de facto community practice documents. The Change Set vocabulary, to pick one example, doesn't really have the same standing, adoption or level of scrutiny as the RFCs, does it? Good proposal, indeed. I plan to add a short statement to each spec anyway to explain how it contributes to a core spec, and in doing so, will add this (crucial) bit of information as well. Thanks! Cheers, Michael -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Dave Reynolds dave.e.reyno...@gmail.com Date: Fri, 05 Nov 2010 11:31:46 + To: Michael Hausenblas michael.hausenb...@deri.org Cc: Linked Data community public-lod@w3.org Subject: Re: [Request for Input] Linked Data Specifications Hi Michael, A good idea. Could I request you more clearly separate the formal specifications from the de facto community practice documents. The Change Set vocabulary, to pick one example, doesn't really have the same standing, adoption or level of scrutiny as the RFCs, does it? Dave On Fri, 2010-11-05 at 10:33 +, Michael Hausenblas wrote: All, There are quite some specs beyond the core specs (HTTP, URIs, RDF) that are relevant to Linked Data. In order to document this, we've set up a Web page [1] collecting these specs. The page is primarily targeting Linked Data newbies but should, IMHO, also be able to offer some gems for advanced Linked Data folks. I'd appreciate suggestions via the ESW Wiki page [2] and hope that this is useful for the community. Cheers, Michael [1] http://linkeddata-specs.info/ [2] http://esw.w3.org/SweoIG/TaskForces/CommunityProjects/LinkingOpenData/Specifications
Re: [Request for Input] Linked Data Specifications
Nathan, Thanks for your feedback! Michael, also worth mentioning RDFa, Turtle, N3? Hmmm. Not sure, as I was hoping to avoid duplication to a certain extent, as I think the SWAP publications page does an excellent job already. But maybe the most important ones in the supplementary/serialisation section? can you bold the link to the SWAP publications / highlight in some way, as it's a pretty important one. Yes. Perhaps more vocabs, perhaps sioc, org, dct, foaf and a pointer to a good resource for vocabs. Uh uh. I don't wanna get into domain semantics or pretend I can give an exhaustive overview of the vocabularies there, really. Hope you understand ;) Cheers, Michael -- Dr. Michael Hausenblas, Research Fellow LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Nathan nat...@webr3.org Organization: webr3 Reply-To: nat...@webr3.org Date: Fri, 05 Nov 2010 12:00:10 + To: Michael Hausenblas michael.hausenb...@deri.org Cc: Dave Reynolds dave.e.reyno...@gmail.com, Linked Data community public-lod@w3.org Subject: Re: [Request for Input] Linked Data Specifications Dave Reynolds wrote: Hi Michael, A good idea. My sentiments exactly :) Michael, also worth mentioning RDFa, Turtle, N3? and also any note on IRI or HTTP-bis? can you bold the link to the SWAP publications / highlight in some way, as it's a pretty important one. Perhaps more vocabs, perhaps sioc, org, dct, foaf and a pointer to a good resource for vocabs. Could I request you more clearly separate the formal specifications from the de facto community practice documents. The Change Set vocabulary, to pick one example, doesn't really have the same standing, adoption or level of scrutiny as the RFCs, does it? and +1 to the above (re making them clearly distinct, not saying CS is of the same standing as Web standards!). Best, Nathan
Re: R2RML: RDB to RDF Mapping Language
Riccardo, D2R is one way to do it, a sort of non-standardised precursor to R2RML. There are many more ways to do it [1] - that's why we standardise it ;) Cheers, Michael (with his RDB2RDF WG co-chair hat on) [1] http://www.w3.org/2005/Incubator/rdb2rdf/RDB2RDF_SurveyReport.pdf -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Riccardo Tasso ta...@elet.polimi.it Date: Sun, 31 Oct 2010 12:00:56 +0100 To: Ivan Herman i...@ivan-herman.net Cc: Linked Data community public-lod@w3.org, Semantic Web community semantic-...@w3.org Subject: Re: R2RML: RDB to RDF Mapping Language Resent-From: Linked Data community public-lod@w3.org Resent-Date: Sun, 31 Oct 2010 11:01:48 + What is the difference with D2R [1]? Riccardo [1] http://www4.wiwiss.fu-berlin.de/bizer/d2rmap/D2Rmap.htm On 31/10/2010 10:13, Ivan Herman wrote: FYI... The RDB2RDF Working Group[1] has published the First Public Working Draft of R2RML: RDB to RDF Mapping Language[2]. R2RML is a language for describing how to put relational data on the Semantic Web. With R2RML, people express customized mappings from relational databases to RDF datasets, allowing them to view existing relational data in the RDF data model, expressed in their preferred structure and target vocabulary. Ivan [1] http://www.w3.org/2001/sw/rdb2rdf/ [2] http://www.w3.org/TR/2010/WD-r2rml-20101028/ Ivan Herman Bankrashof 108 1183NW Amstelveen The Netherlands http://www.ivan-herman.net
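For readers unfamiliar with either tool: both D2R and R2RML describe how to turn relational rows into triples via subject templates and column mappings. The toy sketch below illustrates just that core idea, using R2RML-style {COLUMN} placeholders; the table, URIs and vocabulary are illustrative assumptions, and this is in no way a conformant R2RML processor:

```python
import re

# Toy sketch of what an RDB-to-RDF mapping engine does with a subject
# template and column-based predicate-object maps. Table, template and
# vocabulary choices are illustrative, not from the R2RML draft.
def expand(template: str, row: dict) -> str:
    """Fill rr:template-style {COLUMN} placeholders from a table row."""
    return re.sub(r"\{(\w+)\}", lambda m: str(row[m.group(1)]), template)

rows = [{"ID": 7, "NAME": "Alice"}]                  # a relational table
subject_template = "http://example.com/person/{ID}"  # rr:template analogue
pred_obj_maps = [("http://xmlns.com/foaf/0.1/name", "NAME")]

triples = []
for row in rows:
    s = expand(subject_template, row)
    for predicate, column in pred_obj_maps:
        triples.append((s, predicate, row[column]))

print(triples)
# [('http://example.com/person/7', 'http://xmlns.com/foaf/0.1/name', 'Alice')]
```

The difference Michael points at is that D2R expresses such mappings in its own earlier format, while R2RML standardises them as an RDF vocabulary.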
Re: AW: ANN: LOD Cloud - Statistics and compliance with best practices
(cutting down lists as cross-posting is against W3C list policy) I'm in general with Chris. Of course RDFa is/can be used to do Linked Data. But rather than wasting our time in ranting how bad the world is, how about just making it a better place? 1. Clearly, we need to motivate why interlinking is beneficial or at least offer 3rd party services that do the job for the publishers (if they don't see the benefit or have other priorities). 2. Again, rather than discussing endlessly about what is fair and what is not and who should be there and who not and so on ... hey, it's the Web. An open, free ecosystem where you can just put up your own visualisation, diagram, stats, etc. - the community will then decide how valuable and useful it is. /me back to work now; trying to help solve the issues rather than talking about it in the first place ;) Correcting one factual error in Gio's post, though: "So danny ayers has fun linking to dbpedia so he is in there with his joke dataset, but you cant credibly bring that argument to large retailers so they're left out?" That was Denny Vrandecic, not Danny Ayers. Cheers, Michael -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel.
+353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Giovanni Tummarello giovanni.tummare...@deri.org Date: Thu, 21 Oct 2010 13:12:10 +0100 To: Chris Bizer ch...@bizer.de Cc: Martin Hepp martin.h...@ebusiness-unibw.org, Thomas Steiner tstei...@google.com, Semantic Web community semantic-...@w3.org, Linked Data community public-lod@w3.org, Anja Jentzsch a...@anjeve.de, semanticweb semantic...@yahoogroups.com, Kingsley Idehen kide...@openlinksw.com Subject: Re: AW: ANN: LOD Cloud - Statistics and compliance with best practices Resent-From: Linked Data community public-lod@w3.org Resent-Date: Thu, 21 Oct 2010 12:12:41 + But again: I agree that crawling the Web of Data and then deriving a dataset catalog as well as meta-data about the datasets directly from the crawled data would be clearly preferable and would also scale way better. Thus: Could please somebody start a crawler and build such a catalog? As long as nobody does this, I will keep on using CKAN. Hi Chris, all, I can only restate that within Sindice we're very open to anyone who wanted to develop data analysis apps creating catalogs automatically. At the moment, a map-reduce job a couple of weeks ago gave in excess of 100k independent datasets. How many interlinked etc.? To be analyzed. Our interest (and the interest of the Semantic Web vision I want to sponsor) is to make sure RDFa sites are fully included, and so are those who provide markup which can however be translated in an automatic/agreeable way (so no scraping or sponging) into RDF (that is, anything that any23.org can turn into triples). If you were indeed interested in running or developing your algorithms on our dataset, no problem; the code can be made open source so it would run on other, similarly structured datasets. This said, yes, I think too that in this phase a CKAN-like repository can be an interesting aggregation point, why not.
But I do think the diagram, which made great sense as an example when Richard started it, is now at risk of providing a disservice, which is in line with what Martin has pointed out. The diagram as it is now kind of implicitly conveys the sense that if something is so large then all that matters must be there, and that's absolutely not the case. a) there are plenty of extremely useful datasets in RDF/RDFa etc. which are not there b) the usefulness of being linked is all but a proven fact, so on the one hand people might want to be there, on the other you'd have to push serious commercial entities (for example) to link to dbpedia for reasons that aren't clear, and that hurts your credibility. So danny ayers has fun linking to dbpedia so he is in there with his joke dataset, but you can't credibly bring that argument to large retailers, so they're left out? This would be OK if the diagram was just "hey, it's my own thing, I set my rules" - fine - but the fanfare around it gives it a different meaning, and thus the controversy above. .. just tried to put in words what might be a general unspoken feeling.. Short message recap: a) CKAN - nice, why not, might be useful but.. b) generated diagram: we have the data or can collect it, so whoever is interested in analytics please let us know and we can work it out (matter of fact, it turns out most of us in here are paid by the EU for doing this in collaborative projects :-) ) cheers Giovanni
[CfP] Future Internet Session at the Future Internet Assembly, Ghent, 16 December 2010
Call for Position Papers for the Future Internet Session "Linked Data in the Future Internet" at the Future Internet Assembly, Ghent, 16 December 2010 http://www.fi-ghent.eu/ http://www.future-internet.eu/ The Future Internet has sparked the interest of many different communities. All of these communities develop specific parts of infrastructure, which at some point in time need to be able to interoperate. Unfortunately, the Future Internet architecture currently does not include means to achieve interoperability at the data level. At the same time, Linked Data is becoming an accepted best practice to exchange information in an interoperable and reusable fashion. Many different communities on the Internet use Linked Data standards to provide and exchange interoperable information. This is strikingly confirmed by the dramatically growing Linked Data cloud (http://lod-cloud.net/) and the currently more than 25 billion facts represented and interconnected therein, with exponential growth rates both in terms of data sets and contained data. The ISO/OSI 7-layer architecture is a conceptual view on networking architectures. One possible view is to look at Linked Data as an independent layer in the Internet architecture, on top of the networking layer but below the application layers, since it provides a common data model for all applications, as shown in the figure below. This session investigates this view, what implications it imposes on the Future Internet architecture, but also how future architectures and system developments can benefit from this new layer. We are looking for position papers regarding the use of Linked Data in the Future Internet. These can be either concrete current use-cases or envisioned usages for the topics relevant for the Future Internet (examples include: Internet of Things, embedded systems, FIRE, services, smart cities, Open Government Data, Future Internet Architecture and others).
The papers provide an input for the ongoing discussion on the role of Linked Data for the Future Internet. Submission Your position paper should have between 1 and 10 pages. We encourage authors to comply with the Springer LNCS format (see http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0 ). Position papers can be submitted until 30th November 2010 by email to futureinter...@semanticweb.org in HTML or PDF. ***Selection*** The session's organizers reserve the right to do a relevance check of submitted position papers and reject papers which are clearly not relevant to the topic outlined above. ***Publication*** Submitted position papers will be published on a website related to the Future Internet Assembly Linked Data Session and may influence further developments in the Future Internet space. Session Organisers Sören Auer Email: a...@informatik.uni-leipzig.de Stefan Decker (main contact) Email: stefan.dec...@deri.org Manfred Hauswirth Email: manfred.hauswi...@deri.org -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html
Re: Correct Usage of rdfs:isDefinedBy in Vocabulary Specifications with a Hash-based URI Pattern
Martin, Opinions? We had the same discussion in the voiD team, see [1], and resolved it eventually - hope this helps. Cheers, Michael [1] http://code.google.com/p/void-impl/issues/detail?id=45 -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Martin Hepp martin.h...@ebusiness-unibw.org Date: Thu, 30 Sep 2010 09:06:46 +0200 To: Linked Data community public-lod@w3.org Subject: Correct Usage of rdfs:isDefinedBy in Vocabulary Specifications with a Hash-based URI Pattern Resent-From: Linked Data community public-lod@w3.org Resent-Date: Thu, 30 Sep 2010 07:07:24 + Dear all: We use rdfs:isDefinedBy in all of our vocabularies (*) for linking between the conceptual elements and their specification. Now, there is a subtle question: Let's assume we have an ontology with the main URI http://purl.org/vso/ns All conceptual elements are defined as hash fragment URIs (URI references), e.g. http://purl.org/vso/ns#Bike The ontology itself (the instance of owl:Ontology) has the URI http://purl.org/vso/ns# <http://purl.org/vso/ns#> a owl:Ontology ; owl:imports <http://purl.org/goodrelations/v1> ; dc:title "VSO: The Vehicle Sales Ontology for Semantic Web-based E-Commerce"@en . So we have two URIs for the ontology: 1. http://purl.org/vso/ns# for the ontology as an abstract artefact 2. http://purl.org/vso/ns for the syntactical representation of the ontology (its serialization) Shall the rdfs:isDefinedBy statements refer to #1 or #2 ? #1 vso:Vehicle a owl:Class ; rdfs:subClassOf gr:ProductOrService ; rdfs:label "Vehicle (gr:ProductOrService)"@en ; rdfs:isDefinedBy <http://purl.org/vso/ns#> . === #2 vso:Vehicle a owl:Class ; rdfs:subClassOf gr:ProductOrService ; rdfs:label "Vehicle (gr:ProductOrService)"@en ; rdfs:isDefinedBy <http://purl.org/vso/ns> .
=== I had assumed they should refer to #1, but that caused some debate within our group ;-) Opinions? Best Martin
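To make the two-URIs subtlety concrete: a quick Python check (standard library only) shows that both ontology URIs dereference to the same document, even though they remain distinct RDF resources. This only illustrates the mechanics; it does not settle the #1 vs. #2 question:

```python
from urllib.parse import urldefrag

# Any hash URI in the vocabulary resolves to the same document URI:
# a client strips the fragment before making the HTTP request.
doc_of_class, frag = urldefrag("http://purl.org/vso/ns#Bike")
doc_of_onto, _ = urldefrag("http://purl.org/vso/ns#")

assert doc_of_class == "http://purl.org/vso/ns"  # what actually gets fetched
assert doc_of_onto == "http://purl.org/vso/ns"   # the very same document
assert frag == "Bike"

# Yet as RDF terms, the two ontology URIs remain distinct resources:
assert "http://purl.org/vso/ns#" != "http://purl.org/vso/ns"
```

So whichever of #1 or #2 the `rdfs:isDefinedBy` triples point at, dereferencing leads to the same serialization; the difference is purely in which resource (abstract ontology vs. its document) is being named.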
LDOW10 proceedings now available via CEUR-WS.org
All, This is to announce the availability of the Proceedings of the WWW2010 Workshop on Linked Data on the Web (LDOW2010) via CEUR-WS.org [1]. Cheers, Michael, on behalf of the organisers Chris, Tom and Tim [1] http://ceur-ws.org/Vol-628 -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html
Europe launches Linked Data support (and how you can benefit from it)
All, I'm happy and proud to announce that the LOD Around The Clock (LATC) Support Action [1], a European FP7 project, has started today. Our primary goal is to support people and institutions in publishing and consuming Linked Data. If you're interested in more details or want to learn how you can participate in and benefit from LATC, please contact me. You may also want to follow us on Twitter [2]. Cheers, Michael [1] http://latc-project.eu/ [2] http://twitter.com/latcproject -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html
Re: Europe launches Linked Data support (and how you can benefit from it)
Juan, Is this only for Europe? Glad you asked ;) No, this is not only for Europe, but funded by the European Commission. So, we're more than happy to support people and institutions all over the world. Cheers, Michael -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Juan Sequeda juanfeder...@gmail.com Date: Wed, 1 Sep 2010 09:22:54 -0500 To: Michael Hausenblas michael.hausenb...@deri.org Cc: Linked Data community public-lod@w3.org Subject: Re: Europe launches Linked Data support (and how you can benefit from it) Is this only for Europe? Juan Sequeda +1-575-SEQ-UEDA www.juansequeda.com On Wed, Sep 1, 2010 at 3:51 AM, Michael Hausenblas michael.hausenb...@deri.org wrote: All, I'm happy and proud to announce that the LOD Around The Clock (LATC) Support Action [1], an European FP7 project has started today. Our primary goal is to support people and institutions to publish and consume Linked Data. If you're interested in more details or want to learn how you can participate in and benefit from LATC, please contact me. You may also want to follow us on Twitter [2]. Cheers, Michael [1] http://latc-project.eu/ [2] http://twitter.com/latcproject -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html
Re: Show me the money - (was Subjects as Literals)
I am still not hearing any argument to justify the costs of literals as subjects. +1 Cheers, Michael -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Jeremy Carroll jer...@topquadrant.com Date: Thu, 01 Jul 2010 08:38:00 -0700 To: Yves Raimond yves.raim...@gmail.com Cc: Pat Hayes pha...@ihmc.us, Toby A Inkster t...@g5n.co.uk, David Booth da...@dbooth.org, nat...@webr3.org, Dan Brickley dan...@danbri.org, Linked Data community public-lod@w3.org, Semantic Web community semantic-...@w3.org Subject: Show me the money - (was Subjects as Literals) Resent-From: Linked Data community public-lod@w3.org Resent-Date: Thu, 01 Jul 2010 15:38:42 + I am still not hearing any argument to justify the costs of literals as subjects I have loads and loads of code, both open source and commercial, that assumes throughout that a node in a subject position is not a literal, and a node in a predicate position is a URI node. Of course, the correct thing to do is to allow all three node types in all three positions. (Well, four if we take the graph name as well!) But if we make a change, all of my code base will need to be checked for this issue. This costs my company maybe $100K (very roughly). No one has even shown me $1K of advantage for this change. It is a no-brainer not to do the fix, even if it is technically correct. Jeremy
Re: Slightly off topic - content negotiation by language accept headers
Michael, Cheers, Michael [1] http://www.w3.org/QA/2006/02/content_negotiation.html -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Michael Smethurst michael.smethu...@bbc.co.uk Date: Wed, 23 Jun 2010 15:55:09 +0100 To: Linked Data community public-lod@w3.org Subject: Slightly off topic - content negotiation by language accept headers Resent-From: Linked Data community public-lod@w3.org Resent-Date: Wed, 23 Jun 2010 14:55:28 + Hello Realise this is slightly off topic for this list but since you people know the most about content negotiation of the people I know I thought I'd try here first. So... ...does anyone know of any real world sites that content negotiate on language accept headers? Yves has pointed out that Google search does do this so if I request google.co.uk with german set above english in my browser preferences it serves a page at co.uk in german. But I'm not sure if any other sites are doing this... Is it something anyone here has tried with eg dbpedia? Also unsure how many browsers support this setting. I can see and use the setting in mac firefox 3 but can't find anything in the preferences for either safari or chrome Finally wondering how google et al treat a site that does conneg on language. If the same url can serve french and english will it be indexed as both? Do search bots send out language accept headers? Any help (including pointers elsewhere) much appreciated Cheers Michael http://www.bbc.co.uk/ This e-mail (and any attachments) is confidential and may contain personal views which are not the views of the BBC unless specifically stated. If you have received it in error, please delete it from your system. Do not use, copy or disclose the information in any way nor act in reliance on it and notify the sender immediately. 
Please note that the BBC monitors e-mails sent or received. Further communication will signify your consent to this.
Re: Slightly off topic - content negotiation by language accept headers
Michael, (sorry for the last post, hit accidentally the send button ;) The best overview I'm aware of is a W3C QA blog post [1], contains also some valuable pointers - hope that helps. Cheers, Michael [1] http://www.w3.org/QA/2006/02/content_negotiation.html -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Michael Smethurst michael.smethu...@bbc.co.uk Date: Wed, 23 Jun 2010 15:55:09 +0100 To: Linked Data community public-lod@w3.org Subject: Slightly off topic - content negotiation by language accept headers Resent-From: Linked Data community public-lod@w3.org Resent-Date: Wed, 23 Jun 2010 14:55:28 + Hello Realise this is slightly off topic for this list but since you people know the most about content negotiation of the people I know I thought I'd try here first. So... ...does anyone know of any real world sites that content negotiate on language accept headers? Yves has pointed out that Google search does do this so if I request google.co.uk with german set above english in my browser preferences it serves a page at co.uk in german. But I'm not sure if any other sites are doing this... Is it something anyone here has tried with eg dbpedia? Also unsure how many browsers support this setting. I can see and use the setting in mac firefox 3 but can't find anything in the preferences for either safari or chrome Finally wondering how google et al treat a site that does conneg on language. If the same url can serve french and english will it be indexed as both? Do search bots send out language accept headers? Any help (including pointers elsewhere) much appreciated Cheers Michael http://www.bbc.co.uk/ This e-mail (and any attachments) is confidential and may contain personal views which are not the views of the BBC unless specifically stated. 
If you have received it in error, please delete it from your system. Do not use, copy or disclose the information in any way nor act in reliance on it and notify the sender immediately. Please note that the BBC monitors e-mails sent or received. Further communication will signify your consent to this.
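For the archives, the server side of what this thread is asking about can be sketched in a few lines of Python: negotiating on Accept-Language means parsing the header into quality-ranked language tags. This is an illustrative, minimal parser (it ignores wildcard matching and other RFC subtleties), not a description of how Google or any other site actually implements it:

```python
def parse_accept_language(header):
    """Parse an Accept-Language header into (tag, q) pairs, best first.
    Minimal sketch: no wildcard matching, malformed q-values score 0."""
    entries = []
    for item in header.split(","):
        parts = [p.strip() for p in item.split(";")]
        tag, q = parts[0], 1.0
        for p in parts[1:]:
            if p.startswith("q="):
                try:
                    q = float(p[2:])
                except ValueError:
                    q = 0.0
        if tag:
            entries.append((tag, q))
    # Highest quality first; the server serves the first language it has.
    return sorted(entries, key=lambda e: -e[1])

# The German-above-English browser preference from the mail:
prefs = parse_accept_language("de,en;q=0.8")
```

Here `prefs` comes back with `de` ranked first, which is why a negotiating site would serve the German variant; a `Vary: Accept-Language` response header would then tell caches (and, in principle, search bots) that the same URL has per-language variants.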
Re: 303 redirect to a fragment what should a linked data client do?
Christoph, Are you aware of the respective HTTPbis ticket [1]? Cheers, Michael [1] http://trac.tools.ietf.org/wg/httpbis/trac/ticket/43 -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Christoph LANGE ch.la...@jacobs-university.de Organization: Jacobs University Bremen Date: Thu, 10 Jun 2010 13:40:42 +0200 To: Linked Data community public-lod@w3.org Subject: 303 redirect to a fragment what should a linked data client do? Resent-From: Linked Data community public-lod@w3.org Resent-Date: Thu, 10 Jun 2010 11:40:31 + Hi all, in our setup we are still somehow fighting with ill-conceived legacy URIs from the pre-LOD age. We heavily make use of hash URIs there, so it could happen that a client, requesting http://example.org/foo#bar (thus actually requesting http://example.org/foo) gets redirected to http://example.org/baz#grr (note that I don't mean http://example.org/baz%23grr here, but really the un-escaped hash). I observed that when serving such a result as XHTML, the browser (at least Firefox) scrolls to the #grr fragment of the resulting page. But what should an RDF-aware client do? I guess it should still look out for triples with the originally requested subject http://example.org/foo#bar, e.g. <rdf:Description rdf:about="http://example.org/foo#bar">, or (assuming xml:base="http://example.org/foo") for <rdf:Description rdf:ID="bar">. Is my assumption right? Thanks in advance for any help, Christoph -- Christoph Lange, Jacobs Univ. Bremen, http://kwarc.info/clange, Skype duke4701
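The behaviour suggested in this thread — keep looking for the originally requested hash URI as the subject, and fetch only the fragment-less part of the redirect target — can be sketched as a small helper. This encodes the thread's working assumption, not a normative rule, and `follow_303` is a hypothetical name:

```python
from urllib.parse import urldefrag

def follow_303(requested_uri, location):
    """Given the hash URI a client set out to describe and the Location of
    a 303 response, return (url_to_fetch, subject_uri). The fragment of
    the Location is not part of the next HTTP request, and the subject the
    client looks for in the returned triples stays the originally
    requested URI."""
    url_to_fetch, _frag = urldefrag(location)
    return url_to_fetch, requested_uri

# The scenario from the mail: foo#bar is 303-redirected to baz#grr.
fetch, subject = follow_303("http://example.org/foo#bar",
                            "http://example.org/baz#grr")
```

So the client would GET `http://example.org/baz` and still query the result for triples about `http://example.org/foo#bar`, exactly as the mail guesses.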
Re: Describing Images (and similar), and Descriptor discovery.
Nathan, From the TAG, related, maybe it helps you a bit [1]. Cheers, Michael [1] http://lists.w3.org/Archives/Public/www-tag/2010Feb/.html -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Nathan nat...@webr3.org Organization: webr3 Reply-To: nat...@webr3.org Date: Wed, 09 Jun 2010 14:01:32 +0100 To: Linked Data community public-lod@w3.org, Semantic Web community semantic-...@w3.org Subject: Describing Images (and similar), and Descriptor discovery. Resent-From: Linked Data community public-lod@w3.org Resent-Date: Wed, 09 Jun 2010 13:02:35 + Hi All, I'm just wondering what approaches people are taking to describing non rdf/html resources, such as Images, PDFs and similar? Given that we have a jpeg with the URL http://example.org/image.jpg would we: give it the Identifier http://example.org/image.jpg#this and serve an RDF description via conneg give it the Identifier http://example.org/image#this and again serve an RDF description via conneg - a possible issue introduced here is if you have an alternative SVG version with its own fragments(?) (SVGTINY12[1]) give it a completely different Identifier http://example.org/r/132#image and 'link' from the descriptor to the image with..? (dcterms:hasFormat, sioc:link, uri:uri, link:uri, other?) and on the reverse, how about descriptor discovery for images/PDFs etc, expose via the Link header (tight coupling to HTTP), or? As a side but related question, do we see the Web Of Data as running autonomously from the Web of Documents, as in it is taken that rdf / linked data clients will purely run over the web of linked data and reference non rdf resources via links with no backwards discovery needed, or is the link from web of documents back to web of data needed by any specific use cases? does conneg suffice?
what if the image is ftp://example.org/image.jpg? Best, Nathan [1] http://www.w3.org/TR/2008/REC-SVGTiny12-20081222
Re: Organization ontology
Dave, We would like to announce the availability of an ontology for description of organizational structures including government organizations. Brilliant! I submitted it now to Sindice [1] and 'registered' the org prefix in prefix.cc [2] - you might want to support it by voting it up ;) Cheers, Michael [1] http://sindice.com/search?q=domain%3Awww.w3.org+Core+organization+ontology&qt=term [2] http://prefix.cc/org -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Dave Reynolds dave.e.reyno...@googlemail.com Date: Tue, 01 Jun 2010 08:50:32 +0100 To: Linked Data community public-lod@w3.org, public-egov...@w3.org public-egov...@w3.org Subject: Organization ontology Resent-From: public-egov...@w3.org public-egov...@w3.org Resent-Date: Tue, 01 Jun 2010 07:51:09 + We would like to announce the availability of an ontology for description of organizational structures including government organizations. This was motivated by the needs of the data.gov.uk project. After some checking we were unable to find an existing ontology that precisely met our needs and so developed this generic core, intended to be extensible to particular domains of use. The ontology is documented at [1] and some discussion on the requirements and design process is at [2]. W3C have been kind enough to offer to host the ontology within the W3C namespace [3]. This does not imply that W3C endorses the ontology, nor that it is part of any standards process at this stage. They are simply providing a stable place for posterity. Any changes to the ontology involving removal of, or modification to, existing terms (but not necessarily addition of new terms) will be announced to these lists. We suggest that any discussion take place on the public-lod list to avoid further cross-posting.
Dave, Jeni, John [1] http://www.epimorphics.com/public/vocabulary/org.html [2] http://www.epimorphics.com/web/category/category/developers/organization-ontology [3] http://www.w3.org/ns/org# (available in RDF/XML, N3, Turtle via conneg or append .rdf/.n3/.ttl)
Re: Java Framework for Content Negotiation
There's also Jersey [1] ... +1 to Jersey - had overall very good experience with it. If you want to have a quick look (not saying it's beautiful/exciting, but it might help to kick-start things) see [1] for my hacking with it. Cheers, Michael [1] http://bitbucket.org/mhausenblas/sparestfulql/ -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Dave Reynolds dave.e.reyno...@googlemail.com Date: Thu, 20 May 2010 11:08:03 +0100 To: Angelo Veltens angelo.velt...@online.de Cc: Linked Data community public-lod@w3.org Subject: Re: Java Framework for Content Negotiation Resent-From: Linked Data community public-lod@w3.org Resent-Date: Thu, 20 May 2010 10:08:45 + On 20/05/2010 11:03, Story Henry wrote: There is the RESTlet framework http://www.restlet.org/ There's also Jersey [1] and, for a minimalist solution to just the content matching piece see Mimeparse [2]. Dave [1] https://jersey.dev.java.net/ [2] http://code.google.com/p/mimeparse/ On 20 May 2010, at 10:49, Angelo Veltens wrote: Hello, I am just looking for a framework to do content negotiation in Java. Currently I am checking the HttpServletRequest myself, quick & dirty. Perhaps someone can recommend a framework/library that has solved this already. Thanks in advance, Angelo
Re: Java Framework for Content Negotiation
Angelo, I might have a non-information resource http://example.org/resource/foo I could place a REST-Webservice there and do content negotiation with @GET / @Produces Annotations. But this seems not correct to me, because it is a non-information resource and not a html or rdf/xml document. So it should never return html or rdf/xml but do a 303 redirect to an information resource instead, doesn't it? This is a recurring pattern and people tend to confuse things (conneg and 303), in my experience. I assume you've read [1] already? ;) Without more detailed knowledge about what you want to achieve it is hard for me to tell you anything beyond what has been discussed in various forums. Can you give me a more concrete description of your setup and goals? What does your data look like? What's the task you're trying to solve? Etc. Cheers, Michael [1] http://www.w3.org/TR/cooluris/ -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Angelo Veltens angelo.velt...@online.de Date: Thu, 20 May 2010 14:38:53 +0200 To: Linked Data community public-lod@w3.org Subject: Re: Java Framework for Content Negotiation Resent-From: Linked Data community public-lod@w3.org Resent-Date: Thu, 20 May 2010 12:39:34 + On 20.05.2010 12:18, Michael Hausenblas wrote: There's also Jersey [1] ... +1 to Jersey - had overall very good experience with it. If you want to have a quick look (not saying it's beautiful/exciting, but it might help to kick-start things) see [1] for my hacking with it.
Cheers, Michael [1] http://bitbucket.org/mhausenblas/sparestfulql/ Mmh, I have been thinking about using a REST web service already, but there is one thing I'm quite unsteady with: I might have a non-information resource http://example.org/resource/foo I could place a REST web service there and do content negotiation with @GET / @Produces Annotations. But this seems not correct to me, because it is a non-information resource and not a html or rdf/xml document. So it should never return html or rdf/xml but do a 303 redirect to an information resource instead, shouldn't it? Kind regards, Angelo
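One framework-independent way to read the advice in this thread: the non-information resource URI never serves a representation itself; it only 303-redirects, and content negotiation merely selects which information resource to redirect to. A toy sketch with hypothetical paths (`/resource/`, `/data/`, `/page/`, in the style of the Cool URIs document cited above):

```python
def respond(path, accept_header):
    """Toy request handler: always 303 away from non-information
    resources; conneg only picks the redirect target. Information
    resources (documents) are served directly."""
    if path.startswith("/resource/"):
        name = path[len("/resource/"):]
        if "application/rdf+xml" in accept_header:
            return 303, "/data/" + name   # redirect to the RDF document
        return 303, "/page/" + name       # redirect to the HTML document
    return 200, path                      # documents answer with 200 OK

status, location = respond("/resource/foo", "application/rdf+xml")
```

In JAX-RS terms, the `@GET`/`@Produces` pair would live on the resource class, but each method would return a 303 `Response` pointing at the document URI rather than an entity — that keeps the non-information resource from ever answering 200 itself.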
Re: [dady] Dataset Dynamics meet-up at WWW2010
Sandro, (funny, I posted this last Friday and it showed up only yesterday ...) I think for the use cases I have in mind (large scale, ad hoc, real-time mirroring of RDF), a key requirement is constant time (per triple) to apply deltas, including maintaining a secure hash. I was happy to see I could (I think) meet this with some tweaking of blank nodes. I sketched both a delta format (gruf) and a subscription protocol (websub), which are separate. Rough specs are here: http://websub.org/wiki/GRUF http://websub.org/wiki/Spec This looks great, thanks! It would be interesting to learn more about the status of this project (are implementations available, etc.) and your plans with it. Indeed, as I recently wrote [1], there are plenty of approaches and proposals out there and I think the time is ripe to sit together, get more practical experience in implementing and deploying stuff. I hope you can make it to the Dataset Dynamics meeting [2] and introduce your proposal to the community. I'll sync with Juergen to catch up. Cheers, Michael [1] http://blog.semantic-web.at/2010/04/26/a-dynamic-web-of-data/ [2] http://esw.w3.org/DatasetDynamics/Meetings -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Sandro Hawke san...@w3.org Reply-To: dady dataset-dynam...@googlegroups.com Date: Tue, 27 Apr 2010 14:53:39 -0400 To: dady dataset-dynam...@googlegroups.com Cc: Linked Data community public-lod@w3.org, Jürgen Umbrich juergen.umbr...@deri.org Subject: Re: [dady] Dataset Dynamics meet-up at WWW2010 Changes in Linked Data sources (dataset dynamics) and how to deal with it are an important and emerging issue [1][2]. As already mentioned, there will be a dataset dynamics meet-up at WWW2010. I first planned to register a BoF, but it is still unclear if this is possible [3]. 
I'd now propose to have a break-out session at the W3C LOD Track [4] - as I won't be able to make it to WWW, Juergen Umbrich (in CC) would take care of the local organisation. Sounds like a good idea, although I don't know what else might be on the menu. So, I just learned of the dataset dynamics term and community last week (from Michael), but I've been thinking about this for many years. I got inspired and sketched out a design a few months ago, which I think is pretty good. I've been hoping to return to it some day soon, but if we're talking about this Thursday, I might as well share the drafts now. I think for the use cases I have in mind (large scale, ad hoc, real-time mirroring of RDF), a key requirement is constant time (per triple) to apply deltas, including maintaining a secure hash. I was happy to see I could (I think) meet this with some tweaking of blank nodes. I sketched both a delta format (gruf) and a subscription protocol (websub), which are separate. Rough specs are here: http://websub.org/wiki/GRUF http://websub.org/wiki/Spec -- Sandro Cheers, Michael [1] http://esw.w3.org/DatasetDynamics [2] http://data-gov.tw.rpi.edu/wiki/TWC_Data-gov_Vocabulary_Proposal#Change_of_dataset [3] http://twitter.com/WWW2010/status/12452736301 [4] http://esw.w3.org/Camps:LODCampW3CTrack#breakout -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html -- Subscription settings: http://groups.google.com/group/dataset-dynamics/subscribe?hl=en
Dataset Dynamics meet-up at WWW2010
All, Changes in Linked Data sources (dataset dynamics) and how to deal with them are an important and emerging issue [1][2]. As already mentioned, there will be a dataset dynamics meet-up at WWW2010. I first planned to register a BoF, but it is still unclear if this is possible [3]. I'd now propose to have a break-out session at the W3C LOD Track [4] - as I won't be able to make it to WWW, Juergen Umbrich (in CC) would take care of the local organisation. Cheers, Michael [1] http://esw.w3.org/DatasetDynamics [2] http://data-gov.tw.rpi.edu/wiki/TWC_Data-gov_Vocabulary_Proposal#Change_of_dataset [3] http://twitter.com/WWW2010/status/12452736301 [4] http://esw.w3.org/Camps:LODCampW3CTrack#breakout -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html
Re: [foaf-protocols] ACL
Nathan, That sort of reminds me of something [1] ;) So, I asked around a bit [2] and the answer essentially was: go register one ... fancy doing it together? Cheers, Michael [1] http://webofdata.wordpress.com/2010/03/04/wod-access-control-discovery/ [2] http://lists.w3.org/Archives/Public/ietf-http-wg/2010JanMar/0218.html -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Nathan nat...@webr3.org Organization: webr3 Reply-To: nat...@webr3.org Date: Mon, 19 Apr 2010 22:37:41 +0100 To: Linked Data community public-lod@w3.org, foaf-protocols foaf-protoc...@lists.foaf-project.org Subject: [foaf-protocols] ACL Hi All, I'm just trying to get an implementation of web access control [1] off the ground and have hit upon a small issue. I'm planning on exposing links to acl files via the Link header as directed, however I've realised there is no rel= for it, hence I was opting for a custom temporary type. On a first look a relation of acl:acl looks to be the one, but after checking the actual ontology the acl:acl link simply isn't there, thus in the meantime I've opted for: Link: </.wac/everyone.n3>; rel="http://www.w3.org/ns/auth/acl#"; title="Access Control File" Any improvements, or refinements welcome, as the above is just a temporary measure. Best, Nathan ___ foaf-protocols mailing list foaf-protoc...@lists.foaf-project.org http://lists.foaf-project.org/mailman/listinfo/foaf-protocols
Re: [foaf-protocols] ACL
Indeed! I've started implementing it last night (figured it was time to do it, rather than ponder and debate it!) +1 Where? :) Cheers, Michael -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Nathan nat...@webr3.org Organization: webr3 Reply-To: nat...@webr3.org Date: Tue, 20 Apr 2010 15:05:38 +0100 To: Michael Hausenblas michael.hausenb...@deri.org Cc: Linked Data community public-lod@w3.org, foaf-protocols foaf-protoc...@lists.foaf-project.org Subject: Re: [foaf-protocols] ACL Michael Hausenblas wrote: Nathan, That sort of reminds me of something [1] ;) Indeed! I've started implementing it last night (figured it was time to do it, rather than ponder and debate it!) So far it's been relatively easy and have managed to get basic ACL / ACF implemented and working. Also made a non-sparql dependant FOAF+SSL implementation which I'll be adding to libAuthenticate w/ Lazlo, Melvin etc over the next week or so. So, I asked a round a bit [2] and the answer essentially was: go register one ... fancy doing it together? Yup certainly do :) ACL Ontology wise afaict what's needed is the inverse of acl:accessTo - resource acl:acl acf or suchlike. However, I've also got another couple of suggestions for the acl ontology which I'll send through under different cover. 
Cheers, Michael Likewise, Nathan [1] http://webofdata.wordpress.com/2010/03/04/wod-access-control-discovery/ [2] http://lists.w3.org/Archives/Public/ietf-http-wg/2010JanMar/0218.html Other related reading for the archives: http://esw.w3.org/WebAccessControl http://esw.w3.org/Talk:WebAccessControl http://esw.w3.org/WebAccessControl http://www.w3.org/DesignIssues/CloudStorage.html http://www.w3.org/DesignIssues/ReadWriteLinkedData.html http://dig.csail.mit.edu/2009/Papers/ISWC/rdf-access-control/paper.pdf http://dig.csail.mit.edu/2009/presbrey/UAP.pdf http://linkeddata.deri.ie/sites/linkeddata.deri.ie/files/rw-wod-tr.pdf
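For the archives, Nathan's "inverse of acl:accessTo" point can be sketched in Turtle. The ex:accessControl property below is hypothetical (the acl ontology had no such term at the time), and the file paths are placeholders:

```turtle
@prefix acl: <http://www.w3.org/ns/auth/acl#> .
@prefix ex:  <http://example.org/vocab#> .

# Direction the ontology already supports: an authorization grants access to a resource
</.wac/everyone.n3#auth> acl:accessTo </doc> .

# The missing inverse: from the resource, discover its access-control file
</doc> ex:accessControl </.wac/everyone.n3> .
```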
Dataset dynamics
All, Indeed a very exciting weekend concerning dataset dynamics [1]. I got the feeling that we, the dataset dynamics group [2], didn't do an awfully good job re marketing so far, given that many people active in this area were not aware of its existence ;) - for the background see also the recent NodMag article Keeping up with a LOD of Changes [3] ... Hence, I'd like to invite people interested in this area to join in and discuss the next steps in this emerging and important area (WWW could be a good place for a BoF or the like). <shameless-self-plug> Dataset dynamics is also a hot topic at the upcoming LDOW2010 workshop. I invite you to have a look at our paper Towards Dataset Dynamics: Change Frequency of Linked Open Data Sources [4] and in case you're at WWW2010, chime in and contribute to the discussion [5]. </shameless-self-plug> Cheers, Michael [1] http://esw.w3.org/DatasetDynamics [2] http://groups.google.com/group/dataset-dynamics [3] http://www.talis.com/nodalities/pdf/nodalities_issue9.pdf [4] http://events.linkeddata.org/ldow2010/papers/ldow2010_paper12.pdf [5] http://events.linkeddata.org/ldow2010/#programme -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html
Re: [semanticweb] ANN: DBpedia 3.5 released
Leigh, You might find some answers in our recent WebSci 2010 paper: Knud Möller, Michael Hausenblas, Richard Cyganiak, Siegfried Handschuh and Gunnar Grimnes. Learning from Linked Open Data Usage: Patterns & Metrics. Web Science Conference 2010 [1]. Cheers, Michael [1] http://linkeddata.deri.ie/sites/default/files/lod-usage-websci-2010.pdf -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Leigh Dodds leigh.do...@talis.com Date: Wed, 14 Apr 2010 12:33:45 +0100 To: Ivan Mikhailov imikhai...@openlinksw.com Cc: baran ba...@goldmail.de, semanticweb semantic...@yahoogroups.com, Linked Data community public-lod@w3.org, Semantic Web community semantic-...@w3.org, dbpedia-discussion dbpedia-discuss...@lists.sourceforge.net, dbpedia-announcements dbpedia-announceme...@lists.sourceforge.net, Chris Bizer ch...@bizer.de Subject: Re: [semanticweb] ANN: DBpedia 3.5 released Resent-From: Linked Data community public-lod@w3.org Resent-Date: Wed, 14 Apr 2010 11:44:24 + Hi, 2010/4/14 Ivan Mikhailov imikhai...@openlinksw.com: Similarly, growing database size and growing hit rate and growing complexity of queries are not obviously visible from outside, but turn the hosting into a race. We're improving the underlying RDBMS as fast as we only can just to prevent the service from total halt. One might wish to provide a better service on their own RDBMS and thus to make a good advertisement, but nobody else want to do that _and_ can do that, so we're alone under this load. Out of interest, do you actually share any metrics on usage levels, common sparql queries, etc? We have a copy of the dbpedia data loaded into the Talis Platform, but its not yet up to date with 3.5. So there's more than one option already. 
Although the service characteristics/features are different (different software) Cheers, L. -- Leigh Dodds Programme Manager, Talis Platform Talis leigh.do...@talis.com http://www.talis.com
Re: Using predicates which have no ontology?
Ed, Would it be hard to remove the empty literal assertions? e.g. Fixed now. Dunno why I had it there in the first place ;) It's interesting that the latest efforts to create a Link Relation Registry seem to be intentionally avoiding publishing machine readable data for the registry [1]. I was wondering if Mark Nottingham's efforts to revamp link relations might present a good opportunity for us to lobby the IETF to start publishing a bit of RDFa for the link relations registry... Agree! Let's lobby :) Cheers, Michael -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Ed Summers e...@pobox.com Date: Mon, 5 Apr 2010 09:37:49 -0400 To: Linked Data community public-lod@w3.org Subject: Re: Using predicates which have no ontology? Resent-From: Linked Data community public-lod@w3.org Resent-Date: Mon, 05 Apr 2010 13:38:22 + Hi Michael, Would it be hard to remove the empty literal assertions? e.g.

--
<http://www.iana.org/assignments/relation/alternate> a awol:RelationType ;
    rdfs:label "alternate" ;
    dcterms:dateAccepted "" ;
    dcterms:description "" ;
    rdfs:isDefinedBy <http://www.iana.org/go/rfc4287> .
--

It's interesting that the latest efforts to create a Link Relation Registry seem to be intentionally avoiding publishing machine readable data for the registry [1]. I was wondering if Mark Nottingham's efforts to revamp link relations might present a good opportunity for us to lobby the IETF to start publishing a bit of RDFa for the link relations registry... //Ed [1] http://tools.ietf.org/html/draft-nottingham-http-link-header-09#appendix-A
Re: Using predicates which have no ontology?
If not, would you consider updating your interim solution to describe URI:s under [1]? I mean, since [2] currently uses the real IANA URI:s (i.e. the unsanctioned ones) and those, as Danny cautioned, could end up e.g. being resolved to documents, breaking semantics (as well as not being discoverable). I'm not totally sure if I understand but I guess the answer would be yes ;) It's interesting that you've modelled the relation-type as RDF properties in [4] whereas I turned them (in [1]) into instances of the class 'awol:RelationType' from the AtomOwl vocabulary. Any thoughts? Cheers, Michael -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Niklas Lindström lindstr...@gmail.com Date: Sat, 3 Apr 2010 22:46:29 +0200 To: Michael Hausenblas michael.hausenb...@deri.org Cc: nat...@webr3.org, Danny Ayers danny.ay...@gmail.com, Phil Archer p...@philarcher.org, Linked Data community public-lod@w3.org Subject: Re: Using predicates which have no ontology? Hi Michael, that's great! If [2] were to be updated with that [1] (i.e. officially containing RDFa about these URI:s), and would be 303:d to from [3] (along with anything under that URL), this would be all we need. I know it hasn't happened for years, but sometimes a nudge at just the right time may be all it takes.. If not, would you consider updating your interim solution to describe URI:s under [1]? I mean, since [2] currently uses the real IANA URI:s (i.e. the unsanctioned ones) and those, as Danny cautioned, could end up e.g. being resolved to documents, breaking semantics (as well as not being discoverable). I did a manual (well, vim-macro:ed) conversion of [3] into RDF/XML, but had to leave to eat easter eggs at my sister's and entertain her kids. :) It's located at [4] now, and quite similar to the data in [1]. 
Note that I do consider [1] much more interesting. (That said, if anyone would like me to make e.g. an XSLT for turning [4] into something like [1], just say the word.) Best regards and happy easter! Niklas [1]: http://purl.org/NET/atom-link-rel [2]: http://www.iana.org/assignments/link-relations/link-relations.xhtml [3]: http://www.iana.org/assignments/relation/ [4]: http://bitbucket.org/niklasl/tripleheap/src/tip/iana-link-relations.rdf On Sat, Apr 3, 2010 at 8:38 PM, Michael Hausenblas michael.hausenb...@deri.org wrote: Nathan, Phil, All, and quote: If the relation-type is a relative URI, its base URI MUST be considered to be http://www.iana.org/assignments/relation/; http://tools.ietf.org/id/draft-nottingham-http-link-header-03.txt obviously all the links defined by: http://www.iana.org/assignments/link-relations/link-relations.xhtml (from the atom rfc) such as edit, self, related etc - with additional consideration to the thought that these will end up in rdf via RDFa/grddl etc v soon if not already. Any guidance? Yes. Use [1] ... My motto is: acting rather than talking. So, I took [2] as a starting point - which is already in nice XHTML format - and manually added some RDFa. After an hour I ended up with [1] (though, to be fair, two Wii games with the kids and consuming some Easter eggs also took place in that hour). So, [1] is really a sort of an interim solution (though, in the distributed data world I do expect much more of such fixes) and I encourage Phil, who is an editor of [2] to use the template from [1] at the 'official' location. Happy Easter! (and back to Wii games, for now ;) Cheers, Michael [1] http://purl.org/NET/atom-link-rel [2] http://www.iana.org/assignments/link-relations/link-relations.xhtml -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. 
+353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Nathan nat...@webr3.org Organization: webr3 Reply-To: nat...@webr3.org Date: Sat, 03 Apr 2010 00:14:16 +0100 To: Danny Ayers danny.ay...@gmail.com Cc: Linked Data community public-lod@w3.org Subject: Re: Using predicates which have no ontology? Resent-From: Linked Data community public-lod@w3.org Resent-Date: Fri, 02 Apr 2010 23:14:54 + Danny Ayers wrote: On 3 April 2010 00:53, Nathan nat...@webr3.org wrote: Hi All, Any guidance on using predicates in linked data / rdf which do not come from rdfs/owl. Specifically I'm considering the range of: http://www.iana.org/assignments/relation/* Can't find a URL that resolves there snap; but that's what rel=edit and so forth resolves to. see example: http://www.w3.org/2001/tag/doc/selfDescribingDocuments.html#ATOMSection and quote: If the relation-type is a relative URI, its
Re: Using predicates which have no ontology?
Thanks a lot Phil (for the clarification and the explanation). You helped indeed much more than you think you did, IMO ;) Agree to FUP with mnot on HTTP WG's mailing list, maybe with an XSLT handy, as you suggest. Cheers, Michael -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Phil Archer p...@philarcher.org Date: Tue, 06 Apr 2010 16:22:16 +0100 To: Michael Hausenblas michael.hausenb...@deri.org Cc: Niklas Lindström lindstr...@gmail.com, Kingsley Idehen kide...@openlinksw.com, nat...@webr3.org, Danny Ayers danny.ay...@gmail.com, Linked Data community public-lod@w3.org Subject: Re: Using predicates which have no ontology? Hi all, Thanks for keeping me in this loop and apologies for radio silence thus far. On a theoretical level - making the link registry available as data is, clearly, a jolly good idea and should happen. On a practical level I am sorry to say I don't think I can help. In the e-mail that Michael sent to bring me in to this discussion he said that I was an editor of the Atom registry. Sorry, no, I'm not. The ATOM Link registry is under the control of the IESG [1]. To get 'describedby' in there I had to send an e-mail to IANA [2]. But... it's all meant to be temporary. Version 09 of Mark Nottingham's HTTP Link header Internet Draft has just been published and, if, as we've been hoping for longer than I can remember, it becomes a full RFC then the ATOM Link registry will be replaced by a new registry [3]. The current XML version of the registry has a bunch of declarations that suggest that IANA is open to making different versions available if they can be automated. An XSLT that produced triples would be pretty simple I guess (linked GRDDL-style?) 
The informal place to raise issues around MNot's draft is the HTTP WG's mailing list (see announcement at [4]). Mark may be open to persuasion on seeking a data version of the registry. Alternatively one could write directly to IANA. Sorry I can't be of more direct practical help. Phil. [1] http://www.ietf.org/iesg/ [2] http://lists.w3.org/Archives/Public/public-powderwg/2009Feb/0007.html [3] http://tools.ietf.org/html/draft-nottingham-http-link-header-09 [4] http://lists.w3.org/Archives/Public/ietf-http-wg/2010AprJun/0014.html Niklas Lindström wrote: Kingsley, 2010/4/6 Kingsley Idehen kide...@openlinksw.com: Niklas Lindström wrote: Niklas, Nice! I would once again suggest adding local owl:equivalentProperty assertions which enables a reasoner to treat the IANA URIs as synonyms. This is in line with what I like to call the: owl:shameAs pattern :-) Kingsley Hi Kingsley, thanks! Yes, I think that'd be good. But my sketch already describes the IANA URI:s directly (by, unsolicitedly, using @xml:base=http://www.iana.org/assignments/relation/;), so *if* that RDF (or preferably Michael's richer and RDFa-based one) were official, we wouldn't need that, right? (As those would be self-referential statements..) Otherwise, if we were to mint our own (community official) URI:s for each of these properties, I'd agree that owl:equivalentProperty should definitely be there.. .. Well, unless it would be decided in the future that values in @rel:s at least in Atom are to be viewed as *indirect* references to relations via a document (akin to e.g. foaf:interest). Of course, that's not the case in XHTML+RDFa, but for the default names in @rel:s there the IANA URI:s aren't used (we have the http://www.w3.org/1999/xhtml/vocab#-based ones instead). So to nail down the definitions of (the nature of) the things the IANA relation URI:s identify, we'd either have to make it clear that they *are* relations (i.e. 
properties) in the RDF sense (and object-properties in the OWL sense), or that they're not. If it's undefined, we still can't really make any statements about what they are, even if we make up our own properties based on how we view them. (Well maybe, if it was declared that their precise meaning will be perpetually undefined.) So if they (the URI:s) are (direct references to relations), it'd be wonderful to have IANA publish some kind of RDF discoverable via [1] to make that clear. Thing is that we need RDF data representation now, and if we put the linked data somewhere (some data space) ASAP we can point to what will someday exist in an IANA data space -- the shameAs pattern is a productive mechanism for letting folks like IANA understand why this is so important etc. :-) absolutely. But do you think we should describe and use the IANA URI:s directly as properties, or that we need to mint new URI:s for them? The location of the document(s) containing these descriptions may very well be unreachable from iana.org for now
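Kingsley's owl:equivalentProperty ("owl:shameAs") suggestion, written out as a rough Turtle sketch; the ex: namespace below is a hypothetical community-minted one, not an agreed location:

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/link-rel/> .

# Community-minted property, declared equivalent to the (unsanctioned) IANA URI
ex:alternate a owl:ObjectProperty ;
    rdfs:label "alternate" ;
    owl:equivalentProperty <http://www.iana.org/assignments/relation/alternate> .
```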
Re: Using predicates which have no ontology?
Niklas, While I have seen definitions of these relations made by the community before (e.g. used directly in AtomOwl, and a complete listing made by Ed Summers, which I unfortunately cannot find now), You're not peradventure talking about [1], no? Cheers, Michael [1] http://mediatypes.appspot.com/ -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Niklas Lindström lindstr...@gmail.com Date: Sat, 3 Apr 2010 14:28:43 +0200 To: Danny Ayers danny.ay...@gmail.com Cc: Story Henry henry.st...@bblfish.net, nat...@webr3.org, Linked Data community public-lod@w3.org Subject: Re: Using predicates which have no ontology? Resent-From: Linked Data community public-lod@w3.org Resent-Date: Sat, 03 Apr 2010 12:29:37 + Hi, I definitely think IETF should place RDF representations at those locations, as Henry suggests (e.g. 303 to say http://www.iana.org/assignments/relation.rdf). Is there really no way we could make this happen? Since the http://www.iana.org/assignments/relation/* URI:s are used directly in many places it would be very beneficial to have those be the direct property identifiers. (And since there is really no technology other than RDF to precisely document their meaning as relations, not going that direct route would necessitate cumbersome indirection.) If not, a W3C-sanctioned vocabulary mapping each relation defined at [1] would really be the second best. We already have [2] defining a subset of these. A coordinated community effort could also do of course, as long as it was stable, durable and gained consensual support. While I have seen definitions of these relations made by the community before (e.g. 
used directly in AtomOwl, and a complete listing made by Ed Summers, which I unfortunately cannot find now), I think we may need something more centrally defined for these relations, as close to official IANA status as possible. Something from the W3C could be close enough. Boiling down to discoverability, consensus and stability. Best regards, Niklas [1]: http://tools.ietf.org/html/draft-nottingham-http-link-header-09#section-6.2.2 [2]: http://www.w3.org/1999/xhtml/vocab# On Sat, Apr 3, 2010 at 4:07 AM, Danny Ayers danny.ay...@gmail.com wrote: Henry, I'm pretty sure you'll have all workings on this - all that's needed is a flattened model. I bet it would only take a couple of weeks (months) to prepare that in a form that the W3C would accept as a Note or something. If you can pull together some of your old stuff, I'm happy to draft some text. It needs doing soon because of the initiatives that hang off Atom are getting interesting. Need to be in there from the get-go. On 3 April 2010 03:56, Danny Ayers danny.ay...@gmail.com wrote: About time to do another rev of that thing? The social xg is having another spin, might be a good time to throw it there. I suspect most folks (yourself there mostly Henry) think this time around it should be done minimally..? On 3 April 2010 01:29, Story Henry henry.st...@bblfish.net wrote: On 2 Apr 2010, at 23:53, Nathan wrote: Hi All, Any guidance on using predicates in linked data / rdf which do not come from rdfs/owl. Specifically I'm considering the range of: http://www.iana.org/assignments/relation/* Ah is that something you found in the AtomOWL spec? Perhaps we should just give them other names, until the IETF places RDF representations at those locations, which I imagine could take forever. Henry such as edit, self, related etc - with additional consideration to the thought that these will end up in rdf via RDFa/grddl etc v soon if not already. Any guidance? Regards, Nathan -- http://danny.ayers.name -- http://danny.ayers.name
Re: Using predicates which have no ontology?
Nathan, and quote: If the relation-type is a relative URI, its base URI MUST be considered to be http://www.iana.org/assignments/relation/; http://tools.ietf.org/id/draft-nottingham-http-link-header-03.txt Just for the record: the current draft of Web Linking is [1] and the statement above is not present anymore, in there. However, you find something alike in Appendix C. Cheers, Michael [1] http://tools.ietf.org/id/draft-nottingham-http-link-header-09.txt -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Nathan nat...@webr3.org Organization: webr3 Reply-To: nat...@webr3.org Date: Sat, 03 Apr 2010 00:14:16 +0100 To: Danny Ayers danny.ay...@gmail.com Cc: Linked Data community public-lod@w3.org Subject: Re: Using predicates which have no ontology? Resent-From: Linked Data community public-lod@w3.org Resent-Date: Fri, 02 Apr 2010 23:14:54 + Danny Ayers wrote: On 3 April 2010 00:53, Nathan nat...@webr3.org wrote: Hi All, Any guidance on using predicates in linked data / rdf which do not come from rdfs/owl. Specifically I'm considering the range of: http://www.iana.org/assignments/relation/* Can't find a URL that resolves there snap; but that's what rel=edit and so forth resolves to. see example: http://www.w3.org/2001/tag/doc/selfDescribingDocuments.html#ATOMSection and quote: If the relation-type is a relative URI, its base URI MUST be considered to be http://www.iana.org/assignments/relation/; http://tools.ietf.org/id/draft-nottingham-http-link-header-03.txt obviously all the links defined by: http://www.iana.org/assignments/link-relations/link-relations.xhtml (from the atom rfc) such as edit, self, related etc - with additional consideration to the thought that these will end up in rdf via RDFa/grddl etc v soon if not already. Any guidance? 
By using something as a predicate you are making statements about it. But... If you can find IANA terms like this, please use them - though beware the page isn't the concept. You might have to map them over to your own namespace, PURL URIs preferred. Would it make sense to knock up an ontology for all the standard link-relations and sameAs them through to the iana uri's? Best, Nathan
Re: write enabled web of data / acl/acf/wac etc
Simply looking for the best place to discuss acl/acf/wac / write enabled web of data etc - mailing list or irc or private contacts - unsure if this comes under the banner of linked data and thus this mailing list. i.e. whilst I can have a good realtime discussion about rest related things, coming up short with regards discussing the aforementioned write enabled web of data - any pointers? IMHO definitely here and on #swig IRC channel. Further, with regards the ESW wiki pages, I've not seen any discussions yet on articles, and with some of the documents I do have notes additions etc to add, but don't want to just add them ad-hoc without at least discussing or running past somebody else. It's a Wiki, so perfectly fine if you edit/comment stuff and then trigger discussions here and/or IRC. Great to see write-enabled Linked Data discussions happening again - lot of work still required till [1] can advance to a stable state ;) Cheers, Michael [1] http://www.w3.org/DesignIssues/ReadWriteLinkedData.html -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Nathan nat...@webr3.org Organization: webr3 Reply-To: nat...@webr3.org Date: Sat, 03 Apr 2010 18:16:43 +0100 To: Linked Data community public-lod@w3.org Cc: Michael Hausenblas michael.hausenb...@deri.org Subject: write enabled web of data / acl/acf/wac etc Hi All, Simply looking for the best place to discuss acl/acf/wac / write enabled web of data etc - mailing list or irc or private contacts - unsure if this comes under the banner of linked data and thus this mailing list. i.e. whilst I can have a good realtime discussion about rest related things, coming up short with regards discussing the aforementioned write enabled web of data - any pointers? 
Further, with regards the ESW wiki pages, I've not seen any discussions yet on articles, and with some of the documents I do have notes additions etc to add, but don't want to just add them ad-hoc without at least discussing or running past somebody else. Many Regards, Nathan
Re: Using predicates which have no ontology?
Nathan, Phil, All, and quote: If the relation-type is a relative URI, its base URI MUST be considered to be http://www.iana.org/assignments/relation/; http://tools.ietf.org/id/draft-nottingham-http-link-header-03.txt obviously all the links defined by: http://www.iana.org/assignments/link-relations/link-relations.xhtml (from the atom rfc) such as edit, self, related etc - with additional consideration to the thought that these will end up in rdf via RDFa/grddl etc v soon if not already. Any guidance? Yes. Use [1] ... My motto is: acting rather than talking. So, I took [2] as a starting point - which is already in nice XHTML format - and manually added some RDFa. After an hour I ended up with [1] (though, to be fair, two Wii games with the kids and consuming some Easter eggs also took place in that hour). So, [1] is really a sort of an interim solution (though, in the distributed data world I do expect much more of such fixes) and I encourage Phil, who is an editor of [2] to use the template from [1] at the 'official' location. Happy Easter! (and back to Wii games, for now ;) Cheers, Michael [1] http://purl.org/NET/atom-link-rel [2] http://www.iana.org/assignments/link-relations/link-relations.xhtml -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Nathan nat...@webr3.org Organization: webr3 Reply-To: nat...@webr3.org Date: Sat, 03 Apr 2010 00:14:16 +0100 To: Danny Ayers danny.ay...@gmail.com Cc: Linked Data community public-lod@w3.org Subject: Re: Using predicates which have no ontology? Resent-From: Linked Data community public-lod@w3.org Resent-Date: Fri, 02 Apr 2010 23:14:54 + Danny Ayers wrote: On 3 April 2010 00:53, Nathan nat...@webr3.org wrote: Hi All, Any guidance on using predicates in linked data / rdf which do not come from rdfs/owl. 
Specifically I'm considering the range of: http://www.iana.org/assignments/relation/* Can't find a URL that resolves there snap; but that's what rel=edit and so forth resolves to. see example: http://www.w3.org/2001/tag/doc/selfDescribingDocuments.html#ATOMSection and quote: If the relation-type is a relative URI, its base URI MUST be considered to be http://www.iana.org/assignments/relation/; http://tools.ietf.org/id/draft-nottingham-http-link-header-03.txt obviously all the links defined by: http://www.iana.org/assignments/link-relations/link-relations.xhtml (from the atom rfc) such as edit, self, related etc - with additional consideration to the thought that these will end up in rdf via RDFa/grddl etc v soon if not already. Any guidance? By using something as a predicate you are making statements about it. But... If you can find IANA terms like this, please use them - though beware the page isn't the concept. You might have to map them over to your own namespace, PURL URIs preferred. Would it make sense to knock up an ontology for all the standard link-relations and sameAs them through to the iana uri's? Best, Nathan
LDOW2010 workshop papers and programme available
All, The LDOW2010 workshop papers as well as the programme of the workshop are now available online and can be accessed at [1]. Again, congratulations to the authors and a huge thanks to the members of the LDOW program committee for all their work! Cheers, Chris Bizer, Tom Heath, Tim Berners-Lee, Michael Hausenblas (LDOW 2010 Organizing Committee) [1] http://events.linkeddata.org/ldow2010/ -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html
Re: Figuring out what's behind a SPARQL endpoint
Constantine, These queries give me some indication of what's there ... but what would be handy is some sort of visualisation or analysis tool that gives me statistics like the number of resources contained in the endpoint, the type and predicate vocabularies used, and the density of linking between resources. Anything like this exist? Short answer: Yes, the voiD vocabulary and voiD tool set [1]-[6] as well as the W3C SPARQL service description [7]. Longer answer: There are two answers to it, IMO: first, the metadata regarding the datasets, which is covered by voiD, the vocabulary of interlinked datasets [1],[2],[3] - there are dedicated stores [4], [5] where you can find the descriptions and you'll also be able to find the voiD descriptions via general-purpose semantic indexers such as Sindice. There are also voiD tools that allow you to generate voiD descriptions [6]. The second part is related to SPARQL itself. I can only point you into the direction as I'm not directly involved in this activity: the W3C SPARQL Working Group is working on SPARQL 1.1 Service Description [7]. BTW: we take care of making sure that voiD plays nicely together with the W3C service description stuff ;) Cheers, Michael [1] http://semanticweb.org/wiki/VoiD [2] http://rdfs.org/ns/void/ [3] http://rdfs.org/ns/void-guide [4] http://void.rkbexplorer.com/ [5] http://kwijibo.talis.com/voiD/ [6] http://lab.linkeddata.deri.ie/ve2/ [7] http://www.w3.org/TR/sparql11-service-description/ -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. 
+353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Hondros, Constantine constantine.hond...@wolterskluwer.com Date: Tue, 23 Mar 2010 08:59:02 +0100 To: Linked Data community public-lod@w3.org Subject: Figuring out what's behind a SPARQL endpoint Resent-From: Linked Data community public-lod@w3.org Resent-Date: Tue, 23 Mar 2010 07:59:40 + What's the best way to get a grip on what's actually behind an endpoint? I've been mulling over a proof-of-concept project to enrich published legal content - already highly annotated with RDF metadata - with RDF content from open government sources. But I'm kind of baffled by how best to assess the richness of an endpoint other than by running my own SPARQLs - eg. listing DISTINCT predicates, or CONSTRUCTing some of the typed resources. These queries give me some indication of what's there ... but what would be handy is some sort of visualisation or analysis tool that gives me statistics like the number of resources contained in the endpoint, the type and predicate vocabularies used, and the density of linking between resources. Anything like this exist?
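Pending a voiD description, the statistics Constantine asks about can be approximated with a couple of generic probes along the lines he mentions. These assume an endpoint with SPARQL 1.1 aggregates (under plain SPARQL 1.0 one is limited to e.g. SELECT DISTINCT ?p), and counting may time out on large public endpoints:

```sparql
# Classes in use, by number of typed instances
SELECT ?class (COUNT(?s) AS ?instances)
WHERE { ?s a ?class }
GROUP BY ?class
ORDER BY DESC(?instances)

# Predicates in use, by number of triples
SELECT ?p (COUNT(*) AS ?uses)
WHERE { ?s ?p ?o }
GROUP BY ?p
ORDER BY DESC(?uses)
```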
Re: What is the class of a Named Graph?
Nathan, Any further input before I start using rdfg-1:Graph when describing graphs? I'd suggest you forget about both references and go with the upcoming SPARQL standard [1]. Cheers, Michael [1] http://www.w3.org/TR/2010/WD-sparql11-service-description-20100126/#id41744 -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Nathan nat...@webr3.org Organization: webr3 Reply-To: nat...@webr3.org Date: Sun, 21 Feb 2010 00:38:39 + To: Linked Data community public-lod@w3.org Subject: What is the class of a Named Graph? Resent-From: Linked Data community public-lod@w3.org Resent-Date: Sun, 21 Feb 2010 00:39:24 + Hi All, As the subject line goes - what is the (recommended) rdfs:Class of a Named Graph? Thus far I can only see: a: http://www.w3.org/2004/03/trix/rdfg-1/Graph b: http://sw.nokia.com/RDFQ-1/Graph Where [a] is used as the domain of swp:Warrant,Authority etc. Any further input before I start using rdfg-1:Graph when describing graphs? Many Regards, Nathan
Re: What is the class of a Named Graph?
What you pointed at is a property sd:namedGraph. Well spotted! But I didn't really say: here is the class name. I wanted to point out that there is something relevant, likely to be part of an upcoming standard, so one should have it in mind. Sorry for not being explicit enough in the first place ;) The upcoming SPARQL standard doesn't define any class for named graphs. Not yet. Any news from this end, Greg? Cheers, Michael -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Jiří Procházka oji...@gmail.com Date: Sun, 21 Feb 2010 10:51:16 +0100 To: Michael Hausenblas michael.hausenb...@deri.org Cc: nat...@webr3.org, Linked Data community public-lod@w3.org Subject: Re: What is the class of a Named Graph? What you pointed at is a property sd:namedGraph. The upcoming SPARQL standard doesn't define any class for named graphs. I support using: http://www.w3.org/2004/03/trix/rdfg-1/Graph Best, Jiri On 02/21/2010 10:40 AM, Michael Hausenblas wrote: Nathan, Any further input before I start using rdfg-1:Graph when describing graphs? I'd suggest you forget about both references and go with the upcoming SPARQL standard [1]. Cheers, Michael [1] http://www.w3.org/TR/2010/WD-sparql11-service-description-20100126/#id41744
Re: Linking HTML pages and data
Thanks, Kingsley, for dumping in the initial stuff. I've tried to beautify it and make it a bit more readable, coming up with two concrete proposals/good practices [1]. Community review, please! :) Cheers, Michael [1] http://esw.w3.org/topic/SweoIG/TaskForces/CommunityProjects/LinkingOpenData/ AutoDiscovery -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Kingsley Idehen kide...@openlinksw.com Date: Wed, 17 Feb 2010 10:30:13 -0500 To: Michael Hausenblas michael.hausenb...@deri.org Cc: Ed Summers e...@pobox.com, Linked Data community public-lod@w3.org Subject: Re: Linking HTML pages and data Michael Hausenblas wrote: Kingsley, Ed, We need a document that covers the following: 1. Linked Data Auto Discovery Patterns 2. How to associate documents with the things they describe. Agree. I've started a document at [1] now - please dump your ideas, thoughts, requirements, etc. there and I'll take care of getting it in a good shape ;) Cheers, Michael [1] http://esw.w3.org/topic/SweoIG/TaskForces/CommunityProjects/LinkingOpenData/ AutoDiscovery Cheers, Michael Okay, dropped a quick dump :-) -- Regards, Kingsley Idehen President CEO OpenLink Software Web: http://www.openlinksw.com Weblog: http://www.openlinksw.com/blog/~kidehen Twitter/Identi.ca: kidehen
Re: Linking HTML pages and data
Kingsley, Ed, We need a document that covers the following: 1. Linked Data Auto Discovery Patterns 2. How to associate documents with the things they describe. Agree. I've started a document at [1] now - please dump your ideas, thoughts, requirements, etc. there and I'll take care of getting it in a good shape ;) Cheers, Michael [1] http://esw.w3.org/topic/SweoIG/TaskForces/CommunityProjects/LinkingOpenData/ AutoDiscovery Cheers, Michael -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Kingsley Idehen kide...@openlinksw.com Date: Wed, 17 Feb 2010 07:21:06 -0500 To: Ed Summers e...@pobox.com Cc: Linked Data community public-lod@w3.org Subject: Re: Linking HTML pages and data Resent-From: Linked Data community public-lod@w3.org Resent-Date: Wed, 17 Feb 2010 12:59:01 + Ed Summers wrote: On Tue, Feb 16, 2010 at 5:51 PM, Ian Davis li...@iandavis.com wrote: You can see it in use on data.gov.uk: http://education.data.gov.uk/doc/school/56 contains: <link rel="primarytopic" href="http://education.data.gov.uk/id/school/56" /> Wow, thanks Ian. I hadn't noticed this pattern in use at data.gov.uk. It seems like a worthwhile pattern to encourage people to follow, by adding it to the How to Publish Linked Data on the Web [1] ... or elsewhere? //Ed [1] http://www4.wiwiss.fu-berlin.de/bizer/pub/LinkedDataTutorial/ We need a document that covers the following: 1. Linked Data Auto Discovery Patterns 2. How to associate documents with the things they describe. -- Regards, Kingsley Idehen President CEO OpenLink Software Web: http://www.openlinksw.com Weblog: http://www.openlinksw.com/blog/~kidehen Twitter/Identi.ca: kidehen
[ANN] ve2, the voiD editor
All, Happy to announce the availability of ve2, the voiD editor [1]. ve2 is a Web application that enables you to generate a voiD file (voiD is a vocabulary to describe Linked Data sets, their interlinking with other datasets, technical features, etc.). Essentially, ve2 offers a bunch of forms that allow you, based on characteristics of your linked dataset, such as categories, interlinking, technical features, licensing, etc. to create a voiD file in RDF Turtle format. On top of creating voiD files, ve2 further supports you in announcing your voiD file. Currently the following indexer/stores are supported: Sindice [2], RKB voiD store [3], Talis voiD Browser [4], and PingtheSemanticWeb.com [5]. If you have any feature requests or want to file a bug, please visit [6] and *please* use 'Product-ve' as a label - very much appreciated! A *big* thank you for early feedback, development support and pointing out bugs to (in alphabetical order of their last names): Keith Alexander, Lin Clark, Richard Cyganiak, Hugh Glaser, Ian Millard, and Jeni Tennison. Cheers, Michael [1] http://lab.linkeddata.deri.ie/ve2/ [2] http://sindice.com [3] http://void.rkbexplorer.com/ [4] http://kwijibo.talis.com/voiD/ [5] http://pingthesemanticweb.com/ [6] http://code.google.com/p/void-impl/issues/list?q=product%3Ave -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html
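A minimal voiD description of the kind ve2 generates might look roughly like this (a sketch only; the URIs and the choice of owl:sameAs as link predicate are illustrative, not ve2's actual output):

```turtle
@prefix void:    <http://rdfs.org/ns/void#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix owl:     <http://www.w3.org/2002/07/owl#> .
@prefix :        <http://example.org/datasets#> .

# The dataset itself, with access points
:mydataset a void:Dataset ;
    dcterms:title "My Dataset" ;
    void:sparqlEndpoint <http://example.org/sparql> ;
    void:dataDump <http://example.org/dump/mydataset.rdf> .

# Its interlinking with another dataset
:mylinks a void:Linkset ;
    void:target :mydataset, :dbpedia ;
    void:linkPredicate owl:sameAs .
```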
Re: Linking to datasets from the Cultural heritage domain
Monika, Are there any open datasets available to link with from the domain of cultural heritage? I'm not aware of any. That said, we are working in this direction [1]; Joachim might be able to report on a recent event related to it [2] (German, sorry ;) and my best guess would be to see if the MultimedianNL chaps [3] have something handy. Cheers, Michael [1] http://sw-app.org/pub/vast09-chowder.pdf [2] http://www.swib09.de/ [3] http://e-culture.multimedian.nl/ -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Monika Solanki m.sola...@mcs.le.ac.uk Reply-To: m.sola...@mcs.le.ac.uk Date: Sat, 02 Jan 2010 11:48:20 + To: Linked Data community public-lod@w3.org Subject: Linking to datasets from the Cultural heritage domain Resent-From: Linked Data community public-lod@w3.org Resent-Date: Sat, 02 Jan 2010 11:48:50 + Hello, Are there any open datasets available to link with from the domain of cultural heritage? Thanks, Monika -- Dr Monika Solanki F27 Department of Computer Science University of Leicester Leicester LE1 7RH United Kingdom Tel: +44 116 252 3828 Google: 52.653791,-1.158414 http://www.cs.le.ac.uk/people/ms491 Times Higher Education University of the Year 2008/09
Re: DBTropes.org, a TVTropes.org linked data wrapper
There's a RESTful HTTP JSON interface http://api.freebase.com/api/service/search?help And there are Java libraries for parsing JSON. Is that what you are looking for? Thanks David! I'll give it a try and happy to learn what Malte and Daniel experienced as well ;) Cheers, Michael -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: David Huynh dfhu...@alum.mit.edu Date: Fri, 11 Dec 2009 09:48:46 -0800 To: Michael Hausenblas michael.hausenb...@deri.org Cc: Malte Kiesel malte.kie...@dfki.de, Daniel O'Connor daniel.ocon...@gmail.com, Linked Data community public-lod@w3.org Subject: Re: DBTropes.org, a TVTropes.org linked data wrapper Michael Hausenblas wrote: I'll look into that. It seems that Sindice performs a bit oddly when searching in Freebase though (try queries with domain:rdf.freebase.com). Any pointers on how to search in Freebase from Java without hassle? Not sure. Looking at [1], it doesn't precisely offer support for Java, but maybe this can be seen as a request for support in case someone from MetaWeb is reading it? ;) Cheers, Michael [1] http://www.freebase.com/docs/client_libraries There's a RESTful HTTP JSON interface http://api.freebase.com/api/service/search?help And there are Java libraries for parsing JSON. Is that what you are looking for? David
Re: RDF Update Feeds
FWIW, I had a quick look at the current caching support in LOD datasets [1] - not very encouraging, to be honest. Cheers, Michael [1] http://webofdata.wordpress.com/2009/11/23/linked-open-data-http-caching/ -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Michael Hausenblas michael.hausenb...@deri.org Date: Sat, 21 Nov 2009 11:19:18 + To: Hugh Glaser h...@ecs.soton.ac.uk, Georgi Kobilarov georgi.kobila...@gmx.de Cc: Linked Data community public-lod@w3.org Subject: Re: RDF Update Feeds Resent-From: Linked Data community public-lod@w3.org Resent-Date: Sat, 21 Nov 2009 11:19:57 + Georgi, Hugh, Could be very simple by expressing: Pull our update-stream once per seconds/minute/hour in order to be *enough* up-to-date. Ah, Georgi, I see. You seem to emphasise the quantitative side whereas I just seem to want to flag what kind of source it is. I agree that Pull our update-stream once per seconds/minute/hour in order to be *enough* up-to-date should be available, however I think that having the information regular/irregular vs. how frequent the update should be made available as well. My main use case is motivated from the LOD application-writing area. I figured that I quite often have written code that essentially does the same: based on the type of data-source it either gets a live copy of the data or uses already locally available data. Now, given that dataset publishers would declare the characteristics of their dataset in terms of dynamics, one could write such a LOD cache quite easily, I guess, abstracting the necessary steps and hence offering a reusable solution. I'll follow up on this one soon via a blog post with a concrete example. 
My main question would be: what do we gain if we explicitly represent these characteristics, compared to what HTTP provides in terms of caching [1]. One might want to argue that the 'built-in' features are sort of too fine-grained and there is a need for a data-source-level solution. in our semantic sitemaps, and these suggestions seem very similar. E.g. http://dotac.rkbexplorer.com/sitemap.xml (And I think these frequencies may correspond to normal sitemaps.) So a naïve approach, if you want RDF, would be to use something very similar (and simple). Of course I am probably known for my naivety, which is often misplaced. Hugh, of course you're right (as often ;). Technically, this sort of information ('changefreq') is available via sitemaps. Essentially, one could lift this to RDF straightforwardly, if desired. If you look closely at what I propose, however, then you'll see that I aim at a sort of qualitative description which could drive my LOD cache (along with the other information I already have from the void:Dataset). Now, before I continue to argue here on a purely theoretical level, lemme implement a demo and come back once I have something to discuss ;) Cheers, Michael [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Hugh Glaser h...@ecs.soton.ac.uk Date: Fri, 20 Nov 2009 18:29:17 + To: Georgi Kobilarov georgi.kobila...@gmx.de, Michael Hausenblas michael.hausenb...@deri.org Cc: Linked Data community public-lod@w3.org Subject: Re: RDF Update Feeds Sorry if I have missed something, but... We currently put things like <changefreq>monthly</changefreq> <changefreq>daily</changefreq> <changefreq>never</changefreq> in our semantic sitemaps, and these suggestions seem very similar. 
E.g. http://dotac.rkbexplorer.com/sitemap.xml (And I think these frequencies may correspond to normal sitemaps.) So a naïve approach, if you want RDF, would be to use something very similar (and simple). Of course I am probably known for my naivety, which is often misplaced. Best Hugh On 20/11/2009 17:47, Georgi Kobilarov georgi.kobila...@gmx.de wrote: Hi Michael, nice write-up on the wiki! But I think the vocabulary you're proposing is too generally descriptive. Dataset publishers, once offering update feeds, should not only tell that/if their datasets are dynamic, but instead how dynamic they are. Could be very simple by expressing: Pull our update-stream once per seconds/minute/hour in order to be *enough* up-to-date. Makes sense? Cheers, Georgi -- Georgi Kobilarov www.georgikobilarov.com -Original Message- From: Michael Hausenblas [mailto:michael.hausenb...@deri.org] Sent: Friday, November 20, 2009 4:01 PM To: Georgi Kobilarov Cc: Linked Data community Subject: Re: RDF Update
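The HTTP caching machinery referred to in [1] comes down to validators plus conditional requests; a dataset that supports it lets a LOD cache revalidate cheaply instead of re-fetching. A sketch of the exchange (host and ETag value are illustrative):

```http
GET /resource/Galway HTTP/1.1
Host: example.org
Accept: application/rdf+xml
If-None-Match: "abc123"

HTTP/1.1 304 Not Modified
ETag: "abc123"
Cache-Control: max-age=3600
```

The open question in the thread is whether this per-document granularity is enough, or whether a dataset-level description of dynamics is needed on top of it.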
Re: RDF Update Feeds
Georgi, Hugh, Could be very simple by expressing: Pull our update-stream once per seconds/minute/hour in order to be *enough* up-to-date. Ah, Georgi, I see. You seem to emphasise the quantitative side whereas I just seem to want to flag what kind of source it is. I agree that Pull our update-stream once per seconds/minute/hour in order to be *enough* up-to-date should be available, however I think that having the information regular/irregular vs. how frequent the update should be made available as well. My main use case is motivated from the LOD application-writing area. I figured that I quite often have written code that essentially does the same: based on the type of data-source it either gets a live copy of the data or uses already locally available data. Now, given that dataset publishers would declare the characteristics of their dataset in terms of dynamics, one could write such a LOD cache quite easily, I guess, abstracting the necessary steps and hence offering a reusable solution. I'll follow up on this one soon via a blog post with a concrete example. My main question would be: what do we gain if we explicitly represent these characteristics, compared to what HTTP provides in terms of caching [1]. One might want to argue that the 'built-in' features are sort of too fine-grained and there is a need for a data-source-level solution. in our semantic sitemaps, and these suggestions seem very similar. E.g. http://dotac.rkbexplorer.com/sitemap.xml (And I think these frequencies may correspond to normal sitemaps.) So a naïve approach, if you want RDF, would be to use something very similar (and simple). Of course I am probably known for my naivety, which is often misplaced. Hugh, of course you're right (as often ;). Technically, this sort of information ('changefreq') is available via sitemaps. Essentially, one could lift this to RDF straightforwardly, if desired. 
If you look closely at what I propose, however, then you'll see that I aim at a sort of qualitative description which could drive my LOD cache (along with the other information I already have from the void:Dataset). Now, before I continue to argue here on a purely theoretical level, lemme implement a demo and come back once I have something to discuss ;) Cheers, Michael [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Hugh Glaser h...@ecs.soton.ac.uk Date: Fri, 20 Nov 2009 18:29:17 + To: Georgi Kobilarov georgi.kobila...@gmx.de, Michael Hausenblas michael.hausenb...@deri.org Cc: Linked Data community public-lod@w3.org Subject: Re: RDF Update Feeds Sorry if I have missed something, but... We currently put things like <changefreq>monthly</changefreq> <changefreq>daily</changefreq> <changefreq>never</changefreq> in our semantic sitemaps, and these suggestions seem very similar. E.g. http://dotac.rkbexplorer.com/sitemap.xml (And I think these frequencies may correspond to normal sitemaps.) So a naïve approach, if you want RDF, would be to use something very similar (and simple). Of course I am probably known for my naivety, which is often misplaced. Best Hugh On 20/11/2009 17:47, Georgi Kobilarov georgi.kobila...@gmx.de wrote: Hi Michael, nice write-up on the wiki! But I think the vocabulary you're proposing is too generally descriptive. Dataset publishers, once offering update feeds, should not only tell that/if their datasets are dynamic, but instead how dynamic they are. Could be very simple by expressing: Pull our update-stream once per seconds/minute/hour in order to be *enough* up-to-date. Makes sense? 
Cheers, Georgi -- Georgi Kobilarov www.georgikobilarov.com -Original Message- From: Michael Hausenblas [mailto:michael.hausenb...@deri.org] Sent: Friday, November 20, 2009 4:01 PM To: Georgi Kobilarov Cc: Linked Data community Subject: Re: RDF Update Feeds Georgi, All, I like the discussion, and as it seems to be a recurrent pattern as pointed out by Yves (which might be a sign that we need to invest some more time into it) I've tried to sum up a bit and started a straw-man proposal for a more coarse-grained solution [1]. Looking forward to hearing what you think ... Cheers, Michael [1] http://esw.w3.org/topic/DatasetDynamics -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Georgi Kobilarov georgi.kobila...@gmx.de Date: Tue, 17 Nov 2009 16:45:46 +0100 To: Linked Data community public-lod@w3.org Subject: RDF Update Feeds Resent-From: Linked Data
Re: data.semanticweb.org down?
All DERI-hosted services should be up and running now again. Please ping me in case you encounter some unexpected behaviour or you find one of our sites or services not online as expected and please do accept our apologies for any inconvenience caused. Cheers, Michael -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Richard Cyganiak rich...@cyganiak.de Date: Wed, 18 Nov 2009 15:49:17 +0100 To: Juan Sequeda juanfeder...@gmail.com Cc: Semantic Web community semantic-...@w3.org, Linked Data community public-lod@w3.org Subject: Re: data.semanticweb.org down? Resent-From: Linked Data community public-lod@w3.org Resent-Date: Wed, 18 Nov 2009 14:49:54 + On 18 Nov 2009, at 15:37, Juan Sequeda wrote: Is this because of the rain problems DERI is having? Yes. The DERI datacenter is powered down due to local flooding. This affects many DERI-hosted services, including sindice.com, sig.ma, data.semanticweb.org, pedantic-web.org, deri.ie and others. Hopefully everything will be up again in the next few hours, or tomorrow morning at the latest. Best, Richard Juan Sequeda, Ph.D Student Dept. of Computer Sciences The University of Texas at Austin www.juansequeda.com www.semanticwebaustin.org
Re: RDF Update Feeds
Georgi, All, I like the discussion, and as it seems to be a recurrent pattern as pointed out by Yves (which might be a sign that we need to invest some more time into it) I've tried to sum up a bit and started a straw-man proposal for a more coarse-grained solution [1]. Looking forward to hearing what you think ... Cheers, Michael [1] http://esw.w3.org/topic/DatasetDynamics -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Georgi Kobilarov georgi.kobila...@gmx.de Date: Tue, 17 Nov 2009 16:45:46 +0100 To: Linked Data community public-lod@w3.org Subject: RDF Update Feeds Resent-From: Linked Data community public-lod@w3.org Resent-Date: Tue, 17 Nov 2009 15:46:30 + Hi all, I'd like to start a discussion about a topic that I think is getting increasingly important: RDF update feeds. The linked data project is starting to move away from releases of large data dumps towards incremental updates. But how can services consuming rdf data from linked data sources get notified about changes? Is anyone aware of activities to standardize such rdf update feeds, or at least aware of projects already providing any kind of update feed at all? And related to that: How do we deal with RDF diffs? Cheers, Georgi -- Georgi Kobilarov www.georgikobilarov.com
Re: Updated GeoSpecies Data Set 1,765,790 Triples
Peter, Great work! Now, as you already have a semantic sitemap, it should be pretty straight-forward to offer a voiD description [1] of your dataset as well. You can, for example, use the lifting XSLT [2] to bootstrap the voiD file from your existing sitemap and add further details about the interlinking to other datasets, etc. (see also the voiD guide [3] for the interplay with sitemaps). Happy to assist you off-line in case you have further questions regarding voiD ... Cheers, Michael [1] http://semanticweb.org/wiki/VoiD [2] http://code.google.com/p/void-impl/source/browse/trunk/liftSSM/SSM2void.xslt [3] http://rdfs.org/ns/void-guide#sec_5_2_Discovery_via_sitemaps -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Peter DeVries pete.devr...@gmail.com Date: Thu, 29 Oct 2009 20:40:44 -0500 To: Renaud Delbru renaud.del...@deri.org Cc: Linked Data community public-lod@w3.org Subject: Re: Updated GeoSpecies Data Set 1,765,790 Triples Resent-From: Linked Data community public-lod@w3.org Resent-Date: Fri, 30 Oct 2009 01:41:27 + Hi Renaud, Thank you, I have a semantic sitemap at: http://lod.geospecies.org/sitemap.xml I am open to additional comments or suggestions. :-) http://lod.geospecies.org/sitemap.xml- Pete On Thu, Oct 29, 2009 at 6:16 PM, Renaud Delbru renaud.del...@deri.orgwrote: Hi Peter, I see that you have already a dataset dump available. Could I suggest also the use of a semantic sitemap [1], so that search engines such as Sindice can find, process and index your dump. Best, [1]http://sw.deri.org/2007/07/sitemapextension -- Renaud Delbru On 29/10/09 21:10, Peter DeVries wrote: I have updated the GeoSpecies data set. 
You can read about it here: http://about.geospecies.org/ You can browse it here: http://lod.geospecies.org/ The RDF dump can be obtained here: Here is the new RDF dump http://lod.geospecies.org/geospecies.rdf.tar.gz (1,765,790 Triples) The data set currently contains information and linked data for: 15,862 Species, 1,291 Families, 206 Orders. We have approximately 6,500 species observations, but are awaiting release on the majority of those. The current data set includes 12 sample observation records with geo and geonames links. There is also a growing number of GeoSpecies annotated articles and presentations in the bibtex and bibio vocabularies. The knowledge base is currently linked to DBpedia, Freebase, Bio2RDF, Uniprot, uBio data sources, and uses some of the umbel subject concepts. See the projects page for information on proper attribution. Until they have been fully documented, the bulk of the observation records are not currently available. I have attempted to link to dbpedia, bio2rdf, uniprot and freebase when possible using skos:closeMatch. Of the 15,862 species, 5,684 are linked to dbpedia and wikipedia, 8,948 are linked to bio2rdf and uniprot. There are also foaf:isPrimaryTopicOf links to 8,910 Wikispecies pages. Similar linkages are made at the other taxonomic levels of kingdom, phylum, class, order and family. Here is the page for the Silver-bordered Fritillary Butterfly Boloria selene Denis and Schiffermuller 1775 http://lod.geospecies.org/ses/ICmLC.html The entity is http://lod.geospecies.org/ses/ICmLC The RDF is http://lod.geospecies.org/ses/ICmLC.rdf The levels above species and family are in XHTML with RDFa, but also have a straight RDF representation. Order Carnivora http://lod.geospecies.org/orders/jtSaY.xhtml RDF version http://lod.geospecies.org/orders/jtSaY.rdf This page has some example SPARQL queries. 
http://about.geospecies.org/sparql.xhtml You can find the ontology documentation here: http://rdf.geospecies.org/gs_ont_doc/index.html It is mainly a vocabulary, since I have had trouble getting all the related ontologies to play well together. The SPARQL query examples will work as described on the RDF dataset without the ontology. This is only a fraction of the world's species but it includes all the world's Mammals, and North American Birds. I will be working to improve the data set's depth, breadth and linkages over time, and would appreciate any comments or suggestions :-) My long-term plan is to also add biologically relevant assertions to allow useful semantic queries about species. I have started to add state and county level records from the USDA Plants dataset for Wisconsin, Iowa, Michigan, Minnesota. In addition, I have started to make links between habitats and species. - Pete Pete DeVries Department of Entomology University of Wisconsin - Madison 445 Russell Laboratories 1630 Linden Drive Madison, WI
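The skos:closeMatch links to DBpedia described above can be pulled out with a query along these lines (a sketch; the regex filter is one illustrative way to restrict the targets to DBpedia):

```sparql
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>

SELECT ?species ?dbpediaResource
WHERE {
  ?species skos:closeMatch ?dbpediaResource .
  FILTER regex(str(?dbpediaResource), "^http://dbpedia.org/")
}
```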
Re: ANN: alternative to cURL for debugging URIs
Olaf, We announce a new tool for Linked Data publishers as well as developers of Linked Data based applications. You may use our tool [1] as an alternative to the command line tool cURL for debugging Linked Data sites [2]. Nice work. You might want to look at hurl [1] as well and copy some of their features ;) Cheers, Michael [1] http://hurl.it/ -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Olaf Hartig har...@informatik.hu-berlin.de Organization: Humboldt-Universität zu Berlin Date: Mon, 12 Oct 2009 16:02:21 +0200 To: Linked Data community public-lod@w3.org Subject: ANN: alternative to cURL for debugging URIs Resent-From: Linked Data community public-lod@w3.org Resent-Date: Mon, 12 Oct 2009 14:03:09 + Dear LODers, We announce a new tool for Linked Data publishers as well as developers of Linked Data based applications. You may use our tool [1] as an alternative to the command line tool cURL for debugging Linked Data sites [2]. Our tool allows you to dereference URIs and it visualizes the HTTP response of the server. In contrast to cURL, you may directly select each URI that occurs in the response in order to initiate the dereferencing of the selected URI with our tool. Hence, with our tool you may avoid the cumbersome copying and pasting of URIs on the command line as is necessary with curl. Furthermore, you may view the response body in different RDF serialization formats and you may inspect RDF data embedded in XHTML+RDFa documents. To make our tool a real Linked Data application that can also be accessed by software agents we embed an RDF description of the visualized HTTP messages in the HTML output. 
Cheers, Annika, Olaf [1] http://linkeddata.informatik.hu-berlin.de/uridbg/ [2] http://dowhatimean.net/2007/02/debugging-semantic-web-sites-with-curl -- Olaf Hartig Database and Information Systems Research Group Department of Computer Science Humboldt-Universität zu Berlin
Re: Publications on SOA and linked data?
Axel, Some *related* publications that come to mind are: [1], [2], and [3], however I guess you have to decide yourself if this fits your needs ;) Depending on how deep you're already into RESTful stuff, you might want to look into the REST Wiki (for example [4]) or invest the money to buy the brilliant book 'RESTful Web Services' [5]. Eventually, you might also be interested to have a look at our work in progress, where we started to work on the 'write-part' of linked data, see [6]. Cheers, Michael [1] http://www.ricardoamador.com/research/publication/kr2rk.pdf [2] http://sweet.kmi.open.ac.uk/pub/SupportingSemi-AutomaticAcquisitionofSRS.pdf [3] http://www.semanticscripting.org/SFSW2009/short_5.pdf [4] http://rest.blueoxen.net/cgi-bin/wiki.pl?XMLSemanticWeb [5] http://oreilly.com/catalog/9780596529260/ [6] http://esw.w3.org/topic/WriteWebOfData -- Dr. Michael Hausenblas LiDRC - Linked Data Research Centre DERI - Digital Enterprise Research Institute NUIG - National University of Ireland, Galway Ireland, Europe Tel. +353 91 495730 http://linkeddata.deri.ie/ http://sw-app.org/about.html From: Axel Rauschmayer a...@rauschma.de Date: Mon, 24 Aug 2009 14:31:29 +0200 To: Linked Data community public-lod@w3.org Subject: Publications on SOA and linked data? Resent-From: Linked Data community public-lod@w3.org Resent-Date: Mon, 24 Aug 2009 12:32:09 + Web services and linked data seem highly related: Many of the linked data introductions feel ReSTful, as does Tabulator's use of SPARQL/update. But, while there are many blog posts out there that briefly touch on this topic, I have yet to find a publication that paints a complete and coherent picture. Is anyone aware of such publications (or currently writing one ;-) ? There are semantic web services, but I would expect linked data web services to be different. 
Greetings, Axel -- Axel Rauschmayer a...@rauschma.de http://www.pst.ifi.lmu.de/people/staff/rauschmayer/axel-rauschmayer/ http://2ality.blogspot.com/ http://hypergraphs.de/