Geohash for events

2008-06-04 Thread Bernard Vatant


Hi Gustavo, all

For LOD people: Gustavo is the man behind http://geohash.org, a very 
cool service. I've been thinking about a layer that could be added to 
geohash to generate URIs for events, encapsulating the Where-When-What 
in a single standard URI string, and allowing automatic generation of a 
standard RDF description for events.

Something like the following
http://geohash.org/u0tyz0ssw:%5B2008-10-26_2008-10-30%5D_International_Semantic_Web_Conference 

The dates could be entered by user using some calendar widget, 
integrated with the Geohash Mapplet, with an extra field to enter the 
title.

The service would return the above URI.
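A minimal sketch of how such a Where-When-What URI could be assembled (in Python; the helper name and exact layout are assumptions for illustration, not part of geohash.org — the real service would do this server-side):

```python
from urllib.parse import quote

def event_uri(geohash, start, end, title):
    """Assemble a hypothetical geohash.org event URI:
    where (geohash) + when (date interval) + what (title)."""
    # '[' and ']' are reserved characters in URIs, so the
    # enclosing brackets of the interval must be percent-encoded.
    interval = quote(f"[{start}_{end}]", safe="")
    slug = title.replace(" ", "_")
    return f"http://geohash.org/{geohash}:{interval}_{slug}"

uri = event_uri("u0tyz0ssw", "2008-10-26", "2008-10-30",
                "International Semantic Web Conference")
print(uri)
```

Run on the ISWC example, this reproduces the URI string shown above.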

To make such URIs usable by the Semantic Web, RDF descriptions could be 
automatically generated from the above using e.g., the event ontology at

http://motools.sourceforge.net/event/event.html

Something like (namespaces pending)

<event:Event rdf:about="http://geohash.org/u0tyz0ssw:%5B2008-10-26_2008-10-30%5D_International_Semantic_Web_Conference">
  <event:place rdf:about="http://geohash.org/u0tyz0ssw">
    <geo:lat>49.0026</geo:lat>
    <geo:long>8.4000</geo:long>
  </event:place>
  <time:beginDate>2008-10-26</time:beginDate>
  <time:endDate>2008-10-30</time:endDate>
  <rdfs:label>International Semantic Web Conference</rdfs:label>
</event:Event>

What do you think?

Bernard
--

Bernard Vatant
Knowledge Engineering

Mondeca
3, cité Nollez 75018 Paris France
Web: http://www.mondeca.com
Tel: +33 (0) 971 488 459
Mail: [EMAIL PROTECTED]
Blog: Leçons de Choses, http://mondeca.wordpress.com/





Re: Geohash for events

2008-06-09 Thread Bernard Vatant


Hello 'masaka'

KANZAKI Masahide wrote:

Hello Bernard,

I've been thinking something similar, and test implemented such URI, e.g.:
http://www.kanzaki.com/ns/geo/u0tyz0ssw:2008-10-26_2008-10-31;International_Semantic_Web_Conference
  

Very cool. I tried a similar one:
http://www.kanzaki.com/ns/geo/u07t4qf8j:2008-09-28_2008-10-03;INRIA_IST_2008 


One thing I was wondering was how to encapsulate the URI of the event 
itself, something like (completely incorrect syntax, but you get the 
idea again)

http://www.kanzaki.com/ns/geo/u07t4qf8j:2008-09-28_2008-10-03;INRIA_IST_2008?uri=http://www.inria.fr/actualites/colloques/2008/ist08/

Because URIs in geohash.org identify services, not things like events
or places, I use 302 redirection to represent RDF/XML data. 

Absolutely

Some
notes:

- date/time delimiter changed from enclosing [] to a trailing ';', since []
are reserved chars in URIs and would cause some identification troubles
  
Indeed. I tossed out the original example just to push the general idea of 
encapsulation, without much thinking about what the correct separators 
should be.
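With ';' as the trailing delimiter, the compound identifier splits back cleanly into its where/when/what parts. A quick sketch (the 'geohash:start_end;Title' layout follows Masahide's experimental URIs; the function name is mine):

```python
def parse_event_id(event_id):
    """Split 'geohash:start_end;Title' into its components,
    following the experimental kanzaki.com URI layout."""
    where, rest = event_id.split(":", 1)   # geohash vs. the rest
    when, what = rest.split(";", 1)        # date interval vs. title
    start, end = when.split("_")           # dates use '-', so '_' is safe
    return {"geohash": where, "start": start, "end": end,
            "title": what.replace("_", " ")}

parts = parse_event_id(
    "u0tyz0ssw:2008-10-26_2008-10-31;International_Semantic_Web_Conference")
print(parts)
```

This only works because ':' , ';' and '_' never appear inside a geohash or an ISO date, which is exactly why '[' and ']' were unnecessary.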

- currently use RDFical vocabulary for events. Care should be taken
for 'dtend', as it must be non-inclusive end of the event (i.e. one
day after the last day of an event)

Not quite sure how useful in practice, but interesting trial (still
needs some fixes).
  
I'll stay tuned for further developments. Curious to have Gustavo's 
feedback on this.


Bernard






Re: Geohash for events

2008-06-09 Thread Bernard Vatant


Hi masa

Hi Bernard,

Added support for event page uri:
http://www.kanzaki.com/ns/geo/u07t4qf8j:2008-09-28_2008-10-03;INRIA_IST_2008?uri=http://www.inria.fr/actualites/colloques/2008/ist08/
  
Really cool. The URI format looks perfect to me now, and is exactly what 
I imagined you would do :-) .

The uri part cannot be combined with the 'hash:datetime;name' part,
because such a uri itself adds another hierarchy to the original uri
(i.e. the / in the event page uri). Hence it should be provided as a
query string.

I wonder if this looks too complicated for practical use?
  
It is, if users have to concatenate the URI themselves: going to 
geohash, searching for the place, copying the geohash id into the 
service namespace, adding the time interval in a conformant date format, 
adding the event URI. Talk about user experience ... fine for geeks like 
you and me, but ordinary people will never do it that way.
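For the record, the concatenation Masahide's format calls for is mechanical enough to script. A hedged sketch of the client side (the 'uri' parameter name is from his example; note that urlencode percent-encodes the page URI, which is the conservative choice, whereas his example keeps it literal):

```python
from urllib.parse import urlencode

base = "http://www.kanzaki.com/ns/geo/u07t4qf8j:2008-09-28_2008-10-03;INRIA_IST_2008"
page = "http://www.inria.fr/actualites/colloques/2008/ist08/"

# The event page URI contains '/' and ':', which would add bogus path
# hierarchy if embedded directly, so it goes in the query string.
full = base + "?" + urlencode({"uri": page})
print(full)
```

A bookmarklet or mapplet doing exactly this is what would hide the geekery from ordinary users.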


But if you (or someone else) provide a smart bookmarklet (Faviki has 
given me a lot of ideas on that matter) to use in the browser from the 
page of the event 
(http://www.inria.fr/actualites/colloques/2008/ist08/), where the user 
can call geohash via a geocoder, enter the dates using a calendar 
applet, and grab the name from the page title ... et voilà ...
The service would return a page like yours, with the RDF description and 
a permanent URI. And maybe call, along the way, the Geonames service to 
add the neighbouring Geonames features, Yahoo or Google to add sponsored 
links, whatever ...


This would be *practical* ... You could even, in passing, dump the 
created event into a back-end data store, etc.


What do you say?


2008/6/9, Bernard Vatant [EMAIL PROTECTED]:
  

One thing I was wondering was how to encapsulate the URI of the event
itself, something like (completely incorrect syntax, but you get the idea
again)



cheers,

  







Linked Data Web group on LinkedIn : Linked Data, Linked People

2008-07-03 Thread Bernard Vatant


All

I just had a look at the figures for the Linked Data Web group on 
LinkedIn, which counts 178 members to date. Granted, this is a very 
small community compared to the 23 million LinkedIn users, but it's 
steadily growing. What I found interesting is the small-world effect, 
illustrated by the proportion of this group's members within my personal 
network, which is over 90%.

Among those 178 people :

   40 are 1st degree connections (= 30% of my 1st degree connections!)
   74 are 2nd degree
   50 are 3rd degree

... small world indeed!

It also suggests that, if those social connections were themselves 
available as linked data, the relations of people to each other and to 
the linked data sets they are involved in or manage would make an 
interesting social gateway to the GGG.


Bernard






Re: Announcing Open GUID

2008-09-25 Thread Bernard Vatant


Jason

An EU consortium?  Is this going to end like the Lisbon treaty?  :)
  
Easy target, but beware of being too sarcastic here about European 
projects. :-)
First because there are a lot of European people around this space, and 
moreover a lot of Semantic Web related projects have been and will be 
funded by the EU's European Research Area, under successive Framework 
Programmes (currently FP7). I'm happy to say that a good part of my 
(small) company's R&D has been supported by such projects for years, 
projects where companies big or small work along with academic research 
centers.
Granted, as you mention, the visibility of those projects is not always 
what it should be, although each of them always has a Work Package 
called "Dissemination and Outreach" ... and the EU projects' Web 
presence is really tricky to navigate!

Thanks for the tip, my research did not uncover them.  Maybe because the
name is obtuse? 
Given the reference to a razor (even a conceptual one), I would say 
"acute" rather than "obtuse" ... European medieval culture, that is. :-P

 While Open GUID isn't a good brand name for a t-shirt,
it's at least evocative of its singular purpose.
  

Indeed.

I will reach out to them.  Though I will say there's a reason I've been
an agile developer for the better part of a decade...
  
I think this is an interesting confrontation. Of course European 
projects are anything but agile. But they are funded. ;-)


Bernard

 Original Message 
Subject: Re: Announcing Open GUID
From: Giovanni Tummarello [EMAIL PROTECTED]
Date: Thu, September 25, 2008 11:21 am
To: [EMAIL PROTECTED]
Cc: public-lod@w3.org


Hi Jason,

I believe you're pursuing exactly the same goal as the Okkam project
(http://okkam.org).
Unlike Okkam, however, you have something up already at a nice, visible,
uncluttered website.
This mail of mine is just so that you know that there is this common
research effort, and in fact to say that if you're very motivated to
pursue this it might make sense to talk to the Okkam people and maybe
fit into the work there; after all, there is EU funding behind this
effort, so you might find yourself leveraging quite some manpower (e.g.
matching libraries).
... on the other hand, you might want to decide to stay agile and
independent :-) your pick.

Giovanni
  



--

Bernard Vatant
Senior Consultant
Vocabulary & Data Engineering
Tel: +33 (0) 971 488 459
Mail: [EMAIL PROTECTED]

Mondeca
3, cité Nollez 75018 Paris France
Web: http://www.mondeca.com
Blog: Leçons de Choses, http://mondeca.wordpress.com/




Re: SPARQL Endpoint for Lingvoj

2008-10-03 Thread Bernard Vatant


Kjetil

Unfortunately, I have had no bandwidth to take care of lingvoj.org for 
months now. There is a long to-do list for data quality; e.g., all the 
referenced Cyc URIs are currently broken, due to changes in Cyc URI 
schemes. The roadmap in my mind is to switch from the current static 
files to a dynamic repository and publication (hopefully managed in 
Mondeca software) and a proper SPARQL endpoint.
The model is likely to evolve to include the SKOS label extension, in 
order to cope with the multilingual aspects.


I can't commit to any reliable schedule for all that at the moment, 
unfortunately.


The bottom line is that having such resources maintained by 
individual/private initiatives is certainly not a scalable solution.


Bernard


All,

I was just wondering whether anybody has a SPARQL Endpoint for lingvoj.org 
data? We found that importing it all was a little too much, so we figured 
that the best way to get the subset that we want is to CONSTRUCT it, if it is 
somewhere.


Cheers,

Kjetil

  







WiserEarth Re: eEnvironment Terminology Workshop deadline extended

2008-10-06 Thread Bernard Vatant


Hello all

I take the opportunity of Thomas's message to put another, somehow 
related, community on the LOD community's radar.


It's been a while indeed since I've thought that a missing bubble in the 
LOD cloud was some data about environment, sustainable development, and 
other similarly critical issues of our time, and in this respect I'd 
noticed for quite a while the WiserEarth site and community at 
http://www.wiserearth.org
Basically WiserEarth is a community-built database, along with social 
networking tools, of the who-what-where-when-how in those areas. The 
main part of the data is about over 100,000 organisations with their 
whereabouts, including location, address, description, type, activities, 
areas of focus, relations, etc. Lately I joined this community and 
started to talk about bringing their data into the LOD cloud, and added 
them to the LOD shopping list.


What I have found interesting in my conversations so far is that the 
WiserEarth people have quickly understood that linking their data would 
definitely be a win-win strategy, in terms of both outreach (gained 
visibility in the common LOD space) and access and organisation using 
shared categories, e.g., DBpedia categories (we've not yet considered 
other resources like UMBEL, but they are on the radar, too). They know 
their current taxonomy is suboptimal, and see the potential benefit of 
using shared categories.
While the WiserEarth staff are quite excited about the idea, their 
technical team is quite small, and they would be happy to see people 
from the LOD community bring a bit of their know-how and technical 
expertise to support them if needed.


I for one have proposed to help, bandwidth permitting, with the mapping 
or redefinition of the current categories, and/or the definition of a 
supporting ontology, leveraging as far as possible current LOD 
ontologies (FOAF, Geonames ...). But I guess other geeks around will be 
happy to help the WiserEarth folks with server configuration, data 
migration, the format of RDF publication, and all the gory details of 
303 redirects. :-)


Thomas, I have forwarded the Workshop announcement to WiserEarth folks. 
Their representative in Europe might be interested in attending.


Thanks for your attention

Bernard

Thomas Bandholtz wrote:


Hi all,

the European Environmental Information community is currently
discovering Semantic Web technologies and, more specifically, LOD.

Environmental data consists of billions of measurement records which may
be published LOD-style and be linked to the environmental
terminology. This would be a huge use case for satisfying the legal 
reporting obligations of the EU member states.


One of the milestones in this development will be the eEnvironment
Terminology Workshop of the European conference of the Czech Presidency
of the Council of the EU TOWARDS eENVIRONMENT, March 25-27 2009,
Prague, Czech Republic
http://www.e-envi2009.org/?workshops

Environmental terminology and its semantics constitute a major 
building block of SEIS and SISE as important instruments for 
discovery, understanding, and integration of any kind of accessible 
information. They have already taken a long way starting with early 
subject heading systems of the libraries, moving on to multilingual 
thesauri such as GEMET, some of them evolving towards more expressive 
ontologies.


This workshop will present several domain-specific and 
interdisciplinary examples and discuss common design issues such as 
terminology structure models, cross-referencing, symmetric vs. 
asymmetric multilingualism, identity and reference, publishing 
terminology in the Web, and linking environmental data to such 
published terminology.


What are SEIS and SISE?

The European Commission has recently decided on building a Shared
Environmental Information System (SEIS)
http://ec.europa.eu/environment/seis/index.htm
accompanied by a European research strategy, Towards a Single
Information Space for the Environment in Europe (SISE)
http://cordis.europa.eu/fp7/ict/sustainable-growth/workshops_en.html
with a high awareness of explicit semantics.

This was again one of the topics of the EnviroInfo conference in 
September.

http://www.enviroinfo2008.org/detail_wednesday.php

Those who are interested in contributing in these activities should
submit an abstract at
http://www.e-envi2009.org/myreview/SubmitAbstract.php before the
deadline Oct 31.

Kind regards
Thomas Bandholtz
innoQ.com
www.semantic-network.de






Re: [Ann] OSM Linked Geo Data extraction, browser editor

2008-12-12 Thread Bernard Vatant


Hi Sören

Have you any plans to link those data with geonames.org data?

Bernard


Hi all,

We have been working over the last weeks on bringing geo data derived 
from the marvelous OpenStreetMap project [1] to the data web. This work 
in progress is still far from finished; however, we would like to 
share some first preliminary results:


* A *vast amount of point-of-interest descriptions* was extracted from 
OSM and published as Linked Data at http://linkedgeodata.org


* The *Linked Geo Data browser and editor* (available at 
http://linkedgeodata.org/browser) is a facet-based browser for geo 
content, which uses an OLAP inspired hypercube for quickly retrieving 
aggregated information about any user selected area on earth.


Further information can be also found in our AKSW project description:

http://aksw.org/Projects/LinkedGeoData

Thanks go to Sebastian Dietzold, Jens Lehmann, Sebastian Hellmann, 
David Aumueller and other members of the AKSW team for their 
contributions.


Merry Christmas to everybody from Leipzig

Sören


[1] http://openstreetmap.org








News from lingvoj.org

2009-04-02 Thread Bernard Vatant

Hello all

I've started refreshing the content of the pages at http://lingvoj.org, 
which was long overdue.
The data set now links to DBpedia (of course), Freebase and OpenCyc (the 
latter links had been broken since OpenCyc changed its URIs a while 
ago). But there are no backward links from those yet... so far the only 
dataset that I know of linking to lingvoj.org is Linked MDB, but it does 
not use the correct URIs [1]. :'( 
Please don't be shy! lingvoj.org URIs are cool: stable, simple, and 
dereferenceable. ;-)
Bandwidth permitting, I will now focus on data quality, for the most 
used languages (the ISO 639-1 list).


Some technical details: in the previous release I had generated static 
content for HTML files as well as RDF files. Now the HTML page redirects 
(HTML-style) to the RDF file, which calls an XSL stylesheet. The result, 
as far as I can see in the browsers I have tested, is the following:
- IE is happy with that. So in IE you can actually browse through 
languages, clicking on the codes of the labels.
- Firefox + the Tabulator extension calls its default RDF stylesheet, 
where the languages of labels are ignored ... too bad. If anyone knows 
how to make Firefox use the stylesheet declared in the RDF file instead 
of the default one, I'm buying the trick!


Next release should move from static files to SPARQL endpoint and all. 
But no real schedule so far. Stay tuned and be patient.


Thanks for your attention

Bernard

[1] Note to MDB publishers: the correct lingvoj.org URI for, e.g., 
English should be http://www.lingvoj.org/lang/en
and not http://www.lingvoj.org/lingvo/en, for which no semantics 
whatsoever are declared, even if it is dereferenced to the same RDF file 
thanks to some HTTP magic.

Thanks, guys, if you could correct this.





Re: News from lingvoj.org

2009-04-02 Thread Bernard Vatant

Hi Simon

Thanks for the feedback. You (or someone else) already mentioned a while 
ago this extra "language" in the labels. It's true in other languages 
too.
It's on my to-do list to clean the data in order to get rid of those, 
and/or to use skos:prefLabel for "English" and skos:altLabel for 
"English language", or maybe the other way round; I haven't completely 
made up my mind, I have to think about it. Any suggestions welcome. :-)


Bernard

Simon Reinhardt wrote:

Bernard Vatant wrote:

Hello all

I've started refreshing the content of pages at http://lingvoj.org, 
which was long overdue.
The data set now links to DBpedia (of course), Freebase and OpenCyc 
(which had been broken since OpenCyc had changed its URIs a while 
ago). But there are no backwards links from those yet... so far the 
only dataset that I know linking to lingvoj.org is Linked MDB, but it 
does not use the correct URIs [1]. :'( Please don't be shy! 
lingvoj.org URIs are cool : stable, simple, and dereferencable. ;-)
Bandwidth permitting, I will now focus on data quality, for the most 
used language (ISO 639-1 list).


Hi Bernard,

Great news! Re. interlinking: I'm linking to lingvoj in the university 
project I'm currently working on. Should it ever get into a 
publishable state I will show you right away. ;-)


In this same project I have a minor problem with the lingvoj data: I'm 
using the labels in your dataset so I can display the names of the 
languages in the user's language in the user interface. However, the 
Wikipedia article names you use for those labels don't always lend 
themselves nicely to user interface labels. Very often they say 
something like "English language" to distinguish the article from the 
one about "English" in general. But then in my interface it would say: 
"Language: English language". However, it's really just a minor 
annoyance and I'm not sure what to do about it either. Sure, there are 
infoboxes which contain the shorter name, but they are different for 
every language version of Wikipedia, and not all versions have them, so 
this doesn't work as a general mechanism for extracting the labels. 
Maybe other data sources have better labels. CLDR [1]? But that's not 
free, is it?


Regards,
 Simon

[1] http://cldr.unicode.org/








Re: News from lingvoj.org

2009-04-02 Thread Bernard Vatant

Hello Yves

I just added the links back from 185 lingvoj URIs to the equivalents 
declared in MusicBrainz.

See http://www.lingvoj.org/lang/br
There are certainly more possible mappings. To be continued ...

Bernard


Yves Raimond wrote:

Hello Bernard!

On Thu, Apr 2, 2009 at 10:12 AM, Bernard Vatant
bernard.vat...@mondeca.com wrote:
  

Hello all

I've started refreshing the content of pages at http://lingvoj.org, which
was long overdue.
The data set now links to DBpedia (of course), Freebase and OpenCyc (which
had been broken since OpenCyc had changed its URIs a while ago). But there
are no backwards links from those yet... so far the only dataset that I know
linking to lingvoj.org is Linked MDB, but it does not use the correct URIs
[1]. :'( Please don't be shy! lingvoj.org URIs are cool : stable, simple,
and dereferencable. ;-)



There are also a couple of links to lingvoj at
http://dbtune.org/musicbrainz/ (see
http://dbtune.org/musicbrainz/resource/language/aka and
http://dbtune.org/musicbrainz/directory/language, for example)

Cheers!
y

  

Bandwidth permitting, I will now focus on data quality, for the most used
language (ISO 639-1 list).

Some technical details : In the previous release I had generated static
content for html files as well as RDF files. Now the html page redirects
(html-style) to the rdf file, which calls a XSL stylesheet. The result, as I
can see is the following, regarding browsers I have tested :
- IE is happy with that. So in IE you can actually browse through languages,
clicking on the codes of the labels.
- Firefox + Tabulator extension calls its default RDF stylesheet, where the
languages of labels are ignored ... too bad. If anyone knows how to have
Firefox use the stylesheet declared in the RDF file instead of the default
one, I'm buying the trick!

Next release should move from static files to SPARQL endpoint and all. But
no real schedule so far. Stay tuned and be patient.

Thanks for your attention

Bernard

[1] Note to MDB publishers : the lingvoj.org correct URI for e.g., english
should be http://www.lingvoj.org/lang/en http://www.lingvoj.org/lingvo/en
and not http://www.lingvoj.org/lingvo/en for which no semantics whatsoever
declared, even if it is de-referenced to the same RDF file thanks to some
http magic
Thanks guys if you could correct this.







  






Re: fw: Google starts supporting RDFa -- 'rich snippets'

2009-05-13 Thread Bernard Vatant

Hi all

Agreed with Dan and all the others saying we have to welcome Google's 
move. But nevertheless, I take the risk of including myself in the 
"1000" defined below ... :-)
I suppose pages such as [1], with indications for webmasters, are likely 
to be read more by webmasters than the RDFa specs themselves or linked 
data best practices documents. So, is this page making the case 
correctly for linked data? For structured semantic data, yes, and never 
mind the vocabulary.
But for linked data, well, not much. Linked data are about 
relationships, and unfortunately the only example given in this page 
defining a relation between resources using "about" is "for the 
structured data geeks out there" ... and can be misleading for people 
not aware of what LOD is about.


<div xmlns:v="http://rdf.data-vocabulary.org/" typeof="v:Person">
  <span property="v:name">John Smith</span>
  <span rel="v:affiliation">
    <span about="http://en.wikipedia.org/wiki/Acme_Corporation"
          property="v:name">ACME</span>
  </span>
  ...
</div>

So John Smith is affiliated with a Wikipedia page. Whoever has the ear 
of the Google folks behind this could simply suggest replacing, in this 
example, "http://en.wikipedia.org/wiki/Acme_Corporation" with 
"http://dbpedia.org/resource/Acme_Corporation", and quickly explaining 
the difference.
Of course one can wonder if a fictional guy is better off being 
affiliated with a fictional corporation than with a real web page.


That said, to follow up on Dan's suggestion, would it really be 
difficult, e.g., for LOD HTML pages such as 
http://dbpedia.org/page/Acme_Corporation to be RDFa-ized?


Bernard


[1] http://google.com/support/webmasters/bin/answer.py?answer=146646

On 13/5/09 15:23, Kingsley Idehen wrote:


I desperately hope that you can see that Google is providing a huge
opportunity to showcase the Linked Data meme's value. Again, so what if
they don't use existing vocabularies? What matters is that they are
using RDFa to produce structured data, and that is simply huge!!!


Yeah, to be blunt, the last thing this situation needs right now is 
having 1000 semantic web pedants descend, complaining that they're not 
doing x, y or z right, that they don't get it, that they're 
copycatting yahoo, or whatever. This won't help anyone and would be 
severely counterproductive.


What would help right now is having real and sizable sites expose lots 
of RDFa HTML pages using FOAF, DOAP, SIOC, SKOS, CC etc. If anyone has 
such information and is exposing it only in RDF/XML and not RDFa, I'd 
suggest looking to make that change...


Dan









Linking back to sameas.org?

2009-06-08 Thread Bernard Vatant
/network/
Eg
http://www.rkbexplorer.com/network/?uri=http://southampton.rkbexplorer.com/i
d/person-62ca72227cd42255eb0d8c37383eccf0-2e1762effd1839702bc077c652d57901
  

Another thought - is the whole system necessarily based on pre-loaded
data, or could sameas.org make some explorations of the Web while you
wait? eg. do a few searches via Yahoo BOSS or Google JSON API and parse
the results for same-as's.


I would avoid this.
For it to be a service of the kind that John would use, I think it needs to
provide a guaranteed fast response (at least in the sense of no other
unexpected dependencies).
  

Re bad results it's worth looking at what Google SGAPI does. They
distinguish between one sided claims vs reciprocations. If my homepage
has rel=me pointing to my youtube profile, that's one piece of evidence
they have a common owner; if the profile has similar markup pointing
back, that's even more reassuring


Ah yes, now that is a big topic. Several PhDs on trust and provenance to be
done here. What is the provenance of each of the pairwise assertions, how
does that contribute to the bundle, how do multiple assertions from
different sources contribute? In fact, what is the calculus of all this?
Cheers
Hugh
  

cheers,

Dan


cheers,

Dan








Re: Linking back to sameas.org?

2009-06-09 Thread Bernard Vatant

Hugh, Toby

Thanks for the follow-up on this, and, well, I'm pretty much convinced 
now that rdfs:seeAlso is the simplest way to achieve the backlink to 
sameas.org.
And thinking twice, I'm now also convinced that neither adding any 
specific semantics to this link nor specifying the class of sameas.org 
URIs would provide anything more than the standard rdfs:seeAlso 
interpretation: "go there to find more, I trust them".


So I think in the next release of lingvoj.org I will replace, in each 
description, the bunch of local sameAs links by a single rdfs:seeAlso to 
sameas.org/foo. I guess it would not be difficult for the DBpedia folks 
to add it to DBpedia URIs too? Actually, supposing sameas.org becomes a 
sort of institution :-), RDF search/navigation tools could build in the 
sameas.org service extension as an option, so that publishers do not 
even need to include such links anymore. I can imagine a little "sameas" 
tab on the Tabulator menu.



Bernard


On 08/06/2009 20:59, Toby A Inkster t...@g5n.co.uk wrote:

  

On 8 Jun 2009, at 12:22, Bernard Vatant wrote:



http://sameas.org/html?uri=http://www.lingvoj.org/lang/fr provides
16 equivalent URIs (including the original one).
At http://www.lingvoj.org/lang/fr I've gathered painfully only 10
of those :-)
But now that sameas.org is alive, why should I care maintaining
those sameAs links locally?
  

I imagine that sameas.org found many of those links at lingvoj.org.


You imagine right.
Or at least to start with, as I recall that was one of the first sites that
started this particular hobby, when I realised that we had generated a bunch
of language URIs ourselves, and I wanted to link to the others.

In fact Bernard raises an issue dear to my heart, which is that publishers
of Linked Data should be able to do just that; and that others can be
facilitated in providing and maintaining the links, in particular without
putting any load on the data publisher to be troubled with any of it.
These sort of separations are good engineering practice. I have long liked
the idea that the linking is knowledge of a separate sort to the substantive
content. It certainly often has different temporal characteristics.

So I think what Bernard means is that he needs reliable service(s) that will
look after his hard-won links, and allow him to maintain the links where
necessary.
  

Using rdfs:seeAlso is as usual good but not precise enough. Maybe
sameas.org could provide a minimal vocabulary to describe its URI,
such as some http://sameas.org#hub property
http://www.lingvoj.org/lang/fr sameas:hub http://sameas.org/
html?uri=http://www.lingvoj.org/lang/fr
  

Perhaps better:

http://www.lingvoj.org/lang/fr
   rdfs:seeAlso
 http://sameas.org/html?uri=http://www.lingvoj.org/lang/fr .


I think that cygri's email thread ended in a similar conclusion?
I confess that the discussion has helped me to understand rdfs:seeAlso.
I had tried to understand it through the semantics.
But now I can see that from the point of view of the consumer it is just an
optional
#include http://sameas.org/rdf?uri=http://www.lingvoj.org/lang/fr
(if you will pardon the C syntax!)
That is, if I am following my nose bringing stuff into my local RDF cache to
play with, and I hit one of these (?s rdfs:seeAlso ?o), I can decide to add
the RDF resolved by ?o to the mix. In fact, I don't even care what the ?s
is, other than as part of the decision whether to follow ?o or not.
I guess everyone else already understood this. :-)
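The optional-`#include` reading of rdfs:seeAlso described above can be sketched like this — a crawler filling its local cache simply collects the seeAlso objects as candidate sources to dereference, ignoring the subjects (plain tuples stand in for cached RDF triples):

```python
RDFS_SEEALSO = "http://www.w3.org/2000/01/rdf-schema#seeAlso"

def optional_includes(triples):
    """Treat each (?s rdfs:seeAlso ?o) like an optional '#include ?o':
    return the objects a follow-your-nose crawler may choose to fetch
    and merge into its cache. The subject plays no role in the decision."""
    return sorted({o for (_s, p, o) in triples if p == RDFS_SEEALSO})

cache = [
    ("http://www.lingvoj.org/lang/fr", RDFS_SEEALSO,
     "http://sameas.org/rdf?uri=http://www.lingvoj.org/lang/fr"),
    ("http://www.lingvoj.org/lang/fr",
     "http://www.w3.org/2004/02/skos/core#prefLabel", "French"),
]
to_fetch = optional_includes(cache)
```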
  

http://sameas.org/html?uri=http://www.lingvoj.org/lang/fr
   rdf:type
 sameas:Hub .


Not sure why this would be needed.
But can be done.
By the way, I think the URI should be:
http://sameas.org/?uri=http://www.lingvoj.org/lang/fr
To allow the conneg.
  

As this will work in existing tools that understand rdfs:seeAlso.


Which is always good.
Best
Hugh
  



--

Bernard Vatant
Senior Consultant
Vocabulary & Data Engineering
Tel:  +33 (0) 971 488 459
Mail: bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:  http://www.mondeca.com
Blog: Leçons de Choses (http://mondeca.wordpress.com/)




WiserEarth API is open - 100,000+ orgs data to link

2009-06-10 Thread Bernard Vatant

Hello all

In case you missed this, WiserEarth [1] is opening its API [2] and 
proposes a conf call for developers tomorrow June 11 at 10am PDT (19h00 
Paris time)


The WiserEarth database is mainly an index of more than 111,000 
organizations (NGOs and businesses) working in the fields of environment, 
peace, social justice, sustainable development, education and the like, 
the world over. There is more data about events, jobs, solutions (and people).


On the radar of WiserEarth, in opening its API, is of course the 
prospect of seeing its data enter the LOD cloud, so watch this space, 
or jump in to help.


[1] http://www.wiserearth.org
[2] http://blog.wiserearth.org/?p=974
[3] http://www.wiserearth.org/organization






Re: [HELP] Can you please update information about your dataset?

2009-08-14 Thread Bernard Vatant

Richard, all

I've done my homework and added a voiD description of lingvoj.org 
dataset at http://www.lingvoj.org/void

It's still minimal, but at least it has stats. The links stuff is to be added ASAP.
For those who might care, note that it links to a new FOAF profile at 
http://www.lingvoj.org/foaf.rdf


Bernard

Richard Cyganiak wrote:
The problem at hand is: How to get reasonably accurate and up-to-date 
statistics about the LOD cloud?


I see three workable methods for this.

1. Compile the statistics from voiD descriptions published by 
individual dataset maintainers. This is what Hugh proposes below. 
Enabling this is one of the main reason why we created voiD. There has 
to be better tools for creating voiD before this happens. The tools 
could be, for example, manual entry forms that spit out voiD 
(voiD-o-matic?), or analyzers that read a dump and spit out a skeleton 
voiD file.


2. Hand-compile the statistics by watching public-lod, trawling 
project home pages, emailing dataset maintainers, and fixing things 
when dataset maintainers complain. This is how I created the original 
LOD cloud diagram in Berlin, and after I left Berlin, Anja has done a 
great job keeping it up to date despite its massive growth. We will 
continue to update it on a best-effort basis for the foreseeable 
future. A voiD version of the information underlying the diagram is in 
the pipeline. Others can do as we did.


3. Anyone who has a copy of a big part of the cloud (e.g. OpenLink and 
we at Sindice) can potentially calculate the statistics. This is 
non-trivial because we just have triples, and we need to 
reverse-engineer datasets and linksets from them, it involves 
computation over quite serious amounts of data, and in the end you 
still won't have good labels or homepages for the datasets. While this 
approach is possible, it seems to me that there are better uses of 
engineering and research resources.


There is a fourth process that, IMO, does NOT work:

4. Send an email to public-lod asking "Everyone please enter your 
dataset in this wikipage/GoogleSpreadsheet/fancyAppOfTheWeek".


Best,
Richard
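The analyzer-style tool Richard sketches in point 1 — read a dump and spit out a skeleton voiD file — might look like the following. A minimal sketch: triples are plain (s, p, o) tuples, only two voiD statistics are emitted, and the dataset URI is illustrative.

```python
def skeleton_void(dataset_uri, triples):
    """Read a parsed dump (as (s, p, o) tuples) and emit a skeleton
    voiD description in Turtle with basic statistics."""
    subjects = {s for s, _, _ in triples}
    return "\n".join([
        "@prefix void: <http://rdfs.org/ns/void#> .",
        "",
        f"<{dataset_uri}> a void:Dataset ;",
        f"    void:triples {len(triples)} ;",
        f"    void:distinctSubjects {len(subjects)} .",
    ])

dump = [("urn:ex:a", "urn:ex:p", "x"), ("urn:ex:a", "urn:ex:q", "y")]
void_ttl = skeleton_void("http://example.org/dataset", dump)
```

A voiD-o-matic in this spirit would extend the skeleton with labels, homepage, and linkset counts entered by the maintainer.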








Re: Top three levels of Dewey Decimal Classification published as linked data

2009-08-20 Thread Bernard Vatant

Ed, Michael

Hi Michael,

This is really exciting news, especially for those of us Linked Data
enthusiasts in the world of libraries and archives. Congratulations!
  

+1

I haven't fully read the wiki page yet, so I apologize if this
question is already answered there. I was wondering why you chose to
mint multiple URIs for the same concept in different languages. 

I had exactly the same question, you stole it ...


...

I kind of expected the assertions to hang off of a language and
version agnostic URI, with perhaps dct:hasVersion links to previous
versions.
  

Indeed.

<http://dewey.info/class/641/>
    cc:attributionName "OCLC Online Computer Library Center, Inc." ;
    cc:attributionURL <http://www.oclc.org/dewey/> ;
    cc:morePermissions <http://www.oclc.org/dewey/about/licensing/> ;
    dct:hasVersion <http://dewey.info/class/641/2009/08/> ;
    dct:language "de"^^dct:RFC4646 ;
    a skos:Concept ;
    xhtml:license <http://creativecommons.org/licenses/by-nc-nd/3.0/> ;
    skos:broader <http://dewey.info/class/64/2003/08/about.de> ;
    skos:inScheme <http://dewey.info/scheme/2003/08/about.de> ;
    skos:notation "641"^^<http://dewey.info/schema-terms/Notation> ;
    skos:prefLabel "Food & drink"@en, "Essen und Trinken"@de .
  

Maybe rather:

    skos:broader <http://dewey.info/class/64/2003/> ;
    skos:inScheme <http://dewey.info/scheme/2003/08/> ;

Although I don't understand why some classes have year information 
in the URI (2003) and some have none?


Bernard





Re: Updated GeoSpecies Data Set

2009-10-27 Thread Bernard Vatant
Hello Daniel

Interesting data. If I had to model them, I guess I would define one
resource for each Species, and one for each Sighting.
For the species, I would reuse as far as possible existing URIs, such as
http://dbpedia.org/page/Desert_Froglet for Crinia deserticola.
But certainly Peter has more precise recommendations based on his work at
GeoSpecies.
Sighting I would model as a subtype of Event as defined e.g., in the
Event Ontology at http://motools.sourceforge.net/event/event.html
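The modelling suggested here — one resource per Sighting, typed as a subtype of Event, reusing an existing species URI — could be sketched as below. Assumptions: plain tuples stand in for RDF triples, the Sighting class and observedSpecies property live in a hypothetical local namespace, and the event:Event URI is the usual namespace of the Event Ontology linked above.

```python
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"
RDFS_SUBCLASS = "http://www.w3.org/2000/01/rdf-schema#subClassOf"
EVENT = "http://purl.org/NET/c4dm/event.owl#Event"
# Hypothetical local class and property for illustration:
SIGHTING = "http://example.org/ns#Sighting"
OBSERVED = "http://example.org/ns#observedSpecies"

def sighting(uri, species_uri):
    """One resource per Sighting (declared a subtype of event:Event),
    linked to a reused species URI."""
    return [
        (SIGHTING, RDFS_SUBCLASS, EVENT),
        (uri, RDF_TYPE, SIGHTING),
        (uri, OBSERVED, species_uri),
    ]

triples = sighting("http://example.org/sighting/1",
                   "http://dbpedia.org/page/Desert_Froglet")
```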

Now, googling for "sighting ontology" retrieves this research paper
http://www.itee.uq.edu.au/~eresearch/projects/ecoportalqld/papers/SemWildNET.pdf
which seems to address exactly your question ...

Bernard

2009/10/27 Daniel O'Connor daniel.ocon...@gmail.com

 Hey, I don't suppose anyone could lend a hand modelling
 http://data.australia.gov.au/570 appropriately?







Re: Need help mapping two letter country code to URI

2009-11-10 Thread Bernard Vatant
Hugh

The actual problem, as your sameas.org example shows well, is not the
lack of URIs for countries, but figuring out which are cool (stable,
authoritative, published following best practices). sameas.org yields 23
URIs for Austria, 29 for France, etc.
Supposing they are all really equivalent in the strong owl:sameAs sense,
any of those should do, but ...
On the other hand, maybe more authoritative sources are absent from the
sameas.org list, such as the excellent FAO ontology pointed out by Dan. And
above all, what is definitely missing are sets of URIs published by ISO
itself.
There is ongoing work aiming at authoritative URIs for ISO 639-2
languages by its registration authority at the Library of Congress,
http://www.loc.gov/standards/iso639-2/. As I understand it, those URIs will be
published under http://id.loc.gov/authorities/, so watch this space. I cc
Rebecca Guenther, who is in charge of this effort at LoC; she'll certainly be
able to provide an update about this, and maybe she's aware of some equivalent
effort for ISO 3166-1. But according to an exchange I had with her a while
ago, ISO itself might be years away from publication under its own
namespace, unfortunately.

Bernard
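The code-to-URI mapping under discussion can be sketched as below, using the two candidate URI schemes quoted in Hugh's reply. Assumptions: whether each site covers every ISO 3166-1 alpha-2 code is unverified, and the 'UK' normalization is exactly the kind of border case Aldo flags (the ISO alpha-2 code for the United Kingdom is 'GB').

```python
# Candidate URI templates taken from replies in this thread; coverage of
# all ISO 3166-1 alpha-2 codes by each site is an assumption to verify.
TEMPLATES = [
    "http://unlocode.rkbexplorer.com/id/{code}",
    "http://ontologi.es/place/{code}",
]

def candidate_uris(code):
    """Map a two-letter country code to candidate Linked Data URIs.
    Returns (looks_like_iso, uris): 'UK' is the classic tell that a
    dataset is NOT strict ISO 3166-1."""
    code = code.upper()
    looks_like_iso = code != "UK"
    if not looks_like_iso:
        code = "GB"
    return looks_like_iso, [t.format(code=code) for t in TEMPLATES]
```

Running a complete coverage test then reduces to mapping every code in the dataset and checking the results against labels.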


2009/11/9 Hugh Glaser h...@ecs.soton.ac.uk

 There are quite a few, but I don't know which other ones follow ISO 3166-1.
 http://sameas.org/?uri=http://dbpedia.org/resource/Austria
 Gives a selection.
 Or also
 http://unlocode.rkbexplorer.com/id/AT
 http://ontologi.es/place/AT

 Our site, http://unlocode.rkbexplorer.com/id/AT
 is our capture of UN/LOCODE 2009-1, the United Nations Code for Trade and
 Transport Locations, which uses the 2-letter country codes from ISO 3166-1,
 as well as the 1-3 letter subdivision codes of ISO 3166-2
 See http://www.unece.org/cefact/locode/
 It also gives inclusion and coords, etc.
 We need to do more coref to sites other than ontologi.es.

 Best
 Hugh

 On 09/11/2009 21:47, Aldo Bucchi aldo.buc...@gmail.com wrote:

  Hi,
 
  I found a dataset that represents countries as two letter country
  codes: DK, FI, NO, SE, UK.
  I would like to turn these into URIs of the actual countries they
 represent.
 
  ( I have no idea on whether this follows an ISO standard or is just
  some private key in this system ).
 
  Any ideas on a set of candidate URIs? I would like to run a complete
  coverage test and take care I don't introduce distortion ( that is
  pretty easy by doing some heuristic tests against labels, etc ).
 
  There are some border cases that suggest this isn't ISO3166-1, but I
  am not sure yet. ( and if it were, which widely used URIs are based on
  this standard? ).
 
  Thanks!
  A








Re: Lightweight RDF to Map Various Semantic Representations of Species

2009-12-08 Thread Bernard Vatant
correspondence tables and constructive queries. And you don't have to wonder
any more about the mysterious semantic nature of the link between the class
there and the concept here. It is that you can translate assertions from
there into assertions here.

More thoughts on this representation as translation paradigm on my blog at
[1], certainly more technical follow-up in the near future, so stay tuned.

Cheers

Bernard

[1] http://blog.hubjects.com/2009/11/representation-as-translation.html

2009/11/30 Peter DeVries pete.devr...@gmail.com

 Hi LOD'ers :-)

 I am trying to work out some way to map the various semantic
 representations for a species, in conjunction with a friendly three letter
 organization.

 The goal of these documents is in part to improve findability of
 information about species.

 The hope is that they will also help serve as a bridge from the LOD
 to species information from the three letter organization and its partners.

 The resources are mapped using skos:closeMatch.

 This should allow consumers to choose those attributes of each species
 resource that they think are appropriate.

 It has been suggested to me that more comprehensive documents describing
 species should be in the form of OWL documents, so I have included
 nonfunctional links to these hypothetical resources.

 I have the following examples, and am looking for comments and suggestions.

 RDF Example   http://rdf.taxonconcept.org/ses/v6n7p.rdf

 Ontology      http://rdf.taxonconcept.org/ont/txn.owl

 Ontology Doc  http://rdf.taxonconcept.org/ont/txn_doc/index.html

 VOID          http://rdf.taxonconcept.org/ont/void.rdf

 I look forward to your comments and suggestions, :-)

 - Pete
 
 Pete DeVries
 Department of Entomology
 University of Wisconsin - Madison
 445 Russell Laboratories
 1630 Linden Drive
 Madison, WI 53706
 GeoSpecies Knowledge Base
 About the GeoSpecies Knowledge Base
 







Re: Colors

2010-02-24 Thread Bernard Vatant
And you have even wonderful classes such as
http://dbpedia.org/class/yago/ShadesOfBlue :)
BTW Dan, what about a new property foaf:favouriteColor? ;-)

Bernard

2010/2/24 Dan Brickley dan...@danbri.org

 On Wed, Feb 24, 2010 at 8:31 AM, Pat Hayes pha...@ihmc.us wrote:
  Does anyone know of URIs which identify colors? Umbel has the general
 notion
  of Color, but I want the actual colors, like, you know, red, white, blue
 and
  yellow. I can make up my own, but would rather use some already out
 there,
  if they exist.
 
  Many thanks for any pointers.

 How scruffy are you feeling?
 http://en.wikipedia.org/wiki/List_of_colors suggests you'll find a lot
 in Wikipedia / dbpedia...

 Dan



Re: SKOS, owl:sameAs and DBpedia

2010-03-24 Thread Bernard Vatant
Hi all

 see also http://wiki.foaf-project.org/w/term_focus
 
 
  However, I'd like to understand why a sameAs would be bad here, I have
  the intuition it might be, but am really not sure. It looks to me like
  there's no resource out there that couldn't be a SKOS concept as well
  (you may want to use anything for categorisation purpose --- the loose
  categorisation relationship being encoded in the predicate, not the
  type). If it can't be, then I am beginning to feel slightly
  uncomfortable about SKOS :-)

 Because conceptualisations of things as SKOS concept are distinct from
 the things themselves. If this weren't the case, we couldn't have
 diverse treatment of common people/places/artifacts in multiple SKOS
 thesauri, since sameAs merging would mangle the data. SKOS has lots of
 local administrative info attached to each concept which doesn't make
 sense when considered to be properties of the thing the concept is a
 conceptualization of.



I'm glad to see those things expressed so neatly, thanks Dan for this!

Another example to drive this nail home, one in which I'm currently deeply
engaged: the evolution of concepts and concept schemes over time. A concept
can for example be renamed or moved within a given scheme. The descriptions
of the same concept in two different versions of the concept scheme can
therefore be distinct; IOW you'll have two different skos:Concepts with
different URIs and different descriptions, ruling out the use of owl:sameAs.
But to capture the fact that they have the same referent, they would share
the same value of foaf:focus.
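The shared-referent test described here can be sketched directly: two versioned concept URIs are not owl:sameAs, but they count as conceptualizations of the same thing when they share a foaf:focus value. A minimal sketch with plain tuples for triples; the versioned scheme URIs are hypothetical, the foaf:focus and DBpedia URIs are real.

```python
FOAF_FOCUS = "http://xmlns.com/foaf/0.1/focus"

def same_referent(triples, concept_a, concept_b):
    """True when two skos:Concept URIs (e.g. from two versions of a
    scheme) share at least one foaf:focus value."""
    def focus(s):
        return {o for (x, p, o) in triples if x == s and p == FOAF_FOCUS}
    return bool(focus(concept_a) & focus(concept_b))

# Hypothetical versioned concept URIs pointing at one real-world referent.
g = [
    ("http://example.org/scheme/2009/C42", FOAF_FOCUS,
     "http://dbpedia.org/resource/Berlin"),
    ("http://example.org/scheme/2010/C42", FOAF_FOCUS,
     "http://dbpedia.org/resource/Berlin"),
]
```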

BTW, for those interested in this issue, after discussion with Antoine Isaac
yesterday, we (he) set up a new page about it on the SW wiki:
http://www.w3.org/2001/sw/wiki/SKOS/Issues/ConceptEvolution

The first reference paper, by Joseph Tennis, introduces the notion of an
abstract concept vs. a concrete concept (the one instantiated in a concept
scheme). In FOAF parlance, the former would be the focus of the latter.

Bernard




Re: Should dbpedia have stuff in that is not from wikipedia - was: Re: A URI(Web ID) for the semantic web community as a foaf:Group

2010-03-30 Thread Bernard Vatant
Hi Hugh and all

... skipping Kingsley-related stuff :)


    This is of interest to me because there is a whole load of other
 stuff that
 appears under the dbpedia banner, mostly concerned with sameAs with other
 resources (some of which I disagree with).


Pat Hayes and Harry Halpin have a nice paper for LDOW 2010 about use and
abuse of owl:sameAs
http://events.linkeddata.org/ldow2010/papers/ldow2010_paper09.pdf


 I think that most people who use dbpedia are using it on the basis that
 what
 they get from dbpedia is a reflection (for good or bad, of course) of the
 contents of wikipedia infoboxes and whatever else the dbpedia team have
 managed to glean from the site.


I would say "translation" or "re-presentation" rather than "reflection".
I've expanded on this notion of translation a few months ago:
http://blog.hubjects.com/2009/11/representation-as-translation.html
Wikipedia content itself is the result of a long chain of re-presentations
of knowledge. DBpedia is yet another step in the translation /
re-presentation of knowledge. There is a lot of added value even if it's the
"same" content (whatever that means). Interpreting fields in the infobox and
making their semantics explicit is not a simple reflection. There is added
value, there is re-interpretation in terms of ontologies that were not
invented in Wikipedia, alignment of equivalent fields, etc. And linking to
other representations is certainly part of the process.

Adding other stuff, for whatever reason, complicates the trust and
 provenance of the source.
 Exactly what is the provenance of resolving a dbpedia URI?
 Well, it is a subset of the wikipedia information, plus possibly a chunk
 more.


Indeed, but the same goes for anything produced by human intelligence. It's
bits of the legacy plus a chunk more. Dwarfs on giants' shoulders, etc.

I think that dbpedia (all praise to its amazing achievement) should restrict
 itself to publishing exactly and only what it has gleaned from wikipedia,
 and any other stuff should be published elsewhere.


IMHO "exactly and only" can't make any sense here. There is no explicit
semantics in WP, and there is in DBpedia.

Bernard




Re: Announce: Linked Data Patterns book

2010-04-07 Thread Bernard Vatant
Interesting exchange

Following Ian here to say that quads are not necessary. Some workarounds
come to mind.

Any linked data consumer can at any moment dereference the URI to sort out
the original (aka authoritative) description triples from those asserted by
some other source.
When mashing/meshing descriptions from different sources, if one does not
want to nitpick over who said what, but nevertheless wants to keep track of
the sources used, using dcterms:source at either RDF document or resource
level can do it, keeping rdfs:isDefinedBy for the original source as
asserted by the URI reference if needed.
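The resource-level dcterms:source workaround suggested here can be sketched as follows: mesh triples from several sources, and tag each described resource with the source it came from, without resorting to quads or named graphs. Plain tuples stand in for RDF triples; the source document URLs and property URIs other than dcterms:source are illustrative.

```python
DCT_SOURCE = "http://purl.org/dc/terms/source"

def mesh(descriptions):
    """Merge triples fetched from several sources, tagging each described
    resource with dcterms:source at resource level — tracking the sources
    used without nitpicking over who said what (no quads needed)."""
    merged = []
    for source_uri, triples in descriptions:
        merged.extend(triples)
        for subject in {s for s, _, _ in triples}:
            merged.append((subject, DCT_SOURCE, source_uri))
    return merged

meshed = mesh([
    ("http://dbpedia.org/data/Berlin.rdf",
     [("http://dbpedia.org/resource/Berlin", "urn:ex:population", "3400000")]),
    ("http://example.org/other.rdf",
     [("http://dbpedia.org/resource/Berlin", "urn:ex:note", "extra claim")]),
])
```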

Bernard

Ian Davis li...@iandavis.com

 On Wed, Apr 7, 2010 at 12:14 AM, Peter Ansell ansell.pe...@gmail.com
 wrote:

  It is entirely consistent with the Linked Data principles to make
  statements about third-party resources.
 
  I don't believe that to be true, simply because, unless users are
  always using a quad model (RDF+NamedGraphs), they have no way of
  retrieving that information just by resolving the foreign identifier
  which is the subject of the RDF triple. They would have to stumble on
  the information by knowing to retrieve the object URI, which isn't
  clear from the pattern description so far. In a triples model it is
  harmful to have this pattern as Linked Data, as the statements are not
  discoverable just knowing the URI.
 

 Can you elaborate more on the harm you suggest here?

 I don't think we need to limit the data published about a subject to
 that subset retrievable at its URI.  (I wrote a little about this last
 year at http://blog.iandavis.com/2009/10/more-than-the-minimum )

 I also don't believe this requires the use of quads. I think it can be
 interlinked using rdfs:seeAlso.







Proliferation of URIs - the lingvoj-lexvo-rosetta use case Re: Catalog of Ontology Modeling Issues

2010-04-16 Thread Bernard Vatant
. if you do find a relevant thread,
   1. the content is raw and hard to digest;
   2. the summary you want is hard to find, even it is there.
4. there is no agreed place to go to find out about modeling issues;
5. it is often easier just to ask the question again, and the cycle
continues.

 We have created the *ontology modeling issues* section
 (http://ontologydesignpatterns.org/wiki/Community:Main) in the ODP Wiki
 (http://ontologydesignpatterns.org/wiki) to address these problems.

 HOW: We envision the following steps in the evolution of a modeling issue:

 1. Lively discussion happens on some mailing list.
 2. Post a summary to the list of the key points raised, including the
    pros and cons of proposed solutions.
 3. Post a modeling issue on the ODP Wiki (based on that summary).
 4. Post a note to any relevant discussion lists inviting them to
    contribute to the Wiki.
 5. Discuss and refine the issue further in the ODP Wiki.
 6. Post major updates back to relevant discussion lists.

 OR, start with step 3, and post the modeling issue directly on the ODP
 Wiki.
 *

 To Contribute:

 1. Visit the Ontology Design Patterns Wiki: http://ontologydesignpatterns.org/
 2. Click the How to register link
    (http://ontologydesignpatterns.org/wiki/Odp:Register) at lower left of
    the page; follow the instructions to get a login name and password.
 3. Visit the Ontology Modeling Issues page
    (http://ontologydesignpatterns.org/wiki/Community:Main) for further
    information, examples and instructions.
    (See also http://ontologydesignpatterns.org/wiki/Odp:WhatIsAnExemplaryOntology)

 Examples (from discussion lists):

 1. Proliferation of URIs, Managing Coreference
    http://ontologydesignpatterns.org/wiki/Community:Proliferation_of_URIs%2C_Managing_Coreference
 2. Overloading owl:sameAs
    http://ontologydesignpatterns.org/wiki/Community:GI_Overloading_owl_sameAs
 3. Versioning and URIs
    http://ontologydesignpatterns.org/wiki/Community:Versioning_and_URIs
 4. Representing Species
    http://ontologydesignpatterns.org/wiki/Community:epresenting_Species
 5. Using SKOS Concept
    http://ontologydesignpatterns.org/wiki/Community:Using_SKOS_Concept
 6. Resource multiple attribution
    http://ontologydesignpatterns.org/wiki/Community:Resource_multiple_attribution



 The above issues were ones that I found by poring over all the threads in
 the linking open data list from December 2009, plus some that I was directly
 involved in from 2008. There are many others to be found on many other
 lists.

 This work was originally supported by the NeOn project:
 http://www.neon-project.org/

 Thanks very much,
  Michael
 ==







Semantic black holes at sameas.org Re: [GeoNames] LOD mappings

2010-04-23 Thread Bernard Vatant
Alexander :

It would be useful to have a list of currently available mappings to
 GeoNames. It would be useful not only for people like me who create custom
 RDF datasets but also for people who want to contribute additional mappings.


Seems a good idea

Daniel :


 Re-publish your data with rdfs:seeAlso
 http://sameas.org/rdf?uri=http%3A%2F%2Fsws.geonames.org%2F2078025%2F perhaps?


This seems like a good idea. Considering that geonames.org cannot dedicate
(m)any resources to LOD mappings, those can be deferred to external services
such as sameas.org. The sameas.org URI is easy to generate automatically
from the geonames id.
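Generating that rdfs:seeAlso target from a GeoNames id is indeed a one-liner; a minimal sketch (the id 2078025 is the one from Daniel's example above):

```python
from urllib.parse import quote

def geonames_bundle(geonames_id):
    """Derive the sameas.org RDF bundle URL for a GeoNames feature from
    its numeric id, suitable as an rdfs:seeAlso target."""
    feature = f"http://sws.geonames.org/{geonames_id}/"
    return "http://sameas.org/rdf?uri=" + quote(feature, safe="")
```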

So far so good. But let's look at it closely. Someone has to feed this kind
of recursive, iterative social process happening at sameas.org, but there
is no provenance tracking, and over time the clustering of URIs will make
the concepts fuzzier and fuzzier, turning sameas.org into a tool for
creating semantic black holes.

It would be definitely better to have some clear declaration from Geonames
viewpoint which of its three URIs for Berlin
http://sws.geonames.org/2950159/, http://sws.geonames.org/6547383/ or
http://sws.geonames.org/6547539/ should map to
http://dbpedia.org/resource/Berlin. So far, none of them does.

From the DBpedia side, owl:sameAs declarations at the latter URI are as
follows (today):

   - opencyc:en/Berlin_StateGermany
     (http://sw.opencyc.org/2008/06/10/concept/Mx4rv77EfZwpEbGdrcN5Y29ycA)
   - fbase:Berlin
     (http://rdf.freebase.com/ns/guid.9202a8c04000641f800094d6)
   - http://umbel.org/umbel/ne/wikipedia/Berlin
   - opencyc:en/CityOfBerlinGermany
     (http://sw.opencyc.org/2008/06/10/concept/Mx4rvVjrhpwpEbGdrcN5Y29ycA)
   - http://www4.wiwiss.fu-berlin.de/eurostat/resource/regions/Berlin
   - http://sws.geonames.org/2950159/
   - http://data.nytimes.com/N50987186835223032381

So it seems DBpedia has decided to map its Berlin to the Geonames feature of
type "capital of a political entity", a subtype of "populated place". Why
not? OTOH it also declares two equivalents in OpenCyc, one being a state and
the other a city. If OpenCyc buys the DBpedia declarations, the semantic
collapse begins.

Let's go yet closer to the black hole horizon ...

http://sameas.org/html?uri=http%3A%2F%2Fdbpedia.org%2Fresource%2FBerlin

... yields 29 URIs including the previous ones ...

Had geonames.org taken the time to map its administrative features carefully
onto the respective city and state OpenCyc resources, the three different
URIs carefully coined to keep Berlin-the-populated-place distinct from the
two administrative subdivisions bearing the same name would, by the grace of
DBpedia fuzziness, be crushed into the same sameas.org semantic black hole.

Bottom line: given the current state of affairs for geographical entities in
the linked data cloud, Geonames' agnosticism re. owl:sameAs is rather a good
thing. There are certainly more subtle ways to link geo entities at various
levels of granularity, and a lot of work to achieve semantic interoperability
of geo entities defined everywhere. Things are moving forward, but it will
be a long way and will need a lot of resources. Look e.g. at the Yahoo!
concordance
(http://developer.yahoo.com/geo/geoplanet/guide/api-reference.html#api-concordance),
which BTW also links to Geonames ids.

In conclusion:

YES, Marc Wick is right to currently focus on data and data quality first. A
tremendous set of data is available for free; take what you can and what you
wish and build on it. If you want premium services, pay for them. Fair enough.


YES, it would be great to have Geonames data/URIs better integrated and
better connected to the linked data economy: more complete descriptions at
sws.geonames URIs, a SPARQL endpoint, etc. Bearing in mind that geonames.org
has no dedicated resources for it, who will take care of that in a scalable
way? What is the business model? Good questions. Volunteers, step forward :)

Bernard




Re: Semantic black holes at sameas.org Re: [GeoNames] LOD mappings

2010-04-23 Thread Bernard Vatant
Hi Giovanni

2010/4/23 Giovanni Tummarello giovanni.tummare...@deri.org

 Hi Bernard, the need to automatically interlink at large scale, and
 give clean, high-performance queryable datasets to users is well
 recognized and supported, e.g. also by the new EU-funded projects which
 still can't be named (I guess) yet


Oh, more of those you won't even dare to pronounce? One of them is called
Eyjafjallajökull, I've heard :)


 but are now about to be confirmed.

 so hang on tight a bit.. we're working on this, just continue
 publishing high quality data with good entity descriptions (as much as
 you know about YOUR stuff), and the links will come to you just like
 that at some point. I promise :)


WOW ... rings a bell: "...and all these things will be given to you as
well." Let me find out. Here it is: http://bible.cc/matthew/6-33.htm
BTW, amazing coreference resource :))

Cheers

Bernard




Re: Semantic black holes at sameas.org Re: [GeoNames] LOD mappings

2010-04-27 Thread Bernard Vatant
Hi Hugh

2010/4/27 Hugh Glaser h...@ecs.soton.ac.uk

 Thanks Bernard.
 Yes, I think the problems you raise are valid.
 Just a short response.
 In some sense I consider sameas.org to be a discovery service.


Indeed, so do I. The known issue is the overloading of owl:sameAs, but you
have an excellent presentation by Pat Hayes and Harry Halpin coming up just
today ... (you are at LDOW 2010, I guess)


 This is in contrast to a service that might be called something more
 definitive.
 So I have taken quite a liberal view of what I will accept on the site.

We have other services that are much more conservative in their view; in
 particular the ones we use for RKBExplorer.
 So what we are trying to do is capture a spectrum of views of what
 constitutes equivalence, which will always be a moveable feast.


Agreed with all that. Maybe you could introduce a sameas ontology for
different flavours of equivalence, containing a single property
sameas:sameas of which owl:sameAs, owl:equivalent*, skos:*Match ... would
be subproperties. In that case the liberal clustering would use
sameas:sameas, and the more conservative ones whatever fits.
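The proposed sameas ontology amounts to a handful of rdfs:subPropertyOf assertions; a minimal sketch with plain tuples for triples. The sameas:sameas property URI is hypothetical (no such namespace is published), while the OWL and SKOS property URIs are the real ones.

```python
RDFS_SUBPROP = "http://www.w3.org/2000/01/rdf-schema#subPropertyOf"
# Hypothetical URI for the proposed liberal property:
SAMEAS_SAMEAS = "http://sameas.org/ns#sameas"

# The different flavours of equivalence mentioned in the thread.
FLAVOURS = [
    "http://www.w3.org/2002/07/owl#sameAs",
    "http://www.w3.org/2002/07/owl#equivalentClass",
    "http://www.w3.org/2002/07/owl#equivalentProperty",
    "http://www.w3.org/2004/02/skos/core#exactMatch",
    "http://www.w3.org/2004/02/skos/core#closeMatch",
]

# Each flavour becomes a subproperty of the liberal sameas:sameas, so the
# liberal clustering can use sameas:sameas while conservative services
# keep whichever precise flavour fits.
sameas_ontology = [(p, RDFS_SUBPROP, SAMEAS_SAMEAS) for p in FLAVOURS]
```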

BTW, I am currently working in connection with Gerard de Melo at
http://lexvo.org on a semiotic approach to this issue, connecting
vocabulary resources (concepts, classes, whatever) through the terms they
use. You might bring that up on the LDOW forum.

Have fun

Bernard



 Best
 Hugh

 On 23/04/2010 16:14, Bernard Vatant bernard.vat...@mondeca.com wrote:

 Alexander :

 It would be useful to have a list of currently available mappings to
 GeoNames. It would be useful not only for people like me who create custom
 RDF datasets but also for people who want to contribute additional mappings.

 Seems a good idea

 Daniel :

 Re-publish your data with rdfs:seeAlso
 http://sameas.org/rdf?uri=http%3A%2F%2Fsws.geonames.org%2F2078025%2Fperhaps?

 This seems like a good idea. Considering that geonames.org cannot dedicate
 (m)any resources to LOD mappings, those can be deferred to external
 services such as sameas.org. The sameas.org URI is easy to generate
 automatically from the geonames id.

 So far so good. But let's look at it closely. Someone has to feed this kind
 of recursive and iterative social process happening at sameas.org 
 http://sameas.org , but there is no provenance track, and the clustering
 of URIs will make with the time the concepts more and more fuzzy, and
 sameas.org http://sameas.org  a tool to create semantic black holes.

 It would be definitely better to have some clear declaration, from the
 Geonames viewpoint, of which of its three URIs for Berlin
 (http://sws.geonames.org/2950159/, http://sws.geonames.org/6547383/ or
 http://sws.geonames.org/6547539/) should map to
 http://dbpedia.org/resource/Berlin. So far, none does.

 From the DBpedia side, owl:sameAs declarations at the latter URI are as
 follows (today):

   *   opencyc:en/Berlin_StateGermany
       (http://sw.opencyc.org/2008/06/10/concept/Mx4rv77EfZwpEbGdrcN5Y29ycA)
   *   fbase:Berlin
       (http://rdf.freebase.com/ns/guid.9202a8c04000641f800094d6)
   *   http://umbel.org/umbel/ne/wikipedia/Berlin
   *   opencyc:en/CityOfBerlinGermany
       (http://sw.opencyc.org/2008/06/10/concept/Mx4rvVjrhpwpEbGdrcN5Y29ycA)
   *   http://www4.wiwiss.fu-berlin.de/eurostat/resource/regions/Berlin
   *   http://sws.geonames.org/2950159/
   *   http://data.nytimes.com/N50987186835223032381

 So it seems DBpedia has decided to map its Berlin to the Geonames feature
 of type capital of a political entity, a subtype of populated place. Why
 not? OTOH it also declares two equivalents in opencyc, one being a state and
 the other a city. If opencyc buys the DBpedia declarations, the semantic
 collapse begins.

 Let's go yet closer to the black hole horizon ...

 http://sameas.org/html?uri=http%3A%2F%2Fdbpedia.org%2Fresource%2FBerlin

 ... yields 29 URIs including the previous ones ...

 If geonames.org had taken the time to map carefully its administrative
 features onto the respective city and state opencyc resources, the three
 different URIs carefully coined to make distinct entities for Berlin as a
 populated place and the two administrative subdivisions bearing the same
 name would, by the grace of DBpedia fuzziness, be crushed into the same
 sameas.org semantic black hole.

 Bottom line. Given the current state of affairs for geographical entities
 in the linked data cloud, geonames agnosticism re. owl:sameAs is rather a
 good thing. There are certainly more subtle ways to link geo entities at
 various levels of granularity, and a lot of work to achieve semantic
 interoperability of geo entities defined everywhere. Things are moving
 forward, but it will be a long way, needing a lot of resources. Look e.g.
 at the Yahoo! concordance
 (http://developer.yahoo.com/geo/geoplanet/guide/api-reference.html#api-concordance),
 which BTW also links to geonames ids.

 In conclusion:

 YES

Countries Re: [GeoNames] GeoNames RDF dataset improvements

2010-04-29 Thread Bernard Vatant
Hi Alexander and all

[cc to LOD list since there is a parallel thread on what is a country]

I'll try to sum up clearly the current status of countries in both the
Geonames ontology and the RDF service output (leaving the dump aside), and
how it will (should) be changed in future releases.

Let's take the example of Argentina, ISO code = AR.

Currently Geonames has way too many URIs to describe this country.
[1] The feature id=3865483 : http://sws.geonames.org/3865483/
[2] An anchor in the countries page : http://www.geonames.org/countries/#AR
[3] An HTML description linked from the above
http://www.geonames.org/countries/AR/argentina.html

Each of those URIs provides some description, but only [1] should be used in
the RDF output as value of the inCountry attribute. OTOH only [2] provides
a clear list of countries based on the existence of an ISO code, but this
URI is not linked-data friendly at all. URIs such as [2] are currently used
as values of the inCountry object property, put on each feature inside the
country territory.

Now how do you figure out, when looking only at a feature RDF description
such as the one provided at [1], whether it has a match in the list at [2]?
Maybe the feature code would help. We find
<featureCode rdf:resource="http://www.geonames.org/ontology#A.PCLI"/>
which means that Argentina is an independent political entity. Is that not
the same thing as a country? Well, most of the time, yes, but all the time,
no. The list of countries per ISO code at [2] counts 248, the number of
features with code PCLI is 192 ... which leaves us with 56 countries with
either no matching feature or a code different from PCLI. Is every PCLI a
country? I won't swear it is, but let's assume so for a moment.

Now http://download.geonames.org/export/dump/countryInfo.txt gives you more
info on the 248 countries, including ... the matching geonames id. But not
the feature code. Still, every country has its feature match. Good news. But
so far we have no clue to infer, from the description at [1], that
Argentina is indeed a country, and we're left with at least 56 countries
which are not independent political entities. What a world ... no wonder
we have so many wars ...

Actually the description at [1] does not say that [1] *is* a country, but it
says it is *in* the country defined by [2] ...
<inCountry rdf:resource="http://www.geonames.org/countries/#AR"/>

I don't think we can clear this mess, because the world is messy. So what I
propose is the following :

- Deprecate the ObjectProperty inCountry altogether.

- Replace it by a countryCode property giving the ISO 2-letter code. The
transformation is completely straightforward, e.g.,

<inCountry rdf:resource="http://www.geonames.org/countries/#AR"/>
will be replaced by
<countryCode>AR</countryCode>

and indeed can be used the same way

- For each feature matching a country, whichever its feature code, put a
rdfs:seeAlso link to the URI at [2] in its description
<rdfs:seeAlso rdf:resource="http://www.geonames.org/countries/#AR"/>

And we're done.
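For what it's worth, the proposed rewrite is mechanical enough to sketch with nothing but the standard library. The gn: prefix, the function name and the sample snippet below are my own assumptions for illustration, not actual Geonames output:

```python
import xml.etree.ElementTree as ET

GN = "http://www.geonames.org/ontology#"
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
ET.register_namespace("gn", GN)  # keep a readable prefix on output

def rewrite_in_country(feature_xml: str) -> str:
    """Turn each <gn:inCountry rdf:resource=".../#XX"/> into <gn:countryCode>XX</gn:countryCode>."""
    root = ET.fromstring(feature_xml)
    for elem in root.iter():
        if elem.tag == f"{{{GN}}}inCountry":
            resource = elem.attrib.pop(f"{{{RDF}}}resource")
            elem.tag = f"{{{GN}}}countryCode"
            elem.text = resource.rsplit("#", 1)[-1]  # ".../countries/#AR" -> "AR"
    return ET.tostring(root, encoding="unicode")

# Hypothetical feature snippet, for illustration only
sample = ('<gn:Feature xmlns:gn="http://www.geonames.org/ontology#" '
          'xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">'
          '<gn:inCountry rdf:resource="http://www.geonames.org/countries/#AR"/>'
          '</gn:Feature>')
print(rewrite_in_country(sample))
```

The code fragment extracts the 2-letter code from the fragment part of the countries-page URI, exactly as described above.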

Bernard


Re: Organization ontology

2010-06-01 Thread Bernard Vatant
Hi Dave

Great resource indeed. One remark, one suggestion, and one question :)

Remark : Just found out what seems to be a mistake in the N3 file.

org:role a owl:ObjectProperty, rdf:Property;
    rdfs:label "role"@en;
    rdfs:domain org:Membership;
    rdfs:range foaf:Agent;
...

I guess one should read: rdfs:range org:Role

Suggestion : I always feel uneasy with a class and a property distinguished
only by upper/lower case. I suggest changing the property to org:hasRole.

Question : Will an RDF-XML file be available at some point?

Keep the good work going

Best

Bernard



2010/6/1 Dave Reynolds dave.e.reyno...@googlemail.com

 We would like to announce the availability of an ontology for description
 of organizational structures including government organizations.

 This was motivated by the needs of the data.gov.uk project. After some
 checking we were unable to find an existing ontology that precisely met our
 needs and so developed this generic core, intended to be extensible to
 particular domains of use.

 The ontology is documented at [1] and some discussion on the requirements
 and design process are at [2].

 W3C have been kind enough to offer to host the ontology within the W3C
 namespace [3]. This does not imply that W3C endorses the ontology, nor that
 it is part of any standards process at this stage. They are simply providing
 a stable place for posterity.

 Any changes to the ontology involving removal of, or modification to,
 existing terms (but not necessarily addition of new terms) will be announced
 to these lists. We suggest that any discussion take place on the public-lod
 list to avoid further cross-posting.

 Dave, Jeni, John

 [1] http://www.epimorphics.com/public/vocabulary/org.html
 [2]
 http://www.epimorphics.com/web/category/category/developers/organization-ontology
 [3] http://www.w3.org/ns/org# (available in RDF/XML, N3, Turtle via conneg
 or append .rdf/.n3/.ttl)




-- 
Bernard Vatant
Senior Consultant
Vocabulary & Data Engineering
Tel:  +33 (0) 971 488 459
Mail: bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:  http://www.mondeca.com
Blog: http://mondeca.wordpress.com



Looking for use of skos mapping in the Linked Data Cloud

2010-06-09 Thread Bernard Vatant
Hello all

For a project in terminology alignment, we are looking for uses of various
flavours of skos:mappingRelation in vocabularies published in the LOD cloud,
and well, we've had a hard time finding published data sets using those
relations. What I know of so far is not really published following LOD good
practices ...
- Results of the OAEI 2009 Library Thesaurus Mapping Task, mapping LCSH,
RAMEAU and SWD.
http://www.few.vu.nl/~aisaac/oaei2009/results.html (needs registration)
- Mappings overview at http://linkedlifedata.com/sources, but it's unclear
if and where the mapping data are available (no link available).

Anything obvious I miss?

Bernard






Re: URI's for Geogrid like Resource

2010-06-11 Thread Bernard Vatant
Hi Peter

Did you consider using URIs in the new geo: URI scheme defined by RFC 5870
http://tools.ietf.org/html/rfc5870?
Although I do not see how such URIs fit in the Linked Data architecture,
since they are not HTTP URIs.


 This would need URI's that are of a grid equally sized polygons that are
 either equal in X, Y by either decimal degrees or meters.


RFC 5870 provides for uncertainty on positions; not sure this fits your needs.

Bernard





Lexvo.org - a semiotic approach to Re: Subjects as Literals

2010-07-01 Thread Bernard Vatant
Hi all

Re-naming the subject to try and get out of the general noise :)

I've been following this noisy thread with amazement. I've no clear position
on the issue; I just take the opportunity to draw the attention of the
community to the work of Gerard de Melo at Lexvo.org [1], which has been
updated lately with new resources. I've posted today [2] why I think this is
important and won't repeat it here in detail, but in a nutshell Lexvo.org
proposes a semiotic and pragmatic approach to this issue.
Lexvo.org considers a particular type of literal: terms in natural
language, say 'mean'@en. Since such a literal, in the current state of
affairs, can't be used as a subject, Lexvo.org provides a one-to-one
representation of such terms by URIs.

http://lexvo.org/id/term/eng/mean identifies the term 'mean'@en.
This URI, in subject position, can be used to describe the term, and in
object position, to assert that a concept uses it as a label. And so on for
translations in other languages.
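
As an illustration only, the URI pattern can be reproduced in a couple of lines. The function name is mine, and percent-encoding the term is my assumption for terms containing non-URI characters:

```python
from urllib.parse import quote

def lexvo_term_uri(term: str, lang: str = "eng") -> str:
    """URI for a natural-language term, following the pattern of the example above
    (lang is an ISO 639-3 code)."""
    return f"http://lexvo.org/id/term/{lang}/{quote(term)}"

print(lexvo_term_uri("mean"))  # → http://lexvo.org/id/term/eng/mean
```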

I won't elaborate, Gerard is likely to make a formal announcement in the
days to come, but I just wanted to point the resource as maybe relevant to
this debate.

Cheers

Bernard

[1] http://lexvo.org
[2] http://blog.hubjects.com/2010/07/what-mean-means.html




Re: Show me the money - (was Subjects as Literals)

2010-07-01 Thread Bernard Vatant
Hi Dan, Kingsley

Happy to see you expose clearly those things that have been also in the
corner of my mind since Kingsley started to hammer the EAV drum a while ago.

In trainings and introductions to RDF, I've also insisted on the fact that
RDF is somehow just an avatar of the old EAV paradigm, or however you name
it, and I think it's a good way to introduce it, keeping all the gory
aspects for later on, in particular the syntactic mess (or should I say,
joyful diversity).

But I follow Dan on the fact that the Linked Data cloud has flourished on
top of RDF-XML, at least as exchange and publication format. And I must say
that what I see daily with data providers and consumers around Mondeca
applications is data coming in and out in RDF-XML, for better and worse
indeed. And from what I see, it's easier to have data providers already
familiar with XML understand RDF through RDF-XML, by making XML-friendly RDF.
RDF-XML does not have to be ugly, unreadable and intractable, even if some
tools have never cared about that (no names).
And as the grease-monkey in charge of migrating miscellaneous data to feed
the semantic engine, I'm still quite happy with the current
CSV-to-plain-XML-to-RDF-XML (via XSLT, yes) route.

And I will give you the short feedback of our CTO at Mondeca after reading
the output of the RDFNext workshop: "Well, no canonical XML syntax?" Believe
me, all the rest he did not even care to mention. I don't want to add to the
"I wish I'd been there" chorus, but I would myself exchange every other
evolution and future work for a canonical RDF-XML syntax. I know, I know,
don't tell me.

Bernard



2010/7/1 Dan Brickley dan...@danbri.org

 (cc: list trimmed to LOD list.)

 On Thu, Jul 1, 2010 at 7:05 PM, Kingsley Idehen kide...@openlinksw.com
 wrote:

  Cut long story short.

 [-cut-]

  We have an EAV graph model, URIs, triples and a variety of data
  representation mechanisms. N3 is one of those, and its basically the
  foundation that bootstrapped the House of HTTP based Linked Data.

 I have trouble believing that last point, so hopefully I am
 misunderstanding your point.

 Linked data in the public Web was bootstrapped using standard RDF,
 serialized primarily in RDF/XML, and initially deployed mostly by
 virtue of people enthusiastically publishing 'FOAF files' in the
 (RDF)Web. These files, for better or worse, were overwhelmingly in
 RDF/XML.

 When TimBL wrote http://www.w3.org/DesignIssues/LinkedData.html in
 2006 he used what is retrospectively known as Notation 2, not its
 successor Notation 3.

 Notation2[*] was an unstriped XML syntax ( see original in

 http://web.archive.org/web/20061115043657/http://www.w3.org/DesignIssues/LinkedData.html
 ). That DesignIssues note was largely a response to the FOAF
 deployment.
 This linking system was very successful, forming a  growing social
 network, and dominating, in 2006, the linked data available on the
 web.

 The LinkedData design note argued that (post RDFCore cleanup and
 http-range discussions) we could now use URIs for non-Web things, and
 that this would be easier than dealing with bNode-heavy data. Much of
 the subsequent successes come from following that advice. Perhaps N3
 played an educational role in showing that RDF had other
 representations; but by then, SPARQL, NTriples etc were also around.
 As was RDFa, http://xtech06.usefulinc.com/schedule/paper/58  ...

 I have a hard time seeing N3 as the foundation that bootstrapped
 things. Most of the substantial linked RDF in Web by 2006 was written
 in RDF/XML, and by then the substantive issues around linking,
 reference, aggregation, identification and linking etc were pretty
 well understood. I don't dislike N3; it was a good technology testbed
 and gave us the foundation for SPARQL's syntax, and for the Turtle
 subset. But its role outside our immediate community has been pretty
 limited in my experience.

 cheers,

 Dan

 [*] http://www.w3.org/DesignIssues/Syntax.html







Re: Looking for use of skos mapping in the Linked Data Cloud

2010-07-05 Thread Bernard Vatant
Thanks Antoine for the update, and (belated) thanks also to Mike and Peter
for their pointers.

Bernard

2010/7/5 Antoine Isaac ais...@few.vu.nl

 Hi Bernard,

 Sorry for the late answer.

 As a matter of fact an updated version of the manually-built gold standard
 that we have used for [2] has been just now made available as linked data
 (skos:closeMatch statements), both at the prototype site for the French
 RAMEAU subject headings [3] and at the one for the German SWD headings [1].

 As an example,
 http://stitch.cs.vu.nl/vocabularies/rameau/ark:/12148/cb11932889r gives
 you a link to http://d-nb.info/gnd/4063673-2 and
 http://d-nb.info/gnd/4063673-2 gives you a link to
 http://stitch.cs.vu.nl/vocabularies/rameau/ark:/12148/cb11932889r .

 Note that both link to the Library of Congress'
 http://id.loc.gov/authorities/sh85014310#concept . This link is not
 reciprocal yet (ie, id.loc.gov does not publish it) for the German SWD,
 and reciprocity for French RAMEAU is partial. But hopefully that will change
 in the next weeks :-)

 For info all these links come from the MACS project [4].

 Cheers,

 Antoine

 [1] http://lists.w3.org/Archives/Public/public-lod/2010Apr/0321.html
 [2] http://www.few.vu.nl/~aisaac/oaei2009/results.html
 [3] http://www.cs.vu.nl/STITCH/rameau/
 [4] http://macs.cenl.org

  Hello all

 For a project in terminology alignment, we are looking for uses of
 various flavours of skos:mappingRelation in vocabularies published in
 the LOD cloud, and well, we've hard time finding out published data sets
 using those relations. What I know of so far is not really published
 following LOD good practices ...
 - Results of the OAEI 2009 Library Thesaurus Mapping Task mapping LSCH,
 RAMEAU ans SWD.
 http://www.few.vu.nl/~aisaac/oaei2009/results.html (needs registration)
 - Mappings overview at http://linkedlifedata.com/sources, but it's
 unclear if and where the mapping data are available (no link available).

 Anything obvious I miss?

 Bernard











Re: [ANN] Major update of Lexvo.org

2010-07-07 Thread Bernard Vatant
Hi all

Thanks to Gerard for officially announcing Lexvo.org here. Since he mentioned
the redirection of lingvoj.org URIs for languages, I think it worth providing
here a few details about the whys and hows of this process, as an example
of practice, hopefully good, possibly best :)

What should you do when you have published a data set and a couple of years
later discover that :
1. You don't have much bandwidth, or technical skills, or resources to
update and maintain it.
2. Meanwhile, some other data set has been published, with better quality
than your own.
3. You want to ensure backward compatibility, that is, not break the
applications consuming your URIs.

From a social viewpoint, the first step is to contact the administrator of
the other data set, and the following conversation takes place:

X: Hello Y, would you mind if I redirect my URIs at foo to your resources at
bar?
Y: Hello X, I've looked at your URIs and data set and think it makes sense.
What are your service load stats, so I can figure out whether my server can
stand them?
X: Fair enough, please find attached my server stats for last year. Can you
handle that?
Y: OK, no problem, I can handle the extra load.

Then the technical part, reconfiguring the content negotiation. Let's take
an example.

GET html on http://www.lingvoj.org/lang/zh has a 303 to
http://www.lingvoj.org/lingvo/zh.html which has itself a regular html
redirection to http://www.lexvo.org/page/iso639-3/zho which is the html
rendition of http://lexvo.org/id/iso639-3/zho

GET rdf on the same resource redirects to
http://www.lingvoj.org/lingvo/zh.rdf where the following can be found

<rdf:Description rdf:about="http://www.lingvoj.org/lingvo/zh.rdf">
  <dcterms:isReplacedBy rdf:resource="http://www.lexvo.org/data/iso639-3/zho"/>
</rdf:Description>

The former description living at this URI is superseded by the description
at http://www.lexvo.org/data/iso639-3/zho

<lingvoj:Lingvo rdf:about="http://www.lingvoj.org/lang/zh">
  <rdfs:isDefinedBy rdf:resource="http://www.lexvo.org/data/iso639-3/zho"/>
  <owl:sameAs rdf:resource="http://lexvo.org/id/iso639-3/zho"/>
</lingvoj:Lingvo>

The resource described is defined as the same as its lexvo.org equivalent,
and the definition resource changed accordingly.

So a simple follow-your-nose from the deprecated URIs and RDF files will
retrieve the current description.
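A consumer following its nose just has to look for the dcterms:isReplacedBy triple in the stale document. A minimal sketch with the standard library (the function name is mine), applied to the zh description quoted above:

```python
import xml.etree.ElementTree as ET
from typing import Optional

DCT = "{http://purl.org/dc/terms/}"
RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"

def replacement_target(rdf_xml: str) -> Optional[str]:
    """Return the dcterms:isReplacedBy target found in a stale RDF/XML description, if any."""
    root = ET.fromstring(rdf_xml)
    elem = root.find(f".//{DCT}isReplacedBy")
    return elem.get(f"{RDF}resource") if elem is not None else None

# The zh description above, wrapped in an rdf:RDF envelope
deprecated = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dcterms="http://purl.org/dc/terms/">
  <rdf:Description rdf:about="http://www.lingvoj.org/lingvo/zh.rdf">
    <dcterms:isReplacedBy rdf:resource="http://www.lexvo.org/data/iso639-3/zho"/>
  </rdf:Description>
</rdf:RDF>"""

print(replacement_target(deprecated))  # → http://www.lexvo.org/data/iso639-3/zho
```

A client finding a non-None target would simply fetch and use the replacement document instead.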

Basically that's it. If this practice seems good from a social and technical
viewpoint, it could be a good idea to document it in a more formal way and
put it somewhere on the wiki. A page was set up on the wiki a while ago about
this issue; sorry, I can't find the page address now, nor who set it up, and
I can't currently access http://community.linkeddata.org/MediaWiki/ for some
reason.

Looking forward to your feedback

Bernard



2010/7/5 Gerard de Melo gdem...@mpi-inf.mpg.de

 Hi everyone,

 We'd like to announce a major update of Lexvo.org [1], a site that brings
 information about languages, words, characters, and other human language-
 related entities to the LOD cloud. Lexvo.org adds a new perspective to the
 Web of Data by exposing how everything in our world is connected in terms
 of language, e.g. via words and names and their semantic relationships.

 Lexvo.org first went live in 2008 just in time for that year's ISWC.
 Recently, the site has undergone a major revamp, with plenty of help from
 Bernard Vatant, who has decided to redirect lingvoj.org's language URIs to
 the corresponding Lexvo.org ones.

 At this point, the site is no longer considered to be in beta testing,
 and we invite you to take a closer look. On the front page, you'll find
 links to examples that will allow you get a feel for the type of
 information being offered. We'd love to hear your comments.

 Best,
 Gerard

 [1] http://www.lexvo.org/

 --
 Gerard de Melo gdem...@mpi-inf.mpg.de
 Max Planck Institute for Informatics
 http://www.mpi-inf.mpg.de/~gdemelo/










Re: Best practice for permantently moved resources?

2010-08-12 Thread Bernard Vatant
Hi Kjetil

You might be interested in what has been done for lingvoj.org language URIs
(which you have used in a project, if I remember well), now redirected to
lexvo.org. See http://www.lingvoj.org/

There are not many explanations of the rationale and method, but your
message reminds me it's on my to-do list to document it further.

At http://www.lingvoj.org/lingvo/fr.rdf you get the following descriptions

<rdf:Description rdf:about="http://www.lingvoj.org/lingvo/fr.rdf">
  <dcterms:isReplacedBy rdf:resource="http://www.lexvo.org/data/iso639-3/fra"/>
</rdf:Description>

Provides the RDF document replacing the current one

<lingvoj:Lingvo rdf:about="http://www.lingvoj.org/lang/fr">
  <rdfs:isDefinedBy rdf:resource="http://www.lexvo.org/data/iso639-3/fra"/>
  <owl:sameAs rdf:resource="http://lexvo.org/id/iso639-3/fra"/>
</lingvoj:Lingvo>

Provides the new URI and the new document where it is defined.

Re. conneg, I've set up a simple redirect for the html pages.

I of course welcome any feedback about this method.

Best

Bernard



2010/8/12 Kjetil Kjernsmo kje...@kjernsmo.net

 Hi all!

 Cool URIs don't change, but cool content does, so the problem surfaces that
 I need to permanently redirect now and then. I discussed this problem at a
 meetup yesterday, and it turns out that people have found dbpedia
 problematic to use because it is too much of a moving target: when a URI
 changes because the underlying concepts change, there's a need for more
 301s.

 The problem is then that I need to record the relation between the old and
 the new URI somehow. As of now, it seems that the easiest way to do this
 would be to do something like:

 <http://example.org/old> ex:permanently_moved_to <http://example.org/new> .

 and if the former is dereferenced, the server will 301 redirect to the
 latter. Has anyone done something like that, or do you have other useful
 experiences relevant to this problem?

 Cheers,

 Kjetil







Re: Best Way to Extend the Geo Vocabulary to include an error or extent radius in meters

2010-10-07 Thread Bernard Vatant
Hi Peter

Something like the example below, but I suspect that this might not make it
 a real geo:Point?


Barely. The old maths teacher in me frowns at points having a radius :)


   <geo:Point>
     <geo:lat>55.701</geo:lat>
     <geo:long>12.552</geo:long>
     <dwc:radius>10</dwc:radius>
   </geo:Point>


What about something like the following, since the radius is not really a
property of the point ...

<geo:Area>
  <geo:center>
    <geo:Point>
      <geo:lat>55.701</geo:lat>
      <geo:long>12.552</geo:long>
    </geo:Point>
  </geo:center>
  <dwc:radius>10</dwc:radius>
</geo:Area>

namespaces ad libitum of course
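Assuming dwc:radius is expressed in metres (as the subject line suggests), a consumer could test whether some position falls inside such an area with a plain haversine computation. This is only a sketch of how the data might be consumed, not part of any vocabulary; function names are mine:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) pairs given in degrees."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def in_area(center_lat, center_lon, radius_m, lat, lon):
    """True if (lat, lon) lies within radius_m metres of the area's centre point."""
    return haversine_m(center_lat, center_lon, lat, lon) <= radius_m

# The example area: centre (55.701, 12.552), radius 10 m
print(in_area(55.701, 12.552, 10, 55.701, 12.552))  # centre itself
print(in_area(55.701, 12.552, 10, 55.702, 12.552))  # ~111 m north of centre
```

The second test point is about 111 metres away (0.001 degree of latitude), so it falls outside a 10-metre area.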

Cheers

Bernard





Re: survey: who uses the triple foaf:name rdfs:subPropertyOf rdfs:label?

2010-11-12 Thread Bernard Vatant
Hi Dan

For the record what happened to geonames ontology re. this issue

Answering the first publication of the geonames ontology in October 2006, Tim
Berners-Lee himself asked for the geonames:name attribute to be declared
as a subproperty of rdfs:label, to make Tabulator able to use it. And in
order to make DL tools also happy, the trick was to have a Full ontology
declaring the subproperties of rdfs:label and importing a Lite ontology.
I'm afraid I can now find neither the list on which this conversation took
place, nor who suggested the trick.

It was done so until version 2.0, see
http://www.geonames.org/ontology/ontology_v2.0_Full.rdf

I changed this in version 2.1, declaring the various geonames naming
properties as subproperties of either skos:prefLabel or skos:altLabel,
kicking the issue out towards the SKOS outfield and getting rid of the
cumbersome splitting of the ontology into Full and Lite parts.
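In Turtle, the v2.1 arrangement amounts to declarations of the following shape. Only gn:name and gn:alternateName are shown, as examples; check the ontology itself for the full list of naming properties:

```turtle
@prefix gn:   <http://www.geonames.org/ontology#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# The main toponym is the preferred label; variants are alternative labels
gn:name          rdfs:subPropertyOf skos:prefLabel .
gn:alternateName rdfs:subPropertyOf skos:altLabel .
```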

That can't be done for foaf:name, I'm afraid, but it would be interesting to
know whether Tabulator uses subproperty declarations in the case of foaf:name.

Best

Bernard


2010/11/12 Dan Brickley dan...@danbri.org

 Dear all,

 The FOAF RDFS/OWL document currently includes the triple

  foaf:name rdfs:subPropertyOf rdfs:label .

 This is one of several things that OWL DL oriented tools (eg.
 http://www.mygrid.org.uk/OWL/Validator) don't seem to like, since it
 mixes application schemas with the W3C builtins.

 So for now, pure fact-finding. I would like to know if anyone is
 actively using this triple, eg. for Linked Data browsers. If we can
 avoid this degenerating into a thread about the merits or otherwise of
 description logic, I would be hugely grateful.

 So -

 1. do you have code / applications that checks to see if a property is
 rdfs:subPropertyOf rdfs:label ?
 2. do you have any scope to change this behaviour (eg. it's a web
 service under your control, rather than shipping desktop software )
 3. would you consider checking for ?x rdf:type foaf:LabelProperty or
 other idioms instead (or rather, as well).
 4. would you object if the triple foaf:name rdfs:subPropertyOf
 rdfs:label  is removed from future version of the main FOAF RDFS/OWL
 schema? (it could be linked elsewhere, mind)

 Thanks in advance,

 Dan







Looking for metalex ontology

2010-11-29 Thread Bernard Vatant
Hi all

According to http://www.ckan.net/tag/format-metalex there are two datasets
in the LOD cloud relying on metalex ontology.

But they provide different URIs for this ontology ...

http://www.best-project.nl/rechtspraak.ttl says:
void:vocabulary <http://www.metalex.eu/schema>

http://www.legislation.gov.uk/ukpga/1985/67/section/6/data.rdf says:
xmlns:metalex="http://www.metalex.eu/metalex/2008-05-02#"

... and both are dead links ...

OTOH http://www.metalex.eu/documentation/ says:

"The namespace of the CEN MetaLex XML Schema and OWL specification is
http://www.metalex.eu/metalex/1.0"

... which redirects to ... well ...
Too bad because this metalex ontology looks really interesting :)

Pointer, someone?

Bernard




Re: Looking for metalex ontology

2010-11-30 Thread Bernard Vatant
Tim

Thanks for taking the time to drill down into those gory details. To follow
up with Rinke ... ouch indeed :)

BTW it seems to me (now that you eventually led me to the OWL file) there is
another way this ontology does not follow Linked Data best practices: it
does not rely on any other vocabulary, although many classes and properties
it defines could easily be found in what I call the Linked Open Vocabularies
(FOAF, Dublin Core, BIBO, FRBR etc.).
Regarding this last point, I'm in the process of gathering in a single
dataset how the vocabularies used in the LOD cloud rely on each other. Stay
tuned, publication in the days to come, bandwidth permitting.

Best

Bernard

2010/11/30 Tim Berners-Lee ti...@w3.org

 Bernard,

 You have been tripped up by abuse of content negotiation.

 Their document says they do conneg.

 cwm http://www.metalex.eu/metalex/1.0
 gives you data, as cwm only asks for RDF.

 Following it by hand:
 $ curl -H "Accept: application/rdf+xml" http://www.metalex.eu/metalex/1.0
 <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
 <html><head>
 <title>303 See Other</title>
 </head><body>
 <h1>See Other</h1>
 <p>The answer to your request is located <a href="http://svn.metalex.eu/svn/MetaLexWS/branches/latest/metalex-cen.owl">here</a>.</p>
 <hr>
 <address>Apache/2.2.11 (Ubuntu) DAV/2 SVN/1.5.4 mod_jk/1.2.26
 PHP/5.2.6-3ubuntu4.6 with Suhosin-Patch mod_python/3.3.1 Python/2.6.2
 mod_ruby/1.2.6 Ruby/1.8.7(2008-08-11) mod_ssl/2.2.11 OpenSSL/0.9.8g
 mod_perl/2.0.4 Perl/v5.10.0 Server at www.metalex.eu Port 80</address>
 </body></html>
 $

 *** Note that here we get a 303, which means that what we are being
 redirected to is NOT the ontology, but may be relevant. The chain of custody
 is broken; the site does not assert that what follows is the ontology. But
 let us follow it anyway:

 Following the 303

 $ curl -H "Accept: application/rdf+xml"
 http://svn.metalex.eu/svn/MetaLexWS/branches/latest/metalex-cen.owl
 <?xml version="1.0"?>


 <!DOCTYPE rdf:RDF [
 <!ENTITY owl "http://www.w3.org/2002/07/owl#" >
 <!ENTITY owl11 "http://www.w3.org/2006/12/owl11#" >
 [...]
  xmlns:metalex="http://www.metalex.eu/metalex/2008-05-02#"
  xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
  xmlns:owl="http://www.w3.org/2002/07/owl#"
 <owl:Ontology rdf:about=
 [...]

 *** Note that here you do get some RDF.

 Tabulator can read that.  Each term, as you explore, is marked by a red dot
 to indicate that it could not be looked up on the web.  Because:
 ** the RDF declares the namespace
   xmlns:metalex="http://www.metalex.eu/metalex/2008-05-02#"
 instead of the one you originally asked about!

 So after all that you can see what they are getting at and how they are
 thinking.
 But their linked data is seriously and needlessly broken.

 To fix it, they should just serve the ontology with 200 from
 http://www.metalex.eu/metalex/1.0
 and fix the namespace in it to be that. No conneg.
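
 Tim's check can be sketched mechanically. Below is a minimal illustration
 (the function name and the classification strings are my own, not part of
 any tool mentioned in this thread): given the status code and Content-Type
 obtained when dereferencing a vocabulary URI with Accept: application/rdf+xml,
 it applies the rule above — a healthy namespace answers 200 with an RDF
 media type, while a 303 breaks the chain of custody.

 ```python
 def conneg_verdict(status: int, content_type: str) -> str:
     """Classify a dereference of a vocabulary URI per the advice above:
     it should answer 200 with an RDF media type when RDF is requested."""
     if status == 200 and "application/rdf+xml" in content_type:
         return "ok"
     if status == 303:
         return "broken: 303 redirect, chain of custody lost"
     return "broken: not served as RDF"

 # The metalex case seen above: a 303 pointing at an HTML "See Other" page.
 print(conneg_verdict(303, "text/html"))  # → broken: 303 redirect, chain of custody lost
 ```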


 *** With Firefox, however, even with Tabulator (so accepting RDF or
 HTML), one is redirected to:

 http://www.cen.eu/cen/Sectors/Sectors/ISSS/CEN%20Workshop%20Agreements/Pages/MLX%20CWAs.aspx
 which is a sort of a home page about the document, with copyright stuff,
 but it is not the ontology.

 So they are using the same URI for two documents with very different
 information, which is architecturally bad and practically messes you up.

 Note that John Sheridan (cc'd) and colleagues have put the UK laws online
 with lots of RDF -- you could sync up with them if you haven't.

 Moral: point tabulator at it and if it doesn't work, fix it.
 Tim

 PS: Their copyright

 "CWAs are CEN copyright. Those made available for downloading are provided
 on the condition that they may not be modified, re-distributed, sold or
 repackaged in any form without the prior consent of CEN, and are only for
 the use of the person downloading them.  For additional copyright
 information, please refer to the statements on the cover pages of the CWAs
 concerned."  It sounds as though this applies to the ontology, which isn't
 obvious.

 On 2010-11 -29, at 14:41, Bernard Vatant wrote:

 Hi all

 According to http://www.ckan.net/tag/format-metalex there are two datasets
 in the LOD cloud relying on metalex ontology.

 But they provide different URIs for this ontology ...

 http://www.best-project.nl/rechtspraak.ttl says:
 void:vocabulary <http://www.metalex.eu/schema>

 http://www.legislation.gov.uk/ukpga/1985/67/section/6/data.rdf says:
 xmlns:metalex="http://www.metalex.eu/metalex/2008-05-02#"

 ... and both are dead links ...

 OTOH http://www.metalex.eu/documentation/ says:

 "The namespace of the CEN MetaLex XML Schema and OWL specification is
 http://www.metalex.eu/metalex/1.0"

 ... which redirects to ... well ...

 Too bad because this metalex ontology looks really interesting :)

 Pointer, someone?

 Bernard

 --
 Bernard Vatant
 Senior Consultant
 Vocabulary & Data Engineering
 Tel:   +33 (0) 971 488 459
 Mail: bernard.vat...@mondeca.com

Concurrent namespaces for Creative Commons ontology

2010-12-21 Thread Bernard Vatant
Hi folks

It seems that there are two concurrent publications and namespaces for the
Creative Commons Rights Expression Language.

http://creativecommons.org/schema.rdf uses http://creativecommons.org/ns#
http://web.resource.org/cc/schema.rdf uses http://web.resource.org/cc/

The first one looks at first sight more reliable, since it is maintained by
the Creative Commons folks themselves.
It is apparently the namespace underlying the CC tag on CKAN packages
http://ckan.net/tag/format-cc
Datasets under this tag indeed use it, such as Eurostat or Geospecies
(and BTW it would be great if CKAN tag pages could make explicit the vocabulary
namespace underlying the tag, if any).

OTOH I found the second one to be used by a bunch of more or less famous
ontologies such as:

BIO http://purl.org/vocab/bio/0.1
Music Ontology http://purl.org/ontology/mo/
FRBR http://purl.org/vocab/frbr/core
Review Ontology http://purl.org/stuff/rev
Talis Address Schema http://schemas.talis.com/2005/address/schema
VANN http://purl.org/vocab/vann

So I wonder ... I'm cc'ing the people behind both vocabularies, maybe they can
do something about it?

Best

Bernard




Introducing Vocabularies of a Friend (VOAF)

2011-01-14 Thread Bernard Vatant
Hello all

I'm pleased to announce the first publication of
Vocabularies of a Friend (VOAF) - Friendly vocabularies for the linked data
Web

Data sets published in the framework of the Linked Open Data Cloud rely
on a variety of RDFS vocabularies or OWL ontologies.
The aim of VOAF is to provide information on such vocabularies, and in
particular on how they rely on each other.
VOAF defines a network of vocabularies the same way FOAF is used to define
networks of people.

VOAF is of course a clear homage to FOAF, which is the hub of the network:
more than half of the listed vocabularies rely on it one way or another.
I asked Dan Brickley a couple of days ago if he minded this
friendly hack. Without an answer from him, I just went ahead, following the
adage "Qui ne dit mot consent" (silence gives consent).

More at http://www.mondeca.com/foaf/voaf-doc.html

The VOAF dataset is available as linked data at
http://www.mondeca.com/foaf/voaf-vocabs.rdf

This is a work in progress, still a bit sketchy, which hopefully will
benefit from the community feedback.

In particular I've tried to link the vocabularies to the CKAN datasets using
the Tag ontology, for example making explicit the link from
http://xmlns.com/foaf/0.1 to http://ckan.net/tag/format-foaf
Instead of re-inventing a specific attribute, I've reused the Lexvo, MOAT and
Tag ontologies, as in the following, which might be a bit convoluted for the
purpose at hand.

<voaf:Vocabulary rdf:about="http://xmlns.com/foaf/0.1">
...
   <lvont:representedBy>
      <moat:Tag rdf:about="http://ckan.net/tag/format-foaf">
        <tag:name>format-foaf</tag:name>
      </moat:Tag>
   </lvont:representedBy>
...
</voaf:Vocabulary>

I've made sense of quite a bunch of CKAN tags in terms of the corresponding
vocabulary used, but there are still quite a few vocabularies w/o
corresponding CKAN tags. If people in charge of CKAN tags see anything I've
missed, please feel free to push it to me.

And of course any feedback on whatever you would like to see
added/modified/deleted is welcome.

Thanks for your attention.

Bernard






Re: Introducing Vocabularies of a Friend (VOAF)

2011-01-15 Thread Bernard Vatant
Hi Egon

2011/1/15 Egon Willighagen egon.willigha...@gmail.com

 Hi Bernard,

 Maybe it's just Saturday morning, but what exactly is the goal of your
 VOAF effort? What problems with existing ontologies does it address?
 Just curious, as it sounds interesting...


There are no problems with existing ontologies. VOAF just aims at easing
their discovery. I think Kingsley has shown through a few applications the
potential of such an interlinking. I'm very often asked the question: what
vocabularies can I use for my data, and how can I discover them? VOAF is a
tool addressing this question.

It can help to highlight which vocabularies have a good practice of
reusing and relying on existing ones, and which reinvent everything in their
own namespace. Having quick access to creators and publishers (those data are
still largely incomplete in the current dataset) also links the
vocabularies to the communities creating them, etc.

Next step for example I would like to add terminological links between
vocabularies using the same term, using lexvo.org ontology, such as to add

<http://lexvo.org/id/term/eng/event> lvont:means <http://purl.org/vocab/bio/0.1/Event>

<http://lexvo.org/id/term/eng/event> lvont:means <http://purl.org/dc/dcmitype/Event>

<http://lexvo.org/id/term/eng/event> lvont:means <http://www.aktors.org/ontology/portal#Event>

This might help curators of those various ontologies wonder whether they
needed to duplicate the class in their own vocabulary, and the data curator
wanting to publish data about events to look up those various flavours of
Event before making her choice ...

And as Kingsley says this is just the beginning of what you can imagine ...

Bernard





Re: Introducing Vocabularies of a Friend (VOAF)

2011-01-15 Thread Bernard Vatant
Hi Stephane

2011/1/15 Stephane Fellah fella...@gmail.com

 Sounds very interesting initiative. Based on my understanding, I think it
 should be possible to write a tool that read any OWL document and generate a
 VOAF document.


Indeed I've been thinking along those lines. The current dataset is
handcrafted, as a prototype should be, but I'm indeed now thinking about ways
to generate the VOAF description automagically from the OWL or RDFS files.
The devil is in the details, though. Some information you can't really get by
conventional parsing of the graph, such as which namespaces are used, to
populate the voaf:reliesOn property. Those you can get with ad hoc syntactic
scripts, but vocabularies are published using a variety of syntaxes.
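
Such an ad hoc syntactic script can be quite small. Here is a minimal sketch
(the function name and its use for voaf:reliesOn are my own illustration); it
handles only RDF/XML, which is exactly the syntax-dependence caveat above:

```python
import re

def declared_namespaces(rdfxml: str) -> set[str]:
    """Collect the namespace URIs declared via xmlns attributes in an
    RDF/XML document - a crude proxy for what a vocabulary relies on."""
    return set(re.findall(r'xmlns(?::\w+)?\s*=\s*"([^"]+)"', rdfxml))

doc = ('<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" '
       'xmlns:foaf="http://xmlns.com/foaf/0.1/">')
print(declared_namespaces(doc) == {
    "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "http://xmlns.com/foaf/0.1/",
})  # → True
```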


 May be Swoogle could be a good starting point, but not sure how the API can
 provide the list of ontology namespaces through the REST API.


I don't know either, but I'm sure someone will find a way :)


 The imports section would correspond to the imports statement. The tools
 would count the number of classes and properties in the ontology namespace.
 It would be interesting to aggregate all this information and see which
 vocabularies have the most influence using SNA algorithms.


You are welcome to play along those lines. I think there are a lot of
opportunities and things to discover. This is just the beginning of the
story.

Best

Bernard






Re: Introducing Vocabularies of a Friend (VOAF)

2011-01-19 Thread Bernard Vatant
Hi Christopher

 I can't help but feel that calling it VOAF is just going to muddy the
 waters. Friendly vocabularies for the linked data Web
 doesn't help clarify either. It's cute, but I strongly suggest you at the
 very least make this 'tag line' far more clear.


I agree the current documentation is too sketchy and potentially misleading
as is. I have put my efforts mainly into the dataset itself so far, but you're
right, it has to be better documented.

Regarding the name, well, the pun is here to stay I'm afraid. I've had
positive feedback from Dan Brickley about it, so I already feel it's too
late to change now.


 Frankly calling something 'voaf' when people will hear it mixed in with
 'foaf' is just making the world more confusing.


Actually I've not thought much (not at all) about how people would pronounce
or hear it. I mostly communicate about vocabularies (and with people using
them) in writing, and very rarely speak about them. I barely know
how to pronounce OWL, always feel like a fool when I have to, and will
eventually spell it O.W.L. - as every other French native would do. If I had
to speak about VOAF, I think I would also spell it V.O.A.F.


 I had a lot of confusion until I found out the SHOCK vocab people were
 talking about was spelled SIOC.


Interesting, I was confused exactly the other way round. I've read a lot
(and written a bit) about SIOC since it's been around, but realized only two
days ago how it was pronounced, when I actually heard someone speaking
about it the right way ... and thought at first it was something
else.


 One other minor suggestion:

 voaf:Vocabulary → rdfs:subClassOf → void:Dataset

 might be a mistake because void:Dataset is defined as "A set of RDF
 triples that are published, maintained or aggregated by a single provider."


Not a bug, but a feature. It's exactly what a voaf:Vocabulary is.

and it may be that you would want to define non RDF vocabs using this.


You might want to do that, but I don't, and I'm the vocabulary creator
(right?), so I can insist on the fact that this is really meant to describe
*RDF* vocabularies, and cast this intention in the stone of formal
semantics.
If you want to describe other kinds of vocabularies the same way, feel free
to use or create something else. Or extend voaf:Vocabulary to a more generic
class. It's an open world, let a thousand flowers blossom :)


 I see no value in making this restriction.


The value I see is keeping this vocabulary's use focused on what it was meant
for.

Best

Bernard




Fwd: Vocabulary of a Friend (VOAF)

2011-01-19 Thread Bernard Vatant
 on the Dublin Core
 community's vocabulary to provide detailed descriptions of documents
 and bibliographic content, or that it relies on SIOC when there's a
 need to describe eg. forums or bulletin boards.


Hmm. Interesting. Introducing facets or contexts in which "relies on"
applies. I have to mull this over.


 Expressing 'relies'
 links between terms is harder. I like to add mappings; but I don't
 like to add dependencies. So my guess is VOAF will be easier to use as
 a kind of 'vocabulary buddylist' than at the term level, and for
 terms, we might turn directly to things like subClassOf /
 subPropertyOf...


Indeed. But the terminological glue is something to think about.


 ps. feel  free to migrate this to public-lod


Done :)

Bernard





SWEET (but not friendly) ontologies

2011-01-21 Thread Bernard Vatant
Hello all

Gathering vocabularies for the growing VOAF dataset [1] leads to the
discovery of a bunch of good linking and reusing practices (good news), but
also makes obvious in comparison some data islands, apparently isolated from
everything else in the Cloud.

The SWEET ontologies developed by NASA [2] [3] seem to be such a case. We
have there a set of about 200 interlinked ontologies for Earth and
Environment sciences, but neither relying on any external namespace, nor
bearing any of the metadata (creator, date, publisher, rights ...) we are
used to in friendly vocabularies. SWEET ontologies don't seem to be used in
any VOAF vocabulary or CKAN package I've met so far. And the homepage doesn't
even have a contact email to cc this message :(

I've heard that NASA uses those ontologies internally, but could not find
any pointer to that kind of use.

This is really a sad observation given the size of the work and the reliable
organization backing this effort; those ontologies should be linked to
and from many other vocabularies!

So, if anyone has used one of the SWEET vocabularies in a dataset or extended
one in some vocabulary, please send pointers!
And if someone behind SWEET ontologies is lurking on this list, I would be
happy to make contact :)

Bernard

[1] http://www.mondeca.com/foaf/voaf-vocabs.rdf
[2] http://sweet.jpl.nasa.gov/
[3] http://sweet.jpl.nasa.gov/2.1/




Re: Introducing Vocabularies of a Friend (VOAF)

2011-01-25 Thread Bernard Vatant
Hello all

Points taken. I've somewhat changed the headings and introduction at
http://www.mondeca.com/foaf/voaf-doc.html
to make more explicit what it is about (hopefully).

I did not change (yet) either VOAF acronym or namespace. To tell the truth,
my first idea was LOV for Linked Open Vocabularies, but I guess some would
have found that pun confusing too.
Sorry to keep on pushing puns and portmanteau(s?), from the Semantopic Map
(back in 2001, maybe some folks here remember it, it's offline now) to
hubjects ... Maybe it's not a good idea after all.

So if I sum up the feedback so far:
- there is no question the dataset is worth it
- the introduction is a bit confusing (I changed a couple of things, let's see
if it's better or worse)
- the name is totally confusing for some not-so-dumb people, so go figure
what happens to not-so-smart ones :)

I'm open to all suggestions to change to something better. Is LOV a good
idea?
Other proposals :

LV or LVoc : Linked Vocabularies
WOV : Web of Vocabularies
...

Bernard



2011/1/25 Kingsley Idehen kide...@openlinksw.com

  On 1/25/11 11:59 AM, William Waites wrote:

 * [2011-01-25 11:21:45 -0500] Kingsley Idehen kide...@openlinksw.com writes:

 ] Hmm. Is it the Name or Description that's important?
 ]
 ] But what about discerning meaning from the VOAF graph?

 Humans looking at documents and trying to understand a system
 do so in a very different way from machines. While what you
 suggest might be strictly true according to the way RDF and
 formal logic work, it isn't the way humans work (otherwise
 the strong AI project of the past half-century might have
 succeeded by now). So we should try arrange things in a way
 that is both consistent with what the machines want and as
 easy as possible for humans to understand. That Hugh, an
 expert in the domain, had trouble figuring it out due to
 poetic references to well known concepts suggests that there
 is some room for improvement.

 Cheers,
 -w


 Yes, but does a human say: you lost me at VOAF due to FOAF? I think they do
 read the docs, at least the opening paragraph :-)

 --

 Regards,

 Kingsley Idehen
 President & CEO
 OpenLink Software
 Web: http://www.openlinksw.com
 Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca: kidehen





Re: Proposal to assess the quality of Linked Data sources

2011-02-25 Thread Bernard Vatant
Hi Annika

- A vocabulary is said to be established if it is one of the 100 most
 popular vocabularies stated on prefix.cc - uhm, as the results from
 Richard's evaluation have shown, this is quite arguable

 It's a practical way to determine it (which I can use for the
 implementation of the formalism). Another way would be to compare many
 documents from many data sources and to find out, which vocabularies are
 most popular.


I'm particularly interested in this aspect of vocabulary selection.
Regarding popularity, I fully go along with Bob regarding prefix.cc in which
all sorts of biases can be introduced. I think the popularity is better
measured by the use of vocabularies in CKAN datasets, as indicated by
format-* tags. See http://ckan.net/tag/?page=F and for example
http://ckan.net/tag/format-bibo or http://ckan.net/tag/format-foaf.

Another approach I'm currently working on is the one you can find at
http://labs.mondeca.com/dataset/lov. The description of interlinked
vocabularies (using the VOAF vocabulary) provides an indication of popularity
at the vocabulary level itself. From this dataset (still far from exhaustive,
of course) you can see which vocabularies are reused, extended, or used for
annotation by other ones. I think the density of links to and from a
vocabulary gives a good indicator of its establishment, in
combination with the number of datasets actually using it.
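
As a toy sketch of such a density indicator (the edge list and the symmetric
scoring are illustrative assumptions of mine, not the actual LOV data or
metric):

```python
from collections import Counter

def link_density(edges):
    """Score each vocabulary by its degree in a VOAF-style graph:
    each reliesOn/extends edge counts for both endpoints."""
    score = Counter()
    for src, _relation, dst in edges:
        score[src] += 1  # src links out to dst
        score[dst] += 1  # dst is linked to, i.e. reused
    return score

# Toy edges, not the real dataset: FOAF ends up the most linked.
edges = [
    ("mo", "reliesOn", "foaf"),
    ("bio", "reliesOn", "foaf"),
    ("frbr", "reliesOn", "dcterms"),
]
print(link_density(edges).most_common(1))  # → [('foaf', 2)]
```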

Best

Bernard





SWEET Ontologies

2011-03-09 Thread Bernard Vatant
Hello all

I am wondering about the use, reuse, reusability of the SWEET ontologies in
the LOD Cloud
http://sweet.jpl.nasa.gov/

Any dataset using one of them?
Any vocabulary relying on or extending one of them?

Pointers welcome

Bernard




Re: [ANN] Linked Open Colors

2011-04-01 Thread Bernard Vatant
Ola Sergio

Very cool ... and it could actually be useful, so maybe less of a joke than it
seems.

Bernard


2011/4/1 Sergio Fernández sergio.fernan...@fundacionctic.org

 Hi,

 for giving some color to the semantic web folks, we are happy to
 announce the release of the Linked Open Colors dataset [1]. The Linked
 Open Colors project offers tons of facts about colors, all readily
 available as Linked Open Data, linking with other relevant datasets
 such as dbpedia.

 The dataset and its publication mechanisms have been pedantically
 checked, and we expect no errors in the triples; if you do find some,
 please let us know.

 This project is highly inspired by Linked Open Numbers project [2].
 Happy April Fools' Day!

 Cheers,

 [1] http://purl.org/colors
 [2] http://km.aifb.kit.edu/projects/numbers/

 --
 Carlos Tejo, Iván Mínguez and Sergio Fernández







Re: LOV - Linked Open Vocabularies

2011-04-01 Thread Bernard Vatant
Maybe I missed something, but can someone tell me what the URI of the
ontology of dbpedia is?

Bernard

2011/3/30 Benedikt Kaempgen benedikt.kaemp...@kit.edu

 Hello,

 Maybe I missed something, but can someone tell me why the ontology of
 dbpedia is not listed on LOV [1]?

 [1] http://labs.mondeca.com/dataset/lov/index.html

 Regards,

 Benedikt

 --
 AIFB, Karlsruhe Institute of Technology (KIT)
 Phone: +49 721 608-47946
 Email: benedikt.kaemp...@kit.edu
 Web: http://www.aifb.kit.edu/web/Hauptseite/en



 -Original Message-
 From: semantic-web-requ...@w3.org [mailto:semantic-web-requ...@w3.org] On
 Behalf Of Kingsley Idehen
 Sent: Tuesday, March 29, 2011 1:09 AM
 To: Pierre-Yves Vandenbussche
 Cc: public-lod@w3.org; SW-forum; semantic...@yahoogroups.com;
 info...@listes.irisa.fr
 Subject: Re: LOV - Linked Open Vocabularies

 On 3/28/11 5:37 PM, Pierre-Yves Vandenbussche wrote:

Kingsley,

I've just added rdfs:isDefinedBy property to vocabularies which
 accept content negotiation.
Example here:
 http://labs.mondeca.com/dataset/lov/details/vocabulary_voaf.html


 Okay, a few more things though. For instance, what type of entity is
 identified by this URI: http://labs.mondeca.com/vocab/voaf#VocabularySpace?


 Effect of entity ambiguity shows here:

 http://uriburner.com/describe/?url=http%3A%2F%2Flabs.mondeca.com%2Fvocab%2Fvoaf%23VocabularySpace

 Also, we have an Entity ID (URI based Named Ref):
 http://labs.mondeca.com/dataset/lov/lov#CITY, and we (hopefully most Linked
 Data folk) kinda know said Entity's representation (in the form of a linked
 data graph pictorial) is accessible from the Address (URL):
 http://labs.mondeca.com/dataset/lov/lov, but for absolute clarity (human and
 machines) you should add a wdrs:describedby relation of the form:

 <http://labs.mondeca.com/dataset/lov/lov#CITY> wdrs:describedby
 <http://labs.mondeca.com/dataset/lov/lov> .

 Good job!

 Kingsley




regards,



Pierre-Yves Vandenbussche
Research  Development
Mondeca
3, cité Nollez 75018 Paris France
Tel. +33 (0)1 44 92 35 07 - fax +33 (0)1 44 92 02 59
Mail: pierre-yves.vandenbuss...@mondeca.com
 Website: www.mondeca.com
Blog: Leçons de choses http://mondeca.wordpress.com/



On Mon, Mar 28, 2011 at 4:57 PM, Kingsley Idehen
 kide...@openlinksw.com wrote:


On 3/28/11 10:39 AM, Pierre-Yves Vandenbussche wrote:

Hello all,

We are pleased to announce the Linked Open
 Vocabularies initiative [1].

The web of data is based on dataset publication.
 When building a dataset some questions arise: which existing vocabularies
 will be best suited for my needs? To facilitate this task we propose the
 Linked Open Vocabularies (LOV) dataset [1]. It identifies the defined
 vocabularies for data description but also the relationships between these
 vocabularies.
The work within LOV is not exhaustive but, by
 suggesting vocabulary modifications and/or creations to us, you can help us
 improve this dataset.
You could access this dataset via an RDF/XML file
 [2] and via a SPARQL Endpoint [3].



[1] http://labs.mondeca.com/dataset/lov/index.html
[2] http://labs.mondeca.com/dataset/lov/lov.rdf
[3] http://labs.mondeca.com/endpoint/lov



Pierre-Yves Vandenbussche, Bernard Vatant, Lise
 Rozat
Research  Development
Mondeca
3, cité Nollez 75018 Paris France
 Website: www.mondeca.com http://www.mondeca.com/
Lab: Mondeca Labs http://labs.mondeca.com/




Nice!

See:

 http://linkeddata.uriburner.com/describe/?url=http%3A%2F%2Flabs.mondeca.com%2Fdataset%2Flov%2Flov%23LOV

Would be nice if you also added isDefinedBy relations so
 that one can FYN between TBox and ABox with ease :-)



















How many instances of foaf:Person are there in the LOD Cloud?

2011-04-13 Thread Bernard Vatant
Hello all

Just trying to figure out the size of personal information available as
LOD vs the billions of person profiles stored by Google, Amazon, Facebook,
LinkedIn, you-name-it ... in proprietary formats.

Any hint of the proportion of living people vs historical characters is
also welcome.

Any idea?

Bernard





Re: Schema.org in RDF ...

2011-06-07 Thread Bernard Vatant
Hi all

Something I don't understand. If I read all the savvy discussions so far
correctly, the publishers behind http://schema.org URIs are unlikely to ever
provide any RDF description, so why are those URIs declared as identifiers of
RDFS classes in http://schema.rdfs.org/all.rdf? For all I can see,
http://schema.org/Person is the URI of an information resource, not of a
class.
So I would rather have expected mirroring of the schema.org URIs by
schema.rdfs.org URIs, the latter being fully dereferenceable proper RDFS
classes making explicit the semantics of the former, while keeping the
reference to the source in some dcterms:source element.

Example, instead of ...

<rdf:Description rdf:about="http://schema.org/Person">
  <rdf:type rdf:resource="http://www.w3.org/2000/01/rdf-schema#Class"/>
  <rdfs:label xml:lang="en">Person</rdfs:label>
  <rdfs:comment xml:lang="en">A person (alive, dead, undead, or fictional).</rdfs:comment>
  <rdfs:subClassOf rdf:resource="http://schema.org/Thing"/>
  <rdfs:isDefinedBy rdf:resource="http://schema.org/Person"/>
</rdf:Description>

where I see a clear abuse of rdfs:isDefinedBy, since if you dereference the
said URI, you don't find any explicit RDF definition ...

I would rather have the following

<rdf:Description rdf:about="http://schema.rdfs.org/Person">
  <rdf:type rdf:resource="http://www.w3.org/2000/01/rdf-schema#Class"/>
  <rdfs:label xml:lang="en">Person</rdfs:label>
  <rdfs:comment xml:lang="en">A person (alive, dead, undead, or fictional).</rdfs:comment>
  <rdfs:subClassOf rdf:resource="http://schema.rdfs.org/Thing"/>
  <dcterms:source rdf:resource="http://schema.org/Person"/>
</rdf:Description>

To the latter declaration, one could safely add statements like

schema.rdfs:Person rdfs:subClassOf  foaf:Person

etc

Or do I miss the point?

Bernard

2011/6/3 Michael Hausenblas michael.hausenb...@deri.org


 http://schema.rdfs.org

 ... is now available - we're sorry for the delay ;)

 Cheers,
Michael
 --
 Dr. Michael Hausenblas, Research Fellow
 LiDRC - Linked Data Research Centre
 DERI - Digital Enterprise Research Institute
 NUIG - National University of Ireland, Galway
 Ireland, Europe
 Tel. +353 91 495730
 http://linkeddata.deri.ie/
 http://sw-app.org/about.html








Re: Schema.org in RDF ...

2011-06-07 Thread Bernard Vatant
Hi Michael

I just repeated what some people-who-know-better around assumed ...
For myself I'm sure of nothing, in particular regarding the future :)
And that's exactly why it seems to me that assertions published today should
not preempt the (possible) semantics of tomorrow, but rely on semantics as
they stand: http://schema.org/Person is an information resource, not an
rdfs:Class.

In the solution I propose, whenever the event you expect happens, just add
owl:equivalentClass and owl:equivalentProperty to your descriptions.
If it does not happen as you wish, nothing is broken. If people at
schema.org change their mind and throw away everything, you get rid of the
dcterms:source and your descriptions stay alive and backward compatible for
people in the RDF world. Et voilà.

Bernard

2011/6/7 Michael Hausenblas michael.hausenb...@deri.org

 Something I don't understand. If I read well all savvy discussions so far,
 publishers behind http://schema.org URIs are unlikely to ever provide any
 RDF description,


 What makes you so sure that the Schema.org URIs will not, one day in the
 (near?) future, serve RDF or JSON, FWIW, additionally to HTML? ;)


 Cheers,
Michael

 On 7 Jun 2011, at 08:44, Bernard Vatant wrote:

  Hi all

 Something I don't understand. If I read well all savvy discussions so far,
 publishers behind http://schema.org URIs are unlikely to ever provide any
 RDF description, so why are those URIs declared as identifiers of RDFS
 classes in the http://schema.rdfs.org/all.rdf. For all I can see,
 http://schema.org/Person is the URI of an information resource, not of a
 class.
 So I would rather have expected mirroring of the schema.org URIs by
 schema.rdfs.org URIs, the later fully dereferencable proper RDFS classes
 expliciting the semantics of the former, while keeping the reference to the
 source in some dcterms:source element.

 Example, instead of ...

 <rdf:Description rdf:about="http://schema.org/Person">
   <rdf:type rdf:resource="http://www.w3.org/2000/01/rdf-schema#Class"/>
   <rdfs:label xml:lang="en">Person</rdfs:label>
   <rdfs:comment xml:lang="en">A person (alive, dead, undead, or
 fictional).</rdfs:comment>
   <rdfs:subClassOf rdf:resource="http://schema.org/Thing"/>
   <rdfs:isDefinedBy rdf:resource="http://schema.org/Person"/>
 </rdf:Description>

 where I see a clear abuse of rdfs:isDefinedBy, since if you dereference
 the said URI, you don't find any explicit RDF definition ...

 I would rather have the following

 <rdf:Description rdf:about="http://schema.rdfs.org/Person">
   <rdf:type rdf:resource="http://www.w3.org/2000/01/rdf-schema#Class"/>
   <rdfs:label xml:lang="en">Person</rdfs:label>
   <rdfs:comment xml:lang="en">A person (alive, dead, undead, or
 fictional).</rdfs:comment>
   <rdfs:subClassOf rdf:resource="http://schema.rdfs.org/Thing"/>
   <dcterms:source rdf:resource="http://schema.org/Person"/>
 </rdf:Description>

 To the latter declaration, one could safely add statements like

 schema.rdfs:Person rdfs:subClassOf  foaf:Person

 etc

 Or do I miss the point?

 Bernard

 2011/6/3 Michael Hausenblas michael.hausenb...@deri.org

 http://schema.rdfs.org

 ... is now available - we're sorry for the delay ;)

 Cheers,
   Michael
 --
 Dr. Michael Hausenblas, Research Fellow
 LiDRC - Linked Data Research Centre
 DERI - Digital Enterprise Research Institute
 NUIG - National University of Ireland, Galway
 Ireland, Europe
 Tel. +353 91 495730
 http://linkeddata.deri.ie/
 http://sw-app.org/about.html





 --
 Bernard Vatant
 Senior Consultant
 Vocabulary  Data Integration
 Tel:   +33 (0) 971 488 459
 Mail: bernard.vat...@mondeca.com
 
 Mondeca
 3, cité Nollez 75018 Paris France
 Web:http://www.mondeca.com
 Blog:http://mondeca.wordpress.com
 





-- 
Bernard Vatant
Senior Consultant
Vocabulary & Data Integration
Tel:   +33 (0) 971 488 459
Mail: bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:  http://www.mondeca.com
Blog: http://mondeca.wordpress.com



Re: Schema.org in RDF ...

2011-06-07 Thread Bernard Vatant
Kingsley, you lost me once again :(

From the URI you provide I follow my nose to
http://uriburner.com/describe/?url=http%3A%2F%2Fschema.org%2FPerson%23this

Which as it says provides a description of the resource identified by
http://schema.org/Person#this, including the following triple :

<http://schema.org/Person#this>  rdf:type
<http://www.w3.org/2000/01/rdf-schema#Class>

AFAIK, http://schema.org/Person#this is no more declared as an RDFS class
than http://schema.org/Person is. Actually, since http://schema.org/Person is
currently an information resource per its answer to HTTP GET, I wonder what
http://schema.org/Person#this actually identifies, since there is no actual
#this anchor in the page.

Tweaking a new URI to make explicit the semantics of http://schema.org/Person
is OK, but this new URI has to be in a namespace you control etc.

Bernard



2011/6/7 Kingsley Idehen kide...@openlinksw.com


 Here is an example of an updated tweak [1] of what we did with Google's
 initial foray into this realm combined with recent developments at:
 schema.rdfs.org.

 Note, anyone can yank out this data, tweak, and then share (ideally via the
 Web in pure Linked Data form). I'll be sending an archive to Michael and Co.
 after hitting the send button on this mail.

 Links:

 1.
 http://uriburner.com/describe/?url=http%3A%2F%2Fschema.rdfs.org%2Fallp=2lp=4first=op=0gp=2




Re: Get your dataset on the next LOD cloud diagram

2011-07-13 Thread Bernard Vatant
Re. availability, just a reminder of the SPARQL Endpoints Status service
http://labs.mondeca.com/sparqlEndpointsStatus/index.html
As of today 80% (192/240) of the endpoints registered at CKAN are up and running.
Monitor the grey dots (still alive?) for candidate passed-away datasets ...

Bernard

2011/7/13 Leigh Dodds leigh.do...@talis.com:
 Hi,

 On 12 July 2011 18:45, Pablo Mendes pablomen...@gmail.com wrote:
 Dear fellow Linked Open Data publishers and consumers,
 We are in the process of regenerating the next LOD cloud diagram and
 associated statistics [1].
 ...

 This email prompted a discussion about how the data collection or
 diagram could be improved or updated. As CKAN is an open platform and
 anyone can add additional tags to datasets, why doesn't everyone who
 is interested in seeing a particular improvement or alternate view of
 the data just go ahead and do it? There's no need to require all this
 to be done by one team on a fixed schedule.

 Some light co-ordination between people doing similar analyses would
 be worthwhile, but it wouldn't be hard to, e.g. tag datasets based on
 whether their Linked Data or SPARQL endpoint is available regularly,
 whether they're currently maintained, or (my current bugbear) whether
 the data dumps they publish parse with more than one tool chain.

 It'd be nice to see many different aspects of the cloud being explored.

 Cheers,

 L.

 --
 Leigh Dodds
 Programme Manager, Talis Platform
 Mobile: 07850 928381
 http://kasabi.com
 http://talis.com

 Talis Systems Ltd
 43 Temple Row
 Birmingham
 B2 5LS









Re: Question: Authoritative URIs for Geo locations? Multi-lingual labels?

2011-09-08 Thread Bernard Vatant
Hi all

2011/9/8 Sarven Capadisli i...@csarven.ca

 Here is a nice:

 <http://dbpedia.org/resource/Montreal> owl:sameAs
 <http://sws.geonames.org/6077244/> .


A nice abuse of owl:sameAs indeed :)
http://sws.geonames.org/6077244/ is Montréal *Post Office*, since its feature
code is S.PO - called simply Montréal out of laziness by the data curator ...
and mistaken for the city by some dumb script not leveraging the geonames
classification to sort out homonyms.
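
The mix-up is visible in the geonames data itself; a sketch of the
distinguishing triples in Turtle (gn:featureCode is a property of the
geonames ontology; the exact feature code of the city record is assumed):

```turtle
@prefix gn: <http://www.geonames.org/ontology#> .

# The Post Office record, per its S.PO feature code mentioned above
<http://sws.geonames.org/6077244/> gn:featureCode gn:S.PO .

# The City of Montréal record (feature code assumed: a populated place)
<http://sws.geonames.org/6077243/> gn:featureCode gn:P.PPL .
```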


 http://sws.geonames.org/6077244/ doesn't provide more labels however.


Indeed. No one has taken the time so far to name Montréal Post Office in
other languages.
But http://sws.geonames.org/6077243/, which is indeed the *City* of Montréal,
does provide quite a bunch of them.

So data curation is needed :
- Specify the name(s) of http://sws.geonames.org/6077244/ on geonames side
to ease disambiguation (will do it)
- Correct the DBpedia dumb matching (I have no power on that one)

Cheers

Bernard



 -Sarven

 On Thu, 2011-09-08 at 15:49 +0100, Paul Wilton wrote:
  Hi Scott
  http://www.geonames.org is a good source of global Geospatial RDF
  linked data - it is a very large global dataset
 
 
  For the UK:  http://data.ordnancesurvey.co.uk  is a good option
 
 
  freebase also has a large global geospatial dataset
 
 
 
  cheers
  Paul
 
 
 
  On Thu, Sep 8, 2011 at 3:38 PM, M. Scott Marshall
  mscottmarsh...@gmail.com wrote:
  It seems that dbpedia is a de facto source of URIs for
  geographical
  place names. I would expect to find a more specialized source.
  I think
  that I saw one mentioned here in the last few months. Are
  there
  alternatives that are possible more fine-grained or designed
  specifically for geo data? With multi-lingual labels? Perhaps
  somebody
  has kept track of the options on a website?
 
  -Scott
 
  --
  M. Scott Marshall
  http://staff.science.uva.nl/~marshall
 
  On Thu, Sep 8, 2011 at 3:07 PM, Sarven Capadisli
  i...@csarven.ca wrote:
   On Thu, 2011-09-08 at 14:01 +0100, Sarven Capadisli wrote:
   On Thu, 2011-09-08 at 14:07 +0200, Karl Dubost wrote:
# Using RDFa (not implemented in browsers)
   
   
    <ul xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
      id="places-rdfa">
    <li><span
    about="http://www.dbpedia.org/resource/Montreal"
    geo:lat_long="45.5,-73.67">Montréal</span>,
      Canada</li>
    <li><span
    about="http://www.dbpedia.org/resource/Paris"
    geo:lat_long="48.856578,2.351828">Paris</span>,
      France</li>
    </ul>
   
* Issue: Latitude and Longitude not separated
  (have to parse them with regex in JS)
    * Issue: xmlns with <!doctype html>
   
   
# Question
   
On RDFa vocabulary, I would really like a solution with
  geo:lat and geo:long, Ideas?
  
   Am I overlooking something obvious here? There is lat, long
  properties
   in wgs84 vocab. So,
  
    <span about="http://dbpedia.org/resource/Montreal">
    <span property="geo:lat"
      content="45.5"
      datatype="xsd:float"></span>
    <span property="geo:long"
      content="-73.67"
      datatype="xsd:float"></span>
    Montreal
    </span>
  
   Tabbed for readability. You might need to get rid of
  whitespace.
  
   -Sarven
  
   Better yet:
  
    <li about="http://dbpedia.org/resource/Montreal">
      <span property="geo:lat"
    ...
  
  
   -Sarven
 
 
 







-- 
Bernard Vatant
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant

Mondeca
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews http://twitter.com/#%21/mondecanews


Re: Press.net News Ontology

2011-09-08 Thread Bernard Vatant
Hello Stéphane

Any idea when rNews will be available as an RDFS or OWL vocabulary?
So far we have at least an URI for it http://dev.iptc.org/rnewsowl but no
description :)

Bernard

2011/9/8 Stéphane Corlosquet scorlosq...@gmail.com

 Hi Jarred,

 It seems to me that your work is similar or at least related to rNews [1].
 I'm curious to know if you're looked at rNews when building the
 News Ontology. Do they complement each other, or are we re-inventing the
 wheel?

 Steph.

 [1] http://dev.iptc.org/rNews


 On Thu, Sep 8, 2011 at 9:48 AM, Jarred McGinnis 
 jarred.mcgin...@pressassociation.com wrote:


 Hello all,


 The Press Association has just published our first draft of a 'news'
 ontology (*http://data.press.net/ontology*). For each of the ontologies
 documented, we've included the motivation for the ontologies as well as some
 of the design decisions behind it. Also, you can get the rdf or ttl by
 adding the extension. For example, http://data.press.net/ontology/asset
 .rdf gives you the ontology described at
 http://data.press.net/ontology/asset/ ..


 Have a look at the ontology and tell us what you think. We think it is
 pretty good but feel free to point out our mistakes. We will fix it. Ask why
 we did it one way and not another. We will give you an answer.


 Paul Wilton of Ontoba has been working with us at the PA and has spelled
 out a lot of the guiding principles of this work at
 http://www.ontoba.com/blog.


 The reasons behind this work were talked about at SemTech 2011 San
 Francisco:
 http://semtech2011.semanticweb.com/sessionPop.cfm?confid=62proposalid=4134
 


 Looking forward to hearing from you,


 Jarred McGinnis, PhD

 Research Manager, Semantic Technologies

 PRESS ASSOCIATION

 www.pressassociation.com

 jarred.mcgin...@pressassociation.com

 T: +44 (0) 2079 637 198
 Extension: (7198)
 M: +44 (0) 7816 286 852


 Registered Address: The Press Association Limited, 292 Vauxhall
 Bridge Road, London, SW1V 1AE. Registered in
 England No. 5946902


 This email is from the Press Association. For more information, see
 www.pressassociation.com. This email may contain confidential
 information. Only the addressee is permitted to read, copy, distribute or
 otherwise use this email or any attachments. If you have received it in
 error, please contact the sender immediately. Any opinion expressed in this
 email is personal to the sender and may not reflect the opinion of the Press
 Association. Any email reply to this address may be subject to interception
 or monitoring for operational reasons or for lawful business practices.







Re: Press.net News Ontology

2011-09-08 Thread Bernard Vatant
Adding to Bob's list with which I fully agree

In http://data.press.net/ontology/stuff/ the namespace
http://www.w3.org/TR/owl-time/ used for the time ontology is not correct:
http://www.w3.org/TR/owl-time/Instant is 404. Bing.

The time ontology is indeed specified by http://www.w3.org/TR/owl-time
But the namespace is http://www.w3.org/2006/time#

... speak about good URI practice in W3C specs ;-)
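
The fix is a one-line prefix change; a minimal Turtle sketch (the event URI
in the example.org namespace is hypothetical, only the binding matters):

```turtle
# With the correct namespace, terms such as time:Instant dereference,
# unlike http://www.w3.org/TR/owl-time/Instant
@prefix time: <http://www.w3.org/2006/time#> .

<http://example.org/event/1#start> a time:Instant .
```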

Bob is using cute prefixes pns, pna etc.
I'm using them as recommended prefixes at
http://labs.mondeca.com/dataset/lov/ where I just started adding them.
(not quite sure if they are in the right vocabulary space, though ...)

Best

Bernard

2011/9/8 Bob Ferris z...@smiy.org

 Hi Jarred,

 at a first glance, here are my remarks:

 1. pne:Event, pne:sub_event seem to be a bit duplicated. I guess,
 event:Event, event:sub_event are enough.

 2. pne:title can be replaced by, e.g., dc:title.

 3. pns:Person can be replaced by foaf:Person.

 4. pns:Organization can be replaced by foaf:Organization.

 5. pns:worksFor can be replaced by rel:employedBy [1].

 6. pns:Location can be replaced by geo:SpatialThing

 7. Re. the tagging terms, I would recommend to have a look at the Tag
 Ontology [2] or similar (see, e.g., [3])

 8. Re. biographical events I would recommend to have a look at the Bio
 Vocabulary [4], e.g., bio:birth/bio:death.

 9. pns:label can be replaced by dc:title (or rdfs:label).

 10. pns:comment can be replaced by dc:description (or rdfs:comment).

 11. pns:describedBy can be replaced by wdrs:describedby [5].

 12. Re. bibliographic terms I would recommend to have a look at the Bibo
 Ontology [6], e.g., bibo:Image (or foaf:Image), or the FRBR Vocabulary [7],
 e.g., frbr:Text.

 13. pna:hasThumbnail can be replaced by foaf:thumbnail.

 ...

 Please help us to create 'shared understanding' by reutilising terms of
 existing Semantic Web ontologies.

 Cheers,


 Bob


 [1] http://purl.org/vocab/relationship/employedBy
 [2] http://www.holygoat.co.uk/projects/tags/
 [3] http://answers.semanticweb.com/questions/1566/ontologyvocabulary-and-design-patterns-for-tags-and-tagged-data
 [4] http://purl.org/vocab/bio/0.1/
 [5] http://www.w3.org/2007/05/powder-s#describedby
 [6] http://purl.org/ontology/bibo/
 [7] http://purl.org/vocab/frbr/core#


 On 9/8/2011 3:48 PM, Jarred McGinnis wrote:

 Hello all,

 The Press Association has just published our first draft of a 'news'
 ontology (http://data.press.net/ontology). For each of the ontologies
 documented, we've included the motivation for the ontologies as well as
 some of the design decisions behind it. Also, you can get the rdf or ttl
 by adding the extension. For example,
 http://data.press.net/ontology/asset.rdf gives
 you the ontology described at http://data.press.net/ontology/asset/.

 Have a look at the ontology and tell us what you think. We think it is
 pretty good but feel free to point out our mistakes. We will fix it. Ask
 why we did it one way and not another. We will give you an answer.

 Paul Wilton of Ontoba has been working with us at the PA and has spelled
 out a lot of the guiding principles of this work at
 http://www.ontoba.com/blog.

 The reasons behind this work were talked about at SemTech 2011 San
 Francisco:
 http://semtech2011.semanticweb.com/sessionPop.cfm?confid=62proposalid=4134
 

 Looking forward to hearing from you,

 *Jarred McGinnis, PhD*








Re: ANN: Modular Unified Tagging Ontology (MUTO)

2011-11-18 Thread Bernard Vatant
Hi folks

Maybe a good way to capture the fact that MUTO has used previous works, but
with significant changes making it difficult to assert equivalences at element
level such as equivalentClass etc, would be to assert links at vocabulary
(ontology) level, using for example the property
http://www.w3.org/2000/10/swap/pim/doc#derivedFrom

<http://purl.org/muto/core>  doc:derivedFrom  <http://www.holygoat.co.uk/projects/tags/>
<http://purl.org/muto/core>  doc:derivedFrom  <http://moat-project.org/ns>

etc.

Such assertions could be added to other cross-vocabulary links at e.g.,
http://labs.mondeca.com/dataset/lov/details/vocabulary_muto.html. Actually
derivedFrom and derivativeWork should be mentioned in VOAF.

BTW if the honorable creator of http://www.w3.org/2000/10/swap/pim/doc is
following this thread (he might/should) he could benefit from it to revisit
the definitions of doc:derivedFrom and doc:derivativeWork, inverse
properties with the same definition "A work wholey or partly used in the
creation of this one". Guess it is OK for the former, but not the latter :)
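
In OWL terms the relation between the two properties is simply the following;
this axiom is illustrative (only the English definitions need fixing):

```turtle
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix doc: <http://www.w3.org/2000/10/swap/pim/doc#> .

# derivedFrom points from a work to its source;
# derivativeWork points the other way round
doc:derivedFrom owl:inverseOf doc:derivativeWork .
```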

Best

Bernard

2011/11/18 Steffen Lohmann slohm...@inf.uc3m.es

 On 17.11.2011 20:03, Richard Cyganiak wrote:

 Hi Steffen,

 On 17 Nov 2011, at 14:34, Steffen Lohmann wrote:

 MUTO should thus not be considered as yet another tagging ontology but
 as a unification of existing approaches.

 I'm curious why you decided not to include mappings (equivalentClass,
 subProperty etc) to the existing approaches.


 Good point, Richard. I thought about it but finally decided to separate
 these alignments from the core ontology - therefore the MUTO Mappings
 Module (http://muto.socialtagging.org/core/v1.html#Modules).

 SIOC and SKOS can be nicely reused but aligning MUTO with the nine
 reviewed tagging ontologies is challenging and would result in a number of
 inconsistencies. This is mainly due to a different conceptual understanding
 of tagging and folksonomies in the various ontologies. To give some
 examples:

 - Are tags with same labels merged in the ontology (i.e. are they one
 instance)?
 - Is the number of tags per tagging limited to one or not?
 - In case of semantic tagging: Are single tags or complete taggings
 disambiguated?
 - How are the creators of taggings linked?
 - Are tags from private taggings visible to other users or not?

 Apart from that, I would run the risk that MUTO is no longer OWL Lite/DL,
 which I consider important for a tagging ontology (reasoning over
 folksonomies).

 The current version of the MUTO Mappings Module provides alignments to
 Newman's popular TAGS ontology (mainly for compatibility reasons). Have a
 look at it and you'll get an idea of the difficulties in correctly aligning
 MUTO with existing tagging ontologies.
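
To give a flavour of what such an alignment looks like, a sketch in Turtle
(illustrative only - these are plausible term names from MUTO and Newman's
TAGS ontology, not the actual content of the mappings module):

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix muto: <http://purl.org/muto/core#> .
@prefix tags: <http://www.holygoat.co.uk/owl/redwood/0.1/tags/> .

muto:Tagging owl:equivalentClass tags:Tagging .
muto:Tag     rdfs:subClassOf     tags:Tag .
```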


 Best,
 Steffen

 --
 Steffen Lohmann - DEI Lab
 Computer Science Department, Universidad Carlos III de Madrid
 Avda de la Universidad 30, 28911 Leganés, Madrid (Spain), Office: 22A20
 Phone: +34 916 24-9419, http://www.dei.inf.uc3m.es/slohmann/






-- 
Bernard Vatant
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
Linked Open Vocabularies http://labs.mondeca.com/dataset/lov

Mondeca
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews http://twitter.com/#%21/mondecanews


Fwd: status and problems on sematicweb.org

2012-01-12 Thread Bernard Vatant
Trying the lod list since apparently the message did not make it to semweb
list ...

-- Forwarded message --
From: Bernard Vatant bernard.vat...@mondeca.com
Date: 2012/1/12
to Semantic Web semantic-...@w3.org

Hi all

A related issue is that under semanticweb.org domain or subdomains are
living several vocabularies (ontologies), some of them are used in the
linked data space, either by published data sets or other vocabularies
relying on them.

But their status is variable. Examples :

http://data.semanticweb.org/ns/swc/ontology is alive and well so far
and re-used e.g., by http://online-presence.net/opo/ns

http://proton.semanticweb.org/2005/04/protons# is alive and well so far
and re-used e.g., by http://www.bbc.co.uk/ontologies/sport/

But

http://www.semanticweb.org/ontologies/2009/2/HumanEmotions.owl is 404,
although http://kdo.render-project.eu/kdo declares that it imports it.
Actually http://semanticweb.org/ontologies/ itself is 404

http://knowledgeweb.semanticweb.org/semanticportal/OWL/Documentation_Ontology.owl
is 404
although http://lsdis.cs.uga.edu/projects/semdis/opus# declares many
mappings to it

http://data.semanticweb.org/ns/misc is 404
although it is used by http://data.semanticweb.org/ns/swc/ontology

And that's only what I can quickly discover using the results provided by
the LOV bot which explores the Linked Open Vocabularies space.

What is the bottom line of this? Data and vocabulary publishers take for
granted that published vocabularies can be re-used at will and rely on
them. But when a re-used vocabulary goes off-line, not only do we have 404s
in the linked data web, but the semantics of dependent vocabularies is affected.

semanticweb.org is just an example. Unfortunately it's not the only one. It
seems that vocabulary publishers are often not aware of their long-term
responsibility. In the LOV project we have even had answers from some
people mentioned as vocabulary creators who were not even aware that their
vocabulary was actually still used ...

But given its singular place in the semantic web space, one could think
that semanticweb.org should show off good practices ...

Best

Bernard


2012/1/12 Markus Krötzsch markus.kroetz...@cs.ox.ac.uk

 Hi Yuri,

 let us take this to one mailing list semantic-...@w3.org, as this is the
 list that is most involved (please drop the others when you reply).

 As the technical maintainer of the site, I largely agree with your
 assessment. In spite of the very high visibility of the site (and perceived
 authority), the active editing community is not big. This is a problem
 especially given the significant and continued spam attacks that the site
 is under due to its high visibility (I just recently changed the captcha
 system and rolled back thousands of edits, yet it seems they are already
 breaking through again, though in smaller numbers).

 I do not want to blame anybody for the state of affairs: most of us do not
 have the time to contribute significant content to such sites. However,
 given the extraordinary visibility of the site, we should all perceive this
 as a major problem (to the extent that we attach our work to the label
 semantic web in any way).

 So what can be done?

 (1) Freeze the wiki. A weaker version of this is: allow users only to edit
 after they were manually added to a group of trusted users (all humans
 welcome). This would require somebody to manage these permissions but would
 allow existing projects/communities to continue to use the site.

 (2) Re-enforce spam protection on the wiki. Maybe this could be done, but
 the site is targeted pretty heavily. Standard captchas like ReCaptcha are
 thus getting broken (spammers do have an effective infrastructure for
 this), but maybe non-standard captchas could work better. This is a task
 for the technical maintainers (i.e., me and the folks at AIFB Karlsruhe
 where the site is hosted).

 (3) Clean the wiki. Whether frozen or not, there is a lot of spam already.
 Something needs to be done to get rid of it. This requires (easy but
 tedious) manual effort. Some stakeholders need to be found to provide basic
 workforce (e.g., by hiring a student to help with spam deletion).

 (4) Restore the wiki. Update the main pages (about technologies and active
 projects) to reflect a current and/or timeless state that we would like new
 readers to see. This again needs somebody to push it, and for writing pages
 about topics like SPARQL one would need some expertise. This is a challenge
 for the community.

 I am willing to invest /some/ time here to help with the above, but (3)
 and (4) requires support from more people. On the other hand, there are
 probably hardly more than 20 or 30 *essential* content pages that we are
 talking about here, plus many pages about projects and people that one
 should ask the stakeholders to review. So one might be able to make this
 into a shining entry point to the semantic web in a week of work ...
 together with (1) and (2) above

Re: Recommendations for Documenting EoL.org Content Partners

2012-01-18 Thread Bernard Vatant
Hi Peter

What about something like :

<http://dbpedia.org/resource/EOL> rdf:type
<http://www.w3.org/ns/org#OrganizationalCollaboration>
<http://dbpedia.org/resource/EOL> <http://www.w3.org/ns/org#hasMember>
<http://eol.org/partner/159>
<http://eol.org/content/159> foaf:page <http://eol.org/content_partners/159>
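
The same statements written as Turtle with prefixes bound (a sketch; the
message mixes http://eol.org/content/159 and http://eol.org/partner/159, so
partner/159 is assumed here):

```turtle
@prefix org:  <http://www.w3.org/ns/org#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<http://dbpedia.org/resource/EOL> a org:OrganizationalCollaboration ;
    org:hasMember <http://eol.org/partner/159> .

<http://eol.org/partner/159> foaf:page <http://eol.org/content_partners/159> .
```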


Bernard

2012/1/18 Peter DeVries pete.devr...@gmail.com

 Hi All,

 If you were to recommend how to markup the content providers listed on
 this page how would you do it?

 http://eol.org/content_partners

 Would you use SIOC, DOAP or some other vocabulary?

 What some would like is the ability to cite a content partner using just a
 URI.

 For example:

 <dcterms:source rdf:resource="ContentPartnerURI"/>

 I would appreciate any suggestion or comments :-)

 Thanks,

 - Pete


 
 Pete DeVries
 Department of Entomology
 University of Wisconsin - Madison
 445 Russell Laboratories
 1630 Linden Drive
 Madison, WI 53706
 Email: pdevr...@wisc.edu
 TaxonConcept http://www.taxonconcept.org/
 GeoSpecies http://about.geospecies.org/ Knowledge Bases
 A Semantic Web, Linked Open Data http://linkeddata.org/ Project

 --






Re: Modelling colors

2012-01-26 Thread Bernard Vatant
Hi Melvin

There are a few resources in the LOV database which might be of interest
http://labs.mondeca.com/dataset/lov/search/#s=color

Bernard

2012/1/26 Melvin Carvalho melvincarva...@gmail.com

 I see hasColor a lot in the OWL documentation but I was trying to work
 out a way to say something has a certain color.

 I understand linked open colors was a joke

 Anyone know of an ontology with color or hasColor as a predicate?






Re: [Ann] LODStats - Real-time Data Web Statistics

2012-02-02 Thread Bernard Vatant
Hello Sören

Great work! Of course as you can imagine I jumped right away to
http://stats.lod2.eu/vocabularies.
Interesting to see the broad figures (205 vocabularies) vs 189 harvested as
of today at http://labs.mondeca.com/dataset/lov
So I would like to compare, see the overlap ... and complete LOV as needed
:)

Do you have the vocabularies and datasets using them available in a single
file? (preferably RDF of course!)

Thanks

Bernard


2012/2/2 Sören Auer a...@informatik.uni-leipzig.de

 Dear all,

 We are happy to announce the first public *release of LODStats*.

 LODStats is a statement-stream-based approach for gathering
 comprehensive statistics about datasets adhering to the Resource
 Description Framework (RDF). LODStats was implemented in Python and
 integrated into the CKAN dataset metadata registry [1]. Thus it helps to
 obtain a comprehensive picture of the current state of the Data Web.

 More information about LODStats (including its open-source
 implementation) is available from:

 http://aksw.org/projects/LODStats

 A demo installation collecting statistics from all LOD datasets
 registered on CKAN is available from:

 http://stats.lod2.eu

 We would like to thank the AKSW research group [2] and LOD2 project [3]
 members for their suggestions. The development LODStats was supported by
 the FP7 project LOD2 (GA no. 257943).

 On behalf of the LODStats team,

 Sören Auer, Jan Demter, Michael Martin, Jens Lehmann

 [1] http://ckan.net
 [2] http://aksw.org
 [3] http://lod2.eu






Re: [Ann] LODStats - Real-time Data Web Statistics

2012-02-02 Thread Bernard Vatant
Hello all

I've started comparing http://stats.lod2.eu/vocabularies with what we have
in store in LOV.

A few preliminary stats are available. Those who prefer raw data can go
directly to the shared GDocs (waiting for better formats)
https://docs.google.com/spreadsheet/ccc?key=0AiYc9tLJbL4SdEhvMlJjSmJELVhqVk9RUzBIWEhBMUE
Public access is read-only; if you want edit rights, just ask.
Pretty much sandbox/work in progress, provisional but interesting figures
nevertheless. Three sheets available :

1. LOV in LOD : vocabularies extracted by LODStats and already present in
LOV : 54 so far
2. LOV w/o LOD : vocabularies in LOV not yet used in LOD (at least not
extracted by LODStats) : 137
(figures to be consolidated since there are 189 vocs in LOV altogether -
duplicates to double-check)
3. LOD w/o LOV : vocabularies extracted by LODStats and not (yet) present
in LOV : 150

Figures 1 and 2 show that there is still a large majority of unused
vocabularies in LOV. This is useful information. Does that mean they are
useless? Time will tell ...

Figure 3 is more challenging. I've looked at each of those 150 URIs and, as
of today, they can be distributed as follows:

Fewer than 50 are proper dereferenceable vocabularies, hence LOV-able,
which means a challenging to-do list for LOV curators, which should lead
the figures in 1 and 3 to meet somewhere around 100 with a little effort,
but be patient, this is human-checked. If you want some of those to be
added in priority, use the suggest facility at
http://labs.mondeca.com/dataset/lov/suggest/

More than 60 are either 404, time out or access denied, which does not come
as a surprise, but is nevertheless a big issue. It means that data using
those vocabularies are relying on semantics no one can check.

The rest dereferences, but to various types of resources more or less
close to one or several vocabularies, not published following good
practices - in a word, not in a LOV-able state.

All in all, almost half of the vocabularies used in LOD are not meeting a
minimal quality requirement : be published at their namespace.

Conclusion : Quality, Quality, Quality please !
Double-check the vocabularies you use, publish them properly if they are in
your namespace etc etc.

Bernard


2012/2/2 Bernard Vatant bernard.vat...@mondeca.com

 Hello Sören

 Great work! Of course as you can imagine I jumped right away to
 http://stats.lod2.eu/vocabularies.
 Interesting to see the broad figures (205 vocabularies) vs 189 harvested
 as of today at http://labs.mondeca.com/dataset/lov
 So I would like to compare, see the overlap ... and complete LOV as needed
 :)

 Do you have the vocabularies and datasets using them available in a single
 file? (preferably RDF of course!)

 Thanks

 Bernard



 2012/2/2 Sören Auer a...@informatik.uni-leipzig.de

 Dear all,

 We are happy to announce the first public *release of LODStats*.

 LODStats is a statement-stream-based approach for gathering
 comprehensive statistics about datasets adhering to the Resource
 Description Framework (RDF). LODStats was implemented in Python and
 integrated into the CKAN dataset metadata registry [1]. Thus it helps to
 obtain a comprehensive picture of the current state of the Data Web.

 More information about LODStats (including its open-source
 implementation) is available from:

 http://aksw.org/projects/LODStats

 A demo installation collecting statistics from all LOD datasets
 registered on CKAN is available from:

 http://stats.lod2.eu

 We would like to thank the AKSW research group [2] and LOD2 project [3]
 members for their suggestions. The development of LODStats was supported by
 the FP7 project LOD2 (GA no. 257943).

 On behalf of the LODStats team,

 Sören Auer, Jan Demter, Michael Martin, Jens Lehmann

 [1] http://ckan.net
 [2] http://aksw.org
 [3] http://lod2.eu




 --
 Bernard Vatant
 Vocabularies & Data Engineering
 Tel: +33 (0)9 71 48 84 59
 Skype: bernard.vatant
 Linked Open Vocabularies: http://labs.mondeca.com/dataset/lov

 Mondeca
 3 cité Nollez 75018 Paris, France
 www.mondeca.com
 Follow us on Twitter: @mondecanews (http://twitter.com/#%21/mondecanews)






Re: [Ann] LODStats - Real-time Data Web Statistics

2012-02-03 Thread Bernard Vatant
Hello Richard

 All in all, almost half of the vocabularies used in LOD are not meeting a
 minimal quality requirement : be published at their namespace.

 Now, if there was a list of these, annotated with some stats (used in how
 many datasets? occurring in how many triples?), then we could start at the
 top of the list, and sort it out with the various publishers involved.


Indeed! That's the purpose of what I started in the Gdocs ... I just sent
you editing rights :)

That is work we have already started with Pierre-Yves inside the LOV
ecosystem: pinging the vocabulary curators when they rely on
not-so-reliable namespaces (either their own, or those of
vocabularies they re-use but don't maintain). The objective is to
augment the overall quality of the vocabulary ecosystem, one vocabulary at
a time :)

It is a patient but important task. You're welcome to participate. It is
actually 80% social and 20% technical :)

Best

Bernard



Re: URIs for languages

2012-02-16 Thread Bernard Vatant
Hi all

As creator and curator of lingvoj.org, I think I can give some explanations
of such mysteries :)

Lingvoj.org URIs included quite an arbitrary set of URIs for languages; gory
details of the story, starting in 2007, can be found on the lingvoj.org main page.
To be in line with BCP 47 and e.g. the values used in xml:lang tags, the
lingvoj.org URIs are based on ISO 639-1 (2-letter) codes when available.
For Japanese the lingvoj.org URI is therefore http://www.lingvoj.org/lang/ja,
which has actually been redirected to http://lexvo.org/id/iso639-3/jpn since 2010,
for reasons explained on the same page.
For Ancient Greek there is no 2-letter code, hence the 3-letter code grc
is used (either ISO 639-2 or 639-3 in this case).
Lexvo.org URIs are all based on ISO 639-3 3-letter codes, which is simpler.

Now this is part of a story which started even earlier, more than ten years
ago in the OASIS Published Subjects Technical Committee, with URIs such as
http://psi.oasis-open.org/iso/639/#grc (BTW still in use inside Mondeca
software). Lars Marius Garshol, its editor, is in cc.

I think now we should forget about URIs published by pioneer projects such
as the OASIS TC, lingvoj.org and lexvo.org, and stick to URIs published by a
genuine authority, the Library of Congress, which is as close to the primary
source as can be. So if you want to use a URI for Ancient Greek as defined
by ISO 639-2, please use http://id.loc.gov/vocabulary/iso639-2/grc.
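For instance, tagging a document with its language via the LOC URI could look like this (a minimal sketch: the document URI is invented, and the choice of dcterms:language is my illustration, not something prescribed above):

```xml
<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dcterms="http://purl.org/dc/terms/">
  <!-- Hypothetical document written in Ancient Greek;
       dcterms:language points at the LOC ISO 639-2 URI -->
  <rdf:Description rdf:about="http://example.org/documents/iliad">
    <dcterms:language rdf:resource="http://id.loc.gov/vocabulary/iso639-2/grc"/>
  </rdf:Description>
</rdf:RDF>
```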

BTW Lars Marius, hello, what do you think? URIs at id.loc.gov are really
what we were dreaming of achieving in 2001, right?

Bernard



2012/2/16 M. Scott Marshall mscottmarsh...@gmail.com

 I was planning to give the example URI for the Japanese language
 (stemming out of work at the Biohackathon 2011):
 http://lexvo.org/id/iso639-3/jpn

 BTW, I wasn't able to use the simpler URI scheme below for jpn as you
 had done with grc:
 http://www.lingvoj.org/lang/jpn
 ?

 -Scott

 On Thu, Feb 16, 2012 at 5:26 PM, Barry Norton barry.nor...@ontotext.com
 wrote:
 
  http://www.lingvoj.org/lang/grc
 
  Barry
 
 
 
 
  On 16/02/2012 16:15, Jordanous, Anna wrote:
 
  Hi LOD list,
 
  I am looking for URIs to use to represent particular languages
 (primarily
  Ancient Greek, Arabic, English and Spanish). This is to represent what
  language a document is written in, in an RDF triple. I thought it would
 be
  obvious how to refer to the language itself, but I am struggling.
 
  I would like to use something like the ISO 639 standard for languages. To
  distinguish between Ancient Greek and Modern Greek, I have to use the
  ISO-639-2 set of language codes: http://www.loc.gov/standards/iso639-2/
  (The codes are grc and gre respectively)
 
  http://downlode.org/Code/RDF/ISO-639/ is an RDF representation of ISO
 639
  but it doesn’t include Ancient Greek as it only includes ISO-639-1
  languages.
 
  As far as I see, I have the following options e.g. for Arabic
  Use one of the following:
  http://www.loc.gov/standards/iso639-2/php/langcodes_name.php?code_ID=22
  http://www.loc.gov/standards/iso639-2/php/langcodes-keyword.php?SearchTerm=ara&SearchType=iso_639_2
  http://www.loc.gov/standards/iso639-2#ara
 
 
  This really must be simpler – what am I missing? Any comments welcomed.
  Thanks for your help
  anna
 
  ---
  Anna Jordanous
  Research Associate
  Centre for e-Research
  King's College London
  Tel: +44 (0) 20 7848 1988
 
 
 
 
 



 --
 M. Scott Marshall
 http://staff.science.uva.nl/~marshall






Re: URIs for languages

2012-02-17 Thread Bernard Vatant
 to press send. I'm
 confused by your reply. What about the problems with LOC lang ids that
 Gerard pointed out? Is that what you meant by If only they could do
 ISO 3166 countries as well...?]

 Best,
 Scott

 On Thu, Feb 16, 2012 at 8:21 PM, Gerard de Melo gdem...@mpi-inf.mpg.de
 wrote:
  Hi Bernard,
 
 
  I think now we should forget about URIs published by pionneer projects
 such
  as OASIS TC, lingvoj.org and lexvo.org, and stick to URIs published by
  genuine authority Library of Congress which is as close to the primary
  source as can be. So if you want to use a URI for Ancient Greek as
 defined
  by ISO 639-2, please use http://id.loc.gov/vocabulary/iso639-2/grc.
 
  BTW Lars Marius, hello, what do you think? URIs at id.loc.gov are really
  what we were dreaming to achieve in 2001, right?
 
 
  Now of course I may be a bit biased here, but I do not believe that the
  id.loc.gov service solves
  all of the problems. This is from the Lexvo.org FAQ [1]:
 
  The advantage of using those URIs is that they are maintained by the
 Library
  of Congress. However, there are also several issues to consider. First of
  all, ISO 639-2 is orders of magnitude smaller than ISO 639-3 and for
 example
  lacks an adequate code for Cantonese, which is spoken by over 60 million
  speakers.
  More importantly, the LOC's URIs do not describe languages per se but
 rather
  describe code-mediated conceptualizations of languages. This implies, for
  instance, that the French language (http://lexvo.org/id/iso639-3/fra)
 has
  two different counterparts at the LOC,
  http://id.loc.gov/vocabulary/iso639-2/fra and
  http://id.loc.gov/vocabulary/iso639-2/fre, which each have slightly
  different properties.
  Finally, connecting your data to Lexvo.org's information is likely to be
  more useful in practical applications. It offers information about the
  languages themselves, e.g. where they are spoken, while the LOC mostly
  provides information about the codes, e.g. when the codes were created
 and
  updated and what kind of code they are.
  In practice, you can also use both codes simultaneously in your data.
  However, you need to be very careful to make sure that you are asserting
   that a publication is written in French rather than in some concept of
   French created on January 1, 1970 in the United States.
 
 
  Best,
  Gerard
 
  [1] http://www.lexvo.org/linkeddata/faq.html
 
  --
  Gerard de Melo [dem...@icsi.berkeley.edu]
  http://www.icsi.berkeley.edu/~demelo/






Re: How do OBO ontologies work on the LOD?

2012-02-22 Thread Bernard Vatant

 --
 Pete DeVries
 Department of Entomology
 University of Wisconsin - Madison
 445 Russell Laboratories
 1630 Linden Drive
 Madison, WI 53706
 Email: pdevr...@wisc.edu
 TaxonConcept (http://www.taxonconcept.org/) and GeoSpecies (http://about.geospecies.org/) Knowledge Bases
 A Semantic Web, Linked Open Data (http://linkeddata.org/) Project






Re: owl:sameAs temptation

2012-03-07 Thread Bernard Vatant
Hi Sarven

You might be interested by the way I've mapped the Geonames feature codes,
which are modelled as instances of a subclass of skos:Concept (hence OWL
individuals) to equivalent classes in other ontologies. See [1] and [2].

The rationale is that most of the time, when you assert that an owl:Thing T
is equivalent to some owl:Class C, it means that being of rdf:type C is
equivalent to having T as the value of some typing property. For example,
being an instance of the class BlueThing is equivalent to having Blue
as the value of some hasColor property. This can be modelled as in [2], using
an owl:hasValue restriction, avoiding the owl:sameAs temptation and keeping
all your ontology in safe OWL-DL land, this way:

<owl:Class rdf:about="http://example.org/BlueThing">
  <rdfs:label xml:lang="en">Blue Thing</rdfs:label>
  <owl:equivalentClass>
    <owl:Restriction>
      <owl:onProperty rdf:resource="http://example.org/hasColor"/>
      <owl:hasValue rdf:resource="http://example.org/Blue"/>
    </owl:Restriction>
  </owl:equivalentClass>
</owl:Class>

<skos:Concept rdf:about="http://example.org/Blue">
  <skos:prefLabel xml:lang="en">Blue</skos:prefLabel>
  <skos:prefLabel xml:lang="fr">Bleu</skos:prefLabel>
</skos:Concept>
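To make the effect of the restriction concrete, here is a small sketch (the ex:sky individual is invented for illustration): any resource that has Blue as its hasColor value is then inferred by an OWL-DL reasoner to be an instance of BlueThing, with no owl:sameAs involved:

```xml
<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:ex="http://example.org/">
  <!-- Hypothetical individual pointing at the SKOS concept above -->
  <rdf:Description rdf:about="http://example.org/sky">
    <ex:hasColor rdf:resource="http://example.org/Blue"/>
  </rdf:Description>
  <!-- Given the owl:hasValue restriction, a DL reasoner infers:
       <http://example.org/sky> rdf:type <http://example.org/BlueThing> -->
</rdf:RDF>
```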

Hope this helps

Bernard

[1] http://www.geonames.org/ontology/ontology_v3.01.rdf
[2] http://www.geonames.org/ontology/mappings_v3.01.rdf


On 7 March 2012 at 07:43, Sarven Capadisli i...@csarven.ca wrote:

 Hi,

 I'm sure this is talked somewhere, I'd love a pointer if you know any:

 I often see resources of type owl:Class get paired with resources of type
 owl:Thing using owl:sameAs. As far as I understand, this is incorrect since
 domain and range of owl:sameAs should be owl:Thing.

 I'm tempted to change my resource that is a skos:Concept
 skos:exactMatch'ed with a resource of type owl:Thing, and use owl:sameAs.
 Sort of like everyone else is doing it, it should be okay, and don't
 need to fear the thought police.

 However, I don't wish to do that with a clear conscience, hence, I'd
 appreciate it if anyone can shed some light here for me and help me
 understand to make an informed decision based on reason (no pun intended).

 Related to this, I was wondering whether it makes sense to claim a
 resource to be of type owl:Class as well as of type owl:Thing, where it may
 be appropriate, or one could get away with it, e.g. a country. If this is
 okay, I imagine it is okay to use owl:sameAs for the subject at hand and
 point to yet another thing.

 Thanks all.

 -Sarven






Re: Change Proposal for HttpRange-14

2012-03-26 Thread Bernard Vatant
All

Like many others it seems, I had sworn to myself: nevermore HttpRange-14.
But I will also bite the bullet.
Here goes ... Sorry, I have a hard time following up on who said what in
all those entangled threads, so I answer to ideas more than to people.

There is no need for anyone to even talk about information resources.


YES! I've come over the years to a very radical position on this, which is that
we have created ourselves a huge non-issue with those notions of
information resource and non-information resource. Please show any
application making use of this distinction, or which would break if we got
rid of this distinction.
And in any case, if there is a distinction, it is about how
the URI behaves in the HTTP protocol (what it accesses), which should be
kept independent of what the URI denotes. The never-ending debate will never
end as long as those two aspects are mixed, as they are in the current
httpRange-14 as well as in various change proposals (hence those
interminable threads).


 The important point about http-range-14, which unfortunately it itself
 does not make clear, is that the 200-level code is a signal that the URI
 *denotes* whatever it *accesses* via the HTTP internet architecture.


The proposal is that URI X denotes what the publisher of X says it denotes,
 whether it returns 200 or not.


This is the only position which makes sense to me. What the URI is intended
to denote can be only derived from explicit descriptions, whatever the way
you access those descriptions. And assume that if there is no such
description, the URI is intended to provide access to somewhere, but not to
denote *some* *thing*. It's just actionable in the protocol, and clients do
whatever they want with what they get. It's the way the (non-semantic) Web
works, and it's OK.


 And what if the publisher simply does not say anything about what the URI
 denotes?


Then nobody knows, and actually nobody cares what the URI denotes; or say
that all users implicitly agree it is the same thing, but it does not break
any system to ignore what it is. Or, again, show me counter-examples.

After all, something like 99.999% of the URIs on the planet lack this
 information.


Which means that for the Web to work so far, knowing what a URI denotes has
been useless. But it's useful for the Semantic Web. So let's say that a URI is
useful for, or is part of, the Semantic Web if some description(s) of it
can be found. And we're done.


 What, if anything, can be concluded about what they denote?


Nothing, and let's face it.


 The http-range-14 rule provides an answer to this which seems reasonably
 intuitive.


I wonder if this can be the same Pat Hayes as the one who wrote,
six years ago, "In Defence of Ambiguity" :)
http://www.ibiblio.org/hhalpin/irw2006/presentations/HayesSlides.pdf
Quote (from the conclusion):
WebArch http-range-14 seems to presume that if a URI accesses something
directly (not via an http redirect), then the URI must refer to what it
accesses.
This decision is so bad that it is hard to list all the mistakes in it, but
here are a few:
- It presumes, wrongly, that the distinction between access and reference
is based on the distinction between accessible and inaccessible referents.
... [see above link for full list]

Pat, has your position changed on this?


 What would be your answer? Or do you think there should not be any
 'default' rule in such cases?


I would say so, because such a rule is basically useless. As useless as
wondering what a phone number denotes. A phone number allows you to access a
point in a network, given the phone infrastructure and protocols; it does
not denote anything except in specific contexts where it's used explicitly
as an identifier, e.g. to uniquely identify people, organizations or
services. Otherwise it works just like a phone number should.

Best regards

Bernard



Re: Proposal to amend the httpRange-14 resolution

2012-04-04 Thread Bernard Vatant
Hello David

Now that this conversation has turned a bit less noisy :)
What I have written recently is along the lines of the distinction you
propose between definition and description, and the process you are
envisioning [1].
Kingsley has an amazing and enthusiastic faith in the power of the Web's
architecture, but this is not only technical, it is about a social process.
Agreed, there is no way to disambiguate a URI once and for all, but as in
natural languages there is a never-ending quest towards accuracy and
disambiguation.
And indeed this has to start with the URI owner providing the first
description of the resource, acting as a definition. Further descriptions,
provided either by the URI owner or by other sources, can be compared to the
definition to figure out whether they bring extra information or other
perspectives on the resource, or whether they are inconsistent with what the
URI owner asserts in the definition.

The fact that, as Pat Hayes and others have correctly pointed out, over 99.9%
of URIs on the Web do not provide such definitions does not prevent us from
pushing the provision of such definitions as a best practice.

If you are a URI owner, and if you want your URIs to play nicely and be a
reliable reference in the Semantic Web, don't take the risk of seeing third
parties provide various and probably inconsistent descriptions of what your
URIs mean, based or not on debatable and varying interpretations of the
semantics of HTTP GET answers.

Best

Bernard

[1] http://blog.hubjects.com/2012/03/beyond-httprange-14-addiction.html


On 4 April 2012 at 03:00, David Booth da...@dbooth.org wrote:

 Hi Kingsley,

 On Tue, 2012-04-03 at 15:01 -0400, Kingsley Idehen wrote:
  On 4/3/12 1:46 PM, David Booth wrote:
 [ . . . ]
   This use of URI definitions helps to anchor the meaning of the URI,
 so
   that it does not drift uncontrollably.
 [ . . . ]
 
  But once on the Web the user really [loses] control. There is no such
  thing as real stability per se. Only when you have system faults can one
  at least pivot accordingly. Thus, you only get the aforementioned
  behavior in the context of a specific system and its associated rules.

 I think you're right that we can never get total semantic stability in
 an absolute sense.  But if we establish a commonly followed convention
 in which the URI owner's URI definition is used when making statements
 involving a URI, then the semantic drift will at least be substantially
 limited.  Again, this does not require *everyone* to follow the
 convention.  But the more that do follow it, the more effective it
 becomes in making the web a sort of self-describing dictionary.


 --
 David Booth, Ph.D.
 http://dbooth.org/

 Opinions expressed herein are those of the author and do not necessarily
 reflect those of his employer.





Re: ANN: Nature Publishing Group Linked Data Platform

2012-04-05 Thread Bernard Vatant
Hello Tony

Amazing work indeed. I have a little LOV echo to the big LOD call of
Kingsley :)

At http://ns.nature.com/docs/terms/ I get only the vocabulary OWLDoc, with
no conneg to some RDF file. Is this RDF file available somewhere?

Thanks

Bernard


On 5 April 2012 at 13:25, Kingsley Idehen kide...@openlinksw.com wrote:

 On 4/5/12 5:17 AM, Hammond, Tony wrote:

 ** Apologies for cross-posting **

 Hi:

 We just wanted to share this news from yesterday's NPG press release [1]:

 Nature Publishing Group (NPG) today is pleased to join the linked
 data
 community by opening up access to its publication data via a linked data
 platform. NPG's Linked Data Platform is available at
 http://data.nature.com.

 The platform includes more than 20 million Resource Description
 Framework (RDF) statements, including primary metadata for more than
 450,000
 articles published by NPG since 1869. In this first release, the datasets
 include basic citation information (title, author, publication date, etc)
 as
 well as NPG specific ontologies. These datasets are being released under
 an
 open metadata license, Creative Commons Zero (CC0), which permits maximal
 use/re-use of this data.

 NPG's platform allows for easy querying, exploration and extraction of
 data and relationships about articles, contributors, publications, and
 subjects. Users can run web-standard SPARQL Protocol and RDF Query
 Language
 (SPARQL) queries to obtain and manipulate data stored as RDF. The platform
 uses standard vocabularies such as Dublin Core, FOAF, PRISM, BIBO and OWL,
 and the data is integrated with existing public datasets including
 CrossRef
 and PubMed.

  More information about NPG's Linked Data Platform is available at
  http://developers.nature.com/docs.
 Sample queries can be found at
  http://data.nature.com/query.

 Cheers,

 Tony

  [1] http://www.nature.com/press_releases/linkeddata.html


 Great stuff!

 BTW -- do you also expose an RDF dump (directly or via a VoiD graph) ?
 Naturally, I would also like to add this dataset to the LOD cloud cache we
 maintain.

  --

  Regards,

  Kingsley Idehen
  Founder & CEO
  OpenLink Software
  Company Web: http://www.openlinksw.com
  Personal Weblog: http://www.openlinksw.com/blog/~kidehen
  Twitter/Identi.ca handle: @kidehen
  Google+ Profile: https://plus.google.com/112399767740508618350/about
  LinkedIn Profile: http://www.linkedin.com/in/kidehen





Re: Question on moving linked data sets

2012-04-19 Thread Bernard Vatant
Hello Antoine

My take on this would be to use dcterms:isReplacedBy links rather than
owl:sameAs. The description of the concepts by the BnF might change in the
future, and although the original identifier is the same, the description
might be out of sync at some point.

Bernard
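A minimal sketch of such a link, using the prototype and production URIs from Antoine's message (this is my illustration of the suggestion, not an agreed recipe):

```xml
<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dcterms="http://purl.org/dc/terms/">
  <!-- The obsolete prototype URI points to its production replacement -->
  <rdf:Description rdf:about="http://stitch.cs.vu.nl/vocabularies/rameau/ark:/12148/cb14521343b">
    <dcterms:isReplacedBy rdf:resource="http://data.bnf.fr/ark:/12148/cb14521343b"/>
  </rdf:Description>
</rdf:RDF>
```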

On 19 April 2012 at 16:23, Antoine Isaac ais...@few.vu.nl wrote:

 Dear all,

 We have a question on an what to do when a linked data set is moved from
 one namespace to the other. We searched for recipes to apply, but did not
 really find anything 'official'  around...
  The VU university of Amsterdam has published a Linked Data SKOS
 representation of RAMEAU [1] as a prototype, several years ago. For example
 we have
  http://stitch.cs.vu.nl/vocabularies/rameau/ark:/12148/cb14521343b

 Recently, BnF implemented its own production service for RAMEAU. The
 previous concept is at:
  http://data.bnf.fr/ark:/12148/cb14521343b
  (see RDF at http://data.bnf.fr/14521343/web_semantique/rdf.xml)

 The production services makes the prototype obsolete. Our issue is how to
 properly transition from one to the other. Several services are using the
 URIs of the prototype. For example at the Library of Congress:
  http://id.loc.gov/authorities/subjects/sh2002000569

 We can ask for the people we know to change their links. But identifying
 the users of URIs seems too manual, error-prone a process. And of course in
 general we do not want links to be broken.

 Currently we have done the following:

  - a 301 moved permanently redirection from the stitch.cs.vu.nl/rameau
  prototype to data.bnf.fr.

 - an owl:sameAs statement between the prototype URIs and the production
 ones, so that a client searching for data on the old URI gets data that
 enables it to make the connection with the original resource (URI) it was
 seeking data about.

 Does that seem ok? What should we do, otherwise?

 Thanks for any feedback you could have,

 Antoine Isaac (VU Amsterdam side)
 Romain Wenz (BnF side)

 [1] RAMEAU is a vocabulary (thesaurus) used by the National Library of
 France (BnF) for describing books.






Re: Question on moving linked data sets

2012-04-20 Thread Bernard Vatant
Antoine

In fact it seems that the dcterms:replaces option considers two resources
 (one that replaces the other).


Indeed. The bnf resource replaces, in the full sense of the term, the stitch
one.


 Which in turns hints that you're considering that the URIs denote the URI
 themselves (or a 'concept-with-a-URI'), and not the resource (concept).


There I don't follow you. Let's say I have both URIs stitch:x and bnf:y.
They both identify resources, which happen to be instances of skos:Concept.
When I read: stitch:x dcterms:isReplacedBy bnf:y
I understand: Whenever you used the resource stitch:x (in your linked
data, vocabularies, index ...), please now use bnf:y.
That does not just mean dumbly changing the URI in your application; it
also means that if you want to figure out the current definition of the
concept, trust what you'll find when you dereference bnf:y. It might be
strictly the same description as was once found at stitch:x, or it might
have changed (for example the concept has been moved in the RAMEAU
hierarchy, or its label or definition slightly modified, etc.). And since
from the stitch side you don't know about it, you can't assert any owl:sameAs
for sure. The BNF description can keep owl:sameAs links to assert that indeed,
for this very concept, the semantics has not changed since stitch, and drop
this sameAs if the concept ever changes.
Actually, quite a lot of RAMEAU concepts have changed since the stitch
publication. Maybe (certainly) some concepts present in stitch are not
present any more in RAMEAU. In that case bnf:y would be 404, and that's
OK: it means stitch:x has been replaced by nothing in the bnf namespace.
Of course, this does not look like a good practice, but I'm afraid it just
shows the plain fact that RAMEAU does not (yet) have a clean deprecation
mechanism, unless I'm missing recent developments (Romain can correct me if I
am wrong). I'm sure it will have one some day soon :)

Bernard



Re: Introducing the Knowledge Graph: things, not strings

2012-05-16 Thread Bernard Vatant
... To put it in a longer way.

Yes, this is great news, although it's not completely news: we had quite a
few hints of it from Google in the past months.

But what is just unfair is Google presenting this as if they had invented
it. Apart from a quick allusion to DBpedia and Freebase, there is no mention
of the collective and converging efforts of so many libraries, museums,
governments, research centers, standards bodies, associations and
institutions, thousands of wikipedians, topic mappers, classifiers,
documentalists ... (apologies to those I forget, too many of them) ... who
have dedicated countless days and nights to building structured data and
putting it on the Web. For those who do not know this background story, Google
will show off as the Only One able to organize and make sense of the messy
Web. I wish they would at least acknowledge that they are leveraging
all this work.
They neither built the core data nor invented the underlying
concepts; they just bring more power and visibility.
Bernard

2012/5/17 David Wood da...@3roundstones.com

 On May 16, 2012, at 17:45, Bernard Vatant wrote:

 Thanks to all who ploughed and sowed this ground patiently since those
 dark ages when Google was nothing but an idea.
 Now the grain is ripe and it's a great time for them to harvest ... hope
 we are left with some crumbs to pick up as a reward for our efforts :)


 Hmm, yes.  Will SemWeb researchers feel about Google's Knowledge Graph the
 way hypertext researchers feel about the Web? I hope not.

 Still, Kingsley is right, too.  We are certainly busier than we have ever
 been, with no clear end in sight.  That's positive.

 Regards,
 Dave


 Bernard

 2012/5/16 Kingsley Idehen kide...@openlinksw.com

 On 5/16/12 4:02 PM, Melvin Carvalho wrote:

 Big thumbs up (at least in principle) from google on linked data

  http://googleblog.blogspot.de/2012/05/introducing-knowledge-graph-things-not.html


 +1000...

 It's getting real interesting. Google and Facebook as massive Linked Data
 Spaces, awesome!

















Re: Introducing the Knowledge Graph: things, not strings

2012-05-16 Thread Bernard Vatant
Adrian

Don't dream of accessing the Google Knowledge Graph and querying it through a
SPARQL endpoint as you do for DBpedia. Like every piece of critical Google
technological infrastructure, I'm afraid it will be well hidden under the
hood, and accessible only through the search interface. If they ever expose
the Graph objects through an API as they do for Gmaps, now THAT would be
really great news.

Kingsley says they have Freebase. Yes, but Freebase stores only 22 million
entities according to their own stats, which makes up less than 5% of the
overall figure, since Google claims 500 million nodes in the Knowledge
Graph, and growing. So I guess they also have DBpedia and VIAF and
Geonames and you name it ... whatever open, structured data they can put
their hands on. Linked data stuff, whatever the format.

Bernard


2012/5/17 Adrian Walker adriandwal...@gmail.com

 Hi All,

 Nice videos etc, but has anyone found a link to actually *use* Knowledge
 Graph ?

 If it's not online yet, one wonders why Google chose to pre-announce it.

 Thanks, -- Adrian

 Internet Business Logic
 A Wiki and SOA Endpoint for Executable Open Vocabulary English Q/A over
 SQL and RDF
 Online at www.reengineeringllc.com
 Shared use is free, and there are no advertisements

 Adrian Walker
 Reengineering


 On Wed, May 16, 2012 at 4:05 PM, Kingsley Idehen 
 kide...@openlinksw.comwrote:

 On 5/16/12 4:02 PM, Melvin Carvalho wrote:

 Big thumbs up (at least in principle) from google on linked data

 http://googleblog.blogspot.de/2012/05/introducing-knowledge-graph-things-not.html


 +1000...

 It's getting real interesting. Google and Facebook as massive Linked Data
 Spaces, awesome!













Re: Is there a general preferred property?

2012-07-18 Thread Bernard Vatant
Nathan

Interesting discussion indeed, at least allowing me to discover
con:preferredURI I missed so far ... although I was looking for something
like that, and it was just under my nose in LOV :)
http://lov.okfn.org/dataset/lov/search/#s=preferred

If I parse correctly the definition of con:preferredURI ("A string which is
the URI a person, organization, etc, prefers that people use for them."), it
applies only to an agent able to express a preference about how
he/she/it should be identified. The domain is open, but if I were to close
it I would declare it to be foaf:Agent.
This is quite different from skos:prefLabel, which expresses the preference
of a community of vocabulary users about how some concept should be named
(a practice coming from the library/thesaurus community). The borderline
cases are authorities: when LoC uses skos:prefLabel in their authority files
for people or organizations, they don't ask those people or organizations
whether they agree (many of them not being in a position to answer anyway ...).

Seems we lack some x:prefURI expressing the same type of preference as
skos:prefLabel.
With of course con:preferredURI rdfs:subPropertyOf x:prefURI

And a general property x:hasURI

x:hasURI   x:preferred   x:prefURI

Meaning that :

ex:foo   x:hasURI   'bar'

entails

<bar>   owl:sameAs   ex:foo

Not sure of notations here; what I mean by <bar> is the resource whose
URI is the string 'bar'

And while we are at it x:altURI would be nice to have also :)
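For concreteness, the proposed hierarchy could be sketched in Turtle as
follows (the x: namespace is, as above, just a placeholder, and all the
x: property names are tentative, not published terms):

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix con:  <http://www.w3.org/2000/10/swap/pim/contact#> .
@prefix x:   <http://example.org/x#> .   # placeholder namespace

# the general property, and its two flavours of preference
x:prefURI rdfs:subPropertyOf x:hasURI .   # community preference, like skos:prefLabel
x:altURI  rdfs:subPropertyOf x:hasURI .   # alternative URIs, like skos:altLabel
con:preferredURI rdfs:subPropertyOf x:prefURI .   # the agent's own preference

# meta-level statement linking the general property to its preferred variant
x:hasURI x:preferred x:prefURI .
```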

Bernard

2012/7/17 Nathan nat...@webr3.org

 Good point and question! I had assumed preferred by the owner of the
 object, just as you have a con:preferredURI for yourself.

 The approach again comes from you, same approach as
 link:listDocumentProperty (which now appears to have been dropped from the link:
 ontology?)

 Cheers,

 Nathan


 Tim Berners-Lee wrote:

 Interesting to go meta on this with x:preferred .

 What would be the meaning of preferred -- preferred by the object
 itself or
 the owner of the object itself?

 In other words, I wouldn't use it to store in a local store my preferred
 names
 for people, that would be an abuse of the property.

 Tim

 On 2012-07 -15, at 19:42, Nathan wrote:

  Essentially what I'm looking for is something like

  foaf:nick x:preferred foaf:preferredNick .
  rdfs:label x:preferred foaf:preferredLabel .
  owl:sameAs x:preferred x:canonical .

 It's nice to have con:preferredURI and skos:prefLabel, but what I'm
 really looking for is a way to let machines know that x value is preferred.

 Anybody know if such a property exists yet?

 Cheers,

 Nathan









-- 
*Bernard Vatant
*
Vocabularies  Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
Blog : the wheel and the hub http://blog.hubjects.com/


*Mondeca**  **   *
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews http://twitter.com/#%21/mondecanews


Re: referencing a concept scheme as the code list of some referrer's property

2012-08-23 Thread Bernard Vatant
 to a concept scheme along with  a rdfs:range
  statement?
 
  And I add
  (3) why do we not have mapping properties to link concept schemes from
  different providers?
  This cannot be inferred from a given concept mapping, as mapping of some
  concepts does not imply mappings of their entire schemes.
 
  Best regards,
  Thomas
 


 --
 Thomas Bandholtz
 Principal Consultant

 innoQ Deutschland GmbH
 Krischerstr. 100,
 D-40789 Monheim am Rhein, Germany
 http://www.innoq.com
 thomas.bandho...@innoq.com
 +49 178 4049387

 http://innoq.com/de/themen/linked-data (German)
 https://github.com/innoq/iqvoc/wiki/Linked-Data (English)





-- 
*Bernard Vatant
*
Vocabularies  Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
Blog : the wheel and the hub http://blog.hubjects.com/


*Mondeca**  **   *
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews http://twitter.com/#%21/mondecanews


Re: referencing a concept scheme as the code list of some referrer's property

2012-08-29 Thread Bernard Vatant
 as a class, of
 which the concepts of a given concept scheme are instances?
 That would be the way to proceed, if you want to use the concept
 scheme directly as the range of a property.
 This has never been suggested for inclusion in SKOS. In fact it is
 not forbidden, either. You can assert rdf:type statements between
 concepts and a concept scheme, if you want.
 You can also define an adhoc sub-class of skos:Concept (say,
 ex:ConceptOfSchemeX), which includes all concepts that related to a
 specific concept scheme (ex:SchemeX) by skos:inScheme statements. This
 is quite easy using OWL. And then you can use this new class as the
 rdfs:range.
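 For instance, the second option could be sketched in OWL/Turtle along
 these lines (all ex: names being placeholders):

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <http://example.org/> .

# every concept asserted to be in ex:SchemeX is an instance of this class
ex:ConceptOfSchemeX a owl:Class ;
    owl:equivalentClass [
        a owl:Restriction ;
        owl:onProperty skos:inScheme ;
        owl:hasValue ex:SchemeX
    ] .

# the class can then serve as the range of the referring property
ex:codedProperty rdfs:range ex:ConceptOfSchemeX .
```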

 The possibility of these two options makes it less obvious why there
 should be a specific feature in SKOS to represent what you want.
 But more fundamentally, it was perhaps never discussed, because it's
 neither a 100% SKOS problem, nor a simple one.
 It's a bit like the link between a document and a subject concept:
 there could have been a skos:subject property, but it was argued that
 Dublin Core's dc:subject was good enough.
 But it's maybe even worse than that :-) There are indeed discussions
 in the Dublin Core Architecture community about representing the link
 between a property and a concept scheme directly, similar to what you
 want. This is what is called vocabulary/value encoding schemes there
 [1].
 But the existence of this feature at a quite deep, data-model level,
 rather confirms for me that it is something that clearly couldn't be
 tackled at the time SKOS was made a standard. One can view this
 problem as one of modeling RDFS/OWL properties, rather than
 representing concepts, no?


 (3)
 I'm not sure I get the question. If they exist, such mapping
 properties could be very difficult to semantically define. Would a
 concept scheme be broader, equivalent, narrower than another one?
 Rather, I'd say that the property you're after indicates that some
 concepts from these two concept schemes are connected. For this I
 think one could use general linkage properties between datasets, such
 as voiD's linksets [2].

 I hope that helps,

 Antoine

 [1] http://dublincore.org/documents/profile-guidelines/ , search
 "Statement template: subject"
 [2] http://vocab.deri.ie/void












Re: Breaking news: GoodRelations now fully integrated with schema.org!

2012-11-12 Thread Bernard Vatant
Dan, Martin, all

This breaking news made me un-earth the couple of questions I already
discussed with you regarding the (more or less declared) soft semantics
of schema.org, and how both http://schema.org/docs/schemaorg.owl and
http://schema.rdfs.org interpret those semantics a bit more strictly than
they should, in particular regarding domains and ranges of properties.

I take now for granted from your message that :
- The reference file for schema.org declared semantics is
http://schema.org/docs/schema_org_rdfa.html (rather than the outdated, or
at least not clearly dated, OWL file at http://schema.org/docs/schemaorg.owl)
- It declares explicitly schema.org types (classes) as instances of
rdfs:Class, and attached properties as instances of rdf:Property.
- It uses rdfs:subClassOf for the type hierarchy. There is no use of
rdfs:subPropertyOf
- It uses specific properties  http://schema.org/domain and
http://schema.org/range
to attach properties to classes.

The latter is the most interesting and innovative feature. It would be
good to document in the file the implied semantics of those properties,
which are weaker than those of rdfs:domain and rdfs:range, as
implicitly (explicitly?) stated in http://schema.org/docs/datamodel.html.
And maybe it would be wise to rename them, since confusion is
likely to occur (the more so since http://schema.rdfs.org has interpreted
them abusively as rdfs:domain and rdfs:range). Why not call them the same
as in the HTML pages, expectedOnType and expectedValueType, since that is
really what they mean.
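To make the difference concrete, here is a small Turtle sketch (the ex:
names and ex:expectedOnType are purely illustrative, not published terms):

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/> .

# rdfs:domain licenses an inference, it is not a constraint:
ex:author rdfs:domain ex:Book .
ex:thing  ex:author   ex:someone .
# a reasoner now infers "ex:thing a ex:Book", whether intended or not.

# the weaker "expected type" reading merely documents where the property
# is expected to appear, without licensing any such inference:
ex:author ex:expectedOnType ex:Book .
```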

Side question to Martin. Is there any issue in formally mapping the OWL
classes and properties of GoodRelations to their schema.org equivalents,
which do not even rely on RDFS semantics? I'm pretty sure you have thought
about it and I would be happy to have your take on this.

Another point: since you now declare that the RDF expression of
schema.org is the root of it, why not publish a proper RDF schema that
could be retrieved from the http://schema.org/ namespace through content
negotiation, as any other vocabulary conformant to SW publishing best
practices? BTW, for example, we would be happy to have such a thing in
order to integrate schema.org seamlessly in LOV. So far we use the
http://schema.rdfs.org source, but this is really suboptimal; we would
like to get rid of it and insert the real stuff.

I submitted the page to the W3C RDFa validator at
http://www.w3.org/2012/pyRdfa/Validator.html ; it's happy with the file and
produces a very clean N3 file, the kind it would be cool to have behind the
above-mentioned content negotiation.

Best

Bernard




2012/11/9 Dan Brickley dan...@danbri.org


 This latest build of schema.org uses a different approach to previous
 updates. Earlier versions (apart from health/medicine) were relatively
 small, and could be hand coded. With Good Relations, the approach we
 took was to use an import system that reads schema definitions
 expressed in HTML+RDFa/RDFS and generates the site as an aggregation
 of these 'layers'. In other words, schema.org is built by a system
 that reads a collection of schema definitions expressed using W3C
 standards. The public site is also now more standards-friendly, aiming
 for 'Polyglot' HTML that works as HTML5 and XHTML, and you can find an
 RDFa view of the overall schema at
 http://schema.org/docs/schema_org_rdfa.html


 I'm really happy to see Good Relations go live, and look forward to
 catching up on the other contributions that are in the queue. The
 approach will be to express each of these in HTML/RDFa/RDFS and make
 some test sites on Appspot that show each proposal 'in place', and in
 combination with other proposals. Since schemas tend to overlap in
 coverage, this is really important for improving the quality and
 integration of schema.org as we grow. While it took us a little while
 to get this mechanism in place, I'm glad we now have this
 standards-based machinery in place that will help us scale up the
 collaboration around schema.org.

 Thanks again to all involved,

 Dan






Re: Linked Data Dogfood circa. 2013

2013-01-04 Thread Bernard Vatant
2013/1/4 Melvin Carvalho melvincarva...@gmail.com

2013 is the year to get serious about linked data!

+100!

Let 2013 be indeed the year of serious gardening of the Data (and
Vocabularies) Commons

Reminder :
http://blog.hubjects.com/2012/03/lov-stories-part-2-gardeners-and.html



Re: [Virtuoso-users] WebSchemas, Schema.org and W3C

2013-01-23 Thread Bernard Vatant
Hi Alexey

(limiting the cc list to avoid noise)

2013/1/23 Alexey Zakhlestin indey...@gmail.com

 Is it attempt to reimplement http://prefix.cc/ ?

Not at all. http://prefix.cc/ is a precious resource, and that has nothing
to do with prefix wars. Prefixes in the spreadsheet are simply informative,
they are the ones used in the LOV database and web site. Most of the time
they are the ones chosen by the vocabulary editors, but the LOV
infrastructure needs a 1-1 correspondence, so sometimes we have to differ
from the vocabulary publishers.
More at http://lov.okfn.org/dataset/lov/about/#lovdataset Note on
prefixes.

The point of this spreadsheet is to clarify current responsibilities re. the
listed vocabularies.
More context on the public-vocabs list at
http://lists.w3.org/Archives/Public/public-vocabs/2013Jan/0125.html
BTW I suggest people interested to follow-up on public-vocabs forum rather
than on either public-lod or semantic-web.

Best

Bernard


 On 23 Jan 2013 06:35, Kingsley Idehen kide...@openlinksw.com wrote:

  On 1/22/13 11:45 AM, Bernard Vatant wrote:

 ACTION

 Make a list of globally adopted schemas (vocabularies)  and put a *
 responsible* agent name/email/URI whatever Web identifier in front of it
 https://docs.google.com/spreadsheet/ccc?key=0AiYc9tLJbL4SdHByWkRYUkYxZU5qS1lQOE5FV0hiNlE#gid=0
 Free to edit by anyone. If you are* currently responsible* for a
 vocabulary, put your name and contact email address.
 Let's take a month to see what we can gather. A month from now I will
 mail all declared responsible to have confirmation, lock the document, and
 add this information to LOV vocabularies description.


 Best

 Bernard


 FYI






 --
 Master Visual Studio, SharePoint, SQL, ASP.NET, C# 2012, HTML5, CSS,
 MVC, Windows 8 Apps, JavaScript and much more. Keep your skills current
 with LearnDevNow - 3,200 step-by-step video tutorials by Microsoft
 MVPs and experts. ON SALE this month only -- learn more at:
 http://p.sf.net/sfu/learnnow-d2d
 ___
 Virtuoso-users mailing list
 virtuoso-us...@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/virtuoso-users






Searching for LOV responsibles : the Drake equation

2013-01-31 Thread Bernard Vatant
Hello all

Apologies for cross-posting, but Kingsley started it :)
And I've reduced the cc list to a minimum ...

I had a couple of pieces of feedback these days from people who understood
that the open spreadsheet referenced below [1] was aimed either at claiming
prefixes for vocabularies, or at submitting vocabularies to LOV, whatever.
Please refer to the original discussion on public-vocabs [2] to understand
what it is about, but I want to clarify it again here: this initiative is
an experiment to identify, for each of the 300+ vocabularies currently
gathered in LOV, a person responsible as of today for the availability,
content, and past and foreseeable future management of the vocabulary.
This information is different from what can be found in the vocabulary
metadata or documentation, if any, in particular for vocabularies published
years ago.

Assuming that :
- Such responsible people do exist for a reasonable proportion (p1) of the
vocabularies.
- Among those, a reasonable proportion (p2) do lurk on either public-lod,
public-vocabs or semantic-web list, or wherever this message will be pushed
via social networks.
- Among those, a reasonable proportion (p3) is ready to raise a hand saying
: yes, that's me!
- Among those, a reasonable proportion (p4) considers the proposed method
as a sensible way to do so.
- Among those, a reasonable proportion (p5) will actually do so.

The above assumptions lead to a number of answers
N = p1.p2.p3.p4.p5.V
where V is the number of vocabularies in the LOV cloud

Similar to the Drake equation [3]. Unlike aliens, though, some vocabulary
responsibles have already given signs of life. In the first week,
18 people have been listed, representing 39 vocabularies ... out of more
than 300.
This shows at least that all the above factors are strictly positive, which
is a good start ... if only a start.
In short, more aliens are welcome to show up at [1]

I've given an arbitrary deadline of one month for this experiment. Basically,
at the end of February we'll make the counts and try to evaluate the value
of each of the above factors. I'm afraid the critical one is p1, actually,
but I would be happy to be proven wrong.

Last word : if you want to show up for a vocabulary not yet listed at [1]
feel free to do so by adding an entry in the spreadsheet, but at the same
time please submit it to LOV using the suggest form at [4], and don't
forget to read [5] beforehand in order to make sure your vocabulary is LOV-able.

Thanks for your attention!

Bernard

[1] http://bit.ly/WB0ad5
[2] http://lists.w3.org/Archives/Public/public-vocabs/2013Jan/0125.html
[3] http://en.wikipedia.org/wiki/Drake_equation
[4] http://lov.okfn.org/dataset/lov/suggest/
[5] http://lov.okfn.org/dataset/lov/Recommendations_Vocabulary_Design.pdf

2013/1/22 Kingsley Idehen kide...@openlinksw.com

  On 1/22/13 11:45 AM, Bernard Vatant wrote:

 ACTION

 Make a list of globally adopted schemas (vocabularies)  and put a *
 responsible* agent name/email/URI whatever Web identifier in front of it
 https://docs.google.com/spreadsheet/ccc?key=0AiYc9tLJbL4SdHByWkRYUkYxZU5qS1lQOE5FV0hiNlE#gid=0
 Free to edit by anyone. If you are* currently responsible* for a
 vocabulary, put your name and contact email address.
 Let's take a month to see what we can gather. A month from now I will mail
 all declared responsible to have confirmation, lock the document, and add
 this information to LOV vocabularies description.


 Best

 Bernard


 FYI






 ___
 Dbpedia-discussion mailing list
 dbpedia-discuss...@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/dbpedia-discussion




--

Meet us at the SIIA Information Industry Summit (http://www.siia.net/iis/2013)
in NY, January 30-31

Content negotiation for Turtle files

2013-02-05 Thread Bernard Vatant
Hello all

Back in 2006, I thought I had understood, with the help of folks around here,
how to configure my server for content negotiation at lingvoj.org.
Both vocabulary and instances were published in RDF/XML.

I updated the ontology last week, and since, after years of happy living
with RDF/XML, people eventually convinced me that it was a bad, prehistoric
and ugly syntax, I decided to be trendy and published the new version in
Turtle at http://www.lingvoj.org/ontology_v2.0.ttl

The vocabulary URI is still the same : http://www.lingvoj.org/ontology, and
the namespace  http://www.lingvoj.org/ontology# (cool URI don't change)

Then I turned to Vapour to test this new publication, and found out that to
be happy with the vocabulary URI it has to find some answer when requesting
application/rdf+xml. But since I no longer have an RDF/XML file for this
version, what should I do?
I turned to the best practices document at http://www.w3.org/TR/swbp-vocab-pub,
but it does not provide examples with Turtle, only RDF/XML.

So I blindly put the following in the .htaccess : AddType
application/rdf+xml .ttl
I found it a completely stupid and dirty trick ... but amazingly it makes
Vapour happy.

But now Firefox chokes on http://www.lingvoj.org/ontology_v2.0.ttl because
it seems to expect an XML file. Chrome does not have this issue.
The LOV-Bot says there is a content negotiation issue and can't get the
file. So does Parrot.

I feel dumb, but I'm certainly not the only one, I've stumbled upon a
certain number of vocabularies published in Turtle for which the conneg
does not seem to be perfectly clear either.

What do I miss, folks? Should I forget about it, and switch back to good
ol' RDF/XML?

Bernard

--

Meet us at Documation http://www.documation.fr/ in Paris, March 20-21


Re: Content negotiation for Turtle files

2013-02-06 Thread Bernard Vatant
Thanks all for your precious help!

... which takes me back to my first options, the ones I had set before
looking at Vapour results which misled me - more below.

AddType  text/turtle;charset=utf-8   .ttl
AddType  application/rdf+xml         .rdf

Plus Rewrite for html etc.
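For the record, the rewrite part could look something like this in
.htaccess (a sketch only; the HTML file name and the exact Accept matching
are assumptions, not my tested configuration):

```apache
RewriteEngine On

# clients asking for Turtle, or with no HTML preference, get the .ttl file
RewriteCond %{HTTP_ACCEPT} text/turtle [OR]
RewriteCond %{HTTP_ACCEPT} !text/html
RewriteRule ^ontology$ /ontology_v2.0.ttl [R=303,L]

# everyone else gets the human-readable documentation (hypothetical file)
RewriteRule ^ontology$ /ontology.html [R=303,L]
```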

I now get this on cURL

curl -IL http://www.lingvoj.org/ontology
HTTP/1.1 303 See Other
Date: Wed, 06 Feb 2013 09:28:45 GMT
Server: Apache
Location: http://www.lingvoj.org/ontology_v2.0.ttl
Content-Type: text/html; charset=iso-8859-1

HTTP/1.1 200 OK
Date: Wed, 06 Feb 2013 09:28:45 GMT
Server: Apache
Last-Modified: Wed, 06 Feb 2013 09:19:34 GMT
ETag: 60172428-5258-4d50ad316b5b2
Accept-Ranges: bytes
Content-Length: 21080
Content-Type: text/turtle; charset=utf-8

... to which Kingsley should not frown anymore (hopefully)

But what I still don't understand is the answer of Vapour when requesting
RDF/XML :

   - 1st request while dereferencing resource URI without specifying the
   desired content type (HTTP response code should be 303 (redirect)):
   Passed
   - 2nd request while dereferencing resource URI without specifying the
   desired content type (Content type should be 'application/rdf+xml'):
   Failed
   - 2nd request while dereferencing resource URI without specifying the
   desired content type (HTTP response code should be 200): Passed

Of course this request is bound to fail somewhere since there is no RDF/XML
file, but the second bullet point is confusing : why should the content
type be 'application/rdf+xml' when the desired content type is not
specified?

And should not a Linked Data validator handle the case where there is no
RDF/XML file, but only Turtle or n3?

The not-so-savvy linked data publisher (me), as long as he sees something
flashing RED in the results, thinks he has not done things right, and is
led to try blind tricks just to get everything green (such as
contradictory MIME type declarations).

At least if the validator does not handle this case it should say so. The
current answer does not help adoption of Turtle, to say the least!

Hoping someone behind Vapour is lurking here and will answer :)

Thanks again for your time

Bernard


Re: Content negotiation for Turtle files

2013-02-06 Thread Bernard Vatant
Hi Chris

2013/2/6 Chris Beer ch...@codex.net.au

 Bernard, Ivan

 (At last! Something I can speak semi-authoritatively on ;P )

 @ Bernard - no - there is no reason to go back if you do not want to, and
 every reason to serve both formats plus more.


More ??? Well, I was heading the other way round actually, for the sake of
simplicity. As said before, I've used RDF/XML for years despite all the
criticisms, and was happy with it (the devil you know, etc.). What I
understand of the current trend is that, to ease RDF and linked data
adoption, we should now promote this simple, both human-readable and
machine-friendly publication syntax (Turtle). And having tried it for a
while, I now begin to be convinced enough to adopt it in publication -
thanks to continuing promotion by Kingsley among others :)

And now you tell me I should still bother to provide n other formats,
RDF/XML and more. I thought I was about to simplify my life; you tell me I
have to do the simple things *plus* the more complex ones as before.
Hmm.


 Your comment about UA's complaining about a content negotiation issue is
 key to what you're trying to do here. I'd like to provide some clear
 guidance or suggestions back, but first, if possible, can you please post
 the http request headers for the four (and any others you have) user
 agents you've used to attempt to request your rdf+xml files and which have
 either choked or accepted the .ttl file.


I can try to find out how to do that, though I remind you that I can discuss
languages, ontologies, syntax and semantics of data at will; but when it
comes to protocols and Webby things it's not really my story, so I don't
promise anything.

AND : there's NO rdf+xml file in this case, only text/turtle. And that's
exactly the point : can/should one do that, or not? Do I have to pass on the
message to adopters : publish RDF in Turtle, it's a very cool and simple
syntax (oh, but BTW, don't forget to add HTML documentation, and also
RDF/XML, and JSON, and multilingual variants, and proper content
negotiation ...)? Well, OK, let's be clear about it if we have to do
that ... but it looks like a non-starter for adoption of Turtle.


 Extra points if you can also post
 the server's response headers.


Same remark as above.

Thanks for your time

Bernard



Re: Content negotiation for Turtle files

2013-02-06 Thread Bernard Vatant
Thanks Kingsley!

Was about to answer but you beat me at it :)

But Richard, could you elaborate on this view that hand-written and
machine-processible data would not fit together?

I don't feel like people are still writing far too many Linked Data
examples and resources by hand. On the contrary, it seems to me we have so
far seen too much linked data produced by (more or less dumb or smart)
programs, without their human producers (so to speak) checking
much for quality in the process, provided they can proudly announce
that they have produced so many billions of triples ... so many, actually,
that nobody will ever be able to assess their quality whatsoever :)

Of course migrating automagically heaps of legacy data and making them
available as linked data is great, but as Kingsley puts it, linked data are
not only about machines talking to machines, it's also about enabling
people to talk to machines as simply as possible, and the other way round.
That's where Turtle fits.

Bernard


2013/2/6 Kingsley Idehen kide...@openlinksw.com

  On 2/6/13 6:45 AM, Richard Light wrote:


 On 06/02/2013 10:59, Bernard Vatant wrote:

 More ??? Well, I was heading the other way round actually for sake of
 simplicity. As said before I've used RDF/XML for years despite all
 criticisms, and was happy with it (the devil you know etc). What I
 understand of the current trend is that to ease RDF and linked data
 adoption we should promote now this simple, both human-readable and
 machine-friendly publication syntax (Turtle). And having tried it for a
 while, I now begin to be convinced enough as to adopt it in publication -
 thanks to continuing promotion by Kingsley among others :)

 And now you tell me I should still bother to provide n other formats,
 RDF/XML and more. I thought I was about to simplify my life, you tell me I
 have to make the simple things, *plus* the more complex ones as before.
 Hmm.

 Well I for one would make a plea to keep RDF/XML in the portfolio. Turtle
 is only machine-processible if you happen to have a Turtle parser in your
 tool box.

 I'm quite happily processing Linked Data resources as XML, using only XSLT
 and a forwarder which adds Accept headers to an HTTP request. It thereby
 allows me to grab and work with LD content (including SPARQL query results)
 using the standard XSLT document() function.

 In a web development context, JSON would probably come second for me as a
 practical proposition, in that it ties in nicely with widely-supported
 javascript utilities.

 To me, Turtle is symptomatic of a world in which people are still writing
 far too many Linked Data examples and resources by hand, and want something
 that is easier to hand-write than RDF/XML.  I don't really see how that
 fits in with the promotion of the idea of machine-processible web-based
 data.

 Richard
 --
 *Richard Light*


 If people can't express data by hand we are on a futile mission. The era
 of overbearing applications placing artificial barriers between users and
 their data is over. The same applies to overbearing schemas and
 database management systems.

 This isn't about technology for programmers. It's about technology for
 everyone, just as everyone today is able to write on a piece of paper as a
 mechanism for expressing and sharing data, information, and knowledge.


 It is absolutely mandatory that folks be able to express triple based
 statements (propositions) by hand. This is the key to making Linked Data
 and the broader Semantic Web vision a natural reality.

 We have to remember that content negotiation (implicit or explicit) is a
 part of this whole deal.

 Vapour was built at a time when RDF/XML was the default format of choice.
 That's no longer the case, but it doesn't mean RDF/XML is dead either; it
 just means it's no longer the default. As I've said many times, RDF/XML is
 the worst and best thing that ever happened to the Semantic Web vision.
 Sadly, the worst aspect has dominated the terrain for years and created
 artificial inertia by way of concept obfuscation.

 If your consumer prefers data in RDF/XML format then it can do one of the
 following:

 1. Locally transform the Turtle to RDF/XML -- assuming this is all you can
 de-reference from a given URI
 2. Transform the Turtle to RDF/XML via a transformation service (these
 exist and they are RESTful) -- if your user agent can't perform the
 transformation.
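Option 1 can be done with any RDF library (e.g. rdflib or Raptor). As an illustration of what the transformation involves, here is a toy, stdlib-only Python converter for a deliberately tiny Turtle subset (one full-IRI triple per line, plain literals only) — a sketch of the idea, not a real parser:

```python
import re
import xml.sax.saxutils as sx

def turtle_to_rdfxml(turtle: str) -> str:
    """Toy converter for a tiny Turtle subset: one triple per line,
    full IRIs in <>, objects either an IRI or a plain "literal".
    Real code should use an RDF library instead."""
    triple = re.compile(r'^<([^>]+)>\s+<([^>]+)>\s+(<[^>]+>|"[^"]*")\s*\.$')
    # Group statements by subject so each becomes one rdf:Description.
    subjects = {}
    for line in turtle.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        m = triple.match(line)
        if not m:
            raise ValueError('unsupported Turtle line: ' + line)
        s, p, o = m.groups()
        subjects.setdefault(s, []).append((p, o))
    out = ['<?xml version="1.0"?>',
           '<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">']
    for s, pos in subjects.items():
        out.append('  <rdf:Description rdf:about=%s>' % sx.quoteattr(s))
        for p, o in pos:
            # Split the predicate IRI into namespace + local name.
            cut = max(p.rfind('#'), p.rfind('/')) + 1
            ns, local = p[:cut], p[cut:]
            if o.startswith('<'):
                out.append('    <%s xmlns=%s rdf:resource=%s/>'
                           % (local, sx.quoteattr(ns), sx.quoteattr(o[1:-1])))
            else:
                out.append('    <%s xmlns=%s>%s</%s>'
                           % (local, sx.quoteattr(ns), sx.escape(o[1:-1]), local))
        out.append('  </rdf:Description>')
    out.append('</rdf:RDF>')
    return '\n'.join(out)

ttl = '<http://example.org/a> <http://xmlns.com/foaf/0.1/name> "Alice" .'
print(turtle_to_rdfxml(ttl))
```

The restriction to full IRIs and plain literals is what keeps the sketch small; handling prefixes, blank nodes, and datatypes is exactly why a real library is the right tool.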

 The subtleties of Linked Data are best understood via Turtle.

 --

 Regards,

 Kingsley Idehen
 Founder & CEO
 OpenLink Software
 Company Web: http://www.openlinksw.com
 Personal Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca handle: @kidehen
 Google+ Profile: https://plus.google.com/112399767740508618350/about
 LinkedIn Profile: http://www.linkedin.com/in/kidehen


--
*Bernard Vatant*
Vocabularies & Data Engineering
Tel : +33 (0)9 71 48 84 59
Skype : bernard.vatant
Blog : the wheel and the hub http://blog.hubjects.com

Re: I've built www.vocabs.org - A community driven website that allows you to build RDF vocabularies

2013-02-15 Thread Bernard Vatant
Hi Luca

Welcome to the vocabularies galaxy. I'm cc'ing the public-vocabs list, which
might be more relevant for this topic.

Seems to me the best way to learn to write (a little) is to read (a lot).
Books, apps, tutorials and so on are fine, but above all, read vocabularies
to learn from examples by people who know just a bit more than you do. You
have more than 300 examples listed in the Linked Open Vocabularies database
[1], including the famous wine ontology developed along with the OWL
recommendation [2].
As for your idea of collaborative construction, I think it's worth a try;
you'll see how it flies. Vocabularies are a critical part of the linked
data ecosystem (their genetic code, sort of), raising complex technical and
social issues, and a global governance model still to be invented. The next
Dublin Core conference in September will focus on this issue [3].

Regarding the complexity of building and publishing a vocabulary, after
years of struggling with Protégé and other ontology editors, I've come to
the conclusion that if you're not building a complex ontology with thousands
of axioms, but a basic vocabulary with typically 10-20 classes and a similar
number of properties, without fancy logical constructs, you just need a
good text editor and a bit of Turtle, and for publication you can rely on
public stylesheets or cool services such as Parrot.
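As an illustration of the point, here is what such a hand-written Turtle vocabulary might look like (all ex: names are hypothetical), embedded in a small Python snippet with a trivial sanity check that needs no RDF tooling at all:

```python
import re

# A hypothetical minimal vocabulary, hand-written in Turtle: two classes
# and one property with labels, nothing fancy. All ex: names are made up.
VOCAB_TTL = """
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/vocab#> .

ex:Event a rdfs:Class ;
    rdfs:label "Event"@en .

ex:Place a rdfs:Class ;
    rdfs:label "Place"@en .

ex:place a rdf:Property ;
    rdfs:label "place"@en ;
    rdfs:domain ex:Event ;
    rdfs:range ex:Place .
"""

# Trivial sanity check, no RDF tooling required: list the declared terms.
classes = re.findall(r'^(\S+) a rdfs:Class', VOCAB_TTL, re.M)
properties = re.findall(r'^(\S+) a rdf:Property', VOCAB_TTL, re.M)
print(len(classes), 'classes:', classes)
print(len(properties), 'properties:', properties)
```

The whole vocabulary fits in a dozen readable lines, which is exactly the argument for Turtle as a publication syntax for simple vocabularies.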
The only slightly tricky part is the server configuration for content
negotiation, but that's not a very big deal either.
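For the content-negotiation part, a commonly used Apache mod_rewrite sketch looks like the following (the `vocab` path and file names are hypothetical; a real setup would also declare the Turtle media type, e.g. with `AddType text/turtle .ttl`):

```apache
# Serve one of three files for http://example.org/ns/vocab, depending on
# the client's Accept header. Hypothetical layout: vocab.ttl, vocab.rdf
# and vocab.html sit next to this .htaccess.
RewriteEngine On

# RDF-aware clients asking for Turtle get the .ttl file
RewriteCond %{HTTP_ACCEPT} text/turtle
RewriteRule ^vocab$ vocab.ttl [L]

# Clients asking for RDF/XML get a pregenerated .rdf file
RewriteCond %{HTTP_ACCEPT} application/rdf\+xml
RewriteRule ^vocab$ vocab.rdf [L]

# Browsers and everyone else get the human-readable documentation
RewriteRule ^vocab$ vocab.html [L]
```

This is the "just a bit tricky" piece: three rewrite rules and a pregenerated file per format cover the common cases.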

It figures that we (the linked data vocabularies community) should
definitely provide a good tutorial on publishing simple vocabularies using
Turtle. I should put that on my back burner, actually.

Best regards

[1] http://lov.okfn.org/dataset/lov
[2] http://lov.okfn.org/dataset/lov/details/vocabulary_vin.html
[3] http://dcevents.dublincore.org/IntConf/dc-2013

2013/2/14 Luca Matteis lmatt...@gmail.com

 Dear all,

 It's my first time here, but I've been attracted to the Linked Data
 initiative for quite a while now. A couple of weeks ago I needed to build
 my first RDF vocabulary. I cannot tell you how hard this process was for
 an RDF newbie like myself. I had to read a couple of books, and read a lot
 all over the web, before I could get a grasp of it all.

 Even after understanding the linked-data context, and how the technologies
 involved worked, I was still left with a set of tools that I thought were
 pretty limited. I had to download apps, that did or didn't work. And learn
 various different programming APIs to generate the RDF that I wanted. I can
 only imagine the difficulty a non-techie person would have when trying to
 build a vocabulary.

 Another issue that I confronted when looking for existing vocabularies
 was that most of the time they were created by a single entity (a group of
 people) that knows the lexicon of the subject. I think this is quite
 limited as well. A vocabulary should be open and agreed upon by a group of
 people. It should be community-driven. It should be crowd-sourced and
 validated, the same way correct answers are validated on Stack Overflow.

 So in a couple of days I built http://www.vocabs.org/ that does exactly
 this. It allows people, with very little technical experience, to start
 creating vocabularies (entirely through the web-interface). Not only that,
 but different users can then join and comment, and add new vocabulary
 terms. An example of this: http://www.vocabs.org/term/WineOntology (*hint*:
 click download at the top).

 I was just wondering what the Semantic community thinks of this idea. I
 hope it's clear what I'm trying to achieve here, but maybe a better
 explanation would be here: http://www.vocabs.org/about

 Thanks!




--
*Bernard Vatant*
Vocabularies & Data Engineering
Tel : +33 (0)9 71 48 84 59
Skype : bernard.vatant
Blog : the wheel and the hub http://blog.hubjects.com/

*Mondeca*
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews http://twitter.com/#%21/mondecanews
--

Meet us at Documation http://www.documation.fr/ in Paris, March 20-21


Re: How can I express containment/composition?

2013-02-21 Thread Bernard Vatant
 part-whole relations in OWL Ontologies). It explains that OWL has no
 direct
 support for this kind of relationship and it goes on to give examples on
 how
 one can create ontologies that do support the relationship in one way or
 the
 other.

 Is there a ready to use ontology/vocabulary out there that can help me
 express containment/composition?

 Thanks in advance,
 Frans


--
*Bernard Vatant*
Vocabularies & Data Engineering
Tel : +33 (0)9 71 48 84 59
Skype : bernard.vatant
Blog : the wheel and the hub http://blog.hubjects.com/

*Mondeca*
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews http://twitter.com/#%21/mondecanews
--

Meet us at Documation http://www.documation.fr/ in Paris, March 20-21


Re: Linking to non-RDF datasets

2013-03-12 Thread Bernard Vatant
Hi Alasdair

Some results from http://lov.okfn.org/dataset/lov/search/#s=dataset

http://purl.org/dc/dcmitype/Dataset is quite generic, but does not have any
attached properties.
http://purl.org/ctic/dcat#Dataset is a subclass of the above, is intended
to represent datasets in a catalogue, and is not limited to RDF datasets

... many more I let you explore
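To make the trade-off concrete, here is a hedged sketch (hypothetical URIs) of a linkset description whose target is typed as a dcat:Dataset rather than a void:Dataset; whether VoID's domain/range declarations tolerate this is precisely the open question of the thread:

```python
# Hypothetical sketch: the RDF side is a void:Dataset, the non-RDF side
# is typed only as dcat:Dataset, and a void:Linkset connects the two.
LINKSET_TTL = """
@prefix void: <http://rdfs.org/ns/void#> .
@prefix dcat: <http://www.w3.org/ns/dcat#> .
@prefix ex:   <http://example.org/> .

ex:rdfData    a void:Dataset .
ex:legacyData a dcat:Dataset .   # not a set of RDF triples

ex:links a void:Linkset ;
    void:subjectsTarget ex:rdfData ;
    void:objectsTarget  ex:legacyData ;
    void:linkPredicate  <http://www.w3.org/2004/02/skos/core#closeMatch> .
"""
print(LINKSET_TTL.count('void:'), 'uses of the void: prefix')
```

Strictly read, void:objectsTarget entails that ex:legacyData is a void:Dataset; the sketch just shows what the "use the predicates anyway" option looks like on the wire.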

Hope that helps

Bernard


2013/3/12 Alasdair J G Gray alasdair.g...@manchester.ac.uk

 Hi All,

 We are making extensive use of the VoID vocabulary [1] in the Open PHACTS
 project [2] to describe our datasets.  We are currently deciding how to
 model a recurring use case of needing to describe non-RDF datasets and
 manage linksets to them.

 In the VoID vocabulary, a dataset is defined to be [3]

 A set of RDF triples that are published, maintained or aggregated by a
 single provider.

 Since all predicates are defined with a domain/range of void:Dataset, this
 would mean that it would be incorrect to use them for any dataset that is
 not a set of RDF triples. However, this usage is becoming common.

 Should we go ahead and use the predicates despite this inaccurate
 interpretation of the non-RDF dataset?

 Is there another vocabulary that allows for the modelling of linksets that
 does not restrict the dataset to a set of RDF triples? I am aware of DCAT
 [4] but do not see suitable linking predicates.

 Should we develop a set of super-properties that do not have the
 domain/range restrictions?

 Thanks,

 Alasdair

 [1] http://www.w3.org/TR/void/
 [2] http://www.openphacts.org/
 [3] http://vocab.deri.ie/void#Dataset
 [4] http://www.w3.org/TR/vocab-dcat/


   Dr Alasdair J G Gray
 Research Associate
 alasdair.g...@manchester.ac.uk
 +44 161 275 0145

 http://www.cs.man.ac.uk/~graya/

 Please consider the environment before printing this email.






--
*Bernard Vatant*
Vocabularies & Data Engineering
Tel : +33 (0)9 71 48 84 59
Skype : bernard.vatant
Blog : the wheel and the hub http://blog.hubjects.com/

*Mondeca*
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews http://twitter.com/#%21/mondecanews
--

Meet us at Documation http://www.documation.fr/ in Paris, March 20-21


Re: Ending the Linked Data debate -- PLEASE VOTE *NOW*!

2013-06-14 Thread Bernard Vatant
Some speak about linked data, and others speak about linked and data.
How can they possibly agree?

This is really a very old debate, and it can go on forever:
A white horse is not a horse
http://www.thezensite.com/ZenEssays/Philosophical/Horse.html

Bernard


2013/6/14 Gregg Reynolds d...@mobileink.com

 On Thu, Jun 13, 2013 at 12:20 PM, David Booth da...@dbooth.org wrote:
   Original Message 
  Subject: Ending the Linked Data debate -- PLEASE VOTE *NOW*!
  Date: Thu, 13 Jun 2013 13:19:27 -0400
  From: David Booth da...@dbooth.org
  To: community, Linked public-lod@w3.org

 
  In normal usage within the Semantic Web community,
  does the term Linked Data imply the use of RDF?
 
  PLEASE VOTE NOW at

 Hate to rain on your parade, but I can't resist, since I've spent the
 past two years researching survey design, validity, etc. which I
 pretty much hated all the way, but you've innocently given me a chance
 to use some of that knowledge.  The likelihood that this question will
 produce valid data that can be unambiguously interpreted is
 pretty close to zero.  It's a pretty well-established fact that even
 the simplest questions - e.g. how many children do you have? - will be
 misinterpreted by an astonishingly large number of respondents
 (approaching 50% if I recall).  In this case, given the intrinsic
 ambiguity of the question (normal, imply, etc.) and the high
 degree of education and intelligence of the respondents, I predict
 that if 50 people respond there will be at least 51 different
 interpretations of the question.  In other words they are all highly
 likely to be responding to different questions.  Which means you won't
 be able to draw any valid conclusions.

 Here's an obvious example:  is normal usage descriptive or
 evaluative?  In other words, does it refer to the fact of how people
 do use it, or to a norm of how they ought to use it?  Somebody
 strongly committed one way or the other could claim that normal
 usage is just the usage they favor - people who don't in fact use it
 that way are weirdos and deviants, even if they're in the majority.
 So your question is inherently ambiguous, and that's not counting
 problems with Semantic Web community, etc.

 Besides, you omitted the Refused to answer option. ;)

 -Gregg




Re: RDF and CIDOC CRM

2013-06-14 Thread Bernard Vatant
Hi all

I'm a bit lost with all those avatars of the CIDOC-CRM ontology, published
under various URIs and namespaces, with confusing redirections.

In LOV [1] we have registered two versions under two different namespaces:
version 5.01 in OWL [2] and the more recent 5.04 in RDFS [3], dereferencing
to [6]. The draft mentioned by Kingsley [4] is a more recent version, 5.1;
the xml:base it declares is yet another URI [5], which actually dereferences
to [6], as [3] does. And the Erlangen manifestation [7] mentioned by
Richard is yet another avatar, apparently also of version 5.04.

You are lost already? Imagine the poor linked data publisher wanting to use
the latest, authoritative version of the ontology ...

Since none of those vocabularies in their RDF form exposes clear
provenance metadata, and more recent versions do not mention the previous
one(s), you have to look at the comments in [4]:

This is the encoding approved by CRM-SIG in the meeting 21/11/2012 as the
current version for the CIDOC CRM namespace. Note that this is NOT a
definition of the CIDOC CRM, but an encoding derived from the authoritative
release of the CIDOC CRM v5.1 (draft) May 2013 on
http://www.cidoc-crm.org/official_release_cidoc.html;

And from this HTML page I understand that the 5.04 version is indeed the
current official one, and 5.1 is just a draft; so after all, both namespaces
[3] and [5] redirecting to the current official version might be a feature
and not a bug, but the above comment in the draft about the current
version for the CIDOC CRM namespace is confusing at least ...

If the editors of those various versions are around, could they please step
forward and clarify what should be used, as of today, as the authoritative
URI and namespace for this important ontology, so that potential users do
not need, beyond mastering RDF technologies, a degree in hermeneutics :)

Thanks for your time

Bernard


[1] http://lov.okfn.org/dataset/lov/details/vocabulary_crm.html
[2] http://purl.org/NET/cidoc-crm/core
[3] http://www.cidoc-crm.org/rdfs/cidoc-crm
[4] http://www.cidoc-crm.org/rdfs/cidoc_crm_v5.1-draft-2013May.rdfs
[5] http://www.cidoc-crm.org/cidoc-crm/
[6] http://www.cidoc-crm.org/rdfs/5.0.4/cidoc-crm.rdf
[7] http://erlangen-crm.org/current/


*Bernard Vatant*
Vocabularies & Data Engineering
Tel : +33 (0)9 71 48 84 59
Skype : bernard.vatant
Blog : the wheel and the hub http://bvatant.blogspot.com
Linked Open Vocabularies : lov.okfn.org

*Mondeca*
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews http://twitter.com/#%21/mondecanews
--

Mondeca is selected to present at ReInvent Law, London
http://reinventlawlondon.com/ on June 14th

Meet us during the European Open Data Week http://opendataweek.org in
Marseille (June 25-28)


RDF, Linked Data etc : please ping me when it's over ...

2013-06-19 Thread Bernard Vatant
I guess I'm not the only one : I'm about to put a filter rule on my inbox

from public-lod AND (contains RDF and Linked Data) = trash

No one having a decent full-time job and normal life can have the bandwidth
(not even speaking of the will or interest) to follow those threads. It's
too bad because there is certainly a lot of amazing stuff I miss.

So please ping me when it's over, and if someone can write a summary and
possibly draw useful conclusions, please do so and post it on a stable URI
where everything could be parsed in a single piece of document.

Note : anyone willing to do that is both a saint and a fool :)

Have fun

Bernard


*Bernard Vatant*
Vocabularies & Data Engineering
Tel : +33 (0)9 71 48 84 59
Skype : bernard.vatant
Blog : the wheel and the hub http://bvatant.blogspot.com
Linked Open Vocabularies : lov.okfn.org

*Mondeca*
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews http://twitter.com/#%21/mondecanews
--
Meet us during the European Open Data Week http://opendataweek.org in
Marseille (June 25-28)


Re: Are Topic Maps Linked Data?

2013-06-23 Thread Bernard Vatant
Back in 2001-2002 we had quite a lot of passionate interaction between the
Topic Maps and RDF working groups
My preferred presentation at that time was the one by Nikita Ogievetsky
wearing his Semantic Web Glasses, various versions of the concept are
still on line at http://www.cogx.com/?si=urn:cogx:resource:swg.
Lars Marius Garshol also made quite good comparisons of the two piles of
standards; see http://www.garshol.priv.no/blog/92.html

Now when the Linked Data brand started around 2006, Topic Maps were
unfortunately already more or less in a deadlock (for all sorts of reasons
off-topic here - no pun), so the question Are Topic Maps Linked Data? is
a sort of de facto anachronism.

That said, well, yes, of course, Topic Maps is a technology meant to link
data. It was even its core business. Jim Mason [1] (if I remember
correctly) used to say that XML was SGML with good marketing, maybe Linked
Data is Topic Maps with good marketing :)

Bernard

[1] http://www.open-std.org/jtc1/sc34old/repository/0688.pdf

*Bernard Vatant*
Vocabularies & Data Engineering
Tel : +33 (0)9 71 48 84 59
Skype : bernard.vatant
Blog : the wheel and the hub http://bvatant.blogspot.com
Linked Open Vocabularies : lov.okfn.org

*Mondeca*
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews http://twitter.com/#%21/mondecanews
--
Meet us during the European Open Data Week http://opendataweek.org in
Marseille (June 25-28)




2013/6/23 rich...@light.demon.co.uk rich...@light.demon.co.uk

 Didn't Steve Pepper do an analysis which mapped Topic Maps to RDF a decade
 or so back?

 Richard Light
 Sent from my phone

 - Reply message -
 From: Dan Brickley dan...@danbri.org
 To: public-lod public-lod@w3.org
 Subject: Are Topic Maps Linked Data?
 Date: Sun, Jun 23, 2013 15:04


 Just wondering,

 Dan



Re: Linked Data Glossary is published!

2013-06-27 Thread Bernard Vatant
Hi Bernadette

Great job. What about a publication of the glossary as linked data? In SKOS
for example :)
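As a hedged sketch of the suggestion (URIs hypothetical, term text paraphrased; the actual glossary is at http://www.w3.org/TR/ld-glossary/), one glossary entry published in SKOS could look like:

```python
# One glossary entry modeled as a skos:Concept in a skos:ConceptScheme.
# All ex: URIs are made up; the definition text is a paraphrase.
GLOSSARY_TTL = """
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <http://example.org/ld-glossary/> .

ex:scheme a skos:ConceptScheme ;
    skos:prefLabel "Linked Data Glossary"@en .

ex:linked-data a skos:Concept ;
    skos:inScheme ex:scheme ;
    skos:prefLabel "Linked Data"@en ;
    skos:definition "Structured data published on the Web using URIs and standard vocabularies."@en .
"""
print(GLOSSARY_TTL.count('skos:'), 'uses of the skos: prefix')
```

Each glossary term would become one skos:Concept, which also makes cross-references between terms (skos:related, skos:broader) straightforward.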

Bernard

*Bernard Vatant*
Vocabularies & Data Engineering
Tel : +33 (0)9 71 48 84 59
Skype : bernard.vatant
Blog : the wheel and the hub http://bvatant.blogspot.com
Linked Open Vocabularies : lov.okfn.org

*Mondeca*
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews http://twitter.com/#%21/mondecanews
--
Meet us during the European Open Data Week http://opendataweek.org in
Marseille (June 25-28)




2013/6/27 Bernadette Hyland bhyl...@3roundstones.com

 Hi,
 On behalf of the editors, I'm pleased to announce the publication of the
 peer-reviewed *Linked Data Glossary* published as a W3C Working Group
 Note effective 27-June-2013.[1]

 We hope this document serves as a useful glossary containing terms defined
 and used to describe Linked Data, and its associated vocabularies and best
 practices for publishing structured data on the Web.

 The LD Glossary is intended to help foster constructive discussions
 between the Web 2.0 and 3.0 developer communities, encouraging all of us
 to appreciate the application of different technologies for different use
 cases.  We hope the glossary serves as a useful starting point in your
 discussions about data sharing on the Web.

 Finally, the editors are grateful to David Wood for contributing the
 initial glossary terms from Linking Government Data
 http://www.springer.com/computer/database+management+%26+information+retrieval/book/978-1-4614-1766-8
 (Springer 2011). The editors also wish to thank members of the Government
 Linked Data Working Group http://www.w3.org/2011/gld/ with special
 thanks to the reviewers and contributors: Thomas Baker, Hadley Beeman,
 Richard Cyganiak, Michael Hausenblas, Sandro Hawke, Benedikt Kaempgen,
 James McKinney, Marios Meimaris, Jindrich Mynarz and Dave Reynolds who
 diligently iterated the W3C Linked Data Glossary in order to create a
 foundation of terms upon which to discuss and better describe the Web of
 Data.  If there is anyone that the editors inadvertently overlooked in this
 list, please accept our apologies.

 Thank you one & all!

 Sincerely,
 Bernadette Hyland http://3roundstones.com/about-us/leadership-team/bernadette-hyland/,
 3 Round Stones http://3roundstones.com/
 Ghislain Atemezing http://www.eurecom.fr/%7Eatemezin, EURECOM http://www.eurecom.fr
 Michael Pendleton, US Environmental Protection Agency http://www.epa.gov
 Biplav Srivastava, IBM http://www.ibm.com/in/research/

 W3C Government Linked Data Working Group
 Charter: http://www.w3.org/2011/gld/

 [1] http://www.w3.org/TR/ld-glossary/



Re: YASGUI: Web-based SPARQL client with bells ‘n wistles

2013-08-20 Thread Bernard Vatant
Hello Barry

I had a reminder today that I never answered the question below, and I am
very late indeed!

Properties and classes of all vocabularies in LOV are aggregated in a
triple store, whose SPARQL endpoint is at
http://lov.okfn.org/endpoint/lov_aggregator

This is quite raw data, but you should find everything you need in there.

Otherwise you can also use the new API
http://lov.okfn.org/dataset/lov/api/v1/vocabs
which for each vocabulary provides the prefix and a link to the last
version stored.
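A minimal sketch of how a tool might query that endpoint over the standard SPARQL protocol (the endpoint URL comes from the email above and may have moved since; the sample query is generic):

```python
from urllib.parse import urlencode

# Build a SPARQL protocol GET request for the LOV aggregator endpoint.
ENDPOINT = 'http://lov.okfn.org/endpoint/lov_aggregator'
QUERY = ('SELECT DISTINCT ?c WHERE '
         '{ ?c a <http://www.w3.org/2000/01/rdf-schema#Class> } LIMIT 10')

# Per the SPARQL 1.1 protocol, a GET request is endpoint + ?query=<encoded>
url = ENDPOINT + '?' + urlencode({'query': QUERY})
print(url)
# To actually execute it: urllib.request.urlopen(url).read() -- needs network.
```

This is the kind of request an auto-completion cache in a tool like YASGUI would issue periodically to refresh its list of known classes and properties.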

Hope that helps

Bernard

From: Barry Norton barry.nor...@ontotext.com
Date: Sat, 06 Jul 2013 11:27:46 +0100

Bernard, does LOV keep a cache of properties and classes?

I'd really like to see resource auto-completion in Web-based tools like
YASGUI, but a cache is clearly needed for that to be feasible.

Barry


Re: Is the same video but in different encodings the owl:sameAs?

2013-12-05 Thread Bernard Vatant
Hi all

Reading the thread, I was also thinking about a FRBR-ish approach.
Maybe you will not use the exact FRBR classes, but the spirit of them.
See
http://bvatant.blogspot.fr/2013/07/frbr-and-beyond-its-abstraction-all-way.html
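A hedged sketch of the FRBR-ish proxy idea discussed in the thread: an abstract "work" resource carries the shared statements, and each encoding points back to it. All ex.org URIs are hypothetical; the predicates are real DCMI terms:

```python
# Build N-Triples linking two encodings to one abstract proxy resource
# via dcterms:isFormatOf, with the title asserted once on the proxy.
triples = [
    ('<http://ex.org/video>', '<http://purl.org/dc/terms/title>', '"My video"'),
    ('<http://ex.org/video.mp4>', '<http://purl.org/dc/terms/isFormatOf>',
     '<http://ex.org/video>'),
    ('<http://ex.org/video.ogv>', '<http://purl.org/dc/terms/isFormatOf>',
     '<http://ex.org/video>'),
]
ntriples = '\n'.join(' '.join(t) + ' .' for t in triples)
print(ntriples)
```

This avoids owl:sameAs entirely: statements about the content (title, creator, duration) attach to the proxy, while format-specific statements (codec, file size) stay on the individual encodings.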


2013/12/5 Damian Steer d.st...@bris.ac.uk


 On 5 Dec 2013, at 13:52, Thomas Steiner to...@google.com wrote:

  Dear Public-LOD,
 
  Thank you all for your very helpful replies. Following your joint
  arguments, owl:sameAs is _not_ an option then.

 You could use dc:hasFormat to link them:

 A related resource that is substantially the same as the pre-existing
 described resource, but in another format. [1]

 http://ex.org/video.mp4 dc:hasFormat http://ex.org/video.ogv .

 snip

  The most reasonable
  thing to do seems to introduce some sort of proxy object, on top of
  which statements can be made.

 I prefer this. It feels FRBR-ish [2][3] although that's not quite right.
 (Are the individual videos items, and the proxy object a manifestation?)

 Damian

 [1] http://dublincore.org/documents/dcmi-terms/#terms-hasFormat
 [2] 
 https://en.wikipedia.org/wiki/Functional_Requirements_for_Bibliographic_Records
 
 [3] http://vocab.org/frbr/core.html




-- 

*Bernard Vatant*
Vocabularies & Data Engineering
Tel : +33 (0)9 71 48 84 59
Skype : bernard.vatant
http://google.com/+BernardVatant

*Mondeca*
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews http://twitter.com/#%21/mondecanews
--


  1   2   >