RESTful API for accessing DBpedia

2010-03-04 Thread Monika Solanki

Hi LODers,

I am looking for a REST based API for programmatically accessing
DBpedia's SPARQL end point. Any pointers much appreciated.

Monika


-- 
Dr Monika Solanki
F27 Department of Computer Science
University of Leicester
Leicester LE1 7RH
United Kingdom
WebID: http://www.cs.le.ac.uk/people/ms491/foaf.rdf#monika
Google Coordinates: 52.653791,-1.158414
Tel: +44 116 252 3828
Web: http://www.cs.le.ac.uk/people/ms491/





Crowdsourcing request: Google People Finder Data as RDF

2010-03-04 Thread Aldo Bucchi
Hi,

As most of you have heard, things have been a bit shaky down here in
Chile. We have some requests and hope you can help. This is a moment
to prove what we always boast about: that Linked Data can solve real
problems.

Google provides a people finder service
(http://chilepersonfinder.appspot.com/) which is right now
centralizing some ( but not all ) of the missing people data. This
service is OK, but it lacks some features, and we need to integrate it
with other sources to perform analysis, aid our rescue teams, and
support families.

This is a serious matter, but it is taken a bit lightly by
existing software. ( There is a tradeoff between the amount of
structure you can impose and ease of use on the front line. )

What we would love to have is a way to access all feeds from
http://chilepersonfinder.appspot.com/ as RDF.

We already have some databases operating on these feeds, but we're
still far from a clean solution because of their loose structure ( take
a look and you'll see what I mean ).

Who wants to take a shot at this?

Requirements:
- Take all feeds originating from http://chilepersonfinder.appspot.com/
- Generate an initial RDF dump ( one big TTL file )
- Generate incremental RDF dumps every hour

The transformation should make its best guess at the ideal data
structure, trying not to lose granularity while shielding us a bit
from this feed-based model.
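
To make that concrete, here is a minimal Python 3 sketch of the kind
of feed-to-Turtle pass we have in mind. The feed path, the PFIF
namespace URI, and the element names are my assumptions ( based on
PFIF 1.2 ); check them against the actual feeds before relying on this.

# Sketch: fetch a Person Finder Atom feed and emit Turtle.
# Feed path, PFIF namespace, and element names are assumptions;
# verify them against the actual feed before use.
import urllib.request
import xml.etree.ElementTree as ET

FEED = 'http://chilepersonfinder.appspot.com/feeds/person'  # hypothetical path
PFIF = 'http://zesty.ca/pfif/1.2'  # assumed PFIF version

with urllib.request.urlopen(FEED) as resp:
    tree = ET.parse(resp)

print('@prefix foaf: <http://xmlns.com/foaf/0.1/> .')
for person in tree.iter('{%s}person' % PFIF):
    pid = person.findtext('{%s}person_record_id' % PFIF, '')
    first = person.findtext('{%s}first_name' % PFIF, '')
    last = person.findtext('{%s}last_name' % PFIF, '')
    # Sketch only: real data would need proper Turtle string escaping.
    print('<urn:pfif:%s> a foaf:Person ;' % pid)
    print('    foaf:givenName "%s" ;' % first)
    print('    foaf:familyName "%s" .' % last)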

We then take care of downloading this, integrating with other systems,
further processing, geocoding, etc.

There's a lot of work to do, and the more we can outsource, the better.

On Friday ( tomorrow ) the first nation-wide announcement of our
search platform goes out, and we expect lots of people to use our
services. So this is something really urgent, and really, really
important for those who need it.

Ah, and volunteers are moving all this data into a Virtuoso instance
that will also have more stuff. It will be available soon at
http://opendata.cl/ so stay tuned.

We really wish we had something like DBpedia in place by now; it
would make all this much easier. But now is the time.
Guys, the tsunami casualties could have been avoided; it was all about
misinformation.
The same goes for relief efforts. They are not optimal, and this is
all about data in the end.

I know you know how valuable data is. But it is now that you can
really make your point! Triple by Triple.

Thanks!
A

-- 
Aldo Bucchi
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/




Crowdsourcing request 2: Crisis platform data from http://chile.ushahidi.com/

2010-03-04 Thread Aldo Bucchi
Hi,

Here's another RDF conversion task that is in the queue, and this one
falls in the public domain.

Ushahidi is a crisis management platform that has been used in the
Haiti crisis and others, and it is being used in Chile as well via
http://chile.ushahidi.com/

It provides a facility to download the data as CSV from:
http://chile.ushahidi.com/download

Again, the requirement is to transform this to RDF using pertinent
ontologies, doing it as richly as possible.
We then slurp it into a Virtuoso instance, where we will try to link
it with the main organizational entities in the country and with data
from other feeds.
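
To give a feel for the shape of the conversion, here is a rough
Python 3 sketch. The column names are my guess at the export's header
row ( an assumption ); check http://chile.ushahidi.com/download before
relying on them, and note that real data needs fuller escaping.

# Sketch: convert an Ushahidi CSV export to Turtle.
# Column names below are assumptions; check the actual header row.
import csv

print('@prefix dc:  <http://purl.org/dc/elements/1.1/> .')
print('@prefix geo: <http://www.w3.org/2003/01/geo/wgs84_pos#> .')

def esc(s):
    # Minimal Turtle string escaping; real data needs more care.
    return s.replace('\\', '\\\\').replace('"', '\\"').replace('\n', ' ')

with open('ushahidi.csv', newline='', encoding='utf-8') as f:
    for row in csv.DictReader(f):
        uri = '<http://chile.ushahidi.com/reports/view/%s>' % row['#']
        print('%s dc:title "%s" ;' % (uri, esc(row['INCIDENT TITLE'])))
        print('    dc:description "%s" ;' % esc(row['DESCRIPTION']))
        print('    geo:lat "%s" ;' % row['LATITUDE'])
        print('    geo:long "%s" .' % row['LONGITUDE'])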

Anyone? :)

Thanks!
A



-- 
Aldo Bucchi
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/




Re: Crowdsourcing request 2: Crisis platform data from http://chile.ushahidi.com/

2010-03-04 Thread KANZAKI Masahide
Hello Aldo,

I tried generating RDF/Turtle from the CSV dump of 536 records,
resulting in 5085 triples. All URIs were freshly minted for this.
Please see the file at

http://www.kanzaki.com/works/2010/test/chile.ttl
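
If you want to sanity-check the file, here is a quick sketch with
rdflib ( using rdflib is just one option; any Turtle parser will do ):

# Sketch: load the generated Turtle and count the triples.
from rdflib import Graph

g = Graph()
g.parse('http://www.kanzaki.com/works/2010/test/chile.ttl', format='turtle')
print(len(g))  # number of triples parsed; should be around 5085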

best regards,

2010/3/4 Aldo Bucchi aldo.buc...@gmail.com:
 [...]

-- 
@prefix : <http://www.kanzaki.com/ns/sig#> .  :from [:name
"KANZAKI Masahide"; :nick "masaka"; :email "mkanz...@gmail.com"].



Re: Crowdsourcing request: Google People Finder Data as RDF

2010-03-04 Thread Bill Roberts
Hi Aldo - I'd like to help, but I see you posted your mail a few hours ago. Do
you have updated information on what still needs to be done? Do you have a wiki
or similar to coordinate volunteer programming efforts?

Regards

Bill


On 4 Mar 2010, at 14:06, Aldo Bucchi wrote:

 [...]



Re: Crowdsourcing request 2: Crisis platform data from http://chile.ushahidi.com/

2010-03-04 Thread Kingsley Idehen

Aldo Bucchi wrote:

[...]

Aldo,

We have a CSV Cartridge, so if you update your RDF Mappers VAD you have
a head start re. CSV to RDF. You then also have the ability to make a
mapping ontology etc.


--

Regards,

Kingsley Idehen	  
President & CEO
OpenLink Software 
Web: http://www.openlinksw.com

Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca: kidehen 









Re: RESTful API for accessing DBpedia

2010-03-04 Thread Bob DuCharme

Monika Solanki wrote:


I am looking for a REST based API for programmatically accessing
DBpedia's SPARQL end point. Any pointers much appreciated.


A SPARQL endpoint is by its nature already a REST-based API. You send it
HTTP GETs, and it returns data laid out according to a specific protocol
(http://www.w3.org/TR/2008/REC-rdf-sparql-protocol-20080115/).


To create the URL for the GET for DBpedia, you can escape the SPARQL
query (most programming languages have a function for this, but
http://www.xs4all.nl/~jlpoutre/BoT/Javascript/Utils/endecode.html is
nice for experiments) and append it to the following:
http://dbpedia.org/sparql?format=XML&default-graph-uri=http%3A%2F%2Fdbpedia.org&query=


For example, doing this with this query

 SELECT ?p ?o WHERE { <http://dbpedia.org/resource/IBM> ?p ?o }

gets you this URL, which you can paste into your browser:

http://dbpedia.org/sparql?format=XML&default-graph-uri=http%3A%2F%2Fdbpedia.org&query=SELECT%20%3Fp%20%3Fo%20%20%20WHERE%20%7B%20%3Chttp%3A%2F%2Fdbpedia.org%2Fresource%2FIBM%3E%20%3Fp%20%3Fo%20%7D
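
If you'd rather do it in code, here is a minimal Python 3 sketch using
only the standard library ( the endpoint and parameters are exactly the
ones above; the rest is just URL-encoding and an HTTP GET ):

# Build the SPARQL Protocol GET request and print the XML results.
import urllib.parse
import urllib.request

query = 'SELECT ?p ?o WHERE { <http://dbpedia.org/resource/IBM> ?p ?o }'
params = urllib.parse.urlencode({
    'format': 'XML',
    'default-graph-uri': 'http://dbpedia.org',
    'query': query,
})
url = 'http://dbpedia.org/sparql?' + params
with urllib.request.urlopen(url) as resp:
    print(resp.read().decode('utf-8'))  # SPARQL results in XML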

Virtuoso provides the DBpedia endpoint, so you'll find more documentation
on this at http://virtuoso.openlinksw.com/dataspace/dav/wiki/Main/VOSSparqlProtocol.


Or am I misunderstanding what you're looking for?

Bob





Re: RESTful API for accessing DBpedia

2010-03-04 Thread Hugh Williams
Hi Monika,

The Virtuoso Facets web service provides a REST interface for accessing the 
DBpedia service it hosts, as detailed at:


http://virtuoso.openlinksw.com/dataspace/dav/wiki/Main/VirtuosoFacetsWebService

The DBpedia Virtuoso Facets web service interface is accessible from:

http://dbpedia.org/fct/service
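
As a starting point, here is a Python 3 sketch of calling it over plain
HTTP POST. The XML body is only my illustration of the request shape;
the actual element names and structure are documented on the
VirtuosoFacetsWebService page above, so please verify against that:

# Sketch: POST a query to the Facets web service.
# The XML elements below are assumptions; consult the
# VirtuosoFacetsWebService documentation for the real format.
import urllib.request

body = b'''<?xml version="1.0"?>
<query xmlns="http://openlinksw.com/services/facets/1.0">
  <text>IBM</text>
</query>'''

req = urllib.request.Request(
    'http://dbpedia.org/fct/service',
    data=body,
    headers={'Content-Type': 'text/xml'},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode('utf-8'))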

I hope this will suffice for your needs ...

Best Regards
Hugh Williams
Professional Services
OpenLink Software
Web: http://www.openlinksw.com
Support: http://support.openlinksw.com
Forums: http://boards.openlinksw.com/support
Twitter: http://twitter.com/OpenLink

On 4 Mar 2010, at 13:24, Monika Solanki wrote:

 [...]