Re: Crowdsourcing request: Google People Finder Data as RDF

2010-03-05 Thread Stephane Fellah
Hi,

I am interested in helping with this project. I have about 10 years of
experience with semantic web technology, and it is my dog food every day. I
had the idea of doing this during the Haiti earthquake. I looked at the People
Finder Interchange Format (PFIF) that is used by Google:
http://zesty.ca/pfif/1.2/ . The main problem with the XML format is its fixed
structure and the difficulty of extending it for specific purposes (such as
addresses).
I would be interested in working on a core ontology that would fix the
defects of PFIF and then using it as a foundation to develop extensions. There
are other ontologies that could be taken into account, such as Sahana:
http://ontology.nursix.org/sahana-person.owl . I think it is important that we
do this right rather than just doing a straight conversion from the PFIF
format. It requires some effort, but it should pay off in the long term.
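To illustrate the extensibility argument, here is a minimal sketch, in Python
with rdflib, of how an RDF person record can pick up a structured address from
an external vocabulary without any change to a core schema. The ex: terms are
placeholders I made up; only vCard (http://www.w3.org/2006/vcard/ns#) is a
real W3C vocabulary, and the record ID is invented.

import urllib.parse

from rdflib import Graph, Literal, Namespace, RDF, URIRef

EX = Namespace("http://example.org/pfif#")             # placeholder core terms
VCARD = Namespace("http://www.w3.org/2006/vcard/ns#")  # real W3C vCard vocabulary

g = Graph()
g.bind("ex", EX)
g.bind("vcard", VCARD)

# A PFIF-style person record, modelled as RDF.
person = URIRef("http://example.org/person/"
                + urllib.parse.quote("example.org/2010.1", safe=""))
g.add((person, RDF.type, EX.Person))
g.add((person, EX.first_name, Literal("Maria")))  # field name from PFIF 1.2

# The extension PFIF's fixed XML schema makes hard: a structured address
# attached via an external vocabulary, with no schema change required.
addr = URIRef(str(person) + "#addr")
g.add((person, VCARD.hasAddress, addr))
g.add((addr, VCARD.locality, Literal("Talca")))
g.add((addr, VCARD.region, Literal("Maule")))

print(g.serialize(format="turtle"))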
I would appreciate it if you could tell me where to start (forum, wiki, code
base, etc.).

Best regards
Stephane Fellah



On Thu, Mar 4, 2010 at 1:12 PM, Bill Roberts b...@swirrl.com wrote:

 Hi Aldo - I'd like to help, but I see you posted your mail a few hours ago.
  Do you have updated information on what still needs to be done?  Do you have a
 wiki or similar to coordinate volunteer programming efforts?

 Regards

 Bill


 On 4 Mar 2010, at 14:06, Aldo Bucchi wrote:

  Hi,
 
  As most of you heard things were a bit shaky down here in Chile. We
  have some requests and hope you guys can help. This is a moment to
  prove what we always boast about: that Linked Data can solve real
  problems.
 
  Google provides a people finder service
  (http://chilepersonfinder.appspot.com/) which is right now
  centralizing some ( but not all ) of the missing people data. This
  service is OK, but it lacks some features, and we need to integrate it
  with other sources to perform analysis, aid our rescue teams, and
  bring relief to families.
 
  This is a serious matter, but it is indeed taken a bit lightly by
  existing software ( there is a tradeoff between the amount of
  structure you can impose and ease of use on the front line ).

  What we would love to have is a way to access all feeds from
  http://chilepersonfinder.appspot.com/ as RDF.

  We already have some databases operating on these feeds, but we're
  still far from a clean solution because of their loose structure ( take
  a look and you'll see what I mean ).
 
  Who wants to take a shot at this?
 
  Requirements.
  - Take all feeds originating from http://chilepersonfinder.appspot.com/
 
  - Generate an initial RDF dump ( big TTL file )
  - Generate Incremental RDF dumps every hour
 
  The transformation should make its best guess at the ideal data
  structure and try not to lose granularity, while shielding us a bit
  from this feed-based model.
 
  We then take care of downloading this, integrating with other systems,
  further processing, geocoding, etc.
 
  There's a lot of work to do, and the more we can outsource, the better.
 
  On Friday ( tomorrow ) there will be the first nation-wide
  announcement of our search platform and we expect lots of people to
  use our services. So this is something really urgent and really,
  really important for those who need it.
 
  Ah. Volunteers are moving all this data into a Virtuoso instance that
  will also have more stuff. It will be available soon at
  http://opendata.cl/ so stay tuned.
 
  We really wish we had something like DBpedia in place by now; it would
  make all this much easier. But now is the time.
  Guys, the tsunami casualties could have been avoided; it was all about
  misinformation.
  Same goes for relief efforts. They are not optimal and this is all
  about data in the end.
 
  I know you know how valuable data is. But it is now that you can
  really make your point! Triple by Triple.
 
  Thanks!
  A
 
  --
  Aldo Bucchi
  skype:aldo.bucchi
  http://www.univrz.com/
  http://aldobucchi.com/
 
 





Re: Crowdsourcing request: Google People Finder Data as RDF

2010-03-05 Thread Hugh Glaser
Not sure if it helps:

It may be that some relief organisations use the UN/LOCODE codes:
http://www.unece.org/cefact/locode/cl.htm
We have a Linked Data version of them, for example:
http://unlocode.rkbexplorer.com/id/CLTLX
http://unlocode.rkbexplorer.com/id/CL-ML
Probably not sufficient granularity for anything useful, I suppose.
And sameas.org doesn't have much co-reference data for that area.

Anyway, as always, if anyone wants to use sameas.org as a clearing house to
bridge and re-publish such things (or anything else), ping me the
equivalence pairs and I will put them in as fast as I can.
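For anyone assembling such pairs, here is a minimal sketch, in Python with
rdflib, of turning (left, right) equivalence pairs into owl:sameAs triples
ready to send on. Only the UN/LOCODE URI pattern comes from the links above;
the counterpart URI is hypothetical.

from rdflib import Graph, URIRef
from rdflib.namespace import OWL

# (left, right) equivalence pairs; the second URI in each pair is a
# hypothetical counterpart, invented for illustration.
pairs = [
    ("http://unlocode.rkbexplorer.com/id/CLTLX",
     "http://example.org/places/some-town"),
]

g = Graph()
for left, right in pairs:
    g.add((URIRef(left), OWL.sameAs, URIRef(right)))

# One small Turtle file of owl:sameAs links, ready to publish or send on.
g.serialize(destination="equivalences.ttl", format="turtle")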

Best regards in your endeavours.
Hugh

On 04/03/2010 14:06, Aldo Bucchi aldo.buc...@gmail.com wrote:

 Hi,
 
 As most of you heard things were a bit shaky down here in Chile. We
 have some requests and hope you guys can help. This is a moment to
 prove what we always boast about: that Linked Data can solve real
 problems.
 
 Google provides a people finder service
 (http://chilepersonfinder.appspot.com/) which is right now
 centralizing some ( but not all ) of the missing people data. This
 service is OK, but it lacks some features, and we need to integrate it
 with other sources to perform analysis, aid our rescue teams, and
 bring relief to families.
 
 This is a serious matter, but it is indeed taken a bit lightly by
 existing software ( there is a tradeoff between the amount of
 structure you can impose and ease of use on the front line ).

 What we would love to have is a way to access all feeds from
 http://chilepersonfinder.appspot.com/ as RDF.

 We already have some databases operating on these feeds, but we're
 still far from a clean solution because of their loose structure ( take
 a look and you'll see what I mean ).
 
 Who wants to take a shot at this?
 
 Requirements.
 - Take all feeds originating from http://chilepersonfinder.appspot.com/
 - Generate an initial RDF dump ( big TTL file )
 - Generate Incremental RDF dumps every hour
 
 The transformation should make its best guess at the ideal data
 structure and try not to lose granularity, while shielding us a bit
 from this feed-based model.
 
 We then take care of downloading this, integrating with other systems,
 further processing, geocoding, etc.
 
 There's a lot of work to do, and the more we can outsource, the better.
 
 On Friday ( tomorrow ) there will be the first nation-wide
 announcement of our search platform and we expect lots of people to
 use our services. So this is something really urgent and really,
 really important for those who need it.
 
 Ah. Volunteers are moving all this data into a Virtuoso instance that
 will also have more stuff. It will be available soon at
 http://opendata.cl/ so stay tuned.
 
 We really wish we had something like DBpedia in place by now; it would
 make all this much easier. But now is the time.
 Guys, the tsunami casualties could have been avoided; it was all about
 misinformation.
 Same goes for relief efforts. They are not optimal and this is all
 about data in the end.
 
 I know you know how valuable data is. But it is now that you can
 really make your point! Triple by Triple.
 
 Thanks!
 A




Re: Crowdsourcing request: Google People Finder Data as RDF

2010-03-05 Thread Aldo Bucchi
Hi Stephane,

On Thu, Mar 4, 2010 at 5:39 PM, Stephane Fellah fella...@gmail.com wrote:
 Hi,
 I am interested in helping with this project. I have about 10 years of
 experience with semantic web technology, and it is my dog food every day. I
 had the idea of doing this during the Haiti earthquake. I looked at the People
 Finder Interchange Format (PFIF) that is used by
 Google: http://zesty.ca/pfif/1.2/ . The main problem with the XML format is
 its fixed structure and the difficulty of extending it for specific purposes
 (such as addresses).

Great link. I had not found the spec anywhere ;)

 I would be interested in working on a core ontology that would fix the
 defects of PFIF and then using it as a foundation to develop extensions. There

OK. But at this point we need something that works.
We don't want to extend the Google service; we want to consume it and
integrate it with other services.

 are other ontologies that could be taken into account, such as Sahana:
 http://ontology.nursix.org/sahana-person.owl . I think it is important that we
 do this right rather than just doing a straight conversion from the PFIF
 format. It requires some effort, but it should pay off in the long term.

Long term is the key word here... we don't have much time ;)
We're looking for missing people and guiding rescue teams.

 I would appreciate it if you could tell me where to start (forum, wiki, code
 base, etc.).

Well, all the forums are in Spanish. There are some small English hubs, for
example:
http://wiki.crisiscommons.org/wiki/Chile/2010_2_27_Earthquake

I think a simple RDF converter should be easy; I just don't have the
time for it. Other teams are using the data as-is. Time is critical;
that's why we're asking for help.
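For anyone picking this up, here is a rough sketch of such a converter, in
Python with rdflib. It assumes the site exposes standard PFIF 1.2 Atom feeds
at /feeds/person (the feed path and the ex: output vocabulary are assumptions;
the field names come from the PFIF 1.2 spec linked above):

import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

from rdflib import Graph, Literal, Namespace, RDF, URIRef

PFIF = "{http://zesty.ca/pfif/1.2}"         # PFIF 1.2 XML namespace
EX = Namespace("http://example.org/pfif#")  # placeholder vocabulary

def feed_to_graph(url):
    """Parse a PFIF 1.2 Atom feed and return its person records as a Graph."""
    g = Graph()
    g.bind("ex", EX)
    tree = ET.parse(urllib.request.urlopen(url))
    for person in tree.iter(PFIF + "person"):
        rec_id = person.findtext(PFIF + "person_record_id")
        if not rec_id:
            continue
        subj = URIRef("http://example.org/person/"
                      + urllib.parse.quote(rec_id, safe=""))
        g.add((subj, RDF.type, EX.Person))
        # Keep granularity: copy each PFIF field through as its own property.
        for field in ("first_name", "last_name", "home_city",
                      "home_state", "source_date", "author_name"):
            value = person.findtext(PFIF + field)
            if value:
                g.add((subj, EX[field], Literal(value)))
    return g

if __name__ == "__main__":
    g = feed_to_graph("http://chilepersonfinder.appspot.com/feeds/person")
    g.serialize(destination="dump.ttl", format="turtle")  # the big TTL file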

Thanks!


 Best regards
 Stephane Fellah


 On Thu, Mar 4, 2010 at 1:12 PM, Bill Roberts b...@swirrl.com wrote:

 Hi Aldo - I'd like to help, but I see you posted your mail a few hours
 ago.  Do you have updated information on what still needs to be done?  Do you
 have a wiki or similar to coordinate volunteer programming efforts?

 Regards

 Bill


 On 4 Mar 2010, at 14:06, Aldo Bucchi wrote:

  Hi,
 
  As most of you heard things were a bit shaky down here in Chile. We
  have some requests and hope you guys can help. This is a moment to
  prove what we always boast about: that Linked Data can solve real
  problems.
 
  Google provides a people finder service
  (http://chilepersonfinder.appspot.com/) which is right now
  centralizing some ( but not all ) of the missing people data. This
  service is OK, but it lacks some features, and we need to integrate it
  with other sources to perform analysis, aid our rescue teams, and
  bring relief to families.
 
  This is a serious matter, but it is indeed taken a bit lightly by
  existing software ( there is a tradeoff between the amount of
  structure you can impose and ease of use on the front line ).

  What we would love to have is a way to access all feeds from
  http://chilepersonfinder.appspot.com/ as RDF.

  We already have some databases operating on these feeds, but we're
  still far from a clean solution because of their loose structure ( take
  a look and you'll see what I mean ).
 
  Who wants to take a shot at this?
 
  Requirements.
  - Take all feeds originating from
  http://chilepersonfinder.appspot.com/
  - Generate an initial RDF dump ( big TTL file )
  - Generate Incremental RDF dumps every hour
 
  The transformation should make its best guess at the ideal data
  structure and try not to lose granularity, while shielding us a bit
  from this feed-based model.
 
  We then take care of downloading this, integrating with other systems,
  further processing, geocoding, etc.
 
  There's a lot of work to do, and the more we can outsource, the better.
 
  On Friday ( tomorrow ) there will be the first nation-wide
  announcement of our search platform and we expect lots of people to
  use our services. So this is something really urgent and really,
  really important for those who need it.
 
  Ah. Volunteers are moving all this data into a Virtuoso instance that
  will also have more stuff. It will be available soon at
  http://opendata.cl/ so stay tuned.
 
  We really wish we had something like DBpedia in place by now; it would
  make all this much easier. But now is the time.
  Guys, the tsunami casualties could have been avoided; it was all about
  misinformation.
  Same goes for relief efforts. They are not optimal and this is all
  about data in the end.
 
  I know you know how valuable data is. But it is now that you can
  really make your point! Triple by Triple.
 
  Thanks!
  A
 
  --
  Aldo Bucchi
  skype:aldo.bucchi
  http://www.univrz.com/
  http://aldobucchi.com/
 

Re: Crowdsourcing request: Google People Finder Data as RDF

2010-03-05 Thread Kingsley Idehen

Aldo,

A quick game plan that's agile enough for the problem at hand:

Find an ontology or instance data set and simply sponge it (preferably
using the URIBurner [1] instance).


Example:

http://linkeddata.uriburner.com/ode/?uri=http%3A%2F%2Fontology.nursix.org%2Fsahana-person.owl  
(which is now loaded as a result of the Sponger URL).


This gets the ontology into the Virtuoso instance, and from there we can
do all sorts of things re. ontology mapping, data reconciliation, and
reasoning, etc.


Another thing to note:

People can just put N3 in a text file and publish a link to it on the Web;
once on the Web, it gets sponged. This is an example of where Turtle and N3
trump other representation formats for RDF: you can practically scribble
your triples on a magic surface.
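As a concrete (made-up) example, here is the kind of N3 one might scribble
into a text file, checked for well-formedness with rdflib before publishing;
the person and URIs are invented for illustration:

from rdflib import Graph

# A few FOAF triples in N3/Turtle; the person and URIs are invented.
n3 = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<http://example.org/people/jdoe>
    a foaf:Person ;
    foaf:name "Juan Doe" ;
    foaf:based_near <http://unlocode.rkbexplorer.com/id/CLTLX> .
"""

g = Graph()
g.parse(data=n3, format="n3")    # raises an error if the syntax is off
print(len(g), "triples parsed")  # expect: 3 triples parsed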


Worst case, make a triple via HyperTweet.

The following people-oriented ontologies are already in place:

1. FOAF
2. Relationship
3. PIM
4. Family -- http://web.nickshanks.com/ns/family



Links:

1. http://uriburner.com/
2. http://uriburner.com/fct -- you can then use Full Text, Data Object
Labels, or Data Object Identifiers to find information and circulate it
via URLs
3. http://uriburner.com/sparql -- SPARQL for the more advanced
4. http://uriburner.com/isparql -- an easy way to SPARQL and then share
the results and query definitions via URLs (permalinks)
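As a sketch of using link 3 programmatically: the endpoint URL is from the
list above, the query itself is illustrative, and the standard "query" and
"format" parameters are assumed to be enabled on this instance.

import json
import urllib.parse
import urllib.request

# An illustrative query: list a few people with names.
query = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?s ?name WHERE { ?s a foaf:Person ; foaf:name ?name } LIMIT 10
"""

params = urllib.parse.urlencode({
    "query": query,
    "format": "application/sparql-results+json",
})
with urllib.request.urlopen("http://uriburner.com/sparql?" + params) as resp:
    results = json.load(resp)

for row in results["results"]["bindings"]:
    print(row["s"]["value"], row["name"]["value"])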


--

Regards,

Kingsley Idehen
President & CEO
OpenLink Software 
Web: http://www.openlinksw.com

Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca: kidehen 









Crowdsourcing request: Google People Finder Data as RDF

2010-03-04 Thread Aldo Bucchi
Hi,

As most of you heard things were a bit shaky down here in Chile. We
have some requests and hope you guys can help. This is a moment to
prove what we always boast about: that Linked Data can solve real
problems.

Google provides a people finder service
(http://chilepersonfinder.appspot.com/) which is right now
centralizing some ( but not all ) of the missing people data. This
service is OK, but it lacks some features, and we need to integrate it
with other sources to perform analysis, aid our rescue teams, and
bring relief to families.

This is a serious matter, but it is indeed taken a bit lightly by
existing software ( there is a tradeoff between the amount of
structure you can impose and ease of use on the front line ).

What we would love to have is a way to access all feeds from
http://chilepersonfinder.appspot.com/ as RDF.

We already have some databases operating on these feeds, but we're
still far from a clean solution because of their loose structure ( take
a look and you'll see what I mean ).

Who wants to take a shot at this?

Requirements.
- Take all feeds originating from http://chilepersonfinder.appspot.com/
- Generate an initial RDF dump ( big TTL file )
- Generate Incremental RDF dumps every hour

The transformation should make its best guess at the ideal data
structure and try not to lose granularity, while shielding us a bit
from this feed-based model.
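For the hourly incremental dumps, a minimal polling sketch follows. It assumes
the converter sketched earlier in this thread is saved as pfif_rdf.py, and
that this deployment honours the min_entry_date feed parameter defined by the
PFIF 1.2 spec; both are assumptions.

import time
import urllib.parse

from pfif_rdf import feed_to_graph  # hypothetical module: the converter
                                    # sketched above, saved as pfif_rdf.py

FEED = "http://chilepersonfinder.appspot.com/feeds/person"  # assumed feed path

def hourly_dumps(start="2010-03-04T00:00:00Z"):
    """Each hour, fetch records newer than the last poll, write a delta TTL."""
    last = start
    while True:
        now = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
        url = FEED + "?" + urllib.parse.urlencode({"min_entry_date": last})
        g = feed_to_graph(url)
        g.serialize(destination="delta-" + now.replace(":", "") + ".ttl",
                    format="turtle")
        last = now
        time.sleep(3600)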

We then take care of downloading this, integrating with other systems,
further processing, geocoding, etc.

There's a lot of work to do, and the more we can outsource, the better.

On Friday ( tomorrow ) there will be the first nation-wide
announcement of our search platform and we expect lots of people to
use our services. So this is something really urgent and really,
really important for those who need it.

Ah. Volunteers are moving all this data into a Virtuoso instance that
will also have more stuff. It will be available soon at
http://opendata.cl/ so stay tuned.

We really wish we had something like DBpedia in place by now; it would
make all this much easier. But now is the time.
Guys, the tsunami casualties could have been avoided; it was all about
misinformation.
Same goes for relief efforts. They are not optimal and this is all
about data in the end.

I know you know how valuable data is. But it is now that you can
really make your point! Triple by Triple.

Thanks!
A

-- 
Aldo Bucchi
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.



Re: Crowdsourcing request: Google People Finder Data as RDF

2010-03-04 Thread Bill Roberts
Hi Aldo - I'd like to help, but I see you posted your mail a few hours ago.  Do
you have updated information on what still needs to be done?  Do you have a wiki
or similar to coordinate volunteer programming efforts?

Regards

Bill


On 4 Mar 2010, at 14:06, Aldo Bucchi wrote:

 Hi,
 
 As most of you heard things were a bit shaky down here in Chile. We
 have some requests and hope you guys can help. This is a moment to
 prove what we always boast about: that Linked Data can solve real
 problems.
 
 Google provides a people finder service
 (http://chilepersonfinder.appspot.com/) which is right now
 centralizing some ( but not all ) of the missing people data. This
 service is OK, but it lacks some features, and we need to integrate it
 with other sources to perform analysis, aid our rescue teams, and
 bring relief to families.
 
 This is a serious matter, but it is indeed taken a bit lightly by
 existing software ( there is a tradeoff between the amount of
 structure you can impose and ease of use on the front line ).

 What we would love to have is a way to access all feeds from
 http://chilepersonfinder.appspot.com/ as RDF.

 We already have some databases operating on these feeds, but we're
 still far from a clean solution because of their loose structure ( take
 a look and you'll see what I mean ).
 
 Who wants to take a shot at this?
 
 Requirements.
 - Take all feeds originating from http://chilepersonfinder.appspot.com/
 - Generate an initial RDF dump ( big TTL file )
 - Generate Incremental RDF dumps every hour
 
 The transformation should make its best guess at the ideal data
 structure and try not to lose granularity, while shielding us a bit
 from this feed-based model.
 
 We then take care of downloading this, integrating with other systems,
 further processing, geocoding, etc.
 
 There's a lot of work to do, and the more we can outsource, the better.
 
 On Friday ( tomorrow ) there will be the first nation-wide
 announcement of our search platform and we expect lots of people to
 use our services. So this is something really urgent and really,
 really important for those who need it.
 
 Ah. Volunteers are moving all this data into a Virtuoso instance that
 will also have more stuff. It will be available soon at
 http://opendata.cl/ so stay tuned.
 
 We really wish we had something like DBpedia in place by now; it would
 make all this much easier. But now is the time.
 Guys, the tsunami casualties could have been avoided; it was all about
 misinformation.
 Same goes for relief efforts. They are not optimal and this is all
 about data in the end.
 
 I know you know how valuable data is. But it is now that you can
 really make your point! Triple by Triple.
 
 Thanks!
 A
 
 -- 
 Aldo Bucchi
 skype:aldo.bucchi
 http://www.univrz.com/
 http://aldobucchi.com/
 