Request for help - Usage of PROV-O in your Ontologies and Vocabularies

2016-02-18 Thread Monika Solanki

Dear All,

Many thanks to those who responded to the survey.

If you have not taken the PROV-O survey already, there is still time 
until the 29th of February 2016.


Monika

On 01/02/2016 22:42, Monika Solanki wrote:

Dear All,

Do you import, extend, generalise or specialise the W3C Provenance 
vocabulary [1]? If so, we gratefully request your help.

As part of an analysis of how PROV-O is being used across vocabularies 
and ontologies, we ask you to answer just two (OK, three in some cases, 
and no more!) very short questions, available at:


http://goo.gl/forms/xS1qF4SVsz

The survey is open until the 29th of February 2016. Results of the 
analysis will be made available to everyone interested.


Many Thanks for your support,

Monika

[1] https://www.w3.org/TR/prov-o/





Re: Request for help - Usage of PROV-O in your Ontologies and Vocabularies

2016-02-02 Thread Ghislain Atemezing
Hello,
> On 2 Feb 2016 at 12:12, Daniel Garijo wrote:
> 
> just in case it helps, you have some of the usages registered in LOV: 
> http://lov.okfn.org/dataset/lov/vocabs/prov 
> 

More at 
https://www.w3.org/2002/09/wbs/46974/prov-vocabulary-usage-survey/results ?
See [1] 
See [1] 

Best,

Ghislain 
[1] http://www.w3.org/TR/prov-implementations/





Request for help - Usage of PROV-O in your Ontologies and Vocabularies

2016-02-01 Thread Monika Solanki

Dear All,

Do you import, extend, generalise or specialise the W3C Provenance 
vocabulary [1]? If so, we gratefully request your help.

As part of an analysis of how PROV-O is being used across vocabularies 
and ontologies, we ask you to answer just two (OK, three in some cases, 
and no more!) very short questions, available at:


http://goo.gl/forms/xS1qF4SVsz

The survey is open until the 29th of February 2016. Results of the 
analysis will be made available to everyone interested.


Many Thanks for your support,

Monika

[1] https://www.w3.org/TR/prov-o/



Help about RDF Datasets

2015-01-27 Thread Regina Paola Ticona Herrera
Dear all,

I'm running tests with different RDF datasets such as DBpedia,
WordNet and LinkedGeoData, but I also need to test with end users' datasets
(where I can find blank nodes, collections, etc.). If someone could send
me a link or a dataset to test with, I would be very grateful.

Sincerely,

Regina Ticona

-- 



Regina P. Ticona Herrera

Ph.D. Student in Computer Science
Laboratoire d'informatique de l'UPPA
Université de Pau et des Pays de l'Adour
Ecole Doctorale de Sciences Exactes
et leur Applications

BP 115

64013 Pau CEDEX, France
Tel.: +33 559407411

Fax: +33 559407445

Email: rtic...@iutbayonne.univ-pau.fr rherr...@iutbayonne.univ-pau.fr

Web: http://liuppa.univ-pau.fr/live/


Re: [help] semanticweb.org admin

2014-07-21 Thread Maxim Kolchin
Hi Annalisa,

Were you able to contact someone? I've sent emails to all the people
mentioned here, but no one has responded to me.

Thank you in advance!
Maxim Kolchin
PhD Student
ITMO University (National Research University)
E-mail: kolchin...@gmail.com
Tel.: +7 (911) 199-55-73


On Wed, Mar 26, 2014 at 6:17 PM, Stéphane Corlosquet
scorlosq...@gmail.com wrote:
 Knud Möller was the main developer of this site:
 http://www.linkedin.com/in/knudmoeller / https://twitter.com/knudmoeller


 On Wed, Mar 26, 2014 at 9:34 AM, Anna Lisa Gentile
 a.l.gent...@dcs.shef.ac.uk wrote:

 Hi guys, just a quick question.
 Does anyone know who to contact for technical questions about
 http://data.semanticweb.org ?
 The admin contact ad...@data.semanticweb.org seems unreachable atm.
 Thanks you!
 Annalisa

 --
 Anna Lisa Gentile
 Research Associate
 Department of Computer Science
 University of Sheffield
 http://staffwww.dcs.shef.ac.uk/people/A.L.Gentile
 office: +44 (0)114 222 1876




 --
 Steph.



Re: [help] semanticweb.org admin

2014-07-21 Thread Gannon Dick
Apparently SemanticWeb.Org is run by a Mr./Ms(?) W. Ki. First name Wi. He/she 
is quite friendly, but prefers to speak RDF, which I often mistake for a Pidgin 
Klingon variant.  Maybe it's just me.

In any case, here is the community's home page: 
http://semanticweb.org/wiki/Main_Page

I suspect the community may be located in the Lake Wobegon District of Erewhon, 
Minnesota because everybody seems to have above average expertise and Wi Ki 
makes executive decisions.

HTH

--Gannon



On Mon, 7/21/14, Maxim Kolchin kolchin...@gmail.com wrote:

 Subject: Re: [help] semanticweb.org admin
 To: Stéphane Corlosquet scorlosq...@gmail.com
 Cc: a.l.gent...@dcs.shef.ac.uk, public-lod@w3.org
 Date: Monday, July 21, 2014, 7:25 AM

 Hi Annalisa,

 Were you able to contact someone? I've sent emails to all the people
 mentioned here, but no one has responded to me.

 Thank you in advance!
 Maxim Kolchin
 PhD Student
 ITMO University (National Research University)
 E-mail: kolchin...@gmail.com
 Tel.: +7 (911) 199-55-73

 On Wed, Mar 26, 2014 at 6:17 PM, Stéphane Corlosquet
 scorlosq...@gmail.com wrote:
  Knud Möller was the main developer of this site:
  http://www.linkedin.com/in/knudmoeller / https://twitter.com/knudmoeller

  On Wed, Mar 26, 2014 at 9:34 AM, Anna Lisa Gentile
  a.l.gent...@dcs.shef.ac.uk wrote:
   Hi guys, just a quick question.
   Does anyone know who to contact for technical questions about
   http://data.semanticweb.org ?
   The admin contact ad...@data.semanticweb.org seems unreachable atm.
   Thank you!
   Annalisa



Re: [help] semanticweb.org admin

2014-07-21 Thread Bernard Vatant
According to http://whois.net/whois/semanticweb.org the DNS registrant is
Stefan Decker, based somewhere in Galway, Ireland :)

2014-07-21 15:40 GMT+02:00 Gannon Dick gannon_d...@yahoo.com:

 Apparently SemanticWeb.Org is run by a Mr./Ms(?) W. Ki. First name Wi.
 He/she is quite friendly, but perfers to speak RDF, which I often mistake a
 for Pidgin Klingon variant.  Maybe it's just me.

 In any case, here is community residence home page:
 http://semanticweb.org/wiki/Main_Page

 I suspect the community may be located in the Lake Wobegon District of
 Erewhon, Minnesota because everybody seems to have above average expertise
 and Wi Ki makes executive decisions.

 HTH

 --Gannon


 
 On Mon, 7/21/14, Maxim Kolchin kolchin...@gmail.com wrote:

  Hi Annalisa,

  Were you able to contact someone? I've sent emails to all the people
  mentioned here, but no one has responded to me.

  Thank you in advance!
  Maxim Kolchin





-- 

*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
http://google.com/+BernardVatant

*Mondeca*
35 boulevard de Strasbourg 75010 Paris
www.mondeca.com
Follow us on Twitter : @mondecanews http://twitter.com/#%21/mondecanews
--


Re: [help] semanticweb.org admin

2014-07-21 Thread Maxim Kolchin
Annalisa's response below...

Maxim

On Mon, Jul 21, 2014 at 4:37 PM, Maxim Kolchin kolchin...@gmail.com wrote:
 Thank you very much! I'll send them my questions in a separate mail.

 Maxim

 On Mon, Jul 21, 2014 at 4:35 PM, Anna Lisa Gentile
 a.gent...@sheffield.ac.uk wrote:
 Hi Maxim,
 I think you should now contact Hazem Safwat and Siegfried Handschuh (both in
 \cc).

 Bests

 Annalisa

 --
 Anna Lisa Gentile
 Research Associate
 Department of Computer Science
 University of Sheffield
 www: http://staffwww.dcs.shef.ac.uk/people/A.L.Gentile
 email: a.gent...@dcs.shef.ac.uk

 office: +44 (0)114 222 1876
 skype: anlige




 On 21/07/2014 13:25, Maxim Kolchin wrote:

 Hi Annalisa,

 Were you able to contact someone? I've sent emails to all the people
 mentioned here, but no one has responded to me.

 Thank you in advance!
 Maxim Kolchin
 PhD Student
 ITMO University (National Research University)
 E-mail: kolchin...@gmail.com
 Tel.: +7 (911) 199-55-73


 On Wed, Mar 26, 2014 at 6:17 PM, Stéphane Corlosquet
 scorlosq...@gmail.com wrote:

 Knud Möller was the main developer of this site:
 http://www.linkedin.com/in/knudmoeller / https://twitter.com/knudmoeller


 On Wed, Mar 26, 2014 at 9:34 AM, Anna Lisa Gentile
 a.l.gent...@dcs.shef.ac.uk wrote:

 Hi guys, just a quick question.
 Does anyone know who to contact for technical questions about
 http://data.semanticweb.org ?
 The admin contact ad...@data.semanticweb.org seems unreachable atm.
 Thanks you!
 Annalisa

 --
 Anna Lisa Gentile
 Research Associate
 Department of Computer Science
 University of Sheffield
 http://staffwww.dcs.shef.ac.uk/people/A.L.Gentile
 office: +44 (0)114 222 1876




 --
 Steph.





Re: [help] semanticweb.org admin

2014-07-21 Thread Gannon Dick
Hmmm ... Postal Code DERI, we're making some progress :)

On Mon, 7/21/14, Bernard Vatant bernard.vat...@mondeca.com wrote:

 Subject: Re: [help] semanticweb.org admin
 To: Gannon Dick gannon_d...@yahoo.com
 Cc: public-lod@w3.org public-lod@w3.org, Stefan Decker stefan.dec...@deri.org
 Date: Monday, July 21, 2014, 8:51 AM

 According to http://whois.net/whois/semanticweb.org the DNS registrant is
 Stefan Decker, based somewhere in Galway, Ireland :)

 2014-07-21 15:40 GMT+02:00 Gannon Dick gannon_d...@yahoo.com:

  Apparently SemanticWeb.Org is run by a Mr./Ms(?) W. Ki. First name Wi.
  He/she is quite friendly, but prefers to speak RDF, which I often mistake
  for a Pidgin Klingon variant.  Maybe it's just me.

  In any case, here is the community's home page:
  http://semanticweb.org/wiki/Main_Page

  I suspect the community may be located in the Lake Wobegon District of
  Erewhon, Minnesota because everybody seems to have above average expertise
  and Wi Ki makes executive decisions.

  HTH

  --Gannon

 --
 Bernard Vatant
 Vocabularies & Data Engineering
 Tel : + 33 (0)9 71 48 84 59
 Skype : bernard.vatant
 http://google.com/+BernardVatant

 Mondeca
 35 boulevard de Strasbourg 75010 Paris
 www.mondeca.com
 Follow us on Twitter : @mondecanews
 --
 
 
 
 




New RDFeasy AMIs help you build your own knowledge graph

2014-06-06 Thread Paul Houle
I just got the Complete Edition of :BaseKB approved at the AWS marketplace

https://github.com/paulhoule/RDFeasy/wiki/RDFeasy-BaseKB-Gold-Complete
https://aws.amazon.com/marketplace/pp/B00KRKRYW0

Containing all valid and relevant facts from Freebase, this product
holds about twice as much data as the Compact Edition

https://github.com/paulhoule/RDFeasy/wiki/RDFeasy-Zero
https://aws.amazon.com/marketplace/pp/B00KDO5IFA

and thus requires a machine twice the size.  The RDFeasy
distributions are the only RDF data products that meet the standards
of the Amazon Marketplace and are particularly economical because they
use the SSD storage that comes free with the machine instead of, like
some other distributions, depending on expensive provisioned EBS I/O,
which costs an additional $120 or so per month even when you aren't
running the instance.

People are used to RDF processing of billion-triple files being
difficult and expensive, and are often skeptical about RDFeasy, but when
they try it they can feel the difference from their legacy solutions
right away.

RDFeasy is open source software with documented operation protocols,
but you can also use the following Base AMI

https://github.com/paulhoule/RDFeasy/wiki/RDFeasy-Zero
https://aws.amazon.com/marketplace/pp/B00KRI3DWW

to get a bundle of hardware and software into which you can load your
own RDF data and package it as an AMI that can also be distributed in
the AWS Marketplace.

-- 
Paul Houle
Expert on Freebase, DBpedia, Hadoop and RDF
(607) 539 6254 | paul.houle on Skype | ontolo...@gmail.com



[help] semanticweb.org admin

2014-03-26 Thread Anna Lisa Gentile

Hi guys, just a quick question.
Does anyone know who to contact for technical questions about 
http://data.semanticweb.org ?

The admin contact ad...@data.semanticweb.org seems unreachable atm.
Thank you!
Annalisa

--
Anna Lisa Gentile
Research Associate
Department of Computer Science
University of Sheffield
http://staffwww.dcs.shef.ac.uk/people/A.L.Gentile
office: +44 (0)114 222 1876



Re: [help] semanticweb.org admin

2014-03-26 Thread Kevin Ford

Try

1) http://semanticweb.org/wiki/Markus_Kr%C3%B6tzsch

and if that doesn't work

2) http://semanticweb.org/wiki/Stefan_Decker

Yours,
kevin



On 03/26/2014 09:34 AM, Anna Lisa Gentile wrote:

Hi guys, just a quick question.
Does anyone know who to contact for technical questions about
http://data.semanticweb.org ?
The admin contact ad...@data.semanticweb.org seems unreachable atm.
Thanks you!
Annalisa

--
Anna Lisa Gentile
Research Associate
Department of Computer Science
University of Sheffield
http://staffwww.dcs.shef.ac.uk/people/A.L.Gentile
office: +44 (0)114 222 1876





Re: [help] semanticweb.org admin

2014-03-26 Thread Stéphane Corlosquet
Knud Möller was the main developer of this site:
http://www.linkedin.com/in/knudmoeller / https://twitter.com/knudmoeller


On Wed, Mar 26, 2014 at 9:34 AM, Anna Lisa Gentile 
a.l.gent...@dcs.shef.ac.uk wrote:

  Hi guys, just a quick question.
 Does anyone know who to contact for technical questions about
 http://data.semanticweb.org ?
 The admin contact ad...@data.semanticweb.org seems unreachable atm.
 Thanks you!
 Annalisa

 --
 Anna Lisa Gentile
 Research Associate
 Department of Computer Science
 University of Sheffield
 http://staffwww.dcs.shef.ac.uk/people/A.L.Gentile
 office: +44 (0)114 222 1876




-- 
Steph.


Re: [Dbpedia-discussion] MapReduce expert needed to help DBpedia [as GSoC co-mentor]

2014-03-07 Thread Nicolas Torzec
I understand the need for commitments.
Let’s take it offline?

-Nicolas.





From: Dimitris Kontokostas kontokos...@informatik.uni-leipzig.de
Date: Thursday, March 6, 2014 at 11:32 PM
To: Nicolas Torzec torz...@yahoo-inc.com
Cc: semantic-...@w3.org, Linked Data community public-lod@w3.org,
 DBpedia Discussions dbpedia-discuss...@lists.sourceforge.net,
 DBpedia Developers dbpedia-develop...@lists.sourceforge.net,
 dbp-spotlight-us...@lists.sourceforge.net,
 DBpedia Spotlight Developers dbp-spotlight-develop...@lists.sourceforge.net
Subject: Re: [Dbpedia-discussion] MapReduce expert needed to help DBpedia [as 
GSoC co-mentor]




On Thu, Mar 6, 2014 at 6:42 PM, Nicolas Torzec torz...@yahoo-inc.com wrote:
Great idea and much needed move ;)

Really good to know that :) We don't have a direct use case for this idea, we 
just thought it would increase DBpedia usage in big data pipelines

Within the Hadoop platform, the MapReduce framework is focused on distributed 
batch processing.
Other frameworks are more focused on streaming…
=> Have you considered the pros and cons?

Actually no, this is one of the reasons we need the expert. We have a very 
general idea of the existing frameworks but cannot make that decision with 
confidence.

FYI, we are using the DBpedia Extraction framework at Yahoo Labs for some 
projects, and have been thinking about porting it to Hadoop for some time.
We may be able to help…

Since you (Yahoo Labs) can provide one of our use cases, it could all fit very 
well.
However, we'd need some commitment. The application period starts next week [1] 
and if we don't find anyone we'll have to drop this.

Regarding the workflow, we will provide the DBpedia know-how and the expert 
will have two tasks:
1) ensure that the student's application is technically sound and, if the 
student gets accepted,
2) periodically (weekly) check their progress during the coding period.

Best,
Dimitris

[1] https://www.google-melange.com/gsoc/events/google/gsoc2014


--
Nicolas Torzec
Yahoo Labs


From: Dimitris Kontokostas kontokos...@informatik.uni-leipzig.de
Date: Thursday, March 6, 2014 at 5:04 AM
To: semantic-...@w3.org, Linked Data community public-lod@w3.org,
 DBpedia Discussions dbpedia-discuss...@lists.sourceforge.net,
 DBpedia Developers dbpedia-develop...@lists.sourceforge.net,
 dbp-spotlight-us...@lists.sourceforge.net,
 DBpedia Spotlight Developers dbp-spotlight-develop...@lists.sourceforge.net
Subject: [Dbpedia-discussion] MapReduce expert needed to help DBpedia [as GSoC 
co-mentor]

Dear all,

We want to adapt the DBpedia extraction framework to work with a MapReduce 
framework. [1]

We want to implement this idea through GSoC 14 and already got two interested 
students [2] [3].
Unfortunately we are not experienced in this field and our existing contacts 
could not join. Thus,  we are looking for someone to help us mentor the 
technical aspects of this project.

About GSoC (http://en.wikipedia.org/wiki/GSoC)
The Google Summer of Code (GSoC) is an annual program, first held from May to 
August 2005, in which Google awards stipends (of US$5,500, as of 2014) to all 
students who successfully complete a requested free and open-source software 
coding project during the summer.
See some additional info on our page [4]

Best,
Dimitris

[1] http://wiki.dbpedia.org/gsoc2014/ideas/ExtractionwithMapReduce/
[2] student #1: 
http://sourceforge.net/mailarchive/forum.php?thread_name=CA%2Bu4%2Ba3g3dSd9L%3DM173hryYPp9HjwtNYgUU6Jcedy9MUAmzMVA%40mail.gmail.com&forum_name=dbpedia-gsoc
[3] student #2: 
http://sourceforge.net/p/dbpedia/mailman/dbpedia-gsoc/thread/CAOk94WbB7%2BEzaWveP4OWCGeXvKdVUv790wAL%2BuRsoxTb1VEDeQ%40mail.gmail.com/#msg32063932
[4] http://wiki.dbpedia.org/gsoc2014?v=kx0#h358-6


--
Dimitris Kontokostas
Department of Computer Science, University of Leipzig
Research Group: http://aksw.org
Homepage: http://aksw.org/DimitrisKontokostas


MapReduce expert needed to help DBpedia [as GSoC co-mentor]

2014-03-06 Thread Dimitris Kontokostas
Dear all,

We want to adapt the DBpedia extraction framework to work with a MapReduce
framework. [1]

We want to implement this idea through GSoC 14 and already got two
interested students [2] [3].
Unfortunately we are not experienced in this field and our existing
contacts could not join. Thus,  we are looking for someone to help us
mentor the technical aspects of this project.

About GSoC (http://en.wikipedia.org/wiki/GSoC)
The *Google Summer of Code* (*GSoC*) is an annual program, first held from
May to August 2005, in which Google awards stipends (of US$5,500, as of 2014)
to all students who successfully complete a requested free and open-source
software coding project during the summer.
See some additional info on our page [4]

Best,
Dimitris

[1] http://wiki.dbpedia.org/gsoc2014/ideas/ExtractionwithMapReduce/
[2] student #1: 
http://sourceforge.net/mailarchive/forum.php?thread_name=CA%2Bu4%2Ba3g3dSd9L%3DM173hryYPp9HjwtNYgUU6Jcedy9MUAmzMVA%40mail.gmail.com&forum_name=dbpedia-gsoc
[3] student #2: 
http://sourceforge.net/p/dbpedia/mailman/dbpedia-gsoc/thread/CAOk94WbB7%2BEzaWveP4OWCGeXvKdVUv790wAL%2BuRsoxTb1VEDeQ%40mail.gmail.com/#msg32063932
[4] http://wiki.dbpedia.org/gsoc2014?v=kx0#h358-6


-- 
Dimitris Kontokostas
Department of Computer Science, University of Leipzig
Research Group: http://aksw.org
Homepage: http://aksw.org/DimitrisKontokostas


Re: [Dbpedia-discussion] MapReduce expert needed to help DBpedia [as GSoC co-mentor]

2014-03-06 Thread Nicolas Torzec
Great idea and much needed move ;)

Within the Hadoop platform, the MapReduce framework is focused on distributed 
batch processing.
Other frameworks are more focused on streaming…
=> Have you considered the pros and cons?

FYI, we are using the DBpedia Extraction framework at Yahoo Labs for some 
projects, and have been thinking about porting it to Hadoop for some time.
We may be able to help…

--
Nicolas Torzec
Yahoo Labs


From: Dimitris Kontokostas kontokos...@informatik.uni-leipzig.de
Date: Thursday, March 6, 2014 at 5:04 AM
To: semantic-...@w3.org, Linked Data community public-lod@w3.org,
 DBpedia Discussions dbpedia-discuss...@lists.sourceforge.net,
 DBpedia Developers dbpedia-develop...@lists.sourceforge.net,
 dbp-spotlight-us...@lists.sourceforge.net,
 DBpedia Spotlight Developers dbp-spotlight-develop...@lists.sourceforge.net
Subject: [Dbpedia-discussion] MapReduce expert needed to help DBpedia [as GSoC 
co-mentor]

Dear all,

We want to adapt the DBpedia extraction framework to work with a MapReduce 
framework. [1]

We want to implement this idea through GSoC 14 and already got two interested 
students [2] [3].
Unfortunately we are not experienced in this field and our existing contacts 
could not join. Thus,  we are looking for someone to help us mentor the 
technical aspects of this project.

About GSoC (http://en.wikipedia.org/wiki/GSoC)
The Google Summer of Code (GSoC) is an annual program, first held from May to 
August 2005, in which Google awards stipends (of US$5,500, as of 2014) to all 
students who successfully complete a requested free and open-source software 
coding project during the summer.
See some additional info on our page [4]

Best,
Dimitris

[1] http://wiki.dbpedia.org/gsoc2014/ideas/ExtractionwithMapReduce/
 [2] student #1: 
 http://sourceforge.net/mailarchive/forum.php?thread_name=CA%2Bu4%2Ba3g3dSd9L%3DM173hryYPp9HjwtNYgUU6Jcedy9MUAmzMVA%40mail.gmail.com&forum_name=dbpedia-gsoc
 [3] student #2: 
 http://sourceforge.net/p/dbpedia/mailman/dbpedia-gsoc/thread/CAOk94WbB7%2BEzaWveP4OWCGeXvKdVUv790wAL%2BuRsoxTb1VEDeQ%40mail.gmail.com/#msg32063932
[4] http://wiki.dbpedia.org/gsoc2014?v=kx0#h358-6


--
Dimitris Kontokostas
Department of Computer Science, University of Leipzig
Research Group: http://aksw.org
Homepage: http://aksw.org/DimitrisKontokostas


Re: [Dbpedia-discussion] MapReduce expert needed to help DBpedia [as GSoC co-mentor]

2014-03-06 Thread Dimitris Kontokostas
On Thu, Mar 6, 2014 at 6:42 PM, Nicolas Torzec torz...@yahoo-inc.com wrote:

  Great idea and much needed move ;)


Really good to know that :) We don't have a direct use case for this idea,
we just thought it would increase DBpedia usage in big data pipelines


  Within the Hadoop platform, the MapReduce framework is focused on
 distributed batch processing.
 Other frameworks are more focused on streaming…
 => Have you considered the pros and cons?


Actually no, this is one of the reasons we need the expert. We have a very
general idea of the existing frameworks but cannot make that decision with
confidence.


 FYI, we are using the DBpedia Extraction framework at Yahoo Labs for some
 projects, and have been thinking about porting it to Hadoop for some time.
 We may be able to help…


Since you (Yahoo Labs) can provide one of our use cases, it could all fit
very well.
However, we'd need some commitment. The application period starts next week
[1] and if we don't find anyone we'll have to drop this.

Regarding the workflow, we will provide the DBpedia know-how and the
expert will have two tasks:
1) ensure that the student's application is technically sound and, if the
student gets accepted,
2) periodically (weekly) check their progress during the coding period.

Best,
Dimitris

[1] https://www.google-melange.com/gsoc/events/google/gsoc2014



  --
 Nicolas Torzec
 Yahoo Labs


   From: Dimitris Kontokostas kontokos...@informatik.uni-leipzig.de
 Date: Thursday, March 6, 2014 at 5:04 AM
 To: semantic-...@w3.org semantic-...@w3.org, Linked Data community 
 public-lod@w3.org, DBpedia Discussions 
 dbpedia-discuss...@lists.sourceforge.net, DBpediaDevelopers 
 dbpedia-develop...@lists.sourceforge.net, 
 dbp-spotlight-us...@lists.sourceforge.net 
 dbp-spotlight-us...@lists.sourceforge.net, DBpediaSpotlight Developers 
 dbp-spotlight-develop...@lists.sourceforge.net
 Subject: [Dbpedia-discussion] MapReduce expert needed to help DBpedia [as
 GSoC co-mentor]

   Dear all,

  We want to adapt the DBpedia extraction framework to work with a
 MapReduce framework. [1]

  We want to implement this idea through GSoC 14 and already got two
 interested students [2] [3].
 Unfortunately we are not experienced in this field and our existing
 contacts could not join. Thus,  we are looking for someone to help us
 mentor the technical aspects of this project.

  About GSoC (http://en.wikipedia.org/wiki/GSoC)
  The *Google Summer of Code* (*GSoC*) is an annual program, first held
 from May to August 2005, in which Google awards stipends (of US$5,500, as
 of 2014) to all students who successfully complete a requested free and
 open-source software coding project during the summer.
  See some additional info on our page [4]

  Best,
 Dimitris

 [1] http://wiki.dbpedia.org/gsoc2014/ideas/ExtractionwithMapReduce/
 [2] student #1: 
 http://sourceforge.net/mailarchive/forum.php?thread_name=CA%2Bu4%2Ba3g3dSd9L%3DM173hryYPp9HjwtNYgUU6Jcedy9MUAmzMVA%40mail.gmail.com&forum_name=dbpedia-gsoc
 [3] student #2: 
 http://sourceforge.net/p/dbpedia/mailman/dbpedia-gsoc/thread/CAOk94WbB7%2BEzaWveP4OWCGeXvKdVUv790wAL%2BuRsoxTb1VEDeQ%40mail.gmail.com/#msg32063932
 [4] http://wiki.dbpedia.org/gsoc2014?v=kx0#h358-6


  --
 Dimitris Kontokostas
 Department of Computer Science, University of Leipzig
 Research Group: http://aksw.org
 Homepage:http://aksw.org/DimitrisKontokostas






-- 
Dimitris Kontokostas
Department of Computer Science, University of Leipzig
Research Group: http://aksw.org
Homepage:http://aksw.org/DimitrisKontokostas


Computer science publisher needs help with RDFa/HTTP technical issue [Re: How are RDFa clients expected to handle 301 Moved Permanently?]

2013-10-25 Thread Christoph LANGE
Dear all,

it seems the RDFa mailing list is not that active any more, as I haven't
received an answer to this question in two weeks.  As my question is also
related to LOD publishing, let me try to ask it here.  We, the
publishers of CEUR-WS.org, are facing a technical issue involving RDFa
and hash vs. slash URIs/URLs.

I believe that when an open access publisher that is a big player, at
least in the field of computer science workshops, introduces RDFa, this
has the potential to become a very interesting use case for RDFa.
(Please see also our blog at http://ceurws.wordpress.com/ for further
planned innovations.)

While I think I have very good knowledge of RDFa, we are in an early
phase of implementing RDFa in the specific setting of CEUR-WS.org.
Therefore we would highly appreciate any input on how to get our RDFa
implementation right.  Please see below for the original message with
the gory technical details.

Cheers, and thanks in advance,

Christoph (CEUR-WS.org technical editor)

On 2013-10-10 16:54, Christoph LANGE wrote:
 Dear RDFa community,

 I am writing in the role of technical editor of the CEUR-WS.org open
 access publishing service (http://ceur-ws.org/), which many of you have
 used before.

 We provide a tool that allows proceedings editors to include RDFa
 annotations into their tables of content
 (https://github.com/clange/ceur-make).  FYI: roughly 1 in 6 proceedings
 volumes has been using RDFa recently.

 We are now possibly running into a problem by having changed the
 official URLs of our volume pages from, e.g.,
 http://ceur-ws.org/Vol-994/ into http://ceur-ws.org/Vol-994, i.e.
 dropping the trailing slash.  In short, RDFa requested from
 http://ceur-ws.org/Vol-994 contains broken URIs in outgoing links, as
 RDFa clients don't seem to follow the HTTP 301 Moved Permanently,
 which points from the slash-less URL to the slashed URL (which still
 exists, as our server-side directory layout hasn't changed).  And I'm
 wondering whether that's something we should expect an RDFa client to
 do, or whether we need to fix our RDFa instead.

 Our rationale for dropping the trailing slash was the following:

 1. While at the moment all papers inside our volumes are PDF files, e.g.
 http://ceur-ws.org/Vol-994/paper-01.pdf, we are thinking about other
 content types (see
 http://ceurws.wordpress.com/2013/09/25/is-a-paper-just-a-pdf-file/), in
 particular directories containing accompanying data such as original
 research data, and the main entry point to such a paper could then be
 another HTML page in a subdirectory.

 2. As the user (here we mean a human using a browser) should not be
 responsible for knowing whether a paper, or a volume, is a file or a
 directory, we thought we'd use slash-less URLs throughout, and then let
 the server tell the browser (and thus the user) when some resource
 actually is a directory.

 (Do these considerations make sense?)

 This behaviour is implemented as follows (irrelevant headers stripped):

 $ wget -O /dev/null -S http://ceur-ws.org/Vol-1010
 --2013-10-10 16:33:57--  http://ceur-ws.org/Vol-1010
 Resolving ceur-ws.org... 137.226.34.227
 Connecting to ceur-ws.org|137.226.34.227|:80... connected.
 HTTP request sent, awaiting response...
HTTP/1.1 301 Moved Permanently
Location: http://ceur-ws.org/Vol-1010/
 Location: http://ceur-ws.org/Vol-1010/ [following]
 --2013-10-10 16:33:57--  http://ceur-ws.org/Vol-1010/
 Reusing existing connection to ceur-ws.org:80.
 HTTP request sent, awaiting response...
HTTP/1.1 200 OK

 But now RDFa clients don't seem to respect this redirect.  Please try
 for yourself with http://www.w3.org/2012/pyRdfa/ and
 http://linkeddata.uriburner.com/.  These are two freely accessible RDFa
 extractors I could think of, and I think they are based on different
 implementations.  (Am I right?)
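
 For what it's worth, a quick check with Python (a minimal sketch using the
 third-party requests library; an assumption for illustration, not part of the
 CEUR-WS setup) shows that following the redirect itself is easy; the open
 question is which URL a client then uses as the base:

     import requests

     r = requests.get("http://ceur-ws.org/Vol-1010")  # follows the 301 by default
     print(r.status_code)  # 200
     print(r.url)          # http://ceur-ws.org/Vol-1010/  (final URL after the redirect)
     # A client that resolves relative RDFa URIs against r.url, rather than
     # against the URL it was originally given, produces the correct links.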

 When you enter a slashed URI, e.g. http://ceur-ws.org/Vol-1010/, you get
 correct RDFa, in particular outgoing links to, e.g.,
 http://ceur-ws.org/Vol-1010/paper-01.pdf.  When you enter the same URI
 without a slash, the relative URIs that point from index.html to the
 papers, like <ol rel="dcterms:hasPart"><li about="paper-01.pdf">, resolve
 to http://ceur-ws.org/paper-01.pdf.

 Now I have the following questions:

 Are these RDFa clients broken?

 If they are not broken, what is broken on our side, and how can we
fix it?

 Is it acceptable that RDFa retrieved from a slash-less URL is broken,
 whereas RDFa from the slashed URL works?

 Is it OK to say that the canonical URL of something should be
 slash-less, whereas the semantic identifier of the same thing (if
 that's what we mean by its RDFa URI) should have a slash?  Or should
 both be the same?  (Note: I am well aware of the difference between
 information resources and non-information resources, but IMHO this
 difference doesn't apply here, as we publish online proceedings.
 http://ceur-ws.org/Vol-1010 _is_ the workshop volume, which has editors
 and contains papers; it is not just a page that 

Re: Computer science publisher needs help with RDFa/HTTP technical issue [Re: How are RDFa clients expected to handle 301 Moved Permanently?]

2013-10-25 Thread Kingsley Idehen

On 10/25/13 12:03 PM, Christoph LANGE wrote:

it seems the RDFa mailing list is not that active any more, as I haven't
got an answer for this question for two weeks.  As my question is also
related to LOD publishing, let me try to ask it here.  We, the
publishers of CEUR-WS.org, are facing a technical issue involving RDFa
and hash vs. slash URIs/URLs.


What is your problem re. entity denotation?

Simple rule of thumb:

1. Denote documents using URLs
2. Denote every other kind of entity using hash (as #) based HTTP URIs.

If # based HTTP URIs pose deployment problems, then you can consider 
using / based HTTP URIs, but you then have to attend to one of the 
following issues, which require tweaks to your data server:


1. Use the Path component (part) of your HTTP URIs to set up regular 
expression pattern-friendly markers that distinguish HTTP URIs that 
denote documents from those that denote every other type of entity -- 
basically, this is what you see re. /page/ (for description documents) 
and /resource/ (for every other entity type/kind) re., DBpedia


2. Use 303 to redirect entity URIs to the document URLs that denote 
their descriptors (description documents).


If using 303 redirection presents deployment challenges, bearing in mind 
latest revisions to HTTP, you can use a 200 OK instead of a 303, but you 
MUST place the URL of the entity descriptor (description document) in 
the Location: header of your HTTP responses, i.e., use HTTP response 
metadata to handle the ambiguity that / based HTTP URIs present.
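
As a rough illustration of option 2 above, here is a minimal sketch of a 303
redirect from an entity URI to its description document. It assumes a Flask
app and reuses the /resource/ vs /page/ split mentioned for DBpedia; the route
names are illustrative only, not anyone's actual deployment:

    from flask import Flask, redirect

    app = Flask(__name__)

    @app.route("/resource/<name>")
    def entity(name):
        # The entity URI serves no representation of its own; it 303-redirects
        # to the document that describes the entity.
        return redirect(f"/page/{name}", code=303)

    @app.route("/page/<name>")
    def page(name):
        # The description document is served as an ordinary 200 OK.
        return f"<html><body>Description of {name}</body></html>", 200

    if __name__ == "__main__":
        app.run()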


In my experience with RDFa, I've found it easiest to deploy using 
relative hash based HTTP URIs.


Links:

[1] http://bit.ly/15tk1Au -- hash based Linked Data URI illustrated
[2] http://bit.ly/11xnQ36 -- hashless or slash based Linked Data URI 
illustrated


--

Regards,

Kingsley Idehen 
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen









Re: Computer science publisher needs help with RDFa/HTTP technical issue [Re: How are RDFa clients expected to handle 301 Moved Permanently?]

2013-10-25 Thread Nathan Rixham
It's simpler than that and there are two quite simple issues.

1) They have said they have changed from /Vol-1010/ to /Vol-1010 when
they have not - as the 301 Moved Permanently to /Vol-1010/
illustrates, if they had moved URIs it would be the other way around,
/Vol-1010/ would 301 to /Vol-1010.

2) Differences between web browser and RDFa base URI calculation, and the
ambiguity of not being specific, have compounded and confused the issue
further.

To address the situation they can just be specific: set the base of
the document to be either http://ceur-ws.org/Vol-1010/ or
http://ceur-ws.org/Vol-1010. If they set it to the variant with the
trailing slash, they'll find both the HTML and the RDFa are correct; if
they set it to the variant without the trailing slash, they'll find both
the HTML and the RDFa have incorrect links.
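
As a quick illustration of why the base matters, here is a minimal Python
sketch (standard library only; the volume and paper names are simply the ones
from this thread) of the RFC 3986 resolution behind both outcomes:

    from urllib.parse import urljoin

    # Base with the trailing slash: the relative URI stays inside the volume.
    print(urljoin("http://ceur-ws.org/Vol-1010/", "paper-01.pdf"))
    # -> http://ceur-ws.org/Vol-1010/paper-01.pdf

    # Base without the trailing slash: "Vol-1010" is treated as the last path
    # segment and is replaced, giving the broken link reported earlier.
    print(urljoin("http://ceur-ws.org/Vol-1010", "paper-01.pdf"))
    # -> http://ceur-ws.org/paper-01.pdf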

Separately, it does raise the question of why uriburner and pyRdfa
both use the input URI http://ceur-ws.org/Vol-1010 as the base rather than
the one indicated by the HTTP 301 redirect, namely
http://ceur-ws.org/Vol-1010/ - perhaps this is an issue, or perhaps it
should be left as is to encourage the good practice of explicitly
saying what you mean.

Best, Nathan

On Fri, Oct 25, 2013 at 5:44 PM, Kingsley Idehen kide...@openlinksw.com wrote:
 On 10/25/13 12:03 PM, Christoph LANGE wrote:

 it seems the RDFa mailing list is not that active any more, as I haven't
 got an answer for this question for two weeks.  As my question is also
 related to LOD publishing, let me try to ask it here.  We, the
 publishers of CEUR-WS.org, are facing a technical issue involving RDFa
 and hash vs. slash URIs/URLs.


 What is your problem re. entity denotation?

 Simple rule of thumb:

 1. Denote documents using URLs
 2. Denote every other kind of entity using hash (as #) based HTTP URIs.

 If # based HTTP URIs pose deployment problems, then you can consider using
 / based HTTP URIs, but you then have to take look to one of the following
 issues that require tweaks to your data server:

 1. Use the Path component (part) of your HTTP URIs to set up regular
 expression pattern-friendly markers that distinguish HTTP URIs that denote
 documents from those that denote every other type of entity -- basically,
 this is what you see re. /page/ (for description documents) and
 /resource/ (for every other entity type/kind) re., DBpedia

 2. Use 303 to redirect entity URIs to the document URLs that denote their
 descriptors (description documents).

 If using 303 redirection presents deployment challenges, bearing in mind
 latest revisions to HTTP, you can use a 200 OK instead of a 303, but you
 MUST place the URL of the entity descriptor (description document) in the
 Location:  header of your HTTP responses i.e., use HTTP response metadata
 to handle the ambiguity that / based HTTP URIs present.

 In my experience with RDFa, I've found it easiest to deploy using relative
 hash based HTTP URIs.

 Links:

 [1] http://bit.ly/15tk1Au -- hash based Linked Data URI illustrated
 [2] http://bit.ly/11xnQ36 -- hashless or slash based Linked Data URI
 illustrated

 --

 Regards,

 Kingsley Idehen
 Founder  CEO
 OpenLink Software
 Company Web: http://www.openlinksw.com
 Personal Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca handle: @kidehen
 Google+ Profile: https://plus.google.com/112399767740508618350/about
 LinkedIn Profile: http://www.linkedin.com/in/kidehen








Re: Computer science publisher needs help with RDFa/HTTP technical issue [Re: How are RDFa clients expected to handle 301 Moved Permanently?]

2013-10-25 Thread Kingsley Idehen

On 10/25/13 12:56 PM, Nathan Rixham wrote:

It's simpler than that and there are two quite simple issues.

1) They have said they have changed from /Vol-1010/ to /Vol-1010 when
they have not - as the 301 Moved Permanently to /Vol-1010/
illustrates, if they had moved URIs it would be the other way around,
/Vol-1010/ would 301 to /Vol-1010.

2) Difference between web browser and rdfa base URI calculation and
ambiguity of not being specific have compounded and confused the issue
further.

To address the situation they can just be specific, set the base of
the document to be either http://ceur-ws.org/Vol-1010/ or
http://ceur-ws.org/Vol-1010, if they set it to be the variant with the
trailing slash, they'll find both HTML and RDFa are correct, if they
set it to be variant without the trailing slash they'll find both the
HTML and the RDFa have incorrect links.


Yes!



Separately, it does raise the question as to why uriburner and pyrdfa
both use the input URI http://ceur-ws.org/Vol-1010 as base rather than
the one instructed by the HTTP 301 redirect, namely
http://ceur-ws.org/Vol-1010/ - perhaps this is an issue, or perhaps it
should be left as is to encourage the good practise of explicitly
saying what you mean.


Ideally, we want to encourage the good practice of being explicit 
about what's being denoted :-)



Kingsley


Best, Nathan

On Fri, Oct 25, 2013 at 5:44 PM, Kingsley Idehen kide...@openlinksw.com wrote:

On 10/25/13 12:03 PM, Christoph LANGE wrote:

it seems the RDFa mailing list is not that active any more, as I haven't
got an answer for this question for two weeks.  As my question is also
related to LOD publishing, let me try to ask it here.  We, the
publishers of CEUR-WS.org, are facing a technical issue involving RDFa
and hash vs. slash URIs/URLs.


What is your problem re. entity denotation?

Simple rule of thumb:

1. Denote documents using URLs
2. Denote every other kind of entity using hash (as #) based HTTP URIs.

If # based HTTP URIs pose deployment problems, then you can consider using
/ based HTTP URIs, but you then have to take look to one of the following
issues that require tweaks to your data server:

1. Use the Path component (part) of your HTTP URIs to set up regular
expression pattern-friendly markers that distinguish HTTP URIs that denote
documents from those that denote every other type of entity -- basically,
this is what you see re. /page/ (for description documents) and
/resource/ (for every other entity type/kind) re., DBpedia

2. Use 303 to redirect entity URIs to the document URLs that denote their
descriptors (description documents).

If using 303 redirection presents deployment challenges, bearing in mind
latest revisions to HTTP, you can use a 200 OK instead of a 303, but you
MUST place the URL of the entity descriptor (description document) in the
Location:  header of your HTTP responses i.e., use HTTP response metadata
to handle the ambiguity that / based HTTP URIs present.

In my experience with RDFa, I've found it easiest to deploy using relative
hash based HTTP URIs.

Links:

[1] http://bit.ly/15tk1Au -- hash based Linked Data URI illustrated
[2] http://bit.ly/11xnQ36 -- hashless or slash based Linked Data URI
illustrated

--

Regards,

Kingsley Idehen
Founder  CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen










--

Regards,

Kingsley Idehen 
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen









Re: Request for Help: US Government Linked Data

2013-05-22 Thread Gannon Dick
Hi Jürgen,

Thanks for the Cartoon.
A mixture of education/outreach has always served the web well.
The US Government is a special case because they issue a lot of integrated 
facts into the Public Domain.  Civil Servants (in theory) never have to debate 
the quality of work product.  This system produces frictions not unique to US 
Territory.  The link ( 
http://www.rustprivacy.org/2013/egov/roadmap/NoMoneyInGovernment.pdf) makes the 
general case:  You want both tolerance and precision, but both are not 
necessary.  It is not reasonable to land on Mars directly (without a few 
orbits) or even drive from Berlin to Bonn and back in *exactly* the same time.  
You have to let the average happen.  Both Lord Kelvin (Absolute Zero) and 
Professor Einstein (Mass and Energy) played tricks at the far ends of the range 
to make a return trip possible.  To put it another way, you can level the 
playing field (and you should), but if you build a high brick wall in front of 
the Goal, then the game you are playing is not Football (Soccer or American 
Football). 
 Actually, it is not a fun game at all.

StratML can specify successive approximations (steps) to the goal, SKOS and RDF 
determine a median which is presumed close to the mean (average).  Watches were 
invented to agree with each other, not the sun, which is never on time for 
lunch.  The average is what it is.  Policy goals are not stupid and not fish.

--Gannon


 From: Jürgen Jakobitsch j.jakobit...@semantic-web.at
To: Gannon Dick gannon_d...@yahoo.com 
Cc: Eric Mill konkl...@gmail.com; Luca Matteis lmatt...@gmail.com; David 
Wood da...@3roundstones.com; community public-lod@w3.org; eGov W3C 
public-egov...@w3.org 
Sent: Tuesday, May 21, 2013 6:17 PM
Subject: Re: Request for Help: US Government Linked Data
 

it's a fish of course, not a frog [1]... (excuse climb typo)

wkr j

[1] 
http://smrt.ccel.ca/files/2012/08/Albert-Einstein-everyone-is-a-genius-but-if-you-judge-a-fish-by-its-ability-to-climb-a-tree.jpeg



- Original Message -
From: Jürgen Jakobitsch j.jakobit...@semantic-web.at
To: Gannon Dick gannon_d...@yahoo.com
Cc: Eric Mill konkl...@gmail.com, Luca Matteis lmatt...@gmail.com, 
David Wood da...@3roundstones.com, community public-lod@w3.org, eGov 
W3C public-egov...@w3.org
Sent: Wednesday, May 22, 2013 1:13:10 AM
Subject: Re: Request for Help: US Government Linked Data

hi,

for clarification, my comment was about this line [1].
it was meant to give you an idea that the comparison made cannot be left just so,
as it is full of suggestions and doesn't seem to be based on anything other
than emotion (as are most "is better than" debates).
the given judgement [1] suggests that StratML is the best solution for what
it was actually made for (=void) [3], by first comparing it with something whose
main goal is to create knowledge representations in the form of a controlled
vocabulary (thesaurus), and secondly comparing it with a whole data model whose
inherent ability to create a schema suitable to handle the given use case I am
convinced of.

you know... if you judge a frog by it's ability to clime a tree...

wkr jürgen


[1] Sorry to say, for reasons given, that StratML seems the better choice
for Strategic Policy Representation (rather than SKOS and RDF).
[2] http://en.wikipedia.org/wiki/List_of_XML_markup_languages
[3] from [2]: StratML is an XML vocabulary and schema for strategic and 
performance plans and reports

- Original Message -
From: Gannon Dick gannon_d...@yahoo.com
To: Eric Mill konkl...@gmail.com, Luca Matteis lmatt...@gmail.com
Cc: j jakobitsch j.jakobit...@semantic-web.at, David Wood 
da...@3roundstones.com, public-lod@w3.org community public-lod@w3.org, 
eGov W3C public-egov...@w3.org
Sent: Tuesday, May 21, 2013 10:02:09 PM
Subject: Re: Request for Help: US Government Linked Data

FWIW, having been labelled confused multiple times through the magic of email 
forwarding ... :-) 




From: Eric Mill konkl...@gmail.com
Subject: Re: Request for Help: US Government Linked Data



My (completely personal, unofficial) request of the LD community, as Project 
Open Data and its discussion threads grow, is to avoid a general summoning of 
the troops to this stuff. 
===
Yes, avoid circle the wagons too.  Data Models are important, and ... (cont.)

===

One of the things that was made obvious to me by that thread is how painfully 
easy it is for people 
who very much have the same awesome shared end goals in mind - more useful 
government data - to talk past each other. That only gets easier when comments 
get more emotional, and gauging one's success during a debate becomes a matter 
of quantity rather than quality.

As I said, I really valued the thread we had - and most especially, I love what 
POD is doing, and I think the US and Github are going to have a profound impact 
on how the world views policy making in the long run. The POD project

Re: Request for Help: US Government Linked Data

2013-05-21 Thread Gannon Dick
FWIW, having been labelled confused multiple times through the magic of email 
forwarding ... :-) 




 From: Eric Mill konkl...@gmail.com
Subject: Re: Request for Help: US Government Linked Data
 


My (completely personal, unofficial) request of the LD community, as Project 
Open Data and its discussion threads grow, is to avoid a general summoning of 
the troops to this stuff. 
===
Yes, avoid circle the wagons too.  Data Models are important, and ... (cont.)

===

One of the things that was made obvious to me by that thread is how painfully 
easy it is for people who very much have the same awesome shared end goals in 
mind - more useful government data - to talk past each other. That only gets 
easier when comments get more emotional, and gauging one's success during a 
debate becomes a matter of quantity rather than quality.

As I said, I really valued the thread we had - and most especially, I love what 
POD is doing, and I think the US and Github are going to have a profound impact 
on how the world views policy making in the long run. The POD project is going 
to be looked at by governments around the world, and they're going to evaluate 
POD based on the quality of those discussions (not the outcomes).
===
... outcomes are a dodgy gauge of Policy with Open Data for well defined 
reasons - and you do not conflate "well defined" with "benefits the Smartest 
Guys in the Room most", obviously.  In the Commercial world, the square root of 
a dollar (or Euro) is 10 dimes, the square root of a dime is a penny and the 
square root of 2 dollars is $1.41.  Facts just don't understand simple 
Economics, or something like that.  That said, Open Data does use an odd 
Coordinate System.  Map Makers will tell you that these maps have little 
navigation value.  They also expect you to know that the voids cannot be flown 
over or tunnelled under without a theory to support such an operation.  
"Because I think it would be nice if I could" isn't a theory.  There are quite 
a few other non-theories around too. The regions beyond the length of the 
Equator+Voids are real, not Outliers.  The shortest distance between two points 
is the long route around, sometimes.
http://en.wikipedia.org/wiki/File:MercTranSph_enhanced.png (no voids - 
Spherical)
http://upload.wikimedia.org/wikipedia/commons/9/93/MercTranEll.png (Elliptical 
with voids at the boundaries)

1) The voids never go away.  
2) Governments and the LD Community cannot treat Outliers as useless losers 
who don't get LD.

===

It's going to be great to have more of those discussions with everyone here, 
both about LD (and things other than LD!). There's a ton of unblazed trails 
here, and I'm just so excited to see where they go.

===
Me too.
===


-- Eric



On Mon, May 20, 2013 at 4:31 PM, Luca Matteis lmatt...@gmail.com wrote:

The pull request was merged. Great success! 


Let's continue this effort by submitting more LOD pull-requests.



On Sun, May 19, 2013 at 7:15 PM, Jürgen Jakobitsch SWC 
j.jakobit...@semantic-web.at wrote:

On Sun, 2013-05-19 at 08:19 -0700, Gannon Dick wrote:
 Dave,


 IMHO, the W3C Cookbook methods do not go far enough to define the
 short-term strategy game of which Americans are so fond.  The Federal
 Government must plan Social Policy from ante Meridian (AM) to post
 Meridian (PM).  Playing statistical games with higher frequencies or
 modified time spans is fun, but it is not Science (a Free Energy
 Calculation).


 http://www.rustprivacy.org/2013/egov/roadmap/NoMoneyInGovernment.pdf


 Sorry to say, for reasons given, that StratML seems the better choice
 for Strategic Policy Representation (rather than SKOS and RDF).

sorry, no offence but above are two lines of total confusion...

wkr j




 --Gannon



 __

 From: David Wood da...@3roundstones.com
 To: public-lod@w3.org community public-lod@w3.org
 Sent: Saturday, May 18, 2013 8:59 AM
 Subject: Re: Request for Help: US Government Linked Data


 Hi all,

 I take it back: Don't just comment.

 We need to introduce pull requests into the Project Open Data
 documents that add Linked Data terms, examples and guidelines to the
 existing material.

 There are a few scattered RDFa references in relation to schema.org,
 but most of the Linked Data material has been removed from the
 documents.  We need to get this back in, or existing Linked Data efforts
 within the US Government might very well be hurt.

 Please help.  Thanks.

 Regards,
 Dave
 --
 http://about.me/david_wood



 On May 18, 2013, at 09:16, David Wood da...@3roundstones.com wrote:

  Hi all,
 
  Parts of the US Government have been discussing the role of Linked
 Data in government agencies and whether Linked Data is what the Obama
 Administration meant when they mandated machine readable data.
 Unsurprisingly, some people like to do things the old ways, with a
 three-tier architecture and without

Re: Request for Help: US Government Linked Data

2013-05-21 Thread Jürgen Jakobitsch
hi,

for clarification, my comment was about this line [1].
it was meant to give you an idea that the comparison made cannot be left just so,
as it is full of suggestions and doesn't seem to be based on anything other
than emotion (as are most "is better than" debates).
the given judgement [1] suggests that StratML is the best solution for what
it was actually made for (=void) [3], by first comparing it with something whose
main goal is to create knowledge representations in the form of a controlled
vocabulary (thesaurus), and secondly comparing it with a whole data model whose
inherent ability to create a schema suitable to handle the given use case I am
convinced of.

you know... if you judge a frog by it's ability to clime a tree...

wkr jürgen


[1] Sorry to say, for reasons given, that StratML seems the better choice
for Strategic Policy Representation (rather than SKOS and RDF).
[2] http://en.wikipedia.org/wiki/List_of_XML_markup_languages
[3] from [2]: StratML is an XML vocabulary and schema for strategic and 
performance plans and reports

- Original Message -
From: Gannon Dick gannon_d...@yahoo.com
To: Eric Mill konkl...@gmail.com, Luca Matteis lmatt...@gmail.com
Cc: j jakobitsch j.jakobit...@semantic-web.at, David Wood 
da...@3roundstones.com, public-lod@w3.org community public-lod@w3.org, 
eGov W3C public-egov...@w3.org
Sent: Tuesday, May 21, 2013 10:02:09 PM
Subject: Re: Request for Help: US Government Linked Data

FWIW, having been labelled confused multiple times through the magic of email 
forwarding ... :-) 




 From: Eric Mill konkl...@gmail.com
Subject: Re: Request for Help: US Government Linked Data
 


My (completely personal, unofficial) request of the LD community, as Project 
Open Data and its discussion threads grow, is to avoid a general summoning of 
the troops to this stuff. 
===
Yes, avoid "circle the wagons" too.  Data Models are important, and ... (cont.)

===

One of the things that was made obvious to me by that thread is how painfully 
easy it is for people who very much have the same awesome shared end goals in 
mind - more useful government data - to talk past each other. That only gets 
easier when comments get more emotional, and gauging one's success during a 
debate becomes a matter of quantity rather than quality.

As I said, I really valued the thread we had - and most especially, I love what 
POD is doing, and I think the US and Github are going to have a profound impact 
on how the world views policy making in the long run. The POD project is going 
to be looked at by governments around the world, and they're going to evaluate 
POD based on the quality of those discussions (not the outcomes).
===
... outcomes are a dodgy gauge of Policy with Open Data for well defined 
reasons - and you should not conflate "well defined" with "benefits the Smartest 
Guys in the Room most", obviously.  In the Commercial world, the square root of 
a dollar (or Euro) is 10 dimes, the square root of a dime is a penny and the 
square root of 2 dollars is $1.41.  Facts just don't understand simple 
Economics, or something like that.  That said, Open Data does use an odd 
Coordinate System.  Map Makers will tell you that these maps have little 
navigation value.  They also expect you to know that the voids cannot be flown 
over or tunnelled under without a theory to support such an operation.  
Because I think it would be nice if I could isn't a theory.  There are quite 
a few other non-theories around too. The regions beyond the length of the 
Equator+Voids are real, not Outliers.  The shortest distance between two points 
is the long route around, sometimes.
http://en.wikipedia.org/wiki/File:MercTranSph_enhanced.png (no voids - 
Spherical)
http://upload.wikimedia.org/wikipedia/commons/9/93/MercTranEll.png (Elliptical 
with voids at the boundaries)

1) The voids never go away.  
2) Governments and the LD Community cannot treat Outliers as useless losers 
who don't get LD.

===

It's going to be great to having more of those discussions with everyone here, 
both about LD (and things other than LD!). There's a ton of unblazed trails 
here, and I'm just so excited to see where they go.

===
Me too.
===


-- Eric



On Mon, May 20, 2013 at 4:31 PM, Luca Matteis lmatt...@gmail.com wrote:

The pull request was merged. Great success! 


Let's continue this effort by submitting more LOD pull-requests.



On Sun, May 19, 2013 at 7:15 PM, Jürgen Jakobitsch SWC 
j.jakobit...@semantic-web.at wrote:

On Sun, 2013-05-19 at 08:19 -0700, Gannon Dick wrote:
 Dave,


 IMHO, the W3C Cookbook methods do not go far enough to define the
 short-term strategy game of which Americans are so fond.  The Federal
 Government must plan Social Policy from ante Meridian (AM) to post
 Meridian (PM).  Playing statistical games with higher frequencies or
 modified time spans is fun, but it is not Science (a Free Energy
 Calculation).


  http://www.rustprivacy.org/2013/egov/roadmap/NoMoneyInGovernment.pdf

Re: Request for Help: US Government Linked Data

2013-05-20 Thread Luca Matteis
The pull request was merged. Great success!

Let's continue this effort by submitting more LOD pull-requests.


On Sun, May 19, 2013 at 7:15 PM, Jürgen Jakobitsch SWC 
j.jakobit...@semantic-web.at wrote:

 On Sun, 2013-05-19 at 08:19 -0700, Gannon Dick wrote:
  Dave,
 
 
  IMHO, the W3C Cookbook methods do not go far enough to define the
  short-term strategy game of which Americans are so fond.  The Federal
  Government must plan Social Policy from ante Meridian (AM) to post
  Meridian (PM).  Playing statistical games with higher frequencies or
  modified time spans is fun, but it is not Science (a Free Energy
  Calculation).
 
 
  http://www.rustprivacy.org/2013/egov/roadmap/NoMoneyInGovernment.pdf
 
 
  Sorry to say, for reasons given, that StratML seems the better choice
  for Strategic Policy Representation (rather than SKOS and RDF).

 sorry, no offence but above are two lines of total confusion...

 wkr j


 
 
  --Gannon
 
 
 
  __
  From: David Wood da...@3roundstones.com
  To: public-lod@w3.org community public-lod@w3.org
  Sent: Saturday, May 18, 2013 8:59 AM
  Subject: Re: Request for Help: US Government Linked Data
 
 
  Hi all,
 
  I take it back: Don't just comment.
 
  We need to introduce pull requests into the Project Open Data
  documents that add Linked Data terms, examples and guidelines to the
  existing material.
 
  There are a few scattered RDFa references in relation to schema.org,
  but most of the Linked Data material has been removed from the
  documents.  We need to get this back in, or existing Linked Data efforts
  within the US Government might very well be hurt.
 
  Please help.  Thanks.
 
  Regards,
  Dave
  --
  http://about.me/david_wood
 
 
 
  On May 18, 2013, at 09:16, David Wood da...@3roundstones.com wrote:
 
   Hi all,
  
   Parts of the US Government have been discussing the role of Linked
  Data in government agencies and whether Linked Data is what the Obama
  Administration meant when they mandated machine readable data.
  Unsurprisingly, some people like to do things the old ways, with a
  three-tier architecture and without fostering reuse of the data.
  
   Please respond to the GitHub thread if you would like to support
  Linked Data:
  
  https://github.com/project-open-data/project-open-data.github.io/pull/21
  
   Regards,
   Dave
   --
   http://about.me/david_wood
  
  
  
 
 
 

 --
 | Jürgen Jakobitsch,
 | Software Developer
 | Semantic Web Company GmbH
 | Mariahilfer Straße 70 / Neubaugasse 1, Top 8
 | A - 1070 Wien, Austria
 | Mob +43 676 62 12 710 | Fax +43.1.402 12 35 - 22

 COMPANY INFORMATION
 | web   : http://www.semantic-web.at/
 | foaf  : http://company.semantic-web.at/person/juergen_jakobitsch
 PERSONAL INFORMATION
 | web   : http://www.turnguard.com
 | foaf  : http://www.turnguard.com/turnguard
 | g+: https://plus.google.com/111233759991616358206/posts
 | skype : jakobitsch-punkt
 | xmlns:tg  = http://www.turnguard.com/turnguard#;





Re: Request for Help: US Government Linked Data

2013-05-19 Thread Gannon Dick
Dave,

IMHO, the W3C Cookbook methods do not go far enough to define the short-term 
strategy game of which Americans are so fond.  The Federal Government must plan 
Social Policy from ante Meridian (AM) to post Meridian (PM).  Playing 
statistical games with higher frequencies or modified time spans is fun, but it 
is not Science (a Free Energy Calculation).

http://www.rustprivacy.org/2013/egov/roadmap/NoMoneyInGovernment.pdf

Sorry to say, for reasons given, that StratML seems the better choice for 
Strategic Policy Representation (rather than SKOS and RDF).

--Gannon




 From: David Wood da...@3roundstones.com
To: public-lod@w3.org community public-lod@w3.org 
Sent: Saturday, May 18, 2013 8:59 AM
Subject: Re: Request for Help: US Government Linked Data
 

Hi all,

I take it back: Don't just comment.

We need to introduce pull requests into the Project Open Data documents that 
add Linked Data terms, examples and guidelines to the existing material.

There are a few scattered RDFa references in relation to schema.org, but most 
of the Linked Data material has been removed from the documents.  We need to 
get this back in, or existing Linked Data efforts within the US Government might 
very well be hurt.

Please help.  Thanks.

Regards,
Dave
--
http://about.me/david_wood



On May 18, 2013, at 09:16, David Wood da...@3roundstones.com wrote:

 Hi all,
 
 Parts of the US Government have been discussing the role of Linked Data in 
 government agencies and whether Linked Data is what the Obama Administration 
 meant when they mandated machine readable data.  Unsurprisingly, some 
 people like to do things the old ways, with a three-tier architecture and 
 without fostering reuse of the data.
 
 Please respond to the GitHub thread if you would like to support Linked Data:
  https://github.com/project-open-data/project-open-data.github.io/pull/21
 
 Regards,
 Dave
 --
 http://about.me/david_wood
 
 
 

Re: Request for Help: US Government Linked Data

2013-05-19 Thread Luca Matteis
Great reminder David,

We need to add more Linked Data content on those pages. One interesting
positive note though: this Linked Data pull request is by far the most
active request, with 42 comments. So this should spark something in the
minds of the people that are managing this project.

Luca


On Sun, May 19, 2013 at 5:19 PM, Gannon Dick gannon_d...@yahoo.com wrote:

 Dave,

 IMHO, the W3C Cookbook methods do not go far enough to define the
 short-term strategy game of which Americans are so fond.  The Federal
 Government must plan Social Policy from ante Meridian (AM) to post Meridian
 (PM).  Playing statistical games with higher frequencies or modified time
 spans is fun, but it is not Science (a Free Energy Calculation).

 http://www.rustprivacy.org/2013/egov/roadmap/NoMoneyInGovernment.pdf

 Sorry to say, for reasons given, that StratML seems the better choice for
 Strategic Policy Representation (rather than SKOS and RDF).

 --Gannon

   --
  *From:* David Wood da...@3roundstones.com
 *To:* public-lod@w3.org community public-lod@w3.org
 *Sent:* Saturday, May 18, 2013 8:59 AM
 *Subject:* Re: Request for Help: US Government Linked Data

 Hi all,

 I take it back: Don't just comment.

 We need to introduce pull requests into the Project Open Data documents
 that add Linked Data terms, examples and guidelines to the existing
 material.

 There are a few scattered RDFa references in relation to schema.org, but
 most of the Linked Data material has been removed from the documents.  We
 need to get this back in, or existing Linked Data efforts within the US
 Government might very well be hurt.

 Please help.  Thanks.

 Regards,
 Dave
 --
 http://about.me/david_wood



 On May 18, 2013, at 09:16, David Wood da...@3roundstones.com wrote:

  Hi all,
 
  Parts of the US Government have been discussing the role of Linked Data
 in government agencies and whether Linked Data is what the Obama
 Administration meant when they mandated machine readable data.
 Unsurprisingly, some people like to do things the old ways, with a
 three-tier architecture and without fostering reuse of the data.
 
  Please respond to the GitHub thread if you would like to support Linked
 Data:
 
 https://github.com/project-open-data/project-open-data.github.io/pull/21
 
  Regards,
  Dave
  --
  http://about.me/david_wood
 
 
 





Re: Request for Help: US Government Linked Data

2013-05-19 Thread Jürgen Jakobitsch SWC
On Sun, 2013-05-19 at 08:19 -0700, Gannon Dick wrote:
 Dave,
 
 
 IMHO, the W3C Cookbook methods do not go far enough to define the
 short-term strategy game of which Americans are so fond.  The Federal
 Government must plan Social Policy from ante Meridian (AM) to post
 Meridian (PM).  Playing statistical games with higher frequencies or
 modified time spans is fun, but it is not Science (a Free Energy
 Calculation).
 
 
 http://www.rustprivacy.org/2013/egov/roadmap/NoMoneyInGovernment.pdf
 
 
 Sorry to say, for reasons given, that StratML seems the better choice
 for Strategic Policy Representation (rather than SKOS and RDF).

sorry, no offence but above are two lines of total confusion...

wkr j


 
 
 --Gannon
 
 
 
 __
 From: David Wood da...@3roundstones.com
 To: public-lod@w3.org community public-lod@w3.org 
 Sent: Saturday, May 18, 2013 8:59 AM
 Subject: Re: Request for Help: US Government Linked Data
 
 
 Hi all,
 
 I take it back: Don't just comment.
 
 We need to introduce pull requests into the Project Open Data
 documents that add Linked Data terms, examples and guidelines to the
 existing material.
 
 There are a few scattered RDFa references in relation to schema.org,
 but most of the Linked Data material has been removed from the
 documents.  We need to get this back in, or existing Linked Data efforts
 within the US Government might very well be hurt.
 
 Please help.  Thanks.
 
 Regards,
 Dave
 --
 http://about.me/david_wood
 
 
 
 On May 18, 2013, at 09:16, David Wood da...@3roundstones.com wrote:
 
  Hi all,
  
  Parts of the US Government have been discussing the role of Linked
 Data in government agencies and whether Linked Data is what the Obama
 Administration meant when they mandated machine readable data.
 Unsurprisingly, some people like to do things the old ways, with a
 three-tier architecture and without fostering reuse of the data.
  
  Please respond to the GitHub thread if you would like to support
 Linked Data:
 
 https://github.com/project-open-data/project-open-data.github.io/pull/21
  
  Regards,
  Dave
  --
  http://about.me/david_wood
  
  
  
 
 
 

-- 
| Jürgen Jakobitsch, 
| Software Developer
| Semantic Web Company GmbH
| Mariahilfer Straße 70 / Neubaugasse 1, Top 8
| A - 1070 Wien, Austria
| Mob +43 676 62 12 710 | Fax +43.1.402 12 35 - 22

COMPANY INFORMATION
| web   : http://www.semantic-web.at/
| foaf  : http://company.semantic-web.at/person/juergen_jakobitsch
PERSONAL INFORMATION
| web   : http://www.turnguard.com
| foaf  : http://www.turnguard.com/turnguard
| g+: https://plus.google.com/111233759991616358206/posts
| skype : jakobitsch-punkt
| xmlns:tg  = http://www.turnguard.com/turnguard#;




Request for Help: US Government Linked Data

2013-05-18 Thread David Wood
Hi all,

Parts of the US Government have been discussing the role of Linked Data in 
government agencies and whether Linked Data is what the Obama Administration 
meant when they mandated machine readable data.  Unsurprisingly, some people 
like to do things the old ways, with a three-tier architecture and without 
fostering reuse of the data.

Please respond to the GitHub thread if you would like to support Linked Data:
  https://github.com/project-open-data/project-open-data.github.io/pull/21

Regards,
Dave
--
http://about.me/david_wood







Re: Request for Help: US Government Linked Data

2013-05-18 Thread David Wood
Hi all,

I take it back: Don't just comment.

We need to introduce pull requests into the Project Open Data documents that 
add Linked Data terms, examples and guidelines to the existing material.

There are a few scattered RDFa references in relation to schema.org, but most 
of the Linked Data material has been removed from the documents.  We need to 
get this back in, or existing Linked Data efforts within the US Government might 
very well be hurt.

Please help.  Thanks.

Regards,
Dave
--
http://about.me/david_wood



On May 18, 2013, at 09:16, David Wood da...@3roundstones.com wrote:

 Hi all,
 
 Parts of the US Government have been discussing the role of Linked Data in 
 government agencies and whether Linked Data is what the Obama Administration 
 meant when they mandated machine readable data.  Unsurprisingly, some 
 people like to do things the old ways, with a three-tier architecture and 
 without fostering reuse of the data.
 
 Please respond to the GitHub thread if you would like to support Linked Data:
  https://github.com/project-open-data/project-open-data.github.io/pull/21
 
 Regards,
 Dave
 --
 http://about.me/david_wood
 
 
 





[ANN] Add your links to DBpedia workflow version 0.3 (help wanted)

2013-04-08 Thread Sebastian Hellmann

Hi all,
there were some discussions meanwhile and we are able to present yet an 
updated version of the workflow: https://github.com/dbpedia/dbpedia-links


What's new is that we will now also include the SILK link specs as well as the 
scripts which generated the links.
The README was updated 
https://github.com/dbpedia/dbpedia-links/blob/master/README.md


Please apply to join this effort and email me, if you want write access 
to the repo and join the linking committee.
We are really looking for managers who can help us push this effort 
forward. We also need a proposal for the metadata.ttl and for how to 
maintain the links and load and validate them.
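
To make the metadata.ttl question a little more concrete, here is a purely 
illustrative sketch of one possible shape for per-linkset metadata, using the 
VoID vocabulary (all URIs and the triple count below are placeholders; the 
actual format is for the linking committee to decide):

  @prefix void:    <http://rdfs.org/ns/void#> .
  @prefix dcterms: <http://purl.org/dc/terms/> .
  @prefix owl:     <http://www.w3.org/2002/07/owl#> .

  <#dbpedia-geonames-links> a void:Linkset ;
      void:subjectsTarget <http://dbpedia.org/#dataset> ;           # placeholder dataset URIs
      void:objectsTarget  <http://sws.geonames.org/#dataset> ;
      void:linkPredicate  owl:sameAs ;
      dcterms:creator     <http://example.org/people/contributor1> ;
      void:triples        12345 .                                   # placeholder count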



All the best,
Sebastian


--
Dipl. Inf. Sebastian Hellmann
Department of Computer Science, University of Leipzig
Projects: http://nlp2rdf.org , http://linguistics.okfn.org , 
http://dbpedia.org/Wiktionary , http://dbpedia.org

Homepage: http://bis.informatik.uni-leipzig.de/SebastianHellmann
Research Group: http://aksw.org



Re: Help with modeling my ontology

2013-02-28 Thread Dave Reynolds
Just on the question of representing measurements: one approach to 
that is the RDF Data Cube vocabulary [1]. In that, each observation has a 
measure (the thing you are measuring, such as canopyHeight), the 
dimensions of where/when/etc. the measurement applies to, and the 
attributes that allow you to interpret the measurement.


So you would normally make the unit of measure an attribute.

If the method doesn't fundamentally change the nature of the thing you 
are measuring then you could make that another attribute.  If it does 
then you should have a different measure property for the different 
methods (possibly with some common super property).
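
A minimal sketch of that shape, assuming made-up eg: properties for the 
crop-specific terms (only the qb: terms come from the Data Cube vocabulary):

  @prefix qb:  <http://purl.org/linked-data/cube#> .
  @prefix eg:  <http://example.org/ns#> .
  @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

  eg:obs1 a qb:Observation ;
      qb:dataSet      eg:groundnutTrial ;        # the data set this observation belongs to
      eg:plant        eg:groundnut1 ;            # dimension: what the measurement applies to
      eg:canopyHeight "9.5"^^xsd:decimal ;       # measure: the thing being measured
      eg:unit         eg:centimetre ;            # attribute: unit of measure
      eg:method       eg:baseToTipOfMainStem .   # attribute: measurement method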


Dave

[1] http://www.w3.org/TR/vocab-data-cube/

On 27/02/13 20:58, Luca Matteis wrote:

Hello all,

At http://www.cropontology.org/ I'm trying to make things a little more
RDF friendly. For example, we have an ontology about Groundnut here:
http://www.cropontology.org/ontology/CO_337/Groundnut/ttl

I'm generating this from a somewhat flat list of names/concepts, so it's
still a work in progress. But I'm having issues making sense of it all
so that the ontology can be used by people that actually have Groundnut
data.

For example, in that Turtle dump, search for Canopy height. This is a
concept that people might use to describe the height of the canopy of
their groundnut plant, as the comment describes (this should be a
Property not a Class, but like I said, it's still work-in-progress).
Let's try with some sample data someone might have about groundnut, and
see if I can further explain my issue (I assume co: is a prefix for my
cropontology.org http://cropontology.org site, also the URIs are
different but it's just an example):

 :groundnut1
   a co:Groundnut;
   co:canopyHeight xxx .

Ok here's the issue, we know that `canopyHeight` is measured using
different methodologies. For example it might be measured using a
methodology that we found to be described as Measuring the distance
from the base to the tip of the main stem, but it might also be some
other method. And, funny enough, we also realized that it is measured
using centimeters, with a minimum of 0 and a maximum of 10cm.

So how should I make this easier on the people that are using my
ontology? Should it be:

 :groundnut1
   a co:Groundnut;
   co:canopyHeight 9.5cm .

or should it be:

 :groundnut1
   a co:Groundnut;
   co:canopyHeight [
 co:method Measuring the distance from the base to the tip of
the main stem;
 co:scale 9.5cm
   ] .

Maybe I'm going about this the wrong way and should think more about how
this ontology is going to be used by people that have data about it...
but I'm not sure. Any advice would be great. And here's the actual
browsable list of concepts, in a tree sort of interface:
http://www.cropontology.org/terms/CO_337:039/

As you can see there's this kind of thing happening all over the
ontology where we have the Property-the method it was measured- and
finally the scale. Any help? Thanks!






Help with modeling my ontology

2013-02-27 Thread Luca Matteis
Hello all,

At http://www.cropontology.org/ I'm trying to make things a little more RDF
friendly. For example, we have an ontology about Groundnut here:
http://www.cropontology.org/ontology/CO_337/Groundnut/ttl

I'm generating this from a somewhat flat list of names/concepts, so it's
still a work in progress. But I'm having issues making sense of it all so
that the ontology can be used by people that actually have Groundnut data.

For example, in that Turtle dump, search for Canopy height. This is a
concept that people might use to describe the height of the canopy of their
groundnut plant, as the comment describes (this should be a Property not a
Class, but like I said, it's still work-in-progress). Let's try with some
sample data someone might have about groundnut, and see if I can further
explain my issue (I assume co: is a prefix for my cropontology.org site,
also the URIs are different but it's just an example):

:groundnut1
  a co:Groundnut;
  co:canopyHeight xxx .

Ok here's the issue, we know that `canopyHeight` is measured using
different methodologies. For example it might be measured using a
methodology that we found to be described as Measuring the distance from
the base to the tip of the main stem, but it might also be some other
method. And, funny enough, we also realized that it is measured using
centimeters, with a minimum of 0 and a maximum of 10cm.

So how should I make this easier on the people that are using my ontology?
Should it be:

:groundnut1
  a co:Groundnut;
  co:canopyHeight "9.5cm" .

or should it be:

:groundnut1
  a co:Groundnut;
  co:canopyHeight [
    co:method "Measuring the distance from the base to the tip of the main stem";
    co:scale "9.5cm"
  ] .

Maybe I'm going about this the wrong way and should think more about how
this ontology is going to be used by people that have data about it... but
I'm not sure. Any advice would be great. And here's the actual browsable
list of concepts, in a tree sort of interface:
http://www.cropontology.org/terms/CO_337:039/

As you can see there's this kind of thing happening all over the ontology
where we have the Property-the method it was measured- and finally the
scale. Any help? Thanks!


Re: Help with modeling my ontology

2013-02-27 Thread Kingsley Idehen

On 2/27/13 3:58 PM, Luca Matteis wrote:

Hello all,

At http://www.cropontology.org/ I'm trying to make things a little 
more RDF friendly. For example, we have an ontology about Groundnut 
here: http://www.cropontology.org/ontology/CO_337/Groundnut/ttl


I'm generating this from a somewhat flat list of names/concepts, so 
it's still a work in progress. But I'm having issues making sense of 
it all so that the ontology can be used by people that actually have 
Groundnut data.


For example, in that Turtle dump, search for Canopy height. This is 
a concept that people might use to describe the height of the canopy 
of their groundnut plant, as the comment describes (this should be a 
Property not a Class, but like I said, it's still work-in-progress). 
Let's try with some sample data someone might have about groundnut, 
and see if I can further explain my issue (I assume co: is a prefix 
for my cropontology.org http://cropontology.org site, also the URIs 
are different but it's just an example):


:groundnut1
  a co:Groundnut;
  co:canopyHeight xxx .

Ok here's the issue, we know that `canopyHeight` is measured using 
different methodologies. For example it might be measured using a 
methodology that we found to be described as Measuring the distance 
from the base to the tip of the main stem, but it might also be some 
other method. And, funny enough, we also realized that it is measured 
using centimeters, with a minimum of 0 and a maximum of 10cm.


So how should I make this easier on the people that are using my 
ontology? Should it be:


:groundnut1
  a co:Groundnut;
  co:canopyHeight 9.5cm .

or should it be:

:groundnut1
  a co:Groundnut;
  co:canopyHeight [
co:method Measuring the distance from the base to the tip of 
the main stem;

co:scale 9.5cm
  ] .

Maybe I'm going about this the wrong way and should think more about 
how this ontology is going to be used by people that have data about 
it... but I'm not sure. Any advice would be great. And here's the 
actual browsable list of concepts, in a tree sort of interface: 
http://www.cropontology.org/terms/CO_337:039/


As you can see there's this kind of thing happening all over the 
ontology where we have the Property-the method it was measured- and 
finally the scale. Any help? Thanks!




Some options that cover units of measurement:

1. http://www.linkedmodel.org/catalog/qudt/1.1/index.html .
2. http://www.heppnetz.de/ontologies/goodrelations/v1.html#hasCurrency 
-- its currency, but you should get the gist.
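
For option 1, a rough sketch of the QUDT style applied to Luca's example, keeping 
the : and co: prefixes from his mail (those, and the exact unit URI, are 
assumptions; check the QUDT unit vocabulary for the correct identifier):

  @prefix qudt: <http://qudt.org/schema/qudt#> .
  @prefix unit: <http://qudt.org/vocab/unit#> .
  @prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

  :groundnut1
      a co:Groundnut ;
      co:canopyHeight [
          a qudt:QuantityValue ;
          qudt:numericValue "9.5"^^xsd:decimal ;
          qudt:unit unit:Centimeter               # illustrative unit URI
      ] .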



--

Regards,

Kingsley Idehen 
Founder  CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen








Re: Help needed: *brief* online poll about blank-nodes

2011-06-23 Thread M. Scott Marshall
Please feel free to send this sort of poll to HCLS
public-semweb-life...@w3.org. I think that it's an important issue
but I only got around to forwarding it to a small group at HCLS. So,
maybe for the next round.

Cheers,
Scott

-- 
M. Scott Marshall, W3C HCLS IG co-chair, http://www.w3.org/blog/hcls
http://staff.science.uva.nl/~marshall

On Tue, Jun 21, 2011 at 7:14 PM, Alejandro Mallea jan...@gmail.com wrote:
 Hi all,
 We would like to say thanks to all of you who have replied to the poll on
 blank nodes. We have got interesting answers and feedback, and we will be
 making the results available online early next week (with the exception of
 the data sets). If you still want to participate, the poll will be open for
 24 hours (closing at 18:00 UTC tomorrow). Remember that you can leave
 feedback on the questions, the alternatives, or general comments about your
 uses for blank nodes. Until that time, please do not send comments or
 questions about the poll to this list.
 The poll is available at http://db.ing.puc.cl/amallea/blank-nodes-poll
 Regards,
 Aidan and Alejandro

 On 17 June 2011 17:10, Hogan, Aidan aidan.ho...@deri.org wrote:

 Dear colleagues,

 We're conducting some research into the current use of blank-nodes in
 Linked Data publishing, and we need your help.

 We would like to get a general impression of the intent of publishers
 when using blank-nodes in their RDF data. Along these lines, we drafted
 a short survey containing *2 questions* which will only take a minute or
 two of your time.

 We would be very grateful if you would take the time to fill out the
 poll. We will make the results available online later this month.

 **Note that the poll is trying to determine what you *intend* when you
 publish blank-nodes. It is not a quiz on RDF Semantics. There is no
 correct answer.**

 Link to Poll: http://db.ing.puc.cl/amallea/blank-nodes-poll

 If you have been involved in publishing RDF data on the Web (e.g., as
 Linked Data), please provide a URL or a domain name which indicates the
 dataset.

 Many thanks for your time!
 Alejandro and Aidan


 P.S. Please feel free to tweet a link to this mail. However, to avoid
 influencing responses, we would strongly prefer if this email is not
 replied to on-list. If you want to leave feedback, please do so in the
 space provided in the poll, or reply directly to Alejandro (CC'ed on
 this mail) and Aidan. Thanks!



Re: Help needed: *brief* online poll about blank-nodes

2011-06-21 Thread Alejandro Mallea
Hi all,

We would like to say thanks to all of you who have replied to the poll on
blank nodes. We have got interesting answers and feedback, and we will be
making the results available online early next week (with the exception of
the data sets). If you still want to participate, the poll will be open for
24 hours (closing at 18:00 UTC tomorrow). Remember that you can leave
feedback on the questions, the alternatives, or general comments about your
uses for blank nodes. Until that time, please do not send comments or
questions about the poll to this list.

The poll is available at http://db.ing.puc.cl/amallea/blank-nodes-poll

Regards,

Aidan and Alejandro


On 17 June 2011 17:10, Hogan, Aidan aidan.ho...@deri.org wrote:

 Dear colleagues,

 We're conducting some research into the current use of blank-nodes in
 Linked Data publishing, and we need your help.

 We would like to get a general impression of the intent of publishers
 when using blank-nodes in their RDF data. Along these lines, we drafted
 a short survey containing *2 questions* which will only take a minute or
 two of your time.

 We would be very grateful if you would take the time to fill out the
 poll. We will make the results available online later this month.

 **Note that the poll is trying to determine what you *intend* when you
 publish blank-nodes. It is not a quiz on RDF Semantics. There is no
 correct answer.**

 Link to Poll: http://db.ing.puc.cl/amallea/blank-nodes-poll

 If you have been involved in publishing RDF data on the Web (e.g., as
 Linked Data), please provide a URL or a domain name which indicates the
 dataset.

 Many thanks for your time!
 Alejandro and Aidan


 P.S. Please feel free to tweet a link to this mail. However, to avoid
 influencing responses, we would strongly prefer if this email is not
 replied to on-list. If you want to leave feedback, please do so in the
 space provided in the poll, or reply directly to Alejandro (CC'ed on
 this mail) and Aidan. Thanks!



Help needed: *brief* online poll about blank nodes

2011-06-19 Thread Alejandro Mallea
Dear colleagues,

We're conducting some research into the current use of blank nodes in
Linked Data publishing, and we need your help.

We would like to get a general impression of the intent of publishers
when using blank-nodes in their RDF data. Along these lines, we drafted
a short survey containing 2 questions which will only take a minute or
two of your time.

We would be very grateful if you would take the time to fill out the
poll. We will make the results available online later this month.

Note that the poll is trying to determine what you *intend* when you
publish blank nodes. It is not a quiz on RDF Semantics. There is no
correct answer.

Link to Poll: http://db.ing.puc.cl/amallea/blank-nodes-poll

If you have been involved in publishing RDF data on the Web (e.g., as
Linked Data), please provide a URL or a domain name which indicates the
dataset.

Please feel free to tweet a link to this mail. However, to avoid influencing
responses, we would strongly prefer if this email is not replied to on-list.
If you want to give feedback on the poll, please reply directly to my email
address. Thanks :-)

Many thanks for your time!

Alejandro Mallea
DERI Galway


Help needed: *brief* online poll about blank nodes

2011-06-19 Thread Alejandro Mallea
Dear colleagues,

We're conducting some research into the current use of blank nodes in
Linked Data publishing, and we need your help.

We would like to get a general impression of the intent of publishers
when using blank-nodes in their RDF data. Along these lines, we drafted
a short survey containing 2 questions which will only take a minute or
two of your time.

We would be very grateful if you would take the time to fill out the
poll. We will make the results available online later this month.

Note that the poll is trying to determine what you *intend* when you
publish blank nodes. It is not a quiz on RDF Semantics. There is no
correct answer.

Link to Poll: http://db.ing.puc.cl/amallea/blank-nodes-poll

If you have been involved in publishing RDF data on the Web (e.g., as
Linked Data), please provide a URL or a domain name which indicates the
dataset.

Please feel free to tweet a link to this mail. However, to avoid influencing
responses, we would strongly prefer if this email is not replied to on-list.
If you want to give feedback on the poll, please reply directly to my email
address.

Many thanks for your time!

Alejandro Mallea et al.
DERI Galway

PS. Sorry if the message is repeated. I sent it two hours ago but so far I
haven't seen it in the archives or in my inbox.


Help needed: *brief* online poll about blank-nodes

2011-06-17 Thread Hogan, Aidan
Dear colleagues,

We're conducting some research into the current use of blank-nodes in
Linked Data publishing, and we need your help.

We would like to get a general impression of the intent of publishers
when using blank-nodes in their RDF data. Along these lines, we drafted
a short survey containing *2 questions* which will only take a minute or
two of your time. 

We would be very grateful if you would take the time to fill out the
poll. We will make the results available online later this month.

**Note that the poll is trying to determine what you *intend* when you
publish blank-nodes. It is not a quiz on RDF Semantics. There is no
correct answer.**

Link to Poll: http://db.ing.puc.cl/amallea/blank-nodes-poll

If you have been involved in publishing RDF data on the Web (e.g., as
Linked Data), please provide a URL or a domain name which indicates the
dataset.

Many thanks for your time!
Alejandro and Aidan


P.S. Please feel free to tweet a link to this mail. However, to avoid
influencing responses, we would strongly prefer if this email is not
replied to on-list. If you want to leave feedback, please do so in the
space provided in the poll, or reply directly to Alejandro (CC'ed on
this mail) and Aidan. Thanks!

P.P.S. We've had some problems sending mails on the list (due to
moderation lag), so we apologise in advance if repetitions of this mail
surface later.



RE: An idea I need help with, or told to stop wasting time on!

2010-06-07 Thread Michael Schneider
-Original Message-
From: semantic-web-requ...@w3.org [mailto:semantic-web-requ...@w3.org]
On Behalf Of Nathan
Sent: Sunday, June 06, 2010 11:51 PM
To: Michael Schneider
Cc: Linked Data community; semantic-...@w3.org
Subject: Re: An idea I need help with, or told to stop wasting time on!

Michael Schneider wrote:
 Hi!

 Just a few notes concerning your ideas and OWL DL (I don't know
whether this
 is important for you or not, but some people might find it relevant):

Thanks Michael,

Very useful and indeed relevant (thanks!).

To summarise, everything mentioned so far by me is fine in OWL Full and
RDF(s), but not in OWL DL.

I should have mentioned that you could make ex:value an
owl:AnnotationProperty, which would allow you to have all of URIs, literals
and bnodes in object position. But this, of course, has other drawbacks in
OWL DL, apart from not looking very justified conceptually (ex:value is
probably not meant as a means to add comments to a resource?). If you make
it an annotation property, most OWL constructs cannot be used with the
property anymore. For example, it may make sense to state that ex:value is a
functional property, or to put a has-value restriction on it in some
scenarios. That's all not possible then anymore. In OWL 2 DL, you could at
least put a range axiom on it, but it would not have any semantic
consequences, i.e. an OWL DL reasoner would completely ignore both the
property and the axiom on it. This may lead to surprises.

So, my general view is that making a property, which is not naturally sort
of a commenting property (such as rdfs:comment), an annotation property is
only acceptable, if you exactly know what you are doing and if you have full
control over the property's use. If you expect to publish the property to be
used by others, and if there are possible scenarios where one might like to
use the property in an OWL construct (e.g. an axiom) or even do reasoning
with it, then don't make it an annotation property.
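
As a small sketch of the two declaration choices being weighed here (ex:value is 
the hypothetical property from the original mail; the usual owl:/rdfs:/xsd: 
prefixes are assumed):

  # Option A: annotation property - syntactically tolerated anywhere in OWL DL, but ignored by DL reasoners
  ex:value a owl:AnnotationProperty .

  # Option B: a "real" data property - usable in axioms like those above, but then only literal objects are allowed
  ex:value a owl:DatatypeProperty , owl:FunctionalProperty ;
      rdfs:range xsd:string .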

Thus is it safe to say that this would be a problem in OWL DL as well?:

   :x owl:sameAs 'a literal'^^xsd:string .

No, owl:sameAs cannot be used with literals in OWL DL. It can only be used
with URIs (named individuals). What you are doing here is, again, genuine
OWL Full, because OWL Full treats data values as individuals.

And I guess the take-away is, that if one was to go for something as
described in the original post, it would not be OWL DL compliant.

Consider SKOS-XL [1]:

  ex:foo skosxl:prefLabel [
  rdf:type skosxl:Label ;
  skosxl:literalForm "foo" ] .

Here, skosxl:prefLabel is specified as an object property [sic!] and
skosxl:literalForm is a data property (while the better known skos:label
property is an annotation property). This works in DL, but only if you use
those properties as a team. In your original example, you have used
foaf:name, which was a data property, and this does not work. Also, you
cannot use skosxl:literalForm with a URI as an object, which is what you did with
ex:value in your earlier post. So, you can do it in DL, but you don't have
very much usage freedom. Thus, check your use cases! 
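
Spelled out with a named label node, the DL-friendly "team" usage looks roughly 
like this (ex:foo and ex:fooLabel are placeholders):

  ex:foo skosxl:prefLabel ex:fooLabel .      # object property: resource to resource
  ex:fooLabel a skosxl:Label ;
      skosxl:literalForm "foo" .             # data property: resource to literal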

ps: If I get to the stage of trying to express any of this in an OWL
ontology (FULL I guess!), would it be okay to send through to cast your
eye over.

Feel free. For OWL Full, actually, it's as simple as this: syntactically, if
it is in RDF (and it always is), then you are in OWL Full (a no-brainer),
simply since the syntax of OWL Full is defined to be (unrestricted) RDF. And
if you want to do OWL Full-style reasoning, then the OWL 2 RL/RDF Rules
language [2] and corresponding reasoners (e.g. [3]) are often sufficient
(though sometimes not, depends on your usecases).

Many Regards,

Nathan

Best,
Michael

[1] http://www.w3.org/TR/skos-reference/#xl
[2]
http://www.w3.org/TR/2009/REC-owl2-profiles-20091027/#Reasoning_in_OWL_2_RL_
and_RDF_Graphs_using_Rules
[3] http://www.ivan-herman.net/Misc/2008/owlrl/

--
Dipl.-Inform. Michael Schneider
Research Scientist, Information Process Engineering (IPE)
Tel  : +49-721-9654-726
Fax  : +49-721-9654-727
Email: michael.schnei...@fzi.de
WWW  : http://www.fzi.de/michael.schneider
===
FZI Forschungszentrum Informatik an der Universität Karlsruhe
Haid-und-Neu-Str. 10-14, D-76131 Karlsruhe
Tel.: +49-721-9654-0, Fax: +49-721-9654-959
Stiftung des bürgerlichen Rechts, Az 14-0563.1, RP Karlsruhe
Vorstand: Prof. Dr.-Ing. Rüdiger Dillmann, Dipl. Wi.-Ing. Michael Flor,
Prof. Dr. Dr. h.c. Wolffried Stucky, Prof. Dr. Rudi Studer
Vorsitzender des Kuratoriums: Ministerialdirigent Günther Leßnerkraus
===




RE: An idea I need help with, or told to stop wasting time on!

2010-06-07 Thread Michael Schneider
Hi Henry!

Story Henry wrote:

If you look at the rdf semantics document it spends a lot of time
showing how one can turn literals into bnodes. http://www.w3.org/TR/rdf-
mt/

(I can't quite remember where now)

This works for OWL (1/2) /Full/ as well. OWL Full uses the (unrestricted)
RDF abstract syntax as its native syntax; hence, _:x owl:sameAs 'foo' is
valid syntactically in OWL Full. And OWL (1/2) Full uses the OWL 1
RDF-Compatible Semantics [1a] or the OWL 2 RDF-Based Semantics [1b] as its
semantics, which is strictly layered on top of the RDF Semantics; hence,
_:x owl:sameAs 'foo' is semantically meaningful in OWL Full, meaning that
there exists a resource in the universe of discourse that happens to be the
string 'foo'. 

What I said was that it does not work in OWL /DL/. See Sec. 11.2 of the OWL
2 Structural Specification [2] for the syntactic Restrictions on the Usage
of Anonymous Individuals in OWL 2 DL ontologies, and see Sec. 2.2 of the
OWL 2 Direct Semantics [3] (the semantics of OWL 2 DL), which states that
the object domain (individuals, represented by URIs and bnodes) and the data
domain (data values, represented by literals) are disjoint.

I wonder what the problem owl has with doing this. And also I wonder if
it is easy to create some new owl version that could deal with that.

No such need. This language exists and has always been around: it is OWL
(1/2) Full. Although, I'm starting to get the bad feeling that many people
seem to miss the point what the purpose of this language is. The whole idea
behind OWL Full is having a fully RDF compatible variant of OWL, which
provides semantic expressivity comparable (but not necessarily perfectly
equal) to OWL DL. Technically (as you have read the RDF Semantics spec, the
following should be familiar terms to you), OWL 2 Full (actually, its
semantics, the RDF-Based Semantics) is a semantic extension of RDFS (or
D-entailment, to be more precise), providing vocabulary entailment for all
the URIs of the OWL (2) vocabulary. You may want to read Chap. 1
(Introduction) of [1b] for further explanation (and, again, you will find
a lot of familiar sounding terms and concepts there).

Michael
 
[1a] http://www.w3.org/TR/owl-semantics/rdfs.html
[1b] http://www.w3.org/TR/2009/REC-owl2-rdf-based-semantics-20091027/
[2] http://www.w3.org/TR/2009/REC-owl2-syntax-20091027/
[3] http://www.w3.org/TR/2009/REC-owl2-direct-semantics-20091027/

--
Dipl.-Inform. Michael Schneider
Research Scientist, Information Process Engineering (IPE)
Tel  : +49-721-9654-726
Fax  : +49-721-9654-727
Email: michael.schnei...@fzi.de
WWW  : http://www.fzi.de/michael.schneider
===
FZI Forschungszentrum Informatik an der Universität Karlsruhe
Haid-und-Neu-Str. 10-14, D-76131 Karlsruhe
Tel.: +49-721-9654-0, Fax: +49-721-9654-959
Stiftung des bürgerlichen Rechts, Az 14-0563.1, RP Karlsruhe
Vorstand: Prof. Dr.-Ing. Rüdiger Dillmann, Dipl. Wi.-Ing. Michael Flor,
Prof. Dr. Dr. h.c. Wolffried Stucky, Prof. Dr. Rudi Studer
Vorsitzender des Kuratoriums: Ministerialdirigent Günther Leßnerkraus
===




An idea I need help with, or told to stop wasting time on!

2010-06-06 Thread Nathan

All,

My brain's breaking over this one - I can see it but can't quite flesh out 
the details (or figure out if it's worth it) - it's very much a marmite 
(love/hate) idea that I haven't fully formed, and it's targeted at 
addressing some common problems with named graphs, reification, provenance 
tracking and metadata about data. And as I'm sure you realise by now, I'm 
not afraid to be wrong or to simply throw out ideas into the public 
domain, so here goes:


1: Introduce a 'value' property

Where currently we can say:
 :me foaf:name 'nathan' .

I'd propose introducing an ex:value property that allows us to say:
 :me foaf:name [ ex:value 'nathan' ] .
or
 :me foaf:name :myname .
 :myname ex:value 'nathan' .

I'm hoping the basic human understanding of this is pretty obvious; 
sorting out the domain & range of ex:value, the class of :myname, and the 
related ontology details is hurting my head a bit at the minute.


And thus you could describe a value:

 :me foaf:name [
 ex:value 'nathan' ;
 ex:type xsd:string ;
 ex:language 'en-gb' .
  ] .

And do some funkier stuff:

 :me foaf:mbox :myemail ;
 :myemail ex:value mailto:nat...@webr3.org ;
 ex:sha_1 'KLSJFS9F7S9D8F7SLADFSLKDJF98SD7' ;
 dcterms:created '2010-06-03T15:19:35-05:00' ;
 dcterms:replaces :oldmail .
 :oldmail ex:value mailto:oldem...@webr3.org .

so because of the way ex:value works, in there we have the triple:

mailto:nat...@webr3.org dcterms:replaces mailto:oldem...@webr3.org .

but we've also introduced a way to make non-HTTP URIs dereferenceable..

http://webr3.org/nathan#oldmail ex:value mailto:oldem...@webr3.org .

see why my head is hurting with this?


2: Double Serialization

In a way we can already do this with rdf:XMLLiteral

:x content:encoded '<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"><rdf:Description rdf:about="http://ex.org/egg#i"><rdfs:label xml:lang="en">egg</rdfs:label></rdf:Description></rdf:RDF>'^^<http://www.w3.org/1999/02/22-rdf-syntax-ns#XMLLiteral> .


so we could say:

:x a ex:NamedGraph ;
  ex:graph 'rdf:RDF...'^^rdf:XMLLiteral .

or including ex:value as outlined in 1 earlier:

:graph1 a ex:NamedGraph ;
  ex:graph [
 ex:value '''some serialized rdf in here''' ;
 ex:type 'text/rdf+n3' . ] .

Would allow you to strap provenance / meta to a value and/or the named 
graph.. and here's where it's either so simple, or so complex that my 
brain simply pickles:


example onto:

ex:graph rdfs:domain ex:NamedGraph;
   rdfs:range ex:Graph .

example graph:

:graph1 ex:graph :v3 .

:v3 ex:value '''some serialized rdf in here''' ;
  ex:type 'text/rdf+n3' ;
  dcterms:replaces :v2 .

:v2 ex:value '''old serialized rdf here''' ;
  ex:type 'text/rdf+n3' ;
  dcterms:replaces :v1 .

.. I'm sure there's something in using [ log:content, log:n3String, 
log:uri ] here instead, and quite sure that by stating whether graph 
contents were either a Truth or a Falsehood you could use this for rdf updates..


So, is there something in this, am I going down a wrong path, thoughts, 
feedback, anything?


Best,

Nathan



Re: An idea I need help with, or told to stop wasting time on!

2010-06-06 Thread Damian Steer

On 6 Jun 2010, at 17:17, Nathan wrote:

 1: Introduce a 'value' property

I have good news :-) rdf:value [1]
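
So, under the same placeholder prefixes and properties as Nathan's mail, his 
first sketch could simply read:

  :me foaf:name [
      rdf:value 'nathan' ;
      ex:type xsd:string ;
      ex:language 'en-gb'
  ] .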

Damian

[1] http://www.w3.org/TR/rdf-mt/#rdfValue



Re: An idea I need help with, or told to stop wasting time on!

2010-06-06 Thread Nathan

Damian Steer wrote:

On 6 Jun 2010, at 17:17, Nathan wrote:


1: Introduce a 'value' property


I have good news :-) rdf:value [1]


Brilliant, I hoped that's what it was for (but lack of documentation led 
me astray!)


Great,

Nathan



Re: An idea I need help with, or told to stop wasting time on!

2010-06-06 Thread Reto Bachmann-Gmuer
On Sun, Jun 6, 2010 at 6:17 PM, Nathan nat...@webr3.org wrote:

 ...

  :me foaf:name [
 ex:value 'nathan' ;
 ex:type xsd:string ;
 ex:language 'en-gb' .
  ] .

 foaf:name has range rdfs:Literal, this still allows us to say:

:me foaf:name [



 And do some funkier stuff:

  :me foaf:mbox :myemail ;
  :myemail ex:value mailto:nat...@webr3.org ;
 ex:sha_1 'KLSJFS9F7S9D8F7SLADFSLKDJF98SD7' ;
 dcterms:created '2010-06-03T15:19:35-05:00' ;
 dcterms:replaces :oldmail .
  :oldmail ex:value mailto:oldem...@webr3.org .

 so because of the way ex:value works, in there we have the triple:

 mailto:nat...@webr3.org dcterms:replaces mailto:oldem...@webr3.org .

 but we've also introduced a way to make non http URIs dereferencable..

 http://webr3.org/nathan#oldmail ex:value mailto:oldem...@webr3.org .

 see why my head is hurting with this?


 2: Double Serialization

 In a way we can already do this with rdf:XMLLiteral

 :x content:encoded 'rdf:RDF xmlns:rdf=
 http://www.w3.org/1999/02/22-rdf-syntax-ns#;rdf:Description rdf:about=
 http://ex.org/egg#i;rdfs:label
 xml:lang=enegg/rdfs:label/rdf:Description/rdf:RDF'^^
 http://www.w3.org/1999/02/22-rdf-syntax-ns#XMLLiteral .

 so we could say:

 :x a ex:NamedGraph ;
  ex:graph 'rdf:RDF...'^^rdf:XMLLiteral .

 or including ex:value as outlined in 1 earlier:

 :graph1 a ex:NamedGraph ;
  ex:graph [
 ex:value '''some serialized rdf in here''' ;
 ex:type 'text/rdf+n3' . ] .

 Would allow you to strap provenance / meta to a value and/or the named
 graph.. and here's where it's either so simple, or so complex that my brain
 simply pickles:

 example onto:

 ex:graph rdfs:domain ex:NamedGraph;
   rdfs:range ex:Graph .

 example graph:

 :graph1 ex:graph :v3 .

 :v3 ex:value '''some serialized rdf in here''' ;
  ex:type 'text/rdf+n3' ;
  dcterms:replace :v2 .

 v2 ex:value '''old serialized rdf here''' ;
  ex:type 'text/rdf+n3' ;
  dcterms:replace :v1 .

 .. I'm sure there's something in using [ log:content, log:n3String,
 log:uri ] here instead, and quite sure that by stating that graph contents
 where either a Truth or a Falsehood you could use for rdf updates..

 So, is there something in this, am I going down a wrong path, thoughts,
 feedback, anything?

 Best,

 Nathan




Re: An idea I need help with, or told to stop wasting time on!

2010-06-06 Thread Reto Bachmann-Gmuer
Oops, accidentally sent too early.

On Sun, Jun 6, 2010 at 6:17 PM, Nathan nat...@webr3.org wrote:

 ...


  :me foaf:name [
 ex:value 'nathan' ;
 ex:type xsd:string ;
 ex:language 'en-gb' .
  ] .

 foaf:name has range rdfs:Literal, this still allows us to say:

:me foaf:name [
ex:sha_1 'KLSJFS9F7S9D8F7SLADFSLKDJF98SD7' .
]

in this case we know that the bnode stands for a literal value, and we know
the ex:sha_1 value of that literal.

your way of specifying the type of the literal reminds me of the recent
discussion started by Henry Story:
http://lists.w3.org/Archives/Public/semantic-web/2010Feb/0174.html

following this we could also say:

:me foaf:name [
xsd:string 'nathan' ;
ex:sha_1 'KLSJFS9F7S9D8F7SLADFSLKDJF98SD7' .
 ] .

which expresses the same as:

:me foaf:name [
owl:sameAs 'nathan'^^xsd:string;
ex:sha_1 'KLSJFS9F7S9D8F7SLADFSLKDJF98SD7' .
 ] .



 And do some funkier stuff:

  :me foaf:mbox :myemail ;
  :myemail ex:value mailto:nat...@webr3.org ;
 ex:sha_1 'KLSJFS9F7S9D8F7SLADFSLKDJF98SD7' ;
 dcterms:created '2010-06-03T15:19:35-05:00' ;
 dcterms:replaces :oldmail .
  :oldmail ex:value mailto:oldem...@webr3.org .

 so because of the way ex:value works, in there we have the triple:

 mailto:nat...@webr3.org dcterms:replaces mailto:oldem...@webr3.org .

ex:value seems to work the same way as owl:sameAs

ex:sha_1 to me seems to make sense with literals but not with a mailbox;
there the foaf approach of having a distinct property convinces me more:
mbox points to the mailbox, which is a resource typically identified by its
mailto URI, while mbox_sha1sum by contrast points to a literal with a
sha1 encoding of the mailto URI of an email address of the subject. Your
second usage of ex:sha_1 ties a resource to its name, which seems very
limiting.
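
For reference, the FOAF pattern being contrasted here, as a sketch (the address 
is the truncated one from this thread and the hash is the placeholder value used 
above, not a real digest):

  :me foaf:mbox <mailto:nat...@webr3.org> ;
      foaf:mbox_sha1sum 'KLSJFS9F7S9D8F7SLADFSLKDJF98SD7' .   # sha1 of the mailto: URI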

the statement mailto:nat...@webr3.org dcterms:replaces mailto:oldem...@webr3.org .
seems however perfectly sound; I don't see why this
should need the construct with ex:value and the additional node.

reto


Re: An idea I need help with, or told to stop wasting time on!

2010-06-06 Thread Nathan

Reto Bachmann-Gmuer wrote:

Oops, accidentally sent too early.
On Sun, Jun 6, 2010 at 6:17 PM, Nathan nat...@webr3.org wrote:

...


 :me foaf:name [
ex:value 'nathan' ;
ex:type xsd:string ;
ex:language 'en-gb' .
 ] .

foaf:name has range rdfs:Literal, this still allows us to say:


:me foaf:name [
ex:sha_1 'KLSJFS9F7S9D8F7SLADFSLKDJF98SD7' .
]

in this case we know about the bnode that it stands for a literal value and
the ex:sha_1 value of that literal.

your way of specifying the type of the literal reminds me the recent
discussion started by Henry Story:
http://lists.w3.org/Archives/Public/semantic-web/2010Feb/0174.html

following this we could also say:

:me foaf:name [
xsd:string 'nathan' ;
ex:sha_1 'KLSJFS9F7S9D8F7SLADFSLKDJF98SD7' .
 ] .

which expresses the same as:

:me foaf:name [
owl:sameAs 'nathan'^^xsd:string;
ex:sha_1 'KLSJFS9F7S9D8F7SLADFSLKDJF98SD7' .
 ] .


so are we saying that all of these express the same:

:me foaf:name 'nathan'^^xsd:string .

:me foaf:name [
   xsd:string 'nathan' .
] .

:me foaf:name [
   owl:sameAs 'nathan'^^xsd:string .
] .

:me foaf:name [
   rdf:value 'nathan'^^xsd:string .
] .

:me foaf:name :myname .
:myname xsd:string 'nathan' .

:me foaf:name :myname .
:myname owl:sameAs 'nathan'^^xsd:string .

:me foaf:name :myname .
:myname rdf:value 'nathan'^^xsd:string .

?

I can see the rdf:value and owl:sameAs versions expressing the same, 
unsure about the xsd:string version..


what about..

:London rdfs:label [ rdf:value "London"@en, "Londres"@fr, "Лондон"@ru ].

or..

:London rdfs:label
[ rdf:value "London"@en ] ,
[ rdf:value "Londres"@fr ] ,
[ rdf:value "Лондон"@ru ] .



And do some funkier stuff:

 :me foaf:mbox :myemail ;
 :myemail ex:value mailto:nat...@webr3.org ;
ex:sha_1 'KLSJFS9F7S9D8F7SLADFSLKDJF98SD7' ;
dcterms:created '2010-06-03T15:19:35-05:00' ;
dcterms:replaces :oldmail .
 :oldmail ex:value mailto:oldem...@webr3.org .

so because of the way ex:value works, in there we have the triple:

mailto:nat...@webr3.org dcterms:replaces mailto:oldem...@webr3.org .


ex:value seems to work the same way as owl:sameAs

ex:sha_1 to me seems to make sense with literals but not with a mailbox,
there the foaf-approach of having a distinct property convinces me more:
mbox point to the mailbox which is a resource typically identified by its
mailto-uri, mbox_sha1sum by contrast point to a literal with an
sha1-encoding of the mailto-uri of an email address of the subject. Your
second usage of ex:sha1 ties a resource to its name which seems very
limiting.

the statement mailto:nat...@webr3.org dcterms:replaces mailto:
oldem...@webr3.org . seems however perfectly sound I don't see why this
should need the construct with ex:value and the additional node.


cheers for the comments,

Nathan



Re: An idea I need help with, or told to stop wasting time on!

2010-06-06 Thread Story Henry

On 6 Jun 2010, at 19:54, Reto Bachmann-Gmuer wrote:

 
 your way of specifying the type of the literal reminds me the recent
 discussion started by Henry Story:
 http://lists.w3.org/Archives/Public/semantic-web/2010Feb/0174.html
 
 following this we could also say:
 
 :me foaf:name [
xsd:string 'nathan' ;
ex:sha_1 'KLSJFS9F7S9D8F7SLADFSLKDJF98SD7' .
 ] .

Yes, it would be possible to extend xsd:string so that this works, as explained 
in this email

   http://lists.w3.org/Archives/Public/semantic-web/2010Mar/0037.html

But also see the follow up

   http://lists.w3.org/Archives/Public/semantic-web/2010Mar/0038.html

And as we don't control the xsd: namespace, we can't tell if they will use this
interpretation or the inverse.

  In the case of the cert ontology we can define a datatype to also be such a
relation. See: 

   http://www.w3.org/ns/auth/cert.n3

Henry


Re: An idea I need help with, or told to stop wasting time on!

2010-06-06 Thread Reto Bachmann-Gmuer
On Sun, Jun 6, 2010 at 9:44 PM, Nathan nat...@webr3.org wrote:


 so are we saying that all of these express the same:

 :me foaf:name 'nathan'^^xsd:string .

:me foaf:name [
  owl:sameAs 'nathan'^^xsd:string .
] .

this two mean the same

:me foaf:name [
   xsd:string 'nathan' .
 ] .

this should mean the same, but while not illegal the specs do not define the
meaning clearly


 :me foaf:name [
   rdf:value 'nathan'^^xsd:string .
 ] .

I find it hard to communicate using a term that has no meaning on its own
(according to: http://www.w3.org/TR/rdf-schema/#ch_value)


 :me foaf:name :myname .
 :myname xsd:string 'nathan' .


 :me foaf:name :myname .
 :myname owl:sameAs 'nathan'^^xsd:string .

these two graphs are equivalent and entail the original graph; additionally
they assign the URI :myname to the name


 :me foaf:name :myname .
 :myname rdf:value 'nathan'^^xsd:string .

meaningless rdf:value again


 ?

 I can see the rdf:value and owl:sameAs versions expressing the same, unsure
 about the xsd:string version..

 what about..

  :London rdfs:label [ rdf:value "London"@en, "Londres"@fr, "Лондон"@ru ].

clearly owl:sameAs couldn't be used here, and a label is usually a literal
and not a multivalued object


 or..

  :London rdfs:label
 [ rdf:value "London"@en ] ,
 [ rdf:value "Londres"@fr ] ,
 [ rdf:value "Лондон"@ru ] .


  :London rdfs:label "London"@en, "Londres"@fr, "Лондон"@ru .

cheers,
reto


Re: An idea I need help with, or told to stop wasting time on!

2010-06-06 Thread Nathan

Reto Bachmann-Gmuer wrote:

On Sun, Jun 6, 2010 at 9:44 PM, Nathan nat...@webr3.org wrote:


so are we saying that all of these express the same:

:me foaf:name 'nathan'^^xsd:string .

:me foaf:name [
  rdf:value 'nathan'^^xsd:string .
] .


I find it hard to communicate using a term that has no meaning on its own
(according to: http://www.w3.org/TR/rdf-schema/#ch_value)


:me foaf:name :myname .
:myname rdf:value 'nathan'^^xsd:string .


meaningless rdf:value again


Anything stopping me creating an ex:value which does have a strong 
meaning and definition where in usage both of the following express the 
same:


:me foaf:name 'nathan'^^xsd:string .

:me foaf:name [ ex:value 'nathan'^^xsd:string ] .

or does a property with this meaning exist (?) / any input on just how 
one would define this with rdfs/owl
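
One way to at least declare such a property, as a sketch (note that an RDFS/OWL 
declaration alone does not make the two forms above logically equivalent; that 
equivalence is what the owl:sameAs suggestions elsewhere in this thread provide):

  @prefix ex:   <http://example.org/ns#> .
  @prefix owl:  <http://www.w3.org/2002/07/owl#> .
  @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

  ex:value a owl:DatatypeProperty ;
      rdfs:label 'value' ;
      rdfs:comment 'Relates a value-bearing node to the literal it stands for.' ;
      rdfs:range rdfs:Literal .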


Best,

Nathan



Re: An idea I need help with, or told to stop wasting time on!

2010-06-06 Thread Story Henry

On 6 Jun 2010, at 22:22, Nathan wrote:

 
 Anything stopping me creating an ex:value which does have a strong meaning 
 and definition where in usage both of the following express the same:
 
 :me foaf:name 'nathan'^^xsd:string .
 
 :me foaf:name [ ex:value 'nathan'^^xsd:string ] .

seems to me you want owl:sameAs .

Henry


RE: An idea I need help with, or told to stop wasting time on!

2010-06-06 Thread Michael Schneider
Hi!

Just a few notes concerning your ideas and OWL DL (I don't know whether this
is important for you or not, but some people might find it relevant):

Nathan wrote:

1: Introduce a 'value' property

Where currently we can say:
  :me foaf:name 'nathan' .

I'd propose introducing an ex:value property that allows us to say:
  :me foaf:name [ ex:value 'nathan' ] .
or
  :me foaf:name :myname .
  :myname ex:value 'nathan' .

The FOAF spec defines foaf:name as an owl:DatatypeProperty [1], and it therefore
cannot be used in OWL DL with blank nodes (representing anonymous
individuals) or URIs (representing named individuals) in object position. No
problem with RDF(S) or OWL Full, though.

I'm hoping the basic human understanding of this is pretty obvious;
sorting out the domain & range of ex:value, the class of :myname, ontology
details and related are hurting my head a bit at the minute.

And thus you could describe a value:

  :me foaf:name [
  ex:value 'nathan' ;
  ex:type xsd:string ;

In OWL DL, you cannot use a datatype name in the object position of an
object or data property (ex:type will be either an object property or a data
property). If this was a typo and you really mean rdf:type, it is still
not allowed in OWL DL, since xsd:string is not a class in OWL DL. Again, it
is fine in RDF(S) or OWL Full.

And do some funkier stuff:

  :me foaf:mbox :myemail ;
  :myemail ex:value mailto:nat...@webr3.org ;

Above, you use ex:value with literals, but here you use it with a URI. In
OWL DL, you have to decide for one: either ex:value is a data property, then
URIs are not allowed; or it is an object property, then literals are not
allowed. Again, no problems with RDF(S) and OWL Full.
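To make that concrete, staying inside OWL DL would mean splitting the property
into two, along these lines (ex:valueRef is just an invented name for the
object-property half):

@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix ex:  <http://example.org/ns#> .   # made-up namespace

ex:value    a owl:DatatypeProperty .   # literal values: 'nathan', dates, numbers
ex:valueRef a owl:ObjectProperty .     # URI values: mailto:..., other resources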

Michael

[1] http://xmlns.com/foaf/spec/20100101.rdf

--
Dipl.-Inform. Michael Schneider
Research Scientist, Information Process Engineering (IPE)
Tel  : +49-721-9654-726
Fax  : +49-721-9654-727
Email: michael.schnei...@fzi.de
WWW  : http://www.fzi.de/michael.schneider
===
FZI Forschungszentrum Informatik an der Universität Karlsruhe
Haid-und-Neu-Str. 10-14, D-76131 Karlsruhe
Tel.: +49-721-9654-0, Fax: +49-721-9654-959
Stiftung des bürgerlichen Rechts, Az 14-0563.1, RP Karlsruhe
Vorstand: Prof. Dr.-Ing. Rüdiger Dillmann, Dipl. Wi.-Ing. Michael Flor,
Prof. Dr. Dr. h.c. Wolffried Stucky, Prof. Dr. Rudi Studer
Vorsitzender des Kuratoriums: Ministerialdirigent Günther Leßnerkraus
===




Please help to point the way,

2010-04-27 Thread Nathan
All,

There is a burgeoning little community on the web [1] that is getting a
lot of interest and offers from many parties [2]. The community is built
around the very new OpenLike, which is rather important because it could
easily end up being the glue between the social networks and people.

In every way they are crying out for linked data, to the point that they
are even re-inventing many of the main principles themselves.

Out of many, many examples [3] (every thread over the past few days)
here's the latest:

Whenever you OLike something it adds the URI to the relevant Wikipedia
page [4]

If you have any time at all, please do help to nudge them in the right
direction, and save the web from yet another workaround instead of a
solution.

Best,

Nathan

[1] http://openlike.org
[2]
http://groups.google.com/group/openlike/browse_thread/thread/c57331cf8324765e?hl=en
[3] http://groups.google.com/group/openlike?hl=en
[4]
http://groups.google.com/group/openlike/browse_thread/thread/c10d10b4976bbc21?hl=en



Re: Earthquake in Chile; Ideas? to help

2010-03-01 Thread Massimo Di Pierro

This software was used for Haiti

  http://www.sahanapy.org/

Here it is in production for Haiti

  http://haiti.sahanafoundation.org/prod/

and here it is for Chile

  http://chile.sahanafoundation.org/live

It is based on web2py and it would be trivial to add RDF tags since  
web2py has native support for linked data :


   http://web2py.com/semantic

Currently the database schema of SahanaPy has not yet been tagged, but
they always look for volunteers


   http://groups.google.com/group/michipug/browse_thread/thread/e3e7700e7970059

Massimo


On Feb 28, 2010, at 8:50 PM, Aldo Bucchi wrot


Hi,

As many of you probably know, we just had a mega quake here in Chile.
This next week will be critical in terms of logistics, finding lost
people... and as you probably know it is all about information in the
end.

In a scenario like this, everything is chaotic.

We will soon have a SPARQL endpoint available with all the data we can
find, hoping that people around the world can extract some insights.
In the meantime, I would love to hear any kind of ideas.

They needn't be high tech. Sometimes simple ideas go a long way!

Thanks!
A


--
Aldo Bucchi
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/







Re: Earthquake in Chile; Ideas? to help

2010-03-01 Thread Massimo Di Pierro
The Sahana project has been around for some time and it seems to be  
the best Open Source Disaster Management Software available out there.


http://en.wikipedia.org/wiki/Sahana_FOSS_Disaster_Management_System

They recently moved a from PHP to Python/web2py and renamed it SahanaPy.
I do not know much more but the guy in charge of development is Fran  
Boon francisb...@googlemail.com.


A student of mine worked on the linked data extension for web2py but  
that has not yet been integrated with SahanaPy although it would be  
trivial (it just requires annotation of the tables in RDF) but I do  
not know if there are specific ontologies for this kind of data.


Massimo


On Mar 1, 2010, at 2:09 PM, Aldo Bucchi wrote:


Hi,

On Mon, Mar 1, 2010 at 10:04 AM, Massimo Di Pierro
mdipie...@cs.depaul.edu wrote:

This software was used for Haiti

 http://www.sahanapy.org/

Here it is in production for Haiti

 http://haiti.sahanafoundation.org/prod/

and here it is for Chile

 http://chile.sahanafoundation.org/live


Oh.
Who put this up?
I am impressed by (and thankful for) the amount of effort we are
not aware of!

Now. The issue is quickly starting to become: data integration.

( I am sure someone has been saying this for eons. He's big, darK...
can't remember his name though ;).

I will fwd this to people on the team. If you have any idea, the list
of developers coordinating this is:
digitales-por-ch...@googlegroups.com

Thanks!



It is based on web2py and it would be trivial to add RDF tags since  
web2py

has native support for linked data :

  http://web2py.com/semantic


OK cool.
Now, just to be clear: semantic is not really a requirement. We just
need to make things better.

Thanks!
( Leo: I copy you directly cuz you're the python man here )



Currently the database schema of Sahanapy has not yet been tagged  
but they

alway look for volunteers


http://groups.google.com/group/michipug/browse_thread/thread/e3e7700e7970059

Massimo


On Feb 28, 2010, at 8:50 PM, Aldo Bucchi wrot


Hi,

As many of you probably know, we just had a mega quake here in  
Chile.

This next week will be critical in terms of logistics, finding lost
people... and as you probably know it is all about information in  
the

end.

In a scenario like this, everything is chaotic.

We will soon have a SPARQL endpoint available with all the data we  
can

find, hoping that people around the world can extract some insights.
In the meantime, I would love to hear any kind of ideas.

They needn't be high tech. Sometimes simple ideas go a long way!

Thanks!
A


--
Aldo Bucchi
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/









--
Aldo Bucchi
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/






Re: Earthquake in Chile; Ideas? to help

2010-03-01 Thread Patrick Durusau

Aldo,

On 3/1/2010 3:09 PM, Aldo Bucchi wrote:

Hi,

On Mon, Mar 1, 2010 at 10:04 AM, Massimo Di Pierro
mdipie...@cs.depaul.edu  wrote:
   

This software was used for Haiti

  http://www.sahanapy.org/

Here it is in production for Haiti

  http://haiti.sahanafoundation.org/prod/

and here it is for Chile

  http://chile.sahanafoundation.org/live
 

Oh.
Who put this up?
I am impressed by ( and thankful of ) the amount of efforts we are not aware of!
Now. The issue is quickly starting to become: data integration.

   

Data integration is always the issue.

Ever since there were two languages for communication. ;-)

Rather than attempting to choose (or force) a choice of an ontology, I
would suggest creating a topic map to provide an integration layer over
diverse data. That allows users to work with the systems most familiar to
them, which should let them get underway more quickly as well as more
accurately, while still allowing for integration of diverse sources.


Besides, as the relief effort evolves, things not contemplated at the
outset are going to arise. Topic maps can easily adapt to include such
information.


Hope you are having a great day!

Patrick

( I am sure someone has been saying this for eons. He's big, darK...
can't remember his name though ;).

I will fwd this to people on the team. If you have any idea, the list
of developers coordinating this is:
digitales-por-ch...@googlegroups.com

Thanks!

   

It is based on web2py and it would be trivial to add RDF tags since web2py
has native support for linked data :

   http://web2py.com/semantic
 

OK cool.
Now, just to be clear: semantic is not really a requirement. We just
need to make things better.

Thanks!
( Leo: I copy you directly cuz you're the python man here )

   

Currently the database schema of Sahanapy has not yet been tagged but they
alway look for volunteers


http://groups.google.com/group/michipug/browse_thread/thread/e3e7700e7970059

Massimo


On Feb 28, 2010, at 8:50 PM, Aldo Bucchi wrot

 

Hi,

As many of you probably know, we just had a mega quake here in Chile.
This next week will be critical in terms of logistics, finding lost
people... and as you probably know it is all about information in the
end.

In a scenario like this, everything is chaotic.

We will soon have a SPARQL endpoint available with all the data we can
find, hoping that people around the world can extract some insights.
In the meantime, I would love to hear any kind of ideas.

They needn't be high tech. Sometimes simple ideas go a long way!

Thanks!
A


--
Aldo Bucchi
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/


   


 



   


--
Patrick Durusau
patr...@durusau.net
Chair, V1 - US TAG to JTC 1/SC 34
Convener, JTC 1/SC 34/WG 3 (Topic Maps)
Editor, OpenDocument Format TC (OASIS), Project Editor ISO/IEC 26300
Co-Editor, ISO/IEC 13250-1, 13250-5 (Topic Maps)




Earthquake in Chile; Ideas? to help

2010-02-28 Thread Aldo Bucchi
Hi,

As many of you probably know, we just had a mega quake here in Chile.
This next week will be critical in terms of logistics, finding lost
people... and as you probably know it is all about information in the
end.

In a scenario like this, everything is chaotic.

We will soon have a SPARQL endpoint available with all the data we can
find, hoping that people around the world can extract some insights.
In the meantime, I would love to hear any kind of ideas.

They needn't be high tech. Sometimes simple ideas go a long way!

Thanks!
A


-- 
Aldo Bucchi
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.



Can anyone help with an XSLT GRDDL conversion of Open Packaging Format (OPF) into RDF/XML Dublin Core

2010-01-28 Thread Dan Brickley
Hi all

http://www.idpf.org/2007/opf/OPF_2.0_final_spec.html#AppendixA defines
a Dublin Core-based XML metadata format used for ebooks.

This is very nice but a little disconnected from other Dublin Core
data in RDF. It would be great to have some XSLT to explore closer
integration and use of newer Dublin Core idioms (including
http://purl.org/dc/terms/).

Anyone got the time / expertise to explore this?

A related task would be to track down some actual OPF data to convert.
You don't need be an XSLT guru to do this :)

There's a forum at
http://www.idpf.org/forums/viewforum.php?f=5sid=4b4d5b89baf1300bd0f258e0715610e5
with some pointers to data. For example,

"I am pleased to announce that Adobe InDesign CS3 now supports the
direct generation of OCF-packaged OPS content. A sample generated
directly from InDesign CS3 can be found at:
http://www.idpf.org/2007/ops/samples/TwoYearsBeforeTheMast.epub"

...which is a .zip package containing a file content.opf, the
beginning of which I'll excerpt below.

Thanks for any help exploring this. I found 3 examples in the forum,
the metadata section of the .opf files are extracted below. As we
think about RDFizing these, I think there are two aspects: firstly,
getting modern RDF triples from the data as-is. This might take some
care to figure out what role= should be, etc. But also secondly,
thinking how the format could be enriched in future iterations, so
that linked data URIs are used, eg. for those LCSH headings. At the
moment they have <dc:subject>lcsh: Czech Americans—Fiction.</dc:subject>
but it would be nice if
http://id.loc.gov/authorities/sh2009122741#concept was in there
somewhere (instead, as well?).

I'm sure any help working through these practicalities would be
appreciated both by the OPF folk and by Dublin Core...

cheers,

Dan




example 1: http://www.idpf.org/2007/ops/samples/TwoYearsBeforeTheMast.epub

<?xml version="1.1"?>
<package xmlns="http://www.idpf.org/2007/opf" version="2.0"
         unique-identifier="bookid">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:title>Two Years Before the Mast</dc:title>
    <dc:creator>Richard H. Dana Jr.</dc:creator>
    <dc:subject>19th Century</dc:subject>
    <dc:subject>California</dc:subject>
    <dc:subject>Sailors' life</dc:subject>
    <dc:subject>fur trade</dc:subject>
    <dc:description>Two years at sea on the coast of California</dc:description>
    <dc:identifier
      id="bookid">urn:uuid:4618c86c-f508-11db-8314-0800200c9a66</dc:identifier>
  </metadata>
  <manifest>
    <item id="ncx" href="toc.ncx" media-type="text/xml"/>
    <item id="introduction" href="Introduction.html"
          media-type="application/xhtml+xml"/>
    <item id="chapteri" href="ChapterI.html"
          media-type="application/xhtml+xml"/>
...
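Just to make the "modern RDF triples from the data as-is" target concrete, one
possible (and debatable) Turtle rendering of that metadata block - whether the
urn:uuid is the right subject, and whether creators and subjects should stay
literals or become URIs, is exactly the sort of thing to work out:

@prefix dcterms: <http://purl.org/dc/terms/> .

<urn:uuid:4618c86c-f508-11db-8314-0800200c9a66>
    dcterms:title       "Two Years Before the Mast" ;
    dcterms:creator     "Richard H. Dana Jr." ;
    dcterms:subject     "19th Century", "California", "Sailors' life", "fur trade" ;
    dcterms:description "Two years at sea on the coast of California" .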



example 2: http://www.idpf.org/2007/ops/samples/hauy.epub

<package xmlns="http://www.idpf.org/2007/opf" version="2.0"
         unique-identifier="uid">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/"
            xmlns:opf="http://www.idpf.org/2007/opf">
    <dc:title>Valentin Haüy - the father of the education
    for the blind</dc:title>
    <dc:creator>Beatrice Christensen Sköld</dc:creator>
    <dc:publisher>TPB</dc:publisher>
    <dc:date opf:event="publication">2006-03-23</dc:date>
    <dc:date opf:event="creation">2007-08-09</dc:date>
    <dc:identifier id="uid">C0</dc:identifier>
    <dc:language>en</dc:language>
    <meta name="generator" content="Daisy Pipeline OPS Creator" />
  </metadata>


example 3: http://www.idpf.org/2007/ops/samples/myantonia.epub

<package version="2.0"
         unique-identifier="PrimaryID"
         xmlns="http://www.idpf.org/2007/opf">

  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/"
            xmlns:opf="http://www.idpf.org/2007/opf">
    <dc:title>My Ántonia</dc:title>
    <dc:identifier id="PrimaryID"
      opf:scheme="URN">urn:uuid:14c77a9a-e849-11db-8314-0800200c9a66</dc:identifier>
    <dc:language>en-US</dc:language>
    <dc:creator opf:role="aut" opf:file-as="Cather, Willa Sibert">Willa
      Cather</dc:creator>
    <dc:creator opf:role="ill" opf:file-as="Benda, Wladyslaw Theodor">W.
      T. Benda</dc:creator>
    <dc:contributor opf:role="edt" opf:file-as="Noring, Jon E.">Jon E.
      Noring</dc:contributor>
    <dc:contributor opf:role="edt" opf:file-as="Menéndez, José">José
      Menéndez</dc:contributor>
    <dc:contributor opf:role="mdc" opf:file-as="Noring, Jon E.">Jon E.
      Noring</dc:contributor>
    <dc:contributor opf:role="trc" opf:file-as="Noring, Jon E.">Jon E.
      Noring</dc:contributor>
    <dc:publisher>DigitalPulp Publishing</dc:publisher>
    <dc:description>My Ántonia is considered to be Willa S. Cather’s best
      work, first published in 1918. It is a fictional account (inspired by
      Cather’s childhood years) of the pioneer prairie settlers in late 19th
      century Nebraska. This version, intended for general readers, is a
      faithful, highly-proofed, and modestly modernized transcription of the
      First Edition, with text corrections by José
      Menéndez.</dc:description>
    <dc:coverage>Nebraska prairie, late 19th and early 20th Centuries
      C.E.</dc:coverage>
    <dc:source>First Edition of My Ántonia, published by the Riverside
      Press Cambridge, Houghton
Re: Need help mapping two letter country code to URI

2009-11-11 Thread Hugh Glaser
Absolutement.
More authoritative sources would be great – in fact dbpedia may be as good as 
it gets, as it often is (at least for the moment).

As far as sameAs .org is concerned, I only (aim to, but sometimes I fail) 
include URIs that resolve to RDF.
And of course, often need others to have done the hard work of establishing the 
link.
So even for the excellent http://www.fao.org/countryprofiles/geoinfo.asp there 
are (as far as I can see) no resolvable URIs, nor any sameAs links.

Best
Hugh


On 10/11/2009 09:20, Bernard Vatant bernard.vat...@mondeca.com wrote:

Hugh

The actual problem, as is well shown by your sameas.org http://sameas.org  
example, is not the lack of URIs for countries, but to figure out which are 
cool (stable, authoritative, published following best practices). sameas.org 
http://sameas.org  yields 23 URIs for Austria, 29 for France etc.
Supposing they are all really equivalent in the strong owl:sameAs sense, any 
of those should do, but ...
On the other hand, maybe more authoritative sources are absent of the 
sameas.org http://sameas.org  list, such as the excellent FAO ontology 
pointed by Dan. And above all, which is definitely missing are sets of URIs 
published by ISO itself.
There is an ongoing work aiming at authoritative URIs for ISO 639-2 languages 
by its registration authority at Library of Congress. 
http://www.loc.gov/standards/iso639-2/. As I understand, those URI will be 
published under http://id.loc.gov/authorities/, so watch this space. I cc 
Rebecca Guenther who is in charge of this effort at LoC, she'll certainly be 
able to provide update about this, and maybe she's aware of some equivalent 
effort for ISO 3166-1. But according to an exchange I had with her a while ago, 
ISO itself might be years away from publication under its own namespace, 
unfortunately.

Bernard


2009/11/9 Hugh Glaser h...@ecs.soton.ac.uk
There are quite a few, but I don't know which other ones follow ISO 3166-1.
http://sameas.org/?uri=http://dbpedia.org/resource/Austria
Gives a selection.
Or also
http://unlocode.rkbexplorer.com/id/AT
http://ontologi.es/place/AT

Our site, http://unlocode.rkbexplorer.com/id/AT
is our capture of UN/LOCODE 2009-1, the United Nations Code for Trade and
Transport Locations, which uses the 2-letter country codes from ISO 3166-1,
as well as the 1-3 letter subdivision codes of ISO 3166-2
See http://www.unece.org/cefact/locode/
It also gives inclusion and coords, etc.
We need to do more coref to other than onologi.es http://onologi.es  .

Best
Hugh

On 09/11/2009 21:47, Aldo Bucchi aldo.buc...@gmail.com wrote:

 Hi,

 I found a dataset that represents countries as two letter country
 codes: DK, FI, NO, SE, UK.
 I would like to turn these into URIs of the actual countries they represent.

 ( I have no idea on whether this follows an ISO standard or is just
 some private key in this system ).

 Any ideas on a set of candidata URIs? I would like to run a complete
 coverage test and take care I don't introduce distortion ( that is
 pretty easy by doing some heuristic tests against labels, etc ).

 There are some border cases that suggest this isn't ISO3166-1, but I
 am not sure yet. ( and if it were, which widely used URIs are based on
 this standard? ).

 Thanks!
 A







Re: Need help mapping two letter country code to URI

2009-11-11 Thread Ross Singer
Coincidentally, I'm 2/3 of the way through providing the easily
mappable MARC codes lists to RDF.

Here are Geographic Area Codes:

http://purl.org/NET/marccodes/gacs/n-usa#location

(Use the values here: http://www.loc.gov/marc/geoareas/gacshome.html)

And Languages:

http://purl.org/NET/marccodes/languages/eng#lang

(see:  http://www.loc.gov/marc/languages/langhome.html)

Tomorrow morning come the countries list:

http://www.loc.gov/marc/countries/cou_home.html

From there, well, we'll see how well I can match Geonames.

-Ross.

On Wed, Nov 11, 2009 at 5:26 PM, Hugh Glaser h...@ecs.soton.ac.uk wrote:
 Absolutement.
 More authoritative sources would be great – in fact dbpedia may be as good as 
 it gets, as it often is (at least for the moment).

 As far as sameAs .org is concerned, I only (aim to, but sometimes I fail) 
 include URIs that resolve to RDF.
 And of course, often need others to have done the hard work of establishing 
 the link.
 So even for the excellent http://www.fao.org/countryprofiles/geoinfo.asp 
 there are (as far as I can see) no resolvable URIs, nor any sameAs links.

 Best
 Hugh


 On 10/11/2009 09:20, Bernard Vatant bernard.vat...@mondeca.com wrote:

 Hugh

 The actual problem, as is well shown by your sameas.org http://sameas.org  
 example, is not the lack of URIs for countries, but to figure out which are 
 cool (stable, authoritative, published following best practices). 
 sameas.org http://sameas.org  yields 23 URIs for Austria, 29 for France etc.
 Supposing they are all really equivalent in the strong owl:sameAs sense, 
 any of those should do, but ...
 On the other hand, maybe more authoritative sources are absent of the 
 sameas.org http://sameas.org  list, such as the excellent FAO ontology 
 pointed by Dan. And above all, which is definitely missing are sets of URIs 
 published by ISO itself.
 There is an ongoing work aiming at authoritative URIs for ISO 639-2 languages 
 by its registration authority at Library of Congress. 
 http://www.loc.gov/standards/iso639-2/. As I understand, those URI will be 
 published under http://id.loc.gov/authorities/, so watch this space. I cc 
 Rebecca Guenther who is in charge of this effort at LoC, she'll certainly be 
 able to provide update about this, and maybe she's aware of some equivalent 
 effort for ISO 3166-1. But according to an exchange I had with her a while 
 ago, ISO itself might be years away from publication under its own 
 namespace, unfortunately.

 Bernard


 2009/11/9 Hugh Glaser h...@ecs.soton.ac.uk
 There are quite a few, but I don't know which other ones follow ISO 3166-1.
 http://sameas.org/?uri=http://dbpedia.org/resource/Austria
 Gives a selection.
 Or also
 http://unlocode.rkbexplorer.com/id/AT
 http://ontologi.es/place/AT

 Our site, http://unlocode.rkbexplorer.com/id/AT
 is our capture of UN/LOCODE 2009-1, the United Nations Code for Trade and
 Transport Locations, which uses the 2-letter country codes from ISO 3166-1,
 as well as the 1-3 letter subdivision codes of ISO 3166-2
 See http://www.unece.org/cefact/locode/
 It also gives inclusion and coords, etc.
 We need to do more coref to other than onologi.es http://onologi.es  .

 Best
 Hugh

 On 09/11/2009 21:47, Aldo Bucchi aldo.buc...@gmail.com wrote:

 Hi,

 I found a dataset that represents countries as two letter country
 codes: DK, FI, NO, SE, UK.
 I would like to turn these into URIs of the actual countries they represent.

 ( I have no idea on whether this follows an ISO standard or is just
 some private key in this system ).

 Any ideas on a set of candidata URIs? I would like to run a complete
 coverage test and take care I don't introduce distortion ( that is
 pretty easy by doing some heuristic tests against labels, etc ).

 There are some border cases that suggest this isn't ISO3166-1, but I
 am not sure yet. ( and if it were, which widely used URIs are based on
 this standard? ).

 Thanks!
 A









Re: Need help mapping two letter country code to URI

2009-11-10 Thread Bernard Vatant
Hugh

The actual problem, as is well shown by your sameas.org example, is not the
lack of URIs for countries, but figuring out which ones are cool (stable,
authoritative, published following best practices). sameas.org yields 23
URIs for Austria, 29 for France, etc.
Supposing they are all really equivalent in the strong owl:sameAs sense,
any of those should do, but ...
On the other hand, maybe more authoritative sources are absent from the
sameas.org list, such as the excellent FAO ontology pointed to by Dan. And
above all, what is definitely missing is a set of URIs published by ISO
itself.
There is ongoing work aiming at authoritative URIs for ISO 639-2 languages
by its registration authority, the Library of Congress:
http://www.loc.gov/standards/iso639-2/. As I understand it, those URIs will be
published under http://id.loc.gov/authorities/, so watch this space. I cc
Rebecca Guenther, who is in charge of this effort at LoC; she'll certainly be
able to provide an update about this, and maybe she's aware of some equivalent
effort for ISO 3166-1. But according to an exchange I had with her a while
ago, ISO itself might be years away from publication under its own
namespace, unfortunately.

Bernard


2009/11/9 Hugh Glaser h...@ecs.soton.ac.uk

 There are quite a few, but I don't know which other ones follow ISO 3166-1.
 http://sameas.org/?uri=http://dbpedia.org/resource/Austria
 Gives a selection.
 Or also
 http://unlocode.rkbexplorer.com/id/AT
 http://ontologi.es/place/AT

 Our site, http://unlocode.rkbexplorer.com/id/AT
 is our capture of UN/LOCODE 2009-1, the United Nations Code for Trade and
 Transport Locations, which uses the 2-letter country codes from ISO 3166-1,
 as well as the 1-3 letter subdivision codes of ISO 3166-2
 See http://www.unece.org/cefact/locode/
 It also gives inclusion and coords, etc.
 We need to do more coref to other than onologi.es .

 Best
 Hugh

 On 09/11/2009 21:47, Aldo Bucchi aldo.buc...@gmail.com wrote:

  Hi,
 
  I found a dataset that represents countries as two letter country
  codes: DK, FI, NO, SE, UK.
  I would like to turn these into URIs of the actual countries they
 represent.
 
  ( I have no idea on whether this follows an ISO standard or is just
  some private key in this system ).
 
  Any ideas on a set of candidata URIs? I would like to run a complete
  coverage test and take care I don't introduce distortion ( that is
  pretty easy by doing some heuristic tests against labels, etc ).
 
  There are some border cases that suggest this isn't ISO3166-1, but I
  am not sure yet. ( and if it were, which widely used URIs are based on
  this standard? ).
 
  Thanks!
  A





-- 
Bernard Vatant
Senior Consultant
Vocabulary  Data Engineering
Tel:   +33 (0) 971 488 459
Mail: bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:http://www.mondeca.com
Blog:http://mondeca.wordpress.com



Re: Need help mapping two letter country code to URI

2009-11-09 Thread Martin Hepp (UniBW)

Hi Aldo,

Note that there are multiple branches of the ISO 3166 family of codes. 
See pages 23 and 24 of the GoodRelations Technical Report 
(http://www.heppnetz.de/projects/goodrelations/GoodRelations-TR-final.pdf) 
for a more detailed discussion. I am still not aware of any 
authoritative URI schema for ISO 3166, which is why GoodRelations uses 
string literals for that code.


The key ISO page http://www.iso.org/iso/country_codes.htm also does not
refer to any established HTTP or URN URI scheme for the ISO 3166 family
of codes.


I assume that dbPedia URIs may be well suited, but they are not as 
authoritative. If they have ISO 3166 codes attached via properties, 
entity consolidation on that basis may be relatively simple.


Below, please find an excerpt from the discussion re identifiers for 
countries in the GoodRelations Technical Report:


Country or Region

...

GoodRelations could reuse several approaches for ontologies of regions 
and places for
specifying Countries and Regions. However, we suggest a more pragmatic 
approach of
reusing the ISO Standard 3166, in particular ISO 3166-1 (ISO, 2006) and 
ISO 3166-2
(ISO, 1998). The first defines 2- or 3-letter identifiers for existing 
countries and a few
independent geopolitical entities. ISO 3166-1 alpha-2 defines 2-letter 
codes for most
countries. There exist alternative standards with 3-letter codes and a 
numerical
representation. For the following reasons, we suggest using the 2-letter 
codes: First, they
are well established and people are likely more familiar with them (they 
are also used for
most top-level domains). Second, and more important, the 2-letter 
variant is the basis for
ISO 3166-2, which breaks down the countries from ISO 3166-1 into 
administrative
subdivisions (ISO, 1998). The code elements used in ISO 3166-2 consist 
of “the alpha-2
code element from ISO 3166-1 followed by a separator and a further 
string of up to three

alphanumeric characters e. g.” (from:
http://www.iso.org/iso/en/prods-services/iso3166ma/04background-on-iso-3166/iso3166-2.html).
This allows using simple string operations on the respective ISO 3166 
codes in order to
handle administrative subdivisions. For example, if a certain Offering 
is said to be valid
for Canada (ISO 3166-1 two-letter code “CA”), then one can infer that 
any longer search
string specifying an administrative subdivision of Canada (e.g. British 
Columbia, ISO

3166-2 “CA-BC”) is also an eligible region.
Examples: Canada (CA), Austria (AT), Canada: British Columbia (CA-BC), 
Italy (IT),

Italy: Province of Milano (IT-MI)

Note: More complex modeling of Countries and Regions may be useful in some
scenarios, and GoodRelations can be imported and extended if necessary. 
However,
most offerings on the Web contain statements on the level of countries 
only, for which

ISO 3166-1 is sufficient and very common.
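In GoodRelations markup that ends up looking roughly like the following sketch
(from memory - please check the spec for the exact property name and datatype;
the ex: namespace and offering are invented):

@prefix gr:  <http://purl.org/goodrelations/v1#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:  <http://example.org/offers#> .

ex:someOffering a gr:Offering ;
    gr:eligibleRegions "CA"^^xsd:string .

A consumer searching for offers valid in British Columbia can then accept this
offering, because the search string "CA-BC" begins with "CA-".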

Martin



Aldo Bucchi wrote:

Hi,

I found a dataset that represents countries as two letter country
codes: DK, FI, NO, SE, UK.
I would like to turn these into URIs of the actual countries they represent.

( I have no idea on whether this follows an ISO standard or is just
some private key in this system ).

Any ideas on a set of candidata URIs? I would like to run a complete
coverage test and take care I don't introduce distortion ( that is
pretty easy by doing some heuristic tests against labels, etc ).

There are some border cases that suggest this isn't ISO3166-1, but I
am not sure yet. ( and if it were, which widely used URIs are based on
this standard? ).

Thanks!
A

  


--
--
martin hepp
e-business  web science research group
universitaet der bundeswehr muenchen

e-mail:  h...@ebusiness-unibw.org
phone:   +49-(0)89-6004-4217
fax: +49-(0)89-6004-4620
www: http://www.unibw.de/ebusiness/ (group)
http://www.heppnetz.de/ (personal)
skype:   mfhepp 
twitter: mfhepp


Check out GoodRelations for E-Commerce on the Web of Linked Data!
=

Webcast:
http://www.heppnetz.de/projects/goodrelations/webcast/

Recipe for Yahoo SearchMonkey:
http://www.ebusiness-unibw.org/wiki/GoodRelations_and_Yahoo_SearchMonkey

Talk at the Semantic Technology Conference 2009: 
Semantic Web-based E-Commerce: The GoodRelations Ontology

http://www.slideshare.net/mhepp/semantic-webbased-ecommerce-the-goodrelations-ontology-1535287

Overview article on Semantic Universe:
http://www.semanticuniverse.com/articles-semantic-web-based-e-commerce-webmasters-get-ready.html

Project page:
http://purl.org/goodrelations/

Resources for developers:
http://www.ebusiness-unibw.org/wiki/GoodRelations

Tutorial materials:
CEC'09 2009 Tutorial: The Web of Data for E-Commerce: A Hands-on Introduction to the GoodRelations Ontology, RDFa, and Yahoo! SearchMonkey 
http://www.ebusiness-unibw.org/wiki/Web_of_Data_for_E-Commerce_Tutorial_IEEE_CEC%2709




Re: Need help mapping two letter country code to URI

2009-11-09 Thread Dan Brickley
On Mon, Nov 9, 2009 at 10:47 PM, Aldo Bucchi aldo.buc...@gmail.com wrote:
 Hi,

 I found a dataset that represents countries as two letter country
 codes: DK, FI, NO, SE, UK.
 I would like to turn these into URIs of the actual countries they represent.

 ( I have no idea on whether this follows an ISO standard or is just
 some private key in this system ).

 Any ideas on a set of candidata URIs? I would like to run a complete
 coverage test and take care I don't introduce distortion ( that is
 pretty easy by doing some heuristic tests against labels, etc ).

 There are some border cases that suggest this isn't ISO3166-1, but I
 am not sure yet. ( and if it were, which widely used URIs are based on
 this standard? ).

http://www.fao.org/countryprofiles/geoinfo.asp might have something
useful for you?

Dan



Re: Need help mapping two letter country code to URI

2009-11-09 Thread Hugh Glaser
There are quite a few, but I don't know which other ones follow ISO 3166-1.
http://sameas.org/?uri=http://dbpedia.org/resource/Austria
Gives a selection.
Or also
http://unlocode.rkbexplorer.com/id/AT
http://ontologi.es/place/AT

Our site, http://unlocode.rkbexplorer.com/id/AT
is our capture of UN/LOCODE 2009-1, the United Nations Code for Trade and
Transport Locations, which uses the 2-letter country codes from ISO 3166-1,
as well as the 1-3 letter subdivision codes of ISO 3166-2
See http://www.unece.org/cefact/locode/
It also gives inclusion and coords, etc.
We need to do more coref to sites other than ontologi.es .

Best
Hugh

On 09/11/2009 21:47, Aldo Bucchi aldo.buc...@gmail.com wrote:

 Hi,
 
 I found a dataset that represents countries as two letter country
 codes: DK, FI, NO, SE, UK.
 I would like to turn these into URIs of the actual countries they represent.
 
 ( I have no idea on whether this follows an ISO standard or is just
 some private key in this system ).
 
 Any ideas on a set of candidata URIs? I would like to run a complete
 coverage test and take care I don't introduce distortion ( that is
 pretty easy by doing some heuristic tests against labels, etc ).
 
 There are some border cases that suggest this isn't ISO3166-1, but I
 am not sure yet. ( and if it were, which widely used URIs are based on
 this standard? ).
 
 Thanks!
 A




Need help mapping two letter country code to URI

2009-11-09 Thread Aldo Bucchi
Hi,

I found a dataset that represents countries as two letter country
codes: DK, FI, NO, SE, UK.
I would like to turn these into URIs of the actual countries they represent.

( I have no idea on whether this follows an ISO standard or is just
some private key in this system ).

Any ideas on a set of candidate URIs? I would like to run a complete
coverage test and take care I don't introduce distortion ( that is
pretty easy by doing some heuristic tests against labels, etc ).

There are some border cases that suggest this isn't ISO3166-1, but I
am not sure yet. ( and if it were, which widely used URIs are based on
this standard? ).

Thanks!
A

-- 
Aldo Bucchi
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.



Re: Need help mapping two letter country code to URI

2009-11-09 Thread Nathan
 On 09/11/2009 21:47, Aldo Bucchi aldo.buc...@gmail.com wrote:
 I found a dataset that represents countries as two letter country
 codes: DK, FI, NO, SE, UK.

http://dbpedia.org/resource/ISO_3166-2:DK
http://dbpedia.org/resource/ISO_3166-2:FI
http://dbpedia.org/resource/ISO_3166-2:NO
http://dbpedia.org/resource/ISO_3166-2:SE
http://dbpedia.org/resource/ISO_3166-2:GB (UK)

?



Re: Need help mapping two letter country code to URI

2009-11-09 Thread Jona Christopher Sahnwaldt
On Mon, Nov 9, 2009 at 23:59, Nathan nat...@webr3.org wrote:
 On 09/11/2009 21:47, Aldo Bucchi aldo.buc...@gmail.com wrote:
 I found a dataset that represents countries as two letter country
 codes: DK, FI, NO, SE, UK.

 http://dbpedia.org/resource/ISO_3166-2:DK
 http://dbpedia.org/resource/ISO_3166-2:FI
 http://dbpedia.org/resource/ISO_3166-2:NO
 http://dbpedia.org/resource/ISO_3166-2:SE
 http://dbpedia.org/resource/ISO_3166-2:GB (UK)

 ?



With '-1' instead of '-2', these all dbpprop:redirect to their
respective countries:

http://dbpedia.org/resource/ISO_3166-1:DK
http://dbpedia.org/resource/ISO_3166-1:FI
http://dbpedia.org/resource/ISO_3166-1:NO
http://dbpedia.org/resource/ISO_3166-1:SE
http://dbpedia.org/resource/ISO_3166-1:GB

I guess this pattern is quite reliable, because some
people at Wikipedia were rather diligent:

http://en.wikipedia.org/wiki/Category:Redirects_from_ISO_3166
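So, very roughly, the data to rely on looks like this (dbpprop: standing for
http://dbpedia.org/property/, if I'm reading the dumps right; the record URIs
in the second statement are of course made up):

@prefix dbpprop: <http://dbpedia.org/property/> .

<http://dbpedia.org/resource/ISO_3166-1:DK>
    dbpprop:redirect <http://dbpedia.org/resource/Denmark> .

# ...so a record carrying the bare code "DK" could end up linked as:
<http://example.org/records/42>
    <http://example.org/ns#country> <http://dbpedia.org/resource/Denmark> .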




Re: Subjects Tagging - Help?

2009-11-04 Thread Toby Inkster
On Tue, 2009-11-03 at 18:16 +, Nathan wrote:
 Hoping for a little bit of guidance here on tagging  assigning
 subjects to content etc - I can't quite grasp how to describe what an
 item of content is about

# TIMTOWTDI

# Here's a few abbreviations for starters...

@prefix dc:   <http://purl.org/dc/terms/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix tags: <http://www.holygoat.co.uk/owl/redwood/0.1/tags/> .
@prefix moat: <http://moat-project.org/ns#> .

# Tags I model as being instances of both skos:Concept and tags:Tag.
# The latter is a subclass of the former anyway, but my store doesn't
# do inferencing.

</tag/linux/#concept>
    a skos:Concept, tags:Tag ;
    skos:prefLabel "linux"@en ;
    tags:name "linux"@en ;
    rdfs:label "linux"@en .

# I associate a page with each tag, for linked data goodness.

</tag/linux/#concept>
    foaf:isPrimaryTopicOf </tag/linux/> .

</tag/linux/>
    foaf:primaryTopic </tag/linux/#concept> .

# I link from tags to the real-world things they describe.

</tag/linux/#concept>
    moat:localMeaning <http://dbpedia.org/resource/Linux> .

# Finally, I link from pages to tags.

</article/linux-rocks/>
    dc:subject </tag/linux/#concept> ;
    tags:taggedWithTag </tag/linux/#concept> .

</article/linux-sucks/>
    dc:subject </tag/linux/#concept> ;
    tags:taggedWithTag </tag/linux/#concept> .

# When outputting article tags in RDFa, I use:
#
# <a rel="dc:subject tags:taggedWithTag tag"
#    href="/tag/linux/#concept">linux</a>
#
# ... which is rel="tag"-compatible.

# TIMTOWTDI

-- 
Toby A Inkster
mailto:m...@tobyinkster.co.uk
http://tobyinkster.co.uk




Re: Subjects Tagging - Help?

2009-11-04 Thread Nathan

Toby Inkster wrote:

On Tue, 2009-11-03 at 18:16 +, Nathan wrote:

Hoping for a little bit of guidance here on tagging  assigning
subjects to content etc - I can't quite grasp how to describe what an
item of content is about


# TIMTOWTDI

# Here's a few abbreviations for starters...

@prefix dct:  http://purl.org/dc/terms/ .
@prefix foaf: http://xmlns.com/foaf/0.1/ .
@prefix rdfs: http://www.w3.org/2000/01/rdf-schema# .
@prefix skos: http://www.w3.org/2004/02/skos/core# .
@prefix tags: http://www.holygoat.co.uk/owl/redwood/0.1/tags/ .

# Tags I model as being instances of both skos:Concept and tags:Tag.
# The latter is a subclass of the former anyway, but my store doesn't
# do inferencing.

/tag/linux/#concept
a skos:Concept, tags:Tag ;
skos:prefLabel linux@en ;
tags:name linux@en ;
rdfs:label linux@en .

# I associate a page with each tag, for linked data goodness.

/tag/linux/#concept
foaf:isPrimaryTopicOf /tag/linux/ .

/tag/linux/
foaf:primaryTopic /tag/linux/#concept .

# I link from tags to the real-world things they describe.

/tag/linux/#concept
moat:localMeaning http://dbpedia.org/resource/Linux .

# Finally, I link from pages to tags.

/article/linux-rocks/
  dc:subject /tag/linux/#concept ;
  tags:taggedWithTag /tag/linux/#concept .

/article/linux-sucks/
  dc:subject /tag/linux/#concept ;
  tags:taggedWithTag /tag/linux/#concept .

# When outputting article tags in RDFa, I use:
#
# a rel=dc:subject tags:taggedWithTag tag
#href=/tag/linux/#conceptlinux/a
#
# ... which is rel=tag-compatible.

# TIMTOWTDI



Thanks Toby,

Nice way of doing it and also rel=tag compatible is a good thing; I had 
perused the source of your site at length but good to see what's behind 
it too :)


Primary question in this scenario: why go dc:subject through to a local tag,
and then associate the tag with moat:localMeaning, rather than pointing
dc:subject straight through to the dbpedia resource? ...


.. as from the dc docs I gathered that This term (dc:subject) is 
intended to be used with non-literal values as defined in the DCMI 
Abstract Model (http://dublincore.org/documents/abstract-model/). As of 
December 2007, the DCMI Usage Board is seeking a way to express this 
intention with a formal range declaration.


Also, I noted that you opted for a different doctype, "XHTML+RDFa 1.0+Role",
in demiblog (I assumed you're the core dev) - any specific notes on why you're
using a customised DTD rather than XHTML+RDFa or HTML5?


Many thanks again for the input, all v much appreciated.

Nathan



Re: Subjects Tagging - Help?

2009-11-04 Thread Toby Inkster
On Wed, 2009-11-04 at 14:26 +, Nathan wrote:
 Primary question in this scenario is why dc:subject through to local tag 
 then tag associated with moat:localMeaning rather than dc:subject 
 straight through to dbpedia resource? ...

 .. as from the dc docs I gathered that This term (dc:subject) is 
 intended to be used with non-literal values as defined in the DCMI 
 Abstract Model (http://dublincore.org/documents/abstract-model/). As of 
 December 2007, the DCMI Usage Board is seeking a way to express this 
 intention with a formal range declaration.

For me it wasn't a question of what to link to when using dc:subject;
but rather which predicate to use when linking from a foaf:Document to a
skos:Concept. dc:subject seemed the ideal solution.

Can dc:subject be used to link straight from a document to a dbpedia
resource? Maybe. The definition is sufficiently woolly. Personally if I
were linking directly between them I'd use foaf:topic/foaf:page.

 Also, noted that you opted for a different doc type XHTML+RDFa 
 1.0+Role in demiblog (assumed you're the core dev) any specific notes 
 on why you're using a customised DTD rather than XHTML+RDFa or HTML5?

I use the role attribute which is not defined by the XHTML+RDFa DTD.

http://www.w3.org/TR/xhtml-role/

-- 
Toby A Inkster
mailto:m...@tobyinkster.co.uk
http://tobyinkster.co.uk




Re: Subjects Tagging - Help?

2009-11-03 Thread Alexandre Passant

Hi Nathan,

On 3 Nov 2009, at 18:16, Nathan wrote:


Hi All,

Hoping for a little bit of guidance here on tagging  assigning  
subjects to content etc - I can't quite grasp how to describe what  
an item of content is about; particularly in the context of a normal  
blog post and with relation to tags/subjects/moat/commontag/scot etc.


In short I've build a little mashup of a few services and some  
linked data which extracts terms  subjects from an item of content;  
and now I'm unclear of which ontologies to use.



The info I can extract is tag string and mainly a dbpedia uri for  
the tag (to give it real meaning I guess)


example..
string: Nuclear program of Iraq
URI:http://dbpedia.org/resource/Nuclear_program_of_Iran

also bearing in mind that I'll typically have 5-10 of these per  
post.


On the face of it I'd assume I should be using the following for  
each tag and leaving the string literal value out of the triples  
altogether

http://purl.org/dc/terms/subject
http://www.holygoat.co.uk/owl/redwood/0.1/tags/taggedWithTag

however, with MOAT/CommonTag/SCOT (and no doubt others) added in to  
the equation I'm totally lost as which is the most fitting and  
widely recognised for tagging content in this manner; is it worth  
adding something to say that it was automatically tagged by a  
machine? or including the string literal value of the tag(s)?


SCOT does not directly address the issue of 'tag meaning' but focuses on
modelling tag clouds and making them interoperable.


MOAT and CommonTag serve the same general purpose (defining what a tag
means, in terms of URIs), so you can use whichever you like - however,
CommonTag is indexed by SearchMonkey, which is a clear advantage for it,
so I'd suggest using that one if you develop an app on the Web.


A few differences between them so far, however (this may evolve in the
future, with ongoing work on CommonTag):


- CommonTag provides ways to distinguish between ctag:AuthorTag,
ctag:ReaderTag and ctag:AutoTag, while MOAT just makes the distinction
between manual and auto-tags.
- MOAT models the tagging action (i.e. a tri/quadri-partite model,
based on - and extending - the Tag Ontology) and 'global meanings'
(which can be used if you want to set up a tag server that delivers
URIs / meanings for each tag, e.g. in a company).


Hope that helps,

Best,

Alex.



Many thanks in advance,

Nathan



--
Dr. Alexandre Passant
Digital Enterprise Research Institute
National University of Ireland, Galway
:me owl:sameAs http://apassant.net/alex .









Re: Subjects Tagging - Help?

2009-11-03 Thread Nathan

Alexandre Passant wrote:

Hi Nathan,

On 3 Nov 2009, at 18:16, Nathan wrote:


Hi All,

Hoping for a little bit of guidance here on tagging  assigning 
subjects to content etc - I can't quite grasp how to describe what an 
item of content is about; particularly in the context of a normal blog 
post and with relation to tags/subjects/moat/commontag/scot etc.


In short I've build a little mashup of a few services and some linked 
data which extracts terms  subjects from an item of content; and now 
I'm unclear of which ontologies to use.



The info I can extract is tag string and mainly a dbpedia uri for 
the tag (to give it real meaning I guess)


example..
string:Nuclear program of Iraq
URI:http://dbpedia.org/resource/Nuclear_program_of_Iran

also bearing in mind that I'll typically have 5-10 of these per post.

On the face of it I'd assume I should be using the following for each 
tag and leaving the string literal value out of the triples altogether

http://purl.org/dc/terms/subject
http://www.holygoat.co.uk/owl/redwood/0.1/tags/taggedWithTag

however, with MOAT/CommonTag/SCOT (and no doubt others) added in to 
the equation I'm totally lost as which is the most fitting and widely 
recognised for tagging content in this manner; is it worth adding 
something to say that it was automatically tagged by a machine? or 
including the string literal value of the tag(s)?


SCOT does not directly address the issue of 'tag meaning' but focus on 
modeling tagclouds and making them interoperable.


MOAT and CommonTag serve the same general purpose (defining what tag 
means, in terms of URIs) so you can use whatever you like - however, 
CommonTag is indexed by SearchMonkey so that is a clearer advantage for 
it and I'd then suggest to use that one if you develop an app on the Web.


A few differences between them however so far (it may evolve in the 
future, with ongoing work on CommonTag)


- CommonTag provides ways to make the difference between ctag:AuthorTag, 
ctag:ReaderTag and ctag:AutoTag while MOAT just make the difference 
between manual and auto-tag.
- MOAT models the tagging action (i.e. tri/quatri-partite model, based 
on - and extending - the Tag Ontology) and 'global meanings' (that can 
be used if you want to setup a tag server that deliver URIs / meanings 
for each tags, e.g. in a company.)


Hope that helps,



cheers, it does.. but also leaves me thinking I need to be using:
dc:subject
tag:taggedWith
ctag:means
moat:tagMeaning

surely this is an issue if they're all essentially the same?

and leads me to a further question.. is there any way to express that 
[dc:subject tag:taggedWith ctag:means moat:tagMeaning] are all equal?


thanks again,

nathan




Re: Subjects Tagging - Help?

2009-11-03 Thread Alexandre Passant


On 3 Nov 2009, at 18:37, Nathan wrote:


Alexandre Passant wrote:

Hi Nathan,
On 3 Nov 2009, at 18:16, Nathan wrote:

Hi All,

Hoping for a little bit of guidance here on tagging  assigning  
subjects to content etc - I can't quite grasp how to describe what  
an item of content is about; particularly in the context of a  
normal blog post and with relation to tags/subjects/moat/commontag/ 
scot etc.


In short I've build a little mashup of a few services and some  
linked data which extracts terms  subjects from an item of  
content; and now I'm unclear of which ontologies to use.



The info I can extract is tag string and mainly a dbpedia uri  
for the tag (to give it real meaning I guess)


example..
   string:Nuclear program of Iraq
   URI:http://dbpedia.org/resource/Nuclear_program_of_Iran

also bearing in mind that I'll typically have 5-10 of these per  
post.


On the face of it I'd assume I should be using the following for  
each tag and leaving the string literal value out of the triples  
altogether

   http://purl.org/dc/terms/subject
   http://www.holygoat.co.uk/owl/redwood/0.1/tags/taggedWithTag

however, with MOAT/CommonTag/SCOT (and no doubt others) added in  
to the equation I'm totally lost as which is the most fitting and  
widely recognised for tagging content in this manner; is it worth  
adding something to say that it was automatically tagged by a  
machine? or including the string literal value of the tag(s)?
SCOT does not directly address the issue of 'tag meaning' but focus  
on modeling tagclouds and making them interoperable.
MOAT and CommonTag serve the same general purpose (defining what  
tag means, in terms of URIs) so you can use whatever you like -  
however, CommonTag is indexed by SearchMonkey so that is a clearer  
advantage for it and I'd then suggest to use that one if you  
develop an app on the Web.
A few differences between them however so far (it may evolve in the  
future, with ongoing work on CommonTag)
- CommonTag provides ways to make the difference between  
ctag:AuthorTag, ctag:ReaderTag and ctag:AutoTag while MOAT just  
make the difference between manual and auto-tag.
- MOAT models the tagging action (i.e. tri/quatri-partite model,  
based on - and extending - the Tag Ontology) and 'global  
meanings' (that can be used if you want to setup a tag server that  
deliver URIs / meanings for each tags, e.g. in a company.)

Hope that helps,


cheers, it does.. but also leaves me thinking I need to be using:
dc:subject
tag:taggedWith
ctag:means
moat:tagMeaning

surely this is an issue if they're all essentially the same?

and leads me to a further question.. is there any way to express  
that [dc:subject tag:taggedWith ctag:means moat:tagMeaning] are all  
equal?


They are actually not the same.

The relationships ctag:means and moat:tagMeaning are used to define
links between a tag and its meaning, not for linking the tagged
resource to the meaning of the tag.
For that direct relationship, ctag:isAbout is the appropriate
property (I'm just realizing it's not in the doc but in the
ontology only [1]).

There is also moat:taggedWith, which serves a similar purpose.

In addition, tag:taggedWith is there to link a resource to a tag, not  
to the URI that serves as a meaning for this tag.


Finally, regarding dc:subject: a tag is not always used as a subject
(think of a webpage tagged "cool" or "todo" - those are probably not
used as subjects), so the semantics of dc:subject is probably not what
you want here.
However, this property can be enough if you know that the tags used  
are here as subject / topics.
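To put the distinction into triples (namespaces from memory, and the post/tag
URIs invented, so treat this as a sketch):

@prefix tags: <http://www.holygoat.co.uk/owl/redwood/0.1/tags/> .
@prefix ctag: <http://commontag.org/ns#> .

# resource -> tag (says nothing about what the tag means):
<http://example.org/posts/1> tags:taggedWithTag <http://example.org/tags/iran#tag> .

# tag -> meaning (this is where ctag:means / moat:tagMeaning live):
<http://example.org/tags/iran#tag> ctag:means
    <http://dbpedia.org/resource/Nuclear_program_of_Iran> .

# resource -> meaning directly (ctag:isAbout, or moat:taggedWith):
<http://example.org/posts/1> ctag:isAbout
    <http://dbpedia.org/resource/Nuclear_program_of_Iran> .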


Best,

Alex.



thanks again,

nathan



--
Dr. Alexandre Passant
Digital Enterprise Research Institute
National University of Ireland, Galway
:me owl:sameAs http://apassant.net/alex .









Re: Subjects Tagging - Help?

2009-11-03 Thread Kingsley Idehen

Alexandre Passant wrote:

Hi Nathan,

On 3 Nov 2009, at 18:16, Nathan wrote:


Hi All,

Hoping for a little bit of guidance here on tagging  assigning 
subjects to content etc - I can't quite grasp how to describe what an 
item of content is about; particularly in the context of a normal 
blog post and with relation to tags/subjects/moat/commontag/scot etc.


In short I've build a little mashup of a few services and some linked 
data which extracts terms  subjects from an item of content; and now 
I'm unclear of which ontologies to use.



The info I can extract is tag string and mainly a dbpedia uri for 
the tag (to give it real meaning I guess)


example..
string:Nuclear program of Iraq
URI:http://dbpedia.org/resource/Nuclear_program_of_Iran

also bearing in mind that I'll typically have 5-10 of these per post.

On the face of it I'd assume I should be using the following for each 
tag and leaving the string literal value out of the triples altogether

http://purl.org/dc/terms/subject
http://www.holygoat.co.uk/owl/redwood/0.1/tags/taggedWithTag

however, with MOAT/CommonTag/SCOT (and no doubt others) added in to 
the equation I'm totally lost as which is the most fitting and widely 
recognised for tagging content in this manner; is it worth adding 
something to say that it was automatically tagged by a machine? or 
including the string literal value of the tag(s)?


SCOT does not directly address the issue of 'tag meaning' but focus on 
modeling tagclouds and making them interoperable.


MOAT and CommonTag serve the same general purpose (defining what tag 
means, in terms of URIs) so you can use whatever you like - however, 
CommonTag is indexed by SearchMonkey so that is a clearer advantage 
for it and I'd then suggest to use that one if you develop an app on 
the Web.


A few differences between them however so far (it may evolve in the 
future, with ongoing work on CommonTag)


- CommonTag provides ways to make the difference between 
ctag:AuthorTag, ctag:ReaderTag and ctag:AutoTag while MOAT just make 
the difference between manual and auto-tag.
- MOAT models the tagging action (i.e. tri/quatri-partite model, 
based on - and extending - the Tag Ontology) and 'global meanings' 
(that can be used if you want to setup a tag server that deliver URIs 
/ meanings for each tags, e.g. in a company.)


Hope that helps,

Best,

Alex.


Alex,

What's the URI of your MOAT to CommonTags mapping? That should be
something that's easy to find :-)


Nathan: you just need what I refer to above; then you can use terms from
MOAT or CommonTags.


Kingsley


Many thanks in advance,

Nathan



--
Dr. Alexandre Passant
Digital Enterprise Research Institute
National University of Ireland, Galway
:me owl:sameAs http://apassant.net/alex .











--


Regards,

Kingsley Idehen   Weblog: http://www.openlinksw.com/blog/~kidehen
President  CEO 
OpenLink Software Web: http://www.openlinksw.com








Re: Subjects Tagging - Help?

2009-11-03 Thread Bill Roberts


Finally, regarding dc:subject, a tag can be used not as a subject  
(think of a webpage tagged cool or todo, they are probably not  
used as subject) so the semantics of dc:subject is probably not what  
you want here.


I think this comment from Alexandre gets to the core of the matter.   
Tagging has grown up as a kind of metadata-lite, ie a quick way of  
categorising blog posts etc into topics, often fairly broad topics.   
Of course it can be a very useful way of finding related material.   
However if you are able to say something more precise, then I think  
you should - eg by using dc:subject or foaf:primaryTopic, to say that  
a blog post (or other document) is about a particular thing that you  
have a URI for.


You might also want to have tags of course, in their role of more
general categorisation, and the Common Tag ontology looks a nice way to
handle that, especially with its capabilities for saying which user
added a tag - so letting you handle an application like delicious.com.


Cheers

Bill



Re: [HELP] Can you please update information about your dataset?

2009-08-14 Thread Bernard Vatant

Richard, all

I've done my homework and added a voiD description of lingvoj.org 
dataset at http://www.lingvoj.org/void

It's still minimal, but at least got stats. Links stuff to be added ASAP.
For those who might care, note that it links to a new FOAF profile at 
http://www.lingvoj.org/foaf.rdf
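
For readers who have not written one yet, a minimal voiD description 
along these lines might look like the following Turtle sketch (the URIs 
and the triple count are invented; they are not taken from lingvoj.org):

@prefix void: <http://rdfs.org/ns/void#> .
@prefix dct:  <http://purl.org/dc/terms/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<http://example.org/void#mydataset> a void:Dataset ;
    dct:title "My Dataset" ;
    foaf:homepage <http://example.org/> ;
    void:sparqlEndpoint <http://example.org/sparql> ;
    void:dataDump <http://example.org/dump.nt> ;
    void:exampleResource <http://example.org/resource/thing1> ;
    void:triples 250000 .    # basic statistics (property from the current voiD namespace)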


Bernard

Richard Cyganiak wrote:
The problem at hand is: How to get reasonably accurate and up-to-date 
statistics about the LOD cloud?


I see three workable methods for this.

1. Compile the statistics from voiD descriptions published by 
individual dataset maintainers. This is what Hugh proposes below. 
Enabling this is one of the main reasons why we created voiD. There have 
to be better tools for creating voiD before this happens. The tools 
could be, for example, manual entry forms that spit out voiD 
(voiD-o-matic?), or analyzers that read a dump and spit out a skeleton 
voiD file.


2. Hand-compile the statistics by watching public-lod, trawling 
project home pages, emailing dataset maintainers, and fixing things 
when dataset maintainers complain. This is how I created the original 
LOD cloud diagram in Berlin, and after I left Berlin, Anja has done a 
great job keeping it up to date despite its massive growth. We will 
continue to update it on a best-effort basis for the foreseeable 
future. A voiD version of the information underlying the diagram is in 
the pipeline. Others can do as we did.


3. Anyone who has a copy of a big part of the cloud (e.g. OpenLink and 
we at Sindice) can potentially calculate the statistics. This is 
non-trivial because we just have triples, and we need to 
reverse-engineer datasets and linksets from them, it involves 
computation over quite serious amounts of data, and in the end you 
still won't have good labels or homepages for the datasets. While this 
approach is possible, it seems to me that there are better uses of 
engineering and research resources.


There is a fourth process that, IMO, does NOT work:

4. Send an email to public-lod asking "Everyone please enter your 
dataset in this wikipage/GoogleSpreadsheet/fancyAppOfTheWeek".


Best,
Richard




--

Bernard Vatant
Senior Consultant
Vocabulary & Data Engineering
Tel:  +33 (0) 971 488 459
Mail: bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:  http://www.mondeca.com
Blog: Leçons de Choses - http://mondeca.wordpress.com/




Re: [HELP] Can you please update information about your dataset?

2009-08-14 Thread Michael Hausenblas

Bernard,

 Richard, all
 
 I've done my homework and added a voiD description of lingvoj.org
 dataset at http://www.lingvoj.org/void

A+ ;)

And as a tiny kudo: added this fact to the voiD site [1] ...

Cheers,
  Michael

[1] http://semanticweb.org/wiki/VoiD#Examples_in_the_Wild

-- 
Dr. Michael Hausenblas
LiDRC - Linked Data Research Centre
DERI - Digital Enterprise Research Institute
NUIG - National University of Ireland, Galway
Ireland, Europe
Tel. +353 91 495730
http://linkeddata.deri.ie/
http://sw-app.org/about.html







Re: [HELP] Can you please update information about your dataset?

2009-08-12 Thread Hugh Glaser


On 12/08/2009 08:45, Jun Zhao jun.z...@zoo.ox.ac.uk wrote:

 Hi Hugh,
 
 Thanks for championing voiD here:)
 
 I would love to avoid the manual editing too. As Leigh said, editing esw
 wiki is not the best experience you could have.
 
 Do you have a voiD description creator/editor you can share with the
 community?
I am really sorry to report that we don't.
We have php code that generates it from a CRS (Co-ref Service) (written by
Richard Cyganiak and Ian Millard), but it wouldn't just move somewhere else
as it stands.
I know Ian is working on improving it to deal with internal sameAs better.
Of course anyone is welcome to have that if it helps.

Aldo, could you generate voiD from an appropriate Google spreadsheet?

Cheers
Hugh
 
 Hugh Glaser wrote:
 Please no! Not another manual entry system.
 I had already decided I just haven't got the time to manually maintain this
 constantly changing set of numbers, so would not be responding to the request
 to update.
 (In fact, the number of different places that a good LD citizen has to put
 their data into the esw wiki is really rather high.)
 Last time Anja was kind enough to put a lot of effort into processing the
 graphviz for us to generate the numbers, but this is not the way to do it.
 In our case, we have 39 different stores, with linkages between them and to
 others outside.
 There are therefore 504 numbers to represent the linkage, although they don't
 all meet a threshold.
 For details of the linkage in rkbexplorer see pictures at
 http://www.rkbexplorer.com/linkage/ or query http://void.rkbexplorer.com/ .
 And these figures are constantly changing, as the system identifies more -
 there can be more than 1000 a day.
 
 If any more work is to be put into generating this picture, it really should
 be from voiD descriptions, which we already make available for all our
 datasets.
 And for those who want to do it by hand, a simple system to allow them to
 specify the linkage using voiD would get the entry into a format for the voiD
 processor to use (I'm happy to host the data if need be).
 Or Aldo's system could generate its RDF using the voiD ontology, thus
 providing the manual entry system?
 
 I know we have been here before, and almost got to the voiD processor thing:-
 please can we try again?
 
 Sure, this will be an interesting experiment.
 
 Regards,
 
 Jun
 




Re: [HELP] Can you please update information about your dataset?

2009-08-12 Thread Kingsley Idehen

Hugh Glaser wrote:

On 12/08/2009 08:45, Jun Zhao jun.z...@zoo.ox.ac.uk wrote:

  

Hi Hugh,

Thanks for championing voiD here:)

I would love to avoid the manual editing too. As Leigh said, editing esw
wiki is not the best experience you could have.

Do you have a voiD description creator/editor you can share with the
community?


I am really sorry to report that we don't.
We have php code that generates it from a CRS (Co-ref Service) (written by
Richard Cyganiak and Ian Millard), but it wouldn't just move somewhere else
as it stands.
I know Ian is working on improving it to deal with internal sameAs better.
Of course anyone is welcome to have that if it helps.

Aldo, could you generate voiD from an appropriate Google spreadsheet?

  

Hugh,

If the RDFizer takes the form of a Virtuoso Sponger Cartridge, you get 
VoiD gratis :-)


Kingsley

Cheers
Hugh
  

Hugh Glaser wrote:


Please no! Not another manual entry system.
I had already decided I just haven't got the time to manually maintain this
constantly changing set of numbers, so would not be responding to the request
to update.
(In fact, the number of different places that a good LD citizen has to put
their data into the esw wiki is really rather high.)
Last time Anja was kind enough to put a lot of effort into processing the
graphviz for us to generate the numbers, but this is not the way to do it.
In our case, we have 39 different stores, with linkages between them and to
others outside.
There are therefore 504 numbers to represent the linkage, although they don't
all meet a threshold.
For details of the linkage in rkbexplorer see pictures at
http://www.rkbexplorer.com/linkage/ or query http://void.rkbexplorer.com/ .
And these figures are constantly changing, as the system identifies more -
there can be more than 1000 a day.

If any more work is to be put into generating this picture, it really should
be from voiD descriptions, which we already make available for all our
datasets.
And for those who want to do it by hand, a simple system to allow them to
specify the linkage using voiD would get the entry into a format for the voiD
processor to use (I'm happy to host the data if need be).
Or Aldo's system could generate its RDF using the voiD ontology, thus
providing the manual entry system?

I know we have been here before, and almost got to the voiD processor thing:-
please can we try again?
  

Sure, this will be an interesting experiment.

Regards,

Jun





  



--


Regards,

Kingsley Idehen   Weblog: http://www.openlinksw.com/blog/~kidehen
President & CEO 
OpenLink Software Web: http://www.openlinksw.com








Re: [HELP] Can you please update information about your dataset?

2009-08-12 Thread Richard Cyganiak
The problem at hand is: How to get reasonably accurate and up-to-date  
statistics about the LOD cloud?


I see three workable methods for this.

1. Compile the statistics from voiD descriptions published by  
individual dataset maintainers. This is what Hugh proposes below.  
Enabling this is one of the main reasons why we created voiD. There have 
to be better tools for creating voiD before this happens. The tools 
could be, for example, manual entry forms that spit out voiD (voiD-o- 
matic?), or analyzers that read a dump and spit out a skeleton voiD  
file.


2. Hand-compile the statistics by watching public-lod, trawling  
project home pages, emailing dataset maintainers, and fixing things  
when dataset maintainers complain. This is how I created the original  
LOD cloud diagram in Berlin, and after I left Berlin, Anja has done a  
great job keeping it up to date despite its massive growth. We will  
continue to update it on a best-effort basis for the foreseeable  
future. A voiD version of the information underlying the diagram is in  
the pipeline. Others can do as we did.


3. Anyone who has a copy of a big part of the cloud (e.g. OpenLink and  
we at Sindice) can potentially calculate the statistics. This is non- 
trivial because we just have triples, and we need to reverse-engineer  
datasets and linksets from them, it involves computation over quite  
serious amounts of data, and in the end you still won't have good  
labels or homepages for the datasets. While this approach is possible,  
it seems to me that there are better uses of engineering and research  
resources.


There is a fourth process that, IMO, does NOT work:

4. Send an email to public-lod asking "Everyone please enter your 
dataset in this wikipage/GoogleSpreadsheet/fancyAppOfTheWeek".


Best,
Richard


On 11 Aug 2009, at 22:07, Hugh Glaser wrote:
If any more work is to be put into generating this picture, it  
really should be from voiD descriptions, which we already make  
available for all our datasets.
And for those who want to do it by hand, a simple system to allow  
them to specify the linkage using voiD would get the entry into a  
format for the voiD processor to use (I'm happy to host the data if  
need be).


Or Aldo's system could generate its RDF using the voiD ontology,  
thus providing the manual entry system?


I know we have been here before, and almost got to the voiD  
processor thing:- please can we try again?


Best
Hugh

On 11/08/2009 19:00, Aldo Bucchi aldo.buc...@gmail.com wrote:

Hi,

On Aug 11, 2009, at 13:46, Kingsley Idehen kide...@openlinksw.com
wrote:


Leigh Dodds wrote:

Hi,

I've just added several new datasets to the Statistics page that
weren't previously listed. Its not really a great user experience
editing the wiki markup and manually adding up the figures.

So, thinking out loud, I'm wondering whether it might be more
appropriate to use a Google spreadsheet and one of their submission
forms for the purposes of collectively the data. A little manual
editing to remove duplicates might make managing this data a little
more easier. Especially as there are also pages that separately list
the available SPARQL endpoints and RDF dumps.

I'm sure we could create something much better using Void, etc but
for
now, maybe using a slightly better tool would give us a little more
progress? It'd be a snip to dump out the Google Spreadsheet data
programmatically too, which'd be another improvement on the current
situation.

What does everyone else think?


Nice Idea! Especially as Google Spreadsheet to RDF is just about
RDFizers for the Google Spreadsheet API :-)


Hehe. I have this in my todo (literally): a website that exposes a
Google spreadsheet as a SPARQL endpoint. Internally we use it as a UI to
quickly create config files et al.
But it will remain in my todo forever... ;)

Kingsley, this could be sponged. The trick is that the spreadsheet
must have an accompanying page/sheet/book with metadata (the NS or
explicit URIs for cols).



Kingsley

Cheers,

L.

2009/8/7 Jun Zhao jun.z...@zoo.ox.ac.uk:


Dear all,

We are planning to produce an updated data cloud diagram based on
the
dataset information on the esw wiki page:
http://esw.w3.org/topic/TaskForces/CommunityProjects/LinkingOpenData/DataSets/Statistics

If you have not published your dataset there yet and you would
like your
dataset to be included, can you please add your dataset there?

If you have an entry there for your dataset already, can you
please update
information about your dataset on the wiki?

If you cannot edit the wiki page any more because the recent
update of esw
wiki editing policy, you can send the information to me or Anja,
who is
cc'ed. We can update it for you.

If you know your friends have dataset on the wiki, but are not on
the
mailing list, can you please kindly forward this email to them? We
would
like to get the data cloud as up-to-date as possible.

For this release, we will use the above wiki page as the  
information


Re: [HELP] Can you please update information about your dataset?

2009-08-11 Thread Leigh Dodds
Hi,

I've just added several new datasets to the Statistics page that
weren't previously listed. Its not really a great user experience
editing the wiki markup and manually adding up the figures.

So, thinking out loud, I'm wondering whether it might be more
appropriate to use a Google spreadsheet and one of their submission
forms for the purposes of collecting the data. A little manual
editing to remove duplicates might make managing this data a little
easier. Especially as there are also pages that separately list
the available SPARQL endpoints and RDF dumps.

I'm sure we could create something much better using Void, etc but for
now, maybe using a slightly better tool would give us a little more
progress? It'd be a snip to dump out the Google Spreadsheet data
programmatically too, which'd be another improvement on the current
situation.

What does everyone else think?

Cheers,

L.

2009/8/7 Jun Zhao jun.z...@zoo.ox.ac.uk:
 Dear all,

 We are planning to produce an updated data cloud diagram based on the
 dataset information on the esw wiki page:
 http://esw.w3.org/topic/TaskForces/CommunityProjects/LinkingOpenData/DataSets/Statistics

 If you have not published your dataset there yet and you would like your
 dataset to be included, can you please add your dataset there?

 If you have an entry there for your dataset already, can you please update
 information about your dataset on the wiki?

 If you cannot edit the wiki page any more because the recent update of esw
 wiki editing policy, you can send the information to me or Anja, who is
 cc'ed. We can update it for you.

 If you know your friends have dataset on the wiki, but are not on the
 mailing list, can you please kindly forward this email to them? We would
 like to get the data cloud as up-to-date as possible.

 For this release, we will use the above wiki page as the information
 gathering point. We do apologize if you have published information about
 your dataset on other web pages and this request would mean extra work for
 you.

 Many thanks for your contributions!

 Kindest regards,

 Jun


 __
 This email has been scanned by the MessageLabs Email Security System.
 For more information please visit http://www.messagelabs.com/email
 __




-- 
Leigh Dodds
Programme Manager, Talis Platform
Talis
leigh.do...@talis.com
http://www.talis.com



Re: [HELP] Can you please update information about your dataset?

2009-08-11 Thread Aldo Bucchi

Hi,

On Aug 11, 2009, at 13:46, Kingsley Idehen kide...@openlinksw.com  
wrote:



Leigh Dodds wrote:

Hi,

I've just added several new datasets to the Statistics page that
weren't previously listed. Its not really a great user experience
editing the wiki markup and manually adding up the figures.

So, thinking out loud, I'm wondering whether it might be more
appropriate to use a Google spreadsheet and one of their submission
forms for the purposes of collecting the data. A little manual
editing to remove duplicates might make managing this data a little
easier. Especially as there are also pages that separately list
the available SPARQL endpoints and RDF dumps.

I'm sure we could create something much better using Void, etc but  
for

now, maybe using a slightly better tool would give us a little more
progress? It'd be a snip to dump out the Google Spreadsheet data
programmatically too, which'd be another improvement on the current
situation.

What does everyone else think?

Nice Idea! Especially as Google Spreadsheet to RDF is just about  
RDFizers for the Google Spreadsheet API :-)


Hehe. I have this in my todo (literally): a website that exposes a 
Google spreadsheet as a SPARQL endpoint. Internally we use it as a UI to 
quickly create config files et al.

But it will remain in my todo forever... ;)

Kingsley, this could be sponged. The trick is that the spreadsheet  
must have an accompanying page/sheet/book with metadata (the NS or  
explicit URIs for cols).
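
Purely to illustrate the idea - the mapping namespace below is invented 
for this sketch and is not an existing tool or vocabulary - such a 
metadata sheet could boil down to Turtle along these lines:

@prefix dct:  <http://purl.org/dc/terms/> .
@prefix void: <http://rdfs.org/ns/void#> .
@prefix ex:   <http://example.org/mapping#> .    # hypothetical mapping namespace

# the metadata sheet: one statement per spreadsheet column,
# declaring which RDF property that column maps to
ex:columnB ex:property dct:title .
ex:columnC ex:property void:sparqlEndpoint .
ex:columnD ex:property void:triples .

# a data row (keyed on column A) would then come out roughly as
<http://example.org/dataset/row17>
    dct:title "Some Dataset" ;
    void:sparqlEndpoint <http://example.org/sparql> ;
    void:triples 120000 .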




Kingsley

Cheers,

L.

2009/8/7 Jun Zhao jun.z...@zoo.ox.ac.uk:


Dear all,

We are planning to produce an updated data cloud diagram based on  
the

dataset information on the esw wiki page:
http://esw.w3.org/topic/TaskForces/CommunityProjects/LinkingOpenData/DataSets/Statistics

If you have not published your dataset there yet and you would  
like your

dataset to be included, can you please add your dataset there?

If you have an entry there for your dataset already, can you  
please update

information about your dataset on the wiki?

If you cannot edit the wiki page any more because the recent  
update of esw
wiki editing policy, you can send the information to me or Anja,  
who is

cc'ed. We can update it for you.

If you know your friends have dataset on the wiki, but are not on  
the
mailing list, can you please kindly forward this email to them? We  
would

like to get the data cloud as up-to-date as possible.

For this release, we will use the above wiki page as the information
gathering point. We do apologize if you have published information  
about
your dataset on other web pages and this request would mean extra  
work for

you.

Many thanks for your contributions!

Kindest regards,

Jun
















--


Regards,

Kingsley Idehen  Weblog: http://www.openlinksw.com/blog/~kidehen
President & CEO OpenLink Software Web: http://www.openlinksw.com









Re: [HELP] Can you please update information about your dataset?

2009-08-11 Thread Jun Zhao

Hi Michael,

I have taken this dataset off the list.

I also believe that Yves has managed to update the record about BBC 
music thanks to Michael H.:)


cheers,

Jun

Michael Smethurst wrote:

Hi Jun/all

Just noticed the line on:

http://esw.w3.org/topic/TaskForces/CommunityProjects/LinkingOpenData/DataSets/Statistics

saying:

BBC Later + TOTP (link not responding - 2009-04-01)

That's my bad. The site's been down since we forgot to pay our ec2 bills :-/

Having said that the data has either moved or is in the process of 
moving to BBC programmes and BBC music so TOTP/Later should probably 
come off the cloud piccie and off the list and BBC Music should be added 
in linking to musicbrainz and bbc programmes. The TOTP/Later site was 
only ever intended as a try out and a demo to management types at the 
BBC. Strangely it seems to have worked... :-)


Also to note that BBC programmes and music do now have a joint sparql 
endpoint - well 2 in fact:

http://api.talis.com/stores/bbc-backstage
http://bbc.openlinksw.com/sparql

Sorry for not notifying earlier

Michael


-Original Message-
From: public-lod-requ...@w3.org on behalf of Jun Zhao
Sent: Fri 8/7/2009 7:05 PM
To: public-lod@w3.org
Cc: Anja Jentzsch
Subject: [HELP] Can you please update information about your dataset?

Dear all,

We are planning to produce an updated data cloud diagram based on the
dataset information on the esw wiki page:
http://esw.w3.org/topic/TaskForces/CommunityProjects/LinkingOpenData/DataSets/Statistics

If you have not published your dataset there yet and you would like your
dataset to be included, can you please add your dataset there?

If you have an entry there for your dataset already, can you please
update information about your dataset on the wiki?

If you cannot edit the wiki page any more because of the recent update of 
the esw wiki editing policy, you can send the information to me or Anja, who 
is cc'ed. We can update it for you.

If you know your friends have dataset on the wiki, but are not on the
mailing list, can you please kindly forward this email to them? We would
like to get the data cloud as up-to-date as possible.

For this release, we will use the above wiki page as the information
gathering point. We do apologize if you have published information about
your dataset on other web pages and this request would mean extra work
for you.

Many thanks for your contributions!

Kindest regards,

Jun








Re: [HELP] Can you please update information about your dataset?

2009-08-11 Thread Hugh Glaser
Please no! Not another manual entry system.
I had already decided I just haven't got the time to manually maintain this 
constantly changing set of numbers, so would not be responding to the request 
to update.
(In fact, the number of different places that a good LD citizen has to put 
their data into the esw wiki is really rather high.)
Last time Anja was kind enough to put a lot of effort into processing the 
graphviz for us to generate the numbers, but this is not the way to do it.
In our case, we have 39 different stores, with linkages between them and to 
others outside.
There are therefore 504 numbers to represent the linkage, although they don't 
all meet a threshold.
For details of the linkage in rkbexplorer see pictures at 
http://www.rkbexplorer.com/linkage/ or query http://void.rkbexplorer.com/ .
And these figures are constantly changing, as the system identifies more - 
there can be more than 1000 a day.

If any more work is to be put into generating this picture, it really should be 
from voiD descriptions, which we already make available for all our datasets.
And for those who want to do it by hand, a simple system to allow them to 
specify the linkage using voiD would get the entry into a format for the voiD 
processor to use (I'm happy to host the data if need be).
Or Aldo's system could generate its RDF using the voiD ontology, thus providing 
the manual entry system?
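
As an aside for readers unfamiliar with voiD linksets, the linkage 
between one pair of datasets can be described with a handful of triples 
along these lines (the URIs and the count are illustrative only):

@prefix void: <http://rdfs.org/ns/void#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .

# one linkset per pair of datasets; void:triples gives the number of links
<http://example.org/void#links-A-to-B> a void:Linkset ;
    void:subjectsTarget <http://example.org/void#datasetA> ;
    void:objectsTarget  <http://example.org/void#datasetB> ;
    void:linkPredicate  owl:sameAs ;
    void:triples 1500 .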

I know we have been here before, and almost got to the voiD processor thing:- 
please can we try again?

Best
Hugh

On 11/08/2009 19:00, Aldo Bucchi aldo.buc...@gmail.com wrote:

Hi,

On Aug 11, 2009, at 13:46, Kingsley Idehen kide...@openlinksw.com
wrote:

 Leigh Dodds wrote:
 Hi,

 I've just added several new datasets to the Statistics page that
 weren't previously listed. Its not really a great user experience
 editing the wiki markup and manually adding up the figures.

 So, thinking out loud, I'm wondering whether it might be more
 appropriate to use a Google spreadsheet and one of their submission
 forms for the purposes of collecting the data. A little manual
 editing to remove duplicates might make managing this data a little
 easier. Especially as there are also pages that separately list
 the available SPARQL endpoints and RDF dumps.

 I'm sure we could create something much better using Void, etc but
 for
 now, maybe using a slightly better tool would give us a little more
 progress? It'd be a snip to dump out the Google Spreadsheet data
 programmatically too, which'd be another improvement on the current
 situation.

 What does everyone else think?

 Nice Idea! Especially as Google Spreadsheet to RDF is just about
 RDFizers for the Google Spreadsheet API :-)

Hehe. I have this in my todo (literally): a website that exposes a
Google spreadsheet as a SPARQL endpoint. Internally we use it as a UI to
quickly create config files et al.
But it will remain in my todo forever... ;)

Kingsley, this could be sponged. The trick is that the spreadsheet
must have an accompanying page/sheet/book with metadata (the NS or
explicit URIs for cols).


 Kingsley
 Cheers,

 L.

 2009/8/7 Jun Zhao jun.z...@zoo.ox.ac.uk:

 Dear all,

 We are planning to produce an updated data cloud diagram based on
 the
 dataset information on the esw wiki page:
 http://esw.w3.org/topic/TaskForces/CommunityProjects/LinkingOpenData/DataSets/Statistics

 If you have not published your dataset there yet and you would
 like your
 dataset to be included, can you please add your dataset there?

 If you have an entry there for your dataset already, can you
 please update
 information about your dataset on the wiki?

 If you cannot edit the wiki page any more because the recent
 update of esw
 wiki editing policy, you can send the information to me or Anja,
 who is
 cc'ed. We can update it for you.

 If you know your friends have dataset on the wiki, but are not on
 the
 mailing list, can you please kindly forward this email to them? We
 would
 like to get the data cloud as up-to-date as possible.

 For this release, we will use the above wiki page as the information
 gathering point. We do apologize if you have published information
 about
 your dataset on other web pages and this request would mean extra
 work for
 you.

 Many thanks for your contributions!

 Kindest regards,

 Jun












 --


 Regards,

 Kingsley Idehen  Weblog: http://www.openlinksw.com/blog/~kidehen
 President & CEO OpenLink Software Web: http://www.openlinksw.com










Re: [HELP] Can you please update information about your dataset?

2009-08-10 Thread Kurt J
 I would like to edit the statistics page, but for some reason the page
 is marked as an immutable page?

Similarly, I would like to note that the MySpace wrapper now has an
endpoint and has outgoing links to MusicBrainz, but I cannot edit the page.

-kurtjx



Re: [HELP] Can you please update information about your dataset?

2009-08-08 Thread Yves Raimond

 BBC Later + TOTP (link not responding - 2009-04-01)

 That's my bad. The site's been down since we forgot to pay our ec2 bills :-/

 Having said that the data has either moved or is in the process of moving to
 BBC programmes and BBC music so TOTP/Later should probably come off the
 cloud piccie and off the list and BBC Music should be added in linking to
 musicbrainz and bbc programmes.

When we released the bbc programmes/music links I added an entry to
http://esw.w3.org/topic/TaskForces/CommunityProjects/LinkingOpenData/DataSets/LinkStatistics.
After quickly checking what we have on live right now, the numbers are
still roughly the same. It's sad for TOTP/Later; I hope we can get
them into BBC Music and Programmes soon!

I would like to edit the statistics page, but for some reason the page
is marked as an immutable page?

Cheers,
y



Re: [HELP] Can you please update information about your dataset?

2009-08-08 Thread Michael Hausenblas

Yves,

 I would like to edit the statistics page, but for some reason the page
 is marked as an immutable page?

You were not listed under [1], which is necessary to edit pages. I've added
you now.

Cheers,
  Michael

[1] http://esw.w3.org/topic/ESWEditorsGroup

-- 
Dr. Michael Hausenblas
LiDRC - Linked Data Research Centre
DERI - Digital Enterprise Research Institute
NUIG - National University of Ireland, Galway
Ireland, Europe
Tel. +353 91 495730
http://linkeddata.deri.ie/
http://sw-app.org/about.html







[HELP] Can you please update information about your dataset?

2009-08-07 Thread Jun Zhao

Dear all,

We are planning to produce an updated data cloud diagram based on the 
dataset information on the esw wiki page: 
http://esw.w3.org/topic/TaskForces/CommunityProjects/LinkingOpenData/DataSets/Statistics


If you have not published your dataset there yet and you would like your 
dataset to be included, can you please add your dataset there?


If you have an entry there for your dataset already, can you please 
update information about your dataset on the wiki?


If you cannot edit the wiki page any more because of the recent update of 
the esw wiki editing policy, you can send the information to me or Anja, who 
is cc'ed. We can update it for you.


If you know friends who have a dataset on the wiki but are not on the 
mailing list, can you please kindly forward this email to them? We would 
like to get the data cloud as up-to-date as possible.


For this release, we will use the above wiki page as the information 
gathering point. We do apologize if you have published information about 
your dataset on other web pages and this request would mean extra work 
for you.


Many thanks for your contributions!

Kindest regards,

Jun



RE: [HELP] Can you please update information about your dataset?

2009-08-07 Thread Michael Smethurst
Hi Jun/all

Just noticed the line on:

http://esw.w3.org/topic/TaskForces/CommunityProjects/LinkingOpenData/DataSets/Statistics

saying:

BBC Later + TOTP (link not responding - 2009-04-01)

That's my bad. The site's been down since we forgot to pay our ec2 bills :-/

Having said that, the data has either moved or is in the process of moving to 
BBC Programmes and BBC Music, so TOTP/Later should probably come off the cloud 
piccie and off the list, and BBC Music should be added in, linking to MusicBrainz 
and BBC Programmes. The TOTP/Later site was only ever intended as a try-out and 
a demo to management types at the BBC. Strangely it seems to have worked... :-)

Also note that BBC Programmes and Music do now have a joint SPARQL endpoint 
- well, 2 in fact:
http://api.talis.com/stores/bbc-backstage
http://bbc.openlinksw.com/sparql

Sorry for not notifying earlier

Michael


-Original Message-
From: public-lod-requ...@w3.org on behalf of Jun Zhao
Sent: Fri 8/7/2009 7:05 PM
To: public-lod@w3.org
Cc: Anja Jentzsch
Subject: [HELP] Can you please update information about your dataset?
 
Dear all,

We are planning to produce an updated data cloud diagram based on the 
dataset information on the esw wiki page: 
http://esw.w3.org/topic/TaskForces/CommunityProjects/LinkingOpenData/DataSets/Statistics

If you have not published your dataset there yet and you would like your 
dataset to be included, can you please add your dataset there?

If you have an entry there for your dataset already, can you please 
update information about your dataset on the wiki?

If you cannot edit the wiki page any more because of the recent update of 
the esw wiki editing policy, you can send the information to me or Anja, who 
is cc'ed. We can update it for you.

If you know your friends have dataset on the wiki, but are not on the 
mailing list, can you please kindly forward this email to them? We would 
like to get the data cloud as up-to-date as possible.

For this release, we will use the above wiki page as the information 
gathering point. We do apologize if you have published information about 
your dataset on other web pages and this request would mean extra work 
for you.

Many thanks for your contributions!

Kindest regards,

Jun






Re: Help me, I need to get digest version. How?

2008-12-08 Thread Libby Miller



On 8 Dec 2008, at 01:50, Dwight Hines wrote:



Help me, I need to get digest version.  How?



It doesn't appear that this is possible on these lists:

http://www.w3.org/Mail/Request

For everyone else - can I suggest posting on just one W3C list at a  
time as per the guidelines?


http://www.w3.org/Mail/

It won't help Dwight but it'd help me :-)

cheers

Libby


Dwight
Florida






Need some freebase parallax or tabulator help here, save lives doing so Re: freebase parallax: user interface for browsing graphs of data

2008-08-20 Thread Dwight Hines


I've been following the posts a little, while downloading Excel-format 
data on emergency medical services across the US. We need to have these 
data examined in forty-eleven different ways, and we need to know what 
peculiar combinations exist for different locations to keep people alive.


It seems to me that these data sets are available for downloading - as 
long as you agree to a few restrictions to protect privacy and understand 
these data are not perfect yet - for analyses of your choice.


Look, I'm including below the notice sent out today to the NEMSIS 
list, and I would like to know, if you decide to tackle the data with 
Freebase, how you are doing it and where you will post your results. 
The reality now is that we need what the members of this list are 
working on applied to these data sets. The data sets, and their natural 
relationship data sets, are going to continue to grow.

Dwight Hines
St. Augustine, Florida
=
Greetings from the NEMSIS Technical Assistance Center:
We are pleased to announce that the NEMSIS National Reporting System  
based upon the National EMS Data Base is available!  The web-based  
system can be found on the NEMSIS web site (www.nemsis.org) under the  
NEMSIS Reporting tab (click on “National Reports”).
We invite you to visit and use the reports and, most importantly,  
provide us feedback regarding their usefulness.  We are striving to  
provide you the best possible product and need to hear from you to  
enhance the impact of the national reports on the EMS system as a whole.
It should be noted that there are limitations to the data that are  
available.  First, these data are not “population-based” and do not  
represent conditions of the nation or any individual state submitting  
data to NEMSIS (i.e., most states currently submit a portion of all  
state EMS runs).  Second, the data are not formally “cleaned” and,  
therefore, represent the information as submitted by each state.   
Although basic cleaning is completed by the NEMSIS TAC, some data  
inconsistencies are retained to aid states with quality assessments.
We welcome you to use and become familiar with the national reports.   
As the data base grows, so will the usefulness and validity of the  
National EMS Data Base.



David J. Owens
Program Director
National EMS Information System
University of Utah - Department of Pediatrics
Intermountain Injury Control Research Center
295 Chipeta Way
PO Box 581289
Salt Lake City, Utah 84158-1289
Phone: (801) 585 1631
Fax: (801) 581-8686
[EMAIL PROTECTED]
The NEMSIS Technical Assistance Center exists to standardize data  
collection and develop a national registry to facilitate evaluation  
of emergency medical services.

On Aug 20, 2008, at 1:32 PM, M.Daquin wrote:



The trouble is of course when the whole web is the database, it's  
hard to
suggest those relationships (connections) for a set of entities.  
How might
one solve that problem? I suppose something like Swoogle can  
help. Is that

what Tabulator uses to know what data is on the SW?

David








Re: help

2008-06-26 Thread Aldo Bucchi

Call 911.
We haven't developed super-powers yet.


On Thu, Jun 26, 2008 at 6:14 PM, Bob Wyman [EMAIL PROTECTED] wrote:
 help




-- 
 Aldo Bucchi 
+56 9 7623 8653
skype:aldo.bucchi
http://aldobucchi.com/