Quite An Impressive Use of Linked Data

2009-03-20 Thread Melvin Carvalho
http://www.gisuser.com/content/view/17142/2/

(Requires Java)

Example screenshot:
http://www.geographie.uni-bonn.de/karto/osm-3d/Screenshots/Frankfurt/Frankfurt4.jpg



Re: Update: LOD Cloud Hosting & Other Matters

2009-03-29 Thread Melvin Carvalho
Is this an ad?  http://sw.deri.org/2009/01/visinav/faq.html#3

On Sat, Mar 28, 2009 at 8:45 PM, Kingsley Idehen kide...@openlinksw.com wrote:
 All,

 We are now nearing complete stability re uploads, deletes, and data
 cleansing activity re. the Virtuoso instance hosting the LOD Cloud [1].

 We are still awaiting fresh data sets from Freebase and Bio2RDF (both
 communities are prepping new RDF data sets). Once received, we will replace
 the current datasets accordingly.

 At the current time we have loaded 100% of all the very large data sets from
 the LOD Cloud [2]. Thus, I would really like owners of RDF data sets
 depicted in the clouds who cannot locate their data to notify me (via this
 mailing list) ASAP. You can use the LOD instance Search & Find or
 URI Lookup or SPARQL endpoint [3] to verify the existence of your data (note:
 we are preserving original data provider URIs).

 Off the top of my head, here are the data sets added since my last update
 notice:

 1. U.S. Census
 2. DBP RKB Explorer* and related datasets from Hugh Glaser
 3. Gov-Track
 4. BBC Programmes, DBtune
 5. SemanticBible (this is a small dataset, not in the LOD cloud, but added
 since linkage will be easy to generate)
 6. PingTheSemanticWeb (FOAF Cloud and others)
 7. All the Linking Open Drug Data from the LODD project.

 One more time, if you have a new RDF based Linked Data archive, or an
 updated dataset, please add pertinent information to the Linked Open Data
 Sets page [4].

 Additional developments re. Amazon Hosting:

 Amazon have agreed to add all the Linked Open Data Sets to their public data
 sets collective. Thus, the data sets we are loading will be available as
 raw data on the public data sets page [5] in Elastic Block Storage (EBS)
 form; meaning, you can make an EC2 AMI (e.g. Linux, Windows, or Solaris) and
 install an RDF quad or triple store of choice, then load the data. Of
 course, we are also going to offer a Virtuoso 6.0 Cluster Edition AMI that
 will enable you to simply instantiate a personal and service-specific
 edition of Virtuoso with all the LOD data in place, so that you can press
 go and have the LOD space in true Linked Data form at your disposal in
 minutes (i.e. the time it takes the DB to start).

 Work on the migration of the LOD data to EC2 starts next week, so please get
 your data sets in place if you want to take advantage of this most generous
 offering from Amazon.

 We are also going to make a few USB devices with chunks of LOD data sets as
 another distribution mechanism.


 Links:

 1. http://lod.openlinksw.com
 2. http://www4.wiwiss.fu-berlin.de/bizer/pub/lod-datasets_2009-03-05.html
 3. http://lod.openlinksw.com/sparql
 4. http://esw.w3.org/topic/DataSetRDFDumps
 5. http://aws.amazon.com/publicdatasets

 --


 Regards,

 Kingsley Idehen       Weblog: http://www.openlinksw.com/blog/~kidehen
 President & CEO, OpenLink Software     Web: http://www.openlinksw.com









Re: Keeping crawlers up-to-date

2009-04-28 Thread Melvin Carvalho
On Tue, Apr 28, 2009 at 3:39 PM, Yves Raimond yves.raim...@gmail.com wrote:
 Hello!

 I know this issue has been raised during the LOD BOF at WWW 2009, but
 I don't know if any possible solutions emerged from there.

 The problem we are facing is that data on BBC Programmes changes
 approximately 50 000 times a day (new/updated
 broadcasts/versions/programmes/segments etc.). As we'd like to keep a
 set of RDF crawlers up-to-date with our information we were wondering
 how best to ping these. pingthesemanticweb seems like a nice option,
 but it needs the crawlers to ping it often enough to make sure they
 didn't miss a change. Another solution we were thinking of would be to
 stick either Talis changesets [1] or SPARQL/Update statements in a
 message queue, which would then be consumed by the crawlers.

That's a lot of data; I wonder if there is a smart way of filtering it down.

Perhaps an RDF version of Twitter would be interesting, where you
follow changes that you're interested in?  You could even follow by
user, or by SPARQL query, and maybe across multiple domains.
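
The "follow changes you're interested in" idea can be sketched as a change
feed with per-subscription predicate filters. This is only a toy
illustration (the class, the prefixes, and the predicate names are invented
here, not an existing BBC or Talis API):

```python
# Toy change feed: each change is a (subject, predicate, object) triple;
# a subscription is a predicate filter plus an inbox to deliver into.
from collections import defaultdict

class ChangeFeed:
    def __init__(self):
        # predicate -> list of subscriber inboxes
        self.subscribers = defaultdict(list)

    def follow(self, predicate, inbox):
        """Subscribe an inbox to all changes using the given predicate."""
        self.subscribers[predicate].append(inbox)

    def publish(self, triple):
        """Deliver a changed triple only to interested subscribers."""
        _, predicate, _ = triple
        for inbox in self.subscribers[predicate]:
            inbox.append(triple)

feed = ChangeFeed()
inbox = []
feed.follow("po:broadcast_on", inbox)
feed.publish(("pb:b00abc1", "po:broadcast_on", "pb:bbc_radio_4"))
feed.publish(("pb:b00abc1", "dc:title", "Example"))  # not followed, dropped
print(len(inbox))  # 1
```

A crawler following only the predicates it cares about would see a far
smaller stream than the full 50,000 changes a day.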


 Did anyone try to tackle this problem already?

 Cheers!
 y


 [1] http://n2.talis.com/wiki/Changeset





Re: [Call for Action] Support linked data at the National Dialogue

2009-05-02 Thread Melvin Carvalho
1 day left.

Only 4 more votes needed to reach 3rd place!

On Thu, Apr 30, 2009 at 9:04 PM, Michael Hausenblas
michael.hausenb...@deri.org wrote:

 All,

 I'd like to draw your attention to [1], TimBL's proposal for answering the
 following question stated by The National Dialogue/Recovery.gov:

 'What do you consider to be the most exciting upcoming technology or system
 in the field of managing, aggregating, and visualizing diverse types of
 data?'

 Visit [1] now and leave your comment there or simply rate Tim's proposal
 high ;)

 Note that we only have time till 3 May, this Sunday!

 Cheers,
      Michael

 [1] http://www.thenationaldialogue.org/ideas/linked-open-data

 --
 Dr. Michael Hausenblas
 DERI - Digital Enterprise Research Institute
 National University of Ireland, Lower Dangan,
 Galway, Ireland, Europe
 Tel. +353 91 495730
 http://sw-app.org/about.html
 http://webofdata.wordpress.com/






Re: gimmee some data!

2009-06-14 Thread Melvin Carvalho
Football Data:

http://www.football-data.co.uk/englandm.php

On Sun, Jun 14, 2009 at 11:23 AM, Danny Ayers danny.ay...@gmail.com wrote:
 It's Ian Davis' birthday tomorrow, and for it he wants some linked data.

 So what datasets does anyone know of that can be translated relatively
 quickly & easily, the stuff you are planning to do one day when you get
 time..?



 --
 http://danny.ayers.name





Re: .htaccess a major bottleneck to Semantic Web adoption / Was: Re: RDFa vs RDF/XML and content negotiation

2009-06-26 Thread Melvin Carvalho
On Thu, Jun 25, 2009 at 6:44 PM, Martin Hepp
(UniBW) martin.h...@ebusiness-unibw.org wrote:
 Hi all:

 After about two months of helping people generate RDF/XML metadata for their
 businesses using the GoodRelations annotator [1],
 I have quite some evidence that the current best practices of using
 .htaccess are a MAJOR bottleneck for the adoption of Semantic Web
 technology.

 Just some data:
 - We have several hundred entries in the annotator log - most people spend
 10 or more minutes to create a reasonable description of themselves.
 - Even though they all operate some sort of Web sites, less than 30 % of
 them manage to upload/publish a single *.rdf file in their root directory.
 - Of those 30%, only a fraction manage to set up content negotiation
 properly, even though we provide a step-by-step recipe.
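
 For reference, a recipe of the kind referred to typically looks like the
 following (a sketch assuming Apache with mod_rewrite available; the file
 name catalog.rdf is illustrative, not from the GoodRelations annotator):

```apache
# .htaccess: serve RDF/XML under its own media type, and 303-redirect
# clients that ask for RDF from the generic URI to the .rdf file.
AddType application/rdf+xml .rdf

RewriteEngine On
# If the client's Accept header mentions RDF/XML,
# redirect /catalog to /catalog.rdf with a 303 See Other.
RewriteCond %{HTTP_ACCEPT} application/rdf\+xml
RewriteRule ^catalog$ catalog.rdf [R=303,L]
```

 Even this short fragment assumes privileges and modules that, as the
 numbers above show, many cheap hosting packages do not provide.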

 The effects are
 - URIs that are not dereferenceable,
 - incorrect media types, and
 - other problems.

 When investigating the causes and trying to help people, we encountered a
 variety of configurations and causes that we did not expect. It turned out
 that helping people just manage this tiny step of publishing Semantic Web
 data would turn into a full-time job for 1 - 2 administrators.

 Typical causes of problems are
 - Lack of privileges for .htaccess (many cheap hosting packages give limited
 or no access to .htaccess)
 - Users without a Unix background had trouble naming a file so that it begins
 with a dot
 - Microsoft IIS requires completely different recipes
 - Many users have access just at a CMS level

 Bottomline:
 - For researchers in the field, it is a doable task to set up an Apache
 server so that it serves RDF content according to current best practices.
 - For most people out there in reality, this is regularly a prohibitively
 difficult task, both because of a lack of skills and a variety of technical
 environments that turns what is easy at the textbook level into an
 engineering challenge.

 As a consequence, we will modify our tool so that it generates dummy RDFa
 code with span/div that *just* represents the meta-data without interfering
 with the presentation layer.
 That can then be inserted as code snippets via copy-and-paste to any XHTML
 document.

 Any opinions?

Been thinking about this issue for the last 6 months, and I've changed
my mind a few times.

Inclined to agree that RDFa is probably the ideal entry point for
bringing existing businesses onto GoodRelations.

For a read/write web (which is the goal of commerce, right?), you're
probably back to .htaccess, though, with, say, a controller that will
manage POSTed SPARUL inserts.

I think taking it one step at a time, in this way, seems a sensible
approach, though as a community, we'll need to put a bit of weight
behind getting the RDFa tool set up to the state of the art.


 Best
 Martin

 [1]  http://www.ebusiness-unibw.org/tools/goodrelations-annotator/

 Danny Ayers wrote:

 Thank you for the excellent questions, Bill.

 Right now IMHO the best bet is probably just to pick whichever format
 you are most comfortable with (yup it depends) and use that as the
 single source, transforming perhaps with scripts to generate the
 alternate representations for conneg.

 As far as I'm aware we don't yet have an easy templating engine for
 RDFa, so I suspect having that as the source is probably a good choice
 for typical Web applications.

 As mentioned already GRDDL is available for transforming on the fly,
 though I'm not sure of the level of client engine support at present.
 Ditto providing a SPARQL endpoint is another way of maximising the
 surface area of the data.

 But the key step has clearly been taken, that decision to publish data
 directly without needing the human element to interpret it.

 I claim *win* for the Semantic Web, even if it'll still be a few years
 before we see applications exploiting it in a way that provides real
 benefit for the end user.

 my 2 cents.

 Cheers,
 Danny.




 --
 --
 martin hepp
 e-business & web science research group
 universitaet der bundeswehr muenchen

 e-mail:  mh...@computer.org
 phone:   +49-(0)89-6004-4217
 fax:     +49-(0)89-6004-4620
 www:     http://www.unibw.de/ebusiness/ (group)
        http://www.heppnetz.de/ (personal)
 skype:   mfhepp twitter: mfhepp

 Check out the GoodRelations vocabulary for E-Commerce on the Web of Data!
 

 Webcast:
 http://www.heppnetz.de/projects/goodrelations/webcast/

 Talk at the Semantic Technology Conference 2009: Semantic Web-based
 E-Commerce: The GoodRelations Ontology
 http://tinyurl.com/semtech-hepp

 Tool for registering your business:
 http://www.ebusiness-unibw.org/tools/goodrelations-annotator/

 Overview article on Semantic Universe:
 http://tinyurl.com/goodrelations-universe

 Project page and resources for developers:
 http://purl.org/goodrelations/

 Tutorial materials:
 Tutorial at 

Re: .htaccess a major bottleneck to Semantic Web adoption / Was: Re: RDFa vs RDF/XML and content negotiation

2009-06-27 Thread Melvin Carvalho
On Sat, Jun 27, 2009 at 9:21 AM, Martin Hepp
(UniBW) martin.h...@ebusiness-unibw.org wrote:
 So if this hidden div / span approach is not feasible, we got a problem.

 The reason is that, as beautiful as the idea is of using RDFa to make a) the
 human-readable presentation and b) the machine-readable metadata link to
 the same literals, it becomes problematic in reality once the structures of
 a) and b) are very different.

 For very simple property-value pairs, embedding RDFa markup is no problem.
 But if you have a bit more complexity at the conceptual level and in
 particular if there are significant differences to the structure of the
 presentation (e.g. in terms of granularity, ordering of elements, etc.), it
 gets very, very messy and hard to maintain.

 And you give up the clear separation of concerns between the conceptual
 level and the presentation level that XML brought about.

 Maybe one should tell Google that this is not cloaking if SW meta-data is
 embedded...

 But the snippet basically indicates that we should not recommend this
 practice.

What happens if you put them in one big span tree and use the
@content attribute?
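
For illustration, the question above amounts to markup along these lines (a
sketch only; the offering name and description literals are invented). With
RDFa's @content attribute the literal value comes from the attribute rather
than from the rendered text, so the elements can stay visually empty:

```html
<div xmlns:gr="http://purl.org/goodrelations/v1#" typeof="gr:Offering">
  <!-- @content supplies the literal; the element body can remain empty -->
  <span property="gr:name" content="Example offering"></span>
  <span property="gr:description" content="Metadata-only description"></span>
</div>
```

Whether Google's cloaking heuristics treat @content differently from
CSS-hidden text is exactly the open question of this thread.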


 Martin


 Kingsley Idehen wrote:

 Mark Birbeck wrote:

 Hi Martin,



 b) download RDFa snippet that just represents the RDF/XML content (i.e.
 such
 that it does not have to be consolidated with the presentation level
 part
 of the Web page.


 By coincidence, I just read this:

  Hidden div's -- don't do it!
  It can be tempting to add all the content relevant for a rich snippet
  in one place on the page, mark it up, and then hide the entire block
  of text using CSS or other techniques. Don't do this! Mark up the
  content where it already exists. Google will not show content from
  hidden div's in Rich Snippets, and worse, this can be considered
  cloaking by Google's spam detection systems. [1]

 Regards,

 Mark

 [1]
 http://knol.google.com/k/google-rich-snippets/google-rich-snippets/32la2chf8l79m/1#



 Martin/Mark,

 Time to make a sample RDFa doc that includes very detailed GR based
 metadata.

 Mark: Should we be describing our docs for Google, fundamentally? I really
 think Google should actually recalibrate back to the Web etc..



 --
 --
 martin hepp
 e-business  web science research group
 universitaet der bundeswehr muenchen

 e-mail:  mh...@computer.org
 phone:   +49-(0)89-6004-4217
 fax:     +49-(0)89-6004-4620
 www:     http://www.unibw.de/ebusiness/ (group)
        http://www.heppnetz.de/ (personal)
 skype:   mfhepp twitter: mfhepp

 Check out the GoodRelations vocabulary for E-Commerce on the Web of Data!
 

 Webcast:
 http://www.heppnetz.de/projects/goodrelations/webcast/

 Talk at the Semantic Technology Conference 2009: Semantic Web-based
 E-Commerce: The GoodRelations Ontology
 http://tinyurl.com/semtech-hepp

 Tool for registering your business:
 http://www.ebusiness-unibw.org/tools/goodrelations-annotator/

 Overview article on Semantic Universe:
 http://tinyurl.com/goodrelations-universe

 Project page and resources for developers:
 http://purl.org/goodrelations/

 Tutorial materials:
 Tutorial at ESWC 2009: The Web of Data for E-Commerce in One Day: A Hands-on
 Introduction to the GoodRelations Ontology, RDFa, and Yahoo! SearchMonkey

 http://www.ebusiness-unibw.org/wiki/GoodRelations_Tutorial_ESWC2009








Re: Sig.ma - live views on the web of data

2009-07-23 Thread Melvin Carvalho
Really great.

I've used sig.ma to power a linked data profile searcher, that I've
been toying with ...

http://openprofile.com/

I'll be adding more linked data sources, over time...

On Thu, Jul 23, 2009 at 2:58 AM, Giovanni
Tummarello giovanni.tummare...@deri.org wrote:
 Dear Web of Data enthusiasts,

 we are very happy to share with you today the first public version of
 Sigma,  http://sig.ma ,  a browser, a mashup engine and an API for the
 web of data.

 here is blog post with screencast, sample sigma embedded mashup etc.

 http://blog.sindice.com/2009/07/22/sigma-live-views-on-the-web-of-data/

 Sig.ma is heavily based on Sindice but also takes important hints from
 Yahoo BOSS,  the OKKAM service and likely several others soon

 cheers

 Giovanni, also on behalf of Michele Catasta, Richard Cyganiak
 and Szymon Danielczyk, who worked specifically on this,
 and of the Data Intensive Infrastructure Group, DERI, as a whole.
 http://di2.deri.ie/team/





Re: W3C Workshop on Access Control Application Scenarios

2009-11-11 Thread Melvin Carvalho
On Thu, Oct 8, 2009 at 7:59 PM, Dan Brickley dan...@danbri.org wrote:
 Hi Rigo, Pling,

 +cc: SocialWeb XG

Position Papers: http://www.w3.org/2009/policy-ws/papers/


 On Thu, Oct 8, 2009 at 8:42 PM, Rigo Wenning r...@w3.org wrote:
 Dear all,

 as you may know, there are many issues around identity management.
 Some of them touch on access control. W3C had numerous requests to
 help shed some light into new challenges on access control as well
 as talking about application scenarios, the semantics they require
 and the challenges that come with them.

 Are any of these requests on the public record? (or W3C Member record at 
 least).

 Therefore, W3C is organizing this Workshop on Access Control
 Application Scenarios

 http://www.w3.org/2009/policy-ws/cfp.html

 I don't see any mention of oauth or openid there; would you like to
 encourage members of those communities to participate?

 cheers,

 Dan


 We are lucky to have Hal Lockhart, Chair of OASIS TC XACML, as our
 Workshop Chair. We hope to get interesting papers on semantics but
 also challenges to access control, including from the cloud
 community, the identity management community and the privacy
 community.

 Best,

 Rigo Wenning
 W3C Legal counsel






Re: Generate RDFa with Epiphany

2009-12-23 Thread Melvin Carvalho
On Tue, Dec 22, 2009 at 5:13 PM, Benjamin Adrian benjamin.adr...@dfki.de wrote:

 Hi everyone!

 Let me introduce the RDFa annotator Epiphany:

 It uses configurable domain-specific Linked Data to enrich web pages with
 RDFa annotation, automatically.
 These annotations link text passages to instances inside the Linked Data
 model.
 Hovering over an annotation with your mouse opens a lightbox with additional
 information from the RDF graph behind the instance's HTTP URI.

 Epiphany runs at: http://projects.dfki.uni-kl.de/epiphany/
 On the top right you'll find an example.

 Under http://projects.dfki.uni-kl.de/epiphany/form, you can write your own
 text and receive RDFa content.


Nice!!

The lookup bookmarklet in Tabulator gave me about 15 JS errors (Firefox
3.5, with Tabulator installed) ... is there a way we can reduce that?



 Currently, the underlying Linked Data model is a subset of DBpedia covering
 German politics.
 In later versions you will be able to upload or link your own Linked Data
 model to annotate web pages with your own domain specific RDFa.

 Please don't hesitate in giving me your comments :). Twitter hashtag is
 #RDFEPIPHANY

 Regards
 Ben

 --
 __
 Benjamin Adrian
 Email : benjamin.adr...@dfki.de
 WWW : http://www.dfki.uni-kl.de/~adrian/
 Tel.: +49631 20575 145
 __
 Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH
 Firmensitz: Trippstadter Strasse 122, D-67663 Kaiserslautern
 Geschaeftsfuehrung:
 Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter
 Olthoff
 Vorsitzender des Aufsichtsrats:
 Prof. Dr. h.c. Hans A. Aukes
 Amtsgericht Kaiserslautern, HRB 2313
 __






Re: New github project for RDFizer scripts

2010-02-03 Thread Melvin Carvalho
One for the collection?

http://code.google.com/p/lindenb/source/browse/trunk/src/xsl/linkedin2foaf.xsl


On 21 May 2009 19:53, Kingsley Idehen kide...@openlinksw.com wrote:

 All,

 The 30+ XSLT stylesheets [1] used by our collection of Sponger Cartridges
 are now available for community development and enhancement via a github
 repository [2].

 Links:

 1.
 http://virtuoso.openlinksw.com/dataspace/dav/wiki/Main/ClickableVirtSpongerCloud
 2. http://tr.im/m0PT


 --


 Regards,

 Kingsley Idehen   Weblog: http://www.openlinksw.com/blog/~kidehen
 President & CEO, OpenLink Software Web: http://www.openlinksw.com








Re: Tools for transforming data to RDF

2010-03-10 Thread Melvin Carvalho
2010/3/9 Alasdair Logan alasdair.lo...@yahoo.co.uk

 Hey all,

 I was wondering if anyone is familiar with tools to convert data into RDF
 triples and Linked Data. They can be for any data format, e.g. XML, CSV,
 plain text, etc.

 I'm doing this as part of a pilot study for my Master's project, so I'm just
 trying to get a general view of any tools used.


Converters: http://www.w3.org/2001/sw/wiki/Category:Converter

Tools in General: http://www.w3.org/2001/sw/wiki/Category:Tool



 Thanks in advance

 Ally




Re: RDFa for Turtles 2: HTML embedding

2010-03-10 Thread Melvin Carvalho
2010/3/10 Paul Houle ontolo...@gmail.com

 Specific proposal for RDFa embedding in HTML
 

 Ok,  here's a strategy for embedding RDFa metadata in HTML document heads
 -- make the head of the document be a valid XHTML fragment.

 Here,  now,  I'm going to write something like

 <head xmlns="http://www.w3.org/1999/xhtml"
       xmlns:dcterms="http://purl.org/dc/terms/">
   <meta rel="dcterms:creator" content="Ataru Morobishi" />
 </head>

 Because the content of the meta area is so simple,  compared to other
 parts of an html document,  I feel comfortable publishing a valid XHTML
 fragment for the head.  My understanding is that the namespace declarations
 will just be ignored by ordinary HTML tools (as they are in
 backwards-compatible XHTML documents) so there's really no problem here.

 This does bend the XHTML/RDFa standard and also HTML a little (those
 namespace declarations aren't technically valid) but I think we get a big
 gain (even a Turtle-head can embed triples in an HTML document) for very
 little pain.

 Any thoughts?


This seems sensible.  I did have the idea of dumping a whole bunch of RDFa
triples in the footer and setting visibility to zero, but if you can do it
safely in the head, problem solved!   We're back to the old days of putting
metadata in the head of a document.

This works for the rel attribute, but what about for property?

One nice thing about RDF is that it's a set, so if you put a full dump of
triples in one area, even if there's a dup somewhere in your markup, a
parser should remove duplicates.
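
The set semantics mentioned above can be demonstrated directly. A minimal
sketch, modeling triples as plain tuples rather than running a real RDFa
parser (the subject and literal values here are just examples):

```python
# RDF graphs have set semantics: asserting the same triple twice still
# yields one triple. Modeling triples as tuples, a Python set shows the
# same collapse a parser's output graph would perform.
triples = [
    ("#doc", "dcterms:creator", "Ataru Morobishi"),
    ("#doc", "dcterms:creator", "Ataru Morobishi"),  # dup from a 2nd markup block
    ("#doc", "dcterms:title", "An example document"),
]
graph = set(triples)
print(len(graph))  # 2: the duplicate collapses
```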


Re: Improving Organization of Govt. based Linked Data Projects

2010-03-21 Thread Melvin Carvalho
2010/3/21 Hugh Glaser h...@ecs.soton.ac.uk


 On 21/03/2010 13:00, Dan Brickley dan...@danbri.org wrote:

  On 21 Mar 2010, at 12:47, Hugh Glaser h...@ecs.soton.ac.uk wrote:
 
  Hi Kingsley, I am right with you - finding stuff is hard.
  But I do think we could make it easier for all of us.
  Just the esw wiki alone requires me to put every set I create into a
  bunch of places
 
  10 years ago, looking for RDF on the public Web was like looking for a
  needle in a haystack. There wasn't much out there and it was poorly
  linked. So a big part of the thinking that led to the foaf/rdfweb
  design was to make discovery easier: if you find one RDF doc, you
  should be able to find most of the rest by following seeAlso and other
  kinds of links.
 
  Why isn't this enough? Perhaps because many of the datasets are huge
  DB exports, crawlers are often overwhelmed and disappear into
  depth-first holes? Or because we don't publish triples about doc- and
  dataset-types in a crawler-discoverable way?
 Yes, sort of.
 I think the problem is now with metadata for the datasets, which is great.
 Actually, if everyone published semantic sitemaps and voiD descriptions etc.,
 and we had the tools to re-present the data, we would be well along the road.
 At worst, I might register my site somewhere (as I do with Sindice), say go
 figure. Pages such as the esw ones should then appear magically.
 
  A wiki page is ok for initial bootstrap but we ought to outgrow that
  soon...
 But I think we may be pleased to say that soon has arrived?
 And perhaps if it was easier we would discover that there is so much more
 out there that the wiki page hasn't actually been enough for a while. I can
 think of 10 interesting datasets that aren't there (that aren't mine).

 I am tempted to say that we spend all our time persuading others to take
 things like those tables and republish as RDF, but... :-)

 And yes, I know this has been a topic before, but we really should be
 feeling increasingly embarrassed by this.


Well I got a bit carried away with some regular expressions (which you
should never do on a Sunday) and came up with:

2>/dev/null curl http://esw.w3.org/SparqlEndpoints | grep -A6 '^<tr>$' |
awk '{ i++ ; if (i==3) print $0 " ." ; if (i==1) print "\n<#endpoint" p++ ">\n" $0 ; if ($1 == "<tr>") i = 0; }' |
sed 's/<td><b>/dct:description "/' |
sed 's/<\/td><td>/void:sparqlEndpoint /' |
sed 's/void.*href="\([^"]*\)".*/void:sparqlEndpoint <\1> ./' |
sed 's/dct:description.*>\(.*\)<\/a.*/dct:description "\1" ;/'


#endpoint1
dct:description Project/b ;
/tddct:description SPARQL endpoint/b .

#endpoint2
dct:description BBC Programmes and Music ;
void:sparqlEndpoint http://bbc.openlinksw.com/sparql/ .

#endpoint3
dct:description Bio2RDF ;
void:sparqlEndpoint 
http://www.freebase.com/view/user/bio2rdf/public/sparql .

#endpoint4
dct:description BioGateway ;
void:sparqlEndpoint 
http://www.semantic-systems-biology.org/biogateway/endpoint .

#endpoint5
dct:description BBC Backstage ;
void:sparqlEndpoint http://jena.hpl.hp.com:3040/backstage .

#endpoint6
dct:description DBTune ;
void:sparqlEndpoint http://dbtune.org/bbc/peel/sparql .

#endpoint7
dct:description DBTune ;
void:sparqlEndpoint http://dbtune.org/bbc/playcount/sparql .

#endpoint8
dct:description DailyMed ;
void:sparqlEndpoint http://www4.wiwiss.fu-berlin.de/dailymed/sparql .

#endpoint9
dct:description data.gov.uk ;
void:sparqlEndpoint http://data.gov.uk/sparql .

#endpoint10
dct:description D2R Server ;
void:sparqlEndpoint http://www4.wiwiss.fu-berlin.de/dblp/sparql .

#endpoint11
dct:description OpenLink Software ;
void:sparqlEndpoint http://dbpedia.org/sparql .

#endpoint12
dct:description OpenLink Software ;
void:sparqlEndpoint http://dbpedia-live.openlinksw.com/sparql/ .

#endpoint13
dct:description Diseasome ;
void:sparqlEndpoint http://www4.wiwiss.fu-berlin.de/diseasome/sparql .

#endpoint14
dct:description DoapSpace ;
void:sparqlEndpoint http://doapspace.org/sparql .

#endpoint15
dct:description DrugBank ;
void:sparqlEndpoint http://www4.wiwiss.fu-berlin.de/drugbank/sparql .

#endpoint16
td  ba href=http://code.google.com/p/openflydata/wiki/Flyatlas;
class=external text title=
http://code.google.com/p/openflydata/wiki/Flyatlas;FlyAtlas/a/b ;
void:sparqlEndpoint http://openflydata.org/query/flyatlas_20080916 .

#endpoint17
dct:description Fly-TED ;
void:sparqlEndpoint http://openflydata.org/query/flyted_20081203 .

#endpoint18
dct:description Gene Expression In-situ Images of fruitfly embryogenesis
;
void:sparqlEndpoint http://spade.lbl.gov:2021/sparql .

#endpoint19
dct:description Gene Ontology Database ;
void:sparqlEndpoint http://spade.lbl.gov:2020/sparql .

#endpoint20
dct:description DBTune ;
void:sparqlEndpoint http://dbtune.org/henry/sparql/ .

#endpoint21
dct:description IBM ATG (Advanced Technologies Group) ;
void:sparqlEndpoint http://abdera.watson.ibm.com:8080/sparql .

#endpoint22
dct:description DBTune ;
void:sparqlEndpoint http://dbtune.org/jamendo/sparql .

#endpoint23

Re: A URI(Web ID) for the semantic web community as a foaf:Group

2010-03-26 Thread Melvin Carvalho
2010/3/26 KangHao Lu (Kenny) kennyl...@csail.mit.edu

 Hi all  hi Tom,

 Does the Semantic Web Interest Group (or the Linked Data community), as a
 foaf:Group or something equivalent, have a WebID (URI)? Sorry, but I didn't
 check whether this has been brought up.

 If it doesn't, I would certainly hope that it does have one. I think the
 best choice might be something like

 http://linkeddata.org/data#swig

 With that, we can eat our own dog food and are able to ask the following
 questions with SPARQL (doap-based questions)

 Question 1: What are all the semantic web people working on?

 SPARQL 1:

 SELECT ?person ?project
 WHERE {
   <http://linkeddata.org/data#swig> foaf:member ?person .
   ?project doap:developer ?person .
 }

 Question 2: What programming languages do semantic web people use?

 SELECT DISTINCT ?person ?lang
 WHERE {
   <http://linkeddata.org/data#swig> foaf:member ?person .
   ?project doap:developer ?person .
   ?project doap:programming-language ?lang .
 }

 I mean, from questions of our interest, we can develop *our* Linked Data
 and we can also develop the DOAP vocabulary when needed so that, in the
 future, we can ask what RDF backends SWIG people use: Jena, Virtuoso,
 or Sesame.

 FYI:
 1. timbl's WebID does link (that is, doap:developer reverse-links) to two of
 his projects, namely Tabulator and CWM
 2. DBpedia, as a doap:Project, does have a URI
 http://www4.wiwiss.fu-berlin.de/is-group/page/projects/Project13, and it
 is bidirectionally linked to its developers.


 So, for things to work, I would just recommend placing an N3 file at
 http://linkeddata.org/data . It can be maintained manually by Tom, or we
 can have a very simple form for updating WebIDs of SWIG people.
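
 Such an N3/Turtle file might start out along these lines (a hand-written
 sketch; apart from the proposed group URI and Tim's card URI, the group
 name literal is just illustrative):

```turtle
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<http://linkeddata.org/data#swig>
    a foaf:Group ;
    foaf:name "Semantic Web Interest Group" ;
    foaf:member <http://www.w3.org/People/Berners-Lee/card#i> .
```

 Each additional member is then a single foaf:member triple pointing at that
 person's own WebID.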

 The spirit is very similar to http://data.semanticweb.org/ , but since

 1. I am not an academic person, so I am not even on the list of people
 2. I found manually created RDF much more useful than those generated by
 machine (this is controversial, I know) and I would be happy to see more
 cross-domain links, that is, Tim's URI should be

 http://www.w3.org/People/Berners-Lee/card#i

 not

 http://data.semanticweb.org/person/tim-berners-lee


 The choice of http://linkeddata.org/data is because:
 1. linkeddata.org is the first Google hit for Linked Data
 2. Unlike those sites generated by large systems such as SemanticWiki, I
 think linkeddata.org is more flexible for making little changes like this
 one.

 What do you guys think?


Really nice idea.  I find http://crschmidt.net/semweb/doapamatic/ useful.

But I think to really dogfood this properly it has to be an
access-controlled data wiki (edit via SPARUL/WebDAV, auth via FOAF, with web
ACLs).  Then you don't need anyone to maintain it, and it should be
self-healing.  We can iterate on the rules and structure etc., perhaps doing
one by hand first, but I think we have the tools to showcase what the sem
web can do here.

Perhaps we can put the source under a shared project.  I think between us we
have the expertise to build it?




 Have a nice weekend,
 Kenny





Re: write enabled web of data / acl/acf/wac etc

2010-04-03 Thread Melvin Carvalho
2010/4/3 Nathan nat...@webr3.org

 Hi All,

 Simply looking for the best place to discuss ACL/ACF/WAC and the
 write-enabled web of data etc. - mailing list or IRC or private contacts -
 unsure if this comes under the banner of linked data and thus this mailing
 list.  I.e. whilst I can have a good realtime discussion about REST-related
 things, I come up short with regards to discussing the aforementioned
 write-enabled web of data - any pointers?


I would say foaf-protocols [1] and #swig

public-lod tends to be about public and open data, IMHO, but there should be
a growing overlap, especially when SPARQL 1.1 arrives.

[1] http://lists.foaf-project.org/mailman/listinfo/foaf-protocols



  Further, with regards to the ESW wiki pages, I've not seen any
  discussions yet on articles, and for some of the documents I do have
  notes, additions etc. to add, but don't want to just add them ad hoc
  without at least discussing or running them past somebody else.

 Many Regards,

 Nathan




Re: What would you build with a web of data?

2010-04-09 Thread Melvin Carvalho
2010/4/9 Georgi Kobilarov georgi.kobila...@gmx.de

 Hi Bernard,

 well, why did I ask people to write about their ideas for apps?
 My observation is that there are zero real apps using linked open data
 (i.e.
 data from the cloud). Not even a single one. Null.
 After 3 years of linking open data...


Agree that there could be more apps, but I've seen a few that are useful.
Linked Geo Data and Data Wiki spring to mind.



 There are applications that re-use identifiers, and there are applications
 that use single, hand-picked data sources.  But let's be honest, that's not
 using the linked data cloud. So, why's that? There must be a reason.
 Which
 part of the ecosystem sucks?


I think limited support for SPARQL Update means that linked data is largely
read-only.  When SPARQL 1.1 comes out, hopefully that will change.



 In my opinion we won't get to solve that question if we stick to linked
 data will save the planet, one day. But instead, figure out which apps
 people would want to build now, and then see why it's not possible. If it
 doesn't work on the small scale of some simple app, how will linked data
 ever save our planet?


Decentralization of data will become a growing theme.  5 years ago there
were almost no blogs, but now blogs have changed the way we consume news.
The newspaper industry has struggled to adapt to the decentralization of
documents.  The end user is probably better off for it.

I think one big area could be ecommerce, or data-driven commerce.
Decentralizing transactions (I would again use a data-wiki-driven solution
for this) within our current legal framework could facilitate better control
of people's finances and offer a boost to the economy.  Countries like Greece
could benefit from a boost to their economy.  In fact, all countries could.
With any decentralization process there are going to be winners and losers,
but again hopefully the end user will be better off in the long term.

Will this equate to saving the planet?  Maybe, just maybe ... :)



 Cheers,
 Georgi

  -Original Message-
  From: Bernard Vatant [mailto:bernard.vat...@mondeca.com]
  Sent: Friday, April 09, 2010 12:25 PM
  To: Georgi Kobilarov
  Cc: public-lod
  Subject: Re: What would you build with a web of data?
 
  Hi Georgi
  Copying below the comment I just posted on ReadWriteWeb. Looks like a
  rant, but could have been worse ... I could have added that if the Web of
  Data is used to find out more cute cats images, well, I wonder what I do
 on
  this boat.
  I'm amazed, not to say frightened, by the egocentrism and lack of
  imagination of the applications proposed so far. Will the Web of Data be
 an
  effective tool for tackling our planet critical issues, or just another
 toy for
  spoiled children of the Web?
  I would like to see the Web of Data enable people anywhere in the world
 to
  find out smart, sustainable and low-cost solutions to their local
 development
  issues. What are the success (or failure) stories in e.g., farming, water
 supply,
  energy, education, health etc. in environments similar to mine, anywhere
 in
  the world? Something along the lines of http://www.wiserearth.org (of
  which data, BTW would be great to have in the Linked Data cloud).
  Best
  Bernard
 
  2010/4/9 Georgi Kobilarov georgi.kobila...@gmx.de
  Yesterday issued a challenge on my blog for ideas for concrete linked
 open
  data applications. Because talking about concrete apps helps shaping the
  roadmap for the technical questions for the linked data community ahead.
  The
  real questions, not the theoretical ones...
 
  Richard MacManus of ReadWriteWeb picked up the challenge:
  http://www.readwriteweb.com/archives/web_of_data_what_would_you_
  build.php
 
  Let's be creative about stuff we'd build with the web of data. Assume the
  Linked Data Web would be there already, what would you build?
 
  Cheers,
  Georgi
 
  --
  Georgi Kobilarov
  Uberblic Labs Berlin
  http://blog.georgikobilarov.com
 
 
 
 
  --
  Bernard Vatant
  Senior Consultant
  Vocabulary  Data Engineering
  Tel:   +33 (0) 971 488 459
  Mail: bernard.vat...@mondeca.com
  
  Mondeca
  3, cité Nollez 75018 Paris France
  Web:http://www.mondeca.com
  Blog:http://mondeca.wordpress.com
  





Re: RedStore 0.3 released

2010-04-11 Thread Melvin Carvalho
2010/4/11 Nicholas J Humfrey n...@aelius.com

 Hello,

 I have released version 0.3 of RedStore:
 http://code.google.com/p/redstore/

 RedStore is a lightweight RDF triplestore written in C using the Redland
 library. It is aimed at being a quick to install and easy to use
 triplestore for people wanting to test/develop/experiment with semantic web
 technologies.

 The recommended versions of Redland to compile against are:
 http://download.librdf.org/source/raptor-1.4.21.tar.gz
 http://redstore.googlecode.com/files/rasqal-20100409.tar.gz (pre-release
 of rasqal-0.9.20)
 http://download.librdf.org/source/redland-1.0.10.tar.gz

 A statically compiled binary for Mac OS 10.4+ (containing the above) is
 available:
 http://redstore.googlecode.com/files/redstore-0.3-macosx.zip


This is awesome.

Question:  do you run this on localhost only, or combine it with some kind
of dyndns technology?




 Changes
 ---
 - Added improved content negotiation support
 - Created new Service Description page
 - Full format list on query page
 - Added HTML entity escaping
 - Added support for selecting query language
 - Added support for ASK queries
 - text/plain load messages unless HTML is requested
 - N-quads dumping support
 - more automated testing

 Features
 
 - SPARQL over HTTP support
 - An HTTP interface that is compatible with 4store.
 - Only build dependency is Redland.
 - Unit tests for most of the HTTP server code.

 Limitations
 --
 - Single process/single threaded
 - No request timeouts


 nick.





Re: RedStore 0.3 released

2010-04-11 Thread Melvin Carvalho
2010/4/11 Nicholas J Humfrey n...@aelius.com


  I have released version 0.3 of RedStore:
  http://code.google.com/p/redstore/
 
  RedStore is a lightweight RDF triplestore written in C using the Redland
  library. It is aimed at being a quick to install and easy to use
 triplestore for people wanting to test/develop/experiment with semantic web
 technologies.
 
  This is awesome.
 
  Question:  do you run this on localhost only, or combine it with some
 kind of dyndns technology?

 I don't fully understand your question; but hopefully this answers it.


Sorry if I was unclear; I'm just thinking that in the longer term, this
might be valuable to expose to the outside world.  So I was wondering what
you do, but as it seems you bind to localhost, that indeed answers my
question.



 There is currently no form of access control, so anyone who can access the
 endpoint can query/add/update and delete any data. So I would recommend that
 you firewall it/bind it to localhost:

 redstore -p 9000 -b 127.0.0.1


Yes that makes sense.  I can bind this to localhost, then allow apache to
forward requests, based on which WebID is trying to access the store.




 nick.






Re: Standards Based Data Access Reality (Edited)

2010-04-11 Thread Melvin Carvalho
2010/4/12 Kingsley Idehen kide...@openlinksw.com

 All,

 Edited, as I just realized some critical typos and errors that affect context.

 Hopefully, you understand what Nathan is articulating (ditto Giovanni). If
 not, simply step back and ask yourself a basic question: What is Linked Data
 About?

 Is it about markup? Is it about Data Access? Is it about a never ending
 cycle of subjective commentary and cognitive dissonance that serves to
 alienate and fragment a community that desperately needs clarity and
 cohesion?

 Experience and history reveal the following to me:

 1. Standards based data access is about to be inflected in a major way
 2. The EAV (Entity-Attribute-Value) graph model is the new focal point of
 Data Access (covering CRUD operations).

 Microsoft, Google, and Apple grok the reality above in a myriad of ways via
 somewhat proprietary offerings (this community should really learn to look
 closer via objective context lenses). Note, proprietary is going to mean
 less and less since their initiatives are HTTP based i.e., it's all about
 hypermedia resources bearing EAV model data representations -- with varying
 degrees of fidelity.

 **
 Players and EAV approaches:

 1. Microsoft -- OData (EAV with Atom+Feed extension based data
 representation)

 2. Google -- GData (EAV with Atom+Feed based data representation)

 3. RDF based Linked Data -- (RDF variant of EAV plus a plethora of data
 representation formats that are pegged to RDF moniker)

 4. Apple -- Core Data (the oldest of the lot from a very proprietary
 company, this is basically an EAV store that serves all Mac OS X apps, built
 using SQLite; until recently you couldn't extend its backend storage engine
 aspect) .
 **
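As a rough illustration of the EAV graph model these offerings share, here is a toy sketch: facts stored as (entity, attribute, value) rows, queryable by any combination of bound fields. The prefixes and data are made up for the example.

```python
# Toy EAV (Entity-Attribute-Value) store. The data below is
# illustrative only; "ex:" is a hypothetical namespace prefix.
rows = [
    ("ex:Berlin", "rdf:type",   "ex:City"),
    ("ex:Berlin", "ex:country", "ex:Germany"),
    ("ex:Paris",  "rdf:type",   "ex:City"),
]

def match(e=None, a=None, v=None):
    """Return all rows matching the bound fields (None = wildcard)."""
    return [r for r in rows
            if (e is None or r[0] == e)
            and (a is None or r[1] == a)
            and (v is None or r[2] == v)]

print(len(match(a="rdf:type", v="ex:City")))  # two entities typed as City
```

The same shape underlies OData, GData, and RDF triples alike; the differences are in the serialization and protocol, not the model.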

 Reality re. Business of Linked Data:

 Data as a Service (DaaS) is the first step i.e., entity oriented
 structured data substrate based on the EAV model. In a nutshell, when you
 have structured data in place, data meshing replaces data mashing.  Monikers
 aside, entrepreneurs, CTOs, and CIOs already grok this reality in their own
 realm specific ways.

 Microsoft in particular, already groks data access (they developed their
 chops eons ago via ODBC). In recent times, they've groked EAV model as
 mechanism for concrete Conceptual Model Level data access, and they are
 going to unleash an avalanche of polished EAV-based applications courtesy of
 their vast developer network. Of course, Google and Apple will follow suit,
 naturally.

 The LOD Community and broader Semantic Web Problem (IMHO):

 History is a very good and kind teacher, make it an integral part of what
 you do and the path forward becomes less error prone; a message that hasn't
 penetrated deep enough within this community, in my personal experience.

 **
 Today, I see a community rife with cognitive dissonance and desires to
 define a non existent absolute truth with regards to what constitutes an
 Application or Killer Application. Ironically, has there EVER been a
 point in history where the phrase: Killer Application, wasn't retrospective?
 Are we going to miraculously change this, now?

 **

 Has there ever been a segment in the market place (post emergence of
 Client-Server partitioning) where if you didn't make both the Client and the
 Server, the conclusion was: we have nothing?

 We can continue postulating about what constitutes an application, but
 rest assured, Microsoft, Google, Apple (in that order), are priming up for
 precise execution with regards to opportunities in the emerging EAV based
 Linked Data realm. They have:

 1. Polished Clients
 2. Vast User Networks
 3. Vast Integrator Networks
 4. Vast Developer Networks
 5. Bottom-less cash troves
 6. Very smart people.

 In my experience, combining the above has never resulted in failure, even
 if the deliverable contains little bits of impurity.

 Invest a little more time in understanding the history of our industry
 instead of trying to reinvent it wholesale. As Colin Powell articulated re.
 the IRAQ war: If You Break The Pot, You Own It!

 Folks, we are just part of an innovation continuum, nothing is new under
 the sun bar, context !!


+1

Just to add maybe that CRUD is just one part of the equation, after that can
come aggregation, curation, self healing etc.

Now I'm trying to work out whether what you've presented is good news or
bad.

http://www.w3.org/2007/09/map/main.jpg

Looking at the WWW Roadmap, are we all headed for the Sea of
Interoperability or to be sucked into a Growing Desert Wasteland?



 --

 Regards,

 Kingsley Idehen   President  CEO OpenLink Software Web:
 http://www.openlinksw.com
 Weblog: 
 http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca: kidehen








Re: [foaf-protocols] semantic pingback improvement request for foaf

2010-04-17 Thread Melvin Carvalho
2010/4/15 Story Henry henry.st...@bblfish.net

 Hi,

   I often get asked how one solves the friend request problem on open social
 networks that use foaf in the hyperdata way.

 On the closed social networks when you want to make a friend, you send them
 a request which they can accept or refuse. It is easy to set up, because all
 the information is located in the same database, owned by the same company.
 In a distributed social foaf network anyone can link to you, from anywhere,
 and your acceptance can be expressed most clearly by linking back. The
 problem is: you need to find out when someone is linking to you.


So then the problem is how does one notify people that one is linking to
 them. Here are the solutions in order of simplicity.

   0. Search engine solution
   -

   Wait for a search engine to index the web, then ask the search engine
 which people are linking to you.

  Problems:

   - This will tend to be a bit slow, as a search engine optimised to search
 the whole web will need to be notified first, even if this is only of minor
 interest to them
   - It makes the search engine a core part of the communication between two
 individuals, taking on the role of the central database in closed social
 networks
   - It will not work when people deploy foaf+ssl profiles, where they
 access control who can see their friends. Search engines will not have
 access to that information, and so will not be able to index it.


A great summary, Henry

What about using W3C recommended standard of SPARQL (Update)?  I refer to
the architecture sketch for web 3.0:

http://www.w3.org/DesignIssues/diagrams/social/acl-arch.png

It strikes me that a (hyper) data file *should* be, first and foremost, updated
(written) using SPARUL or WebDAV, with HTTP used for read operations.

So I add you as a friend to my FOAF.  And also send you a message with the
triples I'd like you to add to your FOAF (use discovery if necessary to find
your SPARQL server, but as Tabulator does, you could just send it to the web
3.0 enabled file).  You can perform some sanitization/validation/reputation
check before adding the friend to your FOAF.  It's a simple workflow to get
you started, but you can build layers on top (push/pull, realtime,
notifications, approvals etc.).  Also compatible with FOAF+SSL Auth.
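The workflow described above might look like this in outline. The WebIDs and the acceptance rule are hypothetical; the point is only the shape: add the link on my side, send the suggested triple, check it before applying.

```python
# Hypothetical WebIDs for the two parties.
me, you = "http://example.org/me#i", "http://example.org/you#i"
FOAF_KNOWS = "http://xmlns.com/foaf/0.1/knows"

my_graph = {(me, FOAF_KNOWS, you)}   # step 1: my side of the link
suggested = (you, FOAF_KNOWS, me)    # step 2: triple sent to you

def acceptable(triple, owner):
    # Toy sanitization/reputation check: only accept foaf:knows
    # statements whose subject is the receiving profile's owner.
    s, p, o = triple
    return s == owner and p == FOAF_KNOWS

your_graph = set()
if acceptable(suggested, you):
    your_graph.add(suggested)        # step 3: the reciprocal link

print(len(your_graph))  # 1
```

Reputation checks, approvals, and notification layers would slot in where `acceptable` is called.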



   1. HTTP Referer Header
   --

   The absolute simplest solution would be just to use the mis-spelled HTTP
 Referer Header, that was designed to do this job. In a normal HTTP request
 the location from which the requested URL was found can be placed in the
 header of the request.

http://en.wikipedia.org/wiki/HTTP_referrer

   The server receiving the request and serving your foaf profile, can then
 find the answer to the referrer in the web server logs.

 Perhaps that is all that is needed! When you make a friend request, do the
 following:

   1. add the friend to your foaf profile

  <http://bblfish.net/#hjs> foaf:knows
 <http://kingsley.idehen.name/dataspace/person/kidehen#this> .

   2. Then just do a GET on their Web ID with the Referrer header set to
 your Web Id. They will then find in their apache logs, something like this:

 93.84.41.131 - - [31/Dec/2008:02:36:54 -0600] "GET
 /dataspace/person/kidehen HTTP/1.1" 200 19924 "http://bblfish.net/"
 "Mozilla/5.0 (Windows; U; Windows NT 5.1; ru; rv:1.9.0.5) Gecko/2008120122
 Firefox/3.0.5"

  This can then be analysed using incredibly simple scripts (such as
 described in [1], for example)

   3. The server could then just verify that information by

  a. doing a GET on the Referer URL to find out if indeed it is linking to
 the users WebId
  b. do some basic trust analysis (is this WebId known by any of my
 friends?), in order to rank it before presenting it to the user

   The nice thing about the above method is that it will work even when the
 initial linker's server does not have a Ping service for WebIDs. If the
 pages linking are in html with RDFa most browsers will send the referrer
 field.

  There is indeed a Wikipedia entry for this: it is called Refback.
  http://en.wikipedia.org/wiki/Refback

  Exactly why Refback is more prone to spam than the ping back or linkback
 solution is still a bit of a mystery to me.
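The Referer-based request in step 2 above can be sketched as follows. The WebIDs are hypothetical, and the request is only constructed, not sent; sending it with the header set is what leaves the Refback trace in the other party's server logs.

```python
from urllib.request import Request

their_webid = "http://example.org/you#i"   # hypothetical WebIDs
my_webid = "http://example.org/me#i"

# A GET on their WebID with the Referer header set to your own
# WebID, so the link shows up in their access logs.
req = Request(their_webid, headers={"Referer": my_webid})
print(req.get_header("Referer"))
```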

  2. Referer with foaf+ssl
  

  In any case the SPAM problem can be reduced by using foaf+ssl [2]. If the
 WebId is an https WebId - which it really should be! - then the requestor
 will authenticate himself, at least on the protected portion of the foaf
 profile. So there are the following types of people who could be making the
 request on your WebId.

  P1. the person making the friend request

   Here their WebId and the referer field will match.
   (this can be useful, as this should be the first request you will receive
 - a person making a friend request, should at least test the link!)

  P2. A friend of the person making the friend request

   Perhaps a friend of P1 goes to his 

Re: [foaf-protocols] semantic pingback improvement request for foaf

2010-04-17 Thread Melvin Carvalho
2010/4/17 Story Henry henry.st...@bblfish.net


 On 17 Apr 2010, at 11:34, Melvin Carvalho wrote:

 
   0. Search engine solution
   -
 
   Wait for a search engine to index the web, then ask the search engine
  which people are linking to you.
 
  Problems:
 
   - This will tend to be a bit slow, as a search engine optimised to
 search
  the whole web will need to be notified first, even if this is only of
 minor
  interest to them
   - It makes the search engine a core part of the communication between
 two
  individuals, taking on the role of the central database in closed social
  networks
   - It will not work when people deploy foaf+ssl profiles, where they
  access control who can see their friends. Search engines will not have
  access to that information, and so will not be able to index it.
 
 
  A great summary, Henry
 
  What about using W3C recommended standard of SPARQL (Update)?  I refer to
  the architecture sketch for web 3.0:
 
  http://www.w3.org/DesignIssues/diagrams/social/acl-arch.png
 
  It strikes me a (hyper) data file *should* be, first and foremost,
 updated
  (write) using SPARUL or WebDAV and HTTP should be used for read
 operations.

 SPARUL seems to me to be asking a lot of technology for something that is
 really simple, and that is much easier done using much more widely deployed
 technology. This does not stop it from being deployed later. But we will
 have a lot
 more chance integrating foaf into web 2.0 applications if we don't require
 SPARUL,
 especially if it is clear that one can do without it.


Makes sense.  But if you look at the diagram it did have an extra arrow for
the web 2.0 servers.  As you did raise 6 different ways, I thought I would
add the standards compliant web 3.0 way.  If you're saying we should
concentrate on the low hanging fruit first, it's a great point, but let's
not forget that recommended standards do exist here.



 It is important to have this work in browsers too, so that people can
 create
 a friend request and have a web page they can see the request at. This
 would
 allow them to then also DELETE that ping request, or edit it.

 This can all easily be done using POST, GET and DELETE.  And mostly even
 just POST.

 Furthermore as we saw doing updates on graphs is still very new, and can
 easily lead
 to all kinds of problems.


Right, so perhaps we need some testing / proof-of-concept here ... let's
experiment a bit and see what we find.



 Finally a ping is arguably not a request to update a graph. It is an action
 to notify someone of something. That is all. As I mentioned in another
 thread, what the
 person notified does with this notification is up to them: it could be
  - to add the person to their foaf as a friend
  - to add them as an acquaintance
  - a spammer
  - a follower
  - ignore them
  - to warn their friends
  - to call the police


Good point.  One classic web 2.0 pattern is Twitter emailing you 'X has become
your follower' to represent #X sioc:follows #you.  And of course with blogs
we're back to the classic pingback.

I wonder if something like a Talis Changeset could be sent to a
'notification queue' of a recipient ... that may be something like an inbox
for them, but instead containing semantically rich updates.  When the user
and the queue are both online the updates can be processed via some kind of
workflow.  Some updates can be automatic.
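That notification-queue idea could be sketched like this. The vocabulary terms and the auto-apply rule are made up for illustration; a real changeset would carry richer structure than bare triples.

```python
from collections import deque

# A per-user inbox of semantically rich update notifications,
# drained when the user and the queue are both online.
inbox = deque([
    ("http://example.org/you#i", "sioc:follows", "http://example.org/me#i"),
    ("http://example.org/you#i", "foaf:knows",   "http://example.org/me#i"),
])

AUTO = {"sioc:follows"}   # predicates deemed safe to apply without review

def process(queue):
    applied, held = [], []
    while queue:
        triple = queue.popleft()
        # Some updates apply automatically; the rest await approval.
        (applied if triple[1] in AUTO else held).append(triple)
    return applied, held

applied, held = process(inbox)
print(len(applied), len(held))  # 1 1
```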



 I really don't see that I want to give other people any decision in
 updating my graphs.
 I can give them a space to make a comment, but updating my graph will be
 something I
 am going to be very careful about allowing.

 
  So I add you as a friend to my FOAF.  And also send you a message with
 the
  triples I'd like you to add to your FOAF (use discovery if necessary to
 find
  your sparql server, but as tabulator does, you could just send it to the
 web
  3.0 enabled file).

 As stated above, if that is what you want to do, then you don't need
 SPARUL.
 You could post a request which contains the triples that you want.

 Perhaps we can design the ping in such a way, that a change request can be
 posted,
 for occasions when you noticed an error in my foaf file


Right, you can do it that way too, but web 3.0 servers will be more and more
often sparql enabled.




   You can perform some sanitization/validation/reputation
  check before adding the friend to your FOAF.  It's a simple workflow to
 get
  you started, but you can build layers on top (push/pull, realtime,
  notifications, approvals etc.).  Also compatible with FOAF+SSL Auth.

 I'd be for that, as long as we can start with a very simple ping mechanism
 that
 is easy to implement. And that would favor a POST, using an html form.

 Also it would be nice if this could be done in an intutive way so that we
 can have
 deployments with human readable html, that reminds people of SN friend
 requests.


I'll think about some proofs of concept.



 Henry







Re: Sindice real time widget/api, and news feed

2010-04-26 Thread Melvin Carvalho
2010/4/27 Giovanni Tummarello giovanni.tummare...@deri.org

 Hi all,

 A new version of the Sindice frontend with some interesting improvements,
 e.g. a realtime data widget on the homepage, and the new API to
 restrict to the day's new documents (or the week's) etc.

 http://sindice.com

 Also Facebook support for RDFa is making the web now bubble with new
 triples.

 See how these are supported right away:


 http://sindice.com/developers/inspector/?url=http%3A%2F%2Fwww.rottentomatoes.com%2Fm%2F10011268-oceans%2F

 New important features and capabilities are in the pipeline for the
 next weeks and months, those interested may now follow us on the
 sindice_news twitter feed.

  http://twitter.com/Sindice_news


Fantastic!!  Love the realtime front page.




 On behalf of the Sindice Team
 Giovanni




Re: [foaf-dev] linking facebook data

2010-04-30 Thread Melvin Carvalho
2010/4/30 Matthew Rowe m.r...@dcs.shef.ac.uk

 Hi

 First just want to say Li that your app is cool. Good job.

 Hello,

 Am cc'ing the foaf-dev mailing list (sorry for cross-posting)...

 I just had a look at your fb graph API - foaf rdf service[1], firstly cool
 stuff, but I have a few points I will address below.

 I recall Matthew Rowe[2] making a similar service a few years ago which
 spat out foaf data for a user's fb account, and I recall fb getting annoyed.
 Am guessing they mentally might have shifted since danbri's good work in
 getting them involved with SW tech (great work once again by danbri ...
 *tonnes of applause), I guess we will find out soon ...


 Indeed it appears that their opinion of 'open data' has shifted. Facebook
 refused to list the FOAF Generator, which Mischa mentions, in their
 application directory as they were concerned about having 'their' data
 exported from Facebook for use by 3rd parties. You can use it here:

 http://ext.dcs.shef.ac.uk/~u0057/FoafGenerator

 With the above app you are able to export your entire social graph from fb,
 thus capturing all your relationships in RDF using FOAF. I guess that with
 the Open Graph Protocol you can't get such information.


I believe there was a restriction Facebook had about caching data for more
than 48 hours.

In the f8 keynote last week, Mark Zuckerberg said that restriction had now
been lifted.




 I just built a demo that provides dereferenable HTTP URIs (with
 RDF/XML data) for Facebook data using data retrieved from the recently
 announced  Graph API by Facebook.  see
 http://sam.tw.rpi.edu/ws/face_lod.html

 In the demo, I observed inconsistent term usage between the Facebook
 data  API (JSON) and open graph protocol vocabulary. There is also
 some good potential to get the Facebook terms mapped to FOAF and
 DCterms terms. Please see my blog at

 http://tw.rpi.edu/weblog/2010/04/28/putting-open-facebook-data-into-linked-data-cloud/
 .

 Comments are welcome.

 best,

 --
 Li Ding
 http://www.cs.rpi.edu/~dingl/




 Matthew Rowe, MEng
 PhD Student
 OAK Group
 Department of Computer Science
 University of Sheffield
 m.r...@dcs.shef.ac.uk


 ___
 foaf-dev mailing list
 foaf-...@lists.foaf-project.org
 http://lists.foaf-project.org/mailman/listinfo/foaf-dev



Re: [foaf-protocols] ACL Append

2010-06-19 Thread Melvin Carvalho
2010/6/18 Nathan nat...@webr3.org

 Following the recent update of a few of the design issues, one change to
 the 'Socially Aware Cloud Storage' [1] introduced (among many other
 interesting updates) the following sentence:


Great spot.



 'If I want to say that I want to be your friend, for example, I could
 write that as a simple one-line statement into a friend requests file
 which you allow me write access to. In fact, I only need append access,
 and not even read or general write access to that list.'


What would the one line statement equivalent to 'I want to be your friend'
be?



 Hand in hand with this, the ACL Ontology has been updated to to include
 a new acl:Append (in addition to the existing :Control, :Read, :Write)

 http://www.w3.org/ns/auth/acl#Append
 'Append accesses are specific write access which only add information,
 and do not remove information.
 For text files, for example, append access allows bytes to be added onto
 the end of the file.
 For RDF graphs, Append access allows adding triples to the graph but does
 not remove any.
 Append access is useful for dropbox functionality.
 Dropbox can be used for link notification, where the information added
 is a notification that some link has been made elsewhere relevant to
 the given resource.'
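In code, acl:Append semantics for an RDF graph might be enforced like this. A toy sketch only: the graph is modeled as a set of triples, and the function name and triple values are illustrative.

```python
def apply_append(graph, added, removed):
    # acl:Append grants write access that can only add information:
    # any attempt to remove existing triples is rejected.
    if removed:
        raise PermissionError("append access: removals not allowed")
    return graph | added

g = {("ex:a", "foaf:knows", "ex:b")}
g = apply_append(g, {("ex:c", "foaf:knows", "ex:a")}, set())
print(len(g))  # 2
```

A friend-request dropbox would be exactly this: others may drop one-line statements in, but only the owner may read or prune the file.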

 Best,

 Nathan

 [1] http://www.w3.org/DesignIssues/CloudStorage.html
 ___
 foaf-protocols mailing list
 foaf-protoc...@lists.foaf-project.org
 http://lists.foaf-project.org/mailman/listinfo/foaf-protocols



Re: Subjects as Literals, [was Re: The Ordered List Ontology]

2010-06-30 Thread Melvin Carvalho
On 30 June 2010 21:14, Pat Hayes pha...@ihmc.us wrote:


 On Jun 30, 2010, at 1:30 PM, Kingsley Idehen wrote:

  Nathan wrote:

 Pat Hayes wrote:

 On Jun 30, 2010, at 6:45 AM, Toby Inkster wrote:

 On Wed, 30 Jun 2010 10:54:20 +0100
 Dan Brickley dan...@danbri.org wrote:

 That said, i'm sure sameAs and differentIndividual (or however it is
 called) claims could probably make a mess, if added or removed...


 You can create some pretty awesome messes even without OWL:

   # An rdf:List that loops around...

   <#mylist> a rdf:List ;
   rdf:first <#Alice> ;
   rdf:rest <#mylist> .

   # A looping, branching mess...

   <#anotherlist> a rdf:List ;
   rdf:first <#anotherlist> ;
   rdf:rest <#anotherlist> .


 They might be messy, but they are *possible* structures using pointers,
 which is what the RDF vocabulary describes.  It's just about impossible to
 guarantee that messes can't happen when all you are doing is describing
 structures in an open-world setting. But I think the cure is to stop
 thinking that possible-messes are a problem to be solved. So, there is dung
 in the road. Walk round it.


 Could we also apply that to the 'subjects as literals' general discussion
 that's going on then?

 For example I've heard people saying that it encourages bad 'linked data'
 practise by using examples like { 'London' a x:Place } - whereas I'd
 immediately counter with { x:London a 'Place' }.

 Surely all of the subjects as literals arguments can be countered with
 'walk round it', and further good practise could be aided by a few simple
 notes on best practise for linked data etc.


 IMHO an emphatic NO.

 RDF is about constructing structured descriptions where Subjects have
 Identifiers in the form of Name References (which may or may not resolve to
 Structured Representations of Referents carried or borne by Descriptor
 Docs/Resources). An Identifier != Literal.


 What ARE you talking about? You sound like someone reciting doctrine.

 Literals in RDF are just as much 'identifiers' or 'names' as URIs are. They
 identify their value, most clearly and emphatically. They denote in exactly
 the same way that URIs denote. "23"^^xsd:number is about as good an
 identification of the number twenty-three as you are ever likely to get in
 any notational system since ancient Babylonia.


You can also do this:

http://km.aifb.kit.edu/projects/numbers/web/n23



 Pat Hayes



 If you are in a situation where you can't or don't want to mint an HTTP
 based Name, simply use a URN, it does the job.



 Best,

 Nathan




 --

 Regards,

 Kingsley Idehen   President  CEO OpenLink Software Web:
 http://www.openlinksw.com
 Weblog: 
 http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca: kidehen







 
 IHMC (850)434 8903 or (650)494 3973
 40 South Alcaniz St.   (850)202 4416   office
 Pensacola(850)202 4440   fax
 FL 32502  (850)291 0667   mobile
 phayesAT-SIGNihmc.us   http://www.ihmc.us/users/phayes









Re: Metaweb joins Google

2010-07-20 Thread Melvin Carvalho
On 20 July 2010 11:40, Hondros, Constantine 
constantine.hond...@wolterskluwer.com wrote:

 It's big news for the wider Semantic Web community, as it shows that Google
 is determined to extract better semantics from pages it crawls ... but it's
 mediocre news for the LOD community. Freebase is based on proprietary
 database technology, it relies on its own graph data format, is queryable by
 its own query language (MQL, based on JSON), and makes no commitment to RDF,
 OWL and SPARQL beyond supporting a SPARQL end-point (in beta).

 The best case is that Google is just buying the entity extraction expertise
 and software deployed by Freebase ... the worst case is that they end up
 leapfrogging the Semantic Web standards in favour of their own ...


The big 4 are getting ready for the upcoming semantic singularity

google with metaweb
ms with powerset
facebook with ogp
yahoo with search monkey

it's more than dipping your toe in the water ... no one will want to be left
behind in this game

i personally think google have bought well and bought QUALITY

followers of socionomics say that explosive turns normally happen
(ironically) at a time after there's been a long-standing degree of pessimism
in the market

do the indicators point to the possibility that the sem web is approaching a
point of inflection?



 C.

 -Original Message-
 From: public-lod-requ...@w3.org [mailto:public-lod-requ...@w3.org] On
 Behalf Of Nathan
 Sent: Friday, July 16, 2010 9:57 PM
 To: Semantic Web; Linked Data community
 Subject: Metaweb joins Google

 Surprised this one isn't already posted!

 Metaweb (inc Freebase) has joined google:


 http://googleblog.blogspot.com/2010/07/deeper-understanding-with-metaweb.html
 http://blog.freebase.com/2010/07/16/metaweb-joins-google/

 Big (huge) news  congrats to all involved,

 Best,

 Nathan



 This email and any attachments may contain confidential or privileged
 information
 and is intended for the addressee only. If you are not the intended
 recipient, please
 immediately notify us by email or telephone and delete the original email
 and attachments
 without using, disseminating or reproducing its contents to anyone other
 than the intended
 recipient. Wolters Kluwer shall not be liable for the incorrect or
 incomplete transmission
 of this email or any attachments, nor for unauthorized use by its
 employees.

 Wolters Kluwer nv has its registered address in Alphen aan den Rijn, The
 Netherlands, and is registered
 with the Trade Registry of the Dutch Chamber of Commerce under number
 33202517.





Re: [ANN] RDFa Developer (1.0b1): RDFa extension for firefox

2010-07-20 Thread Melvin Carvalho
2010/7/13 Javier Pozueco Pérez javier.pozu...@fundacionctic.org

 (sorry for cross posting)

 We are glad to announce the first public release of RDFa Developer, a
 firefox extension that helps you to correctly annotate web pages with RDFa.

 This tool enables you to examine the RDFa markup, to query your data using
 SPARQL and to detect common pitfalls in the use of RDFa.

 RDFa Developer is an open source tool that is available for download at:
 http://rdfadev.sourceforge.net. A short demonstration video is also
 available from the web page.

 Your comments are very much appreciated.


Looks awesome.

It so reminds me of AutoPager [1], which many people say is the best
Firefox extension of them all!

[1] https://addons.mozilla.org/en-US/firefox/addon/4925/



 Yours,

 Javier Pozueco.

 javier.pozu...@fundacionctic.org



Re: [foaf-protocols] Great Presentation re. what WebIDs enable: Personal Data Spaces (nee. Data Lockers)

2010-08-10 Thread Melvin Carvalho
On 11 August 2010 00:18, Kingsley Idehen kide...@openlinksw.com wrote:

 All,


 A very important "what the Web of Linked Data will ultimately deliver"
 presentation [1].

 Please watch this presentation with value proposition articulation
 (rather than implementation technology) in mind. It does an excellent
 job of explaining the concept of: Personal Data Spaces (basically data
 virtualization via HTTP based Linked Data + WebID driven ACLs).

 My only little issue with Dave (a superficial one) lies with his use of
 "Personal Data Locker"; I much prefer: "Personal Data Spaces" :-)


Great vid!

Nice quote from DanC ... "the important word in 'Semantic Web' is WEB" :)



 Links:

 1. http://www.vimeo.com/13942000 -- Dave Siegel presentation at Semtech
 2010


 --

 Regards,

 Kingsley Idehen
 President & CEO
 OpenLink Software
 Web: http://www.openlinksw.com
 Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca: kidehen





 ___
 foaf-protocols mailing list
 foaf-protoc...@lists.foaf-project.org
 http://lists.foaf-project.org/mailman/listinfo/foaf-protocols



Re: Announce: Official LOD2 Project Launch

2010-09-21 Thread Melvin Carvalho
On 21 September 2010 18:37, Kingsley Idehen kide...@openlinksw.com wrote:

  All,

 The new EU Co-funded project aimed at accelerating the evolution of the Web
 into a global knowledge space is now live. The project's official press
 release [1] provides an overview of goals, consortium membership,
 infrastructure technology, and services associated with this groundbreaking
 initiative.


Awesome!

I hope it's not going to just be read-only like LOD1 ... very exciting!



 Links:

 1.  http://lod2.eu/BlogPost/9-press-release-lod2-project-launch.html --
 Official Launch Press Release
 2.  http://lod2.eu/Welcome.html -- Project Home Page.

  --

 Regards,

 Kingsley Idehen
 President & CEO
 OpenLink Software
 Web: http://www.openlinksw.com
 Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca: kidehen








Re: XRI and XDI

2010-10-18 Thread Melvin Carvalho
On 18 October 2010 16:56, Juan Sequeda juanfeder...@gmail.com wrote:
 Hi everybody
 I just stumbled on XRI and XDI:
 http://www.xdi.org/modules/tut3/index.php?id=2
 A quick overview suggests that it is much the same thing as Linked Data: XRIs
 are identifiers (like URIs) and XDI is a data interchange format (like RDF)
 "...in which XML data from any data source can be identified, exchanged,
 linked, and synchronized into a machine-readable dataweb just as HTML
 pages from any content source are linked into the human-readable Web today.
 What makes this interchange format possible is identifying, describing, and
 versioning data using XRIs."
 This is the first I've heard of this. Who knows about the status of
 this? Who is adopting it?

Last I heard the W3C TAG's decision was not to take forward the
development of XRI or specs containing XRI, but rather, to explore the
use of URIs as the value proposition of the WWW [1].

Subsequently, the OASIS membership voted against it becoming a spec,
which was perhaps not received as well as it could have been by the
folks on the OASIS XRI committee [2].

It's still mentioned in some pockets, namely some of the OpenID specs.

The future of XRI, I believe, is uncertain, with some saying it has died
and others saying it is alive and well [3].

However, a new version, 3.0, is coming out later this year.  It is
purported to have addressed the concerns of the TAG [3].

It will be interesting to see what the new version brings, when it comes out.

[1] http://lists.w3.org/Archives/Public/www-tag/2008May/0078
[2] http://www.equalsdrummond.name/?p=130
[3] http://permalink.gmane.org/gmane.comp.web.openid.general/12901

 Cheers
 Juan Sequeda
 +1-575-SEQ-UEDA
 www.juansequeda.com




Re: [foaf-protocols] Please allow JS access to Ontologies and LOD

2010-10-22 Thread Melvin Carvalho
On 23 October 2010 01:04, Nathan nat...@webr3.org wrote:
 Hi All,

 Currently nearly all of the web of linked data is blocked from access via
 client-side scripts (JavaScript), because the browsers' same-origin policy
 applies until a server opts in via CORS [1], which is now implemented in
 the major browsers.

 Whilst this is important for all data, there are many of you reading
 this who have it in your power to expose huge chunks of the RDF on the
 web to JS clients.  If you manage any of the common ontologies or
 anything in the LOD cloud diagram, please do take a few minutes from
 your day to expose the single HTTP header needed.

 Long story short, to allow JS clients to access our open data we need
 to add one small HTTP response header which will allow HEAD/GET and POST
 requests - the header is:
   Access-Control-Allow-Origin: *

 This is both XMLHttpRequest (W3C) and XDomainRequest (Microsoft)
 compatible and supported by all the major browser vendors.

 Instructions for common servers follow:

 If you're on Apache then you can send this header by simply adding the
 following line to a .htaccess file in the dir you want to expose
 (probably site-root):
   Header add Access-Control-Allow-Origin "*"

 For NGINX:
   add_header Access-Control-Allow-Origin *;
 see: http://wiki.nginx.org/NginxHttpHeadersModule

 For IIS see:
   http://technet.microsoft.com/en-us/library/cc753133(WS.10).aspx

 In PHP you add the following line before any output has been sent from
 the server:
   header("Access-Control-Allow-Origin: *");

 For anything else you'll need to check the relevant docs I'm afraid.

+1

Thanks for the heads up.  I added:

Header add Access-Control-Allow-Origin "*"

to my .htaccess and everything worked fine.  Easy!  :)
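As a rough illustration of what the browser actually enforces, here is a minimal Python sketch of the check a CORS-aware browser performs before letting a script read a cross-origin response. The header dicts and the `https://example.org` origin are made-up examples, not from the thread:

```python
def allows_cross_origin(headers, origin="https://example.org"):
    """True if a browser would let JS served from `origin` read this response."""
    acao = headers.get("Access-Control-Allow-Origin")
    return acao == "*" or acao == origin

# A response exposing the header Nathan recommends:
open_headers = {"Content-Type": "text/turtle",
                "Access-Control-Allow-Origin": "*"}
# The default response most LOD servers send, with no CORS header:
closed_headers = {"Content-Type": "text/turtle"}

print(allows_cross_origin(open_headers))    # True
print(allows_cross_origin(closed_headers))  # False
```

The wildcard `*` is what makes open data readable from any origin; a server could instead echo back a specific allowed origin.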


 Best  TIA,

 Nathan

 [1] http://dev.w3.org/2006/waf/access-control/
 ___
 foaf-protocols mailing list
 foaf-protoc...@lists.foaf-project.org
 http://lists.foaf-project.org/mailman/listinfo/foaf-protocols




Re: Breaking News: Google supports GoodRelations

2010-11-02 Thread Melvin Carvalho
Wow!  Congrats!!

On 2 November 2010 19:07, Martin Hepp mfh...@gmail.com wrote:
 Dear all:

 Breaking News: Google has just started to recommend using the GoodRelations
 vocabulary for product and price information in Web pages!

 See

   http://www.heppresearch.com/gr4google


 This is a major - if not the critical - step towards massive adoption of
 RDF, because there is now a clear incentive for any site owner in the world
 to add rich meta-data in RDFa to her or his page templates. It is also, to
 my knowledge, the first OWL DL vocabulary adopted by a major search engine.

 It is safe to assume that additional GoodRelations elements, not currently
 relevant for Rich Snippets, and RDF features currently not required by
 Google (e.g. datatype information), will not irritate Google's processing of
 RDFa markup, so you can cater for Google and the Web of Linked Data in one
 turn if you follow the recipe from my page given above.

 I would like to take this opportunity to thank all of the many individuals
 who supported my work on GoodRelations in one way or another over the past
 years, namely Andreas Radinger, Alex Stolz, Uwe Stoll, Kavi Goel, Kingsley
 Idehen, Jay Myers, Peter Mika, Stephan Decker, Jamie Taylor, Andreas Harth,
 Aldo Bucchi, Giovanni Tummarello, Richard Cyganiak, Jon Udell, Daniel
 Bingel, Markus Linder, Martin Schliefnig, Andreas Wechselberger, Leyla Jael
 Garcia, and many others. All of them have provided valuable suggestions and
 feedback, encouragement, or both.

 This is a great day for the Semantic Web and Linked Data. Now please spread
 the word!

 Best wishes

 Martin Hepp

 -
 Hepp Research GmbH
 Karlstraße 8
 D-88212 Ravensburg, Germany
 Phone +49 751 205 8512
 Fax   +49 3212 1020296

 Web     http://www.heppresearch.com/
 eMail   cont...@heppresearch.com
 Twitter heppresearch
 Skype   heppresearch

 UStID: DE268 362 852
 Amtsgericht Ulm, HRB 724378
 Geschäftsführer: Univ.-Prof. Dr. Martin Hepp




Re: WebID and Signed Emails

2010-11-04 Thread Melvin Carvalho
On 4 November 2010 23:24, Kingsley Idehen kide...@openlinksw.com wrote:
 On 11/4/10 5:09 PM, Mischa Tuffield wrote:

 Drawing an analogy: this email is signed, I am not signed, and the email has
 a URI identifying the person who sent it; they are quite different.

 Cheers,

 Mischa *2 [cents|pence] worth


 Best,

 Nathan


 -BEGIN PGP SIGNATURE-
 Version: GnuPG/MacGPG2 v2.0.12 (Darwin)

 iQIcBAEBAgAGBQJM0yETAAoJEJ7QsE5R8vfvsQoQALCxpsT7wfjSHLIiYsHCuf/8
 KSXHqMMUBiHNJyc8asFyfA9+CGMOM3r3b/kmF5KNPmg49RB9bon5Jlb5fiCvBr5J
 TXYk+5s7iFpLENzhWhDJhCpIX8ZC/HBXDJ/Vpkjijesa3W+5dL/G+4RHYXCpUTi1
 Rwc6FA57pZTb1NnKgmEdK6jCO4sZBhdkyCKaWwlvK1zig07XdP1/CVmblGWpaSuc
 oXJZ9cUf0gKnwI4NDO7B/PjgvfMH7/8pWVPx56f68rk/fnXaOB0aWbxCwuIuDeL/
 obzLU1i7oxjnKD4TMdH+bULJAnZvndyLWPRJBorhfJQqfnvV9xAJGTWAWxf4G5Xh
 r5iHA5FsLIw1GFBuMhWHVsFXtuDhCXrXzWxOTlSPGx43/bIZtXeQbTXcbUvI5zGU
 RAU6etCOFuCEo46H+i0T5yJfUz0OhwjYNBSIqIZq/FDpt9rkCKNavXIhRmazKCoI
 l7Lh5zouk9UH+wuKfE4Z0yAbXDTgobmbqcZzKzBXgzx9B8haYuCEKXcbmBbWIt2h
 +p2OkAEBfngZZMtz2Wi5WQWE/dgv0cPjX19y9sHcXpaop6i9kFArQeCuYb/p+6fr
 G1FnjZYTOKWex9eQd88oxzlisFafyU8cgTX2VxEdiH6Ko7yD1wdhyAw8KYegEnL+
 4O+cZmx+6w0HQNwM2T5D
 =Q5HZ
 -END PGP SIGNATURE-


 Mischa,

 Nice of you to bring this up; I've changed the topic heading accordingly.

 Now imagine if your signature also included your WebID. Then my email client
 would verify your mails using the WebID protocol :-)

 Another example of the power of Linked Data!

Do you even need your WebID in your signature?

What if your WebID pointed to your PGP credentials?


 --

 Regards,

 Kingsley Idehen   
 President & CEO
 OpenLink Software
 Web: http://www.openlinksw.com
 Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca: kidehen








Possible Idea For a Sem Web Based Game?

2010-11-20 Thread Melvin Carvalho
Hi All

I was thinking about creating a simple game based on semantic web
technologies and linked data.

Some on this list may be too young to remember this, but there used to
be game books where you would choose your own adventure.

http://en.wikipedia.org/wiki/Choose_Your_Own_Adventure

The idea is that every page would have a description, optionally a
drawing, and a number of 'links' to new pages.  Each 'link' would take
you to another page where you would continue the adventure, or reach
an ending.

I was wondering how hard it would be to model this using linked data principles.

1.  Would each 'location' be a document or a resource?  Web of
Documents vs Web of Resources?

2.  Could we use foaf:image and dcterms:description for the game pages?

3.  How would you model the link on each page?

It might be interesting to knock something together; maybe there are
some data sources for this.  Or better still, allow people to
change images and descriptions to create a living story, using web
standards (WebDAV & SPARQL Update, perhaps).

I'm not sure how the rendering would work, but perhaps it's easy
enough in RDFa once we have a model.

Do you think this could work?

Thanks in advance
Melvin



Re: Possible Idea For a Sem Web Based Game?

2010-11-20 Thread Melvin Carvalho
On 20 November 2010 20:13, Toby Inkster t...@g5n.co.uk wrote:
 On Sat, 20 Nov 2010 18:28:24 +0100
 Melvin Carvalho melvincarva...@gmail.com wrote:

 1.  Would each 'location' be a document or a resource?  Web of
 Documents vs Web of Resources?

 2.  Could we use foaf:image and dcterms:description for the game pages?

 3.  How would you model the link on each page?

 Sounds like a pretty awesome idea. I'd personally model it like this:

        <#node1>
                a game:Node ;
                foaf:name "Dark Cave" ;
                foaf:depiction <...> ;
                dcterms:description "..." .

 I'd say that game:Node is not disjoint with foaf:Document. That gives
 you flexibility - in some cases a node might be a page, and in other
 cases you might have several nodes described on the same page.

 Links to other places could be accomplished using:

        <#node1>
                game:north <#node2> ;
                game:south <otherdoc.xhtml#wasteland> ;
                game:east <http://example.net/game#node9> .

 The description itself would have more detailed descriptions of the
 directions, like "To the south lies a desolate wasteland.".  Directions
 you'd want would probably be the eight compass points, plus up, down,
 inside, outside.

 Each node should probably also have co-ordinates (not in WGS84, but a
 made-up co-ordinate system), along the lines of:

        <#node1>
                game:latitude 123 ;
                game:longitude -45 .

 This would not be used for gameplay, but to aid authoring new nodes.
 You'd want to have your north triple link to a node that you could
 plausibly reach by going a short distance north.

Hi Toby

Thanks for the detailed reply.  That sounds excellent, exactly what I
was looking for.

In fact compass directions (plus up and down) add a lot to the equation.

However most book based text games will have a description added to
each link, rather than simply directions to travel.  Here's a quick
example from googling 'choose your own adventure' :

http://www.iamcal.com/games/choose/room.php

I think it would be possible to bootstrap some existing stories onto the
model if we could expand the idea of game:north to have both a link and a
description, which is what I was wondering about.

Longer term, I think it would be great to start a simple adventure
game in the classic style of 'The Hobbit' or 'Hitchhiker's Guide'
text/graphics-based adventures from the 80s, with the twist
that game worlds could link to multiple servers across the web,
allowing anyone to make a 'game within a game', or web of games.
However, that's probably a greater modelling task, so I wanted to start
more simply to begin with.

So something like:

<#action1>
    a game:Action ;
    dcterms:description "Jump on the Barrel" ;
    game:destination <http://example.net/game/node10> .

I'm pretty new to modelling this stuff, so I'm not sure how much sense that makes?
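To make the shape of the data concrete, here is a small in-memory sketch of Toby's node-and-exit model in Python. The node names and exits beyond #node1 are invented for illustration, not part of the proposed vocab:

```python
# In-memory sketch of game:Node resources, each with a name and
# a map of exits (direction -> node URI).
NODES = {
    "#node1": {"name": "Dark Cave",
               "exits": {"north": "#node2"}},
    "#node2": {"name": "Wasteland",
               "exits": {"south": "#node1"}},
}

def move(current, direction):
    """Follow an exit from the current node; stay put if there is no such exit."""
    return NODES[current]["exits"].get(direction, current)

print(move("#node1", "north"))  # #node2
print(move("#node2", "east"))   # #node2 -- no exit east, so we stay
```

In the RDF version the node dict would come from dereferencing the node URI, which is what lets one game world link into another across servers.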


 I'm not sure how the rendering would work, but perhaps it's easy
 enough in RDFa once we have a model.

 I'd be happy to mock-up an interface - perhaps tonight!

 --
 Toby A Inkster
 mailto:m...@tobyinkster.co.uk
 http://tobyinkster.co.uk





Re: Possible Idea For a Sem Web Based Game?

2010-11-20 Thread Melvin Carvalho
On 20 November 2010 21:06, mike amundsen mam...@yahoo.com wrote:
 FWIW, earlier this year I implemented a simple Hypermedia maze game:
 http://amundsen.com/examples/mazes/2d/

 This was done as part of an exercise to implement bots that can read
 hypermedia and successfully navigate a 2D perfect maze.

 The current version supports a simple hypermedia XML format
 (application/xml) and an XHTML format. If you view these links with a
 browser, you'll get a very basic UI.

 I'd be most interested in successfully implementing this same 2D maze
 using RDF (any flavor including RDFa, n3, etc.).

Looks cool!

Maybe we can work out an ontology to handle both game scenarios.

Did you ever think about importing data from openstreetmap to create
real world mazes?


 The code is implemented in C# and I'm happy to share it with anyone
 interested, too. I don't have any meaningful docs at this point, but
 would be happy to spend the time to make it more accessible to anyone
 who wishes to implement clients against this server.

 mca
 http://amundsen.com/blog/
 http://twitter@mamund
 http://mamund.com/foaf.rdf#me


 #RESTFest 2010
 http://rest-fest.googlecode.com




 [...]







Re: Possible Idea For a Sem Web Based Game?

2010-11-20 Thread Melvin Carvalho
On 21 November 2010 00:43, Toby Inkster t...@g5n.co.uk wrote:
 On Sat, 20 Nov 2010 19:13:31 +
 Toby Inkster t...@g5n.co.uk wrote:

 I'd be happy to mock-up an interface - perhaps tonight!

 Here are a few test nodes:

 http://buzzword.org.uk/2010/game/test-nodes/london
 http://buzzword.org.uk/2010/game/test-nodes/birmingham
 http://buzzword.org.uk/2010/game/test-nodes/brighton
 http://buzzword.org.uk/2010/game/test-nodes/hove

 The vocab they use is:

 http://purl.org/NET/game#

 I've put together a little web-based client you can use to play the
 game here:

 http://buzzword.org.uk/2010/game/client/?Node=http%3A%2F%2Fbuzzword.org.uk%2F2010%2Fgame%2Ftest-nodes%2Flondon

wow, very cool! :)


 Source code is here:

 http://buzzword.org.uk/2010/game/client/source-code

 As you should be able to see from the source, while the four test nodes
 only link to each other, they could theoretically link to nodes
 elsewhere on the Web, and the client would follow the links happily.

 Melvster wrote:

 However most book based text games will have a description added to
 each link, rather than simply directions to travel.

 The way I've written this client, the nodes themselves can extend the
 pre-defined link directions:

        <#node1> <#hide-under-rug> <#node2> .

        <#hide-under-rug>
                rdfs:label "hide under the rug" ;
                rdfs:subPropertyOf game:exit .

 The client will notice you've defined a custom direction, and offer it
 as an option.

Ah great, so you have a number of predicates, one for each 'action'.


 One possible addition, that would go way beyond what the CYOA books
 could offer would be for the client to have a stateful store. So when
 you entered a room, there could be a list of room contents which you
 could collect into your store. The objects that you've collected could
 then influence the progress of the game. Probably need to think a bit
 more about how this should work.

That sounds like a logical next step.  Your game character can be
completely virtual or linked to a foaf:Person.  It may be possible
to have an agent that acts as a 'dungeon master', which has the
ability to, say, pick up an #item and put it in your inventory when you
issue a command in the right room, for an object that is free.  Then
when selecting an action in a room, it can check the rules to see if
that's possible given your current inventory.

I've been talking to someone interested in porting a few simple games
to linked data, so we'll see if we can put together some small game
worlds, and maybe adding a few puzzles.

Maybe you can even get XBOX-style "Achievements" for completing
certain levels in different adventures!
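One way the 'dungeon master' rule check described above could look, sketched in Python. The action names, required items, and destinations here are all hypothetical, not taken from the vocab:

```python
# Actions gated on inventory, as a dungeon-master agent might evaluate them.
# "requires" of None means the action is always available.
ACTIONS = {
    "#go-north":   {"requires": None,   "destination": "#node2"},
    "#open-chest": {"requires": "#key", "destination": "#treasure-room"},
}

def available_actions(inventory):
    """Actions the player may take, given the set of item URIs they hold."""
    return sorted(a for a, spec in ACTIONS.items()
                  if spec["requires"] is None or spec["requires"] in inventory)

print(available_actions(set()))     # ['#go-north']
print(available_actions({"#key"}))  # ['#go-north', '#open-chest']
```

The same check could be expressed as a SPARQL ASK over the player's inventory graph; the point is that the DM, not the client, holds the rules.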


 --
 Toby A Inkster
 mailto:m...@tobyinkster.co.uk
 http://tobyinkster.co.uk





Re: Possible Idea For a Sem Web Based Game?

2010-11-21 Thread Melvin Carvalho
On 21 November 2010 00:43, Toby Inkster t...@g5n.co.uk wrote:
 On Sat, 20 Nov 2010 19:13:31 +
 Toby Inkster t...@g5n.co.uk wrote:

 I'd be happy to mock-up an interface - perhaps tonight!

 Here are a few test nodes:

 http://buzzword.org.uk/2010/game/test-nodes/london
 http://buzzword.org.uk/2010/game/test-nodes/birmingham
 http://buzzword.org.uk/2010/game/test-nodes/brighton
 http://buzzword.org.uk/2010/game/test-nodes/hove

 The vocab they use is:

 http://purl.org/NET/game#

 I've put together a little web-based client you can use to play the
 game here:

 http://buzzword.org.uk/2010/game/client/?Node=http%3A%2F%2Fbuzzword.org.uk%2F2010%2Fgame%2Ftest-nodes%2Flondon

 Source code is here:

 http://buzzword.org.uk/2010/game/client/source-code

 As you should be able to see from the source, while the four test nodes
 only link to each other, they could theoretically link to nodes
 elsewhere on the Web, and the client would follow the links happily.

 Melvster wrote:

 However most book based text games will have a description added to
 each link, rather than simply directions to travel.

 The way I've written this client, the nodes themselves can extend the
 pre-defined link directions:

        <#node1> <#hide-under-rug> <#node2> .

        <#hide-under-rug>
                rdfs:label "hide under the rug" ;
                rdfs:subPropertyOf game:exit .

 The client will notice you've defined a custom direction, and offer it
 as an option.

I do like the exits, and think they are all useful and needed.

However, I'm just thinking that this might not be ideal for scaling wrt
CYOA.  Every game choice would have to be added to the ontology if we
used this technique exclusively.  I wonder if there's possibly another
way to model an 'exit' or game choice such that we don't need to change
the vocab?


 [...]





Re: Possible Idea For a Sem Web Based Game?

2010-11-21 Thread Melvin Carvalho
On 21 November 2010 03:06, mike amundsen mam...@yahoo.com wrote:
 Melvin:

 I'd very much like to work on a shared ontology. And yes, changing the
 data to include real places has come up w/ some other folks who have
 implemented test clients for this server.

Great!  I think Toby's is a very good start.


 I've also worked out that basics of additional features (not exposed
 in this public version) including:
 - objects that can be found in each room
 - treasure that can be accumulated

Very nice.  For richer games that would be ideal.


 Some additional items that are on my drawing board but are not yet
 working well are:
 - monsters that require battle (using objects/tools already acquired)

There was some modeling of this here:

http://goonmill.org/2007/

 - purchase of supplies, objects, etc. (using the treasure already accumulated)

I'm interested in digital economies.  This is an area I would like to
help build to Web Scale.


 Of course, moving away from 2D is also an important step. I have built
 maps that include the four simple map directions + up & down, but have
 not yet completed an engine that randomly generates these cubes.

Rendering engines is a whole realm in itself.  Would be great to build
up a number of clients based on a generic data model.


 Finally, I also have some state handling built into the server,
 including life values and other variables that are tracked for
 individual players, etc.

We may be able to use SPARQL Update to add items to a persona (WebID)
... tracking items is an interesting problem to solve, maybe a game in
itself!


 Many pieces lying about, quite a bit of assembly required at this
 point. This has been sitting dormant for a few months and I am
 looking forward to picking it back up again.

 Let me know how I can contribute and I'd be happy to do what I can.

Sounds good!  I think if we start off simple and make a few simple
linked games, we can iterate richer versions, if people are
interested.


 mca
 http://amundsen.com/blog/
 http://twitter@mamund
 http://mamund.com/foaf.rdf#me


 #RESTFest 2010
 http://rest-fest.googlecode.com




 [...]

Re: Possible Idea For a Sem Web Based Game?

2010-11-21 Thread Melvin Carvalho
On 21 November 2010 00:43, Toby Inkster t...@g5n.co.uk wrote:
 On Sat, 20 Nov 2010 19:13:31 +
 Toby Inkster t...@g5n.co.uk wrote:

 I'd be happy to mock-up an interface - perhaps tonight!

 Here are a few test nodes:

 http://buzzword.org.uk/2010/game/test-nodes/london
 http://buzzword.org.uk/2010/game/test-nodes/birmingham
 http://buzzword.org.uk/2010/game/test-nodes/brighton
 http://buzzword.org.uk/2010/game/test-nodes/hove

I've started building a small game world based on a telnet game we all
used to play in the Cambridge computer lab back in the early 90s (this
was maybe a year before Andrew Gower of Jagex/Runescape fame was
there, so I'm not sure if he played ...)

http://buzzword.org.uk/2010/game/client/?Node=http%3A%2F%2Fdrogon.me%2FMain_Street

I left in a link to your test world for fun.  Hopefully I'll be able
to map the game world into RDF without too much trouble ... then think
about adding items and puzzles etc.

I'm also looking for a CYOA game that has its data available.


 [...]





Re: Possible Idea For a Sem Web Based Game?

2010-12-01 Thread Melvin Carvalho
On 20 November 2010 20:13, Toby Inkster t...@g5n.co.uk wrote:
 On Sat, 20 Nov 2010 18:28:24 +0100
 Melvin Carvalho melvincarva...@gmail.com wrote:

 1.  Would each 'location' be a document or a resource?  Web of
 Documents vs Web of Resources?

 2.  Could we use foaf:image and dcterms:description for the game pages?

 3.  How would you model the link on each page?

 Sounds like a pretty awesome idea. I'd personally model it like this:

        <#node1>
                a game:Node ;
                foaf:name "Dark Cave" ;
                foaf:depiction <...> ;
                dcterms:description "..." .

 I'd say that game:Node is not disjoint with foaf:Document. That gives
 you flexibility - in some cases a node might be a page, and in other
 cases you might have several nodes described on the same page.

 Links to other places could be accomplished using:

        <#node1>
                game:north <#node2> ;
                game:south <otherdoc.xhtml#wasteland> ;
                game:east <http://example.net/game#node9> .

I'm now using this ontology to create a game world, and have written a
quick client in PHP/ARC2.

http://drogon.me/

Try jumping in and moving around.  It's all RDF-based.

I think the next thing I need to model is 'items'.

At present I need to work out a way to say a location has an item.

Any thoughts on a predicate?

There are a number of possible implementations, but I think the
location document containing an item should be supported.

Eventually we'll allow players to own an item, or maybe wear/wield
etc. ... but perhaps that's a problem for another day ...
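One possible shape, sketched in the same Turtle style as the examples earlier in the thread (game:contains and game:Item are guesses for illustration, not terms already defined in the ontology):

```turtle
<#node1>
        a game:Node ;
        foaf:name "Dark Cave" ;
        game:contains <#sword1> .

<#sword1>
        a game:Item ;
        foaf:name "Rusty Sword" .
```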

PS maybe I should start a new thread on this topic, something like
'Linked Open Gaming'?


 The description itself would have more detailed descriptions of the
 directions, like "To the south lies a desolate wasteland." Directions
 you'd want would probably be the eight compass points, plus up, down,
 inside, outside.

 Each node should probably also have co-ordinates (not in WGS84, but a
 made-up co-ordinate system), along the lines of:

        <#node1>
                game:latitude 123 ;
                game:longitude -45 .

 This would not be used for gameplay, but to aid authoring new nodes.
 You'd want to have your north triple link to a node that you could
 plausibly reach by going a short distance north.

 I'm not sure how the rendering would work, but perhaps it's easy
 enough in RDFa once we have a model.

 I'd be happy to mock-up an interface - perhaps tonight!

 --
 Toby A Inkster
 mailto:m...@tobyinkster.co.uk
 http://tobyinkster.co.uk





Re: Possible Idea For a Sem Web Based Game?

2010-12-01 Thread Melvin Carvalho
On 2 December 2010 01:13, Toby Inkster t...@g5n.co.uk wrote:
 On Wed, 1 Dec 2010 23:06:42 +0100
 Melvin Carvalho melvincarva...@gmail.com wrote:

 I think the next thing I need to model is 'items'.

 At present need to work out a way to say a location has an item.

 Perhaps model it the other direction?

        <item22> game:initial_position <node394> .

I was thinking more along the lines of:

Location x has
  item 1
  item 2
  player 1
  player 2

With a trusted Agent (dungeon master) adding them to a copy of the game world.

The DM is allowed to update the locations via SPARQL INSERT and
DELETE, contains the game logic, and interacts with players.

In this way you can have one or more DMs given access to administer the
worlds; the best DMs would become 'resident' in the game world.

Agree, it's not the only way to model it, but I like the idea of a
file based solution mediated by agents.

Make sense?
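A sketch of the kind of update such a DM agent might issue (the game: namespace URI and the item/location names are placeholders, not from the actual ontology):

```sparql
PREFIX game: <http://example.org/game#>   # placeholder namespace

# Move item22 from node394 to node395 in the DM's copy of the world
DELETE { <item22> game:position <node394> }
INSERT { <item22> game:position <node395> }
WHERE  { <item22> game:position <node394> }
```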


 --
 Toby A Inkster
 mailto:m...@tobyinkster.co.uk
 http://tobyinkster.co.uk





Re: Possible Idea For a Sem Web Based Game?

2010-12-14 Thread Melvin Carvalho
On 14 December 2010 22:21, Pierre-Antoine Champin
pierre-antoine.cham...@liris.cnrs.fr wrote:
 Hi,

 this is fun, but we have to ask ourselves: what is the added value of
 RDF/sem-web/linked-data here?
 What does http://drogon.me/ have that wouldn't be possible with HTML+PHP?

To me the Web, particularly the Sem Web, is a universal space whose key
advantage is interoperability.

So, each world can interop with similar worlds.

Also worlds can operate with other parts of the Semantic Web Space.  I
use the acronym SEMANTIC to describe key areas:

Social
Entertainment
Markets
Access
Nearby services
Trust
Information management
Currencies

So a game can be social, have trading with virtual currencies and
markets, you can interact with a personal or public web of trust, with
existing information or things in the real world in your locality (eg
augmented reality), using web standards.

Granted each area on the list is still in an embryonic phase.  But
this is a level of interop simply not available in other systems.

We've seen linking of basic social and trust in PHP+HTML (Facebook)
and social and entertainment (Zynga) get some traction.  But when we
have interop across all areas we'll have a much more powerful
system.

 Don't get me wrong, I think those ideas are great, and kudos to you guys for
 turning them into code so quickly!

 My two cents on this question:

 1/ linking to real world data is definitely an interesting track, because
 this leverages existing linked data for the purpose of the game

Yes, agree, leverage interop.


 2/ another way to use linked data principles is that the game can be
 distributed, even more so than an HTML-based game.

Exactly.


 I imagine that every character, place, item... could have its own RDF
 description, linking to each other. A triple between two objects (X is
 located at Y, X owns Z...) is considered true only if both the subject and
 the object claim it.

 This implies that the RDF files are hosted by smart servers that will
 allow updates by anybody, but under certain conditions.

You don't need smart servers, just socially aware cloud storage.  Flat
files are fine, you can let Agents do all the middleware.

http://www.w3.org/DesignIssues/CloudStorage.html


 For example, a place will acknowledge that it contains a person only if the
 person claims to be in that place, and only there.

This is game logic.  It need not reside on a server.


 The protocol might be tricky to design for more complex things like
 transactions. I imagine that an item would change its owner only after
 checking that both the old and the new owner explicitly agree on the
 transaction

  <#me> game:agreesOn [
    a game:Transaction ;
    game:give some:sword ;
    game:receive some:money ;
  ]

I'm working on an economic aspect.  This is an interesting proposal on
transactions and contracts:

http://iang.org/papers/ricardian_contract.html

I have reasonable confidence we can introduce a sophisticated economy
that can be leveraged by all sem web projects, probably before end of
next year.


 Plus, the buyer would have to trust the sword not to cheat on them and return
 to its previous owner without notice...

 Fights will probably be even trickier... But I think the idea is worth
 exploring...

There are many ways to model this; again, agents can handle this.

Traditional architecture is

client -- middleware -- data store

Web oriented architecture is more flexible and can have, in addition:

client -- data store
client -- agent -- data store
client -- data store -- agent

With trust and PKI regulating actions.  Of course we see why WebID is
important here too.


  pa


 On 12/02/2010 01:20 AM, Melvin Carvalho wrote:

 On 2 December 2010 01:13, Toby Inkster t...@g5n.co.uk wrote:

 On Wed, 1 Dec 2010 23:06:42 +0100
 Melvin Carvalho melvincarva...@gmail.com wrote:

 I think the next thing I need to model is 'items'.

 At present need to work out a way to say a location has an item.

 Perhaps model it the other direction?

        item22 game:initial_position node394 .

 I was thinking more along the lines of:

 Location x has
   item 1
   item 2
   player 1
   player 2

 With a trusted Agent(dungeon master) adding them to a copy of the game
 world.

 The DM is allowed to sparql update the locations via insert and
 delete, contains the game logic, and interacts with players.

 In this way you can have 1 or more DM's given access to administer the
 worlds, the best DMs would become 'resident' in the game world.

 Agree, it's not the only way to model it, but I like the idea of a
 file based solution mediated by agents.

 Make sense?


 --
 Toby A Inkster
 mailto:m...@tobyinkster.co.uk
 http://tobyinkster.co.uk








Re: Eating your own dog food and inviting others to dinner

2010-12-14 Thread Melvin Carvalho
On 14 December 2010 16:59, ProjectParadigm-ICT-Program 
metadataport...@yahoo.com wrote:

 Eating your own dog food and inviting others to dinner is not the exclusive
 domain of software engineers only, as I discovered when trying to come up
 with a new paradigm for building online communities with social search
 capabilities for selected sectors industry or civil society.

 So it got me thinking about linked data and the semantic web, based on the
 contents I have seen in emails from the past two years from the two mailing
 lists from the W3C to which I subscribe and read every single day.

 One of the biggest problems we seem to have in getting linked data and the
 semantic web to the main public is that we (i.e. its promoters,
 aficionados, gurus, developers and dedicated users) are scattered all
 over the place on the internet.

 Has anyone ever given thought to creating a portal site, which combines
 some features of LinkedIn.com and Arxiv.org with online directories, product
 exhibits and a social search engine?

 And of course there would be the links to everything.

 And the whole darn thing would be an excellent launching pad for new
 start-ups and information hub for angel and corporate investors to hunt for
 new projects to fund.

 Build it and they will come.

 Is there any way we can get a collection of interested professionals,
 existing groups and initiatives together to dream up the design and blue
 print for such a thing and then build it?


I think I've heard it discussed before.

I think elgg is a decent choice for a small to medium sized community:

http://elgg.org/

It's easy to get up and running, and there are plugins for WebID etc.  Getting
people to sign up and to develop is always the tricky thing, though.



 Milton Ponson
 GSM: +297 747 8280
 PO Box 1154, Oranjestad
 Aruba, Dutch Caribbean
 Project Paradigm: A structured approach to bringing the tools for
 sustainable development to all stakeholders worldwide by creating ICT
 tools for NGOs worldwide and: providing online access to web sites and
 repositories of data and information for sustainable development

 This email and any files transmitted with it are confidential and intended
 solely for the use of the individual or entity to whom they are addressed.
 If you have received this email in error please notify the system manager.
 This message contains confidential information and is intended only for the
 individual named. If you are not the named addressee you should not
 disseminate, distribute or copy this e-mail.




Re: Possible Idea For a Sem Web Based Game?

2010-12-16 Thread Melvin Carvalho
On 15 December 2010 09:39, Pierre-Antoine Champin
swlists-040...@champin.net wrote:
 Melvin,

 (sorry to the others, I used the wrong address to post to the mailing lists,
 so my previous message didn't get through)

 you wrote:
 You don't need smart servers, just socially aware cloud storage.  Flat
 files are fine, you can let Agents do all the middleware.

 ok, I shouldn't have used the term 'server'; I was not considering
 cloud-storage (yet)...

 It does not really change my point, though: if you only trust a single agent
 (dungeon master) to manage game-data and enforce game logic, you end up
 with a rather centralized system.

You can trust multiple agents.


 On the other hand, distributing the game logic is harder:

Harder but more fun!

 - how do different agents maintain consistency of the game?

Rules and game logic.  There are a number of ways; one is simply to
encapsulate game logic in the agent code.

 - how do you trust a newly discovered agent?

Web of Trust.  This should be a sem web scale service.

 - how do you know that several agents are not colluding to cheat?

You don't know that, but distributed systems have a good track record
of fault tolerance.


 But obviously, I merely scratched the surface, while you seem to have
 clearer ideas on the subject... :) -- thanks for the links by the way.

 I'll keep an eye on that.

I've summarized some of the links in this thread under the concept
'Linked Open Gaming':

http://linkedgaming.org/

If you have a game world or client, I'll add it to the list.  Note that
the ontology now has items.


  pa

 On 12/15/2010 12:39 AM, Melvin Carvalho wrote:

 On 14 December 2010 22:21, Pierre-Antoine Champin
 pierre-antoine.cham...@liris.cnrs.fr  wrote:

 Hi,

 this is fun, but we have to ask ourselves: what is the added value of
 RDF/sem-web/linked-data here?
 What does http://drogon.me/ have that wouldn't be possible with HTML+PHP?

 To me the Web, particularly the Sem Web is a universal space whose key
 advantage is interoperability.

 So, each world can interop with similar worlds.

 Also worlds can operate with other parts of the Semantic Web Space.  I
 use the acronym SEMANTIC to describe key areas:

 Social
 Entertainment
 Markets
 Access
 Nearby services
 Trust
 Information management
 Currencies

 So a game can be social, have trading with virtual currencies and
 markets, you can interact with a personal or public web of trust, with
 existing information or things in the real world in your locality (eg
 augmented reality), using web standards.

 Granted each area on the list is still in an embryonic phase.  But
 this is a level of interop simply not available in other systems.

 We've seen linking of basic social and trust in PHP+HTML (facebook)
 and social and entertainment (zynga) get some traction.  But when we
 have interop across all areas we'll have a that much more powerful
 system.

 Don't get me wrong, I think those ideas is great, and kudos to you guys
 for
 turning them into code so quickly!

 My two cents on this question:

 1/ linking to real world data is definitely an interesting track, because
 this leverages existing linked data for the purpose of the game

 Yes, agree, leverage interop.


 2/ another way to use linked data principles is that the game can be
 distributed, even more so than an HTML-based game.

 Exactly.


 I imagine that every character, place, item... could have its own RDF
 description, linking to each other. A triple between two objects (X is
 located at Y, X owns Z...) is considered true only if both the subject
 and
 the object claim it.

 This implies that the RDF files are hosted by smart servers that will
 allow updates by anybody, but under certain conditions.

 You dont need smart servers, just socially aware cloud storage.  Flat
 files are fine, you can let Agents do all the middleware.

 http://www.w3.org/DesignIssues/CloudStorage.html


 For example, a place will acknowledge that it contains a person only if
 the
 person claims to be in that place, and only there.

 This is game logic.  It need not reside on a server.


 The protocol might be tricky to design for more complex things like
 transactions. I imagine that an item would change its owner only after
 checking that both the old and the new owner explictly agree on the
 transaction

  #me  game:agreesOn [
    a game:Transaction ;
    game:give some:sword ;
    game:receive some:money ;
  ]

 Im working on an economic aspect.  This is an interesting proposal on
 transactions and contracts:

 http://iang.org/papers/ricardian_contract.html

 I have reasonable confidence we can introduce a sophisticated economy
 that can be leveraged by all sem web projects, probably before end of
 next year.


 Plus, the buyer would have to trust the sword not cheat on them and
 return
 to its previous owner without notice...

 Fights will probably be even trickier... But I think the idea is worth
 exploring...

 Many ways to model this, again agents can handle

Re: [ANN] LinkedMarkMail

2011-02-07 Thread Melvin Carvalho
2011/2/7 Kingsley Idehen kide...@openlinksw.com:
 On 2/7/11 4:59 PM, Sergio Fernández wrote:

 Hi,

 I'd like to announce the alpha version of LinkedMarkMail [1], a
 service for providing Linked Data from the mailing lists' archives
 indexed by MarkMail [2]. Actually an old idea discussed here some time
 ago [3].

 The service has been developed in the context of the SWAML project [4].

 Further details at [5]. As usual, all feedback would be welcomed!

 Kind regards,

 [1] http://linkedmarkmail.wikier.org/
 [2] http://markmail.org/
 [3] http://osdir.com/ml/web.semantic.linking-open-data/2008-03/msg00093.html
 [4] http://swaml.berlios.de/
 [5] http://www.wikier.org/blog/linkedmarkmail

 Cool!

 Better late than never, as I said earlier :-)

 BTW - how about Linked Open Email Data (LOED)? Other email archive spaces
 will follow!!

Brilliant, yes definitely, well done.

Rated emails too, user reputation etc.


 --

 Regards,

 Kingsley Idehen
 President & CEO
 OpenLink Software
 Web: http://www.openlinksw.com
 Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca: kidehen








Re: Proposal to assess the quality of Linked Data sources

2011-02-24 Thread Melvin Carvalho
On 24 February 2011 19:00, Annika Flemming annika.flemm...@gmx.de wrote:
 Hi,

 two months ago I presented the findings of my diploma thesis in this mailing
 list. The aim of my thesis is to draw up a set of criteria to assess the
 quality of Linked Data sources. My findings included 11 criteria, each
 comprising a set of so-called indicators. These indicators constitute a
 measurable aspect of a criterion and, thus, allow for the assessment of the
 quality of a data source w.r.t the criteria.

 Since then, I developed a valuation system to assess the quality of a data
 source based on these criteria and indicators. One part of the valuation
 system is a generic formalism to represent quality assessment formally. The
 second part is a proposal of actual valuation methods for the indicators.
 My proposal can be found here:
 http://www2.informatik.hu-berlin.de/~flemming/Proposal.pdf

Nice paper.  One idea: how about adding CORS?

http://www.w3.org/wiki/CORS_Enabled

[reproduced for convenience] ...

What is this about?

CORS is a specification that enables true open access across domain boundaries.

Why is this important?

Currently, client-side scripts (e.g., Javascript) are prevented from
accessing much of the Web of Linked Data due to same origin
restrictions implemented in all major Web browsers.

While enabling such access is important for all data, it is especially
important for Linked Open Data and related services; without this, our
data simply is not open to all clients.

If you have public data which doesn't require cookie- or session-based
authentication to see, then please consider opening it up for
universal javascript/browser access.

For CORS access to anything other than simple, non-auth-protected
resources, please see the full write-up on Cross-Origin Request
Security.

How can I participate?

To give Javascript clients basic access to your resources requires
adding one HTTP Response Header, namely:

 Access-Control-Allow-Origin: *

This is compatible with both XHR (XMLHttpRequest) and XDR
(XDomainRequest), and is supported by all the major Web browsers.
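As a minimal sketch of the header-adding step described above (the function name and dict-based header representation are illustrative, not tied to any particular server framework):

```python
def add_cors_headers(headers):
    """Return a copy of the response headers with the CORS header
    that grants universal script access, as suggested above."""
    out = dict(headers)
    out["Access-Control-Allow-Origin"] = "*"
    return out

# e.g. for a Linked Data response:
resp = add_cors_headers({"Content-Type": "application/rdf+xml"})
print(resp["Access-Control-Allow-Origin"])
```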


 I'm posting this proposal in order to evaluate my findings. Therefore, any
 feedback will be very welcome!

 Thanks again to everyone participating,
 Annika





Re: Linked Data, Blank Nodes and Graph Names

2011-04-10 Thread Melvin Carvalho
On 7 April 2011 19:45, Nathan nat...@webr3.org wrote:
 Hi All,

 To cut a long story short, blank nodes are a bit of a PITA to work with,
 they make data management more complex, newcomers don't get them (unless
 presented as anonymous objects), and they make graph operations much more
 complex than they need be, because although a graph is a set of triples, you
 can't (easily) do basic set operations on non-ground graphs, which
 ultimately filters down to making things such as graph diff, signing,
 equality testing, checking if one graph is a super/sub set of another very
 difficult. Safe to say then, on one side of things Linked Data / RDF would
 be a whole lot simpler without those blank nodes.

 It's probably worth asking then, in a Linked Data + RDF environment:

 - would you be happy to give up blank nodes?

*Disclaimer* I've not participated in an RDF WG or XG, but am a
hobbyist that tries to learn in their spare time; my knowledge is far
from complete.

That said, I'd be happy to give up blank nodes.

One less thing to worry about for the newcomer (my subjective POV),
and also it then means that I can maybe do things like c14n and
signing subgraphs more easily.


 - just the [] syntax?

 - do you always have a name for your graphs? (for instance when published
 on the web, the URL you GET, and when in a store, the ?G of the quad)

 I'm asking because there are multiple things that could be done:

 1) change nothing

 2) remove blank nodes from RDF

 3) create a subset of RDF which doesn't have blank nodes and only deals with
 ground graphs

 4) create a subset of RDF which does have a way of differentiating blank
 nodes from URI-References, where each blank node is named persistently as
 something like ( graph-name , _:b1 ), which would allow the subset to be
 effectively ground so that all the benefits of stable names and set
 operations are maintained for data management, but where also it can be
 converted (one way) to full RDF by removing those persistent names.
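Option 4 could be sketched roughly as follows; the `.well-known/genid` URI pattern is my own assumption for the persistent names, and the regex is deliberately naive (it would also match inside literals):

```python
import re

def skolemize(ntriples, graph_name):
    """Rewrite _:bN blank node labels in an N-Triples string as
    graph-scoped URIs, making the graph effectively ground."""
    def repl(match):
        return "<%s/.well-known/genid/%s>" % (graph_name, match.group(1))
    return re.sub(r"_:([A-Za-z0-9]+)", repl, ntriples)

doc = '_:b1 <http://xmlns.com/foaf/0.1/name> "Alice" .'
print(skolemize(doc, "http://example.org/graph1"))
```

Removing those persistent names again gives back a graph with ordinary blank nodes, which is the one-way conversion to full RDF mentioned above.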

 Generally, this thread perhaps differs from others, by suggesting that
 rather than changing RDF, we could layer on a set of specs which cater for
 all linked data needs, and allow that linked data to be considered as full
 RDF (with existential) when needed.

 It appears to me, that if most people would be prepared to make the trade
 off of losing the [ ] syntax and anonymous objects such that you always had
 a usable name for each thing, and were prepared to modify and upgrade
 tooling to be able to use this not-quite-rdf-but-rdf-compatible thing, then
 we could solve many real problems here, without changing RDF itself.

 That said, it's a trade-off, hence, do the benefits outweigh the cost for
 you?

 Best,

 Nathan





Re: How many instances of foaf:Person are there in the LOD Cloud?

2011-04-13 Thread Melvin Carvalho
On 13 April 2011 10:54, Michael Brunnbauer bru...@netestate.de wrote:

 re

 On Wed, Apr 13, 2011 at 10:15:46AM +0200, Bernard Vatant wrote:
 Just trying to figure what is the size of personal information available as
 LOD vs billions of person profiles stored by Google, Amazon, Facebook,
 LinkedIn, unameit ... in proprietary formats.

 At www.foaf-search.net, we have ca. 3.5 mio instances of foaf:Person.

 The biggest chunk out there is probably livejournal.com with more than 25mio
 users which we cannot index all right now (we have 221090 of them).

 Another big one is hi5.com but the FOAF is quite broken so we don't crawl it.

Gmail at one point was publishing FOAF profiles ... so that's quite a few more.

The Facebook graph is not quite FOAF but certainly machine-readable JSON,
and could easily be transformed to FOAF, so that's another chunk.

There are a few bridges too, such as ones for last.fm, Flickr and Semantic Tweet.

So including bridges I'd guess 250 million; 99% should be alive today,
but that number will fall over time (obviously).


 See also:

 http://www.w3.org/wiki/FoafSites
 http://wiki.foaf-project.org/w/DataSources

 Regards,

 Michael Brunnbauer

 --
 ++  Michael Brunnbauer
 ++  netEstate GmbH
 ++  Geisenhausener Straße 11a
 ++  81379 München
 ++  Tel +49 89 32 19 77 80
 ++  Fax +49 89 32 19 77 89
 ++  E-Mail bru...@netestate.de
 ++  http://www.netestate.de/
 ++
 ++  Sitz: München, HRB Nr.142452 (Handelsregister B München)
 ++  USt-IdNr. DE221033342
 ++  Geschäftsführer: Michael Brunnbauer, Franz Brunnbauer
 ++  Prokurist: Dipl. Kfm. (Univ.) Markus Hendel





Re: How many instances of foaf:Person are there in the LOD Cloud?

2011-04-13 Thread Melvin Carvalho
On 13 April 2011 23:49, Bernard Vatant bernard.vat...@mondeca.com wrote:
 Thanks everybody !

 Could not imagine that this simple question would trigger such an activity.
 Actually my naive quest was to figure how many people had actively
 published, and possibly still maintain a FOAF profile for themselves, vs the
 number of profiles stored and maintained in a proprietary social system, vs
 a profile computed out of their activity on the web for any purpose.
 Browsing all the answers makes me wonder. I was not aware of so many sources
 of FOAF information (to tell the truth a great majority of domains quoted by
 Michael in his 25 top list were totally unknown to me until today). The
 number I had in mind when asking was rather about FOAF profiles actively
 maintained by some primary topic aware of what FOAF is and deliberately
 using it to be present in the social semantic web. I suppose this number
 really represents a microscopic part of the millions announced, but I do not
 know more about it at the end of this day. Except that most FOAF information
 is certainly produced without people subject of the triples even being aware
 of it, or even knowing that FOAF exists at all (supposing they are living,
 real people).
 Actually it's quite easy to produce FOAF out of any social application data
 with an open API. So the millions I read about are simply an image of the
 millions of users of social software using open API, plus the growing number
 of people for which public data is available such as people listed in
 Wikipedia and Freebase.

 So tonight I would turn my question otherwise : Among those millions of FOAF
 profiles, how do I discover those of which primary source is their primary
 topic, expressing herself natively in FOAF, vs the ocean of second-hand
 remashed / remixed information, captured with or without clear approbation
 of their subjects, and eventually released in FOAF syntax in the Cloud ...

I think you can also look out for next-generation FOAF profiles that
have a public key (WebID), have ACLs and allow a read/write Web, e.g.
through SPARQL Update, WebDAV, PushBack etc.

We're starting to see the very first of these emerge, as
standardization progresses in parallel.

This begins to close the loop in terms of a standard way to make a
semantic social collaborative space, with FOAF at the heart, and is
hopefully going to lead to a whole new wave of apps and innovation on
the Web.


 Bernard


 2011/4/13 Kingsley Idehen kide...@openlinksw.com

 On 4/13/11 6:54 AM, Michael Brunnbauer wrote:

 I could not find working bridges for last.fm and flikr but
 semantictweet.com
 is really working again - interesting:-)

 We've always had Sponger Cartridges (bridges) for last.fm and flickr. In
 addition there are cartridges for Crunchbase, Amazon, and many others. Of
 course, the context of Bernard's quest ultimately determines the relevance
 of these data sources :-)

 --

 Regards,

 Kingsley Idehen
 President & CEO
 OpenLink Software
 Web: http://www.openlinksw.com
 Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca: kidehen









 --
 Bernard Vatant
 Senior Consultant
 Vocabulary & Data Integration
 Tel:       +33 (0) 971 488 459
 Mail:     bernard.vat...@mondeca.com
 
 Mondeca
 3, cité Nollez 75018 Paris France
 Web:    http://www.mondeca.com
 Blog:    http://mondeca.wordpress.com
 




Re: For our UK readers

2011-05-24 Thread Melvin Carvalho
On 24 May 2011 20:05, Hogan, Aidan aidan.ho...@deri.org wrote:
  http://who.isthat.org/id/CTB
 
  Have I got the RDF right?
  Not sure foaf is the right thing for this.
  Should there be a blank node somewhere in there?
  Suggestions for improvements welcome.
  Anyone feel up to adding more owl:sameAs ?

 You're missing an end tag:

 </foaf:Person>

 ...also:

 <owl:sameAs>http://dbpedia.org/resource/Andrew_Marr</owl:sameAs>

 ...should be:

 <owl:sameAs rdf:resource="http://dbpedia.org/resource/Andrew_Marr" />

 ...you can add:

 <owl:sameAs rdf:resource="http://rdf.freebase.com/ns/en.andrew_marr" />

 Since you're using relative URIs, might be good to define an xml:base
 for the document.

 Should be returning Content-type: application/rdf+xml (probably just
 need to add .rdf extensions).

 You might want to consider encoding Leigh's email using mbox_sha1sum.
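The mbox_sha1sum suggestion above amounts to hashing the full mailto: URI with SHA-1; a minimal sketch (the example address is made up):

```python
import hashlib

def mbox_sha1sum(email):
    """foaf:mbox_sha1sum is the SHA-1 hex digest of the mailto: URI."""
    return hashlib.sha1(("mailto:" + email).encode("ascii")).hexdigest()

print(mbox_sha1sum("alice@example.org"))
```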

 Also, for some of your other entries, foaf:knows feels a little generic.
 You may want to have a look through this thread on the FOAF dev list:

 http://lists.foaf-project.org/pipermail/foaf-dev/2011-May/010588.html

I was thinking about something similar at the weekend.

You can have a foaf:Group with a number of agents, then make a
statement about an unnamed person in that group.

Maybe that is sufficiently ambiguous to be allowed ... I'm not sure ...


 Cheers,
 Aidan

 P.S., I'm sure that RAFT 2012 would be interested in reading about your
 new Linked Open (Super) Injunctions initiative:

 http://liris.cnrs.fr/~azimmerm/RAFT/afd2011.html

 ...looking forward to seeing LO(S)I on the LOD cloud in the near future.

 -Original Message-
 From: public-lod-requ...@w3.org [mailto:public-lod-requ...@w3.org] On
 Behalf Of Hugh Glaser
 Sent: 24 May 2011 16:31
 To: john.nj.dav...@bt.com
 Cc: public-lod@w3.org
 Subject: Re: For our UK readers


 On 24 May 2011, at 15:39, john.nj.dav...@bt.com
  wrote:

  ID = him
  Bah - scaredy cat! ;-)
 Too right!
 But I have added a bit more information :-)
 And a dbpedia owl:sameAs to http://who.isthat.org/id/FGH.
 Anyone feel up to adding more owl:sameAs ?
 
  -Original Message-
  From: public-lod-requ...@w3.org [mailto:public-lod-requ...@w3.org]
 On
 Behalf Of Hugh Glaser
  Sent: 24 May 2011 15:07
  To: public-lod@w3.org community
  Subject: For our UK readers
 
  http://who.isthat.org/id/CTB
 
  Have I got the RDF right?
  Not sure foaf is the right thing for this.
  Should there be a blank node somewhere in there?
  Suggestions for improvements welcome.
 
  Hugh
 

 --
 Hugh Glaser,
               Intelligence, Agents, Multimedia
               School of Electronics and Computer Science,
               University of Southampton,
               Southampton SO17 1BJ
 Work: +44 23 8059 3670, Fax: +44 23 8059 3045
 Mobile: +44 75 9533 4155 , Home: +44 23 8061 5652
 http://www.ecs.soton.ac.uk/~hg/








Re: Using Facebook Data Objects to illuminate Linked Data add-on re. structured data

2011-06-15 Thread Melvin Carvalho
On 12 June 2011 23:05, Kingsley Idehen kide...@openlinksw.com wrote:
 All,

 Facebook offers a data space (of the silo variety). Every Object has an
 Address (URL) from which you can access its actual Representation in JSON
 format.

 Example using the URL: http://graph.facebook.com/kidehen:

 {
   "id": "605980750",
   "name": "Kingsley Uyi Idehen",
   "first_name": "Kingsley",
   "middle_name": "Uyi",
   "last_name": "Idehen",
   "link": "https://www.facebook.com/kidehen",
   "username": "kidehen",
   "gender": "male",
   "locale": "en_US"
 }
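Since the object shape above is so regular, mapping it to FOAF is straightforward; a hedged sketch (the property mapping and subject URI scheme are my own illustration, not anything Facebook publishes):

```python
import json

FOAF = "http://xmlns.com/foaf/0.1/"
# Illustrative mapping from Graph API keys to FOAF properties
MAPPING = {"name": "name", "first_name": "givenName",
           "last_name": "familyName", "gender": "gender"}

def fb_to_foaf(doc):
    """Return N-Triples-style statements for the mapped JSON keys."""
    subj = "<https://www.facebook.com/%s#this>" % doc["username"]
    return ['%s <%s%s> "%s" .' % (subj, FOAF, prop, doc[key])
            for key, prop in MAPPING.items() if key in doc]

data = json.loads('{"username": "kidehen", "name": "Kingsley Uyi Idehen"}')
for triple in fb_to_foaf(data):
    print(triple)
```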


If you think that's good, try this!

http://graph.facebook.com/kidehen?metadata=1

{
   "id": "605980750",
   "name": "Kingsley Uyi Idehen",
   "first_name": "Kingsley",
   "middle_name": "Uyi",
   "last_name": "Idehen",
   "link": "http://www.facebook.com/kidehen",
   "username": "kidehen",
   "gender": "male",
   "locale": "en_US",
   "metadata": {
      "connections": {
         "home": "http://graph.facebook.com/kidehen/home",
         "feed": "http://graph.facebook.com/kidehen/feed",
         "friends": "http://graph.facebook.com/kidehen/friends",
         "family": "http://graph.facebook.com/kidehen/family",
         "payments": "http://graph.facebook.com/kidehen/payments",
         "activities": "http://graph.facebook.com/kidehen/activities",
         "interests": "http://graph.facebook.com/kidehen/interests",
         "music": "http://graph.facebook.com/kidehen/music",
         "books": "http://graph.facebook.com/kidehen/books",
         "movies": "http://graph.facebook.com/kidehen/movies",
         "television": "http://graph.facebook.com/kidehen/television",
         "games": "http://graph.facebook.com/kidehen/games",
         "likes": "http://graph.facebook.com/kidehen/likes",
         "posts": "http://graph.facebook.com/kidehen/posts",
         "tagged": "http://graph.facebook.com/kidehen/tagged",
         "statuses": "http://graph.facebook.com/kidehen/statuses",
         "links": "http://graph.facebook.com/kidehen/links",
         "notes": "http://graph.facebook.com/kidehen/notes",
         "photos": "http://graph.facebook.com/kidehen/photos",
         "albums": "http://graph.facebook.com/kidehen/albums",
         "events": "http://graph.facebook.com/kidehen/events",
         "groups": "http://graph.facebook.com/kidehen/groups",
         "videos": "http://graph.facebook.com/kidehen/videos",
         "picture": "http://graph.facebook.com/kidehen/picture",
         "inbox": "http://graph.facebook.com/kidehen/inbox",
         "outbox": "http://graph.facebook.com/kidehen/outbox",
         "updates": "http://graph.facebook.com/kidehen/updates",
         "accounts": "http://graph.facebook.com/kidehen/accounts",
         "checkins": "http://graph.facebook.com/kidehen/checkins",
         "apprequests": "http://graph.facebook.com/kidehen/apprequests",
         "friendlists": "http://graph.facebook.com/kidehen/friendlists",
         "permissions": "http://graph.facebook.com/kidehen/permissions",
         "notifications": "http://graph.facebook.com/kidehen/notifications"
      },
      "fields": [
         {
            "name": "id",
            "description": "The user's Facebook ID. No `access_token` required. `string`."
         },
         {
            "name": "name",
            "description": "The user's full name. No `access_token` required. `string`."
         },
         {
            "name": "first_name",
            "description": "The user's first name. No `access_token` required. `string`."
         },
         {
            "name": "middle_name",
            "description": "The user's middle name. No `access_token` required. `string`."
         },
         {
            "name": "last_name",
            "description": "The user's last name. No `access_token` required. `string`."
         },
         {
            "name": "gender",
            "description": "The user's gender. No `access_token` required. `string`."
         },
         {
            "name": "locale",
            "description": "The user's locale. No `access_token` required. `string` containing the ISO Language Code and ISO Country Code."
         },
         {
            "name": "languages",
            "description": "The user's languages. No `access_token` required. `array` of objects containing language `id` and `name`."
         },
         {
            "name": "link",
            "description": "The URL of the profile for the user on Facebook. Requires `access_token`. `string` containing a valid URL."
         },
         {
            "name": "username",
            "description": "The user's Facebook username. No `access_token` required. `string`."
         },
         {
            "name": "third_party_id",
            "description": "An anonymous, but unique identifier for the user. Requires `access_token`. `string`."
         },
         {
            "name": "timezone",
            "description": "The user's timezone offset from UTC. Available only for the current user. `number`."
         },
         {
            "name": "updated_time",
            "description": "The last time the user's profile was updated. Requires `access_token`. `string` containing a IETF RFC 3339 datetime."
         },
         {
            "name": "verified",

Re: Using Facebook Data Objects to illuminate Linked Data add-on re. structured data

2011-06-15 Thread Melvin Carvalho
On 13 June 2011 10:29, Kingsley Idehen kide...@openlinksw.com wrote:
 On 6/13/11 8:46 AM, Richard Cyganiak wrote:

 On 12 Jun 2011, at 22:05, Kingsley Idehen wrote:

 Example using the URL: http://graph.facebook.com/kidehen:

 {
   "id": "605980750",
   "name": "Kingsley Uyi Idehen",
   "first_name": "Kingsley",
   "middle_name": "Uyi",
   "last_name": "Idehen",
   "link": "https://www.facebook.com/kidehen",
   "username": "kidehen",
   "gender": "male",
   "locale": "en_US"
 }

 Ok so you got this JSON from here: http://graph.facebook.com/kidehen

 Then you go on to say that it would be much better if it said:

    "id": "https://www.facebook.com/kidehen#this"

 instead of:

    "id": "605980750"

 But given that you can't get any JSON from
 https://www.facebook.com/kidehen#this, wouldn't it be better if it said:

    "id": "http://graph.facebook.com/kidehen"

 or even, as Glenn proposed:

    "id": "http://graph.facebook.com/605980750"

 Both of these resolve and produce JSON.

 I should have said: http://graph.facebook.com/605980750#this :-)

Chatted with Joe and Nathan about this some time back.

I think there was an argument that said you can get away with not using
the #this ... I'll try and dig up the notes if you would like a pointer
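Part of why the #this question is even debatable: HTTP clients never send the fragment to the server, so a hash URI can name a thing while the fragment-free URL names the document that describes it. A small illustrative sketch (Python; the function name is mine, not from the thread):

```python
from urllib.parse import urldefrag

def request_target(uri):
    """Split a hash URI into the URL actually fetched over HTTP
    and the fragment that stays client-side."""
    doc, frag = urldefrag(uri)
    return doc, frag

doc, frag = request_target("https://www.facebook.com/kidehen#this")
print(doc)   # https://www.facebook.com/kidehen
print(frag)  # this
```

So `.../kidehen#this` and `.../kidehen` trigger the same HTTP request; the distinction lives entirely in the identifier.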


  So if I wanted to refer to you in my app, it seems like these two would
 be quite handy identifiers, and superior to the one you proposed, no?

 Yes, as per above.


 Kingsley

 Best,
 Richard



 Some observations:

 The id attribute has the value 605980750; this value means little on
 its own outside Facebook's data space.

 Now imagine we tweaked this graph like so:


 {
   "id": "https://www.facebook.com/kidehen#this",
   "name": "Kingsley Uyi Idehen",
   "first_name": "Kingsley",
   "middle_name": "Uyi",
   "last_name": "Idehen",
   "link": "https://www.facebook.com/kidehen",
   "username": "kidehen",
   "gender": "male",
   "locale": "en_US"
 }

 All of a sudden, I've used an HTTP scheme based hyperlink to introduce a
 tiny degree of introspection.

 I repeat this exercise for the attributes (i.e., naming them using HTTP
 scheme URIs), and likewise for values best served by HTTP scheme URIs,
 boundlessly extending the object above, courtesy of the InterWeb.

 Even if Facebook doesn't buy into my world view re. data objects, my
 worldview remains satisfied since I can ingest the FB data objects and then
 endow them with the fidelity I seek via use of URI based Names.

 Example Linked Data Resource URL:
 http://linkeddata.uriburner.com/about/html/http://linkeddata.uriburner.com/about/id/entity/http/graph.facebook.com/kidehen
 .

 Example Object Name from My Data Space:
 http://linkeddata.uriburner.com/about/id/entity/http/graph.facebook.com/kidehen
 .

 A little structured data goes a long way to making what we all seek
 happen. Facebook, Google, Microsoft, Yahoo! etc.. have committed to
 producing structured data. This commitment is massive and it should be
 celebrated since it makes life much easier for everyone that's interested in
 Linked Data or the broader Semantic Web vision. They aren't claiming to
 deliver anything more than structured data. At this time, their fundamental
 goal is to leave Semantic Fidelity matters to those who are interested in
 such pursuits, appropriately skilled, and so motivated.


 --

 Regards,

 Kingsley Idehen
 President & CEO
 OpenLink Software
 Web: http://www.openlinksw.com
 Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca: kidehen










 --

 Regards,

 Kingsley Idehen
 President & CEO
 OpenLink Software
 Web: http://www.openlinksw.com
 Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca: kidehen










Re: WebID and pets -- was: Squaring the HTTP-range-14 circle [was Re: Schema.org in RDF ...]

2011-06-19 Thread Melvin Carvalho
On 19 June 2011 20:42, Henry Story henry.st...@bblfish.net wrote:

 On 19 Jun 2011, at 20:15, Danny Ayers wrote:

 Only personal Henry, but have you tried the Myers-Briggs thing - I
 think you used to be classic INTP/INTF - but once you got WebID in
 your sails it's very different. These things don't really allow for
 change.

 Is there a page where I can find this out in one click? Looks like those 
 pages ask all kinds of questions that require detailed and complicated 
 answers. I am surprised anyone ever answers those things. It's certainly more 
 complex than the Object/Document distinction ;-)

Myers Briggs is based on the Jungian analysis of mythology and
personality types, with a few additions.  Myths being public dreams,
and dreams being private myths.

The personality types are the lens through which we interpret the inner
and outer universal symbols.  e.g. Intuitively / Analytically / Senses
/ Feeling.  But the symbols themselves are often the more fascinating
parts.

An interesting parallel here is the relation between Jung's archetypes of
the unconscious and WebID.  Both in your dreams and in mythology, you
have symbols which are metaphors that reference some universal
concept.  WebID is of course a reference to the self (foaf:Person).

As many of the myths we live with today are 100s of years out of date,
and people are searching for something new, perhaps WebID can become a
modern symbol, to determine or even evangelize the new personality
type of society, post information revolution :)


 Only slightly off-topic, very relevant here, need to pin down WebID in
 a sense my dogs can understand.

 Ok. So you need to give each of your dogs and cats a WebID-enabled RFID chip 
 that can publish webids to other animals with similarly equipped chips when 
 they sniff them. From the frequency and length of sniffs you can work out 
 the quality of the relationships. On coming home for food, this data could be 
 uploaded automatically to your web server to their foaf file. These 
 relationships could then be used to allow their pals access to parts of your 
 house. For example good friends of your dog, could get a free meal once a 
 week. You could also use that to tie up friendship with their owners, by the 
 master-of-pet relationships, and give them special ability to tag their pet 
 photos. Masters of my dogs friends could be potential friends. If you get 
 these pieces working right you could set up a business with a strong viral 
 potential, perhaps the strongest on the net.

 Here to make my point:




 The Myers-Briggs thing is intuitively rubbish. But with only one or
 two posts in the ground, it does seem you can extrapolate.

 On 19 June 2011 19:52, Henry Story henry.st...@bblfish.net wrote:

 On 19 Jun 2011, at 19:44, Danny Ayers wrote:


 I am of the view that this has been discussed to death, and that any 
 mailing list that discusses this is short of real things to do.

 I confess to talking bollocks when I should be coding.

 yeah, me too. Though now you folks managed to get me interested in this 
 problem! (sigh)

 Henry

 Social Web Architect
 http://bblfish.net/





 --
 http://danny.ayers.name

 Social Web Architect
 http://bblfish.net/






Re: Great news! sears.com and kmart.com adopt GoodRelations in RDFa!

2011-07-04 Thread Melvin Carvalho
On 4 July 2011 22:15, Martin Hepp martin.h...@ebusiness-unibw.org wrote:
 Dear all:

 sears.com and kmart.com, together the third largest discount store chain 
 in the world, have just turned on GoodRelations support in RDFa! This 
 complements the already impressive list of major adopters of GoodRelations 
 semantic SEO technology, following BestBuy, overstock.com, and CSN stores!

 a) sears.com - ca. 15 Million items (ca. 0.5 billion triples)
 Example: http://www.sears.com/shc/s/p_10153_12605_07180844000P
 Sitemap: http://www.sears.com/Sitemap_Index.xml

 b) kmart.com - ca. 250,000 items
 Example: 
  http://www.kmart.com/shc/s/p_10151_10104_024W434912980001P?prdNo=1&blockNo=1&blockType=G1
 Sitemap: http://www.kmart.com/Sitemap_Index.xml

 Plus, sears.com also uses www.productontology.org classes for describing the 
 items.

 This is a great move in terms of leading edge use of Semantic Web technology 
 for e-commerce.

Awesome news!

Yet more evidence that persistence and quality in the semantic web
space will pay off!


 Best wishes

 Martin Hepp

 PS: If you have contact details of the respective developers, please get them 
 in contact with me; I'd have some free advice on how to improve the markup 
 and fix minor issues!

 
 martin hepp
 e-business & web science research group
 universitaet der bundeswehr muenchen

 e-mail:  h...@ebusiness-unibw.org
 phone:   +49-(0)89-6004-4217
 fax:     +49-(0)89-6004-4620
 www:     http://www.unibw.de/ebusiness/ (group)
        http://www.heppnetz.de/ (personal)
 skype:   mfhepp
 twitter: mfhepp

 Check out GoodRelations for E-Commerce on the Web of Linked Data!
 =
 * Project Main Page: http://purl.org/goodrelations/






Re: Facebook Linked Data

2011-09-23 Thread Melvin Carvalho
On 23 September 2011 14:09, Jesse Weaver weav...@rpi.edu wrote:
 APOLOGIES FOR CROSS-POSTING

 I would like to bring to subscribers' attention that Facebook now
 supports RDF with Linked Data URIs from its Graph API.  The RDF is in
 Turtle syntax, and all of the HTTP(S) URIs in the RDF are dereferenceable
 in accordance with httpRange-14.  Please take some time to check it out.

 If you have a vanity URL (mine is jesserweaver), you can get RDF about you:

 curl -H 'Accept: text/turtle' http://graph.facebook.com/vanity-url
 curl -H 'Accept: text/turtle' http://graph.facebook.com/jesserweaver
 If you don't have a vanity URL but know your Facebook ID, you can use
 that instead (which is actually the fundamental method).

 curl -H 'Accept: text/turtle' http://graph.facebook.com/facebook-id
 curl -H 'Accept: text/turtle' http://graph.facebook.com/1340421292
 From there, try dereferencing URIs in the Turtle.  Have fun!
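The curl calls above can be rephrased in Python; this is only a sketch of the content negotiation involved — it builds the request (Accept: text/turtle) without dereferencing anything, so it works even offline:

```python
import urllib.request

def graph_request(vanity_or_id):
    # Equivalent of: curl -H 'Accept: text/turtle' http://graph.facebook.com/<id>
    # Only constructs the request; actually opening it is left to the reader.
    return urllib.request.Request(
        "http://graph.facebook.com/" + vanity_or_id,
        headers={"Accept": "text/turtle"})

req = graph_request("jesserweaver")
print(req.full_url)              # http://graph.facebook.com/jesserweaver
print(req.get_header("Accept"))  # text/turtle
```

Passing the request to `urllib.request.urlopen` would (at the time of the announcement) return the Turtle representation rather than JSON.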

WOW!

One small step for a facebook.  One giant leap for the (semantic) Web!


 Jesse Weaver
 Ph.D. Student, Patroon Fellow
 Tetherless World Constellation
 Rensselaer Polytechnic Institute
 http://www.cs.rpi.edu/~weavej3/









Re: [foaf-protocols] Solving Real Problems using Linked Data: InterWeb scale Verifiable Identity via WebID

2011-11-02 Thread Melvin Carvalho
On 2 November 2011 16:27, Kingsley Idehen kide...@openlinksw.com wrote:
 All,

 Here are links to recent G+ posts that showcase use of URIs, LinkedData, and
 WebID applied to the thorny issue of verifiable identity at InterWeb scale:

 1. http://goo.gl/AcYWQ -- using Facebook as an Identity Provider (IdP) for
 the WebID verification protocol
 2. http://goo.gl/ouXeF -- ditto using LinkedIn
 3. http://goo.gl/9jjxG -- ditto using WordPress and other AtomPub compliant
 blog platforms
 4. http://goo.gl/FFsjv -- ditto using Twitter.

 What does this all mean?
 Anyone with a Facebook, Twitter, LinkedIn, or Blog Platform account can now
 do the following with ease:

 1. generate a self signed X.509 certificate with a WebID watermark
 2. persist certificate to a keystore/keychain provided by host operating
 system of browser
 3. persist certificate fingerprint (MD5 or SHA1) to Web data spaces such as:
 Facebook, Twitter, LinkedIn, Blogs etc..
 4. as part of the WebID verification (authentication) protocol, lookup the
 fingerprints in the aforementioned data spaces.
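A minimal sketch of the fingerprint comparison in step 4, assuming Python; the dummy bytes stand in for a real DER-encoded X.509 certificate, and the helper name is mine:

```python
import hashlib

def fingerprint(der_bytes, algo="sha1"):
    """Colon-separated hex fingerprint, the form typically published."""
    digest = hashlib.new(algo, der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Dummy bytes stand in for a real DER-encoded X.509 certificate.
cert = b"not-a-real-certificate"
published = fingerprint(cert, "sha1")   # step 3: persisted to the data space

# Step 4: recompute from the certificate presented during the TLS
# handshake and compare with the fingerprint looked up remotely.
assert fingerprint(cert, "sha1") == published
```

The same comparison works with `algo="md5"`; the lookup side simply needs to know which digest was published.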

 As a result of the above, it becomes much easier to achieve a global scale
 Read-Write Web since WebID granularity works for:

 1. access control lists (acls)
 2. signed emails via s/mime
 3. semantically enhanced notification services that only require a WebID
 e.g., befriending, resource share notifications etc..

+1

Awesome work, Kingsley!


 Related:

 1. http://www.slideshare.net/rumito/solving-real-problems-using-linkeddata
 -- old presentation about solving real problems courtesy of Linked Data
 (note: FOAF+SSL is now the WebID protocol).
 2. http://webid.info -- additional information about WebID.
 3. http://www.w3.org/wiki/WebID -- WebID Wiki.
 4. http://www.w3.org/community/rww/ -- Read-Write Web .

 --

 Regards,

 Kingsley Idehen
 President & CEO
 OpenLink Software
 Company Web: http://www.openlinksw.com
 Personal Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca handle: kidehen
 Google+ Profile: https://plus.google.com/112399767740508618350/about
 LinkedIn Profile: http://www.linkedin.com/in/kidehen







 ___
 foaf-protocols mailing list
 foaf-protoc...@lists.foaf-project.org
 http://lists.foaf-project.org/mailman/listinfo/foaf-protocols




Re: ANN: Multi-syntax Markup Translator: http://rdf-translator.appspot.com/

2011-11-03 Thread Melvin Carvalho
On 3 November 2011 17:50, Martin Hepp martin.h...@ebusiness-unibw.org wrote:
 (Apologies for cross-posting)

 Dear all:

 Alex Stolz, a PhD student in our group, has just released a nice multi-syntax 
 data translation tool

    http://rdf-translator.appspot.com/

 that can translate between

 * RDFa,
 * Microdata,
 * RDF/XML,
 * Turtle,
 * NTriples,
 * Trix, and
 * JSON.

 This service is built on top of RDFLib 3.1.0. For the translation between 
 microdata and the other file formats it is using Ed Summers' microdata plugin 
 and for RDF/JSON the plugin as available in the RDFLib add-on package 
 RDFExtras.

 The source code of this tool is available under an LGPL license.

Awesome!

It can do a few things that http://any23.org/ can't.


 Acknowledgements

 The work on RDF Translator has been supported by the German Federal Ministry 
 of Research (BMBF) by a grant under the KMU Innovativ program as part of the 
 Intelligent Match project (FKZ 01IS10022B).

 A huge thanks to Alex Stolz for this useful tool!

 Best wishes

 Martin Hepp
 
 martin hepp
 e-business & web science research group
 universitaet der bundeswehr muenchen

 e-mail:  h...@ebusiness-unibw.org
 phone:   +49-(0)89-6004-4217
 fax:     +49-(0)89-6004-4620
 www:     http://www.unibw.de/ebusiness/ (group)
         http://www.heppnetz.de/ (personal)
 skype:   mfhepp
 twitter: mfhepp

 Check out GoodRelations for E-Commerce on the Web of Linked Data!
 =
 * Project Main Page: http://purl.org/goodrelations/





Re: ANN: Multi-syntax Markup Translator: http://rdf-translator.appspot.com/

2011-12-28 Thread Melvin Carvalho
On 28 December 2011 10:08, Alex Stolz alex.st...@ebusiness-unibw.org wrote:
 Hi,

 On Dec 27, 2011, at 8:12 PM, Melvin Carvalho wrote:

 On 6 November 2011 20:04, Alex Stolz alex.st...@ebusiness-unibw.org wrote:
 Hello,

 thanks for your issue report, Masahide. We fixed it, i.e. the converter is 
 now able to handle UTF-8 characters properly.
 An additional feature that we've been working on and could be of general 
 interest is the integration of the RDF2RDFa and RDF2Microdata services that 
 allow us to convert from any input format to RDFa or Microdata (still work 
 in progress). E.g. it is now possible to go from RDF/JSON to RDFa.

 Thanks to Martin Hepp and Andreas Radinger for valuable feedback and 
 substantial contributions to this project!

 Do you know if there's any way to turn on the CORS headers?

 Yes, it is possible as per http://enable-cors.org/#how-gae. I have now
 enabled CORS headers for all API requests, be it with the html=1 parameter
 (pygmentized output with Content-Type: text/html) or without (raw output
 using the most appropriate media type for each format). You can check it at
 http://enable-cors.org/#check.

Awesome, thanks!


 Alex



 $.getJSON('http://rdf-translator.appspot.com/parse?of=rdf-json&url=http://bblfish.net/people/henry/card',
 function (data) { alert(JSON.stringify(data)) } )

 gives

 XMLHttpRequest cannot load ... is not allowed by Access-Control-Allow-Origin.

 e.g.

 Access-Control-Allow-Origin: *

 http://www.w3.org/wiki/CORS_Enabled
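For anyone hitting the same wall: the fix is purely server-side. A generic WSGI sketch (illustrative only, not the actual rdf-translator code) showing the one response header the cross-origin $.getJSON call needs:

```python
def app(environ, start_response):
    # Minimal WSGI handler that sends the CORS header a
    # cross-origin XMLHttpRequest requires.
    headers = [
        ("Content-Type", "application/json"),
        ("Access-Control-Allow-Origin", "*"),
    ]
    start_response("200 OK", headers)
    return [b'{"ok": true}']

# Exercise the app in-process; no server or network needed.
captured = {}
def start_response(status, headers):
    captured["status"] = status
    captured["headers"] = dict(headers)

body = b"".join(app({}, start_response))
print(captured["headers"]["Access-Control-Allow-Origin"])  # *
```

Without that header the browser completes the request but refuses to hand the response to the calling script, which is exactly the error quoted above.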


 Best,
 Alex


 On Nov 4, 2011, at 3:39 AM, KANZAKI Masahide wrote:

 Hello, thank you for introducing useful tool.

 It would be much nicer if the translator could handle non-ascii
 characters properly. (we got \u escaped string in JSON outputs, but
 garbage in other formats. It'd be better if we could have non-escaped
 values in JSON as well).

 cheers,

 2011/11/4 Martin Hepp martin.h...@ebusiness-unibw.org:
 (Apologies for cross-posting)

 Dear all:

 Alex Stolz, a PhD student in our group, has just released a nice 
 multi-syntax data translation tool

    http://rdf-translator.appspot.com/

 that can translate between

 * RDFa,
 * Microdata,
 * RDF/XML,
 * Turtle,
 * NTriples,
 * Trix, and
 * JSON.

 This service is built on top of RDFLib 3.1.0. For the translation between 
 microdata and the other file formats it is using Ed Summers' microdata 
 plugin and for RDF/JSON the plugin as available in the RDFLib add-on 
 package RDFExtras.

  The source code of this tool is available under an LGPL license.



 --
  @prefix : <http://www.kanzaki.com/ns/sig#> .  :from [:name
  "KANZAKI Masahide"; :nick "masaka"; :email "mkanz...@gmail.com"].

 Alex Stolz
 E-Business & Web Science Research Group
 Universität der Bundeswehr München

 e-mail:  alex.st...@ebusiness-unibw.org
 phone:   +49-(0)89-6004-4277
 fax:     +49-(0)89-6004-4620
 skype:   stalsoft.com




 Alex Stolz
 E-Business & Web Science Research Group
 Universität der Bundeswehr München

 e-mail:  alex.st...@ebusiness-unibw.org
 phone:   +49-(0)89-6004-4277
 fax:     +49-(0)89-6004-4620
 skype:   stalsoft.com




Re: SOPA Blackout Vote

2012-01-17 Thread Melvin Carvalho
On 17 January 2012 18:18, David Booth da...@dbooth.org wrote:
 FYI, that link doesn't seem to work.

Thanks for the heads up, sorry about that! :)

Server can be a bit temperamental ... I've pinged the folks at MIT to
see if they can get a chance to look at what's up ...


 David

 On Tue, 2012-01-17 at 11:46 -0500, Kingsley Idehen wrote:
 All,

 There is a Linked Data driven poll service that enables those with an
 opinion re. the matter above, to cast votes: http://vote.data.fm/ .


 --
 David Booth, Ph.D.
 http://dbooth.org/

 Opinions expressed herein are those of the author and do not necessarily
 reflect those of his employer.





Re: SOPA Blackout Vote

2012-01-17 Thread Melvin Carvalho
On 17 January 2012 18:18, David Booth da...@dbooth.org wrote:
 FYI, that link doesn't seem to work.

 David

 On Tue, 2012-01-17 at 11:46 -0500, Kingsley Idehen wrote:
 All,

 There is a Linked Data driven poll service that enables those with an
 opinion re. the matter above, to cast votes: http://vote.data.fm/ .


Seems to be working better now ... results so far:


Should DBPedia / LOD Cloud have a SOPA blackout?

+20 Yes
+3   No
+3   No opinion


 --
 David Booth, Ph.D.
 http://dbooth.org/

 Opinions expressed herein are those of the author and do not necessarily
 reflect those of his employer.





Modelling colors

2012-01-25 Thread Melvin Carvalho
I see hasColor a lot in the OWL documentation but I was trying to work
out a way to say something has a certain color.

I understand linked open colors was a joke

Anyone know of an ontology with color or hasColor as a predicate?



Re: Modelling colors

2012-01-26 Thread Melvin Carvalho
2012/1/26 Sergio Fernández sergio.fernan...@fundacionctic.org:
 Melvin,

 Linked Open Colors was something made for fun, which is very different
 to a joke. It only provides instances of colors based on some of their
 different representations. For what you are looking for, here are a
 couple of vocabularies that would be useful:

 http://data.colourphon.co.uk/def/colour-ontology#
 http://www.w3.org/ns/ui#

Thanks all so much!

I was slightly unsure when I read

Linked Open Colors, dataset created for the April Fools' Day of 2011

But I have something I can use now, thanks! :)


 Cheers,

 On 26 January 2012 00:15, Melvin Carvalho melvincarva...@gmail.com wrote:
 I see hasColor a lot in the OWL documentation but I was trying to work
 out a way to say something has a certain color.

 I understand linked open colors was a joke

 Anyone know of an ontology with color or hasColor as a predicate?




 --
 Sergio Fernández
 CTIC - Technological Center
 Parque Científico y Tecnológico de Gijón
 C/ Ada Byron, 39 Edificio Centros Tecnológicos
 33203 Gijón - Asturias - Spain
 Tel.: +34 984 29 12 12
 Fax: +34 984 39 06 12
 E-mail: sergio.fernan...@fundacionctic.org
 http://www.fundacionctic.org
 Privacy Policy: http://www.fundacionctic.org/privacidad



Yahoo! patent 7,747,648 court case

2012-03-13 Thread Melvin Carvalho
You may have seen in the news that Facebook is being sued over the
following patented technology:

http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=/netahtml/PTO/search-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN/7747648

Abstract

 Systems and methods for information retrieval and communication employ a
world model. The world model is made up of interrelated entity models, each
of which corresponds to an entity in the real world, such as a person,
place, business, other tangible thing, community, event, or thought. Each
entity model provides a communication channel via which a user can contact
a real-world person responsible for that entity model. Entity models also
provide feedback information, enabling users to easily share their
experiences and opinions of the corresponding real-world entity.



Does this affect Linked Open Data too?


Re: Change Proposal for HttpRange-14

2012-03-23 Thread Melvin Carvalho
2012/3/23 Giovanni Tummarello giovanni.tummare...@deri.org

 2012/3/23 Sergio Fernández sergio.fernan...@fundacionctic.org:
  Do you really think that basing your proposal on the usage of a POWDER
  annotation is a good idea?
 
  Sorry, but IMHO HttpRange-14 is a good enough agreement.

 yup performed brilliantly so far, nothing to say. Industry is flocking
 to adoption, and what a consensus.


+1

'Brilliantly' is an understatement :)

And we're probably still only towards the beginning of the adoption cycle!

I don't think even the wildest optimist could have predicted the success
of the current architecture (both pre and post HR14).


Re: The Battle for Linked Data

2012-03-26 Thread Melvin Carvalho
On 26 March 2012 17:49, Hugh Glaser h...@ecs.soton.ac.uk wrote:

 So What is Linked Data?
 And relatedly, Who Owns the Term Linked Data?
 (If we used a URI for Linked Data, it might or might not be clearer.)

 Of course most people think that What *I* think is Linked Data is Linked
 Data.
 And by construction, if it is different it is not Linked Data.
 Kingsley views the stuff people are talking about that does not, for
 example, conform to a policy that includes Range-14 as Structured Data -
 naming things is important, as we well know, and can serve to separate
 communities.

 There are clearly quite a few people who would like to relax things, and
 even go so far as to drop the IR thing completely, but still want to have
 the Linked Data badge on the resultant Project.
 There are others for whom that is anathema.

 I actually think that what we are watching is the attempt of the Linked
 Data child to fly the nest from the Semantic Web.
 Can it develop on its own, and possibly have different views to the
 Semantic Web, or must it always be obedient to the objectives of its parent?

 Often the objectives of Linked Data engineers are very different to the
 objectives of Semantic Web engineers.
 (A Data Integration technology or a global AI system.)
 So it is not surprising that the technologies they want might be
 different, and even incompatible.

 If I push the parent/child analogy beyond its limit, I can see the
 forthcoming TAG meeting as the moment at which the child proposes to reason
 with the parent to try to reach a compromise.
 The TAG seems to be part of the ownership of the term Linked Data,
 because the Linked Data people (whoever they are) so agree at the moment -
 but this is not a God-given right - I don't think there is any trade- or
 copy-right on the term.
 A failure to arrive at something that the child finds acceptable can often
 lead to a complete rift, where the child leaves home entirely and even
 changes its name.

 And of course, after such a separation, exactly who would be using the
 term Linked Data to badge their activities?


I would definitely use the Linked Data term to badge activities.

The way I see it:

1. Linked Documents, aka the Web of Documents, has done quite well.  After 2
decades, arguably it's the best technical system built to date.

2. Linked Data, still in its infancy, seems to be exploding.  I think the
tipping point was late last year when facebook came on board.  I hear
linked data success stories on a weekly basis.  Indeed, the World Bank came
on board lately, and it was barely a cause for celebration.  That shows me
a growing maturity.

3. Linked apps is the era I've been looking forward to the most.  We're not
even at the very beginning but I'm very excited about the huge potential of
web scale apps working together, consuming and producing linked data for
both humans and machines.

To use a cliche: "If it ain't broken, don't try to fix it"

No technology remains static, and I can understand proposals for changes.

But looking from a 50,000 ft perspective, do people honestly think The Web
(and/or Linked Data) is broken?



 Like others in this discussion I am typing one-handed, after earlier
 biting my arm off in preference to entering the Range-14 discussion again.
 But I do think this is an important moment for the Linked Data world.

 Best
 Hugh
 --
 Hugh Glaser,
 Web and Internet Science
 Electronics and Computer Science,
 University of Southampton,
 Southampton SO17 1BJ
 Work: +44 23 8059 3670, Fax: +44 23 8059 3045
 Mobile: +44 75 9533 4155 , Home: +44 23 8061 5652
 http://www.ecs.soton.ac.uk/~hg/





Re: Google and the Googlization of the semantic web

2012-03-26 Thread Melvin Carvalho
On 26 March 2012 21:53, ProjectParadigm-ICT-Program 
metadataport...@yahoo.com wrote:

 See:

 http://online.wsj.com/article/SB10001424052702304459804577281842851136290.html

 The clock is ticking now and it seems Google will soon take over semantic
 web technologies, or not?

 With the new privacy universal agreement introduced at the beginning of
 March this year by Google it was only logical that semantic search would be
 added to expand the data mining tool kit to optimize the utilization of
 user generated trails of web use.

 And what will happen to the envisioned academic uses of semantic web
 technologies and linked data?

 Are we facing a world according to Google (and FaceBook etc.)?


Interesting article.

Both facebook and google have some great engineers.  If I was to bet on
one, it'd be Facebook, because I think we need things to be social.

Facebook actually serve nice turtle, so it integrates quite well.



 Milton Ponson
 GSM: +297 747 8280
 PO Box 1154, Oranjestad
 Aruba, Dutch Caribbean
 Project Paradigm: A structured approach to bringing the tools for
 sustainable development to all stakeholders worldwide by creating ICT
 tools for NGOs worldwide and: providing online access to web sites and
 repositories of data and information for sustainable development

 This email and any files transmitted with it are confidential and intended
 solely for the use of the individual or entity to whom they are addressed.
 If you have received this email in error please notify the system manager.
 This message contains confidential information and is intended only for the
 individual named. If you are not the named addressee you should not
 disseminate, distribute or copy this e-mail.



Re: Change Proposal for HttpRange-14

2012-03-27 Thread Melvin Carvalho
On 27 March 2012 19:54, Jeni Tennison j...@jenitennison.com wrote:

 Hi Tom,

 On 26 Mar 2012, at 17:13, Tom Heath wrote:
  On 26 March 2012 16:47, Jeni Tennison j...@jenitennison.com wrote:
  Tom,
 
  On 26 Mar 2012, at 16:05, Tom Heath wrote:
  On 23 March 2012 15:35, Steve Harris steve.har...@garlik.com wrote:
  I'm sure many people are just deeply bored of this discussion.
 
  No offense intended to Jeni and others who are working hard on this,
  but *amen*, with bells on!
 
  One of the things that bothers me most about the many years worth of
  httpRange-14 discussions (and the implications that HR14 is
  partly/heavily/solely to blame for slowing adoption of Linked Data) is
  the almost complete lack of hard data being used to inform the
  discussions. For a community populated heavily with scientists I find
  that pretty tragic.
 
 
  What hard data do you think would resolve (or if not resolve, at least
 move forward) the argument? Some people  are contributing their own
 experience from building systems, but perhaps that's too anecdotal? Would a
  structured survey be helpful? Or do you think we might be able to pick
 up trends from the webdatacommons.org  (or similar) data?
 
  A few things come to mind:
 
  1) a rigorous assessment of how difficult people *really* find it to
  understand distinctions such as things vs documents about things.
  I've heard many people claim that they've failed to explain this (or
  similar) successfully to developers/adopters; my personal experience
  is that everyone gets it, it's no big deal (and IRs/NIRs would
  probably never enter into the discussion).

 How would we assess that though? My experience is in some way similar --
 it's easy enough to explain that you can't get a Road or a Person when you
 ask for them on the web -- but when you move on to then explaining how that
 means you need two URIs for most of the things that you really want to talk
 about, and exactly how you have to support those URIs, it starts getting
 much harder.


I'm curious as to why this is difficult to explain.  Especially since I
also have difficulties explaining the benefits of linked data.  However,
normally the road block I hit is explaining why URIs are important.

Are there perhaps similar paradigms that the majority of developers are
already familiar with?


One that springs to mind is in java

You have a file Hello.java

But the file contains the actual class, Hello, which has keys and
values.


Or perhaps most people these days know JSON, where you have a file like
hello.json

The file itself is not that important, but it can contain 0 or more
objects, such as
{
  "key1": "value1",
  "key2": "value2",
  "key3": "value3"
}

Would this be a valid analogy?



 The biggest indication to me that explaining the distinction is a problem
 is that neither OGP nor schema.org even attempts to go near it when
 explaining to people how to add to semantic information into their web
 pages. The URIs that you use in the 'url' properties of those vocabularies
 are explained in terms of 'canonical URLs' for the thing that is being
 talked about. These are the kinds of graphs that millions of developers are
 building on, and those developers do not consider themselves linked data
 adopters and will not be going to linked data experts for training.

  2) hard data about the 303 redirect penalty, from a consumer and
  publisher side. Lots of claims get made about this but I've never seen
  hard evidence of the cost of this; it may be trivial, we don't know in
  any reliable way. I've been considering writing a paper on this for
  the ISWC2012 Experiments and Evaluation track, but am short on spare
  time. If anyone wants to join me please shout.

 I could offer you a data point from legislation.gov.uk if you like. When
 someone requests the ToC for an item of legislation, they will usually hit
 our CDN and the result will come back extremely quickly. I just tried:

 curl --trace-time -v http://www.legislation.gov.uk/ukpga/1985/67/contents

 and it showed the result coming back in 59ms.

 When someone uses the identifier URI for the abstract concept of an item
 of legislation, there's no caching so the request goes right back to the
 server. I just tried:

 curl --trace-time -v http://www.legislation.gov.uk/id/ukpga/1985/67

 and it showed the result coming back in 838ms. Of course, the redirection
 goes to the ToC above, so in total it takes around 900ms to get back the
 data.

 So every time that we refer to an item of legislation through its generic
 identifier rather than a direct link to its ToC we are making the site seem
 about 15 times slower. What's more, it puts load on our servers which
 doesn't happen when the data is cached; the more load, the slower the
 responses to other important things that are hard to cache, such as
 free-text searching.

 The consequence of course is that for practical reasons we design the site
 not to use generic identifiers for items of legislation 
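The arithmetic behind that "about 15 times slower" figure follows directly from the timings quoted above:

```python
# Timings quoted above for legislation.gov.uk (illustrative arithmetic only)
cached_ms = 59      # CDN-cached ToC, fetched directly
redirect_ms = 838   # uncached 303 response for the identifier URI

# Following the 303 still requires fetching the ToC afterwards:
total_ms = redirect_ms + cached_ms
print(round(total_ms / cached_ms))  # -> 15, i.e. about 15 times slower
```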

Re: See Other

2012-03-28 Thread Melvin Carvalho
On 28 March 2012 03:30, Dan Brickley dan...@danbri.org wrote:

 On 27 March 2012 20:23, Melvin Carvalho melvincarva...@gmail.com wrote:

  I'm curious as to why this is difficult to explain.  Especially since I
 also
  have difficulties explaining the benefits of linked data.  However,
 normally
  the road block I hit is explaining why URIs are important.



 Alice: So, you want to share your in-house thesaurus in the Web as
 'Linked Data' in SKOS?

 Bob: Yup, I saw [inspirational materials] online and a few blog posts,
 it looks easy enough. We've exported it as RDF/XML SKOS already. Here,
 take a look...

 [data stick changes hands]

 Alice: Cool! And .. yup it's well-formed XML, and here see I parsed it
 with a real RDF parser (made by Dave Beckett who worked on the last
 W3C spec for this stuff, beats me actually checking it myself) and it
 didn't complain. So looks fine! Ok so we'll need to chunk this up
 somehow so there's one little record per term from your thesaurus, and
 links between them... ...and it's generally good to make human facing
 pages as well as machine-oriented RDF ones too.

 Bob: Ok, so that'll be microformats no wait microdata ah yeah, RDFa,
 right? Which version?

 Alice: well RDFa yes, microdata is a kind of cousin, a mix of thinking
 from microdata and microformats communities. But I meant that you'd
 make a version of each page for computers to use (RDF/XML like your
 test export here), ... and you'd make some kind of HTML page for more
 human readers also. The stuff you mention is more about doing both
 within the same format...

 Bob: Great. Which one's the most standard?  What should I use?

 Alice: Well I guess it depends what you mean by standard.
 [skips digression about whatwg and w3c etc notions of standards process]
 [skips digression about XHTML vs XML-ish polyglot HTML vs resolutely
 non-XML HTML5 flavours]
 [skips digression about qnames in HTML and RDFa 1.1 versus 1.0]

 ...you might care to look at using basic HTML5 document with say the
 Lite version of RDFa 1.1 (which is pretty much finished but not an
 official stable standard yet at W3C)

 Bob: [makes a note]. Ok, but that's just the human-facing page,
 anyway. We'd put up RDF/XML for machines too, right? Well maybe that's
 not necessary I guess. I was reading something about GRDDL and XSLT
 that automates the conversion, ... should we maybe generate the
 RDF/XML from the HTML+RDFa or vice versa? or just have some php hack
 generate both from MySQL since that's where the stuff ultimately lives
 right now anyway...?

 Alice: Um, well it's pretty much your choice. Do you need RDF/XML too?
 Well. maybe, not sure... it depends. There are more RDF/XML
 parsers around, they're more mature, ... but increasingly tools will
 consume all kinds of data as RDF. So it might not matter. Depends why
 you're doing this, really.

 Bob: Er ok, maybe we ought to do both for now, ... belt-and-braces,
 ... maybe watch the stats and see what's being picked up? I'm doing
 this because of promise of interestingly unexpected re-use and so on,
 which makes details hard to predict by definition.

 Alice: Sounds like a plan. Ok, so each node in your RDF graph, ...
 we'll need to give it a URI. You know that's like the new word for
 URL,
 but that includes identifiers for real world things too.

 Bob: Sure sure, I read that. Makes sense. And I can have a URI, my
 homepage can have a URI, I'm not my home page blah-de-blah?

 Alice: You got it.

 Bob: Ok, so what URLs should I give the concepts in this thesaurus?
 They've got all kinds of strings attached, but we've also got nicely
 managed numeric IDs too.

 Alice: Right, so maybe something short (URIs can never be too
 short...), ... so maybe if you host at your example.com server,
 http://example.com/demothes/c1  then same but /c2 /c3 etc.

 ... or well you could use #c1 or #c2 etc. That's pretty much up to
 you. There are pros-and-cons in both directions.
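One of the pros of hash URIs can be shown mechanically: the client strips the fragment before making the request, so the thing URI and the document URI differ without any server-side 303. A sketch using Python's standard library (the URIs are the hypothetical ones from this dialogue):

```python
from urllib.parse import urldefrag

# A hash URI for a thesaurus concept (hypothetical, per the dialogue)
thing = "http://example.com/demothes#c1"

# Clients never send the fragment; what actually gets fetched is the
# document URI, so no redirect round trip is needed.
doc, frag = urldefrag(thing)
print(doc)   # http://example.com/demothes
print(frag)  # c1
```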

 Bob: whatever's easiest. It's a pretty plain apache2 setup, with php
 if we want it, or we can batch create files if that makes more sense;
 this data doesn't change much.

 Alice: Well how big is the thesaurus...?

 Bob: a couple thousand terms, each with a few relations and bits of
 text; maybe more if we dig out the translations (humm should we
 language negotiate those somehow?)

 Alice: Let's talk about that another day, maybe?

 Bob:  And hmm the translations are versioned a bit differently? Should
 we put version numbers in somewhere so it's unambiguous which
 version of the translation we're using?

 Alice: Let's talk about that another day, too.

 Bob: OK, where were we? http://example.com/demothes/c1 ... sure, that
 sounds fine.

 ... we'd put some content negotiated apache thing there, and make c1
 send HTML if there's a browser, or rdf/xml if they want that stuff
 instead? Default to the browser / HTML version maybe?

 Alice: something like that could work. There are some howtos around.
 Oh, but if c1 isn't
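The kind of Accept-header dispatch Bob describes can be sketched as follows; the file names and the HTML default are assumptions, not part of the original exchange:

```python
# Minimal sketch of Bob's conneg idea: pick a representation per concept
# based on the Accept header. File names and the HTML default are assumptions.
def representation_for(concept: str, accept: str) -> str:
    if "application/rdf+xml" in accept:
        return f"{concept}.rdf"   # machine-oriented RDF/XML export
    return f"{concept}.html"      # default: human-facing page

print(representation_for("c1", "application/rdf+xml"))  # c1.rdf
print(representation_for("c1", "text/html,application/xhtml+xml"))  # c1.html
```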

Re: See Other

2012-03-28 Thread Melvin Carvalho
On 28 March 2012 15:28, Hugh Glaser h...@ecs.soton.ac.uk wrote:

 I can't find any apps (other than mine) that actually use this.

 Searching:
 Sindice:
 http://sindice.com/search?q=http://graph.facebook.com
 40 (forty) results
 Bing:
 http://www.bing.com/search?q=%22http://graph.facebook.com/%22
 8400 results

 I don't think this activity has actually set the world alight yet - people
 are quite excited from what you call the Structured Data point of view, but
 little or no Linked Data.
 And it has been around for a little while now.
 And my (unproven) hypothesis is that Sindice would be finding these links
 all over the place if Facebook had been encouraged to do it differently.

 I'm not knocking it - you are right - it is really great they have done it.
 But I think we could have helped them do it better.


I wanted to give you a demo of some of the things we've been working on,
but I couldn't quite work out how to insert images into Gmail.

I've blogged about it here:

http://www.w3.org/community/rww/2012/03/28/using-tabulator-to-link-to-facebook-in-chrome/

Hope that helps! :)



 Cheers
 Hugh

 On 28 Mar 2012, at 14:00, Kingsley Idehen wrote:
 
  Hugh,
 
   Really short story: Facebook has delivered you enough structured data
  for you to wander into higher Linked Data realms if you choose. Half a loaf
  is better than none. Facebook has contributed something like 850 million+
  profiles in structured data form to the Web, isn't that awesome? Doesn't
  that simplify the entire journey towards a Web of semantically rich
  relations that ultimately aids:
 
  1. Findability
   2. Disparate Data Access & Integration -- basically data virtualization
   3. Introduction and Exploitation of Intensional Data Management --
  basically what's also referred to as Closed World vs Open World database
  technology
  4. Flexible Data Representation where schema bindings are late and loose.
 
  As I told you last week, Linked Data has already exploded beyond the
 point of critical mass. Stalled it hasn't :-)
 
  Links:
 
   1. http://goo.gl/y7Gq4 -- a pretty old post titled: What Facebook Can
  Teach Us about Bootstrapping Linked Data at InterWeb Scales.
 
  --
 
  Regards,
 
  Kingsley Idehen
  Founder & CEO
  OpenLink Software
  Company Web:
  http://www.openlinksw.com
 
  Personal Weblog:
  http://www.openlinksw.com/blog/~kidehen
 
  Twitter/Identi.ca handle: @kidehen
  Google+ Profile:
  https://plus.google.com/112399767740508618350/about
 
  LinkedIn Profile:
  http://www.linkedin.com/in/kidehen
 
 
 
 
 
 

 --
 Hugh Glaser,
 Web and Internet Science
 Electronics and Computer Science,
 University of Southampton,
 Southampton SO17 1BJ
 Work: +44 23 8059 3670, Fax: +44 23 8059 3045
 Mobile: +44 75 9533 4155 , Home: +44 23 8061 5652
 http://www.ecs.soton.ac.uk/~hg/





Fwd: Call for Two Year Feature Freeze -- to httpRange-14 resolution

2012-03-29 Thread Melvin Carvalho
FYI

Apologies for cc'ing initial mail to public-lod-requ...@w3.org by mistake

-- Forwarded message --
From: Melvin Carvalho melvincarva...@gmail.com
Date: 29 March 2012 20:13
Subject: Call for Two Year Feature Freeze -- to httpRange-14 resolution
To: TAG List www-...@w3.org, public-lod-requ...@w3.org


In line with:

http://www.w3.org/2001/tag/doc/uddp/change-proposal-call.html

Calls for: Reinforcement of the status quo.

The proposal simply states what the title says, and calls for a two-year
feature freeze on this issue.



Benefits
===

- Continued meteoric rise of the web of documents

- Continued emergence of linked data

- A chance for the nascent read/write web of applications to come to
fruition, based on the existing architecture


Costs
=

- Potential continued performance issues for those that choose to deploy
the 303 pattern



*Disclaimer* my suggestion is based on work done in the W3C Read Write Web
Community group, but is my personal opinion, and does not necessarily
reflect the views of that group.

Thanks
Melvin


Re: Interesting GoogleTalk about Named Content Networks

2012-04-12 Thread Melvin Carvalho
On 11 April 2012 16:52, Kingsley Idehen kide...@openlinksw.com wrote:

 All,

 The links below have been extracted from a -- circa. 2006 -- Google
 TechTalks video about Named Content Networks which is another moniker for
 InterWeb scale Linked Data.  The presenter is Van Jacobson (#DBpedia URI:
 http://dbpedia.org/resource/Van_Jacobson)
 and the full video is at: http://youtu.be/gqGEMQveoqg :

 1. http://www.youtube.com/watch?feature=player_detailpage&v=gqGEMQveoqg#t=1048s
 -- Communications vs Data Networking

 2. http://www.youtube.com/watch?feature=player_detailpage&v=gqGEMQveoqg#t=1242s
 -- Doing it differently with same infrastructure

 3. http://www.youtube.com/watch?feature=player_detailpage&v=gqGEMQveoqg#t=2320s
 -- Named Chunks of Data (Data Objects style of Resources)

 4. http://www.youtube.com/watch?feature=player_detailpage&v=gqGEMQveoqg#t=2407s
 -- Data Abstraction is what matters, not the underlying Network

 5. http://www.youtube.com/watch?feature=player_detailpage&v=gqGEMQveoqg#t=2570s
 -- Focusing on the Data by Name

 6. http://www.youtube.com/watch?feature=player_detailpage&v=gqGEMQveoqg#t=2657s
 -- Securing the Data using Crypto

 7. http://www.youtube.com/watch?feature=player_detailpage&v=gqGEMQveoqg#t=2662s
 -- Trusting Data via Data, i.e., using Crypto and Trust Logic

 8. http://www.youtube.com/watch?feature=player_detailpage&v=gqGEMQveoqg#t=2685s
 -- Data Dissemination (Data Diffusion) Example that includes Vectorization
 via indirection in response to Data by Generic Name Requests

 9. http://www.youtube.com/watch?feature=player_detailpage&v=gqGEMQveoqg#t=2949s
 -- Data Objects have Names as opposed to Addresses (Fundamental to Data
 Dissemination oriented Networking)

 10. http://www.youtube.com/watch?feature=player_detailpage&v=gqGEMQveoqg#t=3003s
 -- Integrity and Trust are Data Properties

 11. http://www.youtube.com/watch?feature=player_detailpage&v=gqGEMQveoqg#t=3081s
 -- Artificial Tensions about Networking (Bit Shifting)

 12. http://www.youtube.com/watch?feature=player_detailpage&v=gqGEMQveoqg#t=3178s
 -- Communication Wins from Good Data Dissemination Architecture

 13. http://www.youtube.com/watch?feature=player_detailpage&v=gqGEMQveoqg#t=3640s
 -- Data Integrity and the Broken Hierarchical CA Network

 14. http://www.youtube.com/watch?feature=player_detailpage&v=gqGEMQveoqg#t=3722s
 -- Change has to be Unobtrusive (so 200 OK Locations when disambiguating
 HTTP URI Names)

 15. http://www.youtube.com/watch?feature=player_detailpage&v=gqGEMQveoqg#t=3874s
 -- What exactly is a Name?


Very interesting talk.  I like his phrase, 'All we have is names and data.
There ain't nothing more.'

Thanks Kingsley for annotating this video into different sections.




 --

 Regards,

 Kingsley Idehen
 Founder & CEO
 OpenLink Software
 Company Web: http://www.openlinksw.com
 Personal Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca handle: @kidehen
 Google+ Profile: https://plus.google.com/112399767740508618350/about
 LinkedIn Profile: http://www.linkedin.com/in/kidehen









Re: Can we create better links by playing games?

2012-06-20 Thread Melvin Carvalho
On 19 June 2012 21:23, Martin Hepp martin.h...@ebusiness-unibw.org wrote:

 Dear Jens:

 I wonder how this approach is different to the one described in our 2008
 paper [1], available from

 http://www.heppnetz.de/files/gwap-semweb-ieee-is.pdf


Must watch: Jesse Schell's talk on Games for Change

http://vimeo.com/25681002



 Martin


 [1] Games with a Purpose for the Semantic Web, IEEE Intelligent Systems,
 Vol. 23, No. 3, pp. 50-60, May/June 2008.

 On Jun 19, 2012, at 9:04 PM, Jens Lehmann wrote:

 
  Dear all,
 
  most of you will agree that links are an important element of the Web of
 Data. There exist a number of tools, such as LIMES [1] or SILK [2], which
 are able to create a high number of such links by using heuristics. The
 manual validation or verification of such links can be quite tedious, so we
 try to find out whether game based approaches can be used to make that task
 much more interesting. We developed the VeriLinks game prototype and ask
 for your help to judge how well the validation works by playing the
 game and answering a few questions in a survey:
 
  Game: http://verilinks.aksw.org
  Survey: http://surveys.aksw.org/survey/verilinks
 
  The survey itself takes only 5 minutes to complete. In addition to
 having the chance to play a game at work ;), there are also great prizes to
 win. 10 randomly selected participants will receive Amazon vouchers:
 
  1st prize: 200 Euro
  2nd prize: 100 Euro
  3rd prize: 50 Euro
  4th - 10th prize: 25 Euro
 
  If you want to win a prize, please participate in the survey until this
 *Friday, June 22, 23:59 CET*.
 
  Kind regards,
 
  Quan Nguyen and Jens Lehmann
  (researchers at the AKSW [3] Group, supported by LOD2 [4] and LATC [5])
 
  [1] http://limes.sf.net
  [2] http://www4.wiwiss.fu-berlin.de/bizer/silk/
  [3] http://aksw.org
  [4] http://lod2.eu
  [5] http://latc-project.eu
 
 



 
 martin hepp
 e-business & web science research group
 universitaet der bundeswehr muenchen

 e-mail:  h...@ebusiness-unibw.org
 phone:   +49-(0)89-6004-4217
 fax: +49-(0)89-6004-4620
 www: http://www.unibw.de/ebusiness/ (group)
 http://www.heppnetz.de/ (personal)
 skype:   mfhepp
 twitter: mfhepp

 Check out GoodRelations for E-Commerce on the Web of Linked Data!
 =
 * Project Main Page: http://purl.org/goodrelations/







Re: Can we create better links by playing games?

2012-06-20 Thread Melvin Carvalho
On 20 June 2012 15:11, Kingsley Idehen kide...@openlinksw.com wrote:

 On 6/19/12 3:23 PM, Martin Hepp wrote:

 [1] Games with a Purpose for the Semantic Web, IEEE Intelligent Systems,
 Vol. 23, No. 3, pp. 50-60, May/June 2008.


 Do the games at: http://ontogame.sti2.at/games/ still work? The more
 data quality oriented games the better re. LOD and the Semantic Web in
 general.

 Others: Are there any other games out there?


iand is working on a game:

http://blog.iandavis.com/2012/05/21/wolfie/



 --

 Regards,

 Kingsley Idehen
  Founder & CEO
  OpenLink Software
  Company Web: http://www.openlinksw.com
  Personal Weblog: http://www.openlinksw.com/blog/~kidehen
  Twitter/Identi.ca handle: @kidehen
  Google+ Profile: https://plus.google.com/112399767740508618350/about
  LinkedIn Profile: http://www.linkedin.com/in/kidehen








Re: Can we create better links by playing games?

2012-06-20 Thread Melvin Carvalho
On 20 June 2012 15:46, Leigh Dodds le...@ldodds.com wrote:

 On Wed, Jun 20, 2012 at 2:19 PM, Melvin Carvalho
 melvincarva...@gmail.com wrote:
 
 
  On 20 June 2012 15:11, Kingsley Idehen kide...@openlinksw.com wrote:
 
  On 6/19/12 3:23 PM, Martin Hepp wrote:
 
  [1] Games with a Purpose for the Semantic Web, IEEE Intelligent
 Systems,
  Vol. 23, No. 3, pp. 50-60, May/June 2008.
 
 
  Do the games at: http://ontogame.sti2.at/games/, still work? The more
 data
  quality oriented games the better re. LOD and the Semantic Web in
 general.
 
  Others: Are there any other games out there?
 
 
  iand is working on a game:
 
  http://blog.iandavis.com/2012/05/21/wolfie/

 Is that relevant? :)


I guess I was reaching a bit there :)

I think it's just a fun project at the moment.  But you never know how
things will develop; look at the explosion of Minecraft.  It's also written by
one of the top linked data experts in the world ... so you can only hope! :)

One thing that I think could be really good is if the linked geo browser
(also AKSW) were somehow gamified.



 L.



Re: Can we create better links by playing games?

2012-06-20 Thread Melvin Carvalho
On 20 June 2012 17:44, Elena Simperl elena.simp...@aifb.uni-karlsruhe.de wrote:

  Am 20.06.2012 15:19, schrieb Melvin Carvalho:



 On 20 June 2012 15:11, Kingsley Idehen kide...@openlinksw.com wrote:

 On 6/19/12 3:23 PM, Martin Hepp wrote:

 [1] Games with a Purpose for the Semantic Web, IEEE Intelligent Systems,
 Vol. 23, No. 3, pp. 50-60, May/June 2008.


  Do the games at: http://ontogame.sti2.at/games/, still work? The more
 data quality oriented games the better re. LOD and the Semantic Web in
 general.

  Hey,

 Most of the OntoGame games still work, and a more comprehensive list of
 related games is available at http://semanticgames.org/. One of the
 problems I see, however, is that all data collected through such games is
 not accessible or reusable by applications (or in other games, as a matter
 of fact).


Yes this is a really important point.

If you get the high score, it should become part of the linked data attached
to your identity (e.g. like a badge).  This makes the game 100 times more
worthwhile to play!



 Elena


 Others: Are there any other games out there?


 iand is working on a game:

 http://blog.iandavis.com/2012/05/21/wolfie/



 --

 Regards,

 Kingsley Idehen
  Founder & CEO
 OpenLink Software
 Company Web: http://www.openlinksw.com
 Personal Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca handle: @kidehen
 Google+ Profile: https://plus.google.com/112399767740508618350/about
 LinkedIn Profile: http://www.linkedin.com/in/kidehen








 --
 Dr. Elena Simperl
 Assistant Professor
 Karlsruhe Institute of Technology
 t: +49 721 608 45778
 m: +49 1520 1600994
 e: elena.simp...@kit.edu




Re: Can we create better links by playing games?

2012-06-20 Thread Melvin Carvalho
On 20 June 2012 19:13, Hugh Glaser h...@ecs.soton.ac.uk wrote:

 Hi Kingsley,
 I'm all in favour of having good authentication processes, but I have to
 say I smiled at your suggestion that WebID is not complex compared with
  HTTP Auth.

 Anyone can find endless examples of how to use HTTP Authentication to
 access things like sameAs.org.
 Any Google search will show you it is going to be something like
 curl --user name:password http://sameas.org/store/games/submit/


Yes, sending a WebID cert is more complex.  Something like:

 wget -qO- --no-check-certificate URI --certificate=./webid.cer
--private-key=./webid.key

But it does save having to remember a username / password.


 And then the librarian or whoever it is can move on to doing the
  application, which is what they wanted to do, without the distraction of
  access mechanisms.

 Note that some of my users have never heard of RDF - sameAs does identity
 management for them, and returns something like JSON.

 Of course, as always, if someone who actually wants to use the service
 says they would prefer to use WebID, then I will probably allow it when I
 get a moment, but until then it is way down the list of importance.

 Best
 Hugh

 On 20 Jun 2012, at 17:56, Kingsley Idehen wrote:

  On 6/20/12 12:38 PM, Hugh Glaser wrote:
  Yes, I could do.
  But it would be a barrier, certainly at the moment.
  It is much easier for someone to send me a username and password to put
  into their curl, than to start with, well, what you need to do first is
 get a WebID. Now let me tell you what a WebID is…
  Simple access is everything.
 
  WebID isn't complex. Users get a digital identity card (a Web resource)
 that bears identity claims. These claims are verified cryptographically via
  existing PKI technology that's already baked into all Web browsers.
 
  It can be as simple as:
 
  1. http://id.myopenlink.net/certgen
  2. http://my-profile.eu .
 
  Links:
 
  1. http://www.w3.org/wiki/WebID -- WebID Info Portal
  2. http://delicious.com/kidehen/webid_verifier -- WebID verification
 services
  3. http://delicious.com/kidehen/webid_apps+webid_apps -- WebID apps.
 
 
  Kingsley
 
 
 
  On 20 Jun 2012, at 17:21, Kingsley Idehen wrote:
 
  On 6/20/12 12:03 PM, Hugh Glaser wrote:
  (Sorry to repeat myself :-) )
  If you want a way of collecting and publishing coref data (or indeed
 any pair data), then I would be happy to provide a
  http://sameas.org/store/games or whatever, where you could even post
 pairs as they happen.
  Tell me the name you want, a username and password for the htaccess,
 and Bob's Your Uncle
  (http://sameas.org/?uri=http://rdf.freebase.com/ns/m.0265wn_)
 
  (This is one of the primary reasons for sameas.org - there is a lot
 of coref stuff being generated, but it sits on researchers' and PhD
 students' PCs, etc and never sees the light of day.)
  Best
  Hugh
  Hugh,
 
  What about using WebID and WebID based ACLs re. controlled or
 purpose-specific access to this service?
 
  Yesterday, I shared a post [1] on the Read-Write community mailing
 list that showcases an example of this kind of WebID  Linked Data
 exploitation.
 
  Links
 
  1. http://bit.ly/NNOkNB -- mounting 3rd party storage services into
 my WebID ACL protected personal data space .
 
  Kingsley
 
  On 20 Jun 2012, at 16:44, Elena Simperl wrote:
 
  Am 20.06.2012 15:19, schrieb Melvin Carvalho:
  On 20 June 2012 15:11, Kingsley Idehen kide...@openlinksw.com
 wrote:
  On 6/19/12 3:23 PM, Martin Hepp wrote:
  [1] Games with a Purpose for the Semantic Web, IEEE Intelligent
 Systems, Vol. 23, No. 3, pp. 50-60, May/June 2008.
 
  Do the games at: http://ontogame.sti2.at/games/, still work? The
 more data quality oriented games the better re. LOD and the Semantic Web in
 general.
  Hey,
 
  Most of the OntoGame games still work, and a more comprehensive list
 of related games is available at http://semanticgames.org/. One of the
 problems I see, however, is that all data collected through such games is
 not accessible or reusable by applications (or in other games, as a matter
 of fact).
 
  Elena
  Others: Are there any other games out there?
 
  iand is working on a game:
 
  http://blog.iandavis.com/2012/05/21/wolfie/
   --
 
  Regards,
 
  Kingsley Idehen
   Founder & CEO
  OpenLink Software
  Company Web: http://www.openlinksw.com
  Personal Weblog: http://www.openlinksw.com/blog/~kidehen
  Twitter/Identi.ca handle: @kidehen
  Google+ Profile:
 https://plus.google.com/112399767740508618350/about
  LinkedIn Profile: http://www.linkedin.com/in/kidehen
 
 
 
 
 
 
  --
  Dr. Elena Simperl
  Assistant Professor
  Karlsruhe Institute of Technology
  t: +49 721 608 45778
  m: +49 1520 1600994
   e: elena.simp...@kit.edu
 
  --
 
  Regards,
 
  Kingsley Idehen
   Founder & CEO
  OpenLink Software
  Company Web: http://www.openlinksw.com
  Personal Weblog: http://www.openlinksw.com/blog/~kidehen
  Twitter/Identi.ca handle: @kidehen
  Google+ Profile: https://plus.google.com/112399767740508618350

Re: Can we create better links by playing games?

2012-06-20 Thread Melvin Carvalho
On 21 June 2012 00:04, Martin Hepp martin.h...@ebusiness-unibw.org wrote:

 I can only add to Elena's statement - in fact, it is rather the exception
 than the rule that a Semantic Web task can be turned into a good game that
 attracts large, non-nerd audiences. Over the years since our first
 experiments in 2007, I have come to the conclusion that it is way more
 rewarding to turn such tasks into Amazon Mechanical Turk tasks (HITs) than
 to develop games. If we are honest to ourselves, then all of the existing
  SW games fall terribly short in terms of gaming fun and
  understandability.

 The difference between Luis von Ahn's successful games and our attempts of
 using this for the SW is that Luis used challenges where the processing of
 visual data and applying linguistic competence are the core intelligence
 task, two areas that are suited for broad audiences and easily link to
 entertaining game scenarios.

 But validating mapping axioms between bio ontologies and even open street
 map data is terribly boring in comparison.

 Plus, the level of competence needed for cracking the interesting nuts in
 our data (e.g. subtle forms of polysemy like the city of Munich vs. the
 district of Munich) restricts the target audience significantly.

 To be frank, I consider GWAPs for the Semantic Web a dead end and would
 not invest additional lifetime into it. It was a promising field back then,
 and has a lot of appeal at first sight, but it will not solve any of our
 big challenges.


Martin, I agree with you that solving esoteric problems in linked data might
not be the most fun thing to gamify.  However, that does not mean that
linked data isn't well suited to producing potentially viral games that have
positive side effects (e.g. for social good).

In the last few years, simple text-based games using linked social data (e.g.
Facebook) have achieved millions, and even tens of millions, of users.

What are the big challenges, if they are not to deliver linked data to a
larger population in appealing ways?



 Martin

 On Jun 20, 2012, at 10:59 PM, Elena Simperl wrote:

  Am 20.06.2012 17:52, schrieb Melvin Carvalho:
 
 
  On 20 June 2012 17:44, Elena Simperl 
 elena.simp...@aifb.uni-karlsruhe.de wrote:
  Am 20.06.2012 15:19, schrieb Melvin Carvalho:
 
 
  On 20 June 2012 15:11, Kingsley Idehen kide...@openlinksw.com wrote:
  On 6/19/12 3:23 PM, Martin Hepp wrote:
  [1] Games with a Purpose for the Semantic Web, IEEE Intelligent
 Systems, Vol. 23, No. 3, pp. 50-60, May/June 2008.
 
  Do the games at: http://ontogame.sti2.at/games/, still work? The more
 data quality oriented games the better re. LOD and the Semantic Web in
 general.
  Hey,
 
  Most of the OntoGame games still work, and a more comprehensive list of
 related games is available at http://semanticgames.org/. One of the
 problems I see, however, is that all data collected through such games is
 not accessible or reusable by applications (or in other games, as a matter
 of fact).
 
  Yes this is a really important point.
 
  If you get the high score it should be part of linked data to your
 identity (eg like a badge).  This makes the game 100 times more worthwhile
 to play!
  In fairness, you want the games to be played by a very large user base,
 and most of these players will have nothing to do with Linked Data. They
 will need other incentives to engage with the game :-) But the results
 would be more useful, indeed.
 
  A second problem that I've seen with the increasing number of games
 being released over the past years (including ours) is that they produce
 very similar data sets, mostly in general-purpose domains, for which there
 are actually knowledge bases available containing that knowledge (as RDF).
  Having a standard means to reuse such crowdsourced data sets would make
 the games definitely more valuable.
 
 
  Elena
 
 
  Others: Are there any other games out there?
 
  iand is working on a game:
 
  http://blog.iandavis.com/2012/05/21/wolfie/
 
 
  --
 
  Regards,
 
  Kingsley Idehen
  Founder  CEO
  OpenLink Software
  Company Web: http://www.openlinksw.com
  Personal Weblog: http://www.openlinksw.com/blog/~kidehen
  Twitter/Identi.ca handle: @kidehen
  Google+ Profile: https://plus.google.com/112399767740508618350/about
  LinkedIn Profile: http://www.linkedin.com/in/kidehen
 
 
 
 
 
 
 
 
 
  --
  Dr. Elena Simperl
  Assistant Professor
  Karlsruhe Institute of Technology
   t: +49 721 608 45778
   m: +49 1520 1600994
   e: elena.simp...@kit.edu
 
 
 
  --
  Dr. Elena Simperl
  Assistant Professor
  Karlsruhe Institute of Technology
  t: +49 721 608 45778
  m: +49 1520 1600994
   e: elena.simp...@kit.edu

 
 martin hepp
  e-business & web science research group
 universitaet der bundeswehr muenchen

 e-mail:  h...@ebusiness-unibw.org
 phone:   +49-(0)89-6004-4217
 fax: +49-(0)89-6004-4620
 www: http://www.unibw.de/ebusiness/ (group)
 http

Re: Can we create better links by playing games?

2012-06-22 Thread Melvin Carvalho
On 19 June 2012 21:04, Jens Lehmann lehm...@informatik.uni-leipzig.dewrote:


 Dear all,

 most of you will agree that links are an important element of the Web of
 Data. There exist a number of tools, such as LIMES [1] or SILK [2], which
 are able to create a high number of such links by using heuristics. The
 manual validation of verification of such links can be quite tedious, so we
 try to find out whether game based approaches can be used to make that task
 much more interesting. We developed the VeriLinks game prototype and ask
 for your help to judge how well the validation works by playing the
 game and answering a few questions in a survey:

 Game: http://verilinks.aksw.org
 Survey: 
 http://surveys.aksw.org/**survey/verilinkshttp://surveys.aksw.org/survey/verilinks

 The survey itself takes only 5 minutes to complete. In addition to having
 the chance to play a game at work ;), there are also great prizes to win.
 10 randomly selected participants will receive Amazon vouchers:

 1st prize: 200 Euro
 2nd prize: 100 Euro
 3rd prize: 50 Euro
 4th - 10th prize: 25 Euro

 If you want to win a prize, please participate in the survey until this
 *Friday, June 22, 23:59 CET*.


Game feedback.

In general it's a great game.  Nice look.  Nice sounds.  Nice animations.
Angry peas style.

Some important things:

1 The idea of doing 2 things at once is never going to be popular in
games.

This talk goes into the reason behind it (
http://fora.tv/2010/07/27/Jesse_Schell_Visions_of_the_Gamepocalypse )

2 The tutorial didn't really explain what the game was about; I had to play
it a few times to work out how to play.  Game designers have 3 goals:

- Interest your users for 15 seconds
- Interest your users for 15 minutes
- Interest your users for 15 hours

The first step needs some work here.

3 For a game to be popular / viral, it needs to be 'social'.  What this
means is use social linked data.  Facebook is quite good at this, but the
linked data cloud can take this further by an order of magnitude.  As I
mentioned before you need to also tie your achievements to your login
profile, otherwise it's just another silo.

Some links:
How to be an indie game developer:
http://www.mode7games.com/blog/2012/06/12/how-to-be-an-indie-game-developer/
Post on the mechanics of social games:
http://techcrunch.com/2010/08/01/the-new-games
-people-play-game-mechanics-in-the-age-of-social

In general : making good games is HARD, but can be hugely rewarding.
Linked data is uniquely placed to make games at web scale.  The first
people to crack this are on to something very big.  But you have to try and
capture modern social mechanics.  A great game can take linked data to the next
level, and virally bootstrap web 2.0 if done right.  This effort was a
really good try, lots of the hard parts are done, but still some ways to go.



 Kind regards,

 Quan Nguyen and Jens Lehmann
 (researchers at the AKSW [3] Group, supported by LOD2 [4] and LATC [5])

 [1] http://limes.sf.net
 [2] http://www4.wiwiss.fu-berlin.de/bizer/silk/
 [3] http://aksw.org
 [4] http://lod2.eu
 [5] http://latc-project.eu





Re: Can we create better links by playing games?

2012-07-02 Thread Melvin Carvalho
On 19 June 2012 21:04, Jens Lehmann lehm...@informatik.uni-leipzig.dewrote:


 Dear all,

 most of you will agree that links are an important element of the Web of
 Data. There exist a number of tools, such as LIMES [1] or SILK [2], which
 are able to create a high number of such links by using heuristics. The
 manual validation of verification of such links can be quite tedious, so we
 try to find out whether game based approaches can be used to make that task
 much more interesting. We developed the VeriLinks game prototype and ask
 for your help to judge how well the validation works by playing the
 game and answering a few questions in a survey:

 Game: http://verilinks.aksw.org
 Survey: http://surveys.aksw.org/survey/verilinks

 The survey itself takes only 5 minutes to complete. In addition to having
 the chance to play a game at work ;), there are also great prizes to win.
 10 randomly selected participants will receive Amazon vouchers:


You may also enjoy this 3 minute video which 'tricks' kids into learning
algebra using gamification

http://www.youtube.com/watch?feature=player_embedded&v=n6Np4Eb0Ff0



 1st prize: 200 Euro
 2nd prize: 100 Euro
 3rd prize: 50 Euro
 4th - 10th prize: 25 Euro

 If you want to win a prize, please participate in the survey until this
 *Friday, June 22, 23:59 CET*.

 Kind regards,

 Quan Nguyen and Jens Lehmann
 (researchers at the AKSW [3] Group, supported by LOD2 [4] and LATC [5])

 [1] http://limes.sf.net
 [2] http://www4.wiwiss.fu-berlin.de/bizer/silk/
 [3] http://aksw.org
 [4] http://lod2.eu
 [5] http://latc-project.eu





Re: SparQLed: Data assisted SPARQL editor available OpenSource

2012-07-19 Thread Melvin Carvalho
On 19 July 2012 10:56, Giovanni Tummarello giovanni.tummare...@deri.orgwrote:

 Thanks

 MQL editor from freebase has always been an inspiration for us (like
 everything else that came somehow from the MIT Simile group, David
 Huyhn, Stefano Mazzocchi etc).

 The goal here is to hopefully ignite activity on this long missing
 piece of sem web tooling. Whether it's going to be SparQLed or something
 else that takes inspiration from it doesn't matter as long as we can
 finally get sparql to be usable.


Great work, I enjoy using it, makes SPARQL take on a new dimension :)
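For readers who haven't written SPARQL before, here is a minimal sketch of the kind of query such an editor helps you compose against the public DBpedia endpoint. The class and property names follow common DBpedia conventions but are illustrative assumptions; they can vary between dumps:

```sparql
# Hypothetical example: list a few people born in Leipzig, per DBpedia.
PREFIX dbo:  <http://dbpedia.org/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?person ?name
WHERE {
  ?person a dbo:Person ;
          dbo:birthPlace <http://dbpedia.org/resource/Leipzig> ;
          rdfs:label ?name .
  FILTER (lang(?name) = "en")
}
LIMIT 10
```

A data-assisted editor earns its keep here by suggesting valid classes and predicates as you type, so you don't have to know the vocabulary up front.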



 Gio


 On Wed, Jul 18, 2012 at 8:40 PM, Yury Katkov katkov.ju...@gmail.com
 wrote:
  Hi!
 
  Looks very cool and reminds me on equally awesome MQL Editor on
  Freebase. [1] Thanks!
 
  [1] http://www.freebase.com/queryeditor
  -
  Yury Katkov
 
 
 
  On Tue, Jul 17, 2012 at 2:30 PM, Giovanni Tummarello
  giovanni.tummare...@deri.org wrote:
  Thanks for the comments we received.
 
  To answer some of the requests and the will it scale on complex
  datasets  we have now a sparqled which assists writing queries on the
  latest DBPedia dump
 
  http://demo.sindice.net/dbpedia-sparqled/
 
  We look forward to making Sparql a collaborative, collectively owned
  project. Pls sign up  to the google group to express your support.
 
  cheers
  Gio
 
 
  On Sat, Jun 30, 2012 at 6:52 PM, Giovanni Tummarello
  giovanni.tummare...@deri.org wrote:
  Dear all,
 
  we're happy to release open source today (actually yesterday :) )  a
  first version of our data assisted SPARQL query editor
 
  here is a short blog post which then leads to the homepage and other
 material
 
  http://www.sindicetech.com/blog/?p=14&preview=true
 
  ---
 
  Our desire is to make this a community driven project.
 
   In a few weeks we plan to licence the whole things as Apache and,
  with your support, make this a significant improvement into usability
  of semantic web tools.
 
  we look forward to your feedback.
 
  Gio
 




Re: position in cancer informatics

2012-07-19 Thread Melvin Carvalho
On 17 July 2012 22:27, Nathan nat...@webr3.org wrote:

 Can you open this right up for everybody to be involved?

 I know I for one would be happy to invest free time to looking at these
 datasets to find patterns - are they open and available online, any
 pointers to get started, anything at all that would enable me (and
 hopefully others skilled here) to work on this?

 It sounds like less of a position and more of a global need we who can
 should all be pumping time in to.


Maybe related:

15-Year-Old Maker Astronomically Improves Pancreatic Cancer Test

http://blog.makezine.com/2012/07/18/15-year-old-maker-astronomically-improves-pancreatic-cancer-test/

He gleaned information on the topic from his “good friend Google,” and
began his research. Yes, he even got in trouble in his science class for
reading articles on carbon nanotubes instead of doing his classwork. When
Andraka had solidified ideas for his novel paper sensor, he wrote out his
procedure, timeline, and budget, and emailed 200 professors at research
institutes. He got 199 rejections and one acceptance from Johns Hopkins:
“If you send out enough emails, someone’s going to say yes.”


 Best,

 Nathan


 Helena Deus wrote:

 Dear all,
 We have an exciting research assistant position open at DERI for a chance
 to work with Cancer Informatics! We are looking for an enthusiastic
 developer who is familiar with bioinformatics concepts. Your role will be
 exploring cancer related datasets and looking for patterns (applying, for
 example, machine learning techniques) that can be used for personalized
 medicine.
 Please don't hesitate to Fw. this to whomever you think might be
 interested.
 To apply or to ask for more information, please reply to me (
 helena.d...@deri.org) with CV + motivation letter
 Kind regards, Helena F. Deus, PhD
 Digital Enterprise Research Institute
 helena.d...@deri.org









Re: Linked Data Business Models?

2012-07-30 Thread Melvin Carvalho
On 26 July 2012 00:08, Kingsley Idehen kide...@openlinksw.com wrote:

 All,

 There is a tendency to assume an eternal lack of functional and scalable
 business models with regards to Linked Data. I think it's time for an open
 discussion about this matter.

 It's no secret, I've never seen business models as challenging Linked
 Data. Quite the contrary. That said, instead of a dump from me about my
 viewpoints on Linked Data models, how about starting this discussion by
 identifying any non Advertising based business model that have actually
 worked on the Web to date.

 As far as I know, Advertising and Surreptitious Personal Profile Data
 Wholesale are the only models that have made a difference to the bottom
 lines of: Google, Facebook, Twitter, Yahoo! and other non eCommerce
 oriented behemoths.

 Based on the above, let's have a serious and frank discussion about
 business models with the understanding and agreement that one size will never
 fit all, ever, so this rule cannot be overlooked re. Linked Data. Also
 remember, Business models aren't silver bullets, they are typically aligned
 with markets (qualified and quantified pain points) and the evolving nature
 of tangible and monetizable value.

 Hopefully, the floor is now open to everyone that has a vested interest in
 this very important matter :-)


I think we need a paid app store for the web

Plenty of examples of people writing apps, deploying them to mobile and
making good revenue

Why should it not be as easy to deploy to 'The Web'



 --

 Regards,

 Kingsley Idehen
 Founder  CEO
 OpenLink Software
 Company Web: http://www.openlinksw.com
 Personal Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca handle: @kidehen
 Google+ Profile: https://plus.google.com/112399767740508618350/about
 LinkedIn Profile: http://www.linkedin.com/in/kidehen








Re: Access Control Lists, Policies and Business Models

2012-08-16 Thread Melvin Carvalho
On 17 August 2012 01:39, Kingsley Idehen kide...@openlinksw.com wrote:

 All,

 Here's Twitter pretty much expressing the inevitable reality re. Web-scale
 business models: https://dev.twitter.com/blog/changes-coming-to-twitter-api


Limiting the number of users for 3rd party apps?  The mind boggles.


 There's no escaping the importance of access control lists and policy
 based data access.

 --

 Regards,

 Kingsley Idehen
 Founder  CEO
 OpenLink Software
 Company Web: http://www.openlinksw.com
 Personal Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca handle: @kidehen
 Google+ Profile: https://plus.google.com/112399767740508618350/about
 LinkedIn Profile: http://www.linkedin.com/in/kidehen








Re: Access Control Lists, Policies and Business Models

2012-08-16 Thread Melvin Carvalho
On 17 August 2012 01:39, Kingsley Idehen kide...@openlinksw.com wrote:

 All,

 Here's Twitter pretty much expressing the inevitable reality re. Web-scale
 business models: https://dev.twitter.com/blog/changes-coming-to-twitter-api

 There's no escaping the importance of access control lists and policy
 based data access.


A nice summary post (Thanks to Manu for spotting)

http://thenextweb.com/twitter/2012/08/17/twitter-4/



 --

 Regards,

 Kingsley Idehen
 Founder  CEO
 OpenLink Software
 Company Web: http://www.openlinksw.com
 Personal Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca handle: @kidehen
 Google+ Profile: https://plus.google.com/112399767740508618350/about
 LinkedIn Profile: http://www.linkedin.com/in/kidehen








India To Biometrically Identify All Of Its 1.2 Billion Citizens

2012-08-18 Thread Melvin Carvalho
http://singularityhub.com/2012/07/10/india-to-biometrically-identify-all-of-its-1-2-billion-citizens/

This reminds me a lot of foaf : dnaChecksum

I can conceive of possible LOD applications such as helping the poor, or
voting systems.

I wonder if this can be published as linked open data, something like an
electoral roll, or if there are privacy issues there?


Re: Vocabulary for reviewing businesses, places, ...?

2012-08-19 Thread Melvin Carvalho
On 19 August 2012 23:00, Kingsley Idehen kide...@openlinksw.com wrote:

 On 8/15/12 7:50 PM, Chaals McCathieNevile wrote:

 On Wed, 15 Aug 2012 23:43:48 +0200, Daniel O'Connor 
 daniel.ocon...@gmail.com wrote:

 http://support.google.com/webmasters/bin/answer.py?hl=en&answer=146645

 is a good starting point.


 That's pretty much where we are at already (Yandex also participates in
 schema.org). The question is whether anyone has already built the next
 vocabulary, describing the things being reviewed - we're happy to do it but
 it would be pretty silly to reinvent that wheel

 Cheers

 Chaals


 Chaals,

 There are a number of Review ontologies out in the wild. Here are some
 links:

 1. http://ontologi.es/like#
 2. http://vocab.org/review/terms.rdf
 3. http://vocab.org/review/terms.html


seeAlso : http://revyu.com/

vocab: http://purl.org/stuff/rev

Tho not sure how active the above still is ...



 Kingsley


  On Aug 16, 2012 6:09 AM, Charles McCathie Nevile 
 cha...@yandex-team.ru

 wrote:

  Hi,

 does anyone want to recommend a vocabulary that can be used for
 reviewing
 places, businesses, etc - cafes, holidays, booking agents, ...?

 I'm looking for terms that can describe the service, cleanliness, prompt
 response, etc.

 cheers

 Chaals

 --
 Charles McCathie Nevile - Consultant (web standards) CTO Office, Yandex
   cha...@yandex-team.ru Find more at http://yandex.com







 --

 Regards,

 Kingsley Idehen
 Founder  CEO
 OpenLink Software
 Company Web: http://www.openlinksw.com
 Personal Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca handle: @kidehen
 Google+ Profile: https://plus.google.com/112399767740508618350/about
 LinkedIn Profile: http://www.linkedin.com/in/kidehen








Re: Deserted Island Sem Web reading list

2012-09-12 Thread Melvin Carvalho
On 12 September 2012 20:30, ProjectParadigm-ICT-Program 
metadataport...@yahoo.com wrote:

 We are working on creating shortlists of books that are recommended for
 mastering complex issues in ICT for development (ICT4DEV).

 Because access to internet AND academic publications in a library AND
 science journals in a library AND off-the-shelf paperbacks via online
 vendors is a luxury only the USA, the European Union and a select group of
 other countries have to offer to its citizens I am asking the input of the
 list subscribers on the Ultimate List for Semantic Web and Linked Data for
 a deserted island. Only ten (10) titles are allowed.

 These ten books must cover all issues related to and/or relevant to linked
 data and the semantic web.

 Recommendations and additional pointers are welcome. If we can do with
 fewer than 10 books I'd like to hear; if more than 10 are absolutely in
 order, that's fine too.


Weaving The Web is essential to understanding the semantic web vision.



 Milton Ponson
 GSM: +297 747 8280
 PO Box 1154, Oranjestad
 Aruba, Dutch Caribbean
 Project Paradigm: A structured approach to bringing the tools for
 sustainable development to all stakeholders worldwide by creating ICT
 tools for NGOs worldwide and: providing online access to web sites and
 repositories of data and information for sustainable development

 This email and any files transmitted with it are confidential and intended
 solely for the use of the individual or entity to whom they are addressed.
 If you have received this email in error please notify the system manager.
 This message contains confidential information and is intended only for the
 individual named. If you are not the named addressee you should not
 disseminate, distribute or copy this e-mail.



Re: Linked Data Adoption Challenges Poll

2012-09-13 Thread Melvin Carvalho
On 13 September 2012 18:34, Kingsley Idehen kide...@openlinksw.com wrote:

 All,

 I've created a poll oriented towards capturing data about issues that
 folks find most challenging re., Linked Data Adoption.

 Please cast your vote as the results will be useful to all Linked Data
 stakeholders.

 Link: http://poll.fm/3w0cb .


I put other and fragmented ecosystem as the comment.

The academic world seems to be a system in its own right.  And if you're
not an academic, the projects can seem remote and disjointed.

There are quite a few businesses out there but tend to work on their own
thing.

The Gov stuff is also a system in its own right.

If you're just grass roots, especially if you don't or can't afford to attend
conferences, the whole thing can seem a bit of a maze.

I do love things like open mailing lists and IRC channels such as #swig,
the w3c wiki, and community groups, or even channels such as google+ have
recently started improving things.

But I always get the feeling that groups, with similar goals, are operating
in relative isolation.

Certainly there is progress in this direction, but the dream of the
semantic web to allow rich collaboration (with tight feedback loops), and a
feeling of all pulling in the same direction seems not yet to be 100%
realized ...

Just a $0.02



 --

 Regards,

 Kingsley Idehen
 Founder  CEO
 OpenLink Software
 Company Web: http://www.openlinksw.com
 Personal Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca handle: @kidehen
 Google+ Profile: https://plus.google.com/112399767740508618350/about
 LinkedIn Profile: http://www.linkedin.com/in/kidehen








Re: Linked Data Book in Early Access Release

2012-12-05 Thread Melvin Carvalho
On 5 December 2012 14:56, David Wood da...@3roundstones.com wrote:

 On Dec 5, 2012, at 08:46, Kingsley Idehen kide...@openlinksw.com wrote:

  On 12/5/12 7:55 AM, David Wood wrote:
  On Dec 5, 2012, at 06:34, Chris Beerch...@codex.net.au  wrote:
 
  snip
  
  http://www.manning.com/dwood/  itself doesn't seem to have any
 Linked Data
  to consume;)
  
  Makes sense to me - if you know enough to look for LD resources at the
  manning.com/dwood/ URI, you've just self evaluated that you probably
 don't
  need the book! :P
  I agree - and have been speaking with Manning about this.
  Unfortunately, I haven't made any progress yet.  I'll keep trying!
 
  Thanks for the mail.  Perhaps I can use it as proof to Manning that
 people do want LD on their site.
 
  Regards,
  Dave
 
 
  Dave,
 
  They have to understand that its sorta contradictory if they need to be
 convinced of this matter :-)

 Oh, I see your point and have made it myself.  Unfortunately, economics
 seems to be dictating otherwise to them for right or wrong.

 The only productive suggestion that has been made to me is to put up a
 parallel site for the book that includes LD.  Michael Hausenblas has
 offered the domain linkeddatadeveloper.com, which was his original site
 for the book but has fallen into disuse.  Of course, I would need to be
 willing to pay for the site and take the time to operate it.

 Would the community find that a useful thing to do?  I am willing to go to
 the effort if I receive a good number of positive responses.


In (I think the preface to) Weaving the Web, Tim wrote something like,
"The next one will be online, I promise".

The dream of linked data has always been to be a collaborative space to
both read and write to.

If it were feasible to dogfood such an endeavour, I think the dream of
linked data could be close to reality.

I wonder if we're close enough yet for this to be practical?



 Regards,
 Dave





Re: Linked Data Book in Early Access Release

2012-12-05 Thread Melvin Carvalho
On 5 December 2012 15:21, Dawson, Laura laura.daw...@bowker.com wrote:

 I actually spend quite a bit of time thinking about this. Given that an
 ebook is, fundamentally, an xhtml file, it seems feasible that structuring
 it and tagging it much as we do on the open web could lead to books
 themselves being linked. I am piloting some experiments with this in 2013
 at Bowker.

 Also, Weaving the Web is ironically NOT available as an ebook from
 HarperCollins.


Maybe not from HarperCollins, but I'm sure there is a digital version
somewhere.



 From: Melvin Carvalho melvincarva...@gmail.com
 To: David Wood da...@3roundstones.com
 Cc: Kingsley Idehen kide...@openlinksw.com, public-lod@w3.org 
 public-lod@w3.org
 Subject: Re: Linked Data Book in Early Access Release



 On 5 December 2012 14:56, David Wood da...@3roundstones.com wrote:

 On Dec 5, 2012, at 08:46, Kingsley Idehen kide...@openlinksw.com wrote:

  On 12/5/12 7:55 AM, David Wood wrote:
  On Dec 5, 2012, at 06:34, Chris Beerch...@codex.net.au  wrote:
 
  snip
  
  http://www.manning.com/dwood/  itself doesn't seem to have any
 Linked Data
  to consume;)
  
  Makes sense to me - if you know enough to look for LD resources at
 the
  manning.com/dwood/ URI, you've just self evaluated that you
 probably don't
  need the book! :P
  I agree - and have been speaking with Manning about this.
  Unfortunately, I haven't made any progress yet.  I'll keep trying!
 
  Thanks for the mail.  Perhaps I can use it as proof to Manning that
 people do want LD on their site.
 
  Regards,
  Dave
 
 
  Dave,
 
  They have to understand that its sorta contradictory if they need to be
 convinced of this matter :-)

 Oh, I see your point and have made it myself.  Unfortunately, economics
 seems to be dictating otherwise to them for right or wrong.

 The only productive suggestion that has been made to me is to put up a
 parallel site for the book that includes LD.  Michael Hausenblas has
 offered the domain linkeddatadeveloper.com, which was his original site
 for the book but has fallen into disuse.  Of course, I would need to be
 willing to pay for the site and take the time to operate it.

 Would the community find that a useful thing to do?  I am willing to go
 to the effort if I receive a good number of positive responses.


 In (I think the preface) to Weaving the Web, Tim wrote something like,
 The next one will be online, I promise

 The dream of linked data has always been to be a collaborative space to
 both read and write to.

 If it were feasible to dogfood such an endeavour, I think the dream of
 linked data could be close to reality.

 I wonder if we're close enough yet for this to be practical?



 Regards,
 Dave






Re: Linked Data Book in Early Access Release

2012-12-05 Thread Melvin Carvalho
On 5 December 2012 14:56, David Wood da...@3roundstones.com wrote:

 On Dec 5, 2012, at 08:46, Kingsley Idehen kide...@openlinksw.com wrote:

  On 12/5/12 7:55 AM, David Wood wrote:
  On Dec 5, 2012, at 06:34, Chris Beerch...@codex.net.au  wrote:
 
  snip
  
  http://www.manning.com/dwood/  itself doesn't seem to have any
 Linked Data
  to consume;)
  
  Makes sense to me - if you know enough to look for LD resources at the
  manning.com/dwood/ URI, you've just self evaluated that you probably
 don't
  need the book! :P
  I agree - and have been speaking with Manning about this.
  Unfortunately, I haven't made any progress yet.  I'll keep trying!
 
  Thanks for the mail.  Perhaps I can use it as proof to Manning that
 people do want LD on their site.
 
  Regards,
  Dave
 
 
  Dave,
 
  They have to understand that its sorta contradictory if they need to be
 convinced of this matter :-)

 Oh, I see your point and have made it myself.  Unfortunately, economics
 seems to be dictating otherwise to them for right or wrong.

 The only productive suggestion that has been made to me is to put up a
 parallel site for the book that includes LD.  Michael Hausenblas has
 offered the domain linkeddatadeveloper.com, which was his original site
 for the book but has fallen into disuse.  Of course, I would need to be
 willing to pay for the site and take the time to operate it.


How about this?

http://linked.data.fm/book.html

It's also LDP compliant ;)



 Would the community find that a useful thing to do?  I am willing to go to
 the effort if I receive a good number of positive responses.

 Regards,
 Dave





Re: Linked Data Book in Early Access Release

2012-12-05 Thread Melvin Carvalho
On 5 December 2012 15:36, Melvin Carvalho melvincarva...@gmail.com wrote:



 On 5 December 2012 14:56, David Wood da...@3roundstones.com wrote:

 On Dec 5, 2012, at 08:46, Kingsley Idehen kide...@openlinksw.com wrote:

  On 12/5/12 7:55 AM, David Wood wrote:
  On Dec 5, 2012, at 06:34, Chris Beerch...@codex.net.au  wrote:
 
  snip
  
  http://www.manning.com/dwood/  itself doesn't seem to have any
 Linked Data
  to consume;)
  
  Makes sense to me - if you know enough to look for LD resources at
 the
  manning.com/dwood/ URI, you've just self evaluated that you
 probably don't
  need the book! :P
  I agree - and have been speaking with Manning about this.
  Unfortunately, I haven't made any progress yet.  I'll keep trying!
 
  Thanks for the mail.  Perhaps I can use it as proof to Manning that
 people do want LD on their site.
 
  Regards,
  Dave
 
 
  Dave,
 
  They have to understand that its sorta contradictory if they need to be
 convinced of this matter :-)

 Oh, I see your point and have made it myself.  Unfortunately, economics
 seems to be dictating otherwise to them for right or wrong.

 The only productive suggestion that has been made to me is to put up a
 parallel site for the book that includes LD.  Michael Hausenblas has
 offered the domain linkeddatadeveloper.com, which was his original site
 for the book but has fallen into disuse.  Of course, I would need to be
 willing to pay for the site and take the time to operate it.


 How about this?

 http://linked.data.fm/book.html

 It's also LDP compliant ;)


BTW you can do quite a lot in design mode :
http://www.quirksmode.org/dom/execCommand/





 Would the community find that a useful thing to do?  I am willing to go
 to the effort if I receive a good number of positive responses.

 Regards,
 Dave






Re: Linked Data Book in Early Access Release

2012-12-07 Thread Melvin Carvalho
On 7 December 2012 19:24, Kingsley Idehen kide...@openlinksw.com wrote:

 On 12/7/12 12:30 PM, David Wood wrote:

 On Dec 6, 2012, at 17:33, Kingsley Idehen kide...@openlinksw.com wrote:

  On 12/6/12 5:22 PM, Kingsley Idehen wrote:

 On 12/6/12 5:12 PM, David Wood wrote:

 This seems like good guidance for anyone wishing to provide LD the
 easiest possible way: Add a Turtle file, link to it and provide a link
 header.  Of course, we need to live with a bogus HTTP Content-Type, but
 that's unfortunately quite common even for people who control their own
 server.

 Thanks again to Manning for the quick response!

 Exactly!

 Linked Data should be discoverable to a variety of user agent profiles:

 1. HTML oriented -- use a <link/>-based Web Linking pattern in <head/>
 2. HTTP oriented -- repeat via the Link: response header
 3. RDF aware -- in the content include wdrs:describedby, foaf:topic
 etc. relations.
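Concretely, the three discovery patterns above might look like this for a hypothetical document (the example.com URLs are placeholders, not the book's actual data):

```
<!-- 1. HTML: Web Linking in <head/> -->
<link rel="describedby" href="http://example.com/book/data.ttl"
      type="text/turtle" />

# 2. HTTP: the same relation as a Link response header
Link: <http://example.com/book/data.ttl>; rel="describedby"; type="text/turtle"

# 3. RDF: asserted in the Turtle content itself
@prefix wdrs: <http://www.w3.org/2007/05/powder-s#> .
<http://example.com/book> wdrs:describedby <http://example.com/book/data.ttl> .
```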


  David,

 In addition to the above, is it possible to have the mime type fixed? At
 the current time, I still get:

 Hi Kingsley,

 Maybe.  Manning is using the Yahoo! Small Business Web Hosting service.
  I have asked some people at Yahoo! to repair the Content-Types and I will
 revisit this if they don't.


 How about this:

 1. They add another <link/> relationship to <head/>
 2. The link points to a Turtle doc in a data space you control -- absolute
 worst case (as I believe you already have your own space), just can get
 free data space from the likes of Dropbox, SkyDrive, Amazon S3 etc
 3. You set the correct content-type for the Turtle doc in your data space
 4. Ditto fixing any other issues e.g., entity name ambiguity etc..
 5. Done :-)


I've often thought there should be an open world, distributed, fault
correction service for sites that give the wrong content type.  Could solve
a lot of problems.
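For site owners who do control their server, the usual fix is a one-line media-type mapping rather than any correction service. A sketch for a stock Apache httpd, assuming mod_mime is loaded and .htaccess overrides are enabled:

```apache
# Serve .ttl files with the registered Turtle media type
# instead of application/octet-stream
AddType text/turtle .ttl
```

Hosted platforms that don't expose this configuration are exactly the hard case being discussed here.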



 Kingsley


 Regards,
 Dave
 --
 http://about.me/david_wood


  curl -I 
 http://manning.com/dwood/**LinkedData.ttlhttp://manning.com/dwood/LinkedData.ttl
 HTTP/1.1 200 OK
 Date: Thu, 06 Dec 2012 22:32:01 GMT
 P3P: policyref=http://info.yahoo.com/w3c/p3p.xml,
 CP=CAO DSP COR CUR ADM DEV TAI PSA PSD IVAi IVDi CONi TELo OTPi OUR DELi
 SAMi OTRi UNRi PUBi IND PHY ONL UNI PUR FIN COM NAV INT DEM CNT STA POL HEA
 PRE LOC GOV
 Last-Modified: Thu, 06 Dec 2012 00:44:18 GMT
 Accept-Ranges: bytes
 Content-Length: 2544
 *Content-Type: application/octet-stream*
 Age: 0
 Connection: close


 --

 Regards,

 Kingsley Idehen
 Founder  CEO
 OpenLink Software
 Company Web: http://www.openlinksw.com
 Personal Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca handle: @kidehen
 Google+ Profile: https://plus.google.com/112399767740508618350/about
 LinkedIn Profile: http://www.linkedin.com/in/kidehen







 --

 Regards,

 Kingsley Idehen
 Founder  CEO
 OpenLink Software
 Company Web: http://www.openlinksw.com
 Personal Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca handle: @kidehen
 Google+ Profile: https://plus.google.com/112399767740508618350/about
 LinkedIn Profile: http://www.linkedin.com/in/kidehen








Re: Linked Data Driven RWW Demo Experiment

2012-12-14 Thread Melvin Carvalho
On 14 December 2012 15:39, Kingsley Idehen kide...@openlinksw.com wrote:

  On 12/14/12 8:08 AM, Melvin Carvalho wrote:



 On 13 December 2012 21:19, Kingsley Idehen kide...@openlinksw.com wrote:

  All,

 Experiment Steps:

 1. Attempt to lookup the following resource URL:
 http://web.ods.openlinksw.com/~kidehen/bing-snapshot-picasso.png
 2. Send me you preferred identifier (a URI displayed in the profile UI
 under Connected Accounts)
 3. **Optionally** open up an account via 
 http://web.ods.openlinksw.comhttp://web.ods.openlinksw.com-- using 
 whatever identification and authentication service works best for
 you (note: the profile UI has a connected accounts tab which will expose
 URIs to you re. next step)
 4. I'll then add the URI to the ACL protecting the resource -- again, you
 don't really need to sign up for a new account to complete this step, I
 just need an identifier aligned with an authentication service
 5. I'll reply indicating you have access
 6. Done.
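The ACL logic behind steps 4 and 5 above can be sketched roughly as follows. This is a minimal illustration only; the in-memory set and the `grant_access`/`may_read` names are hypothetical, not ODS's actual API:

```python
# Hypothetical sketch of the experiment's access-control step: the resource
# owner adds a verified identifier (a URI) to an ACL, and incoming requests
# from authenticated agents are checked against it.

allowed_agents = set()

def grant_access(agent_uri: str) -> None:
    """Add a verified identifier (WebID, Twitter URI, etc.) to the ACL."""
    allowed_agents.add(agent_uri)

def may_read(agent_uri: str) -> bool:
    """Authorize a request: the authenticated agent must be on the ACL."""
    return agent_uri in allowed_agents

grant_access("http://twitter.com/kidehen")
print(may_read("http://twitter.com/kidehen"))      # True
print(may_read("http://twitter.com/someoneelse"))  # False
```

The point of the experiment is that any URI an authentication service can verify (via WebID+TLS, OpenID, OAuth, etc.) works as the ACL entry, with no new silo account required.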

 Note, these steps can (and will) be streamlined using notification
 services such that you end up with a variant of Juergen's example which I
 posted about at:
 https://plus.google.com/u/0/112399767740508618350/posts/MVEcEkY6ZCz .

 Without Web- and Internet-scale Linked Data this simply would not be
 achievable without opening up yet another artificial silo.


 Works for me with twitter and persona


 Great!

 Anyone else?

 All:

 Basically, all you really have to do is just let me (the resource owner)
 know how you want to be identified. Examples would include:

 1. Twitter -- http://twitter.com/userid e.g., http://twitter.com/kidehen
 2. LinkedIn -- http://www.linkedin.com/in/userid e.g.,
 http://www.linkedin.com/in/kidehen
 3. Facebook -- http://www.facebook.com/userid e.g.,
 https://www.facebook.com/kidehen
 4. etc..

 Basically, each of these social networking/media service providers mints a
 verifiable URI for its members. In many cases, these days, you can verify
 these URIs by testing for control over said URIs using protocols such as
 OpenID, OAuth etc.. Naturally, you can also leverage WebID and its
 WebID+TLS authentication protocol in the same manner, just by sharing your
 WebID with the resource owner (me).

 Remember, this is a deceptively simple example of the power of Linked
 Data applied to the following problems:

 1. verifiable identity
 2. resource access authorization .

 Also note, we are going to be releasing this Javascript control as part of
 our ODS social web framework project on Github.


Awesome!  I would definitely like to use something like this in my apps for
login.

For anyone that hasn't tried it yet, I'd encourage you to give it a go.  You
just need an account with any of:

WebID
Persona
OpenID
Facebook
Twitter
LinkedIn
Google
Windows Live
Wordpress
Yahoo
Tumblr
Disqus
Instagram
Bitly
Foursquare
Dropbox

Then click: http://web.ods.openlinksw.com/~kidehen/bing-snapshot-picasso.png

Do let us know if it works! :)



 --

 Regards,

 Kingsley Idehen   
 Founder  CEO
 OpenLink Software
 Company Web: http://www.openlinksw.com
 Personal Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca handle: @kidehen
 Google+ Profile: https://plus.google.com/112399767740508618350/about
 LinkedIn Profile: http://www.linkedin.com/in/kidehen






Proposal: register /.well-known/sparql with IANA

2012-12-22 Thread Melvin Carvalho
May I propose that we register the well known address

/.well-known/sparql

with IANA.

This could be a sparql endpoint for the domain queried, and a helpful
shortcut for both web based discovery, and also write operations via sparql
update.
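For illustration, the discovery shortcut the proposal would enable looks like this: given only a site's base URL, a client can derive the endpoint location with no lookup at all. A minimal sketch (note the /.well-known/sparql path is proposed here, not registered):

```python
# Derive the proposed well-known SPARQL endpoint URL for a site.
# Per RFC 5785, well-known URIs live at the absolute path /.well-known/,
# so urljoin replaces any existing path on the base URL.
from urllib.parse import urljoin

def well_known_sparql(base: str) -> str:
    """Return the (proposed) well-known SPARQL endpoint URL for a site."""
    return urljoin(base, "/.well-known/sparql")

print(well_known_sparql("http://example.org/some/page"))
# http://example.org/.well-known/sparql
```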


Re: Proposal: register /.well-known/sparql with IANA

2012-12-22 Thread Melvin Carvalho
On 22 December 2012 15:41, David Wood da...@3roundstones.com wrote:

 On Dec 22, 2012, at 08:57, Melvin Carvalho melvincarva...@gmail.com
 wrote:

  May I propose that we register the well known address
 
  /.well-known/sparql
 
  with IANA.
 
  This could be a sparql endpoint for the domain queried, and a helpful
 shortcut for both web based discovery, and also write operations via sparql
 update.


 +1.  Now, who is we?


Some comments from michael hausenblas:

[[
Very simple, though not exactly quick ;)

Just send to wellknown-uri-rev...@ietf.org following [1].

However, note that the SPARQL WG pushed back when I suggested it:
http://lists.w3.org/Archives/Public/public-rdf-dawg-comments/2010Jun/.html

Cheers,
   Michael

[1] http://tools.ietf.org/html/rfc5785#section-5.1.1
]]

So I think anyone can register this; if there's interest, it would probably
just need reopening the conversation on the SPARQL WG mailing list.



 Regards,
 Dave
 --
 http://about.me/david_wood





Re: Linked Data Dogfood circa. 2013

2013-01-04 Thread Melvin Carvalho
On 5 January 2013 00:14, Kingsley Idehen kide...@openlinksw.com wrote:

 On 1/4/13 4:02 PM, Giovanni Tummarello wrote:

 One might just simply stay silent and move along, but i take a few
 seconds to restate the obvious.

 It is a fact that Linked Data as "publish some stuff and they will
 come", for both new publishers and consumers, has failed.


 Of course it hasn't. How have we (this community) arrived at a LOD Cloud
 way in excess of 50 Billion+ useful triples? I just can't accept this kind
 of dismissal, it has adverse effects on the hard work of many that are
 continuously contributing to the LOD Cloud effort.



 The idea of putting in some extra energy would simply be useless per se,
 BUT it becomes wrong when one tries to involve others, e.g. gullible
 newcomers, fresh Ph.D. students who trust that hey, if my Ph.D. advisor
 made a career out of it, and the EU gave him so much money, it must be
 real, right?


 Teach people how to make little bits of Linked Data in Turtle. The RDBMS
 world successfully taught people how to make Tables and execute simple
 queries using SQL, in the ultimate data silos i.e., RDBMS engines. The same
 rules apply here with the advantage of a much more powerful, open, and
 ultimately useful language in SPARQL. In addition to that, you have a
 superior data source name (DSN) mechanism in HTTP URIs, and superior Data
 Access that's all baked into HTTP.

 Last year I ensured every employee at OpenLink could write Turtle by hand.
 They all performed a basic exercise [1][2]: describe yourself and/or
 stuff you like. The process started slow and ended with everyone having a
 lot of fun.

 Simple message to get folks to engage: if you know illiteracy leads to
 competitive disadvantage in the physical (real) world, why accept
 illiteracy in the ultra competitive digital realm of the Web? Basically, if
 you can write simple sentences in natural language, why not learn to do the
 same with the Web realm in mind?  Why take the distracting journey of
 producing an HTML file when you can dump content such as what follows into
 a file?

 ## Turtle Start ##

  <> a <#Document> .
  <> <#topic> <#i> .
  <#i> <#name> "Kingsley Idehen" .
  <#i> <#nickname> "@kidehen" .

 ## Bonus bits: Cross References to properties defined by existing
 vocabularies
 ## In more serious exercises this section would be where DBpedia and other
 LOD cloud URIs kick-in.

  <#name> owl:equivalentProperty foaf:name .
  <#topic> owl:equivalentProperty foaf:topic .
  <#nickname> owl:equivalentProperty foaf:nick .
  <#Document> owl:equivalentClass foaf:Document .
  <#i> owl:sameAs <http://kingsley.idehen.net/dataspace/person/kidehen#this> .

 ## Turtle End ##

 Don't underestimate the power of human intelligence, once awakened :-)
 The above is trivial for any literate person to comprehend. Remember, they
 already understand natural language sentence structure expressed in:
 subject-predicate-object or subject-verb-object form.



  As a community of people who claim to have something to do with
 research (and not a cult), what we should do every once in a while is
 learn from the above lesson and devise NEW methods and strategies.


 Yes, and the lesson we've learned over the years is that premature
 optimization is suboptimal when dealing with Linked Data. Basically, you
 have to teach Linked Data using manual document production steps i.e.,
 remind them of the document create and share pattern. Once this is
 achieved, they'll immediately realize there's a lot of fun to being able to
 represent structured data with ease, but at the expense of limited free
 time -- the very point when productivity oriented tools and services come
 into play.


Nice!

There should be official sem web tests, badges and achievements based on
passing things like this.  Described in linked data of course!
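As a rough illustration of what such a badge might look like when described in Linked Data, here is a sketch that emits a tiny Turtle document. The `ex:` vocabulary (`ex:Badge`, `ex:name`, `ex:awardedTo`) is invented for the example, not an existing ontology:

```python
# Hypothetical sketch: describe a "sem web achievement badge" as Linked Data
# by emitting a small Turtle document. The ex: vocabulary is made up here.
def badge_turtle(person_uri: str, badge_name: str) -> str:
    """Emit a tiny Turtle document describing an achievement badge."""
    return (
        "@prefix ex: <http://example.org/badges#> .\n"
        "<#badge> a ex:Badge ;\n"
        f'    ex:name "{badge_name}" ;\n'
        f"    ex:awardedTo <{person_uri}> .\n"
    )

print(badge_turtle("http://twitter.com/kidehen", "Turtle Literacy"))
```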




  In other words,
 move ahead in a smart way.


 Yes, but there isn't one smart way. For us humans the quest is always rife
 with context fluidity. Thus, the horses for courses rule always applies. No
 silver bullets.



 I am by no means throwing it all away.


 Good!



 * publishing structured data on the web is already a *huge thing* with
 schema.org and the rest.


 Yes, but that's a useful piece of the picture. Not the picture.
 HTML+Microdata and (X)HTML+RDFa are not for end-users. Turtle is for
 end-users, so it too has to be part of the mix when the target audience is
 end-users.


  Why? because of the clear incentive SEO.


 SEO is only a piece of the picture. Yes, everyone wants to be discovered
 by Google, for now, but that isn't the Web's ultimate destiny. What people
 really want is serendipitous discovery of relevant information as an
 intrinsic component of the virtuous cycle associated with content sharing
 via the Web.


  * RDF is a great model for heterogeneous data integration and i think
 it will explode in (certain) enterprises (knowledge intensive)


 RDF provides specific benefits lost in a warped narrative. Its USP boils
 down 

Re: A Distributed Economy -- A blog involving Linked Data

2013-01-07 Thread Melvin Carvalho
On 6 January 2013 18:22, Kingsley Idehen kide...@openlinksw.com wrote:

 On 1/6/13 5:46 AM, Michael Brunnbauer wrote:

 Hello Kingsley,

 On Sat, Jan 05, 2013 at 10:25:32AM -0500, Kingsley Idehen wrote:

 1. Create documents that describe items of interest

 [...]

 for time and attention challenged end-users this has to be Turtle

 [...]

 4. Make others aware of your document via services like Twitter,
 Facebook, LinkedIn, G+ etc.. posts

 Are you seriously proposing that people should publish links to Turtle
 files
 on social networks ?


 No.

 I am seriously proposing they publish Turtle documents to the Web :-)



 Have you tried this with someone else than your employees ?


 Yes, all my kids, my siblings, and personal friends. The results are all
 the same: they come to grok the concept of digital sentences that are just
 shorthand for natural language sentences.

 As I said, my conversation starts on the following fundamental premise:
 illiteracy is a shortcut to competitive disadvantage in the physical world,
 since that's a fact, why would it be any different in a digital realm like
 the Web?

 Assuming literate humans can't grok Turtle is one of the biggest mistakes
 many of us made (myself included) many years ago.

 ## Natural Language Content Start ##
 This is a Document about me (as in I).
 Document content is as follows:
 I am a Person.
 My name is Kingsley Idehen.
 My nickname is @kidehen.

 ## End ##

 ## Turtle Content Start ##
  <> a <#Document> .
  <> <#topic> <#i> .
  <#i> a <#Person> .
  <#i> <#name> "Kingsley Idehen" .
  <#i> <#nickname> "@kidehen" .


I wonder if it's better to use ':' in the predicates, rather than '#'?



 ## End ##

 Try it out with your kids, family members, and friends outside the
 Semantic Web and Linked Data communities you'll have an amazing amount of
 fun when you open up the documents via a Turtle processor (many of which
 are browser extensions) [1], especially when you cross reference to
 DBpedia, FOAF etc..

 Links:

 1. http://ode.openlinksw.com -- an example of an RDF processor (for all
 the syntaxes) that installs as a browser extension, across all major
 browsers (you can make it automatically handle mime type: text/turtle,
 within Chrome, Firefox, and possibly Safari, if that's been released).

 Happy New Year!



 Regards,

 Michael Brunnbauer



 --

 Regards,

 Kingsley Idehen
 Founder  CEO
 OpenLink Software
 Company Web: http://www.openlinksw.com
 Personal Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca handle: @kidehen
 Google+ Profile: https://plus.google.com/112399767740508618350/about
 LinkedIn Profile: http://www.linkedin.com/in/kidehen







