OpenLink Structured Data Sniffer & JSON-LD

2016-07-28 Thread Kingsley Idehen
All,

We recently released an edition of the OpenLink Structured Data Sniffer
(OSDS) [1] that takes support for JSON-LD beyond what's embedded in HTML
docs via script elements [2]. In a nutshell, this browser extension (for
Chrome, Opera, Firefox, and Vivaldi) also includes:

1. support for JSON-LD embedded in nanotations -- i.e., creating and
publishing 5-Star Linked Open Data wherever text input is supported

2. native visualization support in browsers for "application/ld+json"
content from HTTP-accessible documents.
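For context, here is a minimal "application/ld+json" payload of the kind
the extension visualizes (the data is illustrative, not from the
extension itself):

```python
import json

# A minimal JSON-LD document of the kind served with the
# "application/ld+json" media type (values are illustrative).
doc = {
    "@context": "http://schema.org",
    "@type": "Person",
    "name": "Jane Doe",
    "url": "http://example.com/jane#this",
}

# Round-trip through the serialized form a browser extension would fetch.
payload = json.dumps(doc, indent=2)
parsed = json.loads(payload)
print(parsed["@type"])
```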

Enjoy!

Links:

[1] http://osds.openlinksw.com -- OpenLink Structured Data Sniffer

[2]
https://medium.com/openlink-software-blog/sentences-notations-8b090cf28574#.txzuo2nqt
-- Sentences & Notations.

-- 
Regards,

Kingsley Idehen   
Founder & CEO 
OpenLink Software 
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this




smime.p7s
Description: S/MIME Cryptographic Signature


Re: Business Of Linked Data: Opportunities re., Smart Agents (Bots)

2016-07-14 Thread Kingsley Idehen
On 7/8/16 10:40 AM, Kingsley Idehen wrote:
> All,
>
> Smart Agents and Bots are now hot topics across the industry at large.
>
> Bearing in mind years of knowledge and experience this community has in
> regards to Bots and Smart Agents [1], are there any Linked Data driven
> Smart Agents out there? Basically, conversational bots that have the ability 
> to leverage semantically-rich Linked Open Data clouds. 
>
> Personally, I believe this new wave of interest in Bots provides a great
> opportunity to showcase the value proposition of a Semantic Web built
> using Linked Open Data.
>
> Historically, there has been a disconnect between the technology and business 
> opportunities related to Linked Data and a Semantic Web. Thus, I am opening a 
> discussion thread to explore the business related aspects of this matter 
> (i.e., this has nothing to do with research, papers, and conferences).
>  Links:
>
> [1]
> http://linkeddata.uriburner.com/describe/?url=https%3A%2F%2Ftwitter.com%2Fhashtag%2FSmartAgent%23this=1
> -- Smart Agent Notes collated over time.
>
> [2] 
> http://techemergence.com/valuing-the-artificial-intelligence-market-2016-and-beyond/
>  -- Market Size & Projections Example

All,

Here's an example of a Bot [1] in line with the discussion I was trying
to instigate. I've also joined a thread on Product Hunt where some
related work is taking shape re. Facebook Messenger [2].

[1] http://growthbot.org/

[2] https://www.producthunt.com/tech/growthbot#comment-326494

-- 
Regards,

Kingsley Idehen   
Founder & CEO 
OpenLink Software 
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this






Re: Where are the Linked Data Driven Smart Agents (Bots) ?

2016-07-06 Thread Kingsley Idehen
On 7/6/16 12:38 PM, Ruben Verborgh wrote:
> Hi,
>
> This is a very important question for our community,
> given that smart agents once were an important theme.
> Actually, the main difference we could bring with the SemWeb
> is that our clients could be decentralized
> and actually run on the client side, in contrast to others.
>
> One of the main problems I see is how our community
> (now particularly thinking about the scientific subgroup)
> receives submissions of novel work.
> We have evolved into an extremely quantitative-oriented view,
> where anything that can be measured with numbers
> is largely favored over anything that cannot.
>
> Given that the smart agents / bots field is quite new,
> we don't know the right evaluation metrics yet.
> As such, it is hard to publish a paper on this
> at any of the main venues (ISWC / ESWC / …).
> This discourages working on such themes.
>
> Hence, I see much talent and time going to
> incremental research, which is easy to evaluate well,
> but not necessarily as ground-breaking.
> More than a decade of SemWeb research
> has mostly brought us intelligent servers,
> but not yet the intelligent clients we wanted.
>
> So perhaps we should phrase the question more broadly:
> how can we as a community be more open
> to novel and disruptive technologies?
>
> Best,
>
> Ruben

Hi Ruben,

On my part, I am really pinging the community (due to deafening silence)
about a pendulum swing back to its fundamental strengths--structured
data (represented as sentences) endowed with machine- and
human-comprehensible entity relationship type semantics.

Eons ago (Semweb time) use of various Bots (Jenni [1], Micro Turtle [2],
Phenny [3], and friends) on IRC channels was the norm. Those agents were
early Semantic Web utility demos confined to IRC rather than the broader
Web.

Twitter and Slack are just pretty looking modern variants of what IRC
delivers, as most folks in this community certainly already know.

Today, we have an opportunity to rehash, recast, port, or build new bots
performing the very same tasks, but across a Web of (Linked) Data.

Fundamentally, mercurial GUI-oriented UI/UX should be less of a hurdle
to Linked Data and Semantic Web utility showcases. Why? Because bot
interactions are predominantly conversational interfaces that ultimately
depend on the ability to process information encoded as sentences, and
in RDF (the Language) we have the ultimate tool for compact sentence
representation :)
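To illustrate that last point with a toy example (the subject and object
URIs are mine, not from the thread): an English sentence like "this bot
knows Kingsley" maps directly onto one compact RDF sentence:

```python
# The English sentence "this bot knows Kingsley" rendered as a
# Turtle/N-Triples style triple; the bot's URI is illustrative.
subject = "<http://example.com/bot#this>"
predicate = "<http://xmlns.com/foaf/0.1/knows>"
obj = "<http://kingsley.idehen.net/dataspace/person/kidehen#this>"

sentence = f"{subject} {predicate} {obj} ."
print(sentence)
```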

Links

[1]
http://www.zoharbabin.com/wp-content/uploads/2013/02/irc-log-kc-search-module.png
-- Jenni bot

[2]
http://media-cache-ak0.pinimg.com/originals/63/25/37/632537949aeea66cbf7aa8fba73c9e46.jpg
-- Micro Turtle Bot

[3]
http://media-cache-ak0.pinimg.com/originals/d1/e9/6a/d1e96adb4fcb720659db75aaa3e38b92.jpg
-- Phenny Bot.

-- 
Regards,

Kingsley Idehen   
Founder & CEO 
OpenLink Software 
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this






Re: Where are the Linked Data Driven Smart Agents (Bots) ?

2016-07-06 Thread Kingsley Idehen
On 7/6/16 11:38 AM, Melvin Carvalho wrote:
>
>
> On 6 July 2016 at 17:22, Kingsley Idehen <kide...@openlinksw.com
> <mailto:kide...@openlinksw.com>> wrote:
>
> All,
>
> Smart Agents and Bots are now hot topics across the industry at large.
>
> Bearing in mind years of knowledge and experience this community
> has in
> regards to Bots and Smart Agents [1], are there any Linked Data driven
> Smart Agents out there?
>
> Personally, I believe this new wave of interest in Bots provides a
> great
> opportunity to showcase the value proposition of a Semantic Web built
> using Linked Open Data.
>
>
> I have started working on an early prototype using the kue[1] library
>
> Source code: https://github.com/solid-live/solidbot
>
> I also think this is the way forward, and would like to see such bots
> cooperating using LOD. 
>
> My intuition tells me that knowledge is valuable.  Smart bots enable
> turning an inert hard drive into something of value, but for the bot
> owner, and their connections.  I have a vague hope that this value can
> be, at some point, converted into a kind of basic income, for those
> who opt in.
>
> [1] https://github.com/Automattic/kue

Hi Melvin,

I will certainly take a look, pronto!


-- 
Regards,

Kingsley Idehen   
Founder & CEO 
OpenLink Software 
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this





Where are the Linked Data Driven Smart Agents (Bots) ?

2016-07-06 Thread Kingsley Idehen
All,

Smart Agents and Bots are now hot topics across the industry at large.

Bearing in mind years of knowledge and experience this community has in
regards to Bots and Smart Agents [1], are there any Linked Data driven
Smart Agents out there?

Personally, I believe this new wave of interest in Bots provides a great
opportunity to showcase the value proposition of a Semantic Web built
using Linked Open Data.


Links:

[1]
http://linkeddata.uriburner.com/describe/?url=https%3A%2F%2Ftwitter.com%2Fhashtag%2FSmartAgent%23this=1
-- Smart Agent Notes collated over time.


-- 
Regards,

Kingsley Idehen   
Founder & CEO 
OpenLink Software 
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this






Re: [foaf-dev] FOAF Relationship

2016-04-18 Thread Kingsley Idehen
On 4/18/16 2:47 PM, Seth Grimes wrote:
> Yes, that's pretty much it, thanks. Anaphora (pronoun) resolution
> would be a big plus.
>
> Seth

Do you have any sample document URLs? They would help anyone who builds
these kinds of solutions and is seeking to respond to your request.


Kingsley
>
>
> On Mon, 18 Apr 2016, Kingsley Idehen wrote:
>
>> On 4/18/16 9:44 AM, Seth Grimes wrote:
>>> Hello,
>>>
>>> Are there any available information-extraction systems that
>>> implement the FOAF Relationship vocabulary
>>> (http://vocab.org/relationship/)? By available, I mean commercial or
>>> open source and currently maintained. My particular interest at this
>>> moment is identification and extraction of familial relationships from
>>> documents, and preferably also of attributes associated with the
>>> relationships, and representation of extracted relationships.
>>>
>>> Thanks,
>>>
>>> Seth
>>>
>>>
>>>
>>> -- 
>>> Seth Grimesgri...@altaplana.com   +1 301-270-0795@sethgrimes
>>> Alta Plana Corp, analytics strategy consulting, http://altaplana.com
>>> Sentiment Analysis Symposium, July 12 in NYC, SentimentSymposium.com
>>
>> Hi Seth,
>>
>> Are you looking for technology that would perform the following tasks?
>>
>> [1] analyze a document containing a collection of English sentences
>> representing familial relationships
>> [2] extract said relationships and then convert to RDF sentences using
>> terms from the FOAF vocabulary
>> [3] return an RDF document and the final output artifact.
>>
>> I've copied the LOD list in on this response to broaden audience for
>> this exchange.
>>
>>
>
> -- 
> Seth Grimesgri...@altaplana.com   +1 301-270-0795@sethgrimes
> Alta Plana Corp, analytics strategy consulting, http://altaplana.com
> Sentiment Analysis Symposium, July 12 in NYC, SentimentSymposium.com


-- 
Regards,

Kingsley Idehen   
Founder & CEO 
OpenLink Software 
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this






Re: [foaf-dev] FOAF Relationship

2016-04-18 Thread Kingsley Idehen
On 4/18/16 9:44 AM, Seth Grimes wrote:
> Hello,
>
> Are there any available information-extraction systems that
> implement the FOAF Relationship vocabulary
> (http://vocab.org/relationship/)? By available, I mean commercial or
> open source and currently maintained. My particular interest at this
> moment is identification and extraction of familial relationships from
> documents, and preferably also of attributes associated with the
> relationships, and representation of extracted relationships.
>
> Thanks,
>
> Seth
>
>
>
> -- 
> Seth Grimesgri...@altaplana.com   +1 301-270-0795@sethgrimes
> Alta Plana Corp, analytics strategy consulting, http://altaplana.com
> Sentiment Analysis Symposium, July 12 in NYC, SentimentSymposium.com

Hi Seth,

Are you looking for technology that would perform the following tasks?

[1] analyze a document containing a collection of English sentences
representing familial relationships
[2] extract said relationships and then convert to RDF sentences using
terms from the FOAF vocabulary
[3] return an RDF document as the final output artifact.

I've copied the LOD list in on this response to broaden the audience for
this exchange.
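A toy sketch of the first two tasks above (the regex pattern, the names,
and the choice of Relationship vocabulary term are illustrative
assumptions, not an actual extraction system):

```python
import re

# Pull a familial relation from one English sentence and emit an
# RDF-style triple using a Relationship vocabulary term; the sentence,
# pattern, and URIs are illustrative.
sentence = "Alice is the parent of Bob."
match = re.match(r"(\w+) is the (\w+) of (\w+)\.", sentence)
subj, relation, obj = match.groups()  # match succeeds for this sentence

triple = (
    f"<#{subj}> "
    f"<http://purl.org/vocab/relationship/{relation}Of> "
    f"<#{obj}> ."
)
print(triple)
```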

-- 
Regards,

Kingsley Idehen   
Founder & CEO 
OpenLink Software 
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this






OpenLink Structured Data Editor -- Update

2016-03-19 Thread Kingsley Idehen
All,

We've finally put together an initial web site containing a collection of
documents that describe our Structured Data Editor (OSDE). As should be
expected, the page includes Structured Data created using the Editor :)

New Features:

[1] Cut & paste as a direct input option for Turtle content
[2] A Web Service API for loose integration with other services.

Links:

[1] http://osde.openlinksw.com
[2] http://osde.openlinksw.com/#UsageScreencasts -- Some Usage Demos
[3] http://linkeddata.uriburner.com/rdf-editor -- Live Instance.

Enjoy!

-- 
Regards,

Kingsley Idehen   
Founder & CEO 
OpenLink Software 
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this






Announcement: OpenLink Structured Data Sniffer (OSDS) 2.6.1 Released

2016-01-30 Thread Kingsley Idehen
All,

I am pleased to announce immediate availability of the 2.6.1 edition of
our Structured Data Sniffer extension for Chrome, Opera, Firefox, and
Vivaldi. I've published detailed notes on Medium [1] and Google Plus [2]
that include live examples of the new features and their benefits.


New Features Summary:

  * Nanotation sniffing — ability to sniff out RDF-Turtle (or
RDF-Ntriples) statements embedded in the body of HTML docs and/or
text pages


  * Improved Default SPARQL Query — clicking on the “Query” button
returns a page enhanced (via Natural Language Processing, Machine
Learning, and Entity Extraction) for immediate Linked Data exploration

  * Sniffed content download — you can download extracted data to local
    Turtle or JSON-LD documents (including mounted folders from the
    likes of Dropbox, Google Drive, OneDrive, etc.).


Personally, Data Flow across Data Silos is a key benefit delivered by
this release. Why? Because notations such as RDF-Turtle and RDF-Ntriples
can be used to craft structured data islands within any kind of document
without depending on the whims of any of the following:

1. article author -- since you can embed your notes (RDF-Turtle or
RDF-Ntriples based nanotations) in comments that are associated with a
post or in your own document publishing space

2. social media space owner --- you no longer have to wait for social
media spaces to support or publish metadata (or general structured data
islands) using RDF-Turtle or RDF-Ntriples, just post your nanotations
and be done with it (I have many examples of this powerful capability)

3. beyond tagging and bookmarking -- you can now craft important notes
that simply augment your knowledge base (wherever it might be on an HTTP
network) while also being able to save content to the cloud or your
local disk.
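As a rough illustration, a nanotation is just an RDF statement
(N-Triples style here) dropped into ordinary text, which a sniffer can
pull back out; the URIs and the naive pattern below are illustrative:

```python
import re

# Plain text with an embedded N-Triples-style "nanotation"
# (URIs are illustrative).
text = (
    "Great post! "
    "<http://example.com/post#this> <http://schema.org/about> "
    "<http://dbpedia.org/resource/Linked_data> . "
    "Thanks for sharing."
)

# Naive extraction of URI-only triples terminated by " ."
# (a real sniffer would use a proper Turtle/N-Triples parser).
pattern = r"(<[^>]+>)\s+(<[^>]+>)\s+(<[^>]+>)\s+\."
triples = re.findall(pattern, text)
print(len(triples))
```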

Note:

At this point, the Firefox extension lacks nanotation sniffing. That
will be fixed in the coming days.

Enjoy!


Links:

[1]
https://medium.com/@kidehen/openlink-structured-data-sniffer-osds-2-6-1-unleashed-15137e558350#.7lqecrci7
-- OSDS New Features & Usage Examples Article

[2] https://plus.google.com/+KingsleyIdehen/posts/PaH7rRhiD3D -- G+ Post
covering New Features & Usage Examples

[3] https://osds.openlinksw.com -- OpenLink Structured Data Sniffer Home Page

[4]
https://chrome.google.com/webstore/detail/openlink-structured-data/egdaiaihbdoiibopledjahjaihbmjhdj/related?hl=en
-- Chrome Store Listing

[5]
https://addons.mozilla.org/en-US/firefox/addon/openlink-structured-data-sniff/
-- Mozilla Extensions Store Listing .

-- 
Regards,

Kingsley Idehen   
Founder & CEO 
OpenLink Software 
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this





Re: Announce: Javascript based RDF Editor

2016-01-17 Thread Kingsley Idehen
On 1/16/16 6:38 PM, Ed - 0x1b, Inc. wrote:
> Kingsley,
>
> Thank you, I look forward to trying your editor. 

Okay.

> I noticed you were
> deploying on Node.js, is your editor like the Atom.io editor or do you
> see integration as an Atom plugin being a potential goal?

I don't know anything about Atom.io. Thus, maybe you can make this
determination once you've played with our editor? :)

>  The
> Atom/Electron foundation looks very interesting for a Javascript
> application doing a desktop deployment.

Possibly, but I need to take a look at this effort before I can make any
kind of meaningful response.

Kingsley
>
> Thanks again, Ed
>
> On Sat, Jan 16, 2016 at 4:14 PM, Kingsley Idehen <kide...@openlinksw.com> 
> wrote:
>> On 1/16/16 8:58 AM, Timothy Holborn wrote:
>>
>> Might be a foolish question; but have you thought about extending the
>> functionality to aid with converting XML Schema documents to RDF?
>>
>>
>> Yes, but simply not as our top priority. We've released this in Open Source
>> form for others to chip in :)
>>
>>
>> Perhaps an Import button?
>>
>>
>> Maybe, but it's about priorities right now.
>>
>>
>> Kingsley
>>
>>
>> On 15 January 2016 at 23:48, Kingsley Idehen <kide...@openlinksw.com> wrote:
>>> On 1/15/16 6:01 AM, Timothy Cook wrote:
>>>
>>> Nice work but it should be noted that it only supports TURTLE.
>>>
>>>
>>> It only outputs Turtle. The UI/UX is basically abstract RDF
>>> (subject->predicate->object) or EAV (entity->attribute-value).
>>>
>>> Remembering that RDF is abstract (thereby notation and serialization
>>> format agnostic), your Turtle content can be transformed to other
>>> serialization formats via a plethora of tools e.g., the live URIBurner
>>> instance [1]. Take a look at the "Options" feature re., application
>>> configuration that allows you setup functionality behind the "Upload" button
>>> in OSDS [1].
>>>
>>> [1] http://osds.openlinksw.com -- Structured Data Sniffer
>>>
>>> [2] http://linkeddata.uriburner.com -- URIBurner service.
>>>
>>> Kingsley
>>>
>>>
>>>
>>> On Fri, Jan 15, 2016 at 7:31 AM, Ruben Verborgh <ruben.verbo...@ugent.be>
>>> wrote:
>>>> Hi Kingsley,
>>>>
>>>> This seems great stuff—great you've done this in JavaScript.
>>>> Would it be possible to have a (restricted) demo live online somewhere?
>>>> This might make it directly accessible for people who want to test.
>>>>
>>>> Best,
>>>>
>>>> Ruben
>>>
>>>
>>>
>>> --
>>> Timothy W. Cook, President
>>> Data Insights, Inc.
>>>
>>>
>>>
>>> --
>>> Regards,
>>>
>>> Kingsley Idehen 
>>> Founder & CEO
>>> OpenLink Software
>>> Company Web: http://www.openlinksw.com
>>> Personal Weblog 1: http://kidehen.blogspot.com
>>> Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
>>> Twitter Profile: https://twitter.com/kidehen
>>> Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
>>> LinkedIn Profile: http://www.linkedin.com/in/kidehen
>>> Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this
>>
>>
>>
>> --
>> Regards,
>>
>> Kingsley Idehen  
>> Founder & CEO
>> OpenLink Software
>> Company Web: http://www.openlinksw.com
>> Personal Weblog 1: http://kidehen.blogspot.com
>> Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
>> Twitter Profile: https://twitter.com/kidehen
>> Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
>> LinkedIn Profile: http://www.linkedin.com/in/kidehen
>> Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this
>


-- 
Regards,

Kingsley Idehen   
Founder & CEO 
OpenLink Software 
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this






Re: Announce: Javascript based RDF Editor

2016-01-16 Thread Kingsley Idehen
On 1/16/16 8:58 AM, Timothy Holborn wrote:
> Might be a foolish question; but have you thought about extending the
> functionality to aid with converting XML Schema documents to RDF? 

Yes, but simply not as our top priority. We've released this in Open
Source form for others to chip in :)
>
> Perhaps an Import button?

Maybe, but it's about priorities right now.


Kingsley
>
> On 15 January 2016 at 23:48, Kingsley Idehen <kide...@openlinksw.com
> <mailto:kide...@openlinksw.com>> wrote:
>
> On 1/15/16 6:01 AM, Timothy Cook wrote:
>> Nice work but it should be noted that it only supports TURTLE.
>
> It only outputs Turtle. The UI/UX is basically abstract RDF
> (subject->predicate->object) or EAV (entity->attribute-value).
>
> Remembering that RDF is abstract (thereby notation and
> serialization format agnostic), your Turtle content can be
> transformed to other serialization formats via a plethora of tools
> e.g., the live URIBurner instance [1]. Take a look at the
> "Options" feature re., application configuration that allows you
> setup functionality behind the "Upload" button in OSDS [1].
>
> [1] http://osds.openlinksw.com -- Structured Data Sniffer
>
> [2] http://linkeddata.uriburner.com -- URIBurner service.
>
> Kingsley
>>
>>
>> On Fri, Jan 15, 2016 at 7:31 AM, Ruben Verborgh
>> <ruben.verbo...@ugent.be <mailto:ruben.verbo...@ugent.be>> wrote:
>>
>> Hi Kingsley,
>>
>> This seems great stuff—great you've done this in JavaScript.
>> Would it be possible to have a (restricted) demo live online
>> somewhere?
>> This might make it directly accessible for people who want to
>> test.
>>
>> Best,
>>
>> Ruben
>>
>>
>>
>>
>> -- 
>> Timothy W. Cook, President
>> Data Insights, Inc. 
>
>
> -- 
> Regards,
>
> Kingsley Idehen 
> Founder & CEO 
> OpenLink Software 
> Company Web: http://www.openlinksw.com
> Personal Weblog 1: http://kidehen.blogspot.com
> Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
> <http://www.openlinksw.com/blog/%7Ekidehen>
> Twitter Profile: https://twitter.com/kidehen
> Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
> LinkedIn Profile: http://www.linkedin.com/in/kidehen
> Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this
>
>


-- 
Regards,

Kingsley Idehen   
Founder & CEO 
OpenLink Software 
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this





Re: Announce: Javascript based RDF Editor

2016-01-15 Thread Kingsley Idehen
On 1/15/16 4:31 AM, Ruben Verborgh wrote:
> Hi Kingsley,
>
> This seems great stuff—great you've done this in JavaScript.
> Would it be possible to have a (restricted) demo live online somewhere?
> This might make it directly accessible for people who want to test.
>
> Best,
>
> Ruben
>

Yes, of course. Just look up the following:

[1] http://ods-qa.openlinksw.com/rdf-editor -- live instance
[2] https://youtu.be/ftLDf9NHmU0 -- silent screencast
[3] https://youtu.be/vmIqGKw3VeY -- demonstrates loose-coupling with our
Structured Data Sniffer extension.

-- 
Regards,

Kingsley Idehen   
Founder & CEO 
OpenLink Software 
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this






Re: Announce: Javascript based RDF Editor

2016-01-15 Thread Kingsley Idehen
On 1/15/16 6:01 AM, Timothy Cook wrote:
> Nice work but it should be noted that it only supports TURTLE.

It only outputs Turtle. The UI/UX is basically abstract RDF
(subject->predicate->object) or EAV (entity->attribute-value).

Remembering that RDF is abstract (and thereby notation- and
serialization-format agnostic), your Turtle content can be transformed
to other serialization formats via a plethora of tools, e.g., the live
URIBurner instance [2]. Take a look at the "Options" feature re.
application configuration, which allows you to set up functionality
behind the "Upload" button in OSDS [1].

[1] http://osds.openlinksw.com -- Structured Data Sniffer

[2] http://linkeddata.uriburner.com -- URIBurner service.
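As a small illustration of the notation-agnostic point above (the URIs
are examples, and a real conversion would use an RDF library rather than
string formatting):

```python
# One abstract (subject, predicate, object) statement...
triple = (
    "http://example.com/doc#this",
    "http://schema.org/name",
    "My Document",
)
s, p, o = triple

# ...serialized two ways: N-Triples and (prefix-abbreviated) Turtle.
ntriples = f'<{s}> <{p}> "{o}" .'
turtle = f'@prefix schema: <http://schema.org/> .\n<{s}> schema:name "{o}" .'
print(ntriples)
```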

Kingsley
>
>
> On Fri, Jan 15, 2016 at 7:31 AM, Ruben Verborgh
> <ruben.verbo...@ugent.be <mailto:ruben.verbo...@ugent.be>> wrote:
>
> Hi Kingsley,
>
> This seems great stuff—great you've done this in JavaScript.
> Would it be possible to have a (restricted) demo live online
> somewhere?
> This might make it directly accessible for people who want to test.
>
> Best,
>
> Ruben
>
>
>
>
> -- 
> Timothy W. Cook, President
> Data Insights, Inc. 


-- 
Regards,

Kingsley Idehen   
Founder & CEO 
OpenLink Software 
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this





Announce: Javascript based RDF Editor

2016-01-14 Thread Kingsley Idehen
All,

Quick FYI re. installation notes for deploying our 100% Javascript based
RDF Editor using Node.js:

1. Download RDF Editor Zip Archive from:
http://opldownload.s3.amazonaws.com/uda/vad-packages/7.2/rdf_editor_pkg.zip
2. Extract archive to a folder ({RDF Editor Archive Extraction Folder}
henceforth)
3. From your OS command-line interface run: npm install http-server -g
-- to set up global invocation of the HTTP server
4. From your OS command-line interface run: http-server {RDF Editor
Archive Extraction Folder}/rdf-editor [options]
5. Open up your browser and go to: http://localhost:8080.
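For what it's worth, `http-server` in the steps above is just a static
file server, so any equivalent works; a stdlib-only sketch (the folder
name is an example):

```python
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve the extracted rdf-editor folder (name is an example), mirroring
# what `http-server` does; port 0 picks a free port (use 8080 in practice).
handler = partial(SimpleHTTPRequestHandler, directory="rdf-editor")
server = HTTPServer(("localhost", 0), handler)
print("serving on port", server.server_address[1])
# server.serve_forever()  # uncomment to actually serve to the browser
server.server_close()
```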

Metaphor:

You have entity->attribute->value or subject->predicate->object based
sentences/statements written to a document (identified by a URL).
Entity, Attribute, and (optionally) Value are each identified by an HTTP
URI (absolute or relative); likewise Subject, Predicate, and
(optionally) Object. Sentence/statement collections are grouped by
Attribute (Predicate), and this grouping is the basis for the optimistic
concurrency hashes constructed for multi-user editing activity occurring
on the same document.
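A rough sketch of that attribute-grouped hashing idea (the data and hash
scheme are illustrative; the editor's actual implementation may differ):

```python
import hashlib
from collections import defaultdict

# Statements grouped by attribute (predicate); each group gets a digest
# that could back an optimistic-concurrency check (illustrative only).
statements = [
    ("#me", "http://schema.org/name", "Jane"),
    ("#me", "http://schema.org/knows", "#you"),
    ("#you", "http://schema.org/name", "Joe"),
]

groups = defaultdict(list)
for s, p, o in statements:
    groups[p].append((s, o))

# One digest per predicate group; sorting makes the hash order-independent.
digests = {
    p: hashlib.sha256(repr(sorted(pairs)).encode()).hexdigest()
    for p, pairs in groups.items()
}
print(len(digests))
```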

To summarize, as is the case in the real world:

1. You write sentences to a document (e.g. a page in a book)
2. Sentences in a page are grouped by paragraphs -- the editor uses
statement attributes (predicates) grouping to emulate this concept
3. a document [a named graph when writing to a SPARQL server] could be
part of a collection -- like one of many pieces of paper in a folder.

Actual document storage is driven by WebDAV, LDP, SPARQL Graph Protocol,
or SPARQL 1.1 INSERTs.

Features:

1. Shared Ontology / Vocabulary Import
2. Intelligent determination of sentence predicate types driven by
Ontology / Vocabulary lookups
3. A variety of editing views scoped to Statements, Entities, Attribute
Names, and Attribute Values -- you can toggle across these subject to
your editing modality preferences
4. Allows you to download documents to local or cloud storage
5. Cloud Storage supports multiple HTTP-based storage protocols (WebDAV,
LDP, SPARQL Graph Protocol, SPARQL 1.1 INSERT)
6. Deployable using Node.js, Apache, IIS, Virtuoso HTTP Server, or any
other HTTP server
7. 100% Javascript based
8. Available in Open Source form.

Enjoy!

Links:

[1]
http://opldownload.s3.amazonaws.com/uda/vad-packages/7.2/rdf_editor_pkg.zip
[2] https://github.com/openlink/rdf-editor

-- 
Regards,

Kingsley Idehen   
Founder & CEO 
OpenLink Software 
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this






Update: New Extension for revealing Structured Data Islands embedded in HTML pages

2015-12-16 Thread Kingsley Idehen
On 12/5/15 1:50 PM, Kingsley Idehen wrote:
> All,
>
> Here's a quick FYI about our recently released browser extension that
> simplifies discovery of metadata oriented structured data embedded in
> HTML docs. Naturally, this extension also functions as a Linked Data
> exploration launch point for follow-your-nose discovery of related Web
> resources.
>
> A few key benefits:
>
> [1] Discovering use of schema.org terms across pages and web sites
> [2] Evaluating document metadata quality in relation to SEO optimization.
>
> Current browser support includes: Chrome, Opera, and Firefox (nightly
> builds only).
>
> Enjoy!
>
> Links:
>
> [1] http://osds.openlinksw.com -- Home Page
> [2]
> http://kidehen.blogspot.com/2015/12/openlink-structured-data-sniffer-osds.html
> -- Blog Post about extension
> [3] https://github.com/openlink/structured-data-sniffer/releases

All,

We've published the latest edition of this Chrome and Opera compatible
extension to the Chrome Web Store [1]. New features include:
[1] Query Service Integration -- one-click integration with SPARQL Query
Services (defaults to the URIBurner Service, which is a major Linked Open
Data Cloud junction-box)

[2] Annotation Service Integration -- places you in our RDF Editing
Service as a mechanism for making and storing notes about anything (Web
Pages or whatever they are about)

[3] LOD Cloud Upload -- one-click integration with an RDF Extract,
Transform, and LOD service that adds data to the LOD Cloud (URIBurner is
the default service)

[4] Flexible Configuration options for binding other SPARQL Query and
RDFizer Services.

Update to the Firefox edition will follow shortly.

Links:

[1]
https://chrome.google.com/webstore/detail/openlink-structured-data/egdaiaihbdoiibopledjahjaihbmjhdj
-- Chrome Web Store Listing
[2] http://osds.openlinksw.com -- Home Page
[3] https://www.pinterest.com/kidehen/structured-metadata-related/ --
Usage Screenshot Collection on Pinterest.

-- 
Happy Holidays!

Kingsley Idehen   
Founder & CEO 
OpenLink Software 
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this


smime.p7s
Description: S/MIME Cryptographic Signature


Re: Ontology to model access control

2015-12-16 Thread Kingsley Idehen
On 12/16/15 12:05 PM, Robert Sanderson wrote:
> And to pile on, +1 to WebACL for this. We also use it in the Fedora4
> (LDP based) repository solution.
>

+1

<http://www.w3.org/ns/auth/acl#this> already provides a vocabulary of
terms for describing resource access controls. It is also widely used
across Linked Data solutions that support:

[1] WebID -- Identity
[2] WebID-TLS Protocol -- Authentication
[3] WebID-Profile Documents -- Identity Claims.
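As a concrete sketch of what the acl vocabulary's terms look like in use, here is a minimal authorization in Turtle; the resource and agent IRIs are hypothetical placeholders:

```turtle
@prefix acl: <http://www.w3.org/ns/auth/acl#> .

# Grant one agent (denoted by a WebID) read and write access to a single
# resource; both IRIs below are illustrative, not real.
<#maintainerAccess>
    a acl:Authorization ;
    acl:accessTo <https://example.org/data/dataset-description.ttl> ;
    acl:agent <https://example.org/people/alice#me> ;
    acl:mode acl:Read, acl:Write .
```

acl:agentClass can replace acl:agent when a whole class of agents (e.g., foaf:Agent for everyone) should receive the grant.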

Kingsley

> On Wed, Dec 16, 2015 at 3:43 AM, Víctor Rodríguez Doncel
> <vrodrig...@fi.upm.es <mailto:vrodrig...@fi.upm.es>> wrote:
>
> Sebastian,
>
> I understand you want to implement some sort of RBAC.
> I believe WebAccessControl
> <https://www.w3.org/wiki/WebAccessControl> is the most
> straightforward option if this is what you need.
> For more complex expressions (conditions, etc.) you may want to
> use ODRL
>
> Víctor
>
>
> El 16/12/2015 a las 11:25, Sebastian Hellmann escribió:
>> Dear all,
>>
>> to guide the integration of data into DBpedia+ effectively, we
>> are working on a new release of DataID, i.e. its ontology [1,2]
>> and an implementation of a repository to manage metadata
>> effectively and distribute it to different other venues like
>> datahub.io <http://datahub.io> via push.
>>
>> we are looking for an ontology that allows us to express access
>> rights to model the following roles:
>>
>> for metadata editing: Guest, Creator, Maintainer, Contributor
>> for datasets: Guest, Creator, Maintainer, Contributor, Contact,
>> Publisher
>>
>> We are thinking about copying some ideas from here:
>> https://hal-unice.archives-ouvertes.fr/hal-01202126/document
>> e.g. to have something like:
>>
>> [] a dataid:AuthorityEntityContext ;
>>dataid:authorizedFor :x ; # this is a prov:Entity (either a
>> DataId, a Dataset or a distribution)
>>dataid:authorityAgentRole dataid:Maintainer ;
>>dataid:authorizedAgent  [ (insert FOAF here) ] .
>>
>>
>> Any ideas or pointers?
>> A detailed analysis of the problem has been published by our
>> ALIGNED[4] partner SWC [3]
>>
>> All the best,
>> Sebastian and Markus
>>
>> [1] previous version
>> http://svn.aksw.org/papers/2014/Semantics_Dataid/public.pdf
>> [2] current ontology version:
>> 
>> https://raw.githubusercontent.com/dbpedia/dataid/testbranch/ontology/ontology.ttl
>> in the testbranch of https://github.com/dbpedia/dataid
>> [3]
>> 
>> https://blog.semantic-web.at/2015/12/02/ready-to-connect-to-the-semantic-web-now-what/
>> [4] http://aligned-project.eu/
>> -- 
>> Sebastian Hellmann
>> AKSW/KILT research group at Leipzig University
>> Institute for Applied Informatics (InfAI) at Leipzig University
>> DBpedia Association
>> Events:
>> * *Feb 8th, 2016* Submission Deadline, 5th Workshop on Linked
>> Data in Linguistics <http://ldl2016.linguistic-lod.org/>
>> Come to Germany as a PhD:
>> http://bis.informatik.uni-leipzig.de/csf
>> Projects: http://dbpedia.org, http://nlp2rdf.org,
>> http://linguistics.okfn.org, https://www.w3.org/community/ld4lt
>> <http://www.w3.org/community/ld4lt>
>> Homepage: http://aksw.org/SebastianHellmann
>> Research Group: http://aksw.org
>> Thesis:
>> http://tinyurl.com/sh-thesis-summary
>> http://tinyurl.com/sh-thesis
>
>
> -- 
> Víctor Rodríguez-Doncel
> D3205 - Ontology Engineering Group (OEG)
> Departamento de Inteligencia Artificial
> ETS de Ingenieros Informáticos
> Universidad Politécnica de Madrid
>
> Campus de Montegancedo s/n
> Boadilla del Monte-28660 Madrid, Spain
> Tel. (+34) 91336 3753 <tel:%28%2B34%29%2091336%203753>
> Skype: vroddon3
>
>
>
>
> -- 
> Rob Sanderson
> Information Standards Advocate
> Digital Library Systems and Services
> Stanford, CA 94305


-- 
Regards,

Kingsley Idehen   
Founder & CEO 
OpenLink Software 
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this



smime.p7s
Description: S/MIME Cryptographic Signature


New Extension for revealing Structured Data Islands embedded in HTML pages

2015-12-05 Thread Kingsley Idehen
All,

Here's a quick FYI about our recently released browser extension that
simplifies discovery of metadata oriented structured data embedded in
HTML docs. Naturally, this extension also functions as a Linked Data
exploration launch point for follow-your-nose discovery of related Web
resources.

A few key benefits:

[1] Discovering use of schema.org terms across pages and web sites
[2] Evaluating document metadata quality in relation to SEO optimization.

Current browser support includes: Chrome, Opera, and Firefox (nightly
builds only).

Enjoy!

Links:

[1] http://osds.openlinksw.com -- Home Page
[2]
http://kidehen.blogspot.com/2015/12/openlink-structured-data-sniffer-osds.html
-- Blog Post about extension
[3] https://github.com/openlink/structured-data-sniffer/releases

-- 
Regards,

Kingsley Idehen   
Founder & CEO 
OpenLink Software 
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this




smime.p7s
Description: S/MIME Cryptographic Signature


Re: What Happened to the Semantic Web?

2015-11-16 Thread Kingsley Idehen
On 11/13/15 11:30 AM, Michael Brunnbauer wrote:
> hi all,
>
> correct me if I am wrong:
>
> -Google CSE
>  -cannot be queried programmatically without violating the Google TOS

I don't know about that.

As per my demonstrations, you can construct URLs for directly accessing
HTML docs that are scoped to specific schema.org entity types.

>  -will only accept a disjunctive list of schema.org classes as restriction
>  -will only find pages mentioning things, not things

A page can only be about something. A CSE enables you to access pages
about something that's an instance of one or more schema.org entity types.

>
> -Google products generally will not recognize triples with classes or
>  properties outside the schema.org namespace (with selected exceptions, e.G. 
>  Goodrelations). This is understandable, but:
>
> -There is no way to tell Google crawlers that your classes/properties are
>  specializations of schema.org classes/properties.

A CSE isn't speaking to Google Crawlers. A CSE lets you scope queries
to Google's Index of Pages that include Structured Data Islands created
using Schema.org terms.

>
> I would say we are not there yet.

And I beg to differ, since "being there" to me is about the following:

[1] Incentivizing Web Masters and Web Developers to construct and
publish Web Documents that include both human and machine-comprehensible
Structured Data Islands

[2] Demonstrating utility of RDF Language based Structured Data, at
Web-scale

[3] Ending distractions such as RDF Language based Document Content
Format wars.


Kingsley
>
> Regards,
>
> Michael Brunnbauer
>


-- 
Regards,

Kingsley Idehen   
Founder & CEO 
OpenLink Software 
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this




smime.p7s
Description: S/MIME Cryptographic Signature


Re: What Happened to the Semantic Web?

2015-11-12 Thread Kingsley Idehen
On 11/12/15 6:45 AM, Nicolas Chauvat wrote:
> Hi,
>
> On Wed, Nov 11, 2015 at 02:27:10PM -0500, Kingsley Idehen wrote:
>>> > > To me, The Semantic Web is like Google, but then run on my machine.
>> > 
>> > To me its just a Web of Data [...]
> Ruben says "The Semantic Web" and Kingsley answers "just a Web of Data".
>
> In my tutorial "introduction to the semantic web" last week at
> SemWeb.Pro, I presented the Semantic Web and the Web of Data as the
> same thing.
>
> Then Fabien Gandon from Inria summarized the first session of the MOOC
> "Le Web Sémantique" and distinguished two items in a couple
> (web of data ; semantic web).
>
> It made me think that splitting the thing in two after the fact might
> have benefits:
>
> - Web of Data = what works today = 1st deliverable of the SemWeb Project
>
> - Semantic Web = what will work = prov, trust, inference, smartclient, etc.
>
> It allows us to say that The Semantic Web Project *has* delivered its
> version 1, nicknamed "Web of Data", and that more versions will follow.
>
> [Hopefully in a couple years the "Web of Data" will have completely
> merged with the One True Web and nobody will care about making a
> distinction any more]
>
> That way of putting things fits well with the iterative/agile/lean
> culture of project management that is now spreading all over.
>
> Do you know of people that have been trying to sell things this way?

Hopefully everyone :)


-- 
Regards,

Kingsley Idehen   
Founder & CEO 
OpenLink Software 
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this



smime.p7s
Description: S/MIME Cryptographic Signature


Re: What Happened to the Semantic Web?

2015-11-12 Thread Kingsley Idehen
On 11/12/15 11:18 AM, Paul Houle wrote:
> Really I don't see how much better the search results on the right
> ("Google CSE") are then the ones on the left.  It is a little like this:
>
> http://www.audiocheck.net/blindtests_16vs8bit_NeilYoung.php
>
> Google and Bing are stuck with P@1 at %70 or so because they don't
> always know the intent of the question.
>
> Various systems that put documents in a blender,  discarding the order
> of the words,  perform astonishingly well at search,  classification
> and other tasks -- anything "smarter" that this has to solve the
> "difficult" problems that remain,  and little steps (like "not good"
> -> "bad" for sentiment analysis) help only a few marginal cases.
>
> The promise of semantics is to give people an experience they never
> had before,  not move some score from .81 to .83.

Paul,

Impact scale of .81 to .83 varies, as you know. For a Web Content
behemoth that's significant :)

Bridges to structured data provided by the aforementioned behemoths aids
the following:

[1] Improvements from behemoths
[2] Investment from behemoths and their close associates (e.g., Venture
Capitalists)
[3] Acquisitions by behemoths.

Any structured data is fodder for value added products and services from
this community. I still believe "Opportunity Costs Realization" is the
ultimate trigger for change and improvement in commercial settings.

Google, Bing, and friends are just parts of the larger ecosystem.

Kingsley 
>
> On Thu, Nov 12, 2015 at 9:29 AM, David Wood <da...@3roundstones.com
> <mailto:da...@3roundstones.com>> wrote:
>
>
> On Nov 12, 2015, at 07:51, Kingsley Idehen <kide...@openlinksw.com
> <mailto:kide...@openlinksw.com>> wrote:
>
>> On 11/12/15 6:45 AM, Nicolas Chauvat wrote:
>>> Hi,
>>>
>>> On Wed, Nov 11, 2015 at 02:27:10PM -0500, Kingsley Idehen wrote:
>>>>> > > To me, The Semantic Web is like Google, but then run on my 
>>>>> machine.
>>>> > 
>>>> > To me its just a Web of Data [...]
>>> Ruben says "The Semantic Web" and Kingsley answers "just a Web of Data".
>>>
>>> In my tutorial "introduction to the semantic web" last week at
>>> SemWeb.Pro, I presented the Semantic Web and the Web of Data as the
>>> same thing.
>>>
>>> Then Fabien Gandon from Inria summarized the first session of the MOOC
>>> "Le Web Sémantique" and distinguished two items in a couple
>>> (web of data ; semantic web).
>>>
>>> It made me think that splitting the thing in two after the fact might
>>> have benefits:
>>>
>>> - Web of Data = what works today = 1st deliverable of the SemWeb Project
>>>
>>> - Semantic Web = what will work = prov, trust, inference, smartclient, 
>>> etc.
>>>
>>> It allows us to say that The Semantic Web Project *has* delivered its
>>> version 1, nicknamed "Web of Data", and that more versions will follow.
>>>
>>> [Hopefully in a couple years the "Web of Data" will have completely
>>> merged with the One True Web and nobody will care about making a
>>> distinction any more]
>>>
>>> That way of putting things fits well with the iterative/agile/lean
>>> culture of project management that is now spreading all over.
>>>
>>> Do you know of people that have been trying to sell things this way?
>>
>> Hopefully everyone :)
>
>
> +1  :)
>
> Regards,
> Dave
> --
> http://about.me/david_wood
>
>
>>
>>
>> -- 
>> Regards,
>>
>> Kingsley Idehen
>> Founder & CEO 
>> OpenLink Software 
>> Company Web: http://www.openlinksw.com
>> Personal Weblog 1: http://kidehen.blogspot.com
>> Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
>> <http://www.openlinksw.com/blog/%7Ekidehen>
>> Twitter Profile: https://twitter.com/kidehen
>> Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
>> LinkedIn Profile: http://www.linkedin.com/in/kidehen
>> Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this
>
>
>
>
> -- 
> Paul Houle
>
> *Applying Schemas for Natural Language Processing, Distributed
> Systems, Classification and Text Mining and Data Lakes*
>
(607) 539 6254 | paul.houle on Skype | ontolo...@gmail

What Happened to the Semantic Web?

2015-11-11 Thread Kingsley Idehen
All,

I think I inadvertently forgot to share this blog post [1] with this
community.

Links:

[1]
http://kidehen.blogspot.cz/2015/09/what-happened-to-semantic-web.html --
What Happened to the Semantic Web?
[2] https://plus.google.com/+KingsleyIdehen/posts/8aMYzBN2FjL -- End of
RDF Document Format Wars.

-- 
Regards,

Kingsley Idehen   
Founder & CEO 
OpenLink Software 
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this




smime.p7s
Description: S/MIME Cryptographic Signature


Re: What Happened to the Semantic Web?

2015-11-11 Thread Kingsley Idehen
On 11/11/15 5:56 PM, Wouter Beek wrote:
> ​Hi Ruben, Kingsley, others,
> ​
> On Wed, Nov 11, 2015 at 9:49 PM, Ruben Verborgh
> <ruben.verbo...@ugent.be <mailto:ruben.verbo...@ugent.be>> wrote:
>
> Of course—but the emphasis in the community has mostly been on
> servers,
>
> ​ The emphasis has been on servers and, as of late, on Web Services.
> ​

Yes, that too is a new area of focus as exemplified by Hydra.

> whereas the SemWeb vision started from agents (clients) that would
> do things (using those servers).
>
> ​ Today we are nowhere near this vision.  In fact, we may be further
> removed from it today than we were in 2001.  If you look at the last
> ISWC there was particularly little work on (Web) agents.

As Ruben stated, too much focus has been paid to servers (which are
focused on large datasets etc..). The real game (as I see it) boils down
to small packets of "smart data" being transmitted via hyperlinks  etc..

>
> Now, the Semantic Web is mostly a server thing, which the
> Google/CSE example also shows.
>
> With the LOD Laundromat <http://lodlaundromat.org/> we had the
> experience that people really like it when we make publishing and
> consuming data very easy for them.  People generally find it easier to
> publish their data through a Web Service rather than having to use
> more capable data publishing software they have to configure locally. 
> We ended up with a highly centralized approach that works for many use
> cases.  It would have been much more difficult to build the same
> thing in a distributed fashion.

True. That's why it's really a bit of everything rather than total focus
on one side of the equation. Right now, I think it's time for clients and
services (controllers) to drive most of the interesting innovations and
collaborations.

>
> I find it difficult to see why centralization will not be the end game
> for the SW as it has been for so many other aspects of computing
> (search, email, social networking, even simple things like text chat).

There won't be centralization, because more of the "deep web" will come
to the surface, reducing the size and impact of many of today's
magnitude-based control points. This is just a natural feature of the Web
that can't be suppressed by anyone (person, organization, or bot army).

> The WWW shows that the 'soft benefits' of privacy, democratic
> potential, and data ownership are not enough to make distributed
> solutions succeed.

They are drivers that will aid increasing efforts to reduce
centralization. Those issues are completely incompatible with
centralization.

>
> However, I believe that there are other benefits to decentralization
> that have not been articulated yet and that are to be found within the
> semantic realm.  An agent calculus is fundamentally different from a
> traditional model theory.

Naturally :)


Kingsley
>
> ---
> Best regards,
> Wouter Beek.
>
> Email: w.g.j.b...@vu.nl <mailto:w.g.j.b...@vu.nl>
> WWW: wouterbeek.com <http://wouterbeek.com>
> Tel: +31647674624


-- 
Regards,

Kingsley Idehen   
Founder & CEO 
OpenLink Software 
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this



smime.p7s
Description: S/MIME Cryptographic Signature


Re: What Happened to the Semantic Web?

2015-11-11 Thread Kingsley Idehen
On 11/11/15 3:49 PM, Ruben Verborgh wrote:
> Hi Kingsley,
>
> Some valid points. Two quick remarks:
>
>>> For me, the Semantic Web vision has always been about clients.
>> I think the "Semantic Web" has always been about "The Web" (clients and
>> servers) :)
> Of course—but the emphasis in the community has mostly been on servers,
> whereas the SemWeb vision started from agents (clients) that would do things 
> (using those servers).
> Now, the Semantic Web is mostly a server thing, which the Google/CSE example 
> also shows.

Okay, I certainly agree with that observation. Too much emphasis on
servers and large datasets has starved the crucial need for
collaboration on the client side.

Areas of starvation include:

1. Bindings various UI/UX frameworks to data access controls capable of
handling JSON-LD, Turtle, RDFa etc..
2. Constructing sophisticated data access controls that simplify Linked
Data exploitation by client-centric developers.

Collaboration taking shape on the Javascript front re. rdflib.js,
rdfstore.js, SoLID, and RWW in general etc.. are great examples of
movement in the right areas (IMHO).

>
>>> At the moment, consuming seems only within reach of the big players,
>>> who have the capacity to do it otherwise anyway.
>> No, you can craft a CSE yourself right now and triangulate searches
>> scoped to specific entity types.
> Do you mean making a CSE through the Google interface?

Google offers CSEs as a kind of service. If you leave said service with
Google trimmings there's no cost. If you seek to remove Google trimmings
then they charge a fee. Either way, that's fair enough in my eyes.
> But then I'm actually querying the Google servers, not the Web…

Google is a major Web hub; via CSEs you can find pathways to other
places on the Web. What's useful about these CSEs is that they return a
boatload of documents that include RDF-based structured data [1].

> Then intelligence is with a centralized system, not between clients and 
> servers.

Google is just one of many hubs from which RDF documents can be
discovered and accessed.
> Not yet the Semantic Web for me.

I see the "Semantic" and "Web" components of the meme breaking down as
follows, in my experience:

1. Semantic -- structured data endowed with machine- and human-readable
relationship type semantics.

2. Web -- hyperlinks functioning dually as a mechanism for entity
denotation and connotation (i.e., names resolve to RDF Language based
descriptor documents).

>
> Best,
>
> Ruben
>


-- 
Regards,

Kingsley Idehen   
Founder & CEO 
OpenLink Software 
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this




smime.p7s
Description: S/MIME Cryptographic Signature


Re: What Happened to the Semantic Web?

2015-11-11 Thread Kingsley Idehen
On 11/11/15 12:25 PM, Ruben Verborgh wrote:
> Hi Kingsley,
>
> While your main points are correct, I disagree with your conclusion.
> I guess everything depends on what you mean with "The Semantic Web",
> but if I read the article with that title, we're arguably _not_ there.
>
> In that sense, I find it strange you use Google as an example of success.

I use them as an example because they are providing a platform for
accessing data based on entity relationship type semantics. I
specifically included Custom Google Search (CSE) engine links in my post
to demonstrate this point [1].

Google has a massive index of structured data culled from
HTML5+Microdata, JSON-LD, HTML+RDFa, Microdata, and maybe even Turtle
docs. Access to said indexes is possible via CSEs scoped to specific
Entity Types.

> The fact that the big players are doing something with Linked Data,
> is not necessarily a success, as they have much larger means than most of us.

Of course that's success, especially when it doesn't end up in some
guarded silo. Google is providing access to this data, and it is clear
that access fidelity will increase over time, for sure.

Google's actions ensure all their competitors will take note and follow
suit. Watch out for the Deep Linking App indexing promised by Microsoft
in regards to Services (Apps) and Actions.

>
> For me, the Semantic Web vision has always been about clients.

I think the "Semantic Web" has always been about "The Web" (clients and
servers) :)

> It's a democratic principle of publishing and consuming data:
> everyone can say anything about anything,
> but everyone should also be able to consume that data.

Of course, and that's happening on a level that surpasses what we had
15+ years ago. Put differently, the actions of behemoths like Google
make it a zillion times easier to articulate and demonstrate "Semantic
Web" virtues.

>
> At the moment, consuming seems only within reach of the big players,
> who have the capacity to do it otherwise anyway.

No, you can craft a CSE yourself right now and triangulate searches
scoped to specific entity types. Is this perfect? Of course not, but it's
a zillion times better than zilch!

> In what sense did we succeed then?

Fundamentally, by virtue of the following actions:

[1] Schema.org support
[2] JSON-LD notation support -- this is a massive bridge between many
Semantic Web world views that failed to converge
[3] Integrating Schema.org into Custom Search Engine service.
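For illustration, the kind of Schema.org Structured Data Island those CSEs surface is typically embedded in an HTML page via a `<script type="application/ld+json">` element. A minimal sketch in JSON-LD, with purely illustrative values (none taken from any real page):

```json
{
  "@context": "http://schema.org",
  "@type": "Book",
  "name": "An Example Book About the Semantic Web",
  "author": {
    "@type": "Person",
    "name": "Jane Doe"
  },
  "url": "http://example.org/books/semantic-web"
}
```

A crawler that understands JSON-LD reads this as a description of a Book entity, which is exactly what makes entity-type-scoped CSE queries possible.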

>
> To me, The Semantic Web is like Google, but then run on my machine.

To me it's just a Web of Data that includes entity relationship type
semantics that are comprehensible to both humans and machines. I can
start exploration and discovery of data, information, and knowledge from
my personal device (watch, phone, tablet, laptop, desktop, server, etc.)
by simply looking up a hyperlink.
> My client that knows my preferences, doesn't share them,
> but uses them the find information on the Web for me.
> I still hope to see that. Then, we might be there.

The Semantic Web is here. Evolution will continue as part of its
innovation continuum, naturally  :)

>
> Best,
>
> Ruben
>
>

Links:

[1] https://delicious.com/kidehen/google_custom_search
[2]
https://cse.google.com/cse/publicurl?cx=008280912992940796406:buiznppg_vy=Semantic+Web
-- Structured Data docs about Books associated with Semantic Web
[3]
https://cse.google.com/cse/publicurl?cx=008280912992940796406:vmu7ys-yeli=Linked+Data+RDF
-- Datasets associated with Linked Data and RDF .

-- 
Regards,

Kingsley Idehen   
Founder & CEO 
OpenLink Software 
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this




smime.p7s
Description: S/MIME Cryptographic Signature


Re: Are there any datasets about companies? ( DBpedia Open Data Initiative)

2015-11-05 Thread Kingsley Idehen
rtschaftsjournalisten-teil-2/
>
> *
>
> -- 
> Sebastian Hellmann
> AKSW/KILT research group
> Institute for Applied Informatics (InfAI) at Leipzig University
> DBpedia Association
> Events:
> * *Nov 20th, 2015* Extended Deadline for Quality Management of
> Semantic Web Assets (Data, Services and Systems)
> <http://www.semantic-web-journal.net/blog/call-papers-special-issue-quality-management-semantic-web-assets-data-services-and-systems>
> Come to Germany as a PhD: http://bis.informatik.uni-leipzig.de/csf
> Projects: http://dbpedia.org, http://nlp2rdf.org,
> http://linguistics.okfn.org, https://www.w3.org/community/ld4lt
> Homepage: http://aksw.org/SebastianHellmann
> Research Group: http://aksw.org
> Thesis:
> http://tinyurl.com/sh-thesis-summary
> http://tinyurl.com/sh-thesis


-- 
Regards,

Kingsley Idehen   
Founder & CEO 
OpenLink Software 
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this



smime.p7s
Description: S/MIME Cryptographic Signature


Re: Loupe - a tool for inspecting and exploring datasets

2015-10-10 Thread Kingsley Idehen
On 10/10/15 4:26 AM, Nandana Mihindukulasooriya wrote:
> Hi John,
>
> Thanks! yes, we see the value in using Loupe with private SPARQL
> endpoints and we looking at possible models for using it with private
> SPARQL endpoints. We will update the thread with the details after
> ISWC conf.
>
> Best Regards,
> Nandana
>
> On Fri, Oct 9, 2015 at 12:45 PM, John Walker <john.wal...@semaku.com
> <mailto:john.wal...@semaku.com>> wrote:
>
> Hi Nandana
>
> Nice tool, is it possible to use on private SPARQL endpoints?
>
> John
>
>
>

Nandana,

As part of the setup, must there be an RDF dump URL? Remember, you can
make a dump from a SPARQL Endpoint via CONSTRUCT queries. Basically,
just use SPARQL LOAD against a SPARQL results URL for a  CONSTRUCT or
DESCRIBE query.

Also, don't forget the option to interrogate the /sparql endpoint to see
if it includes an endpoint description and pointers to a VoID document.
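As a sketch of that approach (the endpoint and graph names below are placeholders, not real services), pulling a remote endpoint's data without a pre-built dump via SPARQL 1.1 LOAD might look like:

```sparql
# Hypothetical: fetch the Turtle results of a CONSTRUCT query directly
# from a remote endpoint's results URL, then load them into a local graph.
# Both URLs are illustrative placeholders.
LOAD <http://remote.example/sparql?query=CONSTRUCT%20%7B%20%3Fs%20%3Fp%20%3Fo%20%7D%20WHERE%20%7B%20%3Fs%20%3Fp%20%3Fo%20%7D&format=text%2Fturtle>
INTO GRAPH <http://local.example/graphs/remote-copy>
```

Whether the results URL is accepted by LOAD depends on the store honoring the format parameter and returning a parseable RDF serialization.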


-- 
Regards,

Kingsley Idehen   
Founder & CEO 
OpenLink Software 
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this





Re: Loupe - a tool for inspecting and exploring datasets

2015-10-08 Thread Kingsley Idehen
On 10/8/15 12:09 PM, Nandana Mihindukulasooriya wrote:
> Hi all,
>
> We are developing a tool called Loupe [ http://loupe.linkeddata.es ]
> for inspecting and exploring datasets to understand which vocabularies
> (classes, and properties) are used in a dataset and which are common
> triple patterns. Loupe has some similarities to LODStat, Aether,
> ProLOD++, etc. but it provides the ability to dig into more details.
> It also connects the information provided directly to data so that
> one can see the triples that correspond to those numbers.  At the
> moment, it indexes 2+ billion triples from datasets including DBpedia
> (17 languages), wikidata, Linked Brainz, Bio models, etc.
>
> It's easier to describe what information Loupe provides using an
> example. If we take the DBpedia dataset, first it provides a summary
> with the number of triples, distinct subjects, objects, their
> composition (IRIs, blank nodes, literals), etc. and summary of the
> other information that we will present below.
>  http://tinyurl.com/loupe-dbpedia
>
> The class explorer provides the list of 941 classes used, number of
> instances per each class, the number of classes in each namespace, etc. It
> also allows you to search for classes. http://tinyurl.com/dbpedia-classes
>
> If we select a concrete class such as dbo:Person, it shows the 13,128
> distinct properties associated with instances of dbo:Person and the
> probability that a given property is found in an instance. It also
> provides a list 438 other types that are declared in dbo:Person
> instances which can be equivalents classes, superclasses, subclasses,
> etc. http://tinyurl.com/dbo-person
>
> The property explorer provides a list of 60347 properties with the
> number of triples, the number of properties in each namespace, etc. It also
> allows searching. http://tinyurl.com/dbpedia-properties
>
> Again, if we select a concrete property such as dbprop:name, it looks
> at all the triples that contain the given property and analyze the
> subjects and objects of those triples. For subjects, it looks at IRI /
> blank node counts and also their types. For objects, it does the
> same but additionally analyzes literals for numeric, integers,
> averages, min, max, etc. http://tinyurl.com/dbp-name
>
> The triple pattern explorer allows you to search the 3,807,196
> abstract triple patterns. http://tinyurl.com/dbpedia-triple-patterns
> Or you can select a pattern you are interested, for instance what are
> the properties that connect dbo:Politician to dbo:Criminal
> http://tinyurl.com/politician-criminal
>
> In all these cases, the numbers are directly linked to the
> corresponding triples. 
>
> That's a glimpse of Loupe.  We would like to know whether it is useful
> for your use cases so that we can keep improving it. It's still in its
> early stages, so any feedback on improvements is more than welcome. If
> you are interested, we will be doing a demo [1] at ISWC 2015.
>
> Best Regards,
> Nandana Mihindukulasooriya
> María Poveda Villalón
> Raúl García Castro
> Asunción Gómez Pérez
>
> [1] Nandana Mihindukulasooriya, María Poveda Villalón, Raúl García
> Castro, and Asunción Gómez Pérez. "Loupe - An Online Tool for
> Inspecting Datasets in the Linked Data Cloud", Demo at The 14th
> International Semantic Web Conference, Bethlehem, USA, 2015.

Great effort, very cool !!

-- 
Regards,

Kingsley Idehen   
Founder & CEO 
OpenLink Software 
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this






Context Computing

2015-10-02 Thread Kingsley Idehen
All,

A great presentation about what Linked Data and the Semantic Web
fundamentally enable, like no other [1]. Naturally, terms like RDF,
Semantic Web, Linked Data etc.. aren't mentioned, but none of that
matters. This is about the fundamental value for which the
aforementioned items collectively provide a superior solution
(technically and business-wise).

Enjoy!

[1] https://www.youtube.com/watch?v=rWDIkfpbTmQ&feature=youtu.be

-- 
Regards,

Kingsley Idehen   
Founder & CEO 
OpenLink Software 
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this






Re: Please publish Turtle or JSON-LD instead of RDF/XML [was Re: Recommendation for transformation of RDF/XML to JSON-LD in a web browser?]

2015-09-03 Thread Kingsley Idehen
On 9/3/15 1:03 PM, David Booth wrote:
> Side note: RDF/XML was the first RDF serialization standardized, over
> 15 years ago, at a time when XML was all the buzz. Since then other
> serializations have been standardized that are far more human friendly
> to read and write, and easier for programmers to use, such as Turtle
> and JSON-LD.
>
> However, even beyond ease of use, one of the biggest problems with
> RDF/XML that I and others have seen over the years is that it misleads
> people into thinking that RDF is a dialect of XML, and it is not.  I'm
> sure this misconception was reinforced by the unfortunate depiction of
> XML in the foundation of the (now infamous) semantic web layer cake of
> 2001, which in hindsight is just plain wrong:
> http://www.w3.org/2001/09/06-ecdl/slide17-0.html
> (Admittedly JSON-LD may run a similar risk, but I think that risk is
> mitigated now by the fact that RDF is already more established in its
> own right.)
>
> I encourage all RDF publishers to use one of the other standard RDF
> formats such as Turtle or JSON-LD.  All commonly used RDF tools now
> support Turtle, and many or most already support JSON-LD.
>
> RDF/XML is not officially deprecated, but I personally hope that in
> the next round of RDF updates, we will quietly thank RDF/XML for its
> faithful service and mark it as deprecated.
>
> David Booth
>
>
Amen!

-- 
Regards,

Kingsley Idehen   
Founder & CEO 
OpenLink Software 
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this






Re: Please publish Turtle or JSON-LD instead of RDF/XML [was Re: Recommendation for transformation of RDF/XML to JSON-LD in a web browser?]

2015-09-03 Thread Kingsley Idehen
On 9/3/15 4:23 PM, Timothy W. Cook wrote:
> It is interesting to see the XML-haters want to abandon RDF/XML.  If
> you do not like it then do not use it.  But remember that there is not
> an RDF serialization that can fully accomplish structural integrity
> like XML Schema.  RDF/XML gives us the ability to mix both in one
> structurally and semantically complete document.

I think the trending sentiment boils down to downgrading it from its
exalted position rather than total removal :)

It certainly shouldn't be removed or dropped.

Kingsley
>
> On Thu, Sep 3, 2015 at 4:52 PM, John Walker <john.wal...@semaku.com> wrote:
>
> Hi Martynas,
>
> Indeed abandoning XML based serialisations would be foolish IMHO.
>
> Both RDF/XML and TriX can be extremely useful in certain
> circumstances.
>
> John
>
> On 3 Sep 2015, at 19:53, Martynas Jusevičius
> <marty...@graphity.org> wrote:
>
> > With due respect, I think it would be foolish to burn the bridges to
> > XML. The XML standards and infrastructure are very well developed,
> > much more so than JSON-LD's. We use XSLT extensively on RDF/XML.
> >
> > Martynas
> > graphityhq.com
> >
> > On Thu, Sep 3, 2015 at 8:03 PM, David Booth <da...@dbooth.org
> <mailto:da...@dbooth.org>> wrote:
> >> Side note: RDF/XML was the first RDF serialization
> standardized, over 15
> >> years ago, at a time when XML was all the buzz. Since then other
> >> serializations have been standardized that are far more human
> friendly to
> >> read and write, and easier for programmers to use, such as
> Turtle and
> >> JSON-LD.
> >>
> >> However, even beyond ease of use, one of the biggest problems
> with RDF/XML
> >> that I and others have seen over the years is that it misleads
> people into
> >> thinking that RDF is a dialect of XML, and it is not.  I'm sure
> this
> >> misconception was reinforced by the unfortunate depiction of
> XML in the
> >> foundation of the (now infamous) semantic web layer cake of
> 2001, which in
> >> hindsight is just plain wrong:
> >> http://www.w3.org/2001/09/06-ecdl/slide17-0.html
> >> (Admittedly JSON-LD may run a similar risk, but I think that
> risk is
> >> mitigated now by the fact that RDF is already more established
> in its own
> >> right.)
> >>
> >> I encourage all RDF publishers to use one of the other standard
> RDF formats
> >> such as Turtle or JSON-LD.  All commonly used RDF tools now
> support Turtle,
> >> and many or most already support JSON-LD.
> >>
> >> RDF/XML is not officially deprecated, but I personally hope
> that in the next
> >> round of RDF updates, we will quietly thank RDF/XML for its
> faithful service
> >> and mark it as deprecated.
> >>
> >> David Booth
> >>
> >
>
>
>
>
> -- 
>
> 
> Timothy Cook
> LinkedIn Profile: http://www.linkedin.com/in/timothywaynecook
> MLHIM http://www.mlhim.org
>


-- 
Regards,

Kingsley Idehen   
Founder & CEO 
OpenLink Software 
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this





Re: [Dbpedia-discussion] [ANNOUNCE] Fact Extraction from Wikipedia Text datasets released

2015-09-02 Thread Kingsley Idehen
On 9/2/15 2:33 PM, Marco Fossati wrote:
> [Begging pardon if you read this multiple times]
>
> The Italian DBpedia chapter, on behalf of the whole DBpedia Association, 
> is thrilled to announce the release of new datasets extracted from 
> Wikipedia text.
>
> This is the outcome of an outstanding Google Summer of Code 2015 
> project, which implements NLP techniques to acquire structured facts 
> from a textual corpus.
>
> The approach has been tested on the soccer use case, with the Italian 
> Wikipedia as input.
>
> The datasets are publicly available at:
> http://it.dbpedia.org/downloads/fact-extraction/
>
> and loaded into the SPARQL endpoint at:
> http://it.dbpedia.org/sparql
>
> You can check out this article for more details:
> http://it.dbpedia.org/2015/09/meno-chiacchiere-piu-fatti-una-marea-di-nuovi-dati-estratti-dal-testo-di-wikipedia/?lang=en
>
> If you feel adventurous, you can fork the codebase at:
> https://github.com/dbpedia/fact-extractor
>
> Get in touch with Marco at foss...@fbk.eu for everything else.
>
> Best regards,
> Marco Fossati

Awesome !


-- 
Regards,

Kingsley Idehen   
Founder & CEO 
OpenLink Software 
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this






Re: Discovering a query endpoint associated with a given Linked Data resource

2015-08-27 Thread Kingsley Idehen

On 8/27/15 9:18 AM, simon@csiro.au wrote:


There is the SPARQL Service Description vocabulary.

http://www.w3.org/TR/sparql11-service-description/



I recently tweeted about SPARQL endpoint lists [1][2] from our URIBurner 
Linked Open Data space:


[1] https://twitter.com/kidehen/status/635051731299254272
[2] https://twitter.com/kidehen/status/634346434666569728
[3] http://linkeddata.uriburner.com/c/9DX7S6Q4 -- Paged SPARQL Endpoint 
Listing
[4] http://linkeddata.uriburner.com/c/9CXT67A7 -- Attributes of SPARQL 
Endpoints known to URIBurner.
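For completeness, once an endpoint's self-description (per the SPARQL 1.1 Service Description vocabulary mentioned above) has been loaded into a queryable dataset, it can be probed with a query along these lines (a sketch, not tied to any specific endpoint):

```sparql
# Sketch: list services, their endpoint URLs, and any advertised
# features from loaded SPARQL 1.1 Service Description data.
PREFIX sd: <http://www.w3.org/ns/sparql-service-description#>
SELECT ?service ?endpoint ?feature
WHERE {
  ?service a sd:Service ;
           sd:endpoint ?endpoint .
  OPTIONAL { ?service sd:feature ?feature }
}
```

Per the spec, a conforming endpoint should return such a description when its own URI is dereferenced with an RDF media type in the Accept header.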


Kingsley


*From:* Nandana Mihindukulasooriya [mailto:nmihi...@fi.upm.es]
*Sent:* Wednesday, 26 August 2015 6:46 PM
*To:* public-lod <public-lod@w3.org>
*Subject:* Discovering a query endpoint associated with a given Linked 
Data resource


Hi,

Is there a standard or widely used way of discovering a query endpoint 
(SPARQL/LDF) associated with a given Linked Data resource?


I know that a client can use the follow your nose and related link 
traversal approaches such as [1], but if I wonder if it is possible to 
have a hybrid approach in which the dereferenceable Linked Data 
resources that optionally advertise query endpoint(s) in a standard 
way so that the clients can perform queries on related data.


To clarify the use case a bit, when a client dereferences a resource 
URI it gets a set of triples (an RDF graph) [2].  In some cases, it 
might be possible that the returned graph could be a subgraph of a 
named graph / default graph of an RDF dataset. The client wants to 
discover if a query endpoint that exposes the relevant dataset, if one 
is available.


For example, something like the following using the search link 
relation [3].


--

HEAD /resource/Sri_Lanka HTTP/1.1

Host: dbpedia.org

--

HTTP/1.1 200 OK

Link: <http://dbpedia.org/sparql>; rel="search"; type="sparql",
<http://fragments.dbpedia.org/2014/en#dataset>; rel="search"; type="ldf"

... other headers ...

--

Best Regards,

Nandana

[1] 
http://swsa.semanticweb.org/sites/g/files/g524521/f/201507/DissertationOlafHartig_0.pdf


[2] 
http://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#section-rdf-graph


[3] http://www.iana.org/assignments/link-relations/link-relations.xhtml




--
Regards,

Kingsley Idehen 
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this





Re: UK Open Data data vignette

2015-08-15 Thread Kingsley Idehen

On 8/14/15 3:31 PM, Tom Heath wrote:

@Kingsley: I'll check whether we used contact details for OpenLinkSW
or OLF (either our bad on the contact details or your bad on not
replying;)

Hi Tom,

In regards to mail sent to us (hopefully not OLF), to whom would that 
have been directed? Typically, unsigned emails don't get high priority 
and are most likely to be false positives re. spam filtering :)


I would encourage everyone to sign their emails, bearing in mind the 
fact that Linked Data enables a powerful PKI-based mechanism for 
verifiable identity that fixes problems with the centralized Certificate 
Authority infrastructure of yore [1][2][3].


Links:

[1] 
http://kidehen.blogspot.com/2014/04/importance-of-signing-encrypting-emails.html 
-- Importance of sending digitally signed emails
[2] 
http://kidehen.blogspot.com/2015/06/sending-digitally-signed-email-using.html 
-- Sending digitally signed emails using Thunderbird
[3] http://kidehen.blogspot.com/2014/05/youid-for-ios-and-android.html 
-- Digital Identity Generator for iOS & Android.



--
Regards,

Kingsley Idehen 
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this






Re: UK Open Data data vignette

2015-08-04 Thread Kingsley Idehen

On 8/4/15 12:00 PM, Kingsley Idehen wrote:


BTW -- who actually handles editing of the original spreadsheet? The 
information on OpenLink is really messed up. The ultimate 
demonstration of identity and identifiers gone very wrong. Half of the 
description is based on OpenLink Financials and the other half 
OpenLink Software :( 


Corrected spreadsheet for the folks at ODI : 
https://docs.google.com/spreadsheets/d/15euLaQSgQfavuKisl2q7omRxsGtJQ4P4fiLVxz8ySuE/edit#gid=0


--
Regards,

Kingsley Idehen 
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this






Re: UK Open Data data vignette

2015-08-04 Thread Kingsley Idehen

On 8/4/15 12:09 PM, Hugh Glaser wrote:

Yeah, the Seme4 data ain't too good either :-)

Try Tom (Tom Heath tom.he...@theodi.org) at the ODI.


Okay, Tom please take note.

In the meantime, I've cloned and published the original spreadsheet [1]. 
Naturally, I've also made a CSV dump then then used LOD Refine to knock 
up mappings [2], a project [3], and a Turtle doc [4], all of which I've 
published via Dropbox [5] (mounted to my person ODS-Briefcase) and 
slurped into URIBurner [6].


Here are some notes, using nanotation [7], that provide additional 
provenance while also using this post as a Linked Open Data source:


{

<>
   a foaf:Document, schema:WebPage ;
   rdfs:label "UK Open Data Vignette Discussion on LOD Mailing List" ;
   schema:about
<https://docs.google.com/spreadsheets/d/15euLaQSgQfavuKisl2q7omRxsGtJQ4P4fiLVxz8ySuE/>,
<http://kingsley.idehen.net/DAV/home/kidehen/Public/DropBox/Public/Linked%20Data%20Resources/LODRefine/odi-company-directory-mappings.txt>,
<http://kingsley.idehen.net/DAV/home/kidehen/Public/DropBox/Public/Linked%20Data%20Resources/LODRefine/ODI-Open-Data-Company-Directory.openrefine.tar>,
<http://kingsley.idehen.net/DAV/home/kidehen/Public/DropBox/Public/Linked%20Data%20Resources/LODRefine/ODI-Open-Data-Company-Directory.ttl>,
<http://kingsley.idehen.net/DAV/home/kidehen/Public/DropBox/Public/Linked%20Data%20Resources/LODRefine/>,
<http://linkeddata.uriburner.com/c/9DD72QAV> ;
   skos:related
<https://docs.google.com/spreadsheets/d/1xwxNIaxXSEMLktb-oo1UoYTd5WMPFkRLytRVVBr5OMA>,
<http://opendatacompanies.data.seme4.com> ;
   dcterms:created "2015-08-04"^^xsd:date ;
   schema:author
<http://kingsley.idehen.net/dataspace/person/kidehen#this> .

<https://docs.google.com/spreadsheets/d/15euLaQSgQfavuKisl2q7omRxsGtJQ4P4fiLVxz8ySuE>
   a foaf:Document, schema:WebPage ;
   rdfs:label "Revised ODI Company Directory Spreadsheet (corrected
OpenLink's entry)" .

<http://kingsley.idehen.net/DAV/home/kidehen/Public/DropBox/Public/Linked%20Data%20Resources/LODRefine/odi-company-directory-mappings.txt>
   a foaf:Document, schema:WebPage ;
   rdfs:label "LOD Refine RDF Skeleton (Mappings from Relational Tables
to Relational Predicate Graphs)" .

<http://kingsley.idehen.net/DAV/home/kidehen/Public/DropBox/Public/Linked%20Data%20Resources/LODRefine/ODI-Open-Data-Company-Directory.openrefine.tar>
   a foaf:Document ;
   rdfs:label "LOD Refine Project [TAR Archive]" .

<http://kingsley.idehen.net/DAV/home/kidehen/Public/DropBox/Public/Linked%20Data%20Resources/LODRefine/ODI-Open-Data-Company-Directory.ttl>
   a foaf:Document, schema:WebPage ;
   rdfs:label "Turtle Document Describing ODI Company Directory" ;
   rdfs:comment "Generated via LOD Refine using RDF Skeleton Mappings" ;
   skos:related
<http://kingsley.idehen.net/DAV/home/kidehen/Public/DropBox/Public/Linked%20Data%20Resources/LODRefine/odi-company-directory-mappings.txt> .

<http://kingsley.idehen.net/DAV/home/kidehen/Public/DropBox/Public/Linked%20Data%20Resources/LODRefine/>
   a ldp:BasicContainer, foaf:Document, sioc:Container, schema:WebPage ;
   rdfs:label "Dropbox Folder mounted to my ODS-Briefcase" ;
   rdfs:comment "Folder containing map of LOD Refine artifacts used
to produce Turtle document." ;
   skos:related
<http://kingsley.idehen.net/DAV/home/kidehen/Public/DropBox/Public/Linked%20Data%20Resources/LODRefine/ODI-Open-Data-Company-Directory.ttl>,
<http://kingsley.idehen.net/DAV/home/kidehen/Public/DropBox/Public/Linked%20Data%20Resources/LODRefine/ODI-Open-Data-Company-Directory.openrefine.tar>,
<http://kingsley.idehen.net/DAV/home/kidehen/Public/DropBox/Public/Linked%20Data%20Resources/LODRefine/odi-company-directory-mappings.txt> .

<http://linkeddata.uriburner.com/c/9DD72QAV>
   a foaf:Document, schema:WebPage ;
   rdfs:label "5-Star Linked Data rendition of ODI Company Directory
deployed via URIBurner" .

}


Links:

[1] 
https://docs.google.com/spreadsheets/d/15euLaQSgQfavuKisl2q7omRxsGtJQ4P4fiLVxz8ySuE 
-- Revised Spreadsheet (corrected OpenLink's entry)
[2] 
http://kingsley.idehen.net/DAV/home/kidehen/Public/DropBox/Public/Linked%20Data%20Resources/LODRefine/odi-company-directory-mappings.txt
[3] 
http://kingsley.idehen.net/DAV/home/kidehen/Public/DropBox/Public/Linked%20Data%20Resources/LODRefine/ODI-Open-Data-Company-Directory.openrefine.tar 
-- LOD Refine Project
[4] 
http://kingsley.idehen.net/DAV/home/kidehen/Public/DropBox/Public/Linked%20Data%20Resources/LODRefine/ODI-Open-Data-Company-Directory.ttl 
-- use of hash based HTTP URIs for deployment
[5] 
http://kingsley.idehen.net/DAV/home/kidehen/Public/DropBox/Public/Linked%20Data%20Resources/LODRefine/ 
-- items 2-4 from Dropbox mounted to my ODS-Briefcase
[6] http://linkeddata.uriburner.com/c/9DD72QAV -- Linked Data rendition 
deployed via URIBurner
[7] http://kidehen.blogspot.com/2014/07/nanotation.html -- About 
Nanotation (embedding RDF statements wherever text is accepted, in a 
nutshell).




On 04/08/2015 17:00, Kingsley Idehen wrote:

On 8/4/15 11:35 AM, Hugh Glaser wrote

Re: UK Open Data data vignette

2015-08-04 Thread Kingsley Idehen

On 8/4/15 11:35 AM, Hugh Glaser wrote:
The ODI (http://theodi.org) has published a research report, "Research:
Open data means business"
(http://theodi.org/open-data-means-business), along with a Google
sheet of Open Data Companies
(https://docs.google.com/spreadsheets/d/1xwxNIaxXSEMLktb-oo1UoYTd5WMPFkRLytRVVBr5OMA). 



It seemed a nice idea to map the companies and make the data 
accessible as 5 * Open Data at http://opendatacompanies.data.seme4.com


So we have loaded the company data into a Linked Data store with a 
SPARQL endpoint (http://opendatacompanies.data.seme4.com/sparql/), and 
made the URIs resolve, such as 
http://opendatacompanies.data.seme4.com/id/company/od001


We have also used the UK postcodes (and OS services) to plot the 
companies on a map: http://opendatacompanies.data.seme4.com/services/map/


There you go.
This is all a little rough-and-ready, just to see what it looks like, 
and see if anyone wants to use it. If you do, and want any changes, 
please ask.


Actually, if anyone wanted to produce a similar GSheet for other 
places, we could suck that in too.


Best
Hugh


Nice work!

BTW -- who actually handles editing of the original spreadsheet? The 
information on OpenLink is really messed up. The ultimate demonstration 
of identity and identifiers gone very wrong. Half of the description is 
based on OpenLink Financials and the other half OpenLink Software :(


--
Regards,

Kingsley Idehen 
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this






Re: [ANN] beta release of 'Linked Data Reactor' for component-based LD application development

2015-06-23 Thread Kingsley Idehen

On 6/23/15 9:07 AM, Ali Khalili wrote:

Dear all,
we are happy to announce the beta release of our LD-R (Linked Data 
Reactor) framework for developing component-based Linked Data 
applications:


http://ld-r.org

The LD-R framework combines several state-of-the-art Web technologies 
to realize the vision of Linked Data components.
LD-R is centered around Facebook's ReactJS and Flux architecture for 
developing Web components with single directional data flow.
LD-R offers the first Isomorphic Semantic Web application (i.e. using 
the same code for both client and server side) by 
dehydrating/rehydrating states between the client and server.


The main features of LD-R are:
- User Interface as first class citizen.
- Isomorphic SW application development
- Reuse of current Web components within SW apps.
- Sharing components and configs rather than application code.
- Separation of concerns.
- Flexible theming for SW apps.

This is the beta release of LD-R and we are working on enhancing the 
framework. Therefore, your feedback is more than welcome.


For more information, please check the documentation on 
http://ld-r.org or refer to the Github repository at 
http://github.com/ali1k/ld-r


Ali Khalili
Knowledge Representation and Reasoning (KRR) research group
The Network Institute
Computer Science Department
VU University Amsterdam
http://krr.cs.vu.nl/


Ali,

Great stuff !

BTW -- What credentials should be used with the demo? If signup is 
required, what's the signup verification turnaround time?


--
Regards,

Kingsley Idehen 
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this






Re: [Ann] QueryVOWL

2015-06-11 Thread Kingsley Idehen

On 6/11/15 8:25 AM, Steffen Lohmann wrote:

Hi all,

last week we presented a prototype at ESWC that implements our 
VOWL-based visual query language (QueryVOWL) for SPARQL-based Linked 
Data querying. Check it out at: http://queryvowl.visualdataweb.org


Note that it has mainly been developed to demonstrate the QueryVOWL 
approach and should not be considered a mature tool (e.g., it contains 
some known bugs). It does also not implement all current features of 
the visual language that are described at 
http://vowl.visualdataweb.org/queryvowl/v1/index.html


The web demo is configured for the DBpedia endpoint. It can only be 
used if the DBpedia endpoint is available and may slow down if many 
people access it simultaneously. For these cases, we also provide a 
short screencast of the tool. 


Great Stuff.

Please note, rather than depending solely on the DBpedia endpoint, why 
not consider the following backup endpoints which also include DBpedia data:


1. http://lod.openlinksw.com -- this is a cluster edition of Virtuoso 
that includes DBpedia data in the same http://dbpedia.org named graph
2. http://dbpedia-live.openlinksw.com -- a mirror with close to 
real-time update from Wikipedia.
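Since both mirrors keep the data in the http://dbpedia.org named graph, the same query can be scoped to that graph regardless of which endpoint it is sent to. A sketch (the resource and property choice are illustrative):

```sparql
# Sketch: scope a query to the DBpedia named graph; only the endpoint
# URL changes between the mirrors, the graph IRI stays the same.
SELECT ?label
FROM <http://dbpedia.org>
WHERE {
  <http://dbpedia.org/resource/Berlin>
      <http://www.w3.org/2000/01/rdf-schema#label> ?label .
  FILTER (lang(?label) = "en")
}
LIMIT 1
```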


--
Regards,

Kingsley Idehen 
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this






Re: Profiles in Linked Data

2015-05-12 Thread Kingsley Idehen

On 5/12/15 8:18 AM, Svensson, Lars wrote:

Kingsley,

On Monday, May 11, 2015 9:00 PM, Kingsley Idehen wrote:


We have to be careful here. RDF Language sentences/statements have a
defined syntax as per RDF Abstract Syntax i.e., 3-tuples organized in subject,
predicate, object based structure. RDF Shapes (as far as I know) has nothing to
do with the subject, predicate, object structural syntax of an RDF
statement/sentence. Basically, it's supposed to provide a mechanism for
constraining the entity type (class instances) of RDF statement's subject and
object, when creating RDF statements/sentences in documents. Think of this as
having more to do with what's regarded as data-entry validation and control, in
other RDBMS quarters.

The charter of the data shapes WG [1] says that the product of the RDF Data Shapes
WG will "enable the definition of graph topologies for interface specification, code
development, and data verification", so it's not _only_ about validation etc. My
understanding is that it's somewhat similar to XML schema and thus is essentially a 
description of the graph structure. As such, it can of course be used for validation, but 
that is only one purpose.


Terms from a vocabulary or ontology do not change the topology of an RDF 
statement represented as graph pictorial. Neither do additional 
statements that provide constraints on the subjects and objects of a 
predicate. It is still going to be an RDF 3-tuple (or triple).





The function of the profile I believe you (and others that support this) are
seeking has more to do with enabling clients and servers (that don't 
necessarily
understand or care about RDF's implicit semantics) exchange hints about the
nature of RDF document content (e.g., does it conform to Linked Data
principles re. entity naming [denotation + connotation] ).

No, my use of "profile" is really a "shape" in the sense of the data shapes WG. 
Some of their motivations are what I'm envisioning, too, e.g.

* Developers of each data-consuming application could define the shapes their 
software needs to find in each feed, in order to work properly, with optional 
elements it can use to work better.
* Developers of data-providing systems can read the shape definitions (and 
possibly related RDF Vocabulary definitions) to learn what they need to provide


Cut long story short, a profile hint is about the nature of the RDF content 
(in
regards to entity names and name interpretation), not its shape (which is
defined by RDF syntax).

OK, I stand corrected: My question is: How can clients and servers negotiate 
shape information?


RDF data has one shape. Use of terms from a vocabulary or ontology doesn't 
change the shape of RDF document content.


Profiles are a means of representing preferences. Seeking terms from a 
specific vocabulary or ontology in regards to RDF document content is an 
example of a preference.


You can use rel=profile as a preference indicator via HTTP message 
exchanges between clients and servers.
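For example, a client could advertise such a preference on an outgoing request like this (a minimal sketch using Python's standard library; the profile URI is hypothetical, and the request is built but never sent):

```python
import urllib.request

# Hypothetical profile URI; any dereferenceable document describing the
# preferred vocabulary could stand in here.
PROFILE = "http://example.org/profiles/foaf"

# Build (without sending) a GET request whose Link header advertises the
# client's profile preference, per the RFC 6906 "profile" link relation.
req = urllib.request.Request(
    "http://example.org/some/resource",
    headers={
        "Accept": "text/turtle",
        "Link": f'<{PROFILE}>; rel="profile"',
    },
)

print(req.get_header("Link"))
```

A server that doesn't understand the hint can simply ignore it, which is exactly the non-destructive behavior RFC 6906 intends.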


--
Regards,

Kingsley Idehen 
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this




smime.p7s
Description: S/MIME Cryptographic Signature


Re: Profiles in Linked Data

2015-05-11 Thread Kingsley Idehen

On 5/11/15 11:39 AM, Svensson, Lars wrote:

John,

On Friday, May 08, 2015 10:05 PM, John Walker wrote:


Hi Lars


 On May 8, 2015 at 5:44 PM Svensson, Larsl.svens...@dnb.de  wrote:
 
 
 John, Kingsley,
 
 I wrote:

  OK, I can understand that. Does that mean that if I have under the same URI
  serve different representations (e. g. rdf/xml, turtle and xhtml+RDFa) all those
  representations must return exactly the same triples, or would it be allowed to
  use schema.org in the RDFa, W3C Organisation Ontology for rdf/xml and foaf
  when returning turtle? After all it's different descriptions of the same resource.

 
 John wrote:
 

  My take on this is each representation (with negotiation only on format via
  HTTP Accept header) *should* contain the same set of RDF statements (triples).
  Also one could define a different URL for each representation which can be
  linked to with Content-Location in the HTTP headers.
  
  Were you to introduce an additional (orthogonal) way to negotiate a certain
  profile, this would be orthogonal to the format. Following on from above, one
  could then have a separate URL for each format-profile combination.

 
 Kingsley wrote:
 

  Yes.
  
  For the sake of additional clarity, how about speaking about documents and
  content-types rather than representation which does inevitably conflate key
  subtleties, in regards to RDF (Language, Notations, and Serialization Formats)?
  
  The terminology is fine with me, as long as we don't forget the entities we
  describe.

 
 So to repeat my question in another mail: I have an entity described by a

(generic) URI. Then I have three groups of documents describing that entity, the
first uses schema.org, the second group uses org ontology and the third uses
foaf. All documents are available as RDF/XML, Turtle and xhtml+RDFa. How
does a client that knows only the generic URI for the resource tell the server
that it prefers foaf in turtle and what does the server answer?

I believe that the two options are in HTTP headers or in the query string part of
the URI. In the latter case I guess you would say that is no longer the generic
URI.

I note in the JSON-LD spec it is stated that "A profile does not change the 
semantics of the resource representation when processed without profile 
knowledge, so that clients both with and without knowledge of a profiled 
resource can safely use the same representation", which would no longer hold 
true if the profile parameter were used to negotiate which vocabulary/shape 
is used.

Yes, I noted that text in RFC 6906, too, but assumed that unchanged semantics of 
the resource meant that both representations still describe the same thing (which 
they do in my case). Would a change in description vocabulary really mean that I change 
the semantics of the description?

If so, I'd be happy to call it not a profile, but a shape instead (thus 
adopting the vocabulary of RDF data shapes).
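A minimal sketch of separating such a profile media-type parameter from the base type, so it can be inspected without affecting how the body is processed (the JSON-LD profile URI below is purely illustrative, and the naive split assumes no ";" inside quoted values):

```python
# Split a media-type value into its base type and parameters, so a
# profile parameter (as JSON-LD defines for application/ld+json) can be
# read on its own. Naive parsing: assumes no ";" inside quoted values.
def parse_media_type(value):
    parts = [p.strip() for p in value.split(";")]
    mtype, params = parts[0], {}
    for part in parts[1:]:
        if "=" in part:
            k, v = part.split("=", 1)
            params[k.strip()] = v.strip().strip('"')
    return mtype, params

mtype, params = parse_media_type(
    'application/ld+json;profile="http://www.w3.org/ns/json-ld#expanded"'
)
print(mtype, params.get("profile"))
```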


Lars,

We have to be careful here. RDF Language sentences/statements have a 
defined syntax as per RDF Abstract Syntax i.e., 3-tuples organized in 
subject, predicate, object based structure. RDF Shapes (as far as I 
know) has nothing to do with the subject, predicate, object structural 
syntax of an RDF statement/sentence. Basically, it's supposed to provide 
a mechanism for constraining the entity type (class instances) of RDF 
statement's subject and object, when creating RDF statements/sentences 
in documents. Think of this as having more to do with what's regarded as 
data-entry validation and control, in other RDBMS quarters.


The function of the profile I believe you (and others that support 
this) are seeking has more to do with enabling clients and servers (that 
don't necessarily understand or care about RDF's implicit semantics) to 
exchange hints about the nature of RDF document content (e.g., does it 
conform to Linked Data principles re. entity naming [denotation + 
connotation] ).


To cut a long story short, a profile hint is about the nature of the RDF 
content (in regards to entity names and name interpretation), not its 
shape (which is defined by RDF syntax).




Re: Profiles in Linked Data

2015-05-11 Thread Kingsley Idehen

On 5/11/15 11:54 AM, Svensson, Lars wrote:

Kingsley,

On Saturday, May 09, 2015 12:07 AM, Kingsley Idehen wrote:
[...]

So to repeat my question in another mail: I have an entity described by a
(generic) URI.

You have an entity identified by an IRI in RDF. If you are adhering to Linked 
Open
Data principles, said IRI would take the form of an HTTP URI.



  Then I have three groups of documents describing that entity, the first uses
schema.org, the second group uses org ontology and the third uses foaf.

You have an entity identified by an HTTP URI. The dual nature of this kind of
URI enables it to function as a Name. The fundamental quality (attribute,
property, feature) of a Name is that it is interpretable to meaning, i.e., a Name
has a dual denotation-and-connotation feature, which is what an HTTP URI
is all about; the only difference is that denotation-connotation (i.e., name
interpretation) occurs in the hypermedia medium provided by an HTTP network
(e.g., the World Wide Web). Net effect: the HTTP URI resolves to a document at a
location on the Web (i.e., a document at a location, which is the URL aspect of
this duality).

OK. I have an http URI that denotes an entity. Depending on the server 
configuration and what accept-headers I provide, the http dereferencing 
function returns a document at a location.


All documents are available as RDF/XML, Turtle and xhtml+RDFa. How does a
client that knows only the generic URI for the resource tell the server that it
prefers foaf in turtle and what does the server answer?

It can do stuff like this:

curl -L -H Accept: text/xml;q=0.3,text/html;q=1.0,text/turtle;q=0.5,*/*;q=0.3 
-
H Negotiate: * -I http://dbpedia.org/resource/Analytics

OK, I can see how setting the Accept-header negotiates the media type. If I 
understand correctly, the Negotiate-header gives the server and intermediate 
proxies carte blanche to negotiate things any way they prefer. I don't see 
any header that tells the server what profile/shape/vocabulary the client 
prefers.


That's about a client negotiating different types of document content 
using a preference algorithm that is integral to Transparent Content 
Negotiation. It has nothing to do with a preferred vocabulary of terms 
e.g., dcterms vs schema.org in regards to terms used to describe 
something using RDF Language based sentences/statements.


If you want an RDF based entity description, where the terms used come 
from a specific vocabulary, that's where you could leverage a query 
language e.g., SPARQL. Of course, there are those that don't want to use 
SPARQL which could then lead to yet another kind of profile relation 
object, but ultimately such use will only be the equivalent of ignoring 
the existence of multiplication and division in regards to 
arithmetic operations.
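As a sketch of that SPARQL route, a vocabulary preference becomes an explicit filter in the query itself rather than a negotiated profile (the entity URI, and the idea of posting this to the publisher's endpoint, are hypothetical):

```python
# Hypothetical entity URI; a client wanting only FOAF-term statements
# about it can say so directly in a SPARQL CONSTRUCT query.
ENTITY = "http://example.org/id/lars"

query = f"""
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
CONSTRUCT {{ <{ENTITY}> ?p ?o }}
WHERE {{
  <{ENTITY}> ?p ?o .
  # Keep only statements whose predicate comes from the FOAF vocabulary.
  FILTER(STRSTARTS(STR(?p), "http://xmlns.com/foaf/0.1/"))
}}
"""
print(query)
```

The resulting query-results URL then plays the role a vocabulary-specific profile would otherwise play.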


Conclusion: if folks want to build profile relations for selecting RDF 
content constructed using terms from a specific vocabulary, that's fine 
too, even though its utility would simply boil down to navigating 
politics.



HTTP/1.1 303 See Other
Date: Tue, 05 May 2015 16:01:06 GMT
Content-Type: text/turtle; qs=0.35
Content-Length: 0
Connection: keep-alive
Server: Virtuoso/07.20.3213 (Linux) i686-generic-linux-glibc212-64  VDB
TCN: choice
Vary: negotiate,accept
Alternates: {/data/Analytics.atom 0.50 {type application/atom+xml}},
{/data/Analytics.jrdf 0.60 {type application/rdf+json}},
{/data/Analytics.jsod 0.50 {type application/odata+json}},
{/data/Analytics.json 0.60 {type application/json}},
{/data/Analytics.jsonld 0.50 {type application/ld+json}},
{/data/Analytics.n3 0.80 {type text/n3}}, {/data/Analytics.nt 0.80
{type text/rdf+n3}}, {/data/Analytics.ttl 0.70 {type text/turtle}},
{/data/Analytics.xml 0.95 {type application/rdf+xml}}

Given this Alternates-header: how can  a client figure out what those 
representations look like (except for their media type)?


Your Web Browser (a client) understands text/html. A Browser and other 
HTTP clients apply the same content handling rules to other content 
types (e.g., those related to images, sound, and video etc..) .






Link:
http://mementoarchive.lanl.gov/dbpedia/timegate/http://dbpedia.org/resour
ce/Analytics; rel=timegate
Location: http://dbpedia.org/data/Analytics.ttl
Expires: Tue, 12 May 2015 16:01:06 GMT
Cache-Control: max-age=604800

Best,

Lars





Re: Profiles in Linked Data

2015-05-11 Thread Kingsley Idehen

On 5/11/15 4:22 PM, Paul Houle wrote:

I think there's the issue of the data format but also the problem that

http://someorganization.org/namespace/something

might represent someorganization.org's viewpoint about :something if you 
are lucky. On the other hand you might want to know what somebody else 
thinks about that thing -- i.e. you might want to follow a dbpedia 
identifier to get schema.org information on it that somebodyelse.net 
has compiled on the dbpedia universe.


I.e. fundamentally dereferencing has to be extended to support the 
idea of what does source X think about resource Y?


That's what you get via a SPARQL Query Results URL. At the end of the 
day, the aforementioned URL identifies a document comprised of content 
that's dynamically generated from relational variables in the query body 
combined with solution output directives associated with select, 
construct, or describe query types.



There is also the need to recognize that dereferencing has created a 
lot of confusion.


Yes, there is still mass confusion about HTTP URI based Names :(

As you know, Names have denotation and connotation duality, i.e., a Name 
by definition has an interpretation that describes its referent. Thus,
identifying anything with an HTTP URI based name implies it is 
interpretable on an HTTP Network.


I suspect some people have been intimidated from using RDF because 
they think that having names based on URLs means that they *have* to 
publish everything on the web.

Yes.

As to the scope of HTTP Name interpretation (public or private), that 
should really boil down to resource access controls [1] that are also 
driven by entity relationship type semantics.


Links:

[1] http://www.w3.org/wiki/WebAccessControl -- Web Access Controls 
related Wiki


[2] 
http://www.slideshare.net/kidehen/how-virtuoso-enables-attributed-based-access-controls 
-- How fine-grained ACLs are implemented (using RDF) in Virtuoso .


Kingsley



On Mon, May 11, 2015 at 3:14 PM, Kingsley Idehen 
kide...@openlinksw.com mailto:kide...@openlinksw.com wrote:


On 5/11/15 11:54 AM, Svensson, Lars wrote:

Kingsley,

On Saturday, May 09, 2015 12:07 AM, Kingsley Idehen wrote:
[...]

So to repeat my question in another mail: I have an
entity described by a
(generic) URI.

You have an entity identified by a IRI in RDF. If you are
adhering to Linked Open
Data principles, said IRI would take the form of an HTTP URI.


  Then I have three groups of documents describing
that entity, the first uses
schema.org http://schema.org, the second group uses
org ontology and the third uses foaf.

You have an entity identified by an HTTP URI. The dual
nature of this kind of
URI enables it function as a Name. The fundamental quality
(attribute,
property, feature) of a Name is that its interpretable to
meaning ie., a Name
also has a dual (denotation and connotation feature) which
is what an HTTP URI
is all about, the only different is that
denotation-connotation (i.e. name
interpretation) occurs in the hypermedia medium provided
by an HTTP network
(e.g. World Wide Web). Net effect, the HTTP URI resolves
to and document at a
location on the Web (i.e, a document at a location, which
is the URL aspect of
this duality).

OK. I have an http URI that denotes an entity. Depending on
the server configuration and what accept-headers I provide,
the http dereferencing function returns a document at a location.

All documents are available as RDF/XML, Turtle and
xhtml+RDFa. How does a
client that knows only the generic URI for the
resource tell the server that it
prefers foaf in turtle and what does the server answer?

It can do stuff like this:

curl -L -H Accept:
text/xml;q=0.3,text/html;q=1.0,text/turtle;q=0.5,*/*;q=0.3 -
H Negotiate: * -I http://dbpedia.org/resource/Analytics

OK, I can see how setting the Accept-header negotiates the
media type. If I understand correctly, the Negotiate-header
gives the server and intermediate proxies a carte blanche to
negotiate things any way they prefer. I don't see any header
that tells the server what profile/shape/vocabulary the client
prefers.


That's about a client negotiating different types of document
content using a preference algorithm that is integral to
Transparent Content Negotiation. It has nothing to do with a
preferred vocabulary of terms e.g., dcterms

Re: Profiles in Linked Data

2015-05-08 Thread Kingsley Idehen

On 5/8/15 10:24 AM, john.wal...@semaku.com wrote:

Hi Lars

*From:* Svensson, Lars mailto:l.svens...@dnb.de
*Sent:* ‎Friday‎, ‎8‎ ‎May‎ ‎2015 ‎11‎:‎05
*To:* Martynas Jusevičius mailto:marty...@graphity.org
*Cc:* public-lod@w3.org mailto:public-lod@w3.org

Martynas,

 To my understanding, in a resource-centric model resources have a
 description containing statements available about them.

 When you try split it into parts, then you involve documents or graphs
 and go beyond the resource-centric model.

OK, I can understand that. Does that mean that if I have under the 
same URI serve different representations (e. g. rdf/xml, turtle and 
xhtml+RDFa) all those representations must return exactly the same 
triples, or would it be allowed to use schema.org in the RDFa, W3C 
Organisation Ontology for rdf/xml and foaf when returning turtle? 
After all it's different descriptions of the same resource.


My take on this is each representation (with negotiation only on 
format via HTTP Accept header) *should* contain the same set of RDF 
statements (triples).
Also one could define a different URL for each representation which 
can be linked to with Content-Location in the HTTP headers.


Were you to introduce an additional (orthogonal) way to negotiate a 
certain profile, this would be orthogonal to the format. Following on 
from above, one could then have a separate URL for each format-profile 
combination.


Yes.

For the sake of additional clarity, how about speaking about documents 
and content-types rather than representation which does inevitably 
conflate key subtleties, in regards to RDF (Language, Notations, and 
Serialization Formats)?


Links:

{
  
  a schema:WebPage ;
  schema:mentions http://dbpedia.org/resource/Serialization,
http://dbpedia.org/resource/Notation,
http://dbpedia.org/resource/Language ;
  skos:related 
http://linkeddata.uriburner.com/about/html/https/lists.w3.org/Archives/Public/public-lod/2015May/0065.html, 


http://linkeddata.uriburner.com/about/html/https/lists.w3.org/Archives/Public/public-lod/2015May/0058.html,
http://lists.w3.org/Archives/Public/public-lod/2010Nov/0391.html .

http://lists.w3.org/Archives/Public/public-lod/2010Nov/0391.html
a schema:WebPage ;
rdfs:label Rough draft poem: Document what art thou? .

}



Re: Profiles in Linked Data

2015-05-08 Thread Kingsley Idehen
a schema:WebPage, foaf:Document;
rdfs:label JSON-LD Variant of this Document ;
schema:creator https://plus.google.com/+KingsleyIdehen/about#this ;
schema:about #you, #i, #kidehen-comment-1, #kidehen-comment-2, 
#kidehen-comment-3,

  #kidehen-comment-4;
rdfs:comment 
The statements that follow, constructed using Turtle 
(one of many notations for writing RDF Language statements)
can be serialized to persistent storage (what documents 
provide e.g., this mail exchange persisted to the mailing

list server) using a variety of data serialization formats.

The entities identified by #you and #i are the 
subjects or relations represented by RDF statements in the statements
that follow. Put differently, the entities identified 
by #you and #i are associated with a collection of attribute=value 
pairings

represented by the RDF statements that follow.

  .


## Description of You and I

#you
a foaf:Person ;
foaf:name Svensson Lars ;
foaf:mbox mailto:l.svens...@dnb.de .

#i
a foaf:Person ;
foaf:name Kingsley Idehen ;
foaf:mbox mailto:kide...@openlinksw.com ;
owl:sameAs https://plus.google.com/+KingsleyIdehen/about#this,
http://kingsley.idehen.net/dataspace/person/kidehen#this.

#kidehen-comment-1
a schema:Comment ;
rdfs:label Comment 1 ;
schema:about dbpedia:Notation, 
http://dbpedia.org/resource/Identity_(philosophy) ;

schema:comment 
The notation used to construct RDF Language 
sentence/statements in a document and the serialization formats used for 
persistent storage, are
distinct. Basically, the entities identified by the 
identifiers #you and #i can be described using identical 
sentences/statements using
different notations, and persisted to a variety of 
document types, none of which actually changes the nature of what #you 
and #i identify.

.

#kidehen-comment-2
a schema:Comment ;
rdfs:label Comment 2 ;
schema:about rdf:, owl:, dbpedia:Semantics ;
schema:comment 
To know that #i and 
https://plus.google.com/+KingsleyIdehen/about#this identify the same 
entity, an RDF Language processor
that understands relations semantics (e.g., those 
described by the OWL vocabulary of terms) simply applies reasoning and 
inference
to the semantics (meaning) of the owl:sameAs relation 
(which is about equivalence [1] rather than equality). Likewise, it 
could also
use the semantics of the foaf:mbox relation (which is 
inverse-functional in nature) to discern the fact that #i is 
identified by many

other identifiers.
.

#kidehen-comment-3
a schema:Comment ;
schema:about dbpedia:World_Wide_Web, 
dbpedia:Representational_state_transfer ;

rdfs:label Comment 3 ;
schema:comment
   
Conflating document content and identity issues is an 
unfortunate quirk that overshadows most REST and so called Web 
Programmer narratives.
Hence the problematic fixation with content-types 
(media types) rather than the more fine-grained issues of identity, 
identification,
relations, semantics, notation, and serialization which 
are handled by RDF. Naturally, we could continue to work on improving 
RDF literature

so that a lot of these issues become even clearer.
.

#kidehen-comment-4
a schema:Comment ;
rdfs:label Comment 4 ;
schema:about dbpedia:World_Wide_Web, rdf:, owl:, dbpedia:Semantics ;
schema:comment 
The nature of the entities identified by identifiers 
#you and #i are unaffected by the content-types of documents through 
which they
are described. What matters are the semantics of the 
relations (represented using sentences/statements) used to construct 
entity descriptions.

.
}


Hope this live example helps, in regards to understanding the issue at 
hand. Basically, what a document describes is distinct from the shape 
and form of its content.
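The owl:sameAs reasoning described in comment-2 can be sketched as a single inverse-functional-property "smush" over plain triples (tuples here; a real RDF/OWL reasoner would of course handle far more than this one rule):

```python
from itertools import combinations

# foaf:mbox is an inverse-functional property, so any two subjects
# sharing an mbox value denote the same entity (owl:sameAs).
triples = [
    ("#i", "foaf:mbox", "mailto:kide...@openlinksw.com"),
    ("https://plus.google.com/+KingsleyIdehen/about#this",
     "foaf:mbox", "mailto:kide...@openlinksw.com"),
]

def infer_same_as(triples, ifp="foaf:mbox"):
    # Group subjects by their value for the inverse-functional property.
    by_value = {}
    for s, p, o in triples:
        if p == ifp:
            by_value.setdefault(o, []).append(s)
    # Every pair of subjects sharing a value yields an owl:sameAs statement.
    return [(a, "owl:sameAs", b)
            for subjects in by_value.values()
            for a, b in combinations(subjects, 2)]

print(infer_same_as(triples))
```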




Re: Profiles in Linked Data

2015-05-08 Thread Kingsley Idehen

On 5/8/15 11:44 AM, Svensson, Lars wrote:

John, Kingsley,

I wrote:

OK, I can understand that. Does that mean that if I have under the same URI
serve different representations (e. g. rdf/xml, turtle and xhtml+RDFa) all those
representations must return exactly the same triples, or would it be allowed to
use schema.org in the RDFa, W3C Organisation Ontology for rdf/xml and foaf
when returning turtle? After all it's different descriptions of the same 
resource.

John wrote:
  

My take on this is each representation (with negotiation only on format via
HTTP Accept header) *should* contain the same set of RDF statements
(triples).
Also one could define a different URL for each representation which can be
linked to with Content-Location in the HTTP headers.

Were you to introduce an additional (orthogonal) way to negotiate a certain
profile, this would be orthogonal to the format. Following on from above, one
could then have a separate URL for each format-profile combination.

Kingsley wrote:


Yes.

For the sake of additional clarity, how about speaking about documents and
content-types rather than representation which does inevitably conflate key
subtleties, in regards to RDF (Language, Notations, and Serialization Formats)?

The terminology is fine with me, as long as we don't forget the entities we 
describe.

So to repeat my question in another mail: I have an entity described by a 
(generic) URI.


You have an entity identified by an IRI in RDF. If you are adhering to 
Linked Open Data principles, said IRI would take the form of an HTTP URI.



  Then I have three groups of documents describing that entity, the first uses 
schema.org, the second group uses org ontology and the third uses foaf.


You have an entity identified by an HTTP URI. The dual nature of this 
kind of URI enables it to function as a Name. The fundamental quality 
(attribute, property, feature) of a Name is that it is interpretable to 
meaning, i.e., a Name has a dual denotation-and-connotation feature, 
which is what an HTTP URI is all about; the only difference is that 
denotation-connotation (i.e., name interpretation) occurs in the 
hypermedia medium provided by an HTTP network (e.g., the World Wide Web). 
Net effect: the HTTP URI resolves to a document at a location on the Web 
(i.e., a document at a location, which is the URL aspect of this duality).



All documents are available as RDF/XML, Turtle and xhtml+RDFa. How does a 
client that knows only the generic URI for the resource tell the server that it 
prefers foaf in turtle and what does the server answer?


It can do stuff like this:

curl -L -H Accept: 
text/xml;q=0.3,text/html;q=1.0,text/turtle;q=0.5,*/*;q=0.3 -H 
Negotiate: * -I http://dbpedia.org/resource/Analytics

HTTP/1.1 303 See Other
Date: Tue, 05 May 2015 16:01:06 GMT
Content-Type: text/turtle; qs=0.35
Content-Length: 0
Connection: keep-alive
Server: Virtuoso/07.20.3213 (Linux) i686-generic-linux-glibc212-64 VDB
TCN: choice
Vary: negotiate,accept
Alternates: {/data/Analytics.atom 0.50 {type 
application/atom+xml}}, {/data/Analytics.jrdf 0.60 {type 
application/rdf+json}}, {/data/Analytics.jsod 0.50 {type 
application/odata+json}}, {/data/Analytics.json 0.60 {type 
application/json}}, {/data/Analytics.jsonld 0.50 {type 
application/ld+json}}, {/data/Analytics.n3 0.80 {type text/n3}}, 
{/data/Analytics.nt 0.80 {type text/rdf+n3}}, 
{/data/Analytics.ttl 0.70 {type text/turtle}}, 
{/data/Analytics.xml 0.95 {type application/rdf+xml}}
Link: 
http://mementoarchive.lanl.gov/dbpedia/timegate/http://dbpedia.org/resource/Analytics; 
rel=timegate

Location: http://dbpedia.org/data/Analytics.ttl
Expires: Tue, 12 May 2015 16:01:06 GMT
Cache-Control: max-age=604800
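Given such a response, a client can parse the RFC 2295 TCN Alternates header into variants itself (a rough sketch; the regex assumes the quote-stripped form the header takes in this thread, and the shortened ALTERNATES value below is an excerpt of the one shown above):

```python
import re

# Excerpt of the Alternates header above, in its quote-stripped form.
ALTERNATES = (
    "{/data/Analytics.atom 0.50 {type application/atom+xml}}, "
    "{/data/Analytics.ttl 0.70 {type text/turtle}}, "
    "{/data/Analytics.xml 0.95 {type application/rdf+xml}}"
)

def parse_alternates(value):
    # Each variant looks like: {URL qs {type media/type}}
    pat = re.compile(r"\{(\S+)\s+([\d.]+)\s+\{type\s+([^}]+)\}\}")
    return [(url, float(qs), mtype) for url, qs, mtype in pat.findall(value)]

def pick(variants, preferred_type):
    # Highest source quality among variants of the preferred media type.
    matching = [v for v in variants if v[2] == preferred_type]
    return max(matching, key=lambda v: v[1]) if matching else None

variants = parse_alternates(ALTERNATES)
print(pick(variants, "text/turtle"))  # ('/data/Analytics.ttl', 0.7, 'text/turtle')
```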





Best,

Lars





Re: Profiles in Linked Data

2015-05-08 Thread Kingsley Idehen

On 5/8/15 12:05 PM, Young,Jeff (OR) wrote:

Also note that Schema.org will have a solution in their next release:

http://sdo-gozer.appspot.com/mainEntityOfPage
http://sdo-gozer.appspot.com/mainEntity

Jeff


Jeff,

They already have:

1. schema:about -- Property
2. schema:url -- Property
3. schema:WebPage  -- Class

Kingsley

-Original Message-
From: Martynas Jusevičius [mailto:marty...@graphity.org]
Sent: Friday, May 08, 2015 11:58 AM
To: Svensson, Lars
Cc: Kingsley Idehen; public-lod@w3.org
Subject: Re: Profiles in Linked Data

I think foaf:primaryTopic/foaf:isPrimaryTopicOf is a good convention for 
linking
abstract concepts/physical things to documents about them.
We use it extensively in our datasets. For example:

   some/resource#this a bibo:Book ;
 foaf:isPrimaryTopicOf some/resource/dcat , some/resource/premis .

   some/resource/dcat a foaf:Document ;
 foaf:primaryTopic some/resource#this .

   some/resource/premis a foaf:Document ;
 foaf:primaryTopic some/resource#this .

Hope this helps.

Martynas
graphityhq.com

On Fri, May 8, 2015 at 5:47 PM, Svensson, Lars l.svens...@dnb.de wrote:

Kingsley,


Hope this live example helps, in regards to understanding the issue
at hand. Basically, what a document describes is distinct from the
shape and form of its content.

We're totally on the same page here, but I need a way to negotiate the shape

and the form of the description and that must in some way refer back to the
entity it describes.

Lars

Still trying to understand






Re: Profiles in Linked Data

2015-05-07 Thread Kingsley Idehen
/public-ldp-wg/2014Jan/0081.html -- LDP Thread 
from 2014 about RDF and profile relation.



Re: Profiles in Linked Data

2015-05-06 Thread Kingsley Idehen

On 5/6/15 11:04 AM, Svensson, Lars wrote:

All,

I am looking for a way to specify a profile when requesting a (linked data) 
resource. A profile in this case is orthogonal to the mime-type and is intended 
to specify e. g. the use of a specific RDF vocabulary to describe the data (I 
ask a repository for a list of datasets, specify that I want the data in turtle 
and also that I want the data dictionary described with DCAT and not with 
PREMIS). This is adding a new dimension to the traditional content-negotiation 
(mime-type, language, etc.).

I have not found a best practice for doing this but the following possibilities 
have crossed my mind:

1) Using the Link-Header to specify a profile
This uses profile as specified in RFC 6906 [1]

Request:
GET /some/resource HTTP 1.1
Accept: application/rdf+xml
Link:http://example.org/dcat-profile; rel=profile

The server would then either return the data in the requested profile, answer 
with 406 (not acceptable), or return the data in a default profile (and set the 
Link-header to tell the client what profile the server used...)


2) Register new http headers Accept-Profile and Profile

Request:
GET /some/resource HTTP 1.1
Accept: application/rdf+xml
Accept-Profile:http://example.org/dcat-profile

The server would then either return the data in the requested profile, answer 
with 406 (not acceptable), or return the data in a default profile. If the 
answer is a 200 OK, the server needs to set the Profile header to let the 
client know which profile was used. This is consistent with the use of the 
Accept header.

3) Use the Accept-Features and Features headers
RFC 2295 §6 [2] defines so-called features as a further dimension of content 
negotiation.

Request:
GET /some/resource HTTP 1.1
Accept: application/rdf+xml
Accept-Features: profile=http://example.org/dcat-profile

The server would then either return the data in the requested profile/feature, 
answer with 406 (not acceptable), or return the data in a default 
profile/feature. If the answer is a 200 OK, the server needs to set the Feature 
header to let the client know which profile was used. This is consistent with 
the use of the Accept header.

Discussion
The problem I have with the Accept-Features/Features header is that I feel that the provision of a specific 
(application) profile is not the same as a feature of the requested resource, at least not if I look at the examples 
they provide in RFC 2295 which includes tables, fonts, screenwidth and 
colordepth, but perhaps I'm overly picky.

The registration of Accept-Profile/Profile headers is appealing since their 
semantics can be clearly defined and their naming shows the similarity to 
other Accept-* headers. OTOH the process of getting those headers registered 
with IETF can be fairly heavy.

Lastly, the use of RFC 6906 profiles has the advantage that no extra work has 
to be done, the Link header is in place and so is the profile relation type.

Any feedback would be greatly appreciated.

[1] http://tools.ietf.org/html/rfc6906
[2] http://tools.ietf.org/html/rfc2295#section-6
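A minimal client-side sketch of outcome handling under option 2 (the Accept-Profile/Profile semantics proposed above). The helper name and return convention are illustrative, not part of any specification:

```python
# Sketch only: interprets a server response under the proposed
# Accept-Profile / Profile negotiation. "profile_served" is a made-up
# helper; the headers themselves are what this post proposes to register.

def profile_served(status, headers):
    """Work out which profile the server actually used.

    Returns the profile URI from the Profile response header on 200,
    or None when the server answered 406 (not acceptable)."""
    if status == 406:
        return None                  # server cannot honour the profile
    if status == 200:
        # Per the proposal, a 200 response must carry a Profile header
        # naming the profile used (the requested one, or a default).
        return headers.get("Profile")
    raise ValueError("unexpected status: %d" % status)

if __name__ == "__main__":
    # Server honoured the requested profile:
    print(profile_served(200, {"Profile": "http://example.org/dcat-profile"}))
    # Server fell back to a default profile:
    print(profile_served(200, {"Profile": "http://example.org/default-profile"}))
    # Server refused:
    print(profile_served(406, {}))
```

The same logic applies unchanged to option 3, with Feature substituted for Profile.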

Best,

Lars


Lars,

To flesh out what you are seeking here, could you also include expected 
(or suggested) HTTP server responses to requests that include this 
profile relation?


--
Regards,

Kingsley Idehen 
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this






Re: Vocabulary to describe software installation

2015-05-04 Thread Kingsley Idehen

On 5/4/15 2:34 AM, Sarven Capadisli wrote:

On 2015-05-01 16:22, Jürgen Jakobitsch wrote:

hi,

i'm investigating possibilities to describe an arbitrary software
installation process
in rdf. currently i've found two candidates [1][2], but examples are
practically non-existent.
has anyone done this before, are there somewhere real examples?

any pointer greatly appreciated.

wkr j

[1] http://www.w3.org/2005/Incubator/ssn/ssnx/ssn#Module_Deployment
[2]
http://wiki.dublincore.org/index.php/User_Guide/Publishing_Metadata#dcterms:instructionalMethod 



You may have looked into this already, but I'll mention it anyway, in 
case it is of use to someone else.


Consider using OPMW [3], P-PLAN [4] and PROV-O [5]. These aren't 
exclusively for software processes, but for any general process.


Depending on the granularity you want to work with, it might fit the 
bill. We have experimented with describing the actual workflows e.g., 
[6], but IIRC, have not executed an action from the descriptions.


[3] http://www.opmw.org/model/OPMW
[4] http://purl.org/net/p-plan
[5] http://www.w3.org/TR/prov-o/
[6] 
https://github.com/csarven/doingbusiness-linked-data/tree/dev/scripts 
(note: in development) 


Sarven,

In regards to locating RDF-Turtle docs for the items above, I performed 
the following tasks using HTTP URI manipulation:


1. http://lov.okfn.org/dataset/lov/vocabs/{namespace} -- obtain 
description doc URI from LOV via vocabulary namespace
2. 
http://linkeddata.uriburner.com/about/html/http/{lov-vocabulary-description-uri} 
-- to import the Vocabulary into URIBurner
3. 
http://linkeddata.uriburner.com/describe/?url=http%3A%2F%2Flov.okfn.org%2Fdataset%2Flov%2Fvocabs%2Fopmw%2Fversions%2F2015-01-11.n3&distinct=1 
-- to obtain an alternative faceted browsing view of the vocabulary in 
question.
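The three URI-manipulation steps above can be sketched as code. The helper names are illustrative, the URL patterns are taken from the live results below, and the URIBurner path rewriting (collapsing "://" to "/") is inferred from those examples rather than documented behaviour:

```python
# Illustrative sketch of the LOV / URIBurner URI-manipulation recipe.
from urllib.parse import quote

def lov_description_uri(namespace_prefix):
    # Step 1: LOV description document for a vocabulary namespace prefix.
    return "http://lov.okfn.org/dataset/lov/vocabs/%s" % namespace_prefix

def uriburner_import_uri(description_uri):
    # Step 2: import the vocabulary into URIBurner. One of the two URL
    # forms shown below collapses the scheme separator to a plain slash.
    return ("http://linkeddata.uriburner.com/about/html/"
            + description_uri.replace("://", "/"))

def uriburner_describe_uri(description_uri):
    # Step 3: faceted-browsing view; the target URI is percent-encoded.
    return ("http://linkeddata.uriburner.com/describe/?url="
            + quote(description_uri, safe=""))

print(lov_description_uri("opmw"))
print(uriburner_import_uri("http://lov.okfn.org/dataset/lov/vocabs/opmw"))
print(uriburner_describe_uri("http://lov.okfn.org/dataset/lov/vocabs/opmw"))
```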


Live Results:

OPMW:

[1] http://lov.okfn.org/dataset/lov/vocabs/opmw -- identifies LOV 
description document
[2] 
http://linkeddata.uriburner.com/about/html/http://lov.okfn.org/dataset/lov/vocabs/opmw/versions/2015-01-11.n3 
- identifies Basic URIBurner description document
[3] http://linkeddata.uriburner.com/c/9D6PWI7O -- identifies Faceted 
Browsing based description document.


P-Plan:

[1] http://lov.okfn.org/dataset/lov/vocabs/p-plan - identifies LOV 
description document
[2] 
http://linkeddata.uriburner.com/about/html/http/lov.okfn.org/dataset/lov/vocabs/p-plan/versions/2014-03-12.n3 -- 
identifies Basic URIBurner description document
[3] 
http://linkeddata.uriburner.com/describe/?uri=http%3A%2F%2Flov.okfn.org%2Fdataset%2Flov%2Fvocabs%2Fp-plan%2Fversions%2F2014-03-12.n3 
-- identifies Faceted Browsing based description document.


--
Regards,

Kingsley Idehen






Re: Vocabulary to describe software installation

2015-05-03 Thread Kingsley Idehen

On 5/2/15 5:41 PM, Jürgen Jakobitsch wrote:
..i don't want to hijack my own thread, but i think the "from web of 
documents to web of data" meme did its part..
it's highly misleading for a lot of people who only look at the whole 
thing with one eye. well, these days we know
that not even gravity comes without a bearer, so why should structured 
data? thank god, we don't have to have
a small cern-clone in our backyards... a text editor is pretty much 
enough.. ;-)


wkr j


Jürgen,

Yep! A text editor is enough. It is the right tool for the job at hand.

The RDF Editor we are about to release is just a text editing aid, so to 
speak. The fundamental design goal of the aforementioned editor boils 
down to guiding users to the realization that RDF is a Language for 
constructing sentences. These sentences become web-like when HTTP URIs 
are used to identify their subject, predicate, and object parts.


Kingsley






| Jürgen Jakobitsch,
| Software Developer
| Semantic Web Company GmbH
| Mariahilfer Straße 70 / Neubaugasse 1, Top 8
| A - 1070 Wien, Austria
| Mob +43 676 62 12 710 | Fax +43.1.402 12 35 - 22

COMPANY INFORMATION
| web   : http://www.semantic-web.at/
| foaf  : http://company.semantic-web.at/person/juergen_jakobitsch
PERSONAL INFORMATION
| web   : http://www.turnguard.com
| foaf  : http://www.turnguard.com/turnguard
| g+: https://plus.google.com/111233759991616358206/posts
| skype : jakobitsch-punkt
| xmlns:tg  = "http://www.turnguard.com/turnguard#"

2015-05-02 23:05 GMT+02:00 Kingsley Idehen kide...@openlinksw.com:


On 5/2/15 4:48 PM, Jürgen Jakobitsch wrote:

thanks kingsley,

re ontology directory : good to know (maybe bernard vatant also
reads this thread)..

Jürgen

He is aware of these ontologies :)

My hope is that everyone realizes how simple it actually is to
construct and publish ontologies, using file save, create,
publish, and share via HTTP URI pattern.

Circa 2015, we should be demonstrating how easy this whole thing
is. Basically, we need to focus less on centralization (consensus
overload) re. creation of content that leverages the RDF Language in
conjunction with Linked Open Data principles.


Links:

[1] http://kidehen.blogspot.com/2014/07/nanotation.html --
Nanotation (showing how to exploit power of RDF Language via
Tweets, Plain Text Docs, Mailing List Posts, Social Media Posts
(Facebook, Google+, LinkedIn etc..), and anywhere else where
plain/text is accepted

[2] http://linkeddata.uriburner.com/c/9CRO6GJD -- H/T relation
described via a series of Tweets, using Nanotation

[3]

http://linkeddata.uriburner.com/fct/rdfdesc/usage.vsp?g=https%3A%2F%2Ftwitter.com%2Fhashtag%2Fht%23this
-- Provenance Metadata (showing the series of 147 char constrained
tweets containing RDF-Turtle based nanotations).

Kingsley


yes, i meant step by step guide... i already found examples via
sparql..

note: i want to create installation instructions and notes for
apache mesos on factory-fresh opensuse 13.2 (will soon be available)

wkr j

| Jürgen Jakobitsch, Semantic Web Company GmbH

2015-05-02 22:16 GMT+02:00 Kingsley Idehen kide...@openlinksw.com:

On 5/2/15 11:49 AM, Jürgen Jakobitsch wrote:

kingsley, thanks for the pointer...

i'm taking a look at these two openlinksw ontologies (i
think they would fit my needs)

components
http://goo.gl/F8HFE5

installers
http://goo.gl/eKpbeW

wkr j


Jürgen,

I should have mentioned that all our ontologies are available
from a standard location [1].

[1] http://www.openlinksw.com/data/turtle/ -- OpenLink
Ontology Collection .


-- 
Regards,

Kingsley Idehen

Re: Vocabulary to describe software installation

2015-05-02 Thread Kingsley Idehen

On 5/2/15 10:42 AM, Paul Houle wrote:

Could you RDFize the model used by packaging systems such as rpm?


Of course.

I would be very surprised if Jürgen (or someone else) hasn't already 
done that.


You basically have a bundle (or collection) of components which are 
partOf relation participants.


We even describe our product installers using partOf and dependsOn 
relations [1][2].



[1] http://virtuoso.openlinksw.com/data/turtle/installers/ -- Virtuoso
[2] http://www.openlinksw.com/c/9BZ74XJW -- Virtuoso Installer Archive 
Description for Linux.
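A toy sketch of that partOf/dependsOn pattern using plain 3-tuples. The resource and property names below are made up for illustration; they are not OpenLink's actual ontology terms:

```python
# Illustrative model of a software bundle: components participate in
# partOf relations, and the bundle declares its dependsOn relations.
# All identifiers here are invented examples.

triples = [
    ("urn:installer:example-linux", "dependsOn", "urn:lib:openssl"),
    ("urn:component:driver",        "partOf",    "urn:installer:example-linux"),
    ("urn:component:server",        "partOf",    "urn:installer:example-linux"),
]

def parts_of(bundle, data):
    """All components that declare themselves partOf the given bundle."""
    return sorted(s for s, p, o in data if p == "partOf" and o == bundle)

print(parts_of("urn:installer:example-linux", triples))
# ['urn:component:driver', 'urn:component:server']
```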


Kingsley


My latest ideological kick has been to realize that converting data 
from non-RDF sources to RDF is often trivial or close to trivial.  
This fact has been obscured by things like the TWC data.gov 
thing that make it look a lot harder than it really is.




On Fri, May 1, 2015 at 10:22 AM, Jürgen Jakobitsch 
j.jakobit...@semantic-web.at 
wrote:


hi,

i'm investigating possibilities to describe an arbitrary software
installation process
in rdf. currently i've found two candidates [1][2], but examples
are practically non-existent.
has anyone done this before, are there somewhere real examples?

any pointer greatly appreciated.

wkr j

[1] http://www.w3.org/2005/Incubator/ssn/ssnx/ssn#Module_Deployment
[2]

http://wiki.dublincore.org/index.php/User_Guide/Publishing_Metadata#dcterms:instructionalMethod

| Jürgen Jakobitsch, Semantic Web Company GmbH




--
Paul Houle

*Applying Schemas for Natural Language Processing, Distributed 
Systems, Classification and Text Mining and Data Lakes*


(607) 539 6254 | paul.houle on Skype | ontolo...@gmail.com
https://legalentityidentifier.info/lei/lookup



--
Regards,

Kingsley Idehen





Re: Vocabulary to describe software installation

2015-05-02 Thread Kingsley Idehen

On 5/2/15 11:15 AM, Jürgen Jakobitsch wrote:
hi paul, i was investigating the packaging aspect a couple of years 
ago [1] (but didn't follow..)


what i'm currently searching for is rather a vocabulary for prosa 
description of an installation process.


wkr j

[1] 
https://forums.opensuse.org/showthread.php/398186-yast-to-sesame-repository

Jürgen,

Do you mean a step-by-step guide? If so, I have some examples that we use [1][2]

[1] http://www.openlinksw.com/c/9CZ34POV -- Virtuoso Installation Steps 
for Mac OS X.
[2] http://virtuoso.openlinksw.com/data/turtle/stepbyguides/ -- 
step-by-step guide document folder/collection.


--
Regards,

Kingsley Idehen






Re: Vocabulary to describe software installation

2015-05-02 Thread Kingsley Idehen

On 5/2/15 11:49 AM, Jürgen Jakobitsch wrote:

kingsley,  thanks for the pointer...

i'm taking a look at these two openlinksw ontologies (i think they 
would fit my needs)


components
http://goo.gl/F8HFE5

installers
http://goo.gl/eKpbeW

wkr j


Jürgen,

I should have mentioned that all our ontologies are available from a 
standard location [1].


[1] http://www.openlinksw.com/data/turtle/ -- OpenLink Ontology 
Collection .


--
Regards,

Kingsley Idehen






Re: Vocabulary to describe software installation

2015-05-02 Thread Kingsley Idehen

On 5/2/15 4:48 PM, Jürgen Jakobitsch wrote:

thanks kingsley,

re ontology directory : good to know (maybe bernard vatant also reads 
this thread)..

Jürgen

He is aware of these ontologies :)

My hope is that everyone realizes how simple it actually is to construct 
and publish ontologies, using file save, create, publish, and share via 
HTTP URI pattern.


Circa 2015, we should be demonstrating how easy this whole thing is. 
Basically, we need to focus less on centralization (consensus overload) 
re. creation of content that leverages the RDF Language in conjunction 
with Linked Open Data principles.



Links:

[1] http://kidehen.blogspot.com/2014/07/nanotation.html -- Nanotation 
(showing how to exploit power of RDF Language via Tweets, Plain Text 
Docs, Mailing List Posts, Social Media Posts (Facebook, Google+, 
LinkedIn etc..), and anywhere else where plain/text is accepted


[2] http://linkeddata.uriburner.com/c/9CRO6GJD -- H/T relation described 
via a series of Tweets, using Nanotation


[3] 
http://linkeddata.uriburner.com/fct/rdfdesc/usage.vsp?g=https%3A%2F%2Ftwitter.com%2Fhashtag%2Fht%23this 
-- Provenance Metadata (showing the series of 147 char constrained 
tweets containing RDF-Turtle based nanotations).
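For illustration, a tiny extractor for nanotations embedded in plain text. I am assuming the "## Turtle Start ##" / "## Turtle Stop ##" delimiters described in the Nanotation post; adjust the markers if the convention differs:

```python
import re

# Assumed nanotation delimiters (from the Nanotation blog post); the
# sample post text below is invented for illustration.
NANOTATION = re.compile(r"## Turtle Start ##(.*?)## Turtle Stop ##", re.S)

def extract_nanotations(text):
    """Return the RDF-Turtle fragments embedded in a plain-text body."""
    return [m.strip() for m in NANOTATION.findall(text)]

post = """Great talk today!
## Turtle Start ##
<#me> <http://xmlns.com/foaf/0.1/name> "Kingsley" .
## Turtle Stop ##
See you at the next meetup."""

print(extract_nanotations(post))
```

Anything outside the markers is ignored, which is what lets nanotations ride along in tweets, mailing-list posts, and other plain-text channels.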


Kingsley


yes, i meant step by step guide... i already found examples via sparql..

note: i want to create installation instructions and notes for apache 
mesos on factory-fresh opensuse 13.2 (will soon be available)


wkr j

| Jürgen Jakobitsch, Semantic Web Company GmbH

2015-05-02 22:16 GMT+02:00 Kingsley Idehen kide...@openlinksw.com:


On 5/2/15 11:49 AM, Jürgen Jakobitsch wrote:

kingsley,  thanks for the pointer...

i'm taking a look at these two openlinksw ontologies (i think
they would fit my needs)

components
http://goo.gl/F8HFE5

installers
http://goo.gl/eKpbeW

wkr j


Jürgen,

I should have mentioned that all our ontologies are available from
a standard location [1].

[1] http://www.openlinksw.com/data/turtle/ -- OpenLink Ontology
Collection .


-- 
Regards,

Kingsley Idehen






--
Regards,

Kingsley Idehen





Re: Survey on Faceted Browsers for RDF data ?

2015-04-27 Thread Kingsley Idehen

On 4/27/15 9:02 AM, Christian Morbidoni wrote:
Honestly I got a bit stuck asking myself "What exactly am I looking 
for?" In other words: what exactly is a faceted browser for RDF data?

Great point!

All:

What is Faceted Browsing?

What are the distinguishing characteristics (features, properties, 
attributes) of a Class of Thing referred to literally as a Faceted 
Browser?


If we cannot build a useful discussion thread based on this, what are we 
saying to both ourselves and the rest of the world re., the usefulness 
of the following:


[1] RDF Language

[2] Linked Open Data constructed using RDF Language.

It's the year 2015; we can (and MUST) do better.

Let's get some responses going, in the form of attributes and associated 
comments describing said attributes literally, at the very least.
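As one possible starting answer, here is a minimal sketch that treats every predicate in a triple set as a candidate facet, with the distinct object values (and their counts) as that facet's filter options. The data and names are illustrative:

```python
from collections import Counter

# Invented sample triples; "ex:" prefixes stand in for real IRIs.
triples = [
    ("ex:a", "ex:type",   "ex:Book"),
    ("ex:b", "ex:type",   "ex:Book"),
    ("ex:c", "ex:type",   "ex:Film"),
    ("ex:a", "ex:author", "ex:doe"),
]

def facets(data):
    """Map each predicate to a Counter of its object values.

    Each predicate is one facet; the counts drive the familiar
    'value (n)' filter links in a faceted-browsing UI."""
    out = {}
    for s, p, o in data:
        out.setdefault(p, Counter())[o] += 1
    return out

f = facets(triples)
print(sorted(f))           # the available facets
print(f["ex:type"]["ex:Book"])  # how many entities match the ex:Book value
```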


--
Regards,

Kingsley Idehen






Re: Looking for pedagogically useful data sets

2015-03-14 Thread Kingsley Idehen

On 3/13/15 7:17 PM, Michael Brunnbauer wrote:

Hello Kingsley,

On Fri, Mar 13, 2015 at 06:06:00PM -0400, Kingsley Idehen wrote:

I hope you are not assuming that I meant: ACID and traditional RDBMS are
for losers?

Interpreting what you say is not easy for me. So forgive me if I have read
too much into your statement.


My fundamental point is simply about the fact that RDBMS doesn't mean SQL
RDBMS.

OK. I have no problem with SPARQL - I like it. But I have a problem with more
restricted query languages being used without very good reasons just because
NOSQL or whatever is hip.


So do I. That's basically the underlying theme of my view i.e., we are 
conceding reality for marketing, which is horrible.



There are big data and small data use cases for
both SQL and its alternatives. Most of the time, one of them is clearly the
best design decision and I reject the notion that one technology solves all
use cases.


Correct, that's why I continue to chant the "horses for courses" mantra. 
In the case of RDBMS technology, we have relations handled in different 
ways by products that support various query languages. None of these 
query languages (e.g., SPARQL or SQL) is the only option for a relational 
database management system. Unfortunately, via effective marketing over 
the years, SQL RDBMS vendors have laid claim to the entire realm of 
relational database management, leaving alternatives to redefine 
themselves as NoSQL, Graph, Document, etc., oriented Database Management 
Systems.



Thus, ACID has nothing to do with being Relational, in regards to
Database document construction and/or management.

I want to do this:

  BEGIN TRANSACTION
  SPARQL query
  SPARQL update
  ...
  COMMIT TRANSACTION

If you know a triple store that can do this, I will concede that ACID has
nothing to do with being relational.


What do you mean by "triple store"? Virtuoso (a hybrid relational 
RDBMS) is in use at many organizations, where some even make use of 
Distributed Transaction Monitors as part of their RDF data management 
solutions.


Why are you conflating Transaction Management with Data Organization?

An RDBMS could exist without ACID. Of course, it would present utility 
challenges in regards to transactions and OLTP settings, but that 
doesn't render said solution non-relational. It's just lacking 
in the area of transaction management re. issues addressed by ACID.
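A toy illustration of that separation: transaction management layered over a triple-organized store. This is purely a sketch of the concept, not how Virtuoso (or any real engine) implements transactions:

```python
# Minimal, illustrative transactional triple store: begin/commit/rollback
# are independent of how the data is organized (triples here, but the
# same pattern works over rows). Snapshot-on-begin is the simplest
# possible scheme; real engines use logging, locking, or MVCC.

class TinyTripleStore:
    def __init__(self):
        self.triples = set()
        self._snapshot = None

    def begin(self):
        self._snapshot = set(self.triples)   # remember pre-transaction state

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def commit(self):
        self._snapshot = None                # keep the changes

    def rollback(self):
        self.triples = self._snapshot        # restore pre-transaction state
        self._snapshot = None

store = TinyTripleStore()
store.begin()
store.add("ex:s", "ex:p", "ex:o")
store.rollback()
print(len(store.triples))   # prints 0 -- the update was undone
```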





If this problem is solved, there is still the data shapes problem. I grok
that you want to allow exceptions from the data shape rules. So show me

1) how these exception can have value if they are not handled in the code
2) how your solution handles the RDF data shapes problem


"Shapes" boils down to moving relation member validation into the DBMS 
engine layer, rather than leaving it solely to client code. This is best 
handled via rules, which is something we have been working on for some 
time now. We are supporters of both SPIN and SHACL.




I am also interested in how your solution handles the join performance problem
(from your company presentations, I gather that you have addressed this
somehow).


We continue to deliver variants of our engine that include enhancements 
in these areas. Basically, with each release, we are getting closer to 
there being no performance difference between our handling of Relational 
Tables and RDF Statements.


If you provide a specific example, I can provide a much more specific 
response.




Besides, what is a database document?


SQL:

What an RDBMS engine basically refers to as a database page. Exposure 
to clients (via an identifier) depends on implementation.


SPARQL:

What an RDBMS/store refers to as a named graph. It is identified by a 
Named Graph IRI in the FROM (when external) and/or FROM NAMED (when 
internal) clauses of a SPARQL query.


In both cases, you have relations (sets of tuples) associated with one 
or more database documents. Each relation is identified by a predicate.


In a SQL RDBMS, each Table Name identifies a relation predicate. In RDF, 
a relation is identified by the IRI used in the predicate slot of an RDF 
statement.



A traditional RDBMS != SQL RDBMS either. That has never ever been the case.

I think it has been the case for quite a while.


In marketing collateral and memes. Never ever been a technical fact.


But if you go back long enough,
that ceases to be true, yes.


Correct.

Links:

[1] http://w3c.github.io/data-shapes/shacl/ -- SHACL
[2] http://www.w3.org/Submission/spin-overview/ -- SPIN
[3] https://technet.microsoft.com/en-us/library/ms190969(v=sql.105).aspx 
-- SQL Server Database Documents and Relations (Tables)





Regards,

Michael Brunnbauer




--
Regards,

Kingsley Idehen

Re: Looking for pedagogically useful data sets

2015-03-13 Thread Kingsley Idehen

On 3/13/15 11:39 AM, Michael Brunnbauer wrote:

Hello Kingsley,

yes, ACID and traditional RDBMS are for losers. Think Big instead!

Big data
Big exploitation
Big complexity
Big technical debt

Interest is payable to OpenLink Software.

Regards,

Michael Brunnbauer


Michael,

I hope you are not assuming that I meant: ACID and traditional RDBMS are 
for losers?


My fundamental point is simply about the fact that RDBMS doesn't mean 
SQL RDBMS. Thus, ACID has nothing to do with being Relational, in 
regards to Database document construction and/or management. It has 
everything to do with the Atomicity, Consistency, Isolation, and 
Durability of operations on databases performed by RDBMS applications.


A traditional RDBMS != SQL RDBMS either. That has never ever been the case.


Kingsley


On Fri, Mar 13, 2015 at 10:05:54AM -0400, Kingsley Idehen wrote:

On 3/12/15 5:38 PM, Paul Houle wrote:

The goal is to show that you can do the same things you do with a
relational database,  and maybe *just* a little bit more.

Every RDF store is a relational database management system (RDBMS). As you
know, an RDF compliant RDBMS simply groups sets of RDF 3-tuples by statement
predicate.

We can't continue to concede the notion of a relational database management
to SQL relational database management systems (sets of n-tuples grouped by
Table Name).

Maybe we should start referring to SPARQL compliant RDF stores as SPARQL
Relational Database Management Systems, just like SQL Relational Database
Management Systems which have now become synonymous with Relational Database
Management System. Then just a little more becomes much closer to
demonstrable reconciliation of the truth, the whole truth, and nothing but
the truth, in regards to relations, databases, and database management
systems :)

ACID has nothing to do with what constitutes an RDBMS either; that's a
useful, but optional, feature of any RDBMS. So don't fall for that baloney
laden push-back when taking the SPARQL RDBMS position.

We MUST end the SQL RDBMS power-grab! It has done a major disservice to the
entire DBMS industry, over the last 40+ years. You have a multi-billion
dollar industry that's fundamentally about companies and individuals that
are data-access-heavy and data-exploitation-challenged i.e., they have tons
of data (Big Data these days), but still can't achieve basic agility goals
in regards to: accessing, integrating, and moving data effectively to the
right people, at the right time, in the right form, and in appropriate
context etc..

Links:

[1] http://bit.ly/spasql-sql-querying-based-on-sparql-table-relation --
demonstrating that relations are relations (even when the underlying tuple
organizations vary e.g., when organized as sql relational tables or rdf
statements graphs) .

[2] http://www.openlinksw.com/c/9C5DNHYW -- Relation .

[3] http://www.openlinksw.com/c/9BVTLIAG -- SQL Relation .

[4] http://www.openlinksw.com/c/9BH3NH7S -- RDF Relation.

[5] http://www.openlinksw.com/c/9BDLVDX3 -- Differentiating Database (a
Document comprised of sets of Relations [Data] ) from Database Management
System (software for indexing and querying culled from Database Documents).

--
Regards,

Kingsley Idehen








--
Regards,

Kingsley Idehen






Re: Looking for pedagogically useful data sets

2015-03-13 Thread Kingsley Idehen

On 3/13/15 10:05 AM, Kingsley Idehen wrote:


We MUST end the SQL RDBMS power-grab! It has done a major disservice 
to the entire DBMS industry, over the last 40+ years. You have a 
multi-billion dollar industry that's fundamentally about companies and 
individuals that are data-access-heavy and 
data-exploitation-challenged i.e., they have tons of data (Big Data 
these days), but still can't achieve basic agility goals in regards 
to: accessing, integrating, and moving data effectively to the right 
people, at the right time, in the right form, and in appropriate 
context etc..


Links:

[1] http://bit.ly/spasql-sql-querying-based-on-sparql-table-relation 
-- demonstrating that relations are relations (even when the 
underlying tuple organizations vary e.g., when organized as sql 
relational tables or rdf statements graphs) .


[2] http://www.openlinksw.com/c/9C5DNHYW -- Relation .

[3] http://www.openlinksw.com/c/9BVTLIAG -- SQL Relation .

[4] http://www.openlinksw.com/c/9BH3NH7S -- RDF Relation.

[5] http://www.openlinksw.com/c/9BDLVDX3 -- Differentiating Database 
(a Document comprised of sets of Relations [Data] ) from Database 
Management System (software for indexing and querying culled from 
Database Documents).


Re: #1, when challenged use the following credentials:

username: vdb
password: vdb  .

--
Regards,

Kingsley Idehen






Re: Looking for pedagogically useful data sets

2015-03-13 Thread Kingsley Idehen

On 3/12/15 5:38 PM, Paul Houle wrote:
The goal is to show that you can do the same things you do with a 
relational database,  and maybe *just* a little bit more.


Every RDF store is a relational database management system (RDBMS). As 
you know, an RDF compliant RDBMS simply groups sets of RDF 3-tuples by 
statement predicate.


We can't continue to concede the notion of a relational database 
management to SQL relational database management systems (sets of 
n-tuples grouped by Table Name).


Maybe we should start referring to SPARQL compliant RDF stores as SPARQL 
Relational Database Management Systems, just like the SQL Relational 
Database Management Systems that have now become synonymous with 
"Relational Database Management System". Then "just a little more" becomes 
much closer to demonstrable reconciliation of the truth, the whole 
truth, and nothing but the truth, in regards to relations, databases, 
and database management systems :)
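A small sketch of that equivalence: the same relation expressed as SQL-style rows and re-expressed as RDF 3-tuples grouped by predicate. The table and column names are made up for illustration:

```python
# The SQL view of a relation: rows keyed by a table name ("person"),
# each tuple holding (id, name, city). All data here is invented.
rows = [
    {"id": "ex:kidehen", "name": "Kingsley", "city": "Boston"},
    {"id": "ex:jdoe",    "name": "Jane",     "city": "Wien"},
]

def rows_to_triples(table_rows, key="id"):
    """Re-express each non-key column as a predicate: (subject, column, value).

    An RDF store would then group these 3-tuples by their predicate,
    just as a SQL engine groups n-tuples by table name."""
    triples = []
    for row in table_rows:
        subject = row[key]
        for column, value in row.items():
            if column != key:
                triples.append((subject, column, value))
    return triples

for t in rows_to_triples(rows):
    print(t)
```

Going the other way (predicate-grouped triples back into a table) is the same transformation in reverse, which is the point: the relation is the invariant, not the storage layout.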


ACID has nothing to do with what constitutes an RDBMS either; that's a 
useful, but optional, feature of any RDBMS. So don't fall for that 
baloney-laden push-back when taking the SPARQL RDBMS position.


We MUST end the SQL RDBMS power-grab! It has done a major disservice to 
the entire DBMS industry, over the last 40+ years. You have a 
multi-billion dollar industry that's fundamentally about companies and 
individuals that are data-access-heavy and data-exploitation-challenged 
i.e., they have tons of data (Big Data these days), but still can't 
achieve basic agility goals in regards to: accessing, integrating, and 
moving data effectively to the right people, at the right time, in the 
right form, and in appropriate context etc..


Links:

[1] http://bit.ly/spasql-sql-querying-based-on-sparql-table-relation -- 
demonstrating that relations are relations (even when the underlying 
tuple organizations vary, e.g., when organized as SQL relational tables 
or RDF statement graphs).


[2] http://www.openlinksw.com/c/9C5DNHYW -- Relation .

[3] http://www.openlinksw.com/c/9BVTLIAG -- SQL Relation .

[4] http://www.openlinksw.com/c/9BH3NH7S -- RDF Relation.

[5] http://www.openlinksw.com/c/9BDLVDX3 -- Differentiating Database 
(a Document comprised of sets of Relations [Data]) from Database 
Management System (software for indexing and querying data culled from 
Database Documents).


--
Regards,

Kingsley Idehen 
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this






Re: Enterprise information system

2015-03-03 Thread Kingsley Idehen

On 3/2/15 6:56 PM, Paul Tyson wrote:

Kingsley,

I admire and respect your enthusiastic, tireless efforts at
evangelization and your fluent command of the relevant technologies in
this area. I mostly agree with your diagnosis and prescriptions, but I
have a growing suspicion that it is not sufficient to meet the real ICT
needs of the enterprise.


Paul,

In my experience, Linked Open Data exploitation is meeting real ICT 
needs (creation and dissemination of information & knowledge throughout 
the enterprise). Generally, challenges typically arise in the following 
areas:


1. Matching Linked Open Data value proposition narratives to real ICT 
needs -- a communications issue
2. Implementation approaches that build on what exists rather than 
covertly pushing a rip-and-replace agenda -- an issue of understanding 
enterprise culture and investment realities.



[1] 
http://kidehen.blogspot.com/2015/03/configuring-odbc-dsn-for-world-wide-web.html 
-- using Linked Open Data to showcase the Web as just another ODBC 
accessible RDBMS (this exemplifies tweaking existing IT infrastructure 
by enabling existing ODBC applications to take advantage of Linked Open Data).



Kingsley



On Fri, 2015-02-27 at 18:19 -0500, Kingsley Idehen wrote:

On 2/27/15 4:19 PM, Paul Tyson wrote:


I don't know that many linked data systems improve much on
conventional ones.

To that answer, I ask: what is the problem?

As I see it, the problem boils down to data access, integration, and
dissemination. This has to happen on time, in the right form, and
delivered to the relevant entity.

Enterprises that still have these problems are stubborn, timid,
unimaginative, or in thrall to troglodytic enterprise software vendors.
Getting data from anywhere on the network to a screen in front of a user
is a solved problem, however messy the details might be.


All the conventional systems I am aware of suffer from a common
flaw, one  that I refer to as data-silo-fication. This problem ranges
from big iron to tiny computing devices.

What are the ramifications of the data-silo-fication? Degradation of
the following:

1. Agility
2. Privacy
3. Society.


Agreed.


We have enough of data and linking.

How come? The world is full of data silos that hide behind silky
promises of convenience and/or outright cognitive dissonance.


What we need to
link are the artifacts of mental processes, which are not so easily
reduced to data.

Yes! And you already achieve that in the so-called real world using
natural language sentences, one of mankind's most powerful inventions
[1].

We have used language to encode and decode information for eons.


I think we lose a lot when we characterize language that way. Before
mathematical logic and information science took over, thoughtful people
characterized language as a system of conventional signs representing
mental states or processes.  Of particular importance to philosophers
was the reasoning mind, the mental entities involved in reason, and the
particular structures thereof which seem to correlate to correct
knowledge of the real world. From the time of Aristotle these entities
have been studied as concepts, propositions, and argument (or
demonstration). (I don't mean the poor emaciated use of these terms as
in mathematical logic, but the rich, robust--and complicated--usage they
have in traditional logic from Aristotle on through Maritain and points
in between.)


That is the real promise of these technologies, but
is not, so far as I am aware, being pursued anywhere in public.

Well, we have this wonderful thing called the World Wide Web. Its
basic architecture boils down to:
1. Using HTTP URIs as names for entity types (or classes) -- the
*nature* of entity relationship participants
2. Using HTTP URIs as names for predicates (sentence forming
relations) -- the *nature* of entity relationship types (functional,
inverse-functional, transitive, symmetrical etc..)
3. Using HTTP URIs as names for instances of entity types -- actual
entity relationship participants.

In addition to the above, albeit not immediately obvious in HTML
(which has had the link/ and a/ controls in place forever), the
aforementioned architecture also included the ability to construct
sentences where the subject, predicate, and object (optionally) are
identified by HTTP URIs.

Yes, I've absorbed all the philosophy of hyperlinking from Vannevar
Bush, Doug Engelbart, Ted Nelson on through HyTime, HTML[2-5], XML & 
friends, RDF, and Linked Open Data. I'm here to tell you it's all good,
but *it's not enough*.


The digital sentences described above can be written to documents
using a variety of notations (HTML, XML, CSV, TURTLE, JSON, JSON-LD
etc..), and served from any location on an HTTP network, with name
interpretation (description lookup or de-reference) baked in.

The semantics of the predicates that hold these sentences together are
both machine and human comprehensible. Thus, anyone can lookup the
*nature* of an entity relationship type en route to understanding the
meaning of a given entity relationship.

Re: Enterprise information system

2015-02-27 Thread Kingsley Idehen

On 2/27/15 4:19 PM, Paul Tyson wrote:

I don't know that many linked data systems improve much on
conventional ones.


To that answer, I ask: what is the problem?

As I see it, the problem boils down to data access, integration, and 
dissemination. This has to happen on time, in the right form, and 
delivered to the relevant entity.


All the conventional systems I am aware of suffer from a common flaw, 
one  that I refer to as data-silo-fication. This problem ranges from big 
iron to tiny computing devices.


What are the ramifications of the data-silo-fication? Degradation of the 
following:


1. Agility
2. Privacy
3. Society.


We have enough of data and linking.
How come? The world is full of data silos that hide behind silky 
promises of convenience and/or outright cognitive dissonance.



  What we need to
link are the artifacts of mental processes, which are not so easily
reduced to data.


Yes! And you already achieve that in the so-called real world using 
natural language sentences, one of mankind's most powerful inventions [1].


We have used language to encode and decode information for eons.


That is the real promise of these technologies, but
is not, so far as I am aware, being pursued anywhere in public.


Well, we have this wonderful thing called the World Wide Web. Its basic 
architecture boils down to:


1. Using HTTP URIs as names for entity types (or classes) -- the 
*nature* of entity relationship participants
2. Using HTTP URIs as names for predicates (sentence forming relations) 
-- the *nature* of entity relationship types (functional, 
inverse-functional, transitive, symmetrical etc..)
3. Using HTTP URIs as names for instances of entity types -- actual 
entity relationship participants.


In addition to the above, albeit not immediately obvious in HTML (which 
has had the link/ and a/ controls in place forever), the 
aforementioned architecture also included the ability to construct 
sentences where the subject, predicate, and object (optionally) are 
identified by HTTP URIs.


The digital sentences described above can be written to documents using 
a variety of notations (HTML, XML, CSV, TURTLE, JSON, JSON-LD etc..), 
and served from any location on an HTTP network, with name 
interpretation (description lookup or de-reference) baked in.
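As a toy illustration (the subject and object URIs below are hypothetical placeholders, and this is not any particular library's API), the same digital sentence can be rendered in more than one notation:

```python
import json

# One digital sentence: subject, predicate, and object, each named by an HTTP URI.
# The people URIs are illustrative; the predicate is the well-known foaf:knows.
s = "http://example.org/people#alice"
p = "http://xmlns.com/foaf/0.1/knows"
o = "http://example.org/people#bob"

# TURTLE rendering of the sentence
turtle = f"<{s}> <{p}> <{o}> ."

# A JSON-LD-style rendering of the same sentence
jsonld = json.dumps({"@id": s, p: {"@id": o}})

print(turtle)
print(jsonld)
```

Same sentence, same semantics; only the notation varies, which is why any of them can be served from any location on an HTTP network.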


The semantics of the predicates that hold these sentences together are 
both machine and human comprehensible. Thus, anyone can lookup the 
*nature* of an entity relationship type en route to understanding the 
meaning of a given entity relationship.  What more do you need?


In my experience, I see a big problem that boils down to understanding 
that over-automation is bad. The recent fixation on imperative 
programming and applications, in every situation, is utterly broken.


The assumption that end-users are dumb, stupid, even lazy is the eternal 
blind spot that afflicts those that simply cannot see, or even describe, 
a digital computing realm without fixating on a specific programming 
language, framework, library, data serialization format, or some new 
dogma (e.g., Open Source -- which doesn't guarantee data 
de-silo-fication).


In my eyes, what we need is the ability to put language to use, in the 
medium that the Web provides. That simply boils down to systematic use 
of signs [HTTP URIs], syntax [S,P,O or E,A,V term arrangement rules], 
and semantics [meaning of subject, predicate, and object relationship 
roles] for encoding and decoding information [data in some context]. 
Basically, an ability to write the aforementioned digital sentences to 
HTTP network accessible documents, using a variety of notations.


BTW -- if you know of some existing alternative to what I've described, 
that doesn't include a hidden data-silo-fication tax, I am all ears :)




Note, however, the recent release of Linked Data Platform as a W3C
standard (http://www.w3.org/TR/2015/REC-ldp-20150226/). No doubt this
will be useful in its own right, but also point the way to future
opportunities by what it *doesn't* cover.


LDP simply addresses read-write issues for those that can't make use of 
SPARQL 1.1 Update, SPARQL Graph Protocol, or WebDAV (which is XML 
specific in regards to metadata).


Links:

[1] http://www.slideshare.net/kidehen/understanding-29894555/55 -- 
Natural Language & Data
[2] https://www.pinterest.com/pin/389561436488854582/ -- What sits 
between you and your data
[3] http://kidehen.blogspot.com/2014/07/nanotation.html -- Nanotation 
(inserting digital sentences wherever plain text is allowed)
[4] 
http://kidehen.blogspot.com/2014/08/linked-local-data-lld-and-linked-open.html 
-- Linked Local Data vs Linked Open Data .


--
Regards,

Kingsley Idehen 
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about

Re: Enterprise information system

2015-02-26 Thread Kingsley Idehen

On 2/26/15 12:44 PM, Paul Houle wrote:

I'd go back to Microsoft Access to highlight the problem with status quo.

I think Access is a pretty good tool that punches above its weight, 
 but when businesspeople want to make a database they frequently 
pick Excel.  Even though you can use Access to visually create database 
tables and forms and sprinkle in just a little bit of Visual Basic to 
make a real app,  untrained people have a difficult time with data 
modelling.  Even if you could make all the coding go away,  you'd 
still have extreme difficulties because people would (i) pick a bad 
data model,  and (ii) realize it later when there is a lot of data in 
the system.


Two angles to these problems are:

(1) The OMG has developed a large number of standards centered around 
the model driven architecture,  which,  like the Semantic Web,  is a 
work in process.  The OMG has more of an enterprise focus so it is 
worth understanding what they've done.  UML started out as a 
diagramming tool,  but eventually wants to become executable,  because 
so long as you have a separate map and territory,  these will diverge.


(2) Business rules engines,  primarily based on forward chaining, 
 have been promised as another technology that lets businesspeople 
express their will in something human readable but this too is a 
challenge.  To be specific,  I've built some systems that are based on 
a first-order logical theory,  and many of these rules engines don't 
have a real query optimizer so the order that I write the conditions 
in can be the difference between something that runs in 1ms and 
something that takes 20 seconds.  If somebody who isn't hep to that 
makes a change to the system,  they can break it.


There are multiple angles of attack on this problem,  like ultimately 
you need the query optimizer,  but I think semantics have a lot to 
offer both (1) and (2) in the sense of being able to start with a 
general domain model for something like CRM and then specialize for a 
particular company without doing a lot of programming.




Yes!


--
Regards,

Kingsley Idehen 
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this






Re: Microsoft Access for RDF?

2015-02-24 Thread Kingsley Idehen

On 2/24/15 6:07 AM, Graham Klyne wrote:

Hi Kingsley,

In https://lists.w3.org/Archives/Public/public-lod/2015Feb/0116.html
You said, re Annalist:

My enhancement requests would be that you consider supporting of at
least one of the following, in regards to storage I/O:

1. LDP
2. WebDAV
3. SPARQL Graph Protocol
4. SPARQL 1.1 Insert, Update, Delete.

As for Access Controls on the target storage destinations, don't worry
about that in the RDF editor itself, leave that to the storage provider
[1] that supports any combination of the protocols above.


Thanks for your comments and feedback - I've taken note of them.

My original (and current) plan is to provide HTTP access 
(GET/PUT/POST/etc.) with a little bit of WebDAV to handle directory 
content enumeration, which I think is consistent with your suggestion 
(cf. [1]).


Yes.


The other options you mention are not ruled out.


Okay.



You say I shouldn't worry too much about access control, but leave 
that to the back-end store.  If by this you mean *just* access 
control, then that makes sense to me.


Read and Write modalities scoped to an identity principal.



A challenge I face is to understand what authentication tokens are 
widely supported by existing HTTP stores.


Most will support digest authentication. For more sophisticated 
solutions you should consider WebID-TLS [1] and WebACL [2], which simply 
boil down to servers determining identity principals and resource 
privileges via relations in a WebID-Profile [3] document and a server's 
WebACL doc.


You can also bolt OAuth on to WebDAV, as we have, but it may be a lot of 
work.


BTW -- we do have a generic Authentication Layer in Virtuoso that 
supports Digest Authentication, WebID-TLS, basic PKIX+TLS, OpenID, and 
OAuth. We are even planning to decouple the aforementioned module from 
Virtuoso via a Javascript library that will be released in Open Source 
form to Github. The only hold-up is prioritization, so I can't provide a 
firm ETA.



Annalist itself uses OpenID Connect (ala Google+, etc.) as its main 
authentication mechanism, so I cannot assume that I have access to 
original user credentials to construct arbitrary security tokens.


I had been thinking that something based on OAuth2 might be 
appropriate (I looked at UMA [2], had some problems with it as a total 
solution, but I might be able to use some of its elements).


See comment above.

I took a look at the link you provided, but there seem to be a lot of 
moving parts and couldn't really figure out what you were describing 
there.


It looks like that on the surface, but it simply boils down to:

1. Javascript
2. Ajar (Asynchronous JavaScript and RDF) -- TimBL tried to kick off this 
Ajar meme a few years ago with limited uptake, but it's darn neat!

3. Terms for making Identity Claims from a Certificate Vocabulary
4. A Storage relation described by a Personal Information (PIM) Vocabulary
5. WebID-Profile document that includes relations based on terms from 
the PIM and Certificate vocabularies.


I re-read my original post [4], and realized it could be clearer, so 
I've tweaked it a little.  Thanks for bringing the "many moving parts" 
perception to my attention.


Links:

[1] http://www.w3.org/2005/Incubator/webid/spec/tls -- WebID-TLS protocol
[2] http://linkeddata.uriburner.com/c/9D3D4FSF -- a Web Access Control 
Vocabulary
[3] 
http://www.w3.org/2005/Incubator/webid/spec/identity/#webid-profile-vocabulary 
-- WebID Profile Document
[4] 
http://kidehen.blogspot.com/2014/07/loosely-coupled-read-write-interactions.html 
-- slightly revised edition.



Kingsley


Thanks!

#g
--

[1] https://github.com/gklyne/annalist/issues/32

[2] http://en.wikipedia.org/wiki/User-Managed_Access, 
http://kantarainitiative.org/confluence/display/uma/Home







--
Regards,

Kingsley Idehen 
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this






Re: AJAR (WAS: Microsoft Access for RDF?)

2015-02-24 Thread Kingsley Idehen

On 2/24/15 12:20 PM, Paul Tyson wrote:

On Tue, 2015-02-24 at 11:43 -0500, Kingsley Idehen wrote:

It looks like that on the surface, but it simply boils down to:

1. Javascript
2. Ajar (Asynchronous JavaScript and RDF) -- TimBL tried to kick off this
Ajar meme a few years ago with limited uptake, but it's darn neat!

Kingsley, do you have further references for AJAR? It sounds like the
approach I stumbled onto while looking for a sensible way to make a
heavy client for displaying RDF. In theory it could provide great power
and simplicity, but I can't say I've worked through the initial
complexity to prove the point.

Regards,
--Paul



Quick description of an old TimBL post related to Ajar, using nanotation 
(which turns this post into an RDF data source) [1]:


{
 a schema:WebPage ;
rdfs:label Description of Ajar related items ;
rdfs:comment Describes Ajar concept and a related Blog Posting.
 ;
dcterms:created 2015-02-24T13:47:00-05:00^^xsd:dateTime ;
foaf:maker http://kingsley.idehen.net/dataspace/person/kidehen#this ;
xhv:license http://creativecommons.org/licenses/by/4.0/deed.en_US ;
cc:attributionName Kingsley Uyi Idehen ;
schema:about http://dig.csail.mit.edu/breadcrumbs/node/62/, 
https://twitter.com/hashtag/Ajar#this .


http://dig.csail.mit.edu/breadcrumbs/node/62/
a schema:BlogPosting ;
schema:about https://twitter.com/hashtag/Ajar#this ;
rdfs:label Links on the Semantic Web ;
foaf:maker http://www.w3.org/People/Berners-Lee/card#i ;
dcterms:created 2005-12-30T15:04:00-05:00^^xsd:dateTime ;
rdfs:comment On the Semantic Web, links are also critical. Here, 
the local name, and the URI formed using the hash,
refer to arbitrary things. When a semantic web 
document gives information about something, and uses a
URI formed from the name of a different document, 
like foo.rdf#bar, then that's an invitation to look
up the document, if you want more information 
about. I'd like people to use them more, and I think we
need to develop algorithms which for deciding when 
to follow Semantic Web links as a function of what

we are looking for. -- TimBL
 ;
rdfs:comment 
To play with semantic web links, I made a toy 
semantic web browser, Tabulator. Toy, because it is hacked up in
Javascript (a change from my usual Python) to 
experiment with these ideas. It is AJAR - Asynchronous Javascript
and RDF. I started off with Jim Ley's RDF Parser 
and added a little data store. The store understands the minimal
OWL ([inverse] functional properties, sameAs) to 
smush nodes representing the same thing together, so it doesn't
matter if people use many different URIs for the 
same thing, which of course they can. It has a simple index and
supports simple query. The API is more or less the 
one which cwm and had been tending toward in python.

-- TimBL
;
schema:url http://dig.csail.mit.edu/breadcrumbs/node/62 ;
sioc:links_to http://dig.csail.mit.edu/breadcrumbs/taxonomy/term/17/,
http://dig.csail.mit.edu/breadcrumbs/taxonomy/term/20/,
http://dig.csail.mit.edu/breadcrumbs/taxonomy/term/1/ ;
rdfs:seeAlso http://dig.csail.mit.edu/2005/ajar/ajaw/Developer.html ;
opl:mentions https://twitter.com/hashtag/Ajar#this .

https://twitter.com/hashtag/Ajar#this
a skos:Concept ;
rdfs:label Ajar ;
skos:prefLabel Asynchronous Javascript and RDF ;
is schema:about of 
http://dig.csail.mit.edu/2005/ajar/ajaw/Developer.html,

http://dig.csail.mit.edu/breadcrumbs/node/62/;
rdfs:comment Leveraging the abstract nature of the RDF language in 
conjunction with XMLHttpRequest, where format-specific payloads are 
replaced with RDF [which is serializable using a 
variety of serialization formats]. XMLHttpRequest isn't actually XML 
specific. ;

xhv:related http://www.w3schools.com/json/json_http.asp .
}

Hope that helps.

[1] http://kidehen.blogspot.com/2014/07/nanotation.html -- Nanotation 
(which enables the embedding of RDF statements, using TURTLE notation, 
wherever plain text input is allowed)


[2] http://kingsley.idehen.net/c/D5MTTC -- processed version of the 
nanotations above (from a document saved to my Briefcase).


--
Regards,

Kingsley Idehen 
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this






Re: Microsoft Access for RDF?

2015-02-23 Thread Kingsley Idehen

On 2/23/15 5:20 AM, Jean-Claude Moissinac wrote:

Perhaps something interesting here
https://github.com/jmvanel/semantic_forms





Great!

BTW -- is there a live demo instance somewhere?

--
Regards,

Kingsley Idehen 
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this





Re: Microsoft Access for RDF?

2015-02-23 Thread Kingsley Idehen

On 2/23/15 4:34 AM, Stian Soiland-Reyes wrote:

On 21 February 2015 at 20:38, Michael Brunnbauer bru...@netestate.de wrote:


I admit that what you sketch here is better than what I have sketched with
named graphs. But it seems to require a very sophisticated editor or a
very sophisticated user.

Agreed that the editor needs to be more clever than three text fields
with auto-complete. If a non-RDF user is to be making RDF, at a
triple-level, then the editor must be guiding the user towards the
basic Linked Data principles rather than be a glorified vim. :)



I was talking about an editor where the user can
add triples with arbitrary properties.

.. and arbitrary properties/classes should include ones the user makes
up on the spot and relates to existing properties/classes.

Kingsley was talking about the sentences analogy, and I think we
should keep this in mind. While you shouldn't normally write a whole
book using your own terminology, it's very common to introduce at
least SOME terminology that you specify or clarify for the scope of
the text (e.g. the document, graph, dataset).

It could be something as in a triple (a sentence), show something as
simple as a little [+] button next to the property or class to
specialize it and use this instead in that triple. (making it
available for other triples)

Perhaps Kingsley's approach has some mechanism for specializing or
introducing new properties?


It doesn't have that right now, but it will have such functionality in 
due course. This feature may or may not make the initial public release.


At the current time, I use nanotation [1][2] as my mechanism for 
achieving this goal. Ultimately, an RDF Editor should simply be an 
alternative to nanotation that addresses the needs of those that don't 
want to type RDF statements by hand, in any notation. Yes, they don't 
want to work with raw editors like vi, vim, TextMate, Sublime, etc.


Links:

[1] http://kidehen.blogspot.com/2014/07/nanotation.html  -- I can create 
RDF statements that define the nature of things and/or the things themselves


[2] http://linkeddata.uriburner.com/c/8FWGFY -- results of nanotation in 
a tweet that include the definition of 
https://twitter.com/hashtag/shouldBeOfInterestTo#this


[3] 
http://linkeddata.uriburner.com/fct/rdfdesc/usage.vsp?g=https%3A%2F%2Ftwitter.com%2Fhashtag%2FshouldBeOfInterestTo%23this 
-- named graphs (documents) that contain the RDF statements used to 
define this particular relation  [basically, this was performed using a 
series of tweets] .


--
Regards,

Kingsley Idehen 
Founder  CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this






Re: Microsoft Access for RDF?

2015-02-21 Thread Kingsley Idehen

On 2/21/15 9:48 AM, Martynas Jusevičius wrote:

How do you minimize the user interaction space i.e., reduce clutter --
especially if you have a lot of relations in scope or the possibility that
such becomes the reality over time?


I don't think concurrent updates are related to resources or specific
to our editor.


Okay, so your editor simply writes data assuming the destination (a 
store of some sort) handles conflicts, right?



The Linked Data platform (whatever it is) and its HTTP
logic has to deal with ETags and 409 Conflict etc.


Dealing with conflict is intrinsic to this problem space.



I was wondering if this logic should be part of specifications such as
the Graph Store Protocol:
https://twitter.com/pumba_lt/status/545206095783145472
But I haven't an answer. Maybe it's an oversight on the W3C side?


No, I think it boils down to implementations. A client or a server 
may or may not be engineered to deal with these matters.


We scope the description edited either by a) SPARQL query or b) named
graph content.


Yes, but these activities will inevitably encounter conflicts, which 
take different shapes and forms:


1. one user performing inserts, updates, and deletes in random order
2. many users performing the activities above, close-to-concurrently.

Ideally, the client should have the ability to determine a delta prior 
to its write attempt. The nature of the delta is affected by the 
relation in question. Similar issues arise in situations where relations 
are represented as tables, it just so happens that the destination store 
is a SQL RDBMS which has in-built functionality for handling these 
matters, via different concurrency and locking schemes (which can be at 
the record, table, or page levels).


Bearing in mind what's stated above, ideally, an RDF Editor shouldn't 
assume it's writing to an RDBMS equipped with the kind of functionality 
that a typical SQL RDBMS would possess. Thus, it needs to have some kind 
of optimistic concurrency mechanism (over HTTP) that leverages the kind 
of grouping that named graphs, RDF relations, and RDF statement 
reification provide.
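A minimal sketch of such an optimistic concurrency mechanism (all names hypothetical; in an HTTP deployment the version token would travel as an ETag, with conflicts signaled via 409/412):

```python
# Sketch of optimistic concurrency for a named-graph store (hypothetical API).
# Each named graph carries a version token; a write succeeds only if the
# client's token still matches, otherwise the client must re-read, recompute
# its delta, and retry.

class GraphStore:
    def __init__(self):
        self._graphs = {}   # graph name -> (version, frozenset of triples)

    def read(self, name):
        return self._graphs.get(name, (0, frozenset()))

    def write(self, name, expected_version, triples):
        current_version, _ = self._graphs.get(name, (0, frozenset()))
        if current_version != expected_version:
            return False    # conflict: someone else wrote first (HTTP 409 analogue)
        self._graphs[name] = (current_version + 1, frozenset(triples))
        return True

store = GraphStore()
v, g = store.read("urn:g1")
ok = store.write("urn:g1", v, {("#a", "rdfs:label", "A")})      # fresh token: succeeds
stale = store.write("urn:g1", v, {("#a", "rdfs:label", "A2")})  # stale token: conflict
print(ok, stale)  # True False
```

The key design choice is that conflict detection needs no server-side locks: the client computes its delta against the version it read, and the store merely compares tokens at write time.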


To conclude, it's a good thing that we are now talking about RDF Editors 
and Read-Write Web issues. The silence on this matter, in relation to 
Linked Open Data, has been deafening :)








--
Regards,

Kingsley Idehen 
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this






Re: Microsoft Access for RDF?

2015-02-21 Thread Kingsley Idehen

On 2/21/15 9:48 AM, Martynas Jusevičius wrote:

On Fri, Feb 20, 2015 at 6:41 PM, Kingsley  Idehen
kide...@openlinksw.com wrote:

On 2/20/15 12:04 PM, Martynas Jusevičius wrote:


Not to criticize, but to seek clarity:

What does the term resources refer to, in your usage context?

In a world of Relations (this is what RDF is about, fundamentally) its hard
for me to understand what you mean by grouped by resources. What is the
resource etc?

Well, RDF stands for Resource Description Framework after all, so
I'll cite its spec:
RDF graphs are sets of subject-predicate-object triples, where the
elements may be IRIs, blank nodes, or datatyped literals. They are
used to express descriptions of resources.

More to the point, RDF serializations often group triples by subject
URI.


The claim "often group triples by subject" isn't consistent with the 
nature of an RDF Relation [1].


A _*predicate*_ is a sentence-forming _*relation*_. Each tuple in the 
relation is a finite, ordered sequence of objects. The fact that a 
particular tuple is an element of a predicate is denoted by 
'(*predicate* arg_1 arg_2 .. arg_n)', where the arg_i are the objects so 
related. In the case of binary predicates, the fact can be read as 
`arg_1 is *predicate* arg_2' or `a *predicate* of arg_1 is arg_2'. [1]


RDF's specs are consistent with what's described above, and inconsistent 
with the subject ordering claims you are making.


RDF statements (which represent relations) have sources such as 
documents which are accessible over a network and/or documents managed 
by some RDBMS e.g., Named Graphs in the case of a SPARQL compliant RDBMS .


In RDF you are always working with a set of tuples (s,p,o 3-tuples 
specifically) grouped by predicate .


Also note, I never used the phrase RDF Graph in any of the sentences 
above, and deliberately so, because that overloaded phrase is yet 
another source of unnecessary confusion.


Links:

[1] 
http://54.183.42.206:8080/sigma/Browse.jsp?lang=EnglishLanguageflang=SUO-KIFkb=SUMOterm=Predicate


Kingsley





   Within a resource block, properties are sorted
alphabetically by their rdfs:labels retrieved from respective
vocabularies.


How do you handle the integrity of multi-user updates, without killing
concurrency, using this method of grouping (which in and of itself is
unclear due to the use of the "resources" term)?

How do you minimize the user interaction space i.e., reduce clutter --
especially if you have a lot of relations in scope or the possibility that
such becomes the reality over time?


I don't think concurrent updates are related to resources or specific
to our editor. The Linked Data platform (whatever it is) and its HTTP
logic has to deal with ETags and 409 Conflict, etc.
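The ETag/conflict logic referred to here can be sketched as follows. This is an illustrative sketch, not any particular platform's implementation; the function name and status-code choices are assumptions, following the thread's framing of 409 Conflict (HTTP itself also defines 412 Precondition Failed for a failed If-Match).

```python
# Sketch of optimistic-concurrency handling for a PUT/PATCH on a resource:
# the update is applied only if the client's If-Match still matches the
# ETag the server currently holds for that resource.
def conditional_put(stored_etag, if_match, apply_update):
    """Return an HTTP-style status code for a conditional update attempt."""
    if if_match is None:
        return 428  # Precondition Required: force clients to send If-Match
    if if_match != stored_etag:
        return 409  # Conflict: the resource changed since the client read it
    apply_update()
    return 204  # No Content: update accepted

applied = []
print(conditional_put('"v2"', '"v1"', lambda: applied.append(True)))  # -> 409
print(conditional_put('"v2"', '"v2"', lambda: applied.append(True)))  # -> 204
```

The point is that the editor (client) never wins a race silently: a stale ETag turns into a visible conflict rather than a lost update.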

I was wondering if this logic should be part of specifications such as
the Graph Store Protocol:
https://twitter.com/pumba_lt/status/545206095783145472
But I haven't an answer. Maybe it's an oversight on the W3C side?

We scope the description edited either by a) SPARQL query or b) named
graph content.


Kingsley


On Fri, Feb 20, 2015 at 4:59 PM, Michael Brunnbauer bru...@netestate.de
wrote:

Hello Martynas,

sorry! You mean this one?


http://linkeddatahub.com/ldh?mode=http%3A%2F%2Fgraphity.org%2Fgc%23EditMode

Nice! Looks like a template but you still may have the triple object
ordering
problem. Do you? If yes, how did you address it?

Regards,

Michael Brunnbauer

On Fri, Feb 20, 2015 at 04:23:14PM +0100, Martynas Jusevičius wrote:

I find it funny that people on this list and semweb lists in general
like discussing abstractions, ideas, desires, prejudices etc.

However when a concrete example is shown, which solves the issue
discussed or at least comes close to that, it receives no response.

So please continue discussing the ideal RDF environment and its
potential problems while we continue improving our editor for users
who manage RDF already now.

Have a nice weekend everyone!

On Fri, Feb 20, 2015 at 4:09 PM, Paul Houle ontolo...@gmail.com wrote:

So some thoughts here.

OWL,  so far as inference is concerned,  is a failure and it is time to
move
on.  It is like RDF/XML.

As a way of documenting types and properties it is tolerable.  If I
write
down something in production rules I can generally explain to an
average
joe what they mean.  If I try to use OWL it is easy for a few things,
hard
for a few things,  then there are a few things Kendall Clark can do,
and
then there is a lot you just can't do.

On paper OWL has good scaling properties but in practice production
rules
win because you can infer the things you care about and not have to
generate
the large number of trivial or otherwise uninteresting conclusions you
get
from OWL.

As a data integration language OWL points in an interesting direction
but it
is insufficient in a number of ways.  For instance,  it can't convert
data
types (canonicalize mailto:j...@example.com and j...@example.com),
deal
with trash dates (have you ever seen an enterprise

Re: Microsoft Access for RDF?

2015-02-21 Thread Kingsley Idehen

On 2/21/15 2:57 PM, Martynas Jusevičius wrote:

Kingsley,

I am fully aware of the distinction between RDF as a data model and
its serializations. That's why I wrote: RDF *serializations* often
group triples by subject URI.

What I tweeted recently was, that despite having concept models and
abstractions in our heads, when pipelining data and writing software
we are dealing with concrete serializations. Am I not right?


Trouble here is that we have a terminology problem. If I disagree with 
you (about terminology), it is presumed I am lecturing.


FWIW -- serialization formats, notations, and languages are not the same 
thing. Unfortunately, all of these subtly distinct items are conflated 
in RDF-land.




So what I mean with it works is that our RDF/POST-based user
interface is simply a generic function of the RDF graph behind it, in
the form of XSLT transforming the RDF/XML serialization.

I commented on concurrency in the previous email, but you haven't
replied to that.


I'll go find that comment, and respond if need be. Otherwise, I would 
rather just make time to use your RDF editing tool and provide specific 
usage feedback, in regards to the issues of concern and interest to me.


Kingsley


On Sat, Feb 21, 2015 at 8:43 PM, Kingsley  Idehen
kide...@openlinksw.com wrote:

On 2/21/15 1:34 PM, Martynas Jusevičius wrote:

Kingsley,

I don't need a lecture from you each time you disagree.


I am not lecturing you. I am trying to make the conversation clearer. Can't
you see that?


Please explain what you think Resource means in Resource
Description Framework.

In any case, I think you know well what I mean.

A grouped RDF/XML output would be smth like this:

<rdf:Description rdf:about="http://resource">
  <rdf:type rdf:resource="http://type"/>
  <a:property>value</a:property>
  <b:property>smth</b:property>
</rdf:Description>


You spoke about RDF not RDF/XML (as you know, they are not the same thing).
You said or implied RDF datasets are usually organized by subject.

RDF is an abstract Language (system of signs, syntax, and semantics). Thus,
why are you presenting me with an RDF/XML statement notation based response,
when we are debating/discussing the nature of an RDF relation?

Can't you see that the more we speak about RDF in overloaded-form the more
confusing it remains?

RDF isn't a mystery. It doesn't have to be some unsolvable riddle. Sadly,
that's its general perception because we talk about it using common terms in
an overloaded manner.


How would you call this? I call it a resource description.


See my comments above. RDF is a Language. You can create RDF Statements in a
document using a variety of Notations. Thus, when speaking of RDF I am not
thinking about RDF/XML, TURTLE, or any other notation. I am thinking about a
language that systematically leverages signs, syntax, and semantics as a
mechanism for encoding and decoding information [data in some context].


But the
name does not matter much, the fact is that we use it and it works.


"Just works" in what sense? That's why I asked you questions about how you 
catered to integrity and concurrency.

If you have more than one person editing sentences, paragraphs, or a page in
a book, wouldn't you think handling issues such as the activity frequency,
user count, and content volume are important? That's all I was seeking an
insight from you about, in regards to your work.


Kingsley



On Sat, Feb 21, 2015 at 7:01 PM, Kingsley  Idehen
kide...@openlinksw.com wrote:

On 2/21/15 9:48 AM, Martynas Jusevičius wrote:

On Fri, Feb 20, 2015 at 6:41 PM, Kingsley  Idehen
kide...@openlinksw.com wrote:

On 2/20/15 12:04 PM, Martynas Jusevičius wrote:


Not to criticize, but to seek clarity:

What does the term resources refer to, in your usage context?

In a world of Relations (this is what RDF is about, fundamentally) its
hard
for me to understand what you mean by grouped by resources. What is the
resource etc?

Well, RDF stands for Resource Description Framework after all, so
I'll cite its spec:
RDF graphs are sets of subject-predicate-object triples, where the
elements may be IRIs, blank nodes, or datatyped literals. They are
used to express descriptions of resources.

More to the point, RDF serializations often group triples by subject
URI.


The claim often group triples by subject  isn't consistent with the
nature
of an RDF Relation [1].

A predicate is a sentence-forming relation. Each tuple in the relation
is a
finite, ordered sequence of objects. The fact that a particular tuple is
an
element of a predicate is denoted by '(*predicate* arg_1 arg_2 ..
arg_n)',
where the arg_i are the objects so related. In the case of binary
predicates, the fact can be read as `arg_1 is *predicate* arg_2' or `a
*predicate* of arg_1 is arg_2'.)  [1] .

RDF's specs are consistent with what's described above, and inconsistent
with the subject ordering claims you are making.

RDF statements (which represent relations) have sources such as documents
which are accessible over

Re: Microsoft Access for RDF?

2015-02-21 Thread Kingsley Idehen

On 2/21/15 1:58 PM, Pat Hayes wrote:

On Feb 21, 2015, at 12:01 PM, Kingsley Idehen kide...@openlinksw.com wrote:


On 2/21/15 9:48 AM, Martynas Jusevičius wrote:

On Fri, Feb 20, 2015 at 6:41 PM, Kingsley  Idehen

kide...@openlinksw.com
  wrote:


On 2/20/15 12:04 PM, Martynas Jusevičius wrote:


Not to criticize, but to seek clarity:

What does the term resources refer to, in your usage context?

In a world of Relations (this is what RDF is about, fundamentally) its hard
for me to understand what you mean by grouped by resources. What is the
resource etc?


Well, RDF stands for Resource Description Framework after all, so
I'll cite its spec:
RDF graphs are sets of subject-predicate-object triples, where the
elements may be IRIs, blank nodes, or datatyped literals. They are
used to express descriptions of resources.

More to the point, RDF serializations often group triples by subject
URI.


The claim often group triples by subject  isn't consistent with the nature of 
an RDF Relation [1].

Sure it is.

A predicate is a sentence-forming relation. Each tuple in the relation is a finite, 
ordered sequence of objects. The fact that a particular tuple is an element of a predicate is 
denoted by '(*predicate* arg_1 arg_2 .. arg_n)', where the arg_i are the objects so related. 
In the case of binary predicates, the fact can be read as `arg_1 is *predicate* arg_2' or `a 
*predicate* of arg_1 is arg_2'.)  [1] .

RDF's specs are consistent with what's described above,

Indeed. All the RDF relations are binary, so this is overkill, but...


and inconsistent with the subject ordering claims you are making.

Not in the least.


Pat,

There is no implicit ordering in RDF.

The claim "RDF serializations often group triples by subject URI" comes 
close to implying a common-practice ordering, when no such ordering is specified.


An application, e.g., the kind that's created this thread (i.e., an RDF 
Editor) could decide to order RDF statements by Subject, but it has 
implications. The very editor we will soon be releasing offers that kind 
of ordering, but not as the sole option.


At the start of this RDF Editor thread, I tried to bring a specific 
metaphor (Book [RDF Source], Pages [Named Graphs], Paragraphs [RDF 
Statements grouped by Predicate], and Sentences [RDF Statements]) into 
scope so that we had a simple basis for understanding issues that arise 
in a multi-user editor -- where operation atomicity is important, 
without totally killing concurrency.
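The metaphor above can be made concrete with a minimal sketch, assuming a dataset modeled as (s, p, o, g) 4-tuples, where g names the graph ("page") each statement ("sentence") belongs to. All IRIs and names here are illustrative, not any editor's actual data structures.

```python
# A dataset ("book") as a set of quads; the fourth element is the named
# graph ("page") holding each triple ("sentence").
quads = {
    ("http://ex.org/s1", "http://ex.org/p", "a", "http://ex.org/page1"),
    ("http://ex.org/s2", "http://ex.org/p", "b", "http://ex.org/page1"),
    ("http://ex.org/s3", "http://ex.org/p", "c", "http://ex.org/page2"),
}

def named_graph(dataset, graph_iri):
    """Return the set of triples held in one named graph."""
    return {(s, p, o) for s, p, o, g in dataset if g == graph_iri}

print(len(named_graph(quads, "http://ex.org/page1")))  # -> 2
```

Scoping an edit to one named graph is what lets the editor treat a "page" as the unit of comparison and persistence, rather than locking the whole "book".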


As we already know from RDBMS experience in general, an RDF Editor (a 
client to a store) needs to be able to leverage optimistic concurrency, 
which may not actually match the UI/UX interaction experience of the 
user i.e., they see one thing, but at actual data persistence time 
something else is happening in regards to the actual atomic units that 
are subject to comparison with the original RDF source prior to actual 
persistence.


RDF Editors are not a trivial matter, which is why, after 14+ years, we 
are only beginning to attend to this issue as a matter of course re. the 
broader Linked Open Data cloud.



RDF triples can be organized in any way that suits the user, including by 
common subject if that is thought to be useful or intuitive.


Sets of RDF 3-tuples can be *ordered* in an app UI/UX as the developer 
sees fit, again there's no golden rule (including approaches that 
produce unusable apps that struggle with integrity and concurrency as 
data size and concurrent users increase) .


RDF 3-tuples are organized in line with RDF syntax rules. Assuming 
organize implies how an RDF 3-tuple (triple) is arranged, hence the 
subject-predicate-object structure.



  The RDF spec says nothing about how triples are to be ordered or organized.


I never said or implied it did. I simply referred to the nature of an 
RDF relation. Basically, what rdf:Property is about.



  RDF/XML syntax assumes a by-subject organization and so provides 
abbreviations which only work for that.


You know this already, but I just have to reply to your comment (we have 
an audience): RDF/XML != RDF :)


I wasn't replying to a question about RDF/XML, and I don't believe 
(circa. 2015) that RDF datasets are typically in RDF/XML form. That's 
not what I see these days, and I work with a lot of RDF data (not a 
secret to you or anyone else).






RDF statements (which represent relations) have sources such as documents which 
are accessible over a network and/or documents managed by some RDBMS e.g., 
Named Graphs in the case of a SPARQL compliant RDBMS .

In RDF you are always working with a set of tuples (s,p,o 3-tuples specifically)

yes, but


grouped by predicate .

Not necessarily. They can be grouped any way you like.


Yes, they can be grouped in an application, however the developer 
chooses, but that isn't what I was refuting or even concerned about.


My real concern boils down to an RDF Editor that can work against big or 
large RDF data sources without ignoring fundamental

Re: Microsoft Access for RDF?

2015-02-21 Thread Kingsley Idehen

On 2/21/15 1:34 PM, Martynas Jusevičius wrote:

Kingsley,

I don't need a lecture from you each time you disagree.


I am not lecturing you. I am trying to make the conversation clearer. 
Can't you see that?




Please explain what you think Resource means in Resource
Description Framework.

In any case, I think you know well what I mean.

A grouped RDF/XML output would be smth like this:

<rdf:Description rdf:about="http://resource">
   <rdf:type rdf:resource="http://type"/>
   <a:property>value</a:property>
   <b:property>smth</b:property>
</rdf:Description>


You spoke about RDF not RDF/XML (as you know, they are not the same 
thing). You said or implied RDF datasets are usually organized by 
subject.


RDF is an abstract Language (system of signs, syntax, and semantics). 
Thus, why are you presenting me with an RDF/XML statement notation based 
response, when we are debating/discussing the nature of an RDF relation?


Can't you see that the more we speak about RDF in overloaded-form the 
more confusing it remains?


RDF isn't a mystery. It doesn't have to be some unsolvable riddle. 
Sadly, that's its general perception because we talk about it using 
common terms in an overloaded manner.




How would you call this? I call it a resource description.


See my comments above. RDF is a Language. You can create RDF Statements 
in a document using a variety of Notations. Thus, when speaking of RDF I 
am not thinking about RDF/XML, TURTLE, or any other notation. I am 
thinking about a language that systematically leverages signs, syntax, 
and semantics as a mechanism for encoding and decoding information [data 
in some context].



But the
name does not matter much, the fact is that we use it and it works.


Just works in what sense? That's why I asked you questions about how you 
catered to integrity and concurrency.


If you have more than one person editing sentences, paragraphs, or a 
page in a book, wouldn't you think handling issues such as the activity 
frequency, user count, and content volume are important? That's all I 
was seeking an insight from you about, in regards to your work.



Kingsley



On Sat, Feb 21, 2015 at 7:01 PM, Kingsley  Idehen
kide...@openlinksw.com wrote:

On 2/21/15 9:48 AM, Martynas Jusevičius wrote:

On Fri, Feb 20, 2015 at 6:41 PM, Kingsley  Idehen
kide...@openlinksw.com wrote:

On 2/20/15 12:04 PM, Martynas Jusevičius wrote:


Not to criticize, but to seek clarity:

What does the term resources refer to, in your usage context?

In a world of Relations (this is what RDF is about, fundamentally) its hard
for me to understand what you mean by grouped by resources. What is the
resource etc?

Well, RDF stands for Resource Description Framework after all, so
I'll cite its spec:
RDF graphs are sets of subject-predicate-object triples, where the
elements may be IRIs, blank nodes, or datatyped literals. They are
used to express descriptions of resources.

More to the point, RDF serializations often group triples by subject
URI.


The claim often group triples by subject  isn't consistent with the nature
of an RDF Relation [1].

A predicate is a sentence-forming relation. Each tuple in the relation is a
finite, ordered sequence of objects. The fact that a particular tuple is an
element of a predicate is denoted by '(*predicate* arg_1 arg_2 .. arg_n)',
where the arg_i are the objects so related. In the case of binary
predicates, the fact can be read as `arg_1 is *predicate* arg_2' or `a
*predicate* of arg_1 is arg_2'.)  [1] .

RDF's specs are consistent with what's described above, and inconsistent
with the subject ordering claims you are making.

RDF statements (which represent relations) have sources such as documents
which are accessible over a network and/or documents managed by some RDBMS
e.g., Named Graphs in the case of a SPARQL compliant RDBMS .

In RDF you are always working with a set of tuples (s,p,o 3-tuples
specifically) grouped by predicate .

Also note, I never used the phrase RDF Graph in any of the sentences
above, and deliberately so, because that overloaded phrase is yet another
source of unnecessary confusion.

Links:

[1]
http://54.183.42.206:8080/sigma/Browse.jsp?lang=EnglishLanguage&flang=SUO-KIF&kb=SUMO&term=Predicate

Kingsley


   Within a resource block, properties are sorted
alphabetically by their rdfs:labels retrieved from respective
vocabularies.

How do you handle the integrity of multi-user updates, without killing
concurrency, using this method of grouping (which in of itself is unclear
due to the use resources term) ?

How do you minimize the user interaction space i.e., reduce clutter --
especially if you have a lot of relations in scope or the possibility that
such becomes the reality over time?

I don't think concurrent updates are related to resources or specific
to our editor. The Linked Data platform (whatever it is) and its HTTP
logic has to deal with ETags and 409 Conflict, etc.

I was wondering if this logic should be part of specifications such as
the Graph

Re: Microsoft Access for RDF?

2015-02-20 Thread Kingsley Idehen

On 2/20/15 1:19 PM, Graham Klyne wrote:

Hi Stian,

Thanks for the mention :)


Graham Klyne's Annalist is perhaps not quite what you are thinking of
(I don't think it can connect to an arbitrary SPARQL endpoint), but I
would consider it as falling under a similar category, as you have a
user interface to define record types and forms, browse and edit
records, with views defined for different record types. Under the
surface it is however all RDF and REST - so you are making a schema by
stealth.

http://annalist.net/
http://demo.annalist.net/


Annalist is still in its prototype phase, but it's available to play 
with if anyone wants to try stuff.  See also 
https://github.com/gklyne/annalist for source.  There's also a 
Dockerized version.


It's true that Annalist does not currently connect to a SPARQL 
endpoint, but I have recently been doing some RDF data wrangling and 
starting to think about how to connect to public RDF (e.g. 
http://demo.annalist.net/annalist/c/CALMA_data/d/ is a first attempt 
at creating an editable version of some music data from your colleague 
Sean).  In this case, the record types and views have been created 
automatically from the raw data, and are pretty basic - but that 
automatic extraction can serve as a starting point for subsequent 
editing.  (The reverse of this, creating an actual schema from the 
defined types and views, is an exercise for the future, or maybe even 
for a reader :) )


Internally, the underlying data access is isolated in a single module, 
intended to facilitate connecting to alternative backends, which could 
be via SPARQL access.  (I'd also like to connect up with the linked 
data fragments work at some stage.)


If this looks like something that could be useful to anyone out there, 
about now might be a good time to offer feedback.  Once I have what I 
feel is a minimum viable product release, hopefully not too long now, 
I'm hoping to use feedback and collaborations to prioritize ongoing 
developments.


#g
--

It is very good and useful, in my eyes!

My enhancement request would be that you consider supporting at 
least one of the following, in regards to storage I/O:


1. LDP
2. WebDAV
3. SPARQL Graph Protocol
4. SPARQL 1.1 Insert, Update, Delete.
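For option 4 above, a hedged sketch of what the client side amounts to: composing a SPARQL 1.1 "INSERT DATA" update string. The graph and term IRIs and the `insert_data` helper are hypothetical, made up for this example; a real client would POST the resulting body to the endpoint with Content-Type: application/sparql-update.

```python
# Build a SPARQL 1.1 Update request body from plain (s, p, o) IRI 3-tuples.
def insert_data(graph_iri, triples):
    """Serialize IRI 3-tuples into an INSERT DATA update targeting one graph."""
    stmts = "\n    ".join(f"<{s}> <{p}> <{o}> ." for s, p, o in triples)
    return f"INSERT DATA {{\n  GRAPH <{graph_iri}> {{\n    {stmts}\n  }}\n}}"

body = insert_data("http://ex.org/g",
                   [("http://ex.org/s", "http://ex.org/p", "http://ex.org/o")])
print(body)
```

This only covers IRI objects; literals and blank nodes would need their own serialization rules, which is exactly why delegating storage I/O to a protocol-compliant backend is attractive.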

As for Access Controls on the target storage destinations, don't worry 
about that in the RDF editor itself, leave that to the storage provider 
[1] that supports any combination of the protocols above.


Links:

[1] 
http://kidehen.blogspot.com/2014/07/loosely-coupled-read-write-interactions.html 
-- Loosely-Coupled Read-Write Web pattern example.


--
Regards,

Kingsley Idehen 
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this




smime.p7s
Description: S/MIME Cryptographic Signature


Re: Microsoft Access for RDF?

2015-02-20 Thread Kingsley Idehen

On 2/20/15 4:54 AM, Stian Soiland-Reyes wrote:



On 19 Feb 2015 21:42, Kingsley Idehen kide...@openlinksw.com 
mailto:kide...@openlinksw.com wrote:


 No, this is dangerous and is hiding the truth.
 What?

(Just to clarify my view, obviously you know this :) )

That RDF Triples are not ordered in an RDF Graph.



Correct.

They might be ordered in something else, but that is not part of the 
RDF graph.




You can produce an order using:

SELECT * WHERE { GRAPH <named-graph-iri> { ?s ?p ?o } } ORDER BY ?p

OFFSET and LIMIT can be used to create a paging mechanism, if required. 
Subqueries can be used to further optimize when the named graph is very 
large, etc.


This UI would be for the user to alter relation subjects or objects. 
Predicate alteration isn't an option in this UI. Basically, the user is 
focused on relationship entities for a specific relationship type.



(Reification statements can easily also become something else)


UI leveraging Reification:

This provides an ability to let the user interact with a collection of 
statements in a UI where subject, predicate, and objects can be altered. 
Basically, they have a UX oriented towards sentence editing.
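The reification behind that sentence-editing UX can be sketched as follows. The rdf: vocabulary IRI is the real one; the statement id and the `reify` helper are illustrative, not the editor's actual API. One triple becomes four triples about an rdf:Statement node, so the subject, predicate, and object each become individually addressable and editable.

```python
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

def reify(stmt_id, s, p, o):
    """Expand one (s, p, o) triple into its four reification triples."""
    return {
        (stmt_id, RDF + "type", RDF + "Statement"),
        (stmt_id, RDF + "subject", s),
        (stmt_id, RDF + "predicate", p),
        (stmt_id, RDF + "object", o),
    }

stmts = reify("http://ex.org/stmt1", "http://ex.org/alice",
              "http://xmlns.com/foaf/0.1/knows", "http://ex.org/bob")
print(len(stmts))  # -> 4
```

Editing the "sentence" then means updating the rdf:subject, rdf:predicate, or rdf:object triple of the statement node, rather than deleting and re-asserting the whole relation.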


So if you tell the user his information is just RDF, but neglect to 
mention and then some, he could wrongfully think that his list of 
say preferred president has its order preserved in any exposed RDF.




Not the intent here at all.

If you don't tell him it is RDF (this is now the trend of Linked Data 
movement..), fine! It's just a technology - he doesn't need to know.




In our case we are showcasing RDF as Language, and using the UI/UX to 
bolster that point of view, using different UI/UX patterns to address 
the different ways a user can create or alter relations.




 You can describe collections using RDF statements, I don't have any 
idea how what I am talking about implies collection exclusion.


My apologies, I got the impression there was a suggestion to control 
ordering of triples without making any collection statements.




An RDF editor has to allow users to create any kind of relation that's 
possible in the RDF Language.



 Don't let the user encode information he considers important in a 
way that is not preserved semantically.

 ??

I simply meant to not store such information out of band, e.g. by 
virtue of triple order or comments in a Turtle file, or by magic 
extra bits in some database that don't transfer along to other 
consumers of the produced RDF.




Okay, we don't do that. In fact, exposing relations as groups of RDF 
statements grouped by predicate enables clients to leverage optimistic 
concurrency patterns, since they can compute hash-based checksums on the 
predicate-based grouping that are tested prior to final persistence on 
the target store (SPARQL, WebDAV, or LDP compliant).
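A sketch, under stated assumptions, of that hash-based check: the client checksums the predicate grouping as it read it, and the store recomputes and compares before persisting the edit. The `group_checksum` helper and the example pairs are illustrative, not the actual implementation.

```python
import hashlib

def group_checksum(pairs):
    """Deterministic SHA-256 over a predicate's set of (subject, object) pairs."""
    # Sorting makes the digest independent of set iteration order.
    canonical = "\n".join(sorted(f"{s}\t{o}" for s, o in pairs))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

as_read = {("http://ex.org/a", "x"), ("http://ex.org/b", "y")}
current = {("http://ex.org/a", "x"), ("http://ex.org/b", "z")}  # concurrent edit
# Differing checksums signal that the grouping changed underneath the editor.
print(group_checksum(as_read) != group_checksum(current))  # -> True
```

Because the digest is computed per predicate group rather than per graph, two users editing different relations of the same subject need not conflict, which is the concurrency win being described.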


It should be fine to store view-metadata out of bands (e.g. which 
field was last updated) - but if it has a conceptual meaning to the 
user I think it should also have meaning in the RDF and the 
vocabularies used.




We have a View that works with Controls en route to Data Persistence at 
a storage location that supports one of: SPARQL 1.1 Insert/Update/Delete, 
the SPARQL Graph Protocol, LDP, or WebDAV. The Editor we've built is 
Javascript-based. It also makes use of rdfstore.js, our generic I/O 
layer (also in Javascript), and a few other bits (ontology lookups, 
etc.), plus some jQuery integration.


If you are able to transparently do the right thing semantically, 
then hurray!




I think we do, but we'll see what everyone thinks once its released :)



 Why do you think we've built an RDF editor without factoring in OWL?

Many people are still allergic to OWL :-(



We aren't :)

And also I am still eager to actually see what you are talking about 
rather than guessing! :-)



 I think we are better off waiting until we release our RDF Editor. 
We actually built this at the request of a very large customer. This 
isn't a speculative endeavor. It's actually being used by said 
organization as I type


Looking forward to have a go. Great that you will open source it!



Okay.



Re: Microsoft Access for RDF?

2015-02-20 Thread Kingsley Idehen

On 2/20/15 10:23 AM, Martynas Jusevičius wrote:

I find it funny that people on this list and semweb lists in general
like discussing abstractions, ideas, desires, prejudices etc.


That's because dog-fooding hasn't yet become second nature across the 
aforementioned communities. Don't give up; just keep pushing the case 
via real examples, etc.




However when a concrete example is shown, which solves the issue
discussed or at least comes close to that, it receives no response.


Yes, that is the general case, but don't give up. Keep pushing; things 
will change, they have to!




So please continue discussing the ideal RDF environment and its
potential problems while we continue improving our editor for users
who manage RDF already now.


Please keep up your good work and overall effort in general. Don't get 
frustrated (I know that's easier said than done).


RDF Editors are vital, in regards to bootstrapping a Read-Write Linked 
Open Data ecosystem. The more the merrier, as long as the solutions in 
question are based on open standards (RDF, SPARQL, LDP, HTTP, URIs, 
WebDAV etc..).



Kingsley


Have a nice weekend everyone!

On Fri, Feb 20, 2015 at 4:09 PM, Paul Houle ontolo...@gmail.com wrote:

So some thoughts here.

OWL,  so far as inference is concerned,  is a failure and it is time to move
on.  It is like RDF/XML.

As a way of documenting types and properties it is tolerable.  If I write
down something in production rules I can generally explain to an average
joe what they mean.  If I try to use OWL it is easy for a few things,  hard
for a few things,  then there are a few things Kendall Clark can do,  and
then there is a lot you just can't do.

On paper OWL has good scaling properties but in practice production rules
win because you can infer the things you care about and not have to generate
the large number of trivial or otherwise uninteresting conclusions you get
from OWL.

As a data integration language OWL points in an interesting direction but it
is insufficient in a number of ways.  For instance,  it can't convert data
types (canonicalize mailto:j...@example.com and j...@example.com),  deal
with trash dates (have you ever seen an enterprise system that didn't have
trash dates?) or convert units.  It also can't reject facts that don't
matter and so far as both timespace and accuracy you do much easier if you
can cook things down to the smallest correct database.



The other one is that as Kingsley points out,  the ordered collections do
need some real work to square the circle between the abstract graph
representation and things that are actually practical.

I am building an app right now where I call an API and get back chunks of
JSON which I cache,  and the primary scenario is that I look them up by
primary key and get back something with a 1:1 correspondence to what I got.
Being able to do other kind of queries and such is sugar on top,  but being
able to reconstruct an original record,  ordered collections and all,  is an
absolute requirement.

So far my infovore framework based on Hadoop has avoided collections,
containers and all that because these are not used in DBpedia and Freebase,
at least not in the A-Box.  The simple representation that each triple is a
record does not work so well in this case because if I just turn blank nodes
into UUIDs and spray them across the cluster,  the act of reconstituting a
container would require an unbounded number of passes,  which is no fun at
all with Hadoop.  (At first I though the # of passes was the same as the
length of the largest collection but now that I think about it I think I can
do better than that)  I don't feel so bad about most recursive structures
because I don't think they will get that deep but I think LISP-Lists are
evil at least when it comes to external memory and modern memory
hierarchies.
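The container-reconstruction problem described above can be sketched in a few lines. An rdf:List is a chain of nodes linked by rdf:rest, and recovering the original order means walking that chain from the head; the node ids and the `read_list` helper here are illustrative, not Infovore's code.

```python
FIRST, REST, NIL = "rdf:first", "rdf:rest", "rdf:nil"

# An rdf:List holding ["a", "b", "c"] as a chain of blank nodes.
triples = {
    ("_:b0", FIRST, "a"), ("_:b0", REST, "_:b1"),
    ("_:b1", FIRST, "b"), ("_:b1", REST, "_:b2"),
    ("_:b2", FIRST, "c"), ("_:b2", REST, NIL),
}

def read_list(head, stmts):
    """Walk rdf:first/rdf:rest from the head node, restoring element order."""
    index = {(s, p): o for s, p, o in stmts}
    items, node = [], head
    while node != NIL:
        items.append(index[(node, FIRST)])
        node = index[(node, REST)]
    return items

print(read_list("_:b0", triples))  # -> ['a', 'b', 'c']
```

In memory this walk is trivial; the pain point raised above is that with triples sprayed across a cluster, each rdf:rest hop can cost another pass over the data.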









Re: Microsoft Access for RDF?

2015-02-20 Thread Kingsley Idehen

On 2/20/15 10:09 AM, Paul Houle wrote:

So some thoughts here.

OWL,  so far as inference is concerned,  is a failure and it is time 
to move on.  It is like RDF/XML.


I think that's a little too generic a comment. Describing the nature of 
relations using relations is vital.


Not all of OWL is vital, at the onset. Basically, OWL doesn't need to be 
at the front-door per se., but understanding its role, in regards to 
relations semantics description and exploitation is important.


RDF/XML's problems have tarnished OWL, as it has the notion of a 
Semantic Web in general. For starters, too many OWL usage examples 
(circa., 2105) are *still* presented using  RDF/XML :(


The creation and management of RDF/XML is THE real problem. It messed up 
everything, and stayed at the forefront (as the sole official W3C RDF 
notation standard) for way too long. Exponential decadence++ par excellence!




As a way of documenting types and properties it is tolerable.


Methinks, very useful.

If I write down something in production rules I can generally explain 
to an average joe what they mean.  If I try to use OWL it is easy 
for a few things,  hard for a few things,  then there are a few things 
Kendall Clark can do,  and then there is a lot you just can't do.


On paper OWL has good scaling properties but in practice production 
rules win because you can infer the things you care about and not have 
to generate the large number of trivial or otherwise uninteresting 
conclusions you get from OWL.


You need both, with rules being much clearer starting points for users 
and developers.
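The production-rule style being contrasted with OWL here can be sketched minimally: a single forward-chaining rule run to a fixed point, inferring only the conclusions we care about. The vocabulary strings and the `apply_rules` helper are illustrative, not any rule engine's API.

```python
TYPE, SUBCLASS = "rdf:type", "rdfs:subClassOf"

def apply_rules(facts):
    """Forward-chain one rule: (x type C) + (C subClassOf D) => (x type D)."""
    facts = set(facts)
    while True:
        inferred = {(x, TYPE, d)
                    for (x, t, c) in facts if t == TYPE
                    for (c2, r, d) in facts if r == SUBCLASS and c2 == c}
        if inferred <= facts:  # fixed point reached: nothing new to add
            return facts
        facts |= inferred

result = apply_rules({("fido", TYPE, "Dog"), ("Dog", SUBCLASS, "Animal")})
print(("fido", TYPE, "Animal") in result)  # -> True
```

Unlike a generic OWL reasoner, a rule set like this only ever produces the conclusions its rules name, which is the "infer the things you care about" point made above.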


It's a journey back to Prolog [1] i.e., long awaited 5GL == Webby Prolog.




As a data integration language OWL points in an interesting direction 
but it is insufficient in a number of ways.  For instance,  it can't 
convert data types (canonicalize mailto:j...@example.com and 
j...@example.com),  deal with trash dates (have you ever seen 
an enterprise system that didn't have trash dates?) or convert units. 
It also can't reject facts that don't matter, and as far as both 
time & space and accuracy go, you do much better if you can cook things 
down to the smallest correct database.


Task better handled via rules.
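For illustration, the cleanup tasks listed above (canonicalizing mailto values, rejecting trash dates) fall out naturally as procedural rules. A minimal sketch -- the year thresholds and sample values are invented assumptions:

```python
# Sketch of data-cleanup rules of the kind OWL can't express.
# Thresholds and examples are illustrative assumptions.

from datetime import date

def canonical_email(value):
    # Make mailto:JOE@Example.COM and joe@example.com compare equal.
    e = value.strip()
    if e.lower().startswith("mailto:"):
        e = e[len("mailto:"):]
    return e.lower()

def plausible_date(value):
    # Reject trash dates: unparseable strings and placeholder years.
    try:
        d = date.fromisoformat(value)
    except ValueError:
        return None
    return d if 1800 <= d.year <= 2100 else None

print(canonical_email("mailto:JOE@Example.COM"))  # joe@example.com
print(plausible_date("9999-12-31"))               # None (placeholder year)
```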

Links:


[1] http://www.jfsowa.com/logic/prolog1.htm -- A Prolog to Prolog by 
John F. Sowa  (Last Modified: 11/07/2001 14:13:17) .


--
Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software





Re: Microsoft Access for RDF?

2015-02-20 Thread Kingsley Idehen

On 2/20/15 12:04 PM, Martynas Jusevičius wrote:

Hey Michael,

this one indeed.

The layout is generated with XSLT from RDF/XML. The triples are
grouped by resources.


Not to criticize, but to seek clarity:

What does the term "resources" refer to, in your usage context?

In a world of Relations (this is what RDF is about, fundamentally) it's 
hard for me to understand what you mean by "grouped by resources". What 
is the "resource", etc.?




  Within a resource block, properties are sorted
alphabetically by their rdfs:labels retrieved from respective
vocabularies.


How do you handle the integrity of multi-user updates, without killing 
concurrency, using this method of grouping (which in and of itself is 
unclear due to the use of the "resources" term)?


How do you minimize the user interaction space i.e., reduce clutter -- 
especially if you have a lot of relations in scope or the possibility 
that such becomes the reality over time?


Kingsley


On Fri, Feb 20, 2015 at 4:59 PM, Michael Brunnbauer bru...@netestate.de wrote:

Hello Martynas,

sorry! You mean this one?

http://linkeddatahub.com/ldh?mode=http%3A%2F%2Fgraphity.org%2Fgc%23EditMode

Nice! Looks like a template but you still may have the triple object ordering
problem. Do you? If yes, how did you address it?

Regards,

Michael Brunnbauer

On Fri, Feb 20, 2015 at 04:23:14PM +0100, Martynas Jusevičius wrote:

I find it funny that people on this list and semweb lists in general
like discussing abstractions, ideas, desires, prejudices etc.

However when a concrete example is shown, which solves the issue
discussed or at least comes close to that, it receives no response.

So please continue discussing the ideal RDF environment and its
potential problems while we continue improving our editor for users
who manage RDF already now.

Have a nice weekend everyone!

On Fri, Feb 20, 2015 at 4:09 PM, Paul Houle ontolo...@gmail.com wrote:

So some thoughts here.

OWL,  so far as inference is concerned,  is a failure and it is time to move
on.  It is like RDF/XML.

As a way of documenting types and properties it is tolerable.  If I write
down something in production rules I can generally explain to an average
joe what they mean.  If I try to use OWL it is easy for a few things,  hard
for a few things,  then there are a few things Kendall Clark can do,  and
then there is a lot you just can't do.

On paper OWL has good scaling properties but in practice production rules
win because you can infer the things you care about and not have to generate
the large number of trivial or otherwise uninteresting conclusions you get
from OWL.

As a data integration language OWL points in an interesting direction but it
is insufficient in a number of ways.  For instance,  it can't convert data
types (canonicalize mailto:j...@example.com and j...@example.com),  deal
with trash dates (have you ever seen an enterprise system that didn't have
trash dates?) or convert units.  It also can't reject facts that don't
matter and as far as both time & space and accuracy go you do much better if you
can cook things down to the smallest correct database.



The other one is that as Kingsley points out,  the ordered collections do
need some real work to square the circle between the abstract graph
representation and things that are actually practical.

I am building an app right now where I call an API and get back chunks of
JSON which I cache,  and the primary scenario is that I look them up by
primary key and get back something with a 1:1 correspondence to what I got.
Being able to do other kind of queries and such is sugar on top,  but being
able to reconstruct an original record,  ordered collections and all,  is an
absolute requirement.

So far my infovore framework based on Hadoop has avoided collections,
containers and all that because these are not used in DBpedia and Freebase,
at least not in the A-Box.  The simple representation that each triple is a
record does not work so well in this case because if I just turn blank nodes
into UUIDs and spray them across the cluster,  the act of reconstituting a
container would require an unbounded number of passes,  which is no fun at
all with Hadoop.  (At first I though the # of passes was the same as the
length of the largest collection but now that I think about it I think I can
do better than that)  I don't feel so bad about most recursive structures
because I don't think they will get that deep but I think LISP-Lists are
evil at least when it comes to external memory and modern memory
hierarchies.
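For what it's worth, once all of a collection's triples are co-located in one place (the hard part in Hadoop, as described above), rebuilding an rdf:first/rdf:rest chain is a single linear walk. A sketch with made-up blank-node labels:

```python
# Sketch: reconstituting an RDF collection from its rdf:first / rdf:rest
# triples, assuming they have already been gathered in one place.

RDF_FIRST, RDF_REST, RDF_NIL = "rdf:first", "rdf:rest", "rdf:nil"

def read_list(head, triples):
    """Follow the first/rest chain from `head`; return the member list."""
    index = {(s, p): o for (s, p, o) in triples}
    out, node = [], head
    while node != RDF_NIL:
        out.append(index[(node, RDF_FIRST)])
        node = index[(node, RDF_REST)]
    return out

triples = [
    ("_:b0", RDF_FIRST, "a"), ("_:b0", RDF_REST, "_:b1"),
    ("_:b1", RDF_FIRST, "b"), ("_:b1", RDF_REST, RDF_NIL),
]
print(read_list("_:b0", triples))  # ['a', 'b']
```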



--
++  Michael Brunnbauer
++  netEstate GmbH
++  Geisenhausener Straße 11a
++  81379 München
++  Tel +49 89 32 19 77 80
++  Fax +49 89 32 19 77 89
++  E-Mail bru...@netestate.de
++  http://www.netestate.de/
++
++  Sitz: München, HRB Nr.142452 (Handelsregister B München)
++  USt-IdNr. DE221033342
++  Geschäftsführer: Michael Brunnbauer, Franz Brunnbauer
++  Prokurist: Dipl. Kfm. (Univ.) Markus Hendel






--
Regards,

Kingsley Idehen 
Founder & CEO

Re: Microsoft Access for RDF?

2015-02-19 Thread Kingsley Idehen

On 2/19/15 1:40 PM, Bo Ferri wrote:

Hi Paul,

maybe I'm wrong, but I think what you are generally looking for (in 
the Semantic Web world) is the work going on in the RDF Shapes 
group [1] (though I guess you are already aware of it). What they can't do 
(yet; afaik) is the family instance loop check, right?


On 2/18/2015 11:59 PM, Paul Houle wrote:

For small projects you don't need access controls,  provenance and that
kind of thing,  but if you were trying to run something like Freebase
and Wikidata where you know what the algebra is the obvious thing to do
is use RDF* and SPARQL*.


No, why? - Wikidata's data model goes beyond RDF (triple based 
statements) for good reason. I'm always wondering why the heck we 
"true Semantic Web" developers still insist so hard on our beloved RDF 
(with all its drawbacks). 


Huh?


Look forward. Think property graph and graph databases in general ;)

What?

You can also combine both perfectly, see, e.g., this graph data model 
[3] - RDF compatible, but able to store qualified attributes (e.g. 
order or provenance) at statement basis (without reification et. al).


RDF is a Language, poorly promoted as a "Data Model" (a term which is 
ultimately a can of worms, exemplified by all the confusion it's 
inflicted on everyone to date).


Re. editing of graph information, you may have a look at OrientDB's 
graph editor [4], which offers neat editing options direct in the 
graph (yay!).


Seriously?

I do not know of a single so-called "Graph Database" that even 
comprehends the concept of data de-silo-fication [1][2][3][4]. 
Basically, for data to flow it can only do so via identifiers that 
resolve to descriptions using a platform agnostic protocol.


Data is represented using relations [5][6].

RDF is a Data Representation Language [7].

Links:

[1] https://www.pinterest.com/pin/389561436492223291/ -- What sits 
between You and Your data
[2] http://twitter.com/kidehen/status/567723528318124033 -- Open 
Database Connectivity vs Open Data Connectivity
[3] 
http://kidehen.blogspot.com/2014/08/linked-local-data-lld-and-linked-open.html 
-- Linked Local Data vs Linked Open Data
[4] https://www.pinterest.com/pin/389561436493662240/ -- What RDF is 
about, illustrated (as they say: a picture speaks a thousand words)
[5] http://twitter.com/kidehen/status/568462066207625216 -- What is a 
Relation?
[6] http://twitter.com/kidehen/status/568461114129956864 -- What is a 
Predicate?
[7] http://www.slideshare.net/kidehen/understanding-29894555/55 -- RDF 
is a Language .


Kingsley


Cheers,


Bo


[1] http://www.w3.org/2014/data-shapes/wiki/Main_Page
[2] http://en.wikipedia.org/wiki/Graph_database
[3] https://github.com/dswarm/dswarm-documentation/wiki/Graph-Data-Model
[4] 
http://www.orientechnologies.com/docs/2.0/orientdb-studio.wiki/Graph-Editor.html







--
Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software






Re: Microsoft Access for RDF?

2015-02-19 Thread Kingsley Idehen

On 2/19/15 4:52 AM, Michael Brunnbauer wrote:

Hello Paul,

an interesting aspect of such a system is the ordering of triples - even
if you restrict editing to one subject. Either the order is predefined and the
user will have to search for his new triple after doing an insert or the user
determines the position of his new triple.

In the latter case, the app developer will want to use something like
reification - at least internally. This is the point when the app developer
and the Semantic Web expert start to disagree ;-)


Not really, in regards to Semantic Web expert starting to disagree per 
se. You can order by Predicate or use Reification.


When designing our RDF Editor, we took the route of breaking things down 
as follows:


Book (Named Graph Collection e.g. in a Quad Store or service that 
understands LDP Containers etc..)  -- (contains) -- Pages (Named 
Graphs) -- Paragraphs (RDF Sentence/Statement Collections).


The Sentence/Statement Collections are the key item, you are honing 
into, and yes, it boils down to:


1. Grouping sentences/statements by predicate per named graph to create 
a paragraph
2. Grouping sentences by way of reification where each sentence is 
identified and described per named graph.


Rather than pit one approach against the other, we simply adopted both, 
as options.
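Option 1 above (grouping sentences by predicate per named graph) can be sketched as follows; the quad layout and sample data are illustrative, not the editor's actual implementation:

```python
# Sketch: grouping a named graph's statements by predicate to form
# "paragraphs". Quads are (graph, subject, predicate, object) tuples.

from collections import defaultdict

def paragraphs(quads, graph):
    groups = defaultdict(list)
    for (g, s, p, o) in quads:
        if g == graph:
            groups[p].append((s, o))
    return dict(groups)

quads = [
    ("page1", "doc", "dc:title",   "My Page"),
    ("page1", "doc", "dc:creator", "alice"),
    ("page1", "doc", "dc:creator", "bob"),
    ("page2", "doc", "dc:title",   "Another Page"),
]
para = paragraphs(quads, "page1")
print(sorted(para))        # ['dc:creator', 'dc:title']
print(para["dc:creator"])  # [('doc', 'alice'), ('doc', 'bob')]
```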


Anyway, you raise a very important point that's generally overlooked. 
Ignoring this fundamental point is a shortcut to hell for any editor 
that's to be used in a multi-user setup, as you clearly understand :)



Kingsley



Maybe they can compromise on a system with a separate named graph per triple
(BTW what is the status of blank nodes shared between named graphs?).

Regards,

Michael Brunnbauer

On Wed, Feb 18, 2015 at 03:08:33PM -0500, Paul Houle wrote:

I am looking at some cases where I have databases that are similar to
Dbpedia and Freebase in character,  sometimes that big (ok,  those
particular databases),   sometimes smaller.  Right now there are no blank
nodes,  perhaps there are things like the compound value types from
Freebase which are sorta like blank nodes but they have names,

Sometimes I want to manually edit a few records.  Perhaps I want to delete
a triple or add a few triples (possibly introducing a new subject.)

It seems to me there could be some kind of system which points at a SPARQL
protocol endpoint (so I can keep my data in my favorite triple store) and
given an RDFS or OWL schema,  automatically generates the forms so I can
easily edit the data.

Is there something out there?

--
Paul Houle
Expert on Freebase, DBpedia, Hadoop and RDF
(607) 539 6254  paul.houle on Skype   ontolo...@gmail.com
http://legalentityidentifier.info/lei/lookup



--
Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software






Re: Microsoft Access for RDF?

2015-02-19 Thread Kingsley Idehen

On 2/19/15 10:32 AM, Michael Brunnbauer wrote:

Hello Paul,

let me put this into two simple statements:

1) There is no canonical ordering of triples

2) A good triple editor should reflect this by letting the user determine the 
order

Regards,

Michael Brunnbauer


Yes!

Kingsley


On Thu, Feb 19, 2015 at 03:50:33PM +0100, Michael Brunnbauer wrote:

Hello Paul,

I am not so sure if this is good enough. If you add something to the end of a
list in a UI, you normally expect it to stay there. If you accept that it
will be put in its proper position later, you may - as user - still have
trouble figuring out where that position is (even with the heuristics you gave).

The problem repeats with the triple object if the properties have been ordered.
As user, you might feel even more compelled to introduce a deviant ordering on
this level.

Regards,

Michael Brunnbauer

On Thu, Feb 19, 2015 at 09:07:37AM -0500, Paul Houle wrote:

There are quite a few simple heuristics that will give good enough
results,  consider for instance:

(1) order predicates by alphabetical order (by rdfs:label or by localname
or the whole URL)
(2) order predicates by some numerical property given by a custom predicate
in the schema
(3) order predicates by the type of the domain alphabetically, and then
order by the name of the predicates
(4) work out the partial ordering of types by inheritance so Person winds
up at the top and Actor shows up below that

Freebase does something like (4) and that is good enough.
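Heuristics (1) and (2) compose naturally: use an explicit numeric rank from the schema when one exists, and fall back to alphabetical order by rdfs:label. The rank table and labels below are invented for illustration:

```python
# Sketch: ordering predicates by an optional schema-supplied rank,
# falling back to alphabetical order by label. Sample data is invented.

def order_predicates(predicates, labels, ranks):
    return sorted(predicates,
                  key=lambda p: (ranks.get(p, float("inf")),
                                 labels.get(p, p)))

labels = {"foaf:name": "name", "foaf:mbox": "mailbox", "foaf:knows": "knows"}
ranks  = {"foaf:name": 0}   # the schema pins "name" to the top
preds  = ["foaf:knows", "foaf:mbox", "foaf:name"]
print(order_predicates(preds, labels, ranks))
# ['foaf:name', 'foaf:knows', 'foaf:mbox']
```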

On Thu, Feb 19, 2015 at 8:01 AM, Kingsley Idehen kide...@openlinksw.com
wrote:


On 2/19/15 4:52 AM, Michael Brunnbauer wrote:


Hello Paul,

an interesting aspect of such a system is the ordering of triples - even
if you restrict editing to one subject. Either the order is predefined
and the
user will have to search for his new triple after doing an insert or the
user
determines the position of his new triple.

In the latter case, the app developer will want to use something like
reification - at least internally. This is the point when the app
developer
and the Semantic Web expert start to disagree ;-)


Not really, in regards to Semantic Web expert starting to disagree per
se. You can order by Predicate or use Reification.

When designing our RDF Editor, we took the route of breaking things down
as follows:

Book (Named Graph Collection e.g. in a Quad Store or service that
understands LDP Containers etc..)  -- (contains) -- Pages (Named Graphs)
-- Paragraphs (RDF Sentence/Statement Collections).

The Sentence/Statement Collections are the key item, you are honing into,
and yes, it boils down to:

1. Grouping sentences/statements by predicate per named graph to create a
paragraph
2. Grouping sentences by way of reification where each sentence is
identified and described per named graph.

Rather than pit one approach against the other, we simply adopted both, as
options.

Anyway, you raise a very important point that's generally overlooked.
Ignoring this fundamental point is a shortcut to hell for any editor that's
to be used in a multi-user setup, as you clearly understand :)


Kingsley



Maybe they can compromise on a system with a separate named graph per
triple
(BTW what is the status of blank nodes shared between named graphs?).

Regards,

Michael Brunnbauer

On Wed, Feb 18, 2015 at 03:08:33PM -0500, Paul Houle wrote:


I am looking at some cases where I have databases that are similar to
Dbpedia and Freebase in character,  sometimes that big (ok,  those
particular databases),   sometimes smaller.  Right now there are no blank
nodes,  perhaps there are things like the compound value types from
Freebase which are sorta like blank nodes but they have names,

Sometimes I want to manually edit a few records.  Perhaps I want to
delete
a triple or add a few triples (possibly introducing a new subject.)

It seems to me there could be some kind of system which points at a
SPARQL
protocol endpoint (so I can keep my data in my favorite triple store) and
given an RDFS or OWL schema,  automatically generates the forms so I can
easily edit the data.

Is there something out there?

--
Paul Houle
Expert on Freebase, DBpedia, Hadoop and RDF
(607) 539 6254  paul.houle on Skype   ontolo...@gmail.com
http://legalentityidentifier.info/lei/lookup


--
Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software





--
Paul Houle
Expert on Freebase, DBpedia, Hadoop and RDF
(607) 539 6254  paul.houle on Skype   ontolo...@gmail.com
http://legalentityidentifier.info/lei/lookup


Re: Microsoft Access for RDF?

2015-02-19 Thread Kingsley Idehen

On 2/19/15 4:08 PM, Stian Soiland-Reyes wrote:


No, this is dangerous and is hiding the truth.



What?

Take the red pill and admit to the user that this particular property 
is unordered, for instance by always listing the values sorted 
(consistency is still king).




A Predicate is a sentence forming Relation. Thus, you can effectively 
group RDF relations by Predicate scoped to a Named Graph IRI, if you 
choose. Naturally, you can also apply other forms of ordering, en route 
to effect UI/UX.



Then make it easy to do lists and collections.



You can describe collections using RDF statements, I don't have any idea 
how what I am talking about implies collection exclusion.


Don't let the user encode information he considers important in a way 
that is not preserved semantically.




??

If you are limited by typing of rdf:List elements, then look at  owl 
approaches for collections that allow you to order items and use owl 
reasoning for list membership - see 
https://code.google.com/p/collections-ontology/




Why do you think we've built an RDF editor without factoring in OWL?

Ordering of which properties to show in which order on a form is 
another matter, that is a presentational view over the model.




Sorta.

The :View could simply be a choice of an owl:Class and a partial order 
of owl:Properties instances of that class should have.




There is no perfect view. You simply need a view that enables:

1. coherent data curation
2. provides the user with choices in regards to how items are categorized.


I think we are better off waiting until we release our RDF Editor. We 
actually built this at the request of a very large customer. This isn't 
a speculative endeavor. It's actually being used by said organization as 
I type.


We've opted (we didn't have to) to make it Open Source too.

--
Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software






Re: Microsoft Access for RDF?

2015-02-18 Thread Kingsley Idehen

On 2/18/15 3:08 PM, Paul Houle wrote:
I am looking at some cases where I have databases that are similar to 
Dbpedia and Freebase in character,  sometimes that big (ok,  those 
particular databases),   sometimes smaller.  Right now there are no 
blank nodes,  perhaps there are things like the compound value types 
from Freebase which are sorta like blank nodes but they have names,


Sometimes I want to manually edit a few records.  Perhaps I want to 
delete a triple or add a few triples (possibly introducing a new subject.)


It seems to me there could be some kind of system which points at a 
SPARQL protocol endpoint (so I can keep my data in my favorite triple 
store) and given an RDFS or OWL schema,  automatically generates the 
forms so I can easily edit the data.


Is there something out there?


Sorta.

We will soon release (this month or early March at the latest) a 
Read-Write Editor for RDF, in Open Source form, that addresses some of 
these features. Naturally, it supports WebID, LDP, WebACL etc..



--
Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software






Re: Microsoft Access for RDF?

2015-02-18 Thread Kingsley Idehen

On 2/18/15 5:07 PM, Martynas Jusevičius wrote:

Why do you assume I assume something?


Because he spoke about a Microsoft-Access-like solution. If you've used 
that tool you would know that massive dataset edits, as you indicated, 
can't be the focal point of his quest, hence my comment.




I simply stated what should be considered. When Paul replies, we'll
know what kind of dataset it is, and whether these issues apply.


See my comment above.



He mentions DBPedia as an example, which returns around 20 named
graphs for triples. How are those supposed to be edited?


He can (if he needs to) scope his activity to triples associated with a 
specific named graph.




select distinct ?g where { graph ?g { ?s ?p ?o } }

count() didn't work for me BTW.


Yes, that's by design [1].


[1] http://lists.w3.org/Archives/Public/public-lod/2011Aug/0028.html -- 
DBpedia fair use policy .


Kingsley



On Wed, Feb 18, 2015 at 10:51 PM, Kingsley  Idehen
kide...@openlinksw.com wrote:

On 2/18/15 4:01 PM, Martynas Jusevičius wrote:

we have the editing interface, but ontologies are not of much help
here. The question is, how and where to draw the boundary of the
description that you want to edit, because millions of triples on one
page will not work.


Why do you assume that the target of an edit is a massive dataset presented
in a single page?



--
Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software







--
Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software






Re: How do you explore a SPARQL Endpoint?

2015-01-23 Thread Kingsley Idehen

On 1/23/15 4:37 AM, Pavel Klinov wrote:

Alright, so this isn't an answer and I might be saying something
totally silly (since I'm not a Linked Data person, really).

If I re-phrase this question as the following: "how do I extract a
schema from a SPARQL endpoint?", then it seems to pop up quite often
(see, e.g., [1]). I understand that the original question is a bit
more general but it's fair to say that knowing the schema is a huge
help for writing meaningful queries.

As an outsider, I'm quite surprised that there's still no commonly
accepted (I'm avoiding "standard" here) way of doing this. People
either hope that something like VoID or LOV vocabularies are being
used, or use 3-party tools, or write all sorts of ad hoc SPARQL
queries themselves, looking for types, object properties,
domains/ranges etc-etc. There are also papers written on this subject.

At the same time, the database engines which host datasets often (not
always) manage the schema separately from the data. There're good
reasons for that. One reason, for example, is to be able to support
basic reasoning over the data, or integrity validation. Just because
in RDF the schema language and the data language are the same, so
schema and data triples can be interleaved, it need not (and often is
not) be managed that way.

Yet, there's no standard way of requesting the schema from the
endpoint, and I don't quite understand why. There's the SPARQL 1.1
Service Description, which could, in theory, cover it, but it doesn't.
Servicing such schema extraction requests doesn't have to be mandatory
so the endpoints which don't have their schemas right there don't have
to sift through the data. Also, schemas are typically quite small.

I guess there's some problem with this which I'm missing...


To cut a long story short, you are seeking an experience from one realm 
(SQL RDBMS Relational Tables) in another (RDF Relational 
Property/Predicate Graphs).


I'll try to break this issue down a little, as this problem has 
everything to do with poor and deteriorating narratives in regards to 
the nature of:


1. RDBMS Applications
2. Database Documents
3. Relations.

First off, an RDBMS [1] and a Database [2] are two distinct things. 
Contrary to the marketing-driven misinformation from SQL RDBMS [3] 
vendors (spanning 20 years), a Database is a Document. It isn't a 
conflation of RDBMS application (which provides interaction services) 
and Database Documents.


A Database is a document comprised of Data.

Data is basically sets of tuples (values) representing entity 
relationships that are grouped by relationship types (a/k/a relations).


Relations can be represented as Records in a Table which is what you 
have in a SQL RDBMS. They can also be represented as Property/Predicate 
graphs.


A predicate is a sentence-forming-relation [4]. This is basically what 
RDF is all about, hence the special role of rdf:Property [5] in this 
particular Language (system of signs, syntax, and 
entity-relationship-role-semantics -- for encoding and decoding 
information [data in some context] ).


So back to your fundamental quest, you want to interrogate an RDF RDBMS 
via SPARQL queries. That quest boils down to the following:


1. systematically determining the nature of entity relationships 
represented by RDF relations -- managed by a given RDBMS instance
2. using information obtained from step 1 to find instances of items of 
interest (as already outlined by Bernard Vatant's response [6] ).
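For step 1, a common starting point is a SPARQL query that lists the classes (relation types) an endpoint manages, sent via the protocol's `query` parameter. A sketch -- the endpoint URL is a placeholder, and the result-format parameter name varies by implementation:

```python
# Sketch: building a schema-exploration request for a SPARQL endpoint.
# The endpoint URL is a placeholder; fetch the result against a real
# endpoint with urllib.request.urlopen(url) or similar.

from urllib.parse import urlencode

SCHEMA_QUERY = """
SELECT DISTINCT ?class (COUNT(?s) AS ?instances)
WHERE { ?s a ?class }
GROUP BY ?class
ORDER BY DESC(?instances)
"""

def sparql_url(endpoint, query):
    params = {"query": query, "format": "application/sparql-results+json"}
    return endpoint + "?" + urlencode(params)

url = sparql_url("https://example.org/sparql", SCHEMA_QUERY)
print(url.startswith("https://example.org/sparql?query="))  # True
```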


I hope this helps.  In my strong personal opinion, SQL RDBMS vendor 
marketing has actually done the world a disservice over the years. 
Luckily, the emergence of the World Wide Web -- and the Linked Open Data 
cloud it's facilitated -- lays the foundation for fixing the 
aforementioned disservice. Naturally, we'll have to live with marketing 
gobbledegook like "Big Data" and other silliness for a while, but the 
truth (in the form of facts) will eventually bubble to the top!



Links:

[1] 
http://www.openlinksw.com/data/turtle/general/GlossaryOfTerms.ttl#RDBMS 
-- RDBMS
[2] 
http://www.openlinksw.com/data/turtle/general/GlossaryOfTerms.ttl#Database 
-- Database
[3] 
http://www.openlinksw.com/data/turtle/general/GlossaryOfTerms.ttl#SQLDBMS -- 
SQL RDBMS
[4] 
http://54.183.42.206:8080/sigma/Browse.jsp?lang=EnglishLanguage&flang=SUO-KIF&kb=SUMO&term=Predicate 
-- Predicate

[5] http://linkeddata.uriburner.com/c/8CCIWQ -- RDF Property
[6] https://lists.w3.org/Archives/Public/public-lod/2015Jan/0105.html -- 
Bernard Vatant response in prior thread
[7] 
http://kidehen.blogspot.com/2015/01/loosely-coupling-database-document.html 
-- Part 1 of a post I am working on about Data that starts with a CSV 
document.


--
Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software

Re: linked open data and PDF

2015-01-20 Thread Kingsley Idehen

On 1/20/15 11:22 AM, Paul Houle wrote:
I don't like the idea of unchecked expansion of formats because it 
puts an open-ended burden on both Adobe and third party software 
developers.  If there are specific problems with the current XMP 
format it makes sense to transition to XMP2 but the more variation in 
formats the worse it will be for the ecosystem.


RDF/XML -- or any other XML-based notation -- has next to no traction in 
regards to Linked Open Data, circa 2015.


I am not asking Adobe to dump what they have, since that in and of itself 
puts a massive burden on them. I am suggesting a middle-ground where 
they simply focus on transformation between other RDF notations and the 
one they currently support. Net effect, we can add metadata to PDF using 
a variety of notations.


As for the XMP vocabulary of terms,  Adobe doesn't even need to do 
anything about that if it's published using RDF/XML. There are many 
transformers in place for them to leverage en route to making said 
vocabulary broadly readable.


Kingsley


On Mon, Jan 19, 2015 at 5:20 PM, Kingsley Idehen 
kide...@openlinksw.com wrote:


On 1/19/15 2:36 PM, Larry Masinter wrote:

I just joined this list. I’m looking to help improve the story
for Linked Open Data in PDF, to lift PDF (and other formats)
from one-star to five, perhaps using XMP. I’ve found a few
hints in the mailing list archive here.
http://lists.w3.org/Archives/Public/public-lod/2014Oct/0169.html
but I’m still looking. Any clues, problem statements, sample
sites?

Larry
--
http://larry.masinter.net


Larry,

Rather than only supporting an XML based notation (i.e., RDF/XML)
for representing RDF triples, it would be nice if one could attain
the same goal (embedded metadata in PDFs) using any RDF notation
e.g., N-Triples, TURTLE, JSON-LD etc..

The above would also include creating an XMP ontology that's
represented in at least TURTLE and JSON-LD notations -- which do
have strong usage across different RDF user  developer profiles.

Naturally, Adobe apps that already process XMP simply need to
leverage a transformation processor (built in-house or acquired)
that converts metadata represented in TURTLE and JSON-LD to
RDF/XML (which is what they currently support).

-- 
Regards,


Kingsley Idehen
Founder & CEO
OpenLink Software





--
Paul Houle
Expert on Freebase, DBpedia, Hadoop and RDF
(607) 539 6254 | paul.houle on Skype | ontolo...@gmail.com

http://legalentityidentifier.info/lei/lookup








Re: SPARQLES usage

2015-01-07 Thread Kingsley Idehen

On 1/7/15 11:31 AM, Carlos Buil Aranda wrote:

Dear Semantic Web community,

the main developers of SPARQLES, a tool for monitoring the SPARQL
Endpoint Status (http://sparqles.okfn.org/) are gathering information
about how SPARQLES is being used by the community to see what
improvements could be made.

If you have been using it in any way, can you please send us an email
describing how you used it?

thanks!

Aidan, Pierre-Yves, Jürgen and Carlos


How do SPARQL endpoints get added to the monitor list?

Isn't it possible to provide the monitoring data in RDF form?
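
For instance, a hedged sketch of what such monitoring data could look like
in TURTLE (the `mon:` vocabulary terms below are invented for illustration;
SPARQLES does not necessarily model things this way):

```turtle
@prefix mon: <http://example.org/monitoring#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

# Hypothetical availability record for one monitored endpoint
<http://dbpedia.org/sparql>
    a mon:SPARQLEndpoint ;
    mon:lastChecked        "2015-01-07T12:00:00Z"^^xsd:dateTime ;
    mon:available          true ;
    mon:meanResponseTimeMs 240 .
```

Publishing reports in this form would let the monitoring data itself be
queried via SPARQL, closing the loop.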








Re: Is SPIN still a valid direction?

2014-12-26 Thread Kingsley Idehen

On 12/25/14 4:44 AM, lo...@pa.icar.cnr.it wrote:

Great news, Kingsley! Are you allowed to share when this rule update will
be public?


At this time, official release (open or closed source) is unpredictable; 
this is a major undertaking that affects the core Virtuoso engine.



I'm using Virtuoso in my project and it would be good to have SPIN
support there instead of using Jena as a wrapper...


I will be in a better position to predict official release by mid January.


Seasons Greetings!

Kingsley


Merry xmas to the list!

On 12/24/14 5:07 PM, Paul Houle wrote:

Jerven,  I'd like to see the implementation list at

http://spinrdf.org/

updated to reflect the Allegrograph implementation and any others you
can find.

We are supporters, for the record. Implementation coming as part of a
major rules related enhancement to Virtuoso.

Seasons Greetings to All!

















Re: Is SPIN still a valid direction?

2014-12-24 Thread Kingsley Idehen

On 12/24/14 5:07 PM, Paul Houle wrote:

Jerven,  I'd like to see the implementation list at

http://spinrdf.org/

updated to reflect the Allegrograph implementation and any others you 
can find.


We are supporters, for the record. Implementation coming as part of a 
major rules related enhancement to Virtuoso.


Seasons Greetings to All!







Re: [Ann] WebVOWL 0.3 - Visualize your ontology on the web

2014-12-23 Thread Kingsley Idehen

On 12/23/14 11:37 AM, Steffen Lohmann wrote:

Kingsley, Timothy, Sarven, Melvin, Ali,

On 22.12.2014 16:20, Kingsley Idehen wrote:
I just want the URI of the current node in the graph to be a live 
link i.e., an exit point from the visualization tool to the actual 
source. You offer something close to this in the side panel, but it's 
scoped to the entire ontology rather than a selected term.


I am suggesting you make the selected node text, e.g., "Tagging", an 
HTTP URI (hyperlink) via <a href="http://purl.org/muto/core#Tagging">Tagging</a>.


[1] http://susepaste.org/36507989 -- screenshot showing what's 
currently offered 


The URI and link are already there! The labels in the Selection 
Details (e.g., "Tagging") are hyperlinks that you can click on to go 
to the actual URIs. As it does not seem to be that clear (and the 
hyperlink URI may not be properly shown in all web browsers), we 
have already discussed adding further tooltips with the URIs in the GUI.


Your spec says:

Labels

If elements do not have an rdfs:label, it is recommended to take the 
last part of the URI as label, i.e. the part that follows the last slash 
(/) or hash (#) character. Labels may be abbreviated if they do not fit 
in the available space (e.g. Winnie-the-Pooh → Winnie…). The full 
label should be shown on demand in these cases (e.g. in a tooltip or 
sidebar).



I am suggesting:

Labels

If elements do not have an rdfs:label, it is recommended to take the 
last part of the URI as label, i.e. the part that follows the last slash 
(/) or hash (#) character. Labels may be abbreviated if they do not fit 
in the available space (e.g. Winnie-the-Pooh → Winnie…). The full 
label should be shown on demand in these cases (e.g. in a tooltip or 
sidebar) and the label itself hyperlinked when the item in question is 
*named* using an HTTP URI.


So instead of "Winnie" or "Winnie-the-Pooh" or the rdfs:label value, it will 
be: <a href="{actual-entity-http-uri}">{"Winnie" or "Winnie-the-Pooh" or 
the rdfs:label value}</a>.


Then you won't have to explain any of this, as users will simply have 
two options:


1. Click as per usual to explore the graph visual
2. CTRL+Mouse+Click to grab item HTTP URI en route to further Linked 
Data exploration -- using any HTTP user agent.










Re: [Ann] WebVOWL 0.3 - Visualize your ontology on the web

2014-12-22 Thread Kingsley Idehen

On 12/22/14 7:04 AM, Steffen Lohmann wrote:

Kingsley, Alfredo, Timothy, Melvin,

thank you for your comments and ideas.

On 19.12.2014 17:57, Kingsley Idehen wrote:
do you have a permalink feature that makes any visualization doc 
shareable via HTTP URLs ? 


Yes, we have (as already indicated by Melvin). For instance, you can 
use 
http://vowl.visualdataweb.org/webvowl/#iri=http://xmlns.com/foaf/0.1/ 
to visualize the FOAF vocabulary - see 
http://vowl.visualdataweb.org/webvowl.html



On 21.12.2014 04:11, Melvin Carvalho wrote:
I would normally expect a ? in the query string, however, rather than a 
#, which I presume is the 1337 way to hide the ontology from the server.


Good point. We will discuss it and maybe change to the query identifier (?) 
instead of the fragment identifier (#) in the next version of WebVOWL.



On 19.12.2014 17:57, Kingsley Idehen wrote:
what about connecting your custom ontology feature to prefix.cc [1] 
and LOV [2] ? 


That could indeed be nice and is already possible due to the 
customizable HTTP requests (see above).



On 19.12.2014 17:57, Kingsley Idehen wrote:
Would you be able to make HTTP URIs that identify terms defined by a 
selected ontology live? For instance, if I am exploring FOAF, and I 
click on the foaf:Agent node, I should have an HTTP URI anchoring the 
text Agent [FOAF] which then makes that node a live LOD Cloud 
conduit. I notice you provide this capability via selection 
details, but I don't think most will realize its existence. 


Adding selected classes to the URL is a nice idea that we will discuss 
together with the above issue about the fragment identifier. If I got 
you right, you would like to have something like
http://vowl.visualdataweb.org/webvowl/index.html?iri=http://purl.org/muto/core#Tagging 

which would then visualize the MUTO ontology and automatically select 
(and highlight) the Tagging class.

Is that correct?


I just want the URI of the current node in the graph to be a live link 
i.e., an exit point from the visualization tool to the actual source. 
You offer something close to this in the side panel, but it's scoped to 
the entire ontology rather than a selected term.


I am suggesting you make the selected node text, e.g., "Tagging", an HTTP 
URI (hyperlink) via <a href="http://purl.org/muto/core#Tagging">Tagging</a>.


[1] http://susepaste.org/36507989 -- screenshot showing what's currently 
offered









Re: [Ann] WebVOWL 0.3 - Visualize your ontology on the web

2014-12-21 Thread Kingsley Idehen

On 12/20/14 10:11 PM, Melvin Carvalho wrote:



On 19 December 2014 at 17:57, Kingsley Idehen kide...@openlinksw.com wrote:


On 12/19/14 10:49 AM, Steffen Lohmann wrote:

Hi all,

we are glad to announce the release of WebVOWL 0.3, which
integrates our OWL2VOWL converter now. WebVOWL works in modern
web browsers without any installation so that ontologies can
be instantly visualized. Check it out at:
http://vowl.visualdataweb.org/webvowl.html

To the best of our knowledge, WebVOWL is the first
comprehensive ontology visualization completely based on open
web standards (HTML, SVG, CSS, JavaScript). It implements VOWL
2, which has been designed in a user-oriented process and is
clearly specified at http://vowl.visualdataweb.org (incl.
references to scientific papers).

Please note that:
- WebVOWL is a tool for ontology visualization, not for
ontology modeling.
- VOWL considers many language constructs of OWL but not all
of them yet.
- VOWL focuses on the visualization of the TBox of small to
medium-size ontologies but does not sufficiently support the
visualization of very large ontologies and detailed ABox
information for the time being.
- WebVOWL 0.3 implements the VOWL 2 specification nearly
completely, but the current version of the OWL2VOWL converter
does not.
These issues are subject to future work.

Have fun with it!

On behalf of the VOWL team,
Steffen


Great job! Clearly when it rains, it pours!

Lots of great Linked Data visualizations are now popping up
everywhere, just what we all needed.

Question:

Would you be able to make HTTP URIs that identify terms defined by
a selected ontology live? For instance, if I am exploring FOAF,
and I click on the foaf:Agent node, I should have an HTTP URI
anchoring the text Agent [FOAF] which then makes that node a
live LOD Cloud conduit. I notice you provide this capability via
selection details, but I don't think most will realize its
existence.

In addition to the suggestion above, do you have a permalink
feature that makes any visualization doc shareable via HTTP URLs ?


Great work!

Docs are shareable

http://vowl.visualdataweb.org/webvowl/index.html#iri=http://www.w3.org/1999/02/22-rdf-syntax-ns

I would normally expect a ? in the query string, however, rather than a 
#, which I presume is the 1337 way to hide the ontology from the server.


Yes! I would expect: 
http://vowl.visualdataweb.org/webvowl/index.html?iri=http://www.w3.org/1999/02/22-rdf-syntax-ns 
rather than 
http://vowl.visualdataweb.org/webvowl/index.html#iri=http://www.w3.org/1999/02/22-rdf-syntax-ns 
.









Re: [Ann] WebVOWL 0.3 - Visualize your ontology on the web

2014-12-20 Thread Kingsley Idehen

On 12/20/14 9:43 AM, Giovanni Tummarello wrote:



Great job! Clearly when it rains, it pours!

Lots of great Linked Data visualizations are now popping up
everywhere, just what we all needed.


hi kingsley
which other are you referring to? (i likely have missed them)
thanks
Gio

Gio,

See: https://delicious.com/kidehen/cool_linkeddata_ui

Remember, I am of the opinion that, like all things, visualization is a 
horses-for-courses affair, rather than there being one visualization that 
trumps all others. Basically, like any form of beauty, data 
visualization coolness lies in the eye of the beholder :)








Re: [Ann] WebVOWL 0.3 - Visualize your ontology on the web

2014-12-19 Thread Kingsley Idehen

On 12/19/14 10:49 AM, Steffen Lohmann wrote:

Hi all,

we are glad to announce the release of WebVOWL 0.3, which integrates 
our OWL2VOWL converter now. WebVOWL works in modern web browsers 
without any installation so that ontologies can be instantly 
visualized. Check it out at: http://vowl.visualdataweb.org/webvowl.html


To the best of our knowledge, WebVOWL is the first comprehensive 
ontology visualization completely based on open web standards (HTML, 
SVG, CSS, JavaScript). It implements VOWL 2, which has been designed 
in a user-oriented process and is clearly specified at 
http://vowl.visualdataweb.org (incl. references to scientific papers).


Please note that:
- WebVOWL is a tool for ontology visualization, not for ontology 
modeling.

- VOWL considers many language constructs of OWL but not all of them yet.
- VOWL focuses on the visualization of the TBox of small to 
medium-size ontologies but does not sufficiently support the 
visualization of very large ontologies and detailed ABox information 
for the time being.
- WebVOWL 0.3 implements the VOWL 2 specification nearly completely, 
but the current version of the OWL2VOWL converter does not.

These issues are subject to future work.

Have fun with it!

On behalf of the VOWL team,
Steffen


Great job! Clearly when it rains, it pours!

Lots of great Linked Data visualizations are now popping up everywhere, 
just what we all needed.


Question:

Would you be able to make HTTP URIs that identify terms defined by a 
selected ontology live? For instance, if I am exploring FOAF, and I 
click on the foaf:Agent node, I should have an HTTP URI anchoring the 
text Agent [FOAF] which then makes that node a live LOD Cloud conduit. 
I notice you provide this capability via selection details, but I 
don't think most will realize its existence.


In addition to the suggestion above, do you have a permalink feature 
that makes any visualization doc shareable via HTTP URLs ?



Finally, what about connecting your custom ontology feature to 
prefix.cc [1] and LOV [2] ?


Links:

[1] http://prefix.cc
[2] http://lov.okfn.org/dataset/lov/







Re: Organisation Linked Data dataset

2014-12-19 Thread Kingsley Idehen

On 12/18/14 10:29 AM, Bianca Pereira wrote:

Hello,

 I was looking for a Linked Data dataset and I always find it very 
challenging to find a dataset related to a given subject. I am 
specifically looking for a dataset (not wikipedia-based, such as 
DBPedia, Yago and so on) that contains information about organizations.


  I found there was a Linked CrunchBase [1] at some point but it does 
not seem to work anymore. Does anyone know any Linked Data dataset 
about organizations?


  Best Regards,
  Bianca

[1] http://cbasewrap.ontologycentral.com/


Bianca,

Here's a little Linked Open Data follow-your-nose sequence that exposes 
data sources that could be relevant to your quest:


1. https://legalentityidentifier.info/lei/get/787RXPR0UX0O0XUXPZ81 -- 
https://legalentityidentifier.info/lei/lookup/
2. 
http://lod.openlinksw.com/describe/?url=http%3A%2F%2Frdf.basekb.com%2Fns%2Fm.0lwkh&graph=http%3A%2F%2Fbasekb.com%2Fpro%2F 
-- LOD Cloud Cache
3. 
http://lod.openlinksw.com/describe/?url=http%3A%2F%2Frdf.basekb.com%2Fns%2Fm.062_7qd&graph=http%3A%2F%2Fbasekb.com%2Fpro%2F 
-- instances of Company (from :baseKB)

4. http://lod.openlinksw.com/c/IMQDH3A -- by industry.

There's a lot more in the LOD Cloud cache, assuming this piques your 
interest :)
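
As a side note, if you end up publishing or integrating such data
yourself, the W3C Organization Ontology is a natural fit. A minimal
sketch (the company URI and labels below are invented for illustration):

```turtle
@prefix org:  <http://www.w3.org/ns/org#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# Hypothetical organization description
<http://example.com/company/acme#this>
    a org:Organization ;
    rdfs:label "ACME Corp." ;
    org:hasUnit <http://example.com/company/acme/research#this> .
```

Using a shared vocabulary like this is what makes cross-dataset linking
of organizations tractable in the first place.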









Re: ANN: datos.bne.es, the new Linked Data service from the National Library of Spain

2014-12-03 Thread Kingsley Idehen

On 12/3/14 7:13 AM, Daniel Vila Suero wrote:


My understanding is that an agent should only use the Location to 
retrieve more data (a representation of a resource). Given a canonical 
URI to name the entity (http://datos.bne.es/resource/XX154), an 
agent can try to dereference it, asking for a concrete representation 
that he understands (e.g., Turtle), and what it gets back is a LOCATION 
with a Turtle representation of the entity in question, which is 
described in terms of canonical URIs (e.g., 
http://datos.bne.es/resource/XX154 a frbr:Person ; rdfs:label 
"Richard"). The agent can then follow its nose using these URIs to 
retrieve more data (in the RDF representations we only use canonical 
URIs, not Locations).


We have tried to follow existing best practices (ISA guidelines [1] 
mention for example the pattern /id/ -- /doc/), but we are open to 
suggestions from the community on how to improve content-negotiation 
mechanisms because our goal is to make the data useful for application 
developers.



Here is a Vapour Report [1]. Hopefully, it sheds light on this matter:

[1] http://bit.ly/spanish-national-library-sample-entity-uri-report -- 
Vapour Report that shows what's happening (conclusion: nothing wrong).








Re: [ontolog-forum] Mike Dean

2014-11-20 Thread Kingsley Idehen

On 11/19/14 10:40 PM, Peter Yim wrote:

Dear Nancy,


Very sorry to hear this. I want to extend my most sincere condolences
to you, Jason and Noah.

Mike has been an inspiration to me, and a role model to a lot of us in
the ontology and semantic technology community.

May he rest in peace!


Very sincerely. =ppy

Peter P. Yim
--


Dear Nancy,

Myself and countless others (note the additional lists in the cc.) are 
profoundly saddened by Mike's departure. For many years, Mike has been a 
steady source of knowledge and inspiration to many of us.


I (and many others) would like to extend our deep condolences to you and 
your family, during this difficult time.





On Wed, Nov 19, 2014 at 7:26 PM, Nancy Dean nancy.d...@gmail.com wrote:

Mike asked me to use this email list to spread this announcement.  I know
there were some people missing from his original message, but I can not find
an updated list.  Please feel free to forward this message if you know of
someone omitted.

I am saddened to report that Mike passed away peacefully this morning,
ending his 10-year cancer battle.  His sons, Jason and Noah, and I thank you
for your friendship and kindness towards Mike.  He enjoyed his work
immensely, for the technical content, yes, but mostly for the wonderful
people he was privileged to be around.

The below link details services to be held in Ann Arbor on Friday.

http://www.niefuneralhomes.com/obituaries/Michael-Dean-2/

We are currently finalizing arrangements for visitation and then services
and burial in Elkton, Virginia (and expect that to be Monday and Tuesday,
respectively).  He will be at the Elkton Kyger funeral home and further
details should be available shortly on their website
(http://www.kygers.com/).

Nancy



On Wed, Oct 8, 2014 at 5:11 PM, Mike Dean md...@bbn.com wrote:


Almost ten years after being diagnosed with metastatic colorectal cancer,
today I started in-home hospice care.  This doesn't mean that death is
imminent.  I've had a great life, and thank you for being part of it.  I
plan to continue to at least read email as long as I am able.  Please feel
free to forward this message to other friends I may have inadvertently
missed.

 Mike
  












Re: Debates of the European Parliament as LOD

2014-11-10 Thread Kingsley Idehen

On 11/5/14 6:19 AM, Hollink, L. wrote:

- Dataset announcement -

We are happy to announce the release of a new linked dataset: the 
proceedings of the plenary debates of the European Parliament as 
Linked Open Data.


The dataset covers all plenary debates held in the European Parliament 
(EP) between July 1999 and January 2014, and biographical information 
about the members of parliament. It includes: the monthly sessions of 
the EP, the agenda of debates, the spoken words and translations 
thereof in 21 languages; the speakers, their role and the country they 
represent; membership of national parties, European parties and 
commissions. The data is available through a SPARQL endpoint; see 
http://linkedpolitics.ops.few.vu.nl/ for more details.


Please note that this is a first version; we hope you will try it out 
and send us your feedback!


Good Job!

I was able to build a SPARQL-FED query [1], and then use the HTTP URIs 
exposed in the query results page for follow-my-nose-based exploration 
and drill-down.



[1] http://bit.ly/sparql-fed-against-eu-parliament-debates-endpoint .







Re: How to model valid time of resource properties?

2014-10-17 Thread Kingsley Idehen

On 10/17/14 11:48 AM, Pat Hayes wrote:

On Oct 17, 2014, at 6:34 AM, Kingsley Idehen kide...@openlinksw.com wrote:


On 10/17/14 12:00 AM, Pat Hayes wrote:

Kingsley, greetings.

It is important to keep a clear distinction between what temporal DBs call 
valid time and transaction time. T-time is when the record was inserted into 
the database or when it was created.

Yes, certainly.


This is important, basically, for internal accounting and maintenance of the DB 
itself.

Yes.


  V-time is the time being referred to in the data. So for example, if Bill and 
Jane are married on 01012014 and this fact gets inserted into a record of 
marriages on 05012014, there are two times involved, and these are the valid 
and transaction times respectively.

Yes.


What you are talking about, when you suggest using reification and dates of 
documents, is transaction time, not valid time.

If this were based solely on the world view of the RDF Reification Ontology [1], 
then yes, but I have an extended ontology [2] that adds additional properties to 
the Statement class. These extensions arose so that statements in a document 
could be endorsed and signed, etc.

But endorsing, signing, etc. are all things that attach to transaction time 
rather than valid time. They are all things done to the document (to the data) 
rather than things that the data talks about. (Except in those cases where what 
the data talks about is those very signings, endorsings, etc., but that is 
exactly where the distinction becomes pointless since the valid and transaction 
times coincide.)

You point me to your ontology, below, where it defines endorsement as a claim used 
to describe statements. Exactly: it describes the *statement*, not what the 
statement is talking about. I can endorse in 2014 a statement made in 2013 about a house 
purchase deed transfer which took place in 2011. The actual event was the transfer, which 
is not a document and not an endorement of a document, but an actual event in the world, 
which these documents refer to. Which is why I, in 2014, might have to pay fines to the 
IRS for taxes not paid in 2011.

This is why reification is the wrong tool to be using to describe valid times. 
Valid times are extra arguments to the relations in the data, or related to 
events described in the data; transaction times are arguments to relations 
between times and the data itself.

Pat


I don't believe I am suggesting the use of reification for valid times. 
I am indicating (hopefully) that what's described in a document and the 
mechanism of description (RDF statements, using [in my case] TURTLE 
notation) can be used collectively to separate fact from fiction, so to 
speak.


A marriage certificate is a document comprised of claims that reflect a 
reality comprised of temporal relations -- just like any other document 
that describes events e.g., ISWC 2014.
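
To make the valid-time/transaction-time split concrete, Pat's
Bill-and-Jane example could be sketched with the two times kept
explicit; this is a minimal sketch using invented `ex:` terms, not a
proposal for an actual vocabulary:

```turtle
@prefix ex:  <http://example.org/ns#> .
@prefix dc:  <http://purl.org/dc/terms/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

<#marriage1>
    a ex:Marriage ;
    ex:partner <#Bill>, <#Jane> ;
    ex:validFrom "2014-01-01"^^xsd:date .   # valid time: when the marriage holds

<#record1>
    a ex:MarriageRecord ;
    ex:about <#marriage1> ;
    dc:created "2014-01-05"^^xsd:date .     # transaction time: when the record was made
```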


Here's the kind of document to which my thoughts apply, in regards to 
this matter. For instance, I've made a claim about when this event 
starts and ends, and claimed that some entity (referred to as 
#ISWC2014Organization) has endorsed the statements in question etc.:


## Nanotation Start ##

<> a <http://rdfs.org/sioc/ns#Post> ;
<http://purl.org/dc/terms/created> "2014-10-17T13:10+05:00"^^xsd:dateTime ;
<https://twitter.com/hashtag/features#this> [ a Person ;
foaf:mbox <mailto:pha...@ihmc.us> ] ,
<http://kingsley.idehen.net/dataspace/person#this> ;
<http://xmlns.com/foaf/0.1/primaryTopic> 
<https://twitter.com/hashtage/Reification> ;
<http://xmlns.com/foaf/0.1/topic> <#ISWC2014> , <#StartStatement> .

<#ISWC2014>
a <http://purl.org/NET/c4dm/event.owl#Event> ;
<http://www.w3.org/2000/01/rdf-schema#label> "ISWC2014" ;
<http://www.w3.org/2004/02/skos/core#altLabel> "International Semantic 
Web Conference 2014" ;
<http://www.ontologydesignpatterns.org/cp/owl/timeinterval.owl#beginsAtDateTime> 
"2014-10-19T08:00-01:00"^^xsd:dateTime ;
<http://www.ontologydesignpatterns.org/cp/owl/timeinterval.owl#endsAtDateTime> 
"2014-10-23T17:00-01:00"^^xsd:dateTime ;
<http://www.w3.org/2007/05/powder-s#describedby> <> .

<#ISWC2014EventStartStatement>
a <http://www.w3.org/1999/02/22-rdf-syntax-ns#Statement> ;
<http://www.w3.org/2000/01/rdf-schema#label> "ISWC2014EventStartStatement" ;
<http://www.w3.org/2004/02/skos/core#prefLabel> "ISWC2014 Event Start 
Statement" ;
<http://www.w3.org/1999/02/22-rdf-syntax-ns#subject> <#ISWC2014> ;
<http://www.w3.org/1999/02/22-rdf-syntax-ns#predicate> 
<http://www.ontologydesignpatterns.org/cp/owl/timeinterval.owl#beginsAtDateTime> 
;
<http://www.w3.org/1999/02/22-rdf-syntax-ns#object> 
"2014-10-19T08:00-01:00"^^xsd:dateTime ;
<http://www.w3.org/2007/05/powder-s#describedby> <> .

<#ISWC2014StatementEndorsementExample>
<http://www.w3.org/2000/01/rdf-schema#label> 
"ISWC2014StatementEndorsementExample" ;
<http://www.w3.org/2004/02/skos/core#prefLabel> "ISWC 2014 Statement 
Endorsement Example" ;
a <http://www.openlinksw.com/schemas/reification#Endorsement> ;
<http://www.openlinksw.com/schemas/reification#endorser> 
<#ISWC2014Organization>

Re: It is not the URI that bends, it is only ..

2014-10-17 Thread Kingsley Idehen

On 10/17/14 2:01 PM, mike amundsen wrote:
of course URIs are merely signifiers so, like all names, they are 
transient and subject to re-interpretation.


"Mike" does not describe or define me; it's just one of my handles.



Yep!

And if you opt to use HTTP URIs as names you end up with the combined 
effect of denotation [signification] and connotation [description] by 
way of sign-description-document indirection, which is a form of 
interpretation that's native to HTTP based networks :)








Re: It is not the URI that bends, it is only ..

2014-10-17 Thread Kingsley Idehen

On 10/17/14 2:10 PM, Laura Dawson wrote:
As is  0001 1612 8189. That is your ISNI. ;) Makes for a much more 
stable URI.


## Nanotation Start ##

# The "more stable URI" claim is subject to the existence of something like:

<#you>
    <#id> "0001 1612 8189" ;
    foaf:name "Laura Dawson" ;
    <http://www.w3.org/2007/05/powder-s#describedby> .

# That resides on a network somewhere, and a document processor
# understands the *semantics* of the relation identified by <#id>.
# For example:


<#id>
    a <http://www.w3.org/2002/07/owl#InverseFunctionalProperty> .

## Nanotation End ##
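
To illustrate what that inverse-functional declaration buys, here is a minimal sketch. The second subject URI is hypothetical, invented only for this example; note also that OWL DL restricts inverse-functional properties to object properties, so the literal-valued use here follows the informal style of the nanotation above:

```turtle
# Illustrative sketch (hypothetical URIs): because <#id> is declared
# inverse-functional, a reasoner that finds the same <#id> value on
# two subjects may conclude they denote the same individual.

<#you>       <#id> "0001 1612 8189" .
<#laura-alt> <#id> "0001 1612 8189" .

# Entailment a reasoner could draw:
# <#you> owl:sameAs <#laura-alt> .
```

That "smushing" across identifiers is exactly why a stable ID becomes useful once its semantics are machine-readable.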

--
Regards,

Kingsley Idehen


Re: How to model valid time of resource properties?

2014-10-16 Thread Kingsley Idehen

On 10/16/14 8:00 AM, Kingsley Idehen wrote:

On 10/16/14 3:33 AM, John Walker wrote:

Hi Kingsley,
On October 15, 2014 at 2:59 PM Kingsley Idehen 
kide...@openlinksw.com wrote:


On 10/15/14 8:36 AM, Frans Knibbe | Geodan wrote:

On 2014-10-13 14:14, John Walker wrote:

Hi Frans,
See this example:
http://patterns.dataincubator.org/book/qualified-relation.html


Thank you John! Strangely enough, I had not come across the Linked 
Data Patterns book before. But I can see it is a valuable resource 
with solutions for many common problems. And it looks pretty too! I 
am sure it will come in handy for problems that I haven't stumbled 
upon yet.


A nice thing about this solution is that it doesn't need any 
extensions of core technologies. I do see some downsides, though:


Let's assume I want to publish data about people, as in the 
examples. A person can have common properties defined by the FOAF 
vocabulary, like foaf:age or foaf:based_near. Properties like these 
are likely to change. If I want to record the time in which a 
statement is valid I would have to create a class for that 
relationship and add properties to that class that will allow me to 
associate a start time and an end time with the class. But by doing 
that I would not only be forced to create my own vocabulary, I 
would also replace common web wide semantics with my own semantics. 
Or would it still be possible to relate the original property with 
the custom class somehow?


In the cases known to me that require the recording of history of 
resources, /all/ resource properties (except for the identifier) 
are things that can change in time. If this pattern would be 
applied, it would have to be applied to all properties, leading to 
vocabularies exploding and becoming unwieldy, as described in the 
Discussion paragraph.


I think that the desire to annotate statements with things like 
valid time is very common. Wouldn't it be funny if the best 
solution to such a common and relatively straightforward 
requirement were to create large custom vocabularies?


Regards,
Frans 


Frans,

How about reified RDF statements?

I think discounting RDF reification vocabulary is yet another act of 
premature optimization, in regards to the Semantic Web meme :) 


Just wondering if the semantics of RDF reification would accurately 
capture the semantics of what Frans wants to model.


If the idea is to capture the start and end date of a relationship, 
then is RDF reification the answer in this case?




Yes, since an RDF statement represents a relationship [1]. Thus, using 
reification (as per my example) you can refer to utterances 
(statements) made at a specific time.
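
To make that concrete, here is a minimal sketch of such a time-stamped reified statement. The ex: namespace and the validFrom/validTo properties are illustrative assumptions, not terms from this thread; the foaf:based_near predicate is taken from Frans's earlier example:

```turtle
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix ex:   <http://example.org/vocab#> .  # illustrative namespace

# Hypothetical sketch: a foaf:based_near claim, reified and annotated
# with the interval in which the statement is asserted to be valid.
<#basedNearStmt1>
    a             rdf:Statement ;
    rdf:subject   <#somePerson> ;
    rdf:predicate foaf:based_near ;
    rdf:object    <#Amsterdam> ;
    ex:validFrom  "2010-01-01"^^xsd:date ;
    ex:validTo    "2014-12-31"^^xsd:date .
```

The original foaf:based_near semantics are untouched; the temporal qualification attaches to the statement, not to the vocabulary.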


As the reified statement has rdf:type rdf:Statement, wouldn't we then 
be making the additional statements about the statement, not about 
the relationship it represents? If so, what does it mean to indicate 
a start and end date of a statement?


To use a real-life example discussed during Pilod [3], we have 
multiple conflicting sources of information:


Tax office records show Alice and Bob were married from 2010-03-01 to 
2014-01-01.




The Tax Office statements (recording this event) exist in a document 
created at a point in time. The document in question is comprised of 
RDF statements, each of which was also made at a point in time. 
Collectively they provide a temporal context lens relating to the 
observations (RDF statements) captured in the aforementioned document.


Local council records show Alice and Bob were married from 2001-03-01 
to 2014-10-10.




Local Council statements (recording this event) exist in a document 
created at a point in time. This document is independent of the Tax 
Office document.


This probably requires a mix of different modelling techniques and 
there's no right or wrong way to do it.




What would you do in the real world today? You (the relevant offices, 
or the participants in the marriage relation) would reconcile these two 
documents.


RDF Reification provides a good foundation for these issues; you can 
extend the vocabulary to enhance context-fidelity (across various 
axes), if need be [1][2].
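
A hedged sketch of what such an extension could look like. The ex: prefix and property names below are illustrative guesses, not the actual terms of the OpenLink reification ontology linked above:

```turtle
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/stmt#> .  # illustrative namespace

# A hypothetical subclass of rdf:Statement that adds provenance and
# valid-time properties, in the spirit of extending the reification
# vocabulary for context-fidelity.
ex:QualifiedStatement a rdfs:Class ;
    rdfs:subClassOf rdf:Statement .

ex:source    a rdf:Property ; rdfs:domain ex:QualifiedStatement .
ex:validFrom a rdf:Property ; rdfs:domain ex:QualifiedStatement .
ex:validTo   a rdf:Property ; rdfs:domain ex:QualifiedStatement .
```

With such a subclass, the Tax Office and Local Council claims can each be recorded as a statement instance carrying its own source and validity interval, and reconciled afterwards.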



[1] http://www.openlinksw.com/c/9DQD6HLX -- OpenLink Statement
[2] http://www.openlinksw.com/c/9HWIJM -- Statement (which extends 
rdf:Statement)


Kingsley 


Fixing the references above, for the sake of clarity.

I meant to say: RDF Reification provides a good foundation for these 
issues; you can extend the vocabulary to enhance context-fidelity 
(across various axes), if need be [2][3].


Links:

[1] http://bit.ly/understanding-data-relationships-section -- Relationships
[2] http://www.openlinksw.com/c/9DQD6HLX -- OpenLink Statement 
Reification Ontology
[3] http://www.openlinksw.com/c/9HWIJM -- Definition of a Statement 
(which extends rdf:Statement)


--
Regards,

Kingsley Idehen
