Re: Is 303 really necessary?

2010-11-19 Thread Kingsley Idehen

On 11/18/10 5:10 PM, David Booth wrote:

Hi Ian,

Although I applaud your efforts at finding a simpler solution, upon
reflection I think I would vote against the 200 + Content-location
proposal,
http://iand.posterous.com/a-guide-to-publishing-linked-data-without-red
for these reasons:

1. I think adding a third option to the "use either hash URIs or 303
redirects" guidance will cause more harm than good to the LOD community.
There is already enough confusion over the "Should I use a hash URI or a
303, and why?" question.  A third option is likely to create even more
confusion.

2. I don't see the extra network access of 303 as being such a big deal
-- though YMMV -- since even though the 303 response itself is not
supposed to be cached (per RFC 2616), the toucan description document
returned in the following 200 response can (and should) be cached.  In
fact, if you have n slash URIs U1, U2, ... Un that share the same
description document, then with the 200+CL approach the entire
description document would have to be returned n times, whereas with the
303 approach the description document would only be retrieved once: the
rest of the n network accesses would be short 303 responses.
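
To make that concrete, here is a minimal sketch (Python, using the
requests library; the example.org URIs are hypothetical) of a client
resolving several slash URIs that all 303 to one shared, cacheable
description document:

    # Minimal sketch of the point above; example.org URIs are hypothetical.
    import requests

    doc_cache = {}  # description-document URL -> body

    def describe(slash_uri):
        # The 303 response itself is small and not cacheable (per RFC 2616)...
        r = requests.get(slash_uri,
                         headers={"Accept": "application/rdf+xml"},
                         allow_redirects=False)
        if r.status_code == 303:
            doc_url = r.headers["Location"]
            # ...but the description document it points to is an ordinary
            # 200 response and can be cached, so n slash URIs sharing one
            # document cost n short 303s plus a single document retrieval.
            if doc_url not in doc_cache:
                doc_cache[doc_url] = requests.get(
                    doc_url, headers={"Accept": "application/rdf+xml"}).text
            return doc_cache[doc_url]
        # With 200 + Content-Location, the full document body comes back
        # on every request instead.
        return r.text

    for uri in ["http://example.org/id/toucan-1",
                "http://example.org/id/toucan-2"]:
        describe(uri)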

3. The proposed use of the Content-location header is not aligned with
the RFC 2616 or HTTPbis definitions of its purpose:
http://tools.ietf.org/html/draft-ietf-httpbis-p3-payload-12#page-24
That header does not indicate that the returned representation is *not*
a representation corresponding to the effective request URI.  Rather, it
says that it is *also* a representation corresponding to the
Content-location URI.  I.e., the returned representation is a
representation of *both* the generic resource *and* the more specific one.
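
As a small illustration (hypothetical URIs again), the most a conforming
client learns from that header is something like this:

    # Sketch only; the example.org URI is hypothetical.
    import requests

    r = requests.get("http://example.org/id/toucan")
    doc_uri = r.headers.get("Content-Location")  # e.g. ".../doc/toucan"

    # Per RFC 2616 / HTTPbis, the returned body is a representation of the
    # effective request URI *and additionally* of the Content-Location URI;
    # the header never says "the request URI names something other than this
    # document". So a client cannot use it alone to tell the toucan apart
    # from the document that describes it.
    print(r.status_code, doc_uri)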

Best wishes,



David,

What about the aspect of this idea that's based on the content of the 
self-describing description document? Basically, that a Linked Data 
aware user agent interprets the world view expressed in the content?


I still believe there is a common ground here that isn't coming through, 
i.e., nothing is broken, and people can drop description documents into 
data spaces on HTTP networks (e.g., the WWW) where Subjects are identified 
using slash URIs, without any Linked Data aware user agent confusing 
Subject Names with Description Document Addresses.


Developers of Linked Data aware solutions (e.g., user agents and user 
agent extensions) are the ones that need to make a decision about 
semantic fidelity, i.e., do they stick with HTTP response codes alone, or 
use a combination of HTTP response codes, HTTP session metadata, and 
description document content to disambiguate Names and Addresses.


To conclude, I am saying:

1. No new HTTP response codes
2. Web Servers continue to return 200 OK for Document URLs
3. Linked Data Servers have the option to handle Name or Address 
disambiguation using 303 redirection for slash URIs
4. Linked Data Servers have the option to behave like Web Servers, i.e., do 
no Name or Address disambiguation, leaving Linked Data aware user agents 
to interpret the content of Description Documents (see the sketch after 
this list)

5. Linked Data aware User Agents handle Name or Address disambiguation.
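
As flagged in item 4, here is a rough sketch of what that option can look 
like from the user-agent side (Python with rdflib; treating 
foaf:primaryTopic as the disambiguation cue is an illustrative assumption, 
not a mandate):

    # Illustrative sketch only: one way a Linked Data aware user agent could
    # separate a Subject Name from a Description Document Address using the
    # content of the returned document. The property choice is an assumption.
    from rdflib import Graph, Namespace, URIRef

    FOAF = Namespace("http://xmlns.com/foaf/0.1/")

    def subject_of(document_url):
        g = Graph()
        g.parse(document_url)  # works whether the server 303'd or returned 200
        # If the document states what it is primarily about, use that as the
        # Subject Name...
        for topic in g.objects(URIRef(document_url), FOAF.primaryTopic):
            return topic
        # ...otherwise fall back to treating the request URI as the Name,
        # i.e. the ambiguous, pre-disambiguation situation.
        return URIRef(document_url)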

IMHO: when the dust settles, this is what it boils down to. On our side, 
we're done re. 1-5 across our Linked Data server and client 
functionality, as delivered by our products :-)


--

Regards,

Kingsley Idehen 
President & CEO
OpenLink Software
Web: http://www.openlinksw.com
Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca: kidehen







Making Linked Data Fun

2010-11-19 Thread Kingsley Idehen

All,

Here is an example of what can be achieved with Linked Data, for 
instance using BBC Wild Life Finder's data:


1. http://uriburner.com/c/DI463N -- remote SPARQL queries between two 
instances (URIBurner and LOD Cloud Cache) with results serialized in 
CXML (image processing is part of the SPARQL query pipeline).
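
For anyone who wants to poke at the underlying pattern from code, here is 
a hedged sketch: a federated SPARQL query posted to one endpoint, with 
CXML requested back. The endpoint URLs, the SERVICE target, the 
vocabulary, and the "text/cxml" Accept value are illustrative placeholders, 
not the exact setup behind the link above.

    # Rough sketch of the pattern only. Endpoint URLs, the SERVICE target,
    # the vocabulary, and the "text/cxml" media type are placeholders.
    import requests

    query = """
    PREFIX wo:   <http://purl.org/ontology/wo/>
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?animal ?depiction
    WHERE {
      SERVICE <http://lod.openlinksw.com/sparql> {  # placeholder remote endpoint
        ?animal a wo:Species ;
                foaf:depiction ?depiction .
      }
    }
    LIMIT 50
    """

    resp = requests.post(
        "http://uriburner.com/sparql",              # placeholder endpoint URL
        data={"query": query},
        headers={"Accept": "text/cxml"},            # assumed CXML media type
    )
    print(resp.status_code, resp.headers.get("Content-Type"))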



Enjoy!

--

Regards,

Kingsley Idehen 
President & CEO
OpenLink Software
Web: http://www.openlinksw.com
Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca: kidehen







Re: Making Linked Data Fun

2010-11-19 Thread Aldo Bucchi
Kingsley,

On Fri, Nov 19, 2010 at 1:07 PM, Kingsley Idehen kide...@openlinksw.com wrote:
 All,

 Here is an example of what can be achieved with Linked Data, for instance
 using BBC Wild Life Finder's data:

 1. http://uriburner.com/c/DI463N -- remote SPARQL queries between two
 instances (URIBurner and LOD Cloud Cache) with results serialized in CXML
 (image processing part of the SPARQL query pipeline) .

This is excellent!
Single most powerful demo available. Really looking fwd to what's coming next.

Let's see how this shifts gears in terms of Linked Data comprehension.
Even in its current state, this is an absolute game changer.

I know this was not easy. My hat goes off to the team for their focus.

Now, just let me send this link out to some non-believers that have
been holding back my evangelization pipeline ;)

Regards,
A



 Enjoy!

 --

 Regards,

 Kingsley Idehen   
 President & CEO
 OpenLink Software
 Web: http://www.openlinksw.com
 Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca: kidehen








-- 
Aldo Bucchi
@aldonline
skype:aldo.bucchi
http://aldobucchi.com/



Re: Making Linked Data Fun

2010-11-19 Thread Nathan

Kingsley Idehen wrote:

All,

Here is an example of what can be achieved with Linked Data, for 
instance using BBC Wild Life Finder's data:


1. http://uriburner.com/c/DI463N -- remote SPARQL queries between two 
instances (URIBurner and LOD Cloud Cache) with results serialized in 
CXML (image processing part of the SPARQL query pipeline) .


Enjoy!


Sweet you did it! Pivot and Linked Data joined together, that's a big 
step, congrats!


kutgw,

Nathan




Re: Making Linked Data Fun

2010-11-19 Thread John Erickson
This "single most powerful demo available" is an epic fail on Ubuntu
10.10 + Chrome. The most recent release of Moonlight just doesn't cut
it (and shouldn't have to).

Could we as a community *possibly* work towards a rich data
visualization/presentation toolkit built on, say, HTML5?

On Fri, Nov 19, 2010 at 11:20 AM, Aldo Bucchi aldo.buc...@gmail.com wrote:
 Kingsley,

 On Fri, Nov 19, 2010 at 1:07 PM, Kingsley Idehen kide...@openlinksw.com 
 wrote:
 All,

 Here is an example of what can be achieved with Linked Data, for instance
 using BBC Wild Life Finder's data:

 1. http://uriburner.com/c/DI463N -- remote SPARQL queries between two
 instances (URIBurner and LOD Cloud Cache) with results serialized in CXML
 (image processing part of the SPARQL query pipeline) .

 This is excellent!
 Single most powerful demo available. Really looking fwd to what's coming next.

 Let's see how this shifts gears in terms of Linked Data comprehension.
 Even in its current state, this is an absolute game changer.

 I know this was not easy. My hat goes off to the team for their focus.

 Now, just let me send this link out to some non-believers that have
 been holding back my evangelization pipeline ;)

 Regards,
 A



 Enjoy!

 --

 Regards,

 Kingsley Idehen
 President & CEO
 OpenLink Software
 Web: http://www.openlinksw.com
 Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca: kidehen








 --
 Aldo Bucchi
 @aldonline
 skype:aldo.bucchi
 http://aldobucchi.com/





-- 
John S. Erickson, Ph.D.
http://bitwacker.wordpress.com
olyerick...@gmail.com
Twitter: @olyerickson
Skype: @olyerickson



AW: Making Linked Data Fun

2010-11-19 Thread Peter Haase
Hi Kingsley,

 

very nice!

 

Btw, in our Information Workbench we have a similar PivotViewer / LOD
bridge.

See e.g. http://iwb.fluidops.com/resource/dbpedia:Mammal (as an example of
visualizing the extension of a class, i.e., a DBpedia category in this
case),

or http://iwb.fluidops.com/resource/Barack_Obama (as an example of
visualizing related entities).

 

(Select the Pivot view, i.e. the bottom left view)

 

Regards,

Peter

 

From: public-lod-requ...@w3.org [mailto:public-lod-requ...@w3.org] On Behalf
Of Kingsley Idehen
Sent: Friday, November 19, 2010 5:07 PM
To: public-lod@w3.org
Subject: Making Linked Data Fun

 

All,

Here is an example of what can be achieved with Linked Data, for instance
using BBC Wild Life Finder's data: 

1. http://uriburner.com/c/DI463N -- remote SPARQL queries between two
instances (URIBurner and LOD Cloud Cache) with results serialized in CXML
(image processing part of the SPARQL query pipeline) .


Enjoy!



-- 
 
Regards,
 
Kingsley Idehen 
President & CEO 
OpenLink Software 
Web: http://www.openlinksw.com
Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca: kidehen 
 
 
 
 


Re: Making Linked Data Fun

2010-11-19 Thread Aldo Bucchi
You started ;)

On Nov 19, 2010, at 13:39, John Erickson olyerick...@gmail.com wrote:

 This single most powerful demo available is an epic fail on Ubuntu
 10.10 + Chrome. The most recent release of Moonlight just doesn't cut
 it (and shouldn't have to).

What you see as a fail I see as a win.



 
 Could we as a community *possibly* work towards a rich data
 visualization/presentation toolkit built on, say, HTML5?

Will happen. But we need to stop investing asymmetrically. All tech, no 
marketing collateral. We need big players to see what we see.

This demo makes a major CTO visualize what linked data could do for his 
company in the long run, thus lowering the entry barrier for us today. In order 
for an industry to grow, we need participation and engagement.



 
 On Fri, Nov 19, 2010 at 11:20 AM, Aldo Bucchi aldo.buc...@gmail.com wrote:
 Kingsley,
 
 On Fri, Nov 19, 2010 at 1:07 PM, Kingsley Idehen kide...@openlinksw.com 
 wrote:
 All,
 
 Here is an example of what can be achieved with Linked Data, for instance
 using BBC Wild Life Finder's data:
 
 1. http://uriburner.com/c/DI463N -- remote SPARQL queries between two
 instances (URIBurner and LOD Cloud Cache) with results serialized in CXML
 (image processing part of the SPARQL query pipeline) .
 
 This is excellent!
 Single most powerful demo available. Really looking fwd to what's coming 
 next.
 
 Let's see how this shifts gears in terms of Linked Data comprehension.
 Even in its current state, this is an absolute game changer.
 
 I know this was not easy. My hat goes off to the team for their focus.
 
 Now, just let me send this link out to some non-believers that have
 been holding back my evangelization pipeline ;)
 
 Regards,
 A
 
 
 
 Enjoy!
 
 --
 
 Regards,
 
 Kingsley Idehen
 President & CEO
 OpenLink Software
 Web: http://www.openlinksw.com
 Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca: kidehen
 
 
 
 
 
 
 
 
 --
 Aldo Bucchi
 @aldonline
 skype:aldo.bucchi
 http://aldobucchi.com/
 
 
 
 
 
 -- 
 John S. Erickson, Ph.D.
 http://bitwacker.wordpress.com
 olyerick...@gmail.com
 Twitter: @olyerickson
 Skype: @olyerickson
 



Re: Making Linked Data Fun

2010-11-19 Thread Kingsley Idehen

On 11/19/10 11:39 AM, John Erickson wrote:

This single most powerful demo available is an epic fail on Ubuntu
10.10 + Chrome.


End-users don't use Ubuntu + Chrome. It should be an epic fail for that 
demographic :-)


This is for end-users, not for Linux geeks.

The most recent release of Moonlight just doesn't cut
it (and shouldn't have to).

Could we as a community *possibly* work towards a rich data
visualization/presentation toolkit built on, say, HTML5?


Hopefully.

Look closer at Pivot; it tells many stories.

It could be written in HTML5, so I'd be happy to see that developed ASAP :-)

But our first goal was to get it done with the PivotViewer control from 
Microsoft, which works across Windows and Mac OS X and supports all the 
main browsers (including Opera).


Kingsley



On Fri, Nov 19, 2010 at 11:20 AM, Aldo Bucchi aldo.buc...@gmail.com wrote:

Kingsley,

On Fri, Nov 19, 2010 at 1:07 PM, Kingsley Idehen kide...@openlinksw.com wrote:

All,

Here is an example of what can be achieved with Linked Data, for instance
using BBC Wild Life Finder's data:

1. http://uriburner.com/c/DI463N -- remote SPARQL queries between two
instances (URIBurner and LOD Cloud Cache) with results serialized in CXML
(image processing part of the SPARQL query pipeline) .

This is excellent!
Single most powerful demo available. Really looking fwd to what's coming next.

Let's see how this shifts gears in terms of Linked Data comprehension.
Even in its current state, this is an absolute game changer.

I know this was not easy. My hat goes off to the team for their focus.

Now, just let me send this link out to some non-believers that have
been holding back my evangelization pipeline ;)

Regards,
A



Enjoy!

--

Regards,

Kingsley Idehen
President & CEO
OpenLink Software
Web: http://www.openlinksw.com
Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca: kidehen








--
Aldo Bucchi
@aldonline
skype:aldo.bucchi
http://aldobucchi.com/








--

Regards,

Kingsley Idehen 
President & CEO
OpenLink Software
Web: http://www.openlinksw.com
Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca: kidehen








Re: AW: Making Linked Data Fun

2010-11-19 Thread Kingsley Idehen

On 11/19/10 11:53 AM, Peter Haase wrote:


Hi Kingsley,

very nice!



Peter,

Thanks!

Btw, in our Information Workbench we have a similar PivotViewer / LOD 
bridge.


See e.g. http://iwb.fluidops.com/resource/dbpedia:Mammal (as an 
example for the visualization of the extension of a class, i.e. 
DBpedia category in this case),


Or http://iwb.fluidops.com/resource/Barack_Obama (as an example for 
the visualization of related entities)


(Select the Pivot view, i.e. the bottom left view)



A few key technical points about what we've done:

1. SPARQL Query Engine includes a DZC (Deep Zoom Collection and Deep Zoom 
Image) generation engine (you can use SPARQL patterns to determine the 
entire shape of the Pivot, e.g., where images come from, i.e., local or 
remote, and how you link out via @href, etc.).


2. SPARQL compiler -- has pragmas for CXML, which is now a bona fide 
SPARQL results serialization format option.


3. Cursors -- dynamically constructed collections need to be able to 
page over masses of data, e.g., the LOD Cloud Cache.


So it's full SPARQL query capability intimately meshed with PivotViewer 
and dynamically generated CXML collections.
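
On the cursors point, the client-visible shape of the problem is roughly 
the following; it is a sketch only, with naive LIMIT/OFFSET paging 
standing in for the real server-side cursor machinery and a placeholder 
endpoint URL:

    # Sketch only: paging a large result set in fixed-size chunks. Naive
    # LIMIT/OFFSET stands in for real server-side cursors; the endpoint URL
    # is a placeholder.
    import requests

    ENDPOINT = "http://lod.openlinksw.com/sparql"   # placeholder
    PAGE = 1000

    def pages(pattern):
        offset = 0
        while True:
            q = f"SELECT ?s WHERE {{ {pattern} }} LIMIT {PAGE} OFFSET {offset}"
            rows = requests.post(
                ENDPOINT, data={"query": q},
                headers={"Accept": "application/sparql-results+json"},
            ).json()["results"]["bindings"]
            if not rows:
                break
            yield rows
            offset += PAGE

    # e.g.: for chunk in pages("?s a <http://purl.org/ontology/wo/Species>"): ...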


Here are some additional demo links:

1. http://bit.ly/cJ5oqs -- BBC (already posted)
2. http://bit.ly/9r7t1f -- Linked Open Commerce oriented
3. http://bit.ly/b5zgsz -- ditto.

Kingsley


Regards,

Peter

*From:* public-lod-requ...@w3.org [mailto:public-lod-requ...@w3.org] *On 
Behalf Of* Kingsley Idehen

*Sent:* Friday, November 19, 2010 5:07 PM
*To:* public-lod@w3.org
*Subject:* Making Linked Data Fun

All,

Here is an example of what can be achieved with Linked Data, for 
instance using BBC Wild Life Finder's data:


1. http://uriburner.com/c/DI463N -- remote SPARQL queries between two 
instances (URIBurner and LOD Cloud Cache) with results serialized in 
CXML (image processing part of the SPARQL query pipeline) .



Enjoy!

--
  
Regards,
  
Kingsley Idehen

President & CEO
OpenLink Software
Web: http://www.openlinksw.com
Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca: kidehen
  
  
  
  



--

Regards,

Kingsley Idehen 
President & CEO
OpenLink Software
Web: http://www.openlinksw.com
Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca: kidehen







Re: A Direct Mapping of Relational Data to RDF

2010-11-19 Thread Juan Sequeda
I forgot to mention the public RDB2RDF comment mailing list:

http://lists.w3.org/Archives/Public/public-rdb2rdf-comments/

Please send your comments to that mailing list.

Thanks!

Juan Sequeda
+1-575-SEQ-UEDA
www.juansequeda.com


On Thu, Nov 18, 2010 at 10:49 AM, Juan Sequeda juanfeder...@gmail.com wrote:

 Hi Everybody

 I'm pleased to announce that today we have published "A Direct Mapping of
 Relational Data to RDF" as a First Public Working Draft.

 http://www.w3.org/TR/rdb-direct-mapping/

 This is an important step for the W3C RDB2RDF WG.

 The WG is looking forward to comments.

 Regards

 Juan Sequeda on behalf of the RDB2RDF WG

 Juan Sequeda
 +1-575-SEQ-UEDA
 www.juansequeda.com
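
P.S. For anyone who has not looked at the draft yet, here is a rough 
illustration of the kind of output a direct mapping produces for a single 
row. The identifiers, URI scheme, and lack of datatyping below are 
simplifications, not the draft's exact rules:

    # Rough, hand-written illustration of the general idea of a direct
    # mapping: one row becomes a set of triples. Table/column names, the
    # URI scheme, and the omission of datatypes are simplifications here,
    # not the draft's normative rules.
    BASE = "http://example.com/db/"              # hypothetical base IRI

    table, pk_col = "People", "ID"
    row = {"ID": 7, "fname": "Bob", "addr": 18}  # made-up row; addr is a
                                                 # foreign key into Addresses

    subject = f"<{BASE}{table}/{pk_col}={row[pk_col]}>"
    for col, val in row.items():
        pred = f"<{BASE}{table}#{col}>"
        # A foreign-key column links to the referenced row's node;
        # other columns become (here untyped) literal-valued triples.
        obj = f"<{BASE}Addresses/ID={val}>" if col == "addr" else f'"{val}"'
        print(f"{subject} {pred} {obj} .")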






Re: Is 303 really necessary?

2010-11-19 Thread David Booth
On Fri, 2010-11-19 at 07:26 -0500, Kingsley Idehen wrote:
[ . . . ]
 To conclude, I am saying:
 
 1. No new HTTP response codes
 2. Web Servers continue to return 200 OK for Document URLs 
 3. Linked Data Servers have option handle Name or Address
 disambiguation using 303 redirection for slash URIs
 4. Linked Data Servers have option to be like Web Servers i.e. do no
 Name or Address disambiguation leaving Linked Data aware user agents
 to understand the content of Description Documents
 5. Linked Data aware User Agents handle Name or Address
 disambiguation.
 
 IMHO: when the dust settles, this is what it boils down to. On our
 side, we're done re. 1-5 across our Linked Data server and client
 functionality, as delivered by our products :-)
 
I think the above reflects reality, regardless of what is recommended,
because:

 - some Linked Data Servers *will* serve RDF with 200 response codes via
slash URIs, regardless of what is recommended;

 - some User Agents *will* still try to use that data;

 - those User Agents may or may not care about the ambiguity between the
toucan and its web page;

 - those that do care will use whatever heuristics they have to
disambiguate, and the heuristic of ignoring the 200 response code is
very pragmatic.


-- 
David Booth, Ph.D.
Cleveland Clinic (contractor)
http://dbooth.org/

Opinions expressed herein are those of the author and do not necessarily
reflect those of Cleveland Clinic.




Re: Is 303 really necessary?

2010-11-19 Thread Kingsley Idehen

On 11/19/10 4:55 PM, David Booth wrote:

On Fri, 2010-11-19 at 07:26 -0500, Kingsley Idehen wrote:
[ . . . ]

To conclude, I am saying:

1. No new HTTP response codes
2. Web Servers continue to return 200 OK for Document URLs
3. Linked Data Servers have option handle Name or Address
disambiguation using 303 redirection for slash URIs
4. Linked Data Servers have option to be like Web Servers i.e. do no
Name or Address disambiguation leaving Linked Data aware user agents
to understand the content of Description Documents
5. Linked Data aware User Agents handle Name or Address
disambiguation.

IMHO: when the dust settles, this is what it boils down to. On our
side, we're done re. 1-5 across our Linked Data server and client
functionality, as delivered by our products :-)


I think the above reflects reality, regardless of what is recommended,
because:

  - some Linked Data Servers *will* serve RDF with 200 response codes via
slash URIs, regardless of what is recommended;

  - some User Agents *will* still try to use that data;

  - those User Agents may or may not care about the ambiguity between the
toucan and its web page;

  - those that do care will use whatever heuristics they have to
disambiguate, and the heuristic of ignoring the 200 response code is
very pragmatic.



David,

Great! We're going to point back to this post repeatedly in the future :-)

--

Regards,

Kingsley Idehen 
President & CEO
OpenLink Software
Web: http://www.openlinksw.com
Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca: kidehen







Re: Is 303 really necessary?

2010-11-19 Thread Nathan

Kingsley Idehen wrote:

On 11/19/10 4:55 PM, David Booth wrote:

On Fri, 2010-11-19 at 07:26 -0500, Kingsley Idehen wrote:
[ . . . ]

To conclude, I am saying:

1. No new HTTP response codes
2. Web Servers continue to return 200 OK for Document URLs
3. Linked Data Servers have option handle Name or Address
disambiguation using 303 redirection for slash URIs
4. Linked Data Servers have option to be like Web Servers i.e. do no
Name or Address disambiguation leaving Linked Data aware user agents
to understand the content of Description Documents
5. Linked Data aware User Agents handle Name or Address
disambiguation.

IMHO: when the dust settles, this is what it boils down to. On our
side, we're done re. 1-5 across our Linked Data server and client
functionality, as delivered by our products :-)


I think the above reflects reality, regardless of what is recommended,
because:

  - some Linked Data Servers *will* serve RDF with 200 response codes via
slash URIs, regardless of what is recommended;

  - some User Agents *will* still try to use that data;

  - those User Agents may or may not care about the ambiguity between the
toucan and its web page;

  - those that do care will use whatever heuristics they have to
disambiguate, and the heuristic of ignoring the 200 response code is
very pragmatic.


David,

Great! We're going to point back to this post repeatedly in the future :-)


I truly hope not. Recognizing that some people *will* do whatever the 
hell they please doesn't make what they're doing a good idea, or 
something that should be accepted as best / standard practice.


As David mentioned earlier, having two ways to do things is already bad 
enough (hash/303) without introducing a third. There has already been half 
a decade of problems/ambiguity/nuisance because of the httpRange-14 
resolution, ranging from the technical to the conceptual to the community 
level. Why on earth would we want to compound that by returning to the 
messy state that prompted the range-14 issue in the first place?


The fact is, the current reality is primarily due to there being so 
much confusion, with no single clear message coming through, and until 
that changes the future reality is only likely to get messier.


Best,

Nathan



Re: Is 303 really necessary?

2010-11-19 Thread Kingsley Idehen

On 11/19/10 5:57 PM, Nathan wrote:

Kingsley Idehen wrote:

On 11/19/10 4:55 PM, David Booth wrote:

On Fri, 2010-11-19 at 07:26 -0500, Kingsley Idehen wrote:
[ . . . ]

To conclude, I am saying:

1. No new HTTP response codes
2. Web Servers continue to return 200 OK for Document URLs
3. Linked Data Servers have option handle Name or Address
disambiguation using 303 redirection for slash URIs
4. Linked Data Servers have option to be like Web Servers i.e. do no
Name or Address disambiguation leaving Linked Data aware user agents
to understand the content of Description Documents
5. Linked Data aware User Agents handle Name or Address
disambiguation.

IMHO: when the dust settles, this is what it boils down to. On our
side, we're done re. 1-5 across our Linked Data server and client
functionality, as delivered by our products :-)


I think the above reflects reality, regardless of what is recommended,
because:

  - some Linked Data Servers *will* serve RDF with 200 response 
codes via

slash URIs, regardless of what is recommended;

  - some User Agents *will* still try to use that data;

  - those User Agents may or may not care about the ambiguity 
between the

toucan and its web page;

  - those that do care will use whatever heuristics they have to
disambiguate, and the heuristic of ignoring the 200 response code is
very pragmatic.


David,

Great! We're going to point back to this post repeatedly in the 
future :-)


I truly hope not, recognizing that some people *will* do whatever the 
hell they please, doesn't make what they're doing a good idea, or 
something that should be accepted as best / standard practise.


I am not implying that :-)

I am trying to say that 1-5 represent the landscape, and solutions 
simply operate within that reality. Next time these matters arise we can 
save time by returning to 1-5.


As David mentioned earlier, having two ways to do things is already 
bad enough (hash/303) without introducing a third. 


There will always be many ways to skin a rat; Linked Data can't stop 
that reality.


There's already been half a decade of problems/ambiguity/nuisance 
because of the httpRange-14 resolution, ranging from technical to 
community and via conceptual, why on earth would we want to compound 
that by returning to the messy state that prompted the range-14 issue 
in the first place?


I don't see how I am advocating that. There are no mandates in 1-5; 
again, that list outlines how things are. Servers alone, Servers plus User 
Agents, or User Agents alone can make these decisions.




Fact is, the current reality is primarily due to the fact there is so 
much confusion with no single clear message coming through, and until 
that happens the future reality is only likely to get messier.


It won't get any messier, since 1-5 simply imply that there isn't 
anything to change re. HTTP response codes. Developers should make 
choices and live with the consequences, as they do every day :-)





Best,

Nathan





--

Regards,

Kingsley Idehen 
President & CEO
OpenLink Software
Web: http://www.openlinksw.com
Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca: kidehen