Danny Ayers wrote:
<%=profanity /%>
or something - cool thread
while I disagree with many of Aldo's individual points, getting them
surfaced is really positive
in response to a line from the firestarter:
"The only reason anyone can afford to offer a SPARQL endpoint is because it
doesn't get used too much?"
while my love of SPARQL is enormous, I can't see the SPARQL endpoint
being a lasting scenario.
But you seem to assume that SPARQL endpoints are the definitive scenario
today?
I see SPARQL endpoints as simply optional. Their public visibility will be
determined by the needs and wishes of a given Linked Data Space owner.
Take DBpedia as an example: it works fine as a Linked Data Space on its
own, but it also offers a SPARQL endpoint for those who want to speak
SPARQL.
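To make the "optional endpoint" point concrete, here's a minimal sketch
(Python, using the requests library; the DBpedia URIs are the public ones,
and the "format" parameter is a convenience of DBpedia's Virtuoso endpoint
rather than part of the SPARQL protocol itself):

    import requests

    # Route 1: plain Linked Data -- dereference the resource URI and ask for
    # RDF via content negotiation; no SPARQL involved.
    resource = "http://dbpedia.org/resource/Berlin"
    ld = requests.get(resource,
                      headers={"Accept": "application/rdf+xml"},
                      allow_redirects=True)
    print(ld.url, ld.headers.get("Content-Type"))

    # Route 2: the optional SPARQL endpoint, for clients that want SPARQL.
    query = "SELECT ?p ?o WHERE { <http://dbpedia.org/resource/Berlin> ?p ?o } LIMIT 10"
    res = requests.get("http://dbpedia.org/sparql",
                       params={"query": query,
                               "format": "application/sparql-results+json"})
    for row in res.json()["results"]["bindings"]:
        print(row["p"]["value"], row["o"]["value"])
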
As I've stated already, the real issue comes down to deep DBMS-level
issues re. scalability and query processing prowess. Today we don't have
ODBC or JDBC DSNs published on the Web, primarily because the naming
granularity isn't there (i.e. record-level naming), single-record access
and manipulation has always been a problem for SQL, and there has never
been a truly standardized data access mechanism that incorporates the
networking layer.
Linked Data picks up from where the RDBMS realm simply runs out of steam
re. Open Data Access, but this doesn't mean that the techniques from the
RDBMS realm can't form the basis of the new kind of DBMS frontier I
believe the Linked Data Web ultimately delivers.
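To illustrate the record-level naming contrast (a sketch only; the table,
column and record URI below are hypothetical, and sqlite3 simply stands in
for any SQL engine reached via a DSN):

    import sqlite3
    import requests

    # SQL realm: the row has no global name; you need the DSN/connection,
    # the schema, and a query just to reach one record.
    conn = sqlite3.connect("customers.db")
    row = conn.execute("SELECT name, city FROM customer WHERE id = ?",
                       (42,)).fetchone()

    # Linked Data realm: the record's name is an HTTP URI, and the name
    # itself is the access mechanism -- one GET returns its description.
    record_uri = "http://example.org/customer/42"  # hypothetical record URI
    doc = requests.get(record_uri, headers={"Accept": "text/turtle"})
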
linking and the fresh approach to caching this will demand, need
another rev. before the web starts doing data efficiently
When you say: "..Web doing data efficiently", what do you mean?
Scalable and "change-sensitive" data access on the Linked Data Web is
fundamentally a matter of differentiation among Linked Data deployment
platform players (imho). I think HTTP, RDF, and SPARQL (plus extensions)
already provide ample building blocks for solid platforms. We just have to
remember that data access and DBMS technology predate the Web, and a lot
of the techniques and knowledge from that realm are vital when it comes to
scalability and efficiency challenges.
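For instance, plain HTTP already gives you "change sensitive" access via
validators and conditional GET; whether a given Linked Data server emits
ETag or Last-Modified headers is deployment-specific, so treat this sketch
as an assumption about the publisher:

    import requests

    uri = "http://dbpedia.org/data/Berlin.ttl"
    first = requests.get(uri)
    etag = first.headers.get("ETag")
    last_modified = first.headers.get("Last-Modified")

    # Re-validate instead of re-fetching: the server answers 304 if nothing
    # has changed since the copy we cached.
    headers = {}
    if etag:
        headers["If-None-Match"] = etag
    elif last_modified:
        headers["If-Modified-Since"] = last_modified

    second = requests.get(uri, headers=headers)
    if second.status_code == 304:
        print("Cached copy is still valid; no need to re-parse the RDF")
    else:
        print("Data changed; refresh the local cache")
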
In my eyes, and from my experience, the Web is only beginning to
comprehend DBMS matters. And like most things relating to humans, some
lessons are only learned the hard way :-)
Kingsley
the answer to the quoted line is the question - how can you not
afford to? Classic stuff re. Amazon opening up their silo a little bit -
guess what, profit!
pip,
Danny.
2008/11/28 Juan Sequeda <[EMAIL PROTECTED]>:
On Thu, Nov 27, 2008 at 2:33 PM, Peter Ansell <[EMAIL PROTECTED]>
wrote:
2008/11/27 Richard Cyganiak <[EMAIL PROTECTED]>
Hugh,
Here's what I think we will see in the area of RDF publishing in a few
years:
- those query capabilities are described in RDF and hence can be invoked
by tools such as SQUIN/SemWebClient to answer certain queries efficiently
I still don't understand what SQUIN etc. offer above
Jena/Sesame/SemWeb etc., which can do this URI resolution with very little
programming knowledge in custom applications.
True, Jena/Sesame does everything that SQUIN intends to do. However, SQUIN
is oriented towards "Web 2.0" developers. How is a PHP/RoR web developer going
to interact with the Web of Data and make some kind of semantic linked-data
mashup overnight? SQUIN will let them do this. No need to install Jena,
learn Jena, etc. Make it simple! If it is not simple, then developers are
not going to use it.
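(Not Jena or SQUIN themselves, but a rough Python/rdflib analogue of the
"resolve the URI, then query what came back" step under discussion, just to
show how small the basic pattern is:)

    from rdflib import Graph

    # Dereference a Linked Data URI; rdflib follows the redirect and parses
    # the RDF it gets back.
    g = Graph()
    g.parse("http://dbpedia.org/resource/Tim_Berners-Lee")

    # Query the freshly fetched triples locally with SPARQL.
    results = g.query("""
        SELECT ?name WHERE { ?s <http://xmlns.com/foaf/0.1/name> ?name }
    """)
    for (name,) in results:
        print(name)
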
--
Regards,
Kingsley Idehen Weblog: http://www.openlinksw.com/blog/~kidehen
President & CEO
OpenLink Software Web: http://www.openlinksw.com