Hi,
>> my personal view is that providing simpler subsets of such a
language (an api) only leads to the fact that nobody will learn the
language (see pocket calculators,...)
This totally depends on the use case. Simpler subsets may not be
powerful enough for some use cases, and I guess people will at some
point realize that if they start small, then as soon as they start
adding new features (filters, joins, etc.) they are reinventing SPARQL.
I think the problem is that many people (especially web developers) have
not yet realized that SPARQL *IS* already a REST API and - because of
the standardization - the lingua franca for (Semantic) Web data access.
If your SPARQL query is slow, then don't blame SPARQL - blame the engine
(if it misses obvious optimizations), the tool, or the person that
created it. It's like saying FOL is useless because it's undecidable -
well, then stay away from certain features, change your modeling, do
work-arounds, or live with the consequences.
For example, I have put up a demo of a work-in-progress faceted
(spatial) browser "SemMap" [1] that generates SPARQL queries on the
client side:
Thanks to SPARQL, we can build up a data table with the information we
are interested in. In a next step, it will be possible to data-bind this
table to widgets (e.g. charts). In an even further step, we could add
an export for sgvizler [2] (also SPARQL based). So we can have an
ecosystem based purely on SPARQL. No mess with over 9000 REST APIs.
For example:
"Show me projects, corresponding partners in France and their amount of
funding". What's missing in SemMap is just UI elements that add
sorting and aggregation to the generated SPARQL query. (Yes, Freebase
can do that too).
Now show me how you would do that with a REST API ;)
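To make that concrete, here is a sketch of the kind of query such a UI could generate. The prefix and all class/property names (fp7:Project, fp7:partner, fp7:country, fp7:funding) are made up for illustration - they are not the actual FP7-PP vocabulary:

```sparql
# Hypothetical vocabulary; IRIs are illustrative only
PREFIX fp7: <http://example.org/fp7-pp/vocab#>

SELECT ?project ?partner (SUM(?amount) AS ?totalFunding)
WHERE {
  ?project a fp7:Project ;
           fp7:partner ?partner .
  ?partner fp7:country ?country ;
           fp7:funding ?amount .
  FILTER(?country = "France")
}
GROUP BY ?project ?partner
ORDER BY DESC(?totalFunding)
```

A UI only has to toggle the GROUP BY / ORDER BY clauses on and off; the rest of the query stays the same.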
There are also many tools (Sesame, Jena, and most likely a few others)
that allow you to create an (inefficient but working) SPARQL endpoint
yourself by implementing some interface method along the lines of
"lookup(subject, predicate, object)" or by defining your custom filter
functions - so technically, nowadays there is no challenge in offering a
SPARQL service based on a "so much simpler than SPARQL" data access
function, which will work well for small datasets.
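A minimal sketch of that idea in Python - the store and names are hypothetical, but this is essentially the single pattern-matching hook such adapter layers ask you to implement; the engine answers full SPARQL queries by issuing many such lookups:

```python
# Toy triple store exposing a lookup(subject, predicate, object)-style
# interface, enough for an engine to evaluate basic graph patterns.
# Data and names are illustrative, not any real adapter API.

TRIPLES = {
    ("ex:p1", "rdf:type", "ex:Project"),
    ("ex:p1", "ex:partner", "ex:acme"),
    ("ex:acme", "ex:country", "ex:France"),
}

def lookup(subject=None, predicate=None, obj=None):
    """Yield all triples matching the pattern; None acts as a wildcard."""
    for s, p, o in TRIPLES:
        if subject is not None and s != subject:
            continue
        if predicate is not None and p != predicate:
            continue
        if obj is not None and o != obj:
            continue
        yield (s, p, o)

# A SPARQL engine layered on top answers e.g. "?x ex:partner ?y" via:
matches = list(lookup(predicate="ex:partner"))
# -> [('ex:p1', 'ex:partner', 'ex:acme')]
```

Wrap something like this behind Jena's or Sesame's adapter interfaces and you have a working (if naive) SPARQL endpoint over any data source you can enumerate.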
If one's problem is merely that one gets incomplete results because of
QoS result set limits, then just use a proxy service or wrapper such as
(shameless self-advertisement) [3].
This transparently does pagination for your query in the background
(while caching the pages so you can resume your pagination from cache).
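The idea behind such a wrapper can be sketched in a few lines of Python - the LIMIT/OFFSET rewriting, the fetch_page callback, and the dict cache are illustrative assumptions, not the actual jena-sparql-api code:

```python
# Sketch of transparent pagination: repeatedly append LIMIT/OFFSET to a
# query, merge the pages, and cache each page so a run can be resumed.
# Illustrative only; not the jena-sparql-api implementation.

def paginate(query, fetch_page, page_size=1000, cache=None):
    """fetch_page(paged_query) -> list of result rows for that page."""
    cache = {} if cache is None else cache
    results, offset = [], 0
    while True:
        paged = f"{query} LIMIT {page_size} OFFSET {offset}"
        if paged not in cache:              # resume from cache if present
            cache[paged] = fetch_page(paged)
        page = cache[paged]
        results.extend(page)
        if len(page) < page_size:           # short page => last page
            return results
        offset += page_size

# Fake endpoint holding 5 rows, served at most page_size at a time:
def fake_endpoint(paged_query):
    rows = list(range(5))
    parts = paged_query.split()
    limit, offset = int(parts[-3]), int(parts[-1])
    return rows[offset:offset + limit]

all_rows = paginate("SELECT * WHERE { ?s ?p ?o }", fake_endpoint, page_size=2)
# -> [0, 1, 2, 3, 4]
```

To the caller this looks like a single unrestricted query, even though the endpoint only ever returns page_size rows at once.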
[1] http://semmap.aksw.org/odp/v2/fp7-pp/
[2] http://code.google.com/p/sgvizler/
[3] https://github.com/AKSW/jena-sparql-api
Cheers,
Claus
On 04/18/2013 01:21 PM, Jürgen Jakobitsch SWC wrote:
i think there's yet another point overlooked:
what we are trying to do is to create barrier-free means of
communication on the data level in a globalized world. this effort
requires a common language.
my personal view is that providing simpler subsets of such a language
(an api) only leads to the fact that nobody will learn the language (see
pocket calculators,...), although there's hardly anything easier than
writing a sparql query - it can be learned in a day.
i do not really understand where this "the developer can't sparql, so
let's provide something similar (easier)" idea comes from.
did anyone provide me with a wrapper for the english language? nope, had
to learn it.
wkr jürgen
On Thu, 2013-04-18 at 11:27 +0100, Leigh Dodds wrote:
Hi Hugh,
On Thu, Apr 18, 2013 at 10:56 AM, Hugh Glaser <[email protected]> wrote:
(Yes, Linked Data API is cool! And thanks for getting back to the main
subject, although I somehow doubt anyone is expecting to read anything about it
in this thread now :-) )
I'm still hoping we might return to the original topic :)
What this discussion, and in fact most related discussions about
SPARQL as a web service, seems to overlook is that there are several
different issues in play here:
* Whether SPARQL is more accessible to developers than other forms of
web API. For example, is the learning curve harder or easier?
* Whether offering query languages like SPARQL, SQL, YQL, etc is a
sensible option when offering a public API and what kinds of quality
of service can be wrapped around that. Or do other forms of API offer
more options for providing quality of service by trading off power of
query expression?
* Techniques for making SPARQL endpoints scale in scenarios where the
typical query patterns are unknown (which is true of most public
endpoints). Scaling and quality of service considerations for a public
web service and a private enterprise endpoint are different. Not all
of the techniques that people use, e.g. query timeouts or partial
results, are actually standardised so plenty of scope for more
exploration here.
* Whether SPARQL is the only query language we need for RDF, or for
more general graph databases, or whether there is room for other
forms of graph query languages.
The Linked Data API was designed to provide a simplified read-only API
that is less expressive than full SPARQL. The goals were to make
something easier to use, but not preclude helping developers towards
using full SPARQL if that's what they wanted. It also fills a
shortfall with most Linked Data publishing approaches, i.e. that
getting lists of things, possibly as a paged list, possibly with some
simple filtering, is not easy. We don't need a full graph query
language for that. The Linked Data Platform is looking at that area
too, but it's also got a lot more requirements it's trying to address.
Cheers,
L.
--
Leigh Dodds
Freelance Technologist
Open Data, Linked Data Geek
t: @ldodds
w: ldodds.com
e: [email protected]
--
Dipl. Inf. Claus Stadler
Department of Computer Science, University of Leipzig
Research Group: http://aksw.org/
Workpage & WebID: http://aksw.org/ClausStadler
Phone: +49 341 97-32260