Hi Sergio,

I'm not sure which deliberate decision you are referring to; is it Issue #35 on GitHub?
Anyway, the impl.sparql code is not about extending the API to allow running queries against a graph; in fact the API isn't extended at all. It is an implementation of the API that is backed by a SPARQL endpoint. Very often the triple store doesn't run in the same VM as the client, so an implementation of the API needs to talk to a remote triple store. This can be done with proprietary protocols or with standard SPARQL; this implementation uses SPARQL and can thus be used against any SPARQL endpoint.

Cheers,
Reto

On Tue, Mar 17, 2015 at 7:41 AM, Sergio Fernández <[email protected]> wrote:

> Hi Reto,
>
> thanks for updating us on the status of Clerezza.
>
> In the current Commons RDF API we deliberately skipped querying for the
> early versions.
>
> Although I'd prefer to keep this approach in the initial steps at ASF (I
> hope we can import the code soon...), that's for sure one of the next
> points to discuss in the project, where all that experience is valuable.
>
> Cheers,
>
> On 16/03/15 13:02, Reto Gmür wrote:
>
>> Hello,
>>
>> With the new repository, the Clerezza RDF commons previously in the
>> commons sandbox are now at:
>>
>> https://git-wip-us.apache.org/repos/asf/clerezza-rdf-core.git
>>
>> I will compare that code with the current status of the code in the
>> incubating rdf-commons project in a later mail.
>>
>> Now I would like to draw your attention to a big step forward towards
>> CLEREZZA-856. The impl.sparql modules provide an implementation of the
>> API on top of a SPARQL endpoint. Currently it only supports read access.
>> For usage examples see the tests in
>> /src/test/java/org/apache/commons/rdf/impl/sparql (
>> https://git-wip-us.apache.org/repos/asf?p=clerezza-rdf-core.git;a=tree;f=impl.sparql/src/test/java/org/apache/commons/rdf/impl/sparql;h=cb9c98bcf427452392e74cd162c08ab308359c13;hb=HEAD
>> )
>>
>> The hard part was supporting BlankNodes. The current implementation
>> handles them correctly even in tricky situations, but the code is not
>> yet optimized for performance. As soon as BlankNodes are involved, many
>> queries have to be sent to the backend. I'm sure some SPARQL wizard
>> could help make things more efficient.
>>
>> Since SPARQL is the only standardized method to query RDF data, I think
>> being able to façade an RDF Graph accessible via SPARQL is an important
>> use case for an RDF API, so it would be good to also have a
>> SPARQL-backed implementation of the API proposal in the incubating
>> commons-rdf repository.
>>
>> Cheers,
>> Reto
>>
>
> --
> Sergio Fernández
> Partner Technology Manager
> Redlink GmbH
> m: +43 660 2747 925
> e: [email protected]
> w: http://redlink.co
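P.S. For readers who haven't opened the repository yet, here is a minimal, purely illustrative sketch of the basic idea of façading a graph API over a SPARQL endpoint. This is not the actual impl.sparql code; the class and method names below are made up for the example. It only shows how a triple-pattern lookup, the kind of call a read-only Graph implementation receives, could be translated into a SPARQL SELECT query that would then be sent to the remote endpoint.

// Hypothetical sketch only -- not the actual impl.sparql code.
public class PatternToSparqlSketch {

    /**
     * Builds a SELECT query for a triple pattern. A null component is
     * treated as a wildcard and becomes a variable in the query.
     * IRIs are expected in angle brackets, literals already quoted.
     */
    static String buildQuery(String subject, String predicate, String object) {
        String s = (subject == null) ? "?s" : subject;
        String p = (predicate == null) ? "?p" : predicate;
        String o = (object == null) ? "?o" : object;
        return "SELECT * WHERE { " + s + " " + p + " " + o + " . }";
    }

    public static void main(String[] args) {
        // Ask for all triples with a given predicate; a real
        // implementation would send this query over HTTP to the
        // configured SPARQL endpoint and turn each result binding
        // back into the Triple objects of the API.
        System.out.println(buildQuery(
                null,
                "<http://xmlns.com/foaf/0.1/name>",
                null));
        // Prints: SELECT * WHERE { ?s <http://xmlns.com/foaf/0.1/name> ?o . }
    }
}

The real code additionally has to map the result bindings back to API terms, which is where the BlankNode bookkeeping mentioned in the quoted mail comes in.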
