Juan,

You will have to manage the transactions when it's a local dataset. For remote, the transactional characteristics are whatever the remote endpoint provides. For Fuseki, for example, each HTTP operation is a transaction; there aren't multiple-operation transactions.

You could write an implementation of DatasetAccessor as a wrapper that manages a transaction per operation. DatasetGraphWithLock can help if it's plain in-memory storage. That way, local and remote have similar semantics.
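The per-operation wrapper can be sketched as below. This is an illustration of the pattern only, not Jena API: `Txn` here is a hand-rolled stand-in for Jena's `Transactional` (begin/commit/abort/end), and `inWriteTxn` is an assumed helper name. A real wrapper would implement `DatasetAccessor`, delegate to `DatasetAccessorFactory.create(dataset)`, and route each mutating method (add, putModel, deleteModel, ...) through something like `inWriteTxn`.

```java
import java.util.function.Supplier;

// Stand-in for Jena's Transactional interface (illustration only).
interface Txn {
    void begin(boolean write);
    void commit();
    void abort();
    void end();
}

// Brackets each single operation in its own write transaction,
// which is the semantics a Fuseki endpoint gives you per HTTP request.
class PerOperationTxnWrapper {
    private final Txn txn;

    PerOperationTxnWrapper(Txn txn) {
        this.txn = txn;
    }

    // Run one operation inside a write transaction; abort on failure.
    <T> T inWriteTxn(Supplier<T> op) {
        txn.begin(true);
        try {
            T result = op.get();
            txn.commit();
            return result;
        } catch (RuntimeException e) {
            txn.abort();
            throw e;
        } finally {
            txn.end();
        }
    }
}
```

Each call then commits (or aborts) on its own, so client code never sees an open transaction — matching the one-operation-per-transaction behaviour of the remote case.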

Digression:

The whole distributed-transactions thing creates a coupling between client and server. If the purpose is a more tightly coupled system, e.g. an enterprise application, more rules may be acceptable for the client. If the assumption is that the client sets transaction boundaries, the client is much more closely coupled to the server. It's not just the proprietary effects - there are also issues with the client now being able to interfere with the operation of the server in new and interesting ways, like starting a write transaction ... and keeping it open for an extended period of time.

ETags (optimistic concurrency support) are an interesting way to manage this. The client needs to be conflict-aware, though, so it's not a transparent solution.
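The ETag idea can be illustrated with a small compare-and-set sketch (assumed names, not Jena API): the client reads data together with a version tag, and a write succeeds only if the tag is still current. Over HTTP this is what a conditional `PUT` with an `If-Match: "etag"` header does against a server that supports it, with a 412 Precondition Failed response when another writer got there first.

```java
// Minimal sketch of ETag-style optimistic concurrency.
// 'version' plays the role of the ETag the server would return.
class VersionedGraph {
    private String data = "";
    private long version = 0;

    synchronized long currentTag() { return version; }

    synchronized String read() { return data; }

    // Succeeds only if no other writer has updated the data since
    // expectedTag was read (the HTTP analogue: If-Match, else 412).
    synchronized boolean putIfMatch(long expectedTag, String newData) {
        if (expectedTag != version) {
            return false;   // conflict: caller must re-read and retry
        }
        data = newData;
        version++;
        return true;
    }
}
```

The client has to handle the `false` (conflict) case itself - re-read, reconcile, retry - which is exactly the sense in which this is not a transparent solution.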

        Andy


On 02/12/14 23:21, Juan Sequeda wrote:
Thanks Andy. This is exactly what I was looking for.

When connecting to a SPARQL endpoint, I'm doing:

DatasetAccessor accessor = DatasetAccessorFactory.createHTTP("http://....");

And it seems that I could do the same for a Dataset:

String directory = "";
Dataset dataset = TDBFactory.createDataset(directory);
DatasetAccessor accessor = DatasetAccessorFactory.create(dataset);

Regardless of whether it's a SPARQL Endpoint or a local dataset, it seems that I
can use methods like accessor.add(nameURI, m);

Before, per my previous email, when connecting to a Dataset, I was being
explicit with the transactions: begin, commit, end, etc.

If I connect to a Dataset through a DatasetAccessor, should I also begin,
commit, etc the dataset?

Thanks!



Juan Sequeda
+1-575-SEQ-UEDA
www.juansequeda.com

On Mon, Dec 1, 2014 at 4:26 PM, Andy Seaborne <[email protected]> wrote:

DatasetAccessor

(this is the SPARQL Graph Store protocol, client side)
which works for local and remote.

You may also be interested in Stephen's work on a SPARQL client interface
to pull all the different aspects together:

https://svn.apache.org/repos/asf/jena/Experimental/jena-client/

         Andy


On 01/12/14 20:17, Juan Sequeda wrote:

All,

I currently have a local instance of Jena TDB, which I create:

String directory = "";
Dataset dataset = TDBFactory.createDataset(directory);

And I can insert a Jena model to the dataset:

String nameURI = "...";
Model m = ... ;
dataset.begin(ReadWrite.WRITE);
try {
    dataset.addNamedModel(nameURI, m);
    dataset.commit();
    return true;
} catch (Exception e) {
    dataset.abort();
    return false;
} finally {
    dataset.end();
}

And I can delete a model, check if a model exists, etc. All good.

Now, I would like to do the same, but connected to a remote SPARQL
endpoint, which is also Jena TDB/Fuseki, that I have control of.

What is the best way to do this? I can't seem to find documentation or
example code on this.

I thought I might ask here before I try to figure out how to do this on my
own.

Thanks!

Juan Sequeda
www.juansequeda.com
