Thanks... So, for a short-term solution, is the below correct?

1) Create a copy of UpdateRemote from the Fuseki codebase
2) Create an update request for my dataset and invoke the (client-side)
UpdateRemote (which appears to handle the content-type setting etc.)
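
For step 2, roughly this is what I have in mind - a minimal sketch using plain
java.net rather than the Fuseki class, with the endpoint URL and the update
string as placeholders:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class SparqlUpdatePost {
    public static void main(String[] args) throws Exception {
        // Placeholder update and endpoint - substitute the real ones.
        String update =
            "INSERT DATA { <http://example/s> <http://example/p> <http://example/o> }";
        URL endpoint = new URL("http://localhost:3030/dataset/update");

        // POST the update text with the SPARQL Update content type.
        HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/sparql-update");

        OutputStream out = conn.getOutputStream();
        out.write(update.getBytes("UTF-8"));
        out.close();

        // 2xx means the update was accepted; anything else is an error.
        System.out.println("HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}

Is that essentially the same request that UpdateRemote ends up sending?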

Also - are there any additional things to watch out for when remotely
updating a whole dataset at a time this way? (i.e. any of the
default and/or named graphs may have changed - does the current update
handling on the server side take care of this transparently?)

Has there been any thought/discussion about a connection-based API similar to
JDBC, where the details of the storage (file, mem, SDB, TDB...) are
abstracted away so that interaction via ARQ can focus on the operation at
hand?
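
To make the question concrete, I mean something along these lines - purely
hypothetical, all of the names below are made up and loosely modelled on JDBC:

// Hypothetical API - none of these classes exist as far as I know.
RdfConnection conn = RdfConnectionFactory.connect("http://localhost:3030/dataset");
try {
    conn.update("INSERT DATA { <http://example/s> <http://example/p> <http://example/o> }");
    ResultSet results = conn.query("SELECT * { ?s ?p ?o }");
    // ... consume results ...
} finally {
    conn.close();
}

The same client code would then work whether the dataset is in-memory, TDB,
SDB or behind a remote endpoint.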

Venkat

On Wed, Jan 18, 2012 at 8:17 AM, Andy Seaborne <[email protected]> wrote:

> On 17/01/12 17:38, Venkat Krishnamurthy wrote:
>
>> All
>>
>> I've checked the archives/javadoc but cannot find instructions on how to
>> programmatically update a remote sparql endpoint (fuseki/tdb) from Java.
>>
>> The tdb page on incubator has a one-line 'Use UpdateRemote.execute' but I
>> cannot find the class - is that in ARQ or tdb?
>>
>> What's the best way to do this if I have an in-memory dataset that I want
>> to persist?
>>
>> Thanks
>> Venkat
>>
>>
> The class UpdateRemote is (currently) in Fuseki - it depends on
> org.apache.http. It needs to move to ARQ sometime.
>
> You just POST the update as a string with content type
> application/sparql-update.  Or, since the code is only ~30 lines long, maybe,
> for now, take a copy to avoid depending on Fuseki on the client side.
>
> The query remote could do with rewriting to use org.apache.http as well.
>
> UpdateExecutionFactory could acquire a .createRemote and have a remote
> implementation of UpdateProcessor based on this code.
>
> See also the script "s-update"
>
>        Andy
>
