Hi,
I want to update a large RDF store with 10 billion triples once a week.
The triples to be inserted or deleted are saved in documents.
There is no variable binding or blank nodes in the documents.
So I guess the best-fit SPARQL Update operations are
INSERT DATA / DELETE DATA.
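For example, since the documents contain only ground triples, I would apply each weekly change set with statements roughly like the following (the graph IRI and triples are just placeholders):

```sparql
# Add this week's new triples to the target graph
INSERT DATA {
  GRAPH <http://example.org/weekly> {
    <http://example.org/s1> <http://example.org/p1> "value 1" .
    <http://example.org/s2> <http://example.org/p2> <http://example.org/o2> .
  }
} ;
# Remove the triples the document marks as deleted
DELETE DATA {
  GRAPH <http://example.org/weekly> {
    <http://example.org/s3> <http://example.org/p3> "old value" .
  }
}
```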
What is the best way to do this?
Should I use a JDBC connection pool or HTTP?
Should I use 'MODIFY GRAPH <graph-iri>' with DELETE/INSERT templates, or INSERT DATA / DELETE DATA?
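To make that second question concrete, the alternative form I have in mind is the SPARUL-style MODIFY that Virtuoso supports, roughly like this (placeholder graph IRI and triples; my understanding is that MODIFY takes delete/insert templates, whereas INSERT DATA / DELETE DATA accept ground triples only):

```sparql
-- Combined delete+insert on one named graph in a single statement
MODIFY GRAPH <http://example.org/weekly>
DELETE { <http://example.org/s1> <http://example.org/p1> "old value" . }
INSERT { <http://example.org/s1> <http://example.org/p1> "new value" . }
```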
Is it possible to run concurrent update jobs?
Best,
Gang
_______________________________________________
Virtuoso-users mailing list
Virtuoso-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/virtuoso-users