Hi David,

This is one of the main use cases for:

https://afs.github.io/rdf-delta/

and there is a Fuseki component in that build that incorporates the mechanism needed for two or more Fuseki servers to propagate changes [3] (a custom service, /dataset/patch, that accepts patch files and applies them).
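
For example (a sketch only - "localhost:3030" and the patch file name are placeholders, and the service may care about the content type), pushing a patch to that service looks something like:

    curl -XPOST --data-binary @changes.rpatch \
         http://localhost:3030/dataset/patch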

The work has two parts - the data format needed to propagate changes (RDF Patch [1]) and a patch log server [2].
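
To give a flavour of the patch format (a hand-written sketch with made-up example IRIs - see [1] for the real definition):

    TX .
    A <http://example/s> <http://example/p> "abc" .
    D <http://example/s> <http://example/q> <http://example/o> .
    TC .

TX/TC mark the transaction boundaries, and A/D add or delete a triple (or quad).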

Keeping these two components separate is important because not all situations will want a patch server - distribution using Hazelcast or Kafka, or publishing changes in the style of Atom/RSS, being good examples (see the sketch below). By having a defined patch format, there is no reason why the various triplestores even need to all be Jena-based.
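
As an illustration of the Kafka case (a sketch only - the broker address, topic name, and file name are assumptions; it just treats the serialized patch as opaque bytes):

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class PublishPatch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");  // assumed broker
            props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
            // Read a serialized patch and publish it as opaque bytes.
            byte[] patch = Files.readAllBytes(Paths.get("changes.rpatch"));
            try (KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("dataset-patches", patch));
            }
        }
    }

Each Fuseki node would then run a matching consumer that reads patches from the topic and applies them, e.g. via the /dataset/patch service above.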

Apache Licensed, not part of the Jena project.

Let me know what you think:

    Andy

[1] https://afs.github.io/rdf-delta/rdf-patch.html
[2] https://afs.github.io/rdf-delta/rdf-patch-logs.html
[3] https://github.com/afs/rdf-delta/tree/master/rdf-delta-fuseki

Disclosure: this is part of my $job at TopQuadrant.

There is no reason not to start publishing it to Maven Central - I just haven't had the need so far.

The RDF patch work is based on previous work with Rob Vesse.

On 21/02/18 12:32, DAVID MOLINA ESTRADA wrote:
Hi,

I want to build an HA web application based on a Fuseki server in HA too. My idea 
is to create a Fuseki Docker image and deploy as many instances as I need. For 
querying, all is OK, but I am trying to define a mechanism (possibly based on 
topics with Hazelcast or Kafka) to distribute changes to all nodes (both 
uploaded files and SPARQL updates).

Any recommendations or best practices? Has anybody done anything similar?

Thanks

David Molina Estrada
