On 24/08/15 16:15, Jason Levitt wrote:
> Great info, thanks.
>
>> Some organisations achieve this by running a load balancer in front
>> of several replicas and then co-ordinating the update process.
>
> So, they're running the same query against the other nodes behind the
> load balancer to keep things in sync?
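As a sketch of the co-ordination described above (the hostnames, port, dataset name and two-replica setup are illustrative assumptions, not details from this thread), the same SPARQL Update can be applied to every replica directly, while read queries go through the load balancer:

```shell
#!/usr/bin/env sh
# Apply the identical SPARQL Update to each replica so they stay in
# sync; reads are served through the load balancer in front of them.
# Hostnames and the dataset name "ds" are hypothetical.
UPDATE='INSERT DATA { <http://example/s> <http://example/p> "o" }'

for host in fuseki1.example.org fuseki2.example.org; do
  curl --fail \
       --data-urlencode "update=${UPDATE}" \
       "http://${host}:3030/ds/update"
done
```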
>> You can do a live backup.
>
> So, an HTTP POST to /$/backup/*{name}* initiates a backup, and that
> results in a "gzip-compressed N-Quads file".
> What does a "restore" from that file look like?
You just load it into an empty database (tdbloader etc).
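For example (a sketch only: the dataset name "ds", the file name and the database path are assumptions), trigger the live backup through the admin API, then bulk-load the resulting dump into an empty TDB database:

```shell
#!/usr/bin/env sh
# Trigger a live backup of the dataset "ds" via the Fuseki admin API.
# The server writes a gzip-compressed N-Quads file into its backups
# directory.
curl --fail -X POST 'http://localhost:3030/$/backup/ds'

# Restore: with the server stopped, bulk-load the dump into an empty
# TDB database directory, then point Fuseki at that location.
tdbloader --loc /var/lib/fuseki/databases/ds-restored ds_2015-08-24.nq.gz
```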
Andy
> -J
> On Mon, Aug 24, 2015 at 4:08 AM, Rob Vesse <[email protected]> wrote:
>> Andy already answered 1, but more on 2:
>>
>> Assuming you use TDB, in-memory checkpointing already happens. TDB
>> caches data in memory but is fundamentally a persistent, disk-backed
>> database that uses write-ahead logging for transactions and failure
>> recovery, so this already happens automatically and is below the
>> level of Fuseki (you get this behaviour wherever you use TDB,
>> provided you use it transactionally, which Fuseki always does).
>>
>> Rob
>> On 24/08/2015 05:51, "Jason Levitt" <[email protected]> wrote:
>>> Just wondering if there are any projects out there
>>> to provide:
>>>
>>> 1) HA (high availability) configuration of Fuseki, such
>>> as mirroring or hot/standby failover.
>>>
>>> 2) Some kind of on-the-fly backup of Fuseki when it's
>>> running in RAM. This might be similar to how Hadoop
>>> 1.x "checkpoints" the in-RAM namenode data structures.
>>>
>>> BTW, are there any tools for testing the consistency of the
>>> Fuseki data structures when Fuseki is temporarily halted?
>>>
>>> Cheers,
>>> Jason