Hi Henrik,

> I've been thinking about this issue lately, apart from extra safety from
> drive failures the fact that you have realtime (or almost realtime) backup
> can be used to generate statistics etc on another machine which offloads
> the main machine.
> 
> What would be the easiest way to implement this in PicoLisp (if it is not
> already)?

In fact it is already.

I've been using the built-in DB replication of PicoLisp for about 10 years.
You can use it as follows:

1. In your application, pass a "journal" argument to the initial 'pool'
   call. That is, if the third argument to 'pool' is given, it is used
   as an "asynchronous replication journal":

      (pool "db/myApp/" *Dbs "fifo/myApp")

   By convention, I use a local "fifo/" directory to hold the temporary
   transaction storage.

   In the following, each and every change to the database will be
   written to that file (with proper file locking, and controlled within
   the 'commit' transaction logic). This happens on a very low level, on
   the DB block level, for each block in the database that gets modified.

   Without further precautions, this file would grow infinitely.
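   The append-under-lock idea described above (each commit appends the
   modified blocks to the journal while holding a file lock) can be
   sketched in plain shell. This is only an illustration of the pattern,
   not PicoLisp's actual implementation; it assumes 'flock' from
   util-linux, and the file name is a demo placeholder:

```shell
# Illustration of append-under-lock: two writers append to the same
# journal file, each taking an exclusive lock on it first.
set -e
: > /tmp/journal_demo
( flock 9; printf 'block-1\n' >&9 ) 9>> /tmp/journal_demo
( flock 9; printf 'block-2\n' >&9 ) 9>> /tmp/journal_demo
wc -l < /tmp/journal_demo    # two journaled entries
```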

2. The script which starts the application, i.e. which does:

      ./pil myApp/main.l lib/app.l  -main -go -wait >>log/myApp  2>&1 &

   also starts an 'ssl' process (included in the PicoLisp distribution):

      bin/ssl <remote-ip> 443 '<port>/!replica' key/myApp fifo/myApp blob/myApp/ 20

   This ssl process checks the journal file "fifo/myApp" every 20 seconds (the
   last argument) and, if it is non-empty, tries to transfer its contents
   atomically, and properly encrypted, to the host at <remote-ip>.
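   The poll-and-ship logic can be sketched as a single iteration in shell.
   This is a hypothetical stand-in, not the actual 'bin/ssl' code; the
   'transfer' function here is a placeholder for the real encrypted
   network upload, and the paths are demo values:

```shell
# One iteration of the sender logic: if the journal is non-empty,
# transfer it, then truncate it, so it cannot grow forever.
set -e
mkdir -p /tmp/fifo_demo
JOURNAL=/tmp/fifo_demo/myApp
printf 'journaled blocks' > "$JOURNAL"
transfer() { cp "$1" "$1.shipped"; }        # placeholder for bin/ssl's job
if [ -s "$JOURNAL" ]; then
   transfer "$JOURNAL" && : > "$JOURNAL"    # truncate only on success
fi
cat "$JOURNAL.shipped"    # the shipped copy holds the journal contents
```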

3. On the host <remote-ip> there is a directory "rpl/", to receive all
   replicated databases. On that machine, for each client to be replicated,
   a process "bin/replica" (included in the PicoLisp distribution) is running:

      bin/replica <port> key/myApp "" rpl/db/myApp/ rpl/blob/myApp/ 4 1 1 1 1 2 3 4 4 4 4 5 5 5 6 3 4 4 4 3 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 0 6 4 &

   (Those many numeric arguments at the end are a bit awkward. They correspond
   to the *Dbs values of the client application.)

With this setup, the client 'ssl' tries to connect to the remote 'replica',
negotiates authentication, and transfers database blocks and blob files
atomically. This means that if the transfer is interrupted or fails for some
other reason, the replica will not commit the changes.
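The atomic commit on the receiving side can be illustrated with the classic
write-to-temp-then-rename pattern. This shows the principle only, not the
actual 'replica' implementation; file names are demo placeholders:

```shell
# Receive into a temporary file first; only a complete, successful
# transfer is renamed into place. rename(2) is atomic on POSIX, so an
# interrupted transfer never leaves a half-written block file behind.
set -e
mkdir -p /tmp/rpl_demo
printf 'block data' > /tmp/rpl_demo/incoming.tmp
mv /tmp/rpl_demo/incoming.tmp /tmp/rpl_demo/db.blk
cat /tmp/rpl_demo/db.blk    # prints "block data"
```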

The above works very reliably. As I said, I've been using this for 10 years
in several commercial applications, and I periodically compare the
databases and their replicas with 'md5sum'. It has always survived network
interruptions, server restarts etc.
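The periodic consistency check can be done with a checksum comparison along
these lines (a sketch: the demo directories here stand in for the real
db/myApp/ on the client and rpl/db/myApp/ on the replica host):

```shell
# Compare checksum lists of a database directory and its replica.
set -e
mkdir -p /tmp/md5_demo/db /tmp/md5_demo/rpl
printf 'block' > /tmp/md5_demo/db/1
printf 'block' > /tmp/md5_demo/rpl/1
( cd /tmp/md5_demo/db  && md5sum * ) > /tmp/md5_demo/master.md5
( cd /tmp/md5_demo/rpl && md5sum * ) > /tmp/md5_demo/replica.md5
diff /tmp/md5_demo/master.md5 /tmp/md5_demo/replica.md5 \
   && echo "databases identical"
```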

♪♫ Alex
-- 
UNSUBSCRIBE: mailto:picolisp@software-lab.de?subject=Unsubscribe
