On 6 May 2016, at 1:32pm, Stephan Buchert <stephanb007 at gmail.com> wrote:

> The largest database file has now grown to about 180 GB. I need to have
> copies of the files at at least two different places. The databases are
> updated regularly as new data from the satellites become available.
> 
> Having the copies of the file synced becomes increasingly tedious
> as their sizes increase. Ideal would be some kind of
> incremental backup/sync facility.

Believe it or not, the fastest way to synchronise the databases is not to 
synchronise the databases at all.  Instead, keep a log of the commands used to 
modify the database.  You might, for example, modify the library you use for 
INSERT, DELETE and UPDATE commands so that it executes each command and also 
saves it to another 'commandLog' table.  Or perhaps just append those commands 
to a plain text file.
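A minimal sketch of the table-based variant, in Python's sqlite3 module.  The
wrapper names (open_logged, logged_execute) and the commandLog column layout are
my own invention; only the idea of a 'commandLog' table comes from the advice
above:

```python
import json
import sqlite3

def open_logged(path):
    """Open the database and make sure the commandLog table exists.

    Column layout is an assumption: an ordered id, the SQL text,
    and the bound parameters serialised as JSON.
    """
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS commandLog ("
               " id INTEGER PRIMARY KEY,"
               " sql TEXT NOT NULL,"
               " params TEXT NOT NULL)")
    return db

def logged_execute(db, sql, params=()):
    """Run one write statement and record it for later replay."""
    db.execute(sql, params)
    db.execute("INSERT INTO commandLog (sql, params) VALUES (?, ?)",
               (sql, json.dumps(list(params))))
    db.commit()
```

Routing all writes through logged_execute() means the commandLog grows in step
with the data, and it is the log, not the 180 GB file, that has to travel.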

Then, instead of sending any data to the other sites, you send them this list 
of commands and have them execute it.
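The replay side can be equally small.  This sketch assumes the primary stores
its writes as (id, sql, params-as-JSON) rows in the commandLog table mentioned
above; the function name and batch shape are illustrative:

```python
import json
import sqlite3

def replay(db, log_rows):
    """Apply logged write commands, in order, on a replica.

    log_rows: (id, sql, params_json) tuples fetched from the primary,
    e.g. with SELECT id, sql, params FROM commandLog WHERE id > ?
    ORDER BY id.  Returns the last id applied, so the replica can
    ask only for newer entries next time.
    """
    last = None
    with db:  # one transaction: the whole batch applies or none of it does
        for row_id, sql, params_json in log_rows:
            db.execute(sql, json.loads(params_json))
            last = row_id
    return last
```

Keeping the highest applied id per replica turns the scheme into an incremental
sync: each site only ever fetches and executes the commands it has not seen.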

Once you start implementing this you'll see that it's more complicated than I 
have described, but the text of your post suggests that you're a good enough 
programmer to do it properly.

This assumes that the structure and primary keys of the tables which hold data 
are constructed in such a way that the order in which new data is entered 
doesn't matter.
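One way to get that order-independence is a natural composite primary key plus
idempotent inserts, so replaying the same commands in any interleaving
converges on the same rows.  The readings table and its columns here are made
up for illustration:

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Natural composite key: which satellite, which timestamp.  Unlike an
# autoincrement rowid, this key does not depend on insertion order.
db.execute("CREATE TABLE readings ("
           " sat TEXT, t INTEGER, value REAL,"
           " PRIMARY KEY (sat, t))")

stmt = "INSERT OR REPLACE INTO readings VALUES (?, ?, ?)"
# Apply commands with a duplicate and out of time order...
for params in [("B", 2, 0.7), ("A", 1, 0.5), ("B", 2, 0.7)]:
    db.execute(stmt, params)
# ...the table still converges to the same two rows.
```

With AUTOINCREMENT keys, by contrast, two sites executing the same log could
end up assigning different ids, which is exactly the ordering dependence to
avoid.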

Simon.
