On 19 Oct 2012, at 8:16am, Ben Morris <magospiet...@gmail.com> wrote:

> During Upload, the local database is scanned for rows where the Sync flag
> is true. Each row like this is either updated or inserted into the master
> database (depending on whether a row can be found with the same PK).
> 
> During Download, every local table is compared, row-by-row, field-by-field,
> to the corresponding table in the master database (both tables are loaded
> into memory as array structures to speed up the comparison). If any
> differences are found, or the local row is missing, the local row is
> updated/inserted using the data from the master database
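
(As an aside, that Upload step is a classic "upsert": try the UPDATE first, and INSERT only if no row with that PK exists. A minimal sketch in Python using the standard-library sqlite3 module — the table `items` and columns `id`, `payload`, `sync` are illustrative names, not taken from your actual schema:)

```python
import sqlite3

def upload(local: sqlite3.Connection, master: sqlite3.Connection) -> None:
    """Push every locally changed row (sync = 1) to the master database,
    updating by primary key where the row exists and inserting otherwise.
    Schema names here are illustrative only."""
    rows = local.execute(
        "SELECT id, payload FROM items WHERE sync = 1"
    ).fetchall()
    for row_id, payload in rows:
        # Try to update an existing master row with the same PK.
        cur = master.execute(
            "UPDATE items SET payload = ? WHERE id = ?", (payload, row_id)
        )
        if cur.rowcount == 0:
            # No such PK on the master: insert instead.
            master.execute(
                "INSERT INTO items (id, payload) VALUES (?, ?)",
                (row_id, payload),
            )
        # Clear the flag so the row is not re-uploaded next time.
        local.execute("UPDATE items SET sync = 0 WHERE id = ?", (row_id,))
    master.commit()
    local.commit()
```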

This is essentially the only way to sync multiple copies of a database: if 
you are going to keep distributed copies, then this is how to do it.  However, 
you do not mention the possibility of two different 'child' copies both 
modifying the same row.  You will need a strategy for reconciling cases where 
this happens and the new rows don't exactly match.
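
(One common reconciliation strategy is "last write wins": give each row a last-modified timestamp and let the newer version survive. A rough sketch, again with illustrative names — `items`, `id`, `payload`, `mtime` — rather than anything from your schema:)

```python
import sqlite3

def reconcile(master: sqlite3.Connection,
              row_id: int, child_payload: str, child_mtime: float) -> None:
    """Apply a child's version of a row to the master using last-write-wins:
    the row with the newer 'mtime' survives.  Schema names are illustrative."""
    row = master.execute(
        "SELECT mtime FROM items WHERE id = ?", (row_id,)
    ).fetchone()
    if row is None:
        # Master has never seen this PK: just insert the child's row.
        master.execute(
            "INSERT INTO items (id, payload, mtime) VALUES (?, ?, ?)",
            (row_id, child_payload, child_mtime),
        )
    elif child_mtime > row[0]:
        # Child's copy is newer: overwrite the master's version.
        master.execute(
            "UPDATE items SET payload = ?, mtime = ? WHERE id = ?",
            (child_payload, child_mtime, row_id),
        )
    # Otherwise the master's copy is newer (or the same) and is kept.
    master.commit()
```

This does silently discard the older of two conflicting edits, so it only suits data where that loss is acceptable; otherwise you need field-level merging or a manual conflict queue.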

> Ignoring the obvious fact that this will not resolve our locking and
> malformation problems, I'm also deeply concerned about the maintainability
> and supportability of such code. Given the fact that I cannot get any
> traction with my management team, I was hoping an appeal to authority might
> make them see sense.

Would it help if the primary author of SQLite himself said whether SQLite is 
suitable or not?

<http://www.sqlite.org/whentouse.html>

See the section at the bottom titled "Situations Where Another RDBMS May Work 
Better".  I don't know whether it fits your situation exactly, but it does 
look relevant.  You didn't cite this page in your post, so I thought you 
should see it.

However, I see Dr Hipp has just posted to this thread himself, so he may have 
comments that address your situation exactly.

Simon.
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users