On Sat, 02 May 2015 19:24:19 -0700
Scott Doctor <scott at scottdoctor.com> wrote:

> Each computer has its own copy of the program, but may 
> share the database file which may be located on any of the computers.
> So I am wondering whether I should implement my own locking logic in
> my program.

Yes, you need to arrange within the application not to have two
processes writing to the remote database at the same time.  The locking
SQLite uses depends on semantics not provided by the remote
filesystem.  

With a local filesystem, when two processes are updating a file, each
process's update is visible to the other in the unified buffer cache
supplied by the OS.  In a networked filesystem, there is no unified
buffer cache: updates from process A, while manifest in the file, are
not necessarily reflected in the cache used by process B on another
machine.  A subsequent update from B based on its outdated cache could
well create an incoherent file image.  

The only safe answer is to arrange for each update to begin by
locking the file in the application.  Then open the database, update
it, close the database, and unlock the file.  By the same token,
after any update every reading process should close and re-open the
database before continuing to rely on it.
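
To make that concrete, here is a minimal sketch of the writer side in
Python.  The paths, the lock-file name, and the retry interval are my
own illustrative choices, not anything SQLite prescribes; also note
that O_EXCL creation is only dependable on network filesystems that
actually support it (old NFS implementations did not):

    import os
    import sqlite3
    import time

    DB_PATH = "shared.db"          # assumed path on the shared volume
    LOCK_PATH = DB_PATH + ".lock"  # assumed adjacent lock file

    def acquire_lock():
        """Spin until we atomically create the lock file."""
        while True:
            try:
                fd = os.open(LOCK_PATH,
                             os.O_CREAT | os.O_EXCL | os.O_WRONLY)
                os.close(fd)
                return
            except FileExistsError:
                time.sleep(0.1)    # another process holds the lock

    def release_lock():
        os.remove(LOCK_PATH)

    def update(sql, params=()):
        """Lock, open, update, close, unlock -- never keep the
        database open across the lock boundary."""
        acquire_lock()
        try:
            conn = sqlite3.connect(DB_PATH)
            with conn:             # commits, or rolls back on error
                conn.execute(sql, params)
            conn.close()
        finally:
            release_lock()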

One way to do that would be to keep an update.count file adjacent to
the database file.  Lock it, read it, increment it, write it back,
proceed with the database update, and release the lock.  Before each
read, take the lock and check the counter value.  If it has changed,
close and re-open the database.  Then execute the SELECT and release
the lock.
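
Continuing the sketch above, the counter scheme might look like the
following; update.count holds a plain integer, and the names and the
Reader class are inventions of mine for illustration only:

    COUNT_PATH = "update.count"

    def read_count():
        try:
            with open(COUNT_PATH) as f:
                return int(f.read() or "0")
        except FileNotFoundError:
            return 0

    def locked_update(sql, params=()):
        """Writer: lock, bump the counter, update, unlock."""
        acquire_lock()
        try:
            with open(COUNT_PATH, "w") as f:
                f.write(str(read_count() + 1))
            conn = sqlite3.connect(DB_PATH)
            with conn:
                conn.execute(sql, params)
            conn.close()
        finally:
            release_lock()

    class Reader:
        """Reader: re-open the database whenever the counter moved."""
        def __init__(self):
            self.seen = -1          # force an open on first use
            self.conn = None

        def select(self, sql, params=()):
            acquire_lock()
            try:
                count = read_count()
                if count != self.seen:
                    if self.conn is not None:
                        self.conn.close()
                    self.conn = sqlite3.connect(DB_PATH)
                    self.seen = count
                return self.conn.execute(sql, params).fetchall()
            finally:
                release_lock()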

That's just an outline; I might have overlooked something.  The
complexity and pitfalls explain why applications that need
inter-machine consistency connect to a DBMS daemon that manages its
database files locally.  

HTH.  

--jkl
