Ben Morris wrote:
> Potentially we could have around fifty client applications all
> attempting simultaneous writes to a SQLite database stored on a file
> server.

In that case, I would say quoting directly from the SQLite web page we've 
already mentioned should be sufficient to scare management (emphasis added):

"If you have many client programs accessing a common database over a network, 
you should consider using a client/server database engine instead of SQLite. 
SQLite will work over a network filesystem, but because of the latency 
associated with most network filesystems, *performance will not be great*. 
Also, the file locking logic of many network filesystems implementations 
contains bugs (on both Unix and Windows). If file locking does not work like it 
should, *it might be possible for two or more client programs to modify the 
same part of the same database at the same time, resulting in database 
corruption*."

Seems pretty clear to me from that statement that SQLite would not suit 
your system's requirements as a central database.
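
Even careful client-side locking would only paper over the problem.  About 
the best each client could do is something like the minimal Python sketch 
below (the UNC path and the events table are made up for illustration), and 
it still depends entirely on the network filesystem's locking actually 
working:

    import sqlite3

    # Hypothetical UNC path to the shared database on the file server.
    DB_PATH = r"\\fileserver\share\central.db"

    def record_event(message):
        # timeout=10.0 waits up to 10s when another client holds the lock;
        # isolation_level=None lets us manage the transaction explicitly.
        conn = sqlite3.connect(DB_PATH, timeout=10.0, isolation_level=None)
        try:
            # BEGIN IMMEDIATE takes the write lock up front, so we fail
            # cleanly with 'database is locked' rather than part-way in.
            conn.execute("BEGIN IMMEDIATE")
            conn.execute("INSERT INTO events(message) VALUES (?)", (message,))
            conn.execute("COMMIT")
        finally:
            conn.close()  # an uncommitted transaction is rolled back

None of that helps if the network filesystem's locking is broken: two 
clients can each believe they hold the write lock, and you get exactly the 
corruption the documentation warns about.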

Nick.

On Fri, Oct 19, 2012 at 4:21 PM, Nick Shaw <nick.s...@citysync.co.uk> wrote:

> Richard Hipp wrote:
> > Ben Morris <magospiet...@gmail.com> wrote:
> >> If anyone could share their honest opinions of both my suggested 
> >> approach, and my colleague's, I would be very grateful.
> >
> > [snip]
> > Using a separate SQLite database on each client to serve as a local
> > cache of the master database and then periodically synchronizing
> > with the master is a very reasonable thing to do in many situations.
> > Such an approach can be very bandwidth efficient (if implemented
> > correctly) and it has the huge advantage that it allows for
> > disconnected operation - it allows the clients to continue
> > functioning when the network goes down.
> > [snip]
>
> I completely agree; this is what I do in our database application -
> the clients have a local SQLite copy of a central MSSQL database, so
> the clients can operate with the data when the network goes down
> (which on some customers' sites is a fairly regular occurrence!).  To
> avoid having to rectify duplicate rows / primary key violations / etc.
> afterwards, we just mark the local database as 'read-only' when the
> link to MSSQL goes down, so no changes can be made to it 'offline'.
> Whenever the link is up, we poll the MSSQL database for changes every
> 15 seconds or so, so the clients' SQLite copies are pretty much always
> in sync (there's a rough sketch of this loop below).  Polling every 15
> seconds adds a bit of network traffic, but data doesn't change very
> often in our application, so there's rarely anything more than a few
> COUNT queries going on.  Plus it means the client-side app can usually
> just query the local SQLite database instead of talking over the
> network to MSSQL all the time, which can dramatically speed things up
> on slow networks.
>
> So I see no fundamental problem in using this kind of approach.  But
> as Richard Hipp says, it depends on what your system's needs are.
> Operating on an SQLite database from multiple networked clients
> (especially on a Windows network) with data that changes a lot is not
> advised when performance and concurrency are important factors.  (See
> http://www.sqlite.org/whentouse.html for specific details, as Simon
> Slavin recommended.)
>
> Nick.
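
Here's the polling loop from my message above, reduced to a rough Python 
sketch.  It's illustration only, not our real code: the widgets table, the 
version-number change tracking and fetch_central_changes() (which stands in 
for the real MSSQL query) are all made up:

    import sqlite3
    import time

    LOCAL_DB = "local_cache.db"   # per-client local copy
    POLL_SECONDS = 15

    # Set while the link to the central database is down; the rest of
    # the app would check this before letting the user change anything.
    read_only = False

    def fetch_central_changes(since_version):
        # Stand-in for the real MSSQL query, e.g.:
        #   SELECT id, name, version FROM widgets WHERE version > ?
        # It should raise an exception when the link is down.
        return []

    def sync_forever():
        global read_only
        local = sqlite3.connect(LOCAL_DB)
        local.execute("CREATE TABLE IF NOT EXISTS widgets("
                      "id INTEGER PRIMARY KEY, name TEXT, version INTEGER)")
        last_version = 0
        while True:
            try:
                rows = fetch_central_changes(last_version)
            except Exception:
                # Link down: mark the local copy read-only so nothing
                # gets changed 'offline' that we'd have to reconcile.
                read_only = True
            else:
                with local:  # one local transaction per poll
                    for row_id, name, version in rows:
                        local.execute(
                            "INSERT OR REPLACE INTO widgets"
                            "(id, name, version) VALUES (?, ?, ?)",
                            (row_id, name, version))
                        last_version = max(last_version, version)
                read_only = False   # link is back up, writes allowed
            time.sleep(POLL_SECONDS)

When the link is down, the local data simply stops updating and the app 
carries on read-only, which is the whole point of keeping the local copy.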
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users