TL;DR

Get SQLite away from networks entirely when you start throwing multiple
machines at the problem.  Your last paragraph is probably the best bet if
you're not willing to get into a real client/server setup, and yes, you're
going to run into aggravated users, but then, which is more important to
deal with: aggravated users or functional data?  Training will help with
the aggravated users, as will making your application aware of the
situations it is going to be used in so it can properly deal with the
issues that could come up.  A single user over a wire (other than IDE,
SATA, or SCSI cables) is "OK", but I'd still not trust any NFS type of
communication.  If you start throwing in multiple users with ANY sort of
write possibility, you're begging for trouble, as you've read in your
research.  Read-only access, from what I'm told and have read in these
very forums, is "alright to do", but again, I'm still not a fan.

The long story:

At the end of the road you're traveling, you're going to reach the final
conclusion that if you want concurrent users to have access to the data,
AND you want to make sure the data stays valid and uncorrupted by network
file sharing issues, the easiest detour to your final destination is to
get into a real client/server setup and get it out of your code's mind
that a client application should act as a server as well.

You can do that with SQLite by building (i.e., writing) a 'server' to
handle the requests, with the database staying on the local storage system
where the server runs, or by switching to another program entirely, like
MySQL.  The other option, if you're hooked on SQLite, is to normalize the
data to be worked with on each client into individual files, then, when
the client is done with the work, use the SQLite backup routines to push
the data back to the server.  Then, if required, use your merge
maintenance to collect all the resources and drop them into one main data
store.  I don't write many client/server applications for the Win32
environment, but anything I've written that is web accessible has ALWAYS
had a database engine with the physical files sitting on some drive that
is DIRECTLY controlled by the OS running the database engine.
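
For illustration, a minimal sketch of that push-then-merge flow in Python
(the file names, the "records" table, and the assumption that rows carry
a unique key are all made up here; Connection.backup needs Python 3.7+):

    import sqlite3

    def push_client_copy(client_db_path, server_copy_path):
        # Online backup: copies the whole database safely, even while
        # the client connection is still in use.
        src = sqlite3.connect(client_db_path)
        dst = sqlite3.connect(server_copy_path)
        src.backup(dst)
        dst.close()
        src.close()

    def merge_into_main(main_db_path, client_copy_path):
        # Merge maintenance: ATTACH the pushed copy and pull its rows
        # into the main data store.  INSERT OR IGNORE assumes a unique
        # key on the rows, so re-running a merge is harmless.
        conn = sqlite3.connect(main_db_path)
        conn.execute("ATTACH DATABASE ? AS client", (client_copy_path,))
        with conn:
            conn.execute(
                "INSERT OR IGNORE INTO records "
                "SELECT * FROM client.records")
        conn.execute("DETACH DATABASE client")
        conn.close()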

For desktop apps using SQLite that might want data shared between
machines, I put it in my program's head that any data to be EXCHANGED is
done via the SQLite backup routines.  Modifications are ALWAYS done
locally.  If needed, have the app write a lock file that matches the
database file you're working on; that way, any other clients coming to
pick the data up look for that lock file and can behave accordingly, as
in, pull the data in as read only, or deny access entirely.  First come,
first served.  I got fancy with one of my apps in that I opened up a TCP
port in the high 32k range.  The lock file contained the local IP address
of the machine.  If another client came along to look for the database in
question and encountered the lock file, it'd open a connection to that
address and just see if the holder was still awake.  Then MS started
throwing firewalls around and that became a fun job to get around.
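
A rough sketch of that lock-file convention in Python (the ".lock" suffix
and port 33000 are arbitrary picks of mine, and the liveness probe assumes
the lock holder is actually listening on that port):

    import os
    import socket

    LOCK_SUFFIX = ".lock"
    LIVENESS_PORT = 33000   # a "high 32k range" port, arbitrary

    def acquire_lock(db_path):
        # First come, first served: O_EXCL makes creation atomic, so
        # exactly one client wins the race for the lock file.
        try:
            fd = os.open(db_path + LOCK_SUFFIX,
                         os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        except FileExistsError:
            return False
        with os.fdopen(fd, "w") as f:
            # Record this machine's IP so other clients can probe it.
            f.write(socket.gethostbyname(socket.gethostname()))
        return True

    def holder_is_alive(db_path, timeout=2.0):
        # Probe the recorded IP to see whether the lock holder is
        # still awake; a firewall can make a live holder look dead.
        with open(db_path + LOCK_SUFFIX) as f:
            ip = f.read().strip()
        try:
            with socket.create_connection((ip, LIVENESS_PORT), timeout):
                return True
        except OSError:
            return False

    def release_lock(db_path):
        os.remove(db_path + LOCK_SUFFIX)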

SQLite is awesome in a server architecture, being used as the back end for
PHP, small to medium web servers, and such.  Even with hundreds to
thousands of users hitting the SQLite database files, the files are being
accessed by X number of applications on local storage, and one governing
OS is handling the file manipulation.  Because one OS handles the file
manipulation, you'll find that just about ANY DBMS (MS-SQL, MySQL,
Postgres, etc.) is typically run from local storage.  There are reasons
for this, and you're running into one of them.  I've actually never heard
of a database engine running over any kind of network device, SAN
included.

One method I started to develop years ago was a web service that would
take in SQL calls from a client and spit back a formatted table of
results.  Problems obviously arose with raw byte data in text results, so
I changed it up so that the server would just generate a SQLite3 DB on the
fly and give that to the client to work with.  The perk of that was that I
could get table normalization with some additional parsing of what the
query was.  The remaining issue (which would have been resolved with
further development) was record locking.  The challenge was to decide
which records (for example, consider insurance policies, doctor records,
and personnel files all as individual record types) were to be locked on
the server while the client dealt with the data, and how long to keep them
locked.
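
Stripped of the web-service plumbing, the "generate a SQLite3 DB on the
fly" part could look roughly like this in Python (the "results" table name
is invented and column types aren't preserved, so treat it as a sketch of
the idea rather than the real thing):

    import os
    import sqlite3
    import tempfile

    def query_to_sqlite_file(server_db_path, sql, params=()):
        # Run the client's query against the server's database and
        # pour the results into a fresh SQLite file to hand back.
        # Since the payload is a real database file, raw byte data
        # never has to survive a round trip through text.
        src = sqlite3.connect(server_db_path)
        cur = src.execute(sql, params)
        columns = [d[0] for d in cur.description]

        fd, out_path = tempfile.mkstemp(suffix=".db")
        os.close(fd)
        dst = sqlite3.connect(out_path)
        dst.execute("CREATE TABLE results (%s)"
                    % ", ".join('"%s"' % c for c in columns))
        with dst:
            dst.executemany(
                "INSERT INTO results VALUES (%s)"
                % ", ".join("?" for _ in columns), cur)
        dst.close()
        src.close()
        return out_path   # ship this file to the client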

On Sun, May 3, 2015 at 11:20 PM, Scott Doctor <scott at scottdoctor.com> wrote:

>
> I googled file locking issues. Sheesh, the horror stories of programs like
> Quickbooks corrupting data files, files getting locked when nobody is using
> them (a Windows 7 bug that supposedly was fixed), and endless information
> about what not to do or what does not work. Very little about what does
> work.
>
> My application is oriented toward research where data is added, but rarely
> deleted. Doing a cleanup or purge operation is rare, but would be done by a
> single user with a backup made of the original (I automate backups since
> people always forget to do it). So I just instruct the user to make sure
> they are the only one accessing that database when doing so. Users always
> follow directions, um....
>
> What I am considering is if two people are adding to the same project file
> at the same time, make a local database that has the new information, then
> do a merge into the main project file later. This is a real situation
> anyway, as data may be collected offsite on something like an iPad or
> laptop, then merged into the main database later. Isolated operation with
> no internet access. It would work similar to the rules of accounting: make
> a transaction log, then close the books at the end of the day. Seems to
> have worked for the past few hundred years for the bean counters. If person
> X needs person Y's new data, they just run the merge operation.
>
> Hmm, I can think of multiple problems with this technique, and the
> aggravation the users may encounter. Need to think about it some more.
>
> ------------
> Scott Doctor
> scott at scottdoctor.com
