Right up front, I'm fully aware of the How To Corrupt document (believe
me, I've preached about network access on this forum), but given the
development tools I have on Windows versus the destination OS and the
purpose of the DB, I'm asking about other people's experience with remote
development of a database.

My SQLite editor of choice is SQLite Expert Pro (SEP).  The remote
system runs a Linux-based OS.  The database's job is to keep track of jobs,
hosts, last-completed times, job priorities, etc.  The Linux machine will
be running a BASH script in an infinite loop, periodically poking the
database to decide what to run next based on a schedule.  There will be
frequent sleep periods between SQL calls.
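To make the shape of that loop concrete, here is a minimal sketch.  The
"jobs" table and its columns (name, priority, last_completed) are my
placeholders for illustration, not the real schema, and the path is made up:

```shell
#!/usr/bin/env bash
# Hypothetical schema: jobs(name TEXT, priority INT, last_completed INT)
DB="${DB:-jobs.db}"   # placeholder path

next_job() {
    # .timeout makes sqlite3 retry briefly if another handle (e.g. SEP)
    # happens to hold a lock, instead of failing immediately.
    sqlite3 -cmd '.timeout 2000' "$DB" \
        "SELECT name FROM jobs ORDER BY priority LIMIT 1;"
}

mark_done() {
    # Record completion as epoch seconds.
    sqlite3 "$DB" \
        "UPDATE jobs SET last_completed = strftime('%s','now') WHERE name = '$1';"
}

# The eventual production loop: poll, run (just ECHO for now), record, sleep.
# while true; do
#     job=$(next_job)
#     [ -n "$job" ] && echo "would run: $job" && mark_done "$job"
#     sleep 60
# done
```

The loop itself is commented out because, as described below, it won't
exist during development.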

While I'm developing the database, the infinite loop in the BASH script
won't exist yet.  The script runs, does its thing (to start, just ECHOing
what I want it to do), updates the database on successful completion, then
checks for the next job, if any are available.  When the script finishes
running, I want to re-run a query in SEP to confirm that what I did in the
BASH script did what it was supposed to do.

The question for the experienced multi-machine & multi-OS DB designers:
has anyone ever run into a problem where EXTREMELY LIGHTWEIGHT use of the
database causes corruption?  What would be a recommended way to set up the
connections for a DEV-only arena like the one the paragraph below
describes?

By EXTREMELY LIGHTWEIGHT use, I mean I *DO* guarantee that, although I
have one permanently open file handle to the database via SEP, and the
Linux OS will open a handle only periodically while I'm writing the
script, multiple reads or writes to the DB at the exact same time just
will not happen.  Once development stops, this one BASH script will be the
only thing that ever touches the database.
_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
