On 2015/01/30 14:45, Mario M. Westphal wrote:
- The databases in question are stored on a local hard disk or SSD.
- If a user stores his database on a NAS box or Windows server, it is accessed
directly, via standard Windows file system routines.
- From what I can tell, network-based databases are no more likely to corrupt
than databases stored on built-in disks or SSDs, or databases kept on disks or
USB sticks connected via USB.
That is simply not true. For a local resource (even a removable drive), the report-back on locking success is, under normal
circumstances, absolute and correct. For a network (remote) file source, that is just not true in nearly all cases. If you can
be sure that only one instance of your program accesses the file over the network and nothing else does, then it should come to
no harm, but this is rarely something you can guarantee. Users kill their processes and restart programs, and SQLite connections
then (unwittingly) find hot rollback journals and all kinds of things that may fall into a long "busy" cycle, which may again
prompt a process kill, and so on.
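To make the "busy" contention concrete, here is a minimal sketch using Python's built-in sqlite3 module (the file name and variable names are my own). One connection holds the write lock; a second writer with no busy timeout sees SQLITE_BUSY immediately, which is the condition an impatient user might answer with a process kill:

```python
import os
import sqlite3
import tempfile

# isolation_level=None puts the connections in autocommit mode, so we
# control transactions explicitly; timeout=0 makes SQLITE_BUSY surface
# immediately instead of being retried by the busy handler.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path, timeout=0, isolation_level=None)
writer.execute("CREATE TABLE t(x)")
writer.execute("BEGIN IMMEDIATE")      # writer takes the write lock
writer.execute("INSERT INTO t VALUES (1)")

other = sqlite3.connect(path, timeout=0, isolation_level=None)
try:
    other.execute("BEGIN IMMEDIATE")   # contends for the same lock
    locked = False
except sqlite3.OperationalError:       # "database is locked" = SQLITE_BUSY
    locked = True

writer.execute("COMMIT")               # release the lock...
other.execute("BEGIN IMMEDIATE")       # ...and the second writer proceeds
other.execute("COMMIT")
```

On a local disk this locking handshake is reliable; the point of the paragraph above is that over a network file system the lock state a client observes may simply be wrong.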
It's easy to tell, though: when you get reports of corruption, ask for the file location information. A pattern should quickly
emerge if this is a networking problem.
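One concrete thing to collect alongside the file location is the output of SQLite's own integrity check on the reported database. A small sketch, again using Python's built-in sqlite3 module (the helper name is my own):

```python
import sqlite3

def integrity_report(path):
    """Run PRAGMA integrity_check and return its result rows.

    A healthy database yields exactly one row containing "ok";
    anything else describes the damage that was found.
    """
    con = sqlite3.connect(path)
    try:
        return [row[0] for row in con.execute("PRAGMA integrity_check")]
    finally:
        con.close()
```

Pairing this report with "local disk vs. network share" per incident should make any correlation obvious.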
- My software is updated every 2 to 4 weeks, and I always include and ship with
the latest SQLite version.
- There is a big variance in when users update, so some users may work with
older versions, but typically not older than two months.
- A user may access a database from multiple computers, but then only in
read-only mode. Write access is only permitted when the database is opened in
- I have used SQLite since about 2008, but the code base changes frequently. I
maintain old databases (up to maybe one year old) and use them in regression
tests before shipping.
sqlite-users mailing list