I have an application file that I'm considering moving to sqlite. Some current behavior is:

1) Only one writing thread for the lifetime of a logical dataset.

2) The writing thread opens the file with exclusive write access (the open fails if there is another writer; no other writers are allowed, but one writer plus multiple readers is a common situation).

3) The writing thread prefers to delete any existing file. If it can't do that (some readers currently have the file open), it gains an exclusive read/write lock (consistent with no reader having a transaction in progress), truncates the file to zero length, and writes its new header (including its own uuid, indicating that this is logically a new file). When existing readers get around to reading again, they will check that uuid and handle the change in writers "gracefully."

I'm wondering how to implement that behavior while using sqlite as my application file. I can imagine several solutions, but I'm not sure what pitfalls to look for. In particular, I haven't determined how to get sqlite to write-open an existing empty file as if it were creating a new database.

Possible solutions:

1) As part of the initial write transaction (at the sqlite client level), use sqlite to remove all of the existing tables. There are a couple of issues with this. If the old file was 200 GB, I'd expect sqlite to "waste a lot of time" updating its free-page structures. Also, if the new file only grows to a few megabytes, I'd want the excess space to be recovered. I also want the write-open to succeed if the existing file is corrupted. (A rough sketch of what I mean is appended below my signature.)

2) Implement my own VFS, such that old readers get read (and lock) errors until they open a new connection, while my new writer "sees" this as a new file even if the VFS wasn't able to delete the file at the OS level. Since I'm just getting started with sqlite, I'd rather not have to dive into the low-level guts of implementing a new VFS if I can help it.

3) After gaining the exclusive lock, truncate the database file (and the write-ahead log or rollback journal), create a new database file somewhere else (possibly a virtual database), perform a raw copy of the new virtual database into the old actual database file, toss the virtual database, and then use sqlite to open the old (now properly formatted, almost empty) database file as a writer. (Sketch below.)

4) Change all readers so that they close/reopen the database at all transaction boundaries (so that when there are no current read transactions, nobody else has the file open). I'm a bit worried about the performance implications of this for small read transactions (although I have not tested this). I can do this for all of my existing clients, but I can't do it for somebody using some other sqlite3 client (like the command-line client). Also, if some non-sqlite code has the file open without a lock (perhaps somebody is currently copying the file using an os-level command), I'd prefer to make that reader fail rather than delay my writer. (Sketch below.)

Help please?

Bill
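
P.S. Rough sketches of what I have in mind for options 1, 3, and 4, in case that helps. These assume the C API; names like reset_schema, rebuild_in_place, and dataset_header are placeholders, and error handling is mostly elided. For option 1, the idea is to drop every user table/view and then VACUUM so the excess space actually goes back to the filesystem (VACUUM can't run inside a transaction, so this part stays in autocommit):

#include <sqlite3.h>

/* Option 1 sketch: drop every user table/view, then VACUUM to shrink
 * the file.  "reset_schema" is just an illustrative name. */
static int reset_schema(sqlite3 *db)
{
    int rc = SQLITE_OK;

    for (;;) {
        sqlite3_stmt *stmt = NULL;
        rc = sqlite3_prepare_v2(db,
            "SELECT type, name FROM sqlite_master "
            "WHERE type IN ('table','view') AND name NOT LIKE 'sqlite_%' "
            "LIMIT 1", -1, &stmt, NULL);
        if (rc != SQLITE_OK) return rc;
        if (sqlite3_step(stmt) != SQLITE_ROW) {   /* nothing left to drop */
            sqlite3_finalize(stmt);
            break;
        }
        char *drop = sqlite3_mprintf("DROP %s IF EXISTS \"%w\"",
                                     (const char *)sqlite3_column_text(stmt, 0),
                                     (const char *)sqlite3_column_text(stmt, 1));
        sqlite3_finalize(stmt);                   /* release the sqlite_master scan */
        rc = sqlite3_exec(db, drop, NULL, NULL, NULL);
        sqlite3_free(drop);
        if (rc != SQLITE_OK) return rc;
    }

    /* This is where I expect the time/space cost on a 200 GB file:
     * VACUUM rewrites the database and returns the excess pages to the OS. */
    return sqlite3_exec(db, "VACUUM", NULL, NULL, NULL);
}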
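
Option 3, roughly. The raw copy here uses plain stdio, and the cleanup of any stale -wal/-journal file (plus the OS-level exclusive lock I already hold) is hand-waved:

#include <stdio.h>
#include <sqlite3.h>

/* Option 3 sketch: build a pristine database at a scratch path,
 * raw-copy its bytes over the already-truncated old file, then open the
 * old file as the writer. */
static int rebuild_in_place(const char *old_path, const char *scratch_path,
                            sqlite3 **out_db)
{
    sqlite3 *db = NULL;
    int rc = sqlite3_open_v2(scratch_path, &db,
                             SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, NULL);
    if (rc == SQLITE_OK)
        rc = sqlite3_exec(db,
            "CREATE TABLE dataset_header(uuid TEXT NOT NULL)", NULL, NULL, NULL);
    sqlite3_close(db);
    if (rc != SQLITE_OK) return rc;

    /* Raw copy: readers still have old_path open, so overwrite in place
     * rather than renaming over it. */
    FILE *src = fopen(scratch_path, "rb");
    FILE *dst = fopen(old_path, "wb");            /* "wb" truncates */
    if (!src || !dst) {
        if (src) fclose(src);
        if (dst) fclose(dst);
        return SQLITE_IOERR;
    }
    char buf[1 << 16];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, src)) > 0)
        fwrite(buf, 1, n, dst);
    fclose(src);
    fclose(dst);
    remove(scratch_path);

    /* From here on, old_path is a properly formatted, almost empty database. */
    return sqlite3_open_v2(old_path, out_db, SQLITE_OPEN_READWRITE, NULL);
}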
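
And the option-4 reader, where the worry is the per-transaction open/close cost. The actual queries and the "graceful" handling are elided:

#include <string.h>
#include <sqlite3.h>

/* Option 4 sketch: open, check the writer's uuid, do the read, and close
 * again so nothing holds the file open between transactions. */
static int read_once(const char *path, const char *expected_uuid)
{
    sqlite3 *db = NULL;
    sqlite3_stmt *stmt = NULL;
    int rc = sqlite3_open_v2(path, &db, SQLITE_OPEN_READONLY, NULL);
    if (rc != SQLITE_OK) { sqlite3_close(db); return rc; }

    rc = sqlite3_prepare_v2(db, "SELECT uuid FROM dataset_header", -1, &stmt, NULL);
    if (rc == SQLITE_OK && sqlite3_step(stmt) == SQLITE_ROW) {
        const unsigned char *uuid = sqlite3_column_text(stmt, 0);
        if (!uuid || strcmp((const char *)uuid, expected_uuid) != 0) {
            /* A new writer has taken over: handle the change "gracefully"
             * here (re-sync state, restart the scan, etc.). */
        }
    }
    sqlite3_finalize(stmt);

    /* ... the actual read transaction goes here ... */

    sqlite3_close(db);    /* nothing stays open until the next read */
    return rc;
}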
1) Only one writing thread for the lifetime of a logical dataset. 2) The writing thread opens the file with exclusive write access (open fails if there is another writer, no other writers allowed, but one writer plus multiple readers is a common situation). 3) The writing thread prefers to delete any existing file. If it can't do that (some readers currently have the file open) it gains an exclusive read/write lock (consistent with no reader has a transaction in progress) and truncates the file to zero length, writes its new header (including his own uuid, indicating that this is logically a new file). When existing readers get around to reading again, they will check that uuid, and handle the change in writers "gracefully." I'm wondering how to implement that behavior while using sqlite as my application file. I can imagine several solutions, but I'm not sure what pitfalls to look for. In particular, haven't determined how to get sqlite to write-open an existing empty file, as-if it were creating a new database. Possible solutions: 1) As part of the initial write transaction (at the sqlite client level), use sqlite to remove all of the existing tables. There are a couple of issues with this. If the old file was 200gb, I'd expect sqlite to "waste a lot of time" updating its free-page structures. Also, if the new file only grows to a few megabytes, I'd want the excess space to be recovered. I also want the write-open to succeed if the existing file is corrupted. 2) Implement my own VFS, such that old readers get read (and lock) errors until they open a new connection, while my new writer "sees" this as a new file even if the VFS wasn't able to delete the file at the OS level. Since I'm just getting started with sqlite, I'd rather not have to dive into the low-level guts of implementing a new VFS, if I can help it. 3) After gaining the exclusive lock, truncate the database file (and the write-ahead log or rollback journal), create a new database file (somewhere else, possibly a virtual database). Perform a raw copy of the new virtual database into the old actual database file, toss the virtual database, and then use sqlite to open the old (now properly formatted, almost empty) database file as a writer. 4) Change all readers so that they close/reopen the database at all transaction boundaries (so that when there are no current read transactions, nobody else has the file open). I'm a bit worried about the performance implications of this for small read transactions (although I have not tested this). I can do this for all my existing clients, but I can't do this for somebody using some other sqlite3 client (like the command-line client). Also if some non-sqlite code has the file open without a lock (perhaps somebody is currently copying the file using an os-level command), I'd prefer to make that reader fail, rather delay my writer. Help please? Bill ************************************************************************************** This e-mail and any attachments thereto may contain confidential information and/or information protected by intellectual property rights for the exclusive attention of the intended addressees named above. If you have received this transmission in error, please immediately notify the sender by return e-mail and delete this message and its attachments. Unauthorized use, copying or further full or partial distribution of this e-mail or its contents is prohibited. **************************************************************************************