Thanks so much for the reply. Sorry for the ignorance, but wouldn't only the sectors (page cache) that are being written need to be cached? And I was trying to read up on how sqlite does atomic writes, but doesn't the way sqlite handles atomic writes guarantee that the file is *always* in a valid state (even from the perspective of other programs that try to read it while it is being written)?


Do not confuse the atomicity of the "Database" with the state of the "File" - they are in no way the same thing. The database is of course quite atomic, and safe to access at any time with the expectation of a good state (if not locked, and so on). That guarantee is achieved through one or more files, containing data and journals, which may very well hold very different contents at different moments. Atomicity of the "file" I/O itself (if that is even a valid notion) is nowhere guaranteed, implied or claimed, and as Simon correctly pointed out, the only way to ensure you are getting a fully up-to-date file is to use the Backup API.
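To make that concrete, here is a minimal sketch in C of why the main database file alone is not the whole picture (the file name "demo.db" and table "t" are just placeholders): while a write transaction is open, SQLite keeps rollback information in a separate "demo.db-journal" file (or "demo.db-wal" in WAL mode), so copying demo.db by itself at that moment can capture an inconsistent state.

    #include <sqlite3.h>
    #include <stdio.h>

    int main(void) {
        sqlite3 *db;
        if (sqlite3_open("demo.db", &db) != SQLITE_OK) return 1;

        sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t(x)", 0, 0, 0);
        sqlite3_exec(db, "BEGIN", 0, 0, 0);
        sqlite3_exec(db, "INSERT INTO t VALUES (42)", 0, 0, 0);

        /* At this point the database state is spread across "demo.db"
         * and its journal; the main file alone may be mid-update. */
        printf("transaction open; look for demo.db-journal / demo.db-wal\n");

        sqlite3_exec(db, "COMMIT", 0, 0, 0);  /* journal reset/removed here */
        sqlite3_close(db);
        return 0;
    }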

The Backup API will copy the current complete database, ensuring that all committed, atomic state and data goes into the new file, and it even provides nice hooks for progress reporting, cancellation and the like. It is also clever enough to reset and resume the copying process if a change occurred after the backup started, ensuring your copied (target) database file is always fully up-to-date, uncorrupted and copyable (if no other handles to it exist) at the time the backup concluded. This is the only way to ensure the *file* is atomic in the way that I think you mean above - and it works rather well. (It's also supported by most wrappers, in case you are not using the API directly.)
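For reference, a minimal sketch of what that looks like against the C API directly (the paths "live.db" / "copy.db" and the batch size of 100 pages are just illustrative choices; note that progress is polled between steps rather than delivered through a registered callback):

    #include <sqlite3.h>
    #include <stdio.h>

    int backup_db(const char *src_path, const char *dst_path) {
        sqlite3 *src = 0, *dst = 0;
        int rc = SQLITE_ERROR;

        if (sqlite3_open(src_path, &src) == SQLITE_OK &&
            sqlite3_open(dst_path, &dst) == SQLITE_OK) {
            sqlite3_backup *b = sqlite3_backup_init(dst, "main", src, "main");
            if (b) {
                do {
                    rc = sqlite3_backup_step(b, 100);   /* copy up to 100 pages */
                    printf("%d of %d pages remaining\n",
                           sqlite3_backup_remaining(b),
                           sqlite3_backup_pagecount(b));
                    if (rc == SQLITE_BUSY || rc == SQLITE_LOCKED)
                        sqlite3_sleep(250);             /* back off, then retry */
                } while (rc == SQLITE_OK || rc == SQLITE_BUSY
                         || rc == SQLITE_LOCKED);
                sqlite3_backup_finish(b);  /* releases the backup handle */
            }
            rc = sqlite3_errcode(dst);
        }
        sqlite3_close(src);
        sqlite3_close(dst);
        return rc;
    }

    /* e.g. backup_db("live.db", "copy.db"); */

If the source database is modified through a different connection between steps, sqlite3_backup_step() restarts the copy from the beginning automatically - exactly the reset-and-resume behaviour described above.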

Details and examples here:
http://www.sqlite.org/backup.html

Cheers,
Ryan

