Thanks so much for the reply. Sorry for the ignorance, but wouldn't only the sectors (page cache) that are being written need to be cached? And I was trying to read up on how SQLite does atomic writes, but doesn't the way SQLite handles atomic writes guarantee that the file is *always* in a valid state (even from the perspective of other programs that try to read it while it's being written)?

Josh


On 16 Nov 2013, at 11:37pm, Joshua Grauman <jnf...@grauman.com> wrote:

Or conversely, that if SQLite has the file open to write, my program will read a cached version (if reading and writing happen at the same time, I'm fine with the reader getting a slightly stale version). But I'm not completely clear on how Linux file locking works... So do I need to add a mutex to my program to make sure that a reader doesn't get a corrupt database file?

Good questions, but they rest on a bad assumption. There is no such thing as a 'cached version' of a database file. Unix doesn't do things like that. Imagine you had a database file that was 20GB long. How long do you think it would take to make a cached version, and where do you think it would put it?

So if you're reading a database file without using locking then you're running the risk of reading some of it before a change and some of it after the change. So yes, you need some form of mutex. Or to use the SQLite backup API to read the file. Or to use the normal SQLite API to open the file read-only and read all the data.
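
If it helps, here's an untested sketch of the backup-API route; snapshot_db and the path names are just placeholders, not anything from your setup:

#include <sqlite3.h>

/* Take a consistent point-in-time copy of a live database using the
   online backup API.  SQLite acquires the read locks itself, so the
   copy is never half-old, half-new even while a writer is active. */
static int snapshot_db(const char *live_path, const char *snap_path)
{
    sqlite3 *src = NULL, *dst = NULL;
    sqlite3_backup *bk;
    int rc;

    rc = sqlite3_open_v2(live_path, &src, SQLITE_OPEN_READONLY, NULL);
    if (rc == SQLITE_OK)
        rc = sqlite3_open(snap_path, &dst);
    if (rc == SQLITE_OK) {
        bk = sqlite3_backup_init(dst, "main", src, "main");
        if (bk) {
            sqlite3_backup_step(bk, -1);   /* -1 = copy every page */
            sqlite3_backup_finish(bk);
        }
        rc = sqlite3_errcode(dst);
    }
    sqlite3_close(dst);
    sqlite3_close(src);
    return rc;
}

Your reader then works from the snapshot file and never has to touch the live database while it's being written.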

Simon.
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
