On 17 Nov 2013, at 6:17am, Joshua Grauman <jnf...@grauman.com> wrote:

> Thanks so much for the reply. Sorry for the ignorance, but wouldn't only the 
> sectors (page cache) that are being written need to be cached?

If Unix did that (and it doesn't in any file system I know of), then that might 
be one approach.  But you would then need some mechanism that keeps old pages 
around until every file handle on the file is released, and timestamps them all 
so the kernel knows which versions to serve to which handles.

> And I was trying to read up on how sqlite does atomic writes, but doesn't the 
> way sqlite handles atomic writes guarantee that the file is *always* in a 
> valid state (even from the perspective of other programs that try to read it 
> while it is being written)?

Imagine the worst possible case: a very long database file and a slow reading 
process.  If you could read the entire database file in an instant, with your 
'read' delaying all other writes, then yes, you would always get an 
uncorrupted copy.  But you can't.  So another process may be writing various 
pages to the file while your read is in progress.  And if your reading goes 
slowly, that process has the chance to make any number of changes to the file 
(any number of separate transactions) before you finish.
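
To make the race concrete, here is a minimal sketch in C (file names and 
chunk size are placeholders of mine, not anything from SQLite) of the naive 
approach -- copying the database file in chunks while it is live.  Nothing 
stops a writer from committing between two reads, so the copy can end up 
with a mix of old and new pages:

    #include <stdio.h>

    /* Naive chunked copy of a live database file.  A writer may commit
     * between two fread() calls, so the output can mix pages from
     * several different transactions -- i.e. a corrupt copy. */
    int main(void) {
        FILE *in  = fopen("live.db", "rb");   /* placeholder names */
        FILE *out = fopen("copy.db", "wb");
        char buf[4096];
        size_t n;

        if (!in || !out) return 1;
        while ((n = fread(buf, 1, sizeof buf, in)) > 0) {
            /* <-- another process may commit a transaction here,
             *     changing pages we have not yet read */
            fwrite(buf, 1, n, out);
        }
        fclose(in);
        fclose(out);
        return 0;
    }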

So you either need to establish a mutex, as you wrote -- perhaps using SQLite's 
own locks, which are a mutex system -- or use the SQLite Backup API, which will 
keep re-reading the file until it gets a 'clean' copy.
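
For the Backup API route, here is a minimal sketch (untested, with 
placeholder file names) using the documented sqlite3_backup_init() / 
sqlite3_backup_step() / sqlite3_backup_finish() calls.  If another 
connection writes to the source mid-copy, the next step restarts the backup 
from the beginning, which is the "re-read until clean" behaviour described 
above:

    #include <stdio.h>
    #include <sqlite3.h>

    int main(void) {
        sqlite3 *src = 0, *dst = 0;
        int rc;

        if (sqlite3_open("live.db", &src) != SQLITE_OK) return 1;
        if (sqlite3_open("snapshot.db", &dst) != SQLITE_OK) return 1;

        sqlite3_backup *bak = sqlite3_backup_init(dst, "main", src, "main");
        if (!bak) {
            fprintf(stderr, "%s\n", sqlite3_errmsg(dst));
            return 1;
        }

        /* Copy 64 pages per step; between steps other connections may
         * read or write the source.  A write by another connection
         * makes the next step restart the backup automatically. */
        do {
            rc = sqlite3_backup_step(bak, 64);
            if (rc == SQLITE_BUSY || rc == SQLITE_LOCKED)
                sqlite3_sleep(250);            /* let the writer finish */
        } while (rc == SQLITE_OK || rc == SQLITE_BUSY || rc == SQLITE_LOCKED);

        rc = sqlite3_backup_finish(bak);       /* SQLITE_OK on success */
        sqlite3_close(src);
        sqlite3_close(dst);
        return rc == SQLITE_OK ? 0 : 1;
    }

The lock-based alternative amounts to doing the same thing by hand: hold a 
read transaction (a SHARED lock, in rollback-journal mode) on the database 
for as long as the copy takes, so no writer can commit underneath you.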

Simon.