On Sat, 16 Nov 2013 22:17:31 -0800 (PST), Joshua Grauman
<jnf...@grauman.com> wrote:

>Thanks so much for the reply. Sorry for the ignorance, but wouldn't only 
>the sectors (page cache) that are being written need to be cached? 

Database pages are updated in SQLite's page cache and then, being 'dirty',
flushed to the filesystem, which may postpone writing to disk and keep the
data in the filesystem cache for a while, or write it out immediately if it
chooses to. SQLite instructs the filesystem to flush its cache to disk at
certain moments.

The filesystem image of the database is consistent when there are no open
transactions (everything committed). During transactions, you have to
assume the filesystem cache is not up to date, may be partially updated,
and is not guaranteed to be consistent. Only in combination with a journal
can a consistent version be reconstructed.

>And I 
>was trying to read up on how sqlite does atomic writes, but doesn't the 
>way sqlite handles atomic writes guarantee that the file is *always* in a 
>valid state (even from the perspective of other programs that try to read 
>it while being written)?

Not with "PRAGMA synchronous=off;"
http://sqlite.org/pragma.html#pragma_synchronous
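As a quick illustration using Python's built-in sqlite3 module (the file
name is made up), you can set and inspect the pragma like this:

```python
import sqlite3

# Hypothetical database file name, for illustration only.
conn = sqlite3.connect("example.db")

# With synchronous=OFF, SQLite hands data to the OS and never waits for it
# to reach disk, so a crash can leave the file corrupt. NORMAL (or FULL)
# restores the sync barriers described on the pragma page linked above.
conn.execute("PRAGMA synchronous=NORMAL")

# 0 = OFF, 1 = NORMAL, 2 = FULL, 3 = EXTRA
print(conn.execute("PRAGMA synchronous").fetchone()[0])  # prints 1
conn.close()
```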

To have a consistent image of the database in the filesystem when you
start the copy, you have to make sure the database image in the filesystem
is consistent by using PRAGMA synchronous=normal; and, as Simon says, lock
the database file with "BEGIN IMMEDIATE" or "BEGIN EXCLUSIVE" to prevent
partial updates from appearing in the image the filesystem has.
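A minimal sketch of that locking approach in Python's sqlite3 module
(assuming rollback-journal mode and that no hot journal is present;
copy_db is a made-up helper name):

```python
import shutil
import sqlite3

def copy_db(src_path, dst_path):
    """Copy an SQLite database file while holding a write lock, so no
    partially committed transaction can appear in the copy."""
    # isolation_level=None puts the connection in autocommit mode, so the
    # explicit BEGIN IMMEDIATE below is the only open transaction.
    conn = sqlite3.connect(src_path, isolation_level=None)
    try:
        # BEGIN IMMEDIATE acquires a RESERVED lock: other readers may
        # continue, but no other connection can write until we finish.
        conn.execute("BEGIN IMMEDIATE")
        shutil.copyfile(src_path, dst_path)
    finally:
        conn.execute("ROLLBACK")  # we changed nothing; just release the lock
        conn.close()
```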


>Josh
>
>>
>> On 16 Nov 2013, at 11:37pm, Joshua Grauman <jnf...@grauman.com> wrote:
>>
>>> Or conversely, that if sqlite has the file open to write, my program 
>>> will read a cached version (if reading and writing happen at the same 
>>> time, I'm fine with the reader getting a slightly stale version). But 
>>> I'm not completely clear on how Linux file locking works... So do I 
>>> need to add a mutex to my program to make sure that a reader doesn't 
>>> get a corrupt database file?
>>
>> Good questions, checking a bad assumption.  There is no such thing as a 
>> 'cached version' of a database file.  Unix doesn't do things like that. 
>> Imagine you had a database file that was 20GB long.  How long do you 
>> think it would take to make a cached version, and where do you think it 
>> would put it?
>>
>> So if you're reading a database file without using locking then you're 
>> running the risk of reading some of it before a change and some of it after 
>> the change.  So yes, you need some form of mutex.  Or to use the SQLite 
>> backup API to read the file.  Or to use the normal SQLite API to open the 
>> file read/only and read all the data.
>>
>> Simon.
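
A sketch of Simon's backup-API suggestion, using Connection.backup from
Python's sqlite3 module (available since Python 3.7; snapshot_db is a
made-up name):

```python
import sqlite3

def snapshot_db(src_path, dst_path):
    """Take a consistent snapshot via SQLite's online backup API."""
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)
    try:
        # backup() copies the database page by page under SQLite's own
        # locking, and restarts automatically if the source changes
        # mid-copy, so the destination is always a consistent snapshot.
        src.backup(dst)
    finally:
        dst.close()
        src.close()
```

Unlike a raw file copy, this needs no explicit BEGIN IMMEDIATE, because
the backup API handles the locking itself.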

-- 
Groet, Cordialement, Pozdrawiam, Regards,

Kees Nuyt

_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
