There is also PRAGMA data_version, which more 
closely resembles what you have now.
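A minimal sketch of how that pragma behaves, assuming Python's sqlite3 module (the file name is illustrative): the value of PRAGMA data_version, as seen by one connection, changes whenever a *different* connection commits a change to the database file.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

reader = sqlite3.connect(path)
writer = sqlite3.connect(path)
writer.execute("CREATE TABLE t (x)")
writer.commit()

# Snapshot the reader's view of data_version before the write.
before = reader.execute("PRAGMA data_version").fetchone()[0]

# A commit on a separate connection...
writer.execute("INSERT INTO t VALUES (1)")
writer.commit()

# ...bumps the value the reader sees.
after = reader.execute("PRAGMA data_version").fetchone()[0]
print(before != after)
```

Note that the pragma only reflects commits made by *other* connections; a connection's own writes do not change the value it sees.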

I strongly suspect that an update cycle of the user_version should be done 
within the transaction performing the changes.

Read user version
<your changes go here>
Write updated user version
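The read/bump/write cycle above might look like this in Python's sqlite3, assuming the counter lives in PRAGMA user_version (the table name and helper function are illustrative). Because the bump happens inside the same transaction as the data changes, readers can never observe the new version without the new data or vice versa.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions explicitly
conn.execute("CREATE TABLE t (x)")

def write_with_version_bump(conn, rows):
    conn.execute("BEGIN IMMEDIATE")  # take the write lock up front
    try:
        # Read user version
        version = conn.execute("PRAGMA user_version").fetchone()[0]
        # <your changes go here>
        conn.executemany("INSERT INTO t VALUES (?)", rows)
        # Write updated user version (PRAGMA takes no bound
        # parameters, so interpolate the integer directly)
        conn.execute(f"PRAGMA user_version = {version + 1}")
        conn.execute("COMMIT")
    except Exception:
        conn.execute("ROLLBACK")
        raise

write_with_version_bump(conn, [(1,), (2,)])
print(conn.execute("PRAGMA user_version").fetchone()[0])
```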

-----Original Message-----
From: sqlite-users [] On Behalf Of Wout Mertens
Sent: Tuesday, 07 November 2017 19:08
To: SQLite mailing list <>
Subject: [EXTERNAL] [sqlite] Most efficient way to detect on-disk change

I'm working with a db that's only written to in transactions, and each 
transaction increases a db-global version counter.

This means that I can cache all reads, unless the version changed.

What would be the most efficient way to make sure I *never* serve stale data?

Right now everything's a single process, so it's really easy: just clear the 
cache on every write. However, I want to be prepared for the near future, where 
I will have multiple processes using this db file.

I'm thinking that to detect writes, this might be a safe approach:

Before serving any cached read, check the timestamp on the wal file. If it 
changed, read the global version. If it changed, clear the cache.
Otherwise, serve the cached read.
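That check might be sketched like this, using PRAGMA data_version as a stand-in for the db-global counter (the file layout, cache structure, and function name are all illustrative, not a definitive implementation):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(path)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE t (x)")
conn.commit()

cache = {}
state = {"mtime": None, "version": None}

def maybe_invalidate(conn, path):
    """Cheap check first: stat the -wal file. Only if its mtime moved,
    re-read the version counter; only if that moved, drop the cache."""
    try:
        mtime = os.stat(path + "-wal").st_mtime_ns
    except FileNotFoundError:
        mtime = None  # WAL file may be deleted on checkpoint/close
    if mtime == state["mtime"]:
        return  # nothing touched the WAL; serve from cache
    state["mtime"] = mtime
    version = conn.execute("PRAGMA data_version").fetchone()[0]
    if version != state["version"]:
        state["version"] = version
        cache.clear()
```

One caveat worth noting: file timestamps have filesystem-dependent granularity, so two commits within the same tick could leave the mtime unchanged; the version check is the authoritative test, the stat is only a fast-path filter.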

Is it safe to assume that all writes would mean a change of the wal file?
More importantly, is it faster to check the timestamp, or would a prepared query 
for the version actually be faster (and safer)?

Also, I'm using WAL right now, but I wonder if that's really useful given the 

Thank you for your insights,

sqlite-users mailing list

 Gunter Hick | Software Engineer | Scientific Games International GmbH | 
Klitschgasse 2-4, A-1130 Vienna | FN 157284 a, HG Wien, DVR: 0430013 | (O) +43 
1 80100 - 0

May be privileged. May be confidential. Please delete if not the addressee.