Sean Heber wrote:

> My database file is only around 4MB and I have set the 
> default_cache_size to 50000.  From what I've read, that should 
> translate to almost 50MB of cache size which would be more than 
> enough to keep the entire database in memory, I'd think.  Yet it 
> doesn't seem to actually do that since it is reading from the file so 
> often.

SQLite cannot know whether the database file has been modified by
another process once it releases its file locks, so it has to re-read
and re-verify everything the next time it takes a lock. If you wrap
multiple queries in a transaction, the file stays locked, the cache
remains valid, and the number of seeks/reads should go down.
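
To illustrate, a minimal sketch using the SQLite 3 C API (the table
name "mytable" is a placeholder; error handling is abbreviated):

    #include <stdio.h>
    #include <sqlite3.h>

    /* Run a batch of queries inside one transaction.  The shared
     * lock taken by the first SELECT is held until COMMIT, so the
     * page cache stays valid across all of the queries. */
    static int run_batch(sqlite3 *db)
    {
        char *err = 0;
        if (sqlite3_exec(db, "BEGIN", 0, 0, &err) != SQLITE_OK) {
            fprintf(stderr, "BEGIN failed: %s\n", err);
            sqlite3_free(err);
            return 1;
        }
        /* ... many SELECTs here; nothing is re-read between them ... */
        sqlite3_exec(db, "SELECT count(*) FROM mytable", 0, 0, 0);
        sqlite3_exec(db, "COMMIT", 0, 0, 0);
        return 0;
    }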

After SQLite obtains a read lock it will have to perform at least the
following:
- read the file header and check it for corruption
- check for a hot journal (an interrupted transaction), and roll back
  its changes if necessary
- read the database schema from the sqlite_master table

This is the most likely cause of the repetitive behavior you're seeing.
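
The schema read in the last step is essentially a query against the
sqlite_master table, which you can run yourself. A minimal sketch,
again assuming the SQLite 3 API ("test.db" is a placeholder):

    #include <stdio.h>
    #include <sqlite3.h>

    /* Print every schema object; roughly the read SQLite performs
     * internally after (re)acquiring a lock. */
    static int print_row(void *arg, int n, char **val, char **col)
    {
        printf("%s %s\n", val[0], val[1]);
        return 0;
    }

    int main(void)
    {
        sqlite3 *db;
        if (sqlite3_open("test.db", &db) != SQLITE_OK) return 1;
        sqlite3_exec(db, "SELECT type, name FROM sqlite_master",
                     print_row, 0, 0);
        sqlite3_close(db);
        return 0;
    }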

That said: the OS will of course cache the database file as well, and
many of the 'read' calls will be very fast.

I can imagine a protocol that would improve performance by keeping
track of whether the database was updated, allowing a quick
determination of whether a program's cache is still valid. The idea is
to increment a 'version number' in the database file as soon as a
program obtains a write lock. A reader could then tell whether the file
has been written to and flush its cache only when necessary, speeding
things up a little. A 128-bit counter would never wrap around in
practice.
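
The reader-side check might look like this. A rough sketch of the idea
only: the 16-byte counter, its offset, and the file handling are all
invented for illustration, not how SQLite stores anything today:

    #include <stdio.h>
    #include <string.h>

    #define COUNTER_OFFSET 24   /* hypothetical header location */
    #define COUNTER_SIZE   16   /* 128 bits */

    /* Compare the on-disk version counter with the one we cached.
     * Returns 1 if our page cache is still valid, 0 if the file was
     * written to and the cache must be flushed. */
    static int cache_still_valid(FILE *db,
                                 unsigned char cached[COUNTER_SIZE])
    {
        unsigned char now[COUNTER_SIZE];
        if (fseek(db, COUNTER_OFFSET, SEEK_SET) != 0) return 0;
        if (fread(now, 1, COUNTER_SIZE, db) != COUNTER_SIZE) return 0;
        if (memcmp(now, cached, COUNTER_SIZE) == 0) return 1;
        memcpy(cached, now, COUNTER_SIZE);  /* remember new version */
        return 0;
    }

A writer would increment the counter, under its write lock, before
modifying any page.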


Gé

-- 
Ge' Weijers
e-mail: [EMAIL PROTECTED]
tel:  (520)623-8542
