I have 300K records in my database and the blob text (invoices) in a 
separate database. When I query the database over a network or from a USB 
stick, performance is good. However, when I moved my database to CD media, 
performance dropped drastically: it sometimes takes 2 minutes to retrieve a 
result. The CPU is not busy during that time, and memory usage increases 
slowly. 

When the user selects a row, the system queries the blob database, and there 
I am able to retrieve the text, convert it into a TIFF file, and load it into 
my program within 1.5 seconds or less from the CD. Copying the database to 
the hard drive and querying that copy restores normal performance. Querying 
the CD version again (immediately) and performance lags once more. The LED on 
the CD-ROM blinks only occasionally, not constantly as it does when I copy a 
file.

Both databases have 300K records. The first (invoices, not indexed) has 8 
retrieval fields, while the second has just the ID and the blob. The average 
row size is 60-80 bytes, while the blob averages 500 bytes. I am looking to 
improve the performance, and to understand the reasons for the slowdown. It 
does sound like the caching mentioned in the thread ... 

[sqlite] indexes in memory
> 
> Indexes will be loaded into the cache as needed. The whole SQLite database 
> is page based, and the cache caches the pages. The tables and indexes are 
> implemented as page based btrees, with nodes represented by pages.
> 
> The cache is unaware of the higher level structure of the btrees, and 
> there is no way to selectively bring load/unload tables or indexes from 
> memory. The page cache will manage itself on an LRU basis.
>
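
If the page cache is the bottleneck, one thing worth trying is enlarging it 
with PRAGMA cache_size and warming it with one scan, so pages read once from 
the slow CD stay resident for later lookups. A minimal sketch (the schema and 
row counts here are hypothetical stand-ins, using an in-memory database in 
place of the CD path):

```python
import sqlite3

# Substitute the real CD database path for ":memory:"; the table here is a
# hypothetical stand-in for the invoices table described above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id INTEGER PRIMARY KEY, data TEXT)")
conn.executemany("INSERT INTO invoices VALUES (?, ?)",
                 [(i, "x" * 70) for i in range(1000)])

# Raise the page cache above the default so the whole table and its btree
# pages fit in memory (20000 pages at the historical 1024-byte page size is
# roughly 20 MB, ample for 300K rows of 60-80 bytes each).
conn.execute("PRAGMA cache_size = 20000")

# Warm the cache with one sequential pass; subsequent point lookups should
# then hit memory rather than forcing slow random seeks on the CD drive.
rows, = conn.execute("SELECT count(*) FROM invoices").fetchone()
print(rows)
```

Whether this helps depends on the SQLite build in use honoring the PRAGMA and 
on the working set actually fitting in RAM; the occasional LED blink you 
describe is consistent with scattered cache misses rather than a sequential 
read.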
