I have a particular logging task which is time-critical, both in reading and 
writing.  It runs on Linux, but not on a fast desktop computer: more the slow 
kind of hardware built into your WiFi router, with limited solid-state 
storage.  I can use any version of the SQLite C API I want, and currently use 
a minimal build which omits almost all optional parts.  Remember: on this 
setup, processing is slow and storage space is limited.

Log entries are written at rates ranging from six a minute (peak time) to one 
an hour (night).  In some use-cases queries will be frequent (a few a day); in 
others, users are not interested and weeks may pass without a query.  This 
raises the age-old problem you face whenever you need random access to the 
data:

A) Create no indexes.  Fast writing, but slow when looking things up.
B) Create indexes.  Slower writing, but faster when looking things up.

Naturally, I want it all.  I'd been toying with the idea that initial writing 
should go to a text file, with the data flushed to SQLite just before a query 
is executed.  But a recent SQLite innovation changes things.  Instead of using 
a text file I can use partial indexes (added in SQLite 3.8.0).

So I add a column to my table called "searchable", which starts off set to 0 
(false).  I replace my indexes with partial indexes that cover only 
"searchable" rows.  When a search is requested, before actually running it I 
do

   UPDATE MyTable SET searchable = 1 <WHERE searchable = 0 ?>
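
Concretely, I imagine the scheme looking something like this (table and 
column names are just placeholders; I use 0 and 1 because SQLite stores 
booleans as plain integers, and older versions reject the TRUE/FALSE 
keywords):

   CREATE TABLE MyTable (
       ts         INTEGER NOT NULL,           -- log timestamp
       message    TEXT,                       -- log payload
       searchable INTEGER NOT NULL DEFAULT 0  -- 0 until flushed
   );

   -- Partial indexes: only rows already flagged searchable are indexed,
   -- so ordinary inserts (searchable = 0) never touch them.
   CREATE INDEX MyTable_ts  ON MyTable(ts)      WHERE searchable = 1;
   CREATE INDEX MyTable_msg ON MyTable(message) WHERE searchable = 1;

   -- Fast path: plain logging, no index maintenance at all.
   INSERT INTO MyTable(ts, message)
   VALUES (strftime('%s','now'), 'link up');

   -- Just before a query: pull the backlog into the indexes.
   BEGIN;
   UPDATE MyTable SET searchable = 1 WHERE searchable = 0;
   COMMIT;

The attraction is that all the index maintenance happens in one burst at 
query time, instead of a little on every insert.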

This, theoretically, brings all the pending rows into the indexes.  Does 
anyone have experience with this?  The programming is simpler if I use this 
trick, since I don't have to handle and flush a text file.  But I haven't 
used partial indexes before.  Any advice or suggestions before I start 
testing?  Is there a better way I've missed entirely?
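
As part of testing I plan to check the query plan, since (as I understand it) 
the planner will only use a partial index when the query's WHERE clause 
implies the index's WHERE clause, so the query has to repeat the 
searchable = 1 test.  Using the placeholder names from my sketch above:

   EXPLAIN QUERY PLAN
   SELECT ts, message
     FROM MyTable
    WHERE searchable = 1
      AND ts >= strftime('%s','now','-1 day');
   -- Hoped-for output: SEARCH TABLE MyTable USING INDEX MyTable_ts (ts>?)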

Simon.