Hi Mike,

Thanks for the steps to try. I was hoping for some theoretical (implementation) insight before we do the normal battery of tests... we'll get onto that next week if there are no other inputs on how data storage is handled.


On 12/7/2013 8:36 PM, Michael Black wrote:
One more test I would do is first principles.
Load 1200 records and just do "select * from items" -- you aren't going to
get any faster than that.
Then add the index query.
You should find a performance knee as you add records (try adding them in
powers of 2).
To test I would use "select * from items where rowid%2==0" for 2400 records,
and rowid%4 for 4800 records, etc.

We'll give this a shot. I recall that modulo (%) queries are usually significantly slower, but I see what you're getting at, so we'll give it a try.
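For anyone following along, here is a rough sketch of the benchmark Michael is describing, in Python's built-in sqlite3 module. The "items" schema is a minimal hypothetical stand-in (our real table is wider); doubling the record count while doubling the modulo stride keeps the result set the same size, so any extra time comes from scanning the larger table.

```python
import sqlite3
import time

def benchmark(n_records, stride):
    """Time a full scan vs. a rowid % stride sample over n_records rows."""
    conn = sqlite3.connect(":memory:")
    # Hypothetical minimal schema; id aliases SQLite's implicit rowid.
    conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, data TEXT)")
    conn.executemany("INSERT INTO items (data) VALUES (?)",
                     (("x" * 32,) for _ in range(n_records)))
    conn.commit()

    t0 = time.perf_counter()
    full = conn.execute("SELECT * FROM items").fetchall()
    t_full = time.perf_counter() - t0

    t0 = time.perf_counter()
    sample = conn.execute(
        "SELECT * FROM items WHERE rowid % ? == 0", (stride,)).fetchall()
    t_sample = time.perf_counter() - t0

    conn.close()
    return len(full), len(sample), t_full, t_sample

# 2400 records sampled at rowid%2, 4800 at rowid%4, etc. -- each run
# returns 1200 rows, so the timings isolate the cost of table size.
n_full, n_sample, t_full, t_sample = benchmark(2400, 2)
```

Plotting t_sample against n_records (in powers of 2, as suggested) should make any performance knee visible.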

Also, what happens if you don't encrypt?

Also, what if you turn off SQLite caching completely and let CE have a bit
more cache space?

There is enough available memory in the system right now, so we're not choking anything else at this stage.
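If we do try shrinking SQLite's own cache, the knob is the per-connection cache_size pragma; a minimal sketch (in-memory database just for illustration) would be:

```python
import sqlite3

# Shrink SQLite's internal page cache so the OS / CE file cache does the
# caching instead. cache_size is per-connection, so it must be set on
# every connection we open.
conn = sqlite3.connect(":memory:")
default = conn.execute("PRAGMA cache_size").fetchone()[0]  # e.g. -2000 (KiB)
conn.execute("PRAGMA cache_size = 0")  # minimize SQLite's own cache
small = conn.execute("PRAGMA cache_size").fetchone()[0]
conn.close()
```

Negative values mean kibibytes rather than pages; 0 asks SQLite to keep its cache as small as possible.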

You could also create 2 tables -- one for your frequent data and one for the
non-frequent.
That's 2 selects, but it might be noticeably faster if the frequent table is
small enough.

A number of queries use these tables, and they don't know whether the data is in the more frequent area or the less frequent area, so every query would have to bounce through both tables. I know we're using it more as a cache, in the expectation that most queries execute against only the first table; but there are other uses of the data where the distribution is more general.
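One way to keep those queries to a single SELECT despite the split would be a UNION ALL view over both tables. The table and view names below ("items_hot", "items_cold", "items_all") are hypothetical, just to show the shape:

```python
import sqlite3

# Sketch: split storage into hot/cold tables, but expose one view so
# queries that don't know which table holds a row stay unchanged.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items_hot  (id INTEGER PRIMARY KEY, data TEXT);
    CREATE TABLE items_cold (id INTEGER PRIMARY KEY, data TEXT);
    CREATE VIEW items_all AS
        SELECT id, data FROM items_hot
        UNION ALL
        SELECT id, data FROM items_cold;
""")
conn.execute("INSERT INTO items_hot  VALUES (1, 'frequent')")
conn.execute("INSERT INTO items_cold VALUES (2, 'infrequent')")
rows = conn.execute("SELECT * FROM items_all ORDER BY id").fetchall()
conn.close()
```

The view doesn't avoid touching both tables, of course -- it just hides the split from the callers -- so it only pays off if the hot table satisfies most lookups cheaply.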

Thanks for the inputs - we'll see what we can do.

Best Regards,
Mohit.


_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users