The only way to do this efficiently would be a counting (ranked) b-tree index, i.e. one where each node also stores the size of its subtree so a rank can be located in O(log n). Since SQLite's indexes don't do that, you're stuck with an O(n) implementation, either on reading or on writing. So your solution is about as good as it gets, save maybe some implementation particularities.
However, you may want to consider a shift in perspective: with this kind of data, statisticians work with percentiles. That is, instead of querying for ranks 21,000-22,000, you query for "top 1%", "6-8%", etc., based on either value or rank. That way you can maintain a percentile rank table as granular as you like (e.g. one row per 1% gives a 100-row table), where each row holds a count, a minimum value and a maximum value. Such a table is much cheaper to keep up to date, and if you then need the actual records, you join it back to the data by value range (value BETWEEN min AND max).
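As a rough sketch of what I mean (table and column names are made up, and the rebuild step assumes a SQLite new enough for window functions, 3.25+; adapt to your schema):

    -- 100-row percentile summary: one row per 1% bucket.
    CREATE TABLE pct_bucket (
      pct     INTEGER PRIMARY KEY,   -- 1..100
      cnt     INTEGER NOT NULL,      -- number of rows in this bucket
      min_val REAL    NOT NULL,      -- smallest value in this bucket
      max_val REAL    NOT NULL       -- largest value in this bucket
    );

    -- Rebuild the summary from a hypothetical data table scores(value).
    DELETE FROM pct_bucket;
    INSERT INTO pct_bucket (pct, cnt, min_val, max_val)
    SELECT bucket, COUNT(*), MIN(value), MAX(value)
    FROM (SELECT value, NTILE(100) OVER (ORDER BY value) AS bucket
          FROM scores)
    GROUP BY bucket;

    -- Fetch the actual records for, say, the 6th-8th percentiles by
    -- joining on the stored value range:
    SELECT s.*
    FROM pct_bucket p
    JOIN scores s ON s.value BETWEEN p.min_val AND p.max_val
    WHERE p.pct BETWEEN 6 AND 8;

The join only needs an ordinary index on scores(value), and the summary table can also be maintained incrementally on insert instead of rebuilt, if that better fits your write pattern.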