You're getting the same results because you are CPU-bound; I/O has nothing to do with this problem. From your own timings, 31.76/33 = 96% CPU. If you were I/O-bound, your real time would still be 33 seconds but your sys+user time would be something low, say 3 seconds.
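A quick way to see whether the time is going to Perl itself rather than to SQLite is to benchmark the loop body with the core Benchmark module, with the DB call stubbed out. A minimal sketch, assuming a hypothetical `no_db_work()` standing in for your per-iteration Perl work (swap in your real prepared SELECT via DBI to get the other number):

```perl
use strict;
use warnings;
use Benchmark qw(timethis);

# Hypothetical stand-in for the pure-Perl part of one loop iteration;
# replace this with your real per-row processing, minus the DB call.
sub no_db_work {
    my @row = (1 .. 10);       # simulate a fetched row
    my $sum = 0;
    $sum += $_ for @row;       # simulate per-row Perl work
    return $sum;
}

# Time 1000 iterations of the Perl-only loop; compare the CPU time
# reported here against the timethis(1000, ...) figure from the run
# that includes the SQL.
timethis( 1000, \&no_db_work );
```

If the two figures are close, the bottleneck is the Perl loop, not the queries.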
My guess is you are spending more time looping around in Perl than you think. Try running your Perl code with one small, static set of test data in a loop and see how much time it takes without any DB interactions. That will give you a baseline for performance improvements. If it turns out that 1000 loops without SQL take 25 seconds instead of 33, none of your SQL optimizations matter much.

Jim

On 3/21/09, P Kishor <punk.k...@gmail.com> wrote:
> So, I increased the cache_size to 1048576 but got the same results...
> 30 odd SELECTs per second.
>
> Then I created an in-memory db and copied all the data from the
> on-disk db to memory. I didn't use the backup API... simply opened a
> db connection to an in-memory db, then created all the tables and
> indexes, ATTACHed the on-disk db and did an INSERT .. SELECT * FROM
> attached db. Interestingly, the same results --
>
> [04:24 PM] ~/Data/carbonmodel$perl carbonmodel.pl
> Creating in memory tables... done.
> Transferring data to memory... done. Took: 90 wallclock secs (75.88
> usr + 8.44 sys = 84.32 CPU)
> Creating indexes... done. Took: 38 wallclock secs (23.82 usr + 13.36
> sys = 37.18 CPU)
> Prepare load testing
> ...timethis 1000: 33 wallclock secs (30.74 usr + 1.02 sys = 31.76
> CPU) @ 31.49/s (n=1000)

--
Software first. Software lasts!
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users