Re: [sqlite] speeding up row by row lookup in a large db

2009-03-21 Thread P Kishor
Found the bottleneck! First, the results with the "correction" -- Db remains on disk:

    [06:01 PM] ~/Data/carbonmodel$ perl carbonmodel.pl
    Prepare load testing
    timethis 1000: 3 wallclock secs ( 0.23 usr + 0.41 sys = 0.64 CPU) @ 1562.50/s (n=1000)
    or 0.63 ms per
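The timing above comes from Perl's Benchmark module. A minimal sketch of that kind of load test, assuming a carbonmodel.db file name and reusing the cell/cell_id names from the schema in the original post further down the thread (the random-id lookup is purely illustrative):

    use strict;
    use warnings;
    use DBI;
    use Benchmark qw(timethis);

    # Open the on-disk database once and keep the handle for the whole run.
    my $dbh = DBI->connect("dbi:SQLite:dbname=carbonmodel.db", "", "",
        { RaiseError => 1, AutoCommit => 1 });

    # Prepare the lookup once, outside the timed closure.
    my $sth = $dbh->prepare("SELECT * FROM cell WHERE cell_id = ?");

    # Time 1000 lookups; only the bind value changes per call.
    timethis(1000, sub {
        $sth->execute(int(rand(1_000_000)) + 1);
        my $row = $sth->fetchrow_arrayref;
        $sth->finish;
    });

The point of the sketch is that the statement handle is prepared once and only the bind value changes inside the timed closure.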

Re: [sqlite] speeding up row by row lookup in a large db

2009-03-21 Thread Stefan Evert
On 21 Mar 2009, at 15:31, P Kishor wrote: > I did some benchmarking with the above schema using Perl DBI, and I > get about 30 transactions per second as long as I am returning the data > to memory. Even for Perl/DBI, that seems pretty slow. Depends on how much data each of these transactions

Re: [sqlite] speeding up row by row lookup in a large db

2009-03-21 Thread Jim Wilcoxson
The reason you're getting the same results is because you are CPU-bound. I/O has nothing to do with this problem. From your timings of your app, 31.76/33 = 96% CPU. If you were I/O bound, your real time would be 33 seconds and your sys+user time would be 3 seconds, or something low. My guess
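The 96% figure is simply user+sys CPU (31.76 s) divided by wall-clock time (33 s). A quick way to compute the same ratio from inside a script (a sketch; run_queries() is a stand-in for whatever work is actually being measured):

    use strict;
    use warnings;
    use Time::HiRes qw(gettimeofday tv_interval);

    sub run_queries { my $x = 0; $x += $_ for 1 .. 5_000_000; return $x }  # stand-in workload

    my $t0 = [gettimeofday];
    my ($u0, $s0) = times();

    run_queries();

    my ($u1, $s1) = times();
    my $wall = tv_interval($t0) || 1e-9;    # guard against a zero interval
    my $cpu  = ($u1 - $u0) + ($s1 - $s0);

    # A ratio near 1.0 means CPU-bound; well below 1.0 means waiting on I/O.
    printf "wall %.2fs, cpu %.2fs, cpu/wall %.2f\n", $wall, $cpu, $cpu / $wall;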

Re: [sqlite] speeding up row by row lookup in a large db

2009-03-21 Thread Kees Nuyt
On Sat, 21 Mar 2009 16:37:15 -0500, P Kishor wrote:
> [04:24 PM] ~/Data/carbonmodel$ perl carbonmodel.pl
> Creating in memory tables... done.
> Transferring data to memory... done. Took: 90 wallclock secs (75.88 usr + 8.44 sys = 84.32 CPU)
> Creating indexes... done. Took:
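The 90-second transfer quoted above can be done with ATTACH and a single CREATE TABLE ... AS SELECT per table rather than row-by-row inserts. A sketch, assuming the cell table and met_cell_id column from the original post; a real transfer would copy every table and recreate all indexes:

    use strict;
    use warnings;
    use DBI;

    # Open a pure in-memory database, then attach the on-disk file to it.
    my $mem = DBI->connect("dbi:SQLite:dbname=:memory:", "", "",
        { RaiseError => 1, AutoCommit => 1 });
    $mem->do("ATTACH DATABASE 'carbonmodel.db' AS src");

    # Copy a table across in one statement, then index the in-memory copy
    # (CREATE TABLE ... AS SELECT does not carry constraints or indexes over).
    $mem->do("CREATE TABLE cell AS SELECT * FROM src.cell");
    $mem->do("CREATE INDEX idx_cell_met ON cell (met_cell_id)");

    $mem->do("DETACH DATABASE src");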

Re: [sqlite] speeding up row by row lookup in a large db

2009-03-21 Thread P Kishor
On Sat, Mar 21, 2009 at 2:07 PM, Nicolas Williams wrote: > On Sat, Mar 21, 2009 at 01:55:32PM -0500, P Kishor wrote: >> On Sat, Mar 21, 2009 at 1:25 PM, Nicolas Williams >> wrote: >> > On Sat, Mar 21, 2009 at 12:14:43PM -0500, P Kishor wrote:

Re: [sqlite] speeding up row by row lookup in a large db

2009-03-21 Thread Kees Nuyt
On Sat, 21 Mar 2009 14:07:17 -0500, Nicolas Williams wrote: >But the thing is, you might just set the cache size large enough and let >it warm up as you go -- the effect should be the same if your DB doesn't >grow very fast. > >> [Puneet:] Also, isn't there a 2 GB

Re: [sqlite] speeding up row by row lookup in a large db

2009-03-21 Thread Nicolas Williams
On Sat, Mar 21, 2009 at 01:55:32PM -0500, P Kishor wrote: > On Sat, Mar 21, 2009 at 1:25 PM, Nicolas Williams > wrote: > > On Sat, Mar 21, 2009 at 12:14:43PM -0500, P Kishor wrote: > >> If I can't improve 33 ms per query, then I can experiment with > > > > 33ms per-query

Re: [sqlite] speeding up row by row lookup in a large db

2009-03-21 Thread P Kishor
On Sat, Mar 21, 2009 at 1:25 PM, Nicolas Williams wrote: > On Sat, Mar 21, 2009 at 12:14:43PM -0500, P Kishor wrote: >> If I can't improve 33 ms per query, then I can experiment with > > 33ms per-query sounds like you're not caching enough of the database in > memory.  

Re: [sqlite] speeding up row by row lookup in a large db

2009-03-21 Thread Nicolas Williams
On Sat, Mar 21, 2009 at 12:14:43PM -0500, P Kishor wrote: > If I can't improve 33 ms per query, then I can experiment with 33ms per-query sounds like you're not caching enough of the database in memory. What's the cache size? Can you jack it up? Does the entire dataset fit in memory? If so,
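For reference, the page cache in SQLite 3.6.x defaults to 2000 pages, which at the 32 KB page size mentioned in this thread is only about 64 MB. Raising it is one PRAGMA per connection; a sketch (the 50000-page figure is illustrative, chosen to stay under the 2 GB limit discussed above):

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect("dbi:SQLite:dbname=carbonmodel.db", "", "",
        { RaiseError => 1, AutoCommit => 1 });

    # cache_size is counted in pages, so total cache = page_size * cache_size.
    # 50,000 pages * 32 KB/page is roughly 1.6 GB -- size this to the machine.
    $dbh->do("PRAGMA cache_size = 50000");

    # Read the settings back as a sanity check.
    my ($page_size)  = $dbh->selectrow_array("PRAGMA page_size");
    my ($cache_size) = $dbh->selectrow_array("PRAGMA cache_size");
    print "page_size=$page_size bytes, cache_size=$cache_size pages\n";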

Re: [sqlite] speeding up row by row lookup in a large db

2009-03-21 Thread Jim Wilcoxson
I'm not sure I completely understand your data structure, but here are some ideas: First, conduct experiments with different page and cache sizes. I don't know if you jumped to a 32K page size or experimented with each size, but experimentation is the way to go. I'm guessing that in your
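One wrinkle when experimenting: PRAGMA page_size only takes effect on an empty database (or after a VACUUM rebuilds the file), so each candidate size needs its own freshly created file. A sketch of such an experiment, with hypothetical file names and a trimmed-down cell table:

    use strict;
    use warnings;
    use DBI;

    for my $size (1024, 4096, 8192, 32768) {
        my $file = "carbonmodel_$size.db";
        unlink $file;

        my $dbh = DBI->connect("dbi:SQLite:dbname=$file", "", "",
            { RaiseError => 1, AutoCommit => 1 });

        # Must run before any table exists, otherwise it is silently ignored.
        $dbh->do("PRAGMA page_size = $size");
        $dbh->do("CREATE TABLE cell (cell_id INTEGER PRIMARY KEY, met_cell_id INTEGER)");

        # ... load the data and run the same query benchmark against each file ...
        $dbh->disconnect;
    }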

Re: [sqlite] speeding up row by row lookup in a large db

2009-03-21 Thread P Kishor
On Sat, Mar 21, 2009 at 12:03 PM, Kees Nuyt wrote: > > Hi Puneet, > > On Sat, 21 Mar 2009 10:47:44 -0500, P Kishor > wrote: > >>I should have mentioned the page_size in my OP. It is 32768 set by me >>at the start of db creation. >> >>Yes, the db connection

Re: [sqlite] speeding up row by row lookup in a large db

2009-03-21 Thread Kees Nuyt
Hi Puneet, On Sat, 21 Mar 2009 10:47:44 -0500, P Kishor wrote: >I should have mentioned the page_size in my OP. It is 32768 set by me >at the start of db creation. > >Yes, the db connection is kept open. Hm, apart from faster disks (15k RPM or high end SSD) I have no idea

Re: [sqlite] speeding up row by row lookup in a large db

2009-03-21 Thread P Kishor
On Sat, Mar 21, 2009 at 10:28 AM, Kees Nuyt wrote: > > Hi Puneet, > > On Sat, 21 Mar 2009 09:31:45 -0500, P Kishor > wrote: > >>Part 1. >>--- >> >>I have the following schema in a SQLite db that is 430 MB on my >>Macbook laptop's 320 GB HFS+ formatted

Re: [sqlite] speeding up row by row lookup in a large db

2009-03-21 Thread Kees Nuyt
Hi Puneet, On Sat, 21 Mar 2009 09:31:45 -0500, P Kishor wrote: >Part 1. >--- > >I have the following schema in a SQLite db that is 430 MB on my >Macbook laptop's 320 GB HFS+ formatted 7200 RPM disk with an 8 MB >cache. [...] >How can I, if at all, speed this up?

[sqlite] speeding up row by row lookup in a large db

2009-03-21 Thread P Kishor
Part 1.
---
I have the following schema in a SQLite db that is 430 MB on my Macbook laptop's 320 GB HFS+ formatted 7200 RPM disk with an 8 MB cache.

-- 1,000,000 rows
CREATE TABLE cell (
  cell_id INTEGER PRIMARY KEY,
  met_cell_id INTEGER,
  8 other INTEGER or REAL columns
)
-- 38 rows
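The row-by-row lookup being timed presumably looks something like the sketch below; the met table name and its met_cell_id column are assumptions based on the truncated schema above, and the model step is left as a comment:

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect("dbi:SQLite:dbname=carbonmodel.db", "", "",
        { RaiseError => 1, AutoCommit => 1 });

    # Prepare both lookups once; only the bind values change per cell.
    my $get_cell = $dbh->prepare("SELECT * FROM cell WHERE cell_id = ?");
    my $get_met  = $dbh->prepare("SELECT * FROM met WHERE met_cell_id = ?");

    for my $cell_id (1 .. 1_000_000) {
        $get_cell->execute($cell_id);
        my $cell = $get_cell->fetchrow_hashref or next;

        $get_met->execute($cell->{met_cell_id});
        my $met = $get_met->fetchall_arrayref({});

        # ... run the carbon model for this cell using $cell and $met ...
    }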

Re: [sqlite] writing a text file from sqlite db in C

2009-03-21 Thread Kees Nuyt
On Fri, 20 Mar 2009 08:12:28 -0400 (EDT), mrobi...@cs.fiu.edu wrote: >Thanks for your help. > >I checked those sites, and I worked with sqlite3.exe using the command >line tool, no problems, however I can not find anywhere samples of how to >apply these commands in C. > >Thru trial and error, I

Re: [sqlite] Use sqlite3_update_hook() to watch for changes

2009-03-21 Thread Roger Binns
Nicolas Williams wrote: > The OS can't know when a SQLite3 transaction has finished, It doesn't need to. A second connection will enter its busy handler until the transaction in the first connection has completed. Roger
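On the Perl/DBI side of this thread, DBD::SQLite exposes both pieces mentioned here; a sketch using the method names from the DBD::SQLite documentation (worth checking against the installed version):

    use strict;
    use warnings;
    use DBI;

    my $writer  = DBI->connect("dbi:SQLite:dbname=test.db", "", "", { RaiseError => 1 });
    my $watcher = DBI->connect("dbi:SQLite:dbname=test.db", "", "", { RaiseError => 1 });

    # Wait up to 5 seconds for the other connection's transaction to finish
    # instead of failing immediately with SQLITE_BUSY.
    $watcher->sqlite_busy_timeout(5000);

    # The update hook fires only for changes made through the connection it
    # is registered on; $action is a numeric SQLite action code.
    $writer->sqlite_update_hook(sub {
        my ($action, $database, $table, $rowid) = @_;
        print "change in $database.$table, rowid $rowid\n";
    });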