> I have a simple table with five columns and 450,000 rows.  In 
> SQLiteSpy, I can run "SELECT * FROM trend_data" and get all 
> 450,000 rows in 4.5 seconds.  But in my program, if I use 
> sqlite3_prepare() and
> sqlite3_step() until I run out of data, it takes 55 seconds 
> to get through all rows.  A test with hard-coded junk data 
> showed that my program accounts for only 2 seconds of that. 
>  If I use sqlite3_get_table(), I can cut my time in half, 
> which is nice, but I'm still taking 25 seconds to get the 
> same data SQLiteSpy is getting in 4.
> How is SQLiteSpy doing it, and can I use the same trick?

I suspect that SQLiteSpy is not actually extracting all the rows, since
you can't display 450,000 rows on the screen at any one time. It
probably uses some form of windowed (double-buffered) fetching,
retrieving data on demand as you scroll through the rows to give the
illusion that it has extracted them all.

You can limit the number of rows returned, and the row position you
start from, with the LIMIT and OFFSET clauses in your SQL statement.
See http://www.sqlite.org/lang_select.html

Rgds


