On Nov 19, 2008, at 3:08 AM, Jens Miltner wrote:

> Hi,
>
> we're seeing terrible performance problems when fetching data from one
> of our tables:
> The table contains roughly 1.2 Million rows and a plain "SELECT
> COUNT(*) FROM t" query takes 8 minutes to finish.

That's because "SELECT count(*) FROM t" has to visit and count every
row in the table - all 1.2 million of them - which means reading the
entire table from disk.  If each row is 100 bytes in size, that is
roughly 120MB of I/O.

An SQLite database does *not* maintain the number of rows in a table  
as separate metadata.  Many client/server database engines do maintain  
such metadata.  When I was designing the SQLite file format, I  
deliberately chose to omit such metadata because including it would  
slow down inserts and deletes.

If you frequently need to know how many rows are in a certain table,  
use insert and delete triggers to maintain the count yourself in a  
separate table.  Then just read out the count from the separate table  
when you need it, rather than recomputing it by reading all 1.2  
million rows of the original table.
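A minimal sketch of that trigger approach, using Python's stdlib
sqlite3 module so it can be run as-is.  The table and counter names
(t, t_rowcount) and the row sizes here are illustrative, not from the
original schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t(id INTEGER PRIMARY KEY, payload TEXT);

-- Hypothetical one-row counter table holding the current row count of t.
CREATE TABLE t_rowcount(cnt INTEGER NOT NULL);
INSERT INTO t_rowcount VALUES(0);

-- Keep the counter in sync on every insert and delete.
CREATE TRIGGER t_ins AFTER INSERT ON t BEGIN
  UPDATE t_rowcount SET cnt = cnt + 1;
END;

CREATE TRIGGER t_del AFTER DELETE ON t BEGIN
  UPDATE t_rowcount SET cnt = cnt - 1;
END;
""")

con.executemany("INSERT INTO t(payload) VALUES (?)", [("x",)] * 1000)
con.execute("DELETE FROM t WHERE id <= 10")
con.commit()

# O(1) read of the count - no full-table scan.
(count,) = con.execute("SELECT cnt FROM t_rowcount").fetchone()
print(count)  # 990
```

The small extra cost is one UPDATE per insert/delete, which is the
trade-off described above.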


D. Richard Hipp
[EMAIL PROTECTED]



_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users