Re: [sqlite] Terrible performance for one of our tables

2008-11-19 Thread Gerry Snyder
Jens Miltner wrote: > > Any ideas why there would be such a _huge_ performance hit after > deleting and re-inserting records for a while? Without deletes/inserts the reads are sequential, and the OS and/or the drive/controller are reading ahead for you, hiding much of the disk read and seek
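[Editor's note: the usual remedy for the fragmentation described above is to rebuild the database file so table pages are laid out contiguously again. A minimal sketch using Python's sqlite3 module; the database path is illustrative:]

```python
import sqlite3

# Hypothetical database path; substitute your own file.
conn = sqlite3.connect("app.db")

# After heavy delete/insert churn, a table's pages end up scattered
# across the file, so a full scan degenerates into one seek per page.
# VACUUM rewrites the whole database, packing each table's pages
# contiguously so sequential scans become sequential reads again.
conn.execute("VACUUM")
conn.close()
```

Note that VACUUM rewrites the entire file, so on a large database it can take a while and needs free disk space roughly equal to the database size.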

Re: [sqlite] Terrible performance for one of our tables

2008-11-19 Thread Alexey Pechnikov
Hello! In a message of Wednesday 19 November 2008 15:05:01, D. Richard Hipp wrote: > That's because "SELECT count(*) FROM t" has to read and count every > row in the table - all 1.2 million rows. And this involves reading > all 1.2 million rows from disk. If each row is 100 bytes in

Re: [sqlite] Terrible performance for one of our tables

2008-11-19 Thread John Stanton
Jens Miltner wrote: > Am 19.11.2008 um 13:05 schrieb D. Richard Hipp: > >> On Nov 19, 2008, at 3:08 AM, Jens Miltner wrote: >> >>> Hi, >>> >>> we're seeing terrible performance problems when fetching data from >>> one >>> of our tables: >>> The table contains roughly 1.2 Million rows and a

Re: [sqlite] Terrible performance for one of our tables

2008-11-19 Thread Jens Miltner
Am 19.11.2008 um 13:05 schrieb D. Richard Hipp: > > On Nov 19, 2008, at 3:08 AM, Jens Miltner wrote: > >> Hi, >> >> we're seeing terrible performance problems when fetching data from >> one >> of our tables: >> The table contains roughly 1.2 Million rows and a plain "SELECT >> COUNT(*) FROM t"

Re: [sqlite] Terrible performance for one of our tables

2008-11-19 Thread Brad Stiles
>> Out of interest why must it completely READ each entire row? Is it >> because '*' has been used? > > The database reads a page at a time. A page is, by default, 1024 > bytes. A single page might contain multiple rows, or a single large > row might be spread across multiple pages. > When
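[Editor's note: the page-at-a-time behavior described above can be inspected directly. A small sketch, assuming ~100-byte rows as in this thread (the 100-byte figure is the original poster's average, not a SQLite constant):]

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Page size was 1024 bytes by default in 2008-era SQLite; newer builds
# default to 4096. Every read pulls in a whole page, never a single row.
page_size = conn.execute("PRAGMA page_size").fetchone()[0]

# With ~100-byte rows, roughly this many rows share each page, so one
# page read serves several consecutive rows -- while a single oversized
# row can instead spill across multiple pages.
rows_per_page = page_size // 100
print(page_size, rows_per_page)
```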

Re: [sqlite] Terrible performance for one of our tables

2008-11-19 Thread D. Richard Hipp
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] > ] On Behalf Of D. Richard Hipp > Sent: 19 November 2008 12:05 > To: General Discussion of SQLite Database > Subject: Re: [sqlite] Terrible performance for one of our tables > > > On Nov 19, 2008, at 3:08 AM, Jens Mil

Re: [sqlite] Terrible performance for one of our tables

2008-11-19 Thread Hardy, Andrew
[sqlite] Terrible performance for one of our tables On Nov 19, 2008, at 3:08 AM, Jens Miltner wrote: > Hi, > > we're seeing terrible performance problems when fetching data from one > of our tables: > The table contains roughly 1.2 Million rows and a plain "SELECT > COUNT(*) FROM t

Re: [sqlite] Terrible performance for one of our tables

2008-11-19 Thread D. Richard Hipp
On Nov 19, 2008, at 3:08 AM, Jens Miltner wrote: > Hi, > > we're seeing terrible performance problems when fetching data from one > of our tables: > The table contains roughly 1.2 Million rows and a plain "SELECT > COUNT(*) FROM t" query takes 8 minutes to finish. That's because "SELECT
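[Editor's note: rough arithmetic, using the figures from this thread (1.2 million rows averaging ~100 bytes), shows why the same amount of data takes seconds to read sequentially but minutes when fragmented:]

```python
rows = 1_200_000
row_bytes = 100                       # average row size from the original post
total_mb = rows * row_bytes / 1e6     # ~120 MB of raw row data

# Read sequentially at ~50 MB/s, the whole table is only a few seconds:
sequential_s = total_mb / 50

# But if each 1 KB page needs its own ~7 ms seek, the same scan takes
# on the order of ten minutes -- the same ballpark as the 8 minutes
# reported by the original poster.
pages = rows * row_bytes // 1024
seek_bound_s = pages * 0.007
print(sequential_s, seek_bound_s / 60)
```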

Re: [sqlite] Terrible performance for one of our tables

2008-11-19 Thread Elefterios Stamatogiannakis
What page size do you use in your database? Maybe you should increase it? Assuming a fully fragmented table and a 16 KB page size, the I/O rate should be (if my calculations are correct) in the 2 MB/s range (assuming a hard disk with 50 MB/s bandwidth and 7 ms seek time). For 32KB page
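[Editor's note: the estimate above follows from charging one full seek per page read. A sketch of that arithmetic, with the poster's assumed disk parameters (50 MB/s bandwidth, 7 ms seek):]

```python
def scan_rate_mb_s(page_kb, seek_ms=7.0, bandwidth_mb_s=50.0):
    """Effective scan rate when every page read costs one seek plus one transfer."""
    page_mb = page_kb / 1024.0
    transfer_ms = page_mb / bandwidth_mb_s * 1000.0   # time to transfer the page
    return page_mb / ((seek_ms + transfer_ms) / 1000.0)

# 16 KB pages land in the ~2 MB/s range claimed above;
# 32 KB pages roughly double it, since the 7 ms seek dominates
# the sub-millisecond transfer either way.
print(scan_rate_mb_s(16), scan_rate_mb_s(32))
```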

Re: [sqlite] Terrible performance for one of our tables

2008-11-19 Thread Marcus Grimm
Hi, do you have an autoincrement primary key in that table? If not, try to add one, giving SQLite the chance to query an internal index rather than the table itself. I don't see why SQLite should read all the data from that table. I've read somewhere that count(*) may scan the whole table, if you
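[Editor's note: beyond the index suggestion above, a common workaround for a slow count(*) is to keep the row count in a small side table maintained by triggers, so reading it never touches the big table. A minimal sketch; the table and trigger names are illustrative, not from the thread:]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT);

    -- Hypothetical side table holding a single pre-computed row count,
    -- so the application never pays for a full-table count(*) scan.
    CREATE TABLE t_count (n INTEGER NOT NULL);
    INSERT INTO t_count VALUES (0);

    -- SQLite triggers fire once per affected row, so the counter
    -- stays exact across multi-row inserts and deletes.
    CREATE TRIGGER t_ins AFTER INSERT ON t
        BEGIN UPDATE t_count SET n = n + 1; END;
    CREATE TRIGGER t_del AFTER DELETE ON t
        BEGIN UPDATE t_count SET n = n - 1; END;
""")

conn.executemany("INSERT INTO t (payload) VALUES (?)", [("x",)] * 5)
conn.execute("DELETE FROM t WHERE id = 1")
print(conn.execute("SELECT n FROM t_count").fetchone()[0])  # 4
```

The trade-off is a small write overhead on every insert and delete in exchange for an O(1) count.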

[sqlite] Terrible performance for one of our tables

2008-11-19 Thread Jens Miltner
Hi, we're seeing terrible performance problems when fetching data from one of our tables: The table contains roughly 1.2 million rows and a plain "SELECT COUNT(*) FROM t" query takes 8 minutes to finish. The table contains 10 fields, but the records average to about 100 bytes of data total.