Jens Miltner wrote:
>
> Any ideas why there would be such a _huge_ performance hit after
> deleting and re-inserting records for a while?
Without deletes/inserts the reads are sequential, and the OS and/or the
drive/controller are reading ahead for you, hiding much of the disk read
and seek latency.
Hello!
In a message of Wednesday 19 November 2008 15:05:01, D. Richard Hipp wrote:
> That's because "SELECT count(*) FROM t" has to read and count every
> row in the table - all 1.2 million rows. And this involves reading
> all 1.2 million rows from disk. If each row is 100 bytes in
Am 19.11.2008 um 13:05 schrieb D. Richard Hipp:
>
> On Nov 19, 2008, at 3:08 AM, Jens Miltner wrote:
>
>> Hi,
>>
>> we're seeing terrible performance problems when fetching data from
>> one
>> of our tables:
>> The table contains roughly 1.2 Million rows and a plain "SELECT
>> COUNT(*) FROM t"
>> Out of interest why must it completely READ each entire row? Is it
>> because '*' has been used?
>
> The database reads a page at a time. A page is, by default, 1024
> bytes. A single page might contain multiple rows, or a single large
> row might be spread across multiple pages.
> When
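The page-at-a-time behavior described above can be observed directly. Here is a minimal sketch (not from the thread; the table and column names are made up) that uses SQLite's `page_size` and `page_count` pragmas to see how many pages a table occupies; a full-table `COUNT(*)` has to pull in every one of those pages:

```python
import sqlite3

# Build a throwaway table whose rows are roughly 100 bytes, like the
# table discussed in the thread, then ask SQLite how many pages it uses.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a TEXT, b TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [("x" * 50, "y" * 50)] * 10000)

# The default page size was 1024 bytes in 2008; recent builds default
# to 4096, so don't rely on a specific value.
page_size = con.execute("PRAGMA page_size").fetchone()[0]
page_count = con.execute("PRAGMA page_count").fetchone()[0]
print(page_size, page_count)
```

If those pages are scattered across the disk (as happens after heavy delete/insert churn), each page read can cost a seek, which is what makes the scan slow.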
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of D. Richard Hipp
> Sent: 19 November 2008 12:05
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] Terrible performance for one of our tables
>
>
> On Nov 19, 2008, at 3:08 AM, Jens Mil
On Nov 19, 2008, at 3:08 AM, Jens Miltner wrote:
> Hi,
>
> we're seeing terrible performance problems when fetching data from one
> of our tables:
> The table contains roughly 1.2 Million rows and a plain "SELECT
> COUNT(*) FROM t" query takes 8 minutes to finish.
That's because "SELECT
What page size do you use in your database? Maybe you should increase it?
Assuming a fully fragmented table with a page size of 16 KB, the I/O
rate should be (if my calculations are correct) in the 2 MB/s range
(assuming a hard disk with 50 MB/s bandwidth and 7 ms seek time).
For 32 KB page
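The arithmetic behind that estimate is worth spelling out: with one seek per page, each page costs the seek time plus the transfer time, so the effective rate is page size divided by that total. A quick check under the same assumptions (50 MB/s bandwidth, 7 ms seek, fully fragmented so every page needs a seek):

```python
# Back-of-the-envelope check of the ~2 MB/s figure above.
def effective_rate_mb_s(page_kb, bandwidth_mb_s=50.0, seek_ms=7.0):
    """Effective throughput when every page read costs a full seek."""
    transfer_ms = page_kb / 1024 / bandwidth_mb_s * 1000  # time to move the bytes
    page_mb = page_kb / 1024
    return page_mb / ((seek_ms + transfer_ms) / 1000)

print(round(effective_rate_mb_s(16), 2))  # roughly 2.1 MB/s
print(round(effective_rate_mb_s(32), 2))  # roughly 4.1 MB/s
```

The seek time dominates, which is why doubling the page size nearly doubles the effective rate: the 7 ms seek is amortized over twice as many bytes.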
Hi,
do you have an autoincrement primary key in that table?
If not, try adding one, giving sqlite the chance to
query an internal index rather than the table itself.
I don't see why sqlite should read all the data from that table.
I've read somewhere that count(*) may scan the whole table,
if you
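An unqualified `COUNT(*)` does indeed scan the whole table (or an index over it). When the count is needed often, a common workaround, not proposed in this thread, is to maintain the row count in a one-row side table kept current by triggers, so reading it costs a single row lookup. A sketch, with all names made up for illustration:

```python
import sqlite3

# Keep the row count of t in a side table, maintained by triggers,
# so "how many rows?" no longer requires a full scan.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT);
CREATE TABLE t_count (n INTEGER NOT NULL);
INSERT INTO t_count VALUES (0);
CREATE TRIGGER t_ins AFTER INSERT ON t BEGIN
  UPDATE t_count SET n = n + 1;
END;
CREATE TRIGGER t_del AFTER DELETE ON t BEGIN
  UPDATE t_count SET n = n - 1;
END;
""")
con.executemany("INSERT INTO t (payload) VALUES (?)", [("x",)] * 1000)
con.execute("DELETE FROM t WHERE id <= 10")
print(con.execute("SELECT n FROM t_count").fetchone()[0])  # 990
```

The triggers add a small cost to every insert and delete, so this trades write throughput for a cheap count; whether that is worthwhile depends on the workload.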
Hi,
we're seeing terrible performance problems when fetching data from one
of our tables:
The table contains roughly 1.2 Million rows and a plain "SELECT
COUNT(*) FROM t" query takes 8 minutes to finish.
The table contains 10 fields, but the records average about 100
bytes of data in total.