I have now tested this on a table of some 300,000 rows (no indexes at all),
looking for 100 rows in the middle of the table, sorted ascending on the
time-stamp field.
It gave me a speed increase of about 25%.
Looking for rows at the beginning of the table, the increase was larger,
some 50%.
Looking for rows at the end (highest timestamps), the plain SELECT was quite
a lot faster than the version with the rowid subqueries and LIMIT.

Maybe not the kind of increase you were after, but still something, and with
no extra overhead at all, just different SQL. It is probably only worth it
when looking for early timestamps.

RBS


-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
[EMAIL PROTECTED]
Sent: 14 April 2008 19:11
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Implicit INDEX?

How about this:

select *
from table1
where rowid >= (select rowid
                from table1
                where time_stamp = xxx
                limit 1)
  and rowid <  (select rowid
                from table1
                where time_stamp > xxx
                limit 1)
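In case it helps, here is roughly how the same rowid-range trick looks from
Python's sqlite3 module, against a made-up table1 with just a time_stamp and
a value column (the real schema will differ). Note it only works because the
rows are inserted in ascending time_stamp order, so rowid order matches
timestamp order:

```python
import sqlite3

# Hypothetical table1: rows inserted in ascending time_stamp order,
# so rowid order matches timestamp order (the technique depends on this).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (time_stamp INTEGER, value TEXT)")
conn.executemany(
    "INSERT INTO table1 (time_stamp, value) VALUES (?, ?)",
    [(t, f"row {t}") for t in range(1000)],
)

xxx = 500  # the timestamp we are looking for
rows = conn.execute(
    """
    SELECT * FROM table1
    WHERE rowid >= (SELECT rowid FROM table1
                    WHERE time_stamp = ? LIMIT 1)
      AND rowid <  (SELECT rowid FROM table1
                    WHERE time_stamp > ? LIMIT 1)
    """,
    (xxx, xxx),
).fetchall()
# rows now holds every row whose time_stamp equals xxx; the LIMIT 1
# subqueries stop scanning early, which is why early timestamps win.
```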

RBS


> Donald,
>
>> To test this, I think you'd want to create a select for some of the most
>> recent data (i.e. data at the tail of the database), perhaps after
>> clearing cache.  I suspect this will take the full table scan time to
>> return any values.
>
>          I'd actually just thought of that.  All my test SELECTs happened
> to be using data close to the "front" of the file.  As suspected, if I try
> it on "later" data, I have to wait for the table scan, then I get the
> data.  I'd be erroneously assuming that SQLite was somehow searching for
> the first value quickly, then scanning the table.
>
>> Two thoughts:
>>
>>   1).  Easy.
>> How bad is the extra 0.8 GByte cost of the index?  At today's prices,
>> it's only about 20 cents on ordinary 5" drives.  (but maybe you're
>> programming a cellphone)
>
>          No, it's nothing terribly constrained.  I'm just trying to
> understand the mechanisms and do what I can to keep the size down where
> possible.  I was somewhat surprised to find that adding an index on a
> single INTEGER column nearly doubled the size of the database and wanted
> to figure out if there was a way around it, given that the column will
> always be sorted.  (And given my, perhaps erroneous understanding that
> creating an INDEX just makes sure that the column stays sorted so SQLite
> can search through it more intelligently)
>
>>   2).  Fancy.
>> You could create your own sparse index table mapping a ROWID to say,
>> every thousandth timestamp.  Then you could create upper and lower ROWID
>> bounds on any query based on timestamps.  Maybe you import the records
>> in batches already and can create the new table at the same time.
>
>          This is closer to what I'm probably going to do.  The data gets
> pulled in every 5 minutes, but between runs, very little actually changes.
> So the idea is to store only the changes along with a full dump say once
> or twice every day.  Then I can just query the values from <requested
> time> to <last full row store> and compute the state of everything from
> that data.
>
>          Thanks,
>
>       Chris
> _______________________________________________
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
>
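For what it is worth, Donald's sparse-index suggestion quoted above might be
sketched like this (table names, the 1000-row stride, and the uniform
timestamps are all made up for illustration). The side table records the
rowid of every 1000th row, so a timestamp query can be bounded to a single
1000-row window instead of a full scan:

```python
import sqlite3

# Hypothetical table1 plus a hand-built sparse index: one (rowid, timestamp)
# entry per 1000 rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (time_stamp INTEGER)")
conn.executemany("INSERT INTO table1 (time_stamp) VALUES (?)",
                 [(t,) for t in range(10000)])

conn.execute("CREATE TABLE sparse_idx (row_ref INTEGER, time_stamp INTEGER)")
conn.execute("""INSERT INTO sparse_idx
                SELECT rowid, time_stamp FROM table1
                WHERE (rowid - 1) % 1000 = 0""")

ts = 4321
# Lower bound: last sampled rowid at or before ts; upper: first one after it.
lo = conn.execute("SELECT max(row_ref) FROM sparse_idx WHERE time_stamp <= ?",
                  (ts,)).fetchone()[0]
hi = conn.execute("SELECT min(row_ref) FROM sparse_idx WHERE time_stamp > ?",
                  (ts,)).fetchone()[0]
# The scan for ts is now confined to at most 1000 rows.
row = conn.execute(
    "SELECT rowid FROM table1 "
    "WHERE rowid >= ? AND rowid < ? AND time_stamp = ?",
    (lo, hi, ts)).fetchone()
```

The sparse_idx table costs one row per thousand, so it stays tiny compared
to a full index on the column.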


