On Oct 20, 2008, at 2:57 AM, Clodo wrote:

> Thanks for your answer, MikeW.
>> Could you put the update records into their own separate table
> I already solved my problem using the new "INDEXED BY".
> I'm trying to understand whether it is a SQLite limit or a bug.

It's an unfortunate edge case, all right. SQLite's query optimizer is
very simple. Running ANALYZE sometimes improves the decisions it makes,
but sometimes it does not.
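For what it's worth, the statistics that ANALYZE gathers are just rows in
the sqlite_stat1 table, which the optimizer consults at planning time. A
minimal sketch using Python's bundled sqlite3 module (my own wrapper, not
part of your test case):

```python
import sqlite3

# ANALYZE writes its statistics into the sqlite_stat1 table; the
# optimizer reads that table when it picks a plan.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (x INT NOT NULL);
    INSERT INTO t VALUES (1), (2), (3);
    CREATE INDEX tx ON t (x);
    ANALYZE;
""")

# One stat row per analyzed index: table name, index name, and a
# space-separated list of integers describing the index.
stats = conn.execute("SELECT tbl, idx, stat FROM sqlite_stat1").fetchall()
print(stats)
```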

If you must use ANALYZE, overriding its decision with INDEXED BY is
probably your best solution at this point. As you are aware, INDEXED BY
is brand new in 3.6.4, so your queries will be syntax errors on older
versions of SQLite.
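By way of illustration, here is how the override looks from Python's
bundled sqlite3 module (table and index names taken from your test case,
the Python harness is my own). With INDEXED BY, SQLite either uses the
named index or raises an error; it never falls back to a full table scan:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE mytest (mycode INT NOT NULL, mymark INT NOT NULL);
    INSERT INTO mytest VALUES (0, 0);
    CREATE INDEX myindex ON mytest (mymark);
    ANALYZE;
""")

# INDEXED BY forces the planner to use myindex; if the index cannot
# serve this query, SQLite raises an error instead of scanning.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT mycode FROM mytest INDEXED BY myindex WHERE mymark = 1"
).fetchall()

# The plan detail (last column) names the forced index, even after
# ANALYZE has run.
print(plan)
```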

> And i'm trying to understand your answer about the hashes :(

SQLite doesn't use hash indexes, and I'm not sure it would make any
difference in this kind of case if it did. (It's not a crazy suggestion,
though - a lot of database systems do use hash indexes.)
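For anyone curious, a toy sketch (plain Python, nothing to do with SQLite
internals) of what a hash index buys and what it doesn't: equality probes
become one lookup, but range scans still have to visit every bucket,
which is why SQLite sticks with B-tree indexes:

```python
# Toy hash index: map each key value to the rowids that hold it.
rows = [(i, 0) for i in range(8)] + [(8, 1)]  # (rowid, mymark)

hash_index = {}
for rowid, mymark in rows:
    hash_index.setdefault(mymark, []).append(rowid)

# Equality lookup: a single probe, even though most rows share mymark=0.
assert hash_index.get(1) == [8]

# A range predicate (mymark > 0) cannot use the hash structure; it must
# visit every bucket. B-trees keep keys sorted so ranges stay cheap.
matches = sorted(rid for mark, rids in hash_index.items()
                 if mark > 0 for rid in rids)
assert matches == [8]
```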


> For the moment, here you can find the comparison between SQLite,
> MySQL, and Oracle9 with the minimal full test case.
> Only SQLite does a full table scan after an "analyze". But I'm not
> 100% sure about the right syntax for MySQL and Oracle.
> -----------------------------------------------
> SqLite (from new empty database)
>
> SQL> CREATE TABLE mytest
> (
> mycode    int NOT NULL,
> mymark    int NOT NULL
> );
> SQL> insert into mytest select 0,0;
> SQL> insert into mytest select * from mytest;
> -- rerun the last statement 21 times, until I reach 2 million records.
> SQL> create index myindex on mytest (mymark);
> SQL> explain query plan select mycode from mytest where mymark=1;
> detail: TABLE mytest WITH INDEX myindex
> SQL> analyze;
> SQL> explain query plan select mycode from mytest where mymark=1;
> detail: TABLE mytest
> -----------------------------------------------
> Oracle:
>
> SQL> CREATE TABLE mytest
> (
> mycode    int NOT NULL,
> mymark    int NOT NULL
> );
> SQL> insert into mytest select 0,0 from dual;
> SQL> insert into mytest select * from mytest;
> -- rerun the last statement 21 times, until I reach 2 million records.
> SQL> create index myindex on mytest (mymark) tablespace users;
> -- with autotrace enabled
> SQL> select mycode from mytest where mymark=1;
> Execution Plan
>   0      SELECT STATEMENT Optimizer=CHOOSE
>   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'MYTEST'
>   2    1     INDEX (RANGE SCAN) OF 'MYINDEX' (NON-UNIQUE)
> SQL> analyze table mytest compute statistics;
> SQL> select mycode from mytest where mymark=1;
> Execution Plan
>   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=1 Bytes=2)
>   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'MYTEST' (Cost=4 Card=1 Bytes=2)
>   2    1     INDEX (RANGE SCAN) OF 'MYINDEX' (NON-UNIQUE) (Cost=3 Card=1)
>
> -------------------------------------------
> MySql:
>
> SQL> CREATE TABLE mytest
> (
> mycode    int NOT NULL,
> mymark    int NOT NULL
> );
> SQL> insert into mytest select 0,0;
> SQL> insert into mytest select * from mytest;
> -- rerun the last statement 21 times, until I reach 2 million records.
> SQL> create index myindex on mytest (mymark);
> SQL> explain select mycode from mytest where mymark=1;
> id  select_type  table   type  possible_keys  key      key_len  ref    rows  Extra
> 1   SIMPLE       mytest  ref   myindex        myindex  4        const  1
> SQL> analyze table mytest;
> SQL> explain select mycode from mytest where mymark=1;
> id  select_type  table   type  possible_keys  key      key_len  ref    rows  Extra
> 1   SIMPLE       mytest  ref   myindex        myindex  4        const  1
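Incidentally, the SQLite half of your test case can be driven from
Python's bundled sqlite3 module as well (my own sketch; the post-ANALYZE
plan text varies between SQLite versions, so I only print it here):

```python
import sqlite3

# Reproduce the test case: ~2 million identical rows, an index on
# mymark, then compare plans before and after ANALYZE.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytest (mycode INT NOT NULL, mymark INT NOT NULL)")
conn.execute("INSERT INTO mytest VALUES (0, 0)")
for _ in range(21):  # double the row count 21 times -> 2^21 rows
    conn.execute("INSERT INTO mytest SELECT * FROM mytest")
conn.execute("CREATE INDEX myindex ON mytest (mymark)")

query = "EXPLAIN QUERY PLAN SELECT mycode FROM mytest WHERE mymark = 1"
before = str(conn.execute(query).fetchall()[0][-1])

conn.execute("ANALYZE")
after = str(conn.execute(query).fetchall()[0][-1])

# Before ANALYZE the equality lookup goes through myindex; afterwards
# the statistics may push the planner toward a full scan, depending on
# the SQLite version.
print("before ANALYZE:", before)
print("after  ANALYZE:", after)
```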
>
>
>> Don't have the source to hand, but it's possible that SQLite does
>> indexes by hash, in which case it wouldn't "know" that the field
>> values are all the same, just that they had the same hash value,
>> hence it will have to do a linear search, which it will always have
>> to do if the hashes are the same.
>>
>> Could you put the update records into their own separate table,
>> then move them into the main one later - that would save all the
>> slow accesses.
>>
>> Regards,
>> MikeW

_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
