2011/11/9 Simon Slavin <slav...@bigfraud.org>

>
> Didn't someone recently note that entering the first million records was
> fast, but if he then closed and reopened the database, entering the next
> 100,000 records was slow ?
>
>
Yes, and there is still no real explanation for it other than slow disk
reads. But even with very slow random disk I/O, 30 seconds still seems way
too long for a 100MB file.

But today I made a new observation: if I create the same table as a virtual
FTS4 table, I can add the additional rows within 1 second (even on an
un-cached database file).
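
For reference, the comparison is roughly the following (table and column
names here are just placeholders, not my actual schema):

    -- plain table with a regular index
    CREATE TABLE words(word TEXT);
    CREATE INDEX words_idx ON words(word);

    -- the same column stored as an FTS4 virtual table instead
    CREATE VIRTUAL TABLE words_fts USING fts4(word);

    -- the additional rows are inserted the same way in both cases,
    -- e.g. (inside a single transaction):
    INSERT INTO words_fts(word) VALUES ('example');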

So if the reason for the slowness is disk reads, does the FTS4 way of
creating/updating the index require far fewer reads? Maybe because it
allows for separate tree branches?

FTS is overkill for my simple requirements, but if it's so much faster than
a regular index, why not? The only things holding me back from switching to
FTS for this table are:

 - I cannot use a UNIQUE constraint to disallow duplicate values.
 - I cannot search efficiently for rows that DON'T match a certain value,
because FTS doesn't allow a standalone NOT operator (see the example below).
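
To make the second point concrete, this is the restriction I mean (again
using the placeholder names from the sketch above):

    -- allowed: NOT as a binary operator between two terms
    SELECT rowid FROM words_fts WHERE word MATCH 'foo NOT bar';

    -- not allowed: a standalone NOT with no left-hand term, so
    -- "all rows that do NOT contain 'bar'" can't be written this way
    SELECT rowid FROM words_fts WHERE word MATCH 'NOT bar';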

So can someone explain what FTS is doing behind the scenes that makes these
additional inserts so much faster?