On 19 Mar 2016, at 9:36pm, Domingo Alvarez Duarte <sqlite-mail at dev.dadbiz.es> wrote:

> Here it's: (database size 5GB, run twice)  

That's a nice piece of testing.

As you can see from Dr Hipp's answer in another thread, an index is not created 
by reading the table one row at a time and using each row to modify the index. 
Instead, all the rows are copied and sorted first, and the sorted copy is then 
used to generate the index in one pass.

If this approach were used to generate multiple indexes at once, each copy of 
the rows would need to be sorted into a different order.  The amount of 
temporary space (memory or file) required would grow under your scheme, since 
space would be needed to hold all the partially built indexes at once.

This would mean more cache activity (or paging, or swapping, or whatever you 
call it), which would slow the system down and add wear to your storage.

So I think that although your idea might be efficient for small tables or 
setups with a lot of memory, it would be less efficient for other setups.

If you want to get close to simulating what you asked for, compare the time 
taken for this:

CREATE TABLE t (a REAL, b REAL);
BEGIN;
-- INSERT lots of rows of random numbers here
CREATE INDEX t_a ON t (a);
CREATE INDEX t_b ON t (b);
END;

with this:

CREATE TABLE t (a REAL, b REAL);
BEGIN;
CREATE INDEX t_a ON t (a);
CREATE INDEX t_b ON t (b);
-- INSERT lots of rows of random numbers here
END;
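If it helps, the comparison above can be run as a quick timing harness. This is a minimal sketch using Python's built-in sqlite3 module with an in-memory database and a modest row count (not the 5GB file from your test, so the numbers will differ, but the ordering effect should show); table and index names follow the example above:

```python
import sqlite3
import time
import random

def timed(indexes_first, nrows=200_000):
    """Time one transaction: either indexes-then-inserts or inserts-then-indexes."""
    # isolation_level=None puts the connection in autocommit mode,
    # so the explicit BEGIN/COMMIT below controls the transaction.
    con = sqlite3.connect(":memory:", isolation_level=None)
    con.execute("CREATE TABLE t (a REAL, b REAL)")
    rows = [(random.random(), random.random()) for _ in range(nrows)]
    start = time.perf_counter()
    con.execute("BEGIN")
    if indexes_first:
        con.execute("CREATE INDEX t_a ON t (a)")
        con.execute("CREATE INDEX t_b ON t (b)")
        con.executemany("INSERT INTO t VALUES (?, ?)", rows)
    else:
        con.executemany("INSERT INTO t VALUES (?, ?)", rows)
        con.execute("CREATE INDEX t_a ON t (a)")
        con.execute("CREATE INDEX t_b ON t (b)")
    con.execute("COMMIT")
    elapsed = time.perf_counter() - start
    con.close()
    return elapsed

print("indexes after inserts :", timed(indexes_first=False))
print("indexes before inserts:", timed(indexes_first=True))
```

The second variant forces row-at-a-time index maintenance during the inserts, which is roughly what building several indexes "as the rows arrive" would entail.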

Simon
