----- Original Message ----
From: Unit 5 <[EMAIL PROTECTED]>
To: sqlite-users@sqlite.org
Sent: Thursday, November 9, 2006 8:02:51 AM
Subject: RE: [sqlite] Insert statement taking too long


> --- Robert Simpson <[EMAIL PROTECTED]> wrote:
> > You need to create an index on the columns you're
> > joining.  Otherwise I
> > believe 100,000,000 rows (10k x 10k) in table 2 will
> > be scanned while SQLite
> > looks for matches against the 10,000 rows in table
> > 1.

> While that makes sense, I suspect there is something
> else going on.  
> 
> I did a few more tests.  For example, if I remove the
> INSERT but keep the exact same SELECT statement with
> the joins, it is fast again.  So, it seems that it is
> quite slow when doing the inserts.  I was thinking
> that perhaps the statement was not in a transaction,
> but I tried that too.
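For reference, the earlier suggestion -- an index on the joined column plus an
explicit transaction around the INSERT ... SELECT -- can be sketched with
Python's built-in sqlite3 module. The table and column names here are
hypothetical, not taken from the original schema:

```python
import sqlite3

# In-memory DB for illustration; table and column names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (id INTEGER PRIMARY KEY, key TEXT);
    CREATE TABLE t2 (id INTEGER PRIMARY KEY, key TEXT, val TEXT);
    CREATE TABLE target (id INTEGER PRIMARY KEY, val TEXT);
    -- Without this index, SQLite scans all of t2 for every row of t1.
    CREATE INDEX idx_t2_key ON t2(key);
""")

conn.executemany("INSERT INTO t1(key) VALUES (?)",
                 [("k%d" % i,) for i in range(100)])
conn.executemany("INSERT INTO t2(key, val) VALUES (?, ?)",
                 [("k%d" % i, "v%d" % i) for i in range(100)])

# An explicit transaction commits all inserted rows in one journal write
# instead of one commit per row.
with conn:
    conn.execute("""
        INSERT INTO target(val)
        SELECT t2.val FROM t1 JOIN t2 ON t1.key = t2.key
    """)

count = conn.execute("SELECT COUNT(*) FROM target").fetchone()[0]
print(count)  # prints 100
```

Running EXPLAIN QUERY PLAN on the SELECT is a quick way to confirm the index
is actually used for the join.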

Could it be that your data set is simply too big and doesn't fit in memory? 
Your statement most likely results in random inserts in the target table.
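One common mitigation for that -- a hedged sketch, with a hypothetical schema
-- is to ORDER the selected rows by the target table's indexed key, so the
index b-tree is filled roughly sequentially instead of at random positions:

```python
import sqlite3

# Hypothetical schema: target has a TEXT PRIMARY KEY, i.e. a unique index
# whose b-tree would see random inserts if rows arrive in arbitrary order.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src (key TEXT, val TEXT);
    CREATE TABLE target (key TEXT PRIMARY KEY, val TEXT);
""")

# Populate the source in descending key order to simulate unsorted input.
conn.executemany("INSERT INTO src VALUES (?, ?)",
                 [("k%03d" % i, "v") for i in reversed(range(50))])

# ORDER BY the target's primary key so index pages are appended to in
# order, which reduces page shuffling when the working set exceeds RAM.
with conn:
    conn.execute("INSERT INTO target SELECT key, val FROM src ORDER BY key")

first = conn.execute(
    "SELECT key FROM target ORDER BY rowid LIMIT 1").fetchone()[0]
print(first)  # prints k000 -- rows were inserted in ascending key order
```

The sort itself costs something, but a single sequential pass over the index
is usually cheaper than many scattered page writes.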
 
On that note, is there a way to tell sqlite to put "holes" in the file so 
that when doing random inserts (even in a transaction), only small portions of 
the file need to be moved around?
 
It would waste some disk space, but for improved performance (it's a 
trade-off) I would be willing to give up a large amount of disk. I know this 
is quite the opposite of what (auto) vacuum does, but when the data doesn't 
fit in memory and most access is random, there is little performance benefit 
in keeping the data densely packed in the DB file. The "holes" could be 
recreated from time to time to preserve the sparseness of the db file (hence 
giving a guarantee on insert times).
 
Nicolas
