Griggs, Donald wrote: 

> I guess I was wondering if the fastest records-per-transaction value 
> would depend on the page cache and be more or less independent of the 
> total records to be imported.  

I think the page cache is one of a great many variables.

> So, the records-per-transaction for import to a 20 million row table 
> should be twenty times the size for a 1 million row table?  

I'm no SQLite or SQL guru myself, so take this with a grain of salt: 

If you have no reason to commit in the middle of the import, then don't. 
I think inserting all the rows in a single transaction will give you the 
best insert performance in most use cases.  
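
For example, here is a minimal Python sketch of that pattern; the table, 
columns, and file paths are just placeholders for illustration, not 
anything taken from your actual import: 

    import csv
    import sqlite3

    def import_all_rows(db_path, csv_path):
        # isolation_level=None puts the connection in autocommit mode,
        # so we control the transaction boundaries ourselves.
        con = sqlite3.connect(db_path, isolation_level=None)
        try:
            con.execute("BEGIN")        # one transaction for the whole import
            with open(csv_path, newline="") as f:
                rows = ((r[0], r[1]) for r in csv.reader(f))
                con.executemany(
                    "INSERT INTO items (name, qty) VALUES (?, ?)", rows)
            con.commit()                # pay the commit overhead exactly once
        except Exception:
            con.rollback()              # abandon the whole import on error
            raise
        finally:
            con.close()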

The idea is that there is some fixed overhead (call it O) that SQLite 
pays every time it commits a transaction.  The overhead is 'fixed' 
because it is independent of the number of rows you inserted.  If you 
insert 1 million rows and commit every 500, the total commit overhead is 
2000*O.  If you commit just once, the total commit overhead is just O.  
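
Put as a tiny (purely illustrative) bit of arithmetic: 

    import math

    # Fixed per-commit cost O times the number of commits you perform.
    def total_commit_overhead(total_rows, rows_per_commit, o):
        return math.ceil(total_rows / rows_per_commit) * o

    # 1,000,000 rows, committing every 500 rows -> 2000 commits -> 2000 * O
    # 1,000,000 rows, committing once           ->    1 commit  ->    1 * O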

This argument is a simplification (perhaps a large one) for a number of 
reasons, but it is at least a push in the right direction.  

Eric 

-- 
Eric A. Smith

The problem with engineers is that they tend to cheat in order to get results.

The problem with mathematicians is that they tend to work on toy problems 
in order to get results.

The problem with program verifiers is that they tend to cheat at toy problems 
in order to get results.