Hello,

I have some data to transfer from table A to table B (same structure).
"INSERT INTO B SELECT * FROM A" works fine.
However, the performance is comparable to a loop inserting the rows one by one.
I thought that such a bulk copy would benefit from internal optimizations.
Is there a way to make it faster, or should I just be happy with that?
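
For clarity, here is roughly what I am comparing (just a sketch: the row-by-row
variant is shown inside a single explicit transaction, with the values bound
from a SELECT over A):

    -- single-statement bulk copy
    INSERT INTO B SELECT * FROM A;

    -- versus stepping through A and inserting the rows one by one
    BEGIN;
    INSERT INTO B VALUES (?, ?);   -- repeated for each row read from A
    COMMIT;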

The context is a little more subtle.
Actually, my table structure is very simple (two columns: an INTEGER PRIMARY KEY
and a BLOB of a few KB). I am generating data and storing it in the database. I
am working with synchronous=OFF and journal_mode=OFF.
The simplest workflow works very well: I continuously read/filter my input data
and store the results progressively, row by row, in the DB. I suppose that
page_size and cache_size are already doing a good job of caching in memory.
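
In case the exact settings matter, the setup is roughly this (the table and
column names are placeholders; B is the table in the disk DB):

    PRAGMA synchronous = OFF;
    PRAGMA journal_mode = OFF;

    CREATE TABLE IF NOT EXISTS B (id INTEGER PRIMARY KEY, data BLOB);

    -- the simple workflow: store each filtered row as soon as it is produced
    INSERT INTO B (id, data) VALUES (?, ?);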

But I wondered if I could improve the performance with the following scheme
(a rough SQL sketch follows the list):
my disk DB is B
I create a memory DB, with the same structure, named A
I attach B to A
then, in a loop:
    I insert the rows into A
    when A is big enough, I flush A into B with "INSERT INTO B SELECT * FROM A"
    I empty A
    and so on until the input data is exhausted
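
In SQL, the scheme looks roughly like this (a sketch only: the file name and
the column names are placeholders; the connection is opened on a ":memory:"
database and the disk file is attached to it, so the unqualified name B
resolves to the table in the attached DB):

    ATTACH DATABASE 'mydata.db' AS diskdb;    -- disk DB holding table B
    CREATE TABLE A (id INTEGER PRIMARY KEY, data BLOB);

    -- repeated until the input data is exhausted:
    INSERT INTO A (id, data) VALUES (?, ?);   -- accumulate rows in memory
    INSERT INTO B SELECT * FROM A;            -- flush into the disk table
    DELETE FROM A;                            -- empty A and start again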

But the overall performance is comparable to not using A at all. Is it a stupid
idea (given the way SQLite is already optimized), or can I do something clever
with it?
A few more details:
the whole DB cannot fit in memory
the DB on disk may already have content: I must append data, so I cannot use
the backup API.

Regards,

Pierre Chatelier