On Fri, Jan 28, 2011 at 11:38 AM, Pierre Chatelier <k...@club-internet.fr> wrote:

>
> But I wondered if I could improve performance with the following scheme:
> my disk DB is B
> I create a memory DB with the same structure, named A
> I attach B to A
> then in a loop:
>    I insert the rows into A
>    when A is big enough, I flush A into B with
>    "INSERT INTO B SELECT * FROM A"
>    I empty A
>    and so on until the input data is exhausted
>
> But the overall performance is comparable to not using A at all. Is it
> a stupid idea (given how sqlite is already optimized), or can I do
> something clever with that?
>
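
In SQL terms, the scheme above looks roughly like this (just a sketch;
the table t(id INTEGER PRIMARY KEY, val TEXT) is a made-up example, the
disk db is assumed to already contain it, and the loop over input
batches has to live in the host program):

  -- open the in-memory db as the main connection, attach the disk db
  ATTACH DATABASE 'disk.db' AS b;
  CREATE TABLE main.t (id INTEGER PRIMARY KEY, val TEXT);

  -- per batch, repeated until the input is exhausted:
  BEGIN;
  INSERT INTO main.t (id, val) VALUES (1, 'row1');
  -- ... many more single-row inserts in the same transaction ...
  COMMIT;

  -- flush the staging table into the disk db, then clear it
  BEGIN;
  INSERT INTO b.t SELECT * FROM main.t;
  DELETE FROM main.t;
  COMMIT;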


How do you fill the primary key of the newly inserted data compared to
the ids already in the file db? I mean, if the in-memory ids are always
bigger than MAX(id) of the file table, the flush should be a very quick,
mostly sequential insert, limited by the speed of the hard disk. But if
those ids fall between existing values, the speed can drop
significantly, since sqlite then has to read and write at virtually
random positions in the file db.
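
For example, with the same hypothetical table t(id INTEGER PRIMARY KEY,
val TEXT) in the file db:

  -- ids above the current MAX(id): rows are appended at the end of
  -- the table b-tree, so pages are written mostly sequentially
  INSERT INTO b.t (id, val) VALUES (1000001, 'a');
  INSERT INTO b.t (id, val) VALUES (1000002, 'b');

  -- ids scattered among existing ones: each insert may land on a
  -- different b-tree page, forcing near-random I/O on the file db
  INSERT INTO b.t (id, val) VALUES (17, 'c');
  INSERT INTO b.t (id, val) VALUES (523988, 'd');

(Letting sqlite assign the rowid itself, by inserting NULL for id,
normally gives you the appending behaviour for free.)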

Max