Thomas DILIGENT <[EMAIL PROTECTED]> wrote: Hi Ken,

Ken wrote:
> Thomas, if I understand you correctly:
>
> 1. Place the writing of multiple rows of data inside a transaction.
> 2. Query performance will not be affected by the transaction.
>
> So, If you have multiple rows and are doing something like the following:
>
>     1. Begin Transaction
>     2. Read data from somewhere.
>     3. Insert to sqlite
>     4. Query sqlite
>     5. Update sqlite.
>     6. Repeat 2-5, till no more data.
>     7. Commit.
>
> Now, depending upon how much data there is, you may wish to add a step to
> periodically commit and start a new transaction:
>         5a. if (rows loaded Mod some number) commit txn ... begin txn.
>   

That's it. This is what I want to do.
 From this point, I have the following questions:
1) Will this increase speed compared to a basic solution where I would
use autocommit mode? (In other words, is it worth implementing such a
solution?)
2) If yes, by how much, and how do I choose the number of rows between
begin and commit?

Thomas,

As drh discussed, try 1000 entries first, then 2000, 4000, etc.

Eventually you'll hit a point of diminishing returns. It really depends upon
your data and the page size of the database.
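The loop sketched above can be tried out with Python's sqlite3 module; here is
a minimal sketch against an in-memory database. The table name, columns, data
source, and batch size are made-up placeholders, not anything from this thread:

```python
import sqlite3

# Tunable batch size: start at 1000 and try 2000, 4000, ... as suggested above
BATCH_SIZE = 1000

# isolation_level=None puts the connection in autocommit mode, so we can
# issue BEGIN/COMMIT ourselves instead of letting the module manage them.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, value TEXT)")

def rows_from_somewhere(n):
    # Stand-in for step 2, "read data from somewhere"
    for i in range(n):
        yield (i, "value-%d" % i)

conn.execute("BEGIN")                                    # step 1
for count, (rowid, value) in enumerate(rows_from_somewhere(2500), start=1):
    conn.execute("INSERT INTO items (id, value) VALUES (?, ?)",
                 (rowid, value))                         # step 3
    # steps 4-5 (query / update) would go here
    if count % BATCH_SIZE == 0:                          # step 5a
        conn.commit()                                    # flush this batch
        conn.execute("BEGIN")                            # start the next one
conn.commit()                                            # step 7: final commit

total = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
print(total)  # 2500
```

To measure whether the batching pays off, time the same load with BATCH_SIZE
set to 1 (effectively autocommit per row) and compare.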

Ken
