Michael,

Thank you for your suggestion! The problem with this approach is that there
is no single value of N we could tune once and rely on.

As I mentioned, the number of updates varies depending on the data
received.

For example, one piece of data may lead to a single INSERT, in which case
N=1000 would be safe and effective. But another piece of data may lead to
1000 INSERTs. If we still have N=1000, a single transaction could then
contain 1000 x 1000 = 1,000,000 INSERTs, because N counts incoming records,
not the rows they actually change. It's completely unpredictable. And when
the amount of data changed in a single transaction is large enough, it
causes either a cache spill and an exclusive lock on the database, or
growth of the cache and memory consumption.

Do you think this makes sense?

We could theoretically count the number of DML statements or steps, but
that would mean changing the underlying architecture of the application so
that every plug-in or extension that accesses SQLite also reports how much
data it changed. That is not very convenient.
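
Just to illustrate the idea: if all writers shared a single connection, we
could approximate that counting with sqlite3_total_changes(), which counts
every row inserted, updated, or deleted through the connection, no matter
which component ran the statement. A rough sketch (MAX_CHANGED_ROWS,
apply_records() and process_record() are made-up names, and error handling
is omitted):

#include <sqlite3.h>

#define MAX_CHANGED_ROWS 10000   /* made-up tuning threshold */

/* Hypothetical: applies one incoming record; may run 1 INSERT or 1000. */
void process_record(sqlite3 *db, const void *record);

void apply_records(sqlite3 *db, const void **records, int n)
{
    sqlite3_exec(db, "BEGIN", 0, 0, 0);
    int base = sqlite3_total_changes(db);  /* rows changed so far */

    for (int i = 0; i < n; i++) {
        process_record(db, records[i]);

        /* Commit when the transaction has touched enough rows, however
           many INSERTs each record happened to expand into. */
        if (sqlite3_total_changes(db) - base >= MAX_CHANGED_ROWS) {
            sqlite3_exec(db, "COMMIT", 0, 0, 0);
            sqlite3_exec(db, "BEGIN", 0, 0, 0);
            base = sqlite3_total_changes(db);
        }
    }
    sqlite3_exec(db, "COMMIT", 0, 0, 0);
}

This way the commit boundary tracks the actual amount of changed data
rather than the record count. But it only helps if everything goes through
one connection, which brings us back to the architecture problem above.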

Kind regards,
Igor


Black, Michael (IS) wrote:
> 
> So only do N many records in one batch.  That's the easiest thing.  Forget
> about the cache and just use responsiveness to adjust how many records you
> allow at once.
>  
> Pseudo-code:
> recnum=0
> BEGIN;
> while more records
>     INSERT....
>     recnum++
>     if (recnum % 1000 == 0)
>         COMMIT;
>         BEGIN;
> end
> COMMIT;
> 
