I would like each transaction to be as large as possible, but not so large
that it causes the cache to grow or spill to disk.
We have a stream of incoming data, with each piece of data causing updates
in an SQLite database. The number of rows inserted or updated per data
record may vary.
If I enclose each data record's processing in a separate transaction, there
will be too many transactions and it will be slow. If I enclose the
processing of too many records in a single transaction, the cache may grow
or spill to disk, which is not wanted either.
It would be great if we could dynamically measure how much of the cache the
current transaction is using. Then I would issue COMMIT as soon as cache
use goes over some threshold, say 50%.
Pseudocode:
while (have data) {
    BEGIN
    while (have data && CACHE USE < 50%) {
        process next piece of data
    }
    COMMIT
}
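
In C, I imagine something like the sketch below, assuming
sqlite3_db_status() with the SQLITE_DBSTATUS_CACHE_USED op can serve as the
cache-use probe. have_data() and process_record() are placeholders for my
application code, and the byte threshold would be derived from the cache
configuration (roughly cache pages * page size * 50%):

    #include <sqlite3.h>

    /* Placeholders for application code. */
    extern int  have_data(void);
    extern void process_record(sqlite3 *db); /* runs the INSERTs/UPDATEs */

    /* Commit whenever this connection's pager cache grows past a
     * byte limit, then start a fresh transaction. */
    void pump(sqlite3 *db, int cache_limit_bytes)
    {
        while (have_data()) {
            sqlite3_exec(db, "BEGIN", 0, 0, 0);
            int cur = 0, hiwtr = 0;
            do {
                process_record(db);
                /* Approximate bytes of heap memory used by this
                 * connection's pager cache. */
                sqlite3_db_status(db, SQLITE_DBSTATUS_CACHE_USED,
                                  &cur, &hiwtr, 0);
            } while (have_data() && cur < cache_limit_bytes);
            sqlite3_exec(db, "COMMIT", 0, 0, 0);
        }
    }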
Is this possible? Are there other best practices for optimizing transaction
size?
Thanks!
Igor