There are two answers I've seen to this (both sketched in the code below):
#1 Your mileage may vary: the optimum transaction size depends on your
table structure. There's a trade-off at some point if you need to keep the
database open to other access. Otherwise, just make it one transaction.
#2 If you have any indexes, you may find it faster to drop them before the
load and recreate them afterwards.
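To make #1 and #2 concrete, here's a minimal sketch using the SQLite C API.
The table t(x INTEGER), the index idx_x, and the row count are hypothetical,
and error checking is omitted for brevity:

/* Minimal sketch of #1 and #2: hypothetical table t(x INTEGER) with an
   index idx_x on x; error checking omitted. */
#include <sqlite3.h>

static void bulk_load(sqlite3 *db, int nrows)
{
    sqlite3_stmt *ins;

    /* #2: drop the index before the load, recreate it afterwards */
    sqlite3_exec(db, "DROP INDEX IF EXISTS idx_x;", 0, 0, 0);

    /* #1: wrap all the inserts in one transaction */
    sqlite3_exec(db, "BEGIN;", 0, 0, 0);
    sqlite3_prepare_v2(db, "INSERT INTO t(x) VALUES(?);", -1, &ins, 0);
    for (int i = 0; i < nrows; i++) {
        sqlite3_bind_int(ins, 1, i);
        sqlite3_step(ins);
        sqlite3_reset(ins);
    }
    sqlite3_finalize(ins);
    sqlite3_exec(db, "COMMIT;", 0, 0, 0);

    sqlite3_exec(db, "CREATE INDEX idx_x ON t(x);", 0, 0, 0);
}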
I happened to run some tests yesterday with a simple one-integer-column table.
Given 200,000 inserts, this is how long it took at different commit intervals.
You can see that committing every 12,500 inserts or more gives only a
relatively minor further improvement.
Interval (ins/commit)  Ins/Sec  Total Time (sec)
200000 121877 1.64
100000 119689 1.67
50000 117439 1.70
25000 116414 1.72
12500 114286 1.75
6250 109409 1.83
3125 100806 1.98
1563 87108 2.30
782 66667 3.00
391 39754 5.03
196 25000 8.00
98 16141 12.39
49 9524 21.00
25 4990 40.08
13 2707 73.88
7 1475 135.59
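For reference, a test like this can be driven with a loop along these lines.
This is a rough reconstruction, not the exact code behind the numbers above;
the database file name is arbitrary and error checking is omitted:

/* Rough sketch of the commit-interval test: insert 'total' integers into a
   one-column table, committing and starting a new transaction every
   'interval' rows. */
#include <stdio.h>
#include <time.h>
#include <sqlite3.h>

static double timed_inserts(sqlite3 *db, int total, int interval)
{
    sqlite3_stmt *ins;
    clock_t start = clock();

    sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t(x INTEGER);", 0, 0, 0);
    sqlite3_prepare_v2(db, "INSERT INTO t(x) VALUES(?);", -1, &ins, 0);

    sqlite3_exec(db, "BEGIN;", 0, 0, 0);
    for (int i = 1; i <= total; i++) {
        sqlite3_bind_int(ins, 1, i);
        sqlite3_step(ins);
        sqlite3_reset(ins);
        if (i % interval == 0) {            /* commit every 'interval' rows */
            sqlite3_exec(db, "COMMIT;", 0, 0, 0);
            sqlite3_exec(db, "BEGIN;", 0, 0, 0);
        }
    }
    sqlite3_exec(db, "COMMIT;", 0, 0, 0);   /* commit any remaining rows */
    sqlite3_finalize(ins);

    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void)
{
    sqlite3 *db;
    sqlite3_open("test.db", &db);
    printf("%.2f sec\n", timed_inserts(db, 200000, 12500));
    sqlite3_close(db);
    return 0;
}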
Michael D. Black
Senior Scientist
Advanced Analytics Directorate
Northrop Grumman Information Systems
________________________________
From: [email protected] on behalf of David Peters
Sent: Sun 11/14/2010 8:24 AM
To: [email protected]
Subject: EXTERNAL:[sqlite] Transaction Size
Hi there,
Could someone point me to the documentation that deals with optimization of
blocks of statements? I have a couple of databases of about 64 MB in size,
and I want to understand how to optimize the processing of large numbers of
statements. For example, there are times when I need to execute almost 1
million statements (inserts and updates) in a single block. I am currently
executing blocks of 50,000 statements in a single transaction, but that value
may be too low.
Thank you very much.
Regards,
David
_______________________________________________
sqlite-users mailing list
[email protected]
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users