I'm new to this, and working in Python's sqlite3. So be patient, and don't
expect me to know too much. This is also a personal hobby, so there's
nobody else for me to ask.
I've got a database of some tens of millions of positions in a board
game. It may be over a billion before I'm done (I'm working in an 11-TB
partition at the moment). I want to process a major slice of these records
in order to update other records. I might want to break the work up
into chunks to allow other access to the database while this is going on.
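To make the chunking idea concrete, here is a minimal sketch of what I have in mind, using Python's sqlite3. The table name "positions" and the score-bumping UPDATE are placeholders for my real schema; the point is just the rowid-range chunks with a commit between each, so the write lock is released periodically:

```python
import sqlite3

def update_in_chunks(conn, chunk_size=10_000):
    """Process a table in rowid-keyed chunks, committing after each
    chunk so other connections can get at the database in between.

    'positions' and the UPDATE below are hypothetical placeholders."""
    last = 0
    while True:
        # Find the highest rowid in the next chunk of up to chunk_size rows.
        hi = conn.execute(
            "SELECT max(rowid) FROM (SELECT rowid FROM positions "
            "WHERE rowid > ? ORDER BY rowid LIMIT ?)",
            (last, chunk_size)).fetchone()[0]
        if hi is None:          # no rows left
            break
        conn.execute(
            "UPDATE positions SET score = score + 1 "
            "WHERE rowid > ? AND rowid <= ?", (last, hi))
        conn.commit()           # releases the write lock between chunks
        last = hi
```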
So I have some questions:
(1) If I do all of my updates in a temporary table, does the database still
lock out other access while I work?
(2) Is there another way to keep it available? It happens for this
activity that consistency is not at risk.
(3) If it turns out that I can avoid locking, is there still a performance
reason to break the transaction into chunks, or would I be as well off
doing it as a single transaction (assuming I have disk space for the
journal)?
(4) If I break it up into chunks, I can think of several ways to do that
and keep track of what's been done and what has not. Is there a best
practice for this?
(5) Is there a forum specifically for folks doing Python database
programming? It occurs to me that much of what I'm asking about is not
specific to SQLite. But my code, for now at least, is going to be in
Python because it's the easiest way I can see, and my own efforts are the
real bottleneck in making progress.
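For question (4), the simplest scheme I can think of is to keep the progress marker in the database itself, in the same transaction as each chunk's updates, so a crash can never leave the marker out of step with the work. This is only a sketch of that idea; the "positions" and "progress" tables and the per-chunk `work` callback are hypothetical:

```python
import sqlite3

def resume_chunks(conn, work, chunk_size=1000):
    """Run work(conn, lo, hi) over rowid ranges of a hypothetical
    'positions' table, recording the last rowid finished in a one-row
    'progress' table.  Because the marker is written in the same
    transaction as the chunk, an interrupted job resumes cleanly."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS progress(last_rowid INTEGER NOT NULL)")
    row = conn.execute("SELECT last_rowid FROM progress").fetchone()
    last = row[0] if row else 0     # resume point, 0 on a fresh start
    while True:
        hi = conn.execute(
            "SELECT max(rowid) FROM (SELECT rowid FROM positions "
            "WHERE rowid > ? ORDER BY rowid LIMIT ?)",
            (last, chunk_size)).fetchone()[0]
        if hi is None:
            return                  # all chunks done
        work(conn, last, hi)        # the real per-chunk updates go here
        # Update the marker and commit it atomically with the chunk.
        conn.execute("DELETE FROM progress")
        conn.execute("INSERT INTO progress VALUES (?)", (hi,))
        conn.commit()
        last = hi
```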
#define QUESTION ((bb) || (!bb)) /* Shakespeare */
sqlite-users mailing list