On 30 Dec 2015, at 5:12pm, Jason H wrote:
> I read around, and it seems that the consensus is it should only be locked
> during a multi-thread/multi-process update.
Nope. The database is locked any time a transaction is under way [1]. Your
program cannot know whether another process is about to access the database,
so it has to take the lock just in case. The lock has no visible effect unless
another thread or process actually tries to get in at the same time, but your
process can't know in advance that none will.
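To make that concrete, here's a small sketch using Python's sqlite3 module (the file and table names are made up). Two connections to the same on-disk file stand in for two processes; the second one uses timeout=0 so it reports SQLITE_BUSY immediately instead of retrying:

```python
import os
import sqlite3
import tempfile

# Two connections to the same on-disk database stand in for two processes.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

a = sqlite3.connect(path)
a.execute("CREATE TABLE t (x INTEGER)")
a.commit()

b = sqlite3.connect(path, timeout=0)   # give up immediately instead of retrying

a.execute("INSERT INTO t VALUES (1)")  # opens a transaction; not yet committed
try:
    b.execute("INSERT INTO t VALUES (2)")
    msg = ""
except sqlite3.OperationalError as e:
    msg = str(e)                       # "database is locked"

a.commit()                             # releases the lock
b.execute("INSERT INTO t VALUES (2)")  # now it succeeds
b.commit()
```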
> However I encountered the error in a python script (single proc/single
> thread) that was the only reader/writer to the database. It seems that I
> forgot to con.commit() in a loop that was doing thousands of update
> statements. It seems to me that there is some limit to how many pending
> statements there can be before encountering this error. If that is true, and
> not a bug, then maybe add it to the documentation somewhere? I didn't find
> any mention of "too many pending statements" as a cause anywhere. Adding a
> commit() periodically in the loop made it go away.
There is no such limit in SQLite. The only limits on the number of pending
statements are operating-system resources, like memory and handles.
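For what it's worth, the periodic-commit workaround you describe looks something like this sketch (table and column names invented for the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, value INTEGER)")
con.executemany("INSERT INTO items (value) VALUES (?)",
                [(i,) for i in range(10000)])
con.commit()

# Update thousands of rows, committing every BATCH statements so the
# open transaction (and its journal) never grows without bound.
BATCH = 1000
ids = [r[0] for r in con.execute("SELECT id FROM items").fetchall()]
for n, row_id in enumerate(ids):
    con.execute("UPDATE items SET value = value + 1 WHERE id = ?", (row_id,))
    if n % BATCH == BATCH - 1:
        con.commit()
con.commit()  # commit the final partial batch
```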
It's possible that the problem you encountered lies in your Python interface
to SQLite, in which case you would need to take it up with the author of that
interface (who might well be reading this list!).
Simon.
[1] A transaction started with a plain (default) BEGIN does not lock the
database immediately: SQLite waits until a statement actually needs the
database before taking the lock. If you want the database locked right now,
use BEGIN IMMEDIATE.
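You can see the difference from Python (isolation_level=None so sqlite3 doesn't issue its own BEGINs; file and table names are made up):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
a = sqlite3.connect(path, isolation_level=None)            # autocommit mode
b = sqlite3.connect(path, isolation_level=None, timeout=0)  # fail fast on busy
a.execute("CREATE TABLE t (x INTEGER)")

# A plain (deferred) BEGIN takes no lock yet, so another connection
# can still start a write transaction.
a.execute("BEGIN")
b.execute("BEGIN IMMEDIATE")
b.execute("ROLLBACK")
a.execute("ROLLBACK")

# BEGIN IMMEDIATE grabs the write lock up front, so the second
# connection is refused straight away.
a.execute("BEGIN IMMEDIATE")
try:
    b.execute("BEGIN IMMEDIATE")
    locked = False
except sqlite3.OperationalError:
    locked = True
a.execute("ROLLBACK")
print(locked)  # True
```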