On 30 Sep 2017, at 10:54pm, Kevin O'Gorman <kevinogorm...@gmail.com> wrote:

> Here's my prime suspect: I'm using WAL, and the journal is 543 MB.  I
> hadn't given it much thought, but could this be more than the software
> really wants to deal with?

Not SQLite.  Possibly something else you’re using.  I used to work daily with a 
43 Gigabyte SQLite database.  And most of that space was used by one tall thin 
table.  SQLite has known limits and is not thoroughly tested near those limits 
(because nobody can afford to buy enough hardware to do it), but those limits 
are a lot more than half a Gigabyte.


A crash sometimes happens because the programmer continues to call sqlite3_ 
routines after one of them has already reported a problem.  Are you checking 
the values returned by all sqlite3_() calls to see that they return SQLITE_OK ?  
You may have to learn how your Python shim works to know: it may interpret 
other results as "catch" triggers or some equivalent.
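In the standard Python shim, failed calls don’t hand back result codes at all: they raise exceptions derived from sqlite3.Error.  So "checking every return value" amounts to wrapping the SQLite work in a try/except and not calling back into the library after a failure.  A minimal sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
try:
    # Any of these calls can fail; the shim raises sqlite3.Error
    # (or a subclass such as OperationalError) instead of
    # returning an SQLITE_OK-style result code.
    conn.execute("CREATE TABLE t (x INTEGER)")
    conn.execute("INSERT INTO t VALUES (1)")
    conn.commit()
except sqlite3.Error as e:
    # Stop issuing further SQLite calls once an error is reported.
    print("SQLite error:", e)
    conn.close()
    raise
```

An unhandled sqlite3.Error left to propagate while you keep executing statements on the same connection is exactly the kind of thing that turns one reported problem into a later crash.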

Are you using the standard Python shim or APSW ?  The standard Python shim does 
complicated magic to make SQLite behave the way Python wants it to behave.  
This complication can make it difficult to track down faults.  You might 
instead want to try APSW.


APSW is an extremely thin shim that does almost nothing itself.  That makes it 
easy to trace any error to either a Python problem or a SQLite problem.  I’m not 
saying we can’t help with the standard Python shim, just that it’s a little 
more complicated.
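As one illustration of that magic (this example uses only the stdlib sqlite3 module; APSW, by contrast, executes exactly the SQL you give it): with the default settings, the standard shim silently opens a transaction the first time you run a data-modifying statement, and holds it open until you commit.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")

# The stdlib shim starts an implicit transaction for INSERT/UPDATE/
# DELETE under the default isolation_level -- no BEGIN appears in
# your code, but a transaction is now open on the connection.
conn.execute("INSERT INTO t VALUES (1)")
print(conn.in_transaction)   # True: implicit transaction is open

conn.commit()
print(conn.in_transaction)   # False: transaction closed
```

An implicit transaction held open like this is also one way a WAL file keeps growing, since the open transaction can block checkpoints.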
