We are using SQLite on Linux to create a database with several tables. Each of those tables contains tens of thousands of rows, so nothing excessively huge. To speed up the process, the entire set of inserts is wrapped in a BEGIN/COMMIT pair. This does dramatically improve the speed, but as a side effect we notice that SQLite uses a very large amount of memory. When we have finished creating all the tables we close the database connection, but the memory is not released. If we use this process to iteratively create several such databases, the memory consumption grows so large (well over 100 MB) that the Linux OOM killer kicks in and kills the process.
If we do not use transactions, memory consumption stays modest, but the process is extremely slow.
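For reference, the pattern we are using looks roughly like the sketch below. This is written against Python's sqlite3 module (which wraps SQLite 3, not the 2.8 series we are actually running), and the table and column names are made up for illustration; the C-level flow with BEGIN/COMMIT around the inserts is the same.

```python
import sqlite3

def bulk_insert(db_path, rows):
    """Create a table and insert all rows inside one explicit transaction.

    Returns the number of rows in the table, for checking.
    (Hypothetical helper for illustration; names are not from our code.)
    """
    # isolation_level=None puts the connection in autocommit mode, so the
    # BEGIN/COMMIT pair below is issued explicitly, as in our process.
    conn = sqlite3.connect(db_path, isolation_level=None)
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER, val TEXT)")

    # Wrapping every INSERT in a single transaction avoids a disk sync
    # per row -- this is the step that gives the dramatic speedup.
    cur.execute("BEGIN")
    for row in rows:
        cur.execute("INSERT INTO t VALUES (?, ?)", row)
    cur.execute("COMMIT")

    n = cur.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    conn.close()  # closing the connection is where we expect memory back
    return n
```

Without the BEGIN/COMMIT pair, each INSERT becomes its own transaction, which is what makes the untransacted version so slow.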
If we exit the process and then reopen the database in a new process, that memory is not allocated again.
Any clues as to 1) why so much memory is being used by these transactions, and/or 2) why it is not reused or released? Is there anything we should be doing to prevent this?
This is SQLite version 2.8.15.
Any help greatly appreciated! --Brett Granger

