On 16 Mar 2017, at 8:09pm, Bob Friesenhahn <bfrie...@simple.dallas.tx.us> wrote:

> Would it be reasonably feasible to compress the per-connection schema data 
> (stored in RAM) and decompress it as needed?  This would make 
> prepared-statement and possibly other operations a bit slower but if objects 
> are compressed at sufficiently small granularity, then the per-connection 
> memory footprint would be reduced.
> 
> The schema (already stripped to remove white space and comments) for our 
> database has reached 664K and with several processes (with one or more 
> connections), the memory budget attributed to redundant sqlite connection 
> schema data is high.

The schema kept in memory for the lifetime of a connection is not a copy of the
CREATE statements stored in the sqlite_master table.  It’s held in a parsed
form, closer to the results you get from PRAGMAs like PRAGMA table_info() and
PRAGMA index_info().
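
For anyone who hasn’t looked at those PRAGMAs, here is a rough sketch in C
(against a throwaway in-memory database with a made-up table t1) of the kind
of per-column detail they return (cid, name, type, notnull, dflt_value and
pk), which is much closer to what the connection keeps around than the
original CREATE text:

  #include <stdio.h>
  #include <sqlite3.h>

  int main(void)
  {
      sqlite3 *db;
      sqlite3_stmt *stmt;

      if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;
      sqlite3_exec(db, "CREATE TABLE t1(a INTEGER PRIMARY KEY, b TEXT)",
                   0, 0, 0);

      /* PRAGMA table_info() returns one row per column:
      ** cid | name | type | notnull | dflt_value | pk                 */
      if (sqlite3_prepare_v2(db, "PRAGMA table_info(t1)", -1, &stmt, 0)
              == SQLITE_OK) {
          while (sqlite3_step(stmt) == SQLITE_ROW) {
              const char *zName = (const char*)sqlite3_column_text(stmt, 1);
              const char *zType = (const char*)sqlite3_column_text(stmt, 2);
              const char *zDflt = (const char*)sqlite3_column_text(stmt, 4);
              printf("%d|%s|%s|%d|%s|%d\n",
                     sqlite3_column_int(stmt, 0), zName, zType,
                     sqlite3_column_int(stmt, 3), zDflt ? zDflt : "",
                     sqlite3_column_int(stmt, 5));
          }
          sqlite3_finalize(stmt);
      }
      sqlite3_close(db);
      return 0;
  }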

Also held in memory are hashed lists of all table names and other details
needed for fast lookup.  Those cannot usefully be compressed, because they have
to be searched every time a new SQLite statement mentions a table name.
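
Purely as a toy illustration (this is not SQLite’s internal code), a name
lookup has to read the stored entries on every probe, so compressing those
entries would force a decompress step each time a statement is parsed:

  #include <string.h>

  /* Toy sketch, not SQLite internals: a chained hash lookup from table
  ** name to the parsed schema object for that table.                  */
  struct entry {
      const char *zName;        /* table name, must be readable as-is  */
      void *pSchemaObject;      /* parsed column/index details         */
      struct entry *pNext;      /* collision chain                     */
  };

  static unsigned hash_name(const char *z)
  {
      unsigned h = 0;
      while (*z) h = h*31 + (unsigned char)*z++;
      return h;
  }

  void *find_table(struct entry **aBucket, unsigned nBucket,
                   const char *zName)
  {
      struct entry *p = aBucket[hash_name(zName) % nBucket];
      while (p) {
          if (strcmp(p->zName, zName) == 0) return p->pSchemaObject;
          p = p->pNext;
      }
      return 0;
  }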

What you might be seeing is that initially sqlite_master is read into memory, 
so it survives in the cache until other SQLite operations overwrite it.  But 
you should not be seeing permanent allocation of storage equivalent to the size 
of sqlite_master.  If you are seeing 664K of storage set aside, and if that
figure grows in proportion to the size of sqlite_master, then that’s not how I
thought SQLite worked.
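
One way to check where that 664K is going, if you can add a few lines of C, is
sqlite3_db_status() with the SQLITE_DBSTATUS_SCHEMA_USED verb, which reports
the heap used to hold the parsed schema for a connection, as opposed to
SQLITE_DBSTATUS_CACHE_USED for the page cache.  A minimal sketch (error
handling omitted):

  #include <stdio.h>
  #include <sqlite3.h>

  int main(int argc, char **argv)
  {
      sqlite3 *db;
      int cur = 0, hiwtr = 0;

      if (argc < 2 || sqlite3_open(argv[1], &db) != SQLITE_OK) return 1;

      /* Touch the database once so the schema actually gets parsed.   */
      sqlite3_exec(db, "SELECT count(*) FROM sqlite_master", 0, 0, 0);

      /* Heap used to hold the parsed schema for this connection ...   */
      sqlite3_db_status(db, SQLITE_DBSTATUS_SCHEMA_USED, &cur, &hiwtr, 0);
      printf("schema memory:     %d bytes\n", cur);

      /* ... versus heap used by this connection's page cache.         */
      sqlite3_db_status(db, SQLITE_DBSTATUS_CACHE_USED, &cur, &hiwtr, 0);
      printf("page-cache memory: %d bytes\n", cur);

      sqlite3_close(db);
      return 0;
  }

That should tell you whether the 664K is genuinely schema or just pages of
sqlite_master sitting in the cache.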

Simon.