Would it be reasonably feasible to compress the per-connection schema data (stored in RAM) and decompress it as needed? This would make preparing statements, and possibly other operations, a bit slower, but if schema objects were compressed at sufficiently small granularity, the per-connection memory footprint would be reduced.

The schema for our database (already stripped of white space and comments) has reached 664 KB, and with several processes each holding one or more connections, the memory attributed to redundant SQLite connection schema data is substantial. With gzip compression the schema text reduces to just 62 KB, so there is roughly a 10x benefit. With 10 processes/connections, almost 6 MB could be saved for our database. The compression ratio would likely be lower when compressing many small fragments of text individually.
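For reference, here is a rough sketch (Python, assuming a database file named "app.db"; the file name is just illustrative) that compares compressing the schema text as a single blob against compressing each CREATE statement separately, which is roughly what a fine-grained per-object scheme would do:

    import sqlite3
    import zlib

    conn = sqlite3.connect("app.db")
    # Pull the schema text much as a connection would see it.
    rows = [r[0] for r in conn.execute(
        "SELECT sql FROM sqlite_master WHERE sql IS NOT NULL")]
    conn.close()

    whole = "\n".join(rows).encode("utf-8")
    print("schema text:            ", len(whole), "bytes")
    print("compressed as one blob: ", len(zlib.compress(whole, 9)), "bytes")

    # Compress each object's SQL separately, as decompress-on-demand
    # storage at per-object granularity would require.
    per_object = sum(len(zlib.compress(r.encode("utf-8"), 9)) for r in rows)
    print("compressed per object:  ", per_object, "bytes")

The per-object total should come out noticeably larger than the single-blob figure, since each small fragment carries its own deflate overhead and cannot share context with its neighbours.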

Thoughts?

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
