If you are ever going to use ANALYZE on your database, and the database is 
going to be opened frequently (say, once per request), consider dropping the 
sqlite_stat3 and sqlite_stat4 tables.

SQLite reads the contents of those tables on each open. The number of tables 
greatly contributes to the amount of data stored in them. In my case, with 
fewer than 30 tables, the penalty of keeping sqlite_stat3 and sqlite_stat4 
around after ANALYZE comes to roughly 5 milliseconds of overhead on each open.
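
A minimal sketch of the idea, assuming a build compiled with 
SQLITE_ENABLE_STAT3 / SQLITE_ENABLE_STAT4 (without those options the tables 
never exist in the first place):

    ANALYZE;
    -- sqlite_stat1 is small and cheap to load, so keep it for the query
    -- planner; the histogram tables below are what add the per-open cost.
    DROP TABLE IF EXISTS sqlite_stat3;
    DROP TABLE IF EXISTS sqlite_stat4;

DROP TABLE works on the statistics tables (since SQLite 3.7.9), and a later 
ANALYZE simply recreates them.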

13 January 2016, 13:43:06, by "Olivier Mascia" <om at integral.be>:

> Hello,
> 
> Is there any known structural performance issue working with a schema made
> of about 100 tables, about 80 foreign key constraints, and some indexes in
> addition to those implied by the primary keys and foreign keys? In my book
> it does not qualify as a complex schema: some tables would have 30 to 40
> columns, and 4 or 5 tables are candidates for a moderate number of rows
> (rarely more than 1 million), while one table could receive about 10
> million rows after some years of data collection (so again nothing really
> fancy).
> 
> Does SQLite have to reparse the schema text often to execute queries? Or is
> the schema translated internally into a stored, digested ('compiled')
> format to ease its work?
> 
> The application that would use this schema is a server-side application
> (quite along the lines described in http://sqlite.org/whentouse.html).  We
> have started experimenting and things look very good, excellent I should
> say, so the above question is more about which details to watch for and
> which traps to avoid.  I'm pretty sure there are people here with valuable
> experience with similar datasets.
> 
> Thanks,
> -- 
> Meilleures salutations, Met vriendelijke groeten, Best Regards,
> Olivier Mascia, integral.be/om
> 

