On 2019/01/28 6:17 PM, mzz...@libero.it wrote:
I developed an application that needs to create one table with thousands of rows
every time a certain event occurs.

This works well, but when the number of tables becomes huge (about
15,000-20,000 tables) the first database read query after the database is
opened is very slow (about 4 seconds), while subsequent read operations are faster.

How can I speed this up?


This is because SQLite has to parse the schema for all 15,000-20,000 tables when it first touches that DB; it's a laborious process, and getting through it in 4 seconds is actually very fast.
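
You can see the effect for yourself with a rough sketch like the one below, using Python's built-in sqlite3 module (the table count and column layout here are made up for illustration): build a throwaway DB with a few thousand tables, reopen it, and time the first read against the second.

import sqlite3
import time

DB = "many_tables.db"

# Build a throwaway database with a few thousand small tables.
con = sqlite3.connect(DB)
with con:
    for i in range(5000):
        con.execute(f"CREATE TABLE IF NOT EXISTS MyData_{i} (id INTEGER, value TEXT)")
con.close()

# Reopen and compare the first read (the whole schema gets parsed)
# to the second (the schema is already cached in the connection).
con = sqlite3.connect(DB)
t0 = time.perf_counter()
con.execute("SELECT * FROM MyData_0").fetchall()   # pays the schema-parse cost
t1 = time.perf_counter()
con.execute("SELECT * FROM MyData_1").fetchall()   # schema already parsed
t2 = time.perf_counter()
print(f"first read:  {t1 - t0:.3f}s")
print(f"second read: {t2 - t1:.3f}s")
con.close()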

The problem here is a design one.

Accessing table data is fast; looking up tables is slow. You should have more data and fewer tables. It's like storing a few boxes of your goods in a million warehouses around the world: it's slow to retrieve. Rather have one warehouse big enough to hold all your boxes.

Put your data in a single table. One table with 10 million rows will be lightning fast in SQLite, while 10,000 tables with 1,000 rows each will be excessively slow. (They are slow in every RDBMS, but you won't notice it much in client-server systems because the table schema stays cached on the server.)

Instead of having tables like "MyData_1", "MyData_2", "MyData_3", ... , "MyData_n", rather add an extra indexed column to a single table called just "MyData" that carries the 1, 2, 3, ... , n.

That way you end up with one table holding millions of rows, not thousands of tables holding a few rows each.
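
In code, the redesigned layout might look something like this - a minimal sketch, again with Python's sqlite3, assuming each of your old MyData_n tables held (id, value) rows; the column names are just placeholders:

import sqlite3

con = sqlite3.connect("one_table.db")
with con:
    # One table for all events; event_id replaces the "_n" table suffix.
    con.execute("""
        CREATE TABLE IF NOT EXISTS MyData (
            event_id INTEGER NOT NULL,   -- this was the n in MyData_n
            id       INTEGER NOT NULL,
            value    TEXT
        )
    """)
    # The index keeps per-event lookups as cheap as reading a small table.
    con.execute("CREATE INDEX IF NOT EXISTS MyData_event ON MyData(event_id)")

    # Insert rows for event 42 instead of creating a table MyData_42.
    rows = [(42, i, f"row {i}") for i in range(1000)]
    con.executemany("INSERT INTO MyData (event_id, id, value) VALUES (?, ?, ?)", rows)

# Read back everything that would have lived in MyData_42.
for row in con.execute("SELECT id, value FROM MyData WHERE event_id = ?", (42,)):
    print(row)
con.close()

With this layout the schema stays tiny no matter how many events you record, so that 4-second first read disappears.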

Cheers!
Ryan

