On Jan 28, 2019, at 9:17 AM, mzz...@libero.it wrote:
> 
> I developed an application that needs to create one table with thousands of 
> rows every time a certain event occurs.
> 
> This works well, but when the number of tables becomes huge (about 
> 15000/20000 tables), the first read query after opening the database is very 
> slow (about 4 sec.), while subsequent reads are faster.

This sounds like a “table per client” sort of architecture.  If so, and the 
others’ advice to put all the data in a single table won’t work for you, you 
can easily use SQLite to create a separate DB file for each client.  Each file 
can have the same schema, or not.
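
A minimal sketch of what that looks like with Python’s sqlite3 module (the 
clients/ directory, the events table, and its columns are placeholders for 
illustration, not anything from your schema):

    import os
    import sqlite3

    def open_client_db(client_id):
        # One DB file per client, all sharing the same schema.
        os.makedirs("clients", exist_ok=True)
        conn = sqlite3.connect(os.path.join("clients", f"{client_id}.db"))
        conn.execute(
            "CREATE TABLE IF NOT EXISTS events ("
            " id INTEGER PRIMARY KEY,"
            " ts REAL,"
            " payload TEXT)"
        )
        return conn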

If some operations will need to work with data from multiple instances of this 
table, SQLite makes it easy to attach multiple DB files to a single connection 
and cross-query them.
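
For instance, assuming the per-client files from the sketch above, something 
like this cross-queries two of them on one connection (note that SQLite limits 
the number of attached databases, 10 by default and 125 at most):

    import sqlite3

    conn = sqlite3.connect("clients/alice.db")
    conn.execute("ATTACH DATABASE 'clients/bob.db' AS bob")

    # Tables in the attached file are visible under the "bob" prefix,
    # so a single statement can span both files.
    rows = conn.execute(
        "SELECT 'alice' AS client, COUNT(*) FROM events "
        "UNION ALL "
        "SELECT 'bob', COUNT(*) FROM bob.events"
    ).fetchall()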

Doing it this way may end up faster than the “everything in one table” 
approach, since SQLite’s indexed-lookup performance is roughly proportional 
to log2(n), where n is the number of rows in the table you’re querying.  
20000 DB files, each holding one table of N rows, might be faster than one DB 
file with N*20000 rows in a single table.
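
Taking that log2(n) figure at face value: with, say, N = 1000 rows per 
client, the single big table is searched through log2(20000 * 1000) ≈ 24 
levels’ worth of rows, while a per-client file needs only log2(1000) ≈ 10, 
i.e. less than half the work per lookup.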

There are speed hits in that architecture, too.  

For one, directory lookups can start to take significant time once a single 
directory holds too many files.  That usually kicks in around this “tens of 
thousands” range, so if you’re looking at going into the hundreds of 
thousands or millions of DB files, you might want to make the directory 
structure hierarchical in some way, as sketched below.
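
One common scheme is to fan the files out by a hash of the client ID so that 
no single directory grows too large.  A hypothetical helper (the two-level 
layout is just one choice among many):

    import hashlib
    import os

    def db_path(client_id):
        # clients/ab/cd/<id>.db -- two hash levels keep each directory
        # to a few hundred entries even with millions of files.
        h = hashlib.md5(str(client_id).encode()).hexdigest()
        d = os.path.join("clients", h[:2], h[2:4])
        os.makedirs(d, exist_ok=True)
        return os.path.join(d, f"{client_id}.db")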

Another speed trap here is that each open DB connection takes some memory of 
its own.  If you’ve got RAM to burn, this might be a great architecture, but 
if you run the system into swapping, the single-table architecture would 
probably be better.
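
One way to cap that memory cost is to keep only a bounded number of 
connections open at once, closing the least recently used when the cap is 
hit.  A rough sketch (the cache size is arbitrary; tune it to your RAM):

    import sqlite3
    from collections import OrderedDict

    class ConnCache:
        def __init__(self, max_open=64):
            self.max_open = max_open
            self.conns = OrderedDict()   # path -> connection, in LRU order

        def get(self, path):
            conn = self.conns.pop(path, None)
            if conn is None:
                conn = sqlite3.connect(path)
            self.conns[path] = conn      # mark as most recently used
            if len(self.conns) > self.max_open:
                _, oldest = self.conns.popitem(last=False)
                oldest.close()           # evict the coldest connection
            return conn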