Two things:

1. The longer the table names, the longer it will take to compute the hash
of each table name.

2. Because the entire schema must be reprocessed after each change, every
existing table name is rehashed each time a table is created. Creating
10,000 tables therefore means re-reading all that data and re-hashing all
the table names over and over: by the time the 10,000th table has been
added, SQLite will have performed at least 50,005,000 hash operations
(1 + 2 + ... + 10,000), and many more if column names are hashed too.
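The arithmetic behind that figure can be sketched quickly (this is a
back-of-the-envelope illustration, not SQLite's actual code; the function
name is made up for the example):

```python
# Illustration of the rehashing cost: after the k-th CREATE TABLE the
# schema holds k tables, and reprocessing it hashes all k names again.
def total_hash_ops(n_tables: int) -> int:
    # Sum of 1 + 2 + ... + n_tables, i.e. n*(n+1)/2.
    return sum(range(1, n_tables + 1))

print(total_hash_ops(10_000))  # 50005000
```

So the work grows quadratically with the number of tables even though each
individual hash lookup is cheap.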

SDR
On Sep 6, 2013 2:00 PM, "Harmen de Jong - CoachR Group B.V." <
har...@coachr.com> wrote:

> On 6 sep. 2013, at 20:09, "Kevin Benson" <kevin.m.ben...@gmail.com> wrote:
> > Dr. Hipp does a little bit of explaining on this topic, generally, in his
> > replies on this thread:
> >
> > http://www.mail-archive.com/sqlite-users@sqlite.org/msg78602.html
>
> Thanks for pointing me to that thread, but as Dr. Hipp states there, the
> tables are stored in a hash. Therefore I would not expect a large
> performance decrease with a large number of tables at all, or am I
> missing something?
>
> Best regards,
> Harmen
> _______________________________________________
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>