On Wed, 14 Sep 2016 18:29:36 +0000 (UTC)
Alex Ward <cov...@yahoo.com> wrote:
> Perhaps our schema needs a rework, would one table with a million
> rows be better than 500 tables with 2000 rows each?
500 tables isn't right or wrong, but *counting* tables is. Table count
is not a design-quality metric.
There is no rule of thumb except Boyce-Codd Normal Form.
Table design reflects the entities you choose to represent your domain
of discourse. You should strive, as Einstein advised, to make your
model as simple as possible, but no simpler.
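As an invented illustration of the normalization the post appeals to (table and column names are mine, not from the thread): when a supplier's city is recorded alongside every part that supplier ships, the city depends on the supplier alone, and Boyce-Codd Normal Form says to split it out so each fact is stored exactly once.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Unnormalized: the supplier's city repeats with every part supplied,
# because city depends on supplier alone, not on (supplier, part).
con.execute("CREATE TABLE supplies (supplier TEXT, city TEXT, part TEXT)")
con.executemany("INSERT INTO supplies VALUES (?, ?, ?)",
                [("S1", "Paris", "bolt"), ("S1", "Paris", "nut")])

# Decomposed per Boyce-Codd Normal Form: each fact stored once,
# so the city cannot be recorded inconsistently.
con.execute("CREATE TABLE suppliers (supplier TEXT PRIMARY KEY, city TEXT)")
con.execute("CREATE TABLE supplier_parts (supplier TEXT, part TEXT)")
con.execute("INSERT INTO suppliers VALUES ('S1', 'Paris')")
con.executemany("INSERT INTO supplier_parts VALUES (?, ?)",
                [("S1", "bolt"), ("S1", "nut")])

# One row now holds the supplier/city fact, however many parts S1 ships.
city_rows = con.execute(
    "SELECT count(*) FROM suppliers WHERE supplier = 'S1'").fetchone()[0]
```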
Sometimes we see schemas on this list with sets of identical tables,
each set dedicated to a different client or somesuch. Those folks buy
themselves trouble, because otherwise identical SQL has to vary by
tablename, adding nothing but complexity to the application layer.
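A hypothetical sketch of that trouble (names are mine): with a table per client, every query must be assembled by splicing the table name into the SQL text, whereas one table with a client column answers the same question with a single parameterized statement.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Anti-pattern: one identical table per client; the SQL text itself
# must vary by table name for every query.
for client in ("acme", "globex"):
    con.execute(f"CREATE TABLE orders_{client} "
                "(id INTEGER PRIMARY KEY, total REAL)")
    con.execute(f"INSERT INTO orders_{client} (total) VALUES (?)", (100.0,))

# The same question, asked once per table.
totals = {c: con.execute(f"SELECT sum(total) FROM orders_{c}").fetchone()[0]
          for c in ("acme", "globex")}

# Better: client is ordinary data; one prepared statement serves everyone.
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, "
            "client TEXT, total REAL)")
con.executemany("INSERT INTO orders (client, total) VALUES (?, ?)",
                [("acme", 100.0), ("globex", 100.0)])
one = con.execute("SELECT sum(total) FROM orders WHERE client = ?",
                  ("acme",)).fetchone()[0]
```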
At the opposite end of the spectrum is the classic
entity-attribute-value design error. One table conquers all, including
the programmers, who discover how slow things are when SQL is used to
reassemble entities from attribute rows.
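A minimal sketch of the entity-attribute-value trap (schema invented for illustration): every attribute becomes its own row, so reconstructing a single entity costs one self-join per attribute, and every value comes back typeless as text.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# EAV: one table "conquers all" -- each attribute is a separate row.
con.execute("CREATE TABLE eav (entity INTEGER, attr TEXT, value TEXT)")
con.executemany("INSERT INTO eav VALUES (?, ?, ?)",
                [(1, "name", "widget"),
                 (1, "price", "9.99"),
                 (1, "qty", "3")])

# Reassembling ONE logical row takes a self-join per attribute,
# and price and qty arrive as strings, not numbers.
row = con.execute("""
    SELECT n.value, p.value, q.value
      FROM eav n
      JOIN eav p ON p.entity = n.entity AND p.attr = 'price'
      JOIN eav q ON q.entity = n.entity AND q.attr = 'qty'
     WHERE n.attr = 'name' AND n.entity = 1
""").fetchone()
```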
Number of columns? Many widely used and correctly normalized financial
market databases maintain tables of thousands of rows and hundreds of
columns.
Complex models can easily have a hundred tables. Before I would agree
a design has "too many" tables, I would want an affirmative answer to
one of two questions:
1. Is there a process in place to dynamically extend the schema? If
so, that indicates data -- something that changes over time -- has
found its way into the metadata (the table names).
2. Do you frequently find yourself looking in more than one place for
what is -- to you -- substantially the same information? That would
indicate the tables do not reflect your mental model of the problem.
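For the first question, a hedged sketch (table names invented): if the catalog shows a family of tables whose names differ only by an embedded value, such as a year, that value is data hiding in the metadata, and it belongs in a column instead.

```python
import re
import sqlite3

con = sqlite3.connect(":memory:")

# A schema "dynamically extended" each year -- the smell in question.
for year in (2014, 2015, 2016):
    con.execute(f"CREATE TABLE sales_{year} (amount REAL)")

# The catalog reveals it: table names that differ only by a value.
names = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
suspect = sorted(n for n in names if re.fullmatch(r"sales_\d{4}", n))

# The fix would be one table with the year as ordinary data:
#   CREATE TABLE sales (year INTEGER, amount REAL)
```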
There are forces that drive intelligent table design other than the
logical model. Things like performance and convenience inevitably
cause tables that would otherwise pass textbook muster to be split and
combined. Those things count, too. One thing that doesn't count is
the number of tables.
sqlite-users mailing list