If I have various "one up" record types that are structurally similar (same columns) and are mostly retrieved one at a time by their primary keys, is there any performance or operational benefit to splitting millions of such records across multiple tables (say, by their application-level purpose) rather than keeping them all in one big table? I am thinking both of PG query performance (queries against multiple tables, each with hundreds of thousands of rows, versus queries against a single table with millions of rows) and of operational overhead (number of WAL files created, pg_dump, vacuum, etc.).
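For scale, here is my rough back-of-envelope on the primary-key lookup side of the question; the figure of ~400 keys per B-tree index page is an assumption on my part, not something I measured:

```python
import math

def btree_depth(n_rows, fanout=400):
    """Estimate B-tree index depth for n_rows keys.

    fanout=400 keys per index page is an assumed figure, not measured;
    a PK lookup descends roughly this many index levels.
    """
    return max(1, math.ceil(math.log(n_rows, fanout)))

# One 5M-row table versus ten 500K-row tables:
print(btree_depth(5_000_000))  # index depth of the single big table
print(btree_depth(500_000))    # index depth of each smaller table
```

Under these assumptions both work out to 3 levels, which makes me suspect that splitting by purpose would save at most one index level per PK lookup, if anything.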

If anybody has any tips, I'd much appreciate it.

Thanks,
David

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general