Jehan-Guillaume de Rorthais wrote:
> On Mon, 29 Aug 2016 13:38:03 +0200
> hubert depesz lubaczewski <dep...@depesz.com> wrote:
> 
> > Hi,
> > we have a rather uncommon case - a DB with ~ 50GB of data, but spread
> > across ~ 80000 tables.
> > 
> > Running pg_dump -Fd -jxx dumps in parallel, but only the data - MOST of
> > the time is spent on queries that run sequentially and, as far as I can
> > tell, fetch the schema of the tables and the sequence values.
> > 
> > This happens on Pg 9.5. Are there any plans to make getting the schema
> > faster for such cases? Either by parallelization, or at least by getting
> > the schema for all tables "at once" and having pg_dump "sort it out",
> > instead of getting the schema for each table separately?

Depesz: I suggest you start coding ASAP.

> Another issue I found in the current implementation is how pg_restore
> deals with the PK.
> As it takes an exclusive lock on the table, it is executed alone, before
> the index creation.
> 
> Splitting the PK into a unique index creation followed by the constraint
> creation might save a lot of time, as the other indexes can be built
> while the PK's index is being built.

Yeah.  I recall there being some stupid limitation in ALTER TABLE .. ADD
CONSTRAINT USING INDEX to create a primary key from a previously
existing unique index, which would be very good to fix (I don't recall
what it was, but it was something infuriatingly silly).  I suggest you
start coding that ASAP.
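
For illustration, a rough sketch of the split being discussed, with
hypothetical table/column/constraint names (this is not what pg_restore
emits today), assuming the column is already NOT NULL:

  -- Current behaviour: a single blocking step, run on its own.
  ALTER TABLE t ADD CONSTRAINT t_pkey PRIMARY KEY (id);

  -- The proposed split: build the unique index first (this part could
  -- run alongside other index builds), then attach it as the PK.
  CREATE UNIQUE INDEX t_pkey ON t (id);
  ALTER TABLE t ADD CONSTRAINT t_pkey PRIMARY KEY USING INDEX t_pkey;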

(Two new contributors to pg_dump!  Yay!)

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

