On Sat, 2008-07-26 at 11:03 -0700, Joshua D. Drake wrote:

> 2. We have no concurrency, which means anyone with a database over 50G
> has unacceptable restore times.

Agreed.

This is also the core reason for wanting -w.
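
-w here means a --no-password mode: with several dumps running at once,
no process can stop to prompt for a password. A rough sketch of the
intended usage, assuming credentials in ~/.pgpass (host, database and
password below are made up):

    echo 'localhost:5432:mydb:postgres:secret' >> ~/.pgpass
    chmod 600 ~/.pgpass     # libpq ignores the file unless it is 0600
    pg_dump -w -U postgres -Fc -f mydb.dump mydb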

> 3. We have to keep developing hacks to customize what gets dumped and
> restored. Why am I passing pre-data at all? It should be automatic.
> For example:
> 
> pg_backup (not dump; we aren't dumping. Dumping is usually associated
> with some sort of crash or foul human behavior. We are backing up).
>    pg_backup -U <user> -D database -F -f mybackup.sqlc
> 
> If I were to extract <mybackup.sqlc> I would get:
> 
>   mybackup.datatypes
>   mybackup.tables
>   mybackup.data
>   mybackup.primary_keys
>   mybackup.indexes
>   mybackup.constraints
>   mybackup.grants

Sounds good.
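
For what it's worth, you can approximate that split today by filtering
pg_restore's TOC. A rough sketch against the archive above (mydb is a
made-up database name):

    pg_restore -l mybackup.sqlc > full.toc        # list every archive entry
    grep 'TABLE DATA' full.toc > data.toc         # keep only the data entries
    pg_restore -L data.toc -d mydb mybackup.sqlc  # restore just that section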

That doesn't help with the main component of dump time, though: we dump
one table at a time into one output file. We need a way to dump multiple
tables concurrently, ending up with multiple files on multiple
filesystems.
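
Until that exists, the closest workaround is one pg_dump per table in
the background. A rough sketch (table and path names are made up), with
the caveat that each process takes its own snapshot, so the result is
not transactionally consistent; which is exactly why it needs to be
built in:

    for t in orders customers line_items; do
        pg_dump -Fc -t "$t" -f "/backup/$t.dump" mydb &
    done
    wait    # dumps run concurrently, each with its own snapshot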

> Oh, and pg_dumpall? It should have been removed right around the
> release of 7.2. pg_dump -A, please.

Good idea.
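
Today that means pg_dumpall -g for the globals plus a per-database
pg_dump, which is roughly what a single -A could fold together. A rough
sketch (the backup path is made up):

    pg_dumpall -g > globals.sql        # roles and tablespaces only
    for db in $(psql -At -c "SELECT datname FROM pg_database WHERE datallowconn"); do
        pg_dump -Fc -f "/backup/$db.dump" "$db"
    done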

-- 
 Simon Riggs           www.2ndQuadrant.com
 PostgreSQL Training, Services and Support

