On Sun, Feb 24, 2008 at 6:52 PM, Jochem van Dieten <[EMAIL PROTECTED]> wrote:
>
>  Or we could have a switch that specifies a directory and have pg_dump
>  split the dump not just in pre-schema, data and post-schema, but also
>  split the data in a file for each table. That would greatly facilitate
>  a parallel restore of the data through multiple connections.
>


How about having a single switch like --optimize <level>, with pg_dump behaving
differently based on the level? For example, if optimization is turned off
(i.e. -O0), pg_dump just dumps the schema and data as it does today. At level 1,
it would dump the pre-schema, data and post-schema separately.
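
Purely to illustrate the proposal (the switch does not exist today, and the
name is just the one suggested above), the invocations might look like:

    pg_dump --optimize 0 mydb > dump.sql   # today's behaviour: schema followed by data
    pg_dump --optimize 1 mydb > dump.sql   # pre-schema, then data, then post-schema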

We can then add more levels to optimize further, for example by postponing the
creation of non-constraining indexes, splitting the data into multiple files,
etc. I can also think of adding constructs to the dump that identify what can
be restored in parallel, with pg_restore using that information during the
restore.
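
Just to make the per-table split and parallel load concrete, here is a rough
sketch of how it could be approximated from outside today, using only switches
pg_dump and psql already have (-s, -a, -t, -f, -d). It is purely illustrative,
not a proposed implementation; the database names, table list and worker count
are invented for the example:

    #!/usr/bin/env python
    # Rough sketch only: dump each table's data into its own file with
    # "pg_dump -a -t <table>" and load the files in parallel with psql,
    # one connection per file.  Database names, the table list and the
    # worker count are all made-up placeholders.

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    SRC_DB = "mydb"            # database being dumped
    DST_DB = "mydb_copy"       # target database, assumed to already exist
    TABLES = ["customers", "orders", "order_lines"]   # hypothetical tables

    def dump_table_data(table):
        # Data-only dump of a single table into its own file.
        path = "data_%s.sql" % table
        subprocess.check_call(["pg_dump", "-a", "-t", table, "-f", path, SRC_DB])
        return path

    def load_file(path):
        # Load one data file through its own connection.
        subprocess.check_call(["psql", "-d", DST_DB, "-f", path])

    if __name__ == "__main__":
        # Schema first (plain -s; without the pre/post split being discussed,
        # indexes and constraints get created here as well).
        subprocess.check_call(["pg_dump", "-s", "-f", "schema.sql", SRC_DB])
        subprocess.check_call(["psql", "-d", DST_DB, "-f", "schema.sql"])

        files = [dump_table_data(t) for t in TABLES]

        # Load the per-table data files through multiple connections at once.
        with ThreadPoolExecutor(max_workers=4) as pool:
            list(pool.map(load_file, files))

Doing it inside pg_dump/pg_restore, with the index and constraint creation
deferred to the post-schema step, is of course the point of the proposal.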


Thanks,
Pavan

-- 
Pavan Deolasee
EnterpriseDB     http://www.enterprisedb.com
