Naz Gassiep <[EMAIL PROTECTED]> writes:
> Just a question, is there any advantage to having this over building a
> function in applications that wrap and use pg_dump with a few options?
> Surely that's a more appropriate way to achieve this functionality?
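[Editor's note: the application-side wrapping the question suggests might look roughly like the sketch below. The function name, database `mydb`, and table `public.users` are made-up placeholders; the script only assembles and prints the pg_dump command line, since no running server is assumed. `--schema-only` and `--table` are standard pg_dump options.]

```shell
#!/bin/sh
# Hedged sketch of an application-side wrapper around pg_dump.
# It builds the command that would fetch one table's DDL, but prints
# it instead of running it, since no live server is assumed here.
dump_table_def() {
    db=$1
    table=$2
    echo pg_dump --schema-only --table="$table" "$db"
}

dump_table_def mydb public.users
# prints: pg_dump --schema-only --table=public.users mydb
```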
Refactoring pg_dump into some sort of library would clearly be a better
solution. Unfortunately it's also a huge amount of work :-(

There are several reasons why trying to push pg_dump's functionality
into the backend is largely doomed to failure:

* pg_dump needs to be able to dump from older server versions, and
having two completely different code paths for servers before and after
version X would be a mess.

* pg_dump can't consider a table as a monolithic object anyway;
problems like breaking circular dependencies involving DEFAULT
expressions require getting down-and-dirty with the constituent
elements. If there were a monolithic pg_get_table_def function,
pg_dump couldn't use it.

* pg_dump ought to be dumping a snapshot of the DB as of its
transaction start time. Most of the backend's catalog access works on
SnapshotNow and hence fails this test. (I fear that we already have
some issues from the get_xxx_def functions that pg_dump uses now.)

			regards, tom lane

---------------------------(end of broadcast)---------------------------
TIP 7: You can help support the PostgreSQL project by donating at

                http://www.postgresql.org/about/donate
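[Editor's note: the circular-dependency point above can be illustrated with a hypothetical schema (the names `t` and `next_t_id` are invented for this sketch, and the ALTER shown is one way such a cycle can be broken, not pg_dump's exact output):]

```sql
-- Hypothetical cycle: t.id's DEFAULT calls next_t_id(), and
-- next_t_id() reads from t, so neither complete object can be
-- created strictly before the other.
CREATE TABLE t (id integer);

CREATE FUNCTION next_t_id() RETURNS integer AS
  'SELECT coalesce(max(id), 0) + 1 FROM t' LANGUAGE sql;

-- The cycle is broken by attaching the DEFAULT as a separate,
-- later step rather than inside CREATE TABLE -- which is exactly
-- why the table cannot be treated as one monolithic object.
ALTER TABLE t ALTER COLUMN id SET DEFAULT next_t_id();
```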