I've been trying to figure out a good way to manage schema change control for a while now. Since I have a development background, I really want to find some way to check the schema into an SCM system like CVS (for example). Just using a pg_dump doesn't work very well because there's no guarantee of consistent ordering. One of the things I find most useful with SCM systems is the ability to quickly spot changes, and a re-ordering within a single file makes for a lot of noise in the deltas.
So far, the best idea I've come up with is a "file tree dump" approach: get a list of every object in the database, dump each one into a suitably named file, and finally create a master file that consists of nothing but include lines (there's a rough sketch of what I mean at the bottom of this mail). There are a few problems I can see with this:

1) How to calculate the dependency graph (or worse, deal with RI loops) to determine the right load order isn't stupidly obvious.

2) If I just script this in bash, without making any changes to pg_dump, it doesn't all happen in a single transaction.

3) There's no clever solution for relative vs. absolute filenames (unless all of this goes into a tar file, but that format already exists and doesn't solve my problem).

So my question is: does anyone have a better way of managing schemas in SCM systems? And if not, is there perceived value in the community for a pg_dump --format scm_friendly option? (In which case I'll take this thread over to hackers.)

Drew
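
P.S. To make the file tree dump idea concrete, here's a minimal sketch of the kind of bash script I have in mind. Everything in it is illustrative: it assumes a database called mydb and an output directory schema/, it only handles plain tables, and it orders them alphabetically, so it runs straight into problems 1 and 2 above.

#!/bin/sh
# Sketch of the "file tree dump" idea: one file per object plus a
# master file of include lines.  Only tables are handled here;
# functions, views, sequences, etc. would need similar loops.

DB=mydb
OUT=schema
MASTER="$OUT/schema.sql"

mkdir -p "$OUT"
: > "$MASTER"

# Pull the list of user tables out of the system catalogs.
TABLES=`psql -A -t -d "$DB" -c \
  "SELECT tablename FROM pg_tables
   WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
   ORDER BY tablename;"`

for t in $TABLES; do
    # One object per file, schema only.  Each pg_dump call is its own
    # transaction, and alphabetical order ignores dependencies.
    pg_dump -s -t "$t" -f "$OUT/$t.sql" "$DB"
    # The master file is nothing but include lines.
    printf '\\i %s.sql\n' "$t" >> "$MASTER"
done

Note that the \i lines in the master file get resolved relative to wherever psql is invoked, not relative to the master file itself, which is exactly the relative vs. absolute filename headache in 3.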