From: "Mark Derricutt" <[EMAIL PROTECTED]>fyi, Erwin does allow this to be 'automated.'> As part of our project, we're releasing 'live' systems for the client at
> short 2 week iterations or so (or trying to). However, as the project
> grows, the db structure is slightly changing (adding new tables, adding
> reference fields from existing tables to new ones etc. etc. etc.
>
> I was wondering what 'best practices' there were out there for managing
> db-schema versioning, for updating one db schema to another, without loss
> of data.

Almost any good DB tool such as ERwin has the ability to generate all the
DDL needed to recreate the schema. Keep these scripts in whatever code
versioning system you already use.
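
As a concrete (and purely illustrative) sketch, a generated script kept
under version control might look like this; the file name, table, and
columns are invented for the example:

    -- schema/002_add_invoice_table.sql
    CREATE TABLE invoice (
        id          integer PRIMARY KEY,
        customer_id integer NOT NULL REFERENCES customer(id),
        issued_on   date    NOT NULL
    );

Each release then only applies the scripts added since the last one, e.g.
with psql -f schema/002_add_invoice_table.sql yourdb.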

> We're using an in-house modelling tool (well, it's more a sourceforge project
> of mine) that generates our SQL schema, DAO objects, and generic backend
> stuff, and am looking to extend it to create "diffs" between two models,
> and generate the required alter table/create table stuff, and provide for
> custom user processing of data (i.e., field x needs to be halved, field y
> needs to be split into fields y1 and y2 before being dropped) etc. etc.
>
> I was wondering how other people have handled this in the past, whether
> anyone knows of any existing projects/products to handle it, etc. etc...
> (target db is postgres, and running under linux if it matters).

I have worked on many db conversions and schema alterations. I know of no
tool that will completely automate this process. I cannot even conceive of
one being developed anytime in the near future. Such a tool would be
extremely sophisticated, and the market potential is very small.

In order to "update" a schema by incorporating changes made during the
development process, you have to disable certain system constraints and
certain triggers. Then you execute the DDL to make the changes. But this
part can vary widely depending on the DBMS being used.
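
Roughly, that disable/alter/re-enable cycle looks like this in current
PostgreSQL (the table, column, and trigger set here are invented for the
example):

    BEGIN;

    -- Keep user-defined triggers from firing while the schema changes.
    ALTER TABLE orders DISABLE TRIGGER USER;

    -- Run the DDL for the change itself.
    ALTER TABLE orders ADD COLUMN discount_rate numeric;

    -- Turn the triggers back on once the change is in place.
    ALTER TABLE orders ENABLE TRIGGER USER;

    COMMIT;

Foreign keys and other system-level constraints may need similar handling,
and the exact syntax for all of this differs from one DBMS to the next.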

Some DBMSs will allow you to, say, add a new column to an existing table and
others will not. With those, you have to create the whole table under a
different name, move the data from the old table to the new, delete the old
table, and rename the new one. Even fewer allow the direct removal of a
column from a table.
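
That rebuild goes something like the following sketch (the "customer" table
and its columns are hypothetical, and note that any indexes, constraints,
and triggers on the old table have to be recreated by hand):

    BEGIN;

    -- Build the replacement table without the column being removed.
    CREATE TABLE customer_new (
        id   integer PRIMARY KEY,
        name varchar(100) NOT NULL
    );

    -- Copy every surviving column across.
    INSERT INTO customer_new (id, name)
        SELECT id, name FROM customer;

    -- Retire the old table and take over its name.
    DROP TABLE customer;
    ALTER TABLE customer_new RENAME TO customer;

    COMMIT;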

Regardless, if there are constraints on the new column, existing constraints
must be replaced, altered, or deleted, or new constraints created. Triggers
(even ones associated with other tables) may have to be deleted, altered, or
new ones created. A process may have to be run to populate the new column
with valid data before the constraints can be re-enabled. Then everything
has to be re-enabled in just the right order.
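
The field split from the original question (y into y1 and y2) is exactly
that kind of populate-then-constrain step. A rough PostgreSQL sketch, with
an invented "measurements" table:

    BEGIN;

    ALTER TABLE measurements ADD COLUMN y1 numeric;
    ALTER TABLE measurements ADD COLUMN y2 numeric;

    -- Populate the new columns with valid data first...
    UPDATE measurements SET y1 = y / 2, y2 = y - (y / 2);

    -- ...then enforce the constraints and retire the old column.
    ALTER TABLE measurements ALTER COLUMN y1 SET NOT NULL;
    ALTER TABLE measurements ALTER COLUMN y2 SET NOT NULL;
    ALTER TABLE measurements DROP COLUMN y;

    COMMIT;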

This process requires too many intricate judgments for it to be automated.
And I was only considering adding one column to one table. Due to possible
interactions, the level of complication increases rapidly with the number of
changes made.

With user interaction, the metadata in your ERD is compared to that of the
target database. When you make a change in your ERD and wish to forward
engineer it to your database, the tool does a complete comparison of the
existing metadata against your ERD, or against the selected objects you
chose to implement.
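
The live half of that comparison is just the database's own catalogue. As a
rough illustration (not ERwin's actual mechanism), in recent PostgreSQL
versions a tool or a hand-rolled diff script could read the column metadata
to compare against the model like this:

    SELECT table_name, column_name, data_type, is_nullable
    FROM   information_schema.columns
    WHERE  table_schema = 'public'
    ORDER  BY table_name, ordinal_position;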

As per the example used above, the new column and its dependencies (whether
in other tables, constraints, or triggers) are handled and generated in an
RDBMS-engine-specific script in the correct sequence (i.e. renaming the
table, creating a new table, adding the constraint, etc.), which can then be
applied.

The only drawback is that it leaves the old metadata behind, and you have to
clean that up yourself.
g'luck,
M
