A new topic, 'Feature suggestions', has been made on a board you are watching.

You can see it at
http://liquibase.org/forum/index.php?topic=79.new#new

The text of the topic is shown below:

Hey,

I am a newbie with LiquiBase, but I am very excited about what I have seen so far.

Still, I have a few suggestions to make.

My current solutions to the first three suggestions are summarized afterwards.

1) Logging LiquiBase activity in the database.
As part of switching to LiquiBase I am looking forward to making the database 
refactorings part of the automated build process running for our source 
code. Whenever our product is built, all the resulting .jar files are stamped 
with certain build data (build date, build number, branch name etc.) by writing 
the data into the manifest. I want the same to happen for the changelog root file 
used by LiquiBase to execute all added refactorings. So every time a build is 
generated, these build data are written into the changelog root file. Whenever 
LiquiBase is executed, I want these build data to be written into a special 
table. So when LiquiBase starts, a record is written saying "START_LIQUIBASE", 
and when it is finished it writes another record, "STOP_LIQUIBASE". 
Now I can quickly look in the database to see whether a certain build has been 
executed and how long it took (there is a datetime on each record).
Only if I don't find a "STOP" record do I have to look in the log file to see what 
went wrong. Since all LiquiBase activity is logged, I can also see if a build is 
deployed multiple times (on purpose or by mistake).

Suggestion: LiquiBase should always log its activity in a table.
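Until something like this is built in, the START/STOP records can be approximated 
with ordinary changeSets. A minimal sketch, assuming the DATABASECHANGELOGACTIVITY 
table defined further down in this post; the ids, the author value, the 
ExpectedCheckpoint value and the @BUILD_NUMBER@/@BRANCH_NAME@ placeholders (to be 
filtered in by the build process) are mine, not LiquiBase features:

```xml
<!-- first changeSet in the root changelog: runs on every update.
     ExpectedCheckpoint must match the current checkpoint, or the
     trigger below rejects the insert and halts the run. -->
<changeSet id="log-start" author="example" runAlways="true">
    <insert tableName="DATABASECHANGELOGACTIVITY">
        <column name="Action" value="START_LIQUIBASE" />
        <column name="ExpectedCheckpoint" valueNumeric="1" />
        <column name="Comments" value="build @BUILD_NUMBER@ from @BRANCH_NAME@" />
    </insert>
</changeSet>

<!-- last changeSet in the root changelog -->
<changeSet id="log-stop" author="example" runAlways="true">
    <insert tableName="DATABASECHANGELOGACTIVITY">
        <column name="Action" value="STOP_LIQUIBASE" />
    </insert>
</changeSet>
```

The duration of a run is then just the difference between the DateExecuted 
values of a START/STOP pair.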


2) HUGE data refactorings.
From time to time we have to make data refactorings involving hundreds of 
millions of records. 
Making such a refactoring in one transaction would probably eat up all the disk 
space for the transaction log, putting the database server in an unhealthy state 
and giving the DBAs concurrent heart attacks. Instead we do the refactoring in 
pieces. First a changeSet creates some stats tables, then a piece of SQL 
code is executed in a loop, taking off say 10,000 elements at a time, 
manipulating them and adding a record to the stats table telling how far it has 
got; each loop iteration is one transaction.
This gives us two advantages.
a) By looking in the stats tables we can follow the process, see that it is not 
broken, and estimate when it will be completed.
b) If the process is stopped (say, by a broken connection), the SQL code is 
written so that it looks in its stats tables to see if it has already been 
running and, if so, resumes from there.
The changeSets must therefore be able to run more than once, but once the 
entire refactoring has completed they should never run again.
I therefore add a "mark" in my special table, which all changeSets involved in 
the refactoring use as a precondition - if the mark is there, skip.
This could perhaps also have been done by using <changeSetExecuted> looking for 
the last changeSet, but I am not fully convinced.

Suggestion: There should be a function in LiquiBase to make a mark in the 
database.
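Until a built-in mark exists, the skip-if-marked behavior can be expressed with a 
<sqlCheck> precondition against my activity table. A sketch under my table design 
from below; the mark text 'BIG_REFACTORING_DONE', the ids and the author value 
are hypothetical:

```xml
<!-- each chunked step is re-runnable until the final mark appears -->
<changeSet id="big-refactoring-step" author="example" runAlways="true">
    <preConditions onFail="CONTINUE">
        <sqlCheck expectedResult="0">
            select count(*) from DATABASECHANGELOGACTIVITY
            where Action = 'MARK_EVENT' and Comments = 'BIG_REFACTORING_DONE'
        </sqlCheck>
    </preConditions>
    <sql>-- looped, per-chunk SQL goes here</sql>
</changeSet>

<!-- when the whole refactoring has completed: set the mark -->
<changeSet id="big-refactoring-mark" author="example">
    <insert tableName="DATABASECHANGELOGACTIVITY">
        <column name="Action" value="MARK_EVENT" />
        <column name="Comments" value="BIG_REFACTORING_DONE" />
    </insert>
</changeSet>
```

With onFail="CONTINUE" the guarded changeSets are simply skipped once the mark 
record exists, instead of failing the whole update.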
 

3) New releases and virgin databases.
In the doc under LiquiBase Best Practices it is described how to start a new root 
changelog when making a new release. At that time we also generate a 
changeLogFile for creating a virgin database in that release.
The problem is that if we don't copy all 5000 records in the table 
DATABASECHANGELOG, executing an old build will actually be run, because 
LiquiBase cannot see that these changes were implicitly part of the current 
release.
Instead I have made a Checkpoint mechanism in my special table. In the root 
changelog there is an ExpectedCheckpoint, and if the CurrentCheckpoint in the 
database is different, then LiquiBase is HALTED.
Now, when making a new release, the last thing I do in the old release is issue 
an INCREMENT_CHECKPOINT command. The new root changelog is then marked with the 
new Checkpoint. 

Suggestion: There should be a Checkpoint function in LiquiBase.


When I made the solution for these features, my concerns were: 
a) How to make this work the same way in at least MSSQL and Oracle.
b) How to automatically record an execution datetime on all records.

So I figured that a trigger would do the trick.

So this is how I did it:

<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
  xmlns="http://www.liquibase.org/xml/ns/dbchangelog/1.9"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog/1.9
  http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-1.9.xsd">

    <changeSet id="2009-06-10_1" author="Claus J.O. Justesen">
        <preConditions onFail="WARN" onError="HALT">
            <not>
                <tableExists schemaName="dbo" tableName="DATABASECHANGELOGACTIVITY" />
            </not>
        </preConditions>
        
        <createTable schemaName="dbo" tableName="DATABASECHANGELOGACTIVITY">
            <column name="id" type="INT" autoIncrement="true">
                <constraints primaryKey="true" nullable="false" />
            </column>
            <column name="DateExecuted" type="DATETIME" />
            <column name="Action" type="varchar(50)">
                <constraints nullable="false" />
            </column>
            <column name="ExpectedCheckpoint" type="INT" />
            <column name="Comments" type="varchar(250)" />
            <column name="CurrentCheckpoint" type="INT" />
            <column name="ErrorMessage" type="varchar(250)" />
        </createTable>
        <sql>
create trigger tr_DATABASECHANGELOGACTIVITY_ins
on DATABASECHANGELOGACTIVITY for insert
as
declare @id int, @errmsg varchar(200), @cp_cur int, @cp_exp int, @action varchar(50), @res bit
select @errmsg = null, @cp_cur = null, @action = null
-- harvest record (assumes single-row inserts)
select @id = max(id) from DATABASECHANGELOGACTIVITY
select @cp_exp = ExpectedCheckpoint, @action = Action from DATABASECHANGELOGACTIVITY where id = @id
select @cp_cur = isnull(max(CurrentCheckpoint),0) from DATABASECHANGELOGACTIVITY
-- check : is the action known
if @action not in ('START_LIQUIBASE','STOP_LIQUIBASE','MARK_EVENT','INCREMENT_CHECKPOINT')
begin
    set @errmsg = 'ERROR: Unknown Action ['+isnull(@action,'NULL')+']'
end
-- check : if cp_exp is not null, it must equal cp_cur
if @cp_exp is not null
begin
   if @cp_exp != @cp_cur
   begin
      set @errmsg = 'ERROR: Unexpected Checkpoint. Expected ['+cast(@cp_exp as varchar)+'], Found ['+cast(@cp_cur as varchar)+']'
   end
end
-- check : for START_LIQUIBASE and INCREMENT_CHECKPOINT, cp_exp must not be null
if @action in ('START_LIQUIBASE','INCREMENT_CHECKPOINT')
begin
   if @cp_exp is null
   begin
      set @errmsg = 'ERROR: Expected Checkpoint was NULL for Action ['+@action+'].'
   end
end
-- conclusion
if @errmsg is null
begin
   if @action = 'INCREMENT_CHECKPOINT'
      update DATABASECHANGELOGACTIVITY set DateExecuted = GetDate(), ErrorMessage = null, CurrentCheckpoint = @cp_exp + 1 where id = @id
   else
      update DATABASECHANGELOGACTIVITY set DateExecuted = GetDate(), ErrorMessage = null, CurrentCheckpoint = null where id = @id
end
else
begin
   update DATABASECHANGELOGACTIVITY set DateExecuted = GetDate(), ErrorMessage = @errmsg, CurrentCheckpoint = null where id = @id
   raiserror(@errmsg, 16, 1)
end
        </sql>
        <insert tableName="DATABASECHANGELOGACTIVITY">
            <column name="Action" value="INCREMENT_CHECKPOINT" />
            <column name="Comments" value="2009-06-10 Setting Initial Checkpoint" />
            <column name="ExpectedCheckpoint" valueNumeric="0" />
        </insert>
    </changeSet>
    
</databaseChangeLog>
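For completeness, the release rollover described in suggestion 3 is then just 
another insert; a sketch (the changeSet id and author are hypothetical, and the 
ExpectedCheckpoint value must equal the checkpoint that is current at that time, 
here 1 after the initial insert above; the trigger computes the new 
CurrentCheckpoint as ExpectedCheckpoint + 1):

```xml
<!-- last changeSet of the old release: bump the checkpoint -->
<changeSet id="increment-checkpoint" author="example">
    <insert tableName="DATABASECHANGELOGACTIVITY">
        <column name="Action" value="INCREMENT_CHECKPOINT" />
        <column name="ExpectedCheckpoint" valueNumeric="1" />
        <column name="Comments" value="rolling over to the next release" />
    </insert>
</changeSet>
```

The new release's root changelog then carries the incremented value in its own 
START_LIQUIBASE insert, so running it against a database that never saw the old 
release is halted by the trigger.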


4) preConditions reports Fail.
In the doc under preConditions (button) it is stated that preConditions in 
included files are not evaluated until that include is about to run, but that 
this behavior may change in the future. 
It looks to me like it has changed, which I don't like much. Assume two 
changeLog files, A and B, where B depends on A having been run. 
If I run B on a database where A has already been run, the preCondition in 
B says OK. But if I run both A and B in a batch via includes, B will fail, 
because its preCondition is evaluated before any changeSets are run.
The problem here is that in the development process A could have been made 
months ago and executed on all test environments, so B will also execute; but 
our production environments can be in a branched state for months, 
so perhaps A has never been run there. One happy day we merge all branches, 
build and deploy, and BANG - there she goes. This is a production update, and 
the problem, small as it may seem if you know where to look, is now causing 
stress - this never happened during the tests!!

Suggestion: Add an attribute on preConditions saying whether a preCondition 
should be evaluated before any changelogs are executed, or right before its own 
changelog is executed.
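Under the current behavior, one possible workaround (an assumption on my part, 
please verify against your LiquiBase version) is to attach the dependency check 
to B's first changeSet instead of to B's changelog root, since a changeSet-level 
preCondition is only evaluated when that changeSet is reached. The ids, author 
and file name below are hypothetical:

```xml
<!-- in B.xml: guard the first changeSet instead of the changelog root -->
<changeSet id="b-1" author="example">
    <preConditions onFail="HALT">
        <changeSetExecuted id="a-final" author="example"
                           changeLogFile="A.xml" />
    </preConditions>
    <!-- B's changes go here -->
</changeSet>
```

This way, when A and B are included in one batch, A's changeSets have already 
run by the time B's guard is evaluated.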


I hope this novel can give you some inspiration for the future development of a 
great tool.
Sorry for my poor English.


Regards,
The LiquiBase Community Forum Team.
_______________________________________________
Liquibase-user mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/liquibase-user
