Amen to that!

Another important topic is macros and include files: if you recompile at
every stage, you have to make sure that all the macros and include files are
identical at every stage, which opens up more - and severe - possibilities for error.
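As a rough sketch of the kind of cross-stage check this implies, the snippet below compares member contents between two stage libraries by hash. The member names and copybook contents are entirely made up for illustration; a real check would read the actual libraries of each stage.

```python
import hashlib

def fingerprint(members: dict) -> dict:
    """Map each member name to a SHA-256 digest of its content."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in members.items()}

def diff_stages(test_lib: dict, prod_lib: dict) -> list:
    """Return member names whose content differs, or that exist in only one stage."""
    t, p = fingerprint(test_lib), fingerprint(prod_lib)
    return sorted(name for name in t.keys() | p.keys() if t.get(name) != p.get(name))

# Hypothetical copybook libraries from two stages (contents invented)
test = {"CUSTREC": b"01 CUST-REC. 05 CUST-ID PIC 9(8).", "POLREC": b"01 POL-REC."}
prod = {"CUSTREC": b"01 CUST-REC. 05 CUST-ID PIC 9(6).", "POLREC": b"01 POL-REC."}
print(diff_stages(test, prod))  # only CUSTREC differs: ['CUSTREC']
```

Any non-empty result means a recompile at the higher stage would not be building from the same source as the stage below it.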

If you have differences in the OS or LE releases between the different stages,
you should take care not to keep them for long. We try to migrate the
different LPARs (test, production, etc.) within weeks of each other.

Needless to say, our home-grown change management system has copied
the load modules (no recompiles) from the start, back in the 1990s.
The design goal was: what runs in production must be binary-identical
to what was tested. This is also what the regulatory authorities require
(we are an insurance company). For example, for every program run in the
last 10 years it must be possible to show the source code (and the include
files, compile listings, test protocols, etc.) as of that point in time.
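A minimal sketch of the "promote only what was tested" rule: record a digest of the exact module bytes that went through regression testing, and refuse any promotion whose bytes differ. The module name and byte strings here are invented; a real system would hash the actual load-library members.

```python
import hashlib

def digest(load_module: bytes) -> str:
    """SHA-256 digest identifying the exact binary."""
    return hashlib.sha256(load_module).hexdigest()

class PromotionLedger:
    """Remember what was tested; a promotion is valid only if the candidate
    is byte-for-byte what the regression test actually exercised."""
    def __init__(self):
        self._tested = {}

    def record_tested(self, name: str, module: bytes) -> None:
        self._tested[name] = digest(module)

    def promote(self, name: str, module: bytes) -> bool:
        return self._tested.get(name) == digest(module)

ledger = PromotionLedger()
tested_copy = b"\x47\xf0\xc0\x0c..."   # stand-in for the tested module's bytes
ledger.record_tested("PAYROLL", tested_copy)
print(ledger.promote("PAYROLL", tested_copy))          # True: identical binary
print(ledger.promote("PAYROLL", b"recompiled bytes"))  # False: not what was tested
```

The same stored digests double as an audit trail: for any production run you can prove which tested binary (and hence which source snapshot) was in use.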

Kind regards

Bernd



On 31.05.2013 16:43, Farley, Peter x23353 wrote:
Jan, in my experience recompiling is absolutely the wrong way to operate.  The 
most serious drawback: when developers perform regression testing (and 
they *do* *always* perform full regression testing, right?) to verify that what 
worked before their change still works the same, and that the only differences 
in the file outputs are those expected from their change, the version that 
they test is NOT the version that goes into production (or into any other 
stage along the way to production).

Given the system software rollout pattern you have described (and that is the "right", i.e. safe, 
way to do it, IMHO), the application software running in production *must* always be at the "lowest 
common denominator" level (i.e., whatever is in production), thus possibly losing any benefit of newer 
HW/SW levels for the time it takes to finish a system software rollout.  The great benefit is that production 
behavior is predictable and stable.  From the developer's perspective, this means NOT using the "latest 
and greatest" compiler and run-time system features until those features have reached the production 
environment.

Significant differences in compiler facilities do have to be carefully 
monitored so that none of the broken scenarios you describe can occur.  The 
standard compile-and-link process must be tightly controlled so that every 
compile at the development level uses the same standard 
lowest-common-denominator options and facilities, with no way for a 
developer to bypass the standard.
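One way to picture that enforcement is a gate that rejects any compile request deviating from the approved option set. The option names below are illustrative COBOL-style options chosen for the example, not a statement of what any particular shop's standard should be.

```python
# Hypothetical "lowest common denominator" standard for the stage.
STANDARD_OPTS = {"OPTIMIZE(1)", "ARITH(EXTEND)", "TRUNC(OPT)", "RENT"}
# Options assumed unavailable until the newer compiler reaches production.
FORBIDDEN = {"TEST", "OPTIMIZE(2)"}

def validate_options(requested: set) -> list:
    """Return a list of violations; an empty list means the compile may proceed."""
    problems = []
    for opt in sorted(requested & FORBIDDEN):
        problems.append(f"forbidden option: {opt}")
    for opt in sorted(STANDARD_OPTS - requested):
        problems.append(f"missing standard option: {opt}")
    return problems

print(validate_options({"OPTIMIZE(1)", "ARITH(EXTEND)", "TRUNC(OPT)", "RENT"}))  # []
print(validate_options({"OPTIMIZE(2)", "RENT"}))  # forbidden + missing options listed
```

Running the check centrally (in the standard compile procedure, not in developer-editable JCL) is what makes the standard unbypassable.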

The only safe way to support recompilation at each of your several stages along 
the way to production is to perform full regression testing *at every level* 
after recompilation.  This is a *huge* burden on personnel and 
systems (people time and CPU/DASD usage, among others), especially if you are 
(as I suspect from your description) a large shop with many different 
applications and application groups.  Having to perform multiple levels of 
regression testing significantly slows time-to-market for any application 
change, which can be deadly to any business model.  And to your employment.  A 
dead business pays no salaries (except maybe to the bankruptcy crews... :).

HTH

Peter



----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN