Bill Woodger wrote:
>You (now) need to check for the stupid out-of-order PERFORM ...
>THRU ... but otherwise you are good to go.

I don't understand the logic. Yes, you ought to make sure that ABO PTF is
applied. But look again at the APAR (PI68138):

"ABO was fixed to correctly optimize input programs that contain these
specific kinds of PERFORMs."

These "stupid" PERFORMs work correctly now. ("Correctly" means "whatever
they were doing before." Of course if they never worked as intended, they
will continue not working.) You don't have to check for them, but you do
have to check to make sure the PTF is applied if you have ABO Version 1.1.
Or use ABO Version 1.2, with a planned general availability date of
November 11, 2016.

Norman Hollander wrote:
>ABO creates a new load module that (IMHO) needs as much Q/A testing as
>compiling in the newest compiler.

Karl Huf wrote:
>Our developers are required to do
>regression testing on their changes - even if it is just recompiling
>with no source code changes. They initially argued (well, not
>initially, it went on way too long) that there's no need to do such
>testing when using ABO.  Technically they may be right; technically one
>probably shouldn't have to do complete regression testing when
>recompiling the same source.  None of that makes any difference if the
>stated requirement in the development standards they have to follow says
>they DO have to do that testing.  Knowing that, then, they would be
>similarly required to do that testing for an ABO optimized module. We
>questioned the benefit of licensing another product to do the same thing
>the compiler can do.

Charles Mills wrote:
>So I raise my eyebrows at the assertion that since the ABO is just a
>massage of the existing compiled object code no re-testing is necessary.
>I don't deny it; I just raise my eyebrows: IBM's compiler team knows a
>lot more about this stuff than I do.

OK, about testing. For perspective, for over two decades (!) Java has
compiled (and still compiles) bytecode to native code at run time,
*every time* a Java class is first loaded and used. The resulting
native code is optimized for a machine model: the actual machine model
or the highest model that JVM release level can exploit, whichever is
lower. Raise your hand if you're performing "full regression
testing"(*) before every JVM instantiation. :-) That's absurd, of
course. I'm trying to think how that would even be technically
possible; at the very least it'd be difficult.
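
If you'd like to watch this happen, here's a minimal Java sketch (the
class name, method, and loop counts are mine, purely for illustration;
-XX:+PrintCompilation is a standard HotSpot flag). Run it and the JVM
logs each method as it gets compiled to native code, mid-run, with no
testing pause:

    // Warmup.java -- run with: java -XX:+PrintCompilation Warmup
    public class Warmup {
        // A deliberately hot method; the JIT compiles it to native
        // code once it crosses the invocation threshold.
        static long sum(int n) {
            long total = 0;
            for (int i = 0; i < n; i++) {
                total += i;
            }
            return total;
        }

        public static void main(String[] args) {
            long result = 0;
            for (int i = 0; i < 20_000; i++) {
                result += sum(1_000);
            }
            System.out.println(result);
            // The log shows sum() -- and plenty of the JDK itself --
            // being translated to native code while the program runs.
        }
    }

Nobody reruns their regression suite because the JIT promoted sum() to
a higher optimization tier mid-execution.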

ABO takes the same fundamental approach. From the perspective of an IBM
z13 machine (for example), ABO takes *COBOL* "bytecode" (instructions
of 1990s and earlier vintage) and translates it into functionally
identical native code (the 2015+ instruction set). That's the essence
of what JVMs do all the time, and what IBM and others in the industry
have been doing for over two decades. Except with ABO it's done once
(per module), and you control when and where.
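
To make the analogy concrete, here's a deliberately toy sketch in Java
(emphatically NOT ABO's actual design; the mnemonics and the rewrite
table are invented): a one-time pass that rewrites a module's "old"
instructions into functionally identical "new" ones, passing through
anything it doesn't recognize unchanged:

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    // ToyBinaryOptimizer.java -- a toy illustration only.
    public class ToyBinaryOptimizer {
        // Invented mapping: "old" instructions to functionally
        // identical "new" ones the target machine runs faster.
        private static final Map<String, String> REWRITES = new HashMap<>();
        static {
            REWRITES.put("OLD-DECIMAL-ADD", "NEW-VECTOR-DECIMAL-ADD");
            REWRITES.put("OLD-LOAD", "NEW-LOAD");
        }

        // Translate once, offline, per module; behavior is preserved
        // because unknown instructions pass through untouched.
        static List<String> optimize(List<String> module) {
            return module.stream()
                         .map(insn -> REWRITES.getOrDefault(insn, insn))
                         .collect(Collectors.toList());
        }

        public static void main(String[] args) {
            List<String> module =
                Arrays.asList("OLD-DECIMAL-ADD", "BRANCH", "OLD-LOAD");
            // Prints: [NEW-VECTOR-DECIMAL-ADD, BRANCH, NEW-LOAD]
            System.out.println(optimize(module));
        }
    }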

I don't think I've seen any published IBM technical advice suggesting
*zero* ABO-related testing. Testing is important. However, you really,
really can't afford to be silly about it. (Do you run a full regression
test of all your applications when you add another disk pack to your
storage unit?) If you're running a fixed battery of tests just so you
can tick a box on a compliance form, you (or somebody above you) have
probably lost the plot. That's mere process over outcome, the sort of
behavior that kills organizations. Focus instead on the desired outcome
and the real, informed risk profile of what you're trying to
accomplish. Then test *appropriately*. Ask the IBM ABO technical team
if you'd like some testing guidance.

Let's oversimplify and assume a range of testing intensity (and work
effort, and costs) from 0% to 100%. Certainly you should test ABO
"pathways" in your environment to some degree. So, I vote for "not zero."
But 100%? Really? I don't buy that extreme either.

Another way to think about this: one can ALWAYS dream up more, and more
complex, tests to run. That's not hard. So why aren't you running them,
and more tomorrow, and more next month, and more again the month after
that, with ever increasing testing scope and effort? The basic answer
is that it would be silly. Time and testing resources are finite, and
they have costs, including the opportunity costs of delayed
introduction of new business functions to your customers and partners.
As with managing any other finite resource, the most successful
organizations consume it smartly and efficiently. They aim to get
maximum risk mitigation (in business value and cost terms) for every
dollar, yen, euro, and minute of testing resource.
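
If you want numbers to argue about, here's a back-of-the-envelope
sketch (the catch rate and the per-round cost are assumptions I made
up, not measurements): suppose each additional round of testing
catches half the *remaining* risk and costs the same as the round
before. The risk removed per dollar halves every round:

    // TestingReturns.java -- assumed numbers, purely illustrative.
    public class TestingReturns {
        public static void main(String[] args) {
            double residualRisk = 100.0;       // % of original risk left
            final double CATCH_RATE = 0.5;     // assumed: each round halves it
            final int COST_PER_ROUND = 10_000; // assumed: flat cost per round

            for (int round = 1; round <= 6; round++) {
                double caught = residualRisk * CATCH_RATE;
                residualRisk -= caught;
                System.out.printf(
                    "Round %d: $%,d total, %.2f%% removed this round, %.2f%% remains%n",
                    round, round * COST_PER_ROUND, caught, residualRisk);
            }
            // Round 1 removes 50 points of risk; round 6 removes about
            // 1.6 -- same cost, a small fraction of the benefit.
        }
    }

Where you stop along that curve is exactly the "test *appropriately*"
judgment call above. The point is that the curve bends, hard.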

(*) Whatever that means. :-)

Timothy Sipples
IT Architect Executive, Industry Solutions, IBM z Systems, AP/GCG/MEA
