Re: ABO Automatic Binary Optimizer

2016-10-19 Thread Bill Woodger
Agreed apart from "the means". The original topic: "Does anyone know if this 
product, Automatic Binary Optimizer, will actually migrate Cobol V4 to V6 for 
you?  Our IBM reps are telling us that it will actually do the migration for 
you."

100 different people contacting the "ABO Team" is also an "outdated" way to do 
it. The ABO Team (probably) has no experience of "existing testing requirements 
for large systems", and on the other side of that hill there is little or no 
information about *why* it is probably possible for ABO testing to be 
"different", and how to go about arguing for that.

With some more detailed knowledge of ABO first, a much more effective 
interaction can be expected.

If ABO is to become what it seems it is expected to become (by IBM), a 
significant tool expected to provide "hardware exploitation" to the majority of 
programs which will not be recompiled in the foreseeable future, that can't 
happen in an information vacuum about the product.

It is early days with ABO, but what better time to start everything than 
"early" (and earlier than this)?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-18 Thread Timothy Sipples
David Crayford wrote:
>Good point well made.

Thanks, David.

For the record, Amazon.com (the commerce site and associated commerce
services) reportedly consists of a *mix* of programming languages: Java, C,
C++, Perl, Ruby (on Rails), and Javascript. All of these programming
languages are available for IBM z Systems, by the way. Is COBOL "different"
from a testing and deployment lifecycle point of view? No, absolutely not
-- I again strongly disagree. If anything, C (for example) is much more
difficult to test well. Javascript isn't even compiled.

I hope nobody is testing the Employee Cafeteria Menu application and the
Billion Dollar Payments application using the same testing effort and
procedures, even if both applications happen to be written in COBOL and run
on the same machine. That testing equivalence would be nuts.

Adapt or die, folks. Yes, I know some of you occasionally face irrationally
rigid people who celebrate process for process' sake, with little or no
awareness of real-world outcomes and actual business risks. Some of them
are auditors. That doesn't mean we should agree with such irrational
rigidity.

Back to ABO. ABO and Enterprise COBOL Version 6 have clear benefits. They
can help you reduce your peak monthly utilization, shorten CPU-bound batch
execution times, and improve the performance of latency sensitive
transactions. Those benefits translate directly into financial and other
valuable business benefits. Exactly how much benefit you can get is
situational, but you can figure that out pretty easily (and without your
purchasing department having to do anything). You and your employer have a
choice, and so do your competitors. You can postpone adopting these
technologies until 2038 because you've got a "one size fits all" testing
program, that's your process (gosh darn it), and any "new" (not new!)
thinking is intolerable. That's one option, an expensive one. Or you can
apply some rationality, discuss appropriate testing scope and methodologies
with IBM, verify that these technologies work and work well starting with
your low risk programs, and enjoy the benefits much sooner. I vote for this
second approach. It's reasonable, rational, logical, informed, and based on
sound risk mitigation and outcome-oriented principles.


Timothy Sipples
IT Architect Executive, Industry Solutions, IBM z Systems, AP/GCG/MEA
E-Mail: sipp...@sg.ibm.com

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-18 Thread Bill Woodger
Yes, let's add Literature into the pot as well...

The thing is, once a COBOL program is compiled, it is no longer a COBOL 
program. It is no longer at the whim of a misplaced full-stop (period); it is 
oblivious to whether a SECTION has been coded or a THRU has been used, to the 
GO TO superficially indistinguishable from a "leg" of a condition, and to the 
magic which operates when a 12-integer-3-decimal-place packed-decimal item is 
MOVEd to a zoned-decimal integer, to a floating-point item, or to an 
identically-described subscripted item.
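
To pin down that "magic", a minimal sketch (the program name and data names 
are invented, nothing official):

       IDENTIFICATION DIVISION.
       PROGRAM-ID. MOVDEMO.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  W-PACKED     PIC S9(12)V9(3) PACKED-DECIMAL VALUE 123.456.
       01  W-ZONED-INT  PIC S9(12).
       01  W-FLOAT      COMP-2.
       01  W-TABLE.
           05  W-ENTRY  PIC S9(12)V9(3) PACKED-DECIMAL OCCURS 10.
       PROCEDURE DIVISION.
      *    One source field, three targets, three quite different
      *    pieces of generated code: decimal truncation to a zoned
      *    integer, conversion to binary floating-point, and a store
      *    whose address is computed from a subscript at run time.
           MOVE W-PACKED TO W-ZONED-INT
           MOVE W-PACKED TO W-FLOAT
           MOVE W-PACKED TO W-ENTRY (5)
           GOBACK.

None of that variety survives visibly once it is machine-code; it is just 
instructions.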

Poof! In about the same time it takes to compile a program, it has been 
compiled, and is now sitting as a bunch of machine-code, with some data 
arranged in an ordered manner. Now that it is machine-code, it is coincidental 
that it was once COBOL. Except that, because it once was COBOL, and has been 
compiled, there is a shed-load of defined information about it, and another 
shed-load or two of information that can be intuited. And there is a further 
amount, which I can't relate to in terms of sheds (is it one, three hundred, 
0.002?), because it is machine-code and because there's maybe something that 
can be done to it: turn it into an intermediate form (let's call it an 
Intermediate Result, for the heck of it), and then the IR can be modified in a 
cyclic process which, at each change, contains a "proof" that the state is 
unchanged.

Once no more can be done with the IR, compile that. Now it's machine-code 
again. Some final touches, a bit of low-level (unverified?) optimisation 
(including perhaps killing that vital "return" from that "stupid PERFORM", my 
guess) and you have a new program which, to some extent, to be determined, 
"does the same thing" as the original.

Things projected for the future of ABO:

1) It is not going away. Since nowhere near all programs are changed even over 
a long period of years, ABO will be available to "soup up" (great, now I have 
soup and stock, bring in the cookery to the topic as well) things and not 
require the work to "recompile".
2) It is not going away. When V7 comes (so it is already in the works) ABO will 
be available for V5/V6 program stock, as well as all the others it supports. 
3) In some future it may also be able to "profile on the fly", providing 
further performance improvements revealed by the actual interaction between 
program and data consumed. A different run, with different data, may obtain 
different improvements if the profiling dictates.
4) Other stuff.

So, yes, Martin Packer's point stands: the ABO "team" should provide a lot 
more information, so that things like "how do we test something which is 
ABO'd?" can be discussed *in terms of what the thing does*, not in terms of 
"remember Java" or "xyz is unrealistic". Without knowledge, everything is 
effectively speculation, which could be had in any local bar, with the same 
quality of content and result.

So, literature, cookery, woodworking, horse racing, cricket (should have put 
that first), let's get them all in here - without information, we may as well 
:-)

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-18 Thread Jesse 1 Robinson
"Happy families are all alike; every unhappy family is unhappy in its own way." 
While Wikipedia calls it the Anna Karenina Principle, it's not quite enough to 
substantiate the Russian claim that Tolstoy invented great literature.  It's 
still a vital lesson.

I would venture that there are so many ways to screw up a COBOL program that no 
software mechanism could possibly catch all of them, let alone correct them. 
Like a medieval cathedral that took generations to build, a 'classic' COBOL 
program has evolved through many cosmetic surgeries performed by many different 
hands with many different attitudes. Some scars are superficial. Some are 
systemic. The best news is that not all programs need to be converted at once. 
Go for the low-hanging fruit; the biggest, sweetest ones that will feed the 
most mouths. Leave the rest for gleaners. There is no need to wait for a grand 
solution. 

.
.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-302-7535 Office
robin...@sce.com

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

Re: ABO Automatic Binary Optimizer

2016-10-18 Thread Anne & Lynn Wheeler
sipp...@sg.ibm.com (Timothy Sipples) writes:
> I strongly disagree with the word "all." I don't think that word in this
> sentence is grounded in a reasonable, rational, informed assessment of
> comparative risks and testing costs.

re:
http://manana.garlic.com/~lynn/2016f.html#91 ABO Automatic Binary Optimizer
http://manana.garlic.com/~lynn/2016f.html#92 ABO Automatic Binary Optimizer
http://manana.garlic.com/~lynn/2016f.html#97 ABO Automatic Binary Optimizer

I wrote a long tome on this 17Oct1980 for internal distribution
... about having a single monolithic resource versus having a huge number of
smaller resources capable of efficiently rolling in/rolling back. They sent
somebody from Armonk corporate hdqtrs to slap my hands. I had already
had my hands slapped that summer ... being blamed for online computer
conferencing on the internal network (larger than arpanet/internet from
just about the beginning until sometime mid-80s). Folklore is that when
the corporate executive committee was told about online computer
conferencing (and the internal network), 5 of 6 wanted to fire me.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-18 Thread David Crayford

On 18/10/2016 12:55 PM, Timothy Sipples wrote:

Bill Woodger wrote:

For me, changing any compile option at the moment of going to Production
invalidates all the testing up to that point.

Then along comes Java :-)


Good point well made.


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



Re: ABO Automatic Binary Optimizer

2016-10-18 Thread Bill Woodger
We continue to add things to the bubbling pot that is a discussion started 
through the misunderstanding of some IBM sales staff. 

Tom Ross ventured close, dipped a spoon in the pot, and confirmed that there is 
no "source conversion" process for migration to V5/V6. Mmmm... he's either 
writing something very substantial, or he ain't coming back to this pot any 
time soon.

Sure, delivery needs to be addressed. 

But. COBOL isn't Java.

While a COBOL program is able to overwrite itself, or another program (there 
are limits, it can't trash storage it has no access to), you can't compare 
testing processes for COBOL to Java.

The generous levels of programmer-hand-holding in Java (and other "more modern" 
languages) don't exist beyond a bit of a shadow, or mist, or the intellectual 
satisfaction of "hah! Xyz! So that proves you wrong!".

I'm prepared to bet (though it is a case impossible to measure or resolve, so 
even betting is useless) that around the world, across, say, 10,000 Mainframe 
sites, there are at least 20,000 occasions a day, each unique, not the same 
storage, on which storage is overwritten in Production COBOL systems. It 
happens "so often" because most often when it happens "it doesn't matter".

By "doesn't matter" I mean that the program completes normally, and all the 
test cases are passed, and all Production data is correctly processed. 
Actually, can't be definitive about that latter, wihch is the most important 
thing. If nothing else, the unintntional partial destruction of a program is at 
least a "symptom" of "something".

OK, arriving at the Door to Production (I've never known of it referred to as 
that, but what the heck?). You have one of these systems, and it has passed all 
the tests. You change a compile option which affects the generation of code, 
which means that what was being overwritten with impunity previously is now 
safe, but something which actually matters after the overwriting has occurred 
has now been... altered. Unthinkingly, you toss it into Production.

The 10am presentation in the Board Room is arranged; you have people from the 
US and Asia who are flying in, and out again, specially.

You know what is going to happen.

So what do you do in that situation? It's been through all the tests, and then 
it gets vomited out on its introduction to Production. Explain that away. As 
soon as you get to "and we did this...", whatever "this" is, anyone but the 
village idiot is going to ask "why did you do that" (referring to the "this", 
of course). Oh, and it is much, much worse if, instead of vomiting, there is 
mild nausea which goes undiscovered for a period of time.

So you don't just change compiler options, or the order of INCLUDEs in a 
linkedit deck (indeed, as Peter pointed out earlier, many sites push 
executables up the line, rather than recreating them at each stage).

20,000 only seems a lot in isolation (and it is an entirely invented figure, 
just to have a figure), it is tiny amongst the total, and the times that 
something turns from an innocuous to a vicious situation are tiny in comparison 
to 20,000. But it happens.

If you do just up and change a compile option, you have invalidated your 
testing. If you are going to go with it anyway (it wasn't my hypothetical 
scenario) then the risk in terms of possibility is very small, but in terms of 
potential impact may be immense. So, as well as going forward, you do what you 
can to be as sure as possible that the health of the system will be unimpeded. 
Do you feel happy yourself? No, you want to find out how you were strung up in 
that situation, and do what you can to make sure it can't, ever, happen again.

On the other side of the hill, an expression I use because I like it, not 
because of its confrontational connotations, there are "edge cases". An 
edge-case is something that's unlikely to happen, so you don't have to bother 
about it until it does happen. There are also "corner cases". A corner-case is 
a situation which is unlikely to happen, and even if it does you are not going 
to bother about it.

You don't bother so much about getting the technical details right, because 
usually (or at least in expectation) the language will do something to assist 
you. "Line 3491 array out of bounds". Which is nice, but the underlying 
checking code operates 24 hours a day, 365 days a year (plus leap seconds and 
days), whilst 99.99% of the time it is unnecessary to have the check done 
automatically.

In passing, I hope no-one replies to this, else the topic gets even more 
convoluted, wide-ranging and unresolved.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-17 Thread Timothy Sipples
Bill Woodger wrote:
>For me, changing any compile option at the moment of going to Production
>invalidates all the testing up to that point.

Then along comes Java :-)

I strongly disagree with the word "all." I don't think that word in this
sentence is grounded in a reasonable, rational, informed assessment of
comparative risks and testing costs.

Consider also this important point: many businesses are moving much, much
faster than this rigid point of view would ever allow. Take a look at this
2011 video, for example:

https://www.youtube.com/watch?v=dxk8b9rSKOo

The whole video is worth watching, but fast forward to about 10:00 for the
key statistics. According to the speaker, Amazon.com (the Web commerce
site, not all of AWS) deployed code changes into production on average once
every 11.6 seconds in May, 2011 (based on weekday deployments; they
evidently have a slower but still rapid deployment pace during weekends).
That was their pace half a decade ago. Are your current testing practices
and policies able to support that sort of business velocity or anything
vaguely similar? If not, why not? Are you helping your business compete? (I
believe at least a couple readers do work for businesses in competition
with Amazon.)

Amazon, the publicly traded company, has a market capitalization of $385.4
billion (as of October 17, 2016). Among companies traded on U.S. exchanges
it's currently #4 by that measure. True, its price-earnings ratio is over
200, i.e. the company isn't all that profitable. But that's yet another
problem if you're in competition with Amazon.


Timothy Sipples
IT Architect Executive, Industry Solutions, IBM z Systems, AP/GCG/MEA
E-Mail: sipp...@sg.ibm.com

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer/COBOL V5/V6 Migration

2016-10-17 Thread Tom Ross
>ABO only creates an optimized LOAD MODULE (program object).  It does not
>convert your source to V6, and it will not give you all the
>optimizations of V6.  Your biggest payback is if you upgrade your CPU,
>then you can run your load modules through ABO and get some of the
>optimization provided by the new hardware.

I know this was a while ago, but I wanted to comment on the reference to
'convert your source to V6'.  In general, all programs compile cleanly with
COBOL V5 and COBOL V6.  If there are problems, and about 25% of customers
have had some, they are caused by invalid data in the COBOL data items at
runtime.  To fix this, users have to change the data, or the data entry
panels, or use new compiler options to tolerate the bad data.  There is
no way to do source conversion to migrate to COBOL V5/V6, like there
was years ago for OS/VS COBOL to anything newer.
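
To make the shape of the invalid-data problem concrete, a minimal invented 
fragment (the field and paragraph names are mine, and the option name is from 
memory, worth verifying against the current documentation):

       01  W-REC.
           05  W-AMOUNT  PIC S9(7)V99.
      *    If the record was built outside COBOL, W-AMOUNT may
      *    arrive as spaces rather than valid zoned-decimal digits.
      *    Code generated by V4 may happen to carry on regardless;
      *    V5/V6 generate different instruction sequences and may
      *    abend (S0C7) or compare differently.  ZONEDATA(MIG) is
      *    one of the tolerance options meant above.
           IF W-AMOUNT = ZERO
               PERFORM HANDLE-EMPTY-AMOUNT
           END-IF

Changing the data, or the data entry panels, remains the real fix.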

Cheers,
TomR  >> COBOL is the Language of the Future! <<

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-17 Thread Bill Woodger
There are a number of different items within this topic.



"Just a recompile", where the object is expected to be identical, 
regression-tested?

To my mind, no. It should be verified as identical. If it is not, the reason 
should be identified and what follows depends on what is found out.

I would not expect regression-test cases to necessarily notice that the object 
is different if subtly so.


"Slap on OPT at the last moment before going to Production". I'd never suggest 
this, so for me it is a hypothetical question which does, however, shed some 
light on the "to what extent an ABO'd program should be tested".

For me, changing any compile option at the moment of going to Production 
invalidates all the testing up to that point. So if you're going to Production, 
as in at that time, then no, I don't see what you could do to test it. It's 
like a contradiction in terms, but if you are really going to Production and 
really changing an option which affects the generated code, then I don't see 
how you can test it (effectively you go the "Production Fix" route).



"ABO changes, can they be verified": there is IBM advice on testing of ABO.For 
now, until I can transcribe (or find it written) I'll paraphrase it as "become 
confident with the product with your early testing, then test for performance". 
The "test for performance" suggestion is interesting. Is there an implication 
that sometimes things regress in performance? Oh no, another subject added to 
the already complex mix.


Enterprise COBOL and OPT. Does OPT change the results produced by a program? 
Given data which conforms to its PICture, COBOL which conforms to the Language 
Reference, the impact of options as described in the Programming Guide, and, 
perhaps, things in the Migration Guide: no, OPT does not change how such a 
program works. If it happens to do so, then it is time to "reach out" to IBM.

As an example, if you write a program which only uses literals or "constants", 
OPT (at least up to 4.2) will just work out the result at compile time and not 
bother with executing the code (I don't know if there is a limit as to the size 
of program for which it will do this). I suspect V5+ does this, even more so.
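
A sketch of the kind of program meant (invented, and whether a given compiler 
level folds it completely is exactly the sort of thing to check for yourself):

       IDENTIFICATION DIVISION.
       PROGRAM-ID. CONSTOPT.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  W-RESULT    PIC 9(4).
       PROCEDURE DIVISION.
      *    Every operand is a literal, so an optimizing compile can
      *    do the arithmetic at compile time and simply store 384;
      *    none of the "work" need survive into the generated code.
           COMPUTE W-RESULT = (100 + 28) * 3
           DISPLAY W-RESULT
           GOBACK.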


Might there be differences in output with non-conforming data or a 
non-conforming program, per level of OPT? For sure. Since there is no "defined" 
behaviour for those cases, it would be... difficult... for OPT to be able to 
second-guess what was considered to be the correct result.


"Can all programs which could ever exist in this or any potential Universe be 
verified automatically". No. But, so what?


DSPLAY "A MESSAGE"
GOBACK
.

Can that program (with minimal necessary stuff at the beginning) be verified? 
Yes. (I hate to think of the disavowals of that, so hopefully in a separate 
topic, this one is twisted enough already).

Can something more complex, expressed in a language more suitable to automatic 
verification, be automatically verified, as it is transformed, as being 
equivalent to its starting-point? The claim from ABO seems to be yes. It is 
their claim, not mine, and it is what underlies the quote (from the User Guide 
for ABO, although it may also appear in marketing material).

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-17 Thread Martin Packer

The advice I gave to anyone who would listen INSIDE IBM was:

Publish information on the kinds of transformations ABO does. That would
help build CONFIDENCE.

My advice to anyone using it, which echoes what's been said here is:

Test the ABO output to the extent you can.

Of course ABO might get a reputation for reliability or otherwise; Time
will tell.

Cheers, Martin

Sent from my iPad

Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number
741598.
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-16 Thread Peter Relson

No, that is not what I meant.

It goes back to this: "[ABO] ... produces a functionally equivalent 
executable program", which is a claim somewhere within the ABO site. OK, I 
can see a search-box at the top of my screen (sorry, "page"). It is in the 
User's Guide for ABO.

That is either some snake-oil marketing-speak, or something underlies it. 
I assumed the latter, and that now seems to be borne out by further 
research.

To me it amounts to "we can show that the program we produce, works in the 
same way as the program we started with, it just does it differently". 
This is a very different thing from the mythical program which can test 
any given program.


I cannot agree.  That sort of statement in marketing material that I know 
of (my knowledge of such material is admittedly very limited) amounts to 
"that is our intent; if that is not the case then we will consider that a 
bug that we may fix".


My interpretation is this: "If the program is written in such a way that 
it complies with what is explicitly documented for the version of 
Enterprise COBOL that the program was last compiled with, that 
documentation being the appropriate Language Reference, Programming Guide 
and Migration Guide, and that all the data referenced by the given program 
"complies with its PICture", then that will work in an identical manner 
with Enterprise COBOL V5+."


And that sounds to me like the intent of the product. And when the intent 
is not met, it is a bug that we may fix.

You seem to be looking for a guarantee of program correctness. If I am 
correct, a vendor (including IBM) cannot in ordinary circumstances offer 
that; they/we can offer only a warranty that defects will be dealt with.

Peter Relson
z/OS Core Technology Design


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-16 Thread Jesse 1 Robinson
The procedure Peter describes has clear benefits, but the ability to 
incorporate one CEC at a time into an existing complex is IMO something of a 
luxury. It means an entirely new separate set of cables, power connections, 
and replication of all other necessaries. It means new/changed connections for 
SNA and TCP/IP plus all the software definitions required to support them. Then 
at some later time the old CEC has to be extricated from the complex without 
disruption. 

If someone proposed this procedure in our shop, I would probably argue that the 
risk to operational stability would be greater than the risk of an old 
fashioned push-pull. Not to mention cost.

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-302-7535 Office
robin...@sce.com


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Farley, Peter x23353
Sent: Friday, October 14, 2016 10:00 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: (External):Re: ABO Automatic Binary Optimizer

I don’t know how other installations perform processor model upgrades, but our 
facilities manager moves in one new processor (CEC) at a time on a weekend 
during an extended maintenance window and activates only the development/QA 
LPAR's on it for at least a week (sometimes more) of "active use" of the new 
model before migrating production LPAR's to the new model.  Remaining CEC's are 
migrated in similar fashion over a period of months.

A "live" regression test, if you will, with only Development/QA LPAR's affected 
by any issues.  I suppose a similar procedure could work for ABO translations, 
but I suspect that hardware change is "different" from the company's 
perspective, not under full company control if you will.  Individual programs 
*are* under full company control and thus subject to the regression test rules.

I don’t know how we handle microcode updates, so I can't comment accurately on 
that, but I suspect it is done one CEC at a time with backout procedures in 
place in case of issues.

Peter

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Paul Gilmartin
Sent: Friday, October 14, 2016 12:31 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: ABO Automatic Binary Optimizer

On Fri, 14 Oct 2016 11:29:46 -0400, Farley, Peter x23353 wrote:

>Timothy,
>
>You missed two crucial issues:
>
>1. Auditors don't believe in "verification" and management requires audits to 
>pass.  IT does not control auditors (quite the reverse in fact).  And we lowly 
>programmers have no input to auditors at all.
>2. There is no existing independent verification tool for a company to use on 
>ABO's output.  And if someone creates one, it has to be from a company OTHER 
>than IBM so that IBM's ABO results are independently verifiable.
>
>"Smart" testing is of course a valid and desirable goal, but lacking an 
>existing *independent* verification tool there is no option but full 
>regression testing.  Manual verification is not reasonable or cost effective, 
>especially for very large programs and program suites.
>
>And again, I am not trashing ABO, which on its face is an amazing tool BUT it 
>changes object code.  Lacking independent automated verification, in any sane 
>definition of a program life cycle system that is a change that requires full 
>regression testing.
>
Do the above apply likewise to moving to a different processor model, or even 
to a microcode upgrade?


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-15 Thread Bill Woodger
Here's Tom Ross: 
https://www.ibm.com/developerworks/community/forums/html/topic?id=6d98d469-5088-41ec-8926-34e945443891=25

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-15 Thread Bill Woodger
Well, if so leaky, let's hear a few.

ABO does not know about PICtures. That is one of the limits to the 
optimisations available to it.

Strictly, it could intuit some things, but it can't, because of REDEFINES, 
large or small.
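
A tiny invented illustration of why:

       01  W-FIELD    PIC 9(8) VALUE 12345678.
      *    The redefinition makes the same eight bytes legitimately
      *    "any bytes at all" as well as "digits only".
       01  W-FIELD-X  REDEFINES W-FIELD PIC X(8).

From the object code alone, ABO can't safely assume the stricter PICture 
governs the storage.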

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-15 Thread Bill Woodger
Great. Now we've got PL/I and Assembler in the mix.

I do absolutely agree with Prino on "same everything throughout", at least 
after program testing. There are assorted (and growing) compiler options which 
should only live up to program testing (although there is not universal 
agreement).

Proving that grass is green doesn't mean that all grass is all green, or even 
green. We are, or were, talking about a particular grass, ABO, and in the 
particular, although highly complex in its task, it may not be entirely green. 
It may be variegated, and it may be black. If purple, get someone to say so.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-15 Thread Paul Gilmartin
On Sat, 15 Oct 2016 15:32:05 -0500, Bill Woodger wrote:
>
>My interpretation is this: "If the program is written in such a way that it 
>complies with what is explicitly documented for the version of Enterprise 
>COBOL that the program was last compiled with, that documentation being the 
>appropriate Language Reference, Programming Guide and Migration Guide, and 
>that all the data referenced by the given program "complies with its PICture", 
>then that will work in an identical manner with Enterprise COBOL V5+."
> 
That specification is as leaky as a sieve.  There's only slight protection in 
that
ABO is probably unaware of what the PICtures were.

Unless at OPT(0) the generated code verifies compliance with the PICtures, etc.
http://trevorjim.com/postels-law-is-not-for-you/

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-15 Thread Bill Woodger
"As to re-testing after recompile, if the resulting OBJ is the same, sure, no 
need. When can you expect that? Probably rarely, but that's only because there 
are often dates present in the output. So ignoring changes due to compile-date, 
changes in macros could affect things, so assume none of them. That brings you 
to something like application of a PTF. So are you compiling with the same PTF 
level of the compiler as before?"

In this discussion, it arose from this: "Any program change requires full 
regression testing, including "just a recompile"."

I'm not sure what you mean by "changes to macros" in this context, so I'll 
assume "copybooks".

By "just a recompile" I assumed "with no changes to anything". The reason I 
assumed that is because there are sites where, if program A CALLs program B, 
when program B is changed, you have to recompile program A. Seriously.

If you change a data-name in a copybook, and recompile the program, the source 
is different but the object is expected to be the same. If it is expected to be 
the same, and can be proven to be the same, why do a regression test?

I don't think anyone would count a PTF which actually affected a program as 
"just a recompile"; it is not the process, it is the expectation that the 
object is identical.

For thousands of different Mainframe sites there are tens (at least) of ways 
that things are done. Not all of them are good (which is subjective), (as) 
effective (as they could be), or efficient. You can be sure there will be 
various rules, old wives' tales, rumour, cant and "it's the way we've always 
done it" which underpin these things.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-15 Thread Paul Gilmartin
On Sat, 15 Oct 2016 22:50:11 +, Robert Prins wrote:
>
>Programs compiled with different optimization levels (and sometimes even other
>compiler options) are not the same program! Period. Full stop. End of story!
>
Many possibilities.  A race condition might be won by the wrong path when
optimized; the desired path when not optimized.  But beware of processor
upgrades.

I once had a program in a little-known language from a vendor other than IBM
which program checked when optimized; at the debug level it operated as I
had intended.  Reported.  Vendor replied (correctly) that I had (unwittingly)
used a construct clearly documented as undefined; unpredictable; NAPWAD.
(Not timing-dependent nor resource constrained.)  I seethed that the object
of debugging mode is to report as many or more errors, not fewer.  Vendor
remained adamant.  It had been:

STH ...   Removed when optimized
LH  ...   Removed when optimized
SLA ..,3  Fixed point overflow when optimized

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-15 Thread Bill Woodger
"I believe COBOL V5 stated that recompile would work for correct programs. I 
don't know if that statement is true or not, or what exactly is definitively 
meant by "correct", but I think that ABO's more conservative approach is 
expected to work even for programs that do not work upon recompile with COBOL 
V5. Of course once one error has been found in an implementation, most bets are 
off. "

My interpretation is this: "If the program is written in such a way that it 
complies with what is explicitly documented for the version of Enterprise COBOL 
that the program was last compiled with, that documentation being the 
appropriate Language Reference, Programming Guide and Migration Guide, and that 
all the data referenced by the given program "complies with its PICture", then 
that will work in an identical manner with Enterprise COBOL V5+."

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-15 Thread Paul Gilmartin
On Sat, 15 Oct 2016 10:44:06 -0400, Peter Relson wrote:
> 
>I believe COBOL V5 stated that recompile would work for correct programs. 
>I don't know if that statement is true or not, or what exactly is 
>definitively meant by "correct", but I think that ABO's more conservative 
>approach is expected to work even for programs that do not work upon 
>recompile with COBOL V5. Of course once one error has been found in an 
>implementation, most bets are off.
> 
When HLASM was introduced its documentation asserted that any
program that Assembler H assembled with no errors or warnings
would assemble identically with HLASM.  The refutation was trivial;
take any code with a construct undefined in H that HLASM assembled
differently.  That could even be leveraged into a Serious error in
HLASM.  IBM rapidly retracted the assertion.

I even have an ironic example: a program that HLASM assembles
differently with COMPAT(MACROCASE) but the same as H with
COMPAT(NOMACROCASE).

>... ignoring changes due to compile-date, ...
>
>If by "certified" you basically mean "proved to be correct", how many 
>realistic programs are ever provably correct (many non-realistic programs 
>could be)? Surely a lot *are* correct, but could you prove it? I suspect 
>that most software companies "warrant" (if an error is reported, it may be 
>fixed) rather than "certify". 
> 
There are claims of general techniques for proving program correctness.
I disagree.  If the preprocessor is Turing-complete, correctness is
undecidable.  If one hedges by introducing a limit on program
complexity, that too may be undecidable even as Kolmogorov complexity
is uncomputable.

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-15 Thread Robert Prins

On 2016-10-15 14:44, Peter Relson wrote:


So, if someone compiles their COBOL program without optimization and tests
it, then compiles it with optimization before putting it into production,
does it need to be tested again?

Well, it's an excellent question Tom, but needs to be directed to people
at sites that do that :-)

Pushed for an answer, I'd say "no". But, if you have, it ends up
being the same answer as for ABO, which is why you've posed the question.


I on the other hand, when pushed for an answer would say "yes". Even if
the optimizer in the compiler is 100% correct (not always the case),
better optimization may make assumptions that the code in question does
not happen to satisfy.


At some time in the 1990s, using OS PL/I V2.3.0, my then employer had one 
large program (at 10+K pure LOC it was their largest program) that had to be 
compiled OPT(0). It would not run (S0C1 or S0C4, I don't remember which) if it 
was compiled OPT(2).


Likewise, I've got a PL/I program that runs OK when compiled OPT(0) with the 
PL/I for Windows compiler, yet fails miserably when it's compiled OPT(3).


I've always been a very strong proponent of only testing programs that are 
compiled with exactly the same options as the one used for production compiles.


Programs compiled with different optimization levels (and sometimes even other 
compiler options) are not the same program! Period. Full stop. End of story!


Robert
--
Robert AH Prins
robert(a)prino(d)org
No programming (yet) @ http://prino.neocities.org/

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-15 Thread Bill Woodger
Splitting up the replies.

"If by "certified" you basically mean "proved to be correct", how many
realistic programs are ever provably correct (many non-realistic programs
could be)? Surely a lot *are* correct, but could you prove it? I suspect
that most software companies "warrant" (if an error is reported, it may be
fixed) rather than "certify". "

No, that is not what I meant.

It goes back to this: "[ABO] ... produces a functionally equivalent executable 
program", which is a claim somewhere within the ABO site. OK, I can see a 
search-box at the top of my screen (sorry, "page"). It is in the User's Guide 
for ABO.

That is either some snake-oil marketing-speak, or something underlies it. I 
assumed the latter, and that now seems to be borne out by further research.

To me it amounts to "we can show that the program we produce, works in the same 
way as the program we started with, it just does it differently". This is a 
very different thing from the mythical program which can test any given program.

If IBM were to approach one or more organisations which represent 
companies/organisations who provide Audit Rules for large computer systems, and 
were able to get some definitive statement on the techniques used by ABO that 
underpin the above statement, then it may allow an easier take-up and 
implementation of ABO for large organisations, who have to operate under any 
number of compliance/audit/legal requirements, plus rules from parent 
companies, for instance. 

The idea is "damaged" by that bug. I can guess where the loophole was, (only a 
guess) and hope that it can either be included within the verification, or, if 
not, that that type of optimization dropped altogether.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-15 Thread Peter Relson

So, if someone compiles their COBOL program without optimization and tests 
it, then compiles it with optimization before putting it into production, 
does it need to be tested again?

Well, it's an excellent question Tom, but needs to be directed to people 
at sites that do that :-)

Pushed for an answer, I'd say "no". But, if you have, it ends up 
being the same answer as for ABO, which is why you've posed the question.


I on the other hand, when pushed for an answer would say "yes". Even if 
the optimizer in the compiler is 100% correct (not always the case), 
better optimization may make assumptions that the code in question does 
not happen to satisfy.

That is similar to the following:
 

As with ABO, it's not OPT I'm afraid of, it is the potential of bad 
programs. 

I believe COBOL V5 stated that recompile would work for correct programs. 
I don't know if that statement is true or not, or what exactly is 
definitively meant by "correct", but I think that ABO's more conservative 
approach is expected to work even for programs that do not work upon 
recompile with COBOL V5. Of course once one error has been found in an 
implementation, most bets are off.

As to re-testing after recompile, if the resulting OBJ is the same, sure, 
no need. When can you expect that? Probably rarely, but that's only 
because there are often dates present in the output. So ignoring changes 
due to compile-date, changes in macros could affect things, so assume none 
of them. That brings you to something like application of a PTF. So are 
you compiling with the same PTF level of the compiler as before?


Wouldn't it be great if the American Institute of Auditors (I've invented 
the name, but I'm sure there is at least one such organisation) and IBM 
got together and certified ABO. 
 
If by "certified" you basically mean "proved to be correct", how many 
realistic programs are ever provably correct (many non-realistic programs 
could be)? Surely a lot *are* correct, but could you prove it? I suspect 
that most software companies "warrant" (if an error is reported, it may be 
fixed) rather than "certify". 


Peter Relson
z/OS Core Technology Design


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-14 Thread Bill Woodger
Peter, 

The RC=4 thing was not directed at you. I don't think anyone with "experience" 
(being counted as just turning up for years, or "one year of experience many 
times") would be contributing to this list.

I'm pointing out that "RC=4 is OK, get on with the test" is reasonably common. 
And with such systems, whether implied from one of the diagnostic messages, or 
simply associated with the sloppiness of the method, *I guarantee there are 
careless bugs in the system*. 

Someone sent me a program a couple of weeks ago. Numerous W and I diagnostics, 
including the interesting "hey, here's a SECTION, the prior paragraphs aren't 
within SECTIONs, watch it buddy, could be very bad" message.

Looking at the program, written originally in the early 80s and changed a 
number of times since, you could see that it had started out fairly well, and 
was still quite readable. When written, the full-stop/period in the PROCEDURE 
DIVISION had much greater significance; once compilers no longer required it, 
most, but not all, new statements were terminated with a full-stop/period (so 
they had become sloppy), and no END- scope terminators were used either. So you 
immediately expect to find an IF where the indentation shows one intention, and 
the code (presence or absence of full-stop/period) shows reality. Lo, there it 
was.
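
A minimal invented sketch of that shape of bug:

      *    The indentation says the last two PERFORMs belong to the
      *    IF; the full-stop after WRITE-RECORD says otherwise, so
      *    they are executed even when INPUT-VALID is false.
           IF INPUT-VALID
               MOVE TRANS-AMT TO OUT-AMT
               PERFORM WRITE-RECORD.
               PERFORM UPDATE-TOTALS
               PERFORM WRITE-AUDIT-TRAIL.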

They had become sloppy, and sloppy always costs.

Take the stupid PERFORM. Here's an example program:

   IDENTIFICATION DIVISION. 
   PROGRAM-ID. STAB59. 
   DATA DIVISION. 
   WORKING-STORAGE SECTION. 
   PROCEDURE DIVISION. 
   DISPLAY "GOING TO B" 
   PERFORM B THRU A 
   DISPLAY "COMING BACK FROM... A?" 
   GOBACK 
   . 
   A. 
   DISPLAY "IN A" 
   . 
   B. 
   DISPLAY "IN B" 
   . 

Compile that, and you get: 

 7  IGYPA3086-W   "PERFORM" start-of-range "B" follows the "PERFORM" 
end-of-range "A".  The "PERFORM" end-of-range may be unreachable.  The 
statement was processed as written. 
 

Run it and you get this:

GOING TO B  
  
IN B
  
IGZ0037S The flow of control in program STAB59 proceeded beyond the last line 
of the program. 
 From compile unit STAB59 at entry point STAB59 at compile unit offset 
+039E at entry offset +039E at 
 address 1210039E.  
  

Basically it fell off the end of the program, in this case. Of course if there 
was a paragraph after B, it would have fallen into that. If there was some 
'rithmetic it could have S0C7'd, or with other code caused some other abend. 
But it needn't have caused another system abend; it depends what the code it 
drops into does.

If it continues to wimble along and still falls off the end of the program, you 
get the user abend, as above.

However, if, below B, there is a PERFORM range which is still active... then as 
it falls, it will arrive at the termination of the active PERFORM.

And if later in the program control passes to A, then, as if by magic, control 
will fly to the position after PERFORM B THRU A.

Now, the diagnostic, only a W, is indicating you may have a problem. Anyone who 
reads that and leaves it is... only superseded by anyone who doesn't even read 
it because "RC=4 is OK".

Back to your REDEFINES in copybooks. I'll assume that "we are not the owner of 
the copybook, so can't change it". If that is the case, that is where auditors 
are your friends - play the "sloppy" story to them, and explain that it is 
unacceptable (to you) that such practice is acceptable from an audit point of 
view.

If that has no direct impact, and you have at least V4.2, you can use a 
"message exit" for the compiler (can be written in COBOL even) which "upgrades" 
the status of any W and I that you select. Then your programs won't compile. 
Then they'll have to change the copybook?

Or are we driven back to "if we change the copybook, we have to recompile and 
change, and test, 200 programs". Well, if you only fix those REDEFINES with 
FILLERs (explicit or implicit) then a) you don't *need* to recompile any 
program just for the copybook change b) if you do recompile anything/everything 
then the object code *should be identical* c) if proven identical (except for 
date/time of compile) then it is already as tested as the current version d) if 
not proved identical you know that something else is wrong and whatever that 
is, it is unexpected for your test cases.
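
For illustration, an invented copybook fragment of the kind of FILLER fix I 
have in mind:

       01  CUST-KEY          PIC X(20).
       01  CUST-KEY-PARTS    REDEFINES CUST-KEY.
           05  CUST-BRANCH   PIC X(4).
           05  CUST-NUMBER   PIC 9(8).
      *    The two fields above cover only 12 of the 20 bytes.  A
      *    trailing FILLER makes the redefinition whole without
      *    changing any offset or any generated code, so the
      *    recompiled object should be byte-for-byte identical
      *    (bar the compile date/time).
           05  FILLER        PIC X(8).

Which is exactly what makes a) through d) above workable.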

Now, I'd like to run that past an auditor if I was in a position to do so, even 
if only to become aware of their response.


Re: ABO Automatic Binary Optimizer

2016-10-14 Thread Bill Woodger
On Thu, 13 Oct 2016 16:44:42 -0400, Tony Harminc  wrote:

>On 13 October 2016 at 14:47, Bill Woodger  wrote:
>>
>> No, it doesn't turn the machine-code (ESA/370) into anything intermediate.
>
>Are you quite sure?
>

No, actually I'm not. It would be brilliant if it did, and if the intermediate 
code could be output (for the entire executable). Although the generated 
"p-code" for pre-ABO and post-ABO perhaps wouldn't/needn't be the same?

If ABO generates COBOL, I'll let you push me down a mountain.

Something Java-ie? You could surprise me. If you find out, or someone else 
reveals, how it is done, I'm not going to deny it :-)

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-14 Thread Norman Hollander on Desertwiz
You should request it from them.  If there is a change to the channel 
subsystem, and afterwards your jobs start running
slowly, you may want to see if it is I/O related, for example...

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Farley, Peter x23353
Sent: Friday, October 14, 2016 11:29 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: ABO Automatic Binary Optimizer

Thanks Norman, but as I am not a sysprog I am not involved in those types of 
changes.  I/we depend on our facilities management team to handle those issues.

Peter

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Norman Hollander on Desertwiz
Sent: Friday, October 14, 2016 1:45 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: ABO Automatic Binary Optimizer

You should request the Hardware Buckets for microcode updates.  Sometimes, 
there could be a necessary OS PTF to support/exploit new microcode.  Is there a 
chance that a microcode change could cause a production problem?
You know the answer...  Can the update be backed out?  Not always... 

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Farley, Peter x23353
Sent: Friday, October 14, 2016 10:00 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: ABO Automatic Binary Optimizer

I don’t know how other installations perform processor model upgrades, but our 
facilities manager moves in one new processor (CEC) at a time on a weekend 
during an extended maintenance window and activates only the development/QA 
LPAR's on it for at least a week (sometimes more) of "active use" of the new 
model before migrating production LPAR's to the new model.  Remaining CEC's are 
migrated in similar fashion over a period of months.

A "live" regression test, if you will, with only Development/QA LPAR's affected 
by any issues.  I suppose a similar procedure could work for ABO translations, 
but I suspect that hardware change is "different" from the company's 
perspective, not under full company control if you will.  Individual programs 
*are* under full company control and thus subject to the regression test rules.

I don’t know how we handle microcode updates, so I can't comment accurately on 
that, but I suspect it is done one CEC at a time with backout procedures in 
place in case of issues.

Peter

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Paul Gilmartin
Sent: Friday, October 14, 2016 12:31 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: ABO Automatic Binary Optimizer

On Fri, 14 Oct 2016 11:29:46 -0400, Farley, Peter x23353 wrote:

>Timothy,
>
>You missed two crucial issues:
>
>1. Auditors don't believe in "verification" and management requires audits to 
>pass.  IT does not control auditors (quite the reverse in fact).  And we lowly 
>programmers have no input to auditors at all.
>2. There is no existing independent verification tool for a company to use on 
>ABO's output.  And if someone creates one, it has to be from a company OTHER 
>than IBM so that IBM's ABO results are independently verifiable.
>
>"Smart" testing is of course a valid and desirable goal, but lacking an 
>existing *independent* verification tool there is no option but full 
>regression testing.  Manual verification is not reasonable or cost effective, 
>especially for very large programs and program suites.
>
>And again, I am not trashing ABO, which on its face is an amazing tool BUT it 
>changes object code.  Lacking independent automated verification, in any sane 
>definition of a program life cycle system that is a change that requires full 
>regression testing.
>
Do the above apply likewise to moving to a different processor model, or even 
to a microcode upgrade?



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-14 Thread Farley, Peter x23353
Thanks Norman, but as I am not a sysprog I am not involved in those types of 
changes.  I/we depend on our facilities management team to handle those issues.

Peter

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Norman Hollander on Desertwiz
Sent: Friday, October 14, 2016 1:45 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: ABO Automatic Binary Optimizer

You should request the Hardware Buckets for microcode updates.  Sometimes, 
there could be a necessary OS PTF to support/exploit new microcode.  Is there a 
chance that a microcode change could cause a production problem?
You know the answer...  Can the update be backed out?  Not always... 

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Farley, Peter x23353
Sent: Friday, October 14, 2016 10:00 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: ABO Automatic Binary Optimizer

I don’t know how other installations perform processor model upgrades, but our 
facilities manager moves in one new processor (CEC) at a time on a weekend 
during an extended maintenance window and activates only the development/QA 
LPAR's on it for at least a week (sometimes more) of "active use" of the new 
model before migrating production LPAR's to the new model.  Remaining CEC's are 
migrated in similar fashion over a period of months.

A "live" regression test, if you will, with only Development/QA LPAR's affected 
by any issues.  I suppose a similar procedure could work for ABO translations, 
but I suspect that hardware change is "different" from the company's 
perspective, not under full company control if you will.  Individual programs 
*are* under full company control and thus subject to the regression test rules.

I don’t know how we handle microcode updates, so I can't comment accurately on 
that, but I suspect it is done one CEC at a time with backout procedures in 
place in case of issues.

Peter

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Paul Gilmartin
Sent: Friday, October 14, 2016 12:31 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: ABO Automatic Binary Optimizer

On Fri, 14 Oct 2016 11:29:46 -0400, Farley, Peter x23353 wrote:

>Timothy,
>
>You missed two crucial issues:
>
>1. Auditors don't believe in "verification" and management requires audits to 
>pass.  IT does not control auditors (quite the reverse in fact).  And we lowly 
>programmers have no input to auditors at all.
>2. There is no existing independent verification tool for a company to use on 
>ABO's output.  And if someone creates one, it has to be from a company OTHER 
>than IBM so that IBM's ABO results are independently verifiable.
>
>"Smart" testing is of course a valid and desirable goal, but lacking an 
>existing *independent* verification tool there is no option but full 
>regression testing.  Manual verification is not reasonable or cost effective, 
>especially for very large programs and program suites.
>
>And again, I am not trashing ABO, which on its face is an amazing tool BUT it 
>changes object code.  Lacking independent automated verification, in any sane 
>definition of a program life cycle system that is a change that requires full 
>regression testing.
>
Do the above apply likewise to moving to a different processor model, or even 
to a microcode upgrade?



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-14 Thread Norman Hollander on Desertwiz
You should request the Hardware Buckets for microcode updates.  Sometimes, 
there could be a necessary OS PTF to support/exploit new microcode.  Is there a 
chance that a microcode change could cause a production problem?
You know the answer...  Can the update be backed out?  Not always... 

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Farley, Peter x23353
Sent: Friday, October 14, 2016 10:00 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: ABO Automatic Binary Optimizer

I don’t know how other installations perform processor model upgrades, but our 
facilities manager moves in one new processor (CEC) at a time on a weekend 
during an extended maintenance window and activates only the development/QA 
LPAR's on it for at least a week (sometimes more) of "active use" of the new 
model before migrating production LPAR's to the new model.  Remaining CEC's are 
migrated in similar fashion over a period of months.

A "live" regression test, if you will, with only Development/QA LPAR's affected 
by any issues.  I suppose a similar procedure could work for ABO translations, 
but I suspect that hardware change is "different" from the company's 
perspective, not under full company control if you will.  Individual programs 
*are* under full company control and thus subject to the regression test rules.

I don’t know how we handle microcode updates, so I can't comment accurately on 
that, but I suspect it is done one CEC at a time with backout procedures in 
place in case of issues.

Peter

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Paul Gilmartin
Sent: Friday, October 14, 2016 12:31 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: ABO Automatic Binary Optimizer

On Fri, 14 Oct 2016 11:29:46 -0400, Farley, Peter x23353 wrote:

>Timothy,
>
>You missed two crucial issues:
>
>1. Auditors don't believe in "verification" and management requires audits to 
>pass.  IT does not control auditors (quite the reverse in fact).  And we lowly 
>programmers have no input to auditors at all.
>2. There is no existing independent verification tool for a company to use on 
>ABO's output.  And if someone creates one, it has to be from a company OTHER 
>than IBM so that IBM's ABO results are independently verifiable.
>
>"Smart" testing is of course a valid and desirable goal, but lacking an 
>existing *independent* verification tool there is no option but full 
>regression testing.  Manual verification is not reasonable or cost effective, 
>especially for very large programs and program suites.
>
>And again, I am not trashing ABO, which on its face is an amazing tool BUT it 
>changes object code.  Lacking independent automated verification, in any sane 
>definition of a program life cycle system that is a change that requires full 
>regression testing.
>
Do the above apply likewise to moving to a different processor model, or even 
to a microcode upgrade?



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-14 Thread Tony Harminc
On 14 October 2016 at 02:30, Timothy Sipples  wrote:

> No, not optimistic. Mere fact. Sun Microsystems made Java 1.0 generally
> available for download on January 23, 1996, for the Windows 95, Windows NT,
> and Solaris operating systems (three different operating systems across two
> different processor architectures). That was over two decades ago.

We're not debating the existence of Java in 1996. What you said that I
disagree with is this:

>For perspective, for over two decades (!) Java has
> compiled/compiles bytecode *every time* a Java class is first instantiated.
> The resulting native code is model optimized depending on the JVM release
> level's maximum model optimization capabilities or the actual machine
> model, whichever is lower.

This just isn't correct. The Java made available by Sun in 1996 was a
pure interpreter. They wrote C code to implement a JVM, and the
related (and in fact much larger) baggage of class loaders and such.
This C code could be compiled for just about any architecture, but it
did no converting of JVM bytecodes into native instructions. It was a
lot like the many interpreters that existed for decades before, such
as UCSD Pascal's P-code, and a number of offerings from IBM and many
others (APL, CPS-BASIC, and CPS-PL/I), TSO classic CLISTs, Rexx, and
more. In pass 1 source code is converted into some kind of internal
representation, and in pass 2 that representation is then interpreted
by native code. What the intermediate form looks like with respect to
the source language differs. Either it's a lot like the source,
cleaned up for easier machine processing (CLIST, Rexx, most BASICs),
or it's more of an easy-to-interpret virtual machine (UCSD, JVM), but
by definition it's not the native instruction set of the eventual
target machine. Sun's JVM, as documented in the 1996 book, did define
some pseudo-ops for performance reasons, but they were entirely
optional and in no way can be called compiling (JIT or otherwise).
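
A toy of that second style, the easy-to-interpret virtual machine, sketched in 
Python; the opcodes are invented and nothing here is JVM-specific:

# "Pass 2" of the scheme described above: a tiny stack-machine
# interpreter for an invented bytecode (UCSD/JVM style, opcodes made up).
def run(code):
    stack, pc = [], 0
    while pc < len(code):
        op = code[pc]; pc += 1
        if op == "PUSH":
            stack.append(code[pc]); pc += 1
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "PRINT":
            print(stack.pop())
        elif op == "HALT":
            break
    return stack

# "Pass 1" would have produced this from source like: print 2 + 3
run(["PUSH", 2, "PUSH", 3, "ADD", "PRINT", "HALT"])

The point is that the intermediate form is trivial to decode, but by design it 
is not the native instruction set of any real machine.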

Back in 1996, while I'm sure there were bright people thinking about
JIT compilation of JVM bytecodes into existing instruction sets, the
more general musing (in the Sun JVM book, even) was about building
hardware to directly execute -- rather than interpret --  JVM
bytecodes. That didn't happen in any commercial way to my knowledge.

> Java isn't the earliest example of a bytecode-to-native technology. IBM's
> Technology Independent Machine Interface (TIMI) proved the concept much
> earlier. TIMI traces its roots to the IBM System/38 introduced in August,
> 1979.

I'd say its roots go much further back, to the failed Future System (FS) 
that Lynn Wheeler has mentioned here many times.

> The latest TIMI technologies are found in the IBM i platform, another
> platform deservedly famous as a stable, reliable home for applications.
> Before program execution (just before, if necessary) TIMI automatically
> translates the bytecode to native code tailored for that particular
> hardware model environment, persists it, and runs it.

You're describing the Original Programming Model (OPM), which has
largely been deprecated in recent years in favour of compiling
directly into native (p-series) instructions, probably via some quite
different intermediate representation.

> TIMI is arguably even closer to ABO conceptually than Java is.

Sure. It wouldn't surprise me at all to find that the intermediate
representation used by ABO is the very same one used on the IBM i.
Certainly ABO and the IBM i compilers all come from the same lab.

Tony H.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-14 Thread Farley, Peter x23353
I don’t know how other installations perform processor model upgrades, but our 
facilities manager moves in one new processor (CEC) at a time on a weekend 
during an extended maintenance window and activates only the development/QA 
LPAR's on it for at least a week (sometimes more) of "active use" of the new 
model before migrating production LPAR's to the new model.  Remaining CEC's are 
migrated in similar fashion over a period of months.

A "live" regression test, if you will, with only Development/QA LPAR's affected 
by any issues.  I suppose a similar procedure could work for ABO translations, 
but I suspect that hardware change is "different" from the company's 
perspective, not under full company control if you will.  Individual programs 
*are* under full company control and thus subject to the regression test rules.

I don’t know how we handle microcode updates, so I can't comment accurately on 
that, but I suspect it is done one CEC at a time with backout procedures in 
place in case of issues.

Peter

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Paul Gilmartin
Sent: Friday, October 14, 2016 12:31 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: ABO Automatic Binary Optimizer

On Fri, 14 Oct 2016 11:29:46 -0400, Farley, Peter x23353 wrote:

>Timothy,
>
>You missed two crucial issues:
>
>1. Auditors don't believe in "verification" and management requires audits to 
>pass.  IT does not control auditors (quite the reverse in fact).  And we lowly 
>programmers have no input to auditors at all.
>2. There is no existing independent verification tool for a company to use on 
>ABO's output.  And if someone creates one, it has to be from a company OTHER 
>than IBM so that IBM's ABO results are independently verifiable.
>
>"Smart" testing is of course a valid and desirable goal, but lacking an 
>existing *independent* verification tool there is no option but full 
>regression testing.  Manual verification is not reasonable or cost effective, 
>especially for very large programs and program suites.
>
>And again, I am not trashing ABO, which on its face is an amazing tool BUT it 
>changes object code.  Lacking independent automated verification, in any sane 
>definition of a program life cycle system that is a change that requires full 
>regression testing.
>
Do the above apply likewise to moving to a different processor model, or even to
a microcode upgrade?



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-14 Thread Norman Hollander on Desertwiz
Back in my banking days, we Sysprogs worked with OPs and APPs to create 
several Jobstreams to create a group of critical/typical transactions to 
exercise various applications (such as Loan Origination, ATM, Retail, and 
Financial, along with File Transfer, Statement Printing, etc.).  These were 
used for any major changes, like Processor changes, Microcode, DASD changes, 
Channel changes, Printer mods (we had 3900 Duplex printers), OS changes, 
Middleware changes, and Application changes.  These were used as a vehicle to 
exercise major components, and were a good indicator that a change appeared 
successful.  If you've ever been through an Operational Audit, you know that 
the "process" must be in place for changes, and you must routinely exercise 
them.  While all changes might not be subjected to this process, a robust 
Change Management process will determine the need, and will satisfy most 
auditors.  Satisfying auditors is not really a SysProg daily job, but it is 
something often required.  If your shop does not need such time consumers, 
consider yourself lucky.  AFAIK, many industries still require these today.  

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Paul Gilmartin
Sent: Friday, October 14, 2016 9:31 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: ABO Automatic Binary Optimizer

On Fri, 14 Oct 2016 11:29:46 -0400, Farley, Peter x23353 wrote:

>Timothy,
>
>You missed two crucial issues:
>
>1. Auditors don't believe in "verification" and management requires audits to 
>pass.  IT does not control auditors (quite the reverse in fact).  And we lowly 
>programmers have no input to auditors at all.
>2. There is no existing independent verification tool for a company to use on 
>ABO's output.  And if someone creates one, it has to be from a company OTHER 
>than IBM so that IBM's ABO results are independently verifiable.
>
>"Smart" testing is of course a valid and desirable goal, but lacking an 
>existing *independent* verification tool there is no option but full 
>regression testing.  Manual verification is not reasonable or cost effective, 
>especially for very large programs and program suites.
>
>And again, I am not trashing ABO, which on its face is an amazing tool BUT it 
>changes object code.  Lacking independent automated verification, in any sane 
>definition of a program life cycle system that is a change that requires full 
>regression testing.
>
Do the above apply likewise to moving to a different processor model, or even 
to a microcode upgrade?

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-14 Thread Paul Gilmartin
On Fri, 14 Oct 2016 11:29:46 -0400, Farley, Peter x23353 wrote:

>Timothy,
>
>You missed two crucial issues:
>
>1. Auditors don't believe in "verification" and management requires audits to 
>pass.  IT does not control auditors (quite the reverse in fact).  And we lowly 
>programmers have no input to auditors at all.
>2. There is no existing independent verification tool for a company to use on 
>ABO's output.  And if someone creates one, it has to be from a company OTHER 
>than IBM so that IBM's ABO results are independently verifiable.
>
>"Smart" testing is of course a valid and desirable goal, but lacking an 
>existing *independent* verification tool there is no option but full 
>regression testing.  Manual verification is not reasonable or cost effective, 
>especially for very large programs and program suites.
>
>And again, I am not trashing ABO, which on its face is an amazing tool BUT it 
>changes object code.  Lacking independent automated verification, in any sane 
>definition of a program life cycle system that is a change that requires full 
>regression testing.
>
Do the above apply likewise to moving to a different processor model, or even to
a microcode upgrade?

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-14 Thread Anne & Lynn Wheeler
sipp...@sg.ibm.com (Timothy Sipples) writes:
> No, not optimistic. Mere fact. Sun Microsystems made Java 1.0
> generally available for download on January 23, 1996, for the Windows
> 95, Windows NT, and Solaris operating systems (three different
> operating systems across two different processor architectures). That
> was over two decades ago.

re:
http://manana.garlic.com/~lynn/2016f.html#91 ABO Automatic Binary Optimizer
http://manana.garlic.com/~lynn/2016f.html#92 ABO Automatic Binary Optimizer

trivia: the general manager of the sun business group responsible for java
had formerly been at IBM Los Gatos lab ... and was one of two people
responsible for the original mainframe pascal (I got invited to the JAVA
announce).

In the early 90s, object languages were all the rage and (at least) both
Apple and Sun were doing new operating systems (Apple's Pink and Sun's
SPRING).

Before SUN abandoned SPRING, I was asked if I would be interested in
coming onboard and bringing SPRING to commercial quality for release (I
did some review and then declined). Note the SPRING and GREEN (JAVA)
people claimed that there was no overlap between the two ... although
from SPRING: A Client-Side Stub Interpreter

We have built a research operating system in which all services are
presented through interfaces described by an interface description
language. The system consists of a micro-kernel that supports a small
number of these interfaces, and a large number of interfaces that are
implemented by user-level code. A typical service implements one or
more interfaces, but is a client of many other interfaces that are
implemented elsewhere in the system. We have an interface compiler
that generates client-side and service-side stubs to deliver calls
from clients to services providing location transparency if the client
and server are in different address spaces. The code for client-side
stubs was occupying a large amount of the text space on our clients,
so a stub interpreter was written to replace the client-side stub
methods. The result was that we traded 125k bytes of stub code for 13k
bytes of stub descriptions and 4k bytes of stub interpreter. This
paper describes the stub interpreter, the stub descriptions, and
discusses some alternatives.

... snip ...

note that I've periodically claimed that the father of 801/RISC had gone
to the opposite extreme of the (failed) Future System effort. In the
late 70s, there was an effort to replace a myriad of internal IBM
microprocessors, all with 801/RISC (Iliad) ... controllers, low &
mid-range 370, AS/400 (follow-on to S/38). The 4361 & 4381 (follow-ons to
4331 and 4341) were originally going to be Iliad microprocessors. For
the 4361/4381 Iliad, they were looking at, in addition to straight
interpretation of 370 (like early generations of 360 & 370), a JIT
(just-in-time) compiler that turned snippets of 370 into 801 code. In the
early 70s, I had written a PL/I program that analyzed 370 assembler
programs, creating a high-level abstraction of the instructions and
program flow ... and got asked to spend some time talking to the
Iliad/JIT people. Note that for various reasons, these Iliad efforts
failed and all reverted to traditional CISC processors (and some of the
RISC/801 engineers left and started showing up at other companies
working on RISC efforts).

In the late 80s some 370 emulation efforts started which morphed into
Hercules and other offerings. At least one of the commercial 370
emulator offerings implemented JIT (370->native) on-the-fly (for intel &
sparc) for high-use 370 code snippets.
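
That on-the-fly scheme in miniature: interpret everything, count executions, 
and swap in a translated version of any block that gets hot. A Python sketch 
with an invented threshold, where "translation" just pre-binds the block into 
a closure rather than emitting real native code:

# Profile-then-translate loop in the spirit of those emulators:
# interpret, count, and "compile" any block crossing a made-up threshold.
HOT_THRESHOLD = 3

def make_native(block):
    # Stand-in for real 370->native translation.
    return lambda state: [op(state) for op in block]

def execute(blocks, state, trace):
    counts = dict.fromkeys(blocks, 0)
    compiled = {}
    for name in trace:
        counts[name] += 1
        if name in compiled:
            compiled[name](state)            # fast path
        else:
            for op in blocks[name]:          # interpreted path
                op(state)
            if counts[name] >= HOT_THRESHOLD:
                compiled[name] = make_native(blocks[name])
    return state

blocks = {"loop": [lambda s: s.__setitem__("r1", s["r1"] + 1)]}
print(execute(blocks, {"r1": 0}, ["loop"] * 10))  # {'r1': 10}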

During the 90s, there was a lot about the RISC throughput performance
advantage over I86. However, starting about two decades ago, I86
processor implementations started doing hardware translation of I86
instructions to a series of risc micro-ops for actual execution ... which
has contributed to largely closing the throughput difference between I86
and RISC processors.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-14 Thread Farley, Peter x23353
Timothy,

You missed two crucial issues:

1. Auditors don't believe in "verification" and management requires audits to 
pass.  IT does not control auditors (quite the reverse in fact).  And we lowly 
programmers have no input to auditors at all.
2. There is no existing independent verification tool for a company to use on 
ABO's output.  And if someone creates one, it has to be from a company OTHER 
than IBM so that IBM's ABO results are independently verifiable.

"Smart" testing is of course a valid and desirable goal, but lacking an 
existing *independent* verification tool there is no option but full regression 
testing.  Manual verification is not reasonable or cost effective, especially 
for very large programs and program suites.

And again, I am not trashing ABO, which on its face is an amazing tool BUT it 
changes object code.  Lacking independent automated verification, in any sane 
definition of a program life cycle system that is a change that requires full 
regression testing.

Peter

P.S. -- To Bill Woodger:  I don't know how other COBOL programmers work, but I 
*always* review the compiler warnings, and strive to make my compilations 
warning-free.  I don't always succeed (data area copy books that define smaller 
areas over larger ones and debugging paragraphs that are not commented out are 
particular nuisances), but I do try.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Timothy Sipples
Sent: Friday, October 14, 2016 2:30 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: ABO Automatic Binary Optimizer


With all that said, one has to be smart about when, where, how, and how
much to test. Bill Woodger reminded me of an important fact, that if you're
not smart about testing and you're just running "big" tests over and over
to tick checkboxes, even when it makes little or no sense, then you're not
actually testing well either. You're very likely missing genuine risks.
Literally nobody wins if you've got an expensive, bloated testing regime
that isn't actually testing what you should be testing in 2016 (and
beyond). There are also many cases when business users simply throw up
their hands and declare "No thank you; I can't afford to wait 68 days for
your testing program to complete," and they shop elsewhere for their IT
services.

My friends and colleagues, we must always be *reasonable* and adaptable,
else we're lost at sea. And it's simply unreasonable to treat ABO
optimizations the same for testing purposes as source code changes and
recompiles. The source code doesn't change! The correct ABO testing effort
answer should be something more than zero and something less than what's
ordinarily advisable for a "typical" source code change/recompile trip.
Please consult with ABO technical experts to help decide where precisely
that "in between" should be given your particular environment and
applications.


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-14 Thread Timothy Sipples
Tony Harminc wrote:
>That seems a little, uh, optimistic. The Java Programming Language
>book, and the corresponding Java Virtual Machine Specification, first
>editions, were both published in 1996.

No, not optimistic. Mere fact. Sun Microsystems made Java 1.0 generally
available for download on January 23, 1996, for the Windows 95, Windows NT,
and Solaris operating systems (three different operating systems across two
different processor architectures). That was over two decades ago.

http://web.archive.org/web/20080205101616/http://www.sun.com/smi/Press/sunflash/1996-01/sunflash.960123.10561.xml

I was not even counting Sun's "alpha" and "beta" release timeline, but some
would.

Java isn't the earliest example of a bytecode-to-native technology. IBM's
Technology Independent Machine Interface (TIMI) proved the concept much
earlier. TIMI traces its roots to the IBM System/38 introduced in August,
1979. The latest TIMI technologies are found in the IBM i platform, another
platform deservedly famous as a stable, reliable home for applications.
Before program execution (just before, if necessary) TIMI automatically
translates the bytecode to native code tailored for that particular
hardware model environment, persists it, and runs it. TIMI is arguably even
closer to ABO conceptually than Java is.

Another analog to ABO is the technology that Transitive Corporation (a
company IBM acquired) developed. Apple's Mac OS X 10.4 ("Tiger") for Intel
X86 introduced "Rosetta," a translation technology. Rosetta took Mac OS X
applications compiled for PowerPC and translated them, at program load, to
X86 instructions for execution on Apple's then new Intel-based Macintoshes.
It worked extremely well. Apple removed Rosetta from Mac OS X 10.7 some
years later, after developers had recompiled their applications to ship X86
versions. For a time IBM offered PowerVM Lx86 for IBM Power Servers, a
Transitive technology that performed the reverse translation (X86 to
Power).

There are many well proven technologies that parallel ABO. Java, TIMI,
Rosetta, and PowerVM Lx86 are four examples.

With all that said, one has to be smart about when, where, how, and how
much to test. Bill Woodger reminded me of an important fact, that if you're
not smart about testing and you're just running "big" tests over and over
to tick checkboxes, even when it makes little or no sense, then you're not
actually testing well either. You're very likely missing genuine risks.
Literally nobody wins if you've got an expensive, bloated testing regime
that isn't actually testing what you should be testing in 2016 (and
beyond). There are also many cases when business users simply throw up
their hands and declare "No thank you; I can't afford to wait 68 days for
your testing program to complete," and they shop elsewhere for their IT
services.

My friends and colleagues, we must always be *reasonable* and adaptable,
else we're lost at sea. And it's simply unreasonable to treat ABO
optimizations the same for testing purposes as source code changes and
recompiles. The source code doesn't change! The correct ABO testing effort
answer should be something more than zero and something less than what's
ordinarily advisable for a "typical" source code change/recompile trip.
Please consult with ABO technical experts to help decide where precisely
that "in between" should be given your particular environment and
applications.


Timothy Sipples
IT Architect Executive, Industry Solutions, IBM z Systems, AP/GCG/MEA
E-Mail: sipp...@sg.ibm.com

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-13 Thread Bill Woodger
Peter,

You started with this: "Any program change requires full regression testing, 
including "just a recompile"." I'm saying that paying "lip-service" to audit 
requirements, and not confirming that the programs are exactly the same, is 
heading (potentially) for exactly what you don't want. If the program is 
supposed to be identical ("just a recompile") then, for me, "testing" is 
proving that it is identical.

Fine, if the auditors don't accept that an identical program produces an 
identical outcome, you are stymied. But I for sure would not just do 
regression-tests when the program is supposed to be identical.

"ABO is different, the executable code changes. The changes are can be 
"reconciled", through the listing. An automatic verification is going to give 
way better results than any regression-test. Yes, for sure, that is a harder 
argument with an auditor, but even if the regression test is still forced, I'd 
go with the verification every time to "prove" the system, and, yes, that again 
means the regression-test (applied for the entire process) is again degraded."

Again, I'm saying that independent verification will give you better than your 
regression-tests. If the auditors still want regression-tests, I'm just saying 
I'd do both, and the one I'd rely on for finding any theoretical potential 
problem is not the regression-tests.

I'm not sure how you come to your conclusion about me when, as far as I'm 
concerned, it provides for a more stable Production system. To summarise: 
automate consumption and verification of the audit trail which ABO provides in 
its listing; talk to the auditors; if it is not possible to win the argument 
with the auditors (and I've both won and lost arguments with auditors) then you 
have to follow the audit procedures laid down *in addition to verifying beyond 
what is supplied by your regression tests*.

On top of that, from earlier, where there are "known problems" and "flaky 
programs" take extra care, and perhaps exclude them from ABO. Plus the "Bust 
ABO and You Win a T-Shirt Party"-type thing.

Now, wouldn't it be great if the American Institute of Auditors (I've invented 
the name, but I'm sure there is at least one such organisation) and IBM got 
together and certified ABO. Probably not going to happen.

Perhaps more feasible is an RFE to make the ABO output more readily 
"consumable" by automation (although I don't know that it is difficult now).

As to "rolling your own", that's been IBM's response to at least one RFE 
(unrelated to this) I voted for. "RFE: How about you add this functionality to 
SDSF so everyone doesn't have to roll-their own with the REXX/SDSS?" Response 
"Rejected, everyone can roll their own with REXX/SDSF".

For the ABO, yes, it sounds like you'll be regression-testing if you use it. 
Those costs have to be factored in, but such is life. Your regression-tests 
will make the auditors (and those who consume audit reports, which is lots of 
important people, some of whom could theoretically close you down) happy. Keep 
an eye on the ABO fix list.

I don't think your systems will be worse for the process (apart from the 
potential "morale" issue of doing regression just for the sake of regression).

With the "just a recompile" you are dead wrong. Mitigated by your processes, 
almost certainly. But if a program that is supposed to be identical is not, it 
can quite easily pass your regression and bite you in tender portions. If it is 
supposed to be the same, *check it is the same*. I seriously can't imagine any 
other reasonable way to do it.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-13 Thread Charles Mills
It is a cinch that Java was designed if not from the get-go then certainly from 
infancy for JIT compiling and so forth. I am sure the ABO was far from the 
minds of the authors of COBOL II.

Charles

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Tony Harminc
Sent: Thursday, October 13, 2016 1:45 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: ABO Automatic Binary Optimizer

On 13 October 2016 at 14:47, Bill Woodger <bill.wood...@gmail.com> wrote:
>
> No, it doesn't turn the machine-code (ESA/370) into anything intermediate.

Are you quite sure?

> Yes, it knows something of the internals, and yes it knows things it can and 
> can't do with that knowledge.
>
> "There is much more to go wrong in the ABO process"
>
> Why, and with what supporting information?

As I said, the JVM class file format is extremely well documented, and was 
designed to be interpreted and verified. The output of various old COBOL 
compilers much less so. Well, no, I don't have the negative supporting evidence 
(lack of doc on the COBOL object code), but I'm willing to bet...

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-13 Thread Tony Harminc
On 13 October 2016 at 14:47, Bill Woodger  wrote:
>
> No, it doesn't turn the machine-code (ESA/370) into anything intermediate.

Are you quite sure?

> Yes, it knows something of the internals, and yes it knows things it can and 
> can't do with that knowledge.
>
> "There is much more to go wrong in the ABO process"
>
> Why, and with what supporting information?

As I said, the JVM class file format is extremely well documented, and
was designed to be interpreted and verified. The output of various old
COBOL compilers much less so. Well, no, I don't have the negative
supporting evidence (lack of doc on the COBOL object code), but I'm
willing to bet...

To be clear, I think ABO is a great idea with great potential. And
probably it works really well. But if I worked at a bank or insurance
company, I wouldn't be quick to say "oh it's still the same program,
it'll work fine, just bang it into production".

Tony H.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-13 Thread Paul Gilmartin
On Thu, 13 Oct 2016 16:01:16 -0400, Farley, Peter wrote:

>Bill,
>
>You do not comprehend the depth of the fear of failure in large, audited 
>business organizations.
>
>Also the "verification" you propose that we use for ABO output has no 
>programmed tool yet to perform the verification automatically...
>
To be fair, Bill discussed automated comparison of generated binaries
from identical source, allowing for timestamp differences.  But, then,
why recompile?

We performed such a comparison, exhaustively, at the transition from
Assembler H to HLASM.  Successfully.

Perhaps once earlier, when we encountered and programmed around an
Assembler H code generation bug that IBM was calling WAD.  IBM finally
repaired it by PTF (did a customer more influential than we finally report it?)
A very cautious manager, learning that IBM had modified Assembler code
generation, required that we perform such a comparison, circumvention
in place, before and after PTF.  When the comparison succeeded, we saw
no need to remove the circumvention.

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-13 Thread Sam Siegel
Peter - Well said.

Sam

On Thu, Oct 13, 2016 at 1:01 PM, Farley, Peter x23353 <
peter.far...@broadridge.com> wrote:

> Bill,
>
> You do not comprehend the depth of the fear of failure in large, audited
> business organizations.
>
> Also the "verification" you propose that we use for ABO output has no
> programmed tool yet to perform the verification automatically - IEBIBALL is
> the only current tool available, and that is notoriously inefficient and
> error-prone.  Do you expect each ABO client to "roll their own" automated
> verification tool?  IEBIBALL a 1+ line COBOL program assembler listing
> (option LIST) against the list produced by ABO?  I don't think so.
>
> I am not saying ABO is a bad tool, only that it must be considered as a
> change like any other actual source change in order to allay natural and
> substantial concerns in any large organization.  Like the OPT compiler
> option, if the tested version does not use the option (or does not use ABO
> before testing) then it was not tested at all and is not "ready for QA".
>
> No auditor I have ever encountered will tell you otherwise.  I wouldn’t
> buy it either if I were an auditor.
>
> Careful is not wasteful.  Careful saves jobs and companies.
>
> Peter
>
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Bill Woodger
> Sent: Thursday, October 13, 2016 2:31 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: ABO Automatic Binary Optimizer
>
> Peter,
>
> For a recompile, where the program is not expected to have changed even
> one iota, a regression-test is a very poor substitute for verification. I'd
> be amazed if your tests would be extensive enough to pick that the program
> was different, whereas a comparison (masking or "reconciling" the date/time
> change) will give you 100% surety that the recompiled-program-with-no-changes
> is indeed identical, in all but the date/time, to the original.
>
> If your audit procedures still insist that you do the regression-test,
> still do the comparison and ignore the results of the regression-test (it's
> wasted). Because you then have a task which is ignored/wasted, the tendency
> will be to become "sloppier" with the regression-tests in general.
>
> Look at a site which accepts RC=4 as "OK" without needing to look at the
> diagnostics. It is highly likely that there's dark and stinky stuff in
> their systems. Slackness leads to slackness.
>
> A program which is 100% identical (executable code) to the previous
> program has already been tested to the exact extent of the previous
> program. Are there auditors, in a financial environment, who buy that? Yes.
> Do they all? It seems not. Or it is open at least as to whether things have
> been explained to them.
>
> ABO is different, the executable code changes. The changes can be
> "reconciled" through the listing. An automatic verification is going to
> give way better results than any regression-test. Yes, for sure, that is a
> harder argument with an auditor, but even if the regression test is still
> forced, I'd go with the verification every time to "prove" the system, and,
> yes, that again means the regression-test (applied for the entire process)
> is again degraded.
>
> Analogies are tricky, but how about this (abstracting all "human error"):
> you have a very long list of single digits which you have to add up; you
> have a process which removes all sequences of digits which add up to a
> multiple of 10 (and is guaranteed not to have removed anything else),
> replacing them with a total of all of those digits at the bottom of the
> list, and listing all those removed numbers separately. We assume 100%
> accuracy either way (ABO isn't doing anything for that), and it is going
> to be quicker to add up the new, shorter list than the original, and the
> total of the removed digits can be verified against the total in the new
> list to demonstrate that nothing is lost. Given that the original list
> has been audited for its addition, is there a requirement to again audit
> the speeded-up addition of the new list?
>

Re: ABO Automatic Binary Optimizer

2016-10-13 Thread Farley, Peter x23353
Bill,

You do not comprehend the depth of the fear of failure in large, audited 
business organizations.

Also the "verification" you propose that we use for ABO output has no 
programmed tool yet to perform the verification automatically - IEBIBALL is the 
only current tool available, and that is notoriously inefficient and 
error-prone.  Do you expect each ABO client to "roll their own" automated 
verification tool?  IEBIBALL a 1+ line COBOL program assembler listing 
(option LIST) against the list produced by ABO?  I don't think so.

I am not saying ABO is a bad tool, only that it must be considered as a change 
like any other actual source change in order to allay natural and substantial 
concerns in any large organization.  Like the OPT compiler option, if the 
tested version does not use the option (or does not use ABO before testing) 
then it was not tested at all and is not "ready for QA".

No auditor I have ever encountered will tell you otherwise.  I wouldn’t buy it 
either if I were an auditor.

Careful is not wasteful.  Careful saves jobs and companies.

Peter

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Bill Woodger
Sent: Thursday, October 13, 2016 2:31 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: ABO Automatic Binary Optimizer

Peter,

For a recompile, where the program is not expected to have changed even one 
iota, a regression-test is a very poor substitute for verification. I'd be 
amazed if your tests would be extensive enough to pick that the program was 
different, whereas a comparison (masking or "reconciling" the date/time change) 
will give you 100% surety that the recompiled-program-with-no-changes is indeed 
identical, in all but the date/time, to the original.

If your audit procedures still insist that you do the regression-test, still do 
the comparison and ignore the results of the regression-test (it's wasted). 
Because you then have a task which is ignored/wasted, the tendency will be to 
become "sloppier" with the regression-tests in general.

Look at a site which accepts RC=4 as "OK" without needing to look at the 
diagnostics. It is highly likely that there's dark and stinky stuff in their 
systems. Slackness leads to slackness.

A program which is 100% identical (executable code) to the previous program has 
already been tested to the exact extent of the previous program. Are there 
auditors, in a financial environment, who buy that? Yes. Do they all? It seems 
not. Or it is open at least as to whether things have been explained to them.

ABO is different, the executable code changes. The changes can be 
"reconciled" through the listing. An automatic verification is going to give 
way better results than any regression-test. Yes, for sure, that is a harder 
argument with an auditor, but even if the regression test is still forced, I'd 
go with the verification every time to "prove" the system, and, yes, that again 
means the regression-test (applied for the entire process) is again degraded.

Analogies are tricky, but how about this (abstracting all "human error"): you 
have a very long list of single digits which you have to add up; you have a 
process which removes all sequences of digits which add up to a multiple of 10 
(and is guaranteed not to have removed anything else), replacing them with a 
total of all of those digits at the bottom of the list, and listing all those 
removed numbers separately. We assume 100% accuracy either way (ABO isn't 
doing anything for that), and it is going to be quicker to add up the new, 
shorter list than the original, and the total of the removed digits can be 
verified against the total in the new list to demonstrate that nothing is 
lost. Given that the original list has been audited for its addition, is there 
a requirement to again audit the speeded-up addition of the new list? 



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-13 Thread Bill Woodger
Tony,

"But the ABO product is certainly not just translating individual old 
instructions into newer ones. Rather, it is surely identifying COBOL paradigms 
based on some kind of pattern matching, and then retranslating those COBOLy 
things into modern machine code. Presumably it effectively reverse engineers 
the byte codes (whether back into COBOL, or much more likely into an 
intermediate representation that is already accepted by later stages of the 
modern compiler middle or back end) and generates a new object module. If it 
really was just converting old instructions into better performing modern ones, 
the input would not be constrained to be COBOL-generated. Any object code could 
be optimized this way."

The ABO will only operate on COBOL programs, and only from a very specific list 
of compilers, which, with the coming 1.2, impressively goes back to the later 
versions of VS COBOL II as long as they are LE compliant. (When I first came 
across mention of what became the ABO I was sceptical that there would be 
sufficient non-Enterprise COBOL executables around to make it worthwhile, but 
it looks like I was considered to be wrong on that).

It was perhaps wrong to call it an "Optimizer", as people then presume that it 
is doing things that the optimizing engine in the compiler can do. It doesn't. 
It more "soups up" rather than optimizes. Yes, that broadly optimizes things, 
but ABO does not touch many things that the optimizer treats effectively.

No, it doesn't turn the machine-code (ESA/370) into anything intermediate. Yes, 
it knows something of the internals, and yes it knows things it can and can't 
do with that knowledge.

"There is much more to go wrong in the ABO process"

Why, and with what supporting information?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-13 Thread Bill Woodger
Peter,

For a recompile, where the program is not expected to have changed even one 
iota, a regression-test is a very poor substitute for verification. I'd be 
amazed if your tests would be extensive enough to pick that the program was 
different, whereas a comparison (masking or "reconciling" the date/time change) 
will give you 100% surety that the recompiled-program-with-no-changes is indeed 
identical, in all but the date/time, to the original.

If your audit procedures still insist that you do the regression-test, still do 
the comparison and ignore the results of the regression-test (it's wasted). 
Because you then have a task which is ignored/wasted, the tendency will be to 
become "sloppier" with the regression-tests in general.

Look at a site which accepts RC=4 as "OK" without needing to look at the 
diagnostics. It is highly likely that there's dark and stinky stuff in their 
systems. Slackness leads to slackness.

A program which is 100% identical (executable code) to the previous program has 
already been tested to the exact extent of the previous program. Are there 
auditors, in a financial environment, who buy that? Yes. Do they all? It seems 
not. Or it is open at least as to whether things have been explained to them.

ABO is different, the executable code changes. The changes can be 
"reconciled" through the listing. An automatic verification is going to give 
way better results than any regression-test. Yes, for sure, that is a harder 
argument with an auditor, but even if the regression test is still forced, I'd 
go with the verification every time to "prove" the system, and, yes, that again 
means the regression-test (applied for the entire process) is again degraded.

Analogies are tricky, but how about this (abstracting all "human error"): you 
have a very long list of single digits which you have to add up; you have a 
process which removes all sequences of digits which add up to a multiple of 10 
(and is guaranteed not to have removed anything else), replacing them with a 
total of all of those digits at the bottom of the list, and listing all those 
removed numbers separately. We assume 100% accuracy either way (ABO isn't 
doing anything for that), and it is going to be quicker to add up the new, 
shorter list than the original, and the total of the removed digits can be 
verified against the total in the new list to demonstrate that nothing is 
lost. Given that the original list has been audited for its addition, is there 
a requirement to again audit the speeded-up addition of the new list?
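
To make the verification step concrete, here is the analogy worked in a few 
lines of Python; the removal rule is a deliberately dumb stand-in for whatever 
ABO actually does:

# Strip adjacent pairs summing to a multiple of 10, append their total,
# and verify nothing was lost. The pairing rule is a toy stand-in.
digits = [3, 7, 2, 8, 5, 5, 9, 1, 4]

removed, kept, i = [], [], 0
while i < len(digits):
    if i + 1 < len(digits) and (digits[i] + digits[i + 1]) % 10 == 0:
        removed += digits[i:i + 2]       # a pair summing to a multiple of 10
        i += 2
    else:
        kept.append(digits[i]); i += 1

new_list = kept + [sum(removed)]         # total of removed digits at the bottom

# The audit: the shorter list must still add up to the original.
assert sum(new_list) == sum(digits)
print(sum(new_list), "removed:", removed)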

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-13 Thread Bill Woodger
Closest to a guarantee is: "That may depend on how you take this claim: "[ABO] 
... produces a functionally equivalent executable program"."

I've taken that from a post of mine in the March 10 discussion here started by 
Skip.

The Stupid PERFORM broke that.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-13 Thread Farley, Peter x23353
Being in the financial services business, I'm with you on this.  Any program 
change requires full regression testing, including "just a recompile".  We do 
not recompile at all once changed source code is tested by the developer and 
moved into QA.  All moves into QA and into Prod are just IEBCOPY operations.  
Code is tested and quality assured as-is from the developer.

It is the developer's responsibility to make sure nothing broke and that new 
function operates as specified.  ABO does not change that mandate, so it had 
better have been done before the developer performs the required testing and 
certifies a change as "ready for QA".

The potential impact of an uncaught bug or even just an unexpected change in 
business process behavior on the reputation and client base of a company is 
immense and cannot be ignored (especially in this social-media-frenzied 
environment), and I would venture to say that any business type you care to 
name has that same risk.

Peter

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Norman Hollander on Desertwiz
Sent: Thursday, October 13, 2016 1:23 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: ABO Automatic Binary Optimizer

Coming from a banking and Utilities background, it was required that any 
changes made in a production environment be tested prior to implementation, 
and include backout capabilities. While I believe that ABO can very much help 
old COBOL modules (and even those sites that don't have corresponding source 
code), it is a change.  Your industry may have different compliance 
requirements.  From a performance analyst perspective, I'd have much interest 
in the before and after effect of implementing ABO.  Seems like a good session 
for SHARE for anyone who can collect the statistics...

--




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-13 Thread Charles Mills
Very little software comes with guarantees of "do no harm."

Charles

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Paul Gilmartin
Sent: Thursday, October 13, 2016 10:18 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: ABO Automatic Binary Optimizer

On Thu, 13 Oct 2016 12:39:30 -0400, Tony Harminc wrote:
>
>... But the ABO product is certainly not just translating individual 
>old instructions into newer ones. Rather, it is surely identifying 
>COBOL paradigms based on some kind of pattern matching, and then 
>retranslating those COBOLy things into modern machine code.
>
If ABO is guaranteed (we don't know that) not to impair the semantics of any 
machine code, even not generated by COBOL, then the worst it can do is to pass 
machine code unmodified.  But sometimes part of that code might match a COBOL 
cliché and be optimized.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-13 Thread Paul Gilmartin
On Thu, 13 Oct 2016 12:39:30 -0400, Tony Harminc wrote:
>
>... But the ABO product is certainly not just translating
>individual old instructions into newer ones. Rather, it is surely
>identifying COBOL paradigms based on some kind of pattern matching,
>and then retranslating those COBOLy things into modern machine code.
>
If ABO is guaranteed (we don't know that) not to impair the semantics of
any machine code, even not generated by COBOL, then the worst it can
do is to pass machine code unmodified.  But sometimes part of that code
might match a COBOL cliché and be optimized.

There are pitfalls.  With COBOL-generated code it might be easier to
distinguish between instructions and data.  Or a non-COBOL compiler
might branch into a CSECT at an offset from a declared ENTRY.  (I
did that, once, in my younger, less prudent days.)

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-13 Thread Norman Hollander on Desertwiz
Coming from a banking and Utilities background, it was required that any 
changes made in a production environment be tested prior to implementation, 
and include backout capabilities. While I believe that ABO can very much help 
old COBOL modules (and even those sites that don't have corresponding source 
code), it is a change.  Your industry may have different compliance 
requirements.  From a performance analyst perspective, I'd have much interest 
in the before and after effect of implementing ABO.  Seems like a good session 
for SHARE for anyone who can collect the statistics...

zN

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Jesse 1 Robinson
Sent: Thursday, October 13, 2016 9:13 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: ABO Automatic Binary Optimizer

The idea of making any kind of last-minute change just before production makes 
my skin crawl. Even when we (rarely) install an emergency PTF, we insist on 
running the code at least on the sandbox and preferably on the development 
system before hitting production. I would call it irresponsible to move any 
object-code instance to production without at least some testing, including 
compiled COBOL. 

If I were in charge, OPT would be specified from the get-go. Likewise I would 
expect ABO code to be tested at least cursorily before jumping into prime time.

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-302-7535 Office
robin...@sce.com

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Charles Mills
Sent: Thursday, October 13, 2016 8:39 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: (External):Re: ABO Automatic Binary Optimizer

Call me conservative after many years in this business but I say Yes. In my 
experience optimization sometimes exposes bugs that previously were masked. I 
have little experience with COBOL, but COBOL is notorious for allowing invalid 
constructs like array subscript over-cleverness.

Charles

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Tom Marchant
Sent: Thursday, October 13, 2016 7:00 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: ABO Automatic Binary Optimizer

On Thu, 13 Oct 2016 05:44:51 -0500, Bill Woodger wrote:

>Recompiling a program with no changes. Do you "regression test"? No.

...

So, if someone compiles their COBOL program without optimization and tests it, 
then compiles it with optimization before putting it into production, does it 
need to be tested again?


--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to 
lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-13 Thread Tony Harminc
On 13 October 2016 at 06:41, Timothy Sipples  wrote:
> OK, about testing. For perspective, for over two decades (!) Java has
> compiled/compiles bytecode *every time* a Java class is first instantiated.

That seems a little, uh, optimistic. The Java Programming Language
book, and the corresponding Java Virtual Machine Specification, first
editions, were both published in 1996. Did IBM even have a mainframe
Java at that point (beyond perhaps compiling Sun's published C code on
MVS), let alone one doing JIT optimization?

> The resulting native code is model optimized depending on the JVM release
> level's maximum model optimization capabilities or the actual machine
> model, whichever is lower. Raise your hand if you're performing "full
> regression testing"(*) before every JVM instantiation. :-) That's absurd,
> of course. I'm trying to think how that would even be technically possible.
> It'd at least be difficult.
>
> ABO takes the same fundamental approach. From the perspective of an IBM z13
> machine (for example), ABO takes *COBOL* "bytecode" (1990s and earlier
> vintage instructions) and translates it to functionally identical native
> code (2015+ instruction set). It's the core essence of what JVMs do all the
> time, and IBM and others in the industry have been doing it for over two
> decades. Except with ABO it's done once (per module), and you control when
> and where.

Well... Sort of. Seems to me the problem is that while the JVM
instruction stream is architected as a whole (i.e. it's not just the
definition of what each instruction does, but an entire Java class
that must match the specs), the output of an old COBOL compiler can't
be said to be architected like this. (Probably there is some internal
doc describing what the compiler generates (and of course the old
compiler source code is definitive) but I'm willing to bet that it's
nowhere close to the standards of any generation of the Principles of
Operation.) The Principles of Operation defines very accurately and
completely what each instruction emitted by the old compiler does, but
lacks any notion of larger groupings along the lines of Java class
files. But the ABO product is certainly not just translating
individual old instructions into newer ones. Rather, it is surely
identifying COBOL paradigms based on some kind of pattern matching,
and then retranslating those COBOLy things into modern machine code.
Presumably it effectively reverse engineers the byte codes (whether
back into COBOL, or much more likely into an intermediate
representation that is already accepted by later stages of the modern
compiler middle or back end) and generates a new object module. If it
really was just converting old instructions into better performing
modern ones, the input would not be constrained to be COBOL-generated.
Any object code could be optimized this way.

This all says that the analogy between converting Java bytecodes into
native instructions and converting old COBOL object code into modern,
is weak. And it follows that the need for testing isn't the same.
There is much more to go wrong in the ABO process than there is with
JITing Java.

Tony H.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-13 Thread Jesse 1 Robinson
The idea of making any kind of last-minute change just before production makes 
my skin crawl. Even when we (rarely) install an emergency PTF, we insist on 
running the code at least on the sandbox and preferably on the development 
system before hitting production. I would call it irresponsible to move any 
object-code instance to production without at least some testing, including 
compiled COBOL. 

If I were in charge, OPT would be specified from the get-go. Likewise I would 
expect ABO code to be tested at least cursorily before jumping into prime time.

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-302-7535 Office
robin...@sce.com

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Charles Mills
Sent: Thursday, October 13, 2016 8:39 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: (External):Re: ABO Automatic Binary Optimizer

Call me conservative after many years in this business but I say Yes. In my 
experience optimization sometimes exposes bugs that previously were masked. I 
have little experience with COBOL, but COBOL is notorious for allowing invalid 
constructs like array subscript over-cleverness.

Charles

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Tom Marchant
Sent: Thursday, October 13, 2016 7:00 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: ABO Automatic Binary Optimizer

On Thu, 13 Oct 2016 05:44:51 -0500, Bill Woodger wrote:

>Recompiling a program with no changes. Do you "regression test"? No.

...

So, if someone compiles their COBOL program without optimization and tests it, 
then compiles it with optimization before putting it into production, does it 
need to be tested again?


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-13 Thread Charles Mills
I have heard Java characterized as "write once, debug everywhere."

Charles

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
Behalf Of Timothy Sipples
Sent: Thursday, October 13, 2016 3:42 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: ABO Automatic Binary Optimizer



OK, about testing. For perspective, for over two decades (!) Java has
compiled/compiles bytecode *every time* a Java class is first instantiated.
The resulting native code is model optimized depending on the JVM release
level's maximum model optimization capabilities or the actual machine model,
whichever is lower. Raise your hand if you're performing "full regression
testing"(*) before every JVM instantiation. :-) That's absurd, of course.
I'm trying to think how that would even be technically possible.
It'd at least be difficult.



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-13 Thread Charles Mills
Call me conservative after many years in this business but I say Yes. In my 
experience optimization sometimes exposes bugs that previously were masked. I 
have little experience with COBOL, but COBOL is notorious for allowing invalid 
constructs like array subscript over-cleverness.
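
For illustration, a minimal sketch of the kind of "over-cleverness" meant here 
(a hypothetical fragment with invented data names, not from any real system). 
Compiled without SSRANGE there is no run-time subscript check, so the 
out-of-range MOVE quietly stamps on whatever storage follows the table; 
recompile with OPT and the storage layout can shift, so previously "harmless" 
stamping may land somewhere that matters:

       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-TABLE.
           05  WS-ENTRY       OCCURS 10 TIMES PIC X(08).
       01  WS-NEIGHBOUR       PIC X(08) VALUE 'INNOCENT'.
       PROCEDURE DIVISION.
       0000-MAIN.
      *    Subscript 11 is outside the 10-entry table. Without
      *    SSRANGE nothing checks it at run time; the MOVE lands on
      *    whatever follows WS-TABLE in storage (here, quite possibly
      *    WS-NEIGHBOUR, so the DISPLAY shows 'GOTCHA').
           MOVE 'GOTCHA' TO WS-ENTRY (11)
           DISPLAY WS-NEIGHBOUR
           GOBACK.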

Charles

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Tom Marchant
Sent: Thursday, October 13, 2016 7:00 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: ABO Automatic Binary Optimizer

On Thu, 13 Oct 2016 05:44:51 -0500, Bill Woodger wrote:

>Recompiling a program with no changes. Do you "regression test"? No.

...

So, if someone compiles their COBOL program without optimization and tests it, 
then compiles it with optimization before putting it into production, does it 
need to be tested again?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-13 Thread Bill Woodger
Well, it's an excellent question Tom, but needs to be directed to people at 
sites that do that :-)

Pushed for an answer, I'd say "no". But if you have to answer it, it ends up 
being the same answer as for ABO, which is why you've posed the question.

IBM actually recommends slapping OPT on three minutes before the program 
enters Production. The issue cited is the extra time for a compile, which is 
compounded to some extent with V5+ (a bit less so with V6+, due to the use of 
64-bit code in the compiler). When is a program compiled most often? During 
development/program testing. If concerned about the time taken for a compile, 
OPT is not needed there. Out of that stage, the program is either recompiled 
once, or once per testing stage - no real problem to have it compiled with 
everything that it will use in Production (principally this is OPT and 
"monitoring" options, like SSRANGE).

So, I wouldn't go "up the line" without OPT, but if that were the rules for a 
particular site (I'd try to change the rules) I'd do my CYA (establish risk of 
the weird) and argue against re-testing.

As with ABO, it's not OPT I'm afraid of, it is the potential of bad programs. 
It is not that I necessarily expect the testing done to reveal a bad program 
which intermittently and obscurely behaves badly; it is that I can guarantee 
the program which has been tested up to that point wouldn't have failed any 
tests done just by slapping OPT on as the last thing.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-13 Thread Bill Woodger
I'm not saying just install ABO and get on with it. I'm talking about 
per-program, beyond an initial "proving" stage (of the procedures for working, 
implementation, gauging actual improvement with actual programs, including even 
detailed work on "beast" programs). I'd also expect "parallel" running, of 
appropriate lengths of time, with what is appropriate to include "decreasing" 
as "confidence" is built.

I'd not even be averse to a "bust it if you can" period. Not only toss bad 
programs into it, but deliberately try to create things to not work. Yes, 
there's been one issue, so that increases the chance of there being another. 
The V5+ experience is that many people commonly use COBOL in ways that were 
"unexpected". For all of those with V5+, there is no evidence that any of 
those things, not a single one, caused a problem with ABO (V5+ has no 
(reported) problem with that stupid PERFORM construct).

Now, to argue against myself. I always recommend that beyond program-testing, 
all compile options should be the same as they are to be in Production. You 
can't just slap on OPT at the final point. Not because OPT doesn't work, but 
because the code "moves". Since the code moves, something which was previously 
stamping innocuously on program code *may* now stamp on something more 
inconvenient. That's not ABO's fault, though it could be a side-effect; it is 
the fault of a bad program. Is regression-testing genuinely likely to uncover 
what tends to cause such things (and they are not at all common) any better 
than a "random" parallel run?

ABO *may* expose a bad program, not because of ABO itself, but because of the 
bad program. If you have a program which "intermittently fails and we've never 
discovered why", or the program that everyone hates to touch, then yes, it is 
reasonable to take different measures - and perhaps, on balance, not even to 
ABO things like that without remedial action first.

For me, in this type of case, I am concerned that a regression-test (expensive) 
may be a placebo, and at worst lead to a false sense of security. Bad code is 
bad code. Passing a regression test does not change that. With a correct 
program I have no fear for ABO'ing. My belt-and-braces would be an (automatic) 
analysis of the ABO output, consuming considerably fewer resources.



As a side note, to a separate post, ABO doesn't come out of Markham, but from 
IBM Research, although obviously not in a vacuum.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-13 Thread Tom Marchant
On Thu, 13 Oct 2016 05:44:51 -0500, Bill Woodger wrote:

>Recompiling a program with no changes. Do you "regression test"? No.

...

So, if someone compiles their COBOL program without optimization and tests it, 
then compiles it with optimization before putting it into production, does it 
need to be tested again?

-- 
Tom Marchant

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-13 Thread Bill Woodger
The logic is that if you've already ABO'd X-number of programs, you need to 
check for the stupid PERFORM.

If located, fix the damn thing, then and there, test it, regression-test it, 
get it completed.

Before ABO'ing, even if the code will "work", it is still garbage code (not 
ABO's fault). So, since you're doing some work on it, change the garbage code 
and do it properly.

Yes, I'm happy that ABO now deals with it correctly, I'm just far from happy 
that the particular source code exists. It may be "working", it may be an 
irrelevant error, it may be a subtle error, it is for sure an accident waiting 
to happen.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-13 Thread Bill Woodger
Recompiling a program with no changes. Do you "regression test"? No.

You compare the object (masking the date/time). If it is the same (as in 
identical) - what would a regression-test show?

OK, compiler may have been patched. Doesn't matter. The executable code 
generated is the same.

Run-time (LE) may have been patched. OK, but regression-test that, not every 
single program.

What if the executable code turns out to be different? Why test that? You need 
to find out what is different, not test it; you're not expecting differences, 
so explain that first.

If after analysis it is proven that the executable code should be different, of 
course, test it, because it is no longer a "no change" compile.

Yes, there are rules. The rulers should be aware of the "psychological" impacts 
of rules applied in... the wrong... circumstances. If you "blanket regression 
test" you are for sure not verifying that the executable code is identical. 
Tick "Yes" for Regression-Tested, tick "No" for "All changes tested", because 
you don't know of a change. Testing something with no changes leads to a 
climate of "you don't need to bother much with that one".

ABO is a little different, in that the executable obviously changes.

What should not happen is any change in the logic (optimisations for that are 
out-of-bounds for ABO).

ABO produces a listing. Manually/automatically inspecting the listing produced 
should provide a very high level of confidence in "no logic changes".

Yes, if you're going to regression-test ABO'd programs it greatly affects the 
cost/benefit analysis.

I'd be happier without the "PERFORM A THRU B where B precedes A" problem. 
Automatic verification of "no logic change" should help. Even if that is no use 
to the argument "don't regression-test", still consider doing it, it is looking 
for something concrete, whereas the regression-test is... a regression-test 
only.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-13 Thread Timothy Sipples
Bill Woodger wrote:
>You (now) need to check for the stupid out-of-order PERFORM ...
>THRU ... but otherwise you are good to go.

I don't understand the logic. Yes, you ought to make sure that ABO PTF is
applied. But look again at the APAR (PI68138):

"ABO was fixed to correctly optimize input programs that contain these
specific kinds of PERFORMs."

These "stupid" PERFORMs work correctly now. ("Correctly" means "whatever
they were doing before." Of course if they never worked as intended, they
will continue not working.) You don't have to check for them, but you do
have to check to make sure the PTF is applied if you have ABO Version 1.1.
Or use ABO Version 1.2, with a planned general availability date of
November 11, 2016.

Norman Hollander wrote:
>ABO creates a new load module that (IMHO) needs as much Q/A testing as
>compiling in the newest compiler.

Karl Huf wrote:
>Our developers are required to do
>regression testing on their changes - even if it is just recompiling
>with no source code changes. They initially argued (well, not
>initially, it went on way too long) that there's no need to do such
>testing when using ABO.  Technically they may be right; technically one
>probably shouldn't have to do complete regression testing when
>recompiling the same source.  None of that makes any difference if the
>stated requirement in the development standards they have to follow says
>they DO have to do that testing.  Knowing that, then, they would be
>similarly required to do that testing for an ABO optimized module, so we
>questioned the benefit of licensing another product to do the same thing
>the compiler can do.

Charles Mills wrote:
>So I raise my eyebrows at the assertion that since the ABO is just a
>massage of the existing compiled object code no re-testing is necessary.
>I don't deny it; I just raise my eyebrows: IBM's compiler team knows a
>lot more about this stuff than I do.

OK, about testing. For perspective, for over two decades (!) Java has
compiled/compiles bytecode *every time* a Java class is first instantiated.
The resulting native code is model optimized depending on the JVM release
level's maximum model optimization capabilities or the actual machine
model, whichever is lower. Raise your hand if you're performing "full
regression testing"(*) before every JVM instantiation. :-) That's absurd,
of course. I'm trying to think how that would even be technically possible.
It'd at least be difficult.

ABO takes the same fundamental approach. From the perspective of an IBM z13
machine (for example), ABO takes *COBOL* "bytecode" (1990s and earlier
vintage instructions) and translates it to functionally identical native
code (2015+ instruction set). It's the core essence of what JVMs do all the
time, and IBM and others in the industry have been doing it for over two
decades. Except with ABO it's done once (per module), and you control when
and where.

I don't think I've seen any IBM published technical advice suggesting
*zero* ABO-related testing. Testing is important. However, you really,
really can't afford to be silly about it. (Do you run a full regression
test of all your applications when you add another disk pack to your
storage unit?) If you're running a fixed battery of tests just so you can
tick a box on a compliance form, you (or somebody above you) has probably
lost the plot. That's mere process over outcome, the sort of behavior that
kills organizations. Focus instead on the desired outcome and real,
informed risk profile of what you're trying to accomplish. Then test
*appropriately*. Ask the IBM ABO technical team if you'd like some testing
advice.

Let's oversimplify and assume a range of testing intensity (and work
effort, and costs) from 0% to 100%. Certainly you should test ABO
"pathways" in your environment to some degree. So, I vote for "not zero."
But 100%? Really? I don't buy that extreme either.

Another way to think about this is that one can ALWAYS think of more and
more complex tests to run. That's not hard. So why aren't you running them,
and more tomorrow, and more next month, and more again the month after
that? Ever increasing testing scope and effort. The basic answer is that
that would be silly. Time and testing resources are finite, and they have
costs, including opportunity costs associated with delayed introduction of
new business functions to your customers and partners. As with managing any
other finite resource, the most successful organizations consume smartly,
efficiently. They aim to get maximum risk mitigation (in business value and
cost terms) for every dollar, yen, euro, and minute of testing resource.

(*) Whatever that means. :-)


Timothy Sipples
IT Architect Executive, Industry Solutions, IBM z Systems, AP/GCG/MEA
E-Mail: sipp...@sg.ibm.com

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

Re: ABO Automatic Binary Optimizer

2016-10-12 Thread Charles Mills
I'm going to try to walk a delicate line here. I don't want to offend the fine 
folks at IBM, and IBM's compiler people are a bright, bright bunch of folks up 
in Toronto. They know more about compilers than I ever will. But, that said ...

I agree with you on testing. A recompile of unedited source code demands a full 
regression test IMHO. Who knows what might have changed: a PTF, an environment 
variable, you only thought it was unedited (you forgot about Fred) -- whatever. 
Heck, on our product, we do a partial regression test after a re-link of 
unchanged object code "just to make sure."

So I raise my eyebrows at the assertion that since the ABO is just a massage of 
the existing compiled object code no re-testing is necessary. I don't deny it; 
I just raise my eyebrows: IBM's compiler team knows a lot more about this stuff 
than I do.

Yes, "we can't find the source code" or "we found 17 versions of the source 
code and we don't know which one is in production" is apparently a reality -- a 
very scary reality IMHO, but certainly a justification for ABO.

Charles

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Karl S Huf
Sent: Wednesday, October 12, 2016 3:49 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: ABO Automatic Binary Optimizer

In our experience the need for PDSE datasets was far from the only difficulty 
in migrating to COBOL V5 (and that really wasn't the hard part).  Many compiles 
take drastically more CPU time as well as require more region.  While this is 
documented we were stunned by the orders of magnitude.  Compiles that had 
previously taken single digit CPU seconds suddenly needed minutes (this is on a 
z13).  Similarly where a 4MB Region default had been adequate those compiles 
routinely failed; we were directed to specify 200MB but found even that 
frequently failed and threw up our hands and told everybody to use REGION=0 
(personal peeve of mine).  We also did find some incompatibilities for which we 
opened up PMR's and received APAR's.

Within the context of the thread topic, ABO . . . I discussed this quite a bit 
with my account team.  Our developers are required to do regression testing on 
their changes - even if it is just recompiling
with no source code changes. They initially argued (well, not
initially, it went on way too long) that there's no need to do such testing 
when using ABO.  Technically they may be right; technically one probably 
shouldn't have to do complete regression testing when recompiling the same 
source.  None of that makes any difference if the stated requirement in the 
development standards they have to follow says they DO have to do that testing. 
 Knowing that, then, they would be similarly required to do that testing for an 
ABO optimized module, so we questioned the benefit of licensing another product to 
do the same thing the compiler can do.  Now, if there's a substantial amount of 
executing COBOL code that consumes a fair amount of resources AND the source is 
missing/unavailable then maybe ABO is in play - but that's not our situation.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-12 Thread Karl S Huf
In our experience the need for PDSE datasets was far from the only
difficulty in migrating to COBOL V5 (and that really wasn't the hard
part).  Many compiles take drastically more CPU time as well as require
more region.  While this is documented we were stunned by the orders of
magnitude.  Compiles that had previously taken single digit CPU seconds
suddenly needed minutes (this is on a z13).  Similarly where a 4MB
Region default had been adequate those compiles routinely failed; we
were directed to specify 200MB but found even that frequently failed and
threw up our hands and told everybody to use REGION=0 (personal peeve of
mine).  We also did find some incompatibilities for which we opened up
PMR's and received APAR's.

Within the context of the thread topic, ABO . . . I discussed this quite
a bit with my account team.  Our developers are required to do
regression testing on their changes - even if it is just recompiling
with no source code changes. They initially argued (well, not
initially, it went on way too long) that there's no need to do such
testing when using ABO.  Technically they may be right; technically one
probably shouldn't have to do complete regression testing when
recompiling the same source.  None of that makes any difference if the
stated requirement in the development standards they have to follow says
they DO have to do that testing.  Knowing that, then, they would be
similarly required to do that testing for an ABO optimized module, so we
questioned the benefit of licensing another product to do the same thing
the compiler can do.  Now, if there's a substantial amount of executing
COBOL code that consumes a fair amount of resources AND the source is
missing/unavailable then maybe ABO is in play - but that's not our
situation.



___
Karl S Huf | Senior Vice President | World Wide Technology
50 S LaSalle St, LQ-18, Chicago, IL  60603 | phone (312)630-6287 |
k...@ntrs.com
Please visit northerntrust.com

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> On Behalf Of Lizette Koehler
> Sent: Wednesday, October 12, 2016 11:38 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: ABO Automatic Binary Optimizer
>
> The only difficulty in migration to Cobol V5 and above is the need for
> PDS/E datasets.
>
> Since z/OS V2.2 is providing a way to not have to UPDATE all production
> JCL with PDS/E datasets, that issue with migration, imo, is greatly
> reduced.
>
> So once z/OS V2.2 is installed, migration plans to COBOL V5 and above
> should be able to begin.
>
> Lizette
>
>
> > -Original Message-
> > From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> > On Behalf Of Norman Hollander on Desertwiz
> > Sent: Wednesday, October 12, 2016 9:26 AM
> > To: IBM-MAIN@LISTSERV.UA.EDU
> > Subject: Re: ABO Automatic Binary Optimizer
> >
> > 2 Thoughts to consider:
> >
> > - ABO only runs on z/OS 2.1 and above
> > - ABO creates a new load module that (IMHO) needs as much Q/A testing
> > as compiling in the newest compiler.
> > IIRC, back in the day, going to Enterprise COBOL, there was less than
> > 8% of COBOL source that needed to be remediated.  That is, certain
> > COBOL verbs needed to be updated to new ones.  Things like INSPECT
> > may have been flagged.
> >
> > A good Life Cycle Management tool (did I say Endevor?) could help
> > with an easy migration to a new compiler.
> > You could try a minor application and see how difficult it may be...
> >
> > zN
> >
> > -Original Message-
> > From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> > On Behalf Of Charles Mills
> > Sent: Wednesday, October 12, 2016 8:48 AM
> > To: IBM-MAIN@LISTSERV.UA.EDU
> > Subject: Re: ABO Automatic Binary Optimizer
> >
> > Nope. Agree 100% with what @Tom says. The ABO is not a source code
> > migration tool, it is a compiler. Really -- a very weird compiler.
> > Most compilers take source code in and produce object code out. The
> > ABO is a compiler that takes object code in and produces object code
> > out. What good is that? It takes System 370 object code in and
> > produces z13 object code out.
> >
> > Why is that useful? Because the speed gains in the last several
> > generations of mainframe are not in clock/cycle speed.

Re: ABO Automatic Binary Optimizer

2016-10-12 Thread Anne & Lynn Wheeler
l...@garlic.com (Anne & Lynn Wheeler) writes:
> count of latency to memory (& cache miss), when measured in count of
> processor cycles is comparable to 60s latency to disk when measured in
> count of 60s processor cycles.

re:
http://www.garlic.com/~lynn/2016f.html#91 ABO Automatic Binary Optimizer

science center ... some past posts
http://www.garlic.com/~lynn/subtopic.html#545tech

had done virtual machines, virtual memory, paging, etc operating systems
in the 60s ... as well as lots of performance monitoring and
optimization technology. we gave POK some tutorials in the 70s when they
were migrating from os/360 to VS2 on the subject.

One of the performance optimization technologies done by the scientific
center was eventually released to customers as "VS/Repack" ... it did
semi-automated program reorganization for a paging environment. These days,
caches are the modern memory, and cache misses are the modern page
faults, and latency to memory (when measured in processor cycles) is
comparable to access to paging devices.

In the 70s, lots of OS/VS products used the internal version of
VS/Repack for adapting their products to the virtual memory environment
(and also for general performance optimization ... since a lot of the
tools that could feed into VS/Repack for program reorganization could be
used for hot-spot analysis)

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-12 Thread Tom Conley

On 10/12/2016 12:29 PM, Jesse 1 Robinson wrote:

IBM is wrong. Tom is right. He lives for moments like this. ;-)



I resemble that remark ;-)

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-12 Thread Bill Woodger
Well, I'm still going to disagree on the level of "testing" required.

You (now) need to check for the stupid out-of-order PERFORM ... THRU ... but 
otherwise you are good to go.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-12 Thread Lizette Koehler
The only difficulty in migration to Cobol V5 and above is the need for PDS/E
datasets.

Since z/OS V2.2 is providing a way to not have to UPDATE all production JCL with
PDS/E datasets, that issue with migration, imo, is greatly reduced.

So once z/OS V2.2 is installed, migration plans to COBOL V5 and above should be
able to begin.

Lizette


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Norman Hollander on Desertwiz
> Sent: Wednesday, October 12, 2016 9:26 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: ABO Automatic Binary Optimizer
> 
> 2 Thoughts to consider:
> 
> - ABO only runs on z/OS 2.1 and above
> - ABO creates a new load module that (IMHO) needs as much Q/A testing as
> compiling in the newest compiler.
>   IIRC, back in the day, going to Enterprise COBOL, there was less than 8%
> of COBOL source that needed
>   to be remediated.  That is, certain COBOL verbs needed to be updated to
> new ones.  Things like INSPECT
>   may have been flagged.
> 
> A good Life Cycle Management tool (did I say Endevor?) could help with an easy
> migration to a new compiler.
> You could try a minor application and see how difficult in may be...
> 
> zN
> 
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Charles Mills
> Sent: Wednesday, October 12, 2016 8:48 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: ABO Automatic Binary Optimizer
> 
> Nope. Agree 100% with what @Tom says. The ABO is not a source code migration
> tool, it is a compiler. Really -- a very weird compiler. Most compilers take
> source code in and produce object code out. The ABO is a compiler that takes
> object code in and produces object code out. What good is that? It takes
> System 370 object code in and produces z13 object code out.
> 
> Why is that useful? Because the speed gains in the last several generations of
> mainframe are not in clock/cycle speed. System 370 object code does not run
> any faster on a z13 than on a z10. The gains are in new instructions.
> The same functionality as that S/370 object code expressed in z13 object code
> runs a lot faster.
> 
> (Please, no quibbles. Many shortcuts and generalizations in the above. If I
> had been perfectly precise it would have read like a legal document. The
> general points are correct.)
> 
> Charles
> 
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Lopez, Sharon
> Sent: Wednesday, October 12, 2016 7:51 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: ABO Automatic Binary Optimizer
> 
> Does anyone know if this product, Automatic Binary Optimizer, will actually
> migrate Cobol V4 to V6 for you?  Our IBM reps are telling us that it will
> actually do the migration for you.  Based on what I've read, it is a
> performance product and I didn't see that capability.
> 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-12 Thread Jesse 1 Robinson
IBM is wrong. Tom is right. He lives for moments like this. ;-)

Optimization occurs at the load module end only. You might view ABO as a 
stop-gap on the road to compiler upgrade. You get savings now even while (you 
may be) upgrading at the source end. ABO-optimized code is not as efficient as 
V5/V6, but it can be substantially better than native V4 code. 

Note that Tom's reference to 'program object' does not mean that you must 
necessarily use PDSE. As discussed here before, one compiler upgrade problem 
that some shops have is historically sharing load libraries across sysplex 
boundaries. Convenient but not advisable even for PO. PDSEs cannot be shared 
that way. In order to migrate production load modules to multiple PDSEs, the 
whole migration process might have to be rewritten. That could take more work 
than the compiler upgrade itself. 

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-302-7535 Office
robin...@sce.com


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Tom Conley
Sent: Wednesday, October 12, 2016 8:07 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: (External):Re: ABO Automatic Binary Optimizer

On 10/12/2016 10:50 AM, Lopez, Sharon wrote:
> Does anyone know if this product, Automatic Binary Optimizer, will actually 
> migrate Cobol V4 to V6 for you?  Our IBM reps are telling us that it will 
> actually do the migration for you.  Based on what I've read, it is a 
> performance product and I didn't see that capability.
>
> Thanks to everyone in advance.
>

ABO only creates an optimized LOAD MODULE (program object).  It does not 
convert your source to V6, and it will not give you all the optimizations of 
V6.  Your biggest payback is if you upgrade your CPU, then you can run your 
load modules through ABO and get some of the optimization provided by the new 
hardware.

Regards,
Tom Conley

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-12 Thread Anne & Lynn Wheeler
charl...@mcn.org (Charles Mills) writes:
> Why is that useful? Because the speed gains in the last several generations
> of mainframe are not in clock/cycle speed. System 370 object code does not
> run any faster on a z13 than on a z10. The gains are in new instructions.
> The same functionality as that S/370 object code expressed in z13 object
> code runs a lot faster.

count of latency to memory (& cache miss), when measured in count of
processor cycles is comparable to 60s latency to disk when measured in
count of 60s processor cycles.

claim is that over half of the per-processor improvement from z10 to z196
was the introduction of latency-masking technology (which has been in other
platforms for decades): out-of-order execution (proceed with other
instructions while waiting for cache miss), speculative execution and
branch prediction (start executing instructions before the condition for the
branch is available), etc. (aka basically do other things while waiting
on memory).

z196->ec12 is supposedly further refinements in memory latency masking
features (again have been in other platforms for decades).

z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS, (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012

z13 published refs is 30% more throughput than EC12 (or about 100BIPS)
with 40% more processors ... or about 710MIPS/proc

I've told the story before about how, after FS imploded, there was a mad rush
to get stuff back into product pipelines ... some past posts
http://www.garlic.com/~lynn/submain.html#futuresys

303x and 3081 were kicked off in parallel, 3031&3032 were 158&168
repackaged to work with channel director ... and 3033 started out 168
logic mapped to some warmed over FS chips that were 20% faster.

we had a project to do a 16-way SMP multiprocessor and had con'ed the 3033
processor engineers to work on it in their spare time (a lot more
interesting than 3033), many in POK thought it was really neat ... until
somebody told the head of POK that it could be decades before the POK
favorite son operating system had effective 16-way support ... then some
of us were instructed to never visit POK again (and the 3033 processor
engineers were told to stop getting distracted). 16-way finally ships
almost 25yrs later (Dec2000).

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-12 Thread Norman Hollander on Desertwiz
2 Thoughts to consider:

- ABO only runs on z/OS 2.1 and above
- ABO creates a new load module that (IMHO) needs as much Q/A testing as
  compiling in the newest compiler.
  IIRC, back in the day, going to Enterprise COBOL, there was less than 8%
  of COBOL source that needed to be remediated.  That is, certain COBOL
  verbs needed to be updated to new ones.  Things like INSPECT may have
  been flagged.

A good Life Cycle Management tool (did I say Endevor?) could help with an
easy migration to a new compiler.
You could try a minor application and see how difficult it may be...

zN

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
Behalf Of Charles Mills
Sent: Wednesday, October 12, 2016 8:48 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: ABO Automatic Binary Optimizer

Nope. Agree 100% with what @Tom says. The ABO is not a source code migration
tool, it is a compiler. Really -- a very weird compiler. Most compilers take
source code in and produce object code out. The ABO is a compiler that takes
object code in and produces object code out. What good is that? It takes
System 370 object code in and produces z13 object code out.

Why is that useful? Because the speed gains in the last several generations
of mainframe are not in clock/cycle speed. System 370 object code does not
run any faster on a z13 than on a z10. The gains are in new instructions.
The same functionality as that S/370 object code expressed in z13 object
code runs a lot faster.

(Please, no quibbles. Many shortcuts and generalizations in the above. If I
had been perfectly precise it would have read like a legal document. The
general points are correct.)

Charles

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
Behalf Of Lopez, Sharon
Sent: Wednesday, October 12, 2016 7:51 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: ABO Automatic Binary Optimizer

Does anyone know if this product, Automatic Binary Optimizer, will actually
migrate Cobol V4 to V6 for you?  Our IBM reps are telling us that it will
actually do the migration for you.  Based on what I've read, it is a
performance product and I didn't see that capability.

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email
to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-12 Thread Feller, Paul
Having done some testing with ABO and COBOL V6, you will find that ABO (ARCH10) 
runs better than COBOL V4, and COBOL V6 (ARCH10) runs better than ABO.  When you 
start using ABO you need to keep using ABO for new compiles (compile/link/ABO) 
or convert to COBOL V6.  Depending on the size of the LOAD module, ABO can use 
up some memory and CPU to perform its work.  


Thanks..

Paul Feller
AGT Mainframe Technical Support


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Bill Woodger
Sent: Wednesday, October 12, 2016 11:03
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: ABO Automatic Binary Optimizer

I suppose the cunning thing to do would be to write it into the contract, then 
you get IBM to do the migration to V6 "for free"...

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-12 Thread Bill Woodger
Fix list for ABO.

http://www-01.ibm.com/support/docview.wss?uid=swg27047229#28062016

Looks good... except for one thing: 
http://www-01.ibm.com/support/docview.wss?uid=swg1PI68138

"


* USERS AFFECTED: Users of the IBM Automatic Binary Optimizer  *
* (ABO) for z/OS, v1.1 where the original  *
* source program has PERFORM B THRU A  *
* statements where paragraph B appears after   *
* A.   *

* PROBLEM DESCRIPTION: Input programs that contain PERFORM B   *
*  THRU A statements where paragraph B *
*  appears after A may be incorrectly  *
*  optimized by ABO. This usually results  *
*  in a data exception (0C7) when running  *
*  the optimized program but other abends  *
*  or error conditions are also possible.  *

* RECOMMENDATION: Apply the provided PTF.  *
*  *

ABO was fixed to correctly optimize input programs that contain
these specific kinds of PERFORMs.

Problem conclusion

ABO was modified to correctly optimize the input program, and
the resulting optimized program no longer produces the data
exception nor other abends or conditions not present in the
original program."

More "sales" talk: "incorrectly optimizes" - well, the program broke, I suppose 
that counts as "incorrect".

Is a broken program of this type (I'd guess control "fell through" rather than 
returning as expected) always going to Abend? No. Eeeks on this one. If you've 
ABO'd, verify there is no use of PERFORM A THRU B, where B is physically 
located prior to A.

Yes, I know writing a PERFORM like that is nuts, but... nuts happens.
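
For anyone who hasn't met it, a minimal sketch of the shape of the construct 
the APAR describes (paragraph names invented for illustration; exact run-time 
behaviour depends on what physically follows the performed paragraph):

       PROCEDURE DIVISION.
       0000-MAINLINE.
      *    The THRU exit (1000-FINISH) physically precedes the
      *    starting paragraph (2000-PROCESS), so control begins at
      *    2000-PROCESS and never flows forward through the end of
      *    1000-FINISH; the PERFORM's return point can fire late,
      *    or never, depending on the surrounding code.
           PERFORM 2000-PROCESS THRU 1000-FINISH
           GOBACK.
       1000-FINISH.
           DISPLAY 'FINISH'.
       2000-PROCESS.
           DISPLAY 'PROCESS'.

Unoptimized code stumbles through this in some accidental-but-stable way; per 
the APAR above, the fixed ABO now preserves whatever that behaviour was.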

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-12 Thread Lizette Koehler
So if you are not ready for a COBOL V5/V6 migration and want the benefit of 
some of the optimizations, ABO can take your program and produce a new, 
optimized version of it.

However, it is not a migration.

And if you have a requirement for your applications to validate any program 
that has changed, then you may still need to do this.  It is module to module 
not source to source.

Also, if you are going to z/OS V2.2, there is a way, defined in SYS1.PARMLIB, 
to pair load libraries at the system level.

So rather than changing Production JCL to include a PDSE, you can tell MVS 
that if library A is in the JCL, look in library B first.

IEFOPZxx contains statements that define the load library data set optimization 
configuration, which could, for example, provide a list of pairings of an old 
Cobol load library and the intended new load libraries (one for each desired 
architecture level), and specify which members are to be processed (optimized).


Lizette


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Bill Woodger
> Sent: Wednesday, October 12, 2016 8:51 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: ABO Automatic Binary Optimizer
> 
> Mmmm... I wonder why they would say that?
> 
> It takes the existing executable code of your Enterprise COBOL programs and
> optimises them for new instructions available on ARCH(10) and ARCH(11).
> 
> So if your hardware is up-to-date or so, it gives you a route for existing
> COBOL executables to take advantage of instructions introduced since ESA/390.
> 
> It doesn't do anything for your source code.
> 
> An identical program compiled with V6.1 will/should perform better than an
> ABO'd executable, because there are many more optimizations available to the
> compiler.
> 
> If you have a large program stock (of Enterprise COBOL executables) and
> current hardware, ABO gives you a painless (except for cost, and time to do
> it) way to make use of machine instructions that didn't exist when Enterprise
> COBOL was designed. Going to V6, much more care (testing) is needed. ABO can
> be wash-'n-go.
> 
> ABO has been discussed here a couple of times this year.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-12 Thread Bill Woodger
I suppose the cunning thing to do would be to write it into the contract, then 
you get IBM to do the migration to V6 "for free"...

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-12 Thread Charles Mills
> I wonder why they would say that?

Because they are sales reps, not techies?

Charles

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Bill Woodger
Sent: Wednesday, October 12, 2016 8:51 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: ABO Automatic Binary Optimizer

Mmmm... I wonder why they would say that?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-12 Thread Bill Woodger
Mmmm... I wonder why they would say that?

It takes the existing executable code of your Enterprise COBOL programs and 
optimises them for new instructions available on ARCH(10) and ARCH(11).

So if your hardware is up-to-date or so, it gives you a route for existing COBOL 
executables to take advantage of instructions introduced since ESA/390.

It doesn't do anything for your source code.

An identical program compiled with V6.1 will/should perform better than an 
ABO'd executable, because there are many more optimizations available to the 
compiler.

If you have a large program stock (of Enterprise COBOL executables) and current 
hardware, ABO gives you a painless (except for cost, and time to do it) way to 
make use of machine instructions that didn't exist when Enterprise COBOL was 
designed. Going to V6, much more care (testing) is needed. ABO can be wash-'n-go.

ABO has been discussed here a couple of times this year.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-12 Thread Charles Mills
Nope. Agree 100% with what @Tom says. The ABO is not a source code migration
tool, it is a compiler. Really -- a very weird compiler. Most compilers take
source code in and produce object code out. The ABO is a compiler that takes
object code in and produces object code out. What good is that? It takes
System 370 object code in and produces z13 object code out.

Why is that useful? Because the speed gains in the last several generations
of mainframe are not in clock/cycle speed. System 370 object code does not
run any faster on a z13 than on a z10. The gains are in new instructions.
The same functionality as that S/370 object code expressed in z13 object
code runs a lot faster.

(Please, no quibbles. Many shortcuts and generalizations in the above. If I
had been perfectly precise it would have read like a legal document. The
general points are correct.)

Charles

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
Behalf Of Lopez, Sharon
Sent: Wednesday, October 12, 2016 7:51 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: ABO Automatic Binary Optimizer

Does anyone know if this product, Automatic Binary Optimizer, will actually
migrate Cobol V4 to V6 for you?  Our IBM reps are telling us that it will
actually do the migration for you.  Based on what I've read, it is a
performance product and I didn't see that capability.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ABO Automatic Binary Optimizer

2016-10-12 Thread Tom Conley

On 10/12/2016 10:50 AM, Lopez, Sharon wrote:

Does anyone know if this product, Automatic Binary Optimizer, will actually 
migrate Cobol V4 to V6 for you?  Our IBM reps are telling us that it will 
actually do the migration for you.  Based on what I've read, it is a 
performance product and I didn't see that capability.

Thanks to everyone in advance.



ABO only creates an optimized LOAD MODULE (program object).  It does not 
convert your source to V6, and it will not give you all the 
optimizations of V6.  Your biggest payback is if you upgrade your CPU, 
then you can run your load modules through ABO and get some of the 
optimization provided by the new hardware.


Regards,
Tom Conley

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


ABO Automatic Binary Optimizer

2016-10-12 Thread Lopez, Sharon
Does anyone know if this product, Automatic Binary Optimizer, will actually 
migrate Cobol V4 to V6 for you?  Our IBM reps are telling us that it will 
actually do the migration for you.  Based on what I've read, it is a 
performance product and I didn't see that capability.

Thanks to everyone in advance.





--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN