[I have cut all the stuff and am directing this in answer to various posts.]

I am glad that there are those of you who know CISC vs. RISC, state
switching, memory protection schemes, and all that. Some of you even
know that an apparent clock cycle and a micro-code cycle may NOT be the
same duration. Some of you are knowledgeable about micro-code word
widths, how many stages a CPU pipe contains, and the width of the
various buses of different machines and models. Between all of us, then,
we should be able to come up with a good benchmark system.

I've seen too many systems that were force-marched off a mainframe
environment to a non-mainframe environment by people who had short
deadlines and only had time to make a system that did the same things as
the mainframe. What a WRETCHED system that made, and it ran incredibly
poorly.

But we keep talking about the power of the mainframe. Isn't it time to
put up or shut up?

For normalization purposes, I chose COBOL and a database (a common
situation). I am also suggesting a batch-oriented report. We could
instead choose an ALC-based system, but then we would have to emulate it
under Windows, which wouldn't be considered fair. We could also choose
to use C or C++.

If you wanted to, you could set the whole thing up to provide a browser
interface and do some interactive updates as well, but human
interface/reaction time would skew your timings.

Again, for normalization purposes, the whole system must run on the same
machine/LPAR, so we need to make sure that the whole system can fit and
run in a single "image". That forces the O/S to deal with cross-memory
communications (however it is implemented), security (such as it may
be), and I/O (however much) in order for the application to fetch the
data and then process it.

And by using COBOL with DB calls, we get to use the same "command set"
on both systems (I/O offloads to the DB engine). It is then the
compiler, which one would expect to be optimized for its platform, that
generates the code that runs the report and writes "line mode" output
(basic sequential file I/O).
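As a rough sketch of what such a report program might look like -- the
table name ACCOUNTS, columns ACCT_NO and BALANCE, and the output DD name
RPTOUT are all hypothetical, and the embedded-SQL style assumes a
DB2-like precompiler is available on each platform:

```cobol
       IDENTIFICATION DIVISION.
       PROGRAM-ID. BENCHRPT.
      * Hedged sketch only: ACCOUNTS, ACCT_NO, BALANCE, and RPTOUT are
      * hypothetical names. Reads rows through the DB engine and writes
      * basic sequential "line mode" output.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT REPORT-FILE ASSIGN TO RPTOUT.
       DATA DIVISION.
       FILE SECTION.
       FD  REPORT-FILE.
       01  REPORT-LINE              PIC X(132).
       WORKING-STORAGE SECTION.
           EXEC SQL INCLUDE SQLCA END-EXEC.
       01  WS-ACCT-NO               PIC X(10).
       01  WS-BALANCE               PIC S9(11)V99 COMP-3.
       01  WS-DETAIL.
           05  WS-D-ACCT            PIC X(10).
           05  FILLER               PIC X(4)  VALUE SPACES.
           05  WS-D-BAL             PIC -(11)9.99.
       PROCEDURE DIVISION.
           EXEC SQL
               DECLARE ACCT-CSR CURSOR FOR
               SELECT ACCT_NO, BALANCE FROM ACCOUNTS
           END-EXEC.
           OPEN OUTPUT REPORT-FILE
           EXEC SQL OPEN ACCT-CSR END-EXEC
      *    Fetch until SQLCODE goes non-zero (+100 = end of cursor)
           PERFORM UNTIL SQLCODE NOT = 0
               EXEC SQL
                   FETCH ACCT-CSR
                   INTO :WS-ACCT-NO, :WS-BALANCE
               END-EXEC
               IF SQLCODE = 0
                   MOVE WS-ACCT-NO TO WS-D-ACCT
                   MOVE WS-BALANCE TO WS-D-BAL
                   WRITE REPORT-LINE FROM WS-DETAIL
               END-IF
           END-PERFORM
           EXEC SQL CLOSE ACCT-CSR END-EXEC
           CLOSE REPORT-FILE
           GOBACK.
```

The point of the sketch is that the identical source, precompiled and
compiled on each platform, exercises that platform's compiler, DB
engine, and sequential I/O with no source changes.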

We already know that all of this will run in a single z/OS environment,
because many shops are already doing it. What we want to do is
demonstrate, rather visibly, which system pushes the work through
faster.

After all, some years ago there was a comparison between a Univac and a
S/370. The Univac was an octal-based machine with 36-bit words -- great
for number crunching in its day. The Univac ate the S/370's dinner,
lunch, and breakfast when it came to number crunching and running a
single job. But when multiple jobs ran simultaneously on a REAL-storage
system, only as many jobs as would fit in memory could run. The S/370
finished first because of Virtual Storage and its ability to multi-task
(even with a single CPU).

So, if the whole test can be built and run from a DVD -- one for a
Windows system, one for Linux, and one for z/OS -- then we have
something that non-technical people can see and appreciate. And I think
it would be in our best interests to do it. I suspect IBM doesn't do it
because it would cause problems for several of its sales organizations.

THE biggest problem that I can see right now is finding a database
engine common to z/OS, Linux, and Windows that can be used without
licensing problems. I'm not a big DB person, so I don't know the answer
to that.

Lastly, COBOL has died how many times now? Yet it is supported on how
many platforms and O/Ses?

Regards,
Steve Thompson

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html