I'd suggest that the list as you put it has pretty much been there since
day 1 of OS/360 (before Virtual Storage). As an example, we had an IBM SE
(yes, a real SE) work up a document (later published as an orange book)
showing optimal blocksizes on tape AND buffering of the tape, and how it
would benefit the company, i.e., reduced run time, faster sorts, etc. I
believe that his document started development of IBM's SAM-E (the cost
option that made chained scheduling the default and set the default BUFNO
to 5). We were among the first to buy and install the product (we had the
black and blue marks to prove it). Yes, it was rough (although some had
worse experiences than we did), but we stuck with it and got a real
benefit from it. It wasn't IBM's most shining moment, but to this day the
benefit to the people who use this product (now included in DFSMS) is
real. For example, our nightly batch window went from 10 hours of
processing to half that amount. Yes, we needed more storage, but the cut
in run time saved us from buying a larger machine at the time (btw, this
was at a different company). So this did help out at many other companies.
No JCL DD changes; it installed and it worked.
Don't get me wrong. I completely agree that there was tremendous benefit in
making I/O more efficient; however, in many cases there were significant
trade-offs that had to be made. I still remember many situations where we
had an 80-100K partition (or region) available to run in, and you can bet
that I wasn't going to waste that space by allocating five 7K buffers (2314
devices) to improve a sequential file's performance. There would have been
virtually no room for the program code if the only concern was optimal
blocking. Selected jobs or high-priority work that needed efficiency were
generally given the largest amount of memory and CPU access, precisely for
the reasons you stated. However, I also remember many times having to
specify BUFNO=1 just to make a program fit into the storage available.
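The arithmetic behind that trade-off can be sketched quickly. This is an
illustrative calculation only, using a ~7,294-byte full-track blocksize for
a 2314 and the figures quoted above (the exact numbers in any given shop
would of course have varied):

```python
# Rough cost of sequential-access buffers in a small partition:
# buffer space = BUFNO * BLKSIZE, taken out of the region before
# any program code is loaded.
BLKSIZE = 7294        # approx. 2314 full-track blocksize, in bytes
REGION = 100 * 1024   # a 100K partition/region

def buffer_cost(bufno, blksize=BLKSIZE):
    """Bytes of the region consumed by access-method buffers."""
    return bufno * blksize

for bufno in (5, 1):
    cost = buffer_cost(bufno)
    print(f"BUFNO={bufno}: {cost} bytes, "
          f"{100 * cost / REGION:.0f}% of a 100K region")
```

With five full-track buffers, roughly a third of the region is gone before
the program even loads, which is exactly why BUFNO=1 was sometimes the only
way to make a job fit.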
Additionally, my point about virtual storage was based on the experience of
having to examine which modules were loaded into memory by the OS to see
which could be removed to avoid virtual storage constraint. Things improved
somewhat with the introduction of MVS, but even so, given the amount of
real storage available, I think many people have forgotten that you couldn't
have 100 CICS regions running at 8 MB each. In today's environment, people
have buffer pools defined that represent far more storage than was available
for several systems in those days, so I/O optimization was something one had
to be judicious about.
I am neutral on this issue, as the issues you cite are partly true and
partly false. Should it be transparent? I am really mixed. I have seen
(somewhat) the PC side, and it leads to sloppy programming and frequently
unpredictable results. Without taking sides too much, I would rather have
predictable results.
I understand what you're saying, but I guess to extend the thought a bit,
my point is really that the more transparent things are made, the more we
depend on someone else (developers?) to make the decisions for us. In my
mind this will usually result in significantly less flexibility, and will
tend to give external parties the final say in what is considered "optimal".
One of the significant benefits of the mainframe (z/OS) environment is that
an installation has numerous choices to exploit their particular situation
and means of running a business, while many of the other platforms are quite
rigid in the options available to adapt to differing circumstances. I'm
sure you've seen well-run as well as poorly run data centers, but at least
the choices and options were available. All too often the alternatives (I'm
thinking PCs here) are like the absurdity of experiencing an error or
software failure and having that stupid pop-up box appear which allows for
the singular option of specifying "OK".
Part of the cost is that there are certain rules that must be followed,
and if they aren't followed, INCORRECT-OUT (as they say). The PC side has
claimed that small things should not bother programmers. Well, up to a
point. LRECL and blocksize are (in my world) two different animals. As
others have put it, blocksize should (the majority of the time) be
irrelevant. LRECL is not irrelevant; it is the fundamental way units of
data are presented to the programmer.
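That distinction can be made concrete with a small sketch. The names and
numbers below are illustrative, not any real access-method API: the program
deals in LRECL-sized records, while the access method deals in
BLKSIZE-sized blocks, and only the latter should be free to change:

```python
# LRECL vs BLKSIZE for fixed-length (RECFM=FB-style) records:
# the block is a physical I/O unit; the record is the logical unit
# a program actually reads.
LRECL = 80            # logical record length, seen by the programmer
BLKSIZE = 3200        # physical block size: 40 records per block

def deblock(block, lrecl=LRECL):
    """Split one physical block into the logical records a program sees."""
    return [block[i:i + lrecl] for i in range(0, len(block), lrecl)]

block = b"X" * BLKSIZE
records = deblock(block)
print(len(records), len(records[0]))   # 40 records of 80 bytes each
```

Doubling BLKSIZE here changes only how many records travel per I/O;
changing LRECL changes what every record-level program sees, which is why
one can be transparent and the other cannot.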
I agree. My only point is that in many ways I think it's presumptuous of
programmers to expect that they shouldn't have to know their chosen craft.
Adam
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html