On Oct 30, 2007, at 9:22 PM, Gerhard Adam wrote:
>> But SDB came too late: if it had been present in rudimentary
>> form, supplying a valid but nonoptimal BLKSIZE, in OS/360
>> release 1, coding BLKSIZE could always have been optional,
>> and much of the rough transition to present techniques could
>> have been avoided.
> Boy, are you dreaming. Maybe you don't recall, but even with
> Virtual Storage Constraint (VSC), and system programmers evaluating
> which modules could be eliminated from processor storage (LPA) so
> that demands could be met in CSA or private ... the use of
> blocksizes for programs and early buffer pools (i.e., VSAM files,
> especially with multiple strings) was quite significant, and the
> last thing I would have wanted was an operating system making
> arbitrary decisions about what was considered "optimal". Maybe it
> was optimal for I/O, but it could have killed many a paging
> subsystem with excessive storage demand. Given the problems and
> constraints that had to be dealt with in early systems, I seriously
> doubt that SDB was at the top of anyone's list of issues that needed
> to be addressed as a priority item.
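For readers who haven't seen it, the "optimal" value SDB picks is just half-track blocking: the largest multiple of LRECL that fits in half a track of the device. A minimal sketch of that arithmetic (my own illustration, not IBM's code; the 27998 half-track figure is for a 3390):

```python
# Sketch of system-determined blocksize (SDB) arithmetic for RECFM=FB
# data sets -- the largest multiple of LRECL not exceeding half a track.
# This is an illustration, not IBM's actual implementation.
HALF_TRACK_3390 = 27998  # largest blocksize usable with half-track blocking on 3390

def sdb_blksize(lrecl: int, half_track: int = HALF_TRACK_3390) -> int:
    """Return the largest multiple of lrecl that fits in half a track."""
    if lrecl <= 0 or lrecl > half_track:
        raise ValueError("LRECL must be positive and fit in half a track")
    return (half_track // lrecl) * lrecl

# 80-byte card images: the system chooses 27920 (349 records per block).
print(sdb_blksize(80))   # 27920
print(sdb_blksize(100))  # 27900
```

Whether that choice is "optimal" depends, as Gerhard says, on what you are optimizing: it minimizes wasted track capacity and I/O operations, at the price of bigger buffers in storage.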
I would suggest that the list, as you put it, has pretty much been
there since day one of OS/360 (before virtual storage). As an example,
we had an IBM SE (yes, a real SE) work up a document (later published
as an orange book) showing optimal blocksizes on tape AND tape
buffering, and how they would benefit the company, i.e., reduced run
times, faster sorts, etc. I believe that his document started
development on IBM's SAM-E (the cost option that made chained
scheduling the default and set the default BUFNO to 5). We were among
the first to buy and install the product (we had the black and blue
marks to prove it). Yes, it was rough (although some had worse
experiences than we did), but we stuck with it and got a real benefit
from it. It wasn't IBM's most shining moment, but we got real value
from it, and to this day the benefit to the people who have used this
product (now included in DFSMS) is real. As an example, our nightly
batch window went from 10 hours of processing to half that. Yes, we
needed more storage, but the cut in run time saved us from buying a
larger machine at the time (by the way, this was at a different
company). So this did help out at many other companies. No JCL DD
changes; it installed and it worked.
> As for "rough transitions", I have to wonder whether people who
> can't get BLKSIZE and LRECL straight in their minds are in any
> position to be designing or developing anything. This is among
> the most trivial of the trivial problems associated with data
> management. This mentality of having the operating system do it
> is precisely why people overload systems and wonder why throughput
> suffers, or why DBAs turn on every trace under the sun and wonder
> at the attendant overhead in database performance.
I am neutral on this issue, as the points you cite are somewhat true
and somewhat false. Should it be transparent? I am really mixed. I
have seen (somewhat) the PC side, and it leads to sloppy programming
and frequently unpredictable results. Without taking sides too
strongly, I would rather have predictable results.
> Like it or not, this is a technical field, and such a trivial
> problem shouldn't even be fodder for discussion.
I don't agree; there is almost always room for discussion, and when
there is, each side has room for movement. One key issue that I think
you have to take a stance on is predictability. The MF world (at
least IBM's MF) has always been extremely reliable, and that comes at
a cost. Part of the cost is that there are certain rules that must be
followed, and if they aren't followed, INCORRECT-OUT (as they say).
The PC side has claimed that small things should not bother
programmers. Well, up to a point. LRECL and blocksize are (in my
world) two different animals. As others have put it, it is agreed
that blocksize should be (the majority of the time) irrelevant. LRECL
is not irrelevant: it is the fundamental way units of data are
presented to the programmer, who acts on (programs for) the
information presented as a coherent, discrete piece. If there is no
rhyme or reason to how data is presented, how can a programmer
program for data that is essentially unknown? That is almost like
speech recognition or handwriting analysis: you have to have
artificial intelligence, and current computers can't come close to
doing a reasonable job. I also think there will have to be a major
leap forward before computers can really do AI.
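To make the distinction concrete, here is a minimal sketch (my own illustration; the dataset contents and field names are made up) of why application code depends on LRECL while blocksize stays invisible to it. The program carves a byte stream into fixed 80-byte records and pulls a field out of fixed columns; it would work the same whatever blocksize the data was stored with, but it breaks completely if the record length is unknown:

```python
# Why LRECL matters to the application while BLKSIZE need not:
# the program's logic is written against a fixed record length.
# (Illustrative example; field layout is invented.)
LRECL = 80  # the agreed record length the program is coded against

def records(data: bytes, lrecl: int = LRECL) -> list[bytes]:
    """Split a deblocked RECFM=FB byte stream into lrecl-sized records."""
    if len(data) % lrecl != 0:
        raise ValueError("stream is not a whole number of records")
    return [data[i:i + lrecl] for i in range(0, len(data), lrecl)]

# Two 80-byte "card image" records; the account ID lives in columns 1-8.
data = b"ACCT0001".ljust(LRECL) + b"ACCT0002".ljust(LRECL)
recs = records(data)
acct_ids = [r[:8].decode() for r in recs]
print(acct_ids)  # ['ACCT0001', 'ACCT0002']
```

Change how the records were blocked on the volume and nothing here moves; change LRECL without telling the program and every field offset is wrong.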
Remember that at one time storage was NOT cheap; now it is. Also
remember that processor cycles were once expensive; now they are not.
Ed
Adam
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html