But SDB came too late: if it had been present in rudimentary
form, supplying a valid but nonoptimal BLKSIZE, in OS/360
release 1, coding BLKSIZE could always have been optional,
and much of the rough transition to present techniques could
have been avoided.

Boy, are you dreaming. Maybe you don't recall, but even with Virtual Storage Constraint (VSC), and system programmers evaluating which modules could be eliminated from processor storage (LPA) so that demands could be met in CSA or private storage, the use of block sizes for programs and early buffer pools (i.e., VSAM files, especially with multiple strings) was quite significant, and the last thing I would have wanted was an operating system making arbitrary decisions about what was considered "optimal". Maybe it was optimal for I/O, but it could have killed many a paging subsystem with excessive storage demand. Given the problems and constraints that had to be dealt with in early systems, I seriously doubt that SDB was at the top of anyone's list of issues that needed to be addressed as a priority item.

As for "rough transitions", I have to wonder whether people who can't get BLKSIZE and LRECL straight in their minds are in any position to be designing or developing anything. This is among the most trivial of the trivial problems associated with data management. This mentality of having the operating system do it is precisely why people overload systems and wonder why throughput suffers, or why DBAs turn on every trace under the sun and then wonder at the attendant overhead in database performance.
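And it really is trivial. For a fixed-blocked (FB) dataset, the whole BLKSIZE/LRECL relationship boils down to picking the largest multiple of the record length that fits in a device-efficient block. A rough sketch of that arithmetic, assuming half-track blocking on 3390 DASD (27998 bytes per half track) and the 32760-byte DCB ceiling, with the function name being my own:

```python
# Sketch of how a system-determined BLKSIZE for a fixed-blocked (FB)
# dataset can be derived: the largest multiple of LRECL that fits in a
# device-efficient block. Assumes half-track blocking on 3390 DASD;
# fb_blksize is a hypothetical helper, not a real system service.

HALF_TRACK_3390 = 27998   # half-track block capacity on 3390 DASD
DCB_MAX_BLKSIZE = 32760   # architectural BLKSIZE ceiling in the DCB

def fb_blksize(lrecl: int, target: int = HALF_TRACK_3390) -> int:
    """Largest multiple of lrecl fitting within the target block size."""
    limit = min(target, DCB_MAX_BLKSIZE)
    if lrecl <= 0 or lrecl > limit:
        raise ValueError("LRECL must fit in a single block")
    return (limit // lrecl) * lrecl

# fb_blksize(80) -> 27920, the classic half-track blocking for LRECL=80
```

Note what this does *not* decide: how many of those blocks end up resident in buffer pools, which is exactly where the real storage pressure comes from.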

Like it or not, this is a technical field and such a trivial problem shouldn't even be fodder for discussion.

Adam

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html