The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main as well.

[EMAIL PROTECTED] (Paul Gilmartin) writes:
> Understood.  But is this because z/VM does a superior job of providing
> virtual images of the underlying hardware, or because z/VM provides
> images of an architecture superior to that hardware.  z/VM becomes
> something like another layer of microcode.
>
> I glanced at Sine Nomine's page about OpenSolaris for Z.  There's a
> prominent restriction that it runs only under z/VM, not in an LPAR.
> So it exploits a CP feature.  An easy conjecture, with no evidence,
> is that it uses CP Block DASD I/O to bypass the complexities of
> CKD channel programs.
>
> Then, is it fairer to compare VMWare to z/VM or to PR/SM?
>
> Is OpenSolaris for z eligible for IFL?
>
> Thinking about the recurrent chatter about FBA, might something
> akin to CP Block I/O be moved into PR/SM to provide FBA emulation
> or other device type imaging?

various unix ports dating back to at least the early 80s ... like the UCLA
locus port for aix/370 ... were done for vm370 ... not because of the
complexity of CKD channel programs vis-a-vis block i/o ... but because
of being able to leverage vm to meet error recovery and EREP requirements
... which represented a much, much larger body of code (than a straight
device driver).

dating back to the original cp67 & cms ... CKD disks had been treated as
logical block devices ... with simplified, stylized CKD channel
programs. But the lines-of-code to meet error recovery and EREP
requirements were significantly larger than the much simpler and smaller
inline device driver code.
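as a rough illustration of treating a CKD disk as a logical block device
... a sketch (not actual cp67/cms code; the geometry numbers are
assumptions for illustration) of translating a logical block number into
a cylinder/head/record address of the kind a stylized CKD channel
program would seek/search on:

```python
# Illustrative sketch only -- NOT cp67/cms code. Assumes a hypothetical
# disk formatted with fixed-size records: 12 records per track and
# 15 tracks per cylinder (3330-like numbers, chosen for illustration).
RECORDS_PER_TRACK = 12
TRACKS_PER_CYLINDER = 15

def block_to_cchhr(block):
    """Map a logical block number to (cylinder, head, record).

    Records on a CKD track are conventionally numbered from 1
    (record 0 holds the track descriptor).
    """
    track, record = divmod(block, RECORDS_PER_TRACK)
    cylinder, head = divmod(track, TRACKS_PER_CYLINDER)
    return cylinder, head, record + 1

# usage: block 0 is cyl 0, head 0, record 1; block 180 starts cyl 1
print(block_to_cchhr(0))    # (0, 0, 1)
print(block_to_cchhr(180))  # (1, 0, 1)
```

the point being that once the filesystem works in logical block
numbers, the CKD-specific addressing collapses into one small,
stylized translation like the above ... while the error recovery and
EREP code paths stay large.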

Part of the past FBA push wasn't so much about the complexity of the
inline device driver code ... but that, as part of the FBA
simplification, a significant amount of device physical characteristics
were abstracted. This eliminated a lot of release-to-release transitions
and the significant new device driver support code that came with every
small change in CKD products.

In the middle of the FBA wars ... i had offered driver support to the
MVS device support group. They replied that even with fully tested and
integrated code ... there was still a $26m bill for documentation and
training ... for which I needed a business case. At the time, the
simplified scenario was that a business case required incremental, new
product sales (as opposed to long term infrastructure cost savings).
Their scenario was that FBA support would just result in the same amount
of disk being sold as FBA rather than CKD ... resulting in no
incremental business case to cover the $26m cost of MVS supporting FBA.

misc. past posts mentioning CKD and/or FBA issues
http://www.garlic.com/~lynn/submain.html#dasd

I was also allowed to play disk engineer in bldg. 14 (disk engineering)
and bldg. 15 (disk product test). One of the issues was that they were
doing mainframe "stand-alone", dedicated machine testing (i.e. each test
required prescheduled, dedicated machine time). They had tried running
MVS on the machines (looking to possibly be able to perform multiple
concurrent tests and eliminate the dedicated machine time test
bottleneck). However, the standard MVS product had a 15min MTBF in that
environment. I undertook to rewrite the i/o supervisor to create
bullet-proof error recovery and operation ... enabling multiple
concurrent testing to be done in an operating system environment (and
eliminating the dedicated machine time scheduling bottlenecks). misc.
past posts mentioning getting to play disk engineer
mentioning getting to play disk engineer
http://www.garlic.com/~lynn/subtopic.html#disk

I had originally done simplified "block i/o" interface for CMS & CP67 as
pathlength reduction as an undergraduate in the 60s.

Later in the early 70s, for CP67/CMS, I did a much more powerful,
flexible, lower overhead, and higher thruput API that supported page
mapped operations (even more simplified than FBA channel programs, much
lower pathlength, and much more opportunity for thruput optimization).
On the CMS side of the API, I then implemented a paged-mapped
filesystem. Later I migrated the changes to vm370 ... some
old email from the period
http://www.garlic.com/~lynn/2006v.html#email731212
http://www.garlic.com/~lynn/2006w.html#email750102
http://www.garlic.com/~lynn/2006w.html#email750430
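the flavor of page-mapped file access (as opposed to explicit block
i/o) can be illustrated with a modern analogue ... a sketch using
mmap, not the CMS implementation, just showing the idea that the file
is addressed through the paging machinery and pages fault in on
demand rather than being read with explicit i/o requests:

```python
# Illustrative modern analogue -- NOT the CMS paged-mapped filesystem.
# The file's contents are reached through a memory map; the paging
# subsystem brings pages in on demand instead of the program issuing
# explicit block reads.
import mmap
import os
import tempfile

def sum_bytes_mapped(path):
    """Touch a file's contents through a memory map and sum the bytes."""
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
            # slicing the map reads through the mapping; pages are
            # faulted in lazily by the OS as they are touched
            return sum(m[:])

# usage: write 10 bytes (values 0..9) and sum them via the mapping
fd, path = tempfile.mkstemp()
os.write(fd, bytes(range(10)))
os.close(fd)
print(sum_bytes_mapped(path))  # 45
os.remove(path)
```

with page mapping, optimizations like prefetching, deferred writes,
and sharing fall out of the paging subsystem rather than being
reimplemented per-application ... which is where the pathlength and
thruput advantages come from.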

Later tests on 3380s, with light to moderate disk intensive CMS
applications, showed something like three times the thruput of the
best case with standard block I/O. The thruput advantage increased
further as applications became more & more disk intensive. misc.
past posts mentioning page mapped filesystem work
http://www.garlic.com/~lynn/submain.html#mmap

-- 
40+yrs virtualization experience (since Jan68), online at home since Mar70

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
