[email protected] (Shane Ginnane) writes:
> Let's turn the discussion around Ed, and inject a little left-field
> logic. Why run z/OS in an LPAR at all?  It's already running under a
> hypervisor - why not just junk PR/SM (and CPs) and run everybody on
> IFLs under z/VM?
>
> If IBM can do their development under z/VM, why can't we as customers
> avail ourselves of the inherent advantages.  For "free" of course,
> just like PR/SM ... win/win (except maybe IBM and a few ISVs .... ;-)

pr/sm for 3090 was the original ibm response to amdahl's hypervisor.
during the time that amdahl was developing the hypervisor i gave
presentations at silicon valley baybunch on the work that originally
went into the vm microcode assist a decade earlier for the 138/148 (in
the mid-70s) ... old post with some ecps details:
http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist

and after the meetings, the amdahl people would ask a lot more detailed
questions.

one of the amdahl issues was that starting with 3033 mvs/sp ... it
appeared that ibm was constantly making lots of minor architecture
tweaks that would be required by the latest mvs operating systems (by
comparison, vm370 was shipped such that it ran better with its
microcode assists, but would continue to run even if they weren't
available) ... as a countermeasure to clone processors.

The amdahl response was macrocode ... a slightly modified 370
instruction set that ran in microcode mode ... enormously simplifying
the task of implementing minor architecture tweaks ... especially
compared to the ibm effort, which required being implemented in native
horizontal microcode (a significantly more complex effort). the amdahl
hypervisor then became a relatively straight-forward move of vm370 code
into "macrocode".

By comparison the 3090 pr/sm was a significantly more difficult
undertaking since it had to be done in native 3090 microcode ... even
tho it was an incremental add-on to the SIE support that was already
implemented (for a long time, the SIE performance assist was available
for vm370 running in pr/sm).

one of the issues pointed out about the 3033 mvs/sp microcode assist
... sort-of targeted as the equivalent of ECPS for vm370 ... was that
the 3033 was already running close to one 370 instruction per machine
cycle (the 165 averaged 2.1 machine cycles per 370 instruction,
improved to 1.6 cycles in the 168, and finally close to one per cycle
in the 3033). The low & mid-range machines ran vertical microcode
(somewhat analogous to modern day 370 simulators running on intel
platforms) with an avg. of ten native instructions per 370 instruction.
ECPS was a relatively straight-forward move of vm370 370 instructions
into native microcode on roughly a one-for-one basis, getting a ten
times throughput improvement. Doing the equivalent on the 3033 saw
little or no throughput difference, since the hardware was already
executing 370 instructions at nearly one per machine cycle.
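the arithmetic above can be sketched as follows ... a minimal
illustration using the numbers from the text (python purely for
exposition; the assist model is an assumption, not a measurement):

```python
# Rough model of the ECPS vs. 3033-assist argument: an assist replaces
# a 370 instruction path with native microcode at (assumed) one native
# instruction per 370 instruction replaced, so the speedup is just the
# native-instructions-per-370-instruction ratio of the base machine.
def assist_speedup(native_per_370_instr):
    """Estimated throughput gain from dropping a code path into
    native microcode, under the one-for-one replacement assumption."""
    return native_per_370_instr / 1.0

# low & mid-range machines: vertical microcode, ~10 native per 370 instr
print(assist_speedup(10))   # the ~10x ECPS case
# 3033: already ~1 machine cycle per 370 instruction
print(assist_speedup(1))    # little or no gain
```

which is why ECPS paid off on the 138/148 but the equivalent on the
3033 did not.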

the place where the hypervisor got its performance boost was that it
was a subset of the full vm370 virtual machine function ... so a lot of
extraneous instructions could be eliminated ... and it was a different
machine mode with its own status and registers ... eliminating the
"task-switch overhead" ... aka saving the registers of the executing
virtual machine and loading the vm370 operating system's registers
... and then saving the vm370 operating system's registers and
reloading the virtual machine registers.
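the register save/restore elimination can be counted out in a small
sketch (register count from 370; the per-round-trip accounting is my
illustration, not a measured figure):

```python
# Hedged sketch: cost of one virtual-machine exit/re-entry round trip,
# counted in register moves, with and without a dedicated register set.
REGS = 16  # 370 general registers

def vm370_round_trip():
    # guest -> vm370 -> guest: save guest regs, load vm370 regs,
    # then save vm370 regs, reload guest regs
    return 4 * REGS

def hypervisor_round_trip():
    # separate machine mode with its own status and registers:
    # no guest register file save/restore needed at all
    return 0

print(vm370_round_trip(), hypervisor_round_trip())
```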

for some topic drift ... old email from a 3090 (aka "trout 1.5")
engineer discussing the enhancements that went into 3090 SIE (compared
to 3081 SIE):
http://www.garlic.com/~lynn/2006j.html#email810630
in this post
http://www.garlic.com/~lynn/2006j.html#27 

one of the problems with 3081 SIE ... was that the 3081 had limited
microcode memory and so had come up with a mechanism for swapping
microcode ... invoking SIE required a large amount of hardware overhead
to get all the SIE code swapped into microcode memory (replacing other
microcode ... which typically would have to be swapped back in later).
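the effect of that microcode paging can be sketched with purely
illustrative numbers (both cost constants are assumptions for the
sake of the comparison, not 3081/3090 measurements):

```python
# Toy model of SIE invocation cost: on the 3081, limited microcode
# memory meant each SIE entry could force a swap-in of the SIE
# microcode (displacing other microcode, swapped back later); on the
# 3090, the enhanced SIE stayed resident.
SWAP_COST = 1000   # assumed cycles to swap SIE code into microcode memory
SIE_BODY = 100     # assumed cycles for the SIE entry/exit work itself

def sie_cost_3081(invocations):
    # pay the swap on every invocation in the worst case
    return invocations * (SWAP_COST + SIE_BODY)

def sie_cost_3090(invocations):
    # microcode resident: only the body cost remains
    return invocations * SIE_BODY

print(sie_cost_3081(10), sie_cost_3090(10))
```

under these assumed costs the swap dominates, which is the shape of
the 3081-vs-3090 SIE difference described above.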

Part of the reason for the 3081 SIE performance was that POK had
managed to convince corporate to kill the vm370 product ... and
dedicate all the vm370 development people to supporting mvs/xa
(endicott managed to resurrect the vm370 product mission but had to
reconstitute a development group from scratch). The virtual machine
VMTOOL was created to support MVS/XA development ... but was never
planned to be released to customers (nor was the SIE microcode assist
that was part of VMTOOL execution).

Later POK realized that a lot of customers could use VMTOOL purely for
MVS to MVS/XA migration ... and it was released as VM/MA and VM/SF
... past references
http://www.garlic.com/~lynn/2001m.html#38 CMS under MVS
http://www.garlic.com/~lynn/2001m.html#47 TSS/360
http://www.garlic.com/~lynn/2001n.html#67 Hercules etc. IBM not just missing a great opportunity...
http://www.garlic.com/~lynn/2002e.html#27 moving on
http://www.garlic.com/~lynn/2002m.html#9 DOS history question
http://www.garlic.com/~lynn/2002p.html#14 Multics on emulated systems?
http://www.garlic.com/~lynn/2003g.html#22 303x, idals, dat, disk head settle, and other rambling folklore
http://www.garlic.com/~lynn/2005j.html#54 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
http://www.garlic.com/~lynn/2006j.html#27 virtual memory
http://www.garlic.com/~lynn/2006l.html#25 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
http://www.garlic.com/~lynn/2007.html#23 How to write a full-screen Rexx debugger?
http://www.garlic.com/~lynn/2007b.html#32 IBMLink 2000 Finding ESO levels
http://www.garlic.com/~lynn/2010k.html#72 "SIE" on a RISC architecture
http://www.garlic.com/~lynn/2011b.html#18 Melinda Varian's history page move
http://www.garlic.com/~lynn/2011e.html#26 Multiple Virtual Memory
http://www.garlic.com/~lynn/2011p.html#114 Start Interpretive Execution
http://www.garlic.com/~lynn/2012g.html#19 Co-existance of z/OS and z/VM on same DASD farm
http://www.garlic.com/~lynn/2013n.html#46 'Free Unix!': The world-changing proclamation made 30 years ago today

-- 
virtualization experience starting Jan1968, online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
