Long ago, in the 1980s, we used VM/CMS for program development. On identical
hardware, it beat the dickens out of TSO/ISPF. I still have fond memories
of XEDIT. Of course, we ran MVS on a separate machine. We ran MVS under
VM/SP1 for testing, with shared SPOOL to submit jobs to run on the
production machine. z/OS might run well if it would use the VM "helper"
interfaces the way VSE does.
On Feb 2, 2015 6:46 PM, "Ed Gould" <[email protected]> wrote:

> So, it was IBM saying if you don't run VM, FY?
> I think the many MVS sites would take exception to that.
> From my perspective VM was OK for some things, but not for PRODUCTION.
> VM was a sand box so the real work was to be done on MVS.
>
> Ed
>
>
> On Feb 2, 2015, at 3:49 PM, Anne & Lynn Wheeler wrote:
>
>  [email protected] (Ed Gould) writes:
>>
>>> yet IBM never delivered a source code "maintenance" system. Something
>>> that practically everyone was in need of.
>>>
>>
>> re:
>> http://www.garlic.com/~lynn/2015.html#84 a bit of hope? What was old is
>> new again.
>>
>> science center did the multi-level cms update source maintenance system
>> as part of a joint project with endicott to implement cp67 370 virtual
>> machine emulation on the 360/67.
>>
>> the non-virtual memory 370 emulation was originally used by branch
>> office HONE online cp67 systems to test new operating systems.
>>
>> the full virtual memory 370 emulation was used for development of 370
>> virtual memory operating systems (i.e. 360/67 cp67 virtual memory
>> machine emulation was in regular production use a year before 370
>> virtual memory hardware became available).
>>
>> cp67 distribution was always full source ... and customers typically
>> built their systems from the source. This continued with the vm370
>> followon ... new releases had a single source file per module ... and then
>> monthly maintenance&enhancement distributions were done via incremental
>> add-on updates ... with cumulative source updates included on every
>> monthly maintenance&enhancement distribution. New releases would
>> merge the incremental updates into base source file and things would
>> start again ... accumulating increasing number of incremental source
>> updates.
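In modern terms, that stacked incremental-update scheme looks roughly like the sketch below. This is only an analogy using Unix diff/patch: CMS actually used its own UPDATE command and update-file format, and every file name here is made up.

```shell
# Sketch of stacked incremental source updates (hypothetical files;
# CMS used its own UPDATE format, this uses Unix diff/patch as an analogy).
set -e
mkdir -p /tmp/cms-demo && cd /tmp/cms-demo

# Base source file as shipped with the release.
printf 'LINE1\nLINE2\nLINE3\n' > module.base

# Two monthly incremental updates, each a diff against the result so far.
# (diff exits nonzero when files differ, hence the || true under set -e.)
cp module.base module.work
printf 'LINE1\nLINE2A\nLINE3\n' > v1
diff -u module.work v1 > upd1 || true
printf 'LINE1\nLINE2A\nLINE3\nLINE4\n' > v2
diff -u v1 v2 > upd2 || true

# Building the module = shipped base + every update replayed in order.
cp module.base module.built
patch -s module.built upd1
patch -s module.built upd2
cat module.built
```

Each monthly distribution just adds another update to the stack; a new release merges the stack into a fresh base and the accumulation starts over, as described above.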
>>
>> SHARE waterloo updates and customers used the same process for their
>> source changes ... and large part of internal development did also
>> (which accounted for the origin of a lot of the OCO-wars). Note that new
>> releases ... besides past incremental updates being incorporated into
>> base source files ... could also include a large amount of new
>> function/code ... never before seen by customers as incremental
>> updates ... increasing the difficulty of release-to-release migration.
>> Tools were developed (both inside & at customers) that would analyze new
>> source releases and pick out differences from the latest previous
>> release (with all maintenance/changes applied) to facilitate the
>> release-to-release source transition.
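Those release-to-release tools amount to a three-way comparison: the previous base as shipped, the previous base with local maintenance applied, and the new base. A loose sketch with GNU diff3 (file names are hypothetical; the actual tools were CMS-based, not Unix):

```shell
# Hypothetical sketch of release-to-release migration as a three-way merge.
set -e
mkdir -p /tmp/rel-demo && cd /tmp/rel-demo
# previous release as shipped
printf 'LINE1\nLINE2\nLINE3\n' > old.base
# previous release with the site's own maintenance applied
printf 'LINE1\nLINE2-site\nLINE3\n' > old.local
# new release as shipped (vendor added LINE4)
printf 'LINE1\nLINE2\nLINE3\nLINE4\n' > new.base
# three-way merge: carry the site's change forward onto the new base
diff3 -m old.local old.base new.base > new.local
cat new.local
```

Where the site's changes and the vendor's new code overlap, such a merge conflicts, which is exactly the release-to-release pain the post describes.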
>>
>> There were also periodic internal fights ... where various MVS-based
>> products (like JES2) did all their stuff with CMS source maintenance
>> ... but were required to convert to the "official" internal MVS process
>> for final integration.
>>
>> Note after FS failure
>> http://www.garlic.com/~lynn/submain.html#futuresys
>>
>> q&d 3033 (starting out as 168-3 logic mapped to 20% faster chips) and
>> 3081 efforts were kicked off in parallel as part of mad rush to get
>> stuff into the 370 product pipelines.
>>
>> during the 3033 product life, there started to be minor (supervisor
>> state) tweaks made to the machine which were mandatory for new operating
>> system releases. The clone makers initially responded with operating
>> system patches to work with non-tweak hardware. As patching the
>> operating system was made more and more difficult, the clone makers
>> eventually responded with "macrocode" ... basically 370 instructions
>> running in new machine mode that would implement the tweak features
>> .... this enormously simplified the implementation of such features
>> ... compared to the enormous difficulty involved in generating native
>> microcode. This shows up in the 3090 timeframe when a clone vendor had used
>> macrocode to create "hypervisor" support ... and it was a much larger (&
>> longer) effort for 3090 to eventually respond with PR/SM.
>>
>> In the current timeframe, customers having their own programming
>> support staff could be construed as money that could otherwise be
>> spent on vendor software&services (2012 claim that
>> processor sales represented 4% of revenue ... but total mainframe group,
>> including software&services was 25% of total revenue and 40% of profit).
>>
>> The same efforts to inhibit clone vendor patches ... also increasingly
>> made it difficult for customers to move their changes to new releases
>> (they either stayed on their old hardware or moved to new clone hardware
>> that worked with the older releases). The OCO-wars could be viewed as
>> both inhibiting new operating system versions working on clone
>> processors and minimizing customer migration latency to latest software
>> releases and hardware models.
>>
>> One of the worst case examples starts during the FS period, I continued
>> to work on 370s (and periodically ridicule FS). Also one of my hobbies
>> was producing highly enhanced production operating system distribution
>> for internal datacenters (science center was on 4th flr of 545 tech sq,
>> and multics was on 5th flr of 545 tech sq, at one point I would needle
>> the multics crowd that I had more internal datacenters running my
>> enhanced operating systems than all the datacenters in the world running
>> multics). Anyway, for some reason, one of these versions was made
>> available to AT&T longlines ... which then made a lot of their own
>> enhancements and distributed it throughout a lot of AT&T. Nearly a
>> decade later the IBM AT&T national sales rep tracks me down to ask me to
>> help with AT&T. AT&T would apply patches to the decade-old operating
>> system to move it to the latest 370s ... except it didn't have
>> multiprocessor support ... and initially the 3081 was going to be a
>> multiprocessor "ONLY" machine. Large parts of AT&T were looking at moving
>> to faster, newer, clone (uni-)processors ... because they were dependent
>> on this decade old operating system (that didn't have multiprocessor
>> support).
>>
>> As an aside, if the original AT&T relationship had continued ... about
>> 18 months after they got the original (internal) version ... they could
>> have gotten an update with multiprocessor support. past posts mentioning
>> multiprocessor
>> http://www.garlic.com/~lynn/subtopic.html#smp
>>
>> past posts mentioning the science center
>> http://www.garlic.com/~lynn/subtopic.html#545tech
>>
>> --
>> virtualization experience starting Jan1968, online at home since Mar1970
>>
>> ----------------------------------------------------------------------
>> For IBM-MAIN subscribe / signoff / archive access instructions,
>> send email to [email protected] with the message: INFO IBM-MAIN
>>
>
>
