martin_pac...@uk.ibm.com (Martin Packer) writes:
> Interesting you mention "Jupiter Project"...
>
> ... In the late 1980's as a young SE I supported one of the "Jupiter 
> Council" customers in their roll out of what something called "Jupiter" 
> turned into: DFSMS.
>
> I'm wondering if your mentioned SSD was another part of a grander plan - 
> incorporating storage management and hardware.

re:
http://www.garlic.com/~lynn/2013d.html#0 Query for Destination z article -- 
mainframes back to the future

reuse of code names ... the POK solid-state Jupiter was late 70s. HSM & DFSMS
were west coast disk division ... although JUPITER (DFSMS) was in
progress in STL by at least early 1983. There was an effort in this
timeframe to do a totally different kind of vm370 system (unrelated to
the then-current vm370 or the internal vmtool that would become vm/xa) and
in 1983 there were joint reviews with the JUPITER group (at the time in
STL).

in the late 70s, I had been conned into playing disk engineer part time
over in bldg. 14&15 (disk engineering lab and disk product test lab)
http://www.garlic.com/~lynn/subtopic.html#disk

... and it was the disk division that wanted a more competitive 3350
(it was the POK group that felt it would be in competition with the
solid-state disk that they were hoping to do).

when I first transferred to san jose research ... they let me
wander around various places ... recent mention about STL
IMS group 
http://www.garlic.com/~lynn/2013c.html#62 What Makes an Architecture Bizarre?

but they also let me wander around bldg 14&15. At the time they were
running disk development regression tests on stand-alone mainframes
(they had a variety of different mainframes, frequently getting early
engineering processors to validate new channels as well as using them to
validate engineering development dasd) ... with dedicated (stand-alone)
mainframe time scheduled 7x24 around the clock. They had recently
attempted to use MVS in the environment, enabling concurrent testing
... but found MVS had a 15min MTBF (hung/died, requiring re-ipl) in that
environment.

I offered to rewrite the I/O supervisor, making it bullet-proof and never
fail ... which greatly increased their productivity, having on-demand,
anytime, concurrent testing available. this got me sucked into
diagnosing hardware problems because the initial fingerpointing was
frequently at my software.

later I wrote an internal-only document describing the effort and happened
to mention the MVS 15min MTBF ... which brought down the wrath of the
MVS group on my head (I think they would have gotten me fired if they
could have figured out how).

note however, it was the san jose disk software group that quoted me
the $26M business case requirement for MVS FBA support ... even if I
gave them fully integrated and tested code ... to cover pubs, education,
training, etc. (and I could only use incremental new disk sales ...
possibly $200m-$300m required ... and the claim was they were selling
disks as fast as they could make them ... so any FBA support would
just shift the same amount of sales from CKD to FBA; I was precluded
from using life-cycle savings and/or other business justifications).
misc. past posts mentioning CKD/FBA
misc. past posts mentioning CKD/FBA
http://www.garlic.com/~lynn/submain.html#dasd

also, unrelated to the above ... networking and crypto & the Los Gatos
lab ... recent reference:
http://www.garlic.com/~lynn/2013d.html#1 IBM Mainframe (1980's) on You tube

-- 
virtualization experience starting Jan1968, online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
