Re: Memory-Lane Monday: Documentation just takes up too much space | Computerworld

2020-03-21 Thread Anne & Lynn Wheeler
wdonze...@gmail.com (William Donzelli) writes:
> Al "bitsavers" Kossow has a HUGE backlog of material to scan -
> something like 25 pallets of stuff or something crazy like that - so
> scanning and archiving, while constant, picks away at that pile
> depending on which way the wind blows and who is begging for what.
>
> Almost constant, I should say. Scanning is shut down right now, as CHM
> (and his lab) is closed and locked up.
>
> Of course, if some *really* good stuff gets presented to him, it might
> just get in the short queue for scanning. Things like maintenance
> binders for S/360s and the like...

I scanned the SHARE LSRAD report from 1979 ... he sent me his app to arrange
left/right pages. The problem I had was that LSRAD was published right after
copyright law extended the term of protection (otherwise it would have been
in the public domain), and I had trouble finding somebody at SHARE who would
sign off.

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: What is a mainframe?

2020-01-14 Thread Anne & Lynn Wheeler
z.sch...@gmail.com (z/OS scheduler) writes:
> IMHO TCP/ip is part and parcel of this new "Open Source / Written by
> Hackers" we are living in.
> I cannot believe that C.C.I.T.T.would have recommended to IBM to make their
> product more hack-able - unless Microsoft or SUN had big influence on
> C.C.I.T.T.

The original mainframe TCP/IP implementation was done in VS/Pascal, which
had none of the typical exploits commonly found in C-language TCP/IP
implementations. The communication group fought a fierce battle to prevent
its release. When they lost that battle, they changed their story and said
that since it was "communication" it had to be released through the
communication group. What shipped used nearly a whole 3090 processor to get
44kbytes/sec aggregate throughput.

I then did the enhancements to support RFC 1044 and, in tuning tests at
Cray Research between a Cray and a 4341 ... got channel-speed sustained
throughput using only a modest amount of the 4341 processor (something like
a 500 times improvement in bytes moved per instruction executed).

Later the communication group hired a Silicon Valley contractor to
implement TCP/IP support directly in VTAM. He initially demonstrated
TCP/IP running significantly faster than LU6.2. He was then told that
*everybody* knows that a *valid* TCP/IP implementation runs significantly
slower than LU6.2, and that they would only be paying for a *valid* TCP/IP
implementation.

After leaving IBM, I was brought in as a consultant to a small
client/server startup that wanted to do payment transactions on their
server (two Oracle people that I had worked with at IBM when we were doing
IBM's HA/CMP product were then at the startup, responsible for something
called "commerce server"). The startup had invented this technology they
called "SSL" that they wanted to use; the result is now frequently called
"electronic commerce". I had complete responsibility for the server to
payment networks ... but could only make recommendations on the
client/server side ... some of which were almost immediately violated
... which continues to account for some number of exploits.

At the time, internet exploits were about half C-language-related
programming problems and half social engineering ... with a few
misc. other items. Then at the 1996 Microsoft MDC conference at Moscone,
all the banners said "Internet" ... but the constant refrain in every
session was "protect your investment" ... aka Visual Basic applications
embedded in data files that would be automagically executed. They were
going to transition from the safe, small, closed LAN network environments
to the wild anarchy of the Internet w/o any additional countermeasures. By
the end of the decade over 1/3rd of "internet" exploits were these
automagically executed code snippets (the numbers of the other exploits
didn't decrease, there was just an explosion of this new category of
exploits).

In the early part of the century I did some work on categorizing exploits
in the NIST CVE exploit database ... and tried to get MITRE to require
additional information in exploit reports. At the time MITRE said that they
had a hard enough time getting reports to have any information
... and additional requirements would just inhibit people from writing
anything.

Some archived posts about CVE exploit categorizing:
http://www.garlic.com/~lynn/2004e.html#43
http://www.garlic.com/~lynn/2005d.html#0
http://www.garlic.com/~lynn/2005d.html#67
http://www.garlic.com/~lynn/2005k.html#3
http://www.garlic.com/~lynn/2007q.html#20

Old posts about the IBM paper revisiting the 30-yr-old government MULTICS
security evaluation ... MULTICS was implemented in PL/I and had none of the
exploitable bugs typical of C-language implementations.
http://www.garlic.com/~lynn/2002l.html#42
http://www.garlic.com/~lynn/2002l.html#44

The copy of the IBM paper was originally on an IBM website ... but all such
websites have since disappeared and I had to find a copy at other
locations.

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: How many ways can one sentence be wrong dept

2020-01-13 Thread Anne & Lynn Wheeler
Remember that no real CKD devices have been made for decades ... all are
simulated on industry-standard fixed-block devices ... requiring a fair
amount of electronics and processing between the emulated CKD layer and the
real fixed-block hardware (whether fixed-block spinning disks or
fixed-block SSD).

A lot of the CKD optimization work ... may actually have little or no
meaning by the time things reach the fixed-block physical device.
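
As a rough illustration (not the actual controller algorithm; the track
size, block size, and per-record overhead below are just assumed round
numbers), the emulation layer packs the CKD track image into fixed-size
blocks, so record/track geometry tuning at the CKD level doesn't map
one-to-one onto what the physical device sees:

```python
# Minimal sketch of CKD-on-fixed-block emulation arithmetic.
# Assumptions (illustrative only): an emulated 3390-style track of ~56,664
# usable bytes, physical fixed blocks of 4096 bytes, and a flat per-record
# overhead standing in for the emulated count/key fields.

TRACK_BYTES = 56664      # assumed emulated CKD track capacity
FIXED_BLOCK = 4096       # assumed physical block size (disk or SSD)
RECORD_OVERHEAD = 32     # assumed bytes of count/key metadata per record

def fixed_blocks_for_track(record_lengths):
    """Return how many physical fixed blocks one emulated track consumes."""
    used = sum(RECORD_OVERHEAD + length for length in record_lengths)
    if used > TRACK_BYTES:
        raise ValueError("records exceed emulated track capacity")
    # The emulation stores the track image rounded up to whole fixed blocks.
    return -(-used // FIXED_BLOCK)   # ceiling division

# Ten 4KB records and forty 1KB records occupy the same number of physical
# blocks, even though their CKD track geometry looks quite different.
print(fixed_blocks_for_track([4096] * 10))   # -> 11
print(fixed_blocks_for_track([1024] * 40))   # -> 11
```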

m...@hogstrom.org (Matt Hogstrom) writes:
> Out of curiosity, its been a while since I did storage admin but it
> occurred to me that for the most part a lot of the work in defragging,
> worrying about disk geometry and other issues are really not / less of
> an issue with cache and SSD technologies.  So, perhaps naive on my
> part, but it would seem to me the work to “defrag” is really more to
> keep up the legacy z/OS concepts like # of extents, CKD processing for
> PDS’, etc.  Are there benefits to defragging these days apart from the
> consequences of the limitations from older architectures and paradigms
> like directory blocks and member placement?

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: Water-cooled 360s?

2019-12-13 Thread Anne & Lynn Wheeler
01f25da983e8-dmarc-requ...@listserv.ua.edu (Robert Longabaugh)
writes:
> I worked at a telco in the 1980s and the 3033, 3032, and 3033MP were water
> cooled.  There was a 3037 PCDU (Power/Cooling Distribution Unit).
>
> I think the 3031 was air cooled.

During the Future System period in the early-to-mid 70s (FS was completely
different from 370 and was going to replace it), internal politics was
killing off 370 efforts (the lack of 370 products during the FS period is
credited with giving the clone makers their market foothold). Then when FS
imploded there was a mad rush to get stuff back into the 370 product
pipelines, and the 303x & 308x quick efforts were kicked off in parallel.

They took a 158-3 engine with just the integrated channel microcode (and
w/o the 370 microcode) for the 303x channel directors.

The 3031 was then a 158-3 engine with the 370 microcode (and w/o the
integrated channel microcode) plus a 2nd 158-3 engine as the 303x channel
director.

The 3032 was a 168-3 reconfigured to work with the 303x channel director as
external channels.

The 3033 started out as the 168-3 logic remapped to 20% faster chips; some
logic tweaks then got it up to 1.5 times a 168-3.

The 3081 was then some leftover work from Future System ... another
description of the 3033, 3081, and Future System:
http://www.jfsowa.com/computer/memo125.htm

3081 TCM fried story: the internal side of the heat exchanger had a flow
sensor but the external side didn't. One customer lost flow on the external
side ... and by the time the heat sensor registered the rise in temperature
and cut power ... it was too late, and the TCMs were fried. After that,
flow sensors were retrofitted to the external side.

When the 168-3 engineers got the 3033 out the door, they then started on
the 3090 (overlapping with work on the 3081).

I was involved in an effort to do a 16-way 370 SMP, and I con'ed some of
the processor engineers working on the 3033 into getting involved in their
spare time (a lot more interesting than what they were doing for the 3033).
At first everybody thought it was great ... but then somebody informed the
head of POK that it could be decades before the POK favorite-son operating
system had effective 16-way support. The head of POK then invited some of
us to never visit POK again (and the engineers working on the 3033 to stop
being distracted) ... I could sometimes still sneak back into POK.

Note that POK didn't finally ship a 16-way SMP until the z900 in Dec2000
(over 20yrs later).

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: MIPS chart for all IBM hardware model

2019-11-07 Thread Anne & Lynn Wheeler
gib...@wsu.edu (Gibney, Dave) writes:
> Unfortunately, my search for Phil's tables ended here.
> https://audifans.com/mirror/www.isham-research.co.uk/mips.html

You are better off with this from the Wayback Machine ... 2016:
https://web.archive.org/web/20160315224541/http://www.isham-research.co.uk/mips.html
base:
https://web.archive.org/web/20160315143331/http://www.isham-research.co.uk/

In the audifans copy ... only some of the hrefs are rewritten to point
relative to the copy ... so many go to the original URL (not the copy)
... which is now something totally different.

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: MIPS chart for all IBM hardware model

2019-11-07 Thread Anne & Lynn Wheeler
sme...@gmu.edu (Seymour J Metz) writes:
> Of course, IBM does not claim that those numbers reflect a Meaningless
> Indicator of Processor Speed (MIPS), but rather a well defined (LSPR
> ITR) benchmark.

Jim Gray was one of the primary people behind the original SQL/RDBMS,
System/R ... and then left IBM Research (trying to palm off a bunch of
stuff on me) for Tandem, then DEC, then Microsoft.

One of the things he did, starting while at Tandem, was standardized DBMS
transaction benchmarks:
https://jimgray.azurewebsites.net/

Numbers per system, numbers per total system cost ($$$), and more recently
numbers per unit of power.

Both cluster supercomputing (grid) and cloud megadatacenters have so
significantly dropped system costs (they have claimed for over a decade
that they assemble blade systems for 1/3rd the cost of brand-name systems)
... that power/cooling has become an increasingly major part of total cost
of ownership.

I continued to see some mainframe industry-standard TPC numbers up to some
time last decade ... but haven't found anything since. Past published
numbers for mainframe "MIPS" (now BIPS):
z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
z13, 140 processors, 100BIPS (710MIPS/proc), Jan2015
z14, 170 processors, 150BIPS (862MIPS/proc), Aug2017

2010 was when they published the z196 "peak I/O" benchmark, getting 2M IOPS
using 104 FICON channels (a protocol running over the industry fibre
channel standard). This was also about the same time that a fibre channel
was announced for E5-2600 blades claiming over a million IOPS for a single
fibre channel (two such having higher throughput than 104 FICON running
over 104 fibre channels).
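
A minimal back-of-envelope sketch of the per-channel comparison implied by
those published figures (only the numbers quoted above are used; the rest
is just division):

```python
# Per-channel throughput implied by the published figures above.
z196_peak_iops = 2_000_000    # z196 "peak I/O" benchmark
z196_ficon_count = 104        # FICON channels used in that benchmark
native_fcs_iops = 1_000_000   # claimed IOPS for a single fibre channel (E5-2600 blade)

iops_per_ficon = z196_peak_iops / z196_ficon_count
print(f"per-FICON:        {iops_per_ficon:,.0f} IOPS")                # ~19,231
print(f"native FCS ratio: {native_fcs_iops / iops_per_ficon:.0f}x")   # ~52x per channel
```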

At the time, E5-2600 blades had benchmarks of 400BIPS-530BIPS (depending on
model; the industry-standard measure is the number of benchmark iterations
compared to a 370/158-3 assumed to be 1MIPS), and IBM's base list price for
an E5-2600 blade was $1815 (about $3/BIPS) compared to $30M for a
max-configured z196 (about $600,000/BIPS, not including devices, software,
and services). Not long after, server chip makers announced they were
shipping over half their product directly to cloud megadatacenters (where
they assemble systems for 1/3rd the cost of brand-name servers; 1/3rd of
IBM's $3/BIPS is $1/BIPS), and IBM sold off its server business.
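
The price/throughput figures quoted above follow from simple division (a
quick check of the arithmetic; the rounding to "about $3/BIPS" is the
post's own):

```python
# Reproducing the price/throughput comparison from the figures quoted above.
e5_2600_price = 1_815        # IBM base list price, E5-2600 blade (USD)
e5_2600_bips = 530           # upper end of the 400-530 BIPS benchmark range
z196_price = 30_000_000      # max-configured z196 (USD)
z196_bips = 50

print(f"E5-2600: ${e5_2600_price / e5_2600_bips:.2f}/BIPS")   # ~$3.4/BIPS ("about $3/BIPS")
print(f"z196:    ${z196_price / z196_bips:,.0f}/BIPS")        # $600,000/BIPS
```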

trivia: a cache miss, memory latency ... when measured in number of
processor cycles ... is comparable to 60s disk I/O latency when measured in
a count of 60s processor cycles. The z196 claim is that over half the
z10->z196 per-processor improvement (469MIPS to 625MIPS) comes from the
introduction of memory-latency-compensating technology (which has been in
other platforms for decades): out-of-order execution, hyperthreading,
branch prediction, etc. ... sort of the hardware equivalent of 60s
multitasking.

FICON trivia: in the 1980s, STL was bursting at the seams and they were
moving 300 people from the IMS group to an offsite bldg (with
dataprocessing back to the STL datacenter). They tried "remote" 3270s
... but found the human factors horrible compared to local channel-attached
controllers in the bldg. I get con'ed into doing channel-extender support
allowing local channel-attached controllers to be placed at the offsite
bldg (and don't see any difference in human factors between local and
offsite). The hardware vendor tries to get IBM approval to ship my support,
but a group in POK playing with some serial stuff gets it vetoed because
they were afraid that it would make it harder to ship their stuff. In 1988,
I'm asked to help LLNL (national lab) standardize some serial stuff they
are playing with, which quickly becomes the fibre channel standard
(including some stuff I did in 1980). The POK people finally get their
stuff released in 1990 with ES/9000 as ESCON, when it is already
obsolete. Later some POK people become involved in the fibre channel
standard and define a heavy-weight protocol that drastically reduces the
native I/O throughput, which eventually ships as FICON.

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: Assembler :- PC Instruction

2019-08-29 Thread Anne & Lynn Wheeler
apoorva.kanm...@gmail.com (SUBSCRIBE IBM-MAIN Anonymous) writes:
> I have a question on PC instruction for which I have been looking for
> an answer for quite sometime now. According to "Priciples of
> operations" manual, execution of an SVC instruction causes a new PSW
> to be loaded from x'1C0' (SVC FLIH), and program interruption causes a
> new PSW loaded from x'1D0' (Program Interruption FLIH). Now my
> question is what happens when a "PC" instruction is executed. Does a
> new PSW gets loaded from a pre-determined location (like SVC/program
> interrruption) or it's all handled through some micro code?

The problem started with the move from OS/VS2 SVS (a single address space)
to OS/VS2 MVS and multiple address spaces (one per application) ... however,
the OS/360 heritage was heavily pointer-passing APIs ... as a result an
8mbyte image of the MVS kernel had to appear in every application's 16mbyte
virtual address space ... so that the kernel could access the storage
pointed to by the passed pointers. The issue was then that all the
subsystems were put in their own address spaces, and when an application
passed a pointer to a subsystem ... the subsystem was running in a
different address space than the one the parameter pointed to (by the
passed pointer).

The solution was the common segment area, a one-megabyte area in every
application's 16mbyte address space ... where applications could obtain
parameter space, so the pointer to the parameter list passed to a subsystem
was the identical address in both the application and the subsystem.
However, the requirement for common segment area space was somewhat
proportional to the number of concurrent applications and the number of
subsystems ... which quickly exceeded one (mbyte) segment ... and the
common segment area (CSA) morphed into the common system area (CSA). As
systems continued to grow, the CSA requirement got larger and larger:
4mbytes (kernel+CSA is 12mbytes, leaving only 4mbytes for applications),
then 6mbytes in the 3033 time-frame (kernel+CSA 14mbytes, leaving only
2mbytes for applications), and threatening to become 8mbytes ... leaving
zero bytes for applications.
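
The squeeze being described is just 16-mbyte (24-bit) address-space
arithmetic; a tiny sketch using the kernel/CSA sizes given above:

```python
# Application space left in a 16MB (24-bit) MVS address space,
# using the kernel and CSA sizes given in the text.
ADDRESS_SPACE_MB = 16
KERNEL_MB = 8
for csa_mb in (4, 6, 8):
    app_mb = ADDRESS_SPACE_MB - KERNEL_MB - csa_mb
    print(f"CSA {csa_mb}MB -> kernel+CSA {KERNEL_MB + csa_mb}MB, "
          f"{app_mb}MB left for applications")
# CSA 4MB -> kernel+CSA 12MB, 4MB left for applications
# CSA 6MB -> kernel+CSA 14MB, 2MB left for applications
# CSA 8MB -> kernel+CSA 16MB, 0MB left for applications
```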

In the wake of the FS failure (FS was going to be completely different from
370, and 370 efforts were being shut down during the FS period; the lack of
370 offerings during the FS period is also credited with giving clone
mainframe vendors their market foothold), there was a mad rush to get stuff
back into the 370 product pipeline ... the 303x and 3081 quick efforts were
kicked off in parallel. The 3081 included 370/XA, 31-bit addressing, and
"access registers" (subsystems had their own virtual address space, but
could use "access registers" to access "parameter" storage in the
application address space). All this was known informally as "811" for the
Nov1978 publication date of the architecture specification documents.

In part because of the threat of CSA growing to 8mbytes for larger 3033
customers, a subset of "access registers" was retrofitted to the 3033 as
"dual-address space" mode ... subsystems could have their own address
space, but also a 2nd address space for accessing the calling application's
parameters directly ... w/o needing CSA space.

In 370 (3033) dual-address space mode ... there still wasn't a program call
instruction, just a supervisor call which, in software, would move the
application's address space to secondary, load the subsystem's address
space, and enter the called subsystem. In 370/XA with "access registers",
program call had a system-defined table with all the necessary information
to do that function directly as part of the Program Call instruction
(whether implemented in hardware, microcode, picocode, and/or some
combination).

The z/Architecture Principles of Operation references system-defined ETEs
(entry-table entries) for the Program Call instruction, which include a
number of options, including switching (or not switching) address spaces,
changing the instruction address, etc.

"Interrupts" save the current PSW and load a new (static) PSW.

Each Program Call ETE has controls for which PSW fields are saved and how
much of the new fields (unique to each ETE) is loaded ... as well as any
address-space games that might be played ... and various other rules (the
description goes on for more than a dozen pages).
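
A toy model of the contrast being described (plain Python, not mainframe
code; all field names, addresses, and values here are illustrative
assumptions, not actual z/Architecture layouts): an interrupt always loads
a static new PSW from a fixed storage location, while Program Call looks up
a per-entry ETE that says what to load and whether to switch address
spaces.

```python
# Toy model of interrupt vs. Program Call dispatch as described above.

# Interrupt: old PSW saved, new (static) PSW loaded from a fixed location
# (e.g. the SVC new PSW at X'1C0' mentioned in the question).
NEW_PSW = {"svc": {"addr": 0xA000, "key": 0, "state": "supervisor"},
           "program": {"addr": 0xB000, "key": 0, "state": "supervisor"}}

def interrupt(cpu, kind):
    cpu["old_psw"] = dict(cpu["psw"])   # current PSW saved
    cpu["psw"] = dict(NEW_PSW[kind])    # new PSW is static, one per interrupt class

# Program Call: the PC number indexes a system-defined entry table;
# each ETE carries its own target, state, and address-space rules.
ENTRY_TABLE = {
    1: {"addr": 0x20000, "state": "problem", "switch_space": False},
    2: {"addr": 0x30000, "state": "supervisor", "switch_space": True,
        "target_space": "subsystem"},
}

def program_call(cpu, pc_number):
    ete = ENTRY_TABLE[pc_number]        # per-entry controls, not one static PSW
    cpu["return"] = dict(cpu["psw"])    # linkage information saved for return
    cpu["psw"] = {"addr": ete["addr"],
                  "key": cpu["psw"]["key"],
                  "state": ete["state"]}
    if ete["switch_space"]:             # address-space switching is per-ETE
        cpu["primary_space"] = ete["target_space"]

cpu = {"psw": {"addr": 0x1000, "key": 8, "state": "problem"},
       "primary_space": "application"}
interrupt(cpu, "svc")                   # always lands at the one static SVC new PSW
print(cpu["psw"])
program_call(cpu, 2)                    # lands wherever ETE 2 says; may switch spaces
print(cpu["psw"], cpu["primary_space"])
```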

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: vendor distributes their private key

2019-08-27 Thread Anne & Lynn Wheeler
sme...@gmu.edu (Seymour J Metz) writes:
> The proper way to provide encryption and non-repudiation is to have
> two key pairs. You sign a message using your private key. People
> wanting to send you encrypted data encrypt using your public key. So
> if foo wants to send bar a signed encrypted document, foo double
> encrypts it with foo's private key and bar's publickey.

I got into the middle of this in NIST, US, and ISO financial standards
bodies. Crypto non-repudiation can show that something came from your
machine. The crypto companies wanted to move up the value stream and claim
that non-repudiation was in the legal sense of having read, understood,
agreed, approved, and/or authorized something ... so they could charge more
for the crypto ... however, it was shown that crypto "non-repudiation" in
no way satisfied those legal/business requirements (it just showed that
something was sent from your machine).

We were also brought in to help wordsmith some cal. state legislation
... at the time they were working on electronic signature, data breach
notification, and opt-in personal information sharing. The "digital
certificate" companies were lobbying for the electronic signature
legislation to mandate digital certificates (obtained at high price from
them; at the time they were hawking $20B/annum business plans on Wall
Street where every person would have a digital certificate at $100/year)
... for use with "digital signatures" ... as somehow being equivalent to
"human signatures" (and applying to non-repudiation). They didn't get their
way.

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: Capital One Data Breach-100 Million Customers affected

2019-07-31 Thread Anne & Lynn Wheeler
jcew...@acm.org (Joel C. Ewing) writes:
> And I noticed a reprinted Washington Post article in my local paper
> today "Bank data stolen despite cloud push", which clearly indicates
> bank management had the perception that  somehow removing data from
> Capital One's direct physical control  to Amazon Web Services on the
> cloud would "improve" security rather than just add different paths for
> attack.   Can't help but wonder if this move to "cut back" on Capital
> One's data centers also involved laying off the people that might have
> been smart enough to configure their firewall correctly and avoid the
> breach.

We were brought in to help wordsmith some California legislation (late 90s,
two decades ago). At the time they were doing electronic signature, data
breach notification (the original, 1st in the country), and opt-in personal
information sharing. Some of the participants had done in-depth public
privacy surveys, and the #1 issue was identity theft, primarily fraudulent
financial transactions as a result of breaches. At the time there was
little or nothing being done (other than misdirection/obfuscation as to the
source of the problems). The issue is that entities normally take security
measures in self-protection; however, in the case of the breaches, the
institutions weren't at risk, the public was. It was hoped that the
publicity from breach notification might motivate serious and comprehensive
security measures.

Since then there have been several federal breach notification (state
preemption) bills introduced ... many of them worded in such a way that
they would effectively eliminate any requirement for notification.

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: Fwd: Happy 50th Birthday CICS

2019-07-07 Thread Anne & Lynn Wheeler
marktre...@gmail.com (Mark Regan) writes:
> https://it.toolbox.com/blogs/trevoreddolls/happy-50th-birthday-cics-070719

As an undergraduate, within a year after taking a 2hr intro to
computing/fortran (they had a 709 running tape->tape with a 1401 as unit
record front-end ... manually moving tapes between 709 drives and 1401
drives), I was hired fulltime to be responsible for the academic and
administrative mainframe systems (they had a 360/67, replacing the
709/1401, supposedly for TSS/360, which never quite came to production
fruition, so it ran as a 360/65 with OS/360). I got to redo a lot of
OS/360, including sysgen. Student Fortran jobs ran in less than a second on
the 709, but on the initial move to OS/360 ran over a minute (about 100
times slower). Adding HASP cut that about in half (to over 30 seconds). I
then redid the sysgen to carefully place datasets and PDS members for
optimal arm seek and PDS directory multi-track search ... which improved
things by another factor of three. The last week in January 1968, three
people from the science center came out to install CP67 ... which I would
get to play with on weekends ... along with OS/360 work (the univ. shut
down the datacenter from 8am Sat until 8am Monday ... and I would have the
place to myself, although it made any Monday morning class a little hard,
having gone 48hrs w/o sleep). Part of an old SHARE presentation from fall
1968 ... mostly CP/67 pathlength rewrites to improve OS/360 running in a
virtual machine (but also some amount of carefully reordered OS/360 stage2
sysgen to optimize dataset arm seek and PDS directory multi-track search):
http://www.garlic.com/~lynn/94.html#18

The university library got an ONR grant to do an online library catalog,
and part of the money went for a 2321 datacell. In 1969 the university was
also selected as one of the original CICS product betatest sites ... and
supporting/debugging CICS was added to my responsibilities. One of the
"bugs" was that original CICS had some undocumented hard-coded BDAM file
options and the university was using a different set of options. W/o
source, it took some time to diagnose that (and why) CICS startup was
failing on BDAM file open.

Lots of CICS history, gone 404 but living on at the Wayback Machine:
http://web.archive.org/web/20050409124902/http://www.yelavich.com/cicshist.htm
and
http://web.archive.org/web/20071124013919/http://www.yelavich.com/history/toc.htm

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: mainframe hacking "success stories"?

2019-06-23 Thread Anne & Lynn Wheeler
dspiegel...@hotmail.com (David Spiegel) writes:
> *HIPAA

Summary of the HIPAA Security Rule
https://www.hhs.gov/hipaa/for-professionals/security/laws-regulations/index.html

After leaving IBM, I did some amount of work with the financial industry,
including being a rep on standards committees ... as part of being
co-author of the privacy standard, I had a number of meetings with federal
privacy officers ... also meetings with the people behind HIPAA ... there
were two still around who had originally drafted HIPAA back in the 70s
... bemoaning how long it took to get passed ... and, at the time, the
health industry had still managed to block/delay including any penalties
for HIPAA privacy violations. We had to talk to the HIPAA people because
there were situations where a monthly financial transaction statement could
leak information about medical tests and procedures.

Along the way, I had been asked to help wordsmith the cal. state data
breach notification act (1st in the nation). There were several
participants heavily into privacy issues who had done detailed public
surveys and found that the #1 issue was "identity theft" resulting in
fraudulent financial transactions (largely as a result of breaches). At the
time little or nothing was being done about breaches. The issue is that
entities normally take security countermeasures in self-protection;
however, in the breach cases, the institutions weren't at risk, the public
was (and the institutions were doing a lot to obfuscate whenever breaches
occurred). It was hoped that publicity from breach notifications might
motivate corrective action.

I was able to include in the financial privacy standard some of the work
that went into the cal. breach notification legislation regarding the need
to motivate institutions to protect their customers' and the public's
privacy.

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: mainframe hacking "success stories"?

2019-06-18 Thread Anne & Lynn Wheeler
014ab5cdfb21-dmarc-requ...@listserv.ua.edu (Mike Wawiorko) writes:
> Remember back in 1980 there was no sysplex. Each machine was a
> stand-alone system with a single operating system - if we ignore VM
> guests.
>
> There was a proliferation of 4341s, 4361s(?), 4381s and even a bit
> later 9370s running MVS. OS/VS1, OS/VS2, VM, DOS (the mainframe one
> not the PC one), TPF and possibly others.
>
> Also remember non-IBM mainframes. Boroughs comes to mind but there were 
> others.

Early 1979 (before first customer ship), I was con'ed into doing a 4341
benchmark for a national lab that was looking at getting 70 of them for a
compute farm ... sort of the leading edge of the coming cluster
supercomputing tsunami. A cluster of five 4341s had more compute and I/O
power than a 3033, was less expensive, and took less floor space, power &
environmentals. At some point POK felt so threatened that they got
corporate to cut the allocation of a critical 4341 manufacturing component
in half.

Also, large corporations were ordering (VM/370) 4341+3370 FBA systems,
hundreds at a time, for placing out in departmental areas (inside IBM,
departmental conference rooms became a scarce commodity since so many were
being used for vm/4341s), sort of the leading edge of the coming
distributed computing tsunami. One of the issues for MVS was that 3380s
(even the 3380 had already moved to small fixed-size blocks, as can be seen
in the size round-up calculations for records/track) were high-end
datacenter disks ... FBA was the only mid-range disk that could be used out
in a non-datacenter environment. Eventually the 3375 CKD was produced, a
3370 FBA simulating CKD for MVS ... however, it didn't do MVS a whole lot
of good. Large customers with hundreds of distributed VM/4300s were looking
at a large number of (distributed and/or clustered) systems per
support/operational person ... while MVS was still staff per system
(today's large cloud megadatacenters have several hundred thousand systems
with 80-120 staff).

Old post with a decade of DEC VAX sales, sliced by year, model, and
US/non-US (discounting MicroVAX, which was about half the total). VM/4300
sales in single or small-unit-number orders were similar to the VAX
numbers; the big difference was large corporations ordering hundreds of
vm/4341s at a time for departmental, distributed operation:

The internal network (non-SNA) was larger than the arpanet/internet from
just about the beginning until sometime in the mid-80s. The big change for
the arpanet/internet was the change-over from the IMP/host protocol to the
internetworking protocol on 1Jan1983. At that time it had approx. 100 IMP
network nodes and 250 mainframe hosts ... while the IBM internal network
was rapidly approaching 1000 nodes (which it passed a few months later); a
big part of the influx was the distributed vm/4300s all over the world. Old
post with a list of world-wide corporate locations that added one or more
network nodes during 1983:

Some of the MIT CTSS/7094 people had gone to the 5th floor to do MULTICS,
while others went to the IBM science center on the 4th floor and did
virtual machines, the internal network, and a bunch of online and
performance technology (GML was invented at the science center in 1969; GML
tag processing was then added to CMS SCRIPT, which was a reimplementation
of CTSS RUNOFF).

There was some amount of (friendly) rivalry between the 5th and 4th floors.
One of the MULTICS premier installations was USAF data services ... old
email about them wanting to come out to talk to me about 20 vm/4341s:

When they finally got around to coming out six months later (fall 1979), it
had grown to 210 vm/4341s. Another old reference to virtual machines at
some government agencies starting in the 60s ... gone 404, but lives on at
the Wayback Machine:
http://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

trivia: my wife was in the Gburg JES group and one of the catchers for ASP
to turn it into JES3 ... also co-author of JESUS (JES Unified System), all
the features of the two systems that the respective JES2 & JES3 customers
couldn't live w/o (for various reasons, it never came to fruition). She was
then con'ed into going to POK to be responsible for loosely-coupled
architecture, where she did peer-to-peer, shared-data architecture. She
didn't remain long, in part because of 1) little uptake (except for IMS hot
standby) until SYSPLEX and Parallel SYSPLEX much later, and 2) constant
battles with the communication group trying to force her into using
SNA/VTAM for loosely-coupled operation.

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: mainframe hacking "success stories"?

2019-05-14 Thread Anne & Lynn Wheeler
sme...@gmu.edu (Seymour J Metz) writes:
> On the S/360 the Alternate CPU Recovery facility was limited to 65MP
> (I don't know about 9020 or TSS/360.) On MVS it was a standard
> facility, although on an AP or MP without Channel Set Switching losing
> the processor with the I/O channels was fatal. With MVS/XA and later
> I/O was more robust.

The 360/65MP shared memory ... but the processors had their own dedicated
channels; to simulate "shared" I/O, it required controllers with
multi-channel interfaces.

The 360/67MP had a "channel controller" ... that allowed all channels to be
accessed by all processors ... it had a bunch of switches to reconfigure
the hardware ... and the switch settings were visible in the control
registers. It also had multiple hardware paths to memory, which introduced
additional latency overhead ... but for I/O-intensive workloads (where
processors and I/O could simultaneously be doing transfers) it could have
significantly higher throughput (the non-MP 360/67 was more like the 65 and
other 360s, where I/O memory accesses could interfere with CPU memory
accesses). You could order an MP with only one processor and get the
channel controller and independent paths to memory.
http://www.bitsavers.org/pdf/ibm/360/funcChar/A27-2719-0_360-67_funcChar.pdf

Originally the 360/67 announcement was for up to four processors (and the
channel-controller control register values had fields for all four
processors). However, (mostly) just two-processor machines were built
... except for a special three-processor 360/67 done for Lockheed and the
Manned Orbiting Laboratory project.
https://en.wikipedia.org/wiki/Manned_Orbiting_Laboratory

The tri-plex machine also provided for the configuration switch settings to
be changed by changing the control register values (not just sensing the
switch settings).

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: mainframe hacking "success stories"?

2019-05-13 Thread Anne & Lynn Wheeler
john.archie.mck...@gmail.com (John McKown) writes:
> Yes, we have had a TCM fail. I was almost called a liar when I told the
> Windows people that the z simply switch the work transparently (on the
> hardware level) to another CP. They were shocked and amazed that we could
> "hot swap" a new TCM into the box without any outage. The same thing when
> an OSA failed. The other OSA simply did an "ARP rollover" and there were
> not any outages. And that, again, IBM replaced the OSA "hot" and we simply
> started using it. All automatically. But the Windows people still chant
> "Windows is BETTER than the mainframe."

I was a keynote speaker at a NASA dependable computing workshop (along with
Jim Gray, whom I had worked with at IBM SJR, but who had gone on to Tandem,
DEC, and then Microsoft) ... the reference has gone 404 but lives on at the
Wayback Machine:
http://web.archive.org/web/20011004023230/http://www.hdcc.cs.cmu.edu/may01/index.html

and told this mainframe story:

I had done this software support for channel extenders ... allowing local
controllers & devices to operate at the end of some telco link. For various
reasons, I had chosen to simulate a "channel check" when various telco
errors occurred ... in order to kick off the various operating-system
recovery/retry routines.

Along came the 3090 ... which was designed to have something like 3-5
channel check errors per annum (not per annum per machine ... but per annum
across all machines).

After 3090s had been out a year ... R-something? was reporting that there
had been an aggregate of something like 15-20 channel check errors in the
first year across all machines ... which launched a detailed audit of what
had gone wrong.

They finally found me ... and after a little additional investigation, I
decided that for all intents and purposes, simulating an IFCC (interface
control check) instead of a CC (channel check) would do as well from the
standpoint of the error retry/recovery procedures activated.

... snip ...

The majority of the audience didn't even understand that errors & faults
were being recorded, tracked, collected, trended, etc.

I had done the support in 1980 for STL, which was bursting at the seams and
was moving 300 people from the IMS group to an offsite bldg, with
dataprocessing back to STL. They had tried remote 3270s, but found the
human factors totally unacceptable. Channel-extender support allowed local
channel-attached controllers at the offsite bldg ... and the human factors
were the same offsite as local in STL. Actually, the STL POK mainframes
supporting the offsite bldg ran faster ... it turns out 3270 controllers
had lots of excessive channel busy ... the channel extender significantly
reduced that 3270 controller channel busy ... moving it all to the
interface at the offsite bldg.

The hardware vendor had tried to get IBM to release my software, but there
was a group in POK playing with some serial stuff that got it vetoed (they
were afraid that if it was in the market, it would make it harder to get
their stuff released). The vendor then had to (exactly) duplicate my
support from scratch (including reflecting CC on errors). I then got them
to change their implementation from CC to IFCC.

trivia: in 1988, I was asked to help LLNL standardize some stuff they were
playing with ... which quickly becomes the fibre channel standard
(including some stuff I had done in 1980).

The POK people finally get their stuff released in 1990 with ES/9000 as
ESCON, when it was already obsolete.

Later, POK people become involved in the fibre channel standard and define
a heavy-weight protocol that radically reduces the native throughput, which
eventually ships as FICON.

Our last product at IBM was HA/CMP, and after leaving IBM we were brought
into the financial institution that had implemented the original magstripe
merchant/gift cards ... on a SUN 2-way "HA" platform. It turns out SUN had
implemented/copied my HA/CMP design ... even copying my marketing pitches.
The system had a failure, "fell over", and continued working with no
outage. SUN replaced the failed component, but the CE forgot to update the
configuration with the identifier for the new component ... so it wasn't
actually being used. Three months later, when they had a 2nd failure, they
found that parts of the DBMS records weren't actually being
written/replicated (more than "no single point of failure": three problems,
the original failure, the failure to update the configuration info, and the
2nd failure).

Earlier HA/CMP reference/post in this thread:
http://www.garlic.com/~lynn/2019c.html#11 

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: mainframe hacking "success stories"?

2019-05-12 Thread Anne & Lynn Wheeler
li...@akphs.com (Phil Smith III) writes:
> https://en.wikipedia.org/wiki/Xeon_Phi
>
> Up to 72 cores per chip, so up to 144 threads per socket. On an
> eight-socket motherboard, that's, um, a lot.

They announced they are discontinuing Phi:
https://www.extremetech.com/extreme/290963-intel-quietly-kills-off-xeon-phi

... but the latest production server Xeons announced last month have up to
56 cores per socket and up to eight sockets. From a recent post:
http://www.garlic.com/~lynn/2019c.html#9

Most recent announcement (last month): the 56-core (processor) Platinum 9200:
https://www.anandtech.com/show/14182/hands-on-with-the-56core-xeon-platinum-9200-cpu-intels-biggest-cpu-package-ever
https://www.servethehome.com/intel-xeon-platinum-9200-formerly-cascade-lake-ap-launched/
https://www.storagereview.com/intel_releases_second_generation_intel_xeon_scalable_cpus
https://www.hpcwire.com/2019/04/02/intel-launches-second-gen-scalable-xeons-with-up-to-56-cores/

"We are delivering 8-core Xeons all the way up to 56-core, the highest
core count we've ever delivered on Xeon," said Shenoy. "We are
delivering support for 1- 2- 4- and 8-socket glueless support for Xeon."

... snip ...

aka eight sockets, 56 cores/socket, max 448 cores/processors sharing the
same memory, providing a large number of TIPS (1000s of BIPS) of
computation power in a single system.

IBM sold off its (Intel) server business about the time the server chip
makers started saying that they were shipping over half their chips
directly to the big cloud megadatacenters ... for going on two decades, the
big cloud megadatacenters have claimed that they assemble their own servers
at 1/3rd the cost of brand-name servers (aka cloud operators view
dataprocessing as a cost rather than a profit).

The big cloud megadatacenters have so radically reduced their server system
cost that power & cooling have become a major cost ... and they are
focusing on total costs, including electricity/cooling cost per computation
... even getting special chip versions that improve the electricity/cooling
cost per computation. However, the highest-performance server chips can
double the power requirements for less than twice the computation
throughput.

A big cloud megadatacenter will have over half a million blade systems with
millions of processors, operated by 80-120 people (enormous automation)
... doubling the number of systems (for the same total computational power)
can easily be a net financial win when it gives optimal computation per
unit of power (and large cloud operations have several such
megadatacenters around the world).

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: mainframe hacking "success stories"?

2019-05-12 Thread Anne & Lynn Wheeler
0047540adefe-dmarc-requ...@listserv.ua.edu (Bill Johnson) writes:
> Until the mid-1990s, mainframes provided the only acceptable means of
> handling the data processing requirements of a large business. These
> requirements were then (and are often now) based on running large and
> complex programs, such as payroll and general ledger processing.

Late 80s through early 90s there were lots of news stories about moving off
mainframes to "killer micros" ... and IBM had gone into the red. IBM was
being reorganized into the 13 "baby blues" in preparation for breaking up
the company ... when a new CEO was brought in and reversed the breakup.

In the late 70s & early 80s, I was involved in the original SQL/relational
implementation, System/R, and the technology transfer to Endicott for
SQL/DS ("under the radar", while the corporation was preoccupied with the
official next-generation DBMS, "EAGLE"). When "EAGLE" finally imploded,
there was a request for how fast System/R could be ported to MVS ... which
was eventually released as DB2, originally for decision support only.

Late 80s, I was doing RS/6000 high availability, HA/6000, but I quickly
changed the name to HA/CMP because we were doing cluster scaleup:
technical/scientific with national labs and commercial with RDBMS vendors.
I was also asked to write a section for the corporate strategic continuous
availability document. The section got pulled when both Rochester (AS/400)
and POK (mainframe) complained that they couldn't meet the requirements.

Post about the Jan1992 meeting in Ellison's conference room on 128-way
cluster scaleup:
http://www.garlic.com/~lynn/95.html#13
One of the Oracle executives in the room said he had been the major person
in STL handling the tech transfer to STL for DB2. Within a couple weeks of
the Ellison meeting, cluster scaleup is transferred, announced as a
supercomputer for technical/scientific use *ONLY*, and we are told we can't
work on anything with more than four processors. A few months later we
leave. Part of the issue was that mainframe DB2 was complaining that if I
was allowed to continue, it would be at least 5yrs ahead of what they were
doing.

Later, two of the Oracle people in the Ellison meeting have left and are at
a small client/server startup, responsible for something called "commerce
server", and we are brought in as consultants because they want to do
payment transactions on the server; the startup had also invented this
technology they call "SSL"; the result is now frequently called "electronic
commerce". For having done "electronic commerce", I get invited into lots
of other financial industry activities.

In the mid/late 90s, lots of financial institutions were overrunning their
(COBOL batch mainframe) overnight financial settlement window
(globalization was cutting the window size and increasing the workload).
Numerous institutions were spending billions on "straight-through
processing" (each transaction is settled as it executes), leveraging huge
parallelization with large numbers of "killer micros". It turns out that
they were using parallelization libraries that introduced 100 times the
overhead of COBOL batch. They were warned (including by me) about the
problem, which they continued to ignore ... until large-scale pilots went
down spectacularly in flames (the overhead totally swamping the anticipated
increase in throughput from large numbers of killer micros).

A decade later, I was involved in taking some technology to financial
industry groups that allowed high-level business rules to be specified
... which were then decomposed into fine-grain SQL statements (easily
parallelized). The implementation enormously reduced the development and
maintenance effort for large, complex business operations ... and heavily
leveraged vendor efforts on throughput for large clustered & parallel RDBMS
(including IBM's). We were able to demonstrate enormously complex business
processing with many times the throughput of any existing implementation.
Initially it had a high level of acceptance, but then it ran into a brick
wall. We were eventually told that lots of the executives still bore the
scars of the enormous parallelization failures in the 90s, and it would
take all of them retiring before it was tried again.

trivia: in 2000, I did some performance work on a 450k-statement COBOL
program that did overnight batch settlement, running every night on >40
max-configured mainframes (the number of mainframes required to finish in
the overnight batch window).

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: mainframe hacking "success stories"?

2019-05-12 Thread Anne & Lynn Wheeler
0047540adefe-dmarc-requ...@listserv.ua.edu (Bill Johnson) writes:
> Right, my articles are flawed. Yet, real mainframe hacks can be
> counted on one hand. And many of those are hypothetical or were
> achieved via someone hacking a laptop (MSFT) or acquiring a valid
> userid because of someone’s stupidity. If hackers wanted to go where
> the money is, and banks would be the place, they would target the
> mainframe since nearly every bank in the world uses one. 

One of the issues with hack stories is that those using mainframes for
critical systems (especially financial) do quite a bit to keep such things
out of the news.

I was in financial sector CIP meetings in the White House annex
https://en.wikipedia.org/wiki/Critical_infrastructure_protection and a
major issue was making sure that the financial ISAC
https://www.fsisac.com/
wasn't subject to FOIA
https://en.wikipedia.org/wiki/Freedom_of_Information_Act_(United_States)

I was also brought in to help wordsmith some cal. state legislation. At the
time they were working on electronic signature, data breach notification,
and opt-in personal information sharing. There were participants that were
heavily into privacy issues and had done detailed consumer/public privacy
studies. The number one issue was "identity theft", primarily leaked
information used for fraudulent financial transactions. The problem at the
time was that little was being done about the leaks & breaches (other than
obscuring the source of the problem). The issue is that entities normally
take security measures in their own self-interest; in the case of most of
the information leaks/breaches, the institutions weren't at risk, the
public was. It was hoped that publicity from breach notifications might
motivate corrective action.

Since then there have been numerous federal data breach notification bills
introduced ... about half similar to the cal. legislation and the other
half with requirements that would almost never be met (eliminating the need
for the majority of breach notifications).

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: mainframe hacking "success stories"?

2019-05-12 Thread Anne & Lynn Wheeler
charl...@mcn.org (Charles Mills) writes:
> The mainframe seems to me to have also some "architectural"
> advantages. It seems to support a denser "clustering." It does not
> seem to me that there is anything in the Windows/Linux world that
> duplicates the advantages of 100 or so very-closely-coupled (sharing
> all main storage potentially) CPUs. Sure, you can link a thousand
> Windows or Linux 8-way servers on a super-fast net, and it is fine for
> some things -- incredibly powerful for some of them, but it seems
> there are some things the mainframe architecture is inherently better
> at.

z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
z13, 140 processors, 100BIPS (710MIPS/proc), Jan2015
z14, 170 processors, 150BIPS (862MIPS/proc), Aug2017

The industry-standard MIPS benchmark is the number of iterations compared
to a 370/158, assumed to be a 1MIPS processor (not an actual count of
instructions).

The z196 (@50BIPS) comparison was an e5-2600 blade with two 4-processor
chips (8 processors, shared memory) getting between 400-530 BIPS (depending
on model; 50BIPS-65BIPS/processor), ten times a max-configured z196.

The most recent published peak I/O benchmark (that I've found) is for the
z196, getting 2M IOPS using 104 FICONs running over 104 Fibre Channel
Standard links. FICON is a protocol that radically reduces the native I/O
throughput. At the time of the z196 peak I/O benchmark, a fibre channel was
announced for e5-2600 blades claiming over a million IOPS (two such fibre
channels have higher throughput than 104 FICON running over 104 fibre
channels).

The naming convention for current server blades has been revised
... family of chips:
https://www.servethehome.com/intel-xeon-scalable-processor-family-platinum-gold-silver-bronze-naming-conventions/intel-scalable-processor-family-skylake-sp-platinum-gold-silver-bronze/

Code names are in order of increasing throughput; the 2017 family
... blades potentially have one to eight chips (with shared memory) and
4-28 cores (i.e. processors) per chip (max 8*28, or 224 processors, and
possibly 448 threads):

Each high-end blade is a few TIPS (thousands of BIPS), or more than ten
times a max-configured z14. Dense rack packaging might have 50-60 such
blades in a rack ... about the floor space of a z14 and potentially a
thousand times the throughput.

Most recent announcement (last month): the 56-core (processor) Platinum 9200:
https://www.anandtech.com/show/14182/hands-on-with-the-56core-xeon-platinum-9200-cpu-intels-biggest-cpu-package-ever
https://www.servethehome.com/intel-xeon-platinum-9200-formerly-cascade-lake-ap-launched/
https://www.storagereview.com/intel_releases_second_generation_intel_xeon_scalable_cpus
https://www.hpcwire.com/2019/04/02/intel-launches-second-gen-scalable-xeons-with-up-to-56-cores/

"We are delivering 8-core Xeons all the way up to 56-core, the highest
core count we've ever delivered on Xeon," said Shenoy. "We are
delivering support for 1- 2- 4- and 8-socket glueless support for Xeon."

... snip ...

aka 8-socket (8 chips), 56-core (processors per chip), 448 cores
(processors, shared memory)

The above has discussions about customers building supercomputers with
thousands of such blades.

trivia: in 1980, STL was full and moving 300 people from the IMS group to
an offsite bldg; they tried "remote" 3270s and found the human factors
totally unacceptable. I get con'ed into doing channel-extender support,
allowing local channel-attached 3270 controllers to be placed at the
offsite bldg (with service back to the STL datacenter) ... and see no
difference in human factors. The hardware vendor tries to get IBM to let
them distribute my support ... but there was a group in POK playing with
some serial stuff that gets it vetoed (they were afraid it would make it
harder for them to release their stuff).

In 1988, I'm asked to help LLNL standardize some serial stuff they are
playing with ... which quickly becomes the fibre channel standard
(including some stuff I did in 1980). The POK people finally get their
stuff released in 1990 with ES/9000 as ESCON, when it is already
obsolete. Then some POK people get involved in the fibre channel standard
and define a heavy-weight protocol that radically reduces the native
throughput ... which eventually is released as FICON.

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: mainframe hacking "success stories"?

2019-05-10 Thread Anne & Lynn Wheeler
l...@garlic.com (Anne & Lynn Wheeler) writes:
> Later two of the Oracle people in the Ellison meeting have left and are
> at a small client/server startup responsible for something called
> "commerce server" and we are brought in as consultants because they want
> to do payment transactions on the server, the startup had also invented
> this technology they call "SSL" they want to use, the result is now
> fequently called "electronic commerce".

Other topic drift ... somewhat for having done "electronic commerce", I got
asked into the X9A10 working group, which had been given the requirement to
preserve the integrity of the financial infrastructure for all retail
payments (point-of-sale, internet, ACH, credit, debit ... aka *ALL*).

After detailed end-to-end vulnerability studies ... it came up with the
X9.59 standard, which eliminated the need to hide (encrypt) the account
&/or credit card number (as a countermeasure to fraud) ... this also
eliminated the major use of SSL, hiding (encrypting) the account &/or
credit card number for data in transit (which didn't do anything for data
at the endpoints or data "at rest").

We used a couple of examples:

The account/credit card number has dual use, both authentication and
business processes. For authentication it needs to be kept completely
confidential and never divulged ... at the same time it is needed in dozens
of business processes at millions of locations around the world.

Security proportional to risk: the value of the transaction information to
the merchant is the profit on the transaction, possibly a couple of dollars
... and to the transaction processor, possibly a couple of cents. The value
to the crook, however, is the account balance and/or credit limit ... so
crooks can afford to spend 100 times more attacking the system than the
merchant can afford to spend defending it.

X9.59 eliminated the use of the account/credit card number for
authentication and only used it for business processes ... so it was no
longer necessary to hide/encrypt the number.

The problem was that X9.59 represented a major disruption to the status
quo: it effectively would have eliminated much of the existing fraud,
commoditizing the payment industry ... and theoretically threatened the
tens of billions that are made each year off electronic payments.

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: mainframe hacking "success stories"?

2019-05-09 Thread Anne & Lynn Wheeler
Before 370 virtual memory was announced, a copy of an internal document
leaked to an industry magazine. There was then a "Pentagon Papers"-like
investigation to find the leaker. Also all company copiers were
retrofitted to place a machine identification on all copied
pages. Then for the "Future System" project
http://www.jfsowa.com/computer/memo125.htm

they decided to deploy only softcopy versions of the FS documents on
specially modified VM370/CMS systems, which could only be read/accessed
from specific 3270 terminals. I was in the process of moving a lot of my
enhancements from CP67 to VM370 and had some weekend time on a 370
system in a machine room that had one of the modified FS document
systems. I went in late Friday afternoon to double check everything was
ready for me coming in over the weekend. They started needling me that
even if I was left alone in the machine room all weekend, I wouldn't be
able to access FS documents in their enhanced VM370 system. Finally it
got too much, and I asked them to log off all users and disable logins
from outside the machine room. From the front panel, I flipped a bit in
kernel storage, which had the effect of accepting anything typed as
valid password. I then gave them a list of countermeasures that would be
required to block skilled attacker (including encrypted files).

Other trivia ... gone 404, but lives on at wayback machine.
http://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: mainframe hacking "success stories"?

2019-05-08 Thread Anne & Lynn Wheeler
one of the biggest problems doing the (non-SNA) internal network around
the world was when (encrypted) links crossed national boundaries
... lots of push back from numerous countries around the world (even tho
all these links were between purely corporate locations).

other trivia: at big cutover to internetworking protocol on 1Jan1983,
they had approx 100 network nodes and 255 hosts ... when the internal
network was rapidly approaching 1000 systems (which it passes a few months
later). old post with corporate locations around the world that added
one or more network nodes during 1983:
http://www.garlic.com/~lynn/2006k.html#8

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: mainframe hacking "success stories"?

2019-05-08 Thread Anne & Lynn Wheeler
sipp...@sg.ibm.com (Timothy Sipples) writes:
> Together we sketched a picture of all this on a whiteboard so I could
> understand what they had done. After we drew the picture, I asked this
> simple question: "Is this secure?" After a very little bit of side
> discussion, very quickly, they did two things: (1) they changed their
> "security" policy, and (2) they went immediately to work to change
> everything I just described.

early 80s, I had HSDT project doing T1 (1.5mbits/sec) and faster speed
full-duplex links. IBM internal network was larger than arpanet/internet
from just about the beginning to sometime mid-80s ... and the same
technology was used for the IBM sponsored university BITNET (also larger
than arpanet/internet for a time). It wasn't SNA ... until late 80s when
the communication group was claiming that the internal network would
stop working if not converted to SNA/VTAM ... which occurred about the
same time that BITNET converted to TCP/IP.

Corporate also required that all links leaving IBM physical locations
had to be encrypted ... which were external hardware link encryptors
(mid-80s, major hardware link encryptor company claimed that IBM had
over half of all the link encryptors in the world).

One of my problems was I really hated what I had to pay for T1 link
encryptors (a few thousand) and it was really hard to find faster link
encryptors (less of a problem for links supported by standard IBM
controllers which were limited to 56kbit links).

I eventually got involved in doing a hardware link encryptor that would
cost less than $100 to build and support at least 3mbyte/sec ... with
some other tweaks. Initially the corporate crypto product group said
that it significantly weakened the DES standard. It took me 3 months to
figure out how to explain what was happening (it was significantly
stronger than standard DES, & not TDES) ... but it turned out to be a
hollow victory. I was told that I could make as many as I wanted ... but
there was only one organization in the world that could use such crypto;
I could make as many as I wanted to, but they all had to be shipped to
an address in Maryland. That was when I realized that there were 3 kinds
of crypto in the world: 1) the kind they don't care about, 2) the kind
you can't do and 3) the kind you can only do for them.

Other trivia: doing mainframe DES in the early 80s for a full-duplex T1
required both processors of a dedicated 3081K doing nothing else but
executing standard DES. There was also work on doing public key for
email (PGP-like public key).
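
rough arithmetic for why that was (python sketch; the software DES
cost-per-byte figure is an assumed round number for illustration):

t1_bits_per_sec  = 1_544_000                    # T1 line rate
duplex_bytes_sec = 2 * t1_bits_per_sec / 8.0    # encrypt one direction, decrypt the other
instr_per_byte   = 40                           # assumed software DES cost per byte

mips_needed = duplex_bytes_sec * instr_per_byte / 1e6
print(round(mips_needed, 1))   # ~15 MIPS of DES -- consistent with tying up a dedicated 2-processor 3081K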

Last product we did at IBM was RS/6000 HA/CMP (it originally started out
as HA/6000, but I quickly changed the name to HA/CMP when we started
working with national labs (technical/scientific) and RDBMS vendors
(commercial) on cluster scaleup). Old reference on Jan1992 meeting in Ellison's
conference room on 128-way cluster scaleup:
http://www.garlic.com/~lynn/95.html#13
within a few weeks of the meeting, cluster scaleup is transferred,
announced as supercomputer (for technical/scientific *ONLY*), and we are
told that we can't work on anything with more than four processors. A
few months later we leave.

Later two of the Oracle people in the Ellison meeting have left and are
at a small client/server startup responsible for something called
"commerce server" and we are brought in as consultants because they want
to do payment transactions on the server, the startup had also invented
this technology they call "SSL" they want to use, the result is now
frequently called "electronic commerce".

I have absolute authority over everything from servers to payment
networks ... and make several tweaks to the HTTPS implementation to
improve integrity and availability ... but can only make recommendations
on the browser/server side ...  some of which are almost immediately
violated ... contributing to problems, some that continue to this day.

Second half 90s, I'm giving presentations on "Why Internet Isn't
Business Critical Dataprocessing" at various internet meetings. Problems
aren't TCP/IP design ... but various glitches in deployments by various
organizations.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: S/360

2019-04-15 Thread Anne & Lynn Wheeler
t...@tombrennansoftware.com (Tom Brennan) writes:
> This reminds me of my first (junk pile) floppy disk drive back in the
> 1970's for my home-made computer.  I had little money so I made my own
> controller out of a dozen chips and wrote some 8080 code to handle the
> I/O.  So the format of the disk was totally up to me, and not
> compatible with anything else.  I did just what you said and settled
> on about 3K per track.  But that was with no separate records or
> sectors - you had to read the entire track if you wanted any data,
> which I found out later (when I took my first computer class) wasn't
> too smart.

re:
http://www.garlic.com/~lynn/2019b.html#52 S/360
http://www.garlic.com/~lynn/2019b.html#53 S/360

recent (facebook ibm retirees) post ... with 3390/3990, iceberg,
seastar, etc:

IBM Adstar tried to counter (STK Iceberg) with seastar (software was
seahorse) ... current web search for references just turns up my old
usenet posts I've archived at garlic.com ... which have old online
references that have gone 404 ... although some of them still live at
the wayback machine
https://web.archive.org/web/20080608164743/http://www.informationweek.com/565/65mtrob.htm
https://web.archive.org/web/20060328034324/http://www.stkhi.com/nearline.htm

Besides working with LLNL on technical cluster scaleup "supercomputers"
... we were also working with LLNL on porting their high-performance
filesystem LINCS they had originally done on Cray ... including HA/CMP
version (Unitree). As I've periodically mentioned before ... a week or
two later we had meeting in ellison's conference on commercial cluster
scaleup, reference
http://www.garlic.com/~lynn/95.html#13
then a couple weeks after that meeting, cluster scaleup was transferred,
announced as IBM supercomputer (for technical/scientific "ONLY") and we
were told we couldn't work on anything with more than four processors
(we leave IBM a few months later)

Date:  Mon, 30 Dec 91 15:07:32 -0800
To: wheeler
Subject: Unitree log structured filesystem

Lynn, this could be useful for the high-end for a non-obvious reason.

We recently removed the only future CKD 5.25" DASD from the plan. The
next high-end DASD, Cortez, will do CKD emulation.  Emulation is the
right way to do CKD, but unfortunately Cortez is forced to emulate
behind a 3990.  This is badness because the DDC interface behind a 3990
is gap-synchronous.  This means there is a Read-Modify-Write cycle to do
a write.  This is inherent to a 3990.

The only way to do CKD Emulation correctly is to get rid of
3990. Seastar plans to do this, but not until 2Q95.  In the interim, we
need a high performance high-end CKD subsystem.  We would like array
support if possible, and compression.

What has this to do with HA/Unitree LFS you ask?  If we added some ESCON
cards to an HA/950, and we did CKD Emulation SW and Compression HW here
at ARC, we could quickly build a cached/arrayed/CKD subsystem.  We could
stripe data across Harrier drawers 3+P.  We could compress because we
have LSFS (update in place precludes compression).  We could cache in
Unitree memory.  Would this work ?  It would be really keen.

... snip ...

In late 70s and first part of 80s, I got to spend some time playing disk
engineer in bldgs 14&15 ... but later 80s, I was spending more time on
risc & turning out HA/CMP product. Above email references HA/950
... which is HA/CMP running on RS6000/950 (product started out HA/6000,
but I fairly quickly renamed HA/CMP when started on cluster scaleup)
... high-end rack mounted system. I've previously mentioned that ESCON was
announced in 1990 with ES/9000 when it was already obsolete. In 1988, I
was (also) asked to help LLNL standardize some serial stuff that they
had been playing with which quickly becomes fibre channel standard
(including some stuff that I had done in 1980), initially 100mbyte/sec
concurrent in both direction, compared to ESCON 17mbyte/sec half-duplex.

The post about Jan1992 Ellison commercial cluster scaleup (128-way
ye1992) meeting, and the above email, mention Harrier(/9333) which we
were using in some HA/CMP configurations. It was high-speed fixed-block
disks using packetized SCSI protocol running over 80mbit/sec full-duplex
serial copper. I mention that I had hoped that it evolves into 1/8th
speed interoperable fibre-channel standard ... but instead it evolves
into IBM proprietary SSA (after we leave):
https://en.wikipedia.org/wiki/Serial_Storage_Architecture
The above wiki
mentions that SSA was "overtaken" by FCS, but I had been asked to help
with standardizing what becomes FCS in 1988 ... and I wanted
Harrier/9333 to evolve to be FCS interoperable instead of IBM proprietary SSA.

... end of ibm retiree post ...

aka, real CKD DASD hasn't been made for decades ... but POK's favorite
son operating system has not been able to wean itself off it.

as mentioned previously, somewhere along the way, some POK engineers
become involved with FCS and define an extremely heavyweight protocol
that drastically reduces the native throughput ... eventually released
as FICON.

Re: S/360

2019-04-11 Thread Anne & Lynn Wheeler
wmhbl...@comcast.net (WILLIAM H BLAIR) writes:
> Donald Ludlow WAS indeed the principal author
> of OS/360 IOS. In fact, he wrote ALL of the
> code that actually survived and was shipped.
> There was another gentleman who CLAIMED to be
> the "author" of IOS (whom I knew personally), 
> but everything he did had to be redone by Don 
> (or mostly, in fact, simply thrown away).
>
> Mr. Ludlow moved to Raleigh, NC and worked on
> SPF (as it was then called), incorporating the
> SUPERC FDP into ISPF/PDF as what we know today 
> as options 3.12, 3.13, and 3.14 (a "recent" 
> enhancement adds 3.15).
>
> He wrote some of the slickest, tightest S/360
> Assembler code I've ever seen or had to modify
> learning a lot about device channel programming
> from it (and from him).

this is a reference to getting a request to find people that had been
involved in the decision to convert all 370s to virtual memory (i.e. MVT
storage management was so bad that region sizes typically had to be
four times larger than actually used; as a result a typical 1mbyte
370/165 only ran four regions ... going to virtual memory could get four
times as many regions with little or no paging)
http://www.garlic.com/~lynn/2011d.html#73

Ludlow was doing the initial implementation of MVT for VS2/SVS ... work
done on a 360/67. Basically not that different from running MVT in a
16mbyte cp/67 virtual machine. Build a table for a single 16mbyte
virtual address space at startup and a little bit of page I/O (not
highly optimized because anticipating little or no actual paging). The
biggest amount of code was the same as CP/67 ... (EXCP/SVC0) got channel
programs built with virtual addresses ... and so had to make a channel
program copy replacing the virtual addresses with real addresses ... and
basically borrowed the code from CP/67 and hacked it into EXCP.

Slight topic drift: in my previous post in this thread, I mentioned
doing a bullet-proof input/output supervisor for bldg14 disk
engineering testing and bldg15 product test
http://www.garlic.com/~lynn/2019h.html#52 S/360

They had previously tried MVS, but in that environment MVS had a 15min
MTBF requiring manual re-ipl. This is a later email just before 3380
customer ship ... FE had a regression test of 57 simulated errors that
were expected to occur in normal operations. MVS was still failing in
all 57 error cases (requiring manual re-ipl) with no indication of what
caused the failure in 2/3rds of the cases ... old email
http://www.garlic.com/~lynn/2007.html#email801015

I did an internal report of all the changes/fixes needed to support any
amount of on-demand concurrent dasd development testing (previously they
were running 7x24 pre-scheduled stand-alone testing) ... and
(unfortunately) happened to mention the MVS 15min MTBF ... which brought
down the wrath of the MVS group on my head (I was told initially they
tried to have me separated from the IBM company).

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: S/360

2019-04-11 Thread Anne & Lynn Wheeler
li...@akphs.com (Phil Smith III) writes:
> And I'm 99.9% sure that DASD capacity was determined by building the
> geometry and then trying various densities until error rates became
> unacceptable, then backing off slightly. Which would explain the
> weird, random sizes with each generation (until 3390, after which it
> went to arrays and became standardized-on what future generations will
> consider a weird size).

error detecting/correcting started moving to fixed block sizes ... aka,
FBA ... even 3380 had fixed cell size for error correcting ... however
POK's favorite son operating system has had difficulty weaning off
CKD. There hasn't been real CKD made for decades, all being simulated on
industry standard fixed block disks.

the recent move from 512byte to 4096byte fixed blocks is largely
motivated by error correction.
fixed-block
https://en.wikipedia.org/wiki/Fixed-block_architecture
FBA 512->4096 migration
https://en.wikipedia.org/wiki/Advanced_Format

original "raid" patent was by IBMer in the 70s
https://en.wikipedia.org/wiki/RAID

first use was IBM S/38 ... because a common single disk failure took out
everything. part of S/38 organization simplification was scatter
allocation across all disks (treated as single filesystem) ... and
therefore any single disk failure took out the whole system ... all disks
had to be backed up as single unit/pool (because of scatter allocation)
... and any recovery required complete system restore.

Note originally 3380 had 20 track spacings between each data-track ...
flying lower meant adjacent tracks had less interference and cut the data
track spacing in half (double-density with twice the number of
tracks/cylinders), then spacing cut again for triple-density (three
times the number of tracks/cylinders).

trivia: I got dragged into an idea the IBM "father of risc" had for a
"wide-head" covering 16 adjacent data tracks with servo tracks on either
side ... read/write all 16 simultaneously (while tracking the servo
tracks on both sides of the data tracks). This was in the 3090 and 3380
triple-density time-frame. The problem was that the data transfer would
be 16*3mbytes/sec or 48mbytes/sec, well beyond IBM mainframe channel
speeds. Even when ESCON is announced in 1990 (with ES/9000, when ESCON
is already obsolete) it is only 17mbytes/sec.

little more trivia: in the 70s, an engineer was running "air bearing"
(floating heads) simulation (part of reducing head flying height
enabling greater densities) on the research 370/195 ... but only getting
a couple turn-arounds a month (even with priority designation). I had
done an enhanced bullet-proof, never-fail operating system for bldgs
14&15 allowing them to move from stand-alone testing to doing concurrent
development testing under the operating system. Turns out even
concurrent testing only used a percent or two of processor ... so we set
up a private online service using the machines. Bldg15 had the 2nd or
3rd engineering 3033 from POK, and we get the air bearing simulation
moved over to the 3033, where he can get several turn-arounds a day
(even tho the 3033 has a little less than half the processing of a
370/195).

more trivia: I had done channel-extender support in 1980 for STL which
was moving 300 people from the IMS group to an offsite bldg ... but the
POK people playing with what becomes ESCON ... block its release to
customers. In 1988, I'm asked to help LLNL standardize some serial stuff
they are playing with which quickly becomes the fibre-channel standard,
including some stuff that I had done in 1980 (FCS, originally
100mbyte/sec concurrent in both directions).

Then some POK engineers get involved in FCS and define a protocol
that radically cuts the native throughput ... which is eventually
released as FICON. The most recent published FICON numbers I've seen
are a peak I/O z196 test that used 104 FICON (running over 104 FCS)
getting 2M IOPS. About the same time there was an FCS announced
for E5-2600 blades claiming over a million IOPS (two such FCS
getting more throughput than 104 FICON running over 104 FCS).
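
quick arithmetic on the per-channel comparison (python, using just the
numbers above):

ficon_total_iops = 2_000_000     # z196 peak I/O test, 104 FICON
ficon_channels   = 104
native_fcs_iops  = 1_000_000     # "over a million" claimed for a single FCS; 1M as a floor

per_ficon = ficon_total_iops / ficon_channels
print(round(per_ficon))                     # ~19,231 IOPS per FICON
print(round(native_fcs_iops / per_ficon))   # a single native FCS at least ~52x one FICON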

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: instruction clock speed

2019-03-07 Thread Anne & Lynn Wheeler
if you want to look at various other comparisons ... Jan1979, I was
con'ed into doing benchmarks on an engineering 4341 for a national lab
that was looking at getting seventy for a compute farm (sort of leading
edge of the coming cluster supercomputing tsunami).

in the wake of the Future System failure, there was a mad rush to get
products back into the 370 pipeline (internal politics had been
shutting down 370 efforts) and 3033 (168-3 logic remapped to
20% faster chips) and 3081 were kicked off in parallel. some
history
http://www.jfsowa.com/computer/memo125.htm

they took a 158 engine w/o the 370 microcode and just the integrated
channel microcode for the 303x (external) channel director. then the
3031 is a 158 engine with just the 370 microcode (no integrated channel
microcode) and a 2nd 158 engine with the integrated channel microcode
(and no 370 microcode). A 3032 is a 168-3 configured to
use the channel director for external channels ... and a 3033 is 168-3
logic remapped to 20% faster chips.

158  45.54 secs
3031 37.03 secs
4341 36.21 secs
168-3 9.1  secs
91    6.77 secs

and real historic cdc6600 35.77 secs

the 158-3 (a 158 engine running both the 370 and integrated channel
microcode) was 45.54 secs, compared to 37.03 secs for the 3031 (two 158
engines, one for 370 mcode only and one for channel mcode only).
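
reading those times as ratios (python; elapsed seconds, lower is
better):

t_158, t_3031, t_4341, t_168_3 = 45.54, 37.03, 36.21, 9.1

print(round(t_158 / t_3031, 2))    # ~1.23x -- offloading channel mcode onto a 2nd engine
print(round(t_158 / t_4341, 2))    # ~1.26x -- a 4341 roughly a 3031 on this benchmark
print(round(t_4341 / t_168_3, 2))  # ~3.98x -- the 168-3 about 4x the 4341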

misc. old 4341 email
http://www.garlic.com/~lynn/lhwemail.html#4341

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: instruction clock speed

2019-03-07 Thread Anne & Lynn Wheeler
charl...@mcn.org (Charles Mills) writes:
> It is not possible now. A single instruction may literally add no time at
> all to some instruction sequence.
>
> My imperfect model is that main storage is the new disk. Figure that
> instructions take no time at all and memory accesses take forever. 

I go even further ... the current latency to access memory (on a cache
miss), measured in processor cycles, is comparable to 60s latency to
access disk measured in 60s processor cycles.
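
a rough illustration (python; every figure is an assumed round number,
and the point is only that both come out in the thousands of lost
instruction opportunities per access):

# 60s: ~1 MIPS processor, ~25ms average disk access
instr_per_sec_60s = 1e6
disk_access_sec   = 0.025
print(disk_access_sec * instr_per_sec_60s)     # ~25,000 instruction-times per disk access

# current: assumed 5GHz, 4-wide issue, ~100ns memory latency on a cache miss
issue_slots_per_sec = 5e9 * 4
mem_latency_sec     = 100e-9
print(mem_latency_sec * issue_slots_per_sec)   # ~2,000 issue slots per cache miss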

A few decades ago, RISC started doing multi-stage pipelines, concurrent
execution (with multiple execution units), out-of-order execution,
branch prediction, speculative execution, hyperthreading, etc ... in
part for offsetting cache misses and increasing memory access latency
(sort of equivalent to 60s software multitasking ... but in the hardware
processor).

The poster child has been the 360/195 & 370/195 that did pipelining with
out-of-order execution ... but no branch prediction or speculative
execution. I got roped into a project to hyperthread the 195 (which
never shipped). Conditional branches drained the pipeline ... because of
the stalls associated with conditional branches, most codes only ran the
370/195 at half speed.

The idea was that simulating two processors (hyperthreading) ... each
running at half speed, would achieve full throughput. There is a
discussion about the end of ACS/360 (executives were afraid that it
would advance the computer state-of-the-art too fast and IBM would
lose control of the market), "Sidebar: Multithreading" towards the
bottom of the page ... followed by ACS/360 features that show
up in ES/9000 some 20-odd years later.

Two decades ago, the Intel processors started decomposing Intel
instructions into risc micro-ops for actual execution ... which largely
negated the difference between Intel & risc throughput.

IBM says that about half the throughput increase from the (mainframe)
z10 to z196 processors came from starting to introduce things like
(risc-like) out-of-order execution.


-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Unreadable code

2019-01-18 Thread Anne & Lynn Wheeler
000433f07816-dmarc-requ...@listserv.ua.edu (Paul Gilmartin) writes:
> Intriguing.  Seems to be specifying redirection.  Pointless for CMS or TSO
> since the concept of standard input/standard output is alien to each.  
> Relevant,
> of course, to OMVS, but OMVS was unable to exert much influence on the
> design of z/OS Rexx.
>
> I wonder how this made its way into the Standard since standards tend to
> be descriptive and rarely innovate extensions except to resolve ambiguities
> or inconsistencies.

modulo CMS pipelines
https://en.wikipedia.org/wiki/CMS_Pipelines

at least since 1982 (almost as old as REXX) ... I had the author at the
spring 1982 adtech conference (week before SHARE), archived reference
http://www.garlic.com/~lynn/96.html#4a

later made available on MVS
https://en.wikipedia.org/wiki/BatchPipes
history
https://en.wikipedia.org/wiki/BatchPipes#History

BatchPipes Version 1 was developed in the late 1980s and early 1990s
simply as a technique to speed up MVS/ESA batch processing. In 1997 the
functionality of BatchPipes was integrated into a larger IBM product -
SmartBatch (which incorporated two BMC Corporation product features:
DataAccelerator and BatchAccelerator). However SmartBatch was
discontinued in April 2000.


-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Where's the fire? | Computerworld Shark Tank

2019-01-17 Thread Anne & Lynn Wheeler
sme...@gmu.edu (Seymour J Metz) writes:
> The 370/168 had UP models ranging from 1 MiB to 8 MiB. Double that for MP.
>
> The Amdahl 470V/6 was available in 1 MiB through 8 MiB.
>
> Maybe so, but Amdahl started shipping the 470V/6 in 1975 with 4 MB of
> memory standard, and I'm pretty sure that the 370 model 168 also had 4
> MB in that time frame. I'm pretty sure that either of those processors
> would outperform a 360/75 by a considerable margin. According to
> Wikipedia, the model 75 first shipped in 1965

370/165 had 2mic memory ... typically 1mbyte. It was part of
explanation/justification for making all 370s virtual memory ... i.e.
MVT real storage management was so bad that region sizes had to be four
times larger than actually used ... getting four concurrent regions on
typical 1mbyte memory machine. Going to virtual memory (very much like
running in CP67 16mbyte virtual machine) could get four times as many
regions with little or no paging ... on same 370/165 one mbyte machine.
Old reference about being asked to try and track down reason for the
decision to move to virtual memory for all 370s.
http://www.garlic.com/~lynn/2011d.html#73
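
the arithmetic behind that (python; the region counts are from the post,
the per-region sizes are just a consistent illustration):

real_storage_kb  = 1024          # typical 1mbyte 370/165
declared_region  = 256           # kbytes -- declared ~4x larger than actually touched
actually_touched = declared_region // 4

print(real_storage_kb // declared_region)    # 4 regions under MVT real storage management
print(real_storage_kb // actually_touched)   # ~16 regions once only touched pages need real storage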

part of upgrade from 165 to 168 was moving to new memory technology
... about same access time as 370/145, around 400ns (but 165&168 also
had 80ns cache)
https://en.wikipedia.org/wiki/IBM_System/370_Model_168

Newer technology than that of the 370/165, which had been introduced 2
years prior, used "monolithic, instead of magnetic core" memory,[5]
resulting in a system which was faster and physically smaller than a
Model 165.[5]:pp.3

... snip ... 

168-1 to 168-3 doubled cache size from 16kbytes to 32kbytes. There was
one vm370/vs1 customer that upgraded from 168-1 to 168-3 (double cache)
and found it running much slower than the 168-1. The issue was that the
168-3 ran the 2k page option with only half the cache, and every time
(vm370) switched between 4k pages and 2k pages, the cache was flushed
(w/o all the cache flushing, 2k pages would have run the same as the
168-1).

I had worked with some of the 165/168 engineers and they said that the
other difference was that they optimized the microcode, reducing an avg.
of 2.1 cycles per 370 instruction for the 165 to 1.6 cycles per
instruction for the 168. The 168-3, with optimized m'code, 400ns memory
and 32kbyte cache, was 3-3.5MIPS ... some 3 times that of a 360/75.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Network names

2019-01-04 Thread Anne & Lynn Wheeler
sme...@gmu.edu (Seymour J Metz) writes:
> I would have loved to see an enhanced SNA with internetworking and
> DNS, but when CCITT refused to look at it, that wasn't an option.
>
> If the major TCP-based protocols at least switched to SCTP, that would
> be an improvement.

re:
http://www.garlic.com/~lynn/2019.html#3 Network Names

late 80s, I was on XTP technical advisory board ... that IBM
communication group found hard to block.

XTP was a high-speed alternative to TCP/IP. It supported internetworking
and reliable delivery in a minimum 3-packet exchange (compared to TCP
that requires a minimum 7-packet exchange for reliable transmission and
the earlier stanford VMTP that required a minimum 5-packet exchange for
reliable transmission).

We had been doing rate-based pacing inside the HSDT effort (T1 & faster
speed links, both satellite and terrestrial) for several years ... and I
wrote the draft for rate-based pacing in XTP. There was lots of XTP
reliable multi-cast work by various DOD organizations (went into the
navy's SAFENET). Also cleaned up some other stuff in TCP flow that was
serialized ... so it could be pipelined. Much of this was influenced by
Greg Chesson at SGI and SGI's pipelined graphics engines.

SCTP, XTP and TCP as transport protocols for high performance computing
on multi-cluster grid environments
https://dl.acm.org/citation.cfm?id=2127989
https://link.springer.com/chapter/10.1007/978-3-642-12659-8_17
Xpress Transport Protocol
https://en.wikipedia.org/wiki/Xpress_Transport_Protocol
SAFENET II-THE NAVY'S FDDI-BASED COMPUTER NETWORK STANDARD
(although it mentions the dreaded "OSI" word)
https://apps.dtic.mil/dtic/tr/fulltext/u2/a230482.pdf
XTP
http://www.cs.virginia.edu/~acw/netx/xtp_long.html

The Xpress Transport Protocol (XTP) has been designed to support a
variety of applications ranging from real-time embedded systems to
multimedia distribution to applications distributed over a wide area
network. In a single protocol it provides all the classic functionality
of TCP, UDP, and TP4, plus new services such as transport multicast,
multicast group management, transport layer priorities, traffic
descriptions for quality-of service negotiation, rate and burst control,
and selectable error and flow control mechanisms.

... snip ...

in some sense, SCTP is a later subset of some of the XTP features
https://en.wikipedia.org/wiki/SCTP
SCTP Oct2000.
https://tools.ietf.org/html/rfc2960

HSDT posts
http://www.garlic.com/~lynn/subnetwork.html#hsdt
XTP posts
http://www.garlic.com/~lynn/xtphsp
rate-based pacing draft for XTP (1989)
http://www.garlic.com/~lynn/xtprate.html

we were also doing some sleight of hand with selective resend. There was
work with a Berkeley Reed-Solomon company (that did a lot of the work
for the CDROM standard, they were then bought by Kodak) on high-speed
15/16-rate Reed-Solomon ... and selective resend (if a packet couldn't
be corrected by the RS-FEC) would transmit the 1/2-rate Viterbi encoding
(rather than the original data; combining it with the failed packet
could reasonably recover even if both packets had unrecoverable errors
with RS-FEC) ... and if things really got noisy, dynamically switch to
1/2-rate Viterbi (within 15/16-rate Reed-Solomon).
https://en.wikipedia.org/wiki/Viterbi_decoder
https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction

Reed-Solomon codes are a group of error-correcting codes that were
introduced by Irving S. Reed and Gustave Solomon in 1960.[1] They have
many applications, the most prominent of which include consumer
technologies such as CDs, DVDs, Blu-ray Discs, QR Codes, data
transmission technologies such as DSL and WiMAX, broadcast systems such
as satellite communications, DVB and ATSC, and storage systems such as
RAID 6.

... snip ... 
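
the code-rate arithmetic for the scheme described above (python; the
packet payload size is an assumed illustration):

payload = 4096                        # assumed packet payload bytes

rs_sent      = payload * 16 / 15      # 15/16-rate Reed-Solomon: ~6.7% parity overhead
viterbi_sent = payload * 2            # 1/2-rate Viterbi: bits doubled
both_sent    = payload * 2 * 16 / 15  # 1/2-rate Viterbi inside 15/16-rate RS (noisy mode)

for name, sent in [("rs only", rs_sent), ("viterbi resend", viterbi_sent), ("viterbi in rs", both_sent)]:
    print(name, round(sent), f"(+{sent / payload - 1:.1%})")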

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Fwd: It's Official: Open-Plan Offices Are Now the Dumbest Management Fad of All Time | Inc.com

2019-01-03 Thread Anne & Lynn Wheeler
marktre...@gmail.com (Mark Regan) writes:
> For those of you who find yourselves in this type of working environment.
>
> https://www.inc.com/geoffrey-james/its-official-open-plan-offices-are-now-dumbest-management-fad-of-all-time.html
>
> Mark T. Regan, K8MTR
> CTO1, USNR-Retired
> 1969-1991

long ago and far away ... from "Real Programmers Don't Eat Quiche":

Real Programmers never work 9 to 5.  If any Real Programmers are
around at 9 AM, it's because they were up all night.

... so they can concentrate and avoid interruptions from co-workers and
phone calls.

older trivia: as an undergraduate, about a year after taking a two
semester hr intro to fortran/computers, I was hired fulltime to be
responsible for the univ. academic and administration os/360 systems.
The univ. shut down
datacenter dedicated to myself ... although 48hrs w/o sleep could make
monday morning classes a little hard.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Network names

2019-01-02 Thread Anne & Lynn Wheeler
sme...@gmu.edu (Seymour J Metz) writes:
> Well, at one time I expected
> https://en.wikipedia.org/wiki/Government_Open_Systems_Interconnection_Profile
> (GOSIP) to displace SNA, but the Feds went TCP/IP despite the mandate
> and that was all she wrote.

Part of GOSIP was a mandate to eliminate tcp/ip and the internet. At
Interop '88 there were some number of OSI application booths (even tho
it was an internet conference) ... supposedly vendors trying to appeal
to expected government customers. However neither OSI nor SNA had an
internet layer (SNA also didn't have a network layer).

The market went TCP/IP ... and government agencies went with the market
(government in the 80s started increasingly going COTS, which was
whatever the market was doing).

trivia: there was a joke about ISO & OSI compared to IETF & TCP/IP. IETF
required at least two interoperable implementations before progressing
in the standards process ... while ISO didn't even require a
specification to be implementable to be made a standard. In some sense I
was part of the TCP/IP forces that couldn't see how OSI could ever
prevail, regardless of the Federal mandates. I had IBM equipment in a
booth at
Interop 88 (but not in the IBM booth).

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Jean Sammet — Designer of COBOL – A Computer of One’s Own – Medium

2018-12-05 Thread Anne & Lynn Wheeler
john.archie.mck...@gmail.com (John McKown) writes:
> https://medium.com/a-computer-of-ones-own/jean-sammet-designer-of-cobol-77c6d794365c

Sammet wiki
https://en.wikipedia.org/wiki/Jean_E._Sammet
more
https://history.computer.org/pioneers/sammet.html
and
https://www.nytimes.com/2017/06/04/technology/obituary-jean-sammet-software-designer-cobol.html

sammet was (resident) in the boston programming center, 3rd flr, 545
tech square (as was Nat rochester). When the CP67/cms group was spun off
from the science center (on the 4th flr), they moved to the 3rd flr and
took over the boston programming center (& sammet and rochester moved up
to the science center on the 4th flr). As the CP67 group morphed into the
vm370 group and outgrew the 3rd flr, the vm370 group moved out to the
old SBC (service bureau corporation) bldg at burlington mall.

I would come in on weekends and sometimes bring my kids. sometimes I
would set them up to play the (pdp1) spacewar port to 2250m4 (1130+2250)
... but other times they would just run up and down the halls.  Sammet
was usually the only other person on the flr ... to complain about the
noise of my kids running up & down the halls.

posts mentioning 545 tech sq
http://www.garlic.com/~lynn/subtopic.html#545tech
cambridge science center wiki
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: IBM Z and cloud

2018-10-01 Thread Anne & Lynn Wheeler
sipp...@sg.ibm.com (Timothy Sipples) writes:
> z/OS is UNIX(TM), certified by The Open Group and a trademark
> bearer. Linux is not UNIX, as it happens. Apple's macOS is UNIX, while
> iOS, tvOS, and watchOS are not. AIX is UNIX. The modern BSD family
> operating systems derived from "Networking Tape 2" (NetBSD, FreeBSD,
> OpenBSD, etc.) are not UNIX.

this (mainframe "POSIX"/unix support) was originally funded/developed by
the (renamed/reorged IBM disk division) ADSTAR software VP ... as part of
trying to work around the communication group ... who also provided
venture/startup funding to entities doing distributed computing support
that would use the mainframe for disk storage.

I've mentioned before that a senior disk engineer got a talk scheduled
at the world-wide, annual, internal communication group conference ...
supposedly on 3174 performance ... but opened the talk with the
statement that the IBM communication group was going to be responsible
for the demise of the IBM disk division. The issue was that the
communication group had corporate strategic responsibility for
everything that crossed datacenter walls and were fiercely fighting off
distributed computing and client/server ... trying to preserve their
(emulated) dumb terminal paradigm and install base. The disk division
was seeing data fleeing to more distributed computing platforms with a
drop in disk sales ... their efforts to correct the problems were
constantly being veto'ed by the communication group. "POSIX" support was
part of the work-around (since it didn't directly involve crossing the
datacenter walls) and funding distributed computing startups didn't
directly challenge the communication group's ownership of everything
(IBM) that crossed the datacenter wall. The communication group
stranglehold on mainframe datacenters didn't just affect disks ... and a
few years later IBM goes into the red.

POSIX ... portable operating system interface ... originally 1988
https://en.wikipedia.org/wiki/POSIX
z/OS here
https://en.wikipedia.org/wiki/POSIX#Compliant_via_compatibility_feature

trivia: "ADSTAR" was the furthest along with reoganization of IBM into
the 13 "baby blues" in preparation for breaking up the
company. reference gone behind paywall, but mostly lives free at wayback
machine
http://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html

then a new CEO was brought in and the breakup reversed ... although as
predicted ... the IBM disk group no longer exists ... even tho CKD DASD
is still required ... but hasn't been manufactured for decades,
all being emulated on industry standard fixed-block disks.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: IBM Z and cloud

2018-10-01 Thread Anne & Lynn Wheeler
sme...@gmu.edu (Seymour J Metz) writes:
> cloud = timesharing
>
> Someone else deploys the infrastructure, to you it's a black box. Less
> control but also less manpower. Some legal issues.
>
> No, z/OS is not a cloud, but neither is AIX, *bsd, Linux, windows or
> Solaris; it's the deployment that makes it a cloud or not a cloud. You
> can have a cloud with z/OS just as much as you can have one with,
> e.g., Linux.

In 1968, there were two commercial spin-offs from the IBM Cambridge
Science Center ... doing virtual machine based online (commercial)
service bureaus. One of the big issues was providing 7x24 non-stop
operation. This was in the days when IBM leased the hardware and charges
were based on the number of hours per month ... based on the "system"
clock
... which would run whenever any CPU and/or channel was busy (and
continue to run for 400ms after everything was idle). 

There was lots of work on CP67 to reduce offshift charges when
(initially) use was light (and little revenue based on online use)
... this included dark room operation ... not requiring an onsite
operator ... as well as a special terminal channel program that would go
to sleep when no characters were arriving ... but would immediately wake
up to accept incoming characters (allowing the system clock & charges to
stop when the system was idle). They also fairly quickly moved up the
value stream, specializing in offering services to the financial
industry, and had to provide significant security features (with
multiple competing financial operations all using the same systems).

past science center posts
http://www.garlic.com/~lynn/subtopic.html#545tech

Trivia: long after IBM changed from leasing to selling hardware ... MVS
still had a timer event that would wake up every 400ms (to make sure the
system clock never stopped).

other trivia: the science center also started offering its virtual machine
CP67 services to other internal IBM operations as well as (free) to
various employees and students at various institutions of higher
learning in the Cambridge/Boston area. CSC had also ported apl\360
(typically ran with 16kbyte workspace) to CMS as CMS\APL ... including
allowing workspace to be virtual address space size and offering API for
system services (like file read/write) ... greatly expanding the
real-world applications that could be done on APL. Early user was
business planning/forcasting from IBM Armonk hdqtrs that loaded the most
holy of IBM data (detailed customer information) on the system ... for
modeling ... and significant security had to be demonstrated ... making
sure people like MIT students wouldn't be able to access corporate data.

For over a decade the large cloud operations have claimed they assemble
their own server systems for 1/3rd the price of brand name server
systems (a typical cloud megadatacenter will have over half a million
blade systems), likely part of the motivation for IBM selling off its
server product business ... along with announcements that the server
chip (processor, etc) makers were shipping over half their chips
directly to the large cloud operations.

The large cloud operations have reduced the cost of their servers so
drastically that they are able to significantly over-provision for
"on-demand" (i.e. huge numbers of idle systems that can instantly be
brought into "on-demand" operation) ... these costs are possibly
1/100,000 the cost/BIPS of a typical IBM mainframe. Because they have
reduced server costs so significantly, power has become an increasingly
major portion of cloud megadatacenter costs ... and they have put
significant pressure on server chip makers to optimize execution power
consumption as well as drop to zero when systems are idle (but instantly
on for "on-demand").
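
a rough illustration of that cost/BIPS gap (python; every figure is an
assumption for illustration, not a quoted price or benchmark):

mainframe_price, mainframe_bips = 30_000_000, 50     # assumed high-end mainframe
blade_price, blade_bips         = 2_000, 500         # assumed commodity blade

mf_cost_per_bips    = mainframe_price / mainframe_bips   # ~$600,000/BIPS
blade_cost_per_bips = blade_price / blade_bips           # ~$4/BIPS
print(round(mf_cost_per_bips / blade_cost_per_bips))     # on the order of 100,000:1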

There are some number of vendors looking at leveraging a lot of the
enormous work done by the cloud megadatacenters for marketing in-house
cloud operations to businesses.

The comparison to the 60s virtual machine commercial online operation
... is that a cloud megadatacenter (with over half a million systems)
typically operates with a staff of 80-120 people (compared to cp/67 dark
room operation) ... and power/cooling dropping to zero when systems are
idle ... but instantly on (compared to 360 channel programs allowing the
channel to go idle and the system meter to stop ... but instantly on)
... as well as quite a bit of work on security.

Early 70s also had TYMSHARE (on west coast) offering online commercial
services (now with VM370). In Aug1976, TYMSHARE also started offering
its CMS-based online computer conferencing "free" to SHARE as VMSHARE
... archives here:
http://vm.marist.edu/~vmshare

other trivia: as an undergraduate in the 60s, I was brought into a small
group in the Boeing CFO office to help with consolidating all
dataprocessing into Boeing Computer Services (an independent business
unit to better monetize the investment; just the renton datacenter had
something between $200M-$300M, 60s dollars, in 360 mainframes, 360/65s
were arriving faster than they could be installed, boxes constantly being

Re: Updated Green Card

2018-07-29 Thread Anne & Lynn Wheeler
internally, somebody did an online "green card" ... using CMS
IOS3270. I provided the person with a section from the 360/67 "blue
card" which included device sense information. More recently I did a
quick conversion from IOS3270 to HTML
http://www.garlic.com/~lynn/gcard.html

some trivia drift:

image of 360/67 blue card and vmshare users guide
http://www.garlic.com/~lynn/folds.jpg

the blue card I got from one of the people that invented GML at the
science center in 1969 (G, M, L chosen for the inventors' last names). I
included the more recent 3380 and A220 in the sense information (A220
was a channel-extender; I had originally done A220 support for STL who
were moving 300 people from the IMS group to an offsite bldg with
services back into the STL datacenter).

in Aug1976, TYMSHARE offered its CMS-based online computer conferencing
system free to SHARE as "VMSHARE", archives here
http://vm.marist.edu/~vmshare

a decade later, vmshare moved to McGill univ. & BITNET after M/D
bought TYMSHARE (ibm-main originated on BITNET in the later part of the
80s).

I had early on cut a deal with TYMSHARE where they sent me a monthly
tape of all VMSHARE files for putting up on the internal network and
systems (including the world-wide online sales support HONE system). The
biggest problem I had was with IBM lawyers who were concerned that
IBM employees would be contaminated with customer information.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Walt Doherty - RIP

2018-05-25 Thread Anne & Lynn Wheeler
g...@gabegold.com (Gabe Goldberg) writes:
> https://jlelliotton.blogspot.com/p/the-economic-value-of-rapid-response.html

Yorktown research also did a study of the minimum human response
perception threshold (somewhat skewed population, members of YKT
research) ... and it varied for different people between .1 seconds and
.2 seconds (the threshold where a person couldn't distinguish that it
was getting faster).

Thadhani's work then went back and looked at the difference between
"system response" and what the human saw. There was a difference between
"system response" and 3270 response ... because a minimum channel
attached 3272/3277 added .089 seconds ... so for a human to see .25sec
response ... the "system response" had to be .161secs (or better).

When the 3274/3278 came out ... a lot of electronics was moved out of
3278 terminal back to 3274 controller (to reduce manufacturing cost)
... but it required a huge amount of coax protocol chatter latency
between the 3274 controller and 3278 terminal ... resulting in typical
.3sec-.5sec hardware response (depending on data stream) ... with a
.3sec minimum. To achieve .3sec person response then required a zero
second system response and to achieve .25sec person response
(i.e. response seen by person) required system response to be negative
.05seconds (needed time machine).
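
the arithmetic from the numbers above (python):

def required_system_response(seen_by_person, hardware_delay):
    return round(seen_by_person - hardware_delay, 3)

print(required_system_response(0.25, 0.089))   #  0.161s -- 3272/3277
print(required_system_response(0.30, 0.300))   #  0.0s   -- 3274/3278 best case
print(required_system_response(0.25, 0.300))   # -0.05s  -- i.e. the time machine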

there were complaints sent to the 3274/3278 product administrator ...
and the eventual response was that 3274/3278 wasn't designed for
interactive computing ... but instead for data entry (i.e. electronic
keypunch).

old post with some of the 3270 & system response comparison
http://www.garlic.com/~lynn/2001m.html#19

and from IBM Jargon:

bad response - n. A delay in the response time to a trivial request of
a computer that is longer than two tenths of one second. In the 1970s,
IBM 3277 display terminals attached to quite small System/360 machines
could service up to 19 interruptions every second from a user - I
measured it myself. Today, this kind of response time is considered
impossible or unachievable, even though work by Doherty, Thadhani, and
others has shown that human productivity and satisfaction are almost
linearly inversely proportional to computer response time. It is hoped
(but not expected) that the definition of Bad Response will drop below
one tenth of a second by 1990.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: z/VM Live Guest Relocation

2018-05-06 Thread Anne & Lynn Wheeler
re:
http://www.garlic.com/~lynn/2018c.html#77 z/VM Live Guest Relocation
http://www.garlic.com/~lynn/2018c.html#78 z/VM Live Guest Relocation
http://www.garlic.com/~lynn/2018c.html#79 z/VM Live Guest Relocation
http://www.garlic.com/~lynn/2018c.html#80 z/VM Live Guest Relocation

some other trivia about the cp67 (precursor to vm370) commercial
spinoffs besides cluster, loosely-coupled, single-system-image, load
balancing and fall-over as well as live guest relocation.

other trivia: I recently posted scans of the 1969 "First Financial
Language" manual to facebook. I got a copy when one of the cp67
commercial spinoffs (from the science center and MIT lincoln labs) was
recruiting me ... and the person primarily responsible for the first
financial language implementation then made some comments. turns out
that he had teamed up a decade later with bricklin to form software arts
and implement visicalc.
https://en.wikipedia.org/wiki/VisiCalc

the other cp67 commercial spinoff from the same period (also a science
center spin-off) was heavily into 4th generation reporting languages ...
at NCSS, moving up the value chain with RAMIS from Mathematica
https://en.wikipedia.org/wiki/Ramis_software
and then NOMAD
https://en.wikipedia.org/wiki/Nomad_software
RAMIS followon, FOCUS
https://en.wikipedia.org/wiki/FOCUS
FOCUS also on another (virtual machine based) commercial online service
https://en.wikipedia.org/wiki/Tymshare

of course all these mainframe 4th generation languages were eventually
pretty much subsumed by SQL/RDBMS, which was developed on a VM370 system at
IBM San Jose Research, System/R ... some past posts
http://www.garlic.com/~lynn/submisc.html#systemr

and Tymshare trivia ... started providing its CMS-based online computer
conferencing (precursor to listserv on ibm sponsored bitnet in the 80s
and modern social media) free to SHARE ... as VMSHARE in Aug1976 (later
also added PCSHARE). vmshare archive
http://vm.marist.edu/~vmshare/

and vm/bitnet trivia (used technology similar to the IBM internal
network ... primarily VM-based)
https://en.wikipedia.org/wiki/BITNET
and vm/listserv reference
http://www.lsoft.com/products/listserv-history.asp

which is where this ibm-main group eventually originates

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: z/VM Live Guest Relocation

2018-05-01 Thread Anne & Lynn Wheeler
re:
http://www.garlic.com/~lynn/2018c.html#77 z/VM Live Guest Relocation
http://www.garlic.com/~lynn/2018c.html#78 z/VM Live Guest Relocation
http://www.garlic.com/~lynn/2018c.html#79 z/VM Live Guest Relocation

note that the US consolidated HONE (branch office sales support)
systems running SSI in Palo Alto had eight 2-processor POK machines
... "AP", with only one processor having channels ... so they had
channel connectivity for eight systems with twice the processing power.
HONE apps were heavily APL applications so they needed max. processing
power ... with relatively heavy I/O. The problem putting larger numbers
in the complex was disk connectivity; IBM offered each disk connected to
a string-switch which connected to two 4-channel 3830 controllers
(maximum of eight systems).

Part of my wife's problem with POK's growing resistance to increasingly
sophisticated loosely-coupled (cluster) support was the burgeoning
vm/4341 clusters (both inside ibm and at customers). A vm/4341 cluster
had more aggregate processing power than a 3033, more aggregate I/O and
more aggregate memory, for less money, lower environmentals and much
smaller floor space.

In Jan. 1979 I was con'ed into doing an LLNL benchmark on an engineering
4341 (before customer ship); LLNL was looking at getting 70 4341s for a
compute farm (leading edge of the coming cluster supercomputing
tsunami). Inside IBM, there was a big upsurge in budget for internal
computing power ... however datacenter floor space was becoming a
critical resource ... vm/4341 clusters were a very attractive
alternative to the POK 3033. vm/4341 (along with FBA 3370) also didn't
require a raised floor and could be placed out into departmental areas
... customers (and IBM business units) were
acquiring 4341s hundreds at a time (leading edge of distributed
computing tsunami). The cluster 4341s and departmental 4341s were
addressing the raised floor bottleneck (both at customers and inside
IBM).

email from long ago and far away with extract from "Adessa" newsletter

Date: 08/26/82 09:35:43
From: wheeler

re: i/o capacity on 4341; from The Adessa Advantage, Volume 1, Number 1,
October 1981., Strategies for Coping with Technology:

... as of this writing, for roughly $500,000 you can purchase a processor
with the capacity to execute about 1.6 million instructions per
second. This system, the 4341 model group 2, comes with eight megabytes
of storage and six channels. Also at this time, a large processor like
the IBM 3033 costs about $2,600,000 when configured with sixteen
megabytes of memory and twelve channels. The processor will execute
about 4.4 million instructions per second.

... What would happen happen if the 3033 capacity for computing was
replaced by some number of 4341 model group 2 processors? How many of
these newer processors would be needed, and what benefits might result
by following such a course of action?

... three of the 4341 systems will do quite nicely. In fact, they can
provide about 10 per cent more instruction execution capacity than the
3033 offers. If a full complement of storage is installed on each of the
three 4341 (8 megs. at this time) processors then the total 24 megabytes
will provide 50 percent more memory than the 3033 makes available. With
respect to the I/O capabilities, three 4341 systems together offer 50
per cent more channels than does the 3033.

.. The final arbiter in many acquisition proposals is the price. Three
4341 group 2 systems have a total cost of about $1.5 million. If another
$500,000 is included for additional equipment to support the sharing of
the disk, tape and other devices among the three processors, the total
comes to $2 million. The potential saving over the cost of installing a
3033 exceeds $500,000.

- - - - - - - - - - - - - - - - - - - - - - - - -

of course Adessa offers a VM/SP enhancement known as Single System
Image (SSI) ... making it possible to operate multiple VM machines as
a single system.

... snip ...
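
checking the newsletter's arithmetic (python; all figures as quoted
above):

p3033 = {"price": 2_600_000, "mips": 4.4, "mem_mb": 16, "channels": 12}
p4341 = {"price":   500_000, "mips": 1.6, "mem_mb":  8, "channels":  6}
n, sharing_gear = 3, 500_000          # three 4341-2s plus gear to share disk/tape/etc

print(round(n * p4341["mips"] / p3033["mips"], 2))             # ~1.09x the instruction capacity
print(n * p4341["mem_mb"], "MB vs", p3033["mem_mb"], "MB")     # 24 vs 16 -- 50% more memory
print(n * p4341["channels"], "vs", p3033["channels"])          # 18 vs 12 channels -- 50% more
print(p3033["price"] - (n * p4341["price"] + sharing_gear))    # 600000 -- saving exceeds $500,000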

note the Adessa company specialized in VM/370 software enhancements, and
included some number of former IBM employees. However, live migration
implementation was still limited to a few (virtual-machine based)
commercial online service providers (the original two were spinoffs of
the ibm cambridge science center in the 60s). trivia: IBM San Jose
Research had also done a vm/4341 cluster implementation ... but lost to
VTAM/SNA (a battle my wife got tired of fighting) ... cluster operations
that had been taking much less than a second elapsed time became over 30
seconds with the move to VTAM/SNA (my wife also had enhancements for
trotter/3088, the eight system CTCA, that reduced latency and increased
throughput, but couldn't get them approved).

Note that 3033 was a quick effort kicked off after the failure of FS
(along with 3081 in parallel) ... initially 168-3 logic remapped to 20%
faster chips ... various tweaks eventually get it to 4.4-4.5MIPS. The
303x external channel "director" was a 370/158 engine with the
integrated channel microcode and w/o the 370 microcode. The engineering
4341 in the
(San 

Re: z/VM Live Guest Relocation

2018-04-30 Thread Anne & Lynn Wheeler
dcrayf...@gmail.com (David Crayford) writes:
> Great story. You should add some content to the Wikipedia page
> https://en.wikipedia.org/wiki/Live_migration.

re:
http://www.garlic.com/~lynn/2018c.html#77 z/VM Live Guest Relocation
http://www.garlic.com/~lynn/2018c.html#78 z/VM Live Guest Relocation

need to get people from the two virtual machine based commercial online
service bureaus (spin-offs from the science center). One was done by a
co-op student that worked for me on cp67 at the science center ... and
then went to the service bureau when he graduated (trivia: a couple
years earlier, the same service bureau tried to hire me when I was an
undergraduate, but when I graduated, I went to the science center
instead).

past posts mentioning science center, 4th flr, 545 tech sq
http://www.garlic.com/~lynn/subtopic.html#545tech

trivia: HONE's availability issues were 1st shift, with all the branch
office people using the systems ... but it wasn't concerned about 7x24
offshift service ... so they didn't have to worry about "live guest
relocation" as a work-around for standard mainframe downtime for service
and maintenance (evenings and weekends). posts mentioning HONE

other trivia: my wife had been in the gburg JES group and was part of
the ASP "catcher" team turning ASP into JES3. She was then con'ed into
going to POK to be in charge of loosely-coupled architecture (mainframe
term for cluster). While there she did peer-coupled shared data architecture
... past posts
http://www.garlic.com/~lynn/submain.html#shareddata

she didn't remain long ... in part because of 1) little uptake (except
for IMS hot-standby until much later sysplex & parallel sysplex) and 2)
constant battles with the communication group trying to force her into
using SNA/VTAM for loosely-coupled operation.

much later we do high availability rs/6000 HA/CMP (cluster,
loosely-coupled) product ... but we still have lots of battles with the
communication group and other mainframe groups.
http://www.garlic.com/~lynn/subtopic.html#hacmp

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: z/VM Live Guest Relocation

2018-04-30 Thread Anne & Lynn Wheeler
re:
http://www.garlic.com/~lynn/2018c.html#77 z/VM Live Guest Relocation

Other CP/67 7x24 trivia. Initially moving to 7x24 was some amount of
chicken & egg. This was back in the days when machines were rented that
IBM charged based on the system "meter" ... that ran when ever the cpu
and/or any channels were operating ... and datacenters recovered their
costs with "use" charges. Initially there was little offshift use but in
order to encourage offshift use, the system had to be available at all
times. To minimize their offshift costs ... there was a lot of CP/67
work down to oeprate "dark room" w/o operator present ... and to have
special CCWs that allowed the channel to stop when nothing was going on
... but startup immediately when there was incoming characters (allowing
system be up and available but the system meter would stop when idle).

Note that for the system meter to actually come to a stop, the cpu(s) and all
channels had to be completely idle for at least 400 milliseconds.
trivia: long after the business had moved from rent to purchase, MVS still
had a timer task that woke up every 400 milliseconds, making sure that if the
system was IPL'ed, the system meter never stopped.

with regard to MVS killing the VM370 product (with the excuse that they needed the
people to work on MVS/XA) ... the VM370 development group was out in the
old IBM SBC (service bureau corporation) building in Burlington Mall (mass, after
outgrowing the 3rd flr, 545tech sq space in cambridge). The shutdown/move plan
was to not notify the people until just before the move ... in order to
minimize the number that would escape. However the information leaked
early ... and a lot managed to escape to DEC (the joke was that the major
contributor to the new DEC VAX/VMS system development was the head of POK). There
was then a witch hunt to find out the source of the leak ... fortunately
for me, nobody gave up the leaker.

past posts mentioning Future System product ... its demise (and
some mention of POK getting the VM370 product killed)
http://www.garlic.com/~lynn/submain.html#futuresys

not long after that, I transferred from the science center out to IBM San
Jose Research ... which was not long after the US HONE datacenter
consolidation up in Palo Alto. One of my hobbies from the time I originally
joined IBM was enhanced production operating systems for internal
datacenters ... and HONE was a long time customer from just about their
inception (and when they started clones in other parts of the world, I would
get asked to go along for the install). I have some old email from HONE
about the head of POK telling them that they had to move to MVS because
VM370 would no longer be supported on high-end POK processors (just
low-end and mid-range 370s from Endicott) ... and then later having to
retract the statements. past posts mentioning HONE
http://www.garlic.com/~lynn/subtopic.html#hone
some old HONE related email
http://www.garlic.com/~lynn/lhwemail.html#hone

in a previous post I had mentioned VMSHARE ... TYMSHARE started offering
its CMS-based online computer conferencing free to SHARE starting in
August 1976. I cut a deal with TYMSHARE to get a monthly distribution tape
of all VMSHARE (and later PCSHARE) files for putting up on internal IBM
systems (also available over the internal network) ... including HONE.
The biggest problem I had was from the lawyers that were afraid IBMers
would be contaminated by customer information. some old email
http://www.garlic.com/~lynn/lhwemail.html#vmshare

another run in with the MVS group ... was that I was allowed to wander
around the San Jose area ... eventually getting to play disk engineer,
DBMS developer, HONE development, visit lots of customers, make
presentations at customer user group meetings, etc.

bldg. 14 disk engineering lab and bldg. 15 disk product test lab had "test
cells" with stand-alone mainframe test time, prescheduled around the
clock.  They had once tried to run testing under MVS (for some
concurrent testing), but MVS had a 15min MTBF in that environment
(requiring manual re-ipl). I offered to rewrite the input/output supervisor
to be bullet proof and never fail ... allowing for anytime, on-demand
concurrent testing, greatly improving productivity. I then wrote up an
internal research report on all the work and happened to mention the MVS
15min MTBF ... which brought down the wrath of the MVS organization on
my head. It was strongly implied that they attempted to separate me from
the company, and when they couldn't, they would make things unpleasant in
other ways.

past posts getting to play disk engineer in bldgs. 14&15
http://www.garlic.com/~lynn/subtopic.html#disk

part of what I had to deal with was the new 3380 ... another MVS story
... FE had developed a regression test of 57 3380 errors that they would
typically expect in customer shops. Not long before 3380 customer ship,
MVS was failing (requiring re-ipl) in all 57 cases ... and in 2/3rds of
the cases there wasn't any indication of what caused the failure. old
email

Re: z/VM Live Guest Relocation

2018-04-29 Thread Anne & Lynn Wheeler
dcrayf...@gmail.com (David Crayford) writes:
> PowerVM had live migration in 2007 [1]. VMware released VMotion in
> 2003 [2] so I guest the trailblazer was VMware.
>
> [1] https://en.wikipedia.org/wiki/Live_Partition_Mobility
> [2] https://en.wikipedia.org/wiki/VMware

the internal world-wide sales (vm/370 based) HONE system had
multi-system single-system image, load-balancing and fall-over by 1978
... the largest was US HONE, which had consolidated datacenters in Palo Alto in
the mid-70s (trivia: when FACEBOOK moved into silicon valley, it was
into a new bldg built next to the old HONE datacenter). The US HONE
datacenter was then replicated in Dallas ... with load-balancing and
fall-over between the two complexes ... and finally a third replicated
in Boulder. They never got around to doing live migration (POK was
constantly putting heavy pressure on HONE to migrate to MVS ... by 1980
they were constantly forced to dump huge amounts of resources into
repeated failed MVS migrations).

However, earlier in the 70s ... the commercial virtual machine CP67
service bureau spin-offs from the science center ... besides doing
multi-machine single system image (load-balancing & fall-over) ... had
also implemented live migration ... originally to provide 7x24 non-stop
operation ... initially for when machine systems and/or hardware was
being taken down for IBM service and maintenance.

Part of the enormous pressure that POK was putting on HONE: after
Future System failed and there was a mad rush to get products back into
the 370 pipeline, POK managed to convince corporate to kill the vm370
product, shut down the VM370 development group, and move all the people
to POK (or supposedly they would miss the MVS/XA customer ship date some
7-8yrs later). Eventually Endicott did manage to save the VM370 product
mission, but had to reconstitute a development group from scratch ...
some of the resulting code quality issues show up in the VMSHARE
archives
http://vm.marist.edu/~vmshare/

so it is 40 years since HONE had (virtual machine) single-system image
and load-balancing/fall-over capability within datacenter and also
across datacenters ... but something like 45 years since the commercial
virtual machine service bureaus had live migration (around 30yrs before
VMware) ... but would never see such features from IBM because of the
enormous political pressure MVS group exerted.

trivia: the last product that my wife and I did before leaving IBM in '92
was RS/6000 HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing

While out marketing, I had coined the terms disaster survivability and
geographic survivability ... and was asked to write a section for the
corporate continuous availability strategy document ... but then the
section got pulled because both rochester (as/400) and POK (mvs)
complained that they couldn't meet the goals.

past posts mentioning HONE
http://www.garlic.com/~lynn/subtopic.html#hone
past posts mentioning HA/CMP
http://www.garlic.com/~lynn/subtopic.html#hacmp
past posts mentioning continuous availability
http://www.garlic.com/~lynn/submain.html#available

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: IRS - 60-Year-Old IT System Failed on Tax Day Due to New Hardware (nextgov.com)

2018-04-24 Thread Anne & Lynn Wheeler
frank.swarbr...@outlook.com (Frank Swarbrick) writes:
> Here's a somewhat interesting document: 
> https://www.irs.gov/pub/irs-pia/imf_pia.pdf.
> "IMF is a batch driven application that uses VSAM files."
> Date of Approval: February 28, 2017 PIA ID Number: 2140 A 
> ...
> Date of Approval: February 28, 2017 PIA ID Number: 2140 A. SYSTEM
> DESCRIPTION 1. Enter the full name and acronym for the system,
> project, application and/or database.
> www.irs.gov

re:
http://www.garlic.com/~lynn/2018c.html#57 The IRS Really Needs Some New 
Computers
http://www.garlic.com/~lynn/2018c.html#62 The IRS Really Needs Some New 
Computers

trivia: around the turn of the century, I was co-author of the financial industry
x9.99 (privacy impact assessment, PIA) standard, along with somebody
that had previously worked at treasury. current version
https://webstore.ansi.org/RecordDetail.aspx?sku=ANSI+X9.99%3A2009+(Identical+to+ISO+22307-2008)

Work included meetings with several different federal agency privacy
officers ... including IRS.  Also talked to the people behind HIPAA
https://www.hhs.gov/hipaa/for-professionals/privacy/index.html

old reference to voting to approve NWI (new work item) for x9.99 (April
1999)
http://www.garlic.com/~lynn/aepay3.htm#aadsnwi

reference to standard available (April 2004, 5yrs later)
http://www.garlic.com/~lynn/aadsm17.htm#45

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The IRS Really Needs Some New Computers

2018-04-19 Thread Anne & Lynn Wheeler
sipp...@sg.ibm.com (Timothy Sipples) writes:
> Then PARS -> ACP -> ACP/TPF -> TPF -> TPF/ESA -> z/TPF (IBM supported
> today). PARS definitely made it onto System/360, probably from 1965 with
> the first machines. However, there were at least three PARS customers that
> started on IBM 70xx machines: American, Delta, and PanAm. (Were there any
> others?) All three switched over to System/360 and successor machines
> fairly quickly.

re:
http://www.garlic.com/~lynn/2018c.html#57 The IRS Really Needs Some New 
Computers

in the ACP/TPF timeframe, non-airlines ... other reservation systems and
financial institutions ... were starting to use ACP, prompting the change in
name from airline control program to transaction processing facility.

this was also when 308x was introduced, which was going to be
multiprocessor only. the problem was that acp/tpf didn't have
multiprocessor support ... and IBM was concerned that all the ACP/TPF
customers would move to clone vendors which were still offering faster,
newer single processor systems. Eventually the 3083 was introduced,
basically a 3081 with one of the processors removed (one of the problems
was that the 2nd 3081 processor was in the middle of the box; a straight
removal would have left the box dangerously top-heavy).

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The IRS Really Needs Some New Computers

2018-04-17 Thread Anne & Lynn Wheeler
li...@akphs.com (Phil Smith III) writes:
> "Plans to replace the IMF with a twenty-first-century equivalent known as
> CADE (Customer Account Data Engine) have faltered. The transition is now
> well behind schedule. As a consequence, the likelihood of a catastrophic
> computer failure during tax season increases with every passing year. That
> may not pose quite the same danger as an errant missile, but the prospect of
> lost refund checks, unnecessary audits, and other errors suggests that the
> time has come to bring the IRS into the 21st century."
>
> Because.of bitrot? C'mon. That graf is just stupid: ain't no catastrophic
> failure coming because they're running old, well-tested code. SMH.
>
> Wow, lattice of coincidence: after I hit SEND but before this went, I got
> the following from a friend:
>
> https://www.cnbc.com/2018/04/17/irs-tax-payment-site-down-as-agency-works-to
> -resolve-issue.html
>
> Insider news suggests catastrophic system failure. 
>
> Still not due to the software being old, though!

10-15yrs ago I got brought in to look at some of it ... beltway bandits
had contracts for modernization projects ... where they had a bunch of
newly minted graduates that went thru process classes on modern project
management and modern programming. There was a huge amount of mainframe
assembler code that the beltway bandits never got to the point of
understanding, and the projects failed.

however, there was an article a decade ago about (mostly) dataprocessing
modernization projects where the beltway bandits had realized that they
made more money off a series of failures ... not just IRS but many other
gov. agencies
http://www.govexec.com/excellence/management-matters/2007/04/the-success-of-failure/24107/

part of it was beltway bandits and other gov. contractors being
bought up by private-equity companies and coming under heavy pressure to
cut corners every way possible while passing money up to their owners.
http://www.motherjones.com/politics/2007/10/barbarians-capitol-private-equity-public-enemy/

Lou Gerstner, former ceo of ibm, now heads the Carlyle Group, a
Washington-based global private equity firm whose 2006 revenues of $87
billion were just a few billion below ibm's. Carlyle has boasted George
H.W. Bush, George W. Bush, and former Secretary of State James Baker III
on its employee roster.

... snip ...

includes buying the beltway bandit that would later employ Snowden
... contractors account for 70% of the intelligence budget and over half the people
http://www.investingdaily.com/17693/spies-like-us

part of it: it is illegal to use gov. contract money for lobbying congress,
however private equity owners don't appear to be under any such restrictions
(so PE operations could buy up beltway bandits and boost revenue with
lobbying).

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Slashdot: Business under-investing in I.T.

2018-04-09 Thread Anne & Lynn Wheeler
poodles...@sbcglobal.net (Dan @ Poodles) writes:
> It's simple business economics - i.e., cost center vs profit center.
> Businesses will always invest in revenue generating first above all
> else.

big cloud megadatacenters (hundreds of thousands of systems, millions of
processors) had been spending enormous amounts to commoditize server
systems; for more than a decade they say they've been assembling their own
systems at 1/3rd the cost of brand-name systems. About the time server
chip makers said they were shipping more chips to cloud customers (to
assemble their own systems) than to server vendors, IBM sold off its
server business.
http://www.opencompute.org/

They've managed to so commoditize the cost of servers (a cost that shows
directly on the bottom line) that power & cooling have increasingly become a
major part of their costs ... and they are putting heavy pressure on server
chip makers to significantly improve power (& cooling) use. Also they've
managed to drop server cost so drastically that they can provision large
numbers of extra systems for "on-demand" ... and they require systems whose
power/cooling drop to near zero when idle but are nearly instant on when
needed.

It is so changing the metrics that server price/performance is being
replaced by power efficiency ratio.
https://en.wikipedia.org/wiki/Power_usage_effectiveness

and PUE is also affecting locations chosen to build new megadatacenters.
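For reference, PUE is just the ratio of total facility power to the power
actually delivered to the IT equipment (1.0 would be ideal). A tiny python
illustration with made-up numbers:

    # power usage effectiveness: total facility power / IT equipment power
    # (sample numbers are invented, purely illustrative)
    def pue(total_facility_kw, it_equipment_kw):
        return total_facility_kw / it_equipment_kw

    print(pue(10_000, 8_000))   # 1.25 -- the extra 25% is cooling, power
                                # conversion losses, lighting, etc.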

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Software Delivery on Tape to be Discontinued

2018-04-06 Thread Anne & Lynn Wheeler
llo...@gmail.com (Lou Losee) writes:
> Yes you are correct that you have to initiate your trust somewhere.  The
> paradigm is that you trust the vendor that delivers the CA certificates to
> you (e.g., Mozilla, Microsoft, IBM, etc.)
> Hand delivering keys defeats the purpose of using certificates.  If you
> were going to hand deliver keys, you might as well just use a symmetric
> cipher rather than asymmetric. If you want perfect unbreakable encryption
> then you should hand deliver one time pads between the parties.

re:
http://www.garlic.com/~lynn/2018c.html#28 Software Delivery on Tape to Be 
Discontinued

symmetric keys ... like passwords ... are shared secrets ... you need a
unique value for every security domain (or use, as a countermeasure to
cross-domain attacks).

the same single public/private key pair could be used for every security
domain in lieu of a unique shared secret ... shifting from an
institution-centric security paradigm to a person-centric security
paradigm.

this is somewhat analogous to biometric authentication ... but can work
at a distance rather than requiring the person's biometric to be physically
present.

the trivial approach is registering a public key in lieu of a
unique pin/password at every institution ... we actually did
example implementations for radius, kerberos and some number
of other widely used authentication infrastructures.

the problem was that it would generate no new revenue stream ... and
the certification authority industry really wanted their $20B/year

trivia ... in the digital signature scenario ... some hash (SHA, MD5,
etc) is calculated for the data, the hash is encrypted with the private key
and appended to the data. the recipient decrypts the digital signature
with the public key and compares the decrypted value with the recalculated
hash. This confirms/authenticates that the original data hasn't been
changed and also confirms/authenticates the sender.

For the use in lieu of pin/password ... the institution has to protect
against replay attacks. Rather than the user generating the data that is
signed ... the institution sends the user some unique data. The user
then encrypts the hash of the unique data with their private key and
returns the encrypted hash to the institution (doesn't have to return
the data, since the institution already has it). The institution then
decrypts the returned value with the public key (saved in the file that had
previously stored the pin/password) and compares it with the hash of the
originally transmitted value.

There is no longer a danger of pin/passwords being skimmed or the
pin/password file being copied ... and in fact the exchange can be transmitted
completely in the clear (w/o any additional encryption).
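A minimal python sketch of that challenge/response flow (using the
third-party "cryptography" package; the names and details here are
illustrative, not any particular product or the X9.59 spec itself):

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # enrollment: the institution stores the public key where the
    # pin/password used to be kept
    user_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    account_record = {"public_key": user_private.public_key()}

    # 1. institution sends some unique, never-reused data
    challenge = os.urandom(32)

    # 2. user signs it (the sign call hashes the challenge and transforms
    #    the hash with the private key, which is never divulged)
    signature = user_private.sign(challenge, padding.PKCS1v15(), hashes.SHA256())

    # 3. institution verifies against the stored public key; verify()
    #    raises on forgery, and a replayed signature fails because the
    #    next challenge is different
    account_record["public_key"].verify(signature, challenge,
                                        padding.PKCS1v15(), hashes.SHA256())
    print("authenticated -- nothing secret crossed the wire")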

We used this for X9.59. The financial industry started out saying
that they could not trust some other certification authority and
would only recognize "relying party only certificates" (i.e. only
recognize certificates that they had issued).

However, a certification authority has to create a business process that
does background checking on the public key and then saves the public key in some
administration file (before issuing a certificate).  For a
financial institution, that would all be collapsed into the account
record. And any sort of financial authentication involves
accessing the account record ... where the public key has been stored
... having it also in an appended digital certificate (that is typically
100 times larger than the financial transaction) is then redundant and
superfluous (since a financial institution will already have the public
key).

Then for the whole financial industry, certificates became
unnecessary. Also, since X9.59 account transactions could only be done
with a public key (digital signature) ... crooks were no longer able to do
fraudulent transactions against an x9.59 account w/o the corresponding private
key. Since the private key was never divulged ... it was no longer
necessary to hide/encrypt/ssl/tls such financial transactions (skimming,
breaches, eavesdropping weren't prevented, but the risk of crooks
being able to use the information was eliminated).

One of the things that the transit industry then asked was if I could
design a chip that could do such transactions and be implemented in a
contactless transit card (i.e. the amount of power for doing calculations
is severely limited) within the time constraint of a turnstile (1/10 sec or
less). Turns out using a variation of ECC would be as strong or stronger
than RSA ... and do the calculations within the power and time
constraints of a transit contactless turnstile.

other trivia: it's certification authority ... not certificate authority
... the service provided is certification (some correspondence between
the public key and some entity) ... which is then encapsulated in a
certificate. However, since the fine print frequently says that the
certification has no warranty ... there is frequent desire to distract
the market ... and make believe that the certificate by itself has some
magic pixie dust 

Re: Software Delivery on Tape to be Discontinued

2018-04-03 Thread Anne & Lynn Wheeler
l...@garlic.com (Anne & Lynn Wheeler) writes:
> also from bitsavers:
> http://www.bitsavers.org/pdf/amdahl/datapro/70C-044-01_7709_Amdahl_470.pdf

re:
http://www.garlic.com/~lynn/2018c.html#27 Software Delivery on Tape to be 
Discontinued

other trivia ... from 470.pdf article

The system that resulted from this shift in direction, the 470V/6,
featured about twice the performance level of the IBM 370/168 at a
similar price, while occupying only one-third of the space required by
the IBM counterpart.

... snip ...

this Future System reference talks about how, after FS imploded, there was a
mad rush to get stuff back into the 370 product pipelines (internal FS
politics had been killing off 370 efforts, and the lack of 370 products
during the FS period is credited with starting to give clone makers a market
foothold) ... kicking off 3033 and 3081 in parallel. 3033 started off
being 168-3 logic remapped to 20% faster chips. Eventually they got it
up to 50% faster by doing some other optimization. 3081 was such a
kludge that it required a huge number of circuits and was much more expensive
to manufacture
http://www.jfsowa.com/computer/memo125.htm

The 370 emulator minus the FS microcode was eventually sold in 1980 as
the IBM 3081. The ratio of the amount of circuitry in the 3081 to its
performance was significantly worse than other IBM systems of the time;
its price/performance ratio wasn't quite so bad because IBM had to cut
the price to be competitive. The major competition at the time was from
Amdahl Systems -- a company founded by Gene Amdahl, who left IBM
shortly before the FS project began, when his plans for the Advanced
Computer System (ACS) were killed. The Amdahl machine was indeed
superior to the 3081 in price/performance and spectacularly superior in
terms of performance compared to the amount of circuitry.

...snip ...

this ACS/360 reference ... besides describing how executives killed it off
because they were afraid that it would advance the state-of-the-art too fast
and they would lose control of the market ... at the end goes into the
ACS/360 features that show up more than 20yrs later in ES-9000.
https://people.cs.clemson.edu/~mark/acs_end.html

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Software Delivery on Tape to be Discontinued

2018-04-03 Thread Anne & Lynn Wheeler
000433f07816-dmarc-requ...@listserv.ua.edu (Paul Gilmartin) writes:
> I don't understand digital signatures beyond what I just read in:
> https://en.wikipedia.org/wiki/Digital_signature
>
> ... Digital signatures are equivalent to traditional handwritten 
> signatures
> in many respects, but properly implemented digital signatures are more
> difficult to forge than the handwritten type.  ...
> Paper contracts sometimes have the ink signature block on the last page,
> and the previous pages may be replaced after a signature is applied.  ...
>
> But it seems that all such schemes depend on being able to authenticate
> a public key from some certificate authority.  It doesn't appear that a
> digitally signed document can be entirely self-contained.
>
> So is a signature any more secure than an independently verifiable checksum,
> or just more practical?

trivia: a digital signature is the hash of the document (SHA-2) that has
been encrypted with the private key. On reception, you recompute the
hash, decrypt the digital signature with the corresponding public key
and compare the two hashes. One of the original motivations for
public/private key was to get around some of the secret key distribution
problems (secret keys have to be hidden and never divulged). A public key can be
publicly distributed (w/o needing to be hidden). People can use the public
key to encrypt stuff and send it to you ... and only you can decrypt it
(with the private key). You can encrypt stuff with the private key ...
and people can decrypt it (like a digital signature) with the public key
... and know it came from you ... since only your private key could have
encrypted something that is decryptable with your public key.
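A short python sketch of those two directions (again with the third-party
"cryptography" package; the document text and key sizes are illustrative
only):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()
    document = b"software delivery manifest"
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # anyone can encrypt to you with the public key; only the private-key
    # holder can decrypt
    secret = public_key.encrypt(b"only for the key holder", oaep)
    assert private_key.decrypt(secret, oaep) == b"only for the key holder"

    # only you can sign (the library hashes the document and transforms
    # the hash with the private key); anyone can verify with the public
    # key -- verify() raises if the document or signature was altered
    signature = private_key.sign(document, padding.PKCS1v15(), hashes.SHA256())
    public_key.verify(signature, document, padding.PKCS1v15(), hashes.SHA256())
    print("signature verified: document unchanged, sender confirmed")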

we worked on the cal. state electronic signature legislation ... one of
the things is that "digital signatures" aren't true human signatures in
the legal sense ... "digital signatures" can be used for authentication
(in the same way as pins and passwords) ... but need some additional
features to qualify as a legal signature. In that sense, one might claim
that they were purposefully called "digital signatures" in an attempt to
inflate their perceived value (justify charging billions)

The last project we did at IBM was HA/CMP ... where we were working on commercial
cluster scaleup with RDBMS vendors and technical scaleup with national
labs. Old post about Jan1992 meeting in Oracle CEO conference room
on commercial cluster scaleup
http://www.garlic.com/~lynn/95.html#13

within a few weeks of the meeting, cluster scaleup was transferred,
announced as a supercomputer, and we were told we couldn't work on anything
with more than four processors. A possible contributing factor was that
the mainframe DB2 people were complaining that if I went ahead, it would
be at least 5yrs ahead of them. We leave IBM a few months later.

A little while later, two of the Oracle people (from the Jan1992
meeting) had left and were at a small client/server startup, responsible
for something called "commerce server". We are brought in as consultants
because they want to do payment transactions on the server; the startup
had also invented this stuff they called "SSL" they want to use, the
result is now frequently called "electronic commerce".

Somewhat for having done "electronic commerce" we get sucked into
X9 financial standards organization working on new standards.

During this time, I wrote extensively about how it was trivial to use
public/private key in lieu of passwords ... w/o digital
certificates. The problem was that the digital certificate industry was
floating a $20B business case on wall street ...  basically
$100/certificate/annum/person. We were also brought in to help wordsmith
cal. state legislation ... at the time they were working on electronic
signature (and under heavy pressure by the certificate industry to
mandate digital certificates), data breach notification, and "opt-in"
personal information sharing. Electronic signature and data breach
notification passed ... but "opt-in" (institutions could only share your
information with an explicit record of you approving) got pre-empted by an
"opt-out" provision added to GLBA (institutions could share your
information unless they kept a record of you objecting).

some discussion of financial transaction standard that can do
public key authentication w/o digital certificate
http://www.garlic.com/~lynn/x959.html#x959

One of the scenarios was an electronic payment transaction where they
wanted to append to every transaction a digital certificate that was at
least 100 times larger than the transaction size. Partly because I
ridiculed the idea, some of X9 started a compressed digital certificate
work item ... to try and get the digital certificate bloat down to only
20-50 times larger. Then I wrote a detailed analysis showing how to
eliminate the payload bloat by appending to every transaction a digital
certificate compressed to zero bytes (it had all the same detail, it just
didn't occupy any space).

-- 

Re: Software Delivery on Tape to be Discontinued

2018-04-03 Thread Anne & Lynn Wheeler
000a2a8c2020-dmarc-requ...@listserv.ua.edu (Tom Marchant) writes:
> I'm pretty sure that the 470/6 was never shipped. The way I heard it was that 
> work on the 470/V started very soon after the introduction of virtual memory 
> on 370 machines and the announcement of OS/VS1 and OS/VS2. OS/VS1 and 
> OS/VS2 release 1 were both introduced in 1972 and OS/VS2 release 2 (MVS) 
> in 1973, though I don't know when it actually shipped. See 
> http://bitsavers.trailing-edge.com/pdf/ibm/370/OS_VS2/Release_2_1973/GC28-0667-1_OS_VS2_Planning_Guide_for_Release_2_Nov73.pdf
>  
>
> The Wiki article on Amdahl Corporation is no help here. According to it, the 
> 470/6 was introduced in 1975, and that when IBM announced DAT, Amdahl 
> dropped the 470/6 and replaced it with the 470V/6. It also claims, 
> incorrectly, 
> that MDF was first shipped on the 470V/8. In fact, MDF required major 
> architectural extensions that were not available until the 5860.

re:
http://www.garlic.com/~lynn/2018c.html#23 VS History

also from bitsavers:
http://www.bitsavers.org/pdf/amdahl/datapro/70C-044-01_7709_Amdahl_470.pdf
more:
http://www.bitsavers.org/pdf/amdahl/

Amdahl Corporation was the first company to develop and produce an IBM
plug-compatible mainframe computer.  The company, formed in 1971 by Dr.
Gene Amdahl, delivered its first processor, the 470V /6, in June 1975.

The original Amdahl 470 was intended to be a real-memory system
targeted at IBM's System/370 Model 165.  The target moved, however, with
IBM's announcement of the virtual-memory 370/168 in August 1972, and
Amdahl modified its system design to incorporate virtual-memory
hardware, enabling the new system to compete with IBM's latest
technology.  The system that resulted from this shift in direction, the
470V/6, featured about twice the performance level of the IBM 370/168 at
a similar price, while occupying only one-third of the space required by
the IBM counterpart

... snip ...

more amdahl ref:
http://www.bitsavers.org/pdf/amdahl/

Amdahl's account of running ACS/360 ... however it was terminated because
executives were afraid that it would advance the state-of-the-art too fast
and they would lose control of the market
https://people.cs.clemson.edu/~mark/acs_end.html

Amdahl gave a talk in a large MIT auditorium in the early 70s about starting
his company ... filled mostly with students ... but several of us from the
IBM science center attended. He was asked how he convinced VC people
to fund his company. He said that he told them that even if IBM
totally walked away from 370 ... there was sufficient customer 370
software that would keep him in business until the end of the century.
It could be interpreted that he was referring to the IBM Future System effort
that was going to completely replace 370 ... but in subsequent
interviews he claims he never knew about FS. some FS ref:
http://www.jfsowa.com/computer/memo125.htm

370/165 ref. ... the original 370 virtual memory architecture had a lot more
features ... but POK was running into all sorts of problems retrofitting
virtual memory hardware to the 165 ... and claimed that if they had to do the
full architecture, the virtual memory announce would have to slip by
6 months. The decision was made to eliminate the troublesome features ... and
existing 370 models & software that already had support for the removed features
would have to eliminate it (redo hardware and rework software).

other trivia: In the 70s, I did a lot of mainframe customer
presentations and got to know many customers. I got to know the manager
of one of the largest financial mainframe datacenters on the east coast,
who liked me to drop by and talk technology. Then at one point the
branch manager did something that horribly offended the customer. In
response, the customer announced they would order an Amdahl (clone)
mainframe (a lonely Amdahl in a vast sea of blue). At the time, clone makers
had been selling mostly into universities but hadn't broken into the
true-blue large financial market ... and this would be the first. I was
asked to go sit onsite at the customer for 12 months to help obfuscate
the reason for the Amdahl order. I said that I knew the customer really
well and while he liked the idea of me spending my time there, it would
make no difference in the order ... so I didn't see any point. I was
told that the branch manager was a good sailing buddy of IBM's CEO and if
I didn't do this, it would ruin the branch manager's career ... and I
could forget about having any career or promotions at IBM (it wasn't the
first time I got told that).

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: VS History

2018-04-03 Thread Anne & Lynn Wheeler
john.archie.mck...@gmail.com (John McKown) writes:
> ​Not exactly correct. OS/VS1 was a single large address space. That one
> address space was divided up into a _fixed_ number of _fixed sized_
> partitions (not regions). That is, if you had a step which required, say,
> 128M to run, you had to be sure it as in a partition which was at least
> 128M. The size of a partition was set by the sysprog or, IIRC, via an
> operator command. OS/VS2 release 1 was also called SVS (Single Virtual
> Storage). It has a single 24 bit addressable space which has a number of
> "regions" defined. Like in MVT, the size of a region was variable and
> basically it was "GETMAIN'd" when the job (or step - I forget) started. One
> problem that could exist in SVS was that a long running job might be
> GETMAINd while some smaller jobs were running. The long running job's
> storage would be a "sandbar" which could prevent other large jobs from
> running due to lack of contiguous storage. That why many shops would "shut
> down" batch in order to "start up" all the long running tasks, such as
> CICS, IMS, etc so that those STCs would not turn into storage sandbars.

The SVS prototype was initially developed on 360/67 ... basically not too
different from MVT running in a 16mbyte virtual machine ... SVS built tables
for a single 16mbyte virtual address space ... and a little bit of code to handle
the very low rate of paging. The largest amount of code was borrowing CCWTRANS
from CP/67 to hack into the side of EXCP for building shadow channel
programs.

CP/67 had operating systems running in virtual machines building channel
programs with virtual addresses ... for which CP67 CCWTRANS had to build a
"shadow" channel program (same as the original but with real
addresses). OS/VS2 (both SVS and MVS) had the same problem with MVT
applications building their own channel programs, but now the addresses were
virtual ... and then executing EXCP/SVC0. EXCP was now faced with making
a copy of the application's channel programs that replaced the virtual
addresses with real addresses.
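A toy python model (not the actual CCWTRANS code) of the core of what a
"shadow" channel program translation has to do ... copy each CCW, swapping
the virtual data address for the real address the channel hardware needs;
the real thing also pins pages, splits transfers that cross page
boundaries, handles data chaining, TICs, etc.:

    PAGE = 4096

    def shadow_ccws(ccws, page_table):
        # ccws: list of (opcode, virtual_data_addr, byte_count)
        # page_table: virtual page number -> real page frame number
        shadow = []
        for op, vaddr, count in ccws:
            frame = page_table[vaddr // PAGE]        # page must be resident (and pinned)
            raddr = frame * PAGE + (vaddr % PAGE)    # real address for the channel
            shadow.append((op, raddr, count))
        return shadow

    # an application CCW (opcode value shown arbitrarily) targeting virtual
    # page 5, which happens to sit in real page frame 0x12
    print(shadow_ccws([(0x02, 5 * PAGE + 0x100, 80)], {5: 0x12}))
    # -> [(2, 73984, 80)] -- the copy now points at real storage (0x12100)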

The original justification for making all 370s virtual memory came from the
really bad storage management in MVT ... resulting in region sizes
typically having to be four times the storage that would actually be
used ... this severely restricted the number of regions that could be
defined on a typical one megabyte 370/165. Moving MVT to virtual memory
(aka SVS) meant that it could get four times as many regions while doing
little or no paging (this was even w/o long running jobs, which severely
aggravated the storage management problem).

archived post with history from somebody in POK at the time, who was
involved in the decision
http://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory

MVS also turned out to have a horrible problem with the OS/360 API pointer
passing convention ... as a result it started out with an 8mbyte image
of the MVS kernel in each application 16mbyte virtual address space (so
when kernel code got the API pointer, it could directly access the
parameter fields in the application address area). However, MVS (like MVT)
had a lot of subsystems (outside the kernel) that needed to access application
parameters. For this they created the one mbyte common segment area
... storage that appeared in every application 16mbyte virtual address
space ... storage could be allocated in the CSA for parameters that both the
application and a subsystem running in a different virtual address
space could access (note: the max. application area was now 7mbytes out of
16mbytes).

However, the size requirement for CSA is somewhat proportional to the
number of subsystems and activity ... by 3033 ...  CSA had become
"common system area" (rather than "common segment area") and large
installations were having problems restricting it to 5 or 6 mbytes (leaving
2-3mbytes out of 16mbytes for applications) and it was threatening to
grow to 8mbytes ... leaving zero bytes for applications.
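The arithmetic behind that squeeze, just restating the numbers above
(sizes in mbytes):

    ADDRESS_SPACE = 16
    KERNEL_IMAGE = 8          # MVS kernel image mapped into every address space

    for csa in (1, 5, 6, 8):  # common segment/system area growth over time
        print(f"CSA {csa}MB -> {ADDRESS_SPACE - KERNEL_IMAGE - csa}MB left for the application")
    # CSA 1MB -> 7MB ... CSA 5-6MB -> 3-2MB ... CSA 8MB -> 0MB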

After the failure of FS (the original os/vs2 MVS was supposed to be the "glide path"
for the 370 replacement totally different from 370 ... see above
archived post), POK kicked off 3033 and 3081 (370/xa) in parallel. 370/xa
was to address a large number of MVS problems ... one was a new hardware
mechanism for applications directly calling subsystems (w/o having to
execute kernel code) along with "access register" architecture that
provided the ability for subsystems to access storage in a different
application virtual address space.

However, the CSA/API problem was getting so bad on 3033 (before 370/xa),
that a subset of access registers was retrofitted to 3033 as "dual
address space" mode (the person responsible left not long after for HP,
working on their snake/risc, and then later was one of the primary
architects for Itanium, including a lot of enterprise integrity
features).

Endicott's (low/mid range 370s) equivalent to 370/xa was "e-architecture"
... since DOS/VS & VS1 had a single virtual address space,
"e-architecture" moved the virtual address space table into
microcode and new 

Re: Graph database on z/OS?

2018-03-27 Thread Anne & Lynn Wheeler
re:
http://www.garlic.com/~lynn/2018c.html#9 Graph database on z/OS?
http://www.garlic.com/~lynn/2018c.html#10 Graph database on z/OS?

some old "graph", much earlier I had been involved in original
sql/relational implementation, System/R ... some past posts
http://www.garlic.com/~lynn/submain.html#systemr

the "official" next generation DBMS was "EAGLE" ... and while the
corporation was focused on EAGLE, we were able to do technology
transfer to Endicott for release as SQL/DS. Then when EAGLE implodes,
there is request about how fast could System/R be ported to MVS
... eventually released as DB2, originally for decision support only.

About the same time, I was also sucked into helping with a different
kind of relational DBMS ... one that physically instantiated every entity and
every relation ... a little more like IMS ... but w/o record pointers
... entities and relations were content-addressable indexes. As a
result it could represent any kind of information structure
... including tables as well as graphs. IDEA was heavily influenced by
System/R in eliminating explicit record numbers with indexing under
the covers ... but also by Sowa, who was at IBM STL at the time.
http://www.jfsowa.com/pubs/semnet.htm
topic drift, other Sowa reference (about IBM FS)
http://www.jfsowa.com/computer/memo125.htm
http://www.jfsowa.com/computer/
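As a very loose python sketch (this is NOT the actual IDEA implementation,
whose internals aren't given here ... just an illustration of physically
instantiating every entity and every relation behind content-addressable
indexes, with no record pointers):

    from collections import defaultdict

    entities = set()                  # every entity is its own (indexed) record
    relations = set()                 # every relation is its own (indexed) record
    by_subject = defaultdict(set)     # content-addressable indexes
    by_object = defaultdict(set)

    def assert_fact(subj, rel, obj):
        entities.update((subj, obj))
        triple = (subj, rel, obj)
        relations.add(triple)
        by_subject[subj].add(triple)
        by_object[obj].add(triple)

    assert_fact("net_A", "connects", "gate_7")   # non-uniform, graph-shaped data
    assert_fact("gate_7", "drives", "net_B")     # (e.g. a chip design netlist)

    print(sorted(by_subject["net_A"]))           # everything one hop from net_A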

In some respects, System/R (RDBMS) was optimized for financial
transactions: tables with an account number index and most everything
related to the account physically in the same record. IDEA could have a
separate record for every (indexed) entity and every (indexed) relation
(which could be 5-10 times the physical space of RDBMS tables).

Obviously a financial transaction ran much faster on the RDBMS (one record
with all the information) than against the dozen or more records for the
same information in IDEA.

However, for non-uniform information structures, IDEA could be several
times faster. A large VLSI chip design was loaded into DB2 and then
several traces were run to get the best optimization. The (highly
optimized) DB2 test then took nearly 3hrs elapsed time to extract the full chip
design running on a 3081 with 3380 disks. IDEA running with no
optimization on the same 3081 and 3380s ... took less than 30mins to extract the
same chip design (almost ten times faster).

IDEA also had query language that solved/addressed the SQL NULLs problem
(IDEA only has fields for things that exist, there is no direct concept
of "missing values") ... old archived post from DBMS theory discussion
http://www.garlic.com/~lynn/2003g.html#40 How to cope with missing values - 
NULLS?
http://www.garlic.com/~lynn/2003g.html#41 How to cope with missing values - 
NULLS?

We also did some work with NIH national library of medicine. They had
hired a company to load the NLM "index" information into RDBMS ... they
had spent 18months on "normalization" and could only do about 80% of the
data (the rest was loaded unnormalized with lots of duplicates).
Normalization/integrating new information was taking longer than real
time (four months of new medical knowledge was taking more than four
months).

I (one person) spent about three weeks doing the equivalent for IDEA.

some NLM refs:
https://www.nlm.nih.gov/databases/umls.html
https://www.nlm.nih.gov/research/umls/
https://www.nlm.nih.gov/research/umls/sourcereleasedocs/current/MSH/
https://www.nlm.nih.gov/mesh/intro_trees.html

trivia: at the time, NLM still had people that had originally done their
mainframe based online catalog in the 60s (BDAM with their own home
grown transaction system). That was the same time that the univ library had
gotten an ONR grant to do an online catalog; some of the money was used for a
2321 datacell and it was also selected to be a betatest site for the original
CICS product ... and I got tasked with supporting/debugging the CICS
implementation (so we had lots of discussions about online catalogs and
IBM BDAM).
http://www.garlic.com/~lynn/submain.html#bdam

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Graph database on z/OS?

2018-03-27 Thread Anne & Lynn Wheeler
rob.schr...@gmail.com (Rob Schramm) writes:
> Seems like there is a drift about security and walls.. interesting article
> I found about walls when reading Cryptograms...
>
> https://warontherocks.com/2018/02/wall-wall-fortresses-fail/

re:
http://www.garlic.com/~lynn/2018c.html#9 Graph database on z/OS?

possibly more than you ever wanted to know: in part because of doing
electronic commerce, I was sucked into financial standards, financial
industry critical infrastructure protection, and other efforts, like
doing some work with these guys (but from 2004)

Electronic Safety and Soundness Securing Finance in a New Age
http://documents.worldbank.org/curated/en/756761468778791728/pdf/284050PAPER0WBWP026.pdf

This monograph presents a four pillar framework for policymakers in
emerging markets to use in designing responses to the challenge of
assuring electronic safety and soundness of their financial systems. As
such, this paper is focused in part on technological solutions, but more
importantly on the incentives of the many parties involved in assuring
the security of critical infrastructures--from telecommunications and
financial sector service providers to the government and even to the
many final consumers of financial or other services.

... snip ...

we had also been brought in to help wordsmith some cal. state
legislation; they were working on electronic signature, data breach
notification, and opt-in privacy. several privacy organizations
were involved and had done detailed, in-depth public surveys on privacy,
and the #1 issue was identity theft, specifically the form involving
various breaches that resulted in fraudulent financial transactions.

A problem was that little or nothing was being done about these breaches
(except trying to keep them out of the news). A major issue is that
entities take security measures in self protection ... the problem with
the breaches was that the institutions weren't at risk, it was the
public ... so they had little motivation. It was hoped that the
publicity from the data breach notifications might motivate institutions
to take security measures.

that and a combination of other things resulted in doing a financial
transaction standard that slightly tweaked the current infrastructure
...  and eliminated criminals' ability to use information from previous
transactions obtained in breaches for doing fraudulent transactions
(a form of replay attack) ... it didn't prevent breaches, but eliminated the
risk from (and a major motivation for doing) breaches.

two (other) problems: 1) "security proportional to risk": the value of
transaction information to the merchant can be a few dollars (and a few
cents to transaction processors), while the value of the information to
criminals can be the account balance (or credit limit) ... as a result
criminals may be able to outspend defenders by a factor of 100 attacking
(compared to what defenders can afford to spend) and 2) "dual use": transaction
information is used for both authentication and dozens of business
processes at millions of locations around the world ... as a result it
has to be both kept absolutely secure and never divulged and
simultaneously readily available.

for various reasons there are numerous stakeholders with vested
interests in preserving the status quo.

from the law of unintended consequences ... "SSL" for electronic
commerce (worked on earlier) was used to hide financial transaction
information during transmission. the "tweak" eliminates the need to hide
the information ... whether in transmission or "at rest".

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Graph database on z/OS?

2018-03-27 Thread Anne & Lynn Wheeler
dcrayf...@gmail.com (David Crayford) writes:
> I think the general ROT for those kind of systems is that the network
> defines security. All back-end services should be hidden behind
> firewalls and not accessible to the outside world. It's a different
> world these days where everything seems to run on docker images
> orchestrated by something like kuebernetes and secured by LDAP or
> whatever. Nobody dishes out userids unless you need admin.

Skip containers and do serverless computing instead; Container
technologies like Docker are very powerful, but require talent you can't
get. Serverless computing provides the same benefits -- with talent you
can actually get
https://www.infoworld.com/article/3265457/containers/why-serverless-is-the-better-option-than-containers.html

we had worked with several people at Oracle on cluster scaleup ...  part
of the reason cluster scaleup was transferred was the mainframe DB2 people
complaining that if I was allowed to continue, it would be at least 5yrs
ahead of them. Over a period of a few weeks, cluster scaleup was
transferred, announced as an IBM supercomputer (for technical/scientific
*ONLY*) and we were told we couldn't work on anything with more than
four processors. we leave a few months later. past posts
http://www.garlic.com/~lynn/subtopic.html#hacmp

not long after, we are brought in as consultants by two of the (former
Oracle) people we had worked with ... who were then at a small
client/server startup, responsible for something called commerce server;
the startup had also invented this technology they called "SSL" they
wanted to use, the result is now frequently called "electronic
commerce".

As webservers got more complex, there was an increasing number of
RDBMS-backed servers (compared to flat-file based implementations) that
had a significantly larger number of exploits. Part of it was that RDBMS were
much more complex, with a corresponding increase in mistakes (along with
rapidly exploding demand for scarce skills). A specific example was that they
would disable all outside connections for RDBMS maintenance ... and
during maintenance they would relax various security processes.
The complexity of RDBMS meant it was increasingly likely they would overrun
maintenance windows; in the mad rush to get back online they would
frequently overlook reactivating various security processes.

more recent
https://en.wikipedia.org/wiki/SQL_injection
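As a generic illustration of the exploit class linked above (not tied to
any of the systems being discussed), a python/sqlite query built by string
pasting versus one using bound parameters:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (userid TEXT, balance INTEGER)")
    conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")

    userid = "alice' OR '1'='1"            # hostile input from a web form

    # vulnerable: the input becomes part of the SQL text and matches every row
    bad = conn.execute("SELECT * FROM accounts WHERE userid = '%s'" % userid)
    print(len(bad.fetchall()))             # 2 -- both rows leak

    # parameterized: the input is treated purely as data
    good = conn.execute("SELECT * FROM accounts WHERE userid = ?", (userid,))
    print(len(good.fetchall()))            # 0 -- no such user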

all of these have a web application with access ... and attacks are
typically against the web application (where webserver frontends are
also responsible for access control).

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Didn't we have this some time ago on some SLED disks? Multi-actuator

2018-03-22 Thread Anne & Lynn Wheeler
john.archie.mck...@gmail.com (John McKown) writes:
> https://www.theregister.co.uk/2017/12/19/seagate_disk_drive_multi_actuator/

latest from yesterday ...

Seagate's HAMR to drop in 2020: Multi-actuator disk drives on the way
Fast and slow high-cap disk lines coming
https://www.theregister.co.uk/2018/03/21/seagate_to_drop_multiactuator_hamr_in_2020/

trivia: the original 3380 had 20 track widths between each data track,
"double density" cut the spacing between data tracks to 10 track widths,
and then it was cut again for triple density.

in the mid-80s there was work on vertical recording (higher bit
density) and about the same time the "father of risc" got me involved
in his idea for a wide disk head ... basically handling 16 (adjacent) data
tracks with a servo track on each side. The disk was formatted as a servo
track followed by 16 data tracks (followed by another servo track and 16
more data tracks). This would read/write 16 tracks in parallel ... something
like the old 2301 drum that read/wrote four data tracks in parallel.

For the 2301 drum that meant (four times the 2303's 300kbyte/sec) a 1.2mbyte/sec
transfer rate ... which 1.5mbyte channels could handle. The problem for
the wide-head was 16 data tracks in parallel, each at 3mbytes/sec, resulting
in a 48mbytes/sec transfer rate ...  which no mainframe channel could handle
... even ESCON, introduced in 1990, was only 17mbytes/sec.
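The transfer-rate arithmetic from the two paragraphs above, spelled out
(python, just restating the stated figures):

    drum_2301_kbytes = 4 * 300    # four tracks in parallel at the 2303's ~300kbytes/sec
    wide_head_mbytes = 16 * 3     # sixteen tracks in parallel at 3mbytes/sec each
    print(drum_2301_kbytes)       # 1200 kbytes/sec (1.2mbytes/sec) -- fits a 1.5mbyte channel
    print(wide_head_mbytes)       # 48 mbytes/sec -- beyond any mainframe channel of
                                  # the day (even 1990 ESCON was only ~17mbytes/sec)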

In 1988, I was asked to help LLNL standardize some serial stuff they
were playing with, which quickly became the fibre channel standard and
started out handling 100mbytes/sec concurrently in both directions
(200mbytes/sec aggregate) ... but the mainframe FICON protocol built on
fibre channel didn't come along until much later (and the heavyweight FICON
protocol drastically reduced the native fibre channel throughput).

FICON posts
http://www.garlic.com/~lynn/submisc.html#ficon

posts mentioning getting to play disk engineer in blgs 14 (disk
engineering) and 15 (disk product test)
http://www.garlic.com/~lynn/subtopic.html#disk

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: AW: mainframe distribution

2018-03-19 Thread Anne & Lynn Wheeler
m...@beer.at (Mike Beer) writes:
> https://www-03.ibm.com/systems/z/os/zvse/about/history1970s.html

Endicott told me there were 6kbytes available for assist microcode ...  I
was to identify the highest used code paths in the vm370 kernel for
replication in microcode
(standard 370 kernel instructions translated on about a byte-for-byte
basis).

the low & mid-range 370 native (vertical) microcode emulated 370 on
about a 10:1 basis ... so instructions moved from 370 to native code got
approx. a 10:1 speedup.

old post with times I did of vm370 kernel for selecting 6k bytes of code
segments for dropping into "ECPS" microcode
http://www.garlic.com/~lynn/94.html#21

6kbyte cutoff accounted for 79.55% of kernel execution ... gets a 10:1
speedup.
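For a rough sense of what that buys overall, an Amdahl's-law style
estimate using just the two figures above (the combined number is my
arithmetic, not a measured result from the original work):

    moved = 0.7955            # fraction of kernel execution moved to microcode
    assist_speedup = 10.0     # ~10:1 for code dropped into native microcode
    overall = 1.0 / ((1.0 - moved) + moved / assist_speedup)
    print(round(overall, 2))  # ~3.52x on vm370 kernel-mode time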

At the same time there was VS1 handshaking that bypassed certain VS1
processes and left them to VM370 ... with the result that VS1 under VM370 ran
faster than stand-alone on the bare machine.

Endicott then tried to get corporate approval to preinstall vm370 on
every 138&148 shipped from the factory (sort of like LPARs
today). However, this was in the period after the Future System implosion and
the mad rush to get 370 products back into the IBM product pipeline.  POK
kicked off 3033 & 3081 in parallel and convinced corporate to kill the
vm370 product, shut down the vm370 development group and move all the
people to POK to work on MVS/XA (or otherwise MVS/XA wouldn't be able to
ship on schedule). Endicott managed to save the VM370 product mission
... but had to reconstitute a development group from scratch ...  and
wasn't able to convince corporate to allow vm370 to be preinstalled on
every 138&148.

Note, since DOS/VS and VS/1 were single virtual address space systems (something
like the original VS2, SVS) ... E-architecture dropped the single virtual
address table into microcode ... and there were new hardware
instructions to add the virtual->real address page mapping. VM370
always ran in 370 mode supporting multiple address spaces.

4341 caused lots of problems for POK ... it performed better than the 3031
(an ersatz 158), and a small cluster of 4341s outperformed a 3033, cost much
less than the 3033, had a smaller footprint and needed much less in the way
of environmentals.

In 1979, I got con'ed into doing 4341 benchmarks for LLNL, which was
looking at getting 70 4341s for a compute farm ... sort of the leading
edge of the coming cluster supercomputing (and cloud megadatacenter)
tsunami.

It was so threatening to highend mainframes that, at one point, the head of POK
got the allocation of a critical 4341 manufacturing component cut in half.

The price, environmentals & footprint for 4300s & FBA disks had dropped
so far that corporations started ordering them hundreds at a time for
placing out in departmental areas (inside IBM it resulted in conference
rooms becoming a scarce commodity) ... sort of the leading edge of the
coming distributed computing tsunami.

The Boeblingen lab had done the 370 115&125 ... which was a nine position memory
bus for up to nine microprocessors ... for the 115, all microprocessors
(controllers, 370 "cpu", etc) were the same but with different microcode
loads. The 125 was identical to the 115, but the microprocessor for the 370
"cpu" was 50% faster (than the other microprocessors). This
design/implementation was so threatening to other 370 models that they got
corporate to discipline Boeblingen.

At the same time that Endicott con'ed me into working on the ECPS microcode
assist (for 138/148), I got con'ed into doing a 125 design/implementation
which would have up to five of the faster CPU processors all in the same
machine (with four positions left for controllers). In some ways it was
as threatening to the Endicott 148 as 4341 clusters were to the 3033.
In the escalation meetings by Endicott to kill the five processor 125, I was
expected to do the technical arguments for both sides (pro/con 148+ECPS
and pro/con for the 5-way 125)

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: CKD details

2018-01-25 Thread Anne & Lynn Wheeler
sme...@gmu.edu (Seymour J Metz) writes:
> The 3330 was not the first disk drive with Set Sector; that honor
> belongs to the 2305, formally part of the S/360 series rather than the
> S/370, although I imagine that a lot more were sold for use on, e.g.,
> 370/165, than for 85 or 195.

re:
http://www.garlic.com/~lynn/2018.html#77 CKD details
http://www.garlic.com/~lynn/2018.html#79 CKD details

(resend) 2305 was also fixed-head disk (head per track, sort of
replacement for 2301 & 2303 fixed-head drums) ... so there was no arm
movement latency, just rotational delay.

most internal sites used 2305-2 as paging device, approx. 11mbyte
capacity, 1.5mbyte transfer.

There was also the 2305-1: same number of heads, but only half the tracks,
with two heads positioned on each track, offset 180 degrees, transferring
in parallel for 3mbytes/sec (special two-byte channel) ... (a little less
than) half the capacity and half the rotational delay
... basically even/odd bytes, so transfer could start as soon as the
record came under either of the offset/opposing heads.
https://www-03.ibm.com/ibm/history/exhibits/storage/storage_2305.html
and
https://www-03.ibm.com/ibm/history/exhibits/storage/storage_PH2305.html

2305 also had "multiple exposure" support ... eight device addresses
... uniform formatting of tracks ... software strategies for the eight
addresses could be used to let the controller maximize the transfers per
rotation.
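
A toy sketch of the scheduling idea (not IBM microcode, just an
illustration): with several channel programs queued across the exposures,
the controller can start whichever pending request's sector arrives under
the heads next; the sectors-per-track count here is an assumed parameter.

# Toy illustration of why multiple exposures help: pick the queued
# request with the smallest forward rotational distance from the
# current angular position.  SECTORS_PER_TRACK is an assumed value.
SECTORS_PER_TRACK = 128

def next_request(current_sector, pending_sectors):
    def forward_distance(sector):
        return (sector - current_sector) % SECTORS_PER_TRACK
    return min(pending_sectors, key=forward_distance)

# at sector 100 with requests queued at 10, 97 and 120, the request
# at 120 is started first (20 sectors away), then 10, then 97
print(next_request(100, [10, 97, 120]))   # -> 120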

very late 70s, early 80s, IBM contracted with a vendor for electronic
disks (for paging use at internal datacenters) ... referenced as model
1655, could simulate a 2305 or operate natively (more like FBA) ... no arm
motion, no rotational delay. some old email
http://www.garlic.com/~lynn/2007e.html#email820805

as an aside, the 3380 3mbyte channel used "data streaming" ... channels had
used a protocol that did end-to-end (half-duplex) handshaking on every
byte transferred; "data streaming" support would transfer multiple bytes
per end-to-end handshake ... allowing an increased data transfer rate
as well as doubling the maximum channel cabling distance.
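
A toy model of why the per-byte interlock matters (the propagation and
overhead numbers below are illustrative assumptions, not actual channel
specs): each byte can't go out until the previous handshake has made the
round trip, so throughput falls with cable length; handshaking per block
of bytes relaxes both limits.

# Toy model: throughput limited by the end-to-end handshake round trip.
# All numbers are illustrative assumptions, not 3380/channel specs.
def throughput_mb_per_sec(cable_meters, bytes_per_handshake,
                          prop_ns_per_meter=5.0, overhead_ns=200.0):
    round_trip_ns = 2 * cable_meters * prop_ns_per_meter + overhead_ns
    return bytes_per_handshake / round_trip_ns * 1000.0   # MB/sec

for meters in (60, 120):
    print(meters, "m: per-byte",
          round(throughput_mb_per_sec(meters, 1), 2), "MB/s,  16-byte bursts",
          round(throughput_mb_per_sec(meters, 16), 2), "MB/s")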

trivia: ECKD was originally used for calypso ... the speed-matching 3880
controller feature that allowed 3380 3mbyte/sec to be used with 168 & 3033
1.5mbyte/sec channels (took an enormous amount of work to get all the kinks
worked out, i've frequently commented it would have been less effort to
have just moved to FBA). some old email
http://www.garlic.com/~lynn/2007e.html#email820907b
a little more in these posts
http://www.garlic.com/~lynn/2010e.html#36 What was old is new again (water 
chilled)
http://www.garlic.com/~lynn/2015f.html#83 Formal definituion of Speed Matching 
Buffer

recent post trying to get 2nd "exposure" (device address) for the 3350
fixed-head feature (allowing data transfer overlapped with arm motion)
http://www.garlic.com/~lynn/2017k.html#44 Can anyone remember "drum" storage?

getting to play disk engineer in bldgs 14&15 posts
http://www.garlic.com/~lynn/subtopic.html#disk
CKD, FBA, multi-track search, etc posts
http://www.garlic.com/~lynn/submain.html#dasd

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: CKD details

2018-01-24 Thread Anne & Lynn Wheeler
sme...@gmu.edu (Seymour J Metz) writes:
> The 3330 was not the first disk drive with Set Sector; that honor
> belongs to the 2305, formally part of the S/360 series rather than the
> S/370, although I imagine that a lot more were sold for use on, e.g.,
> 370/165, than for 85 or 195.

re:
http://www.garlic.com/~lynn/2018.html#77 CKD details
http://www.garlic.com/~lynn/2018.html#79 CKD details

2305 was also fixed-head disk (head per track, sort of replacement for
2301 & 2303 fixed-head drums) ... so there was no arm movement latency,
just rotational delay.

most internal sites used 2305-2 as paging device, approx. 11mbyte
capacity, 1.5mbyte transfer.

There was also the 2305-1: same number of heads, but only half the tracks,
with two heads positioned on each track, offset 180 degrees, transferring
in parallel for 3mbytes/sec (special two-byte channel) ... (a little less
than) half the capacity and half the rotational delay
... basically even/odd bytes, so transfer could start as soon as the
record came under either of the offset/opposing heads.
https://www-03.ibm.com/ibm/history/exhibits/storage/storage_2305.html
and
https://www-03.ibm.com/ibm/history/exhibits/storage/storage_PH2305.html

2305 also had "multiple exposure" support ... eight device addresses
... uniform formatting of tracks ... software strategies for the eight
addresses could be used to let the controller maximize the transfers per
rotation.

very late 70s, early 80s, IBM contracted with a vendor for electronic
disks (for paging use at internal datacenters) ... referenced as model
1655, could simulate a 2305 or operate natively (more like FBA) ... no arm
motion, no rotational delay. some old email
http://www.garlic.com/~lynn/2007e.html#email820805

as an aside, the 3380 3mbyte channel used "data streaming" ... channels had
used a protocol that did end-to-end (half-duplex) handshaking on every
byte transferred; "data streaming" support would transfer multiple bytes
per end-to-end handshake ... allowing an increased data transfer rate
as well as doubling the maximum channel cabling distance.

trivia: ECKD was originally used for calypso ... the speed-matching 3880
controller feature that allowed 3380 3mbyte/sec to be used with 168 & 3033
1.5mbyte/sec channels (took an enormous amount of work to get all the kinks
worked out, i've frequently commented it would have been less effort to
have just moved to FBA). some old email
http://www.garlic.com/~lynn/2007e.html#email820907b
a little more in these posts
http://www.garlic.com/~lynn/2010e.html#36 What was old is new again (water 
chilled)
http://www.garlic.com/~lynn/2015f.html#83 Formal definituion of Speed Matching 
Buffer

recent post trying to get 2nd "exposure" (device address) for the 3350
fixed-head feature (allowing data transfer overlapped with arm motion)
http://www.garlic.com/~lynn/2017k.html#44 Can anyone remember "drum" storage?

getting to play disk engineer in bldgs 14&15 posts
http://www.garlic.com/~lynn/subtopic.html#disk
CKD, FBA, multi-track search, etc posts
http://www.garlic.com/~lynn/submain.html#dasd

other past posts discussing 2305 & 1655
http://www.garlic.com/~lynn/2001c.html#17 database (or b-tree) page sizes
http://www.garlic.com/~lynn/2001l.html#53 mainframe question
http://www.garlic.com/~lynn/2002.html#31 index searching
http://www.garlic.com/~lynn/2002l.html#40 Do any architectures use instruction 
count instead of timer
http://www.garlic.com/~lynn/2003b.html#15 Disk drives as commodities. Was Re: 
Yamhill
http://www.garlic.com/~lynn/2003b.html#17 Disk drives as commodities. Was Re: 
Yamhill
http://www.garlic.com/~lynn/2003c.html#55 HASP assembly: What the heck is an 
MVT ABEND 422?
http://www.garlic.com/~lynn/2003m.html#39 S/360 undocumented instructions?
http://www.garlic.com/~lynn/2004d.html#73 DASD Architecture of the future
http://www.garlic.com/~lynn/2004e.html#3 Expanded Storage
http://www.garlic.com/~lynn/2005e.html#5 He Who Thought He Knew Something About 
DASD
http://www.garlic.com/~lynn/2005r.html#51 winscape?
http://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
http://www.garlic.com/~lynn/2006c.html#1 Multiple address spaces
http://www.garlic.com/~lynn/2006e.html#46 using 3390 mod-9s
http://www.garlic.com/~lynn/2006k.html#57 virtual memory
http://www.garlic.com/~lynn/2006r.html#36 REAL memory column in SDSF
http://www.garlic.com/~lynn/2006s.html#30 Why magnetic drums was/are worse than 
disks ?
http://www.garlic.com/~lynn/2007e.html#59 FBA rant
http://www.garlic.com/~lynn/2007o.html#26 Tom's Hdw review of SSDs
http://www.garlic.com/~lynn/2007s.html#9 Poster of computer hardware events?
http://www.garlic.com/~lynn/2007u.html#4 Remembering the CDC 6600
http://www.garlic.com/~lynn/2008b.html#15 Flash memory arrays
http://www.garlic.com/~lynn/2010g.html#11 Mainframe Executive article on the 
death of tape
http://www.garlic.com/~lynn/2010g.html#22 Mainframe Executive article on the 
death of tape
http://www.garlic.com/~lynn/2010g.html#55 Mainframe Executive article on the 
death of tape


Re: CKD details

2018-01-23 Thread Anne & Lynn Wheeler
t...@harminc.net (Tony Harminc) writes:
> I assume it is the value used in the Set Sector/Read Sector CCWs. This
> came with the 3330 (real "analogue" disk) and is part of Rotational
> Position Sensing (RPS). It should have no logical relationship to the
> cell size; it's just a logical position (degrees, radians, IBM magic
> numbers because degrees and radians were NIH...?) on the track.

re:
http://www.garlic.com/~lynn/2018.html#77 CKD details

it used to be that all surfaces were data ... with the 3330, one surface
became dedicated to sector position ... 20 r/w heads, 20 surfaces,
19 data r/w heads, 19 data surfaces ... the 20th surface had the
rotational position information recorded.

Supposedly the loss in total data capacity was more than offset by
better system throughput ... RPS "set sector" in the channel program
reduced the channel busy involved in constant search (although it couldn't
fix multi-track search for VTOCs and PDS directories). All that goes away
with FBA ... as can be seen in the justification description for going
from 512-byte FBA to 4096-byte FBA:
https://en.wikipedia.org/wiki/Advanced_Format

i've periodically mentioned that in the 70s, the increase in
disk throughput wasn't keeping up with the increase in overall system
performance. A disk division executive in the early 80s took
exception to my statement that relative system disk throughput had
declined by an order of magnitude since the 60s (disk throughput
increased 3-5 times, processor throughput increased 40-50 times)
and assigned the division performance group to refute my claim.  After a
couple of weeks the group came back and essentially said that I had
slightly understated the problem ... I hadn't bothered to include RPS-miss
in the calculations (the device attempts channel reconnect at the sector
number ... but the channel is busy with some other device ... and so it
has to lose a full revolution). They then turned the analysis into a SHARE
presentation on how to organize disk farms for better throughput.
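
A rough model of the RPS-miss penalty (the ~16.7ms revolution is an
assumed 3330-class value; the busy probabilities are illustrative): if
the channel is busy with probability p at reconnect time, the device
loses a full revolution and tries again.

# Rough RPS-miss model: geometric number of missed reconnects, each
# costing a full revolution.  REVOLUTION_MS is an assumed 3330-class
# value (~3600 rpm); the p values are illustrative.
REVOLUTION_MS = 16.7

def expected_rps_miss_delay(p_channel_busy):
    # expected misses before a successful reconnect = p / (1 - p)
    return REVOLUTION_MS * p_channel_busy / (1.0 - p_channel_busy)

for p in (0.1, 0.3, 0.5):
    print(f"channel busy {p:.0%}: ~{expected_rps_miss_delay(p):.1f} ms extra per I/O")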

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: CKD details

2018-01-23 Thread Anne & Lynn Wheeler
cblaic...@syncsort.com (Christopher Y. Blaicher) writes:
> Your right, things are a little confusing.
> SECTORS - Think of it as 224 pieces of pie.  It is, I believe, physical.
> CELL - Also physical, but I think of them as little chunks of data,
>   which may be your data or control data for the hardware.
> TRACK BALANCE - How much room is left on the track if you were to
>   write a single block.  Look up TRKBAL macro.
>
> That extra calculation is for device control information, part of
> which I know is CRC, or at least that is what I was told.  All that
> stuff other than the COUNT-KEY-DATA areas are for the hardware and we
> mortals can't see it, but it is there.

and all that is now archaic fiction, since no real CKD devices have been
made for decades; CKD is simulated on industry-standard fixed-block disks.

this is the "real" format ... giving both 512byte FBA and the newer
4096byte FBA
https://en.wikipedia.org/wiki/Advanced_Format

part of the change justification is that 4096-byte is more "efficient"
... there is a 15-byte "gap, sync, address mark" for each physical record,
and a "512" record has 50-byte ECC while a 4096 record has 100-byte ECC
(eight 512s have 400 bytes of ECC total) ... 512-byte efficiency 88.7% and
4096-byte efficiency 97.3%
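
The quoted percentages fall straight out of those per-record overheads;
a quick check:

# Reproducing the format-efficiency figures: 15-byte gap/sync/address
# mark per physical record, plus 50-byte ECC (512) or 100-byte ECC (4096).
def fmt_efficiency(data_bytes, ecc_bytes, mark_bytes=15):
    return data_bytes / (data_bytes + mark_bytes + ecc_bytes)

print(f"512-byte sectors:  {fmt_efficiency(512, 50):.1%}")    # ~88.7%
print(f"4096-byte sectors: {fmt_efficiency(4096, 100):.1%}")  # ~97.3%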

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: VSAM usage for ancient disk models

2018-01-17 Thread Anne & Lynn Wheeler
mitchd...@gmail.com (Dana Mitchell) writes:
> Current (for us 2.1) z/OS HCD still shows 3375 as a valid DASD device
> type.  IIRC 3375 was emulated CKD on FBA 3370 HDA's.  I also think
> 3375s were used as the storage for the embedded 43X1's used as
> processor controllers on 3090s.

re:
http://www.garlic.com/~lynn/2018.html#41 VSAM usage for ancient disk models

FEs had a bootstrap service process ... starting with incrementally
scoping to diagnose a failed component. The 3081 had its circuits enclosed
in TCMs ... no longer capable of being scoped. IBM went to a "service
processor" (with 3310 FBA disk) that could itself be scoped/diagnosed and
was then used to analyze the large number of probes embedded in the TCMs.

The original plan for the 3090 was a 4331 running a customized version of
VM370 Release 6 as the service processor (with 3370 disk, instead of a
primitive RYO operating system done from scratch) and all service
screens done in CMS IOS3270 ... this was upgraded to a pair of (redundant)
4361s for the "3092"
https://www-03.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html

In addition, the 3092 processor controller requires two IBM 3370 model
A2 direct access storage devices with string switches or equivalent. The
3092 requires access to a customer-supplied 3420 model 4, 6 or 8 tape
drive or equivalent.

... snip ...

trivia: I had wanted to show that REXX (before customer release,
originally just REX) wasn't just another pretty scripting language ...
I decided to demo that I could redo IPCS (a very large assembler
application) in less than 3 months elapsed time, working half time, with
ten times the function and running ten times faster (some sleight of hand
for REXX to run faster than assembler). I finished early, so I started
doing a library of automated scripts that searched/analyzed for lots of
different kinds of failure signatures. I had expected that it would be
released to customers ... but for some reason it wasn't, even though it
became standard for internal datacenters and customer support PSRs. Some
old email from the 3092 group wanting to include it
http://www.garlic.com/~lynn/2010e.html#email861031
http://www.garlic.com/~lynn/2010e.html#email861223

Eventually I got approval for making presentations at SHARE and other
customer user group meetings on how I implemented it ... and within a few
months similar implementations started to appear. As an aside, the
implementation included functions like formatting storage segments using
maclib DSECTs and decompiling instruction sequences ... and this was during
the "OCO-wars" (object code only) ... the transition to no longer shipping
source code. past posts
http://www.garlic.com/~lynn/submain.html#dumprx

other trivia: this is the old "greencard" done in IOS3270, with a q&d
conversion to html:
http://www.garlic.com/~lynn/gcard.html

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: VSAM usage for ancient disk models

2018-01-16 Thread Anne & Lynn Wheeler
00ac4b1d56b3-dmarc-requ...@listserv.ua.edu (David Purdy) writes:
> I honestly cannot remember MVS *EVER* supporting 3375’s DOS/VSE and VM
> AFAIK are the only OS’s. Can someone correct me please ?

large corporations started ordering hundreds of vm/4300s at a time for
placing out in departmental (non-datacenter) areas ... sort of the
leading edge of the coming distributed computing tsunami.

MVS was looking at playing in that market ... but the only (new) CKD
dasd was 3380 (high-end datacenter) ... all the low & mid-range disks
were FBA (3310 & 3370) that could be deployed in non-datacenter,
departmental areas.
https://www-03.ibm.com/ibm/history/exhibits/storage/storage_3370.html

Eventually they came out with (emulated) CKD announced as the 3375 to
support MVS in that market ... however there was an additional issue: the
customers were looking at a large number of unattended systems per support
person ... as opposed to a number of support persons per system.

past posts mentioning CKD, FBA, multi-track search, etc.
http://www.garlic.com/~lynn/submain.html#dasd

there was also similar explosion of vm/4300s inside IBM ...  at one
point resulting in significant problem scheduling increasingly scarce
conference rooms for meetings.

trivia: 4341 integrated channels were so fast that, with slight tweaking,
disk engineering & product test were using them for 3mbyte/sec
3880/3380 testing.

3370 & 3375
https://en.wikipedia.org/wiki/History_of_IBM_magnetic_disk_drives#IBM_3370_and_3375

the above mentions research starting on thin-film floating heads at TJR in
the 60s. However, in the 70s, the disk division was running thin-film,
floating-head "air-bearing" simulation studies on the SJR (bldg28, before
research moved up the hill) MVT 370/195. Even with a
high-priority designation, they were only getting a couple of turn-arounds
a month.

Then bldg. 15 (product test) got an early engineering 3033 for disk I/O
testing. They had been running all testing in bldgs 14&15
"stand-alone" (at one point they had tried running under MVS but found
it had a 15min MTBF in that environment, requiring manual re-ipl). I then
offered to rewrite the I/O supervisor to make it bullet proof and never
fail ... after which nearly all machines in bldgs 14&15 ran under that
system. It turns out even several concurrent I/O tests only used a few
percent of the 3033 CPU ... so we started using the machine for lots of
other stuff.  We moved the air-bearing simulation from the MVT 370/195 to
the bldg 15 3033 and they could get several turn-arounds a day ... rather
than a couple per month (while the 370/195 was a little over twice the
3033 performance, the 195 job queue was measured in weeks).

1979 thin-film heads introduced for large disks
http://www.computerhistory.org/storageengine/thin-film-heads-introduced-for-large-disks/

past posts getting to play disk engineer in bldgs 14&15
http://www.garlic.com/~lynn/subtopic.html#disk

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: AW: Re: Number of Cylinders per Volume

2018-01-14 Thread Anne & Lynn Wheeler
000433f07816-dmarc-requ...@listserv.ua.edu (Paul Gilmartin) writes:
> Solaris has somthing of the sort.  I've occasionally got "File is temporarily 
> unavailable."
> Fifteen seconds later it opens.
>
> IBM is just behind the curve.

isn't that part of what ADSM/TSM is suppose to do
https://en.wikipedia.org/wiki/IBM_Tivoli_Storage_Manager
and
https://en.wikipedia.org/wiki/IBM_Tivoli_Storage_Manager#Data_Sources

Other data injectors include policy-based hierarchical storage
management (HSM) components for AIX, Linux and Windows. These allow
migration of data from production disk into one or more of the TSM
storage hierarchies while maintaining transparent access to that data by
the use of DMAPI or NTFS reparse points.

... snip ...

I had originally done the implementation in the late 70s for
internal datacenters as CMSBACK ... some old email
http://www.garlic.com/~lynn/lhwemail.html#cmsback
and posts
http://www.garlic.com/~lynn/submain.html#backup

it went through a number of internal releases before being enhanced to
support distributed computing (workstations, unix PCs, etc) and
released to customers as Workstation Data Save Facility (WDSF).

It was then picked up by AdStar 
https://en.wikipedia.org/wiki/Adstar

becoming ADSM (i.e. IBM was being re-orged into the 13 "baby blues" in
preparation for breaking up the company, and the disk division was one of
the business units that was furthest along). I've frequently mentioned
that the disk division in the late 80s had been claiming that the
communication group was going to be responsible for the demise of the
disk division ... because it was constantly vetoing disk division
advanced support for real distributed computing (the communication group
trying to preserve the dumb terminal paradigm and install
base) ... some past posts
http://www.garlic.com/~lynn/subnetwork.html#terminal

trivia: we did some distributed computing & open system projects with
the adstar VP of software, who also funded the original project
supporting POSIX on MVS.
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.idan400/cpn2co70.htm

but were almost constantly at war with the communication group.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Can anyone remember "drum" storage?

2017-12-22 Thread Anne & Lynn Wheeler
000a2a8c2020-dmarc-requ...@listserv.ua.edu (Tom Marchant) writes:
> The 3850 was much larger. When I was an Amdahl SE, one of 
> my accounts had one. It was probably 20 feet long, maybe 
> more. My impression was that it was a much improved version 
> of the 2321.

re:
http://www.garlic.com/~lynn/2017k.html#44 Can anyone remember "drum" storage?
http://www.garlic.com/~lynn/2017k.html#46 Can anyone remember "drum" storage?

2841 and Associated DASD ... 2311, 2302, 2303, 2321
http://www.bitsavers.com/pdf/ibm/28xx/2841/GA26-5988-7_2841_DASD_Component_Descr_Dec69.pdf

more 2321 from IBM
https://www-03.ibm.com/ibm/history/exhibits/storage/storage_2321.html
https://www-03.ibm.com/ibm/history/exhibits/storage/storage_PH2321B.html
even more 2321
https://en.wikipedia.org/wiki/IBM_2321_Data_Cell
http://www.columbia.edu/cu/computinghistory/datacell.html

magnetic stripes directly read/written

when I was an undergraduate in the 60s, I got hired fulltime to be
responsible for IBM mainframe systems. The univ. library got an ONR
grant
https://en.wikipedia.org/wiki/Office_of_Naval_Research

to do an online catalog. Part of the money went to getting a 2321. The
project was also selected to be betatest for the original CICS product
and I got tagged to support/debug the implementation. One troublesome
"bug" to find was that CICS had (undocumented) hard-coded BDAM options
for OPEN ... and the library was using files with a different set of BDAM
options. some past CICS &/or BDAM posts
http://www.garlic.com/~lynn/submain.html#cics
some more CICS history, gone 404, but lives on at wayback machine
http://web.archive.org/web/20050409124902/http://www.yelavich.com/cicshist.htm

3850 from IBM
https://www-03.ibm.com/ibm/history/exhibits/storage/storage_3850.html
https://www-03.ibm.com/ibm/history/exhibits/storage/storage_3850b.html
https://www-03.ibm.com/ibm/history/exhibits/storage/storage_PH3850A.html
other 3850
https://en.wikipedia.org/wiki/IBM_3850

3850 automated tape library with 200mbyte tape cartridges for 3330-1
caching/staging hierarchy. virtual 3330-1 would be staged to/from a pool
of 3330-1 drives (hardware HSM mount/unmount 3330-1 pack ... rather than
files). Later they would support 3330-11 drives simulating two 3330-1
drives.  Even later they would support 3350 drives simulating 3330-1
drives (could be considered experience for current situation where
industry standard fixed-block disks are used to simulate CKD DASD, real
CKD DASD hasn't been made for decades).

from pg. 510
https://www.computer.org/csdl/proceedings/afips/1975/5083/00/50830509.pdf

If the specific cylinder required by the CPU (1/404th of a Mass Storage
Volume) is already on DASD, an I/O operation proceeds.  If not, and
data is being accessed, the MSC causes the cartridge containing the
cylinder to be placed on a Data Recording Device (DRD), and the data
contained in that cylinder to be transferred to the DASD staging buffer.

,,,

If the Operating System knows which cylinders will be accessed, it can
cause the MSC to stage only those cylinders containing the data set;
reducing the number of times cartridges need to be accessed.

... snip ...

aka a pool of real 3330 staging drives can be used to simulate a much
larger number of "mounted" 3330 virtual packs.

trivia topic drift.

In 1980, I had been con'ed into doing extended channel support for IBM STL
(later renamed silicon valley lab, SVL) ... moving 300 people from the IMS
group to an offsite bldg. recent (ibm-main) posts referencing the effort
http://www.garlic.com/~lynn/2017d.html#1 GREAT presentation on the history of 
the mainframe
http://www.garlic.com/~lynn/2017d.html#88 Paging subsystems in the era of 
bigass memory
http://www.garlic.com/~lynn/2017e.html#94 Migration off Mainframe to other 
platform
http://www.garlic.com/~lynn/2017j.html#3 Somewhat Interesting Mainframe Article

other posts
http://www.garlic.com/~lynn/submisc.html#channel.extender

In 1985, I was considered the IBM expert on the vendor's hardware used for
the channel extender ... and the NCAR/UCAR IBM rep tracked me down to help
NCAR. NCAR had a bunch of (non-IBM) supercomputers and a 4381 implementing
the HSM function using some of the vendor's hardware boxes (the vendor
implemented their own channel protocol between their boxes). The 4381
would get a supercomputer network request for file/data, stage
the data (from tape) if required (to IBM DASD), download a channel
program (CCWs) into one of the vendor's A515 boxes ... and return the
"handle" for that channel program to the requesting supercomputer.  The
supercomputer would then request that the channel program be executed,
transferring the data directly between the supercomputer and IBM DASD.
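
A schematic sketch of that flow (my reconstruction for illustration, not
NCAR's actual code; the class and method names are made up):

# Schematic sketch of the staging/handle flow described above.
class ChannelExtenderBox:
    def __init__(self):
        self._programs = {}
    def load_channel_program(self, ccws):
        handle = len(self._programs) + 1   # hand back an opaque handle
        self._programs[handle] = ccws
        return handle
    def execute(self, handle):
        # in the real setup the data moves directly between the
        # supercomputer and IBM DASD; here we just report what ran
        return f"executed: {self._programs[handle]}"

class StagingServer:                       # plays the role of the 4381
    def __init__(self, box):
        self.box = box
        self.on_dasd = set()
    def request_file(self, name):
        if name not in self.on_dasd:       # stage from tape if required
            self.on_dasd.add(name)
        ccws = f"CCW chain reading {name} from DASD"   # stand-in for real CCWs
        return self.box.load_channel_program(ccws)

box = ChannelExtenderBox()
server = StagingServer(box)
handle = server.request_file("model-output-001")    # supercomputer asks for a file
print(box.execute(handle))                           # then invokes the handle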

I've mentioned before a senior disk engineer getting a talk scheduled at a
communication group conference where he said that the communication
group was going to be responsible for the demise of the disk division
(i.e. its stranglehold on datacenters was not only hitting the disk
division but significantly contributing to

Re: Can anyone remember "drum" storage?

2017-12-20 Thread Anne & Lynn Wheeler
jcew...@acm.org (Joel C. Ewing) writes:
> Clearly from the picture the Seagate really is like the 3380/3390
> solution.  Two completely independent actuators giving the appearance of
> two drives in one unit with a shared drive shaft and motor.  The
> doubling of throughput is ONLY because you have two drives that can be
> accessing or preparing to access totally independent data at the same
> time, not because of any faster access to a block of data or multiple
> blocks of data on a single track of one of those devices.  Dang!  My
> interpretation would have been a much more intriguing device.

recent ref
http://www.garlic.com/~lynn/2017k.html#22 little old mainframes, Re: Was it 
ever worth it?
http://www.garlic.com/~lynn/2017k.html#44 Can anyone remember "drum" storage?

or ibm 2302
IBM System/360 Component Descriptions- 2841 and Associated DASD
http://www.bitsavers.com/pdf/ibm/28xx/2841/GA26-5988-7_2841_DASD_Component_Descr_Dec69.pdf

2302 (never heard of any actually installed), pg 59-63 (the pg 59 picture
looks a little bit like the later 2305 fixed-head disk picture, but it is
not fixed-head per track). It has two access mechanisms, one for the inner
250 tracks and one for the outer 250 tracks (figure 46, pg 60)

note that the 2301 (fixed-head per track) drum transferred four heads in
parallel for 1.2mbyte/sec transfer (compared to the 2303's 319kbyte/sec
transfer) and the 2305m1 transferred two heads in parallel for 3mbyte/sec
transfer compared to the 1.5mbyte/sec 2305m2 (the mod1 also had heads
offset 180 degrees on the same track, so it also cut avg. rotational delay
in half ... but the mod1 physically had the same number of heads, so it
only had half the tracks and half the capacity)

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Can anyone remember "drum" storage?

2017-12-20 Thread Anne & Lynn Wheeler
cvitu...@hughes.net (Carmen Vitullo) writes:
> I remember DRUM storage, just never worked with it, the only other
> DRUM storage I saw was at a tour at a data center somewhere in Jersey,
> my BIL worked there, did some work with NYSE I believe, and they were
> mostly all Univac or PDP systems and I saw what I think was a solid
> state drum storage unit, at 19 or 20 I was quite impressed.


The 2303 drum was fixed-head per track, about 4mbytes ... it ran at 2314
transfer speed and could be connected to the later 360/30 (recent
discussion on the facebook ibm retiree group).  The 2301 was pretty much a
2303 ... but read/wrote four heads in parallel: 1/4th the number of
tracks, tracks four times larger, four times the transfer speed
(1.2mbytes/sec).

IBM System/360 Component Descriptions- 2841 and Associated DASD
http://www.bitsavers.com/pdf/ibm/28xx/2841/GA26-5988-7_2841_DASD_Component_Descr_Dec69.pdf

2302 (never heard of any actually installed), pg 39-63 ... looks a little
bit like the later 2305 fixed-head disk. It has two access mechanisms, one
for the inner 250 tracks and one for the outer 250 tracks.

2303 "drum", pg 73-76.



2301 drum
http://www.bitsavers.com/pdf/ibm/28xx/2820/A22-6895-2_2820_2301_Component_Descr_Sep69.pdf

2301 drum was traditional "paging" drum for 360/67 virtual memory
systems ... officially TSS/360  but IBM science center did virtual
machine (cp67) system, Univ. of Mich did MTS system, Stanford did Orvyl
(where Wylbur editor originated).

the standard 2301 (paging) format was nine 4k pages on each pair of tracks
(with one record spanning the end of one track and the start of the next).
The original CP67 delivered to the univ. Jan1968 did a single page transfer
per I/O, and both disk & drum requests were executed purely FIFO. Drum
requests would cost an average of half a revolution per transfer ...
peaking at 80 page I/Os per second. I did ordered chained I/O and could
peak at 270 page I/Os per second. I also did ordered seek queuing for
disk, helping with both (overflow) disk paging as well as file I/O
throughput.
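
The 80 vs 270 figures fall out of simple arithmetic; a sketch (the
~17.5ms revolution time is my assumption, while the 1.2mbyte/sec transfer
rate and the nine-pages-per-two-tracks format are from the post):

# Rough arithmetic behind the 80 vs 270 page-I/O/sec figures.
REV_MS = 17.5                   # assumed 2301 full-revolution time
XFER_MS = 4096 / 1.2e6 * 1000   # ~3.4 ms per 4k page at 1.2 MB/sec

# FIFO, one page per I/O: average half a revolution of latency + transfer
fifo_rate = 1000 / (REV_MS / 2 + XFER_MS)
# ordered/chained I/O: up to nine pages every two revolutions
chained_rate = 9 / (2 * REV_MS / 1000)

print(f"FIFO single-page: ~{fifo_rate:.0f} page I/Os/sec")    # ~80
print(f"ordered chained:  ~{chained_rate:.0f} page I/Os/sec") # ~260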

the (later) 2305 fixed-head disk: model 2, 11.2mbytes, 1.5mbyte/sec
transfer, avg access 5ms. Model 1 had the same number of heads, but they
were installed on half the number of tracks, with a pair of heads at
180-degree offset on the same track: 5.4mbyte capacity (half the m2),
2.5msec avg. rotational delay (half the m2), and 3mbyte/sec transfer
(twice the m2). Transfer would occur on pairs of heads in parallel, and
with the pairs of heads on opposite sides of the platter, it only had to
wait an avg. 1/4 revolution before the start of the record came under a
pair of heads (even/odd pairs on opposite sides of the platter).
https://www-03.ibm.com/ibm/history/exhibits/storage/storage_2305.html

In the early 80s, IBM cut a deal with a vendor for "1655" electronic disks
used by internal datacenters ... they had two modes of operation, native
mode and 2305 emulation mode. They were volatile (lost data when power
was lost) ... so were limited to paging operations. They were limited to
2305 channel data transfer rates and were more efficient at low to medium
loading (no rotational delay) ... but there was less difference at heavy
loading (since 2305 ordered chained requests would already be running at
near transfer speed).

some recent posts mentioning 1655
http://www.garlic.com/~lynn/2016d.html#24 What was a 3314?
http://www.garlic.com/~lynn/2016f.html#23 Frieden calculator
http://www.garlic.com/~lynn/2017b.html#68 The ICL 2900
http://www.garlic.com/~lynn/2017b.html#69 The ICL 2900
http://www.garlic.com/~lynn/2017c.html#26 Multitasking, together with OS 
operations
http://www.garlic.com/~lynn/2017d.html#63 Paging subsystems in the era of 
bigass memory
http://www.garlic.com/~lynn/2017d.html#65 Paging subsystems in the era of 
bigass memory
http://www.garlic.com/~lynn/2017e.html#36 National Telephone Day

I got to play disk engineer in bldgs 14&15 from mid-70s through early
80s ... some past posts
http://www.garlic.com/~lynn/subtopic.html#disk

The 3350 offered a fixed-head feature for a limited number of cylinders
... but didn't have multiple exposure support (like the 2305), so it
couldn't do concurrent channel programs for the moveable-head portion and
the fixed-head portion. I had a project to add multiple exposure to the
3350 with fixed-head feature (so fixed-head transfer could be overlapped
with seek operations). A group in POK planning VULCAN, an electronic
disk, got it killed because they thought it might compete with
them in the paging market. Eventually VULCAN was canceled (they were
told that IBM was already selling all the electronic memory it could
make for processor memory at higher markup) ... but it was too late to
resurrect multiple exposure support for the 3350 fixed-head feature.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: CMS style XMITMSG for Unix and other platforms

2017-12-19 Thread Anne & Lynn Wheeler
edgould1...@comcast.net (Edward Gould) writes:
> Thanks for reminding me to ask a question that I have never gotten IBM
> to answer. Whenever I have ordered TCP/IP I have always had to order
> PASCAL runtime library. Since we have lost SE’s and salesmen are now
> non existent. There is no one left to answer this question. About 20
> years ago, I asked IBM and was told that in order to answer this we
> would have to hire an IBM contractor and the minimum cost would be
> $1000. I said no thanks and just continued ordering it, The cost was
> small IIRC so it never reached VP approval.  What is the reason why we
> have to order the PASCAL library?

re:
http://www.garlic.com/~lynn/2017k.html#37 CMS style XMITMSG for Unix and other 
platforms
http://www.garlic.com/~lynn/2017k.html#38 CMS style XMITMSG for Unix and other 
platforms

the mainframe TCP/IP product implementation was originally done in
Pascal/VS ... the mainframe pascal was originally done by two people in
the Los Gatos lab using Metaware's TWS, for internal VLSI tool
implementation.

This implementation had none of the buffer exploits that are notoriously
epidemic in C-language implementations.

5735-HAL IBM TCP/IP FOR MVS Version 2.2 (no longer available 13Dec1994)
http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=dd=sm=897/ENUS5735-HAL

Enhanced Socket Library

TCP/IP Version 2 for MVS has a more extensive socket library than
provided by Version 1. This extended Socket Library support, based on
Berkeley Socket Library**, BSD 4.3*, removes the requirement for the
PASCAL Runtime library since all sockets are written to the C language
interface. This support facilitates the port of UNIX** applications to
the System/370*.

however, later on it says that when running in "31 bit mode", the pascal
runtime library is required.

10Sep1996 TCP/IP v3r2 for MVS/ESA
https://www-304.ibm.com/jct01003c/cgi-bin/common/ssi/ssialias?infotype=an=ca=897/ENUS296-317=usn=enus

Note: IBM TCP/IP Version 3 Release 2 for MVS/ESA does not include the
Pascal FTP server as stated in Software Announcement 294-529, dated
September 13, 1994. Customers should migrate to the C FTP server prior
to installing TCP/IP Version 3 Release 2.

this (updated 4Dec2017) mentions a pascal version SMTP & Pascal Sockets
API
http://www-01.ibm.com/support/docview.wss?uid=swg27019687

This says pascal is just needed for user-written programs that
interface to TCP/UDP/IP
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.e0zb100/pgmreqs.htm

same here
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.3.0/com.ibm.zos.v2r3.e0zb100/pgmreqs.htm

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: CMS style XMITMSG for Unix and other platforms

2017-12-18 Thread Anne & Lynn Wheeler
sme...@gmu.edu (Seymour J Metz) writes:
> Back in the Paleolithic era IBM ported VMPC to MVS for use by
> TCP/IP. The Pascal stack has been dead for lo these many years. Is it
> conceivable that the VMCF port is still present in z/OS V2?

I've periodically commented about how the communication group was
going to be responsible for the demise of the disk division
... the communication group had corporate strategic responsibility
("stranglehold") for everything that crossed datacenter walls and was
fiercely fighting off distributed computing and client/server (trying to
preserve their dumb terminal paradigm). some past posts
http://www.garlic.com/~lynn/subnetwork.html#terminal

Part of this was doing its best to prevent shipping TCP/IP. Eventually
it shipped, but only got 44kbytes/sec transfer using a whole 3090 CPU.  I
did the enhancements for rfc1044 and, in some throughput tests at Cray
Research, got (mbyte/sec) channel-speed throughput between a 4341 and a
Cray using only a modest amount of the 4341 processor (possibly 500 times
improvement in cpu used per byte moved) ... some past posts
http://www.garlic.com/~lynn/subnetwork.html#1044

sometime later, it was ported to MVS by emulating VM370 functions on
MVS. Later still, the communication group hired a subcontractor to add
TCP/IP support to VTAM. His initial implementation had TCP/IP performing
much faster than LU6.2. He was told that everybody "knows" that a
"correct" TCP/IP implementation is much slower than LU6.2 and they would
only be paying for a "correct" implementation.

recent post in thread
http://www.garlic.com/~lynn/2017k.html#37 CMS style XMITMSG for Unix and other 
platforms
recent post about communication group
http://www.garlic.com/~lynn/2017k.html#34 Bad History

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: CMS style XMITMSG for Unix and other platforms

2017-12-18 Thread Anne & Lynn Wheeler
peter.far...@broadridge.com (Farley, Peter x23353) writes:
> I may not get to try your XMITMSG tool for a while due to other
> commitments, but the VM facility I miss the most is the SMSG / WAKEUP
> SMSG facility that permits "server" VM's to run and respond to remote
> requests from "users".  In a prior lifetime my coworkers and I used
> that facility to implement a nicely featured SCLM for an ISV.
>
> I realize that a git server is the modern incarnation of that concept
> and git is certainly a much more sophisticated SCLM tool, but it would
> be interesting anyway to have something resembling SMSG / WAKEUP SMSG
> available in z/OS.
>
> XMITMSG would be very helpful in a "disconnected server" setup for sure.

trivia: SPM ... special message was a superset of SMSG & IUCV combined.
It was originally done for CP/67 by the IBM Pisa Scientific Center ...
and ported to vm370 in POK. I included it in my internal CSC/VM system
distribution for internal datacenters, and it was supported by the
internal VNET (even included in the original version shipped to
customers). Reference in this old post
http://www.garlic.com/~lynn/2006w.html#8
and email
http://www.garlic.com/~lynn/2006w.html#email750430
more detailed description
http://www.garlic.com/~lynn/2006w.html#16

I had also done an autolog facility ... originally for doing automated
benchmarks
http://www.garlic.com/~lynn/submain.html#benchmark

it was included in my internal CSC/VM distribution and quickly picked
up for starting "service virtual machines" ... which could use SPM for
doing things like early "automated operator" implementations.

It was used by the author of REXX in his multi-user (client/server)
space wars implementation ... the client supported 3270 displays and
client/server communication was via SPM. Since the VNET internal network
supported SPM ... space war players could be on the same machine or
anyplace on the internal network. One of the problems was that bot players
appeared fairly early which were beating all the human players (because
they could make moves much faster). The server was then modified to
increase energy use non-linearly as the interval between moves dropped
below some (human) threshold. part of the old (client) MFF PLI
http://www.garlic.com/~lynn/2005u.html#4
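
A guess at the shape of that rule (illustrative only; the threshold, base
cost and quadratic curve are my assumptions, not the original server
code):

# Illustrative anti-bot energy rule: flat cost at human speeds, growing
# non-linearly as the interval between moves drops below a threshold.
HUMAN_THRESHOLD_SEC = 0.5   # assumed
BASE_COST = 1.0             # assumed

def move_energy_cost(seconds_since_last_move):
    if seconds_since_last_move >= HUMAN_THRESHOLD_SEC:
        return BASE_COST
    return BASE_COST * (HUMAN_THRESHOLD_SEC / seconds_since_last_move) ** 2

for dt in (1.0, 0.5, 0.1, 0.01):
    print(f"{dt:5.2f}s between moves -> energy cost {move_energy_cost(dt):8.1f}")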

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Bad History

2017-12-16 Thread Anne & Lynn Wheeler
john.archie.mck...@gmail.com (John McKown) writes:
> ​Not as I was told. U.S. Government said, basically, you can only bid a
> POSIX compliant (and branded?) system for any I.T. purchase. To keep their
> business, IBM grafted OpenEdition (original name) onto MVS. As time goes
> on, it does get a bit better.​

re:
http://www.garlic.com/~lynn/2017k.html#33 Bad History

it wasn't just the Feds' POSIX compliance.  I had several conversations
with the disk division (GPD, which morphs into adstar during the period
that IBM was being reorged into "baby blues" in preparation for breaking
up the company) executive that initially had posix support grafted onto
MVS.

In the late 80s, a senior disk engineer got a talk scheduled at the
internal annual worldwide communication group conference, supposedly on
3174 performance ... but opened the talk with the statement that the
communication group was going to be responsible for the demise of the
disk division. The issue was that the communication group had corporate
strategic responsibility (stranglehold) for everything that crossed the
datacenter walls; they were fiercely fighting off distributed computing
and client/server, trying to preserve their dumb terminal paradigm (and
install base). The disk division was seeing data fleeing datacenters to
more distributed-computing-friendly platforms, with a drop in disk sales.
They came up with a number of solutions ... which were constantly vetoed
by the communication group.

"terminal emulation" (also numerous mentions of above account) posts
http://www.garlic.com/~lynn/subnetwork.html#terminal

Since openedition was purely MVS software ... the communication group
didn't have any justification for veto'ing it. The other thing that GPD
could get away with was investing in non-IBM startups that were doing
distributed computing that would involve IBM disks (the communication
group could only veto IBM products that involved something that physically
crossed the datacenter walls). For some number of these investments the
executive would ask if we could stop by and lend any support that we
could.

adstar ref:
https://en.wikipedia.org/wiki/ADSTAR
28DEC1992 13 "baby blues" time article, gone behind pay wall, but part
of it avail at wayback machine.
http://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
more adstar april 1993
http://www.nytimes.com/1993/04/24/business/company-news-ibm-gives-adstar-storage-unit-more-autonomy.html

and more May 1993: One of the biggest dominoes from the breakup of IBM
is about to fall on the West Coast, where AdStar is preparing to launch
a search for a global age
http://www.adweek.com/brand-marketing/adstar-set-launch-global-review-bby-michael-mccarthbbr-clearnonebr-clearnonenew-yor/

As recently as two years ago, AdStar sold only to and through IBM, but
in 1992 it generated nearly $500 million in revenues via sales to other
companies. During 1993, AdStar officials expect this figure to grow by
roughly 70% to $850 million.

... snip ...

past posts getting to play disk engineer in bldgs 14&15
http://www.garlic.com/~lynn/subtopic.html#disk

I had also done CMSBACK in the late 1970s for internal datacenters. After
some number of internal releases ... it was modified to include backup for
distributed systems and released to customers as Workstation Data Save
Facility ... which then morphs into ADSM (ADSTAR Distributed Storage
Manager) ... and is rebranded as TSM (when adstar is unloaded). some
old CMSBACK email
http://www.garlic.com/~lynn/lhwemail.html#cmsback

cmsback, WDSF, ADSM, TSM ref (cmsback originally done a decade earlier
than date mentioned here)
https://en.wikipedia.org/wiki/IBM_Tivoli_Storage_Manager

other trivia: after having left IBM ... we got a call from the bowels of
Armonk asking if we could help with the breakup. Business units were
using MOUs to leverage supplier contracts in other divisions.  With the
breakup, these would be different companies and the MOUs would have to
be turned into their own contracts. We were to help inventory and
catalog the MOUs ... however a new CEO was brought in and the breakup
was (mostly) reversed (for a time).

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Bad History

2017-12-16 Thread Anne & Lynn Wheeler
000433f07816-dmarc-requ...@listserv.ua.edu (Paul Gilmartin) writes:
> I imagine:
>
> RFE: We want UNIX.
>
> IBM: Be more specific.
>
> Both: (After much deliberation) Single UNIX specification.
>
> And so it went.  There's no formal specification of GNU Linux.
>
> Sigh.

some of the CTSS (IBM 7094) people
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System

went to the 5th flr to do MULTICS
https://en.wikipedia.org/wiki/Multics

others went to the IBM cambridge science center on the 4th flr and did
virtual machines, the internal network, invented GML (letters taken from
the last names of the 3 inventors), and lots of online and performance work
https://en.wikipedia.org/wiki/CP/CMS

folklore is that the bell labs people working on Multics on the 5th
flr returned home and did UNIX
https://en.wikipedia.org/wiki/Multics#Unix

In the early 80s, a group from Stanford approached the IBM Palo Alto
Science Center about IBM doing a workstation, PASC invites several
internal groups for review ... who all claim that they were doing
something better (and IBM turns down the offer). The group then starts
their own company
https://en.wikipedia.org/wiki/Sun_Microsystems

In the late 80s, there appeared to be an agreement between SUN & AT&T to
make UNIX exclusive.  The other vendors formed an organization to create
an "open" unix work-alike.
https://en.wikipedia.org/wiki/Open_Software_Foundation

The organization was seen as a response to the collaboration between
AT&T and Sun on UNIX System V Release 4, and a fear that other vendors
would be locked out of the standardization process. This led Scott
McNealy of Sun to quip that "OSF" really stood for "Oppose Sun
Forever".[4] The competition between the opposing versions of UNIX
systems became known as the UNIX wars.

... snip ...

Unix wars
https://en.wikipedia.org/wiki/UNIX_wars

in the 90s, they merge
https://en.wikipedia.org/wiki/Open_Software_Foundation#Merger

By 1993, it had become clear that the greater threat to UNIX system
vendors was not each other as much as the increasing presence of
Microsoft in enterprise computing. In May, the Common Open Software
Environment (COSE) initiative was announced by the major players in the
UNIX world from both the UI and OSF camps: Hewlett-Packard, IBM, Sun,
Unix System Laboratories, and the Santa Cruz Operation. As part of this
agreement, Sun and AT&T became OSF sponsor members, OSF submitted Motif
to the X/Open Consortium for certification and branding and Novell
passed control and licensing of the UNIX trademark to the X/Open
Consortium.

... snip ...

trivia ... recent mention of a joke about the head of POK being a major
contributor to DEC VMS ...
http://www.garlic.com/~lynn/2017k.html#30 Converting programs to accommodate 
8-character userids and prefixes

one of the DEC executives at OSF meetings had previously worked in
the (Burlington Mall) vm370/cms development group.

Not all of AT&T was UNIX. In 1975, I had moved a lot of enhancements
from CP67 to VM370 ... some old email
http://www.garlic.com/~lynn/2006v.html#email731212
http://www.garlic.com/~lynn/2006w.html#email750102
http://www.garlic.com/~lynn/2006w.html#email750430

one of my hobbies was providing & supporting enhanced operating systems
for internal datacenters.  However, some deal was cut with AT&T
Longlines to get a copy (this was a version w/o multiprocessor
support). AT&T Longlines had all the source and, over the years,
continued to move it to more current IBM mainframes.  Finally in the
80s, the IBM AT&T national account rep tracked me down about helping
Longlines move to the current version (with multiprocessor). This was in
the 3081 period, which was announced as multiprocessor-only, while clone
vendors were coming out with faster single processors.

Eventually IBM did come out with the 3083 (a 3081 with a processor
removed) ... mostly for the ACP/TPF market (ACP/TPF didn't have
multiprocessor support, and there was concern that the ACP/TPF customers
would all move to non-IBM clone processors).

IBM CSC, 545 tech sq posts
http://www.garlic.com/~lynn/subtopic.html#545tech
SMP posts
http://www.garlic.com/~lynn/subtopic.html#smp

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Converting programs to accommodate 8-character userids and prefixes

2017-12-15 Thread Anne & Lynn Wheeler
john.archie.mck...@gmail.com (John McKown) writes:
> ​TSO seems to be about as important to IBM as VSPC was.
> https://en.wikipedia.org/wiki/Virtual_Storage_Personal_Computing​

VSPC was to be the low-end, non-vm370/cms online offering. They had a
performance "model" which predicted benchmark performance ... and required
VM370/CMS to run equivalent benchmarks, taking a major part of the
VM370/CMS group's resources (and the predicted VSPC performance was always
significantly better than the equivalent VM370/CMS benchmarks). Finally,
when VSPC was actually operational, it turned out that actual VSPC
performance was much worse than their model predictions (as well as worse
than actual VM370/CMS performance).

afterwards, Endicott tried to get corporate approval to ship vm370/cms
as part of every machine they made (sort of like LPARs today
implementing a virtual machine subset).  However, this was in the period
after Future System imploded ...  past posts
http://www.garlic.com/~lynn/submain.html#futuresys

and POK was convincing corporate to kill the VM370/CMS product and move
the group to POK for MVS/XA (or otherwise MVS/XA wouldn't ship on time,
some 7-8yrs later). Endicott eventually managed to acquire the VM370/CMS
product mission, but they had to reconstitute a development group from
scratch ... some customer comments about code quality during this period
show up in the vmshare archives (TYMSHARE provided their CMS-based
online computer conferencing free to SHARE starting in August 1976).
http://vm.marist.edu/~vmshare

Later still, endicott was selling so many vm/4300 machines that it got
corporate to declare vm370/cms the corporate strategic online
interactive platform (which really drove POK crazy, small payback for
POK earlier getting vm370/cms product killed) ... even tho they still
couldn't get corporate approval to ship vm370/cms as part of every
machine sold.

large customers were ordering hundreds of vm/4300s at a time for placing
out in departmental areas, sort of precursor to the coming distributed
computing tsunami.

also, vm/4300 clusters were severely threatening high-end POK mainframes
(better price/performance, smaller footprint, less environmentals)
... at one point POK managed to get allocation of critical 4300
manufacturing component cut in half. Before the first 4341 shipped, I had
got conned into doing benchmarks on engineering machines for LLNL
(national lab) that was looking at getting 70 4341s for compute farm
... leading edge of the coming cluster supercomputing tsunami (grid
computing which has huge technology overlap with the cloud
megadatacenters, running hundreds of thousands of systems).

Part of the POK plan to kill vm370/cms was to not tell the group about
their move to POK until the very last minute ... to minimize the number
that could escape. However the news leaked early and lots managed to
escape in local Boston/Cambridge area ... many to DEC (there is joke
that head of POK was one of the largest contributors to the DEC VMS
product).  In the wake of the leak, there was a witch hunt for the source
... fortunately for me nobody gave up the source.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: IBM does what IBM does best: Raises the chopper again

2017-11-29 Thread Anne & Lynn Wheeler
edgould1...@comcast.net (Edward Gould) writes:
> The latest buzz word is education on the computer. IBM tried that 40
> years ago and it was an abysmal failure. Pretty soon they are going to
> make a pizza making MF.  Now, how do you deliver a 20 ton computer
> with a flat top to a neighborhood that has narrow streets?

prior to gov. legal action, IBM had steep educational discounts and there were
IBM mainframes in lots of universities ... then with the 23June1969
unbundling announcement ... those educational discounts went away.
past posts/refs
http://www.garlic.com/~lynn/submain.html#unbundle
recent post referencing ACIS
http://www.garlic.com/~lynn/2017j.html#76 A Computer That Never Was: the IBM 
7095

with relaxing of gov. pressure in the early 80s, IBM tried to get back
into educational market, setting up ACIS ... initially with $300M for
giving away to educational institutions ... MIT Project Athena got $25M
(jointly with another $25M from DEC), CMU got $50M for various andrew
efforts (MACH, unix work-alike, camelot/encina ... unix transaction
processing, andrew filesystem, etc) ... lots of other institutions.

my brother was regional Apple marketing rep in this period (largest
physical region in CONUS) with several univ. institutions. He would
comment that he would fawn over any IBM coffee mugs at customer sites
and say he liked them so much that he would be willing to trade two
apple mugs for every IBM mug (selling appleII and MACs against IBM/PC).

sometime earlier, IBM had 1500
https://en.wikipedia.org/wiki/IBM_1500

picture of my (future) wife ... she had job at the Naval academy in
Annapolis, programming IBM 1500 courses (before she joined IBM)
http://www.garlic.com/~lynn/1500.jpg

1500 installations, gone but lives on at wayback machine
https://web.archive.org/web/20090604181740/http://www.uofaweb.ualberta.ca/educationhistory/IBM1500Systems_NorthAmerica.cfm

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Db2! was: NODE.js for z/OS

2017-10-31 Thread Anne & Lynn Wheeler
edgould1...@comcast.net (Edward Gould) writes:
> Way back in the 1980’s we had just gotten in a 4331 for testing. I was
> given a list of software to order and DL/1 was there but for DOS/VSE.
> Was it ever available on MVS?
> My memory is starting to ooze here, but wasn’t there a DB for VM as well was 
> it QBE(???).


re:
http://www.garlic.com/~lynn/2017j.html#27 Db2! was: NODE.js for z/OS
http://www.garlic.com/~lynn/2017j.html#28 Db2! was: NODE.js for z/OS

all the original sql/relational was done on vm/145 and then System/R
technology transfer to Endicott for SQL/DS on both DOS/VSE and VM,
before transfer to STL & porting to MVS (after EAGLE implodes).
some posts
http://www.garlic.com/~lynn/submain.html#systemr

YKT also did QBE (query-by-example) for VM370. Old email about "father
of QBE and arch-enemy of System/R" doing QBE presentation at SJR
http://www.garlic.com/~lynn/2002e.html#email800310

QBE ref:
https://en.wikipedia.org/wiki/Query_by_Example

note predating SQL & QBE were 4th generation languages developed on
CP67 for virtual machine based commercial online offerings. past
posts
http://www.garlic.com/~lynn/submain.html#online

Mathematica originally did RAMIS and it ran on NCSS ... a CP67 spin-off from
the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

then came FOCUS, available on TYMSHARE (later a vm370-based online
commercial system) ... and NCSS did NOMAD (as their own proprietary product)

some online refs
http://archive.computerhistory.org/resources/text/Oral_History/RAMIS_and_NOMAD/RAMIS_and_NOMAD.National_CSS.oral_history.2005.102658182.pdf
http://www.decosta.com/Nomad/

https://en.wikipedia.org/wiki/FOCUS

FOCUS is a computer programming language and development environment. It
is a language used to build database queries, and is regarded as a
fourth-generation programming language (4GL). Produced by Information
Builders Inc., it was originally developed for data handling and
analysis on the IBM mainframe.

https://en.wikipedia.org/wiki/Nomad_software

Martin provided a "dozen pages of COBOL, and then just a page or two of
Mark IV, from Informatics." Rawlings offered the following single
statement, performing a set-at-a-time operation, to show how trivial
this problem was with Nomad:

https://en.wikipedia.org/wiki/Ramis_software

RAMIS was initially developed in the mid 1960s by the company
MATHEMATICA on a consulting contract for a marketing study by a team
headed by Gerald Cohen and subsequently further developed and marketed
as a general purpose data management and analysis tool. In the late
1960s Cohen fell out with the management of MATHEMATICA and left to form
his own company. Soon thereafter his new company released a new product
called FOCUS which was very similar to RAMIS - even, it is rumored,
having some of the same bugs.

... snip ...

posts mentioning QBE:
http://www.garlic.com/~lynn/2002e.html#44 SQL wildcard origins?
http://www.garlic.com/~lynn/2002o.html#70 Pismronunciation
http://www.garlic.com/~lynn/2003n.html#11 Dreaming About Redesigning SQL
http://www.garlic.com/~lynn/2003n.html#18 Dreaming About Redesigning SQL
http://www.garlic.com/~lynn/2003n.html#19 Dreaming About Redesigning SQL
http://www.garlic.com/~lynn/2005.html#25 Network databases
http://www.garlic.com/~lynn/2007s.html#21 Ellison Looks Back As Oracle Turns 30
http://www.garlic.com/~lynn/2009p.html#82 What would be a truly relational 
operating system ?
http://www.garlic.com/~lynn/2011d.html#55 Maybe off topic
http://www.garlic.com/~lynn/2011d.html#60 Maybe off topic
http://www.garlic.com/~lynn/2011p.html#1 Deja Cloud?
http://www.garlic.com/~lynn/2012b.html#60 Has anyone successfully migrated off 
mainframes?
http://www.garlic.com/~lynn/2013c.html#37 PDP-10 byte instructions, was What 
Makes an Architecture Bizarre?
http://www.garlic.com/~lynn/2017c.html#85 Great mainframe history(?)

other posts mentioning RAMIS, NOMAD, &/or FOCUS
http://www.garlic.com/~lynn/2003d.html#15 CA-RAMIS
http://www.garlic.com/~lynn/2003d.html#17 CA-RAMIS
http://www.garlic.com/~lynn/2003n.html#15 Dreaming About Redesigning SQL
http://www.garlic.com/~lynn/2006k.html#37 PDP-1
http://www.garlic.com/~lynn/2006r.html#49 Seeking info on HP FOCUS (HP 9000 
Series 500) and IBM ROMP CPUs from early 80's
http://www.garlic.com/~lynn/2007c.html#12 Special characters in passwords was 
Re: RACF - Password rules
http://www.garlic.com/~lynn/2007e.html#37 Quote from comp.object
http://www.garlic.com/~lynn/2007j.html#17 Newbie question on table design
http://www.garlic.com/~lynn/2009k.html#40 Gone but not forgotten: 10 operating 
systems the world left behind
http://www.garlic.com/~lynn/2010e.html#54 search engine history, was Happy 
DEC-10 Day
http://www.garlic.com/~lynn/2010e.html#55 Senior Java Developer vs. MVS Systems 
Programmer (warning: Conley rant)
http://www.garlic.com/~lynn/2010e.html#58 Senior Java Developer vs. MVS Systems 
Programmer (warning: Conley rant)
http://www.garlic.com/~lynn/2010n.html#21 What non-IBM software products 

Re: Db2! was: NODE.js for z/OS

2017-10-31 Thread Anne & Lynn Wheeler
k...@ntrs.com (Karl S Huf) writes:
> Yes, IBM officially rebranded DB2 to Db2 because . . . that's what they
> do (apologies to GEICO).  At least it's still pronounced the same so
> that's at least one less question I have to field - unlike, say JES2 vs
> JES3 ("Hey Karl, are we every going to upgrade to JES3?" - actual
> question).

re:
http://www.garlic.com/~lynn/2017j.html#27 Db2! was: NODE.js for z/OS

more trivia ... my wife served a stint in the gburg JES group ... one of
the catchers for ASP to turn into JES3 ... then was one of the authors
of "JESUS" (JES Unified System) ... the combined features of JES2 and
JES3 that the respective customers couldn't live without ... but for
various reasons never went any further. She then was con'ed into going
to POK to be responsible for mainframe loosely-coupled architecture
... where she did "peer-coupled shared data" architecture. She didn't
remain very long because of 1) little uptake (except for IMS hotstandby
until sysplex & parallel sysplex much later) and 2) constant battles
with the communication group trying to force her into using SNA for
loosely-coupled operation (periodic truce where they allowed that she could
use anything within the datacenter walls, but the communication group had
strategic ownership of everything that crossed datacenter walls) past
posts
http://www.garlic.com/~lynn/submain.html#shareddata

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Db2! was: NODE.js for z/OS

2017-10-31 Thread Anne & Lynn Wheeler
jesse1.robin...@sce.com (Jesse 1 Robinson) writes:
> The name change was much bandied about at SHARE in Providence. I for
> one have gotten over my indignation and am ready to move on. If you
> really want to be offended by an assault on the sensibilities, how
> about the fact that there never was a D(bee)1? The product was spawned
> in an era where calling anything '2' gave it a veneer of
> respectability as if it were a new and improved version of some
> mythical precursor. That was implicitly fake news, which we now know
> is reprehensible skullduggery.

the original sql/relational implementation done on vm/145 at san jose
research in bldg. 28 on main plant site (before almaden was built up the
hill). past posts
http://www.garlic.com/~lynn/submain.html#systemr

the "official" next generation DBMS was code named EAGLE (DB1?). while
the corporation was preoccupied with EAGLE, we managed to do tech
transfer to Endicott and get it out as SQL/DS. Then when EAGLE imploded,
there were requests about how fast sql/ds (system/r) could be ported to
MVS. This was eventually released as DB2, originally for decision support
*ONLY*. 1995 System/R reunion
http://www.mcjones.org/System_R/
HTML version
http://www.mcjones.org/System_R/SQL_Reunion_95/sqlr95.html
Some EAGLE reference
http://www.mcjones.org/System_R/SQL_Reunion_95/sqlr95-System.html#Index164

from above:

Eagle was an IMS successor; it was going to do everything. And they were
very worried about path lengths. So there had been something in IMS
called TP1. But TP1 was more of a general characterization; ET1 was a
specific program. And then Jim wrote all this stuff down in an article
that he published in Datamation. It had Anonymous et al. or something
like that as the author[

... snip ...

When Jim leaves for Tandem, he palms off some amount of stuff on me
related to System/R as well as consulting with the IMS group.

later we were doing cluster scaleup for rs/6000 ha/cmp ... working with
oracle, ingres, sybase, etc for commercial scaleup and national labs for
scientific scaleup. for cluster scaleup, the issue was ibm's RDBMS
cluster was mainframe only (loosely-coupled). These other vendors had
open system source base that also supported DEC VAX and DEC
VAX/Cluster. I did a cluster scaleup distributed lock manager that
emulated the DEC VAX/Cluster semantics ... making it easier for them to
port to HA/CMP. The mainframe DB2 group started complaining that if I
was allowed to go ahead, it would be at least 5yrs ahead of them.
This is a reference to the Jan1992 meeting in Ellison's (Oracle CEO) office on
cluster scaleup
http://www.garlic.com/~lynn/95.html#13

Within a few weeks of the above meeting, cluster scaleup was
transferred, announced as IBM supercomputer for numeric/scientific
*ONLY*, and we were told we couldn't work on anything with more than
four processors ... some old email for the period
http://www.garlic.com/~lynn/lhwemail.html#medusa
posts mentioning ha/cmp
http://www.garlic.com/~lynn/subtopic.html#hacmp

Trivia: one of the oracle people mentioned in the Ellison cluster
scaleup meeting claims he was the primary person when he was at IBM,
doing the SQL/DS tech transfer to STL (now SVL) for port to MVS (to
become DB2).

Totally other trivia: in the wake of being told we couldn't work on more
than four processors, we leave IBM. Later two of the other Oracle people
in the Ellison cluster scaleup meeting are at a small client/server
startup responsible for something called the "commerce" server. We are
brought in as consultants because they want to do payment transactions
on the server. The startup had also done some technology called "SSL"
they wanted to use ... the result is now frequently called "electronic
commerce".


-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: IBM open sources it's JVM and JIT code

2017-10-24 Thread Anne & Lynn Wheeler
t...@harminc.net (Tony Harminc) writes:
> One can certainly write a Rexx interpreter (or compiler, for that matter),
> and run it under TSO and/or ISPF; in that sense it *tolerates* those
> environments. But for reasons known only to IBM, the interfaces needed to
> implement *integration* with the TSO/E and ISPF environments are
> undocumented. It used to be possible to write one's own Terminal Monitor
> Program (TMP), and there was even a book describing how to do so. With
> TSO/E that book was dropped, and while one can guess at much of what needs
> to be done, there are OCO control blocks and interfaces that inhibit
> implementing interfaces like Address TSO and Address ISPF.

early/mid 70s, internal politics from the Future System group was
shutting down 370 products (pushing that everything would be moved to
completely different Future System). The lack of 370 products during
this period is credited with giving clone processor makers a market
foothold.

The 23June1969 unbundling announcement started to charge for software &
services ... but managed to make the case that kernel software would
still be free. when FS imploded there was a mad rush to get products back
into the 370 pipeline. old IBM reference:
http://www.jfsowa.com/computer/memo125.htm

at the same time (because of the rise of clone processors) there was a
decision to transition to charging for kernel software (my resource
manager was the initial guinea pig). This continued into the early 80s
... then starts the OCO-wars. One of the motivations for OCO-wars was
(again) the clone processor competition ... but another motivation was
that customers weren't migrating off MVS to MVS/XA according to
plan. Part of the blame was placed on customers that had source and made
local modifications ... which weren't easily migrated from MVS to
MVS/XA. Eliminating source would minimize customers making local
modifications and enhance IBM control of their customer base.

Complicating things was the introduction of a clone "hypervisor" (subset of
virtual machines in hardware) which allowed concurrent operation of MVS
& MVS/XA much more efficiently than a traditional virtual machine. IBM was
eventually able to respond with PR/SM & LPARS for 3090 (but by that time
some amount of the MVS->MVS/XA migration window had passed).

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Blockchain on Mainframe ?

2017-10-24 Thread Anne & Lynn Wheeler
copied from social media IBM group

Blockchain mining using GPU (graphics) chips that have a huge number of
internal processors
https://hothardware.com/news/amd-radeon-rx-vega-mining-block-chain-ethereum
and xeon crypto (xeon are processor chips used in e5-2600 and other
blades) ... benchmark e5-2600v4 against xeon gold (rebranded e5-2600v5)
https://software.intel.com/en-us/articles/intel-xeon-scalable-processor-cryptographic-performance

e5-2600v5 (xeon GOLD) blade is somewhere 10-20 times the BIPS (TIPS)
rating of max configured z14 (@150BIPS).
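
Just to make that concrete (the 10-20x multiple and the 150BIPS z14 figure are
the numbers above; the per-blade range below is simply derived from them, not a
separate measurement):

  # rough arithmetic for the blade vs. max-configured z14 comparison above
  z14_bips = 150          # max configured z14 (from above)
  low, high = 10, 20      # claimed blade multiple (from above)
  print(z14_bips * low, "-", z14_bips * high, "BIPS")   # 1500 - 3000 BIPS per blade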

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Somewhat Interesting Mainframe Article

2017-10-15 Thread Anne & Lynn Wheeler
billwil...@hotmail.com (Bill Wilkie) writes:
> But the biggest BOONDOGGLE of all times, was what management spent a
> few million on and that was Four Quadrant Leadership. If discussed
> something with another person and YOU made the change you were
> operating from Quadrant 1. If you discussed it with another persons
> and THEY did it you were operating from Quadrant 2. No one ever
> figured out whay it was important but everyone in the company had to
> take the course. We spent millions and the manager who bought it was
> called a VISIONARY. I suspect or should I say HOPE he is on the
> unemployment line with the rest of the visionaries.

mid-80s, top IBM executives were predicting that corporate revenue would
double mostly based on mainframe business ... and there was massive
internal bldg program to double mainframe manufacturing capacity.

however, a couple years later, a senior disk engineer got a talk
scheduled at the internal, world-wide communication group conference
supposedly on 3174 performance, but opened the talk with statement that
the communication group was going to be responsible for the demise of
the disk division. The issue was that the communication group had
stranglehold on the datacenter with their corporate strategic responsibility
for everything that crossed the datacenter walls and were fiercely
fighting distributed computing and client/server trying to preserve
their dumb terminal paradigm and install base. The disk division was
seeing data fleeing the datacenter to more distributed computing
friendly platforms with drop in disk sales. They had come up with
several solutions which were constantly vetoed by the communication
group.

The communication group's stranglehold on the datacenter was affecting not
only disk sales but everything mainframe, and a few short years
later the company goes into the red.

By the mid-90s, most of the easier applications had fled mainframes
(driving IBM into the red) and the major remaining customer was the
financial industry (accounting for significant percentage of all new
mainframe sales). However, the financial industry was facing a significant
bottleneck with decades-old legacy cobol financial software that did
settlement in the overnight batch window ... and globalization
(shortening the window) and business increases (increasing workload) were
putting extreme strain on the overnight batch window.

Late 90s, there was a period where the financial industry spent
billions of dollars on new software to support parallelized "straight
through processing" (real-time settlement with every transaction),
planning on using large numbers of killer micros. However they were
using some industry standard parallelization libraries ...  and they
continued down the path even when repeatedly told (including by me) the
libraries introduced a factor of 100 times overhead (compared to mainframe
batch cobol).  It wasn't until they actually had some major large pilots
go down in flames that they pulled back (100 times overhead totally
swamped anticipated throughput increases from large numbers of
parallelized killer micros).
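
A minimal sketch of the break-even arithmetic being argued here (the
factor-of-100 overhead is from the paragraph above; the assumption that one
killer micro roughly matches one mainframe processor is purely illustrative):

  # illustrative only: with a 100x per-transaction overhead, aggregate parallel
  # capacity has to grow ~100x just to match the existing batch throughput
  overhead = 100            # per-transaction cost multiplier vs. batch cobol (from above)
  micro_vs_mainframe = 1.0  # hypothetical: one killer micro ~ one mainframe processor
  break_even = overhead / micro_vs_mainframe
  print(break_even)         # ~100 processors just to stand still, before any increase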

A decade later, I was involved in an effort that approached it from a
different standpoint. It supported high level business rules that
generated fine-grain, parallelizable SQL statements, and rather than
directly implementing parallelization ... it relied on the enormous work
that all the RDBMS vendors (including IBM) put into scalable cluster
parallelized throughput. Prototype pilots were implemented for different
financial sectors that easily supported several times their current peak
workloads on non-mainframe cluster RDBMS platforms. This was presented
to several financial industry groups and initially had high acceptance
... but then hit brick wall. They eventually said that the top
executives still bore significant scars from the failed efforts in the
late 90s and it would be a very long time before it would be tried
again.
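
Purely to illustrate the shape of that approach (nothing here is the actual
product; the table, column, and rule names are invented): a declarative
business rule gets compiled into many small, independent SQL statements, and
the cluster RDBMS is left to do the parallelization it is already good at.

  # hypothetical sketch: compile one settlement "rule" into per-account SQL
  # statements that a cluster RDBMS can execute in parallel
  def settlement_sql(account_ids):
      template = ("UPDATE positions SET settled = settled + pending, pending = 0 "
                  "WHERE account_id = {acct} AND trade_date <= CURRENT_DATE")
      return [template.format(acct=a) for a in account_ids]

  for stmt in settlement_sql([1001, 1002, 1003]):
      print(stmt)   # each statement touches a single account, so the RDBMS is
                    # free to spread them across cluster nodes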

Note that SQL/RDBMS features are common across mainframe and
non-mainframe platforms ... all platforms use industry standard
fixed-block disks, non-mainframes getting slightly better throughput
than mainframe doing CKD simulation on same industry standard disks
(real CKD disks haven't been made for decades). Native fibre-channel
has significantly higher throughput than the mainframe FICON
protocol running over the same fibre-channel.

The most recent published mainframe peak I/O throughput that I've found
is z196 getting 2M IOPS using 104 FICON. About the same time there was
fibre channel announced for E5-2600V1 blade claiming over a million IOPS
(for a single fibre-channel; two such fibre-channels having higher native
throughput than 104 FICON running over 104 fibre-channel).
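
Rough per-channel arithmetic behind that comparison (the 2M IOPS over 104
FICON and the ~1M IOPS single fibre-channel claim are the published figures
cited above; the rest is just division):

  # per-channel I/O rates implied by the numbers in the paragraph above
  z196_iops, ficon_channels = 2_000_000, 104
  native_fc_iops = 1_000_000                   # claimed for a single fibre channel
  per_ficon = z196_iops / ficon_channels
  print(round(per_ficon))                      # ~19,231 IOPS per FICON channel
  print(round(native_fc_iops / per_ficon))     # native FC ~52x the per-channel rate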

max configured z196->z14 went from 80 @625MIPS for 50BIPS to 170
@882MIPS for 150BIPS (three times aggregate processing with more than
double the number of slightly faster processors).

In the same time frame, two chip e5-2600V1 blade at 530BIPS went to 

Re: git, z/OS and COBOL

2017-10-11 Thread Anne & Lynn Wheeler
000433f07816-dmarc-requ...@listserv.ua.edu (Paul Gilmartin) writes:
> I have not used IEBUPDTE extensively.  When I contributed to
> Charlotte, I made more use of CMS UPDATE, which is similar to
> IEBUPDTE, but with further features useful for source code control.
> XEDIT can generate CMS UPDATE control files, but they contain some
> noise which I filtered out with a final pass through SuperC.
>
> There are more powerful tools than IEBUPDTE.  Embrace them.
>
> Examples include diff3 and various GUI merge utilities.

original CMS UPDATE was single level (mid-60s) ... much more akin to
IEBUPDTE. As an undergraduate in the 60s, I did a preprocessor to CMS UPDATE
that supported "$" which would do the sequence numbering on the inserted
source cards ... eliminating having to manually add them to each one.
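
A minimal sketch of what that preprocessor does (Python here purely for
illustration; the original obviously wasn't, and the card layout is simplified
to 72 columns of text plus an 8-digit sequence field in columns 73-80):
inserted cards get their sequence numbers generated automatically, spaced
between the surrounding existing numbers.

  # toy "$"-style sequence numbering for inserted source cards
  def number_inserts(cards, seq_before, seq_after):
      # assign ascending sequence numbers (cols 73-80) to the inserted cards,
      # spaced evenly between the surrounding existing sequence numbers
      step = (seq_after - seq_before) // (len(cards) + 1)
      return [card[:72].ljust(72) + str(seq_before + (i + 1) * step).zfill(8)
              for i, card in enumerate(cards)]

  for c in number_inserts(["         L     R1,SAVEAREA",
                           "         BR    R14"], 1000, 2000):
      print(c)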

Later at the science center there was joint project with Endicott for
modifications to CP/67 to support 370 virtual memory virtual machines
(in addition to 360/67 virtual memory virtual machines) ... aka
simulating 370 virtual memory architecture on real 360/67.

This was originally implemented all in EXEC ... repeatedly processing
CNTRL files and multiple levels of update files.

Originally had three levels ... "L" updates to CP/67 (my enhancements to
base product CP/67), "H" udpates to CP/67 to provide 370 virtual
machines.

The combination of "L" & "H" updated CP/67 then ran regularly on
production 360/67. Lots of 370 operating system software started
development in "H" 370 virtual machines.

Then the "I" updates to CP/67 to change from running 360/67 architecture
to running 370 architecture ... build typically required applying "L",
"H", & "I". This was running regularly in "H" 370 virtual machines a
year before the first 370/145 engineering machine supporting virtual
memory was operational (and long before 370 virtual memory was
announced). In fact, the first 370/145 engineering machine used an "I"
level system as early software to test operation of the machine.
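
A sketch of that build ordering, assuming each update level is just a
transformation applied to the source in a fixed order (the level names and
their meanings are from the description above; everything else is invented):

  # illustrative: multi-level builds apply update levels in a fixed order,
  # and different build targets select different prefixes of that order
  LEVELS = ["L", "H", "I"]   # L: local enhancements, H: 370 virtual machines,
                             # I: run on real 370 hardware

  def build(base, updates, through_level):
      src = list(base)
      for level in LEVELS[:LEVELS.index(through_level) + 1]:
          src = updates[level](src)      # apply all update files at this level
      return src

  updates = {lvl: (lambda s, l=lvl: s + [f"...{l}-level updates applied"])
             for lvl in LEVELS}
  print(build(["base CP/67 source"], updates, "H"))  # the production 360/67 system
  print(build(["base CP/67 source"], updates, "I"))  # the system run in an "H" virtual machine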

trivia: initial "I" system IPL failed. It turned out that they had
reversed the B2 op-codes for RRB & PTLB ... quickly diagnosed the
problem and zap'ed the kernel to correspond with the "incorrect"
implementation (they eventually corrected the hardware).

trivia: the person responsible for the Internet DNS system had been an MIT
student at the time, working at the science center, and did some of the
original CMS multi-level source update implementation.

past posts mentioning science center
http://www.garlic.com/~lynn/subtopic.html#545tech

Later some San Jose engineers added support for 3330 & 2305 devices for
CP/67-SJ. This ran production internally on most of the 370 systems for
quite some time.

Later the multi-level update support was added to both standard UPDATE
and eventually XEDIT.

I had kept archives of much of the science center files on tapes. In
mid-80s, when Melinda Varian was doing her VM History ... she contacted
me about getting copies of the original multi-level source update
implementation in EXEC. It was fortunate timing ... IBM Almaden Research
was starting to have datacenter operational problems (operators were
mounting random tapes as scratch), and even tho I had replicated the
archives on three different tapes ... they were all in the IBM Almaden
Research tape library ... and operators managed to mount all three
archive tapes (and several of my other tapes) as scratch. They never got
around to notifying users until long after the damage was done.

some old email exchange with Melinda (some repeat and not all about
multi-level update)
http://www.garlic.com/~lynn/2011c.html#email850820
http://www.garlic.com/~lynn/2006w.html#email850906
http://www.garlic.com/~lynn/2006w.html#email850906b
http://www.garlic.com/~lynn/2006w.html#email850908
http://www.garlic.com/~lynn/2011c.html#email850908
http://www.garlic.com/~lynn/2014e.html#email850908
http://www.garlic.com/~lynn/2007b.html#email860111
http://www.garlic.com/~lynn/2011c.html#email860111
http://www.garlic.com/~lynn/2011b.html#email860217
http://www.garlic.com/~lynn/2011b.html#email860217b
http://www.garlic.com/~lynn/2011c.html#email860407

other trivia: much of internal software development was then being done
using CMS and CMS multi-level update ... including MVS components like
JES2 ... then when it came time for release ... they had to port to the
standard MVS source distribution process.

One of the VM/370 issues was even though (originally CP/67) maintenance
distribution was all done using the CMS multi-level source ... every new
release ... they would permanently apply all maintenance & development
updates and resequence each module. Lots of internal sites and customers
had developed extensive source updates (some claim there were more source
updates on the VM/370 SHARE Waterloo tape than in the base system).

The release-to-release resequencing became something of a hassle ... so in
the late 70s, I wrote a couple programs ... one would take a previous
release with all maintenance applied 

Re: Here's a horrifying thought for all you management types....

2017-10-06 Thread Anne & Lynn Wheeler
cfmpub...@ns.sympatico.ca (Clark Morris) writes:
> While the last systems programming job I did was 27 years ago and I
> wouldn't know how to safely power on and IPL a system today (3081s
> didn't have LPARs let alone HMCs) that is ridiculous.  At least I know
> how to play with SMF 30 records in COBOL and modify other peoples
> assembler code.  I am not willing to move from Nova Scotia so I am not
> looking for the job although I might take short assignments.
> Retirement is nice.

The 3081 did have a service processor, which over time took on increasing functions.

field engineering had a diagnostic process that started with scoping
individual components. The 3081 had components in TCMs which could no longer
be (directly) scoped. For TCMs, service processors were introduced with
probes into the TCMs for doing diagnostics ... and engineers had a bootstrap
process starting with being able to (directly) scope/diagnose the
service processor ... which then could be used to diagnose the 3081.

The 3090 service processor started out to be a 4331 running a customized
version of VM370 release 6 ... it was then changed to a pair of 4361s.
PR/SM (LPARs) was eventually introduced for 3090 as reaction to Amdahl's
"hypervisor". Amdahl had created macrocode ... which was intermediate
370-like instructions ... and enormously easier to program than the
native machine horizontal microcode (originally done in response to the increasing
number of architecture tweaks that IBM was making). It was then used to
implement the hypervisor (virtual machine subset). 3090 took quite a
bit longer to respond to Amdahl's hypervisor (with PR/SM, LPAR) because
it had to be done in the low-level native horizontal microcode.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Interesting article in IEEE Annals of the History of Computing

2017-10-05 Thread Anne & Lynn Wheeler
dbo...@sinenomine.net (David Boyes) writes:
> IBM Branch Offices: What They Were, How They Worked, 1920s–1980s
>
> James W. Cortada
>
> Abstract:
> IBM branch offices were the company’s local face around the world in
> the 20th century. Its sales and customer support came out of these
> organizations, which are described here, using the example of one
> branch office as a historical case study. Additionally, personal
> perspectives on their role of having worked with these during the
> 1970s and 1980s are provided.
>
> Published in: IEEE Annals of the History of Computing ( Volume: 39, Issue: 3, 
> 2017 )

one of the issues after 23jun1969 unbundling announcement was how to
handle the training of new SEs ... previously it was sort of journeyman
training as part of a large SE group at the customer site. Unbundling started
to charge for SE time at the customer ... and they couldn't figure out
how to *NOT* charge for the SE trainee time. past posts mentioning
23jun1969 unbundling
http://www.garlic.com/~lynn/submain.html#unbundle

The solution was HONE ... hands-on network environment, online to
(originally) CP67 virtual machine datacenters (later moved to vm370)
... being able to practice with running guest operating systems in
virtual machines. I provided highly customized & enhanced CP67 operating
systems (and later VM370) to HONE from just about the beginning until
sometime in the mid-80s. One of the early enhancements was to provide
simulation of the newly announced 370 instructions ... so guest
operating systems generated for 370s could be run under CP67. some
past posts mentioning HONE
http://www.garlic.com/~lynn/subtopic.html#hone

The science center (besides doing CP40, CP67, CMS, GML, internal network
... technology also used for the corporate sponsored university BITNET
... where ibm-main mailing list originated) ... early on, also ported
APL\360 to CMS as CMS\APL. HONE then started offering CMS\APL based
sales support applications ... eventually the sales
support applications started to dominate all HONE activity (salesmen
edging out trainee SEs at branch office terminals) ... and the original
HONE use for guest operating systems dwindled away. past posts
mentioning science center
http://www.garlic.com/~lynn/subtopic.html#545tech
post mentioning internal network
http://www.garlic.com/~lynn/subnetwork.html#internalnet
posts mentioning corporate sponsored univ network
http://www.garlic.com/~lynn/subnetwork.html#bitnet

Mid-70s, all the US HONE datacenters were consolidated in Palo Alto
(trivia: when FACEBOOK originally moved to silicon valley, it was into a
new bldg next door to the old HONE datacenter). Their VM370 systems were
enhanced to support single-system-image ... possibly the largest in the
world, eight large POK multiprocessors all operating as a single complex
with load balancing and fall-over across the complex. In the early 80s,
this was replicated first in dallas and then in boulder ... with
fall-over for disaster survivability across the three datacenters.
Also, by mid-70s, mainframe configurations were getting so complex, that
all new customer orders had to first be run through HONE
configurators.

Also by late 70s, various IBM factions were demanding that HONE be
migrated to MVS, the corporation's "favorite son operating system"
... and periodically all HONE resources were being devoted to MVS
migration, which would eventually fail ... and then things would settle back to
normal for a little while ... and then it would start all over. After
several of these failed attempts, in the first part of the 80s, they
started blaming me (and my enhanced vm370 operating systems) for all the
failed attempts to migrate to MVS.

During the late 70s period, head of POK had made some internal
proclamations that VM370 was being killed as a product (part of the
initial motivation for migrating HONE to MVS) ... which initiated huge
protests from HONE & marketing ... and POK had to spend several months
walking back the proclamation (reassuring HONE and marketing
organization).

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Temporary Data Sets

2017-10-04 Thread Anne & Lynn Wheeler
000248cce9f3-dmarc-requ...@listserv.ua.edu (Edward Finnell) writes:
> For decades MVS has honored the concept of public, Storage and private
> DASD. Numerous SHARE papers on how to configure DASD subsystems in
> order to reduce contention and optimize thruput. WSC under Ray Wicks
> produced many of them. One of my favorites was the 'The Big
> Pitcher'. Properly administered SMS can enhance the basic concepts and
> augment them with storage overflow.
>  
> If we had more info on the problem better suggestions could be
> provided. One of the old tricks was to preallocate sortwks and pass
> them thru the life of the job. No need to worry about vol=ref

back when CKD were real ... (rather than various kinds of simulation on
industry fixed-block disks ... all that rotational positioning and arm
motion, track lengths ... are all fiction) ... I was increasingly
pointing out that disk wasn't keeping up with computer technology and by
the early 80s was saying that disk relative system throughput had
declined by a factor of ten times since the 60s (disk throughput
increased 3-5 times, processor and memory throughput increased 40-50
times).
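
The factor-of-ten is just the ratio of the two growth ranges given above:

  # relative disk throughput decline implied by the figures above
  disk_gain = (3, 5)        # disk throughput growth since the 60s
  cpu_mem_gain = (40, 50)   # processor/memory throughput growth since the 60s
  low  = cpu_mem_gain[0] / disk_gain[1]   # 40/5  = 8
  high = cpu_mem_gain[1] / disk_gain[0]   # 50/3 ~= 16.7
  print(low, high)   # disks fell behind by roughly 8x-17x, on the order of ten times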

Some disk division executive took exception and assigned the division
performance group to refute the statements. after several weeks they
eventually came back and effectively said that I had slightly understated
the problem. The analysis was then respun as disk configuration
recommendations for improving system throughput ... SHARE presentation
B874. old post with part of the early 80 comparison
http://www.garlic.com/~lynn/93.html#31
old posts with pieces of B874
http://www.garlic.com/~lynn/2001l.html#56
http://www.garlic.com/~lynn/2006f.html#3

note that memory is the new disk ... current latency for a cache miss,
memory access ... when measured in count of processor cycles, is similar
to 60s disk latency when measured in 60s processor cycles ... it is
part of the reason for the introduction of out-of-order execution, branch prediction,
speculative execution, hyperthreading ... stuff that can go on while
waiting on a stalled instruction (waiting for memory on cache miss)
... these show up in z196 (accounting for at least half the performance
improvement over z10) ... much of this stuff has been in other
platforms for decades.

trivia: the 195 pipeline had out-of-order execution ... but no branch
prediction and/or speculative execution ... so conditional branches
stalled the pipeline, and most applications would only run at half the 195 rated
performance. I got dragged into a proposal to hyperthread the 195 ... two
instruction streams simulating a multiprocessor ... two simulated
processors each running programs at around half throughput ... which together
would keep the 195 running at rated speed. It was never done ...

IBM hyper/multi threading patents mentioned in this post about the end
of ACS/360
https://people.cs.clemson.edu/~mark/acs_end.html

from Amdahl interview in the above:

IBM management decided not to do it, for it would advance the computing
capability too fast for the company to control the growth of the
computer marketplace, thus reducing their profit potential. I then
recommended that the ACS lab be closed, and it was.

... snip ...

end of the article has some of the acs/360 features that show up more
than 20yrs later in es/9000.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Would encryption have prevented known major breaches?

2017-09-15 Thread Anne & Lynn Wheeler
we were somewhat involved in (original) cal. data breach notification
act ... having been brought in to help wordsmith the electronic
signature act and several of the players were heavily involved in
privacy ... and had done in depth public surveys and #1 was fraudulent
financial transactions somewhat as the result of various kinds of
breaches (before notification, each member of the public thought it was an isolated
incident affecting only them).  Problem was that little or nothing was
being done about the breaches and it was hoped that publicity from the
notifications might prompt corrective action. The issue is that entities
normally take security measures in self interest/protection. In the case
of breaches, it wasn't the institutions that were at risk, but the
public. Since then there has been a dozen or so federal bills proposed
about evenly divided between those similar to the cal. state act and
those that effectively negate need for notification (in some cases,
specifying a combination of information compromised that would
essentially never occur).

We had also been brought in as consultants into a small client/server
startup that wanted to do payment transactions on their server, they had
also invented this technology called "SSL" they wanted to use, the
result is now frequently called "electronic commerce". Somewhat for
having done "electronic commerce", we get brought in to the X9A10
financial standard working group that had been given the requirement to
preserve the integrity of the financial infrastructure for *ALL* retail
payments. We did detailed end-to-end vulnerability and exploit studies
of various kinds of payments and eventually wrote a standard that
slightly changes the current paradigm ... and eliminates the ability of
crooks to use information from previous transactions, records and/or
account numbers to perform fraudulent transactions. As a result it is no
longer necessary to hide/encrypt such information ... either in transit
and/or at rest (somewhat negating the earlier work with SSL for
electronic commerce).

dual-use metaphor; transaction account number is used for business
processes and must be readily available for scores of business processes
and millions of locations around the planet. at the same time it is used
for authentication and therefore must *NEVER* be
divulged. The conflicting requirements have resulted in us observing that
even if the planet was buried under miles of information hiding
encryption, it still wouldn't stop information leakage

security proportional to risk metaphor; value of transaction information
to merchant is profit from the transaction ... possibly a couple of
dollars (and value to infrastructure operators a few cents) while the
value of the information to the crooks is the account balance and/or
credit limit. As a result, the crooks attacking the system may be able to
outspend the defenders by a factor of 100 times what the defenders can afford to
spend.

Part of the issue now is there are a lot of stakeholders with a vested
interest in the unchanged paradigm.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: z14 and zBX

2017-08-27 Thread Anne & Lynn Wheeler
l...@garlic.com (Anne & Lynn Wheeler) writes:
> Old email about doing CP (vm370) internals class and
> meetings with NSF about connecting NSF supercomputer centers
> http://www.garlic.com/~lynn/2011h.html#email850930
> http://www.garlic.com/~lynn/2011h.html#email851114
> http://www.garlic.com/~lynn/2011h.html#email851116

re:
http://www.garlic.com/~lynn/2017h.html#89

oops, typo, that is a "b" (not an "h")
http://www.garlic.com/~lynn/2011b.html#email850930
http://www.garlic.com/~lynn/2011b.html#email851114
http://www.garlic.com/~lynn/2011b.html#email851116

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: z14 and zBX

2017-08-27 Thread Anne & Lynn Wheeler
edja...@phoenixsoftware.com (Ed Jaffe) writes:
> On 8/22/2017 4:27 AM, R.S. wrote:
>>
>> The above is some simplification, however I heard A LOT OF zBX, saw
>> a lot of presentations, and  IBMers never ever convinced me the zBX
>> is something more than LAN-attached rack.
>
> zBX was a mistake. Every company makes them.

for at least past ten years, large public cloud operators have been
claiming that they assemble their own systems at 1/3rd the price of
brand name server vendors. their large megadatacenters having hundreds
of thousands of these system blades. Within past couple years, system
processor chip makers have been saying that they ship over half their
chips to these large megadatacenters ... significantly changing large
datacenter model (and possibly contributing to IBM selling off its
server business). Each one of these blades has upwards of ten times
the processing power of a max. configured z14 ... and a single
megadatacenter will have several hundred thousand of these blades, and
there are scores of these megadatacenters around the world.

more than 30 years ago

early 1979, I got con'ed into doing 4341 benchmarks (on engineering
4341, they hadn't started shipping yet) for LLNL that was looking at
getting 70 4341s for computer farm (sort of leading edge of cluster
supercomputers).

starting in the early 80s, we were working with director of NSF on
inter-connecting the NSF supercomputer centers. We were supposed to get
$20M ... but then congress cuts the budget and things drag on for
sometime while we continue to work with the director. I had also done a
proposal to do racks of processor chips ... racks with arbitrary mix of
cards with arbitrary mix of 370 & 801/risc CMOS chips. This is old email
having scheduled meeting with director of NSF but also a week of
meetings at research on racks full of arbitrary mix 370 & 801/risc chips
(I had to get somebody to fill in for me at the NSF meetings)
http://www.garlic.com/~lynn/2007d.html#email850315

I had project I called HSDT that had T1 and faster speed links ...
including connectivity to IBM mainframes using non-IBM controllers.
some HSDT email
http://www.garlic.com/~lynn/lhwemail.html#hsdt

Old email about doing CP (vm370) internals class and
meetings with NSF about connecting NSF supercomputer centers
http://www.garlic.com/~lynn/2011h.html#email850930
http://www.garlic.com/~lynn/2011h.html#email851114
http://www.garlic.com/~lynn/2011h.html#email851116

some more HSDT & NSF
http://www.garlic.com/~lynn/2006u.html#email860505
more NSF related email
http://www.garlic.com/~lynn/lhwemail.html#nsfnet

eventually NSF releases a RFP (in part based on what we already have
running in HSDT), but internal politics prevents us from bidding,
director of NSF tries to help by writing a letter to the company (with
support from other agencies) copying the CEO ... but that just makes the
internal politics worse. As regional networks connect into the center,
it evolves into the NSFNET backbone, precursor to modern internet.
https://www.technologyreview.com/s/401444/grid-computing/

more HSDT email
http://www.garlic.com/~lynn/lhwemail.html#hsdt

during this period, the communication group is spreading misinformation
internally, even claiming that SNA/VTAM can be used ... even tho
SNA/VTAM doesn't support TCP/IP and 37x5 boxes don't support more than
56kbits/sec links. Somebody collects much of the mis-information email
and sends us a copy ... significantly clipped and redacted to protect
the guilty
http://www.garlic.com/~lynn/2006w.html#email870109

there is not a lot of interest within IBM about including 370 chips in
cluster racks ... so eventually it is only 801/risc power chips. We are
working with national labs and supercomputer centers on cluster scaleup
for scientific/technical as well as RDBMS vendors on commercial cluster
scaleup. past reference JAN1992 meeting in Oracle CEO's office about
commercial cluster scaleup
http://www.garlic.com/~lynn/95.html#13

within a couple weeks of the Oracle meeting, cluster scaleup was
transferred, announced as supercomputer, and we were told that we
couldn't work on anything with more than four processors. 
17Feb1992 press about scientific/technical *ONLY*
http://www.garlic.com/~lynn/2001n.html#6000clusters1
later in the spring 11May1992 press, IBM surprised at national lab
interest in cluster supercomputers
http://www.garlic.com/~lynn/2001n.html#6000clusters2

more cluster scaleup email
http://www.garlic.com/~lynn/lhwemail.html#medusa

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: In Silicon Valley, dropping in at the GooglePlex, tech museums and the Jobs garage

2017-07-18 Thread Anne & Lynn Wheeler
g...@gabegold.com (Gabe Goldberg) writes:
> So when Southwest Airlines started offering daily nonstops from
> Baltimore-Washington International Marshall Airport to San Jose, I
> booked a trip with my husband, Eric. After an affordable
> transcontinental flight, we landed at Mineta San Jose International
> Airport, in the heart of Silicon Valley.

really conflicted about (almost) doing the reverse (on Alaska). One of
the Boyd "people" is back from Afghanastan for a couple weeks and will
be holding Boyd "beer" night in the basement of Ft. Myers O-club this
week (Boyd would regularly hold court there)... I use to sponsor Boyd
briefings at IBM.

I also used to sponsor (IBM) "Friday after work" in San Jose, frequently
(half priced pitchers of anchor steam) at Eric's on Cottle across from
the main plant site. Eric's is still there ... but much of the plant
site has been plowed under and the rest is no longer IBM. I'm no longer
in San Jose ... but I try and stop by Eric's every year when I go back
for "Hacker's" (silicon valley invitation only tech conference, for a
time I was the only IBMer; at early conferences, people could bring
unannounced products for others to play with; the culture has significantly
changed since the early days, it's been a long time since Apple developers showed
up with unannounced products for competitors to play with).
https://en.wikipedia.org/wiki/The_Hackers_Conference

Old post mentioning Boyd (posted to IBM-MAIN)
http://www.garlic.com/~lynn/2007c.html#25 
includes several old emails mentioning Boyd ... including a "Friday's"
email notice mentioning that I have hardcopies handouts of Boyd's
presentation
http://www.garlic.com/~lynn/2007c.html#email830512

1998 sat. photo of old plant site, 85 running horizontal across lower
half of the picture and cottle rd running vertical across left side of
picture, railroad running diagonally across upper right, "IBM" plant site
still mostly intact in the middle. Bldg. 28 (triangle shape, old san
jose research) in the upper right intersection of cottle & 85, with the
homestead (and lack) next to it.
http://www.garlic.com/~lynn/ibm5600-1998.jpg

current area, lots of plant site gone, now condos, apartments,
stores
https://www.google.com/maps/place/Erik's+DeliCaf%C3%A9/@37.248622,-121.8020719,1294m/data=!3m1!1e3!4m5!3m4!1s0x808e2e18a77f94bd:0xadfa00ef945ff99d!8m2!3d37.2491302!4d-121.8044912?hl=en

last year, both bldg 14&15 (where I played disk engineer) still existed,
current sat. view, bldg 15 is plowed under ...  bldg. 14 still exists
and cars in the parking lot. posts getting to play disk engineer in
bldg 14&15
http://www.garlic.com/~lynn/subtopic.html#disk

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: z/VM subcapacity pricing

2017-07-17 Thread Anne & Lynn Wheeler
gib...@wsu.edu (Gibney, Dave) writes:
> As an aside, I spent several years with a uni-processor (z800). There
> are significant benefits to having at least 2 processors. The benefits
> of fewer/faster processors go hockey stick when fewer becomes 1.

I remember in the 90s when they complained NT would regularly do that
... and "fix" was at least 2 processors. I felt really smug that my vm
mainframe resource manager/scheduler would never do that ... dating back
to when I first wrote it as undergraduate in the 60s for cp67. ... some
past posts
http://www.garlic.com/~lynn/subtopic.html#fairshare

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: IBM z14 High-lights

2017-07-17 Thread Anne & Lynn Wheeler
parwez_ha...@hotmail.com (Parwez Hamid) writes:
> z14 Key H/W high-lights:
>
> Up to 170 Customer PUs @ 5.2 GHz each on a 14 nm 10 core chip
> Up to 32 TB Memory
> Uni = 1832 'mips', 170-way = 146462 'mips'

z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS, (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
z13, 141 processors, 100BIPS (710MIPS/proc), Jan2015
...
z14, 170 processor, 146.5 BIPS, (862MIPS/proc - half uni), Aug2017

z196 documentation claims that half the per processor performance
improvement (compared to z10) is the introduction of out-of-order
execution (which had already been used for decades in other processors) ...
i.e. half of the 156MIPS increase from 469MIPS to 625MIPS. out-of-order
helps to mask huge latency in memory access ... potentially allowing
execution of other instructions while waiting on cache miss.

added to out-of-order execution are branch prediction and speculative
execution ... 360/195 had just out-of-order execution ... but
conditional branches drained the pipeline ... most codes ran at only
half the 195 rated mip-rate (5mips rather than 10mips).

Current latency to memory, when measured in number of processor cycles
... is comparable to 60s disk access latency, when measured in number of
60s processor cycles.

almost 18yrs, the number of processors increased by a factor of ten times,
while per processor performance increased by 5.5 times ... an overall
increase of 58.6 times.
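
The growth factors quoted there are straightforward ratios of the first and
last entries in the list above:

  # ratios from the z900 (Dec2000) and z14 (Aug2017) rows above
  procs_2000, bips_2000, mips_proc_2000 = 16, 2.5, 156
  procs_2017, bips_2017, mips_proc_2017 = 170, 146.5, 862
  print(procs_2017 / procs_2000)           # ~10.6x processors
  print(mips_proc_2017 / mips_proc_2000)   # ~5.5x per processor
  print(bips_2017 / bips_2000)             # ~58.6x aggregate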

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Running unsupported is dangerous was Re: AW: Re: LE strikes again

2017-07-15 Thread Anne & Lynn Wheeler
charl...@mcn.org (Charles Mills) writes:
> DoS of course = denial of service, which is a large basket. I think it
> sometimes means any sort of "bring the system down or make it
> ineffective" attack, but usually I think it refers to repeatedly
> starting a TCP session and not completing it so as to tie up resources
> and make real connections impossible.
>
> https://en.wikipedia.org/wiki/Denial-of-service_attack

re:
http://www.garlic.com/~lynn/2017g.html#74 Running unsupported is dangerous was 
Re: AW: Re: LE strikes again
http://www.garlic.com/~lynn/2017g.html#75 Running unsupported is dangerous was 
Re: AW: Re: LE strikes again
http://www.garlic.com/~lynn/2017g.html#80 Running unsupported is dangerous was 
Re: AW: Re: LE strikes again

June 17th 1995, the internet facing servers for the largest online
service provider started crashing. they brought in lots of experts to
look at the problem ... and finally one of their people flew out to
silicon valley and bought me a hamburger after work. I ate the burger
while he described the problem ... and then I gave him a Q fix that
stopped the crashing (that he installed that night). I then tried to get
vendors to address the problem but found no interest. Almost exactly a
year later there was lots of publicity about a service provider in
Manhattan that started crashing ...  and all of a sudden vendors started
bragging about how fast they reacted.

One of the issues was that there appeared to be two different groups
... those writing the code and those writing the specs ... some
particular DoS attacks were because of small gaps between what some of the code did
and what some of the specs said ... and they didn't have people that did
detailed study/understanding of both.

Until he passed, the internet standards editor would let me help with
the periodic STD1 ... he also sponsored my talk at ISI/USC on why the internet
wasn't (yet) business critical dataprocessing.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Running unsupported is dangerous was Re: AW: Re: LE strikes again

2017-07-15 Thread Anne & Lynn Wheeler
idfli...@gmail.com (scott Ford) writes:
> As a vendor i have been receiving questions about DoS attacks on z/OS ..
> I understand the idea / concept of perimeter defense , i was a Network
> Engineer in a pass life.
> But from a application point of view, if the application is using AT/TLS
> and there are Pagent protection policies for PORTS/IP addresses and the
> application is using encryption, where's the risk ???

We had worked with some number of Oracle people supporting cluster
scaleup for our HA/CMP IBM product. We then left IBM and two of the
Oracle people from this Jan1992 Ellison meeting
http://www.garlic.com/~lynn/95.html#13

left Oracle and were at small client/server startup responsible for
"commerce server". We were brought in as consultants because they wanted
to do payment transactions on their server; the startup had also
invented this technology they called "SSL" they wanted to use; the
result is now frequently called "electronic commerce".  I had absolute
authority over server to payment network gateway but could only make
recommendations about the browser to server, some of which were almost
immediately violated, which continue to account for some number of
vulnerabilities that continue to this day. Several of the attacks have
to do with faking certificates and not recognizing the problem (enabling
things like MITM-attacks). I use to pontificate about how vulnerable
spoofing certificates were (do trust certificates from other entities)
http://www.garlic.com/~lynn/subpubkey.html#sslcerts

Don't know how much control that installations use for AT/TLS
certificates.

One of the early "electronic commerce" vulnerabilities was increasing
number of commerce servers moving from flat files to RDBMS based
implementations. RDBMS maintenance was much more difficult and
time-consuming. For maintenance, servers would be taken offline, some
security relaxed, maintenance performed ... and then, because RDBMS
maintenance more often than not overran the window, there was a mad rush
to get back online and not all of the security was turned back on.
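
A minimal sketch of the kind of compensating control that addresses this
(my illustration, not the actual procedures; the setting names are made
up): snapshot the security-relevant settings before the maintenance
window and diff them afterwards, so the rush back online can't silently
leave something switched off.

import json

def snapshot(settings):
    # serialize the security-relevant settings before the maintenance window
    return json.dumps(settings, sort_keys=True)

def not_restored(before_snapshot, after_settings):
    # list everything that no longer matches the pre-maintenance baseline
    baseline = json.loads(before_snapshot)
    return [k for k, v in baseline.items() if after_settings.get(k) != v]

before = snapshot({"remote_admin": False, "fw_rules": "strict", "audit": True})
# ... maintenance overruns the window, security gets relaxed, rush back online ...
after = {"remote_admin": False, "fw_rules": "strict", "audit": False}
print("not restored:", not_restored(before, after))    # -> ['audit']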

Then apparently for having done "electronic commerce", we get pulled
into X9 financial standards meetings to help write some number of
financial standards.  I did a financial standard and a secure chip. This
was in the same era as the payment chip card started ... which had lots
of vulnerabilities and took on the order of 8 seconds per transaction
with direct-connect power. I did a chip w/o any of the
vulnerabilities. Then the transit industry asked me if the chip could
also do a transaction within transit turnstile time limits
(100 milliseconds) using only contactless (RF) power (w/o compromising
any integrity). There was a large pilot of the chip card in the US
around the turn of the century during its "Yes Card" period ... old
Cartes 2002 trip report (gone 404 but lives on at the wayback machine)
... at the end of the report, it is almost as easy to counterfeit the
chip as magstripe.
http://web.archive.org/web/20030417083810/http://www.smartcard.co.uk/resources/articles/cartes2002.html

At a 2003 ATM Integrity Task Force meeting, a Federal LEO gave a "Yes Card"
presentation prompting somebody in the audience to exclaim that they
managed to spend billions of dollars to prove that chips were as
vulnerable as magstripe. In the wake of the "Yes Card" problems, all
evidence of the large US pilot appeared to evaporate and it was
speculated that it would be a long time before it was tried in the US
again.

some more discussion in this recent (facebook) IBM Retirees post
https://www.facebook.com/groups/62822320855/permalink/10155349644130856/

trivia: the CEO of one of the cyber companies that participated in the booth
at the annual, world-wide retail banking BAI show had previously been head
of POK mainframe and then Boca PC:
http://www.garlic.com/~lynn/99.html#217
http://www.garlic.com/~lynn/99.html#224

Also did pilot code for both RADIUS and KERBEROS authentication ...
some past posts
http://www.garlic.com/~lynn/subpubkey.html#radius
and
http://www.garlic.com/~lynn/subpubkey.html#kerberos

bunch of security patents
http://www.garlic.com/~lynn/x959.html#aads

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Running unsupported is dangerous was Re: AW: Re: LE strikes again

2017-07-12 Thread Anne & Lynn Wheeler
000433f07816-dmarc-requ...@listserv.ua.edu (Paul Gilmartin) writes:
> 8x11mm Minox camera?  I suppose physical security can interdict that.
> https://en.wikipedia.org/wiki/Minox#Technical_details_of_Minox_8.C3.9711_cameras

re:
http://www.garlic.com/~lynn/2017g.html#74 Running unsupported is dangerous was 
Re: AW: Re: LE strikes again

also in the wake of the company's "pentagon papers" type event, they
retrofitted all company copier machines with a serial number identifier
on the underside of the glass that would show up on all pages
copied. Example from this document copied over a decade later:
http://www.garlic.com/~lynn/grayft84.pdf

trivia: not long after I graduated and joined the science center, the
company got a new CSO ... as common in that era, had previously been at
government agency & familiar with physical security (at one time head of
presidential detail). I got tagged to run around with him for a time
... to talk about computer security.

more trivia: I found my wife's father's WW2 status reports (from europe)
at the National Archives. They had been declassified but never "marked". The
NA "reading room" required that cameras be registered (including
serial number) and given a permit, and I had to have a declassification tag
that appeared in every image that I took.
http://www.garlic.com/~lynn/dectag.jpg

part of one of his reports
http://www.garlic.com/~lynn/2010i.html#82

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Running unsupported is dangerous was Re: AW: Re: LE strikes again

2017-07-12 Thread Anne & Lynn Wheeler
charl...@mcn.org (Charles Mills) writes:
> Frankly, in the beginnings of computing, including in DOS and OS/360,
> there was often an assumption that all users -- at least all "real"
> (TSO and development, as opposed to CICS or application) users -- were
> trusted. There was a lot of your gun, your bullet, your foot. The
> assumption was that the threat of dismissal was a sufficient limit on
> misbehavior.

well there is this ... going back around 50yrs
http://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

cambridge science center ...
http://www.garlic.com/~lynn/subtopic.html#545tech

was running its cp/67 service, allowing both other IBM locations and
non-employees (students, professors, etc) from universities (mit,
harvard, bu) in the cambridge area to use it.

science center had also ported apl\360 to cp67/cms for cms\apl ...
expanding workspace size (from typical 16kbytes) to virtual memory size
(required redoing apl storage management for virtual memory demand paged
environment) and adding APIs to system facilities (like file read/write)
... significantly enabling real-world applications.
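
A common account of the storage-management problem (an assumption on my
part about the specifics, not spelled out above) is that
allocate-new-storage-on-every-assignment plus
garbage-collect-when-the-workspace-fills behaves fine in a small real
workspace but sweeps through (and faults in) every page of a large
demand-paged one, while reusing storage in place keeps the working set
small. A toy illustration of the difference:

PAGE = 4096                 # bytes per page (illustrative)
WORKSPACE_PAGES = 4096      # a "virtual-memory sized" workspace

class AllocateThenGC:
    # old style: every assignment gets fresh storage; sweep/compact when the workspace fills
    def __init__(self):
        self.next_byte = 0
        self.touched_pages = set()
    def assign(self, nbytes):
        if self.next_byte + nbytes > WORKSPACE_PAGES * PAGE:
            self.touched_pages.update(range(WORKSPACE_PAGES))   # GC touches every page
            self.next_byte = 0
        first, last = self.next_byte // PAGE, (self.next_byte + nbytes - 1) // PAGE
        self.touched_pages.update(range(first, last + 1))
        self.next_byte += nbytes

class ReuseInPlace:
    # paging-friendly style: a variable's storage is reused on reassignment
    def __init__(self):
        self.touched_pages = set()
    def assign(self, slot_page, nbytes):
        pages = (nbytes + PAGE - 1) // PAGE
        self.touched_pages.update(range(slot_page, slot_page + pages))

old, new = AllocateThenGC(), ReuseInPlace()
for i in range(200_000):            # repeatedly reassign a handful of small variables
    old.assign(512)
    new.assign(slot_page=i % 8, nbytes=512)
print("pages touched, allocate-then-GC:", len(old.touched_pages))   # ~entire workspace
print("pages touched, reuse-in-place: ", len(new.touched_pages))    # a handful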

Among the remote internal users were business planners at Armonk hdqtrs
who loaded the most valuable corporate assets on the cambridge system
for doing business modeling in cms\apl (and it was expected that all
such information was protected from non-authorized users ... including
the students around the boston/cambridge area using the system).

note before 370 virtual memory was announced ... a document somehow
leaked to an industry publication ... which resulted in something like a
"pentagon papers" event for the corporation. For the Future System
project, they attempted a countermeasure with a significantly enhanced
vm370 system where all FS documents were softcopy and could only be read
from specially connected 3270 terminals (no file copy, printing, etc,
before ibm/pc and things like screen scraping). some FS refs
http://www.garlic.com/~lynn/submain.html#futuresys

For the initial morph of CP67 to VM370, they simplified and/or dropped
a bunch of features. During the FS period I continued to work on
360/370 stuff (even when 370 efforts were being shutdown) and would
even periodically ridicule the FS efforts. Some old email about
eventually getting around to migrating from CP67 to VM370
http://www.garlic.com/~lynn/2006v.html#email731212
http://www.garlic.com/~lynn/2006w.html#email750102
http://www.garlic.com/~lynn/2006w.html#email750430

I had some weekend test time at datacenter with one of these FS "secure"
vm370 systems. I was in Friday afternoon to make sure everything was
setup for my use. They couldn't resist claiming that their system was so
secure that even if I was left alone in the machine room all weekend, I
wouldn't be able to do anything. So one of the few times I took the
bait. I asked them to disable all access from outside the machine room,
and then from the front panel I changed one byte in storage ... which
disabled all security measures. I suggested if they were serious, they
had to secure/protect all machine facilities (including front panel).

trivia: during the FS period, 370 efforts were being shut down (the lack of
370 offerings during the FS period is credited with giving clone
processor makers a market foothold). Then when FS finally implodes, there
is a mad rush to get products back into the 370 pipeline ... including
kicking off quick efforts for 3033 and 3081. some refs:
http://www.jfsowa.com/computer/memo125.htm

this also contributed to the decision to pick up various bits
(from the CSC/VM mentioned in the above email) for release to customers.


-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Windows 10 Pro automatic update

2017-06-27 Thread Anne & Lynn Wheeler
000433f07816-dmarc-requ...@listserv.ua.edu (Paul Gilmartin) writes:
> I see the history differently.  This is conjectural, but I believe
> that UNIX had at least the user/group/others file protection facility
> at a time when OS/360 had only the primitive data set passwords.  I
> recall, perhaps at MVS 3.8, systems programmers still relying on
> passwords to control access to the master catalog or the resident
> volume.  (Where I was, the res pack password was the system ID spelled
> backwards.)  MVS bypassed the concept of resource ownership and went
> directly to the ACL-like RACF.

I was working on IBM's HA/CMP cluster scaleup both technical/scientific
(with national labs) and commercial (with RDBMS vendors) ... reference
to JAN1992 meeting in Ellison's conference room
http://www.garlic.com/~lynn/95.html#13

within a couple weeks, cluster scaleup is transferred, announced as IBM
supercomputer (for technical & scientific only) and we were told we
couldn't work on anything with more than four processors. some old email
http://www.garlic.com/~lynn/lhwemail.html#medusa

later, two of the oracle people in the ellison meeting have left and are
at a small client/server startup responsible for something called the
"commerce server". I'm brought in as consultant because they want to do
payment transactions on the server. The startup had also invented this
technology called "SSL" they wanted to use, the result is now frequently
called "electronic commerce".

I have complete authority over the webservers to payment networks
gateway (but could only make recommendations on the client/server side,
some of which were almost immediately violated, which continue to
account for some number of exploits to this day). I have to do a whole
lot of process documentation and compensating procedures for
availability, dark room operation, and diagnostic processes (payment
network call centers were used to doing 5min 1st-level problem
determination; the 1st pilot electronic commerce service call was closed
after 3hrs of effort with "no trouble found").

Part of the issue is that lots of UNIX is oriented towards interacting
with a human ... with the frequent implication that any problem is
resolved by the responsible human. I contrasted this (for darkroom
operation) with the mainframe's long history of software written on the
assumption that the responsible person isn't present, and therefore lots
of processes grew up over decades to handle issues automagically.
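
A minimal sketch of what "handle issues automagically" means in practice
for dark-room operation (illustrative only, not the actual compensating
procedures; the health-check and restart commands are placeholders):
probe the service, restart it automatically, and only involve a human
after repeated automatic recovery fails.

import subprocess
import time

CHECK_CMD = ["true"]                   # hypothetical health-check command
RESTART_CMD = ["echo", "restarting"]   # hypothetical restart command
MAX_RETRIES = 3

def healthy():
    return subprocess.run(CHECK_CMD).returncode == 0

def supervise(poll_seconds=60):
    failures = 0
    while True:
        if healthy():
            failures = 0
        else:
            failures += 1
            if failures <= MAX_RETRIES:
                subprocess.run(RESTART_CMD)          # try automatic recovery first
            else:
                print("escalate: page the on-call")  # a human only as the last resort
                failures = 0
        time.sleep(poll_seconds)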

Disclaimer: while out marketing for IBM's HA/CMP, I coined the term
"disaster survivability" and "geographic survivability" (to
differentiate from disaster/recovery). I was then asked to write a
section for the corporate continuous availability strategy
document. However, the section got removed when both Rochester (as/400)
and POK (mainframe) complained they couldn't meet the requirements.

past availability posts
http://www.garlic.com/~lynn/submain.html#available

Later at the 1996 Moscone MDC, all the banners said "Internet" but the
constant refrain in all the sessions was "preserve your investment".
The issue was that they had single-user dedicated systems with a history
of business applications with executable scripts embedded in application
data that were automagically executed ... in a purely stand-alone
environment or on small, safe, isolated business LANs. This was
being extended to the wide anarchy of the internet with no additional
security measures.
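
A toy illustration of the risk (nothing from the MDC sessions; the
"document" format below is made up): a business-document format that
auto-executes an embedded script is harmless when every document comes
from the same office, and becomes an exploit the moment documents start
arriving over the internet; the missing "additional security measure"
can be as simple as not auto-executing.

doc_from_office = {"cells": [1, 2, 3],
                   "macro": "print('total =', sum(doc['cells']))"}
doc_from_internet = {"cells": [],
                     "macro": "print('pretend this is a hostile payload')"}

def open_document(doc, auto_exec=True):
    if auto_exec:
        exec(doc["macro"], {"doc": doc})   # the "preserve your investment" behavior
    else:
        print("macro present but NOT executed:", doc["macro"])

open_document(doc_from_office)                      # fine on an isolated business LAN
open_document(doc_from_internet)                    # same behavior is an exploit on the internet
open_document(doc_from_internet, auto_exec=False)   # the missing "additional security measure"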

trivia: I had worked with Jim Gray at IBM san jose research on various
things including the original SQL/RDBMS, System/R. some past posts
http://www.garlic.com/~lynn/submain.html#systemr

When he left IBM, he palms off some number of things on me, including
consulting for the IMS group. During the 1996 Moscone MDC, he is head of the
new SanFran research center and has an open house. Then last decade, before
he disappears, he cons me into interviewing for chief security architect
in redmond. The interview drags on for a couple weeks, but we could
never agree on what needed to be done.

MVS trivia: in the 60s, there was lots of work on CP67 for 7x24 dark
room operation. This was in the period when IBM rented machines and
charges were based on a system meter that ran whenever the processor
and/or any channel was active (everything had to be idle for at least
400ms before the meter stopped). Initial deployments had little offshift
& weekend use, but to encourage use, the systems had to be always
available, even when totally idle. As part of minimizing costs, there
was lots of work on channel programs that would allow the channel to go
idle (and the system meter to stop), but be immediately available for
arriving characters. Long after IBM was
selling machines, MVS still had a 400ms timer event that guaranteed the
system meter would never stop.

also CP67 from that period ... gone 404, but lives on at wayback
machine.
http://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

authentication trivia: Former head of POK and 

Re: Eliminating the systems programmer was Re: IBM cuts contractor bil ling by 15 percent (our else)

2017-06-23 Thread Anne & Lynn Wheeler
john.archie.mck...@gmail.com (John McKown) writes:
> ​I remember from my first jobs, about 1979, DP (the name back then) was
> looking at some mini-computer for the police department (City of
> Ft. Worth, TX). The sales person showed us the equipment. And said
> that all software maintenance was done by the hardware C.E. type
> person. He would put a tape in the integrated drive and "press a
> button". That was it. Everything else was just application level
> programming. The closest that I know of today is the IBMi (nee AS/400)
> which supposedly only needs a "administrator" who supposedly doesn't
> need to know much more than how to read a manual. Of course, the OS
> being more or less "hard wired" into the hardware means that there are
> basically NO internals documented.​

re:
http://www.garlic.com/~lynn/2017g.html#23 Eliminating the systems programmer 
was Re: IBM cuts contractor billing by 15 percent (our else)
http://www.garlic.com/~lynn/2017g.html#28 Eliminating the systems programmer 
was Re: IBM cuts contractor bil ling by 15 percent (our else)

"no internals documented" ... including hardware operation &
instructions

AS/400 was targeted at being the migration path for s/36 and s/38 ... and
the lower-level "FS" features (from s/38) were eliminated ... but because of
the very high-level ease of operation ... it was relatively
straightforward to migrate both s/36 and s/38 to as/400.

starting in the late 70s, there was an IBM program to migrate the
multitude of internal microprocessors to RISC (801 iliad chips): low &
mid range 370s, controllers, as/400, etc. For various reasons these
programs aborted (with risc engineers leaving for risc programs at other
vendors) ... and things reverted to doing traditional CISC chips ...
including a crash CISC chip design program for as/400. However, the
as/400 interface is at such a high level that a decade later, as/400
finally did migrate to 801 risc (power/pc).

past posts mentioning 801, risc, iliad, romp, rios, power, power/pc
http://www.garlic.com/~lynn/subtopic.html#801

note: about the same time, apple macs went from motorola 68k to power/pc
... and have since moved to intel (the latest change is claimed to be
because IBM wasn't doing power-efficient power/pc chips for the laptop market).

other trivia: my brother was regional apple market rep (largest physical
region conus). I would get invited to business dinners and sometimes got
to argue mac design with the mac developers (before mac was announced).
He worked out how to get online access to the hdqtrs system to track
manufacturing and delivery schedules ... which was an IBM S/38.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Eliminating the systems programmer was Re: IBM cuts contractor bil ling by 15 percent (our else)

2017-06-23 Thread Anne & Lynn Wheeler
esst...@juno.com (esst...@juno.com) writes:
> "supplying the entire OS on a chip"
>
> I heard a similar statement delivered by the Late Great Bob Yelevich
> in the early 1990s.  He suggested that CICS would be delivered on a
> Board, or possibly a component/domain would be delivered on a board.
> .
> .
> As a contractor I have experienced the neglect in Installations, when
> Qualified Systems Programmers are not employed. I was in one
> installation where I inherited well over one hundred outstanding
> issues, Abends, Storage Violations, back level maintenance.

re:
http://www.garlic.com/~lynn/2017g.html#23 Eliminating the systems programmer 
was Re: IBM cuts contractor billing by 15 percent (our else)

early 1975, I got sucked into helping get system enhancements out ... as
FS was failing
http://www.garlic.com/~lynn/submain.html#futuresys

one was the ECPS microcode assist for the new 138/148 low & mid-range
machines implemented with vertical microcode (somewhat like the Hercules
mainframe emulator) ... with an avg ratio of 10:1 native instructions per
370 instruction. The task was to select the 6kbytes of most frequently
executed operating system code for moving into native code ... for a
10:1 speedup (which turned out to be 79.55% of supervisor execution). old post
http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
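
The selection amounts (roughly) to sorting kernel code paths by measured
execution time and taking the hottest ones until the 6kbyte microcode
budget is spent; a sketch of that greedy pass (the path names, sizes,
and percentages below are made-up placeholders, not the 1975
measurements):

BUDGET_BYTES = 6 * 1024

# (kernel path, native code size in bytes, % of supervisor execution) -- made-up numbers
paths = [
    ("dispatch",          1200, 22.0),
    ("free/fret storage",  900, 18.5),
    ("page fault",        1400, 15.0),
    ("ccw translation",   1800, 12.0),
    ("privop simulation", 1100,  9.0),
    ("dasd i/o",          1600,  6.5),
    ("spool",             2000,  3.0),
]

selected, used, covered = [], 0, 0.0
for name, size, pct in sorted(paths, key=lambda p: p[2], reverse=True):
    if used + size <= BUDGET_BYTES:      # keep taking the hottest paths that still fit
        selected.append(name)
        used += size
        covered += pct

print("selected:", selected)
print(f"microcode used: {used} of {BUDGET_BYTES} bytes, covering ~{covered:.1f}% of supervisor time")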

I also got sucked into designing a 5-way SMP for the 370/125. The 115/125
had a nine-position memory bus for microprocessors. The 115 had all the
microprocessors the same, just with different microcode loads for the 370 processor, controllers,
etc. 125 was identical except the 370 processor was 50% faster than the
other processors.

I dropped multiprocessor dispatching/scheduling for problem state and
supervisor state into microcode ... with a queued interface that put tasks
on the queue and pulled stuff off the queue. Lots of multiprocessor
operation was transparent to the actual software (all hidden in
microcode). I also did a queued microcode interface for all I/O ...
putting stuff on the queue and pulling stuff off the queue.
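
A toy sketch of that "queued interface" shape (worker threads standing
in for the microcoded engines; purely illustrative, not the 370/125
design itself): the "software" side only puts work on a queue and pulls
completions off, and never sees which engine ran what.

import queue
import threading

work_queue = queue.Queue()    # "software" puts tasks here
done_queue = queue.Queue()    # and pulls completions from here

def engine(engine_id):
    while True:
        task_id, payload = work_queue.get()          # an idle engine grabs the next task
        done_queue.put((task_id, f"{payload} done on engine {engine_id}"))
        work_queue.task_done()

for n in range(5):                                   # five "engines" standing in for the processors
    threading.Thread(target=engine, args=(n,), daemon=True).start()

for task_id in range(10):                            # software side: just enqueue work ...
    work_queue.put((task_id, f"task-{task_id}"))
work_queue.join()

while not done_queue.empty():                        # ... and dequeue results, never caring
    print(done_queue.get())                          # which engine ran what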

The 370/125 multiprocessor was never announced or shipped (in part
because the 138/148 people complained it was overlapping their market; I
was in some escalation meetings where I had to argue both sides).
http://www.garlic.com/~lynn/submain.html#bounce

Early 80s, I was at bi-annual ACM SIGOPS meetings where the intel i432
people gave a talk on what they were doing ... which included a lot of
higher level function ... like I had done for the 125 (a lot of the
multiprocessor and I/O operation was behind a queued interface and
transparent to the "software"). They found out that their major problem
was that all these advanced functions were manufactured into the chip
silicon ... and any fixes
required spinning new silicon and replacing all the chips.

as an aside ... other stuff going into the i432 was similar to some of
the stuff that went into the IBM S/38 ... which has been characterized
as: after the FS failure, some of the people retreated to Rochester and
did a much simplified FS flavor as the S/38 (but again in microcode, not
the raw silicon). I've periodically pointed out that in the S/38 market
the trade-off between simplified operation and lack of scalability
... came down on the side of simplified operation (in the high-end
market, one of the things that put the nails in the FS coffin was
showing that 370/195 applications redone for FS, running on the fastest
possible FS hardware, would have the throughput of a 370/145, about a
30-times slowdown).

past posts mentioning I432
http://www.garlic.com/~lynn/2000e.html#6 Ridiculous
http://www.garlic.com/~lynn/2000f.html#48 Famous Machines and Software that 
didn't
http://www.garlic.com/~lynn/2001g.html#36 What was object oriented in iAPX432?
http://www.garlic.com/~lynn/2002d.html#27 iAPX432 today?
http://www.garlic.com/~lynn/2002l.html#19 Computer Architectures
http://www.garlic.com/~lynn/2002o.html#5 Anyone here ever use the iAPX432 ?
http://www.garlic.com/~lynn/2003e.html#54 Reviving Multics
http://www.garlic.com/~lynn/2003m.html#23 Intel iAPX 432
http://www.garlic.com/~lynn/2003m.html#24 Intel iAPX 432
http://www.garlic.com/~lynn/2003m.html#47 Intel 860 and 960, was iAPX 432
http://www.garlic.com/~lynn/2004e.html#52 Infiniband - practicalities for small 
clusters
http://www.garlic.com/~lynn/2004q.html#60 Will multicore CPUs have identical 
cores?
http://www.garlic.com/~lynn/2004q.html#64 Will multicore CPUs have identical 
cores?
http://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
http://www.garlic.com/~lynn/2005d.html#64 Misuse of word "microcode"
http://www.garlic.com/~lynn/2005k.html#46 Performance and Capacity Planning
http://www.garlic.com/~lynn/2005q.html#31 Intel strikes back with a parallel 
x86 design
http://www.garlic.com/~lynn/2006c.html#47 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006n.html#42 Why is zSeries so CPU poor?
http://www.garlic.com/~lynn/2006n.html#44 Any resources on VLIW?
http://www.garlic.com/~lynn/2006t.html#7 32 or even 64 registers for x86-64?

Re: Eliminating the systems programmer was Re: IBM cuts contractor billing by 15 percent (our else)

2017-06-22 Thread Anne & Lynn Wheeler
cfmpub...@ns.sympatico.ca (Clark Morris) writes:
> If the goal was to eliminate the need for highly technical people who
> understand the platform and the tradeoffs, that is a futile goal for
> any operating system.  If the goal is to eliminate the need for
> assembler coded exits, this is more doable but customization will
> always be with us.  While there can be plenty of obscurity in
> assembler, how well documented are the SYS1.PARMLIB members and JES
> initialization decks that control how the systems operate?  These are
> just weird programming interfaces that can be every bit as cryptic.
>
> As someone who did his last systems programming in the 1990s, I would
> hope that systems maintenance and upgrade has become a lot easier (and
> if IBM made the Knowledge Center and Shopz 24/365.24 available) and
> that less custom code is required because of all the new concerns that
> I didn't have to deal with.  The environment has become more complex
> for all of the operating systems so anything that can be eliminated is
> to the good.  There is enough to do so that automation of some of the
> grunt work is a good thing.

23Jun1969 unbundling announcement started to charge for (application)
software, SE services, etc ... however IBM managed to make the
case that kernel software should still be free
http://www.garlic.com/~lynn/submain.html#unbundle

in the 1st part of the 70s, they launched the (failed) Future System effort,
completely different from 360/370 and intended to completely replace 360/370 ...
supposedly a major motivation was to significantly increase the
complexity of the processor/controller interface as a countermeasure
to clone controllers.
http://www.garlic.com/~lynn/submain.html#futuresys

however, the lack of IBM 370 offerings during the FS period is credited
with giving clone processors a market foothold. the rise of clone
processors then initiates the transition to charging for kernel software
... and my resource manager is selected as guinea pig ... I get to spend
a lot of time with lawyers and business people on charging for kernel
software
http://www.garlic.com/~lynn/subtopic.html#fairshare

eventually the transition to charging for all kernel software happens
in the early 80s, starting the OCO-wars ... the transition
to "object code only" ... some of this shows up in the VMSHARE
archives
http://vm.marist.edu/~vmshare/

part of the motivation was that source code availability contributed to
customers making source code modifications ... which contributes to
customers needing their own system programmers and also slows down
keeping up with the latest system releases (cutting into the budget that
could be spent with IBM).

this period in the first part of the 80s also saw many customers buying
4300s (in some cases ordering hundreds at a time) for placing out in
departmental areas (sort of the leading wave of the distributed computing
tsunami). Initially MVS was locked out of this market. The mid-range
disks were all FBA that could be deployed out in non-datacenter
environments. Eventually 3375 CKD emulation on 3370 FBA came out ... but
that didn't significantly help. Turns out these large departmental
deployments were looking at large tens of systems per staff member
... while MVS systems were frequently measured in tens of staff members
per MVS system (if MVS was going to play in that market, it had to
significantly lower skill requirements)
http://www.garlic.com/~lynn/submain.html#dasd

trivia: some old 4300 email from the period
http://www.garlic.com/~lynn/lhwemail.html#43xx

other trivia: TYMSHARE started offering its CMS-based online computer
conferencing free to SHARE as VMSHARE in AUG1976.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Nice article about MF and Government

2017-06-10 Thread Anne & Lynn Wheeler
g...@gabegold.com (Gabe Goldberg) writes:
> But I've profiled a couple gov agencies technology and I read
> http://www.govtech.com/ -- which highlights mostly good news (many
> interesting/innovative projects highlighted), though they also sure
> cover disasters and project failures. And half the time they're
> badmouthing legacy systems. I'm just noting that there's a spectrum of
> competence and quality in gov, same as elsewhere.

AMEX was in competition with KKR for private equity take-over of RJR and
KKR wins. KKR runs into trouble and hires away the president of AMEX to
turn it around. IBM has gone into the red and was being reorganized into
the 13 "baby bells" in preparation for breaking up the company. The
board then hires away the former president of AMEX to reverse the
breakup and resurrect the company ... using some of the same techniques
used at RJR
http://www.ibmemployee.com/RetirementHeist.shtml

The former president of AMEX then leaves IBM to head up another large
private equity company that will acquire a large beltway bandit that
employs Snowden. There was an enormous uptick in gov. outsourcing last
decade, especially to private equity owned companies ... in
intelligence, 70% of the budget and over half the people
http://www.investingdaily.com/17693/spies-like-us/

private equity owned companies are under intense pressure to cut corners
and do whatever is necessary to generate profits for their owners. In
the case of outsourced security clearances, they were found to be
filling out the paperwork and not actually doing the background checks.
Companies in the private equity mill are sometimes compared to "house
flipping", except rather than paying off the mortgage as part of the
flip, the loan to buy the company stays on the company's books after the
sale. A combination of factors contributes to over half of corporate
defaults being companies that are in (or were previously in) the private
equity mill.
http://www.nytimes.com/2009/10/05/business/economy/05simmons.html?_r=0

this has also contributed to the rapidly spreading "success of failure"
culture ... beltway bandits (especially private equity owned
subsidiaries) get more revenue from a series of failures
http://www.govexec.com/excellence/management-matters/2007/04/the-success-of-failure/24107/

This all sounds cynical, because it is. Whether or not it's deliberate
is another matter. But you don't have to believe that people consciously
fail to recognize the windfall it brings. Even if they don't know why,
there's a reason people keep making the same mistakes: Failure is one of
the most successful things going.

... snip ...

which also includes a long list of failed legacy system modernization
efforts. Badmouthing legacy systems might just be obfuscation and
misdirection regarding the real source of the problems.

disclaimer: early in the century we got a call asking us to respond to
an unclassified BAA (by IC-ARDA, since renamed IARPA) that was about to
close and that nobody else had responded to (it basically said that the
tools they had didn't do the job). We get a response in and then have
some meetings showing we could do what was needed, and then it goes
silent and we hear nothing more. It wasn't until the above article that
we realized what was going on (although we wondered why the agency had allowed
the BAA to be released in the first place, possibly some internal
politics were still being played out).

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

