Re: What was a 3314?

2016-05-18 Thread Anne & Lynn Wheeler
edgould1...@comcast.net (Edward Gould) writes:
> Its addressing had MMBBCCHHR(R?) so I guess you could address it
> directly. Anyone remember how to do that? (programming for a 2321 is
> a lost art (where is Seymour?).)

the "BB" was to select the BIN that the magnetic strips were located
in. 
https://www-03.ibm.com/ibm/history/exhibits/storage/storage_2321.html
and
https://en.wikipedia.org/wiki/IBM_2321_Data_Cell

Generically, at the OS level, IBM defined the six bytes as BBCCHH, for
Bin, Bin, Cylinder, Cylinder, Head and Head respectively.

... snip ... 

referenced from:
http://www.bitsavers.org/pdf/ibm/360/os/R21.7_Apr73/GC28-6628-9_OS_System_Ctl_Blks_R21.7_Apr73.pdf
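The 8-byte MBBCCHHR form mentioned in the question can be illustrated with a small decoder (a hypothetical helper, not IBM code; field widths follow the usual convention of one byte M, two bytes each for BB/CC/HH, one byte R):

```python
import struct

def decode_mbbcchhr(addr: bytes) -> dict:
    """Decode an 8-byte MBBCCHHR direct-access address:
    M  (1 byte)  - extent number within the data set
    BB (2 bytes) - bin (only meaningful on a 2321 data cell)
    CC (2 bytes) - cylinder
    HH (2 bytes) - head (track)
    R  (1 byte)  - record number on the track
    """
    m, bb, cc, hh, r = struct.unpack(">BHHHB", addr)
    return {"M": m, "BB": bb, "CC": cc, "HH": hh, "R": r}

# e.g. extent 0, bin 3, cylinder 9, head 4, record 1
print(decode_mbbcchhr(bytes([0, 0, 3, 0, 9, 0, 4, 1])))
```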


the bins (cells) rotated under the r/w heads.
https://www-03.ibm.com/ibm/history/exhibits/storage/storage_PH2321B.html
http://www.computer-history.info/Page4.dir/pages/Photostore.dir/images/Picture.1.jpg

as undergraduate, the univ. hired me as fulltime support for the
production ibm systems. the univ. library got an ONR grant to do an
online catalog ... and used part of the money to get a 2321 datacell.
The project was also selected to be one of the original CICS product
betatest sites and I got tasked with debugging/supporting CICS (one of
the "bugs" was that the original CICS implementation at the customer
site used a specific set of BDAM options which was hardcoded in the
source ... and the library had chosen a different set of BDAM options
... it took some dump analysis to discover the issue since it wasn't
documented).

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: TCP/IP question on routing

2016-04-15 Thread Anne & Lynn Wheeler
rpomm...@sfgmembers.com (Pommier, Rex) writes:
> Sorry if these are silly questions, but my brain is really foggy this
> morning.  My questions are for validation of what I think would happen
> with various iterations of IPCONFIG DATAGRAMFWD.
>
> Scenario 1, I have a single IP address on my z/OS system running to a
> single network segment out an OSA port.  In this scenario, DATAGRAMFWD
> would have no effect, correct?
>
> Scenario 2, I have 2 IP addresses, connected via 2 different OSA
> adapters to 2 different networks.  If I have DATAGRAMFWD, when TCPIP
> sees a packet coming in on OSA 1 but it has a destination address of a
> device on network 2, TCP/IP will forward the packet out the other OSA,
> effectively acting like a router or gateway.  If I set it to
> NODATAGRAMFWD, if TCP/IP sees the same packet on OSA1, it will ignore
> the packet rather than forwarding it.  Is this how it actually works?
>
> What I'm looking at is we have a new machine being installed that I
> need to be able to access from 2 different networks at different
> times.  These two networks need to be isolated from each other and I
> don't want the mainframe to start acting as a router, passing packets
> from one to the other.  Is there some other configuration setting I
> need to be aware of, or would the NODATAGRAMFWD be sufficient to
> keeping them isolated?
>
> Yes, the safest solution would be to unplug one of the network cables,
> and just have 1 plugged in at a time, but the machine is about 300
> miles from me.

modulo bugs in the code.

the weekend before 1988 Interop, the floor nets were crashing well into
early Monday morning ... before the problem was identified. As a result,
new requirements regarding automagically forwarding packets were
mandated in RFC1122 (RFC1122 & RFC1123 combined are official internet
STD3) ...

Any host that forwards datagrams generated by another host is
acting as a gateway and MUST also meet the specifications laid out
in the gateway requirements RFC [INTRO:2].  An Internet host that
includes embedded gateway code MUST have a configuration switch to
disable the gateway function, and this switch MUST default to the
non-gateway mode.  In this mode, a datagram arriving through one
interface will not be forwarded to another host or gateway (unless
it is source-routed), regardless of whether the host is single-
homed or multihomed.  The host software MUST NOT automatically
move into gateway mode if the host has more than one interface, as
the operator of the machine may neither want to provide that
service nor be competent to do so.

... snip ...
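The host behaviour that text mandates can be sketched as a decision function (a simplified illustration of the RFC1122 rule, not z/OS code; names and addresses are hypothetical):

```python
def should_forward(dst, local_addrs, gateway_mode, source_routed=False):
    """RFC1122-style host behaviour: deliver locally if the packet is
    ours; otherwise forward only when the (default-off) gateway switch
    is enabled, or when the packet is source-routed."""
    if dst in local_addrs:
        return False          # local delivery, not forwarding
    return gateway_mode or source_routed

local = {"10.1.1.5", "192.168.2.5"}   # hypothetical addresses on OSA1/OSA2
print(should_forward("10.1.1.99", local, gateway_mode=False))  # False: dropped
print(should_forward("10.1.1.99", local, gateway_mode=True))   # True: acts as gateway
```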

trivia: i had hardware in booth at Interop 88 ... but not in the
IBM booth ... which was on the other side of the floor.


-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: opinion? Fujitsu USA

2016-04-08 Thread Anne & Lynn Wheeler
elardus.engelbre...@sita.co.za (Elardus Engelbrecht) writes:
> I vaguely remember that I worked [indirectly] with them when I started
> worked around 1989.
>
> ICL [from Britain?] and Amdahl [from that wizard Gene Amdahl] were
> guzzled up by Fujitsu.


Fujitsu was a major manufacturer and investor in Amdahl (from the start).
Not long after Amdahl Co. was formed (early 70s), Gene had presentation
at MIT in large, full auditorium ... some of us went over from the IBM
Cambridge Science Center. Several in the audience pressed Gene pretty
hard about being front for foreign interests (regarding the ties with
Fujitsu)

Later my boss at IBM was head of the workstation IBU (PC/RT, RS/6000,
etc); he had some head-to-head with some senior executives and left,
eventually forming HAL ... early 64bit RISC (also backed by
Fujitsu). There was a joke that there was so much traffic back and forth
between silicon valley and japan that it justified the non-stop San
Jose/Narita flight (and some companies had permanently reserved 1st
class seats).

Topic Drift Warning.

One of the issues was corporate mandate that RS/6000 had to use PS2
microchannel cards (and not do their own). As mentioned in recent "Tech
News 1964" posting, the communication group was fiercely fighting off
client/server and distributed computing ... and PS2 microchannel cards
had minimal throughput and performance. we had snide comments that if
RS/6000 was restricted to only using PS2 cards, it wouldn't have any
better throughput/performance than a PS2.

For instance, the PS2 (32bit) microchannel 16mbit T/R card had lower
throughput than the PC/RT (16bit) atbus 4mbit T/R card (a PC/RT server
with 4mbit T/R card would have higher throughput than RS/6000 with
16mbit T/R card) ... i.e. the workstation IBU had done their own 4mbit
T/R card but were prevented from doing their own 16mbit T/R card.
http://manana.garlic.com/~lynn/subtopic.html#801

The communication group had design point for 16mbit T/R card of 300+
stations sharing common LAN doing terminal emulation. The major
justification for having done T/R was many large mainframe customers
were running into bldg. weight loading restrictions with the massive
amount of 3270 coax cables. T/R LAN enormously reduced the wiring
needed, the aggregate LAN bandwidth increased the number of stations per
LAN ... but terminal emulation didn't require significant per card
throughput.

In the late 80s, the communication group 16mbit T/R card was something
like $899/card ... but there were $69 10mbit Ethernet cards running over
CAT5 wiring, that had higher per card throughput than the 16mbit T/R
card. The new Almaden Research bldg had extensive CAT5 wiring assuming
16mbit T/R, and found that not only the $69 ethernet cards had higher
per card throughput ... but running 10mbit ethernet had higher aggregate
LAN throughput and lower latency than 16mbit T/R.

The communication group publications comparing 16mbit T/R with ethernet
apparently used the original 3mbit ethernet that did not have the
listen-before-transmit protocol. An ACM SIGOPS publication had a paper
with a detailed study of typical Ethernet configurations and found
effective aggregate throughput was around 9mbits/sec ... and even
running low-level device driver code that constantly transmitted minimum
sized ethernet packets on all stations, the effective aggregate LAN
throughput only dropped off to 8mbit/sec.

The lower aggregate 16mbit T/R LAN throughput and higher latency were
attributed to the token passing processing latency.
http://manana.garlic.com/~lynn/subnetwork.html#terminal

disclaimer: my wife is one of the inventors on one of the original IBM
token-passing patents.

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: Fwd: Tech News 1964

2016-04-07 Thread Anne & Lynn Wheeler
peter.far...@broadridge.com (Farley, Peter x23353) writes:
> IMHO part of what is vanishing mainframe clients is IBM's failure
> several decades back to continue to support universities with
> discounted hardware and software.  Lack of mainframe availability at
> university level has translated into current managements with no
> exposure and no desire to learn the advantages (TCO, security, etc.)
> of mainframes.  Not the whole reason, but a significant contributor.

The gov. legal action resulted in a number of IBM responses ... the
23jun1969 unbundling announcement that included starting to charge for
(application) software; it also saw IBM pull back from the enormous
grants and discounts it gave academic institutions ... some past posts
http://manana.garlic.com/~lynn/submain.html#unbundle

IBM did come back in the early 80s with the academic business unit
(ACIS) ... it was putting several hundred million into universities (but
lots of it would go into non-mainframe technologies). IBM also sponsored
the university BITNET (where this ibm-main mailing list
originated). some past posts
http://manana.garlic.com/~lynn/subnetwork.html#bitnet

it used technology similar to IBM internal network ... some
past posts (larger than arpanet/internet from just about
the beginning until sometime mid-80s)
http://manana.garlic.com/~lynn/subnetwork.html#internalnet

that originated at the IBM cambridge science center ... some
past posts
http://manana.garlic.com/~lynn/subtopic.html#545tech

this was non-SNA (and not communication group technology) ...  at about
the same time in the late 80s when the communication group was forcing
the internal network into moving to SNA ... BITNET moved to tcp/ip
(which would have been much better for internal network also, rather
than SNA).

I've told the story several times about the senior disk engineer at the
late 80s annual communication group world-wide internal conference. His
talk was supposedly on 3174 performance but he opened with the statement
the communication group was going to be responsible for the demise of
the disk division. The issue was that the communication group had
strategic "ownership" of everything that crossed the datacenter walls and
was fiercely fighting off client/server and distributed computing trying
to preserve their dumb terminal paradigm install base. The disk division
was seeing applications fleeing the datacenter to more distributed
computing friendly platforms with drop in disk sales. The disk division
had come up with a number of solutions to try and correct the situation,
but were constantly being vetoed by the communication group. some
past posts
http://manana.garlic.com/~lynn/subnetwork.html#terminal

Somewhat as work-around to the communication group opposition, the disk
division VP of software was investing in open system and distributed
computing technology ... the POSIX support in MVS ... as well as
startups that built mainframe based distributed computing hardware and
software solutions.

trivia: the original mainframe tcp/ip product was implemented in
vs/pascal (which had none of the buffer length related exploits that
have been epidemic in C-language implementations) ... however for
various reasons it only got about 44kbytes/sec using full 3090
processor. I did the enhancements to support RFC1044 and in some tuning
tests at Cray Research between a Cray and a 4341, got sustained channel
speed throughput using only a modest amount of 4341 processor
... possibly 500 times improvement in bytes moved per instruction
executed. some past posts
http://manana.garlic.com/~lynn/subnetwork.html#1044
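Rough arithmetic on the RFC1044 improvement (the ~3mbyte/sec sustained channel speed is my assumption for illustration; the ~500x figure also folds in the far lower processor usage):

```python
base_rate = 44_000        # bytes/sec, base stack using a full 3090 processor
channel_rate = 3_000_000  # assumed ~3mbyte/sec sustained 4341 channel speed

# raw throughput ratio alone, before accounting for processor usage
print(round(channel_rate / base_rate))   # 68

# the ~500x bytes-moved-per-instruction figure additionally reflects using
# only a modest fraction of a (smaller) 4341 processor instead of a full 3090
```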

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: Microprocessor Optimization Primer

2016-04-04 Thread Anne & Lynn Wheeler
note that test-and-set was on both 360/67 and 360/65 machines and was
atomic.

I've commented before about Charlie inventing compare-and-swap (chosen
because CAS are his initials) while doing fine-grain multiprocessor
locking work on CP67 (360/67 precursor to vm370) at the science
center.
http://manana.garlic.com/~lynn/subtopic.html#545tech
and
http://manana.garlic.com/~lynn/subtopic.html#smp

then we attempted to get it added to 370 architecture. initially was
rebuffed because the POK favorite son operating system people said that
test-and-set was more than adequate for multiprocessor support (serializing
critical code sections). The 370 architecture owners said that to get it
justified would require additional uses, not just multiprocessor
serialization. Thus was invented the multiprogramming/multithreading
examples (used whether or not running on multiprocessor machine) that
still are shown in the principles of operation.

The problem in a multithreaded application is that it is enabled for
interrupts and can lose control in a locked/critical section.
Compare-and-swap is used for doing an atomic operation directly, without
needing to lock a critical section.

This was especially leveraged by large multiprogramming/multithreading
DBMS avoiding needing to make kernel calls for lots of serialization
... and by the 80s lots of other platforms (especially those supporting
high-throughput DBMS) were including compare-and-swap (or instructions with
similar semantics).
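The compare-and-swap retry loop behind those multiprogramming/multithreading examples can be sketched in Python (illustrative only: the lock here merely simulates the hardware atomicity of a single CS instruction):

```python
import threading

class Word:
    """A storage word with simulated CAS; the internal lock stands in
    for the hardware atomicity of one compare-and-swap instruction."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        return self._value

    def compare_and_swap(self, old, new):
        """Atomically: if the word still equals old, store new."""
        with self._lock:
            if self._value == old:
                self._value = new
                return True    # like condition code 0: swap performed
            return False       # like condition code 1: value had changed

counter = Word(0)

def add_one():
    while True:                # classic CAS retry loop -- no kernel
        old = counter.load()   # serialization calls needed
        if counter.compare_and_swap(old, old + 1):
            break              # no other thread updated it in between

threads = [threading.Thread(target=add_one) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.load())   # 100 -- no increments lost
```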

I first saw transactional memory on 801/risc in the late 70s.  They
demonstrated that they could do transactional type operations on
applications that weren't originally coded for transactions.

801/risc ROMP (research/office products) started out going to be a
displaywriter followon.  When the displaywriter followon was canceled,
they looked around and decided to retarget it to the workstation
market. They hired the company that had done the UNIX port to IBM/PC for
PC/IX to do one for romp. This was eventually released as PC/RT and AIX.

The followon to ROMP was RIOS (rs/6000) and they used the transactional
memory to implement JFS ... journalling the UNIX filesystem metadata
changes ... with a claim that it was more efficient than directly
implementing journalling calls in the filesystem.

However, Palo Alto then did a portable JFS that used explicit
journaling calls ... and demonstrated on RS/6000 that it was
much faster than the transactional memory implementation.
http://manana.garlic.com/~lynn/subtopic.html#801

Note that s/370 had very strong (multiprocessor) memory consistency
which cost a huge amount in performance. Two processor multiprocessor
machines slowed each processor clock cycle by 10% to accommodate
cross-cache protocol chatter ... and this overhead went up non-linearly.
Later IBM mainframes ran the cache machine cycle at a much higher rate
than the processor machine cycle.

In the late 80s, I was asked to participate in the standardization
(started by LLNL) of what quickly became fibre-channel standard (on
which they eventually built the heavy-weight FICON protocol that
drastically reduces the native throughput)
http://www.garlic.com/~lynn/submisc.html#ficon

I was also asked to participate in the standardization of scalable
coherent interface (started by people at SLAC ... a large VM370
mainframe installation at the time and host of the monthly IBM BAYBUNCH
user group meetings). SCI was defined for both I/O operations as well as
multiprocessor shared memory operation. The standard SCI memory
consistency defined a 64-port memory bus ... with relaxed memory
consistency (compared to IBM mainframe) that allowed for much larger
multiprocessor configurations. Sequent, Data General, Silicon Graphics, and
at least Convex built multiprocessor products.

Sequent & Data General took a standard i486 four-processor board with
shared cache and built an interface to SCI ... being able to get 64
4-processor boards in a configuration (256-way processor shared memory
configuration). Convex took a standard HP/SNAKE (risc) two-processor
board with shared cache and built an interface to SCI ... being able to
get 64 2-processor boards in a configuration. As an aside, much later
IBM buys Sequent and shuts it down.

Note both FCS and SCI started out with fiber that supported concurrent
transfers in both directions.

SCI
https://en.wikipedia.org/wiki/Scalable_Coherent_Interface
is part of what evolves into infiniband
https://en.wikipedia.org/wiki/InfiniBand

other trivia ... in the mid-70s I was involved in project that defined a
16-way shared memory multiprocessor. Lots of people thought it was
really fantastic ... and we got some of the 3033 processor engineers to
work on it in their part time (lot more interesting than mapping 168
logic to 20% faster chips). Then somebody tells the head of POK that it
could be decades before the POK favorite son operating system could
effectively support 16-way (it was 2000 before 16-way shipped) and we
got invited to never visit POK again (and the 3033 processor engineers
were 

Re: CeBIT and mainframes

2016-03-20 Thread Anne & Lynn Wheeler
dcrayf...@gmail.com (David Crayford) writes:
> Emulex sells an HBA that handles over 1M IOPS on a single port. IIRC,
> x86 Xeon class servers have something called DDIO which facilitates
> writes directly to processor cache.
> It's not too dissimilar to offloading I/O to SAPs. I've got old
> colleagues that work on distributed now and they are of the opinion
> that I/O bandwidth is not an issue on x86 systems,
> but it's not exactly commodity hardware. They're all hooked up using
> 16Gbs fiber connected to a SAN using PCIe, the same as z Systems.
>
> I would question the RAS capabilities rather than I/O.

Last published mainframe I/O I've seen was the peak I/O benchmark for
z196, which got 2M IOPS using 104 FICON (running over 104
fibre-channel). Also, all 14 SAPs running 100% busy would get 2.2M
SSCHs/sec, but the recommendation was keeping SAPs to 75%, or 1.5M
SSCHs/sec.

About the same time of the z196 peak I/O benchmark there was
fibre-channel announced for e5-2600 blade claiming over million IOPS,
two such fibre-channel getting more throughput than 104 FICON (running
over 104 fibre-channel) ... aka FICON is enormously heavy-weight
protocol that drastically cuts the native throughput of fibre-channel.
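The per-port arithmetic behind that comparison (numbers taken from the two paragraphs above):

```python
z196_iops = 2_000_000          # z196 peak I/O benchmark
ficon_channels = 104

iops_per_ficon = z196_iops / ficon_channels   # ~19,230 IOPS per FICON
iops_per_fcs = 1_000_000                      # claimed per native FCS port

print(round(iops_per_fcs / iops_per_ficon))   # 52 -- per-port throughput ratio
```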

disclaimer: in 1980 I was asked to do the support for channel extender
for STL (now IBM Silicon Valley Lab); they were moving 300 people from
the IMS group to an offsite bldg. with access back to the STL
datacenter; they had tried remote 3270 but found the human factors
intolerable.  The channel extender support put channel attached 3270
controllers out at the offsite bldg ... and resulted in response
indistinguishable from channel attached 3270 controllers within the STL
bldg. The vendor then tried to get approval from IBM to release the
support, but there was a group in POK that was playing with some serial
stuff and they got it blocked because they were afraid it might
interfere with getting their stuff released.

In 1988, I'm asked to help standardize some serial stuff that LLNL was
playing with, which quickly becomes the fibre channel standard ... one
of the issues is that protocol latency effects increase with increases
in bandwidth ... so they become apparent at relatively short
distances. One of the features of the 1980 work was that it localized
the enormous IBM channel protocol latency at the offsite bldg and then
used a much more efficient protocol over the longer distance.
Fibre-channel used the much more efficient protocol for everything.

In 1990, the POK group finally gets their stuff released as ESCON, when
it is already obsolete. Then some POK engineers become involved with fibre
channel standard and define a protocol that enormously cuts the native
throughput ... that is eventually released as FICON. Note that the more
recent zHPF/TCW work for FICON looks a little more like the work that I
had done back in 1980.

Besides the peak I/O benchmark FICON throughput issue (compared to
native fibre channel) there is also the overhead of CKD
simulation. There haven't been any real CKD disks built for decades;
current CKD disks are all simulated on industry standard commodity
disks.

Other trivia: when I moved to San Jose Research in the 70s, they let me
wander around. At the time the disk engineering lab (bldg 14) and disk
product test lab (bldg 15) were running pre-scheduled standalone
mainframe testing around the clock, 7x24. At one point they had tried to
use MVS for concurrent testing, but found that MVS had 15min MTBF in
that environment. I offered to rewrite the I/O supervisor to make it
bullet proof and never fail ... enabling ondemand, anytime concurrent
testing, greatly improving productivity. I happened to mention that MVS
15min MTBF in an internal-only report on the work ... which brought down
the wrath of the MVS group on my head (not that it was untrue, but that
it exposed the information to the rest of the company). When they found
that they couldn't get me fired, they then set out to make my career as
unpleasant as possible (blocking promotions and awards whenever they
could).

z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS, (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012

z13 published refs say 30% more throughput than EC12 (or about 100BIPS)
with 40% more processors ... or about 710MIPS/proc
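Rough arithmetic behind those z13 figures (using the EC12 numbers from the table above; +30% throughput, +40% processors):

```python
ec12_bips, ec12_procs = 75, 101            # EC12 row from the table above

z13_bips = ec12_bips * 1.3                 # ~30% more throughput
z13_procs = round(ec12_procs * 1.4)        # ~40% more processors -> 141

print(round(z13_bips))                     # 98, i.e. "about 100" BIPS
print(round(z13_bips * 1000 / z13_procs))  # 691 MIPS/proc, near the ~710 quoted
```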

z196 era e5-2600v1 blade rated at 400-500+BIPS depending on model,
e5-2600v4 blades are three-four times that, around 1.5TIPS (1500BIPS).

i.e. since the start of the century, commodity processors have increased
their processing power significantly more aggressively than
mainframe. They have also come to dominate the wafer-chip manufacturing
technology ... and essentially mainframe chips have converged to use the
same technology (in much the same way mainframe has converged to use

Re: CeBIT and mainframes

2016-03-19 Thread Anne & Lynn Wheeler
dave.g4...@gmail.com (Dave Wade) writes:
> In fact its a bit like SVC's in VM/370. The code which handles them is
> very different to that in the OS world, but the code still runs

there was joke about the time MVS came out with 8mbyte kernel image in
every virtual address space ... that the 32kbyte os/360 system services
simulation in VM/CMS was a lot more efficient than the 8mbyte os/360
system services simulation in MVS.

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: Introducing the New z13s: Tim's Hardware Highlights

2016-02-24 Thread Anne & Lynn Wheeler
edgould1...@comcast.net (Ed Gould) writes:
> Remember the *OLD* days there was a 16MB max on (even) an MP? Never
> mind the cost of $10K per meg (if memory serves me on a 168).
> Yes the newer machines have more memory but in reality you really
> don't get all that more functionality, and yes there are bells and
> whistles for the z genation.

Significant MVS bloat by 3033 was causing a number of problems ... real
storage requirements were banging hard at the 16mbyte limit. The 16bit
370 PTE was a 12bit (4kbyte) page number, 2 defined bits and 2
undefined/unused bits. They took the 2 undefined/unused bits and used
them to prefix the (real) page number ... allowing a 14bit page number,
or up to 64mbytes of real pages ... allowing lots of application virtual
pages to reside above the 16mbyte line.
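The arithmetic behind the 2-bit prefix can be checked directly (a back-of-envelope sketch of the sizes, not the actual PTE bit layout):

```python
PAGE = 4096            # 4kbyte pages
MB = 1024 * 1024

# 12-bit real page number: 2**12 frames of 4kbytes = 16mbytes
print((2 ** 12 * PAGE) // MB)   # 16

# prefixing the 2 previously-unused PTE bits gives a 14-bit real
# page number: 2**14 frames of 4kbytes = 64mbytes
print((2 ** 14 * PAGE) // MB)   # 64
```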

os/360 significant pointer passing API paradigm was making 16mbyte
virtual address space limit a problem. Transition from SVS to MVS gave
each application its own 16mbyte virtual address space ... but pointer
passing API paradigm required 8mbyte image of the MVS kernel in each
application virtual address space. Then because subsystems services were
in their own virtual address space, pointer passing API required 1mbyte
CSA (in each virtual address space) for passing parameters. CSA size
requirements were proportional to subsystems and applications ... for
large 3033s was 5-6mbytes and threatening to become 8mbytes (leaving
none for applications). Subset of "access registers" was then
retrofitted to 3033 as dual-address mode (allowing subsystems to access
application virtual address space w/o needing CSA).

problem was that 4341 clusters had more processing power than 3033, more
aggregate memory and I/O throughput, much lower cost and significantly
less physical and environmental footprint. Folklore is that head of POK
felt so threatened that corporate was convinced to cut allocation of
critical 4341 manufacturing component in half.

The 4341's significant improvement in price/performance, as well as
physical and environmental footprint, resulted in corporations ordering
hundreds at a time for placing out in departmental areas ... sort of the
leading edge of the distributed computing tsunami.

Before 4341s shipped, I got roped into benchmarking engineering 4341 for
national labs for big compute farm ... sort of the leading edge of the
coming supercomputer paradigm.

internet+distributed computing+compute farms ... evolves into cloud with
hundreds of thousands of systems and millions of processors in each
cloud megadatacenter (system costs have dropped to such a level
that power costs are starting to dominate cloud costs).

old email about air force data systems coming out to talk about 20
4341s, spring of 1979 (they had a few mainframes in their datacenter),
but by the time they got around to coming out, fall of 1979, it had
jumped to 210 4341s.
http://www.garlic.com/~lynn/2001m.html#email790404
and
http://www.garlic.com/~lynn/2001m.html#email790404b

other 4341 related email
http://www.garlic.com/~lynn/lhwemail.html#4341

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: History of Computing 1944 and the evolution to the System/360

2016-02-24 Thread Anne & Lynn Wheeler
charl...@mcn.org (Charles Mills) writes:
> My *recollection* is that the S/360 30 came with up to 48K, or 64K by RPQ. I
> could be off, but 1MB sounds incredibly high to me.

ga24-3231-7, 360-30 functional characteristics pg14 (from bitsavers)

c30    8kbytes
d30   16kbytes
dc30  24kbytes
e30   32kbytes
f30   64kbytes



univ had 709/1401 and was sold a 360/67 replacement (for tss/360)
... pending delivery of the 360/67, the transition replaced the 1401
with a 64kbyte 360/30 ... gave the univ. a chance to get acquainted with
360 ... the 360/30 could also be run in 1401 hardware emulation mode.

tss/360 never quite came to production fruition ... so 360/67 ran most
of the time as 360/65 with os/360.

IBM offered 2361 large capacity storage
https://en.wikipedia.org/wiki/IBM_2361_Large_Capacity_Storage

models came in 1mbyte and 2mbyte for models 50, 65, and 75.

i also remember ampex (and other vendors) offering LCS up to 8mbytes,
also additional memory for 30s & 40s.

search engine turns up other vendors offering addon
semiconductor/monolithic memory for 360s in the 70s with larger sizes at
cheaper prices.

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: ASCII vs. EBCDIC (was Re: On sort options ...)

2016-02-21 Thread Anne & Lynn Wheeler
000a2a8c2020-dmarc-requ...@listserv.ua.edu (Tom Marchant) writes:
> ASCII was seriously considered for the initial System/360
> design. Amdahl, Blaauw and Brooks published an article in the IBM
> Journal in April, 1964, titled "Architecture of the System/360" in
> which many of the design trade-offs were described. One place where
> the article can be found is
> http://web.ece.ucdavis.edu/~vojin/CLASSES/EEC272/S2005/Papers/IBM360-Amdahl_april64.pdf
>
> 
> ASCII vs BCD codes. The selection of the 8-bit character size in 1961
> proved wise by 1963, when the American Standards Association adopted a
> 7-bit standard character code for information interchange
> (ASCII). This 7-bit code is now under final consideration by the
> International Standards Organization for adoption as an international
> standards recommendation. The question became "Why not adopt ASCII as
> the only internal code for System/360?"
>
> The reasons against such exclusive adoption was the widespread use of
> the BCD code derived from and easily translated to the IBM card
> code. To facilitate use of both codes, the central processing units
> are designed with a high degree of code independence, with generalized
> code translation facilities, and with program-selectable BCD or ASCII
> modes for code-dependent instructions. Nevertheless, a choice had to
> be made for the code-sensitive I/O devices and for programming
> support, and the solution was to offer both codes, as a user
> option. Systems with either option will, of course, easily read or
> write I/O media with the other code.
> 


IBMer Bob Bemer ... "father of ASCII" ... EBCDIC and the P-Bit (The
Biggest Computer Goof Ever).
http://www.bobbemer.com/P-BIT.HTM

Who Goofed?

The culprit was T. Vincent Learson. The only thing for his defense is
that he had no idea of what he had done. It was when he was an IBM Vice
President, prior to tenure as Chairman of the Board, those lofty
positions where you believe that, if you order it done, it actually will
be done. I've mentioned this fiasco elsewhere. Here are some direct
extracts:

... snip, see reference for a whole lot more ...

have been having problems with outgoing email this weekend, apologize
if multiple copies show up

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: Query: Will modern z/OS and z/VM classes suffice for MVS and VM/370

2016-02-15 Thread Anne & Lynn Wheeler
ri...@livingcomputermuseum.org (Rich Alderson) writes:
> We are currently in the process of restoring a 4341 to operating
> condition.  We have just last week corrected a fault in the power
> system, and are able to power the system up and IML it from floppy.
>
> We are now deciding what operating system to run on the restored
> system.  Most likely, we will run VM/370, but possibly we will run an
> MVS guest as well.  I used to be an MVS systems programmer, but that
> was more than 30 years ago, and even the rust has eroded away.
>
> I would like to brush up on operations and systems programming, which
> would be much simpler if a modern z/OS and/or z/VM course would
> suffice for the older operating systems.  Have the operator commands
> and programming utilities changed radically since 1984 (JES2, CMS)?
>
> Please feel free to reply privately if you wish to tell me how foolish this 
> sounds.
>
> Thanks,
> Rich Alderson


Hercules comes with 4341 era vm370
https://en.wikipedia.org/wiki/Hercules_%28emulator%29

vast majority of 4341s were shipped with FBA disks ... you would need
some sort of CKD disks in order to bring up MVS.

huge percentage of 4341s went out into departmental areas with 3370 FBA
disks, sort of leading edge of distributed computing tsunami ... not
requiring datacenter provisioning.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: AW: Re: You thought IEFBR14 was bad? Try GNU's /bin/true code

2016-02-11 Thread Anne & Lynn Wheeler
dlc@gmail.com (David L. Craig) writes:
> Does anyone else (Google doesn't) remember the ELHO acronym?
>
> Equal- mask '8'
> Low  - mask '4'
> High - mask '2'
> Overflow - mask '1'
>
> Back in the days of no extended mnemonic opcodes it was
> quite the assembler programming aid.

I was involved in doing some of the original relational/sql "System/R",
some past posts
http://www.garlic.com/~lynn/submain.html#systemr

where SQL does 2-value, true/false logic. That created a big problem
for unknowns and/or null values. An old discussion noted that null
values in SQL tended to produce the opposite of the expected results
... making them all the more dangerous.

about the same time, I was brought in to do some of a different kind of
relational implementation ... which had interface language that directly
supported unknowns/null with 3-value logic.

old post in DBMS group discussing the dangers of unknowns/nulls in SQL
and how it was handled in 3-value logic
http://www.garlic.com/~lynn/2003g.html#40 How to cope with missing values - NULLS?
http://www.garlic.com/~lynn/2003g.html#41 How to cope with missing values - NULLS?
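
The hazard described above (SQL NULLs producing the opposite of the
expected results) can be sketched with three-valued Kleene logic; this
is a purely illustrative Python sketch, not from the original post:

```python
# Sketch of the SQL NULL hazard: comparisons involving NULL yield
# UNKNOWN, and a WHERE clause keeps only rows that are strictly TRUE.
NULL = None  # stand-in for SQL NULL / "unknown"

def eq3(a, b):
    """Three-valued equality: NULL compared with anything is unknown."""
    if a is NULL or b is NULL:
        return NULL          # UNKNOWN
    return a == b

def not3(v):
    """Three-valued NOT: NOT UNKNOWN is still UNKNOWN."""
    return NULL if v is NULL else (not v)

rows = [10, 20, NULL]
# WHERE x = 10   -> keeps only rows where the test is strictly True
matches     = [x for x in rows if eq3(x, 10) is True]
# WHERE x <> 10  -> the NULL row lands in NEITHER result set
non_matches = [x for x in rows if not3(eq3(x, 10)) is True]
print(matches, non_matches)   # [10] [20] -- the NULL row vanishes
```

The surprise is that the NULL row satisfies neither `x = 10` nor
`x <> 10`, which is exactly the "opposite of expected results" trap.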

Earlier, in the first part of the 70s, I had written a PLI program to
ingest assembler listings ... analyze the statements, creating a higher
level representation ... code paths, logic processes, "dead" code,
possible register use before set, etc ... and generate a representation
using pseudo-Pascal statements.

some highly optimized cp67 kernel code made liberal use of 3&4-way
logic with branch conditions and would appear very straightforward ...
the pseudo-Pascal true/false if/then/else type logic could look very
convoluted with nesting that could go 15-levels deep.
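
The ELHO condition-code masks quoted earlier in the thread map directly
onto branch-on-condition masks; a purely illustrative sketch in Python
(the helper names are hypothetical):

```python
# Condition-code mask bits from the ELHO mnemonic quoted above:
# Equal=8, Low=4, High=2, Overflow=1.  A BC (branch on condition)
# instruction branches when the mask bit for the current CC is set.
E, L, H, O = 8, 4, 2, 1

def branch_taken(mask, cc):
    """True if a BC with this 4-bit mask branches for condition code cc.
    Condition codes 0..3 select mask bits 8, 4, 2, 1 respectively."""
    return bool(mask & (8 >> cc))

# Extended mnemonics are just fixed masks, e.g. BNE = "branch not equal"
BNE = L | H | O                  # mask 7: anything except Equal
assert branch_taken(BNE, 1)      # cc=1 (low)   -> branch
assert not branch_taken(BNE, 0)  # cc=0 (equal) -> fall through
```

A 4-way branch is then just a BC per mask bit, which is why the
assembler original can read more cleanly than nested if/then/else.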

iefbr14
https://en.wikipedia.org/wiki/IEFBR14

got fixed with SR (zeroing the return code in register 15), then with
an eyecatcher name, also a couple of other nits: "IEFBR14" on the "END"
statement, and "RENT" & "REUSE" in the linkedit
http://hercules390.996247.n3.nabble.com/IBM-program-naming-question-td27674.html

from above:

Two more ...

First, because the RENT linkedit option wasn't specified.

Second, because the REUSE linkedit option wasn't specified. 

... snip ...

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Ancient History (OS's) - was : IBM Destination z ...

2016-02-07 Thread Anne & Lynn Wheeler
000433f07816-dmarc-requ...@listserv.ua.edu (Paul Gilmartin) writes:
> (Topic drift on recreation) I found a fun Mandelbrot set viewer at:

other IBM Mandelbrot drift ... In the 80s, Mandelbrot resigned from IBM
Research in protest over the elimination of research.
https://en.wikipedia.org/wiki/Benoit_Mandelbrot

Mandelbrot left IBM in 1987, after 35 years and 12 days, when IBM
decided to end pure research in his division.[20] He joined the
Department of Mathematics at Yale, and obtained his first tenured post
in 1999, at the age of 75.[21] At the time of his retirement in 2005, he
was Sterling Professor of Mathematical Sciences.

... snip ...

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: IBM Destination z - What the Heck Is JCL and Why Does It Look So Funny?

2016-02-07 Thread Anne & Lynn Wheeler
t...@tombrennansoftware.com (Tom Brennan) writes:
> Yep - I'm hoping they'll like the batch facilities in MVS which in my
> opinion are far beyond unix.  This might be a spot where a history
> lesson is needed, but I wasn't around in the early days:
>
> From what I've read, MVS started with nothing but batch jobs and later
> grew into online systems.  So TSO is just another batch job that
> happens to communicate with a terminal.  On the unix side though, it
> seems they started with online terminals first, so a batch
> (background) job was later created as a terminal session with no
> terminal.

I've pointed that out before ... CTSS was conversational online default
from the start, then some of the people went to the 5th flr and did
Multics (and folklore is that some of the Bell Labs people went back
home and did a simplified Multics, calling it unix) ... and others
went to the science center on the 4th flr and did CP/40-CMS
(making hardware modifications to 360/40 to support virtual memory),
which morphs into CP/67-CMS when the standard 360/67 with virtual
memory becomes available ... precursor to VM/370-CMS (cms originally
stood for "cambridge monitor system" ... is renamed "conversational
monitor system" for vm/370). some cambridge science center posts
http://www.garlic.com/~lynn/subtopic.html#545tech

I've periodically mentioned Kildall working with CP/67-CMS at NPG
school, before doing CP/M, which then morphs at Seattle Computer
Products, and leads to ms/dos.

os/360 assumed batch ... and had to provide an increasing amount of
contingency handling capability ... while conversational systems
started out assuming a responsible human was there to handle the
contingency cases.

we were working with the director of NSF and were supposed to get $20M
to tie together the NSF supercomputer centers. Then congress cuts the
budget, some other things happen, and then they release an RFP ... but
internal politics prevent us bidding on the RFP (director of NSF tries
to help by writing the corporation a letter, but that just makes
internal politics worse). As regional networks tie into the centers, it
morphs into the NSFNET backbone, precursor to the modern internet. some old email
http://www.garlic.com/~lynn/lhwemail.html#nsfnet

We do get a TCP/IP product for the mainframe but there are quite a few
issues ... getting 44kbytes/sec while using nearly a full 3090
processor. I do the
enhancements to support RFC1044 and in some tuning tests at Cray
Research get full sustained channel speed throughput between 4341 and a
Cray, using only a modest amount of 4341 (possibly 500 times improvement
in bytes moved per instruction executed). some past posts
http://www.garlic.com/~lynn/subtopic.html#rfc1044

Late 90s, a senior disk engineer gets a talk scheduled at annual,
world-wide, internal communication group conference, supposedly on 3174
performance ... but opens the talk with statement that the communication
group was going to be responsible for the demise of the disk
division. The issue was that the communication group had a stranglehold
on datacenters with corporate strategic ownership of everything crossing
the datacenter walls, and were fiercely fighting off distributed
computing and client/server (trying to preserve their dumb terminal
paradigm and install base). The disk division was starting to see data
fleeing the datacenter to more distributed computing friendly platforms
with drop in disk sales. The disk division had come up with a number of
solutions to reverse the process, but were constantly being vetoed by
the communication group. some past posts
http://www.garlic.com/~lynn/subnetwork.html#terminal

In the late 60s, an increasing number of cp/67-cms customers were
extending to 7x24 availability (including some number of commercial
online service bureaus). One of the issues was that in the 60s
mainframes were rented and initially, it was hard to promote offshift
use enough to recover system costs. There was a lot of work done to
reduce system costs (especially offshift). Part of system rental costs
was based on the system meter that ran whenever the processor or any
channel was running. All processor and channel activity had to be quiet
for at least 400ms before the system meter would stop. Special terminal
CCWs were created to allow the channel to stop ... but immediately
start on-demand when characters were coming in ... some of the stuff is
sort of analogous to what is being done for on-demand cloud computing
(trivia: long after mainframes had converted from rental to sales, MVS
still had a timer task that woke up every 400ms ... making sure that
the system meter never stopped).

Other stuff to further minimize offshift costs was eliminating operator
requirements. Another early CP/67 enhancement in the 60s was automatic
re-ipl after failure (the system came up and was available w/o needing
any human intervention). In the early 70s, as environments became more
complex, an increasing amount of CP/67 services were provided by
"service virtual machines" (analogous to daemons ... the current

Re: IBM Destination z - What the Heck Is JCL and Why Does It Look So Funny?

2016-02-07 Thread Anne & Lynn Wheeler
harris...@gmail.com (Graham Harris) writes:
> Doesn't deadline scheduling count?

as an undergraduate in the 60s, I did dynamic adaptive resource
management that was picked up and shipped in CP/67 (customers
periodically referred to it as the fairshare scheduler or wheeler
scheduler because the default policy was fairshare).
http://www.garlic.com/~lynn/subtopic.html#fairshare

in the morph from CP/67 to VM/370 there were a lot of things dropped
and simplified ... including all the scheduling stuff.

At the science center during the FS period, I continued to work on 360 &
370 stuff ... even periodically ridiculing the FS activity.
http://www.garlic.com/~lynn/submain.html#futuresys

with the death of FS, there was a mad rush to get stuff back into the
370 product pipelines ... which contributed to the decision to release
some amount of the stuff I had been doing. Some of it was shipped in
the standard release.

Note that earlier, in the 23Jun1969 unbundling announcement, ibm
started to charge for SE services, maintenance, and (application)
software ... but made the case that kernel software should still be
free.
http://www.garlic.com/~lynn/submain.html#unbundle

During the FS period, the lack of 370 products is credited with giving
clone processors a market foothold. So as part of resuming 370 efforts,
the decision was made to also transition to charging for all kernel
software (likely motivated by the 370 clone makers getting a market
foothold). The decision was made to make the scheduling work a guinea
pig as a separate charged-for kernel product (I had to spend a lot of
time with lawyers and business people about kernel charging policies).
After the transition to charging for all kernel software was complete
in the 80s ... the next step was the OCO-wars (aka only shipping object
code).

As part of the product review process, somebody in Armonk said he
wouldn't approve it unless it had customer-settable parameters, because
everybody knew that the state of the art was settable performance
parameters (MVS would have this huge array of settable parameters
... there would be lots of SHARE presentations about various tests of
random walks of all the settable parameters with various workloads). I
tried to explain to him what dynamic adaptive management meant ... but
eventually had to implement some customer-settable parameters. However,
there was a joke that I took from operations research and "degrees of
freedom". The range of values for the manually settable parameters was
less than what the dynamic adaptive calculations could do ... so
effectively the dynamic adaptive calculations could compensate for the
human-selected values.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Ancient History (OS's) - was : IBM Destination z - What the Heck Is JCL and Why Does It Look So Funny?

2016-02-05 Thread Anne & Lynn Wheeler
linda.lst...@comcast.net (Linda) writes:
> I had an Apple ][ with an acoustic coupler. It auto dialed over a
> regular telco dial tone line using a program loaded from a cassette
> player, or if one could afford it, from an early floppy drive. The
> college I went to had a Univac 90/70d. The were 4 student dialup
> numbers. I could get into one of those much like the scene from War
> Games.  It was fun.


TYMSHARE made their CMS-based online computer conferencing available free
to SHARE as VMSHARE starting in Aug1976 ... archives:
http://vm.marist.edu/~vmshare

In the 70s, I started trying to get IBM to let me put all the VMSHARE
files up on internal systems ... including the world-wide
sales support HONE system. One of the biggest battles I had
with IBM was that the lawyers were afraid that customer information
would contaminate IBM employees.

My brother was Apple regional marketing rep at the time (largest
physical region in CONUS) and I started trying to get him to set up an
apple that would do terminal emulation for copying all the files down
from TYMSHARE ... he never quite got around to doing it ... although
over the years ... when he would come into town for business meetings I
would get invited to dinners ... and even got to argue with the MAC
developers about design (before MAC was announced).

I eventually had to resort to getting monthly tapes mailed from
TYMSHARE ... that dumped all the VMSHARE files (later adding all
PCSHARE files). misc. old email
http://www.garlic.com/~lynn/lhwemail.html#vmshare

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Ancient History (OS's) - was : IBM Destination z - What the Heck Is JCL and Why Does It Look So Funny?

2016-02-05 Thread Anne & Lynn Wheeler
bles...@ofiglobal.com (Lester, Bob) writes:
> ​Yeah. Worst mistake Gary Kindall ever made. Just think, if he'd hadn't
> "blown off" IBM, I'd be cursing his memory (he's deceased) instead of
> Bill Gates. Or maybe not, I ran CP/M-80 back in the day. I really
> enjoyed it.  But, then, I enjoyed everything more back then. 
> everything was bright, shiny, and new ​

before ms/dos
http://en.wikipedia.org/wiki/MS-DOS
there was seattle computer
http://en.wikipedia.org/wiki/Seattle_Computer_Products
before seattle computer there was cp/m,
http://en.wikipedia.org/wiki/CP/M
before cp/m, kildall worked with cp67/cms (precursor to vm370) at npg
http://en.wikipedia.org/wiki/Naval_Postgraduate_School

other trivia ... after the 64, commodore did the amiga ... which ran ARexx
https://en.wikipedia.org/wiki/ARexx

ARexx is an implementation of the REXX language for the Amiga, written
in 1987 by William S. Hawes, with a number of Amiga-specific features
beyond standard REXX facilities. Like most REXX implementations, ARexx
is an interpreted language. Programs written for ARexx are called
"scripts", or "macros"; several programs offer the ability to run ARexx
scripts in their main interface as macros.

... snip ...

more trivia ... the acorn group in Boca kept claiming that they weren't
going to do any software, and an IBM group was formed in silicon valley
to write software for acorn. Then at some point the Boca group changed
their mind and wanted responsibility for all software ... if necessary
contracting with outside groups (some viewed it as eliminating internal
competition).

some past mentioning acorn
http://www.garlic.com/~lynn/2002g.html#79 Coulda, Woulda, Shoudda moments?
http://www.garlic.com/~lynn/2003d.html#19 PC history, was PDP10 and RISC
http://www.garlic.com/~lynn/2005q.html#24 What ever happened to Tandem and NonStop OS ?
http://www.garlic.com/~lynn/2005r.html#8 Intel strikes back with a parallel x86 design
http://www.garlic.com/~lynn/2006y.html#29 "The Elements of Programming Style"
http://www.garlic.com/~lynn/2007e.html#5 Is computer history taugh now?

reference
https://en.wikipedia.org/wiki/IBM_Personal_Computer#Project_Chess

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: 3270 based ATMs

2016-02-02 Thread Anne & Lynn Wheeler
other trivia:

A Brief History of the ATM
http://www.theatlantic.com/technology/archive/2015/03/a-brief-history-of-the-atm/388547/

The company seemed poised to overwhelm its competitors until executives
decided to deploy a new model "the IBM 4732 family" which were
incompatible with previous models, including the already-successful and
widely deployed IBM 3624.

...

IBM's move soured banks, inadvertently, opening the ATM market to new
cashpoint manufacturers. Eventually, IBM abandoned payment-technology
systems entirely.

...

IBM's returns fell short of its expectations, in part due to the growth
in local processing architectures, which had invalidated IBM's strategy
to link ATMs to its expensive mainframes.

... snip ...

There are articles about OS/2 lingering on in the ATM market long after
it had disappeared elsewhere

The ticking time bomb inside your bank ATM
http://www.fiercefinanceit.com/story/ticking-time-bomb-inside-your-bank-atm/2013-07-31

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: 3270 based ATMs

2016-02-01 Thread Anne & Lynn Wheeler
tro...@gmail.com (Rick Troth) writes:
> I searched before asking, but didn't find anything close.
> Anyone know how many 3270 based ATMs are in operation?
> Anyone know where I can find tech pubs for such?

3624 designed at the los gatos lab (disclaimer: at one time, I had a
wing of offices and labs there ... not involved in the 3624, but heard
stories; at one time the los gatos lab was considered one of the most
scenic in ibm ... since plowed under and now a housing development)
https://en.wikipedia.org/wiki/IBM_3624

the guy managing magstripe standards was also there
https://en.wikipedia.org/wiki/Magnetic_stripe_card#Further_developments_and_encoding_standards

when I was co-author of a financial industry standard that was piloted
by nacha for ATMs ... this reference has gone 404 ... but lives on at
the wayback machine ... pilot results entry 23July2001:
http://web.archive.org/web/20070706004855/http://internetcouncil.nacha.org/News/news.html

all the ATM network stuff was Tandem ... with special crypto hardware.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Lineage of TPF

2016-01-24 Thread Anne & Lynn Wheeler
000248cce9f3-dmarc-requ...@listserv.ua.edu (Ed Finnell) writes:
> As Lynn mentioned there were hardware mods for ACP/TPF to the 3081, 3083  
> and 3090's. They were given new numbers 9081,9083 and of course 9190? I guess 
>  shorter path lengths and such but couldn't find any details after a short  
> search.

besides the 3830 disk controller RPQ ... the 3083 was 3081 with one of
the processors removed (at the time, acp/tpf didn't have tightly-coupled
multiprocessor support); that still wasn't competitive ... so there was
a 3083 with specialized channel microcode tailored to ACP/TPF
operation. I'm not familiar with anything similar for the 3090.

as mentioned 3081 technology wasn't competitive with clones:
http://www.jfsowa.com/computer/memo125.htm

initial 3081D per processor throughput was supposed to be faster than
the 3033 ... but many benchmarks have it about 20% slower. The 3081K
doubled the cache and per-processor throughput was supposed to improve
to 50% faster than the 3033 ... but many benchmarks were the same as
the 3033.

IBM 2-way multiprocessor technology from the period slowed the processor
clock down by 10% to handle cross-cache activity. Going from 3081K to
3083K increased processor clock by nearly 15% (no multiprocessor clock
slow-down) ... 3083 mostly done because all ACP/TPF customers might
migrate to clone makers (since ACP/TPF didn't have multiprocessor
support). Faster clock and tweaks for 3083jx got it up to 16% faster
than 3081K (or supposedly almost 80% faster than 3033).

9083 had different I/O microcode load to bias for the typical higher
channel i/o loads by ACP/TPF.

It is possible that they may have done something similar for 3090, but I
don't recollect any details.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Where do you place VSE?

2016-01-23 Thread Anne & Lynn Wheeler
g...@gabegold.com (Gabe Goldberg) writes:
> One response cited Wikipedia entry. ALSO good timing; I'm ALSO writing
> article on VSE community.  As you'd expect, the VSE list has had a lot
> to say -- positive, negative, and informative.

OS/360 for a time came in PCP, MFT, and MVT flavors ... but didn't work
well in the smallest real memory configurations ... giving rise to
DOS/360. OS/360 somewhat becomes split between MFT customers (usually
mid-size memory configurations) and MVT (largest memory
configurations).

Move to virtual memory: DOS/360 morphs into DOS/VS (single virtual
address space), MFT morphs into VS1 (single virtual address space) and
MVT morphs into VS2 (initially with a single virtual address space, aka
SVS, much like VS1 ... and eventually MVS with multiple virtual address
spaces).

During the Future System period, 370 efforts are being killed off (FS
was different than 360/370 and was going to completely replace it).
With the demise of FS, there is a mad rush to get products back into the
370 pipeline. POK kicks off 3033 (168 logic mapped to 20% faster chips)
and 3081 & MVS/XA in parallel (and convinces corporate to kill off the
vm370 product and move all the people to POK to work on MVS/XA; Endicott
eventually acquires the vm370 product mission, but has to recreate a
development group from scratch).

While POK is doing "XA" architecture ... highly tailored to MVS ...
Endicott kicks off the "E" architecture ... which in large part is
moving the single virtual address space into microcode and new
instructions that enable/disable virtual page for specific real page.
Internally the 4331 is called E3 and 4341 is called E4. DOS/VS becomes
DOS/VSE.

In part because a large percentage of 4300 machines are run with vm/370
... they are actually run in 370 mode ... supporting 370 multiple
virtual address spaces.

os/vs1
https://en.wikipedia.org/wiki/OS/VS1

the above is slightly garbled, since the migration aid was primarily
motivated by helping move mvs/370 to mvs/xa

VSE
https://en.wikipedia.org/wiki/VSE_%28operating_system%29

The migration aid originally was only going to be used for internal
mvs/xa development and never released to customers, and so paid little
attention to general function and performance. Then there were internal
politics ... an internal datacenter added full XA support to VM370 with
full function/performance. POK wants corporate to support a massive new
staff for the migration aid to try and upgrade it to the features and
performance of standard vm370 (with XA added). POK wins.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Lineage of TPF

2016-01-23 Thread Anne & Lynn Wheeler
g...@gabegold.com (Gabe Goldberg) writes:
> Indeed. Then a couple people responded. Good timing; I'm writing
> article on TPF for Destination z or IBM Systems Magazine (I forget
> where it'll be published). IBM TPFers have been very helpful and I'm
> contacting TPF users group: http://www.tpfug.org/ . I didn't post here
> because  well, I just didn't, but I should have. Better late than
> never: I'm interested in TPF insights, experiences, etc.
>
> Be brief, this won't be an epic article, though there might be
> follow-on pieces. Please copy me directly so replies aren't buried in
> the list digest.
>
> Thanks...
>
> Rick Troth said on IBM-Main: Lineage of TPF would also be interesting.

Before Jim left for tandem (earlier post about RDBMS, System/R, DB2)
... he was looking for real live DBMS locking statistics for profiling
System/R (RDBMS) performance. This included data from ACP 3830
controller RPQ ... logical/symbolic locks implemented in the 3830
controller (much more efficient than device reserve/release) ... minor
note: IBM wanted to deprecate the ACP RPQ because corporate strategy was
to push "string switches" ... which allowed two different controllers to
get to the same device (and bypass "locks" in the other controller).
old email refs: http://www.garlic.com/~lynn/2008i.html#email800325

The customer statistics in the above ... were from just before the
looming 3081 "crisis" (while ACP/TPF had loosely-coupled cluster
support, it didn't have tightly-coupled, SMP support). Note the above
mentions two controllers (with string-switch) ... but it is the same
system having access to both controllers for redundancy.

As aside, US HONE datacenters were consolidated in Silicon Valley in the
mid-70s (HONE was the world-wide, online sales support
system). By the late 70s, the US HONE system had the largest
"single-system image" loosely-coupled configuration in the world (with
load-balancing and recovery across all systems in the complex) ...  and
required string-switch with pairs of controllers each with multiple
channel connections in order for all the SMP (multiprocessor) systems in
the complex to fully access the large DASD farm.

Rather than locking (device reserve/release) for the necessary
operations, it used a special CCW sequence (when needed) that emulated
the compare-and-swap instruction semantics. Charlie invented
compare-and-swap while doing fine-grain multiprocessing locking on CP67
at the science center ... past posts
http://www.garlic.com/~lynn/subtopic.html#smp
and
http://www.garlic.com/~lynn/subtopic.html#545tech
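
The compare-and-swap semantics that the CCW sequence emulated can be
sketched as follows (illustrative Python, with a lock standing in for
the atomicity the hardware instruction or channel program provides):

```python
import threading

class Word:
    """Sketch of compare-and-swap semantics: atomically compare a value
    with an expected value and, only if they match, store a new one."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()   # stand-in for hardware atomicity

    def compare_and_swap(self, expected, new):
        """If value == expected, store new and return True; otherwise
        return False (the caller retries with the freshly read value)."""
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

    def load(self):
        with self._lock:
            return self._value

# Typical retry loop: atomically increment a shared counter.
counter = Word(0)
while True:
    old = counter.load()
    if counter.compare_and_swap(old, old + 1):
        break
print(counter.load())   # 1
```

The retry loop is the key idiom: if another system (or processor)
updated the value between the load and the swap, the swap fails and the
loop re-reads and tries again, with no reserve/release lock held.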

trivia: when facebook moved to silicon valley it was to a new bldg.
next door to the old HONE datacenter. misc. past posts mentioning
HONE
http://www.garlic.com/~lynn/subtopic.html#hone

Of course the (HONE cluster) "single system image" support wasn't made
available to customers until 30yrs later (late last decade).

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Lineage of TPF

2016-01-23 Thread Anne & Lynn Wheeler
jo.skip.robin...@att.net (Skip Robinson) writes:
> I had a brief and bemusing encounter with TPF around 1990. My
> employer, Security Pacific Bank, was acquired by (the old SF-based)
> Bank of America, which was then under the tutelage of an ex CEO of
> American Airlines. He believed that TPF was the answer to all
> important IT questions. In particular, he engineered a project to
> manage the Bank's ATMs with TPF, perhaps the only time/place that TPF
> was charged with that responsibility--absolutely critical for a major
> financial institution. It apparently worked pretty well. My mainframe
> buddies there admired TPF for its lightning quick recovery--a
> blessing, they said, because it crashed a lot. ;-)

There was a fantastic SE on a financial institution account in LA ... he
wrote ATM cash machine support in VM370 that he showed had higher
throughput on a 370/158 than TPF had on a 370/168. His trick was
significantly better disk arm scheduling (than TPF) ... he had patterns
of ATM usage and record layout ... and would do things like delaying a
transaction somewhat proportionally to the record's distance from the
current arm position and the probability that another transaction would
come in needing a record closer to the current arm position.
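
The arm-scheduling idea described above can be sketched as follows
(hypothetical Python, not the SE's actual code; the starvation limit is
an assumption added to keep far-away requests from waiting forever):

```python
# Sketch of seek-distance-based arm scheduling: among queued ATM
# transactions, service the one whose record lies closest to the
# current arm position, briefly holding back far-away requests in
# case a nearer one arrives.
def next_request(queue, arm_pos, max_hold=3):
    """Pick the queued request with the smallest seek distance, unless
    some request has already been held back max_hold times (age-out)."""
    for req in queue:
        if req["held"] >= max_hold:
            return req      # starved request goes first regardless
    return min(queue, key=lambda r: abs(r["cyl"] - arm_pos))

queue = [{"cyl": 180, "held": 0},
         {"cyl": 42,  "held": 0},
         {"cyl": 55,  "held": 0}]
pick = next_request(queue, arm_pos=50)
print(pick["cyl"])   # 55 -- nearest to the arm; the others wait a bit
```

The win over first-come-first-served is fewer long seeks per
transaction, at the cost of slightly delaying requests for distant
records, which is the trade-off the post describes.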

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Lineage of TPF

2016-01-23 Thread Anne & Lynn Wheeler
sas...@sas.com (Don Poitras) writes:
> TPF ran lots of ATM networks. I worked at First Interstate in 1988
> working on a project to convert from TPF to MVS. And certainly any
> bank that does VISA authorization at their ATMs still to this day use
> TPF because that's what VISA uses.

(credit) card associations started out as non-profits for brand
marketing (getting merchants to accept the brands) and network
interconnect between merchant acquiring and customer issuing financial
institutions (at one time 30,000 institutions). Interchange rules (the
amount charged to merchants for a credit card transaction) had pieces
for the acquiring and issuing financial institutions as well as a piece
for the association network (and other surcharges based on things like
fraud history & probability)

VISA's installation of ACP for its card association network transaction
processing was possibly a big part of changing the name from ACP to
TPF. The card associations were making so much money off their network
interchange transaction charges (for the card association networks)
that they changed to for-profit and spun off in IPOs.

Around the turn of the century ... because of bank consolidation and
outsourcing, 90% of credit card transactions were handled by six
datacenters that had direct connections and no longer needed the card
association networks. There was then a big legal battle between card
associations and the six processors (who felt they no longer had to
share interchange fees with the card associations ... since they were no
longer using their networks).

ATM/Debit networks were primarily Tandem (even though the backends might
be IBM mainframes). Tandem had also acquired major ATM machine crypto
hardware vendors. Long ago I got brought in as a consultant to a small
client/server company that wanted to do financial transactions on their
server; they had also invented this technology they called SSL that
they wanted to use; the result is now frequently called "electronic
commerce".
Somewhat for having done "electronic commerce", in the mid-90s I got
asked to work in the X9A10 financial standard working group that had
been given the requirement to preserve the integrity of the financial
infrastructure for all retail payments. We did detailed end-to-end
threat analysis for nearly all kinds of retail payments (credit, debit,
ACH, wire-transfer, face-to-face, point-of-sale, internet, etc). The
result was a standard that eliminated most of the current kinds of
fraud ... the downside was that interchange fees have been heavily
prorated based on fraud rates ... with an enormous profit component ...
actually eliminating the fraud enormously impacts those calculations
(and the profit).

NACHA Internet Council did debit pilot with support in the Tandem
network processors ... results published 23July2001, gone 404 but
lives on at the wayback machine
http://web.archive.org/web/20070706004855/http://internetcouncil.nacha.org/News/news.html

Compaq/Tandem had previously sponsored a large workshop for me Jan1999 on
the financial protocol standards ... old long-winded post by somebody at
the workshop
http://www.garlic.com/~lynn/aepay3.htm#riskm

The CEO of one of the companies that we had been working with (and who
was at the meeting) had been the head of POK mainframes in a prior
life.

tandem ref (which includes a reference to Jim Gray, whom I worked with
at IBM and who left for Tandem ... but by the mid-90s was at Microsoft)
and
https://en.wikipedia.org/wiki/Jim_Gray_%28computer_scientist%29

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Lineage of TPF

2016-01-23 Thread Anne & Lynn Wheeler
edgould1...@comcast.net (Ed Gould) writes:
> I was not on the the team (next cubicle over). I was somewhat involved
> in the precursor(?) of Mastercard called Town & Country.  This was in
> Chicago.  The OS that Mastercard was was written was DOS (I *THINK* it
> was on a 360/30) and to some extent MFT (350/50) (this goes back 40 or
> so years so please forgive the memory errors).  I do not have anything
> to add to the mastercard/ and the VISA (I just do not remember what
> the name was).  I will take as face value about the battle, although I
> do remember it somewhat.

mastercard had a huge number of series/1 in their network, interfacing
between acquirers and issuers.

around the turn of the decade ... the populace was moving from credit to
debit ... and the card associations introduced "signature debit" at
point-of-sale ... which ran through the credit networks (and had credit
level fraud and the much higher credit/fraud interchange fees) ... rather
than pin-debit through the debit networks, which had much lower fees (as
well as the card associations not getting anything). the national retailer
association then had anti-trust legal action against the card
associations for forcing debit point-of-sale transactions as "signature
debit" (with significantly higher fees) ... and won a huge settlement.

card associations then came up with "cash back" as an alternative ...
where the cash back interchange rate that merchants pay is significantly
higher than the "cash back" that consumers actually see (this is
eventually going to replace the enormous amounts they make off fraud
surcharge fees ... when they get around to deploying more secure
technology).

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Lineage of TPF

2016-01-23 Thread Anne & Lynn Wheeler
other trivia ... IBM had bought a complex that had originally been built
in Purchase as the new Nestle hdqtrs (before Nestle ever moved in). In the
90s, during the IBM troubles ... the new CEO was looking to raise cash
and was selling off real estate (even at well below market and sometimes
even below what was originally paid) ... and sold the Purchase bldg to
MasterCard for its new hdqtrs. We had a meeting there (to discuss online
banking) shortly after MasterCard moved in; they said that they paid more
to have all the door hardware/handles replaced ... than they paid IBM for
the complex (something like 1% of the original Nestle building cost).

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Man Versus System

2016-01-22 Thread Anne & Lynn Wheeler
john.archie.mck...@gmail.com (John McKown) writes:
> ​Descended from ACP (Airline Control Program).
> https://en.wikipedia.org/wiki/IBM_Airline_Control_Program​
>
> I worked at Braniff Airways before it went under. The reservation system
> ran ACP on a 2 Meg 3033. The thing would IPL in about 5 seconds. The ACP
> systems people were a bit strange. They had the source and modified it. I
> remember the CE complaining that the ACP attached tapes (3420s) would just
> die with "no warnings at all" whereas the MVT (yes MVT on a 3033) and, a
> bit later MVS and VM would show temp errors. The ACP people then told the
> CE that they had removed all logging of temporary errors to speed up
> processing. Not just on the tapes, but on the 3344 disks as well. IIRC, the
> 3344s on ACP actually used "software duplexing" for reliability.

there was big problem with 3081 ... which originally was going to be
multiprocessor only ... and ACP/TPF didn't have multiprocessor support
(they were afraid that the whole market would move to clone processors
which were building newer single processor machines). As an interim, they
shipped some number of releases of VM370 with very unnatural things done
to it specifically for running ACP/TPF on multiprocessors (but which
degraded performance for all other customers). Eventually they shipped the
3083 ... which was a 3081 box with one of the processors removed (minor
trivia: simplest would have been to remove the 2nd processor, which was in
the middle of the box ... but that would have made the box dangerously
top-heavy, so they had to rewire "processor 0" to be the processor in the
middle and remove the processor at the top of the box). other issues with 308x
... highlighting that it was warmed over FS technology
http://www.jfsowa.com/computer/memo125.htm

later in the 80s, my wife did a temporary stint as chief architect for
Amadeus (euro res system based on the old Eastern "System One") ... the
communication group got her replaced because she backed x.25 (instead of
sna/vtam) ... it didn't do them much good because Amadeus went with x.25
anyway.

later in the mid-90s, we were asked to look at re-engineering one of
the largest airline res systems in the world ... starting with ROUTES
(about 25% of total mainframe processing load) and addressing the ten
impossible things that they couldn't do. I went away and two months
later came back with a totally different ROUTES implementation that ran
about a hundred times faster and did all ten impossible things
... including ten RS/6000 990s being able to handle every ROUTES
transaction for every airline in the world. The issue was that much of
the ACP/TPF implementation was dictated by technology trade-offs made in
the 60s ... it was possible to start from scratch 30yrs later and make
totally different trade-offs (and a decade later, cellphone processors
had the processing capacity of those ten 990s).

It was fun because they provided me with a tape of the full OAG ...
including a record for every scheduled airline flight in the world.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Compile error

2016-01-22 Thread Anne & Lynn Wheeler
jo.skip.robin...@att.net (Skip Robinson) writes:
> The name 'DB2' seems to have followed the 1980s tradition of what I call
> 'name bloat', the practice of inflating a moniker in one way or another to
> make a product look more mature or more elegant. The paragon in my mind was
> dBASE II from Ashton-Tate. There never was a plain old dBASE. The roman
> numeral was added from the get-go to make the product seem new and improved.
> Moreover, there was never an 'Ashton'. That name was invented because, gosh
> darn it, it sounded good hyphenated with Tate, a real person. 
>
> Before DB2 there was precedent for name bloat within IBM. There never was a
> plain old 'JES'. The product emerged from the cocoon as JES2. There had been
> a predecessor product called 'HASP', which may or may not have been an
> acronym for Houston Automatic Spooling Priority, but the name 'J-E-S' was
> born complete with suffix. 
>
> Meanwhile there did emerge a 'JES3', but it was not an evolutionary
> descendant of JES2. Both products have coexisted, albeit uneasily, for
> decades. We used to imagine a JES5 or JES6 (depending on one's arithmetic
> proclivity) that would somehow combine the best features of both products,
> but it's almost certainly DOA. Likewise, the prospects for a 'DB3' are as
> dim as a distant star.

note that VS1 had JES1 (Job Entry Subsystem 1)
https://en.wikipedia.org/wiki/OS/VS1

The official names were OS/VS1 and OS/VS2 ... so JES2 may originally
have been to designate that it was for OS/VS2.

Long ago and far away, my wife was in the GBURG JES group and was part
of the catchers for ASP turning into JES3. She was then co-author of the
JESUS (JES Unified System) document ... which merged the features of
JES2 and JES3 that the respective customers couldn't live w/o ... for
various reasons it never saw the light of day.

A Fascinating History of JES2
http://www.share.org/p/bl/et/blogid=9=238

For the truth we must go back to the mid 1960's.  IBM's OS/360 was in
trouble.  The spooling (wonder where that name came from) support was
slow and the overhead was high.  Many programming groups independently
attacked the problem.  ASP, loosely based upon the tightly coupled IBM
7090/7094 DCS, held the lead in the OS/360 spooling sweepstakes.  ASP's
need for at least two CPU's fit well with IBM Marketing's plans for the
System/360.  Meanwhile, a group of IBM SE's, located in Houston,
developed a different product of which they were justifiably proud.
They wanted to popularize it, as they correctly suspected it would be
the balm for OS/360 users, increasing the usability and popularity of
the operating system, and, not incidentally, furthering their careers.
All they needed was the right name!  A name which was easy to remember,
a name which would draw attention to their product, and a name to
distract from the ASP publicity.  That name was Half-ASP, or HASP.
Naturally, if HASP and ASP were products of two different companies, the
FTC would have stepped in to stop such a predatory product name.
Regulatory action was prevented, however, because IBM is "one big happy
family", believed by many to be larger than the Government.

... snip ...

of course officially, the "H" stands for "Houston"
https://en.wikipedia.org/wiki/Houston_Automatic_Spooling_Priority

then my wife was con'ed into going to POK to be responsible for
loosely-coupled architecture ... where she did "peer-coupled shared data
architecture" ... which saw very little uptake (except for IMS
hot-standby) until SYSPLEX & Parallel SYSPLEX ... contributing to her
not remaining long in POK (along with the ongoing periodic battles with
the communication group trying to force her into using SNA/VTAM for
loosely-coupled operation). some past posts
http://www.garlic.com/~lynn/submain.html#shareddata

as undergraduate in the 60s, I got to make a lot of HASP modifications
(I had also been hired fulltime by the university to be responsible for
production mainframe systems) ... including implementing terminal
support and conversational editor in HASP for a form of CRJE.
https://en.wikipedia.org/wiki/Remote_job_entry

The "DB2" name may have been because some had hopes that the official new
DBMS "EAGLE" might still be able to rise from its ashes ... or it was to
designate the OS/VS2 (aka MVS) version of System/R, as opposed to the
earlier SQL/DS version of System/R (which ran on VM370, VS1, and DOS/VSE).

trivia: one of the problems with the System/R tech transfer to Endicott
for SQL/DS ... was that several enhancements to vm370 had been made to
make System/R much more efficient. For various reasons, the Endicott
people didn't want to make SQL/DS release dependent on getting
enhancements into VM370 ... and so that had to be dropped.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

Re: Compile error

2016-01-22 Thread Anne & Lynn Wheeler
other trivia from ibm jargon:

MVM - n. Multiple Virtual Memory. The original name for MVS (q.v.),
which fell foul of the fashion of changing memory to storage.

MVS - n. Multiple Virtual Storage, an alternate name for OS/VS2
(Release 2), and hence a direct descendent of OS. OS/VS2 (Release 1)
was in fact the last release of OS MVT, to which paging had been
added; it was known by some as SVS (Single Virtual Storage). MVS is
one of the big two operating systems for System/370 computers (the
other being VM (q.v.)). n. Man Versus System.

... snip ...

as part of the "Man Versus System" theme ... it had become significantly
easier to work out lots of computer concepts and designs on
vm370/cms ... and then later port the implementation to MVS ... than
to start on an MVS base.

some time ago, I got a request about the history of adding virtual
memory to all 370s ... old post with an exchange from an IBMer involved
(who recently passed) with references to os/vs2, future systems, hasp/asp,
etc
http://www.garlic.com/~lynn/2011d.html#73

other parts of the thread
http://www.garlic.com/~lynn/2011d.html#71
http://www.garlic.com/~lynn/2011d.html#72
http://www.garlic.com/~lynn/2011d.html#74


-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Compile error

2016-01-22 Thread Anne & Lynn Wheeler
hal9...@panix.com (Robert A. Rosenberg) writes:
> And then there was Star Wars (AKA: A New Hope [which was added when
> the film was rereleased as part of the release of The Empire Strikes
> Back]) which opened with a crawl saying Episode 4". That was just
> because they were emulating the old serials where each segment was a
> numbered Chapter with its own title (which often reflected the
> cliffhanger being resolved or the plot point of that chapter).

co-worker at IBM would talk about Lucas attending San Jose Astronomy
club meetings and bringing draft outlines for all 8 episodes (for
members to review). More recent interviews with Lucas say that the first
episode he chose to do was the one most likely to get funding.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Compile error

2016-01-21 Thread Anne & Lynn Wheeler
thomas.sa...@fiserv.com (Savor, Thomas  , Alpharetta) writes:
> Management System or DBMS in 1983 when IBM >released DB2 on its MVS
> mainframe platform." -- Wikipedia, citing an IBM manual as authority.
>
> All these years, I've have only known of DB2.  The name seems to have stuck.
>   
> Was there ever a DB1 ??
> Will there ever be a DB3 ??

The original sql/relational implementation was System/R, done at SJR
(bldg. 28 on the main plant site, using a modified vm/370 on a 370/145).
History/Reunion:
http://www.mcjones.org/System_R/
wiki
https://en.wikipedia.org/wiki/IBM_System_R
and another history
http://www.theregister.co.uk/2013/11/20/ibm_system_r_making_relational_really_real/
and
http://www.cs.ubc.ca/~rap/teaching/504/2010/readings/history-of-system-r.pdf

The official new DBMS project was EAGLE ... with the corporation
focused on EAGLE, it was possible to get System/R out the door
as SQL/DS (under the radar).

When EAGLE imploded, there was a request about how long it would take to
port System/R to MVS ... eventually released as DB2 (originally for
analytical & decision support *only*).

past posts mentioning System/R
http://www.garlic.com/~lynn/submain.html#systemr
also referenced here
http://www.mcjones.org/System_R/citations.html

The Birth of SQL
http://www.mcjones.org/System_R/SQL_Reunion_95/sqlr95-The.html

Some discussion of EAGLE and then DB2
http://www.mcjones.org/System_R/SQL_Reunion_95/sqlr95-DB2.html

I periodically reference this post about Jan1992 meeting in Ellison's
conference room
http://www.garlic.com/~lynn/95.html#13

one of the people in the meeting would tell how he was responsible for
the majority of the tech transfer into the Santa Teresa Lab (now silicon
valley lab) for DB2.

Jim Gray departs for Tandem palming off some number of things on me
... old email ref:
http://www.garlic.com/~lynn/2007.html#email801016

Eventually IBM Toronto started an RDBMS for the IBM/PC ... implemented
in C ... which was made available on other platforms and is also called
DB2 ... even though it is a totally different code base from the
mainframe implementation.

SQL/DS is also eventually renamed DB2
https://en.wikipedia.org/wiki/IBM_SQL/DS

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Fibre Chanel Vs FICON

2016-01-03 Thread Anne & Lynn Wheeler
Kevin Bowling  writes:
> I'm shortly going to be the new owner of a z800 at home.  Looking
> forward to booting and playing with this bistro, what kind of disk array
> do I need?  Is fibre channel storage enough, or is FICON extra special
> at the protocol level?  Is there any way to network boot/emulate storage
> or will I be looking for FICON arrays next?

there are two issues ... one is the FICON protocol running over the
fibre-channel standard ... some past posts
http://www.garlic.com/~lynn/submisc.html#ficon

and controller emulation of CKD on industry standard fixed-block disks
(there haven't been any real CKD manufactured for decades).
http://www.garlic.com/~lynn/submain.html#dasd

there have been various past discussions about IBM charging/justifying a
significant $$/mbyte premium for that emulation

trivia

Build Your Own Fibre Channel SAN For Less Than $1000 - Part 1
http://www.smallnetbuilder.com/nas/nas-howto/31485-build-your-own-fibre-channel-san-for-less-than-1000-part-1


-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: History question - In what year did IBM first release its DF/DSS backup & restore product?

2016-01-02 Thread Anne & Lynn Wheeler
ibmm...@computersupervisoryservices.com (Stephen Mednick) writes:
> Looking to find the answer to the question "in which year did IBM release
> its DF/DSS backup & restore product.

some trivia from the web
https://www.ibm.com/developerworks/community/blogs/InsideSystemStorage/entry/ibm_storwize_product_name_decoder_ring1?lang=en

In my post January 2009 post [Congratulations to Ken on your QCC
Milestone], I mentioned that my colleague Ken Hannigan worked on an
internal project initially called "Workstation Data Save Facility"
(WDSF) which was changed to "Data Facility Distributed Storage Manager"
(DFDSM), then renamed to "ADSTAR Distributed Storage Manager" (ADSM),
and finally renamed to the name it has today: IBM Tivoli Storage Manager
(TSM).

... snip ... 

Note: I had originally developed CMSBACK in the late 70s; it was used
at a number of internal sites (including the online, world-wide
sales support HONE system). It went through a number of internal
releases at San Jose Research ... which morphed into Almaden Research
when research moved up the hill in the mid-80s. The ability to back up
from distributed systems was added and it was then released to customers
as Workstation Data Save Facility. some past email
http://www.garlic.com/~lynn/lhwemail.html#cmsback

It was picked up by the storage division ... and renamed ADSM (ADSTAR
Distributed Storage Manager) ... the storage division had been renamed
ADSTAR as part of reorganizing the company into the 13 "baby blues" in
preparation for breaking up the company (then the board brought in a new
CEO to reverse the breakup and resurrect the company).

The company acquired Tivoli (started by a couple of former IBMers who had
been in the rs/6000 workstation group in Austin) and ADSM was moved
to Tivoli, morphing into TSM.

We spent some amount of time consulting for the ADSTAR VP of software on
a number of items ... not just ADSM; he was also behind the original
MVS/USS development ... and provided funding for some number of non-IBM
storage related startups. I've mentioned several times:

A senior disk engineer got a talk scheduled at communication group
world-wide internal annual conference supposedly on 3174 performance
... however he opened the talk with the statement that the communication
group was going to be responsible for the demise of the disk
division. The issue was that the communication group had a stranglehold
on datacenters with its strategic ownership of everything that crossed
the datacenter wall and was fighting off distributed computing and
client/server trying to preserve its (emulated) dumb terminal paradigm
and install base. The disk division was seeing data fleeing to more
distributed computing friendly platforms with drop in disk sales. The
disk division had come up with a number of solutions to reverse the
trend, but they were constantly vetoed by the communication group.

...

As an attempted work-around (to the communication group), the VP of
software would fund non-IBM efforts to provide mainframe distributed
computing, and part of what he had us do was to try and keep track of
some number of these activities.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Is there a source for detailed, instruction-level performance info?

2015-12-27 Thread Anne & Lynn Wheeler
shmuel+ibm-m...@patriot.net (Shmuel Metz  , Seymour J.) writes:
> We ran more than that, plus TSO, on a 2 MiB machine.

IBM executives were looking at the 370/165 ... where the typical customer
had 1mbyte ... in part because 165 real memory was very expensive ... and
typical regions were such that they only got four in 1mbyte (after the
system real storage requirement).

later, memory for the 370/168 was less expensive ... and four mbytes
started to become much more common ... aka four mbytes on a 370/165 would
have meant that the typical MVT customer could have gotten 16 regions
... w/o having to resort to virtual memory ... but the decision had
already been made.

basically, the initial transition from os/mvt to os/vs2 svs was MVT laid
out in a single 16mbyte virtual address space ... plus a little bit of
code to build the segment/page tables and handle page faults. The biggest
code hit was adding channel program translation in EXCP ... code
initially copied from the CP67 CCWTRANS channel program translation.

prior reference/discussion of the justification for 370 virtual memory
http://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory

the later transition to os/vs2 MVS with multiple virtual address spaces
... had other problems. The os/360 MVT heritage was heavily based on a
pointer-passing API paradigm. With the move to MVS, the first fix was to
put an 8mbyte image of the MVT kernel into every
application virtual address space ... leaving only 8mbytes (out of 16)
for application use. Then, because subsystems were now in their own
(different) virtual address spaces ... a way was needed for passing
parameters & data back and forth between applications and subsystems
using the pointer-passing API. The result was the "common segment" ... a
one mbyte area that also appeared in every virtual address space, which
could be used to pass arguments/data back and forth between applications
and subsystems (leaving only 7mbytes for applications). The next issue was
that demand for the common segment was somewhat proportional to the number
of concurrent applications and subsystems ... so the common segment area
became the common system area (CSA) as requirements exceeded 1mbyte. Into
the 3033 era, larger operations were pushing CSA to 4-5 mbytes and
threatening to push it to 8mbytes ... leaving no space at all for
applications (of course with the MVS kernel at 8mbytes and CSA at 8mbytes,
there wouldn't be any left for applications ... which drops the demand
for CSA to zero).
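
The squeeze is simple arithmetic; a minimal sketch (sizes in mbytes,
taken from the figures above):

```python
# 370 address-space squeeze: 16mbyte virtual address space, 8mbyte MVS
# kernel image mapped into every address space, plus a growing CSA.
TOTAL_MB = 16
KERNEL_MB = 8

def app_space(csa_mb):
    """mbytes left for the application after kernel image and CSA."""
    return TOTAL_MB - KERNEL_MB - csa_mb

for csa in (1, 4, 5, 8):
    print(f"CSA={csa}MB -> {app_space(csa)}MB left for the application")
```

With CSA at 1mbyte the application keeps 7mbytes; at 8mbytes nothing is
left, which is the reductio in the paragraph above.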

Part of the solution to the OS/360 MVT pointer-passing API problem,
included in the original XA architecture (later referred to as 811 ...
because the documents were dated Nov1978), was access registers ... and
the ability to address/access multiple address spaces. To try and
alleviate the CSA explosion in the 3033 time-frame ... a subset of access
registers was retrofitted to the 3033 as dual-address space mode ... but
it provided only limited help since it still required updating all the
subsystems to support dual-address space mode (instead of CSA).

trivia: the person responsible for retrofitting dual-address space mode
to the 3033 later leaves IBM for another vendor and later shows up as one
of the people behind HP Snake and later Itanium.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Is there a source for detailed, instruction-level performance info?

2015-12-24 Thread Anne & Lynn Wheeler
mike.a.sch...@gmail.com (Mike Schwab) writes:
> If branch predicting is a big hang up, the obvious solution is to
> start processing all possible outcomes then keep the one that is
> actually taken.  I. E.  B OUTCOME(R15) where R15 is a return code of
> 0,4,8,12,16.

aka, speculative execution ... instructions executed on path ... that
is not actually taken ... are not committed
https://en.wikipedia.org/wiki/Speculative_execution
and
https://en.wikipedia.org/wiki/Speculative_execution#Eager_execution

Eager execution is a form of speculative execution where both sides of
the conditional branch are executed; however, the results are committed
only if the predicate is true. With unlimited resources, eager execution
(also known as oracle execution) would in theory provide the same
performance as perfect branch prediction. With limited resources eager
execution should be employed carefully since the number of resources
needed grows exponentially with each level of branches executed
eagerly.[7]

... snip ...

https://en.wikipedia.org/wiki/Eager_evaluation
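
A toy software illustration of the eager execution described in the
excerpt (function names are illustrative only; real hardware does this
with duplicated functional units and a commit stage, not function calls):

```python
# Eager-execution sketch: evaluate BOTH sides of a branch, then commit
# only the side selected by the predicate. With unlimited resources this
# matches perfect branch prediction; resource demand doubles per nested
# branch level, which is why hardware bounds the depth.

def eager_branch(predicate, taken, not_taken):
    result_if_true = taken()        # speculative: executed regardless
    result_if_false = not_taken()   # speculative: executed regardless
    return result_if_true if predicate else result_if_false  # commit

result = eager_branch(2 + 2 == 4, lambda: "taken", lambda: "fall-through")
print(result)  # -> taken
```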

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Is there a source for detailed, instruction-level performance info?

2015-12-24 Thread Anne & Lynn Wheeler
rpin...@netscape.com (Richard Pinion) writes:
> Don't use zoned decimal for subscripts or counters, rather use indexes
> for subscripts and binary for counter type variables.  And when using
> conditional branching, try to code so as to make the branch the
> exception rather than the rule.  For large table lookups, use a binary
> search as opposed to a sequential search.
>
> These simple coding techniques can also reduce CPU time.

in the late 70s we would get together friday nights after work ... and
discuss a number of things ... along the lines of what came up in the
tandem memos ... aka I was blamed for online computer conferencing on the
internal network (larger than the arpanet/internet from just about the
beginning until sometime mid-80s) in the late 70s and early 80s. folklore
is that when the corporate executive committee was told about online
computer conferencing (and the internal network), 5of6 wanted to fire me.
from IBMJARGON:

[Tandem Memos] n. Something constructive but hard to control; a fresh of
breath air (sic). "That's another Tandem Memos." A phrase to worry
middle management. It refers to the computer-based conference (widely
distributed in 1981) in which many technical personnel expressed
dissatisfaction with the tools available to them at that time, and also
constructively criticized the way products were [are] developed. The
memos are required reading for anyone with a serious interest in quality
products. If you have not seen the memos, try reading the November 1981
Datamation summary.

... snip ...

one of the issues was that the majority of the people inside the company
didn't actually use computers ... and we thought things would be
improved if the people in the company actually had personal experience
using computers, especially managers and executives. So we eventually
came up with the idea of online telephone books ... covering (nearly)
everybody in the corporation ... especially if the lookup elapsed time
was less than looking a number up in the paper telephone book.

avg binary search of 256k entries is 18 probes ... aka 256k is 2**18.
Also important was that there were nearly 64 entries per physical block
... so binary search to the correct physical block is 12 reads (i.e. 64
is 2**6, 18-6=12).

However, it is fairly easy to calculate the first-letter frequency of
names ... so instead of doing a binary search, do a radix search (based
on letter frequency) and get to the correct physical block within 1-3
physical reads (instead of 12). We also got fancy doing two-letter
frequencies and partially adjusting the 2nd probe based on how accurate
the first probe was. In any case, binary search is for totally unknown
distribution characteristics.
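
A sketch of the probe arithmetic plus a frequency-based first probe (the
cumulative-frequency table here is a made-up toy, not the distribution
they actually measured):

```python
import math

# Figures from the text: 256K directory entries, ~64 entries per
# physical block.
ENTRIES = 256 * 1024                 # 2**18
PER_BLOCK = 64                       # 2**6

# Binary search over entries: 18 probes; but only block reads cost I/O,
# and binary search down to the right block takes 18 - 6 = 12 reads.
entry_probes = int(math.log2(ENTRIES))               # 18
block_reads = int(math.log2(ENTRIES // PER_BLOCK))   # 12

def radix_first_probe(name, cum_freq, n_blocks):
    """Estimate the target block from a cumulative first-letter
    frequency table (fraction of names sorting before that letter).
    With a measured table the estimate lands within 1-3 reads."""
    frac = cum_freq.get(name[0].lower(), 0.0)
    return min(int(frac * n_blocks), n_blocks - 1)

# toy cumulative table: e.g. 55% of names sort before 's'
cum = {"a": 0.00, "m": 0.30, "s": 0.55}
print(entry_probes, block_reads,
      radix_first_probe("smith", cum, ENTRIES // PER_BLOCK))
```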

So one friday night, we established the criteria: designing,
implementing, testing and deploying the lookup program had to take less
than a person-week of effort ... and less than another person-week to
design, implement, test and deploy the process for collecting,
formatting and distributing the online telephone books.

trivia ... long ago and far away ... a couple of people I had worked with
at Oracle (when I was at IBM and working on cluster scaleup for HA/CMP)
had left and were at a small client/server startup, responsible for
something called the commerce server. After cluster scaleup was
transferred, announced as the IBM supercomputer, and we were told we
couldn't work on anything with more than four processors ... we decided
to leave. We were then brought in as consultants at this small
client/server startup because they wanted to do payment transactions on
the server; the startup had also invented this technology called SSL that
they wanted to use, and the result is now frequently called "electronic
commerce".

The TCP/IP protocol has a session termination process that includes
something called the FINWAIT list. At the time, session termination was a
relatively infrequent process and common TCP/IP implementations used a
sequential search of the FINWAIT list (assuming that there would be few
or no entries on the list). The HTTP (& HTTPS) implementation chose to
use TCP ... even tho HTTP is a datagram-style protocol rather than a
session protocol. There was a period in the early/mid 90s, as web use was
scaling up, where webservers saturated, spending 90-95% of cpu time doing
FINWAIT list searches ... before the various implementations were
upgraded to do significantly more efficient management of the FINWAIT
(session termination) process.
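
A toy model of why the sequential scan collapsed under web load (names
are illustrative, not from any real TCP stack): with one full TCP
open/close per HTTP request, terminations pile up and the total scan cost
grows quadratically, while a hashed lookup stays linear.

```python
def finwait_scan_cost(n_terminations):
    """Total comparisons if each arriving termination sequentially
    scans the FINWAIT entries already on the list (no matches)."""
    finwait, comparisons = [], 0
    for conn in range(n_terminations):
        comparisons += len(finwait)   # walk the whole list
        finwait.append(conn)
    return comparisons

def finwait_hash_cost(n_terminations):
    """Same workload with a hash table: one probe per termination."""
    finwait, probes = set(), 0
    for conn in range(n_terminations):
        probes += 1
        finwait.add(conn)
    return probes

print(finwait_scan_cost(1000), finwait_hash_cost(1000))  # 499500 1000
```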

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Is there a source for detailed, instruction-level performance info?

2015-12-24 Thread Anne & Lynn Wheeler
charl...@mcn.org (Charles Mills) writes:
> Not so simple anymore.
>
> "How long does a store halfword take?" used to be a question that had an
> answer. It no longer does.
>
> My working rule of thumb (admittedly grossly oversimplified) is
> "instructions take no time, storage references take forever." I have heard
> it said that storage is the new DASD. This is true so much that the z13
> processors implement a kind of "internal multiprogramming" so that one CPU
> internal thread can do something useful while another thread is waiting for
> a storage reference.
>
> Here is an example of how complex it is. I am responsible for an "event" or
> transaction driven program. I of course have test programs that will run
> events through the subject software. How many microseconds does each event
> consume? One surprising factor is how fast do you push the events through.
> If I max out the speed of event generation (as opposed to say, one event
> tenth of a second) then on a real-world shared Z the microseconds of CPU per
> event falls in HALF! Same exact sequence of instructions -- half the CPU
> time! Why? My presumption is that because if the program is running flat out
> it "owns" the caches and there is much less processor "wait" (for
> instruction and data fetch, not ECB type wait) time.

so such accounting measuring CPU time (elapsed instruction time) is
analogous to early accounting which measured elapsed wall clock time.

cache miss/memory access latency ... when measured in processor cycles,
is comparable to 60s disk access when measured in 60s processor cycles.

There is a lot of analogy between page thrashing when overcommitting
real memory and cache misses. This is an old account of the motivation
behind moving 370 to all virtual memory. The issue was that as
processors got faster, they spent more and more time waiting for disk.
To keep the processors busy required increasing levels of
multiprogramming to overlap execution with waiting on disk. At the time,
MVT storage allocation was so bad that region sizes needed to be four
times larger than actually used. As a result, a typical 1mbyte 370/165
would only have four regions. Going to virtual memory, it would be
possible to run 16 regions in a typical 1mbyte 370/165 with little or no
paging ... significantly increasing aggregate throughput.
http://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory
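The region arithmetic in that account can be sketched with illustrative
numbers (the 1mbyte and 4x over-allocation figures are from the text;
the per-region working size is a hypothetical chosen so the counts come
out to 4 and 16):

```python
# Illustrative arithmetic for the MVT-vs-virtual-memory region counts.
# 1 Mbyte real memory (typical 370/165); MVT regions were allocated
# roughly 4x larger than the storage they actually touched.

REAL_MEMORY_KB = 1024          # typical 370/165 real storage
REGION_ACTUAL_KB = 64          # hypothetical working size of one region
MVT_OVERALLOCATION = 4         # MVT allocation ~4x actual use

mvt_region_kb = REGION_ACTUAL_KB * MVT_OVERALLOCATION
mvt_regions = REAL_MEMORY_KB // mvt_region_kb      # only 4 regions fit

# With virtual memory, only pages actually touched occupy real storage,
# so roughly 4x as many regions fit with little or no paging.
vm_regions = REAL_MEMORY_KB // REGION_ACTUAL_KB    # 16 regions

print(mvt_regions, vm_regions)   # -> 4 16
```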

risc has been doing cache miss compensation for decades: out-of-order
execution, branch prediction, speculative execution, hyperthreading ...
these can be viewed as a hardware analogy to 60s multitasking ... giving
the processor something else to do while waiting for a cache miss. A
decade or more ago, some of the other non-risc chips started moving to a
hardware layer that translated instructions into risc micro-ops for
scheduling and execution ... largely mitigating the performance
difference between those CISC architectures and RISC.

IBM documentation claimed that half the per-processor improvement from
z10->z196 was the introduction of many of the features that have been
common in risc implementations for decades ... with further refinement
in ec12 and z13.

z10, 64processors, aggregate 30BIPS or 469MIPS/proc
z196, 80processors, aggregate 50BIPS or 625MIPS/proc
EC12, 101 processor, aggregate 75BIPS or 743MIPS/proc

however, z13 claims 30% more throughput than EC12 with 40% more
processors ... which would make it 700MIPS/processor

by comparison, the z10-era E5-2600v1 blade was about 500 BIPS, 16 processors
or 31BIPS/proc. E5-2600v4 blade is pushing 2000BIPS, 36 processors or
50BIPS/proc.
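As a quick sanity check, the per-processor figures above are just
aggregate throughput divided by processor count (a rough back-of-envelope;
note that 30BIPS over 64 processors actually works out to ~469 MIPS/proc):

```python
# Back-of-envelope: per-processor MIPS = aggregate MIPS / processor
# count, using the aggregate figures quoted in the text.
machines = {
    "z10":  (30_000, 64),    # (aggregate MIPS, processors)
    "z196": (50_000, 80),
    "EC12": (75_000, 101),
}
for name, (mips, procs) in machines.items():
    print(f"{name}: ~{round(mips / procs)} MIPS/proc")

# z13 claim: 30% more aggregate throughput than EC12 with 40% more
# processors ... per-processor throughput actually drops slightly:
ec12_mips, ec12_procs = machines["EC12"]
z13_per_proc = (ec12_mips * 1.3) / (ec12_procs * 1.4)
print(f"z13: ~{round(z13_per_proc)} MIPS/proc")   # ~690, i.e. the "roughly 700" in the text
```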

as an aside, the 370/195 pipeline was doing out-of-order execution ...
but didn't do branch prediction or speculative execution ... and a
conditional branch would drain the pipeline. careful coding could keep
the execution units busy, getting 10MIPS ... but normal codes typically
ran around 5MIPS (because of conditional branches). I got sucked into
helping with hyperthreading the 370/195 (which never shipped); it would
simulate two processors with two instruction streams, two sets of
registers, etc ... assuming two instruction streams, each running at
5MIPS, would then keep all execution units running at 10MIPS.

from account of shutdown of ACS-360
http://people.cs.clemson.edu/~mark/acs_end.html

Sidebar: Multithreading

In summer 1968, Ed Sussenguth investigated making the ACS-360 into a
multithreaded design by adding a second instruction counter and a second
set of registers to the simulator. Instructions were tagged with an
additional "red/blue" bit to designate the instruction stream and
register set; and, as was expected, the utilization of the functional
units increased since more independent instructions were available.

IBM patents and disclosures on multithreading include:

US Patent 3,728,692, J.W. Fennel, Jr., Instruction selection in a
two-program counter instruction unit, filed August 1971, and issued
April 1973.

US Patent 3,771,138, J.O. Celtruda, et al., 

Re: DOS descendant still lives was Re: slight reprieve on the z.

2015-12-21 Thread Anne & Lynn Wheeler
t...@vse2pdf.com (Tony Thigpen) writes:
> The 4300 did not come out of Endicott. It was developed in Germany, in
> the same lab that developes DOS/VSE.

As an undergraduate I did lots of work on cp67 (including getting it to
run in a 256kbyte machine). The morph of cp67 to vm370 did a lot of
simplification of cp67 while at the same time bloating the kernel size,
so performance was seriously impacted running in 256kbytes. Boeblingen
does 115&125 ... and at one point I get dragged into optimizing vm370 to
run on 256kbyte 125 customer machines.

Boeblingen does 135 and 138.  Then Boeblingen does 4331. Endicott had
con'ed me into doing lots of work on 148 vm/370 ecps. At the same time
I was helping Endicott with 148 vm370 ECPS, the 125 group also asked me
to do the design and specification for a 5-way 125 SMP machine (which
never shipped; it turns out the Endicott 148 people felt threatened that
a 5-way 125 multiprocessor would impact their market ... which put me in
an odd position since I was doing both).

Later I was dragged into doing a lot of work with regard to the 4341.

Across the street, the disk engineering group in bldg. 14 and the disk
product test group in bldg. 15 dragged me into playing disk engineer ...
some past posts
http://www.garlic.com/~lynn/subtopic.html#disk

the product test group in bldg 15 would typically get the 3rd or 4th
engineering model for doing disk i/o testing. they got the 3rd
engineering model of 3033 and a very early E4 (4341) engineering machine
for testing.  Because I was doing so much stuff for them, I would get
lots of time on the bldg. 15 systems for other stuff I might want to do.
The performance test marketing group in Endicott con'ed me into doing
customer benchmarking on the bldg. 15 engineering E4/4341 ... since I
had better access to the machine ... than they had to early engineering
machines in Endicott. some old email
http://www.garlic.com/~lynn/lhwemail.html#43xx

email includes references that when the E4/4341 originally arrived in
bldg. 15 ... it had the processor cycle slowed down (allowing the
machine to work as they refined the engineering), so the benchmarks were
not as good as they could be. Later, as they refined the machine, they
were able to crank down the processor cycle.

One of the benchmarks was for LLNL, which was looking at getting 70
4341s for a compute farm (sort of the precursor to modern cluster, GRID
and supercomputing). It showed the 4341 was faster than the 158&3031,
and a 4341 cluster was faster, cheaper, and needed much less floor space
and environmentals than a 3033. The cluster 4341 threat to 3033 was so
big that at one point, the head of POK got corporate to cut the
allocation of a critical 4341 manufacturing component in half.

other trivia: circa 1980, there was a plan to move the large variety of
internal microprocessors to 801/RISC, including the low-end (vertical
microcode) 370s, what was to be the as/400, lots of controllers, etc.
For various reasons that effort floundered and they continued with
various CISC microprocessors. I helped somebody in endicott with a white
paper showing that VLSI was moving to the point that a large part of 370
could be implemented directly in silicon ... as opposed to having to be
emulated ... which would be much faster & better price/performance than
pure emulation in 801/RISC (another side effect was that some number of
801/RISC engineers left to work on RISC projects at other vendors).

Boeblingen does 4361 (4331 follow-on) and Endicott does 4381 (4341
follow-on) in CISC. IBM was expecting that 4361/4381 would continue the
enormous 4331/4341 sales explosion, but by that time the mid-range
market was starting to move to workstations and large PCs.

Previous posts mention that in the wake of the FS failure, there was a
mad rush to get stuff back into the 370 product pipeline; this included
the 3033 (168 logic remapped to faster chips) and 3081/xa, kicked off at
the same time
http://www.jfsowa.com/computer/memo125.htm

Turns out during the 3033 engineering period, I was also involved in a
16-way 370 SMP effort and we con'ed some of the 3033 processor engineers
to work on it in their spare time. At first everybody thought it was a
really great effort, and then somebody tells the head of POK that it could be
decades before MVS had effective 16-way support ... and he then invites
some of us to never visit POK again ... and tells the 3033 processor
engineers to stop being distracted by other activities.

With the 3033 out the door, the 3033 processor engineers start work on
trout1.5 (aka 3090, in parallel with ongoing 3081/xa) circa 1980.  Part
of the 3090 effort was to use 4331 as service processor running a highly
modified version of vm370 release 6 ... and I periodically get dragged
into that effort. The 3090 service processor eventually gets upgraded to
pair of 4361s running highly modified version of vm370 release 6 ... a
couple (later) old email references
http://www.garlic.com/~lynn/2010e.html#email861031
http://www.garlic.com/~lynn/2010e.html#email861223

Early days of REXX (well before ships to 

Re: DOS descendant still lives was Re: slight reprieve on the z.

2015-12-21 Thread Anne & Lynn Wheeler
other trivia

in the wake of FS and the mad rush ... 303x was kicked off ... as
mentioned, 3033 was 168 logic remapped to 20% faster chips ... chips
that happened to have ten times more circuits. Using the original 168
logic, 3033 would have been only 20% faster than 168 (aka 3.6mips).
However, some specific logic rework to use the extra circuits per chip
got it up to 50% faster than 168 (4.5mips).

158 manufacturing had been enormously automated ... somewhat like what
they quote for the incremental cost of an automobile rolling off the
line. The 158 integrated channel microcode was used for the 303x channel
director (a 158 engine w/o the 370 microcode and with the integrated
channel microcode). 3031 was two 158 engines, one with just the 370
microcode and a 2nd (the channel director) with just the integrated
channel microcode. A 3032 was a 168-3 reworked to use the 303x channel
director for external channels.

some benchmark numbers for LLNL ... which was looking at getting 70
4341s for a compute farm (precursor to modern GRID, cloud, and
supercomputers)

        158          3031         4341

Rain    45.64 secs   37.03 secs   36.21 secs
Rain4   43.90 secs   36.61 secs   36.13 secs

also times approx:

        145          168-3        91
        145 secs     9.1 secs     6.77 secs

also had a run in 35.77 secs on CDC6600. The 158's 370 was slower than
the 3031 because the (single) 158 engine was being shared between the
370 microcode and the integrated channel microcode (which ran even when
the channels were idle).

Part of the original morph from cp67 (and the 360/67) to VM370 (multiple
370 models) was that vm370 had a table of supported 370 models ... with
various model characteristics. As part of my moving from cp67 to vm370
... old email reference:
http://www.garlic.com/~lynn/2006v.html#email731212
http://www.garlic.com/~lynn/2006w.html#email750102
http://www.garlic.com/~lynn/2006w.html#email750430

... I replaced the static table of supported 370 models with dynamic
code to determine the characteristics ... it made it much simpler to
deploy a csc/vm production system to different machines (like
engineering models) not included in the shipped static table of
supported machines (csc/vm was named for the scientific center, where
the work was done). some past scientific center posts
http://www.garlic.com/~lynn/subtopic.html#545tech

Later I transfer to San Jose research ... on the san jose plant site
(across the street from bldgs. 14&15) and csc/vm morphs into sjr/vm.
Old 4341 email about engineering model processor cycle time includes
reference to checking the DSPSL value ... which is one of my dynamically
determined values ... old reference
http://www.garlic.com/~lynn/2006y.html#email790220

from my dynamic adaptive resource manager ... which was the guinea
pig for starting to charge for system/kernel software (customers
referred to it as "fair share" since the default resource management
policy was "fair share") ... some past posts
http://www.garlic.com/~lynn/subtopic.html#fairshare

then somebody leaks the benchmark numbers to the press ... and
they initially try to blame me ... reference
http://www.garlic.com/~lynn/2006y.html#email790226

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: DOS descendant still lives was Re: slight reprieve on the z.

2015-12-21 Thread Anne & Lynn Wheeler
jcew...@acm.org (Joel C. Ewing) writes:
> No (about the "free", not about the "dead for decades"), DOS/VS was the
> last really free base (last version Release 34?).   Perhaps technically
> DOS/VSE was "free", as there didn't appear to be a monthly licensing
> charge for DOS/VSE itself (Computerworld, April 30, 1979, p4), but in
> the practical sense a production DOS/VSE system was definitely not free
> as there were monthly support charges for DOS/VSE and separate monthly
> licensing plus support charges for must-have VSE add-on components like
> VSE/Power and others.  DOS/VSE came out with the IBM 4331 & 4341
> processors in 1979 and supported running in both S/370 mode or the
> ECPS:VSE mode supported by the 4300 processor family.

various legal actions resulted in the 23June1969 unbundling announcement,
where (application) software & other stuff started to be charged for
(however, they made the case that kernel software should still be free).
some past posts
http://www.garlic.com/~lynn/submain.html#unbundle

during the future system effort, 370 efforts were being killed off (the
lack of 370 products during that era is credited with giving clone
processors a market foothold). after future system failed ... past posts
http://www.garlic.com/~lynn/submain.html#futuresys

there was mad rush to get stuff back into 370 product pipeline.
POK kick off 3033 (168 logic mapped to faster chips) and
3081/XA in parallel ... reference
http://www.jfsowa.com/computer/memo125.htm
 
XA had a lot of extensions tailored for MVS.

Endicott did something similar for e-architecture (4331 & 4341),
tailored for vs1. In large part, a single virtual address space was
supported as part of the hardware architecture. Rather than having
segment & page tables ... there were two new instructions: one that told
the machine what virtual address was at what real address, and one that
invalidated a virtual address.

However there was an enormous explosion in vm/4300 sales (before
announce, 4341s were referred to as "E4") ... which required multiple
virtual address spaces ... which meant that large numbers of 4300s ran
in 370 mode rather than e-mode. Note that POK had convinced corporate to
kill off the vm370 product and move all the development people to POK as
part of MVS/XA development (including the excuse that MVS/XA wouldn't
ship on time if they couldn't get the additional resources). Endicott
managed to save the vm370 product mission, but had to reconstitute a
development group from scratch. some old 4300 related email
http://www.garlic.com/~lynn/lhwemail.html#43xx

Note that the VS1 and VM/370 "ECPS" was different from the e-machine
architecture. It originated with the 138/148 (virgil/tully) ... where
selected high-use kernel/system instruction paths were implemented in
microcode. The low & mid-range machines were vertical microcode machines
with an avg of 10 native instructions per 370 instruction (somewhat
analogous to mainframe emulators that run on Intel platforms). Kernel
instruction paths tended to get a 10:1 performance improvement when
moved to microcode. I did the initial study and effort for the VM/370
ECPS ... old post with results for selecting pathlengths to be moved to
microcode (I was told that I needed to select the 6kbytes of
highest-executed kernel paths, which turned out to account for 80% of
vm/370 kernel execution)
http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
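That selection exercise can be sketched as a greedy pick of the hottest
kernel paths under a microcode-size budget (the profile numbers below
are invented for illustration; the actual study measured real vm/370
kernel paths):

```python
# Greedy selection of kernel paths to move into microcode: rank by
# execution time covered per byte of microcode, pick until the 6kbyte
# budget is full. Profile data is hypothetical.

def select_paths(profile, budget_bytes):
    """profile: list of (name, size_bytes, exec_fraction)."""
    # hottest first: fraction of kernel execution per byte of microcode
    ranked = sorted(profile, key=lambda p: p[2] / p[1], reverse=True)
    chosen, used, covered = [], 0, 0.0
    for name, size, frac in ranked:
        if used + size <= budget_bytes:
            chosen.append(name)
            used += size
            covered += frac
    return chosen, used, covered

# invented illustrative numbers, not the real vm/370 measurements
profile = [
    ("dispatch", 1500, 0.30), ("page-fault", 2000, 0.25),
    ("free-storage", 1200, 0.15), ("virtual-i/o", 1800, 0.10),
    ("everything-else", 40000, 0.20),
]
chosen, used, covered = select_paths(profile, 6144)
print(chosen, used, f"{covered:.0%}")
```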

trivia ... the methodology for selecting the VS1 paths wasn't nearly so
rigorous.

other trivia ... a major motivation for the Future System product was as
a countermeasure to clone controllers ... but the (failed) Future System
effort contributed significantly to the rise of the clone processors.
The threat of clone processors resulted in the decision to transition to
charging for system/kernel software. I continued to work on 370 stuff
all during the FS period ... even periodically ridiculing FS stuff
... which wasn't exactly career enhancing. Also, one of my hobbies was
developing advanced/enhanced operating systems for internal
datacenters ... some old email
http://www.garlic.com/~lynn/2006v.html#email731212
http://www.garlic.com/~lynn/2006w.html#email750102
http://www.garlic.com/~lynn/2006w.html#email750430

In any case, the mad rush to get stuff back into the 370 product
pipeline contributed to the decision to pick up various pieces of my
stuff and ship them in products for customers. One part of that stuff
(the dynamic adaptive resource manager) was selected to be the guinea
pig for starting to charge for system/kernel software ... and I had to
spend some amount of time with lawyers & business types going over
policies for system/kernel software charging.

even more trivia: when 3033 looked at doing something similar to ECPS
... it didn't work out as well. 3033 was a horizontal microcode machine
that had been optimized so it was executing nearly one 370 instruction
per machine cycle. Directly dropping system/kernel 370 pathlengths into
microcode could even result in running slower than the original 370.

-- 
virtualization experience 

Re: OT: Electrician cuts wrong wire and downs 25,000 square foot data centre

2015-12-13 Thread Anne & Lynn Wheeler
p...@petelancashire.com (Pete Lancashire) writes:
> Showing my age 
>
> I worked for Burroughs as an engineering technician.
>
>  A customer with 360/65 instantaneous loss of power. I was there only for a
> couple hours to drop off some equipment. Later heard they lost a couple
> disk packs.

a separate issue from power failures precipitating disk drive failures:

IBM CKD dasd had a power loss failure mode ... where there wasn't enough
power to maintain memory contents ... but there was enough power left
for the controller to complete a write operation ... the problem was
that the channel had stopped transferring data ... so the controller
continued writing all zeros. The result was that after recovery, a
subsequent read would show no errors ... for the record that had the
write operation ("correctly") complete with all zeros (this was
especially troublesome when things like the VTOC record were being
written)

FBA introduced the rule that a physical record would not be written
unless all the data was available to correctly complete the write. This
philosophy continued with RAID (a write "failure" either completes
correctly or at least results in an error indication on subsequent
read).

During the 80s, there was lots of work trying to figure out how to
retrofit such a fix to CKD dasd ... or at least provide a way for the
system to recognize an incorrect trailing-zeros write.

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: OT: Electrician cuts wrong wire and downs 25,000 square foot data centre

2015-12-13 Thread Anne & Lynn Wheeler
tony.j.new...@btinternet.com writes:
> This happend to us, 3380 continued to write x'00' over VM byte
> allocation map on cyl 0.

The original CMS filesystem from the mid-60s almost had a fix for this
... updated filesystem control information was written to new locations
... and then the MFD was rewritten pointing to the new version of the
control information rather than the old version. It worked for all
writes except the MFD itself. The new CMS "EDF" filesystem in the 2nd
half of the 70s went to a pair of MFDs. There was the current MFD, and a
write would always go to the alternate MFD ... if it completes
correctly, the alternate becomes the (new) current and the (old) current
becomes the alternate.

A version number goes at the end of the (EDF) MFD; on startup/recovery,
both MFDs are read and the most recent one is used ... after a power
failure, a trailing-zeros write would always result in that record
appearing older than the other MFD.
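A minimal sketch of that dual-MFD scheme (the data layout and names are
hypothetical; EDF's actual on-disk format differs):

```python
# Two MFD slots; every write goes to the ALTERNATE slot, with a version
# number at the END of the record. A power-failure "trailing zeros"
# write destroys the version field, so on startup that copy always looks
# older than the other one and the intact copy wins.

mfd = [None, None]          # two MFD slots on disk
current = 0                 # index of the slot considered current

def write_mfd(payload: bytes, version: int) -> None:
    """Write new MFD contents to the alternate slot, version at the end."""
    global current
    alternate = 1 - current
    mfd[alternate] = payload + version.to_bytes(4, "big")
    current = alternate     # only after the write completed correctly

def recover() -> bytes:
    """On startup, read both slots and use the one with the higher version."""
    def version_of(rec):
        return 0 if rec is None else int.from_bytes(rec[-4:], "big")
    best = max(mfd, key=version_of)
    return best[:-4]

write_mfd(b"state-1", 1)
write_mfd(b"state-2", 2)
# Simulate a power failure mid-write: the record ends up as trailing
# zeros, wiping the version field, so it compares as version 0 (oldest).
mfd[current] = b"\x00" * 11
print(recover())   # -> b'state-1': the intact copy survives
```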

In the early 80s, I did a CP kernel filesystem ... including a spool
file system that addressed the problem for CP also. My motivation was
that I had the HSDT project and needed a VM370 spool file system that
ran much faster for RSCS driving multiple T1 (& faster) links (the
standard spool file system typically got 5-32kbytes/sec ... I needed
3mbytes/sec or better throughput). some past posts
http://www.garlic.com/~lynn/subnetwork.html#hsdt

I did the implementation in vs/pascal running in a virtual address space
but still managed a significant improvement over the standard
implementation done as part of the vm370 kernel in assembler.

At one point, I thought I finally had a path to getting it picked up
through the corporate network backbone, whose nodes were moving to
multiple 56kbit links. However, this was about the time the
communication group was putting intense pressure on the corporate
network to move to SNA ... technical people started being excluded from
the backbone meetings (so the meetings could focus on the pressure being
applied to move to SNA).

However, by that time I was also doing the throughput enhancements to
the mainframe TCP/IP product (also implemented in vs/pascal). At the
time, the standard product got about 44kbytes/sec using nearly a full
3090 processor. In some tuning tests of the "fixes" at Cray Research,
between a Cray and a 4341 ... got channel-speed throughput using only a
modest amount of the 4341 processor (possibly a 500 times improvement in
bytes moved per instruction executed). some past posts
http://www.garlic.com/~lynn/subnetwork.html#1044

old email reference to communication group forcing internal
network to move to SNA:
http://www.garlic.com/~lynn/2006w.html#email870302
and
http://www.garlic.com/~lynn/2011.html#email870306

other trivia ... I had also done a paged-mapped CMS filesystem,
originally for CP67 ... and then later moved it to VM370. In the early
80s (on 3380 drives), in side-by-side comparisons of a moderately i/o
intensive workload ... it would get three times the throughput of the
standard CMS filesystem. some past posts
http://www.garlic.com/~lynn/submain.html#mmap

old post with some (mmap) benchmark measurements from the 1st half of
the 80s ... included in '86 SEAS presentation
http://www.garlic.com/~lynn/2006.html#25 DCSS as SWAP disk for z/Linux
and repeated in hillgang meeting
http://www.garlic.com/~lynn/2011c.html#88 Hillgang -- VM Performance

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: IBM Automatic (COBOL) Binary Optimizer Now Availabile

2015-12-01 Thread Anne & Lynn Wheeler
hyperthreading trivia ...

early 70s, I got sucked into helping with a hyperthreading effort
for the 370/195 (which never shipped).

370/195 could run at 10MIPS, but most codes ran at 5MIPS. The 195 had
out-of-order execution, but didn't have branch prediction or speculative
execution ... so conditional branches stalled the pipeline (it took
careful programming to get 10MIPS; the abundance of conditional branches
in most codes would keep the machine to 5MIPS). Two i-streams running at
5MIPS would be able to keep the machine running at an aggregate 10MIPS.

The idea was to have dual i-streams and registers ... but the same
single pipeline and execution units ... with instructions flagged as to
i-stream. this discussion of the end of ACS-360 includes references to
hardware multithreading patents (and red/blue bit tagging).
http://people.cs.clemson.edu/~mark/acs_end.html

other acs-360 reference/trivia ... Amdahl says that executives were
afraid that it would advance the state-of-the-art too fast and the
company would lose control of the market ... so acs-360 was shut down.
There is also a description of acs-360 features that show up over 20yrs
later in ES-9000.

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: IBM Automatic (COBOL) Binary Optimizer Now Availabile

2015-11-30 Thread Anne & Lynn Wheeler
charl...@mcn.org (Charles Mills) writes:
> Now, in a sense, mainframes ARE getting faster. More cache. Higher
> real memory limits and for Z, dramatically lowered memory prices. That
> processor multi-threading thing. But especially, new instructions that
> are inherently faster than the old way of doing things. Load and store
> on condition are the i-cache's dream instructions! Lots and lots of
> new "faster way to do things" instructions on the z12 and z13.

cache miss access to memory ... when measured in number of processor
cycles ... is comparable to 60s disk access time when measured in
number of 60s processor cycles. non-mainframe processors have been doing
memory latency compensation for decades: out-of-order execution, branch
prediction, speculative execution, hyperthreading, etc (aka waiting for
memory access increasingly being treated like multiprogramming in the
60s while waiting for disk i/o). Also, industry standard, non-risc
processors some time ago introduced risc micro-ops ... where standard
instructions were translated into risc micro-ops for execution
scheduling.

mainframe implementations are more & more reusing industry standard
implementations: fixed-block disks, the fibre-channel standard, CMOS,
etc. Half the per-processor performance improvement from z10->z196,
playing catchup, is claimed to be the introduction of some of these
industry standard memory access compensation technologies ... with
further additions in z12 (it's not clear about z13 ... some numbers for
total system throughput compared to z12 are less than the increase in
number of processors ... possibly implying that per-processor throughput
didn't increase or even declined).

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: Were you at SHARE in Seattle? Watch your credit card statements!

2015-11-21 Thread Anne & Lynn Wheeler
martin_pac...@uk.ibm.com (Martin Packer) writes:
> Ah Chip & PIN at last.

there was a large pilot deployment in the US around the turn of the
century ... however it was in the "YES CARD" period ... the issue was
that the same skimming exploits used to collect information for
counterfeit magstripe cards could be used for making counterfeit
chipcards.

Gov. LEOs gave a description of "YES CARD" cases at an ATM Integrity
Task Force meeting ... prompting somebody in the audience to exclaim
that they had managed to spend billions of dollars to prove that
chipcards are less secure than magstripe.

In the wake of that, all evidence of the pilot evaporated w/o a trace,
and the speculation was that it would be a long time before things were
tried in the US again (waiting for more glitches to be worked out in
other jurisdictions).

The problem was 1) it was as easy to make counterfeit chipcards as
magstripe and 2) they had moved business rules out into the chip.  A
chipcard terminal would ask the chip 1) was the correct PIN entered, 2)
should the transaction be done offline, 3) is the transaction within the
credit limit. A counterfeit "YES CARD" would answer "YES" to all three,
so it didn't need to know the correct PIN and didn't need to do an
online check with the backend (and all transactions are approved). The
traditional countermeasure for a counterfeit magstripe card is to
deactivate the account at the backend ... but that doesn't work with a
"YES CARD".
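Why the backend countermeasure fails can be sketched as follows (the
terminal logic and limits are hypothetical, but follow the
three-question dialog described above):

```python
# A genuine chip enforces the issuer's business rules; a counterfeit
# "YES CARD" answers YES to every terminal question, so the transaction
# is approved offline and the backend's deactivated-account list is
# never consulted.

class GenuineChip:
    def pin_ok(self, pin): return pin == "1234"
    def approve_offline(self, amount): return amount < 50   # small txns only
    def within_limit(self, amount): return amount < 500

class YesCard:
    """Counterfeit chip: answers YES to all three questions."""
    def pin_ok(self, pin): return True
    def approve_offline(self, amount): return True
    def within_limit(self, amount): return True

def terminal(chip, pin, amount, account_active):
    if not chip.pin_ok(pin):
        return "declined: bad PIN"
    if chip.approve_offline(amount) and chip.within_limit(amount):
        return "approved OFFLINE"        # backend is never consulted
    # only here would the backend (and account deactivation) matter
    return "approved ONLINE" if account_active else "declined: account dead"

# Issuer has deactivated the account ... but the YES CARD approves
# offline, so the backend check is never reached:
print(terminal(YesCard(), "0000", 9_999, account_active=False))
# -> approved OFFLINE
```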

I had warned the people doing the pilot about the problems, but they
went ahead and did it anyway (they were myopically focused on
lost/stolen cards and ignored the counterfeit "YES CARD" scenarios).

Reference to "YES CARD" presentation at the bottom of this CARTES2002
trip report (gone 404, but lives on at the wayback machine)
http://web.archive.org/web/20030417083810/http://www.smartcard.co.uk/resources/articles/cartes2002.html

disclaimer: in the mid/late 90s, I was asked to do a protocol that
had no such vulnerabilities and was significantly more secure ... then
the transit industry also requested that it be able to run contactless
within the power constraints of a transit turnstile (w/o any
reduction in security) ... have you seen how long these
transactions take? ... even when they are getting full contact power.

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: Were you at SHARE in Seattle? Watch your credit card statements!

2015-11-21 Thread Anne & Lynn Wheeler
slight mainframe related trivia.

the chip had a booth at the '99 world-wide retail banking
conference ... along with a press release ... in this old post
http://www.garlic.com/~lynn/ansiepay.htm#x959bai X9.59/AADS announcement at BAI

leading up to the conference ... we spent a lot of time with one of the
other companies including regular meetings with their CEO ... who in
prior life had been president of DSD (pok mainframe).

a lot of the work had started in the x9a10 financial standard working
group, which had been given the requirement to preserve the integrity of
the financial infrastructure for *ALL* retail payments ... and as a
result it was required to work not only for point-of-sale ... but *ALL*
payments (including internet). The downside was that by eliminating much
of the fraud, it commoditized payments and reduced barriers to entry.
'99 was also the year that GLBA passed (now better known for the repeal
of Glass-Steagall); rhetoric on the floor of congress was that the
(original) primary purpose of GLBA was to prevent new entries into
banking (especially to prevent competition from entities with much more
efficient technologies).

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: Compiler

2015-11-13 Thread Anne & Lynn Wheeler
000433f07816-dmarc-requ...@listserv.ua.edu (Paul Gilmartin) writes:
> Well, yes.  Something about core competency.  Spend programming
> resource on an optimizing compiler which can produce object code
> faster, better, cheaper than redundant effort by human programmers.
> And the next generation ISA can be exploited merely by recompiling,
> not recoding.

modern compilers have detailed knowledge of the ISA and lots of the
programming tricks/optimizations/techniques done by the very best
assembler programmers (compiler state-of-the-art is typically considered
to have reached this point for most things by at least the late 80s).

One of the issues is that the C language has some ill-defined &
ambiguous features that inhibit better optimization (optimization that
is possible in some better-defined languages).

minor reference (not only optimization issues but also bugs)
http://www.ghs.com/products/misrac.html

This flexibility comes at a cost however. Ambiguities in the C language,
along with certain syntaxes, consistently trip up even the best
programmers and result in bugs. For software developers, this means a
large amount of unexpected time spent finding bugs. For managers, this
often means the single largest risk to their project.

... snip ...

The original mainframe TCP/IP product was done in pascal/vs ... and had
none of the programming bugs that have been epidemic in C language
implementations.

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: Gene Amhdahl Dies at 92

2015-11-12 Thread Anne & Lynn Wheeler
stars...@mindspring.com (Lizette Koehler) writes:
> Gene Amdahl, who helped IBM usher in general-purpose computers in the
> 1960s and challenged the company's dominance a decade later with his
> eponymous machines, has died. He was 92.
> He died on Nov. 10 at Vi at Palo Alto, a continuing care retirement
> community in Palo Alto, California, his wife Marian Amdahl said in a
> telephone interview. The cause was pneumonia, and he had Alzheimer's
> disease for about five years.

the end of ACS; Amdahl left IBM after ACS-360 was shut down
http://people.cs.clemson.edu/~mark/acs_end.html

ACS was shut down after IBM management decided it would advance the
state of the art too fast and they could lose control of the
market. Talks about ACS features that finally show up in ES/9000 more
than two decades later. Also references multithreading patents. I had
gotten sucked into a project that was looking at multithreading the
370/195 ... which never shipped.

In the early 70s, there was the FS project, completely different from
360&370 and intended to completely replace 360/370 ... and internal
politics was shutting down 370 efforts. The lack of 370 products during
this period is credited with giving clone processors their market foothold.
http://people.cs.clemson.edu/~mark/fs.html

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Self-service PC

2015-09-30 Thread Anne & Lynn Wheeler
charl...@mcn.org (Charles Mills) writes:
> Agreed. I did an HR systems evaluation a few years back (why is a
> coder evaluating HR systems? Don't ask.) and all were big on
> "self-service," by which they meant if an employee, for example,
> wanted to know how many vacation days s/he had in the bank, s/he did
> not have to call HR, s/he just signed onto the HR system with a Web
> browser (and with "role-based authority" much lower than an HR person)
> and looked.

20yrs ago it was webifying callcenter menu screens ... had to have a
computer-based authentication front-end and restrict access to
information to just the authenticated entity. it has been 20yrs of
reducing callcenter use (not having a real person at the other end).

slight topic drift ... 20yrs ago, consumer dialup online banking
operations were making presentations at financial conferences on the
motivation for moving to the internet; primarily the development costs
for proprietary modem drivers (at the time >60 drivers were typical)
and dialup infrastructure, enormous support costs associated with
serial-port modems, etc ... all gets offloaded to the ISP. Note at the
same time, the commercial dialup online banking operations were saying
that they would *NEVER* move to the internet because of a long list of
exploits and vulnerabilities (many that persist to this day) ... as an
aside, the commercial dialup online banking operations have subsequently
moved to the internet anyway.

self-service PCs in the past were typically associated with "kiosk",
library, etc, public PCs that anybody can walk up to (like store
machines looking for stock &/or price check) ... as opposed to the
webifying callcenter operations.

I had some number of meetings with the NIST rbac people in the 90s
http://csrc.nist.gov/groups/SNS/rbac/

at the time, it was much more oriented towards simplifying the security
office's handing out of fine-grained access ... and codifying
multi-party operations as a countermeasure to insider threats (no single
person had sufficient authority to complete any high-value operation).

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: More "ageing mainframe" (bad) press.

2015-09-27 Thread Anne & Lynn Wheeler
vbc...@gmail.com (Vince Coen) writes:
> I think the stats on migration failures show that many fail regardless
> of the target migration mainly is that they over estimate project
> time, and quality of the target systems being used in place of m/f.
>
> Taking a straight view the mainframe is slow compared to running on
> servers on a instruction throughput basis.
>
> What they miss however is the data through put specs compared to
> mainframes where the m/f still wins hands down.
>
> I have tried (just for my self) to build a 8 core PC with separate
> Sata controllers for each 15000 rpm drive to match up with m/f
> performance but apart from the high costs of each controller there is
> still the speed or lack of it of going from the controllers to the
> application because of bottle necks in the data bus.
>
> I have not seen any PC/server design mobo that gets around this
> problem and until they do - the mainframe is still "the man"  for data
> processing in bulk.

Lots of migration failures come from trying to make any change at all.

A simple scenario: the financial industry spent billions of dollars in
the 90s to move from "aging" overnight (mainframe) batch settlement to
straight-through processing using large numbers of parallel "killer
micros". A major source of failure was wide-spread use of industry
parallelization libraries (that had 100 times the overhead of COBOL
batch). I pointed it out at the time, but was completely ignored ...
the toy demos looked so neat. It wasn't until they tried to deploy that
they ran into the scaleup problems (the 100-times parallelization
overhead totally swamped the anticipated throughput increases from using
large numbers of "killer micros" for straight-through processing). In the
meantime there has been an enormous amount of work by the industry
(including IBM) on RDBMS parallelization efficiencies. An RDBMS-based
straight-through processing implementation done more recently easily
demonstrated all of the original objectives from the 90s ... but the
financial industry claimed that it would be at least another decade
before they were ready to try again (lots of executives still bore the
scars from the 90s failures and had become risk-averse).

In 2009, non-mainframe IBM was touting some of these RDBMS
parallelization scaleup efficiencies. I somewhat ridiculed them
... "From The Annals of Release No Software Before Its Time" ... since I
had been working on it 20yrs earlier (and got shut down, being told I was
not allowed to work on anything with more than four processors).

Also, in 1980 I got sucked into doing channel extender for STL that was
moving 300 people from the IMS group to an off-site bldg. The channel
extender work did lots of optimization to eliminate the enormous channel
protocol chatter latency over the extended link ... resulting in no
apparent difference between local and remote operation. The vendor then
tried to get IBM approval to release my support ... but there was a
group in POK working on some serial stuff (afraid that if it was in
the market, it would make releasing their stuff more difficult) and they
managed to get approval blocked. Their stuff is finally released a decade
later, when it is already obsolete (as ESCON with ES/9000). some past
posts
http://www.garlic.com/~lynn/submisc.html#channel.extender

In 1988, I was asked to help LLNL standardize some serial stuff they
had, which quickly morphs into the fibre-channel standard (including lots
of stuff that I had done from 1980).  Later some of the POK engineers
define a heavy-weight protocol for fibre-channel that drastically
reduces the native throughput, which is eventually released as FICON.
some past posts
http://www.garlic.com/~lynn/submisc.html#ficon

The latest published numbers I have from IBM are the peak I/O benchmark
for z196, which used 104 FICON (running over 104 fibre-channel) to get 2M
IOPS. At the same time there was a fibre-channel announced for e5-2600
blades that claimed over a million IOPS (two such fibre-channel have
greater native throughput than 104 FICON running over 104 fibre-channel).

In addition, there hasn't been any real CKD manufactured for decades,
CKD is simulated on industry standard fixed-block disks. It is possible
to have high-performance server blades running native fibre-channel with
native fixed-block disks that eliminates the enormous FICON and CKD
simulation inefficiencies.

A related z196 I/O throughput number: all 14 SAPs running at 100% busy
peak at 2.2M SSCH/sec ... however, the recommendation is that SAPs be
limited to 75%, or 1.5M SSCH/sec.

I have yet to see equivalent numbers published for EC12 or z13. EC12
press claimed 50% more processing than z196 (z196 @ 50BIPS to EC12 @
75BIPS) but only 30% more I/O throughput. The z13 quote has been 30%
more processing than EC12 (with 40% more processors than EC12).

Note that fibre-channel wasn't originally designed for the mainframe
... but for non-mainframe server configurations (that tend to run a few
thousand),

Re: Setting the writers right

2015-09-26 Thread Anne & Lynn Wheeler
charl...@mcn.org (Charles Mills) writes:
> "... the OPM is facing a huge problem with modernizing its security measures
> and tactics because of one acronym: COBOL. The programming language that
> rose to prominence in the 1960s is rampant throughout the OPM and with the
> advanced persistent threats federal agencies are experiencing, it's a
> significant vulnerability."
>
> -- http://fedtechmagazine.com/OPMhack 

somewhat obfuscation and misdirection away from the massive uptick in
outsourcing that occurred last decade (even some slightly related to
IBM). A big part of the uptick in outsourcing was the enormous lobbying
done by major private-equity firms on behalf of their acquisitions. "OPM
Contractor's Parent Firm Has a Troubled History"

when the CEO leaves IBM, he goes on to head up a major private-equity firm
http://www.wsj.com/articles/SB1037893592918171788
that does an LBO of the company that employs Snowden.
http://www.investingdaily.com/17693/spies-like-us/

Private contractors like Booz Allen now reportedly garner 70 percent of
the annual $80 billion intelligence budget and supply more than half of
the available manpower. They're not going away any time soon unless the
CIA and NSA want to start over and with some off-the-shelf laptops,
networked by the Geek Squad from Best Buy. Security clearances used to
be a government function too, but are now a profit center for various
private-equity subsidiaries.

... snip ...

Private equity tends to do everything possible to loot the companies
they acquire (the LBO industry had gotten such a bad name during the S&L
crisis that they changed their name to "private-equity").  In the case
of the "subsidiaries" doing outsourced security-clearance checks, they
were filling out the paperwork, but not actually doing the checks they
were being paid for.

AMEX had been in competition with KKR for the LBO of RJR and KKR
wins. KKR runs into problems with RJR and hires away the president of
AMEX to turn it around.
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco

IBM had gone into the red and was being re-orged into the 13 "baby
blues" in preparation for breakup; the board then hires away the former
president of AMEX to resurrect the company and reverse the breakup
... using some of the same measures used at RJR
http://www.ibmemployee.com/RetirementHeist.shtml

there has been a long-standing revolving door between gov and beltway
bandits and/or wallstreet ... example: a recent CIA director resigned in
disgrace (including a slap on the wrist for leaking classified
documents) ... and then joins KKR.
http://dealbook.nytimes.com/2013/05/30/k-k-r-hires-petraeus/

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: HP being sued, not by IBM.....yet!

2015-09-22 Thread Anne & Lynn Wheeler
john.archie.mck...@gmail.com (John McKown) writes:
> ​They are probably referring to a z, but doing it in such a way as to
> totally disparage it. The fact that the z13 is the fastest microprocessor
> currently existed just doesn't penetrate their mind because the original
> ​S/360 was designed in the 1960s.

z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS, (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012

I haven't seen the BIPS numbers for z13 yet, just a reference that z13 has
about 30% more throughput than EC12 (with 40% more processors) ... which
would be about 100BIPS & about 710MIPS/proc.

the claim is that half the per-processor improvement came from the
introduction of features like out-of-order execution, branch prediction,
etc., that have been in other chips for decades.

An e5-2600v1 blade (about concurrent with z196) is 400-500+ BIPS
(depending on model); IBM had a base list price of $1815, or about
$3.50/BIPS. However, the large cloud megadatacenters claim that they had
been doing their own system assemblies for decades (carefully choosing
components for total lifetime costs), at around $1/BIPS. The
commoditizing of these systems by the large cloud megadatacenters
possibly accounts for IBM unloading that product line (the chip
manufacturers were saying that they were shipping more processor chips
to the large cloud megadatacenters than they were shipping to the
brand-name system vendors).

By comparison, z196 works out to $560,000/BIPS (w/o software) and EC12
works out to $440,000/BIPS (vs. the large cloud megadatacenters at $1/BIPS).

An e5-2600v3 blade is rated at 2.5 times an e5-2600v1 blade and a current
e5-2600v4 blade is rated at 3.5 times an e5-2600v1 blade ... or over
1.5TIPS. A high-density rack of e5-2600v4 blades may have more
processing power than the aggregate of all mainframes in the world
today.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: HP being sued, not by IBM.....yet!

2015-09-22 Thread Anne & Lynn Wheeler
0047540adefe-dmarc-requ...@listserv.ua.edu (Bill Johnson) writes:
> http://www.theregister.co.uk/2015/09/22/michigan_sues_hp_for_upgrade_failure/
> Michigan failure.

remember HP had bought EDS:
http://www8.hp.com/us/en/hp-news/press-release.html?id=169924

EDS was originally founded by a former IBM salesman who then created a
mainframe services empire. Along the way, EDS was bought by GM to try to
improve their IT position and then later spun off (before being acquired by HP)
http://www.dallasnews.com/business/headlines/20121209-eds-sees-success-after-purchase-by-gm--but-at-a-cost.ece

trivia: in 1990, GM had the C4 taskforce to look at completely remaking
themselves ... since they were planning on heavily leveraging IT
technology, they invited representatives from technology companies to
participate. They could clearly articulate what the foreign competition
was doing right and the changes GM would have to make, but as the
bailout this century shows, they weren't able to make the necessary
changes (one of the shortcomings of their operation that they described
was also true of the mainframe operation at the time ... so I would chide
the mainframe representatives: how could they expect to contribute?).

more trivia: congress had passed foreign import quotas to significantly
reduce competition and greatly increase domestic profits, supposedly so
that the money would be used to completely remake themselves. However, in
the early 80s there was a call for a 100% unearned-profit tax on the US
industry since they were just pocketing the profit and continuing
business as usual (through the 1990 C4 taskforce up through the recent
bailouts and possibly beyond; the call to "remake" their business is 35
or so years old, and it hasn't been the case that they didn't know what
they needed to do).

In the mid-80s we had the opportunity to talk to GM/EDS periodically. Then
we were doing a presentation in Raleigh on "real" networking and happened
to mention that the GM/EDS people had said they were moving the
company off SNA to X.25. The communication group executives immediately
left the room ... and then came back and said they didn't care: GM/EDS
had already spent their IBM communication budget for the year ... so it
would be somebody else's problem in the future

other trivia: when Perot left GM/EDS and founded Perot Systems, he
brought in the former head of POK mainframe to be CEO. 

past posts mentioning C4-taskforce
http://www.garlic.com/~lynn/submisc.html#auto.c4.taskforce


-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Term "Open Systems" (as Sometimes Currently Used) is Dead -- Who's with Me?

2015-09-12 Thread Anne & Lynn Wheeler
t...@harminc.net (Tony Harminc) writes:
> In my experience, though, Windows was not generally included in what
> people meant by "open systems"; they meant UNIX, and if they failed to
> include z/OS (or OS/390) UNIX, it's because they were unaware of its
> existence. If they wanted to include Windows in a term meaning "not
> mainframes", they'd say "distributed systems". I hear very few people
> these days use the term "open systems" at all.

re:
http://www.garlic.com/~lynn/2015g.html#77 Term "Open Systems" (as Sometimes Currently Used) is Dead -- Who's with Me?
http://www.garlic.com/~lynn/2015g.html#79 Term "Open Systems" (as Sometimes Currently Used) is Dead -- Who's with Me?
http://www.garlic.com/~lynn/2015g.html#80 Term "Open Systems" (as Sometimes Currently Used) is Dead -- Who's with Me?

google archive
https://groups.google.com/forum/#!topic/bit.listserv.ibm-main/dvpRJRmFIJA

the advent of single-chip processors meant that companies could develop
hardware systems at very low cost ... but there was still enormous cost
associated with developing proprietary operating systems. What that
unleashed was these companies being able to adapt Unix for their
hardware at a small fraction of the cost of developing a proprietary
operating system from scratch. Saw a big explosion in companies doing
minis, workstations, mini-supers, supers, etc., all using commodity
processor chips and portable Unix.

IBM's office products group was going to use the 801/RISC ROMP chip to do
a displaywriter followon ... when that got canceled they decided to
retarget to the Unix workstation market and got the company that had
done the AT&T unix port for the IBM/PC (PC/IX) to do one for ROMP
... released as PC/RT with AIX2. Some past posts
http://www.garlic.com/~lynn/subtopic.html#801

Along the way saw universities doing unix work-alikes ... UCB doing BSD,
UCLA doing Locus, CMU doing MACH, etc.

IBM Palo Alto Science Center was working on doing UCB BSD for 370 when
they got retargeted to the PC/RT ... which came out as AOS. They had also
been working with UCLA Locus ... which was eventually released as
AIX/370 & AIX/386 (Locus AIX having little directly to do with the AT&T
UNIX for PC/RT). Jobs had left Apple and was doing NeXT using MACH as
the base system; when Jobs comes back to Apple, he brings MACH with him
to be the basis for the Apple operating system.

AT&T & Sun then try to make UNIX more "proprietary" ... kicking
off the UNIX wars
https://en.wikipedia.org/wiki/Unix_wars
and
https://en.wikipedia.org/wiki/Open_Software_Foundation

The organization was first proposed by Armando Stettner of Digital
Equipment Corporation at a by-invitation-only meeting hosted by DEC for
several UNIX system vendors in January 1988 (called the "Hamilton
Group", since the meeting was held at DEC's offices on Palo Alto's
Hamilton Avenue).[3] It was intended as an organization for joint
development, mostly in response to a perceived threat of "merged UNIX
system" efforts by AT&T Corporation and Sun Microsystems.

...

The foundation's original sponsoring members were Apollo Computer,
Groupe Bull, Digital Equipment Corporation, Hewlett-Packard, IBM,
Nixdorf Computer, and Siemens AG, sometimes called the "Gang of Seven"

... snip ...

which also gave big boost to POSIX
https://en.wikipedia.org/wiki/POSIX

The disk division executive sponsored the POSIX/Open implementation for
MVS ... partly as a work-around to communication group opposition to
client/server and distributed computing ... but it was also motivated by
being able to bid MVS for gov. contracts requiring POSIX compliance.

i86 processors became the dominant commodity processor chips
... drastically reducing some of the hardware portability issues ...
and Windows took over as the dominant operating system.

the rise of Linux was partially because the new computing paradigm for
GRID and CLOUD computing can have millions of processors ... and being
able to evolve that new computing paradigm needed full, unrestricted
source.

Big cluster supercomputers evolved into GRID ... and the big cloud
megadatacenters were not too far behind (started leveraging some of the
same components) ... mostly dependent on freely available Linux source
(although as the paradigm matured, some cases of other systems jumping
on the bandwagon).

The big cloud megadatacenters have enormously expanded on-demand
computing ... there are even instances of dynamically spinning up an
on-demand supercomputer using a credit card. Four years ago there was a
case of an on-demand 240TIPS supercomputer created for research. A year
later, there was a case of a dynamically created on-demand supercomputer
three times larger, for 3hrs of cancer research (it would have ranked in
the top 50 supercomputers in the world).

By comparison, a max-configured EC12 is 101 processors rated at 75BIPS
(743MIPS/proc); z13 claims 30% more throughput than EC12 with 40% more
processors (700MIPS/proc?). 240TIPS would be the equivalent of over 3000
max-configured EC12 systems ... and the more recent one, more like 10,000 max

Re: Term "Open Systems" (as Sometimes Currently Used) is Dead -- Who's with Me?

2015-09-11 Thread Anne & Lynn Wheeler
j...@well.com (Jack J. Woehr) writes:
> How about "if all my disparate operating systems support TCP/IP and
> C/C++, it's easier to accomplish the mission"?
>
> Which is more or less what it has come down to.

re:
http://www.garlic.com/~lynn/2015g.html#78 Term "Open Systems" (as Sometimes Currently Used) is Dead -- Who's with Me?

the original mainframe TCP/IP product was done in vs/pascal for VM370 and
the communication group was sort of pushed into a corner to eventually
let it be released ... however with some performance issues (max
44kbytes/sec using nearly a full 3090 processor). It was ported to MVS
by providing simulation of the required VM370 functions.

Open systems have had epidemics of exploits and vulnerabilities
attributed to C-language buffer-length and addressing semantics (the
mainframe vs/pascal implementation was not known to have similar
problems) ...  some past posts on the subject
http://www.garlic.com/~lynn/subintegrity.html#buffer

I did the modifications to the vm370 version to support RFC1044 ... and
in some tuning tests at Cray Research got sustained channel throughput
between a 4341 and a Cray using only a modest amount of 4341 processor
time (possibly a 500-times improvement in bytes moved per instruction
executed). some past posts
http://www.garlic.com/~lynn/subnetwork.html#1044

much later the communication group hired a subcontractor to do TCP/IP
implementation in vtam and after the initial demonstration they told him
that everybody *knows* that a *valid* tcp/ip implementation is slower
than LU6.2, and they would only be paying for a *valid* implementation.

we were also working with NSF and its supercomputer centers on
interconnecting the labs ... originally we were supposed to get
$20M. Then congress cut the budget and some other things happened and
finally NSF releases an RFP (several pieces based on what we already had
running) ... but internal politics prevents us from bidding. The
director of NSF tries to help and writes the corporation a letter
(copying the CEO) with support from other agencies ... but that just
made the internal politics worse (as did comments that what we already
had running was at least five years ahead of all bid submissions). some
old email
http://www.garlic.com/~lynn/lhwemail.html#nsfnet

as regional networks connect into the nodes, it evolves into
the NSFNET backbone ... precursor to modern internet. some
discussion
http://www.technologyreview.com/featuredstory/401444/grid-computing/

some past posts
http://www.garlic.com/~lynn/subnetwork.html#nsfnet
and
http://www.garlic.com/~lynn/subnetwork.html#internet

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Term "Open Systems" (as Sometimes Currently Used) is Dead -- Who's with Me?

2015-09-11 Thread Anne & Lynn Wheeler
j...@well.com (Jack J. Woehr) writes:
> Not Found ... but I went through several of the others ...
> one could spend the rest of one's careers reading your posts ;)

re:
http://www.garlic.com/~lynn/2015g.html#77 Term "Open Systems" (as Sometimes Currently Used) is Dead -- Who's with Me?
http://www.garlic.com/~lynn/2015g.html#79 Term "Open Systems" (as Sometimes Currently Used) is Dead -- Who's with Me?

garlic.com changed/moved their webserver on 16apr2015 and I am still
trying to work out how to update files at the new webserver (I've
exchanged lots of email, still haven't worked it out).

in the meantime, this thread is archived in google groups
https://groups.google.com/forum/#!topic/bit.listserv.ibm-main/dvpRJRmFIJA

reference here from last decade
http://www.ibmsystemsmag.com/mainframe/stoprun/Stop-Run/Making-History/

I was blamed for online computer conferencing on the internal network
(larger than arpanet/internet from just about the beginning until
mid-80s) in the late 70s & early 80s; folklore is that when the
corporate executive committee was told about online computer
conferencing (and the internal network), 5of6 wanted to fire me.

one reference (from IBMJargon):

Tandem Memos - n. Something constructive but hard to control; a fresh of
breath air (sic). That's another Tandem Memos. A phrase to worry middle
management. It refers to the computer-based conference (widely
distributed in 1981) in which many technical personnel expressed
dissatisfaction with the tools available to them at that time, and also
constructively criticised the way products were developed. The memos
are required reading for anyone with a serious interest in quality
products. If you have not seen the memos, try reading the November 1981
Datamation summary.

... snip ...

somewhat as a result, a researcher was paid to sit in the back of my
office for 9 months to study how I communicated; took notes on
face-to-face and telephone conversations, went with me to meetings, got
logs of my instant messages and copies of all incoming & outgoing
email. The material was used for a research report, some number of
papers & books, and a Stanford PhD (joint between language and computer
AI). some past posts
http://www.garlic.com/~lynn/subnetwork.html#cmc

(children's "bullying") book about a former co-worker at the science
center responsible for the internal network
http://itunes.apple.com/us/app/cool-to-be-clever-edson-hendricks/id483020515?mt=8
It's Cool to Be Clever: The Story of Edson C. Hendricks, the Genius Who Invented the Design for the Internet
http://www.amazon.com/Its-Cool-Be-Clever-Hendricks/dp/1897435630/
and wiki
https://en.wikipedia.org/wiki/Edson_Hendricks

the internal network technology was also used for the
corporate-sponsored university network ... also larger than the internet
for a time (and where ibm-main originated) ... wiki reference
http://en.wikipedia.org/wiki/BITNET

some past posts
http://www.garlic.com/~lynn/subnetwork.html#bitnet

some past science center posts
http://www.garlic.com/~lynn/subtopic.html#545tech

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Term "Open Systems" (as Sometimes Currently Used) is Dead -- Who's with Me?

2015-09-11 Thread Anne & Lynn Wheeler
imugz...@gmail.com (Itschak Mugzach) writes:
> The term 'open' for me is the liberty to choose. To choose the
> hardware from many makers and to move easily from one operating system
> to another. See how many are moving from unix to Linux so easy.  The
> mainframe is not dead nor the customers. They, who can choose, vote
> for liberty. IBM killed all alternative hardware makers and now they
> buy or resell software makers. In Israel most of the mainframe sites
> are looking their way out, telling them selves this is because COBOL,
> pricing, and other tails. The real truth is that IBM kills the
> industry by being a monopoly. The only chance I see for the industry
> is IBM allowing alternatives.

I had the discussion with the (disk division) executive that sponsored
the "open" implementation for MVS.

His view was that it would make it easy for customers to move
applications from non-IBM platforms to MVS. My claim was that the
industry motivation for "open" was that it would make it trivial for
customers to move applications to whatever platform they wanted
... eliminating the proprietary lockins that had previously been the
norm; customers could frequently move to the latest, most cost-effective
platform (i.e., hardware-platform agnostic), promoting competition and
accelerating improvements. It is also seen in things like
industry-standard benchmarks such as TPC (price/transaction,
watts/transaction, etc)
http://www.tpc.org/
trivia ... former co-worker at ibm san jose research
http://www.tpc.org/information/who/gray.asp

For IBM, it was sort of in the genre of SAA ... billed as applications
could run anywhere ... but really meant that applications could be moved
to the mainframe ... while the human interface could be on some other
platform. SAA was part of the communication group desperately fighting
off client/server and distributed computing, trying to preserve their
dumb (emulated) terminal paradigm and install base.

The communication group had a stranglehold on datacenters with corporate
strategic ownership of everything that crossed the datacenter walls, and
the disk division was starting to see the effects with a drop in disk
sales (data fleeing the datacenter to more distributed-computing-friendly
platforms). The disk division had come up with a number of solutions to
correct the problem ... but the ones that involved actually crossing the
datacenter walls were constantly vetoed by the communication
group. some past posts (including references to the claim that the
communication group was going to be responsible for the demise of the
disk division)
http://www.garlic.com/~lynn/subnetwork.html#terminal

My wife had written 3-tier architecture into the response to a large
gov. super-secure campus-environment RFI ... and then we were using
3-tier in customer executive presentations and taking all sorts of
arrows in the back from the SAA forces. past posts
http://www.garlic.com/~lynn/subnetwork.html#3tier

The "open" for MVS didn't actually involve anything that crossed the
datacenter walls, so the communication group couldn't veto it. He was
also investing in companies that produced mainframe products that
crossed datacenter walls (the communication group could veto his
developing and selling IBM distributed products that physically crossed
datacenter walls, but couldn't veto him investing in non-IBM companies).

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Mainframes open to internet attacks?

2015-08-27 Thread Anne Lynn Wheeler
mike.a.sch...@gmail.com (Mike Schwab) writes:
> How about Multics?  Designed from the start to be multi-user and
> highly secure.

some of the CTSS people went to the 5th flr and did Multics. Others of
the CTSS people went to the IBM science center on the 4th flr and did
cp67/cms, the internal network, online services, etc. Being in the same
bldg., separated by one flr, there was some rivalry.

One of the early tests was when the science center ported apl\360 to cms
for cms\apl ... it allowed the typical apl\360 16kbyte workspaces to be
increased to virtual memory size ... and also added an API that allowed
access to system services (like file read/write). Opening APL to
real-world applications attracted a lot of internal locations to start
using the cambridge system remotely. A group of business planners in
Armonk loaded the most valuable corporate asset (customer details) on
cambridge system to do business modeling applications in cms\apl.

we had some interesting issues since non-employees (cambridge area univ
students, instructors, professors) also had online access to the
cambridge system. some posts mentioning the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

some multics installations:
http://www.multicians.org/site-afdsc.html
http://www.multicians.org/mgd.html#DOCKMASTER

other old reference to DOCKMASTER org. (gone 404 but lives on at wayback
machine):
http://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

and old reference to afdsc coming by to talk about 20 vm/4341 systems
... but then that was increased to 220 (posted in multics discussion
group)
http://www.garlic.com/~lynn/2001m.html#email790404

Recently a european who worked at NATO claimed that they got 6000
vm/4341 systems.

Note that Multics was implemented in PL/I.

Up through the 90s, the major tcp/ip bugs/exploits were because of
buffer length related bugs epidemic in c-language implementations (and
still continues to be a frequent source of exploits). The original ibm
mainframe tcp/ip product was implemented in vs/pascal and had *none* of
these epidemic bugs found in c-language implementations.

As an aside, for various reasons this implementation had some
significant performance issues, getting 44kbytes/sec aggregate using a
3090 processor. I did the rfc1044 enhancements and in some tuning tests at
cray research got sustained channel-speed throughput between a cray and
a 4341, using only a modest amount of the 4341 (possibly a 500 times
improvement in bytes moved per instruction executed). The (non-rfc1044) version was
also made available on MVS by simulating the required VM functions.
Much later the communication group contracted for TCP/IP support through
VTAM. After the initial demonstration, the communication group told the
contractor that everybody *knows* that a *correct* version of TCP/IP
runs slower than LU6.2 and they will only be paying for a *correct*
version.

I also had other rivalry with the 5th flr. One of my hobbies was
providing enhanced operating systems to internal locations ...  some old
email regarding CSC/VM (later it was SJR/VM, after I transferred to san
jose research):
http://www.garlic.com/~lynn/2006w.html#email750102
http://www.garlic.com/~lynn/2006w.html#email750430

It wasn't fair to compare the total number of Multics systems that had
ever existed with the total number of vm370 customer systems or even the
total number of internal vm370 systems. However, for a time, I had a few
more internal csc/vm systems than the total number of Multics systems.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Miniskirts and mainframes

2015-08-18 Thread Anne Lynn Wheeler
JimP solosa...@gmail.com writes:
> Interesting. The main contractor told us it was due to the teraflops
> it could do, a YMP-2. I worked for a sub-contractor.

IBM Kingston supposedly had the responsibility for doing a new
supercomputer ... and also was providing support to Chen's endeavor
(responsible for both xmp & ymp)
https://en.wikipedia.org/wiki/Steve_Chen_%28computer_engineer%29

we were doing cluster scaleup as part of HA/CMP ... past posts
http://www.garlic.com/~lynn/subtopic.html#hacmp

then at the end of Oct91, the senior executive backing the Kingston effort
retires and there is an audit of all his projects. After that they start
scouring the company looking for high-performance technology.

Jan1992 meeting in ellison's conference room about (commercial)
cluster HA/CMP scaleup
http://www.garlic.com/~lynn/95.html#13

mainframe DB2 complained that if we were allowed to go ahead, it would be
years ahead of them. Then cluster scaleup is transferred, announced as a
supercomputer (for technical & scientific *ONLY*), and we were told that
we can't work on anything with more than four processors (which
motivates us to leave). Somewhat of a no-brainer operation ... it takes care
of the DB2 complaints and also gets them their supercomputer ... old
email about working on technical and scientific with national labs and
others (up until the transfer)
http://www.garlic.com/~lynn/lhwemail.html#medusa

17Feb1992 press, scientific and technical *ONLY*
http://www.garlic.com/~lynn/2001n.html#6000clusters1
05May1992 press, total surprise about national lab interest
http://www.garlic.com/~lynn/2001n.html#6000clusters2

I would claim that the (national lab) scientific & technical activity traces
back to at least getting roped into doing the LLNL benchmark looking at
getting 70 4341s for a compute farm, and the RDBMS/commercial side dates back
to BofA getting 60 4341s for branch-office distributed System/R (the original
relational/sql implementation).

Note that IBM's RDBMSs were mainframe only, so RS/6000 had to work
with other RDBMS vendors for its platform. It turns out that a couple of those
vendors had the same source base for both VMS cluster and open
systems. Part of simplifying the HA/CMP cluster scaleup was providing an
interface that supported VMS cluster semantics. These RDBMS vendors also
had some very strong feelings about parts of the VMS cluster
implementation that could be done much better (which I was able to take
advantage of, besides my experience doing mainframe tightly-coupled and
loosely-coupled implementations).

Later in the 90s, Chen is CTO at Sequent and we do some consulting work
for him (before IBM buys Sequent and shuts it down).

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Formal definituion of Speed Matching Buffer

2015-08-15 Thread Anne Lynn Wheeler
shmuel+ibm-m...@patriot.net (Shmuel Metz, Seymour J.) writes:
> I'm editing the wikipedia article on Count Key Data, and I've run into
> an editorial dispute. I claim that what is now ECKD was part of the
> SMB, and the other editor claims that you could run 3380 on a slow
> channel without using, e.g., Define Extent. Does anybody have a
> document outlining what IBM included in the term SMB? Thanks.

re:
http://www.garlic.com/~lynn/2015f.html#86 Formal definituion of Speed Matching Buffer
http://www.garlic.com/~lynn/2015f.html#88 Formal definituion of Speed Matching Buffer
http://www.garlic.com/~lynn/2015f.html#89 Formal definituion of Speed Matching Buffer
http://www.garlic.com/~lynn/2015g.html#4 3380 was actually FBA?
http://www.garlic.com/~lynn/2015g.html#6 3380 was actually FBA?
http://www.garlic.com/~lynn/2015g.html#9 3380 was actually FBA?

need to have calypso (SMB) installed to attach to a channel
slower than the device ... however it is possible to do channel
programs w/o using ECKD CCWs

GA26-1661-9 3880 Storage Control Description ... from bitsavers

4-23 Define Extent (also 4-26 Locate Record)

Note: This command is valid only when the speed matching buffer for the
3375 or 3380 feature is installed.

... snip ...

4-111 describes how the speed matching buffer uses the define extent &
locate record to calculate when to connect to the channel ahead of time on
write operations.

5-17 I/O Operation for Speed Matching Buffer 

The speed matching buffer for the 3375 feature and the speed matching
buffer for the 3380 feature (Models AA4, A04, and B04 only) will
correctly execute standard command chains when connected to channels
slower than the 3375 or the 3380. However, a performance reduction on
write operations (described below) will occur.

... snip ...

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: 3380 was actually FBA?

2015-08-13 Thread Anne Lynn Wheeler
l...@garlic.com (Anne & Lynn Wheeler) writes:
> hardware speed and error correction was going to fixed-sized blocks. You
> can see this in 3380 track capacity calculations where record sizes have
> to be rounded up, sort of compromise hack given that MVS wasn't going to
> support real FBA.  The 3380 used smaller fixed-sized blocks ... but not
> true IBM FBA like 3310 & 3370. 3375 was the first CKD emulated on top
> of an IBM FBA (3370) device. 512-byte blocks have prevailed for a couple
> of decades (IBM 3310 & 3370 and follow-ons ... but also all the other
> industry standard disks). There is currently an industry move to 4096-byte
> fixed blocks for improved error correction and track capacity.
> https://en.wikipedia.org/wiki/Advanced_Format
> http://www.seagate.com/tech-insights/advanced-format-4k-sector-hard-drives-master-ti/

re:
http://www.garlic.com/~lynn/2015g.html#4 3380 was actually FBA?
and 
https://groups.google.com/forum/#!topic/bit.listserv.ibm-main/3QSdKeko604


IBM journal articles are behind IEEE membership wall ...  have found
this detailed description at Google Books (3380 error correcting)
https://books.google.com/books?id=cG4Zgb8OqwEC&pg=PA495&lpg=PA495&dq=ibm+3380+error+correcting&source=bl&ots=lMaYN_d94F&sig=o-R202AspjC1Ox09YNcZDb9Ljgc&hl=en&sa=X&ved=0CDgQ6AEwBGoVChMIxpHRg-KmxwIVVluICh1twgJy#v=onepage&q=ibm%203380%20error%20correcting&f=false

which has "each subblock consists of 96 data bytes and six first-level
check bytes are appended in the form of two interleaved codewords"

after discussing details of the 3380, it moves into RAID & Reed-Solomon
codes ... trivia, I worked with somebody in bldg14 who was awarded the
original RAID patent.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: 3380 was actually FBA?

2015-08-12 Thread Anne Lynn Wheeler
jcal...@narsil.org (Jerry Callen) writes:
> In another thread, l...@garlic.com wrote:
>
>> ... but then if MVS had FBA support wouldn't have needed to do 3380
>> as CKD (even tho inherently it was FBA underneath) ...
>
> I didn't know that.
>
> Was that the first (and/or last?) IBM SLED to be inherently FBA under
> the hood? Where were the smarts for that implemented, in the control
> unit, or the drive itself?

re:
http://www.garlic.com/~lynn/2015f.html#86 Formal definituion of Speed Matching Buffer

hardware speed and error correction was going to fixed-sized blocks. You
can see this in 3380 track capacity calculations where record sizes have
to be rounded up, sort of compromise hack given that MVS wasn't going to
support real FBA.  The 3380 used smaller fixed-sized blocks ... but not
true IBM FBA like 3310 & 3370. 3375 was the first CKD emulated on top
of an IBM FBA (3370) device. 512-byte blocks have prevailed for a couple
of decades (IBM 3310 & 3370 and follow-ons ... but also all the other
industry standard disks). There is currently an industry move to 4096-byte
fixed blocks for improved error correction and track capacity.
https://en.wikipedia.org/wiki/Advanced_Format
http://www.seagate.com/tech-insights/advanced-format-4k-sector-hard-drives-master-ti/

eckd originally for speed-matching buffer ... was also trying to
retrofit a little of the FBA benefits to CKD architecture (again because
MVS wouldn't upgrade to real FBA).

part of the issue for 3375 was there wasn't a mid-size CKD disk (just
the high-end 3380). Large customers were buying hundreds (& thousands)
of vm/4300s for the distributed non-datacenter market (sort of the leading
edge of the coming distributed computing tsunami; for instance, NATO got 6000
vm/4341s) ... and MVS couldn't play in that new market with no mid-range
CKD disk.

Doing the 3375 CKD at least gave MVS a path ... however MVS support was
really oriented around having 10-20 people in a large datacenter. The idea
of supporting a thousand distributed systems out in departmental areas
wasn't very practical.

I also got dragged into doing benchmarks for LLNL, which was looking at 70
4341s for a compute farm (sort of the leading edge of the future
supercomputing paradigm). A 4341 was faster than a 158 & 3031 ... and
clusters of 4341s were faster than a 3033, with lower aggregate cost, lower
aggregate physical and environmental footprint, and also higher aggregate
memory and i/o throughput. old email
http://www.garlic.com/~lynn/lhwemail.html#4341

past posts
http://www.garlic.com/~lynn/submain.html#dasd

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Formal definituion of Speed Matching Buffer

2015-08-10 Thread Anne Lynn Wheeler
re:
http://www.garlic.com/~lynn/2015f.html#86 Formal definituion of Speed Matching Buffer
http://www.garlic.com/~lynn/2015f.html#88 Formal definituion of Speed Matching Buffer

For those that got the post forwarded and can't see the recent URL refs
on garlic.com ... on 17Apr2015, garlic.com changed their webserver and
I still haven't been able to update my personal web pages ... to
see the full thread check the google group archive.
https://groups.google.com/forum/#!topic/bit.listserv.ibm-main/K2Elt-40-VE

With regard to the MVS 15min MTBF, I happened to mention it in an
internal-only report giving technical details of building a bullet-proof
and never-fail I/O subsystem ... which brought down the wrath of the MVS
organization on my head ... apparently they would have gotten me fired
if they could have figured out how ... but they found other ways of
taking out their displeasure.

VM370 official calypso support, put8201, reference to retrofitting to
heavily modified internal system.

To: wheeler
Date: 01/11/82 12:51:34
 
ref calypso-vm/sp (ext. ckd ) ;

The official release of the code was this month. Put tape 8201
lvl. 110.  It looks like it might be awhile before we get that far for
common and all, right ?  Maybe I should go ahead and work on fitting
it to 106.  What do you think?

... snip ...

from vmshare archives ... discussion of 3880 speed matching buffer
but doesn't explicitly say anything about define extent and eckd
http://vm.marist.edu/~vmshare/browse?fn=3380&ft=MEMO

old email about FE error-injection regression tests for 3380 ... all of
the 57 errors resulted in MVS hanging and requiring re-ipl, and in
2/3rds of the cases there was no indication of what caused the problem
(this is separate from the earlier issue where they attempted to use MVS
in the bldg. 14 engineering test lab and found it had a 15min MTBF)
http://www.garlic.com/~lynn/2007.html#email801015

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Formal definituion of Speed Matching Buffer

2015-08-09 Thread Anne Lynn Wheeler
shmuel+ibm-m...@patriot.net (Shmuel Metz, Seymour J.) writes:
> I'm editing the wikipedia article on Count Key Data, and I've run into
> an editorial dispute. I claim that what is now ECKD was part of the
> SMB, and the other editor claims that you could run 3380 on a slow
> channel without using, e.g., Define Extent. Does anybody have a
> document outlining what IBM included in the term SMB? Thanks.

Calypso was the (speed-matching) feature for the 3880 controller allowing
3380s to be attached to slower channels. I have old email about the
significant software & hardware debugging getting it to work.
http://www.garlic.com/~lynn/2007f.html#email801010
http://www.garlic.com/~lynn/2007e.html#email820907
http://www.garlic.com/~lynn/2007e.html#email820907b

We had done a VM370 modification at SJR that did a super-efficient trace
of all disk records accessed (both by vm370 & virtual machines), which was
installed in several systems in the San Jose/Bay Area. The 10Oct1980
email refers to upgrading it to support calypso/eckd CCWs. The trace was
used for various things like modeling disk i/o cache configurations. We
had a proposal to have it incorporated into all systems for use in
dynamic load-balancing and data placement/location.

some of the ECKD history intertwines with my theme (rant) that it would
have been enormously simpler & less expensive to have added FBA support to
MVS. I had been told that even if I provided them with fully integrated
and tested MVS FBA support ... I needed a $26M incremental business
case (to cover documentation and education) ... basically $200M-$300M in
additional disk sales ... but they claimed that customers were already
buying disks as fast as they could be made ... so it would just shift
the same amount of sales from CKD to FBA (and therefore it was impossible
for me to show incremental/additional disk sales from FBA support).
http://www.garlic.com/~lynn/submain.html#dasd

this post references that a 4341 with a small tweak was being used
for testing 3mbyte/sec channels (w/o needing speed matching)
http://www.garlic.com/~lynn/2010e.html#36 What was old is new again (water chilled)

If the POK machines (158, 168, 303x) had been as powerful as the 4341 ...
there wouldn't have been a need for Calypso (speed matching, and the
enormous resources needed to get it working) ... but then if MVS had had
FBA support there wouldn't have been a need to do the 3380 as CKD (even tho
inherently it was FBA underneath) ... or to do a 3375/CKD version of the
3370/FBA.

other past posts mentioning calypso
http://www.garlic.com/~lynn/2004o.html#7 Integer types for 128-bit addressing
http://www.garlic.com/~lynn/2007e.html#40 FBA rant
http://www.garlic.com/~lynn/2007f.html#0 FBA rant
http://www.garlic.com/~lynn/2008q.html#40 TOPS-10
http://www.garlic.com/~lynn/2009k.html#44 Z/VM support for FBA devices was Re: z/OS support of HMC's 3270 emulation?
http://www.garlic.com/~lynn/2009p.html#11 Secret Service plans IT reboot
http://www.garlic.com/~lynn/2010e.html#36 What was old is new again (water chilled)
http://www.garlic.com/~lynn/2010h.html#30 45 years of Mainframe
http://www.garlic.com/~lynn/2010n.html#14 Mainframe Slang terms
http://www.garlic.com/~lynn/2011e.html#35 junking CKD; was Social Security Confronts IT Obsolescence
http://www.garlic.com/~lynn/2012j.html#12 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
http://www.garlic.com/~lynn/2012o.html#64 Random thoughts: Low power, High performance
http://www.garlic.com/~lynn/2014m.html#154 BDW length vs. Physical Length

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Formal definituion of Speed Matching Buffer

2015-08-09 Thread Anne Lynn Wheeler
000433f07816-dmarc-requ...@listserv.ua.edu (Paul Gilmartin) writes:
> Circa 1980 my then employer marketed a CCD SSD product which
> suffered timing incompatibilities, not because of transfer rate, but
> because of inter-block latencies.  It appeared that some VM paging
> code paths depended on completing while the inter-block gap was
> passing.

re:
http://www.garlic.com/~lynn/2015f.html#86 Formal definituion of Speed Matching Buffer

an issue would be if you simulated a 3330 ... formatted 50byte (or
110byte) dummy records, and had a simulated rotational spin rate much
faster than a real 3330 (it wasn't vm code paths, it was the speed of
chained CCW channel processing).

I periodically mention getting to play disk engineer in bldgs 14 & 15
(sometimes they demanded I play disk engineer) ... past posts
http://www.garlic.com/~lynn/subtopic.html#disk

vm370 formatted 3330s for paging so records were aligned on each track
... three 4kbyte pages per track ... and if there were queued requests
for records on the same cylinder but different tracks, it would attempt to
build a single optimized channel program to transfer all pages in the
minimum number of revolutions. For CKD this would require seek head, search,
tic, read/write ... and in order to allow the channel time to execute
the CCWs while the disk was spinning, page-area formatting inserted dummy
records between page data records. It turns out that channel specs
(worst case 370) required 110byte dummy records given the 3330
rotational spin rate ... to allow the channel time to process the CCWs (to
do a head switch) ... but the track size only had room for 50byte dummy
records.

I did a test program that was run on 145, 148, 4341, 158, 168, and 303x with
IBM disk controllers and various non-IBM disk controllers. IBM disk
controllers could actually process the head-switch CCWs within the 50-byte
dummy record (w/o an additional revolution) for most machines except the
158 & all 303x ... and some number of non-IBM disk controllers could do the
switch even with the 158 & 303x (the issue is that all models of 303x
used an external channel director ... which was actually a 158 engine
with the slow integrated-channel microcode and w/o the 370
microcode). 3081 channels also had a problem doing the CCW head-switch
within the 50byte dummy record window. past posts on the subject
http://www.garlic.com/~lynn/2000d.html#7 4341 was Is a VAX a mainframe?
http://www.garlic.com/~lynn/2001j.html#3 YKYGOW...
http://www.garlic.com/~lynn/2002b.html#17 index searching
http://www.garlic.com/~lynn/2003g.html#22 303x, idals, dat, disk head settle, and other rambling folklore
http://www.garlic.com/~lynn/2004d.html#64 System/360 40 years old today
http://www.garlic.com/~lynn/2004d.html#65 System/360 40 years old today
http://www.garlic.com/~lynn/2004d.html#66 System/360 40 years old today
http://www.garlic.com/~lynn/2005p.html#38 storage key question
http://www.garlic.com/~lynn/2005s.html#22 MVCIN instruction
http://www.garlic.com/~lynn/2006w.html#8 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2008l.html#83 old 370 info
http://www.garlic.com/~lynn/2011.html#65 Speed of Old Hard Disks
http://www.garlic.com/~lynn/2013e.html#61 32760?
http://www.garlic.com/~lynn/2014k.html#26 1950:  Northrop's Digital Differential Analyzer

There was a different issue with code paths on the 3880 controller. After
the FS death, there was a mad rush to get stuff back into the 370 product
pipelines (during the FS period, internal politics had been killing off 370
efforts) and 303x and 370-xa were kicked off in parallel. 370-xa became
known as 811 for the Nov1978 date on most of the architecture
documents. When I saw SSCH, I thought that it was mostly to compensate for
the enormous interrupt pathlength in MVS. A big problem was that as
devices became faster and load increased, there was significantly
increasing device idle time while MVS went thru interrupt & redrive
overhead.

Earlier, the disk engineering lab had been testing using prescheduled,
stand-alone, dedicated mainframe test time ... at one point they had
tried to use MVS ... but found it had a 15min MTBF (requiring manual
re-ipl) in that environment. I volunteered to rewrite the I/O supervisor so
it was bullet-proof and never-fail, so that any amount of on-demand,
concurrent testing could go on (vastly increasing productivity). I also
set out to demonstrate optimal interrupt processing and queued-request
device redrive.

some bean counter had dictated that the 3880 use a really slow control
processor (compared to the 3830) with dedicated circuits to get the
3mbyte/sec transfer rate. The slow control processor showed up as
increased channel and controller busy as well as increased elapsed time
for channel program processing. To pass the product acceptance test
(requiring the 3880 to appear within 5% of 3830 performance), they would
signal channel program complete (CE+DE) early ... before having actually
finished everything. The first time they put a 3880 controller into use
with 16 3330s (and heavy load on a 3033) replacing a 3830 ... 

Re: Limit number of frames of real storage per job

2015-08-07 Thread Anne Lynn Wheeler
allan.stal...@kbmg.com (Staller, Allan) writes:
> There can also be performance advantages from GC. GC moves objects
> together in storage, making it much more likely that your application
> data will be in the processor caches. If GC keeps your data in
> processor cache it will perform much better than if it's scattered
> across a GB of storage.

apl\360 would allocate new storage for every assignment statement,
quickly using every available location in the workspace ... and then it
would collect everything into contiguous storage (garbage collect) and
start all over again. This wasn't too bad with apl\360's typical
16kbyte (sometimes 32kbyte) workspaces that were swapped as an integral
unit. the initial port of apl\360 to cp67/cms for cms\apl was something
of a problem because it allowed workspaces the size of virtual
memory ... and the strategy would quickly result in page thrashing
(repeatedly touching every virtual page regardless of actual
program & data size).

before release of cms\apl, this all had to be reworked in order
to reduce the massive page thrashing.

Besides doing virtual machines, cp67/cms, the technology for the
internal network (and corporate sponsored univ bitnet ... where ibm-main
originated), GML, and lots of other things ... the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

also did a number of performance analysis tools. One did processing &
storage-use analysis ... which was used for analyzing cms\apl and a bunch
of other things. It was also used extensively inside ibm by most product
groups in their transition to virtual memory operation (it would identify
hot-spot instruction use as well as hot-spot storage use) ... and was
eventually released to customers as VS/Repack (which attempted
semi-automated program reorganization to improve operation in a virtual
memory environment).

references to internal network
http://www.garlic.com/~lynn/subnetwork.html#internalnet
references to bitnet
http://www.garlic.com/~lynn/subnetwork.html#bitnet
references to gml (sgml, html, etc)
http://www.garlic.com/~lynn/submain.html#sgml

a major factor in the motivation for the transition from os/360 MVT to
virtual memory OS/VS2 was significant problems with the way MVT managed
real storage, GETMAIN, etc ... regions typically had to be four times
larger than really needed. The analysis showed that a typical 370/165 MVT
1mbyte machine only supported four regions. A virtual-memory MVT on a
370/165 1mbyte machine could support 16 regions with little or no paging
(aka keep all the in-use data in the 370/165 1mbyte processor cache).
Old reference to study motivating to move all 370 to virtual memory:
http://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Where are Internet Lists for Mainframe

2015-08-03 Thread Anne Lynn Wheeler
stars...@mindspring.com (Lizette Koehler) writes:
> For those of you going to share in Orlando, I would like to let you know
> that at Share Tom Conley will be giving a share presentation on Thursday
> 3:15p called
>
> Effective Use of the Internet for Mainframe Problem Solving
>
> This session will show better ways of posting and getting responses from
> various Lists.  As well as providing a list of Mainframe specific Lists.

a co-worker at the ibm cambridge science center was responsible for the
technology for the internal network (larger than the arpanet/internet
from just about the beginning until sometime late 85 or early 86) and
later bitnet (corporate-sponsored univ. network) starting in the early
80s.
https://en.wikipedia.org/wiki/Edson_Hendricks

I was blamed for online computer conferencing on the internal network in
the late 70s & early 80s. folklore is that when the executive committee was
told about online computer conferencing (and the internal network), 5 of 6
wanted to fire me. this is email from a person in paris given the job of
doing EARN (the bitnet equivalent in europe), looking for online applications
http://www.garlic.com/~lynn/2001h.html#email840320

this is the history of the listserv creation in paris in 1986 ... somewhat
similar to the earlier internal online VMTOOL computer conferencing tool
(developed in the wake of my activities) ... although VMTOOL had a
usenet-like mode in addition to a mailing-list-like mode.
http://www.lsoft.com/corporate/history-listserv.asp
http://www.lsoft.com/products/listserv-history.asp
wiki
https://en.wikipedia.org/wiki/LISTSERV

In the late 80s, the communication group was spreading misinformation as
part of convincing corporate to convert the internal network to SNA/VTAM
... when it would have been much more efficient to have converted to
TCP/IP (like BITNET did).
http://en.wikipedia.org/wiki/BITNET
more mailing lists
http://www.lsoft.com/lists/list_q.html

some of the mailing lists are gatewayed to usenet ... some bidirectional,
others distribution-only (no posting). usenet is also gatewayed
(usually bi-directional) to google groups

note that TYMSHARE had developed a CMS-based online computer conferencing
system and made it available to the IBM SHARE organization starting in
Aug1976 ... archive:
http://vm.marist.edu/~vmshare

... for mainframe specific, try a search engine for "ibm mainframe
discussion group mailing list"


-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: 3705

2015-07-21 Thread Anne Lynn Wheeler
g...@ugcs.caltech.edu (glen herrmannsfeldt) writes:
> OK, I forgot that the Usenet gateway doesn't work anymore.
>
> I am wondering what software one needs for a 3705 to connect
> up ordinary ASCII terminals.
>
> For example, what would be needed to use TSO or Wylbur on
> ASCII terminals?  I know this is what was done 35 years
> ago, but I don't know now who knows how to do it.
>
> I do remember that for dial-up lines it would allow for 300
> baud or 110 baud, or even for 2741s, depending on the first
> character you typed. Hardwired lines were fixed speed, and
> could be higher than 300.  (I believe O for 300 baud, and
> S for 110 baud.)
>
> Faster lines might only be at a fixed baud rate.

the cp67 delivered to the univ. had automatic terminal-type identification
for 1052 and a couple of 2741 types. On the 2702/2703 it was possible to
dynamically change the line-scanner type using the SAD CCW (use one
line-scanner type, try a couple of operations, and if they get errors,
switch to a different line-scanner type).

the univ. had a number of TTY/ASCII terminals, so I had to add TTY support
to CP67 ... and tried to do it also using dynamic terminal-type
identification. I also tried to support a single dial-in number for all
terminal types ... aka a hunt group ... a common pool of lines. However,
IBM had taken a short cut and hard-wired the line-speed oscillator to each
line ... so while it was possible to change the line-scanner ... it wasn't
possible to change the line-speed (the original 1052 & 2741 had the same
line speed, but TTY was different).

This was the motivation for the univ. to start a clone-controller project,
building a channel-interface board for an Interdata/3 programmed to emulate
a 2702 ... but able to also do dynamic line-speed operation. This was
later improved to an Interdata/4 for the channel interface and a cluster of
Interdata/3s dedicated to line-scanning. Four of us got written up as
responsible for (some part of) the clone-controller business. Later
Perkin-Elmer bought Interdata and the clone-controller continued to be
sold under the PE logo (in the late 90s, I ran into a PE box in a large
datacenter handling much of the dial-up point-of-sale terminals on the
east coast, 1200 baud ascii).

A number of univ. had been sold (virtual memory) 360/67s supposedly for
use with TSS/360 ... however TSS/360 had hard time reaching maturity ...
so a lot of places ran CP/67. Other places developed their own virtual
memory operating systems for 360/67 ... Stanford did Orvyl/Wylbur
(Wylbur later ported to MVS) and Michigan did MTS. MTS did
clone-controller using PDP8
http://www.eecis.udel.edu/~mills/gallery/gallery7.html
some more MTS
http://www.eecis.udel.edu/~mills/gallery/gallery8.html

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Why major financial institutions are growing their use of mainframes

2015-06-17 Thread Anne & Lynn Wheeler
marktre...@gmail.com (Mark Regan) writes:
> I recently learned about a bank in Japan that has been using a
> mainframe since the 1970's without a single second of downtime. Its
> architecture allows for full software and hardware upgrades without an
> outage.

i periodically mention that my wife had been con'ed into going to POK to
be in charge of loosely-coupled architecture where she did peer-coupled
shared data architecture. some past posts
http://www.garlic.com/~lynn/subnetwork.html#shareddata

She didn't stay very long, in part because of on-going periodic battles
with the communication group trying to force her to use sna/vtam for
loosely-coupled operation, as well as very little uptake (at the time)
... except for IMS hot-standby.

Around the turn of the century, we would periodically drop in on the
person that ran large financial transaction operation (33 liberty st,
nyc) ... and he credited 100% uptime to

1) automated operator
2) IMS hot-standby

... he had triple replicated IMS hot-standby operation at geographic
separated sites.

slight topic drift ... when Jim Gray left IBM Research for Tandem, he
palmed off a bunch of stuff on me ... DBMS consulting with the IMS group,
interfacing with BofA, early adopter of the original relational/SQL
implementation, etc ...
http://www.garlic.com/~lynn/submain.html#systemr

At Tandem he did a study of what was causing outages. One of the things he
found was that hardware reliability was getting to the point where it was
responsible for a decreasing percentage of outages, and other factors were
starting to dominate (software faults, people mistakes, environmental
issues like power outages, floods, earthquakes, etc.). summary/overview
http://www.garlic.com/~lynn/grayft84.pdf

later we were doing IBM's (RS/6000) HA/CMP ... some past posts
http://www.garlic.com/~lynn/subtopic.html#hacmp

and working on both commercial, DBMS ... old reference to
Jan1992 meeting in Ellison's conference room
http://www.garlic.com/~lynn/95.html#13

as well as technical with gov. agencies and national labs ... some
old email
http://www.garlic.com/~lynn/lhwemail.html#medusa

While out marketing HA/CMP, I coined the terms geographic
survivability and disaster survivability to differentiate from
disaster/recovery ... some past posts
http://www.garlic.com/~lynn/submain.html#available

On the commercial side, the mainframe DB2 group were complaining if I
was allowed to continue ... it would be at least five years ahead of
them. Shortly later, the cluster scaleup part was transferred and
announced as the IBM supercomputer for technical and scientific *ONLY*
(and we were told we couldn't work on anything with more than four
processors).

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: New Line vs. Line Feed

2015-05-29 Thread Anne & Lynn Wheeler
john.archie.mck...@gmail.com (John McKown) writes:
> As a side note (as I have heard it), the reason that Windows uses CRLF
> as a line ending is because MS-DOS did the same. MS-DOS used CRLF
> because CPM-80 used CRLF. And, finally, CPM-80 used CRLF because the
> common printers at the time could not do a carriage return / line feed
> in a single operation.  So, Gary Kildall (author of CPM-80) decided to
> end text files with CRLF so that he didn't need to complicate the
> printer driver to put a LF in when a CR was detected. This made good
> sense in the day that 64K RAM and a 1 Mhz 8080 was top of the line
> equipment for the hobbyist.

a little other topic drift from recent IBM antitrust thread

Other trivia ... also at the scientific center ... GML was invented at
the science center in 1969 (G, M, & L are the first letters of the
inventors' last names). This is a posting by Sowa about GML being used by
IBM for documents used in the antitrust suit
http://ontolog.cim3.net/forum/ontolog-forum/2012-04/msg00058.html

from above:

For text that was copied from the original OED, they got GML to produce
exactly the same line breaks and hyphenation.  They needed to get it
exactly right in order to aid the proof readers who had to make sure
that the new copy was identical to the old.

The GML-based software in the 1980s was far more flexible than MS Word
is today.  Just look at the OED and imagine how you might use MS Word to
match that exactly.

... snip ...

in the mid-60s at the science center, CMS script was an implementation of
CTSS runoff using dot formatting controls ... then later, script was
enhanced to support GML tag processing. In the late 70s, a vm370 SE in the
LA branch did an implementation of CMS script on the trs80 (NewScript).

and periodically mentioned ... before ms/dos
http://en.wikipedia.org/wiki/MS-DOS
there was seattle computer
http://en.wikipedia.org/wiki/Seattle_Computer_Products
before seattle computer there was cp/m,
http://en.wikipedia.org/wiki/CP/M
before cp/m, kildall worked with cp67/cms at npg
http://en.wikipedia.org/wiki/Naval_Postgraduate_School

other Sowa trivia ... on the failure of FS and how poorly 3081 compared
to competition
http://www.jfsowa.com/computer/memo125.htm


-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: New Line vs. Line Feed

2015-05-28 Thread Anne & Lynn Wheeler
t...@vse2pdf.com (Tony Thigpen) writes:
> It's actually much worse. There are three:
>
> Ebcdic:
> CR = x0D
> NL = x15
> LF = x25
>
> Originally, CR only moved the print back to the first position of the
> same line. LF only moved the print down one line without moving
> sideways. NL moved both down and to the first position of the line.
>
> When it was designed, they were using teletype machines and simple
> printers. No CRTs.
>
> Historically:
>
> 1930's had the Teletype standard: International Telegraph Alphabet
> No. 2 (ITA2), which had both a CR and a LF and required both at the
> end of a line.
>
> 1950's IBM introduces BCD and adds NL
> 1960's IBM introduces EBCDIC and continued using the 3 values.
>
> 1960's AT&T pushes for a replacement of ITA2 which the ASA published as
> ASCII in 1963. (One of their requirements was 7 bit so EBCDIC was
> ruled out.)
>
> In the ASCII world, CR and LF were the standard until the mid-1960's
> when the Multics developers decided that using two characters was
> stupid and they started using just LF. Unix and follow-on OSs carried
> on the same tradition.
>
> Today, it's a mess. Windows wants CRLF. Internet RFCs normally use
> CRLF. Mac and Linux use just LF.
>
> Interesting, Windows Notepad requires CRLF, but Windows Wordpad will
> read and display a LF only file correctly and even change the file to
> CRLF when saved.
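The three EBCDIC control values listed above can be checked against one of the common EBCDIC code pages (cp037 here, chosen for illustration because Python ships a codec for it):

```python
# EBCDIC control values from the list above, checked via code page 037.
assert b"\x0d".decode("cp037") == "\r"      # CR = x0D
assert b"\x25".decode("cp037") == "\n"      # LF = x25
assert b"\x15".decode("cp037") == "\u0085"  # NL = x15 (maps to Unicode NEL)

# Round trip: an LF-terminated line of text lands on x25 in EBCDIC.
assert "line\n".encode("cp037")[-1] == 0x25
```

The NL-to-U+0085 (NEL) mapping is the standard EBCDIC/Unicode convention — which is why NEL occasionally shows up as a surprise line terminator in data that has round-tripped through a mainframe.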

IBM did much of the standardization for ASCII and the 360 originally was
supposed to be an ASCII machine ... unfortunately the 360 ASCII unit
record gear wasn't ready ... and the decision was made to go
(temporarily) with the old BCD unit record gear (but there were some
unfortunate side-effects of that decision).

EBCDIC and the P-Bit, The Biggest Computer Goof Ever
http://www.bobbemer.com/P-BIT.HTM

The culprit was T. Vincent Learson. The only thing for his defense is
that he had no idea of what he had done. It was when he was an IBM Vice
President, prior to tenure as Chairman of the Board, those lofty
positions where you believe that, if you order it done, it actually will
be done. I've mentioned this fiasco elsewhere.

... snip ...

by the father of ASCII
http://www.bobbemer.com/FATHEROF.HTM
his history index
http://www.bobbemer.com/HISTORY.HTM



-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: PCI DSS compliance for z/OS

2015-05-19 Thread Anne & Lynn Wheeler
charl...@mcn.org (Charles Mills) writes:
> I think much of the problem is with credit card numbers
> themselves. There are only ~10**16 possible credit card numbers --
> many fewer if you allow for the fact that only certain combinations
> are valid. A credit card number is easier to brute-force guess than
> its encryption key, format-preserving or not.
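The "only certain combinations are valid" point is partly valid issuer prefixes and partly the mod-10 (Luhn) check digit, which by itself cuts the brute-force space by another factor of ten — a minimal sketch:

```python
def luhn_valid(number: str) -> bool:
    """Mod-10 (Luhn) check: from the right, double every second digit,
    subtract 9 from any result over 9, and require sum % 10 == 0."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Published test numbers, not real accounts:
assert luhn_valid("4111111111111111")
assert not luhn_valid("4111111111111112")
```

For any 15-digit prefix, exactly one of the ten possible check digits passes — so the check catches transcription errors but does essentially nothing against a guessing attacker.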

trivia/background

long ago and far away we got called in as consultants to a small
client/server startup that wanted to do payment transactions on their
server; they had also invented this technology called SSL that they wanted
to use; the result is now sometimes called electronic commerce.

Then somewhat having done electronic commerce, in the mid-90s we were
asked to participate in the (financial industry) x9a10 standards working
group which had been given the requirement to preserve the integrity of
the financial infrastructure for all retail payments (not just internet,
*ALL*, point-of-sale, attended, unattended, credit, debit, ACH,
i.e. *ALL*).

part of this is we did end-to-end threat analysis and attempted to use
a number of metaphors to characterize the existing paradigm:

dual-use ... since information from previous transactions can be used
for fraudulent transactions, that information has to be kept totally
confidential and never divulged. at the same time the same information
is required in dozens of business processes at millions of locations
around the world. we've periodically commented that even if the planet
was buried under miles of information hiding encryption, it still
wouldn't stop leakage

security proportional to risk ... the value of the transaction
information to the merchants is the profit on the transactions, which
can be a couple dollars (and a couple cents for the transaction
processor) ... the value of the information to the crooks is the account
balance and/or credit limit ... as a result the crooks can afford to
outspend the defenders by a factor of 100 times.

...

the x9a10 financial standard working group came up with a transaction
standard that eliminated the dual-use characteristic of the account
number ... which meant that it no longer needed to be kept hidden (and the
earlier work we did on electronic commerce was the major use of SSL
for hiding the account number ... which in the new transaction standard
was no longer necessary).

Later we were tangentially involved in the cal. state data breach
notification law. A lot of the participants were heavily involved in
privacy issues and had done detailed, in-depth public surveys. The #1
issue was identity theft, primarily in the form of fraudulent financial
transactions as the result of breaches, and there was little or nothing
being done about the breaches. An issue is that typically an
entity/institution takes security measures in self-protection. In the
case of the breaches, the institution wasn't at risk ... it was their
customers. It was hoped that the publicity from the breach notifications
would prompt breach countermeasures.

Note in the years since the cal. state breach notification act there
have been numerous federal (state preemption) acts introduced ... about
evenly divided between those similar to the cal. act and those that
would effectively eliminate any requirement for notification (frequently
ingeniously disguised as criteria on when notification was
required). The PCI DSS specification came out after the appearance of
the cal. state data breach notification and referenced by federal
legislation attempting to eliminate notification requirements
... because the industry was addressing the problem. Early jokes about
the PCI DSS certification were that it was relatively straight-forward
... but everybody with PCI DSS certification that had a breach would
have their certification revoked.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Knowledge Center Outage May 3rd

2015-04-30 Thread Anne & Lynn Wheeler
jerry.whitteri...@safeway.com (Jerry Whitteridge) writes:
> I miss HONE !
>
> Jerry Whitteridge
> Lead Systems Engineer
> Safeway Inc.

I was recently asked when HONE actually shutdown
http://www.garlic.com/~lynn/2015c.html#93 HONE Shutdown

and found an email from may1998 saying it was going away

HONE (hands-on network environment), some past posts
http://www.garlic.com/~lynn/subtopic.html#hone

had started out after the 23jun1969 unbundling announcement (starting to
charge for application software, se services, etc), some past posts
http://www.garlic.com/~lynn/submain.html#unbundle

with (virtual machine) CP67 (running on 360/67), to give branch SEs
hands-on practice with operating systems. Previously SEs got a sort of
journeyman training as part of a large group onsite in the customer
datacenter; after unbundling, nobody could figure out how not to charge
customers for SE time onsite at the customer.

Science center very early did enhancements to CP67 that provided the
simulation of the new 370 (before virtual memory instructions), so they
could work with latest operating systems gen'ed for 370.

For CP67/CMS, the science center ... some past posts
http://www.garlic.com/~lynn/subtopic.html#545tech

had also ported apl\360 to CMS for cms\apl. HONE then started offering
apl-based sales&marketing support tools ... which soon came to dominate
all HONE activity, and the virtual guest operating system use disappeared
... and HONE clone systems would start sprouting up all over the world

HONE eventually migrated to VM370 from the custom science center
cp67/cms. This is old email from 40yrs ago today (30apr1975)
http://www.garlic.com/~lynn/2006w.html#email750430

where I've moved a bunch of enhancements from CP67 to VM370 and made it
(csc/vm) available to internal datacenters ... including HONE ... which
would run my enhanced custom operating systems for another decade
(including after I moved to SJR, when it was called sjr/vm, before I
later started moving off of mainframe work).

Not long after the above email, US HONE consolidated all its (US)
datacenters in Palo Alto. By the end of the 70s, the US HONE datacenter
was the largest single system image operation in the world ... several
large (POK) multiprocessor mainframes operating in loosely-coupled
operation with load balancing and workload fall-over (in case of failure)
... effectively the peer-coupled shared data architecture mentioned in
this recent post
http://www.garlic.com/~lynn/2015c.html#112 JES2 as primary with JES3 as a 
secondary

but was not released to customers. Then in the early 80s, the US HONE
complex was replicated first in Dallas and then a 3rd in Boulder
... with load balancing and fallover ... countermeasure to disaster
scenarios (like earthquake in california) ... also not released to
customers.

for other drift, the previous post reference to IMS hot-standby ... had
a fall-over operational problem. IMS configuration would be large CEC
(one or more shared-memory processor) with fall-over hot-standby. IMS
could immediately fall-over but VTAM sessions were enormous problem
... large systems could have 30,000-60,000 terminals ... which could
take VTAM one or more hrs to get back up and running. We actually did
some work with a 37x5/NCP emulator that spoofed mainframe VTAM that
sessions were being managed cross-domain ... but was actually done by
outboard non-VTAM processing ... which could manage replicated shadow
sessions to the hot-standby machine ... so IMS hot-standby fall-over
would be nearly immediate. However, this met with enormous
resistance from the communication group (for lots of reasons: no
communication group hardware involved, and SNA RUs were carried over a
real network with lots of feature/function that couldn't be done in SNA).

This sort of issue goes away in the internet server environment, where
operations are connectionless ... in theory there is no long-term session
maintenance overhead; server workload is proportional to the transaction
workload ... not to the number of clients ... making it much easier to have
replicated servers, load balancing and workload fall-over.

The browser HTTP people did mess up ... they used (session) TCP
(instead of UDP) to implement a connectionless protocol ... it would go
to all the overhead of setting up a session to do a connectionless
operation and then immediately tear it down. Besides all the
(unnecessary) processing overhead (for TCP session setup/shutdown), the
TCP protocol (chatter) has a minimum seven-packet exchange. This was
initially noted as webserver load started to scale up. Industry standard
TCP had what was called the FINWAIT list (to handle dangling packets
after a session was closed) ... and ran the list linearly checking whether
an incoming packet was part of a recently closed session ... aka the
expectation had been that the FINWAIT list was empty or had only a few
entries. Increasing HTTP webserver workload started to result in thousands
of entries on the FINWAIT list ... and FINWAIT processing would consume
95% of the webserver processor.
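A toy illustration of the scaling problem (addresses and counts made up): matching an incoming packet against N recently-closed sessions is O(N) with a linear list, O(1) with a hashed lookup —

```python
# Toy model of the FINWAIT-list problem: early stacks scanned a linear
# list of recently-closed sessions for every incoming packet.  Fine
# when the list held a handful of entries; disastrous with thousands
# of short-lived HTTP connections.  Addresses/ports here are made up.

closed = [("10.0.0.%d" % (i % 250), 1024 + i) for i in range(5000)]
closed_list = list(closed)   # what early TCP stacks effectively kept
closed_set = set(closed)     # the obvious fix: hashed lookup

def match_linear(packet):
    comparisons = 0
    for session in closed_list:          # O(N) scan per packet
        comparisons += 1
        if session == packet:
            return True, comparisons
    return False, comparisons

def match_hashed(packet):
    return packet in closed_set          # O(1) expected

pkt = ("10.0.0.7", 1024 + 2507)          # a hypothetical dangling packet
found, cost = match_linear(pkt)
assert found and match_hashed(pkt)
assert cost > 2000                       # thousands of compares per packet
```

At thousands of closed sessions per second, every incoming packet paying that linear scan is exactly how FINWAIT processing came to consume most of the processor.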

We had been 

Re: JES2 as primary with JES3 as a secondary

2015-04-30 Thread Anne & Lynn Wheeler
sipp...@sg.ibm.com (Timothy Sipples) writes:
> Let's take a brief look at this not exactly new history. I can fairly
> easily trace JES3 back a quarter century. (Perhaps somebody else would like
> to go back into the pre-Sysplex JES3 era, from 1973 to 1990, to see what
> IBM recommended and/or required.)

trivia ... my wife was in the gburg JES group and part of the catchers
of ASP (from the west coast) to turn into JES3. She then was part of the
authors of JESUS (JES unified system) that combined all the features
that the JES2 and JES3 customers couldn't live w/o ... but it never got
very far because of various internal political issues.

She then got con'ed into going to POK to be in charge of mainframe
loosely-coupled architecture ... where she did peer-coupled shared
data architecture. She didn't remain long because her architecture saw
very little uptake ... except for IMS hot-standby (until SYSPLEX and
parallel SYSPLEX).  She was also being badgered by the communication
group to force her into using SNA for loosely-coupled operation (there
would periodically be a truce where communication group had strategic
ownership of everything that crossed the datacenter walls and she could
use whatever she wanted within a datacenter ... but then they would
start badgering again).

I can't speak to the other issues ... but on the JES2 networking side in
the 70s & 80s ... not only couldn't JES2 talk to anything else
... talking to another JES2 at a different release level could result in
taking down both JES2 and the MVS system. The issue was that JES2
networking implementation intermixed networking control and job control
fields and minor release-to-release changes resulted in incompatible
systems.

On the internal network, JES nodes were kept at edge boundary
nodes. Major internal network talked to JES nodes by using drivers that
emulated JES protocol ... and because of the issues with JES
incompatible release vulnerabilities ... a large library of internal
network software drivers grew up that would not only format fields
expected for the specific JES release being talked to ... but also handle
JES release-to-release reformatting ... allowing different JES systems to
communicate. I've periodically commented on the infamous case of files
from San Jose disk plant site JES system resulting in Hursley MVS
crashes ... and it was blamed on the Hursley internal network
software. The actual issue was some new release-to-release JES field
incompatibility and the internal network software driver library hadn't
been updated to handle the new case (as part of countermeasure for
keeping JES systems at different release levels from crashing MVS
system).

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: A New Performance Model ?

2015-04-09 Thread Anne & Lynn Wheeler
sipp...@sg.ibm.com (Timothy Sipples) writes:
> Storage isn't what it was in 1982, and that's the whole point. It's faster,
> more reliable, and ridiculously less expensive. We shift our attentions
> elsewhere, rightly so, at least in terms of degree of emphasis. We simply
> don't worry about kilobytes if we're rational. This year we worry about
> terabytes, and maybe in the future we won't even worry about those.

re:
http://www.garlic.com/~lynn/2015c.html#65 A New Performance Model ?

I've periodically mentioned that when measured in number of CPU cycles
access to storage (aka a cache miss) is similar to 60s access to disk
when measured in 60s CPU cycles (caches are the new storage and storage
is the new disk).

for decades other processors (especially risc, and then i86 when they
moved to risc cores with a hardware layer that translated from i86 to risc
micro-ops) have had lots of hardware features that attempt to
mitigate/compensate for (cache miss) storage access latency:
hyperthreading, out-of-order execution, branch prediction and
speculative execution.

the claim is that at least half of the z10-z196 per-processor throughput
improvement came from starting to introduce similar features ... with
further refinements moving to z12 & z13.

going back over 40 years, this shows up in the 195. I've periodically
mentioned getting con'ed into helping with the effort to add hyperthreading
to the 370/195 ... which was never announced/shipped. The issue was that
the 195 pipeline had out-of-order execution but didn't have branch
prediction or speculative execution ... so conditional branches drained
the pipeline. It took careful programming to get sustained 10MIPs
throughput ... but most codes (with conditional branches) ran at 5MIPs.
The objective with hyperthreading was to emulate a two-processor
multiprocessor, hoping that two instruction streams running at 5MIPs each
would achieve 10MIPs throughput.

it was basically red/blue mentioned in this 60s ACS/END reference
http://people.cs.clemson.edu/~mark/acs_end.html

Note that the above also points out that ACS-360 was shut down because
executives thought that it would advance the state-of-the-art too fast and
they would lose control of the market. It lists some of the ACS-360
features that show up more than 20yrs later with ES/9000.

The equivalent to 195 pipeline careful programming ... is careful code
ordering to minimize cache misses (in much the same way that 70s/80s
code was ordered to minimize page faults ... requiring disk accesses).
Recent discussion in comp.arch about (virtual memory and) VS/Repack out
of the science center in the 70s ... which did semi-automated code
reorganization for virtual memory operation. Before it was released to
customers, many internal development groups had been using it for
improving operation for virtual memory environment; they also used some
of the VS/Repack technology for hot-spot analysis.
http://www.garlic.com/~lynn/2015c.html#66 Messing Up the System/360

aka part of the decision to migrate all 370s to virtual memory. Old
post noting that the primary motivation for this was analysis that because
MVT storage management was so bad ... regions had to be specified four
times larger than what was typically used ... a typical 1mbyte storage
370/165 ran with four regions. With virtual memory, it would be possible
to run 16 regions and still result in little or no paging.
http://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory
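The code-reordering payoff (for page faults then, cache misses now) can be illustrated with a toy LRU fault counter — a sketch of the effect, not the actual VS/Repack algorithm:

```python
from collections import OrderedDict

def lru_faults(ref_trace, frames):
    """Count page faults for a page-reference trace under LRU
    replacement with a fixed number of real-storage frames."""
    resident = OrderedDict()
    faults = 0
    for page in ref_trace:
        if page in resident:
            resident.move_to_end(page)       # mark most-recently-used
        else:
            faults += 1
            if len(resident) >= frames:
                resident.popitem(last=False) # evict least-recently-used
            resident[page] = None
    return faults

# Routines A and B call each other in a tight loop.
bad_packing = [0, 3] * 50   # A on page 0, B on page 3 (pages 1-2 cold)
repacked    = [0, 0] * 50   # A and B placed together on page 0

# With one real frame, the repacked layout faults once; the bad
# layout faults on every single reference.
assert lru_faults(repacked, 1) == 1
assert lru_faults(bad_packing, 1) == 100
```

Same instructions executed, wildly different fault rates — which is why semi-automated repacking from reference traces was worth doing before shipping code into a paging environment.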


topic drift ... what the 370/195 didn't account for was that MVT/SVS/MVS
in the period introduced extraordinarily inefficient multiprocessor
overhead; the typical guideline was that two-processor operation had
1.3-1.5 times the throughput of a single processor.

this brings up the story about compare&swap ... invented by charlie at
the science center when he was doing work on fine-grain (efficient)
multiprocessor locking for (virtual machine) cp/67. The initial attempt to
have it included in 370 was rejected ... the 370 architecture owners
said that the POK favorite son operating system people were claiming
that test&set was more than sufficient for multiprocessor support
(partially accounting for their only being able to get 1.3 times the
throughput). cp67 (& later vm370) multiprocessor support could get close
to multiprocessor hardware throughput (with minimal introduced
multiprocessor operating system overhead). We were finally able to
justify compare&swap for 370 with examples of how multithreaded
applications could use compare&swap (regardless of single-processor
or multiprocessor operation) ... examples that continue to be
included in POO. past multiprocessor &/or compare&swap posts
http://www.garlic.com/~lynn/subtopic.html#smp
past science center posts
http://www.garlic.com/~lynn/subtopic.html#545tech
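The multithreaded-application argument can be sketched as the classic fetch/compute/retry pattern (a Python toy model in the spirit of, not copied from, the POO examples; a lock stands in for the hardware's atomicity guarantee):

```python
import threading

class AtomicCell:
    """Toy model of a storage word with a compare&swap primitive;
    the lock stands in for the hardware atomicity guarantee."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        return self._value

    def compare_and_swap(self, expected, new):
        """Atomically: if the current value == expected, store new."""
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def add(cell, n):
    """Fetch/compute/compare&swap retry loop: correct for
    multithreaded code on single- or multi-processor alike."""
    while True:
        old = cell.load()
        if cell.compare_and_swap(old, old + n):
            return   # nobody updated in between; done
        # else another thread won the race; refetch and retry

cell = AtomicCell(0)
threads = [threading.Thread(target=add, args=(cell, 1)) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert cell.load() == 50   # no lost updates, with no explicit lock in add()
```

The update either succeeds or detects interference and retries — which is what makes it usable by application code that can't hold a kernel lock, the argument that finally got the instruction into 370.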


-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

Re: A New Performance Model ?

2015-04-07 Thread Anne & Lynn Wheeler
idfzos...@gmail.com (Scott Ford) writes:
> Agree with you 100%.  Maybe they need a second pair of eyes to review the
> design. I know I do and I will bet other software designers and system
> programmers do. A second pair of eyes is like a Dr.'s second opinion. Like
> you mentioned, something was missed and the easy out was a mainframe
> upgrade. I agree with everyone on this one, sometimes it's lack of
> experience too.

the IBM science center pioneered a lot of performance methodologies in
the 60s & 70s ... hot-spot monitoring, system modeling, multiple
regression analysis, etc.

some of the system modeling work eventually evolves into capacity
planning. One of the system models was an analytical model done in APL.
The APL model evolves into the Performance Predictor, available on the
world-wide sales&marketing support HONE system ... branch offices could
obtain customer workload and system profile data ... feed it into the
Performance Predictor and ask what-if questions (aka what happens if
the workload changes, system configuration changes, more disks, more
memory, etc ... a major objective being to justify selling more hardware)
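A minimal what-if sketch in that spirit — a single-server queueing formula relating utilization to response time. The real Performance Predictor was a far richer APL model; the formula choice and numbers here are purely illustrative:

```python
# M/M/1 what-if: response time R = S / (1 - U), utilization U = rate * S.
# Small workload increases near saturation blow up response time -- the
# kind of question branch offices put to capacity-planning models.

def response_time(service_time, arrival_rate):
    u = arrival_rate * service_time
    if u >= 1.0:
        raise ValueError("system saturated")
    return service_time / (1.0 - u)

base = response_time(0.010, 50)      # 10ms service, 50 tx/sec -> U = 0.5
what_if = response_time(0.010, 80)   # what if workload grows to 80 tx/sec?

assert abs(base - 0.020) < 1e-12     # 20ms response time
assert abs(what_if - 0.050) < 1e-12  # 60% more load -> 2.5x response time
```

The non-linearity is the selling point: the model shows when a configuration is about to fall off a cliff, and what hardware change pulls it back.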

Around the start of the century I ran into a consultant that was making a
living from performance consulting to large mainframe datacenters in
Europe and the US. In IBM's downturn in the early 90s, IBM was unloading
some amount of its stuff ... and this consultant obtained the rights to a
descendant of the Performance Predictor and ran it through an APL->C
language converter.

We met at a large datacenter that had a 450kloc cobol program that ran
every night on 40+ max-configured mainframes (constantly being upgraded,
none older than 18 months ... the number required for the application to
finish in the overnight batch window).

The application had a few dozen people in its performance department that
had been working on it for decades ... primarily using hot-spot
methodology. Hot-spot tends to shine a light on sections that need logic
examination for doing things better ... working primarily with logic at
the micro-level.

The modeling work was fed workload & system activity data and identified
areas that resulted in a 7% improvement. I then used multiple regression
analysis with application activity data to spotlight some macro-level
logic that resulted in a 14% improvement. Remember that this is an
application that had a dedicated performance group with dozens of people
that had been working with this application for decades (but primarily
using hot-spot methodology ... which tends to focus on micro-level
logic).
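A hedged sketch of the macro-level technique: feed per-run activity counts into a multiple regression and read off which drivers dominate run time (synthetic data; the datacenter's actual variables and tooling are not public):

```python
# Multiple regression via the normal equations X'X b = X'y, solved with
# Gauss-Jordan elimination (pure stdlib; 2 predictors -> 3x3 system).

def solve3(A, b):
    m = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]          # partial pivoting
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [a - f * c for a, c in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

# Synthetic nightly runs: runtime = 2.0 + 0.5*records + 3.0*sort_passes
runs = [(r, s, 2.0 + 0.5 * r + 3.0 * s)
        for r in (10, 20, 40, 80) for s in (1, 2, 5)]

X = [[1.0, r, s] for r, s, _ in runs]        # columns: 1, records, sorts
y = [t for _, _, t in runs]
XtX = [[sum(xk[i] * xk[j] for xk in X) for j in range(3)] for i in range(3)]
Xty = [sum(xk[i] * yk for xk, yk in zip(X, y)) for i in range(3)]
beta = solve3(XtX, Xty)

# Regression recovers the macro-level drivers on this noise-free data.
assert all(abs(b - t) < 1e-6 for b, t in zip(beta, (2.0, 0.5, 3.0)))
```

Where hot-spot profiling points at individual routines, the fitted coefficients point at whole activity classes (here, each extra sort pass costs six records' worth of run time) — the macro-level view that surfaced the 14%.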

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: OT: Digital? Cloud? Modern And Cost-Effective? Surprise! It's The Mainframe - Forbes

2015-03-28 Thread Anne & Lynn Wheeler
Robert Wessel <robertwess...@yahoo.com> writes:
> IBM shipped about 20 360/91s, then a couple of 360/95s with a
> redesigned memory subsystem, then the 360/195 which re-implemented the
> same machine on a faster, denser logic process, then that was modified
> to include the basic S/370 extensions (no virtual memory) and shipped
> as the 370/195.  About 40 machines of all four types (combined) were
> shipped.

in the 70s, the 195 people sucked me into doing some stuff with them on a
370/195 multiprocessor that never shipped ... basically the red/blue
multithreading mentioned here
http://people.cs.clemson.edu/~mark/acs_end.html

the above also includes some other discussion of the 195 ... although
primarily the '60s 360 ACS ... which got canceled because executives
thought that it would advance the state-of-the-art too fast and they would
lose control of the market ... aka acs/360 would have been significantly
more cost-effective machines (it also describes some of the ACS features
that eventually show up in the 1990 ES/9000)

one of the things they told me was that another difference between the
360/195 & 370/195 (besides the non-virtual memory 370 instructions) was
hardware instruction retry ... which greatly improved reliability.

the 195 execution units could do 10mips but required careful programming
for the pipeline ... which did out-of-order execution ... but not branch
prediction or speculative execution ... so conditional branches would
drain the pipeline. As a result, most codes ran around 5mips. The
motivation for red/blue multithreading was that the 10mips execution
units would be kept busy by two 5mip threads.

recent posts mentioning 370/195
http://www.garlic.com/~lynn/2015.html#27 Webcasts - New Technology for System z
http://www.garlic.com/~lynn/2015b.html#61 ou sont les VAXen d'antan, was 
Variable-Length Instructions that aren't

this describes the decision to make all 370 machines virtual memory
... basically MVT storage management was so bad that typical region
sizes had to be specified four times larger than what was being used ...
a 1mbyte 370/165 running four regions could run 16 regions with virtual
memory and still have little or no paging.
http://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory

however, retrofitting 370 virtual memory hardware to the 370/165 (for the
165-II) was no trivial task ... eventually they decided to drop several
370 virtual memory features because they were too hard for the 165 ...
other machines would also have to drop those features ... and software
groups that had already written code using the dropped features would
have to rework it.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: New Principles of Operation (and Vector Facility for z/Architecture)

2015-03-10 Thread Anne & Lynn Wheeler
sipp...@sg.ibm.com (Timothy Sipples) writes:
> The IBM z13's ~139 SIMD instructions are different and new, yes. I expect
> that they represent a perfect functional superset of the long ago
> discontinued S/390 Vector Facility. However, it's probably not particularly
> useful to draw many parallels (!) with that older product. Yes, they are
> very different. As one example, every IBM z13 processor core incorporates
> the new SIMD instructions as a standard included feature. That's a much
> different, much lower latency design than the old, optional S/390 Vector
> Facility.
>
> If you have older code that was able to exploit the S/390 Vector Facility,
> I expect you could adapt it to exploit the new SIMD instructions. IBM's
> latest compilers can often help. However, you can do much, much more with
> the new instructions. Please see my other post about the IBM MASS and ATLAS
> libraries, for example. This IBM redpiece introduction is also a good,
> quick read:

the 3090 processor engineers complained some about adding vector to the
3090. their claim was that a big part of the vector rationale was that
floating point processing used to be so slow that typical memory bus
utilization was very low ... as a result it was possible to have a large
number of floating point execution units running concurrently and still
not saturate the memory bus. they claimed that they had improved 3090
floating point processing so that scalar floating point was capable
of keeping the memory bus busy. they felt that adding vector to the 3090
was pure marketing (since most applications would saturate the memory
bus just doing scalar floating point ... and adding additional
concurrent floating point execution units would rarely increase
effective throughput).
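the engineers' argument is essentially a roofline model: once scalar
floating point alone can saturate the memory bus, adding execution units
buys nothing. a minimal sketch (all numbers are illustrative, not 3090
specifications):

```python
# Roofline-style sketch: delivered throughput is capped by the smaller of
# compute capacity and memory-bus bandwidth (both in the same flop units).

def delivered_flops(unit_flops: float, n_units: int, bus_cap_flops: float) -> float:
    return min(unit_flops * n_units, bus_cap_flops)

slow_fp = delivered_flops(1.0, 8, 10.0)   # slow units: 8 of them still fit under the bus cap
fast_fp = delivered_flops(10.0, 1, 10.0)  # one improved scalar unit already hits the cap
vector  = delivered_flops(10.0, 8, 10.0)  # 8 fast units: still capped -> "pure marketing"
print(slow_fp, fast_fp, vector)  # 8.0 10.0 10.0
```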

these days the massive supercomputers have both the data and the
execution split across tens of thousands of systems.

SIMD greatly expanded type of things being done
http://en.wikipedia.org/wiki/Streaming_SIMD_Extensions

part of this is that the number of chip transistors has exploded and
they are constantly looking for what they can do with all those
transistors (other than design complexity, there is little incremental
cost ... and even that is mitigated with standard chip design libraries)

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: Economics of Mainframe Technology

2015-03-10 Thread Anne Lynn Wheeler
arthur.gutow...@gm.com (Art Gutowski) writes:
> If my notes are accurate from Ross' Keynote address to SHARE attendees
> in Seattle, mainframes account for 68% of production workloads, but
> only 6% of IT spend (exclusive of aggregate labor costs across
> platforms).  Given the armies of sysadmins to support *nix and windoze
> platforms, I gotta believe labor costs on these platforms eclipse
> those of the mainframe.

at the industry level ... one of the industries that didn't migrate off
mainframe was financial ... which tends to have much higher profit
margins than others. in the 90s, there was a big effort in the financial
industry to migrate to killer micros as lots of other industries were
doing ... that failed. those failures had much higher consequences in
financial ... and so they've tended to retrench and minimize their risks
for some period.

at the datacenter level ... a large cloud megadatacenter will have
hundreds of thousands of systems (more processing power than the
aggregate of all mainframes in the world today), massively automated
with a staff of 80-120 people. large cloud operators have claimed for a
decade or more that they assemble their own systems for 1/3rd the cost
of brand name vendors ... along with news about server chip vendors
starting to ship more chips to cloud operations than to brand name
vendors (possibly motivation for IBM to sell off their server chip
business).

there have been rumors that some of the brand name server vendors have
been doing side cloud business ... for a large volume order they will
price close to the costs that the large cloud operators claim
(while the massive automation is migrating out into corporations running
their own clouds).


-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: bloomberg article on ASG and Chapter 11

2015-03-07 Thread Anne Lynn Wheeler
0047540adefe-dmarc-requ...@listserv.ua.edu (Bill Johnson) writes:
> Bankruptcies are rarely a good thing. I've been through one.

trivia ... stockman goes into some detail about stock buybacks
(including IBM's) and characterizes them as mini-form of LBO.
http://www.amazon.com/Great-Deformation-Corruption-Capitalism-ebook/dp/B00B3M3UK6/

because of the bad rep from the S&L crisis, the industry changed its
name to private equity and junk bonds became high-yield bonds.

the industry has been borrowing money for LBOs, which has been
characterized as similar to house flipping. The difference is that the
loan goes on the bought company's books and goes with it after flipping
(rather than being paid off); private equity can sell for less than they
paid and still walk away with boat loads of money ... totally aside from
what they might loot from the company. the enormous (LBO) debt load
shows up in over half of corporate defaults involving companies
currently or formerly in private equity clutches. ... ref
http://www.nytimes.com/2009/10/05/business/economy/05simmons.html?_r=0

AMEX was in competition with KKR for the LBO of RJR ... and KKR wins.
The president of AMEX had been in competition to be the next CEO, and
wins. However, KKR then has some problems with RJR and hires the
president of AMEX away to turn it around. Then IBM goes into the red
and is in the process of being broken up into the 13 baby blues. The
board hires the former AMEX president away to resurrect IBM and reverse
the breakup ... and then he begins to apply some of the same techniques
used at RJR (also the start of a big upswing in stock buybacks)
http://www.ibmemployee.com/RetirementHeist.shtml

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: Anthem Healthcare Hacked

2015-02-08 Thread Anne Lynn Wheeler
hal9...@panix.com (Robert A. Rosenberg) writes:
> What is done with the Sensitive Data is important. In many cases,
> such as passwords, there is no need to know the actual data but only
> to compare it with some supplied value to see that it matches. Thus a
> stored one-way hashed value is secure since there is no way to unhash
> it; all that is needed is to hash the value you think it is and
> compare the two hashes.

re:
http://www.garlic.com/~lynn/2015.html#96 Anthem Healthcare Hacked

an issue is "something you know" shared secrets for authentication ...
pins, passwords, as well as other information you might know that can
be used for authentication: mother's maiden name, social security
number, date-of-birth, etc ...

... but skimming attacks can occur in the infrastructure before the data
is hashed. also hashing doesn't work when human operators are doing a
purely visual compare.

one of the worst is the financial industry ... where the account number
tends to be dual-use ... essentially used for authentication, but also
required in dozens of business processes at millions of locations around
the planet (security requirements that authentication info be kept
totally confidential and *NEVER* divulged conflict with requirements
that the same information be available to a large number of business
processes) ... harvesting can occur via breaches at backends, at any of
the business processes, at any of the transmission points, and at the
originating front-ends.

hashing for password repositories has been used for some time ... storing
hashed passwords was first done in unix in the early 70s:
http://en.wikipedia.org/wiki/Password#History_of_passwords
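a minimal sketch of the stored-hash scheme described above, using
Python's standard library (the salt and the constant-time comparison are
modern hardening additions, not part of the early-70s unix scheme):

```python
import hashlib
import hmac
import os

def make_record(password: str) -> tuple[bytes, bytes]:
    """Store only (salt, hash); the plaintext password is never kept."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check(password: str, salt: bytes, digest: bytes) -> bool:
    """Hash the supplied value the same way and compare the two hashes."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = make_record("hunter2")
print(check("hunter2", salt, digest), check("wrong", salt, digest))  # True False
```

note this only protects data at rest in the repository; as the post
says, skimming before the hash is applied defeats it.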

trivia ... above also mentions CTSS:
http://en.wikipedia.org/wiki/Compatible_Time-Sharing_System

then some of the CTSS people go to the science center on 4th flr
545 tech sq ... some past posts
http://www.garlic.com/~lynn/subtopic.html#545tech
others go to the 5th flr and do Multics
http://en.wikipedia.org/wiki/Multics

some of the people working on Multics return home and do a
simplified version that they call Unix.
http://en.wikipedia.org/wiki/Unix#History

above also references Greg Chesson ... who I worked with in the 80s,
when I was on the XTP technical advisory board.

-- 
virtualization experience starting Jan1968, online at home since Mar1970



ancient cobol applications

2015-02-07 Thread Anne Lynn Wheeler
local news just had an item about ancient software at state agencies: 619
major cobol applications developed in the 80s ... frequent
crashes/outages, almost impossible to maintain or change ... in part
because of the lack of cobol programmers. The state is even considering
setting up financial incentives for schools to produce cobol programmers.

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: Anthem Healthcare Hacked

2015-02-06 Thread Anne Lynn Wheeler
Anthem's stolen customer data not encrypted; But under federal law,
health insurance companies don't have to encrypt user data.
http://www.cnet.com/news/anthems-hacked-customer-data-was-not-encrypted/

In the early part of the century, I was co-author of a financial
industry privacy standard ... and we had some meetings with gov.
employees who had drafted the original HIPAA legislation back in the
70s. They mentioned that special interests had kept it from being passed
for decades ... and even once it was passed, there were no provisions
for actually doing anything (security-wise) about it.

we were also tangentially involved in the cal. state data breach
legislation ... having been brought in to help wordsmith the cal. state
electronic signature act.

A lot of the participants were heavily involved in privacy issues and
had done detailed, in-depth public surveys. The #1 issue was identity
theft, primarily in the form of fraudulent financial transactions as the
result of breaches, and there was little or nothing being done about the
breaches. An issue is that an entity/institution typically takes
security measures in self-protection. In the case of breaches, the
institution wasn't at risk ... its customers were. It was hoped that
the publicity from the breach notifications would prompt breach
countermeasures.

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: a bit of hope? What was old is new again.

2015-02-03 Thread Anne Lynn Wheeler
shmuel+ibm-m...@patriot.net (Shmuel Metz, Seymour J.) writes:
> FSVO this. IBM distributed service with preassembled modules. Only
> if you had updates would the service process reassemble.

re:
http://www.garlic.com/~lynn/2015.html#84 a bit of hope? What was old is new 
again
http://www.garlic.com/~lynn/2015.html#85 a bit of hope? What was old is new 
again
http://www.garlic.com/~lynn/2015.html#86 a bit of hope? What was old is new 
again
http://www.garlic.com/~lynn/2015.html#87 a bit of hope? What was old is new 
again

vm370 monthly service tapes (referred to as PLC or program level
change) had both the full original release source and all the
accumulated source updates ... besides having precompiled modules.

The burlington group had done a modified CMS TAPE program for release
and monthly PLC service tapes (VMFPLC). Among the things lost with the
shutdown of the Burlington development group were the source changes for
VMFPLC (one of the few things where the full source wasn't shipped). I
was possibly the only person in the company that still had the original
source for VMFPLC.  some discussion in this post
http://www.garlic.com/~lynn/2003b.html#42 VMFPLC2 tape format

In the late 70s, I was doing an internal backup/archive system for
internal datacenters and enhanced the VMFPLC source to add some
additional features and get higher tape data capacity (a lot more tape
record blocking, so there were fewer physical tape records). Originally
distributed mostly in the silicon valley area (including HONE) ... but
it started to spread through much of the rest of the company. This went
through several internal releases ... and then was enhanced for customer
release with lots of client applications for backing up distributed
environments and released as workstation datasave ... which morphs into
ADSM ... and then when IBM was unloading the disk division, morphs into
TSM. some old email
http://www.garlic.com/~lynn/lhwemail.html#cmsback
and past posts
http://www.garlic.com/~lynn/submain.html#cmsback
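the capacity gain from heavier blocking comes from amortizing the fixed
inter-record gap on tape; a sketch with illustrative 6250bpi-era numbers
(the gap size and block sizes below are assumptions for illustration,
not VMFPLC specifics):

```python
# Fraction of tape length holding data (vs inter-record gaps) as block size grows.
# 6250 bpi GCR-class density and a ~0.3" gap are illustrative era numbers.

def tape_utilization(block_bytes: int, bpi: int = 6250, gap_inches: float = 0.3) -> float:
    data_inches = block_bytes / bpi
    return data_inches / (data_inches + gap_inches)

small = tape_utilization(800)     # small unblocked records: mostly gaps
large = tape_utilization(32760)   # heavily blocked records: mostly data
print(round(small, 2), round(large, 2))
```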

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: a bit of hope? What was old is new again.

2015-02-02 Thread Anne Lynn Wheeler
edgould1...@comcast.net (Ed Gould) writes:
> yet IBM never delivered a source code maintenance system. Something
> that practically everyone was in need of.

re:
http://www.garlic.com/~lynn/2015.html#84 a bit of hope? What was old is new 
again.

the science center did the multi-level cms update source maintenance
system as part of a joint project with endicott to implement cp67 370
virtual machine emulation on the 360/67.

the non-virtual memory 370 emulation was originally used by branch
office HONE online cp67 systems to test new operating systems.

the full virtual memory 370 emulation was used for development of 370
virtual memory operating systems (i.e. 360/67 cp67 virtual memory
machine emulation was in regular production use a year before 370
virtual memory hardware became available).

cp67 distribution was always full source ... and customers typically
built their systems from the source. This continued with the vm370
follow-on ... new releases had a single source file per module ... and
then monthly maintenance/enhancement distribution was done via
incremental add-on updates ... with the cumulative source updates
included on every monthly maintenance/enhancement distribution. New
releases would merge the incremental updates into the base source files
and things would start again ... accumulating an increasing number of
incremental source updates.
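the scheme above amounts to a base source file plus an ordered list of
deltas, re-applied at build time; a toy sketch (the real CMS UPDATE
mechanism used control files and sequence numbers, this just shows the
cumulative application order):

```python
# Toy cumulative-update model: each monthly delta maps line index -> new text,
# and deltas are applied oldest-first so later updates supersede earlier ones.

def apply_updates(base: list[str], updates: list[dict[int, str]]) -> list[str]:
    source = list(base)
    for delta in updates:            # apply in order: oldest first
        for lineno, text in delta.items():
            source[lineno] = text
    return source

base = ["LINE0", "LINE1", "LINE2"]
plc1 = {1: "LINE1 fix"}
plc2 = {1: "LINE1 fix2", 2: "LINE2 fix"}
print(apply_updates(base, [plc1, plc2]))  # ['LINE0', 'LINE1 fix2', 'LINE2 fix']
```

at a new release, the merged result becomes the new base and the delta
list starts empty again.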

SHARE waterloo updates and customers used the same process for their
source changes ... and a large part of internal development did also
(which accounted for the origin of a lot of the OCO-wars). Note that for
new releases ... besides past incremental updates being incorporated
into the base source files ... there could also be a large amount of new
function/code added ... never before seen by customers as incremental
updates ... increasing the difficulty of release to release migration.
Tools were developed (both inside & at customers) that would analyze new
source releases and pick out differences from the latest previous
release (with all maintenance/changes applied) to facilitate the release
to release source transition.

There were also periodic internal fights ... where various MVS-based
products (like JES2) did all their stuff with CMS source maintenance
... but were required to convert to the official internal MVS process
for final integration.

Note after FS failure
http://www.garlic.com/~lynn/submain.html#futuresys

q&d 3033 (starting out as 168-3 logic mapped to 20% faster chips) and
3081 efforts were kicked off in parallel as part of the mad rush to get
stuff into the 370 product pipelines.

during the 3033 product life, there started to be minor (supervisor
state) tweaks made to the machine which were mandatory for new operating
system releases. The clone makers initially responded with operating
system patches to work with non-tweaked hardware. As patching the
operating system was made more and more difficult, the clone makers
eventually responded with macrocode ... basically 370 instructions
running in a new machine mode that would implement the tweak features
... this enormously simplified the implementation of such features
... compared to the enormous difficulty involved in generating native
microcode. This shows up in the 3090 timeframe when a clone vendor used
macrocode to create hypervisor support ... and it was a much larger (&
longer) effort for the 3090 to eventually respond with PR/SM.

In the current timeframe, things could be construed as customers having
their own programming support staff represents money that could
otherwise be spent on vendor software & services (2012 claim that
processor sales represented 4% of revenue ... but the total mainframe
group, including software & services, was 25% of total revenue and 40%
of profit).

The same efforts to inhibit clone vendor patches ... also increasingly
made it difficult for customers to move their changes to new releases
(they either stayed on their old hardware or moved to new clone hardware
that worked with the older releases). The OCO-wars could be viewed as
both inhibiting new operating system versions working on clone
processors and minimizing customer migration latency to latest software
releases and hardware models.

One of the worst case examples starts during the FS period: I continued
to work on 370s (and periodically ridiculed FS). Also one of my hobbies
was producing highly enhanced production operating system distribution
for internal datacenters (science center was on 4th flr of 545 tech sq,
and multics was on 5th flr of 545 tech sq, at one point I would needle
the multics crowd that I had more internal datacenters running my
enhanced operating systems than all the datacenters in the world running
multics). Anyway, for some reason, one of these versions was made
available to AT&T longlines ... which then made a lot of their own
enhancements and distributed it throughout a lot of AT&T. Nearly a
decade later the IBM AT&T national sales rep tracks me down to ask me to
help with AT&T. The decade-old operating system, AT&T would

Re: a bit of hope? What was old is new again.

2015-02-02 Thread Anne Lynn Wheeler
edgould1...@comcast.net (Ed Gould) writes:
> So, it was IBM saying if you don't run VM, FY?  I think the many MVS
> sites would take exception to that.  From my perspective VM was OK for
> some things but not for PRODUCTION.  VM was a sand box so the real
> work was to be done on MVS.

re:
http://www.garlic.com/~lynn/2015.html#84 a bit of hope? What was old is new 
again.
http://www.garlic.com/~lynn/2015.html#85 a bit of hope? What was old is new 
again.

depends on what you mean by "real work". Nearly all online, network
and various other kinds of real work inside IBM ... went on with VM.

there was the virtual machine based (initially cp67, then vm370),
world-wide online sales & marketing support HONE system ... and by the
mid-70s, an IBM mainframe couldn't be ordered w/o having been processed
on HONE. some past HONE posts:
http://www.garlic.com/~lynn/subtopic.html#hone

when I first transferred from Cambridge
http://www.garlic.com/~lynn/subtopic.html#545tech

to San Jose Research, they let me wander around various locations in
silicon valley (there used to be a joke that I worked 4 shifts a week:
1st shift at SJR, 2nd shift in disk engineering, 3rd shift at STL, now
SVL, and 4th shift/weekends at HONE).

when I first transferred, I found that testing in the disk engineering
and development lab was all being done stand-alone, with mainframes
scheduled 7x24 for testing. At one point they had tried to use MVS
(allowing multiple concurrent testing), but in that environment MVS had
a 15min MTBF (hang/fail requiring manual restart). I offered to rewrite
the I/O supervisor so that it would be bullet proof and never fail,
enabling multiple concurrent, on-demand testing ... significantly
improving productivity. I wrote an internal document describing all the
enhancements ... and happened to include the MVS 15min MTBF reference
... which drove the POK MVS group over the edge (not that it wasn't
true, but that I had made the information public inside IBM) ... and I
was eventually told that while they couldn't actually get me fired, they
would be able to make sure that I never got a corporate award for the
work.

One of the other side-effects was that, since it was my software, disk
engineering would constantly suck me into working on any of their
problems. They also started insisting that I sit in on conference calls
with POK channel engineers. past posts mentioning getting to play
disk engineer in bldgs. 14 & 15

old email reference that 3380s were about to ship and the MVS system was
hanging/crashing in *all* the standard FE error injection tests (and in
2/3rds of the cases, MVS left no indication of what caused the failure).
http://www.garlic.com/~lynn/2007.html#email801015

note that the internal network originated at the cambridge science
center (virtual machine based) and was larger than the arpanet/internet
from just about the beginning until sometime in the middle 80s. Most of
the nodes were vm370 and much of source development went on on vm370
systems (even for mvs based products). part of the issue was a severe
limitation in the MVS networking support ... the limit on max defined
nodes was much less than the total number of internal nodes ... also a
mixed up design where networking & job control information was
intermixed in headers ... resulting in traffic between dissimilar MVS
releases crashing MVS. As a result, any MVS systems were restricted to
edge nodes, fronted by VM370 systems that had special code to convert
all header information into the exact format required by the directly
connected specific release of MVS.
http://www.garlic.com/~lynn/subnetwork.html#internalnet

Note that this wasn't SNA ... at least not until the late 80s when the
communication group insisted that it convert to SNA ... which was a
major factor leading to its demise.

the same technology was also used for the corporate sponsored university
network BITNET (EARN in europe) which was also larger than
arpanet/internet for a time. some past posts
http://www.garlic.com/~lynn/subnetwork.html#bitnet

Also, the original relational/sql System/R was developed on a 370/145
vm370 at san jose research in the 70s. STL was responsible for IMS
... but nearly all the developers did their work on vm/cms. The
follow-on to IMS was EAGLE ... and while the corporation was preoccupied
with EAGLE, it was possible to get tech transfer of System/R to
Endicott, released as SQL/DS. When EAGLE finally imploded, they wanted
to know how fast System/R could be ported to MVS ... eventually released
as DB2.

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: a bit of hope? What was old is new again.

2015-02-02 Thread Anne Lynn Wheeler
re:
http://www.garlic.com/~lynn/2015.html#84 a bit of hope? What was old is new 
again.
http://www.garlic.com/~lynn/2015.html#85 a bit of hope? What was old is new 
again.
http://www.garlic.com/~lynn/2015.html#86 a bit of hope? What was old is new 
again.

part of the customer facing issue was that in the aftermath of the FS
failure in the mid-70s
http://www.garlic.com/~lynn/submain.html#futuresys

and the mad rush to get stuff back into the 370 product pipelines, the
head of POK convinced corporate to kill off the vm370/cms product, shut
down the burlington mall development group, and transfer all the people
to POK, otherwise he wouldn't be able to ship MVS/XA on schedule in the
80s. They weren't going to tell burlington until the very last minute to
try and minimize the number of people that might escape ... however the
information leaked and lots of the people got away (one of the jokes was
that the head of POK was one of the major contributors to DEC VAX/VMS).

Endicott managed to save the vm370/cms product mission ... but had to
reconstitute a development group from scratch ... and lots of the stuff
that was in progress in burlington never resurfaced.

it also put a major damper on significant enhancements in customer
releases. HONE had major enhancements for single-system-image cluster
operation with load-balancing and sharing across multiple multiprocessor
systems, even being able to handle fail-over between geographically
distributed complexes. A little of this finally shows up in a
customer release 30yrs later in 2009.

HONE was able to accomplish these enhancements despite being under
enormous pressure to move to MVS platform ... repeatedly they would be
directed that they had to move to MVS ... and put all resources into the
effort for a year or more ... only for it to eventually be declared a
complete failure.
http://www.garlic.com/~lynn/subtopic.html#hone

They even tried blaming me for HONE inability to move to MVS platform
... because HONE was one of my long time internal customers for enhanced
production operating systems (back to cp67 days).

note that in the early 70s, CERN did a SHARE mvs/tso vs vm370/cms
bakeoff report ... copies internally were classified confidential -
restricted ... aka available on a need to know basis only ... because it
made a mockery of what POK was claiming internally (even tho it was
freely available outside IBM). CERN & SLAC were long time production
vm370 customers ... and the first webserver outside of europe/cern was
on the slacvm system.
http://www.slac.stanford.edu/history/earlyweb/history.shtml
first web pages
http://www.slac.stanford.edu/history/earlyweb/firstpages.shtml

somewhat of a topic drift: slac did a bit-slice 168E that implemented
sufficient problem state to run 370 fortran programs ... they were used
to do initial data reduction from sensors along the linear accelerator.
this was then upgraded to a 3081E and they were used at both SLAC and
CERN for offline initial data reduction as well as online compute farms.
recent post in a.f.c. with other slac references:
http://www.garlic.com/~lynn/2015.html#79

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: a bit of hope? What was old is new again.

2015-02-02 Thread Anne Lynn Wheeler
000433f07816-dmarc-requ...@listserv.ua.edu (Paul Gilmartin) writes:
> Where I read:
> "... For example, one thing I try to do is to have our IT infrastructure
> employees trained to code so that they can automate repetitive tasks."
>
> In contrast to the Enterprise mindset frequently apparent here: "We don't
> want our general IT infrastructure employees coding."  And even (though
> less frequently lately): "How can I prevent my coders' using Unix System
> Services?"
>
> OTOH a current thread in IBMVM:
> https://listserv.uark.edu/cgi-bin/wa?A2=ind1502&L=IBMVM&O=D&F=&S=&P=1987
> "missing CUA 2001 package files..."
>
> ... explores the legal, technical, and social hazards of "garage tools"
> development.

IBM used FUD during the OCO-wars in the early 80s, including the
enormous risks of customers having source and allowing their programmers
to change it. It was part of a transition that included charging for
operating system software. there was some study showing that the
internal datacenters had an enormous library of operating system changes
... and there was a similar number of LOC in the waterloo library

in the 23jun1969 unbundling announcement, which started to charge for
(application) software, SE services, maintenance, etc ... the company
managed to make the case that operating system software should still be
free. some past posts
http://www.garlic.com/~lynn/submain.html#unbundle

then during the Future System period in the first part of the 70s,
internal politics was killing off 370 efforts (FS was completely
different and was going to completely replace 370). The dearth of 370
products during the FS period is credited with giving 370 clone
processor makers a market foothold. some past posts

with the death of FS, there was a mad rush to get stuff into the 370
product pipelines ... which contributed to the decision to release a
bunch of (370) stuff that I had been doing all during the FS period
(although I would periodically ridicule the FS activity, which wasn't
exactly career enhancing). Some of my stuff was then selected as the
guinea pig for starting to charge for operating system software (some
claim as part of the countermeasures to clone makers).

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: z13 new(?) characteristics from RedBook

2015-01-18 Thread Anne Lynn Wheeler
shmuel+ibm-m...@patriot.net (Shmuel Metz  , Seymour J.) writes:
> What is CP, chopped liver?

trivia ... (at least in the) 80s/90s ... the various vendor UNIX ports
to the mainframe ran under vm370 ... the issue was relying on vm370 for
error handling/recovery/EREP ... because adding such capability to UNIX
was a several times larger effort than the straight-forward port ...
*AND* hardware field support said they wouldn't maintain a system w/o it.

re:
http://www.garlic.com/~lynn/2015.html#43 z13 new(?) characteristics from 
RedBook
http://www.garlic.com/~lynn/2015.html#44 z13 new(?) characteristics from 
RedBook
http://www.garlic.com/~lynn/2015.html#45 z13 new(?) characteristics from 
RedBook

more trivia, UTS was an AT&T UNIX port ... but aix370/aixesa was a port
of UCLA's (unix work-a-like) LOCUS (and unrelated to workstation
AIX). LOCUS provided for distributed transparent operation across
dissimilar architectures (aix/370 was announced in combination with
aix/386)

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: z13 new(?) characteristics from RedBook

2015-01-16 Thread Anne Lynn Wheeler
000a2a8c2020-dmarc-requ...@listserv.ua.edu (Tom Marchant) writes:
> Today's processors have cache because main memory is _really_ slow
> compared to the processor. When the processor accesses something at a
> memory address, if the data at that location is in the cache, the
> processor can access it in one clock cycle (if it is in the on-chip
> cache) or a few clock cycles if it is farther away.

i.e. current latency to access memory (on a cache miss), when measured
in number of processor (clock) cycles ... is comparable to 60s disk
access latency when measured in terms of number of (60s) processor
(clock) cycles (cache is the new memory, main memory is the new disk).

that enormous idle time was what drove multiprogramming/multithreading
(in the 60s).
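why idle time drives multiprogramming can be sketched with the classic
rough model: if each job spends fraction w of its time waiting on I/O,
then n independent jobs keep the CPU busy about 1 - w^n of the time (the
80% wait fraction below is an illustrative assumption, and the model
ignores contention):

```python
# Classic rough multiprogramming model: CPU is idle only when every one of
# the n jobs is simultaneously waiting on I/O.

def cpu_utilization(io_wait_fraction: float, n_jobs: int) -> float:
    return 1.0 - io_wait_fraction ** n_jobs

print(cpu_utilization(0.8, 1))   # single job: CPU busy only ~20% of the time
print(cpu_utilization(0.8, 4))   # four initiators: ~59%
print(cpu_utilization(0.8, 16))  # sixteen initiators: ~97%
```

this is the same pressure behind the 4-initiator vs 16-initiator MVT
numbers in the next paragraph.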

this old account describes how increasing the level of multiprogramming
drove the decision to move to virtual memory for all 370s ... that and
the extremely inefficient MVT memory management ... aka MVT regions
typically needed to be four times larger than the actively used memory
... a typical 1mbyte 360/165 MVT ran four initiators; adding virtual
memory could increase it to 16 initiators with little paging impact.
http://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory

the introduction of out-of-order instruction execution for z196 is
claimed to account for at least half the (per processor) performance
increase from z10 to z196. this is basically a technique where, when one
instruction is stalled waiting for memory, the processor switches to
another instruction in the same instruction stream (sort of
multithreading of instruction execution at the micro level). This can
only go so far, in part because it quickly gets complex with subsequent
instructions stalled waiting for results from previous instructions.

recent ref (hyperthreading in the early 70s)
http://www.garlic.com/~lynn/2015.html#27 Webcasts - New Technology for System z

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: z13 new(?) characteristics from RedBook

2015-01-16 Thread Anne Lynn Wheeler
dcrayf...@gmail.com (David Crayford) writes:
 Better to compare it to the POWER arch
 http://www-03.ibm.com/systems/resources/pwrsysperf_SMT4OnP7.pdf.
 It may be CISC not RISC but those lines are getting more blurred with
 every new churn of z. I would imagine that the SIMD vector units also
 originate from POWER. It may seem far fetched but I can see a time in
 the not too distant future when the two architectures are converged.

note that in the 90s, i86 cisc started moving to a hardware layer that
translated cisc instructions into risc micro-ops for execution ...
pentium pro, 20yrs ago
http://en.wikipedia.org/wiki/Pentium_Pro

from above:

The Pentium Pro incorporated a new microarchitecture in a departure from
the Pentium x86 architecture. It has a decoupled, 14-stage
superpipelined architecture which used an instruction pool. The Pentium
Pro (P6) featured many advanced concepts not found in the Pentium,
although it wasn't the first or only x86 processor to implement them
(see NexGen Nx586 or Cyrix 6x86). The Pentium Pro pipeline had extra
decode stages to dynamically translate IA-32 instructions into buffered
micro-operation sequences which could then be analysed, reordered, and
renamed in order to detect parallelizable operations that may be issued
to more than one execution unit at once. The Pentium Pro thus featured
out of order execution, including speculative execution via register
renaming. It also had a wider 36-bit address bus (usable by PAE),
allowing it to access up to 64GB of memory.

... snip ...

... this was pipelined, so wasn't serialized ... so there has been a
shrinking difference between popular cisc and risc for a couple of
decades.
http://en.wikipedia.org/wiki/Instruction_pipeline

the above mentions the early pentium4 (2000) with a 20-stage pipeline and
the later pentium4 with a 31-stage pipeline

recent refs
http://www.garlic.com/~lynn/2014m.html#164 Slushware
http://www.garlic.com/~lynn/2014m.html#166 Slushware
http://www.garlic.com/~lynn/2014m.html#170 IBM Continues To Crumble

other recent posts on subject:
http://www.garlic.com/~lynn/2015.html#35 [CM] IBM releases Z13 Mainframe - looks like Batman
http://www.garlic.com/~lynn/2015.html#36 [CM] IBM releases Z13 Mainframe - looks like Batman
http://www.garlic.com/~lynn/2015.html#38 [CM] IBM releases Z13 Mainframe - looks like Batman
http://www.garlic.com/~lynn/2015.html#39 [CM] IBM releases Z13 Mainframe - looks like Batman
http://www.garlic.com/~lynn/2015.html#40 [CM] IBM releases Z13 Mainframe - looks like Batman
http://www.garlic.com/~lynn/2015.html#41 [CM] IBM releases Z13 Mainframe - looks like Batman
http://www.garlic.com/~lynn/2015.html#42 [CM] IBM releases Z13 Mainframe - looks like Batman
http://www.garlic.com/~lynn/2015.html#43 z13 new(?) characteristics from RedBook

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: z13 new(?) characteristics from RedBook

2015-01-16 Thread Anne Lynn Wheeler
john.archie.mck...@gmail.com (John McKown) writes:
 ​Yes. I remember some decades back reading that CISC was going to die due
 to RISC performing better with optimizing compilers​. That both did and
 didn't come true. The hardware exposed ISA is dominated by CISC on the high
 end (RISC ISA chips that I know of are ARM, Sparc, and MIPS) but the
 hardware internally is more like RISC. Conceptually, a bit like what IBM
 did with the TIMI for the i systems. Except that TIMI, from what I've read,
 is actually compiled into native code on the first execution and is stored
 in a hidden portion of the executable on disk. Said compiled code is
 footprinted and recompiled if the TIMI object is changed or, sometimes,
 when maintenance is applied to the i system software. I found the concept
 fascinating.

re:
http://www.garlic.com/~lynn/2015.html#44 z13 new(?) characteristics from 
RedBook

low & mid-range 360/370s were vertical microcode with standard cisc
engines. in the late 70s an effort was started to move the large number
of internal microprocessors to 801/risc (mostly 801/risc Iliad chips)
... including all of the 370 native engines (4361/4381 followon to
4331/4341), the as/400 (aka i-system), and numerous controllers. For
various reasons all of these efforts faltered (and you found various
801/risc engineers leaving and going to other vendors, spawning their
own risc projects) and reverting to business-as-usual cisc. A decade
after as/400 shipped with cisc, it (finally) migrated to (risc/801)
power/pc. some past posts mention 801/risc, iliad, romp, rios, fort
knox, power, power/pc, etc
http://www.garlic.com/~lynn/subtopic.html#801

one of the earliest such efforts was the AMD 29K (and IBM may have sued
AMD because former 801/risc engineer worked on it).  Some (former)
IBMers showed up working on HP snake risc ... and later Itanium.

801/risc as the native engine for microcoded 370 ... including some work
on just-in-time (JIT) compiling ... sequences of 370 code segments
dynamically translated to native risc (instead of repeated straight
interpretation). Equivalent JIT was later done for some of the i86-based
370 simulators ... and analogous JIT is being done for JAVA.
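A toy sketch of the JIT idea (invented instruction set, not actual 370 or 801 code): translate a code block once into a native function, cache it, and reuse the cached translation instead of re-interpreting every instruction on every pass:

```python
# Toy sketch of JIT-style translation: instead of re-interpreting each
# "370" instruction every time, translate a block once into a native
# (here: Python) function and cache it, much as the i86-based 370
# simulators did.  The two-op ISA here is invented for illustration.
def translate(block):
    """Compile a list of (op, operand) pairs into one Python function."""
    ops = []
    for op, val in block:
        if op == "ADD":
            ops.append(lambda acc, v=val: acc + v)
        elif op == "MUL":
            ops.append(lambda acc, v=val: acc * v)
        else:
            raise ValueError(op)
    def run(acc):
        for f in ops:
            acc = f(acc)
        return acc
    return run

translation_cache = {}

def execute(block_id, block, acc):
    # JIT: translate on first execution, reuse the cached translation after
    if block_id not in translation_cache:
        translation_cache[block_id] = translate(block)
    return translation_cache[block_id](acc)

prog = [("ADD", 3), ("MUL", 2)]
print(execute("loop1", prog, 5))   # (5+3)*2 = 16
```

The translation cost is paid once per block rather than once per execution, which is why it pays off for hot loops.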

other trivia, AIM/Somerset (apple, ibm, motorola) for single chip
801/risc (aka power/pc) somewhat combined the 801/risc rios with
Motorola's 88k risc (internal IBM 801/risc had a long history of not
supporting cache consistency, making multiprocessor implementations
difficult; Motorola's 88k did have cache consistency support).

The pentium-pro translation from i86 to risc could be considered similar
to the (long history of) 360/370 microcoded implementations ... but at
the hardware layer and pipelined. The 360/370 microcode implementations
typically avg. 10 native instructions for every 360/370 instruction and,
since they were serialized, needed a 10mip processor to get a 1mip
360/370. The pentium-pro, being pipelined, avoided that multiplier since
it was doing several things concurrently, in parallel in the pipeline.

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: Young's Black Hat 2013 talk - was mainframe tribute song

2015-01-11 Thread Anne Lynn Wheeler
charl...@mcn.org (Charles Mills) writes:
 I am not certain that MVS exposures versus lax security is a black and
 white dichotomy. It's easy to look after the fact at any breach and say
 aha! You should not have done X. I don't think the role of we security
 practitioners is solely pointing out exposures in MVS to IBM. I think
 helping customers with common less-than-ideal practices is more important.

 Logica was a professional service bureau with a professionally-maintained
 z/OS. They got breached. One might infer that other MVS sites, and not just
 those with lax (however defined) security practices, might also be
 vulnerable.

long ago and far away we were brought in as consultants to small
client/server startup that wanted to do payment transactions on their
server; they had also invented this technology they called SSL that they
wanted to use, the result is now frequently called e-commerce.

early experience found that RDBMS-based ecommerce servers had more
frequent exploits than flat-file based ecommerce servers ... these
weren't intrinsic to the environment ... it was that RDBMS-based
ecommerce servers were a lot more complicated ... and as a result people
were more prone to making mistakes resulting in exploits (there is some
amount of security literature about exploits proportional to
complexity, which is a counter to the periodic meme of security
through obscurity).

much more recently there have been some SQL-specific attacks
http://en.wikipedia.org/wiki/SQL_injection

which claims that they can attack any type of SQL database (although a
case might be made that SQL-injection is another characteristic of
RDBMS/SQL being more complex).

disclaimer: I periodically have stressed KISS as a major security theme.

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: Slushware

2014-12-29 Thread Anne Lynn Wheeler
alan_altm...@us.ibm.com (Alan Altmark) writes:
 Yet you never hear millicode being applied to storage controllers or
 other parts outside of the processor.  And you know as well as I do
 that they aren't replacing microcode on the processor chips.  They're
 replacing the OS and the applications that use them.  But we continue
 to call it microcode.  The joke's on us

re:
http://www.garlic.com/~lynn/2014m.html#161 Slushware
http://www.garlic.com/~lynn/2014m.html#163 Slushware
http://www.garlic.com/~lynn/2014m.html#164 Slushware
http://www.garlic.com/~lynn/2014m.html#166 Slushware

In 79/80 there was an effort to replace the myriad of internal
microprocessors with 801/risc ... 801 Iliad chips for the low & mid-range
370s, the 801 ROMP chip for the follow-on to the displaywriter, a new 801
chip for the AS/400 (follow-on to the s/36 & s/38), 801 chips for a wide
variety of (disk, tape, communication, etc) controllers, etc.

For various reasons all of these failed and things returned to business
as usual with various CISC chips ... and 801 chip engineers started
leaving for other vendors to work on risc programs there.

the follow-on to the 4331/4341, the 4361 & 4381, were originally to be
801 microprocessors with 370 simulation done in 801 software ... rather
than whatever preceding CISC processors were used (vertical microcode
that avg. ten native instructions per 370 instruction). There was even
work on JIT (just-in-time dynamic compiling of 370 into native 801/risc)
... somewhat analogous to what is seen with some modern day JAVA.

I helped with a white paper that shot down the use of 801/Iliad for the
4381 ... the story was that CISC chips were getting sophisticated enough
that much of the 370 instruction set could be directly implemented in
silicon ... rather than having to be simulated entirely in microcode
(software) ... resulting in significantly better price/performance.

as/400 eventually abandoned the 801/risc implementation, changing to a
traditional CISC microprocessor. However, a decade later AS/400 did move
over to 801/risc with power/pc. past 801/risc, iliad, romp, rios, power,
etc posts
http://www.garlic.com/~lynn/subtopic.html#801

A little later, IBM Germany did the (native) 370 ROMAN chipset.
Somehow somebody at Nixdorf (which did 370 clones) came into possession
of detailed specs for ROMAN. He sent it to somebody at Amdahl that he had
been working with ... who presented it to me to return to the rightful
owners (trying to avoid any litigation that might come from having come
into the possession of the document).

Turns out that I was trying to get a project going to package a few
dozen ROMAN chipsets in a rack. It was a sort of follow-on to something
I had gotten dragged into a few years earlier. I had access to an
engineering 4341 (before first customer ship) and got asked to do some
benchmarking for LLNL, which was looking at getting 70 4341s for a
compute farm (sort of precursor to modern grid & supercomputing). A
cluster of 4341s had more compute power than high-end mainframes, was
much cheaper, and required much less floor space and environmentals. old
4341 email
http://www.garlic.com/~lynn/lhwemail.html#4341

later I got involved in doing something similar ... but packing as many
801/RIOS chips in a rack as possible (instead of 370/ROMAN). some old
email
http://www.garlic.com/~lynn/lhwemail.html#medusa

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: Slushware

2014-12-28 Thread Anne Lynn Wheeler
re:
http://www.garlic.com/~lynn/2014m.html#161 Slushware
http://www.garlic.com/~lynn/2014m.html#163 Slushware
http://www.garlic.com/~lynn/2014m.html#164 Slushware

as an aside ... the hardware layer translating i86 instructions to risc
micro-ops for execution ... isn't serialized ... it is a pipelined
operation ... a simple version starts with overlapping instruction fetch
& decode with instruction execution
http://en.wikipedia.org/wiki/Instruction_pipeline

the above mentions that the pentium4/pentiumD had a 31-stage pipeline
... the longest in mainstream consumer computing

a longer pipeline affects the latency for any specific instruction
getting executed ... but doesn't (necessarily) limit the aggregate
instruction execution rate (since the operations are overlapped in
parallel).
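The latency-vs-throughput point can be put in numbers with an idealized (stall-free) pipeline model:

```python
# Idealized pipeline model: an S-stage pipeline takes S cycles to fill,
# then retires one instruction per cycle, so M instructions take
# S + M - 1 cycles.  Per-instruction latency grows with S, but the
# aggregate rate still approaches 1 instruction/cycle for long runs.
def pipeline_cycles(stages, instructions):
    return stages + instructions - 1

for stages in (14, 20, 31):       # P6, early Pentium 4, later Pentium 4
    cycles = pipeline_cycles(stages, 1_000_000)
    print(stages, round(1_000_000 / cycles, 4))   # instructions/cycle
```

All three pipeline depths sustain essentially 1.0 instructions/cycle over a million instructions; the depth only matters for fill/flush latency (which is why mispredicted branches hurt deep pipelines).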

there was a recent claim (in an ibm linkedin discussion) that there is
approx. 18-20 million MIPS of aggregate mainframe capacity in the world
today ... or the equivalent of around 270 max. configured EC12s
(@75BIPS) ... or about 15 e5-2699v3 blades (@1.3TIPS). A typical cloud
megadatacenter can have several hundred thousand blades ... and a
standardized virtualization/container facility goes a long way toward
simplifying the operation.
http://www.networkcomputing.com/cloud-infrastructure/virtual-machines-vs-containers-a-matter-of-scope/a/d-id/1269190
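The equivalence claimed above is straightforward arithmetic on the figures quoted (taking the upper end of the 18-20 million MIPS claim):

```python
# All figures converted to MIPS (1 BIPS = 1,000 MIPS; 1 TIPS = 1,000,000 MIPS).
aggregate_mips = 20_000_000        # upper end of the 18-20 million MIPS claim
ec12_mips      = 75 * 1_000        # max-configured EC12 @ 75 BIPS
e5_2699v3_mips = 1.3 * 1_000_000   # e5-2699v3 blade @ 1.3 TIPS

print(round(aggregate_mips / ec12_mips))       # EC12 equivalents
print(round(aggregate_mips / e5_2699v3_mips))  # e5-2699v3 blade equivalents
```

That works out to ~267 EC12s (the "around 270" above) and ~15 blades.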

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: Slushware

2014-12-27 Thread Anne Lynn Wheeler
000433f07816-dmarc-requ...@listserv.ua.edu (Paul Gilmartin) writes:
 It began nearly a half century ago with microcode implementation of S360
 models, and only slightly later, W. M. Waite's Mobile Programming System.
 Nowadays:

 microcode-millicode-PR/SM-VM-JVM-byte code

 How many layers have I neglected?  Hercules is a confluent branch.

note that the original hypervisor was done by Amdahl in something called
macrocode ... which was a layer above microcode and very close to
standard 370.

In the mid-70s, I had been sucked in by Endicott to help with microcode
assists for 138/148 ... vertical microcode machine that avg. 10
microcode instructions per 370 instruction (not that different from the
various intel based simulators). Was told that there were 6kbytes
available for microcode and that kernel instruction sequences dropped
into microcode on a nearly byte-for-byte basis ... so the task was to
identify the top 6kbytes worth of kernel instruction sequences ... that
would be moved to microcode for a 10:1 performance improvement. Old post
with results of the analysis ... turns out the top 6kbytes of
instruction sequences accounted for 79.55% of kernel time.
http://www.garlic.com/~lynn/94.html#21
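The selection task amounts to ranking kernel code sequences by measured time and taking them in order until the 6kbyte budget is spent; a sketch with invented profile data (a greedy ranking by time, not an exact knapsack solution):

```python
# Sketch of the 138/148 microcode-assist selection: rank kernel code
# sequences by measured time fraction, take them in order until the 6KB
# microcode budget is spent.  The profile data is invented for
# illustration; the real analysis used measured kernel profiles.
def pick_sequences(profile, budget_bytes):
    """profile: list of (name, size_bytes, time_fraction) tuples."""
    chosen, used, covered = [], 0, 0.0
    for name, size, t in sorted(profile, key=lambda e: e[2], reverse=True):
        if used + size <= budget_bytes:
            chosen.append(name)
            used += size
            covered += t
    return chosen, used, covered

profile = [
    ("dispatch",   2048, 0.30),
    ("page_fault", 3072, 0.25),
    ("svc_entry",  1024, 0.15),
    ("io_sched",   2048, 0.10),
]
chosen, used, covered = pick_sequences(profile, 6 * 1024)
print(chosen, used, round(covered, 2))
```

With these made-up numbers the top three sequences exactly fill the 6144-byte budget and cover 70% of kernel time, echoing the shape of the 79.55% result.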

In any case, I was giving presentations on the effort at the monthly bay
area user group meetings (BAYBUNCH) held at SLAC ... and the Amdahl
people were pumping me for additional details at the get togethers held
at local watering holes after the meetings (this was before hypervisor
was announced).

After hypervisor was announced ... the 3090 was eventually forced to
respond with PR/SM. Part of the issue was that the 3090 was a horizontal
microcode machine ... which was enormously more difficult to program
than 370 instructions.

I had been told that Amdahl had originally evolved macrocode to respond
to the enormous number of architecture tweaks that IBM had been doing on
their high-end (horizontal microcode) machines starting with the 3033
and continuing through the 3081 (macrocode being used to drastically
reduce the effort needed to respond).

I've mentioned before ... during FS period ... internal politics were
killing off 370 efforts (the lack of 370 products during this period is
credited with giving clone processor makers a market foothold) ... then
when FS imploded there was mad rush to get 370 products back into
pipeline. POK kicked off 3033 (initially 168 logic remapped to 20%
faster chips) and 3081 in parallel ... more detailed account here:
http://www.jfsowa.com/computer/memo125.htm

since neither the 3033 nor the 3081 was really that competitive, the
architecture tweaks would supposedly give the machines a competitive
advantage ... many were claimed to be performance improvements ... but
folklore is that many actually ran slower (than native 370). Part of the
issue is that the high-end, horizontal microcode machines were profiled
in terms of avg. machine cycles per 370 instruction ... by the 3033,
this was down to close to one machine cycle per 370 instruction (a 370
instruction moved to microcode couldn't see the 10:1 improvement seen on
the vertical microcode machines). In any case, it sort of drove Amdahl
into creating macrocode as a way of drastically simplifying being able
to respond to the increased architecture tweaking.

The other factor was that, as part of the mad rush after the FS failure,
the head of POK managed to convince corporate to kill off vm370, shut
down the development group and move all the people to POK ... claiming
that otherwise POK wouldn't be able to make the mvs/xa ship schedule
several years later (Endicott managed to save the vm370 product mission,
but had to reconstitute a development group from scratch). Part of the
POK activity was creating an XA virtual machine, VMTOOL (to support
MVS/XA development), that was never intended to be made available to
customers.

After initial introduction of 370/xa and MVS/XA ... there was very slow
uptake ... customers continued to run 3081s in 370 mode with MVS (or
vm370). The decision then was to release the VMTOOL as the migration
aid ... allowing customers to run both MVS and MVS/XA concurrently on
the same machine as an aid to migration. Amdahl's solution was the
hypervisor, which provided the same capability ... but much more
efficiently.

IBM eventually responded with PR/SM on the 3090 ... but it was a much
greater effort because it all had to be done in native horizontal
microcode.

The POK(/Kingston) group then pushed very hard to have the migration aid
morph into a standard VM/XA product. The problem was that VMTOOL had only
been developed for MVS/XA development and lacked lots of function and
performance features (especially compared to vm370 of the period) and
was going to require lots of resources, effort and time to bring up to a
level comparable to vm370. Somebody at an internal datacenter had made
the changes to vm370 to provide full function 370/XA support ...  which
would have been trivial to release. In the internal politics between POK
and Endicott, POK managed to prevail and 

Re: Slushware

2014-12-27 Thread Anne Lynn Wheeler
000433f07816-dmarc-requ...@listserv.ua.edu (Paul Gilmartin) writes:
 How many layers have I neglected?  Hercules is a confluent branch.

re:
http://www.garlic.com/~lynn/2014m.html#161 Slushware
http://www.garlic.com/~lynn/2014m.html#163 Slushware

for other hercules drift ... risc processors had a performance advantage
over intel ... risc having made extensive use of technology to
compensate for the increasing mismatch between memory latency and
processor speed ... out-of-order execution, speculative execution,
branch-prediction, etc ... sort of the hardware equivalent of '60s
multiprogramming to keep the processor busy while waiting for disk
access (current memory latency, measured in count of cpu cycles, is
comparable to 60s disk latency when measured in number of 60s cpu
cycles).

however, for nearly 20yrs, intel has had a hardware layer that
translates intel instructions into risc micro-ops for execution ...
largely negating any risc performance advantage.

note that somewhat similar (out-of-order, etc) technology started to be
introduced for z196 ... claiming it provided over half the performance
improvement from z10 to z196 ... and further additions responsible for
some of the z196 to ec12 performance improvement.


another technology (compensating for stalled instructions) is
hyperthreading. I first ran into it when I was asked to help the 370/195
group with a hyperthreading implementation they wanted to do. The
370/195 had a pipeline supporting out-of-order execution that could run
at 10mips ... but didn't have branch prediction ... so conditional
branches would stall the pipeline ... many codes only ran at 5mips. The
idea was to simulate multiprocessor operation with two instruction
streams and two sets of registers ... but still the same pipeline and
execution units (two 5mip instruction streams keeping the 10mip
execution units busy).  note that it dates back to
acs/360 in the late 60s ... see multithreading reference near the end of
this article
http://people.cs.clemson.edu/~mark/acs_end.html
also referenced here
http://en.wikipedia.org/wiki/Simultaneous_multithreading
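The two-i-stream 370/195 idea can be put in a toy model (hypothetical numbers, chosen so a single stream runs at half rate, matching the 5mips-of-10mips story):

```python
# Toy model of the proposed dual i-stream 370/195: each stream issues one
# instruction per cycle but, with no branch prediction, stalls the
# pipeline for STALL cycles on every conditional branch.  A second
# stream's instructions can fill the first stream's stall slots.
# Numbers are hypothetical, picked to reproduce the 5mip/10mip story.
def throughput(streams, branch_every, stall):
    # cycles per instruction for one stream: 1 + stall/branch_every
    per_stream_cpi = 1 + stall / branch_every
    # extra streams overlap each other's stalls, up to the full issue rate
    return min(streams / per_stream_cpi, 1.0)   # instructions/cycle

one = throughput(1, branch_every=5, stall=5)   # single stream: half rate
two = throughput(2, branch_every=5, stall=5)   # two streams: units kept busy
print(one, two)
```

With a branch every 5 instructions costing a 5-cycle stall, one stream achieves 0.5 instructions/cycle (the "5mips" case) and two interleaved streams reach 1.0 (the "10mips execution units busy" case).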

SPARC T5 can have 8chips/system, 16cores/chip and 128threads/chip (aka
8threads/core)
http://en.wikipedia.org/wiki/SPARC_T5

by comparison, at about the same time as the ec12, the e5-2600v1 had two
8core chips for 16 cores total and a 400-600+ BIPS rating (depending on
model) ... compared to a max configured (101 processors) EC12 @75BIPS.
both e5-2600v1 and ec12 processor chips are done in 32nm technology.

intel has a tick-tock chip generation
http://en.wikipedia.org/wiki/Intel_Tick-Tock

alternates shrinking the previous chip design onto a new process
technology (tick, e5-2600v2 22nm tech) and then designing a new chip for
that technology (tock, e5-2600v3 redesign in 22nm). some e5-2600v3 (& v4)
discussion
http://techgadgetnews.com/2014/09/21/intel-xeon-e5-2600-v3-haswell-ep-workstation-and-server-processors-unleashed-for-high-performance-computing/

E5-2690v1 at 632BIPS, E5-2690v2 at 790BIPS, E5-2690v3 at 996BIPS,
E5-2699v3 at 1.321TIPS.
http://www.tomshardware.com/reviews/intel-xeon-e5-2600-v3-haswell-ep,3932-7.html

note MIPS/BIPS/TIPS here are benchmark iterations compared to a 370/158,
assumed to be a 1MIP processor.

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: BDW length vs. Physical Length

2014-12-25 Thread Anne Lynn Wheeler
cblaic...@syncsort.com (Blaicher, Christopher Y.) writes:
 ECKD, which is what all modern DASD is, stands for Extended Count Key
 Data.  The 'Extended' refers to the channel commands you can issue,
 not the devices capabilities.  All blocks written to a ECKD device
 consist of a Count field, an optional Key field and a Data field.  The
 Count field is 8 bytes long and has a format of CCHHRKDD.  (Extended
 format volume count fields are formatted slightly differently, but for
 basics, this will do.)

all modern disk is fixed-block; there hasn't been any real CKD DASD
manufactured for decades ... it is all emulated on industry-standard
fixed-block disks.
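As a hedged sketch of what such emulation involves (real emulated-CKD controllers pack variable-length records; this assumes, purely for illustration, a 3390-style geometry where each emulated track gets a fixed run of 512-byte blocks):

```python
# Hedged sketch: mapping a CKD track address (the CCHH part of a
# BBCCHHR/CCHHR seek address) onto an industry fixed-block disk.  Real
# emulated-CKD controllers pack variable-length records; here each
# emulated track is simply given a fixed run of 512-byte blocks, and
# both geometry constants are assumptions for illustration only.
HEADS_PER_CYL    = 15     # 3390-style 15 heads per cylinder (assumed)
BLOCKS_PER_TRACK = 112    # ~56KB emulated track / 512-byte blocks (assumed)

def track_to_lba(cyl, head):
    """Return the first fixed-block LBA of the emulated track at (cyl, head)."""
    if not 0 <= head < HEADS_PER_CYL:
        raise ValueError("bad head number")
    track = cyl * HEADS_PER_CYL + head
    return track * BLOCKS_PER_TRACK

print(track_to_lba(0, 0))   # first track starts at LBA 0
print(track_to_lba(1, 0))   # cylinder 1 starts 15 tracks in
```

The controller then locates individual records within the track's block run from its own metadata; that record-level packing is the part this sketch omits.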

ECKD started out when MVS couldn't support FBA ... and they wanted to
retrofit 3380 3mbyte/sec disks to 168 & 3033 1.5mbyte/sec channels
... getting CALYPSO to work was something of a horror story

past posts mentioning FBA, CKD, multi-track seek, etc
http://www.garlic.com/~lynn/submain.html#dasd

past posts specifically mentioning CALYPSO
http://www.garlic.com/~lynn/2004o.html#7 Integer types for 128-bit addressing
http://www.garlic.com/~lynn/2007e.html#40 FBA rant
http://www.garlic.com/~lynn/2007f.html#0 FBA rant
http://www.garlic.com/~lynn/2008q.html#40 TOPS-10
http://www.garlic.com/~lynn/2009k.html#44 Z/VM support for FBA devices was Re: z/OS support of HMC's 3270 emulation?
http://www.garlic.com/~lynn/2009p.html#11 Secret Service plans IT reboot
http://www.garlic.com/~lynn/2010e.html#36 What was old is new again (water chilled)
http://www.garlic.com/~lynn/2010h.html#30 45 years of Mainframe
http://www.garlic.com/~lynn/2010n.html#14 Mainframe Slang terms
http://www.garlic.com/~lynn/2011e.html#35 junking CKD; was Social Security Confronts IT Obsolescence
http://www.garlic.com/~lynn/2012j.html#12 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
http://www.garlic.com/~lynn/2012o.html#64 Random thoughts: Low power, High performance

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: z/OS MD5 file hashing

2014-12-05 Thread Anne Lynn Wheeler
002782105f5c-dmarc-requ...@listserv.ua.edu (Frank Swarbrick) writes:
 Does anyone know of a program/subroutine that can read any kind of
 MVS sequential dataset and calculate an MD5 hash on it?  By any kind
 I am specifically meaning a file that is either FB or VB and can have
 any LRECL.

note, MD5 has been deprecated for some time (a decade ago, I was getting
real-time messages from somebody in the crypto rump session where the
compromise was being described) ... and was asked to do a list of
internet RFCs mentioning/referencing MD5.
http://en.wikipedia.org/wiki/Collision_attack

my rfc index
http://www.garlic.com/~lynn/rfcietff.htm

see special list of RFCs referring to MD5

more recent reference

6151 I
 Updated Security Considerations for the MD5 Message-Digest and the
 HMAC-MD5 Algorithms, Chen L., Turner S., 2011/03/06 (7pp)
 (.txt=14662) (Updates 1321, 2104) (Refs 1321, 1939, 2104, 2202,
 4231, 4270, 4493) (Ref'ed By 6150, 6176, 6331, 6421, 6528, 6542,
 6668, 6920, 6929, 6931, 6952, 7217, 7292, 7298, 7317, 7321, 7376) 

from above:

2.  Security Considerations

MD5 was published in 1992 as an Informational RFC.  Since that time, MD5
has been extensively studied and new cryptographic attacks have been
discovered.  Message digest algorithms are designed to provide
collision, pre-image, and second pre-image resistance.  In addition,
message digest algorithms are used with a shared secret value for
message authentication in HMAC, and in this context, some people may
find the guidance for key lengths and algorithm strengths in [SP800-57]
and [SP800-131] useful.

MD5 is no longer acceptable where collision resistance is required such
as digital signatures.  It is not urgent to stop using MD5 in other
ways, such as HMAC-MD5; however, since MD5 must not be used for digital
signatures, new protocol designs should not employ HMAC-MD5.
Alternatives to HMAC-MD5 include HMAC-SHA256 [HMAC] [HMAC-SHA256] and
[AES-CMAC] when AES is more readily available than a hash function.

... snip ...
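In the spirit of the original question, a chunked file-hashing sketch using Python's hashlib; the FB/VB record-format and LRECL handling on z/OS is the real work and is not shown here, and the algorithm name is a parameter so moving off deprecated MD5 is a one-word change:

```python
import hashlib

# Chunked file hashing.  The algorithm is a parameter, so switching from
# the deprecated MD5 to SHA-256 is a one-word change.  (Reading FB/VB
# datasets with arbitrary LRECLs on z/OS is the hard part and is not
# shown; this just hashes a byte stream.)
def file_digest(path, algorithm="sha256", chunk=64 * 1024):
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        while True:
            data = f.read(chunk)
            if not data:
                break
            h.update(data)
    return h.hexdigest()

# e.g. file_digest("dataset.bin")          -> SHA-256 hex digest
# e.g. file_digest("dataset.bin", "md5")   -> MD5, for legacy comparison only
```

Per the RFC 6151 guidance quoted above, MD5 output should only be produced where an existing counterpart must be matched; new designs should use SHA-256 (or HMAC-SHA256 for keyed authentication).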

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: Page Data Set Sizes and Volume Types

2014-12-04 Thread Anne Lynn Wheeler
t...@tombrennansoftware.com (Tom Brennan) writes:
 Me too - until just a few days ago when I happened upon a number of
 3380's defined at a client site.  All I can guess is these were still
 real 3380's at the time they needed to be moved to a DS8000.  TASID
 shows them as 3380-TC3 (whatever that is) at 3,339 cyls.  I think I
 remember a type 3380-K (triple density?), but much of those years is
 just a blur to me.

the original 3380 (1981) had a twenty-track-width spacing between its
(885) tracks ... they then doubled (1770 tracks, 3380E, 1985) and then
tripled (2655 tracks, 3380K, 1987) the number of tracks ... by cutting
the inter-track spacing.
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3380.html

3390 announce nov1989
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3390.html

however, as periodically mentioned, there haven't been real CKD DASD
manufactured for decades, all just emulation on industry standard
fixed-block disks.
http://www.garlic.com/~lynn/submain.html#dasd

as an aside ... even the 3380's CKD was really (32byte) fixed-block
http://www.bitsavers.org/pdf/ibm/dasd/reference_summary/GX26-1678-0_3380_Reference_Summary_Feb83.pdf

all disk technology was moving to fixed-block by the late 70s ... but
MVS's inability to come up with fixed-block support required CKD
emulation long after CKD was obsolete.

there was a special 3380j at the end of 1988 ... which had an avg. seek
of 12ms and max. seek of 21ms ... compared to 16ms & 29ms for the 3380k
... but the 3380j had only 885 tracks (same capacity as the original
3380) ... one is tempted to believe that the 3380j might really have
been a 3380k limited to accessing only 1/3rd of the platter (note seek
time isn't strictly linear since there is acceleration latency).

recent post mentioning early 80s semi-facetious discussion at SHARE
about doing a fast 3380 (with fewer tracks by microcode change). 
http://www.garlic.com/~lynn/2014m.html#87 Death of spnning disk?

From IBM 3380 history reference:

In September 1987, IBM announced a significant extension to the 3380
series: the Model K DASD that stored 7.5 billion characters of
information, and the densest disk device IBM ever manufactured; and the
high-speed Model J, which could locate data faster than any previous
3380 DASD. The Model J found the correct information track in an average
time of just 12 thousandths of a second. Customers who installed Model
Js, which could store 2.5 billion characters of data, could upgrade it
to the denser Model K.

... snip ...

-- 
virtualization experience starting Jan1968, online at home since Mar1970


