Re: zEC12, and previous generations, why? type question - GPU computing.

2012-09-07 Thread William Donzelli
 Mainframes? The 8100 was a series of small machines that grew out of
 the 3790. They were no more mainframes than their competitor, the S/1.
 Perhaps you are thinking of DPPX/370, which ran on the 9370.

It is debatable (although maybe we shouldn't debate it here!) - the 8100 was
a DPD product, not GSD like the midranges.

--
Will



Re: zEC12, and previous generations, why? type question - GPU computing.

2012-09-06 Thread Lloyd Fuller
----- Original Message -----
From: zMan zedgarhoo...@gmail.com
To: IBM-MAIN@LISTSERV.UA.EDU
Sent: Wed, September 5, 2012 5:17:37 PM
Subject: Re: zEC12, and previous generations, why? type question - GPU computing.

On Wed, Sep 5, 2012 at 2:28 PM, Shmuel Metz (Seymour J.) 
shmuel+...@patriot.net wrote:

 I've always wondered what would have happened had IBM used a 370
 instruction set on the PC instead of Intel.

16MB ought to be enough for anybody? :-)

Since IBM wasn't manufacturing the chips, of course that wasn't even on the
table, but it's still a VERY interesting Gedankenexperiment...
-- 
zMan -- I've got a mainframe and I'm not afraid to use it

They could have gone with the Motorola 6800x chips instead.  However, I am not
sure that Motorola would have committed to producing as many chips as IBM
thought it needed.  The 6800x is what was used for the 370 part of the PC/370
systems.

Lloyd



Re: zEC12, and previous generations, why? type question - GPU computing.

2012-09-06 Thread zMan
AT/370.

On Thu, Sep 6, 2012 at 2:07 AM, Leopold Strauss 
leopold.stra...@isis-papyrus.com wrote:

 Yes.

 It was a microprogrammed Motorola 68000 chip that was used. The name was
 similar to PC/370, but I am not sure about that.
 Many years ago the company where I was employed at the time had one
 briefly for testing purposes. I seem to remember it was when the
 3033 systems came up (before the 3081/3083).




 On 06.09.2012 07:58, George Henke wrote:

 I believe IBM produced a pc with a 370 to run VM on a PC.  Merrill Lynch
 had one.  Somewhere in the late 80's I believe.





-- 
zMan -- I've got a mainframe and I'm not afraid to use it



Re: zEC12, and previous generations, why? type question - GPU computing.

2012-09-06 Thread Lloyd Fuller
There were two: the PC/370 and the AT/370. I am not sure that many PC/370s
got distributed, as they were really SLOW. My old company had both for a
while.

Lloyd




Re: zEC12, and previous generations, why? type question - GPU computing.

2012-09-06 Thread Anne Lynn Wheeler
gahe...@gmail.com (George Henke) writes:
 I believe IBM produced a pc with a 370 to run VM on a PC.  Merrill Lynch
 had one.  Somewhere in the late 80's I believe.

re:
http://www.garlic.com/~lynn/2012l.html#72 zEC12, and previous generations, 
why? type question - GPU computing

1984, the xt/370 ... later the same board was made available on the at as the at/370.

basically a couple of M68ks microcoded to execute a 370 subset for running
vm370 ... code-named washington. it didn't support i/o ... so vm370 was
modified to communicate with a monitor running under dos on the 8088 for
all i/o functions. it provided approx. a 100kips 370 with 384kbytes of
memory ... a little bit faster than a 370/115. however, all disk i/o
(paging, cms file, etc) was being done on a 100ms (per block) dos hard
disk. By that time, vm370 and cms had gotten quite a bit bloated ... much
larger than the cp67/cms that would run on a 256kbyte 360/67. Also, any
kind of disk i/o (paging, file activity) could become extremely painful
... compared to what one was used to with real mainframe disks.

I got con'ed into doing some work on it ... first thing, simple paging
tests showed almost any cms application would page thrash in the
pageable pages left over after the vm370 kernel fixed storage
(out of 384kbytes) ... exacerbated by paging on the dos xt disk. I got
blamed for a several month schedule slip in the product while they
upgraded the memory from 384kbytes to 512kbytes ... to cut down on the
severe paging problems. However, cms applications tended to be much
more file intensive than (and fared poorly in comparison with)
equivalent applications developed for the DOS/XT resource-limited
environment.
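
As a rough back-of-envelope of why those numbers hurt (a sketch in C; the
384/512 kbyte totals, 4kbyte 370 pages, and 100ms-per-block disk are from
the description above, while the vm370 fixed-storage and working-set
figures are purely illustrative assumptions, not measured numbers):

/* back-of-envelope: paging exposure on the xt/370's dos disk.
   from the post: 384kb or 512kb real storage, ~100ms per disk block,
   4kb 370 pages. ASSUMED for illustration only: ~260kb vm370 fixed
   storage and a 200kb cms application working set. */
#include <stdio.h>

int main(void) {
    const double page_kb    = 4.0;    /* 370 page size */
    const double io_ms      = 100.0;  /* dos disk, per block (from post) */
    const double kernel_kb  = 260.0;  /* ASSUMED vm370 fixed storage */
    const double workset_kb = 200.0;  /* ASSUMED cms app working set */
    const double totals[2]  = {384.0, 512.0};

    for (int i = 0; i < 2; i++) {
        double pageable = totals[i] - kernel_kb;   /* left for paging */
        double frames   = pageable / page_kb;      /* available page frames */
        double deficit  = workset_kb / page_kb - frames;
        if (deficit < 0) deficit = 0;
        printf("%3.0fkb box: %3.0fkb pageable (%2.0f frames); working-set "
               "deficit %2.0f pages -> steady stream of %3.0fms faults\n",
               totals[i], pageable, frames, deficit, io_ms);
    }
    return 0;
}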

I had tried to start a project to implement a super lean and fast vm370
replacement kernel in pascal. As a demo I had re-implemented the vm370
kernel spooling function in pascal, running in a virtual address space. My
objective was to enormously increase the throughput and performance
compared to the assembler-implemented kernel equivalent.

I had another agenda ... I was also doing a high-speed data transport
project ... and for vm370 vnet ... which was dependent on vm370 spool
... I needed multi-megabyte sustained thruput to drive the links I had.
misc. past posts mentioning hsdt

I indirectly referenced it in a previous post regarding work with NSF on
what was to become the NSFNET backbone ... also the original mainframe
tcp/ip product was done for vm370 in pascal ... and I did the rfc1044
enhancements that got sustained channel thruput between a 4341 and a cray
machine using only a modest amount of the 4341 processor (about a 500 times
improvement in bytes moved per instruction executed). misc. past posts
mentioning the NSFNET backbone
http://www.garlic.com/~lynn/subnetwork.html#nsfnet

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: zEC12, and previous generations, why? type question - GPU computing.

2012-09-06 Thread Shmuel Metz (Seymour J.)
In
caccgc5dh-2kebjfpnmt0apwdcjjaondevelz6wnufcla7og...@mail.gmail.com,
on 09/06/2012
   at 01:58 AM, George Henke gahe...@gmail.com said:

I believe IBM produced a pc with a 370 to run VM on a PC. 

XT/370 and AT/370 used a 68000 with custom microcode and a second
68000 with standard microcode. The software for it was VM/PC.

Note that the later P/370 and R/370 cards implemented the full
architecture and ran stock operating systems.

-- 
 Shmuel (Seymour J.) Metz, SysProg and JOAT
 Atid/2    http://patriot.net/~shmuel
We don't care. We don't have to care, we're Congress.
(S877: The Shut up and Eat Your spam act of 2003)



Re: zEC12, and previous generations, why? type question - GPU computing.

2012-09-06 Thread Anne Lynn Wheeler
 XT/370 and AT/370 used a 68000 with custom microcode and a second
 68000 with standard microcode. The software for it was VM/PC.

 Note that the later P/370 and R/370 cards implemented the full
 architecture and ran stock operating systems.

re:
http://www.garlic.com/~lynn/2012l.html#72 zEC12, and previous generations, 
why? type question - GPU computing
http://www.garlic.com/~lynn/2012l.html#74 zEC12, and previous generations, 
why? type question - GPU computing

between the xt/at/370 and the p370/p390 was the a74 (7437), done in POK by
the same group that had done the 3277GA (i.e. a large Tektronix graphics
tube that plugged into the side of a 3277 terminal) ... the a74 was their
POK dept's project.

I got con'ed into making the vm370 modifications for them ... including
handling that the a74 only supported 4k storage keys. this is a long-winded
post ... with a bunch of old a74 press at the bottom
http://www.garlic.com/~lynn/2002d.html#4 IBM Mainframe at home
post with some of the discussion of the vm370 changes I did for a74
http://www.garlic.com/~lynn/2003f.html#56 ECPS:VM DISPx instructions

old internal A74 email on the announcement of a74
http://www.garlic.com/~lynn/2000e.html#email880622
in this post
http://www.garlic.com/~lynn/2000e.html#56 Why not an IBM zSeries workstation?

the a74 was 350kips (370 instructions, compared to 100kips for the xt/at/370)

and the ROMAN chip set was 168-speed, about 3mips ... mentioned in a previous post.

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: zEC12, and previous generations, why? type question - GPU computing.

2012-09-06 Thread Anne Lynn Wheeler
sipp...@sg.ibm.com (Timothy Sipples1) writes:
 Keep in mind that for 1975 this was absolutely amazing technology, but
 amazing technology required some expense. Being early is pricey. If the
 5100 debuted in, say, 1977 or 1978, it would have still been well timed but
 could have dramatically reduced the chip and board count. I also think the
 small built-in monitor could have been sacrificed (at least as an option) in
 favor of a display port of some kind -- ideally RF for TV hookup. And IBM
 might have gone with a diskette drive for storage -- the 5100 was too early
 for the 5.25 inch drive, which debuted in 1976. Finally, if IBM had
 provided a little more guidance on the 370 subset instruction set they
 implemented, software developers could have taken over from there.

 So I think the 5100 could have been a nice 5110 by tweaking the recipe a
 bit. But history didn't happen that way.

re:
http://www.garlic.com/~lynn/2012l.html#72 zEC12, and previous generations, 
why? type question - GPU computing
http://www.garlic.com/~lynn/2012l.html#74 zEC12, and previous generations, 
why? type question - GPU computing
http://www.garlic.com/~lynn/2012l.html#77 zEC12, and previous generations, 
why? type question - GPU computing


put all logic in microcode
http://en.wikipedia.org/wiki/IBM_PALM_processor

5100 had enuf 360 microcode emulation to run apl/360
http://en.wikipedia.org/wiki/IBM_5100

from above:

The 5100 was based on IBM's innovative concept that, using an emulator
written in microcode, a small and relatively cheap computer could run
programs already written for much larger, and much more expensive,
existing computers, without the time and expense of writing and
debugging new programs.

Two such programs were included: a slightly modified version of APL.SV,
IBM's APL interpreter for its System/370 mainframes, and the BASIC
interpreter used on IBM's System/3 minicomputer. Consequently, the
5100's microcode was written to emulate most of the functionality of
both a System/370 and a System/3.

IBM later used the same approach for its 1983 introduction of the XT/370
model of the IBM PC, which was a standard IBM PC XT with the addition of
a System/370 emulator card.

... snip ... 

part of the issue was that apl code was fairly dense ... and apl\360
workspaces were typically 16kbytes (some systems offered 32kbytes).

cambridge science center had taken apl\360 ... stripped out all the
multitasking and swapping stuff and got it to run under cms with workspaces
as large as the virtual memory ... for cp67 cms\apl. some amount of work
had to be done on how apl\360 managed storage, since it tended to use all
available workspace ... which resulted in page thrashing in a virtual
memory environment. there was also a cms\apl API to access system services
(including file i/o). The combination of large workspaces and file i/o
allowed doing a lot of real-world applications (that couldn't be done
with apl\360). The business planners in Armonk loaded the holiest of
holy data (detailed customer profiles) on the cambridge system for
business modeling in cms\apl. This also created something of a security
issue, since cambridge also allowed non-employee access from various
institutions in the cambridge area (students, staff, faculty).
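
A toy illustration of that thrashing mechanism (a C sketch, not actual
apl\360 code): an allocator that bumps through the whole workspace before
garbage collecting touches every page, so under demand paging its working
set is the workspace size rather than the (possibly tiny) live-data size.

/* toy model of apl-style transient allocation: keep allocating
   fresh storage, never reusing anything until a garbage collect.
   every page of the workspace gets touched even though the live
   data may be tiny. not actual apl\360 code. */
#include <stdio.h>

#define PAGE_BYTES 4096
#define WS_PAGES   4096                  /* illustrative large workspace */

static char workspace[WS_PAGES][PAGE_BYTES];

int main(void) {
    size_t next = 0, touched = 0;
    for (long alloc = 0; alloc < 100000; alloc++) {
        workspace[next][0] = 1;          /* first touch faults the page in */
        if (next >= touched) touched = next + 1;
        next = (next + 1) % WS_PAGES;    /* no reuse until GC */
    }
    printf("pages touched: %zu of %d (regardless of live data)\n",
           touched, WS_PAGES);
    return 0;
}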

the palo alto science center then did the enhancements to make vm370
apl\cms ... they also did the 370/145 apl microcode assist and the 5100.

the person that did the 370/145 apl microcode assist was also instrumental
in many of the fortran hx performance enhancements.

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: zEC12, and previous generations, why? type question - GPU computing.

2012-09-06 Thread William Donzelli
 360s, 370s, etc ... have been microcode implemented on a variety of other
 kinds of engines. circa 1980 there was an effort to replace the wide
 variety of internal microprocessors used for controllers, low & mid-range
 370s, the planned as/400 replacement for s/38, etc ... all with 801/risc
 Iliad chips. For various reasons the efforts floundered and they went
 back to doing custom processor implementations.

Was this effort in some way related to, or in competition with, the UC
series of controllers? Quite a lot of machines used those internally,
and they even popped out with the 8100 series (the mainframes that
have fallen into the memory hole).

--
Will



Re: zEC12, and previous generations, why? type question - GPU computing.

2012-09-06 Thread Anne Lynn Wheeler
wdonze...@gmail.com (William Donzelli) writes:
 Was this effort in some way related, or in competition with, the UC
 series of controllers? Quite a lot of machines used those internally,
 and they even popped out with the 8100 series (the mainframes that
 have fallen into the memory hole).

re:
http://www.garlic.com/~lynn/2012l.html#72 zEC12, and previous generations, 
why? type question - GPU computing
http://www.garlic.com/~lynn/2012l.html#74 zEC12, and previous generations, 
why? type question - GPU computing
http://www.garlic.com/~lynn/2012l.html#77 zEC12, and previous generations, 
why? type question - GPU computing
http://www.garlic.com/~lynn/2012l.html#79 zEC12, and previous generations, 
why? type question - GPU computing

uc controllers were much simpler, earlier (and underpowered) processors
... used in the 3705, 8100, service processor for the 3081, etc. early on,
before the 3705 was announced, there was a strong effort at the science
center to get cpd to use peachtree for the 3705 (instead of the uc)
... peachtree was a much more powerful processor and was used in the
series/1.

UCs would have been part of the internal microprocessors replaced by
801; the 801 replacement effort was circa 1980 ... but for various
reasons the efforts floundered (the as/400 quickly did a cisc chip to
replace the planned 801 ... but in the 90s eventually migrated to
801/risc power/pc). The followon to the 4331/4341 (aka 4361/4381) was
supposed to be iliad (801/risc) ... but there was a white paper (that I
contributed to) that shot down that effort (even tho I was working
on 801/risc for other things). In the wake of the failure of those
efforts in the early 80s, some number of 801/risc chip engineers left
and showed up working on risc efforts at other vendors (I've posted
various old email from people worried that I might be following in their
footsteps).

bo evans had asked my wife to audit the 8100, and shortly later it was
effectively canceled (although it continued to linger on for quite some
time) ... the following also has some amount about the UC:
http://en.wikipedia.org/wiki/IBM_8100

old email referencing mit lisp machine group asking ibm for 801
processor ... and evans offering 8100 instead:
http://www.garlic.com/~lynn/2006o.html#email790711

later one of the baby bells did an NCP & VTAM (both) emulation on
series/1 ... outboard of the mainframe ... carrying sna traffic over a real
networking infrastructure (mainframe vtams were told all resources were
cross-domain ... which was actually simulated outboard in a redundant
infrastructure). I did a deal with the baby bell to turn it out as an
IBM product ... as well as concurrently porting from series/1 to rios
(the 801/risc processor used in the rs/6000). Because I knew that the
communication group would be out for my head ... I cut a deal with another
baby bell to underwrite all of my development costs ... with no strings
attached (their business case was that they would totally recover all my
costs within the first year just replacing 37x5/NCP with the new product).
The internal politics that then happened could only be described as truth
is stranger than fiction.

part of a presentation that I did at the sna architecture review board
meeting in raleigh, fall of 1986:
http://www.garlic.com/~lynn/99.html#67 System/1 ?
part of a presentation by the baby bell at a series/1 common meeting:
http://www.garlic.com/~lynn/99.html#70 Series/1 as NCP

past posts mentioning 801/risc, iliad, romp, rios, fort knox, power,
power/pc, etc
http://www.garlic.com/~lynn/subtopic.html#801

In the previous reference about packing large numbers of the 3mips ROMAN
(three-chip) 370 sets in racks ... the 801 chip was blue iliad ... the
first 32bit 801 chip ... designed for 20mips ... although it was never put
into production (it was a very large, hot chip). The biggest design
problem & bottleneck was getting all the heat out of the rack as ever
increasing numbers of chips were packed in. old post
http://www.garlic.com/~lynn/2004m.html#17 mainframe and microprocessors

mentioning a series of documents that I did on the roman/iliad rack
cluster design:
RMN.DD.001, Jan 22, 1985
RMN.DD.002, Mar 5, 1985
RMN.DD.003, Mar 8, 1985
RMN.DD.004, Apr 16, 1985

old email discussing 801, risc, romp, rios
http://www.garlic.com/~lynn/lhwemail.html#801

there was a huge amount of communication group FUD about my 3725 numbers
used in the comparison/presentation ... which I pulled directly from the
HONE 3725 configurator ... HONE configurators (HONE was the world-wide
virtual machine based online sales & marketing support system) were used
by IBM sales & marketing for configuring hardware. In the case of the 3725
configurator, its performance modeling had official communication group
sanction. misc. past posts mentioning HONE
http://www.garlic.com/~lynn/subtopic.html#hone

one of my hobbies was enhanced production operating systems for internal
datacenters ... HONE was (also) one of my long time customers since
cp67/cms days in the early 70s. that hone was actually virtual machine
based was obfuscated from most

zEC12, and previous generations, why? type question - GPU computing.

2012-09-05 Thread McKown, John
I guess that I should preface this with another question. Does anybody use a
z for heavy numeric computation anymore? Or has that all gone to Intel and
Power boxes? Why is that? If it is because the z architecture is not good at
numeric computation, I have a question. The internals of the z have used the
PCIe bus for some time. Wouldn't that imply that it would be at least
theoretically possible to plug in any PCIe card? OK, I understand that there
would need to be some way to access it. But, in conjunction with the previous
question about computational ability, couldn't this be used for GPU
computation? It seems to me that the really heavy computation is being moved
from the CPU (Intel) onto the GPU (AMD or Nvidia graphics processor). Would
it make any monetary sense to enable GPU computation on a z? Long ago, there
were vector instructions. Why not some sort of interface instruction(s) to
load a GPU processing program into a GPU on a PCIe card, and then some way
to suspend the unit of work until the GPU computation is complete? Likewise,
an API to request access to a GPU, which would suspend the unit of work
until a GPU was available and assign the GPU to the unit of work?
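
For illustration, a minimal C sketch of the kind of interface I mean.
Every name here (gpu_request, gpu_load_program, gpu_run, gpu_release) is
hypothetical - not any actual z/OS, CUDA, or vendor API - and the stubs
just simulate the suspend-until-complete flow on the CPU:

/* hypothetical sketch of the suspend-until-complete GPU offload flow
   described above. none of these names exist in z/OS or any GPU
   toolkit; the "gpu" is stubbed out on the CPU purely to make the
   control flow concrete and the sketch runnable. */
#include <stdio.h>
#include <stddef.h>

typedef struct { int id; } gpu_handle;       /* opaque GPU assignment */

/* request a GPU; a real implementation would suspend the unit of
   work until one was available. the stub returns one immediately. */
static gpu_handle gpu_request(void) { return (gpu_handle){ .id = 0 }; }

/* "GPU program" stubbed as a function pointer loaded into the card */
typedef void (*gpu_program)(const double *in, double *out, size_t n);
static gpu_program loaded;
static void gpu_load_program(gpu_handle g, gpu_program p) { (void)g; loaded = p; }

/* run and suspend the unit of work until complete; the stub just
   runs the program synchronously on the CPU. */
static void gpu_run(gpu_handle g, const double *in, double *out, size_t n) {
    (void)g;
    loaded(in, out, n);
}

static void gpu_release(gpu_handle g) { (void)g; }

/* the offloaded computation: square each element */
static void square_all(const double *in, double *out, size_t n) {
    for (size_t i = 0; i < n; i++) out[i] = in[i] * in[i];
}

int main(void) {
    double in[4] = {1, 2, 3, 4}, out[4];
    gpu_handle g = gpu_request();            /* would suspend until assigned */
    gpu_load_program(g, square_all);
    gpu_run(g, in, out, 4);                  /* would suspend until GPU done */
    gpu_release(g);
    for (int i = 0; i < 4; i++) printf("%.0f ", out[i]);
    printf("\n");
    return 0;
}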

Weird thoughts from a weird person.

-- 
John McKown
Systems Engineer IV
IT

Administrative Services Group

HealthMarkets(r)

9151 Boulevard 26 * N. Richland Hills * TX 76010
(817) 255-3225 phone *
john.mck...@healthmarkets.com * www.HealthMarkets.com




Re: zEC12, and previous generations, why? type question - GPU computing.

2012-09-05 Thread Shmuel Metz (Seymour J.)
In a6b9336cdb62bb46b9f8708e686a7ea0115baa1...@nrhmms8p02.uicnrh.dom,
on 09/05/2012
   at 11:45 AM, McKown, John john.mck...@healthmarkets.com said:

If it is because the z architecture is not good at numeric
computation,

The z architecture is fine for numeric computations. The problem is
that the implementation is competing with processors manufactured in
bulk. If IBM could sell millions of z boxen then they'd be able to cut
the price dramatically.

I've always wondered what would have happened had IBM used a 370
instruction set on the PC instead of Intel.

-- 
 Shmuel (Seymour J.) Metz, SysProg and JOAT
 Atid/2    http://patriot.net/~shmuel
We don't care. We don't have to care, we're Congress.
(S877: The Shut up and Eat Your spam act of 2003)



Re: zEC12, and previous generations, why? type question - GPU computing.

2012-09-05 Thread zMan
On Wed, Sep 5, 2012 at 2:28 PM, Shmuel Metz (Seymour J.) 
shmuel+...@patriot.net wrote:


 I've always wondered what would have happened had IBM used a 370
 instruction set on the PC instead of Intel.


16MB ought to be enough for anybody? :-)

Since IBM wasn't manufacturing the chips, of course that wasn't even on the
table, but it's still a VERY interesting Gedankenexperiment...
-- 
zMan -- I've got a mainframe and I'm not afraid to use it



Re: zEC12, and previous generations, why? type question - GPU computing.

2012-09-05 Thread Conlin, Pete
With IBM's acquisition of SPSS several years ago  the recent acquisition of 
Netezza (for use as an attached processor for computational workloads on 
zSeries), IBM's z/Series intentions seem to have changed.  After the AS 
(Application System) disaster (early eighties, great demo, not scalable, ADRS 
based if I recall), I hope the performance concerns are addressed.  Even the 
DB2 folk no longer accept a performance hit with a new release (more code  
features take more resources was a mantra at IDUG for years, finally falling 
flat with V8.)

In particular, with the minimization of locking, data above the bar, 
increased use of zIIP  general performance improvements, analytics with DB2 on 
zSeries might be cost effective for big data in a shared workload environment.

See (unfortunately marketing oriented):  

http://www.clabbyanalytics.com/uploads/zBAfinalfinalfinal.pdf
and   
http://berniespang.com/2012/06/08/clients-chose-ibm-system-z-for-analytics-over-teradata-and-oracle-exadata/

It would have been interesting if they had put something like this together
for the 2010 census data the way SAS did for the 1980 data, but there are
plenty more data sources against which these marketing claims will soon be
tested.
-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On 
Behalf Of Mark Post
Sent: Wednesday, September 05, 2012 1:07 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: zEC12, and previous generations, why? type question - GPU 
computing.
 On 9/5/2012 at 12:45 PM, McKown, John john.mck...@healthmarkets.com 
 wrote:
 I guess that I should preface this with another question. Does anybody 
 use a z for heavy numeric computation anymore? Or has that all gone to 
 Intel and Power boxes? Why is that? If it is because the z architecture is 
 not good
 at numeric computation, I have a question.
As has been pointed out in another thread here, the dollar cost per 
instruction is much higher on System z than other architectures.  So for 
purely computational workloads, although System z may have a faster CPU than 
the other architectures, it costs more for the same amount of computation.  A 
lot of high performance computing is restartable in that if a computation 
node fails, starting that piece of work over from the beginning isn't hard.  
Most of the qualities that are built into System z aren't needed for that 
type of work, so no need to spend the big bucks for it.
Mark Post



Re: zEC12, and previous generations, why? type question - GPU computing.

2012-09-05 Thread Tony Harminc
On 5 September 2012 17:17, zMan zedgarhoo...@gmail.com wrote:

 I've always wondered what would have happened had IBM used a 370
 instruction set on the PC instead of Intel.


 16MB ought to be enough for anybody? :-)

 Since IBM wasn't manufacturing the chips, of course that wasn't even on the
 table, but it's still a VERY interesting Gedankenexperiment...

There *was* a single-chip 370 produced by someone in the late 70s - a
168i. I think it was a university or research institute, but not
IBM. I'm not finding anything on Google with a casual search, but
things like this are easily overwhelmed.

Tony H.



Re: zEC12, and previous generations, why? type question - GPU computing.

2012-09-05 Thread Anne Lynn Wheeler
t...@harminc.net (Tony Harminc) writes:
 There *was* a single-chip 370 produced by someone in the late 70s - a
 168i. I think it was a university or research institute, but not
 IBM. I'm not finding anything on Google with a casual search, but
 things like this are easily overwhelmed.

SLAC did the 168E ... basically it could run problem state fortran at 168
speed ... for data collection/reduction along the accelerator line ... a
long way from a single chip.

the 168 had been four circuits per chip; the 3033 initially was 168 logic
laid out on something like 40-circuits-per-chip technology ... but just
using 4 circuits in each chip ... getting a 20% chip improvement. during
development there was some rework of part of the 168 logic to make better
use of the higher chip density ... getting the 3033 up to 50% faster than
the 168.
http://www.jfsowa.com/computer/memo125.htm

i've frequently claimed that John Cocke's 801/risc was a reaction to the
horrible complexity of the (failed) FS effort ... initially simplified (aka
a reduced instruction set) for single chip implementation ... and then
later simplified to instructions that were all single machine cycle.

360s, 370s, etc ... have been microcode implemented on a variety of other
kinds of engines. circa 1980 there was an effort to replace the wide
variety of internal microprocessors used for controllers, low & mid-range
370s, the planned as/400 replacement for s/38, etc ... all with 801/risc
Iliad chips. For various reasons the efforts floundered and they went
back to doing custom processor implementations.

It took another couple of decades ... but lots of stuff is now risc in one
way or another (as previously mentioned, the past couple of generations of
i86 are risc processors with a hardware layer translating i86 into risc
micro-ops).

In the mid-80s there was a 3-chip 370 from Boeblingen, called ROMAN, that
ran at 168 speed. I had a project/proposal to pack an arbitrary mix of
large numbers of ROMAN and Iliad chips in the same rack ... with large
numbers of racks (sort of a precursor to the latest rack announcement). a
couple of old email refs:
http://www.garlic.com/~lynn/2011b.html#email850314
http://www.garlic.com/~lynn/2007d.html#email850315

I had also been working with NSF on what was to become the NSFNET backbone
(i.e. tcp/ip is the technology basis for the modern internet, the NSFNET
backbone was the operational basis for the modern internet, and CIX was the
business basis for the modern internet). The above refs mention that I had
to find a stand-in for a presentation to the head of NSF ... because of a
rack cluster effort meeting.

Lots of low & mid-range clone 370 vendors were starting to spring up all
over the place. Somebody at Siemens germany had somehow acquired a
proprietary ROMAN document ... and was trying to get it returned to IBM
with all fingerprints removed. He sent it to somebody at Amdahl in silicon
valley ... who arranged to hand it over to me.

Other trivia ... SLAC had hosted the monthly IBM user group meetings
(BAYBUNCH) and also ran the first webserver outside of europe.
http://www.slac.stanford.edu/history/earlyweb/history.shtml

and from long ago and far away ... mentions slac/cern 168E ... having
become 3081E (3mips to 14mips).

Date: Fri, 7 Jul 89 10:52:39 CDT
From: wheeler
Subject: requirements task force

Note that both DEC and Apollo (along with hp) are heavily into the
distributed environment, heterogeneous network/system/enterprise
management, and networks. Note that heterogeneous means more than OSI,
TCP/IP, UNIX, etc ... it means interoperability between all of them
along with DECNET, VMS, XNS, etc.

Apollo's FDDI group is heavily involved in XTP and the former manager of
the Apollo FDDI group (they've been active for some time, spending a
lot of time optimizing performance of high thruput adapters) left Apollo
and formed synernetics (he was involved with XTP at Apollo and
synernetics is an XTP/TAB member).  They are working on an initial cut of
FDDI station management (SMT) ... and have been out talking to a number
of groups, including IBM ... I also believe he has even had contacts
with Andy.

Distributed 370s have a hard time keeping up. Way back in history
(someplace), I spent a lot of time up at SLAC (there is tight coupling
between SLAC and CERN). At that time SLAC was doing the 168E, a
bit-slice processor that would run standard 370 Fortran programs. The
technology has been improved and now CERN & SLAC are calling it a 3081E
(i.e. the processing power of a 3081). The design was to have one of these
processors at each of the (large number of) data collection points.

Something that will be competing with this will be the 370 simulator
that xx has done for the SUN4. He currently has VM/370 running at
about 168 thruput on the old SUN4 (big register memory is super for
large integrated applications but state switch overhead can be
heavy/horrible ... something that we are currently grappling with
... some possibilities exist for pipelining/overlapping the state switch
like yy is doing with Vector Buffer architecture).

Re: zEC12, and previous generations, why? type question - GPU computing.

2012-09-05 Thread Timothy Sipples1
Yes, there are organizations that use zEnterprise servers for heavy
numeric computation. Like decimal floating point. Cryptography is another
excellent example. And you can buy optional CryptoExpress adapters if you
want to augment the excellent capabilities found in every machine. You can
also buy the optional zBladeCenter Extension (zBX) if you want to add
DataPower accelerators, Power blades, and/or X86 blades. You can also add
an optional IBM DB2 Analytics Accelerator, to boost many types of DB2
queries. So we're way ahead of you, John. ;-)

I think the simple answer is that it depends what you optimize for in
designing a server processor (or complex). But IBM has broken a lot of
rules already about which server should do what, and I predict more rules
will be broken.

With respect to the 370-on-a-chip, IBM sort of did that with the 1975
introduction of the IBM 5100 Portable Computer starting at $8,975 (1975
dollars), although it was for a relatively narrow initial purpose (to get
APL running). The 5100 sold reasonably well from what I've read, but I
think there were three basic problems which prevented it from becoming a
blockbuster:

1. The price was not low enough for mass market appeal. (Apple had a
similar problem with the Lisa in the early 1980s.)

2. The software selection didn't exactly hit the mark, although it was a
good try for the time. (IBM learned the value of software somewhat later in
its evolution but not in time for the 1981 IBM PC.)

3. It probably didn't have the right third party marketing and distribution
channels. With some very notable exceptions, like typewriters, at that time
IBM would have had some challenges with this type of product.

Keep in mind that for 1975 this was absolutely amazing technology, but
amazing technology required some expense. Being early is pricey. If the
5100 debuted in, say, 1977 or 1978, it would have still been well timed but
could have dramatically reduced the chip and board count. I also think the
small built-in monitor could have been sacrificed (at least as an option) in
favor of a display port of some kind -- ideally RF for TV hookup. And IBM
might have gone with a diskette drive for storage -- the 5100 was too early
for the 5.25 inch drive, which debuted in 1976. Finally, if IBM had
provided a little more guidance on the 370 subset instruction set they
implemented, software developers could have taken over from there.

So I think the 5100 could have been a nice 5110 by tweaking the recipe a
bit. But history didn't happen that way.

IBM had some success with the System/4 Pi avionics processors which are
descended from System/360.


Timothy Sipples
Consulting Enterprise IT Architect (Based in Singapore)
E-Mail: sipp...@sg.ibm.com


Re: zEC12, and previous generations, why? type question - GPU computing.

2012-09-05 Thread George Henke
I believe IBM produced a pc with a 370 to run VM on a PC.  Merrill Lynch
had one.  Somewhere in the late 80's I believe.





-- 
George Henke
(C) 845 401 5614
