Re: IBM System/3 3277-1
The following message is a courtesy copy of an article that has been posted to comp.sys.ibm.sys3x.misc,alt.folklore.computers,bit.listserv.ibm-main as well.

Anne Lynn Wheeler [EMAIL PROTECTED] writes:

field/col definition for 12-2-9 TXT card:

col 1      12-2-9 / x'02'
    2-4    TXT
    5      blank
    6-8    relative address of first instruction on record
    9-10   blank
    11-12  byte count ... number of bytes in information field
    15-16  ESDID
    17-72  56-byte information field
    73-80  deck id, sequence number, or both

cols. 2-4 and 73-80 were character ... the other fields were hex.

re:
http://www.garlic.com/~lynn/2007q.html#69 IBM System/3 3277-1

txt card decks were nearly executable output from assemblers and compilers. more information about the format of other cards in a txt card deck
http://www.garlic.com/~lynn/2001.html#14 IBM Model Numbers (was: First video terminal?)

before i learned about rep cards, i would duplicate a TXT card, multipunching the patch/fix into the duplicated card. keypunches just had keys for punching character information; if you were dealing with hex ... for which there was no equivalent character ... it was necessary to multi-punch to get the correct holes punched.

for hex, it was necessary to read the holes ... since even if the card had been interpreted ... there were no corresponding character symbols for the majority of the hex codes.

my process was to fan the txt card deck ... reading the holes in cols 6-8 (displacement address in the program of the data punched in the specific card) ... looking for the card corresponding to the data i needed to patch. I would then take that card and duplicate it out to the cols that needed to be fixed ... multi-punch the corrections (in the duplicate/new card) and then resume duplicating the remainder of the card.

misc past posts mentioning multi-punch
http://www.garlic.com/~lynn/93.html#17 unit record other controllers
http://www.garlic.com/~lynn/2000f.html#75 Florida is in a 30 year flashback!
http://www.garlic.com/~lynn/2001b.html#26 HELP
http://www.garlic.com/~lynn/2001b.html#27 HELP
http://www.garlic.com/~lynn/2001k.html#27 Is anybody out there still writting BAL 370.
http://www.garlic.com/~lynn/2001k.html#28 Is anybody out there still writting BAL 370.
http://www.garlic.com/~lynn/2002k.html#63 OT (sort-of) - Does it take math skills to do data processing ?
http://www.garlic.com/~lynn/2004p.html#24 Systems software versus applications software definitions
http://www.garlic.com/~lynn/2005c.html#54 12-2-9 REP 47F0
http://www.garlic.com/~lynn/2006c.html#17 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006g.html#43 Binder REP Cards (Was: What's the linkage editor really wants?)
http://www.garlic.com/~lynn/2006g.html#58 REP cards
http://www.garlic.com/~lynn/2006l.html#64 Large Computer Rescue
http://www.garlic.com/~lynn/2007d.html#51 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007f.html#78 What happened to the Teletype Corporation?

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
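The 12-2-9 TXT record layout described above can be sketched as a small decoder. This is a minimal sketch in Python, assuming an 80-byte EBCDIC card image; the function and field names are my own, but the offsets follow the column definition in the post (1-based columns, 0-based bytes).

```python
def parse_txt_card(card: bytes) -> dict:
    """Decode one 12-2-9 TXT record (80-byte object-deck card image).

    Field layout per the post (1-based columns):
      1     x'02' (the 12-2-9 punch)    11-12  byte count (binary)
      2-4   'TXT' (character, EBCDIC)   15-16  ESDID (binary)
      5     blank                       17-72  up to 56 data bytes
      6-8   relative address (binary)   73-80  deck id / sequence number
    """
    if len(card) != 80:
        raise ValueError("card image must be exactly 80 bytes")
    if card[0] != 0x02:
        raise ValueError("not a 12-2-9 (x'02') card")
    if card[1:4] != b'\xE3\xE7\xE3':          # 'TXT' in EBCDIC
        raise ValueError("not a TXT card")
    count = int.from_bytes(card[10:12], 'big')    # cols 11-12
    return {
        'address': int.from_bytes(card[5:8], 'big'),    # cols 6-8
        'count':   count,
        'esdid':   int.from_bytes(card[14:16], 'big'),  # cols 15-16
        'data':    card[16:16 + count],                 # cols 17-72
        'ident':   card[72:80],                         # cols 73-80
    }

# hypothetical sample card: 4 bytes of x'47F0...' (a branch) at
# relative address x'1000', ESDID 1, blanks (x'40') elsewhere
sample = (b'\x02' + b'\xE3\xE7\xE3' + b'\x40'
          + (0x001000).to_bytes(3, 'big') + b'\x40' * 2
          + (4).to_bytes(2, 'big') + b'\x40' * 2
          + (1).to_bytes(2, 'big')
          + b'\x47\xF0\x00\x00' + b'\x40' * 52
          + b'\xE2\xC5\xD8\xF0\xF0\xF0\xF1\xF0')
rec = parse_txt_card(sample)
```

Fanning the deck for a patch, as described above, amounts to scanning each card's cols 6-8 for the displacement being fixed — the same field this decoder pulls out as `address`.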
Re: IBM System/3 3277-1
The following message is a courtesy copy of an article that has been posted to comp.sys.ibm.sys3x.misc,alt.folklore.computers,bit.listserv.ibm-main as well.

[EMAIL PROTECTED] writes:

What I don't understand is pre sorting a deck that will be used as input to the computer--couldn't the computer sort it faster than a person could? The machine sorted strictly sequentially, while the computer had bubble or shell sorts that were more efficient. maybe tape sorting was slow, but disk sorting should've been fast. If the machine had some core ie 128 k, then plenty of work could be done within the CPU at very high speed.

simple example would be fortran student jobs. the master of the program is the individual student's card deck. the student has access to only fortran compile execution capability ... and compile would be one pass of the input card deck.

when i started, the univ. had a 1401 that was used as unit-record front-end to a 709. the card decks (potentially multiple student jobs) would be collected in a card tray. when the tray approached full (or every couple hrs), the tray of cards would be read by the 1401 and transferred to tape. the tape would be carried to a 709 tape drive and processed (sequentially, each job compiled and executed) with output going to another tape. When processing finished, the output tape would be moved to the 1401 and the results printed.

The operator would take the printed, fan-fold output, burst it ... i.e. tear it into individual jobs, match the bursted print output with the corresponding original card deck, wrap the bursted print output around the input card deck (with rubber band) and place it in the output bin for student pickup.

there were some administrative jobs that used sort ... but those frequently had trays and trays of cards ... written to tape ... and then a multiple tape sort (with intermediate tape files) that ran for an extended period of time.

i did write part of an application that was used for class registration. the 2540 could not only read holes ...
but also had the capability of reading sense-marked cards (i.e. no. 2 pencil marks in little boxes on cards). the 2540 had two feeds from the sides with five card stackers in the middle. one side read cards and could select two of the read-side stackers or the middle stacker; the other side punched cards and could select two of the punch-side stackers or the middle stacker.

class registration had all these sense-marked cards ... which would be read and placed in the middle stacker. if the processing found some problem with a card ... a blank card from the punch side would be punched behind the recently read sense-marked card (the one with the problem ... before the next card would be read/processed). standard processing had an operator removing cards from the stacker and placing them in card trays.

all of the class registration sense-marked cards were plain manila. the punch side was loaded with cards that had yellow (or sometimes red) across the top band of the card. once all class registration cards were processed ... there would be multiple trays ... sporadically sprinkled with yellow top-edge cards ... clearly identifying the registration cards with some kind of problem.

qd conversion of gcard ios3270 to html
http://www.garlic.com/~lynn/gcard.html
reader/punch channel program command codes
http://www.garlic.com/~lynn/gcard.html#23

system/360 model 30 machine room, 2540 is seen in the middle, in front of the tape drives and partly obscured by a 2311 disk drive.
the card reader (feed) is on the right and the punch is on the left; the five output stackers are in the center
http://www-03.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP2030.html

system/360 model 40 machine room, 2540 is in the upper middle
http://www-03.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP2040.html

better picture of a 2540 on the right with somebody loading a deck of cards to be read
http://www.cs.ncl.ac.uk/events/anniversaries/40th/images/ibm360_672/slide19.jpg
Re: IBM System/3 3277-1
The following message is a courtesy copy of an article that has been posted to comp.sys.ibm.sys3x.misc,alt.folklore.computers,bit.listserv.ibm-main as well.

[EMAIL PROTECTED] writes:

I could read ASCII from a paper tape. Took me a while. :-)

previous post in this thread:
http://www.garlic.com/~lynn/2007q.html#48 IBM System/3 3277-1

i eventually learned to read 12-2-9 (i.e. card punch holes for hex 02) txt deck cards ... as part of multi-punching/duplicating cards and punching patches ... i had a 2000 card assembler program and it was frequently faster to multi-punch fixes (into a duplicate/new card) than to reassemble the program (which could take 30-60 minutes elapsed time ... this was on a 360/30 under os/360 release 6 ... i had the dedicated university machine room on weekends for 48hr stretches).

basically had to not only be able to read storage dumps and the equivalence between hexcode and things like instructions and/or addresses ... but also the similar information on cards in punch hole representation.

field/col definition for 12-2-9 TXT card:

col 1      12-2-9 / x'02'
    2-4    TXT
    5      blank
    6-8    relative address of first instruction on record
    9-10   blank
    11-12  byte count ... number of bytes in information field
    15-16  ESDID
    17-72  56-byte information field
    73-80  deck id, sequence number, or both

cols. 2-4 and 73-80 were character ... the other fields were hex.

qd conversion of gcard ios3270 to html
http://www.garlic.com/~lynn/gcard.html

but it lacks the card punch hole equivalence for hex (on the real green card). here is an actual scan of a 360 green card ...
front & back (11mb)
http://weblog.ceicher.com/archives/IBM360greencard.pdf
from:
http://weblog.ceicher.com/archives/2006/12/ibm_system360_green_card.html

the following table is from
http://www.cs.uiowa.edu/~jones/cards/codes.html
giving the equivalence between card punch codes, hexadecimal value, and ebcdic:

      00  10  20  30  40  50  60  70  80  90  A0  B0  C0  D0  E0  F0   punch
  0  NUL     DS      SP  &   -                                   0      0
  1          SOS             /       a   j           A   J       1      1
  2          FS                      b   k   s       B   K   S   2      2
  3      TM                          c   l   t       C   L   T   3      3
  4  PF  RES BYP PN                  d   m   u       D   M   U   4      4
  5  HT  NL  LF  RS                  e   n   v       E   N   V   5      5
  6  LC  BS  EOB UC                  f   o   w       F   O   W   6      6
  7  DEL IL  PRE EOT                 g   p   x       G   P   X   7      7
  8                                  h   q   y       H   Q   Y   8      8
  9                                  i   r   z       I   R   Z   9      9
  A                  ¢   !       :                                      2-8
  B                  .   $   ,   #                                      3-8
  C                  <   *   %   @                                      4-8
  D                  (   )   _   '                                      5-8
  E                  +   ;   >   =                                      6-8
  F                  |   ¬   ?   "                                      7-8

i.e. hex values down the left and across the top, digit punch holes down the right; the zone punch holes for each column are given across the bottom of the original table.

and card punch format ... card rows are numbered 12, 11, 0-9 from the top.
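For the straightforward alphanumerics in the table above, the punch code follows a simple zone-plus-digit rule (a 12, 11, or 0 zone punch for letters, a single row punch for digits). A small sketch in Python — the helper name is my own, and it deliberately covers only this character subset; everything else, including x'02' itself, needs multi-punch combinations read directly from the table:

```python
def punch_rows(ch: str) -> list:
    """Return the punched card rows for one simple Hollerith character.

    Letters get a zone punch (12 for A-I, 11 for J-R, 0 for S-Z)
    plus a digit punch; digits are a single punch in their own row.
    """
    if ch.isdigit():
        return [int(ch)]
    if 'A' <= ch <= 'I':
        return [12, ord(ch) - ord('A') + 1]
    if 'J' <= ch <= 'R':
        return [11, ord(ch) - ord('J') + 1]
    if 'S' <= ch <= 'Z':
        return [0, ord(ch) - ord('S') + 2]   # S starts at 0-2, not 0-1
    raise ValueError("needs a multi-punch combination: %r" % ch)
```

So fanning a deck and reading `TXT` in cols 2-4 means spotting the 12-3, 12-7... no — 0-3, 12-7, 0-3 hole patterns (`punch_rows('T')` is `[0, 3]`, `punch_rows('X')` is `[0, 7]`), which is exactly the hole-reading skill the post describes.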
Re: Are there tasks that don't play by WLM's rules
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] writes:

I seem to remember this as TCP/IP version 3.2 with 3.3 having the fixes for optimization. Weren't there twin stacks being managed or some such thing. I'm not too TCP/IP literate. We had this original version implemented because I remember doing a pre/post resource impact analysis finding additional CPU, significant in relation to prior usage, in use by TCP/IP.

re:
http://www.garlic.com/~lynn/2007q.html#45 Are there tasks that don't play by WLM's rules

as per previous post ... there was the vs/pascal implementation ported from vm ... with a diagnose instruction simulation done in os ... and then the vtam-based implementation (that started out only being considered correct if it had lower thruput than lu6.2).

some part of the base code's poor thruput (and high processor consumption) was that (only) a channel-attached bridge box was being supported ... rather than a native channel-attached tcp/ip router box. in the LAN bridge scenario ... the mainframe host code not only had to do the ip-header gorp ... but also had to do the lan/mac header overhead before passing the packet to the channel for processing by the bridge box.

part of the rfc 1044 three orders of magnitude improvement
http://www.garlic.com/~lynn/subnetwork.html#1044
was having a real channel-attach tcp/ip router box ... eliminating the mainframe host code having to also provide the lan/mac header overhead processing (needed only by a lan/mac bridge box).

part of this possibly was the whole focus on the sna communication paradigm (the old joke that it wasn't a system, wasn't a network, and wasn't an architecture) ... where vtam provided the communication addressing (and didn't have the concept of networking). in the early days of sna ...
my wife had co-authored AWP39 for peer-to-peer networking architecture ... which was possibly viewed as somewhat in competition with sna. part of the issue is that in most of the industry, networking is peer-to-peer ... it is only because sna had co-opted the term networking to apply to communication ... that it was necessary to qualify networking with peer-to-peer.

this was possibly also why she got con'ed into going to pok to be in charge of loosely-coupled architecture. while there she also created peer-to-peer shared data architecture ... which, except for ims hot-standby, didn't see a lot of uptake until sysplex. misc past posts
http://www.garlic.com/~lynn/subtopic.html#shareddata

for another bit of archeological trivia ... APPN was originally AWP164.

misc. past posts mentioning AWP39
http://www.garlic.com/~lynn/2004n.html#38 RS/6000 in Sysplex Environment
http://www.garlic.com/~lynn/2004p.html#31 IBM 3705 and UC.5
http://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back
http://www.garlic.com/~lynn/2005p.html#15 DUMP Datasets and SMS
http://www.garlic.com/~lynn/2005p.html#17 DUMP Datasets and SMS
http://www.garlic.com/~lynn/2005q.html#27 What ever happened to Tandem and NonStop OS ?
http://www.garlic.com/~lynn/2005u.html#23 Channel Distances
http://www.garlic.com/~lynn/2006h.html#52 Need Help defining an AS400 with an IP address to the mainframe
http://www.garlic.com/~lynn/2006j.html#31 virtual memory
http://www.garlic.com/~lynn/2006k.html#9 Arpa address
http://www.garlic.com/~lynn/2006k.html#21 Sending CONSOLE/SYSLOG To Off-Mainframe Server
http://www.garlic.com/~lynn/2006l.html#4 Google Architecture
http://www.garlic.com/~lynn/2006l.html#45 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
http://www.garlic.com/~lynn/2006o.html#62 Greatest Software, System R
http://www.garlic.com/~lynn/2006r.html#4 Was FORTRAN buggy?
http://www.garlic.com/~lynn/2006r.html#9 Was FORTRAN buggy?
http://www.garlic.com/~lynn/2006t.html#36 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006u.html#28 Assembler question
http://www.garlic.com/~lynn/2006u.html#55 What's a mainframe?
http://www.garlic.com/~lynn/2007b.html#9 Mainframe vs. Server (Was Just another example of mainframe
http://www.garlic.com/~lynn/2007b.html#48 6400 impact printer
http://www.garlic.com/~lynn/2007d.html#55 Is computer history taugh now?
http://www.garlic.com/~lynn/2007h.html#35 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#39 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007l.html#62 Friday musings on the future of 3270 applications
http://www.garlic.com/~lynn/2007o.html#72 FICON tape drive?
http://www.garlic.com/~lynn/2007p.html#12 JES2 or JES3, Which one is older?
http://www.garlic.com/~lynn/2007p.html#23 Newsweek article--baby boomers and computers
Re: IBM System/3 3277-1
The following message is a courtesy copy of an article that has been posted to comp.sys.ibm.sys3x.misc,alt.folklore.computers,bit.listserv.ibm-main as well.

[EMAIL PROTECTED] writes:

(For the AS/400 I never could figure the internal code architecture, IBM used something called LIC that was rather vague. I once tried to get an optional machine language listing of my application program compilation but it was very confusing. I believe IBM used a multi-layered approach for AS/400 internals, remnants of its Future System effort. I was not a big AS/400 fan, except for a file-aid tool that was better than mainframe tools.)

one of the things that the as/400 layered approach bought was that it could move from a CISC chip to a (power/pc) RISC chip w/o a lot of trouble.

the future system project was going to replace 360/370 in the early-to-mid 70s ... when the project was eventually canceled there was a big effort to make up for the lost time resulting from the future system distraction
http://www.garlic.com/~lynn/subtopic.html#futuresys
attempting to get stuff back into the 370 (hardware & software) product pipelines ... the crash program for 303x was part of that. part of the analysis killing the project was that if a future system machine was built from the fastest hardware then available (370/195), it would have the thruput of a 370/145. the folklore is that some of the future system participants regrouped in rochester, coming out with the s/38 (which didn't have nearly the thruput requirements).

i've periodically commented that there were some characteristics of the 801 risc activities in the 70s going to the exact opposite extreme of what went on in future system. an early, big push for 801/risc was an effort to replace the multitude of corporate internal microprocessors with common risc architecture chips (every low-to-mid range 370 was implemented with microcode on its own unique microprocessor, plus controllers and other kinds of microprocessors).
one of these was going to be the s/38 follow-on, the as/400. the common 801/risc microprocessor effort ran into all sorts of problems and eventually died off ... at which time as/400 had a crash project to design a new CISC processor.

misc. past 801, romp, rios, fort knox, power, power/pc, somerset, etc postings
http://www.garlic.com/~lynn/subtopic.html#801
as well as some old email from the period
http://www.garlic.com/~lynn/lhwemail.html#801

effectively the effort was revisited when rochester began the move of as/400 from their CISC chip to its current use of an 801/RISC chip.
Re: Are there tasks that don't play by WLM's rules
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Ted MacNEIL) writes:

It's not just z/OS UNIX. The first implementation of TCP/IP on OS/390 was a port from VM. And, it was a pig until they decided to re-implement by starting from scratch using z/OS UNIX (circa 2.7).

there were two issues ... the base was implemented in vs/pascal. on a 3090 (under vm) it got about 44kbytes/sec thruput and consumed nearly a whole 3090 processor. i did the support for rfc 1044
http://www.garlic.com/~lynn/subnetwork.html#1044
and in some tuning tests at cray research ... got 1mbyte/sec (channel media) thruput between a 4341 clone and a cray machine ... using only a very modest amount of the 4341 ... about 25 times the bytes moved for maybe 1/30th the pathlength ... say nearly three orders of magnitude improvement in bytes/mip thruput.

the initial port to os ... kept the base vm tcp/ip code unchanged and implemented a cut-down vm emulation underneath (just enuf to run the tcp/ip code) ... which further aggravated the poor tcp/ip thruput.

there was then a tcp/ip implementation done in vtam that had been outsourced to a subcontractor. the folklore is that the initial version delivered had tcp with higher thruput than lu6.2, and the subcontractor was told that everybody knows that lu6.2 has much higher thruput (than tcp/ip) and therefore the tcp/ip implementation must be incorrect ... and only a correct implementation was going to be accepted.

misc. past references to the folklore about the vtam-based implementation for tcp/ip
http://www.garlic.com/~lynn/2000b.html#79 Database term ok for plain files?
http://www.garlic.com/~lynn/2000c.html#58 Disincentives for MVS future of MVS systems programmers
http://www.garlic.com/~lynn/2002k.html#19 Vnet : Unbelievable
http://www.garlic.com/~lynn/2002q.html#27 Beyond 8+3
http://www.garlic.com/~lynn/2003j.html#2 Fix the shuttle or fly it unmanned
http://www.garlic.com/~lynn/2004e.html#35 The attack of the killer mainframes
http://www.garlic.com/~lynn/2005h.html#43 Systems Programming for 8 Year-olds
http://www.garlic.com/~lynn/2005p.html#15 DUMP Datasets and SMS
http://www.garlic.com/~lynn/2005r.html#2 Intel strikes back with a parallel x86 design
http://www.garlic.com/~lynn/2006f.html#13 Barbaras (mini-)rant
http://www.garlic.com/~lynn/2006l.html#53 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
http://www.garlic.com/~lynn/2006w.html#29 Descriptive term for reentrant program that nonetheless is
http://www.garlic.com/~lynn/2007h.html#8 whiny question: Why won't z/OS support the HMC 3270 emulator

i had a project i called hsdt (high-speed data transport)
http://www.garlic.com/~lynn/subnetwork.html#hsdt
that would periodically run into contention with the communication group. among other things, we had deployed a backbone connected to the internal network
http://www.garlic.com/~lynn/subnetwork.html#internalnet
that had T1 (and higher speed) terrestrial and satellite links. recent post
http://www.garlic.com/~lynn/2007p.html#64
mentioning a business trip to the far east to visit a company that we were buying some hardware from.
the friday before we left, somebody in raleigh had announced a new internal discussion group that was to use the following terminology references:

low-speed            9.6kbits
medium-speed        19.2kbits
high-speed            56kbits
very high-speed      1.5mbits

on the wall of a conference room, the following monday on the other side of the pacific:

low-speed             20mbits
medium-speed         100mbits
high-speed       200-300mbits
very high-speed      600mbits

we had also been doing some work with NSF and various universities leading up to what was to be the NSFNET backbone ... aka tcp/ip is the technology basis for the modern internet, the nsfnet backbone is the operational basis for the modern internet, and CIX is the business basis for the modern internet. some old email references from that period
http://www.garlic.com/~lynn/lhwemail.html#nsfnet
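The "nearly three orders of magnitude" figure in this thread is just the product of the two ratios quoted (about 25 times the bytes moved for roughly 1/30th the pathlength). A quick check of the arithmetic, using the numbers from the post:

```python
import math

# ratios quoted in the post
bytes_moved_ratio = 25      # ~25x the bytes moved (44kbytes/sec -> 1mbyte/sec)
pathlength_ratio = 30       # ~1/30th the instruction pathlength

# raw thruput ratio from the quoted numbers, for comparison
thruput_ratio = 1_000_000 / 44_000          # ~22.7, i.e. "about 25 times"

# combined improvement in bytes moved per unit of CPU (bytes/mip)
bytes_per_mip_gain = bytes_moved_ratio * pathlength_ratio   # 750
orders_of_magnitude = math.log10(bytes_per_mip_gain)        # ~2.9
```

750x is close enough to 10^3 to justify "say nearly three orders of magnitude improvement in bytes/mip thruput".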
Re: GETMAIN/FREEMAIN and virtual storage backing up
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

Anne Lynn Wheeler [EMAIL PROTECTED] writes:

The first operational 370 hardware supporting virtual memory was a 370/145 engineering processor. However, cp67h with cp67i running in a 370 virtual machine was in regular operation a year before the 370/145 engineering box was operational. In fact, the cp67i system was used as the initial software brought up on the 370/145 engineering box.

re:
http://www.garlic.com/~lynn/2007p.html#74 GETMAIN/FREEMAIN and virtual storage backing up

for additional topic drift, another internal project that drew on some of the cp67h activity was the inception of the internal HONE project. lots of past posts mentioning HONE (and/or APL)
http://www.garlic.com/~lynn/subtopic.html#hone

this was at least partially motivated by the 23jun69 unbundling announcement ... a little topic drift here
http://www.garlic.com/~lynn/2007q.html#13 Does software life begin at 40? IBM updates IMS database
http://www.garlic.com/~lynn/2007q.html#14 Does software life begin at 40? IBM updates IMS database
misc. other posts mentioning unbundling and starting to charge for application software
http://www.garlic.com/~lynn/subtopic.html#unbundle

the other aspect of unbundling was that it also started to charge for SE time/services. prior to that, (young/new) SEs picked up a lot of their experience via on-the-job training ... working with more experienced SEs on the customer machine. with unbundling and charging customers for SE services/time, this hands-on learning experience evaporated. somewhat as a substitute, HONE (Hands-On Network Experience) was created ... with a number of 360/67s running a clone of the science center's
http://www.garlic.com/~lynn/subtopic.html#545tech
cp67 system installed around the country.
the idea was that SEs (at branch offices) could pick up (hands-on) experience running/testing operating systems remotely in the HONE cp67 virtual machines. for slightly other topic drift ... this recent post
http://www.garlic.com/~lynn/2007q.html#22 Enterprise: Accelerating the Progress of Linux

When the initial 370 was announced, virtual memory still wasn't available ... but there were a few new instructions ... and the operating systems were updated to make use of the new instructions. that is somewhat where a subset of the cp67h enhancements came into play (at HONE) ... it was possible to run the latest (370) operating systems in cp67 virtual machines ... with the cp67 kernel simulating the latest, new 370 instructions.

Another activity by the science center effectively resulted in the direction of HONE completely changing. The science center had also done a port of apl\360 to cms as cms\apl. Among other things ... APL workspaces could now be 16mbytes ... instead of the 16kbytes-32kbytes typical of apl\360 ... and an API for operating system functions was added (things like being able to do file i/o). This allowed APL to start being used for real-world applications (instead of the toy demos that were frequently the result of the 16k limitation). In this period, APL was frequently used for lots of things that spreadsheets are used for today.

Quite a few APL applications (like configurators) in support of sales and marketing were deployed on HONE ... and over time these started to consume all available HONE processing ... and the original use for SE hands-on experience withered and disappeared. After vm370 became available, HONE upgraded from cp67 to vm370 (and HONE clones started to sprout up around the world). Also by the mid-70s, it was no longer possible for computing system orders to be submitted w/o first having been processed through some number of HONE APL applications (like configurators).
other posts in this thread:
http://www.garlic.com/~lynn/2007p.html#69 GETMAIN/FREEMAIN and virtual storage backing up
http://www.garlic.com/~lynn/2007p.html#70 GETMAIN/FREEMAIN and virtual storage backing up
http://www.garlic.com/~lynn/2007p.html#73 GETMAIN/FREEMAIN and virtual storage backing up
http://www.garlic.com/~lynn/2007q.html#8 GETMAIN/FREEMAIN and virtual storage backing up

In the 70s, the various HONE datacenters were consolidated in cal. with possibly the largest single system image operation. This involved quite a few operational and functional enhancements to vm370 supporting load-balancing and fall-over ... that allowed a large number of loosely-coupled (tightly-coupled) multiprocessors to effectively operate as a single large timesharing service (in part driven by the significant processing requirements because of using APL) ... somewhat reminiscent of some modern day advanced operations. Then because of business continuity considerations, the california datacenter was replicated first in Dallas, and then a 3rd in Boulder (supporting geographic load-balancing and fall-over).

for even more topic drift ... misc. posts mentioning cp67 and vm370 based commercial
Re: GETMAIN/FREEMAIN and virtual storage backing up
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Hunkeler Peter , KIUK 3) writes:

Fixed storage is not only to support disabled users but much more often used in the ubiquitous I/O processing. The channel subsystem (the I/O part of System z hardware) does not use DAT. Channel commands transfer blocks of data from and to real storage to and from I/O devices, resp. Before the I/O can be initiated, MVS's I/O supervisor code has to make sure the virtual storage allocated for the I/O buffers is not being paged out while the channel subsystem is working on the I/O request. Therefore, the pages will be fixed before the I/O supervisor passes the I/O request to the channel subsystem.

this was part of the technology that was borrowed from cp67 in the original os/vs2 work ... discussed earlier in this thread
http://www.garlic.com/~lynn/2007p.html#69 GETMAIN/FREEMAIN and virtual storage backing up
http://www.garlic.com/~lynn/2007p.html#70 GETMAIN/FREEMAIN and virtual storage backing up

one of the uses for fixed storage was allowing applications to build channel programs with the (previously) fixed, real storage addresses ... then the application channel program could be directly executed ... w/o requiring the supervisor to scan it, building a shadow/duplicate channel program with the real addresses.

for instance, look up various discussions about EXCPVR compared to EXCP ... this redbook has some discussion of the differences between EXCPVR and EXCP (although most of the discussion is about support for using storage above the 2GB line)
http://www.redbooks.ibm.com/abstracts/SG245976.html
from 2.10.3 Using EXCP and EXCPVR:

Programs using EXCPVR have the responsibility to page fix all I/O areas and build real channel programs.

... snip ...
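The EXCP-side work described above — the supervisor scanning the virtual channel program and building a shadow copy with real addresses — can be sketched roughly. This is a toy model, not MVS code: the names, the page size, and the virtual-to-real mapping are all invented for illustration, and it assumes each data area fits within one page (real CCW translation also has to split data areas that cross page boundaries).

```python
PAGE = 4096  # assumed page size for the sketch

def translate_channel_program(ccws, v_to_r, pin_page):
    """Build a shadow channel program with real addresses (the EXCP case).

    ccws     : list of (opcode, virtual_data_address, length) tuples
    v_to_r   : mapping of virtual page number -> real frame number
    pin_page : callback that page-fixes a virtual page for the I/O duration
    """
    shadow = []
    for opcode, vaddr, length in ccws:
        page = vaddr // PAGE
        pin_page(page)                            # fix before starting the I/O
        raddr = v_to_r[page] * PAGE + vaddr % PAGE
        shadow.append((opcode, raddr, length))    # CCW rewritten with real addr
    return shadow

# usage: one read CCW whose buffer sits in virtual page 1, mapped to frame 7
pinned = set()
shadow = translate_channel_program([(0x02, 0x1010, 80)], {1: 7}, pinned.add)
```

Under EXCPVR, per the redbook quote above, the caller has already page-fixed the buffers and built the channel program with real addresses — so this whole translate/pin step (and its pathlength) disappears from the supervisor.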
Re: GETMAIN/FREEMAIN and virtual storage backing up
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

Bill Ogden [EMAIL PROTECTED] writes:

The statements about the 360/67 are correct. It was a little ahead of its time in several ways. The 67's DAT design was a bit different than the later S/370 DAT that was used by MVS, and is typically not considered in the history lines for MVS.

re:
2007p.html: Subject: Re: GETMAIN/FREEMAIN and virtual storage backing up
2007p.html: Subject: Re: GETMAIN/FREEMAIN and virtual storage backing up
2007p.html: Subject: Re: GETMAIN/FREEMAIN and virtual storage backing up

other than the original os/vs2 prototype implementation being done with an mvt kernel modified with a lot of code borrowed from cp67 running on 360/67. i had done a lot of work with virtual memory as an undergraduate
http://www.garlic.com/~lynn/subtopic.html#wsclock
and then later after joining the science center
http://www.garlic.com/~lynn/subtopic.html#545tech
and in the early 70s several of us would make frequent sojourns to pok (out the mass pike and down the taconic) for architecture meetings (virtual memory, multiprocessing, etc) ... including architecture meetings where several features were pulled from the 370 virtual memory architecture in order to buy the 370/165 engineers six months schedule in their hardware implementation.

there were other issues in the os/vs2 virtual memory implementation (spanning both svs and mvs) ... one had to do with the page replacement algorithm implementation ... the standard is LRU (least recently used) or various approximations of LRU. the pok performance modeling group had discovered (at a micro-level) that if a non-changed page was selected for replacement ... the latency to service a page fault was much less than if a changed page was selected for replacement (non-changed pages could be immediately discarded, without needing a write, relying on the copy already out on disk).
However, i repeatedly pointed out to them that weighting the replacement algorithm based on the changed bit as opposed to the reference bit ... severely negated any recently used strategy. They went ahead with it anyway (possibly they didn't have very good macro-level simulation capability and, stuck with just the micro-level simulation, couldn't make an informed judgement). in any case, it was well into a number of MVS releases before somebody got an award for improving MVS performance by changing to give more weight to the reference bit in replacement decisions (example was that under the earlier strategy, the replacement algorithm was selecting high-use, shared, executable linklib virtual pages for replacement before private, lower-use application data virtual pages). another influence of cp67 and the science center was a joint project between endicott and the science center to do custom modifications to cp67 to provide 370 (virtual memory architecture) virtual machines. For instance, this required cp67 simulating 370 architecture hardware format virtual memory tables ... rather than 360/67 architecture hardware format virtual memory tables ... internally, this was commonly referred to as the cp67h system. After that was done, there were modifications to cp67 to make it run on 370 hardware ... building 370 format tables ... rather than 360/67 format tables. Internally, this was commonly referred to as cp67i. The first operational 370 hardware supporting virtual memory was a 370/145 engineering processor. However, cp67i running in a 370 virtual machine under cp67h was in regular operation a year before the 370/145 engineering box was operational. In fact, the cp67i system was used as the initial software brought up on the 370/145 engineering box. One of the complexities in the cp67h and cp67i development was that it was all done on the science center cp67 timesharing service. Information about virtual memory for 370 was an extremely tightly held corporate secret ... 
and there were a variety of non-employees (from numerous education institutions in the cambridge area) with regular access to the science center timesharing service. As a result ... nearly all of the cp67h work went on in a 360/67 virtual machine (not on the bare hardware) to isolate it from any non-employee prying eyes. lots of past posts about use of cp67 for timesharing service ... both internally and externally (including mentioning it being used to address various security issues) http://www.garlic.com/~lynn/subtopic.html#timeshare misc past posts mentioning cp67h and/or cp67i systems: http://www.garlic.com/~lynn/2002j.html#0 HONE was .. Hercules and System/390 - do we need it? http://www.garlic.com/~lynn/2004b.html#31 determining memory size http://www.garlic.com/~lynn/2004h.html#27 Vintage computers are better than modern crap ! http://www.garlic.com/~lynn/2004p.html#50 IBM 3614 and 3624 ATM's http://www.garlic.com/~lynn/2005c.html#59 intel's Vanderpool and virtualization in general
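the replacement-policy issue described above can be sketched in C. this is a toy illustration with invented names, not any actual os/vs2 or cp67 code: a clock-style scan over page frames, each carrying a hardware reference bit and changed bit. a proper LRU approximation victimizes the first frame whose reference bit is off; the changed-bit-weighted variant steals the first non-changed frame, so a high-use but read-only (e.g. shared linklib) page can get picked before a low-use changed page.

```c
#include <assert.h>

struct frame { int referenced; int changed; };

/* LRU approximation (clock): clear reference bits as we pass, take the
 * first frame whose reference bit is already off */
int select_lru(struct frame *f, int n, int *hand)
{
    for (;;) {
        int i = *hand;
        *hand = (*hand + 1) % n;
        if (!f[i].referenced)
            return i;
        f[i].referenced = 0;   /* give it another trip around the clock */
    }
}

/* changed-bit-weighted variant: take the first non-changed frame,
 * regardless of how recently it was referenced */
int select_nonchanged(struct frame *f, int n, int *hand)
{
    for (int k = 0; k < n; k++) {
        int i = (*hand + k) % n;
        if (!f[i].changed) {
            *hand = (i + 1) % n;
            return i;
        }
    }
    return *hand;              /* all pages changed: fall back to clock hand */
}
```

with two frames, one recently-referenced and clean, one unreferenced and dirty, the LRU scan steals the dirty unreferenced frame while the changed-bit-weighted scan steals the high-use clean one ... exactly the linklib-versus-application-data behavior described above.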
Re: GETMAIN/FREEMAIN and virtual storage backing up
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. [EMAIL PROTECTED] (Hunkeler Peter , KIUK 3) writes: OS/360 was a real storage only operating system. DAT was introduced with S/370. OS/360 could run on that hardware but not use DAT (and other new hardware facilities). DAT was introduced on the 360/67 ... basically a 360/65 with dynamic address translation ... at least in its single processor version (although the 360/67 offered both 24-bit as well as 32-bit virtual addressing modes). The 360/67 multiprocessor did offer some additional features vis-a-vis the 360/65 multiprocessor ... like all 360/67 processors could directly address all physical channels (while the 360/65 multiprocessor was limited to addressing common real storage ... and didn't provide channel multiprocessor connectivity). tss/360 was to be the official operating system supporting the 360/67 but ran into lots of problems and was decommitted. however, the science center http://www.garlic.com/~lynn/subtopic.html#545tech did do a virtual machine monitor called cp40 for a 360/40 with custom hardware dynamic address translation modifications ... and then morphed it into cp67 when production 360/67 machines became available. cp67 was the precursor to vm370 when virtual memory support was announced for 370. the initial prototype for os/vs2 svs ... precursor to os/vs2 mvs ... was a custom modified mvt system ... initially running on 360/67 machines. it had a hack on the side to create a single 16mbyte virtual address space and a simple interrupt handler for page faults. it also had CCWTRANS (and associated routines) from cp67 wired into the side to handle the translation of application channel programs (from excp/svc0) to real channel programs. This is an issue common to both virtual machine monitors and the os/vs genre of operating systems ... where the applications built channel programs that were then passed to be directly executed. 
The 360/370 genre of channels required real addresses for execution ... but the application (and/or virtual machine) built channel programs all had virtual address specifications. To handle the situation, a copy of the original channel program had to be created with the specified virtual addresses replaced with the corresponding real addresses. for other topic drift ... charlie's work on fine-grain locking supporting cp67 multiprocessor operation resulted in his invention of the compare-and-swap instruction (mnemonic chosen because CAS are charlie's initials). the initial foray with pok and the 370 architecture owners was met with brick wall resistance because the pok favorite son operating system people claimed that the test-and-set instruction (from 360 days) was more than sufficient for all multiprocessor support. The challenge was that in order to justify the compare-and-swap instruction, a non-multiprocessor use had to be defined/invented. The result was the multi-threaded use description (whether or not the environment was multiprocessor) that currently shows up in the appendix section of the principles of operation. misc. posts mentioning multiprocessor and/or compare-and-swap instruction http://www.garlic.com/~lynn/subtopic.html#smp somewhat related to the original thread subject ... when i first got a copy of cp67 at the university as an undergraduate ... when a virtual machine logged on ... the virtual address space backing store (for the virtual machine) was all initialized to a single, special zeros page on the cp67 ipl/boot volume. Each corresponding page table entry that pointed to the zeros page also had a flag that if the virtual page was ever modified/changed (after being fetched into real storage), it was to have a new (disk paging) backing location dynamically allocated. an early enhancement that i made to cp67 ... 
was to initialize freshly created virtual storage with an indication that on initial page fault, instead of fetching the virtual page from some disk location ... a real page was to be allocated and then simply cleared to zeros (i used a bxle loop with stm of ten registers that had all been cleared to zeros). The recompute flag still remained the same ... i.e. if virtual execution subsequently modified a zeros page ... it would have a new backing disk page location dynamically allocated. -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
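the "multi-threaded use" justification mentioned above ... updating shared state correctly whether or not the hardware is a multiprocessor ... looks roughly like the following sketch, using C11 atomic_compare_exchange_weak standing in for the 370 compare-and-swap (CS) instruction; the counter and function name are invented for illustration.

```c
#include <stdatomic.h>

static _Atomic long counter = 0;

/* add delta to a shared counter without a lock: read the old value,
 * compute the new one, and install it only if the counter still holds
 * the old value; otherwise retry with the refreshed value.  This is
 * the same retry loop shown for CS in the principles-of-operation
 * appendix examples. */
long add_to_counter(long delta)
{
    long old = atomic_load(&counter);
    long new;
    do {
        new = old + delta;
        /* on failure, 'old' is reloaded with the current contents */
    } while (!atomic_compare_exchange_weak(&counter, &old, new));
    return new;
}
```

the point of the loop is that it is valid under interruption on a uniprocessor as well as under true concurrency on a multiprocessor ... which is exactly the non-multiprocessor use that had to be invented to get the instruction past the pok architecture owners.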
Re: GETMAIN/FREEMAIN and virtual storage backing up
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. [EMAIL PROTECTED] writes: This has always intrigued me. What was done to eliminate the possibility that the channel had to access a virtual page that had been paged out? An enabled application or system code that is copying and translating virtual-to-real addresses can always suffer a page fault, wait for the page-in, and resume as if nothing had happened, but channels cannot wait for page-fault resolution. Or could they? re: http://www.garlic.com/~lynn/2007p.html#69 GETMAIN/FREEMAIN and virtual storage backing up part of CCWTRANS creation of shadow channel programs (with real addresses) included pinning/locking the associated virtual pages (to those real addresses). after the real i/o had completed (running with the shadow channel program), there was an UNTRANS process ... that included unpinning the associated virtual pages. the original 370 virtual memory architecture included some number of features that didn't actually make it out. i've posted before about the problems the 165 hardware engineers ran into ... creating the full 370 virtual memory hardware retrofit to the 165 ... and in escalation they claimed they could pick up six months on the delivery schedule if they could drop the features ... and the pok favorite son operating system people expressed they could see no use for the features. dropping the features then meant that all the other processors had to undo their implementation and any software already completed that used the additional features had to be reworked. there had been channel operation with virtual addresses defined (including being able to suspend because of a page-fault and then be resumed) and there was folklore that there were even patents on such channel operation with virtual addresses. this never got very far into the 370 architecture. for lots of topic drift ... 
past posts mentioning issue with 370/165 virtual memory hardware retrofit schedule and dropping a number of features to make up six months http://www.garlic.com/~lynn/95.html#3 What is an IBM 137/148 ??? http://www.garlic.com/~lynn/99.html#7 IBM S/360 http://www.garlic.com/~lynn/99.html#204 Core (word usage) was anti-equipment etc http://www.garlic.com/~lynn/99.html#209 Core (word usage) was anti-equipment etc http://www.garlic.com/~lynn/2000d.html#82 all-out vs less aggressive designs (was: Re: 36 to 32 bit transition) http://www.garlic.com/~lynn/2000f.html#35 Why IBM use 31 bit addressing not 32 bit? http://www.garlic.com/~lynn/2000f.html#55 X86 ultimate CISC? No. (was: Re: all-out vs less aggressive designs) http://www.garlic.com/~lynn/2000f.html#63 TSS ancient history, was X86 ultimate CISC? designs) http://www.garlic.com/~lynn/2000g.html#10 360/370 instruction cycle time http://www.garlic.com/~lynn/2000g.html#15 360/370 instruction cycle time http://www.garlic.com/~lynn/2000g.html#16 360/370 instruction cycle time http://www.garlic.com/~lynn/2000g.html#21 360/370 instruction cycle time http://www.garlic.com/~lynn/2001.html#63 Are the L1 and L2 caches flushed on a page fault ? http://www.garlic.com/~lynn/2001b.html#37 John Mashey's greatest hits http://www.garlic.com/~lynn/2001k.html#8 Minimalist design (was Re: Parity - why even or odd) http://www.garlic.com/~lynn/2002.html#48 Microcode? http://www.garlic.com/~lynn/2002.html#50 Microcode? http://www.garlic.com/~lynn/2002.html#52 Microcode? http://www.garlic.com/~lynn/2002g.html#47 Why are Mainframe Computers really still in use at all? http://www.garlic.com/~lynn/2002l.html#51 Handling variable page sizes? http://www.garlic.com/~lynn/2002m.html#2 Handling variable page sizes? http://www.garlic.com/~lynn/2002m.html#68 Tweaking old computers? http://www.garlic.com/~lynn/2002n.html#10 Coherent TLBs http://www.garlic.com/~lynn/2002n.html#15 Tweaking old computers? 
http://www.garlic.com/~lynn/2002n.html#23 Tweaking old computers? http://www.garlic.com/~lynn/2002n.html#32 why does wait state exist? http://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033 http://www.garlic.com/~lynn/2002p.html#44 Linux paging http://www.garlic.com/~lynn/2003e.html#12 Resolved: There Are No Programs With 32 Bits of Text http://www.garlic.com/~lynn/2003f.html#56 ECPS:VM DISPx instructions http://www.garlic.com/~lynn/2003g.html#19 Multiple layers of virtual address translation http://www.garlic.com/~lynn/2003g.html#20 price ov IBM virtual address box?? http://www.garlic.com/~lynn/2003h.html#37 Does PowerPC 970 has Tagged TLBs (Address Space Identifiers) http://www.garlic.com/~lynn/2003m.html#34 SR 15,15 was: IEFBR14 Problems http://www.garlic.com/~lynn/2003m.html#37 S/360 undocumented instructions? http://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone http://www.garlic.com/~lynn/2004p.html#8 vm/370 smp support and shared segment protection hack
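the CCWTRANS/UNTRANS pin-translate-unpin cycle described above can be sketched as follows. this is a toy model with invented structures (a tiny identity-mapped page table, single-page CCWs, no page-boundary crossing), not the actual cp67/os-vs2 code: the translate pass copies each CCW into a shadow program, replaces the virtual data address with the real one, and pins the page so it can't be stolen while the channel runs; the untrans pass unpins after i/o completion.

```c
#include <assert.h>

#define PAGE_SIZE 4096u
#define NPAGES    16

struct pte { unsigned real_frame; int present; int pinned; };
struct ccw { unsigned char op; unsigned addr; unsigned short count; };

/* toy page-in: a real system would fault the page in from the paging
 * device; here we just identity-map virtual page to real frame */
static void ensure_present(struct pte *pt, unsigned vpage)
{
    if (!pt[vpage].present) {
        pt[vpage].real_frame = vpage;
        pt[vpage].present = 1;
    }
}

/* CCWTRANS sketch: build shadow channel program with real addresses,
 * pinning each page referenced by a CCW */
void ccwtrans(struct pte *pt, const struct ccw *in, struct ccw *shadow, int n)
{
    for (int i = 0; i < n; i++) {
        unsigned vpage = in[i].addr / PAGE_SIZE;
        ensure_present(pt, vpage);
        pt[vpage].pinned++;          /* page can't be replaced during i/o */
        shadow[i] = in[i];
        shadow[i].addr = pt[vpage].real_frame * PAGE_SIZE
                       + in[i].addr % PAGE_SIZE;
    }
}

/* UNTRANS sketch: after i/o completion, unpin the pages used */
void untrans(struct pte *pt, const struct ccw *in, int n)
{
    for (int i = 0; i < n; i++)
        pt[in[i].addr / PAGE_SIZE].pinned--;
}
```

the pin count is what answers the question in the quoted post: the channel never has to wait on a page fault because every page a shadow CCW touches was faulted in and pinned before the real i/o was started.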
Re: Is Parallel Programming Just Too Hard?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. Choice Overload In Parallel Programming http://developers.slashdot.org/developers/07/10/03/0021253.shtml from above: And then we show them the parallel programming environments they can work with: MPI, OpenMP, Ct, HPF, TBB, Erlang, Shmemm, Portals, ZPL, BSP, CHARM++, Cilk, Co-array Fortran, PVM, Pthreads, windows threads, Tstreams, GA, Java, UPC, Titanium, Parlog, NESL,Split-C... and the list goes on and on. If we aren't careful, the result could very well be a 'choice overload' experience with software vendors running away in frustration. ... snip .. and ... Embedded software stuck at C http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=202102427 from above: The inability of C/C++ code to parallelize coupled with its ubiquity throughout the embedded market is a major issue for multi-core going forward, Heikkila wrote in a follow up email to EE Times. Any alternative parallel programming languages certainly won't materialize in the embedded market, but instead will more likely gain momentum in a more mainstream computing market before making its way into embedded applications, he added. ... snip ... past posts in thread: http://www.garlic.com/~lynn/2007l.html#24 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007l.html#26 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007l.html#34 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007l.html#38 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007l.html#60 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007l.html#63 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#5 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#13 Is Parallel Programming Just Too Hard? 
http://www.garlic.com/~lynn/2007m.html#14 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#19 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#22 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#26 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#29 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#37 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#39 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#49 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#51 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#52 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#53 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#54 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#58 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#59 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#61 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#70 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007n.html#1 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007n.html#3 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007n.html#6 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007n.html#25 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007n.html#28 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007n.html#38 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007n.html#39 Is Parallel Programming Just Too Hard? 
Re: Industry Standard Time To Analyze A Line Of Code
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. [EMAIL PROTECTED] (John P Baker) writes: Back in the 80s, we operated under the premise that a seasoned programmer should be able to produce 20 lines of bug-free assembler code per day. there have been periodic statements that code generation can be the simplest part of the problem. we've periodically commented that the effort to produce a service can be 4-10 times that of a straight-forward application (or taking a well-tested and well-debugged application and turning it into a service can take 4-10 times the effort of the original application development). frequently this may have only a little to do with lines-of-code. we were called in to consult with a small client/server startup that wanted to do payment transactions on servers ... they had this technology called SSL ... and subsequently the activity has frequently been referred to as electronic commerce. Part of the infrastructure that the server payment application talked to was something called a payment gateway ... misc. past posts mentioning payment gateway activity http://www.garlic.com/~lynn/subnetwork.html#gateway the initial take was to take transaction message formats from existing circuit-based infrastructure and map them to packets in internet infrastructure. this somewhat ignored a whole lot of telco provisioning that went into circuit-based operation ... and provided a basis for business critical dataprocessing ... which was all missing in the initial transition to internet-based operation. as part of supporting an operational environment (as opposed to somewhat trivial technology demonstration) ... we had to invent a lot of compensating processes for the internet environment. some other recent posts raising the issue about business critical dataprocessing http://www.garlic.com/~lynn/2007f.html#37 Is computer history taught now? 
http://www.garlic.com/~lynn/2007g.html#51 IBM to the PCM market(the sky is falling!!!the sky is falling!!) http://www.garlic.com/~lynn/2007h.html#78 John W. Backus, 82, Fortran developer, dies http://www.garlic.com/~lynn/2007n.html#10 The top 10 dead (or dying) computer skills http://www.garlic.com/~lynn/2007n.html#76 PSI MIPS http://www.garlic.com/~lynn/2007n.html#77 PSI MIPS http://www.garlic.com/~lynn/2007o.html#23 Outsourcing loosing steam?
Re: India is outsourcing jobs as well
re: http://www.garlic.com/~lynn/2007p.html#39 India is outsourcing jobs as well Why Is US Grad School Mainly Non-US Students? http://ask.slashdot.org/askslashdot/07/09/29/2027210.shtml from above: I am a new graduate student in Computer Engineering. I would like to get my MS and possibly my Ph.D. I have learned that 90% of my department is from India and many others are from China. ... snip ... somewhat related recent post http://www.garlic.com/~lynn/2007o.html#76 Graduate Enrollment in 2005 giving stats showing the gap slightly closing between 2001 and 2005, i.e. foreign/US; 2001: 6500/2500 and 2005: 4500/3500
Re: Writing 23FDs
Raymond Noal wrote: Dear List: An IBM 4361 Model Group 5 had the ECPS feature - Extended Control Program Support (ECPS) -- offers VSE mode, VM/370 mode, and MVS/370 mode. These modes provide microcode assists that make the system control programs operate more efficiently. ECPS was originally done for virgil/tully (370 138/148). basically, portions of kernel/nucleus pathlengths were implemented in microcode. a new instruction was defined for each of these (moved) pathlength snippets ... and placed in front of the corresponding kernel instructions. The parameter list for the new instruction included address(es) of where the microcode was to resume in the standard code. here is an old post that details what portions of the vm370 kernel were identified for movement into microcode http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist the issue was that low-end and mid-range 370 machines used vertical microcode to implement the 370 instruction set ... and there was typically an avg ratio of 10:1 (microcode instructions to 370 instructions) ... this avg. ratio has also been found by some of the more recent 370 emulators on i86 platforms. For virgil/tully we were given that there was approx. 6k of microcode instruction space available ... and a typical kernel instruction would translate approx. 1:1 into microcode (6k bytes of kernel 370 instructions translates into approx. 6k bytes of microcode). So the identification activity was to identify the 6k bytes of vm370 kernel code that were the highest used pathlengths. There was also an ipl/boot sequence that identified whether it was running on an ECPS machine ... and if not, it had a table of all ECPS instructions in the kernel which it would overlay with no-ops (allowing the same kernel to execute on both ECPS machines and non-ECPS machines). Note, for vm370, the vm microcode assist (VMA) had previously been implemented on the 370/158. 
These were specific, high-use, supervisor state instructions that normally interrupted into the vm370 kernel for simulation. A new mode was defined for the machine which was virtual machine supervisor state ... and the machine microcode was changed to directly execute the supervisor state instruction using virtual machine rules ... w/o having to interrupt into the vm370 kernel. As part of the virgil/tully ECPS effort, there was also implementation of the VMA supervisor instructions, as well as additional supervisor state instructions not in the original VMA implementation. Later there was an ECPS-like effort done for the 3033 for MVS. There were some differences between the 3033 MVS changes and the virgil/tully vm370 implementation: * the new MVS would only run on machines with the MVS microcode enhancement and wouldn't run on machines w/o the feature * 3033 was a horizontal microcode machine where the ratio of microcode instructions to 370 instructions was nearly 1:1 ... aka there was little or no performance difference between the 370 instruction implementation and the microcode implementation (this characteristic continued on later high-end machines) later, in the 4331/4341 time-frame ... there was some effort to retrofit the mvs ecps changes to 4341s ... allowing the latest release of mvs to operate on 4341 machines. there was lots of contention over the value of doing this since the 4341 was barely powerful enough to support any kind of mvs thruput and the 4331 was quite a bit below that threshold (so i can't be positive, but i'm pretty sure that the mvs ecps feature was never retrofitted to the 4331 ... although it was eventually made available on the 4341). somewhat 4361 topic drift ... the 3081 had a service processor which ran off a 3310 fba disk. part of the issue was that field service had a requirement that it could perform bootstrap field diagnostics starting with a scope. this was no longer possible for the 3081 ... 
so a service processor was added that had the capability of diagnosing 3081 hardware ... and it was possible for field service to do diagnostic field bootstrap starting with a scope on the service processor. the service processor function was getting more and more complex, and so it was decided that the 3090 would use a 4331 running a highly customized version of vm370 release 6 ... with all service processor menu screens implemented in cms ios3270. before 3090 first customer ship, the service processor was upgraded to a pair of 4361s (running vm370 and cms with menu screens implemented in cms ios3270). having a pair of redundant 4361s eliminated the requirement for field service to bootstrap diagnose 4361s ... since they could just switch to the other 4361 machine for diagnosing the 3090 (if there was a 4361 failure). misc. past posts mentioning service processor operation http://www.garlic.com/~lynn/96.html#41 IBM 4361 CPU technology http://www.garlic.com/~lynn/99.html#61 Living legends http://www.garlic.com/~lynn/99.html#62 Living legends http://www.garlic.com/~lynn/99.html#108 IBM 9020 computers used by FAA
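the ipl/boot no-op overlay described above (same kernel running on ECPS and non-ECPS machines) can be sketched as below. everything here is invented for illustration ... a "kernel image" of 16-bit halfwords and a table of ECPS instruction sites; the real ECPS instructions were longer, with parameter lists, but the mechanism is the same: if the feature is absent, each ECPS instruction is overlaid so execution falls through into the standard 370 pathlength that follows it.

```c
#include <assert.h>
#include <stdint.h>

/* 370 BCR 0,0 (x'0700') branches nowhere and serves as a no-op */
#define NOOP 0x0700u

/* each entry gives the halfword offset of an ECPS instruction in the
 * kernel image (hypothetical layout for this sketch) */
struct ecps_site { unsigned offset; };

/* boot-time step when the machine lacks the ECPS feature: overlay
 * every ECPS instruction with a no-op so the plain 370 code runs */
void disable_ecps(uint16_t *image, const struct ecps_site *tab, int n)
{
    for (int i = 0; i < n; i++)
        image[tab[i].offset] = NOOP;
}
```

on an ECPS machine the table is simply ignored and the microcode executes the assist instructions; on anything else, one pass over the table at ipl makes the kernel portable.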
Re: Writing 23FDs
Matthew Stitt wrote: Because the FBA's and 8809's were boat anchors. And the 3350's and 3420 gave interchangeability with MVS. With things connected to normal channels the sky was the limit with what could be done with the 4331. Using the ICA severely limited your devices. The 3350 and 3420 tapes could run circles around the standard stuff IBM wanted to sell with the 4331. the 4331 had integrated channels (aka like the 370/158 and many other processors); i think you are referring to the integrated controller adapter (as opposed to integrated channels). part of the ICA case was that run-of-the-mill controllers were going to be physically the size of a 4331 (or larger) and cost as much (unless you could pick up old hardware at surplus prices). An example of the size ... in addition to the original effort to use it for the 3090 service processor ... http://www.garlic.com/~lynn/2007p.html#36 Writing 23FDs research had a project that used a 4331 as a desk-side personal computer. FBAs were mostly boat anchors because mvs wouldn't ship support for them. Eventually all physical disks migrated to FBA ... and for mvs compatibility, there had to be CKD emulation (the first was the 3375). misc. past posts mentioning ckd issues http://www.garlic.com/~lynn/subtopic.html#dasd i was told that even if i provided fully tested and integrated mvs fba support, there would still be a bill of $26m for education, classes, documentation, etc. In order to justify mvs fba support, i had to show incremental disk sale ROI (incremental gross sales at least 10-20 times the expense) attributed solely to the availability of the mvs fba support. misc. past posts mention being quoted $26m as the bill for mvs fba education, classes and documentation: http://www.garlic.com/~lynn/97.html#16 Why Mainframes? http://www.garlic.com/~lynn/97.html#29 IA64 Self Virtualizable? http://www.garlic.com/~lynn/99.html#75 Read if over 40 and have Mainframe background http://www.garlic.com/~lynn/2000.html#86 Ux's good points. 
http://www.garlic.com/~lynn/2000f.html#18 OT? http://www.garlic.com/~lynn/2000g.html#51 512 byte disk blocks (was: 4M pages are a bad idea) http://www.garlic.com/~lynn/2001.html#54 FBA History Question (was: RE: What's the meaning of track overflow?) http://www.garlic.com/~lynn/2001d.html#64 VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position) http://www.garlic.com/~lynn/2001g.html#32 Did ATT offer Unix to Digital Equipment in the 70s? http://www.garlic.com/~lynn/2002.html#5 index searching http://www.garlic.com/~lynn/2002.html#10 index searching http://www.garlic.com/~lynn/2002g.html#13 Secure Device Drivers http://www.garlic.com/~lynn/2002l.html#47 Do any architectures use instruction count instead of timer http://www.garlic.com/~lynn/2003.html#15 vax6k.openecs.org rebirth http://www.garlic.com/~lynn/2003c.html#48 average DASD Blocksize http://www.garlic.com/~lynn/2003m.html#56 model 91/CRJE and IKJLEW http://www.garlic.com/~lynn/2004d.html#65 System/360 40 years old today http://www.garlic.com/~lynn/2004g.html#15 Infiniband - practicalities for small clusters http://www.garlic.com/~lynn/2004l.html#20 Is the solution FBA was Re: FW: Looking for Disk Calc http://www.garlic.com/~lynn/2004l.html#23 Is the solution FBA was Re: FW: Looking for Disk Calc http://www.garlic.com/~lynn/2004n.html#52 CKD Disks? http://www.garlic.com/~lynn/2005c.html#64 Is the solution FBA was Re: FW: Looking for Disk Calc http://www.garlic.com/~lynn/2005m.html#40 capacity of largest drive http://www.garlic.com/~lynn/2005u.html#21 3390-81 http://www.garlic.com/~lynn/2006f.html#3 using 3390 mod-9s http://www.garlic.com/~lynn/2006f.html#4 using 3390 mod-9s
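the core of the CKD emulation mentioned above (CKD addressing on top of fixed-block hardware) can be sketched as an address mapping. this is a toy illustration with an invented, fixed geometry and one record per block ... real CKD allows variable-length records per track, which is what makes actual emulation much messier:

```c
#include <assert.h>

/* hypothetical fixed geometry for the sketch */
struct geometry { unsigned heads_per_cyl; unsigned records_per_track; };

/* map a CKD-style cylinder/head/record address to a linear fixed-block
 * number; record numbers are 1-based (record 0 being the track
 * descriptor on real CKD) */
unsigned ckd_to_block(const struct geometry *g,
                      unsigned cyl, unsigned head, unsigned record)
{
    unsigned track = cyl * g->heads_per_cyl + head;
    return track * g->records_per_track + (record - 1);
}
```

with a fixed geometry the mapping is trivial arithmetic; the expensive part of real emulation (and part of the mvs fba support argument above) is faithfully simulating variable-length records and search-key channel commands on fixed blocks.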
Re: India is outsourcing jobs as well
Edward Jaffe wrote: I have family all over Virginia. Less developed is probably a good thing. It's a beautiful state. Lots of history. There's something very wrong with and/or not being stated in the premise here. They probably need people in the United States because things aren't working out so well with an all-Indian work force. the other possibility is that they have some specific outsourcing that may include a requirement for some legacy skills ... it may turn out to be cheaper to hire people that already have such experience than to try to train a new generation ... especially if it is considered obsolete skills with limited future applicability. i've frequently claimed that a big boost for outsourcing came as part of y2k remediation efforts ... when it wasn't so much a question of pay scale ... but of getting anybody at all. this was significantly aggravated because it was happening during the big resource demand growth of the internet bubble. once business relations were established (during the y2k era), these business relations continued to exist after y2k remediation completed. some of the recent statistics show that well over half of cs advanced degrees from us institutions went to people not born in the US. the majority of advanced degrees (from us institutions) still go to people not born in the us ... while at the same time the number graduating from non-US institutions is dramatically increasing. This is coupled with things like test scores for US high school graduates ranking near the bottom of all industrial nations. misc. recent posts on the subject: http://www.garlic.com/~lynn/2007g.html#6 U.S. Cedes Top Spot in Global IT Competitiveness http://www.garlic.com/~lynn/2007g.html#7 U.S. Cedes Top Spot in Global IT Competitiveness http://www.garlic.com/~lynn/2007g.html#34 U.S. Cedes Top Spot in Global IT Competitiveness http://www.garlic.com/~lynn/2007g.html#35 U.S. Cedes Top Spot in Global IT Competitiveness http://www.garlic.com/~lynn/2007g.html#52 U.S. 
Cedes Top Spot in Global IT Competitiveness http://www.garlic.com/~lynn/2007g.html#68 U.S. Cedes Top Spot in Global IT Competitiveness http://www.garlic.com/~lynn/2007i.html#13 U.S. Cedes Top Spot in Global IT Competitiveness http://www.garlic.com/~lynn/2007l.html#22 U.S. Cedes Top Spot in Global IT Competitiveness http://www.garlic.com/~lynn/2007o.html#20 U.S. Cedes Top Spot in Global IT Competitiveness http://www.garlic.com/~lynn/2007o.html#21 U.S. Cedes Top Spot in Global IT Competitiveness http://www.garlic.com/~lynn/2007o.html#22 U.S. Cedes Top Spot in Global IT Competitiveness http://www.garlic.com/~lynn/2007p.html#15 U.S. Cedes Top Spot in Global IT Competitiveness http://www.garlic.com/~lynn/2007p.html#18 U.S. Cedes Top Spot in Global IT Competitiveness http://www.garlic.com/~lynn/2007p.html#22 U.S. Cedes Top Spot in Global IT Competitiveness http://www.garlic.com/~lynn/2007p.html#32 U.S. Cedes Top Spot in Global IT Competitiveness
Re: zH/OS (z/OS on Hercules for personal use only)
Andreas F. Geissbuehler wrote: It's been done many times before, FREEWARE for STRICTLY PERSONAL USE. It is proven to sell more licences for commercial use. There is precedence, DB2, Lotus...

personal computing ... freeware or not ... has always been shown to contribute significantly to usage increase. CMS was the personal computing of the 60s and 70s (first as cambridge monitor system on cp67 and then renamed conversational monitor system as part of the morph to vm370) ... and SHARE case studies in the 70s showed that vm370/cms environments had the largest usage growth (this was part of the many countermeasures to the periodic corporate statements that the vm370 product was being eliminated).

misc. past posts mentioning the cambridge science center ... which originated the cp40 and cp67 virtual machine systems (along with cms)
http://www.garlic.com/~lynn/subtopic.html#545tech
where gml was invented (precursor to sgml, html, xml, etc)
http://www.garlic.com/~lynn/subtopic.html#sgml
where the compare&swap multiprocessor instruction was invented
http://www.garlic.com/~lynn/subtopic.html#smp
and where the technology for the internal network originated
http://www.garlic.com/~lynn/subnetwork.html#internalnet
which was also the basis for bitnet (and european earn):
http://www.garlic.com/~lynn/subnetwork.html#bitnet
Re: CA to IBM product swap
[EMAIL PROTECTED] (Mark Zelden) writes: Then you switch back. ;-) There are actually a lot of companies that seem to work that way. That's what happens when bean counters make the decisions and don't consider the human aspects (time, training etc.)

this is related to the original justification for the 360 product line with common architecture across the product line ... recent post mentioning supposed testimony in the gov. anti-trust case by one of the bunch
http://www.garlic.com/~lynn/2007p.html#8 what does xp do when system is copying
i.e. a compatible product line minimized having to redo applications every time a customer upgraded/changed processors ... people resources and elapsed time for conversion were starting to dominate considerations.

this was also touched on in a talk amdahl gave at mit in the early 70s, when asked what justification was used to get funding for his clone processor company ... even if ibm were to completely walk away from 360, customers already had something like $200B invested in software applications, which would support a clone processor business through at least the end of the century. the walk away from 360 could possibly be considered a veiled reference to the future system project
http://www.garlic.com/~lynn/subtopic.html#futuresys
which would have been as different from 360 as 360 had been from earlier machines ... recent posts
http://www.garlic.com/~lynn/2007p.html#1 what does xp do when system is copying
http://www.garlic.com/~lynn/2007p.html#3 PL/S programming language
http://www.garlic.com/~lynn/2007p.html#5 PL/S programming language
Re: FICON tape drive?
George McAliley wrote: All IBM 3490's on mainframes were either Block MPX channel (bus/tag) or ESCON. The STK 9490's on mainframe were also ESCON though they did have a SCSI interface for distributed system attachment. The IBM Magstar (3590) series were natively FICON and ESCON capable depending on how you configure the drive/controller. The newer 3592's are also either ESCON or FICON though they are really too fast for ESCON. All the Magstar drives have standalone (non_ATL) configurations but are now usually installed in ATL's or VTL's in today's world.

a recent escon, sla, fcs, ficon and eckd x-over discussion from the comp.arch newsgroup
http://www.garlic.com/~lynn/2007o.html#54 mainframe performance, was Is a RISC chip more expensive?
and additional drift, other posts in the thread:
http://www.garlic.com/~lynn/2007o.html#42 mainframe performance, was Is a RISC chip more expensive?
http://www.garlic.com/~lynn/2007o.html#55 mainframe performance, was Is a RISC chip more expensive?

... escon had been fiber technology that had been knocking around pok since the 70s. my wife had been con'ed into going to pok to be in charge of loosely-coupled architecture, where she created peer-coupled shared data architecture
http://www.garlic.com/~lynn/subtopic.html#shareddata
which didn't see a whole lot of take-up until sysplex ... except for the ims hot-standby work. she also had significant battles with the communication group over not using sna for peer-coupled operation. eventually there supposedly was a (temporary) truce where sna had to be used for anything transiting the walls of the glasshouse, but non-sna could be used within the walls of the glasshouse. this sort of came to a test with the ctca enhancement, trotter/3088, where she pushed hard for full-duplex operation ... as an improvement over standard ctca/channel half-duplex operation (which didn't make it out of the door).
san jose research did do a vm/4341 cluster prototype using enhanced 3088 peer-coupled operation ... but when it came to making it available to customers, they were required to use sna for the implementation. a trivial example of the difference was the cluster synchronization protocol ... which started out being done in subsecond elapsed time. it was severely crippled by being forced to regress to an sna implementation, which increased the cluster synchronization protocol elapsed time to nearly a minute. all of this contributed to her not lasting very long as pok's loosely-coupled architect.

of course, part of her problem was that she had earlier co-authored AWP39, peer-coupled networking architecture, in the early days of SNA ... which they possibly viewed as a threat. SNA architecture was VTAM ... not a networking architecture at all, but a (dumb) terminal communication control infrastructure that could handle massive numbers of terminals (or at least initially up to 64k). for other random trivia, appn was AWP164.

misc. past posts mentioning AWP39
http://www.garlic.com/~lynn/2004n.html#38 RS/6000 in Sysplex Environment
http://www.garlic.com/~lynn/2004p.html#31 IBM 3705 and UC.5
http://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back
http://www.garlic.com/~lynn/2005p.html#15 DUMP Datasets and SMS
http://www.garlic.com/~lynn/2005p.html#17 DUMP Datasets and SMS
http://www.garlic.com/~lynn/2005q.html#27 What ever happened to Tandem and NonStop OS ?
http://www.garlic.com/~lynn/2005u.html#23 Channel Distances
http://www.garlic.com/~lynn/2006h.html#52 Need Help defining an AS400 with an IP address to the mainframe
http://www.garlic.com/~lynn/2006j.html#31 virtual memory
http://www.garlic.com/~lynn/2006k.html#9 Arpa address
http://www.garlic.com/~lynn/2006k.html#21 Sending CONSOLE/SYSLOG To Off-Mainframe Server
http://www.garlic.com/~lynn/2006l.html#4 Google Architecture
http://www.garlic.com/~lynn/2006l.html#45 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
http://www.garlic.com/~lynn/2006o.html#62 Greatest Software, System R
http://www.garlic.com/~lynn/2006r.html#4 Was FORTRAN buggy?
http://www.garlic.com/~lynn/2006r.html#9 Was FORTRAN buggy?
http://www.garlic.com/~lynn/2006t.html#36 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006u.html#28 Assembler question
http://www.garlic.com/~lynn/2006u.html#55 What's a mainframe?
http://www.garlic.com/~lynn/2007b.html#9 Mainframe vs. Server (Was Just another example of mainframe
http://www.garlic.com/~lynn/2007b.html#48 6400 impact printer
http://www.garlic.com/~lynn/2007d.html#55 Is computer history taugh now?
http://www.garlic.com/~lynn/2007h.html#35 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#39 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007l.html#62 Friday musings on the future of 3270 applications
Re: CA to IBM TCP Conversion
Chris Mason wrote: Robert I thought I'd dig further into this IUCV point and I found a reference in the IP Configuration Guide. It appears that IUCV, VMCF and TNF stuff is still available, you just don't necessarily need it. It would appear to have become an *optional* bit of preparation for the use of the Communications Server (CS) IP component from being *required* as it was when I used to teach TCP/IP for MVS. It is described in the CS IP Configuration Guide under Chapter 2. Configuration overview, Required steps before starting TCP/IP as Step 3: Configure VMCF and TNF on page 111 of the z/OS 1.8 manual. It appears that the section headers are logically incorrect since, as far as I can tell, it really is an *optional* step and depends on whether or not the Pascal API is used. The clearest indication that this step really is optional is ... therefore, some installations will require setting up VMCF and TNF. at the end of the first paragraph. I then found Dana Mitchell's post where he/she said something of the same as above. Chris Mason

the original tcp/ip implementation was done in vs/pascal on vm370 (20 yrs ago) ... but there were some number of implementation bottlenecks ... such that it got about 44kbytes/sec aggregate thruput while consuming a 3090 processor. i then did rfc1044 support for the product, and in some tuning tests at cray research (between a 4341 clone and a cray machine) was getting 4341 channel media speed thruput using only a modest amount of the 4341 clone.
http://www.garlic.com/~lynn/subnetwork.html#1044
for some topic drift, recent post mentioning vs/pascal
http://www.garlic.com/~lynn/2007o.html#61 (Newbie question)How does the modern high-end processor been designed?
which is slightly related to the topic in this newsgroup, since the los gatos vlsi tools group was responsible for the 370 pascal implementation as well as the LSM
http://www.garlic.com/~lynn/2007o.html#67 1401 simulator for OS/360

somewhat drifting back to the topic, a port of the implementation was then done for mvs ... by doing a (vm370) vmcf/iucv emulator for mvs systems. for other background ... internally there was something called spm that was originally implemented on cp67 (precursor to vm370 that ran on 360/67s) which was a superset of the later vmcf and iucv implementations. there was some internal dissension leading up to the initial vmcf release ... since spm had been around for a much longer period and had so much more function. later, iucv was released to cover some additional function (also covered by spm) that wasn't handled by vmcf.

some old email with spm reference
http://www.garlic.com/~lynn/2006w.html#email750430
http://www.garlic.com/~lynn/2006k.html#email851017
misc. old posts mentioning spm:
http://www.garlic.com/~lynn/2002d.html#31 2 questions: diag 68 and calling convention
http://www.garlic.com/~lynn/2004m.html#20 Whatever happened to IBM's VM PC software?
http://www.garlic.com/~lynn/2005m.html#45 Digital ID
http://www.garlic.com/~lynn/2006k.html#51 other cp/cms history
http://www.garlic.com/~lynn/2006t.html#47 To RISC or not to RISC
http://www.garlic.com/~lynn/2006w.html#8 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006w.html#12 more secure communication over the network
http://www.garlic.com/~lynn/2006w.html#16 intersection between autolog command and cmsback (more history)
http://www.garlic.com/~lynn/2006w.html#52 IBM sues maker of Intel-based Mainframe clones
Re: ServerPac Installs and dataset allocations
Ted MacNEIL wrote: I put the heavily hit loadlibs such as SYS1.LINKLIB on one side of the VTOC, and the ISPF libraries on the other side of the VTOC. With todays heavily cached dasd, that probably will buy you very little anymore. Very little. Especially, since it's been over 15 years since IBM stopped recommending placing the VTOC (VTOCIX, VVDS, Catalogue [if there is one]) elsewhere than at the beginning of the pack.

for os/360 releases 11 and 14 system builds, i had carefully reordered stage-2 sysgen to achieve optimal placement ... not only of datasets but also of members within a pds. i had given presentations at share (on the results of both the customized release 11 and 14 system builds) ... that for the university workload, i could achieve nearly three times increased thruput. i had also asked for being able to specify vtoc location ... which showed up in release 15/16 (release 15 slipped and there was a combined release 15/16). one of the problems was applying normal system maintenance ... replacing members in pds libraries like sys1.linklib could detrimentally affect the careful ordering, and over a period of six months, thruput could degrade by a third or more (and might require a new build of critical pds libraries).

reference to an old presentation that i made at the aug68 share meeting in boston (this particular presentation also included some measurements after i had rewritten several critical sections of the cp67 kernel):
http://www.garlic.com/~lynn/94.html#18 CP/67 OS MFT14
http://www.garlic.com/~lynn/94.html#20 CP/67 OS MFT14

however, going into the mid-70s, it was becoming apparent that overall system thruput (processor and memory) was increasing much faster than disk thruput. as a result there was starting to be more and more reliance on mechanisms (like more use of various kinds of caching technology) to compensate for the relative system degradation of disk thruput.
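the effect of the careful placement described above can be shown with a toy seek-distance simulation (my illustration; the cylinder count, access mix, and workload are all invented, not from the post): clustering the high-use data together keeps average arm travel much shorter than spreading the same data across the pack.

```python
# Toy model of disk arm travel under two dataset placements.
# Numbers are invented for illustration only.
import random

N_CYL = 400          # roughly a 3330-class pack
random.seed(1)

def avg_seek(hot_cyls, n_requests=10_000, hot_fraction=0.8):
    """Average cylinders moved when hot_fraction of requests hit
    hot_cyls and the rest are uniform over the whole pack."""
    pos, total = 0, 0
    for _ in range(n_requests):
        if random.random() < hot_fraction:
            target = random.choice(hot_cyls)
        else:
            target = random.randrange(N_CYL)
        total += abs(target - pos)
        pos = target
    return total / n_requests

clustered = avg_seek(list(range(20)))            # hot data packed together
scattered = avg_seek(list(range(0, N_CYL, 20)))  # same amount, spread out
print(f"clustered: {clustered:.1f}  scattered: {scattered:.1f}")
```

the same reasoning is why replacing pds members during maintenance (which relocates them) gradually undoes the careful ordering.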
at one point, i had made the observation that relative system disk thruput had degraded by a factor of ten times over a period of years. this upset some of the people in the disk division ... and the disk division performance group was assigned to refute the observation. after several weeks, they came back and effectively said that i had slightly understated the amount of relative system thruput degradation (i.e. disks were getting faster, but overall systems were also getting faster, much faster than disks were getting faster). in any case, the work by the disk division performance group eventually turned into a share presentation ... not on how slow disks are ... but on how to organize data on disk to improve overall system thruput.

as caching technologies became more and more widely used ... nearly all of the work on careful ordering of highly used disk records (that i had done as an undergraduate in the 60s) was obsoleted, since such high-use records would now be found in the electronic caches.

some number of old posts mentioning gpd finding that i had slightly understated the degree of disk technology relative system thruput degradation over a period of years
http://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
http://www.garlic.com/~lynn/94.html#43 Bloat, elegance, simplicity and other irrelevant concepts
http://www.garlic.com/~lynn/94.html#55 How Do the Old Mainframes Compare to Today's Micros?
http://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
http://www.garlic.com/~lynn/98.html#46 The god old days(???)
http://www.garlic.com/~lynn/99.html#4 IBM S/360
http://www.garlic.com/~lynn/2001d.html#66 Pentium 4 Prefetch engine?
http://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
http://www.garlic.com/~lynn/2001f.html#68 Q: Merced a flop or not?
http://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
http://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)
http://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?
http://www.garlic.com/~lynn/2002.html#5 index searching
http://www.garlic.com/~lynn/2002b.html#11 Microcode? ( index searching)
http://www.garlic.com/~lynn/2002b.html#20 index searching
http://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
http://www.garlic.com/~lynn/2002e.html#9 What are some impressive page rates?
http://www.garlic.com/~lynn/2002i.html#16 AS/400 and MVS - clarification please
http://www.garlic.com/~lynn/2003i.html#33 Fix the shuttle or fly it unmanned
http://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
http://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
http://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
http://www.garlic.com/~lynn/2005k.html#53 Performance and Capacity Planning
http://www.garlic.com/~lynn/2006m.html#32 Old Hashing Routine
http://www.garlic.com/~lynn/2006x.html#13 The Future of CPUs: What's After
Re: Virtual Storage implementation
Mark Post wrote: If you give a Linux guest a 4GB virtual machine, it will have very close to a 4GB working set. If you give that same Linux guest 64GB, it will have very close to a 64GB working set. The fact that you say Linux will use what it needs tells me that you have little or no experience running Linux, either on midrange systems or on the mainframe. Linux will _always_ use everything you give it, if for nothing else than buffers and cache. Hence the constant battle we have with midrange Linux sysadmins, DBAs, etc., regarding this topic. It's also a recurring topic with the midrange performance/capacity folks, since we keep getting concerned phone calls and emails about how we have to add more RAM to a Linux system because it's running out. It hasn't run out, it is just using all the otherwise unused storage for buffers and cache, but that's not the behavior they're used to seeing from AIX, Solaris, HPUX, etc.

past posts in this thread:
http://www.garlic.com/~lynn/2007o.html#41 Virtual Storage implementation
http://www.garlic.com/~lynn/2007o.html#45 Virtual Storage implementation
http://www.garlic.com/~lynn/2007o.html#46 Virtual Storage implementation
http://www.garlic.com/~lynn/2007o.html#47 Virtual Storage implementation
http://www.garlic.com/~lynn/2007o.html#48 Virtual Storage implementation
http://www.garlic.com/~lynn/2007o.html#52 Virtual Storage implementation

unix/linux will use its (virtual) machine storage for running applications and for file caching. if there is enough machine storage ... the application storage is whatever the page requirements are for the collection of running applications (linux kernel code, daemons, etc). if the machine storage is at least as large as the total program execution storage ... then the system may not have to do any paging operations. the remaining (potentially virtual) machine storage will be used for file record caching ... which is analogous to the operation of dbms systems with database record caching.
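the "it hasn't run out" point above can be sketched with the actual /proc/meminfo field names (the numbers below are invented sample values; this is my illustration, not from the post): free memory plus buffers plus page cache is what's effectively available.

```python
# Parse a /proc/meminfo-style sample: most "used" memory is
# reclaimable cache, not application working set.
SAMPLE = """\
MemTotal:       4194304 kB
MemFree:         131072 kB
Buffers:         262144 kB
Cached:         2621440 kB
"""

def parse_meminfo(text):
    info = {}
    for line in text.splitlines():
        key, rest = line.split(":", 1)
        info[key] = int(rest.split()[0])   # values are in kB
    return info

m = parse_meminfo(SAMPLE)
available = m["MemFree"] + m["Buffers"] + m["Cached"]
print(available)   # 3014656 kB effectively available, not just 131072
```

on a real system the same parse can be run against /proc/meminfo itself.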
lots of dbms systems have configuration parameters for the total size of record caching ... attempting to tune them when running in a virtual memory environment. this is similar to the description of the (storage management) changes migrating apl\360 (which assumed the available storage was real) to cms\apl (where there was an enormously larger amount of virtual, paged storage). the implicit assumption was that the available configured storage was real and could be used arbitrarily w/o regard to potential working set size implications and the effect on demand paging.
http://www.garlic.com/~lynn/2007o.html#45 Virtual Storage implementation

one of the other projects at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech
involved application instruction and storage access tracing. this was used in modeling possible page replacement algorithms and working set sizes. eventually some of this was released as the VS/REPACK product, which would do semi-automated program reorganization to optimize virtual storage operation. an early version of what became VS/REPACK was used to assist with migrating apl\360 to the virtual memory environment as cms\apl. various versions of VS/REPACK were also used by other corporate product groups to aid in the migration from the real storage paradigm to the virtual storage paradigm. one such early user of the package was the organization developing and supporting IMS. a side-effect of the package tracing/monitoring ... in addition to helping analyze execution characteristics in a virtual storage environment, it was also used for straight-forward execution hot-spot analysis.

random past posts mentioning vs/repack:
http://www.garlic.com/~lynn/94.html#7 IBM 7090 (360s, 370s, apl, etc)
http://www.garlic.com/~lynn/97.html#20 Why Mainframes?
http://www.garlic.com/~lynn/99.html#7 IBM S/360
http://www.garlic.com/~lynn/99.html#61 Living legends
http://www.garlic.com/~lynn/99.html#68 The Melissa Virus or War on Microsoft?
http://www.garlic.com/~lynn/2000.html#78 Mainframe operating systems
http://www.garlic.com/~lynn/2000d.html#12 4341 was Is a VAX a mainframe?
http://www.garlic.com/~lynn/2000g.html#11 360/370 instruction cycle time
http://www.garlic.com/~lynn/2000g.html#30 Could CDR-coding be on the way back?
http://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
http://www.garlic.com/~lynn/2001c.html#31 database (or b-tree) page sizes
http://www.garlic.com/~lynn/2001c.html#33 database (or b-tree) page sizes
http://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
http://www.garlic.com/~lynn/2001j.html#3 YKYGOW...
http://www.garlic.com/~lynn/2002c.html#28 OS Workloads : Interactive etc
http://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
http://www.garlic.com/~lynn/2002c.html#46 cp/67 addenda (cross-post warning)
http://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
http://www.garlic.com/~lynn/2002d.html#7 IBM
Re: Virtual Storage implementation
[EMAIL PROTECTED] (Ted MacNEIL) writes: Also, sub-systems like DB2 are getting to the point where you should/could/would not like it to page. Sort of throws the concept out the window, doesn't it?

previous post in thread
http://www.garlic.com/~lynn/2007o.html#41 Virtual Storage implementation

one of the things that can happen is if you run a subsystem doing LRU-like activity management in a virtual address space that is also being managed with LRU-like activity management. I first noticed this in the 70s ... when some of the os/360 systems migrated to virtual storage support and were in turn run in a virtual machine. vm370 was managing the virtual address space (of the virtual machine) with an LRU-like algorithm, while the guest operating system was also managing (what it thought to be real storage) with an LRU-like algorithm. if the virtual guest took a page fault ... it would examine its available storage for the least-recently-used page to replace. if the vm370 hypervisor was also paging ... it would also have used the same criteria to remove the least-recently-used page from real storage ... however, that was possibly also going to be the next page that the virtual guest was most likely to start using (when it did its own paging).

dbms subsystems tend to have large buffer storage ... managed in a manner analogous to virtual storage ... i.e. the least recently used buffer is likely to be replaced with the latest requested record. a heavily used dbms subsystem is likely going to use the maximum storage available to it (because it is going to replace its least recently used buffers with the most recently requested records).
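the LRU-under-LRU inversion described above is easy to demonstrate with a toy model (my sketch; page numbers and cache sizes are invented): when guest and hypervisor see the same reference stream and apply the same replacement criterion, the page the guest picks to reuse next is exactly the page the hypervisor just evicted.

```python
# Two stacked LRU caches observing the same reference stream:
# every guest replacement choice is a page the host already evicted.
from collections import OrderedDict

class LRU:
    def __init__(self, frames):
        self.frames, self.d = frames, OrderedDict()
    def touch(self, page):
        """Reference a page; return the evicted page, if any."""
        self.d.pop(page, None)
        self.d[page] = True
        if len(self.d) > self.frames:
            return self.d.popitem(last=False)[0]   # evict true LRU page
    def least_recent(self):
        return next(iter(self.d))

guest, host = LRU(4), LRU(4)
double_faults = 0
for page in [1, 2, 3, 4, 5, 1, 2, 3, 4, 5]:   # cyclic reference string
    host_evicted = host.touch(page)
    if host_evicted is not None and page not in guest.d:
        # the guest faults and replaces ITS least-recently-used page --
        # the very page the host just pushed out of real storage
        if guest.least_recent() == host_evicted:
            double_faults += 1
    guest.touch(page)
print("double faults:", double_faults)   # every guest fault collides
```

with the 4-frame caches and the 5-page cyclic reference string, all six guest faults collide with a host eviction, which is the pathological behavior the post describes.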
one of my statements from the 70s was that running an LRU-like algorithm under another LRU-like algorithm can result in very pathological behavior; the virtualized guest/subsystem can exhibit behavior exactly opposite to the assumptions that are the foundation of LRU implementations (LRU assumes the least recently used page is the least likely to be needed in the near future; a virtualized LRU-like algorithm is most likely to next use its least recently used page).

lots of past posts mentioning virtual storage page replacement and/or page/buffer replacement algorithms
http://www.garlic.com/~lynn/subtopic.html#clock
misc. past posts mentioning the original rdbms sql implementation (originally all done on the vm370 platform)
http://www.garlic.com/~lynn/subtopic.html#systemr
including the tech. transfer from bldg. 28 to endicott for sql/ds. for other topic drift, one of the people in the meeting mentioned in the following posts claimed to have handled a large part of the technology transfer from endicott to bldg. 90 for DB2
http://www.garlic.com/~lynn/95.html#13
http://www.garlic.com/~lynn/96.html#15
the above meeting was related to turning out the ha/cmp product
http://www.garlic.com/~lynn/subtopic.html#hacmp
and other old email related to working on ha/cmp scaleup
http://www.garlic.com/~lynn/lhwemail.html#medusa

another scenario of running a subsystem in a paged virtual address space ... which believed that the virtual address space was real memory ... was when the science center
http://www.garlic.com/~lynn/subtopic.html#545tech
originally did the port of apl\360 to cms for cms\apl. the problem was that apl\360 believed its workspace was resident in real storage and had a storage allocation strategy that would assign a new storage location for every assignment statement ... until it had exhausted the available workspace storage ... at which point it would do garbage collection, collapse all allocated locations into contiguous memory ... and then start all over again.
It wasn't too bad to repeatedly use all of a (real storage) 16kbyte swapped workspace. however, in the cms virtual address space environment, the available workspace could easily be several mbytes (or even nearly all of 16mbytes) ... this under cp67 on a 360/67 with typically 512kbytes to 1mbyte of real storage. very quickly it was realized that the apl\360 storage management and garbage collection implementation had to be significantly reworked to move it to a virtual memory environment.

Turns out one of the early major uses of cms\apl on the cambridge cp67 machine was the business planning people in armonk. prior to cms\apl, apl\360 with only 16kbyte-32kbyte workspace sizes didn't provide much room for working on any real world problems. significantly opening up the apl workspace size with cms\apl allowed for work on some real world problems. the business planning people loaded the most sensitive corporate information ... detailed customer information ... on the cambridge machine and ran sophisticated business modeling applications implemented in apl. for other drift, this represented some interesting security issues ... since the cambridge system was also being used by numerous students and others from colleges and universities in the cambridge area. recent post on that particular topic drift:
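the apl\360 allocation behavior described above can be sketched as follows (my reconstruction of the strategy, with invented cell and workspace sizes): in a small real workspace the allocate-until-full-then-compact cycle keeps touching the same few pages, but in a large virtual workspace it sweeps through every page before each garbage collection.

```python
# Allocate-per-assignment, compact-when-full: count distinct 4K pages
# referenced for a small (real) vs large (virtual) workspace.
PAGE = 4096

def pages_touched(workspace_bytes, cell_bytes=8, assignments=600_000):
    touched, cursor = set(), 0
    for _ in range(assignments):
        if cursor + cell_bytes > workspace_bytes:
            cursor = 0                  # "garbage collect": compact to bottom
        touched.add(cursor // PAGE)     # each assignment writes a fresh cell
        cursor += cell_bytes
    return len(touched)

print(pages_touched(16 * 1024))         # 16KB workspace: 4 pages
print(pages_touched(4 * 1024 * 1024))   # 4MB workspace: all 1024 pages
```

under demand paging, sweeping an entire multi-megabyte workspace between collections is what made the original real-storage strategy untenable.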
Re: Virtual Storage implementation
[EMAIL PROTECTED] (Mark Post) writes: Oh, and if you create a 64GB z/VM guest, shame on you. As someone who is very heavy into z/VM performance once told me, z/VM is very good at managing large numbers of small things. It's not so good at managing a smaller number of very large things. I tend to agree. The z/VM scheduler isn't too happy about guests with large working sets.

the issue may not be so much a scheduling problem and/or specifically a large working set problem ... as somewhat mentioned in this post:
http://www.garlic.com/~lynn/2007o.html#45 Virtual Storage implementation

there is an implicit assumption in paged virtual memory and working sets with regard to least-recently-used page replacement algorithms ... which assumes that the page/buffer that has been least recently used in the past is likely to be least recently used in the future. however, virtual guests and various subsystems (which manage storage with their own least-recently-used algorithm) are likely to exhibit just the opposite behavior ... the page/buffer that has been least recently used in the past is the page/buffer that the virtual guest/subsystem is going to select for replacement and start using. having a multi-level least-recently-used replacement strategy can exhibit pathological behavior where the lower level management has removed exactly the page/buffer that the higher level management is most likely to select to start using (the hypervisor closest to the hardware is the lowest level).

lots of past posts mentioning page replacement algorithms and virtual storage management
http://www.garlic.com/~lynn/subtopic.html#wsclock
Re: Virtual Storage implementation
[EMAIL PROTECTED] (Thompson, Steve) writes: VSE, as I recall, was told that it had 32MB (or something similar) and VM then took care of the paging (because VSE didn't page in that case) -- must understand the memory system used by VSE (similar to VS1).

recent posts in this thread:
http://www.garlic.com/~lynn/2007o.html#41 Virtual Storage implementation
http://www.garlic.com/~lynn/2007o.html#45 Virtual Storage implementation
http://www.garlic.com/~lynn/2007o.html#46 Virtual Storage implementation

the handshaking had earlier been implemented in cp67 with mvt by one of the university cp67 installations. a similar implementation was done for vm370, somewhat in conjunction with the ecps microcode assist ... for a little topic drift, a recent post in comp.arch mentioning ecps (as well as sie)
http://www.garlic.com/~lynn/2007o.html#42 mainframe performance, was Is a RISC chip more expensive?

vs1 typically ran with something like a 4mbyte virtual address space ... somewhat akin to the initial move of mvt to os/vs2 svs, which ran with a 16mbyte virtual address space. for vs1 handshaking, vm370 gave the vs1 guest a 4mbyte virtual machine. vs1 then mapped its 4mbyte virtual address space one-for-one onto the 4mbyte virtual machine address space (at first glance, vs1 had a 4mbyte virtual address space on a 4mbyte machine, so it would never take any guest-level page faults). all the page faults would happen at the vm370 level, which would then schedule a pseudo page-fault interrupt for the vs1 guest while it performed the page replacement operation. this allowed vs1 to switch to a different task/application ... so that the whole virtual machine execution wouldn't be blocked just waiting on page fault processing for a specific task/application. when the page fetch had completed, vm370 would post a pseudo page-fetch-completion interrupt to the vs1 guest ... so that it might choose to re-enable the faulted task for execution.
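the handshaking above can be sketched as a toy event model (the names and the two-task workload are mine, not the actual vm370/vs1 interfaces): the pseudo page-fault blocks only the faulting task, so the guest dispatches other ready work until the completion signal arrives.

```python
# Toy model of pseudo page-fault handshaking: block one task,
# not the whole virtual machine.
from collections import deque

class Guest:
    def __init__(self, tasks):
        self.ready = deque(tasks)
        self.blocked = {}                  # page -> task awaiting page-in

    def pseudo_page_fault(self, task, page):
        self.blocked[page] = task          # hypervisor: page missing

    def page_fetch_complete(self, page):
        self.ready.append(self.blocked.pop(page))  # hypervisor: page is in

    def dispatch(self):
        return self.ready.popleft() if self.ready else None

g = Guest(["payroll", "compile"])
t = g.dispatch()                   # "payroll" runs first...
g.pseudo_page_fault(t, page=42)    # ...and faults on page 42
t = g.dispatch()                   # cpu stays busy: "compile" runs
g.page_fetch_complete(42)          # page-in done; "payroll" ready again
print(t, list(g.ready))            # compile ['payroll']
```

without the handshake, the fault would have idled the entire virtual machine until the hypervisor finished the page-in.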
the assumption was that the virtual machine guest is multitasking lots of different workload and is capable of doing a task switch and continuing execution when a specific task has a missing page. I had highly optimized both the native vm370 page processing pathlength as well as the selection of code paths to be moved to microcode as part of the ECPS effort. as a result, it was actually possible for VS1 to have higher thruput under vm370 than running stand-alone on the same hardware (w/o vm370; my pathlength for page processing was significantly better than VS1's ... as was my page replacement implementation).
Re: Virtual Storage implementation
[EMAIL PROTECTED] (Mark Post) writes: That's still probably too much, if only by a little. The idea is to force Linux to use as little storage as possible for buffers and cache, and page out any programs, etc., that haven't been used very recently. Letting z/VM handle this via expanded storage, and paging some things out to real disk turns out to work very well in a shared environment. Other techniques, such as having the kernel in a Named Saved Segment, and executable userspace code in a DCSS using the eXecute In Place file system helps even more.

previous posts in this thread:
http://www.garlic.com/~lynn/2007o.html#41 Virtual Storage implementation
http://www.garlic.com/~lynn/2007o.html#45 Virtual Storage implementation
http://www.garlic.com/~lynn/2007o.html#46 Virtual Storage implementation
http://www.garlic.com/~lynn/2007o.html#47 Virtual Storage implementation

for more archeological topic drift with regard to DCSS ... i had originally started what i called virtual memory management on the cp67 platform at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech
this included mapping the cms filesystem to a paged-mapped infrastructure
http://www.garlic.com/~lynn/subtopic.html#mmap
which included a lot of fancy options for moving pages to/from virtual address spaces and disk storage. i then ported this to the vm370/cms environment with a lot of options for sharing of segments. some of this was used in some of the original relational/sql dbms work ... all done on the vm370 platform
http://www.garlic.com/~lynn/subtopic.html#systemr

in the early 70s, there was a project called future system
http://www.garlic.com/~lynn/subtopic.html#futuresys
which was going to replace 360/370 with a radically different machine architecture. this effort absorbed significant corporate resources, and when it was finally canceled (w/o even being announced) there was significant scrambling to get all sorts of items back into the 370 hardware and software product pipeline.
The resulting mad scramble opened an opportunity to get a lot of work ... that had continued at the science center http://www.garlic.com/~lynn/subtopic.html#545tech on 370s into the vm370 product ... including my resource manager http://www.garlic.com/~lynn/subtopic.html#fairshare which included a large amount of other work not strictly related to resource management, things like lots of kernel reorganization for multiprocessor support http://www.garlic.com/~lynn/subtopic.html#smp part of that opportunity resulted in releasing an extremely small subset of the virtual memory management work as DCSS (the generalized paged-mapped infrastructure was not included). for additional topic drift ... some discussions of various problems trying to reconcile generalized virtual memory management features with the os/360 address constant convention http://www.garlic.com/~lynn/subtopic.html#adcon -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Virtual Storage implementation
[EMAIL PROTECTED] (Rugen, Len) writes: Virtual storage isn't exclusive to MVS - z/OS. One of the best presentations I recall was in a VM Performance and Tuning class. Together with storage protection keys, page tables can be built to allow different users to have various parts of private, shared for read and shared for update storage. (At least update if you're friendly with the think king and he lets you in key 0). the 360/67 was a machine that came with virtual memory as standard ... it could be viewed somewhat as a 360/65 with a DAT box bolted on to the side ... although the 360/67 multiprocessor was significantly more sophisticated than the 360/65 multiprocessor (for instance, all 360/67 processors in a multiprocessor complex could address all channels ... which wasn't true of 360/65 multiprocessors) ... some multiprocessor digression from a recent thread http://www.garlic.com/~lynn/2007o.html#37 Each CPU usage some of the people at the science center http://www.garlic.com/~lynn/subtopic.html#545tech were worried about some of the virtual memory issues ... there were historical comments that atlas virtual memory never worked well ... misc. past posts mentioning atlas http://www.garlic.com/~lynn/2000f.html#78 TSS ancient history, was X86 ultimate CISC? designs) http://www.garlic.com/~lynn/2001h.html#10 VM: checking some myths. http://www.garlic.com/~lynn/2001h.html#26 TECO Critique http://www.garlic.com/~lynn/2003b.html#1 Disk drives as commodities. Was Re: Yamhill http://www.garlic.com/~lynn/2003m.html#34 SR 15,15 was: IEFBR14 Problems http://www.garlic.com/~lynn/2005o.html#4 Robert Creasy, RIP http://www.garlic.com/~lynn/2006i.html#30 virtual memory http://www.garlic.com/~lynn/2007e.html#1 Designing database tables for performance?
http://www.garlic.com/~lynn/2007g.html#36 Wylbur and Paging also a reference to a cp67 and vm370 historical paper http://www.princeton.edu/~melinda/ anyway, as a result of the concerns about virtual memory, cambridge modified a 360/40 with custom virtual memory hardware ... prior to 360/67 availability. the cp/40 virtual machine system was built for the custom 360/40 ... and was remapped to cp/67 when 360/67 machines became available. virtual memory hardware support was eventually going to be made available on 370s ... although with only 24-bit virtual memory addressing ... the 360/67 had support for both 24-bit and 32-bit virtual memory addressing. originally 370 virtual memory was going to have a lot more features. the translation of the cp67 virtual machine system to the vm370 virtual machine system was going to include use of some of these additional features. one specific feature that was going to be used was virtual memory shared segments, allowing the same shared segment to appear in multiple different virtual address spaces ... and be read/only protected. retrofitting 370 virtual memory hardware support to the 370/165 ran into some schedule delays ... in order to make up those delays, lots of 370 virtual memory features were dropped ... and the other machines that had implemented the full 370 virtual memory support had to remove the additional features. in escalation meetings about the trade-off of delaying 370 virtual memory availability by six more months (because of hardware implementation issues for the 370/165) or shipping a subset six months earlier ... the favorite son operating system took the position that they didn't need any of the additional features. this had a fairly big impact on the vm370 implementation, which included coming up with an emulation of shared segment protection using a kludge involving storage keys.
the initial translation of os/360 MVT to a virtual memory environment (for os/vs2 svs) involved creating a single 16mbyte virtual address space and hacking simple paging support into the side of MVT. Then CCWTRANS (from cp67) was integrated into the MVT kernel to provide for channel program translation (i.e. handle all the stuff of taking the application-space channel program that had been created with virtual addresses ... creating a copy substituting real addresses, pinning the associated pages, and all the rest of the gorp). misc. past posts mentioning the 370/165 virtual memory implementation schedule problems and impact on the vm370 implementation http://www.garlic.com/~lynn/2000.html#59 Multithreading underlies new development paradigm http://www.garlic.com/~lynn/2003d.html#53 Reviving Multics http://www.garlic.com/~lynn/2003f.html#14 Alpha performance, why? http://www.garlic.com/~lynn/2004p.html#8 vm/370 smp support and shared segment protection hack http://www.garlic.com/~lynn/2004p.html#9 vm/370 smp support and shared segment protection hack http://www.garlic.com/~lynn/2004p.html#10 vm/370 smp support and shared segment protection hack http://www.garlic.com/~lynn/2004p.html#14 vm/370 smp support and shared segment protection hack http://www.garlic.com/~lynn/2005.html#3 [Lit.] Buffer overruns
Re: Each CPU usage
[EMAIL PROTECTED] (Thompson, Steve) writes: Imagine, you have a 3081 at 100% and you upgraded to a 3084 (basically you added the other 3081) and you are still at 100%. Or you have a 3033 and you went to a 470/V8. [I'm not saying these were the systems, just using them as examples.] the 3081 was a two-processor machine ... and the 3084 was a pair of 3081s (four processors). the basic 370/3033/308x multiprocessor cache coherency started out slowing the processor speed by 10% to allow for cross-cache chatter (i.e. raw two-processor thruput was 1.8 times raw single-processor thruput). this is independent of any actual cache invalidates that were occurring (i.e. just providing for basic cross-cache communication). the 3084 was even worse since each processor cache had to listen for x-cache chatter from three other processors (rather than just one other processor). 308x wasn't even going to have a single-processor version ... however, eventually a single processor, the 3083, did ship. This was primarily motivated by ACP/TPF, which didn't have multiprocessor support at the time (the base 3083 processor was almost 15 percent faster than one of the 3081 processors since the multiprocessor x-cache chatter slowdown was eliminated). in the 3081 time-frame ... both vm and mvs had kernel storage re-orgs to carefully align storage on cache-line boundaries (and multiples of cache lines). this was to eliminate a lot of cache-line thrashing where two different storage locations overlapped in the same cache line (and different processors could be simultaneously operating on the two storage locations). This kernel storage re-org was claimed to improve system thruput by something over five percent. the other example was a major restructuring of the vm multiprocessor support between r6 and sp1. the issue was that since acp/tpf didn't have multiprocessor support, there was a lot of acp/tpf running under vm370 on 3081s.
for dedicated acp/tpf, 3081 operation meant either running two copies of acp/tpf (in two different virtual machines) or having one of the processors sit idle most of the time. for the latter case, the multiprocessor restructuring attempted to get (some amount of) virtual machine kernel processing running on the idle processor (overlapped with acp/tpf execution on the other processor). This involved introducing a lot of signal processor instructions to wake the possibly idle processor to get busy on some execution and then return to executing the (acp/tpf) virtual machine (the specific scenario was overlapping siof instruction emulation and channel program translation with the acp/tpf virtual machine execution). the standard virtual machine multiprocessor support was designed for efficiently handling lots of totally independent operations. the sp1 reorganization (for acp/tpf overlapped execution) was generic for all possible execution environments ... and introduced quite a bit of overhead (in the acp/tpf scenario it was justified on the basis that it improved overall thruput ... since there was an otherwise idle processor). a lot of existing customers moving from r6 multiprocessor support to sp1 multiprocessor support found a significant increase in multiprocessor overhead ... a combination of the significant increase in signal processor instructions, the corresponding interrupts, and a lot of new spin-lock activity (just the new spin-locks measured as much as ten percent of each processor). spin-locks were typically used to provide exclusive execution for lots of kernel code. global kernel spin-locks were typical of lots of 60s, 70s and even 80s operating systems (i.e. a single kernel lock that the kernel would attempt to obtain at entry into kernel mode ... interrupt routines, etc ... and spin/loop until it obtained the lock).
at the science center, http://www.garlic.com/~lynn/subtopic.html#545tech charlie was working on fine-grain multiprocessing kernel locks (lots of short execution paths rather than the whole kernel) for cp67 when he invented the compare-and-swap instruction (the CAS mnemonic was chosen because they are charlie's initials ... the compare-and-swap designation had to be invented to have something that matched CAS). the attempt to get CAS added to the 370 architecture was initially rebuffed ... the favorite son operating system considered the test-and-set locking instruction (used for os/360 multiprocessor kernel spin-locks) more than sufficient for 370 multiprocessing support. the challenge was to come up with a non-multiprocessor use for the compare-and-swap instruction ... in order to get it included in the 370 architecture. lots of past posts mentioning multiprocessors and/or the compare-and-swap instruction http://www.garlic.com/~lynn/subtopic.html#smp this is where the use by a lot of multithreaded application software (regardless of whether running on multiprocessor hardware) was invented ... as well as the programming notes that now appear in the appendix of the principles of operation ... i.e. A.6 Multiprogramming and Multiprocessing Examples
Outsourcing losing steam?
On Aug 15, 11:11 am, [EMAIL PROTECTED] (daver++) wrote: Around 1:30 p.m., CBP experienced problems accessing its database containing information on international travelers. Assuming this to be a wide-area network problem, CBP called Sprint, its carrier, to test the lines. After three fruitless hours of remote testing, Sprint finally sent technicians on-site. Another three hours passed before Sprint finally concluded that transmission lines were not the problem, meaning the problem was inside the CBP local network. After more hours of troubleshooting, the issue was finally resolved at 11:45 p.m. The real culprit: a failed router. http://blogs.zdnet.com/projectfailures/?p=346 20,000 stranded because it took over ten hours to diagnose and replace a failed router. I used to be a mainframe guy that inherited the network side, so they cut me some slack. BUT- I can guarantee that there wasn't anywhere near enough slack for me to get off with taking that long to replace a router. I would have been tarred, feathered and run out of town. It seems like basic due diligence wasn't even followed. Yes, Sprint added to the problem, but Sprint never should have been called. Why call Sprint before determining that the problem isn't on _your_ end? It is all a bit silly, and it frightens me a bit that our airlines have this level of quality. note that inadequate diagnostic processes in packet networks contribute significantly to the difficulty of diagnosing such problems. some of the older protocols were much more circuit oriented ... and could rely much more on telco circuit diagnostics to identify problems. we experienced this when we were building reliable network-based infrastructures in the 80s ... and attempting to do some work on the NSFNET infrastructure misc. collected old emails http://www.garlic.com/~lynn/lhwemail.html#nsfnet and to some extent met with quite a bit of corporate resistance ...
somewhat highlighted in this old email: http://www.garlic.com/~lynn/2006w.html#email870109 in this post http://www.garlic.com/~lynn/2006w.html#21 note that while tcp/ip is the technology basis for the modern internet, nsfnet was the operational basis (interconnections of networks, i.e. internetworking), and cix was the business basis. in the above reference there is somebody in the corporation proposing that sna could be the basis for nsfnet ... the main issue was the ability to provide internetworking ... interconnection of a large number of different networks. we later investigated several of the issues in more detail when we were doing the ha/cmp product http://www.garlic.com/~lynn/subtopic.html#hacmp which required a detailed threat and vulnerability study for high availability environments. we later got to use some of that experience when we were called in to consult with a small client/server company that wanted to do payments on their server http://www.garlic.com/~lynn/subnetwork.html#gateway they had this technology called SSL and the effort is now frequently referred to as electronic commerce. the initial simple, obvious solution was to move the payment transaction message formats from their existing circuit-based environment to a packet-based environment. however, that totally ignored much of the availability, diagnostic, and recovery processes that were available in the circuit-based environment. We eventually developed a set of compensating processes and procedures attempting to make the availability of the packet-based environment somewhat comparable to the existing circuit-based environment. for a little topic drift ...
recent comment on availability, diagnosis and recovery in one of the ATC modernization efforts http://www.garlic.com/~lynn/2007o.html#18 misc past posts on estimates of 4-10 times the effort to take a well-written application and turn it into an industrial strength service (in the case of the payment gateway, it was closer to ten times, including inventing various diagnostic and recovery processes to compensate for moving the payment gateway to a packet-based environment) http://www.garlic.com/~lynn/2001f.html#75 Test and Set (TS) vs Compare and Swap (CS) http://www.garlic.com/~lynn/2001n.html#91 Buffer overflow http://www.garlic.com/~lynn/2001n.html#93 Buffer overflow http://www.garlic.com/~lynn/2003g.html#62 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM http://www.garlic.com/~lynn/2003j.html#15 A Dark Day http://www.garlic.com/~lynn/2003p.html#37 The BASIC Variations http://www.garlic.com/~lynn/2004b.html#8 Mars Rover Not Responding http://www.garlic.com/~lynn/2004b.html#48 Automating secure transactions http://www.garlic.com/~lynn/2004k.html#20 Vintage computers are better than modern crap ! http://www.garlic.com/~lynn/2004l.html#49 Perfect or Provable security both crypto and non-crypto? http://www.garlic.com/~lynn/2004m.html#51 stop worrying about it offshoring - it's doing fine http://www.garlic.com/~lynn/2004p.html#23 Systems software versus applications software definitions
Re: some questions about System z PR/SM.
[EMAIL PROTECTED] (R.S.) writes: No. PR/SM is microcode - code under the OS. Sometimes called firmware. z/OS runs in an LPAR. Although it can obtain e.g. the LPAR name, it is still unaware of PR/SM and LPAR features. z/OS works in a virtual machine (Logical PARtition) and knows only that machine. However a z/OS application called HCD allows you to define the hardware configuration - a set of I/O definitions (manual Plug and Play) as well as the division of the CPC into LPARs. However the resulting file is simply transmitted to the Support Element (a notebook inside the CPC) and is interpreted by PR/SM. Just to complement: Another part of the file is also read by z/OS during the IPL process (however this file is read from regular DASD, not the SE). The prepared LPAR can be further customized on the SE and can be used for Linux. originally pr/sm was done on the 3090 ... somewhat in response to amdahl's hypervisor. it is basically a subset of virtual machine capability moved into microcode. in the amdahl scenario ... amdahl had added a variation called macrocode ... a 370 instruction variation that sat part way between the real microcode and standard 370 machine instructions. it significantly simplified migrating virtual machine 370 code into the native machine. by comparison, 3090 pr/sm was a much more difficult task since it involved implementation directly in the 3090 microcode. however, much of pr/sm actually leveraged the SIE instruction, which was used by the virtual machine operating system to implement virtual machine mode. pr/sm evolved into supporting multiple concurrent hypervisors as LPARs for some topic drift ... some old email discussing amdahl hypervisor and macrocode http://www.garlic.com/~lynn/2006b.html#email810318 in this post http://www.garlic.com/~lynn/2006b.html#38 blast from the past ...
macrocode some old email somewhat comparing 3081 sie and 3090 sie http://www.garlic.com/~lynn/2006j.html#email810630 in this post http://www.garlic.com/~lynn/2006j.html#27 virtual memory the above posts also include numerous other references to sie, pr/sm, lpars, etc -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: IBM obsoleting mainframe hardware
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. [EMAIL PROTECTED] (Chris Mason) writes: One of the presentations was someone from a big UK bank who defended IBM having made the 155 and 165 available and relatively shortly afterwards having announced the 158 and 168 - together with the relatively expensive DAT box extension to the 155 and 165. I hope I'm remembering the details about right. I heard about this only second-hand but I believe the argument was that IBM was right to offer the enhanced performance of the 155 and 165 as soon as it could in spite of the fact that it knew that the virtual storage models were well advanced in development. I guess there was a shadow of the it's illegal to preannounce principle hanging over this. re: http://www.garlic.com/~lynn/2007n.html#31 IBM obsoleting mainframe hardware http://www.garlic.com/~lynn/2007n.html#34 IBM obsoleting mainframe hardware 370/165 ... announce jun70 http://www-03.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3165.html 370/168 ... announce aug72 http://www-03.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3168.html for virtual memory ... hacking virtual memory support into MVT (for VS2/SVS) was needed in addition to the virtual memory hardware retrofitted to the 165s (there were significant software as well as hardware schedule issues). this is similar to previous comments about the *crash* program to try and get out 370-xa (after the FS project was killed) and POK, in 1976, convincing the corporation to shut down the vm370 product and transfer all the developers to POK as part of being able to make the mvs/xa (software) schedule (although Endicott was eventually able to save part of the vm370 product mission). i've mentioned before the (370 virtual memory) prototype work that went on in pok, using 360/67s and hacking single-address-space virtual memory into the side of MVT ...
as well as cobbling cp67's (ccw translation) CCWTRANS into MVT ... i.e. cp67 had started out having to build shadow channel programs with real addresses ... for the virtual machine's channel programs; all the (MVT) channel programs passed via EXCP ... would be equivalent virtual-address channel programs ... requiring similar translation (and misc. other things like page locking/pinning) recent posts about using CP67's CCWTRANS as part of turning MVT into os/vs2 svs http://www.garlic.com/~lynn/2007f.html#6 IBM S/360 series operating systems history http://www.garlic.com/~lynn/2007f.html#33 Historical curiosity question The other part ... was that there was a lot of work to retrofit virtual memory to the 165 ... so much so that they ran into schedule problems. In order to buy back six months in the 165 virtual memory schedule, there was an escalation dropping several features from the original 370 virtual memory architecture. Once the 165 engineers had won that battle, all the other processors (that had already completed their virtual memory implementations) had to go back and remove the dropped features. recent posts mentioning 165-ii schedule issues and the impact of dropping features from the original 370 virtual memory architecture http://www.garlic.com/~lynn/2007f.html#7 IBM S/360 series operating systems history http://www.garlic.com/~lynn/2007f.html#16 more shared segment archeology http://www.garlic.com/~lynn/2007j.html#43 z/VM usability http://www.garlic.com/~lynn/2007k.html#28 IBM 360 Model 20 Questions -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: IBM obsoleting mainframe hardware
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. [EMAIL PROTECTED] (Rick Fochtman) writes: If the business needs are being satisfied, with reasonable economy, who cares whether the box is the latest and greatest? Future business needs may or may not dictate upgrades. YMMV a little search engine mainframe surfing for vm/4341 turned up this story about a vm/4341 keeping the nyse running well thru the 80s ... apparently with an old mvt system that had been moved from 360/50s http://www.raylsaunders.com/asmwork.html that i mentioned in this recent post: http://www.garlic.com/~lynn/2007n.html#26 VM system kept NYSE running a quick check just this moment turns up some problem with the URL ... but (as always) the wayback machine knows http://web.archive.org/web/20060220161415/http://raylsaunders.com/asmwork.html for other topic drift ... we spent some amount of time in the early 90s talking to SIAC about using ha/cmp for much of the work that the tandems were doing (see mainframe MDS-II being replaced with tandem MDS-IIIs in the above reference) ... lots of ha/cmp references: http://www.garlic.com/~lynn/subtopic.html#hacmp this was in the period that we were also working on ha/cmp and trying to cram as much computing into a dense footprint, old email references: http://www.garlic.com/~lynn/lhwemail.html#medusa I had actually attempted to do something similar nearly a decade earlier, trying to cram as many 370 chipsets (each having about 168-3 thruput) as possible into racks. the old 8-10 yr cycle for mainframe generations (and obsolescence) really showed up when the early 70s FS project was killed http://www.garlic.com/~lynn/subtopic.html#futuresys since it was going to be something completely different, much of the work on 370-related stuff pretty much went away. after FS was killed, there was a scramble to get stuff back into the 370 product pipeline.
370-xa/3081 was going to take eight yrs (early 80s) ... so they had to find something else that could be done in possibly half that time. the resulting 303x was quite a bit of warmed-over 370. they took the integrated channel microcode from the 158 and made it a stand-alone box called the channel director. a 158 paired with a channel director became the 3031 (with the integrated channel microcode running on a different processor). the 168 became the 3032, repackaged to work with the channel director. the 3033 started out as the 168 wiring diagram implemented in faster chip technology. a straight-forward mapping would have been just 20 percent faster than the 168 ... other tweaks done during development got the 3033 up to 1.5 times the 168. part of the issue was that up to the 80s, lots of technology was on a 7-10yr cycle ... in the 80s, the rate of change started to accelerate, for a time leaving some mainframe technology in the dust. note that it wasn't just mainframes. circa 1990, the US automobile (C4) task force looked at cutting the us automobile product cycle in half, from 7-8yrs (in an attempt to get on a level playing field with some of the imports). it was interesting to watch what the mainframe people were saying in the meetings (since they were effectively in the same boat). one of the things the automobile industry had been doing was running parallel new-product projects offset by four yrs (so it appeared that something new was coming out every four yrs). the analogy for mainframes ... was that as soon as the 3033 was out the door, they started on the 3090 (8yr cycle overlapping the 3081, with a 4yr offset). in a fairly stable industry this worked, since consumer tastes weren't significantly changing. However, the 8yr lag could become significant if there was any significant change in what the market place was looking for (giving vendors with much shorter product cycles a competitive edge). some recent references to the C4 effort circa 1990 ...
attempting to improve competitive footing vis-a-vis several imports: http://www.garlic.com/~lynn/2007f.html#50 The Perfect Computer - 36 bits? http://www.garlic.com/~lynn/2007g.html#29 The Perfect Computer - 36 bits? http://www.garlic.com/~lynn/2007g.html#34 U.S. Cedes Top Spot in Global IT Competitiveness http://www.garlic.com/~lynn/2007g.html#52 U.S. Cedes Top Spot in Global IT Competitiveness http://www.garlic.com/~lynn/2007i.html#13 U.S. Cedes Top Spot in Global IT Competitiveness http://www.garlic.com/~lynn/2007j.html#33 IBM Unionization -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. [EMAIL PROTECTED] (Phil Smith III) writes: Re VAX vs. IBM: I was a central, low level member of the 4300 series. I also led the engineering side of the fight against the VAX. We never approached the installed base of the VAX machines. Never. approach the size of the install base in number of customers, or number of machines, or competitive marketing approaching the customers that bought vaxes? past post giving a decade of vax install numbers sliced and diced by model, yr, domestic, non-domestic, etc: http://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction both 43xx and vaxes saw huge uptake in the early 80s with the growth of the departmental market ... which was starting to move to workstations and PCs by the mid-80s. as above, the big volumes for VAXes in the mid-80s were from micro-vax ... not the traditional 780 machines. lots of vax sales were customer orders for one or a very few machines. vaxes had an advantage here since their installation and support required a lot less effort (something that 43xx was constantly fighting ... there were even some SHARE reports highlighting the resource requirement differences in competitive environments). however, there were some number of large customers that ordered 43xx boxes in large lots (hundreds, even large hundreds). the support-resource competitive advantage (in small shops) was mitigated when amortized across a large number of boxes. old email about a specific customer ordering in hundreds (the customer initially thought 20, but the order was finally for 210): http://www.garlic.com/~lynn/2001m.html#email790404 in this post also discussing other departmental computing issues from the period http://www.garlic.com/~lynn/2001m.html#15 departmental server lots of old email discussing various aspects of 43xx ...
use for clustering and/or distributed, departmental computing http://www.garlic.com/~lynn/lhwemail.html#43xx -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. [EMAIL PROTECTED] (Phil Smith III) writes: Re RISC vs. 68K: Anyone who thinks the RISC chips killed the 68K is off base. They just need to check the dates. Intel killed the 68K. Motorola allied with IBM on RISC only after Intel had destroyed Motorola's market for the 68K. 801 was originally targeted (very) low-end ... the ROMP chip was targeted to be used in a displaywriter follow-on ... when that project was killed, the group looked around for something to save the effort ... and hit on the unix workstation market (with the displaywriter follow-on morphing into a unix workstation). lots of the unix workstation market place is very numerically intensive and power hungry ... somewhat as a result ... the followon to ROMP for that market was the large, power-hungry RIOS chipset (i.e. POWER, announced in the RS/6000). The paperweight on my desk (from the original) has six chips, and says 150 million OPS, 60 million FLOPS, and 7 million transistors. somerset was a combined ibm, motorola, apple project to do a single-chip, 801 PC-level implementation ... the executive we reported to when we were doing ha/cmp http://www.garlic.com/~lynn/subtopic.html#hacmp went over to head up somerset. part of somerset included infusing power/pc with some of motorola's 88k (risc) technology. ROMP and RIOS were single-processor implementations with no provision for multiprocessor cache consistency. power/pc was going to be able to support cache consistency and multiprocessor operation. lots of past 801 posts http://www.garlic.com/~lynn/subtopic.html#801 the 68k was still hanging in there in the 89/90 time-frame ...
a couple posts with some old references from the period (raw chip volumes, business analysis, etc) http://www.garlic.com/~lynn/2005q.html#35 Intel strickes back with a parallel x86 design http://www.garlic.com/~lynn/2005q.html#44 Intel strickes back with a parallel x86 design -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. re: http://www.garlic.com/~lynn/2007n.html#18 The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM the place that 43xx had the most difficult competition against vax/vms was in the single (at a time) departmental servers (as some of the SHARE studies highlighted). the cost of mid-range computers had dropped below a threshold that made them very cost-effective in departmental settings ... however, scarce people skills and costs then started to dominate as a market inhibitor. 43xx did do very well in large-quantity departmental server orders (especially with distributed, networked operation) ... where people support skills/costs could be amortized across a large number of machines. clusters of 43xx also started to impact the 3033. at one point (traditional internal politics), the head of pok manipulated east fishkill into cutting in half the allocation of a critical component needed for 43xx manufacturing. later the same person gave a talk to a large public audience and made a statement that something like 11,000 vax/vms orders should have been 43xx ... also referenced in this old post http://www.garlic.com/~lynn/2001m.html#15 departmental servers and old email mentioning various 43xx issues ... including moving workload off 3033 boxes onto 4341 clusters ... and large distributed departmental server operations. http://www.garlic.com/~lynn/lhwemail.html#43xx -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. [EMAIL PROTECTED] (Eugene Miya) writes: No, the most difficult competition was and is against the IBM PC. If it did so well, we'd see more evidence of it being around. They are not even museum pieces. re: http://www.garlic.com/~lynn/2007n.html#20 The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM you didn't read the zillion previous posts mentioning that the mid-range market for both vax/vms and 43xx volumes in the departmental server market started to move to workstations and larger PCs in the mid-80s. the above referenced post ... mentions the previous post in the thread ... which made the same point one more time (and then later the workstations started to also lose out to PCs). http://www.garlic.com/~lynn/2007n.html#18 The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM for instance, the 4361/4381, which were expected to see similar large volume sales as the 4331/4341 ... it never happened. similar numbers can be seen for vax/vms ... where vax did do some volume in the mid-80s with micro-vax ... also readily seen in the repeated references to a decade of vax/vms numbers, sliced and diced by model, yr, domestic, world-wide, etc http://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction ... the 4331s/4341s and other mid-market players in the departmental server market had very few PCs to compete with (late 70s and early 80s) ... it wasn't until you get to the follow-on machines; the 4361s/4381s (and later vax) that you start to see the workstation/PC effect in the departmental server market. one of the contributions to the PCs in the departmental server market was a project called DataHub which was being done by the san jose disk division.
Part of the software implementation was being done under work-for-hire subcontract by a group in Provo (one of the people from San Jose commuted to Provo nearly every week). At some point, the company decided to kill the DataHub project and allowed the Provo group to retain rights to everything that they had done under the work-for-hire contract. Not long after, there was a company out of Provo with a PC server offering. misc. past posts mentioning the DataHub project: http://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party http://www.garlic.com/~lynn/2000g.html#40 No more innovation? Get serious http://www.garlic.com/~lynn/2002f.html#19 When will IBM buy Sun? http://www.garlic.com/~lynn/2002g.html#79 Coulda, Woulda, Shoudda moments? http://www.garlic.com/~lynn/2002o.html#33 Over-the-shoulder effect http://www.garlic.com/~lynn/2003e.html#26 MP cost effectiveness http://www.garlic.com/~lynn/2003f.html#13 Alpha performance, why? http://www.garlic.com/~lynn/2004f.html#16 Infiniband - practicalities for small clusters http://www.garlic.com/~lynn/2005p.html#23 What ever happened to Tandem and NonStop OS ? http://www.garlic.com/~lynn/2005q.html#9 What ever happened to Tandem and NonStop OS ? http://www.garlic.com/~lynn/2005q.html#36 Intel strikes back with a parallel x86 design http://www.garlic.com/~lynn/2006l.html#39 Token-ring vs Ethernet - 10 years later http://www.garlic.com/~lynn/2006y.html#31 The Elements of Programming Style http://www.garlic.com/~lynn/2007f.html#17 Is computer history taught now? http://www.garlic.com/~lynn/2007j.html#49 How difficult would it be for a SYSPROG ? in the meantime, the communication division had seen a huge install base of communication controllers grow based on terminal emulation http://www.garlic.com/~lynn/subnetwork.html#emulation which was starting to break away into various kinds of client/server ... they came up with SAA ...
somewhat positioned at helping preserve their communication controller market (and as a countermeasure to client/server). A problem we had in this period was that we were making some number of customer executive presentations on 3-tier (network) architecture ... and taking barbs from the SAA factions http://www.garlic.com/~lynn/subnetwork.html#3tier other recent posts in this same thread: http://www.garlic.com/~lynn/2007m.html#42 The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM http://www.garlic.com/~lynn/2007m.html#44 The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM http://www.garlic.com/~lynn/2007m.html#45 The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM http://www.garlic.com/~lynn/2007m.html#48 The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM http://www.garlic.com/~lynn/2007m.html#50 The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM http://www.garlic.com/~lynn/2007m.html#57 The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM http://www.garlic.com/~lynn/2007m.html#63 The Development of the Vital IBM PC in Spite of the
Re: Is Parallel Programming Just Too Hard?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. Morten Reistad [EMAIL PROTECTED] writes: Also, log structured file systems, the jfs and contributions to efs3, and huge improvements to the irq and dma routing; including some work in processor affinities. metadata logging is slightly different from log structured file systems. one of the problems with log structured file systems is the periodic garbage collection done to consolidate files, making their records sequential and contiguous. for other drift ... during work on HA/CMP http://www.garlic.com/~lynn/subtopic.html#hacmp we hired one of the people responsible for doing the BSD log structured filesystem implementation to consult on doing a geographically distributed filesystem. JFS was originally done by people working on 801/AIXV3. 801 early on had a definition/implementation for database memory ... i.e. the hardware could keep track of fine-grain changes (size on the order of cache-lines). Just load up data into a memory-mapped infrastructure ... provide the COMMIT boundaries ... and eliminate needing to sprinkle log calls thruout the code. At commit, just run thru the changed memory indications ... collecting the data lines needing logging. There had been various kinds of conflict between the unix development group in palo alto and the group in austin. The palo alto group took JFS and ported it to non-801 platforms ... having to retrofit the logging calls into the software (since they lacked database memory hardware). It turns out that the version with explicit logging calls ran faster than the original implementation (even on the same 801 hardware platform) ... the commit-time scanning of memory for changes tended to be higher overhead than the explicit log calls. Then the remaining justification for database memory is the implementation simplification ...
somewhat akin to some of the pushes for parallel programming (except parallel programming is frequently explicitly about performance; not trying to trade-off performance against simplicity). some of the database memory stuff can be found under the heading of transactional memory ... some posts mentioning transactional memory: http://www.garlic.com/~lynn/2005r.html#27 transactional memory question http://www.garlic.com/~lynn/2005s.html#33 Power5 and Cell, new issue of IBM Journal of RD http://www.garlic.com/~lynn/2007b.html#44 Why so little parallelism? misc. past posts mentioning log structured filesystems http://www.garlic.com/~lynn/93.html#28 Log Structured filesystems -- think twice http://www.garlic.com/~lynn/93.html#29 Log Structured filesystems -- think twice http://www.garlic.com/~lynn/2000c.html#24 Hard disks, one year ago today http://www.garlic.com/~lynn/2001f.html#59 JFSes: are they really needed? http://www.garlic.com/~lynn/2002b.html#20 index searching http://www.garlic.com/~lynn/2002l.html#36 Do any architectures use instruction count instead of timer http://www.garlic.com/~lynn/2003b.html#69 Disk drives as commodities. Was Re: Yamhill http://www.garlic.com/~lynn/2004g.html#22 Infiniband - practicalities for small clusters http://www.garlic.com/~lynn/2005l.html#41 25% Pageds utilization on 3390-09? http://www.garlic.com/~lynn/2005n.html#36 Code density and performance? http://www.garlic.com/~lynn/2006j.html#3 virtual memory http://www.garlic.com/~lynn/2006j.html#10 The Chant of the Trolloc Hordes http://www.garlic.com/~lynn/2007.html#30 V2X2 vs. Shark (SnapShot v. FlashCopy) http://www.garlic.com/~lynn/2007i.html#27 John W. Backus, 82, Fortran developer, dies some past posts mentioning database memory http://www.garlic.com/~lynn/2002b.html#33 Does it support Journaling? http://www.garlic.com/~lynn/2002b.html#34 Does it support Journaling? 
http://www.garlic.com/~lynn/2003c.html#49 Filesystems http://www.garlic.com/~lynn/2003d.html#54 Filesystems http://www.garlic.com/~lynn/2005n.html#20 Why? (Was: US Military Dead during Iraq War http://www.garlic.com/~lynn/2005n.html#32 Why? (Was: US Military Dead during Iraq War http://www.garlic.com/~lynn/2006o.html#26 Cache-Size vs Performance http://www.garlic.com/~lynn/2006y.html#36 Multiple mappings http://www.garlic.com/~lynn/2007i.html#27 John W. Backus, 82, Fortran developer, dies
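the explicit-log-call vs. database-memory trade-off described above can be sketched in a few lines ... a toy sketch only (python, all names hypothetical ... the real 801 mechanism tracked cache-line-size granules in hardware, not dictionary keys):

```python
# Contrast of the two logging styles described above (toy model):
# ExplicitLog sprinkles a log call into every update; DirtyScan just
# marks which "lines" changed and builds the log by scanning them at
# COMMIT time (analogous to the 801 database-memory change-bits).

class ExplicitLog:
    def __init__(self):
        self.data, self.log = {}, []

    def update(self, key, value):
        # explicit log call on every change (records old and new value)
        self.log.append((key, self.data.get(key), value))
        self.data[key] = value

    def commit(self):
        self.log.append(("COMMIT",))


class DirtyScan:
    def __init__(self):
        self.data, self.dirty, self.log = {}, set(), []

    def update(self, key, value):
        # no log call here ... just remember which line changed
        self.data[key] = value
        self.dirty.add(key)

    def commit(self):
        # commit-time scan of the changed-memory indications
        for key in sorted(self.dirty):
            self.log.append((key, self.data[key]))
        self.dirty.clear()
        self.log.append(("COMMIT",))
```

note the dirty-scan version writes one log record per changed line no matter how many times it was updated, while the explicit-log version pays a small cost on every update ... the post's point is that in practice the per-update calls still beat the commit-time scan.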
Re: Operating systems are old and busted
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. another article on the same theme: Leopard and Vista: Last Gasp of the Big OS? http://news.yahoo.com/s/pcworld/133276 from above: Twenty years from now a new generation of computer users will look back on the operating systems of today with the same bemused smile we look back at the cars of the late 1950s and early 60s. They had huge fins, were the size of a small yacht and burned up just about as much gas. ... snip ... a few similar articles over the past yr: Windows Vista: The last Of Microsoft's Supersized Operating Systems? http://www.informationweek.com/blog/main/archives/2006/08/windows_vista_t.html Windows Vista the last of its kind http://www.techworld.com/news/index.cfm?NewsID=6718 Vista: The Last Microsoft Operating System that will Matter http://www.realtime-websecurity.com/articles_and_analysis/2007/01/vista_the_last_microsoft_opera.html Vista is the last of the dinosaurs http://www.theinquirer.net/default.aspx?article=36155 other recent posts in this thread: http://www.garlic.com/~lynn/2007m.html#64 Operating systems are old and busted http://www.garlic.com/~lynn/2007m.html#66 Off Topic But Concept should be Known To All http://www.garlic.com/~lynn/2007m.html#67 Operating systems are old and busted http://www.garlic.com/~lynn/2007m.html#68 Operating systems are old and busted http://www.garlic.com/~lynn/2007m.html#69 Operating systems are old and busted http://www.garlic.com/~lynn/2007m.html#73 Operating systems are old and busted
Re: Operating systems are old and busted
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main as well. [EMAIL PROTECTED] (Chris Mason) writes: ... and thereby put the wait light out[1]. Having been brought up with DOS (the original DOS), and, generally, S/360 Model 30s, I was used to knowing how busy the machine was by observing the flickering of the wait light. my first undergraduate programming job was to port MPIO from the 1401 to the 360/30. MPIO provided a tape<->unit-record (reader/printer/punch) front-end for the university 709 running ibsys. it was possible to operate the 360/30 in 1401 emulation mode ... so i conjecture that the exercise was purely to get familiarity with the new 360 ... which would eventually replace both the 709 and the front-end machine with a 360/67. i got to design and implement my own monitor, device drivers, interrupt handlers, storage management, console interface, etc ... and eventually had an assembler program with approx. 2000 cards. running os/360 pcp (r6) ... the stand-alone version assembled in about 20-25 minutes elapsed time. I had conditional assembly that would also generate a program that would run under PCP and used open/close and DCB macros. There were five DCB macros, and you could tell from the wait light pattern when the assembler was processing a DCB macro ... each one took 5-6 minutes elapsed time ... the os/360 conditional assembly version took an extra 30 minutes (making the assembly nearly an hr total).
Re: The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. [EMAIL PROTECTED] (Phil Smith III) writes: Which is the end of the story, boys and girls. For, while so many people focus on how the PC has damaged the mainframe, the mainframe still stands tall. What the PC was meant to destroy, it did destroy - the minis and superminis. DEC went from top of the heap (Queen Elizabeth in Boston harbor for DECWorld) to non-existence in less than 10 years. DG is no more. Wang is no more. The PC destroyed them all. we were spending some time in SCI (as well as FCS and HIPPI) meetings. both Sequent and DG would build an SCI machine with four-processor (intel) boards ... for a 256-processor numa machine (convex built an sci machine with two-processor hp/risc boards ... for a 128-processor numa machine). both DG and sequent are gone ... sequent being absorbed by ibm ... and some recent references suggest that the only surviving sequent technology may be found in some contributions to linux. HP's superdome may or may not be considered the exemplar follow-on. a couple recent posts on sci/numa machines: http://www.garlic.com/~lynn/2007g.html#3 University rank of Computer Architecture http://www.garlic.com/~lynn/2007m.html#13 Is Parallel Programming Just Too Hard? wang signed a deal with austin (and some of the austin people actually left and went to work for wang) to use rs/6000 as their hardware platform (getting out of the hardware business). in some of the a.f.c. posts, i've frequently pointed out that the late 70s and early 80s saw a significant uptake of mid-range machines in the departmental server market segment ... both vm/43xx and vax/vms ... with vm/43xx actually having a larger install base than vax/vms (in part because there were numerous large customer orders for multiple hundred 43xx machines at a time).
by the mid-80s that market segment was starting to be taken over by workstations and large PCs (with a corresponding drop-off in sales of 43xx and vax machines). Later the more powerful PCs would also take over much of the workstation market. misc. old email mentioning various happenings around 43xx http://www.garlic.com/~lynn/lhwemail.html#43xx there had been anticipation that the introduction of the 4361/4381 would see comparable uptake to the 4331/4341 ... but by then, the market was already starting to move to workstations and larger PCs. a couple past posts giving domestic and world-wide vax numbers, sliced and diced by model and yr (post 85, the numbers are primarily micro-vax): http://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction http://www.garlic.com/~lynn/2005f.html#37 Where should the type information be: in tags and descriptors http://www.garlic.com/~lynn/2006k.html#31 PDP-1
Re: Off Topic But Concept should be Known To All
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main as well. [EMAIL PROTECTED] (Ken Brick) writes: http://www.theregister.com/2007/06/20/usenix_07_opening_keynote/ the new, 40-yr-old theme, courtesy of the science center http://www.garlic.com/~lynn/subtopic.html#545tech related post here http://www.garlic.com/~lynn/2007m.html#64 Operating systems are old and busted a lot of technology evolved in that environment ... GML, precursor to SGML, HTML, XML, aka the markup stuff http://www.garlic.com/~lynn/subtopic.html#sgml the internal network ... which was larger than the arpanet/internet from just about the beginning until possibly sometime mid-85 http://www.garlic.com/~lynn/subnetwork.html#internalnet and as mentioned in this recent thread: http://www.garlic.com/~lynn/2007m.html#47 Capacity and Relational Database http://www.garlic.com/~lynn/2007m.html#55 Capacity and Relational Database http://www.garlic.com/~lynn/2007m.html#56 Capacity and Relational Database relational/sql was first created in that environment http://www.garlic.com/~lynn/subtopic.html#systemr a lot of virtual memory and dispatch/scheduling work http://www.garlic.com/~lynn/subtopic.html#fairshare http://www.garlic.com/~lynn/subtopic.html#wsclock among other things, it provided a fantastic incubator for R&D and new technology ... which has been somewhat alluded to in parts of a recent thread: http://www.garlic.com/~lynn/2007m.html#15 Patents, Copyrights, Profits, Flex and Hercules http://www.garlic.com/~lynn/2007m.html#20 Patents, Copyrights, Profits, Flex and Hercules http://www.garlic.com/~lynn/2007m.html#32 Patents, Copyrights, Profits, Flex and Hercules in fact, after the corporation had canceled the failed Future System project http://www.garlic.com/~lynn/subtopic.html#futuresys and realized that it had to throw resources back into the 370 product line ... POK was able to convince the corporation that the vm370 product had to be killed ...
because they needed to transfer all the (relatively few) people in the burlington mall development group to POK to provide support getting the mvs/xa development effort on schedule. Eventually, Endicott was able to salvage some of the product mission.
Re: Operating systems are old and busted
The following message is a courtesy copy of an article that has been posted to alt.folklore.computers,bit.listserv.ibm-main as well. re: http://www.garlic.com/~lynn/2007m.html#64 Operating systems are old and busted http://www.garlic.com/~lynn/2007m.html#66 Off Topic But Concept should be Known To All part of the new, old things are called virtual appliances ... but in the good old 60s and 70s ... they were called service virtual machines some recent posts mentioning the virtual appliance genre http://www.garlic.com/~lynn/2006t.html#46 To RISC or not to RISC http://www.garlic.com/~lynn/2006w.html#25 To RISC or not to RISC http://www.garlic.com/~lynn/2006x.html#6 Multics on Vmware ? http://www.garlic.com/~lynn/2006x.html#8 vmshare http://www.garlic.com/~lynn/2007i.html#36 John W. Backus, 82, Fortran developer, dies http://www.garlic.com/~lynn/2007k.html#26 user level TCP implementation http://www.garlic.com/~lynn/2007k.html#48 John W. Backus, 82, Fortran developer, dies cp67 had fairly early implemented fast, automated dump/reboot. That, along with the change-over to using the 2702 prepare command, helped contribute to round-the-clock, 24x7 cp67/cms online, timesharing services. The issue was that 360/67s were leased ... and charges were based on the system meter running ... and having the system up 3rd/4th shift might have the meter running ... but light load charges might not be able to cover the off-shift lease rate. The system meter would run even when the operating system was in wait state, as long as there was active I/O. The use of the 2702 prepare command for terminal I/O would effectively suspend the I/O and the system meter would stop. The other part was that the fast, automated dump/reboot helped make it practical to run cp67 3rd/4th shifts w/o any human (operator) present (aka dark room) ... eliminating another expense that light-load offshift usage might not cover. The combination helped encourage both internal 7x24, around-the-clock online, timesharing cp67 operation ...
as well as help make various commercial cp67 timesharing offerings more viable http://www.garlic.com/~lynn/subtopic.html#timeshare however, one of the short-comings with unattended, offshift operation was that the service virtual machines still required human intervention. as part of lots of work on performance tuning, dynamic adaptive dispatch/scheduling, virtual memory optimization, workload profiling http://www.garlic.com/~lynn/subtopic.html#fairshare http://www.garlic.com/~lynn/subtopic.html#wsclock and other stuff I was doing at the science center http://www.garlic.com/~lynn/subtopic.html#545tech I was having to do a lot of benchmarking http://www.garlic.com/~lynn/subtopic.html#benchmark and as part of the benchmarking, I worked on being able to automate the whole process. One of the issues was being able to generate a new/different kernel and automatically reboot ... and start the next sequence of benchmarks. cp67 had morphed into vm370 and inherited the automatic reboot operation. The issue then was how to get all the benchmarks kicked off. I created an autolog command that emulated the manual login process ... and added one such command late in the system bringup/boot process. The resulting process that was automatically logged on could then execute scripts with autolog commands for a large number of other processes. I initially used it for implementing the benchmarking process. For instance, in the final sequence before release of the resource manager ... there was a sequence of something like 2000 (automated) benchmarks that took 3 months elapsed time to run. However, it was quickly realized that the autolog process (for benchmarking) would also be extremely useful for automating the startup of service virtual machines ... as part of automated system reboot.
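the automated benchmark sequence described above amounts to something like the following loop ... a sketch only (the function names are hypothetical stand-ins, not actual cp67/vm370 interfaces):

```python
# Toy sketch of the automated benchmark process: for each benchmark
# configuration, generate a new kernel, do the automated reboot, then
# use autolog to start the workload processes (no operator involved).

def run_benchmarks(configs, build_kernel, reboot, autolog, measure):
    results = []
    for cfg in configs:
        build_kernel(cfg)              # generate new/different kernel
        reboot()                       # fast, automated dump/reboot
        for user in cfg["workload_users"]:
            autolog(user)              # emulate the manual login process
        results.append(measure(cfg))   # collect the benchmark numbers
    return results
```

the same autolog step at the end of system bringup is what later got reused to start service virtual machines automatically.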
The burlington development group was one of the organizations that had been distracted by the future system project http://www.garlic.com/~lynn/subtopic.html#futuresys after FS was killed (and before burlington was put on notice that they were being shut down and everybody transferred to POK to support mvs/xa development) ... they had a crash program to turn out items in vm370 release 3 ... and picked up a lot of stuff from the science center (including the autolog command), where we had continued to work on (360/370) virtual machine activity.
Re: Operating systems are old and busted
The following message is a courtesy copy of an article that has been posted to alt.folklore.computers,bit.listserv.ibm-main as well. re: http://www.garlic.com/~lynn/2007m.html#64 Operating systems are old and busted http://www.garlic.com/~lynn/2007m.html#66 Off Topic But Concept should be Known To All http://www.garlic.com/~lynn/2007m.html#67 Operating systems are old and busted part of the timesharing issue was whether the off-shift usage charges (or just plain usage) could justify the off-shift operational costs ... since usage tended to decline significantly offshift and on weekends (although I finally got my home machine for dial-up access mar70 ... it was a 2741 selectric, and I have effectively had online access at home ever since). lots of past posts about timesharing services ... including commercial (cp67, vm370) timesharing service bureaus in the 60s and 70s. http://www.garlic.com/~lynn/subtopic.html#timeshare in the 60s thru some of the 70s, machines tended to be leased ... and there was a system meter ... which would rack up charges as the machine was used ... even when the machine was in wait state ... as long as I/O was active. The 2702 prepare command was a mechanism to leave the terminal lines prepared for any terminal operation ... w/o actually having an active I/O apparent to the system meter. the incremental machine lease charges and the costs of having people/operators present ... were among the inhibitors to justifying/providing around-the-clock, 7x24 timesharing operation (since offshift usage could be extremely spotty). eliminating system meter running ... when the system wasn't actually doing anything (just available for doing something) ... and being able to run with dark-room, unattended operation ... would significantly lower the off-shift usage threshold necessary to justify leaving the system up, available and operational (significantly helped in the transition to providing production around-the-clock, 7x24 operation).
Re: Operating systems are old and busted
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. [EMAIL PROTECTED] writes: This is fascinating history, Lynn. I remember using the Prepare command in channel programs for the 2701 that we used in the TUCC network ca. 1967 on. Speaking of old, busted systems and ones that were killed (like FS), does anybody know anything about the new operating system that Amdahl was trying to build? I had a phone interview with an Amdahl person in SEP 1987 who mentioned this OS and I started salivating at the prospect of working on that project. The next thing I knew the project had been killed. re: http://www.garlic.com/~lynn/2007m.html#64 Operating systems are old and busted http://www.garlic.com/~lynn/2007m.html#66 Off Topic But Concept should be Known To All http://www.garlic.com/~lynn/2007m.html#67 Operating systems are old and busted http://www.garlic.com/~lynn/2007m.html#68 Operating systems are old and busted Simpson (of HASP fame) ... misc. old posts mentioning hasp http://www.garlic.com/~lynn/subtopic.html#hasp including the observation that much of the source for HASP/JES2 internal networking support (before being released as a product) carried the letters TUCC in cols. 68-71. misc. past posts mentioning the internal network (which was mostly vm370 based ... with a few mvs/jes2 around the perimeter) http://www.garlic.com/~lynn/subnetwork.html#internalnet had left the HASP group and started an internal operating system project called RASP. It had some of the characteristics of TSS/360, being an extremely page-mapped oriented operating system (sharing some characteristics with FS, s/38, as/400) ... but purely 370 based. Later, he left and became an Amdahl fellow in Dallas ... starting a similar project. There was some litigation as a result, which included some code reviews (to see if any RASP code had leaked out). Some of this overlapped with the development of Au/GOLD (aka UTS) ...
and there appeared to be some amount of ambivalence between the two groups. Knowing some of the people in both organizations ... I even tried to do some mediation (ignore for the moment that i didn't work for them and knew about unannounced, internal projects). One of the examples I tried to use was the UNIX TSS370 (SSUP) effort that was being done for internal AT&T use. A lot of the 370 UNIX being done in the 80s was all being done under VM ... not so much because of the point in the original subject of this thread ... but because VM370 would provide for hardware EREP (if necessary) on behalf of the operating system in the virtual machine ... and an effort to fit UNIX with 370 EREP was several times larger than any of the efforts just porting UNIX to 370. The TSS370/SSUP strategy being done for AT&T ... had all the low-level TSS/370 kernel hardware support ... but with UNIX layered on top (an alternative approach to giving the unix environment a large amount of 370 EREP). In any case, I suggested that the two groups might be able to form a marriage of convenience doing something similar. Didn't happen. misc. past posts mentioning tss370/ssup, rasp, aspen, au/gold/uts, etc http://www.garlic.com/~lynn/95.html#1 pathlengths http://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party http://www.garlic.com/~lynn/98.html#11 S/360 operating systems geneaology http://www.garlic.com/~lynn/99.html#2 IBM S/360 http://www.garlic.com/~lynn/99.html#64 Old naked woman ASCII art http://www.garlic.com/~lynn/99.html#190 Merced Processor Support at it again http://www.garlic.com/~lynn/99.html#191 Merced Processor Support at it again http://www.garlic.com/~lynn/2000b.html#61 VM (not VMS or Virtual Machine, the IBM sort) http://www.garlic.com/~lynn/2000c.html#8 IBM Linux http://www.garlic.com/~lynn/2000f.html#68 TSS ancient history, was X86 ultimate CISC? designs) http://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC?
designs) http://www.garlic.com/~lynn/2000f.html#70 TSS ancient history, was X86 ultimate CISC? designs) http://www.garlic.com/~lynn/2001b.html#73 7090 vs. 7094 etc. http://www.garlic.com/~lynn/2001e.html#19 SIMTICS http://www.garlic.com/~lynn/2001f.html#20 VM-CMS emulator http://www.garlic.com/~lynn/2001f.html#22 Early AIX including AIX/370 http://www.garlic.com/~lynn/2001f.html#23 MERT Operating System Microkernels http://www.garlic.com/~lynn/2001f.html#47 any 70's era supercomputers that ran as slow as today's supercomputers? http://www.garlic.com/~lynn/2001l.html#7 mainframe question http://www.garlic.com/~lynn/2001l.html#8 mainframe question http://www.garlic.com/~lynn/2001l.html#9 mainframe question http://www.garlic.com/~lynn/2001l.html#11 mainframe question http://www.garlic.com/~lynn/2001l.html#17 mainframe question http://www.garlic.com/~lynn/2001l.html#18 mainframe question http://www.garlic.com/~lynn/2001l.html#20 mainframe question http://www.garlic.com/~lynn/2002d.html#23
Re: Is Parallel Programming Just Too Hard?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. [EMAIL PROTECTED] (Scott Lurndal) writes: Sure there are. Start with WindRiver. Then progress to MCP/AS, z/OS, Exec/1100 and so forth. The whole world isn't microsoft, you know. the person at the science center http://www.garlic.com/~lynn/subtopic.html#545tech that did the technology that was used in the internal network http://www.garlic.com/~lynn/subnetwork.html#internalnet and part of the technology that was used by customers and in bitnet http://www.garlic.com/~lynn/subnetwork.html#bitnet ... aka the base technology was extremely layered, with effectively something akin to gateway-like function ... it not only deployed peer-to-peer networking ... but easily provided emulators that could also talk to HASP/JES2 ... lots of posts mentioning hasp/jes2 http://www.garlic.com/~lynn/subtopic.html#hasp ... a recent x-over reference: http://www.garlic.com/~lynn/2007m.html#69 Operating systems are old and busted so by the bitnet time-frame ... internal corporate politics was such that shipped support was restricted to just the HASP/JES2 interfaces ... even tho the native peer-to-peer implementations were much more efficient (and still continued to be used internally for some time). in any case, the implementation was one of those service virtual machines (virtual appliances) ... more x-over http://www.garlic.com/~lynn/2007m.html#64 Operating systems are old and busted http://www.garlic.com/~lynn/2007m.html#66 Off Topic But Concept should be Known To All http://www.garlic.com/~lynn/2007m.html#67 Operating systems are old and busted http://www.garlic.com/~lynn/2007m.html#68 Operating systems are old and busted and included in the implementation was a very small, tightly coded multitasking monitor (for dispatch/scheduling).
now many yrs later the person had the opportunity to be involved in a project involving one of the major RTOSes ... and he happened to be looking thru the C source, which seemed familiar. Eventually checking an old listing of the multitasking monitor ... it was apparent that they had done a nearly line-by-line translation of his 360 assembler code into C ... including preserving all the original comments. -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Is Parallel Programming Just Too Hard?
[EMAIL PROTECTED] writes: When? I never considered IBM world and its batch environment timesharing. Timesharing does not do large data processing tasks well; and it's not supposed to. there were somewhat distinct, different environments ... one was commercial dataprocessing and the other was interactive computing and timesharing. the commercial, batch, production environment was oriented towards business dataprocessing ... it wasn't computing done on behalf of some specific person ... it was computing done on behalf of some business operation ... like the organization's payroll and printing checks. the requirement was that the business dataprocessing be done ... frequently on a very determined schedule ... independent of any specific person. over time, lots of batch technology evolved to guarantee that specific operations could be done reliably, predictably, and deterministically, independent of any human involvement. much of the interactive and virtual machine paradigm evolved totally independently at the science center ... first with cp40/cms, morphing into cp67/cms, followed by vm370/cms (even tho during the 70s, the batch infrastructure and the timesharing infrastructure shared a common 370 hardware platform): http://www.garlic.com/~lynn/subtopic.html#545tech both multics (on the 5th flr) and the science center (on the 4th flr) could trace common heritage back to ctss (and unix traces some heritage back to multics). even tho there was a relatively large timesharing install base (in most cases larger than any other vendor's timesharing install base that might be more commonly associated with timesharing) ... in the period, it was dwarfed by the commercial batch install base.
I've joked before that at one period, the commercial customer install base was much larger than the timesharing customer install base, the timesharing customer install base was much larger than the timesharing internal install base, and the timesharing internal install base was much larger than the number of internal installations that I directly supported (built, distributed, fixed bugs, on a highly customized/modified kernel and services). However, at one point the number of internal installations that I directly supported was as large as the total number of Multics installations that ever existed. lots of past posts mentioning the timesharing environment http://www.garlic.com/~lynn/subtopic.html#timeshare much of that timesharing install base was cms personal computing ... while the other was mixed-mode operation with cms personal computing and other kinds of operating systems in virtual machines ... aka the same timesharing infrastructure supporting both interactive cms personal computing as well as production (frequently batch) guest operating systems. this required a timesharing dispatching/scheduling policy infrastructure that could support a broad range of requirements. for a little topic drift, slightly related recent post: http://www.garlic.com/~lynn/2007m.html#46 Rate Monotonic scheduling (RMS) vs. OS Scheduling also coming out of the science center in the period (besides virtual machines, a lot of timesharing, and interactive/personal computing) ... somewhat reflecting the timesharing and personal computing orientation ... was much of the internal networking technology http://www.garlic.com/~lynn/subnetwork.html#internalnet as well as things like the invention of GML, precursor to SGML, HTML, XML, etc http://www.garlic.com/~lynn/subtopic.html#sgml with the advent of PCs ... a lot of the cms personal computing migrated to PCs ... although the (mainframe) virtual machine operating system continues to survive ...
and even had seen some resurgence in the early part of this decade supporting large numbers of virtual machines running linux ... somewhat in the server consolidation market segment. recently, server consolidation has become something of a more widely recognized buzzword ... pushing a combination of virtual machine capability migrated to PC hardware platforms, possibly in combination with large BLADE form-factor farms ... where a business with hundreds, thousands, or even tens of thousands of servers is consolidated into a much smaller space. Microsoft Looks to Stop Internal Server Sprawl http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=296360 from above: The profile of Microsoft Corp.’s in-house server farm is similar to those of many other companies: one application per server, with less than 20% peak server utilization on average. But Devin Murray, Microsoft’s group manager of utility services, is working to change that. Murray’s team manages about 17,000 servers that support 40,000 of Microsoft’s end users worldwide. ... snip ...
Re: Is Parallel Programming Just Too Hard?
Anne Lynn Wheeler [EMAIL PROTECTED] writes: with the advent of PCs ... a lot of the cms personal computing migrated to PCs ... although the (mainframe) virtual machine operating system continues to survive ... and even had seen some resurgence in the early part of this decade supporting large numbers of virtual machines running linux ... somewhat in the server consolidation market segment. recently, server consolidation has become something of a more widely recognized buzzword ... pushing a combination of virtual machine capability migrated to PC hardware platforms, possibly in combination with large BLADE form-factor farms ... where a business with hundreds, thousands, or even tens of thousands of servers is consolidated into a much smaller space. re: http://www.garlic.com/~lynn/2007m.html#51 Is Parallel Programming Just Too Hard? note that in the 80s, there started to be the possibility of two-level timesharing dispatch/scheduling when some amount of the virtual machine capability migrated into the mainframe hardware ... commonly referred to now as LPARs (logical partitions). The hardware had to schedule/dispatch (timeshare) the virtual machine LPARs ... and within an LPAR could be a virtual machine operating system, also having to schedule/dispatch (timeshare) its virtual machines. something similar has to be going on in the emerging PC-based genre of virtual machine implementations. one of the interesting dispatch/schedule evolutions starts with single processor virtual machines running on single processor hardware ... then moving to single processor virtual machines running on multiple processor hardware ... things can get more complex when having to run multiple processor virtual machines on multiple processor hardware ...
and it may not be possible to independently dispatch/schedule the different virtual processors of a virtual machine ... possibly needing to dispatch/schedule multiple virtual processors (of a virtual machine) concurrently on multiple real processors. lots of past posts about multiprocessors, tightly-coupled operation, and/or the compare&swap instruction http://www.garlic.com/~lynn/subtopic.html#smp
Re: Capacity and Relational Database
re: http://www.garlic.com/~lynn/2007m.html#47 Capacity and Relational Database for some additional past history: the university i was at was selected to be beta-test site for the original CICS ... it was an ONR-funded online library project. It also got a 2321 datacell as part of the project. One of my responsibilities got to be shooting bugs in this early CICS (before first official product ship). One specific bug I remember was that the customer installation that CICS had grown out of had been using a specific set of BDAM options. For whatever reason, the university library chose to use some other combination of BDAM options ... resulting in CICS failures. ... misc. past posts mentioning cics &/or bdam (and having to shoot CICS and BDAM bugs) http://www.garlic.com/~lynn/subtopic.html#bdam one of the IMS things in the mid-70s was the transition to a virtual memory environment. The science center http://www.garlic.com/~lynn/subtopic.html#545tech had done much of the early stuff on virtual memory as part of both CP67 and VM370. Some of the work involved extensive performance monitoring, performance modeling, workload profiling and the early stuff leading to capacity planning. http://www.garlic.com/~lynn/subtopic.html#benchmark One of these efforts was instruction tracing and modeling virtual memory usage. This was used extensively in many applications moving from a real storage environment to virtual memory operation. One of the earliest uses, with significant benefit, was as part of rewriting the whole APL storage management when the science center did the port of apl\360 to cms\apl (and expanding APL workspaces from typical 16k-32k real memory to allow maximum virtual memory sizes) ... various past posts mentioning APL and/or one of its heaviest users ...
the HONE system http://www.garlic.com/~lynn/subtopic.html#hone In the mid-70s, one of the major internal users of this tracing and modeling application (from the science center) was the IMS group ... tracing and monitoring both general IMS performance operation ... as well as optimization for virtual memory operation. The science center also added semi-automated program re-organization to the application and announced it as the VS/REPACK product in 1976. And here is an old email reference about getting pushed as general consultant to the IMS development group in STL (mentions luncheon with the IMS development people) http://www.garlic.com/~lynn/2007.html#email801016 this independent of the previous mention about working on some of system/r ... the original relational/sql implementation http://www.garlic.com/~lynn/subtopic.html#systemr for other drift ... lots of past posts about doing lots of stuff for virtual memory optimization and replacement algorithms http://www.garlic.com/~lynn/subtopic.html#wsclock Now, when my wife was con'ed into going to POK to be in charge of loosely-coupled architecture ... she originated peer-coupled shared data architecture (and a lot of the mainframe distributed/global locking stuff) http://www.garlic.com/~lynn/subtopic.html#shareddata which saw very little uptake until sysplex ... except for IMS and especially the IMS hot-standby effort. for somewhat other topic drift ... lots of past posts about being allowed to play disk engineer in bldgs. 14&15 http://www.garlic.com/~lynn/subtopic.html#disk at one time there was a joke about working a 4-shift week: 1st shift in bldg28/sjr, 2nd shift in bldgs. 14&15, 3rd shift in bldg90/stl, and 4th shift (aka weekends) at HONE. later when we were doing our HA/CMP product http://www.garlic.com/~lynn/subtopic.html#hacmp and scaleup for distributed database operation ... along with scaleup for distributed lock manager (as well as massive distributed recovery) ...
some email references here http://www.garlic.com/~lynn/lhwemail.html#medusa and minor reference in these posts http://www.garlic.com/~lynn/95.html#13 http://www.garlic.com/~lynn/96.html#15 the people in STL complained that if we were allowed to ship the support for the commercial DBMS stuff ... we would be at least five yrs ahead of where they were.
Re: Is Parallel Programming Just Too Hard?
Peter Flass [EMAIL PROTECTED] writes: Well, unix is unix (or Linux). The problems come from the basic design; if you changed the design, it wouldn't be unix. The best you can do is mitigate the problems. This is the case with every OS - some fundamental decisions made during the initial design can't be changed without modifying the OS out of existence. This is just like programming languages. You can add improvements, but some initial design decisions are set in stone. virtual machines have periodically been used over the past 40yrs to address various limitations in operating systems ... rather than trying to stress a particular operating system past its design point ... attempting to consolidate more and more applications on a single operating system platform ... go to a two (multi) level paradigm ... where you have a virtual machine environment, timesharing multiple virtual machines concurrently on a common platform ... and then within each virtual machine ... allow each to do its own thing (i.e. a little peter principle ... not pushing an operating system to rise past its level of competence). this is somewhat optimization at a more macro level ... while making some micro-level optimization sacrifices (i.e. the overhead of the virtual machine capability). re: http://www.garlic.com/~lynn/2007m.html#51 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#52 Is Parallel Programming Just Too Hard?
Re: Capacity and Relational Database
IBMsysProg wrote: From a software architecture standpoint, Multi Regions, Independent locking (IRLM), Automated Recovery (DBRC), and DASD Logging became the foundations of IBM's second relational data base system and its first SQL based system, called at its introduction DB2. IBM's first relational database system predated wide use of DASD and long histories could be written about it alone ... Bill of Material Program, called at various times BOMP, T-BOMP for TAPE BOMP, and D-BOMP for DISK BOMP. BOMP was probably used more for applications like payroll than manufacturing. re: http://www.garlic.com/~lynn/2007m.html#47 Capacity and Relational Database http://www.garlic.com/~lynn/2007m.html#55 Capacity and Relational Database lots of postings about the sql/relational database system/r done at sjr/bldg.28 http://www.garlic.com/~lynn/subtopic.html#systemr including mentioning doing work on system/r and handling technology transfer of system/r from sjr to endicott for sql/ds. another source of a lot of old archeological references: http://www.mcjones.org/System_R now system/r was all done in vm370 virtual machines ... technology out of the science center ... 4th flr, 545 tech sq http://www.garlic.com/~lynn/subtopic.html#545tech on the 5th flr, 545 tech sq was multics ... which had done an even earlier relational implementation. recent posting (in comp.databases.theory) http://www.garlic.com/~lynn/2007e.html#1 Designing database tables for performance? with multics MRDS reference: http://www.multicians.org/mgm.html#MRDS http://www.mcjones.org/System_R/mrds.html now the seminal work on relational was done by Codd at SJR, A Relational Model of Data for Large Shared Data Banks, ACM, v13n6, june 1970 http://www.acm.org/classics/nov95/toc.html wiki reference: http://en.wikipedia.org/wiki/Edgar_F._Codd minor pt in the above reference ... sjr was in bldg.
28 on the san jose plant site; the almaden facility wasn't built until the mid-80s. now one of the people in the meeting referenced here http://www.garlic.com/~lynn/95.html#13 http://www.garlic.com/~lynn/96.html#15 mentioned that he had handled a lot of technology transfer from sql/ds endicott back to STL for DB2 (even tho bldg. 28 and bldg. 90 are only about ten miles apart ... i would even periodically do the commute on my bike). for lots of topic drift ... two of the other people in that same meeting ... were later at a small client/server startup responsible for something called the commerce server, and we were called in to consult on being able to do payment transactions on their server ... misc. collected postings mentioning putting together payment transaction infrastructure for what is now frequently referred to as electronic commerce http://www.garlic.com/~lynn/subnetwork.html#gateway
Re: The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM
Frank McCoy [EMAIL PROTECTED] writes: Yup ... As *would* have happened with the PC itself if they'd been that tight-assed with it. They just didn't *get* the fact that the open bus and configuration was what made the PC popular. IOW, it was the *competition* that made it such a huge success. re: http://www.garlic.com/~lynn/2007m.html#42 The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM http://www.garlic.com/~lynn/2007m.html#44 The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM as i've mentioned before ... the other market force was that previous personal computers had been a do-it-yourself and hobbyist market. individuals had to justify the cost of the box for their own personal interest ... that included a lot of the software ... not a lot of off-the-shelf stuff ... so individuals had to do that themselves also. the big break-out for the ibm/pc was selling it into the terminal emulation market at businesses. businesses that had justified buying a couple thousand or tens of thousands of (3270) terminals ... for about the same amount of money, got both local computing and terminal emulation in a single desktop footprint. instead of selling one at a time to a very limited market ... orders were being taken for thousands at a time. this (business) install base motivated a lot of the business users and software entrepreneurs to write software applications for the install base. having a growing library of useful software tools for the market segment ... made it easier to justify spending the money to buy the machine. the combination of growing install base and growing available applications creates a snowball effect (positive feedback). misc.
past posts mentioning various aspects of the terminal emulation theme http://www.garlic.com/~lynn/subnetwork.html#emulation the business market potential significantly motivated the clone makers ... something that had been happening in the mainframe dataprocessing business market since at least the late 60s (and so wasn't that unique a concept). misc. past posts mentioning (mainframe) plug compatible (clone) controllers http://www.garlic.com/~lynn/subtopic.html#360pcm this was an enormous synergistic effect ... that wouldn't happen in the purely home/hobbyist market ... since the purchase price for strictly individuals was still fairly significant, with not a large number of solutions to attract a big following. possibly one of the biggest drivers of personal computers into the home/personal market was the internet ... the volumes from the business world were driving down the price point, and the combination of the price-point and the internet as a personal use (for the computers) ... then helped explode the sales into the home market (aka killer app/silver bullet for personal, personal computer use). recent references: http://www.garlic.com/~lynn/2007j.html#11 Newbie question on table design http://www.garlic.com/~lynn/2007j.html#71 John W. Backus, 82, Fortran developer, dies http://www.garlic.com/~lynn/2007k.html#68 John W. Backus, 82, Fortran developer, dies http://www.garlic.com/~lynn/2007l.html#37 Friday musings on the future of 3270 applications
Re: Capacity and Relational Database
[EMAIL PROTECTED] (IBMsysProg) writes: Memory. Over the years the first exploiters of architecture changes to allow more address spaces, more real memory, and more virtual memory have always been DBMS systems. At this time I would suggest allocating at least 2 gig of additional real memory to your future DBMS. DBMS system address spaces in general are intolerant of paging. This is because a page-in results in a wait for the entire address space and, to make things worse, DBMS address spaces serve many concurrent users. i've related before the discussion between the IMS group and the (original SQL) System/r group about pros & cons. http://www.garlic.com/~lynn/subtopic.html#systemr IMS has/had direct pointers ... which significantly cut down processing overhead ... but significantly increased development, maintenance, and administrative costs. System/r abstracted away the direct pointers ... at the cost of the implicit overhead of an automatically maintained index. The argument back then was that the (RDBMS) automatically maintained index doubled the physical disk space and significantly increased the number of disk i/os (as part of processing the index) ... offset by significantly reduced human resources/skills. going into the 80s ... disk price/bit came down significantly (muting the disk price/bit argument) and (relatively) significant increases in system real memory allowed much of the indexes to be cached (eliminating lots of the increased index disk i/os). The index overhead then somewhat shifted from the amount of disk i/os ... to just CPU overhead. In any case, the changes in price and availability of system resources that went on in the 80s ... changed the trade-off between the human skills/resources and system price-resources ... significantly enabling the wider use of RDBMS. Virtual memory and high-end DBMS don't mesh very well.
High-end DBMS tends to have lots of its own managed cache ... typically with some sort of LRU-type algorithm. I first noticed that running an LRU storage management algorithm under an LRU storage management algorithm could be a bad idea ... in the mid-70s with SVS/MVS running in a virtual machine (virtual memory). It was possible to get into an extremely pathological situation where MVS would select one of the pages (at a location it believed to be in its real memory) to be replaced ... at about the same time that the virtual machine hypervisor also decided that the corresponding virtual page should be replaced (since they were both looking at effectively the same usage patterns as the basis for replacement decisions). As a result, an LRU-based strategy ... running in a virtual memory, can start to look like an MRU strategy (the next most likely page to be used ... is the one that has been least recently used). lots of past posts about page replacement algorithms ... including some difference of opinion about some of the internally implemented MVS strategies http://www.garlic.com/~lynn/subtopic.html#wsclock as well as some old email on various aspects of the subject http://www.garlic.com/~lynn/lhwemail.html#globallru In any case, when running a high-end DBMS that has its own cache implementation ... in a virtual memory operating system environment ... there tends to be a lot of tuning options ... to minimize the conflict between the DBMS cache replacement strategy (typically some sort of LRU-based) and the operating system virtual memory replacement strategy (typically also some sort of LRU-based). There is also the possibility of things analogous to the old VS1-handshaking, where VM370 would present a pseudo page fault interrupt to VS1 (running in a virtual machine) ... enabling VS1 to do a task switch (instead of blocking the whole VS1 whenever any page fault occurred for a virtual machine page).
note that one progression of large real storage has been in-memory DBMS implementations ... rather than assuming that the DBMS natively resides on disk (with a lot of processor overhead related to that assumed mode of operation). The assumption is that nearly everything is memory resident and managed with memory pointers ... with periodic snapshots to disk for commits/integrity. Given the same amount of large real storage ... there are claims that the switch to an RDBMS memory-based paradigm can run ten times faster than an RDBMS disk-based paradigm that was fully cached and otherwise doing little disk i/o (with both running nearly identical SQL-based applications). misc. recent posts mentioning old interactions between the IMS and System/r organizations regarding various pros and cons http://www.garlic.com/~lynn/2007e.html#1 Designing database tables for performance? http://www.garlic.com/~lynn/2007e.html#14 Cycles per ASM instruction http://www.garlic.com/~lynn/2007e.html#31 Quote from comp.object http://www.garlic.com/~lynn/2007e.html#37 Quote from
Re: The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM
[EMAIL PROTECTED] writes: The public history of the PC began in August 1981, when IBM first announced 'The IBM Personal Computer.' This was the original PC. The time period for the development of this landmark, legacy product was approximately a year. It must be remembered that IBM was a centralized committee paper top down organization at the time. Everything went by snail mail and paper, communication was slow and lines of communication as well as the necessary and ... Read full article at http://www.knowledgefield.com/articles/the-development-of-the-vital-ibm-pc-in-spite-of-the-corporate-culture-of-ibm.shtml trolling? how 'bout the internal network ... world-wide http://www.garlic.com/~lynn/subnetwork.html#internalnet larger than the arpanet/internet from just about the beginning until possibly mid-85 http://www.garlic.com/~lynn/subnetwork.html#internet the great switch-over from arpanet (host-to-host with homogeneous IMP front-ends) to internetworking protocol was on 1jan83. the internet was somewhere between 100-250 nodes at the time (depending on how things were counted). the internal network was far past that ... passing 1000 nodes that summer http://www.garlic.com/~lynn/internet.htm#22 http://www.garlic.com/~lynn/99.html#112 http://www.garlic.com/~lynn/2006k.html#8 various old email on a variety of subjects from the 70s & 80s http://www.garlic.com/~lynn/lhwemail.html after the 23jun69 unbundling announcement, there was an effort to deploy (360/67) cp67 machines in various datacenters to give branch office technical people an opportunity to practice with operating systems running in (the remote) cp67 virtual machines (logon from terminals in the branch office to cp67 machines at remote datacenters). this was called HONE (aka hands-on network experience).
however, it was soon taken over by applications (mostly written in APL) supporting the branch office sales/marketing people (and the use by SEs for operating system experience eventually was dropped). when EMEA hdqtrs moved from the US to Paris in the early 70s ... I was called in to help with their HONE installation. At that time, it still took a little ingenuity to read email back in the states. http://www.garlic.com/~lynn/subtopic.html#hone note that the 5150 computer announced aug81 was predated by the 5100 computer from the palo alto science center ... 5100 demo'ed 1973 http://www-03.ibm.com/ibm/history/exhibits/pc/pc_1.html http://www-03.ibm.com/ibm/history/exhibits/pc/pc_2.html also, note that the boca group doing the development was designated an IBU ... independent business unit ... where some amount of corporate culture command&control was much more relaxed ... for instance, the standard A&R (announce and review) product process requiring sign-off from possibly nearly 500 executives from around the corporation. The birth of the IBM PC http://www-03.ibm.com/ibm/history/exhibits/pc25/pc25_birth.html misc. old posts: http://www.garlic.com/~lynn/2000.html#69 APL on PalmOS ??? http://www.garlic.com/~lynn/2000.html#70 APL on PalmOS ??? http://www.garlic.com/~lynn/2000d.html#15 APL version in IBM 5100 (Was: Resurrecting the IBM 1130) http://www.garlic.com/~lynn/2002b.html#39 IBM 5100 [Was: First DESKTOP Unix Box?] http://www.garlic.com/~lynn/2002b.html#43 IBM 5100 [Was: First DESKTOP Unix Box?] http://www.garlic.com/~lynn/2002b.html#45 IBM 5100 [Was: First DESKTOP Unix Box?] http://www.garlic.com/~lynn/2002b.html#47 IBM 5100 [Was: First DESKTOP Unix Box?] 
http://www.garlic.com/~lynn/2003i.html#79 IBM 5100 http://www.garlic.com/~lynn/2003i.html#82 IBM 5100 http://www.garlic.com/~lynn/2003i.html#84 IBM 5100 http://www.garlic.com/~lynn/2003j.html#0 IBM 5100 http://www.garlic.com/~lynn/2003n.html#6 The IBM 5100 and John Titor http://www.garlic.com/~lynn/2003n.html#8 The IBM 5100 and John Titor http://www.garlic.com/~lynn/2005m.html#2 IBM 5100 luggable computer with APL http://www.garlic.com/~lynn/2005m.html#3 IBM 5100 luggable computer with APL parts of thread from last yr that might have some interest: http://www.garlic.com/~lynn/2006o.html#43 25th Anniversary of the Personal Computer http://www.garlic.com/~lynn/2006o.html#45 25th Anniversary of the Personal Computer http://www.garlic.com/~lynn/2006o.html#46 25th Anniversary of the Personal Computer http://www.garlic.com/~lynn/2006o.html#65 25th Anniversary of the Personal Computer http://www.garlic.com/~lynn/2006o.html#66 25th Anniversary of the Personal Computer http://www.garlic.com/~lynn/2006p.html#15 25th Anniversary of the Personal Computer http://www.garlic.com/~lynn/2006p.html#31 25th Anniversary of the Personal Computer http://www.garlic.com/~lynn/2006p.html#34 25th Anniversary of the Personal Computer http://www.garlic.com/~lynn/2006p.html#36 25th Anniversary of the Personal Computer http://www.garlic.com/~lynn/2006p.html#39 25th
Re: Patents, Copyrights, Profits, Flex and Hercules
[EMAIL PROTECTED] (R.S.) writes: Key-based solutions exist on mainframe as well as on other systems. I think it is rather a technical, not ethical or organisational issue: It is *easy* to have illegal software on a PC, sometimes you are even unaware of it. I mean a lot of small but useful tools like Windows Commander, archivers, DVD-burning software etc. etc. Even if you have some tools for z/OS it is simply not so easy to install it on the host - usually several persons are involved, usually someone could ask - Did we buy it? How did you get it? On the other hand, people are interested in having some bells & whistles on *their* PC (even company owned), while the mainframe is not *theirs*. It is not *personal*. It's common. re: http://www.garlic.com/~lynn/2007m.html#15 Patents, Copyrights, Profits, Flex and Hercules slightly related recent posts about looking at software piracy (DRM) in the mainframe and PC market space http://www.garlic.com/~lynn/2007b.html#59 Peter Gutmann Rips Windows Vista Content Protection http://www.garlic.com/~lynn/aadsm27.htm#9 Enterprise Right Management vs. Traditional Encryption Tools old email about the new apple lisa announcement and conjecture about the processor serial number being used for software licensing (and piracy countermeasure) http://www.garlic.com/~lynn/2007b.html#email830213 http://www.garlic.com/~lynn/2007b.html#email830213b in this recent post http://www.garlic.com/~lynn/2007b.html#56 old lisa info part of the mainframe case was being able to show in court that something out of the ordinary had to have been done to subvert the licensing provisions (the value was worth taking to court). in the PC case, the value of an individual copy makes it difficult to justify investigating and bringing to court every individual case.
TPM is one of the latest piracy countermeasures (as well as supposed to be a countermeasure to software compromises).

misc. past posts mentioning giving an assurance talk in the trusted computing track at the intel developers conference http://www.garlic.com/~lynn/aadsm5.htm#asrn1 Assurance, e-commerce, and some x9.59 http://www.garlic.com/~lynn/aadsm21.htm#3 Is there any future for smartcards? http://www.garlic.com/~lynn/aadsm23.htm#56 UK Detects Chip-And-PIN Security Flaw http://www.garlic.com/~lynn/aadsm24.htm#23 Use of TPM chip for RNG? http://www.garlic.com/~lynn/aadsm24.htm#52 Crypto to defend chip IP: snake oil or good idea? http://www.garlic.com/~lynn/2005g.html#36 Maximum RAM and ROM for smartcards http://www.garlic.com/~lynn/2005o.html#3 The Chinese MD5 attack http://www.garlic.com/~lynn/2006p.html#48 Device Authentication - The answer to attacks lauched using stolen passwords? http://www.garlic.com/~lynn/2006w.html#37 What does a patent do that copyright does not? http://www.garlic.com/~lynn/2007g.html#61 The Perfect Computer - 36 bits? http://www.garlic.com/~lynn/2007g.html#63 The Perfect Computer - 36 bits? http://www.garlic.com/~lynn/2007l.html#42 My Dream PC -- Chip-Based

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Patents, Copyrights, Profits, Flex and Hercules
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Clem Clarke) writes: It's a shame, but unless IBM does do a big rethink on this, and allows small developers some sort of inexpensive or free access to the mainframes, they will die. Allowing a hobbyist license for z/OS, VM and VSE on Hercules would be one way, and what does IBM really have to lose? And the gain would be that they could have many people working at no cost on these systems developing tools and applications to make them better and better.

some related thread drift from another n.g. http://www.garlic.com/~lynn/2007m.html#3 nouns and adjectives

in the 60s and much of the 70s ... lots of the innovation came out of customer installations & datacenters ... since it was the customers that understood the need and requirement ... things like cics, ims, etc. later they were transferred to development organizations for product support. in many cases, this is a misnomer ... since those development organizations are responsible for product maintenance ... not the product's development (maybe doing plus/minus five percent changes per annum). I've periodically made facetious comments about term inflation in applying the word development to organizations that are primarily doing product maintenance.

something similar happened with the introduction of the ibm/pc ... a large proportion of the products originated from end-users (that were faced with the actual problems and understood what kind of solution was needed). vendor product operations tend to have people like software engineers that understand issues about software maintenance ... but rarely have people with the experience necessary to see what solution was needed in the first place. even before the ibm/pc came out ...
there were some that had jumped ship from vm/cms (that had been providing a mainframe-based personal computing environment) and were implementing some number of CMS applications on other early personal computers. These weren't ports of CMS applications (because the implementation details tended to be totally different), but frequently the look&feel and the solution they provided were the same.

the OCO-wars were especially hard on the vm/cms community ... because not only was full source available ... but even maintenance, fixes, etc for customers were shipped as source updates ... based on CMS multi-level source maintenance facilities. Some studies from that period even claimed the number of system (source) updates done at customer datacenters (aka aggregate lines-of-code) was actually larger than the source lines-of-code in the base system.

the high-end of the market is where the (quarterly) revenue/profit is ... but all the innovation tends to originate at the low-end & mid-range ... in part because innovation requires quite a bit of experimentation, trial & error, etc ... and the high-end is rarely made available for such experimentation. As a result, some of the other vendors found a need that could be filled in the entry/low-end market segment (and long term ... it is frequently the entry/low-end that tends to feed the high-end with the applications that keep the high-end quarterly revenue sustained).

the pre-occupation with quarterly results has been a sporadic topic for at least the last 40 yrs. during periods when there was significant general economic growth ... the generational issues appeared to almost take care of themselves ... allowing the perception that executives could solely concentrate on the quarterly issues. however, this approach somewhat came home to roost. i've mentioned before being at a talk at MIT in the early 70s where Amdahl was asked how he was able to convince the money people to support his new clone computer company.
His reply was that there was already something like $200b that customers had invested in 360 applications ... that even if IBM were to totally walk away from 360/370 ... which might be considered a veiled reference to the future system project http://www.garlic.com/~lynn/subtopic.html#futuresys ... (just) that (existing) software application base could keep him in business thru the end of the century.

starting in the early 70s, i had been heavily involved with HONE deployment ... first its original objective of providing hands-on experience to branch office SEs with operating systems running in virtual machines ... and then the transition to being primarily an online, interactive environment deploying applications (mostly implemented in cms\apl) supporting sales & marketing worldwide. http://www.garlic.com/~lynn/subtopic.html#hone

in the mid-70s, I got con'ed into helping with the virgil/tully microcode assists ... including spending time off & on over a period of a year running around the world with the product managers, meeting with business planning & forecasting groups positioning the processors in the market. One of the
Re: Is Parallel Programming Just Too Hard?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Shane) writes: M - seems to be just a warming over of the multi-programming versus parallel discussion. With the exception of that last link, I'd need serious convincing any of them are parallel programming. I suspect in the not too distant future, multi-threaded will supplant (in the common vernacular) all notions of parallel. If not already ...

re: http://www.garlic.com/~lynn/2007l.html#60 Is Parallel Programming Just Too Hard?

multi-threaded tends to be used in conjunction with tightly-coupled, shared-memory multiprocessing (and the current buzzword multi-core). lots of past posts mentioning shared-memory multiprocessing and/or the compare&swap instruction http://www.garlic.com/~lynn/subtopic.html#smp

the compare&swap instruction had been invented by Charlie (CAS are Charlie's initials) at the science center ... working on fine-grain multiprocessor locking for cp67 http://www.garlic.com/~lynn/subtopic.html#545tech in order to get the instruction justified for 370, we had to come up with a description of its use in multi-threaded/multi-programming operation, which was included (originally) in the (370) principles of operation ... a more recent version http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dz9zr003/A.6?DT=20040504121320

parallel has been used in reference to both (tightly-coupled) multi-threaded operation, as well as loosely-coupled and/or cluster multiprocessing operation misc.
past posts mentioning doing a high availability, cluster multiprocessing product http://www.garlic.com/~lynn/subtopic.html#hacmp some old email references about working on cluster scaleup http://www.garlic.com/~lynn/lhwemail.html#medusa a couple old posts specifically about working on applying distributed lock manager and cluster scaleup to parallel oracle http://www.garlic.com/~lynn/95.html#13 http://www.garlic.com/~lynn/96.html#15 recent post mentioning my wife had been con'ed into going to POK to be in charge of loosely-coupled architecture http://www.garlic.com/~lynn/2007l.html#62 Friday musings on the future of 3270 applications

a lot of the blades stuff has been physical packaging originally done for (numerical intensive cluster) GRIDs (getting more & more computing into a smaller and smaller footprint). some amount of GRID/blades are now being pitched into the commercial sector. some of it isn't strictly loosely-coupled/cluster operation ... but it is also being used (frequently in combination with virtualization) for server consolidation.

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: mainframe = superserver
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main as well.

[EMAIL PROTECTED] writes: I realize that this has probably been asked before, but google didn't give me an answer. Before I ask my question let me state that I know that windows server 2003 and longhorn won't run on an IBM mainframe. There is the endian issue and the ascii vs ebcdic issues. Is there a medium to large IBM box that can run a couple hundred virtual windows 2003 servers? And said box can scale up to approximately 1000+ virtual windows servers? Given that all current servers are dell and hp servers with 2 intel core 2 duo processors and a total of 150TB of storage? I am doing research for the possible replacement of 200+ windows servers in our datacenter. We need to add servers, but there is literally no more power. My thinking is if IBM can get windows servers to run on something like their mainframes it would save electricity and space. Everyone would win.

recent cross-over from another thread: http://www.garlic.com/~lynn/2007l.html#63 Is Parallel Programming Just Too Hard? mentioning that blades are also being sold into the commercial market, frequently in combination with virtualization, for server consolidation and another mention here ...
in a slightly older thread/post: http://www.garlic.com/~lynn/2007h.html#2 The Mainframe in 10 Years and mention in this thread http://www.garlic.com/~lynn/2007k.html#22 Another migration from the mainframe http://www.garlic.com/~lynn/2007k.html#23 Another migration from the mainframe

some references from the above thread: CIO Challenge: Energy Efficiency http://www.wallstreetandtech.com/showArticle.jhtml?articleID=192202377 IBM Unveils New Energy-Efficient Blades http://www.hpcwire.com/hpc/1379801.html IBM to focus on energy efficiency http://www.bladewatch.com/2007/05/10/ibm-to-focus-on-energy-efficiency/ Blade innovations highlight energy efficiency opportunities http://www.it-director.com/business/content.php?cid=9135 IBM defends blades' energy efficiency http://green.itweek.co.uk/2006/10/ibm_defends_bla.html IBM Data Center and Facilities Strategy Services - high density computing data center readiness assessment http://www-935.ibm.com/services/us/index.wss/offering/its/a1025605 Lots of Blade Server articles http://www.eweek.com/category2/0,1874,1658862,00.asp IBM Grid Computing Solutions - financial industry http://www-03.ibm.com/grid/solutions/by_industry/financial.shtml Grid Computing for Financial Services 2007 http://www.iqpc.com/cgi-bin/templates/genevent.html?topic=233&event=12603 Grid computing: Accelerating the search for revenue and profit for financial markets http://www-03.ibm.com/industries/financialservices/doc/content/landing/973028103.html

the previously mentioned scaleup activity was in large part about physical packaging and issues like power and cooling http://www.garlic.com/~lynn/lhwemail.html#medusa but the server consolidation is now frequently blades/grid technology in combination with virtualization, courtesy of the science center from the mid-60s ... first with cp40 and then, when the 360/67 became available, morphed into cp67 (precursor to vm370) ...
misc past posts mentioning the science center http://www.garlic.com/~lynn/subtopic.html#545tech besides virtualization and virtual machines being invented at the science center ... the compare&swap instruction for multi-thread/multi-processor operation was also invented at the science center http://www.garlic.com/~lynn/subtopic.html#smp and also GML (later morphed into sgml, html, xml, etc) http://www.garlic.com/~lynn/subtopic.html#sgml and most of the internal network http://www.garlic.com/~lynn/subnetwork.html#internalnet which was also seen outside in deployments like bitnet and earn http://www.garlic.com/~lynn/subnetwork.html#bitnet

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Is Parallel Programming Just Too Hard?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. a couple recent items: The death of single threaded development http://blogs.zdnet.com/Ou/?p=519 Google Acquires Multicore Programming Startup PeakStream -- Multithreaded Programming http://www.informationweek.com/news/showArticle.jhtml?articleID=199901501 Intel updates compilers for multicore era http://arstechnica.com/news.ars/post/20070605-intel-updates-compilers-for-multicore-era.html Sun Updates Studio For Multi-core Development http://itmanagement.earthweb.com/entdev/article.php/3681151 Sun stresses multicore chips, Linux with dev tool http://news.yahoo.com/s/infoworld/20070604/tc_infoworld/89028 Scots firm demonstrates parallelizing compiler at MPF http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=199700792 recent posts on the subject: http://www.garlic.com/~lynn/2007.html#3 The Future of CPUs: What's After Multi-Core? http://www.garlic.com/~lynn/2007g.html#3 University rank of Computer Architecture http://www.garlic.com/~lynn/2007i.html#20 Does anyone know of a documented case of VM being penetrated by hackers? http://www.garlic.com/~lynn/2007i.html#78 John W. Backus, 82, Fortran developer, dies http://www.garlic.com/~lynn/2007l.html#15 John W. Backus, 82, Fortran developer, dies http://www.garlic.com/~lynn/2007l.html#19 John W. Backus, 82, Fortran developer, dies http://www.garlic.com/~lynn/2007l.html#24 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007l.html#26 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007l.html#34 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007l.html#38 Is Parallel Programming Just Too Hard? 
http://www.garlic.com/~lynn/2007l.html#42 My Dream PC -- Chip-Based -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Friday musings on the future of 3270 applications
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Patrick O'Keefe) writes: At times like this I sorely miss my long lost APPN Formats and Protocols bible. I believe official sources claim that an APPN CP and an LU were different kinds of NAUs because they were different chunks of code. (Or at least the same chunk of code implementing 2 different FSMs.)

re: http://www.garlic.com/~lynn/2007l.html#37 Friday musings on the future of 3270 applications

As an undergraduate at the univ, I had done TTY/ascii terminal support for cp67 ... and attempted to make the 2702 do something that it couldn't quite do. that somewhat prompted a univ. project to build our own clone controller (originally using an Interdata/3). There was subsequently a writeup ... blaming us (at least in part) for the clone controller business (interdata was subsequently bought by perkin-elmer and the box was sold under the PE logo well thru the 80s ... apparently with the same channel interface card that was designed at the univ. in the 60s) http://www.garlic.com/~lynn/subtopic.html#360pcm

the clone controller business was supposedly a major motivation behind the future system project http://www.garlic.com/~lynn/subtopic.html#futuresys ... recent FS reference/post (with some quotation by one of the executives involved in FS) http://www.garlic.com/~lynn/2007l.html#10 John W. Backus, 82, Fortran developer, dies

one might claim that when FS was killed, SNA attempted to still meet some of the FS objectives with the PU4/PU5 interface for advanced terminal control infrastructure. about the same time that SNA was starting, my wife co-authored peer-to-peer networking (AWP39) ... which defined real networking ... rather than complex terminal control. Possibly some amount of semantic confusion lingered on because the term SNA contained the word network.
Later, when my wife was con'ed into going to POK to be in charge of loosely-coupled architecture, she created peer-coupled shared data architecture (... and except for IMS hot-standby, didn't see a lot of uptake until sysplex) http://www.garlic.com/~lynn/subtopic.html#shareddata she had lots of battles with the SNA organization over peer-coupled shared data ... eventually there was a temporary truce with my wife being able to specify peer-coupled operation as long as it was within the walls of the same/single machine room (datacenter) ... but SNA was mandated if it crossed the walls of the machine room.

much later, APPN was specified in AWP164 and when there was an attempt to announce/release APPN, the SNA organization non-concurred (at the time, the person responsible for APPN and I reported to the same executive). The APPN announcement was escalated and eventually the announcement letter was carefully rewritten to not imply that APPN had any relationship at all to SNA.

misc. past posts mentioning AWP39 and/or AWP164: http://www.garlic.com/~lynn/2004n.html#38 RS/6000 in Sysplex Environment http://www.garlic.com/~lynn/2004p.html#31 IBM 3705 and UC.5 http://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back http://www.garlic.com/~lynn/2005p.html#15 DUMP Datasets and SMS http://www.garlic.com/~lynn/2005p.html#17 DUMP Datasets and SMS http://www.garlic.com/~lynn/2005q.html#27 What ever happened to Tandem and NonStop OS ? http://www.garlic.com/~lynn/2005u.html#23 Channel Distances http://www.garlic.com/~lynn/2006h.html#52 Need Help defining an AS400 with an IP address to the mainframe http://www.garlic.com/~lynn/2006j.html#31 virtual memory http://www.garlic.com/~lynn/2006k.html#9 Arpa address http://www.garlic.com/~lynn/2006k.html#21 Sending CONSOLE/SYSLOG To Off-Mainframe Server http://www.garlic.com/~lynn/2006l.html#4 Google Architecture http://www.garlic.com/~lynn/2006l.html#45 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?) 
http://www.garlic.com/~lynn/2006o.html#62 Greatest Software, System R http://www.garlic.com/~lynn/2006r.html#4 Was FORTRAN buggy? http://www.garlic.com/~lynn/2006r.html#9 Was FORTRAN buggy? http://www.garlic.com/~lynn/2006t.html#36 The Future of CPUs: What's After Multi-Core? http://www.garlic.com/~lynn/2006u.html#28 Assembler question http://www.garlic.com/~lynn/2006u.html#55 What's a mainframe? http://www.garlic.com/~lynn/2007b.html#9 Mainframe vs. Server (Was Just another example of mainframe http://www.garlic.com/~lynn/2007b.html#48 6400 impact printer http://www.garlic.com/~lynn/2007b.html#49 6400 impact printer http://www.garlic.com/~lynn/2007d.html#55 Is computer history taugh now? http://www.garlic.com/~lynn/2007h.html#35 sizeof() was: The Perfect Computer - 36 bits? http://www.garlic.com/~lynn/2007h.html#39 sizeof() was: The Perfect Computer - 36 bits? -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the
Re: Questions to the list
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main as well.

[EMAIL PROTECTED] (Tom Schmidt) writes: It seems to me that only a few years ago (and probably in many of the hundreds of recycled I remember when... threads lately) we were, as a group, lamenting that we were all getting older and there was little new blood being introduced to the mainframe. We were also having 'sour grapes' discussions about workload moving off the mainframe and companies abandoning the mainframe altogether.

one of the issues ... especially on the usenet side ... was that at the start of each new semester ... there would be some new flurry of homework questions ... from individuals taking a computer class and getting possibly their first exposure to terminals and online infrastructures. as online infrastructures started to permeate the whole culture ... it is now possible to find homework questions happening all thru the yr. there is some balance between answering questions where the asker has actually made some attempt to learn something ... or is using the list in lieu of having to learn anything.

misc. past posts mentioning the homework issue: http://www.garlic.com/~lynn/2000.html#28 Homework: Negative side of MVS? http://www.garlic.com/~lynn/2000.html#32 Homework: Negative side of MVS? http://www.garlic.com/~lynn/2001.html#70 what is interrupt mask register? http://www.garlic.com/~lynn/2001b.html#38 Why SMP at all anymore? http://www.garlic.com/~lynn/2001c.html#11 Memory management - Page replacement http://www.garlic.com/~lynn/2001c.html#25 Use of ICM http://www.garlic.com/~lynn/2001k.html#75 Disappointed http://www.garlic.com/~lynn/2001l.html#0 Disappointed http://www.garlic.com/~lynn/2001m.html#0 7.2 Install upgrade to ext3 LOSES DATA http://www.garlic.com/~lynn/2001m.html#32 Number of combinations in five digit lock? 
(or: Help, my brain hurts) http://www.garlic.com/~lynn/2002c.html#2 Need article on Cache schemes http://www.garlic.com/~lynn/2002f.html#32 Biometric Encryption: the solution for network intruders? http://www.garlic.com/~lynn/2002f.html#40 e-commerce future http://www.garlic.com/~lynn/2002g.html#83 Questions about computer security http://www.garlic.com/~lynn/2002l.html#58 Spin Loop? http://www.garlic.com/~lynn/2002l.html#59 Spin Loop? http://www.garlic.com/~lynn/2002n.html#13 Help! Good protocol for national ID card? http://www.garlic.com/~lynn/2002o.html#35 META: Newsgroup cliques? http://www.garlic.com/~lynn/2003d.html#27 [urgent] which OSI layer is SSL located? http://www.garlic.com/~lynn/2003j.html#34 Interrupt in an IBM mainframe http://www.garlic.com/~lynn/2003m.html#41 Issues in Using Virtual Address for addressing the Cache http://www.garlic.com/~lynn/2003m.html#46 OSI protocol header http://www.garlic.com/~lynn/2003n.html#4 Dual Signature http://www.garlic.com/~lynn/2004f.html#43 can a program be run withour main memory ? http://www.garlic.com/~lynn/2004f.html#51 before execution does it require whole program 2 b loaded in http://www.garlic.com/~lynn/2004f.html#61 Infiniband - practicalities for small clusters http://www.garlic.com/~lynn/2004h.html#47 very basic quextions: public key encryption http://www.garlic.com/~lynn/2004k.html#34 August 23, 1957 http://www.garlic.com/~lynn/2005h.html#1 Single System Image questions http://www.garlic.com/~lynn/2005m.html#50 Cluster computing drawbacks http://www.garlic.com/~lynn/2006.html#16 Would multi-core replace SMPs? http://www.garlic.com/~lynn/2006b.html#2 Mount a tape http://www.garlic.com/~lynn/2006h.html#40 Mainframe vs. 
xSeries http://www.garlic.com/~lynn/2006l.html#54 Memory Mapped I/O Vs I/O Mapped I/O http://www.garlic.com/~lynn/2007f.html#16 more shared segment archeology -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Friday musings on the future of 3270 applications
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Rich Smrcina) writes: If I understand what you're asking there are products on the market that can do this today. As long as there is a 3270 on the back end (specifically TN3270), a web interface or a web service is presented on the front end.

part of this is some of the whole history of terminal emulation. 3270 terminal emulation contributed significantly to early uptake of the ibm/pc ... i.e. businesses that had already allocated money for a 3270 terminal ... it became nearly a no-brainer to switch to an ibm/pc ... the price was about the same ... and in a single desktop footprint the business got both a 3270 terminal and some possibly added-value local computing. http://www.garlic.com/~lynn/subnetwork.html#emulation this contributed to a significant install base of 3270 terminal and terminal emulation products.

in the later part of the 80s ... we had come up with 3-tier architecture (as an enhancement to client/server) http://www.garlic.com/~lynn/subnetwork.html#3tier and were out doing some amount of customer executive presentations ... and taking a lot of heat from the T/R and SAA forces (to some extent SAA could be viewed as attempting to help preserve the terminal emulation paradigm and inhibit the spread of client/server ... and especially this new fangled 3-tier stuff).

we also were taking some amount of heat working with organizations around the nsfnet backbone effort (i.e. tcp/ip is considered the technology basis for the modern internet but the nsfnet backbone would be considered the operational basis for the modern internet). some old email from the period on the topic http://www.garlic.com/~lynn/lhwemail.html#nsfnet and after starting to cancel our meetings with outside entities ... then there was a suggestion that they should start proposing SNA/VTAM as the basis for the nsfnet backbone ...
specific old email reference http://www.garlic.com/~lynn/2006w.html#email870109 in this post http://www.garlic.com/~lynn/2006w.html#21 SNA/VTAM for NSFNET

one of the side happenings in all this was that we did get an NSF audit of the high-speed backbone we had running internally http://www.garlic.com/~lynn/subnetwork.html#internalnet which concluded that what we had running was at least five yrs ahead of all NSFNET backbone bids (to build something new)

and for some topic drift ... tangential reference here http://www.garlic.com/~lynn/2007l.html#14 Superconductors and computing

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Is Parallel Programming Just Too Hard?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

re: http://www.garlic.com/~lynn/2007l.html#24 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007l.html#26 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007l.html#34 Is Parallel Programming Just Too Hard?

recent news items: Intel Pledges 80 Core Processor in 5 Years http://hardware.slashdot.org/hardware/06/09/26/1937237.shtml Intel shows off 80-core processor http://news.com.com/Intel+shows+off+80-core+processor/2100-1006_3-6158181.html Next Windows To Get Multicore Redesign http://developers.slashdot.org/article.pl?sid=07/05/31/1257231

part of the issue is that a lot of parallel processing has been limited to the high-end market ... where highly skilled programming could be used to manage large amounts of shared resources ... effectively working concurrently on different activity from independent sources. as parallel hardware has started to move downstream into the standard consumer market ... the issue in the past couple yrs is how to change the (mostly) sequential programming paradigm to better utilize the independent/parallel hardware resources that are available.

the hardware technology motivation is that as components are shrinking ... things like signal latency and synchronized, serial operation are starting to represent a significant limiting factor ... going to asynchronous operation ... even across the distances involved in a typical chip ... can contribute to significant thruput increases.

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Is Parallel Programming Just Too Hard?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Howard Brazee) writes: Depending on one's definition of parallel programming, we have been doing it to various degrees since before they started off-loading the paper-tape reading to the paper-tape reader. Video cards on PCs are powerful computers that work in parallel with the program's main logic. Our operating systems have allowed us to run payroll and accounts payable at the same time, and central databases have expanded on this ability.

lots of comments in the past couple yrs have been that the technology in support of parallel programming has not really changed in at least the past 20 yrs ... as a result, the actual use has been limited to very specialized implementations. there has been lots of stuff in multiprogramming and multithreading in the same processor complex (single processor and/or shared-memory multiprocessor). multiprogramming was managing lots of independent, different tasks on the same processor complex. multithreading was a program managing its own different tasks.

application implementation of multithreading isn't necessarily very pervasive. some number of DBMS implementations have used things like a transaction model ... to provide independent operations ... that they multithread. In this sense, DBMS kernels are somewhat more like operating system kernels ... highly specialized ... and not a lot of end-users implement their own DBMS kernels.

in a lot of multiprocessor kernel support ... a global kernel lock was used ... which only allowed a single thread to be executing in the kernel at a time. it was a somewhat painful experience for a lot of kernel implementations to make the transition from a single thread (at a time) executing in a multiprocessor kernel to multiple concurrent threads executing in the same parts of the kernel.
long ago and far away, this was one of the battles getting the compare&swap instruction into 370 architecture. test&set had been around in the 60s and was used for 360/65 multiprocessor support with global kernel spin-locks (set the lock and everybody else spins, until the lock is cleared). at the science center http://www.garlic.com/~lynn/subtopic.html#545tech Charlie had been doing a lot of work on fine-grain locking for the cp67 kernel and invented the compare&swap instruction (mnemonic chosen because CAS are charlie's initials). misc. past posts mentioning SMPs and/or compare&swap http://www.garlic.com/~lynn/subtopic.html#smp

somewhat implicit in a lot of compare&swap uses is that there can be concurrent threads executing in the same instruction sequences simultaneously. the initial foray into POK attempting to get compare&swap justified was unsuccessful, in large part because the favorite son operating system felt that test&set was just fine for multiprocessor support (the 360/65 smp global spin lock paradigm). the challenge was to create justification for the compare&swap instruction that was applicable to single processor deployment. Thus were born the programming notes that can be found in the principles of operation describing how the atomic characteristics of compare&swap can be leveraged in a single processor environment for multithreaded applications (like DBMS) ... these aren't necessarily concurrent multithreads ... but multiple threads that might be interrupted, and so atomic operations can be applied to both simultaneous concurrent multithread operation as well as possibly non-simultaneous (but interruptible) multithreaded operation.

the advances in concurrent, parallel technology into loosely-coupled/cluster deployments are even more limited than the proliferation in tightly-coupled environments. we had done a scalable distributed lock manager in support of our ha/cmp product http://www.garlic.com/~lynn/subtopic.html#hacmp and the medusa cluster-in-a-rack activity ...
old email http://www.garlic.com/~lynn/lhwemail.html#medusa and somewhat referenced in these postings about old meeting http://www.garlic.com/~lynn/95.html#13 http://www.garlic.com/~lynn/96.html#15 ... but again ... it tended to be directly used by a very limited amount of specialized code ... there wasn't a huge number of different applications directly implementing semantics of highly parallel operation (for either tightly-coupled or loosely-coupled configurations). a couple recent posts in another thread/fora on the subject http://www.garlic.com/~lynn/2007l.html#15 John W. Backus, 82, Fortran developer, dies http://www.garlic.com/~lynn/2007l.html#19 John W. Backus, 82, Fortran developer, dies http://www.garlic.com/~lynn/2007l.html#23 John W. Backus, 82, Fortran developer, dies -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at
Re: Is Parallel Programming Just Too Hard?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. Anne Lynn Wheeler [EMAIL PROTECTED] writes: long ago and far away, this was one of the battles getting the compare&swap instruction into 370 architecture. test&set had been around in the 60s and was used for 360/65 multiprocessor support with global kernel spin-locks (set the lock and everybody else spins, until the lock is cleared). ... somewhat implicit in a lot of compare&swap uses is that there can be concurrent threads executing in the same instruction sequences simultaneously. the initial foray into POK attempting to get compare&swap justified was unsuccessful, in large part because the favorite son operating system felt that test&set was just fine for multiprocessor support (the 360/65 smp global spin lock paradigm). the challenge was to create justification for the compare&swap instruction that was applicable to single processor deployment. Thus were born the programming notes that can be found in principles of operation describing how the atomic characteristics of compare&swap can be leveraged in a single processor environment for multithreaded applications (like DBMS) ... these aren't necessarily concurrent multithreads ... but multiple threads that might be interrupted, and so atomic operations can be applied both to simultaneous concurrent multithread operation as well as to possibly non-simultaneous (but interruptible) multithreaded operation. re: http://www.garlic.com/~lynn/2007l.html#24 Is Parallel Programming Just Too Hard? misc. past posts mentioning smp and/or the compare&swap instruction http://www.garlic.com/~lynn/subtopic.html#smp in the mid-70s i was working on a 5-way SMP implementation; it involved one of the lower-end 370 processor designs ... and was moving lots of features into microcode. for one reason or another that project got killed, misc.
past posts discussing the effort http://www.garlic.com/~lynn/subtopic.html#bounce shortly after that got killed, there was another project started for a 16-way smp involving higher-end processors. we even co-opted the spare time of some of the processor engineers furiously attempting to complete the 3033. in general, most people that looked at it thought it was a really great idea ... until it came to the attention of the head of POK that it would possibly be decades before the POK favorite son operating system would be able to support the machine. At which time, the 3033 engineers were instructed to get their noses back to the grindstone and some people were invited to never show up in POK again. misc. past references: http://www.garlic.com/~lynn/95.html#5 Who started RISC? (was: 64 bit Linux?) http://www.garlic.com/~lynn/95.html#6 801 http://www.garlic.com/~lynn/95.html#11 801 power/pc http://www.garlic.com/~lynn/98.html#40 Comparison Cluster vs SMP? http://www.garlic.com/~lynn/2000.html#86 Ux's good points. http://www.garlic.com/~lynn/2001e.html#5 SIMTICS http://www.garlic.com/~lynn/2001h.html#33 D http://www.garlic.com/~lynn/2002i.html#82 HONE http://www.garlic.com/~lynn/2003.html#4 vax6k.openecs.org rebirth http://www.garlic.com/~lynn/2003.html#5 vax6k.openecs.org rebirth http://www.garlic.com/~lynn/2004f.html#21 Infiniband - practicalities for small clusters http://www.garlic.com/~lynn/2004f.html#26 command line switches [Re: [REALLY OT!] Overuse of symbolic http://www.garlic.com/~lynn/2004j.html#45 A quote from Crypto-Gram http://www.garlic.com/~lynn/2004m.html#53 4GHz is the glass ceiling? http://www.garlic.com/~lynn/2005k.html#45 Performance and Capacity Planning http://www.garlic.com/~lynn/2005m.html#48 Code density and performance? http://www.garlic.com/~lynn/2005p.html#39 What ever happened to Tandem and NonStop OS ?
http://www.garlic.com/~lynn/2006c.html#40 IBM 610 workstation computer http://www.garlic.com/~lynn/2006l.html#30 One or two CPUs - the pros cons http://www.garlic.com/~lynn/2006n.html#37 History: How did Forth get its stacks? http://www.garlic.com/~lynn/2006r.html#22 Was FORTRAN buggy? http://www.garlic.com/~lynn/2006t.html#7 32 or even 64 registers for x86-64? http://www.garlic.com/~lynn/2006t.html#9 32 or even 64 registers for x86-64? http://www.garlic.com/~lynn/2007g.html#17 The Perfect Computer - 36 bits? http://www.garlic.com/~lynn/2007g.html#44 1960s: IBM mgmt mistrust of SLT for ICs? http://www.garlic.com/~lynn/2007g.html#57 IBM to the PCM market(the sky is falling!!!the sky is falling!!) -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
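the single-processor justification in the programming notes boils down to the compare&swap retry idiom; a minimal python sketch of the semantics (the atomicity is emulated with a mutex, since python has no CS instruction -- the helper names are hypothetical):

```python
import threading

_atomic = threading.Lock()  # stands in for the hardware atomicity of compare&swap
cell = [0]                  # one shared word

def compare_and_swap(cell, expected, new):
    """Sketch of 370 CS semantics: store 'new' only if the cell still holds
    'expected'; report failure (and store nothing) if some other thread --
    truly concurrent, or merely interleaved by interrupts -- got there first."""
    with _atomic:
        if cell[0] == expected:
            cell[0] = new
            return True
        return False

def atomic_add(cell, n):
    # the retry idiom from the programming notes: fetch, compute the new
    # value, attempt the swap, and retry from the top if the swap fails
    while True:
        old = cell[0]
        if compare_and_swap(cell, old, old + n):
            return

def worker():
    for _ in range(1000):
        atomic_add(cell, 1)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(cell[0])  # 4000 -- no update lost, no lock held across the computation
```

note the idiom never blocks a preempted thread: whether the competing threads run on other processors or are merely time-sliced on one processor, a failed swap just retries, which is exactly why it could be justified for single-processor deployment.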
Re: Non-Standard Mainframe Language?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. Dan Espen [EMAIL PROTECTED] writes: No, in the sense that C was pretty close to the assembler for the machine UNIX was first developed on. It would have been interesting if K&R went on to implement UNIX on a 360 type machine. I assume they would have extended the C language and library functions to better exploit the hardware. re: http://www.garlic.com/~lynn/2007l.html#18 Non-Standard Mainframe Language? when i was an undergraduate, i had added tty/ascii terminal support to cp67 ... in the process of doing that ... ran into some deficiencies in the 2702 terminal controller ... which somewhat prompted a project to build our own clone controller out of an interdata/3 ... which had a somewhat 360-like instruction set. recent post making reference: http://www.garlic.com/~lynn/2007l.html#11 John W. Backus, 82, Fortran developer, dies there was some article blaming us (at least in part) for the clone controller business. lots of past posts mentioning clone controllers http://www.garlic.com/~lynn/subtopic.html#360pcm all the references I've seen regarding redoing C UNIX for portability make mention of (later) interdata machines (again 360-like) The First Unix Port http://www.usenix.org/publications/library/proceedings/usenix98/invited_talks/miller.ps Version 6 Unix - Wikipedia, the free encyclopedia http://en.wikipedia.org/wiki/Version_6_Unix Interdata_v6 http://minnie.tuhs.org/UnixTree/Interdata_v6/ Anecdotes http://doi.ieeecomputersociety.org/10.1109/MAHC.1989.10025 The Daemon, the GNU and the Penguin - Chapter 2 and 3 http://www.icims.csl.uiuc.edu/~lheal/doc/dgp/chapter02_03.html of course these machines were quite a bit after the interdata/3 Interdata 7/32 and 8/32 http://en.wikipedia.org/wiki/Interdata_7/32 in the above ... references to perkin-elmer having bought interdata and quite a bit of success in defense and aerospace industries.
people i've talked to since have said that a lot of the sales involved attaching to an ibm mainframe ... and the channel attach board didn't appear to have been redesigned since our original (still wire-wrap). Interdata Simulator Configuration http://simh.trailing-edge.com/interdata.html from above: Interdata was founded in the mid 1960's. It produced a family of 16b minicomputers loosely modeled on the IBM 360 architecture. Microprogramming allowed a steady increase in the functionality of successive models. * Interdata 3 * Interdata 4 (autoload, floating point) * Interdata 5 (list processing, microcoded automatic I/O channel) * Interdata 70, 74, 80 * Interdata 6/16, 7/16 * Interdata 8/16, 8/16e (double precision floating point, extended memory) In the early 1970's, Interdata was purchased by Perkin-Elmer. In 1974, it introduced one of the first 32b minicomputers, the 7/32. Several generations of 32b systems followed: * Interdata 7/32 * Interdata 8/32 * Perkin-Elmer 3205, 3210, 3220 * Perkin-Elmer 3250 Interdata was spun out of Perkin-Elmer as Concurrent Computer Corporation. -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Non-Standard Mainframe Language?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. [EMAIL PROTECTED] (Paul Gilmartin) writes: Much of this is due to the reliance on null-terminated strings, which are not peculiar to C, but are rooted in the UNIX continuum between applications programming and systems programming. i've actually had this discussion with some of the people involved; null allowed for one byte of overhead for arbitrary lengths ... somewhat the y2k phenomenon ... as opposed to the two byte explicit length overhead (for up to 64k). x-over post on the subject from today in another fora http://www.garlic.com/~lynn/2007l.html#11 John W. Backus, 82, Fortran developer, dies lots of posts on the subject of exploits/failures related to the characteristic http://www.garlic.com/~lynn/subintegrity.html#overflow i had been monitoring some of the statistics thru the 90s ... but more recently there were much fewer ... so i had to do some analysis myself ... looking at some of the exploit databases. part of the problem was that many of the descriptions were somewhat freeform and could be ambiguous ... which i complained about a number of times. there were some more recent announcements that they would be attempting to better classify/categorize exploits. old posts with some attempts at classification/categorization based on analysis of some of the exploit databases http://www.garlic.com/~lynn/2004e.html#43 security taxonomy and CVE http://www.garlic.com/~lynn/2004j.html#58 Vintage computers are better than modern crap ! http://www.garlic.com/~lynn/2005c.html#32 [Lit.] Buffer overruns and this one mentions an article in early 2005 quoting a NIST study that came up with similar statistics to those I had come up with nearly a year earlier: http://www.garlic.com/~lynn/2005b.html#43 [Lit.] Buffer overruns note part of the mentioned efforts was in support of my merged security taxonomy and glossary ...
some notes here: http://www.garlic.com/~lynn/index.html#glosnote past posts in this thread: http://www.garlic.com/~lynn/2007k.html#65 Non-Standard Mainframe Language? http://www.garlic.com/~lynn/2007k.html#67 Non-Standard Mainframe Language? http://www.garlic.com/~lynn/2007k.html#73 Non-Standard Mainframe Language? http://www.garlic.com/~lynn/2007k.html#74 Non-Standard Mainframe Language? -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
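the tradeoff described above -- a single terminator byte for arbitrary lengths versus a two-byte explicit length capped at 64k -- can be sketched (helper names are hypothetical, python for illustration):

```python
def encode_nul(s: bytes) -> bytes:
    """C-style null termination: one byte of overhead for any length --
    but the data may not itself contain a NUL, finding the length is a
    scan, and a missing terminator is the classic buffer overrun."""
    assert b"\x00" not in s  # a NUL in the data would silently truncate it
    return s + b"\x00"

def encode_len16(s: bytes) -> bytes:
    """Explicit two-byte length: length known up front and any byte
    values allowed -- but capped at 64k-1."""
    assert len(s) < 0x10000  # the y2k-style built-in limit
    return len(s).to_bytes(2, "big") + s

data = b"hello"
print(encode_nul(data))    # b'hello\x00'     -- 1 byte of overhead
print(encode_len16(data))  # b'\x00\x05hello' -- 2 bytes of overhead
```

the one-byte saving and the unbounded length were the attraction; the cost, decades later, shows up in the overflow exploit statistics mentioned above.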
Re: Non-Standard Mainframe Language?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. [EMAIL PROTECTED] (Craddock, Chris) writes: I never actually met a processor with the (mythical?) APL assist feature. However, I did write mountains of APL throughout the 1980s. APL was always thought of as a resource hog. IMHO it could be very efficient or grotesque, depending on your data structures and algorithms. If you wrote programs in the style of 3 GLs, it was typically a dog. re: http://www.garlic.com/~lynn/2007k.html#65 Non-Standard Mainframe Language? The APL microcode assist was done for the 370/145 by the Palo Alto Science Center ... sort of as part of their doing APL\CMS. This was also made available on the 370/148. As mentioned, the Cambridge Science Center had originally done the port of APL\360 to CMS for CMS\APL (i.e. back when it was Cambridge Monitor System for CP67; as part of the morph of CP67 to VM370 they renamed CMS the Conversational Monitor System). The APL microcode assist gave APL\CMS applications on a 370/145 about the same processor thruput as the same APL\CMS application running on a 370/168 (nearly ten times thruput increase). When the HONE datacenters were consolidated in silicon valley (actually across the back parking lot from the Palo Alto Science Center) ... they looked at whether they could take any of their APL-application-intensive workload and move it off 168s to 145s. The problem was that not only were their applications quite processor intensive (for which the 145 microcode assist would have given about equivalence) but also real storage and I/O intensive (which would have severely degraded if they had moved from a 168 to a 145/148). For nearly 15 yrs, i provided highly modified/customized versions of the cp67 kernel and later vm370 kernels for HONE (and a large number of other internal datacenters). I also periodically got involved in reviewing various APL applications from a performance tuning standpoint ...
aka the majority of the applications providing world-wide support for sales and marketing were implemented in APL running on CMS. Lots of past posts mentioning HONE and/or APL http://www.garlic.com/~lynn/subtopic.html#hone Eventually there was an APL language development group formed in STL which picked up APL\CMS responsibility as well as making it available on MVS ... renaming it VS\APL (and later APL2). Trivia ... in the early to mid 80s, the manager of the APL group in STL transferred to Palo Alto to head up a new group doing a port of BSD Unix to 370. I got to attend some of the design sessions and also helped obtain a 370 C compiler for the effort. Before that specific implementation shipped, the group had their BSD porting efforts retargeted to the PC/RT ... eventually shipping AOS (the C compiler vendor being used had to retarget the backend from 370 to ROMP). misc. past posts mentioning 801/ROMP as well as risc, Iliad, RIOS, rs/6000, power/pc, etc http://www.garlic.com/~lynn/subtopic.html#801 The APL microcode assist was not made available on other processors. The 145/148 microcode engine was a vertical microcode engine, executing approx. 10 microcode instructions per 370 instruction (some of the modern i86-based 370 simulators have similar ratio characteristics). The 370/165 had a horizontal microcode engine ... and achieved an avg. of 2.1 machine cycles per 370 instruction ... which was improved to 1.6 machine cycles per 370 instruction (and hit nearly 1:1 with the 3033). Since 370 instructions were executing very close to hardware speed on the high-end processors ... there was frequently very little performance benefit in doing a 1-for-1 translation of 370 instructions into native hardware. The exception was the virtual machine microcode assists on the high-end processors ... however these weren't a 1:1 translation of 370 instructions to native instructions.
In the virtual machine assists, the instruction emulation for privileged instructions was modified to directly perform the privileged operation while in problem state (but according to virtual machine execution rules ... sort of a 3rd machine state). This avoided the interrupt into the kernel, having to save registers and other state change overhead ... redecode the operation in software and perform the necessary operation, and then switch back to virtual machine problem state execution. In addition to stuff like the APL microcode assist done for the 145/148 ... there was the VM kernel assist ECPS done for both the 138 and 148. This took about 6k bytes of vm370 kernel code and moved it into native microcode of the machines (again getting about a 10:1 thruput improvement). some old posts about how we went about selecting what parts of the kernel code were moved into microcode (some of the initial work involved help from some of the same people involved in doing the 145 APL microcode assist) http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist http://www.garlic.com/~lynn/94.html#27 370
Re: Non-Standard Mainframe Language?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. [EMAIL PROTECTED] (Shmuel Metz , Seymour J.) writes: Don't forget APL Shared Variables. re: http://www.garlic.com/~lynn/2007k.html#67 Non-Standard Mainframe Language? http://www.garlic.com/~lynn/2007k.html#70 Non-Standard Mainframe Language? yes, i skipped over some of the intermediate folklore. there was a big uproar created with the philly science center apl\360 group when the cambridge science center did cms\apl and added system services calls ... the claim was that it totally violated the spirit of the apl language ... although as i've referenced before, removing the trivial workspace size limits of apl\360 and providing for access to system services (like being able to do file reads/writes) ... really opened up being able to use cms\apl for real world applications. Eventually, APL shared variables was the effective come-back from the APL language purists on how to be able to access system services ... w/o corrupting the purity of the APL language. misc. past posts mentioning apl shared variables http://www.garlic.com/~lynn/97.html#4 Mythical beasts (was IBM... mainframe) http://www.garlic.com/~lynn/2002c.html#30 OS Workloads : Interactive etc http://www.garlic.com/~lynn/2002n.html#66 Mainframe Spreadsheets - 1980's History http://www.garlic.com/~lynn/2003n.html#8 The IBM 5100 and John Titor http://www.garlic.com/~lynn/2004c.html#7 IBM operating systems http://www.garlic.com/~lynn/2004n.html#37 passing of iverson http://www.garlic.com/~lynn/2005f.html#63 Moving assembler programs above the line http://www.garlic.com/~lynn/2005n.html#50 APL, J or K? http://www.garlic.com/~lynn/2006o.html#13 The SEL 840 computer -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Non-Standard Mainframe Language?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. [EMAIL PROTECTED] (Shmuel Metz , Seymour J.) writes: There were also assists for OS/VS1 and MVS/SE, to say nothing of the infamous ECPS:VSE. re: http://www.garlic.com/~lynn/2007k.html#67 Non-Standard Mainframe Language? http://www.garlic.com/~lynn/2007k.html#70 Non-Standard Mainframe Language? http://www.garlic.com/~lynn/2007k.html#73 Non-Standard Mainframe Language? on the 145/148 ... for lots of typical kernel instruction paths ... there was approximately a one-for-one byte translation from 370 into microcode. the 145 allowed for scavenging part of processor memory for microcode. that was changed in the 148 ... and after the OS/VS1 microcode assist was done for the 148 ... there was only 6kbytes left in dedicated 148 microcode storage for VM370 ECPS. This somewhat contributed to us doing a significantly better job of choosing the highest used vm370 instruction paths (vis-a-vis the vs1 effort) for dropping into microcode. basically all the instruction paths thru the vm370 kernel were carefully profiled and then ranked by use ... and then the top 6k bytes were chosen for migration to 148 m'code ... refs: http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist http://www.garlic.com/~lynn/94.html#27 370 ECPS VM microcode assist http://www.garlic.com/~lynn/94.html#28 370 ECPS VM microcode assist The MVS/SE microcode assist would have been much more problematical since it was applied to the high-end horizontal m'code machines ... where there was already nearly one-for-one between 370 execution and microcode execution; it wouldn't have been possible to pick up the 10:1 improvement found in the low/mid-range microcoded machines (and in some cases, trying to do a straight-forward one-for-one movement of blocks of 370 instructions to horizontal microcode would actually increase processing time).
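the selection process described above -- profile every kernel path, rank by use, and take the highest-payoff paths until the 6k bytes of microcode storage are spent -- amounts to a greedy knapsack; a sketch with made-up path names and numbers (the real figures came from instrumenting the vm370 kernel, not from anything below):

```python
# hypothetical profile data -- (path, bytes of code, fraction of kernel time)
profile = [
    ("dispatch",       900, 0.22),
    ("free-storage",   700, 0.18),
    ("page-fault",    1400, 0.17),
    ("ccw-translate", 2600, 0.15),
    ("unstack",        500, 0.09),
    ("scheduler",     2200, 0.06),
]

BUDGET = 6 * 1024  # bytes of dedicated microcode storage left over

def pick_paths(profile, budget):
    """Greedy knapsack: rank paths by kernel time covered per byte of
    microcode, then take them in order while they still fit."""
    chosen, used, covered = [], 0, 0.0
    for name, size, frac in sorted(profile, key=lambda p: p[2] / p[1],
                                   reverse=True):
        if used + size <= budget:
            chosen.append(name)
            used += size
            covered += frac
    return chosen, used, covered

chosen, used, covered = pick_paths(profile, BUDGET)
print(used, round(covered, 2))  # 6100 0.81 -- everything but 'scheduler' fits
```

with a ~10:1 microcode speedup on those paths, covering ~80% of kernel time with 6k bytes is where the overall ECPS gain came from; a less careful ranking (the vs1 effort mentioned above) covers much less time with the same budget.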
The place where the vm370 virtual machine microcode assists worked across the whole machine line ... was being able to eliminate the priv. op interrupts into the vm370 kernel ... the emulation of 370 supervisor state instructions, when running in the special virtual machine problem state, executed the instructions directly. This wasn't a one-for-one movement of kernel instructions to microcode instructions ... this was the total elimination of the interrupt processing, context switch, and a bunch of other kernel overhead stuff. This was further demonstrated when Amdahl implemented hypervisor in their macrocode ... a sort of 370 instruction set running in a special hardware mode. The response was PR/SM on the 3090 (which was a much more difficult undertaking since it was native horizontal microcode programming). -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: [META] Is WaveMind spamming entire IBM-MAIN readership?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main as well. me too -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Non-Standard Mainframe Language?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. [EMAIL PROTECTED] (Gerhard Postpischil) writes: Back in the seventies I was in charge of the systems group at a service bureau. One of our customers was from a local university, running an APL application that tracked students vs. classes, and a few other things. It was a gold mine - whenever it ran, the CPU went 100% busy and stayed that way for a long time. The same thing written in another language might have cost one or two percent as much. recent posts mentioning the world-wide hone system http://www.garlic.com/~lynn/2007.html#30 V2X2 vs. Shark (SnapShot v. FlashCopy) http://www.garlic.com/~lynn/2007.html#31 V2X2 vs. Shark (SnapShot v. FlashCopy) http://www.garlic.com/~lynn/2007.html#46 How many 36-bit Unix ports in the old days? http://www.garlic.com/~lynn/2007b.html#51 Special characters in passwords was Re: RACF - Password rules http://www.garlic.com/~lynn/2007d.html#39 old tapes http://www.garlic.com/~lynn/2007e.html#38 FBA rant http://www.garlic.com/~lynn/2007e.html#41 IBM S/360 series operating systems history http://www.garlic.com/~lynn/2007f.html#12 FBA rant http://www.garlic.com/~lynn/2007f.html#20 Historical curiosity question http://www.garlic.com/~lynn/2007g.html#31 Wylbur and Paging http://www.garlic.com/~lynn/2007i.html#34 Internal DASD Pathing http://www.garlic.com/~lynn/2007i.html#77 Sizing CPU http://www.garlic.com/~lynn/2007j.html#65 Help settle a job title/role debate http://www.garlic.com/~lynn/2007k.html#60 3350 failures HONE (hands-on) started out in the US with cp67 ... sort of to allow branch office SEs to have hands-on with various operating systems (running in virtual machines). prior to the 23jun69 unbundling announcement http://www.garlic.com/~lynn/subtopic.html#unbundle a lot of SEs got much of their hands-on experience in their customer accounts.
after the unbundling announcement, SE time was being charged for ... and not a lot of customers were interested in paying to have SEs learn. however, the science center http://www.garlic.com/~lynn/subtopic.html#545tech in addition to doing virtual machines, cms, inventing GML (precursor to SGML, HTML, XML, etc) http://www.garlic.com/~lynn/subtopic.html#sgml and the internal networking technology http://www.garlic.com/~lynn/subnetwork.html#internalnet which was also used in bitnet and earn http://www.garlic.com/~lynn/subnetwork.html#bitnet also did a port of apl\360 to cms (cms\apl). apl\360 had its own monitor, scheduler, workspace swapping, terminal handler, etc ... all of which could be discarded in the port for cms\apl. also in moving from the 16kbyte (sometimes 32kbyte) real workspace sizes to CMS ... where the workspace size could be all of virtual memory ... the whole way that APL managed storage had to be reworked (the real storage strategy resulted in enormous page thrashing). part of cms\apl was also the ability to access system services (things like read/write files) ... something that apl didn't previously have. the combination of really large workspace sizes and the access to system services ... opened up APL for a lot of real-world problems. A lot of modeling of all kinds was done ... as well as a lot of stuff that these days is implemented with spreadsheets. Among the early big APL users (at cambridge) were a number of business planners from corporate hdqtrs in armonk. they forwarded a tape to cambridge with all of the most sensitive corporate customer business data ... and would do a significant amount of business modeling and planning. this created an interesting security scenario for the service at cambridge, since there were a lot of non-employees using the system from various educational institutions in the cambridge area. one instance is this slightly related DNS trivia topic drift ...
more than a decade before DNS http://www.garlic.com/~lynn/2007k.html#33 Even worse than Unix Before long there were a significant number of CMS\APL applications written that supported sales & marketing and were deployed on the HONE system ... effectively taking over its whole use for sales & marketing (and eliminating the original hands-on use for SEs). Before long, sales couldn't even submit customer orders that hadn't been processed by some CMS\APL application. HONE transitioned from a cp67 to a vm370-based platform and from cms\apl to apl\cms (enhancements done by the palo alto science center ... including the 370/145 apl microcode assist) ... and clones of the (US) HONE system were sprouting up all over the world (some of the early ones i even got to handle ... like when EMEA hdqtrs moved from the US to Paris). lots of other posts mentioning HONE and/or APL http://www.garlic.com/~lynn/subtopic.html#hone in the mid-70s, the US HONE datacenters were consolidated in silicon valley. The large customer base (all US sales and marketing) drove the requirement for a large disk farm ... and the heavy
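the workspace storage-management point above (apl\360's strategy was tuned for a 16kbyte real workspace and caused enormous page thrashing when the workspace became all of virtual memory) can be sketched; the page size, value sizes, and assignment counts below are purely illustrative:

```python
PAGE = 4096  # illustrative page size

def pages_touched_naive(assignments, value_bytes):
    """apl\\360-style sketch: every assignment allocates a fresh cell, so
    the workspace keeps growing until garbage collection -- in a
    demand-paged workspace each fresh cell can drag in another page."""
    return assignments * value_bytes // PAGE + 1

def pages_touched_reuse(assignments, value_bytes):
    """reworked strategy: assign in place, touching only the pages the
    live value actually occupies."""
    return value_bytes // PAGE + 1

# 100,000 assignments of a 64-byte value
print(pages_touched_naive(100_000, 64))  # 1563 pages cycled through
print(pages_touched_reuse(100_000, 64))  # 1 page -- no thrashing
```

in a 16k real workspace the naive sweep was bounded and cheap; spread over a multi-megabyte virtual workspace, the same sweep becomes a stream of page faults, which is why the storage management had to be reworked for cms\apl.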
Re: 3350 failures
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. [EMAIL PROTECTED] (Ted MacNEIL) writes: I started with 3330's. And, I remember when STK (STC) showed us their first ICEBERG, and the size of the device was that of a standard conference table, weighed less and had a capacity an order (or 2) of magnitude larger than the 3330 farm I first tended. That 3330 farm was less than 50 GB, and we were considered a medium to large site. (Running on a 3081-D) re: http://www.garlic.com/~lynn/2007k.html#58 3350 failures the silicon valley area had at least three fairly large vm370 customer datacenters with good-sized disk farms ... there was SLAC (lots of data collection from the accelerator) and both Tymshare and the internal HONE operation ... both extensive online, timesharing services http://www.garlic.com/~lynn/subtopic.html#timeshare HONE had somewhat started out with a number of cp67 installations to provide hands-on virtual machine use for branch office SEs. recent reference: http://www.garlic.com/~lynn/2007j.html#65 Help settle a job title/role debate It then transitioned to vm370 and lots of online, interactive APL applications supporting sales & marketing ... i.e. at some point early in the 370 timeframe, there was a transition where machine orders couldn't even be submitted w/o first having been processed by a HONE configuration. In the mid-70s, the various (US) HONE datacenters were consolidated in the silicon valley area ... with what was possibly the largest single-system configuration in the world at the time (large datafarm with load balancing across a large number of processors in a loosely-coupled configuration). http://www.garlic.com/~lynn/subtopic.html#hone another large datacenter in silicon valley was Lockheed's DIALOG (online library titles and abstracts, which has gone thru a number of owners since that time) ... which had something like 300(?)
3330-clones in their data farm (the basic service was MVS ... but lots of it was run under VM ... on clone processors). -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: 3350 failures
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. [EMAIL PROTECTED] writes: I don't remember any 3350 problems as this device type was my first performance charge with doing internal pathing/volume placement based on performance metrics at timeshare NVIP back in the early 80's. I do however remember the 3350 to 3380 migration project which turned ugly when we were informed, post migration, that we needed plenum replacements on our 3380 E's/K's. IIRC the plenum connected to 2 different HDA's but I could be wrong on this point. Lots of long weekends with the media folks deciding how to play musical chairs with strings of DASD. re: http://www.garlic.com/~lynn/2007k.html#58 3350 failures http://www.garlic.com/~lynn/2007k.html#60 3350 failures old email http://www.garlic.com/~lynn/2007b.html#email800402 talks about a problem executing HIO/HDV to a 3350 when the (3880) control unit was busy (which may have also existed in the 3830) ... and the software fix to the i/o supervisor was to not do that. one of the 3350 to 3380 migration issues was that the 3380 had more data under each arm (proportionally in excess of any increase in 3380 thruput). internally we had some performance monitoring and modeling tools that would identify what 3350 data to move to what 3380, and some recommendations (in a heavily loaded environment) to leave a 3380 10-20 percent empty/idle (in order to have the same thruput as the 3350 configuration). there was a facetious proposal (even discussed at SHARE) for a special 3380 feature in the 3880 controller ... that would define extra-priced 3380 drives that were faster (by reducing the number of cylinders that could be accessed). This was for shops where the administrators couldn't resist completely filling a 3380 as a cost-effective measure (however, they would feel comfortable paying extra for a feature that prevented them from completely filling a 3380). misc.
past posts about getting to play dasd engineer in the disk engineering lab (bldg. 14) and the disk product test lab (bldg. 15). http://www.garlic.com/~lynn/subtopic.html#disk -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
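the leave-the-3380-partly-empty recommendation above is simple arm-load arithmetic; the ratios below are illustrative assumptions, not the actual device specs:

```python
# illustrative ratios: suppose a 3380 arm is somewhat faster than a 3350
# arm, but sits over proportionally more data
DATA_RATIO  = 2.0   # data under one 3380 arm / data under one 3350 arm
SPEED_RATIO = 1.6   # 3380 arm service rate / 3350 arm service rate

def relative_arm_load(fill_fraction):
    """Arm utilization of a 3380, relative to a 3350 serving the same
    per-byte access rate, when only fill_fraction of the 3380 is used."""
    return DATA_RATIO * fill_fraction / SPEED_RATIO

print(round(relative_arm_load(1.0), 2))  # 1.25 -- a filled 3380 arm is busier
print(round(relative_arm_load(0.8), 2))  # 1.0  -- ~20% empty restores parity
```

whenever the data-per-arm ratio exceeds the arm-speed ratio, filling the new drive makes each arm a worse bottleneck than before; derating the fill fraction by roughly the excess is what the modeling tools were recommending.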
Re: 3350 failures
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main as well. [EMAIL PROTECTED] writes: IBM 3880 - 1 or 2 (IBM DASD and Control Units Facts Folder G520-3075-2) old email with reference to finding bug in the 3350 support in 3880 controller (and possibility of same bug having been in 3830 controller) http://www.garlic.com/~lynn/2007b.html#email800402 in this recent post http://www.garlic.com/~lynn/2007b.html#28 What is command reject trying to tell me? above post also references early 3880 MVS RAS testing in this post http://www.garlic.com/~lynn/2007.html#2 The Elements of Programming Style -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Another migration from the mainframe
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main as well.

[EMAIL PROTECTED] (Richards.Bob) writes: I wonder if they will reveal the costs of extra hw/sw for high-availability and business continuance associated with this migration. Probably not.

when we were doing the ha/cmp product, they were one of the customers we called on http://www.garlic.com/~lynn/subtopic.html#hacmp

also, I had been asked to write a section in the corporate continuous availability strategy document. most of my writing got pulled because both Rochester and POK complained (that at the time, they couldn't meet what we were doing in ha/cmp). it was also in this period that we coined the terms disaster survivability and geographic survivability (to differentiate from disaster/recovery) http://www.garlic.com/~lynn/subtopic.html#available

for other drift, old email about what we had been doing about ha/cmp scaleup http://www.garlic.com/~lynn/lhwemail.html#medusa

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Another migration from the mainframe
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main as well.

[EMAIL PROTECTED] (Howard Brazee) writes: I'd also like to see that with politics, but politicians' pay is power, and that cannot be deferred. But it is more important for a President's policy to work for the long term than for a CEO's policy. Neither should be measured by "not on my term", but both should be measured by leaving a legacy that lasts. Build for the future - when the other guys are running the company/country.

i think that the comptroller general has suggested something similar for legislation ... that metrics be defined for any claimed benefits justifying some legislation ... and if the results fail to meet the metrics ... poof, it's gone. however, in speeches that the comptroller general has given over the past yr or so on some aspects of medicare/medicaid legislation ... he has commented that he doesn't believe any congressman in the last fifty yrs has been capable of middle-school arithmetic.

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Another migration from the mainframe
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main as well.

re: http://www.garlic.com/~lynn/2007k.html#18 Another migration from the mainframe http://www.garlic.com/~lynn/2007k.html#19 Another migration from the mainframe

as an aside ... all the vendors that support server farms, at least in the form of blade/GRID technology, have done significant amounts of work on energy and cooling efficiency. in fact, cooling was one of the major concerns when working on ha/cmp scaleup ... related to these old emails http://www.garlic.com/~lynn/lhwemail.html#medusa

small sample re blade/grid energy efficiency:

CIO Challenge: Energy Efficiency http://www.wallstreetandtech.com/showArticle.jhtml?articleID=192202377

from above: Like Fidelity, Wachovia has been targeting energy efficiency initiatives for the last 12 to 18 months or so. The initial spur was a move by the firm's traders in January to a new building in New York. The three trading floors have relatively low ceiling heights, where it was not possible to put in a lot of air distribution, which meant we had to think creatively to ensure we don't have an unhealthy environment for the traders, ... snip ...

and:
IBM Unveils New Energy-Efficient Blades http://www.hpcwire.com/hpc/1379801.html
IBM to focus on energy efficiency http://www.bladewatch.com/2007/05/10/ibm-to-focus-on-energy-efficiency/
Blade innovations highlight energy efficiency opportunities http://www.it-director.com/business/content.php?cid=9135
IBM defends blades' energy efficiency http://green.itweek.co.uk/2006/10/ibm_defends_bla.html
IBM Data Center and Facilities Strategy Services - high density computing data center readiness assessment http://www-935.ibm.com/services/us/index.wss/offering/its/a1025605#spotligt-data-center

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Another migration from the mainframe
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main as well.

re: http://www.garlic.com/~lynn/2007k.html#18 Another migration from the mainframe http://www.garlic.com/~lynn/2007k.html#19 Another migration from the mainframe http://www.garlic.com/~lynn/2007k.html#22 Another migration from the mainframe

lots of old posts mentioning working on our ha/cmp product ... and/or loosely-coupled operation (dating back to at least when my wife had been con'ed into going to POK to be in charge of loosely-coupled architecture) http://www.garlic.com/~lynn/subtopic.html#hacmp

when she was in POK, in charge of loosely-coupled architecture, she developed peer-coupled shared data architecture, which didn't see a lot of uptake (except for ims hot-standby) until parallel sysplex http://www.garlic.com/~lynn/subtopic.html#shareddata

and a little followup on the financial industry using blades/grids at the high-end ... including enabling them to do real-time trading algorithms ... something that they hadn't been able to do before:

Lots of Blade Server articles http://www.eweek.com/category2/0,1874,1658862,00.asp
IBM Grid Computing Solutions - financial industry http://www-03.ibm.com/grid/solutions/by_industry/financial.shtml

from above: Optimized Analytic Infrastructure. Drive higher margins and revenue growth by:
* Turning creative quantitative insight into tested, supported, tradable investment products
* Achieving near real-time and intraday decision making for on demand valuations and complex risk reporting in minutes
* Reducing costs and enhancing standardization of existing analytic infrastructures
... snip ...

Grid Computing for Financial Services 2007 http://www.iqpc.com/cgi-bin/templates/genevent.html?topic=233event=12603;

from above: 70% of firms now deploy enterprise grids in key business areas to maximise CPU power and business capability – but are you really driving its development forward in your IT strategy? ... snip ...
Grid computing: Accelerating the search for revenue and profit for financial markets http://www-03.ibm.com/industries/financialservices/doc/content/landing/973028103.html -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Data Areas Manuals to be dropped
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] writes: What is 'OCO' ? Thanks

there were several OCO-wars threads/discussions on vmshare. it was somewhat more of an issue in vm culture ... since source maintenance was standard and there was an extensive amount of customer source changes available from the waterloo/share library. tymshare had provided online computer conferencing for share, called vmshare, starting in the mid-70s; in part because tymshare offered a vm-based commercial timesharing service (later tymshare would also offer pcshare online computer conferencing) ... lots/misc posts about vm-based online commercial timesharing services http://www.garlic.com/~lynn/subtopic.html#timeshare

vmshare archives here: http://vm.marist.edu/~vmshare/

the following is a sample from doing a search on oco war in browse mode against all memo, note, and prob files.

OCO's 10th b'day http://vm.marist.edu/~vmshare/browse?fn=OCO:BDAYft=MEMO
OCO source business http://vm.marist.edu/~vmshare/browse?fn=OCOBUSft=MEMO

the issue sort of dates back to the 23jun69 unbundling announcement, with the start of charging for application software. misc. past posts mentioning unbundling http://www.garlic.com/~lynn/subtopic.html#unbundle

initially only application software was charged for ... using the excuse that kernel/system software was required for operation of the hardware. later, various circumstances precipitated the decision to start charging for system software. this was about the time that my resource manager was going to be released ... so it got selected to be the initial guinea pig for policy/practices related to kernel software charging. http://www.garlic.com/~lynn/subtopic.html#fairshare

the change to charging for software eventually also evolved into Object-Code-Only (i.e. OCO, no source). recent post also mentioning that the 23jun69 unbundling announcement resulted in starting to charge for SE services.
http://www.garlic.com/~lynn/2007j.html#65 Help settle a job title/role debate -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Help settle a job title/role debate
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

Steve Samson [EMAIL PROTECTED] writes: I would regard SP as the inside job, designing, writing, testing, and integrating code to accomplish some well-defined purpose. An SE would be on the interface between inside and outside, meeting with TPTB and the end users to arrive at a set of specs that would then be reviewed and/or revised with the SP to assess cost and schedule, thus defining the purpose of the SP's effort. In the dawn of history, an SE was the IBM sales team member who would provide on-site training and act as the level 1 contact for solving problems. By 1965 the SE became not much more than the guy you called to order manuals, as all of them with half a brain were pulled into the S/360 development effort. Just my opinion and recollections...

the big change was 23jun69, with the unbundling announcement and charging for SE time. prior to that, a lot of SEs got hands-on training at customer installations doing real live technical things (sort of on-the-job training after introductory school). after the 23jun69 announcement, customers were less likely to pay for SE services ... especially for the younger ones getting their on-the-job training. that sort of created a two-class system ... those that had hands-on experience prior to 23jun69 (whose time customers were more likely to pay for) ... and those that came after 23jun69.

before 23jun69, for a period as an undergraduate, i had responsibility for the univ. production os/360 system (and also got to play with cp67). I had done a lot of stuff to significantly soup up mft (and then mvt) thruput ... part of it doing carefully crafted sysgens. There was a period where I would see brand new SEs in the branch office (fresh out of corporate schools) for a 3-4 month period, and then they'd be replaced by a new batch (getting their hands-on by helping me).
the early HONE system http://www.garlic.com/~lynn/subtopic.html#hone was largely instituted as a countermeasure to the training issues introduced with the 23jun69 announcement ... it started out as a clone of the science center's http://www.garlic.com/~lynn/subtopic.html#545tech cp67 (virtual machine) system on a few 360/67s at locations around the states, with remote login access from branch offices ... SEs being able to use dos/360, os/360, etc (i.e. behind the original hands-on in the HONE acronym).

the focus somewhat changed after the science center did the port of apl\360 to cms for cms\apl and the explosion in the number of apl applications supporting sales & marketing appeared ... like the configurators. Early in the 370 product time-frame ... there was a transition where sales couldn't even submit orders until after they had been processed by a HONE configurator. The explosion in use by direct sales & marketing sort of swamped the processors, and there was then a transition away from its original purpose of allowing SEs to get hands-on system experience.

HONE would migrate (from cp67) to vm370 and eventually there were HONE systems sprouting up all around the world ... some of the early ones i even got to do the installation for. Many of the HONE APL modeling applications would also permeate hdqtrs locations, in addition to direct branch office sales & marketing support. One of my first HONE installs outside the US was when EMEA hdqtrs moved from NY to La Defense (outside paris) in the early 70s.

for other drift ... misc. posts mentioning 23jun69 unbundling http://www.garlic.com/~lynn/subtopic.html#unbundle

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Help settle a job title/role debate
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Mark Zelden) writes: Today, I see these two used interchangeably. I've even seen title changes from one to the other in the same shop when HR decided to review everyone's job titles and such. I still prefer plain ol' Systems Programmer over all the titles I've had.

re: http://www.garlic.com/~lynn/2007j.html#65 Help settle a job title/role debate

later, i would have battles to have no title at all ... and have my business cards w/o any title (I would sometimes joke that if it was necessary to get things done based on a title ... then it was time to retire ... i should be able to convince people to do something based on it being the right thing to do). the other battle was being one of the first to have an email address on a business card.

... there is an old joke about a person that used to fly a kite from the roof of the 705 bldg. in pok on april 1st ... who had pencils made up with his name ... Elect lab director, raises or promotions, but not both. old references:
http://www.garlic.com/~lynn/2000b.html#60 South San Jose (was Tysons Corner, Virginia)
http://www.garlic.com/~lynn/2000d.html#38 S/360 development burnout?
http://www.garlic.com/~lynn/2006m.html#22 Patent #6886160

this is slightly different than the Boyd line, effectively about neither raises nor promotions ... recent ref: http://www.garlic.com/~lynn/2007j.html#61 Lean and Mean: 150,000 U.S. layoffs for IBM? which is more along the lines of references to some number of locations (across a variety of large bureaucratic organizations) being primarily mushroom factories (i.e. most of the people are kept in the dark and fed ...)

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Lean and Mean: 150,000 U.S. layoffs for IBM?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Rick Fochtman) writes: I went the other way in the Army, finding myself in a tropical climate where the main diet was rice, with a few vegetables and maybe a water buffalo, when the gunner forgot to clear the M2-HB before attempting to clean it. In my (somewhat limited) experience, the officers were there to get some combat time, and pay, into their service records, as a stepping stone to further promotion. Net result: the sergeants ran the Army while the officers fought the battles and collected the medals. Needless to say, I have a very low opinion of high-flying leaders that don't share the hardships of those who are led.

re: http://www.garlic.com/~lynn/2007j.html#61 Lean and Mean: 150,000 U.S. layoffs for IBM?

for other boyd drift, he did a yr running the datacenter at spook base ... possibly the largest in the world ... at least in the far east; at the time, the claim was that it represented a $2.5B windfall for IBM.
http://www.garlic.com/~lynn/2005t.html#1 Dangerous Hardware
http://www.garlic.com/~lynn/2005t.html#2 Dangerous Hardware
http://www.garlic.com/~lynn/2005t.html#5 Dangerous Hardware
http://www.garlic.com/~lynn/2006u.html#51 Where can you get a Minor in Mainframe?
http://www.garlic.com/~lynn/2007g.html#13 The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007i.html#4 John W. Backus, 82, Fortran developer, dies

and for other drift ... boyd's briefing on organic design for command and control ... past posts mentioning the briefing http://www.garlic.com/~lynn/94.html#8 scheduling dynamic adaptive ... 
long posting warning http://www.garlic.com/~lynn/2000e.html#34 War, Chaos, Business (web site), or Col John Boyd http://www.garlic.com/~lynn/2002q.html#33 Star Trek: TNG reference http://www.garlic.com/~lynn/2002q.html#34 Star Trek: TNG reference http://www.garlic.com/~lynn/2003h.html#46 employee motivation executive compensation http://www.garlic.com/~lynn/2004k.html#25 Timeless Classics of Software Engineering http://www.garlic.com/~lynn/2004l.html#34 I am an ageing techy, expert on everything. Let me explain the http://www.garlic.com/~lynn/2004q.html#69 Organizations with two or more Managers http://www.garlic.com/~lynn/2005e.html#1 [Lit.] Buffer overruns http://www.garlic.com/~lynn/2005e.html#2 [Lit.] Buffer overruns http://www.garlic.com/~lynn/2005e.html#3 Computerworld Article: Dress for Success? http://www.garlic.com/~lynn/2005n.html#14 Why? (Was: US Military Dead during Iraq War http://www.garlic.com/~lynn/2006q.html#41 was change headers: The Fate of VM - was: Re: Baby MVS??? http://www.garlic.com/~lynn/2007c.html#25 Special characters in passwords was Re: RACF - Password rules http://www.garlic.com/~lynn/2007i.html#35 ANN: Microsoft goes Open Source and as before ... lots of other past posts mentioning Boyd as well as other URLs from around the web http://www.garlic.com/~lynn/subboyd.html -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Lean and Mean: 150,000 U.S. layoffs for IBM?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main as well.

[EMAIL PROTECTED] (Ed Gould) writes: John, I agree with you on this issue ... somewhat. I was in the Army (over in Germany) and our base was a 'stepping stone' for generals. They came in as a 1 star and one year (more or less) later they became a 2 star. We weren't even close to the trenches (so to speak); we were a command, so no one really got hurt (per se) by any screwups the General may have caused. In a side issue (humorous), one of our subordinate bases, when polled at 11PM, reported they were under attack by the communists. Since they reported an attack we had to wake up our General to inform him of the situation. The colonel (IIRC), in charge of that base, was reduced in rank to a Captain(?) and was basically told to retire. All this from a sergeant who couldn't read a code book.

Boyd had lots of run-ins with generals ... just one of his old stories http://www.garlic.com/~lynn/2005n.html#14 Why? (Was: US Military Dead during Iraq War

past post w/reference to the dedication of Boyd Hall, United States Air Force Weapons School, Nellis Air Force Base, Nevada, 17 September 1999 http://www.garlic.com/~lynn/2007.html#20 MS to world: Stop sending money, we have enough - was Re: Most ... can't run Vista http://www.garlic.com/~lynn/2007h.html#74 John W. Backus, 82, Fortran developer, dies

... There are two career paths in front of you, and you have to choose which path you will follow. One path leads to promotions, titles, and positions of distinction. The other path leads to doing things that are truly significant for the Air Force, but the rewards will quite often be a kick in the stomach because you may have to cross swords with the party line on occasion. You can't go down both paths, you have to choose. Do you want to be a man of distinction or do you want to do things that really influence the shape of the Air Force? To be or to do, that is the question.
Colonel John R. Boyd, USAF 1927-1997 ... snip ...

misc. past posts mentioning Boyd http://www.garlic.com/~lynn/subboyd.html#boyd misc. URLs from around the web mentioning Boyd http://www.garlic.com/~lynn/subboyd.html#boyd2

... there have also been corporate stepping-stone positions ... for individuals that got put on the fast track. Woe to you if the head position of your organization was designated a fast-track stepping stone ... there could be turn-over in the position every six months.

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: How difficult would it be for a SYSPROG ?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main as well.

[EMAIL PROTECTED] (Anthony Saul Babonas) writes: Please colleagues, allow me to clarify by stating: SET FAVORITEEXPLETIVE=''
1. I do not believe other platforms are favoriteexpletive.
2. I am not arrogant and certainly not blind.
3. I am not responsible for the mainframe market share situation. I neither buy nor sell mainframes.
4. I have heard the costs of hardware and software, but I still believe arrogance is no cost.
5. I do understand the competition (favoriteexpletive).
6. Recent young graduates are free to work on any platform of their choosing. Undoubtedly the market influences their choices.
7. I do believe the learning curve for Novell, circa 1995, was orders of magnitude less than zOS + subsystems, circa 1995.
So fast forward 12 years: is learning to be a PC based network sysprog more difficult than zOS, less so, or about the same? Please note, the final question is posed without beliefs, opinions, standpoints, political biases, or prejudices. I reserve the right to invoke arrogance at some later date.

a little x-over http://www.garlic.com/~lynn/2007j.html#47 IBM Unionization http://www.garlic.com/~lynn/2007j.html#48 IBM Unionization from an afc ng thread ... w/regard to a post referencing some national labs discontinuing mainframe systems in the 90s because of the inability to fill positions for system support (schools were turning out lots of unix skills but little or no mainframe skills). It wasn't a particular cost issue; it was an issue of being able to find/hire the skills.

part of the problem is getting into a self-reinforcing feedback loop ... programs to turn out mainframe skills can take a decade ... once it starts, the reputation about skill shortages can contribute to choices made about platforms to use ... and the choice about platforms to use can contribute to choices about skills required.

for totally other topic drift ... 
in the very early 80s, the disk division had a PC network server project ... part of the implementation was being done under a work for hire contract by people in Provo. For a while, one of the people on the project was commuting between San Jose and Provo nearly every week. At some point, the corporation decided to cancel the project ... and allowed the group in Provo to retain rights to the work they had already been paid for. Not long afterwards there appeared a PC network server company out of Provo. misc. past posts mentioning DataHub project: http://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party http://www.garlic.com/~lynn/2000g.html#40 No more innovation? Get serious http://www.garlic.com/~lynn/2002f.html#19 When will IBM buy Sun? http://www.garlic.com/~lynn/2002g.html#79 Coulda, Woulda, Shoudda moments? http://www.garlic.com/~lynn/2002o.html#33 Over-the-shoulder effect http://www.garlic.com/~lynn/2003e.html#26 MP cost effectiveness http://www.garlic.com/~lynn/2003f.html#13 Alpha performance, why? http://www.garlic.com/~lynn/2004f.html#16 Infiniband - practicalities for small clusters http://www.garlic.com/~lynn/2005p.html#23 What ever happened to Tandem and NonStop OS ? http://www.garlic.com/~lynn/2005q.html#9 What ever happened to Tandem and NonStop OS ? http://www.garlic.com/~lynn/2005q.html#36 Intel strikes back with a parallel x86 design http://www.garlic.com/~lynn/2006l.html#39 Token-ring vs Ethernet - 10 years later http://www.garlic.com/~lynn/2006y.html#31 The Elements of Programming Style http://www.garlic.com/~lynn/2007f.html#17 Is computer history taught now? -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Sizing CPU
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Edward Jaffe) writes: I agree that part of what I wrote is misleading because I was thinking about processor resource contention analysis when I wrote it. :-[ Overall processor utilization is -- as you've said -- calculated from dispatched-time and/or wait-time accumulators maintained by the PR/SM and MVS dispatchers (e.g., LCCAWTIM is maintained by the MVS dispatcher). But, it's also true that the monitors must still periodically sample and remember the values contained within those accumulators to be able to calculate the percentage busy over any given interval.

or at least periodically checkpoint the numbers.

for some topic drift ... the science center http://www.garlic.com/~lynn/subtopic.html#545tech had done the port of apl\360 to cms for cms\apl ... this opened up the use of apl for more real-world problems ... since typical apl\360 operation limited workspace size to 16kbytes (or sometimes as much as 32kbytes) ... i.e. a cms\apl workspace could be nearly as big as the virtual address space provided. recent posts ... about when people in armonk started using the cambridge cp67 system for cms\apl to do business analysis and modeling http://www.garlic.com/~lynn/2007i.html#20 Does anyone know of a documented case of VM being penetrated by hackers?

apl was being used for a lot of things that spreadsheets are used for today. the science center also did a significant amount of work on performance monitoring, performance tuning, workload profiling, and system characterization that also evolved into things like capacity planning. cms\apl (and the cambridge cp67 system) was also picked up by the marketing & sales organization for something called HONE http://www.garlic.com/~lynn/subtopic.html#hone which transitioned to vm370 and apl/cms ... with the majority of world-wide sales and marketing support being provided by APL applications under vm370.
in the 70s there was a transition where sales could not even enter a customer order that hadn't first been processed by a HONE configurator (an application written in APL). The science center APL, performance modeling, and capacity planning work somewhat came together in a sophisticated computer modeling application written in APL. A version of this was made available to (worldwide) sales and marketing as the performance predictor. Sales/marketing people could enter customer workload, configuration, and performance data ... and ask what-if questions about what happens (to computer performance and thruput) when there are changes to configuration and/or workload.

some other past posts mentioning benchmarking, workload profiling, performance monitoring and/or work on capacity planning http://www.garlic.com/~lynn/subtopic.html#bench

for some topic drift ... recent post about the joke in the resource manager: http://www.garlic.com/~lynn/2007i.html#43 Latest Principles of Operation

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
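an aside on the accumulator-sampling point at the top of the post above ... a minimal sketch of computing percent busy over an interval from samples of a monotonically increasing wait-time accumulator (in the spirit of a field like LCCAWTIM; the function name and figures here are illustrative, not actual MVS fields):

```python
# Minimal sketch: interval utilization from a wait-time accumulator.
# The monitor samples the accumulator at interval boundaries; busy% is
# derived from the delta -- all names/values illustrative.

def busy_pct(wait_prev, wait_now, t_prev, t_now):
    """Percent busy over [t_prev, t_now], given two wait-accumulator samples."""
    elapsed = t_now - t_prev          # wall-clock seconds in the interval
    waited = wait_now - wait_prev     # seconds the processor spent waiting
    return 100.0 * (elapsed - waited) / elapsed

# e.g. over a 60-second interval the dispatcher accumulated 15s of wait:
print(busy_pct(wait_prev=100.0, wait_now=115.0, t_prev=0.0, t_now=60.0))  # -> 75.0
```

the point of the "or at least periodically checkpoint the numbers" remark is that only the accumulator samples at the interval boundaries are needed; nothing inside the interval has to be observed.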
Re: Best practices for software delivery
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Schwarz, Barry A) writes: Internet is only a reasonable approach when companies are willing to provide the same level of quality control over their web sites that they do for their traditional media. Other than the CBT site (thank you Sam) none of the others I deal with do. When I'm trying to download the solution to a problem, I don't need a link to a page that doesn't exist, some distracting animation, annoying pop-ups, notification that the web site prefers a different browser, brain dead interfaces that insist you enter the same information repeatedly, options that are ignored, and out of date content.

old email here from somebody in corporate hdqtrs (in this x-over thread from the vmesa-l list) ... asking if the HSDT project had been thinking about how to do this sort of stuff ... http://www.garlic.com/~lynn/2007i.html#39 Does anyone know of a documented case of VM being penetrated by hackers?

this was in the timeframe when we had been doing work w/NSF on what was to become NSFNET (tcp/ip is the technology basis for the modern internet, but we claim that NSFNET was the operational basis for the modern internet, aka a high-speed backbone providing internetworking of networks). some of the old email on the subject from the period http://www.garlic.com/~lynn/lhwemail.html#nsfnet

lots of past posts mentioning the HSDT project http://www.garlic.com/~lynn/subnetwork.html#hsdt

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Latest Principles of Operation
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

Howard Brazee [EMAIL PROTECTED] writes: A lot of our smarts is in seeing patterns, simplifying what we are looking for. Occasionally this kind of shortcut causes us to miss things, but pattern recognition allows the chess master to ignore dead ends that poorer players waste time on. I see craftsmen and artists using tools that are difficult to master - but supply and demand doesn't take that into consideration in setting prices for their goods.

long post warning ... as an undergraduate ... i did a fair share scheduler for cp67 ... or actually dynamic adaptive resource management ... with the default policy being fair share; minor recent reference: http://www.garlic.com/~lynn/2007h.html#77 Linux: The Completely Fair Scheduler

but i also did a lot of highly optimized pathlength and fastpath stuff. I made a joke about being able to do stuff in zero instructions ... carefully tweaking instructions all over the kernel so that various things happened implicitly ... so the scheduler didn't actually have to execute instructions to obtain the desired results (secondary effects of the order in which other things are occurring; of course, it helps if you have effectively memorized the source for the complete kernel). over some yrs, there were complaints that nobody could understand how it worked ... which probably contributed to a lot of it being dropped (simplified) in the morph from cp67 to vm370.

so, possibly still as part of the recovery in the aftermath of the future system project ... recent reference in this thread http://www.garlic.com/~lynn/2007i.html#31 Latest Principles of Operation ... I was given the opportunity to reintroduce the resource manager for vm370. ... more topic drift ... and they decided that the resource manager should be the guinea pig for starting to charge for kernel code ... 
so i had to spend some amount of time with the business and legal people working on policy for charging for kernel software; i.e. the 23jun69 unbundling announcement started charging for application software ... but they had used the excuse that kernel software was integral to hardware operation and should still be free ... misc. past posts http://www.garlic.com/~lynn/subtopic.html#unbundle

... slightly returning to topic ... now, i did fix up some of the more obtuse pieces of code ... but there still was quite a bit of complexity in the resource manager ... and people over the yrs would complain that they didn't understand how it worked ... and periodically somebody would make changes in other parts of the kernel resulting in inexplicable effects on scheduling (there was still quite a bit of convoluted code that I justified by being able to do things in fewer instructions and shorter pathlength).

so there was one fly in the ointment: somebody from corporate complained that there weren't any tuning parameters ... which was the state of the art in other products; ... namely the favorite-son operating system of the period had a massive table of tuning parameters ... and there were these presentations at SHARE (that could put everybody to sleep) about random walks across the tuning-parameter-table landscape, varying the parameters for all different kinds of workloads and configurations ... attempting to find settings that made some difference. So I was forced to put in some tuning parameters (before product ship), document them and the backup formulas ... as well as provide detailed code explanation.

so we roll forward nearly 20yrs ... we are on a marketing swing thru the far east for our ha/cmp product http://www.garlic.com/~lynn/subtopic.html#hacmp also additional thread drift http://www.garlic.com/~lynn/95.html#13 http://www.garlic.com/~lynn/96.html#15 and http://www.garlic.com/~lynn/lhwemail.html#medusa ... and we are on a call at a large financial institution.
One of the people at the meeting (a relatively recent graduate) pipes up and asks if i'm the same person associated with the scheduler ... since they had studied me/it at the Univ. of Waterloo. So I responded politely and asked if the joke was discussed. Now part of the issue with static tuning parameters is that workloads tend to vary from minute-to-minute, day-to-day, week-to-week ... and there was a whole lot of work that went into the dynamic adaptive implementation to eliminate the requirement for any static tuning parameters. Being forced to add static tuning parameters seemed to be a great step backward in the state-of-the-art. So as to the joke ... if dynamic adaptive can adjust for changes in configurations and workloads ... as well as a lot of other things ... why couldn't the dynamic adaptive implementation also adjust to compensate for changes in any static tuning parameters. misc. past posts discussing the resource manager joke: http://www.garlic.com/~lynn/2001b.html#18 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
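The dynamic adaptive idea behind the scheduler story above can be sketched in a few lines. This is an illustration only, not the actual CP67/VM370 code: consumption is tracked as a decayed average and dispatch order is derived from consumption relative to an equal "fair" share, so the policy adapts to the workload instead of depending on static tuning knobs. The class names and the decay factor here are hypothetical.

```python
# Minimal sketch of a dynamic-adaptive fair-share scheduler -- an
# illustration of the idea only, not the actual CP67/VM370 code.
# Each user's recent CPU consumption is tracked as an exponentially
# decayed average; dispatch priority is consumption relative to an
# equal ("fair") share, so heavy consumers sort behind light ones and
# the policy adapts as the workload changes -- no static knobs needed.

DECAY = 0.9  # hypothetical decay factor for "recent" consumption

class User:
    def __init__(self, name):
        self.name = name
        self.recent = 0.0  # decayed average of recent CPU use

    def account(self, cpu_used):
        # fold newly consumed CPU into the decayed average
        self.recent = self.recent * DECAY + cpu_used

def next_to_dispatch(users):
    # fair share is an equal division among active users; priority is
    # consumption relative to that share (lowest ratio runs first)
    share = 1.0 / len(users)
    return min(users, key=lambda u: u.recent / share)

users = [User("a"), User("b"), User("c")]
users[0].account(5.0)  # "a" has been a heavy consumer
users[1].account(1.0)  # "b" a light one; "c" has used nothing
print(next_to_dispatch(users).name)  # c
```

The point of the sketch is that nothing here is a tunable constant a sysadmin must set per workload; the decayed averages are the only state, and they track whatever the workload is doing.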
Re: Latest Principles of Operation
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. [EMAIL PROTECTED] (Shane) writes: Which of course made the whole thing too complex. Maybe we need to make it simpler ... other posts in this thread: http://www.garlic.com/~lynn/2007i.html#25 Latest Principles of Operation http://www.garlic.com/~lynn/2007i.html#26 Latest Principles of Operation http://www.garlic.com/~lynn/2007i.html#31 Latest Principles of Operation http://www.garlic.com/~lynn/2007i.html#43 Latest Principles of Operation part of the complexity issue is the possible number of things and/or interactions that have to be managed. it is one of the reasons for modular code. it is also one of the reasons for hierarchical management infrastructure and recommendations about span of control ... the ideal number of direct reports is supposedly seven. fully-meshed interaction complexity is something like N-factorial (i.e. 7! is already 5040). when we were consulting with a small client/server company on the original payment gateway http://www.garlic.com/~lynn/subnetwork.html#gateway one of the things that we were doing was making the whole gateway operation redundant, including multiple links into strategic places in the internet backbone ... and planning on advertising (alternate) routes in cases of faults. It was in this time-frame that the internet backbone transitioned to hierarchical routing ... in attempting to deal with some of the massive scaling complexity issues. As a result, we had to convert to a multiple A-record strategy (for alternate paths) as opposed to the strategy of advertising routes. recent post http://www.garlic.com/~lynn/2007h.html#67 SSL vs.
SSL over tcp/ip in the previous post http://www.garlic.com/~lynn/2007i.html#43 Latest Principles of Operation I had enormously increased the complexity by creating an environment where, for any set of (assembler) instructions any place in the kernel that might have secondary effects on other things in the kernel, one not only had to take into account individual instruction operation and the purely sequential flow ... but also possible (non-obvious) secondary effects based on the order in which things were performed. In the early 80s, I had done a dump analysis tool http://www.garlic.com/~lynn/subtopic.html#dumprx recent reference http://www.garlic.com/~lynn/2007i.html#30 John W. Backus, 82, Fortran developer, dies which was never released as a product, but eventually was in use at nearly all the internal datacenters as well as being used by nearly all the PSRs that were involved in diagnosing (vm) kinds of failures at customer shops. One of the things that I studied was a large number of system failures, looking for familiar and/or common signatures ... and writing automatic diagnostic processes that looked for numerous, identifiable failure characteristics. One such problem, somewhat unique to assembler, is the additional burden/complexity placed on the programmer for managing register contents. A failure mode that I found to occur quite frequently in kernel assembler code was register contents not being as expected ... possibly because of logic flow taking a particular anomalous path. Higher level languages tend to automate that area of (register) management complexity for the programmer ... and failures due to incorrect register contents occur significantly less frequently. This analysis was one of my motivations behind helping instigate an operating system rewrite project ... with an objective that included significantly reducing complexity (and the possibility for failures).
some recent references to the operating system rewrite project (which eventually acquired way too much attention, imploding ... somewhat analogous to a mini FS failure). misc. recent references to the old rewrite effort: http://www.garlic.com/~lynn/2007g.html#70 The Perfect Computer - 36 bits? http://www.garlic.com/~lynn/2007h.html#24 sizeof() was: The Perfect Computer - 36 bits? http://www.garlic.com/~lynn/2007h.html#57 ANN: Microsoft goes Open Source -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
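The multiple A-record strategy described in the post above (the client steps through the advertised addresses itself, rather than depending on route advertisement to pick a live path) can be sketched as below. The addresses and the connect function are hypothetical stand-ins; in real code the address list would come from a DNS lookup such as socket.getaddrinfo().

```python
# Sketch of client-side failover across multiple DNS A-records, the
# alternate-path strategy described above for the payment gateway.
# The addresses and connect function are hypothetical stand-ins so
# that the failover loop itself is the point.

def connect_first_reachable(addresses, connect):
    # try each advertised address in turn; first success wins
    for addr in addresses:
        try:
            return addr, connect(addr)
        except OSError:
            continue  # this path is down; fall through to the next
    raise OSError("no advertised address was reachable")

# hypothetical A-records for the gateway (TEST-NET documentation
# addresses), with the primary path simulated as down
addrs = ["192.0.2.10", "192.0.2.20", "192.0.2.30"]

def fake_connect(addr):
    if addr == "192.0.2.10":
        raise OSError("link fault")  # simulate the primary link down
    return "connected"

addr, _ = connect_first_reachable(addrs, fake_connect)
print(addr)  # 192.0.2.20
```

The design trade-off matches the post: route advertisement would make failover transparent inside the network, while multiple A-records push the retry responsibility out to every client.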
Re: Latest Principles of Operation
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. [EMAIL PROTECTED] (Rick Fochtman) writes: That increased instruction set allows for vastly increased capability, in spite of the perceived complexity. Simple applications can still be coded using simple instructions, but more complex requirements can be met more simply and efficiently by using some of those added instructions that seem to lead to complexity. Complexity is far too often used as an excuse for incompetence or laziness; not always or even most of the time, but still far too often. You don't let a carpenter into your house if he doesn't know how to use his tools, do you re: http://www.garlic.com/~lynn/2007i.html#25 Latest Principles of Operation http://www.garlic.com/~lynn/2007i.html#26 Latest Principles of Operation well, i remember the hassle of getting the compareswap instruction into the 370 architecture. Charlie had invented the compareswap instruction when he was working on smp kernel fine-grain locking for cp67 at the science center http://www.garlic.com/~lynn/subtopic.html#545tech the redbook 370 architecture owners claimed that everybody (namely the POK favorite son operating system people) thought that testset was totally adequate for all multiprocessor support. in order to justify getting the compareswap instruction into the 370 architecture, we had to come up with a whole boatload of justifications for the compareswap instruction that weren't specific to multiprocessor operation. thus was born the stuff in the principles of operation about using the compareswap instruction for application multithreaded operation (regardless of whether or not it was a multiprocessor environment). lots of past posts mentioning multiprocessor support and/or the compareswap instruction http://www.garlic.com/~lynn/subtopic.html#smp it is sometimes relative.
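The application-multithreading justification above rests on the compare-and-swap retry idiom: read the old value, compute a new one, and atomically swap only if nothing changed in between. Python exposes no CAS instruction, so in this sketch the primitive itself is modeled with a lock standing in for hardware atomicity; the lock-free *usage* pattern around it is the part the justification describes.

```python
import threading

# Sketch of the compare-and-swap retry idiom from the application
# multithreading justification above. Python has no user-visible CAS
# instruction, so the primitive is modeled with a lock standing in
# for hardware atomicity; the point is the caller's retry loop.

class Cell:
    def __init__(self, value=0):
        self.value = value
        self._lock = threading.Lock()  # stands in for hardware atomicity

    def compare_and_swap(self, expected, new):
        # atomically: if value == expected, store new and report success
        with self._lock:
            if self.value == expected:
                self.value = new
                return True
            return False

def add(cell, n):
    # the classic CAS retry loop -- the caller holds no explicit lock
    while True:
        old = cell.value
        if cell.compare_and_swap(old, old + n):
            return

counter = Cell()
threads = [threading.Thread(target=add, args=(counter, 1)) for _ in range(100)]
for t in threads: t.start()
for t in threads: t.join()
print(counter.value)  # 100
```

The same loop works whether or not the code is actually running on a multiprocessor, which was exactly the non-SMP justification the post describes.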
i've claimed that John came up with risc/801 as part of going to the opposite extreme after the extremely complex future system project failed ... misc. past posts mentioning the failed future system project http://www.garlic.com/~lynn/subtopic.html#futuresys and lots of past posts mentioning 801, romp, rios, iliad, fort knox, somerset, power/pc, etc http://www.garlic.com/~lynn/subtopic.html#801 and old email with 801 references http://www.garlic.com/~lynn/lhwemail.html#801 Supposedly the future system project was a countermeasure to clone controllers ... something I got (at least partially) blamed for, from a project that I worked on as an undergraduate in the 60s producing our own clone controller http://www.garlic.com/~lynn/subtopic.html#360pcm also some FS quotes referenced in this post http://www.garlic.com/~lynn/2007f.html#28 The Perfect Computer - 36 bits? from article by a former executive here http://www.ecole.org/Crisis_and_change_1995_1.htm The FS effort and subsequent failure ... can also be considered as contributing to the uptake of clone processors (in part because of the dearth of items in the 370 product pipeline) a few recent posts http://www.garlic.com/~lynn/2007g.html#55 IBM to the PCM market(the sky is falling!!!the sky is falling!!) http://www.garlic.com/~lynn/2007g.html#57 IBM to the PCM market(the sky is falling!!!the sky is falling!!) http://www.garlic.com/~lynn/2007g.html#59 IBM to the PCM market(the sky is falling!!!the sky is falling!!) and after the failure of the FS project ... there was a rush to get stuff back into the 370 product pipeline. Recent post attributing that as a big part of the reason that the product group shipped so much of my code (since I continued to develop 370-based software all thru the FS activity) http://www.garlic.com/~lynn/2007i.html#21 John W.
Backus, 82, Fortran developer, dies and another recent posting touching on some stuff that went on in the FS era http://www.garlic.com/~lynn/2007i.html#20 Does anyone know of a documented case of VM being penetrated by hackers? http://www.garlic.com/~lynn/2007i.html#29 Does anyone know of a documented case of VM being penetrated by hackers? another stop-gap effort to try and quickly fill the 370 product pipeline (after the failure of the FS project) was the 303x. The standard processor development cycle was 7-8yrs ... and they needed to get something out much quicker than that ... since the 370-xa/3081, just getting started, wouldn't be out before the early 80s. So they took the 370/158 microcode engine ... stripped out the 370 microcode support ... leaving just the integrated channel microcode, and packaged it as the 303x channel director. Then the 3031 was a 370/158 microcode engine with just the 370 microcode support (and no integrated channel microcode) paired with a channel director. The 3032 was a 370/168-3 repackaged to work with the channel director. The 3033 started out simply being the 168 wiring diagram mapped to newer chip technology. Recent posts about the enormous effort to hurry up and get the 303x out the door after the failed FS project:
Re: Internal DASD Pathing
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. [EMAIL PROTECTED] (Richards.Bob) writes: 3880-xx Controller series followed by the 3990-xxx series DASD was 3380-D, 3380-E, 3380-J, 3380-K, 3390-1, 3390-2, 3390-3 (today's major image) and 3390-9 (a true SLED pulled by crippled and maimed dogs) Yes, the 3380s had four paths. Do not recall if it was ever increased. the 3830 disk controller had dual channel along with string switch for 3330 ... allowing for four paths total (i.e. a string connecting to two different controllers, each of which could connect to two different channels) the 3880 disk controller had four channel interfaces ... and the 3380 string switch (a-box) allowed a 3380 string to be connected to two different controllers for eight total paths. Evolution of the DASD storage control http://www.research.ibm.com/journal/sj/282/ibmsj2802C.pdf not to be too harsh about the comment in the above about the 3880 having better performance ... but some recent posts that touched on getting the 3880 out the door ... as well as a little discussion of dynamic pathing http://www.garlic.com/~lynn/2007h.html#1 21st Century ISA goals? http://www.garlic.com/~lynn/2007h.html#3 21st Century ISA goals? http://www.garlic.com/~lynn/2007h.html#4 21st Century ISA goals? http://www.garlic.com/~lynn/2007h.html#5 21st Century ISA goals? http://www.garlic.com/~lynn/2007h.html#6 21st Century ISA goals? http://www.garlic.com/~lynn/2007h.html#9 21st Century ISA goals? IBM Jargon: A-Box n. A primary storage unit: the one closest to the controller. Most 370 storage peripherals come in two flavours. The A-Box (also called head of String, or Model A) either houses the controller (e.g., 3422), is the controller (e.g., 3480), or connects to the controller. The B-Box (Model B) is used for extending the string. Some strings can be connected to the controller at both ends, in which case the unit at the end of the string is usually a Model D.
... snip ... 3380 history here with A B boxes: http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3380.html http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3380b.html http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3380c.html minor mention of a-box, head-of-string function: http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3380d.html http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3380e.html from above: A string of Standard model 3380s can consist of a single 3380 Model A04 or AA4 and up to three 3380 Model B04 units. A string of Extended Capability 3380s can consist of one 3380 Model AD4 or AE4 and up to three 3380 Model BD4 and BE4 units, in any combination. Two strings of 3380s can be attached to each storage director of a 3880 Model 3 or cache storage control Model 23. An A04 string is not supported by the Model 23. Strings headed by an AA4, AD4 or AE4 must attach to two storage directors (usually on two separate 3880 Storage Controls). An AA4 string and an Extended Capability string can be attached to the same storage directors. ... snip ... at one point i made a foray into redoing the (3880) controller dynamic pathing architecture implementation ... as part of significantly extending (and simplifying) dynamic pathing capability for virtualized operation ... but got into all sorts of issues regarding compatibility with the implementation already in the field.
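The path counting in the pathing discussion above reduces to a multiplication: total paths to a string is the number of channel interfaces per controller times the number of controllers the string switch can attach to. A trivial sketch, with the figures taken from the post (treat them as an illustration of the arithmetic, not a configuration reference):

```python
# Path counting from the DASD pathing post above: total paths to a
# string is (channel interfaces per controller) x (controllers the
# string switch can attach the string to). Figures follow the post.

def total_paths(channels_per_controller, controllers_per_string):
    return channels_per_controller * controllers_per_string

# 3830: dual channel interfaces, string switch to two controllers
print(total_paths(2, 2))  # 4
# 3880: four channel interfaces, string switch to two controllers
print(total_paths(4, 2))  # 8
```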
Re: Internal DASD Pathing
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. Howard Brazee [EMAIL PROTECTED] writes: A bit off topic: I find your input to these threads in general to be quite useful. They appear to take a fair amount of work - are they part of your jobs, or is this just a style you are comfortable with? re: http://www.garlic.com/~lynn/2007i.html#33 Internal DASD Pathing i had done some amount of semi-automated computer conferencing via an email (copy list) mechanism on the internal network http://www.garlic.com/~lynn/subnetwork.html#internalnet in the late 70s and early 80s ... and got blamed for something called tandem memos (there was even a nov81 datamation article about tandem memos). tandem memos were somewhat spawned by various trip reports visiting Tandem after Jim left SJR. somewhat recent postings referencing Jim and his departure from SJR: http://www.garlic.com/~lynn/2007.html#1 The Elements of Programming Style http://www.garlic.com/~lynn/2007.html#13 The Elements of Programming Style http://www.garlic.com/~lynn/2007d.html#17 Jim Gray Is Missing somewhat as an outcome of various investigations into the tandem memo phenomena ... there were decisions to deploy an officially sanctioned corporate computer conferencing capability. there was also a researcher hired that sat in the back of my office for nine months and studied how i communicated; phone, face-to-face, email, instant messaging, etc. they had copies of my incoming and outgoing email and logs of all instant messages. The resulting report was also published as a stanford phd thesis (joint between ai and language) and was the subject matter for some number of papers and books. some related posts mentioning computer mediated conferencing http://www.garlic.com/~lynn/subtopic.html#cmc never got paid for it ... frequently closer to the opposite ...
minor reference: http://www.garlic.com/~lynn/2007e.html#48 time spent/day on a computer there were sometimes jokes in the mid-80s about various subject threads having gotten wheeler'ized ... i.e. when possibly 80-90 percent of the characters in some corporate/world-wide discussion were found to originate from my keyboard. most consider that i have mellowed considerably since then ... and i have been doing it for 30 yrs or so somewhat drifting back to the original topic ... misc. collected posts about getting to play in the disk engineering (bldg. 14) and disk product test (bldg. 15) labs. http://www.garlic.com/~lynn/subtopic.html#disk
Re: Internal DASD Pathing
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. re: http://www.garlic.com/~lynn/2007i.html#33 Internal DASD Pathing http://www.garlic.com/~lynn/2007i.html#34 Internal DASD Pathing minor addenda ... to previous topic drift ... this index of old email http://www.garlic.com/~lynn/lhwemail.html also contains a URL for an eserver (since renamed IBM Systems) magazine article that appeared two yrs ago ... although they got some of the details slightly garbled. I actually haven't seen a copy of the physical magazine ... they had sent a photographer to the house for a photo shoot ... and something from that supposedly shows up in the magazine.
Re: Latest Principles of Operation
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. Howard Brazee [EMAIL PROTECTED] writes: That makes sense. But continuing that thought, I see Apple, which doesn't try to make its OS be all things for all people (and hardware manufacturers). Even if it *is* UNIX. nominally the argument is that complexity contributes to confusion and failures ... KISS is frequently better because it minimizes confusion, which can be a major source of failures, vulnerabilities, threats and exploits. however, another argument is that the solution paradigm has to match the environment ... there can be an enormous amount of complexity introduced when the solution paradigm is a mismatch for the environment that it is being applied to. slightly related thread discussing f18/f14, f16/f15, as well as f20 (with even a little computer related stuff sprinkled in) ... warning, quite a bit of thread drift ... even tho there was a lot of numerical intensive computing that went into f16, f18, f20, etc: http://www.garlic.com/~lynn/2007i.html#3 John W. Backus, 82, Fortran developer, dies http://www.garlic.com/~lynn/2007i.html#4 John W. Backus, 82, Fortran developer, dies http://www.garlic.com/~lynn/2007i.html#6 John W. Backus, 82, Fortran developer, dies http://www.garlic.com/~lynn/2007i.html#7 John W. Backus, 82, Fortran developer, dies http://www.garlic.com/~lynn/2007i.html#8 John W. Backus, 82, Fortran developer, dies http://www.garlic.com/~lynn/2007i.html#10 John W. Backus, 82, Fortran developer, dies
Re: Latest Principles of Operation
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. Howard Brazee [EMAIL PROTECTED] writes: That makes sense. But continuing that thought, I see Apple, which doesn't try to make its OS be all things for all people (and hardware manufacturers). Even if it *is* UNIX. re: http://www.garlic.com/~lynn/2007i.html#25 Latest Principles of Operation apple os (and next before it) starts out with a MACH microkernel basis ... could be considered striving for KISS ... somewhat like the original CP67, as an extremely well focused microkernel. The morph from CP67 to VM370 included work by people with much more traditional operating system training. Over the years, many found that the extreme simplicity of the kernel made it easy to change/add/modify on an ad hoc basis. Unfortunately, many years of such an ad hoc approach to something that was supposed to be a microkernel (rather than an operating system) ... eventually results in a lot of bloat and spaghetti code past reference to a comment that simple can frequently be much harder than complex ... and that it is done when there is nothing left to remove ... as opposed to being done when there is nothing left to add. http://www.garlic.com/~lynn/2007h.html#29 sizeof() was: The Perfect Computer - 36 bits? http://www.garlic.com/~lynn/2007h.html#30 sizeof() was: The Perfect Computer - 36 bits? http://www.garlic.com/~lynn/2007i.html#5 John W. Backus, 82, Fortran developer, dies more recently there have been comments that simple virtual machine microkernels may be the solution to various significant security issues in the personal computing market place ... dynamically opening up a traditional operating system in a padded cell virtual machine when it needs to do various kinds of internet/network operations ... and then collapsing and discarding most of the environment when finished.
lots of past posts referring to various microkernel activities: http://www.garlic.com/~lynn/94.html#54 How Do the Old Mainframes http://www.garlic.com/~lynn/95.html#0 pathlengths http://www.garlic.com/~lynn/2000e.html#42 IBM's Workplace OS (Was: .. Pink) http://www.garlic.com/~lynn/2001f.html#23 MERT Operating System Microkernels http://www.garlic.com/~lynn/2001h.html#11 checking some myths. http://www.garlic.com/~lynn/2001j.html#36 Proper ISA lifespan? http://www.garlic.com/~lynn/2001k.html#45 SMP idea for the future http://www.garlic.com/~lynn/2001l.html#25 mainframe question http://www.garlic.com/~lynn/2001m.html#47 TSS/360 http://www.garlic.com/~lynn/2003.html#50 Origin of Kerberos http://www.garlic.com/~lynn/2003.html#60 MIDAS http://www.garlic.com/~lynn/2003h.html#37 Does PowerPC 970 has Tagged TLBs (Address Space Identifiers) http://www.garlic.com/~lynn/2003j.html#72 Microkernels are not all or nothing. Re: Multics Concepts For http://www.garlic.com/~lynn/2003k.html#5 What is timesharing, anyway? http://www.garlic.com/~lynn/2003k.html#9 What is timesharing, anyway? http://www.garlic.com/~lynn/2003k.html#24 Microkernels are not all or nothing. Re: Multics Concepts For http://www.garlic.com/~lynn/2003k.html#26 Microkernels are not all or nothing. Re: Multics Concepts For http://www.garlic.com/~lynn/2003k.html#27 Microkernels are not all or nothing. Re: Multics Concepts For http://www.garlic.com/~lynn/2003k.html#28 Microkernels are not all or nothing. Re: Multics Concepts For http://www.garlic.com/~lynn/2003k.html#30 IBM channels, was Re: Microkernels are not all or nothing http://www.garlic.com/~lynn/2003k.html#37 Microkernels are not all or nothing. Re: Multics Concepts For http://www.garlic.com/~lynn/2005b.html#22 The Mac is like a modern day Betamax http://www.garlic.com/~lynn/2005c.html#44 [Lit.] 
Buffer overruns http://www.garlic.com/~lynn/2005c.html#56 intel's Vanderpool and virtualization in general http://www.garlic.com/~lynn/2005c.html#63 intel's Vanderpool and virtualization in general http://www.garlic.com/~lynn/2005f.html#10 Where should the type information be: in tags and descriptors http://www.garlic.com/~lynn/2006p.html#10 What part of z/OS is the OS? http://www.garlic.com/~lynn/2006p.html#11 What part of z/OS is the OS? http://www.garlic.com/~lynn/2007g.html#70 The Perfect Computer - 36 bits? http://www.garlic.com/~lynn/2007g.html#83 IBM to the PCM market
Re: Laugh, laugh. I thought I'd die - application crashes
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. [EMAIL PROTECTED] (Clem Clarke) writes: In PCP, MFT and MVT, SVC 99 didn't even exist! Nor TSO. just for laughs, here is the (Hercules) build install procedure for TSO/TCAM for MVT http://www.conmicro.cx/hercos360/tsotcam.html and from my rfc index http://www.garlic.com/~lynn/rfcietff.htm RFC90 http://www.garlic.com/~lynn/rfcidx0.htm 90 CCN as a network service center, Braden R., 1971/01/15 (6pp) (.txt=11929) ... about UCLA 360/91 running MVT, URSA, a conversational remote job entry system and TSO from the RFC: d) TSO IBM's new general purpose time-sharing subsystem under MVT, to be available at CCS sometime during 1971. TSO supports 2741's and Teletypes (and at CCN it will support CCI consoles). TSO is reminiscent of CTSS in its capabilities and command language. ... snip ...
Re: T.J. Maxx data theft worse than first reported
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main as well. [EMAIL PROTECTED] (Tom Schmidt) writes: I once heard a former CIA spook say that any POS system can be hacked from a truck parked at the curb, if the price/value is right. (Speaking from a previous lifetime in marketing research.) Maybe somebody built a proof-of- concept device??? (Think: TEMPEST) re: http://www.garlic.com/~lynn/2007h.html#56 T.J. Maxx data theft worse than first reported ... don't think individual POS terminals sitting on the counter ... think corporate POS concentrator ... where all POS transactions for the whole corporation pass thru on the way to the financial network. this is slightly analogous to the internet payment gateway (which we periodically claim is the original SOA) long ago, and far away, we were called in to consult with this small client/server startup that had this technology called SSL and wanted to do payment transactions on their server. http://www.garlic.com/~lynn/aadsm5.htm#asrn2 http://www.garlic.com/~lynn/aadsm5.htm#asrn3 a payment gateway was developed and deployed ... it is somewhat analogous to a corporate POS concentrator ... but can be used by lots of different (small) webservers any place on the web (as opposed to webservers in a large corporation that frequently just aggregate into a corporate POS concentrator). as before ... there are all kinds of eavesdropping technology (some of which may or may not require some sort of physical operation) ... with the harvested information then used for fraudulent transactions in various kinds of replay attacks (being able to use information harvested from previous transactions ... in new fraudulent transactions) http://www.garlic.com/~lynn/subintegrity.html#harvest as an aside ... it isn't too unusual to see such trucks parked all over the place around silicon valley ... they are brought in for regular audits for leaking/stray emissions.
they typically don't bother to disguise the external antennas. for some topic drift ... posts about trade secret litigation and some question about whether the security was proportional to the risk (i.e. had to demonstrate security procedures that were proportional to the claimed value of the stuff at risk): http://www.garlic.com/~lynn/2001d.html#42 IBM was/is: Imitation... http://www.garlic.com/~lynn/2002d.html#8 Security Proportional to Risk (was: IBM Mainframe at home) http://www.garlic.com/~lynn/2003i.html#62 Wireless security http://www.garlic.com/~lynn/2005r.html#7 DDJ Article on Secure Dongle http://www.garlic.com/~lynn/2006r.html#29 Intel abandons USEnet news http://www.garlic.com/~lynn/2007e.html#9 The Genealogy of the IBM PC http://www.garlic.com/~lynn/2007f.html#45 The Perfect Computer - 36 bits? http://www.garlic.com/~lynn/2007f.html#46 The Perfect Computer - 36 bits? http://www.garlic.com/~lynn/2007f.html#57 Is computer history taught now? part of the web case ... was that the existing infrastructure is extremely vulnerable to replay attacks. from the security acronym PAIN P - privacy (sometimes CAIN, confidential) A - authentication I - integrity N - non-repudiation in the case of the payment gateway, SSL was used for privacy/confidentiality of the transaction transmitting thru the internet ... i.e. achieving security with encryption as a countermeasure to eavesdropping (as part of replay attacks). However, as we've frequently noted, most of the harvesting exploits appear to happen at the end-points ... as opposed to while the transaction is actually being transmitted. now, in the mid-90s, the x9a10 financial standard working group had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments.
the result was the x9.59 financial transaction standard http://www.garlic.com/~lynn/x959.html#x959 in effect, the x9.59 financial standard substituted end-to-end authentication and integrity (for privacy, confidentiality, encryption) to achieve security. providing end-to-end authentication and integrity eliminated eavesdropping as a risk or compromise ... since information from existing transactions could no longer be used for fraudulent transactions in replay attacks i.e. x9.59 transactions aren't vulnerable to eavesdropping, skimming, or harvesting exploits ... whether at-rest or in-transit. we've claimed that the largest use of SSL has been the e-commerce stuff that we previously worked on ... as part of hiding transactions during transmission. x9.59 eliminates the requirement for hiding transactions (and therefore eliminates one of the major uses for SSL).
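The replay-resistance idea discussed above (a harvested copy of a transaction is useless if every transaction is bound to a one-time value and authenticated end-to-end) can be sketched as follows. This is NOT the x9.59 wire format, which uses digital signatures; an HMAC with a hypothetical shared key stands in here so the replay check itself can be shown, and the sequence-number scheme is an illustrative choice.

```python
import hmac
import hashlib

# Sketch of the replay-resistance idea discussed above: bind every
# transaction to a one-time sequence number and authenticate the
# whole thing end-to-end. This is NOT the x9.59 wire format (x9.59
# uses digital signatures); an HMAC with a shared key stands in here
# so the replay check itself can be shown.

KEY = b"demo-shared-key"  # hypothetical key for the sketch

def sign(seq, body):
    # MAC covers the sequence number AND the transaction body
    msg = f"{seq}:{body}".encode()
    return hmac.new(KEY, msg, hashlib.sha256).hexdigest()

class Acquirer:
    def __init__(self):
        self.last_seq = 0  # highest sequence number accepted so far

    def accept(self, seq, body, mac):
        if not hmac.compare_digest(mac, sign(seq, body)):
            return "bad mac"           # tampered or wrongly keyed
        if seq <= self.last_seq:
            return "replay rejected"   # old transaction re-sent
        self.last_seq = seq
        return "accepted"

acq = Acquirer()
txn = (1, "pay 10.00 to merchant-42", sign(1, "pay 10.00 to merchant-42"))
print(acq.accept(*txn))  # accepted
print(acq.accept(*txn))  # replay rejected -- an eavesdropped copy is useless
```

The key property matches the post: the eavesdropper can see everything on the wire and still cannot construct a new fraudulent transaction, so hiding the traffic stops being a requirement.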
Re: Fast and Safe C Strings: User friendly C macros to Declare and use C Strings.
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. JimKeo [EMAIL PROTECTED] writes: I remember with great fondness working with VS/Pascal when on contract to IBM working on VM and MVS implementation of TCPIP. Later IBM came out with a C version of much of the TCPIP suite and one of my duties was to address serious performance issues with TCPIP stack but mostly with the C FTP Server (and later C FTP Client). At some point performance had finally been improved enough to where C FTP was competitive with the earlier VS/Pascal offering. When I suggested I could make some of the same improvements to the VS/Pascal version some folks were positively apoplectic. {smile} They, understandably but regretably, wanted the old Pascal FTP buried and replaced by the new C FTP and knew renewed performance issues (Pascal faster than C) would just cause grief to some. Hmmm. It's been almost a decade but is anyone able to ascertain whether IBM FTP Server or client still has some of my assembler CSECTs with names like WRTFBA** or WRTVB** linked/bound somewhere? Just curious. re: http://www.garlic.com/~lynn/2007h.html#41 Fast and Safe C Strings: User friendly C macros to Declare and use C Strings the initial implementation would get about 44kbytes/sec while consuming most of a 3090 processor. i then added the support for rfc1044, and in some tuning tests at cray research was getting channel speed (1mbyte/sec) between a 4341 clone and a cray machine ... using only a modest amount of the 4341 processor ... i.e. about 25 times the aggregate thruput for about 1/20 the pathlength ... about a 400-500 times difference in bytes transferred per instruction executed. re: http://www.garlic.com/~lynn/2007h.html#8 whiny question: Why won't z/OS support the HMC 3270 emulator misc. posts mentioning various compromises, vulnerabilities, exploits, etc related to the C language http://www.garlic.com/~lynn/subintegrity.html#overflow and misc.
past posts mentioning having done rfc1044 support http://www.garlic.com/~lynn/subnetwork.html#1044 for other topic drift ... we had an internal high-speed backbone ... part of our hsdt (high-speed data transport project) http://www.garlic.com/~lynn/subnetwork.html#hsdt and we working with various organizations and NSF for applying it to NSFNET related operations ... various old email from the period http:/www.garlic.com/~lynn/lhwemail.html#nsfnet in various posts http://www.garlic.com/~lynn/subnetwork.html#nsfnet -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
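the rfc1044 figures in the post above are roughly self-consistent, which a back-of-the-envelope check shows. this is only a sketch using the numbers quoted in the text; the variable names are mine, and it deliberately ignores that the 3090 and 4341 were different-speed processors:

```python
# figures quoted in the post; everything else here is illustrative
base_rate = 44 * 1024            # ~44 kbytes/sec, base stack on a 3090
rfc1044_rate = 1024 * 1024       # ~1 mbyte/sec (channel speed), 4341 clone

throughput_ratio = rfc1044_rate / base_rate   # ~23x ... "about 25 times"
pathlength_ratio = 20                         # "about 1/20 the pathlength"

# moving ~25x the bytes per unit time with ~1/20 the instructions per byte
# compounds into the bytes-transferred-per-instruction improvement
bytes_per_instruction_gain = throughput_ratio * pathlength_ratio
```

multiplying the two quoted ratios lands at roughly 465, squarely inside the "400-500 times" range the post gives.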
Re: Fast and Safe C Strings: User friendly C macros to Declare and use C Strings.
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

re:
http://www.garlic.com/~lynn/2007h.html#41 Fast and Safe C Strings: User friendly C macros to Declare and use C Strings
http://www.garlic.com/~lynn/2007h.html#60 Fast and Safe C Strings: User friendly C macros to Declare and use C Strings

besides vs/pascal and a lot of chip design applications, the los gatos vlsi lab had also done the LSM ... the original name was los gatos state machine, but it was changed to logic simulation machine for some external publications ... it ran chip logic simulation at something like 50,000 times the rate of a software application running on a 3033. it was somewhat original in that it could take time into account (allowing it to handle asynchronous-clock chips as well as digital chips with analog circuits). the later machines, like EVE (endicott verification engine), assumed chips with a synchronous clock.

recent post mentioning LSM (with several LSM, YSE, and EVE references):
http://www.garlic.com/~lynn/2007f.html#73 Is computer history taught now?

one of the HSDT high-speed links
http://www.garlic.com/~lynn/subnetwork.html#hsdt

was between austin and los gatos ... and there was a fair amount of chip design traffic over the link from austin to los gatos; in fact it was claimed that its availability helped bring in the RIOS (i.e. rs/6000) chipset a year early.

the los gatos lab also did a high-performance experimental database in conjunction with some people from STL ... somewhat concurrent with system/r ... the original sql/relational implementation
http://www.garlic.com/~lynn/subtopic.html#systemr

it shared some of the characteristics of relational ... but while the system/r implementation assumed fairly regular information organization implemented in tables ... the los gatos implementation (also originally done in vs/pascal) was targeted at chip design ... both logical and physical layout ... with possibly extremely anomalous and non-uniform data (not well suited to a table structure).

i had worked on some of the system/r stuff ... recent post
http://www.garlic.com/~lynn/2007.html#1 The Elements of Programming Style

with some old email
http://www.garlic.com/~lynn/2007.html#email801006
http://www.garlic.com/~lynn/2007.html#email801016

as well as working on some of the implementation that los gatos was doing.
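the distinction drawn above (the LSM accounting for time, versus the synchronous-clock assumption of later machines like EVE) is essentially event-driven versus cycle-based simulation. a minimal discrete-event gate simulator illustrates the event-driven style: each gate carries its own propagation delay, so inputs may change at arbitrary times with no global clock. this is purely an illustrative sketch; the function, its signature, and the gate representation are my invention and have nothing to do with the actual LSM hardware:

```python
import heapq

def simulate(stimuli, gates):
    """Discrete-event logic simulation with per-gate delays (no global clock).
    stimuli: list of (time, wire, value) events driven from outside.
    gates:   dict mapping output wire -> (fn, input_wires, delay).
    Returns the final value of every wire."""
    values = {}
    queue = list(stimuli)
    heapq.heapify(queue)          # process events in time order
    while queue:
        t, wire, val = heapq.heappop(queue)
        if values.get(wire) == val:
            continue              # no change on this wire, nothing to propagate
        values[wire] = val
        for out, (fn, inputs, delay) in gates.items():
            if wire in inputs:    # schedule this gate's new output after its delay
                new = fn(*(values.get(w, 0) for w in inputs))
                heapq.heappush(queue, (t + delay, out, new))
    return values

# an AND gate with a 2-unit delay; the two inputs arrive asynchronously
vals = simulate([(0, "a", 1), (1, "b", 1)],
                {"c": (lambda a, b: a & b, ("a", "b"), 2)})
```

with the second input arriving at t=1, the output glitches low at t=2 and settles to 1 at t=3; a cycle-based (synchronous-clock) simulator would instead evaluate every gate once per clock tick and never see such timing behavior.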
Re: T.J. Maxx data theft worse than first reported
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main as well.

[EMAIL PROTECTED] (Tom Schmidt) writes:

I once heard a former CIA spook say that any POS system can be hacked from a truck parked at the curb, if the price/value is right. (Speaking from a previous lifetime in marketing research.) Maybe somebody built a proof-of-concept device??? (Think: TEMPEST)

re:
http://www.garlic.com/~lynn/2007h.html#56 T.J. Maxx data theft worse than first reported
http://www.garlic.com/~lynn/2007h.html#58 T.J. Maxx data theft worse than first reported

and for more topic drift, the latest news, hot off the press today:

Laptops And Flat Panels Now Vulnerable to Van Eck Methods
http://hardware.slashdot.org/hardware/07/04/20/2048258.shtml

Seeing through walls
http://www.newscientist.com/blog/technology/2007/04/seeing-through-walls.html

from above:

Back in 1985, Wim Van Eck proved it was possible to tune into the radio emissions produced by electromagnetic coils in a CRT display and then reconstruct the image. The practice became known as Van Eck Phreaking, and NATO spent a fortune making its systems invulnerable to it. It was a major part of Neal Stephenson's novel Cryptonomicon.

... snip ...

so as previously noted, there are several countermeasures to eavesdropping and replay attacks ... 1) make sure the attacker can't get the information, 2) scramble/encrypt, so the information is unintelligible, 3) change the paradigm (ala x9.59) so the eavesdropped/harvested information is useless for replay attacks.
http://www.garlic.com/~lynn/x959.html#x959
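countermeasure 3) above can be sketched: bind each transaction authorization to a fresh, per-transaction challenge, so that anything an eavesdropper harvests off the wire is useless for replay. this is an illustrative sketch only ... x9.59 itself specifies public-key digital signatures over the transaction, not HMACs, and the function names, key, and challenge values here are all stand-ins:

```python
import hmac
import hashlib

def authorize(key: bytes, challenge: bytes, txn: bytes) -> bytes:
    # bind the authorization tag to this specific challenge + transaction
    return hmac.new(key, challenge + txn, hashlib.sha256).digest()

def verify(key: bytes, challenge: bytes, txn: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(authorize(key, challenge, txn), tag)

key = b"shared-secret-for-illustration!!"
tag = authorize(key, b"challenge-1", b"pay merchant 5.00")

# the verifier issues a fresh challenge per transaction, so a harvested
# (txn, tag) pair fails verification the next time around
ok_original = verify(key, b"challenge-1", b"pay merchant 5.00", tag)
ok_replayed = verify(key, b"challenge-2", b"pay merchant 5.00", tag)
```

this is the paradigm change: even a complete copy of the transaction traffic gives the attacker nothing to replay, because without the key no valid tag can be produced for the next challenge.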