Re: Cartons of Punch Cards
hanco...@bbs.cpcn.com writes: As I understand it, years ago in foreign countries telephone capacity was limited and phones were expensive, thus many people did not have them. When cell phones came out, it represented a whole new infrastructure that exploded, and many people got connected that way.

The expense/scarcity of telco also shows up in the slow adoption of point-of-sale terminals and magstripe plastic payment cards in europe. As a result, europe saw chipcards that could do offline point-of-sale transactions ... i.e. the point-of-sale terminal interacted with the chipcard and wasn't required to go online for every transaction. A lot of these were stored-value cards ... that had a secure mechanism for storing/recording value ... somewhat like some of the US metro cards.

In the 90s, some of these made pilot excursions into the US ... and we got asked to design/cost the dataprocessing infrastructure for a scaled-up, country-wide deployment (mostly the back-end dealing with loading value into the cards). I also did some financial analysis, and nearly all of the value motivating the programs was that the operator got the float on the unspent value in the cards. In some cases it was like a pyramid scheme where the international license holder effectively got all of the float ... with individual country operators not getting any. Then, to spur uptake, there were announcements that the international license holder would split the float with the individual country operators. Then the EU central banks decreed that interest would have to start being paid on the unspent value in the cards ... and the programs just slowly dwindled away.

About that time, some operators in the US introduced an online magstripe stored-value card ... similar in concept to the EU chipcards but leveraging existing online point-of-sale telco infrastructure to do account-based operation. They are now marketed as gift and merchant cards ...
large racks of them can be seen near checkout counters in some grocery stores.

A variation on the stored-value chipcards ... were the more sophisticated association chipcards for standard credit operation. The merchant point-of-sale terminal would interact with the chipcard ... and the chipcard could be trusted to tell the merchant POS terminal whether or not to go online, as well as how much credit was available on the card and whether the current transaction was approved or not. These required PIN operation (as a countermeasure to unauthorized use of lost/stolen cards) and supposedly had lots of security to prevent other forms of fraudulent activity. The point of the card was specifically security ... but it would allow merchant point-of-sale terminals to do offline transactions (to avoid high telco charges) and could batch a large number of transactions to be done in one telco connection at end-of-shift or end-of-day.

There was a large pilot in the US of these cards in the early part of the century. However, the cards interacted with the terminal using static authentication data. It turned out that effectively the same terminal compromise that would skim static magstripe data (to create counterfeit magstripe cards) could be used to skim static chipcard authentication data. This could then be used to create counterfeit chipcards that were called YES CARDS; once authenticated, the card would always answer YES to the following three questions: 1) was the correct PIN entered (YES), 2) should this be an offline transaction (YES) and 3) is the transaction within the account credit limit (YES). It was not too long afterward that the pilot disappeared w/o a trace.

I had tried to tell the pilot operators about the vulnerability ... but they apparently had such a myopic focus on the chips ... that they responded by saying they could address the problem by changing the programming in valid chips. The problem was that the compromise wasn't of valid chips ...
but a merchant terminal compromise (and changing the programming in valid chips had no impact on the creation of fraudulent counterfeit YES CARDS).

At the ATM Integrity Task Force meetings ... early part of this century, when the YES CARD problem was explained, somebody in the audience made the observation that they had managed to spend billions of dollars to prove chipcards are less secure than magstripe cards. The issue is that a countermeasure to a counterfeit magstripe card is to deactivate the account (which prevents/blocks future online fraudulent transactions). However, for YES CARDS, deactivating the account has no effect, since the merchant terminal doesn't go online until long after the crooks are gone.

old reference (gone 404 but lives on at the wayback machine) to YES CARD presentation at cartes2002: http://web.archive.org/web/20030417083810/http://www.smartcard.co.uk/resources/articles/cartes2002.html past posts mentioning YES CARDS: http://www.garlic.com/~lynn/subintegrity.html#yescard -- virtualization experience starting Jan1968, online at home since Mar1970
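The YES CARD failure comes down to static vs. dynamic (challenge-response) authentication. A minimal sketch of the distinction, assuming nothing about the actual chipcard protocol (class names and data here are purely illustrative, not the association card spec):

```python
# Illustrative only: why static authentication data can be cloned from a
# compromised terminal, while challenge-response data cannot be replayed.
import hashlib
import hmac
import os

class StaticCard:
    """Card that presents the same authentication blob every transaction."""
    def __init__(self, auth_blob):
        self.auth_blob = auth_blob

    def authenticate(self, _challenge):
        return self.auth_blob          # identical reply every time

class DynamicCard:
    """Card holding a secret key; signs a fresh challenge from the terminal."""
    def __init__(self, key):
        self._key = key

    def authenticate(self, challenge):
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

# A compromised terminal records one session from a static card ...
victim = StaticCard(b"static-auth-data")
skimmed = victim.authenticate(os.urandom(8))

# ... and the skimmed blob works forever in a counterfeit (a "YES CARD"):
clone = StaticCard(skimmed)
assert clone.authenticate(os.urandom(8)) == victim.authenticate(os.urandom(8))

# With challenge-response, a recorded reply is useless for a new challenge:
real = DynamicCard(os.urandom(16))
recorded = real.authenticate(b"challenge-1")
assert real.authenticate(b"challenge-2") != recorded
```

The sketch also shows why deactivating the account doesn't help: the static blob is validated by the offline terminal, which never consults the issuer until settlement.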
Re: Co-existance of z/OS and z/VM on same DASD farm
p...@voltage.com (Phil Smith) writes: VM/XA MA begat VM/XA SF begat VM/XA SP, which eventually moved to Endicott, and became VM/ESA and then z/VM. The core of VM/XA was actually much better than VM/SP; as a developer I found it much easier to work with.

re: http://www.garlic.com/~lynn/2012g.html#17 Co-existance of z/OS and z/VM on same DASD farm http://www.garlic.com/~lynn/2012g.html#19 Co-existance of z/OS and z/VM on same DASD farm http://www.garlic.com/~lynn/2012g.html#24 Co-existance of z/OS and z/VM on same DASD farm

old email about vm/370 running in XA mode: http://www.garlic.com/~lynn/2011c.html#email860122 http://www.garlic.com/~lynn/2011c.html#email860123 http://www.garlic.com/~lynn/2011e.html#email870508

the early issue was claims that the resources needed to bring the migration aid up to vm370 product level were several orders larger than the resources needed to fix any perceived deficiencies in vm370 (compared to the migration aid). for a little x-over with this thread: http://www.garlic.com/~lynn/2012g.html#29 24/7/365 appropriateness was Re: IBMLink outages in 2012 http://www.garlic.com/~lynn/2012g.html#30 24/7/365 appropriateness was Re: IBMLink outages in 2012

post from a couple years ago about z/VM announcing cluster support: http://www.garlic.com/~lynn/2009p.html#43 From The Annals of Release No Software Before Its Time

The US HONE system had done vm370 cluster (loosely-coupled) single-system-image support in the late 70s (large number of multiprocessors sharing a disk pool) ... the US HONE datacenters had been consolidated in Palo Alto in the mid-70s (the building next door to the one FACEBOOK later first moved into) and provided online sales & marketing support (HONE clones sprouted all over the world for world-wide sales & marketing support). In the early 80s, the datacenter was replicated in Dallas, and fall-over/load-balancing was extended across the two geographically separated datacenters. misc.
past posts mentioning HONE http://www.garlic.com/~lynn/subtopic.html#hone

Prior to US HONE cluster support, vm370 commercial online service bureaus had done their own cluster support, including non-disruptive migration of active running users between systems in the complex (not just logon load-balancing and fall-over). This allowed a system to be taken/varied offline for maintenance w/o impacting any users running on the system. misc. past posts mentioning commercial online services http://www.garlic.com/~lynn/submain.html#timeshare

In the 80s, IBM research had done vm/4341 cluster support with 3088/trotter ... but when they went to release, they were told that they had to convert from their own home-grown protocol to SNA/VTAM ... cluster operations that had taken a small fraction of a second started taking half a minute or more. all of that would be disappearing in the transition from the vm370 base to the vmtool/migration-aid base.

with regard to loosely-coupled and SNA/VTAM battles ... my wife had earlier run into the problem when she had been con'ed into going to POK to be in charge of loosely-coupled architecture. She created Peer-Coupled Shared Data architecture while there ... but it saw very little uptake (except for IMS hot-standby) until SYSPLEX ... some past posts http://www.garlic.com/~lynn/submain.html#shareddata

the combination of little uptake and constant wars with the communication group over demands that she use SNA/VTAM for loosely-coupled operation contributed to her not remaining long in the position (there would be periodic temporary truces where it was allowed she could use anything she wanted within the datacenter ... but the communication group owned everything that crossed the datacenter walls).
also note in the late 80s, a senior disk engineer had gotten a talk scheduled at the internal, worldwide, annual communication group conference and opened with the statement that the communication group was going to be responsible for the demise of the disk division. the issue was that the communication group was protecting their terminal emulation install base ... and the disk division was starting to see a drop in sales as data was fleeing the datacenter to more distributed-computing-friendly platforms. The disk division had come up with a number of solutions for the problem ... but (again) the communication group had strategic ownership of everything that crossed the datacenter walls (and would veto the solutions). misc. past posts mentioning the terminal emulation paradigm http://www.garlic.com/~lynn/subnetwork.html#emulation

this whole situation contributed to the significant dropoff in mainframe use and the company going into the red in the early 90s. Reference to Gerstner's resurrection of IBM ... as well as pointer to a review of Gerstner's book Who Says Elephants Can't Dance (in IBM employee forum): http://www.garlic.com/~lynn/2012f.html#84 -- virtualization experience starting Jan1968, online at home since Mar1970
Re: 24/7/365 appropriateness was Re: IBMLink outages in 2012
cfmpub...@ns.sympatico.ca (Clark Morris) writes: On a logical basis I agree with you, but has the 24/7/365 shortcut for continuous availability become so pervasive that it is the shorthand way of saying it, and is it the way that the general public, as opposed to us professional nitpickers, best understands it?

when we were doing ha/cmp in the early 90s, one of the customers we called on supported the 1-800 lookup (i.e. 1-800 calls got routed to a dbms transaction that looked up the real number for putting the call through) and had a five-nines availability requirement. the incumbent had redundant hardware ... but required the system to be taken down for software maintenance ... a short scheduled downtime, once a year, blew the outage budget for nearly a century. ha/cmp didn't have redundant hardware components but had replicated systems and fall-over ... so failure downtime was masked ... even rolling outages for software system maintenance w/o service impact. eventually the incumbent vendor came back and said that they could do replicated systems also ... for masking individual system downtime ... but that negated the justification for their redundant hardware.

i was then asked to write a section for the corporate continuous availability strategy document ... but the section got pulled after both Rochester and POK complained that they couldn't meet the objectives. past posts mentioning coining the terms disaster survivability and geographic survivability ... to differentiate from disaster/recovery when out marketing ha/cmp: http://www.garlic.com/~lynn/submain.html#available -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
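The "blew the outage budget for nearly a century" arithmetic is easy to check. Five nines (99.999% availability) allows roughly 5.26 minutes of downtime per year; the 8-hour maintenance window below is an assumed figure for illustration, not from the original anecdote:

```python
# Back-of-envelope: how many years of five-nines downtime budget does
# one scheduled maintenance outage consume?
MINUTES_PER_YEAR = 365.25 * 24 * 60      # ~525,960

def yearly_downtime_budget(availability):
    """Allowed downtime per year, in minutes, for a given availability."""
    return MINUTES_PER_YEAR * (1.0 - availability)

budget = yearly_downtime_budget(0.99999)  # five nines: ~5.26 minutes/year
outage = 8 * 60                           # one assumed 8-hour maintenance window
print(f"five-nines budget: {budget:.2f} minutes/year")
print(f"one 8-hour outage = {outage / budget:.0f} years of budget")
```

With an 8-hour window this comes out to roughly 90 years of budget, i.e. "nearly a century"; even a 1-hour window spends more than a decade's worth.
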
Re: 24/7/365 appropriateness was Re: IBMLink outages in 2012
bfairch...@rocketsoftware.com (Bill Fairchild) writes: And the general public, many Dilbertian managers, and even some of us professional nitpickers, think that a job running 1 hour instead of 10 is 900% faster, and that 1 is 10 times smaller than 10. 2+2 no longer = 5; now it equals chartreuse. Fortunately architects and engineers know how to use mathematically accurate and precise terminology when describing the bridges they design and build, or we would have a lot more cars falling off of collapsing bridges.

re: http://www.garlic.com/~lynn/2012g.html#29 24/7/365 appropriateness was Re: IBMLink outages in 2012

Volcker, in a discussion with a civil engineering professor about the significant decline in infrastructure projects (as institutions skimmed funds for other purposes, making civil engineering jobs disappear and resulting in universities cutting back civil engineering programs); Confidence Men, pg290: Well, I said, 'The trouble with the United States recently is we spent several decades not producing many civil engineers and producing a huge number of financial engineers. And the result is s**tty bridges and a s**tty financial system! ... snip ...

old presentation by Jim Gray on availability ... scanned from a paper copy that had been made on a copying machine in bldg. 28, SJR http://www.garlic.com/~lynn/grayft84.pdf the point (from the early 80s) was that the majority of outages (scheduled and non-scheduled) had shifted from hardware to software (and human errors).

(early 70s) before the virtual memory announcement for 370, a copy of an internal document describing the technology leaked to the press. in the wake of the ensuing investigation, all internal copying machines were retrofitted with a unique identifier (under the glass) that would appear on all copies made on that machine.

for other drift ... it has been five years since Jim disappeared and a cal. court recently declared him dead ...
reference in (linkedin) z/VM group: http://lnkd.in/C2yn7p also archived here: http://www.garlic.com/~lynn/2012g.html#21 Closure in Disappearance of Computer Scientist -- virtualization experience starting Jan1968, online at home since Mar1970
Re: IBM's first tape drive turns 60 (makes you feel old!)
r.skoru...@bremultibank.com.pl (R.S.) writes: BTW: I heard about 1-inch tapes. Is it true? Did such wide tapes exist? Current cartridges are 1/2 inch wide. The article says that the 729 was also 1/2 inch.

how do you feel about 3850 http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3850.html http://www.columbia.edu/cu/computinghistory/mss.html cartridges (tape 4 inches wide by 770 inches long)? http://www-03.ibm.com/ibm/history/exhibits/storage/storage_PH3850B.html http://en.wikipedia.org/wiki/IBM_3850 http://www.columbia.edu/cu/computinghistory/media.html -- virtualization experience starting Jan1968, online at home since Mar1970
Re: Co-existance of z/OS and z/VM on same DASD farm
p...@voltage.com (Phil Smith) writes: And the VM/XA SPOOL system in general was super-robust - I wrote a system mod (product) that tinkered with SPOOL, and while I created SPOOL files that couldn't be seen, and couldn't be opened, and couldn't be purged by normal means, I *never* took out the rest of SPOOL. Really nice stuff. Especially after the HPO 5 debacle!

re: http://www.garlic.com/~lynn/2012g.html#17 Co-existance of z/OS and z/VM on same DASD farm http://www.garlic.com/~lynn/2012g.html#19 Co-existance of z/OS and z/VM on same DASD farm

40th vm370 anniv. this year ... 2012 vm workshop discussion in (linkedin) z/VM http://lnkd.in/Emfz8Z some also archived here http://www.garlic.com/~lynn/2012g.html#18 and http://www.garlic.com/~lynn/2012g.html#23

I posted the schedule for the 1987 vm workshop ... it mentions I gave two presentations (on performance and networking) and two BOFs (debugging and a spool file system rewrite). The spool file system rewrite was because I needed at least a factor of 100 increase in thruput (for RSCS network thruput). I also made the integrity of the spool file system and the integrity of the overall system completely independent (i.e. I could lose a whole spool file disk w/o impacting the running of the system and/or the integrity of the spool files on other disks). -- virtualization experience starting Jan1968, online at home since Mar1970
Re: Co-existance of z/OS and z/VM on same DASD farm
VM had dasd read/only access to the volser (VOL1 record) to identify each mounted disk. VM r/w activity was limited to vm page-formatted disks. CMS, running in a virtual machine, had support for cms filesystems and some primitive support for real-formatted OS & DOS disks. regarding incorrectly rewriting the vtoc ... there is some possibility it might have happened if somebody had attached/linked the real disks to cms in a virtual machine (in r/w mode).

In the mid-70s, one of the people in the vm370/cms development group significantly rewrote and developed full-function OS r/w filesystem support (real os vtoc, pds directory, etc.) in CMS (the joke was that the 100k bytes was more efficient os/360 simulation than the 8+mbytes that had been done in MVS for os/360 simulation). however, this was approx. the period when the FS effort was imploding and there was a mad rush to get products back into the 370 pipelines (during the FS effort, 370 activity was being suspended and/or killed off). misc. past posts mentioning the Future System effort (that was going to completely replace 360/370) http://www.garlic.com/~lynn/submain.html#futuresys

As part of reconstituting 370 (303x was kicked off in parallel with 370/xa), the head of POK managed to convince corporate to kill off the vm370 product, shut down the development group and move all the people to POK ... otherwise they supposedly wouldn't be able to meet the mvs/xa ship schedule. somehow the vm370 development group was warned ahead of time and some of the people managed to escape being moved to POK (there was a joke that the head of POK was a major contributor to DEC vax/vms). in the killing off of the vm370 product and shutdown of the group ... the full-function OS filesystem support never shipped ... and it all just disappeared. Eventually, Endicott managed to save the vm370 product mission, but they had to reconstitute a development group from scratch.
-- virtualization experience starting Jan1968, online at home since Mar1970
Re: Co-existance of z/OS and z/VM on same DASD farm
paulgboul...@aim.com (Paul Gilmartin) writes: And somewhere in there, there was something like VM/XA/SF (System Facility), intended to allow virtual machines for development and testing, but not to support emigration of the OS workload as happened in the VSCR crisis.

re: http://www.garlic.com/~lynn/2012g.html#17 Co-existance of z/OS and z/VM on same DASD farm

the POK group did VMTOOL, which was supposed to be for internal use only, for MVS/XA development. However, eventually the decision was made to release it as VM/SF ... as a customer aid in MVS to MVS/XA conversion. There was lots of internal politics. Internally, vm370 had been ported to and was running in 370/XA mode ... with much better function, features, performance, reliability, etc. than VM/SF. However, there was growing politics to turn VM/SF into VM/XA ... even tho the vm370 solution running in XA-mode was significantly better. Part of the issue was that VM/SF was from the POK high-end group ... which was responsible for XA. vm370 was still from the endicott mid-range group ... which had less political clout.

old post with mention of vm/811 (aka vm/sf ... XA was referred to as 811 internally, for the nov1978 date on lots of the XA architecture documents): http://www.garlic.com/~lynn/2011b.html#70 VM/370 3081 and discussion (with old email) about vm370 running in xa-mode http://www.garlic.com/~lynn/2011c.html#87 A History of VM Performance

with regard to FBA ... I've mentioned before that I was told that it would cost $26M to release MVS support for FBA (fixed-block architecture, at the time 3370s) ... even if I gave the MVS group fully integrated and tested code. The $26M was just for education and documentation changes. To justify the $26M, I had to show incremental new disk sales (on the order of ten times the cost ... i.e. around $300M); and they were claiming that they were already selling as many disks as they could make ... and if MVS had FBA support ...
customers would just switch to having the same amount of FBA as CKD. I wasn't allowed to use a business justification based on drastically reduced lifetime costs ... I had to have a business justification showing additional new sales. As has been pointed out ... current disks are all FBA ... there haven't been real CKD disks made for decades. misc. past posts mentioning DASD, CKD, FBA, multi-track search, etc http://www.garlic.com/~lynn/submain.html#dasd -- virtualization experience starting Jan1968, online at home since Mar1970
Re: The old is new again - Not IBM related, but I hope interesting
john.mck...@healthmarkets.com (McKown, John) writes: http://www.phoronix.com/scan.php?page=article&item=plugable_multiseat_kick&num=1 This is a USB device which can plug into a normal PC running Linux (Fedora 17 is mentioned). You then connect a DisplayLink monitor, USB keyboard and mouse to the device. And you have a multi-user system on a single PC. Not a server PC with other PCs connected as clients, but just one single PC. Reminds me of what could be done with MP/M-80 (the multiuser version of CP/M-80), except back then it was a serial (RS-232?) connected keyboard/display. Or, maybe, an S/360 with a 2260(?) or 3272(?).

cp67 (ran on 360/67), delivered to the univ. jan1968, had support for 2741 (selectric typewriter with computer/rs-232 interface) and 1052 (sort of like the 360 1052-7 operator's console with rs-232 interface) terminals. the univ. had ascii/tty terminals ... so I added tty/ascii terminal support. the 2741/1052 support played games with the terminal controller SAD command ... associating the terminal-specific line-scanner with each port/line ... so I added tty/ascii support in a similar manner.

I had wanted to have a single dial-up number (hunt group) for all dial-up terminals ... but the ibm terminal controller had taken a short-cut ... while it was possible to change the line-scanner, the line-speed was hard-wired for each port/line ... 2741 & 1052 operated at the same line-speed, but tty/ascii was a different speed. this was somewhat the motivation for the univ. to start a clone controller project: reverse engineer the 360 channel interface and build a channel interface board for an Interdata/3 programmed to emulate the ibm terminal controller (but also supporting dynamic line-speed). Interdata then took the implementation and marketed it as a clone controller; Perkin-Elmer then bought Interdata and continued to market it under their own brand (30 yrs later I ran across one in a large east coast datacenter handling a large percentage of the point-of-sale dial-up terminals in the US).
There is some write-up blaming four of us for (some part of) the IBM clone controller business. past posts mentioning clone controllers http://www.garlic.com/~lynn/subtopic.html#360pcm

This claims a major motivation for the Future System effort was the clone controller business. There is also some implication that a major design criterion for SNA was tight integration between NCP & VTAM ... a continuation of the FS goals: http://www.ecole.org/Crisis_and_change_1995_1.htm And then the Ferguson & Morris book, Computer Wars: The Post-IBM World, Time Books, 1993, mentions that the distraction of Future System and the killing off of work on 370 products ... and then, after Future System imploded, the delays in getting 370 efforts restarted ... allowed clone processors to gain a market foothold.

before there was ms/dos http://en.wikipedia.org/wiki/MS-DOS
there was seattle computer http://en.wikipedia.org/wiki/Seattle_Computer_Products
before seattle computer there was cp/m http://en.wikipedia.org/wiki/CP/M
before there was cp/m there was cp67/cms http://en.wikipedia.org/wiki/CP/CMS
kildall worked on cp67/cms at npg (gone 404, but lives on at the wayback machine) http://web.archive.org/web/20071011100440/http://www.khet.net/gmc/docs/museum/en_cpmName.html npg http://en.wikipedia.org/wiki/Naval_Postgraduate_School

there is also folklore that the person that did mp/m-80 had done a lot of work on cp67/cms -- virtualization experience starting Jan1968, online at home since Mar1970
Re: Explination of S0C4 reason code 4 and related data areas
shmuel+ibm-m...@patriot.net (Shmuel Metz, Seymour J.) writes: This is a case where I prefer the Burroughs notation; they called the equivalent flag the presence bit, which is more neutral.

page transfers/io are done with channel programs, which have real addresses. virtual memory has segment and page tables that map specific virtual memory pages to real pages. when a virtual page is selected for replacement, the corresponding page table entry invalid bit is set, the contents of the real page are written out, the replacing virtual page is read into the real page location, and then the corresponding page table entry invalid bit (for the replacing virtual page) is turned off.

this is a copy of a presentation on cp/40 given at the 1982 SEAS meeting http://www.garlic.com/~lynn/cp40seas1982.txt where they modified a standard 360/40 to support virtual memory. In the 360/40, there were 64 4kbyte real pages. The added hardware gave each 4k real page a virtual address space identifier (somewhat analogous to storage keys) plus a virtual page number. Running a virtual machine involved loading a virtual address space identifier into a control register. In virtual address mode ... all real pages would be interrogated for a matching virtual address space identifier plus matching virtual page number.

cp/40 morphed into cp/67 when the standard 360/67 with virtual memory hardware became available ... which looked much more like the 370 virtual memory segment and page tables ... that continue through the various generations.

I've claimed that the 801/risc effort was at least partially a reaction to the enormous complexity of the (failed) future system effort (which was going to completely replace 360/370 ... but imploded before even being announced) ... some past posts http://www.garlic.com/~lynn/submain.html#futuresys ... where 801/risc went to the opposite extreme (to FS) by eliminating a lot of hardware complexity and simplifying the hardware.
One of the things in 801/risc was inverted pagetables ... which are effectively much more like the 360/40 virtual memory implementation. The 801/risc romp chip, instead of having a virtual address space identifier, had a 12bit virtual segment identifier (aka STE associative ... rather than the 360/370 STO associative). romp had 32bit virtual addressing with 16 256mbyte segments. When going to run something ... the segment identifiers were loaded into the 16 segment registers. Running in virtual address mode would peel off the high bits of the virtual address to index the corresponding segment register, pull out the segment identifier ... and then use the virtual segment identifier plus the segment virtual page number to look for the associated real page number. In 801/ROMP, rather than turning off the invalid bit ... to indicate a virtual page is available ... the corresponding segment-id plus segment-virtual-page-number is loaded (for the corresponding real page). misc. past posts mentioning 801, risc, romp, rios, power, power/pc, etc http://www.garlic.com/~lynn/subtopic.html#801 -- virtualization experience starting Jan1968, online at home since Mar1970
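The inverted-page-table idea described above (one entry per real page frame, tagged with segment-id and virtual page number, rather than a forward table per address space) can be sketched as follows. This is purely illustrative; the class, names, and linear search are mine, not the ROMP hardware, which used hashed lookup:

```python
# Illustrative sketch of an inverted page table with segment registers:
# translation tags each REAL frame with (segment-id, virtual page).
PAGE = 4096
SEG_SIZE = 256 * 2**20          # 16 x 256MB segments in a 32-bit address

class InvertedPageTable:
    def __init__(self, nframes):
        self.frames = [None] * nframes   # frame -> (seg_id, vpage), or None if free

    def load(self, frame, seg_id, vpage):
        """Make a virtual page resident by tagging the real frame with its
        identity (the analogue of turning a forward PTE's valid bit on)."""
        self.frames[frame] = (seg_id, vpage)

    def translate(self, seg_regs, vaddr):
        seg_id = seg_regs[vaddr // SEG_SIZE]        # high bits pick a segment register
        vpage = (vaddr % SEG_SIZE) // PAGE          # page number within the segment
        for frame, tag in enumerate(self.frames):   # associative search over frames
            if tag == (seg_id, vpage):
                return frame * PAGE + (vaddr % PAGE)
        raise LookupError("page fault")

ipt = InvertedPageTable(nframes=8)
seg_regs = [0x7A3] + [0] * 15    # segment ids loaded when dispatching an address space
ipt.load(frame=5, seg_id=0x7A3, vpage=2)
# virtual page 2 of segment 0x7A3 resolves to real frame 5:
assert ipt.translate(seg_regs, 2 * PAGE + 100) == 5 * PAGE + 100
```

The table size is proportional to real memory (one entry per frame), not to the number or size of address spaces, which is exactly the property the 360/40 scheme and 801/ROMP shared.
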
Re: PDF vs. Bookie
mitchd...@gmail.com (Dana Mitchell) writes: And another disparaging remark against IBM's 'Information Center': I'm trying to use two different levels for IBM i this morning, both of them are stuck on 'indexing', then they eventually fail. Information center indeed!

a couple recent posts mentioning ibm's Information Center in a thread about user-friendly http://www.garlic.com/~lynn/2012.html#11 Who originated the phrase user-friendly? http://www.garlic.com/~lynn/2012.html#12 Who originated the phrase user-friendly? http://www.garlic.com/~lynn/2012.html#27 From Who originated the phrase user-friendly thread

in the early 80s, it was a bunch of vm/4341s going into branch offices for sales & marketing (frequently identified with an IC suffix in their internal network node-name) ... augmenting the online vm/hone sales & marketing support systems ... including a couple old emails http://www.garlic.com/2012.html#email810921 http://www.garlic.com/2012.html#email820826 http://www.garlic.com/2012.html#email820827 -- virtualization experience starting Jan1968, online at home since Mar1970
Re: A z/OS Redbook Corrected - just about!
mike.a.sch...@gmail.com (Mike Schwab) writes: Since they have AIX on Power, how about zIX or MIX. One concern I have is that an operating system name without z/OS implies a completely independent operating system, not a subsystem of z/OS.

re: http://www.garlic.com/~lynn/2012e.html#13 A z/OS Redbook Corrected - just about!

besides the OSF and POSIX support on MVS folklore, a recent tale of the origin of AIX http://www.garlic.com/~lynn/2012e.html#2 ... AIX was done for IBM by the company that had done the port of ATT unix to the ibm/pc as PC/IX ... i.e. ROMP was originally going to be the follow-on to the Displaywriter ... but when that was canceled, it was redirected to the unix workstation market (as PC/RT with AIX). RS/6000 and Power were then follow-ons to PC/RT.

the above also mentions that the people that had done the initial development for what became the SUN workstation had come to IBM about producing the product. There was a meeting in Palo Alto that included several organizations from around the company ... afterwards several organizations all claimed that they were doing something better ... and IBM declined to come out with the SUN workstation.

Palo Alto had been working on a port of Berkeley's unix work-alike (BSD) to the mainframe ... but later got redirected to port it to the PC/RT ... coming out as AOS (an alternative to AIX). I had done an internal advanced technology conference in the spring of 1982 ... one of the first since the mid-70s ... when there was lots of corporate retrenching after the failure of the Future System effort ... some past posts http://www.garlic.com/~lynn/submain.html#futuresys Presentations included a BSD implementation on vm/370, the TSS/370 UNIX PRPQ for ATT, and CMS running under MVS ... old post regarding the adtech conference http://www.garlic.com/~lynn/96.html#4a

Palo Alto was also working with UCLA and its unix work-alike (Locus) ... and ported it to both mainframe and ps2 ... which was released as AIX/370 and AIX/386. Another unix work-alike was MACH, done at CMU ...
a derivative of which can still be found as the Apple operating system. recent tale of mainframe C language http://www.garlic.com/~lynn/2012d.html#64 Layer 8: NASA unplugs last mainframe

There was a joint project between IBM and ATT for unix on the mainframe ... purely for ATT internal use ... it involved doing a stripped-down TSS/370 (the residual, limited-availability follow-on to TSS/360) kernel with the unix higher levels layered on top. Part of the TSS/370 strategy was to provide an alternative to Amdahl's UTS (unix) on Amdahl processors for a large number of ATT installations.

As an aside, the person responsible for UTS (code-named GOLD during development, for Au, or Amdahl Unix) had done a port of unix to the ibm mainframe at school. When he was graduating, some of us attempted unsuccessfully to get IBM to make him an offer. -- virtualization experience starting Jan1968, online at home since Mar1970
Re: Malicious Software Protection
scott_j_f...@yahoo.com (Scott Ford) writes: You can't be serious...never never heard of anyone developing a virus for mainframes, I understand the fear, but firewalls, network apps do rat in front of the mainframe this discussion group, mailing list originated on BITNET ... recent discussion (with wiki references) http://www.garlic.com/~lynn/2012e.html#19 Inventor of e-mail honored by Smithsonian really long winded recent post in linkedin MainframeZone group http://www.garlic.com/~lynn/2012d.html#49 Do you know where all your sensitive data is located? mentions the xmas exec nov1987 ... reference from vmshare archive http://vm.marist.edu/~vmshare/browse?fn=CHRISTMA&ft=PROB was almost exactly a year before the morris worm on the internet. -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
Re: A z/OS Redbook Corrected - just about!
dickbond...@gmail.com (Dick Bond) writes: I agree with Chris Mason. IBM should have never started calling it USS - how about a simple definitive abbreviation, like zUnix. IBM adores putting a z in front of everything (for some clueless reason) so why should their version of Unix be any different? back when MVS posix support started ... it was in the unix wars period http://en.wikipedia.org/wiki/Unix_wars which also resulted in the formation of OSF http://en.wikipedia.org/wiki/Open_Software_Foundation to produce a posix, copyright-free implementation while we were doing the HA/CMP product http://www.garlic.com/~lynn/subtopic.html#hacmp we also did some consulting for the executive that was behind doing the MVS posix implementation ... it was one of the many efforts to try and get around the strangle-hold that the communication group had on the datacenter ... attempting to reverse lots of stuff that was fleeing the mainframe to more distributed-computing-friendly platforms. -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
Re: Execution Velocity
shmuel+ibm-m...@patriot.net (Shmuel Metz , Seymour J.) writes: Virtual multiprocessors go back to the late 1950's[1] and early 1960's[2], although IBM and Intel came late to the game. [1] Honeywell 800 [2] Peripheral Processors on CDC 6600 re: http://www.garlic.com/~lynn/2012d.html#73 Execution Velocity not 50s, but this old post on dual i-stream 195 keeping execution units fed: http://www.garlic.com/~lynn/2004e.html#1 A POX on you, Dennis Ritchie!!! in a.f.c. news group ... references the 1-bit flag in the pipeline for red/black instruction streams (part of a common pipeline keeping track of which instructions/registers were associated with which instruction stream) ... and the belief that it appeared in IBM's ACS project from the early 60s ... but I can't find any mention at the ACS reference site (IBM Advanced Computing Systems -- 1961 - 1969) http://www.cs.clemson.edu/~mark/acs.html however the above does mention ACS and the out-of-order instruction execution that appears in the 360/91. Part of the current environment is deep-pipeline, dual instruction streams, out-of-order execution, speculative execution (re: conditional branches) and decomposing into RISC micro-ops ... providing execution units with a queue of many tens of pending operations for execution ... so that if some operation stalls with a cache miss (and requires the latency of a storage fetch) ... there are a large number of other pending operations that may be executed (helping mask the cache-miss, main-storage fetch delay/latency). ACS timeline http://www.cs.clemson.edu/~mark/acs_timeline.html as in the above timeline, Amdahl resigns from IBM sept. 1970 ... supposedly as a result of the decision not to do ACS. Claims have been that Amdahl was not aware of the subsequent Future System effort that was going to completely replace all 370 ... but at a seminar he gave at MIT in the early 70s (several of us at the science center attended), he was asked what justifications he used with investors for his new clone company. 
He mentioned that customers had already invested several billion dollars in 360 software development, and even if IBM were to completely walk away from 360(/370), that software base would keep him in business until the end of the century (which could be claimed to be a veiled reference to Future System). misc. past posts mentioning Future System http://www.garlic.com/~lynn/submain.html#futuresys for more topic drift ... additional Future System details: http://www.cs.clemson.edu/~mark/fs.html This claims the motivation for the Future System effort was clone controllers: http://www.ecole.org/Crisis_and_change_1995_1.htm from above: IBM tried to react by launching a major project called the 'Future System' (FS) in the early 1970's. The idea was to get so far ahead that the competition would never be able to keep up, and to have such a high level of integration that it would be impossible for competitors to follow a compatible niche strategy. However, the project failed because the objectives were too ambitious for the available technology. Many of the ideas that were developed were nevertheless adapted for later generations. Once IBM had acknowledged this failure, it launched its 'box strategy', which called for competitiveness with all the different types of compatible sub-systems. But this proved to be difficult because of IBM's cost structure and its R&D spending, and the strategy only resulted in a partial narrowing of the price gap between IBM and its rivals. ... snip ... The Ferguson & Morris book, Computer Wars: The Post-IBM World, Times Books, 1993, mentions that the distraction of Future System (and the killing off of work on 370 products) ... and then, after Future System was killed, the delay in getting 370 efforts restarted ... allowed clone processors to gain a market foothold. some discussion of restarting 370 efforts (3033 & 3081) http://www.jfsowa.com/computer/memo125.htm for other topic drift ... as undergraduate in the 60s, I had extended cp67 terminal support to include tty/ascii ... 
and tried to do something with the 2702 terminal controller that it couldn't quite do. This was somewhat behind the university effort to do a clone controller (started with an Interdata/3 minicomputer) that would (at least) support both automatic terminal type identification as well as automatic line speed identification. this was picked up as a product and sold as a clone controller by Interdata (later bought by Perkin-Elmer and marketed under their brand name) ... four of us at the univ. got written up as responsible for (some part of) the clone controller business. misc. past posts http://www.garlic.com/~lynn/subtopic.html#360pcm -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
Re: megabytes per second
Anne Lynn Wheeler l...@garlic.com writes: data-transfer channel program. Cache operation was also write store-through ... aka synchronous to disk ... and no indication that 3880 controller would do its own seek operation (to move to different track for pre-fetch) independent of what was explicit from some channel program. re: http://www.garlic.com/~lynn/2012d.html#72 megabytes per second http://www.garlic.com/~lynn/2012d.html#75 megabytes per second also http://www.garlic.com/~lynn/2012d.html#73 Execution Velocity http://www.garlic.com/~lynn/2012d.html#74 Execution Velocity at least by the 80s, some processors had started to do store-into caches (rather than store-through) for additional performance ... the store operation happened in cache and the write could be done asynchronously at some later point without stalling instructions (with store operations). The issue with disk caches (store-into for later writing, as opposed to store-through) was that processor cache memory data was typically viewed as ephemeral ... i.e. in a power failure, changes weren't expected to survive. However, for disk caches ... store-into had to wait until there was (typically redundant) battery-backed and/or flash memory ... since data written to disk was expected to survive power failure (it would survive in cache until power was sufficient to eventually write to disk). note that ibm dasd/channel operation used to have a peculiar power-failure failure mode for a long time. data to be written was in processor memory and if power failed in the middle of the write operation ... there could be sufficient power for the disk to complete the write operation ... but not enough to power processor memory and the transfer of data to disk. The symptom was that the disk would propagate the write with all zeros ... and then write the correct error code for the partially zeroed record (no hardware error condition). 
there were even countermeasure system designs through the 80s where all physical records were guaranteed to end in non-zero (system) data ... which wouldn't be seen by applications ... as a validity check for a power-failure, partially valid record with propagated zeros. FBA drives developed a strategy that there was sufficient power and data to always complete a write operation, once it started. Once all CKD DASD migrated to simulation on top of FBA (there hasn't been any real CKD DASD for decades) ... along with various intermediate cache memory ... the problem has been mitigated. misc. past posts mentioning CKD & FBA http://www.garlic.com/~lynn/submain.html#dasd misc. past posts mentioning getting to play disk engineer in bldgs. 14&15 http://www.garlic.com/~lynn/subtopic.html#disk -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
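a minimal sketch of the trailing non-zero byte countermeasure described above (the sentinel value, record framing, and function names are illustrative assumptions, not the actual system implementation):

```python
# Sketch: the system appends a guaranteed non-zero byte to every physical
# record, so a record whose tail was propagated as zeros by a power-fail
# partial write can be detected on read.

SENTINEL = 0xFF  # any guaranteed non-zero value works

def write_record(payload: bytes) -> bytes:
    # physical record = application payload + hidden non-zero system byte
    return payload + bytes([SENTINEL])

def read_record(physical: bytes) -> bytes:
    # a zero trailing byte means the write never completed correctly
    if not physical or physical[-1] == 0:
        raise IOError("partial write detected: record tail is zero")
    return physical[:-1]   # applications never see the system byte

good = write_record(b"payroll data")
assert read_record(good) == b"payroll data"

# simulate a power-fail write that completed with propagated zeros
damaged = good[:6] + bytes(len(good) - 6)
try:
    read_record(damaged)
except IOError as e:
    print(e)
```

the check is cheap because only the final byte needs inspecting; the drive's own error-correcting code can't help here, since (as noted above) the controller wrote a correct error code over the zero-filled data.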
Re: Pre-Friday fun: Halon dumps and POK Resets
maryanne4...@gmail.com (Mary Anne Matyaz) writes: Customer designs a new datacenter, moves in, has an issue where a guy in a backhoe clips the incoming power source. Customer is patting themselves on the back for the wisdom of having two separate power lines, one on each side of the building. early days of the internet ... connectivity out of the boston area was set up with nine(?) different 56kbit links with diverse routing (telco provisioning) ... physically separate lines & exchanges. over the years, the telco company eventually consolidated all nine links until they were being carried on a common fiber-optic trunk ... one day, someplace in Connecticut, a backhoe clips the fiber-optic trunk ... and boston was partitioned from the rest of the internet. ... one customer we were marketing ha/cmp to ... had a major datacenter in a large metropolitan area ... carefully chosen to be in a building that was fed by multiple water mains down different sides of the building, four different power feeds from different physical power substations and four different telephone trunks to different physical central exchanges (all different sides). one day a transformer in the basement blew ... contaminating the bldg. with PCBs ... everything was off and the bldg. had to be evacuated. ha/cmp had started work on supporting physically separate sites and I coined the marketing terms disaster survivable and geographic survivable (to differentiate from disaster/recovery). I get asked to write a section in the corporate continuous availability strategy document ... but the section gets pulled when both Rochester and POK complain (that they couldn't meet the requirements, at least at that time). misc. past posts mentioning ha/cmp http://www.garlic.com/~lynn/subtopic.html#hacmp -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
Re: megabytes per second
ronjhawk...@sbcglobal.net (Ron Hawkins) writes: I didn't get to work with the 3880-13, but with the 3880-23 I think I recall sequential pre-fetch was initially fetching three tracks, using a wrap-around buffer to keep track of the last block read and maintaining two tracks beyond the last track accessed in cache. With 3990-3 I think this increased to five tracks, and I have no idea about 3990-6 and beyond. re: http://www.garlic.com/~lynn/2012d.html#72 megabytes per second http://www.garlic.com/~lynn/2012d.html#75 megabytes per second http://www.garlic.com/~lynn/2012d.html#76 megabytes per second this mentions that sequential detect is new function as of June1996 for 3990-6 ftp://public.dhe.ibm.com/eserver/zseries/zos/vse/pdf3/veioperf.pdf -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
Re: megabytes per second
ronjhawk...@sbcglobal.net (Ron Hawkins) writes: I'm afraid sequential pre-fetch kind of makes your point invalid for sequential IO. when ibm first came out with the full-track cache (3880-13/sheriff) ... it advertised a 90% hit rate ... based on a 3380 track, 10 records per track and sequential reads, where the first sequential read on a track would fetch the full track ... and then the next 9 sequential reads would already be in cache. however, if the application went to full-track buffering ... the same exact application and data would go from a 90% hit ratio to a zero percent hit ratio (effectively each track would be read as a whole, streamed through the 3880-13 cache right into processor memory ... and then there wouldn't be any additional cache references for the track). double full-track buffering would then overlap retrieval of the following track with the processing of the records in the previous track (masking disk retrieval latency, akin to instruction execution with prefetch and/or out-of-order execution to mask processor cache miss latency). -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
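the hit-ratio arithmetic above can be sketched as a toy model (track and record counts are illustrative; the cache is reduced to "most recently staged track"):

```python
# Hit-ratio arithmetic for a full-track controller cache, per the 3880-13
# scenario above: record-at-a-time sequential reads vs host full-track
# buffering against the same data.

def controller_hit_ratio(tracks, records_per_track, host_fulltrack_buffering):
    hits = misses = 0
    cached_track = None          # controller caches the most recently staged track
    for t in range(tracks):
        if host_fulltrack_buffering:
            # host reads the whole track in one I/O; the data streams through
            # the controller cache and is never referenced there again
            misses += 1
        else:
            for r in range(records_per_track):
                if cached_track == t:
                    hits += 1
                else:
                    misses += 1
                    cached_track = t   # first record read stages the full track
    return hits / (hits + misses)

# record-at-a-time sequential reads: 9 of every 10 records come from cache
print(controller_hit_ratio(100, 10, False))   # 0.9
# host full-track buffering: every I/O bypasses any cache benefit
print(controller_hit_ratio(100, 10, True))    # 0.0
```

same application, same data: the advertised 90% collapses to 0% purely because the host changed its I/O granularity.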
Re: Execution Velocity
m42tom-ibmm...@yahoo.com (Tom Marchant) writes: To look at it another way, cache exists because main storage is very slow compared to the processor speed. Without cache, the processor would not be able to execute instructions nearly as fast as it could. Cache allows data from main storage to be kept very close to the processor in extremely fast memory, allowing the processor to execute instructions as fast as possible. there have been observations that the latency of a cache miss (elapsed time to retrieve data from main storage), measured in processor cycles, is on the order of 60s disk access ... measured in 60s processor cycles. the effort in the 60s to improve throughput was to have multitasking and/or multithreading ... to be able to switch to some other work ... while waiting for disk accesses. a lot of work was done in this area starting in the 80s ... especially for risc processors, on out-of-order execution and speculative execution ... allowing execution of other instructions (that had their data in cache) ... while a stalled instruction was waiting on a cache-miss. The 60s equivalent was not simply trying to make infinitely large storage (as countermeasure to serialized miss latency), but multiprogramming to be able to switch to something else while waiting. there was work on hyperthreading ... independent instruction streams feeding common execution units ... so that while one instruction stream stalled (waiting on a cache miss), there could be instruction execution from the other independent instruction stream ... basically simulating multiple processors ... but w/o actually doubling all of the hardware. possibly one of the original hyperthreading efforts was for the 370/195 ... which didn't actually ship. The 370/195 was pipelined and allowed out-of-order execution ... but didn't have speculative execution and/or branch prediction ... so conditional branches stalled the pipeline. Peak throughput of the 195 was approx. 10mips ... 
but most codes only got 5mips because of the abundance of conditional branches. The 195 hyperthreading effort was to simulate multiprocessing with two instruction streams, PSWs, registers, etc ... but not twice the hardware (instructions in the pipeline would have a one-bit flag indicating which instruction stream they were associated with). Two (simulated multiprocessor, independent) instruction streams ... each executing in the pipeline at 5mips (because of stalls waiting for conditional branches) ... would be able to keep the execution units operating at an effective throughput of 10mips. One of the issues with the current and past several generations of x86 (CISC) chips ... is that they are actually RISC chips ... with a hardware layer translating the CISC instructions into RISC micro-ops for actual execution. This has resulted in significantly closing the MIP thruput rate gap of CISC vis-a-vis traditional RISC. The current generation of chips have cache sizes larger than 60s processor memory sizes. However, the relative performance degradation of a cache miss today is about the same as a virtual memory page fault in the 60s. Some applications today are tuned to maintain their working set in cache (and minimize cache misses) ... in much the same way that virtual memory apps in the 60s were tuned to maintain their working sets in real storage (and minimize page faults). The science center ... some past posts http://www.garlic.com/~lynn/subtopic.html#545tech besides having done a lot of virtual memory work with (virtual machine) cp67 in the 60s and early 70s ... also did a lot of work with performance monitoring, performance simulation, and workload profiling (some of which then evolved into capacity planning). The science center also did a lot of paging algorithm and paging simulation work. One such was a full instruction trace that was then fed into a paging simulator ... 
that also had support for doing semi-automatic program re-organization to optimize operation in a virtual memory environment. This was eventually released as a product called VS/Repack in 1976. However, even before it was released, a lot of products internally made extensive use of it to improve their operation ... including a lot of OS/360 applications, subsystems, and products making the transition to the virtual storage environment. Some of the high-use, performance-sensitive applications do something similar today ... but from the standpoint of improved throughput in a processor cache environment (i.e. today's cache has become the 60s real storage for 60s virtual memory page fault systems). For a totally different take ... there have been some current high-throughput processors done w/o caches ... but with something like 128 hyper-threads, aka 128 independent instruction streams (simulated multi-processors) ... so that while one thread's execution is stalled waiting for data, the hardware is able to switch to some thread that has an instruction with data ready for execution (think of it as hardware multiprogramming and hardware
Re: Server time Protocol and CICS
johnwgilmore0...@gmail.com (John Gilmore) writes: The original design of CICS envisaged making elegant use of the announced facilities of OS/MVT. When the time came to implement CICS 1) some of these facilities were not yet available and 2) some of them did not yet work reliably. The implementers of CICS were thus forced to take a RYO approach. They in effect gutted an MFT partition and installed their own functionally MVT-like facilities in it, calling their storage-management interfacing macros GETMAIN and FREEMAIN, etc., etc. The result was in many ways a superb table-driven system, one that improved significantly over the succeeding years. Its chief 'defect' was the implementation of its user interfaces as a set of assembly-language macros, which meant that applications run under it had to be written in assembly language. This was 'remedied' in various ways, some elegant and some not, and finally by introducing a 'command'---as opposed to the old 'macro'---level CICS; ultimately it became possible to write CICS APs even in RPG, although these could not be even quasi-reentrant. The major marketing obstacles to its use by other than assembly-language programmers were thus gradually removed. In my own doubtless élitist view CICS never fully recovered from these initiatives. They did enable ribbon clerks to write CICS APs, and opinions about whether that was beneficial differ widely. What is not in my view open to argument is that criticism of the present state of CICS and other such subsystems that is not diachronic is all but certain to be irrelevant. We are all, ineluctably, creatures of our experience. If you don't know the history of CICS, IMS, DB2, whatever, mug it up if you wish to discuss that subsystem; and stai zitt' until you have mastered it. (Controversy will not thus be eliminated or perhaps even much reduced; equally informed views can, and do, differ sharply; quaint irrelevance will be reduced). 
I've characterized it more as: pathlengths for os/360 were so enormous that there was no way to do light-weight operations. CICS effectively batched a large percentage of os/360 operations at startup ... and then used its own lightweight versions for actual operation. Disclaimer: the univ. library got an ONR grant to do an online catalog and used part of the funds to get a 2321 datacell. It was also selected as one of the beta test sites for the CICS program product (1969) ... and I got tasked with supporting/debugging the deployment. Part of the CICS birthing experience was shooting some number of bugs related to the library choosing different BDAM options than the site where CICS was originally developed. misc. past posts mentioning CICS and/or BDAM http://www.garlic.com/~lynn/submain.html#cics other cics history (gone 404 but lives on at the wayback machine): http://web.archive.org/web/20080123061613/http://www.yelavich.com/history/toc.htm The Evolution of CICS: CICS Services for Performance (1968) http://web.archive.org/web/20060325095459/http://www.yelavich.com/history/ev196805.htm from above: In the very beginning, CICS attempted to use services provided by the operating system(s) (PCP, MFT and MVT), however it quickly proved to be unacceptable because of the relatively high overhead of those services (CPU cycles and storage consumed with regard to the particular service). ... snip ... I've made similar claims (about a large part of the design involving countermeasures for heavyweight os/360 services) ... old email from Jim Gray wanting me to take responsibility for consulting with the IMS group when he was leaving for Tandem: http://www.garlic.com/~lynn/2007.html#email8011016 IMS wiki http://en.wikipedia.org/wiki/IBM_Information_Management_System as to DB2 ... the original relational/sql was done at sjr on vm370 some number of past posts http://www.garlic.com/~lynn/submain.html#systemr the standard folklore was that we were able to do tech. 
transfer from sjr to endicott for sql/ds under the radar when the corporation was distracted with the official DBMS product, EAGLE. Then when EAGLE imploded, there was a request about how fast could there be a port to MVS ... eventually turning into DB2. for random other DB2 lore ... one of the people mentioned in this Jan92 meeting in Ellison's conference room claims to have done the SQL/DS transfer from Endicott back to STL http://www.garlic.com/~lynn/95.html#13 ... separate from the SJR work. Additional relational/SQL lore: http://www.mcjones.org/System_R/SQL_Reunion_95/index.html -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
Re: TINC?
edgould1...@comcast.net (Ed Gould) writes: We used to run MFT and everyday we changed the partition sizes without an IPL. Now if you are saying to change from MFT to MVT then indeed an IPL was needed, as well PCP to MFT (or for that matter MVT)? The OS is the key issue and indeed VM you can ipl an OS and it probably does not require an IPL(machine wise) a virtual machine needs to be brought in . Maybe I am missing some distinction here. recent post about vm370 handshaking being done at the univ. for MVT http://www.garlic.com/~lynn/2012c.html#16 5 Byte Device Addresses? vm370 has a function to save a virtual memory image from a virtual machine and then restore it using the IPL command (the ipl-by-name function), sort of like checkpoint/restart ... but for a system. they identified a place in MVT where everything was quiesced and could jump back in ... providing a hot-restart that significantly cut MVT IPL elapsed startup time. note that one of the customers that had been sold a 360/67 to run tss/360 was boeing huntsville ... tss/360 was never fully realized ... and many customers ran the machine as a 360/65 with os/360. boeing huntsville had a 360/67 two-processor multiprocessor configured to run as two independent single-processor machines ... with MVT supporting several 2250M1s and long-running graphic applications. The problem was that MVT had horrible storage fragmentation with long-running applications. As a result, Boeing Huntsville had modified release 13 MVT to run in virtual memory mode but w/o paging. The virtual memory hardware was used to re-order storage addresses as compensation for the significant MVT storage fragmentation associated with long-running applications. This is similar ... but different to the justification for adding virtual memory as standard to all 370s ... and the move from MVT to SVS ... discussed in this past post: http://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory part of quote in above: Evans around. 
For reasons unknown to me, the TSO group had the flip charts and wallboard z used. The clincher was the ability to run 16 initiators simultaneously on a 1 megabyte system, taking advantage of the fact that MVT normally used only 25% of the memory in a partition. The resulting throughput gain (compared to real hardware) was substantial enough to convince Bob. It helped that Tom Simpson and Bob Crabtree had hosted an MFT II system TSS-Style and shown similar performance gains. Of course, since CP67 was a pickup group they weren't considered and we had the OS/VS adventure instead. ... snip ... Simpson and Crabtree had done HASP ... and then Simpson went on to do a modified MFT-II implementation using the TSS-style paged-mapped filesystem paradigm, called RASP (a significant performance advantage over the approach taken by SVS&MVS, preserving the OS/360 disk paradigm). This wasn't picked up and Simpson left for Amdahl, where there was a clean-room do-over. There was legal action about theft of code (even tho there was no intention of ever using RASP) ... and the resulting court audits only found a couple of incidental examples of identical code. a couple old emails mentioning the RASP do-over: http://www.garlic.com/~lynn/2011e.html#email810408 http://www.garlic.com/~lynn/2011e.html#email820907 http://www.garlic.com/~lynn/2011e.html#email870302 a few past posts mentioning RASP: http://www.garlic.com/~lynn/2000f.html#68 TSS ancient history, was X86 ultimate CISC? designs) http://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC? designs) http://www.garlic.com/~lynn/2000f.html#70 TSS ancient history, was X86 ultimate CISC? designs) http://www.garlic.com/~lynn/2001b.html#73 7090 vs. 7094 etc. http://www.garlic.com/~lynn/2002g.html#0 Blade architectures http://www.garlic.com/~lynn/2002i.html#63 Hercules and System/390 - do we need it? 
http://www.garlic.com/~lynn/2002j.html#75 30th b'day http://www.garlic.com/~lynn/2002q.html#31 Collating on the S/360-2540 card reader? http://www.garlic.com/~lynn/2003e.html#65 801 (was Re: Reviving Multics http://www.garlic.com/~lynn/2005p.html#44 hasp, jes, rasp, aspen, gold http://www.garlic.com/~lynn/2006f.html#19 Over my head in a JES exit http://www.garlic.com/~lynn/2006w.html#24 IBM sues maker of Intel-based Mainframe clones http://www.garlic.com/~lynn/2006w.html#28 IBM sues maker of Intel-based Mainframe clones http://www.garlic.com/~lynn/2007m.html#69 Operating systems are old and busted http://www.garlic.com/~lynn/2010i.html#44 someone smarter than Dave Cutler http://www.garlic.com/~lynn/2010o.html#0 Hashing for DISTINCT or GROUP BY in SQL http://www.garlic.com/~lynn/2010p.html#42 Which non-IBM software products (from ISVs) have been most significant to the mainframe's success? http://www.garlic.com/~lynn/2011.html#85 Two terrific writers .. are going to write a book http://www.garlic.com/~lynn/2011e.html#26 Multiple Virtual Memory http://www.garlic.com/~lynn/2011e.html#47 junking CKD; was
Re: Writing article on telework/telecommuting
martin_pac...@uk.ibm.com (Martin Packer) writes: One experience from teleworking which should appeal to mainframers: By and large 3270 is the least demanding data stream - so TSO / ISPF goes fast even on broadband as crummy as mine. (It's all the other junk that runs really slowly when the wet string dries out.) Now I may be in a minority but I bet this counts for lots of people. Anyhow, having telecommuted for more than 10 years I'm looking forward to this article: You are not alone is a useful thing to hear. :-) recent thread on user-friendly http://www.garlic.com/~lynn/2012.html#11 Who originated the phrase user-friendly? http://www.garlic.com/~lynn/2012.html#12 Who originated the phrase user-friendly? http://www.garlic.com/~lynn/2012.html#13 From Who originated the phrase user-friendly? http://www.garlic.com/~lynn/2012.html#15 Who originated the phrase user-friendly? http://www.garlic.com/~lynn/2012.html#16 From Who originated the phrase user-friendly thread http://www.garlic.com/~lynn/2012.html#19 From Who originated the phrase user-friendly? http://www.garlic.com/~lynn/2012.html#22 Who originated the phrase user-friendly? http://www.garlic.com/~lynn/2012.html#27 From Who originated the phrase user-friendly thread http://www.garlic.com/~lynn/2012.html#31 Who originated the phrase user-friendly? http://www.garlic.com/~lynn/2012.html#33 Who originated the phrase user-friendly? http://www.garlic.com/~lynn/2012.html#36 Who originated the phrase user-friendly? http://www.garlic.com/~lynn/2012.html#38 Who originated the phrase user-friendly? http://www.garlic.com/~lynn/2012.html#43 Who originated the phrase user-friendly? http://www.garlic.com/~lynn/2012.html#44 Who originated the phrase user-friendly? above thread includes definition from ibm jargon: bad response - n. A delay in the response time to a trivial request of a computer that is longer than two tenths of one second. 
In the 1970s, IBM 3277 display terminals attached to quite small System/360 machines could service up to 19 interruptions every second from a user - I measured it myself. Today, this kind of response time is considered impossible or unachievable, even though work by Doherty, Thadhani, and others has shown that human productivity and satisfaction are almost linearly inversely proportional to computer response time. It is hoped (but not expected) that the definition of Bad Response will drop below one tenth of a second by 1990. ... snip ... part of the discussion was the horrible TSO response and significant performance degradation going from the 3277/3272 combo to the 3278/3274 (although TSO response was so bad that nobody recognized how much worse the 3274 controller was compared to the 3272). I did a vm370/cms system that got .11 second trivial interactive response (the next nearest system with similar load and configuration was more like a quarter second) ... and the 3272 controller added .086 seconds ... resulting in the .196 response seen by the end-user. longer discussion in this post: http://www.garlic.com/~lynn/2001m.html#19 When the Santa Teresa lab (now silicon valley lab) was bursting at the seams in 1980 ... they were remoting 300 people from the IMS group to an offsite building. They had looked at remote 3270 for interactive development ... and the IMS group users found it horrible and unacceptable compared to the vm370/cms service they were getting with channel-attached 3270 controllers in the building. I got roped into doing the channel-extender support for putting remote channel-attached 3270 controllers at the remote site (resulting in them not seeing any difference between local and remote site). Recent discussion touched on some of the topic: http://www.garlic.com/~lynn/2012c.html#41 The transition to web/browser has caused me some annoyance ... because of end-to-end synchronized latency. Nearly a decade ago, I started making extensive use of browser asynchronous tabs ... 
clicking on a URL would be done in the background in a different tab. I even created a process that automated some of this ... fetching hundreds of web pages at a time into background tabs. Then I could immediately switch between different tabs w/o having to experience the synchronous web latency. misc. past posts mentioning browser asynchronous tab operation: http://www.garlic.com/~lynn/2004e.html#11 Gobble, gobble, gobble: 1.7 RC1 is a turkey! http://www.garlic.com/~lynn/2004e.html#54 Is there a way to configure your web browser to use multiple http://www.garlic.com/~lynn/2005n.html#8 big endian vs. little endian, why? http://www.garlic.com/~lynn/2005n.html#41 Moz 1.8 performance dramatically improved http://www.garlic.com/~lynn/2005o.html#13 RFC 2616 change proposal to increase speed http://www.garlic.com/~lynn/2006q.html#51 Intel abandons USEnet news http://www.garlic.com/~lynn/2007m.html#8 John W. Backus, 82, Fortran developer, dies http://www.garlic.com/~lynn/2008b.html#32 Tap and faucet and spellcheckers http://www.garlic.com/~lynn/2008b.html#35 Tap and
Re: Writing article on telework/telecommuting
charl...@mcn.org (Charles Mills) writes: I've been doing remote mainframe development since 1200 baud dial-up was state-of-the-art. You need almost no bandwidth at all for 3270. You can refresh an entire 3270 screen with at most 4K or so characters, and ISPF does a pretty clever job of minimizing the number of characters that must actually be sent. OTOH a millisecond glitch on your connection is nothing for e-mail and almost nothing for Web browsing, but can be a disaster for 3270 over VPN. The new and improved TSO reconnect is a HUGE help. re: http://www.garlic.com/~lynn/2012d.html#19 Writing article on telework/telecommuting I started in Mar1970 at home with a 134.5 baud 2741. in the early 80s, for the corporate home terminal program with IBM PCs and 3270 emulation ... a PC and vm370 mainframe software driver (PCTERM) was written that 1) did huffman compression of the data actually sent and 2) kept a cache of recently used strings at both ends (and attempted to transmit the string cache index in lieu of the actual string). a few past PCTERM posts http://www.garlic.com/~lynn/2003n.html#7 3270 terminal keyboard?? http://www.garlic.com/~lynn/2003p.html#44 Mainframe Emulation Solutions http://www.garlic.com/~lynn/2006y.html#0 Why so little parallelism? http://www.garlic.com/~lynn/2008n.html#51 Baudot code direct to computers? the corporate home terminal program also came up with special 2400 baud encrypting modems (the handshake dynamically generating a unique key for each dialup session). mid-80s, I tried to bring an NCP emulator to market that masked most of the traditional SNA shortcomings ... it used real networking and did a lot of things not found in traditional SNA implementations (all outboard of the host VTAM) ... part of a presentation I made to the Oct86 SNA architecture review board: http://www.garlic.com/~lynn/99.html#67 of course it caused a huge amount of internal political problems and got killed ... 
but it wasn't terrible, unlike the later spoofing that was done in the 3737 ... to try and get SNA host-to-host transfer close to handling a T1 link ... old email http://www.garlic.com/~lynn/2011g.html#email880103 http://www.garlic.com/~lynn/2011g.html#email880606 http://www.garlic.com/~lynn/2011g.html#email881005 recently discussed in this post: http://www.garlic.com/~lynn/2012c.html#41 now for the internet, it frequently isn't so much the amount of data ... but the latency for round-trips. HTTP started out as a connectionless protocol built on top of TCP reliable sessions ... with TCP session setup/teardown for every operation. in the mid-90s as webservers started to ramp up ... there was a massive scaleup problem. the majority of tcp/ip stack implementations did a linear search of the FINWAIT list (time-out of closed sessions to catch dangling packets) ... originally implemented under the assumption that session setup/teardown was relatively infrequent. However the (mis-)use by HTTP (& HTTPS) was resulting in thousands of entries on the FINWAIT list and large webserver processors spending 95% of CPU running the FINWAIT list. This could be seen in the rapidly increasing number of servers at NETSCAPE ... this was before DNS router load-balancing ... so users needed to manually select different servers. This continued until NETSCAPE switched to a Sequent server (Sequent claimed it had been doing large commercial unix with 20,000 concurrent telnet/tcp sessions and so had already encountered & fixed the FINWAIT list problem). Eventually the other webserver platform vendors also started to deploy FINWAIT fixes. The issue is that TCP requires a minimum seven-packet exchange for session setup/teardown ... and it was effectively being mis-used by the connectionless-oriented HTTP(S) protocol. Later versions of HTTP browsers have attempted to map multiple HTTP connectionless operations over a longer-lived TCP session. The other performance component of more complex webpages ... 
isn't necessarily the aggregate amount of data involved (although inclusion of multiple jpeg images can be a mbyte or more) ... it is that they are multiple different data elements ... each tending to require sequential end-to-end handshake latency. There is continuing work on trying to overlap as many of these operations concurrently as possible to minimize the elapsed time (taking advantage of higher peak transmission rates). recent Google+ thread https://plus.google.com/u/0/102794881687002297268/posts/Z76SXbLVpxs referencing: Happy Webiversary http://www.symmetrymagazine.org/cms/?pid=1000922 for the first webserver outside Europe on the SLAC vm370 system http://www.slac.stanford.edu/history/earlyweb/history.shtml disclaimer ... in the 80s, I was on the XTP technical advisory board where a reliable transport protocol was worked out that required a minimum of only 3 packets (compared to 7 for tcp). some past posts http://www.garlic.com/~lynn/subnetwork.html#xtphsp and had done the rfc1044 support for the mainframe tcp/ip product. Original code was on
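The FINWAIT scaling problem described above can be reduced to a few lines. What follows is a hypothetical toy model (the class and names are illustrative, not code from any actual tcp/ip stack): closed sessions sit on a plain list that is scanned linearly for every arriving packet, which is fine when session turnover is rare but ruinous when HTTP does a full TCP setup/teardown per fetch.

```python
# Hypothetical sketch of a linear FINWAIT list -- illustrative only,
# not code from any actual tcp/ip stack implementation.
class LinearFinwait:
    """Closed sessions kept in a plain list, scanned linearly per packet."""
    def __init__(self):
        self.sessions = []              # (conn_id, expiry) tuples

    def close(self, conn_id, now, timeout=240.0):
        # every short-lived HTTP fetch parks another closed session here
        self.sessions.append((conn_id, now + timeout))

    def packet_arrived(self, conn_id, now):
        # O(n) scan on EVERY packet -- fine when setup/teardown is rare,
        # disastrous with thousands of entries at web-server rates
        scans = 0
        for cid, expiry in self.sessions:
            scans += 1
            if cid == conn_id and expiry > now:
                return True, scans
        return False, scans

fw = LinearFinwait()
for i in range(10_000):                 # 10,000 short-lived HTTP fetches
    fw.close(i, now=0.0)
matched, scans = fw.packet_arrived(9_999, now=1.0)
print(scans)                            # 10000: the whole list is walked
```

A hashed lookup keyed by connection id makes the same check O(1) ... presumably the sort of thing the eventual vendor FINWAIT fixes amounted to.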
Re: PCP - memory lane
chrisma...@belgacom.net (Chris Mason) writes: Back in 1967/8, a colourful customer on the patch to which I belonged was running PCP on a 64K machine and it may have been a 360/40. Our ace young salesman had been responsible for this! IIRC this was considered the opposite of the leading edge but it seemed to work for a while! While we did have a customer - a university - genuinely at the leading edge with a 360/67, I worked at the crowded lower end among the 360/30s running DOS/360. My first responsibility was assisting a customer with the free time converting from a 1400 system to a 360/30 with DOS/360. the univ. had a 709 running tape-to-tape ibsys with a 1401 front-end for unit record (reader-to-tape, tape-to-printer/punch, tapes manually transferred between 1401 drives and 709 drives). student jobs ran in under a second elapsed time. the univ. was sold a 360/67 for tss/360 to replace the 709/1401 combo. As part of the transition, the 1401 was replaced with a 64kbyte 360/30 ... which could run 1401 emulation and the front-end unit-record 1401 MPIO application. I got a student job re-implementing MPIO on the 360/30 (possibly as part of univ. preparation for moving to 360) and got to design my own monitor, my own device drivers, interrupt handlers, scheduling, storage handling, etc. I eventually had a 2,000-card (box of cards) assembler program ... with an assembly option to either run stand-alone or under OS/360 PCP with five DCBs. The stand-alone version took approx. 30mins elapsed time to assemble (early OS/360 PCP) while the OS/360 version took approx. an hour to assemble (each DCB macro taking over five minutes elapsed time to assemble). I would get the datacenter all to myself on the weekend from 8am sat until 8am monday ... 48hrs w/o sleep made it a little hard going to monday classes. The 709/360-30 pair was eventually replaced with the 360/67 and since tss/360 wasn't yet ready, it ran as a 360/65 with os/360 ... initially with the same student fortran jobs taking over a minute elapsed time. 
This was cut approx. in half with the move to HASP and MFT. I was given responsibility for the operating system ... and starting with OS/360 release 11, doing highly customized STAGE2 sysgens under the production operating system. I would carefully rework the output of the STAGE1 sysgen ... so that it could run under the production operating system ... and so that the allocation and move/copy steps were carefully organized to optimize disk arm seek operation (location of datasets on disk as well as location of members within PDS). This gave me approx. a three times additional speedup in three-step FORTGCLG for student jobs ... but still 13secs elapsed time (mostly job scheduler overhead). This is an old post with some results that I presented at the fall68 SHARE meeting in Atlantic City (the univ. had also gotten a copy of cp67 in Jan68 and let me play with it on the weekends; I rewrote large pieces of cp67 during the spring and summer ... which is also included as part of the presentation) http://www.garlic.com/~lynn/94.html#18 later got the univ. of waterloo's WATFOR (for student jobs); the big speedup was eliminating job scheduler overhead for a tray of batched student jobs ... the job scheduler overhead to start a single-step WATFOR run was still longer than the time it took WATFOR to process a whole tray of student jobs (typically around 2500 cards, 50-100 student jobs) ... but finally the 360/67 throughput was more than the 709. -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
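The WATFOR win described above is simple arithmetic: pay the job-scheduler overhead once per tray instead of once per student job. A back-of-envelope sketch with illustrative numbers (the ~13-second scheduler overhead and 50-100 jobs/tray are from the text; the per-job WATFOR time is an assumption for illustration):

```python
# Back-of-envelope: amortizing OS/360 job-scheduler overhead over a tray.
SCHED_OVERHEAD = 13.0   # secs of job-scheduler overhead per job step (from the text)
RUN_PER_JOB = 0.5       # secs of WATFOR compile+execute per student job (assumed)
JOBS = 75               # one tray, 50-100 student jobs (from the text)

one_job_per_step = JOBS * (SCHED_OVERHEAD + RUN_PER_JOB)   # scheduler paid per job
batched_tray = SCHED_OVERHEAD + JOBS * RUN_PER_JOB         # scheduler paid once
print(one_job_per_step, batched_tray)                      # 1012.5 vs 50.5 seconds
```

With these (assumed) numbers the tray finishes roughly 20x faster, consistent with the text's point that starting one WATFOR step cost more than running the whole tray.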
Re: nested LRU schemes
re: http://www.garlic.com/~lynn/2012c.html#34 nested LRU schemes the default 3880-11 page/record cache scenario was a 3081 with 32mbytes of real storage and a 3880-11 controller with 8mbytes of cache. Every record read through the controller cache would initially be in both the cache and 3081 memory. A page that wasn't in 3081 memory wouldn't be likely to also be in the 3880-11 cache ... unless the number of pages in 3880-11 memory/cache was significantly larger than the number of pages in 3081 memory. This is a variation on nested caches, related to nested LRU. I had earlier developed a strategy that I called dup/no-dup (dup for duplicate) to address a similar situation with 2305 fixed-head paging drums. For a constrained/contended paging device ... either maintain the page on disk or in memory (but not both, aka no-duplicate). In the 2305 case, I would de-allocate space on the 2305 when a page was read into memory ... this incurred the requirement that when a page was selected for replacement, it would always have to be written ... even if it hadn't been changed ... a similar strategy was later used for big pages (but for other reasons): http://www.garlic.com/~lynn/2012c.html#28 5 Byte Device Addresses So for the 3880-11, it was possible to do a destructive read ... if the page wasn't already in cache, the read from disk would be a cache-bypass read, while if it was in cache, the cache location would be deallocated after the read. Then the only pages in the 3880-11 cache were pages that were written when being selected for replacement in memory (and since reads didn't take up space in the cache, these pages would have a longer cache lifetime and some chance of still being in the cache if needed in the future). some past discussion of dup/no-dup http://www.garlic.com/~lynn/93.html#13 managing large amounts of vm http://www.garlic.com/~lynn/2000d.html#13 4341 was Is a VAX a mainframe? 
http://www.garlic.com/~lynn/2001l.html#55 mainframe question http://www.garlic.com/~lynn/2002b.html#10 hollow files in unix filesystems? http://www.garlic.com/~lynn/2002b.html#20 index searching http://www.garlic.com/~lynn/2002e.html#11 What are some impressive page rates? http://www.garlic.com/~lynn/2002f.html#20 Blade architectures http://www.garlic.com/~lynn/2003o.html#62 1teraflops cell processor possible? http://www.garlic.com/~lynn/2004i.html#1 Hard disk architecture: are outer cylinders still faster than inner cylinders? http://www.garlic.com/~lynn/2005m.html#28 IBM's mini computers--lack thereof http://www.garlic.com/~lynn/2006c.html#8 IBM 610 workstation computer http://www.garlic.com/~lynn/2007c.html#0 old discussion of disk controller cache http://www.garlic.com/~lynn/2007l.html#61 John W. Backus, 82, Fortran developer, dies http://www.garlic.com/~lynn/2008f.html#19 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical http://www.garlic.com/~lynn/2008k.html#80 How to calculate effective page fault service time? http://www.garlic.com/~lynn/2010i.html#20 How to analyze a volume's access by dataset http://www.garlic.com/~lynn/2011.html#68 Speed of Old Hard Disks http://www.garlic.com/~lynn/2011.html#70 Speed of Old Hard Disks more drift to the global/local and global/partitioned argument (as opposed to nested) ... past global LRU email http://www.garlic.com/~lynn/lhwemail.html#globallru and past posts mentioning DMKCOL http://www.garlic.com/~lynn/2006y.html#35 The Future of CPUs: What's After Multi-Core? http://www.garlic.com/~lynn/2007.html#3 The Future of CPUs: What's After Multi-Core? http://www.garlic.com/~lynn/2010i.html#18 How to analyze a volume's access by dataset http://www.garlic.com/~lynn/2011.html#70 Speed of Old Hard Disks http://www.garlic.com/~lynn/2011.html#71 Speed of Old Hard Disks In the late 70s, we did a high-performance, light-weight trace of every record access ... 
whether by the native vm370 system or any guest operating system. This was used with a sophisticated i/o cache simulator ... able to simulate arbitrary-sized caches positioned at the system level, split at the channel level, at the controller level, and/or at the device level (and some number of other variations). One of the findings was that for a given amount of electronic storage, the most thruput came from a system-level global cache ... supporting the argument that global LRU outperformed a partitioned local LRU ... previously mentioned in this reference http://www.garlic.com/~lynn/2012c.html#17 5 Byte Device Address and http://www.garlic.com/~lynn/2006w.html#46 The caveat was pathological behavior that polluted the cache with non-reused information ... like large sequential reads. Partitioned caches would isolate a sequential read to a specific cache and not also destroy all the entries in the other caches. Another scenario is to recognize behavior like sequential read and treat those records differently. The example was the 3880-13 full-track cache ... and the 90% hit rate for sequential read ... the same level
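The dup/no-dup strategy described earlier in the post can be reduced to a small sketch. This is a toy model under assumed simplifications (dictionaries standing in for real storage and the 2305 drum; all names are illustrative, not from any actual VM/370 source): a page lives either in memory or on the paging device, never both.

```python
# Toy no-dup pager: a page is in memory or on the drum, never both.
class NoDupPager:
    def __init__(self):
        self.drum = {}        # page -> contents on the 2305/paging device
        self.memory = {}      # page -> contents in real storage

    def page_in(self, page):
        # reading a page DEALLOCATES its drum slot (no duplicate copy kept)
        self.memory[page] = self.drum.pop(page)

    def replace(self, page):
        # consequence: a replaced page must always be written back out,
        # even if unchanged -- there is no drum copy left to fall back on
        self.drum[page] = self.memory.pop(page)

p = NoDupPager()
p.drum["A"] = "contents"
p.page_in("A")
print("A" in p.drum)                    # False: drum slot freed on the read
p.replace("A")
print("A" in p.drum, "A" in p.memory)   # True False: back out, slot reclaimed
```

The trade is explicit: no-dup gets more total distinct pages into the memory+device hierarchy, at the cost of writing even unchanged pages on replacement.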
Re: 5 Byte Device Addresses?
glen herrmannsfeldt g...@ugcs.caltech.edu writes: It seems to me that adaptive algorithms are more likely to sync to each other when nested. But how about one that examines every Nth page, (hopefully N is prime), such that they won't be the exact same pages. Or even using a more random path, such as from a CRC polynomial. So the path through the pages will be different, and so different approximately LRU pages will be selected. Never having tried this, those are the ones I think up. re: http://www.garlic.com/~lynn/2012b.html#98 5 Byte Device Addresses? http://www.garlic.com/~lynn/2012b.html#100 5 Byte Device Addresses? http://www.garlic.com/~lynn/2012c.html#16 5 Byte Device Addresses? http://www.garlic.com/~lynn/2012c.html#17 5 Byte Device Addresses? http://www.garlic.com/~lynn/2012c.html#27 5 Byte Device Addresses? http://www.garlic.com/~lynn/2012c.html#28 5 Byte Device Addresses? http://www.garlic.com/~lynn/2012c.html#29 5 Byte Device Addresses? that is strictly deterministic ... these are all approximate-LRU selection for replacement. The theory behind choosing least recently used pages is that they have been shown to be the least probable to be used in the future. if the VM system is choosing virtual machine pages for replacement based on least recently used ... and the guest MVS system is looking for pages that have also been least recently used ... they both will tend to concentrate on selecting from the same subset of pages ... the guest MVS selecting its least recently used virtual pages and the VM system selecting the virtual machine pages that those same MVS virtual pages occupy. That significantly increases the probability that when the guest MVS selects a virtual page for replacement, the corresponding virtual machine page it wants to use has also been selected by the VM system for replacement and removed from real memory. 
Running a least recently used replacement algorithm under a least recently used replacement algorithm violates the assumption that the least recently used page is the least likely to be used in the future. They don't have to be strictly in sync ... but it will drastically increase the probability of double paging ... aka the virtual machine page that the MVS system wants to start using is a page that the VM system has removed from memory. As I previously mentioned, something similar happens with a large DBMS cache managed by least recently used ... running in a virtual memory operating system. It is one of the reasons that virtual memory operating systems tend to have ways of biasing against selecting large DBMS cache pages (because their usage patterns tend to violate the assumption that the least recently used page will be the least probable page to be used in the future). misc. past posts mentioning virtual memory replacement http://www.garlic.com/~lynn/subtopic.html#clock old email mentioning various aspects of page replacement ... including work related to big pages; the full-track page-transfer implementation also resulted in tweaks that undermined least recently used (corresponding to something similar done for the original SVS implementation that continued well into MVS releases) http://www.garlic.com/~lynn/lhwemail.html#globallru -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
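The double-paging pathology described above can be shown with a deliberately minimal toy (an assumed simplification, not actual VM/370 or MVS code): when host and guest both rank the same frames by recency, the guest's replacement choice lands on exactly the frame the host has already paged out.

```python
# Toy model: host (VM) and guest (MVS) both run LRU over the same
# guest-physical frames, ordered coldest-first.  Names are illustrative.
def lru_victim(frames_by_recency):
    """Pick the least recently used frame (list is ordered coldest-first)."""
    return frames_by_recency[0]

frames = ["F0", "F1", "F2", "F3"]     # shared recency order, F0 coldest

paged_out = {lru_victim(frames)}      # host evicts F0 to its paging disk
guest_choice = lru_victim(frames)     # guest reuses ITS coldest frame: also F0
double_page_fault = guest_choice in paged_out
print(double_page_fault)              # True: guest's very next touch faults
```

The guest's touch of F0 forces the host to page F0 back in, only so the guest can immediately overwrite it with different contents ... the "violates the LRU assumption" effect the text describes.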
Re: 5 Byte Device Addresses?
glen herrmannsfeldt g...@ugcs.caltech.edu writes: I sort of know how the algorithms work, but now I looked at: http://en.wikipedia.org/wiki/Page_replacement_algorithm I had thought that for the clock algorithm that there would be some parameter that affects how the clock works, a time constant of some kind. The above page doesn't seem to describe one, though. But for the adaptive CAR algorithm, I could easily imagine the two would sync with each other. On the other hand, random replacement shouldn't have such problems. re: http://www.garlic.com/~lynn/2012b.html#98 5 Byte Device Addresses? http://www.garlic.com/~lynn/2012b.html#100 5 Byte Device Addresses? http://www.garlic.com/~lynn/2012c.html#16 5 Byte Device Addresses? http://www.garlic.com/~lynn/2012c.html#17 5 Byte Device Addresses? misc. past posts mentioning page replacement & virtual memory management http://www.garlic.com/~lynn/subtopic.html#clock long-winded description of clock (which is a class of algorithms that attempt to approximate LRU replacement) and a sleight-of-hand hack on clock that I did in the early 70s that would dynamically switch between approximate-LRU and random. so the simplest, which I did in the 60s, was one-handed clock that rotated around resetting/selecting virtual pages ... so the elapsed time between resetting the reference bit and again examining the page for use/replacement was the time to completely examine all pages. This resulted in a dynamically adapting algorithm ... the greater the demand, the faster it rotated ... however, the faster it rotated, the smaller the interval between reset and re-examine ... and the smaller the interval, the larger the number of pages found not referenced, which slows things back down ... two opposing effects that result in dynamically adapting to configuration/supply and workload/demand. 
The idea isn't to find a page that hasn't been used in a fixed amount of time but to differentiate the lower-used from the higher-used (which is going to be relative, based on configuration and load). So one-handed clock has the resetting and selecting cursor traveling around all virtual pages in sync. Two-handed clock has the hand/cursor doing the resetting traveling around all pages at a fixed offset ahead of the hand/cursor doing the selection. The issue here is that while one-handed clock dynamically adapts ... past a certain elapsed time, when there are really large numbers of pages, the LRU assumptions break down ... if you haven't reset/examined a virtual page for a very long time ... there is little predictive correlation about whether a specific page will be used or not used in the near future. Having the reset of the used/reference bit occur less than a full rotation ahead of the examination tries to keep the elapsed time between reset and examine below the threshold where the interval is predictive. So that is the standard clock ... which attempts to approximate true LRU (where all virtual pages are exactly ordered as to most recent reference ... based on the theory that the page that has been least recently used in the past is least likely to be used in the future ... for some specific kinds of access patterns). The problem is that there are a number of situations that violate the correlation between use in the past and use in the future. In the early 70s, I did a sleight-of-hand hack on two-handed clock ... where the code appeared to look almost exactly like two-handed clock ... except it had the peculiar characteristic of approximating true LRU in conditions where LRU did well and approximating random in conditions where LRU performed poorly (dynamically, w/o any observable change in the code executed). In simulation studies with full instruction tracing ... it was possible to compare various clock implementations as well as various other kinds of LRU-approximation algorithms ... 
against a true LRU (i.e. keeping exact ordering of page references and exactly choosing the least recently used) ... various approximations would tend to perform within approximately 10-15 percent of true LRU. However, my sleight-of-hand hack on clock could perform approximately 10 percent better than true LRU. With two recursive algorithms (one running virtually under the other) where both approximate LRU (even if the exact code is different) ... the 2nd-level algorithm will tend to make the least recently used pages the most likely to be used next (because they are selected for replacement) ... at least from the standpoint of the lowest-level algorithm (violating the LRU assumption that the least recently used pages are the least likely to be used in the near future). -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
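A minimal one-handed clock, per the description above, can be sketched as follows (a toy model; real implementations scan hardware page tables, not Python lists): a single hand sweeps the frames, clearing reference bits as it goes, and replaces the first frame found unreferenced since the last sweep.

```python
# Toy one-handed clock: reset & select with a single rotating cursor.
class Clock:
    def __init__(self, nframes):
        self.referenced = [False] * nframes
        self.hand = 0

    def touch(self, frame):
        # hardware sets the reference bit when the page is used
        self.referenced[frame] = True

    def select_victim(self):
        # sweep: referenced pages get their bit reset (a second chance);
        # the first page found still unreferenced becomes the victim
        while True:
            if self.referenced[self.hand]:
                self.referenced[self.hand] = False
                self.hand = (self.hand + 1) % len(self.referenced)
            else:
                victim = self.hand
                self.hand = (self.hand + 1) % len(self.referenced)
                return victim

c = Clock(4)
for f in (0, 1, 3):
    c.touch(f)                # frames 0, 1, 3 used since the last sweep
victim = c.select_victim()
print(victim)                 # 2: the only frame not recently referenced
```

The self-regulating behavior falls out naturally: the greater the replacement demand, the faster the hand rotates, which shortens the reset-to-examine interval and leaves more pages looking unreferenced ... the two opposing effects the text describes.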
Re: 5 Byte Device Addresses?
glen herrmannsfeldt g...@ugcs.caltech.edu writes: Some of this is described in the above mentioned web page. It seems that some improvements have been made along the way. Also described is precleaning, where you write out a page in anticipation of its need for replacement. re: http://www.garlic.com/~lynn/2012b.html#98 5 Byte Device Addresses? http://www.garlic.com/~lynn/2012b.html#100 5 Byte Device Addresses? http://www.garlic.com/~lynn/2012c.html#16 5 Byte Device Addresses? http://www.garlic.com/~lynn/2012c.html#17 5 Byte Device Addresses? http://www.garlic.com/~lynn/2012c.html#27 5 Byte Device Addresses? misc. past posts mentioning page replacement & virtual memory management http://www.garlic.com/~lynn/subtopic.html#clock there were two issues with the early SVS/MVS replacement ... regarding selecting non-changed pages before changed pages ... one was eliminating the work and overhead of the write ... and the other was eliminating any synchronous latency related to waiting for the write. early on, most implementations maintained a pool of immediately available pages for replacement (that had been pre-selected) ... rather than synchronously running the replacement with the selection (immediately available eliminates the synchronous latency associated with selection and potential writes). the pool could also be run with min/max thresholds ... so when the pool of immediately available pages dropped below the min ... it was replenished to the max (trying for some slight efficiency by batching the selection process). there were also big pages starting in the early 80s (done for both MVS & VM) ... that always did writes ... collecting a set of pages and doing a single write operation for a full 3380 track of pages. the issue was that while the 3380 transfer rate was 10 times that of the 3330 ... the access latency (arm motion & rotation) had only marginally improved. The theory was that the increased 3380 efficiency of always doing full-track writes & reads (single access for a full track of pages) ... 
offset the increased overhead of having to unnecessarily write unchanged pages. This would have further highlighted the downside effects of choosing non-changed before changed pages, which I had argued against before SVS first shipped ... and which they finally recognized in the late 70s. however, the big-page selection processing violated LRU in other ways ... this is old email discussing LRU ... including some of how big pages undermined LRU: http://www.garlic.com/~lynn/lhwemail.html#globallru -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
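The pre-selected pool with min/max replenishment described above can be sketched in a few lines (a toy model; class name, thresholds, and the victim-selector hook are illustrative assumptions, not any actual SVS/MVS structure):

```python
# Toy free-page pool: decouple the page-fault path from victim selection.
class FreePool:
    def __init__(self, low=4, high=16):
        self.low, self.high = low, high
        self.pool = []                    # pre-selected, immediately free frames

    def get_frame(self, select_victims):
        # fault path: normally no synchronous selection (or write) latency,
        # just pop a frame that was freed ahead of time
        if len(self.pool) < self.low:
            # dropped below min: batch-replenish up to max, amortizing the
            # selection pass (and any replacement writes) over many faults
            self.pool.extend(select_victims(self.high - len(self.pool)))
        return self.pool.pop()

fp = FreePool()
frame = fp.get_frame(lambda n: list(range(n)))   # stand-in victim selector
print(len(fp.pool))                              # 15: filled to 16, one handed out
```

The min/max gap is the batching knob: a bigger gap means fewer, larger replenishment passes, at the cost of keeping more frames idle in the pool.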
Re: 5 Byte Device Addresses?
re: http://www.garlic.com/~lynn/2012b.html#98 5 Byte Device Addresses? http://www.garlic.com/~lynn/2012b.html#100 5 Byte Device Addresses? http://www.garlic.com/~lynn/2012c.html#16 5 Byte Device Addresses? http://www.garlic.com/~lynn/2012c.html#17 5 Byte Device Addresses? http://www.garlic.com/~lynn/2012c.html#27 5 Byte Device Addresses? http://www.garlic.com/~lynn/2012c.html#28 5 Byte Device Addresses? misc. past posts mentioning page replacement & virtual memory management http://www.garlic.com/~lynn/subtopic.html#clock and some old email http://www.garlic.com/~lynn/lhwemail.html#globallru a recent thread in comp.arch started out asking about mainframe queued i/o processing (in a thread on interrupt paradigm overhead) http://www.garlic.com/~lynn/2012c.html#20 M68k add to memory is not a mistake any more http://www.garlic.com/~lynn/2012c.html#23 M68k add to memory is not a mistake any more also discusses various device optimizations for page i/o operations. this has a survey and taxonomy of i/o systems ... including some discussion of mainframe queued i/o http://www.cs.clemson.edu/~mark/io_hist.html there is also a reference to a longer discussion in the IBM JRD ... which used to be available free but the journals are now behind the IEEE paywall http://ieeexplore.ieee.org/Xplore/login.jsp?reload=trueurl=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F5288520%2F5390413%2F05390415.pdf%3Farnumber%3D5390415authDecision=-203 In '75 ... besides endicott con'ing me into doing a lot of stuff for 138/148 ECPS (microcode assist) ... old post with part of the data used in determining ECPS: http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist at the same time a group in POK con'ed me into doing a lot of design for a 5-way SMP. The processor technology had lots of provision for microcode ... so I dropped some amount of the multiprocessor dispatching complexity into the microcode (reminiscent of the later intel 432 ... or current mainframe LPAR dispatch management) ... 
as well as a queued i/o channel interface ... a superset of the later 811 (the 370-xa specification, named for the nov78 date on a lot of the specification documents). some past posts http://www.garlic.com/~lynn/submain.html#bounce for whatever reason, the 5-way SMP project got canceled ... but a little later it was reborn as a 16-way SMP effort ... and some of the 3033 processor engineers were con'ed into helping in their spare time. This saw a lot of early acceptance ... but then somebody mentioned to the head of POK that it might be decades before MVS could effectively support a 16-way SMP ... and the head of POK told the 3033 processor engineers to get their noses back to the grindstone (and stop being distracted) ... and others got invited to never visit POK again (this was all before the 3033 first shipped). misc. past general posts mentioning SMP support and/or the compare&swap instruction http://www.garlic.com/~lynn/subtopic.html#smp misc. past posts mentioning dispatching & dynamic adaptive scheduling (also started when I was an undergraduate in the 60s) http://www.garlic.com/~lynn/subtopic.html#fairshare -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
Re: 5 Byte Device Addresses?
glen herrmannsfeldt g...@ugcs.caltech.edu writes: It would seem less likely that they would use the exact same replacement algorithm, but could eventually lock, anyway. re: http://www.garlic.com/~lynn/2012b.html#98 5 Byte Device Addresses? http://www.garlic.com/~lynn/2012b.html#100 5 Byte Device Addresses? http://www.garlic.com/~lynn/2012c.html#16 5 Byte Device Addresses? least recently used is a well-studied characteristic ... of all the current virtual pages, the current least recently used page is the least likely to be used in the future. since the least recently used page is the least likely to be used in the future, it becomes the basis for LRU replacement algorithms ... trying to select the least likely to be used page (based on being the least recently used). so lots of systems have implemented LRU replacement algorithms based on well-studied program behavior ... and although they all may have slightly different code implementations ... they will tend to select approximately the same virtual page for replacement. so consider running an LRU strategy under an LRU strategy ... vm370 will look at all the virtual machine pages and select the least recently used for replacement. The guest operating system will be looking at all its virtual pages and select the least recently used for replacement. The issue is that the guest virtual page that is selected for replacement occupies a guest virtual machine page ... and the usage patterns are based on the same criteria. The result is that vm370 will remove/replace a virtual machine page because it hasn't been used, while the guest operating system will select the contents of that same virtual machine page for its own replacement and start using that same virtual machine page with a different guest virtual page. 
The effect is, from the vm370 standpoint, that the guest operating system is violating all the studies that have shown that the least recently used (virtual machine) virtual page is the least likely to be used in the future (because the guest operating system wants to select that same virtual machine page for use in its own replacement). There are other ways of tweaking the algorithms. Lots of the AOS prototype stuff for what would become OS/VS2 SVS came from cp67 ... e.g. cp67's channel program translator, CCWTRANS, was cobbled into the side of EXCP processing. However the POK performance group came up with a tweak for the SVS LRU-replacement algorithm before it first shipped. They observed that if they selected/replaced non-changed LRU pages before changed pages ... they wouldn't first have to write the current virtual page to disk before being able to fetch the replacement page into the location. I argued strongly against it since it significantly distorted the LRU relationship ... but they went ahead anyway. Well, late into the MVS release cycle, they discovered that the strategy resulted in choosing for replacement higher-use, shared, non-changed linkpack virtual pages before lower-use, non-shared, private, changed application-specific virtual pages. The cast had changed in POK and new people got awards for fixing the earlier work (that had been done wrong) ... and somebody eventually contacted me asking if something similar could be fixed in vm370. My reply was that I had never done it that way, since I was an undergraduate in the 60s. lots of past posts mentioning virtual memory management and page replacement algorithms http://www.garlic.com/~lynn/subtopic.html#clock My 60s undergraduate work got me sucked into an academic uproar ... Jim Gray had left for Tandem but at the 14-16Dec1981 ACM SIGOPS meeting asked me if I could lend a hand with somebody trying to get their Stanford PHD. It involved an area that I had originally worked on as an undergraduate in the 60s. 
I had done something different than what was being done in academic circles in the 60s. The primary person behind the 60s academic work was violently objecting to the Stanford PHD being awarded (because it was in conflict with his work). My work was being shipped in cp67. However, in the early 70s, the Grenoble Science Center had modified their version of cp67 to correspond with the 60s academic strategy. The Cambridge Science Center 360/67 with 768k memory (104 pageable pages after fixed kernel storage requirements) with my strategy gave about the same performance with 80 users as the Grenoble Science Center 360/67 with 1mbyte memory (154 pageable pages after fixed kernel storage requirements) with 35 users (almost identical workloads). The CSC 360/67, with my strategy, could support approx. twice the number of users as the Grenoble 360/67 with the academic strategy (and 50% more pageable storage). It was possibly the only direct apples-to-apples comparison of my strategy and the 60s academic strategy. Past post on the subject http://www.garlic.com/~lynn/2006w.html#46 the above contains the response that I was finally allowed to send nearly a year later (after the request at ACM SIGOPS)
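The nested-LRU pathology described above can be sketched in a few lines. This is a toy simulation (all names invented for illustration, not any cp67/vm370 code) showing how the page the host's LRU judges "least likely to be used next" is exactly the frame the guest's own LRU reuses next:

```python
from collections import OrderedDict

class LRUPager:
    """Toy LRU page replacement: frames kept in recency order."""
    def __init__(self, nframes):
        self.nframes = nframes
        self.frames = OrderedDict()            # page -> True, oldest first

    def touch(self, page):
        """Reference a page; return the evicted page, if any."""
        if page in self.frames:
            self.frames.move_to_end(page)
            return None
        evicted = None
        if len(self.frames) >= self.nframes:
            evicted, _ = self.frames.popitem(last=False)   # least recently used
        self.frames[page] = True
        return evicted

# vm370's view of three virtual-machine pages occupied by a guest
host = LRUPager(3)
for vmpage in "ABC":
    host.touch(vmpage)

# vm370's LRU choice -- "least likely to be used next" -- is "A" ...
victim = next(iter(host.frames))
assert victim == "A"

# ... but a guest running its own LRU replacement over virtual pages
# held in frames A, B, C also selects the contents of "A" to replace,
# so frame "A" is the very *next* frame the guest touches -- the page
# the host judged least likely to be needed is the most likely.
```

The same inversion shows up whenever two LRU-managed levels stack, e.g. a DBMS cache paged by a virtual-memory operating system.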
Re: Authorized functions
zedgarhoo...@gmail.com (zMan) writes: Then you've forgotten the learning curve: CMS - *IX: minimal CMS - TSO: moderate CMS - GUI: Large folklore is that *IX (and numerous *IX work-alikes) came from simplification of MULTICS. some of the CTSS people went to the 5th flr of 545 tech sq and MULTICS, and others went to the 4th flr of 545 tech sq and the ibm cambridge science center ... where cp40/cms was done (both MULTICS and cp40/cms derivative of CTSS). science center was formed 1feb1964 ... 1982 SEAS presentation on cp40/cms http://www.garlic.com/~lynn/cp40seas1982.txt misc. past posts mentioning science center http://www.garlic.com/~lynn/subtopic.html#545tech cms (cambridge monitor system) was originally developed running stand-alone on 360/40 using 1052-7 operator's console for input/output. the same machine had special hardware added to provide virtual memory support, which was used for the development of (virtual machine) cp40. when the standard 360/67 with virtual memory became available, cp40 morphed into cp67 ... cms continued to run both on stand-alone 360 as well as in cp67 virtual machine. with virtual memory on 370, cp67 morphed into vm370 and cms was renamed to conversational monitor system ... and the ability to run stand-alone was crippled. A little other ctss history is in this recent a.f.c. email subject http://www.garlic.com/~lynn/2012c.html#10 Inventor of e-mail honored by Smithsonian http://www.garlic.com/~lynn/2012c.html#12 Inventor of e-mail honored by Smithsonian several references included: The History of Electronic Mail http://www.multicians.org/thvv/mail-history.html The technology for the corporate internal network was also done at the science center ... some past posts http://www.garlic.com/~lynn/subnetwork.html#internalnet which was larger than the arpanet/internet from just about the beginning until late '85 or early '86. Some recent references in this a.f.c. 
thread: http://www.garlic.com/~lynn/2012c.html#9 The PC industry is heading for collapse there were several projects during the 80s to adapt CMS to GUI displays ... but it was somewhat antithetical to the corporate terminal emulation paradigm. old post about running internal corporate adtech conference spring '82 on various aspects of the subject: http://www.garlic.com/~lynn/94.html#22 http://www.garlic.com/~lynn/96.html#4a one of the presentations happened to be CMS running on MVS ... there had recently been a new corporate strategy direction that CMS would be the official interactive platform. CMS on MVS (as alternative to TSO) didn't actually help things a lot ... since a lot of the problems are in base MVS (not solely in TSO). http://www.garlic.com/~lynn/2012.html#email821027 in this post http://www.garlic.com/~lynn/2012.html#12 Who originated the phrase user-friendly? also has ibm jargon definition for bad response I even got a request from the TSO product admin asking if I would rewrite the MVS scheduler (attempting to address some of the MVS structural problems with providing interactive service): http://www.garlic.com/~lynn/2006b.html#email800310 other drift ... semi-related old email about cms/xa http://www.garlic.com/~lynn/2011b.html#email821026 http://www.garlic.com/~lynn/2011b.html#email840626 http://www.garlic.com/~lynn/2011b.html#email841003 slightly related http://www.garlic.com/~lynn/2011e.html#email870508 -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
Re: 5 Byte Device Addresses?
glen herrmannsfeldt g...@ugcs.caltech.edu writes: That is, as I understand it, pretty close to how it started out. Among others, though OS/VS1 has special features for running under VM that OS/VS2 never got. It has the ability to switch to a different task while VM is paging a task. That avoids the double paging problem that otherwise occurs. customers had previously made such changes to MVT ... which is possibly where at least the idea for the VS1 change came from. OS/VS2 SVS had a single 16mbyte virtual memory, laid out almost as if MVT were running in a 16mbyte real machine. When MVT ran in a virtual machine ... when a virtual SIO was done ... CP67 would scan the virtual channel program and create a shadow copy with real addresses ... which would be the channel program that got executed. This routine from cp67 (ccwtrans) was cribbed into the side of EXCP processing ... i.e. with the transition to virtual memory, all the OSes had the same issue with channel programs passed in EXCP ... needing to create nearly identical channel programs but with real addresses in place of virtual addresses. In the OS/VS1 case, it had things laid out in a 4mbyte virtual address space (as if it was running on a real 4mbyte machine). In the OS/VS1 handshaking case ... a 4mbyte virtual machine was created with the OS/VS1 4mbyte virtual address space mapped one-for-one to the virtual machine address space. Whenever vm370 had an os/vs1 virtual machine page fault ... if the virtual machine was running an application (and not in the os/vs1 kernel) ... vm370 would reflect a special page fault to the virtual machine. OS/VS1 could then do a task-switch as if it was an OS/VS1 application virtual page fault. Later, when vm370 had fetched the OS/VS1 virtual machine virtual page ... vm370 would reflect a special interrupt to OS/VS1 (indicating the page was available). 
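The shadow channel-program translation described above can be sketched roughly as follows. This is a deliberately simplified illustration (all names here are invented): a real translator like cp67's CCWTRANS also had to handle data/command chaining, TICs, records crossing page boundaries, and pinning pages in real storage for the duration of the I/O.

```python
PAGE = 4096

def translate_channel_program(ccws, page_table):
    """Build a shadow channel program: copy each CCW, replacing the
    virtual data address with the corresponding real address.
    ccws: list of (opcode, virtual_address, count) tuples.
    page_table: virtual page number -> real frame number (pages assumed
    resident and fixed for the I/O duration)."""
    shadow = []
    for opcode, vaddr, count in ccws:
        vpage, offset = divmod(vaddr, PAGE)
        real = page_table[vpage] * PAGE + offset
        shadow.append((opcode, real, count))
    return shadow

# e.g. a read CCW targeting virtual address 0x1010, with virtual page 1
# mapped to real frame 7:
# translate_channel_program([(0x06, 0x1010, 80)], {1: 7})
# -> [(0x06, 7*4096 + 0x10, 80)]
```

The shadow copy, not the guest's original, is what actually gets started on the channel, since the channel works only with real addresses.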
From Melinda's VM and the VM Community http://web.me.com/melinda.varian/Site/Melinda_Varians_Home_Page.html Dewayne Hendricks reported at SHARE XLII, in March, 1974, that he had successfully implemented MVT-CP handshaking for page faulting, so that when MVT running under VM took a page fault, CP would allow MVT to dispatch another task while CP brought in the page. At the following SHARE, Dewayne did a presentation on further modifications, including support for SIOF and a memory-mapped job queue. With these changes, his system would allow multitasking guests actually to multitask when running in a virtual machine. Significantly, his modifications were available on the Waterloo Tape. ... and ... he then was able to get MFT & MVT running faster under vm370 than they ran on the bare machine: By SHARE 49, Dewayne was able to state that, It is now generally understood that either MFT or MVT can run under VM/370 with relative batch throughput greater than 1. That is to say, they had both been made to run significantly faster under VM than on the bare hardware. Dewayne and others did similar work to improve the performance of DOS under VM. Other customers, notably Woody Garnett(122) and John Alvord, soon achieved excellent results with VS1 under VM. ... snip ... There is a separate issue with OS/VS2 (of any ilk) running under vm370 ... which is the pathological case of a virtual memory operating system managing its virtual memory with a least-recently-used algorithm, running in a virtual machine provided by a system that manages its virtual memory with a least-recently-used algorithm. The scenario is that a virtual machine page that hasn't been used for awhile ... is the virtual page that vm370 is likely to select for replacement/removal. However, the operating system in the virtual machine ... if it is also doing paging ... may also select the very same page to be the next one to use (after it has just been selected for removal). 
From a theoretical standpoint, cascading LRU algorithms will appear to violate least-recently-used replacement assumptions (i.e. a least-recently-used page can be the next most likely to be used rather than the least likely to be used next). This characteristic also exhibits itself with DBMS caches that are managed with an LRU strategy when running in a virtual memory operating system that also manages with an LRU strategy. The VS1 handshaking isn't actually a double paging issue ... that was handled by configuring VS1's virtual address space to be the same as the virtual machine storage size. The handshaking worked with MVT/MFT as well as VS1 ... allowing the guest operating system to task switch while vm370 was handling the page fault. more detailed discussion pg.25 vm/vs handshaking http://www.bitsavers.org/pdf/ibm/370/vm370/GC20-1800-6_VM370intr_Oct76.pdf -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
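The pseudo-page-fault handshake described in the post above (reflect the fault, let the guest dispatch another task, reflect a completion interrupt later) can be modeled as a tiny state machine. This is a toy sketch with invented names, not the actual GC20-1800 protocol details:

```python
# Toy model of VM/VS handshaking pseudo page faults.

class Guest:
    """Guest OS that can task-switch around a reflected page fault."""
    def __init__(self):
        self.waiting = {}          # vmpage -> task blocked on that page
        self.running = "task-1"

    def pseudo_page_fault(self, vmpage):
        # host reflects the fault: mark the faulting task as waiting and
        # dispatch another task, instead of the whole virtual machine
        # being suspended while the host does the page I/O
        self.waiting[vmpage] = self.running
        self.running = "task-2"

    def page_complete(self, vmpage):
        # host reflects the "page available" interrupt: the blocked task
        # becomes dispatchable again
        self.running = self.waiting.pop(vmpage)

guest = Guest()
guest.pseudo_page_fault("P")       # vm370 reflects the fault
assert guest.running == "task-2"   # guest kept multitasking meanwhile
guest.page_complete("P")           # vm370 reflects page-available
assert guest.running == "task-1"
```

Without the handshake, the host would simply suspend the entire virtual machine for the duration of the page I/O, idling every guest task.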
Re: zSeries Manpower Sizing
stars...@mindspring.com (Lizette Koehler) writes: PCI has to do with Payments for Credit Cards and their security. PCI was somewhat in response to the cal. state data breach disclosure legislation (and later other states). we were tangentially involved, being brought in to help wordsmith the cal. state electronic signature legislation ... some past posts http://www.garlic.com/~lynn/subpubkey.html#signature some of the electronic signature participants were also heavily into privacy issues and had done detailed privacy surveys ... the #1 issue that kept coming up was identity theft of the kind involving fraudulent transactions from data breaches of one sort or another (skimming, eavesdropping, database compromise, etc ... involving account number harvesting). little or nothing appeared to be being done about such activity, and they hoped that the publicity from data breach notifications might prompt countermeasures ... in addition to providing victims the opportunity to do something. part of the issue was that the owners of the large databases/data-streams ... that had the breaches ... wouldn't be the victims of the fraudulent financial transactions. in any case, since the passage of the cal. legislation there have been numerous federal data breach notification bills introduced (none yet passing), about equally divided between those with similar notification requirements and those that would eliminate the requirement for notification (in some cases, partially justified on industry actions like PCI). 
a couple long-winded recent posts going into related issues of broken paradigm http://www.garlic.com/~lynn/2012b.html#70 Four Sources of Trust, Crypto Not Scaling http://www.garlic.com/~lynn/2012b.html#71 Password shortcomings http://www.garlic.com/~lynn/2012b.html#94 public key, encryption and trust -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
Re: 5 Byte Device Addresses?
johnwgilmore0...@gmail.com (John Gilmore) writes: The original System/360 scheme was simple and in its way elegant. 01F---decodable unambiguously into (multiplexor) channel 0, control unit 1, and that control unit's device F or 15---was, for example, the usual device address of the card punch circa 1965, when punches were still real rather than virtual devices. trivia ... 009 was 1052-7 console 00C was 2540 reader 00D was 2540 punch 00E was 1403 printer some other configurations had 01F as 1052-7 console address (instead of 009) ... making the controller abstraction on the multiplexor channel slightly more consistent. tale of cp40 http://www.garlic.com/~lynn/cp40seas1982.txt done at the science center in the 60s some past posts http://www.garlic.com/~lynn/subtopic.html#545tech cms started out as an operating system done on a regular 360/40 with interactive commands on the 1052-7 operator's console cp40 was hardware modifications to the 360/40 providing virtual memory, cp40 then implemented 360/40 virtual machines ... and cms ran either on bare hardware or in a cp40 virtual machine. when the 360/67 became available with virtual memory standard, cp40 morphed into cp67. the default cms virtual machine configuration tended to stay the same as the real 360/40 configuration it started out from (256kbyte real memory configuration). additional history can be found in documents at Melinda's website http://web.me.com/melinda.varian/Site/Melinda_Varians_Home_Page.html this talks about 360/40 & 360/50 having integrated console at 01f (aka when it was not at 009): http://en.wikipedia.org/wiki/IBM_System/360 cp67 default configuration for cms virtual machine: http://www.bitsavers.org/pdf/ibm/360/cp67/ GH20-0859-0_CP67_Version_3_Users_Guide_Oct70.pdf pg. 5 ... shows the 009 configuration for 1052-7 console note the cp67 users guide also described 2741, 1052, and tty terminals when cp67 was originally delivered to the univ, it only had 2741 & 1052 terminal support ... 
but had dynamic terminal type identification support ... being able to use the SAD controller command to switch between the 2741 and 1052 line-scanner for each port/address. the univ. had a lot of tty terminals and so I had to add tty support (which was picked up and released with the product). I looked at the 2741/1052 support and added the tty support so it also did dynamic terminal type identification ... being able to use the SAD command to dynamically switch the different (2741, 1052, tty) line-scanners for each port. I then wanted to do a single dial-up hunt group for dial-up terminals ... aka a common pool of phone numbers/modems with a single dial-in number for all terminals. It turns out that the dynamic support worked for leased lines ... but wouldn't work for a common pool for all dial-up terminals. The problem was that while it was possible to dynamically switch the type of line-scanner (with the SAD command) on a per-port basis ... the line speed was hard-wired for each port. This was somewhat the motivation for the univ. to start a clone controller project ... which could do both dynamic terminal type as well as dynamic line speed (i.e. 2741 & 1052 had the same line speed ... but different line-scanner ... tty had both a different line-scanner as well as a different line speed). later, four of us got written up as being responsible for (some part of) the clone controller market. misc. past posts mentioning clone controller http://www.garlic.com/~lynn/subtopic.html#360pcm This reference has clone controller competition as a primary motivation for the Future System effort: http://www.ecole.org/Crisis_and_change_1995_1.htm Then Ferguson & Morris, Computer Wars: The Post-IBM World, Times Books, 1993 ... describe how the distraction of the Future System (and internal politics killing 370 efforts) allowed clone processors to gain a market foothold ... misc. 
past posts mentioning Future System http://www.garlic.com/~lynn/submain.html#futuresys -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
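The cuu decoding John describes at the top of this post is just hex-digit field extraction, one digit per field. A quick sketch following that simple scheme (actual control-unit addressing on real hardware could be less regular):

```python
def decode_cuu(addr):
    """Split a System/360 three-hex-digit device address into
    (channel, control unit, device) fields -- one hex digit each,
    per the simple scheme described in the post."""
    return (addr >> 8) & 0xF, (addr >> 4) & 0xF, addr & 0xF

# 0x01F -> channel 0, control unit 1, device 0xF (15)
assert decode_cuu(0x01F) == (0, 1, 15)
# 0x00D -> channel 0, control unit 0, device 0xD (the 2540 punch above)
assert decode_cuu(0x00D) == (0, 0, 13)
```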
Re: 5 Byte Device Addresses?
hal9...@panix.com (Robert A. Rosenberg) writes: No Bill is right. OS/VS2 Release 2 WAS MVS like OS/VS2 Release 1 was SVS. SVS was OS/360 MVT with Virtual Addresses (SVS was a single 16MB Address Space which was divided into smaller areas for the programs to use, just like MVT). MVS made the program's area into duplicate address ranges which sat between and shared the low and high address ranges which belonged to the Operating System. old post about os/vs2 release 1 (svs), release 2 (mvs), and the glide path to release 3 ... the operating system for future system http://www.garlic.com/~lynn/2011d.html#73 past future system posts http://www.garlic.com/~lynn/submain.html#futuresys really long-winded post about the transition to MVS and the pointer-passing API causing enormous problems ... involved an image of MVS occupying 8mbytes of every application virtual address space ... in order for kernel code to access application data ... and a common segment for passing data between applications and semi-privileged subsystems now in separate virtual address spaces ... and there needing to be a sufficiently sized common segment to handle all applications & subsystems ... larger installations were seeing the common segment threatening to increase to 6mbytes ... leaving only 2mbytes for the application in every private 16mbyte virtual address space. http://www.garlic.com/~lynn/2012b.html#66 The 3081 & 370xa with 31bit addressing were taking so long to get out after the future system failure ... that dual-address space was retrofitted to the 3033 in an attempt to somewhat alleviate the common segment pressure on what little was left for application use out of the 16mbytes. some discussion of getting out the 3081 (and eventually 31bit addressing) after the future system failure http://www.jfsowa.com/computer/memo125.htm -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
Re: IBM Doing Some Restructuring?
arthur.gutow...@compuware.com (Art Gutowski) writes: Patterned after centuries (millenia?) of cultural character - raze the conquered and build your empire on the remains. re: http://www.garlic.com/~lynn/2012b.html#74 IBM Doing Some Restructuring? http://www.garlic.com/~lynn/2012b.html#76 IBM Doing Some Restructuring? I had sponsored Boyd's briefings at IBM in the 80s ... and he had a very interesting scenario for this. some Boyd URLs from around the web as well as past posts http://www.garlic.com/~lynn/subboyd.html Part of his briefings was that at the entry to WW2, the Army had to deploy huge forces with little or no experience. To leverage the small amount of skilled/experienced resources, they created a rigid, top-down command and control structure. He would then observe that this was starting to have a significant downside on US corporate culture ... as former young WW2 officers, skilled in rigid, top-down command and control structures, were starting to climb corporate ladders. They were beginning to implement similar infrastructures that assumed only the very few at the very top knew what they were doing and required rigid controls for the large hordes that didn't know what they were doing. Something similar was touched on in Tandem Memos (even before I met Boyd) ... from IBM Jargon: Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticised the way products were developed. The memos are required reading for anyone with a serious interest in quality products. If you have not seen the memos, try reading the November 1981 Datamation summary. ... snip ... 
I had been blamed for online computer conferencing on the internal network in the late 70s and early 80s (part of which was Tandem Memos) ... misc. past posts mentioning the internal network http://www.garlic.com/~lynn/subnetwork.html#internalnet part of the folklore was that when the executive committee was informed of online computer conferencing (and the internal network), 5 of 6 wanted to fire me. Boyd's explanation has been used more recently to explain a report that the ratio of executive compensation to employee compensation had exploded to 400:1 (Age of Greed, mentioned in earlier post, claims it spiked over 500:1), after having been 20:1 for a long time and 10:1 for most of the rest of the world. The other downside is that people at the bottom who may appear to know what they are doing can be viewed as a threat. other recent posts mentioning Age of Greed: http://www.garlic.com/~lynn/2012b.html#12 Sun Tzu, Boyd, strategy and extensions of same http://www.garlic.com/~lynn/2012b.html#19 Buffett Tax and truth in numbers http://www.garlic.com/~lynn/2012b.html#29 The speeds of thought, complexities of problems http://www.garlic.com/~lynn/2012b.html#43 Where are all the old tech workers? http://www.garlic.com/~lynn/2012b.html#54 The New Age Bounty Hunter -- Showdown at the SEC Corral -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
Re: IBM Doing Some Restructuring?
edja...@phoenixsoftware.com (Edward Jaffe) writes: It's hard for me to understand how any serious development projects can be done by temps. Software development is not a math problem. You can't just throw bodies at it to get things done more quickly. You need a smallish group of highly skilled people--the kind that usually have permanent gigs--and time for them to learn the infrastructure and architecture before they can be truly useful. Also, as with any complex subject, the learning curves can be fairly steep. OTOH, perhaps the projects they're envisioning don't involve actual development. Maybe they involve customization of OTS packages? re: http://www.garlic.com/~lynn/2012b.html#74 IBM Doing Some Restructuring The cp40 paper makes the point that a small group of skilled, experienced people is much more effective (which would also be cost effective) than large hordes. at the science center we would make references to heads rolled uphill for failed projects and/or piling on bodies to try and save failing projects ... which was attractive to executives, since they tended to be compensated in proportion to the bodies in their organizations. Problems were frequently proportional to lack of skill/experience ... but then they would attempt to reframe lack of skill/experience as some innate difficulty of the task (as opposed to lack of skills/experience) ... requiring large hordes, a much larger organization, etc. this shows up in spades in the Future System failure ... some past posts http://www.garlic.com/~lynn/submain.html#futuresys also referenced in this recent (Greater IBM) post http://www.garlic.com/~lynn/2012.html#104 Can a business be democratic? Tom Watson Sr. thought so a smaller scale comparison was the System/R effort ... some past posts http://www.garlic.com/~lynn/submain.html#systemr that initially got out as SQL/DS ... being below the corporate radar as all focus was on the massive EAGLE effort ... then when EAGLE failed ... 
there was a request for how fast there could be a port of System/R - SQL/DS to MVS ... for what becomes DB2. There is also a large intersection with the growing Success of Failure culture ... mentioned in this article http://www.govexec.com/dailyfed/0407/040407mm.htm but it has been around for quite some time in many industries. A possible short-term window is that there may be a pocket of high-skilled/experienced people that have been laid off in various employment actions ... who could be available as temporary workers. This would tend to be a temporary anomaly in a culture transitioning from long-term, high-skilled workers to lots of focus on a 3-month horizon. This is also reflected in statistics on private-equity LBOs, where the focus on short-term payback is eliminating lots of R&D (that tends to have payback long after the private-equity event). in other recent (Greater IBM) posts http://www.garlic.com/~lynn/2012.html#4 The Myth of Work-Life Balance http://www.garlic.com/~lynn/2012.html#57 The Myth of Work-Life Balance also discussed in these posts http://www.garlic.com/~lynn/2012.html#45 You may ask yourself, how did I get here? http://www.garlic.com/~lynn/2012.html#54 Report: Fed Officials Joke About Housing Crisis http://www.garlic.com/~lynn/2012b.html#47 Where are all the old tech workers? past references to growing Success of Failure culture http://www.garlic.com/~lynn/2009o.html#25 Opinions on the 'Unix Haters' Handbook' http://www.garlic.com/~lynn/2009o.html#41 U.S. house decommissions its last mainframe, saves $730,000 http://www.garlic.com/~lynn/2010b.html#19 STEM crisis http://www.garlic.com/~lynn/2010b.html#26 Happy DEC-10 Day http://www.garlic.com/~lynn/2010f.html#38 F.B.I. Faces New Setback in Computer Overhaul http://www.garlic.com/~lynn/2010k.html#18 taking down the machine - z9 series http://www.garlic.com/~lynn/2010p.html#78 TCM's Moguls documentary series http://www.garlic.com/~lynn/2010q.html#5 Off-topic? 
When governments ask computers for an answer http://www.garlic.com/~lynn/2010q.html#69 No command, and control http://www.garlic.com/~lynn/2011b.html#0 America's Defense Meltdown http://www.garlic.com/~lynn/2011c.html#45 If IBM Hadn't Bet the Company http://www.garlic.com/~lynn/2011g.html#32 Congratulations, where was my invite? http://www.garlic.com/~lynn/2011g.html#34 Congratulations, where was my invite? http://www.garlic.com/~lynn/2011g.html#72 77,000 federal workers paid more than governors http://www.garlic.com/~lynn/2011i.html#36 Having left IBM, seem to be reminded that IBM is not the same IBM I had joined http://www.garlic.com/~lynn/2011i.html#79 Innovation and iconoclasm http://www.garlic.com/~lynn/2011j.html#33 China Builds Fleet of Small Warships While U.S. Drifts http://www.garlic.com/~lynn/2011k.html#41 Rafael Team with Raytheon to Offer Iron Dome in the U.S http://www.garlic.com/~lynn/2011k.html#48 50th anniversary of BASIC, COBOL? http://www.garlic.com/~lynn/2011l.html#0 Justifying application of Boyd to a project
Re: CSSMTP and AUTH LOGIN smtp command
shmuel+ibm-m...@patriot.net (Shmuel Metz , Seymour J.) writes: So even if plaintext is enough for the time being, any requirement you submit to IBM should ask for a full implementation. related, recent long-winded post in a different mailing list http://www.garlic.com/~lynn/2012b.html#71 Password shortcomings i've been somewhat paranoid for quite some time ... part of it may have been the requirement that all (internal network) links be encrypted ... in the mid-80s, there was a claim that over half of the link encryptors in the world were on the corporate internal network. misc. past posts mentioning internal network http://www.garlic.com/~lynn/subnetwork.html#internal recent post referencing realizing that there were three kinds of encryption: http://www.garlic.com/~lynn/2012.html#63 Reject gmail semi-related ... old email discussing doing pgp-like email http://www.garlic.com/~lynn/2007d.html#email810506 http://www.garlic.com/~lynn/2006w.html#email810515 -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
Re: IBM Doing Some Restructuring?
edja...@phoenixsoftware.com (Edward Jaffe) writes: http://socialbarrel.com/ibm-job-cuts-in-germany-8000-may-be-laid-off/31574/ Rumor has it that IBM is laying off up to 40% of its workforce in Germany. At the same time they are testing a new global temporary worker program that they believe can speed up project implementation by 30% and reduce costs by 1/3. recent item/discussion in (closed linkedin group) Greater IBM: How IBM saved $300 million by going agile; Behind the scenes on IBM's agile transformation Look, ma! The elephant's dancing even faster! https://www.ibm.com/developerworks/mydeveloperworks/blogs/invisiblethread/entry/ibm-agile-transformation-how-ibm-saved-300-million-by-going-agile?lang=en my post/response in the thread: for comparison see this (1982 SEAS aka European SHARE) presentation on development of cp/40 http://www.garlic.com/~lynn/cp40seas1982.txt ... snip ... and in another blog somewhere, somebody did a recent review of Gerstner's Who Says Elephants Can't Dance? http://www.amazon.com/Elephants-Dance-Inside-Historic-Turnaround/dp/0060523794 and my response ... A couple recent posts mentioning Gerstner's resurrection of IBM in (closed linkedin) Greater IBM (current & former employees) http://www.garlic.com/~lynn/2012.html#57 the above mentions Age of Greed, discussing a few wallstreet players (including Gerstner) during the 80s & 90s. also in (open linkedin) Mainframe Experts -- really long-winded post discussing the runup to IBM going into the red http://www.garlic.com/~lynn/2012.html#92 the above mentions Strategic Intuition, which somewhat compares Microsoft, Apple, Google and Gerstner's resurrection of IBM another Greater IBM in Can a business be democratic? Tom Watson Sr. 
thought so discussion -- some reference to factors leading up to Gerstner's resurrection of IBM http://www.garlic.com/~lynn/2012.html#104 and repeated again in this Greater IBM discussion: Original Thinking Is Hard, Where Good Ideas Come From http://www.garlic.com/~lynn/2012b.html#59 and http://www.garlic.com/~lynn/2012b.html#68 and http://www.garlic.com/~lynn/2012b.html#72 ... snip ... -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
Re: Why can't the track format be changed?
r.skoru...@bremultibank.com.pl (R.S.) writes: Yes and no. It depends on definition of real CKD device. Actually 3390 and 3380 were FBA under the cover. The data cells (32 or 34 bytes) were the fixed size sectors. Indeed, the device was not emulated - physical disc was presented as single I/O device, but the electronics hid the cells from MVS view. AFAIK the last fully real CKD device was 3350 (1975 GA). re: http://www.garlic.com/~lynn/2012b.html#58 Why can't the track format be changed? 3380/3390 were low-level emulation on a kind of FBA. One of the things was that 3380 was the high-end datacenter disk and the only mid-range disks were FBA. In the 3380/3370 time-frame there was a huge explosion in new 4300 sales going into non-datacenter environments ... which MVS was precluded from because of lack of FBA support. Eventually, trying to provide MVS with an entry into that market ... 3375 was created ... which was a 3370 under the covers. The other problem was that many of these environments were getting close to set-it and forget-it ... requiring very little care and feeding ... which tended to also preclude MVS. misc. past posts mentioning CKD, multi-track search, FBA, etc http://www.garlic.com/~lynn/submain.html#dasd misc. old email mentioning 4300s ... some of which involve 3380/3370 http://www.garlic.com/~lynn/lhwemail.html#43xx past posts about them letting me play disk engineer in bldgs. 14 & 15 http://www.garlic.com/~lynn/subtopic.html#disk -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
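The FBA-under-the-covers point above can be illustrated with back-of-envelope arithmetic: each count/key/data field consumes some whole number of fixed cells. This is a hypothetical sketch; the 32-byte cell size comes from the post, but the per-record overhead default is made up (actual count-area and gap costs are device-specific):

```python
import math

CELL_BYTES = 32   # fixed cell size mentioned in the post (32 or 34 bytes)

def cells_for_ckd_record(key_len, data_len, overhead_cells=15):
    """Rough estimate of fixed cells one count/key/data record occupies
    when a CKD track is emulated on fixed-size cells. overhead_cells
    stands in for the count area and inter-field gaps (invented default)."""
    return (overhead_cells
            + math.ceil(key_len / CELL_BYTES)
            + math.ceil(data_len / CELL_BYTES))

# a keyless 4096-byte record: 4096/32 = 128 data cells plus overhead
assert cells_for_ckd_record(0, 4096) == 15 + 0 + 128
```

The rounding up to whole cells is why record size and count interact with emulated track capacity: many small records waste proportionally more cells on overhead than a few large ones.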
Re: Why can't the track format be changed?
paulgboul...@aim.com (Paul Gilmartin) writes: But doesn't PDSE emulate FBA under CKD emulated on RAID implemented on FBA? Even as VM/CMS emulates FBA for MDFS. CMS has been logical FBA (on real CKD) all the way back to cp40/cms ... when it was originally developed ... when it was called cambridge monitor system and could run stand-alone on a real 360/40 ... cambridge had done hardware modifications to the 360/40 to support virtual memory ... for development of cp40. I was recently sent a scanned copy (by the author) of the cp40 presentation given at a 1982 SEAS meeting (european share) ... which I ocr'ed http://www.garlic.com/~lynn/cp40seas1982.txt when the 360/67 became available (with virtual memory standard), cp40 morphed into cp67 http://www.bitsavers.org/pdf/ibm/360/cp67/ later, when virtual memory became available on 370, cp67 morphs into vm370 and cambridge monitor system morphs into conversational monitor system (and the ability to run on real hardware was crippled) http://www.bitsavers.org/pdf/ibm/370/vm370/ when the disk division announced real FBA (3310 & 3370) disks, it was very straight-forward for CMS to support them: http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3370.html this also shows up in the 3090 service processor, the 3092 ... http://www-03.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html at the bottom it mentions the 3092 requiring two 3370s ... the 3092 is actually a pair of vm/cms 4361s running a custom modified version of vm370 release 6 ... a couple old emails (3092 was originally developed on 4331): http://www.garlic.com/~lynn/2010e.html#email861031 http://www.garlic.com/~lynn/2010e.html#email861223 it wasn't too long before real CKD were no longer manufactured ... CKD becoming an obsolete technology simulated on real FBA. 
various past posts on the subject http://www.garlic.com/~lynn/submain.html#dasd past posts mentioning cambridge science center (formed 1Feb1964) http://www.garlic.com/~lynn/subtopic.html#545tech -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
Re: What's going on in the redbooks site?
joa...@swbell.net (John McKown) writes: I was a FidoNet user. A sort of distributed BBS network. Dial into a local node, pick up and send messages. The local nodes would exchange messages throughout the day (usually at night). Dial in the next day to get the newly distributed message. Repeat daily. Loved my 56Kb modem. And, of course, CompuServe before the WWW was generally available. ibm internal network was larger than the arpanet/internet from just about the beginning until possibly late '85/early '86. some past posts ... http://www.garlic.com/~lynn/subnetwork.html#internalnet some old internal network email http://www.garlic.com/~lynn/lhwemail.html#vnet a gateway between internal network and csnet fall '82 ... reference (had similar periodic dial-up and exchange messages) http://www.garlic.com/~lynn/98.html#email821022 in this old post http://www.garlic.com/~lynn/98.html#59 wiki reference http://en.wikipedia.org/wiki/CSNET this ibm-main mailing list originated on bitnet ... bitnet used technology similar to what was used for the internal network http://en.wikipedia.org/wiki/BITNET some past posts http://www.garlic.com/~lynn/subnetwork.html#bitnet bitnet equivalent in europe was called earn ... old email from person responsible for setting up earn http://www.garlic.com/~lynn/2001h.html#email840320 another dial-up network was usenet using UUCP: http://en.wikipedia.org/wiki/Usenet originating usenet newsgroups ... which survives today running over tcp/ip ... as well as shadowed on google ... and this ibm-main mailing list is also gatewayed to usenet in bit.listserv hierarchy as bit.listserv.ibm-main. bitnet mailing list support done in paris (earn) http://en.wikipedia.org/wiki/LISTSERV and http://www.lsoft.com/products/listserv-history.asp The internal equivalent to LISTSERV was called TOOLSRUN and could operate both in a mailing list mode as well as in a usenet-like newsgroup mode. 
one of the reasons that internet nodes started to exceed internal network nodes ... was the communication group was enforcing terminal emulation paradigm on the internal network (so it was limited to just mainframe nodes) ... while the internet was starting to see workstations and PCs as peer nodes. tcp/ip is the technology basis for the modern internet, nsfnet backbone was the operational basis for the modern internet, and cix was the business basis for the modern internet. ... misc. old email about working with entities leading up to nsfnet T1 backbone http://www.garlic.com/~lynn/lhwemail.html#nsfnet some past posts http://www.garlic.com/~lynn/subnetwork.html#nsfnet wiki reference http://en.wikipedia.org/wiki/National_Science_Foundation_Network i had T1 and faster links running internally ... in a project i called hsdt ... some past posts http://www.garlic.com/~lynn/subnetwork.html#hsdt which was one of the reasons for the NSFNET BACKBONE RFP calling for T1. the winning bid actually put in 440kbit links ... but possibly somewhat to meet the letter of the RFP, installed T1 trunks with multiplexors running multiple 440kbit links through the T1 trunks. We made some snide remarks that they possibly could have called it a T5 network ... since some of the 440kbit links may have been routed at some points over multiplexed T5 trunks. past posts mentioning internet http://www.garlic.com/~lynn/subnetwork.html#internet ... virtual machines, lots of online computing, the internal network technology, GML and various other stuff originated at the cambridge science center ... established 1Feb1964 ... some old posts http://www.garlic.com/~lynn/subtopic.html#545tech some other references about the creation of the internal network (as well as technology used for bitnet) http://en.wikipedia.org/wiki/RSCS http://en.wikipedia.org/wiki/Edson_Hendricks http://itunes.apple.com/us/app/cool-to-be-clever-edson-hendricks/id483020515?mt=8 and some www ... 
GML was invented at the science center in 1969 ... some past posts http://www.garlic.com/~lynn/submain.html#sgml a decade later it morphs into the iso international standard sgml ... and after another decade it morphs into html at CERN ... ref: http://infomesh.net/html/history/early then the first web server in the US is on the slac vm/370 system http://www.slac.stanford.edu/history/earlyweb/history.shtml -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
Re: What's going on in the redbooks site?
shmuel+ibm-m...@patriot.net (Shmuel Metz , Seymour J.) writes: And many others. Unlike CompuServe, the typical BBS didn't use a proprietary protocol. For that matter, neither did fido. BTW, I know of at least one BBS that supports zmodem over telnet. re: http://www.garlic.com/~lynn/2012b.html#46 What's going on in the redbooks site? there was also tymshare and its tymnet network http://en.wikipedia.org/wiki/Tymnet tymshare provided online vm370 service http://en.wikipedia.org/wiki/Tymshare ... and in Aug. 1976 started offering its online computer conferencing service free to SHARE as VMSHARE ... archive: http://vm.marist.edu/~vmshare later in the 80s, this was expanded with PCSHARE. I made arrangements to get regular distribution of the VMSHARE (and later PCSHARE) files for putting up on internal machines ... some old vmshare related email http://www.garlic.com/~lynn/lhwemail.html#vmshare ... including the world-wide online vm370/cms based sales & marketing support HONE system http://www.garlic.com/~lynn/subtopic.html#hone one of my biggest problems was convincing the lawyers that IBMers wouldn't be contaminated by reading VMSHARE files. -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
Re: What's going on in the redbooks site?
shmuel+ibm-m...@patriot.net (Shmuel Metz , Seymour J.) writes: The Internet[1] is not the Web. Before the WWW, we had Archie, FTP, Gopher and other services that in many ways were more convenient than the WWW, and certainly more reliable. [1] A vast WAIS-land. re: http://www.garlic.com/~lynn/2012b.html#46 What's going on in the redbooks site? http://www.garlic.com/~lynn/2012b.html#49 What's going on in the redbooks site? recent post in a.f.c. with a little WAIS lore: http://www.garlic.com/~lynn/2012b.html#9 The round wheels industry is heading for collapse -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
Re: IPLs and system maintenance was Re: PDSE
joa...@swbell.net (John McKown) writes: IBM once owned the Stratus line, a competitor to Tandem, and called it the System/88. http://en.wikipedia.org/wiki/Stratus_Technologies minor nit *not owned* ... provided enormous amount of money to rebrand & sell as system/88. there is some folklore regarding just how many system/88s were actually installed ... about how some marketing teams would go in after IBM was bringing along a prospect and offer them an un-rebranded flavor at lower price. i marketed ha/cmp against both system/88 and stratus in much of the system/88 period ... past posts mentioning ha/cmp http://www.garlic.com/~lynn/subtopic.html#hacmp part of the marketing at the time was that stratus (and system/88) was purely fault-tolerant hardware ... but required scheduled system downtime and reboot for many types of software maintenance. For some customers with a 5-nines availability requirement ... each annual scheduled maintenance outage could blow a century of outage budget. ha/cmp didn't have equivalent individual system uptime ... but in lots of environments, clustered operation masked any single system outage ... providing overall cluster availability much better than 5-nines. Individual scheduled system maintenance could be done with a rolling outage of individual cluster members. Stratus responded they could configure for cluster operation ... but that negated the need (and expense) for real fault tolerant hardware (in all those scenarios where I was able to demonstrate clustered fault-masking recovery). Somewhat as a result, I got asked to do a section in the corporate continuous availability strategy document ... but after both Rochester (as/400) and POK (mainframe) whined that they couldn't meet the objectives, my section was pulled. -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
Re: Article on IBM's z196 Mainframe Architecture
David Kanter dkan...@gmail.com writes: http://www.realworldtech.com/page.cfm?ArticleID=RWT010312153140 Hopefully you all find this an interesting and enjoyable read. related posts about maximum configured z196 at 50BIPS http://www.garlic.com/~lynn/2012.html#23 21st Century Migrates Mainframe with Clerity http://www.garlic.com/~lynn/2012.html#56 IBM researchers make 12-atom magnetic memory bit http://www.garlic.com/~lynn/2012.html#59 IBM's z196 Article at RWT and recent post in linkedin Mainframe Experts: http://lnkd.in/2syFGU and http://www.garlic.com/~lynn/2012.html#78 counter is mega-datacenters (many that likely have more processing power than the aggregate of all currently installed mainframes) and being able to carve out (batch) virtual supercomputer (subset of a mega-datacenter's processors) ... relatively similar technologies used in supercomputers/GRID and the mega-datacenter/clouds Amazon takes supercomputing to the cloud (42nd largest supercomputer) http://news.cnet.com/8301-13846_3-57349321-62/amazon-takes-supercomputing-to-the-cloud/ has 240TIPS (or 240TFLOPS) on 17,000 cores ... for the Amazon carved-out batch supercomputer upthread has estimate of 10,000 currently installed mainframes ... assuming that all 10,000 were maximum configured z196 at 50BIPS ... that comes out to an upper limit on aggregate processing power of all currently installed mainframes of 500TIPS -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
Re: IBM researchers make 12-atom magnetic memory bit
linda.lst...@comcast.net (Linda Mooney) writes: That's really tiny! Just in my career - The first machine I was paid to work with was a 4341 with 8MB and 8 channels. My iPhone has 32MB. The possibilities of 2.5 Petabytes is, well, an awful lot. I can't help but wonder what some of the early computing pioneers would think of this. In the 90s, I had done a project that required ten high-end rs/6000 servers (to handle workload that couldn't be handled by half-dozen large 3090s). However by the middle of last decade ... there was that much processor power (one BIPS) in a cell-phone processor http://en.wikipedia.org/wiki/XScale by comparison, recent z196 announce claims 50BIPS in maximum configured (80 processor) system http://www.foxnews.com/scitech/2010/09/01/ibm-unveils-worlds-fastest-microprocessor/ my first programming class was student fortran on 709. my first programming job was porting 1401 MPIO to a 360/30 that had 64kbytes ... I got to design & implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc. low-end 360s were 0.0018 to 0.034 MIPS. http://en.wikipedia.org/wiki/IBM_System/360 and http://www-03.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP2030.html -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
Re: 21st Century Migrates Mainframe with Clerity
from: http://www.garlic.com/~lynn/2012.html#20 21st Century Migrates Mainframe with Clerity numerous mega-datacenters around the world, any one possibly with more BIPS than total aggregate mainframe installed BIPS and from: http://www.garlic.com/~lynn/2012.html#23 21st Century Migrates Mainframe with Clerity and http://en.wikipedia.org/wiki/TurboHercules#Performance any mega-datacenter running TurboHercules on every processor may have more simulated mainframe BIPS than total aggregate mainframe installed BIPS ... and simulated mainframe BIPS costing possibly 1/1000 of z196 BIPS and from this morning: Fusion-io demos billion IOPS server config http://www.theregister.co.uk/2012/01/06/fusion_billion_iops/ from above: Fusion has a track record in such demonstrations, starting with the 1 million IOPS Quicksilver demo with IBM's SVC in 2009. It needed a rack of systems. Two years later it has gone a thousand times faster with far fewer but more powerful servers. ... snip ... -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
Re: 21st Century Migrates Mainframe with Clerity
glenn.schn...@suntrust.com (Schneck.Glenn) writes: Although there may be some 'success' stories the issue I have with most vendors is where they tout - We migrated this company off the mainframe and save 10,000+ MIPS. In reality they probably moved a small application of about 1000 - 2000 MIPS which happened to be the last one on the mainframe. a couple bits from similar thread in (linkedin) Mainframe Experts http://lnkd.in/2syFGU Cloud Use Rises, Mainframe Usage Declines as Data Centers Grow and Green, According to AFCOM Survey http://eon.businesswire.com/news/eon/20110330005393/en/cloud/disaster-recovery/data-center from above: Demise of the Mainframe: While historically one of the most critical elements of any data center, today, mainframe usage continues to shrink. While AFCOM predicts mainframes will exist forever in some capacity, their prevalence has been severely diminished. ... snip ... IBM Sees A Big Boost As It Turns 100 http://www.npr.org/2011/12/28/143834727/ibm-sees-a-big-boost-as-it-turns-100 from above: The company sold its PC business 6 years ago, and now, more than 83 percent of its business is services and software. Sign a contract with Big Blue and you get consulting, cloud computing, servers, analytics, even financing. ... snip ... compared to mid-80s when top management was predicting mainframe sales growth would double corporate revenue (from $60B to $120B, approx $252B today) and instituted a massive building program to double mainframe manufacturing ... this was at a point when there were already indicators of mainframe business heading in the opposite direction ... and the company goes into the red a few years later. Note that in the above, the remaining 17% revenue would include everything else besides software & services (aka all kinds of hardware platforms) note that in addition to failures migrating off of mainframe ... there has also been some number of monumental re-engineering failures ... 
involving staying on the mainframe (any major change at all ... even when not changing the mainframe) as I've periodically pontificated in the past ... there are numerous mega-datacenters around the world ... any one of the mega-datacenters possibly having more aggregate processing power than current total installed traditional mainframes. estimated 10,000 mainframes at 4,000 to 5,000 customers around the globe http://articles.economictimes.indiatimes.com/2010-08-10/news/27620495_1_mainframe-ibm-big-challenge zEnterprise 196 can execute 50BIPS http://www.foxnews.com/scitech/2010/09/01/ibm-unveils-worlds-fastest-microprocessor/ Intel Core i7 at 177,730 MIPS http://en.wikipedia.org/wiki/Instructions_per_second or almost 180BIPS ... which makes an i7 the equivalent of more than three z196s?? mega-datacenters have been quoted at half-million to over a million processors. -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
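the upper-bound comparison in the post is simple arithmetic; a quick sketch using the post's own figures:

```python
mainframes = 10_000     # estimated installed base (cited article)
z196_bips = 50          # max-configured z196, BIPS

# upper bound on aggregate installed mainframe processing power
aggregate_tips = mainframes * z196_bips / 1000
print(aggregate_tips)   # 500.0 TIPS

i7_mips = 177_730       # Intel Core i7 figure cited from wikipedia
ratio = i7_mips / 1000 / z196_bips
print(ratio)            # ~3.55 ... one i7 equivalent of more than three z196s
```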
Re: 21st Century Migrates Mainframe with Clerity
re: http://www.garlic.com/~lynn/2012.html#20 21st Century Migrates Mainframe with Clerity other measures TPC-C: http://www.tpc.org/tpcc/results/tpcc_perf_results.asp ibm has six in the top ten ... power ... but also at #8 & #10 using (older) quad-core Xeons (but they are also the lowest price/tpmC) TPC benchmarks: http://www.tpc.org/information/benchmarks.asp early history http://www.tpc.org/information/who/gray.asp past posts mentioning original sql/relational implementations in bldg. 28 ... some of the time with Jim: http://www.garlic.com/~lynn/submain.html#systemr guess as to z196 comparison (from older z10 nehalem comparison): http://en.wikipedia.org/wiki/TurboHercules#Performance from above: ...we can run a reasonably sized load (800MIPS with our standard package). If the machine in question is larger than that, we can scale to 1600MIPS with our quad Nehalem based package and we have been promised an 8 way Nehalem EX based machine early next year that should take us to the 3200MIPS mark. Anything bigger than that is replicated by a collection of systems. ... snip ... and: Current high end System z10 systems are capable of performance up to around 28,000 MIPS (for aggregate performance of many CPUs in a fully configured 64-CPU multiprocessor server), so Hercules is outperformed by almost one order of magnitude. However, Hercules on a PC costs several orders of magnitude less[citation needed] than those high end System z systems. ... snip ... z196 has been claimed to be 50% faster than z10 or 42BIPS ... however the reference claims z196 peak at 50BIPS (possibly larger number of CPUs?) ... aka http://www.foxnews.com/scitech/2010/09/01/ibm-unveils-worlds-fastest-microprocessor/ TurboHercules runs possibly 10 native intel instructions for every emulated mainframe instruction ... and an emulated 3.2BIPS mainframe with 8way Nehalem EX ... is then 32BIPS native (compared to z196 peak 50BIPS). 
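the emulation-overhead arithmetic works out as follows ... where the 10:1 overhead is the post's own rough estimate, not a measured figure:

```python
overhead = 10         # native instructions per emulated mainframe instruction (rough estimate)
emulated_bips = 3.2   # 8-way Nehalem EX package, emulated mainframe BIPS (TurboHercules quote)

# implied native instruction rate behind the emulated MIPS figure
native_bips = emulated_bips * overhead
print(native_bips)    # 32.0 native BIPS, vs z196 peak of 50
```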
-- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
Re: IBM manual formats
martin_pac...@uk.ibm.com (Martin Packer) writes: Funnily enough I mused on Kindle MOBI / AZW format re Redbooks on Twitter just now. (You can guess what I got for Xmas.) :-) I'd like to have the discussion on how to format for Kindle with the right people. In ITSO (the Redbooks people) we use Framemaker (at a fairly ancient level) so I'm not sure whether that could be taught to emit MOBI / AZW / Epub etc. I would think Information Development (Product Manual Writers) are using something else (once was Bookmaster, which I still use myself) and I don't know what the options are. simplest is emailing the file to your kindle.com userid with convert (pdf may not turn out like you expected) http://www.amazon.com/gp/help/customer/display.html/ref=hp_pdoc_main_short_us?nodeId=200767340 and much more ... http://www.amazon.com/Kindle-Formatting-Complete-Amazon-ebook/dp/B0024FAPF4 -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
Re: IBM manual formats
scott_j_f...@yahoo.com (Scott Ford) writes: I agree , it would give the people who use the manuals, aka the readers more options. I merged the multiple postscript files from Melinda's VM and the VM Community: Past, Present, and Future into a single PDF file and then also ran it through Amazon's (kindle) conversion. Melinda now has the files up on her web page: http://web.me.com/melinda.varian/Site/Melinda_Varians_Home_Page.html and it came out quite well. However, converting some of the other files to kindle format came out less well. Standard PDF-to-kindle conversion reflows words, which messes up tables and other situations involving fixed spacing & multiple blanks. Normally PDF-to-kindle seems to come out with a small file ... but the VM and the VM Community had a lot of jpeg images ... which resulted in a kindle file that was twice the size of the pdf file. -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
Re: IBM Manuals
jcew...@acm.org (Joel C. Ewing) writes: If you can get a text-based PDF document from the original source, that would certainly be preferable, as that allows text searching capability. But, if all you have is a hard copy, none of the current freely-available OCR tools come close to preserving the original document as accurately as image-based PDF, unless you have the time for extensive manual editing. Bitsavers.org uses a modified archive approach that uses higher resolution to allow possible future OCR; but compensates for higher resolution by using black/white threshold images that sacrifice quality of embedded document illustrations. I prefer to go with lower resolution adequate for human reading and preserve gray scale, and even color, where its use is significant. I finally got approval for putting up scanned (originally done at 600dpi) copy of the Share 1979 LSRAD report: http://bitsavers.org/pdf/ibm/share/ I sent them 4mbyte & 150mbyte PDF versions and they put up the 150mbyte ... although I don't notice a lot of difference. I did do some image post processing from the original scan to bring out letters/text (including forcing b/w threshold; before conversion to pdf) ... I find that resulted in much better reading quality, more than the difference between the 4mbyte & 150mbyte versions. i did put up the cover in color/jpg at very low resolution (7kbytes) http://www.garlic.com/~lynn/lsradcover.jpg Early spring 2009 I was asked to HTML'ize the Pecora hearings (30s congressional hearings into the '29 crash ... glass-steagall, etc) that had been scanned the previous fall at boston public library ... doing lots of internal HREFs index/links as well as lots of HREFs between what happened then and what happened this time (some expectation that the new congress might have some appetite for the subject). I spent a lot of time with free OCR programs ... but there were lots of problems. 
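the b/w threshold step mentioned above (forcing gray-scale scans to pure black/white to sharpen text before pdf conversion) amounts to a per-pixel cutoff. a minimal stand-alone sketch on raw gray-scale samples ... a real run would use an imaging library (e.g. Pillow's Image.point does exactly this mapping), and the cutoff value here is just illustrative, tuned per scan:

```python
def bw_threshold(pixels, cutoff=160):
    # map 0-255 gray-scale samples to pure black (0) or white (255);
    # cutoff is illustrative ... dark print below the cutoff goes black,
    # paper background above it goes white
    return [255 if p > cutoff else 0 for p in pixels]

print(bw_threshold([12, 200, 160, 250]))   # [0, 255, 0, 255]
```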
In any case, after doing quite a bit of work, got a call that it wouldn't be needed after all (wallstreet pouring enormous amounts of money into congress) -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
Re: SPF in 1978
re: http://www.garlic.com/~lynn/2011p.html#106 SPF in 1978 http://www.garlic.com/~lynn/2011p.html#107 SPF in 1978 I had originally done extended sharing on cp67 along with the paged-mapped CMS filesystem ... which I then converted to vm370 ... some old email http://www.garlic.com/~lynn/2006v.html#email731212 http://www.garlic.com/~lynn/2006w.html#email750102 http://www.garlic.com/~lynn/2006w.html#email750430 with respect to csc/vm in the above ... one of my hobbies was making enhanced operating systems available to internal datacenters ... first with cp67 and then later with vm370. during the future system period ... some old posts http://www.garlic.com/~lynn/submain.html#futuresys I continued to do 360/370 stuff (even when future system was killing off 370 efforts) ... and periodically would ridicule future system activities. after the demise of future system, there was a mad rush to get stuff back into the 370 product pipelines ... which motivated the decision to release various bits & pieces of stuff that I had been doing. A small subset of the sharing stuff (w/o the paged-mapped filesystem support) was included in vm370 release 3 as DCSS. the following is an exchange with the SPF group about trying to map SPF into a shared module (as opposed to DCSS sharing). Date: 11/07/79 14:53:27 From: wheeler To: somebody in GBURG SPF group The SPF module starts (begins) at location x'20000' and ends somewhere close to x'70000' (actually around x'6a000'). If I load and genmod SPF it ordinarily creates a MODULE which starts at location x'20000' and ends around x'6a000', i.e. those core locations are written to disk. When I invoke SPF the SPF MODULE file is read into locations starting at the start of the module (x'20000') and ending at the end of the module (x'6a000'). -- Shared module support is an enhancement to VM and CMS which allows specification at GENMOD time which segments (16 page groups) are to be shared. 
The segments to be shared must be occupied by the module being genmod'ed (i.e. segment 2: x'20000' thru x'30000'; segment 3: x'30000' thru x'40000', etc.). -- Ordinarily I would LOAD SPF GENMOD SPF -- for shared modules I LOAD SPF reset module ending address to x'70000' GENMOD SPF (share 2 3 4 5 6 -- Now at loadmod time, in addition to reading the SPF MODULE file into the specified core locations (i.e. x'20000' thru x'70000') it also identifies to CP that segments 2, 3, 4, 5, and 6 are SPF shared segments. For all other programs that I have been involved with, that works satisfactorily (i.e. the same code runs in discontiguous shared segments, runs in modules, runs in shared modules) and modules which did not change internal code locations while a discontiguous shared module also do not change internal code locations while a module and/or a shared module. As I read your reply, SPF is altering 8 bytes of core at absolute location x'20000' independently of whether or not that location is contained within the module. If I were to: LOAD SPF (origin 30000 reset module ending address to x'80000' GENMOD SPF (share 3 4 5 6 7 there would not be any problems? since SPF is not storing into a relative module core location (i.e. start of the 1st SPF module + x'0' bytes) but into absolute location x'20000'. ... snip ... and the response about why there were still problems: as an aside ... the 1979 GBURG SPF group appeared to still be using all upper case Date: 11/07/79 To: wheeler From: somebody in GBURG SPF group LYNN, THANKS FOR SENDING THE DESCRIPTION OF SHARED MODULES. I HAVEN'T STUDIED IT IN DETAIL, BUT DID READ THROUGH IT. VERY INTERESTING. YOUR IDEA OF STARTING SPF AT 30000 INSTEAD OF 20000 WOULD AVOID THE SHARED VIOLATION AS WE STORE INTO LOCATION 20000. HOWEVER, THAT WILL NOT SOLVE ALL THE PROBLEMS. IN SPF, THE WAY WE DETERMINE WHETHER WE ARE RUNNING IN THE USER AREA (TEST MODE) OR IN DCSS, IS TO COMPARE THE ADDRESS OF THE FIRST PROGRAM (HAPPENS TO BE NAMED SPF) TO THE VALUE '20000'. 
IF IT IS NOT THERE, IT IS ASSUMED THAT WE ARE IN DCSS. THE IMPLICATION IS THAT SPF WILL NOT RELOAD ITSELF FOLLOWING A FOREGROUND COMPILE, OR CMS COMMAND THAT USES THE USER AREA. IF MY UNDERSTANDING OF SHARED MODULES IS CORRECT, I AM AFRAID THAT, AT LEAST IN THE NEAR TERM, THERE IS NOTHING I CAN DO THAT WILL PERMIT SPF TO OPERATE CORRECTLY IN YOUR SPECIAL ENVIRONMENT. FEEL FREE TO WRITE OR CALL. REGARDS, XXX ... snip ... later exchange about SPF being a real pig of an application: Date: 02/21/80 12:59:12 To: wheeler Hi, Lynn, Do you have SPF/CMS installed, or know anybody that does ... snip ... Date: 02/21/80 14:42:09 From: wheeler SPF/CMS installed and running, but it is a pig tho. ... snip ... In this time-frame there were a number of internally developed CMS full-screen editors ... early one that had been released to customers was
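as an aside on the shared-module exchange above ... the segment numbers passed to GENMOD (SHARE follow directly from the 370 segment size of 16 4kbyte pages = 64kbytes (the "16 page groups" of the email). a small sketch of that segment arithmetic, function name made up for illustration:

```python
PAGE = 4096
SEGMENT = 16 * PAGE   # 64kbytes ... the "16 page groups" per segment

def segments_spanned(start, end):
    # segment numbers occupied by a module loaded at [start, end)
    return list(range(start // SEGMENT, (end - 1) // SEGMENT + 1))

# a module loaded at the bottom of segment 2 and ending around x'6a000'
# occupies segments 2 thru 6 ... matching GENMOD SPF (share 2 3 4 5 6
print(segments_spanned(0x20000, 0x6A000))   # [2, 3, 4, 5, 6]
```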
Re: SPF in 1978
jim.marsh...@opm.gov (Jim Marshall) writes: In 1978 I had the honor to have the first IBM 3032 shipped (#06) into the Pentagon when I worked at the Air Force Data Services Center. I already had in place an IBM 360-75J which ran TSO. With the IBM 3032 came IPO 1.0 and we also received the full-screen product called IBM 3270 Display and Structured Programming Facility or as people called it SPF. Later in the early 1980s it morphed into ISPF and a few years later it split into ISPF and PDF. PDF came with all the facilities to write ISPF applications. It was for those who did not want to buy the precoded ISPF dialogs. Then in the middle 1980s I also worked on VM and there was an ISPF and PDF for VM. The notion was you'd learn ISPF and it would be almost the same in both worlds. Except the diehard VM'ers loved CMS. Later in the early 1990s I recall ISPF and PDF merged back into ISPF; except over in VM where it remains today. If you look at VM's DIRMAINT software it will have a pre-requisite of these products but indeed only if you want to use their precoded ISPF application. Save your money. Very interesting times. Jim Marshall, Capt, USAF-Ret old email http://www.garlic.com/~lynn/2001m.html#email790404 about afds coming to visit about a large number of vm/4341s ... posted in multics newsgroup: http://www.garlic.com/~lynn/2001m.html#12 there having been a little rivalry between the 4th & 5th floors; some of the ctss people went to the 5th flr and did multics and others went to the science center on the 4th flr and did virtual machines (first cp40/cms on a specially modified 360/40 with hardware virtual memory, which morphs into cp67/cms when 360/67 became available and later morphs into vm370). past posts about the science center http://www.garlic.com/~lynn/subtopic.html#545tech recent post about vm performance tools being combined in the same organization with ISPF ... 
http://www.garlic.com/~lynn/2011m.html#42 CMS load module format the problem was the company having a difficult time with the unbundling announcement and charging for application software ... unbundling http://www.garlic.com/~lynn/submain.html#unbundle the guideline was that price had to cover costs; this was sometimes interpreted as organization costs had to be covered by software revenue. there were a number of traditional software products that were combined with various vm370 products ... where the aggregate revenue covered aggregate costs (in the ISPF case, ISPF and the vm370 performance products both had approx. the same revenue; ISPF had a couple hundred people while the vm370 performance products were held to 3 people and limited new development ... aka nearly all revenue going to fund ISPF). unrelated Date: 9 August 1984, 13:35:48 EDT From: xx To: wheeler Recently I saw on an APL disk in San Jose an announcement of something called VMSHARE. It appears to be a repository of information for VM users both in and out of IBM. I would greatly appreciate it if you could send me any information you might have about it, such as how I may get access to such information, and how I might make contributions to it. I am a general user on a small VM system, I do have my own copies of the IBMVM conferencing EXECs (if that is of any help) and I am very interested in the opinions of users outside IBM as well as developments in VM usage in general. Thank you very much for your assistance, xx ISPF/PDF Development ... snip ... tymshare provided commercial online vm370 service ... in aug 1976 they started making their vm370/cms-based computer conferencing available free to SHARE as vmshare ... archived here http://vm.marist.edu/~vmshare/ I then managed to get corporate approval to shadow vmshare ... making it available inside the company (had to jump through hoops with the lawyers over whether external vmshare information would contaminate corporate employees). misc. 
old email mentioning vmshare http://www.garlic.com/~lynn/lhwemail.html#vmshare I had also been blamed for online computer conferencing on the internal network ... some past posts about the internal network http://www.garlic.com/~lynn/subnetwork.html#internalnet folklore is that when the executive committee was informed of computer conferencing (and the internal network), 5 of 6 wanted to fire me. misc. past posts mentioning computer mediated conversation http://www.garlic.com/~lynn/subnetwork.html#cmc -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
Re: SPF in 1978
eric-ibmm...@wi.rr.com (Eric Bielefeld) writes: Your career sounds frighteningly like mine. I started as a systems programmer in 1978 at Milwaukee County, where I worked before as an operator and then an applications programmer. We had a 3032 also, but I thought it came in around 1975 or so. I may be wrong. I remember our conversion from VS1 to MVS 3.7, which was in 1978 and early 79. I think we used a Panvalet product for the editor in VS1. I liked SPF on MVS 3.7 a lot better. I also did VM. I can't remember when I started doing VM, but I know it was finally retired in Feb. 1999. Then they didn't have to worry about Y2K on VM, since we were on R5 of VM. I remember hearing that there weren't a lot of 3032's made. A lot more 3033s and 3031s. If I remember right, the 3032 was about the same speed as a 370/168. -- Eric Bielefeld Systems Programmer re: http://www.garlic.com/~lynn/2011p.html#106 SPF in 1978 recent (long-winded) discussion of 3031, 3032, 3033 (in linkedin IBM Historic Computing group): http://www.garlic.com/~lynn/2011p.html#82 Migration off mainframe 3032 was a 370/168-3 with different covers, using the external 303x channel director (instead of external 28x0 channels). The 303x channel director was a 370/158 engine w/o the 370 microcode and just the integrated channel microcode (3031 was a pair of 370/158 engines ... one with just the 370 microcode and the other with just the integrated channel microcode). ... and 3033 was 370/168-3 logic mapped to 20% faster chips ... the chips also had ten times the circuits/chip as used in the 168 ... initially unused ... some late optimization, limited use of the additional circuits/chip, got the 3033 up to 1.5 times a 168-3. also discussed in this URL http://www.jfsowa.com/computer/memo125.htm 3031s were being beat by 4341s ... past post with early benchmark http://www.garlic.com/~lynn/2000d.html#0 ... faster, cheaper, less floor space, less power, less cooling, etc.
some old email mentioning 4341 http://www.garlic.com/~lynn/lhwemail.html#4341 and 4341 clusters were beating 3033: aggregate faster, cheaper, less floor space, less power, less cooling, etc. at one point, a POK executive, in some internal politics, got the allocation of a critical 4341 manufacturing component cut in half. -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
Re: Question on PR/SM dispatcher
shmuel+ibm-m...@patriot.net (Shmuel Metz , Seymour J.) writes: Certainly. If I recall correctly, MDF was implemented in what Amdahl called macrocode, not by dedicated hardware. So what triggered the redispatch at the end of a time slice if not an external interrupt? the guys doing MDF used to come to baybunch and pump me for information ... I had done time-slice dispatching since my undergraduate days in the 60s and had been involved in the design and implementation of ECPS for the 138/148 ... there have been numerous issues over the years with implementations trying to get around the use of timer-based considerations ... hoping that other events would provide sufficient control without having to resort to the additional overhead ... this has periodically resulted in monumental gaffes when the various other events failed to occur in the anticipated ways. the other issue was that the MDF implementation for Amdahl was significantly simpler because of the macrocode use. 3090 had to respond with pr/sm ... but that was a significantly more complex undertaking because there wasn't any equivalent facility and they had to fall back to horizontal microcode. there was also an issue in the early 1980s when somebody, having gotten an award for changes to mvs/xa, contacted me about whether similar changes could be made to vm. I commented that I had not done it any other way since my work as an undergraduate in the 60s ... and in fact had arguments with VS2/SVS (precursor to MVS) in the early 70s that they shouldn't be doing it the wrong way. past posts mentioning part of the effort for ECPS http://www.garlic.com/~lynn/94.html#21 past posts mentioning dispatching/scheduling http://www.garlic.com/~lynn/subtopic.html#fairshare misc past posts mentioning macrocode: http://www.garlic.com/~lynn/2002p.html#44 Linux paging http://www.garlic.com/~lynn/2002p.html#48 Linux paging http://www.garlic.com/~lynn/2003.html#9 Mainframe System Programmer/Administrator market demand?
http://www.garlic.com/~lynn/2003.html#56 Wild hardware idea http://www.garlic.com/~lynn/2005d.html#59 Misuse of word microcode http://www.garlic.com/~lynn/2005d.html#60 Misuse of word microcode http://www.garlic.com/~lynn/2005h.html#24 Description of a new old-fashioned programming language http://www.garlic.com/~lynn/2005p.html#14 Multicores http://www.garlic.com/~lynn/2005p.html#29 Documentation for the New Instructions for the z9 Processor http://www.garlic.com/~lynn/2005u.html#40 POWER6 on zSeries? http://www.garlic.com/~lynn/2005u.html#43 POWER6 on zSeries? http://www.garlic.com/~lynn/2005u.html#48 POWER6 on zSeries? http://www.garlic.com/~lynn/2006b.html#38 blast from the past ... macrocode http://www.garlic.com/~lynn/2006c.html#9 Mainframe Jobs Going Away http://www.garlic.com/~lynn/2006j.html#32 Code density and performance? http://www.garlic.com/~lynn/2006j.html#35 Code density and performance? http://www.garlic.com/~lynn/2006m.html#39 Using different storage key's http://www.garlic.com/~lynn/2006p.html#42 old hypervisor email http://www.garlic.com/~lynn/2006u.html#33 Assembler question http://www.garlic.com/~lynn/2006u.html#34 Assembler question http://www.garlic.com/~lynn/2006v.html#20 Ranking of non-IBM mainframe builders? http://www.garlic.com/~lynn/2007b.html#1 How many 36-bit Unix ports in the old days? http://www.garlic.com/~lynn/2007d.html#3 Has anyone ever used self-modifying microcode? Would it even be useful? http://www.garlic.com/~lynn/2007d.html#9 Has anyone ever used self-modifying microcode? Would it even be useful? http://www.garlic.com/~lynn/2007j.html#84 VLIW pre-history http://www.garlic.com/~lynn/2007k.html#74 Non-Standard Mainframe Language? 
http://www.garlic.com/~lynn/2007n.html#96 some questions about System z PR/SM http://www.garlic.com/~lynn/2008c.html#32 New Opcodes http://www.garlic.com/~lynn/2008c.html#33 New Opcodes http://www.garlic.com/~lynn/2008c.html#42 New Opcodes http://www.garlic.com/~lynn/2008j.html#26 Op codes removed from z/10 http://www.garlic.com/~lynn/2008r.html#27 CPU time/instruction table http://www.garlic.com/~lynn/2010m.html#74 z millicode: where does it reside? http://www.garlic.com/~lynn/2011c.html#93 Irrational desire to author fundamental interfaces -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
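The timer-based redispatch question above (what forces a task off the processor at the end of a time slice) can be sketched with a toy round-robin dispatcher. This is only an illustration of the general technique, not CP/CMS, MDF, or PR/SM code:

```python
# Toy illustration: why a dispatcher needs a timer-driven end-of-slice
# rather than hoping tasks block voluntarily. A compute-bound task never
# yields; only the timer "interrupt" (here a simple tick budget) forces
# redispatch so other tasks make progress.

TIME_SLICE = 3  # ticks a task may run before forced redispatch

def dispatch(tasks, total_ticks):
    """Round-robin dispatcher: each entry is (name, remaining_work)."""
    schedule = []                      # record of which task ran each tick
    queue = list(tasks)
    while queue and total_ticks > 0:
        name, work = queue.pop(0)
        ran = min(TIME_SLICE, work, total_ticks)   # timer limits the burst
        schedule.extend([name] * ran)
        total_ticks -= ran
        work -= ran
        if work > 0:
            queue.append((name, work))  # end of slice: back of the queue
    return schedule

# A compute-bound task ("A", 10 ticks) cannot starve the short one ("B", 2):
print(dispatch([("A", 10), ("B", 2)], 12))
```

Without the `TIME_SLICE` bound (i.e. relying on "other events"), task "B" would wait until "A" finished or blocked on its own, which is exactly the failure mode described above.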
Re: Is there an SPF setting to turn CAPS ON like keyboard key?
paulgboul...@aim.com (Paul Gilmartin) writes: Or do utilities not count as applications? Define application. Again, I'm confident that at least one very old application would accept (define accept) lower case, at least in comments. And very old assemblers tolerated lower case in macro arguments, perhaps better than HLASM does. (But only as long as assemblers supported macros.) CTSS on the ibm7094 used 2741s with upper/lower case ... and at least the CTSS document formatting utility runoff regularly had lowercase. http://en.wikipedia.org/wiki/Compatible_Time-Sharing_System http://en.wikipedia.org/wiki/IBM_2741 some of the ctss people went to the 5th flr, 545 tech sq and did multics. others went to the science center on the 4th flr and did cp67/cms (first cp40/cms on a specially modified 360/40 with virtual memory, which then morphs into cp67/cms when standard virtual memory became available with the 360/67). misc. past posts mentioning the science center http://www.garlic.com/~lynn/subtopic.html#545tech ctss runoff http://en.wikipedia.org/wiki/RUNOFF was ported to cms as script. GML (for the initials of the three inventors) was invented at the science center in 1969 and GML tag processing was added to script (in addition to the runoff dot controls). misc. past posts mentioning gml http://www.garlic.com/~lynn/submain.html#sgml a decade later, gml morphs into the ISO standard sgml ... and another decade later, sgml morphs into html http://infomesh.net/html/history/early one of the first mainstream corporate manuals moved to script was principles of operation. the actual document was called the architecture redbook (for distribution in red 3-ring binders). script conditional controls governed whether the full redbook was formatted or just the principles of operation subsection. -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
Re: Is there an SPF setting to turn CAPS ON like keyboard key?
ibm-m...@snacons.com (Roger Bowler) writes: This would have been the IBM 3277 Data Entry keyboard. Page 25 of GA27-2749-5_3270descr_Nov75.pdf at bitsavers shows two forms of the Data Entry keyboard both having PF1-PF5 keys neatly hidden amongst the other keys in the top right area of the keyboard. The 78-key typewriter keyboard and the operator console keyboard were the ones re: http://www.garlic.com/~lynn/2011p.html#84 oops, missed that. -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
Re: z/OS's basis for TCP/IP
lindy.mayfi...@sas.com (Lindy Mayfield) writes: Interesting, if I am correct, they took a long time to implement a resolver. If so, how were hostnames resolved? re: http://www.garlic.com/~lynn/2011p.html#42 z/OS's basis for TCP/IP http://www.garlic.com/~lynn/2011p.html#43 z/OS's basis for TCP/IP http://www.garlic.com/~lynn/2011p.html#45 z/OS's basis for TCP/IP http://www.garlic.com/~lynn/2011p.html#46 z/OS's basis for TCP/IP http://www.garlic.com/~lynn/2011p.html#47 z/OS's basis for TCP/IP trivia ... the person that invented DNS had, a decade prior, done a stint working at the cambridge science center (while at MIT) ... related to the cms multi-level source update process (this was after gml had been invented at the science center and before cp67 morphed into vm370). old posts with reference to somebody being semi-facetious http://www.garlic.com/~lynn/aepay11.htm#43 Mockapetris agrees w/Lynn on DNS security - (April Fool's day??) http://www.garlic.com/~lynn/aepay11.htm#45 Mockapetris agrees w/Lynn on DNS security - (April Fool's day??) wiki reference: http://en.wikipedia.org/wiki/Paul_Mockapetris another piece of trivia from the above wiki entry: jon postel used to let me do part of std1 ... referenced in this recent linkedin post http://www.garlic.com/~lynn/2011o.html#17 Ancient Internet History -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
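As an aside on the resolver question quoted above: before DNS, ARPANET hosts resolved names from a flat, centrally distributed HOSTS.TXT-style table (RFC 952 era) rather than by querying a hierarchy of name servers. A minimal sketch of that style of lookup, with invented illustrative entries:

```python
# Pre-DNS name resolution sketch: one flat table of (address, official
# name, aliases), distributed to every host. Entries below are
# illustrative only, not real period data.

HOSTS_TXT = """\
10.0.0.73   SRI-NIC     NIC
10.2.0.6    MIT-MULTICS MULTICS
"""

def build_table(text):
    """Parse a hosts-file style table into a name -> address dict."""
    table = {}
    for line in text.splitlines():
        fields = line.split()
        if len(fields) < 2:
            continue
        addr, names = fields[0], fields[1:]
        for name in names:           # official name and aliases all resolve
            table[name.upper()] = addr
    return table

table = build_table(HOSTS_TXT)
print(table["MULTICS"])   # alias lookup -> 10.2.0.6
```

The obvious scaling problem (every host re-downloading an ever-growing flat file) is what DNS was invented to fix.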
Re: z/OS's basis for TCP/IP
svet...@ameritech.net (scott) writes: Just was wondering where the TCP/IP stack came from for use in z/OS? Did it originate from the University of Berkeley? I hadn't followed the recent. The original mainframe tcp/ip stack product was implemented on vm370 in (mainframe) vs/pascal ... a purely IBM implementation. A side-effect is that it had none of the buffer length exploits that are common in C-language implementations. It was ported to MVS by implementing simulation of some of the vm370 features. recent discussion in the linkedin mainframe group about doing the rfc1044 support for the implementation and getting possibly 500 times improvement in the bytes moved per instruction executed. http://www.garlic.com/~lynn/2011p.html#36 Has anyone successfully migrated off mainframes misc. other posts mentioning having done rfc1044 support for the mainframe implementation http://www.garlic.com/~lynn/subnetwork.html#1044 this was approx. the same time that Berkeley released the 4.3 Tahoe and Reno implementations that show up as the TCP/IP stack on lots of other platforms. Some trivia ... we were doing ha/cmp and using ip-address take-over for some of the recovery procedures http://www.garlic.com/~lynn/subtopic.html#hacmp and found a bug in the 4.3 ARP cache code (translates ip-address to LAN/MAC) that was being used on a large number of clients ... which created problems for the ip-address take-over recovery strategy. another piece of trivia ... after we leave ... two of the people mentioned in the old post about the jan92 meeting in Ellison's conference room ... also leave and show up at a small client/server startup responsible for something called the commerce server. We are brought in as consultants because they want to do payment transactions on the server; the small startup has also invented this technology called SSL they wanted to use. As part of availability for what is called the payment gateway ... sits on the internet and is gateway between webservers and the payment networks ...
some past posts http://www.garlic.com/~lynn/subnetwork.html#payment we have multiple connections in different parts of the internet backbone and use multiple A-record support. I try and convince the browser group that they need to be supporting multiple A-records also ... as part of availability for client/server to webservers. They say it is too complicated. I provide them examples from 4.3 Reno clients ... they still say it is too complicated. It takes another year to get multiple A-record support into the browser. later the communication group hires a subcontractor to do a tcp/ip stack implementation in VTAM. the initial implementation had tcp performing significantly better than lu6.2. He was told that everybody knows that a proper tcp/ip implementation would be slower than lu6.2 and they weren't going to be paying for anything other than a proper implementation. -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
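The multiple A-record availability argument above amounts to: a client should walk every address returned for a name rather than failing on the first. A minimal sketch using the standard sockets API (the host and port in any real use are placeholders, not anything from the post):

```python
# Multiple A-record failover sketch: getaddrinfo() returns every address
# published for the name; trying each in turn gives the client-side
# availability the text describes.
import socket

def connect_any(host, port, timeout=5.0):
    """Try every address returned for host; return the first working socket."""
    last_err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        s = socket.socket(family, socktype, proto)
        s.settimeout(timeout)
        try:
            s.connect(addr)          # on failure, fall through to next address
            return s
        except OSError as err:
            last_err = err
            s.close()
    raise last_err or OSError("no addresses for %s" % host)
```

A browser that only ever used the first address (the "too complicated" position described above) loses exactly this failover.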
Re: z/OS's basis for TCP/IP
re: http://www.garlic.com/~lynn/2011p.html#42 z/OS's basis for TCP/IP this talks about the bsd 4.3 tahoe (june 1988) and reno (early 1990) distributions ... I've still got the original source distribution backed up someplace http://en.wikipedia.org/wiki/Berkeley_Software_Distribution All the BSD stuff was done in C language, and the tahoe and reno distributions were picked up and used by a large number of different platforms. As previously mentioned, the IBM mainframe stack was done in vs/pascal. attached from summer 1988 (R1L2, about the same time as 4.3 tahoe) ... part of the announce includes reference to adding the support that I had done for RFC1044 to the product. The basic support had been doing approx. 44kbytes/sec using nearly a whole 3090 processor. For rfc1044, in some tuning tests I did at Cray Research, got 1mbyte/sec sustained channel media throughput using only a modest amount of a 4341 (nearly 500 times improvement in bytes transferred per instruction executed). misc. past posts mentioning doing rfc1044 http://www.garlic.com/~lynn/subnetwork.html#1044 NUMBER 288-396 DATE 880726 CATEGORY LS00, LS60, AS20 TYPE Programming TITLE IBM TCP/IP FOR VM (TM) RELEASE 1 MODIFICATION LEVEL 2 WITH ADDITIONAL FUNCTION AND NEW NETWORK FILE SYSTEM FEATURE ABSTRACT IBM announces Transmission Control Protocol/Internet Protocol (TCP/IP) for VM (5798-FAL) Release 1 Modification Level 2. Release 1.2 contains functional enhancements and a new optional Network File System (NFS) (1) feature. VM systems with the NFS feature installed may act as a file server for AIX (TM) 2.2, UNIX (2) and other systems with the NFS 3.2 client function installed. Additional functional enhancements in Release 1.2 include: support for 9370 X.25 Communications Subsystem, X Window System (3) client function, the ability to use an SNA network to link two TCP/IP networks, and a remote execution daemon (server).
Charges Graduated Monthly Program Processor One-Time License Number Group Charge Charge 5798-FAL 10 $ 3,000$ 335 154,000 207,000 30 10,000 40 16,000 50 21,670 Planned Availability Date: September 30, 1988 (Refer to the External Ordering Information for shipment dates.) (TM) Trademarks of the International Business Machines Corporation. (1) Trademark of Sun Microsystems, Inc. (2) Registered trademark of American Telephone and Telegraph. (3) Trademark of Massachusetts Institute of Technology. PRODNO 5798-FAL IBM Transmission Control Protocol/Internet Protocol for VM IMKTG MARKETING INFORMATION MARKETING CHANNELS o NCMD o SWMD PRODUCT POSITIONING There is a rapid increase in the number of workstations used for engineering/scientific computing as well as increased use by many other industries. The Network File System is popular as a file server to support these workstations. The Network File System on IBM TCP/IP for VM allows the IBM systems running VM to act as a file server for the engineering/scientific workstations. The DASD and associated VM programming support provide a high quality system for use as a file server in this environment. Systems of other vendors with the NFS 3.2 client protocols implemented may access files on the VM system using TCP/IP and the NFS feature. The IBM AIX Network File Systems provide client function that will access these files. The IBM Personal Computer feature of TCP/IP for VM does not contain NFS client function and cannot access NFS files on the VM system. MARKETING STRATEGY IBM TCP/IP for VM and the Network File System should be marketed to customers with VM systems and engineering/scientific workstations with NFS 3.2 installed. MARKETING FOCUS SALES COMPENSATION PLAN: Normal provisions apply. MEASUREMENT VALUE (MV): MV is available on HONE for all programs by keying the command POINTS 5798-FAL at the entry prompt arrow of the selection screen. MV is also available on AAS under the mnemonic QSLM. 
HONE INFORMATION Proposal material will not be available through HONE. The configuration aids CFPROGS will be available through HONE on September
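The "nearly 500 times improvement in bytes transferred per instruction executed" mentioned above can be sanity-checked with a back-of-envelope calculation. The MIPS figures below are rough period estimates I am assuming for illustration; they are not numbers from the post:

```python
# Back-of-envelope check of the "nearly 500 times" figure. Assumed
# (not from the post): a 3090 processor at ~30 MIPS fully consumed moving
# ~44 KB/s, versus a ~1.2 MIPS 4341-class processor sustaining ~1 MB/s.

base_bytes_per_sec  = 44e3      # stock stack: ~44 KB/s
base_ips            = 30e6      # assumed: ~full 3090 processor, instr/sec

tuned_bytes_per_sec = 1e6       # RFC1044 path: ~1 MB/s sustained
tuned_ips           = 1.2e6     # assumed: 4341-class processor, instr/sec

base_bpi  = base_bytes_per_sec  / base_ips    # bytes per instruction
tuned_bpi = tuned_bytes_per_sec / tuned_ips

print(round(tuned_bpi / base_bpi))  # ratio in the ballpark of "nearly 500"
```

With these assumptions the ratio comes out in the mid-500s, consistent with the post's "nearly 500 times"; different assumed MIPS figures move the number but not the order of magnitude.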
Re: z/OS's basis for TCP/IP
re: http://www.garlic.com/~lynn/2011p.html#42 z/OS's basis for TCP/IP http://www.garlic.com/~lynn/2011p.html#43 z/OS's basis for TCP/IP this is a post here on ibm-main last april http://www.garlic.com/~lynn/2011f.html#29 TCP/IP Available on MVS When? http://www.garlic.com/~lynn/2011f.html#30 TCP/IP Available on MVS When? http://www.garlic.com/~lynn/2011f.html#31 TCP/IP Available on MVS When? quotes from the ibmnew89 memo on vmshare http://vm.marist.edu/~vmshare/browse?fn=IBMNEW89&ft=MEMO about 5798-DRG from 1984 (i.e. same as wiscnet from wisconsin) ... which was replaced by 5798-FAL april 1987. -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: z/OS's basis for TCP/IP
bzeeb-li...@lists.zabbadoz.net (Bjoern A. Zeeb) writes: Otherwise you can probably still get them from a friend or a more complete (source) history from here (for a small fee): http://www.mckusick.com/csrg/index.html re: http://www.garlic.com/~lynn/2011p.html#42 z/OS's basis for TCP/IP http://www.garlic.com/~lynn/2011p.html#43 z/OS's basis for TCP/IP http://www.garlic.com/~lynn/2011p.html#45 z/OS's basis for TCP/IP i saw him last month at a conference ... we were both wearing the same tshirt. -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: z/OS's basis for TCP/IP
l...@garlic.com (Anne Lynn Wheeler) writes: IADMIN ADMINISTRATIVE INFORMATION ORDERING INFORMATION The HONE configuration aid CFPROGS may be used to determine ordering information. The HONE aid SYSLINK may be used to transmit the ordering information from HONE to AAS. PROCESSOR GROUP-TO-PROCESSOR GROUP UPGRADES The program in this announcement is eligible for processor group upgrades (e.g., Group 20 to Group 40) when notification is received that the customer has changed the processor (designated machine) on which the licensed program is running. For special administrative information, refer to ADMININFO Item Number DVG33. PROGRAMMING RPQS Requests for PRPQs will not be accepted. SPONSORING EXECUTIVE S. J. Palmisano Group Director Mid-Range Systems Management re: http://www.garlic.com/~lynn/2011p.html#43 z/OS's basis for TCP/IP in the mid-70s the US HONE datacenters were consolidated at 1501 (although the bldg now has another occupant). Recent references are to the Facebook hdqtrs new building next door at 1601. However, this is a reference to Facebook moving from 1601 to 1 Hacker Way http://www.zdnet.com/blog/facebook/facebooks-new-headquarters-is-located-at-1-hacker-way/5831 this is Facebook moving into the old Sun campus. I had spent a lot of time in 1501 ... although I wasn't in any way part of the HONE infrastructure ... but HONE was one of my hobbies. misc. past posts mentioning HONE http://www.garlic.com/~lynn/subtopic.html#hone -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: 1979 SHARE LSRAD Report
l...@garlic.com (Anne Lynn Wheeler) writes: Two computer systems proved invaluable for producing this report. Draft copies were edited on the Tymshare VM system. The final report was produced on the IBM Yorktown Heights experimental printer using the Yorktown Formatting Language under VM/CMS. ... aka Tymshare had started making its online computer conferencing system available for free to SHARE as VMSHARE in Aug1976 ... vmshare archives: http://vm.marist.edu/~vmshare/ -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: 1979 SHARE LSRAD Report
Barry Schrager barryschra...@cs.com writes: This all disturbs me. 30 years ago, companies were willing to invest their personnel time in activities like this. This not only improves our profession but builds an expertise that many claim is lacking. I have a SHARE paper I wrote in 1974 which contains the recommendations from the Security Project on security requirements for future IBM Operating Systems. It is amazingly accurate. I also have a paper that I presented at the 1974 IBM Data Security Forum. New Era Software will be making these available in December along with a Foreword created by the brilliant writer Julie-Ann Williams. re: http://www.garlic.com/~lynn/2011p.html#10 1979 SHARE LSRAD Report i sent them a large 100+mbyte version and a lower-res 4mbyte version ... they put up the hi-res larger file this morning. http://bitsavers.org/pdf/ibm/share/ when i was an undergraduate, i had done a huge amount of thruput work on os/360 and then got a copy of cp67 and did lots of code rewrite. recent (linkedin) mainframe discussion post regarding some of the work http://www.garlic.com/~lynn/2011p.html#5 Why are organizations sticking with mainframe references an old post with part of the presentation that I had made at fall 1968 SHARE http://www.garlic.com/~lynn/94.html#18 part of the work was completely redoing the os/360 stage2 deck output from stage1 sysgen ... to carefully place the location of files and PDS members for optimized arm seek ... getting nearly three times thruput improvement in the univ. student workload. while at the univ., i would sometimes be asked by ibm about making some specific enhancements ... in retrospect, some of the enhancement requests may have originated from these customers ...
that i didn't learn about until many years later http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
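The arm-seek optimization described above (placing the most-used files and PDS members close together on the pack) can be illustrated with a toy seek-distance model. The cylinder counts and access pattern below are invented for illustration, not the original sysgen numbers:

```python
# Toy model: seek time grows with cylinder distance, so clustering hot
# data near one spot shortens the average arm travel for the same
# reference pattern. Numbers are invented for illustration.
import random

CYLINDERS = 200
random.seed(1968)

# the same hot-file reference pattern under two placements:
scattered = [random.randrange(CYLINDERS) for _ in range(1000)]  # anywhere on pack
clustered = [random.randrange(90, 110) for _ in range(1000)]    # near the middle

def total_seek(cyls):
    """Sum of arm movement (in cylinders) over the access sequence."""
    return sum(abs(b - a) for a, b in zip(cyls, cyls[1:]))

# clustering gives a several-fold reduction in total arm travel:
print(total_seek(scattered) // total_seek(clustered))
```

The real win reported above (nearly 3x workload throughput) is smaller than the raw seek-distance ratio because seeks are only part of the elapsed time per job, but the mechanism is the same.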
1979 SHARE LSRAD Report
I finally got approval from SHARE for making a scanned copy of the 1979 SHARE LSRAD Report available on bitsavers ... aka http://bitsavers.org/pdf/ibm/share/ I've forwarded the scanned copy along with the permission, hopefully it will be showing up shortly. Old reference with intro/ack ... post from when I first started trying to get permission. The issue is that copyright law changed at the first part of 1979 ... otherwise there would no longer be a copyright issue http://www.garlic.com/~lynn/2009.html#47 ... from LSRAD: Preface This is a report of the SHARE Large Systems Requirements for Application Development (LSRAD) task force. This report proposes an evolutionary plan for MVS and VM/370 that will lead to simpler, more efficient and more useable operating systems. The report is intended to address two audiences: the users of IBM's large operating systems and the developers of those systems. ... snip ... and Acknowledgements The LSRAD task force would like to thank our respective employers for the constant support they have given us in the form of resources and encouragement. We further thank the individuals, both within and outside SHARE Inc., who reviewed the various drafts of this report. We would like to acknowledge the contribution of the technical editors, Ruth Ashman, Jeanine Figur, and Ruth Oldfield, and also of the clerical assistants, Jane Lovelette and Barbara Simpson. Two computer systems proved invaluable for producing this report. Draft copies were edited on the Tymshare VM system. The final report was produced on the IBM Yorktown Heights experimental printer using the Yorktown Formatting Language under VM/CMS. ... snip ... -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Simple record extraction from a sequential file
elardus.engelbre...@sita.co.za (Elardus Engelbrecht) writes: Not everyone can program properly. Not everyone can program fast tight code specially optimised for that specific record layout and format and do it in Assembler. Those teenagers who can program in PL/I are very good, I admit, but what is the PL/I overhead? old email about a high-level POK executive giving a presentation that vm370 would no longer be available on high-end machines. HONE was the internal vm370-based online system that provided world-wide sales & marketing support; most of the applications were written in APL. The executive told them that they could migrate from vm370 to mvs if they would just rewrite all the apl applications in assembler (the overhead reduction going from apl to assembler would offset the enormous increase in overhead going from vm370 to mvs) http://www.garlic.com/~lynn/2007d.html#email790216 the followup was that the executive had apparently been using the wrong flipcharts for the presentation http://www.garlic.com/~lynn/2007d.html#email790220 misc. past posts mentioning HONE http://www.garlic.com/~lynn/subtopic.html#hone -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Data Areas?
l...@garlic.com (Lynn Wheeler) writes: the 23jun69 unbundling announcement started charging for application software, SE services, etc (the case was made that the kernel/operating system was still free). misc. past posts http://www.garlic.com/~lynn/submain.html#unbundle another result of the unbundling announcement was starting to charge for SE services. Previously SE training included apprentice activity as part of a large SE team onsite at the customer account. With charging for SE services, they couldn't figure out how to account for the apprentice SEs. This led to the creation of several virtual machine cp67 HONE datacenters around the country that would allow SEs in the branch offices to practice their operating system skills in virtual machines. The cambridge science center (responsible for virtual machines, cp40, cp67, GML, the internal network, and a bunch of other stuff) had also ported APL\360 to CMS for CMS\APL. There started to be a large number of online sales & marketing support CMS\APL applications deployed on HONE ... and eventually that came to dominate all HONE activity (with the guest operating system activity dwindling away). Recent FACEBOOK scam news on TV all seems to show the 1601 building sign. If you look at a satellite map of the address, the (older) building next door at 1501 was where the US HONE datacenters were consolidated in the mid-70s ... and I spent a large amount of time in that building (it has a different occupant now). This was possibly the largest cloud operation of the period ... and just like the modern day cloud mega-datacenters, clones of the HONE datacenter sprouted up all over the world. misc. past posts mentioning HONE http://www.garlic.com/~lynn/subtopic.html#hone One of my hobbies in the period was providing production operating systems to internal datacenters ... and HONE was a long-time customer. As part of several attempts to kill off first cp67 and later vm370 ... this is a reference claiming that there would be no more high-end vm370 ...
and HONE needed to convert to MVS ... which would be possible if they would just recode all their APL applications in assembler. http://www.garlic.com/~lynn/2007d.html#email790216 and http://www.garlic.com/~lynn/2007d.html#email790220 as mentioned in the above, after the senior IBM executive had made the presentation to HONE, the comments were retracted, saying he must have been using the wrong flip charts for the presentation. other old email mentioning HONE http://www.garlic.com/~lynn/lhwemail.html#hone -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Humour
george.mos...@icbc.com (Mosley, George) writes: Does anyone remember, and better still, have a copy of a humourous piece poking fun at IBM from years ago called (as I recall) The End of OS? previous postings http://www.garlic.com/~lynn/2003j.html#38 Virtual Cleaning Cartridge http://www.garlic.com/~lynn/2006u.html#52 Where can you get a Minor in Mainframe? ... small excerpt: The end finally came in mid-October. System Release 110.7 was distributed, which converted everyone to MPSS (Multiple Priority Scheduling System), which combined the following control program options: Multiprogramming with a Variable Number of Tasks Multijob Initiation Multiple Priority Selection Multiprocessing with a Variable Number of CPUs SYSGEN was accomplished with little difficulty in 504 system hours. Expectantly, customers IPLed and initiated their job streams. Nothing Happened Nothing. ... snip ... -- virtualization experience starting Jan1968, online at home since Mar1970 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Scanning JES3 JCL
re: http://www.garlic.com/~lynn/2011n.html#84 Scanning JES3 JCL i was brought into boeing hdqtrs in the summer of 1969 as part of helping get boeing computer services (BCS) up and running. they had a machine room at hdqtrs (boeing field) with a 360/30 for payroll and misc. other hdqtrs administration. It was built out to add a 1mbyte 360/67 for online (virtual machine) cp67/cms timesharing operation. The big datacenter was down at Renton field ... that summer 360/65s were arriving faster than they could be installed (there were constantly parts of 2-3 360/65s in the halls around the machine room) ... and they were starting to replicate it up at the 747 plant in Everett. later I would sponsor Col. Boyd's briefings at IBM. His biographies have him in charge of spook base about that time ... claimed to be a $2.5B windfall for IBM (possibly $17+B inflation adjusted). old spook base reference (gone 404 but lives on at the wayback machine) http://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html the picture in the above claimed to be of 2250s ... is obviously wrong. my wife did a stint in the G'burg JES group and was part of the catchers for the ASP-to-JES3 product. She then was one of the JESUS (JES Unified System) authors ... all the features of JES2 and JES3, that customers couldn't live w/o, combined in a single product. It never got past that stage because of the politics.
Re: Scanning JES3 JCL
shmuel+ibm-m...@patriot.net (Shmuel Metz , Seymour J.) writes: 384KiB? We ran PCP on 128 and MFT II on 256. I know of places that ran on 64. i started out on a 64kbyte 360/30 running PCP (i think it was around release 6). I had a student job to port 1401 MPIO (the tape/unit-record front-end for the 709) to the 360/30 (the 360/30 had 1401 hardware emulation, so MPIO could run directly ... so I guess it was just part of the exercise of transitioning to 360). It was eventually 2000 assembler statements (cards, i.e. a box) ... it had conditional assembly; one version was stand-alone (i got to design and implement my own interrupt handlers, device drivers, error recovery, storage management, dispatcher, etc) and the other ran with six DCBs under os/360. The stand-alone version took approx. 30mins elapsed time to assemble; the os/360 version (same 2000 cards, just a change in the conditional assembly) took another 30mins elapsed (60mins total) to assemble ... you could watch the lights on the 360/30 and recognize when it hit a DCB macro, which took about five mins elapsed time each.
Re: Maintenance at two in the afternoon? On a Friday?
ps2...@yahoo.com (Ed Gould) writes: John, Way back in the '70's I used to work on an online savings system. At that time all banks were closed on weekends. It was great as we had test time aplenty. The time crunch we ran into was that every quarter we had to calculate interest before 8 AM. We had zero allowance for problems. From close of business till 8 AM it was intense. re: http://www.garlic.com/~lynn/2011n.html#64 Maintenance at two in the afternoon? On a Friday? most card processing backends are frozen this time of year until possibly mid-january (transaction rates ramp up until x-mas and then there are some returns)
Re: CRLF in Unix being translated on Mainframe to x'25'
joa...@swbell.net (John McKown) writes: Depends on the printer. 0x0A on many DecWriters did both a CR and an LF function. That's why UNIX defaulted that way, from what I was told. No need to do any character translation or additions if you just did a cp to the device. Of course, Windows via MS-DOS via CP/M-80 used CRLF for the same reason. The PC printers of the day required a separate LF and CR to go to the beginning of the next line. And the CR was done first so that the mechanical time to return the head was taken up by rolling the platen to the next line, since the CR function took a significant amount of time compared to the LF or printing a simple character. Again, as I was told. re: http://www.garlic.com/~lynn/2011n.html#45 CRLF in Unix being translated on Mainframe to x'25' another recent post about adding tty/ascii terminal support to cp67 (which already had 2741 and 1052 support) http://www.garlic.com/~lynn/2011n.html#70 1979 SHARE LSRAD Report one of the things done in the terminal support was that the line was padded with idle characters after a CR ... a formula calculated, from how many characters had been printed in the line, how fast the carriage/typehead returned, and how fast characters were transmitted ... enough idle characters to allow the carriage/typehead to have returned before printing of the next line started. for other trivia ... this is an old item about the name cp/m being derived from (ibm mainframe virtual machine) cp/67 ... kildall (author of cp/m) having used cp/67 at the navy post graduate school in 1972 ... gone 404, but lives on at the wayback machine http://web.archive.org/web/20071011100440/http://www.khet.net/gmc/docs/museum/en_cpmName.html
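the padding calculation described above can be sketched as follows (a minimal sketch, not the actual cp67 code; the carriage-return speed and line rate in the example are illustrative assumptions):

```python
import math

def cr_pad_count(chars_printed, return_cols_per_sec, char_rate_cps):
    """Idle (pad) characters to send after a CR so the typehead is home
    before the next printable character arrives.

    chars_printed: column the carriage must travel back from
    return_cols_per_sec: assumed carriage-return speed (columns/second)
    char_rate_cps: line speed in characters/second
    """
    return_time = chars_printed / return_cols_per_sec
    # each transmitted character occupies 1/char_rate_cps seconds of line
    # time, so send enough idle characters to cover the return time
    return math.ceil(return_time * char_rate_cps)

# e.g. returning from column 60 at an assumed 100 cols/sec on a ~15 cps line
print(cr_pad_count(60, 100, 15))
```

a longer printed line or slower carriage needs more pad characters; a faster line speed also needs more, since more characters would otherwise arrive while the typehead is still in flight.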
Re: Maintenance at two in the afternoon? On a Friday?
st...@trainersfriend.com (Steve Comstock) writes: Well, I just tried to do some online credit card account maintenance with my Capital One card, and got the message 'System Unavailable'. I called tech support and they said they were doing maintenance on the system. Regular weekend maintenance. At 2:00 on a Friday afternoon? Does anyone know if they are using mainframes for their online / web based work? Sheesh! Someone should teach them they can use mainframes and do maintenance while the system keeps running! we were doing (IBM's) HA/CMP ... this is an old post about an early jan92 ha/cmp cluster scaleup meeting in Ellison's conference room http://www.garlic.com/~lynn/95.html#13 within a few weeks, the cluster scaleup was transferred, announced as an ibm supercomputer, and we were told we couldn't work on anything with more than four computers ... prompting us to leave a few months later. some old ha/cmp cluster scaleup email http://www.garlic.com/~lynn/lhwemail.html#medusa two of the other people at the Ellison meeting also left and joined a small client/server startup responsible for something called the commerce server. we got brought in as consultants because they wanted to do payment transactions on the server; the startup had also invented this technology called SSL they wanted to use; the result is now frequently called electronic commerce. Part of the effort was figuring out how to use SSL for the browser/webserver payments (we also had to audit the companies selling SSL domain name digital certificates) as well as for transactions between webservers and the payment gateway (which sits between the internet and the payment networks). For no-single-point-of-failure, the payment gateway had multiple connections into the internet, and the webservers (talking to the payment gateway) had to support multiple DNS A-records (translating a domain name to multiple different ip-addresses). However, I didn't have final sign-off for the browser support ... 
and could only recommend that they implement multiple A-record support. They said it was too complex. I gave tutorials, they said it was too complex. I provided them example client code from 4.3Tahoe, they said it was too complex (it was more than a year later before they added multiple A-record support). An early commerce server was a major sporting goods operation that was doing national football tv advertisements on sundays ... and was expecting a big upswing in traffic during half-time. This was when major ISPs still scheduled maintenance on Sundays. Even though their server had multiple connections to different parts of the internet ... if the ISP router for the first IP-address in the DNS record was down for maintenance ... it would effectively take the webserver off the air. in any case, there can be dozens of components between a browser and the backend processor that holds the account record. backend systems holding account records still are typically mainframes ... but they can have all sorts of non-mainframe intermediate components between the backend mainframe and any internet webafied interface. the configurations I would put together were no-single-point-of-failure ... even for pure web ... but others may not have been so careful. While still at IBM doing HA/CMP ... I had also coined the marketing terms disaster survivability (to differentiate from disaster/recovery) and geographic survivability. They then asked me to do a section for the corporation's continuous availability strategy document ... but it got pulled when both Rochester (as/400) and POK (mainframe) complained they couldn't meet the requirements. misc. past posts mentioning availability http://www.garlic.com/~lynn/submain.html#available go out and try a point-of-sale transaction on the card ... it will typically go through components that are frequently pure legacy (although an increasing percentage are transitioning to the internet for point-of-sale ... even with the backend still a mainframe). 
off-peak for many web components also tends to be different than for the backend ... with web use spiking during non-normal working hrs (weekends and off-shift, when people are mostly not at work).
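the multiple A-record fallback recommended above can be sketched with the modern stdlib equivalent (a sketch, not the 4.3Tahoe code; the host and port in any call are the caller's assumptions):

```python
import socket

def connect_any(host, port, timeout=5.0):
    """Try every address the name resolves to, not just the first A-record."""
    last_err = None
    # getaddrinfo returns all the A (and AAAA) records for the name
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        s = socket.socket(family, socktype, proto)
        s.settimeout(timeout)
        try:
            s.connect(addr)          # first reachable address wins
            return s
        except OSError as err:
            last_err = err           # that path is down ... try the next record
            s.close()
    raise last_err or OSError("no usable address for %s" % host)
```

a client that only ever uses the first address goes "off the air" exactly as described when that one router is down for maintenance; iterating the full getaddrinfo list is the whole trick.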
Re: John McCarthy 1927-2011
johnwgilmore0...@gmail.com (John Gilmore) writes: sit tibi terra levis, John. LISP and the world view it embodies will, I suppose, be his monument; but he changed everything he touched. The very full obituary in today's New York Times ends by citing one of his favorite apothegms: Do the arithmetic or be doomed to talk nonsense. some mainframe connection ... I worked with his wife on System/R; she is discussed here: http://www.mcjones.org/System_R/SQL_Reunion_95/sqlr95-Vera.html longer-winded thread/posts http://www.garlic.com/~lynn/2011n.html#40 John McCarthy http://www.garlic.com/~lynn/2011n.html#44 John McCarthy other posts mentioning the original relational/sql implementation http://www.garlic.com/~lynn/submain.html#systemr
Re: CRLF in Unix being translated on Mainframe to x'25'
glen herrmannsfeldt g...@ugcs.caltech.edu writes: Note, for example, the IBM 2741 does not use EBCDIC, but its own code, with its own control characters. but all the 2741 characters were defined in EBCDIC. when cp67 was installed at the univ in jan68 ... it had 2741 and 1052 terminal support ... but the univ. had some number of tty/ascii terminals. I had to add tty/ascii support ... and there were some number of chars in tty that weren't in ebcdic and vice versa ... which resulted in translation issues. cp67 also had automatic terminal identification for 1052/2741 ... so i added tty support in a way that preserved automatic terminal identification for all three terminal types. I then wanted to have a single dialup number with a common hunt group (pool) for all three terminal types. the problem was that the ibm terminal controller allowed changing the line-scanner for each port ... but hardwired the line-speed. this somewhat motivated the univ to start a clone controller effort: reverse engineer the channel interface and build a controller interface card for an interdata/3 (mini-computer) ... programmed to simulate a mainframe terminal controller ... supporting both dynamic terminal type and dynamic line-speed. later, four of us got written up for (some part of) the clone controller business (interdata picked up support and marketed it, continuing after interdata was purchased by perkin-elmer). misc. past posts mentioning the clone controller business http://www.garlic.com/~lynn/subtopic.html#360pcm the clone controller business was then a major motivation for the future system activity ... misc. past posts http://www.garlic.com/~lynn/submain.html#futuresys tale about how 360 was originally going to be ascii ... but learson made one of the biggest mistakes of 360: http://www.bobbemer.com/P-BIT.HTM i had a 2741 at home from mar70 until 1977 ... when it was replaced with a 300baud cdi miniterm ... 
pictures of old 2741 apl typeball http://www.garlic.com/~lynn/aplball.jpg http://www.garlic.com/~lynn/aplball2.jpg 2741 wiki http://en.wikipedia.org/wiki/IBM_2741 from above ... the 2741 code controlled tilt/rotate of the typeball (selecting the characters on the surface of the typeball). so dynamic terminal type for 2741 ... differentiated 2741 from 1052 (and i added tty/ascii) ... selecting the corresponding controller line-scanner. then the software used the default 2741 translate table on initial login ... assuming the first letter was l ... however, if the first letter was y ... it reverse-translated and retranslated with the alternate translate table ... and again checked for l (actually checking for both uppercase and lowercase letters).
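the login-time table sensing described above can be modeled in a few lines (a toy sketch: the byte values and tables here are made up, not real 2741 correspondence codes):

```python
# default and alternate translate tables (hypothetical byte values)
STD_TABLE = {0x1C: "l", 0x2A: "y"}
ALT_TABLE = {0x1C: "y", 0x2A: "l"}

def pick_table(first_byte):
    """Decode the first login character with the default table; if it reads
    as 'y', the terminal is using the alternate code, so switch tables."""
    ch = STD_TABLE.get(first_byte, "?").lower()
    if ch == "l":        # 'l' of "login" decoded correctly: default table
        return STD_TABLE
    if ch == "y":        # wrong table: retranslate with the alternate one
        return ALT_TABLE
    raise ValueError("unrecognized first character")
```

the trick works because the two codes map the trial byte to different letters, so one probe character is enough to tell the tables apart.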
Re: Chaos feared after UNIX time-zone database if nuked
efinnel...@aol.com (Ed Finnell) writes: Thanks for getting us back on track. We used to drift to old hardware and microfiche. Now we drift to polymorphism...sign of the times re: http://www.garlic.com/~lynn/2011n.html#12 Chaos feared after UNIX time-zone database if nuked for the fun of it, from (linkedin) Mainframe Experts thread Has anyone successfully migrated off mainframes: http://lnkd.in/2syFGU
Re: Chaos feared after UNIX time-zone database if nuked
a couple references (internet time zone database) ICANN rescues time zone database http://www.theregister.co.uk/2011/10/16/icann_rescues_time_zone_database/ http://lxnews.org/2011/10/17/icann-taking-over-olson-db/ http://news.softpedia.com/news/ICANN-Takes-Over-Time-Zone-Database-Crucial-to-the-Internet-After-Copyright-Lawsuit-228060.shtml http://www.circleid.com/posts/20111014_icann_to_manage_internet_time_zone_database/ ICANN reference http://www.icann.org/ Wiki overview http://en.wikipedia.org/wiki/ICANN for some drift, the RFC Editor function (publishes internet standards) has also been at USC ISI (picture in above). http://en.wikipedia.org/wiki/Internet_Engineering_Task_Force For a long time it was Postel http://www.postel.org/postel.html Jon used to let me do part of (IETF) STD1, and periodically I would go by USC ISI to visit him
Re: Transitioning Highly Available Applications to System z
we had started ha/6000 in the 80s ... and then I coined the marketing term HA/CMP to also capture the work on cluster scaleup (both commercial and numerical-intensive) ... more recently renamed PowerHA http://www-03.ibm.com/systems/power/software/availability/aix/index.html under my earlier name (high availability - cluster multi-processor) http://www.redbooks.ibm.com/abstracts/sg246375.html various old email about the cluster scaleup part http://www.garlic.com/~lynn/lhwemail.html#medusa this references a Jan92 meeting in Ellison's conference room regarding the commercial scaleup part http://www.garlic.com/~lynn/95.html#13 less than a month later, the cluster scaleup was transferred and we were told we couldn't work on anything with more than four processors. A couple weeks later it was announced as a supercomputer for numerical-intensive only. before that had happened, I had been asked to write a section for the corporate continuous availability strategy document ... but then it was pulled when both Rochester and POK complained that they couldn't meet the objectives. misc. past posts mentioning ha/cmp http://www.garlic.com/~lynn/subtopic.html#hacmp
Re: JCL CROSS-REFERENCE Utilities (OT for Paul, Rick, and Shmuel)
jrobe...@dhs.state.ia.us (Roberts, John J) writes: I'm surprised the old-timers didn't comment on my mention of APL. This was the original write-only language - maintenance was only possible by the original author. It was very heavily touted by IBM in the early 70's. somewhat because of litigation, the 23jun69 unbundling announcement started charging for application software (but the case was made that kernel software would still be free), se services, maintenance, etc. up until then a lot of se training was journeyman/apprentice work as part of large groups of SEs at the customer site. after 23jun69, nobody could figure out how to have all that SE training using customer resources w/o charging the customer for it. to address the issue, several cp67 virtual machine datacenters were created to give branch office SEs the ability to login remotely and practice with guest operating systems in virtual machines. This was the HONE system ... some past posts: http://www.garlic.com/~lynn/subtopic.html#hone and some old email http://www.garlic.com/~lynn/lhwemail.html#hone the science center ... some past posts http://www.garlic.com/~lynn/subtopic.html#545tech besides virtual machines, the internal network, and a bunch of other stuff, the science center also ported apl\360 to cms for cms\apl. cms\apl workspaces were now as large as the virtual address space (which required rewriting how apl managed its workspace allocation), as compared to the common 16k or 32kbytes in apl\360 ... also an API was added to cms\apl that allowed invoking cms system services like file i/o. The combination allowed cms\apl to be used for real-world applications ... for instance the business planners in armonk loaded the most valuable of corporate information (detailed customer data) on the cambridge system and implemented business models in cms\apl. This raised some security issues, since the cambridge cp67/cms system was also used by some number of non-employees from various educational institutions in the boston/cambridge area. 
HONE also started offering sales & marketing applications implemented in cms\apl. Eventually the sales & marketing use began to dominate all HONE use and the virtual guest use died off. By the mid-70s *ALL* mainframe orders had to be first processed by HONE aids/configurators ... all implemented in APL (and HONE virtual machine clones started to sprout up all over the world). HONE was part of the sales & marketing organization, and periodically some branch manager would be promoted into an executive position that included responsibility for HONE ... and they would find to their horror that the company (especially sales & marketing) ran on vm370 (not *MVS*). They would come to believe that their career in the corporation would be made if they could convert HONE to MVS. A huge amount of resources would go into an MVS migration attempt and eventually fail ... then there would eventually be an executive shuffle and the whole thing would be forgotten until the next new executive. Recent (linkedin) discussion about several features implemented for the HONE vm370 operation in the late 70s that are finally in the process of being included in zVM ... aka from the annals of release no software before its time: http://www.garlic.com/~lynn/2011m.html#46 From The Annals of Release No Software Before Its Time http://www.garlic.com/~lynn/2011m.html#47 From The Annals of Release No Software Before Its Time http://www.garlic.com/~lynn/2011m.html#59 From The Annals of Release No Software Before Its Time
Re: JCL CROSS-REFERENCE Utilities (OT for Paul, Rick, and Shmuel)
ps2...@yahoo.com (Ed Gould) writes: My memory sort of agrees with the above and I will accept your memory. We used to have a full time SE from sometime in the 196x's to the mid-late 1970's. My recollection from talking with him was that HONE was used for all configuration(s). Was that not the case? I still remember (albeit vaguely) looking at some output (paper) from a hone session and being asked about memory and the like. re: http://www.garlic.com/~lynn/2011m.html#61 JCL CROSS-REFERENCE Utilities (OT for Paul, Rick, and Shmuel) the original apl\360 would allocate the next unused storage on every assignment; when it exhausted all of (workspace) storage ... it would garbage collect and coalesce all allocated variables to the bottom of the workspace. this resulted in apl\360 repeatedly using every storage location in the workspace ... even for small problems ... as long as there were assignments (it didn't re-use previously allocated storage for a variable). for small workspaces (16k or 32k bytes) ... that were completely swapped ... it didn't really matter. moving to cms\apl, with demand-paged virtual workspaces of hundreds of kbytes or multiple megabytes ... constantly touching every possible workspace location led to page thrashing. one of the first things that needed to be redone for cms\apl was how apl managed its workspace storage. The HONE APL executable image was shared across all cms virtual machines ... reducing the aggregate real storage footprint. Later, work was done to include significant pieces of APL workspaces/programs in shared segments ... further reducing the real storage footprint. APL is an interpreted language ... even after lots of work to optimize virtual paging and aggregate real storage footprint ... APL remained computationally intensive. That contributed to HONE having a growing number of high-end multiprocessors in a loosely-coupled, single-system-image configuration ... 
which had a front-end process that did load-balancing at logon (slightly analogous to web search engines spreading load across available systems). Many sales & marketing people spent their entire time in a session manager implemented in APL (and automatically invoked at login) called SEQUOIA ... and were never or rarely directly exposed to vm370/cms. Eventually, some of the most heavily used, most compute-intensive configurators were recoded in FORTRAN, and a process was created that allowed APL to invoke FORTRAN programs as sub-programs ... which could achieve a factor of 100 reduction in processor use. There was some growing/emerging native CMS use for writing (customer) proposals, RFP responses and other document preparation ... as well as growing use of email (like PROFS). recent (linkedin) discussion about PROFS (and the internal network) http://www.garlic.com/~lynn/2011m.html#60
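the allocate-on-every-assignment behavior, and why it thrashed under demand paging, can be illustrated with a toy model (cell counts are illustrative; this is not the apl\360 allocator itself):

```python
def cells_touched(workspace_cells, assignments, reuse_in_place):
    """Count distinct workspace cells written by repeated assignment to
    one variable, under two allocation policies."""
    touched = set()
    nxt = 0
    for _ in range(assignments):
        if reuse_in_place:
            touched.add(0)               # updated variable keeps its cell
        else:
            if nxt == workspace_cells:   # workspace exhausted: garbage
                nxt = 1                  # collect, coalesce to the bottom
            touched.add(nxt)             # take the next unused cell
            nxt += 1
    return len(touched)

# one variable reassigned 10,000 times in a 4,096-cell workspace:
# allocate-next sweeps every cell, while reuse-in-place touches one
print(cells_touched(4096, 10000, reuse_in_place=False))
print(cells_touched(4096, 10000, reuse_in_place=True))
```

swept sequentially like this, a multi-megabyte demand-paged workspace faults in every page even for a trivial loop ... which is exactly the behavior that had to be redone for cms\apl.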
Re: JCL CROSS-REFERENCE Utilities (OT for Paul, Rick, and Shmuel)
l...@garlic.com (Anne Lynn Wheeler) writes: APL is an interpreted language ... even after lots of work to optimize virtual paging and aggregate real storage footprint ... APL remained computationally intensive. That contributed to HONE having a growing number of high-end multiprocessors in a loosely-coupled, single-system-image configuration ... which had a front-end process that did load-balancing at logon (slightly analogous to web search engines spreading load across available systems). re: http://www.garlic.com/~lynn/2011m.html#61 JCL CROSS-REFERENCE Utilities (OT for Paul, Rick, and Shmuel) http://www.garlic.com/~lynn/2011m.html#62 JCL CROSS-REFERENCE Utilities (OT for Paul, Rick, and Shmuel) at the science center, there was a lot of performance algorithm, tuning, monitoring, simulation and modeling work ... some of it eventually evolving into things like capacity planning. one of the efforts was a system performance analytical model implemented in APL. A version of this was modified for HONE; it was fed system activity from all the loosely-coupled systems and used to decide which machine each login should be directed to. a different variation was made available on HONE as the performance predictor: branch people could gather customer workload and system characteristics, input them into the performance predictor, and ask what-if questions ... like what would happen in the case of customer workload and/or system configuration changes.
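a minimal sketch of the front-end routing idea (the "model" here is just least-activity-wins as a stand-in; the real HONE front end fed system activity into the APL analytical model to make the choice, and the system names below are made up):

```python
def route_login(system_activity):
    """system_activity: dict of system name -> current load metric
    (lower is better). Return the system the login should be sent to."""
    return min(system_activity, key=system_activity.get)

print(route_login({"HONE1": 87, "HONE2": 52, "HONE3": 90}))
```

the interesting part in the real thing was the metric: an analytical performance model predicting how each system would respond with one more user, rather than a raw activity count.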
Re: JCL CROSS-REFERENCE Utilities (OT for Paul, Rick, and Shmuel)
glen herrmannsfeldt g...@ugcs.caltech.edu writes: I once had PL/I (F) running on an AT/370, about 5 minutes to compile a five line program. re: http://www.garlic.com/~lynn/2011m.html#61 JCL CROSS-REFERENCE Utilities (OT for Paul, Rick, and Shmuel) http://www.garlic.com/~lynn/2011m.html#62 JCL CROSS-REFERENCE Utilities (OT for Paul, Rick, and Shmuel) http://www.garlic.com/~lynn/2011m.html#63 JCL CROSS-REFERENCE Utilities (OT for Paul, Rick, and Shmuel) the xt/370 and at/370 were a motorola 68k fiddled to execute 370 instructions at about 100kips. the original cardset had 384k bytes of 370 memory ... which ran a specially modified vm370 kernel plus the real storage for demand paging of cms. the 370 card had no i/o ... vm370 was modified to send messages to cp88 running on the intel chip ... all i/o was done by cp88 on the pc side ... with transfers between vm370 and cp88. it started out code-named washington, and the pc/xt had a 100ms-per-transfer disk ... each page transfer or cms file record transfer took 100ms on the pc/xt hard disk (the maximum aggregate rate would be less than 10 transfers per second with vm370/cp88 overhead and transfer latency). After the fixed vm370 kernel ... there was very little left of the 384k bytes for paging cms and application virtual memory ... resulting in page thrashing. I benchmarked and showed significant page thrashing even for really trivial operations. I got blamed for a six month schedule slip in washington while they re-engineered the cards to add another 128kbytes of 370 memory (increasing from 384kbytes to 512kbytes) to minimize virtual memory page thrashing. the pli compiler would have significant virtual memory thrashing (even with the additional 128kbytes of real storage ... i don't remember exactly now ... but the vm370 fixed kernel storage was possibly something like 150kbytes ... with 512kbytes of real storage ... that would leave approx. 350kbytes of real storage for the cms virtual memory ... 
the cms kernel, cms system services ... and the pli compiler and data areas). the upgrade from xt/370 to at/370 meant that cp88 ran on a somewhat faster processor and the hard disks were faster ... 5 minutes is 300 seconds ... say, if you are lucky, maybe 15 disk record transfers per second ... 4500 disk record transfers ... which covers page thrashing, loading the pli compiler for execution, and all other file i/o. A big issue was that cms was relatively bloated in its use of file i/o ... and all of the cms compilers were brought over from mvs using simulation of system services ... which were really bloated in terms of use of file i/o. remapping that environment to a PC ... using PC disks instead of mainframe disks ... was quite traumatic compared to similar applications developed specifically for the pc environment. Even running applications that fit in the available 370 real storage (and didn't page thrash) ... the 100kip 370 processor wasn't usually the bottleneck ... it was the enormous difference between the throughput of mainframe disks and pc disks. Of course that wouldn't be a problem these days, because both PCs and mainframes use the same disk technology ... PCs using native disk technology and many mainframes using emulated CKD on top of native disks (there haven't been real ckd disks for decades). I did some prototype work for washington with my cms page-mapped filesystem ... on a mainframe with 3380 i could get possibly three times improved (300%) throughput compared to the standard cms filesystem for applications that did moderate amounts of file i/o ... but still couldn't achieve the look&feel of cms with real mainframe disks. misc. past posts mentioning the memory-mapped filesystem for cms ... originally done for cp67 and then ported to vm370 http://www.garlic.com/~lynn/submain.html#mmap the follow-on to the at/370 was the a74 ... a separate box with 4mbytes of 370 real storage and a 350kip processor. 
old long-winded post that includes a copy of the A74 product description at the bottom http://www.garlic.com/~lynn/2002d.html#4
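the back-of-envelope arithmetic above can be written out directly (numbers taken from the post itself: a 5-minute compile at maybe 15 record transfers per second):

```python
seconds = 5 * 60                     # "5 minutes is 300 seconds"
transfers_per_sec = 15               # "if you are lucky maybe 15 ... per second"
print(seconds * transfers_per_sec)   # total disk record transfers: 4500
```

in other words, the entire 5-minute compile is plausibly accounted for by disk record transfers alone, supporting the point that the disks, not the 100kip processor, were the bottleneck.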
Re: CMS load module format
shmuel+ibm-m...@patriot.net (Shmuel Metz , Seymour J.) writes: That worked on more than the 3270 family; it also worked on the console[1] of the 360/168. [1] Compatible with nonthing except the consoles of the 360/85 and 370/165. re: http://www.garlic.com/~lynn/2011m.html#30 CMS load module format http://www.garlic.com/~lynn/2011m.html#34 CMS load module format http://www.garlic.com/~lynn/2011m.html#36 CMS load module format http://www.garlic.com/~lynn/2011m.html#41 CMS load module format http://www.garlic.com/~lynn/2011m.html#42 CMS load module format http://www.garlic.com/~lynn/2011m.html#44 CMS load module format I had done something similar as undergraduate in the 60s with a 2250m1 vector graphics (aka channel attached) for cp67/cms. Lincoln Labs had done a fortran subroutine 2250 driver library for cms ... and I borrowed their code for the editor. this is 2250m4 (i.e. 2250m1 was 360 channel attached with controller, for about the same price as 2250m1, you could get a 2250m4 which came with a 1130 in the package ... in place of the controller box) http://www.columbia.edu/cu/computinghistory/2250.html another image of 2250 http://www.columbia.edu/cu/computinghistory/2250-ad.gif 360/91 had 2250 as operators terminal http://www.columbia.edu/cu/computinghistory/36091.html for other drift, cambridge science center (did virtual machines cp/40, cp/67, lots of online applications, invented GML ... which later morphs into SGML HTML ... early performance work that turns into capacity planning, bunch of other stuff) http://www.garlic.com/~lynn/subtopic.html#545tech had 2250m4 (aka w/1130) and there was version of spacewars implemented http://en.wikipedia.org/wiki/Edson_Hendricks misc. past posts mentioning having modified cp67/cms editor to drive 2250-1 vector graphics: http://www.garlic.com/~lynn/99.html#41 A word processor from 1960 http://www.garlic.com/~lynn/2001m.html#22 When did full-screen come to VM/370? 
http://www.garlic.com/~lynn/2002i.html#20 6600 Console was Re: CDC6600 - just how powerful a machine was
http://www.garlic.com/~lynn/2002j.html#22 Computer Terminal Design Over the Years
http://www.garlic.com/~lynn/2002o.html#73 They Got Mail: Not-So-Fond Farewells
http://www.garlic.com/~lynn/2005e.html#64 Graphics on the IBM 2260?
http://www.garlic.com/~lynn/2005k.html#22 Where should the type information be?
http://www.garlic.com/~lynn/2005n.html#45 Anyone know whether VM/370 EDGAR is still available anywhere?
http://www.garlic.com/~lynn/2006e.html#28 MCTS
http://www.garlic.com/~lynn/2008r.html#62 PC premiered 40 years ago to awed crowd
http://www.garlic.com/~lynn/2009s.html#0 tty
http://www.garlic.com/~lynn/2010g.html#13 An Interview with Watts Humphrey, Part 6: The IBM 360
http://www.garlic.com/~lynn/2010g.html#57 An Interview with Watts Humphrey, Part 6: The IBM 360
http://www.garlic.com/~lynn/2010j.html#11 Information on obscure text editors wanted
http://www.garlic.com/~lynn/2011g.html#45 My first mainframe experience
http://www.garlic.com/~lynn/2011j.html#4 Announcement of the disk drive (1956)
Re: CMS load module format
shmuel+ibm-m...@patriot.net (Shmuel Metz , Seymour J.) writes: Yes, but could you enter macro invocations in the prefix area, or only predefined line commands? XEDIT had prefix macros and a SET PENDING command so that a prefix macro could insert macro invocations into the prefix areas, to be acted on at the next ENTER. Did EDGAR have an equivalent? I hardly used EDGAR at all, using RED & NED (for files larger than virtual memory size) ... so don't really know. Something in the (edgar) SOS description talks about pushing keystrokes for later invocation ... but I never got that intimate with EDGAR (whether or not that would result in something similar) past posts:
http://www.garlic.com/~lynn/2011m.html#30 CMS load module format
http://www.garlic.com/~lynn/2011m.html#34 CMS load module format
http://www.garlic.com/~lynn/2011m.html#36 CMS load module format
http://www.garlic.com/~lynn/2011m.html#41 CMS load module format
http://www.garlic.com/~lynn/2011m.html#42 CMS load module format
http://www.garlic.com/~lynn/2011m.html#44 CMS load module format
http://www.garlic.com/~lynn/2011m.html#49 CMS load module format
Re: CMS load module format
shmuel+ibm-m...@patriot.net (Shmuel Metz , Seymour J.) writes: ITYM decent. Did EDGAR have prefix macros like XEDIT had? xedit wiki
http://en.wikipedia.org/wiki/XEDIT
what i remember was that in the above typical screen layout, the prefix area was a standard EDGAR feature and there was an xedit macro that would set up the edgar look & feel (although i have some vague recollection of using the edgar prefix area on the right rather than the left). I found this (posted 20Nov89 from somebody at EARN/CEARN)
http://vm.marist.edu/~vmshare/browse?fn=XEDITTXT&ft=MEMO
from above: No special XEDIT setting other than my standard EDGAR (prefix on the right so I can use the next line key, no scale/tabline nonsense, nulls on, stay on, wrap on, case ignore, in other words, the opposite of most XEDIT default settings :-) ). ... snip ... old post
http://www.garlic.com/~lynn/2003e.html#23
references: Historical Manuals CMS Reference (feb/mar 1984)
http://ukcc.uky.edu/ukccinfo/391/cmsref.html
from above: XEDIT no longer supports EDGAR simulation mode, and the EDGAR and ECOMMAND commands are no longer available. ... snip ... i.e. dropped support for EDGAR macro (ECOMMAND) syntax. from long ago and far away: Date: 06/29/81 21:40:38 To: wheeler From: somebody, australia Re: RED - XEDIT .. I haven't had too much trouble migrating to XEDIT. My EDGAR stuff was easy. The NED stuff not so easy, but not too hard (being restricted to getting the whole file in-storage can be quite a restriction), but there's not much one can do about the lack of RED's pattern-matching facilities .. that's a REAL pain! Re: PARASITE .. on the weekend I had a chance to try the newest version you sent me during (yet another) VM/SP test time .. I still have the same problems with having to hit ENTER twice to get any action, and with it dozing off in the middle of a lot of line-by-line output. It seems to be not getting (or handling) the interrupts from CP. Didn't someone else report a similar problem?
Any thoughts/comments on my input re CJNTEL and PARASITE last time (re-sending after this VMSG)? Regards, ... snip ... ned was one of the editors mentioned in previous post
http://www.garlic.com/~lynn/2011m.html#41
that includes an excerpt from an editor comparison from old email
http://www.garlic.com/~lynn/2006u.html#email790606
in this post
http://www.garlic.com/~lynn/2006u.html#26
NED had the most compute overhead of all the editors ... but NED also included the ability to edit files larger than would fit in virtual memory (as referenced in above). PARASITE was a small CMS terminal emulator application using the VM logical device extensions (used by PVM). It had a companion routine STORY that was a terminal scripting application (both would run in the CMS transient area). old post with PARASITE/STORY references
http://www.garlic.com/~lynn/2001k.html#35
followup post contains STORY for automatically logging into RETAIN and retrieving PUT bucket
http://www.garlic.com/~lynn/2001k.html#36
CJNTEL was an internal network online facility that allowed remote query of name/phone for increasing parts of the corporation (had access to online internal telephone books). some old email
http://www.garlic.com/~lynn/lhwemail.html#cjntel
VMSG was an internal email client. A very early 0.x VMSG source version was picked up by the PROFS group and used for their email client. When the VMSG author contacted the PROFS group and offered them a much more complete 1.0 source, the PROFS group attempted to get him fired (denying that they were using VMSG). The whole thing quieted down after the VMSG author pointed out that every PROFS note in the world had his initials in a non-displayed field. After that the source was restricted to two of us (besides the VMSG author). misc. past posts mentioning VMSG:
http://www.garlic.com/~lynn/99.html#35 why is there an @ key?
http://www.garlic.com/~lynn/2000c.html#46 Does the word mainframe still have a meaning?
http://www.garlic.com/~lynn/2001k.html#35 Newbie TOPS-10 7.03 question
http://www.garlic.com/~lynn/2001k.html#39 Newbie TOPS-10 7.03 question
http://www.garlic.com/~lynn/2001k.html#40 Newbie TOPS-10 7.03 question
http://www.garlic.com/~lynn/2002f.html#14 Mail system scalability (Was: Re: Itanium troubles)
http://www.garlic.com/~lynn/2002h.html#58 history of CMS
http://www.garlic.com/~lynn/2002j.html#4 HONE, , misc
http://www.garlic.com/~lynn/2002p.html#34 VSE (Was: Re: Refusal to change was Re: LE and COBOL)
http://www.garlic.com/~lynn/2004p.html#13 Mainframe Virus
http://www.garlic.com/~lynn/2005t.html#43 FULIST
http://www.garlic.com/~lynn/2005t.html#44 FULIST
http://www.garlic.com/~lynn/2005u.html#4 Fast action games on System/360+?
http://www.garlic.com/~lynn/2006n.html#23 sorting was: The System/360 Model 20 Wasn't As Bad As All That
http://www.garlic.com/~lynn/2006t.html#42 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2007d.html#17 Jim Gray Is Missing
http://www.garlic.com/~lynn/2007f.html#13 Why is
Re: CMS load module format
li...@akphs.com (Phil Smith III) writes: Yes, Edgar was an add-on product. It was somewhat similar to XEDIT in a lot of ways. There were apparently a number of full-screen CMS editors inside IBM, but XEDIT is the one that got picked for VM/SP. x-over from z/vm mailing list:
http://www.garlic.com/~lynn/2011m.html#36 CMS load module format
the original cp67/cms edit worked somewhat like unix sed ... it read the original file as a stream, outputting to a temp/work file, and then would ping back & forth between two temp/work files before replacing the original file. a new editor was created that ran out of cms (virtual) memory ... and the previous edit was renamed cedit (the new edit could only handle files that would fit in virtual memory ... but cedit could edit arbitrarily large files ... larger than available memory). with the move to vm370/cms and 3270 ... the standard cms editor was updated to display a fullscreen of the file ... but retained command line operation. EDGAR added fullscreen editing ... i.e. changes could be made directly to data displayed on screen ... as well as other commands on each line. By the time of xedit, there were quite a few internal full-screen editors that were quite robust and had a large number of functions. I had gotten involved in trying to justify one of these others as an alternative to xedit ... old post in ibm-main
http://www.garlic.com/~lynn/2006u.html#26
that includes these old emails
http://www.garlic.com/~lynn/2006u.html#email781103
http://www.garlic.com/~lynn/2006u.html#email790606
http://www.garlic.com/~lynn/2006u.html#email800311
http://www.garlic.com/~lynn/2006u.html#email800312
http://www.garlic.com/~lynn/2006u.html#email800429
http://www.garlic.com/~lynn/2006u.html#email800501
In one case, there was a comment from the Endicott edit release group that it was the fault of the author of one of these other editors that it was more robust and had more function than xedit ...
and therefore it should be his responsibility to make all the enhancements to xedit (as opposed to releasing his editor). this is a trivial benchmark from Jun79 of various cms editors (giving virtual/total cpu use for edit of the same file):

EDIT  CMSLIB MACLIB S           2.53/2.81
RED   CMSLIB MACLIB S (NODEF)   2.91/3.12
ZED   CMSLIB MACLIB S           5.83/6.52
EDGAR CMSLIB MACLIB S           5.96/6.45
SPF   CMSLIB MACLIB S (WHOLE)   6.66/7.52
XEDIT CMSLIB MACLIB S          14.05/14.88
NED   CMSLIB MACLIB S          15.70/16.52

EDIT is the standard CMS edit ... all the other editors were fullscreen; nearly all were more robust and, except for NED, significantly more efficient than XEDIT. As an aside ... one of the above emails makes reference to sending me a tape for system distribution. One of my hobbies was creating, distributing, and supporting highly enhanced operating systems for internal use. this old email mentions doing csc/vm for internal distribution:
http://www.garlic.com/~lynn/2006w.html#email750102
in this post
http://www.garlic.com/~lynn/2006w.html#7
this old email makes some mention of doing sjr/vm for internal distribution
http://www.garlic.com/~lynn/2007c.html#email830709
http://www.garlic.com/~lynn/2007c.html#email830711
http://www.garlic.com/~lynn/2007c.html#email830711b
in this post
http://www.garlic.com/~lynn/2007c.html#12
and then there were operations like HONE which would do world-wide re-distributions for the HONE-clones all over the world ... misc. past posts mentioning HONE
http://www.garlic.com/~lynn/subtopic.html#hone
Re: CMS load module format
p...@voltage.com (Phil Smith) writes: Yeah, editors are definitely religion. But ISPF on VM sucked unequivocally just because of how fragile it was, due to how they implemented it - whether you liked the functionality or not, having to deal with it breaking all the time was horrible. And left a very bad taste in many VMers' mouths that may or may not have been deserved (he said, trying desperately to avoid the religious part of the argument!). ISPF had a different VM issue involving the VM performance tools group ... misc. past posts mentioning tale told at share (i.e. diverting funds from tools group into ISPF development):
http://www.garlic.com/~lynn/2000d.html#17 Where's all the VMers?
http://www.garlic.com/~lynn/2001m.html#33 XEDIT on MVS
http://www.garlic.com/~lynn/2005t.html#40 FULIST
http://www.garlic.com/~lynn/2006k.html#50 TSO and more was: PDP-1
http://www.garlic.com/~lynn/2009s.html#46 DEC-10 SOS Editor Intra-Line Editing
http://www.garlic.com/~lynn/2010g.html#6 Call for XEDIT freaks, submit ISPF requirements
http://www.garlic.com/~lynn/2010g.html#50 Call for XEDIT freaks, submit ISPF requirements
http://www.garlic.com/~lynn/2010m.html#84 Set numbers off permanently
http://www.garlic.com/~lynn/2011h.html#62 Do you remember back to June 23, 1969 when IBM unbundled
past posts in this thread:
http://www.garlic.com/~lynn/2011m.html#30 CMS load module format
http://www.garlic.com/~lynn/2011m.html#34 CMS load module format
http://www.garlic.com/~lynn/2011m.html#36 CMS load module format
http://www.garlic.com/~lynn/2011m.html#41 CMS load module format
Re: CMS load module format
peter.far...@broadridge.com (Farley, Peter x23353) writes: IIRC there is no easy FTP in or out of VM/370. Your only real transfer capability is the VM/370 system reader and punch. The VMARC format (like XMIT) packages text in 80-byte records and can be transmitted back and forth using reader and punch. There are both MVS 3.8 and VM/370 versions of VMARC, so files created in MVS 3.8 can be transmitted back and forth with VM/370. There is also a VM/370 dump command which writes files to the punch, but I forget the output format that it uses. re: http://www.garlic.com/~lynn/2011m.html#30 CMS load module format
... note lots of os/360 (&/or mvs) applications/compilers were ported to CMS by implementing simulation of os/360 access method services (on the cms filesystem; this is different from the simulation of os/360 access method services on real mvs disks mentioned below). CMSSEG was introduced in vm370 release 3 with DCSS (a very small subset of my paged-mapped filesystem and virtual memory management changes mentioned below). OS simulation should be in CMSSEG (there was a joke that the 32kbyte OS/360 simulation code in CMSSEG was much more cost-effective OS/360 simulation than the 8mbyte OS/360 simulation in MVS). this has discussion about the standard hercules distribution and whether the cmsseg definition is in conflict with the cms virtual machine size you are using:
http://osdir.com/ml/emulators.hercules390.vm/2003-11/msg00132.html
disk dump/load was one of the original cms (when it was the cambridge monitor system on cp67 ... before the morph to vm370 and name change to conversational monitor system) commands from the mid-60s. The original CMS filesystem formatted disks into 800-byte fixed-length physical records (an early form of FBA). a similar gimmick (temporarily changing the file format to 800-byte fixed-length records) was also used by the cms tape dump/load application ... but with physical 800-byte blocks on tape.
from dmsdsk (also from vm370 release 6):

* DUMP: DISK COPIES THE FILE DESIGNATION FROM THE PARAMETER LIST INTO
* BYTES 58-76 OF AN 89-BYTE BUFFER. (THE FIRST FOUR BYTES OF THE
* BUFFER CONTAIN AN IDENTIFIER CONSISTING OF AN INTERNAL
* REPRESENTATION OF A 12-2-9 PUNCH AND THE CHARACTERS 'CMS'.) THEN
* DISK TEMPORARILY CHANGES THE CHARACTERISTICS OF THE FILE IN THE
* 40-BYTE FST ENTRY TO MAKE IT APPEAR AS A FILE OF 800-BYTE
* FIXED-LENGTH RECORDS. (THE CORRECT FST ENTRY IS RESTORED WHEN THE
* FILE HAS BEEN DUMPED, OF COURSE.) DISK MOVES THE INITIAL VALUE FOR
* SEQUENCING (001) INTO BYTES 77-80 OF THE BUFFER. DISK NEXT CALLS THE
* DMSBRD FUNCTION PROGRAM TO READ THE FIRST 50 BYTES OF THE TEMPORARY
* COPY INTO BYTES 6-55 OF THE BUFFER AND THEN THE DMSCIO FUNCTION
* PROGRAM TO PUNCH THE CONTENTS OF THE BUFFER. HAVING PUNCHED THE
* FIRST CARD, DISK INCREMENTS THE SEQUENCE NUMBER (BYTES 77-80 OF THE
* OUTPUT BUFFER) AND OVERLAYS BYTES 6-55 OF THE BUFFER WITH THE NEXT
* 50 BYTES OF THE FILE BY CALLING DMSBRD. IT THEN PUNCHES THE CONTENTS
* OF THE BUFFER. DISK REPEATS THIS PROCESS FOR EACH SUBSEQUENT 50
* BYTES OF DATA IN THE TEMPORARY DISK FILE. WHEN THE END-OF-FILE IS
* ENCOUNTERED, DISK GENERATES AN END CARD (ONE WITH N IN COLUMN 5) AND
* PUNCHES IT, CALLS THE CP CLOSE COMMAND TO CLOSE PUNCH
... snip ...
During the FS period ... some past posts
http://www.garlic.com/~lynn/submain.html#futuresys
lots of 370 development (both software & hardware) was cut back all over the company. with the failure of FS ... there was a mad rush to get stuff back into the 370 product pipelines. This was the motivation for picking up a lot of the 370 stuff I had been doing all during the FS period for vm370 release 3 ...
some old email related to converting & enhancing a bunch of stuff from cp67 to vm370:
http://www.garlic.com/~lynn/2006v.html#email731212
http://www.garlic.com/~lynn/2006w.html#email750102
http://www.garlic.com/~lynn/2006w.html#email750430
and then additional stuff as
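The DMSDSK comments quoted above pin down the DISK DUMP card layout well enough to sketch the packing. This is a rough illustration only, assuming 1-based card columns per the excerpt; the 0x02 byte standing in for the 12-2-9 punch and the zero-padded sequence-number encoding are guesses, not verified against real DISK DUMP decks:

```python
# Sketch of packing a file into DISK DUMP style 80-column card images,
# per the DMSDSK comment excerpt: 'CMS' identifier, 50 data bytes in
# columns 6-55, file designation in columns 58-76, sequence in 77-80,
# and an end card with 'N' in column 5.
def disk_dump_cards(data, fileid):
    cards = []
    seq = 1
    for off in range(0, len(data), 50):
        chunk = data[off:off + 50].ljust(50, b'\x00')  # pad final chunk
        card = bytearray(80)
        card[0] = 0x02                       # stand-in for the 12-2-9 punch (assumed)
        card[1:4] = b'CMS'                   # identifier characters
        card[5:55] = chunk                   # 50 data bytes, columns 6-55
        card[57:76] = fileid.ljust(19).encode()[:19]  # file designation, cols 58-76
        card[76:80] = b'%04d' % seq          # sequence number, cols 77-80 (encoding assumed)
        cards.append(bytes(card))
        seq += 1
    end = bytearray(80)                      # end card: 'N' in column 5
    end[0] = 0x02
    end[1:4] = b'CMS'
    end[4] = ord('N')
    cards.append(bytes(end))
    return cards
```

e.g. a 120-byte file packs into three data cards (50+50+20 bytes) plus the end card.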
Re: CMS load module format
riv...@dignus.com (Thomas David Rivers) writes: Can anyone point me to a description of the CMS non-relocatable load module format? I can't seem to find it anywhere... (i.e. the output of the CMS GENMOD command.) old/original ... part of the vm370/cms release 6 dmsmod assemble file from the hercules/cbttape distribution (more detailed information in the actual source):

* GENMOD ISSUES THE START (NO) COMMAND TO FINISH LOADING OF OBJECT
* PROGRAMS. NEXT ERASE THE OLD MODULE IF IT EXISTS. THE START AND
* ENDING LOCATIONS ARE DETERMINED FROM THE USER OPTIONS 'TO' AND
* 'FROM' OR BY DEFAULT. THE DEFAULT START IS THE ADDRESS OF THE FIRST
* LOADER TABLE NAME, THE DEFAULT END IS THE CURRENT SETTING OF LOCCNT
* IN NUCON. AN EIGHTY BYTE RECORD IS WRITTEN AS THE FIRST RECORD OF
* THE MODULE. THIS RECORD CONSISTS OF THE NUCON LOADER INFORMATION.
* NEXT THE TEXT INFORMATION IS WRITTEN TO THE MODULE FILE IN VARIABLE
* SIZE RECORDS UP TO 65535 BYTES. IF THE MODULE IS NOT FOR A TRANSIENT
* ROUTINE AND NOMAP WAS NOT SPECIFIED THE LOADER TABLE IS WRITTEN AS
* THE LAST MODULE FILE RECORD. CLOSE THE NEW MODULE FILE AND RETURN TO
* THE CALLER.

http://www.cbttape.org/vm6.htm
http://www.cbttape.org/awstape.htm
http://www.smrcc.org.uk/members/g4ugm/VM370.htm
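A minimal sketch of the MODULE record layout those comments describe: an 80-byte header record, program text in variable-length records of up to 65535 bytes, then the loader table as the last record unless NOMAP or a transient module. The record framing here is illustrative only; the real header carries the NUCON loader information, which is not modeled:

```python
# Toy model of the GENMOD output structure per the DMSMOD comments:
# [80-byte header] [text record(s) <= 65535 bytes each] [loader table?].
def genmod_records(header, text, loader_table=None):
    assert len(header) == 80          # first record is always 80 bytes
    recs = [bytes(header)]
    for off in range(0, len(text), 65535):
        recs.append(bytes(text[off:off + 65535]))  # split text at 65535
    if loader_table is not None:      # omitted for transient/NOMAP modules
        recs.append(bytes(loader_table))
    return recs
```

So a 70000-byte program produces the header, a 65535-byte record, a 4465-byte record, and the loader table.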
Re: OUCB usage
Eric Jackson jh...@ca.rr.com writes: For MVS, unlike most other platforms, the terms swapping and paging refer to distinct operations. Paging is for a page of memory in an address space, and swapping is when the entire address space is swapped out to secondary storage. TSO address spaces waiting for terminal I/O (for example) will get swapped out so that their memory resources become available to other address spaces while waiting the relatively long time for terminal input. If you issue a DONTSWAP, paging still continues for your address space. changes i made for cp67 (as an undergraduate in the 60s) ... and, since the changes were mostly dropped in the simplification in the morph from cp67 to vm370, re-implemented for vm370 in the 70s ... were that pages were individually paged ... and at queue drop (for long wait) ... virtual pages might be collected ... but nothing actually happened unless there was sufficient demand for pages (aka agile, dynamic adaptive). circa 1980, somebody from the mvs organization contacted me about a recent change that had been made to MVS, regarding not actually swapping pages unless actually needed ... and they wanted to know about making a similar change to vm370. I commented that it had never occurred to me to not do it that way ... dating back to when i did the original implementation in the 60s. I actually had earlier arguments with the organization when they were first adding virtual memory to os/360 ... for svs and then mvs. misc. past posts mentioning paging, swapping, page replacement algorithms, etc
http://www.garlic.com/~lynn/subtopic.html#clock
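The "collect at queue drop, but nothing actually happens until there is demand" policy can be illustrated with a toy reclaim list. All names and structure here are invented for illustration; this is not cp67/vm370 internals:

```python
# Toy illustration of lazy swap-out: at queue drop a task's pages are
# merely put on a reclaimable list (no I/O); a page is only actually
# evicted when another task's page fault creates real demand.
class LazySwapper:
    def __init__(self, free_frames):
        self.free = free_frames     # count of free page frames
        self.reclaimable = []       # pages collected at queue drop, still resident

    def queue_drop(self, pages):
        # task entering long wait: just remember its pages -- no paging I/O here
        self.reclaimable.extend(pages)

    def demand_page(self):
        # satisfy a page fault: prefer a free frame; otherwise evict a
        # collected page now, only because there is actual demand
        if self.free > 0:
            self.free -= 1
            return "free frame"
        if self.reclaimable:
            return "evicted " + self.reclaimable.pop(0)
        return "must steal from active set"
```

If demand never materializes, the dropped task's pages are still resident when it wakes, and nothing was ever written out.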
Re: CLOCK change problem
bherr...@txfb-ins.com (Herring, Bobby) writes: TOD Clock switch AFAIK came in with the 370. I remember it specifically on the 168 my memory is iffy on the 155/158 but I think it was there, no experience on the 14X . If it was there on the 360s I never heard/saw anything about it. TOD was introduced with 370 (along with cpu timer & clock comparator) ... relaxing the location 80 timer. i remember getting caught up for a couple months discussing things like whether the TOD baseline of the first day of the century was 1900 or 1901. lower-end 360s would update location 80 approx. every 3mills ... higher end 360s could have (high resolution) location 80 updates approx. every 13mics ... including the 360/67. cp/67 used location 80 for everything ... it would save the old value and load the new value into 84, doing an overlapping 8-byte move from 80 to 76 (which moved the old value from 80 into 76 and the new value from 84 into 80). It would then update the various clocks and timer values by the difference between the old value (saved at 76) and the new value (now at 80) ... aka virtual machine microseconds used, kernel & supervisor microseconds used, current clock value. when cp/67 was originally installed at the univ. in jan68 ... it had support for 1050 & 2741 terminals ... along with automatic terminal identification. The univ. had some number of ascii/tty terminals ... so I had to add TTY terminal support. I extended the original logic for automatic terminal identification to include TTY. It worked fine for leased lines ... but had a glitch trying to do a single dial-in phone number with a hunt group (pool of lines). It was possible to change the line-scanner associated with each port (terminal type) ... but that didn't actually change the line-speed for each port (1050 & 2741 were the same ... but ascii/tty was different). This somewhat prompted the univ. to do a clone controller effort ... reverse engineering the channel interface and building a channel interface board for an Interdata/3 ...
and programming the Interdata/3 so it could handle both line-speed and terminal type. This got four of us written up as responsible for (some part of) the clone controller business ... since a vendor picked up the implementation and sold it commercially. One of the first bugs while testing the channel interface was a 360/67 red-light. The timer-tic hardware attempts to update location 80 on every tic ... if the processor or channel is holding the memory bus interface, it will delay ... but if it delays so long that the timer tics again ... it will stop the processor with a hardware failure. Turns out the initial clone controller implementation wasn't making sure that it told the channel interface to release the memory bus at least once every 13 microseconds. The location 80 timer updates put an expensive load on the memory bus ... one of the reasons for starting to eliminate its use ... starting with tod, cpu timer, and clock comparator in 370.
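The location-80 bookkeeping described above (save the old value, store the new value at 84, then one overlapping 8-byte move from 80 to 76) can be simulated directly, since S/360 MVC copies byte by byte, left to right, which is exactly what makes the overlap work. This is a toy model of the accounting with invented helper names, not real cp/67 code:

```python
# Simulate the cp/67 location-80 trick: one overlapping MVC both saves
# the old timer value at 76..79 and installs the new value at 80..83.
def mvc(mem, dst, src, length):
    # S/360 MVC semantics: byte at a time, left to right
    for i in range(length):
        mem[dst + i] = mem[src + i]

def timer_tick(mem, new_value):
    mem[84:88] = new_value.to_bytes(4, "big")  # refreshed timer value at 84
    mvc(mem, 76, 80, 8)                        # old 80..83 -> 76..79, new 84..87 -> 80..83
    old = int.from_bytes(mem[76:80], "big")
    cur = int.from_bytes(mem[80:84], "big")
    return old - cur                           # elapsed units (the timer counts down)
```

One move does both jobs: by the time bytes 84..87 are copied down to 80..83, the old contents of 80..83 have already been copied to 76..79.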
Re: CLOCK change problem
re: http://www.garlic.com/~lynn/2011k.html#27 CLOCK change problem
32bit value with 15hr duration ... different models decrement different bits depending on the timer resolution of the model. re:
http://www.bitsavers.org/pdf/ibm/360/funcChar/GA24-3231-7_360-30_funcChar.pdf
pg. 29, Interval Timer: The Model 30 Interval Timer (special feature) operates at a fixed cycle rate of 16.7 milliseconds (60-cycle system power-supply input) or 20 milliseconds (50-cycle power). The microprogram controls decrementing the timer. The interval-timer microprogram requires 7.5 to 13.5 microseconds (10 to 18 microseconds in a CPU with 2-microsecond RW cycle) per count depending upon whether there is a carry in the count. The cycle occurs asynchronously with respect to the stored program and I/O operation. A back-up register is provided with the timer feature to accumulate automatically a count of up to 16 intervals of time, if main storage cannot be accessed because of prolonged I/O or direct control operations. The feature permits a delay of up to 277 milliseconds between timer counter references without loss of the count. ... snip ... keeping 16 intervals ... implies that the update has to happen before the end of the 17th interval ... aka total 277ms divided by 17 intervals is approx. 16ms ... corresponds to the 16.7 milliseconds for 60-cycle power. re:
http://www.bitsavers.org/pdf/ibm/360/funcChar/GA27-2719-2_360-67_funcChar.pdf
pg. 19, High-Resolution Interval Timer: An interval timer with a high degree of resolution is used in the 2067. Operation of this timer is fully compatible with that described in the IBM System/360 Principles of Operation manual. The high-resolution timer provides approximately 13-usec resolution. This is accomplished with an 8-bit hardware register which contains the low-order byte of the timer. Each time the low-order byte counts to zero, the timer value at location 80-82 is decremented at the end of the instruction currently being executed.
An operand fetch from location 80 will retrieve the three high-order bytes from location 80 plus the low-order byte from the hardware register. If the low-order byte has stepped through zero during the instruction, then before a fetch from location 80, zeros are inserted into the low-order byte instead of the contents of the hardware register. Any instruction that stores into location 80 also stores the low-order byte into the hardware register, as well as a full word into location 80. If the timer value at location 80 changes from positive to negative, an external interruption is requested. ... snip ... approx. 15hr interval ... makes bit23 (i.e. bits 0-23) approx. 3mills ... the 360/67 timer required access to location 80 approx. every 3mills or the machine would redlight. (bit31) 13 microseconds * 256 (bit23) is 3.328 milliseconds. 3.328 milliseconds times 2**24 is 15.51 hrs (for 32bits). bit23 at 3.328ms, bit22 at 6.656ms, bit21 at 13.312ms, bit20 at 26.624ms, bit19 at 53.248ms, bit18 at 106.496ms, bit17 at 212.992ms. misc. past posts mentioning doing clone controller
http://www.garlic.com/~lynn/subtopic.html#360pcm
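The bit-interval arithmetic above can be checked directly (these are just the post's own numbers recomputed; the ~13-usec figure comes from the 360/67 functional characteristics quoted above):

```python
# Verify the interval-timer arithmetic: bit 31 ticks every ~13 usec,
# so bit 23 ticks every 13 usec * 256 = 3.328 ms, and the full 32-bit
# range is 3.328 ms * 2**24, i.e. roughly 15.5 hours.
bit31_us = 13                              # low-order tick, microseconds
bit23_ms = bit31_us * 256 / 1000           # 3.328 ms per bit-23 tick
bit20_ms = bit23_ms * 2 ** (23 - 20)       # 26.624 ms per bit-20 tick
full_range_hours = bit23_ms / 1000 * 2 ** 24 / 3600
```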
Re: Last card reader?
chrisma...@belgacom.net (Chris Mason) writes: The 2540 was an enormously versatile machine in that it not only supported the card reading function but also the card punching function. http://www.columbia.edu/cu/computinghistory/2540.html Google ad: first hit with search words IBM 2540 picture. But, looking at the picture I realise I've forgotten which feed was the reader feed and which was the punch feed! re: http://www.garlic.com/~lynn/2011k.html#13 Last card reader?
the reader ran faster than the punch ... the punch had a hopper for maybe a couple hundred cards (on the left) ... the reader had a sloping tray feed (on the right) that could hold at least a box of cards (2000). bitsavers has a more detailed 2540 description (but poorly scanned ... hard to make out details)
http://www.bitsavers.org/pdf/ibm/360/A21-9033-1_2540_CompDescr.pdf
the 1402 was similar ... lot more detail & a better scan:
http://www.bitsavers.org/pdf/ibm/140x/231-0002-2_1402_Card_Read-Punch_CE_Manual_1962.pdf
bitsavers is also good for older tab machines:
http://www.bitsavers.org/pdf/ibm/punchedCard/
Re: Last card reader?
shmuel+ibm-m...@patriot.net (Shmuel Metz , Seymour J.) writes: You really mean 709 and not 7090? That's a big jump! re: http://www.garlic.com/~lynn/2011k.html#8 Last card reader?
the univ. supposedly had something like #3 709, thousands of tubes that constantly required maintenance ... something like 20 tons of air conditioning capacity. much of the workload was student fortran; ibsys ran tape-to-tape (a second or two elapsed per job) ... with a 1401 front-end for unit record (carried tape between the 709 drives and the 1401 drives). there was an intermediate step replacing the 1401 with a 360/30 ... started out with the 360/30 running 1401 hardware emulation for MPIO, the program that did the unit-record<->tape front end. I got a student job rewriting MPIO in 360 assembler ... got to design my own stand-alone monitor, interrupt handlers, device drivers, console interface, etc. then the move to os/360 on a 360/65 (actually a 360/67 that spent most of the time running as a 360/65; replaced both the 709 & the 360/30) ... much less heat. student jobs then ran 3-step fortran-g: compile, link-edit, go ... over a minute elapsed time per student job; hasp got it down to 30+ seconds elapsed time. I started taking stage-2 sysgens completely apart and putting them back together for careful ordering of files and pds members to optimize arm seek ... getting down to a little under 13 seconds elapsed time (nearly a three times improvement). it wasn't until the univ. installed watfor that student job elapsed time got back down to 709 levels. the univ. was supposedly getting the 360/67 to run tss/360 ... but tss/360 failed to reach any reasonable operational level. eventually did get (virtual machine) cp67 january 1968 ... and the univ. let me play with it on weekends. I rewrote large sections of cp67 before graduating.
Re: Last card reader?
steve.do...@ccbcc.com (Steve Dover) writes: Phil, we had one at Allstate Insurance until 1990. 2540 reader/punch. I sure miss the chads, they were great fun in desks and cars. But I do not miss hauling the 50 pound boxes around. as an undergraduate in the 60s ... the univ. was using sense-marked cards (no.2 pencil) for class registration ... tables in the gym and students would get a card for each class and fill in their information. Then the cards were run thru and holes punched (solid manila color cards). the registration program was moved from the 709 to the 360 with the 2540 reader/punch. all the cards, in a large number of trays (about 3000 per tray ... about a box & a half), were fed into the 2540 reader. I wrote a subroutine to feed into the middle stacker (stacker 3) ... the registration program would validate the registration information and if it found a problem, a blank card would be punched behind it (the middle stacker, stacker 3, was selectable from both the reader and the punch). The punch had been loaded with top-edge red-stripe cards ... so when everything was done ... it was possible to pick out class registration cards with errors ... by the red-stripe top-edge card immediately following them in the tray.
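The error-flagging flow just described can be sketched as a small simulation: every card lands in the shared stacker, and a red-stripe blank is punched in behind each card that fails validation. The validation predicate and card representation are stand-ins for the real registration checks:

```python
# Toy simulation of the 2540 registration run: stacker 3 is selectable
# from both the reader and the punch, so flag cards end up physically
# behind the cards they flag.
def process_registration(cards, is_valid):
    stacker3 = []
    for card in cards:
        stacker3.append(card)                    # reader selects card into stacker 3
        if not is_valid(card):
            stacker3.append("RED-STRIPE BLANK")  # punch drops a flag card behind it
    return stacker3
```

Scanning the finished tray, each bad card is the one immediately in front of a red-stripe top edge.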
Re: Last card reader?
ps2...@yahoo.com (Ed Gould) writes: Wasn't there a card reader as a requirement for 3090 and before so the CE could install the OLTEP program and a rudimentary IOCDS to run his diagnostics? the 3092 (3090 service processor) was a pair of 4361s running a special custom vm370 release 6 off of 3370 FBA drives. All that stuff could have come on 3370 FBA disks as part of the service processor. aka, the bottom of the following mentions that the 3092 requires two 3370 FBA devices (one for each 4361 running vm370):
http://www-03.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html
the above also mentions that the 3092 (aka the vm370 4361s) requires access to a 3420 tape drive. misc. past posts mentioning 3092:
http://www.garlic.com/~lynn/2009b.html#22 Evil weather
http://www.garlic.com/~lynn/2009e.html#50 Mainframe Hall of Fame: 17 New Members Added
http://www.garlic.com/~lynn/2010e.html#32 Need tool to zap core
http://www.garlic.com/~lynn/2010e.html#34 Need tool to zap core
http://www.garlic.com/~lynn/2010e.html#38 Need tool to zap core
http://www.garlic.com/~lynn/2011c.html#71 IBM and the Computer Revolution
http://www.garlic.com/~lynn/2011e.html#62 3090 ... announce 12Feb85
http://www.garlic.com/~lynn/2011f.html#31 TCP/IP Available on MVS When?
http://www.garlic.com/~lynn/2011f.html#32 At least two decades back, some gurus predicted that mainframes would disappear
http://www.garlic.com/~lynn/2011f.html#42 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
http://www.garlic.com/~lynn/2011h.html#68 IBM Mainframe (1980's) on You tube