Re: Leap (was: LOADING An AMODE64 Program)

2015-08-12 Thread Joel Ewing
On 08/11/2015 01:19 PM, Paul Gilmartin wrote:
 On Tue, 11 Aug 2015 12:37:30 -0500, Joel Ewing wrote:

 Encyclopedia Britannica is complicit in the confusion to this day by
 incorrectly implying in their Leap Year entry that in addition to the
 divisible by 4, 100, 400 rules there either is or should be a 4000-year
 exception rule:
 ...For still more precise reckoning, every year evenly divisible by
 4,000 (i.e., 16,000, 24,000, etc.) may be a common (not leap) year,

 Over 18 years ago (Nov 1996) EB acknowledged that no such rule exists:
 it was an un-adopted and sub-optimal suggestion by Sir John Herschel
 around 1820.  EB has apparently not yet followed their own internal
 recommendation in 1996 to reword this statement in the future.

 If I were Emperor of the Universe, I would make the rule:
 
 Every year divisible by 4 except one divisible by 128 is a leap year.
 
 365 31/128 is within one second of the mean tropical year; closer even
 than the 4000-year rule.
 
 The unpredictable secular increase in the length of the day makes a
 4000-year rule pointless.
 
 -- gil
 

Agreed.  This is the same view I have also held since 1996, but...

Amazingly, if you do the math, the result of a 4/128 year rule is
mathematically identical to the average days/year of a
4/100/400/3200-year algorithm.  The 4/100/400 rule becomes about one day
in error every 3200 years (the closest multiple of 400 years to the
point where the error is 1 day), and since the Gregorian Calendar was
adopted about 1600 (to the nearest century), years evenly divisible by
3200 turn out to be approximately the point where the error reaches 1/2
day and a reasonable point to start forcing a 3200-year correction to a
4/100/400 algorithm.
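
If you want to check that arithmetic yourself, here is a quick sketch in
Python using exact rational arithmetic (nothing here is z/OS-specific):

  from fractions import Fraction

  # average days/year under the 4/128 rule
  rule_4_128 = 365 + Fraction(31, 128)
  # average days/year under a 4/100/400/3200 rule
  rule_3200 = (365 + Fraction(1, 4) - Fraction(1, 100)
                   + Fraction(1, 400) - Fraction(1, 3200))
  print(rule_4_128, rule_3200, rule_4_128 == rule_3200)
  # -> 46751/128 46751/128 True; both average 365.2421875 days/year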

Despite the elegance of a much simpler 4/128 solution, I would suspect
there would be much less opposition to adding a 3200-year exception to
the 400-year exception in the existing rules.  The 19th-century proposal
of a 4000-year exception would obviously have been a worse choice.

All this presumes we or nature don't mess up the Earth's rotational
speed too badly, convert to stardates, or terminate civilization as we
know it in the next 1200 years.

-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: 3380 was actually FBA?

2015-08-12 Thread Joel Ewing
On 08/12/2015 07:38 AM, Jerry Callen wrote:
 In another thread, l...@garlic.com wrote:
 
   ... but then if MVS had FBA support wouldn't have needed to do 3380 as CKD 
 (even tho inherently it was FBA underneath) ...
 
 I didn't know that.
 
 Was that the first (and/or last?) IBM SLED to be inherently FBA under the 
 hood? Where were the smarts for that implemented, in the control unit, or the 
 drive itself?
 
 -- Jerry
 
The count, key, and data field data on a native 3380 were written in
32-byte increments, but since a physical data block could be an
arbitrary number of 32-byte chunks and unused 32-byte chunks at varying
positions around the track had to be wasted between physical blocks for
inter-block gaps, I wouldn't call this FBA-under-the-hood.  The physical
block size (up to 31 bytes larger than known to the Operating System)
definitely wasn't Fixed, just restricted to a multiple of 32 bytes.
The only fixed part of the track architecture was the 32-byte
increment size.
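
To illustrate just the rounding (illustrative arithmetic only -- this
sketch assumes nothing about the 3380's actual count/key overhead):

  def chunks_32(nbytes):
      """32-byte increments needed to hold a field of nbytes."""
      return -(-nbytes // 32)        # ceiling division

  for length in (1, 32, 80, 4096, 4097):
      used = chunks_32(length) * 32
      # field length, space written, wasted padding (0 to 31 bytes)
      print(length, used, used - length)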

Perhaps at the actual device hardware level a 3380 could have been given
the capability to randomly address, read, and write individual 32-byte
track increments while using all possible 32-byte increments on the
track for data, but I would expect that would have been a much more
expensive design than was required to support CKD architecture.  My
strong impression was that the erased IBG between physical blocks was a
requirement for proper sensing of the beginning of a block.  The requirement
that some 32-byte increments must be left unused for IBGs indicates
these 32-byte groupings do not play the same role as fixed data blocks
in FBA architecture devices.


-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: LOADING An AMODE64 Program

2015-08-11 Thread Joel Ewing
One of the things that became obvious in the Y2K discussion groups by
1999 was that the general public is not very good at understanding leap
year exception rules, especially ones that neither they, nor several
generations of their ancestors, have ever witnessed.  It ran all the way
from some adamantly claiming 2000 should not and would not be a leap
year to some insisting there would be two leap days in 2000!  As noted,
2000 was indeed a leap year by the 400-year exception to the 100-year
exception.
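
In code the full current rule is tiny -- a minimal sketch in Python of
the 4/100/400 test:

  def is_leap(year):
      """Gregorian rule: every 4th year is a leap year, except
      century years not divisible by 400."""
      return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

  print(is_leap(2000), is_leap(1900), is_leap(2100))  # True False False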

Encyclopedia Britannica is complicit in the confusion to this day by
incorrectly implying in their Leap Year entry that in addition to the
divisible by 4, 100, 400 rules there either is or should be a 4000-year
exception rule:
...For still more precise reckoning, every year evenly divisible by
4,000 (i.e., 16,000, 24,000, etc.) may be a common (not leap) year,

Over 18 years ago (Nov 1996) EB acknowledged that no such rule exists:
it was an un-adopted and sub-optimal suggestion by Sir John Herschel
around 1820.  EB has apparently not yet followed their own internal
recommendation in 1996 to reword this statement in the future.
Joel C. Ewing


On 08/11/2015 10:31 AM, Mike Schwab wrote:
 As a multiple of 400, 2000 was a leap year.   2100, 2200, and 2300 will not 
 be.
 
 On Tue, Aug 11, 2015 at 10:08 AM, Jon Butler butler@gmail.com wrote:
 Did she realize 2000 was not a leap year?

...


-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Article on COBOL's inevitable return

2015-07-29 Thread Joel Ewing
Well, actually the original statement WAS self-evidently ludicrous
because it stated that U.S. DoD decreed ALL businesses would use COBOL,
period, and DoD has never had that much authority.

DoD had zero control over businesses that did not work on defense
contracts for DoD, and even those with defense contracts could only be
constrained to DoD standards in the work they did on behalf of those
defense contracts.  The use of an unconstrained ALL is what made it
ludicrous.  The considerable influence of DoD as a major consumer forced
the availability of COBOL and later ADA support and set the standards
for code written for DoD projects, but DoD is not [yet] omnipotent.
JC Ewing

On 07/29/2015 12:04 PM, Ted MacNEIL wrote:
 Hence NOT ludicrous!
 
 -
 -teD
 -
   Original Message  
 From: Vince Coen
 Sent: Wednesday, July 29, 2015 12:54
 To: IBM-MAIN@LISTSERV.UA.EDU
 Reply To: IBM Mainframe Discussion List
 Subject: Re: Article on COBOL's inevitable return
 
 I think you will find that was a demand (?) that all applications 
 developed on behalf of the military (well at least the US Navy) had to 
 be in Cobol - if nothing else to help with standards, maintenance  
 migration.
 
 You have to remember that there was more than one supplier of mainframes 
 in the 60's such as IBM, Burroughs, Honeywell Univac, Sperry Rand to 
 name but a few and in Europe OK, the U.K., ICL (ICL), English Electric 
 and of course the first commercial computer the LEO 3 and these were 
 also included in UK manuals of the time.
 
 Check out the copyleft notice that is shown in all Cobol manuals and 
 should also be in books although not in my one copy of a Cobol book - 
 Cobol unleashed!
 .
 Vince
 
 Cobol since 1963, IT since 1961 (from 1403, 7094, 360/30 et al).
 
 
 
 
 On 29/07/15 17:20, Paul Gilmartin wrote:
 On Wed, 29 Jul 2015 12:11:56 -0400, Ted MacNEIL wrote:

 Why is it so ludicrous? The USDOD did develop COBOL for some reason.

 And a generation later, they likewise required ADA. I don't know if that
 was ever countermanded.

 I know a programmer who argued that his assignment could not be accomplished
 in ADA. He was given an exemption and allowed to use assembler.

  Original Message  
 From: zMan
 Sent: Wednesday, July 29, 2015 11:28

 *The Department of Defense even decreed that all businesses must run on
 COBOL in the 1960s.*
 A ludicrous assertion.
 -- gil

 


-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Python FTP binary object file upload

2015-06-24 Thread Joel Ewing
On 06/24/2015 03:58 PM, Paul Gilmartin wrote:
 On Wed, 24 Jun 2015 13:34:04 -0500, Janet Graff wrote:
 
 Sadly, python rejected the quote prefix for the site command.

 *cmd* 'QUOTE SITE RECFM=FM LRECL=80 BLKSIZE=3200'
 *put* 'QUOTE SITE RECFM=FM LRECL=80 BLKSIZE=3200'\r\n
 *get* '500 unknown command QUOTE'
 *resp* '500 unkown command QUOTE'

 Well, RFC 959 mentions SITE, but not QUOTE.
 
 And there I was about to blame it on Guido van Rossum's antipathy to EBCDIC.
 
 -- gil
 

Strictly speaking, QUOTE is not an FTP protocol command but rather a
command to a local FTP client, which by common convention sends the text
following the QUOTE to the remote FTP server as a command even though
its syntax can't be validated by the local client.  This is useful when
dealing with enhanced FTP server features not supported by a particular
FTP client, like conveying record-formatting SITE information to a z/OS
FTP server from a non-z/OS FTP client.  In the context shown, QUOTE is
apparently being sent as a command to the remote FTP server, where it is
indeed invalid.
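
For what it's worth, Python's own ftplib never needs the QUOTE
convention: the client API can send server command text directly.  A
minimal sketch (the host, credentials, and data set names below are
made up for illustration):

  from ftplib import FTP

  ftp = FTP('zos.example.com')            # hypothetical z/OS host
  ftp.login('userid', 'password')         # hypothetical credentials
  # send the SITE command itself -- no QUOTE prefix, because the
  # client is building the protocol command stream directly
  print(ftp.sendcmd('SITE RECFM=FB LRECL=80 BLKSIZE=3200'))
  with open('prog.obj', 'rb') as f:       # hypothetical local file
      ftp.storbinary("STOR 'USERID.OBJ.LIB(PROG)'", f)
  ftp.quit()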


-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Split screen ISPF edit copy?

2015-06-23 Thread Joel Ewing
On 06/23/2015 11:40 AM, John McKown wrote:
 On Tue, Jun 23, 2015 at 11:24 AM, Charles Mills charl...@mcn.org wrote:
 
 In a split screen ISPF session, with two edit logical screens open, is it
 possible to do a line command CC/A type copy from one logical screen to the
 other? That is, it does not seem to work. Is there some option I can turn
 on
 to enable it, or some trick to it?

 
 ​Sorry, Outta Luck on that. From what I can see, each ISPF edit instance
 had its own buffers and there is no direct buffer to buffer copy.​
 
 
 

 I know I can do a command line CUT and PASTE but I have a lot of edits to
 do
 where a basic CC/A type copy would be more convenient.

 
 ​CUT and PASTE seem to be it for this function.
 
 
 

 Charles



It can be done if you have ISPF Productivity Tool (formerly known as
SPIFFY), an ISPF enhancement product which among other things supports
enhanced cut/paste to multiple clipboards within multiple contexts.
However, unless things have changed this is not a free product, so
unless some of its other features are also very appealing to management
it might be difficult to cost-justify.

I'm thinking one could even kludge their own one-clipboard version of
split-screen edit cut/paste using REXX Edit macros without too much
difficulty. Does such a beast possibly already exist on the CBT Tape web
site?


-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


User Log File migration issue (Was Re: Notify for XMIT)

2015-06-05 Thread Joel Ewing
On 06/04/2015 11:07 AM, Shmuel Metz (Seymour J.) wrote:
 In 556da66b.9040...@acm.org, on 06/02/2015
at 07:49 AM, Joel Ewing jcew...@acm.org said:
 
 That may well be, but according to IBM and TSO documentation the
 behavior of IKJEFT1A/IKJEFT1B is by design slightly different,
 
 How is that relevant to the claims about TCB structure?

Why this insistent fixation on TCB structure, as if that were something
I was arguing?  I'm not!  At this point I'm no longer interested.

This sub-thread, in response to a question about any migration issues
with User Log Files, had to do with a possible migration issue with SEND
when migrating from SYS1.BRODCAST to User Log Files, and specifically a
failure related to invoking SEND from a CLIST from Batch TSO using
IKJEFT1A in an existing job stream which only occurred using User Log
Files.  Someone else with similar batch job streams might also have
similar minor migration issues.

I mis-remembered that our issue might have been something to do with TCB
structure; BUT ... when I located the original IBM problem discussion, I
quoted IBM's explanation of why it behaved the way it did, and that
quote makes no claims about differences in TCB structure, only that this
is the way it works.  Internal IKJEFTxx TCB structures turned out to be
totally irrelevant to the cause of our migration issue, and thus
irrelevant to the original question that prompted this sub-thread.

IBM agreed with me that it was unreasonable to assume CLIST users could
interpret "directly" based on the undocumented internal implementation
of CLIST execution, and they clarified the manuals.

When dealing with historical production job streams which are using
IKJEFT1[A|B] for Batch TSO and which have been functioning correctly for
years, one can equally argue that if you are not sure why they chose
that entry point you have no business arbitrarily changing that usage
without first doing a complete analysis -- and resources to do that
analysis may require there first be a perceived problem.  Specific
usages of IKJEFT1A may or may not be appropriate, but you still have to
deal with what is actually in use at your installation.
J. C. Ewing
 
 and their definition of "directly" in this context includes TSO 
 commands executed within a CLIST that is directly invoked under 
 the TMP.
 
 Unless the implementation of EXEC has changed radically, any reaonable
 definition of directly would have to include commands from a CLIST. If
 the TMP does a GETLINE or PUTGET, builds a CPPL and attaches the task,
 how much more direct can it get? It's the same code path as the
 command from the terminal.
 
 but if you avoid executing the commands directly under the TMP
 
 Water is wet.
 
 and IKJEFT1A/IKJEFT1B appear to be designed to regard any non-zero 
 return code they see as fatal.
 
 That's specialized behavior and by no means the normal batch TSO. If
 you don't know why you're using IKJEFT1[AB] then you probably
 shouldn't be using them.
 


-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Notify for XMIT

2015-06-02 Thread Joel Ewing
On 06/01/2015 07:23 PM, Shmuel Metz (Seymour J.) wrote:
 In 556b662e.60...@acm.org, on 05/31/2015
at 02:51 PM, Joel Ewing jcew...@acm.org said:
 
 The above CLIST code should presumably work as you expect for
 intercepting SEND CLIST errors in an Interactive TSO/E, ISPF
 environment where TSO is invoked via a TSO logon PROC as IKJEFT01 and
 not as IKJEFT1A or IKJEFT1B.
 
 AFAIK all of IKJEFT01, IKJEFT1A and IKJEFT1B have the same task
 structure.
  
 
That may well be, but according to IBM and TSO documentation the
behavior of IKJEFT1A/IKJEFT1B is by design slightly different,
specifically for TSO commands running directly under the TMP that give
a non-zero return code; and their definition of "directly" in this
context includes TSO commands executed within a CLIST that is itself
directly invoked under the TMP.

It doesn't have to be a difference in TCB structure that makes this
behavior of IKJEFT01 vs. IKJEFT1A/IKJEFT1B different, but if you avoid
executing the commands directly under the TMP -- e.g., by executing them
from within a REXX EXEC -- you can circumvent that behavioral
difference.  I think one adds a TCB by invoking a REXX EXEC, but
probably more significantly it's no longer the TMP that sees the TSO
Command return code for commands within the EXEC.  The interpretation of
whether a non-zero return code is or is not a fatal error is solely at
the discretion of the calling program, and IKJEFT1A/IKJEFT1B appear to
be designed to regard any non-zero return code they see as fatal.

-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Notify for XMIT

2015-06-01 Thread Joel Ewing
You didn't say but from your results you apparently tested from within
an interactive TSO session, which runs IKJEFT01.  Now to complete the
test try invoking your invalid-keyword version of the CLIST from the
SYSTSIN input file in a Batch TSO job step where the JCL EXEC statement
invokes PGM=IKJEFT1A, with a SYSPROC DD pointing to your CLIST library,
and see what you get written to the SYSTSPRT SYSOUT file.  I would be
willing to bet the "SEND RETURNED CC 92" message does not appear in the
batch TSO test because the WRITE is never reached.

The whole point is that CLISTs under TSO/E behave (as documented)
somewhat differently in a batch IKJEFT1A/IKJEFT1B environment versus an
interactive TSO session.  This is what makes the adding of new cases
where SEND might return a non-zero RC a potential issue for CLISTs that
use SEND within batch TSO job steps.
JC Ewing

On 06/01/2015 03:00 AM, Steve Coalbran wrote:
 It seemed to work without error with NOW on the SEND but changing that to 
 NOWADAYS...
 
 SEND 'MESSAGE' USER(FOO) NOWADAYS 
 INVALID KEYWORD, NOWADAYS 
 SEND RETURNED CC 92 
 
 (typo '7' removed)
 /Steve
 
 
 
 
 From:   Shmuel Metz (Seymour J.) shmuel+ibm-m...@patriot.net
 To: IBM-MAIN@LISTSERV.UA.EDU
 Date:   2015-05-31 19:55
 Subject:Re: Notify for XMIT
 Sent by:IBM Mainframe Discussion List IBM-MAIN@LISTSERV.UA.EDU
 
 
 
 In 555fe63d.3010...@acm.org, on 05/22/2015
at 09:30 PM, Joel Ewing jcew...@acm.org said:
 
 But from APPENDIX A in TSO/E Customization, any command invoked 
 directly from the TMP that returns a non-0 return code causes the 
 TMP to end.
 
 If true, then 10.2 "Writing error routines" in the CLIST manual is in
 error. Has anybody tried
 
 ERROR +
   DO
     WRITE SEND retu7rned cc &LASTCC
     RETURN
   END
 SEND 'message' USER(FOO) NOW
 
 where the SEND gets a nonzero RC? 
  
 


-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Notify for XMIT

2015-05-31 Thread Joel Ewing
On 05/31/2015 09:32 AM, Shmuel Metz (Seymour J.) wrote:
 In 555fe63d.3010...@acm.org, on 05/22/2015
at 09:30 PM, Joel Ewing jcew...@acm.org said:
 
 But from APPENDIX A in TSO/E Customization, any command invoked 
 directly from the TMP that returns a non-0 return code causes the 
 TMP to end.
 
 If true, then 10.2 "Writing error routines" in the CLIST manual is in
 error. Has anybody tried
 
 ERROR +
   DO
     WRITE SEND retu7rned cc &LASTCC
     RETURN
   END
 SEND 'message' USER(FOO) NOW
 
 where the SEND gets a nonzero RC?   
  
 
The above CLIST code should presumably work as you expect for
intercepting SEND errors in a CLIST in an Interactive TSO/E, ISPF
environment where TSO is invoked via a TSO logon PROC as IKJEFT01 and
not as IKJEFT1A or IKJEFT1B.  I think the usage of IKJEFT1A/IKJEFT1B is
only possible (certainly only reasonable) in batch TSO, and this was the
context for our discussion with IBM.  The TSO/E CLISTS manual appears to
be written from the standpoint of an Interactive TSO environment where
there is an associated terminal user, which admittedly is the more
common environment for CLIST usage and the environment that has the
fullest capability.

If you look at the referenced TSO/E Customization Appendix A, you will
find that it explicitly deals with executing the TSO TMP in background
-- i.e., Batch TSO, not interactive TSO.  The rules for batch TSO are
peculiar to that environment -- as obviously you can't expect to do
things from a CLIST in batch TSO that require dynamic decisions by an
interactive user.

The CLIST manual is not in error, just perhaps incomplete in not
explicitly mentioning all limitations when running in a batch TSO
environment or under IKJEFT1A or IKJEFT1B.  One could however
rationalize that since this is not really a limitation of CLISTs but of
a specific TSO/E environment in which they run, it makes more sense for
the TSO/E Customization manual and TSO/E User's Guide to lay out this
and other Batch-TSO limitations (which they do), since they are the
manuals that describe how to run TSO in batch, and I suspect they are
also the only manuals that discuss the IKJEFT1A and IKJEFT1B TSO/E
entry points.

I cannot recall years later the historical reason why we started using
IKJEFT1A/IKJEFT1B for Batch TSO rather than IKJEFT01, only that at the
time it seemed like the logical thing to do.  It didn't bite us until
years later when we migrated to TSO user log files for terminal messages
because of our usage of SEND under a CLIST under batch TSO under
IKJEFT1A.  From our experience, I'm certain the CLIST ERROR handling
code in the above example would never be reached if the CLIST were
directly executed under IKJEFT1A in batch TSO.

-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Any freeware for practicing SMP/E

2015-05-30 Thread Joel Ewing
On 05/29/2015 10:37 PM, Mike Schwab wrote:
 Well, it is old, but it is free.
 http://www.jaymoseley.com/hercules/
 Click on Install MVS 3.8.
 It includes instructions for loading MVS 3.7 on 2 3330 packs.
 You then update SMPE and install MVS 3.8.
 You can also then install the IBM PTFs and User PTFs to get to MVS
 3.8J with Turnkey enhancements.
 LOTS of SMPE practice.
 
 On Fri, May 29, 2015 at 7:17 PM, Rajesh Kumar herowith.z...@gmail.com wrote:
 Hi Team,

 As i'm new and  started to learning SMP/E tool, i would like to practice it
 on my test system.

 Could someone  please suggest me some best  freeware  to install(for
 training purpose).
 Please share the link  and freeware product  which is best for my os 1.11.

 Regards,
 Rajesh


I'm sure you mean SMP4 not SMP/E with respect to MVS 3.8 and Hercules.

SMP/E wasn't introduced until MVS/XA, which is where I started doing MVS
maintenance.  Many add-on products for MVS  at that time still had JCL
examples for install/maintenance using both SMP4 and SMP/E, and
occasionally even MVS/XA related APARs would slip up and give action
statements for SMP4 which had to be translated to SMP/E counterparts.

My impression from those examples was that SMP4 was considerably more
primitive than SMP/E, that some important functions in SMP/E were not in
SMP4, and there were distinct differences in required statement
sequences and syntax.  Playing with SMP4 would show you the IBM concept
of SYSMOD and TLIB/DLIB-based maintenance but would not be a good
substitute for actual SMP/E training.  For example, the VSAM-based
databases used by SMP/E to track the status of product components were
introduced in SMP/E, so one would expect radical differences from the
SMP4 rules for creating a product database and the techniques available
for manipulating it.
-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Notify for XMIT

2015-05-22 Thread Joel Ewing
On 05/21/2015 09:46 AM, Shmuel Metz (Seymour J.) wrote:
 In 555c9bbd.4030...@acm.org, on 05/20/2015
at 09:35 AM, Joel Ewing jcew...@acm.org said:
 
 My recollection is that the immediate bailout from the CLIST and 
from batch TSO on a non-zero TSO command processor RC only 
 occurred for commands running directly under the batch TMP TCB
 
 That's a different issue from another TCB being involved; both CLIST
 and REXX will ATTACH a command, creating a separate TCB for it. The
 difference between your CLIST and REXX code lies in the default ERROR
 handling; You can use the CLIST ERROR statement to get behavior more
 like REXX. IMHO, IBM was remiss in not suggesting that you try ERROR
 and in not listing the change to SEND as a migration issue.
  
 

This was an interesting and somewhat obscure problem, so I kept a copy
of the original 2006 ETR discussion with IBM, which I have just
re-found.  Here is IBM's response from Adam Nadel at TSO/E Level 2:

... This is indeed a question of how CLISTs behave while executing
under IKJEFT1A [batch TSO job step].  When executing a CLIST, the CLIST
is placed on the TSO STACK and PUTGETs are used to execute each CLIST
statement.  If the statement is a COMMAND ... that command is returned
to IKJEFT02, which then attaches IKJEFT09 to ATTACH the command.  That
is, the command is invoked 'directly from the TMP'.  But from APPENDIX A
in TSO/E Customization, any command invoked directly from the TMP that
returns a non-0 return code causes the TMP to end.

So under entry points IKJEFT1A and IKJEFT1B, any cmd invoked from a
CLIST that returns with a non-0 return code causes the CLIST to be
flushed and the JOB [step] terminates.

He then proceeded to suggest either using alternative entry point
IKJEFT01, or as we had done, using a REXX exec, because then the
command is invoked by the REXX exec itself, not by the TMP.  The TSO/E
User's Guide and Customization manuals were revised to clarify the
behavior of IKJEFT1A with CLISTs and cmds with non-0 return codes.

Note the issue is not one of CLIST default error handling, but of the
environment in which the CLIST is invoked.


-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: anyone used the AUDIT function in HSM ?

2015-05-20 Thread Joel Ewing
Realizing this does not solve the OP's immediate problem, for the
benefit of any others with similar exposure:
If you have important, possibly even critical data on DFHSM ML2 or
BACKUP tapes, then not making use of DFHSM DUPLEX capability is false
economy that you will eventually come to regret.  If you are indeed
fortunate, you will realize this before you lose anything you can't live
without, but avoiding even one disastrous loss can justify the extra
tapes and resources required to write DUPLEX copies when writing ML2 and
BACKUP tapes.

If you are talking physical tapes and physical tape drives, then
occasional physical damage to media through repeated usage, mishandling
of media or drive failures does indeed occur, and loss of a single DFHSM
tape may affect thousands of data sets, some of which could be very
important.

Even if you are talking virtual tape volumes and your VTS duplexes the
data for the virtual volumes, this doesn't automatically eliminate a
need for DFHSM duplexing.  If an Operator can erroneously scratch a
DFHSM virtual tape volume, there is probably at best a limited window in
which the volume can be unscratched without data loss.  If that window
is too small and you don't discover the problem before the window has
passed, then again your only salvation may be a DFHSM DUPLEX
tape copy.
J C Ewing

On 05/20/2015 08:06 AM, Michaud, Bruce wrote:
 Nope, nothing duplexed...
 
 Bruce Michaud
 Mainframe technical support
 US Airways 52N-UIT
 
 bruce.mich...@usairways.com
 Phone - 480-693-7754
 
 
 -Original Message-
 From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On 
 Behalf Of Doug
 Sent: Tuesday, May 19, 2015 5:09 PM
 To: IBM-MAIN@LISTSERV.UA.EDU
 Subject: Re: anyone used the AUDIT function in HSM ?
 
 Sorry if I missed a post but,
 Is the tape DUPLEXED?
 Can't tell from the column skewing
 Regards,
 Doug
 
 .
 
 On May 19, 2015, at 19:01, retired mainframer retired-mainfra...@q.com 
 wrote:
 
 If you want to know what HSM thought was on the tape, use the LIST TTOC 
 command, not LIST BVOL.
 
 When asking for assistance with these kinds of issues, it is really helpful 
 if you show us the whole job (job log, JCL, control cards, and output) plus 
 all related messages from the system log.
 
 -Original Message-
 From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
 Behalf Of Michaud, Bruce
 Sent: Tuesday, May 19, 2015 3:25 PM
 To: IBM-MAIN@LISTSERV.UA.EDU
 Subject: Re: anyone used the AUDIT function in HSM ?

 List shows nada -
 VOLSER   DEVICE  BACKUP   VOL   TOTAL FREETHRESH  LAST BACKUP
 PSWD EXP   RACF   EMPTY  IDRC  DUPLEXPCT
TYPETYPE  FULL  TRACKS TRACKS 
  DATE
 ALTFULL

 V00793   3590-1  DAILYYES     ***
 15/04/18NO
 YESNO NOY*NONE*96.9
 - END OF - BACKUP VOLUME - LISTING -

 But I know the tape wasn't written over - I found this in the log -

 ARC0309I TAPE VOLUME V00793 REJECTED, VOLUME ALREADY(CONT.)
 CONTAINS VALID DFSMSHSM DATA
 
...


-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Notify for XMIT

2015-05-20 Thread Joel Ewing
On 05/19/2015 07:16 PM, Shmuel Metz (Seymour J.) wrote:
 In 5554ae6d.4030...@acm.org, on 05/14/2015
at 09:17 AM, Joel Ewing jcew...@acm.org said:
 
 I think we resolved the issue by converting the CLIST to REXX so another 
 TCB was involved.
 
 How is that relevant? What is relevant is how you dealt with ERROR and
 FAILURE in the REXX code.
  
 

My recollection is that the immediate bailout from the CLIST and from
batch TSO on a non-zero TSO command processor RC only occurred for
commands running directly under the batch TMP TCB (and running a CLIST
command directly from SYSTSIN was actually causing all TSO commands
within the CLIST to run at that level as well!).  When executing TSO
commands from a REXX exec invoked from SYSTSIN that was not the case,
and premature termination of the REXX exec and batch TSO did not occur
when TSO SEND was invoked within the EXEC and gave a non-zero return
code.  The replacement REXX EXEC produced a zero return code for the
entire EXEC based on its own internal criteria, which did not involve
checking the RC from SEND.

We found the premature bailout difference in behavior between CLIST and
REXX environments unexpected and counter-intuitive, but it seemed to
make perfect sense to those at IBM TSO support familiar with TSO
internals.  I accepted IBM closing our problem as a DOC change: the TSO
manuals at the time did document this behavior, but in a way that
required a TSO internals specialist for correct interpretation.

The root cause of our problem was that TSO SEND (working as designed
with user log files) was returning a non-zero return code when it
actually did NOT have an error and when message delivery eventually
succeeded. An actual failure of the requested SEND command action was
not an issue, and it was unnecessary to do anything special to handle an
ERROR or FAILURE of SEND in the REXX Exec.

The nature of the original CLIST in question was such that a failure to
execute statements in the CLIST following the SEND was a much more
serious problem for us than if the message had just failed to reach its
intended destination, which did not occur.   In our case the end user
would eventually have detected loss of an expected confirmation message
on his own and followed up if that had occurred.

-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Scheduling software

2015-05-16 Thread Joel Ewing
On 05/16/2015 03:03 AM, Brian Westerman wrote:
 For simple Time of Day/Day of week job and task scheduling there is SyzAUTO/z 
 www.SyzygyInc.com/SyzAUTOz.htm.  It's quite a bit cheaper than the products 
 from IBM, CA, and ASG.
 
 Brian Westerman
 

There are also some freebies from cbttape.org for initiating console
commands, STCs, or jobs at some future time.  This is the easy part of
automated scheduling.

The hard part is reliably monitoring success/failure of individual jobs
in job streams of related production jobs and handling job restarts
after failures have occurred.  A small number of Operators can reliably
oversee thousands of production jobs per day with CA, ASG, or similar
scheduling products that provide the capability to perform routine
production job tracking and provide support for the occasional job
restart to recover from inevitable failures.

I can't conceive of trying to reliably run a production z/OS system with
even a few hundred scheduled jobs per day without having such tools in
place.


-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Security vulnerability for RSU updates

2015-05-15 Thread Joel Ewing
On 05/14/2015 01:27 PM, Mark Jacobs - Listserv wrote:
 As a general rule, the higher the RSU level the more security and
 integrity fixes will be included. The only way you'll know for sure is
 to access IBM's portal and download the special holddata.
 
 https://www14.software.ibm.com/webapp/set2/sas/f/redAlerts/20130227.html
 
 Nathan Astle mailto:tcpipat...@gmail.com
 May 14, 2015 at 2:14 PM
 Hi

 Are any relationship for security vulnerability with having recent RSU ?
 Precisely is there a dependency for security on every RSU updates ?

 Nathan

...
Don't expect the repair of all security bugs to be nicely synchronized
with RSU levels.  An RSU level implies a higher level of confidence in
the quality of a collection of PTFs, in that a greater amount of system
testing has been done, but it doesn't preclude the existence of other
bugs.  At any given time there are always any number of unknown or
not-yet-reported bugs in z/OS, some of which could be security related,
including that point in time that is the cutoff date for RSU-level
maintenance.  Fortunately security bugs are rare and even then the
exposure may only affect some installations.

I thought the best bet to stay on top of z/OS security vulnerability
issues these days was to subscribe to IBM notifications for z/OS
security alerts.  Security alerts aren't that common, but that way you
get the earliest possible notification of known problems that might be
an exposure for your installation -- and less risk of missing a rare
occurrence than periodic checking of HOLD data or just aiming for some
arbitrary closeness to the most current maintenance level.
-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Notify for XMIT

2015-05-14 Thread Joel Ewing
On 05/13/2015 03:57 PM, Ed Finnell wrote:
 Check out Sam G's BRODCAST utility on CBT. It addresses the problem.
  
  
 In a message dated 5/13/2015 3:46:03 P.M. Central Daylight Time,  
 ste...@copper.net writes:
 
 How many  (willing to say) are using logfile.USERID in place of 
 SYS1.BRODCAST? How  difficult was this to implement? And could it 
 solve that problem for  us?
 
 
...
We only had one minor problem with converting to user log files.  There
is one subtle inconsistency in the TSO SEND command behavior that bit us
in a very puzzling way when we converted to user broadcast datasets many
years ago:  the defined return codes for the TSO SEND command change
when you switch to user log data sets!!

Without user log files you get a non-zero return code only if there is
an actual message delivery failure.  With user log data sets, a non-zero
return code is possible if the message is successfully stored and will
eventually be successfully delivered but the user is just not able to
immediately see it for some reason (e.g., not logged on).

If you are invoking the SEND command from a CLIST in certain
environments (we were, in batch TSO), the non-zero return code can cause
the CLIST containing the SEND command (and batch TSO) to immediately
terminate without completing any following statements in the CLIST.  If
you have never seen this behavior in a CLIST before, it is very
difficult to debug when you have evidence the CLIST began, a SEND
command was reached (someone got a message), but the following
statements, which you assumed had to have also been reached, didn't seem
to have worked.  After we were able to prove the CLIST was bailing out
after the SEND command, we consulted the manuals, found the return code
change was documented, but still didn't understand the CLIST behavior.
The manuals at the time were not at all clear (they were subsequently
revised) as they implied to someone unfamiliar with TSO internals that
this immediate termination of TSO would only occur for commands in an
input file, not also for command statements embedded in a CLIST.  I
think we resolved the issue by converting the CLIST to REXX so another
TCB was involved.

The additional return code granularity for SEND could be potentially
useful, but there should have been some warning as a migration
consideration under the discussion of user log data sets.

-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: IEFBR14 question

2015-05-06 Thread Joel Ewing
On 05/05/2015 09:28 AM, Scott Ford wrote:
 All,
 
 Since I started this question, so how is one to check for the existence
 of datasets if we can't really trust IEFBR14 ? Yeah, I can write an
 Assembler routine, by why, when BR14 is supposed to work...I have staging
 datasets we use to build our product, my first step is the IEFBR14 , to
 delete these datasets if they exist, if not fine give me a return code I
 can test for and proceed on with the other steps.
 
 Regards,
 
 Scott
 
 On Tuesday, May 5, 2015, Elardus Engelbrecht elardus.engelbre...@sita.co.za
 wrote:
 
 John Eells wrote:

 The recurring confusion about what IEFBR14 itself actually does (clear
 GPR15 and return) and what people seem to think it does from the odd post
 here and there (not yours) is one reason I call IEFBR14 the most misused
 program in the history of z/OS.

 ... and also in the history of MVS and OS/390.

 and Lizette Koehler said really really tongue in cheek this (and confusing
 a person who replied on her post):

 And IEFBR14 is more than return on R14.  It does stuff.  Just look how
 big it is.

 and I said earlier: 'IEFBR14 is just a lazy program setting a RC=00 and
 nothing else.'

 Hmmm, I *always* ( ;-D ) wanted to sign IEFBR14 with a (program) digital
 certificate with RACF, but PDS datasets (at least the SYS1.LINKLIB) are not
 supported for this stunt. grin

 Has a patent been taken out on IEFBR14 (or its queer position in MVS,
 OS390, z/OS)? If not, I want to register it, but I am too broke. ;-D

 Groete / Greetings
 Elardus Engelbrecht

...
A trivial two-instruction program like IEFBR14 that is totally unaware
of any data sets can obviously not set different return codes based on
the status of some data set.  You can always trust IEFBR14 to do what it
was designed to do -- nothing except setting a RC of 0. That IEFBR14 as
a job step program ever appears to do something more than that is an
illusion and purely an artifact of Initiator actions before passing
control to the job step program and Initiator actions after the job step
program sets its return code and completes.  Those other effects belong
to the job step and its JCL as a whole, not to
IEFBR14.  By design, if a job step return code exists it is based solely
on the job step program results (trivial in case of IEFBR14) and not on
any pre-step or post-step actions done by the initiator.

For the purpose of setting a job step RC based on data set status
(existence, size, whatever), we always used a Batch TSO step that would
run an installation REXX EXEC with a data set name parameter, letting
the REXX Exec check internally on status of the data set in some way
(LISTDSI, LISTC, etc.) and based on that status set a return code that
would become the job step RC.  Such an EXEC is fairly trivial for an
installation to set up and document.  Writing Assembler code to get this
functionality would IMO be considerable overkill.

-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: New zPDT redbook

2015-04-23 Thread Joel Ewing
On 04/23/2015 02:12 PM, Nims,Alva John (Al) wrote:
 Found this page on IBM's pricing:
 
 https://www-304.ibm.com/partnerworld/wps/servlet/ContentHandler/pw_com_zpdt_pricing
 
 
 Al Nims
 Systems Admin/Programmer 3
 Information Technology
 University of Florida
 (352) 273-1298
 
 -Original Message-
 From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On 
 Behalf Of Vince Coen
 Sent: Thursday, April 23, 2015 3:08 PM
 To: IBM-MAIN@LISTSERV.UA.EDU
 Subject: Re: New zPDT redbook
 
 Is there an easy way of getting zPDT?
 
 and more importantly is it cheap?
 
 
 
 
 On 23/04/15 17:37, John McKown wrote:
 Which may be of interest to those fortunate enough to have a zPDT 
 system

 http://www.redbooks.ibm.com/abstracts/sg248205.html?Open

 Aside: I rather like that I can now get many Redbooks via Google Play.

 
...
So the minimal zPDT cost, including at least a $1K-$2K outlay to
purchase one of the few approved hardware platforms for running zPDT, is
$5-6K for the first year and $3,750 per year for later years -- but you
have to be associated with a qualified IBM PartnerWorld independent
software vendor company.  The price at least seems to be coming down,
but it is still not something targeted for personal experimental use.


-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Outsourcing Experiences

2015-04-22 Thread Joel Ewing
On 04/22/2015 07:03 AM, John McKown wrote:
 On Wed, Apr 22, 2015 at 2:09 AM, Elardus Engelbrecht 
 elardus.engelbre...@sita.co.za wrote:
 
 Tom Brennan wrote:

  The word resource hadn't been used for a person yet, and managers ran
 projects themselves and knew who was best for each particular task.  What a
 strange world it was.

 True. I see outsourcing as: 'Let other companies do my work which is NOT
 my core business or which is cheaper to let others do it'.

 So, cleaning of buildings, gardening, sewer unblocking, guards, catering,
 etc. are usually outsourced here in sunny South Africa.

 I know of a bank, which has folded many years later, which outsourced all
 its IT to an outside company. ALL of it, mainframe, network, PC, printing,
 staff managing those equipment, etc. were transferred to that company. [1]

 That despite their IT is part of *core business*.
 
 
 Go figure.

 
 ​That scenario is _exactly_ what the company that I work for wants. As I
 understand it, they only want people as employees who are company
 specific. This would be people like high level managers, actuaries, end
 user application designers and programmers (some of whom are consultants).
 Other workers, like maintenance, IT infrastructure, new customer  claims
 keyers are to be supplied by other companies. They also don't want to own
 any real estate, just rent office space. But they don't seem to be able to
 find a buyer for the building we're in. So they have outsourced building
 management. Basically they want only resources which relate _directly_ to
 the product we sell (insurance), not any supporting role resources. I
 guess like they buy electricity and sewer as a commodity. IT, et al., are
 just commodities. I can see their point. I guess.
 
 
 

 Groete / Greetings
 Elardus Engelbrecht

 [1] - I know it, because I found out that grimy slimy truth during job
 hunting in 1990 around...

 --
 If you sent twitter messages while exploring, are you on a textpedition?
 
 He's about as useful as a wax frying pan.
 
 10 to the 12th power microphones = 1 Megaphone
 
 Maranatha! 
 John McKown

I would second the sentiment that it would not be a wise move to
outsource IT when that is a part of the core business.  With web
presence being such a key component of customer service these days, I
would think that would make IT a part of the core business for any
entity where providing services to customers is a major component of
their business.  Last time I checked most State governments were heavily
involved in providing services to their customers/tax-payers.

Putting part of your core business out of your direct control makes it
much more difficult to adapt when the inevitable need for change occurs.

Outsourcing one part of your IT can restrict your future ability to
adopt the most cost-effective IT solution when change comes.  The
limitations of typical outsourcing contracts will make changing any
outsourced component of IT more difficult and probably more expensive --
so there will be strong pressure to resist changing that component even
when that might otherwise be the best approach.


-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: A New Perfromance Model ?

2015-04-05 Thread Joel Ewing
Let me guess, they still didn't bother to add a test for
table-size-exceeded to perform a graceful failure of the transaction
rather than take out the entire CICS region when the increased table
eventually proves inadequate; and if just increasing max table sizes
greatly increased CPU, then no doubt they are doing an inefficient
search of the tables or always initializing the entire table
unnecessarily rather than just those entries actually used.  Perhaps the
transaction is doing a serial search of entries in the table, which
while tolerable for 10 entries is a bad idea for 33 entries and a
terrible one for 99.  Or they could even be doing something worse, like
initially clearing the entire table and searching the entire table
serially even when only a subset has been used.
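
The defensive pattern being argued for here is trivial in any language;
a sketch in Python (the names and the capacity are made up):

  MAX_ENTRIES = 99              # hypothetical table capacity

  class BoundedTable:
      def __init__(self):
          self.entries = {}     # keyed lookup instead of a serial scan

      def add(self, key, value):
          if len(self.entries) >= MAX_ENTRIES:
              # fail this one transaction gracefully rather than
              # overrun storage and take out the whole region
              raise RuntimeError("table full")
          self.entries[key] = value

      def find(self, key):
          return self.entries.get(key)   # O(1) rather than O(n)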

Transaction algorithm redesign to reduce maximum memory required by
tables is probably not trivial, so throwing hardware memory at that
could well be cost-effective and make a lot of sense.  Failing to expend
coding effort to check for table overflow and thereby risking loss of an
entire production CICS region and downtime is definitely false economy, and
developers should be required to always make explicit tests for table
overflow.  I would think it would only take a minor amount of time to
examine the code to look for an inefficient table search or unnecessary
table initialization, and if the tables are growing with time the
potential long-term payoff in saved CPU time and better transaction
performance from fixing any problems in that area should be well worth
the effort.

Our Technical Services had metrics in place and published daily
statistics available to application areas, IT management, and end
users which showed among other things the top CICS transaction and
application contenders for total resource usage and per transaction
resource usage and response time.  It was obvious when changes suddenly
raised the costs unexpectedly in ways that adversely affected the system
as a whole or critical applications.  Pressure from end users (who
didn't want to see their proportion of computing costs increase) and
other application areas (who didn't want to be adversely affected by
other resource hogs), and management backing made it relatively easy for
Technical Services to get application code performance reviews in such
cases.

Since such issues in a production environment were also regarded as a
system problem, it was not uncommon for our application programmers to
work hand-in-hand with TechSys CICS support perusing code to find quick
fixes where that was possible, and suggest back out and possible
redesign approaches where there was no easy fix.

Wise management should understand that expending a few hours looking for
trivial performance tuning in mainframe applications that are seeing a
performance problem is always cheaper than a mainframe upgrade.
Mainframe environments typically have the tools to make such analysis
feasible.  And, if some trivial change does indeed resolve the problem,
this is also an opportunity to further educate Applications development
staff so they know what techniques to avoid in the future.
Joel C. Ewing

On 04/04/2015 04:11 PM, Blaicher, Christopher Y. wrote:
 Please send us the name of the company.  First, so the IBM sales person can 
 get right over there and make a nice commission, and secondly, so I can sell 
 any stock I may have in the company.
 
 If they have management that is that ill-informed and, well frankly, stupid 
 (I tried to not use that word, but in this case it fits), then it doesn't 
 speak well for the long-term survival of the company.
 
 Personal opinion only.
 
 Chris Blaicher
 
 -Original Message-
 From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On 
 Behalf Of Roger W. Suhr (GMail)
 Sent: Saturday, April 04, 2015 4:46 PM
 To: IBM-MAIN@LISTSERV.UA.EDU
 Subject: Re: A New Perfromance Model ?
 
 On 4/4/2015 3:11 PM, esst...@juno.com wrote:
 Hi

 I'm not a performance analyst, I'm a CICS & MQ Sys-Prog.
 I don't understand this new paradigm.

 Some Back ground
 March 1 Our Development Team introducd some new functionality.
 The following week we were plagued with multiple 0C4 and 0C7 - ASRA Abends, 
 Storage Violations, and one CICS Task abend resulting in a loss of our main 
 production CICS Region.

 March 7 a second wave of application changes was deployed.
 All Of the Abends with the Exception of The Storage Violation seem to
 have evaporated, as they no longer exist. However we now see a 
 significant Increase of I/O, Almost double in CPU consumption by many tasks, 
 and an Increase in Storage Occupancy for these transactions.

 Some Transaction Storage incresaed by 6+ Meg.

 Working with our Capacity Planning and Performance  person and
 reviewing CMF data, RMF Reports, running Traces, and real time Monitors we 
 have identified the 7 biggest culprits. (STROBE is a Great Product).
 We provided our findings and analysis to our Management and 

Re: Setting up a new user for OS/390 and Z/OS

2015-03-25 Thread Joel Ewing
On 03/25/2015 04:03 PM, Shmuel Metz (Seymour J.) wrote:
 In 7426932741606214.wa.vbcoengmail@listserv.ua.edu, on
 03/25/2015
at 01:21 PM, Vince Coen vbc...@gmail.com said:
 
 Anyone have the JCL to set up a new user for TSO and other services
 for both OS/390 and Z/OS.
 
 There is no "the JCL" for that; it depends on the security setup, the
 privileges you want the user to have and the release level.
  
 
Assuming that by "JCL" he really meant "batch job", this can of course
be done from a batch job running under a RACF-SPECIAL userid issuing
RACF commands from a batch TSO job step; but coming up with the exact
TSO command sequence needed is the hard part because that would be
mostly unique to your installation.

Once you have determined all the commands to set up a new user manually,
you could presumably come up with a batch job with a batch TSO command
sequence template to do everything required for your installation and
just manually plug in different userids and other variable parameters at
appropriate points in the commands before submitting; but if you do
this, make sure any userid used for such batch jobs has its data in JES
queues protected from viewing by others. Otherwise you may be allowing
all sorts of people to view your job streams via SDSF and see what
userids, passwords and authorizations you are granting, which would not
please a competent auditor.  That particular exposure doesn't exist
when the commands are issued from an interactive TSO session, which is
why we created REXX execs for the RACF administrator that prompt for
all the required information and generate and issue the required
sequence of RACF and other commands to set up a new user (plus catalog
alias definitions) and enforce our installation conventions.

You also need support for the inverse steps required to delete a user
from the system, which can be equally complex and prone to error.

And just a thought:   If you are going to manually customize a bunch of
TSO commands in a batch job stream, it may be just as easy to add a
leading PROC statement and customize the commands in a member in a
special CLIST library that is restricted to the RACF administrator, from
which the sequence could be executed as a single CLIST command either in
batch TSO or directly in TSO.

I repeat the admonition of others that IBMUSER should only be used to
create your own installation-specific RACF SPECIAL userid (which should
normally not have OPERATIONS authority) and subsequently delete or
disable the IBMUSER userid after verifying the new SPECIAL userid is
functional for RACF updates.  No need to make a potential compromise of
z/OS easier by using a known administrator name.
-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: O/T Assemblers were once people: My aunt did it for NASA

2015-02-27 Thread Joel Ewing
On 02/27/2015 12:04 AM, Elardus Engelbrecht wrote:
 Ed Gould wrote:
 
 http://www.theregister.co.uk/2015/02/26/my_aunt_was_a_human_assembler_at_nasa/
 
 Wow! Interesting. My jaw also dropped to the floor.
 
 Ok, ok, ok, I give up, we are too spoiled today with all these fancy systems, 
 languages and applications and games we today have. 
 
 Those people have *nothing*. Just pencil+eraser and paper and references to 
 OpCodes. Then they write programs by hand without fancy compilers and syntax 
 checkers.
 
 Groete / Greetings
 Elardus Engelbrecht
 
 

This is weird.  The article implies people were doing machine coding for
an IBM 704 by hand in the 1960s to evaluate complex mathematical formulas!
But the SHARE Assembly Program for symbolic coding and the FORTRAN compiler
were both available for the IBM 704 by 1956 (Wikipedia even has a
picture of the October 1956 IBM 704 FORTRAN Reference Manual).

Although simulation is mentioned once, this may be a reference to
mathematical flight simulation rather than a reference to simulation of
unavailable computer hardware.  The way the article is worded implies
coding by hand of mathematical equations needed for space craft design
for evaluation by an IBM 704, which suggests the generated code was
machine code for the IBM 704.  If that were the case, it wouldn't have
made sense to do the assembly translation by hand in the 1960s, unless
IBM 704 run time was harder to come by than human assemblers.

On the other hand if simulation of computer hardware for an on-board
flight computer was involved, hand assembly of code for testing on a
simulator running on an IBM 704 would make much sense.

Perhaps the remembered time line in the article is incorrect.  Some of
the on-line articles on NASA history suggest that by 1959 NASA was
already phasing out use of the IBM 704 in favor of IBM 7094s, which calls into
question whether an IBM 704 would still be in use by NASA much into the
1960s.  If the IBM 704 work in question started prior to 1956, fewer
software options would have been available.

-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: O/T What ‘The Imitation Game’ didn’t tell you about Turing’s greatest triumph - The Washington Post

2015-02-24 Thread Joel Ewing
On 02/23/2015 10:59 PM, Robert A. Rosenberg wrote:
 At 11:59 -0600 on 02/23/2015, Paul Gilmartin wrote about Re: O/T What
 ‘The Imitation Game’ didn’t tell you abo:
 
 On Mon, 23 Feb 2015 10:14:44 -0600, Joel Ewing wrote:

 In the original Email that I received from Ed the Email source text (as
 opposed to the way my Email client renders it) shows the presence of a
 hex-encoded blank (= 2 0) followed by a CR at about the 70-character

 GIYF:
 Character-set: USASCII; Format-flowed
 Content-transfer-encoding: Quoted-printable

 This is certainly a confusing inconsistency in Thunderbird if not an
 outright bug (depends on whether this way of wrapping long character
 strings is actually an approved Email standard).

 RFC 822 requires support for up to 999.
 
 Format-flowed says that long lines are split with each line ending in
 =20CR. Each split line should be no longer than 80 characters (including
 the =20CR). Your input meets this format.
 
 The =20CR is removed and the next line is concatenated until a line does
 NOT end with =20CR. At that point you have the original long input line.
 This requires that the receiving MUA support Format-flowed however.
 
 

  Apparently some other
  Email clients do interpret it in a way that preserves the link. On the
 other hand there are certainly Email clients that send long URLs without
 using this formatting convention, as I frequently receive long URLs in
 Emails (including the reformatted version of this URL from Paul) that
 work fine with Thunderbird.
 
 See my prior message. The URL was malformed since it was not included in
 angle brackets. URLs without angle brackets are part of the text and are
 not indicated as URLs - Thus may not be spotted for special treatment.
 Note that such text URLs MAY be correctly handled anyway but you are
 playing Russian Roulette with 5 bullets if you do not bracket your URLs.
 

 -- gil


From doing some testing with my Thunderbird Email client, which by
default does text line wrapping at around 70 chars for composed mail, if
I send a long URL in an Email via copy/paste from a browser URL it does
not use angle brackets around the URL but does suppress line wrapping in
the middle of the URL.

From looking in the IBM-MAIN archives at the original 21 Feb 2015
23:34:42 -0600 posting by Ed Gould and comparing that to the follow-up
22 Feb 2015 08:56:50 -0600 posting by Paul Gilmartin you can see that
Paul's post with the usable URL follows the same convention as
Thunderbird of suppressing line wrap within the URL while other text
lines are wrapped at a shorter length.  While the browser display of
that archived posting does wrap the URL depending on browser window
size, increasing the width of the browser window causes the URL to
re-flow to fill the larger width, clearly showing the URL in the post
itself is not line wrapped.  Just as clearly the original post by Ed has
the URL line-wrapped at the same length as ordinary text and the browser
link in the IBM-MAIN archive web page display excludes the last two
lines intended as part of the URL. Clearly the URL problem has occurred
by the time the post is received by IBM-MAIN and whatever convention is
used to broadcast the postings to the list members is just preserving
that difference.

If the technique and application Ed is using to compose his postings
does not allow any way to influence line wrapping, then manually
inserting angle brackets around the entire URL might circumvent the
problem; but I suspect the sending application could still do things in
the way it handles line wrap or assigns Email content attributes that
might prevent that suggestion from working as well.  If Ed has no way to
circumvent and it's an interesting enough link, it's not that big of a
deal for a recipient to circumvent the problem manually by editing the URL.
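
For anyone curious, the reflow rule Robert describes is simple enough to
sketch.  This is a minimal illustration in Python, not a full MIME
decoder; it assumes the raw quoted-printable body is available as a
string with CRLF line endings, and it ignores the bare trailing "="
soft break that full quoted-printable also allows:

  def unwrap_flowed_qp(body):
      # Join lines split by the =20 + CR soft-break convention described
      # above; a line not ending in =20 terminates the logical line.
      out = []
      pending = ""
      for line in body.split("\r\n"):
          if line.endswith("=20"):
              pending += line[:-3]      # drop the marker, keep joining
          else:
              out.append(pending + line)
              pending = ""
      if pending:
          out.append(pending)
      return out

  # A wrapped URL like the one in Ed's post comes back in one piece:
  print(unwrap_flowed_qp("http://example.com/very/=20\r\nlong/path"))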


-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: O/T What ‘The Imitation Game’ didn’t tell you about Turing’s greatest triumph - The Washington Post

2015-02-23 Thread Joel Ewing
In the original Email that I received from Ed the Email source text (as
opposed to the way my Email client renders it) shows the presence of a
hex-encoded blank (=20) followed by a CR at about the 70-character
mark in the two places where the original URL later gets separated when
quoted.  I'm not sure who or what is producing or introducing this URL
formatting as it is definitely not part of the actual URL.  My
Thunderbird Email client seems to partly realize that the extra
characters are not a legitimate part of the URL and, when displaying the
Email, elides the three parts without a blank or forced CR; but the part
rendered in blue as a link and the part passed as the URL when
clicking on the link stop at the first =20 CR point, so the passed
URL from the link doesn't work in my browser.

This is certainly a confusing inconsistency in Thunderbird if not an
outright bug (depends on whether this way of wrapping long character
strings is actually an approved Email standard).  Apparently some other
Email clients do interpret it in a way that preserves the link. On the
other hand there are certainly Email clients that send long URLs without
using this formatting convention, as I frequently receive long URLs in
Emails (including the reformatted version of this URL from Paul) that
work fine with Thunderbird.
Joel C. Ewing

On 02/23/2015 02:22 AM, Wayne Bickerdike wrote:
 The link works fine in my Gmail client. Almost all of Paul's OMVS email
 postings end up in my spam folder. Google have been supplied with many
 examples for their (non) spam filter. Any one else find this happens?
 
 On Mon, Feb 23, 2015 at 1:56 AM, Paul Gilmartin 
 000433f07816-dmarc-requ...@listserv.ua.edu wrote:
 
 On Sat, 21 Feb 2015 23:34:42 -0600, Ed Gould wrote:

 http://www.washingtonpost.com/national/health-science/what-imitation-
 game-didnt-tell-you-about-alan-turings-greatest-triumph/2015/02/20/
 ffd210b6-b606-11e4-9423-f3d0a1ec335c_story.html


 What 'The Imitation Game' didn't tell you about Turing's greatest
 triumph

 Unwrapped, I hope:


 http://www.washingtonpost.com/national/health-science/what-imitation-game-didnt-tell-you-about-alan-turings-greatest-triumph/2015/02/20/ffd210b6-b606-11e4-9423-f3d0a1ec335c_story.html

 Ed needs to get a better mail agent.  In cases such as this, dumber is
 probably better than smarter.

 -- gil




-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Dataset PACK profile

2015-01-15 Thread Joel Ewing
On 01/14/2015 06:33 PM, Dan wrote:
 I have a strange memory of a previous thread about ISPF PACKing (probably
 pre-1995).
 
 I seem to recall someone from IBM (in the ISPF area) posting a query about 
 the use of the ISPF PACK option.  I believe they were hoping to remove the 
 function.
 I've searched the archives but can't seem to find anything.
 My recollection of the thread was that IBM wanted to remove the capability of 
 turning ON packing but any member that was already packed would remain packed 
 (unless the user requested PACK OFF).  As PACK is still an option that can be 
 enabled someone must have made a strong enough argument to keep the function 
 (although I don't see why).
 
 Does anyone else recall that discussion or was I dreaming? :D
 If you do recall it, why was it not crippled?
 
 Thanks,
 Dan D.
 

I also recall a discussion of this topic with IBMers involved, but it's
been too long to remember the context.  Perhaps it might have been a
query raised by an IBMer at some SHARE Free For All session rather than
an on-line thread.  I remember being one who indicated eliminating PACK
would have caused our installation some problems at the time.  We were
using PACK for long-term storage of source code in PDS libraries, some
of which would have been too large for the single-volume PDS limit
without PACK.  At the time it would have been a non-trivial task to go
to larger 3390 emulated volumes or to logically divide the large
libraries into smaller ones.

I think the idea behind the question was an assumption that cheap DASD
had eliminated the motivation for PACK, or that somehow the hardware
compression feature could be used as a replacement; but data set size
constraints had not been fully considered and at the time hardware
compression was more CP-intensive and also unsupported for PDS/PDSE data
sets.

If a PDSE library could have been multi-volume, that might have been one
path to eliminate our requirement for PACK.

-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Dataset PACK profile now PDSE's

2015-01-15 Thread Joel Ewing
On 01/15/2015 03:29 PM, Ed Gould wrote:
 On Jan 15, 2015, at 2:36 PM, Joel Ewing wrote:
 --SNIP---
 If a PDSE library could have been multi-volume, that might have been one
 path to eliminate our requirement for PACK.

 -- 
 Joel:
 I could SWEAR that in one of the announcements in the last year (or so)
 that PDSE's would support multivolume. Or am I just incorrect.
 
 Ed
 

z/OS 2.1 DFSMS Using Data Sets (2014) still lists single-volume
restriction for PDSE.  But APAR OA45917 "NON-AMS PDSE CAN BE ALLOCATED
AS MULTIVOLUME" reports that an unusable, "not valid" PDSE can actually
be allocated as multivolume via JCL in z/OS 2.1, closed FIN 2014-09-04.
 Perhaps the fix will be to make it valid?  In any event too late for me
-- been retired 3 1/2 years.

-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Dataset PACK profile now PDSE's

2015-01-15 Thread Joel Ewing
On 01/15/2015 06:08 PM, Paul Gilmartin wrote:
 On Thu, 15 Jan 2015 17:31:44 -0600, Joel Ewing  wrote:
 
 On 01/15/2015 03:29 PM, Ed Gould wrote:
 On Jan 15, 2015, at 2:36 PM, Joel Ewing wrote:
 --SNIP---
 If a PDSE library could have been multi-volume, that might have been one
 path to eliminate our requirement for PACK.

 --
 Joel:
 I could SWEAR that in one of the announcements in the last year (or so)
 that PDSE's would support multivolume. Or am I just incorrect.

 Wouldn't EAV be an alternative solution?  Or is there another restriction?
 (54 GB is *so* 20th Century.)
 
 z/OS 2.1 DFSMS Using Data Sets (2014) still lists single-volume
 restriction for PDSE.  But APAR OA45917 "NON-AMS PDSE CAN BE ALLOCATED
 AS MULTIVOLUME" reports that an unusable, "not valid" PDSE can actually
 be allocated as multivolume via JCL in z/OS 2.1, closed FIN 2014-09-04.
 Perhaps the fix will be to make it valid?  In any event too late for me
 -- been retired 3 1/2 years.
 
 -- gil
 

PDSEs were originally also restricted to 65535 tracks per volume, but I
notice that restriction was relieved at least by z/OS 1.10.  It's
probable that even 3390-27's would have been large enough to have
allowed our installation to comfortably convert to unPACKed members in
either a PDS or PDSE data set, but we were just beginning to phase in
some 3390-27's when I retired, and at that point there was no motivation
to expend unnecessary effort to change PDS PACK conventions that were
working.

The decision for an installation to use larger volume sizes, even a
3390-27, has implications for backup, recovery, DR support, and DASD
management, which may or may not make that an appropriate or quick
solution; although it is nice to have that alternative if no other
solution exists.  I would think it preferable to have the option of
multivolume PDSEs, so that the decision of maximum installation DASD
volume size could be based on other considerations and not be forced
just by the requirements of a few exceptional data sets.

-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Enumerating User IDs (was: CANCEL TSO Logon?)

2015-01-05 Thread Joel Ewing
On 01/05/2015 09:35 AM, Paul Gilmartin wrote:
 On Mon, 5 Jan 2015 07:21:28 -0800, Charles Mills wrote:
 
 For TSO, you can probe for known user ids, but you will see a lot of LOGON 
 and IEA989I messages in the SYSLOG.

 Only if you set a specific SLIP trap for this condition.

 In the video cited:
 
 On Jan 2, 2015, at 3:31 PM, Mark Regan wrote:

 Black Hat 2013 - Mainframes: The Past Will Come to Haunt You, by a
 Philip Young and it's about an hour long.

 http://youtu.be/uL65zWrofvk
 
 ... the speaker opined that such probing is less likely to be detected by
 Security than by Operations as a spike in CPU usage.
 
 -- gil
 
RACF uses SMF and console messages to record logon/authentication
failures.  These could be intercepted in real time to alert someone of
unusual probing while it is occurring.  We used independent review of
daily summary reports generated from RACF SMF records to verify that
such probing had not occurred, just the typical typos and forgotten
passwords from terminals within the corporation.  With our normal system
workload, someone would have been more likely to notice a flood of
unusual console messages than see any noticeable impact on CPU.

-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: CANCEL TSO Logon?

2015-01-05 Thread Joel Ewing
On 01/05/2015 05:56 PM, Lou Losee wrote:
 Hopefully all of your started proc user ids are PROTECTED otherwise those 3
 invalid password attempts could cause you big problems.
 
 Lou
 
 --
 Artificial Intelligence is no match for Natural Stupidity
   - Unknown
 
 On Mon, Jan 5, 2015 at 2:21 PM, Mike Schwab mike.a.sch...@gmail.com wrote:
 
 On Mon, Jan 5, 2015 at 9:45 AM, Vernooij, CP (ITOPT1) - KLM
 kees.verno...@klm.com wrote:
 What is the point in trying to find a valid userid, if the userid will
 be suspended after trying 3 invalid passwords (in our situation)?

 Kees.

 But not if you keep rotating IDs.  It is three in a row for the same ID.

 --
 Mike A Schwab, Springfield IL USA
 Where do Forest Rangers go to get away from it all?

No, it's not three failed attempts in a row from the same source for the
same ID; it's three failed logon attempts (if that is the limit) for the
same ID before the next successful logon authentication for that same
ID, whether the logon attempts are spread over seconds, hours, or days,
and across all possible MVS systems and applications that might be
requesting userid authentication.  If your hack attempt rotates through
all known userids more than three times in the same day on a system
where the average userid is only authenticated one or two times a day,
the odds are you will start revoking some userids during the third pass
(and start potentially being noticed).  For a userid that only has one
legitimate logon per week, three bad attempts spread across a week would
be sufficient to cause a revoke.  At a max of three bad password hack
attempts per ID per day, how many years does that take to have
reasonable odds of hacking any individual userid?  How do installation
rules that force users to change their password every 60 to 90 days
affect the odds of that success, since there is a non-zero probability a
user could change to a password value the hacker had already attempted
and will never try again because he already knows it is invalid?
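
To put a rough number on that last question, here is the
back-of-the-envelope arithmetic in Python; the 39-character repertoire
(A-Z, 0-9, national characters) and the 8-character length are
assumptions for illustration only:

  # Passwords: 8 positions from a 39-character repertoire (assumed).
  space = 39 ** 8                        # about 5.4e12 combinations
  guesses_per_day = 3                    # staying under the revoke limit
  days_for_half = space / 2 / guesses_per_day
  print(days_for_half / 365.25)          # ~2.4e9 years for 50% odds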


-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Spam (11.802):Re: CANCEL TSO Logon?

2015-01-05 Thread Joel Ewing
Before PROTECTED was implemented we only had this happen once that I
know of -- for a CICS region.  It wasn't a hack or DoS attempt, just a
user who wasn't paying attention: he thought he was telling
SuperSession to take him to that CICS region three times in a row, when
he was really on a logon screen, so the CICS region name (which was the
same as the started task name) was taken as the userid on three logon
attempts, which revoked the CICS region userid.  I got the night
call.
J C Ewing

On 01/05/2015 06:17 PM, Tony's Basement Computer wrote:
 Being old enough to remember the pre-Protected days, when this feature 
 appeared we implemented it into every user profile we could find that 
 satisfied the criteria.  Zero pain, more uninterrupted sleep.
 
 -Original Message-
 From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On 
 Behalf Of Lester, Bob
 Sent: Monday, January 05, 2015 6:07 PM
 To: IBM-MAIN@LISTSERV.UA.EDU
 Subject: Re: CANCEL TSO Logon?
 
 Hey Lou,
 
BTDT, *very* painful.  Had to learn that one the hard way.
 
 Cheers!
 BobL
 
 -Original Message-
 From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On 
 Behalf Of Lou Losee
 Sent: Monday, January 05, 2015 4:56 PM
 To: IBM-MAIN@LISTSERV.UA.EDU
 Subject: Re: CANCEL TSO Logon? [ EXTERNAL ]
 
 Hopefully all of your started proc user ids are PROTECTED otherwise those 3 
 invalid password attempts could cause you big problems.
 
 Lou
 
 --
 Artificial Intelligence is no match for Natural Stupidity
   - Unknown
 
 On Mon, Jan 5, 2015 at 2:21 PM, Mike Schwab mike.a.sch...@gmail.com wrote:
 
 On Mon, Jan 5, 2015 at 9:45 AM, Vernooij, CP (ITOPT1) - KLM 
 kees.verno...@klm.com wrote:
 What is the point in trying to find a valid userid, if the userid 
 will
 be suspended after trying 3 invalid passwords (in our situation)?

 Kees.

 But not if you keep rotating IDs.  It is three in a row for the same ID.

 --
 Mike A Schwab, Springfield IL USA
 Where do Forest Rangers go to get away from it all?

 --


-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Enumerating User IDs

2015-01-05 Thread Joel Ewing
So what TSO logon should do when the userid is invalid or not
authorized for TSO is give no error indication on the logon screen,
but populate the panel fields with plausible default values that look as
if a RACF TSO segment was found, and force the user to supply the
password field before giving a failure message.  Doesn't sound like a
big change to implement, but what do I know.
J C Ewing

On 01/05/2015 07:20 PM, Lou Losee wrote:
 The problem is that, when TSO populates the logon panel, it does not do
 a RACROUTE REQUEST=INIT (RACINIT) but rather does a RACROUTE
 REQUEST=EXTRACT (RACXTRT) against the user id specified to populate the
 fields on the logon panel.  This does not result in any RACF message or SMF
 record, but TSO does use the RC to inform the user if the user id specified
 is defined or not.
 
 Lou
 
 --
 Artificial Intelligence is no match for Natural Stupidity
   - Unknown
 
 On Mon, Jan 5, 2015 at 6:05 PM, Frank Swarbrick 
 002782105f5c-dmarc-requ...@listserv.ua.edu wrote:
 
 Something like this?
 ICH408I USER(MYPSWD99) GROUP() NAME(??? )
   LOGON/JOB INITIATION - USER AT TERMINAL DVDU NOT RACF-DEFINED

 The above was generated using the CICS CESN signon transaction.
  From: Tony's Basement Computer tbabo...@comcast.net
  To: IBM-MAIN@LISTSERV.UA.EDU
  Sent: Monday, January 5, 2015 9:57 AM
  Subject: Re: Enumerating User IDs (was: CANCEL TSO Logon?)

 Back years ago I worked at a Top Secret shop.  That product wrote a
 console message when a log on attempt has occurred that specified an
 unknown user.  Sadly, what was usually seen was a password.  It's been
 years since I was in that business so I don't know if that display is a
 configurable option.

 Sidebar:  I watched the video and I found it dismaying.  The presenter spoke
 in a demeaning tone of the traditional terminology with which we are all
 familiar, which I found insulting.  I felt he acted proud that *his*
 technology was superior because *his* terms are more current, thus
 better. I felt he made some assumptions in his presentation that would lead
 the uninitiated to believe that these exposures exist in all cases and in
 all environments. Stipulating that a deficiently configured z/OS-RACF (or
 TS or ACF2) shop could present these opportunities, I feel he should have
 made this disclaimer at the outset.  Had he done so I might have taken him
 more seriously.

 -Original Message-
 From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
 Behalf Of Charles Mills
 Sent: Monday, January 05, 2015 10:35 AM
 To: IBM-MAIN@LISTSERV.UA.EDU
 Subject: Re: Enumerating User IDs (was: CANCEL TSO Logon?)

 SMF and console messages to record logon/authentication failures.
 These could be intercepted in real time to alert someone of unusual
 probing while it is occurring

 Yup! Come to either of my sessions at SHARE to learn about how to do that
 (albeit with one of several commercial products).

 Unfortunately I know of no way to intercept in real time the invalid
 userid at its initial usage and possible validation as opposed to when it
 is actually used for a logon with password.

 Charles

 -Original Message-
 From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
 Behalf Of Joel Ewing
 Sent: Monday, January 05, 2015 8:18 AM
 To: IBM-MAIN@LISTSERV.UA.EDU
 Subject: Re: Enumerating User IDs (was: CANCEL TSO Logon?)

 On 01/05/2015 09:35 AM, Paul Gilmartin wrote:
 On Mon, 5 Jan 2015 07:21:28 -0800, Charles Mills wrote:

 For TSO, you can probe for known user ids, but you will see a lot of
 LOGON and IEA989I message in the SYSLOG.

 Only if you set a specific SLIP trap for this condition.

 In the video cited:

 On Jan 2, 2015, at 3:31 PM, Mark Regan wrote:

 Black Hat 2013 - Mainframes: The Past Will Come to Haunt You, by a
 Philip Young and it's about an hour long.

 http://youtu.be/uL65zWrofvk

 ... the speaker opined that such probing is less likely to be detected
 by Security than by Operations as a spike in CPU usage.

 -- gil

 RACF uses SMF and console messages to record logon/authentication
 failures.  These could be intercepted in real time to alert someone of
 unusual probing while it is occurring.  We used independent review of daily
 summary reports generated from RACF SMF records to verify that such probing
 had not occurred, just the typical typos and forgotten passwords from
 terminals within the corporation.  With our normal system workload, someone
 would have been more likely to notice a flood of unusual console messages
 than see any noticeable impact on CPU.

...
-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Young's Black Hat 2013 talk - was mainframe tribute song

2015-01-05 Thread Joel Ewing
On 01/03/2015 09:23 PM, Paul Gilmartin wrote:
 On Sat, 3 Jan 2015 10:13:21 -0600, Ed Gould wrote:

 Indeed it was at least interesting.
 I would be curious if IBM would like to comment on some of the
 statements on how how RACF encrypts the passwords.
 I disagree with how RACF encryption is done (at least by the
 commentator)but I am not RACF trained so I can not call the
 commentator out.
 IBM?

 On Jan 2, 2015, at 3:31 PM, Mark Regan wrote:

 Black Hat 2013 - Mainframes: The Past Will Come to Haunt You, by a
 Philip Young and it's about an hour long.

 http://youtu.be/uL65zWrofvk

 It has been mentioned here and not refuted that RACF uses single-DES
 with the password as key and the user ID as salt.
 
 I had not heard (and do not fully believe) that the hashed password data
 set is generally readable (UACC=READ?).
 
 I had not heard, but it's quite plausible, that passphrases, however long,
  are collapsed to 56 bits because DES supports no greater.
 
 And Phillip Young stressed the weakness of the potential for user ID
 enumeration -- TSO LOGON tells you immediately whether a string
 is a known user ID -- he calls it much too friendly.  But z/OS
 partisans here have advocated that excess friendliness as a boon.
 It reduces the search space from MxN to M+N, regarded contemptuously
 by non-mainframers.
 
 -- gil
 

From the full title of the report, perhaps Young is referring to MVS
installations that have been in existence for over 30 years that have
somehow managed to ignore all the security advice of the last several
decades and have continued in unsafe configurations.  Perhaps some such
installations do exist.

The password mangling Young describes sounds like the old pre-DES
password encoding (not an encryption).  It wasn't even recommended by
1985 when we migrated to MVS/XA. If the old encoding is still supported,
it should be way past time to discontinue that support.  But, the
password encoding in the RACF data base only becomes a security issue if
READ access to the RACF data base itself is not properly restricted by
RACF.  Without READ access to the RACF database, one is reduced to
making actual logon/authentication attempts.  That may serve as a denial
of service attack when a userid is revoked after a relatively low,
installation-specified number of failures, but it is of marginal use
in finding a functional userid/password combination by trial and error,
and it attracts attention from a user whose userid gets revoked.  And, SMF
RACF logging data shows what LU or IP address is responsible for invalid
authentication attempts -- we audited logon failures and all revoked
userids daily.
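
For the curious, the DES scheme described earlier in the thread (single
DES with a password-derived key and the user ID as the salt/data) is
easy to sketch, and the sketch shows exactly why restricting READ access
matters: anyone holding the stored result can test candidate passwords
offline at full speed.  This is an illustration of the idea in Python
with PyCryptodome, not the actual RACF algorithm -- the real padding and
key-derivation details aren't given in this thread and are assumptions
here:

  from Crypto.Cipher import DES

  def des_style_hash(password, userid):
      # Assumed details: upper-case, blank-pad/truncate both fields to
      # 8 bytes, encode as EBCDIC (cp500); RACF's real rules may differ.
      key = password.upper().ljust(8)[:8].encode("cp500")
      data = userid.upper().ljust(8)[:8].encode("cp500")
      return DES.new(key, DES.MODE_ECB).encrypt(data)

  # Offline guessing once the stored value is readable:
  stored = des_style_hash("SECRET", "JOEUSER")
  for guess in ("PASSWORD", "SECRET", "TSOUSER1"):
      if des_style_hash(guess, "JOEUSER") == stored:
          print("password is", guess)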

MVS can certainly be made insecure, but the basic security concepts are
not that complex to understand  Require all users of the system to
authenticate.  By default, protect all DASD and tape data sets, and have
rational data set naming standards that make default identification of
ownership and access rights feasible. Protect all system data-sets,
system configuration data, PROCLIB sources for started task JCL, and any
installation Authorized libraries from UPDATE by all but Technical
Support staff entrusted with maintenance of the system. Disallow even
READ access to sensitive data sets (like RACF databases).  Restrict
physical access to corporate network and MVS consoles, and use RACF to
restrict usage of sensitive commands, resources and applications.  RACF
Security Administrators and their Technical Support counterparts must be
properly trained, which includes knowing what authorization requests
from managers are unreasonable and must be denied to preserve system
integrity -- and being slightly paranoid about protecting their own
authentication credentials helps.

3270 communication protocols were designed with secure corporate
networks in mind, and as Young points out that means logon passwords
transmit in clear text even though visually hidden.  Any remote access
to MVS over non-corporate, unsecured networks MUST be encrypted, via use
of a VPN or some other technique, and standards should also be in place
to protect remote users' equipment from password-trapping malware.

Users should only be authorized to the functional access on MVS required
by their job.  Need to access a transactional system (e.g., CICS), does
not imply a need for access to TSO, or OMVS/UNIX, or FTP, or batch job
submission;  access to FTP does not imply a need for FILETYPE=JES job
submission and retrieval (and it is not that difficult to design an FTP
exit that uses RACF to selectively allow/disallow JES access via FTP to
specific users).  I was also mildly amused by the idea of someone using
FTP FILETYPE=JES to submit a surreptitious interactive listener job.
In our shop, batch initiators that were not restricted to production use
were a closely watched commodity, and nothing would have attracted the
attention of Operators, Tech Services, or other users quicker than a job
that appeared to be hung using few resources, and holding up the

Re: Dataset PACK profile

2014-12-30 Thread Joel Ewing
On 12/30/2014 04:18 AM, Shmuel Metz (Seymour J.) wrote:
 In 54a23cd9.6040...@acm.org, on 12/29/2014
at 11:49 PM, Joel Ewing jcew...@acm.org said:
 
 The problem is not that you can't already recognize on a
 member-by-member level whether or not data is PACKED.
 
 Isn't there a flag in the directory entry?
  
 
I don't see any field so indicated in the basic PDS directory entry, or
in the fields in the User Data section of the directory entry used for
ISPF Statistics or even the more recent fields for ISPF Extended
Statistics (and logically it couldn't be in the Statistics section since
the presence of the ISPF Statistics fields can be optionally suppressed
to conserve directory space).  The in-house utilities we wrote to scan
potentially-packed source libraries just keyed off the actual content of
the member to determine if unpacking was required.
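
For what it's worth, the content test amounts to nothing more than a
signature check on the first bytes of the member.  A sketch in Python,
assuming the member data has been transferred to a flat binary file;
PACK_SIGNATURE is a hypothetical placeholder -- capture the real byte
sequence from a member that ISPF edit reports as packed before trusting
it:

  # Hypothetical header bytes -- replace with the sequence observed at
  # the front of a known ISPF-packed member.
  PACK_SIGNATURE = b"\x00\x00"

  def is_packed(member_path):
      with open(member_path, "rb") as f:
          head = f.read(len(PACK_SIGNATURE))
      return head == PACK_SIGNATURE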

-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Dataset PACK profile

2014-12-29 Thread Joel Ewing
On 12/29/2014 10:39 PM, Paul Gilmartin wrote:
 On Mon, 29 Dec 2014 17:13:30 -0600, Joel Ewing wrote:

 SWITCH IT OFF may not be a trivial thing to do if you are talking
 about a PDS used by multiple users.  The edit default for PACKED, like
 a number of ISPF edit parameters, is not maintained in one place but in
 potentially-dataset-unique, potentially-user-unique edit profiles based
 on the low-level qualifier of the dataset.  ...

 Terrible design; inexcusable!  That information should exist (also) in
 metadata describing the file.  Couldn't ISPF have spent one bit of user
 info in the directory entry to indicate packed/unpacked status of the
 member?  Such a metadatum should govern opening of the member,
 regardless of the user's ISPF parameters, and be updated when a user
 SAVEs.  A library with a mixture of PACKED and UNPACKED members
 would then be entirely practical and largely transparent (opaque to
 users of non-ISPF editors).
 
 Or, magic numbers?
 
 -- gil

I'm not particularly fond of the design either, but more because there
is no easy way to enforce installation conventions on a data set level
when there are valid reasons to do so.

The problem is not that you can't already recognize on a
member-by-member level whether or not data is PACKED.  A PACKED member
starts with an easily recognizable byte sequence that would not otherwise
be valid in an EBCDIC text file.  The problem is that the code for
converting a PACKED member back to normal text records is not built into
PDS access methods, so that there is no way to make the PACKED state of
a member transparent to all programs that expect normal text records.
And presumably you would want similar output capability built-in if you
wanted to force a consistent format when other utilities create new
members. A PDS with a mix of PACKED and UNPACKED members is currently
possible (although more likely an accident than deliberate), but it is
only transparent if all your access is via the ISPF Editor.

I suspect IBM has little incentive to enhance ISPF style PACKED data.  I
think they were actually on the verge of eliminating it a decade back
until it was pointed out that at the time it was the only way available
to circumvent size restrictions on some large text PDS files, and that
its simplistic compression was fairly effective on typical programming
language source and much less processor intensive than hardware
compression at the time.
-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: thought: z/OS structured logging

2014-12-04 Thread Joel Ewing
While conceptually XML sounds nice, the problem would seem to be the
extreme volume of data involved, millions of messages daily for large
installations.  Uncompressed XML is incredibly inefficient in storage
requirements, and compressing/uncompressing XML has processing costs.
From my viewpoint I would be much more concerned about the ongoing and
continual additional overhead costs an XML Syslog would add to the daily
operation of a z/OS shop, including archival and search costs, not just
the implementation costs.

System automation tools common to many z/OS shops can already incur
significant overhead in processing system log records, so
one would not want to add to that by significantly increasing the size
or processing cost of reading log records.

While it would be nice to have an easy way of associating things like
Sysplex, System, and LPAR names, etc. with messages, you really need to
do this in a way that doesn't replicate what for a given system is
essentially constant information on millions of log entry records.
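
One way to get the association without the replication is to emit the
constant information once, as a stream header record, and keep the
per-message records terse.  A sketch in Python/JSON; the field names are
illustrative only, not a proposed standard:

  import datetime
  import json

  # Written once when the log stream opens (or at interval boundaries):
  header = {"type": "hdr", "sysplex": "PLEX1", "system": "SYSA",
            "lpar": "A1", "cpu_serial": "012345"}

  # Written per message -- only the fields that actually vary:
  def msg_record(msgid, jobname, text):
      now = datetime.datetime.now(datetime.timezone.utc)
      return {"type": "msg", "t": now.isoformat(timespec="seconds"),
              "id": msgid, "job": jobname, "text": text}

  print(json.dumps(header))
  print(json.dumps(msg_record("IEF403I", "MYJOB", "MYJOB - STARTED")))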

At times when I have had to search through system logs, I have given
thought to how much easier it might be if they were structured into a
relational database; but obviously that would not be a practical design
for direct recording of SYSLOG, as one of the roles of SYSLOG is to track
z/OS startup and failures affecting the z/OS relational database servers.

There is a basic design conflict inherent in the existing SYSLOG
between making a record terse enough for efficient archival and message
automation versus creating a human-readable message for consoles and
batch job logs.

As long as the SYSLOG is also allowed to contain messages generated by
local installation code and local application code, all rules on
structure of messages become unenforceable by z/OS.  Not sure how to get
around that.
Joel C. Ewing


On 12/04/2014 08:06 AM, John McKown wrote:
 This is just my mind wandering around loose again. You kind indulgence is
 appreciated.
 
 But I've been thinking about the z/OS syslog for some reason lately. Given
 what it was originally designed for, review by a human, it is a decent
 design. But is it really as helpful as it could be in today's z/OS
 environment? Should z/OS have a more generalized logging facility? I will
 grant that subsystems have various logs, but they each basically have
 their own structure. Is there really a need for the z/OS system log
 anymore? I really don't know. And I will admit that my mind has been
 corrupted by using Linux too much lately. grin/
 
 So, if such a thing is even needed any more, what might it look like?
 Should it go to SPOOL? Should it be more like the OPERLOG and go to a
 LOGGER destination? Or should it go somewhere else?
 
 So what would I like? I know most will moan, but I _like_ structured,
 textual, information. So I would prefer that the output be in something
 like XML or JSON structure, not column based. And no encoded binary, OK?
 Now I'm trying to come up with what sort of data should be in the system
 header type data. These are just some fields that _I_ think would be
 useful in a good, generic, logging facility. First would be the current
 date in ISO8601 format, something like 2014-12-04T07:34:03-06:00 which is
 the date/time as I am typing this near Dallas, TX. This tells us the local
 time and gives us enough information to determine the UTC for comparison or
 conversion. I would also like the z/OS sysplex name, the system name, the
 CPU serial number, LPAR number, z/VM guest name (if applicable), job name
 (address space name), RACF owner, step name, proc step name, program name
 in the RB which issued the logging service call, program name in the first
 RB chained to the JS TCB (which I think should be the EXEC PGM=... name in
 most cases for batch jobs), ASID number, UNIX process id (==0 if not dubbed
 because there is no PID of 0 in UNIX, or maybe -1), step number (as used in
 SMF), substep number (again as defined in some SMF records).
 
 Product specific data would be formally encoded as designed by the product.
 Preferably, if in XML, with a DTD to describe it. And done so that standard
 XML facilities such as XSLT and XPath can process it. Which I one reason
 that I like XML a bit better than JSON at this point in time. There are a
 lot of XML utilities around.
 
 And, lastly, I do realize that the above would be very costly. Not
 necessarily to just implement into z/OS, but to actually change z/OS code
 to start using it. And that may be the real killer. IMO, one of the biggest
 obstructions to designing new facility which enhance existing facilities
 is the cost of implementing them. This combined with the current emphasis
 on immediate return on investment. I.e. if I invest a million dollars in
 something, I expect to get back 2 million in 6 months or less.
 
 Well, I guess that I've bored you enough with this bit of weirdness. Like
 many of my ideas, they sound good to me until 

Re: Spam (7.802):Re: Page Data Set Sizes and Volume Types

2014-12-04 Thread Joel Ewing
With current emulated DASD and PAVs, performance is probably no longer
an issue, but I believe multiple page data sets on one volume is still a
potential availability issue:  You wouldn't want failure of a single
emulated drive to compromise two different systems at the same time, and
I seem to recall it used to be fatal to have failure of multiple page
data sets on the same system at the same time.
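
As a sanity check on the numbers in Derrick's listing quoted below, the
slot arithmetic works out; a quick sketch in Python, assuming 4KB slots
and 3390 geometry of 15 tracks per cylinder:

  cyls = 30051                 # mod-27
  tracks = cyls * 15           # 450,765 tracks
  slots = tracks * 12          # 12 page slots/track -> 5,409,180 slots
  print(slots * 4096 / 2**30)  # ~20.6 (GiB), matching the quoted figure
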
Joel C. Ewing

On 12/03/2014 04:37 PM, Ed Gould wrote:
 Derrick:
 
 *myself* I would never put multiple page DS's from multiple systems on
 the same drive.
 The rule I have observed for 40+ years. Nor would I place multiple page
 ds's on the same drive from one system.
 Dedicate a drive for each page ds.
 
 Ed
 
 On Dec 3, 2014, at 3:00 PM, Derrick Haugan wrote:
 
 We built our current paging configuration years ago, when the max #
 slots per page dataset was 1M. At the time we wanted to use mod-27's,
 and so we staggered multiple page datasets per volume, from different
 systems (happen to be in the same sysplex) so as not to waste space on
 the volume (we did not place any other type of datasets on the paging
 volumes).

 We have always used PAV's (now HyperPavs) for paging for performance
 reasons, and have never had performance problems with this
 configuration  (but we dont do alot of paging).

 We are preparing to use flash express on our EC12 based systems, and
 are considering reconfiguring page datasets on either mod-27's or
 mod-54's using 1 local page dataset per volume, as when using the
 flash/SCM for paging, only VIO pages will go to DASD for paging under
 normal circumstances. This would simplify paging volume configuration.
 Currently on our mod27s there are 12 paging slots per track (run an
 IDCAMS/LISTCAT of the page dataset). So a mod27 with 30051
 cyls/450,765 trk would hold 5,409,180 slots or roughly 20.6GB.


 example of what we have been using: (mod27, 5-member sysplex , one
 local page dataset from each system on the volume)

 C-HH  D A T A S E T   N A M E --- Org   Trks
 0 00 VTOC POINTER VP   1
 0 01 SYS1.VTOCIX.PAG300 PS  14
 1 00 VTOC VT   75
 6 00 SYS1.PAGE27.A090.PLPA.DATA   VS   5100
 00346 00 SYS1.VVDS.VPAG300   VS  10
 00346 10 * * * F R E E   S P A C E * * * FS   5
 00347 00 SYS1.PAGE27.A090.LOCAL1.DATA   VS   87375
 06172 00 SYS1.PAGE27.N090.COMMON.DATA VS8550
 06742 00 SYS1.PAGE27.N090.LOCAL8.DATA  VS   87375
 12567 00 SYS1.PAGE27.G090.LOCAL1.DATA   VS   87375
 18392 00 SYS1.PAGE27.J090.LOCAL8.DATAVS   87375
 24217 00 SYS1.PAGE27.Y090.LOCAL1.DATAVS   87375
 30042 00 * * * F R E E   S P A C E * * *  FS 135
 30051 00 END OF VOLUME  EV   0
 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
 
 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
 


-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Page Data Set Sizes and Volume Types

2014-12-04 Thread Joel Ewing
Every MVS volume I have seen in the last two decades is on an emulated
3390 drive, although no doubt somewhere people are still running real
3380s or 3390s.   From MVS's viewpoint, it thinks every DASD unit
address is a physical DASD drive even though the DASD Subsystem is only
emulating the architecture of a real 3390 drive and 3990 controller.  In
the context of z/OS and modern DASD, an emulated drive and an MVS
volume should be acceptable alternatives for referencing the same
entity, a DASD drive and its contents (although technically, the drive
is the container and the MVS volume the content).  An actual backend
DASD drive in the DASD subsystem would be a physical drive, not an
emulated drive as seen by z/OS.

Re comment by R.S.:
I agree that one is extremely unlikely to lose a single MVS
volume/emulated drive in today's DASD subsystems from HW failure; but if
there is any way a single volume can be accidentally forced offline,
hung, undefined, or trashed out from under a running MVS system, I'm
sure a System Programmer somewhere will discover it.  If all else
fails, unintentionally sharing DASD and unintentionally writing to the
wrong unit address and volume from an independent system can do anything.
Joel C. Ewing


On 12/04/2014 12:46 PM, Bob Shannon wrote:
 With current emulated DASD and PAVs, performance is probably no longer an 
 issue, but I believe multiple page data sets on one volume is still a 
 potential availability issue:  You wouldn't want failure of a single 
 emulated drive to compromise two different systems at the same time, and I 
 seem to recall it used to be fatal to have failure of multiple page data 
 sets on the same system at the same time.
 
 You seem to have intermixed "volume" and "emulated drive". Unless the
 recommendation has changed, there should only be one page dataset per MVS 
 volume. IIRC MVS remembers the last head position and performance suffers 
 when the head has moved. If you are considering the backend SCSI drives used 
 when emulating MVS volumes, they are in a RAID array which is designed to 
 tolerate SCSI failures. I don't pay any attention to them.
 
 It may be time to revisit old paging ROTs. Does anyone have a double or 
 triple digit paging rate anymore? Is the 30% rule still valid? (We completely 
 ignore it).  Does zFlash obviate the old ROTs?
 
 Bob Shannon
 Rocket Software
...
-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Why is the System Data Sets book no longer deemed useful?

2014-11-24 Thread Joel Ewing
From looking at the old IEA1G510 manual, it really would need a lot of
work to make it more accurate.  It should mention all system data sets
that have special requirements - e.g., must be cataloged in Master
Catalog, must be on the SYSRES or the IPL volume, must be PDS not PDSE,
 must exist but is not an SMP/E target library, must have a specific
name (even if only by default), and data sets that must have some
minimal installation customization just to get a system up.

To me, that ought to include (which manual IEA1G510 does not) at least a
terse explanation of Page Data Sets, which are neither required to be
SYS1 or on a specific volume, and perhaps should not include (which
IEA1G510 does) the various *CLI0, *SKEL0, *MSG0, *PNL0, *PENU, *TBL0,
*TENU, etc. data sets required for specific ISPF applications (and which
do not really have any special requirements).  True, you must have the
SCBD* ISPF application data sets to do hardware configuration, and SBLS*
data sets to diagnose system failures, but are they worthy of any
greater honor than the SMP/E target data sets required to make ISPF
itself functional?

Just a quick spot check of the description of SYS1.BRODCAST reveals it
hasn't been updated to reflect the use of User Logs as an alternative to
SYS1.BRODCAST for TSO user messages, a feature that has been available
for some years.  That makes me suspect other topics in IEA1G510 may be
similarly in need of revision and perhaps the magnitude of that task is
why the manual was dropped.

Perhaps all of the information in IEA1G510 is now available somewhere
else in bits and pieces, but it was nice to have it all collected in one
place as an overview.  If not in a separate manual, perhaps this might
be a candidate for a long topic or appendix in something like the z/OS
Basics manual or some volume of ABCs of z/OS System Programming.
There is (or at least used to be) a very short topic in ABCs of z/OS
System Programming, Vol 2 on System Data Sets, but it seems to me
incomplete (see above) and only mentions data set names, not their
purpose or any unique requirements that must be observed.
Joel C. Ewing

On 11/24/2014 08:49 AM, Don Poitras wrote:
 Cheryl,
   No, they're talking about a book that describes the SYS1.* datasets.
 I found an old copy online: 
 
 http://publibz.boulder.ibm.com/epubs/pdf/iea1g510.pdf
 
 
 In article 26cfc99c-f886-46db-9e93-baf552eeb...@gmail.com you wrote:
 Hi,
 
 Is this the manual you're thinking of? 
 
 SC23-6855-02  z/OS (2.1) DFSMS Using Data Sets - 
 http://publibz.boulder.ibm.com/epubs/pdf/dgt3d402.pdf  
 
 Best regards,
 Cheryl
 
 ==
 Cheryl Watson
 Watson & Walker, Inc.
 www.watsonwalker.com
 cell & text: 941-266-6609
 ==
 
 On Nov 24, 2014, at 2:02 AM, nitz-...@gmx.net nitz-...@gmx.net wrote:
 
 When did it last exist? I have V1R10 docs here and I don't see it.

 I found one in the z/OS V1.1 bookshelf
 
 SA22-7629-00 says  First Edition, March 2001. I have either always copied it 
 over in .boo format or this was contained in some sort of 'release DVD' when 
 I downloaded the next release. There isn't a similar book in the pdf 
 collection for 2.1 I downloaded using the same steps. Maybe IBM thinks we 
 don't need System Data Set Definitions anymore? Or all system data sets are 
 now defined using clicking and z/OSMF, no need for actual information 
 anymore since the system will know what to do? :-)
 
 Barbara
 


-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Box size comparisons

2014-11-24 Thread Joel Ewing
On 11/24/2014 09:30 AM, Nims,Alva John (Al) wrote:
 I am not sure exactly what you are asking for, but I do not think there are 
 any pictures of the units side-by-side, not even in the same room.
 
 IBM 7090: 
 http://ed-thelen.org/comp-hist/BRL61-0548.jpg
 http://www-03.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP7090.html
 
 For the IBM 370MP, well, searching for that likes to bring up a DELL heat sink.
 This is the Wikipedia entry for an IBM 370
 http://en.wikipedia.org/wiki/IBM_System/370
 
 Physically, I believe the EC12's footprint is smaller, but much, much more 
 powerful in that small footprint.
 One thing I would point out is that an EC box is Taller than the older boxes.
 
 
 Al Nims
 Systems Admin/Programmer 3
 Information Technology
 University of Florida
 (352) 273-1298
 
 -Original Message-
 From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On 
 Behalf Of Ed Gould
 Sent: Monday, November 24, 2014 12:17 AM
 To: IBM-MAIN@LISTSERV.UA.EDU
 Subject: Box size comparisons
 
 Is there a picture available that shows say a 7090, 370MP (or close) and an 
 EC box?
 This is just for size comparisons as its hard to visualize (to me).
 
 Ed

I haven't been able to locate any actual comparison images either --
would have to be from a museum or from someone with the skills to merge
two images with correct scaling.  If you want accurate comparisons, you
may have to go to Physical Planning manuals for the older processors.

The IBM 7090 had a much simpler architecture than even S/360.
Even though it used much older technology than S/360 or S/370, the
processor without peripheral devices was considerably smaller than any
S/370 architecture MP machine of which I'm aware.  If you want a large
old machine to make the comparison more impressive, one of the old
water-cooled MP behemoths would be your best bet.

One such example of a 370 architecture MP system was a 3033 MP complex.
 Here is an image of one:
http://speci.icss.hu/ibmfoto/3033MP_1979.jpg
Although you can't see the back half clearly, it is another 'h' shape
like the front half, turned around 180°, with another large frame (a
3038) joining them in the middle.  For physical dimensions, see p32 of
http://bitsavers.trailing-edge.com/pdf/ibm/370/fe/GC22-7004-14_370_Installation_Manual_Physical_Planning_Jun85.pdf

In the 3033MP image the closest wing with 5 visible side panels is about
12.9' wide.  The smallest rectangle that will enclose the entire
bolted-together parts of the processor is about 23.5' wide and 26.7'
deep.  In addition there are two L-shaped 3036 consoles requiring a
rectangle space of 7.5'x 6.5' and two 3037 PCDUs (Power Cooling
Distribution Units), one visible in the far back on the right in the
image, each 7.7'x 2.7'.  For a rough eyeball size comparison, a z9 or
z10 with two frames would be about equivalent in size to 2/5 of the
nearest 5-side-panel wing of the 3033MP.

Since you couldn't put any other hardware in close proximity to the
3033MP, the computer room floor area required was at least 767 sq ft
plus some additional fudge for maintenance access on all sides.  By
comparison, the floor area required for a z10 is about 30 sq ft plus
additional clearance for maintenance, an area reduction by a factor of
about 25.
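
The floor-area figures above reduce to simple arithmetic; a quick check
in Python using the dimensions quoted from the physical planning manual:

  cpu      = 23.5 * 26.7      # bolted-together processor frames (sq ft)
  consoles = 2 * (7.5 * 6.5)  # two L-shaped 3036 consoles
  pcdus    = 2 * (7.7 * 2.7)  # two 3037 power/cooling units
  total = cpu + consoles + pcdus
  print(round(total), round(total / 30, 1))  # ~767 sq ft, ~25.6x a z10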

-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN