Re: Fwd: Bank’s severance deal indebts laid off IT workers to be ‘on call’ for 2 years of tech support — without pay

2015-10-22 Thread Roberts, John J
>>"Gee, I don't recall how that worked".

"Just push the big red button - that should fix your issues!".



Re: Exporting Excel to Dataset

2015-08-03 Thread Roberts, John J
We have a requirement to export an Excel template with data in a tabular format from an FTP folder location to a mainframe PDS, which later needs to be accessed via COBOL for some business processing logic.

We know that this works fine if the input Excel file is in .CSV format.  What we are looking for is whether anyone knows of a way to accept and transfer the Excel template as-is to a PDS that is readable by COBOL, without it being converted to a .CSV file.

If you have MS SQL Server at your installation, your MSSQL DBA's should be able 
to develop a SQL Server Integration Services (SSIS) application to transform 
your Excel worksheet into an XML file, which could then be FTP'ed to the 
mainframe PDS.

You could then write COBOL application programs to do the XML PARSE to extract 
the information you need for your business logic.

You may also be able to do something similar with MS Access.
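
If SSIS and Access are both unavailable, the worksheet-to-XML step could also be scripted.  A minimal sketch in Python using the openpyxl library (file names and XML tags here are illustrative only, not part of any standard):

from xml.etree import ElementTree as ET
from openpyxl import load_workbook

# Flatten the active worksheet into simple row/col XML elements that a
# COBOL XML PARSE loop could walk through after the file is FTP'ed up.
wb = load_workbook("template.xlsx", read_only=True)   # hypothetical input
root = ET.Element("rows")
for row in wb.active.iter_rows(values_only=True):
    r = ET.SubElement(root, "row")
    for i, cell in enumerate(row):
        ET.SubElement(r, "col%d" % i).text = "" if cell is None else str(cell)
ET.ElementTree(root).write("template.xml", encoding="utf-8",
                           xml_declaration=True)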

John




Re: Scheduling software

2015-05-15 Thread Roberts, John J
What are the majority of companies using for their mainframe scheduling 
software?  

The State of Iowa uses BMC Control-M, at least on the LPAR on which DHS is 
deployed.  Our installation is a cross-platform solution with workload running 
on both the mainframe and distributed platforms.

This has been a fairly recent development here.  Previously we had a homebrew 
solution that leveraged JES3 Dependent Job Control.

Our experience has been generally positive.

John



Re: Scheduling software

2015-05-15 Thread Roberts, John J
Also, there is a list at: 
http://en.wikipedia.org/wiki/List_of_job_scheduler_software



Re: System vs. user ABEND codes

2014-11-04 Thread Roberts, John J
I suspect that one of the original OS/360 developers decided that U0C7 could 
easily be confused with S0C7.  So they adopted the convention of documenting 
and displaying user abend codes in decimal vs. hex for system abends.  They are 
both unsigned 12-bit numbers, 0 to 4095 in decimal or 000 to FFF in hex.  IIRC, 
the code is loaded into GPR1 before the SVC 13 is issued.  If a user abend, the 
value is in the lower 12 bits.  If a system abend, it is shifted over 12 bits.  
I think the high-order 8 bits are used for dump options.  GPR15 holds the 
reason code.
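
As an illustration of the layout just described (a Python sketch; the field positions follow this post, not a check against current documentation):

def describe(completion_code):
    # High 8 bits: dump option flags; next 12: system code; low 12: user code.
    system = (completion_code >> 12) & 0xFFF
    user = completion_code & 0xFFF
    return "S%03X" % system if system else "U%04d" % user

print(describe(0x0C7 << 12))   # 'S0C7' - system abends display in hex
print(describe(4039))          # 'U4039' - user abends display in decimal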

Experience (and 
http://www-01.ibm.com/support/knowledgecenter/SSGMCP_4.1.0/com.ibm.cics.ts.resourcedefinition.doc/macros/srt/system.html?cp=SSGMCP_4.1.0%2F12-9-1-3-8-1)
 make it clear that system ABEND codes are hex and user codes are decimal. 
Why? Or is this lost in the mists of time? It's not like OS/360 end-users 
were going to understand U1234 better than U4D2!



Re: Find member in Linklist through JCL?

2014-07-24 Thread Roberts, John J
Bill,

EXEC PGM=whatever without any STEPLIB or JOBLIB will give you a clue whether it 
is in the link list or not (an S806 abend means it is not).  But it won't tell 
you which library.

If you have access to PARMLIB and are able to compile a list of the libraries 
in the link list, then you may be able to use a quick-and-dirty utility program 
I developed years ago.  PROGBLDL reads a list of program names (DDNAME=PROGLIST) 
and for each name issues a BLDL macro call to get the load module's directory 
entry.  This information is written to output files PROGDATA and PROGSUMM.  I 
think PROGDATA is the full DE and PROGSUMM is some portion.  The BLDL macro 
searches DDNAME=DFHRPL for the program name in question.  I think there is a 
byte within the DE that is the relative number of the library in the DFHRPL 
concatenation.  Source code reproduced below.

John

PROGBLDL CSECT
***********************************************************************
*        PROGBLDL PROGRAM ENTRY                                       *
***********************************************************************
LENTRY   STM   14,12,12(13)
         BALR  12,0
         USING *,12
         ST    13,WRSA+4
         LA    13,WRSA
         SPACE 3
***********************************************************************
*        OPEN FILES                                                   *
***********************************************************************
LOPEN    DS    0H
         OPEN  (WLSTDCB,(INPUT))
         OPEN  (WLIBDCB,(INPUT))
         OPEN  (WDATDCB,(OUTPUT))
         OPEN  (WSUMDCB,(OUTPUT))
         SPACE 3
***********************************************************************
*        READ PROGRAM NAMES AND COLLECT BLDL INFORMATION              *
***********************************************************************
LSCAN    DS    0H
         GET   WLSTDCB,WLSTREC          READ NEXT PROGRAM NAME
         XC    WMEMDATA,WMEMDATA        CLEAR THE BLDL LIST AREA
         MVC   WMEMNAME,WLSTREC         SET THE MEMBER NAME
         MVC   WSUMNAME,WLSTREC         SET THE MEMBER NAME
         BLDL  WLIBDCB,WMEMINFO,BYPASSLLA  SEARCH FOR THE MEMBER
         PUT   WDATDCB,WMEMDATA         SAVE THE RESULT
         SR    15,15
         IC    15,WMEMDATA+11           LOAD CONCAT NUMBER
         CVD   15,WPACK
         UNPK  WSUMLIBN,WPACK
         OI    WSUMLIBN+1,X'F0'
         SR    15,15
         ICM   15,B'0111',WMEMDATA+24   LOAD MODULE SIZE
         CVD   15,WPACK
         UNPK  WSUMSIZE,WPACK
         OI    WSUMSIZE+7,X'F0'
         PUT   WSUMDCB,WSUMREC          WRITE SUMMARY RECORD
         B     LSCAN                    GO DO NEXT
LSCAN99  DS    0H                       EODAD
         SPACE 3
***********************************************************************
*        CLOSE FILES                                                  *
***********************************************************************
LCLOSE   DS    0H
         CLOSE (WLSTDCB)
         CLOSE (WLIBDCB)
         CLOSE (WDATDCB)
         CLOSE (WSUMDCB)
         SPACE 3
***********************************************************************
*        PROGBLDL PROGRAM EXIT                                        *
***********************************************************************
LEXIT    DS    0H
         SR    15,15
         L     13,WRSA+4
         L     14,12(13)
         LM    0,12,20(13)
         BR    14
         SPACE 3
***********************************************************************
*        CONSTANTS                                                    *
***********************************************************************
         LTORG
***********************************************************************
*        DSECTS                                                       *
***********************************************************************
         DCBD  DSORG=(PS)
         SPACE 3
***********************************************************************
*        WORKING STORAGE                                              *
***********************************************************************
PROGBLDL CSECT
WPACK    DC    D'0'
WRSA     DC    18F'0'
WLSTDCB  DCB   DDNAME=PROGLIST,MACRF=GM,DEVD=DA,DSORG=PS,             X
               RECFM=FB,LRECL=8,EODAD=LSCAN99
WLSTREC  DS    CL8'PGMNAME'
WDATDCB  DCB   DDNAME=PROGDATA,MACRF=PM,DEVD=DA,DSORG=PS,             X
               RECFM=FB,LRECL=76
WSUMDCB  DCB   DDNAME=PROGSUMM,MACRF=PM,DEVD=DA,DSORG=PS,             X
               RECFM=FB,LRECL=20
WSUMREC  DS    0CL20
WSUMNAME DC    CL8'PGMNAME'
         DC    CL1','
WSUMLIBN DC    CL2'99'
         DC    CL1','
WSUMSIZE DC    CL8' '
*
WLIBDCB  DCB   DDNAME=DFHRPL,MACRF=R,DEVD=DA,DSORG=PO,                X
               RECFM=U
WMEMINFO DS    0H
         DC    H'1'
         DC    H'76'
WMEMDATA DS    0XL76
WMEMNAME DC    CL8'PGMNAME'
WMEMTTR  DC    XL3'00'
WMEMK    DC    XL1'00'
WMEMZ    DC    XL1'00'
WMEMC    DC    XL1'00'
WMEMUSER DC    XL62'00'

Re: OT: Entry Level System Programmer Job Description

2014-01-30 Thread Roberts, John J
John P Kalinich wrote:
1. Graduated from SHARE Assembler Boot Camp.
2. Read and understood the contents of Advanced Assembler Language and MVS 
Interfaces for IBM Systems and Application Programmers by Carmine A.
Cannatello.
3. Fluent in z/OS operator commands.
4. Can IPL a z/OS system.

IMO, you don't hire Entry Level System Programmers.  You hire Entry Level 
System Programmer Trainees.

Anyone who was practicing as an Entry Level System Programmer for any 
significant length of time is now an Intermediate Level System Programmer.  
If they left the position after just a few months they are a System Programmer 
Dropout.

To be a System Programmer Trainee, you need to have been:
(a) A successful Application Developer on the platform, or
(b) A highly experienced platform Operator.

While I consider myself a skilled ASM developer, and I would highly recommend 
this skill for any System Programmer, I know that for many years IMS System 
Programming tasks have been done by people lacking it.  Obviously, JCL, 
utility program, REXX, and SMP/E skills come before ASM.  Familiarity with the 
diagnostic tools is important as well.  But I know that there are many 
practicing SysProgs who don't know how to read a SYSUDUMP and have become 
dependent on ABENDAID as a crutch.

For setting requirements, you also need to consider the environment.  A big 
installation with a whole team of Sysprogs can afford the time to mentor a new 
guy.  But a small shop with only one or two senior people might not be able to 
afford the time to raise the newbie.

John



Re: JCL and date variables

2014-01-07 Thread Roberts, John J
Try doing this: 
(1) Run a job that does nothing except dynamically construct the JCL for the 
real job.  The jobstep would get the system date and then do the date 
subtraction to calculate the member name.  The generated JCL would then be 
submitted via the internal reader.
(2) The dynamically submitted application job then runs with the JCL DSN's and 
member names configured correctly.
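
A minimal sketch of the date arithmetic in step (1), in Python (the skeleton file and {member} placeholder are hypothetical; writing the result to the internal reader is left out):

from datetime import date, timedelta

run_date = date.today() - timedelta(days=1)      # yesterday's business date
member = "D" + run_date.strftime("%y%m%d")       # e.g. member name D140106

with open("realjob.skel") as f:                  # hypothetical JCL skeleton
    jcl = f.read().replace("{member}", member)
# jcl would then be written to a DD allocated as SYSOUT=(*,INTRDR)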



Re: RDW with FTP

2014-01-02 Thread Roberts, John J
I was told that this can be used to FTP files that have OCCURS DEPENDING ON in 
the record layout.  But I am not able to comprehend exactly how I should use 
it.  For example, I am GETting a VB file from the mainframe.

Since we are talking about COBOL files here (OCCURS DEPENDING), you will save 
yourself a lot of trouble if you simply reformat the file such that all fields 
are USAGE DISPLAY.  That way you can do EBCDIC to ASCII transfers and your 
records will be delimited by CRLF.  Such records can survive the round-trip 
MVS-WORKSTATION-MVS, whereas BINARY transfers with the RDW option will not.

Also, take care with ZONED DECIMAL fields.  An example might be PICTURE 
S9(5)V99 USAGE DISPLAY.  Such a field requires 7 characters, but the last 
character will be something funny like {.  This is because the last position 
encodes both a digit value and a sign.  Such characters will get scrambled in 
the EBCDIC-ASCII translation.  You can handle this two ways (see the sketch 
below):
(1) Fix the character on the WORKSTATION.  This is possible since the 
characters get corrupted with a 1-to-1 mapping, so it is easy to fix at the 
destination.
(2) Define it in the record as PICTURE +9(5).99, so you get an explicit 
leading sign.
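
Here is a rough sketch of option (1) in Python, assuming the usual code page 037 overpunch characters arrived intact via that 1-to-1 mapping (the sample values are hypothetical):

POS = {'{': 0, **{chr(ord('A') + i): i + 1 for i in range(9)}}   # +0 .. +9
NEG = {'}': 0, **{chr(ord('J') + i): i + 1 for i in range(9)}}   # -0 .. -9

def decode_zoned(field):
    # Repair the trailing sign/digit overpunch of a USAGE DISPLAY field.
    last = field[-1]
    if last in POS:
        return int(field[:-1] + str(POS[last]))
    if last in NEG:
        return -int(field[:-1] + str(NEG[last]))
    return int(field)                  # plain digits: treat as positive

print(decode_zoned('001234E'))   # PIC S9(5)V99 value +123.45 -> 12345
print(decode_zoned('001234N'))   # -123.45 -> -12345 (scaling is implied)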

I know from bitter experience that the transmission of VB-BINARY files is best 
avoided.

John



Re: Aging Sysprogs = Aging Farmers

2013-11-04 Thread Roberts, John J
To have an adequate supply of new sysprogs to replace those retiring, the 
compensation needs to be more attractive than it is currently.  Most of the 
younger people in IT see mainframe technology as a dead end.  They might not 
know when it will expire, but they think it will die off sooner than Linux or 
Windows will.  So they will choose to immerse themselves in the 
newer technology until the compensation for a mainframe sysprog becomes too 
great to ignore.

For farmers, the issue is more about inheritance.  In the English system of 
Primogeniture, the eldest son inherited the estate and the other siblings were 
expected to make their own way in the world.  Most Americans would see this as 
grossly unfair.  Much of a farmer's wealth is tied up in the land.  If he has 
two sons and two daughters, each one is expecting a quarter share of that 
wealth as their inheritance.  So even if one of the sons is inclined to follow 
in Dad's footsteps and continue farming, he starts off in debt to his siblings. 
 The debt service makes the farm unable to produce enough income to support the 
farmer-son.  So an easier way out is to sell the farm and split the proceeds 
among all the children, with the farm probably ending up in the hands of 
corporate owners, who will then hire laborers to work the land.  This trend to 
corporate ownership at the expense of the family farm has been going on for 
decades.

John



Re: ObamaCare Web Site Problems

2013-10-30 Thread Roberts, John J
Many of you may be unaware that you don't necessarily need to use this site in 
order to apply for individual health insurance.  Certainly no one that has a 
job relevant to IBM-MAIN needs to use it.

You only need to use HealthCare.gov if:
A) You think you qualify for a subsidy (income less than 400% of FPL) and your 
state does not have its own exchange, or
B) Your income is so low that you qualify for free Medicaid and your state does 
not have its own exchange.

If you are like me and don't qualify for Medicaid or a Subsidy, you have these 
options:
C) Use your state's own exchange to apply, if your state has one (most Blue 
States have done this), or
D) Go to your state's Insurance Commissioner Web Site and research what plans 
are offered and what prices apply to your age, location, gender, and tobacco 
status.  Then contact the Insurance Company of your choice directly to apply.

Note that many insurance companies offer both exchange and non-exchange plans.  
A non-exchange plan still needs to meet all the requirements of the ACA Metal 
plans: Bronze-Silver-Gold.  But a non-exchange plan is not eligible for a 
subsidy.  Since the Insurance Company does not need to bother with all the 
red-tape for the government subsidy, non-exchange plans are often cheaper than 
exchange plans.

If you purchase a non-exchange Metal plan, you meet the requirements of ACA 
and are not subject to the Individual Mandate Penalty.

Also, if you are on Medicare you are already covered and can ignore 
HealthCare.gov and its issues.

John




Re: SYSIN Symbol Resolution vs. LRECL

2013-10-10 Thread Roberts, John J
There was plenty discussion here of the new-fangled facility to support symbol 
resolution in instream data sets, but I don't recall whether this specifically 
was discussed:

What will happen (JES2, specifically) if substituting a symbol in an instream 
data set causes a record to exceed the otherwise LRECL (over which the 
programmer now has little control)?:

Oh, how I wish there were an option for the JCL stream to be 
RECFM=VB,LRECL=255 (or more).  If that were the case, we could get rid of the 
arcane continuation rules, and issues like symbol substitution would mostly be 
overcome.

John




Re: JCL Symbol Resolution

2013-10-09 Thread Roberts, John J
There are different tastes and different sorts of cake: what do you do with 
the following:
//* %%SET %%HHMM = %%SUBSTR %%TIME 1 4
DSN=MVSDATA.IOA.LOG.TEMP.D%%ODATE
//* %%SET %%A = %%CALCDATE %%ODATE +1
//* %%IF %%WDAY NE 1
//TMPENV   SET TMPENV=TD.%%JOBNAME.D%%DATE.T%%TIME.O%%ORDERID

They seem quite CTM-specific.

As I mentioned yesterday, most jobs only need five simple symbolic variables.  
More complicated stuff like Kees has noted will be rare enough that I can 
ignore it as a complication.




Re: JCL Symbol Resolution

2013-10-08 Thread Roberts, John J
Did you consider well the pros and cons of widely using the Control-M 
features?  This will tie you rigidly to this product and will cost you a lot of 
work if you decide to convert to another scheduler in the future.  Having 
learned from Syncsort and PDSMAN, whose features were not compatible with the 
corresponding IBM products, we only use a few of the %% features of Control-M, 
and only in a very controlled group of jobs.  This also partly answers your 
question.

I only want to use a few AutoEdit variables so that we can avoid having to 
maintain test and prod versions of the same JCL.  After struggling for many 
months, I have finally got my client to adopt and implement standards for test 
dataset names and libraries.  So it is possible to switch JCL from running 
against PROD resources to alternative TEST resources by passing an Environment 
ID and letting all the DSN's reference symbols that are derived from the EnvID. 
 Beyond this, TEST JCL needs to differ from PROD in only a few other respects: 
(a) Initial character of the JOB name, (b) Account Field,  (c) JOBCLASS, and 
(d) USER on the JOB card.  I had hoped to make these four Control-M symbols, 
plus the previously mentioned %%ENVID.

So only the first few lines of the JCL member would be affected.  In the event 
that my client changes their mind about Control-M, I can easily refactor the 
JCL to whatever the new requirement demands.  I have developed a JCL parsing 
tool that loads JCL statement text into a SQL database.  Plus other tools to 
identify patterns for change and generate PANVALET ++UPDATE statements to make 
the changes.  So doing what I plan won't tie me to the product and it wouldn't 
cost that much to convert to something else.

However, doing what I plan will be a complication for developers who want to 
submit outside of Control-M.  I was certain that someone else on the list would 
have addressed this need.  But it looks like I will need to follow John M's 
advice and roll my own.  Which is going to be hard since I haven't done REXX in 
20 years.

Also, the solution proposed by Aussie Anthony Thompson won't work for me, since 
I am pretty sure that jobs ordered using CTMAPI will still count against our 
license limit.

Thanks to all who have responded to this inquiry.

John



JCL Symbol Resolution

2013-10-07 Thread Roberts, John J
We are about to become a new BMC Control-M installation.  Part of this involves 
changing our JCL to reference Control-M symbols.  So instead of a plain-jane 
JOB card, we might have something where the accounting field is supplied as 
%%ACCT and the JOB CLASS is specified as %%JC.

This is all good in PROD, where these symbols are resolved from job definitions 
by Control-M before submission.

But what about unit testing?  For one, I don't want to run unit test jobs from 
Control-M, since our license will count these against our limit.  And I don't 
want to force our developers to edit the JCL before submission, since (a) it 
would be a PITA, and (b) they would surely forget to cancel the edit, 
corrupting the member.

What I really want is the ability for developers to perform a special SUBMIT 
from their ISPF EDIT session, where:
(a) the developer is prompted to resolve the symbols before the JCL text is 
written  to INTRDR, and/or
(b) the symbols are resolved from some configuration file before the modified 
JCL is written to INTRDR.

I think that some kind of ISPF EDIT MACRO could do this work, but I have 
forgotten how to do this.  I am hoping that this is a common enough problem 
that someone else may have developed a solution they could share.
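
For what it's worth, the substitution logic itself is the easy half.  A sketch in Python (a real solution would be a REXX edit macro; the symbol names and values are just examples):

import re

def resolve_symbols(jcl_text, symbols):
    # Replace Control-M style %%NAME tokens from a configuration map,
    # leaving any symbol we don't know about untouched.
    def repl(match):
        return symbols.get(match.group(1), match.group(0))
    return re.sub(r"%%([A-Z0-9]+)", repl, jcl_text)

symbols = {"ACCT": "TESTACCT", "JC": "T", "ENVID": "UT1"}
print(resolve_symbols("//TJOB1 JOB (%%ACCT),CLASS=%%JC", symbols))
# -> //TJOB1 JOB (TESTACCT),CLASS=T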

Note that I know that there are some things related to Control-M that a simple 
EDIT Macro could not solve.  Stuff like the %%IF-%%ELSE-%%ENDIF sequences and 
the built in functions for date calculations and character substrings.  But 
these are rare enough that I would be happy with a simple symbol substitution 
solution.

I have asked our sysprogs to pose this question to BMC and CetanCorp.  But I 
suspect that their answer will revolve around their new JCLVERIFY product.  If 
this can work standalone without adding to our license count, that would be 
great. Otherwise we will need this other solution.

Also, I will need to bring this up with the supplier of our current JCL 
validation utility - the product known as JED (dcmsi.com).  This has one nice 
feature that the new JCLVERIFY lacks, the ability to display the contents of 
parameter members, both PDS and PANVALET.  JED is also capable of validating 
things like SORT and IDCAMS control statements.

John





Re: Determining number of Parameters passed to a COBOL program

2013-10-02 Thread Roberts, John J
  Is there a way to determine how many parameters are being passed into a 
COBOL program?

It all depends on the convention you employ for passing parameters.  The most 
common is to pass the PARM string as a comma delimited string.  So in the 
simplest form, the number of parameters is the number of commas plus one.

In the example //S1 EXEC PGM=P1,PARM='ABC,12345,X87,,55'  you have five 
parameters, one of which is the null string.
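
The counting convention, sketched in Python purely for illustration:

parm = "ABC,12345,X87,,55"
fields = parm.split(",")   # ['ABC', '12345', 'X87', '', '55']
print(len(fields))         # 5 = number of commas + 1; '' is the null parameter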

Typically you use the COBOL UNSTRING statement to parse the PARM string looking 
for delimiters.

The PARM string is typically declared like this:

LINKAGE SECTION.
01  LS-PARM.
    05  LS-PARM-LEN     PIC S9(4) COMP.
    05  LS-PARM-TXT     PIC X(100).
PROCEDURE DIVISION USING LS-PARM.


John





Re: Determining number of Parameters passed to a COBOL program

2013-10-02 Thread Roberts, John J
This is of course for a COBOL BATCH MAIN program.  

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Roberts, John J
Sent: Wednesday, October 02, 2013 11:43 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Determining number of Parameters passed to a COBOL program

<snip>



Re: Determining number of Parameters passed to a COBOL program

2013-10-02 Thread Roberts, John J
I was referring to my reply.  The original question wasn't clear.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Steve Comstock
Sent: Wednesday, October 02, 2013 11:55 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Determining number of Parameters passed to a COBOL program

On 10/2/2013 10:47 AM, Roberts, John J wrote:
 This is of course for a COBOL BATCH MAIN program.

What do you mean by 'This'?  Are you referring to your original question or to 
the answer below?

-Steve


<snip>



Re: z/OS subroutine in assembler, used in both batch CICS , making re-entrant

2013-06-25 Thread Roberts, John J
It's been a long time since I've done this, but this is what I remember:
(1) All CICS/COBOL programs need to be compiled with the NODYNAM option. This 
is a CICS requirement.
(2) Because of (1), a CALL 'MYPGM' statement will cause the subprogram to be 
statically linked with the calling program.  So if the RENT attributes are 
different for the two modules you will get S0C4 abends.
(3) A CALL WS-MY-PGM statement in a CICS/COBOL program is a DYNAMIC call, even 
when the NODYNAM option is in force.
(4) In order for (3) to work, the subroutine needs to be defined in a CSD 
PROGRAM entry (PPT for the old-timers).
(5) The dynamically called subroutine can be NORENT even when the caller is 
RENT.
(6) The dynamically called subroutine cannot make any requests that imply a 
WAIT, since that will halt the whole region.  And GETMAIN can interfere with 
CICS storage management as someone else mentioned.

John



Re: python code to sumbit batch job to jes

2013-06-25 Thread Roberts, John J
Maybe the SITE command needs to be SITE FILETYPE=JES.  That's what I use when 
I use the FTP class library for DotNet from Kellerman software.
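
For reference, a minimal sketch of that flow with Python's ftplib (host, credentials, and file names are placeholders; in FILETYPE=JES mode the server submits the stored file to JES):

from ftplib import FTP

ftp = FTP("mvs.example.com")            # hypothetical host
ftp.login("user", "password")
ftp.sendcmd("SITE FILETYPE=JES")        # switch the session into JES mode
with open("tjcl.txt", "rb") as f:       # job card + IEFBR14 step
    ftp.storlines("STOR job.jcl", f)    # reply includes the assigned job id
ftp.quit()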

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Paul Gilmartin
Sent: Tuesday, June 25, 2013 3:41 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: python code to sumbit batch job to jes

On Tue, 25 Jun 2013 15:06:36 -0500, don isenstadt don.isenst...@gmail.com 
wrote:

# note quote already done by ftplib so no quote .. this works fine
ftp.voidcmd("site file=JES")
# Attempt to upload the file
f = open('tjcl.txt', 'r')
ftp.storlines("STOR", f)  -- this is what fails..
ftp.quit()
error_perm: 501 Invalid data set name. Use MVS Dsname conventions.
- but I am not trying to send a dataset up, just a jobcard and iefbr14 in a 
text file???
 
Yes, but FTP is (was?) a moron.  It required a valid data set name in the 
target regardless.  I used to get errors of this sort, but I can't reproduce 
the behavior today on z/OS 1.13.

-- gil




Re: Storage paradigm [was: RE: Data volumes]

2013-06-10 Thread Roberts, John J
For Windows Capabilities, I suggest reading about Dynamic Disks and Dynamic 
Volumes on MSDN:

http://msdn.microsoft.com/en-us/library/windows/desktop/aa363785(v=vs.85).aspx

John



Re: From Linux to MVS via USS and Back

2013-05-23 Thread Roberts, John J
1.  Is it possible for a z/OS UNIX Shell Script to SUBMIT an MVS JOB?  I know 
I can do plain FTP with FILETYPE=JES.  But is there a more direct way that 
doesn't involve putting plain-text passwords on the wire?

If you are running a z/OS UNIX shell, you can submit a job using /bin/submit. 
This is a standard z/OS UNIX command in at least z/OS
1.12 and above. This can submit from a z/OS UNIX resident UNIX file or from 
stdin, if no file is specified in the UNIX command line.

/bin/submit seems to do the trick.  And I note that it propagates the 
submitter's userid, so no need for USER= and PASSWORD= on the JOB card.  Thanks 
John for the tip.
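
For example, a script could drive it like this (a Python sketch; the path is hypothetical, and the plain shell equivalent is simply "submit /u/me/job.jcl"):

import subprocess

# /bin/submit runs under the submitter's userid, so the job card needs
# no USER= or PASSWORD=.
result = subprocess.run(["/bin/submit", "/u/me/job.jcl"],
                        capture_output=True, text=True, check=True)
print(result.stdout)      # typically echoes the id of the submitted job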



From Linux to MVS via USS and Back

2013-05-22 Thread Roberts, John J
Here at DHS we are developing a major application that runs on Linux and Oracle.

This application will need to have batch file interchanges with the z/OS 
Mainframe.  Most of the file transfers are inbound to z/OS, but a few are 
outbound back to Linux.

Security rules dictate that we must use secure protocols for file transfers, 
even though the Linux Servers and the Mainframe will be side-by-side in the 
Intranet.  For us this means SFTP.  So for inbound this means that shell 
scripts on the Linux boxes will initiate an SFTP session with the UNIX side of 
our mainframe and PUT files.  Then we need to run an MVS job to pull the data 
from the Unix File system into a regular MVS Generation Data Set.  And we need 
to tell the CONTROL-M Scheduler that this has been done so that it can schedule 
all the follow-on work.

For outbound we need to push the file from an MVS dataset into the Unix File 
System.  Then we need to initiate a shell script to cause the file to be sent 
to the remote Linux server.  And once it arrives there, we need to tell a 
different instance of the CONTROL-M Scheduler that the file has been 
transmitted successfully.

I know how to do some parts of this, but I wonder about some details:

1.   Is it possible for a z/OS UNIX Shell Script to SUBMIT an MVS JOB?  I 
know I can do plain FTP with FILETYPE=JES.  But is there a more direct way that 
doesn't involve putting plain text passwords on the wire?

2.   Is the CONTROL-M Scheduler capable of monitoring the creation of MVS 
files and kicking off a job when this event is detected?  Is this a basic 
capability of the product, or is a special add-on needed (we have a very small 
CTM Environment, w/o many of the goodies).

3.   Is CONTROL-M capable of monitoring file creation in the UNIX side?

4.   I know that z/OS Unix Shell Scripts can use the cp command to copy 
files from HFS to MVS, using the //'HLQ.WHATEVER' notation.  But will this 
work with GDG's to create the +1 generation?  And how do you control DCB/SPACE 
attributes for the receiving file?

5.   What are the chances of being able to transfer BINARY VB files 
successfully?  I note that SFTP doesn't seem to have LOCSITE RDW options.

6.   Does CONTROL-M have any basic capability to be signaled that an event 
has occurred such that it will then trigger follow-up processing?  If so, can 
you do this from shell script?

If any can help with this, it would be much appreciated.  I only know enough 
about Unix System Services to be dangerous, and even less about CONTROL-M.  For 
CONTROL-M we are really running on autopilot since our expert departed a couple 
years back.

John



Re: Return codes

2013-05-08 Thread Roberts, John J
Can someone let me know why the return code generated is a multiple of 4? 
e.g. 4, 8, 12, 16
Back in the day when every byte counted, programmers would use the RC in R15 as 
an index into a jump table, where each four-byte entry was itself an 
unconditional branch instruction (itself four bytes long).  Not needed anymore, 
but old habits die hard.

Think of it like a computed GOTO.
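
An illustration of the idea in Python (purely illustrative; R15 serves as a byte offset into a table of four-byte branch instructions, i.e. table[rc // 4]):

def ok():      print("RC=0, all fine")
def warning(): print("RC=4, warning")
def error():   print("RC=8, error")

table = [ok, warning, error]   # one entry per four-byte branch slot
rc = 4
table[rc // 4]()               # computed GOTO: byte offset / entry size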



Re: Solaris 7

2013-02-27 Thread Roberts, John J
So you have to download an ISO to a working Unix or Linux OpSYS to install ?

No, if you had ISO images, you needed to burn physical disks.  Then you would 
boot the special install disk, after issuing the appropriate OpenBoot PROM 
command.  And  you needed to boot from a CDROM drive that had a certain special 
capability, something that most PC drives lacked.  Even though this was 
supposedly an Open System,  1998 was still the era of closed hardware as far 
as Sun was concerned.


This is all to the best of my recollection, since the last time I did this was 
back in 2001.  Anyone want to buy a bunch of SparcStation 4's and 5's?  How 
about the Sun 22 inch monitor - it must weigh 75 lbs? ;-)

John



Re: Solaris 7

2013-02-27 Thread Roberts, John J
I doubt the special capability of CDROM drive.

A bit of searching uncovered this statement from the Sun FAQ:

To ensure 100% compatibility on machines running Solaris 2.5.1 and 
older, it is necessary to use a CD-ROM drive capable of supporting 512 byte 
sectors (the Sun standard).

Although it says 2.5.1, I encountered this problem with 2.6.  I think the 
version of the OpenBoot PROM comes into play as well.  I know the problem went 
away with Solaris 8.  Solaris 7 I just can't remember.

Plextor and Pioneer drives had this capability.  I was trying to use HP SCSI-2 
drives and they did not work.  But an Apple SCSI drive pulled from an old 
Quadra machine did the trick.

BTW, readers might be confused about the numbering of Solaris releases.  For a 
long time Sun was calling it Solaris 2.3, 2.4, 2.5, etc.  But after 2.6 came 
Solaris 7, then 8, 9, etc.  So today's Solaris 11 is really 2.11 in the old 
scheme.

John



Re: Solaris 7

2013-02-26 Thread Roberts, John J
Media for Solaris 7, 2.6, and earlier show up on eBay every so often.  For 
Solaris 7 you need to be aware that there are both Desktop and Server editions, 
plus versions for SPARC and x86 architectures.  Also, I found that normal CDROM 
drives can't be used to boot the installation disk, at least for the SPARC 
versions.  Back when I was experimenting, I found that Apple SCSI CDROM drives 
did the job.

Solaris 8 on up was generally more friendly with PC hardware - the UltraSPARC 
workstations used the PCI bus and IDE controllers.

John



Re: DASDCALC

2013-02-14 Thread Roberts, John J
Have you tried Compatibility Mode?  It's a stretch, but it might work.

Windows 7 has a feature called XP Mode.  In reality it is XP 32-bit running in 
a Virtual PC virtual machine.  XP Mode does not come on the Win 7 distribution 
disk.  Rather, it is a free download from Microsoft.com.

If your product ever ran on XP, it is 90% likely it will run in XP Mode under 
Win 7 64-bit (or 32-bit).

John



Re: DASDCALC

2013-02-14 Thread Roberts, John J
I think the OP has resolved his problem by using Win 7 XP Mode.

But there is another option, if XPMode is unavailable.  If you can convince 
management to leave one old XP machine powered up on the LAN, you could use 
Remote Desktop Connection (RDC) to access that machine and run any old XP apps.

Lastly, why hasn't someone reengineered DASDCALC into a web application that we 
could all access with our internet browsers?  Maybe this is a small challenge 
for someone on the list?

John



Re: My Last Days as a Sysprog

2013-01-28 Thread Roberts, John J
I understand that if you are a contractor to a body shop, the body shop is 
often not paid for 45 days (I hear IBM is one of the culprits here), then the 
body shop won't pay its people for another 45 days.  90 days before the worker 
gets paid is not anywhere near fair, but they got you.

Like many things in life, it all depends, on:
(1) The Contract Agency,
(2) How valuable your services are to the client,
(3) What State you are working in, and
(4) Whether you are on a 1099 or W2 contract.

If you are on a W2 contract and you don't get paid in a timely fashion, in many 
states you can get the State Labor Dept to pursue the agency.  If you are on 
1099 terms, get a lawyer.

If you are smart, you will ensure that your contract has language to make it 
crystal clear when you get paid and what recourse you have if they are late.  
Like being able to offer your services directly to the client, terminating your 
relationship with the agency middleman.  Sometimes just the threat of informing 
the client is enough to get the agency to pay up.  Especially true if the 
client would be in a bind without your services.

Since my layoff from my fulltime job in 2008 I have had two contracts.  The 
first was for a NJ body shop.  Terms were Net30 on a W2 contract, which in NJ 
is illegal, but during the term of the project I at least got paid 
more or less on time.  But once the project was cancelled, it took the 
involvement of the NJ Labor Dept and nearly nine months of pressure to get the 
last two months paid.

For the past three years, I have been on a 1099 contract with a local staffing 
agency here in Des Moines.  Terms are Net45, but each and every paycheck has 
been there when promised.

Eric, best of luck in the future.

John



Re: Break a dataset into new record boundaries?

2013-01-15 Thread Roberts, John J
I've got a dataset that has been mangled through some misguided efforts such 
that original record boundaries have been lost. It used to be RECFM=V and now 
it is RECFM=F

As luck would have it, every original record begins with the same hex value.
Can anyone suggest a simple tool -- z/OS, USS, or Windows -- that would 
reformat the records breaking on every occurrence of a particular byte value? 

Is it a text file, or binary?

If it contains only printable display characters, you could FTP it to Windows, 
and then use an editor to prepend each record prefix with CRLF.  Then FTP it 
back to z/OS.

If binary, I would just write a one-off ASM program to recover the original 
records.  It's probably a 30 minute task, easier than trying to learn anything 
new. 
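
For the record, the same one-off recovery idea sketched in Python instead of ASM (0x5B is a hypothetical stand-in for the real first byte, and this assumes that byte never occurs mid-record):

SENTINEL = 0x5B

def split_records(data):
    starts = [i for i, b in enumerate(data) if b == SENTINEL]
    return [data[s:e] for s, e in zip(starts, starts[1:] + [len(data)])]

with open("mangled.bin", "rb") as f:     # hypothetical input file
    records = split_records(f.read())
print(len(records), "records recovered")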

John



Re: Passing of Chris Mason reported

2013-01-11 Thread Roberts, John J
Chris was certainly without peer in his area of expertise of mainframe 
networking.

And he was certainly passionate about matters related to his expertise, as we 
all can attest.

There won't be another quite like him.

RIP

John



Re: Book Enquiry

2012-10-22 Thread Roberts, John J
I'll sell you mine for $200 plus shipping, unless someone else outbids you.

I am waiting for computer historians to recognize that my library of Donald 
Knuth books is in the same class as the Dead Sea Scrolls, Magna Charta, etc.  
;-) Maybe I'll get rich.

Or maybe my collection of BYTE Magazine Volume One that my wife keeps trying to 
throw away.

Actually, speaking of Professor Knuth, I use his name as an interview question. 
 Any Candidate that doesn't immediately recognize the name doesn't get my 
recommendation.

John



Donald Knuth and The Art of Computer Programming - was RE: Book Enquiry

2012-10-22 Thread Roberts, John J
http://www-cs-faculty.stanford.edu/~uno/taocp.html#vol4

I think Mike's point is that Professor Knuth is still alive and actively 
writing at Stanford.  When I was given a copy of Volume 1 back in 1972, I 
expected Professor Knuth to be some 60-ish professor, complete with a tweed 
suit, pipe and leather elbow patches.  I was very surprised to learn that he 
was just 10 years older than myself.

I have an interesting story to tell regarding Volume 1 of The Art of Computer 
Programming.   Buried within this tome is an algorithm for storage allocation 
and de-allocation.  The whole idea is to allocate storage in chunks that are a 
power of 2 in length (32, 64, 128, etc.).  Knuth calls this the Buddy Storage 
Algorithm since by knowing the address of a chunk, you know its size.  And 
you can reclaim storage by locating the buddy chunk and checking a single bit 
to see if it is free or in-use.  The algorithm is very efficient and did not 
require searching linked-list structures like the Best Fit or First Fit 
algorithms that were in common use.
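
For anyone curious, here is a minimal sketch of the scheme in Python: free lists per power-of-two size, with a block's buddy found by XORing its address with its size.  This is only an illustration, not the assembler DFHSCP replacement described below:

MIN_ORDER, MAX_ORDER = 5, 10                # 32-byte to 1 KiB chunks

class BuddyAllocator:
    def __init__(self):
        self.free = {k: set() for k in range(MIN_ORDER, MAX_ORDER + 1)}
        self.free[MAX_ORDER].add(0)         # one 1 KiB arena at offset 0

    def alloc(self, size):
        order = max(MIN_ORDER, (size - 1).bit_length())
        k = order
        while k <= MAX_ORDER and not self.free[k]:
            k += 1                          # find a big-enough free block
        if k > MAX_ORDER:
            raise MemoryError
        addr = self.free[k].pop()
        while k > order:                    # split, keeping the upper halves
            k -= 1
            self.free[k].add(addr + (1 << k))
        return addr, order

    def free_block(self, addr, order):
        while order < MAX_ORDER:
            buddy = addr ^ (1 << order)     # flip one bit to find the buddy
            if buddy not in self.free[order]:
                break                       # buddy in use: stop coalescing
            self.free[order].remove(buddy)
            addr = min(addr, buddy)         # merged block starts lower
            order += 1
        self.free[order].add(addr)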

Back in 1976 I was working at a major Canadian Bank.  We were running an online 
retail banking application in over 1000 branches across Canada using CICS and 
an IBM S/370-168.  Very marginal for performance.  We were constantly tuning to 
squeeze every last CPU cycle out of the application.  We used the STROBE field 
developed program to profile our app and discovered that a very large portion 
of our CPU cycles were being spent doing CICS GETMAIN and FREEMAIN.

A colleague of mine did some midnight engineering to see what would happen if 
we replaced the CICS management program DFHSCP with our own variant using 
Knuth's Buddy Storage Algorithm.  Management did not want to encourage this, 
because we were trying to go vanilla with our CICS software and not employ 
any local mods.  But when Angus was able to show a DOUBLING of performance as a 
consequence of the change, management seized upon this, since only by getting 
past our performance and capacity problems could we enhance the application to 
do some things that would give the bank a competitive advantage.

The Bank stuck with this modified Storage Control Program for several years, 
until Hursley made some big changes in this area, probably with CICS/VS 1.7 or 
later.

John



Re: Donald Knuth and The Art of Computer Programming - was RE: Book Enquiry

2012-10-22 Thread Roberts, John J

Ah, but unless you implemented the CICS fences for the storage pieces, it must 
have been a bear when the apps started over-running memory.

If I recall correctly, we did a few things to address this problem:
(1) We forced the minimum allocation to be 256 bytes.  So small buffer overruns 
didn't do any damage.
(2) We simulated duplicate CICS Storage Accounting Areas (SAA) so that we could 
detect storage corruption.
(3) Our app was all Assembly Code, linked as reentrant.  So program storage was 
all in protected storage.
(4) Application developers were provided with an extensive set of library 
subroutines and Macros to invoke them.  So their opportunity for storage mayhem 
was greatly controlled.
(5) We had a strict System Testing regimen.  Very few damaging bugs made it 
into production.

Of course, the problem of storage corruption was the Achilles Heel of CICS for 
much of the 70's.  But it was the price you had to pay if you needed the 
performance.  The Bank had looked at IMS-DC as an alternative, but my 
recollection is that only with the very limited Fast Path option that came out 
in the late 70's was this platform capable of a transaction rate even close to 
CICS/VS.

John




Re: Career Advice for the Mainframer - Was RE: Another Light goes out

2012-10-03 Thread Roberts, John J
You don't always get to choose. Some companies are compartmentalized, and the 
staff for the old platform is not permitted to work on the new. Some companies 
will allow you to work on the new platform only if you already have experience 
on it.

Something long ago lost in this thread was a point I made that for Legacy 
Migration or Legacy Modernization you need people with skills in both the 
legacy and new platforms.  It isn't enough to have some people that are legacy 
experts and then some other people that are new platform specialists, you need 
the combination in the same person, although I would recommend an emphasis on 
legacy knowledge.

I am one of those mile wide, inch deep kind of guys.  Too long ago I was an 
MVS SysProg, a CICS SysProg, a VTAM/NCP expert, an IMS DBA, and a DB2 DBA.  All 
at different times between 1970 and 1997.  But I am also a certified DotNet 
Solution Developer.  And for a time I faked being a Java Web Developer. On any 
one of these topics I am lucky if I possess one tenth of the knowledge of a 
real expert in those areas.  But my breadth of experience makes me invaluable 
when tackling a broad subject like a legacy migration or modernization.  I know 
I can always call upon the true experts when I get down to the nitty gritty 
details.  I might even resort to a posting on the IBMLIST ;-)

So the fundamental point I was trying to make is that I think the writing is on 
the wall for mainframes.  They won't go away next week or even in 10 years.  
But I wouldn't recommend it to your sons and daughters.  During the transition 
period there are going to be great opportunities for those with legacy skills 
if they can be seen as helping rather than hindering the transition.  So if you 
do get the chance, I would encourage those with legacy skills to also exploit 
opportunities to learn about the new platforms and get involved in the 
transition.  For anyone under the age of 50 I would think this is absolutely 
necessary.  If you are 60+, you can probably retire doing exactly what you do 
today.  If you are 50 to 60, you are on the bubble, and it might burst before 
you hit the finish line.

I know that there will be people on this list that will insist that the 
mainframe platform is absolutely viable for the foreseeable future.  But what 
they are really saying is that they will support it as long as it is 
profitable, and drop it the very day that it isn't.  My old employer 
(Amdahl/Fujitsu) did exactly that, going from Flagship to Nothing in just two 
years.

Lastly, what Shmuel said is true for some - there is the legacy team and the 
separate new-technology team.  If this is your situation, try to get away to 
someplace more enlightened.  Because if this is the situation, one team is 
being set up for a fall.  Guess which one?

John



Re: Career Advice for the Mainframer - Was RE: Another Light goes out

2012-10-03 Thread Roberts, John J
Bill Fairchild wrote:
It is my opinion that they will also dump their hardware business the day it 
no longer is profitable.  

Another thing to keep in mind, that as a product line declines, the costs to 
support that product line must also decrease, or the remaining customers must 
bear an ever increasing burden until a breaking point is reached and even the 
most conservative customers conclude they need to jump ship.

If for example, the number of customers for CICS licenses starts to decline, 
you can expect changes at Hursley Park.  Some of the greybeards will be shown 
the door, replaced with younger people at half the cost.  At some point, more 
and more support and development work will be shipped to cheaper locations 
(India, China, etc.).  And of course, customers will be asked to pay more.  And 
enhancements will slow down.  Eventually there will be a death spiral when 
customers are leaving the platform at such a rate that IBM can't squeeze their 
people or their customers any more.  It is then that they will pull the plug, 
like HP did with the 3000 product line.  Or they might be convinced to make it 
open source like Linux and thereby gain community support.

John



Re: Zero length records outlawed! (Again.)

2012-10-03 Thread Roberts, John J
Gil,

I suspect this is a product of the Sesame Street generation.  You can blame the 
Count for this.  As far as I can remember he always started from One!

When are people going to learn that ZERO is just as important as ONE.  For 
computing, doubly so.

John



Re: Mainframe Power - Was RE: Another Light goes out

2012-10-02 Thread Roberts, John J
John Eells wrote:

Actually, z/OS has supported 1TB drives (EAVs) on DS8Ks since z/OS R10, which 
went out of service last year.  In R10 we supported VSAM, in R11 we added 
Extended Format sequential, and in R12 (the oldest release now in service) we 
added support for almost everything else.  There are still some outliers for 
system-level data sets (RACF DB, page data sets, NUCLEUS, etc.) but the vast 
majority of application data can now live happily on an EAV so long as it's 
read and written using system access methods.

How far off is IBM from offering SSD technology in their disk arrays?  And 
what will large-scale adoption of SSD do to change data protection strategies?  
For example, I have heard that SSDs are much more reliable than HDDs, but when 
they do fail, the whole device is gone at once, whereas an HDD usually dies 
slowly.

John



Re: Mainframe Power - Was RE: Another Light goes out

2012-10-02 Thread Roberts, John J
But this surely has little if anything to do with the size of the physical 
drives used by the makers of storage subsystems.

If storage array makers are using 1TB drives vs the 73GB that was typical just 
a couple years back, this should translate to many  fewer drives.  Which should 
translate to lower power consumption and heat generation, which was the 
original point I was trying to make.  And John Eells can tell us if going to 
the 2.5 in form factor is in their plans.  If so, this could indicate that 
power consumption will be ever further reduced.

One thing that would be interesting to learn: how much of a typical data 
center's power consumption is attributable to storage subsystems, including 
RAID and Virtual Tape?  



Career Advice for the Mainframer - Was RE: Another Light goes out

2012-10-01 Thread Roberts, John J

John's reason #7 reminds me of when management was sugar coating the decision 
to outsource our entire shop,  how it would 'bolster your personal portfolio'! 
  win-win!

Well, to each his own.  You can either resist change, or you can embrace it.  I 
chose to embrace it.  I think it has extended my career.  I got the chop back 
in 2008 from my employer of nearly 30 years.  Four years later I am still 
working independently and earning pretty much the same as before.  I can't say 
the same for many of my former colleagues.  

Of one thing I am certain - at the CxO level very few are truly committed to 
the mainframe platform.  Some of course are resigned to the great difficulty of 
ever getting off the mainframe, but most would if they could.  Those in 
technical support positions that show themselves to be closed minded about 
anything non-mainframe will find themselves at odds with IT management.  Who 
will prevail in the end?

My advice to those who make a living on mainframe technologies is to always 
base your arguments for retaining the big iron on facts, not emotion.  By all 
means ask questions like:
(1) Our CICS availability last year was 99.999%.  Mr. Oracle Man, can your 
system match that? (but be prepared for the counter argument that your app 
doesn't really need five nines reliability).
(2) Mr CIO, you complain about the cost of the mainframe System Programmers.  
But doesn't a good Oracle 11g DBA cost $150K?
(3) Isn't Ruby on Rails just another fad?  After all, look what happened to 
Borland Delphi.  Or Ada.  Or dBase IV?  Shouldn't we stick with proven 
technologies from companies that are firmly established, like Big Blue?

And when your CIO insists that the company must get off the mainframe, since 
all you guys are north of 50 years and will soon be gone, counter this argument 
by suggesting that you and your colleagues could undertake a training and 
mentoring program to develop a new generation of people, much as was done to 
address the skills shortages that occurred in the 70's and 80's.

My 2 cents worth.

John




Re: Mainframe Power - Was RE: Another Light goes out

2012-10-01 Thread Roberts, John J
Has anyone seen this NY Times article?  Perhaps the reporter should have 
looked in the Times Tower for how they are saving power. Mainframes are far 
more efficient, and these CIO's and companies are doing whatever they can do 
to get rid of them. Maybe the Times needs a mirror

Well, besides the Mainframes are Better Argument that Doug intended, there is 
something here to discuss that is interesting for both the mainframe and 
non-mainframe communities.

I can confirm that the data center power problem is extremely pervasive.  A 
relative who works for Intel tells me that this problem is a major focus for 
them.  The situation arose because of these factors:
(1) The very idea of shared infrastructure was foreign to those who built the 
eCommerce apps of the late 90's and early 2000's.  So if you were going to roll 
out a new web app you needed two or more new web servers, two or more new 
database servers, possibly two or more new application servers, etc.  And you 
also needed separate environments for DEV, TEST, and UAT.  So pretty soon you 
have filled up several nineteen inch racks.
(2) As CPU clock speeds kept rising, the TDP (Thermal Design Power) kept 
increasing too, hitting 150 watts with the Harpertown series of Xeon processors 
in 2007.
(3) The move to storing images, audio, and HD video has seen an explosion in 
demand for data storage.  Each disk needs 10 to 25 watts of energy to spin as 
fast as 15K rpm (Seagate Cheetah).  But enterprise-class drives have lower 
capacities than consumer drives.  So while we might have 1TB drives in our 
desktops, the enterprise RAID arrays are stuck at 73GB-146GB-300GB per HDA.

The end result is that even modest IT operations have hundreds of servers and 
disk arrays containing thousands of disks.  And most of these servers just sit 
doing mostly nothing.  And some servers exist simply because no one can 
remember their purpose.

But change is coming, and not just with the Pie In the Sky idea of Cloud 
Computing.  Consider this:

(1) CPU TDP crested at 150 watts in 2007 and is now in decline.  The recent 
Sandy Bridge chips are all down to 90w or 65w.  Some are running as low as 35w.
(2) The industry has embraced Virtualization as part of the solution for server 
proliferation.  So instead of Blade Servers, companies are buying big 
multi-socket, multi-core machines with tons of RAM and then using them to host 
dozens if not hundreds of virtual machines.  Some of these VM's will be very 
busy serving up web pages or accessing databases. Others will simply loaf along 
responding to occasional requests for LDAP requests, emails, etc. In other 
words, horizontal scaling is out of fashion, and vertical scaling is back in 
vogue.
(3) The CPU chip makers are about to apply Mobile technology to their Server 
CPU's.  Mobile CPU's dynamically adjust their clock speeds to fit the workload, 
sometimes running at 200MHz and at other times running full throttle at 2GHz+ 
or more.  Server CPU's will soon do the same.  When throttled down, a CPU uses 
much less energy and radiates much less heat.
(4) The spinning disk makers are about to also apply Mobile technology to their 
Enterprise disks.  No need to spin at 15K rpm when the requests aren't coming 
in.  They are also going smaller - down to 2.5 inches instead of the current 
standard 3.5.  Smaller platters mean faster seek times.
(5) NAND flash and NOR flash technology will soon start eating away at the Hard 
Disk market.  Already we have SSD's that are 1TB.  Supposedly we will soon see 
4TB and 16TB capacities, all with access times 10 times faster than HDD.  And 
with power consumption 10% or less.
(6) Server System Administrators are being overwhelmed with supporting the 
hundreds of server images, whether virtual or real.  So expect them to push 
back on the "separate servers for separate apps" practice.
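
On point (3), the payoff from throttling falls out of the standard CMOS 
approximation for dynamic power, P = C x V^2 x f.  A minimal sketch (the 
capacitance and voltage figures below are toy assumptions, not specs for any 
real chip):

  // Dynamic CPU power scales roughly as P = C * V^2 * f.
  // All inputs below are illustrative assumptions only.
  using System;

  class DvfsSketch
  {
      static void Main()
      {
          double c = 1.0e-9;   // assumed effective switched capacitance, farads

          // Full throttle: 2 GHz at 1.2 V.  Throttled: 200 MHz at 0.8 V.
          double pFull = c * 1.2 * 1.2 * 2.0e9;
          double pIdle = c * 0.8 * 0.8 * 2.0e8;

          Console.WriteLine("Full throttle: {0:N2} W per core", pFull);
          Console.WriteLine("Throttled:     {0:N2} W per core", pIdle);
          Console.WriteLine("Savings:       {0:N0}x", pFull / pIdle);
      }
  }

Cutting the clock 10x saves 10x by itself; because the voltage can drop too, 
the combined saving here is better than 20x.  That is the trick the mobile 
chips have been playing for years.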

You might think that nothing here matters to mainframers.  But consider this:
(1) Server disk array technology is substantially the same as mainframe disk 
arrays.  Many companies use the same EMC arrays to provision both their Wintel 
servers and their z/OS platform.  If EMC arrays start using 2.5-inch disks or 
SSD's, then the benefits will extend to both mainframe and non-mainframe 
environments.
(2) Many mainframes have gone to virtual tape technology, just as non-mainframe 
servers have done.
(3) Mainframers of course embraced virtualization long ago, beginning with 
VM/370 and later with LPAR.
(4) If you think that a z/Architecture processor is radically different from an 
Intel Xeon, you only need to read the respective chip descriptions to see how 
much they have in common.  So it isn't so far off the mark to think that z 
chips have the same issues as Xeon's and that the same fixes will be applied to 
both.

John

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Another Light goes out

2012-09-28 Thread Roberts, John J
Is it legal for you to download .net without owning a windoze license and then 
to run it under, e.g., ODIN, wine? If not, then it is not free.

Parts of the .Net Framework are Open Source.  Google the MONO Project for more 
info; you will find statements like the following:

Mono is a software platform designed to allow developers to easily create cross 
platform applications. Sponsored by Xamarin, Mono is an open source 
implementation of Microsoft's .NET Framework based on the ECMA standards for C# 
and the Common Language Runtime. A growing family of solutions and an active 
and enthusiastic contributing community is helping position Mono to become the 
leading choice for development of Linux applications.
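
For anyone who wants to kick the tires, a minimal smoke test looks like this 
(assuming a stock Mono install, where mcs is Mono's C# compiler):

  // hello.cs - compile and run under Mono, no Windows license involved:
  //   mcs hello.cs
  //   mono hello.exe
  using System;

  class Hello
  {
      static void Main()
      {
          Console.WriteLine("Hello from {0}", Environment.OSVersion);
      }
  }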

But whether the .Net Framework is free, almost free, or cheap is hardly 
relevant to the original discussion about a successful Legacy Migration from a 
UNISYS Clearpath mainframe to Windows.  What matters is whether the Migrated 
System delivers equal or better results than the Legacy System at significantly 
lower cost.

It is my experience that:
(1) Most Legacy Migration Projects are justified on the basis that the Target 
Environment will be really cheap, probably considering only the costs of the 
hardware and software licenses.
(2) Cost comparisons with the Legacy Environment are really an apples-to-oranges 
comparison, since the chargeback rates for the Legacy Environments are fully 
burdened with all the overhead of office space, power, air conditioning, system 
programmers, IT managers, disaster recovery etc.
(3) The estimates to perform the migration are often low.  A lot of IT managers 
think that code migrations are achieved by pushing the code thru code 
conversion tools.  In reality this is a small part of the job.  The biggest 
part is solving all the hundreds of little problems that arise.
(4) Still, in the end, most successful migrations deliver a positive ROI, but 
perhaps much less than originally hoped.  Also, a significant percentage of 
migrations fail, for all the same reasons that many other IT projects fail 
(incompetent management, lack of planning, etc.).

John

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Another Light goes out

2012-09-27 Thread Roberts, John J
a few consultants?  84,500 hrs = 42 person-yrs

Ohio interviewed me back in late 2008 for a role on this project.  So they have 
been working on it for nearly 4 years, which implies an average team size of 11 
or 12 (84,500 hrs at roughly 2,000 hrs per person-year is about 42 person-years; 
42 divided by 4 is about 11).  Given that they did it with mostly State staff, 
they would have had a steep learning curve going from green-screen editors to 
Visual Studio 2008 and whatever RDBMS tools were appropriate.  So they did 
pretty well IMO.

I actually think it is a smart strategy to do such projects mostly in-house, 
augmented only somewhat with contract experts to tackle the problems as they 
arise.  Now that it is done, the in-house staff will have a sense of 
accomplishment and will own the result.  When such projects are outsourced to 
consultants, there is always resentment among the in-house staff who get the 
result dumped on them to support.  Ohio has avoided this problem.  Also, 
since this was a COBOL/PACBASE to COBOL migration, the tribal knowledge of 
the old mainframers is not lost.  The challenge now is to pass this knowledge 
on to the new generation before the old guys are gone.

Lastly, by going with the Fujitsu tools they have opened a door for future 
growth.  Unlike the more commonly used Micro Focus Net Express tools, Fujitsu 
NetCOBOL for .NET is a fully CLR-compliant language, so it now becomes easy to 
revise and extend the applications using VB.Net and/or C#.  Until very 
recently, the competing Micro Focus Enterprise Server environment was a dead 
end unto itself, extensible only with difficulty using the C language.  I 
understand that Micro Focus has been working hard to address this deficiency.
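
To make the point concrete, here is a minimal sketch of the kind of C# interop 
the CLR route buys you.  The LegacyRates class and its ComputePremium method 
are hypothetical names invented for illustration; in a real shop that class 
would come from the assembly the COBOL compiler produced, not from a C# stub 
like the one below:

  using System;

  // Stand-in stub so the sketch compiles on its own.  In practice this
  // class would be the COBOL program as compiled to MSIL by NetCOBOL.
  class LegacyRates
  {
      public decimal ComputePremium(int age, decimal coverage)
      {
          return coverage * 0.001m + age;   // placeholder arithmetic only
      }
  }

  class Driver
  {
      static void Main()
      {
          var rates = new LegacyRates();
          decimal premium = rates.ComputePremium(45, 100000m);
          Console.WriteLine("Premium: {0:N2}", premium);
      }
  }

No wrappers, no C glue layer - the COBOL output is just another .NET class.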

John

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Another Light goes out

2012-09-26 Thread Roberts, John J
 On 2012-09-26 16:01, Ed Gould wrote:

  ELARDUS:

 I am NOT a UNISYS person.
 But from reading a little I think they can simulate CICS (supposedly 
 programs run unchanged).


 Can you provide any reference for the above?


Confusion between UniKix (now Clerity) and Unisys?

I can think of two other possibilities:
(1) In February 2011, UNISYS started marketing a Cloud Based Application 
Modernization service.  But I don't think this involves anything to emulate IBM 
CICS.
(2) The Washington State Dept of Licensing (i.e. their DMV) did a migration 
from a UNISYS Clearpath environment to Windows DotNet back in 2003-2005.  This 
migration used the same Fujitsu COBOL compiler as was used by Ohio.  To do this 
migration we leveraged the Fujitsu product that provided CICS API emulation.  
So the transformation was two stage: first make the UNISYS Online COBOL code 
look like CICS, then migrate using the CICS emulation.  While this worked, it 
introduced unnecessary complexity and was later simplified to eliminate the 
CICS code.  I was the technical architect for this project.

John

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Preventing the installation of unapproved software

2012-09-05 Thread Roberts, John J
I *HATE* checklist auditors. This sounds like a WINTEL based checklist

It does indeed sound like the auditor is applying Wintel security principles 
to a mainframe system.

The right questions to ask re mainframe security are:
(1) Are the users properly authenticated?
(2) Is the data properly protected by security manager profiles?
(3) Is the connection between user groups and data security profiles properly 
set up and managed?
(4) Is there any way that the data security protection can be circumvented?  
This is where one aspect of unauthorized programs arises (e.g. APF 
authorization).
(5) Is there proper management of the application production libraries 
including  controls over who can modify these libraries?  This is where a 
second aspect of unauthorized programs arises.

If the auditor is thinking that some one-off COBOL program or REXX script 
sitting in a TSO user's own library is a danger, then he/she is not qualified.

John

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN