Re: EXSPATxx member
On Mon, 6 Apr 2015 03:48:07 -0400, Jim Mulder wrote: There is no way to specify endless spinning.

No doubt a good thing. I do recall once placing a CPU in a hard stop - it caused all the other CPUs to go into spin loops. Back in the days of running on the bare metal. Fortunately I had the machine to myself ;-)

Shane ...

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: AMATERSE and PDSE ?
On Tue, 7 Apr 2015 03:59:30 -0500, Juergen Kehr wrote:

Hello, I'm a little bit confused about the topic named in the subject of this thread. We're using z/OS V1.13 and I have successfully tersed/untersed several PDSE datasets, but now I get a RC=40 during UNTERSE of a PDSE (Load) LIBRARY. In various documentation I found statements that PDSE is supported nowadays, so why does this UNTERSE fail? Is there any special condition for PDSE (Load) Libraries (RECFM=U) compared to other PDSE libraries (RECFM=FB or VB)? Any help appreciated.

http://publibfp.dhe.ibm.com/cgi-bin/bookmgr/BOOKS/iea2v1c2/17.4.1
"Partitioned data sets extended (PDSE) containing program objects are not supported."

Norbert Friemel
AMATERSE and PDSE ?
Hello, I'm a little bit confused about the topic named in the subject of this thread. We're using z/OS V1.13 and I have successfully tersed/untersed several PDSE datasets, but now I get a RC=40 during UNTERSE of a PDSE (Load) LIBRARY. In various documentation I found statements that PDSE is supported nowadays, so why does this UNTERSE fail? Is there any special condition for PDSE (Load) Libraries (RECFM=U) compared to other PDSE libraries (RECFM=FB or VB)? Any help appreciated.

Kind regards
Juergen
Re: AMATERSE and PDSE ?
What error message are you getting? Can you post the error? Could you post the message from the TERSE process?

Lizette

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of Juergen Kehr
Sent: Tuesday, April 07, 2015 2:00 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: AMATERSE and PDSE ?

Hello, I'm a little bit confused about the topic named in the subject of this thread. We're using z/OS V1.13 and I have successfully tersed/untersed several PDSE datasets, but now I get a RC=40 during UNTERSE of a PDSE (Load) LIBRARY. In various documentation I found statements that PDSE is supported nowadays, so why does this UNTERSE fail? Is there any special condition for PDSE (Load) Libraries (RECFM=U) compared to other PDSE libraries (RECFM=FB or VB)? Any help appreciated.

Kind regards
Juergen
Re: Secure FTP
Richards, Robert B. wrote: Norbert's suggestions worked.

Great! Which one worked for you? Or did you need both of them? (Norbert wrote about: FWFRIENDLY TRUE and/or EPSV4 TRUE)

Groete / Greetings
Elardus Engelbrecht
Re: Secure FTP
Norbert's suggestions worked. Thanks to Kurt and Norbert!! :-)

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of Kurt Quackenbush
Sent: Monday, April 06, 2015 9:44 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Secure FTP

snip
GET 12345678910/PROD/GIMPAF.XML /xx/xx/xx/x/GIMPAF.XML (REPLACE
EZA1701I TYPE I
200 Type set to I.
EZA1460I Command:
EZA1701I PORT nn,n,nn,n,nn,nnn
EZA2589E Connection to server interrupted or timed out. Waiting for reply
EZA1721W Server not responding, closing connection.
EZA1636I *** I can't open a data-transfer connection:
EZA1735I Std Return Code = 16000, Error Code = 9

You were able to make a secure connection, so that's a good thing. However, I'm thinking you have a firewall getting in the way of the data connection. As already suggested, try using passive mode (EPSV4 TRUE). If you still have trouble, you may have to get help from Comm Server Level 2 to study an IP trace.

Kurt Quackenbush -- IBM, SMP/E Development
Re: Backup of Uncatalogued dataset
Couldn't you just do a DFDSS dump, selecting the uncatalogued file and pointing the dump at the volume where the uncatalogued dataset resides? Something like this (add your own jobcard):

//DUMPDSN  EXEC PGM=ADRDSSU,REGION=8M,PARM='TYPRUN=NORUN'
//SYSPRINT DD SYSOUT=*
//*UTDD1   DD DUMMY
//INDD1    DD UNIT=SYSDA,DISP=SHR,VOL=SER=volser of disk volume
//OUTDD1   DD DSN=SYS1.XX.KUP,DISP=(,CATLG),UNIT=SYSDA,
//            RETPD=365,SPACE=(CYL,(1000,500),RLSE)
//SYSIN    DD *
  DUMP DATASET(INCLUDE( -
       Datasetname -
       )) -
       INDDNAME(INDD1) OUTDDNAME(OUTDD1) TOL(ENQF) CANCELERROR OPT(4)
/*

Note that PARM='TYPRUN=NORUN' makes this a trial run; remove it to actually perform the dump.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of Jake Anderson
Sent: Tuesday, April 07, 2015 9:36 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Backup of Uncatalogued dataset

Hello, I am looking for a sample JCL for backing up an uncatalogued dataset alone. Does anyone have a working JCL?

Jake
Backup of Uncatalogued dataset
Hello, I am looking for a sample JCL for backing up an uncatalogued dataset alone. Does anyone have a working JCL?

Jake
Re: Backup of Uncatalogued dataset
Jake Anderson wrote: I am looking for a sample JCL ...

Do you have access to the DFDSS manuals? There are good JCL samples there.

... on backing up the uncatalogued dataset alone.

Do you mean you only want to back up non-SMS datasets, including PS, PDS, etc.?

Is there anyone who have a working JCL ?

You've got a good sample JCL kindly provided by Todd Burrell. Just remember that the JCL sample makes a backup to disk, not to tape, resulting in a cataloged dataset containing your backup.

Groete / Greetings
Elardus Engelbrecht
Re: A New Performance Model ?
Farley, Peter wrote: Though your [Timothy] points are well spoken and reasoned, you still did not address the basic organizational issue the OP is facing.

Agreed 100%. As far as I see, the OP is sitting somewhere very uncomfortable and wishing for quick relief.

I agree there may be a reason to upgrade to new hardware (examples: using a new ARCH level, using a fancy set of new features, or just using larger tables as suggested earlier), but, first things first: I still want to see that the first attempts to install new things are done on a sandbox and only then moved to a production system without adding excessive resource usage.

In the end, I agree with all posters, but I think we all should wait for the OP to clarify his position. In hindsight, I believe the OP is probably experiencing some known problem or overlooking something obvious.

On the other hand, said executive management may be of the same blindered type as John McKown has told us he suffers under, in which case all bets are off and the devil takes the hindmost.

Agreed.

Groete / Greetings
Elardus Engelbrecht
Re: AMATERSE and PDSE ?
On 7 April 2015 at 05:29, Norbert Friemel nf.ibmm...@web.de wrote:
On Tue, 7 Apr 2015 03:59:30 -0500, Juergen Kehr wrote: [...] Is there any special condition for PDSE (Load) Libraries (RECFM=U) compared to other PDSE libraries (RECFM=FB or VB)?
http://publibfp.dhe.ibm.com/cgi-bin/bookmgr/BOOKS/iea2v1c2/17.4.1
"Partitioned data sets extended (PDSE) containing program objects are not supported."

AMATERSE uses its own scheme for unloading and transmitting the PDS[E] directory, rather than doing what most such programs do, which is to invoke IEBCOPY for this part of the work. IEBCOPY will invoke the Binder to process program objects, and the Binder is the only authorized tool to do so. So AMATERSE runs into trouble when it goes to write program objects into the receiving PDSE (or presumably it checks and issues a message in this case).

The simple answer is to run the IEBCOPY unload first on your own, and then AMATERSE the resulting sequential dataset. And of course the reverse at the receiving end. Alternatively, and more generally, you could run ADRDSSU to do a logical dump of your dataset, AMATERSE that, and again reverse things at the other end.

Tony H.
Re: AMATERSE and PDSE ? (and IEBCOPY and SMP/E)
I used TRANSMIT on a PDSE, downloaded it as binary, re-uploaded it as binary, and TSO RECEIVEd it; everything was fine. This was z/OS 1.13.

Scott

On Tuesday, April 7, 2015, Paul Gilmartin 000433f07816-dmarc-requ...@listserv.ua.edu wrote:

On Tue, 7 Apr 2015 04:29:15 -0500, Norbert Friemel wrote:
On Tue, 7 Apr 2015 03:59:30 -0500, Juergen Kehr wrote: ... I get a RC=40 during UNTERSE of a PDSE (Load) LIBRARY. ...
http://publibfp.dhe.ibm.com/cgi-bin/bookmgr/BOOKS/iea2v1c2/17.4.1
"Partitioned data sets extended (PDSE) containing program objects are not supported."

For Lizette and Scott, what more is needed than a citation of a manual stating that the operation is not supported? (But is the diagnostic message lucid?)

I had expected AMATERSE to use IEBCOPY internally. But that would require a (large) workfile. So the programmer must IEBCOPY unload the PDSE, then AMATERSE the PDSU, so that the programmer assumes the onus of the workfile. I believe TSO TRANSMIT uses IEBCOPY. IIRC, there have been reports of TRANSMIT failing for an underallocated workfile. (I also suspect that IEBCOPY uses the Program Management API to deal with program objects.)

It's a pity that IEBCOPY can't use POSIX pipes for only its PDSU data sets. That would allow AMATERSE and TRANSMIT to use IEBCOPY with only trivial workfiles, piping IEBCOPY PDSU directly into the utility. And SMP/E keeps its SMPNTS in IEBCOPY PDSU copied to UNIX .tar.Z files, which it must first unzip to SMPWKDIR, then copy to DSORG=PS, and finally reload with IEBCOPY. Two (large) workfiles. IEBCOPY would do AMATERSE, TRANSMIT and SMP/E a favor by supporting UNIX files (including pipes) as its PDSU. (SMP/E also performs some gyrations to break up large concatenations of SMPPTFIN data sets; I believe it DYNALLOCs directly from SMPWKDIR, not requiring a copy to DSORG=PS.) Even that storage requirement could be reduced by allocating SMPPTFIN to the output of a single POSIX pipe and feeding that with the unzipped .tar.Z files one by one, deleting each before unzipping the next. (But would that complicate error reporting?)

IIRC, AMATERSE has a restriction that a PDS cannot be PACKed directly to a tape, but must first be PACKed to DASD (another (large) workfile), then copied to tape. I suspect this restriction arises from a need to POINT to a prologue block and update it in place at the end of the operation. I know IEBCOPY PDSU also contains a prologue. I wonder how IEBCOPY generates that in a single pass? Does it perform a trial scan of the PDS?

-- gil
Re: CP to a fixed length output file
Dana,

Use the TRAILINGBLANKS TRUE parm as explained here:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/f1a1b4b1/18.154

Thanks,
Kolusu

IBM Mainframe Discussion List IBM-MAIN@LISTSERV.UA.EDU wrote on 04/07/2015 09:32:42 AM:
From: Dana Mitchell mitchd...@gmail.com
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: CP to a fixed length output file

I'm working with developers to convert FTP jobs to use secure FTP. The first step is to cp the MVS dataset to a USS directory so sftp can transfer it. They noticed when transferring an FB 80 file that trailing blanks have been stripped off. Is there some combination of cp options that would keep trailing blanks intact? I've tried -B, -F crlf etc. to no avail.

Dana
Re: A New Performance Model ?
Elardus,

I agree with you 100%. Maybe they need a second pair of eyes to review the design. I know I do, and I will bet other software designers and system programmers do. A second pair of eyes is like a doctor's second opinion. Like you mentioned, something was missed and the easy out was a mainframe upgrade. I agree with everyone on this one; sometimes it's lack of experience too.

Regards,
Scott
www.idmworks.com

On Tuesday, April 7, 2015, Elardus Engelbrecht elardus.engelbre...@sita.co.za wrote:

Farley, Peter wrote: Though your [Timothy] points are well spoken and reasoned, you still did not address the basic organizational issue the OP is facing.

Agreed 100%. As far as I see, the OP is sitting somewhere very uncomfortable and wishing for quick relief. I agree there may be a reason to upgrade to new hardware (examples: using a new ARCH level, using a fancy set of new features, or just using larger tables as suggested earlier), but, first things first: I still want to see that the first attempts to install new things are done on a sandbox and only then moved to a production system without adding excessive resource usage. In the end, I agree with all posters, but I think we all should wait for the OP to clarify his position. In hindsight, I believe the OP is probably experiencing some known problem or overlooking something obvious. On the other hand, said executive management may be of the same blindered type as John McKown has told us he suffers under, in which case all bets are off and the devil takes the hindmost. Agreed.

Groete / Greetings
Elardus Engelbrecht
Re: AMATERSE and PDSE ? (and IEBCOPY and SMP/E)
On Tue, 7 Apr 2015 04:29:15 -0500, Norbert Friemel wrote:
On Tue, 7 Apr 2015 03:59:30 -0500, Juergen Kehr wrote: ... I get a RC=40 during UNTERSE of a PDSE (Load) LIBRARY. ...
http://publibfp.dhe.ibm.com/cgi-bin/bookmgr/BOOKS/iea2v1c2/17.4.1
"Partitioned data sets extended (PDSE) containing program objects are not supported."

For Lizette and Scott, what more is needed than a citation of a manual stating that the operation is not supported? (But is the diagnostic message lucid?)

I had expected AMATERSE to use IEBCOPY internally. But that would require a (large) workfile. So the programmer must IEBCOPY unload the PDSE, then AMATERSE the PDSU, so that the programmer assumes the onus of the workfile. I believe TSO TRANSMIT uses IEBCOPY. IIRC, there have been reports of TRANSMIT failing for an underallocated workfile. (I also suspect that IEBCOPY uses the Program Management API to deal with program objects.)

It's a pity that IEBCOPY can't use POSIX pipes for only its PDSU data sets. That would allow AMATERSE and TRANSMIT to use IEBCOPY with only trivial workfiles, piping IEBCOPY PDSU directly into the utility. And SMP/E keeps its SMPNTS in IEBCOPY PDSU copied to UNIX .tar.Z files, which it must first unzip to SMPWKDIR, then copy to DSORG=PS, and finally reload with IEBCOPY. Two (large) workfiles. IEBCOPY would do AMATERSE, TRANSMIT and SMP/E a favor by supporting UNIX files (including pipes) as its PDSU. (SMP/E also performs some gyrations to break up large concatenations of SMPPTFIN data sets; I believe it DYNALLOCs directly from SMPWKDIR, not requiring a copy to DSORG=PS.) Even that storage requirement could be reduced by allocating SMPPTFIN to the output of a single POSIX pipe and feeding that with the unzipped .tar.Z files one by one, deleting each before unzipping the next. (But would that complicate error reporting?)

IIRC, AMATERSE has a restriction that a PDS cannot be PACKed directly to a tape, but must first be PACKed to DASD (another (large) workfile), then copied to tape. I suspect this restriction arises from a need to POINT to a prologue block and update it in place at the end of the operation. I know IEBCOPY PDSU also contains a prologue. I wonder how IEBCOPY generates that in a single pass? Does it perform a trial scan of the PDS?

-- gil
CP to a fixed length output file
I'm working with developers to convert FTP jobs to use secure FTP. The first step is to cp the MVS dataset to a USS directory so sftp can transfer it. They noticed when transferring an FB 80 file that trailing blanks have been stripped off. Is there some combination of cp options that would keep trailing blanks intact? I've tried -B, -F crlf etc. to no avail.

Dana
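Independent of which cp or sftp option finally preserves the blanks, a stripped file can also be repaired after the fact by re-padding every record back to the fixed LRECL. A minimal sketch; the 80-byte LRECL and the file names are illustrative assumptions, not anything from the thread:

```python
# Re-pad text records to a fixed record length (e.g. LRECL=80),
# restoring trailing blanks stripped by a text-mode copy.
# LRECL and paths are illustrative assumptions.
LRECL = 80

def repad(in_path: str, out_path: str, lrecl: int = LRECL) -> None:
    with open(in_path, "r") as src, open(out_path, "w") as dst:
        for line in src:
            rec = line.rstrip("\n")
            if len(rec) > lrecl:
                raise ValueError(f"record longer than LRECL {lrecl}: {rec!r}")
            dst.write(rec.ljust(lrecl) + "\n")  # pad with blanks to LRECL
```

This only helps for text data, of course; a binary transfer that never strips the blanks in the first place is the cleaner fix.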
Re: AMATERSE and PDSE ?
I agree with Lizette; can you provide us a snippet of the error and the JCL, please?

On Tuesday, April 7, 2015, Lizette Koehler stars...@mindspring.com wrote:

What error message are you getting? Can you post the error? Could you post the message from the TERSE process?

Lizette

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of Juergen Kehr
Sent: Tuesday, April 07, 2015 2:00 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: AMATERSE and PDSE ?

Hello, I'm a little bit confused about the topic named in the subject of this thread. We're using z/OS V1.13 and I have successfully tersed/untersed several PDSE datasets, but now I get a RC=40 during UNTERSE of a PDSE (Load) LIBRARY. In various documentation I found statements that PDSE is supported nowadays, so why does this UNTERSE fail? Is there any special condition for PDSE (Load) Libraries (RECFM=U) compared to other PDSE libraries (RECFM=FB or VB)? Any help appreciated.

Kind regards
Juergen
Re: Backup of Uncatalogued dataset
What tool are you using for normal backup and recovery? Doesn't this tool handle uncatalogued datasets?

Do you want the backup version to be part of your backup and recovery system, or are you looking to have a recovery method independent of your normal system? If so, why?

You implied there is only one dataset of interest. Is this correct? If not, how many? Is there a convenient way to identify them (naming convention, segregated to special packs, etc.)? What types of datasets (sequential, PDS, PDSE (with or without program objects), VSAM)?

There are lots of tools that can copy or dump datasets. The correct one depends on what your objective is.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of Jake Anderson
Sent: Tuesday, April 07, 2015 6:36 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Backup of Uncatalogued dataset

Hello, I am looking for a sample JCL for backing up an uncatalogued dataset alone. Does anyone have a working JCL?
Re: Backup of Uncatalogued dataset
We use FDR and I was trying to back up a PDSE.

On 7 Apr 2015 21:58, retired mainframer retired-mainfra...@q.com wrote:

What tool are you using for normal backup and recovery? Doesn't this tool handle uncatalogued datasets? Do you want the backup version to be part of your backup and recovery system, or are you looking to have a recovery method independent of your normal system? If so, why? You implied there is only one dataset of interest. Is this correct? If not, how many? Is there a convenient way to identify them (naming convention, segregated to special packs, etc.)? What types of datasets (sequential, PDS, PDSE (with or without program objects), VSAM)? There are lots of tools that can copy or dump datasets. The correct one depends on what your objective is.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of Jake Anderson
Sent: Tuesday, April 07, 2015 6:36 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Backup of Uncatalogued dataset

Hello, I am looking for a sample JCL for backing up an uncatalogued dataset alone. Does anyone have a working JCL?
Re: A New Performance Model ?
idfzos...@gmail.com (Scott Ford) writes: Agree with you 100%. Maybe they need a second pair of eyes to review the design. I know I do and I will bet other software designers and system programmers do. A second pair of eyes is like a doctor's second opinion. Like you mentioned, something was missed and the easy out was a mainframe upgrade. I agree with everyone on this one, sometimes it's lack of experience too.

The IBM Science Center pioneered a lot of performance methodologies in the '60s and '70s: hot-spot monitoring, system modeling, multiple regression analysis, etc. Some of the system modeling work eventually evolved into capacity planning. One of the system models was an analytical model done in APL. The APL model evolved into the Performance Predictor, available on the world-wide sales and marketing support HONE system. A branch office could obtain customer workload and system profile data, feed it into the Performance Predictor, and ask what-if questions (what happens if the workload changes, the system configuration changes, more disks, more memory, etc.; a major objective was justifying selling more hardware).

Around the start of the century I ran into a consultant who was making a living from performance consulting to large mainframe datacenters in Europe and the US. During IBM's downturn in the early 90s, IBM was unloading some amount of its stuff, and this consultant had obtained the rights to a descendant of the Performance Predictor and ran it through an APL-to-C language converter. We met at a large datacenter that had a 450KLOC COBOL program that ran every night on 40+ max-configured mainframes (constantly being upgraded, none older than 18 months, the number required for the application to finish in the overnight batch window). The application had a few dozen people in its performance department who had been working on it for decades, primarily using hot-spot methodology. Hot-spot tends to shine light on sections that need logic examination for doing things better, working primarily with logic at the micro level.

The modeling work fed in workload and system activity data and identified areas that resulted in a 7% improvement. I then used multiple regression analysis with application activity data to spotlight some macro-level logic that resulted in a 14% improvement. Remember that this is an application with a dedicated performance group of dozens of people who had been working with it for decades (but primarily using hot-spot methodology, which tends to focus on micro-level logic).

--
virtualization experience starting Jan1968, online at home since Mar1970
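The multiple-regression step Lynn describes amounts to fitting resource consumption against activity counters to see which workload components dominate. A minimal ordinary-least-squares sketch via the normal equations; the counter names and all the numbers are invented for illustration, not from the datacenter in the story:

```python
# Ordinary least squares via the normal equations (X^T X) b = X^T y,
# the kind of multiple regression used to relate activity counters
# (e.g. transactions, I/O requests per interval) to CPU consumption.
# All data below is invented illustrative data.

def ols(X, y):
    """Fit y ~ X by solving the normal equations with Gaussian elimination."""
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    for col in range(n):                      # forward elimination, partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * n                          # back substitution
    for i in reversed(range(n)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, n))) / A[i][i]
    return beta

# per-interval counters: [transactions, I/O requests]
X = [[100.0, 500.0], [200.0, 700.0], [150.0, 600.0], [300.0, 900.0], [250.0, 850.0]]
# CPU seconds generated as 0.3 per transaction + 0.05 per I/O, for a clean check
y = [0.3 * t + 0.05 * io for t, io in X]
print(ols(X, y))  # coefficients close to [0.3, 0.05]
```

With real SMF-style data the fit is noisy, of course; the point is that an unexpectedly large coefficient flags the activity worth a macro-level look.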
Re: AMATERSE and PDSE ? (and IEBCOPY and SMP/E)
On 7 April 2015 at 12:22, Paul Gilmartin 000433f07816-dmarc-requ...@listserv.ua.edu wrote:
IIRC, AMATERSE has a restriction that a PDS can not be PACKed directly to a tape, but must first be PACKed to DASD (another (large) workfile), then copied to tape.

I don't know IYRC, but it sounds like one of those many arbitrary restrictions to do with UNIX files on z/OS.

I suspect this restriction arises from a need to POINT to a prologue block and update it in place at the end of the operation.

Perhaps, but I don't think so. I have looked at a number of header blocks output by various implementations of the terse algorithm, and at most they seem to be 12 bytes, containing no information that is unavailable before the start of compression. In particular, there is nothing about the compressed size or number of symbols. But that's the overall header I'm talking about; for tersed PDS[E]s there is a following member directory of some sort, and I suppose that might need updating after compressing. But surely if that's the case it can only be to allow selective decompression of members (by providing a member offset into the compressed stream), and I don't think that's supported. And in any case, this member directory is itself compressed, so I think there is little chance that any header information is updated based on anything known only after compression.

There is also a trailer block of some sort, but I haven't tried to analyse it beyond noticing that it contains a time stamp, and that it can be removed from the end of the compressed data without causing decompression to complain. I was interested in the header only as part of identifying its magic value, which seems close to impossible. It is, however, possible to use the header to sanity check a putative tersed file against claims made about it by the sender. If the lrecl, blocksize, and recfm match what is expected, there's a reasonable chance that it hasn't been ASCII-corrupted or otherwise damaged in transmission, and is worth a trial decompression.

One wonders why AMATERSE is still in use. The terse algorithm (IBM's expired US patent 4814746) has properties that suit it best to dynamic use in devices like modems, where the data cannot be analyzed in advance. There are more efficient and widely available compression algorithms and implementations, including some with support in IBM hardware and/or millicode.

Tony H.
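Tony's sanity-check idea can be sketched generically: read the fixed-size header and compare the dataset attributes it records against what the sender claims, before bothering with a trial decompression. The 12-byte layout below (flags, recfm code, lrecl, blksize as big-endian fields) is a purely hypothetical stand-in, not the documented AMATERSE header format:

```python
# Sanity-check a putative tersed file against sender-claimed attributes.
# HYPOTHETICAL header layout for illustration only: 12 bytes as
# >HHII = flags, recfm code, lrecl, blksize (big-endian). The real
# AMATERSE header layout is not documented here.
import struct

def plausible_terse(path: str, lrecl: int, blksize: int) -> bool:
    with open(path, "rb") as f:
        hdr = f.read(12)
    if len(hdr) < 12:
        return False  # too short to carry even the header
    _flags, _recfm, hdr_lrecl, hdr_blksize = struct.unpack(">HHII", hdr)
    return hdr_lrecl == lrecl and hdr_blksize == blksize
```

An ASCII-corrupted transfer scrambles binary fields like these, so a mismatch is a cheap early warning that a trial decompression is pointless.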
Re: A New Performance Model ?
I've been a perf/cap analyst since 1981, and I can unequivocally state that the original statement is specious! Upgrades are cheaper than they were, but they're still not free! And I've been fighting against capacity-based pricing since tiers were introduced in 1984.

It is still cheaper to write/test/debug/tune code before it goes into production. And outage-causing abends are unacceptable in any case. You really need either new programmers, or management, or a crapload of commiseration!

-teD

- Original Message -
From: Scott Ford
Sent: Tuesday, April 7, 2015 12:25
To: IBM-MAIN@LISTSERV.UA.EDU
Reply To: IBM Mainframe Discussion List
Subject: Re: A New Performance Model ?

Elardus, I agree with you 100%. Maybe they need a second pair of eyes to review the design. I know I do and I will bet other software designers and system programmers do. A second pair of eyes is like a doctor's second opinion. Like you mentioned, something was missed and the easy out was a mainframe upgrade. I agree with everyone on this one, sometimes it's lack of experience too.

Regards,
Scott
www.idmworks.com

On Tuesday, April 7, 2015, Elardus Engelbrecht elardus.engelbre...@sita.co.za wrote:

Farley, Peter wrote: Though your [Timothy] points are well spoken and reasoned, you still did not address the basic organizational issue the OP is facing.

Agreed 100%. As far as I see, the OP is sitting somewhere very uncomfortable and wishing for quick relief. I agree there may be a reason to upgrade to new hardware (examples: using a new ARCH level, using a fancy set of new features, or just using larger tables as suggested earlier), but, first things first: I still want to see that the first attempts to install new things are done on a sandbox and only then moved to a production system without adding excessive resource usage. In the end, I agree with all posters, but I think we all should wait for the OP to clarify his position. In hindsight, I believe the OP is probably experiencing some known problem or overlooking something obvious. On the other hand, said executive management may be of the same blindered type as John McKown has told us he suffers under, in which case all bets are off and the devil takes the hindmost. Agreed.

Groete / Greetings
Elardus Engelbrecht
Re: CP to a fixed length output file
FTPS (SSL FTP) is not hard to set up and deals with ordinary z/OS datasets just fine.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of John McKown
Sent: Tuesday, April 07, 2015 11:17 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: CP to a fixed length output file

On Tue, Apr 7, 2015 at 11:32 AM, Dana Mitchell mitchd...@gmail.com wrote:

I'm working with developers to convert FTP jobs to use secure FTP. The first step is to cp the MVS dataset to a USS directory so sftp can transfer it. They noticed when transferring an FB 80 file that trailing blanks have been stripped off. Is there some combination of cp options that would keep trailing blanks intact? I've tried -B, -F crlf etc. to no avail.

Dana

Pushing my favorite, __freely licensed__, product, I would suggest looking at the sftp enhancement from Dovetailed Technologies:
http://dovetail.com/products/sftp.html

This builds on top of the IBM OpenSSH server code. It allows sftp to access z/OS UNIX files (as always) and also z/OS data sets and the JES SPOOL as well. Of course you still need to tell the sftp server to not strip trailing blanks, as already mentioned by Sri.

--
If you sent twitter messages while exploring, are you on a textpedition? He's about as useful as a wax frying pan. 10 to the 12th power microphones = 1 Megaphone Maranatha!
John McKown
Re: AMATERSE and PDSE ? (and IEBCOPY and SMP/E)
Messages help us to understand what your problem is. However, this link from 2014 on www.ibm.com indicates a TSO XMIT or PDS/E UNLOAD, then AMATERSE the dataset. https://www-304.ibm.com/connections/blogs/SterlingMFT/entry/how_to_terse_a_pdse_library?lang=en_us And from this link http://techsupport.services.ibm.com/390/trsmain.html (AMATERSE from z/OS V1.8 - don't know the level of your z/OS system): PARTITIONED dataset Extended (PDSE) datasets, VSAM Datasets, DA datasets, and ISAM datasets are not supported. And from the z/OS V2.1 link http://www-01.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1.ieav100/terse.htm : partitioned data sets extended (PDSE) that do not contain program objects. Further restrictions: The following restrictions apply to AMATERSE: VSAM data sets and direct (DSORG=DA) data sets are not supported. Data sets with keys (KEYLEN) are not supported. A partitioned data set (PDS) compressed by AMATERSE on MVS™ cannot be unpacked by VM TERSE. This results in a 1007 or 1009 return code from VM TERSE. A PDS must be compressed to a DASD. Partitioned data sets extended (PDSE) containing program objects are not supported. AMATERSE handles data sets with an LRECL of more than 32K but less than 64K only when RECFM=VBS DASD data sets are processed. A data set with the FB record format can be packed and unpacked to a FBS data set. However, during the UNPACK operation, extending a non-empty output data set with DISP=MOD is not possible because this results in a FB data set. An error message is issued for this. AMATERSE does not support large block interface (LBI). So over time AMATERSE has come to support PDS/E, just not those with Program Objects. Do you have Program Objects in the PDS/E you are trying to use? Lizette -Original Message- From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of Paul Gilmartin Sent: Tuesday, April 07, 2015 9:23 AM To: IBM-MAIN@LISTSERV.UA.EDU Subject: Re: AMATERSE and PDSE ? 
(and IEBCOPY and SMP/E) On Tue, 7 Apr 2015 04:29:15 -0500, Norbert Friemel wrote: On Tue, 7 Apr 2015 03:59:30 -0500, Juergen Kehr wrote: ... I get a RC=40 during UNTERSE of a PDSE (Load) LIBRARY. ... http://publibfp.dhe.ibm.com/cgi-bin/bookmgr/BOOKS/iea2v1c2/17.4.1 Partitioned data sets extended (PDSE) containing program objects are not supported. For Lizette and Scott, what more is needed than a citation of a manual stating that the operation is not supported? (But is the diagnostic message lucid?) I had expected AMATERSE to use IEBCOPY internally. But that would require a (large) workfile. So the programmer must IEBCOPY unload the PDSE, then AMATERSE the PDSU, so that the programmer assumes the onus of the workfile. I believe TSO TRANSMIT uses IEBCOPY. IIRC, there have been reports of TRANSMIT failing for an underallocated workfile. (I also suspect that IEBCOPY uses the Program Management API to deal with Program Objects.) It's a pity that IEBCOPY can't use POSIX pipes for its PDSU data sets. That would allow AMATERSE and TRANSMIT to use IEBCOPY with only trivial workfiles, piping IEBCOPY PDSU directly into the utility. And SMP/E keeps its SMPNTS in IEBCOPY PDSU copied to UNIX .tar.Z files, which it must first unzip to SMPWKDIR, then copy to DSORG=PS, and finally reload with IEBCOPY. Two (large) workfiles. IEBCOPY would do AMATERSE, TRANSMIT and SMP/E a favor by supporting UNIX files (including pipes) as its PDSU. (SMP/E also performs some gyrations to break up large concatenations of SMPPTFIN data sets; I believe it DYNALLOCs directly from SMPWKDIR, not requiring a copy to DSORG=PS.) Even that storage requirement could be reduced by allocating SMPPTFIN to the output of a single POSIX pipe and feeding that with the unzipped .tar.Z files one-by-one, deleting each before unzipping the next. (But would that complicate error reporting?) 
IIRC, AMATERSE has a restriction that a PDS can not be PACKed directly to a tape, but must first be PACKED to DASD (another (large) workfile), then copied to tape. I suspect this restriction arises from a need to POINT to a prologue block and update it in place at the end of the operation. I know IEBCOPY PDSU also contains a prologue. I wonder how IEBCOPY generates that in a single pass? Does it perform a trial scan of the PDS? -- gil -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
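[Editorial aside] Gil's guess about the prologue can be illustrated with a short sketch. The Python below uses a purely hypothetical container format (not the real AMATERSE or IEBCOPY PDSU layout) to show the write-placeholder, seek-back, update-in-place technique that would explain a DASD-only restriction:

```python
import io
import struct

def pack_with_prologue(records):
    """Write a hypothetical container: a 4-byte record-count prologue
    that is filled in only after all records are written, by seeking
    back and updating it in place. Illustration only -- this is NOT
    the actual AMATERSE or IEBCOPY PDSU format."""
    buf = io.BytesIO()
    buf.write(struct.pack(">I", 0))             # placeholder prologue
    count = 0
    for rec in records:
        buf.write(struct.pack(">H", len(rec)))  # 2-byte length prefix
        buf.write(rec)
        count += 1
    buf.seek(0)                                 # POINT back to the prologue...
    buf.write(struct.pack(">I", count))         # ...and update it in place
    return buf.getvalue()

packed = pack_with_prologue([b"ABC", b"DEFG"])
# the first 4 bytes now hold the final record count (2)
```

The seek(0) step requires a medium that allows repositioning and rewriting in place. A sequentially written tape does not, which would be consistent with the stated requirement to PACK to DASD first and copy to tape afterward.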
Re: AMATERSE and PDSE ? (and IEBCOPY and SMP/E)
On Tue, 7 Apr 2015 10:13:57 -0700, Lizette Koehler wrote: Messages help us to Understand what your problem is. However, Agreed. Do you have Program Objects in the PDS/E you are trying to use? From what you quoted: On Tue, 7 Apr 2015 03:59:30 -0500, Juergen Kehr wrote: ... I get a RC=40 during UNTERSE of a PDSE (Load) LIBRARY. ... ... I would infer, Yes. (Although Load might mean loaded by IEBCOPY, but not containing Program Objects.) -- gil -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
CA-Endevor
A quick google search and search of the archives came up with not a whole lot...so I figured I would ask the list. Does anyone know of any good Endevor administration training that is available (format and cost do not matter, looking for any and all options at this time). If you can suggest anything feel free to reply here or contact me off list. Thanks in advance! -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: CP to a fixed length output file
On Tue, 7 Apr 2015 09:57:15 -0700, Sri h Kolusu wrote: Use TRAILINGBLANKS TRUE parm as explained here http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/f1a1b4b1/18.154? So, I try:

  Remote system type is MVS.
  ftp TRAILINGBLANKS TRUE
  ?Invalid command

(Well, of course; this was a Linux client.) So:

  ftp quote TRAILINGBLANKS TRUE
  500 unknown command TRAILINGBLANKS

Now what? Anyway, I'm a staunch advocate of the principle of minimal munging -- Leave the data as you found them. By default, don't remove trailing blanks -- if that's what the programmer wanted he could have used RECFM=VB. Don't add blanks, not even to empty lines (shame on ISPF EDIT!) There should be no need for _EDC_ZERO_RECLEN; if there's an empty record, keep it. Etc. -- gil -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: CP to a fixed length output file
On Tue, Apr 7, 2015 at 11:32 AM, Dana Mitchell mitchd...@gmail.com wrote: I'm working with developers to convert FTP jobs to use secure FTP. The first step is to cp the MVS dataset to a USS directory so sftp can transfer it. They noticed when transferring an FB 80 file that trailing blanks have been stripped off. Is there some combination of cp options that would keep trailing blanks intact? I've tried -B, -F crlf etc to no avail. Dana Pushing my favorite, __freely licensed__, product, I would suggest looking at using the sftp enhancement from Dovetailed Technologies: http://dovetail.com/products/sftp.html This builds on top of the IBM OpenSSH server code. It allows sftp to access z/OS UNIX files (as always) and also z/OS data sets and the JES SPOOL as well. Of course you still need to tell the sftp server to not strip trailing blanks, as already mentioned by Sri. -- If you sent twitter messages while exploring, are you on a textpedition? He's about as useful as a wax frying pan. 10 to the 12th power microphones = 1 Megaphone Maranatha! John McKown -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: CA-Endevor
I second the suggestion about the communities...I have found some to be very helpful, and have avoided opening a support ticket in one or two cases. I don't know the Endevor one specifically, but it would be a good place to start. Billy On Tue, Apr 7, 2015 at 3:20 PM, Lizette Koehler stars...@mindspring.com wrote: I would contact CA for that information. There is also a COMMUNITIES on the Support.ca.com website. You could join the Endevor community and get help there as well. https://communities.ca.com/community/ca-endevor Lizette -Original Message- From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of Nathan Pfister Sent: Tuesday, April 07, 2015 11:37 AM To: IBM-MAIN@LISTSERV.UA.EDU Subject: CA-Endevor A quick google search and search of the archives came up with not a whole lot...so I figured I would ask the list. Does anyone know of any good Endevor administration training that is available (format and cost do not matter, looking for any and all options at this time). If you can suggest anything feel free to reply here or contact me off list. Thanks in advance! -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN -- Thank you and best regards, *Billy Ashton* -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: CP to a fixed length output file
On 4/7/2015 12:32 PM, Dana Mitchell wrote: I'm working with developers to convert FTP jobs to use secure FTP. The first step is to cp the MVS dataset to a USS directory so sftp can transfer it. They noticed when transferring an FB 80 file that trailing blanks have been stripped off. Is there some combination of cp options that would keep trailing blanks intact? I've tried -B, -F crlf etc to no avail. I don't think cp is going to work out for you. From a usage note at the end of its man page, it sayeth: For MVS to UNIX: 3. For an MVS data set in variable record format RECFM(VB) or undefined record format RECFM(U), trailing blanks are preserved when copying from MVS to UNIX. For an MVS data set in fixed record format, trailing blanks are not preserved when copying from MVS to UNIX. Bob -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: CP to a fixed length output file
Why can't you use OCOPY to copy the file instead of cp? o For an MVS data set in fixed record format: Any line longer than the record size is truncated. If the line is shorter than the record size, the record is padded with blanks. o For an MVS data set in variable record format: Any line longer than the largest record size is truncated and the record length is set accordingly. A change in the record length also occurs if the line is short. -Original Message- From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of Bob Rutledge Sent: Tuesday, April 07, 2015 3:50 PM To: IBM-MAIN@LISTSERV.UA.EDU Subject: Re: CP to a fixed length output file On 4/7/2015 12:32 PM, Dana Mitchell wrote: I'm working with developers to convert FTP jobs to use secure FTP. The first step is to cp the MVS dataset to a USS directory so sftp can transfer it. They noticed when transferring an FB 80 file that trailing blanks have been stripped off. Is there some combination of cp options that would keep trailing blanks intact? I've tried -B, -F crlf etc to no avail. I don't think cp is going to work out for you. From a usage note at the end of the its man page, it sayeth: For MVS to UNIX: 3. For an MVS data set in variable record format RECFM(VB) or undefined record format RECFM(U), trailing blanks are preserved when copying from MVS to UNIX. For an MVS data set in fixed record format, trailing blanks are not preserved when copying from MVS to UNIX. Bob -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: AMATERSE and PDSE ?
I didn't go look ADRDSSU's manual, but here's an output of a RESTORE with RENAMEU that I've run some months ago: ADR711I (001)-NEWDS(01), DATA SET CSDOMVS.JAVA64.NACB.VZRES21.SAJV17L.ZFS HAS BEEN ALLOCATED WITH NEWNAME OMVS.JAVA64.NACB.VZRES21.SAJV17L.ZFS USING STORCLAS SCGSPAC, NO DATACLAS, AND MGMTCLAS MCNOACT ADR489I (001)-TDLOG(02), CLUSTER OMVS.JAVA64.NACB.VZRES21.SAJV17L.ZFS WAS RESTORED CATALOG SYS1.CATALOG.ERPMISC COMPONENT OMVS.JAVA64.NACB.VZRES21.SAJV17L.ZFS.DATA CSDOMVS.JAVA64.NACB.VZRES21.SAJV17L.ZFS is one of many datasets present in the dump dataset and I don't have CSDOMVS ALIAS defined to my system, and no RACF profile protecting it as well. As ADR711I indicates, it was allocated with a new name of OMVS.JAVA64.NACB.VZRES21.SAJV17L.ZFS. I do have OMVS ALIAS defined and a RACF profile protecting those datasets - and of course, I do have enough authority to work with them. It doesn't look like one needs READ access to both the dump dataset AND the original dataset. Or maybe RACF works differently when you don't have anything protecting the original dataset. --- *Lucas Rosalen* Emails: rosalen.lu...@gmail.com / *lrosa...@br.ibm.com lrosa...@br.ibm.com* LinkedIn: http://br.linkedin.com/in/lrosalen Phone: +55 19 9-8146-7633 2015-04-07 16:52 GMT-03:00 Paul Gilmartin 000433f07816-dmarc-requ...@listserv.ua.edu: On Tue, 7 Apr 2015 11:46:39 -0400, Tony Harminc t...@harminc.net wrote: AMATERSE uses its own scheme for unloading and transmitting the PDS[E] directory, rather than doing what most such programs do which is to invoke IEBCOPY for this part of the work. ... Sigh. The simple answer is to run IEBCOPY unload first on your own, and then AMATERSE the resulting sequential dataset. And of course the reverse at the receiving end. Subject to the constraint of large work files. Alternatively, and more generally, you could run ADRDSSU to do a logical dump of your dataset, AMATERSE that, and again reverse things at the other end. 
I once looked into ADRDSSU as an interchange vehicle. I stumbled on a statement in the Manual that the recipient (if renaming) must have READ access to the original DSN. This seems supremely stupid to me; READ access to the archive should suffice. But when I criticize ADRDSSU in this forum, the consensus reply is that only storage administrators should be using ADRDSSU at all and since I'm not a storage administrator I don't get a vote. -- gil -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: CA-Endevor
I would contact CA for that information. There is also a COMMUNITIES on the Support.ca.com website. You could join the Endevor community and get help there as well. https://communities.ca.com/community/ca-endevor Lizette -Original Message- From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of Nathan Pfister Sent: Tuesday, April 07, 2015 11:37 AM To: IBM-MAIN@LISTSERV.UA.EDU Subject: CA-Endevor A quick google search and search of the archives came up with not a whole lot...so I figured I would ask the list. Does anyone know of any good Endevor administration training that is available (format and cost do not matter, looking for any and all options at this time). If you can suggest anything feel free to reply here or contact me off list. Thanks in advance! -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: AMATERSE and PDSE ?
On Tue, 7 Apr 2015 11:46:39 -0400, Tony Harminc t...@harminc.net wrote: AMATERSE uses its own scheme for unloading and transmitting the PDS[E] directory, rather than doing what most such programs do which is to invoke IEBCOPY for this part of the work. ... Sigh. The simple answer is to run IEBCOPY unload first on your own, and then AMATERSE the resulting sequential dataset. And of course the reverse at the receiving end. Subject to the constraint of large work files. Alternatively, and more generally, you could run ADRDSSU to do a logical dump of your dataset, AMATERSE that, and again reverse things at the other end. I once looked into ADRDSSU as an interchange vehicle. I stumbled on a statement in the Manual that the recipient (if renaming) must have READ access to the original DSN. This seems supremely stupid to me; READ access to the archive should suffice. But when I criticize ADRDSSU in this forum, the consensus reply is that only storage administrators should be using ADRDSSU at all and since I'm not a storage administrator I don't get a vote. -- gil -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: AMATERSE and PDSE ?
On 7 April 2015 at 15:52, Paul Gilmartin 000433f07816-dmarc-requ...@listserv.ua.edu wrote: I once looked into ADRDSSU as an interchange vehicle. I stumbled on a statement in the Manual that the recipient (if renaming) must have READ access to the original DSN. This seems supremely stupid to me; READ access to the archive should suffice. But when I criticize ADRDSSU in this forum, the consensus reply is that only storage administrators should be using ADRDSSU at all and since I'm not a storage administrator I don't get a vote. There is at least one ADRDSSU-compatible program (in CBT file 860) that doesn't enforce the silly aspects of access checking. Yeah, I know - no one wants to run CBT stuff in production. Tony H. -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: AMATERSE and PDSE ?
The documentation clearly says that AMATERSE does not support Program Objects in a PDSE, but has anyone tried raising a PMR to get this caught at TERSE time rather than at UNTERSE time? regards, Anthony Fletcher - NZ MIITP Team Lead NZ SMM (AirNZ, Westpac NZ, NWM AU) IBM Strategic Outsourcing Delivery Server Systems Operations Server Management Mainframe Mainframe Software Program Manager NZ z/OS Technical Lead A/NZ Ph: Direct +64 4 576 8142, tieline 61 929 8142, ITN *869298142, mobile +64 21 464 864, Fax +64 4 576 5808. Internet: flet...@nz1.ibm.com, Sametime: flet...@nz1.ibm.com The biggest threat to effective communication is the belief that it has occurred Winners make commitments, Losers make promises From: Tony Harminc t...@harminc.net To: IBM-MAIN@LISTSERV.UA.EDU Date: 08/04/2015 03:47 Subject: Re: AMATERSE and PDSE ? Sent by: IBM Mainframe Discussion List IBM-MAIN@LISTSERV.UA.EDU On 7 April 2015 at 05:29, Norbert Friemel nf.ibmm...@web.de wrote: On Tue, 7 Apr 2015 03:59:30 -0500, Juergen Kehr wrote: [...] Is there any special condition for PDSE (Load) Libraries (RECFM=U) compared to other PDSE libraries (RECFM=FB or VB)? http://publibfp.dhe.ibm.com/cgi-bin/bookmgr/BOOKS/iea2v1c2/17.4.1 Partitioned data sets extended (PDSE) containing program objects are not supported. AMATERSE uses its own scheme for unloading and transmitting the PDS[E] directory, rather than doing what most such programs do, which is to invoke IEBCOPY for this part of the work. IEBCOPY will invoke the Binder to process Program Objects, and the Binder is the only authorized tool to do so. So AMATERSE runs into trouble when it goes to write Program Objects into the receiving PDSE (or presumably it checks and issues a message in this case). The simple answer is to run IEBCOPY unload first on your own, and then AMATERSE the resulting sequential dataset. And of course the reverse at the receiving end. 
Alternatively, and more generally, you could run ADRDSSU to do a logical dump of your dataset, AMATERSE that, and again reverse things at the other end. Tony H. -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: CP to a fixed length output file
Thanks for the suggestions, I'll give OCOPY or IEBGENER a try. I originally chose cp for convenience of the developers. They are replacing ftp steps with sftp steps in their batch jobs. I wanted to keep it at 'replace STEP A with STEP B'. So with the constraints of BPXBATCH being able to only execute one command, I created small scripts with a cp command to copy the file, then sftp or scp to send the file to the remote system. I'll need to be able to call OCOPY or IEBGENER from such a script. On Tue, 7 Apr 2015 20:21:33 +0000, Barkow, Eileen ebar...@doitt.nyc.gov wrote: Why can't you use OCOPY to copy the file instead of cp? -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: Backup of Uncatalogued dataset
-Original Message- From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of Jake Anderson Sent: Tuesday, April 07, 2015 9:55 AM To: IBM-MAIN@LISTSERV.UA.EDU Subject: Re: Backup of Uncatalogued dataset We use FDR and I was trying to backup a PDSE. And what happened? What was the message that convinced you it did not work? Have you talked to FDR support? -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: CP to a fixed length output file
On 2015-04-07 15:12, Dana Mitchell wrote: ... with the constraints of BPXBATCH being able to only execute one command, ... There is no such constraint. -- gil -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: AMATERSE and PDSE ?
On Tue, 7 Apr 2015 17:26:48 -0300, Lucas Rosalen rosalen.lu...@gmail.com wrote: CSDOMVS.JAVA64.NACB.VZRES21.SAJV17L.ZFS is one of many datasets present in the dump dataset and I don't have CSDOMVS ALIAS defined to my system, and no RACF profile protecting it as well. If there's no RACF profile protecting CSDOMVS.JAVA64.NACB.VZRES21.SAJV17L.ZFS then you would automatically have READ access unless your system runs with SETROPTS PROTECTALL(FAIL). Or if you have OPERATIONS access. -- Walt -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: CP to a fixed length output file
No JCL changes needed for SSL ftps. Just some certificate work and changes to userid.FTP.DATA -Original Message- From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of Dana Mitchell Sent: Tuesday, April 07, 2015 2:12 PM To: IBM-MAIN@LISTSERV.UA.EDU Subject: Re: CP to a fixed length output file Thanks for the suggestions, I'll give OCOPY or IEBGENER a try. I originally chose cp for convenience of the developers. They are replacing ftp steps with sftp steps in their batch jobs. I wanted to keep it at 'replace STEP A with STEP B'. So with the constraints of BPXBATCH being able to only execute one command, I created small scripts with a cp command to copy the file then sftp or scp to send the file to the remote system. I'll need to be able to call OCOPY or IEBGENER from such a script. On Tue, 7 Apr 2015 20:21:33 +0000, Barkow, Eileen ebar...@doitt.nyc.gov wrote: Why can't you use OCOPY to copy the file instead of cp? -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: AMATERSE and PDSE ?
Thanks Walt. I'm glad I've used Or maybe RACF works differently when you don't have anything protecting the original dataset phrase :) Lucas Em 07/04/2015 21:07, Walt Farrell walt.farr...@gmail.com escreveu: On Tue, 7 Apr 2015 17:26:48 -0300, Lucas Rosalen rosalen.lu...@gmail.com wrote: CSDOMVS.JAVA64.NACB.VZRES21.SAJV17L.ZFS is one of many datasets present in the dump dataset and I don't have CSDOMVS ALIAS defined to my system, and no RACF profile protecting it as well. If there's no RACF profile protecting CSDOMVS.JAVA64.NACB.VZRES21.SAJV17L.ZFS then you would automatically have READ access unless your system runs with SETROPTS PROTECTALL(FAIL). Or if you have OPERATIONS access. -- Walt -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: A New Performance Model ?
I agree with Ed. There are always cases where the effort and cost might outweigh the benefit of tuning, but there are still many cases where there is cost savings. As others have mentioned there is always value in meeting the business SLA's and the online files being available. A side benefit is the junior staff have an opportunity to improve their knowledge and skill set. Also, these efforts tend to invigorate and energize those who might be stuck in a rut of boring tasks. Most are familiar with the following list of easy changes to improve performance and reduce costs: 1. Change programs to use BLOCK CONTAINS 0, where files are blocked 1x1 during create, and then all reads are improved too. The CA TMS files are blocked 1x1 (340/340 or 370/370) at many installations and should be changed to blksize 8840 and 8880 per CA. 2. TURN on CA RECLAIM and then daily/weekly/monthly VSAM REORG jobs can be discontinued, in addition to the disk and I/O savings. 3. TURN on ICEGENER to replace IEBGENER (or SYNCSORT version) at the system level. Saves 6 to 8% CPU. 4. Find all GDG's that have not been updated in say more than a year by using LISTCAT and sorting on LADAT (Last Alter Date) and then delete those. Found over 100K obsolete GDG entries at 3 different sites. 5. Find all jobs creating over 200K lines (or some large number) and do a quick review to determine if the output is needed and if so drive it to a file. This can save output to JES and your Archive System. 6. Change large tape (multi volume) file(s) to disk PSE if they are read by many jobs. This allows two or more jobs to run at the same time. 7. Find all jobs that have DSN WAITING for more than 15 min (pick your number of min) and review for incorrect disp=old, or change the job to run earlier or later, or move a step(s) to another job. 8. Find programs that OPEN a file, read one RECORD, and CLOSE the FILE. This is a giant resource saver and most places have one or more of these. 9. 
Increase REGION on job card or EXEC to REGION=6M or more for large IEBCOPY steps/jobs and they will run much faster and use less CPU, as memory is used vs. the SYSUT3 and SYSUT4 work files. I hope this little sharing effort does not offend the good people on this great list. David Mingee Mainframe Consulting 9206 Aintree Drive Indianapolis, IN 46250 317 288-9588 Home, 317 903-9455 Cell From: Ed Gould edgould1...@comcast.net To: IBM-MAIN@LISTSERV.UA.EDU Sent: Tuesday, April 7, 2015 8:28 PM Subject: Re: A New Performance Model ? Timothy: It's amazing what a blocked 1 file costs, not so much in processing but in waiting. Try any program that is fairly IO intensive and you will see the cost in lengthening run time. I saw one program go from an hour elapsed to 2 minutes. Cost to resolve: one short compile and link. Ed On Apr 5, 2015, at 9:28 PM, Timothy Sipples wrote: Our development management are telling us (Systems Operations) that it is cheaper to upgrade the mainframe than to have the application programmers review their code for performance opportunities. I'm disappointed in the reactions so far. They're quite...old fashioned. :-( Yes, there is a new performance model, but this "new" is almost as old as computing. That assertion from the development team's management is certainly possible. Development talent, particularly highly skilled talent, continues to become more expensive relative to most other factor inputs in computing. That trend exists on *every* platform. Whether that assertion is true or not in these particular circumstances I have no idea. More importantly, neither do you yet. This question can only be answered with a careful cost analysis (or re-analysis), and that itself is a comparatively rare skill within IT organizations, as you and others may have just demonstrated. :-) It also isn't free to analyze costs. Otherwise accountants and consultants, including Al Sherkow, among other talented people, wouldn't be paid. 
As a *generalization*, most organizations are running many more MIPS now than, say, 15 years ago. Typically, though, that's at a similar or lower real cost in terms of infrastructure and operations. At the same time, real costs for a given amount of quality-equivalent development talent have gone up. (Raise your hand if you want to dispute that generalization, but I don't think it's particularly controversial.) There have been some development productivity improvements but probably not as many as on the operations side. So the overall trend is that your organization *rationally* shouldn't be using as much labor cost to optimize code as you did, say, 15 years ago. Exactly how much less depends on your particular situation, but generally less is the correct, cost-optimizing answer in most cases. Is that so surprising? Raise your hand if you're still hand tuning code to account for disk rotation. That's at least not a common way