Re: To Backup or Not to Backup Data - That is the question
Walt Farrell wrote: Or human disasters, Tom. Someone deletes a data set, and because the DASD is mirrored everywhere, all your online copies are gone instantly. Oh, and if you didn't have any real backup copies of the DASD, then all copies of that data set are gone.

The same goes for peer-to-peer tapes too. I have a hard time convincing my management and storage guys that I really need a SECOND set of SMF data: write to one set and, if RC=00 everywhere, repeat the write on the second SMF set. This is because when we get a channel error resulting in a 613 abend or the like, all errors and half-written records written at the local site are also repeated at the other site. I then sit with two sets of useless SMF data residing at 2 sites.

I made a breakthrough when I asked them to switch off local tapes so I could reread my SMF tapes from the remote site, to prove that the second set at the remote site was ALSO damaged. Only then did they see the need for duplicate SMF tapes. I got my extra SMF tapes, with recovery procedures simplified. ;-)

Groete / Greetings Elardus Engelbrecht

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
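Elardus's two-set scheme can be sketched as a toy model: write each record to the first set and, only on RC=00, repeat it on an independent second set. The class and return codes below are purely illustrative, not actual SMF internals.

```python
# Toy model of two-set SMF recording: write to the first set and, only on
# RC=00, repeat the write on an independent second set. Illustrative only.

class SmfSet:
    def __init__(self, fail=False):
        self.records = []
        self.fail = fail              # simulate a channel error on this set

    def write(self, rec):
        if self.fail:
            return 8                  # nonzero RC: nothing usable written
        self.records.append(rec)
        return 0

def record(rec, primary, secondary):
    """Write to primary; only if RC=0 repeat the write on the secondary."""
    rc = primary.write(rec)
    if rc == 0:
        rc = secondary.write(rec)
    return rc

# Unlike hardware mirroring, a channel error on one set is not replicated
# to the other, so the second set stays clean and usable:
bad, spare = SmfSet(fail=True), SmfSet()
assert record("SMF30", bad, spare) == 8
assert spare.records == []
```

The point of the model is the last two lines: with mirroring, the damage would have been copied to the remote site as well.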
Re: To Backup or Not to Backup Data - That is the question
On Thu, 30 May 2013 19:09:57 -0500, Walt Farrell walt.farr...@gmail.com wrote: That's one reason that IBM recommends using RACF's duplexing of its database, rather than depending on hardware mirror copies, and also recommends taking nightly backups of the database. When an administrator makes a mistake it can save a lot of hassle. And, if RACF itself makes a mistake, there's a good chance that only the primary (or the duplex) copy will be damaged. But if you were depending on the hardware mirroring they're all broken.

Hmm. Maybe. We once had a situation where a test system IPL failed. Looked like the MCAT. All systems active on the same shared MCAT were o.k. Tried another system - IPL failed. Same at the secondary and tertiary sites. Crit1 - we couldn't have any production system fail, because it looked like we couldn't bring it back up. Anywhere. After some SADump analysis by ISC it turned out the MCAT was o.k., but the environment was compromised due to an LE issue in the linklist (LPA maybe) - this was back in the 2.10 - z/OS 1.1 so-called LE upward compatibility feature rollout ... :(

So all that good intent may go down the toilet for no obvious reason. I'm sure RACF or any other component can be an innocent victim as much as the MCAT. Shane ...
Re: ISPF screen pre-occupied message
hi all, thanks for the updates. This worked:

Maintain User Parameters
  Application ID Display . . N

Thanks and Regards Shameem K Shoukath

From: Roger Bowler ibm-m...@snacons.com To: IBM-MAIN@LISTSERV.UA.EDU Sent: Thursday, May 30, 2013 3:37 PM Subject: Re: ISPF screen pre-occupied message

On Thu, 30 May 2013 13:06:16, Rob Scott rsc...@rocketsoftware.com wrote: I seem to recall there being some sort of public domain utility program, modification or TSO/ISPF exit that forced the z/OS system name on ISPF screens. This facility predated the ISPF-provided SYSNAME command.

Hey Rob, thanks! I didn't know about the ISPF SYSNAME command. Shameem: the string displayed by NVAS for each application and the row/col are defined in ADM option 2 (Maintain Group Parameters). Rgds Roger Bowler
Re: Crypto Facility performance.
Greg, you've got the point. My question was about performance and, depending on what we ask the z12, it will answer. So if I ask for the CSNBKEX API (and I now discover it is clearly written in the manual), the z12 has to use a CEX3/4 Coprocessor. Finally I've found the answer to my main question about the performance (in terms of crypto operations/second) per single server (where server is the single mono-TCB address space calling the API services). So, as long as I need a CEX3/4 API, I have to pay the 1ms elapsed time per call because of the need to drive the not-so-close hardware. I really thank all of you for your valuable support and knowledge. I've learnt lots of new stuff. Best regards. Massimo

2013/5/21 Greg Boyd bo...@us.ibm.com: I'm not sure I understand your last question ... but let me try to clarify a couple of things. It's important to realize that you have two separate pieces of crypto hardware available on System z: the CPACF for symmetric clear key and hashing operations, and the Crypto Express card for symmetric secure key, MAC, public/private key operations, Financial/PIN operations, etc. There is really no overlap in functionality between the two devices. Both can do symmetric DES/TDES or AES encryption, but the CPACF does the work with a clear key, while the CEX card uses a secure key. So that means the hardware you need depends entirely on which API you specify in your code. In the ICSF Application Programmer's Guide (SA22-7522), each API is documented and includes a 'Required Hardware' table at the end of each section. That table will tell you which piece of hardware is required for that API (even down to certain parms requiring certain levels of CCA code in the card). If you code CSNBKEX, the usage table for that API says that you must have a CEX3 or CEX4 Coprocessor on your zEC12 to use that API. One note about Protected Key. To use Protected Key, you use a clear key API, but pass a secure key to the API.
Prior to the implementation of protected key, this would fail, as the clear key APIs can't use a secure key. However, with the protected key support, ICSF will recognize this combination and allow the operation to proceed. In this case, ICSF uses both the Crypto Express card, to decrypt the operational key from under the master key, and the CPACF, to rewrap the key and then perform the encrypt or decrypt of your data (as Todd described). The 'Required Hardware' table refers to protected keys as 'Encrypted Keys'. So if you want to simply do clear key encryption, you only require the CPACF hardware. But if you want to use protected key, then you must also have a Crypto Express card (configured as a coprocessor). So, in your example, if you use the CSNBKEX API, that implies you have a CEX card, because that's where the work will be routed. And if you use the CSNBSYE API, you want to use the CPACF hardware. You might want to review the 'A Synopsis of System z Crypto Hardware' Techdoc, available at http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP100810 . I hope that helps clarify things. Greg Boyd IBM Advanced Technical Support Supporting Crypto on System z
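Greg's routing rules can be summarized as a small dispatch sketch. This is a hypothetical model of my own, not ICSF's actual logic; only the API names (CSNBKEX, CSNBSYE) and the hardware names come from the discussion above.

```python
# Hypothetical model of how work is routed to System z crypto hardware,
# following the rules Greg describes. Not actual ICSF logic.

def route_request(api, key_form):
    """Return the hardware that would service the request.

    api      -- ICSF callable service name, e.g. 'CSNBKEX' or 'CSNBSYE'
    key_form -- 'clear' or 'secure'
    """
    if api == "CSNBKEX":
        # Key export is a secure-key service: always the CEX coprocessor.
        return ["CEX"]
    if api == "CSNBSYE":
        # Clear-key symmetric encrypt normally runs on the CPACF ...
        if key_form == "clear":
            return ["CPACF"]
        # ... but with protected-key support, a secure key passed to a
        # clear-key API uses the CEX card once (to unwrap the key from
        # under the master key) and then the CPACF for the data itself.
        if key_form == "secure":
            return ["CEX", "CPACF"]
    raise ValueError("combination not modeled")
```

The model also shows why Massimo sees the 1ms per call: any path that includes "CEX" has to cross to the card.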
Re: To Backup or Not to Backup Data - That is the question
On 2013-05-30 20:44, Lizette Koehler wrote: I am looking to see how other shops are currently doing Backups for DR and OR. I think this will be valuable for the archives. Currently I have the following processes in place. Any suggestions or guidelines for this analysis will be appreciated. 2 EMC VMAX arrays (could be Hitachi or IBM). 2 geographically dispersed data centers (one VMAX in each). So in my hypothetical setup I have: Prod VMAX is snapping my Prod data within the VMAX. Prod VMAX is replicating to the Secondary (Dev and Test) VMAX (SRDF/A). My Secondary VMAX (Dev and Test) is snapping the replicated data. I have weekly backups to tape. I have DFHSM doing backups/ML2; tapes are shipped to the secondary site. So do I have overkill?

As usual, we should start from the basics:
- budget constraints
- business needs
- RTO, RPO
- unwanted events: disaster, HW failure, SW error, human mistake, other (human/terrorist attack, whatever)
- coincidence of events, disaster diameter (area affected)
- staffing
- other systems (there are no mainframe-only shops nowadays)

IMHO in the case of 2 datacenters the reasonable scenario should look like:
- 2 VMAXes with SRDF/S or /A (a matter of distance, performance needs and disaster diameter)
- 2 tape systems, one per datacenter, all tapes duplicated (offline copy for human and SW errors; number of copies/generations depends)

-- Radoslaw Skorupka Lodz, Poland
Re: Unable to mount ZFS
Barbara, I know you're not z/OS UNIX's biggest fan; however, this time the problem is related to the authorization to perform an MVS OPEN against an MVS data set. UNIX is only inside the data set.

I beg to differ. The UNIX implementation has clouded itself in such a bunch of things - nothing is easy with anything OMVS-related. I am running in the *OMVS* shell, and the actual open is relegated to the ZFS address space. That takes some getting used to, especially when an HFS would have been opened by OMVS. I don't get a nice, understandable abend 913; no, I get a strange error number that doesn't tell me that there was/is a RACF problem. Instead I am told: Description: Error LOCATEing an HFS-compat aggregate. Action: Ensure that the zFS file system named on the MOUNT command has the same name as the VSAM Linear Data Set (and the HFS compatibility mode aggregate) that contains the file system. For heaven's sake, I am trying to mount a ZFS, not an HFS! And the name was specified correctly!

To top it off, before I first posted, I *had* given the requested access to that userid. I still was unable to mount the data set, when (nominally) all RACF requirements were fulfilled. When I eventually restarted ZFS, of course the shell broke. Actually, every spawn command broke, since the main OMVS data set is a ZFS, and something didn't synchronize correctly at the restart. And before you tell me that I should have restarted OMVS: no can do. Without OMVS, I lose TN3270 and TCPIP, so effectively I have no way of accessing the system anymore. (Local non-SNA didn't work, either, because a colleague had pulled the plug on that system on Monday, crashing everything, and we were unable to establish the remote server to even call the program that would give me local non-SNA access.) I was effectively *forced* to IPL the system to get ZFS/OMVS to mount a filesystem!
There is no way I know of that allows me to make ZFS 'forget' that it ever asked about this specific data set and force it to go back to RACF to check again fresh. Do you know of a way? (There must be a reason why one has to logon/logoff, whenever the RACF admin has provided the required access, before it 'takes' for a TSO user.) I think being forced to IPL a system just because a data set profile is missing is a bit harsh, to put it mildly. That doesn't occur in native z/OS. Address spaces without OMVS do their own open, data set access is fairly straightforward, and for a product installed later, the address space would just terminate on a RACF error. Once the necessary access is provided, just restart the address space. No IPL required. No contortions required. Most of all: for these data sets, I religiously followed the IBM recommendation on what to define to make it work. I did all the definitions that are required according to the RDT installation manual. These definitions are identical on both systems. And on one of them it still didn't work!

RACF allows the OPEN on your originating system. I trust there must be a difference in the setup not related to z/OS UNIX.

No, the only differences in the 2 databases are obviously the date something was defined and UNIX-related stuff. That's it.

On your originating system (I guess you already verified): - Does profile MVSR.RDZ.V85.** have UACC(none) or something else? UACC(NONE) - identical on both systems. - Is OMVSUSR2 defined TRUSTED(NO) and PRIVILEGED(NO)? TRUSTED(NO), PRIVILEGED(NO). Identical on both systems. - There is no SCHEDxx entry for BPXVCLNY? PPT PGMNAME(BPXVCLNY) NOSWAP PRIV SYST NOPASS - identical on both systems. - Nothing else that would allow an MVS OPEN is defined? I am not sure I understand that question. What do you have in mind?

I have unloaded and compared both RACF databases.
The differences are: only on the system where it doesn't work, I have BPX.DAEMON defined in FACILITY, with OMVSUS1/2 and another STC user with READ access. On the system where it works, I have OMVSUS2 defined with READ access to BPX.SUPERUSER. This definition is missing on the system where it doesn't work. On both systems, OMVSUS2 shares uid(0). The FEKRACF job (RACF customization for RDT) doesn't touch either of these definitions. Having compared this, I guess the BPX.SUPERUSER definition makes it work on one system. Which is quite obvious from the error I got. Right? WRONG!

Peter, I am not picking on you! Have a great weekend instead! Best regards, Barbara
Re: Examples of getbuf and build usage
I do. I would just as soon not send/post the entire module. What is your specific issue? As Binyamin has noted, GETBUF itself is pretty simple. Looks like reentrant code to me:

         GETBUF PDSDCB,R2
+*
+*
+        LA    1,PDSDCB         LOAD PARAM
+        SLR   14,14            CLEAR REGISTER
+        ICM   14,7,21(1)       LOAD BUFCB ADDRESS
+        ICM   R2,B'',0(14)     IS A BUFFER AVAILABLE
+        BZ    *+10             NO, RETURN ZERO
+        MVC   0(4,14),0(R2)    YES, UPDATE BUFCB ADDR

I do find the following comments in my code:

         FREEPOOL (R4)          NOT NEEDED WITH BUFFS ABOVE LINE ???
*                               SO DOC CLAIMS BUT I GET GETBUF
*                               FAILURES ON SUBSEQUENT OPENS

Charles

-Original Message- From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of Micheal Butz Sent: Thursday, May 30, 2013 7:13 PM To: IBM-MAIN@LISTSERV.UA.EDU Subject: Examples of getbuf and build usage

Would anyone have examples of GETBUF used with BSAM read? I think it might help my problem
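What the expansion does at the pointer level can be illustrated with a toy model of the buffer pool: the buffer control block (BUFCB) anchors a chain of free buffers, GETBUF unchains the first one, FREEBUF chains one back. The class below is my own sketch, not z/OS code; buffer "addresses" are just small integers.

```python
# Toy model of the DCB buffer pool that GETBUF/FREEBUF manage. The free
# chain here plays the role of the BUFCB anchor in the macro expansion.

class BufferPool:
    def __init__(self, nbuffers):
        self.free = list(range(1, nbuffers + 1))   # free chain anchor

    def getbuf(self):
        if not self.free:          # chain anchor is zero: the BZ is taken,
            return None            # no buffer available
        return self.free.pop(0)    # unchain first buffer, update anchor

    def freebuf(self, buf):
        self.free.insert(0, buf)   # push the buffer back on the chain

pool = BufferPool(2)
a, b = pool.getbuf(), pool.getbuf()
assert (a, b) == (1, 2)
assert pool.getbuf() is None       # pool exhausted: GETBUF "fails"
pool.freebuf(a)
assert pool.getbuf() == a          # a freed buffer is reusable
```

The exhausted-pool case is the interesting one: a GETBUF failure on a subsequent open, as in Charles's comment, suggests buffers were never returned to the chain.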
Re: Examples of getbuf and build usage
Should have noted that I don't use an explicit BUILD. Perhaps this is no help. Charles

-Original Message- From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of Charles Mills Sent: Friday, May 31, 2013 7:08 AM To: IBM-MAIN@LISTSERV.UA.EDU Subject: Re: Examples of getbuf and build usage snip
Re: Examples of getbuf and build usage
Micheal Butz wrote: Would anyone have examples of Getbuf used with BSAM read? Sorry, but no examples from my side... Perhaps Assembler-L members can help you out? I think it might help my problem. Just curious, any reason why you want to manage your own buffers? I believe, if I read my books correctly, OPEN, CLOSE, etc. manage the buffers for you? Otherwise, please post your BUILD (or GETPOOL) and GETBUF macros and any problems encountered. Groete / Greetings Elardus Engelbrecht
Re: To Backup or Not to Backup Data - That is the question
Agreed! snip Yes, this! Prereq: The company must have a DR manager, one of whose responsibilities is to ensure the families of those who leave are taken care of. Here I'm thinking of natural disasters like hurricanes. Second: A real DR test would include actually running the business from the DR site for at least a week and then *bringing it back home*. How many institutions have actually tried that?

Staller, Allan said the following on 5/30/2013 3:09 PM: Although very few shops actually do this, IMO the procedure should be: Management walks in the room and says: You, you, and you are dead as of time/date. The rest of you, go recover as of that time/date. The dead people cannot be consulted during the DR exercise. You, you, and you should be different during each iteration of the test. After the fact, procedures/documentation are analyzed and updated based on the results. /snip
Re: Unable to mount ZFS
I _think_ your RACF problem was due to the fact that the first time the ZFS address space tried to open the VSAM file, it didn't have correct RACF access. So the open failed, as I would hope it would. This is like a CICS region doing the same type of thing. Now, in order to be efficient, RACF caches this result in the address space (IIRC, it caches the RACF rule it found). You then changed/created a RACF access profile so that the ZFS address space could open the data set. But the previous results are still cached. At this point, what should you do to invalidate the cache? You issue the RACF command: SETROPTS GENERIC(DATASET) REFRESH. Is this documented in a way that a mere mortal can understand? Of course not! How do I know? Walt (not a mere mortal, but an IBMer who worked on RACF internals) told me. And you can read a 3rd party on this here: http://www.rshconsulting.com/racftips/RSH_Consulting__RACF_Tips__October_2008.pdf

quote If a user does not have sufficient authority to access a dataset and a RACF administrator permits higher access to the associated profile while the user is logged on, the user may still not be able to access the dataset. RACF continues to use the prior copy of the profile in memory and not the updated profile on the database. To acquire the new permission, the copy of the profile in memory must be refreshed. One way to refresh the profile is to execute SETROPTS GENERIC(DATASET) REFRESH. This command causes RACF to discard all saved dataset profiles for every user. RACF then has to retrieve all the profiles again. We have encountered several sites where RACF administrators routinely issued this command every time they executed ADDSD or ALTDSD and have since ceased doing so on our advice. /quote

On Fri, May 31, 2013 at 5:51 AM, nitz-...@gmx.net wrote: snip
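The caching behavior described above can be modeled with a toy class: the first check snapshots the profile in memory, a later PERMIT changes only the database, and nothing changes for the running address space until the refresh discards the saved copies. This is purely illustrative; the class and its methods are mine, not RACF internals.

```python
# Toy model of RACF's in-storage profile caching, to illustrate why a
# PERMIT issued after a failed check does not take effect until
# SETROPTS GENERIC(DATASET) REFRESH. Purely illustrative.

class ProfileCache:
    def __init__(self, database):
        self.database = database   # profile name -> set of authorized users
        self.cached = {}           # profiles already retrieved in-storage

    def check_access(self, user, profile):
        # First reference retrieves the profile and keeps a copy in memory.
        if profile not in self.cached:
            self.cached[profile] = set(self.database.get(profile, set()))
        return user in self.cached[profile]

    def refresh(self):
        # SETROPTS GENERIC(DATASET) REFRESH: discard every saved profile,
        # forcing the next check to go back to the database.
        self.cached.clear()

db = {"MVSR.RDZ.V85.**": set()}
racf = ProfileCache(db)
assert not racf.check_access("ZFS", "MVSR.RDZ.V85.**")   # open fails

db["MVSR.RDZ.V85.**"].add("ZFS")                         # admin issues PERMIT
assert not racf.check_access("ZFS", "MVSR.RDZ.V85.**")   # still fails: stale copy

racf.refresh()
assert racf.check_access("ZFS", "MVSR.RDZ.V85.**")       # now succeeds
```

The middle assertion is Barbara's situation: the access was granted, yet the mount kept failing, because the address space was still checking against its saved copy.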
Re: Unable to mount ZFS
Barbara, it took you some time to compose this reply. Bear with me, I'll need some, too. And be assured, I'm not taking this personally in any way. I'm more open to z/OS UNIX than many others and I tend to stand up for it sometimes. In parallel to growing up with z/OS UNIX as of 1994, I was learning much about the design of UNIX operating systems. This helps me to understand why many things are like they are in z/OS UNIX, *but* there still are many things I consider quite ugly (btw, there are at least as many ugly things in MVS). Will respond to your statements later. -- Peter Hunkeler
Re: To Backup or Not to Backup Data - That is the question
Staller, Allan wrote: Second: A real DR test would include actually running the business from the DR site for at least a week and then *bringing it back home*. How many institutions have actually tried that? Zero! [1] You're dreaming, but do this 'change /week/weekend/' and ask again for better answers. ;-) :-D ;-D :-D Groete / Greetings Elardus Engelbrecht [1] - It is already hard to get ALL your users AND network staff to co-operate for ONE day / FEW hours of DR exercise without customers having to moan...
Re: Unable to mount ZFS
I am quite pro UNIX. I use a z/OS UNIX shell quite often. I schedule the submission of some batch jobs via the cron daemon. I keep most of my JCL in a UNIX subdirectory, not a PDS. Nevertheless, there are some design decisions in z/OS UNIX which I consider to be deeply flawed. First, GIVE ME AN UP TO DATE BASH SHELL!!! Who implemented the standard z/OS UNIX shell? It stinks compared to BASH. Next, port all the GNU utilities and abandon the IBM versions. Really. I mean it. Especially GNU sed, grep, and awk. Comparing the z/OS versions of these utilities with the GNU ones is like comparing a WWI biplane to an F-35 JSF. Yes, it flies. Poorly.

On Fri, May 31, 2013 at 7:49 AM, Hunkeler Peter (TLSG 4) peter.hunke...@credit-suisse.com wrote: snip

-- This is a test of the Emergency Broadcast System. If this had been an actual emergency, do you really think we'd stick around to tell you? Maranatha! John McKown
Re: To Backup or Not to Backup Data - That is the question
Makes for a more realistic test. Charles

-Original Message- From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of Ed Gould Sent: Friday, May 31, 2013 12:06 AM To: IBM-MAIN@LISTSERV.UA.EDU Subject: Re: To Backup or Not to Backup Data - That is the question

Alan: In a company I worked for they would have shot the people. Crazy company.
Re: Examples of getbuf and build usage
Paul Gilmartin wrote: I believe, if I read my books correctly, OPEN, CLOSE, etc. manage the buffers for you? Isn't that with QSAM? The OP asked about BSAM.

Good catch! Thanks. I overlooked that part. (So I did not read my books correctly...) Groete / Greetings Elardus Engelbrecht
Re: To Backup or Not to Backup Data - That is the question
In a company I worked for they would have shot the people. Crazy company. Makes for a more realistic test.

An interesting business model, with certain historical precedents. “Depend upon it, sir, when a man knows he is to be hanged in a fortnight, it concentrates his mind wonderfully.” [Dr. Samuel Johnson] “… it is a good thing to kill an admiral from time to time to encourage the others.” [Voltaire] Bill Fairchild Franklin, TN

- Original Message - From: Charles Mills charl...@mcn.org To: IBM-MAIN@LISTSERV.UA.EDU Sent: Friday, May 31, 2013 8:22:59 AM Subject: Re: To Backup or Not to Backup Data - That is the question snip
Re: To Backup or Not to Backup Data - That is the question
On 2013-05-31 14:57, Elardus Engelbrecht wrote: Staller, Allan wrote: Second: A real DR test would include actually running the business from the DR site for at least a week and then *bringing it back home*. How many institutions have actually tried that? Zero! [1] You're dreaming, but do this 'change /week/weekend/' and ask again for better answers. ;-)

Not true. I know a company which does this on schedule. Actually, the best solution is when it doesn't matter which of our datacenters is primary. And the difference between a good DR scenario and the above is a matter of organization, not technology solutions. -- Radoslaw Skorupka Lodz, Poland
To recompile or not recompile, that's the question
Hi wise guys and gals,

HOW are you in your shop managing your sources and load modules and the versions of compilers and z/OS?

*Intro:* My customer (a data center) has several environments/LPARs:
1. SYSTEMS environment (for the system programmers team: installations, system tests, ...)
2. DEVELOPMENT environment (for the application developers)
3. HOMOLOGATION environment (its purpose doesn't matter for this discussion)
4. ACCEPTANCE environment
5. PRODUCTION environment

They deploy the installations of z/OS in that order, i.e. PRODUCTION at the end of the chain. The same goes for the compiler versions. The development department also deploys its applications in that very same order. Until now, the application programs have been RE-COMPILED in each environment, in order to avoid the slightest problem. For example:
· Runtime LE 1.13 in development and runtime LE 1.12 in production MIGHT generate different behavior if the applications were not recompiled
· A program compiled with the highest PL/1 version and executing with a too-low LE might also have problems.

The customer is now considering Serena's ChangeMan/ZMF to manage the application sources and load modules. That tool does not really support re-compile. Consequence: the modules, compiled in development, are COPIED through the different environments, eventually into the PRODUCTION environment. Concretely: if development is done on z/OS R13 with the highest PL/1 compiler version, the load module finally executes on z/OS R12. This too MAY cause problems. The problem can be avoided by NOT upgrading the development environment before the production environment, but rather at the end of the chain.

In what order do you do the different environment upgrades in your shop? Development before production? Production before development?
*A last consideration:* Thinking of the latest HW announcements of the zEC12, and of the latest C/C++, PL/1 4.2, and COBOL 5.1 announcements, the synergy between HW and SW is definitely growing. Think of ARCH(10), or the zEC12's TEF (Transactional Execution Facility).
· If development is at a higher SW level than production, then COPYING (instead of RE-COMPILING) might cause execution problems in production.
· If development is not at the latest SW level, one might not profit in production from those newer HW capabilities such as ARCH(10) or TEF.
Please, your thoughts! Jan
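Where recompiling at each stage is the policy, the compile step itself is stage-specific: each environment's job points at that environment's compiler level and passes the ARCH value its hardware actually supports. A minimal Enterprise PL/I compile step might look like the sketch below (all dataset names, and the ARCH level itself, are illustrative placeholders, not taken from Jan's shop):

```jcl
//* Sketch only: per-stage recompile. Each environment substitutes its
//* own compiler STEPLIB and the ARCH level its CPU supports.
//COMPILE  EXEC PGM=IBMZPLI,PARM='ARCH(10),OPTIMIZE(2)'
//STEPLIB  DD  DISP=SHR,DSN=PLI.V4R2.SIBMZCMP
//SYSIN    DD  DISP=SHR,DSN=DEV.PLI.SOURCE(MYPGM)
//SYSLIN   DD  DSN=&&OBJ,DISP=(NEW,PASS),UNIT=SYSDA,
//             SPACE=(TRK,(10,10))
//SYSPRINT DD  SYSOUT=*
//SYSUT1   DD  UNIT=SYSDA,SPACE=(CYL,(1,1))
```

The copy approach, by contrast, freezes whatever ARCH level the development compile used, which is exactly the trade-off Jan describes.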
Re: Examples of getbuf and build usage
On 5/31/2013 8:36 AM, Elardus Engelbrecht wrote: I believe, if I read my books correctly, OPEN, CLOSE, etc. manage the buffers for you? There is one problem I ran into back in the dark ages <g>. When a program running in key 0 uses an OPEN with system-acquired buffers, normal termination abends (wrong subpool) trying to free the buffers. The program needs to keep track of the buffers and free them itself rather than use FREEPOOL. Gerhard Postpischil Bradford, Vermont
Re: To recompile or not recompile, that's the question
Technical problems will occur regardless of the environment (just ask Mr. Murphy of Murphy's Law fame). I consider this to be purely a management issue. Option 1: Do not use the new features until all environments are at the minimal level needed to support those features (e.g. compiler parm ARCH(xx)). Option 2: Compile the programs at each stage. This will ensure the source matches the load module, ensure the correct parms (e.g. ARCH(xx)) are selected for that environment, and flush out any incompatibilities at each stage. Ensuring that ARCH(xx) is correct for each stage is yet another management issue. Bottom line: Pick a method and stick with it! snip HOW are you in your shop managing your sources and load modules and the versions of compilers and z/OS? *Intro:* My customer (a data center) has several environments/LPARs: 1. SYSTEMS environment (for the system programmers team: installations, system tests, ...) 2. DEVELOPMENT environment (for the application developers) 3. HOMOLOGATION environment (it doesn't matter for the discussion what its purpose is) 4. ACCEPTANCE environment 5. PRODUCTION environment They deploy the installations of z/OS in that order. I.e. PRODUCTION at the end of the chain. Idem for the compiler versions. The development department also deploys their applications in that very same order. Till now, the application programs are RE-COMPILED in each environment; this in order to avoid the slightest problem. For example: * Runtime LE 1.13 in development and runtime LE 1.12 in production MIGHT possibly generate a different behavior if the applications were not recompiled * A program compiled with the highest PL/1 version and executing with a too low LE might also have problems. /snip Disclaimer: I have not used ChangeMan. My experience is with Endevor. ChangeMan should be able to handle compile scripts as well as copy scripts.
snip The customer is considering now to use Serena's ChangeMan/ZMF to manage the application sources and load modules. That tool does not really support re-compile. Consequence: The modules, compiled in development, are COPIED to the different environments till into the PRODUCTION environment eventually. In concreto: if development is done in z/OS R13 with the highest PL/1 compiler version, this load module finally executes in z/OS R12. This too MAY generate problems. The problem can be avoided by NOT upgrading the development environment before the production environment, but rather at the end of the chain. /snip
Re: To Backup or Not to Backup Data - That is the question
In 3d746.4478793e.3ed90...@aol.com, on 05/30/2013 at 03:25 PM, Ed Finnell efinnel...@aol.com said: Wish we had a Cloud! Be careful what you wish for; you might get it. I suggest that you consult your legal staff about the ownership of your data before putting the family jewels in someone else's hands. -- Shmuel (Seymour J.) Metz, SysProg and JOAT Atid/2 http://patriot.net/~shmuel We don't care. We don't have to care, we're Congress. (S877: The Shut up and Eat Your spam act of 2003) -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: To recompile or not recompile, that's the question
On 5/31/2013 7:11 AM, Jan Vanbrabant wrote: Till now, the application programs are RE-COMPILED in each environment; this in order to avoid the slightest problem. For example: · Runtime LE 1.13 in development and runtime LE 1.12 in production MIGHT possibly generate a different behavior if the applications were not recompiled · A program compiled with the highest PL/1 version and executing with a too low LE might also have problems. I always thought the big benefit to recompiling was to allow the use of the latest instructions and system services. If the charts presented by the compiler developers are to be believed, recompiling can have tremendous positive performance impacts, especially when transitioning from one hardware generation to another. If they stop recompiling their applications, and just carry things forward using a copy approach, then their applications might begin to slow down (relatively speaking) whenever their systems are upgraded. -- Edward E Jaffe Phoenix Software International, Inc 831 Parkview Drive North El Segundo, CA 90245 http://www.phoenixsoftware.com/ -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: To Backup or Not to Backup Data - That is the question
In f66e9ee737492b448738e797fdb828ed1d225...@kbmexmbxpr02.kbm1.loc, on 05/30/2013 at 07:09 PM, Staller, Allan allan.stal...@kbmg.com said: Although very few shops actually do this, IMO the procedure should be: Management walks in the room and says You, you, and you are dead as of time/date. The rest of you go recover as of that time/date. When NSF did that, they also pulled a few tapes: These are lost/unreadable. -- Shmuel (Seymour J.) Metz, SysProg and JOAT Atid/2 http://patriot.net/~shmuel We don't care. We don't have to care, we're Congress. (S877: The Shut up and Eat Your spam act of 2003)
Re: To Backup or Not to Backup Data - That is the question
These have been great points. So to summarize, for the mainframe:
1) Plan for both OR/BC and DR conditions. This includes testing as often as the company allows. Paper tests satisfy only a minimal requirement but do not show that the plan will actually work or how long it will take to get the systems back up.
2) Replication is great unless there is data corruption.
3) DFHSM backups/dumps can be a good supplement to protect against data corruption (multiple generations for critical files).
4) Tape is excellent for offsite storage in case the business does not own a secondary DR site.
5) Make sure not only critical application/system files are at the DR site, but also things like SMF data, LOGREC, DCOLLECT, and other miscellaneous but not often thought of files. Depending on where these files are located (DASD vs. DISK) you may be okay.
6) No plan is foolproof.
7) The company would benefit from having a full-time VP of DR (or some similarly high position). Assigning this process to a Supervisor of Operations probably will not be sufficient, as they are very busy with day-to-day chores.

I have seen that some of the management teams responsible for storage (which might be both open systems and mainframe) think that both areas can use a similar type of DR or OR/BC plan. I can see some overlap, but I do believe the way the mainframe does backup and recovery is different from midrange or open-systems backup and recovery. So when management starts reading all of these DR or OR/BC papers by Symantec or Dell or other vendors, they start to think the mainframe is the same. This discussion is helping to show how these functions are different. Did I miss anything? Thanks.
Lizette -Original Message- From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of Tom Marchant Sent: Thursday, May 30, 2013 2:16 PM To: IBM-MAIN@LISTSERV.UA.EDU Subject: Re: To Backup or Not to Backup Data - That is the question On Thu, 30 May 2013 11:44:32 -0700, Lizette Koehler wrote: So do I have overkill? . Software disasters can be the hardest ones to plan for. What do you do if one of your critical applications has a program change that causes it to start corrupting data? How long will it take before it is noticed? This can be a lot harder than a hardware failure. -- Tom Marchant -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: To recompile or not recompile, that's the question
Jan, in my experience recompiling is absolutely the wrong way to operate. The most serious drawback is this: when developers perform regression testing (and they *do* *always* perform full regression testing, right?) to verify that what worked before their change still works the same, and that the only differences in the file outputs are those expected from the change that they made, the version that they test is NOT the version that goes into production (or into any other stage along the way to production). Given the system software rollout pattern you have described (and that is the right, i.e. safe, way to do it, IMHO), the application software *must* always be built at the lowest-common-denominator level (i.e., whatever is in production), thus possibly losing any benefit of newer HW/SW levels for the time it takes to finish a system software rollout. The great benefit is that production behavior is predictable and stable. From the developer's perspective, this means NOT using the latest and greatest compiler and run-time system features until those features have reached the production environment. Significant differences in compiler facilities do have to be carefully monitored so that none of the broken scenarios you describe occur. The standard compile-and-link process must be carefully controlled so that every compile at the development level uses the same standard lowest-common-denominator options and facilities, without the ability for a developer to bypass the standard. The only safe way to support recompilation at each of your several stages along the way to production is to perform full regression testing *at every level* after recompilation is performed. This is a *huge* burden on personnel and systems (people time and CPU/DASD usage among others), especially if you are (as I suspect from your description) a large shop with many different applications and application groups.
Having to perform multiple levels of regression testing significantly slows time-to-market for any application change, which can be deadly to any business model. And to your employment. A dead business pays no salaries (except maybe to the bankruptcy crews... :). HTH Peter -Original Message- From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of Jan Vanbrabant Sent: Friday, May 31, 2013 10:11 AM To: IBM-MAIN@LISTSERV.UA.EDU Subject: To recompile or not recompile, that's the question Hi wise guys and gals, HOW are you in your shop managing your sources and load modules and the versions of compilers and z/OS? Snipped Till now, the application programs are RE-COMPILED in each environment; this in order to avoid the slightest problem. For example: /Snipped Please, your thoughts! Jan
Re: System abend 800 reason code 4
In 8677730172829683.wa.walt.farrellgmail@listserv.ua.edu, on 05/30/2013 at 04:02 PM, Walt Farrell walt.farr...@gmail.com said: That may be a possible clue to your problem. It says you're using EXCPVR, and from z/OS V1R13.0 DFSMSdfp Advanced Services we can see that In order to issue EXCPVR, your program must be executing in protection key zero to seven, executing in supervisor state, or be APF authorized. RCF submitted against z/OS DFSMSdfp Advanced Services SC26-7400-09, 4.5 Executing Fixed Channel Programs in Real Storage (EXCPVR); BSAM and QSAM use EXCPVR from key 8 problem state. -- Shmuel (Seymour J.) Metz, SysProg and JOAT Atid/2 http://patriot.net/~shmuel We don't care. We don't have to care, we're Congress. (S877: The Shut up and Eat Your spam act of 2003) -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: Examples of getbuf and build usage
In cae1xxdfgeedmkm9vyz-z+x23o2d1kuxzfquydsas_7+yr71...@mail.gmail.com, on 05/30/2013 at 09:52 PM, John Gilmore jwgli...@gmail.com said: BUILD? Shades of 1966! It is not reentrant. Are you sure? -- Shmuel (Seymour J.) Metz, SysProg and JOAT Atid/2 http://patriot.net/~shmuel We don't care. We don't have to care, we're Congress. (S877: The Shut up and Eat Your spam act of 2003) -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: To recompile or not recompile, that's the question
Hi Scott, You have confused LE options and COBOL options. DATA(31) is a COBOL option specified at compile time. ALL31(ON) is an LE option, and as a large shop I can tell you that ALL31(OFF) is the result of a large body of application code, some of which wouldn't work if we changed this option. The business case for changing the applications that would break with ALL31(ON) couldn't be justified. Any application that wants to run ALL31(ON) can supply an LE CEEUOPT. Doug On Fri, 31 May 2013 10:51:00 -0400, Scott Ford scott_j_f...@yahoo.com wrote: It also helps when the customer uses the right, for example, LE run options. We run DATA(31) ALL31(ON) and had a customer tell me we should run below the line ... I absolutely couldn't believe it. It's not always us developers... sometimes it's plain ignorance, for lack of better terms, about z/OS or the system in general.
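To illustrate Doug's last point with a sketch (the program and dataset names here are made up, and this assumes the installation permits runtime option overrides): besides linking a CEEUOPT into the application, a single batch job that is known to be safe with ALL31(ON) can override the installation default for one run via LE's CEEOPTS DD:

```jcl
//* Sketch: per-job LE runtime option override for one application
//* that is known to tolerate ALL31(ON).
//RUN      EXEC PGM=MYAPP
//STEPLIB  DD  DISP=SHR,DSN=PROD.APP.LOADLIB
//CEEOPTS  DD  *
ALL31(ON)
/*
```

Either way, the shop-wide default can stay at ALL31(OFF) for the body of code that still needs it.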
Re: Longer SMFWAIT during IPL MSI
On Thu, 30 May 2013 23:05:21 -0500, Ed Gould edgould1...@comcast.net wrote: Bob: I was puzzled by his question. Then I remembered one time a LONG time ago (when SMF first went to VSAM) that we IPL'd and did not know about having to format the MAN datasets and the system did automatically. Could that be his issue? Ed LOL... that still happens, but light years after MSI (in computer time)! It would also be a one time event, not every IPL. Mark -- Mark Zelden - Zelden Consulting Services - z/OS, OS/390 and MVS mailto:m...@mzelden.com Mark's MVS Utilities: http://www.mzelden.com/mvsutil.html Systems Programming expert at http://expertanswercenter.techtarget.com/ -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: To Backup or Not to Backup Data - That is the question
SHARE is a valuable resource for 'what I missed'. My fave was when the Fat Boys tried to plug a three-phase printer into a 43xx and blew the T05 cans out of the socket boards! In a message dated 5/31/2013 9:43:19 A.M. Central Daylight Time, stars...@mindspring.com writes: vendors, they start to think the Mainframe is the same. This discussion is helping to show how these functions are different. Did I miss anything? -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: Longer SMFWAIT during IPL MSI
Thanks Bob for sharing the Redbook details; it is a good help in understanding the process. We are recording type 19 in all LPARs. I think the storage team uses it for their reporting, using 20% or something; I'm not sure. My actual question still stands: when I compare the PROD and DEV SMFPRMxx, the only diff I see is the SMF exit IEFU84. Is this exit heavy by itself, or through poor coding? Is it affecting performance POST IPL, since this exit takes control before each SMF write? I need to raise a CHANGE if I want to remove it and test, so I was expecting feedback from the LIST on whether it is worth a try. I have to give some justification in the CHANGE to request the removal. Thanks and Regards Shameem K Shoukath From: Bob Rutledge deerh...@ix.netcom.com To: IBM-MAIN@LISTSERV.UA.EDU Sent: Thursday, May 30, 2013 8:55 PM Subject: Re: Longer SMFWAIT during IPL MSI I suggest this Redbook: http://publib-b.boulder.ibm.com/abstracts/sg247816.html?Open Bob Shameem .K .Shoukath wrote: hi there, I was just going thru the IPL statistics of all LPARs in our org. I see that in one LPAR the SMFWAIT is taking fairly longer compared to the others: out of a total 00:01:12 MSI time, this takes 00:00:53.586. I am not sure what happens in this process. Took a chance, compared the SMFPRMxx, and saw there is one exit, IEFU84, additional compared to the PROD and QA LPARs. Would this be the cause for the slower IPL? Might it also be one reason contributing to performance issues, as I understand this exit gets control before each SMF write?
*** IEEMB860 Statistics ***
ILRTMRLG 00:00:00.278 ASM
IEEVMSI  00:00:00.065 Reconfiguration
IARM8MSI 00:00:00.030 RSM - bring storage online
IECVIOSI 00:00:02.627 IOS dynamic pathing
RACROUTE 00:00:00.000 Initialize Security Environment
ATBINSYS 00:00:00.020 APPC
IKJEFXSR 00:00:00.183 TSO
IXGBLF00 00:00:00.029 Logger
AXRINSTR 00:00:00.042 System REXX
CEAINSTR 00:00:00.031 Common Event Adapter
HWIAMIN1 00:00:00.067 BCPii
COMMNDXX 00:00:00.091 COMMANDxx processing
IEAVTMSI 00:00:00.071 RTM
SMFWAIT  00:00:53.586 SMF
ICHSEC05 00:00:12.113 Security Server
MSIEXIT  00:00:00.000 Cnz_MSIExit Dynamic Exit
IEFJSIN2 00:00:03.188 SSN= subsystem
IEFHB4I2 00:00:00.015 ALLOCAS - UCB scan
CSRINIT  00:00:00.005 Windowing services
FINSHMSI 00:00:00.336 Wait for attached CMDs
IEEMB860 00:01:12.797
Uncaptured time: 00:00:00.011
Thanks and Regards Shameem K Shoukath
Re: Unable to mount ZFS
John, I _think_ your RACF problem was due to the fact that the first time the ZFS address space tried to open the VSAM file, it didn't have correct RACF access. So the open failed, as I would hope it would. Well, that the open failed took me by surprise completely. It doesn't fail on the other system that is (almost) identical. There is certainly no access allowed on that system for the ZFS userid. In addition, nothing in the IBM installation docs for z/OS says to authorize the ZFS address space to the data set profiles for the ZFS that are explicitly defined by their customization (and their RACF job goes into ridiculous detail to make sure everything is covered). So it must be something else that causes this 'requirement' on my current system. Is the rest of the world routinely defining at least READ access for the ZFS userid to each and every ZFS dataset that might get mounted? But the previous results are still cached. At this point, what should you do to invalidate the cache? You issue the RACF command: SETROPTS GENERIC(DATASET) REFRESH . Is this documented in a way that a mere mortal can understand? Of course not! How do I know? Walt (not a mere mortal, but an IBMer who worked on RACF internals) told me. Believe it or not, I did a setropts refresh. And got an error on it (Invalid command or some such). I probably did not use the right incantation for the refresh. I'll put this into my store of RACF commands, to be taken out in future! Thanks. Peter, Barbara, it took you some time to compose this reply. Bear with me, I'll need some, too. The reason it took some was 1. Thursday was a public holiday in most of Germany, so I took a day off. :-) 2. I had to do the migration of our user data from the old system (you know, where the mount worked) to the new system, and that was scheduled for this Friday, since almost no one was needing z/OS today. Given that deadline, following up on a problem that bugs me but is more or less 'solved' didn't have priority. 3. 
I can always be counted upon to fly into a rant against OMVS at the drop of a pin. Or hat. :-) But thanks for trying to puzzle this out with me. Now that the migration is behind me, I seem to remember that *I* deleted the BPX.DAEMON profile (on the system where it works without explicit access) back when we migrated from 1.10 to 1.13. In that case, ftp on the 1.13 system didn't work, and I had another incomprehensible error message. I got that to work by deleting the bpx.daemon profile (don't ask me why I did that - some convoluted thinking on my part after spending a long afternoon with the USS Planning manual.) In the same vein, the read access to bpx.superuser for OMVSUS2 was probably the result of that 'ftp error', too, when we did trial and error to get it to work. Since I am normally religiously keeping track of every RACF command ever issued against my data base (in case I have to rebuild it), I must not have been religious enough when we attempted to solve the puzzle, so that slipped by me and didn't make it to the new system. Best regards, Barbara -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
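For anyone collecting the incantations from this thread: granting the ZFS started task READ on the dataset profiles, and then invalidating the cached results Walt described, would look something like the following RACF commands (the profile name and userid here are invented examples, not Barbara's):

```
PERMIT 'OMVS.ZFS.**' GENERIC ID(ZFSPROC) ACCESS(READ)
SETROPTS GENERIC(DATASET) REFRESH
```

Only the SETROPTS command comes from the post itself; whether the PERMIT should be needed at all is exactly Barbara's open question.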
Re: To recompile or not recompile, that's the question
Predictably I suppose, recompilation gets my vote. The issues involved are technical, not management ones, and bureaucratizing them never helps. Development takes some time, and linking the development version of a PL/I compiler to that in current production use is always a bad idea. It ensures that retrograde technology and performance will be wired into newly developed systems. (This may happen anyway, of course; the use of the best translator is a necessary but not a sufficient condition for high performance. That use can be, often is, perfunctory.) I am also suspicious of Jan Vanbrabant's exclusion of homologation from this discussion. The word is derived from the ancient Greek verb homologein, to approve, which becomes homologare, to agree, in fairly late Latin. (It has a special meaning in Scots law, where it is used to characterize a process of removing minor defects from contracts, the remediated versions of which are then given the force of law.) If, as I suspect, homologation here has to do with ensuring that a system meets its functional specifications, it is relevant. John Gilmore, Ashland, MA 01721 - USA
Re: Unable to mount ZFS
In dc74548a025aff4a85f46926802a9b230a1d4...@chsa1035.share.beluni.net, on 05/31/2013 at 02:49 PM, Hunkeler Peter (TLSG 4) peter.hunke...@credit-suisse.com said: And be assured, I'm not taking this personally in any way. I'm more open to z/OS UNIX than many others and I tend to stand up for it sometimes. In parallel to growing up with z/OS UNIX as of 1994, I was learning much about the design of UNIX operating systems. This helps me understand why many things are the way they are in z/OS UNIX, *but* there still are many things I consider quite ugly. My initial reaction was that there were ugly things on both the MVS and Unix sides; it did not follow MVS rules, but it also lacked many facilities that Unix users had come to expect. Basically the goal seemed to be POSIX and X/Open certification, and many things not required for certification got short shrift. -- Shmuel (Seymour J.) Metz, SysProg and JOAT Atid/2 http://patriot.net/~shmuel We don't care. We don't have to care, we're Congress. (S877: The Shut up and Eat Your spam act of 2003)
Re: Unable to mount ZFS
In CAAJSdjgG=z6V1=_bypfbscfrg67ezgkfy25zwt-owbvzwpk...@mail.gmail.com, on 05/31/2013 at 08:02 AM, John McKown john.archie.mck...@gmail.com said: First, GIVE ME AN UP TO DATE BASH SHELL!!! Who implemented the standard z/OS UNIX shell? It's not to be Bourne! Next, port all the GNU utilities and abandon the IBM versions. Never mind GNU; resolve the EBCDIC-Unicode issue in Perl, and provide a current Perl, Python and Ruby. If they do port the GNU utilities, add the functionality of the current IBM utilities and contribute the code to the FSF. Yes, I know that may require some political scutwork. -- Shmuel (Seymour J.) Metz, SysProg and JOAT Atid/2 http://patriot.net/~shmuel We don't care. We don't have to care, we're Congress. (S877: The Shut up and Eat Your spam act of 2003)
Re: To recompile or not recompile, that's the question
The problem with recompilation is not purely technical, though. ISTM that there is far more bureaucracy needed to monitor and guarantee successful completion of full regression testing at each recompilation than there is payback from using notionally better translators and runtimes at a given stage. In the case where each stage from development to production may reside on physically and/or technically disparate systems, I admit that recompilation seems like a reasonable solution to ensure accurate and effective execution at each stage, but again ISTM that the additional verification requirements are far too onerous a cost, both technically and bureaucratically. IMHO, of course; we can certainly agree to disagree on this. As for Jan Vanbrabant's stage names, HOMOLOGATION easily translates to (Internal or Product) Quality Assurance, and ACCEPTANCE to Client Test. My organization uses both, though not in disparate technical or physical environments, and always without recompilation. Peter -Original Message- From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of John Gilmore Sent: Friday, May 31, 2013 2:40 PM To: IBM-MAIN@LISTSERV.UA.EDU Subject: Re: To recompile or not recompile, that's the question Predictably I suppose, recompilation gets my vote. The issues involved are technical and not management ones, and bureaucratizing them never helps. Development takes some time, and linking the development version of a PL/I compiler to that in current production use is always a bad idea. It ensures that retrograde technology and performance will be wired into newly developed systems. (This may happen anyway, of course; the use of the best translator is a necessary but not a sufficient condition for high performance. That use can be, often is, perfunctory.) I am also suspicious of Jan Vanbrabant's exclusion of homologation from this discussion.
The word is derived from the ancient Greek verb homologein, to approve, which becomes homologare, to agree, in fairly late Latin. (It has a special meaning in Scots law, where it is used to characterize a process of removing minor defects from contracts, the remediated versions of which are then given the force of law.) If, as I suspect, homologation here has to do with ensuring that a system meets its functional specifications, it is relevant. John Gilmore, Ashland, MA 01721 - USA
Re: Unable to mount ZFS
On May 31, 2013, at 2:36 PM, Shmuel Metz (Seymour J.) shmuel+...@patriot.net wrote: Never mind GNU; resolve the EBCDIC-Unicode issue in Perl and Provide a current Perl, Python and Ruby. I second the motion. -- Curtis Pew (c@its.utexas.edu) ITS Systems Core The University of Texas at Austin -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: To recompile or not recompile, that's the question
Changeman does compiles during Testing and Acceptance. Once the package is released to production, it typically will do copies. It does not usually do ReCompiles to move to a production environment. Lizette -Original Message- From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of Shmuel Metz (Seymour J.) Sent: Friday, May 31, 2013 12:40 PM To: IBM-MAIN@LISTSERV.UA.EDU Subject: Re: To recompile or not recompile, that's the question In CAMP5vN8r92rb3ar6OR7Zw--7=rgqbvrbosue76iqc6e+yrq...@mail.gmail.com, on 05/31/2013 at 04:11 PM, Jan Vanbrabant vanbrabant...@gmail.com said: The customer is considering now to use Serena's ChangeMan/ZMF to manage the application sources and load modules. That tool does not really support re-compile. Ouch! Please, your thoughts! Not being able to recompile is a ticking time bomb. -- Shmuel (Seymour J.) Metz, SysProg and JOAT Atid/2 http://patriot.net/~shmuel We don't care. We don't have to care, we're Congress. (S877: The Shut up and Eat Your spam act of 2003) -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: To recompile or not recompile, that's the question
I've never been in a shop where programs were (re)compiled into production. Once a program is compiled and tested, it's copied into production. Whatever risk might be incurred by moving a program into a down-level environment, the risk of recompiling is surely greater. If there's a problem after a recompile, it's a nightmare to sort out a change in environment from a change in the compiled module, either of which could be caused by any number of manual slip-ups. LE is very good at downward compatibility. I'll trust that over procedural vagaries any day.

.
.
JO.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
626-302-7535 Office
323-715-0595 Mobile
jo.skip.robin...@sce.com
Re: To recompile or not recompile, that's the question
On Fri, May 31, 2013 at 3:34 PM, Skip Robinson jo.skip.robin...@sce.com wrote:

> I've never been in a shop where programs were (re)compiled into production. Once a program is compiled and tested, it's copied into production. Whatever risk might be incurred by moving a program into a down-level environment, the risk of recompiling is surely greater. ... LE is very good at downward compatibility. I'll trust that over procedural vagaries any day.

Agreed. Recompiling into production would require a complete regression test to ensure that the code still works as it did in the testing and certification environments. That is an impossible task, because by definition the code is in production and the required tests cannot be run without impacting production balances, values, etc. There are also compliance and regulatory issues which require extensive documentation to show that what is running in production is what was tested.
Re: To Backup or Not to Backup Data - That is the question
Any presentations on the Hurricane Katrina scenarios? One company shut down their Miami data center and transferred operations to New Orleans. 3 days later Miami was still without power and New Orleans shut down.

On Thu, May 30, 2013 at 3:03 PM, Ed Finnell efinnel...@aol.com wrote:

> There were several Chicago stories at SHARE and others. Still remember the Ryder presentation after Hurricane Andrew. They even had 'Helper' teams for families that had damage or were displaced.

--
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?
Re: To Backup or Not to Backup Data - That is the question
We don't have enough DASD at the hot site for that.

On Thu, May 30, 2013 at 7:44 PM, Jeffery Swagger jeff...@comcast.net wrote:

> Yes, this! Prereq: the company must have a DR manager, one of whose responsibilities is to ensure that the families of those who deploy are taken care of. Here I'm thinking of natural disasters like hurricanes. Second: a real DR test would include actually running the business from the DR site for at least a week and then *bringing it back home*. How many institutions have actually tried that?
> -- Jeff

--
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?
Re: To Backup or Not to Backup Data - That is the question
There was a big lesson, probably already told here. A bank had a back-up site in Denver. Everything moved smoothly, except for one overlooked detail: the people involved in the DR had cell phones with New Orleans area codes, and the C/O was under 30-50 feet of water. No calls in or out. It was written up in Disaster Recovery Magazine. It's the unforeseen that crunches you in the butt! Nobody even thought about it until Katrina. Your technical plan may be perfect; it can all fall apart on people and logistical problems.

-
Ted MacNEIL
eamacn...@yahoo.ca
Twitter: @TedMacNEIL
Re: To recompile or not recompile, that's the question
On Fri, 31 May 2013 21:24:06 -0400, John Gilmore wrote:

> The production-library member must be identical to the acceptance-test library member, and the only way to ensure that this is the case is to copy the [successful-outcome] acceptance-test member into the production library. [The binder is, of course, very much better at such member-copying operations than any of the alternatives to it; and it should always be used.]

One tool to verify identity is the checksum. Will the binder so copy members that checksums can be recreated? If not, I consider the operation to have some of the character of a recompilation rather than a copy. And is there any way to verify that the metadata resulting from a copy by the binder are identical?

In the open world, suppliers regularly (though not always) supply checksums that recipients can verify. SMP/E network installation verifies checksums over part of the process (though far from end-to-end).

-- gil
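For illustration, the checksum comparison Gil describes can be sketched generically as follows. This is a plain Python example, not tied to the binder or SMP/E; the member paths are hypothetical:

```python
import hashlib

def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in 64 KiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(64 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

def identical(member_a, member_b):
    """Two members are byte-identical exactly when their digests match."""
    return sha256_of(member_a) == sha256_of(member_b)
```

The catch Gil raises applies directly: if the copy operation rewrites any embedded metadata (timestamps, IDR-style records), the digests diverge even when the executable text is unchanged, so a raw byte checksum can only certify identity when the copy is truly byte-for-byte.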
Re: To recompile or not recompile, that's the question
On Fri, May 31, 2013 at 8:59 PM, Paul Gilmartin paulgboul...@aim.com wrote:

> One tool to verify identity is the checksum. Will the binder so copy members that checksums can be recreated? If not, I consider the operation to have some of the character of a recompilation rather than a copy.

On the mainframe, most programs have the date and time of the compile stored in an eyecatcher near the program name. Sometimes for a release these are changed to be the same. A PTF name is often added to the eyecatcher as well.

--
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
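As a hedged sketch of what comparing such eyecatchers might look like: the module name, the date format, and the cp037 encoding below are assumptions chosen for the example, not a description of any particular compiler's actual layout.

```python
import re

def find_eyecatcher(module_bytes, name):
    """Scan raw EBCDIC-encoded module bytes for an eyecatcher of the
    assumed form 'NAME yyyy/mm/dd hh.mm' and return the timestamp
    string, or None if no such eyecatcher is found."""
    text = module_bytes.decode("cp037", errors="replace")
    m = re.search(re.escape(name) + r"\s+(\d{4}/\d{2}/\d{2} \d{2}\.\d{2})",
                  text)
    return m.group(1) if m else None

# Hypothetical module image containing "MYPROG 2013/05/31 12.40".
blob = b"\x00\x01" + "MYPROG 2013/05/31 12.40".encode("cp037") + b"\x00"
print(find_eyecatcher(blob, "MYPROG"))  # 2013/05/31 12.40
```

This is why eyecatcher timestamps cut both ways for the checksum question: they give a human-readable provenance marker, but they also guarantee that a recompile produces a different byte image even when the source is unchanged.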