Re: ISR only with SW contract
- Original Message From: Kurt Quackenbush [EMAIL PROTECTED]

> Yes, this sounds very strange to me as well. The only thing you need is an
> id on ShopzSeries so that you can obtain a certificate that will identify
> you to the server. I don't believe getting a ShopzSeries id is related to a
> SW support contract, so I think what you heard is false.

This is exactly the point. I've been told that if a customer does not have a SW support contract, he won't be able to get a ShopzSeries id anymore; no id, no PTFs via ISR. Weird. (No ServerPac either. Very weird.) Maybe it is something related to EMEA only, although I doubt it. If there are sysprog mates on the list whose employers do not have a SW support contract with IBM (for whatever reason), they can confirm this or not. In any case, thanks very much for answering.

Walter Marguccio
z/OS Systems Programmer
Munich - Germany

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Top 10 software install gripes
- Original Message From: Thomas Conley [EMAIL PROTECTED]

> #4 - Directory blocks should ALWAYS be a multiple of 45. That way I won't
> get directory out of space the next time you expand your product.

I wasn't aware of this. Why should directory blocks for PDSes be a multiple of 45? I agree 100% with the rest of your statements; sometimes an ISV installation can be a real PITA.

Walter Marguccio
z/OS Systems Programmer
Munich - Germany
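For what it's worth, the usual explanation for the multiple-of-45 rule is that a PDS directory block is a 256-byte record, and 45 of them fit on one 3390 track, so any directory allocation that isn't a multiple of 45 still consumes whole tracks. A minimal sketch of that arithmetic (the constant and helper name are mine, not from the thread):

```java
public class DirBlocks {
    // Assumption (the figure usually cited, not stated in the thread):
    // 45 of the 256-byte PDS directory blocks fit on one 3390 track.
    static final int BLOCKS_PER_3390_TRACK = 45;

    // Round a directory-block request up to the number of blocks the
    // allocated tracks will hold anyway.
    static int roundUpToFullTracks(int requested) {
        int tracks = (requested + BLOCKS_PER_3390_TRACK - 1) / BLOCKS_PER_3390_TRACK;
        return tracks * BLOCKS_PER_3390_TRACK;
    }

    public static void main(String[] args) {
        // Asking for 50 blocks still consumes 2 whole tracks (90 blocks),
        // so you may as well ask for 90 and get the extra entries free.
        System.out.println(roundUpToFullTracks(50)); // prints 90
    }
}
```

In other words, requesting 50 directory blocks and requesting 90 cost the same DASD space; rounding up to a multiple of 45 just makes the "free" entries usable.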
Re: zAAP for zip ?
Hi,

Thank you very much. I see the zip (deflate) library in the Java class hierarchy. We are using the same deflate library in C for compression. The idea was to call the Java functions instead of the C ones and let the zAAP process the work. The application would remain in C++ and only call the Java deflate library instead of C. I wanted to know whether it makes sense to try this out, or what the best supported way is to execute CPU-intensive functions from z/OS.
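The Java side of that idea is java.util.zip.Deflater, which implements the same zlib "deflate" algorithm as the C library. A minimal sketch of wrapping it behind one method (whether the compression actually ends up zAAP-eligible depends on the JVM and WLM configuration, not on this code):

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

public class ZaapDeflate {

    // Compress a buffer with the JVM's built-in zlib "deflate".
    // Running the compression inside the JVM is what makes it a
    // candidate for zAAP offload; the output format is the same
    // zlib stream the C library produces.
    public static byte[] deflate(byte[] input) {
        Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
        deflater.setInput(input);
        deflater.finish();

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!deflater.finished()) {
            int n = deflater.deflate(buf);   // bytes produced this call
            out.write(buf, 0, n);
        }
        deflater.end();                      // release native resources
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] c = deflate("hello hello hello hello".getBytes());
        System.out.println("compressed to " + c.length + " bytes");
    }
}
```

The C++ application would reach this through JNI (or a co-located Java server), and that boundary is where the real cost sits: the data copies across JNI can eat into whatever general-CP time the zAAP saves, so it would be worth measuring before committing.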
Shared HFS
Hi all again! I looked through the archives for a solution to my question, but with no luck... I want to prepare a monoplex system for an easy migration into a sysplex. So I want to implement Shared HFS without being in a sysplex, but the IPL failed when starting USS. My question is: can I use the VERSION parameter while I'm using the SYSPLEX(NO) parameter? Thank you very much for your time.
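For orientation, both parameters in question live in the BPXPRMxx parmlib member. A hypothetical fragment (the data set name and symbol are placeholders, not from the post) showing where they sit:

```
/* BPXPRMxx fragment - hypothetical names, for orientation only     */
SYSPLEX(YES)                    /* join the shared file system group */
VERSION('&SYSR1.')              /* version root, often set from a    */
                                /* system symbol such as the sysres  */
ROOT FILESYSTEM('OMVS.SYSPLEX.ROOT')
     TYPE(HFS)
     MODE(RDWR)
```

This is only a sketch of the statement placement; the interaction between VERSION and SYSPLEX(NO) is exactly what the poster is asking about.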
Re: Sysplex timer
Sorry! I forgot this thread! Finally I managed to get what I needed. I'm recovering the gap manually (5 by 5 seconds). When I use the Calculate button, most of the time I get an error and no stream data from my ETS. But sometimes I get the data, so I know my deviation from this ETS. My problem is solved (more or less). Thank you very much!

2007/3/29, Shmuel Metz (Seymour J.) [EMAIL PROTECTED]:

> In [EMAIL PROTECTED], on 03/28/2007 at 12:18 PM, (IBM Mainframe Discussion
> List) [EMAIL PROTECTED] said:
>
>> The question was "Could you xx". This is correct as far as the English
>> that I learned 50+ years ago in my childhood English grammar classes, but
>> not so far as current, slangy, idiomatic, and au courant English spoken
>> by the average illiterate American on the street. When I am in line at a
>> fast food restaurant, I come to a slow boil when I hear someone in front
>> of me say "Can I have xxx?", and then I am enraged when the person taking
>> his order says "Yes." As we all know, the person taking the order cannot
>> possibly know whether or not the person wanting to buy xxx can have it or
>> can't have it.
>
> Of course he knows; he just doesn't know whether it is safe.
>
> --
> Shmuel (Seymour J.) Metz, SysProg and JOAT
> ISO position; see http://patriot.net/~shmuel/resume/brief.html
> We don't care. We don't have to care, we're Congress.
> (S877: The Shut up and Eat Your spam act of 2003)
Virtual tape limits (Was: OEM software electronic download report card)
-- snip --
> Just ran into that. Things that large should be on physical tape (if you
> have it). We ran a report of dsns with more than 20 volsers, and just
> about all of them were from DB2 and jobs created by the same DBA. Our
> default forces things to virtual, and people do have to let us know if
> they want it to go to physical tape.
-- snip --

VSM (and VTS) offer simple, application-independent duplication for disaster recovery purposes. When you change jobs from using virtual tapes to real tapes, you have to look at how you're going to duplicate the tapes. It's one reason to try to keep all your tapes virtual.

-- snip --
> As far as more options... I know you use Sun/STK like we do... I think I
> heard VTCS 6.2 will have some help there.
-- snip --

As has already been indicated elsewhere, the size of the virtual tapes has been increased. There will still be data sets that are too large, but it will make things better.

BTW, what are your plans (Shane/Mark) for migrating to VTCS 6.2? Specifically, how long do you plan to wait before installing the new software?

John
Re: Channel Detected errors only 1 lpar, 1 job, 1 vsm
-- snip --
> Alright, this is a head scratcher, at least for me... We have recently
> installed a VSM5 and also have some VSM4's in the mix. We have one job
> that is continually getting channel detected errors every time it runs,
> and the errors are only occurring on the VSM5. Below is a small example.
> It shows one tape mount that works fine, and then the other that gets the
> error... This is using FDR, so perhaps Bruce Black has some insight...
> . . .
> IOS050I CHANNEL DETECTED ERROR ON 1891,17,01,**02,PCHID=0231
> IOS050I CHANNEL DETECTED ERROR ON 1891,13,01,**02,PCHID=0140
-- snip --

If writing this data set to a VSM4 works, and doesn't work on the VSM5, then you should open a problem with Sun. You may need to take VTCS and GTF trace data, but the Sun technical people will request whatever data they need. My customer has a couple of VSM5 systems installed and we do not have the problem you are describing (DFDSS user).

John
Re: Channel Detected errors only 1 lpar, 1 job, 1 vsm
John,

Go to the HMC and do problem analysis on the PCHID to see if you are getting sequence errors. Possibly a bad cable or connection.

Doug

- Original Message -
From: John Ticic [EMAIL PROTECTED]
Newsgroups: bit.listserv.ibm-main
To: IBM-MAIN@BAMA.UA.EDU
Sent: Monday, May 14, 2007 5:32 AM
Subject: Re: Channel Detected errors only 1 lpar, 1 job, 1 vsm

-- snip --
Re: Virtual tape limits (Was: OEM software electronic download report card)
On Mon, 2007-05-14 at 11:16 +0200, John Ticic wrote:

> BTW, what are your plans (Shane/Mark) for migrating to VTCS 6.2?
> Specifically, how long do you plan to wait before installing the new
> software?

Not my call - I have mentioned it to my customer. Once the decision to order it is made, I would expect it to roll out reasonably quickly.

Shane ...
Re: Channel Detected errors only 1 lpar, 1 job, 1 vsm
-- snip --
> John,
> Go to the HMC and do problem analysis on the PCHID to see if you are
> getting sequence errors. Possibly a bad cable or connection.
> Doug
>
> IOS050I CHANNEL DETECTED ERROR ON 1891,17,01,**02,PCHID=0231
> IOS050I CHANNEL DETECTED ERROR ON 1891,13,01,**02,PCHID=0140
-- snip --

Doug, the errors occur on two different channels (and PCHIDs). If the VSM5 box is connected to a director or a switch of some kind, then I suppose that the cable between the switch and the VSM5 could be causing the problems. The originator of this thread didn't say whether he'd tried all paths, or how the box is connected to his server (I was going to write CEC!!).

John (not the originator)
Re: Virtual tape limits (Was: OEM software electronic download report card)
John Ticic wrote:

> [...] VSM (and VTS) offer simple, application independent duplication for
> disaster recovery purposes. When you change jobs from using virtual tapes
> to real tapes you have to look at how you're going to duplicate the tapes.
> It's one reason to try to keep all your tapes virtual. [...]

It's one of the reasons to use HSM or FDR to make duplex copies. It costs no more CPU cycles, it fills up the tapes, and it utilizes more channels (which is not a problem, IMHO). If you want to write to tape directly... well... WHY do you want to?

BTW: the biggest VSM/VTS advantage, in my opinion, is the number of drives. It's important in a multi-LPAR installation.

My $0.02

--
Radoslaw Skorupka
Lodz, Poland

--
BRE Bank SA, ul. Senatorska 18, 00-950 Warszawa, www.brebank.pl
Sąd Rejonowy dla m. st. Warszawy XII Wydział Gospodarczy Krajowego Rejestru Sądowego, nr rejestru przedsiębiorców KRS 025237, NIP: 526-021-50-88. Według stanu na dzień 01.01.2007 r. kapitał zakładowy BRE Banku SA (w całości opłacony) wynosi 118.064.140 zł. W związku z realizacją warunkowego podwyższenia kapitału zakładowego, na podstawie uchwał XVI WZ z dnia 21.05.2003 r., kapitał zakładowy BRE Banku SA może ulec podwyższeniu do kwoty 118.760.528 zł. Akcje w podwyższonym kapitale zakładowym będą w całości opłacone.
Re: Virtual tape limits (Was: OEM software electronic download report card)
-- snip --
> VSM (and VTS) offer simple, application independent duplication for
> disaster recovery purposes. When you change jobs from using virtual tapes
> to real tapes you have to look at how you're going to duplicate the tapes.
> It's one reason to try to keep all your tapes virtual. [...]
>
> It's one of the reasons to use HSM or FDR to make duplex copies. It costs
> no more CPU cycles, it fills up the tapes, it utilizes more channels
> (it's not a problem IMHO). If you want to write to tape directly...
> well... WHY do you want it?
-- snip --

With most installations it's purely historical. The big data sets went directly to tape; the smaller ones went onto DASD and were then processed by HSM (or similar). With DASD prices now cheaper and still dropping, I agree with you that there shouldn't be any need to write directly to tape.

Why VSM/VTS and not HSM? Well, one reason is that handling duplicates in HSM requires manual intervention. When a primary tape goes bad, you need to activate the alternate and then ensure that you produce another duplicate of this tape. All quite simple, but still manual. When a virtual tape is bad (due to a bad real physical tape), it's all handled under the covers as far as HSM is concerned. No need to screw around with primary and alternate. You do need to invest more in the size of your VSM/VTS, and maybe that is reason enough not to do this. Also, implementing high availability is a lot easier when the duplexing is application independent.

-- snip --
> BTW: The biggest VSM/VTS advantage, in my opinion, is the number of
> drives. It's important in a multi-LPAR installation.
-- snip --

That's true, but it is also important to size your VSM/VTS properly to ensure that residency time is long enough for your virtual tapes and that the tapes get migrated to the backend in a timely manner.

John
Re: ISPF EDIT RECOVERY scope
On 5/13/2007 11:39 PM, Paul [EMAIL PROTECTED]@bama.ua.edu wrote:

> A few months ago, I asked in this forum whether I could disable Confirm
> Data Set Delete in my Profile. The modal reaction was, "No! Very Bad Idea!
> Extremely Dangerous! We hope IBM never provides such a facility, even as
> an option!" (There were a few exceptions of the "not my dog" genre.) But
> isn't RECOVERY OFF likewise a dangerous behavior, which shouldn't be
> stored in a profile?

In my opinion, no, it's not dangerous - or at least not in the same way. If you delete a data set, it's gone. If you set recovery off, at most you lose some amount of your time if an error occurs, but you haven't really lost any data. You can always repeat the work.

Walt
Re: Channel Detected errors only 1 lpar, 1 job, 1 vsm
A problem was already opened with Sun. Now here is what they found. There is another shop with the exact same problem. It has only been seen when using FDR and a mix of VSM4's and VSM5's. If the job creates one tape on a VSM4 and then tries to create another on a VSM5, it gets the error. So we've simply varied the VSM5 drives offline to this one system.

John Benik
United Health Technologies
Mainframe Storage Management
[EMAIL PROTECTED]
763-744-0683

This e-mail, including attachments, may include confidential and/or proprietary information, and may be used only by the person or entity to which it is addressed. If the reader of this e-mail is not the intended recipient or his or her authorized agent, the reader is hereby notified that any dissemination, distribution or copying of this e-mail is prohibited. If you have received this e-mail in error, please notify the sender by replying to this message and delete this e-mail immediately.
Virtual Tape Stacking Software
Gentle Listers -

I have searched the archives and found a lot of discussion, but nothing I was looking for. We currently have Open Tech to stack virtual tapes on one physical 3590 tape (VDR). Are there any other equivalents, or is Open Tech it? We are running z/OS V1.7 with DFSMShsm. We also have DFDSS and a couple of CA products (CA1). We need to compare the Open Tech process with other vendor(s) to ensure we have the right tool for the right function. Any ideas will be appreciated.

Lizette
Microsoft Claims It All
Microsoft has dropped the other shoe. It claims that Linux infringes on 235 of its patents, and it wants payment:

http://money.cnn.com/magazines/fortune/fortune_archive/2007/05/28/100033867/index.htm

My personal theory is that they have read the paper by open-source philosophers Eric Raymond and Rob Landley ('World Domination 201'), which maintains that we are at the start of 64-bit operating systems and that this is a 'tipping point'. The new standard for future decades will be set by either Microsoft or Apple or Linux... So, failing any kind of technical excellence, all Microsoft has to do is keep the FUD and litigation going for the next five or ten years.

http://www.catb.org/~esr/writings/world-domination/world-domination-201.html
Re: Virtual Tape Stacking Software
Hello Lizette,

Take a look at CA VTAPE:
http://www.ca.com/files/DataSheets/brighstor_ca_vtape_r11-5_ds.pdf

Success story that discusses tape stacking:
http://ca.com/files/SuccessStories/29989_county_el_paso_bvs.pdf

It integrates nicely with CA1 as well.

Bob

Robert B. Fake
InfoSec, Inc.
703-825-1202 (o) 571-241-5492 (c) 949-203-0406 (efax)
[EMAIL PROTECTED]
Visit us at www.infosecinc.com

-----Original Message-----
From: Lizette Koehler [mailto:[EMAIL PROTECTED]
Sent: Monday, May 14, 2007 9:13 AM
To: IBM-MAIN@BAMA.UA.EDU
Subject: Virtual Tape Stacking Software

-- snip --
Re: Shared HFS
On Mon, 14 May 2007 10:10:18 +0200, Víctor de la Fuente [EMAIL PROTECTED] wrote:

> Hi all again! I looked through the archives for a solution for my
> question, but with no luck... I want to prepare a Monoplex system for an
> easy migration into a Sysplex. So I want to implement Shared HFS without
> being in a Sysplex, but the IPL failed when starting USS. My question is:
> Can I use the Version parameter while I'm using the Sysplex(NO) parameter?

To be honest, I don't know for sure (and didn't even try RTFM-ing). I was going to say try it and test it... but it looks like you already did (although all you said was "the IPL failed when starting Unix", with no specifics). But why does it matter? I have a shared HFS setup where I share the sysres set (and HFS files) between sysplexes with shared HFS and non-shared HFS and monoplexes. The key is the HFS (zFS) names you choose, not whether SYSPLEX() and VERSION() are specified in BPXPRMxx.

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group: G-ITO
mailto:[EMAIL PROTECTED]
z/OS and OS390 expert at http://searchDataCenter.com/ateExperts/
Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html
Re: Virtual tape limits (Was: OEM software electronic download report card)
On Mon, 14 May 2007 11:16:32 +0200, John Ticic [EMAIL PROTECTED] wrote:

> -- snip --
> Just ran into that. Things that large should be on physical tape (if you
> have it). We ran a report of dsns with more than 20 volsers, and just
> about all of them were from DB2 and jobs created by the same DBA. Our
> default forces things to virtual, and people do have to let us know if
> they want it to go to physical tape.
> -- snip --
>
> VSM (and VTS) offer simple, application independent duplication for
> disaster recovery purposes. When you change jobs from using virtual tapes
> to real tapes you have to look at how you're going to duplicate the tapes.
> It's one reason to try to keep all your tapes virtual.
> -- snip --

Exactly. We got out of the business of letting applications decide what to duplex years ago. That is why our default is to go to virtual and we duplex 100% for disaster recovery. That was part of our VSM implementation when we migrated from VTS. So it's foolproof... we can't miss something because someone forgot to tell operations there was a new application and a new tape that needed to be added to a vault pattern and sent off site, or be added to a TAPEREQ / MGMTCLAS, or an SMS change, etc. This includes test data... but since there are better retention controls for test data, it's really just a nit in the total amount duplexed. It's a small trade-off for simplifying the management of the environment (which is very large) and guaranteeing to the business that we will have all the tape data in a disaster.

So yes... if any of those DBA files (image copies?) have DR considerations, they will need a second physical copy if changed from virtual to physical. On a similar note... some of the DBAs' old FDR full pack backup jobs were duplexing from FDR... so we were creating 4 virtual copies. Those were changed at some point, but I'm willing to bet there are still applications creating their own copies for DR, and both sets are duplexed in virtual now.

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group: G-ITO
mailto:[EMAIL PROTECTED]
z/OS and OS390 expert at http://searchDataCenter.com/ateExperts/
Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html
Re: Virtual Tape Stacking Software
It sounds like you are an IBM virtual tape shop. As far as I know, VDR is the only product that will copy onto a tape and stack it for offsite shipment. IBM recommends using this rather than the Export and Import utility. We use STK and a package with the STK software called DR Utilities. This does use the VSM export and import feature.

John Benik
Re: Question on DFP
On 5/11/2007 1:58 PM, Mark H. Young wrote:

> On Fri, 11 May 2007 10:35:13 -0400, John Eells [EMAIL PROTECTED] wrote:
>
>> [EMAIL PROTECTED] wrote:
>>> How can I tell which version of DFP is on our installation, 1.3 or 1.4?
>>> We run z/OS 1.6, and will be running z/OS 1.8. The sysprogrammers don't
>>> know.
>> snip
>> DFSMSdfp in z/OS R6 is... well... the z/OS R6 level of DFSMSdfp. Likewise
>> for z/OS R8 DFSMSdfp (and, for that matter, z/OS R7 DFSMSdfp).
>
> Well, that's not entirely true. Mark Zelden has a REXX exec called IPLINFO
> that I use (available on his website, and he is a frequent contributor
> here). Part of the output from my system is:
>
>   The OS version is z/OS 01.04.00 - FMID HBB7707 (SP7.0.4).
>   The DFSMS level is z/OS 1.3.0.
>
> So the DFSMS level on my z/OS 1.4 system is not 1.4, unless I have old
> code?

What John means, I believe, is that with rare exception, for things that you can get only as part of z/OS, the only valid system configuration is one in which you have the level of each element that z/OS shipped to you. Thus, for example, when you order z/OS R4 you get the appropriate DFSMS FMID(s) that correspond to z/OS R4. You could perhaps snip out a different DFSMS FMID from z/OS R5, or from z/OS R3, and install it on your z/OS R4 system. However, if you did that, then you have a strong likelihood that the system would not work, and a near (perhaps total) certainty that IBM would not support your system if you reported a problem.

The same applies to RACF, for example. You get a particular FMID when you order z/OS release x and ask for RACF, and that is the FMID you must use. It is possible that release x and release x+1 will have the same RACF FMID, if RACF did not happen to change. But in any case, if you're going to use RACF you must use the FMID that corresponds with that release and ships with that release, or the system probably won't work right and is almost certainly not supported.

Thus, it makes little, if any, sense to ask what release of DFSMS or RACF you have. Rather, you ask "what release of z/OS do I have?"

There are some exceptions. For example, some elements of z/OS ship new function (FMIDs) via web downloads. ICSF has done that, I think. This allows shipment of major new function without needing to coordinate schedules with the base z/OS schedules. Except for that, though, you should ask about the z/OS release, not the release level of some component of z/OS. And for the cases that ship via web deliverable, etc., you should ask about the FMID, not the release, and the system programmer certainly knows the FMID, or can find it easily.

Walt
Re: Virtual tape limits (Was: OEM software electronic download report card)
Mark Zelden wrote:

> On Mon, 14 May 2007 11:16:32 +0200, John Ticic [EMAIL PROTECTED] wrote:
> -- snip --
> Exactly. We got out of the business of letting applications decide what to
> duplex years ago. That is why our default is to go to virtual and we
> duplex 100% for disaster recovery.
> -- snip --

Well, it's safe, convenient, and error-proof, but EXPENSIVE. Assuming budget limitations (who's not limited?), I would choose more RTDs rather than VTDs *plus* RTDs. Of course YMMV; for example, the multi-LPAR issue can be addressed with IBM ATAM or CA MIA, or just with the number of VTDs. Obviously VTDs still do not solve the problem of huge datasets. It's not only the limitation of 255 volsers; for example, AFAIK HSM does not back up a file when the backup occupies more than 40 volumes.

Assuming a "buy what you want" scenario, I would buy many VSMs and RTDs <g>

--
Radoslaw Skorupka
Lodz, Poland
Re: Microsoft Claims It All
This almost sounds like 1999, when someone claimed a patent on a date conversion routine and wanted to charge anyone who was converting dates in their Y2K efforts. I'm sure MS has very good lawyers working on this, but how are you going to sue millions of Linux users? The people that work on the Linux kernel don't profit from any of it, although IBM and many of the other software companies make software for Linux that they sell for profit. This should be very interesting.

If, on the off chance, MS should win, how would that affect z/OS users? I can see that it might help. People might get pissed off enough at MS and convert to a mainframe. (Oh well, it's a good thought.)

I'm off to a job interview, and then I have to clean my Dodgeville house, so I won't be reading the list till tomorrow evening.

Eric Bielefeld
Milwaukee, Wisconsin

On Mon, 14 May 2007 09:15:28 -0400, Warner Mach [EMAIL PROTECTED] wrote:

> Microsoft has dropped the other shoe. It claims that Linux infringes on
> 235 of its patents and it wants payment:
> http://money.cnn.com/magazines/fortune/fortune_archive/2007/05/28/100033867/index.htm
>
> My personal theory is that they have read the paper by open-source
> philosophers Eric Raymond and Rob Landley ('World Domination 201') which
> maintains that we are at the start of 64-bit operating systems and that
> is a 'tipping point'. The new standard for future decades will be set by
> either Microsoft or Apple or Linux... So, failing any kind of technical
> excellence, all Microsoft has to do is keep the FUD and litigation going
> for the next five or ten years.
> http://www.catb.org/~esr/writings/world-domination/world-domination-
Re: Question on DFP
On Mon, 14 May 2007 10:15:11 -0400, Walt Farrell [EMAIL PROTECTED] wrote: Thus, it makes little, if any, sense to ask what release of DFSMS or RACF you have. Rather, you ask what release of z/OS do I have. There are some exceptions. For example, some elements of z/OS ship new function (FMIDs) via web downloads. ICSF has done that, I think. This allows shipment of major new function without needing to coordinate schedules with the base z/OS schedules. Except for that, though, you should ask about z/OS release not the release level of some component of z/OS. And for the cases that ship via web deliverable, etc., you should ask about the FMID, not the release, and the system programmer certainly knows the FMID, or can find it easily. It hasn't been done since OS/390, but you were able to order and install a higher level of DFSMS than the DFSMS level that was originally distributed with the OS/390 level at the time (all in a supported configuration). My memory is a little fuzzy on this one, but I think it was OS/390 DFSMS 1.5 that could be ordered and installed on OS/390 2.6 - but OS/390 2.6 came with OS/390 DFSMS 1.4. I don't think I've ever seen a statement that indicates IBM won't do something like that again with z/OS. Mark -- Mark Zelden Sr. Software and Systems Architect - z/OS Team Lead Zurich North America / Farmers Insurance Group: G-ITO mailto:[EMAIL PROTECTED] z/OS and OS390 expert at http://searchDataCenter.com/ateExperts/ Systems Programming expert at http://expertanswercenter.techtarget.com/ Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Microsoft Claims It All
Or management might see that MS really does know the only way to use computers and finally get rid of all of the dinosaurs. Good luck on your interview. /Tom Kern On Mon, 14 May 2007 09:29:28 -0500, Eric Bielefeld [EMAIL PROTECTED] wrote: <snip> -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Virtual tape limits (Was: OEM software electronic download report card)
On Mon, 14 May 2007 16:26:48 +0200, R.S. [EMAIL PROTECTED] wrote: Well, it's safe, convenient, error-proof, but EXPENSIVE. Yes... it has a cost. As I said, the cost of duplexing test data is probably a nit. For production, yes we are duplexing things that *may* not be needed for DR (but who really knows... and I hope we never find out). The landscape has changed over the years so perhaps we will revisit this some day. But just because DASD is mirrored doesn't mean a (virtual) tape data set isn't needed as input from a previous day's, week's or month's run of some application. But the other benefit (and part of the reasoning behind it and getting approval for the $$$) is the size of these back end volumes (MVCs, for those of you who speak VSM). If one of those puppies gets destroyed or is unreadable... that's a lot of data (typically hundreds of virtual volumes) on a single tape. And we've had our share of bad media issues. So at that point the extra cost (beyond in-house recovery) is shipping the duplex MVCs off site and storing them. Mark -- Mark Zelden Sr. Software and Systems Architect - z/OS Team Lead Zurich North America / Farmers Insurance Group: G-ITO mailto:[EMAIL PROTECTED] z/OS and OS390 expert at http://searchDataCenter.com/ateExperts/ Systems Programming expert at http://expertanswercenter.techtarget.com/ Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: ISPF EDIT RECOVERY scope
Walt Farrell wrote: On 5/13/2007 11:39 PM, Paul [EMAIL PROTECTED]@bama.ua.edu wrote: A few months ago, I asked in this forum whether I could disable Confirm Data Set Delete in my Profile. The modal reaction was, No! Very Bad Idea! Extremely Dangerous! We hope IBM never provides such a facility, even as an option! (There were a few exceptions of the not my dog genre.) But isn't RECOVERY OFF likewise a dangerous behavior, which shouldn't be stored in a profile? In my opinion, no, it's not dangerous or at least not in the same way. If you delete a data set, it's gone. If you set recovery off, at most you lose some amount of your time if an error occurs, but you haven't really lost any data. You can always repeat the work. Matter of degree, Walt. If you delete a data set you can also recreate it, to some degree; it just might take a little longer. Kind regards, -Steve Comstock The Trainer's Friend, Inc. 303-393-8716 http://www.trainersfriend.com z/OS Application development made easier * Our classes include + How things work + Programming examples with realistic applications + Starter / skeleton code + Complete working programs + Useful utilities and subroutines + Tips and techniques -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Virtual tape limits (Was: OEM software electronic download report card)
John, Not only are Enterprise disk prices coming down, but virtualization gives z/OS shops access to Midrange disk drives which have a much lower unit cost. That's one of the reasons why HDS have gone into the Virtual Tape business with VTF (http://www.hds.com/products/storage-software/virtual-tape-library.html). Using TMM or something like VTF means you can use all your standard Remote Copy software to replicate your tapes in real time, across some pretty serious distances. And if you use RAID-6 then you can do away with duplexing altogether because the disk arrays can handle a double failure without data loss. Ron SNIP With DASD prices now cheaper and still dropping, I agree with you that there shouldn't be any need to write directly to tape. Why VSM/VTS and not HSM? Well, one reason is that handling duplicates in HSM requires manual intervention. When a primary tape goes bad, you need to activate the alternate and then ensure that you produce another duplicate of this tape. All quite simple, but still manual. When a virtual tape is bad (due to a bad real physical tape), it's all handled under the covers as far as HSM is concerned. No need to screw around with primary and alternate. You do need to invest more in the size of your VSM/VTS and maybe that is reason enough not to do this. Also, implementing high availability is a lot easier when the duplexing is application independent. -- snip -- BTW: The biggest VSM/VTS advantage, in my opinion, is the number of drives. It's important in multi-LPAR installations. -- snip -- That's true, but it is also important to size your VSM/VTS properly to ensure that residency time is long enough for your virtual tapes and that the tapes get migrated to the backend in a timely manner. John -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Virtual tape limits (Was: OEM software electronic download report card)
Hate to interrupt with a question; this is a good thread going. The 3590s are getting huge. However, what are MVCs? -Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Mark Zelden But the other benefit (and part of the reasoning behind it and getting approval for the $$$) is the size of these back end volumes (MVCs for those of you speaketh VSM). -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Virtual tape limits (Was: OEM software electronic download report card)
MVC = Multi-Volume Cartridge. These are the back end physical cartridges (STK 9840 in my case) that virtual tape data is backed up to in a VTSS (Virtual Tape Storage Subsystem). -Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Clark, Kevin Sent: Monday, May 14, 2007 10:23 AM To: IBM-MAIN@BAMA.UA.EDU Subject: Re: Virtual tape limits (Was: OEM software electronic download report card) <snip> -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Virtual tape limits (Was: OEM software electronic download report card)
It's one reason to try to keep all your tapes virtual. It's a small trade-off for simplifying the management of the environment (which is very large) and guaranteeing to the business that we will have all the tape data in a disaster. Well, it's safe, convenient, error-proof, but EXPENSIVE. It doesn't have to be expensive. Seen this? http://www.luminex.com/products/channel_gateway/firex4500.htm Or this? http://www.bustech.com/products/mainframe-data-library.asp Both will mirror the data on the back end for BCP purposes. Why even keep the RTDs? This stuff is inexpensive enough to keep all the tape data on spinning disk and fully mirrored. We're seriously thinking about it. Probably works for us better than most, however, since we made a concerted effort over the last decade to keep people from using tape. So it's mainly just HSM and DB images at this point and it's under 25TB of data. Jeffrey Deaver, Engineer Systems Engineering [EMAIL PROTECTED] 651-665-4231(v) 651-610-7670(p) -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Microsoft Claims It All
Groklaw's take on it (my favorite blog) http://www.groklaw.net/article.php?story=20070513234519615 -- John McKown Senior Systems Programmer HealthMarkets Keeping the Promise of Affordable Coverage Administrative Services Group Information Technology The information contained in this e-mail message may be privileged and/or confidential. It is for intended addressee(s) only. If you are not the intended recipient, you are hereby notified that any disclosure, reproduction, distribution or other use of this communication is strictly prohibited and could, in certain circumstances, be a criminal offense. If you have received this e-mail in error, please notify the sender by reply and delete this message without copying or disclosing it. -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Microsoft Claims It All
snip Or management might see that MS really does know the only way to use computers and finally get rid of all of the dinosaurs. /snip Or even boot Microsnot(not a typo) out and run Linux on desktops/servers (and keep the dinosaurs) G -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: ISR only with SW contract
Yes, this sounds very strange to me as well. The only thing you need is an id on ShopzSeries so that you can obtain a certificate that will identify you to the server. I don't believe getting a ShopzSeries id is related to a SW support contract, so I think what you heard is false. This is exactly the point. I've been told that if a customer does not have a SW support contract, he won't be able to get a ShopzSeries id anymore; no id, no PTFs via ISR. Weird. (No ServerPac either. Very weird.) This all sounds very fishy, but I'll see what I can find out and report back to the list. Kurt Quackenbush -- IBM, SMP/E Development -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: MIDAW and EMC
Bruce, I know of this paper but these studies are related to IBM hardware. How about EMC hardware? I guess you must ask EMC. I don't know of any. I may be corrected, but I do not know of anything in the implementation of MIDAW that was hardware related. The hardware may have some modifications to address the changes to the CCW. However, the EMC Symmetrix product family supports MIDAW and has since last year. As I understand it, MIDAWs were designed to require no changes at the disk control unit. MIDAWs change the way that data is presented and the speed of the presentation, but the FICON/ESCON protocols accommodated that with no change. IBM originally implemented MIDAWs to be enabled for all disk devices, but tests with EMC equipment uncovered a timing issue that made it fail. So IBM changed it so that all IBM subsystems are enabled for MIDAWs but all non-IBM boxes have to pass an indication of MIDAW support. EMC did respond pretty quickly to fix the timing issue. -- Bruce A. Black Senior Software Developer for FDR Innovation Data Processing 973-890-7300 personal: [EMAIL PROTECTED] sales info: [EMAIL PROTECTED] tech support: [EMAIL PROTECTED] web: www.innovationdp.fdr.com -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Top 10 software install gripes
Why should directory blocks for PDSes be multiple of 45 ? That's not entirely accurate. A 3390 track holds 46 directory records, but allocating only 45 allows the EOF for the directory to be placed on the same track. But for additional directory space, use multiples of 46, formula 45+(46*n). That will put 46 on each track, and only 45 on the LAST track, again putting the EOF at the end of that track. Why do this? No particular reason except that it is neater; the first member can start with record 1 on the track following the directory. PDS directory searches don't ever reach the EOF; the last block has a key of all FFs, which stops the keyed search. -- Bruce A. Black Senior Software Developer for FDR Innovation Data Processing 973-890-7300 personal: [EMAIL PROTECTED] sales info: [EMAIL PROTECTED] tech support: [EMAIL PROTECTED] web: www.innovationdp.fdr.com -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
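The 45+(46*n) formula above can be sketched as a plain allocation; the data set name, space figures, and DCB attributes below are hypothetical, and the 45 in the SPACE parameter is the directory-block count (n=0 in the formula; n=1 would give 91, and so on):

```jcl
//* Hypothetical allocation: requesting 45 directory blocks so the
//* directory EOF lands on the same track as the directory itself,
//* per the 45+(46*n) formula discussed above.
//ALLOC    EXEC PGM=IEFBR14
//NEWPDS   DD DSN=MY.PRODUCT.PDS,DISP=(NEW,CATLG),
//            UNIT=SYSALLDA,SPACE=(CYL,(50,10,45)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)
```

The third subparameter of SPACE is the directory-block request; everything else here is just an ordinary FB-80 PDS allocation.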
Data Areas Manuals to be dropped
Folks: Just in case some of you weren't around in the mid-80s doing development and the like, IBM decided to go OCO (we will skip the cause of this decision) and the CEO (Ackers - if I remember the spelling of his name) promised that nothing would go OCO until it had been correctly documented. NaSPA was hot on IBM about this and many of us watched as this promise was, well, given short shrift. Now to the subject: I pointed out in an ETR that certain routines from IBM that I was using had doc that referred one to a macro, which had to be read to get the info needed to invoke the routines in question. Specifically, I was working on TCP interfaces that required one to read Unix System Services macros to get the correct parameters. My point to them was, they need to document those macros and the DATA AREAS. [Sending one off to look at a macro source to get needed programming info is a bit lazy (Unix System Services Programming: Assembler Callable Services Ref). That would be tantamount to BCP telling you to look at the source for GETMAIN to figure out how to set the bits necessary to, well, you get the idea.] The following is their [IBM's] official policy and response: direct quote from ETR ACTION TAKEN: Received following information from Debbie N. (USS ID group). --- The strategic direction is to remove the data area books (JES2 and JES3 data areas have already been deleted), and the data areas will be generated automatically from the code. The z/OS USS ID group will look into including the missing z/OS USS macros. --- / direct quote from ETR If this particular statement of direction sends chills through you as it does me, then perhaps you need to make your voices heard. -- Regards, Steve Thompson Sterling Commerce Inc. Tel: +469 524 2622 -- Opinions stated by this poster are those of this poster and may or may not be those of poster's employer. 
Lack of disclaimer(s) on other postings by this poster does not necessarily imply that this poster was stating opinion(s) or policies of poster's employer. -- -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Channel Detected errors only 1 lpar, 1 job, 1 vsm
IOS050I CHANNEL DETECTED ERROR ON 1891,17,01,**02,PCHID=0231 IOS050I CHANNEL DETECTED ERROR ON 1891,13,01,**02,PCHID=0140 All I can say for sure is that the **02 (channel status) indicates an interface control check. For real devices, this indicates some problem on the channel or the channel adaptor on the control unit. For VSM I have no clue, but perhaps they use this to report some condition in the VSM. -- Bruce A. Black Senior Software Developer for FDR Innovation Data Processing 973-890-7300 personal: [EMAIL PROTECTED] sales info: [EMAIL PROTECTED] tech support: [EMAIL PROTECTED] web: www.innovationdp.fdr.com -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Virtual Tape Stacking Software
Lizette, go to our website (below) to read about our product FATSCOPY. -- Bruce A. Black Senior Software Developer for FDR Innovation Data Processing 973-890-7300 personal: [EMAIL PROTECTED] sales info: [EMAIL PROTECTED] tech support: [EMAIL PROTECTED] web: www.innovationdp.fdr.com -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Data Areas Manuals to be dropped
-Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Thompson, Steve Sent: Monday, May 14, 2007 10:06 AM To: IBM-MAIN@BAMA.UA.EDU Subject: Data Areas Manuals to be dropped /snip/ A large number of data areas are already documented in the manuals using programmatically obtained information. The commentary is lifted verbatim from the macro expansion (possibly using some form of ADATA analysis or list-scraping software). Although it is an efficient way to gather documentation, the lack of programming insight (for lack of a better characterization) on how to use the data area is quite annoying. If it's not written inside the macro, then it won't appear in the data area manual. My humble interpretation (meaning it's probably wrong) of IBM's response is that they intend to stop using technical writers to interpret macro expansions and translate them into a data area manual. Instead, the data area documentation will be generated exclusively by programmatic analysis of the macro expansion, and the burden for accurate documentation will fall upon the macro developers, rather than the technical writers. 2 cents worth. your mileage may vary. Jeffrey D. Smith Principal Product Architect Farsight Systems Corporation 700 KEN PRATT BLVD. 
#204-159 LONGMONT, CO 80501-6452 303-774-9381 direct 303-484-6170 FAX http://www.farsight-systems.com/ -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Top 10 software install gripes
Bruce, I don't want to go up against you in a DASD knowledge battle (I know I would lose), but my copy of IBM 3390 Direct Access Storage Reference Summary, GX26-4577-2, August 1990, Table 2 (3390 mode), page 10, says that 255 to 288 byte blocks with keys of 1-22 bytes are max 45 to the track (not 46), so wouldn't the formula be 44+(45*n) as was mentioned in a prior reply? Peter -Original Message- From: Bruce Black [mailto:[EMAIL PROTECTED] Sent: Monday, May 14, 2007 11:58 AM To: IBM-MAIN@BAMA.UA.EDU Subject: Re: Top 10 software install gripes Why should directory blocks for PDSes be multiple of 45 ? Thats not entirely accurate. A 3390 track holds 46 directory records, but allocating only 45 allows the EOF for the directory to be placed on the same track. But for additional directory space, use multiples of 46, formula 45+(46*n). That will put 46 on each track, and only 45 on the LAST track, again putting the EOF at the end of that track. Why do this? No particular reason except that it is neater; the first member can start with record 1 on the track following the directory. PDS directory searches don't ever reach the EOF; the last block has a key of all FFs, which stops the keyed search. This message and any attachments are intended only for the use of the addressee and may contain information that is privileged and confidential. If the reader of the message is not the intended recipient or an authorized representative of the intended recipient, you are hereby notified that any dissemination of this communication is strictly prohibited. If you have received this communication in error, please notify us immediately by e-mail and delete the message and any attachments from your system. -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Load Library BLKSIZE=32760 (Was: Top 10 software install gripes)
Edward Jaffe wrote: John Eells wrote: Paul Gilmartin wrote: snip I.e. 32760. So I can equally well use BLKSIZE=0 for load module libraries as for other data sets. It matters little to me whether BLKSIZE gets set by SDB or by the binder. snip It gets set by the first program to open the data set. If that's always the Binder, it appears from your test that you get 32760. Right. The only accurate SDB test occurs when using IEFBR14. Otherwise, results may vary depending on which program opens the file. It's been a Long Time since we did the research into all this that led to changes in IBM's internal packaging rules and what happens in ServerPac JCL, but as I recall...somewhat hazily at this point... Since IEFBR14 doesn't issue OPEN, I think the block size for an SDB RECFM=U data set that happens to be allocated to it remains indeterminate (perhaps set to zero in the F1DSCB), and won't change until a program that *does* issue OPEN against the data set has been run. I can't recall for certain (and I don't have time to test it) but I seem to recall that IEBCOPY might use the input block size to set the output block size in this case. Paul's test with the Binder seems to show that it will set 32760, but unless everything is bound to load the data sets (including DLIBs), this does not cover all possible cases. Since ServerPac loads data sets using IEBCOPY COPYMOD, setting the block size to 32760 explicitly when the data sets are defined causes IEBCOPY to honor their block size, which always minimizes the space required for load modules (which in turn has some downstream potential performance benefit for Fetch processing depending on frequency of use, cache hit ratios, the phase of the moon, and so on). I'd suggest the same for anyone loading system software with IEBCOPY: Use COPYMOD and a block size of 32760. 
So far as I know nobody has done much to look at the potential benefits and downsides of using 32760 for application load libraries, so this recommendation extends only to system software at the moment. -- John Eells z/OS Technical Marketing IBM Poughkeepsie [EMAIL PROTECTED] -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
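John's suggestion above can be sketched in JCL; the data set names and space figures here are hypothetical. The target library is defined explicitly with BLKSIZE=32760 and COPYMOD reblocks the load modules to fit:

```jcl
//* Hypothetical IEBCOPY COPYMOD job: define the output load
//* library with RECFM=U,BLKSIZE=32760 and let COPYMOD reblock.
//LOAD     EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//DLIB     DD DSN=MY.DLIB.LOAD,DISP=SHR
//TARGET   DD DSN=MY.TARGET.LOAD,DISP=(NEW,CATLG),
//            UNIT=SYSALLDA,SPACE=(CYL,(100,10,90)),
//            DCB=(RECFM=U,BLKSIZE=32760)
//SYSIN    DD *
  COPYMOD INDD=DLIB,OUTDD=TARGET
/*
```

The key difference from a plain COPY is that COPYMOD re-blocks load modules to the output block size rather than copying blocks as-is.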
Catalog and APF
Tried to bring up 1.7 this weekend in PRODUCTION and ran into major issues. While climbing back through the logs we discovered things like the network trying to come up using SYS1.SISTCLIB, which is normal, except it and other tasks are showing APF issues. Hm. It's in there. 1. New volumes cloned from installation set 2. Using VOL(XX) parameter in APF and LNKLST members 3. Smoke and flames at IPL. Output from failing task shows library pointing back to the installation set. ?? I must have really botched the cloning process and am really baffled about the whole thing. Didn't we used to clone like that and life was good? Or did we clone and clip the new volumes back? Install volumes: xxxMS1, xxxMS2, xxxMS3 Clones: xxxPS1, xxxPS2, xxxPS3 Thank you all...my boss is getting itchy and it's review time..gulp! -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
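For what it's worth, a sketch of the kind of PROGxx entries involved (the data set name and volsers follow the post's pattern and are hypothetical). APF authorization matches on both data set name and volume serial, so if the catalog still resolves a library to the old install volume while the APF list names only the clone, the library actually opened is not authorized:

```jcl
/* PROGxx sketch - hypothetical entries.                         */
/* Entry naming the cloned volume explicitly:                    */
APF ADD DSNAME(SYS1.SISTCLIB) VOLUME(XXXPS1)
/* If I recall correctly, ****** can be coded to mean "the       */
/* current sysres volume", avoiding hard-coded volsers for       */
/* sysres-resident libraries:                                    */
APF ADD DSNAME(SYS1.SISTCLIB) VOLUME(******)
```

Under that reading, the symptom in the post (tasks pointing back at the install set) fits a catalog still resolving to xxxMS1 while the APF list names xxxPS1.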
Dynamic splitting of file
Hello, Please provide your suggestions/solutions to achieve the following: A production job runs daily and creates a huge file with 'n' number of records. I want to use a utility (assuming SYNCSORT with COUNT) to learn the number of records in this file and want to split the file into equal output files (each output file should have 1,00,000 records). How do I achieve this dynamically if the record count varies on a daily basis? On a given day we may get 5,00,000 records and on another day we may get 8,00,000 records. So, depending on the count, I need to split the input file into 5 or 8 pieces for further processing. After this processing (suppose a COBOL program) I may again get 5 or 8 files. Please provide your suggestions/solutions/ideas for this problem. Please let me know if you need more inputs/details. Thanks, Raj -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Virtual tape limits (Was: OEM software electronic download report card)
Hate to interrupt with a question; this is a good thread going. The 3590s are getting huge. However, what are MVCs? MVCs are the same as IBM stacked volumes. Backend tapes for the Sun/STK VSM. MVC = Multi-Volume Cartridge. John Benik This e-mail, including attachments, may include confidential and/or proprietary information, and may be used only by the person or entity to which it is addressed. If the reader of this e-mail is not the intended recipient or his or her authorized agent, the reader is hereby notified that any dissemination, distribution or copying of this e-mail is prohibited. If you have received this e-mail in error, please notify the sender by replying to this message and delete this e-mail immediately. -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Virtual tape limits (Was: OEM software electronic download report card)
<snip> Probably works for us better than most, however, since we made a concerted effort over the last decade to keep people from using tape. So it's mainly just HSM and DB images at this point and it's under 25TB of data. This customer has around 5 petabytes of tape data. That's a lot of DASD - even with today's price per gigabyte. Some data typically needs to be kept for 10 years (or longer). I think tapes will still be around for a while yet. John -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Virtual tape limits (Was: OEM software electronic download report card)
On Mon, 14 May 2007 10:30:02 -0500, Jeffrey Deaver [EMAIL PROTECTED] wrote: Both will mirror the data on the back end for BCP purposes. Why even keep the RTDs? This stuff is inexpensive enough to keep all the tape data on spinning disk and fully mirrored. We're seriously thinking about it. Probably works for us better than most, however, since we made a concerted effort over the last decade to keep people from using tape. So its mainly just HSM and DB images at this point and its under 25TB of data. There is always talk of that, but we probably create at least 10TB (new) a day of just virtual tape. That number doesn't include physical tape (mostly HSM and TSM). I don't know the actual numbers, but I am going by how many MVCs we send off site each day and how full they are. Mark -- Mark Zelden Sr. Software and Systems Architect - z/OS Team Lead Zurich North America / Farmers Insurance Group: G-ITO mailto:[EMAIL PROTECTED] z/OS and OS390 expert at http://searchDataCenter.com/ateExperts/ Systems Programming expert at http://expertanswercenter.techtarget.com/ Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Virtual tape limits (Was: OEM software electronic download report card)
This customer has around 5 Petabytes of tape data. Yeah - That would be 320 of the Thumper devices just to house the primary copy. So the scale is good for little old us, but certainly not everyone. Jeffrey Deaver, Engineer Systems Engineering [EMAIL PROTECTED] 651-665-4231(v) 651-610-7670(p) -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Dynamic splitting of file
Raj, If your ultimate goal is to break up the one large file into multiple smaller ones, you can do this without COUNT. There are a couple of ways to do this. The easiest is probably using the SPLITBY parameter of the OUTFIL control statement. This should work with SyncSort as well as other sort products. The SPLITBY=n parameter writes groups of records in rotation among multiple output data sets and distributes multiple records at a time among the OUTFIL data sets. N specifies the number of records to split by. The following control statements will copy the first 1,00,000 (not sure if this is a typo or if it should be 1,000,000) to the first data set and the next 1,00,000 to the next data set, and so on. The only thing you need to be careful of is to allocate enough data sets. If you need 6 data sets but only allocate 5, the next group after the 5th, the one that starts with the 6,00,001st record, will be written to the first data set again and the rotation continues. If you allocate 6 data sets but only need 4, the 5th and 6th data sets will be empty. The control cards to do this are:

SORT FIELDS=COPY
OUTFIL FILES=(01,02,03,04,05,06,07,08),SPLITBY=100000

If you prefer to sort the data in addition to breaking it up, then replace the FIELDS=COPY with your sort control fields. You will need to allocate SORTOF01, SORTOF02, etc. Be sure to include a reference to each data set in the FILES= parameter of OUTFIL. If you would like further help with this please feel free to contact me directly. Sincerely, John Reda Software Services Manager Syncsort Inc. 201-930-8260 -Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Rajeev Vasudevan Sent: Monday, May 14, 2007 1:08 PM To: IBM-MAIN@BAMA.UA.EDU Subject: Dynamic splitting of file Hello, Hi, Please provide me your suggestions/solutions to achieve the following: A production job runs daily and creates a huge file with 'n' number of records. 
I want to use a utility (assuming SYNCSORT with COUNT) to know the 'n' number of records from this file and want to split the file into equal output files (each output file should have 1,00,000 records). How do I achieve this dynamically if the record count varies on a daily basis? On a given day we may get 5,00,000 records and on another day we may get 8,00,000 records. So, depending on the count, I need to split the input file into 5 or 8 pieces for further processing. After this processing (suppose a COBOL program) I may again get 5 or 8 files. Please provide your suggestions/solutions/ideas for this problem. Please let me know if you need more inputs/details. Thanks, Raj -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
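To make the SPLITBY rotation John describes concrete, here is a small Python sketch (purely illustrative, not SyncSort code): groups of n records are dealt in rotation across the output files, wrapping back to the first file once the last one has received a group - exactly the behavior to watch for if you under-allocate output data sets.

```python
def split_by(records, n, num_files):
    """Mimic OUTFIL SPLITBY=n: deal groups of n records in rotation
    across num_files outputs, wrapping back to the first file."""
    files = [[] for _ in range(num_files)]
    for i, rec in enumerate(records):
        files[(i // n) % num_files].append(rec)
    return files

# 8 records, groups of 2, only 3 output files: the 4th group
# wraps back to the first file, just as the post warns.
out = split_by(list(range(8)), n=2, num_files=3)
# out == [[0, 1, 6, 7], [2, 3], [4, 5]]
```

Note that file 1 ends up with non-contiguous records (0, 1, then 6, 7) once the rotation wraps.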
TTYGROUP missing at IPL time USS
Hi folks - we are a bit scant on USS knowledge at the moment and have a problem! The RACF group in TTYGROUP in BPXPRMxx does not exist. USS seems to have initialised OK, but we are having some problems with VPN access. Does anyone have an idea what the implications of having a missing group are? We are in the process of getting the RACF group recreated. Thanks Andrew Metcalfe -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: TTYGROUP missing at IPL time USS
-Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Andrew Metcalfe Sent: Monday, May 14, 2007 12:42 PM To: IBM-MAIN@BAMA.UA.EDU Subject: TTYGROUP missing at IPL time USS Hi folks - we are a bit scant on USS knowledge at the moment and have a problem! The RACF group in TTYGROUP in BPXPRMxx does not exist. USS seems to have initialised OK, but we are having some problems with VPN access. Does anyone have an idea what the implications of having a missing group are? We are in the process of getting the RACF group recreated. Thanks The only problem is that the UNIX processes will not have a GID. I don't know what problems this may cause. It depends on what files/directories have a group id with the missing GID in them and what accesses are allowed by that GID. -- John McKown Senior Systems Programmer HealthMarkets Keeping the Promise of Affordable Coverage Administrative Services Group Information Technology The information contained in this e-mail message may be privileged and/or confidential. It is for intended addressee(s) only. If you are not the intended recipient, you are hereby notified that any disclosure, reproduction, distribution or other use of this communication is strictly prohibited and could, in certain circumstances, be a criminal offense. If you have received this e-mail in error, please notify the sender by reply and delete this message without copying or disclosing it. -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Dynamic splitting of file
Set up a job that creates 1,000,000 records in a pass, then submits itself to intrdr if rc=0. -Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Rajeev Vasudevan Sent: Monday, May 14, 2007 12:08 PM To: IBM-MAIN@BAMA.UA.EDU Subject: Dynamic splitting of file Hello, Hi, Please provide me your suggestions/solutions to achieve the following: A production job runs daily and creates a huge file with 'n' number of records. I want to use a utility (assuming SYNCSORT with COUNT) to know the 'n' number of records from this file and want to split the file into equal output files (each output file should have 1,00,000 records). How do I achieve this dynamically if the record count varies on a daily basis? On a given day we may get 5,00,000 and on another day we may get 8,00,000 records. So, depending on the count I need to split the input file into 5 or 8 pieces for further processing. After this processing (suppose a COBOL program) I may again get 5 or 8 files. Please provide your suggestions/solutions/ideas to this problem. Please let me know if you need more inputs/details Thanks, Raj -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Virtual Tape Stacking Software
Lizette, With CA-1 as your tape management system, of course the recommended tape stacking software is CA-1/CopyCat. This will work very well in terms of taking hundreds of virtual volumes and stacking them on a physical cartridge for DR purposes. Russell Witt CA-1 Level-2 Support Manager From: Lizette Koehler [EMAIL PROTECTED] Date: 2007/05/14 Mon AM 08:13:04 CDT To: IBM-MAIN@BAMA.UA.EDU Subject: Virtual Tape Stacking Software Gentle Listers - I have searched the archives and found a lot of discussion but nothing I was looking for. We currently have Open Tech to stack virtual tapes on one physical 3590 tape (VDR). Are there any other equivalents, or is Open Tech it? We are running z/OS V1.7 with DFSMShsm. We also have DFDSS and a couple of CA products (CA1). We need to compare the Open Tech process with other vendor(s) to ensure we have the right tool for the right function. Any ideas will be appreciated. Lizette -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Catalog and APF
Some things that may point you in the right direction:

D PARMLIB
D PROG,APF (assuming you are using PROGxx)
MXI's PARM command
ISPF command DDLIST (and its APF subcommand)

Don Imbriale -Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Daniel McLaughlin Sent: Monday, May 14, 2007 12:52 PM To: IBM-MAIN@BAMA.UA.EDU Subject: Catalog and APF Tried to bring up 1.7 this weekend in PRODUCTION and ran into major issues. While climbing back through the logs we discover things like the network trying to come up using SYS1.SISTCLIB, which is normal, except it and other tasks are showing APF issues. Hm. It's in there. 1. New volumes cloned from installation set 2. Using VOL(XX) parameter in APF and LNKLST members 3. Smoke and flames at IPL. Output from failing task shows library pointing back to the installation set. ?? I must have really botched the cloning process and am really baffled about the whole thing. Didn't we use to clone like that and life was good? Or did we clone and clip the new volumes back? Install volumes: xxxMS1, xxxMS2, xxxMS3 Clones: xxxPS1, xxxPS2, xxxPS3 Thank you all...my boss is getting itchy and it's review time..gulp! *** Bear Stearns is not responsible for any recommendation, solicitation, offer or agreement or any information about any transaction, customer account or account activity contained in this communication. *** -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Dynamic splitting of file
Another technique can be to select records based on some data, such as the last digit or next-to-last digit of an account number or employee ID, and parse them to separate files based on the digit or range of digits in that spot. This has the advantage of not needing to know any counts of records at all. Darren -Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Rajeev Vasudevan Sent: Monday, May 14, 2007 10:08 AM To: IBM-MAIN@BAMA.UA.EDU Subject: Dynamic splitting of file Hello, Hi, Please provide me your suggestions/solutions to achieve the following: A production job runs daily and creates a huge file with 'n' number of records. I want to use a utility (assuming SYNCSORT with COUNT) to know the 'n' number of records from this file and want to split the file into equal output files (each output file should have 1,00,000 records). How do I achieve this dynamically if the record count varies on a daily basis? On a given day we may get 5,00,000 and on another day we may get 8,00,000 records. So, depending on the count I need to split the input file into 5 or 8 pieces for further processing. After this processing (suppose a COBOL program) I may again get 5 or 8 files. Please provide your suggestions/solutions/ideas to this problem. Please let me know if you need more inputs/details Thanks, Raj -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
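Darren's count-free approach can be sketched in a few lines of Python (illustrative only; the account numbers and key function are made up for the example): each record is routed to one of ten buckets by the last digit of its key, so no record count is needed up front.

```python
def split_by_last_digit(records, key_of):
    """Route records to 10 buckets by the last digit of their key,
    so the split needs no advance knowledge of the record count."""
    buckets = {d: [] for d in range(10)}
    for rec in records:
        buckets[int(key_of(rec)[-1])].append(rec)
    return buckets

# Hypothetical account-number records; the key is the record itself.
accounts = ["ACCT1007", "ACCT2013", "ACCT0147"]
out = split_by_last_digit(accounts, key_of=lambda r: r)
# Records ending in 7 land together, whatever the daily volume.
```

The trade-off, as with any hash-style split, is that the buckets are only approximately equal in size rather than exactly 1,00,000 records each.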
Re: Catalog and APF
Daniel, Are you using indirect cataloging? It sounds like your MASTERCAT is pointing to the xxxMS* packs. Use symbolics in the LISTCAT - IPLRS1, IPLRS2, etc. Kevin -Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Daniel McLaughlin Sent: Monday, May 14, 2007 12:52 PM To: IBM-MAIN@BAMA.UA.EDU Subject: Catalog and APF Output from failing task shows library pointing back to the installation set. ?? Install volumes: xxxMS1, xxxMS2, xxxMS3 Clones: xxxPS1, xxxPS2, xxxPS3 *** Bear Stearns is not responsible for any recommendation, solicitation, offer or agreement or any information about any transaction, customer account or account activity contained in this communication. *** -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Virtual tape limits (Was: OEM software electronic download report card)
On Mon, 14 May 2007 10:30:02 -0500, Jeffrey Deaver [EMAIL PROTECTED] wrote: Why even keep the RTDs? This stuff is inexpensive enough to keep all the tape data on spinning disk and fully mirrored. We're seriously thinking about it. Probably works for us better than most, however, since we made a concerted effort over the last decade to keep people from using tape. So its mainly just HSM and DB images at this point and its under 25TB of data. An interesting article on this subject: Tape and Disk Costs - What It really Costs to Power the Equipment http://www.clipper.com/research/TCG2007014.pdf From the article: Key Findings 1. SATA disk system has nearly 26 times higher energy costs than tape system. 2. SATA disk system acquisition costs about 6.5 times the cost of automated tape system. 3. Assuming electrical rates remain same, the cost to acquire, power and cool disk systems for five years is almost 8 times the cost to acquire, power, and cool automated tape systems. 4. The cost to power and cool equipment must be part of the TCO. -- Mark Zelden Sr. Software and Systems Architect - z/OS Team Lead Zurich North America / Farmers Insurance Group: G-ITO mailto:[EMAIL PROTECTED] z/OS and OS390 expert at http://searchDataCenter.com/ateExperts/ Systems Programming expert at http://expertanswercenter.techtarget.com/ Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Top 10 software install gripes
Bruce, I don't want to go up against you in a DASD knowledge battle (I know I would lose), but my copy of IBM 3390 Direct Access Storage Reference Summary, GX26-4577-2, August 1990, Table 2 (3390 mode), page 10, says that 255 to 288 byte blocks with keys of 1-22 bytes are max 45 to the track (not 46), so wouldn't the formula be 44+(45*n) as was mentioned in a prior reply? You win! (this time). The 3380 had 46 directory blocks per track, while the 3390 holds only 45. -- Bruce A. Black Senior Software Developer for FDR Innovation Data Processing 973-890-7300 personal: [EMAIL PROTECTED] sales info: [EMAIL PROTECTED] tech support: [EMAIL PROTECTED] web: www.innovationdp.fdr.com -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
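If, per the figures above, a 3390 track holds 45 directory blocks, the arithmetic behind the multiple-of-45 gripe is easy to sketch (a hypothetical Python helper, assuming the commonly cited behavior that directory space beyond the requested block count on the last allocated track is unusable): any request that is not a multiple of 45 strands block slots on the final directory track.

```python
import math

def stranded_blocks(requested, blocks_per_track=45):
    """Directory block slots that fit on the allocated 3390 tracks
    but were not requested (45 directory blocks per 3390 track)."""
    tracks = math.ceil(requested / blocks_per_track)
    return tracks * blocks_per_track - requested

# Requesting 46 blocks forces a second track and strands 44 slots;
# requesting 90 fills two tracks exactly and strands none.
# stranded_blocks(46) -> 44, stranded_blocks(90) -> 0
```

Which is why rounding a directory request up to the next multiple of 45 costs nothing extra on 3390-geometry DASD.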
Re: TTYGROUP missing at IPL time USS
On Mon, 14 May 2007 12:41:48 -0500, Andrew Metcalfe [EMAIL PROTECTED] wrote: Hi folks - we are a bit scant on USS knowledge at the moment and have a problem! The RACF group in TTYGROUP in BPXPRMxx does not exist. USS seems to have initialised OK, but we are having some problems with VPN access. Does anyone have an idea what the implications of having a missing group are? We are in the process of getting the RACF group recreated. A missing group... not sure. This particular one... not being able to get a login shell? The OMVS list would probably be a better place to ask. -- Mark Zelden Sr. Software and Systems Architect - z/OS Team Lead Zurich North America / Farmers Insurance Group: G-ITO mailto:[EMAIL PROTECTED] z/OS and OS390 expert at http://searchDataCenter.com/ateExperts/ Systems Programming expert at http://expertanswercenter.techtarget.com/ Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Happy Feet (was: Microsoft Claims It All)
On Mon, 14 May 2007 09:29:28 -0500, Eric Bielefeld wrote: If on the off chance that MS would win, how would that affect z/OS users? I can see that it might help. People might get pissed off enough at MS, and convert to a mainframe. (Oh well, it's a good thought). Or, companies faced with the specter of such MS predation might find incentive to relocate operations to countries with more rational laws concerning software intellectual assets. In a word, offshore. -- gil -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Happy Feet (was: Microsoft Claims It All)
On May 14, 2007, at 2:08 PM, Paul Gilmartin wrote: Or, companies faced with the specter of such MS predation might find incentive to relocate operations to countries with more rational laws concerning software intellectual assets. In a word, offshore. Scary thought, gil. IBM is moving to INDIA, so are they really afraid of MS? Ed -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Dynamic splitting of file
The easiest is probably using the SPLITBY parameter of the OUTFIL control statement. This should work with either SyncSort or other sort products. The SPLITBY=n parameter writes groups of records in rotation among multiple output data sets and distributes multiple records at a time among the OUTFIL data sets. N specifies the number of records to split by. The following control statements will copy the first 1,00,000 (not sure if this is a typo or if it should be 1,000,000) to the first data set and the next 1,00,000 to the next data set, and so on. The only thing you need to be careful of is to allocate enough data sets. If you need 6 data sets but only allocate 5, the next group after the 5th, the one that starts with the 6,00,001st record, will be written to the first data set again and the rotation continues. ... Raj, If you have access to DFSORT, you can use its SPLIT1R=n parameter instead of SPLITBY=n. Whereas SPLITBY=n rotates the records back to the first output file (which is not desirable in your case), SPLIT1R=n will continue to write the extra records to the last output file so each output file will have contiguous records from the input file. For complete details on DFSORT's SPLIT1R=n parameter, see: www.ibm.com/servers/storage/support/software/sort/mvs/peug/ For another way to split the records evenly and contiguously, see the Split a file to n output files dynamically Smart DFSORT Trick at: http://www.ibm.com/servers/storage/support/software/sort/mvs/tricks/ Frank Yaeger - DFSORT Development Team (IBM) - [EMAIL PROTECTED] Specialties: PARSE, JFY, SQZ, ICETOOL, IFTHEN, OVERLAY, Symbols, Migration = DFSORT/MVS is on the Web at http://www.ibm.com/storage/dfsort/ -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
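The difference Frank describes can be sketched in Python (an illustration of the documented behavior, not DFSORT code): SPLIT1R=n keeps each output contiguous and sends any overflow to the last file instead of wrapping back to the first.

```python
def split1r(records, n, num_files):
    """Mimic OUTFIL SPLIT1R=n: first n records to file 1, next n to
    file 2, and so on; any extra records stay on the last file."""
    files = [[] for _ in range(num_files)]
    for i, rec in enumerate(records):
        files[min(i // n, num_files - 1)].append(rec)
    return files

# 8 records, n=3, 2 files: the overflow stays on the last file,
# so every output holds one contiguous slice of the input.
out = split1r(list(range(8)), n=3, num_files=2)
# out == [[0, 1, 2], [3, 4, 5, 6, 7]]
```

Contrast this with SPLITBY, where the same input would wrap the third group back to the first file and break contiguity.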
Re: Microsoft Claims It All
On Mon, 14 May 2007 10:43:54 -0500, Staller, Allan [EMAIL PROTECTED] wrote: snip Or management might see that MS really does know the only way to use computers and finally get rid of all of the dinosaurs. /snip Or even boot Microsnot (not a typo) out and run Linux on desktops/servers (and keep the dinosaurs) G And of course there is the older patent (1998) by M$, as described in http://www.theonion.com/content/node/29130. -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: OUTPUT JCL: Concatenation of a Symbolic Variable to quoted Text in TITLE= Parameter
On 2007-05-13 18:00, Beat Gossweiler [EMAIL PROTECTED] wrote about 'OUTPUT JCL: ' to IBM-Main: I would like to add a value passed by a symbolic variable to a pre-coded text enclosed in apostrophes in the TITLE= parameter of the OUTPUT JCL-Statement. The OUTPUT-Statement is contained in a procedure, the symbolic variable is to be passed by the caller of the procedure. The fixed part of the TITLE= text is only known within the procedure, the variable part of the text is only known by the caller of the proc. I can concatenate a variable to a non-quoted text, but not to a quoted one. But I need to have characters within the fixed part of the text which require enclosure in apostrophes. Does anybody have an idea how this could be achieved? Example: [snip] Beat: does this help? It's not in a PROC but it does illustrate that you have to include the apostrophes within the variables to get the proper value for TITLE=.

//TOPIC   SET TOPIC='''Natural Web Services (DV) response times - '
//PERIOD  SET PERIOD='current day'''
//EDRESS  SET EDRESS='''[EMAIL PROTECTED]'''
//*
//SDSF    OUTPUT DEST=VNR557,CLASS=*
//SYSGRP  OUTPUT DEST=EMAIL,CLASS=A,
//        FORMS=STD,            plain .txt attachment
//        NAME=&EDRESS,
//        ADDRESS=('[EMAIL PROTECTED]'),
//        TITLE=&TOPIC.&PERIOD

-- signature = 6 lines follows -- Neil Duffee, Joe SysProg, U d'Ottawa, Ottawa, Ont, Canada telephone:1 613 562 5800 x4585 fax:1 613 562 5161 mailto:NDuffee of uOttawa.ca http://aix1.uottawa.ca/~nduffee How *do* you plan for something like that? Guardian Bob, Reboot For every action, there is an equal and opposite criticism. Systems Programming: Guilty, until proven innocent John Norgauer 2004 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Virtual tape limits (Was: OEM software electronic download report card)
On Mon, 14 May 2007 13:35:24 -0500, Mark Zelden wrote: An interesting article on this subject: Tape and Disk Costs - What It really Costs to Power the Equipment http://www.clipper.com/research/TCG2007014.pdf From the article: Key Findings 1. SATA disk system has nearly 26 times higher energy costs than tape system. 2. SATA disk system acquisition costs about 6.5 times the cost of automated tape system. 3. Assuming electrical rates remain same, the cost to acquire, power and cool disk systems for five years is almost 8 times the cost to acquire, power, and cool automated tape systems. 4. The cost to power and cool equipment must be part of the TCO. I listened to a webcast from Clipper talking about this same subject. It looks like an audio presentation of this same paper. The power costs for the disk array in the paper are in line with what the webcast stated. I'm having a hard time finding a way to prove a $110K annual power bill. What I'm finding in my research is the cost to power a 3584 automated tape library is about 25% of what it takes to power a 17TB disk array. But what they said in the webcast is the tape library had annual power costs around $5K with the disk array at $110K. With 95% less power consumption relating to a $100K/year savings, I thought I was going to be buying an automated tape library. Reality is a lot different. Approximately 1.5KW for a 3584 vs approximately 5KW for a DS8100. Not that this isn't important, but the savings aren't as appealing as first presented. -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
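The poster's sanity check is simple arithmetic; here is a Python version (the $0.10/kWh rate is an assumption for illustration, not a figure from the Clipper paper): at roughly 1.5 kW for the tape library versus 5 kW for the disk array, the annual power gap is a few thousand dollars, nowhere near $100K.

```python
HOURS_PER_YEAR = 24 * 365  # 8760 hours

def annual_power_cost(kw, dollars_per_kwh=0.10):
    """Annual energy cost for a device drawing kw continuously.
    The electricity rate is an assumed figure for illustration."""
    return kw * HOURS_PER_YEAR * dollars_per_kwh

tape = annual_power_cost(1.5)  # ~ $1,314/yr for a 3584-class library
disk = annual_power_cost(5.0)  # ~ $4,380/yr for a DS8100-class array
# The gap is ~ $3K/yr - real, but far from the $100K/yr claimed.
```

Cooling roughly doubles these numbers in many data centers, but even doubled the savings stay in the thousands, which matches the poster's "reality is a lot different" conclusion.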
Re: Question on DFP
Mark Zelden wrote: snip It hasn't been done since OS/390, but you were able to order and install a higher level of DFSMS than the DFSMS level that was originally distributed with the OS/390 level at the time (all in a supported configuration). My memory is a little fuzzy on this one, but I think it was OS/390 DFSMS 1.5 that could be ordered and installed on OS/390 2.6 - but OS/390 2.6 came with OS/390 DFSMS 1.4. I don't think I've ever seen a statement that indicates IBM won't do something like that again with z/OS. snip You haven't seen one that says we will, either (grin). Back then, DFSMSdfp was a nonexclusive element with its own product number so that it could be ordered separately. Now, it's not. -- John Eells z/OS Technical Marketing IBM Poughkeepsie [EMAIL PROTECTED] -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Question on DFP
On Mon, 2007-05-14 at 10:15 -0400, Walt Farrell wrote: Thus, it makes little, if any, sense to ask what release of DFSMS or RACF you have. Rather, you ask "what release of z/OS do I have?" Generally when I have been asked a question like this it is because a *vendor* asked - and won't accept the answer. Most of them via ETR with IBM. In fairness, the frequency (of the question) has dropped to almost zero in the last couple of years. The other occasion(s) it's along the lines of "we have 1.7; where are the manuals for insert component - the DVD only has [1.4|1.5|1.6]???" Shane ... -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Virtual tape limits (Was: OEM software electronic download report card)
So at that point the extra cost (beyond in-house recovery) is shipping the duplex MVCs off site and storing them. We found an easy way to store them, in a previous life. Both sites had two libraries: Production, and the back-up from the other site (GDPS). - Too busy driving to stop for gas! -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Dynamic splitting of file
Raj, These options handle the splitting for you, but you will need to search the posts about dynamically allocating datasets, since the final process you design will need to keep allocating additional datasets based on the input. For a non-production, or an 'I really do not care about performance' solution, I would turn to REXX, where I can allocate a new dataset, start reading the input, and write however many records I want to the output file, close it, allocate a new dataset and continue reading and writing records to this dataset until I hit my limit, allocate another new dataset ... until EOF is reached. Existing posts explain how to do this in COBOL, too. On Mon, 14 May 2007 12:16:53 -0700, Frank Yaeger [EMAIL PROTECTED] wrote: The easiest is probably using the SPLITBY parameter of the ... Raj, If you have access to DFSORT, you can use its SPLIT1R=n parameter instead of SPLITBY=n. Whereas SPLITBY=n rotates the records back to the first output file (which is not desirable in your case), SPLIT1R=n will continue to write the extra records to the last output file so each output file will have contiguous records from the input file. -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Question on DFP
Mark Zelden wrote: snip I don't think I've ever seen a statement that indicates IBM won't do something like that again with z/OS. snip John Eells wrote: You haven't seen one that says we will, either (grin). Back then, DFSMSdfp was a nonexclusive element with its own product number so that it could be ordered separately. Now, it's not. I started using IBM operating systems around 1980 and was taught to expect change. My mentor told me, 'IBM goes through cycles of bundling products together and then unbundling them.' When I first installed DB2 V1R2 it came with utilities that are now separately orderable. Even if I read a statement that declared they do not plan to unbundle or bundle something, plans and statements can change. The good news for me, with z/OS, is how much easier it is to order. And since I am not looking to pick and choose my own levels, I happily install what comes on ServerPac. -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Virtual tape limits (Was: OEM software electronic download report card)
Mark, A few problems I see with this article: 1) They purchased the tape libraries based on 2:1 compression and the SATA with zero compression. Virtual tape software will compress the data on disk at the same rate as tape. 2) Where did they get this disk storage from? The whole first-year requirement can be supported by a single midrange controller from HDS or EMC (and others) - even less when you take compression into account. 48TB per CU does not represent current technology. 3) Tape needs redundant copies to protect against failure - disk uses RAID. They neglected to allow for the redundant copies of data within a tape-based library. My gut feeling is that over 30% of tape data consists of redundant copies to protect against media failure. You eliminate those copies when you use disk. 4) Many RAID-5 implementations DO NOT require 20% more disk capacity. For example, 7D+P requires 14% more disk, and 15D+P requires 7% more disk. And BTW, I don't agree with 70% usage in a virtual tape disk pool. A fair comparison should use the same compression on disk, and allow for elimination of redundant copies. Using the following, I come up with just 93TB of raw disk in RAID-5 required to replace a 163TB tape library:

Current tape data   163TB
2:1 compression      82TB
70% used space      116TB
RAID-5 7D+P         133TB
Less 30% dupes       93TB

Even if you don't clean up redundant copies, 133TB is still a damn sight less than 232TB. A single midrange controller from HDS or EMC would handle 2-3 years of growth in this example in just 1 or 2 racks. Then there is the cost of maintaining data on tape. Another post mentioned 5PB of tape: 1) Are they locked into keeping a museum of tape drive technology so they can read tapes created 5 years ago? What's the maintenance on those? 2) How does one check the condition of tapes regularly in a 5PB library? Disk drives have S.M.A.R.T. and suchlike; tapes have nothing. 3) Virtual Tape Libraries on disk are self-healing on the fly. 
A disk drive failure of any magnitude does not cause an outage due to RAID. The Recovery Time of a failed MVC is anyone's guess, depending on where the duplicate copy of the data is located. Unfortunately I don't think this article provides a fair comparison of tape and SATA costs in a mainframe environment. Ron An interesting article on this subject: Tape and Disk Costs - What It really Costs to Power the Equipment http://www.clipper.com/research/TCG2007014.pdf From the article: Key Findings 1. SATA disk system has nearly 26 times higher energy costs than tape system. 2. SATA disk system acquisition costs about 6.5 times the cost of automated tape system. 3. Assuming electrical rates remain same, the cost to acquire, power and cool disk systems for five years is almost 8 times the cost to acquire, power, and cool automated tape systems. 4. The cost to power and cool equipment must be part of the TCO. -- Mark Zelden Sr. Software and Systems Architect - z/OS Team Lead Zurich North America / Farmers Insurance Group: G-ITO mailto:[EMAIL PROTECTED] z/OS and OS390 expert at http://searchDataCenter.com/ateExperts/ Systems Programming expert at http://expertanswercenter.techtarget.com/ Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
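Ron's capacity chain can be checked with a few lines of Python, using exactly the assumptions he states (2:1 compression, a 70%-full disk pool, 7D+P RAID-5 overhead, and 30% of tape data being redundant copies):

```python
def disk_to_replace_tape(tape_tb, compression=2.0, used_fraction=0.70,
                         data_disks=7, dupe_fraction=0.30):
    """Raw RAID-5 disk needed to replace a tape library, using the
    assumptions stated in the post (all parameters adjustable)."""
    compressed = tape_tb / compression            # 163 -> 81.5
    pool = compressed / used_fraction             # allow 70% fill
    raw = pool * (data_disks + 1) / data_disks    # 7D+P overhead
    return raw * (1 - dupe_fraction)              # drop redundant copies

# round(disk_to_replace_tape(163)) -> 93, matching the post's table;
# with dupe_fraction=0.0 the result rounds to 133, the "no cleanup" case.
```

Running the numbers confirms both figures in the post: 93TB after eliminating redundant copies, 133TB if you keep them.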
Re: OUTPUT JCL: Concatenation of a Symbolic Variable to quoted Text in TITLE= Parameter
On Mon, 14 May 2007 16:23:25 -0400, Neil Duffee wrote:

Beat: does this help? It's not in a PROC, but it does illustrate that you have to include the apostrophes within the variables to get the proper value for TITLE=.

//TOPIC    SET TOPIC='''Natural Web Services (DV) response times - '
//PERIOD   SET PERIOD='current day'''
//EDRESS   SET EDRESS='''[EMAIL PROTECTED]'''
//*
//SDSF     OUTPUT DEST=VNR557,CLASS=*
//SYSGRP   OUTPUT DEST=EMAIL,CLASS=A,
//         FORMS=STD,                    plain .txt attachment
//         NAME=&EDRESS,
//         ADDRESS=('[EMAIL PROTECTED]'),
//         TITLE=&TOPIC.&PERIOD

OK. Here's the rule, from z/OS V1R7.0 MVS JCL Reference, 5.4.4.3 "Coding Symbols in Apostrophes":

You can code symbols in apostrophes on the following keywords:
* The DD statement AMP parameter
* The DD statement PATH parameter
* The DD statement SUBSYS parameter
* The EXEC statement ACCT parameter
* The EXEC statement PARM parameter

If you need your TOPIC and PERIOD symbols elsewhere without the apostrophes, it may be convenient to define the apostrophe itself as a symbol, somewhat as follows:

//         SET Q=''''
//         SET B=' '
//         SET FOO='Foo'
//         SET BAR='Bar'
//*
//STEP     EXEC PGM=IEFBR14
//*
//SYSUT1   DD UNIT=SYSALLDA,SPACE=(TRK,0),
//         DSN=&Q.&FOO.&B.&BAR.&Q

IEFC653I SUBSTITUTION JCL - UNIT=SYSALLDA,SPACE=(TRK,0),DSN='Foo Bar'
ICH70001I SPPG LAST ACCESS AT 18:32:37 ON MONDAY, MAY 14, 2007
IEF236I ALLOC. FOR SYMBOLS STEP
IGD100I 3CF1 ALLOCATED TO DDNAME SYSUT1 DATACLAS ()
IEF142I SYMBOLS STEP - STEP WAS EXECUTED - COND CODE 0000
IEF285I Foo Bar DELETED
IEF285I VOL SER NOS= WORK01.

Note the substitutions performed, and that the job step ran without error.

But I can't help wondering: why the inconsistency? Were the JCL designers merely intent on creating extra work for themselves and inconveniencing the customer and the tech writer by not simply substituting symbols between apostrophes uniformly in all keyword values, rather than only in five exceptional cases? Conway's Law?
Each keyword parser developer invented his own rules, and jointly they didn't use common code to parse keyword values? That's just bad organization; inefficient; amateurish.

-- gil
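Under the same substitution rules, the apostrophe-as-symbol trick could be applied back to the original TITLE= case, so that TOPIC and PERIOD stay reusable without embedded apostrophes. This is an untested sketch, not Neil's actual JCL:

```
//         SET Q=''''
//         SET TOPIC='Natural Web Services (DV) response times - '
//         SET PERIOD='current day'
//*
//SYSGRP   OUTPUT DEST=EMAIL,CLASS=A,
//         TITLE=&Q.&TOPIC.&PERIOD.&Q
```

After substitution this should yield TITLE='Natural Web Services (DV) response times - current day', with the quoting supplied by &Q at each end rather than baked into the symbol values.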
Re: Data Areas Manuals to be dropped
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] writes:
What is 'OCO'? Thanks

there were several OCO-wars threads/discussions on vmshare. it was somewhat more of an issue in vm culture ... since source maintenance was standard and an extensive amount of customer source changes was available from the waterloo/share library.

tymshare had provided online computer conferencing for share, called vmshare, starting in the mid-70s; in part because tymshare offered a vm-based commercial timesharing service (later tymshare would also offer pcshare online computer conferencing) ... lots/misc posts about vm-based online commercial timesharing services:
http://www.garlic.com/~lynn/subtopic.html#timeshare

vmshare archives here:
http://vm.marist.edu/~vmshare/

following are samples found by doing a search on "oco war" in browse mode against all memo, note, and prob files.

OCO's 10th b'day
http://vm.marist.edu/~vmshare/browse?fn=OCO:BDAY&ft=MEMO

OCO source business
http://vm.marist.edu/~vmshare/browse?fn=OCOBUS&ft=MEMO

the issue sort of dates back to the 23jun69 unbundling announcement, with the start of charging for application software. misc. past posts mentioning unbundling:
http://www.garlic.com/~lynn/subtopic.html#unbundle

initially only application software was charged for ... using the excuse that kernel/system software was required for operation of the hardware. later, various circumstances precipitated the decision to start charging for system software. this was about the time that my resource manager was going to be released ... so it got selected as the initial guinea pig for policy/practices related to kernel software charging.
http://www.garlic.com/~lynn/subtopic.html#fairshare

the change to charging for software eventually also evolved into Object-Code-Only (i.e. OCO, no source). recent post also mentioning that the 23jun69 unbundling announcement resulted in the start of charging for SE services:
http://www.garlic.com/~lynn/2007j.html#65 Help settle a job title/role debate
Re: Shared HFS
I want to prepare a monoplex system for an easy migration into a sysplex. So I want to implement Shared HFS without being in a sysplex, but the IPL failed when starting USS. My question is: can I use the VERSION parameter while I'm using SYSPLEX(NO)?

Depends what you mean by Shared HFS. If you want to use shared file system support as IBM describes it in the USS Planning manual, it has sysplex scope. End of story. Otherwise you'd better be mounting it R/O everywhere. CA-MIM may give some relief here; don't know, never tried it.

Shane ...
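For what it's worth, my understanding of the doc is that VERSION is only honored together with SYSPLEX(YES), and that a single image can run that way as a one-system sysplex (monoplex), which may be exactly the stepping stone wanted here. A hypothetical BPXPRMxx fragment (dataset name is made up, untested):

```
/* BPXPRMxx sketch - illustrative name, untested          */
SYSPLEX(YES)                  /* shared FS has sysplex scope */
VERSION('&SYSR1.')            /* version root per sysres     */
ROOT FILESYSTEM('OMVS.SYSPLEX.ROOT')
     TYPE(ZFS) MODE(RDWR)     /* the sysplex root            */
```

With SYSPLEX(NO), as I read it, the VERSION statement simply isn't used, which would match the IPL behavior described above.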
Fwd: Macworld: News: Analysis: Microsoft patent claims hint at internal issues
Begin forwarded message:

http://www.macworld.com/news/2007/05/14/patentanalysis/index.php?lsrc=mwrss
Re: Shared HFS
Shared HFS is a pretty strange thing. It really isn't like anything else in the MVS world as far as I can tell.

1. You have to be in some sort of sysplex. I am pretty sure that XCF signaling will be enough to get you up and running.
2. You have to have the HFS sharing data set set up.
3. Everything is shared once HFS sharing is turned on, which means that you really need to RTFM... there are no shortcuts on this one.

Personally, I think IBM is missing the boat here. I keep meaning to open up a change request... but HFS sharing is not very flexible. Once you turn it on for a couple of systems, they are linked. I don't have just one SYS1.PARMLIB or just one SYS1.PROCLIB (although we try to keep it down to just a couple). I had really wanted to be able to share things where it made sense... like between a couple of systems that were production, or the /etc between multiple systems (have dev share, have prod share) but not all of them (I have a plex with 2 tech systems, 2 dev systems, and 9 prod systems).

What I ended up doing is mounting all the binary stuff read-only and sharing it. Every other file system is done system by system. I would really like ZFS/HFS file systems to be managed in some sort of grouping... maybe more like the JES2 XCF group, where we give it a name to keep them all separate within the same plex. Or maybe RLS-type sharing, where I can actually define what I want to share and what I don't.
Here is an example:

File system                       Dataset                       Mounted
/opt/fitb/tomcatdb                (OMVSU.TECH.TOMCATDB)         Read, Write
/opt/fitb/jspwiki/sysprog         (OMVS.TECH.JSPWIKI.SYSPROG)   Read, Write
/opt/fitb/jspwiki                 (OMVS.TECH.JSPWIKI)           Read, Write
/opt/fitb/jakarta-tomcat-5.0.28   (OMVS.TECH.TOMCAT)            Read, Write
/usr/blk                          (OMVS.TEC1.BLK.ESSIP)         Read, Write
/usr/lpp/etldapv3                 (OMVS.TEC1.ETLDAPV3)          Read, Write
/usr/opt                          (OMVS.TEC1.USR.OPT)           Read, Write
/usr/local                        (OMVS.TEC1.USR.LOCAL)         Read, Write
/opt/fitb                         (OMVS.TEC1.OPT.FITB)          Read, Write
/usr/mail                         (OMVS.TEC1.MAIL)              Read, Write
/u                                (OMVS.TEC1.HOME)              Read, Write
/SYSTEM/var                       (OMVS.TEC1.VAR.ZFS)           Read, Write
/SYSTEM/etc                       (OMVS.TEC1.ETC.ZFS)           Read, Write
/usr/lpp/perl                     (OMVS.RST01A.SPRLHFS)         Read Only
/usr/lpp/ixm                      (OMVS.RST01A.XML)             Read Only
/usr/lpp/ing                      (OMVS.RST01A.SINGHFS)         Read Only
/usr/lpp/cobol                    (OMVS.RST01A.SIGYROOT)        Read Only
/usr/lpp/TWS/V8R3M0               (OMVS.RST01A.SEQQHFS)         Read Only
/usr/lpp/zWebSphere/V6R1          (OMVS.RST01A.SBBOHFS)         Read Only
/usr/lpp/netview                  (OMVS.RST01A.NETVHFS)         Read Only
/usr/lpp/mqm                      (OMVS.RST01A.MQS)             Read Only
/usr/lpp/java/J5.0                (OMVS.RST01A.JAVA31V5)        Read Only
/usr/lib/ssh                      (OMVS.RST01A.AFOROOT)         Read Only
/                                 (OMVS.RST01A.ROOT)            Read Only
/SYSTEM/tmp                       (/TMP)                        Read, Write
/SYSTEM/dev                       (/DEV)                        Read, Write

All the file systems are zFS, with the exception of /tmp and /dev, which are TFS. All the read-only ones are on the RES pack as part of ServerPac.

Ahh yes, wishful thinking... all without those pesky realities...

Rob Schramm
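Rob's mixed layout (shared read-only binaries, per-system read-write /etc and /var) maps naturally onto BPXPRMxx MOUNT statements. A hypothetical sketch, borrowing two dataset names from his table; the AUTOMOVE/UNMOUNT choices are my assumption about intent, not his actual parmlib:

```
/* Illustrative BPXPRMxx fragment - not Rob's real setup        */
MOUNT FILESYSTEM('OMVS.RST01A.JAVA31V5')  /* shared binaries    */
      MOUNTPOINT('/usr/lpp/java/J5.0')
      TYPE(ZFS) MODE(READ)
      AUTOMOVE                 /* stays available if owner IPLs */
MOUNT FILESYSTEM('OMVS.TEC1.ETC.ZFS')     /* per-system /etc    */
      MOUNTPOINT('/SYSTEM/etc')
      TYPE(ZFS) MODE(RDWR)
      UNMOUNT                  /* goes away with its system     */
```

The read-only/AUTOMOVE pair gives the plex-wide sharing of binaries, while RDWR/UNMOUNT keeps the system-specific file systems tied to their owning image.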