Re: my free mainframe product and question?
More fixes to my product. Please download the new installation files from my site, www.mfnetdisk.com. Some people have asked me how much the product costs. I really do not know right now; in general, I am not sure whether I will charge for this product in the future. If one day a big company like IBM, EMC, or HDS wants to buy my product, that is fine with me, but for now just play with it. Please let me know what the product needs to become more attractive. If anyone wants free access to more devices, please let me know. I would also like to open a discussion: is it acceptable to MF customers that MF data will be accessed over TCP and stored on a PC? Thanks, Shai -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
DFSORT SYMNAMES: soft lengths possible?
Anyone, I have a DFSORT question regarding symbol definitions in SYMNAMES. I did check the manual, and I don't think that what I want to do is possible, but here goes... I have the following symbol defined in a sort record: LAST_NAME,1,20,CH I'm using that symbol to BUILD that field into two different sort record formats (A and B). The field appears in different columns in records A and B, so I have these symbols defined: A_LAST_NAME,*,20,CH and B_LAST_NAME,*,20,CH I can't use the equals sign to denote "use the previous length" within records A and B. What I'm wondering is whether there's a way I can assign a length to a symbol and then use that symbol as a constant length throughout SYMNAMES. I actually did try that, but it failed. It appears that I have no choice but to replicate the constant length of 20 for every occurrence of the last name (assuming that I never want it truncated) in every record in which it appears. Am I missing something? Thanks so much. David
Re: Scotts new role
I thought of retitling this post "In Defence of CA," but then I didn't. I think CA deserves some of its bad reputation, especially in the pricing area. I know when I worked at PH, they tried to make us buy a new license for a database product when a sister company of ours moved their processing from their datacenter to ours. I don't think we paid CA anything for over a year while this was in dispute. We finally settled it by signing a 5-year agreement with CA. They threw in several free products also; more about that later. They are willing to negotiate, and the final deal was much cheaper than what we were paying for the individual products. Most of the old-line products CA has are very well supported. With CA-1, CA-7, and CA-11 I always got very good support. We bought them when they were still Uccel. Panvalet also had very good support. There was only one product that we really had problems with. One of the free products mentioned above was a product for allocating more buffers for sequential and VSAM files. It worked OK for most things, but there were some files on which it would abend the program. The whole reason for having this product was to make the system run faster in its batch work. The BMC product it was going to replace worked fine. We worked with CA's support for months before we finally gave up. I, like Ted, would also consider working for CA if they offered me a job. When Ed Gould makes CA out to be the evil empire, it really reflects badly on Ed. I try not to say bad things about other posters on this list, but Ed, I would strongly advise you to stop making the kind of comments about Scott and CA that you have lately. It really makes you look bad, not Scott or CA. I know you mean well, but it's not working. One thing you do well is that you sometimes post news articles that are relevant. When you say you can't remember very well but you think this was the way it was, you're probably better off not posting.
I know when I can't remember exactly, and I'm too lazy to look it up, I just don't reply to a post. It always amazes me that so many people complain about Ed, but you just keep posting the same vitriol. Eric Bielefeld Sr. z/OS Systems Programmer Milwaukee, Wisconsin 414-475-7434
Re: CSA 'above the bar'
That is why Ed put the word common in quotes. After creating a GETSHARED memory object, every address space that needs access to the object must get the address of the object and issue IARV64 with the SHAREMEMOBJ option. And by the way, the caller must be supervisor state or in a system key. So no, shared memory objects aren't the equivalent of CSA, since there isn't, at IPL time, a block of above-the-bar storage that is always available to every address space. Wayne Driscoll Product Developer JME Software LLC NOTE: All opinions are strictly my own. -Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Paul Schuster Sent: Sunday, November 04, 2007 1:54 AM To: IBM-MAIN@BAMA.UA.EDU Subject: Re: CSA 'above the bar' OK, so how do you 'explicitly' share it? Since CSA is available to everyone, how do you make this 'above the 4GB bar' storage available to everyone (read only)? Thank you. Paul On Fri, 2 Nov 2007 20:32:28 -0700, Edward Jaffe [EMAIL PROTECTED] wrote: Paul Schuster wrote: Am I correct in believing that the method to obtain the equivalent of CSA above the bar is to use macro IARV64 with the REQUEST=GETSHARED option? That's the only common storage currently available above 2G. However, unlike CSA, it must be explicitly shared.
-- Edward E Jaffe Phoenix Software International, Inc 5200 W Century Blvd, Suite 800 Los Angeles, CA 90045 310-338-0400 x318 [EMAIL PROTECTED] http://www.phoenixsoftware.com/
Re: Scotts new role
On 4 Nov 2007 04:39:54 -0800, in bit.listserv.ibm-main (Message-ID:[EMAIL PROTECTED]) [EMAIL PROTECTED] (Eric Bielefeld) wrote: When Ed Gould makes like CA is the evil empire, it really reflects badly on Ed. As further evidence that working for CA doesn't suck out one's soul, here is an experience from a SHARE many years ago: After the speaker gave her name, she said, "I used to just say I'm from the Evil Empire. But now there are so many of them that I say I'm from CA." Even if I remembered her name, I wouldn't state it here. Most people have senses of humor. Most corporations don't. -- I cannot receive mail at the address this was sent from. To reply directly, send to ar23hur at intergate dot com
Re: CSA 'above the bar'
In a message dated 11/3/2007 9:49:47 A.M. Central Standard Time, [EMAIL PROTECTED] writes: The bar is at 2G. But there is a dead zone between 2G and 4G that will never be allocated by z/OS. For simplicity, some folks like to think of the z/OS virtual storage bar as being 2G thick. And, of course, there is no equivalent dead zone when working with real storage. And there is not necessarily any equivalent dead zone when working on a z/Architecture mainframe processor with any operating system other than z/OS; such platforms may even allow virtual addressing in the z/OS dead zone. Bill Fairchild Franklin, TN
Re: DFSORT SYMNAMES: soft lengths possible?
IBM Mainframe Discussion List IBM-MAIN@BAMA.UA.EDU wrote on 11/04/2007 03:30:30 AM: I have a DFSORT question regarding symbol definitions in SYMNAMES. I did check the manual, and I don't think that what I want to do is possible, but here goes... I have the following symbol defined in a sort record: LAST_NAME,1,20,CH I'm using that symbol to BUILD that field into two different sort record formats (A and B). The field appears in different columns in records A and B. So I have these symbols defined: A_LAST_NAME,*,20,CH and B_LAST_NAME,*,20,CH I can't use the equals sign to denote use the previous length within records A and B. What I'm wondering is if there's a way I can assign a length to a symbol (for example), and then use that symbol as a constant length throughout SYMNAMES. I actually did try that, but it failed. It appears that I have no choice but to replicate the constant length of 20 for every occurrence of the last name (assuming that I never want it truncated) in every record that it appears. Am I missing something? Thanks so much.

You can't specify something like:

LLEN,20
A_LAST_NAME,*,LLEN,CH

if that's what you're asking. Obviously, coding 20 in each LAST_NAME symbol rather than LLEN is not the problem, since in either case you have to code something. If the idea is to be able to change the length easily in every LAST_NAME symbol, how about doing some processing on the SYMNAMES statements instead? For example, if your SYMNAMES data set has:

* Record A
A_SYM1,1,5,CH
A_LAST_NAME,*,20,CH
* Record B
B_SYM1,15,5,CH
B_LAST_NAME,*,20,CH
* Record C
C_SYM1,25,5,CH
C_LAST_NAME,*,20,CH

and you want to change 20 to 30 in every LAST_NAME symbol, you can use a DFSORT job like this or some appropriate variation of it:

//CHG   EXEC PGM=ICEMAN
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=...  SYMNAMES data set
//SORTOUT  DD DSN=&&S1,UNIT=SYSDA,SPACE=(TRK,(1,1)),DISP=(,PASS)
//SYSIN    DD *
  OPTION COPY
  INREC IFTHEN=(WHEN=(3,9,CH,EQ,C'LAST_NAME'),OVERLAY=(15:C'30'))
/*
//S1    EXEC PGM=ICEMAN
//SYMNAMES DD DSN=&&S1,DISP=(OLD,PASS)
...

If that doesn't help, then please explain the problem you're trying to solve in more detail with an example. Frank Yaeger - DFSORT Development Team (IBM) - [EMAIL PROTECTED] Specialties: PARSE, JFY, SQZ, ICETOOL, IFTHEN, OVERLAY, Symbols, Migration = DFSORT/MVS is on the Web at http://www.ibm.com/storage/dfsort/
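Outside of DFSORT, the same preprocessing idea can be sketched in a few lines of Python (a hypothetical illustration, not part of DFSORT itself): rewrite the length field of each matching symbol so that the "soft length" lives in a single constant rather than in every SYMNAMES statement.

```python
# Sketch (not DFSORT): emulate the SYMNAMES preprocessing idea in Python.
# Any symbol whose name ends in LAST_NAME gets its length field rewritten,
# so the length is maintained in one place (LEN) instead of in every symbol.

LEN = "30"  # the single length constant to propagate

def retarget(symnames_lines, suffix="LAST_NAME", new_len=LEN):
    """Rewrite the length field of every symbol whose name ends in `suffix`.

    SYMNAMES statements here are assumed to be comma-separated:
        name,position,length,format   (position may be '*')
    Comment lines (starting with '*') pass through unchanged.
    """
    out = []
    for line in symnames_lines:
        if line.startswith("*"):
            out.append(line)
            continue
        parts = line.split(",")
        if len(parts) == 4 and parts[0].endswith(suffix):
            parts[2] = new_len
        out.append(",".join(parts))
    return out

symnames = [
    "* Record A",
    "A_SYM1,1,5,CH",
    "A_LAST_NAME,*,20,CH",
    "* Record B",
    "B_SYM1,15,5,CH",
    "B_LAST_NAME,*,20,CH",
]
for line in retarget(symnames):
    print(line)
```

This is the same transformation the INREC IFTHEN/OVERLAY step performs, just expressed procedurally.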
Re: CSA 'above the bar'
The discussion suggests that the dead zone represented an arbitrary decision. However, it is absolutely necessary to preserve compatibility with programs dating back to OS/360. If a 24-bit or 31-bit address is interpreted as or expanded to a 64-bit address and the high-order bit happens to be on, that would cast the virtual address into the 2-4 gigabyte range, and unpredictable effects could ensue. Use of the high-order bit in an address to signal the end of a parameter list is common, and no practical means of filtering or converting the programs is available. I think the dead zone is necessary in z/VSE for the same reason. Other operating systems did not use the high-order bit in the same way, so there is no need for the dead zone in their virtual addresses. Has this helped to achieve clarity? Steve Samson, IBM Mainframe Discussion List wrote: In a message dated 11/3/2007 9:49:47 A.M. Central Standard Time, [EMAIL PROTECTED] writes: The bar is at 2G. But, there is a dead zone between 2G and 4G that will never be allocated by z/OS. For simplicity, some folks like to think of the z/OS virtual storage bar as being 2G thick. And, of course, there is no equivalent dead zone when working with real storage. And there is not necessarily any equivalent dead zone when working on a z/Architecture mainframe processor with any operating system other than z/OS, and such platforms may even allow virtual addressing in the z/OS-dead zone. Bill Fairchild Franklin, TN
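The arithmetic behind the compatibility point above is easy to check: a 32-bit address word with the old end-of-parameter-list flag (bit 0) set, if naively zero-extended to 64 bits, always lands between 2G and 4G. A small Python sketch:

```python
# Sketch: why the z/OS 2G-4G virtual range is left unallocated.
# Old parameter lists flag the last entry by setting bit 0 of the 32-bit
# address word. If such a word is zero-extended into a 64-bit address,
# the result lands squarely in the 2G-4G range -- hence the dead zone.

TWO_G, FOUR_G = 1 << 31, 1 << 32

def naive_64bit(addr32):
    """Zero-extend a 32-bit address word to 64 bits (no masking of bit 0)."""
    return addr32 & 0xFFFFFFFF

plist_end = 0x80001000  # address 0x1000 with the end-of-list bit on
addr = naive_64bit(plist_end)
print(hex(addr), TWO_G <= addr < FOUR_G)  # the flagged address is in the dead zone
```

Leaving that range unmapped turns any such accidental use of a flagged address into a detectable fault instead of a silent wild reference.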
Re: CSA 'above the bar'
Steve Samson wrote: The discussion suggests that the dead zone represented an arbitrary decision. However it is absolutely necessary to preserve compatibility with programs dating back to OS/360. If a 24-bit or 31-bit address is interpreted as or expanded to a 64-bit address and the high-order bit happens to be on, that would cast the virtual address into the 2-4 gigabyte range and unpredictable effects could ensue. Use of the high-order bit in an address to signal the end of a parameter list is common, and no practical means of filtering or converting the programs is available. I think the dead zone is necessary in z/VSE for the same reason. I would not characterize the z/OS (or z/VSE should they ever implement such a thing) virtual storage dead zone as necessary. Rather, it was a convenient and smart choice. Otherwise, many more bugs would have gone undetected. -- Edward E Jaffe Phoenix Software International, Inc 5200 W Century Blvd, Suite 800 Los Angeles, CA 90045 310-338-0400 x318 [EMAIL PROTECTED] http://www.phoenixsoftware.com/
Re: Scotts new role
I must have missed something, when did you (Ted MacNeil) become unemployed? June 6, 2007 -- down-sized for the second time in 3 years. - Too busy driving to stop for gas!
Re: CSA 'above the bar'
I think the dead zone is necessary in z/VSE for the same reason. Isn't it also in z/VM? - Too busy driving to stop for gas!
Re: CSA 'above the bar'
Steve Samson [EMAIL PROTECTED] writes: The discussion suggests that the dead zone represented an arbitrary decision. However it is absolutely necessary to preserve compatibility with programs dating back to OS/360. If a 24-bit or 31-bit address is interpreted as or expanded to a 64-bit address and the high-order bit happens to be on, that would cast the virtual address into the 2-4 gigabyte range and unpredictable effects could ensue. Use of the high-order bit in an address to signal the end of a parameter list is common, and no practical means of filtering or converting the programs is available. I think the dead zone is necessary in z/VSE for the same reason. Other operating systems did not use the high order bit in the same way, so there is no need for the dead zone in virtual addresses. Has this helped to achieve clarity? 360/67 had both 24-bit and 32-bit virtual addressing modes ... as well as some other things that didn't reappear until xa. there was some discussion in the xa time-frame about returning to the 360/67 32-bit mode vis-a-vis using 31-bit ... which would have been in the architecture redbook (the discussion i remember was the difference in operation of things like BXH and BXLE instructions between 31-bit and 32-bit modes). principles of operation was one of the first major publications done with cms script ... in large part because it supported conditionals ... on the command line, either the whole architecture redbook could be printed ... or just the principles of operation subset (w/o all the additional detail ... it was called redbook because it was distributed in a 3-ring red binder). common segment area started out being the MVS solution to moving subsystems into their own address spaces ... and the pervasive use of pointer-passing APIs. this was what initially led to the MVS kernel image occupying 8mbytes of every 16mbyte virtual address space (so for applications making kernel calls ... the kernel could directly access the parameter list).
however, this pointer-passing api paradigm created significant problems when subsystems were moved into their own address spaces (as part of morphing os/vs2 svs into os/vs2 mvs). common segment could start out as 1mbyte in every address space ... where applications could squirrel away a parameter list ... and then make a call to the subsystem (passing thru the kernel for the address space switch). the problem was that for the larger installations, common segment could grow to 5-6mbytes appearing in every application virtual address space; with the 8mbytes taken out for the kernel image, that might leave only 2-3mbytes for applications (out of the 16mbytes). the stop-gap solution in the 3033 time-frame was dual-address space mode (pending access registers, program call, etc) ... there was still a pass thru the kernel to switch to a called subsystem ... but the called subsystem could reach back into the calling application's virtual address space (w/o being forced to resort to the common segment hack). 3033 also introduced a different above the line concept. the mismatch between processor thruput and disk thruput was becoming more and more exacerbated. i once advocated a statement that over a period of a decade or so, disk relative system thruput had declined by an order of magnitude (or more) ... aka disk thruput increased by 3-4 times while processor thruput increased by 40-50 times. as a result, real storage was more and more being used for caching and/or other mechanisms to compensate for the lagging disk relative system thruput. we were starting to see clusters of 4341s decked out w/max storage and max channel and i/o capacity ... matching or beating 3033 thruput at a lower price. one of the 4341 cluster benefits was that there was more aggregate real storage than the 16mbyte limit for the 3033. the hack was to redefine two (undefined/unused) bits in the page table entry.
the standard page table entry had 16 bits, including a 12-bit (4k) page number field (allowing addressing of up to 16mbytes of real storage). with the two additional bits, it was possible to address up to 16384 4kbyte pages (up to 64mbytes of real storage) ... but only 16mbytes at a time. in real addressing mode ... it was only possible to address the first 16mbytes, and in virtual addressing mode ... it was only possible to address a specific 16mbytes (but it was possible to have more than 4096 total 4kbyte pages, some of which could reside above 16mbytes real). it was possible to use a channel program IDAL to specify an address greater than 16mbytes real (allowing data to be read/written above the 16mbyte line). however, the actual channel programs were still limited to residing below the 16mbyte line. some of this was masked by the whole channel program translation mechanism that was necessary as part of moving to a virtual memory environment. the original transition for mvt was hacking a little bit of support for a single virtual address space (i.e. os/vs2 svs) and
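The page-table-entry arithmetic described above checks out: a 12-bit frame number addresses 4096 4KB frames (16MB), and stealing two unused bits quadruples that to 16384 frames (64MB). A quick sketch:

```python
# Arithmetic behind the 3033 real-storage hack described above:
# a 16-bit page table entry with a 12-bit frame number addresses 16MB of
# real storage; two previously unused bits extend that to 64MB.

PAGE = 4 * 1024  # 4KB page frames

def addressable(frame_bits):
    """Real storage reachable with `frame_bits` bits of page frame number."""
    return (1 << frame_bits) * PAGE

print(addressable(12) // (1024 * 1024))  # 16 (MB) -- the standard 12-bit field
print(addressable(14) // (1024 * 1024))  # 64 (MB) -- with the two extra bits
```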
Re: PL/S ??
Date: Fri, 2 Nov 2007 11:20:36 -0500 From: Scott Fagen [EMAIL PROTECTED] Subject: Re: PL/S ?? Additionally, I'd suspect that I could find a bright C programmer and get them useful on METAL C faster than PL/X. IBM doesn't do anything without some level of informed self interest. Perhaps they are having trouble locating good PL/X programmers themselves? Bait and switch? I don't think so. Unrealistic expectations? More likely. Scott Fagen Enterprise Systems Management Hi Scott, long time no see. I have heard several comments from programmers that PL/X is showing its age and doesn't generate very efficient code, especially for AMODE 64. The IBM C compiler has, for many years, had an offering called System Programmer C, which let you compile things like SMF exits in C. It was packaged as a separate link-edit-time SYSLIB that would resolve a limited set of C external references and create a stand-alone load module that had no run-time library requirements. This Metal C sounds like a logical follow-on to SPL C, with more facilities and much better support. With the right optimization options specified, C can generate very efficient code as well, much better than PL/S code. Tom Russell Stay calm. Be brave. Wait for the signs. -- Jasper FriendlyBear ... and remember to leave good news alone. -- Gracie HeavyHand
Re: PL/S ??
Tom Russell wrote: I have heard several comments from programmers that PL/X is showing its age and doesn't generate very efficient code, especially for AMODE 64. The IBM C compiler has, for many years, had an offering called System Programmer C, which let you compile things like SMF exits in C. It was packaged as a separate link edit time SYSLIB that would resolve a limited set of C external references, and create a stand-alone load module that had no run-time library requirements. This Metal C sounds like a logical follow-on to SPL C, with more facilities and much better support. With the right optimization options specified, C can generate very efficient code as well, much better than PL/S code. PL/X compiler development has been moved to the compiler development group based in Toronto. -- Edward E Jaffe Phoenix Software International, Inc 5200 W Century Blvd, Suite 800 Los Angeles, CA 90045 310-338-0400 x318 [EMAIL PROTECTED] http://www.phoenixsoftware.com/
Re: z/OS 1.9 Features summary
On 31 Oct 2007 17:58:09 -0700, in bit.listserv.ibm-main you wrote: Sure they are trivial - until we move into an environment where there are multiple DASD sizes with different optimal BLKSIZE needs. The programmer shouldn't care what disk his files are on - the systems people should have the ability to quickly and easily move the files depending on current needs. When they move them, they should be able to adjust the buffering without recompiling and changing JCL. I don't understand what you're talking about. In today's world there is no need for an application programmer to know anything about BLKSIZE beyond what the installation demands. Utilities can certainly move data between different geometries and handle the reblocking without intervention. The JCL and program don't need to specify a blocksize, since the DSCB provides that information. Where exactly is all the effort? Whether the programmer should care or not is irrelevant. In most cases, they actually don't know, which means that your point has already been made. The programmer DOESN'T need to know. Just a question, though: how many different DASD geometries are being encountered today under z/OS? I am curious about the environment you're suggesting. 3380 and 3390 are still valid for disk. Tape blocksizes can be larger. I still believe that IBM needs to move to FBA. It will take 5 - 10 years, but it is ridiculous that the optimal blocksizes for both 3380 and 3390 VSAM make inefficient use of the track if you want the CI to be in page multiples. It is even dumber since the actual spinning DASD is FBA. Adam Clark Morris
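The track-utilization complaint above can be made concrete with the commonly quoted device figures (treat the capacities below as approximate, not authoritative): with page-aligned 4KB physical blocks, a 3380 track holds ten and a 3390 track holds twelve, leaving a noticeable slice of each track unused.

```python
# Rough illustration of the track-utilization point: with page-multiple
# 4KB CIs, neither 3380 nor 3390 tracks are fully used. The capacities and
# blocks-per-track below are the commonly quoted figures (approximate).

GEOMETRY = {           # device: (track capacity in bytes, 4KB blocks per track)
    "3380": (47476, 10),
    "3390": (56664, 12),
}

for dev, (capacity, blocks) in sorted(GEOMETRY.items()):
    used = blocks * 4096
    print(f"{dev}: {used}/{capacity} bytes used, {used / capacity:.0%} of the track")
```

Either way, roughly an eighth of each track is lost to the CI/page alignment, which is the inefficiency being criticized.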
Re: z/OS 1.9 Features summary
On 30 Oct 2007 13:53:30 -0700, in bit.listserv.ibm-main you wrote: -Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Howard Brazee Sent: Tuesday, October 30, 2007 3:41 PM To: IBM-MAIN@BAMA.UA.EDU Subject: Re: z/OS 1.9 Features summary SNIP But SDB came too late: if it had been present in rudimentary form, supplying a valid but nonoptimal BLKSIZE, in OS/360 release 1, coding BLKSIZE could always have been optional, and much of the rough transition to present techniques could have been avoided. SNIP It's amazing how hard it is to get shops to implement obvious improvements. While COBOL has BLKSIZE=0, lots of people don't use AVGREC nor IF-THEN-ELSE-ENDIF even today. I agree. I have overheard programmers complaining that they could figure out the number of cylinders needed for a given number of records, but they won't look at using the newer (but now quite old) allocation by number of records. That's because "I understand cylinders." Unfortunately for VSAM, allocation in records can give you bad CA sizes because IBM in their infinite lack of wisdom (excessive sarcasm, but it is one of my hot buttons) decided that we didn't need the equivalent of ROUND. Thus with the wrong allocation, a data set that would have a 5 cylinder primary might get an 8 track secondary, so the CA size would be 8 tracks. Need I mention what that does to space allocation and index levels? We used to calculate the necessary SORTWORK for count-key devices. Now we just throw more DASD at the job and don't waste our time trying to be close. But we haven't used the tools provided enough to make the old ways rare. We implemented the ICEMAC option to simply override the JCL SORTWORK and calculate the amount needed. The only time this fails is when we sort a file which is compressed using BMC's Data Accelerator product. Then the poor programmers just try to allocate 12 3390-3 volumes of SORTWORK and pray.
Re: z/OS 1.9 Features summary
On 30 Oct 2007 13:46:07 -0700, in bit.listserv.ibm-main you wrote: On Tue, 30 Oct 2007 13:29:54 -0500, Dave Kopischke wrote: On Tue, 30 Oct 2007 09:46:36 -0600, Howard Brazee wrote: But why should a program care about block size? Funny you should ask this; we had a major project implementation a couple of weeks ago. To deal with the number of object moves, many of the libraries were just cloned and renamed at implementation time. During this, a couple of pretty important PDSs were reblocked. A pretty benign change from my point of view. It turns out a program update process allocates one of these PDSs SHR,BLKSIZE=3200. This effectively reblocked the PDS. Every member in this PDS that was longer than about 40 lines was corrupted and inaccessible. That's a reason to care, but probably not the point trying to be made. There are code bombs waiting to explode. Even a seemingly benign change can trigger one. Ah, but if the programmer hadn't coded BLKSIZE in the DCB but left it zero and accepted whatever value was in the data set label, the problem would never have occurred. The problem was caused by the programmer's delusion that he needed to override BLKSIZE. And if the OS, according to my idea, had not updated the BLKSIZE, the data set would not have been corrupted. All you have to do to foul the works is forget to code BLOCK 0, or use SYNCSORT with certain input VSAM data sets, which it assumes are RECFM=F or V depending, and things go downhill from there. -- gil Clark Morris
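The "about 40 lines" observation above is exactly what the arithmetic predicts: with 80-byte records, a BLKSIZE=3200 override packs 40 records per block, so only members spanning more than one block crossed the reblocking damage.

```python
# Small check of the "about 40 lines" observation: with LRECL=80,
# a BLKSIZE=3200 override holds exactly 40 records per block, so members
# longer than one block were the ones corrupted by the reblocking.

LRECL, BLKSIZE = 80, 3200
records_per_block = BLKSIZE // LRECL
print(records_per_block)  # 40
```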
ICB ISC-3/ISC-4
Hi all, Does anyone know the throughput for ICB/ISC-3/ISC-4? ISC-3 is 1GB and works within 10KM; how about ICB/ISC-4? And how do I code the HCD type for ISC-3/ISC-4 or ICB? Where can I find the above information? Thanks for the help. tommy
Don't blame PC weenies for LE was Re: PL/S ??
On 1 Nov 2007 17:43:12 -0700, in bit.listserv.ibm-main Ed Gould wrote: On Nov 1, 2007, at 6:56 PM, Paul Gilmartin wrote: LE-independent, I see. Would it likewise be independent of Run Time Library license encumbrances, so ISVs could distribute compiled code to customers without the compiler licensed, and free of prelinker entanglements so the full facility of SMP/E maintenance with fine granularity could be exploited? Thanks, gil Paul, As you probably know, I am NOT an LE fan. That being said, I would think it would be close to impossible to come up with a subroutine library that could be common across all the products that you envision - something that is OS-release independent and language independent, especially if you envision this to be cross-vendor. It positively will come up that it will work with one release of a vendor's product and not with another's. The complications of trying to do so would, IMO, make it close to impossible. IBM can't do it themselves with LE; what makes you think that when you add other vendors it could be done? IBM did a halfway credible job with the old COBOL subroutine library; it really did work across many versions of MVS without any issue I ever heard of. The PC weenies thought they could do it with LE, and they fell flat on their faces. Don't blame PC weenies for LE. It was most probably in response to the Guide/SHARE (or SHARE/Guide) Language Futures Task Force, which was co-chaired by John Ehrman and someone from Guide. I was a member of that task force, with most participants more knowledgeable than me, and am suffering a senior moment because I don't remember the names of most of the participants. This task force met in the early 1980s. Perhaps if it did not call on any system services it would be possible, but some services are so tightly ingrained that anything changed would create versioning issues. UGH! Ed Clark Morris
Fwd: ICB ISC-3/ISC-4
Hi all, I found a paper on the web; I think it can help, but it doesn't include the latest news on ICB-4/ISC-4: http://researchweb.watson.ibm.com/journal/rd/464/gregg.pdf -- Forwarded message -- From: Tommy Tsui [EMAIL PROTECTED] Date: Nov 5, 2007 10:06 AM Subject: ICB ISC-3/ISC-4 To: ibm-main@bama.ua.edu Hi all, Can anyone know the thoughput for ICB/ISC-3/ISC-4 ISC-3 is 1GB and within 10KM how ICB/ISC-4 and how to code the HCD type for ISC-3/ISC-4 or ICB where can I find the above information thanks for help tommy
Re: Don't blame PC weenies for LE was Re: PL/S ??
As you probably know I am NOT an LE fan. That being said, I would think it would be close to impossible to come up with a subroutine library that could be common across all the products that you would envision. Trying to maintain something that is OS release independent and language independent. Actually, LE is a lot better now than it was prior to OS/390 2.10. We flawlessly went from 2.10 to 1.4 to 1.7 without any LE issues - even with COBOL programmes that, had they been wine, would have been vintage. IBM finally got it right. Remember: Software eventually works! - Too busy driving to stop for gas!
Re: HATS support for IND$FILE or ISPF C/S
Tom Conley writes: So where's the doc on how to add an IND$FILE button to HATS? I've looked high and low at the HATS doc, but I can't find IND$FILE even mentioned.

It's well hidden in the docs and somewhat theoretical. I'll try to sketch it out. First of all, what's the client protocol going to be? If you're transferring a file to a browser client, you've got a limited set of choices. Perhaps the best way is to trigger a pop-up file-save dialog for an HTTP file transfer stream. An alternative might be FTP. Once you've made that decision, you quickly realize you're going to need some code running in WebSphere Application Server that'll handle any translation (unless you're just worried about binary files) and send the stream of bytes coming from the host to the client. So you're going to have some programming to glue the client to the backend HATS code (and connections). This might be a bit trickier if you're trying to implement upload. Fortunately the underlying HACL interfaces offer file transfer protocol support. (Check the Host Access Client Toolkit docs for details.) And you can invoke HACL functions from HATS macros, as one example. There's no wizard in the HATS Studio to generate the XML to do that, so you'll have to write the XML by hand (or possibly copy/paste out of the doc examples). Note that if it's a printer session, that's already provided in HATS, via built-in conversion to PDF and presentment to the browser. Anyway, it's possible, but it'll take a little work and some knowledge of at least basic Java programming. - - - - - Timothy Sipples IBM Consulting Enterprise Software Architect Specializing in Software Architectures Related to System z Based in Tokyo, Serving IBM Japan and IBM Asia-Pacific E-Mail: [EMAIL PROTECTED]
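To illustrate the translation step mentioned above: the host side of such a download is EBCDIC, so a text-mode transfer has to be converted before the bytes are streamed to the browser. Here is a minimal, hedged sketch using only the JDK; the class and method names are illustrative (not a HATS or HACL API), and the Cp1047 code page is an assumption — the right code page depends on the host.

```java
import java.nio.charset.Charset;

// Hypothetical helper for the server-side glue code described above.
public class HostFileDownload {

    // Convert host EBCDIC bytes (assumed code page Cp1047) into a Java
    // String, which can then be written to the HTTP response as UTF-8.
    public static String ebcdicToText(byte[] hostBytes) {
        return new String(hostBytes, Charset.forName("Cp1047"));
    }

    // Sketch of the servlet side that would trigger the browser's
    // file-save dialog (names are illustrative, not a real HATS API):
    //   response.setContentType("text/plain; charset=UTF-8");
    //   response.setHeader("Content-Disposition",
    //                      "attachment; filename=\"host.txt\"");
    //   response.getWriter().write(ebcdicToText(bytesFromHacl));
}
```

The `Content-Disposition: attachment` header is what makes the browser pop up a file-save dialog rather than rendering the stream inline; for binary files you would skip the charset conversion and stream the raw bytes instead.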
Re: why external CF not internal CF
Tommy, David Raften authored a good whitepaper which I would recommend reading if you haven't done so already. It's available here: ftp://ftp.software.ibm.com/common/ssi/rep_wh/n/ZSW01971USEN/ZSW01971USEN.PDF Like many other areas of technology it's very important to get the latest information. You may wish to request at least a short briefing directly with IBM Poughkeepsie on this subject since there's a lot that's new. There are many possible configuration choices, and the pros and cons vary and also depend on what level of technology you have. For example, z/OS 1.8 introduced some significant improvements to duplexed coupling facility handling, so much so that the improvements could influence whether or not you need to dedicate a whole frame to a coupling facility. Other questions I might look into: whether you should replace the 9037 timers with Server Time Protocol (STP), and, if you do decide a dedicated CF frame has value, whether that frame should be a 2094 or 2096 model. The 2094 consumes more electricity and takes more space, so you need some advantages to justify the 2094 versus the 2096. Other responders are absolutely correct: there's no physical difference between ICFs and external CFs. The only difference is whether a frame is dedicated to the coupling facility LPAR(s)/code or not. - - - - - Timothy Sipples IBM Consulting Enterprise Software Architect Specializing in Software Architectures Related to System z Based in Tokyo, Serving IBM Japan and IBM Asia-Pacific E-Mail: [EMAIL PROTECTED]
Re: Don't blame PC weenies for LE was Re: PL/S ??
On Nov 4, 2007, at 8:19 PM, Clark Morris wrote: On 1 Nov 2007 17:43:12 -0700, in bit.listserv.ibm-main Ed Gould wrote: On Nov 1, 2007, at 6:56 PM, Paul Gilmartin wrote: --SNIP--- Don't blame PC weenies for LE. It was most probably in response to the Guide/SHARE (or SHARE/Guide) Language Futures Task Force which was co-chaired by John Ehrman and someone from Guide. I was a member of that task force with most participants more knowledgeable than me and am suffering a senior moment because I don't remember the names of most of the participants. This task force met in the early 1980s.

Clark: Perhaps I wasn't clear; I will attempt to clarify. LE (in concept) was probably good. It was the coders (read: developers) that I blame the LE problems on. I also don't believe they had enough of a background to understand IBM's (excellent, btw) ideas on compatibility. I don't think that anyone would have complained about bugs (yes, there were more than a reasonable amount IMO, but probably in a reasonable range, and nothing as bad as VSAM's PTF tapes). IMO they should have had someone from DFP be the chief architect; then they wouldn't have had 12(?) releases in a few years. If that wasn't bad enough, you really needed a lawyer to read the (Fine) manual on compatibility. Ed
Big Blue gets into greenwash
http://www.theregister.co.uk/2007/11/02/ibm_green_certificates/ Big Blue gets into greenwash Data centres to get green certificates By John Oates Published Friday 2nd November 2007 10:48 GMT

IBM, as part of its environmental initiative dubbed Big Green Innovations, will offer energy efficient data centres a certificate of greenness - if they can prove they have reduced power consumption. Big Blue is partnering with Neuwing Energy to offer data centres a third-party approved certificate to show they have reduced energy consumption. These certificates can either be used to prove you are reducing your carbon footprint, or traded on the energy efficiency market. Data centres can use up to 15 times the electricity of a normal office. Neuwing will first assess how much energy your servers and air conditioning are using, then test again when changes have been made. It will either keep a portion of your certificates or charge a fee per MWh saved. The scheme is initially available in the US, but IBM hopes to extend it to Europe in 2008. ®
Re: DFSMStvs logs - Is DASD-only a possibility?
Tom Schmidt wrote: We're looking to activate DFSMStvs, and since we are doing this in a monoplex (single-LPAR sysplex) we are wondering whether TVS' logs need to use CF structures or if they can possibly be defined DASD-only. The examples use CF structures of course, but are they actually required? Has any other site successfully used the DFSMStvs primary and secondary logs without any associated CF structures? We are wondering since the DFSMStvs log code was cloned from the CICS log code, and CICS allows for either CF or DASD-only. (Depending on the year that the code was cloned it may or may not allow for DASD-only, I suppose.)

IMHO, since the logs are System Logger logstreams, it is up to LOGGER to use or not to use CF structures. Of course, a DASD-only logstream cannot be shared between systems. BTW: AFAIR (I can be wrong), DFSMStvs requires VSAM RLS, which in turn requires CF structures. So CF is mandatory. Regards -- Radoslaw Skorupka Lodz, Poland -- BRE Bank SA, ul. Senatorska 18, 00-950 Warszawa, www.brebank.pl
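For reference, a DASD-only logstream is defined in the System Logger policy with the IXCMIAPU administrative data utility. A minimal sketch follows; the logstream name, sizes, and HLQ are purely illustrative (the actual DFSMStvs undo/shunt log names depend on your installation), and, as noted above, even if the logs themselves were DASD-only, VSAM RLS would still need CF structures:

```
//DEFLOGR  JOB ...
//STEP1    EXEC PGM=IXCMIAPU
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  DATA TYPE(LOGR) REPORT(YES)
  /* Illustrative DASD-only logstream; name and sizes are examples only */
  DEFINE LOGSTREAM NAME(IGWTV001.IGWLOG)
         DASDONLY(YES)
         MAXBUFSIZE(65532)
         STG_SIZE(5000)
         LS_SIZE(5000)
         HLQ(LOGGER)
/*
```

With DASDONLY(YES) the logstream uses a local staging data set instead of a CF structure, which is exactly why such a logstream cannot be shared between systems.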