Re: SMF System Logger - limitations of MANx
Other than a problem situation (a looping transaction, etc.), does anyone know of any shops where data is written so fast that the SMF address space (which writes to MANx) can't keep up, as opposed to the offload process not being able to keep up? Kees mentioned it and I have seen it happen, but only as the result of a problem - not a normal thing. I know the logger can handle much higher logging rates. Mark

I can't remember exactly when we had the situation. It was quite some time ago (before 2005, going by the SMFPRMxx log) and it was not during a problem situation, but at a specific point in what is considered normal processing, probably when shutting down DB2 or so. This is when we discovered the zap to enlarge the SMF buffers, which was later turned into BUFSIZMAX and other parameters of SMFPRMxx. Kees.

** For information, services and offers, please visit our web site: http://www.klm.com. This e-mail and any attachment may contain confidential and privileged material intended for the addressee only. If you are not the addressee, you are notified that no part of the e-mail or any attachment may be disclosed, copied or distributed, and that any other action related to this e-mail or attachment is strictly prohibited, and may be unlawful. If you have received this e-mail in error, please notify the sender immediately by return e-mail, and delete this message. Koninklijke Luchtvaart Maatschappij NV (KLM), its subsidiaries and/or its employees shall not be liable for the incorrect or incomplete transmission of this e-mail or any attachments, nor responsible for any delay in receipt. Koninklijke Luchtvaart Maatschappij N.V. (also known as KLM Royal Dutch Airlines) is registered in Amstelveen, The Netherlands, with registered number 33014286 **

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: SMF System Logger - limitations of MANx
What is the best color? Purple. What is "too high"? That is for each of us to decide for ourselves; return on investment rules. I would start with this: if it takes longer to dump than it takes to fill, then it is too high. There is nothing wrong with a 100-cylinder MANx or a 750-cylinder MANx dumping once or one hundred times a day. What you dump to and how you process it are factors for you to consider. If you are still manually mounting tapes, then less frequent is better. When the actual process of switching uses too many cycles for you, that is too high. I am at a site where once a day is the goal.

If you are collecting record types you never use, and never will, that adds to too high a volume. If you feel you need it (really, a strong desire to have it), then you need it and you won't consider it too high a volume. The SMF data is collected based on your workload, so the more work your system does, the more records you produce. Busier shops put more thought into what they capture and how long they retain it than simpler shops with less experienced people.

On Sat, 29 Mar 2008 19:47:28 -0700, Mark Yuhas [EMAIL PROTECTED] wrote: I have been following this thread with some interest. The one aspect I have not seen mentioned has been what constitutes too high a volume of SMF data? snip
Re: SMF System Logger - limitations of MANx
I have been following this thread with some interest. The one aspect I have not seen mentioned is what constitutes too high a volume of SMF data.

I'll admit my shop is small, but our production LPAR was switching SMF in the range of 95 to 105 times a day. The other LPARs were behaving, switching 5 to 15 times a day. The source of the problem was that all of the MANx data sets had the same allocation - 100 cylinders - which had been sufficient for many years. Anyway, I reallocated the MANx data sets for the production LPAR to 750 cylinders, and I modified the offload PROC as well. Now the production LPAR only switches 14 or 15 times a day.

With this in mind, what is too high a volume for the MANx data sets to handle, given the size of the MANx data sets and the rate of switching?
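As a back-of-the-envelope check on the numbers above (and on the earlier rule of thumb that switching is too frequent only when a data set fills faster than it dumps), daily SMF volume can be estimated from the MANx size and switch count. The sketch below is mine, not from the thread, and assumes 3390 geometry (56,664 bytes per track, 15 tracks per cylinder):

```python
# Estimate daily SMF volume from MANx allocation and observed switch rate.
# Assumes 3390 DASD geometry; real MANx usable capacity is somewhat lower
# because of VSAM/CI overhead, so treat these as upper-bound estimates.
BYTES_PER_TRACK = 56_664          # 3390 track capacity
TRACKS_PER_CYL = 15
BYTES_PER_CYL = BYTES_PER_TRACK * TRACKS_PER_CYL  # ~850 KB per cylinder

def daily_gb(cylinders, switches_per_day):
    """Approximate GB of SMF data written per day."""
    return cylinders * BYTES_PER_CYL * switches_per_day / 1e9

# The production LPAR from this post, before and after reallocation:
before = daily_gb(100, 100)    # 100-cyl MANx, ~100 switches/day -> ~8.5 GB
after = daily_gb(750, 14.5)    # 750-cyl MANx, 14-15 switches/day -> ~9.2 GB

# Mean time between switches after the change: roughly 24 / 14.5 = 1.7 hours,
# comfortably longer than a dump job takes, satisfying the rule of thumb.
```

The two figures come out nearly equal, which is what you would expect: resizing the data sets changes the switch frequency, not the volume the workload produces.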
Re: SMF System Logger - limitations of MANx
I don't think anyone has mentioned the white paper from Riaz Ahmad (IBM WSC) that detailed a customer situation with problems handling current volumes using MANx data sets, and some WSC testing that emulated it.

http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101130 or http://tinyurl.com/2gnojb

z/OS System Management Facilities (SMF) Recording with MVS Logger, WP101130. Abstract: "The SMF (System Management Facilities) Recording to MVS Logger describes the z/OS 1.9 facility which allows recording of the SMF data to MVS Logger Logstream. The paper documents a case study for a customer environment which has a difficult time keeping up with the SMF data recording to MANx data sets. This new facility provides a solution where SMF data can be recorded to the MVS Logger logstream instead of MANx data sets."

List of white papers: http://www-03.ibm.com/support/techdocs/atsmastr.nsf/Web/WP-ByProduct?OpenDocument&Start=1&Count=1000&Expand=23

Best Regards, Sam Knutson, GEICO
System z Performance and Availability Management
mailto:[EMAIL PROTECTED] (office) 301.986.3574
Think big, act bold, start simple, grow fast...
Re: SMF System Logger - limitations of MANx
On Fri, 28 Mar 2008 10:39:52 -0400, Knutson, Sam [EMAIL PROTECTED] wrote: I don't think anyone has mentioned the white paper from Riaz Ahmad (IBM) WSC that detailed a customer situation with problems dealing with current volumes using MANx data sets. snip

Thanks. The paper says this about the customer situation: "During the peak hours of this interactive workload the customer has experienced long intervals when SMF data was produced at a rate exceeding the capacity to offload. In reality, there are limits placed on the offload process by the customer's SMF data collection design, which involves collecting data offloaded from the SMF MANx datasets into a single daily collection dataset. Each offload is appended to the end of the collection dataset (using DISP=MOD processing) and this effectively limits the offload processes to dump the MANx datasets one at a time, due to the enqueue on the daily collection dataset."

This supports what I have been saying about being able to keep up if you use IEFU29, run dumps to FICON DASD / virtual tape, and run your dump STC in a high enough service class (SYSSTC if required).

Other than a problem situation (a looping transaction, etc.), does anyone know of any shops where data is written so fast that the SMF address space (which writes to MANx) can't keep up, as opposed to the offload process not being able to keep up? Kees mentioned it and I have seen it happen, but only as the result of a problem - not a normal thing. I know the logger can handle much higher logging rates.

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:[EMAIL PROTECTED]
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html
Re: SMF System Logger - limitations of MANx
R.S. [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED]...: Mark Zelden wrote: [...] As mentioned... lots in the archives about this (even before the recent threads). 1) Speed of offloading (being able to keep up with records being written). IMHO I can live with it (YMMV). In case of an (expected/accepted) SMF flood I can use more MANx datasets as a spill.

This is not the solution: I have seen occasions where records are presented to SMF faster than SMF can write them to MANx. This will cause SMF to run out of buffers, with records lost as a consequence. Kees.
Re: SMF System Logger - limitations of MANx
We've had occasional episodes of lost data. One case, which we think we've fixed via changes to automation, is the other side of the event-driven nature of MANx management. A strength of traditional SMF recording is that you dump and clear a MANx cluster in response to a message saying that it needs dumping. That's why you don't have to worry much about whether you've collected ten hours' or ten minutes' worth of data: you dump whatever's there, clear the file, and move on.

A problem crops up when you IPL with all defined SMF data sets in need of dumping. They're not necessarily 'full'; they just can't be opened because, well, who knows why. I've seen MANx data sets at 1% that nonetheless require dumping. Go figure. But if all MANx data sets are full at IPL, data starts getting buffered. In our shop, the messages that should trigger dumping come out before our automation product has hit the ground. Our so-far-untested automation change is to trigger a dump when buffering has reached a certain threshold. Without that additional check, you can lose data when you run out of buffers.

Another case is harder to deal with. We dump straight to tape. Once in a while we have an extended tape outage for hardware or software maintenance. Of course we could code for this situation by writing first to DASD, then to tape. Heck, we could write a whole subsystem. Ad infinitum.

System Logger in principle handles all these cases by its nature. As SMF records flow, Logger captures them and writes them to DASD. A new offload data set is allocated as often as necessary: once a day or once a minute. The full-at-IPL condition simply goes away; Logger never gets full. If tape is unavailable for some period of time, records simply accumulate until a dump job can eventually be run.

The worst fallout of SMF data loss is that all records are treated alike. It's simple FIFO: records you need, records you might like to have, records that may or may not prove crucial down the road. You capture them all until recording is impaired, then you lose everything.

The current complexity of managing Logger offload data is presumably temporary, although no relief has actually been announced. Given the well-known historical problems with MANx data sets, I still think Logger is worth the effort today.

JO.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
626-302-7535 Office
323-715-0595 Mobile
[EMAIL PROTECTED]

Vernooy, C.P. - SPLXM [EMAIL PROTECTED] wrote on 03/26/2008 12:44 AM: This is not the solution: I have seen occasions where records are presented to SMF faster than SMF can write them to MANx. This will cause SMF to run out of buffers, with records lost as a consequence. Kees. snip
Re: SMF System Logger - limitations of MANx
On Wed, 26 Mar 2008 09:38:10 -0700, Skip Robinson [EMAIL PROTECTED] wrote: We've had occasional episodes of lost data. snip

We (and many shops) use the CBIPO SMFDUMP program in combination with IEFU29. This is the one that dumps all the full data sets and also issues a switch command. We use the SMFDUMP program at midnight to cut things off for daily processing, and IEFU29 the rest of the time. I also use the SMFDUMP program at IPL time to take care of the situation described above. Both processes use the same SMFDUMP proc.
Here is an example of what it looks like (some LPARs dump to tape instead of disk):

//SMFDUMP  PROC MAN='X',ALL=FALSE
//*
//* THIS PROC IS NORMALLY STARTED VIA IEFU29 SMF EXIT WHEN AN
//* SMF DATA SET SWITCH OCCURS (ONLY THE FIRST STEP RUNS):
//*   S SMFDUMP,MAN=SMF.DATA.SET.NAME
//*
//* AT IPL TIME IT IS STARTED AS FOLLOWS TO RUN THE SMFDUMP
//* PROGRAM TO ENSURE ALL FULL SYS1.MANX DATA SETS ARE DUMPED:
//*   S SMFDUMP,ALL=TRUE
//*
//* IT IS ALSO STARTED THIS WAY AT MIDNIGHT TO USE THE SMFDUMP
//* PROGRAM WHICH FORCES AN SMF SWITCH IN ORDER TO CAPTURE ALL
//* THE SMF DATA UP TO THAT POINT FOR DAILY PROCESSING.
//*
//TESTEXEC EXEC PGM=IEFBR14
//*
//TESTONE  IF (TESTEXEC.RUN NE ALL) THEN
//DUMPONE  EXEC PGM=IFASMFDP,TIME=1440
//SYSPRINT DD SYSOUT=*
//DUMPIN   DD DSN=&MAN,DISP=SHR
//DUMPOUT  DD DSN=hlq.&SYSNAME..SMF(+1),DISP=(,CATLG),
//            DCB=(SYS1.MODEL,LRECL=X,BLKSIZE=32756,RECFM=VBS),UNIT=SYSDA,
//            SPACE=(CYL,(300,300),RLSE)
//SYSIN    DD DUMMY
//         ENDIF
//*
//TESTALL  IF (TESTEXEC.RUN EQ ALL) THEN
//DUMPALL  EXEC PGM=SMFDUMP,TIME=1440
//SYSPRINT DD SYSOUT=*
//DUMPOUT  DD DSN=hlq.&SYSNAME..SMF(+1),DISP=(,CATLG),
//            DCB=(SYS1.MODEL,LRECL=X,BLKSIZE=32756,RECFM=VBS),UNIT=SYSDA,
//            SPACE=(CYL,(300,300),RLSE)
//SYSIN    DD DUMMY
//         ENDIF

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:[EMAIL PROTECTED]
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html
Re: SMF System Logger - limitations of MANx
On Wed, 26 Mar 2008 09:38:10 -0700, Skip Robinson [EMAIL PROTECTED] wrote: We've had occasional episodes of lost data. snip

Two points for consideration:

1) A possible technique for avoiding SMF data loss (for types such as SMF 101s or 116s -- you can decide an approach and which types are considered non-critical) is to set up an automation rule that fires when the SYSLOG message occurs stating SMF is at 75% of buffers (message IEE986E). The SET SMF command would then enable an alternate SMFPRMxx member deactivating some subset of non-critical, large-volume-contributor SMF record types, such as the 101s. At some point the condition is relieved, and the normal production SMFPRMxx member would be re-enabled in a similar fashion, or after some defined time period.

2) The issue of duplicate SMF data occurring across IFASMFDL (SMF logstream enabled) dumps still does not get addressed without requiring a secondary data-filter pass (and/or a sort with noduplicates). Some would say that this duplicate-data condition is unacceptable, given the extra data handling required.

Scott Barry
SBBWorks, Inc.
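For what it's worth, the buffer-threshold rule in point 1 can be sketched as a mapping from console messages to MVS commands. This is an illustrative sketch only: the SMFPRMxx member names (00 and ZZ) and the 30-minute restore interval are invented, the exact IEE986E message text should be verified against your system, and a real implementation would live inside your automation product rather than in standalone Python:

```python
# Sketch of the automation rule from point 1: when SMF reports heavy buffer
# use (IEE986E), issue SET SMF=xx to enable an alternate SMFPRMxx member
# with non-critical, high-volume record types (e.g. 101s) turned off.
# Member names below are hypothetical placeholders.
NORMAL_MEMBER = "00"    # assumed: full production recording
REDUCED_MEMBER = "ZZ"   # assumed: member excluding e.g. type 101/116 records

def smf_buffer_rule(console_message):
    """Map a console message to the MVS command to issue, or None."""
    if console_message.startswith("IEE986E"):
        # Shed non-critical record types while the buffer shortage lasts.
        return "SET SMF=" + REDUCED_MEMBER
    return None

def restore_after(minutes_elapsed, threshold=30):
    """Per the post, the normal member can simply be re-enabled after some
    defined time period; the 30-minute threshold is an arbitrary example."""
    if minutes_elapsed >= threshold:
        return "SET SMF=" + NORMAL_MEMBER
    return None
```

The same two-function shape (message trigger plus timed restore) maps directly onto most automation products' rule and timer facilities.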
Re: SMF System Logger - limitations of MANx
A comment on duplicate data. I haven't dug into the actual data produced by IFASMFDL, but the doc says (in effect) that some duplicate records are inevitable (my extrapolation) because the utility begins by dumping the entire block that contains the START time and continues through dumping the entire block that contains the END time. Thus some duplicate records are unavoidable, because on average the start- and end-time records are most likely to fall somewhere within a block. However, we process data with MXG, which my SMF SME assures me has no problem with duplicate data.

JO.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
626-302-7535 Office
323-715-0595 Mobile
[EMAIL PROTECTED]

Scott Barry [EMAIL PROTECTED] wrote on 03/26/2008 11:02 AM:

On Wed, 26 Mar 2008 09:38:10 -0700, Skip Robinson [EMAIL PROTECTED] wrote: We've had occasional episodes of lost data. snip

Two points for consideration: 1) a possible technique for avoiding SMF data loss (such as SMF 101s or 116s -- you can decide an approach and types considered non-critical) is to set up an automation rule that fires when the SYSLOG message occurs stating SMF is at 75% of buffers (message IEE986E). The SET SMF command would then enable an alternate SMFPRMxx member deactivating some subset of non-critical, large-volume contributor SMF record type(s), such as SMF 101s. At some point, the condition is relieved and the normal production SMFPRMxx member would be re-enabled in a similar fashion, or after some defined time period. 2) the issue of duplicate SMF data occurring across IFASMFDL (SMF Logstream enabled) dumps (see note #1 below) still does not get addressed, without requiring a secondary data-filter pass (and/or sort with noduplicates). Some would say that this duplicate-data condition is unacceptable, given the extra data handling required. And it doesn't have to be linked to any accounting/chargeback scenario, either.

Scott Barry
SBBWorks, Inc.

Note #1: most concerning is intraday dumping, for example, using the CA MICS Incremental Database Update feature -- although MICS will reject any data already processed.

Links: BUFSIZMAX, BUFUSEWARN, and NOBUFFS - Specifying SMF buffer options (mind any broken URL): http://publib.boulder.ibm.com/infocenter/zos/v1r9/topic/com.ibm.zos.r9.ieag200/buffopt.htm
Re: SMF System Logger - limitations of MANx
On Wed, 26 Mar 2008 13:36:17 -0700, Skip Robinson [EMAIL PROTECTED] wrote: A comment on duplicate data. I haven't dug into the actual data produced by IFASMFDL, but the doc says (in effect) that some duplicate records are inevitable. snip

For MXG data sources such as CICS, DB2, and IMS (large-volume candidates, and out-of-the-box MXG), there is no SORT NODUP operation performed during PDB build processing to remove adjacent duplicate records -- these PDB files are typically copied directly to their final destination, normally tape due to volume. Also, PDB build processing does not provide a check for input data already processed ("this record rejected"). I do realize that SMF record types have header timestamps granular only to 1/100 of a second -- another challenge exacerbated by SMF System Logger deployment because, again, dumping a MANx file represented a finite start and end point, as compared to the SMF System Logger architecture providing a continuous data pipe.

Sincerely,
Scott Barry
SBBWorks, Inc.
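The secondary data-filter pass discussed in this subthread (a "sort with noduplicates") can be sketched as follows. This is an illustrative sketch, not MXG or IFASMFDL behavior: records are modeled as tuples of (record type, timestamp in hundredths of a second, system ID, payload), and the whole tuple serves as the dedup key precisely because, as noted above, the 1/100-second timestamp alone is too coarse to distinguish records:

```python
# Hedged sketch of removing the exact-duplicate records that overlapping
# IFASMFDL dump windows can produce. Real SMF records are binary and would
# be parsed into a comparable key first; the tuples here stand in for that.
def dedup_records(records):
    """Drop exact duplicates, preserving first-seen order."""
    seen = set()
    unique = []
    for rec in records:
        if rec not in seen:
            seen.add(rec)
            unique.append(rec)
    return unique

dumped = [
    (30, 8640000, "SYS1", "job A terminated"),
    (101, 8640005, "SYS1", "DB2 thread X"),
    (101, 8640005, "SYS1", "DB2 thread X"),  # same block in two dump windows
    (101, 8640005, "SYS1", "DB2 thread Y"),  # same timestamp, distinct record
]
# dedup_records(dumped) keeps three records: the repeat is dropped, while
# the two distinct type-101 records with identical timestamps both survive.
```

Filtering on type and timestamp alone would wrongly discard the second distinct type-101 record, which is why the coarse header timestamp makes this harder under a continuous logstream than with discrete MANx dumps.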
Re: SMF System Logger - limitations of MANx
Wayne Driscoll wrote: Look in the archives for the past 2-3 weeks, particularly for posts by Sam Knutson, Skip Robinson, Mark Zelden, Ed Jaffe and Peter Relson. My feeling is that the support will benefit huge shops that have hit a slowdown due to the limitations of the MANx dataset [...]

Just curious: what are the limitations? I haven't seen such a list. I can see some limitations, but I'm not sure about the completeness or correctness of my observations.

Regards
--
Radoslaw Skorupka
Lodz, Poland

--
BRE Bank SA, ul. Senatorska 18, 00-950 Warszawa, www.brebank.pl. Registered at the District Court for the capital city of Warsaw, 12th Commercial Division of the National Court Register, register of entrepreneurs no. KRS 025237, NIP 526-021-50-88. As at 01.01.2008 the share capital of BRE Bank SA amounts to 118,642,672 zloty and has been paid in full.
Re: SMF System Logger - limitations of MANx
On Tue, 25 Mar 2008 16:40:31 +0100, R.S. [EMAIL PROTECTED] wrote: Just curious: what are the limitations? I haven't seen such a list. snip

As mentioned... there is lots in the archives about this (even before the recent threads).

1) Speed of offloading (being able to keep up with records being written).
2) Flexibility.

I still have not had a problem with number 1, but I have heard of shops that do. Although I don't see why any shop that big on a multiple-engine LPAR can't run their SMFDUMPs at SYSSTC. I run mine in a service class with importance 2, and the only time that has had a problem keeping up was with the looping issues I brought up in another thread (we moved them to SYSSTC to keep up while working on those issues). IMP=2 is equal to my production CICS, though. The only thing at IMP=1 is WebSphere enclaves and DDF work from the WebSphere applications.

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:[EMAIL PROTECTED]
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html
Re: SMF System Logger - limitations of MANx
Mark Zelden wrote: [...] As mentioned... lots in the archives about this (even before the recent threads). 1) Speed of offloading (being able to keep up with records being written).

IMHO I can live with it (YMMV). In case of an (expected/accepted) SMF flood, I can use more MANx datasets as a spill.

2) Flexibility.

??? I could complement the list: inconvenience, because every sysplex member has its own set of datasets; and one cannot put the same record in two archives (although one can split the records during dump). However, I have heard it is possible to lose records during MANx switch processing. Maybe I misunderstood something.

--
Radoslaw Skorupka
Lodz, Poland