Re: Questions regarding SMS compacted dataset

2012-04-11 Thread Yifat Oren
Victor,

Are you using DFDSS DUMP or COPY? 

The DUMP function will _not_ decompress the data set and will take a
physical copy of it.
Naturally, as it is already compressed, further compression when copying it
to tape will not be very beneficial.
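For illustration only, a physical data set DUMP to tape might look roughly
like this (a minimal sketch; the data set, DD and volume names are
hypothetical, and specifying the input volume is what makes the data set dump
physical rather than logical):

//DUMP     EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//TAPE     DD DSN=BACKUP.DUMP.FILE,DISP=(NEW,CATLG),UNIT=TAPE
//SYSIN    DD *
  DUMP INDYNAM((VOL001)) -
       DATASET(INCLUDE(MY.COMPRESSED.PS)) -
       OUTDDNAME(TAPE)
/*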


Other utilities, such as IEBGENER, have to decompress the data set as they
do a record-by-record, logical copy (this may not be true for IDCAMS
when using the compression interface; see II14507).


Hope that helps,
Yifat
 

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf
Of Victor Zhang
Sent: Tuesday, April 10, 2012 6:01 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Questions regarding SMS compacted dataset

Chris,
Thanks for the reply.
Using dss is to back up extended files to tape/virtual tape.

Your answer said the data read will be expanded. So even by setting compact
to N, the amount of data written to tape/virtual tape will be the same, right?

My other question is:
If I set compact=N for the storage class, data sets will not be
compressed/compacted.

If I use the same utility to copy such a data set to tape/virtual tape, will
there be any difference in the data stream written to tape?

I have already noticed a difference:
by enabling the compact option in the storage class, I get a very low
compression ratio for the data written to tape/virtual tape. Do you have any
idea why?

Regards
Victor

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email
to lists...@bama.ua.edu with the message: INFO IBM-MAIN



Re: Questions regarding SMS compacted dataset

2012-04-11 Thread Yifat Oren
Victor,

Logical dump of compressed format data sets does not decompress them.

This quote from the DFSMSdss Storage Administration sort-of implies that:
1.  The COMPRESS keyword is ignored if it is specified during a logical
data set dump for either compressed-format sequential data 
sets or compressed-format VSAM data sets.

(The COMPRESS keyword  specifies that DFSMSdss should compress the output
dump data set before writing it to output medium - so, double compression is
being avoided here).


COPY must decompress an extended-format compressed data set when copying it
to a basic-format data set (on tape).

Best Regards,
Yifat
 
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf
Of Victor Zhang
Sent: Wednesday, April 11, 2012 3:52 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Questions regarding SMS compacted dataset

Yifat,
Thank you very much, very helpful.
One more question:
Do both physical dump and logical dump NOT decompress an extended compacted
PS dataset, OR does only physical dump NOT decompress an extended compacted PS
dataset, OR does COPY decompress an extended compacted PS dataset?

Regards
Victor

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email
to lists...@bama.ua.edu with the message: INFO IBM-MAIN



Re: VSAM help wanted for random reads

2012-04-05 Thread Yifat Oren
Hi,

This is the place to mention VSAM System Managed Buffering that could,
possibly, auto-tune the access to BLSR if the open intention was set
correctly by the programmer and enough region was available.

Best Regards,
Yifat 


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf
Of Joel C. Ewing
Sent: Wednesday, April 04, 2012 9:13 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: VSAM help wanted for random reads

On 04/04/2012 10:37 AM, Chip Grantham wrote:
 We have an application like this, that is multiple record types in the 
 same KSDS.  We found that if we had a FD for the type '4' records and 
 a FD for the type '5' records (that is two DDs pointing to the same 
 file), that each kept a separate sequence set in storage and it ran 
 faster.  You might try it.

 Chip Grantham  |  Ameritas  |  Sr. IT Consultant | 
 cgrant...@ameritas.com 5900 O Street, Lincoln NE 68510 | p: 402-467-7382 |
c: 402-429-3579 | f:
 402-325-4030

...
Unless you have something at your installation that automatically tunes VSAM
buffer allocation, some kind of manual tuning in the JCL is almost always
recommended, as the default VSAM buffer allocations tend to be terrible for
performance.  Just specifying a BUFNI INDEX buffer count large enough to
accommodate all index levels, plus additional buffers if the access pattern
has multiple localities of reference, can do wonders for random access
performance, even without going to BLSR.  The default used to guarantee
that random access to any VSAM file with data in more than one CA (and hence
at least two levels of index) would require re-reading CI's for all the
various index levels for each data record access.  Just providing a few
additional index buffers in such cases might be enough to cut the physical
I/O's by a significant factor.
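As a hedged illustration (data set name and counts are hypothetical; BUFNI
should cover at least the number of index levels, plus extras for multiple
localities of reference), the buffers can be requested straight from the JCL:

//KSDS     DD DSN=PROD.CUSTOMER.KSDS,DISP=SHR,
//            AMP=('BUFNI=5,BUFND=10')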

-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email
to lists...@bama.ua.edu with the message: INFO IBM-MAIN



Re: Looking DB2 for z/OS discussion list

2011-12-26 Thread Yifat Oren
DB2-L at http://www.idug.org.

Best Regards,
Yifat 

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf
Of Arye Shemer
Sent: יום ב 26 דצמבר 2011 11:38
To: IBM-MAIN@bama.ua.edu
Subject: Looking DB2 for z/OS discussion list

Hello forummers,

Is there any DB2 discussion list like IBM-MAIN in which I can post question
or ask for help ?

Thanks,

Arye Shemer.

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email
to lists...@bama.ua.edu with the message: INFO IBM-MAIN



Re: IEBCOPY in z/OS 1.13

2011-12-14 Thread Yifat Oren
Thank you for the tip, Lizette.

http://share.confex.com/share/117/webprogram/Handout/Session9940/SHARE%20994
0_IEBCOPY%20New%20Tricks.pdf 


Best Regards,
Yifat Oren

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf
Of Lizette Koehler
Sent: יום ד 14 דצמבר 2011 15:25
To: IBM-MAIN@bama.ua.edu
Subject: Re: IEBCOPY in z/OS 1.13

 
 Has anyone compared the performance of the new IEBCOPY in z/OS 1.13 
 with
the
 older version?
 
 Regards,
 John K

John

At Share Aug 2011 there was a session dedicated to IEBCOPY and it had some
comparison data in there.  You might want to check that out.

Lizette

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email
to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO Search the
archives at http://bama.ua.edu/archives/ibm-main.html



Re: What exactly does the SMBHWT subparameter do?

2011-11-15 Thread Yifat Oren
Hi Peter,

Have you seen the VSAM Demystified definition for SMBHWT? 

SMBHWT: Used to allocate hiperspace buffers based on a multiple of the
number of address space virtual buffers that have been allocated. It can be
an
integer from 0 to 99. The value specified is not a direct multiple of the
number
of virtual buffers that are allocated to the resource pool, but act as a
weighting
factor for the number of hiperspace buffers to be established. The
hiperspace
size buffer will be a multiple of 4K. These buffers may be allocated for the
base data component of the sphere. If the CI size of the data component is
not a multiple of 4K, both virtual space and hiperspace is wasted. The
default
is 0 and means that hiperspace is not used. 

It is not a direct multiple of the buffer count, but a weighting factor ..


Very cryptic.

In any case, did you see much of an improvement when adding hiperspace
buffers? 
Our experience shows that the I/O reduction (and elapsed) usually achieved
when adding the hiperspace buffers does not always justify the CPU increase.
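For reference, a hedged sketch of how ACCBIAS and SMBHWT fit together in the
JCL (data set name and values are hypothetical, not a recommendation):

//KSDS     DD DSN=PROD.BIG.KSDS,DISP=SHR,
//            AMP=('ACCBIAS=DO,SMBHWT=2')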


Best Regards,
Yifat


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf
Of Farley, Peter x23353
Sent: יום ה 03 נובמבר 2011 21:01
To: IBM-MAIN@bama.ua.edu
Subject: What exactly does the SMBHWT subparameter do?

We are at z/OS V1.12 here.  I am investigating how the use of system managed
buffering can help improve performance for a large, extended-format KSDS
with a very random read-only access pattern resulting in over a million read
I/O's in a batch run.  After RTFM, it looks to me like I should definitely
specify ACCBIAS=DO, but whether I should use SMBHWT and if so what value I
should use is eluding me.  The documentation is just not that clear to me.

DFSMS Using Datasets says this about the SMBHWT subparameter:

SMBHWT. This option specifies the range of the decimal value for buffers.
You can specify a whole decimal value from 1-99 for allocating the
Hiperspace buffers. The allocation is based on a multiple of the number of
virtual buffers that have been allocated.

What does the range of the decimal value for buffers mean?  I am confused.


The JCL Reference manual says this about SMBHWT:

SMBHWT=nn
Specify a requirement for hiperspace where nn is an integer from 0 to 99.
Use this parameter with direct optimization. The default value is 0, which
means that the system does not obtain any hiperspace.


Neither of these definitions tells me precisely what a value of (say) 12
will do.  Does the SMBHWT serve as a multiplier, so that (SMBHWT * # of
virtual buffers) is allocated in hiperspace?  Does that mean if the system
allocates 9000 virtual buffers, that SMBHWT=12 will allocate 12 * 9000
hiperspace buffers?

Your help in curing my ignorance in this area is appreciated.

Peter
--


This message and any attachments are intended only for the use of the
addressee and may contain information that is privileged and confidential.
If the reader of the message is not the intended recipient or an authorized
representative of the intended recipient, you are hereby notified that any
dissemination of this communication is strictly prohibited. If you have
received this communication in error, please notify us immediately by e-mail
and delete the message and any attachments from your system.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: z/OS Tag Sort

2011-10-18 Thread Yifat Oren
Hi David,

You wrote: 
 
 .. That exit is limiting the sort to just 24MB of virtual storage when the
optimum amount would be closer to 200MB.

The data size was 360 GB (336m records). 

Can you please share the formula you've used to determine the optimum amount
is around 200MB? 

The DFSORT Tuning Guide (1.12) seems to think 2GB is the upper limit when it
comes to recommending minimum virtual storage settings :)

Thanks,
Yifat Oren

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SMS compactionEXCP's, Fixed Blocked and fixed

2011-08-16 Thread Yifat Oren
Hi Enrique,

I suspect your IEBGENER is actually the Syncsort or Dfsort replacement for
IEBGENER.

When the sort product processes Basic Format data sets, it uses the EXCP
access method to read a large block of data with each I/O performed (hence the
EXCP count); when the data set is Extended Format, the sort product is forced
to use the BSAM access method, where 1 I/O is really 1 block read/written.

Hope that helps,
Yifat

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf
Of MONTERO ROMERO, ENRIQUE ELOI
Sent: יום ג 16 אוגוסט 2011 14:46
To: IBM-MAIN@bama.ua.edu
Subject: SMS compactionEXCP's, Fixed Blocked and fixed

Hi to all,
I have several questions related to SMS compaction and EXCPs.
The tests I made (using IEBGENER to copy datasets):
Source dataset info:
                            Dsorg  Recfm  Lrecl  Blksz  Tracks  %Used  XT
MY.TESTING.SOURCE.DATASET   PS     F       4080   4080   11250    100   1
---
1st Test - copy to an FB dataset (SMS Compressible: YES, EXTENDED, blksize=0):
                            Dsorg  Recfm  Lrecl  Blksz  Tracks  %Used
MY.TEST01.TARGET.DATASET    PS-E   FB      4080  32640   11250     23
           --TIMINGS (MINS.)--         ---PAGING COUNTS---
PRC   EXCP   CPU   SRB  CLOCK   SERV   PG  PAGE  SWAP  VIO
000   6083   .03   .01    .11   166K    0     0     0    0
---
2nd Test - copy to an FB dataset (SMS Compressible: NO, blksize=0):
                            Dsorg  Recfm  Lrecl  Blksz  Tracks  %Used
MY.TEST02.TARGET.DATASET    PS     FB      4080  24480   11250    100
           --TIMINGS (MINS.)--         ---PAGING COUNTS---
PRC   EXCP   CPU   SRB  CLOCK   SERV   PG  PAGE  SWAP  VIO
100   1355   .00   .00    .21  24347    0     0     0    0
---
3rd Test - copy to a dataset like the source (SMS Compressible: NO, RECFM=F):
                            Dsorg  Recfm  Lrecl  Blksz  Tracks  %Used
MY.TEST03.TARGET.DATASET    PS     F       4080   4080   11250    100
           --TIMINGS (MINS.)--         ---PAGING COUNTS---
PRC   EXCP   CPU   SRB  CLOCK   SERV   PG  PAGE  SWAP  VIO
000   1369   .00   .00    .36  22475    0     0     0    0
---
The questions:
- Why, when copying to a compressible dataset, are the EXCPs increased?
- Why, when copying from F to FB and from F to F, are the EXCPs almost the same?
Best regards,
Enrique Montero

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email
to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO Search the
archives at http://bama.ua.edu/archives/ibm-main.html



Re: Anyone know how to HSM report of recalled datasets

2011-08-11 Thread Yifat Oren
Or HSM FSR SMF records ..  

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf
Of Schwarz, Barry A
Sent: יום ה 11 אוגוסט 2011 14:01
To: IBM-MAIN@bama.ua.edu
Subject: Re: Anyone know how to HSM report of recalled datasets

It doesn't look like any of the commands will do what you want.  You may
have to examine the HSM log datasets or the job log.

 -Original Message-
 From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On 
 Behalf Of Shameem .K .Shoukath
 Sent: Thursday, August 11, 2011 3:36 AM
 To: IBM-MAIN@bama.ua.edu
 Subject: Anyone know how to HSM report of recalled datasets

 hi,
  I want to report all datasets, with DSNAME, recalled between a fromdate
 and a todate.  I used the command HSEND REPORT DAILY FUNCTION(RECALL)
 OUTDATASET(STR016.HSM.RECALL) to get the number of datasets recalled,

 but it just shows the count, not the actual names of the dsns

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email
to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO Search the
archives at http://bama.ua.edu/archives/ibm-main.html



Re: CA-FAVER and LBI

2011-08-09 Thread Yifat Oren
Thanks very much for the answers everyone.


LBI is being very slowly adopted, isn't it? Even by IBM itself ..

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


CA-FAVER and LBI

2011-08-08 Thread Yifat Oren
Hello everyone,
 
 
Does CA-FAVER support LBI (large block) when writing to tape? 
 
 
Thanks in advance,
Yifat Oren

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SDB (optimal?) BLKSIZE on tape device

2011-07-21 Thread Yifat Oren
Hi Radoslaw,

In Using Data Sets it says that the optimum BLKSIZE for 3590 is 262,144
(256 KB), except on some older models on which it is 229,376 (224 KB) ..

Your device seems to fall into the 2nd category for some (wrong?) reason..

Yifat.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of 
R.S.
Sent: יום ד 20 יולי 2011 17:03
To: IBM-MAIN@bama.ua.edu
Subject: SDB (optimal?) BLKSIZE on tape device

(SDB - System Determined Blocksize)

I just coded LRECL=80,BLKSIZE=0,BLKSZLIM=2G on MDL tape device in 3590 mode.
I noticed something strange for me: SDB was 229360B, but it's NOT the largest 
block available! The device also accepted BLKSIZE=262080 (the largest multiple 
of 80 less than 256kiB).
I thought that the largest blocksize is the optimal one, and it's chosen by 
the system.
BTW: I tested both blocksizes. The larger one (262080) tends to be faster.

Any clue?

--
Radoslaw Skorupka
Lodz, Poland


--

This e-mail may contain legally privileged information of the Bank and is 
intended solely for business use of the addressee. This e-mail may only be 
received by the addressee and may not be disclosed to any third parties. If you 
are not the intended addressee of this e-mail or the employee authorised to 
forward it to the addressee, be advised that any dissemination, copying, 
distribution or any other similar activity is legally prohibited and may be 
punishable. If you received this e-mail by mistake please advise the sender 
immediately by using the reply facility in your e-mail software and delete 
permanently this e-mail including any copies of it either printed or saved to 
hard drive. 

BRE Bank SA, 00-950 Warszawa, ul. Senatorska 18, tel. +48 (22) 829 00 00, fax 
+48 (22) 829 00 33, e-mail: i...@brebank.pl
Sąd Rejonowy dla m. st. Warszawy XII Wydział Gospodarczy Krajowego Rejestru 
Sądowego, nr rejestru przedsiębiorców KRS 025237, NIP: 526-021-50-88. 
Według stanu na dzień 01.01.2011 r. kapitał zakładowy BRE Banku SA (w całości 
wpłacony) wynosi 168.346.696 złotych.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: Is DSNTYPE=EXT parameter only used for VSAM

2011-04-20 Thread Yifat Oren
Hi,

 1. SMS requirement. 2. Some low-level programs do not accept EXT-PS. 
 Example: DFSMSdss dump dataset cannot be EXT-PS.

With DFSMS 1.12 this restriction was removed; from the DFSMSdss Storage
Administration:

With the DUMP command, you can dump DASD data to a basic sequential data
set, a large format sequential data set or an extended format sequential
data set.

But there are other data sets that must be defined as basic format. Sort
work data sets, to name one.

Also, when considering DFSMS compression, one should take into account that
it comes with a CPU price tag, so I would not compress everything
indiscriminately (unless CPU is cheaper than DASD for me).  
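For example, steering a data set such as the DUMP output to extended format
is just a matter of an SMS data class that requests it (the data class name
below is hypothetical):

//OUT      DD DSN=BACKUP.DUMP.EXT,DISP=(NEW,CATLG),
//            DATACLAS=DCEXT,UNIT=SYSDA,SPACE=(CYL,(500,100),RLSE)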

Best Regards,
Yifat

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


FDRCOPY behavior when copying SMS compressed data sets

2011-02-07 Thread Yifat Oren
Hi,
 
When DFDSS makes a logical copy of an SMS-compressed data set (either
sequential or VSAM) - it tries to copy it as-is (track image) without
decompressing and re-compressing the data.
 
I wonder, does FDRCOPY behave in a similar manner?
 
Thanks in advance,
Yifat

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Hardware-assisted compression: not CPU-efficient?

2010-12-08 Thread Yifat Oren
Pardon my bringing back an old thread, but -

I wanted to see how much better the COMPRESS option is than HWCOMPRESS in
regard to CPU time, and was pretty surprised when my results suggested
that HWCOMPRESS is persistently more efficient (both CPU- and
channel-utilization-wise) than COMPRESS:

DFDSS DUMP with OPT(4) of a VSAM basic format to disk (basic format):

STEPNAME PROCSTEP   RC   EXCP   CONN   TCB   SRB  CLOCK
DUMP-HWCOMPRESS     00  14514  93575   .25   .07    2.3  output was 958 cyls.
DUMP-COMPRESS       00  14819  92326   .53   .07    2.5  output was 978 cyls.
DUMP-NOCOMP         00  15283   103K   .13   .08    2.4  output was 1,017 cyls.


DFDSS DUMP with OPT(4) of a PS basic format to disk (basic format):

STEPNAME PROCSTEP   RC   EXCP   CONN   TCB   SRB  CLOCK
DUMP-HWCOMPRESS     00  13317   154K   .44   .19    6.2  output was 877 cyls.
DUMP-COMPRESS       00  14692   157K   .68   .19    5.1  output was 969 cyls.
DUMP-NOCOMP         00  35827   238K   .14   .21    7.9  output was 2,363 cyls.


Running on a 2098-I04. DFSMSDSS V1R09.0. 
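For context, such a step can be sketched as follows (names are hypothetical;
only the COMPRESS / HWCOMPRESS keyword, or its absence, would vary between
runs):

//DUMPHW   EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//OUT      DD DSN=BACKUP.DUMP.HW,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(1000,100),RLSE)
//SYSIN    DD *
  DUMP DATASET(INCLUDE(MY.TEST.DATA)) -
       OUTDDNAME(OUT) OPTIMIZE(4) HWCOMPRESS
/*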


So, how come I get different results than the original poster?  
The test data was database-type data sets..

Best Regards,
Yifat

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf
Of Andrew N Wilt
Sent: Friday, December 03, 2010 1:45 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Hardware-assisted compression: not CPU-efficient?

Ron,
Thank you for the good response. It is true that the DFSMSdss
COMPRESS keyword and HWCOMPRESS keyword do not perform the same types of
compression. Like Ron said, the COMPRESS keyword is using a Huffman encoding
technique, and works amazing for repeated bytes (just the types of things
you see on system volumes). The HWCOMPRESS keyword utilizes a dictionary
based method, and works well, supposedly, on customer type data.
The CPU utilization of the HWCOMPRESS (dictionary based) is indeed larger
due to what it is doing. So you should choose the type of compression that
suits your CPU utilization needs and data type.
It was mentioned elsewhere in this thread about using the Tape
Hardware compaction. If you have it available, that's what I would go for.
The main intent of the HWCOMPRESS keyword was to provide the dictionary
based compression for the cases where you were using the software
encryption, and thus couldn't utilize the compaction of the tape device.

Thanks,

 Andrew Wilt
 IBM DFSMSdss Architecture/Development
 Tucson, Arizona

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Yifat Oren
Hi Johnny, 

I was under the impression that for DFDSS DUMP, COMPRESS and HWCOMPRESS are
synonymous;

Are you saying they are not?


If you are writing to tape, why not use the drive compaction (DCB=TRTCH=COMP)
instead?
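For example (hypothetical names), the compaction request is just a DCB
subparameter on the tape output DD:

//TAPEOUT  DD DSN=BACKUP.DUMP.FILE,DISP=(NEW,CATLG),
//            UNIT=TAPE,DCB=TRTCH=COMP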

Best Regards,
Yifat

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf
Of Johnny Luo
Sent: יום ה 02 דצמבר 2010 12:13
To: IBM-MAIN@bama.ua.edu
Subject: Hardware-assisted compression: not CPU-efficient?

Hi,

DSS DUMP supports COMPRESS/HWCOMPRESS keyword and I found out in my test
that HWCOMPRESS costs more CPU than COMPRESS.

Is it normal?

Currently we're dumping huge production data to tape and in order to
alleviate the tape channel utilization we need to compress the data before
writing to tape.  It works well but the cpu usage is a problem cause we have
many such backup jobs running simultaneously.

If hardware-assisted compression cannot reduce the cpu overhead,  I will
consider using resource group to cap those jobs.

Best Regards,
Johnny Luo

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email
to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO Search the
archives at http://bama.ua.edu/archives/ibm-main.html



Re: SMF data for DFSORT

2010-12-01 Thread Yifat Oren
Hi Michael,

When DFSORT is invoked from another program (like DSNUTILB), SMF 30-4 (step)
will contain the total CPU time for the entire step and SMF 16 (one or more)
will contain CPU time accumulated while DFSORT was running.

HTH,
Yifat

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf
Of Michael Hall
Sent: יום ד 01 דצמבר 2010 16:52
To: IBM-MAIN@bama.ua.edu
Subject: SMF data for DFSORT

Is there additional information about CPU time for DFSORT in the SMF Type 16
record that is not in the Type 30 step record. In other words, are there any
circumstances where CPU time data is written to the Type 16 records and not
to the Type 30 records? Do you see step information for DFSORT CPU time in
Type 30 records when DFSORT is indirectly invoked from another program? 

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email
to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO Search the
archives at http://bama.ua.edu/archives/ibm-main.html



Re: DFDSS VSAM logical restore?

2010-10-21 Thread Yifat Oren
John,

I would have used SORT-COPY (but if it's DFSORT make sure the VSAM BUFND
setting is optimal). It should be faster than IDCAMS (because of the better
TAPE I/O).
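A hedged sketch of such a copy-to-tape step (names are hypothetical, DCB
attributes for SORTOUT are omitted for brevity, and the BUFND value is
arbitrary - ideally it should cover a CA's worth of CIs):

//COPY     EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=PROD.KSDS.CLUSTER,DISP=SHR,AMP=('BUFND=30')
//SORTOUT  DD DSN=BACKUP.KSDS.SEQ,DISP=(NEW,CATLG),UNIT=TAPE
//SYSIN    DD *
  SORT FIELDS=COPY
/*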

Are you sure you need to reorg at all? 

I'm sure you are familiar with the argument that the cost of CA splits is paid
mainly at split time, etc., and that a re-org can potentially cause more CA
splits if it reverses needed splits .. 

How did CA-FAVER solve this problem? In-place reorg?

Best Regards,
Yifat

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf
Of John McKown
Sent: יום ה 21 אוקטובר 2010 12:49
To: IBM-MAIN@bama.ua.edu
Subject: Re: DFDSS VSAM logical restore?

On Thu, 2010-10-21 at 10:49 +0100, Mike Kerford-Byrnes wrote:
 If you are only looking at speeding up the re-org process (which 
 implies KSDS only) would SORT be viable?  After all, it is designed to 
 read and write data as fast as it can - and there would be no need to 
 actually SORT anything...
 
  
 
 Just a thought
 
  
 
 MKB

Unfortunately, I need a reorg which does a dump / restore because for some
files, we don't have enough DASD for two simultaneous copies to exist.


--
John McKown
Maranatha! 

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email
to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO Search the
archives at http://bama.ua.edu/archives/ibm-main.html



Re: DFSORT chooses not to use Hipersorting for large SORTIN

2010-10-10 Thread Yifat Oren
For the record:

EXPMAX in effect was NOT set to MAX but much lower, which prevented DFSORT
from using any Hipersort option.

Many thanks to David Betten for his excellent assistance.

Best Regards,
Yifat

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


DFSORT chooses not to use Hipersorting for large SORTIN

2010-10-06 Thread Yifat Oren
Hello All,
 
I have a sort of about 4GB, 15M records of variable length.
 
DFSORT chooses not to use dataspace/hiperspace or memory objects for this
sort, for no obvious reason;

Parameters are all set correctly (EXPMAX, RES, OLD, HIPRMAX and so on).
The system is not paging at all.

The only thing not tuned for this SORT (as far as I can see) is that it does
not know the AVGRLEN in advance, and so wrongly calculates the
number of expected records in the SORTIN (it expects 350k instead of 15m,
based on the SORTIN LRECL).
 
So, DFSORT volunteered not to use Hipersorting.
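(For what it's worth, one workaround would have been to tell DFSORT the
numbers up front; the values below are only placeholders for this sketch:)

//DFSPARM  DD *
  OPTION AVGRLEN=280,FILSZ=E15000000
/*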
 
Looking for answers I found this excerpt from DFSORT Tuning Guide on
Hipersorting:
 
When Hipersorting cannot be used, DFSORT uses disk work data sets to store
its intermediate data, which is referred to as disk-only 
mode. Note that Hiperspace-only mode usually provides the best performance
when compared to Hiperspace-mixed and disk-only 
modes. However, this is not always true for Hiperspace-mixed mode when
compared to disk-only mode. Due to the additional 
Hiperspace overhead, the use of disk-only rather than Hiperspace-mixed mode
can at times be more advantageous in terms of 
performance, and therefore DFSORT may choose not to use Hipersorting.

 
So, what are those times; what made DFSORT use disk-only mode? The file
size (4GB)? The variable-length records? The wrong record count?

Any ideas would be appreciated,
Yifat

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DFSMS and System Managed Buffering

2010-09-15 Thread Yifat Oren
Tobias,

The reason you are not seeing the expected savings is that the IDCAMS REPRO
has already set and used the optimal number of data buffers regardless of
the DATACLAS change (so, no change has actually taken place; optimal
buffering was used for both before and after runs). 

You should see the savings when programs that are not taking care of their
VSAM buffering start using these data sets.

I too think that 1MB is a bit constrictive for direct access LSR (bias=do)
buffering.

Best Regards,
Yifat
 

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf
Of Tobias Cafiero
Sent: יום ג 14 ספטמבר 2010 18:44
To: IBM-MAIN@bama.ua.edu
Subject: DFSMS and System Managed Buffering

Hello, 
I'm testing the SMB option in the DFSMS DATACLS Constructs, but don't
yet see a performance boost as promised. At this point I'm using just a
del/define,repro and pointing to the SMB DATACLS. The DATACLS is extended
and contains the following values:

Record Access Bias  . . . . : SYSTEM
System Managed Buffer  . . . : 1M 

The STORCLS is our standard non-striped type. Has anyone implemented SMB on
their system.

Thanks in Advance
Tobias Cafiero  

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email
to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO Search the
archives at http://bama.ua.edu/archives/ibm-main.html



Re: DFSMS and System Managed Buffering

2010-09-15 Thread Yifat Oren
Ron,

The defaults have changed;

This is not the best pointer, but it's all I could find with a quick search
http://www-01.ibm.com/support/docview.wss?uid=isg1OA01898: With OW51451,
REPRO uses AMDCIPCA (CI per CA) value for BUFND when SHROPT is not 4.

Such buffering (BUFND=CI/CA) will make a difference; especially an elapsed
time difference (but also CPU).

Best Regards,
Yifat


-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf
Of Ron Hawkins
Sent: יום ד 15 ספטמבר 2010 17:28
To: IBM-MAIN@bama.ua.edu
Subject: Re: DFSMS and System Managed Buffering

Yifat,

Without any other specification, such as BUFSP at define or BFND on the
REPRO, IDCAMS without SMB used to default to BUFND of two, and only read one
CI at a time.

Are you saying the default changed, or changed for Extended Format?

I suggested it would not make huge difference for REPRO because the default
CISZ for VSAM is 18K or 26K, and a large BUFND would not make a large
difference in elapsed time on a small to medium size dataset.

Ron

 -Original Message-
 From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of
 Yifat Oren
 Sent: Wednesday, September 15, 2010 4:05 AM
 To: IBM-MAIN@bama.ua.edu
 Subject: Re: [IBM-MAIN] DFSMS and System Managed Buffering
 
 Tobias,
 
 The reason you are not seeing the expected savings is that the IDCAMS
REPRO
 has already set and used the optimal number of data buffers regardless 
 of the DATACLAS change (so, no change has actually taken place; 
 optimal buffering was used for both before and after runs).
 
 You should see the savings when programs that are not taking care of 
 their VSAM buffering start using these data sets.
 
 I too think that 1MB is a bit constrictive for direct access LSR 
 (bias=do) buffering.
 
 Best Regards,
 Yifat
 
 

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email
to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO Search the
archives at http://bama.ua.edu/archives/ibm-main.html



Re: TRSMAIN RC=8

2010-08-31 Thread Yifat Oren
Hi Fran,

DDNAME: INFILE DSNAME: NOSMS.FILE.PDS 

The name of the input data set suggests a PDS; is it?
TERSE can only handle sequential (PS) data sets (if I remember correctly).

Using the newer AMATERSE may (or may not :)  ) produce clearer error
messages.
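If it helps, a minimal AMATERSE PACK step looks roughly like this (data set
names are hypothetical):

//PACK     EXEC PGM=AMATERSE,PARM=PACK
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=MY.INPUT.DATA,DISP=SHR
//SYSUT2   DD DSN=MY.OUTPUT.TERSED,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(50,50),RLSE)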

Best Regards,
Yifat

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf
Of Fran Hernandez
Sent: יום ג 31 אוגוסט 2010 13:19
To: IBM-MAIN@bama.ua.edu
Subject: Re: TRSMAIN RC=8

Hello,

Thank you very much to those who have answered my problem.
Further information in this regard:
1) The TRSMAIN and FTP were performed a few months ago by a person at another
data center.  I have attached the TRSMAIN job, but I do not have the FTP that
was executed.
2) I am attaching the first 12 bytes of the file, from one of the files that
were loaded with IND$FILE. My feeling is that something was done
incorrectly, but I have no means to detect it.

I would appreciate your help and experience of it.

Thank you very much.

Fran

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email
to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO Search the
archives at http://bama.ua.edu/archives/ibm-main.html



Re: VSAM Max Lrecl?

2010-07-07 Thread Yifat Oren
Ron, 

How does SMS striping measure up in regards to synchronous remote copy? 

Locally, the same 40-50% I/O elapsed time savings can be gained by SMS
striping the data sets (into 2 or more stripes).
True, there is some CPU overhead for striping, but it is not comparable to the
compression overhead.

Thanks,
Yifat

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf
Of Ron Hawkins
Sent: יום ד 07 יולי 2010 08:50
To: IBM-MAIN@bama.ua.edu
Subject: Re: VSAM Max Lrecl?

Ted,


 The performance gain made sense then, and it makes sense now.
 
 Does it with sub-5ms response?
[Ron Hawkins] 

Yes. I usually figure out the saving with 0.35 to 1.5ms response time in
SIMPLEX and 0.75 to 3ms response time in DUPLEX with Synchronous remote
copy. Anything else is usually (not always) a queue. If I use 5ms response
time as you suggest then the benefit is even larger.



 After all, I'm sure you are one of the supporters of the maxim the 
 best
IO
 is the one you don't do.
 
 Yes. But.
 
 That's something compression can do for you.
 
 It's too expensive in a write intensive environment.
[Ron Hawkins]
I'm missing your point, as I didn't mention a write intensive environment.
My first examples is father to son updates which is 50% write at worse, and
the second example is read intensive.  As the second example is not write
intensive, we agree on the benefit and there is no need to debate the second
example further.

I apologize if it wasn't clear that I did not give examples that apply to
any environment be it write or read intensive. This is about compressing
specific datasets as an IO reduction strategy. The first example refers to
compressing a loved one: those datasets that define the critical path of an
application's elapsed time. I state this explicitly in the example. If the
application is unimportant, or there is no measurable or tangible benefit in
reducing the elapsed time then it is outside the scope of my example.

Ron

 
 -
 I'm a SuperHero with neither powers, nor motivation!
 Kimota!

[Ron Hawkins] I always wondered why atomic ended with a K...

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email
to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO Search the
archives at http://bama.ua.edu/archives/ibm-main.html



Re: AMATERSE: AMA574I. Wrong message?

2010-03-25 Thread Yifat Oren
Mark, Tony, Dana, Brian, 

Thanks very much for your responses, they were of great help. 

I am now pretty much convinced that EBCDIC to ASCII conversion during the
FTP transfer had caused the problem. 
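For anyone hitting the same thing: a tersed file has to travel as an
unmodified byte stream, so the FTP transfer must be binary - something along
these lines (data set and file names are hypothetical):

binary
get 'HLQ.SMF.TERSED' smf.trs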

Still, I am not sure the AMATERSE error should have been AMA574I; I'd prefer
AN EXPECTED END OF RECORD WAS NOT FOUND, INPUT DATA INVALID, but that's
just me :)

Best Regards,
Yifat Oren.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


AMATERSE: AMA574I. Wrong message?

2010-03-24 Thread Yifat Oren
Hi,
 
I have received some TERSED SMF records and am unable to UNPACK them.  
 
Without specifying DCB attributes for the output, the error message I get
is:
 
AMA574I  RECORD FOUND IS LONGER THAN THE LRECL
AMA555I  THE VALUES ARE:  BLKSIZE= 8760LRECL=8756PACKTYPE=PACK 
 
From System Messages:
Explanation: 
For the UNPACK operation, the length of the record restored is longer
than the record length of the output data set. 
 
Trying to overcome the error and assuming the LRECL=8756 is erroneous, I
have specified LRECL=32767 for the output data set (SYSUT2), but still get
the same error:
 
AMA544I  OUTPUT LRECL IS:  32756 ORIGINAL LRECL IS: 8756  
AMA574I  RECORD FOUND IS LONGER THAN THE LRECL
AMA555I  THE VALUES ARE:  BLKSIZE= 32760   LRECL=32756   PACKTYPE=PACK
 
I think AMATERSE is giving me the wrong error message, as it is not possible
that it actually found a record longer than 32,756 (RECFM is VB, not
spanned).  Or am I missing something? 
Has anybody experienced such error when trying to UNPACK SMF data? 
 
Thanks,
Yifat Oren

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: why compression costs additional I/O?

2010-01-28 Thread Yifat Oren
Ron,

Just to be sure someone mentions this;

Compressed Format sequential data sets are a special case of PS-E's.

From 'Macro Instructions for Data Sets':

Recommendation: For compressed format data sets, do not specify NCP (thus,
allowing the system to default it to 1) or specify NCP=1.  This 
 is the optimal value for NCP for a compressed format data set since the
system handles all buffering internally for these data sets. 

Best Regards,
Yifat

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf
Of Ron Hawkins
Sent: Wednesday, January 27, 2010 10:28 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: why compression costs additional I/O?

Peter,

Yes for your example I am recommending NCP=96, which means BUFNO=96. I
habitually put both NCP and BUFNO on BSAM files because I've never been sure
if BSAM calculates BUFNO using the NCP value from JCL.

Many years ago I tested this to death on uncached DASD and found that
BUFNO/NCP of 16 was the point of diminishing return for QSAM and BSAM. While
I don't think these double buffer by design like EFS I think it fit well
with the chain length limit of eight blocks with BSAM and QSAM. 

I should revisit this as a study on FICON and Cached DASD as it is likely
that the knee in the curve happens at eight buffers now as I've noticed CPU
intensive utilities like IEBDG writing short chains when volumes are
SIMPLEX, and full chains when TrueCopy synchronous delays are added with
DUPLEX. It suggests to me that 16 is still a good number for when IO is
delayed. Thirty-one would be something I would recommend for BUFND on a VSAM
file with half track CISZ, but I don't think it does any harm on DSORG=PS.

As far as I recall BSAM and QSAM for PS-E does not have the same SSCH data
length and #CCW restrictions as PS, and media manager is probably limited to
a CYL. I'd only wish I had time to research this as a science project
right now, but at the moment I can only offer past experience with a
spattering of senior moments.
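In JCL terms, the kind of override being discussed is simply (hypothetical
data set name, counts as per the discussion above):

//INDD     DD DSN=MY.BIG.PS,DISP=SHR,
//            DCB=(NCP=16,BUFNO=16)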

Ron

 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: why compression costs additional I/O?

2010-01-27 Thread Yifat Oren
Hi Pawel,

The reason is that the sort product cannot use the EXCP access method with the
compressed data set and instead chooses BSAM as the access method.
The EXCP access method usually reads or writes on a cylinder (or more)
boundary while BSAM, as its name suggests, reads or writes block by block.

Hope that helps,
Yifat Oren. 

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf
Of Pawel Leszczynski
Sent: Wednesday, January 27, 2010 12:56 PM
To: IBM-MAIN@bama.ua.edu
Subject: why compression costs additional I/O?

Hello everybody,
Recently we are reviewing our EndOfDay jobs looking for potential
performance improvements (reducing CPU/elapsed time).
We have several jobs sorting big datasets where output is SMS-compressible
(type: EXTENDED) datasets. 
When we compare such sorting with sorting on non-compressible output we can
see this:
                           EXCP   TCB   SRB  el.time
TESTXWP5   STEP110  00      757K   3.5   1.70    9.01  -- w/o compression
TESTXWP5   STEP120  00     1462K   3.62  2.89   10.45  -- w. compression

We guess that the big SRB in (2) goes to compression (that we understand - we
will probably quit compression altogether), but we don't understand the
2-times-bigger EXCP count in the second case.

Any ideas will be appreciated,
Regards,
Pawel Leszczynski
PKO BP SA

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email
to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO Search the
archives at http://bama.ua.edu/archives/ibm-main.html



Re: SMF record 74 subtype 1

2008-07-07 Thread Yifat Oren
From MVS System Management Facility:

13.80 Record Type 74 (4A) -- RMF Activity of Several Resources

Subtype 1 -- Device Activity
 The record is written for all devices specified in the DEVICE
option for a Monitor I session.

HTH,
Yifat.

-Original Message-
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf
Of Natasa Savinc
Sent: Monday, July 07, 2008 4:08 PM
To: IBM-MAIN@BAMA.UA.EDU
Subject: SMF record 74 subtype 1

Hello,

I am having problems with collecting SMF record 74 subtype 1 . Everything
seems to be set up correctly, I am collecting SMF record 74 and subtypes
3,4,5 and so on, but no subtype 1. Is there something I have to do to start
collecting this record? This is my SMFPRM member:

ACTIVE  
DSNAME(SYS1.&SYSNAME..MAN1, 
   SYS1.&SYSNAME..MAN2, 
   SYS1.&SYSNAME..MAN3, 
   SYS1.&SYSNAME..MAN4, 
   SYS1.&SYSNAME..MAN5, 
   SYS1.&SYSNAME..MAN6) 
NOPROMPT
REC(PERM)   
MAXDORM(3000)   
STATUS(01)  
JWT(0030)   
SID(&SYSNAME.)  
INTVAL(30)  
LISTDSN 
SYS(NOTYPE(4,5,16:19,34,35,40,62,63,65:69,92,99,100:102,200), 
  EXITS(IEFU83,IEFU84,IEFACTRT,   
  IEFUSI,IEFUJI,IEFU29),INTERVAL(003000),NODETAIL)
SUBSYS(STC,EXITS(IEFUJI,IEFU29,IEFU83,IEFU84,IEFUJP,IEFUSO,   
  IEFACTRT))  
SUBSYS(JES2,EXITS(IEFUJI,IEFACTRT,IEFU83))


Regards,
Natasa

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email
to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the
archives at http://bama.ua.edu/archives/ibm-main.html




Re: CA7 in batch

2008-03-11 Thread Yifat Oren
Gerry, 

Try a /LOGOFF statement as the last SYSIN statement.

Yifat.

-Original Message-
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf
Of Gerry Anstey
Sent: Tuesday, March 11, 2008 4:42 PM
To: IBM-MAIN@BAMA.UA.EDU
Subject: CA7 in batch

Listers

 I am working on a utility that needs to access CA7 from Rexx. Initially, I
set up some JCL to test whether I can get a command processed outside CA7
itself. The JCL is below.

I've got it sort of running but it just sits there and waits and eventually
I get an S522

12.37.25 JOB34099  +CA-7.INCD - COMMDS ON HAZ101 SHR
12.38.20 JOB34099 @71 CA-7.252  BATCH TERMINAL #2 IN USE.   REPLY
WAIT,CANCEL, OR RESET
12.40.17 JOB34099  IEA995I SYMPTOM DUMP OUTPUT  790
   790 SYSTEM COMPLETION CODE=522

any ideas how I get this to work?

Here is my JCL

//CA7BTI   EXEC PGM=SASSBSTR,REGION=4M,PARM=2
//UCC7CMDS DD DSN=ZGS1488.SYSDSNS.UCC7.COMMDS,DISP=SHR
//BATCHIN  DD DUMMY
//BATCHOUT DD DSN=ZGS1488.BTI2OUT,DISP=SHR //SYSPRINT DD SYSOUT=*
//SYSIN   DD *
LJOB,LIST=NODD,JOB=PRAB0001
/*
//CA7RESET EXEC PGM=SASSBEND,REGION=4M,COND=ONLY,PARM=2
//UCC7CMDS DD DSN=ZGS1488.SYSDSNS.UCC7.COMMDS,DISP=SHR

thanks
Gerry

Generally, this communication is for informational purposes only and it is
not intended as an offer or solicitation for the purchase or sale of any
financial instrument or as an official confirmation of any transaction. In
the event you are receiving the offering materials attached below related to
your interest in hedge funds or private equity, this communication may be
intended as an offer or solicitation for the purchase or sale of such
fund(s).  All market prices, data and other information are not warranted as
to completeness or accuracy and are subject to change without notice.
Any comments or statements made herein do not necessarily reflect those of
JPMorgan Chase  Co., its subsidiaries and affiliates.

This transmission may contain information that is privileged, confidential,
legally privileged, and/or exempt from disclosure under applicable law. If
you are not the intended recipient, you are hereby notified that any
disclosure, copying, distribution, or use of the information contained
herein (including any reliance
thereon) is STRICTLY PROHIBITED. Although this transmission and any
attachments are believed to be free of any virus or other defect that might
affect any computer system into which it is received and opened, it is the
responsibility of the recipient to ensure that it is virus free and no
responsibility is accepted by JPMorgan Chase  Co., its subsidiaries and
affiliates, as applicable, for any loss or damage arising in any way from
its use. If you received this transmission in error, please immediately
contact the sender and destroy the material in its entirety, whether in
electronic or hard copy format. Thank you.
Please refer to http://www.jpmorgan.com/pages/disclosures for disclosures
relating to UK legal entities.

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email
to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the
archives at http://bama.ua.edu/archives/ibm-main.html



Re: CA7 in batch

2008-03-11 Thread Yifat Oren
Gerry,

I'll give it another go  :)

Try using a different batch terminal id (currently PARM=2), it seems like it
is already in use.

Yifat.

-Original Message-
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf
Of Gerry Anstey
Sent: Tuesday, March 11, 2008 6:58 PM
To: IBM-MAIN@BAMA.UA.EDU
Subject: Re: CA7 in batch

Thanks, it generates that itself:

/LOGON    * GENERATED LOGON *
LJOB,LIST=NODD,JOB=PRAB0001
/LOGOFF   * GENERATED LOGOFF *




   
 Yifat Oren
 [EMAIL PROTECTED] 
 OMTo 
 Sent by: IBM  IBM-MAIN@BAMA.UA.EDU
 Mainframe  cc 
 Discussion List   
 [EMAIL PROTECTED] Subject 
 .EDU Re: CA7 in batch
   
   
 11/03/2008 16:35  
   
   
 Please respond to 
   IBM Mainframe   
  Discussion List  
 [EMAIL PROTECTED] 
   .EDU   
   
   




Gerry,

Try a /LOGOFF statement as the last SYSIN statement.

Yifat.

-Original Message-
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf
Of Gerry Anstey
Sent: Tuesday, March 11, 2008 4:42 PM
To: IBM-MAIN@BAMA.UA.EDU
Subject: CA7 in batch

Listers

 I am working on a utility that needs to access ca7 from Rexx, initially I
set up some JCL to test if I can get a command processed outside CA7 itself.
JCL is below

I've got it sort of running but it just sits there and waits and eventually
I get an S522

12.37.25 JOB34099  +CA-7.INCD - COMMDS ON HAZ101 SHR
12.38.20 JOB34099 @71 CA-7.252  BATCH TERMINAL #2 IN USE.   REPLY
WAIT,CANCEL, OR RESET
12.40.17 JOB34099  IEA995I SYMPTOM DUMP OUTPUT  790
   790 SYSTEM COMPLETION CODE=522

any ideas how I get this to work?

Here is my JCL

//CA7BTI   EXEC PGM=SASSBSTR,REGION=4M,PARM=2
//UCC7CMDS DD DSN=ZGS1488.SYSDSNS.UCC7.COMMDS,DISP=SHR
//BATCHIN  DD DUMMY
//BATCHOUT DD DSN=ZGS1488.BTI2OUT,DISP=SHR //SYSPRINT DD SYSOUT=*
//SYSIN   DD *
LJOB,LIST=NODD,JOB=PRAB0001
/*
//CA7RESET EXEC PGM=SASSBEND,REGION=4M,COND=ONLY,PARM=2
//UCC7CMDS DD DSN=ZGS1488.SYSDSNS.UCC7.COMMDS,DISP=SHR

thanks
Gerry

send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

