Re: LBI w/ TAPE|DASD [Was:31 vs 24 QSAM]

2017-02-16 Thread Jesse 1 Robinson
Thanks for the clarification. It's been three or four years since I did this. 

I created

   SYSPDCBE DCBE  BLKSIZE=0

and added a pointer in the existing SYSPRINT DCB

   DCBE=SYSPDCBE

It all worked like a charm. 
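
Pieced together, the change amounts to something like the sketch below. Only 
SYSPRINT, SYSPDCBE, and BLKSIZE=0 come from the fragments above; the other DCB 
operands are illustrative stand-ins, not the original source:

   SYSPRINT DCB   DDNAME=SYSPRINT,DSORG=PS,MACRF=PM,DCBE=SYSPDCBE
   SYSPDCBE DCBE  BLKSIZE=0     coding BLKSIZE on the DCBE (even 0) is
   *                            what declares LBI support; 0 lets OPEN
   *                            determine the block size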

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
robin...@sce.com

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Steve Thompson
Sent: Thursday, February 16, 2017 10:46 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: (External):Re: LBI w/ TAPE|DASD [Was:31 vs 24 QSAM]

About two years ago, I was working on diagnosing a performance problem with an 
ISV product. It was processing a very large amount of data, so we moved the 
data off to tape, with the idea that we could cut the product's CPU burn (as 
in, make it wait for I/O, since it was eating 65-85% of a single CPU - 
single-tasking, not multitasking).

What I found, in moving it to tape (VTS), was that the virtual tape responded 
faster than DASD. And this was w/o LBI.

I did some further testing with LBI (and a test program able to handle LBI) 
and found that it processed the data faster from tape than it could from DASD. 
Significantly faster -- it seems that the VTS was reading from disk into cache, 
staging the "tape" at a rate greater than the DASD RAID boxes could respond to 
for the same I/O.

Then we put in a request to the vendor of the first product for LBI support and 
had to explain to them why.

Also, to a different post -- you can't replace a DCB with a DCBE. You use them 
in conjunction with each other: store the address of the DCBE into the DCB 
BEFORE OPEN, and if all the flags are correct, you have LBI support once OPEN 
has finished.
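
A minimal sketch of that dance, checking the result after OPEN -- DCBESLBI is 
the bit OPEN turns on when LBI is in effect; the flag-byte name and the 
addressability below are assumptions to verify against your IHADCBE expansion:

            OPEN  (SYSPRINT,(OUTPUT))
            LA    R4,SYSPDCBE         the DCBE the DCB already points to
            USING DCBE,R4             DSECT generated by IHADCBE
            TM    DCBEFLG1,DCBESLBI   did OPEN grant LBI? (byte name assumed)
            BZ    NOLBI               no - stay with <=32K block sizes
            L     R0,DCBEBLKSI        fullword block size now in effect
            DROP  R4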

Regards,
Steve.T

On 02/16/2017 11:44 AM, Jesse 1 Robinson wrote:
> I mentioned having modified a QSAM program to write 'large blocks' by 
> replacing DCB with DCBE. My goal was to test the effect of very large blocks 
> in our new tape subsystem, which we had learned was highly biased in favor of 
> large blocks. This had nothing to do with AMODE, which was all 31. The 
> program certainly ran faster with large blocks such as 260K. I could not 
> distinguish improvement at the IOS level (lower I/O count) vs. improvement at 
> the tape level. Most likely a combination.
>
> My problem with the new tape was that the vendor seemed to assume that a 
> customer could just tweak JCL to create giant blocks. In fact many of our 
> largest tape files are created by utilities--IBM or otherwise--that are not 
> written for large blocks. In practice you can code as large a block size as 
> you wish, but if the program contains only DCBs, any size greater than 32K is 
> simply ignored without error. While the change to DCBE was very simple, it 
> has to be accomplished by the program owner.
>
> .
> .
> J.O.Skip Robinson
> Southern California Edison Company
> Electric Dragon Team Paddler
> SHARE MVS Program Co-Manager
> 323-715-0595 Mobile
> 626-543-6132 Office ⇐=== NEW
> robin...@sce.com
>





Re: LBI w/ TAPE|DASD [Was:31 vs 24 QSAM]

2017-02-16 Thread Steve Thompson
About two years ago, I was working on diagnosing a 
performance problem with an ISV product. It was processing a 
very large amount of data, so we moved the data off to tape, 
with the idea that we could cut the product's CPU burn (as in, 
make it wait for I/O, since it was eating 65-85% of a single 
CPU - single-tasking, not multitasking).


What I found, in moving it to tape (VTS), was that the virtual 
tape responded faster than DASD. And this was w/o LBI.


I did some further testing with LBI (and a test program able 
to handle LBI) and found that it processed the data faster 
from tape than it could from DASD. Significantly faster -- 
it seems that the VTS was reading from disk into cache, 
staging the "tape" at a rate greater than the DASD RAID boxes 
could respond to for the same I/O.


Then we put in a request to the vendor of the first product for 
LBI support and had to explain to them why.


Also, to a different post -- you can't replace a DCB with a DCBE. 
You use them in conjunction with each other: store the 
address of the DCBE into the DCB BEFORE OPEN, and if all the 
flags are correct, you have LBI support once OPEN has finished.


Regards,
Steve.T

On 02/16/2017 11:44 AM, Jesse 1 Robinson wrote:

I mentioned having modified a QSAM program to write 'large blocks' by replacing 
DCB with DCBE. My goal was to test the effect of very large blocks in our new 
tape subsystem, which we had learned was highly biased in favor of large 
blocks. This had nothing to do with AMODE, which was all 31. The program 
certainly ran faster with large blocks such as 260K. I could not distinguish 
improvement at the IOS level (lower I/O count) vs. improvement at the tape 
level. Most likely a combination.

My problem with the new tape was that the vendor seemed to assume that a 
customer could just tweak JCL to create giant blocks. In fact many of our 
largest tape files are created by utilities--IBM or otherwise--that are not 
written for large blocks. In practice you can code as large a block size as you 
wish, but if the program contains only DCBs, any size greater than 32K is 
simply ignored without error. While the change to DCBE was very simple, it has 
to be accomplished by the program owner.

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
robin...@sce.com






Re: 31 vs 24 QSAM

2017-02-16 Thread Charles Mills
I don't really know anything about this but it sounds to me like one of @Gil's 
cases of making a good modification in the wrong place. Why not do this in such
a way as to be transparent to old QSAM programs -- no need for a DCBE? Have an 
operand in the JCL or in SYS1.PARMLIB or the PPT to make it behave as described 
below if necessary, but by default honor the JCL blocksize and hide the > 32K 
from the application. Tell the application the blocksize is 32K -- 15 bit 
integers being what they are -- but go ahead under the covers with a 260K 
blocksize. No QSAM program should be doing its own deblocking, and if it is, 
well, put an exception in the JCL or the PPT or wherever.

Or at the very least make the below behavior the default but have an option to 
honor the 260K even in the absence of a DCBE. Tell the customers their gun, 
their bullet, their feet.

Charles


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Jesse 1 Robinson
Sent: Thursday, February 16, 2017 8:44 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: 31 vs 24 QSAM

I mentioned having modified a QSAM program to write 'large blocks' by replacing 
DCB with DCBE. My goal was to test the effect of very large blocks in our new 
tape subsystem, which we had learned was highly biased in favor of large 
blocks. This had nothing to do with AMODE, which was all 31. The program 
certainly ran faster with large blocks such as 260K. I could not distinguish 
improvement at the IOS level (lower I/O count) vs. improvement at the tape 
level. Most likely a combination. 

My problem with the new tape was that the vendor seemed to assume that a 
customer could just tweak JCL to create giant blocks. In fact many of our 
largest tape files are created by utilities--IBM or otherwise--that are not 
written for large blocks. In practice you can code as large a block size as you 
wish, but if the program contains only DCBs, any size greater than 32K is 
simply ignored without error. While the change to DCBE was very simple, it has 
to be accomplished by the program owner.



Re: 31 vs 24 QSAM

2017-02-16 Thread Bill Woodger
Yes, don't just write using LBI from a program and expect to validate old vs 
new with ISRSUPC in batch.

I know that a PMR has been raised about whether ISRSUPC supports LBI; the 
IEC141I 013-E1 message it produces hints that it does not.

From what I've heard, using LBI, where it is possible to use it, leads to 
dramatic improvements in throughput. Not surprising: 32K blocks vs 256K blocks.
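
Back-of-envelope (my arithmetic, not from the PMR): a 256K block carries eight 
times the data of a 32K block, so moving the same file takes roughly one-eighth 
the physical I/Os -- about 32,000 blocks for 1 GB of data at 32K versus about 
4,000 at 256K.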



Re: 31 vs 24 QSAM

2017-02-16 Thread Jesse 1 Robinson
I mentioned having modified a QSAM program to write 'large blocks' by replacing 
DCB with DCBE. My goal was to test the effect of very large blocks in our new 
tape subsystem, which we had learned was highly biased in favor of large 
blocks. This had nothing to do with AMODE, which was all 31. The program 
certainly ran faster with large blocks such as 260K. I could not distinguish 
improvement at the IOS level (lower I/O count) vs. improvement at the tape 
level. Most likely a combination. 

My problem with the new tape was that the vendor seemed to assume that a 
customer could just tweak JCL to create giant blocks. In fact many of our 
largest tape files are created by utilities--IBM or otherwise--that are not 
written for large blocks. In practice you can code as large a block size as you 
wish, but if the program contains only DCBs, any size greater than 32K is 
simply ignored without error. While the change to DCBE was very simple, it has 
to be accomplished by the program owner.
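
The 32K wall is structural, by the way: a classic DCB carries its block size 
in a halfword field, capped at 32,760, so a larger value has nowhere to live 
unless a DCBE supplies its fullword equivalent. A sketch, assuming the 
standard mapping macros (field names as I recall them):

            DCBD  DSORG=PS,DEVD=DA    map the DCB: DCBBLKSI is a halfword
            IHADCBE ,                 map the DCBE: DCBEBLKSI is a fullword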

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
robin...@sce.com

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Elardus Engelbrecht
Sent: Wednesday, February 15, 2017 9:53 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: (External):Re: 31 vs 24 QSAM

Joseph Reichman wrote:

>I'm going to run it again tomorrow
>Just to double check

With varying LRECL, BLKSIZE and quantity of records/blocks. If you can, of 
course.

Also: read a block, then read it again; write a block, then write it again. 
I'm sure you will get 'interesting' numbers.

Good luck. If you do that properly, that testing should make a good Redbook 
article.

Groete / Greetings
Elardus Engelbrecht




Re: 31 vs 24 QSAM

2017-02-15 Thread Joseph Reichman
I'm going to run it again tomorrow


Just to double check 

Thanks  for your help 



> On Feb 15, 2017, at 6:04 PM, Sam Siegel  wrote:
> 
> Are you sure it is not just cache? Were the tests run multiple times
> and averaged? Was the load on the system and DASD subsystem similar
> for each test?
> 
>> On Wed, Feb 15, 2017 at 12:19 PM, Joseph Reichman  
>> wrote:
>> Hi
>> 
>> And thank you all
>> 
>> I just did a benchmark
>> 
>> And I had a significant savings in CPU time
>> 
>> 24-bit QSAM: .85 CPU time; 31-bit QSAM: .34 CPU time
>> 
>> I could tell it ran a lot faster
>> 
>> 
>> 
>> 
>> Joe Reichman
>> 8045 Newell St Apt 403
>> Silver Spring MD 20910
>> Home (240) 863-3965
>> Cell (917) 748 -9693
>> 



Re: 31 vs 24 QSAM

2017-02-15 Thread Sam Siegel
Are you sure it is not just cache? Were the tests run multiple times
and averaged? Was the load on the system and DASD subsystem similar
for each test?

On Wed, Feb 15, 2017 at 12:19 PM, Joseph Reichman  wrote:
> Hi
>
> And thank you all
>
> I just did a benchmark
>
> And I had a significant savings in CPU time
>
> 24-bit QSAM: .85 CPU time; 31-bit QSAM: .34 CPU time
>
> I could tell it ran a lot faster
>
>
>
>
> Joe Reichman
> 8045 Newell St Apt 403
> Silver Spring MD 20910
> Home (240) 863-3965
> Cell (917) 748 -9693
>



Re: 31 vs 24 QSAM

2017-02-15 Thread Paul Gilmartin
On Wed, 15 Feb 2017 15:19:06 -0500, Joseph Reichman wrote:
>
>I just did a benchmark
>
>And I had a significant savings in CPU time
>24-bit QSAM: .85 CPU time; 31-bit QSAM: .34 CPU time
>I could tell it ran a lot faster
> 
But why is anyone doing 31-bit nowadays?

--gil



31 vs 24 QSAM

2017-02-15 Thread Joseph Reichman
Hi

And thank you all

I just did a benchmark

And I had a significant savings in CPU time 

24-bit QSAM: .85 CPU time; 31-bit QSAM: .34 CPU time

I could tell it ran a lot faster 




Joe Reichman
8045 Newell St Apt 403
Silver Spring MD 20910
Home (240) 863-3965
Cell (917) 748 -9693
