Most of the time I suggest throwing VSAM (and non-VSAM) buffers at jobs when 
the business users are complaining about run time and don't have time for app 
tuning.  They are happy to accept a little extra CPU for a decrease in run time.
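(For anyone following along: a minimal sketch of that quick fix is to raise the buffer counts on the DD statement with the AMP parameter, with no application change. Dataset, step, and program names below are hypothetical, and the right counts depend on the access pattern. Note that BUFND/BUFNI drive NSR buffering; a true LSR pool is built via BLDVRP in the program or the BLSR subsystem.)

```jcl
//* Hypothetical step: give a heavily-read KSDS more data
//* buffers (BUFND) and index buffers (BUFNI) via JCL only.
//STEP1    EXEC PGM=MYPROG
//MASTER   DD  DSN=PROD.MASTER.KSDS,DISP=SHR,
//             AMP=('BUFND=50','BUFNI=5')
```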

Thanks..

Paul Feller
AGT Mainframe Technical Support

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[email protected]] On Behalf 
Of John McKown
Sent: Wednesday, May 24, 2017 07:59
To: [email protected]
Subject: large VSAM LSR buffering vs. CPU utilization

A post on another forum has gotten me to wondering about the size & number
of buffers in a VSAM LSR pool versus the CPU utilization needed to manage
them. Around here the idea has always been "the more buffers the better"
and no analysis has ever been done. But in today's I/O environment, I'm
wondering if this is unconditionally true. Given the advances in CPU speed and
its outpacing of even the fastest I/O (not that my shop is in
that situation), I'm wondering if this simplistic rule still holds. That is,
is "throw more buffers at an I/O-intensive workload" still good advice?

-- 
Windows. A funny name for an operating system that doesn't let you see
anything.

Maranatha! <><
John McKown

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
