A post on another forum has gotten me wondering about the size & number
of buffers in a VSAM LSR pool versus the CPU utilization needed to manage
them. Around here the idea has always been "the more buffers the better,"
and no analysis has ever been done. But in today's I/O environment, I'm
wondering if this is unconditionally true. Given the advances in CPU
speed and its outpacing of even the fastest I/O (not that my shop is in
this situation), I question whether this simplistic rule still holds.
I.e., is "throw more buffers at an I/O intensive workload" still good advice?
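To make the tradeoff concrete, here's a toy back-of-the-envelope model (purely illustrative, NOT VSAM internals): assume the hit ratio improves with diminishing returns as the pool grows, each request pays a small CPU cost that creeps up with pool size, and each miss costs a fixed physical I/O. All the constants are made up for the sake of the sketch:

```python
# Toy model: total service time for a fixed number of record requests
# as a function of buffer count. Every constant below is an assumption,
# not a measured VSAM/LSR figure.

def total_time_us(buffers, requests=1_000_000,
                  io_us=200.0,           # assumed cost of one physical I/O
                  cpu_base_us=1.0,       # assumed CPU cost per request, small pool
                  cpu_growth_us=0.002):  # assumed extra CPU per buffer managed
    hit_ratio = buffers / (buffers + 500)   # diminishing returns, invented curve
    cpu = requests * (cpu_base_us + cpu_growth_us * buffers)
    io = requests * (1 - hit_ratio) * io_us
    return cpu + io

# Search a range of pool sizes for the cheapest total time.
best = min(range(100, 20001, 100), key=total_time_us)
print(best)
```

With these invented numbers the optimum lands somewhere in the middle of the range, i.e. past a certain point the per-buffer CPU overhead outweighs the ever-smaller I/O savings. Whether real LSR pools behave this way is exactly the question; the model only shows that "more is better" stops being automatic once CPU cost per buffer is nonzero.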

-- 
Windows. A funny name for an operating system that doesn't let you see
anything.

Maranatha! <><
John McKown

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
