>>
  The point I was making about "crazy talk" was to dispel the notion, presented 
in two posts, that ASM takes the size of the smallest page data set and uses 
only that amount of space in the larger ones. That notion is incorrect.  ASM is 
quite willing to use all of the slots in any page data set regardless of its 
relative size, and the total number of slots from all data sets is what is used 
in calculations related to auxiliary storage shortages, regardless of the 
distribution of page data set sizes.  So there is no (and has never been any) 
*functional* requirement for page data sets to be the same size.

  That said, there may have been (and may still be) *performance-related* 
reasons for using similar-sized page data sets.  Capacity planning and system 
tuning of that nature are not my area of expertise. 

  When expanded storage came along in the mid '80s, IBM's focus shifted toward 
reducing paging by exploiting the increasing amounts of processor storage 
rather than toward creating optimal page data set configurations.  Since then, 
I have seen very little done in the way of measuring paging performance for 
page data sets (other than recent measurements to demonstrate potential 
performance improvements from using Flash Express on the EC12 machines for 
paging instead of page data sets).

 The original question was about how a move from one model of DASD subsystem to 
another might induce a change in DFSORT's storage usage.  The point is that the 
DASD model, and the distribution of sizes of page data sets, should have no 
effect on that.
However, the total number of page data set slots from all of the page data 
sets (plus the number of Flash Express paging slots configured to the LPAR in 
question on an EC12 machine) could very well have an effect on DFSORT's 
decisions about how much storage to use.  Increasing the total number of slots 
could affect the output of the STGTEST SYSEVENT, which in turn could induce 
DFSORT to decide to use more storage. 


Jim Mulder   z/OS System Test   IBM Corp.  Poughkeepsie,  NY

<<
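The mechanism described in the quote above — that only the total slot count 
across all page data sets matters, not how the slots are distributed — can be 
sketched with a toy model. This is hypothetical Python for illustration only: 
the function names and the 70% threshold are assumptions of mine, not actual 
ASM or DFSORT logic, and STGTEST itself is a SYSEVENT, not an API shown here.

```python
# Hypothetical illustration -- not actual z/OS ASM or DFSORT code.
# The point: only the TOTAL number of auxiliary storage slots matters,
# not how those slots are distributed across page data sets.

def total_slots(page_data_sets):
    """Sum slots across all page data sets (plus any Flash Express
    paging slots, which would simply be another entry in the list)."""
    return sum(page_data_sets)

def storage_decision(total, in_use, threshold=0.70):
    """Toy stand-in for an STGTEST-style check: keep using more
    storage only while auxiliary slot usage stays below a threshold
    (the 0.70 value is an arbitrary assumption for this sketch)."""
    return "use more storage" if in_use / total < threshold else "hold back"

# Two configurations with the same total (12,000 slots) but very
# different distributions yield the same total and the same decision:
uniform = [4000, 4000, 4000]
skewed = [10000, 1000, 1000]
assert total_slots(uniform) == total_slots(skewed) == 12000
assert storage_decision(total_slots(uniform), 5000) == \
       storage_decision(total_slots(skewed), 5000)
```

In this model, changing the DASD model or reshaping the page data sets changes 
nothing as long as the total stays constant; only adding or removing slots 
(or changing in-use counts) can flip the decision.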

Yes Jim, that pretty much sums it up. We essentially plugged in a new disk 
drive and suddenly DFSORT didn't work the way it used to. Despite everything 
everyone says, something else is influencing DFSORT's decisions on how much 
storage to use and where it's going to get it. We do know that if we 
reconfigure our page data sets to be of uniform size, we get different results 
(a different amount of memory being used, a different number of memory objects 
being used for work) even if the total number of slots doesn't change. I cannot 
believe that the speed of the drives and the availability of DASD Fast Write, 
Cache Fast Write, and even PAVs aren't playing some part. 


Anne R. Adams
DTI, Systems Engineering
State of Delaware

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN