Once the I/O completes and the buffer, which was page-fixed while the I/O was 
in progress, has been unfixed, that buffer will not experience significant 
paging as long as it is being accessed frequently.  That is how I/O buffers 
have behaved ever since virtual memory operating systems with paging 
subsystems were first produced, whether the buffer is below the 16M line, 
below the 2G bar (31-bit), or above the bar.  Vendors who build products like 
DFSORT know how to long-term page-fix their buffers, whether the buffers are 
above or below the bar, if such page fixing seems useful.  In other words, 
just because a large amount of storage is being used 
above the bar does not necessarily mean that that storage is being frequently 
paged.  But as more pages are long-term-fixed anywhere in the system, then the 
paging of everything else in the system may increase.  Giving a gazillion bytes 
above the bar to process X does not necessarily mean that process X will ruin 
system performance.  The gazillion bytes could also have come from below the 
bar (for some values of gazillion).  A process suspected of hogging the 
system should be closely monitored because it has actually been shown to hog 
the system, not simply because it is using some new type of resource that is 
not yet well understood. 
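For reference, MEMLIMIT can be capped at more than one level; a sketch (the
values and job names are illustrative, not recommendations -- check your
installation's standards and the JCL Reference):

```
/* SMFPRMxx: system-wide default MEMLIMIT for address   */
/* spaces that do not specify one themselves            */
MEMLIMIT(2G)

//* JCL: explicit per-job or per-step override
//SORTJOB  JOB (ACCT),'SORT',MEMLIMIT=8G
//STEP1    EXEC PGM=SORT,MEMLIMIT=4G
```

Note also that REGION=0M on the JOB or EXEC statement implies
MEMLIMIT=NOLIMIT unless an IEFUSI exit overrides it, which is one way sites
end up with unexpectedly large above-the-bar consumers.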
Bill Fairchild 

----- Original Message -----

From: "Paul Gilmartin" <[email protected]> 
To: [email protected] 
Sent: Friday, April 18, 2014 9:31:13 AM 
Subject: SORT and MEMLIMIT best practice 

On 2014-04-18, at 03:48, R.S. wrote: 

> W dniu 2014-04-18 01:25, Ed Jaffe pisze: 
>> On 4/4/2014 12:47 PM, R.S. wrote: 
>> 
>> If you specify absolutely nothing about MEMLIMIT anywhere, the 
>> system-provided default is 2G so obviously you can't go wrong with that in 
>> SMFPRMxx. 
>>   
Right.  IBM's provided defaults are always optimal. 

> Well, 
> My issue (problem?) is I have MEMLIMIT coded, but it's much more than default 
> 2G. And I noticed that some DFSORT jobs consume considerable amounts of 
> memory causing paging. 
> On the other hand, I don't want to be stingy, so I'm looking for some 
> recommendations. 
>   
o Hmmm... I'd think that any parameterization resulting in significant 
  paging of I/O buffers is counterproductive.  Is DFSORT aware of this 
  in its design, and does it attempt to tailor its WSS to fit in real 
  storage? 

o OTOH, paging I/O is reported to be very good.  And 64-bit virtual 
  is enough for most plausible data sets.  How about eliminating 
  SORTWKn data sets and performing the entire sort in virtual 
  storage?  But this approach must pay careful attention to LoR 
  (locality of reference). 

o What are the consequences of allocating SORTWKn to VIO? 
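If anyone experiments with keeping the sort in memory, DFSORT's 64-bit
memory object sorting is controlled by options along these lines (a sketch
only -- MOSIZE values and defaults should be verified against the current
DFSORT Installation and Customization / Application Programming guides):

```
//SYSIN    DD *
  OPTION MOSIZE=MAX        LET DFSORT SIZE ITS 64-BIT MEMORY OBJECT
  SORT FIELDS=(1,8,CH,A)
/*
```

Of course, whatever DFSORT asks for is still bounded by the effective
MEMLIMIT for the step, which is part of what this thread is circling around.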

Many years ago (no longer), I knew the Cooley-Tukey fast Fourier 
transform algorithm well enough that I could code it from memory. 
At one point it makes a pass that touches the first location in 
each page in sequence; then the second location in each page; then 
the third; ... .  This is brutal if WSS doesn't fit in real storage. 
Has anyone optimized FFT to improve LoR, perhaps rearranging the 
data on each pass, perhaps even employing PS data sets rather than 
virtual storage? 
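The access pattern gil describes can be sketched as a toy model (not actual
FFT code; page and element sizes are assumed typical values): a page-strided
pass makes each consecutive reference land on a different page, so every
element position costs one touch of every page in the working set.

```python
PAGE_SIZE = 4096                     # bytes per page (typical)
ELEM_SIZE = 8                        # 8-byte elements
PER_PAGE = PAGE_SIZE // ELEM_SIZE    # 512 elements per page

def strided_access_order(n_pages):
    """Yield element indices in the order the pass described above
    touches them: first element of each page, then second, and so on."""
    for offset in range(PER_PAGE):
        for page in range(n_pages):
            yield page * PER_PAGE + offset

# With 3 pages, the first three touches hit three *different* pages:
order = list(strided_access_order(3))
print(order[:3])   # [0, 512, 1024]
```

If the number of pages exceeds the real frames available, nearly every touch
in every one of the PER_PAGE passes can fault, which is exactly the "brutal"
behavior when the WSS doesn't fit in real storage.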

-- gil 

---------------------------------------------------------------------- 
For IBM-MAIN subscribe / signoff / archive access instructions, 
send email to [email protected] with the message: INFO IBM-MAIN 

