Re: VLF caching

2014-12-03 Thread Peter Relson
>will this be documented somewhere with LLA and/or VLF?

The health check is documented in the HC book.
The modify command and the COFVLFxx parmlib member are documented in their 
normal places.

I don't know if there's a reference to those things within whatever more 
general description of LLA and/or VLF exists. 

Peter Relson
z/OS Core Technology Design



Re: VLF caching

2014-12-02 Thread Peter Relson
>-can I use VLF trimming statistics as a good measure to 
> determine if my CSVLLA cache is large enough? 
>-If not, what measure does tell me this, besides the 
> LLAFETCH/PGMFETCH measures I get from CSVLLIX1/2?

Provided by the VLF owner:

Since LLA expects VLF to trim as needed to make room for new caching 
candidates over time, trimming itself is not a bad thing.  What -is- bad 
is trimming 'too soon' so that you do not get as much benefit from caching 
a program object (module) as you otherwise might.  As of z/OS 2.1, you can 
use the verbose output of the VLF Health Check to see the youngest age of 
a trimmed object (since the VLF class was activated), and the current 
minimum trimmed age of objects (since the check last ran) for the CSVLLA 
class.  The output also shows the current MaxVirt value.  You also can set 
an alert for a minimum age, so that the check will raise an exception if 
objects are being trimmed sooner than you would like (the default is 60 
seconds - sufficient to tell whether there is a potentially serious 
issue).  And you can dynamically change the size of the cache via the 
MODIFY VLF,REPLACE,NN=xx command by supplying a parmlib member with a new 
MaxVirt (and possibly AlertAge) parm for the CSVLLA class.
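As an illustrative sketch (the member suffix and the values here are 
hypothetical; MAXVIRT is specified in 4K blocks, so 32768 corresponds to 
roughly 128M):

   /* COFVLF99 - hypothetical fragment for the LLA class      */
   CLASS NAME(CSVLLA)     /* class used by LLA module caching */
         EMAJ(LLA)        /* major name                       */
         MAXVIRT(32768)   /* 32768 x 4K = 128M cache          */
         ALERTAGE(60)     /* z/OS 2.1: alert age in seconds   */

and then, to apply it without restarting VLF:

   MODIFY VLF,REPLACE,NN=99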

I cannot say whether there is some trimming statistic that would also be 
useful for older releases, but I doubt it.
And the MODIFY to change the size of the cache is available only as of 
z/OS 2.1.

Peter Relson
z/OS Core Technology Design



Re: VLF caching

2014-12-02 Thread Vernooij, CP (ITOPT1) - KLM
Peter,

Thanks, this is valuable information on how to manage the CSVLLA cache size. When 
we go to 2.1 next year I will use it, probably eliminating CSVLLIX1/2.
Is this / will this be documented somewhere with LLA and/or VLF?

Kees.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Peter Relson
Sent: 02 December, 2014 13:55
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: VLF caching

SNIPPAGE


Re: VLF caching

2014-12-02 Thread Mark Zelden
I liked the health check.

If you have access to MXG, here is a sample of what I have used to manage VLF:

/* 
//  JCLLIB ORDER=ZELDEN.MXG.SOURCLIB   
//STEP010   EXEC MXGSAS,WORK='50,50'   
//*SMF   DD DSN=ZELDEN.SMF41,DISP=SHR  
//SMF  DD  DSN=SMF.SYSA.DAILY(-0),DISP=SHR 
//SYSIN DD  *  
OPTIONS SOURCE LS=132; 
%INCLUDE SOURCLIB(TYPE41); 
RUN;   
PROC SORT NODUP DATA=TYPE41VF OUT=T41; 
  BY SYSTEM SMF41CLS SMFTIME;  
PROC PRINT LABEL SPLIT='*';
  VAR SMFTIME SYSTEM   SMF41LRG SMF41MVT SMF41USD  
  SMF41ADD SMF41DEL SMF41TRM VLFHITPC; 
  BY SYSTEM SMF41CLS;  
  ID SMF41CLS; 
  FORMAT SMFTIME DATETIME18.;  
  TITLE 'VIRTUAL LOOKASIDE FACILITY';   
  TITLE2 'AS REDUCED FROM SMF TYPE41'; 
  FOOTNOTE 'ZELDEN-(MXG41VLF)';
RUN;
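If you also want to flag the intervals in which VLF actually trimmed, a small 
follow-on step might look like this (a sketch only: it assumes the T41 data 
set and TYPE41VF variable names used above, and that SMF41USD and SMF41MVT are 
reported in comparable units):

DATA T41TRIM;  
  SET T41;  
  IF SMF41TRM > 0;                        /* keep intervals with trimming */
  IF SMF41MVT > 0 THEN  
    PCTUSED = 100 * SMF41USD / SMF41MVT;  /* rough cache-percent-used     */
PROC PRINT DATA=T41TRIM;  
  VAR SMFTIME SYSTEM SMF41CLS SMF41MVT SMF41USD PCTUSED SMF41TRM;  
  FORMAT SMFTIME DATETIME18.;  
RUN;  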



--
Mark Zelden - Zelden Consulting Services - z/OS, OS/390 and MVS  
ITIL v3 Foundation Certified   
mailto:m...@mzelden.com   
Mark's MVS Utilities: http://www.mzelden.com/mvsutil.html 
Systems Programming expert at http://search390.techtarget.com/ateExperts/


Re: VLF caching

2014-12-02 Thread Steve Thompson

On 12/02/2014 11:00 AM, Mark Zelden wrote:

I liked the health check.

If you have access to MXG, here is a sample of what I have used to manage VLF:


SNIPPAGE

Thanks. I ran a quick test of this. I likes it.

It gave me some ideas...

Regards,
Steve Thompson



Re: VLF caching

2014-12-01 Thread Vernooij, CP (ITOPT1) - KLM
Peter,

Yes, some confusion was caused by terminology: I regard VLF as the manager of 
the cache and the exploiters as doing 'caching', meaning giving an object to 
VLF to put it in its cache.

Finally, the first and basic question of this thread was: 
-can I use VLF trimming statistics as a good measure to determine if my CSVLLA 
cache is large enough? 
-If not, what measure does tell me this, besides the LLAFETCH/PGMFETCH measures 
I get from CSVLLIX1/2?
Besides all the information you do not wish to publish, this in my opinion is 
useful information worth publishing.

Kees.


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Peter Relson
Sent: 30 November, 2014 16:58
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: VLF caching

SNIPPAGE


Re: VLF caching

2014-11-30 Thread Peter Relson
>Now, please put your wisdom in IBM books.

I think that most of my post was discussing internal details that are not 
suitable for documentation (where by documenting them, customers and 
programmers are allowed to rely on them, which in turn may hamstring 
future desire to change). If there are particular pieces that would really 
help customers if we document them, I'll listen to requests for them 
(which should include at least a hint of how it will help). 

>>If LLA finds that a module that it had successfully gotten 
>>cached no longer is deemed worthwhile, it does not tell VLF. 
>
>No?  Why?

Not having been involved in the initial implementation, I'm not sure. 
Perhaps it was felt that doing so would be overkill, that trimming would 
do a good enough job such that the overhead of doing the delminor was 
not worth the cycles. It also makes it less flexible -- if there are 
subsequent fetches, LLA might be able simply to mark its data as active 
and not have to re-cache the module. 

Peter Relson
z/OS Core Technology Design



Re: VLF caching

2014-11-30 Thread Blaicher, Christopher Y.
Peter,
Having been involved with caching in prior employment (not IBM), I think your 
explanation of just letting normal trimming take care of it is what makes the 
most sense, and it is what I have done in the past.  If the cache is very active 
it will age out fast enough, and if an object gets referenced again before it is 
aged out, then you avoided loading it again.  As you said, why waste the cycles 
on something that is going to happen naturally.

Chris Blaicher
Principal Software Engineer, Software Development
Syncsort Incorporated
50 Tice Boulevard, Woodcliff Lake, NJ 07677
P: 201-930-8260  |  M: 512-627-3803
E: cblaic...@syncsort.com

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Peter Relson
Sent: Sunday, November 30, 2014 10:58 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: VLF caching

SNIPPAGE


Re: VLF caching

2014-11-29 Thread Peter Relson
>In contrast to other VLF exploiters, LLA has decided 
>to fully control the VLF cache, it knows how large it 
>is, knows what is in there and how much room is 
>still left.

I'm afraid that this is not true. 

Peter Relson
z/OS Core Technology Design



Re: VLF caching

2014-11-29 Thread Elardus Engelbrecht
Peter Relson wrote:

>I don't know what is being assumed or understood, but most of this thread 
>is somewhat technically inaccurate. Some of that is perhaps terminology, some 
>not.

Thanks Peter. Now, please put your wisdom in IBM books. Like Shane, I also 
would like to congratulate you on your excellent explanations! Please keep it 
up! 


>VLF's only job is to manage a cache. I view that as doing caching. What VLF 
>doesn't do is to decide what to put into the cache. That is left to each 
>exploiter. All VLF exploiters do this.

True. Think of RACF and ISPF. Think of IRR803I for example.


>Again terminology. 

Peter, perhaps you should redefine those terminologies in official IBM manuals. 
;-)


>If LLA finds that a module that it had successfully gotten cached no longer is 
>deemed worthwhile, it does not tell VLF. 

No?  Why?

Peter, many thanks for your good posts! I really appreciate them!

Groete / Greetings
Elardus Engelbrecht



Re: VLF caching

2014-11-28 Thread Vernooij, CP (ITOPT1) - KLM


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Steve Thompson
Sent: 27 November, 2014 18:37
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: VLF caching

>On 11/27/2014 02:22 AM, Vernooij, CP (ITOPT1) - KLM wrote:
>SNIPPAGE
>
>LLA is, as I understood it, caching the directory for each managed PDS/PDSE
>(which is affected by FREEZE/NOFREEZE). Is that an incorrect understanding?

Correct, with the remark that LLA caches directories in its private storage; 
VLF is not involved here.

>The VLF, using its rules, either caches or rejects the cache request -- there
>is the CSVLLIX2 exit which I know gets involved with the CSVLLA class. And
>when that call is made, it is made AFTER the load has been effected so that
>the requesting address space is not held up. The CSVLLIX1 exit is PRIOR to
>the LOAD being effected, and so the amount of work done in it can be rather
>detrimental to the throughput of the whole system.

Incorrect: 
1) CSVLLIX1/2 are LLA exits, where you and I can influence LLA's calculated 
decision to stage a module to VLF or not. LLA statistics are also presented to 
the exits, which can be used to record them in SMF records.
2) VLF rejects nothing; it accepts everything that is staged. It manages the 
VLF cache by trimming modules to make room for new ones. All VLF exploiters, 
except LLA, simply push anything into their VLF cache. When needing an object, 
they first check whether it is still in the VLF cache; if not, they know where 
to get the object from disk (CLIST, Catalog record, etc.). For these VLF 
caches, the VLF statistics provide a good measure of the effectiveness of the 
size of the cache.

>This indicates to me that VLF is very much involved in the control of cache.
>If the weight assigned to the module, as you pass through CSVLLIX2, prohibits
>caching, I believe it is VLF that doesn't bother. After all, the trim code
>apparently is a VLF module (I'm sorry, I can't remember if it is COFTRIM or
>VLFTRIM, I only remember that TRIM is part of the name that STROBE captured
>when we saw a COBOL program spending an inordinate amount of time in
>LOAD/LINK functions).

See above.

>If my understanding is incorrect, I really would like to know -- because it
>means that I have greatly misinterpreted the stuff I've read in various
>published manuals and other information passed to me in trying to diagnose
>what I believe to have been caused by too small a MAXVIRT for CSVLLA.

In contrast to other VLF exploiters, LLA has decided to fully control the VLF 
cache: it knows how large it is, knows what is in there and how much room is 
still left. If a new module qualifies for the VLF cache, LLA decides which 
module has to make room for the new one and instructs VLF to delete it. That is 
why you see many Adds and Deletes, but hardly any Trims, in the VLF statistics 
for the LLA cache. VLF hardly needs to make room in the cache by trimming, 
because LLA ensures room by Deletes.

I use CSVLLIX1/2 to write the LLA statistics to SMF and from them, I get the 
number of module fetches resolved by LLA from its VLF cache and the number of 
module fetches that had to be resolved from DASD, for each LLA-managed library. 
This gives me a perfect view of the effectiveness of the CSVLLA VLF cache.
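As an aside, the LLAFPCT column in the report below is simply the share of all 
fetches that LLA resolved from the VLF cache. A minimal SAS sketch of that 
arithmetic (LLASTATS is a hypothetical data set holding the two counters per 
library, as the exit-written SMF records provide them):

DATA RATIO;  
  SET LLASTATS;                    /* hypothetical: one obs per library */
  IF LLAFCNT + PGMFCNT > 0 THEN  
    LLAFPCT = 100 * LLAFCNT / (LLAFCNT + PGMFCNT);  
  ELSE LLAFPCT = 0;                /* no fetches at all in the period   */
RUN;  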

This is a report about October 2014:

DSNAME                 LLAFPCT    PGMFCNT    LLAFCNT
--------------------   -------   --------   --------
IMSPRDA.PGMLIBA          99.58      90636   21480761
IMSPRDA.PGMLIBB          99.90      36873   36026806
SYS1.CEE.SCEERUN         99.47     228279   42527075
SYS1.CEE.SCEERUN2        72.84       1658       4447
SYS1.CSSLIB              98.33      11153     656649
SYS1.DCF.DCFLOAD          0.00         47          0
SYS1.DMS.CCUWLOAD        98.29      29500    1692618
SYS1.IOA.LOAD            99.39     145235   23826464
SYS1.LNKLIB              98.72         77       5921
SYS1.LNKLIB              98.60     388147   27294767
SYS1.LNKLIB.PDSE         72.33     885559    2315360
SYS1.MAINVIEW.BBLINK     85.23      11140      64261
SYS1.MIGLIB              93.26       4455      61662
SYS1.NTV61.CNMLINK       98.12      21232    1105426
SYS1.NTV61.SCNMLNKN      99.97        100     316186
SYS1.REXX.SFANLMD         0.00         10          0
SYS1

Re: VLF caching

2014-11-28 Thread Vernooij, CP (ITOPT1) - KLM
And I should have added: 
since LLA knows what is in the VLF cache, it will only retrieve a module from 
VLF that it knows is there, and this will give a near 100% VLF hit ratio. So 
this VLF statistic is useless, because it is made 100% by LLA.

Kees.


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Steve Thompson
Sent: 27 November, 2014 18:37
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: VLF caching

SNIPPAGE

Re: VLF caching

2014-11-28 Thread Peter Relson
I don't know what is being assumed or understood, but most of this 
thread is somewhat technically inaccurate. Some of that is perhaps 
terminology, some not.

So let's start here:

>VLF does not do caching.
In my view, this is at least misleading (although perhaps it's simply 
being loose with terminology). VLF's only job is to manage a cache. I view 
that as doing caching. What VLF doesn't do is to decide what to put into 
the cache. That is left to each exploiter. All VLF exploiters do this. 
They identify an object and VLF caches the object (for later retrieval 
by the exploiter). An exploiter may choose to tell VLF to remove an object 
from the cache. LLA does this removal only when it has figured out the 
deletion of a module in a data set it is managing. 

>VLF does not start doing caching until ...
Again terminology. VLF never starts caching, it is told to cache. Or, 
perhaps you'd say that it starts caching when it is told to cache 
something. The sentiment expressed here, correctly, would probably have 
been "LLA does not ask VLF to cache a module until...".

>Correct, that is why you also see Deletes in the LLA VLF 
>cache. If LLA has a module that qualifies for the cache, 
>it can delete a module that does not qualify anymore to 
>make room for the new one. Therefore you will see deletes 
>and adds of modules caused by LLA management of the cache, 
>and hardly any trims caused by VLF management of the cache.
>The other exploiters push 'everything' into the cache and 
>will find out later if the object is still there. VLF 
>hit ratio and trim statistics are indeed useful here.

It is dangerous to make very direct statements like this without detailed 
knowledge of the internals. Unfortunately, the statement is not true. LLA 
does not delete a module that does not qualify anymore (as I mentioned 
earlier, the only specific deletion is for a deleted member).  Regardless, 
VLF does trim the LLA class, just as it trims any other class when it gets 
(to VLF) too full. VLF has no code that does anything specific for LLA -- 
it doesn't know about LLA, although perhaps (I have no idea) LLA is the 
only exploiter that saves its VLF data in two pieces (for LLA, those two 
pieces are the module text and the relocation information).

>The exception being when you cross the 90% utilization mark... 
Close. The trimming threshold for VLF happens to be 95%.

>Then trim is forced, so that the now eligible requested module 
>can be put into cache. So those modules eligible for trim get 
>marked (NOTE, that is MARKED) for deletion. If one of those 
>modules gets requested before the delete cycle hits, the delete 
>flag is turned off.

This led me to think you were thinking that this processing is 
synchronous. It isn't. Once the trimming threshold has been reached, VLF 
marks objects for potential deletion.
If the trimming has not yet occurred, then a new request will still be 
rejected for out of space. It is true that if, in between the marking 
and the actual deleting, VLF gets a request to retrieve the marked 
object, it will change its mind and not delete it (because the object is 
no longer least-recently-used).

>The '5 fetches' algorithm
It happens to be 10 (unless the CSVLLIX1 exit indicates otherwise). But 
reaching that number for any module is an event that kicks off staging 
analysis for just about everything (including things that have not been 
fetched that many times). 

>This indicates to me that VLF is very much involved in the 
>control of cache. If the weight assigned to the module, as you 
>pass through CSVLLIX2, prohibits caching, I believe it is VLF 
>that doesn't bother. 
It is LLA that doesn't bother, not VLF.

>After all, the trim code apparently is a VLF 
>module (I'm sorry, I can't remember if it is COFTRIM or VLFTRIM, 
>I only remember that TRIM is part of the name that STROBE 
>captured when we saw a COBOL program spending an inordinate 
>amount of time in LOAD/LINK functions).
The VLF trimming module is COFMTRIM. And in many cases where MAXVIRT for a 
VLF class is too small, that module may do a lot of work. 

>If my understanding is incorrect, I really would like to know -- 
>because it means that I have greatly misinterpreted the stuff 
>I've read in various published manuals and other information 
>passed to me in trying to diagnose what I believe to have been 
>caused by too small a MAXVIRT for CSVLLA.
I suspect that overall your understanding is correct, but the specifics 
behind it might not be.

OK, here we go. I won't swear to all of this, but it's pretty likely to be 
correct.

LLA manages PDS(E) directories and modules. 
The directories (interacting with BLDL and DESERV) are kept in LLA private 
storage, along with all the rest of LLA's control data.
When LLA determines that a module should be cached (and determines that 
LLA itself is capable of caching it), it asks VLF to do the caching 
(COFCREAT).
The determination of caching involves many factors, including how often 
the module is fetched, how big it is, 

Re: VLF caching

2014-11-27 Thread Steve Thompson

On 11/27/2014 02:22 AM, Vernooij, CP (ITOPT1) - KLM wrote:
SNIPPAGE


>No, here I read a common misconception about LLA and VLF working.
>
>LLA module caching and directory freeze are separate functions. Directories are 
>kept completely in LLA's private storage. Modules are cached in VLF.
>
>LLA fetches only modules from the VLF cache if it knows it is still there, 
>hence the 100% VLF hit ratio.
>
>VLF does not do caching; VLF exploiters cache objects into the VLF cache (LLA, 
>TSO CLIST, Catalog etc.).
>The '5 fetches' algorithm, together with some complex calculations about 
>memory use, cache efficiency etc., is done by LLA, to determine if a module is 
>going to be staged to VLF.
>
>Kees.

snip

LLA is, as I understood it, caching the directory for each 
managed PDS/PDSE (which is affected by FREEZE/NOFREEZE). Is that 
an incorrect understanding?


The VLF, using its rules, either caches or rejects the cache 
request -- there is the CSVLLIX2 exit which I know gets involved 
with the CSVLLA class. And when that call is made, it is made 
AFTER the load has been effected so that the requesting address 
space is not held up. The CSVLLIX1 exit is PRIOR to the LOAD 
being effected, and so the amount of work done in it can be 
rather detrimental to the throughput of the whole system.


This indicates to me that VLF is very much involved in the 
control of cache. If the weight assigned to the module, as you 
pass through CSVLLIX2, prohibits caching, I believe it is VLF 
that doesn't bother. After all, the trim code apparently is a VLF 
module (I'm sorry, I can't remember if it is COFTRIM or VLFTRIM, 
I only remember that TRIM is part of the name that STROBE 
captured when we saw a COBOL program spending an inordinate 
amount of time in LOAD/LINK functions).


If my understanding is incorrect, I really would like to know -- 
because it means that I have greatly misinterpreted the stuff 
I've read in various published manuals and other information 
passed to me in trying to diagnose what I believe to have been 
caused by too small a MAXVIRT for CSVLLA.


Regards,
Steve Thompson



Re: VLF caching

2014-11-26 Thread Peter Relson
>VLF writes SMF 41 records, but they are unusable for LLA. 
>Since LLA manages its VLF cache and knows what is in it 
>and what is not, it will always have a 100% hit ratio on 
>its VLF cache, which will be reported by VLF records 41.

That is not correct. It is true that LLA does know what it put into the 
cache (as would most VLF exploiters), but (as with all VLF exploiters) it 
has no idea what VLF has taken out due to trimming.

When LLA (as with all VLF exploiters) goes to retrieve data that it had 
cached but that VLF has since trimmed, it finds that the data is no longer 
available to be retrieved. LLA then acts accordingly.

I presume, therefore, that the SMF 41 records do have legitimate 
information about the amount of trimming that has occurred.

If no trimming is occurring, then the cache is not full.

Peter Relson
z/OS Core Technology Design



Re: VLF caching

2014-11-26 Thread Bob Shannon
>>VLF writes SMF 41 records, but they are unusable for LLA.
>That is not correct. It is true that LLA does know what it put into the cache 
>(as would most VLF exploiters), but (as with all VLF exploiters) it has no 
>idea what VLF has taken out due to trimming.

I tend to agree with the OP. LLA will only check VLF for modules it previously 
cached. Granted there may be some trimming, but I don't ever recall seeing less 
than a 100% hit ratio for LLA. The other VLF exploiters behave differently and 
the SMF statistics for them tend to be helpful. The CSVLLIX1 exit is required 
for accurate LLA fetch statistics.

Bob Shannon
Rocket Software



Re: VLF caching

2014-11-26 Thread Steve Thompson

On 11/26/2014 11:19 AM, Bob Shannon wrote:
snippage

>I tend to agree with the OP. LLA will only check VLF for modules it previously 
>cached. Granted there may be some trimming, but I don't ever recall seeing less 
>than a 100% hit ratio for LLA. The other VLF exploiters behave differently and 
>the SMF statistics for them tend to be helpful. The CSVLLIX1 exit is required 
>for accurate LLA fetch statistics.
>
>Bob Shannon
>Rocket Software

SNIPPAGE

Wouldn't the LLA hit rate be based on it having the directory 
information as opposed to having to go read it (something about 
FREEZE vs. NOFREEZE ?)?


Then, wouldn't the VLF data be based on an attempt to fetch, when 
it doesn't have it, so that you have a cache miss?


After all, VLF does not start doing caching until there has been 
a module that meets the requirements (what, 5 fetches inside of x 
seconds?).


Then there is a second cache trigger. And that is some number of 
hits on a library with some number of modules already cached; VLF 
then starts caching any module requested...


[Sorry, a bit hazy on the exact numbers -- did not commit them to 
memory.]


And I can see behavior that backs this when manually monitoring 
using MFM.


The exception being when you cross the 90% utilization mark... 
Then trim is forced, so that the now eligible requested module 
can be put into cache. So those modules eligible for trim get 
marked (NOTE, that is MARKED) for deletion. If one of those 
modules gets requested before the delete cycle hits, the delete 
flag is turned off.


More to tuning this sucker than I really wanted to get into. 
Hence my prior comment about a certain ISV and their products 
that pre-date LLA (Library Look Aside, not sure about Link List 
Lookaside) and VLF.


Regards,
Steve Thompson



Re: VLF caching

2014-11-26 Thread Vernooij, CP (ITOPT1) - KLM


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Bob Shannon
Sent: 26 November, 2014 17:20
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: VLF caching

SNIPPAGE



Correct, that is why you also see Deletes in the LLA VLF cache. If LLA has a 
module that qualifies for the cache, it can delete a module that does not 
qualify anymore to make room for the new one. Therefore you will see deletes and 
adds of modules caused by LLA management of the cache, and hardly any trims 
caused by VLF management of the cache.
The other exploiters push 'everything' into the cache and will find out later 
if the object is still there. VLF hit ratio and trim statistics are indeed 
useful here.

Kees.



Re: VLF caching

2014-11-26 Thread Vernooij, CP (ITOPT1) - KLM


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Steve Thompson
Sent: 26 November, 2014 21:08
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: VLF caching

SNIPPAGE



No, here I read a common misconception about LLA and VLF working.

LLA module caching and directory freeze are separate functions. Directories are 
kept completely in LLA's private storage. Modules are cached in VLF.

LLA fetches only modules from the VLF cache if it knows it is still there, 
hence the 100% VLF hit ratio.

VLF does not do caching; VLF exploiters cache objects into the VLF cache (LLA, 
TSO CLIST, Catalog etc.).
The '5 fetches' algorithm, together with some complex calculations about 
memory use, cache efficiency etc., is done by LLA, to determine if a module is 
going to be staged to VLF.

Kees.




Re: VLF Caching

2014-11-25 Thread Peter Relson
There is no rule of thumb. It's very likely that 16M is way too small, but 
we're not likely to change the default.
128M is fairly common, I believe.

It's not necessarily the case that a bigger size is better. The more data 
that is cached, the more likely you are to be able to retrieve from the 
cache (which is a good thing) but the longer it may take to locate the 
data (which is not a good thing).

In the distant past some found that sizes of 256M and above were 
detrimental. But no one that I know of actually did any analysis to try to 
figure out why. The guess was that the performance decrease correlated 
to the number of objects that then were cached.

It has probably been well over 10 years since I last heard any discussion 
of this; I have no idea if what was seen was typical or one-off, or if 
it still behaves that way.

Peter Relson
z/OS Core Technology Design



Re: VLF Caching

2014-11-25 Thread Elardus Engelbrecht
Thomas Conley wrote:

I've always had to review the SMF data, then make adjustments.  No ROTs or 
sizing recommendations I'm aware of.

Please forgive my ignorance, but what SMF records? Of course I have looked in 
my SMF book, but must have missed something obvious or used wrong search 
arguments.

Thanks in advance. 


Peter Relson wrote:

There is no rule of thumb. It's very likely that 16M is way too small, but 
we're not likely to change the default. 128M is fairly common, I believe.

Ok if you say so. Could that part of MFM not be included in Health Checker so 
we all can see what sizes are recommended?

Groete / Greetings
Elardus Engelbrecht



Re: VLF Caching

2014-11-25 Thread Vernooij, CP (ITOPT1) - KLM
That's a tricky question: 
VLF writes SMF 41 records, but they are unusable for LLA. Since LLA manages its 
VLF cache and knows what is in it and what is not, it will always have a 100% 
hit ratio on its VLF cache, which will be reported by VLF type 41 records.
Via LLA exits 1 and 2, we write SMF records with useful LLA information, and 
these records show how many fetches LLA does from its VLF cache and how many 
from disk. From that ratio, we decide if the VLF cache is sufficient or not.

Kees.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Elardus Engelbrecht
Sent: 25 November, 2014 14:04
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: VLF Caching

SNIPPAGE


Re: VLF Caching

2014-11-25 Thread Elardus Engelbrecht
Vernooij, CP (ITOPT1) - KLM wrote:

That's a tricky question: VLF writes SMF 41 records, but they are unusable for 
LLA. Since LLA manages its VLF cache and knows what is in it and what is not, 
it will always have a 100% hit ratio on its VLF cache, which will be reported 
by VLF type 41 records. Via LLA exits 1 and 2, we write SMF records with useful 
LLA information, and these records show how many fetches LLA does from its VLF 
cache and how many from disk. From that ratio, we decide if the VLF cache is 
sufficient or not. 

Hmmm. Very interesting. I had a gut feeling about SMF 41, but the contents 
seemed (at least to me) not to answer this thread. I should have looked 
at LLA too. 

Thanks, I really appreciate your informative reply; you cured my curiosity! It 
is much appreciated! 

Groete / Greetings
Elardus Engelbrecht



Re: VLF Caching

2014-11-25 Thread Vernooij, CP (ITOPT1) - KLM
SYS1.SAMPLIB members CSVLLIX1 and CSVLLIX2 provide samples that write statistics 
to SMF records from LLA exits 1 and 2, and that do other things, such as force 
LLA to stage modules to VLF when its statistics would have indicated otherwise.

Kees.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Elardus Engelbrecht
Sent: 25 November, 2014 14:40
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: VLF Caching

SNIPPAGE


Re: VLF Caching

2014-11-25 Thread Steve Thompson

On 11/25/2014 08:03 AM, Elardus Engelbrecht wrote:

Thomas Conley wrote:


I've always had to review the SMF data, then make adjustments.  No ROTs or 
sizing recommendations I'm aware of.


Please forgive my ignorance, but what SMF records? Of course I have looked in 
my SMF book, but must have missed something obvious or used wrong search 
arguments.

Thanks in advance.

SNIPPAGE

Embarrassing, but I don't know which SMF records get involved. I 
would have to go dig through the various ISV doc to find that out.


However, apparently CP-Expert knows which ones to be looking at. 
I'm not sure about MXG or MICS.


But CP-Expert gives us a report that shows how much cache you 
have for the CSVLLA class, and how much you used of it (don't 
know how it does that). And somehow it finds all the cache trim 
events and reports on that.


Regards,
Steve Thompson



Re: VLF Caching

2014-11-25 Thread Vernooij, CP (ITOPT1) - KLM
It looks as if it uses VLF's SMF 41 records. They provide the VLF view of the 
LLA cache: VLF knows how large the cache is, how much is used, and how much 
trimming has been done, as well as what LLA's hit ratio is; but that ratio 
will always be near 100%, as I explained before.

If it also tells you, per library, how many fetches LLA resolved from its VLF 
cache and how many from disk, then it has access to the LLA statistics, which 
can tell you if the VLF cache is sufficient or should be enlarged. If so, it 
would be interesting to know where it gets those metrics from.

Kees.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Steve Thompson
Sent: 25 November, 2014 15:02
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: VLF Caching

SNIPPAGE


Re: VLF Caching

2014-11-25 Thread Martin Packer
We stopped asking for SMF 41-3 LONG ago: the result for LLA was always the 
same.

We're more or less stuck with "double it, then double it again if things 
get better". Not sure how we'd tell if things actually DID get better.

So the exits route is good for the very curious.

Cheers, Martin

Martin Packer,
zChampion, Principal Systems Investigator,
Worldwide Banking Center of Excellence, IBM

+44-7802-245-584

email: martin_pac...@uk.ibm.com

Twitter / Facebook IDs: MartinPacker
Blog: 
https://www.ibm.com/developerworks/mydeveloperworks/blogs/MartinPacker



From:   Elardus Engelbrecht elardus.engelbre...@sita.co.za
To: IBM-MAIN@LISTSERV.UA.EDU
Date:   25/11/2014 13:39
Subject:Re: VLF Caching
Sent by:IBM Mainframe Discussion List IBM-MAIN@LISTSERV.UA.EDU



Vernooij, CP (ITOPT1) - KLM wrote:

That a tricky question:  VLF writes SMF 41 records, but they are unusable 
for LLA. Since LLA manages its VLF cache and knows what is in it and what 
is not, it will always have a 100% hitratio on its VLF cache, which will 
be reported by VLF records 41. Via LLA exits 1 and 2, we write SMF records 
with useful LLA information and these records provide how many fetches LLA 
does from its VLF cache and how many from disk. From that ratio, we decide 
if the VLF cache is sufficient or not. 

Hmmm. Very interesting. I had a gut feeling about SMF 41, but the 
contents suggested (at least to me) that it does not answer this thread. I 
should have looked at LLA too. 

Thanks, I really appreciate your informative reply, you cured my 
curiosity! It is much appreciated! 

Groete / Greetings
Elardus Engelbrecht

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: VLF Caching

2014-11-25 Thread Steve Thompson

On 11/25/2014 07:29 AM, Peter Relson wrote:

There is no rule of thumb. It's very likely that 16M is way too small, but
we're not likely to change the default.
128M is fairly common, I believe.

It's not necessarily the case that a bigger size is better. The more data
that is cached, the more likely you are to be able to retrieve from the
cache (which is a good thing) but the longer it may take to locate the
data (which is not a good thing).

In the distant past some found that sizes of 256M and above were
detrimental. But no one that I know of actually did any analysis to try to
figure out why. The guess was that the performance decrease correlated
to the number of objects that then were cached.

It has probably been well over 10 years since I last heard any discussion
of this; I have no idea if what was seen was typical or one-off, or if
it still behaves that way.

Peter Relson
z/OS Core Technology Design


SNIPPAGE
Thank you.

I can tell you that 16MB is TOO SMALL. And so far, in our shop, 
32MB is too small.


The diminishing returns problem is going to be interesting to 
find. I know that you want to cache hi-use modules. But, if the 
module is beyond a certain size, you may not want to cache it - 
DASD fetch may be better.


So, if you have sufficient cache, you can get all of your stuff 
in there w/o hitting the 10% headroom issue. And if the speed 
of XMEM is hi enough (based on CPU avail)...


But, if the cache is being used by all large modules, you can get 
into a cache thrash situation (there is an old APAR/PTF for a case where 
it would loop) -- the fastest way I know to get into that situation (it 
can be done with a mix...).


We have noticed that at 16MB, we BURN MSUs in VLF TRIM (sorry, 
forgot the module name).


So another aspect of tuning is setting the time lower to get 
things to be trim-able sooner. But that can have diminishing 
returns, so it may need to be set higher (sigh).


Enter the EXITs. Now you have to have someone in your shop who 
knows ALC, and management has to give clear guidance on the rules so 
they can be effected... (sigh).


How about IBM and CA play nice and get CA the right stuff so they 
can fix PMO and QuickFetch for PDSEs (I can't believe I'm writing 
this!!).


Later,
Steve Thompson

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: VLF Caching

2014-11-25 Thread Steve Thompson

On 11/25/2014 08:03 AM, Elardus Engelbrecht wrote:
SNIPPAGE


Ok if you say so. Could that part of MFM not be included in Health Checker so 
we all can see what sizes are recommended?

Groete / Greetings
Elardus Engelbrecht

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



MFM is a non-supported (Ok, support as time is available) product 
from IBM. You have to sign some doc, etc. So it is not generally 
in use by z/OS sites.


We have it because someone saw a SHARE presentation on it. I have 
been using it in a test LPAR to help demonstrate certain issues 
(you may draw your own conclusions from another post by me in 
this thread).


As was indicated by someone else, you may have to parse the SMF 
records and process them to figure it out. OR, if you have an ISV 
product that is already doing analysis of SMF data...


Then you get to play the Modify VLF command game to change the 
MAXVIRT and time (I forgot the keyword for this) for tuning...
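
For illustration only, a minimal COFVLFxx sketch and the replace command 
might look something like this (the member suffix and MAXVIRT value are 
invented; MAXVIRT is counted in 4K blocks, so 16384 here would mean 64MB, 
and ALERTAGE is my reading of the z/OS 2.1 trim-age alert parm - check the 
parmlib doc before using any of this):

   /* COFVLF64 - hypothetical member for the LLA module cache   */
   CLASS NAME(CSVLLA)        /* class LLA uses for cached modules */
         EMAJ(LLA)           /* major name, as in the default member */
         MAXVIRT(16384)      /* 64MB, specified in 4K blocks     */
         ALERTAGE(60)        /* alert if objects trim younger than this */

   F VLF,REPLACE,NN=64       (pick up COFVLF64 dynamically; z/OS 2.1)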


One size does not fit all. And one size may be good during the 
day while another may be better during the batch cycles/window 
(assuming you have one).


Regards,
Steve Thompson

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: VLF Caching

2014-11-25 Thread Elardus Engelbrecht
Steve Thompson wrote:

MFM is a non-supported (Ok, support as time is available) product from IBM. 
You have to sign some doc, etc. So it is not generally in use by z/OS sites.

I have used it in ancient times. This is why I asked for inclusion in HC.

Groete / Greetings
Elardus Engelbrecht

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: VLF Caching

2014-11-25 Thread Vernooij, CP (ITOPT1) - KLM
I have 200MB and have a 99% hit ratio on 20 LLA-managed libraries, as reported 
by LLA statistics: (LLA fetches from its VLF cache)/(LLA fetches from VLF + LLA 
fetches from disk).

Kees.
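
As a sketch of that arithmetic (the counter names are invented; the real 
fields depend on the user SMF records your CSVLLIX1/CSVLLIX2 exits write):

   # Hypothetical counters summed from the user SMF records
   # written by the CSVLLIX1/CSVLLIX2 LLA exits.
   fetches_from_vlf = 992134    # LLA fetches satisfied from the VLF cache
   fetches_from_dasd = 10021    # LLA fetches that had to go to disk

   hit_ratio = fetches_from_vlf / (fetches_from_vlf + fetches_from_dasd)
   print(f"LLA hit ratio: {hit_ratio:.1%}")   # about 99.0% here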

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Steve Thompson
Sent: 25 November, 2014 15:29
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: VLF Caching

SNIPPAGE



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: VLF Caching

2014-11-25 Thread Tidy, David (D)
I had thought (from 
https://www.ibm.com/developerworks/community/blogs/MartinPacker/entry/lla_and_measuring_its_use?lang=en
 ) that the SMF records did not really tell us very much about the tuning. But 
it could have been that I didn't really understand them

So we used the healthcheck iteratively to watch the trimmed age of the 
exceptions, and arrived at 64MB. At 4MB the trimmed age was 1 day, at 32MB 11 
days, at 48MB 30 days. At 64MB all our sysplexes seem not to be trimming within 
60 days any more.

That was a while ago, and we are just starting to look at the IGGCAS class 
which has also been tripping the healthcheck exception (at 1 day).

Best regards, 
David Tidy     
IS Technical Management/SAP-Mf     
Dow Benelux B.V.                      
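
For anyone repeating this kind of iteration, the loop is just: raise MAXVIRT, 
let the check run (or force it), and read the trimmed ages out of its output. 
A sketch of the operator commands, assuming the usual HZSPROC proc name for 
the Health Checker address space:

   F HZSPROC,RUN,CHECK=(IBMVLF,VLF_MAXVIRT)            (force the check now)
   F HZSPROC,DISPLAY,CHECK=(IBMVLF,VLF_MAXVIRT),DETAIL (review its output)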

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Vernooij, CP (ITOPT1) - KLM
Sent: 25 November 2014 15:47
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: VLF Caching

I have 200MB and have a 99% hitratio on 20 LLA managed libraries, as reported 
by LLA statistics (LLA fetches from its VLF cache)/(LLA fetches from VLF + LLA 
fetches from disk).

Kees.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Steve Thompson
Sent: 25 November, 2014 15:29
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: VLF Caching

SNIPPAGE

Re: VLF Caching

2014-11-25 Thread Vernooij, CP (ITOPT1) - KLM
I am sure this measure is useless for LLA's VLF cache. 
LLA manages its cache with complex calculations and only stages to its cache if 
room is available. Based on its measurements LLA could trim modules from cache 
to stage other modules that provide more benefit. 
That is why 1) its hit ratio is near 100% and 2) trimming is very low. It 
optimizes its use of the current size of the cache. 
If the cache were larger, it would have staged more modules and therefore 
eliminated more DASD fetches, but you cannot tell, nor predict, this from the 
current VLF statistics.

Kees.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Tidy, David (D)
Sent: 25 November, 2014 16:04
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: VLF Caching

SNIPPAGE

Re: VLF Caching

2014-11-25 Thread Martin Packer
Well I obviously haven't learnt much in the last 9 years. :-) Or, more 
positively, at least I'm consistent. :-)

Cheers, Martin

Martin Packer,
zChampion, Principal Systems Investigator,
Worldwide Banking Center of Excellence, IBM

+44-7802-245-584

email: martin_pac...@uk.ibm.com

Twitter / Facebook IDs: MartinPacker
Blog: 
https://www.ibm.com/developerworks/mydeveloperworks/blogs/MartinPacker



From:   Tidy, David (D) dt...@dow.com
To: IBM-MAIN@LISTSERV.UA.EDU
Date:   25/11/2014 15:04
Subject:Re: VLF Caching
Sent by:IBM Mainframe Discussion List IBM-MAIN@LISTSERV.UA.EDU



SNIPPAGE

Re: VLF Caching

2014-11-25 Thread Thomas Conley

On 11/25/2014 8:03 AM, Elardus Engelbrecht wrote:

Thomas Conley wrote:


I've always had to review the SMF data, then make adjustments.  No ROTs or 
sizing recommendations I'm aware of.


Please forgive my ignorance, but what SMF records? Of course I have looked in 
my SMF book, but must have missed something obvious or used wrong search 
arguments.

Thanks in advance.


SNIPPAGE



41 subtype 3.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: VLF Caching

2014-11-25 Thread Anthony Thompson
Not sure this is worth mentioning, but IBM's Health Checker has a check called 
IBMVLF,VLF_MAXVIRT that tells you how many times VLF has trimmed objects from 
various caches, depending on the parameters specified on the check. The check 
runs hourly by default. Of course, the check won't give you the same level of 
detail as a SMF analysis, but if all you are looking for is 'how big to make 
our memory cache to avoid thrashing/trimming', it might be a simpler thing to 
look at rather than extracting/analysing SMF repeatedly.

Ant.
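
A sketch of driving that check by hand rather than waiting for the hourly 
interval (HZSPROC is the usual Health Checker proc name, and VERBOSE=YES is 
my understanding of how to get the fuller trim-age output mentioned elsewhere 
in the thread):

   F HZSPROC,UPDATE,CHECK=(IBMVLF,VLF_MAXVIRT),VERBOSE=YES
   F HZSPROC,RUN,CHECK=(IBMVLF,VLF_MAXVIRT)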

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Steve Thompson
Sent: Tuesday, 25 November 2014 1:24 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: VLF Caching

SNIPPAGE

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: VLF Caching

2014-11-25 Thread Vernooij, CP (ITOPT1) - KLM
Short answer: no. Any information from VLF about the LLA cache is useless. LLA 
manages its VLF cache such that it always seems to perform perfectly from the 
VLF point of view. The real performance is only available from LLA statistics.

Kees.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Anthony Thompson
Sent: 26 November, 2014 4:48
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: VLF Caching

SNIPPAGE




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


VLF Caching

2014-11-24 Thread Steve Thompson
I'm using MFM (Module Fetch Monitor) and CP-Expert and we found 
that we needed to increase the cache for CSVLLA. So we set it up 
to 32MB (from the default of 16MB).


Well, we ran for a bit like this, only to find that we need to set it 
higher because of how often we are going through trim.


Is there a ROT for setting of the MAXVIRT for CSVLLA class?

Right now we have LNKLST in LLA and one or two other Libraries in 
their own CSVLLAxx member.


I mention this because we are considering adding about 12 high 
use PDSEs. And some of those modules in those libraries are ~9MB 
each.
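
For context, a CSVLLAxx member naming such libraries might look roughly like 
this (the data set names are invented; LIBRARIES is my understanding of the 
statement that puts extra data sets under LLA management, with the link list 
kept frozen as usual - verify against the parmlib doc):

   FREEZE(-LNKLST-)
   LIBRARIES(PROD.APPL.LOADLIB,
             PROD.APPL.PDSE.LOAD)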


I'm thinking we should go to 64MB, but perhaps we should go 
higher. I'm just not aware of anything that gives us an idea of 
how much to set this to.


In answer to the anticipated question, yes, we have sufficient 
C-Store in each LPAR to allow VLF to use north of 256MB for the 
CSVLLA class.


Regards,
Steve Thompson

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: VLF Caching

2014-11-24 Thread Thomas Conley

On 11/24/2014 4:30 PM, Steve Thompson wrote:

SNIPPAGE


Steve,

I've always had to review the SMF data, then make adjustments.  No ROTs 
or sizing recommendations I'm aware of.


Regards,
Tom Conley

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: VLF Caching

2014-11-24 Thread Steve Thompson

On 11/24/2014 05:29 PM, Thomas Conley wrote:

SNIPPAGE


Thanks Tom.

Yeah, we are running SMF records, and CP-Expert tells us how big 
the cache is, how much we've used, and how often we trim.


I was just hoping someone had come up with something a little 
faster for this.


I guess one could set it to 512MB and then whack it down from 
there. I certainly hope we aren't going to be using more than that.


Regards,
Steve Thompson

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN