Hi,

We have been working to convert applications currently using HFS to
zFS.  In converting one large application we ran into performance
issues that could not be addressed dynamically, and we had to move the
application back to HFS.  Overall zFS has performed well and been
reliable, but for this application, which created file systems with
30,000 to 40,000 small files in a single directory updated from
multiple jobs, performance was a problem and we could not meet service
levels.  We got good support from zFS Level 2 and made several tuning
changes dynamically using zfsadm config; however, dir_cache_size can
NOT be changed this way, so the IOEFSPRM parm had to be updated and
zFS restarted.  If you are making non-trivial use of zFS, a zFS
restart is likely to mean an IPL, as it does here.  The default for
dir_cache_size is being changed from 2M to 32M in APAR OA20180.
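
For example, the other cache sizes can be changed on the fly from the
z/OS UNIX shell.  A minimal sketch, assuming the zfsadm config syntax
as we understand it from the doc (verify against SC24-5989 for your
release before use; the sizes shown are ours, not a recommendation):

  # Dynamic changes, no zFS restart needed:
  zfsadm config -user_cache_size 300M
  zfsadm config -meta_cache_size 256M
  zfsadm config -log_cache_size 128M

  # dir_cache_size is the exception: it must be coded in
  # IOEFSPRM/IOEPRM00 and is only picked up at zFS restart.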

Our zFS parameters, for your information:

**********************************************************************
* zSeries File System (zFS)  IOEPRM00
* For a description of zFS parameters, refer to the
*     zSeries File System Administration, SC24-5989.
**********************************************************************
* Following are the ZFS config parameters currently in use:           
aggrfull(90,5)                                                        
aggrgrow=on                                                           
dir_cache_size=32M                                                    
log_cache_size=128M                                                   
meta_cache_size=256M                                                  
user_cache_size=300M                                                  
*  

I am NOT saying you should cut and paste those into your system!
Don't do it!

I am saying that if you are moving large applications to zFS, consider
staging the change to dir_cache_size in advance, either by simply
coding it in IOEPRM00 or by installing the PTF for the APAR.  I feel
pretty comfortable relating that now that the APAR is open.

As for the log, meta, and user cache sizes: get performance data on
your system with your application and adjust them if needed.
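
zFS will report sizes and hit ratios for each cache.  A sketch of the
query commands as documented for zfsadm (option names may vary by
release, so check your level of the doc):

  zfsadm query -usercache     # user file data cache
  zfsadm query -metacache     # metadata cache
  zfsadm query -logcache      # log file cache
  zfsadm query -dircache      # directory buffer cache

Low hit ratios under load are the usual signal that a cache is too
small for the workload.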

We started out looking at the RMF Monitor III zFS displays, but in
working with zFS Level 2 we found they didn't have enough detail to
understand what was happening.  We put in automation to QUERY and then
RESET the zFS statistics every 30 minutes.  This has worked well for
us.

//*-------------------------------------------------------------------
//* THIS JCL CAN BE FOUND IN 'SYS1.SYSIN(ZFSCHECK)'
//*                                                                   
//* Report on zFS performance for the previous 30 minute interval and 
//* then reset for the next interval.                                 
//* This job is used by the performance team and z/OS team to monitor 
//* zFS performance.                                                  
//*-------------------------------------------------------------------
//*                                                                   
//TSO      EXEC PGM=IKJEFT01                                          
//SYSPRINT DD SYSOUT=*                                                
//SYSUDUMP DD SYSOUT=*                                                
//*YSTSPRT DD SYSOUT=*                                                
//SYSTSPRT DD DSN=SYSPT.ZFS.ASYS.QUERY.ALL(+01),                      
//       DISP=(NEW,CATLG,DELETE),                                     
//       SPACE=(CYL,(1,1))
//SYSTSIN  DD *                                                       
 PROFILE NOMSGID                                                      
 TIME                                                                 
 OC C(F ZFS,QUERY,ALL) WAIT(15)                                       
 OC C(F ZFS,RESET,ALL) WAIT(15)
/*

Note: OC is OPSCMD; we use CA-OPS/MVS for automation, and it works
nicely to issue a command and capture the response.  You could issue
the commands using any facility and let the output reside in SYSLOG.
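
If you have no automation product, a bare-bones sketch using JCL
COMMAND statements (assumes your installation permits operator
commands from the job stream; note the commands are issued at JCL
conversion time, and the responses land in SYSLOG, not the job
output):

//ZFSQUERY JOB (ACCT),'ZFS STATS',CLASS=A,MSGCLASS=X
//* Issue the zFS query/reset; responses go to SYSLOG
// COMMAND 'F ZFS,QUERY,ALL'
// COMMAND 'F ZFS,RESET,ALL'
//DUMMY    EXEC PGM=IEFBR14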

I SHAREd this pain so you can avoid at least one of the land mines I
stepped on.  We are still learning, but our experience with zFS has
been positive, and this application is going back to zFS tonight,
after I got an IPL on that image last weekend.

        Best Regards, 

                Sam Knutson, GEICO 
                Performance and Availability Management 
                mailto:[EMAIL PROTECTED] 
                (office)  301.986.3574 

"Think big, act bold, start simple, grow fast..."


