Hi Bryan,

AFM supports GPFS multi-cluster, and we have customers already using this successfully. Are you using the GPFS backend? Can you explain your configuration in detail? If ls is hung, it would have generated some long waiters. Maybe this should be pursued separately via a PMR; you can ping me the details directly if needed, along with opening a PMR per the IBM service process.
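If it helps with the PMR data collection, the waiters can be captured with something along these lines (run as root on the nodes where ls hangs and on the AFM gateway nodes; the output path is only illustrative):

   mmdiag --waiters > /tmp/waiters.$(hostname).out     # snapshot of the current long waiters on this node
   gpfs.snap                                           # full diagnostic collection to attach to the PMR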
As far as prefetch is concerned, right now it is limited to one prefetch job per fileset. Each job is itself multi-threaded and can use multiple nodes to pull in data, depending on configuration. The "afmNumFlushThreads" tunable controls the number of threads used by AFM. This parameter can be changed via the mmchfileset command (the mmchfileset documentation doesn't show this parameter for some reason; I will have that updated). For example:

   mmchfileset fs1 prefetchIW -p afmnumflushthreads=5
   Fileset prefetchIW changed.

List the change:

   mmlsfileset fs1 prefetchIW --afm -L
   Filesets in file system 'fs1':

   Attributes for fileset prefetchIW:
   ===================================
   Status                                Linked
   Path                                  /gpfs/fs1/prefetchIW
   Id                                    36
   afm-associated                        Yes
   Target                                nfs://hs21n24/gpfs/fs1/singleTargetToUseForPrefetch
   Mode                                  independent-writer
   File Lookup Refresh Interval          30 (default)
   File Open Refresh Interval            30 (default)
   Dir Lookup Refresh Interval           60 (default)
   Dir Open Refresh Interval             60 (default)
   Async Delay                           15 (default)
   Last pSnapId                          0
   Display Home Snapshots                no
   Number of Gateway Flush Threads       5
   Prefetch Threshold                    0 (default)
   Eviction Enabled                      yes (default)

AFM parallel I/O can be set up so that multiple gateway (GW) nodes are used to pull in data; more details are available here:
http://www-01.ibm.com/support/knowledgecenter/SSFKCN_4.1.0/com.ibm.cluster.gpfs.v4r1.gpfs200.doc/bl1adv_afmparallelio.htm
and this link outlines the tuning parameters for parallel I/O, along with others:
http://www-01.ibm.com/support/knowledgecenter/SSFKCN_4.1.0/com.ibm.cluster.gpfs.v4r1.gpfs200.doc/bl1adv_afmtuning.htm%23afmtuning
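As a rough sketch of what that setup involves (the exact parameter names, units, and defaults are in the tuning link above; the node names and values below are purely illustrative, not recommendations):

   # designate more than one gateway node so parallel I/O has somewhere to spread the work
   mmchnode --gateway -N gw1,gw2

   # cluster-wide AFM read tuning knobs (placeholder values; check the tuning page before changing anything)
   mmchconfig afmNumReadThreads=8
   mmchconfig afmParallelReadThreshold=1024
   mmchconfig afmParallelReadChunkSize=128

   # with an NFS backend, home also needs multiple exports mapped to the gateways - see the parallel I/O link

With something like that in place, a single prefetch job can spread reads of large files across the gateway nodes rather than being tied to one.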
Regards,
Kalyan
GPFS Development
EGL D Block, Bangalore

From: Bryan Banister <[email protected]>
To: gpfsug main discussion list <[email protected]>
Date: 10/06/2014 09:57 PM
Subject: Re: [gpfsug-discuss] AFM limitations in a multi-cluster environment, slow prefetch operations
Sent by: [email protected]

We are using 4.1.0.3 on the cluster with the AFM filesets,
-Bryan

From: [email protected] [mailto:[email protected]] On Behalf Of Sven Oehme
Sent: Monday, October 06, 2014 11:28 AM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] AFM limitations in a multi-cluster environment, slow prefetch operations

Hi Bryan,

in 4.1 AFM uses multiple threads for reading data; this was different in 3.5. What version are you using?

thx. Sven

On Mon, Oct 6, 2014 at 8:36 AM, Bryan Banister <[email protected]> wrote:

Just an FYI to the GPFS user community,

We have been testing out GPFS AFM file systems in our required process of file data migration between two GPFS file systems. The two GPFS file systems are managed in two separate GPFS clusters. We have a third GPFS cluster for compute systems. We created new independent AFM filesets in the new GPFS file system that are linked to directories in the old file system. Unfortunately, access to the AFM filesets from the compute cluster completely hangs. Access to the other parts of the second file system is fine. This limitation/issue is not documented in the Advanced Admin Guide.

Further, we performed prefetch operations using a file list with the mmafmctl command, but the process appears to be single-threaded and the operation was extremely slow as a result. According to the Advanced Admin Guide, it is not possible to run multiple prefetch jobs on the same fileset:

   GPFS can prefetch the data using the mmafmctl Device prefetch -j FilesetName command (which specifies a list of files to prefetch). Note the following about prefetching: it can be run in parallel on multiple filesets (although more than one prefetching job cannot be run in parallel on a single fileset).

We were able to quickly create the "--home-inode-file" from the old file system using the mmapplypolicy command as the documentation describes. However, the AFM prefetch operation is so slow that we are better off running parallel rsync operations between the file systems versus using the GPFS AFM prefetch operation.

Cheers,
-Bryan
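For completeness, the list-file step Bryan describes is typically a small policy scan feeding mmafmctl; a minimal sketch, with the rule name, paths, and fileset name below being purely illustrative:

   # policy.rules - emit every file under the scanned path into a list
   RULE EXTERNAL LIST 'prefetch' EXEC ''
   RULE 'all' LIST 'prefetch'

   # run the scan against the old (home) file system; -I defer only writes the list file
   mmapplypolicy /gpfs/oldfs/somedir -P policy.rules -f /tmp/afm -I defer

   # feed the generated list (/tmp/afm.list.prefetch) to the prefetch job for the matching AFM fileset
   mmafmctl fs1 prefetch -j prefetchIW --home-inode-file /tmp/afm.list.prefetch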
