Hi Richard, one thing to note.
You tried "mmperfmon query GPFSFilesetQuota" to get metric data. So you used the sensor's name instead of a metric name. And compared it to "mmperfmon query cpu_user" where you used the metric name. mmperfmon will not return data, if you use the sensor's name instead of a metric's name. I bet you got something like this returned: [root@test-51 ~]# mmperfmon query GPFSFilesetQuota Error: no data available for query . mmperfmon: Command failed. Examine previous error messages to determine cause. The log entries you found just tell you, that the collector does not know any metric named "GPFSFilesetQuota". Please try the query again with gpfs_rq_blk_current or gpfs_rq_file_current. If the collector never got any data for that metrics, it also does not know those metrics' names. But since you do not see any data in the GUI this might be the case. In this case please check with "mmperfmon config show" if the restrict field is set correctly. You should use the long gpfs name and not the hostname. You can check, if the configuration file was distributed correctly in checking the /opt/IBM/zimon/ZIMonSensors.cfg on the node that is supposed to start this monitor. If the mmperfmon command was able to identify the restrict value correctly, this node should have your configured period value instead of 0 in ZIMonSensors.cfg under the GPFSFilesetQuota sensor. All other nodes should include a period equal to 0. Furthermore, of course, the period for GPFSFilesetQuota should be higher than 0. Recommended is a value of 3600 (once per hour) since the underlying command is heavier on the system than other sensors. Change the values with the "mmperfmon config update" command, so that it is distributed in the system. E.g. "mmperfmon config update GPFSFilesetQuota.restrict=<long_gpfs_name>" and "mmperfmon config update GPFSFilesetQuota.period=3600" Mit freundlichen Grüßen / Kind regards Greim, Anna Software Engineer, Spectrum Scale Development IBM Systems Phone: +49-7034-2740981 IBM Deutschland Research & Development GmbH Mobil: +49-172-2646541 Am Weiher 24 Email: [email protected] 65451 Kelsterbach Germany IBM Deutschland Research & Development GmbH / Vorsitzende des Aufsichtsrats: Martina Koederitz Geschäftsführung: Dirk Wittkopp Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: "Sobey, Richard A" <[email protected]> To: "'[email protected]'" <[email protected]> Date: 10/10/2018 17:43 Subject: [gpfsug-discuss] Performance collector no results for Capacity Sent by: [email protected] Hi all, Maybe I?m barking up the wrong tree but I?m debugging why I don?t get a nice graph in the GUI for fileset capacity, even though the GUI does know about things such as capacity and inodes and usage. So off I go to the CLI to run ?mmperfmon query GPFSFilesetQuota? and I get this: Oct-10 16:33:28 [Info ] QueryEngine: (fd=64) query from 127.0.0.1: get metrics GPFSFilesetQuota from node=icgpfsq1 last 10 bucket_size 1 Oct-10 16:33:28 [Info ] QueryParser: metric: GPFSFilesetQuota Oct-10 16:33:28 [Warning] QueryEngine: searchForMetric: could not find metaKey for given metric GPFSFilesetQuota, returning. Oct-10 16:33:28 [Info ] QueryEngine: [fd=64] no data available for query Is this a golden ticket to my problem or should I be checking elsewhere? I?m following a troubleshooting guide here: https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.1/com.ibm.spectrum.scale.v5r01.doc/bl1pdg_guiperfmonissues.htm and from the page directly within the GUI server itself. 
Mit freundlichen Grüßen / Kind regards

Greim, Anna
Software Engineer, Spectrum Scale Development
IBM Systems

Phone: +49-7034-2740981
Mobil: +49-172-2646541
Email: [email protected]

IBM Deutschland Research & Development GmbH
Am Weiher 24, 65451 Kelsterbach, Germany
Vorsitzende des Aufsichtsrats (Chairwoman of the Supervisory Board): Martina Koederitz
Geschäftsführung (Management): Dirk Wittkopp
Sitz der Gesellschaft (Registered office): Böblingen / Registergericht (Commercial register): Amtsgericht Stuttgart, HRB 243294


From: "Sobey, Richard A" <[email protected]>
To: "'[email protected]'" <[email protected]>
Date: 10/10/2018 17:43
Subject: [gpfsug-discuss] Performance collector no results for Capacity
Sent by: [email protected]

Hi all,

Maybe I'm barking up the wrong tree, but I'm debugging why I don't get a nice graph in the GUI for fileset capacity, even though the GUI does know about things such as capacity, inodes and usage. So off I go to the CLI to run "mmperfmon query GPFSFilesetQuota" and I get this:

  Oct-10 16:33:28 [Info ] QueryEngine: (fd=64) query from 127.0.0.1: get metrics GPFSFilesetQuota from node=icgpfsq1 last 10 bucket_size 1
  Oct-10 16:33:28 [Info ] QueryParser: metric: GPFSFilesetQuota
  Oct-10 16:33:28 [Warning] QueryEngine: searchForMetric: could not find metaKey for given metric GPFSFilesetQuota, returning.
  Oct-10 16:33:28 [Info ] QueryEngine: [fd=64] no data available for query

Is this a golden ticket to my problem, or should I be checking elsewhere? I'm following a troubleshooting guide here: https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.1/com.ibm.spectrum.scale.v5r01.doc/bl1pdg_guiperfmonissues.htm and from the page directly within the GUI server itself.

Notably, other things work ok:

  [root@icgpfsq1 richard]# mmperfmon query cpu_user

  Legend:
   1: icgpfsq1|CPU|cpu_user

  Row           Timestamp cpu_user
    1 2018-10-10-16:41:09     0.00
    2 2018-10-10-16:41:10     0.25
    3 2018-10-10-16:41:11     0.50
    4 2018-10-10-16:41:12     0.50
    5 2018-10-10-16:41:13     0.50
    6 2018-10-10-16:41:14     0.25
    7 2018-10-10-16:41:15     1.25
    8 2018-10-10-16:41:16     2.51
    9 2018-10-10-16:41:17     0.25
   10 2018-10-10-16:41:18     0.25

I'm running 5.0.1-2 on all nodes except the NSD servers, which still run 5.0.0.2.

Thanks
Richard

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
