I think what you'll need is to set
name = "GPFSDisk"
This should report the utilization of the directly attached disks.
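For illustration, once that sensor is active you would expect to see a stanza like this in mmperfmon config show (a minimal sketch; the 10-second period is illustrative, not a recommendation):

{
    name = "GPFSDisk"
    period = 10    # illustrative collection interval in seconds
},

If the stanza is present but has period = 0, something like this should switch it on:

mmperfmon config update GPFSDisk.period=10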
cheers olsf
From: Mark Bush <[email protected]>
To: gpfsug main discussion list <[email protected]>
Date: 12/19/2017 04:50 PM
Subject: Re: [gpfsug-discuss] pmcollector and NSD perf
Sent by: [email protected]
It appears number 3 on your list is the case. My nodes are all SAN attached, and until I get separate CES nodes no NSD protocol traffic is necessary (I currently run CES on the NSD servers, just for a test cluster).
Mark
From: [email protected] [mailto:[email protected]] On Behalf Of Markus Rohwedder
Sent: Tuesday, December 19, 2017 9:24 AM
To: gpfsug main discussion list <[email protected]>
Subject: Re: [gpfsug-discuss] pmcollector and NSD perf
Hello Mark,
The NSD sensor is GPFSNSDDisk.
Some things to check:
1. Is the sensor activated?
In a GPFS-managed sensor config, you should be able to see something like
this when you call mmperfmon config show:
{
    name = "GPFSNSDDisk"
    period = 10
    restrict = "nsdNodes"
},
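If the sensor is not listed, or its period is 0, it can usually be switched on with mmperfmon; a sketch, with an illustrative 10-second period:

mmperfmon config update GPFSNSDDisk.period=10
# restrict collection to the NSD servers (value as in the shipped config)
mmperfmon config update GPFSNSDDisk.restrict=nsdNodes

Afterwards you can check whether data arrives, e.g. (gpfs_nsdds_bytes_written should be the NSD write metric, to the best of my knowledge):

mmperfmon query gpfs_nsdds_bytes_written -b 10 -n 5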
2. Perfmon designation
The NSD server nodes should have the perfmon designation.
[root@cache-41 ~]# mmlscluster

GPFS cluster information
========================
  GPFS cluster name:         gpfsgui-cluster-4.localnet.com
  GPFS cluster id:           10583479681538672379
  GPFS UID domain:           localnet.com
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp
  Repository type:           CCR

 Node  Daemon node name       IP address   Admin node name        Designation
------------------------------------------------------------------------------
   1   cache-41.localnet.com  10.0.100.41  cache-41.localnet.com  quorum-perfmon
   2   cache-42.localnet.com  10.0.100.42  cache-42.localnet.com  quorum-gateway-perfmon
   3   cache-43.localnet.com  10.0.100.43  cache-43.localnet.com  gateway-perfmon
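If a node is missing the designation, it can be added with mmchnode, e.g. (node name taken from the example cluster above):

mmchnode --perfmon -N cache-43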
3. Direct disk writes?
One reason why there may be no data on your system is that you are not using
the NSD protocol at all: the clients can write directly to the disks, as in a
SAN environment. In this case the sensor does not catch the transactions.
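You can verify this by mapping the NSDs to local device names on each node:

mmlsnsd -M

If the NSDs resolve to local device paths on the client nodes, the I/O bypasses the NSD servers and the GPFSNSDDisk sensor will not see it. (In that case the GPFSDisk sensor mentioned above is the one to look at.)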
4. Cross-cluster mount
Or maybe you are using a cross-cluster mount; in that case the NSD I/O is served by the NSD servers in the owning cluster, so the sensors there would have to report it.
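You can check whether the file system comes from a remote cluster with:

mmremotefs show all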
Mit freundlichen Grüßen / Kind regards
Dr. Markus Rohwedder
Spectrum Scale GUI Development
Phone: +49 7034 6430190
E-Mail: [email protected]
IBM Deutschland, Am Weiher 24, 65451 Kelsterbach, Germany
IBM Deutschland Research & Development GmbH / Chairwoman of the Supervisory Board: Martina Köderitz / Managing Director: Dirk Wittkopp / Registered office: Böblingen / Registration court: Amtsgericht Stuttgart, HRB 243294
From: Mark Bush <[email protected]>
To: "[email protected]"
<[email protected]>
Date: 12/19/2017 03:30 PM
Subject: [gpfsug-discuss] pmcollector and NSD perf
Sent by: [email protected]
I’ve noticed this in my test cluster, both in 4.2.3.4 and 5.0.0.0, that in the GUI on the monitoring screen with the default view, the NSD Server Throughput graph shows “Performance Collector did not return any data”. I’ve seen that in other items before (SMB, for example) but never for NSD. Is there something that must be enabled in the zimon sensor or collector config file to grab this, or is this a bug?
Mark
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
