Hi,
A simple approach is to use the enhanced dstat statistics on the NSD server side:

example:
cp /usr/lpp/mmfs/samples/util/dstat_gpfsops.py.dstat.0.7 /usr/share/dstat/dstat_gpfsops.py
export DSTAT_GPFS_WHAT="vio,vflush"

dstat -c -n -d -M gpfsops --nocolor      (or simply:  dstat --gpfsops)
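
If you have more than one NSD server, you can push the plugin out in one go with mmdsh -- just a sketch, the node names are the ones from the example below, adjust the list to your own NSD servers:

mmdsh -N gss01,gss02 "cp /usr/lpp/mmfs/samples/util/dstat_gpfsops.py.dstat.0.7 /usr/share/dstat/dstat_gpfsops.py"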

Better, and closer to what you want: configure ZIMon / perfmon.
It generates statistics like this:
mmperfmon query compareNodes gpfs_nsdds_bytes_written 10

Legend:
 1:     gss01.frozen|GPFSNSDDisk|c1_LDATA1core|gpfs_nsdds_bytes_written
 2:     gss01.frozen|GPFSNSDDisk|c1_LDATA1fs1|gpfs_nsdds_bytes_written
 3:     gss01.frozen|GPFSNSDDisk|c1_LDATA2core|gpfs_nsdds_bytes_written
 4:     gss01.frozen|GPFSNSDDisk|c1_LDATA2fs1|gpfs_nsdds_bytes_written
 5:     gss01.frozen|GPFSNSDDisk|c1_LMETA1core|gpfs_nsdds_bytes_written
 6:     gss01.frozen|GPFSNSDDisk|c1_LMETA1fs1|gpfs_nsdds_bytes_written
 7:     gss01.frozen|GPFSNSDDisk|c1_LMETA2core|gpfs_nsdds_bytes_written
 8:     gss01.frozen|GPFSNSDDisk|c1_LMETA2fs1|gpfs_nsdds_bytes_written
 9:     gss02.frozen|GPFSNSDDisk|c1_RDATA1core|gpfs_nsdds_bytes_written
10:     gss02.frozen|GPFSNSDDisk|c1_RDATA1fs1|gpfs_nsdds_bytes_written
11:     gss02.frozen|GPFSNSDDisk|c1_RDATA2core|gpfs_nsdds_bytes_written
12:     gss02.frozen|GPFSNSDDisk|c1_RDATA2fs1|gpfs_nsdds_bytes_written
13:     gss02.frozen|GPFSNSDDisk|c1_RMETA1core|gpfs_nsdds_bytes_written
14:     gss02.frozen|GPFSNSDDisk|c1_RMETA1fs1|gpfs_nsdds_bytes_written
15:     gss02.frozen|GPFSNSDDisk|c1_RMETA2core|gpfs_nsdds_bytes_written
16:     gss02.frozen|GPFSNSDDisk|c1_RMETA2fs1|gpfs_nsdds_bytes_written
 
Row           Timestamp gss01     gss01 gss01     gss01 gss01 gss01 gss01 gss01 gss02     gss02 gss02     gss02 gss02 gss02 gss02 gss02
  1 2016-02-27-01:22:06     0         0     0         0     0     0     0     0     0         0     0         0     0     0     0     0
  2 2016-02-27-01:22:07     0         0     0         0     0     0     0     0     0         0     0         0     0     0     0     0
  3 2016-02-27-01:22:08     0         0     0         0     0     0     0     0     0         0     0         0     0     0     0     0
  4 2016-02-27-01:22:09     0         0     0         0     0     0     0     0     0         0     0         0     0     0     0     0
  5 2016-02-27-01:22:10     0         0     0         0     0     0     0     0     0         0     0         0     0     0     0     0
  6 2016-02-27-01:22:11     0         0     0         0     0     0     0     0     0         0     0         0     0     0     0     0
  7 2016-02-27-01:22:12     0  83886080     0  67108864     0     0     0     0     0  16777216     0  16777216     0     0     0     0
  8 2016-02-27-01:22:13     0 436207616     0 452984832     0     0     0     0     0 436207616     0 419430400     0     0     0     0
  9 2016-02-27-01:22:14     0  16777216     0         0     0     0     0     0     0  67108864     0  83886080     0     0     0     0
 10 2016-02-27-01:22:15     0         0     0         0     0     0     0     0     0         0     0         0     0     0     0     0


You can filter the overall I/O by file system, or restrict the query to specific nodes .. it is very flexible,
e.g.  mmperfmon query compareNodes gpfs_fs_bytes_written,gpfs_fs_bytes_read -n 5 -b 30 --filter gpfs_fs_name=beer  ... and so on
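
If you want one cluster-wide number per interval instead of per-node columns, a quick-and-dirty way is to sum the value columns of the query output -- a rough sketch only, assuming the row layout shown above (field 1 = row number, field 2 = timestamp, remaining fields = per-disk/per-node values):

mmperfmon query compareNodes gpfs_fs_bytes_written -n 10 -b 1 | \
  awk '$1 ~ /^[0-9]+$/ && $2 ~ /^2[0-9][0-9][0-9]-/ { s=0; for (i=3; i<=NF; i++) s+=$i; print $2, s }'

This prints one line per timestamp with the sum across all reported nodes/disks.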

You may need a few minutes to set it up .. but once it is configured, it is very powerful ...
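
For the initial setup on newer code levels (4.2 and later), something along these lines should do -- a sketch only, the collector node name is just an example; on 4.1 the ZIMon sensors/collector are configured via the config files under /opt/IBM/zimon instead, so check the docs for your level:

mmperfmon config generate --collectors gss01      (define the collector node)
mmchnode --perfmon -N all                         (enable the sensors on the nodes)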

have fun.. ;-)




 
Kind regards

 
Olaf Weiser

EMEA Storage Competence Center Mainz, Germany / IBM Systems, Storage Platform




From:        Brian Marshall <[email protected]>
To:        [email protected]
Date:        07/12/2016 03:13 PM
Subject:        [gpfsug-discuss] Aggregating filesystem performance
Sent by:        [email protected]




All,

I have a Spectrum Scale 4.1 cluster serving data to 4 different client clusters (~800 client nodes total).  I am looking for ways to monitor filesystem performance to uncover network bottlenecks or job usage patterns affecting performance.

I received this info below from an IBM person.  Does anyone have examples of aggregating mmperfmon data?  Is anyone doing something different?

"mmpmon does not currently aggregate cluster-wide data. As of SS 4.1.x you can look at "mmperfmon query" as well, but it also primarily only provides node specific data. The tools are built to script performance data but there aren't any current scripts available for you to use within SS (except for what might be on the SS wiki page). It would likely be something you guys would need to build, that's what other clients have done."


Thank you,
Brian Marshall
Virginia Tech - Advanced Research Computing


