Hi Andreas,

You are right ... our storage cluster is on v4.2.1 at the moment, while the CES/GUI nodes are running 4.2.3.6.

The GPFSFilesetQuota sensor is enabled but restricted to the GUI node because of its performance impact:

{
        name = "GPFSFilesetQuota"
        period = 3600
        restrict = "llsdf02e4"
},

{
        name = "GPFSDiskCap"
        period = 10800
        restrict = "llsdf02e4"
},
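
For reference, the same settings can usually be applied without editing the
sensor configuration by hand -- a minimal sketch only, assuming the
"Sensor.attribute=value" syntax of 'mmperfmon config update' covers these two
sensors (the node name is our GUI node):

# restrict the fileset quota sensor to the GUI node and keep its low frequency
mmperfmon config update GPFSFilesetQuota.period=3600
mmperfmon config update GPFSFilesetQuota.restrict=llsdf02e4
# same for the disk capacity sensor
mmperfmon config update GPFSDiskCap.period=10800
mmperfmon config update GPFSDiskCap.restrict=llsdf02e4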

thanks,
Sven

On 05.09.2018 15:42, [email protected] wrote:


Today's Topics:

    1. Re: Getting inode information with REST API V2 (Sven Siebler)
    2. Re: Getting inode information with REST API V2 (Andreas Koeninger)


----------------------------------------------------------------------

Message: 1
Date: Wed, 5 Sep 2018 13:44:32 +0200
From: Sven Siebler <[email protected]>
To: Andreas Koeninger <[email protected]>
Cc: [email protected]
Subject: Re: [gpfsug-discuss] Getting inode information with REST API
        V2
Message-ID:
        <[email protected]>
Content-Type: text/plain; charset="utf-8"; Format="flowed"

Hi Andreas,

I forgot to mention that we are currently using ISS v4.2.1, not v5.0.0.

Investigating the command, I got the following:

# /usr/lpp/mmfs/gui/cli/runtask FILESETS --debug
debug: locale=en_US
debug: Running 'mmlsfileset 'lsdf02' -di -Y ' on node localhost

debug: Raising event: inode_normal
debug: Running 'mmsysmonc event 'filesystem' 'inode_normal'
'lsdf02/sd17e005' 'lsdf02/sd17e005,' ' on node localhost
debug: Raising event: inode_normal
debug: Running 'mmsysmonc event 'filesystem' 'inode_normal'
'lsdf02/sd17g004' 'lsdf02/sd17g004,' ' on node localhost
[...]
debug: perf: Executing mmhealth node show --verbose -N 'llsdf02e4' -Y'
took 1330ms
[...]
debug: Inserting 0 new informational HealthEvents for node llsdf02e4
debug: perf: processInfoEvents() with 2 events took 5ms
debug: perf: Parsing 23 state rows took 9ms
debug: Deleted 0 orphaned states.
debug: Loaded list of state changing HealthEvent objects. Size: 4
debug: Inserting 0 new state changing HealthEvents in the history table
for node llsdf02e4
debug: perf: processStateChangingEvents() with 3 events took 2ms
debug: perf: pool-90578-thread-1 - Processing 5 eventlog rows of node
llsdf02e4 took 10ms in total
debug: Deleted 0 orphaned states from history.
debug: Loaded list of state changing HealthEvent objects. Size: 281
debug: Inserting 0 new state changing HealthEvents for node llsdf02e4
debug: perf: Processing 23 state rows took 59ms in total

The command takes a very long time because of the -di option.

I also tried the zimon command you posted:

# echo "get -a metrics
max(gpfs_fset_maxInodes),max(gpfs_fset_freeInodes),max(gpfs_fset_allocInodes)
from gpfs_fs_name=lsdf02 group_by gpfs_fset_name last 13 bucket_size
300" | /opt/IBM/zimon/zc 127.0.0.1

Error: No data available for query: 6396075
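
As a cross-check (a sketch against the same zc interface, assuming the
collector answers on 127.0.0.1 of the GUI node), dropping the filesystem
filter shows whether any fileset metrics have been collected at all:

# if this also returns "No data available", the GPFSFilesetQuota sensor has
# not delivered anything to the collector yet
echo "get -a metrics max(gpfs_fset_maxInodes) group_by gpfs_fset_name last 13 bucket_size 300" | /opt/IBM/zimon/zc 127.0.0.1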

In the Admin GUI I noticed that the information in "Files -> Filesets ->
<Filesetname> -> Details" shows inconsistent inode information, e.g.

  * in Overview:
        Inodes: 76M
        Max Inodes: 315M

  * in Properties:
        Inodes:         1
        Max inodes:     314572800

thanks,
Sven



On 05.09.2018 11:13, Andreas Koeninger wrote:
Hi Sven,
the REST API v2 provides similar information to what v1 provided. See
an example from my system below:
/scalemgmt/v2/filesystems/gpfs0/filesets?fields=:all:
[...]
??? "filesetName" : "fset1",
??? "filesystemName" : "gpfs0",
??? "usage" : {
????? "allocatedInodes" : 51232,
????? "inodeSpaceFreeInodes" : 51231,
????? "inodeSpaceUsedInodes" : 1,
????? "usedBytes" : 0,
????? "usedInodes" : 1
??? }
? } ],
*In 5.0.0 there are two sources for the inode information: the first
one is mmlsfileset and the second one is the data collected by Zimon.*
Depending on the availability of the data either one is used.
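
For completeness, the same request can be issued from the command line -- a
minimal sketch only; host name, port and credentials below are placeholders
for your GUI node and an administrative user:

curl -k -u admin:password \
  "https://gui-node:443/scalemgmt/v2/filesystems/gpfs0/filesets?fields=:all:"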

To debug what's happening on your system you can *execute the FILESETS
task on the GUI node* manually with the --debug flag. The output is
then showing the exact queries that are used to retrieve the data:
*[root@os-11 ~]# /usr/lpp/mmfs/gui/cli/runtask FILESETS --debug*
debug: locale=en_US
debug: Running 'mmlsfileset 'gpfs0' -Y ' on node localhost
debug: Running zimon query: 'get -ja metrics
max(gpfs_fset_maxInodes),max(gpfs_fset_freeInodes),max(gpfs_fset_allocInodes),max(gpfs_rq_blk_current),max(gpfs_rq_file_current)
from gpfs_fs_name=gpfs0 group_by gpfs_fset_name last 13 bucket_size 300'
debug: Running 'mmlsfileset 'objfs' -Y ' on node localhost
debug: Running zimon query: 'get -ja metrics
max(gpfs_fset_maxInodes),max(gpfs_fset_freeInodes),max(gpfs_fset_allocInodes),max(gpfs_rq_blk_current),max(gpfs_rq_file_current)
from gpfs_fs_name=objfs group_by gpfs_fset_name last 13 bucket_size 300'
EFSSG1000I The command completed successfully.
*As a start I suggest running the displayed Zimon queries manually to
see what's returned there, e.g.:*
/(Removed -j for better readability)/

*[root@os-11 ~]# echo "get -a metrics
max(gpfs_fset_maxInodes),max(gpfs_fset_freeInodes),max(gpfs_fset_allocInodes),max(gpfs_rq_blk_current),max(gpfs_rq_file_current)
from gpfs_fs_name=gpfs0 group_by gpfs_fset_name last 13 bucket_size
300" | /opt/IBM/zimon/zc 127.0.0.1*
1: gpfs-cluster-1.novalocal|GPFSFileset|gpfs0|.audit_log|gpfs_fset_maxInodes
2: gpfs-cluster-1.novalocal|GPFSFileset|gpfs0|fset1|gpfs_fset_maxInodes
3: gpfs-cluster-1.novalocal|GPFSFileset|gpfs0|root|gpfs_fset_maxInodes
4: gpfs-cluster-1.novalocal|GPFSFileset|gpfs0|.audit_log|gpfs_fset_freeInodes
5: gpfs-cluster-1.novalocal|GPFSFileset|gpfs0|fset1|gpfs_fset_freeInodes
6: gpfs-cluster-1.novalocal|GPFSFileset|gpfs0|root|gpfs_fset_freeInodes
7: gpfs-cluster-1.novalocal|GPFSFileset|gpfs0|.audit_log|gpfs_fset_allocInodes
8: gpfs-cluster-1.novalocal|GPFSFileset|gpfs0|fset1|gpfs_fset_allocInodes
9: gpfs-cluster-1.novalocal|GPFSFileset|gpfs0|root|gpfs_fset_allocInodes
Row   Timestamp             max(gpfs_fset_maxInodes)  max(gpfs_fset_maxInodes)  max(gpfs_fset_maxInodes)  max(gpfs_fset_freeInodes)  max(gpfs_fset_freeInodes)  max(gpfs_fset_freeInodes)  max(gpfs_fset_allocInodes)  max(gpfs_fset_allocInodes)  max(gpfs_fset_allocInodes)
1     2018-09-05 10:10:00   100000  620640  65792  65795  51231  61749  65824  51232  65792
2     2018-09-05 10:15:00   100000  620640  65792  65795  51231  61749  65824  51232  65792
3     2018-09-05 10:20:00   100000  620640  65792  65795  51231  61749  65824  51232  65792
4     2018-09-05 10:25:00   100000  620640  65792  65795  51231  61749  65824  51232  65792
5     2018-09-05 10:30:00   100000  620640  65792  65795  51231  61749  65824  51232  65792
6     2018-09-05 10:35:00   100000  620640  65792  65795  51231  61749  65824  51232  65792
7     2018-09-05 10:40:00   100000  620640  65792  65795  51231  61749  65824  51232  65792
8     2018-09-05 10:45:00   100000  620640  65792  65795  51231  61749  65824  51232  65792
9     2018-09-05 10:50:00   100000  620640  65792  65795  51231  61749  65824  51232  65792
10    2018-09-05 10:55:00   100000  620640  65792  65795  51231  61749  65824  51232  65792
11    2018-09-05 11:00:00   100000  620640  65792  65795  51231  61749  65824  51232  65792
12    2018-09-05 11:05:00   100000  620640  65792  65795  51231  61749  65824  51232  65792
13    2018-09-05 11:10:00   100000  620640  65792  65795  51231  61749  65824  51232  65792
.

Mit freundlichen Grüßen / Kind regards

Andreas Koeninger
Scrum Master and Software Developer / Spectrum Scale GUI and REST API
IBM Systems & Technology Group, Integrated Systems Development / M069
-------------------------------------------------------------------------------------------------------------------------------------------
IBM Deutschland
Am Weiher 24
65451 Kelsterbach
Phone: +49-7034-643-0867
Mobile: +49-7034-643-0867
E-Mail: [email protected]
-------------------------------------------------------------------------------------------------------------------------------------------
IBM Deutschland Research & Development GmbH / Vorsitzende des
Aufsichtsrats: Martina Koederitz
Geschäftsführung: Dirk Wittkopp / Sitz der Gesellschaft: Böblingen /
Registergericht: Amtsgericht Stuttgart, HRB 243294

     ----- Original message -----
     From: Sven Siebler <[email protected]>
     Sent by: [email protected]
     To: [email protected]
     Cc:
     Subject: [gpfsug-discuss] Getting inode information with REST API V2
     Date: Wed, Sep 5, 2018 9:37 AM
     Hi all,

     I just started to use the REST API for our monitoring, and my question
     is how I can get information about allocated inodes with REST API V2.

     Up to now I have used "mmlsfileset" directly, which gives me information
     on maximum and allocated inodes (and mmdf for total/free/allocated
     inodes of the filesystem).

     If I use the REST API V2 with
     "filesystems/<filesystem_name>/filesets?fields=:all:", I get all
     information except the allocated inodes.

     In the documentation
     (https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adm_apiv2getfilesystemfilesets.htm)
     I found:

     ?> "inodeSpace": "Inodes"
     ?> The number of inodes that are allocated for use by the fileset.

     but to me inodeSpace looks more like the ID of the inode space than the
     number of allocated inodes.

     In the documentation example the API can give output like this:

     "filesetName" : "root",
     ??????? "filesystemName" : "gpfs0",
     ??????? "usage" : {
     ?? ? ?? ??? "allocatedInodes" : 100000,
     ?????? ? ?? "inodeSpaceFreeInodes" : 95962,
     ??????????? "inodeSpaceUsedInodes" : 4038,
     ?????? ? ?? "usedBytes" : 0,
     ??????? ? ? "usedInodes" : 4038
     }

     but I could not retrieve such usage fields in my queries.

     The only way for me to get inode information via REST is to use V1:

     
https://REST_API_host:port/scalemgmt/v1/filesets?filesystemName=FileSystemName

     which gives exactly the information of "mmlsfileset".

     But because V1 is deprecated, I want to use V2 when rewriting our
     tools...

     Thanks,

     Sven


     --
     Sven Siebler
     Servicebereich Future IT - Research & Education (FIRE)

     Tel. +49 6221 54 20032
     [email protected]
     Universität Heidelberg
     Universitätsrechenzentrum (URZ)
     Im Neuenheimer Feld 293, D-69120 Heidelberg
     http://www.urz.uni-heidelberg.de




--
Sven Siebler
Servicebereich Future IT - Research & Education (FIRE)

Tel. +49 6221 54 20032
[email protected]
Universität Heidelberg
Universitätsrechenzentrum (URZ)
Im Neuenheimer Feld 293, D-69120 Heidelberg
http://www.urz.uni-heidelberg.de



_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
