Yep, have a look at this Gist [1]. The unit file assumes some paths and users, which are created during the installation of my RPM.
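For anyone who doesn't want to pull the gist, a minimal sketch of such a unit file could look like the following. The ExecStart path, the user and the script arguments are assumptions from my own packaging, not fixed values -- adjust them to wherever your bridge script lives:

```ini
# /etc/systemd/system/pmbridge.service -- sketch only; paths and user
# are assumptions, adapt to your installation.
[Unit]
Description=ZIMon to Grafana bridge
After=network-online.target pmcollector.service
Wants=pmcollector.service

[Service]
Type=simple
# Dedicated non-root user created at package install time (assumption).
User=zimonbridge
# The bridge script ships with the performance monitoring packages under
# /opt/IBM/zimon (script name and flags assumed; check your install).
ExecStart=/usr/bin/python /opt/IBM/zimon/zimonGrafanaIntf.py -s localhost
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After dropping the file in place, `systemctl daemon-reload` and `systemctl enable --now pmbridge` should do the rest.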
[1] https://gist.github.com/stdietrich/b3b985f872ea648d6c03bb6249c44e72

Regards,
Stefan

----- Original Message -----
> From: "Greg Lehmann" <[email protected]>
> To: [email protected]
> Sent: Wednesday, July 19, 2017 9:53:58 AM
> Subject: Re: [gpfsug-discuss] Scale / Perfmon / Grafana - services running but no data
>
> I'm having a play with this now too. Has anybody coded a systemd unit to
> handle step 2b in the knowledge centre article (bridge creation on the GPFS
> side)? It would save me a bit of effort.
>
> I'm also wondering about the CherryPy version. It looks like this has been
> developed on SLES, which has the newer version mentioned as a standard
> package, and yet RHEL, with an older version of CherryPy, is perhaps more
> common, as it seems to have the best support for GPFS features such as the
> object and block protocols. Maybe SLES is in favour now?
>
> Cheers,
>
> Greg
>
> From: [email protected]
> [mailto:[email protected]] On Behalf Of Andrew Beattie
> Sent: Thursday, 6 July 2017 3:07 PM
> To: [email protected]
> Subject: [gpfsug-discuss] Scale / Perfmon / Grafana - services running but no data
>
> Greetings,
>
> I'm currently setting up Grafana to interact with one of our Scale clusters,
> and I've followed the knowledge centre link in terms of setup.
> https://www.ibm.com/support/knowledgecenter/en/STXKQY_4.2.3/com.ibm.spectrum.scale.v4r23.doc/bl1adv_setuppmbridgeforgrafana.htm
>
> However, while everything appears to be working, I'm not seeing any data
> coming through the reports within the Grafana server, even though I can see
> data in the Scale GUI.
>
> The current environment:
>
> [root@sc01n02 ~]# mmlscluster
>
> GPFS cluster information
> ========================
>   GPFS cluster name:         sc01.spectrum
>   GPFS cluster id:           18085710661892594990
>   GPFS UID domain:           sc01.spectrum
>   Remote shell command:      /usr/bin/ssh
>   Remote file copy command:  /usr/bin/scp
>   Repository type:           CCR
>
>  Node  Daemon node name  IP address  Admin node name  Designation
> ------------------------------------------------------------------
>    1   sc01n01           10.2.12.11  sc01n01          quorum-manager-perfmon
>    2   sc01n02           10.2.12.12  sc01n02          quorum-manager-perfmon
>    3   sc01n03           10.2.12.13  sc01n03          quorum-manager-perfmon
>
> [root@sc01n02 ~]# mmlsconfig
> Configuration data for cluster sc01.spectrum:
> ---------------------------------------------
> clusterName sc01.spectrum
> clusterId 18085710661892594990
> autoload yes
> profile gpfsProtocolDefaults
> dmapiFileHandleSize 32
> minReleaseLevel 4.2.2.0
> ccrEnabled yes
> cipherList AUTHONLY
> maxblocksize 16M
> [cesNodes]
> maxMBpS 5000
> numaMemoryInterleave yes
> enforceFilesetQuotaOnRoot yes
> workerThreads 512
> [common]
> tscCmdPortRange 60000-61000
> cesSharedRoot /ibm/cesSharedRoot/ces
> cifsBypassTraversalChecking yes
> syncSambaMetadataOps yes
> cifsBypassShareLocksOnRename yes
> adminMode central
>
> File systems in cluster sc01.spectrum:
> --------------------------------------
> /dev/cesSharedRoot
> /dev/icos_demo
> /dev/scale01
> [root@sc01n02 ~]#
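One quick check before digging into Grafana itself: make sure the collector's query port is reachable from the node running the bridge. The default query port is 9084 (an assumption here; check the configured value in /opt/IBM/zimon/ZIMonCollector.cfg). A small probe, for example in Python:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # 9084 is assumed to be the collector's query port; substitute the
    # value from your ZIMonCollector.cfg if it differs.
    print("collector reachable:", port_reachable("127.0.0.1", 9084))
```

If this prints False on the bridge host, the bridge has nothing to serve, regardless of how Grafana is configured.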
> [root@sc01n02 ~]# systemctl status pmcollector
> ● pmcollector.service - LSB: Start the ZIMon performance monitor collector.
>    Loaded: loaded (/etc/rc.d/init.d/pmcollector)
>    Active: active (running) since Tue 2017-05-30 08:46:32 AEST; 1 months 6 days ago
>      Docs: man:systemd-sysv-generator(8)
>  Main PID: 2693 (ZIMonCollector)
>    CGroup: /system.slice/pmcollector.service
>            ├─2693 /opt/IBM/zimon/ZIMonCollector -C /opt/IBM/zimon/ZIMonCollector.cfg...
>            └─2698 python /opt/IBM/zimon/bin/pmmonitor.py -f /opt/IBM/zimon/syshealth...
>
> May 30 08:46:32 sc01n02 systemd[1]: Starting LSB: Start the ZIMon performance mon......
> May 30 08:46:32 sc01n02 pmcollector[2584]: Starting performance monitor collector...
> May 30 08:46:32 sc01n02 systemd[1]: Started LSB: Start the ZIMon performance moni...r..
> Hint: Some lines were ellipsized, use -l to show in full.
>
> From the Grafana server:
>
> When I send a set of files to the cluster (3.8 GB), I can see performance
> metrics within the Scale GUI, yet from the Grafana dashboard I'm not seeing
> any data points.
>
> Can anyone provide some hints as to what might be happening?
>
> Regards,
>
> Andrew Beattie
> Software Defined Storage - IT Specialist
> Phone: 614-2133-7927
> E-mail: [email protected]
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
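If the bridge is running, you can also query it directly with the same OpenTSDB-style HTTP API that Grafana's OpenTSDB datasource uses, which takes Grafana out of the picture entirely. The host, the port (4242 is the usual OpenTSDB default; your bridge may listen elsewhere) and the metric name below are assumptions -- substitute values from your own setup:

```python
import json
import urllib.request

# Assumed bridge endpoint; adjust host/port to your installation.
BRIDGE_URL = "http://localhost:4242/api/query"

def build_query(metric: str, start: str = "1h-ago") -> dict:
    """Build an OpenTSDB-style query body, as Grafana's datasource would."""
    return {
        "start": start,
        "queries": [{"metric": metric, "aggregator": "avg"}],
    }

def fetch_datapoints(metric: str):
    """POST the query to the bridge and return the decoded JSON response."""
    body = json.dumps(build_query(metric)).encode()
    req = urllib.request.Request(
        BRIDGE_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    try:
        # "cpu_user" is just an example metric name (assumption).
        print(fetch_datapoints("cpu_user"))
    except OSError as exc:
        print("bridge not reachable:", exc)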
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
