Hi All,
We’ve just published the main dates for 2019 Spectrum Scale meetings on the
user group website at:
https://www.spectrumscaleug.org/
Please take a look over the list of events and pencil them into your diary! (Some
of those later in the year are tentative and there are a couple
A better way to detect node expels is to install the expelnode callback script
into /var/mmfs/etc/ (a sample is in /usr/lpp/mmfs/samples/expelnode.sample) on
your manager nodes. It runs on every expel and is easy to customize. We
generate a Slack message to a specific channel:
GPFS Node
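To make the idea concrete, here is a minimal sketch of such a callback. The argument order, the `build_payload` helper name, and the Slack webhook variable are all assumptions for illustration; the arguments GPFS actually passes are documented in /usr/lpp/mmfs/samples/expelnode.sample.

```shell
#!/bin/sh
# Hypothetical sketch of an expelnode callback installed as
# /var/mmfs/etc/expelnode on manager nodes. We ASSUME the first two
# arguments name the expelling and the expelled node; check the sample
# script shipped with Spectrum Scale for the real calling convention.
build_payload() {
    # Build a Slack incoming-webhook JSON body (helper name is ours).
    printf '{"text": "GPFS node expel: %s expelled %s"}' "$1" "$2"
}

# In the real callback you would post the payload, e.g.:
#   curl -s -X POST -H 'Content-Type: application/json' \
#        -d "$(build_payload "$1" "$2")" "$SLACK_WEBHOOK_URL"
build_payload nodeA nodeB
```

The payload-building step is separated from the `curl` call so the message format can be tested without a live webhook.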
Various "leave" / join events may be interesting ... But you've got to
consider that an abrupt failure of several nodes is not necessarily
recorded anywhere! For example, the would-be recording devices
might all lose power at the same time.
Hi Bob,
We use the nodeLeave callback to detect node expels … for what you’re wanting
to do, I wonder if nodeJoin might work? If a node joins the cluster and then
has an uptime of only a few minutes, you could go looking in /tmp/mmfs. HTH...
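A rough sketch of that nodeJoin idea follows. The callback name, script path, and the ten-minute cutoff are assumptions; the registration would be done with mmaddcallback, along the lines shown in the comment.

```shell
#!/bin/sh
# Sketch of a nodeJoin callback, registered (hypothetically) with:
#   mmaddcallback joinWatch --command /var/mmfs/etc/nodejoin.sh \
#       --event nodeJoin --parms "%eventNode"
# If this node rebooted only minutes before rejoining, it may have crashed
# or been expelled, so its GPFS debug data in /tmp/mmfs is worth a look.
UP=$(cut -d. -f1 /proc/uptime)   # whole seconds since boot (Linux-specific)
THRESHOLD=600                    # "a few minutes" - an assumed cutoff

if [ "$UP" -lt "$THRESHOLD" ]; then
    echo "node up only ${UP}s - check /tmp/mmfs for crash/expel data"
else
    echo "node up ${UP}s - looks like a normal join"
fi
```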
--
Kevin Buterbaugh - Senior System Administrator
Hi,
I agree that we should potentially add more metrics, but for a start, I
would look into mmdiag --memory and mmdiag --tokenmgr (the latter shows
different output on a token server).
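One way to feed those commands into a monitoring system is a small scraping wrapper. This is a hedged sketch: the exact labels in mmdiag --memory output vary by Scale release, so the filter below (lines mentioning "heap" or "pool") and the sample line are assumptions to adjust against your own output.

```shell
#!/bin/sh
# Hedged sketch of scraping mmdiag --memory for monitoring. The grep
# pattern is an ASSUMPTION about the output labels; verify it against
# the mmdiag output on your own cluster.
scrape_memory() {
    grep -Ei 'heap|pool'
}

# In production you would run:
#   /usr/lpp/mmfs/bin/mmdiag --memory | scrape_memory
# Demo with a made-up two-line sample instead:
printf 'mmfsd heap size in use: 123 bytes\nsome other line\n' | scrape_memory
```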
Regards,
Tomer Perry
Scalable I/O Development (Spectrum Scale)
email: t...@il.ibm.com
1 Azrieli Center, Tel
Hello,
Sorry for coming back to this never-ending story. I know that token management
is mostly autoconfigured, and even the placement of token manager nodes is no
longer under user control in all cases. Still, I would like to monitor this
component to see whether we are close to some limit like