Hi,

for clusters of this size, the rule of thumb of 150 nodes per collector is a
good starting point, so 3-4 collector nodes should be fine for your setup.
The GUI(s) can be installed on those same nodes.
Collector nodes mainly need a good amount of RAM, since all 'current'
incoming sensor data is kept in memory.
Local disk is typically not stressed heavily; a plain HDD or simple onboard
RAID is sufficient. Plan for 20-50 GB of disk space on each node.
The network has no special requirements; whatever is used in the cluster
anyway is fine.
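
For the federation itself, the setup is roughly as follows (a sketch only;
the hostnames are placeholders and the exact syntax may differ in your 5.x
release, so please check the Knowledge Center). Register all collectors when
generating the perfmon configuration and enable the sensors on the nodes,
e.g.

    # placeholders coll1..coll4 stand for your four collector nodes
    mmperfmon config generate --collectors coll1,coll2,coll3,coll4
    mmchnode --perfmon -N all

On each collector, the peers section in /opt/IBM/zimon/ZIMonCollector.cfg
should then list every federation member, along these lines (9085 is the
default federation port; newer releases may populate this for you):

    peers = {
        host = "coll1"
        port = "9085"
    },
    {
        host = "coll2"
        port = "9085"
    },
    ...

With that in place, any collector (and the GUI pointed at it) can answer
queries for data held by its peers.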

Kind regards


Norbert Schuld

IBM Deutschland Research & Development GmbH / Chairman of the Supervisory
Board: Martina Koederitz / Management: Dirk Wittkopp
Registered office: Böblingen / Registration court: Amtsgericht Stuttgart,
HRB 243294



From:   David Johnson <[email protected]>
To:     gpfsug main discussion list <[email protected]>
Date:   31/05/2018 20:22
Subject:        [gpfsug-discuss] recommendations for gpfs 5.x GUI and perf/health monitoring collector nodes
Sent by:        [email protected]



We are planning to bring up the new ZIMon tools on our 450+ node cluster,
and need to purchase new
nodes to run the collector federation and GUI function on.  What would you
choose as a platform for this?
 — memory size?
 — local disk space — SSD? shared?
 — net attach — 10Gig? 25Gig? IB?
 — CPU horsepower — single or dual socket?
I think I remember somebody at the Cambridge UG meeting saying 150 nodes per
collector as a rule of thumb, so
we’re guessing a federation of 4 nodes would do it.  Does this include the
GUI host(s) or are those separate?
Finally, we’re still using the client/server-based licensing model; do these
nodes count as clients?

Thanks,
 — ddj
Dave Johnson
Brown University
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


