It looks like no one has attempted to answer this, so I will step in to start the conversation.
 
There are two issues when considering how many services to run on the same nodes - in this case the NSD servers.
 
1. Performance.
Spectrum Scale's (née GPFS) core differentiator is performance. The more you run on a node, the more that node's resources have to be shared; memory bandwidth and memory capacity are the main ones. CPU may also be a limited resource, although with modern chips this is less likely.
If performance is not the key metric you need to deliver, then running other things on the NSD servers may be a good option to save both cost and server sprawl in small data centres.
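As a rough sanity check before co-locating services on an NSD server, it is worth seeing how much memory GPFS itself has already claimed. A minimal sketch (output and options vary by release, so treat the exact invocations as indicative):

  # show the configured pagepool size for this node
  mmlsconfig pagepool
  # show mmfsd's current memory usage
  mmdiag --memory

Whatever headroom is left is what any extra services (NFS, SMB, etc.) have to live in.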
 
2. NFS server stability.
Pre-4.1.1, IBM used cNFS to provide multiple NFS servers in a GPFS cluster. This relied on traditional kernel-based NFS daemons: if one hung, the whole node had to be rebooted, which might have led to disruption in NSD serving if the other NSD server of a pair was already under load. With 4.1.1 came Cluster Export Services (CES), delivered from 'protocol nodes'. These use Ganesha, where all NFS activity is in userspace rather than the kernel, so there is no need to reboot the node if NFS serving hangs.
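To illustrate the difference: with CES a hung Ganesha instance can normally be bounced in place rather than rebooting the whole node. A minimal sketch using the CES admin commands (the node name is a placeholder, and the exact syntax should be checked against your release's documentation):

  # list the protocol services enabled across the cluster
  mmces service list -a
  # restart only the NFS (Ganesha) service on the affected protocol node
  mmces service stop NFS -N protocolnode1
  mmces service start NFS -N protocolnode1

Under cNFS, the equivalent recovery was typically a reboot of the node.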
 
 
Daniel
 
 
Dr. Daniel Kidger
Technical Specialist, SDI (formerly Platform Computing)
No. 1 The Square, Temple Quay, Bristol BS1 6DG, United Kingdom
Mobile: +44-07818 522 266
Landline: +44-02392 564 121 (Internal ITN 3726 9250)
e-mail:
 
 
 
----- Original message -----
From: "Buterbaugh, Kevin L" <[email protected]>
Sent by: [email protected]
To: gpfsug main discussion list <[email protected]>
Cc:
Subject: [gpfsug-discuss] GPFS 4.2 / Protocol nodes
Date: Fri, Jan 8, 2016 4:11 PM
 
Happy New Year all,
 
One of my first projects of the new year is to get GPFS 4.2 up and running on our test cluster and begin testing it out in anticipation of upgrading our production cluster sometime later this year.  We’re currently running 4.1.0.8 efix2 and are thinking of bypassing 4.1.1 altogether and going straight to 4.2.

We currently have 3 NSD servers that also serve as CNFS servers, and 1 NSD server that is not primary for any disks and serves as our SAMBA server.  We are interested in moving to CES.
 
Yesterday I was reading in the 4.2 FAQ and came across question 8.3, “What are some of the considerations when deploying the protocol functionality?”  One of the considerations is that "several GPFS configuration aspects have not been explicitly tested with the protocol functionality” and one of those functions is, “NSD server functionality and storage attached to protocol node.  We recommend that Protocol nodes do not take on these functions”.
 
Really?  So it is IBM’s recommendation that we buy 3 additional very beefy (2 x hex-core processors, 256 GB RAM) servers and 3 additional server licenses just to use CES?  I guess I’m very surprised by that because I’m running CNFS on 3 low end servers (1 x quad-core processor, 32 GB RAM) that also serve as NSD servers to a ~700 client HPC cluster!
 
If we really have to buy all that, well, we probably won’t.  That’s a not insignificant chunk of change.
 
So I’m interested in hearing feedback on both:  1) do the CES servers really have to not be NSD servers, and 2) do they really need to be such high-end boxes?  Both the official party line and what you can really do in the real world if you really want to (while maintaining your support agreement with IBM) are welcome!  Thanks…
 
Kevin
 
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and Education
[email protected] - (615)875-9633
 
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
 
Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 741598.
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
