One additional point to consider is what happens on a hardware failure:
e.g. if you have two NSD servers that are both CES servers and one fails,
you suffer a double failure (one NSD server and one CES node gone) at
exactly the same point in time.

Daniel

Dr Daniel Kidger
IBM Technical Sales Specialist
Software Defined Solution Sales

+44-(0)7818 522 266 
[email protected]

> On 7 May 2018, at 16:39, Buterbaugh, Kevin L 
> <[email protected]> wrote:
> 
> Hi All,
> 
> I want to thank all of you who took the time to respond to this question … 
> your thoughts / suggestions are much appreciated.
> 
> What I’m taking away from all of this is that it is OK to run CES on NSD 
> servers as long as you are very careful in how you set things up.  This would 
> include:
> 
> 1.  Making sure you have enough CPU horsepower and using cgroups to limit how 
> much CPU SMB and NFS can utilize.
> 2.  Making sure you have enough RAM … 256 GB sounds like it should be 
> “enough” when using SMB.
> 3.  Making sure you have your network config properly set up.  We would be 
> able to provide three separate, dedicated 10 GbE links for GPFS daemon 
> communication, the GPFS multi-cluster link to our HPC cluster, and SMB / NFS 
> communication (see the sketch just after this list).
> 4.  Making sure you have good monitoring of all of the above in place.
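> 
> For point 3, a rough sketch of how that separation might be expressed in 
> Spectrum Scale (the subnets and the remote cluster name below are made up, 
> and the exact syntax should be checked against the mmchconfig and mmces man 
> pages for your release):
> 
>    # Daemon and multi-cluster traffic: tell GPFS which subnets to prefer, 
>    # e.g. 10.10.0.0 for local daemon traffic and 10.20.0.0 for the 
>    # multi-cluster link to the (hypothetical) hpc.example.edu cluster 
>    mmchconfig subnets="10.10.0.0 10.20.0.0/hpc.example.edu"
> 
>    # Protocol (SMB / NFS) traffic: the CES IPs live on the third, dedicated 
>    # 10 GbE network, so client load stays off the daemon links 
>    mmces address add --ces-ip 10.30.0.11
>    mmces address add --ces-ip 10.30.0.12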
> 
> Have I missed anything, or does anyone have any additional thoughts?  Thanks…
> 
> Kevin
> 
>> On May 4, 2018, at 11:26 AM, Sven Oehme <[email protected]> wrote:
>> 
>> There is nothing wrong with running CES on NSD servers; in fact, if all CES 
>> nodes have access to all LUNs of the filesystem, that's the fastest possible 
>> configuration, as you eliminate one network hop. 
>> The challenge is always to do the proper sizing so that you don't run out of 
>> CPU and memory on the nodes as you overlay functions. As long as you have 
>> good monitoring in place, you are good. If you want to take the extra 
>> precaution, you could 'jail' the SMB and NFS daemons into a cgroup on the 
>> node; I probably wouldn't limit memory, but CPU, as that is the more 
>> critical resource for preventing expels and other time-sensitive issues.
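>> 
>> A minimal sketch of what that cgroup could look like, assuming cgroup v1 
>> with the cpu controller mounted under /sys/fs/cgroup/cpu and the usual CES 
>> daemon names (smbd for Samba, ganesha.nfsd for NFS-Ganesha); the 8-core cap 
>> is just an example for a 16-core box:
>> 
>>    # create a "ces" group capped at 8 cores (quota / period = 800000 / 100000)
>>    mkdir /sys/fs/cgroup/cpu/ces
>>    echo 100000 > /sys/fs/cgroup/cpu/ces/cpu.cfs_period_us
>>    echo 800000 > /sys/fs/cgroup/cpu/ces/cpu.cfs_quota_us
>> 
>>    # move the protocol daemons into it; children they fork inherit the 
>>    # cgroup, and no memory limit is set, per the above
>>    for pid in $(pgrep smbd; pgrep ganesha.nfsd); do 
>>        echo $pid > /sys/fs/cgroup/cpu/ces/cgroup.procs 
>>    done
>> 
>> Note the daemons fall back to the default group if CES restarts them, so in 
>> practice you would hook this into whatever starts the protocol services.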
>> 
>> sven
>> 
>>> On Fri, May 4, 2018 at 8:39 AM Buterbaugh, Kevin L 
>>> <[email protected]> wrote:
>>> Hi All,
>>> 
>>> In doing some research, I have come across numerous places (IBM docs, 
>>> DeveloperWorks posts, etc.) where it is stated that it is not recommended 
>>> to run CES on NSD servers … but I’ve not found any detailed explanation of 
>>> why not.
>>> 
>>> I understand that CES, especially if you enable SMB, can be a resource hog. 
>>> But if I size the servers appropriately … say, late-model boxes with 2 x 
>>> 8-core CPUs, 256 GB RAM, and 10 GbE networking … is there any reason why I 
>>> still should not combine the two?
>>> 
>>> To answer the question of why I would want to … simple: server licenses.
>>> 
>>> Thanks…
>>> 
>>> Kevin
>>> 
>>> —
>>> Kevin Buterbaugh - Senior System Administrator
>>> Vanderbilt University - Advanced Computing Center for Research and Education
>>> [email protected] - (615)875-9633
>>> 
>>> 
>>> 

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
