Re: [gpfsug-discuss] Not recommended, but why not?

2018-05-10 Thread Daniel Kidger
One additional point to consider is what happens on a hardware failure.
E.g. if you have two NSD servers that are both CES servers and one fails, then 
there is a double failure at exactly the same point in time: you lose both an 
NSD server and a protocol node at once.

Daniel


 



Dr Daniel Kidger
IBM Technical Sales Specialist
Software Defined Solution Sales

+44-(0)7818 522 266 
daniel.kid...@uk.ibm.com



> On 7 May 2018, at 16:39, Buterbaugh, Kevin L 
>  wrote:
> 
> Hi All,
> 
> I want to thank all of you who took the time to respond to this question … 
> your thoughts / suggestions are much appreciated.
> 
> What I’m taking away from all of this is that it is OK to run CES on NSD 
> servers as long as you are very careful in how you set things up.  This would 
> include:
> 
> 1.  Making sure you have enough CPU horsepower and using cgroups to limit how 
> much CPU SMB and NFS can utilize.
> 2.  Making sure you have enough RAM … 256 GB sounds like it should be 
> “enough” when using SMB.
> 3.  Making sure you have your network config properly set up.  We would be 
> able to provide three separate, dedicated 10 GbE links for GPFS daemon 
> communication, GPFS multi-cluster link to our HPC cluster, and SMB / NFS 
> communication.
> 4.  Making sure you have good monitoring of all of the above in place.
> 
> Have I missed anything or does anyone have any additional thoughts?  Thanks…
> 
> Kevin
> 
>> On May 4, 2018, at 11:26 AM, Sven Oehme  wrote:
>> 
>> There is nothing wrong with running CES on NSD servers; in fact, if all CES 
>> nodes have access to all LUNs of the filesystem, that's the fastest possible 
>> configuration, as you eliminate one network hop. 
>> The challenge is always to do the proper sizing so you don't run out of CPU 
>> and memory on the nodes as you overlay functions. As long as you have good 
>> monitoring in place, you are good. If you want to take an extra precaution, 
>> you could 'jail' the SMB and NFS daemons into a cgroup on the node; I 
>> probably wouldn't limit memory, but CPU, as that is the more critical 
>> resource for preventing expels and other time-sensitive issues. 
>> 
>> sven
>> 
>>> On Fri, May 4, 2018 at 8:39 AM Buterbaugh, Kevin L 
>>>  wrote:
>>> Hi All,
>>> 
>>> In doing some research, I have come across numerous places (IBM docs, 
>>> DeveloperWorks posts, etc.) where it is stated that it is not recommended 
>>> to run CES on NSD servers … but I’ve not found any detailed explanation of 
>>> why not.
>>> 
>>> I understand that CES, especially if you enable SMB, can be a resource hog. 
>>>  But if I size the servers appropriately … say, late model boxes with 2 x 8 
>>> core CPU’s, 256 GB RAM, 10 GbE networking … is there any reason why I still 
>>> should not combine the two?
>>> 
>>> To answer the question of why I would want to … simple, server licenses.
>>> 
>>> Thanks…
>>> 
>>> Kevin
>>> 
>>> —
>>> Kevin Buterbaugh - Senior System Administrator
>>> Vanderbilt University - Advanced Computing Center for Research and Education
>>> kevin.buterba...@vanderbilt.edu - (615)875-9633
>>> 
>>> 
>>> 
>>> ___
>>> gpfsug-discuss mailing list
>>> gpfsug-discuss at spectrumscale.org
>>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>> ___
>> gpfsug-discuss mailing list
>> gpfsug-discuss at spectrumscale.org
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
> 

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Not recommended, but why not?

2018-05-07 Thread Bryan Banister
Sure, there are many ways to solve the same problem; it just depends on where 
you want to have the controls.  Having a separate VLAN doesn't give you as 
fine-grained control over each network workload you are running, such as 
metrics collection, monitoring, GPFS, SSH, NFS vs. SMB vs. Object, etc.

But it doesn't matter how it's done as long as you ensure GPFS has enough 
bandwidth to function, cheers,
-Bryan

-Original Message-
From: gpfsug-discuss-boun...@spectrumscale.org 
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Jonathan Buzzard
Sent: Saturday, May 05, 2018 3:57 AM
To: gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] Not recommended, but why not?

Note: External Email
-

On 04/05/18 18:30, Bryan Banister wrote:
> You also have to be careful with network utilization… we have some very
> hungry NFS clients in our environment and the NFS traffic can actually
> DOS other services that need to use the network links.  If you configure
> GPFS admin/daemon traffic over the same link as the SMB/NFS traffic then
> this could lead to GPFS node evictions if disk leases cannot get
> renewed.  You could limit the amount that SMB/NFS use on the network 
> with something like the tc facility if you’re sharing the network
> interfaces for GPFS and CES services.
>

The right answer to that IMHO is a separate VLAN for the GPFS
command/control traffic that is prioritized above all other VLANs. Do
something like mark it as a voice VLAN. Basically, don't rely on some OS
layer to do the right thing at layer three; enforce it at layer two in
the switches.

JAB.

--
Jonathan A. Buzzard Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss



___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Not recommended, but why not?

2018-05-07 Thread Buterbaugh, Kevin L
Hi All,

I want to thank all of you who took the time to respond to this question … your 
thoughts / suggestions are much appreciated.

What I’m taking away from all of this is that it is OK to run CES on NSD 
servers as long as you are very careful in how you set things up.  This would 
include:

1.  Making sure you have enough CPU horsepower and using cgroups to limit how 
much CPU SMB and NFS can utilize.
2.  Making sure you have enough RAM … 256 GB sounds like it should be “enough” 
when using SMB.
3.  Making sure you have your network config properly set up.  We would be able 
to provide three separate, dedicated 10 GbE links for GPFS daemon 
communication, the GPFS multi-cluster link to our HPC cluster, and SMB / NFS 
communication (rough sketch below).
4.  Making sure you have good monitoring of all of the above in place.
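
For point 3, here is roughly what I have in mind on the Spectrum Scale side; 
this is only a sketch with hypothetical node names, subnets and addresses, so 
please correct me if any of it is off:

  # Sketch only; node names, subnets and IPs below are hypothetical.
  # Put GPFS daemon traffic on a dedicated interface per NSD/CES node:
  mmchnode --daemon-interface=nsd01-gpfs.example.com -N nsd01
  # Prefer the dedicated subnet for the multi-cluster link to the HPC cluster:
  mmchconfig subnets="10.20.0.0/hpc.example.com"
  # Keep the CES (SMB / NFS) addresses on the third, client-facing network:
  mmces address add --ces-ip 10.30.0.50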

Have I missed anything or does anyone have any additional thoughts?  Thanks…

Kevin

On May 4, 2018, at 11:26 AM, Sven Oehme 
> wrote:

There is nothing wrong with running CES on NSD servers; in fact, if all CES 
nodes have access to all LUNs of the filesystem, that's the fastest possible 
configuration, as you eliminate one network hop.
The challenge is always to do the proper sizing so you don't run out of CPU 
and memory on the nodes as you overlay functions. As long as you have good 
monitoring in place, you are good. If you want to take an extra precaution, you 
could 'jail' the SMB and NFS daemons into a cgroup on the node; I probably 
wouldn't limit memory, but CPU, as that is the more critical resource for 
preventing expels and other time-sensitive issues.

sven

On Fri, May 4, 2018 at 8:39 AM Buterbaugh, Kevin L 
> wrote:
Hi All,

In doing some research, I have come across numerous places (IBM docs, 
DeveloperWorks posts, etc.) where it is stated that it is not recommended to 
run CES on NSD servers … but I’ve not found any detailed explanation of why not.

I understand that CES, especially if you enable SMB, can be a resource hog.  
But if I size the servers appropriately … say, late model boxes with 2 x 8 core 
CPU’s, 256 GB RAM, 10 GbE networking … is there any reason why I still should 
not combine the two?

To answer the question of why I would want to … simple, server licenses.

Thanks…

Kevin

—
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and Education
kevin.buterba...@vanderbilt.edu - 
(615)875-9633



___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Not recommended, but why not?

2018-05-05 Thread Jonathan Buzzard

On 04/05/18 18:30, Bryan Banister wrote:
You also have to be careful with network utilization… we have some very 
hungry NFS clients in our environment and the NFS traffic can actually 
DOS other services that need to use the network links.  If you configure 
GPFS admin/daemon traffic over the same link as the SMB/NFS traffic then 
this could lead to GPFS node evictions if disk leases cannot get 
renewed.  You could limit the amount that SMB/NFS use on the network 
with something like the tc facility if you’re sharing the network 
interfaces for GPFS and CES services.




The right answer to that IMHO is a separate VLAN for the GPFS 
command/control traffic that is prioritized above all other VLANs. Do 
something like mark it as a voice VLAN. Basically, don't rely on some OS 
layer to do the right thing at layer three; enforce it at layer two in 
the switches.
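
Switch syntax obviously varies by vendor, but as a sketch of the Linux end you 
can tag the GPFS VLAN's frames with a high 802.1p priority so the switches have 
something to act on (the VLAN id, interface name and address below are made up):

  # Sketch only: dedicated GPFS VLAN whose frames all carry 802.1p priority 5.
  # The switch ports still need to be configured to trust/honour that priority.
  ip link add link eth0 name eth0.100 type vlan id 100 \
      egress-qos-map 0:5 1:5 2:5 3:5 4:5 5:5 6:5 7:5
  ip addr add 10.10.100.11/24 dev eth0.100
  ip link set eth0.100 up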


JAB.

--
Jonathan A. Buzzard Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Not recommended, but why not?

2018-05-04 Thread Christof Schmitt
> In fact, one of the things that’s kinda surprising to me is that upgrading the SMB portion of CES requires a downtime.  Let’s just say that I know for a fact that sernet-samba upgrades can be done rolling / live.
 
I am referring to the open source version of Samba here. That is likely close to "sernet-samba", but I have not seen the details on the code they use for the package build:
 
Clustered Samba with ctdb never supported rolling code upgrades. The reason for this is that SMB-level records are shared across the protocol nodes through ctdb. These records are not versioned and Samba on each node expects to only see matching records. As the details of the internal data shared across the nodes can change through versions, the only safe way to handle this is to not allow rolling code upgrades.
 
It might have appeared that Samba supports rolling code upgrades. Past versions did not check for version compatibility across the nodes, so there was no warning. If the ctdb records shared between the nodes did not change, then this would be no problem (for this particular upgrade path, it is likely different for different Samba versions). Also, if there are no open files or active sessions during the upgrade, the risk is lower, as in that case there are fewer records that could cause problems.
 
The important change is that Samba 4.7.0 introduced checks to enforce compatible versions across the nodes. This just makes the limitation visible, but it was always there. See:
https://www.samba.org/samba/history/samba-4.7.0.html
* CTDB no longer allows mixed minor versions in a cluster

  See the AllowMixedVersions tunable option in ctdb-tunables(7) and also
  https://wiki.samba.org/index.php/Upgrading_a_CTDB_cluster#Policy
 
and also
https://wiki.samba.org/index.php/Upgrading_a_CTDB_cluster
"Rolling Upgrades" and "Problems with Rolling Code Upgrades"
Note that this page refers to two different layers. "ctdb" itself maintains compatibility within an X.Y code stream, but this is not guaranteed for the file server records stored in ctdb databases.
 
 
We have the same limitation and enforcement in the Samba version shipped with Spectrum Scale. I expect the same to be true for all clustered Samba versions today.
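
For reference, with the stock ctdb command line the version check and the 
override look roughly like the sketch below; Spectrum Scale drives ctdb through 
its own tooling, so treat this purely as illustration:

  ctdb version                 # ctdb version on this node
  onnode all ctdb version      # compare versions across all cluster nodes
  # The 4.7+ check can be relaxed via the tunable mentioned above, but that
  # re-exposes you to the record-format mismatches described in this mail:
  ctdb setvar AllowMixedVersions 1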
 
Regards,
Christof Schmitt || IBM || Spectrum Scale Development || Tucson, AZ
christof.schm...@us.ibm.com  ||  +1-520-799-2469  (T/L: 321-2469)
 
 
- Original message -
From: "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu>
Sent by: gpfsug-discuss-boun...@spectrumscale.org
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Cc:
Subject: Re: [gpfsug-discuss] Not recommended, but why not?
Date: Fri, May 4, 2018 12:12 PM

Hi Anderson,
 
Thanks for the response … however, the scenario you describe below wouldn’t impact us.  We have 8 NSD servers and they can easily provide the needed performance to native GPFS clients.  We could also take a downtime if we ever did need to expand in the manner described below.
 
In fact, one of the things that’s kinda surprising to me is that upgrading the SMB portion of CES requires a downtime.  Let’s just say that I know for a fact that sernet-samba upgrades can be done rolling / live.
 
Kevin
 
On May 4, 2018, at 10:52 AM, Anderson Ferreira Nobre <ano...@br.ibm.com> wrote: 

Hi Kevin,
 
I think one of the reasons is that if you need to add or remove nodes from the cluster, you will start to run into the constraints of this kind of solution. Let's say you have a cluster with two nodes that share the same set of LUNs through the SAN, and for some reason you need to add two more nodes that are both NSD servers and protocol nodes. For the new nodes to become NSD servers, you will have to redistribute the NSD disks among the four nodes. But to do that you will have to unmount the filesystems, and to unmount the filesystems you would need to stop the protocol services. In the end you will realize that a simple task like that is disruptive; you won't be able to do it online.
 
 
Abraços / Regards / Saludos,
 
Anderson Nobre
AIX & Power Consultant
Master Certified IT Specialist
IBM Systems Hardware Client Technical Team – IBM Systems Lab Services
Phone: 55-19-2132-4317
E-mail: ano...@br.ibm.com

- Original message -
From: "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu>
Sent by: gpfsug-discuss-boun...@spectrumscale.org
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Cc:
Subject: [gpfsug-discuss] Not recommended, but why not?

Re: [gpfsug-discuss] Not recommended, but why not?

2018-05-04 Thread Bryan Banister
You also have to be careful with network utilization… we have some very hungry 
NFS clients in our environment and the NFS traffic can actually DOS other 
services that need to use the network links.  If you configure GPFS 
admin/daemon traffic over the same link as the SMB/NFS traffic then this could 
lead to GPFS node evictions if disk leases cannot get renewed.  You could limit 
the amount that SMB/NFS use on the network with something like the tc facility 
if you’re sharing the network interfaces for GPFS and CES services.
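
As a rough illustration only, capping SMB/NFS with tc/HTB on a shared 10 GbE 
link could look something like the lines below; the interface name, rates and 
class layout are assumptions, and keep in mind tc only shapes traffic leaving 
the node:

  # Sketch only: cap outbound SMB/NFS at 4 Gbit, let everything else (incl. GPFS) borrow up to line rate.
  tc qdisc add dev eth0 root handle 1: htb default 20
  tc class add dev eth0 parent 1:  classid 1:1  htb rate 10gbit
  tc class add dev eth0 parent 1:1 classid 1:10 htb rate 4gbit ceil 4gbit    # SMB / NFS
  tc class add dev eth0 parent 1:1 classid 1:20 htb rate 6gbit ceil 10gbit   # everything else
  # Traffic sourced from the SMB (445) and NFS (2049) server ports goes to the capped class:
  tc filter add dev eth0 parent 1: protocol ip u32 match ip sport 445  0xffff flowid 1:10
  tc filter add dev eth0 parent 1: protocol ip u32 match ip sport 2049 0xffff flowid 1:10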

HTH,
-Bryan

From: gpfsug-discuss-boun...@spectrumscale.org 
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Sven Oehme
Sent: Friday, May 04, 2018 11:27 AM
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Subject: Re: [gpfsug-discuss] Not recommended, but why not?

Note: External Email

There is nothing wrong with running CES on NSD servers; in fact, if all CES 
nodes have access to all LUNs of the filesystem, that's the fastest possible 
configuration, as you eliminate one network hop.
The challenge is always to do the proper sizing so you don't run out of CPU 
and memory on the nodes as you overlay functions. As long as you have good 
monitoring in place, you are good. If you want to take an extra precaution, you 
could 'jail' the SMB and NFS daemons into a cgroup on the node; I probably 
wouldn't limit memory, but CPU, as that is the more critical resource for 
preventing expels and other time-sensitive issues.

sven

On Fri, May 4, 2018 at 8:39 AM Buterbaugh, Kevin L 
<kevin.buterba...@vanderbilt.edu<mailto:kevin.buterba...@vanderbilt.edu>> wrote:
Hi All,

In doing some research, I have come across numerous places (IBM docs, 
DeveloperWorks posts, etc.) where it is stated that it is not recommended to 
run CES on NSD servers … but I’ve not found any detailed explanation of why not.

I understand that CES, especially if you enable SMB, can be a resource hog.  
But if I size the servers appropriately … say, late model boxes with 2 x 8 core 
CPU’s, 256 GB RAM, 10 GbE networking … is there any reason why I still should 
not combine the two?

To answer the question of why I would want to … simple, server licenses.

Thanks…

Kevin

—
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and Education
kevin.buterba...@vanderbilt.edu<mailto:kevin.buterba...@vanderbilt.edu> - 
(615)875-9633<tel:(615)%20875-9633>



___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org<http://spectrumscale.org>
http://gpfsug.org/mailman/listinfo/gpfsug-discuss



___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Not recommended, but why not?

2018-05-04 Thread Sven Oehme
There is nothing wrong with running CES on NSD servers; in fact, if all CES
nodes have access to all LUNs of the filesystem, that's the fastest possible
configuration, as you eliminate one network hop.
The challenge is always to do the proper sizing so you don't run out of
CPU and memory on the nodes as you overlay functions. As long as you have
good monitoring in place, you are good. If you want to take an extra
precaution, you could 'jail' the SMB and NFS daemons into a cgroup on the
node; I probably wouldn't limit memory, but CPU, as that is the more critical
resource for preventing expels and other time-sensitive issues.
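
A rough sketch of what such a 'jail' could look like with cgroup v1 and the
libcgroup tools; the daemon names and the quota are placeholders, and the
classification has to be re-applied whenever CES restarts the daemons:

  # Sketch only: confine the protocol daemons to a CPU cgroup.
  cgcreate -g cpu:/ces
  cgset -r cpu.cfs_period_us=100000 ces                  # 100 ms scheduling period
  cgset -r cpu.cfs_quota_us=800000  ces                  # at most ~8 cores per period
  cgclassify -g cpu:/ces $(pgrep -d' ' 'smbd|ganesha')   # move smbd/ganesha into it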

sven

On Fri, May 4, 2018 at 8:39 AM Buterbaugh, Kevin L <
kevin.buterba...@vanderbilt.edu> wrote:

> Hi All,
>
> In doing some research, I have come across numerous places (IBM docs,
> DeveloperWorks posts, etc.) where it is stated that it is not recommended
> to run CES on NSD servers … but I’ve not found any detailed explanation of
> why not.
>
> I understand that CES, especially if you enable SMB, can be a resource
> hog.  But if I size the servers appropriately … say, late model boxes with
> 2 x 8 core CPU’s, 256 GB RAM, 10 GbE networking … is there any reason why I
> still should not combine the two?
>
> To answer the question of why I would want to … simple, server licenses.
>
> Thanks…
>
> Kevin
>
> —
> Kevin Buterbaugh - Senior System Administrator
> Vanderbilt University - Advanced Computing Center for Research and
> Education
> kevin.buterba...@vanderbilt.edu - (615)875-9633 <(615)%20875-9633>
>
>
>
> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Not recommended, but why not?

2018-05-04 Thread Buterbaugh, Kevin L
Hi Anderson,

Thanks for the response … however, the scenario you describe below wouldn’t 
impact us.  We have 8 NSD servers and they can easily provide the needed 
performance to native GPFS clients.  We could also take a downtime if we ever 
did need to expand in the manner described below.

In fact, one of the things that’s kinda surprising to me is that upgrading the 
SMB portion of CES requires a downtime.  Let’s just say that I know for a fact 
that sernet-samba upgrades can be done rolling / live.

Kevin

On May 4, 2018, at 10:52 AM, Anderson Ferreira Nobre 
<ano...@br.ibm.com<mailto:ano...@br.ibm.com>> wrote:

Hi Kevin,

I think one of the reasons is that if you need to add or remove nodes from the 
cluster, you will start to run into the constraints of this kind of solution. 
Let's say you have a cluster with two nodes that share the same set of LUNs 
through the SAN, and for some reason you need to add two more nodes that are 
both NSD servers and protocol nodes. For the new nodes to become NSD servers, 
you will have to redistribute the NSD disks among the four nodes. But to do 
that you will have to unmount the filesystems, and to unmount the filesystems 
you would need to stop the protocol services. In the end you will realize that 
a simple task like that is disruptive; you won't be able to do it online.


Abraços / Regards / Saludos,

Anderson Nobre
AIX & Power Consultant
Master Certified IT Specialist
IBM Systems Hardware Client Technical Team – IBM Systems Lab Services




Phone: 55-19-2132-4317
E-mail: ano...@br.ibm.com<mailto:ano...@br.ibm.com>


- Original message -
From: "Buterbaugh, Kevin L" 
<kevin.buterba...@vanderbilt.edu<mailto:kevin.buterba...@vanderbilt.edu>>
Sent by: 
gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org>
To: gpfsug main discussion list 
<gpfsug-discuss@spectrumscale.org<mailto:gpfsug-discuss@spectrumscale.org>>
Cc:
Subject: [gpfsug-discuss] Not recommended, but why not?
Date: Fri, May 4, 2018 12:39 PM

Hi All,

In doing some research, I have come across numerous places (IBM docs, 
DeveloperWorks posts, etc.) where it is stated that it is not recommended to 
run CES on NSD servers … but I’ve not found any detailed explanation of why not.

I understand that CES, especially if you enable SMB, can be a resource hog.  
But if I size the servers appropriately … say, late model boxes with 2 x 8 core 
CPU’s, 256 GB RAM, 10 GbE networking … is there any reason why I still should 
not combine the two?

To answer the question of why I would want to … simple, server licenses.

Thanks…

Kevin

—
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and Education
kevin.buterba...@vanderbilt.edu<mailto:kevin.buterba...@vanderbilt.edu> - 
(615)875-9633


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org<http://spectrumscale.org>
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org<http://spectrumscale.org>
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Not recommended, but why not?

2018-05-04 Thread Skylar Thompson
Our experience is that CES (at least NFS/ganesha) can easily consume all of
the CPU resources on a system. If you're running it on the same hardware as
your NSD services, then you risk delaying native GPFS I/O requests as well.
We haven't found a great way to limit the amount of resources that NFS/ganesha
can use, though maybe in the future it could be put in a cgroup since
it's all user-space?
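
If ganesha runs as an ordinary systemd service on the protocol nodes, a minimal
sketch would be to use the per-unit cgroup controls systemd already provides
(the unit name and quota value are guesses, and CES may well manage ganesha
outside of systemd):

  systemctl set-property nfs-ganesha.service CPUQuota=600%    # cap at ~6 cores
  systemctl show -p CPUQuota nfs-ganesha.service               # verify the setting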

On Fri, May 04, 2018 at 03:38:57PM +, Buterbaugh, Kevin L wrote:
> Hi All,
> 
> In doing some research, I have come across numerous places (IBM docs, 
> DeveloperWorks posts, etc.) where it is stated that it is not recommended to 
> run CES on NSD servers … but I’ve not found any detailed explanation of 
> why not.
> 
> I understand that CES, especially if you enable SMB, can be a resource hog.  
> But if I size the servers appropriately … say, late model boxes with 2 x 8 
> core CPU’s, 256 GB RAM, 10 GbE networking … is there any reason why I 
> still should not combine the two?
> 
> To answer the question of why I would want to … simple, server licenses.
> 
> Thanks…
> 
> Kevin
> 
> —
> Kevin Buterbaugh - Senior System Administrator
> Vanderbilt University - Advanced Computing Center for Research and Education
> kevin.buterba...@vanderbilt.edu - 
> (615)875-9633
> 
> 
> 

> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss


-- 
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Not recommended, but why not?

2018-05-04 Thread Anderson Ferreira Nobre
Hi Kevin,
 
I think one of the reasons is that if you need to add or remove nodes from the cluster, you will start to run into the constraints of this kind of solution. Let's say you have a cluster with two nodes that share the same set of LUNs through the SAN, and for some reason you need to add two more nodes that are both NSD servers and protocol nodes. For the new nodes to become NSD servers, you will have to redistribute the NSD disks among the four nodes. But to do that you will have to unmount the filesystems, and to unmount the filesystems you would need to stop the protocol services. In the end you will realize that a simple task like that is disruptive; you won't be able to do it online.
 
 
Abraços / Regards / Saludos,
 
Anderson Nobre
AIX & Power Consultant
Master Certified IT Specialist
IBM Systems Hardware Client Technical Team – IBM Systems Lab Services
Phone: 55-19-2132-4317
E-mail: ano...@br.ibm.com
 
 
- Original message -
From: "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu>
Sent by: gpfsug-discuss-boun...@spectrumscale.org
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Cc:
Subject: [gpfsug-discuss] Not recommended, but why not?
Date: Fri, May 4, 2018 12:39 PM

Hi All,
 
In doing some research, I have come across numerous places (IBM docs, DeveloperWorks posts, etc.) where it is stated that it is not recommended to run CES on NSD servers … but I’ve not found any detailed explanation of why not.
 
I understand that CES, especially if you enable SMB, can be a resource hog.  But if I size the servers appropriately … say, late model boxes with 2 x 8 core CPU’s, 256 GB RAM, 10 GbE networking … is there any reason why I still should not combine the two?
 
To answer the question of why I would want to … simple, server licenses.
 
Thanks…
 
Kevin 

—
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and Education
kevin.buterba...@vanderbilt.edu - (615)875-9633
  

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
 

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] Not recommended, but why not?

2018-05-04 Thread Buterbaugh, Kevin L
Hi All,

In doing some research, I have come across numerous places (IBM docs, 
DeveloperWorks posts, etc.) where it is stated that it is not recommended to 
run CES on NSD servers … but I’ve not found any detailed explanation of why not.

I understand that CES, especially if you enable SMB, can be a resource hog.  
But if I size the servers appropriately … say, late model boxes with 2 x 8 core 
CPU’s, 256 GB RAM, 10 GbE networking … is there any reason why I still should 
not combine the two?

To answer the question of why I would want to … simple, server licenses.

Thanks…

Kevin

—
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and Education
kevin.buterba...@vanderbilt.edu - 
(615)875-9633



___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss