Can you use node affinity within CES groups?

For example, I have some shiny new servers I want to use normally. If I plan 
maintenance, I move the IP to another shiny box. But I also have some old, 
off-support legacy hardware that I'm happy to use in a DR situation (e.g. it 
sits in another site). So I want a group for my SMB boxes and a group for my 
NFS boxes, with affinity to the new hardware normally, and the old hardware 
available in case of failure.
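
Roughly what I have in mind, as a sketch (node names and the IP are invented, 
and whether the node-affinity policy plays nicely with groups like this is 
exactly what I'm asking):

  mmchnode --ces-group smb -N shiny1,shiny2,old-dr1       # SMB group: new boxes plus an old DR spare
  mmces address add --ces-ip 192.0.2.11 --ces-group smb   # SMB service IP confined to that group
  mmces address policy node-affinity                      # normally keep the IP stuck to shiny1/shiny2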

Whilst we're on protocols, are there any restrictions on using mixed 
architectures? I don't recall seeing this documented, but... e.g. my new shiny 
boxes are ppc64le systems and my old legacy nodes are x86. It's all CTDB 
locking, right? (OK, maybe mixing big-endian and little-endian hosts would be 
bad.)

(Sure, I'll take a performance hit when I fail over to the old nodes, but that 
is better than no service.)

Simon
________________________________________
From: gpfsug-discuss-boun...@spectrumscale.org 
[gpfsug-discuss-boun...@spectrumscale.org] on behalf of aspal...@us.ibm.com 
[aspal...@us.ibm.com]
Sent: 09 January 2019 17:21
To: gpfsug-discuss@spectrumscale.org
Cc: gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] Spectrum Scale protocol node service separation

Hey guys - I wanted to reply from the Scale development side.....

First off, consider CES as a stack and the implications of that:
- all protocols are installed on all nodes
- if a specific protocol is enabled (SMB, NFS, OBJ, Block), it's enabled for 
all protocol nodes
- if a specific protocol is started (SMB, NFS, OBJ, Block), it's started on all 
nodes by default, unless manually specified.
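
For example, a rough command sketch (exact flags vary by release; check the 
mmces man page):

  mmces service enable SMB          # enable is cluster-wide: every CES node gets SMB enabled
  mmces service start SMB -a        # start SMB on all CES nodes
  mmces service stop SMB -N prt03   # a manual, per-node exception - possible, but not failover-aware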

As was indicated in the e-mail chain, you don't want to be removing RPMs to 
create a subset of nodes serving various protocols, as this will cause overall 
issues.  You also don't want to manually disable protocols on some nodes and 
not others in order to end up with nodes that are 'only serving' SMB, for 
instance.  This kind of manual stopping/starting of protocols won't be 
respected by failover.

===============================================================
A few possible solutions if you want to segregate protocols onto specific 
nodes:
===============================================================
1) CES-Groups in combination with specific IPs / DNS hostnames that correspond 
to each protocol.
- As mentioned, this can still be bypassed if someone attempts a mount using an 
IP/DNS name not set for their protocol.  However, you could probably prevent 
some of this with an external firewall rule.
- Using CES-Groups confines the IPs/DNS hostnames to very specific nodes
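
A rough sketch of this (group names, node names, and IPs are made up; you 
would publish matching DNS names, e.g. smb.example.com and obj.example.com, 
for these IPs):

  mmchnode --ces-group smb -N cesnode1,cesnode2           # nodes intended to serve SMB
  mmchnode --ces-group obj -N cesnode3,cesnode4           # nodes intended to serve object
  mmces address add --ces-ip 192.0.2.11 --ces-group smb   # this IP only lands on smb-group nodes
  mmces address add --ces-ip 192.0.2.21 --ces-group obj   # this IP only lands on obj-group nodes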

2) Firewall rules
- This is best done external to the cluster, at a level that can restrict 
specific protocol traffic to specific IPs/hostnames.
- Combine this with #1 for the best results.
- Although it may work, try to stay away from crazy firewall rules on each 
protocol node itself, as this can get confusing very quickly.  It's easier if 
you can set this up external to the nodes.
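
If you do end up filtering on a node itself, an illustrative iptables fragment 
for a node meant to serve only SMB could look like this (ports and rule 
ordering are examples only; the equivalent on an external firewall is 
preferable):

  iptables -A INPUT -p tcp --dport 445 -j ACCEPT    # SMB
  iptables -A INPUT -p tcp --dport 2049 -j DROP     # NFS
  iptables -A INPUT -p tcp --dport 8080 -j DROP     # object proxy (port depends on your OBJ setup)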

3) Similar to the above, but using the node-affinity CES-IP policy - and no 
CES groups.
- Upside: node affinity will attempt to keep your CES-IPs associated with 
specific nodes.  So if you restrict specific protocol traffic to specific IPs, 
they'll stay on the nodes you designate.
- Watch out for failovers.  In error cases (or upgrades) where an IP needs to 
move to another node, it obviously can't remain on the node that's having 
issues.  This means you may have protocol traffic crossover when this occurs.
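
A minimal sketch of option 3 (IP and node names are invented; check the mmces 
syntax for your release):

  mmces address policy node-affinity                           # IPs try to stay on the node they were assigned to
  mmces address move --ces-ip 192.0.2.11 --ces-node cesnode1   # place the 'SMB' IP on its preferred node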

4) A separate remote cluster for each CES protocol
- In this example, you could make fairly small remote clusters (although we 
recommend at least 2-3 nodes for failover purposes).  The local cluster would 
provide the storage.  The remote clusters would mount it.  One remote cluster 
could have only SMB enabled, another could have only OBJ enabled, and so on.
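
A rough outline from the protocol-cluster side, assuming the mmauth key 
exchange with the storage cluster has already been done (cluster, node, and 
file system names are placeholders):

  mmremotecluster add storage.example.com -n storenode1,storenode2 -k storage.example.com.pub
  mmremotefs add gpfs0 -f gpfs0 -C storage.example.com -T /gpfs/gpfs0
  mmces service enable SMB    # this remote cluster serves only SMB; OBJ lives on a different remote cluster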

------
I hope this helps a bit....


Regards,

Aaron Palazzolo
IBM Spectrum Scale Deployment, Infrastructure, Virtualization
9042 S Rita Road, Tucson AZ 85744
Phone: 520-799-5161, T/L: 321-5161
E-mail: aspal...@us.ibm.com


----- Original message -----
From: gpfsug-discuss-requ...@spectrumscale.org
Sent by: gpfsug-discuss-boun...@spectrumscale.org
To: gpfsug-discuss@spectrumscale.org
Cc:
Subject: gpfsug-discuss Digest, Vol 84, Issue 4
Date: Wed, Jan 9, 2019 7:13 AM


Today's Topics:

   1. Re: Spectrum Scale protocol node service separation.
      (Andi Rhod Christiansen)
   2. Re: Spectrum Scale protocol node service separation.
      (Sanchez, Paul)


----------------------------------------------------------------------

Message: 1
Date: Wed, 9 Jan 2019 13:24:30 +0000
From: Andi Rhod Christiansen <a...@b4restore.com>
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Subject: Re: [gpfsug-discuss] Spectrum Scale protocol node service
separation.
Message-ID:
<a92739a568ec4e73b68de624127d2...@b4rwex01.internal.b4restore.com>
Content-Type: text/plain; charset="utf-8"

Hi Simon,

It was actually also the only solution I found if I want to keep them within 
the same cluster.

Thanks for the reply, I will see what we figure out!

Venlig hilsen / Best Regards

Andi Rhod Christiansen

From: gpfsug-discuss-boun...@spectrumscale.org 
<gpfsug-discuss-boun...@spectrumscale.org> On behalf of Simon Thompson
Sent: 9 January 2019 13:20
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Subject: Re: [gpfsug-discuss] Spectrum Scale protocol node service separation.

You have to run all services on all nodes. (Actually, it's technically 
possible to remove the packages once protocols are running on the node, but 
the next time you reboot the node it will get marked unhealthy and you'll 
spend an hour working out why. <AHEM>)

But what we do to split the load is assign different IPs to different CES 
groups, and then assign the SMB nodes to the SMB group IPs, etc.

Technically a user could still connect to the NFS (in our case) IPs with the 
SMB protocol, but there's not a lot we can do about that - though our upstream 
firewall drops said traffic.

Simon

From: <gpfsug-discuss-boun...@spectrumscale.org> on behalf of 
"a...@b4restore.com" <a...@b4restore.com>
Reply-To: "gpfsug-discuss@spectrumscale.org" <gpfsug-discuss@spectrumscale.org>
Date: Wednesday, 9 January 2019 at 10:31
To: "gpfsug-discuss@spectrumscale.org" <gpfsug-discuss@spectrumscale.org>
Subject: [gpfsug-discuss] Spectrum Scale protocol node service separation.

Hi,

I seem to be unable to find any information on separating protocol services 
onto specific CES nodes within a cluster. Does anyone know if it is possible 
to take, let's say, 4 of the CES nodes within a cluster, divide them into two 
groups, and have two of them running SMB and the other two running OBJ, 
instead of having them all run both services?

If it is possible, it would be great to hear the pros and cons of doing this.

Thanks in advance!

Venlig hilsen / Best Regards

Andi Christiansen
IT Solution Specialist



------------------------------

Message: 2
Date: Wed, 9 Jan 2019 14:05:48 +0000
From: "Sanchez, Paul" <paul.sanc...@deshaw.com>
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Subject: Re: [gpfsug-discuss] Spectrum Scale protocol node service
separation.
Message-ID:
<53ec54bb621242109a789e51d61b1...@mbxtoa1.winmail.deshaw.com>
Content-Type: text/plain; charset="utf-8"

The docs say: "CES supports the following export protocols: NFS, SMB, object, 
and iSCSI (block). Each protocol can be enabled or disabled in the cluster. If 
a protocol is enabled in the CES cluster, all CES nodes serve that protocol." 
Which would seem to indicate that the answer is "no".

This kind of thing is another good reason to license Scale by storage capacity 
rather than by sockets (PVU).  This approach was already a good idea due to the 
flexibility it allows to scale manager, quorum, and NSD server nodes for 
performance and high-availability without affecting your software licensing 
costs.  This can result in better design and the flexibility to more quickly 
respond to new problems by adding server nodes.

So assuming you're not on the old PVU licensing model, it is trivial to deploy 
as many gateway nodes as needed to separate these into distinct remote 
clusters.  You can create an object gateway cluster and a CES gateway cluster, 
each of which only mounts and exports what is necessary.  You can even 
virtualize these servers and host them on the same hardware, if you're into 
that.

-Paul

From: gpfsug-discuss-boun...@spectrumscale.org 
<gpfsug-discuss-boun...@spectrumscale.org> On Behalf Of Andi Rhod Christiansen
Sent: Wednesday, January 9, 2019 5:25 AM
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Subject: [gpfsug-discuss] Spectrum Scale protocol node service separation.

Hi,

I seem to be unable to find any information on separating protocol services 
onto specific CES nodes within a cluster. Does anyone know if it is possible 
to take, let's say, 4 of the CES nodes within a cluster, divide them into two 
groups, and have two of them running SMB and the other two running OBJ, 
instead of having them all run both services?

If it is possible, it would be great to hear the pros and cons of doing this.

Thanks in advance!

Venlig hilsen / Best Regards

Andi Christiansen
IT Solution Specialist


------------------------------

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


End of gpfsug-discuss Digest, Vol 84, Issue 4
*********************************************



