Re: [gpfsug-discuss] Get list of filesets _without_ running mmlsfileset?

2019-01-09 Thread Andrew Beattie
Kevin,
 
That sounds like a useful script
would you care to share?
 
Thanks
Andrew Beattie
Software Defined Storage  - IT Specialist
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
 
 
- Original message -
From: "Buterbaugh, Kevin L"
Sent by: gpfsug-discuss-boun...@spectrumscale.org
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Get list of filesets _without_ running mmlsfileset?
Date: Thu, Jan 10, 2019 9:22 AM

Hi All,
 
Let me answer Skylar’s questions in another e-mail, which may also indicate whether the REST API is a possibility or not.
 
The Python script in question is to display quota information for a user.  The mmlsquota command has a couple of issues:  1) its output is confusing to some of our users, 2) more significantly, it displays a ton of information that doesn’t apply to the user running it.  For example, it will display all the filesets in a filesystem whether or not the user has access to them.  So the Python script figures out what group(s) the user is a member of and only displays information pertinent to them (i.e. the group of the fileset junction path is a group this user is a member of) … and in a simplified (and potentially colorized) output format.
 
And typing that preceding paragraph caused the lightbulb to go off … I know the answer to my own question … have the script run mmlsquota and get the full list of filesets from that, then parse that to determine which ones I actually need to display quota information for.  Thanks!
 
Kevin
—
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and Education
kevin.buterba...@vanderbilt.edu - (615)875-9633
 
 

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Get list of filesets _without_ running mmlsfileset?

2019-01-09 Thread Buterbaugh, Kevin L
Hi All,

Let me answer Skylar’s questions in another e-mail, which may also indicate whether 
the REST API is a possibility or not.

The Python script in question is to display quota information for a user.  The 
mmlsquota command has a couple of issues:  1) its output is confusing to some 
of our users, 2) more significantly, it displays a ton of information that 
doesn’t apply to the user running it.  For example, it will display all the 
filesets in a filesystem whether or not the user has access to them.  So the 
Python script figures out what group(s) the user is a member of and only 
displays information pertinent to them (i.e. the group of the fileset junction 
path is a group this user is a member of) … and in a simplified (and 
potentially colorized) output format.

And typing that preceding paragraph caused the lightbulb to go off … I know the 
answer to my own question … have the script run mmlsquota and get the full list 
of filesets from that, then parse that to determine which ones I actually need 
to display quota information for.  Thanks!
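The plan described above can be sketched in a few lines of Python. This is a hedged illustration, not verified against a live cluster: the colon-delimited -Y output convention with a HEADER row naming the fields is common to the mm* commands, but the field names, the sample output, and the group-matching shortcut below are all illustrative assumptions.

```python
#!/usr/bin/env python3
# Sketch of the plan above: parse mmlsquota's machine-readable output
# and keep only the filesets relevant to the invoking user. The -Y
# field names and the sample output below are assumptions.

def parse_y_output(text):
    """Parse GPFS-style -Y output: a HEADER row names the fields,
    later rows carry the values, all colon-delimited."""
    fields, rows = None, []
    for line in text.splitlines():
        cols = line.split(":")
        if len(cols) < 3:
            continue
        if cols[2] == "HEADER":
            fields = cols
        elif fields:
            rows.append(dict(zip(fields, cols)))
    return rows

def filesets_for_user(rows, user_groups):
    """Keep FILESET quota rows whose name matches one of the user's
    groups (a stand-in for the real junction-path check, which would
    stat each junction and compare its group owner)."""
    return [r for r in rows
            if r.get("quotaType") == "FILESET" and r.get("name") in user_groups]

# Illustrative stand-in for `mmlsquota -Y <fs>` output:
sample = (
    "mmlsquota::HEADER:version:reserved:reserved:filesystemName:quotaType:id:name:blockUsage\n"
    "mmlsquota::0:1:::gpfs0:FILESET:101:labA:2048\n"
    "mmlsquota::0:1:::gpfs0:FILESET:102:labB:4096\n"
)
relevant = filesets_for_user(parse_y_output(sample), user_groups={"labA"})
```

In the real script, `sample` would come from something like `subprocess.run(["mmlsquota", "-Y", fs], capture_output=True, text=True).stdout`, and `user_groups` from the `grp`/`os` modules.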

Kevin
—
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and Education
kevin.buterba...@vanderbilt.edu - 
(615)875-9633

On Jan 9, 2019, at 4:42 PM, Simon Thompson 
mailto:s.j.thomp...@bham.ac.uk>> wrote:

Hi Kevin,

Have you looked at the rest API?

https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.2/com.ibm.spectrum.scale.v5r02.doc/bl1adm_listofapicommands.htm

I don't know how much access control there is available in the API so not sure 
if you could lock some sort of service user down to just the get filesets 
command?

Simon
___
From: gpfsug-discuss-boun...@spectrumscale.org 
[gpfsug-discuss-boun...@spectrumscale.org] on behalf of Buterbaugh, Kevin L 
[kevin.buterba...@vanderbilt.edu]
Sent: 08 January 2019 22:12
To: gpfsug main discussion list
Subject: [gpfsug-discuss] Get list of filesets _without_ running mmlsfileset?

Hi All,

Happy New Year to all!  Personally, I’ll gladly and gratefully settle for 2019 
not being a dumpster fire like 2018 was (those who attended my talk at the user 
group meeting at SC18 know what I’m referring to), but I certainly wish all of 
you the best!

Is there a way to get a list of the filesets in a filesystem without running 
mmlsfileset?  I was kind of expecting to find them in one of the config files 
somewhere under /var/mmfs but haven’t found them yet in the searching I’ve done.

The reason I’m asking is that we have a Python script that users can run that 
needs to get a list of all the filesets in a filesystem.  There are obviously 
multiple issues with that, so the workaround we’re using for now is to have a 
cron job which runs mmlsfileset once a day and dumps it out to a text file, 
which the script then reads.  That’s sub-optimal for any day on which a fileset 
gets created or deleted, so I’m looking for a better way … one which doesn’t 
require root privileges and preferably doesn’t involve running a GPFS command 
at all.

Thanks in advance.

Kevin

P.S.  I am still working on metadata and iSCSI testing and will report back on 
that when complete.
P.P.S.  We ended up adding our new NSDs comprised of (not really) 12 TB disks 
to the capacity pool and things are working fine.

—
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and Education
kevin.buterba...@vanderbilt.edu - 
(615)875-9633





Re: [gpfsug-discuss] Get list of filesets _without_ running mmlsfileset?

2019-01-09 Thread Sanchez, Paul
You could also wrap whatever provisioning script you're using (the thing that 
runs mmcrfileset), which must already be running as root, so that it also 
updates the cached text file afterward.
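Paul’s wrapper idea might be sketched as below. The command paths, cache location, and function shape are illustrative assumptions; the one substantive point is the atomic rename, so the user-facing script never reads a half-written cache.

```python
#!/usr/bin/env python3
# Sketch of a root-run provisioning wrapper that creates a fileset and
# immediately refreshes the cached fileset list the user script reads.
# Command paths and the cache location are illustrative assumptions.
import os
import subprocess

CACHE = "/var/local/gpfs-fileset-cache.txt"  # assumed location

def provision_fileset(fs, fileset, cache=CACHE, run=subprocess.run):
    # Create the fileset (the step that already requires root).
    run(["/usr/lpp/mmfs/bin/mmcrfileset", fs, fileset], check=True)
    # Regenerate the cache right away so the daily cron copy never lags
    # behind a newly created fileset.
    out = run(["/usr/lpp/mmfs/bin/mmlsfileset", fs],
              check=True, capture_output=True, text=True).stdout
    tmp = cache + ".tmp"
    with open(tmp, "w") as f:
        f.write(out)
    os.replace(tmp, cache)  # atomic on POSIX: readers see old or new, never partial
```

The `run` parameter exists only so the function can be exercised without GPFS installed.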

-Paul

-Original Message-
From: gpfsug-discuss-boun...@spectrumscale.org 
 On Behalf Of Skylar Thompson
Sent: Wednesday, January 9, 2019 4:37 PM
To: kevin.buterba...@vanderbilt.edu
Cc: gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] Get list of filesets _without_ running 
mmlsfileset?

I suppose you could run the underlying tslsfileset, though that's probably not 
the answer you're looking for.

Out of curiosity, what are you hoping to gain by not running mmlsfileset?
Is the problem scaling due to the number of filesets that you have defined?

On Tue, Jan 08, 2019 at 10:12:22PM +, Buterbaugh, Kevin L wrote:
> Hi All,
> 
> Happy New Year to all!  Personally, I’ll gladly and gratefully settle for 
> 2019 not being a dumpster fire like 2018 was (those who attended my talk at 
> the user group meeting at SC18 know what I’m referring to), but I certainly 
> wish all of you the best!
> 
> Is there a way to get a list of the filesets in a filesystem without running 
> mmlsfileset?  I was kind of expecting to find them in one of the config files 
> somewhere under /var/mmfs but haven’t found them yet in the searching 
> I’ve done.
> 
> The reason I’m asking is that we have a Python script that users can run 
> that needs to get a list of all the filesets in a filesystem.  There are 
> obviously multiple issues with that, so the workaround we’re using for now 
> is to have a cron job which runs mmlsfileset once a day and dumps it out to a 
> text file, which the script then reads.  That’s sub-optimal for any day on 
> which a fileset gets created or deleted, so I’m looking for a better way 
> … one which doesn’t require root privileges and preferably doesn’t 
> involve running a GPFS command at all.
> 
> Thanks in advance.
> 
> Kevin
> 
> P.S.  I am still working on metadata and iSCSI testing and will report back 
> on that when complete.
> P.P.S.  We ended up adding our new NSDs comprised of (not really) 12 TB disks 
> to the capacity pool and things are working fine.
> 
> —
> Kevin Buterbaugh - Senior System Administrator
> Vanderbilt University - Advanced Computing Center for Research and Education
> kevin.buterba...@vanderbilt.edu - (615)875-9633
> 
> 
> 



--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine 


Re: [gpfsug-discuss] Get list of filesets _without_ running mmlsfileset?

2019-01-09 Thread Simon Thompson
Hi Kevin,

Have you looked at the REST API?

https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.2/com.ibm.spectrum.scale.v5r02.doc/bl1adm_listofapicommands.htm

I don't know how much access control there is available in the API so not sure 
if you could lock some sort of service user down to just the get filesets 
command?

Simon
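For what it’s worth, a call against the management API might look roughly like this. The endpoint path follows the v2 API command list Simon links to, but the URL, port, and JSON response shape here are assumptions to verify against your Scale release, and certificate checking is disabled only because GUI nodes often ship self-signed certificates.

```python
#!/usr/bin/env python3
# Hedged sketch of querying the Spectrum Scale REST API for filesets.
# Endpoint path, port, and JSON field names are assumptions.
import base64
import json
import ssl
import urllib.request

def fileset_names(payload):
    """Pull names out of a GET .../filesets response body; the
    'filesets'/'filesetName' keys are assumed, not verified."""
    return [f.get("filesetName") for f in payload.get("filesets", [])]

def get_filesets(host, fs, user, password, port=443):
    url = f"https://{host}:{port}/scalemgmt/v2/filesystems/{fs}/filesets"
    req = urllib.request.Request(url)
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    # Self-signed GUI certificates are common; in production, verify properly.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with urllib.request.urlopen(req, context=ctx) as resp:
        return fileset_names(json.load(resp))
```

Whether a restricted service user can be limited to just this GET is exactly the open question Simon raises.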
___
From: gpfsug-discuss-boun...@spectrumscale.org 
[gpfsug-discuss-boun...@spectrumscale.org] on behalf of Buterbaugh, Kevin L 
[kevin.buterba...@vanderbilt.edu]
Sent: 08 January 2019 22:12
To: gpfsug main discussion list
Subject: [gpfsug-discuss] Get list of filesets _without_ running mmlsfileset?

Hi All,

Happy New Year to all!  Personally, I’ll gladly and gratefully settle for 2019 
not being a dumpster fire like 2018 was (those who attended my talk at the user 
group meeting at SC18 know what I’m referring to), but I certainly wish all of 
you the best!

Is there a way to get a list of the filesets in a filesystem without running 
mmlsfileset?  I was kind of expecting to find them in one of the config files 
somewhere under /var/mmfs but haven’t found them yet in the searching I’ve done.

The reason I’m asking is that we have a Python script that users can run that 
needs to get a list of all the filesets in a filesystem.  There are obviously 
multiple issues with that, so the workaround we’re using for now is to have a 
cron job which runs mmlsfileset once a day and dumps it out to a text file, 
which the script then reads.  That’s sub-optimal for any day on which a fileset 
gets created or deleted, so I’m looking for a better way … one which doesn’t 
require root privileges and preferably doesn’t involve running a GPFS command 
at all.

Thanks in advance.

Kevin

P.S.  I am still working on metadata and iSCSI testing and will report back on 
that when complete.
P.P.S.  We ended up adding our new NSDs comprised of (not really) 12 TB disks 
to the capacity pool and things are working fine.

—
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and Education
kevin.buterba...@vanderbilt.edu - 
(615)875-9633





Re: [gpfsug-discuss] Get list of filesets _without_ running mmlsfileset?

2019-01-09 Thread Skylar Thompson
I suppose you could run the underlying tslsfileset, though that's probably
not the answer you're looking for.

Out of curiosity, what are you hoping to gain by not running mmlsfileset?
Is the problem scaling due to the number of filesets that you have defined?

On Tue, Jan 08, 2019 at 10:12:22PM +, Buterbaugh, Kevin L wrote:
> Hi All,
> 
> Happy New Year to all!  Personally, I’ll gladly and gratefully settle for 
> 2019 not being a dumpster fire like 2018 was (those who attended my talk at 
> the user group meeting at SC18 know what I’m referring to), but I certainly 
> wish all of you the best!
> 
> Is there a way to get a list of the filesets in a filesystem without running 
> mmlsfileset?  I was kind of expecting to find them in one of the config files 
> somewhere under /var/mmfs but haven’t found them yet in the searching 
> I’ve done.
> 
> The reason I’m asking is that we have a Python script that users can run 
> that needs to get a list of all the filesets in a filesystem.  There are 
> obviously multiple issues with that, so the workaround we’re using for now 
> is to have a cron job which runs mmlsfileset once a day and dumps it out to a 
> text file, which the script then reads.  That’s sub-optimal for any day on 
> which a fileset gets created or deleted, so I’m looking for a better way 
> … one which doesn’t require root privileges and preferably doesn’t 
> involve running a GPFS command at all.
> 
> Thanks in advance.
> 
> Kevin
> 
> P.S.  I am still working on metadata and iSCSI testing and will report back 
> on that when complete.
> P.P.S.  We ended up adding our new NSDs comprised of (not really) 12 TB disks 
> to the capacity pool and things are working fine.
> 
> —
> Kevin Buterbaugh - Senior System Administrator
> Vanderbilt University - Advanced Computing Center for Research and Education
> kevin.buterba...@vanderbilt.edu - 
> (615)875-9633
> 
> 
> 



-- 
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] Get list of filesets _without_ running mmlsfileset?

2019-01-09 Thread Buterbaugh, Kevin L
Hi All,

Happy New Year to all!  Personally, I’ll gladly and gratefully settle for 2019 
not being a dumpster fire like 2018 was (those who attended my talk at the user 
group meeting at SC18 know what I’m referring to), but I certainly wish all of 
you the best!

Is there a way to get a list of the filesets in a filesystem without running 
mmlsfileset?  I was kind of expecting to find them in one of the config files 
somewhere under /var/mmfs but haven’t found them yet in the searching I’ve done.

The reason I’m asking is that we have a Python script that users can run that 
needs to get a list of all the filesets in a filesystem.  There are obviously 
multiple issues with that, so the workaround we’re using for now is to have a 
cron job which runs mmlsfileset once a day and dumps it out to a text file, 
which the script then reads.  That’s sub-optimal for any day on which a fileset 
gets created or deleted, so I’m looking for a better way … one which doesn’t 
require root privileges and preferably doesn’t involve running a GPFS command 
at all.

Thanks in advance.

Kevin

P.S.  I am still working on metadata and iSCSI testing and will report back on 
that when complete.
P.P.S.  We ended up adding our new NSDs comprised of (not really) 12 TB disks 
to the capacity pool and things are working fine.

—
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and Education
kevin.buterba...@vanderbilt.edu - 
(615)875-9633





Re: [gpfsug-discuss] User Login Active Directory authentication on CES nodes with SMB protocol

2019-01-09 Thread Christopher Black
We use realmd and some automation for sssd configs to get Linux hosts to have 
local login and ssh tied to AD accounts; however, we do not apply these configs 
on our protocol nodes.

From:  on behalf of Christof Schmitt 

Reply-To: gpfsug main discussion list 
Date: Wednesday, January 9, 2019 at 2:03 PM
To: "gpfsug-discuss@spectrumscale.org" 
Cc: "gpfsug-discuss@spectrumscale.org" , Ingo 
Meents 
Subject: Re: [gpfsug-discuss] User Login Active Directory authentication on CES 
nodes with SMB protocol

There is the PAM module that would forward authentication requests to winbindd:
/usr/lpp/mmfs/lib64/security/pam_gpfs-winbind.so
In theory that can be added to the PAM configuration in /etc/pam.d/. On the 
other hand, we have never tested this nor claimed support, so there might be 
reasons why this won't work.
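To make the “in theory” concrete, wiring that module into the PAM stack might look like the fragment below. This is purely illustrative and untested, exactly as the caveat above says: the target service file, control flags, and position in the stack are guesses to be validated, not a supported configuration.

```
# /etc/pam.d/sshd -- illustrative fragment only, untested
# (control flags and placement in the stack are assumptions)
auth     sufficient   /usr/lpp/mmfs/lib64/security/pam_gpfs-winbind.so
account  sufficient   /usr/lpp/mmfs/lib64/security/pam_gpfs-winbind.so
```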

Other customers have configured sssd manually in addition to the Scale 
authentication to allow user logon and authentication for sudo.

If the request here is to configure AD authentication through mmuserauth and 
that should also provide user logon, that should probably be treated as a 
feature request through RFE.

Regards,

Christof Schmitt || IBM || Spectrum Scale Development || Tucson, AZ
christof.schm...@us.ibm.com  ||  +1-520-799-2469(T/L: 321-2469)


- Original message -
From: "Lyle Gayne" 
Sent by: gpfsug-discuss-boun...@spectrumscale.org
To: gpfsug main discussion list 
Cc: Ingo Meents 
Subject: Re: [gpfsug-discuss] User Login Active Directory authentication on CES 
nodes with SMB protocol
Date: Tue, Jan 8, 2019 2:54 PM


Adding Ingo Meents for response


From: "Rob Logie" 
To: gpfsug-discuss@spectrumscale.org
Date: 01/08/2019 04:50 PM
Subject: [gpfsug-discuss] User Login Active Directory authentication on CES 
nodes with SMB protocol
Sent by: gpfsug-discuss-boun...@spectrumscale.org





Hi All
Is there a way to enable user login Active Directory authentication on CES 
nodes with SMB protocol that are joined to an AD domain? The AD 
authentication is working for access to the SMB shares, but not for user login 
authentication on the CES nodes.


Thanks !


Regards,
Rob Logie
IT Specialist









Re: [gpfsug-discuss] Spectrum Scale protocol node service separation

2019-01-09 Thread Carl Zetie
ST>I believe socket-based licenses are also about to be, or already are, no longer 
ST>available for new customers (existing customers can continue to buy).

ST>Carl can probably comment on this?
 
That is correct. Friday Jan 11 is the last chance for *new* customers to buy 
Standard Edition sockets. 
 
And as Simon says, those of you who are currently Sockets customers can remain 
on Sockets, buying additional licenses and renewing existing licenses. (IBM 
Legal requires me to add, any statement about the future is an intention, not a 
commitment -- but, as I've said before, as long as it's my decision to make, my 
intent is to keep Sockets as long as existing customers want them). 

And yes, one of the reasons I wanted to get away from Socket pricing is the 
kind of scenarios some of you brought up. Implementing the best deployment 
topology for your needs shouldn't be a licensing transaction. (Don't even get 
me started on client licenses).
 
 
regards,

 
 
Carl Zetie  
Program Director  
Offering Management for Spectrum Scale, IBM  
  
(540) 882 9353 ][ Research Triangle Park
 ca...@us.ibm.com 



Re: [gpfsug-discuss] User Login Active Directory authentication on CES nodes with SMB protocol

2019-01-09 Thread Christof Schmitt
There is the PAM module that would forward authentication requests to winbindd:
/usr/lpp/mmfs/lib64/security/pam_gpfs-winbind.so
In theory that can be added to the PAM configuration in /etc/pam.d/. On the other hand, we have never tested this nor claimed support, so there might be reasons why this won't work.
 
Other customers have configured sssd manually in addition to the Scale authentication to allow user logon and authentication for sudo.
 
If the request here is to configure AD authentication through mmuserauth and that should also provide user logon, that should probably be treated as a feature request through RFE.
 
Regards,
 
Christof Schmitt || IBM || Spectrum Scale Development || Tucson, AZ
christof.schm...@us.ibm.com  ||  +1-520-799-2469  (T/L: 321-2469)
 
 
 



Re: [gpfsug-discuss] Spectrum Scale protocol node service separation

2019-01-09 Thread Christof Schmitt

Re: [gpfsug-discuss] Spectrum Scale protocol node service separation

2019-01-09 Thread Simon Thompson


--

Message: 1
Date: Wed, 9 Jan 2019 13:24:30 +
From: Andi Rhod Christiansen 
To: gpfsug main discussion list 
Subject: Re: [gpfsug-discuss] Spectrum Scale protocol node service
separation.
Message-ID:

Content-Type: text/plain; charset="utf-8"

Hi Simon,

It was actually also the only solution I found if I want to keep them within 
the same cluster.

Thanks for the reply, I will see what we figure out!

Venlig hilsen / Best Regards

Andi Rhod Christiansen

Fra: gpfsug-discuss-boun...@spectrumscale.org 
 P? vegne af Simon Thompson
Sendt: 9. januar 2019 13:20
Til: gpfsug main discussion list 
Emne: Re: [gpfsug-discuss] Spectrum Scale protocol node service separation.

You have to run all services on all nodes … actually it’s technically 
possible to remove the packages once protocols are running on the node, but next 
time you reboot the node, it will get marked unhealthy and you’ll spend an hour 
working out why.

But what we do to split load is have different IPs assigned to different CES 
groups and then assign the SMB nodes to the SMB group IPs, etc.

Technically a user could still connect to the NFS (in our case) IPs with the SMB 
protocol, but there’s not a lot we can do about that, though our upstream 
firewall drops said traffic.
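The group setup described above would be along these lines. Treat this as a pseudocode sketch: the group names and addresses are invented, and the option names should be checked against the mmces and mmchnode man pages before use.

```
# Sketch only -- verify option names against the man pages.
# Tag protocol-facing addresses with named CES groups:
mmces address add --ces-ip 10.10.1.1 --ces-group smb
mmces address add --ces-ip 10.10.1.2 --ces-group nfs
# Pin nodes to a group so they host only that group's addresses:
mmchnode --ces-group smb -N ces1,ces2
mmchnode --ces-group nfs -N ces3,ces4
```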

Simon

From: gpfsug-discuss-boun...@spectrumscale.org on behalf of "a...@b4restore.com"
Reply-To: "gpfsug-discuss@spectrumscale.org"
Date: Wednesday, 9 January 2019 at 10:31
To: "gpfsug-discuss@spectrumscale.org"
Subject: [gpfsug-discuss] Spectrum Scale protocol node service separation.

Hi,

I seem to be unable to find any information on separating protocol services on 
specific CES nodes within a cluster. Does anyone know if it is possible to 
take, let’s say, 4 of the CES nodes within a cluster, divide them in two, and 
have two of them running SMB and the other two running OBJ instead of having 
them all run both services?

If it is possible it would be great to hear pros and cons about doing this.

Thanks in advance!

Venlig hilsen / Best Regards

Andi Christiansen
IT Solution Specialist


-- next part --
An HTML attachment was scrubbed...
URL: 
<http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20190109/83819399/attachment-0001.html>

Re: [gpfsug-discuss] Spectrum Scale protocol node service separation

2019-01-09 Thread Aaron S Palazzolo

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Spectrum Scale protocol node service separation.

2019-01-09 Thread Simon Thompson
I think only recently was remote cluster support added (though we have been 
doing it since CES was released).

I agree that capacity licenses have freed us to implement a better solution: we 
no longer run quorum/token managers on NSD nodes to reduce socket costs.

I believe socket-based licenses are also no longer available, or soon won't be, 
for new customers (existing customers can continue to buy).

Carl can probably comment on this?

Simon


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Spectrum Scale protocol node service separation.

2019-01-09 Thread Sanchez, Paul
The docs say: “CES supports the following export protocols: NFS, SMB, object, 
and iSCSI (block). Each protocol can be enabled or disabled in the cluster. If 
a protocol is enabled in the CES cluster, all CES nodes serve that protocol.” 
Which would seem to indicate that the answer is “no”.

This kind of thing is another good reason to license Scale by storage capacity 
rather than by sockets (PVU).  This approach was already a good idea due to the 
flexibility it allows to scale manager, quorum, and NSD server nodes for 
performance and high-availability without affecting your software licensing 
costs.  This can result in better design and the flexibility to more quickly 
respond to new problems by adding server nodes.

So assuming you’re not on the old PVU licensing model, it is trivial to deploy 
as many gateway nodes as needed to separate these into distinct remote 
clusters.  You can create an object gateway cluster, and a CES gateway cluster 
each which only mounts and exports what is necessary.  You can even virtualize 
these servers and host them on the same hardware, if you’re into that.
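A minimal sketch of the remote-mount split described above. The cluster names, node names, key paths, and filesystem names are placeholders, and the exact flags should be checked against the mmauth, mmremotecluster, and mmremotefs man pages for your Scale release:

```shell
# On the storage (owning) cluster and the new protocol cluster,
# exchange keys and authorize the remote mount.

# 1. On each cluster, generate (or refresh) the cluster key:
mmauth genkey new

# 2. On the storage cluster, authorize the protocol cluster
#    (cluster name and key file are examples):
mmauth add smbgateway.example.com -k /tmp/smbgateway_id_rsa.pub
mmauth grant smbgateway.example.com -f gpfs01

# 3. On the protocol cluster, define the remote cluster and filesystem:
mmremotecluster add storage.example.com -k /tmp/storage_id_rsa.pub \
    -n storagenode01,storagenode02
mmremotefs add rgpfs01 -f gpfs01 -C storage.example.com -T /gpfs01

# 4. Mount it, then enable only the protocol this cluster should serve:
mmmount rgpfs01 -a
mmchnode --ces-enable -N prt01,prt02   # designate the protocol nodes
mmces service enable SMB               # e.g. SMB-only on this cluster
```

Repeating this with a second gateway cluster that enables only OBJ gives the two-and-two split the original question asked about.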

-Paul


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Spectrum Scale protocol node service separation.

2019-01-09 Thread Andi Rhod Christiansen
Hi Simon,

It was actually also the only solution I found if I want to keep them within 
the same cluster.

Thanks for the reply; I will see what we figure out!

Venlig hilsen / Best Regards

Andi Rhod Christiansen

From: gpfsug-discuss-boun...@spectrumscale.org 
 On Behalf Of Simon Thompson
Sent: 9 January 2019 13:20
To: gpfsug main discussion list 
Subject: Re: [gpfsug-discuss] Spectrum Scale protocol node service separation.

You have to run all services on all nodes ( ☹ ). Actually, it's technically 
possible to remove the packages once protocols are running on the node, but the 
next time you reboot the node it will get marked unhealthy and you'll spend an 
hour working out why… 

But what we do to split load is have different IPs assigned to different CES 
groups and then assign the SMB nodes to the SMB group IPs etc …

Technically a user could still connect to the NFS (in our case) IPs with SMB 
protocol, but there’s not a lot we can do about that … though our upstream 
firewall drops said traffic.

Simon



___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Spectrum Scale protocol node service separation.

2019-01-09 Thread Andi Rhod Christiansen
Hi Andrew,

Where can I request such a feature? 

Venlig hilsen / Best Regards

Andi Rhod Christiansen

From: gpfsug-discuss-boun...@spectrumscale.org 
 On Behalf Of Andrew Beattie
Sent: 9 January 2019 12:17
To: gpfsug-discuss@spectrumscale.org
Cc: gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] Spectrum Scale protocol node service separation.

Andi,

All the CES nodes in the same cluster will share the same protocol exports.
If you want to separate them, you need to create remote mount clusters and 
export the additional protocols via the remote mount.

It would actually be a useful RFE to have the ability to create CES groups 
attached to the base cluster and, by group, create exports of different 
protocols, but it's not available today.
Andrew Beattie
Software Defined Storage  - IT Specialist
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com




___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Spectrum Scale protocol node service separation.

2019-01-09 Thread Simon Thompson
You have to run all services on all nodes ( ☹ ). Actually, it's technically 
possible to remove the packages once protocols are running on the node, but the 
next time you reboot the node it will get marked unhealthy and you'll spend an 
hour working out why… 

But what we do to split load is have different IPs assigned to different CES 
groups and then assign the SMB nodes to the SMB group IPs etc …

Technically a user could still connect to the NFS (in our case) IPs with SMB 
protocol, but there’s not a lot we can do about that … though our upstream 
firewall drops said traffic.
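For reference, the group/IP split described above can be sketched roughly as follows. The group names, node names, and IP addresses are made up for illustration, and the exact flags should be verified against the mmces and mmchnode man pages for your Scale release:

```shell
# Tag two nodes as the "smb" group and two as the "obj" group
# (node and group names are examples):
mmchnode --ces-group smb -N prt01,prt02
mmchnode --ces-group obj -N prt03,prt04

# Bind CES IPs to those groups so each address only fails over
# within its own group of nodes:
mmces address add --ces-ip 10.0.1.11,10.0.1.12 --ces-group smb
mmces address add --ces-ip 10.0.1.21,10.0.1.22 --ces-group obj

# Both services still run everywhere, but clients pointed at the
# group IPs (e.g. via DNS aliases per protocol) land on the
# intended nodes:
mmces address list --full-list
```

Note that this only steers client traffic; as stated above, it does not stop a client from speaking SMB to an NFS-group IP, hence the upstream firewall rules.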

Simon




___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Spectrum Scale protocol node service separation.

2019-01-09 Thread Andrew Beattie
Andi,
 
All the CES nodes in the same cluster will share the same protocol exports.
If you want to separate them, you need to create remote mount clusters and export the additional protocols via the remote mount.

It would actually be a useful RFE to have the ability to create CES groups attached to the base cluster and, by group, create exports of different protocols, but it's not available today.
Andrew Beattie
Software Defined Storage  - IT Specialist
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
 
 
 

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] Spectrum Scale protocol node service separation.

2019-01-09 Thread Andi Rhod Christiansen
Hi,

I seem to be unable to find any information on separating protocol services onto 
specific CES nodes within a cluster. Does anyone know if it is possible to 
take, let's say, four of the CES nodes within a cluster, divide them into two 
groups, and have two of them running SMB and the other two running OBJ, instead 
of having them all run both services?

If it is possible, it would be great to hear the pros and cons of doing this.

Thanks in advance!

Venlig hilsen / Best Regards

Andi Christiansen
IT Solution Specialist


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss