> In fact, one of the things that’s kinda surprising to me is that upgrading the SMB portion of CES requires a downtime. Let’s just say that I know for a fact that sernet-samba can be done rolling / live.
I am referring to the open source version of Samba here. That is likely close to "sernet-samba", but I have not seen the details of the code they use for their package build:
Clustered Samba with ctdb has never supported rolling code upgrades. The reason is that SMB-level records are shared across the protocol nodes through ctdb. These records are not versioned, and Samba on each node expects to see only matching records. Because the layout of the internal data shared across the nodes can change between versions, the only safe approach is to disallow rolling code upgrades.
It might have appeared that Samba supported rolling code upgrades. Past versions did not check for version compatibility across the nodes, so there was no warning. If the ctdb records shared between the nodes happened not to change, then there was no problem for that particular upgrade path; other version combinations may well behave differently. Also, if there are no open files or active sessions during the upgrade, the risk is lower, since fewer records exist that could cause problems.
The important change is that Samba 4.7.0 introduced checks to enforce compatible versions across the nodes. This just makes the limitation visible, but it was always there. See:
* CTDB no longer allows mixed minor versions in a cluster See the AllowMixedVersions tunable option in ctdb-tunables(7) and also https://wiki.samba.org/index.php/Upgrading_a_CTDB_cluster#Policy
and also
"Rolling Upgrades" and "Problems with Rolling Code Upgrades"
Note that this page refers to two different layers. ctdb itself maintains compatibility within an X.Y code stream, but this is not guaranteed for the file server records stored in ctdb databases.
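As a rough illustration of the rule being enforced, the check amounts to comparing the X.Y (major.minor) prefix of the version on each node. The function below is purely illustrative and is not ctdb's actual implementation; on a real cluster you would compare the output of "ctdb version" on each node, and the AllowMixedVersions tunable exists to override the refusal during an upgrade window:

```shell
# Illustrative sketch only: two nodes may form a cluster only if their
# Samba/ctdb versions share the same X.Y (major.minor) prefix.
same_xy() {
  a_xy=$(printf '%s' "$1" | cut -d. -f1,2)
  b_xy=$(printf '%s' "$2" | cut -d. -f1,2)
  [ "$a_xy" = "$b_xy" ]
}

same_xy "4.9.1" "4.9.4" && echo "4.9.1 / 4.9.4: same X.Y stream, nodes may join"
same_xy "4.9.1" "4.10.0" || echo "4.9.1 / 4.10.0: mixed minor versions, refused"
```

Again, this only describes the X.Y policy; it says nothing about the unversioned file server records inside the ctdb databases, which is the underlying reason the restriction exists.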
We have the same limitation and enforcement in the Samba version shipped with Spectrum Scale. I expect the same to be true for all clustered Samba versions today.
Regards,
Christof Schmitt || IBM || Spectrum Scale Development || Tucson, AZ
[email protected] || +1-520-799-2469 (T/L: 321-2469)
----- Original message -----
From: "Buterbaugh, Kevin L" <[email protected]>
Sent by: [email protected]
To: gpfsug main discussion list <[email protected]>
Cc:
Subject: Re: [gpfsug-discuss] Not recommended, but why not?
Date: Fri, May 4, 2018 12:12 PM
Hi Anderson,

Thanks for the response … however, the scenario you describe below wouldn’t impact us. We have 8 NSD servers and they can easily provide the needed performance to native GPFS clients. We could also take a downtime if we ever did need to expand in the manner described below.

In fact, one of the things that’s kinda surprising to me is that upgrading the SMB portion of CES requires a downtime. Let’s just say that I know for a fact that sernet-samba can be done rolling / live.

Kevin

On May 4, 2018, at 10:52 AM, Anderson Ferreira Nobre <[email protected]> wrote:

Hi Kevin,

I think one of the reasons is that if you need to add or remove nodes from the cluster, you will start to face the constraints of this kind of solution. Let's say you have a cluster with two nodes that share the same set of LUNs through the SAN, and for some reason you need to add two more nodes that are NSD servers and protocol nodes. For the new nodes to become NSD servers, you will have to redistribute the NSD disks among the four nodes. But to do that you have to unmount the filesystems, and to unmount the filesystems you need to stop the protocol services. In the end you realize that a seemingly simple task like that is disruptive; you won't be able to do it online.
Abraços / Regards / Saludos,Anderson Nobre
AIX & Power Consultant
Master Certified IT Specialist
IBM Systems Hardware Client Technical Team – IBM Systems Lab Services
Phone: 55-19-2132-4317
E-mail: [email protected] ----- Original message -----
From: "Buterbaugh, Kevin L" <[email protected]>
Sent by: [email protected]
To: gpfsug main discussion list <[email protected]>
Cc:
Subject: [gpfsug-discuss] Not recommended, but why not?
Date: Fri, May 4, 2018 12:39 PM
Hi All,

In doing some research, I have come across numerous places (IBM docs, DeveloperWorks posts, etc.) where it is stated that it is not recommended to run CES on NSD servers … but I’ve not found any detailed explanation of why not.

I understand that CES, especially if you enable SMB, can be a resource hog. But if I size the servers appropriately … say, late model boxes with 2 x 8 core CPU’s, 256 GB RAM, 10 GbE networking … is there any reason why I still should not combine the two?

To answer the question of why I would want to … simple, server licenses.

Thanks…

Kevin

—
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and Education
[email protected] - (615)875-9633
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
