Yes, the token managers will reside on the NSD server cluster, which has the NSD 
servers that provide access to the underlying data and metadata storage.  I 
believe that all nodes with the “manager” designation will participate in 
token management operations as needed.  There is, however, no way to 
specify which node will be assigned as the primary file system manager or overall 
cluster manager; these are two different roles but may reside on the same node.

Tokens themselves, however, are distributed and managed by clients directly.  
When a file is first opened, the node that opened it becomes the 
“metanode” for the file, and all metadata updates on the file are handled 
by this metanode until it closes the file handle, at which point another node 
becomes the “metanode”.  For byte-range locking, the file system manager 
revokes tokens from nodes holding a byte-range lock when another 
node requests access to the same byte range.  This ensures that nodes 
cannot hold byte-range locks that permanently block other nodes from accessing 
regions of a file.
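To make the revocation behavior above concrete, here is a minimal conceptual sketch (plain Python, not GPFS code); the class and method names are illustrative, not Spectrum Scale APIs:

```python
# Conceptual sketch: a token server revoking overlapping byte-range
# tokens held by other nodes when a new conflicting request arrives.
# Names (ByteRangeToken, TokenServer, acquire) are invented for illustration.

class ByteRangeToken:
    def __init__(self, node, start, end):
        self.node = node    # node holding the token
        self.start = start  # first byte covered by the token
        self.end = end      # last byte covered by the token

    def overlaps(self, start, end):
        # Two closed ranges overlap iff each starts before the other ends.
        return self.start <= end and start <= self.end

class TokenServer:
    def __init__(self):
        self.tokens = []

    def acquire(self, node, start, end):
        # Revoke tokens held by *other* nodes that overlap the requested range,
        # then grant the new token. Returns the nodes that were revoked.
        revoked = [t for t in self.tokens
                   if t.node != node and t.overlaps(start, end)]
        self.tokens = [t for t in self.tokens if t not in revoked]
        self.tokens.append(ByteRangeToken(node, start, end))
        return [t.node for t in revoked]

srv = TokenServer()
srv.acquire("node-1", 0, 4095)               # node-1 locks the first 4 KiB
revoked = srv.acquire("node-2", 1024, 2048)  # node-2 requests an overlapping range
print(revoked)                               # node-1's token is revoked
```

In real Spectrum Scale the revocation is of course negotiated over RPC and the holder may flush dirty data before giving up the token; this sketch only shows the conflict-detection idea.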

Hope that helps,
-Bryan

From: [email protected] 
<[email protected]> On Behalf Of Billich Heinrich Rainer 
(PSI)
Sent: Friday, July 27, 2018 11:50 AM
To: gpfsug main discussion list <[email protected]>
Subject: Re: [gpfsug-discuss] control which hosts become token manager

Note: External Email
________________________________
Hello,

So probably I was wrong from the beginning – please can somebody clarify: in a 
multicluster environment with all storage and file systems hosted by a single 
cluster, will all token managers reside in this central cluster?

Or are there also token managers in the storage-less clusters which just mount? 
These managers wouldn’t be accessible to all nodes which access the file system, 
hence I doubt this exists.

Still, it would be nice to know how to influence token manager placement and 
how to exclude certain machines. And the output of ‘mmdiag --tokenmgr’ indicates 
that there _are_ token managers in the remote-mounting cluster – confusing.

I would greatly appreciate it if somebody could sort this out. A pointer to the 
relevant documentation would also be welcome.

Thank you & Kind regards,

Heiner

--
Paul Scherrer Institut
Science IT
Heiner Billich
WHGA 106
CH 5232  Villigen PSI
056 310 36 02
https://www.psi.ch


From: <[email protected]> on behalf of "Billich Heinrich Rainer (PSI)" 
<[email protected]>
Reply-To: gpfsug main discussion list <[email protected]>
Date: Friday 27 July 2018 at 17:33
To: gpfsug main discussion list <[email protected]>
Subject: Re: [gpfsug-discuss] control which hosts become token manager

Thank you,

The cluster was freshly set up and the VM node was never designated as manager; it 
was created as quorum-client. What I didn’t mention but probably should have: 
this is a multicluster mount, and the cluster has no storage of its own. Hence the 
file system managers are on the home cluster, according to mmlsmgr. Hm, probably 
more complicated than I initially thought. Still, I would expect that for 
file access that is restricted to this cluster, all token management is handled 
inside the cluster, too? And I don’t want the weakest node to participate.

Kind regards,

Heiner

--
Paul Scherrer Institut
Science IT
Heiner Billich
WHGA 106
CH 5232  Villigen PSI
056 310 36 02
https://www.psi.ch


From: <[email protected]> on behalf of Bryan Banister 
<[email protected]>
Reply-To: gpfsug main discussion list <[email protected]>
Date: Tuesday 24 July 2018 at 23:12
To: gpfsug main discussion list <[email protected]>
Subject: Re: [gpfsug-discuss] control which hosts become token manager

Agree with Peter here.  And if the file system and workload are of significant 
size, then isolating the token manager to a dedicated node is definitely best 
practice.

Unfortunately there isn’t a way to specify a preferred manager per FS… (Bryan 
starts typing up a new RFE…).

Cheers,
-Bryan

From: [email protected] 
<[email protected]> On Behalf Of Peter Childs
Sent: Tuesday, July 24, 2018 2:29 PM
To: gpfsug main discussion list <[email protected]>
Subject: Re: [gpfsug-discuss] control which hosts become token manager

Note: External Email
________________________________

What does mmlsmgr show?

Your config looks fine.

I suspect you need to do a

mmchmgr perf node-1.psi.ch
mmchmgr tiered node-2.psi.ch

It looks like the node was set up as a manager and was demoted to just quorum, 
but since it is still currently the manager, it needs to be told to stop.

From experience, it's also worth having the file system managers for different 
file systems on different nodes, if at all possible.

But that's just a guess without seeing the output of mmlsmgr.


Peter Childs
Research Storage
ITS Research and Teaching Support
Queen Mary, University of London


---- Billich Heinrich Rainer (PSI) wrote ----
Hello,

I want to control which nodes can become token managers. Specifically, I run a 
virtual machine as a quorum node. I don’t want this machine to become a token 
manager - it has no access to InfiniBand and only very limited memory.

What I see is that ‘mmdiag --tokenmgr’ lists the machine as an active token 
manager. The machine has the role ‘quorum-client’. This doesn’t seem sufficient to 
exclude it.

Is there any way to tell Spectrum Scale to exclude this single machine with 
role quorum-client?

I run 5.0.1-1.

Sorry if this is an FAQ; I did search quite a bit before I wrote to the list.

Thank you,

Heiner Billich


[root@node-2 ~]# mmlscluster

GPFS cluster information
========================
  GPFS cluster name:         node.psi.ch
  GPFS cluster id:           5389874024582403895
  GPFS UID domain:           node.psi.ch
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp
  Repository type:           CCR

Node  Daemon node name       IP address     Admin node name        Designation
--------------------------------------------------------------------------------
   1   node-1.psi.ch       a.b.95.31  node-1.psi.ch       quorum-manager
   2   node-2.psi.ch       a.b.95.32  node-2.psi.ch       quorum-manager
   3   node-quorum.psi.ch  a.b.95.30  node-quorum.psi.ch  quorum                
       <<<< VIRTUAL MACHINE >>>>>>>>>

[root@node-2 ~]# mmdiag --tokenmgr

=== mmdiag: tokenmgr ===
  Token Domain perf
    There are 3 active token servers in this domain.
    Server list:
      a.b.95.120
      a.b.95.121
      a.b.95.122    <<<< VIRTUAL MACHINE >>>>>>>>>
  Token Domain tiered
    There are 3 active token servers in this domain.
    Server list:
      a.b.95.120
      a.b.95.121
      a.b.95.122   <<<< VIRTUAL MACHINE >>>>>>>>>

--
Paul Scherrer Institut
Science IT
Heiner Billich
WHGA 106
CH 5232  Villigen PSI
056 310 36 02
https://www.psi.ch


________________________________

Note: This email is for the confidential use of the named addressee(s) only and 
may contain proprietary, confidential, or privileged information and/or 
personal data. If you are not the intended recipient, you are hereby notified 
that any review, dissemination, or copying of this email is strictly 
prohibited, and requested to notify the sender immediately and destroy this 
email and any attachments. Email transmission cannot be guaranteed to be secure 
or error-free. The Company, therefore, does not make any guarantees as to the 
completeness or accuracy of this email or any attachments. This email is for 
informational purposes only and does not constitute a recommendation, offer, 
request, or solicitation of any kind to buy, sell, subscribe, redeem, or 
perform any type of transaction of a financial product. Personal data, as 
defined by applicable data privacy laws, contained in this email may be 
processed by the Company, and any of its affiliated or related companies, for 
potential ongoing compliance and/or business-related purposes. You may have 
rights regarding your personal data; for information on exercising these rights 
or the Company’s treatment of personal data, please email 
[email protected]<mailto:[email protected]>.


_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
