Hi All,

It seems that the pcsd service is not running on all three nodes.

Please check pcsd on each node with the commands below and, if it is not running, start it:

  1) systemctl status pcsd
  2) systemctl start pcsd

After that, try to authenticate the nodes again.
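If it helps, a minimal sequence along those lines might look like the sketch below. The hostnames are the ones from Aviran's log; enabling pcsd at boot is only a suggestion, not something the auth step strictly needs:

  # on every node: check pcsd and start it if it is not running
  systemctl status pcsd
  systemctl start pcsd
  systemctl enable pcsd    # optional: also start pcsd automatically at boot

  # then, from one node, retry the authentication (omitting -p makes pcs
  # prompt for the hacluster password instead of sending an empty one)
  pcs cluster auth ufm-host42-012.rdmz.labs.mlnx ufm-host42-013.rdmz.labs.mlnx ufm-host42-014.rdmz.labs.mlnx -u hacluster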
Regards,
Kanika

-----Original Message-----
From: [email protected] [mailto:[email protected]]
Sent: Monday, November 13, 2017 4:30 PM
To: [email protected]
Subject: Users Digest, Vol 34, Issue 24

Send Users mailing list submissions to
	[email protected]

To subscribe or unsubscribe via the World Wide Web, visit
	http://lists.clusterlabs.org/mailman/listinfo/users
or, via email, send a message with subject or body 'help' to
	[email protected]

You can reach the person managing the list at
	[email protected]

When replying, please edit your Subject line so it is more specific than
"Re: Contents of Users digest..."


Today's Topics:

   1. Re: pcs authentication fails on Centos 7.0 & 7.1 (Digimer)
   2. Antw: Re: issues with pacemaker daemonization (Ulrich Windl)
   3. Re: pcs authentication fails on Centos 7.0 & 7.1 (Jan Friesse)


----------------------------------------------------------------------

Message: 1
Date: Sun, 12 Nov 2017 15:14:07 -0500
From: Digimer <[email protected]>
To: [email protected]
Subject: Re: [ClusterLabs] pcs authentication fails on Centos 7.0 & 7.1
Message-ID: <[email protected]>
Content-Type: text/plain; charset=windows-1252

On 2017-11-12 04:20 AM, Aviran Jerbby wrote:
>
> Hi Clusterlabs mailing list,
>
> I'm having issues running pcs authentication on RHEL/CentOS 7.0/7.1
> (please see the log below).
>
> It's important to mention that pcs authentication on RHEL/CentOS
> 7.2/7.4, with the same setup and packages, is working.
>
> [root@ufm-host42-014 tmp]# cat /etc/redhat-release
> CentOS Linux release 7.0.1406 (Core)
>
> [root@ufm-host42-014 tmp]# rpm -qa | grep openssl
> openssl-libs-1.0.2k-8.el7.x86_64
> openssl-devel-1.0.2k-8.el7.x86_64
> openssl-1.0.2k-8.el7.x86_64
>
> [root@ufm-host42-014 tmp]# rpm -qa | grep pcs
> pcs-0.9.158-6.el7.centos.x86_64
> pcsc-lite-libs-1.8.8-4.el7.x86_64
>
> [root@ufm-host42-014 tmp]# pcs cluster auth
> ufm-host42-012.rdmz.labs.mlnx ufm-host42-013.rdmz.labs.mlnx
> ufm-host42-014.rdmz.labs.mlnx -u hacluster -p "" --debug
> Running: /usr/bin/ruby -I/usr/lib/pcsd/ /usr/lib/pcsd/pcsd-cli.rb auth
> Environment:
>   DISPLAY=localhost:10.0
>   GEM_HOME=/usr/lib/pcsd/vendor/bundle/ruby
>   HISTCONTROL=ignoredups
>   HISTSIZE=1000
>   HOME=/root
>   HOSTNAME=ufm-host42-014.rdmz.labs.mlnx
>   KDEDIRS=/usr
>   LANG=en_US.UTF-8
>   LC_ALL=C
>   LESSOPEN=||/usr/bin/lesspipe.sh %s
>   LOGNAME=root
>   LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=4
> 0;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30
> ;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=
> 01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lz
> ma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*
> .z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=
> 01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=
> 01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sa
> r=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*
> .7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:
> *.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;3
> 5:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=
> 01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m
> 2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:
> *.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;3
> 5:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01
> ;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01
> ;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=
> 01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.m
> p3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.
> oga=01;36:*.spx=01;36:*.xspf=01;36:*
>   MAIL=/var/spool/mail/root
>   OLDPWD=/root
>   PATH=/usr/lib64/qt-3.3/bin:/root/perl5/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
>   PCSD_DEBUG=true
>   PCSD_NETWORK_TIMEOUT=60
>   PERL5LIB=/root/perl5/lib/perl5:
>   PERL_LOCAL_LIB_ROOT=:/root/perl5
>   PERL_MB_OPT=--install_base /root/perl5
>   PERL_MM_OPT=INSTALL_BASE=/root/perl5
>   PWD=/tmp
>   QTDIR=/usr/lib64/qt-3.3
>   QTINC=/usr/lib64/qt-3.3/include
>   QTLIB=/usr/lib64/qt-3.3/lib
>   QT_GRAPHICSSYSTEM=native
>   QT_GRAPHICSSYSTEM_CHECKED=1
>   QT_PLUGIN_PATH=/usr/lib64/kde4/plugins:/usr/lib/kde4/plugins
>   SHELL=/bin/bash
>   SHLVL=1
>   SSH_CLIENT=10.208.0.12 47232 22
>   SSH_CONNECTION=10.208.0.12 47232 10.224.40.143 22
>   SSH_TTY=/dev/pts/0
>   TERM=xterm
>   USER=root
>   XDG_RUNTIME_DIR=/run/user/0
>   XDG_SESSION_ID=6
>   _=/usr/sbin/pcs
>
> --Debug Input Start--
> {"username": "hacluster", "local": false, "nodes":
> ["ufm-host42-014.rdmz.labs.mlnx", "ufm-host42-013.rdmz.labs.mlnx",
> "ufm-host42-012.rdmz.labs.mlnx"], "password": "", "force": false}
> --Debug Input End--
>
> Finished running: /usr/bin/ruby -I/usr/lib/pcsd/ /usr/lib/pcsd/pcsd-cli.rb auth
> Return value: 0
>
> --Debug Stdout Start--
> {
>   "status": "ok",
>   "data": {
>     "auth_responses": {
>       "ufm-host42-014.rdmz.labs.mlnx": {
>         "status": "noresponse"
>       },
>       "ufm-host42-012.rdmz.labs.mlnx": {
>         "status": "noresponse"
>       },
>       "ufm-host42-013.rdmz.labs.mlnx": {
>         "status": "noresponse"
>       }
>     },
>     "sync_successful": true,
>     "sync_nodes_err": [
>     ],
>     "sync_responses": {
>     }
>   },
>   "log": [
>     "I, [2017-11-07T19:52:27.434067 #25065]  INFO -- : PCSD Debugging enabled\n",
"D, [2017-11-07T19:52:27.454014 #25065] DEBUG -- : Did not detect > RHEL 6\n",* > > *??? "I, [2017-11-07T19:52:27.454076 #25065]? INFO -- : Running: > /usr/sbin/corosync-cmapctl totem.cluster_name\n",* > > *??? "I, [2017-11-07T19:52:27.454127 #25065]? INFO -- : CIB USER: > hacluster, groups: \n",* > > *??? "D, [2017-11-07T19:52:27.458142 #25065] DEBUG -- : []\n",* > > *??? "D, [2017-11-07T19:52:27.458216 #25065] DEBUG -- : [\"Failed to > initialize the cmap API. Error CS_ERR_LIBRARY\\n\"]\n",* > > *??? "D, [2017-11-07T19:52:27.458284 #25065] DEBUG -- : Duration: > 0.003997742s\n",* > > *??? "I, [2017-11-07T19:52:27.458382 #25065]? INFO -- : Return Value: > 1\n",* > > *??? "W, [2017-11-07T19:52:27.458477 #25065]? WARN -- : Cannot read > config 'corosync.conf' from '/etc/corosync/corosync.conf': No such > file\n",* > > *??? "W, [2017-11-07T19:52:27.458546 #25065]? WARN -- : Cannot read > config 'corosync.conf' from '/etc/corosync/corosync.conf': No such > file or directory - /etc/corosync/corosync.conf\n",* > > *??? "I, [2017-11-07T19:52:27.459289 #25065]? INFO -- : SRWT Node: > ufm-host42-012.rdmz.labs.mlnx Request: check_auth\n",* > > *??? "E, [2017-11-07T19:52:27.459362 #25065] ERROR -- : Unable to > connect to node ufm-host42-012.rdmz.labs.mlnx, no token available\n",* > > *??? "I, [2017-11-07T19:52:27.459507 #25065]? INFO -- : SRWT Node: > ufm-host42-014.rdmz.labs.mlnx Request: check_auth\n",* > > *??? "E, [2017-11-07T19:52:27.459552 #25065] ERROR -- : Unable to > connect to node ufm-host42-014.rdmz.labs.mlnx, no token available\n",* > > *??? "I, [2017-11-07T19:52:27.459674 #25065]? INFO -- : SRWT Node: > ufm-host42-013.rdmz.labs.mlnx Request: check_auth\n",* > > *??? "E, [2017-11-07T19:52:27.459739 #25065] ERROR -- : Unable to > connect to node ufm-host42-013.rdmz.labs.mlnx, no token available\n",* > > *??? "I, [2017-11-07T19:52:27.622930 #25065]? INFO -- : No response > from: ufm-host42-014.rdmz.labs.mlnx request: auth, error: > couldnt_connect\n",* > > *??? "I, [2017-11-07T19:52:27.627129 #25065]? INFO -- : No response > from: ufm-host42-012.rdmz.labs.mlnx request: auth, error: > couldnt_connect\n",* > > *??? "I, [2017-11-07T19:52:27.627491 #25065]? INFO -- : No response > from: ufm-host42-013.rdmz.labs.mlnx request: auth, error: > couldnt_connect\n"* > > *? ]* > > *}* > > *?* > > *--Debug Stdout End--* > > *--Debug Stderr Start--* > > *?* > > *--Debug Stderr End--* > > *?* > > *Error: Unable to communicate with ufm-host42-014.rdmz.labs.mlnx* > > *Error: Unable to communicate with ufm-host42-013.rdmz.labs.mlnx* > > *Error: Unable to communicate with ufm-host42-012.rdmz.labs.mlnx* > > *[root@ufm-host42-014 tmp]#* > > ? > > ? > > Some of you encounter this issue and know how to solve it? > > ? > > Any help will be appreciated. > > ? > > Thanks, > > Aviran > 7.0 and 7.1 are quite out of date and have many known bugs and security issues. I think this issue would make a splendid reason to push management or whoever is deciding to stay on vulnerable systems to upgrade now. If you must stick with these old versions; Can you telnet to the nodes on the corosync ports? -- Digimer Papers and Projects: https://alteeve.com/w/ "I am, somehow, less interested in the weight and convolutions of Einstein?s brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops." 
------------------------------

Message: 2
Date: Mon, 13 Nov 2017 08:20:52 +0100
From: "Ulrich Windl" <[email protected]>
To: <[email protected]>
Subject: [ClusterLabs] Antw: Re: issues with pacemaker daemonization
Message-ID: <[email protected]>
Content-Type: text/plain; charset=US-ASCII

>>> Ken Gaillot <[email protected]> wrote on 09.11.2017 at 16:49 in
>>> message <[email protected]>:
> On Thu, 2017-11-09 at 15:59 +0530, ashutosh tiwari wrote:
>> Hi,
>>
>> We are observing that sometimes the pacemaker daemon gets the same
>> process group ID as the process/script calling "service pacemaker
>> start", while the child processes of pacemaker (cib/crmd/pengine)
>> have their process group ID equal to their own PID, which is how
>> things should be for a daemon AFAIK.
>>
>> Is this expected to be managed by init.d (CentOS 6) or by the
>> pacemaker binary?
>>
>> pacemaker version: pacemaker-1.1.14-8.el6_8.1.x86_64
>>
>> Thanks and Regards,
>> Ashutosh Tiwari
>
> When pacemakerd spawns a child (cib etc.), it calls setsid() in the
> child to start a new session, which will set the process group ID and
> session ID to the child's PID.
>
> However it doesn't do anything similar for itself. Possibly it should.
> It's a longstanding to-do item to make pacemaker daemonize itself more
> "properly", but no one's had the time to address it.

Shouldn't be that hard when fork()ing anyway...

> --
> Ken Gaillot <[email protected]>
>
> _______________________________________________
> Users mailing list: [email protected]
> http://lists.clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
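For anyone who wants to observe the behaviour Ken describes on a running cluster, comparing the PID, PGID and SID of pacemakerd and its children is a quick check. This is only a sketch; the daemon names below are the Pacemaker 1.1 ones:

  # children spawned via setsid() should show PGID == SID == their own PID,
  # while pacemakerd itself may still carry the process group of whatever
  # started it (e.g. the init script)
  ps -C pacemakerd,cib,stonithd,lrmd,attrd,pengine,crmd -o pid,ppid,pgid,sid,comm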
------------------------------

Message: 3
Date: Mon, 13 Nov 2017 09:00:12 +0100
From: Jan Friesse <[email protected]>
To: Cluster Labs - All topics related to open-source clustering welcomed
	<[email protected]>
Subject: Re: [ClusterLabs] pcs authentication fails on Centos 7.0 & 7.1
Message-ID: <[email protected]>
Content-Type: text/plain; charset=windows-1252; format=flowed

Digimer wrote:
> On 2017-11-12 04:20 AM, Aviran Jerbby wrote:
>>
>> Hi Clusterlabs mailing list,
>>
>> I'm having issues running pcs authentication on RHEL/CentOS 7.0/7.1
>> (please see the log below).
>>
>> [full command output and debug log snipped -- identical to the log
>> quoted in Message 1 above]
>
> 7.0 and 7.1 are quite out of date and have many known bugs and security
> issues. I think this issue would make a splendid reason to push
> management, or whoever is deciding to stay on vulnerable systems, to
> upgrade now.
>
> If you must stick with these old versions: can you telnet to the nodes
> on the corosync ports?

I don't think this problem has anything to do with corosync (I mean, not
yet ;) ), so I would rather suggest the pcsd port (I believe by default
it is 2224) and all of the ports defined in
https://github.com/firewalld/firewalld/blob/master/config/services/high-availability.xml
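If a host firewall is in the way on CentOS 7, a sketch of checking and opening those ports with firewalld follows (this assumes firewalld is the active firewall; the predefined high-availability service corresponds to the XML file linked above):

  # is pcsd listening locally on 2224?
  ss -tlnp | grep 2224

  # what does firewalld currently allow?
  firewall-cmd --list-all

  # allow pcsd/corosync/pacemaker traffic via the predefined service, then reload
  firewall-cmd --permanent --add-service=high-availability
  firewall-cmd --reload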
------------------------------

_______________________________________________
Users mailing list
[email protected]
http://lists.clusterlabs.org/mailman/listinfo/users


End of Users Digest, Vol 34, Issue 24
*************************************

_______________________________________________
Users mailing list: [email protected]
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org