Hi Team,

Please find the corosync version below:

[root@node2 ~]# rpm -qa corosync
corosync-2.4.4-2.el7.x86_64
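
(If useful, the same check can be run across all nodes with something like the
following; node1..node9 are the hostnames from corosync.conf and passwordless
ssh is assumed, so this is only a sketch, not output from the cluster:

    for h in node1 node2 node3 node4 node5 node6 node7 node8 node9; do
        ssh "$h" rpm -q corosync
    done
)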

The firewall is disabled on all nodes.
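
(For reference, this can be re-verified on each node with the standard el7
commands below; they are listed only as a suggestion, not as output captured
from the node:

    systemctl is-active firewalld
    iptables -S
)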

Please find the debug and trace logs below:
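
(The debug/trace output below was produced with the corosync log level raised;
a minimal sketch of the relevant logging option, assuming standard corosync 2.x
syntax and not copied verbatim from the node:

logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    debug: trace
}
)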

Mar 31 10:07:30 [17684] node2 corosync notice  [MAIN  ] Corosync Cluster Engine 
('UNKNOWN'): started and ready to provide service.
Mar 31 10:07:30 [17684] node2 corosync info    [MAIN  ] Corosync built-in 
features: pie relro bindnow
Mar 31 10:07:30 [17684] node2 corosync warning [MAIN  ] Could not set SCHED_RR 
at priority 99: Operation not permitted (1)
Mar 31 10:07:30 [17684] node2 corosync debug   [QB    ] shm size:8388621; 
real_size:8392704; rb->word_size:2098176
Mar 31 10:07:30 [17684] node2 corosync debug   [MAIN  ] Corosync TTY detached
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] waiting_trans_ack 
changed to 1
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] Token Timeout (5550 ms) 
retransmit timeout (1321 ms)
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] token hold (1046 ms) 
retransmits before loss (4 retrans)
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] join (50 ms) send_join 
(0 ms) consensus (6660 ms) merge (200 ms)
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] downcheck (1000 ms) 
fail to recv const (2500 msgs)
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] seqno unchanged const 
(30 rotations) Maximum network MTU 1401
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] window size per 
rotation (50 messages) maximum messages per rotation (17 messages)
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] missed count const (5 
messages)
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] send threads (0 threads)
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] RRP token expired 
timeout (1321 ms)
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] RRP token problem 
counter (2000 ms)
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] RRP threshold (10 
problem count)
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] RRP multicast threshold 
(100 problem count)
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] RRP automatic recovery 
check timeout (1000 ms)
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] RRP mode set to none.
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] 
heartbeat_failures_allowed (0)
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] max_network_delay (50 
ms)
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] HeartBeat is Disabled. 
To enable set heartbeat_failures_allowed > 0
Mar 31 10:07:30 [17684] node2 corosync notice  [TOTEM ] Initializing transport 
(UDP/IP Unicast).
Mar 31 10:07:30 [17684] node2 corosync notice  [TOTEM ] Initializing 
transmit/receive security (NSS) crypto: none hash: none
Mar 31 10:07:30 [17684] node2 corosync trace   [QB    ] grown poll array to 2 
for FD 8
Mar 31 10:07:30 [17684] node2 corosync notice  [TOTEM ] The network interface 
[10.33.59.175] is now up.
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] Created or loaded 
sequence id 540.10.33.59.175 for this ring.
Mar 31 10:07:30 [17684] node2 corosync notice  [SERV  ] Service engine loaded: 
corosync configuration map access [0]
Mar 31 10:07:30 [17684] node2 corosync debug   [MAIN  ] Initializing IPC on 
cmap [0]
Mar 31 10:07:30 [17684] node2 corosync debug   [MAIN  ] No configured 
qb.ipc_type. Using native ipc
Mar 31 10:07:30 [17684] node2 corosync info    [QB    ] server name: cmap
Mar 31 10:07:30 [17684] node2 corosync trace   [QB    ] grown poll array to 3 
for FD 9
Mar 31 10:07:30 [17684] node2 corosync notice  [SERV  ] Service engine loaded: 
corosync configuration service [1]
Mar 31 10:07:30 [17684] node2 corosync debug   [MAIN  ] Initializing IPC on cfg 
[1]
Mar 31 10:07:30 [17684] node2 corosync debug   [MAIN  ] No configured 
qb.ipc_type. Using native ipc
Mar 31 10:07:30 [17684] node2 corosync info    [QB    ] server name: cfg
Mar 31 10:07:30 [17684] node2 corosync trace   [QB    ] grown poll array to 4 
for FD 10
Mar 31 10:07:30 [17684] node2 corosync notice  [SERV  ] Service engine loaded: 
corosync cluster closed process group service v1.01 [2]
Mar 31 10:07:30 [17684] node2 corosync debug   [MAIN  ] Initializing IPC on cpg 
[2]
Mar 31 10:07:30 [17684] node2 corosync debug   [MAIN  ] No configured 
qb.ipc_type. Using native ipc
Mar 31 10:07:30 [17684] node2 corosync info    [QB    ] server name: cpg
Mar 31 10:07:30 [17684] node2 corosync trace   [QB    ] grown poll array to 5 
for FD 11
Mar 31 10:07:30 [17684] node2 corosync notice  [SERV  ] Service engine loaded: 
corosync profile loading service [4]
Mar 31 10:07:30 [17684] node2 corosync debug   [MAIN  ] NOT Initializing IPC on 
pload [4]
Mar 31 10:07:30 [17684] node2 corosync notice  [QUORUM] Using quorum provider 
corosync_votequorum
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] ENTERING 
votequorum_init()
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] ENTERING 
votequorum_exec_init_fn()
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] ENTERING allocate_node()
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] LEAVING allocate_node()
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] ENTERING allocate_node()
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] ENTERING 
node_add_ordered()
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] LEAVING 
node_add_ordered()
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] LEAVING allocate_node()
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] ENTERING 
votequorum_readconfig()
Mar 31 10:07:30 [17684] node2 corosync debug   [VOTEQ ] Reading configuration 
(runtime: 0)
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] ENTERING 
votequorum_read_nodelist_configuration()
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] LEAVING 
votequorum_read_nodelist_configuration()
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] ENTERING 
votequorum_qdevice_is_configured()
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] LEAVING 
votequorum_qdevice_is_configured()
Mar 31 10:07:30 [17684] node2 corosync debug   [VOTEQ ] ev_tracking=0, 
ev_tracking_barrier = 0: expected_votes = 0
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] ENTERING 
update_ev_barrier()
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] LEAVING 
update_ev_barrier()
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] ENTERING 
update_two_node()
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] LEAVING 
update_two_node()
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] LEAVING 
votequorum_readconfig()
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] ENTERING 
recalculate_quorum()
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] ENTERING 
get_total_votes()
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] LEAVING 
get_total_votes()
Mar 31 10:07:30 [17684] node2 corosync debug   [VOTEQ ] total_votes=1, 
expected_votes=9
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] ENTERING 
calculate_quorum()
Mar 31 10:07:30 [17684] node2 corosync debug   [VOTEQ ] node 2 state=1, 
votes=1, expected=9
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] LEAVING 
calculate_quorum()
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] ENTERING 
are_we_quorate()
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] LEAVING are_we_quorate()
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] LEAVING 
recalculate_quorum()
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] ENTERING 
votequorum_exec_add_config_notification()
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] LEAVING 
votequorum_exec_add_config_notification()
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] ENTERING 
votequorum_exec_send_nodeinfo()
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] ENTERING 
find_node_by_nodeid()
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] LEAVING 
find_node_by_nodeid()
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] ENTERING decode_flags()
Mar 31 10:07:30 [17684] node2 corosync debug   [VOTEQ ] flags: quorate: No 
Leaving: No WFA Status: No First: Yes Qdevice: No QdeviceAlive: No 
QdeviceCastVote: No QdeviceMasterWins: No
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] LEAVING decode_flags()
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] LEAVING 
votequorum_exec_send_nodeinfo()
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] LEAVING 
votequorum_exec_init_fn()
Mar 31 10:07:30 [17684] node2 corosync notice  [SERV  ] Service engine loaded: 
corosync vote quorum service v1.0 [5]
Mar 31 10:07:30 [17684] node2 corosync debug   [MAIN  ] Initializing IPC on 
votequorum [5]
Mar 31 10:07:30 [17684] node2 corosync debug   [MAIN  ] No configured 
qb.ipc_type. Using native ipc
Mar 31 10:07:30 [17684] node2 corosync info    [QB    ] server name: votequorum
Mar 31 10:07:30 [17684] node2 corosync trace   [QB    ] grown poll array to 6 
for FD 12
Mar 31 10:07:30 [17684] node2 corosync trace   [VOTEQ ] LEAVING 
votequorum_init()
Mar 31 10:07:30 [17684] node2 corosync notice  [SERV  ] Service engine loaded: 
corosync cluster quorum service v0.1 [3]
Mar 31 10:07:30 [17684] node2 corosync debug   [MAIN  ] Initializing IPC on 
quorum [3]
Mar 31 10:07:30 [17684] node2 corosync debug   [MAIN  ] No configured 
qb.ipc_type. Using native ipc
Mar 31 10:07:30 [17684] node2 corosync info    [QB    ] server name: quorum
Mar 31 10:07:30 [17684] node2 corosync trace   [QB    ] grown poll array to 7 
for FD 13
Mar 31 10:07:30 [17684] node2 corosync notice  [TOTEM ] adding new UDPU member 
{10.33.59.174}
Mar 31 10:07:30 [17684] node2 corosync notice  [TOTEM ] adding new UDPU member 
{10.33.59.175}
Mar 31 10:07:30 [17684] node2 corosync notice  [TOTEM ] adding new UDPU member 
{10.33.59.176}
Mar 31 10:07:30 [17684] node2 corosync notice  [TOTEM ] adding new UDPU member 
{10.33.59.177}
Mar 31 10:07:30 [17684] node2 corosync notice  [TOTEM ] adding new UDPU member 
{10.33.59.178}
Mar 31 10:07:30 [17684] node2 corosync notice  [TOTEM ] adding new UDPU member 
{10.33.59.179}
Mar 31 10:07:30 [17684] node2 corosync notice  [TOTEM ] adding new UDPU member 
{10.33.59.180}
Mar 31 10:07:30 [17684] node2 corosync notice  [TOTEM ] adding new UDPU member 
{10.33.59.181}
Mar 31 10:07:30 [17684] node2 corosync notice  [TOTEM ] adding new UDPU member 
{10.33.59.182}
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] entering GATHER state 
from 15(interface change).
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] entering GATHER state 
from 11(merge during join).
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] entering GATHER state 
from 11(merge during join).
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] entering GATHER state 
from 11(merge during join).
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] entering GATHER state 
from 11(merge during join).
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] entering GATHER state 
from 11(merge during join).
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] entering GATHER state 
from 11(merge during join).
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] entering GATHER state 
from 11(merge during join).
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] entering GATHER state 
from 11(merge during join).
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] entering GATHER state 
from 11(merge during join).
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] entering GATHER state 
from 11(merge during join).
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] entering GATHER state 
from 11(merge during join).
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] entering GATHER state 
from 11(merge during join).
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] entering GATHER state 
from 11(merge during join).
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] entering GATHER state 
from 11(merge during join).
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] entering GATHER state 
from 11(merge during join).
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] entering GATHER state 
from 11(merge during join).
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] entering GATHER state 
from 11(merge during join).
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] entering GATHER state 
from 11(merge during join).
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] entering GATHER state 
from 11(merge during join).
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] entering GATHER state 
from 11(merge during join).
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] entering GATHER state 
from 11(merge during join).
Mar 31 10:07:30 [17684] node2 corosync debug   [TOTEM ] entering GATHER state 
from 11(merge during join).
Mar 31 10:08:29 [17684] node2 corosync debug   [MAIN  ] Denied connection, 
corosync is not ready
Mar 31 10:08:29 [17684] node2 corosync warning [QB    ] Denied connection, is 
not ready (17685-18240-23)
Mar 31 10:08:29 [17684] node2 corosync debug   [MAIN  ] 
cs_ipcs_connection_destroyed()




Please find the corosync.conf file below:

[root@node2 ~]# cat /etc/corosync/corosync.conf
totem {
    version: 2
    cluster_name: OCC
    secauth: off
    transport: udpu
}



nodelist {
    node {
        ring0_addr: node1
        nodeid: 1
    }



    node {
        ring0_addr: node2
        nodeid: 2
    }



    node {
        ring0_addr: node3
        nodeid: 3
    }



    node {
        ring0_addr: node4
        nodeid: 4
    }



    node {
        ring0_addr: node5
        nodeid: 5
    }



    node {
        ring0_addr: node6
        nodeid: 6
    }



    node {
        ring0_addr: node7
        nodeid: 7
    }



    node {
        ring0_addr: node8
        nodeid: 8
    }



    node {
        ring0_addr: node9
        nodeid: 9
    }
}



quorum {
    provider: corosync_votequorum
}



logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: no
    timestamp: on
}
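
(For reference, once corosync comes up cleanly, membership and vote state can
be checked with the standard tools; these are listed only as a suggestion and
were not run for this report:

    corosync-quorumtool -s
    corosync-cmapctl | grep members
)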

Thanks and Regards,
S Sathish S