- **Milestone**: future --> 4.2.5
---
**[tickets:#273] DTM(MDS/TCP): Standby controller fails to come up when both
the controllers are coming up at the same time**
**Status:** fixed
**Milestone:** 4.2.5
**Created:** Fri May 17, 2013 12:24 PM UTC by Sirisha Alla
**Last Updated:** Mon Jan 19, 2015 11:20 AM UTC
**Owner:** A V Mahesh (AVM)
The issue is seen on an OEL6.4 TCP setup. This is a physical-node cluster
running changeset 4241 with patches 2794 and 3117. PBE is enabled and 30k
objects are present in imm.db.
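
For reference, the reported object count can be cross-checked directly against
the PBE file, since the PBE is a SQLite database (the loader log below mentions
sqlite journal recovery). A minimal sketch, assuming the PBE schema keeps one
row per IMM object in a table named `objects` (the table name is an
assumption):

```python
#!/usr/bin/env python3
# Minimal sketch: count the IMM objects persisted in the PBE file.
# Assumption: the PBE SQLite schema keeps one row per object in a table
# named "objects"; adjust the table name to the actual schema if it differs.
import sqlite3

PBE_FILE = "/home/immpbe/imm.db"  # path taken from the loader log in this ticket

con = sqlite3.connect(PBE_FILE)
try:
    (count,) = con.execute("SELECT COUNT(*) FROM objects").fetchone()
    print(f"{PBE_FILE}: {count} objects")
finally:
    con.close()
```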
Steps to reproduce:
1) Start active controller
2) When the IMMND has just started loading, start the standby controller (a
timing sketch follows below)
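
The timing in step 2 can be made repeatable by watching the active controller's
syslog for the osafimmloadd start message and only then starting OpenSAF on the
standby. A rough sketch; the syslog path, ssh access to SC-2, and the
`opensafd` service name are assumptions about the local setup, not taken from
this ticket:

```python
#!/usr/bin/env python3
# Rough sketch: start OpenSAF on the standby (SC-2) as soon as the IMMND
# coordinator on the active controller (SC-1) begins loading from the PBE.
# Assumptions: syslog path, ssh access to SC-2, and the "opensafd" service
# name describe a typical setup, not this specific cluster.
import subprocess
import time

SYSLOG = "/var/log/messages"
MARKER = "Loading from PBE file"          # printed by osafimmloadd (see syslog below)
STANDBY_START = ["ssh", "SC-2", "service", "opensafd", "start"]

def wait_for_marker(path, marker, poll=0.2):
    """Tail the syslog until the loader start message shows up."""
    with open(path) as f:
        f.seek(0, 2)                       # jump to end of file, then follow
        while True:
            line = f.readline()
            if not line:
                time.sleep(poll)
                continue
            if marker in line:
                return line

print("Waiting for IMMND loading to start on SC-1 ...")
print(wait_for_marker(SYSLOG, MARKER).strip())
subprocess.run(STANDBY_START, check=True)  # step 2: bring up the standby now
```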
The following is the syslog from both controllers.
Time taken for the IMMND coordinator to load the database (no sync requested by
the other IMMND while the active controller is coming up):
May 16 12:23:32 SC-1 osafimmloadd: NO ***** Loading from PBE file imm.db at
/home/immpbe/ *****
May 16 12:23:35 SC-1 osafimmnd[32651]: WA Number of objects in IMM is:5000
May 16 12:23:35 SC-1 osafimmnd[32651]: WA Number of objects in IMM is:6000
May 16 12:23:36 SC-1 osafimmnd[32651]: WA Number of objects in IMM is:7000
May 16 12:23:36 SC-1 osafimmnd[32651]: WA Number of objects in IMM is:8000
May 16 12:23:37 SC-1 osafimmnd[32651]: WA Number of objects in IMM is:9000
May 16 12:23:37 SC-1 osafimmnd[32651]: WA Number of objects in IMM is:10000
May 16 12:23:38 SC-1 osafimmnd[32651]: WA Number of objects in IMM is:11000
May 16 12:23:38 SC-1 osafimmnd[32651]: WA Number of objects in IMM is:12000
May 16 12:23:39 SC-1 osafimmnd[32651]: WA Number of objects in IMM is:13000
May 16 12:23:39 SC-1 osafimmnd[32651]: WA Number of objects in IMM is:14000
May 16 12:23:40 SC-1 osafimmnd[32651]: WA Number of objects in IMM is:15000
May 16 12:23:40 SC-1 osafimmnd[32651]: WA Number of objects in IMM is:16000
May 16 12:23:41 SC-1 osafimmnd[32651]: WA Number of objects in IMM is:17000
May 16 12:23:41 SC-1 osafimmnd[32651]: WA Number of objects in IMM is:18000
May 16 12:23:42 SC-1 osafimmnd[32651]: WA Number of objects in IMM is:19000
May 16 12:23:42 SC-1 osafimmnd[32651]: WA Number of objects in IMM is:20000
May 16 12:23:43 SC-1 osafimmnd[32651]: WA Number of objects in IMM is:21000
May 16 12:23:43 SC-1 osafimmnd[32651]: WA Number of objects in IMM is:22000
May 16 12:23:43 SC-1 osafimmnd[32651]: WA Number of objects in IMM is:23000
May 16 12:23:44 SC-1 osafimmnd[32651]: WA Number of objects in IMM is:24000
May 16 12:23:44 SC-1 osafimmnd[32651]: WA Number of objects in IMM is:25000
May 16 12:23:45 SC-1 osafimmnd[32651]: WA Number of objects in IMM is:26000
May 16 12:23:45 SC-1 osafimmnd[32651]: WA Number of objects in IMM is:27000
May 16 12:23:46 SC-1 osafimmnd[32651]: WA Number of objects in IMM is:28000
May 16 12:23:46 SC-1 osafimmnd[32651]: WA Number of objects in IMM is:29000
May 16 12:23:47 SC-1 osafimmnd[32651]: WA Number of objects in IMM is:30000
May 16 12:23:47 SC-1 osafimmnd[32651]: NO Ccb 1 COMMITTED (IMMLOADER)
May 16 12:23:47 SC-1 osafimmnd[32651]: NO Closing admin owner IMMLOADER,
loading of IMM done
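
For comparison, the load duration in each excerpt can be read off as the
difference between the osafimmloadd start timestamp and the "loading of IMM
done" timestamp: roughly 15 seconds above (12:23:32 to 12:23:47), versus
several minutes per attempt in the excerpt that follows. A small sketch that
extracts this delta from a saved syslog file (the file name is illustrative;
the excerpts in this ticket are line-wrapped, so run it against the raw
syslog):

```python
#!/usr/bin/env python3
# Small sketch: compute the IMM load duration from a saved syslog file.
# The file name is illustrative; syslog timestamps carry no year, which is
# fine here since only the difference between two stamps is needed.
from datetime import datetime

LOG_FILE = "sc1-syslog.txt"
START_MARK = "Loading from PBE file"
DONE_MARK = "loading of IMM done"

def stamp(line):
    # Syslog timestamp prefix, e.g. "May 16 12:23:32".
    return datetime.strptime(line[:15], "%b %d %H:%M:%S")

start = done = None
with open(LOG_FILE) as f:
    for line in f:
        if start is None and START_MARK in line:
            start = stamp(line)
        elif DONE_MARK in line:
            done = stamp(line)

if start and done:
    print(f"IMM load took {(done - start).total_seconds():.0f} s")
else:
    print("start/done markers not found")
```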
Time taken by the IMMND coordinator when the other IMMND requests a sync while
the coordinator is trying to load the database:
May 16 11:40:04 SC-1 osafimmloadd: NO ***** Loading from PBE file imm.db at
/home/immpbe/ *****
May 16 11:40:04 SC-1 osafdtmd[25591]: NO Established contact with 'SC-2'
May 16 11:40:04 SC-1 osafimmd[25644]: NO New IMMND process is on STANDBY
Controller at 2020f
May 16 11:40:05 SC-1 osafimmd[25644]: WA IMMND on controller (not currently
coord) requests sync
May 16 11:40:05 SC-1 osafimmd[25644]: NO Node 2020f request sync sync-pid:29042
epoch:0
May 16 11:41:45 SC-1 osafimmd[25644]: WA IMMND on controller (not currently
coord) requests sync
May 16 11:41:45 SC-1 osafimmd[25644]: NO Node 2020f request sync sync-pid:29042
epoch:0
May 16 11:43:25 SC-1 osafimmnd[25659]: WA Number of objects in IMM is:5000
May 16 11:43:25 SC-1 osafimmd[25644]: WA IMMND on controller (not currently
coord) requests sync
May 16 11:43:25 SC-1 osafimmd[25644]: NO Node 2020f request sync sync-pid:29042
epoch:0
May 16 11:44:04 SC-1 osafimmnd[25659]: WA Number of objects in IMM is:6000
May 16 11:44:45 SC-1 osafimmnd[25659]: WA Number of objects in IMM is:7000
May 16 11:45:05 SC-1 osafimmd[25644]: WA IMMND on controller (not currently
coord) requests sync
May 16 11:45:05 SC-1 osafimmd[25644]: NO Node 2020f request sync sync-pid:29042
epoch:0
May 16 11:45:24 SC-1 osafimmnd[25659]: WA Number of objects in IMM is:8000
May 16 11:46:05 SC-1 osafimmnd[25659]: WA Number of objects in IMM is:9000
May 16 11:46:45 SC-1 osafimmnd[25659]: WA Number of objects in IMM is:10000
May 16 11:46:45 SC-1 osafimmd[25644]: WA IMMND on controller (not currently
coord) requests sync
May 16 11:46:45 SC-1 osafimmd[25644]: NO Node 2020f request sync sync-pid:29042
epoch:0
May 16 11:47:25 SC-1 osafimmnd[25659]: WA Number of objects in IMM is:11000
May 16 11:48:01 SC-1 opensafd[25582]: ER Timed-out for response from IMMND
May 16 11:48:01 SC-1 opensafd[25582]: ER
May 16 11:48:01 SC-1 opensafd[25582]: ER Going for recovery
May 16 11:48:01 SC-1 opensafd[25582]: ER Trying To RESPAWN
/usr/lib64/opensaf/clc-cli/osaf-immnd attempt #1
May 16 11:48:01 SC-1 opensafd[25582]: ER Sending SIGKILL to IMMND, pid=25650
May 16 11:48:01 SC-1 osafimmnd[25659]: WA IMMND - Client went down so no
response
May 16 11:48:01 SC-1 osafimmd[25644]: WA IMMND coordinator at 2010f apparently
crashed => electing new coord
May 16 11:48:01 SC-1 osafimmd[25644]: ER Failed to find candidate for new IMMND
coordinator
May 16 11:48:01 SC-1 osafimmd[25644]: ER Active IMMD has to restart the IMMSv.
All IMMNDs will restart
May 16 11:48:01 SC-1 osafimmd[25644]: NO Cluster failed to load => IMMDs will
not exit.
May 16 11:48:01 SC-1 osafimmd[25644]: WA Error returned from processing message
err:0 msg-type:17
May 16 11:48:16 SC-1 osafimmnd[26844]: Started
May 16 11:48:16 SC-1 osafimmnd[26844]: NO Persistent Back-End capability
configured, Pbe file:imm.db
May 16 11:48:16 SC-1 osafimmd[25644]: NO New IMMND process is on ACTIVE
Controller at 2010f
May 16 11:48:16 SC-1 osafimmd[25644]: NO First IMMND on controller found at
2010f this IMMD at 2010f.#012#012Cluster must be loading => designating this
IMMND as coordinator
May 16 11:48:16 SC-1 osafimmnd[26844]: NO SERVER STATE: IMM_SERVER_ANONYMOUS
--> IMM_SERVER_CLUSTER_WAITING
May 16 11:48:16 SC-1 osafimmnd[26844]: NO This IMMND is now the NEW Coord
May 16 11:48:17 SC-1 osafimmd[25644]: NO New IMMND process is on STANDBY
Controller at 2020f
May 16 11:48:19 SC-1 osafimmnd[26844]: NO SERVER STATE:
IMM_SERVER_CLUSTER_WAITING --> IMM_SERVER_LOADING_PENDING
May 16 11:48:19 SC-1 osafimmnd[26844]: NO SERVER STATE:
IMM_SERVER_LOADING_PENDING --> IMM_SERVER_LOADING_SERVER
May 16 11:48:19 SC-1 osafimmnd[26844]: NO NODE STATE-> IMM_NODE_LOADING
May 16 11:48:19 SC-1 osafimmd[25644]: NO Successfully announced loading. New
ruling epoch:1
May 16 11:48:19 SC-1 osafimmloadd: NO Load starting
May 16 11:48:19 SC-1 osafimmloadd: NO IMMSV_PBE_FILE is defined (imm.db) check
it for existence and SaImmRepositoryInitModeT
May 16 11:48:19 SC-1 osafimmloadd: WA Journal file /home/immpbe//imm.db-journal
of non zero size exists at open for loading => sqlite recovery
May 16 11:48:19 SC-1 osafimmloadd: IN saImmRepositoryInit:
SA_IMM_KEEP_REPOSITORY - loading from repository
May 16 11:48:19 SC-1 osafimmloadd: IN PBE repository of rep-version <1, 2>
May 16 11:48:19 SC-1 osafimmloadd: IN Prepare SQL statements
May 16 11:48:19 SC-1 osafimmloadd: NO ***** Loading from PBE file imm.db at
/home/immpbe/ *****
May 16 11:51:42 SC-1 osafimmnd[26844]: WA Number of objects in IMM is:5000
May 16 11:52:22 SC-1 osafimmnd[26844]: WA Number of objects in IMM is:6000
May 16 11:53:02 SC-1 osafimmnd[26844]: WA Number of objects in IMM is:7000
May 16 11:53:20 SC-1 osafimmnd[26844]: NO Global discard node received for
nodeId:2020f pid:30212
May 16 11:53:21 SC-1 osafimmnd[26844]: WA Number of objects in IMM is:8000
May 16 11:53:22 SC-1 osafimmnd[26844]: WA Number of objects in IMM is:9000
May 16 11:53:23 SC-1 osafimmnd[26844]: WA Number of objects in IMM is:10000
May 16 11:53:24 SC-1 osafimmnd[26844]: WA Number of objects in IMM is:11000
May 16 11:53:25 SC-1 osafimmnd[26844]: WA Number of objects in IMM is:12000
May 16 11:53:26 SC-1 osafimmnd[26844]: WA Number of objects in IMM is:13000
May 16 11:53:27 SC-1 osafimmnd[26844]: WA Number of objects in IMM is:14000
May 16 11:53:28 SC-1 osafimmnd[26844]: WA Number of objects in IMM is:15000
May 16 11:53:29 SC-1 osafimmnd[26844]: WA Number of objects in IMM is:16000
May 16 11:53:30 SC-1 osafimmnd[26844]: WA Number of objects in IMM is:17000
May 16 11:53:31 SC-1 osafimmnd[26844]: WA Number of objects in IMM is:18000
May 16 11:53:32 SC-1 osafimmnd[26844]: WA Number of objects in IMM is:19000
May 16 11:53:33 SC-1 osafimmnd[26844]: WA Number of objects in IMM is:20000
May 16 11:53:34 SC-1 osafimmnd[26844]: WA Number of objects in IMM is:21000
May 16 11:53:35 SC-1 osafimmnd[26844]: WA Number of objects in IMM is:22000
May 16 11:53:35 SC-1 osafimmd[25644]: NO New IMMND process is on STANDBY
Controller at 2020f
May 16 11:53:36 SC-1 osafimmd[25644]: WA IMMND on controller (not currently
coord) requests sync
May 16 11:53:36 SC-1 osafimmd[25644]: NO Node 2020f request sync sync-pid:30979
epoch:0
May 16 11:54:08 SC-1 osafimmnd[26844]: WA Number of objects in IMM is:23000
May 16 11:54:48 SC-1 osafimmnd[26844]: WA Number of objects in IMM is:24000
May 16 11:55:16 SC-1 osafimmd[25644]: WA IMMND on controller (not currently
coord) requests sync
May 16 11:55:16 SC-1 osafimmd[25644]: NO Node 2020f request sync sync-pid:30979
epoch:0
May 16 11:55:28 SC-1 osafimmnd[26844]: WA Number of objects in IMM is:25000
May 16 11:56:08 SC-1 osafimmnd[26844]: WA Number of objects in IMM is:26000
May 16 11:56:16 SC-1 opensafd[25582]: ER Timed-out for response from IMMND
May 16 11:56:16 SC-1 opensafd[25582]: ER Could Not RESPAWN IMMND
May 16 11:56:16 SC-1 opensafd[25582]: ER
May 16 11:56:16 SC-1 opensafd[25582]: ER Trying To RESPAWN
/usr/lib64/opensaf/clc-cli/osaf-immnd attempt #2
May 16 11:56:16 SC-1 opensafd[25582]: ER Sending SIGKILL to IMMND, pid=26837
May 16 11:56:17 SC-1 osafimmnd[26844]: WA IMMND - Client went down so no
response
May 16 11:56:17 SC-1 osafimmd[25644]: WA IMMND coordinator at 2010f apparently
crashed => electing new coord
May 16 11:56:17 SC-1 osafimmd[25644]: ER Failed to find candidate for new IMMND
coordinator
May 16 11:56:17 SC-1 osafimmd[25644]: ER Active IMMD has to restart the IMMSv.
All IMMNDs will restart
May 16 11:56:17 SC-1 osafimmd[25644]: NO Cluster failed to load => IMMDs will
not exit.
May 16 11:56:17 SC-1 osafimmd[25644]: WA Error returned from processing message
err:0 msg-type:17
May 16 11:56:17 SC-1 osafdtmd[25591]: NO Lost contact with 'SC-2'
May 16 11:56:17 SC-1 osafimmd[25644]: WA IMMD lost contact with peer IMMD
(NCSMDS_RED_DOWN)
May 16 11:56:17 SC-1 osaffmd[25630]: NO Role: ACTIVE, Node Down for node id:
2020f
May 16 11:56:17 SC-1 osaffmd[25630]: Rebooting OpenSAF NodeId = 131599 EE Name
= , Reason: Received Node Down for standby peer
May 16 11:56:17 SC-1 opensaf_reboot: Rebooting remote node in the absence of
PLM is outside the scope of OpenSAF
May 16 11:56:32 SC-1 osafimmnd[28034]: Started
May 16 11:56:32 SC-1 osafimmnd[28034]: NO Persistent Back-End capability
configured, Pbe file:imm.db
May 16 11:56:32 SC-1 osafimmd[25644]: NO New IMMND process is on ACTIVE
Controller at 2010f
May 16 11:56:32 SC-1 osafimmd[25644]: NO First IMMND on controller found at
2010f this IMMD at 2010f.#012#012Cluster must be loading => designating this
IMMND as coordinator
May 16 11:56:32 SC-1 osafimmnd[28034]: NO SERVER STATE: IMM_SERVER_ANONYMOUS
--> IMM_SERVER_CLUSTER_WAITING
May 16 11:56:32 SC-1 osafimmnd[28034]: NO This IMMND is now the NEW Coord
May 16 11:56:35 SC-1 osafimmnd[28034]: NO SERVER STATE:
IMM_SERVER_CLUSTER_WAITING --> IMM_SERVER_LOADING_PENDING
May 16 11:56:35 SC-1 osafimmd[25644]: NO Successfully announced loading. New
ruling epoch:1
May 16 11:56:35 SC-1 osafimmnd[28034]: NO SERVER STATE:
IMM_SERVER_LOADING_PENDING --> IMM_SERVER_LOADING_SERVER
May 16 11:56:35 SC-1 osafimmnd[28034]: NO NODE STATE-> IMM_NODE_LOADING
May 16 11:56:35 SC-1 osafimmloadd: NO Load starting
May 16 11:56:35 SC-1 osafimmloadd: NO IMMSV_PBE_FILE is defined (imm.db) check
it for existence and SaImmRepositoryInitModeT
May 16 11:56:35 SC-1 osafimmloadd: WA Journal file /home/immpbe//imm.db-journal
of non zero size exists at open for loading => sqlite recovery
May 16 11:56:35 SC-1 osafimmloadd: IN saImmRepositoryInit:
SA_IMM_KEEP_REPOSITORY - loading from repository
May 16 11:56:35 SC-1 osafimmloadd: IN PBE repository of rep-version <1, 2>
May 16 11:56:35 SC-1 osafimmloadd: IN Prepare SQL statements
May 16 11:56:35 SC-1 osafimmloadd: NO ***** Loading from PBE file imm.db at
/home/immpbe/ *****
May 16 11:56:38 SC-1 osafimmnd[28034]: WA Number of objects in IMM is:5000
May 16 11:56:38 SC-1 osafimmnd[28034]: WA Number of objects in IMM is:6000
May 16 11:56:39 SC-1 osafimmnd[28034]: WA Number of objects in IMM is:7000
May 16 11:56:39 SC-1 osafimmnd[28034]: WA Number of objects in IMM is:8000
May 16 11:56:40 SC-1 osafimmnd[28034]: WA Number of objects in IMM is:9000
May 16 11:56:40 SC-1 osafimmnd[28034]: WA Number of objects in IMM is:10000
May 16 11:56:41 SC-1 osafimmnd[28034]: WA Number of objects in IMM is:11000
May 16 11:56:41 SC-1 osafimmnd[28034]: WA Number of objects in IMM is:12000
May 16 11:56:42 SC-1 osafimmnd[28034]: WA Number of objects in IMM is:13000
May 16 11:56:42 SC-1 osafimmnd[28034]: WA Number of objects in IMM is:14000
May 16 11:56:43 SC-1 osafimmnd[28034]: WA Number of objects in IMM is:15000
May 16 11:56:43 SC-1 osafimmnd[28034]: WA Number of objects in IMM is:16000
May 16 11:56:44 SC-1 osafimmnd[28034]: WA Number of objects in IMM is:17000
May 16 11:56:44 SC-1 osafimmnd[28034]: WA Number of objects in IMM is:18000
May 16 11:56:45 SC-1 osafimmnd[28034]: WA Number of objects in IMM is:19000
May 16 11:56:45 SC-1 osafimmnd[28034]: WA Number of objects in IMM is:20000
May 16 11:56:46 SC-1 osafimmnd[28034]: WA Number of objects in IMM is:21000
May 16 11:56:46 SC-1 osafimmnd[28034]: WA Number of objects in IMM is:22000
May 16 11:56:47 SC-1 osafimmnd[28034]: WA Number of objects in IMM is:23000
May 16 11:56:47 SC-1 osafimmnd[28034]: WA Number of objects in IMM is:24000
May 16 11:56:48 SC-1 osafimmnd[28034]: WA Number of objects in IMM is:25000
May 16 11:56:48 SC-1 osafimmnd[28034]: WA Number of objects in IMM is:26000
May 16 11:56:49 SC-1 osafimmnd[28034]: WA Number of objects in IMM is:27000
May 16 11:56:49 SC-1 osafimmnd[28034]: WA Number of objects in IMM is:28000
May 16 11:56:50 SC-1 osafimmnd[28034]: WA Number of objects in IMM is:29000
May 16 11:56:50 SC-1 osafimmnd[28034]: WA Number of objects in IMM is:30000
May 16 11:56:50 SC-1 osafimmnd[28034]: NO Ccb 3 COMMITTED (IMMLOADER)
May 16 11:56:50 SC-1 osafimmnd[28034]: NO Closing admin owner IMMLOADER,
loading of IMM done
May 16 11:56:50 SC-1 osafimmnd[28034]: NO NODE STATE-> IMM_NODE_FULLY_AVAILABLE
The IMMND coordinator completed loading only after it lost contact with SC-2
(the IMMND on SC-2 repeatedly timed out and never came up). The syslog from
SC-2 follows:
May 16 11:39:23 SC-2 osafrded[28999]: NO rde@2010f has active state => Standby
role
May 16 11:39:23 SC-2 osaffmd[29013]: Started
May 16 11:39:23 SC-2 osafimmd[29027]: Started
May 16 11:39:23 SC-2 osafimmnd[29042]: Started
May 16 11:39:23 SC-2 osafimmnd[29042]: NO Persistent Back-End capability
configured, Pbe file:imm.db
May 16 11:39:23 SC-2 osafimmnd[29042]: NO SERVER STATE: IMM_SERVER_ANONYMOUS
--> IMM_SERVER_CLUSTER_WAITING
May 16 11:39:23 SC-2 osafimmnd[29042]: NO SERVER STATE:
IMM_SERVER_CLUSTER_WAITING --> IMM_SERVER_LOADING_PENDING
May 16 11:39:23 SC-2 osafimmnd[29042]: NO SERVER STATE:
IMM_SERVER_LOADING_PENDING --> IMM_SERVER_SYNC_PENDING
May 16 11:39:23 SC-2 osafimmnd[29042]: NO NODE STATE-> IMM_NODE_ISOLATED
May 16 11:47:20 SC-2 osafimmd[29027]: WA IMMND DOWN on active controller f1
detected at standby immd!! f2. Possible failover
May 16 11:47:20 SC-2 osafimmd[29027]: ER Standby IMMD recieved reset message.
All IMMNDs will restart.
May 16 11:47:20 SC-2 osafimmd[29027]: NO Cluster failed to load => IMMDs will
not exit.
May 16 11:47:20 SC-2 osafimmnd[29042]: ER IMMND forced to restart on order from
IMMD, exiting
May 16 11:47:20 SC-2 opensafd[28969]: ER Failed #012 DESC:IMMND
May 16 11:47:20 SC-2 opensafd[28969]: ER Going for recovery
May 16 11:47:20 SC-2 opensafd[28969]: ER Trying To RESPAWN
/usr/lib64/opensaf/clc-cli/osaf-immnd attempt #1
May 16 11:47:20 SC-2 opensafd[28969]: ER Sending SIGKILL to IMMND, pid=29033
May 16 11:47:35 SC-2 osafimmd[29027]: NO IMMND coord at 2010f
May 16 11:47:35 SC-2 osafimmnd[30212]: Started
May 16 11:47:35 SC-2 osafimmnd[30212]: NO Persistent Back-End capability
configured, Pbe file:imm.db
May 16 11:47:35 SC-2 osafimmnd[30212]: NO SERVER STATE: IMM_SERVER_ANONYMOUS
--> IMM_SERVER_CLUSTER_WAITING
May 16 11:47:35 SC-2 osafimmnd[30212]: NO SERVER STATE:
IMM_SERVER_CLUSTER_WAITING --> IMM_SERVER_LOADING_PENDING
May 16 11:47:35 SC-2 osafimmnd[30212]: NO SERVER STATE:
IMM_SERVER_LOADING_PENDING --> IMM_SERVER_LOADING_CLIENT
May 16 11:47:37 SC-2 osafimmnd[30212]: NO NODE STATE-> IMM_NODE_LOADING
May 16 11:51:00 SC-2 osafimmnd[30212]: WA Number of objects in IMM is:5000
May 16 11:51:40 SC-2 osafimmnd[30212]: WA Number of objects in IMM is:6000
May 16 11:52:20 SC-2 osafimmnd[30212]: WA Number of objects in IMM is:7000
May 16 11:52:39 SC-2 osafimmnd[30212]: WA Loading client timed out, waiting to
be loaded - terminating
May 16 11:52:39 SC-2 opensafd[28969]: ER Could Not RESPAWN IMMND
May 16 11:52:39 SC-2 opensafd[28969]: ER Failed #012 DESC:IMMND
May 16 11:52:39 SC-2 opensafd[28969]: ER Trying To RESPAWN
/usr/lib64/opensaf/clc-cli/osaf-immnd attempt #2
May 16 11:52:39 SC-2 opensafd[28969]: ER Sending SIGKILL to IMMND, pid=30205
May 16 11:52:39 SC-2 osafimmnd[30212]: ER IMMND - Periodic server job failed
May 16 11:52:39 SC-2 osafimmnd[30212]: ER Failed, exiting...
May 16 11:52:54 SC-2 osafimmnd[30979]: Started
May 16 11:52:54 SC-2 osafimmnd[30979]: NO Persistent Back-End capability
configured, Pbe file:imm.db
May 16 11:52:54 SC-2 osafimmnd[30979]: NO SERVER STATE: IMM_SERVER_ANONYMOUS
--> IMM_SERVER_CLUSTER_WAITING
May 16 11:52:54 SC-2 osafimmd[29027]: NO Ruling epoch noted as:1 on IMMD standby
May 16 11:52:54 SC-2 osafimmnd[30979]: NO SERVER STATE:
IMM_SERVER_CLUSTER_WAITING --> IMM_SERVER_LOADING_PENDING
May 16 11:52:54 SC-2 osafimmnd[30979]: NO SERVER STATE:
IMM_SERVER_LOADING_PENDING --> IMM_SERVER_SYNC_PENDING
May 16 11:52:54 SC-2 osafimmnd[30979]: NO NODE STATE-> IMM_NODE_ISOLATED
May 16 11:55:35 SC-2 osafimmd[29027]: WA IMMND DOWN on active controller f1
detected at standby immd!! f2. Possible failover
May 16 11:55:35 SC-2 osafimmd[29027]: ER Standby IMMD recieved reset message.
All IMMNDs will restart.
May 16 11:55:35 SC-2 osafimmd[29027]: NO Cluster failed to load => IMMDs will
not exit.
May 16 11:55:35 SC-2 osafimmnd[30979]: ER IMMND forced to restart on order from
IMMD, exiting
May 16 11:55:35 SC-2 opensafd[28969]: ER Could Not RESPAWN IMMND
May 16 11:55:35 SC-2 opensafd[28969]: ER Failed #012 DESC:IMMND
May 16 11:55:35 SC-2 opensafd[28969]: ER FAILED TO RESPAWN
May 16 11:55:35 SC-2 osafrded[28999]: ER MDTM:socket_recv() = 0, conn lost with
dh server, exiting library err :Success
May 16 11:55:35 SC-2 osaffmd[29013]: ER MDTM:socket_recv() = 0, conn lost with
dh server, exiting library err :Success
May 16 11:55:35 SC-2 osafimmd[29027]: ER MDTM:socket_recv() = 0, conn lost with
dh server, exiting library err :Success
May 16 11:55:36 SC-2 opensafd: Starting OpenSAF failed
30k objects is a small database, yet the cluster restart fails. This might be a
significant performance issue in TCP(DTM)/IMM.
Traces are available and can be provided on request.