- **assigned_to**: Neelakanta Reddy -->  nobody 
- **Comment**:

The present MDS_DIRECT_BUF_MAXSIZE = 65479. The two batch-size settings tested below are derived from it.
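
A minimal sketch of that arithmetic; the constant names are the ones quoted in this comment, and the real OpenSAF header definitions may be derived differently:

```c
#include <stdio.h>

/* Value quoted above; the real definition lives in the MDS headers. */
#define MDS_DIRECT_BUF_MAXSIZE 65479

int main(void)
{
	/* Case A: sync batch size capped at 90% of the MDS direct buffer size. */
	unsigned batch_90 = (MDS_DIRECT_BUF_MAXSIZE * 90U) / 100U; /* 58931 */
	/* Case B: sync batch size capped at 50% of the MDS direct buffer size. */
	unsigned batch_50 = (MDS_DIRECT_BUF_MAXSIZE * 50U) / 100U; /* 32739 */

	printf("IMMSV_DEFAULT_MAX_SYNC_BATCH_SIZE at 90%%: %u\n", batch_90);
	printf("IMMSV_DEFAULT_MAX_SYNC_BATCH_SIZE at 50%%: %u\n", batch_50);
	return 0;
}
```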

A. IMMSV_DEFAULT_MAX_SYNC_BATCH_SIZE = 90% of MDS_DIRECT_BUF_MAXSIZE = 58931:

1. immnd traces (SC-1):

Sep 15 14:24:12.205653 osafimmnd [3625:immnd_evt.c:3553] << 
immnd_fevs_local_checks
Sep 15 14:24:12.205688 osafimmnd [3625:immnd_evt.c:3002] TR Replies pending:16
Sep 15 14:24:12.205742 osafimmnd [3625:immnd_evt.c:3021] T2 SENDING FEVS TO IMMD
Sep 15 14:24:12.205761 osafimmnd [3625:immsv_evt.c:5395] T8 Sending:  
IMMD_EVT_ND2D_FEVS_REQ_2 to 0
Sep 15 14:24:12.206122 immsync [3993:imma_om_api.c:6395] << immsv_sync


2. immd traces (SC-1):

Sep 15 14:24:10.792610 osafimmd [3614:immd_mds.c:0750] >> immd_mds_bcast_send
Sep 15 14:24:10.792624 osafimmd [3614:immsv_evt.c:5400] T8 Sending:  
IMMND_EVT_D2ND_GLOB_FEVS_REQ_2 to 0

3. syslog (SC-1):

Sep 15 14:24:06 SLES-SLOT-1 osafamfd[3748]: NO Node 'PL-3' left the cluster
Sep 15 14:24:06 SLES-SLOT-1 osafclmd[3729]: NO Node 131855 went down. Not 
sending track callback for agents on that node
Sep 15 14:24:06 SLES-SLOT-1 osafclmd[3729]: NO Node 131855 went down. Not 
sending track callback for agents on that node
Sep 15 14:24:06 SLES-SLOT-1 osafimmnd[3625]: NO Global discard node received 
for nodeId:2030f pid:3192
Sep 15 14:24:06 SLES-SLOT-1 osafimmnd[3625]: NO Implementer disconnected 10 <0, 
2030f(down)> (MsgQueueService131855)
Sep 15 14:24:12 SLES-SLOT-1 kernel: [  521.629130] TIPC: Resetting link 
<1.1.1:eth1-1.1.3:eth2>, peer not responding
Sep 15 14:24:12 SLES-SLOT-1 kernel: [  521.629142] TIPC: Lost link 
<1.1.1:eth1-1.1.3:eth2> on network plane A
Sep 15 14:24:12 SLES-SLOT-1 kernel: [  521.629773] TIPC: Lost contact with 
<1.1.3>

4. Number of messages from 14:24:06 to 14:24:12:

28 messages of approximately 90% of the maximum size are sent before the IMMD is 
blocked (131 messages sent in total).
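
A rough back-of-the-envelope from the figures above and the send_count log that follows; the average size is an approximation of the logged sizes, and the six-second window is the one cited above:

```c
#include <stdio.h>

int main(void)
{
	/* 28 FEVS broadcasts logged between 14:24:06 and 14:24:12, each roughly
	 * 56,4xx bytes on the wire (the 90% batch cap itself is 58,931 bytes). */
	const double messages   = 28;
	const double avg_size   = 56450; /* approximate average of the logged sizes */
	const double window_sec = 6;     /* 14:24:06 -> 14:24:12 */

	const double total = messages * avg_size;                  /* ~1.58 MB */
	printf("total broadcast payload: %.2f MB\n", total / 1e6);
	printf("average broadcast rate:  %.0f kB/s\n", total / window_sec / 1e3);
	return 0;
}
```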


Sep 15 14:24:06.999676 osafimmd [3614:immd_evt.c:0273] T5 
immd_evt_proc_fevs_req send_count:61636 size:56464
Sep 15 14:24:07.231197 osafimmd [3614:immd_evt.c:0273] T5 
immd_evt_proc_fevs_req send_count:61637 size:56414
Sep 15 14:24:07.390262 osafimmd [3614:immd_evt.c:0273] T5 
immd_evt_proc_fevs_req send_count:61638 size:56464
Sep 15 14:24:07.541229 osafimmd [3614:immd_evt.c:0273] T5 
immd_evt_proc_fevs_req send_count:61639 size:56444
Sep 15 14:24:07.692762 osafimmd [3614:immd_evt.c:0273] T5 
immd_evt_proc_fevs_req send_count:61640 size:56468
Sep 15 14:24:07.845123 osafimmd [3614:immd_evt.c:0273] T5 
immd_evt_proc_fevs_req send_count:61641 size:56430
Sep 15 14:24:08.008833 osafimmd [3614:immd_evt.c:0273] T5 
immd_evt_proc_fevs_req send_count:61642 size:56448
Sep 15 14:24:08.149347 osafimmd [3614:immd_evt.c:0273] T5 
immd_evt_proc_fevs_req send_count:61643 size:56464
Sep 15 14:24:08.308973 osafimmd [3614:immd_evt.c:0273] T5 
immd_evt_proc_fevs_req send_count:61644 size:56464
Sep 15 14:24:08.450764 osafimmd [3614:immd_evt.c:0273] T5 
immd_evt_proc_fevs_req send_count:61645 size:56414
Sep 15 14:24:08.576393 osafimmd [3614:immd_evt.c:0273] T5 
immd_evt_proc_fevs_req send_count:61646 size:56464
Sep 15 14:24:08.715694 osafimmd [3614:immd_evt.c:0273] T5 
immd_evt_proc_fevs_req send_count:61647 size:56468
Sep 15 14:24:08.845637 osafimmd [3614:immd_evt.c:0273] T5 
immd_evt_proc_fevs_req send_count:61648 size:56444
Sep 15 14:24:08.987037 osafimmd [3614:immd_evt.c:0273] T5 
immd_evt_proc_fevs_req send_count:61649 size:56468
Sep 15 14:24:09.115771 osafimmd [3614:immd_evt.c:0273] T5 
immd_evt_proc_fevs_req send_count:61650 size:56430
Sep 15 14:24:09.262554 osafimmd [3614:immd_evt.c:0273] T5 
immd_evt_proc_fevs_req send_count:61651 size:56444
Sep 15 14:24:09.391070 osafimmd [3614:immd_evt.c:0273] T5 
immd_evt_proc_fevs_req send_count:61652 size:56468
Sep 15 14:24:09.528649 osafimmd [3614:immd_evt.c:0273] T5 
immd_evt_proc_fevs_req send_count:61653 size:56444
Sep 15 14:24:09.674827 osafimmd [3614:immd_evt.c:0273] T5 
immd_evt_proc_fevs_req send_count:61654 size:56434
Sep 15 14:24:09.798674 osafimmd [3614:immd_evt.c:0273] T5 
immd_evt_proc_fevs_req send_count:61655 size:56464
Sep 15 14:24:09.920046 osafimmd [3614:immd_evt.c:0273] T5 
immd_evt_proc_fevs_req send_count:61656 size:56448
Sep 15 14:24:10.062081 osafimmd [3614:immd_evt.c:0273] T5 
immd_evt_proc_fevs_req send_count:61657 size:56464
Sep 15 14:24:10.190174 osafimmd [3614:immd_evt.c:0273] T5 
immd_evt_proc_fevs_req send_count:61658 size:56430
Sep 15 14:24:10.306722 osafimmd [3614:immd_evt.c:0273] T5 
immd_evt_proc_fevs_req send_count:61659 size:56448
Sep 15 14:24:10.438324 osafimmd [3614:immd_evt.c:0273] T5 
immd_evt_proc_fevs_req send_count:61660 size:56464
Sep 15 14:24:10.554421 osafimmd [3614:immd_evt.c:0273] T5 
immd_evt_proc_fevs_req send_count:61661 size:56468
Sep 15 14:24:10.674673 osafimmd [3614:immd_evt.c:0273] T5 
immd_evt_proc_fevs_req send_count:61662 size:56410
Sep 15 14:24:10.791234 osafimmd [3614:immd_evt.c:0273] T5 
immd_evt_proc_fevs_req send_count:61663 size:56464

5. immnd traces on PL-4 (129 messages received in total):

Sep 15 14:24:06.083682 osafimmnd [3580:immnd_evt.c:8709] T2 REMOTE FEVS 
received. Messages from me still pending:0
Sep 15 14:24:06.217729 osafimmnd [3580:immnd_evt.c:8709] T2 REMOTE FEVS 
received. Messages from me still pending:0
Sep 15 14:24:06.364716 osafimmnd [3580:immnd_evt.c:8709] T2 REMOTE FEVS 
received. Messages from me still pending:0
Sep 15 14:24:06.487655 osafimmnd [3580:immnd_evt.c:8709] T2 REMOTE FEVS 
received. Messages from me still pending:0
Sep 15 14:24:06.610578 osafimmnd [3580:immnd_evt.c:8709] T2 REMOTE FEVS 
received. Messages from me still pending:0
Sep 15 14:24:06.754247 osafimmnd [3580:immnd_evt.c:8709] T2 REMOTE FEVS 
received. Messages from me still pending:0
Sep 15 14:24:06.878103 osafimmnd [3580:immnd_evt.c:8709] T2 REMOTE FEVS 
received. Messages from me still pending:0
Sep 15 14:24:06.994528 osafimmnd [3580:immnd_evt.c:8709] T2 REMOTE FEVS 
received. Messages from me still pending:0
Sep 15 14:24:07.127492 osafimmnd [3580:immnd_evt.c:8709] T2 REMOTE FEVS 
received. Messages from me still pending:0
Sep 15 14:24:07.242638 osafimmnd [3580:immnd_evt.c:8709] T2 REMOTE FEVS 
received. Messages from me still pending:0
Sep 15 14:30:37.768066 osafimmnd [3580:immnd_evt.c:8709] T2 REMOTE FEVS 
received. Messages from me still pending:0
Sep 15 14:30:37.768181 osafimmnd [3580:immnd_evt.c:8709] T2 REMOTE FEVS 
received. Messages from me still pending:0
Sep 15 14:30:37.768418 osafimmnd [3580:immnd_evt.c:8709] T2 REMOTE FEVS 
received. Messages from me still pending:0


B. IMMSV_DEFAULT_MAX_SYNC_BATCH_SIZE = 50% of MDS_DIRECT_BUF_MAXSIZE = 32739:
----
1. Loading without node restart -- 3 minutes 36 seconds (18:25:28 to 18:29:04):
Sep 15 18:25:28 SLES-SLOT-1 osafimmloadd: NO ***** Loading from PBE file imm.db 
at /opt/imm_share *****
Sep 15 18:29:02 SLES-SLOT-1 osafimmnd[3760]: NO Ccb 1 COMMITTED (IMMLOADER)
--

Sep 15 18:29:04 SLES-SLOT-1 osafimmd[3749]: NO Ruling epoch changed to:12
Sep 15 18:29:04 SLES-SLOT-1 osafimmd[3749]: NO ACT: SaImmRepositoryInitModeT 
changed and noted as being: SA_IMM_KEEP_REPOSITORY
Sep 15 18:29:04 SLES-SLOT-1 osafimmloadd: NO Load ending normally


2. Syncing without node restart -- 46 seconds (18:29:57 to 18:30:43):

Sep 15 18:29:57 SLES-SLOT-1 osafimmloadd: NO Sync starting
Sep 15 18:30:43 SLES-SLOT-1 osafimmloadd: IN Synced 60367 objects in total
Sep 15 18:30:43 SLES-SLOT-1 osafimmnd[3760]: NO NODE STATE-> 
IMM_NODE_FULLY_AVAILABLE 16818
Sep 15 18:30:43 SLES-SLOT-1 osafimmloadd: NO Sync ending normally
Sep 15 18:30:43 SLES-SLOT-1 osafimmnd[3760]: NO Epoch set to 14 in ImmModel

3. For the #1291 test case, the sync takes 3 minutes and 42 seconds (16:58:59 to 17:02:40):

Sep 15 16:58:59 SLES-SLOT-1 osafimmloadd: NO Sync starting
Sep 15 16:59:01 SLES-SLOT-1 osafamfd[9581]: NO Node 'PL-3' left the cluster
Sep 15 16:59:01 SLES-SLOT-1 osafclmd[9562]: NO Node 131855 went down. Not 
sending track callback for agents on that node
Sep 15 16:59:01 SLES-SLOT-1 osafclmd[9562]: NO Node 131855 went down. Not 
sending track callback for agents on that node
Sep 15 16:59:01 SLES-SLOT-1 osafimmnd[9482]: NO Global discard node received 
for nodeId:2030f pid:8315
Sep 15 16:59:01 SLES-SLOT-1 osafimmnd[9482]: NO Implementer disconnected 11 <0, 
2030f(down)> (MsgQueueService131855)
Sep 15 16:59:08 SLES-SLOT-1 kernel: [ 8893.056187] TIPC: Resetting link 
<1.1.1:eth1-1.1.3:eth2>, peer not responding
Sep 15 16:59:08 SLES-SLOT-1 kernel: [ 8893.056198] TIPC: Lost link 
<1.1.1:eth1-1.1.3:eth2> on network plane A
Sep 15 16:59:08 SLES-SLOT-1 kernel: [ 8893.056983] TIPC: Lost contact with 
<1.1.3>
Sep 15 17:00:42 SLES-SLOT-1 kernel: [ 8987.444292] TIPC: Established link 
<1.1.1:eth1-1.1.3:eth2> on network plane A
Sep 15 17:00:54 SLES-SLOT-1 osafimmd[9471]: NO Extended intro from node 2030f
Sep 15 17:00:55 SLES-SLOT-1 sshd[9887]: Accepted publickey for root from 
192.168.56.105 port 41897 ssh2
Sep 15 17:00:58 SLES-SLOT-1 osafimmd[9471]: NO Extended intro from node 2030f
Sep 15 17:00:58 SLES-SLOT-1 osafimmd[9471]: NO Extended intro from node 2030f
Sep 15 17:00:58 SLES-SLOT-1 osafimmd[9471]: NO Extended intro from node 2030f
Sep 15 17:00:58 SLES-SLOT-1 osafimmd[9471]: NO Extended intro from node 2030f
Sep 15 17:00:58 SLES-SLOT-1 osafimmd[9471]: NO Extended intro from node 2030f
Sep 15 17:00:58 SLES-SLOT-1 osafimmd[9471]: NO Extended intro from node 2030f
Sep 15 17:00:58 SLES-SLOT-1 osafimmd[9471]: NO Extended intro from node 2030f
Sep 15 17:00:58 SLES-SLOT-1 osafimmd[9471]: NO Extended intro from node 2030f
Sep 15 17:00:58 SLES-SLOT-1 osafimmd[9471]: NO Extended intro from node 2030f
Sep 15 17:00:58 SLES-SLOT-1 osafimmd[9471]: NO Extended intro from node 2030f
Sep 15 17:01:03 SLES-SLOT-1 osafimmd[9471]: NO Extended intro from node 2030f
Sep 15 17:01:03 SLES-SLOT-1 osafimmd[9471]: NO Extended intro from node 2030f
Sep 15 17:01:03 SLES-SLOT-1 osafimmd[9471]: NO Extended intro from node 2030f
Sep 15 17:01:03 SLES-SLOT-1 osafimmd[9471]: NO Extended intro from node 2030f
Sep 15 17:01:04 SLES-SLOT-1 osafimmd[9471]: NO Extended intro from node 2030f
Sep 15 17:01:04 SLES-SLOT-1 osafimmd[9471]: NO Extended intro from node 2030f
Sep 15 17:01:04 SLES-SLOT-1 osafimmd[9471]: NO Extended intro from node 2030f
Sep 15 17:01:04 SLES-SLOT-1 osafimmd[9471]: NO Extended intro from node 2030f
Sep 15 17:01:04 SLES-SLOT-1 osafimmd[9471]: NO Extended intro from node 2030f
Sep 15 17:01:05 SLES-SLOT-1 osafimmd[9471]: NO Node 2030f request sync 
sync-pid:3228 epoch:0
Sep 15 17:02:40 SLES-SLOT-1 osafimmloadd: IN Synced 60383 objects in total
Sep 15 17:02:40 SLES-SLOT-1 osafimmnd[9482]: NO NODE STATE-> 
IMM_NODE_FULLY_AVAILABLE 16818
Sep 15 17:02:40 SLES-SLOT-1 osafimmloadd: NO Sync ending normally
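
For comparison, rough sync throughput computed from the two runs above; timestamps and object counts are taken from the syslog excerpts, with wall-clock times rounded to whole seconds:

```c
#include <stdio.h>

int main(void)
{
	/* B.2: sync without node restart, 18:29:57 -> 18:30:43, 60367 objects. */
	const double clean_objects = 60367, clean_seconds = 46;
	/* B.3: #1291 test case, 16:58:59 -> 17:02:40, 60383 objects, with PL-3
	 * leaving the cluster in the middle of the sync. */
	const double faulty_objects = 60383, faulty_seconds = 221;

	printf("clean sync: %.0f objects/s\n", clean_objects / clean_seconds);   /* ~1312 */
	printf("#1291 sync: %.0f objects/s\n", faulty_objects / faulty_seconds); /* ~273 */
	return 0;
}
```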



Observation:

While analyzing with 50% of MDS_DIRECT_BUF_MAXSIZE, the sync takes about 3 minutes 
and 40 seconds. Because of this, CLM and AMFD get TRY_AGAIN for more than 3 minutes 
and the cluster goes for a reboot.
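
Services that access the IMM during the sync (the loader, CLM, AMFD) typically wrap their calls in a bounded TRY_AGAIN retry loop; once the sync outlasts that bound, the caller gives up, which is how the "Too many TRY_AGAIN on saImmOmSearchNext - aborting" in the original report below comes about. An illustrative sketch of the pattern follows; it is not the actual osafimmloadd code, and the 3-minute deadline and 100 ms back-off are made-up values:

```c
#include <time.h>
#include <unistd.h>
#include <saImmOm.h>

/* Keep retrying a search step while the IMM service answers TRY_AGAIN,
 * but give up after a fixed deadline (hypothetical bound, for illustration). */
static SaAisErrorT search_next_with_retry(SaImmSearchHandleT handle,
                                          SaNameT *name,
                                          SaImmAttrValuesT_2 ***attrs)
{
	const time_t deadline = time(NULL) + 180; /* ~3 minutes */
	SaAisErrorT rc;

	do {
		rc = saImmOmSearchNext_2(handle, name, attrs);
		if (rc != SA_AIS_ERR_TRY_AGAIN)
			return rc;
		usleep(100 * 1000); /* back off 100 ms before retrying */
	} while (time(NULL) < deadline);

	return rc; /* still TRY_AGAIN: the caller aborts, as the loader does */
}
```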



System buffers:

SLES-SLOT-1:~ # cat /proc/sys/net/ipv4/tcp_mem
18165    24223    36330

The default and maximum amount for the receive socket memory:
SLES-SLOT-1:~ # cat /proc/sys/net/core/rmem_default
126976
SLES-SLOT-1:~ # cat /proc/sys/net/core/rmem_max
131071
The default and maximum amount for the send socket memory:

SLES-SLOT-1:~ # cat /proc/sys/net/core/wmem_default
126976
SLES-SLOT-1:~ #  cat /proc/sys/net/core/wmem_max
131071

The maximum amount of option memory buffers:

SLES-SLOT-1:~ # cat /proc/sys/net/core/optmem_max
20480
SLES-SLOT-1:~ #
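
Putting the socket buffer sizes next to the two batch sizes gives a rough sense of scale; whether the MDS/TIPC broadcast path is actually bounded by these particular sysctls is an assumption here, not something verified:

```c
#include <stdio.h>

int main(void)
{
	/* Values quoted above. */
	const unsigned rmem_default = 126976; /* /proc/sys/net/core/rmem_default */
	const unsigned batch_90     = 58931;  /* 90% of MDS_DIRECT_BUF_MAXSIZE */
	const unsigned batch_50     = 32739;  /* 50% of MDS_DIRECT_BUF_MAXSIZE */

	/* How many maximum-size sync batches fit in one default socket buffer. */
	printf("90%% batches per default buffer: %u\n", rmem_default / batch_90); /* 2 */
	printf("50%% batches per default buffer: %u\n", rmem_default / batch_50); /* 3 */
	return 0;
}
```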



---

**[tickets:#1291] IMM: IMMD healthcheck callback timeout when standby controller rebooted in middle of IMMND sync**

**Status:** unassigned
**Milestone:** 4.5.2
**Created:** Mon Mar 30, 2015 07:21 AM UTC by Sirisha Alla
**Last Updated:** Wed Aug 19, 2015 09:36 AM UTC
**Owner:** nobody
**Attachments:**

- [immlogs.tar.bz2](https://sourceforge.net/p/opensaf/tickets/1291/attachment/immlogs.tar.bz2) (6.8 MB; application/x-bzip)


The issue is observed with 4.6 FC changeset 6377. The system is up and running 
with a single PBE and 50k objects. This issue is seen after 
http://sourceforge.net/p/opensaf/tickets/1290 is observed. An IMM application is 
running on the standby controller, and the immcfg command is run from a payload 
to set the CompRestartMax value to 1000. IMMND is killed twice on the standby 
controller, leading to #1290.

As a result, the standby controller left the cluster in the middle of the sync, 
IMMD reported a healthcheck callback timeout, and the active controller too went 
for a reboot. Following is the syslog of SC-1:

Mar 26 14:58:17 SLES-64BIT-SLOT1 osafimmloadd: NO Sync starting
Mar 26 14:58:28 SLES-64BIT-SLOT1 osaffmd[9529]: NO Node Down event for node id 
2020f:
Mar 26 14:58:28 SLES-64BIT-SLOT1 osaffmd[9529]: NO Current role: ACTIVE
Mar 26 14:58:28 SLES-64BIT-SLOT1 osaffmd[9529]: Rebooting OpenSAF NodeId = 
131599 EE Name = , Reason: Received Node Down for peer controller, OwnNodeId = 
131343, SupervisionTime = 60
Mar 26 14:58:28 SLES-64BIT-SLOT1 kernel: [15200.412080] TIPC: Resetting link 
<1.1.1:eth0-1.1.2:eth0>, peer not responding
Mar 26 14:58:28 SLES-64BIT-SLOT1 kernel: [15200.412089] TIPC: Lost link 
<1.1.1:eth0-1.1.2:eth0> on network plane A
Mar 26 14:58:28 SLES-64BIT-SLOT1 kernel: [15200.413191] TIPC: Lost contact with 
<1.1.2>
Mar 26 14:58:28 SLES-64BIT-SLOT1 osafclmd[9609]: NO Node 131599 went down. Not 
sending track callback for agents on that node
Mar 26 14:58:28 SLES-64BIT-SLOT1 osafclmd[9609]: NO Node 131599 went down. Not 
sending track callback for agents on that node
Mar 26 14:58:28 SLES-64BIT-SLOT1 osafclmd[9609]: NO Node 131599 went down. Not 
sending track callback for agents on that node
Mar 26 14:58:28 SLES-64BIT-SLOT1 osafclmd[9609]: NO Node 131599 went down. Not 
sending track callback for agents on that node
Mar 26 14:58:28 SLES-64BIT-SLOT1 osafclmd[9609]: NO Node 131599 went down. Not 
sending track callback for agents on that node
Mar 26 14:58:28 SLES-64BIT-SLOT1 osafclmd[9609]: NO Node 131599 went down. Not 
sending track callback for agents on that node
Mar 26 14:58:30 SLES-64BIT-SLOT1 osafamfd[9628]: NO Node 'SC-2' left the cluster
Mar 26 14:58:30 SLES-64BIT-SLOT1 opensaf_reboot: Rebooting remote node in the 
absence of PLM is outside the scope of OpenSAF
Mar 26 14:58:54 SLES-64BIT-SLOT1 kernel: [15226.674333] TIPC: Established link 
<1.1.1:eth0-1.1.2:eth0> on network plane A
Mar 26 15:00:02 SLES-64BIT-SLOT1 syslog-ng[3261]: Log statistics; 
dropped='pipe(/dev/xconsole)=0', dropped='pipe(/dev/tty10)=0', 
processed='center(queued)=2197', processed='center(received)=1172', 
processed='destination(messages)=1172', processed='destination(mailinfo)=0', 
processed='destination(mailwarn)=0', 
processed='destination(localmessages)=955', processed='destination(newserr)=0', 
processed='destination(mailerr)=0', processed='destination(netmgm)=0', 
processed='destination(warn)=44', processed='destination(console)=13', 
processed='destination(null)=0', processed='destination(mail)=0', 
processed='destination(xconsole)=13', processed='destination(firewall)=0', 
processed='destination(acpid)=0', processed='destination(newscrit)=0', 
processed='destination(newsnotice)=0', processed='source(src)=1172'
Mar 26 15:00:07 SLES-64BIT-SLOT1 osafimmloadd: ER Too many TRY_AGAIN on 
saImmOmSearchNext - aborting
Mar 26 15:00:08 SLES-64BIT-SLOT1 osafimmnd[9549]: ER SYNC APPARENTLY FAILED 
status:1
Mar 26 15:00:08 SLES-64BIT-SLOT1 osafimmnd[9549]: NO -SERVER STATE: 
IMM_SERVER_SYNC_SERVER --> IMM_SERVER_READY
Mar 26 15:00:08 SLES-64BIT-SLOT1 osafimmnd[9549]: NO NODE STATE-> 
IMM_NODE_FULLY_AVAILABLE (2484)
Mar 26 15:00:08 SLES-64BIT-SLOT1 osafimmnd[9549]: NO Epoch set to 12 in ImmModel
Mar 26 15:00:08 SLES-64BIT-SLOT1 osafimmnd[9549]: NO Coord broadcasting 
ABORT_SYNC, epoch:12
Mar 26 15:00:08 SLES-64BIT-SLOT1 osafimmpbed: NO Update epoch 12 committing 
with ccbId:100000054/4294967380
Mar 26 15:01:34 SLES-64BIT-SLOT1 osafamfnd[9638]: NO SU failover probation 
timer started (timeout: 1200000000000 ns)
Mar 26 15:01:34 SLES-64BIT-SLOT1 osafamfnd[9638]: NO Performing failover of 
'safSu=SC-1,safSg=2N,safApp=OpenSAF' (SU failover count: 1)
Mar 26 15:01:34 SLES-64BIT-SLOT1 osafamfnd[9638]: NO 
'safComp=IMMD,safSu=SC-1,safSg=2N,safApp=OpenSAF' recovery action escalated 
from 'componentFailover' to 'suFailover'
Mar 26 15:01:34 SLES-64BIT-SLOT1 osafamfnd[9638]: NO 
'safComp=IMMD,safSu=SC-1,safSg=2N,safApp=OpenSAF' faulted due to 
'healthCheckcallbackTimeout' : Recovery is 'suFailover'
Mar 26 15:01:34 SLES-64BIT-SLOT1 osafamfnd[9638]: ER 
safComp=IMMD,safSu=SC-1,safSg=2N,safApp=OpenSAF Faulted due 
to:healthCheckcallbackTimeout Recovery is:suFailover
Mar 26 15:01:34 SLES-64BIT-SLOT1 osafamfnd[9638]: Rebooting OpenSAF NodeId = 
131343 EE Name = , Reason: Component faulted: recovery is node failfast, 
OwnNodeId = 131343, SupervisionTime = 60
Mar 26 15:01:34 SLES-64BIT-SLOT1 opensaf_reboot: Rebooting local node; 
timeout=60

The syslog, immnd and immd traces of SC-1 are attached.


