I had 1 controller and 4 payloads up and running.
Normally "Messages pending" stays at 2 and occasionally rises to 3 or 4.
I brought the 5th payload up and down around 10-15 times:
    while true; do /etc/init.d/opensafd stop && /etc/init.d/opensafd start; done

    tail -f /var/log/opensaf/osafimmnd | grep "Messages pending:"
    Sep 15 21:12:50.691919 osafimmnd [368:immnd_evt.c:0960] TR Messages pending:2
    Sep 15 21:12:50.724038 osafimmnd [368:immnd_evt.c:0960] TR Messages pending:2
    Sep 15 21:12:50.957123 osafimmnd [368:immnd_evt.c:0960] TR Messages pending:2
    Sep 15 21:12:50.961528 osafimmnd [368:immnd_evt.c:0960] TR Messages pending:3
    Sep 15 21:12:51.215563 osafimmnd [368:immnd_evt.c:0960] TR Messages pending:2
    Sep 15 21:12:52.785945 osafimmnd [368:immnd_evt.c:2674] TR Messages pending:2
    Sep 15 21:12:52.799428 osafimmnd [368:immnd_evt.c:2674] TR Messages pending:2
    Sep 15 21:12:57.923195 osafimmnd [368:immnd_evt.c:0960] TR Messages pending:2
    Sep 15 21:12:58.355613 osafimmnd [368:immnd_evt.c:0960] TR Messages pending:3
    Sep 15 21:12:58.369637 osafimmnd [368:immnd_evt.c:0960] TR Messages pending:5
    Sep 15 21:12:58.372522 osafimmnd [368:immnd_evt.c:0960] TR Messages pending:6
    Sep 15 21:12:58.394801 osafimmnd [368:immnd_evt.c:0960] TR Messages pending:8
    Sep 15 21:12:58.458708 osafimmnd [368:immnd_evt.c:0960] TR Messages pending:10
    Sep 15 21:12:58.470905 osafimmnd [368:immnd_evt.c:0960] TR Messages pending:12
    Sep 15 21:12:58.480655 osafimmnd [368:immnd_evt.c:0960] TR Messages pending:14
    Sep 15 21:12:58.484411 osafimmnd [368:immnd_evt.c:0960] TR Messages pending:16

Once this happens, terminating the 5th payload does not help.
A few minutes later a cluster reset is triggered:

    osafimmnd [738:immnd_mds.c:0573] TR Resetting fevs replies pending to zero.



---

** [tickets:#1072] Sync stop after few payload nodes joining the cluster (TCP)**

**Status:** invalid
**Milestone:** 4.3.3
**Created:** Fri Sep 12, 2014 09:20 PM UTC by Adrian Szwej
**Last Updated:** Mon Sep 15, 2014 07:45 AM UTC
**Owner:** Anders Bjornerstedt

Communication is MDS over TCP. The cluster is 2+3, and the scenario is:
start the SCs; start one payload; wait for sync; start a second payload; wait
for sync; start a 3rd payload. The third one fails, or sometimes it is the fourth.

It is not possible to get more than 2 or 3 payloads synchronized, because the
scenario consistently triggers a bug.

The following is logged in the loading immnd, causing the joining node to
time out and fail to start up.

Sep  6  6:58:02.096550 osafimmnd [502:immsv_evt.c:5382] T8 Received: IMMND_EVT_A2ND_SEARCHNEXT (17) from 2020f
Sep  6  6:58:02.096575 osafimmnd [502:immnd_evt.c:1443] >> immnd_evt_proc_search_next
Sep  6  6:58:02.096613 osafimmnd [502:immnd_evt.c:1454] T2 SEARCH NEXT, Look for id:1664
Sep  6  6:58:02.096641 osafimmnd [502:ImmModel.cc:1366] T2 ERR_TRY_AGAIN: Too many pending incoming fevs messages (> 16) rejecting sync iteration next request
Sep  6  6:58:02.096725 osafimmnd [502:immnd_evt.c:1676] << immnd_evt_proc_search_next
Sep  6  6:58:03.133230 osafimmnd [502:immnd_proc.c:1980] IN Sync Phase-3: step:540
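The ERR_TRY_AGAIN rejection in the log appears to come from a back-pressure guard of roughly this shape (an illustrative sketch, not the actual OpenSAF source; the function name is a placeholder, and the minimal SaAisErrorT enum values follow the SAF AIS spec):

```c
#include <stdint.h>

/* The limit the patch below raises; see immsv_api.h. */
#define IMMSV_DEFAULT_FEVS_MAX_PENDING 16

/* Minimal subset of the SAF AIS error codes used here. */
typedef enum {
	SA_AIS_OK = 1,
	SA_AIS_ERR_TRY_AGAIN = 6
} SaAisErrorT;

/* Reject a sync iteration step when too many fevs messages are already
 * in flight. Matches the logged condition "(> 16)": strictly greater
 * than the limit rejects. */
static SaAisErrorT check_fevs_backpressure(uint8_t fevs_replies_pending)
{
	if (fevs_replies_pending > IMMSV_DEFAULT_FEVS_MAX_PENDING)
		return SA_AIS_ERR_TRY_AGAIN; /* client should back off and retry */
	return SA_AIS_OK;
}
```

If that is the mechanism, the sync iterator keeps getting TRY_AGAIN until the joining node's startup timeout expires, which matches the observed failure.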

I have managed to work around this bug temporarily with the following patch:

    +++ b/osaf/libs/common/immsv/include/immsv_api.h        Sat Sep 06 08:38:16 2014 +0000
    @@ -70,7 +70,7 @@

     /*Max # of outstanding fevs messages towards director.*/
     /*Note max-max is 255. cb->fevs_replies_pending is an uint8_t*/
    -#define IMMSV_DEFAULT_FEVS_MAX_PENDING 16
    +#define IMMSV_DEFAULT_FEVS_MAX_PENDING 255

     #define IMMSV_MAX_OBJECTS 10000
     #define IMMSV_MAX_ATTRIBUTES 128
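One caveat on raising the limit to 255: as the header comment notes, cb->fevs_replies_pending is a uint8_t, so 255 is the absolute ceiling. A plain increment past 255 wraps around to 0 (well-defined behavior for unsigned types), which would make the counter report almost no back-pressure at the worst moment. These helpers are illustrative only, not OpenSAF code:

```c
#include <stdint.h>

/* Plain increment: 255 wraps around to 0 (unsigned overflow is defined
 * as modular arithmetic in C). */
static uint8_t bump(uint8_t pending)
{
	return (uint8_t)(pending + 1);
}

/* Saturating increment: pins the counter at UINT8_MAX instead of
 * wrapping, so a burst past the ceiling cannot reset it to a small
 * value. */
static uint8_t bump_saturating(uint8_t pending)
{
	return pending == UINT8_MAX ? UINT8_MAX : (uint8_t)(pending + 1);
}
```

So the patch only raises the ceiling to the type's maximum; it does not remove it, and it leaves no headroom before wraparound.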


