I am able to bring up a 7-node cluster of OpenSAF 4.5 code with TCP as the
transport on a VM setup with 512 MB of memory (with a 70-node XML
configuration), using the OpenSAF socket (dtmd.conf) and Linux send/receive
buffer settings specified below, without any issue:
---------------------------------------------------------------------------------
dtmd.conf:
DTM_SOCK_SND_RCV_BUF_SIZE=126976
OpenSAF environment variable:
export MDS_SOCK_SND_RCV_BUF_SIZE=126976
immnd.conf:
export IMMSV_NUM_NODES=9
export IMMSV_MAX_WAIT=9
Linux system configuration:
net.core.wmem_max = 33554432
net.core.rmem_max = 33554432
net.ipv4.tcp_rmem = 4096 87380 33554432
net.ipv4.tcp_wmem = 4096 87380 33554432
---------------------------------------------------------------------------------
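For context on how these two layers interact: a per-socket size such as
DTM_SOCK_SND_RCV_BUF_SIZE/MDS_SOCK_SND_RCV_BUF_SIZE is presumably requested with
the standard SO_SNDBUF/SO_RCVBUF socket options, and Linux clamps any such
request at net.core.wmem_max/rmem_max, which is why both the OpenSAF values and
the sysctl limits are listed above. A minimal sketch with plain POSIX sockets
(not the actual DTM/MDS code):

/* Minimal sketch, assuming plain POSIX sockets -- not the actual DTM/MDS code.
 * It shows how a configured buffer size such as 126976 is typically requested
 * per socket, and how the kernel limits what it grants. */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    int sd = socket(AF_INET, SOCK_STREAM, 0);
    int requested = 126976;              /* value from dtmd.conf above */
    int granted = 0;
    socklen_t len = sizeof(granted);

    if (sd < 0)
        return 1;

    /* Request the configured send/receive buffer sizes. */
    setsockopt(sd, SOL_SOCKET, SO_SNDBUF, &requested, sizeof(requested));
    setsockopt(sd, SOL_SOCKET, SO_RCVBUF, &requested, sizeof(requested));

    /* Read back what the kernel actually set; Linux clamps the requested value
     * at net.core.rmem_max/wmem_max and stores roughly double it to account
     * for bookkeeping overhead. */
    getsockopt(sd, SOL_SOCKET, SO_RCVBUF, &granted, &len);
    printf("requested %d, kernel granted %d\n", requested, granted);
    return 0;
}

With the numbers above, 126976 is far below the 33554432 limit, so the requested
size is granted in full.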
Please provide the following, so that I can reproduce the problem:
1) Reproducible steps
2) dtmd.conf file
3) imm.xml configuration details (it seems you prepared a 70-node
configuration)
4) Your system buffer info; check the link below for how to get the data from
your nodes (a small helper sketch also follows this list):
http://www.cyberciti.biz/faq/linux-tcp-tuning/
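For item 4, running "sysctl net.core.rmem_max net.core.wmem_max
net.ipv4.tcp_rmem net.ipv4.tcp_wmem" on each node gives the data; if preferred,
the small helper below (purely illustrative) prints the same values from /proc:

/* Purely illustrative helper: print the kernel socket buffer limits asked for
 * in item 4 by reading the corresponding /proc/sys entries. */
#include <stdio.h>

static void print_value(const char *path)
{
    FILE *f = fopen(path, "r");
    char buf[128];

    if (f == NULL) {
        printf("%s: <unavailable>\n", path);
        return;
    }
    if (fgets(buf, sizeof(buf), f) != NULL)
        printf("%s: %s", path, buf);
    fclose(f);
}

int main(void)
{
    print_value("/proc/sys/net/core/rmem_max");
    print_value("/proc/sys/net/core/wmem_max");
    print_value("/proc/sys/net/ipv4/tcp_rmem");
    print_value("/proc/sys/net/ipv4/tcp_wmem");
    return 0;
}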
---
** [tickets:#1072] Sync stop after few payload nodes joining the cluster (TCP)**
**Status:** unassigned
**Milestone:** 4.3.3
**Created:** Fri Sep 12, 2014 09:20 PM UTC by Adrian Szwej
**Last Updated:** Wed Oct 01, 2014 06:39 PM UTC
**Owner:** nobody
Communication is MDS over TCP. The cluster is 2+3; the scenario is:
start the SCs; start the 1st payload; wait for sync; start the 2nd payload;
wait for sync; start the 3rd payload. The third one fails, or sometimes it is
the fourth.
There is a problem getting more than 2 or 3 payloads synchronized, because the
bug is triggered in a consistent way.
The following is triggered in the loading immnd, causing the joining node to
time out / fail to start up.
Sep 6 6:58:02.096550 osafimmnd [502:immsv_evt.c:5382] T8 Received: IMMND_EVT_A2ND_SEARCHNEXT (17) from 2020f
Sep 6 6:58:02.096575 osafimmnd [502:immnd_evt.c:1443] >> immnd_evt_proc_search_next
Sep 6 6:58:02.096613 osafimmnd [502:immnd_evt.c:1454] T2 SEARCH NEXT, Look for id:1664
Sep 6 6:58:02.096641 osafimmnd [502:ImmModel.cc:1366] T2 ERR_TRY_AGAIN: Too many pending incoming fevs messages (> 16) rejecting sync iteration next request
Sep 6 6:58:02.096725 osafimmnd [502:immnd_evt.c:1676] << immnd_evt_proc_search_next
Sep 6 6:58:03.133230 osafimmnd [502:immnd_proc.c:1980] IN Sync Phase-3: step:540
I have managed to overcome this bug temporarily with the following patch:
+++ b/osaf/libs/common/immsv/include/immsv_api.h    Sat Sep 06 08:38:16 2014 +0000
@@ -70,7 +70,7 @@
 /*Max # of outstanding fevs messages towards director.*/
 /*Note max-max is 255. cb->fevs_replies_pending is an uint8_t*/
-#define IMMSV_DEFAULT_FEVS_MAX_PENDING 16
+#define IMMSV_DEFAULT_FEVS_MAX_PENDING 255
 #define IMMSV_MAX_OBJECTS 10000
 #define IMMSV_MAX_ATTRIBUTES 128
---
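For reference, the log and the one-line patch quoted above describe a
back-pressure limit: once more than IMMSV_DEFAULT_FEVS_MAX_PENDING (16) fevs
messages are outstanding, the sync iterator's SEARCH NEXT is rejected with
ERR_TRY_AGAIN; and since cb->fevs_replies_pending is a uint8_t (per the comment
in the patched header), 255 is the highest the limit can be raised. A minimal
sketch of that kind of guard, with illustrative names rather than the actual
ImmModel.cc code:

/* Illustrative sketch only, not the actual ImmModel.cc code: the kind of guard
 * that produces "ERR_TRY_AGAIN: Too many pending incoming fevs messages (> 16)"
 * in the log quoted above. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define IMMSV_DEFAULT_FEVS_MAX_PENDING 16 /* the patch above raises this to 255 */

/* Hypothetical helper: allow a sync-iteration SEARCH NEXT only while the number
 * of pending fevs messages stays within the limit.  The real counter
 * (cb->fevs_replies_pending) is a uint8_t, hence the hard 255 upper bound. */
static bool search_next_allowed(uint8_t fevs_pending)
{
    return fevs_pending <= IMMSV_DEFAULT_FEVS_MAX_PENDING;
}

int main(void)
{
    printf("16 pending -> %s\n", search_next_allowed(16) ? "allowed" : "ERR_TRY_AGAIN");
    printf("17 pending -> %s\n", search_next_allowed(17) ? "allowed" : "ERR_TRY_AGAIN");
    return 0;
}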