Hi Lennart,

I'm not sure what the "fixme?" comment was intended to fix; it gives no information about any bug to be fixed, so I have removed it.
Please help push the attached patch.

Thanks,
Minh

On 03/06/16 23:53, Lennart Lund wrote:
Ack
Review only.

Comment: Is the "/* fixme? */" comment relevant? If not, please remove it.

Thanks
Lennart

-----Original Message-----
From: Minh Hon Chau [mailto:minh.c...@dektech.com.au]
Sent: 30 May 2016 05:43
To: Lennart Lund <lennart.l...@ericsson.com>;
praveen.malv...@oracle.com; Minh Hon Chau
<minh.c...@dektech.com.au>
Cc: opensaf-devel@lists.sourceforge.net
Subject: [PATCH 1 of 1] ntfa: Lower priority of initialize req message [#1818] V2

  osaf/libs/agents/saf/ntfa/ntfa_mds.c |  11 ++++++++++-
  1 files changed, 10 insertions(+), 1 deletions(-)


When running life cycle APIs from multiple handles in multiple threads, ntfd
processes the previous NCSMDS_DOWN event from the last finalize after it has
processed the following initialize. Because NCSMDS_DOWN is processed late, all
clients that are still running are unexpectedly deleted.

The problem is seen in two forms: (1) sometimes the NCSMDS_DOWN from the last
finalize arrives after the next initialize req message at the mds callback; and
(2) ntfd sends NTFSV_NTFS_EVT_NTFA_DOWN with a lower priority than
NTFSV_NTFS_NTFSV_MSG, so even when NCSMDS_DOWN arrives before the initialize
req message at the mds callback, ntfd still processes NCSMDS_DOWN after the
next initialize.

At this stage, for problem (1), it is not certain whether this is an mds issue,
since all APIs have been sent with high priority. This patch lowers the send
priority of the initialize request msg, which gives all messages following the
last finalize response message a chance to reach ntfd first. For problem (2),
given that NCSMDS_DOWN and the initialize req message arrive at ntfd in the
correct order at the mds callback, those events are now sent to ntfd's mailbox
with the same priority (MDS_SEND_PRIORITY_MEDIUM = NCS_IPC_PRIORITY_NORMAL).
The unexpected client deletion described above should no longer be seen. If the
problem is seen again after this patch, it is most likely that mds does not
ensure NCSMDS_DOWN and the initialize req are sent from the Agent and received
at NTFD in the correct order.

diff --git a/osaf/libs/agents/saf/ntfa/ntfa_mds.c b/osaf/libs/agents/saf/ntfa/ntfa_mds.c
--- a/osaf/libs/agents/saf/ntfa/ntfa_mds.c
+++ b/osaf/libs/agents/saf/ntfa/ntfa_mds.c
@@ -1177,7 +1177,16 @@ uint32_t ntfa_mds_msg_sync_send(ntfa_cb_
 	mds_info.info.svc_send.i_msg = (NCSCONTEXT)i_msg;
 	mds_info.info.svc_send.i_to_svc = NCSMDS_SVC_ID_NTFS;
 	mds_info.info.svc_send.i_sendtype = MDS_SENDTYPE_SNDRSP;
-	mds_info.info.svc_send.i_priority = MDS_SEND_PRIORITY_HIGH;	/* fixme? */
+
+	/* Lower priority of initialize_req msg so that the other existing
+	 * life cycle msg can be completed, for multiple handles usage.
+	 */
+	if (i_msg->info.api_info.type == NTFSV_INITIALIZE_REQ) {
+		mds_info.info.svc_send.i_priority = MDS_SEND_PRIORITY_MEDIUM;
+	} else {
+		mds_info.info.svc_send.i_priority = MDS_SEND_PRIORITY_HIGH;	/* fixme? */
+	}
+
 	/* fill the sub send rsp strcuture */
 	mds_info.info.svc_send.info.sndrsp.i_time_to_wait = timeout;	/* timeto wait in 10ms FIX!!! */
 	mds_info.info.svc_send.info.sndrsp.i_to_dest = cb->ntfs_mds_dest;

ntfa: Lower priority of initialize req message [#1818] V3

When running life cycle APIs from multiple handles in multiple threads, ntfd
processes the previous NCSMDS_DOWN event from the last finalize after it has
processed the following initialize. Because NCSMDS_DOWN is processed late, all
clients that are still running are unexpectedly deleted.

The problem is seen in two forms: (1) sometimes the NCSMDS_DOWN from the last
finalize arrives after the next initialize req message at the mds callback; and
(2) ntfd sends NTFSV_NTFS_EVT_NTFA_DOWN with a lower priority than
NTFSV_NTFS_NTFSV_MSG, so even when NCSMDS_DOWN arrives before the initialize
req message at the mds callback, ntfd still processes NCSMDS_DOWN after the
next initialize.

At this stage, for problem (1), it is not certain whether this is an mds issue,
since all APIs have been sent with high priority. This patch lowers the send
priority of the initialize request msg, which gives all messages following the
last finalize response message a chance to reach ntfd first. For problem (2),
given that NCSMDS_DOWN and the initialize req message arrive at ntfd in the
correct order at the mds callback, those events are now sent to ntfd's mailbox
with the same priority (MDS_SEND_PRIORITY_MEDIUM = NCS_IPC_PRIORITY_NORMAL).
The unexpected client deletion described above should no longer be seen. If the
problem is seen again after this patch, it is most likely that mds does not
ensure NCSMDS_DOWN and the initialize req are sent from the Agent and received
at NTFD in the correct order.
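
To illustrate the priority reasoning above (a sketch only; the mailbox below is
a simplified stand-in, not the real MDS/NCS IPC code), in a priority mailbox
every higher-priority item is dequeued before any lower-priority one, so a
NORMAL-priority NCSMDS_DOWN that arrived first can still be processed after a
HIGH-priority initialize req. With equal priorities, arrival (FIFO) order is
preserved:

	/* Illustration only: a simplified two-level priority mailbox, not
	 * the real MDS/NCS IPC code. Higher-priority items are always
	 * dequeued first; equal-priority items keep FIFO order. */
	#include <stdio.h>

	enum prio { NORMAL = 0, HIGH = 1 };

	struct item {
		const char *name;
		enum prio prio;
	};

	/* Dequeue all HIGH items (in arrival order), then all NORMAL items. */
	static void drain(const struct item *q, int n)
	{
		for (int p = HIGH; p >= NORMAL; p--)
			for (int i = 0; i < n; i++)
				if (q[i].prio == (enum prio)p)
					printf("  process %s\n", q[i].name);
	}

	int main(void)
	{
		/* Arrival order in both cases: NCSMDS_DOWN from the last
		 * finalize first, then the next initialize req. */
		const struct item mixed[] = {
			{ "NCSMDS_DOWN (last finalize)", NORMAL },
			{ "initialize req",              HIGH   }, /* old behaviour */
		};
		const struct item same[] = {
			{ "NCSMDS_DOWN (last finalize)", NORMAL },
			{ "initialize req",              NORMAL }, /* after the patch */
		};

		puts("mixed priorities (initialize req overtakes the DOWN event):");
		drain(mixed, 2);
		puts("same priority (arrival order preserved):");
		drain(same, 2);
		return 0;
	}

Running it prints the initialize req before NCSMDS_DOWN in the mixed case, and
in arrival order in the equal-priority case, which is the behaviour the patch
aims for.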


diff --git a/osaf/libs/agents/saf/ntfa/ntfa_mds.c b/osaf/libs/agents/saf/ntfa/ntfa_mds.c
--- a/osaf/libs/agents/saf/ntfa/ntfa_mds.c
+++ b/osaf/libs/agents/saf/ntfa/ntfa_mds.c
@@ -1177,7 +1177,16 @@ uint32_t ntfa_mds_msg_sync_send(ntfa_cb_
 	mds_info.info.svc_send.i_msg = (NCSCONTEXT)i_msg;
 	mds_info.info.svc_send.i_to_svc = NCSMDS_SVC_ID_NTFS;
 	mds_info.info.svc_send.i_sendtype = MDS_SENDTYPE_SNDRSP;
-	mds_info.info.svc_send.i_priority = MDS_SEND_PRIORITY_HIGH;	/* fixme? */
+
+	/* Lower priority of initialize_req msg so that the other existing
+	 * life cycle msg can be completed, for multiple handles usage.
+	 */
+	if (i_msg->info.api_info.type == NTFSV_INITIALIZE_REQ) {
+		mds_info.info.svc_send.i_priority = MDS_SEND_PRIORITY_MEDIUM;
+	} else {
+		mds_info.info.svc_send.i_priority = MDS_SEND_PRIORITY_HIGH;
+	}
+
 	/* fill the sub send rsp strcuture */
 	mds_info.info.svc_send.info.sndrsp.i_time_to_wait = timeout;	/* timeto wait in 10ms FIX!!! */
 	mds_info.info.svc_send.info.sndrsp.i_to_dest = cb->ntfs_mds_dest;
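
For reference, a minimal sketch of the usage pattern that exposes the race:
several threads each cycling saNtfInitialize()/saNtfFinalize() on their own
handles. The thread count and iteration count below are arbitrary, and error
handling is reduced to asserts:

	/* Sketch of the reproducing pattern described above: life cycle APIs
	 * run from multiple handles in multiple threads. */
	#include <assert.h>
	#include <pthread.h>
	#include <saNtf.h>

	static void *lifecycle(void *arg)
	{
		(void)arg;
		for (int i = 0; i < 1000; i++) {
			SaNtfHandleT handle;
			SaVersionT version = { 'A', 1, 1 };
			SaAisErrorT rc;

			/* Before the patch, the NCSMDS_DOWN triggered by one
			 * thread's saNtfFinalize() could be processed by ntfd
			 * after another thread's saNtfInitialize(), deleting
			 * the other thread's client. */
			rc = saNtfInitialize(&handle, NULL, &version);
			assert(rc == SA_AIS_OK);
			rc = saNtfFinalize(handle);
			assert(rc == SA_AIS_OK);
		}
		return NULL;
	}

	int main(void)
	{
		pthread_t t[4];

		for (int i = 0; i < 4; i++)
			pthread_create(&t[i], NULL, lifecycle, NULL);
		for (int i = 0; i < 4; i++)
			pthread_join(t[i], NULL);
		return 0;
	}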