Hi Mahesh,

There is no functional change between V7 and V9; the code was just rebased on the latest changeset.
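For reference, the rebase/apply flow looks roughly like the sketch below (assumptions: a Mercurial clone of the OpenSAF staging repository and the patch file name from this thread; the repo path is a placeholder):

```shell
# Hypothetical sketch of applying the rebased patch (Mercurial, as used by
# OpenSAF staging at the time). REPO is a placeholder path.
REPO="$HOME/staging"
PATCH="2258_v9.patch"

# The commands are assembled into a string so the sketch stays inert when
# run outside a real checkout; paste them into a live clone instead.
CMDS="cd $REPO && hg pull -u && hg import --no-commit $PATCH"
echo "$CMDS"
```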

I have tried cleaning everything up and rebuilding the cluster to reproduce
what you are observing,
but I am not able to reproduce the problem; I have tried several times.

Can you provide me the osaflogd traces from both SC nodes? Thanks.
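For reference, a sketch of one common way to enable osaflogd tracing (the exact file name and option should be verified against your OpenSAF version; the paths below are typical defaults, not confirmed in this thread):

```
# /etc/opensaf/logd.conf  (sketch; path and option names are typical, verify locally)
# Enable full tracing for osaflogd, then restart OpenSAF on the node:
args="--tracemask=0xffffffff"

# The trace is then written under the OpenSAF log directory, e.g.:
#   /var/log/opensaf/osaflogd.trace   (collect from both SCs)
```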

Regards, Vu

> -----Original Message-----
> From: A V Mahesh [mailto:mahesh.va...@oracle.com]
> Sent: Thursday, February 23, 2017 4:48 PM
> To: Vu Minh Nguyen <vu.m.ngu...@dektech.com.au>;
> lennart.l...@ericsson.com; canh.v.tru...@dektech.com.au
> Cc: opensaf-devel@lists.sourceforge.net
> Subject: Re: [devel] [PATCH 0 of 3] Review Request for log: add
alternative
> destinations of log records [#2258] V4
> 
> Hi Vu,
> 
> On 2/23/2017 3:13 PM, A V Mahesh wrote:
> >
> > Not sure what other changes there are from V7 to V9; new problems got
> > introduced.
> >
> > Both nodes SC-1 & SC-2 (with 2258_v9.patch): trying to bring up both
> > SCs, a simple node bring-up.
> >
> > SC-2 goes for reboot with the following:
> >
> >
> ==============================================================
> ==============================================================
> ============
> >
> >
> > Feb 23 15:05:32 SC-2 osaflogd[29988]: NO LOGSV_DATA_GROUPNAME
> not found
> > Feb 23 15:05:32 SC-2 osaflogd[29988]: NO LOG root directory is:
> > "/var/log/opensaf/saflog"
> > Feb 23 15:05:32 SC-2 osaflogd[29988]: NO LOG data group is: ""
> > Feb 23 15:05:32 SC-2 osafimmnd[29978]: NO Implementer (applier)
> > connected: 16 (@safAmfService2020f) <127, 2020f>
> > Feb 23 15:05:32 SC-2 osaflogd[29988]: NO LGS_MBCSV_VERSION = 7
> > Feb 23 15:05:32 SC-2 osaflogd[29988]: WA FAILED:
> > ncs_patricia_tree_add, client_id 0
> > Feb 23 15:05:32 SC-2 osafamfnd[30028]: NO Assigned
> > 'safSi=SC-2N,safApp=OpenSAF' STANDBY to
> > 'safSu=SC-2,safSg=2N,safApp=OpenSAF'
> > Feb 23 15:05:32 SC-2 osaflogd[29988]: ER Exiting with message: Could
> > not create new client
> > Feb 23 15:05:32 SC-2 osafamfnd[30028]: NO
> > 'safSu=SC-2,safSg=2N,safApp=OpenSAF' component restart probation
> timer
> > started (timeout: 60000000000 ns)
> > Feb 23 15:05:32 SC-2 osafamfnd[30028]: NO Restarting a component of
> > 'safSu=SC-2,safSg=2N,safApp=OpenSAF' (comp restart count: 1)
> > Feb 23 15:05:32 SC-2 osafamfnd[30028]: NO
> > 'safComp=LOG,safSu=SC-2,safSg=2N,safApp=OpenSAF' faulted due to
> > 'errorReport' : Recovery is 'componentRestart'
> > Feb 23 15:05:32 SC-2 opensafd[29908]: ER Service LOGD has unexpectedly
> > crashed. Unable to continue, exiting
> > Feb 23 15:05:32 SC-2 osafamfd[30018]: exiting for shutdown
> > Feb 23 15:05:32 SC-2 osafamfnd[30028]: ER AMFD has unexpectedly
> > crashed. Rebooting node
> > Feb 23 15:05:32 SC-2 osafamfnd[30028]: Rebooting OpenSAF NodeId =
> > 131599 EE Name = , Reason: AMFD has unexpectedly crashed. Rebooting
> > node, OwnNodeId = 131599, SupervisionTime = 60
> > Feb 23 15:05:32 SC-2 opensaf_reboot: Rebooting local node; timeout=60
> > Feb 23 15:06:04 SC-2 syslog-ng[1180]: syslog-ng starting up;
> > version='2.0.9'
> >
> >
> ==============================================================
> ==============================================================
> ============
> >
> Sometimes:
> 
> ==============================================================
> ==============================================================
> ============
> 
> Feb 23 15:15:19 SC-2 osafrded[3858]: NO RDE role set to STANDBY
> Feb 23 15:15:19 SC-2 osafrded[3858]: NO Peer up on node 0x2010f
> Feb 23 15:15:19 SC-2 osafrded[3858]: NO Got peer info request from node
> 0x2010f with role ACTIVE
> Feb 23 15:15:19 SC-2 osafrded[3858]: NO Got peer info response from node
> 0x2010f with role ACTIVE
> Feb 23 15:15:19 SC-2 osafimmd[3877]: NO MDS event from svc_id 24
> (change:3, dest:13)
> Feb 23 15:15:19 SC-2 osafimmd[3877]: NO MDS event from svc_id 24
> (change:5, dest:13)
> Feb 23 15:15:19 SC-2 osafimmd[3877]: NO MDS event from svc_id 24
> (change:5, dest:13)
> Feb 23 15:15:19 SC-2 osafimmd[3877]: NO MDS event from svc_id 25
> (change:3, dest:565217560625168)
> Feb 23 15:15:19 SC-2 osafimmd[3877]: NO MDS event from svc_id 25
> (change:3, dest:564114674417680)
> Feb 23 15:15:19 SC-2 osaflogd[3898]: NO LOGSV_DATA_GROUPNAME not
> found
> Feb 23 15:15:19 SC-2 osaflogd[3898]: NO LOG root directory is:
> "/var/log/opensaf/saflog"
> Feb 23 15:15:19 SC-2 osaflogd[3898]: NO LOG data group is: ""
> Feb 23 15:15:19 SC-2 osafimmnd[3888]: NO Implementer (applier)
> connected: 15 (@safAmfService2020f) <127, 2020f>
> Feb 23 15:15:19 SC-2 osaflogd[3898]: NO LGS_MBCSV_VERSION = 7
> Feb 23 15:15:19 SC-2 osaflogd[3898]: ER Exiting with message: Client
> attributes differ
> Feb 23 15:15:19 SC-2 osafamfnd[3938]: NO
> 'safSu=SC-2,safSg=2N,safApp=OpenSAF' component restart probation timer
> started (timeout: 60000000000 ns)
> Feb 23 15:15:19 SC-2 osafamfnd[3938]: NO Restarting a component of
> 'safSu=SC-2,safSg=2N,safApp=OpenSAF' (comp restart count: 1)
> Feb 23 15:15:19 SC-2 osafamfnd[3938]: NO
> 'safComp=LOG,safSu=SC-2,safSg=2N,safApp=OpenSAF' faulted due to
> 'errorReport' : Recovery is 'componentRestart'
> Feb 23 15:15:19 SC-2 opensafd[3818]: ER Service LOGD has unexpectedly
> crashed. Unable to continue, exiting
> Feb 23 15:15:20 SC-2 osafamfd[3928]: exiting for shutdown
> Feb 23 15:15:20 SC-2 osafamfnd[3938]: ER AMFD has unexpectedly crashed.
> Rebooting node
> Feb 23 15:15:20 SC-2 osafamfnd[3938]: Rebooting OpenSAF NodeId =
> 131599
> EE Name = , Reason: AMFD has unexpectedly crashed. Rebooting node,
> OwnNodeId = 131599, SupervisionTime = 60
> Feb 23 15:15:20 SC-2 osafimmnd[3888]: NO Implementer locally
> disconnected. Marking it as doomed 15 <127, 2020f> (@safAmfService2020f)
> Feb 23 15:15:20 SC-2 osafimmnd[3888]: NO Implementer disconnected 15
> <127, 2020f> (@safAmfService2020f)
> Feb 23 15:15:20 SC-2 opensaf_reboot: Rebooting local node; timeout=60
> ==============================================================
> ==============================================================
> ============
> 
> 
> >
> > -AVM
> >
> >
> > On 2/23/2017 2:20 PM, Vu Minh Nguyen wrote:
> >> Hi Mahesh,
> >>
> >> This is the latest code, rebased on the latest changeset.
> >>
> >> Note that, in the attached patch, I have included one more dependency:
> >> the base::Hash() function, from the patch sent by Anders [#2266].
> >>
> >> Please review the patch, then comment if you have any remarks. Thanks.
> >>
> >> Regards, Vu
> >>
> >>> -----Original Message-----
> >>> From: A V Mahesh [mailto:mahesh.va...@oracle.com]
> >>> Sent: Thursday, February 23, 2017 2:03 PM
> >>> To: Vu Minh Nguyen <vu.m.ngu...@dektech.com.au>;
> >>> lennart.l...@ericsson.com; canh.v.tru...@dektech.com.au
> >>> Cc: opensaf-devel@lists.sourceforge.net
> >>> Subject: Re: [devel] [PATCH 0 of 3] Review Request for log: add
> >> alternative
> >>> destinations of log records [#2258] V4
> >>>
> >>> Hi Vu,
> >>>
> >>> Now we are able to proceed further with V7 (`2258_v7.patch`); the
> >>> in-service upgrade is working fine,
> >>> because of the encode/decode changes done in the V7 patch.
> >>>
> >>> But we have another small test case issue (/usr/bin/logtest 5 17 gives a
> >>> segmentation fault);
> >>> once we resolve this as well, we can conclude that all the basic
> >>> functionality is working.
> >>> Then you can re-publish the V7 patch (if changes occurred in Lennart's
> >>> #2258 V2, please publish that as well)
> >>> so that I can go for code review.
> >>>
> >>> Steps to reproduce the test case issue :
> >>>
> >>> 1) Bring up the old node as Active (without `2258_v7.patch`)
> >>> 2) Bring up the new node as Standby (with `2258_v7.patch`)
> >>> 3) Do `amf-adm si-swap safSi=SC-2N,safApp=OpenSAF`
> >>> 4) Run `/usr/bin/logtest 5 17` on the new Active (because of the si-swap)
> >>>
> >>> Note: both nodes have the new XML attributes populated.
> >>>
> >>>
> ==============================================================
> >>> =====================
> >>>
> >>> gdb /usr/bin/logtest
> >>> (gdb) r 5
> >>>
> >>>      16  PASSED   CCB Object Modify, change root directory. Path
> >>> exist. OK;
> >>> Detaching after fork from child process 13797.
> >>> Set values Fail
> >>> [New Thread 0x7ffff7ff7b00 (LWP 13801)]
> >>> [New Thread 0x7ffff7fc4b00 (LWP 13802)]
> >>>
> >>> Program received signal SIGSEGV, Segmentation fault.
> >>> 0x00005555555688ea in read_and_compare.isra.7 () at
> >>> src/log/apitest/tet_LogOiOps.c:1891
> >>> 1891    src/log/apitest/tet_LogOiOps.c: No such file or directory.
> >>>           in src/log/apitest/tet_LogOiOps.c
> >>> (gdb) bt
> >>> #0  0x00005555555688ea in read_and_compare.isra.7 () at
> >>> src/log/apitest/tet_LogOiOps.c:1891
> >>> #1  0x0000555555568a4b in
> check_logRecordDestinationConfigurationAdd ()
> >>> at src/log/apitest/tet_LogOiOps.c:1941
> >>> #2  0x0000555555571b05 in run_test_case ()
> >>> #3  0x0000555555571feb in test_run ()
> >>> #4  0x000055555555bfad in main () at src/log/apitest/logtest.c:569
> >>> (gdb)
> >>>
> >>>
> ==============================================================
> >>> =====================
> >>>
> >>>
> >>> -AVM
> >>>
> >>> On 2/23/2017 11:44 AM, Vu Minh Nguyen wrote:
> >>>> Hi Mahesh,
> >>>>
> >>>> Maybe it got corrupted in transmission. I zipped it into a tar file.
> >>>> Please try it one more time.
> >>>>
> >>>> Regards, Vu
> >>>>
> >>>>
> >>>>> -----Original Message-----
> >>>>> From: A V Mahesh [mailto:mahesh.va...@oracle.com]
> >>>>> Sent: Thursday, February 23, 2017 12:54 PM
> >>>>> To: Vu Minh Nguyen <vu.m.ngu...@dektech.com.au>;
> >>>>> lennart.l...@ericsson.com; canh.v.tru...@dektech.com.au
> >>>>> Cc: opensaf-devel@lists.sourceforge.net
> >>>>> Subject: Re: [devel] [PATCH 0 of 3] Review Request for log: add
> >>>> alternative
> >>>>> destinations of log records [#2258] V4
> >>>>>
> >>>>> Hi Vu,
> >>>>>
> >>>>> On 2/23/2017 10:20 AM, Vu Minh Nguyen wrote:
> >>>>>> Hi Mahesh,
> >>>>>>
> >>>>>> Can you try with 2258_v7.patch I just sent to you?
> >>>>> I stripped changeset 8610 of today's latest staging (`hg strip
> >>>>> 8610`, which removed "log: implement SaLogFilterSetCallbackT and
> >>>>> version handling [#2146]")
> >>>>> and tried to apply your `2258_v7.patch`; it says `malformed patch at
> >>>>> line 3324`.
> >>>>>
> >>>>> -AVM
> >>>>>> I have pulled the latest code on the OpenSAF 5.1 branch and re-created
> >>>>>> the cluster.
> >>>>>> And it works for the case of old active SC-1 (OpenSAF 5.1) and new
> >>>>>> standby SC-2 (with 2258_v7.patch included).
> >>>>>>
> >>>>>> To apply 2258_v7.patch, please remove the just-pushed ticket "log:
> >>>>>> implement SaLogFilterSetCallbackT and version handling [#2146]";
> >>>>>> I have not rebased the code on that yet.
> >>>>>>
> >>>>>> Regards, Vu
> >>>>>>
> >>>>>>> -----Original Message-----
> >>>>>>> From: A V Mahesh [mailto:mahesh.va...@oracle.com]
> >>>>>>> Sent: Thursday, February 23, 2017 11:45 AM
> >>>>>>> To: Vu Minh Nguyen <vu.m.ngu...@dektech.com.au>;
> >>>>>>> lennart.l...@ericsson.com; canh.v.tru...@dektech.com.au
> >>>>>>> Cc: opensaf-devel@lists.sourceforge.net
> >>>>>>> Subject: Re: [devel] [PATCH 0 of 3] Review Request for log: add
> >>>>>> alternative
> >>>>>>> destinations of log records [#2258] V4
> >>>>>>>
> >>>>>>> Hi Vu/Lennart,
> >>>>>>>
> >>>>>>>
> >>>>>>> Broadly: WITHOUT the #2258 patch, the same code/setup works fine
> >>>>>>> with 2 SC nodes (staging changeset: 8609).
> >>>>>>> As soon as we apply the `2258_v5.patch` V5 patch (that you provided
> >>>>>>> yesterday) on staging (changeset: 8609)
> >>>>>>> on one SC node and try to bring that node up into the cluster
> >>>>>>> (in-service test),
> >>>>>>> we observe the issue of the new node (with the #2258 patch) not
> >>>>>>> joining the cluster.
> >>>>>>>
> >>>>>>>
> >>>
> ==============================================================
> >>>>>>> ====================================================
> >>>>>>> Feb 23 10:01:59 SC-1 osafimmnd[15279]: NO Implementer (applier)
> >>>>>>> connected: 15 (@safAmfService2010f) <127, 2010f>
> >>>>>>> Feb 23 10:01:59 SC-1 osaflogd[15289]: NO
> LOGSV_DATA_GROUPNAME
> >>>>> not
> >>>>>>> found
> >>>>>>> Feb 23 10:01:59 SC-1 osaflogd[15289]: NO LOG root directory is:
> >>>>>>> "/var/log/opensaf/saflog"
> >>>>>>> Feb 23 10:01:59 SC-1 osaflogd[15289]: NO LOG data group is: ""
> >>>>>>> Feb 23 10:01:59 SC-1 osaflogd[15289]: NO LGS_MBCSV_VERSION =
> 7
> >>>>>>> Feb 23 10:01:59 SC-1 osafamfnd[15329]: NO Assigned
> >>>>>>> 'safSi=SC-2N,safApp=OpenSAF' STANDBY to 'safSu=SC-
> >>>>>>> 1,safSg=2N,safApp=OpenSAF'
> >>>>>>> Feb 23 10:01:59 SC-1 opensafd: OpenSAF(5.1.M0 - ) services
> >>> successfully
> >>>>>>> started
> >>>>>>> Feb 23 10:01:59 SC-1 osafamfnd[15329]: NO
> >>>>>>> 'safComp=LOG,safSu=SC-1,safSg=2N,safApp=OpenSAF' faulted due
> to
> >>>>>>> 'avaDown' : Recovery is 'nodeFailfast'
> >>>>>>> Feb 23 10:01:59 SC-1 osafamfnd[15329]: ER
> >>>>>>> safComp=LOG,safSu=SC-1,safSg=2N,safApp=OpenSAF Faulted due
> >>>>>>> to:avaDown
> >>>>>>> Recovery is:nodeFailfast
> >>>>>>> Feb 23 10:01:59 SC-1 osafamfnd[15329]: Rebooting OpenSAF
> NodeId =
> >>>>>>> 131343
> >>>>>>> EE Name = , Reason: Component faulted: recovery is node failfast,
> >>>>>>> OwnNodeId = 131343, SupervisionTime = 60
> >>>>>>> Feb 23 10:01:59 SC-1 opensaf_reboot: Rebooting local node;
> >>> timeout=60
> >>>>>>> Feb 23 10:02:00 SC-1 osafimmnd[15279]: NO Implementer (applier)
> >>>>>>> connected: 16 (@OpenSafImmReplicatorB) <144, 2010f>
> >>>>>>> Feb 23 10:01:59 SC-1 opensaf_reboot: Rebooting local node;
> >>> timeout=60
> >>>
> ==============================================================
> >>>>>>> ====================================================
> >>>>>>>
> >>>>>>> So it is evident that the in-service upgrade part of this code needs
> >>>>>>> to be corrected.
> >>>>>>>
> >>>>>>> Please see my comments as [AVM] and let me know if you need some
> >>>>>>> traces.
> >>>>>>>
> >>>>>>> If you're planning to prepare a new V6 patch, please prepare it on
> >>>>>>> top of today's latest staging.
> >>>>>>>
> >>>>>>> On 2/23/2017 9:33 AM, Vu Minh Nguyen wrote:
> >>>>>>>> Hi Mahesh,
> >>>>>>>>
> >>>>>>>> I have done in-service upgrade/downgrade with following cases:
> >>>>>>>> 1) New Active SC-1 (OpenSAF 5.2 with the attached patch) + old
> >>> standby
> >>>>>>> SC-2
> >>>>>>>> (OpenSAF 5.1)
> >>>>>>>> --> Work fine
> >>>>>>> [AVM] This is not a practical use case of in-service upgrade; we can
> >>>>>>> ignore this test further.
> >>>>>>>> 2) Old Active SC-1 (OpenSAF 5.1) + new standby SC-2 (with or
> >>>>>>>> without
> >>>>>>>> attached patch)
> >>>>>>>> --> SC-2 is restarted & not able to join the cluster.
> >>>>>>> [AVM] This use case/flow is what we get in an in-service upgrade, so
> >>>>>>> we need to address this.
> >>>>>>>> I got following messages in syslog:
> >>>>>>>> Feb 23 09:32:42 SC-2 user.notice opensafd: OpenSAF(5.2.M0 -
> >>>>>>>> 8529:b5addd36e45d:default) services successfully started
> >>>>>>>> Feb 23 09:32:43 SC-2 local0.warn osafntfimcnd[701]: WA
> >>>>>>> ntfimcn_imm_init
> >>>>>>>> saImmOiImplementerSet() returned SA_AIS_ERR_TIMEOUT (5)
> >>>>>>>> Feb 23 09:32:45 SC-2 local0.warn osafntfimcnd[701]: WA
> >>>>>>> ntfimcn_imm_init
> >>>>>>>> saImmOiImplementerSet() returned SA_AIS_ERR_TIMEOUT (5)
> >>>>>>>> Feb 23 09:32:47 SC-2 local0.warn osafntfimcnd[701]: WA
> >>>>>>> ntfimcn_imm_init
> >>>>>>>> saImmOiImplementerSet() returned SA_AIS_ERR_TIMEOUT (5)
> >>>>>>>> Feb 23 09:32:49 SC-2 local0.warn osafntfimcnd[701]: WA
> >>>>>>> ntfimcn_imm_init
> >>>>>>>> saImmOiImplementerSet() returned SA_AIS_ERR_TIMEOUT (5)
> >>>>>>>> Feb 23 09:32:50 SC-2 local0.err osafmsgnd[592]: ER
> >>>>>>> saImmOiImplementerSet
> >>>>>>>> FAILED:5
> >>>>>>>> Feb 23 09:32:50 SC-2 local0.err osafmsgnd[592]: ER
> >>>>>>> saImmOiImplementerSet
> >>>>>>>> FAILED:5
> >>>>>>>> Feb 23 09:32:50 SC-2 local0.notice osafamfnd[496]: NO
> >>>>>>>> 'safSu=SC-2,safSg=NoRed,safApp=OpenSAF' component restart
> >>>>> probation
> >>>>>>> timer
> >>>>>>>> started (timeout: 60000000000 ns)
> >>>>>>>> Feb 23 09:32:50 SC-2 local0.notice osafamfnd[496]: NO Restarting
> a
> >>>>>>> component
> >>>>>>>> of 'safSu=SC-2,safSg=NoRed,safApp=OpenSAF' (comp restart
> count: 1)
> >>>>>>>> Feb 23 09:32:50 SC-2 local0.notice osafamfnd[496]: NO
> >>>>>>>> 'safComp=MQND,safSu=SC-2,safSg=NoRed,safApp=OpenSAF'
> faulted
> >>>>> due
> >>>>>>> to
> >>>>>>>> 'avaDown' : Recovery is 'componentRestart'
> >>>>>>>> Feb 23 09:32:50 SC-2 local0.info osafmsgnd[736]: mkfifo already
> >>>> exists:
> >>>>>>>> /var/lib/opensaf/osafmsgnd.fifo File exists
> >>>>>>>>
> >>>>>>>> And sometimes, on active SC-1 (OpenSAF 5.1), the node is not able
> >>>>>>>> to come up because of the following error:
> >>>>>>>>
> >>>>>>>> Feb 23 11:00:32 SC-1 local0.err osafclmna[406]: MDTM:TIPC
> Dsock
> >>>>> Socket
> >>>>>>>> creation failed in MDTM_INIT err :Address family not supported by
> >>>>>>> protocol
> >>>>>>>> Feb 23 11:00:32 SC-1 local0.err osafclmna[406]: ER
> >>> ncs_agents_startup
> >>>>>>> FAILED
> >>>>>>> [AVM]  No such issues ( with both TCP & TIPC) (staging changeset:
> >>>> 8609
> >>>>>> )
> >>>>>>>> Are you getting a similar problem on your side?
> >>>>>>>> Please note that the problem exists WITH or WITHOUT the #2258
> >>>>>>>> patch.
> >>>>>>> [AVM] No; the problem occurs only if we apply the `2258_v5.patch` V5
> >>>>>>> patch on staging (changeset: 8609)
> >>>>>>> and try to bring that node up into the cluster.
> >>>>>>>
> >>>>>>>
> >>>>>>> -AVM
> >>>>>>>
> >>>>>>>> I have informed the IMM team to have a look; I am not sure whether
> >>>>>>>> it is a problem with the MDS layer or with my environment setup.
> >>>>>>>> In the meantime, please have a look at the updated patch; I will
> >>>>>>>> continue checking the problem and will keep you updated.
> >>>>>>> [AVM] I haven't seen any IMM problems
> >>>>>>>> Regards, Vu
> >>>>>>>>
> >>>>>>>>> -----Original Message-----
> >>>>>>>>> From: A V Mahesh [mailto:mahesh.va...@oracle.com]
> >>>>>>>>> Sent: Wednesday, February 22, 2017 5:36 PM
> >>>>>>>>> To: Vu Minh Nguyen <vu.m.ngu...@dektech.com.au>;
> >>>>>>>>> lennart.l...@ericsson.com; canh.v.tru...@dektech.com.au
> >>>>>>>>> Cc: opensaf-devel@lists.sourceforge.net
> >>>>>>>>> Subject: Re: [devel] [PATCH 0 of 3] Review Request for log: add
> >>>>>>>> alternative
> >>>>>>>>> destinations of log records [#2258] V4
> >>>>>>>>>
> >>>>>>>>> Please see correction New Standby SC-1 ( with patch )
> >>>>>>>>>
> >>>>>>>>> -AVM
> >>>>>>>>>
> >>>>>>>>> On 2/22/2017 4:02 PM, A V Mahesh wrote:
> >>>>>>>>>> Hi Vu,
> >>>>>>>>>>
> >>>>>>>>>> With this new patch, we have another issue:
> >>>>>>>>>>
> >>>>>>>>>> 1) The standby core dump by `/usr/lib64/opensaf/osaflogd' issue
> >>>>>>>>>> got resolved.
> >>>>>>>>>> 2) In-service upgrade is not working. I have an old Active SC-2
> >>>>>>>>>> (without the patch) and a new Standby SC-1 (with the patch);
> >>>>>>>>>>
> >>>>>>>>>> the new Standby SC-1 is not joining the cluster (in-service
> >>>>>>>>>> upgrade failed).
> >>>>>>>>>>
> >>>>>>>>>> New Standby SC-1
> >>>>>>>>>>
> >>>>>>>>>>
> >>>
> ==============================================================
> >>>>>>>>>
> ======================================================
> >>>>>>>>>> Feb 22 15:53:05 SC-1 osafamfnd[15752]: NO
> >>>>>>>>>> 'safSu=SC-1,safSg=NoRed,safApp=OpenSAF' Presence State
> >>>>>>>>> INSTANTIATING
> >>>>>>>>>> => INSTANTIATED
> >>>>>>>>>> Feb 22 15:53:05 SC-1 osafamfnd[15752]: NO Assigning
> >>>>>>>>>> 'safSi=NoRed4,safApp=OpenSAF' ACTIVE to
> >>>>>>>>>> 'safSu=SC-1,safSg=NoRed,safApp=OpenSAF'
> >>>>>>>>>> Feb 22 15:53:05 SC-1 osafamfnd[15752]: NO Assigned
> >>>>>>>>>> 'safSi=NoRed4,safApp=OpenSAF' ACTIVE to
> >>>>>>>>>> 'safSu=SC-1,safSg=NoRed,safApp=OpenSAF'
> >>>>>>>>>> Feb 22 15:53:05 SC-1 osafsmfd[15889]: Started
> >>>>>>>>>> Feb 22 15:53:05 SC-1 osafamfnd[15752]: NO
> >>>>>>>>>> 'safSu=SC-1,safSg=2N,safApp=OpenSAF' Presence State
> >>>>> INSTANTIATING
> >>>>>>> =>
> >>>>>>>>>> INSTANTIATED
> >>>>>>>>>> Feb 22 15:53:05 SC-1 osafamfnd[15752]: NO Assigning
> >>>>>>>>>> 'safSi=SC-2N,safApp=OpenSAF' STANDBY to
> >>>>>>>>>> 'safSu=SC-1,safSg=2N,safApp=OpenSAF'
> >>>>>>>>>> Feb 22 15:53:05 SC-1 osafrded[15672]: NO RDE role set to
> >>> STANDBY
> >>>>>>>>>> Feb 22 15:53:05 SC-1 osafrded[15672]: NO Peer up on node
> >>> 0x2020f
> >>>>>>>>>> Feb 22 15:53:05 SC-1 osafrded[15672]: NO Got peer info
> request
> >>> from
> >>>>>>>>>> node 0x2020f with role ACTIVE
> >>>>>>>>>> Feb 22 15:53:05 SC-1 osafrded[15672]: NO Got peer info
> response
> >>>>> from
> >>>>>>>>>> node 0x2020f with role ACTIVE
> >>>>>>>>>> Feb 22 15:53:05 SC-1 osafimmd[15691]: NO MDS event from
> svc_id
> >>> 24
> >>>>>>>>>> (change:5, dest:13)
> >>>>>>>>>> Feb 22 15:53:05 SC-1 osafimmd[15691]: NO MDS event from
> svc_id
> >>> 24
> >>>>>>>>>> (change:3, dest:13)
> >>>>>>>>>> Feb 22 15:53:05 SC-1 osafimmd[15691]: NO MDS event from
> svc_id
> >>> 24
> >>>>>>>>>> (change:5, dest:13)
> >>>>>>>>>> Feb 22 15:53:05 SC-1 osafimmd[15691]: NO MDS event from
> svc_id
> >>> 25
> >>>>>>>>>> (change:3, dest:567412424453430)
> >>>>>>>>>> Feb 22 15:53:05 SC-1 osafimmd[15691]: NO MDS event from
> svc_id
> >>> 25
> >>>>>>>>>> (change:3, dest:565213401202663)
> >>>>>>>>>> Feb 22 15:53:05 SC-1 osafimmd[15691]: NO MDS event from
> svc_id
> >>> 25
> >>>>>>>>>> (change:3, dest:566312912825221)
> >>>>>>>>>> Feb 22 15:53:05 SC-1 osafimmd[15691]: NO MDS event from
> svc_id
> >>> 25
> >>>>>>>>>> (change:3, dest:564113889574230)
> >>>>>>>>>> Feb 22 15:53:05 SC-1 osafimmnd[15702]: NO Implementer
> (applier)
> >>>>>>>>>> connected: 17 (@safAmfService2010f) <127, 2010f>
> >>>>>>>>>> Feb 22 15:53:05 SC-1 osaflogd[15712]: NO
> >>>>> LOGSV_DATA_GROUPNAME
> >>>>>>>>> not found
> >>>>>>>>>> Feb 22 15:53:05 SC-1 osaflogd[15712]: NO LOG root directory
> is:
> >>>>>>>>>> "/var/log/opensaf/saflog"
> >>>>>>>>>> Feb 22 15:53:05 SC-1 osaflogd[15712]: NO LOG data group is: ""
> >>>>>>>>>> Feb 22 15:53:05 SC-1 osaflogd[15712]: NO
> LGS_MBCSV_VERSION =
> >>> 7
> >>>>>>>>>> Feb 22 15:53:05 SC-1 osafamfnd[15752]: NO Assigned
> >>>>>>>>>> 'safSi=SC-2N,safApp=OpenSAF' STANDBY to
> >>>>>>>>>> 'safSu=SC-1,safSg=2N,safApp=OpenSAF'
> >>>>>>>>>> Feb 22 15:53:05 SC-1 opensafd: OpenSAF(5.1.M0 - ) services
> >>>>>>>>>> successfully started
> >>>>>>>>>> Feb 22 15:53:05 SC-1 osafamfnd[15752]: NO
> >>>>>>>>>> 'safComp=LOG,safSu=SC-1,safSg=2N,safApp=OpenSAF' faulted
> due
> >>> to
> >>>>>>>>>> 'avaDown' : Recovery is 'nodeFailfast'
> >>>>>>>>>> Feb 22 15:53:05 SC-1 osafamfnd[15752]: ER
> >>>>>>>>>> safComp=LOG,safSu=SC-1,safSg=2N,safApp=OpenSAF Faulted
> due
> >>>>>>>>> to:avaDown
> >>>>>>>>>> Recovery is:nodeFailfast
> >>>>>>>>>> Feb 22 15:53:05 SC-1 osafamfnd[15752]: Rebooting OpenSAF
> >>> NodeId
> >>>>> =
> >>>>>>>>>> 131343 EE Name = , Reason: Component faulted: recovery is
> node
> >>>>>>>>>> failfast, OwnNodeId = 131343, SupervisionTime = 60
> >>>>>>>>>> Feb 22 15:53:05 SC-1 opensaf_reboot: Rebooting local node;
> >>>>>>> timeout=60
> >>>>>>>>>> Feb 22 15:53:43 SC-1 syslog-ng[1171]: syslog-ng starting up;
> >>>>>>>>>> version='2.0.9'
> >>>>>>>>>>
> >>>>>>>>>>
> >>>
> ==============================================================
> >>>>>>>>>
> ======================================================
> >>>>>>>>>> Old - Active - SC-2
> >>>>>>>>>>
> >>>>>>>>>>
> >>>
> ==============================================================
> >>>>>>>>>
> ======================================================
> >>>>>>>>>> Feb 22 15:53:02 SC-2 osafimmnd[16359]: NO NODE STATE->
> >>>>>>>>>> IMM_NODE_R_AVAILABLE
> >>>>>>>>>> Feb 22 15:53:02 SC-2 osafimmloadd: NO Sync starting
> >>>>>>>>>> Feb 22 15:53:02 SC-2 osafimmloadd: IN Synced 390 objects in
> >>>>>>>>>> total
> >>>>>>>>>> Feb 22 15:53:02 SC-2 osafimmnd[16359]: NO NODE STATE->
> >>>>>>>>>> IMM_NODE_FULLY_AVAILABLE 18511
> >>>>>>>>>> Feb 22 15:53:02 SC-2 osafimmnd[16359]: NO Epoch set to 3 in
> >>>>>>> ImmModel
> >>>>>>>>>> Feb 22 15:53:02 SC-2 osafimmd[16346]: NO ACT: New Epoch
> for
> >>>>>>> IMMND
> >>>>>>>>>> process at node 2020f old epoch: 2  new epoch:3
> >>>>>>>>>> Feb 22 15:53:02 SC-2 osafimmd[16346]: NO ACT: New Epoch
> for
> >>>>>>> IMMND
> >>>>>>>>>> process at node 2040f old epoch: 2  new epoch:3
> >>>>>>>>>> Feb 22 15:53:02 SC-2 osafimmd[16346]: NO ACT: New Epoch
> for
> >>>>>>> IMMND
> >>>>>>>>>> process at node 2030f old epoch: 2  new epoch:3
> >>>>>>>>>> Feb 22 15:53:02 SC-2 osafimmloadd: NO Sync ending normally
> >>>>>>>>>> Feb 22 15:53:02 SC-2 osafimmd[16346]: NO ACT: New Epoch
> for
> >>>>>>> IMMND
> >>>>>>>>>> process at node 2010f old epoch: 0  new epoch:3
> >>>>>>>>>> Feb 22 15:53:02 SC-2 osafimmnd[16359]: NO SERVER STATE:
> >>>>>>>>>> IMM_SERVER_SYNC_SERVER --> IMM_SERVER_READY
> >>>>>>>>>> Feb 22 15:53:03 SC-2 osafamfd[16408]: NO Received node_up
> from
> >>>>>>> 2010f:
> >>>>>>>>>> msg_id 1
> >>>>>>>>>> Feb 22 15:53:03 SC-2 osafamfd[16408]: NO Node 'SC-1' joined
> the
> >>>>>>> cluster
> >>>>>>>>>> Feb 22 15:53:03 SC-2 osafimmnd[16359]: NO Implementer
> >>> connected:
> >>>>>>> 16
> >>>>>>>>>> (MsgQueueService131343) <0, 2010f>
> >>>>>>>>>> Feb 22 15:53:03 SC-2 osafrded[16327]: NO Peer up on node
> >>> 0x2010f
> >>>>>>>>>> Feb 22 15:53:03 SC-2 osafrded[16327]: NO Got peer info
> request
> >>> from
> >>>>>>>>>> node 0x2010f with role STANDBY
> >>>>>>>>>> Feb 22 15:53:03 SC-2 osafrded[16327]: NO Got peer info
> response
> >>>>> from
> >>>>>>>>>> node 0x2010f with role STANDBY
> >>>>>>>>>> Feb 22 15:53:03 SC-2 osafimmd[16346]: NO MDS event from
> svc_id
> >>> 24
> >>>>>>>>>> (change:5, dest:13)
> >>>>>>>>>> Feb 22 15:53:03 SC-2 osafimmnd[16359]: NO Implementer
> (applier)
> >>>>>>>>>> connected: 17 (@safAmfService2010f) <0, 2010f>
> >>>>>>>>>> Feb 22 15:53:03 SC-2 osafamfd[16408]: NO Cluster startup is
> done
> >>>>>>>>>> Feb 22 15:53:04 SC-2 osafimmnd[16359]: NO Implementer
> (applier)
> >>>>>>>>>> connected: 18 (@OpenSafImmReplicatorB) <0, 2010f>
> >>>>>>>>>> Feb 22 15:53:09 SC-2 osafdtmd[16304]: NO Lost contact with
> >>>>>>>>>> 'SC-1'
> >>>>>>>>>> Feb 22 15:53:09 SC-2 osaffmd[16336]: NO Node Down event
> for
> >>> node
> >>>>> id
> >>>>>>>>>> 2010f:
> >>>>>>>>>> Feb 22 15:53:09 SC-2 osafimmd[16346]: NO MDS event from
> svc_id
> >>> 24
> >>>>>>>>>> (change:6, dest:13)
> >>>>>>>>>> Feb 22 15:53:09 SC-2 osafimmd[16346]: NO MDS event from
> svc_id
> >>> 25
> >>>>>>>>>> (change:4, dest:564113889574230)
> >>>>>>>>>> Feb 22 15:53:09 SC-2 osaffmd[16336]: NO Current role: ACTIVE
> >>>>>>>>>> Feb 22 15:53:09 SC-2 osaffmd[16336]: Rebooting OpenSAF
> NodeId
> >>> =
> >>>>>>>>> 131343
> >>>>>>>>>> EE Name = , Reason: Received Node Down for peer controller,
> >>>>>>> OwnNodeId
> >>>>>>>>>> = 131599, SupervisionTime = 60
> >>>>>>>>>> Feb 22 15:53:09 SC-2 osafrded[16327]: NO Peer down on node
> >>>>> 0x2010f
> >>>>>>>>>> Feb 22 15:53:09 SC-2 osafamfd[16408]: NO Node 'SC-1' left the
> >>>>> cluster
> >>>>>>>>>> Feb 22 15:53:09 SC-2 osaflogd[16372]: NO Failed (2) to send of
> >>> WRITE
> >>>>>>>>>> ack to: 2010f00003d6a
> >>>>>>>>>> Feb 22 15:53:09 SC-2 osaflogd[16372]: NO Failed (2) to send of
> >>> WRITE
> >>>>>>>>>> ack to: 2010f00003d6a
> >>>>>>>>>> Feb 22 15:53:09 SC-2 osaflogd[16372]: NO Failed (2) to send of
> >>> WRITE
> >>>>>>>>>> ack to: 2010f00003d74
> >>>>>>>>>> Feb 22 15:53:09 SC-2 osafclmd[16398]: NO Node 131343 went
> >>> down.
> >>>>>>> Not
> >>>>>>>>>> sending track callback for agents on that node
> >>>>>>>>>> Feb 22 15:53:09 SC-2 osafclmd[16398]: NO Node 131343 went
> >>> down.
> >>>>>>> Not
> >>>>>>>>>> sending track callback for agents on that node
> >>>>>>>>>> Feb 22 15:53:09 SC-2 osafclmd[16398]: NO Node 131343 went
> >>> down.
> >>>>>>> Not
> >>>>>>>>>> sending track callback for agents on that node
> >>>>>>>>>> Feb 22 15:53:09 SC-2 osafclmd[16398]: NO Node 131343 went
> >>> down.
> >>>>>>> Not
> >>>>>>>>>> sending track callback for agents on that node
> >>>>>>>>>> Feb 22 15:53:09 SC-2 osafclmd[16398]: NO Node 131343 went
> >>> down.
> >>>>>>> Not
> >>>>>>>>>> sending track callback for agents on that node
> >>>>>>>>>> Feb 22 15:53:09 SC-2 osafclmd[16398]: NO Node 131343 went
> >>> down.
> >>>>>>> Not
> >>>>>>>>>> sending track callback for agents on that node
> >>>>>>>>>> Feb 22 15:53:09 SC-2 osafclmd[16398]: NO Node 131343 went
> >>> down.
> >>>>>>> Not
> >>>>>>>>>> sending track callback for agents on that node
> >>>>>>>>>> Feb 22 15:53:09 SC-2 osafimmd[16346]: WA IMMD lost contact
> >>> with
> >>>>>>> peer
> >>>>>>>>>> IMMD (NCSMDS_RED_DOWN)
> >>>>>>>>>> Feb 22 15:53:09 SC-2 osafimmnd[16359]: NO Global discard
> node
> >>>>>>> received
> >>>>>>>>>> for nodeId:2010f pid:15702
> >>>>>>>>>> Feb 22 15:53:09 SC-2 osafimmnd[16359]: NO Implementer
> >>>>> disconnected
> >>>>>>> 16
> >>>>>>>>>> <0, 2010f(down)> (MsgQueueService131343)
> >>>>>>>>>> Feb 22 15:53:09 SC-2 osafimmnd[16359]: NO Implementer
> >>>>> disconnected
> >>>>>>> 17
> >>>>>>>>>> <0, 2010f(down)> (@safAmfService2010f)
> >>>>>>>>>> Feb 22 15:53:09 SC-2 osafimmnd[16359]: NO Implementer
> >>>>> disconnected
> >>>>>>> 18
> >>>>>>>>>> <0, 2010f(down)> (@OpenSafImmReplicatorB)
> >>>>>>>>>> Feb 22 15:53:09 SC-2 opensaf_reboot: Rebooting remote node
> in
> >>> the
> >>>>>>>>>> absence of PLM is outside the scope of OpenSAF
> >>>>>>>>>>
> >>>>>>>>>>
> >>>
> ==============================================================
> >>>>>>>>>
> ======================================================
> >>>>>>>>>> -AVM
> >>>>>>>>>>
> >>>>>>>>>> On 2/22/2017 3:13 PM, Vu Minh Nguyen wrote:
> >>>>>>>>>>> Hi Mahesh,
> >>>>>>>>>>>
> >>>>>>>>>>> I put all required patches into one. Try to use it and see if
> >>>>>>>>>>> you still have that problem or not.
> >>>>>>>>>>>
> >>>>>>>>>>> Regards, Vu
> >>>>>>>>>>>
> >>>>>>>>>>>> -----Original Message-----
> >>>>>>>>>>>> From: A V Mahesh [mailto:mahesh.va...@oracle.com]
> >>>>>>>>>>>> Sent: Wednesday, February 22, 2017 3:35 PM
> >>>>>>>>>>>> To: Vu Minh Nguyen <vu.m.ngu...@dektech.com.au>;
> >>>>>>>>>>>> lennart.l...@ericsson.com; canh.v.tru...@dektech.com.au
> >>>>>>>>>>>> Cc: opensaf-devel@lists.sourceforge.net
> >>>>>>>>>>>> Subject: Re: [devel] [PATCH 0 of 3] Review Request for log:
> >>>>>>>>>>>> add
> >>>>>>>>>>> alternative
> >>>>>>>>>>>> destinations of log records [#2258] V4
> >>>>>>>>>>>>
> >>>>>>>>>>>> Hi Vu,
> >>>>>>>>>>>>
> >>>>>>>>>>>> I used the new #3 and #4 patches. Can you please re-send all the
> >>>>>>>>>>>> final patches in one go,
> >>>>>>>>>>>> which I need to apply on today's staging (if possible, publish
> >>>>>>>>>>>> them with a new version)?
> >>>>>>>>>>>>
> >>>>>>>>>>>> -AVM
> >>>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>> On 2/22/2017 1:52 PM, Vu Minh Nguyen wrote:
> >>>>>>>>>>>>> Hi Mahesh,
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>> Core was generated by `/usr/lib64/opensaf/osaflogd'.
> >>>>>>>>>>>>>> Program terminated with signal 11, Segmentation fault.
> >>>>>>>>>>>>>> #0  ckpt_proc_cfg_stream(lgs_cb*, void*) () at
> >>>>>>>>>>>>>> src/log/logd/lgs_mbcsv.cc:2195
> >>>>>>>>>>>>>> 2195    src/log/logd/lgs_mbcsv.cc: No such file or
> >>>>>>>>>>>>>> directory.
> >>>>>>>>>>>>>>                in src/log/logd/lgs_mbcsv.cc
> >>>>>>>>>>>>> The backtrace still points to the old position (lgs_mbcsv:2195).
> >>>>>>>>>>>>> I guess the osaflogd binary has not been updated with the fixed
> >>>>>>>>>>>>> patch.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Regards, Vu
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>> -----Original Message-----
> >>>>>>>>>>>>>> From: A V Mahesh [mailto:mahesh.va...@oracle.com]
> >>>>>>>>>>>>>> Sent: Wednesday, February 22, 2017 3:18 PM
> >>>>>>>>>>>>>> To: Vu Minh Nguyen <vu.m.ngu...@dektech.com.au>;
> >>>>>>>>>>>>>> lennart.l...@ericsson.com;
> canh.v.tru...@dektech.com.au
> >>>>>>>>>>>>>> Cc: opensaf-devel@lists.sourceforge.net
> >>>>>>>>>>>>>> Subject: Re: [devel] [PATCH 0 of 3] Review Request for log:
> >>> add
> >>>>>>>>>>>>> alternative
> >>>>>>>>>>>>>> destinations of log records [#2258] V4
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Hi Vu,
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> The SC-2 standby osaflogd core dump still occurs (not
> >>>>>>>>>>>>>> resolved); the new patch only resolved the application
> >>>>>>>>>>>>>> (/usr/bin/logtest) segmentation fault on SC-1 (active).
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> ==============================================================
> >>>>>>>>>>>>>> ==============================================================
> >>>>>>>>>>>>>> # gdb /usr/lib64/opensaf/osaflogd core_1487751055.osaflogd.4594
> >>>>>>>>>>>>>> GNU gdb (GDB) SUSE (7.3-0.6.1)
> >>>>>>>>>>>>>> Copyright (C) 2011 Free Software Foundation, Inc.
> >>>>>>>>>>>>>> .......
> >>>>>>>>>>>>>> Core was generated by `/usr/lib64/opensaf/osaflogd'.
> >>>>>>>>>>>>>> Program terminated with signal 11, Segmentation fault.
> >>>>>>>>>>>>>> #0  ckpt_proc_cfg_stream(lgs_cb*, void*) () at src/log/logd/lgs_mbcsv.cc:2195
> >>>>>>>>>>>>>> 2195    src/log/logd/lgs_mbcsv.cc: No such file or directory.
> >>>>>>>>>>>>>>                in src/log/logd/lgs_mbcsv.cc
> >>>>>>>>>>>>>> (gdb) bt
> >>>>>>>>>>>>>> #0  ckpt_proc_cfg_stream(lgs_cb*, void*) () at src/log/logd/lgs_mbcsv.cc:2195
> >>>>>>>>>>>>>> #1  0x00007f97b026f960 in ckpt_decode_log_struct(lgs_cb*, ncs_mbcsv_cb_arg*, void*, void*, unsigned int (*)(edu_hdl_tag*, edu_tkn_tag*, void*, unsigned int*, edu_buf_env_tag*, EDP_OP_TYPE, EDU_ERR*)) () at src/log/logd/lgs_mbcsv.cc:950
> >>>>>>>>>>>>>> #2  0x00007f97b02710dc in ckpt_decode_async_update(lgs_cb*, ncs_mbcsv_cb_arg*) () at src/log/logd/lgs_mbcsv.cc:1086
> >>>>>>>>>>>>>> #3  0x00007f97b0273941 in mbcsv_callback(ncs_mbcsv_cb_arg*) () at src/log/logd/lgs_mbcsv.cc:880
> >>>>>>>>>>>>>> #4  0x00007f97af372596 in ncs_mbscv_rcv_decode () from /usr/lib/../lib64/libopensaf_core.so.0
> >>>>>>>>>>>>>> #5  0x00007f97af372766 in ncs_mbcsv_rcv_async_update () from /usr/lib/../lib64/libopensaf_core.so.0
> >>>>>>>>>>>>>> #6  0x00007f97af379370 in mbcsv_process_events () from /usr/lib/../lib64/libopensaf_core.so.0
> >>>>>>>>>>>>>> #7  0x00007f97af3794db in mbcsv_hdl_dispatch_all () from /usr/lib/../lib64/libopensaf_core.so.0
> >>>>>>>>>>>>>> #8  0x00007f97af373ce2 in mbcsv_process_dispatch_request () at src/mbc/mbcsv_api.c:423
> >>>>>>>>>>>>>> #9  0x00007f97b027096e in lgs_mbcsv_dispatch(unsigned int) () at src/log/logd/lgs_mbcsv.cc:327
> >>>>>>>>>>>>>> #10 0x00007f97b024d9f2 in main () at src/log/logd/lgs_main.cc:583
> >>>>>>>>>>>>>> (gdb) bt full
> >>>>>>>>>>>>>> #0  ckpt_proc_cfg_stream(lgs_cb*, void*) () at src/log/logd/lgs_mbcsv.cc:2195
> >>>>>>>>>>>>>>         ckpt_data_handler = {0x7f97b0270300 <ckpt_proc_initialize_client(lgs_cb*, void*)>, 0x7f97b02701c0 <ckpt_proc_finalize_client(lgs_cb*, void*)>, 0x7f97b0270060 <ckpt_proc_agent_down(lgs_cb*, void*)>, 0x7f97b02712f0 <ckpt_proc_log_write(lgs_cb*, void*)>, 0x7f97b0271ab0 <ckpt_proc_open_stream(lgs_cb*, void*)>, 0x7f97b026fe80 <ckpt_proc_close_stream(lgs_cb*, void*)>, 0x7f97b0272380 <ckpt_proc_cfg_stream(lgs_cb*, void*)>, 0x7f97b0274800 <ckpt_proc_lgs_cfg_v2(lgs_cb*, void*)>, 0x7f97b0274e10 <ckpt_proc_lgs_cfg_v3(lgs_cb*, void*)>, 0x7f97b02754f0 <ckpt_proc_lgs_cfg_v5(lgs_cb*, void*)>}
> >>>>>>>>>>>>>> #1  0x00007f97b026f960 in ckpt_decode_log_struct(lgs_cb*, ncs_mbcsv_cb_arg*, void*, void*, unsigned int (*)(edu_hdl_tag*, edu_tkn_tag*, void*, unsigned int*, edu_buf_env_tag*, EDP_OP_TYPE, EDU_ERR*)) () at src/log/logd/lgs_mbcsv.cc:950
> >>>>>>>>>>>>>>         ckpt_data_handler = {0x7f97b0270300 <ckpt_proc_initialize_client(lgs_cb*, void*)>, 0x7f97b02701c0 <ckpt_proc_finalize_client(lgs_cb*, void*)>, 0x7f97b0270060 <ckpt_proc_agent_down(lgs_cb*, void*)>, 0x7f97b02712f0 <ckpt_proc_log_write(lgs_cb*, void*)>, 0x7f97b0271ab0 <ckpt_proc_open_stream(lgs_cb*, void*)>, 0x7f97b026fe80 <ckpt_proc_close_stream(lgs_cb*, void*)>, 0x7f97b0272380 <ckpt_proc_cfg_stream(lgs_cb*, void*)>, 0x7f97b0274800 <ckpt_proc_lgs_cfg_v2(lgs_cb*, void*)>, 0x7f97b0274e10 <ckpt_proc_lgs_cfg_v3(lgs_cb*, void*)>, 0x7f97b02754f0 <ckpt_proc_lgs_cfg_v5(lgs_cb*, void*)>}
> >>>>>>>>>>>>>> #2  0x00007f97b02710dc in ckpt_decode_async_update(lgs_cb*, ncs_mbcsv_cb_arg*) () at src/log/logd/lgs_mbcsv.cc:1086
> >>>>>>>>>>>>>>         ckpt_data_handler = {0x7f97b0270300 <ckpt_proc_initialize_client(lgs_cb*, void*)>, 0x7f97b02701c0 <ckpt_proc_finalize_client(lgs_cb*, void*)>, 0x7f97b0270060 <ckpt_proc_agent_down(lgs_cb*, void*)>, 0x7f97b02712f0 <ckpt_proc_log_write(lgs_cb*, void*)>, 0x7f97b0271ab0 <ckpt_proc_open_stream(lgs_cb*, void*)>, 0x7f97b026fe80 <ckpt_proc_close_stream(lgs_cb*, void*)>, 0x7f97b0272380 <ckpt_proc_cfg_stream(lgs_cb*, void*)>, 0x7f97b0274800 <ckpt_proc_lgs_cfg_v2(lgs_cb*, void*)>, 0x7f97b0274e10 <ckpt_proc_lgs_cfg_v3(lgs_cb*, void*)>, 0x7f97b02754f0 <ckpt_proc_lgs_cfg_v5(lgs_cb*, void*)>}
> >>>>>>>>>>>>>> #3  0x00007f97b0273941 in mbcsv_callback(ncs_mbcsv_cb_arg*) () at src/log/logd/lgs_mbcsv.cc:880
> >>>>>>>>>>>>>>         ckpt_data_handler = {0x7f97b0270300 <ckpt_proc_initialize_client(lgs_cb*, void*)>, 0x7f97b02701c0 <ckpt_proc_finalize_client(lgs_cb*, void*)>, 0x7f97b0270060 <ckpt_proc_agent_down(lgs_cb*, void*)>, 0x7f97b02712f0 <ckpt_proc_log_write(lgs_cb*, void*)>, 0x7f97b0271ab0 <ckpt_proc_open_stream(lgs_cb*, void*)>, 0x7f97b026fe80 <ckpt_proc_close_stream(lgs_cb*, void*)>, 0x7f97b0272380 <ckpt_proc_cfg_stream(lgs_cb*, void*)>, 0x7f97b0274800 <ckpt_proc_lgs_cfg_v2(lgs_cb*, void*)>, 0x7f97b0274e10 <ckpt_proc_lgs_cfg_v3(lgs_cb*, void*)>, 0x7f97b02754f0 <ckpt_proc_lgs_cfg_v5(lgs_cb*, void*)>}
> >>>>>>>>>>>>>> #4  0x00007f97af372596 in ncs_mbscv_rcv_decode () from /usr/lib/../lib64/libopensaf_core.so.0
> >>>>>>>>>>>>>>         mbcsv_init_process_req_func = {0x7f97af373630 <mbcsv_process_initialize_request>, 0x7f97af374f10 <mbcsv_process_get_sel_obj_request>, 0x7f97af373be0 <mbcsv_process_dispatch_request>, 0x7f97af373af0 <mbcsv_process_finalize_request>, 0x7f97af373cf0 <mbcsv_process_open_request>, 0x7f97af374050 <mbcsv_process_close_request>, 0x7f97af3741e0 <mbcsv_process_chg_role_request>, 0x7f97af3744c0 <mbcsv_process_snd_ckpt_request>, 0x7f97af3747d0 <mbcsv_process_snd_ntfy_request>, 0x7f97af374970 <mbcsv_process_snd_data_req>, 0x7f97af373930 <mbcsv_process_get_request>, 0x7f97af374bd0 <mbcsv_process_set_request>}
> >>>>>>>>>>>>>> #5  0x00007f97af372766 in ncs_mbcsv_rcv_async_update () from /usr/lib/../lib64/libopensaf_core.so.0
> >>>>>>>>>>>>>>         mbcsv_init_process_req_func = {0x7f97af373630 <mbcsv_process_initialize_request>, 0x7f97af374f10 <mbcsv_process_get_sel_obj_request>, 0x7f97af373be0 <mbcsv_process_dispatch_request>, 0x7f97af373af0 <mbcsv_process_finalize_request>, 0x7f97af373cf0 <mbcsv_process_open_request>, 0x7f97af374050 <mbcsv_process_close_request>, 0x7f97af3741e0 <mbcsv_process_chg_role_request>, 0x7f97af3744c0 <mbcsv_process_snd_ckpt_request>, 0x7f97af3747d0 <mbcsv_process_snd_ntfy_request>, 0x7f97af374970 <mbcsv_process_snd_data_req>, 0x7f97af373930 <mbcsv_process_get_request>, 0x7f97af374bd0 <mbcsv_process_set_request>}
> >>>>>>>>>>>>>> #6  0x00007f97af379370 in mbcsv_process_events () from /usr/lib/../lib64/libopensaf_core.so.0
> >>>>>>>>>>>>>> No symbol table info available.
> >>>>>>>>>>>>>> #7  0x00007f97af3794db in mbcsv_hdl_dispatch_all () from /usr/lib/../lib64/libopensaf_core.so.0
> >>>>>>>>>>>>>> No symbol table info available.
> >>>>>>>>>>>>>> #8  0x00007f97af373ce2 in mbcsv_process_dispatch_request () at src/mbc/mbcsv_api.c:423
> >>>>>>>>>>>>>>         mbcsv_init_process_req_func = {0x7f97af373630 <mbcsv_process_initialize_request>, 0x7f97af374f10 <mbcsv_process_get_sel_obj_request>, 0x7f97af373be0 <mbcsv_process_dispatch_request>, 0x7f97af373af0 <mbcsv_process_finalize_request>, 0x7f97af373cf0 <mbcsv_process_open_request>, 0x7f97af374050 <mbcsv_process_close_request>, 0x7f97af3741e0 <mbcsv_process_chg_role_request>, 0x7f97af3744c0 <mbcsv_process_snd_ckpt_request>, 0x7f97af3747d0 <mbcsv_process_snd_ntfy_request>, 0x7f97af374970 <mbcsv_process_snd_data_req>, 0x7f97af373930 <mbcsv_process_get_request>, 0x7f97af374bd0 <mbcsv_process_set_request>}
> >>>>>>>>>>>>>> #9  0x00007f97b027096e in lgs_mbcsv_dispatch(unsigned int) () at src/log/logd/lgs_mbcsv.cc:327
> >>>>>>>>>>>>>>         ckpt_data_handler = {0x7f97b0270300 <ckpt_proc_initialize_client(lgs_cb*, void*)>, 0x7f97b02701c0 <ckpt_proc_finalize_client(lgs_cb*, void*)>, 0x7f97b0270060 <ckpt_proc_agent_down(lgs_cb*, void*)>, 0x7f97b02712f0 <ckpt_proc_log_write(lgs_cb*, void*)>, 0x7f97b0271ab0 <ckpt_proc_open_stream(lgs_cb*, void*)>, 0x7f97b026fe80 <ckpt_proc_close_stream(lgs_cb*, void*)>, 0x7f97b0272380 <ckpt_proc_cfg_stream(lgs_cb*, void*)>, 0x7f97b0274800 <ckpt_proc_lgs_cfg_v2(lgs_cb*, void*)>, 0x7f97b0274e10 <ckpt_proc_lgs_cfg_v3(lgs_cb*, void*)>, 0x7f97b02754f0 <ckpt_proc_lgs_cfg_v5(lgs_cb*, void*)>}
> >>>>>>>>>>>>>> ---Type <return> to continue, or q <return> to quit---
> >>>>>>>>>>>>>> #10 0x00007f97b024d9f2 in main () at src/log/logd/lgs_main.cc:583
> >>>>>>>>>>>>>>         usr1_sel_obj = {raise_obj = -1, rmv_obj = -1}
> >>>>>>>>>>>>>>         _lgs_cb = {mds_hdl = 65547, mds_role = V_DEST_RL_STANDBY, vaddr = 11, log_version = {releaseCode = 65 'A', majorVersion = 2 '\002', minorVersion = 2 '\002'}, client_tree = {root_node = {bit = -1, left = 0x7f97b04cf1b0, right = 0x7f97b04a2418, key_info = 0x7f97b04b7bd0 ""}, params = {key_size = 4}, n_nodes = 8}, comp_name = {_opaque = {46, 24947, 17254, 28015, 15728, 20300, 11335, 24947, 21350, 15733, 17235, 12845, 29484, 26209, 26451, 12861, 11342, 24947, 16742, 28784, 20285, 25968, 21358, 17985, 0 <repeats 105 times>}}, amf_hdl = 4288675841, amfSelectionObject = 15, amf_invocation_id = 0, is_quiesced_set = false, immOiHandle = 554050912783, immSelectionObject = 21, clmSelectionObject = 17, clm_hdl = 4279238657, ha_state = SA_AMF_HA_STANDBY, last_client_id = 208, async_upd_cnt = 743, ckpt_state = COLD_SYNC_IDLE, mbcsv_hdl = 4293918753, mbcsv_sel_obj = 23, mbcsv_ckpt_hdl = 4292870177, mbcsv_peer_version = 7, edu_hdl = {is_inited = true, tree = {root_node = {bit = -1, left = 0x7f97b04cf2e0, right = 0x7f97b04a25b8, key_info = 0x7f97b04b7d40 ""}, params = {key_size = 8}, n_nodes = 12}, to_version = 1}, fully_initialized = true, lga_down_list_head = 0x0, lga_down_list_tail = 0x0, clm_init_sel_obj = {raise_obj = -1, rmv_obj = -1}, nid_started = true, scAbsenceAllowed = 900, lgs_recovery_state = LGS_NORMAL}
> >>>>>>>>>>>>>>         nfds = 7
> >>>>>>>>>>>>>>         fds = {{fd = 19, events = 1, revents = 0}, {fd = 15, events = 1, revents = 0}, {fd = 23, events = 1, revents = 1}, {fd = 13, events = 1, revents = 0}, {fd = -1, events = 1, revents = 0}, {fd = 17, events = 1, revents = 0}, {fd = 21, events = 1, revents = 0}}
> >>>>>>>>>>>>>>         mbox_msgs = {0, 0, 0, 0, 0}
> >>>>>>>>>>>>>>         lgs_cb = 0x7f97b04a2400
> >>>>>>>>>>>>>>         mbox_low = {0, 0, 0, 0, 0}
> >>>>>>>>>>>>>>         lgs_mbox_init_mutex = {__data = {__lock = 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' <repeats 39 times>, __align = 0}
> >>>>>>>>>>>>>>         lgs_mbx = 4291821569
> >>>>>>>>>>>>>>         mbox_high = {0, 0, 0, 0, 0}
> >>>>>>>>>>>>>>         mbox_full = {false, false, false, false, false}
> >>>>>>>>>>>>>> (gdb)
> >>>>>>>>>>>>>> (gdb)
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Feb 22 13:40:55 SC-2 osafimmnd[4584]: NO Ccb 131 COMMITTED (immcfg_SC-1_18714)
> >>>>>>>>>>>>>> Feb 22 13:40:56 SC-2 osafamfnd[4634]: NO 'safComp=LOG,safSu=SC-2,safSg=2N,safApp=OpenSAF' faulted due to 'avaDown' : Recovery is 'nodeFailfast'
> >>>>>>>>>>>>>> Feb 22 13:40:56 SC-2 osafamfnd[4634]: ER safComp=LOG,safSu=SC-2,safSg=2N,safApp=OpenSAF Faulted due to:avaDown Recovery is:nodeFailfast
> >>>>>>>>>>>>>> Feb 22 13:40:56 SC-2 osafamfnd[4634]: Rebooting OpenSAF NodeId = 131599 EE Name = , Reason: Component faulted: recovery is node failfast, OwnNodeId = 131599, SupervisionTime = 60
> >>>>>>>>>>>>>> Feb 22 13:40:56 SC-2 opensaf_reboot: Rebooting local node; timeout=60
> >>>>>>>>>>>>>> ==============================================================
> >>>>>>>>>>>>>> ==============================================================
> >>>>>>>>>>>>>> On 2/22/2017 12:23 PM, A V Mahesh wrote:
> >>>>>>>>>>>>>>> Hi Vu,
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> On 2/22/2017 12:19 PM, Vu Minh Nguyen wrote:
> >>>>>>>>>>>>>>>> [Vu] I have sent you two patches. One contains a code
> >>>>>>>>>>>>>>>> change in the osaflogd code that fixes the coredump you
> >>>>>>>>>>>>>>>> observed. The other is test code that fixes the logtest
> >>>>>>>>>>>>>>>> coredump.
> >>>>>>>>>>>>>>> OK, I will re-test and update you.
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> -AVM
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> On 2/22/2017 12:19 PM, Vu Minh Nguyen wrote:
> >>>>>>>>>>>>>>>> Hi Mahesh,
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> See my reply inline, [Vu].
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Regards, Vu
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> -----Original Message-----
> >>>>>>>>>>>>>>>>> From: A V Mahesh [mailto:mahesh.va...@oracle.com]
> >>>>>>>>>>>>>>>>> Sent: Wednesday, February 22, 2017 1:36 PM
> >>>>>>>>>>>>>>>>> To: Vu Minh Nguyen <vu.m.ngu...@dektech.com.au>;
> >>>>>>>>>>>>>>>>> lennart.l...@ericsson.com;
> >>> canh.v.tru...@dektech.com.au
> >>>>>>>>>>>>>>>>> Cc: opensaf-devel@lists.sourceforge.net
> >>>>>>>>>>>>>>>>> Subject: Re: [PATCH 0 of 3] Review Request for log: add
> >>>>>>>>>>>>>>>>> alternative
> >>>>>>>>>>>>>>>>> destinations of log records [#2258] V4
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> Hi Vu,
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> On 2/22/2017 11:52 AM, Vu Minh Nguyen wrote:
> >>>>>>>>>>>>>>>>>> Hi Mahesh,
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> There was a code fault in the UML test, and another one
> >>>>>>>>>>>>>>>>>> in the checkpoint code.
> >>>>>>>>>>>>>>>>> [AVM] This is a normal SUSE 11 VM (not UML).
> >>>>>>>>>>>>>>>>>> I have just updated the code. Please re-apply the #3 and
> >>>>>>>>>>>>>>>>>> #4 patches.
> >>>>>>>>>>>>>>>>> [AVM] Do these new patches have functional changes, or
> >>>>>>>>>>>>>>>>> only test code changes?
> >>>>>>>>>>>>>>>> [Vu] I have sent you two patches. One contains a code
> >>>>>>>>>>>>>>>> change in the osaflogd code that fixes the coredump you
> >>>>>>>>>>>>>>>> observed. The other is test code that fixes the logtest
> >>>>>>>>>>>>>>>> coredump.
> >>>>>>>>>>>>>>>>>> Note that test case #14 of suite 17 should be run on the
> >>>>>>>>>>>>>>>>>> active node; otherwise it fails.
> >>>>>>>>>>>>>>>>> [AVM] The segmentation fault of /usr/bin/logtest is not a
> >>>>>>>>>>>>>>>>> big issue; we need to debug why osaflogd core dumped, and
> >>>>>>>>>>>>>>>>> that is critical.
> >>>>>>>>>>>>>>>> [Vu] I found the problem. You can try with the new one to
> >>>>>>>>>>>>>>>> see if the coredump is still there or not.
> >>>>>>>>>>>>>>>>>> I will put condition check to that test case later.
> >>>>>>>>>>>>>>>>> -AVM
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> Regards, Vu
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> -----Original Message-----
> >>>>>>>>>>>>>>>>>>> From: A V Mahesh [mailto:mahesh.va...@oracle.com]
> >>>>>>>>>>>>>>>>>>> Sent: Wednesday, February 22, 2017 12:16 PM
> >>>>>>>>>>>>>>>>>>> To: Vu Minh Nguyen <vu.m.ngu...@dektech.com.au>;
> >>>>>>>>>>>>>>>>>>> lennart.l...@ericsson.com; canh.v.tru...@dektech.com.au
> >>>>>>>>>>>>>>>>>>> Cc: opensaf-devel@lists.sourceforge.net
> >>>>>>>>>>>>>>>>>>> Subject: Re: [PATCH 0 of 3] Review Request for log: add
> >>>>>>>>>>>>>>>>>>> alternative destinations of log records [#2258] V4
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> Hi Vu,
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> Thanks ,
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> While testing /usr/bin/logtest, the SC-2 standby
> >>>>>>>>>>>>>>>>>>> osaflogd core dumped, and /usr/bin/logtest on the SC-1
> >>>>>>>>>>>>>>>>>>> active got a segmentation fault. Am I missing any other
> >>>>>>>>>>>>>>>>>>> patch (I am using the devel published patches only)?
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> The following patches I am using:
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>         1) #2293 (sent by Anders Widel, but not yet pushed)
> >>>>>>>>>>>>>>>>>>>         2) #2258 (v2, sent by Lennart, but not yet pushed)
> >>>>>>>>>>>>>>>>>>>         3) #2258 (v4, sent by Vu, but not yet pushed)
> >>>>>>>>>>>>>>>>>>> ==============================================================
> >>>>>>>>>>>>>>>>>>> ========================================
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> Core was generated by `/usr/lib64/opensaf/osaflogd'.
> >>>>>>>>>>>>>>>>>>> Program terminated with signal 11, Segmentation fault.
> >>>>>>>>>>>>>>>>>>> #0  ckpt_proc_cfg_stream(lgs_cb*, void*) () at src/log/logd/lgs_mbcsv.cc:2195
> >>>>>>>>>>>>>>>>>>> 2195    src/log/logd/lgs_mbcsv.cc: No such file or directory.
> >>>>>>>>>>>>>>>>>>>                in src/log/logd/lgs_mbcsv.cc
> >>>>>>>>>>>>>>>>>>> (gdb) bt
> >>>>>>>>>>>>>>>>>>> #0  ckpt_proc_cfg_stream(lgs_cb*, void*) () at src/log/logd/lgs_mbcsv.cc:2195
> >>>>>>>>>>>>>>>>>>> #1  0x00007f12c3e22960 in ckpt_decode_log_struct(lgs_cb*, ncs_mbcsv_cb_arg*, void*, void*, unsigned int (*)(edu_hdl_tag*, edu_tkn_tag*, void*, unsigned int*, edu_buf_env_tag*, EDP_OP_TYPE, EDU_ERR*)) () at src/log/logd/lgs_mbcsv.cc:950
> >>>>>>>>>>>>>>>>>>> #2  0x00007f12c3e240dc in ckpt_decode_async_update(lgs_cb*, ncs_mbcsv_cb_arg*) () at src/log/logd/lgs_mbcsv.cc:1086
> >>>>>>>>>>>>>>>>>>> #3  0x00007f12c3e26941 in mbcsv_callback(ncs_mbcsv_cb_arg*) () at src/log/logd/lgs_mbcsv.cc:880
> >>>>>>>>>>>>>>>>>>> #4  0x00007f12c2f25596 in ncs_mbscv_rcv_decode () from /usr/lib/../lib64/libopensaf_core.so.0
> >>>>>>>>>>>>>>>>>>> #5  0x00007f12c2f25766 in ncs_mbcsv_rcv_async_update () from /usr/lib/../lib64/libopensaf_core.so.0
> >>>>>>>>>>>>>>>>>>> #6  0x00007f12c2f2c370 in mbcsv_process_events () from /usr/lib/../lib64/libopensaf_core.so.0
> >>>>>>>>>>>>>>>>>>> #7  0x00007f12c2f2c4db in mbcsv_hdl_dispatch_all () from /usr/lib/../lib64/libopensaf_core.so.0
> >>>>>>>>>>>>>>>>>>> #8  0x00007f12c2f26ce2 in mbcsv_process_dispatch_request () at src/mbc/mbcsv_api.c:423
> >>>>>>>>>>>>>>>>>>> #9  0x00007f12c3e2396e in lgs_mbcsv_dispatch(unsigned int) () at src/log/logd/lgs_mbcsv.cc:327
> >>>>>>>>>>>>>>>>>>> #10 0x00007f12c3e009f2 in main () at src/log/logd/lgs_main.cc:583
> >>>>>>>>>>>>>>>>>>> (gdb)
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> ==============================================================
> >>>>>>>>>>>>>>>>>>> ========================================
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> Feb 22 10:37:06 SC-1 osafimmnd[4020]: NO Invalid error reported implementer 'safLogService', Ccb 161 will be aborted
> >>>>>>>>>>>>>>>>>>> Feb 22 10:37:06 SC-1 osafimmnd[4020]: NO Ccb 161 aborted in COMPLETED processing (validation)
> >>>>>>>>>>>>>>>>>>> Feb 22 10:37:06 SC-1 osafimmnd[4020]: NO Ccb 161 ABORTED (immcfg_SC-1_5394)
> >>>>>>>>>>>>>>>>>>> Add values Fail
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> Program received signal SIGSEGV, Segmentation fault.
> >>>>>>>>>>>>>>>>>>> 0x000055555556929a in read_and_compare.isra.7 () at src/log/apitest/tet_LogOiOps.c:1891
> >>>>>>>>>>>>>>>>>>> 1891    src/log/apitest/tet_LogOiOps.c: No such file or directory.
> >>>>>>>>>>>>>>>>>>>                   in src/log/apitest/tet_LogOiOps.c
> >>>>>>>>>>>>>>>>>>> (gdb) Feb 22 10:37:07 SC-1 sshd[5298]: Accepted keyboard-interactive/pam for root from 10.176.178.22 port 51945 ssh2
> >>>>>>>>>>>>>>>>>>> bt
> >>>>>>>>>>>>>>>>>>> #0  0x000055555556929a in read_and_compare.isra.7 () at src/log/apitest/tet_LogOiOps.c:1891
> >>>>>>>>>>>>>>>>>>> #1  0x0000555555569bbb in check_logRecordDestinationConfigurationEmpty () at src/log/apitest/tet_LogOiOps.c:2179
> >>>>>>>>>>>>>>>>>>> #2  0x0000555555573495 in run_test_case ()
> >>>>>>>>>>>>>>>>>>> #3  0x0000555555573934 in test_run ()
> >>>>>>>>>>>>>>>>>>> #4  0x000055555555c7cd in main () at src/log/apitest/logtest.c:569
> >>>>>>>>>>>>>>>>>>> (gdb)
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> ==============================================================
> >>>>>>>>>>>>>>>>>>> ========================================
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> -AVM
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> On 2/22/2017 9:48 AM, Vu Minh Nguyen wrote:
> >>>>>>>>>>>>>>>>>>>> Hi Mahesh,
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> I have sent them as attachments instead, and named them
> >>>>>>>>>>>>>>>>>>>> in order. I just pulled the latest code and applied them
> >>>>>>>>>>>>>>>>>>>> without getting any hunk errors. Please try them, and
> >>>>>>>>>>>>>>>>>>>> let me know if you see any problems.
> >>>>>>>>>>>>>>>>>>>> Regards, Vu
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> -----Original Message-----
> >>>>>>>>>>>>>>>>>>>>> From: A V Mahesh [mailto:mahesh.va...@oracle.com]
> >>>>>>>>>>>>>>>>>>>>> Sent: Wednesday, February 22, 2017 11:09 AM
> >>>>>>>>>>>>>>>>>>>>> To: Vu Minh Nguyen <vu.m.ngu...@dektech.com.au>;
> >>>>>>>>>>>>>>>>>>>>> lennart.l...@ericsson.com; canh.v.tru...@dektech.com.au
> >>>>>>>>>>>>>>>>>>>>> Cc: opensaf-devel@lists.sourceforge.net
> >>>>>>>>>>>>>>>>>>>>> Subject: Re: [PATCH 0 of 3] Review Request for log: add
> >>>>>>>>>>>>>>>>>>>>> alternative destinations of log records [#2258] V4
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> Hi Vu,
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> I did follow that; I still get "Hunk #2 FAILED" even on
> >>>>>>>>>>>>>>>>>>>>> today's staging.
> >>>>>>>>>>>>>>>>>>>>> ==============================================================
> >>>>>>>>>>>>>>>>>>>>> ==================
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> [root@dhcp-hyd-scp-5fl-10-176-177-96 staging]# patch -p1 <2293
> >>>>>>>>>>>>>>>>>>>>> patching file src/base/Makefile.am
> >>>>>>>>>>>>>>>>>>>>> Hunk #1 succeeded at 33 (offset 1 line).
> >>>>>>>>>>>>>>>>>>>>> Hunk #3 succeeded at 183 (offset 1 line).
> >>>>>>>>>>>>>>>>>>>>> patching file src/base/file_descriptor.cc
> >>>>>>>>>>>>>>>>>>>>> patching file src/base/file_descriptor.h
> >>>>>>>>>>>>>>>>>>>>> patching file src/base/tests/unix_socket_test.cc
> >>>>>>>>>>>>>>>>>>>>> patching file src/base/unix_client_socket.cc
> >>>>>>>>>>>>>>>>>>>>> patching file src/base/unix_server_socket.cc
> >>>>>>>>>>>>>>>>>>>>> patching file src/base/unix_socket.cc
> >>>>>>>>>>>>>>>>>>>>> patching file src/base/unix_socket.h
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> [root@dhcp-hyd-scp-5fl-10-176-177-96 staging]# patch -p1 <2258-1
> >>>>>>>>>>>>>>>>>>>>> patching file src/log/Makefile.am
> >>>>>>>>>>>>>>>>>>>>> Hunk #1 succeeded at 71 (offset -1 lines).
> >>>>>>>>>>>>>>>>>>>>> patching file src/log/config/logsv_classes.xml
> >>>>>>>>>>>>>>>>>>>>> Hunk #1 FAILED at 147.
> >>>>>>>>>>>>>>>>>>>>> 1 out of 1 hunk FAILED -- saving rejects to file
> >>>>>>>>>>>>>>>>>>>>> src/log/config/logsv_classes.xml.rej
> >>>>>>>>>>>>>>>>>>>>> patching file src/log/logd/lgs_config.cc
> >>>>>>>>>>>>>>>>>>>>> Hunk #1 succeeded at 35 (offset -5 lines).
> >>>>>>>>>>>>>>>>>>>>> Hunk #2 FAILED at 705.
> >>>>>>>>>>>>>>>>>>>>> Hunk #3 FAILED at 971.
> >>>>>>>>>>>>>>>>>>>>> 2 out of 3 hunks FAILED -- saving rejects to file
> >>>>>>>>>>>>>>>>>>>>> src/log/logd/lgs_config.cc.rej
> >>>>>>>>>>>>>>>>>>>>> patching file src/log/logd/lgs_config.h
> >>>>>>>>>>>>>>>>>>>>> Hunk #1 FAILED at 304.
> >>>>>>>>>>>>>>>>>>>>> 1 out of 1 hunk FAILED -- saving rejects to file
> >>>>>>>>>>>>>>>>>>>>> src/log/logd/lgs_config.h.rej
> >>>>>>>>>>>>>>>>>>>>> patching file src/log/logd/lgs_dest.cc
> >>>>>>>>>>>>>>>>>>>>> patching file src/log/logd/lgs_dest.h
> >>>>>>>>>>>>>>>>>>>>> patching file src/log/logd/lgs_evt.cc
> >>>>>>>>>>>>>>>>>>>>> patching file src/log/logd/lgs_imm.cc
> >>>>>>>>>>>>>>>>>>>>> Hunk #1 FAILED at 45.
> >>>>>>>>>>>>>>>>>>>>> Hunk #2 succeeded at 235 (offset -1 lines).
> >>>>>>>>>>>>>>>>>>>>> Hunk #3 FAILED at 877.
> >>>>>>>>>>>>>>>>>>>>> Hunk #4 succeeded at 1273 (offset -20 lines).
> >>>>>>>>>>>>>>>>>>>>> Hunk #5 succeeded at 1404 (offset -1 lines).
> >>>>>>>>>>>>>>>>>>>>> Hunk #6 succeeded at 1449 (offset -20 lines).
> >>>>>>>>>>>>>>>>>>>>> Hunk #7 succeeded at 2032 (offset -1 lines).
> >>>>>>>>>>>>>>>>>>>>> Hunk #8 FAILED at 2181.
> >>>>>>>>>>>>>>>>>>>>> Hunk #9 succeeded at 2271 (offset -54 lines).
> >>>>>>>>>>>>>>>>>>>>> Hunk #10 succeeded at 2387 (offset -1 lines).
> >>>>>>>>>>>>>>>>>>>>> Hunk #11 succeeded at 2377 (offset -54 lines).
> >>>>>>>>>>>>>>>>>>>>> Hunk #12 succeeded at 2478 (offset -1 lines).
> >>>>>>>>>>>>>>>>>>>>> Hunk #13 succeeded at 2684 (offset -54 lines).
> >>>>>>>>>>>>>>>>>>>>> Hunk #14 succeeded at 2821 (offset -1 lines).
> >>>>>>>>>>>>>>>>>>>>> 3 out of 14 hunks FAILED -- saving rejects to file
> >>>>>>>>>>>>>>>>>>>>> src/log/logd/lgs_imm.cc.rej
> >>>>>>>>>>>>>>>>>>>>> patching file src/log/logd/lgs_main.cc
> >>>>>>>>>>>>>>>>>>>>> patching file src/log/logd/lgs_mbcsv.cc
> >>>>>>>>>>>>>>>>>>>>> patching file src/log/logd/lgs_mbcsv.h
> >>>>>>>>>>>>>>>>>>>>> patching file src/log/logd/lgs_mbcsv_v5.cc
> >>>>>>>>>>>>>>>>>>>>> Hunk #3 succeeded at 133 (offset -1 lines).
> >>>>>>>>>>>>>>>>>>>>> patching file src/log/logd/lgs_mbcsv_v7.cc
> >>>>>>>>>>>>>>>>>>>>> patching file src/log/logd/lgs_mbcsv_v7.h
> >>>>>>>>>>>>>>>>>>>>> patching file src/log/logd/lgs_stream.cc
> >>>>>>>>>>>>>>>>>>>>> patching file src/log/logd/lgs_stream.h
> >>>>>>>>>>>>>>>>>>>>> patching file src/log/logd/lgs_util.cc
> >>>>>>>>>>>>>>>>>>>>> patching file src/log/logd/lgs_util.h
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> [root@dhcp-hyd-scp-5fl-10-176-177-96 staging]# patch -p1 <2258-2
> >>>>>>>>>>>>>>>>>>>>> patching file src/log/Makefile.am
> >>>>>>>>>>>>>>>>>>>>> Hunk #1 succeeded at 180 (offset -3 lines).
> >>>>>>>>>>>>>>>>>>>>> patching file src/log/apitest/tet_LogOiOps.c
> >>>>>>>>>>>>>>>>>>>>> Hunk #1 FAILED at 1923.
> >>>>>>>>>>>>>>>>>>>>> Hunk #2 FAILED at 1979.
> >>>>>>>>>>>>>>>>>>>>> Hunk #3 FAILED at 2067.
> >>>>>>>>>>>>>>>>>>>>> Hunk #4 FAILED at 2094.
> >>>>>>>>>>>>>>>>>>>>> 4 out of 4 hunks FAILED -- saving rejects to file src/log/apitest/tet_LogOiOps.c.rej
> >>>>>>>>>>>>>>>>>>>>> patching file src/log/apitest/tet_cfg_destination.c
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> [root@dhcp-hyd-scp-5fl-10-176-177-96 staging]# patch -p1 <2258-3
> >>>>>>>>>>>>>>>>>>>>> patching file src/log/Makefile
> >>>>>>>>>>>>>>>>>>>>> patching file src/log/Makefile.am
> >>>>>>>>>>>>>>>>>>>>> Hunk #1 succeeded at 80 (offset -1 lines).
> >>>>>>>>>>>>>>>>>>>>> Hunk #2 succeeded at 217 (offset -2 lines).
> >>>>>>>>>>>>>>>>>>>>> patching file src/log/tests/Makefile
> >>>>>>>>>>>>>>>>>>>>> patching file src/log/tests/lgs_dest_test.cc
> >>>>>>>>>>>>>>>>>>>>> [root@dhcp-hyd-scp-5fl-10-176-177-96 staging]# vi src/log/apitest/tet_LogOiOps.c.rej
> >>>>>>>>>>>>>>>>>>>>> [root@dhcp-hyd-scp-5fl-10-176-177-96 staging]#
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> ======================================================================================
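For reference, when `patch` cannot place a hunk, as in the log above, the unapplied hunk is saved to a `.rej` file beside its target. A minimal self-contained sketch of that mechanism (all file names here are synthetic, nothing is taken from the actual #2258 series):

```shell
# Sketch of how `patch` produces .rej files when a hunk's context no
# longer matches. File names (orig.c, change.patch) are synthetic.
set -e
dir=$(mktemp -d)
cd "$dir"

printf 'line1\nline2\nline3\n' > orig.c
sed 's/line2/line2-changed/' orig.c > new.c
diff -u orig.c new.c > change.patch || true   # diff exits 1 when files differ

# Simulate the tree having drifted since the patch was generated:
printf 'completely\ndifferent\ncontent\n' > orig.c

# The hunk's context no longer matches; patch saves it next to the file.
patch -f orig.c < change.patch || true
test -f orig.c.rej && echo "reject saved to orig.c.rej"
```

Inspecting the saved `.rej` shows exactly which hunk could not be placed, which is usually faster than re-reading the full patch.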
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> -AVM
> >>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>> On 2/21/2017 3:53 PM, Vu Minh Nguyen wrote:
> >>>>>>>>>>>>>>>>>>>>>> Hi Mahesh,
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> As I have mentioned below:
> >>>>>>>>>>>>>>>>>>>>>>>> To run the test, this patch depends on the following patches:
> >>>>>>>>>>>>>>>>>>>>>>>>     1) #2293 (sent by Anders Widel, but not yet pushed)
> >>>>>>>>>>>>>>>>>>>>>>>>     2) #2258 (v2, sent by Lennart, but not yet pushed)
> >>>>>>>>>>>>>>>>>>>>>> So you need to apply #2293 first, then #2258 (sent by Lennart yesterday), then mine.
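The order matters because each patch in a stack is generated against the tree produced by the one before it; applied out of order, the later patch's context lines are missing and its hunks are rejected. A self-contained illustration with synthetic stand-ins (A.patch and B.patch below are illustrative, not the real #2293/#2258 files):

```shell
# B.patch is generated on top of A.patch, so applying B first fails
# while applying A then B succeeds. All files here are synthetic.
set -e
work=$(mktemp -d)
cd "$work"

printf 'base\n' > file.txt

printf 'base\nfrom-A\n' > stage_a
diff -u file.txt stage_a > A.patch || true

printf 'base\nfrom-A\nfrom-B\n' > stage_b
diff -u stage_a stage_b > B.patch || true

# Out of order: B's context line "from-A" is absent, so its hunk fails
# (-F0 disables fuzzy matching so the failure is deterministic).
patch -f -F0 file.txt < B.patch || echo "B alone: hunk FAILED"

# In order: both apply cleanly.
patch -f file.txt < A.patch
patch -f file.txt < B.patch
grep -q 'from-B' file.txt && echo "A then B: applied cleanly"
```

The same logic explains the rejects in the logs further down this thread: hunks generated on top of an unapplied prerequisite have no matching context in the target tree.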
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>> Regards, Vu
> >>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> -----Original Message-----
> >>>>>>>>>>>>>>>>>>>>>>> From: A V Mahesh [mailto:mahesh.va...@oracle.com]
> >>>>>>>>>>>>>>>>>>>>>>> Sent: Tuesday, February 21, 2017 5:10 PM
> >>>>>>>>>>>>>>>>>>>>>>> To: Vu Minh Nguyen <vu.m.ngu...@dektech.com.au>; lennart.l...@ericsson.com; canh.v.tru...@dektech.com.au
> >>>>>>>>>>>>>>>>>>>>>>> Cc: opensaf-devel@lists.sourceforge.net
> >>>>>>>>>>>>>>>>>>>>>>> Subject: Re: [PATCH 0 of 3] Review Request for log: add alternative destinations of log records [#2258] V4
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> Hi Vu,
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> Does this apply on top of log #2146 V4? I see both tickets have version changes.
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> In which order do I need to apply them: (#2146 & #2258) or (#2258 & #2146)?
> >>> =========================================================
> >>>>>>>>>>>>>>>>>>>>>>> patching file src/log/Makefile.am
> >>>>>>>>>>>>>>>>>>>>>>> Hunk #1 FAILED at 72.
> >>>>>>>>>>>>>>>>>>>>>>> Hunk #2 FAILED at 120.
> >>>>>>>>>>>>>>>>>>>>>>> 2 out of 2 hunks FAILED -- saving rejects to file src/log/Makefile.am.rej
> >>>>>>>>>>>>>>>>>>>>>>> patching file src/log/config/logsv_classes.xml
> >>>>>>>>>>>>>>>>>>>>>>> Hunk #1 FAILED at 147.
> >>>>>>>>>>>>>>>>>>>>>>> 1 out of 1 hunk FAILED -- saving rejects to file src/log/config/logsv_classes.xml.rej
> >>>>>>>>>>>>>>>>>>>>>>> patching file src/log/logd/lgs_config.cc
> >>>>>>>>>>>>>>>>>>>>>>> Hunk #1 succeeded at 35 (offset -5 lines).
> >>>>>>>>>>>>>>>>>>>>>>> Hunk #2 FAILED at 705.
> >>>>>>>>>>>>>>>>>>>>>>> Hunk #3 FAILED at 971.
> >>>>>>>>>>>>>>>>>>>>>>> 2 out of 3 hunks FAILED -- saving rejects to file src/log/logd/lgs_config.cc.rej
> >>>>>>>>>>>>>>>>>>>>>>> patching file src/log/logd/lgs_config.h
> >>>>>>>>>>>>>>>>>>>>>>> Hunk #1 FAILED at 304.
> >>>>>>>>>>>>>>>>>>>>>>> 1 out of 1 hunk FAILED -- saving rejects to file src/log/logd/lgs_config.h.rej
> >>>>>>>>>>>>>>>>>>>>>>> patching file src/log/logd/lgs_dest.cc
> >>>>>>>>>>>>>>>>>>>>>>> patching file src/log/logd/lgs_dest.h
> >>>>>>>>>>>>>>>>>>>>>>> patching file src/log/logd/lgs_evt.cc
> >>>>>>>>>>>>>>>>>>>>>>> Hunk #1 FAILED at 1.
> >>>>>>>>>>>>>>>>>>>>>>> Hunk #2 succeeded at 30 with fuzz 2 (offset 2 lines).
> >>>>>>>>>>>>>>>>>>>>>>> Hunk #3 succeeded at 1282 (offset 45 lines).
> >>>>>>>>>>>>>>>>>>>>>>> Hunk #4 succeeded at 1300 (offset 2 lines).
> >>>>>>>>>>>>>>>>>>>>>>> 1 out of 4 hunks FAILED -- saving rejects to file src/log/logd/lgs_evt.cc.rej
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> =================================================================
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> -AVM
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>> On 2/21/2017 3:03 PM, Vu Minh Nguyen wrote:
> >>>>>>>>>>>>>>>>>>>>>>>> Summary: log: add alternative destinations of log records [#2258]
> >>>>>>>>>>>>>>>>>>>>>>>> Review request for Trac Ticket(s): #2258
> >>>>>>>>>>>>>>>>>>>>>>>> Peer Reviewer(s): Lennart, Canh, Mahesh
> >>>>>>>>>>>>>>>>>>>>>>>> Pull request to: <<LIST THE PERSON WITH PUSH ACCESS HERE>>
> >>>>>>>>>>>>>>>>>>>>>>>> Affected branch(es): Default
> >>>>>>>>>>>>>>>>>>>>>>>> Development branch: Default
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> --------------------------------
> >>>>>>>>>>>>>>>>>>>>>>>> Impacted area         Impact y/n
> >>>>>>>>>>>>>>>>>>>>>>>> --------------------------------
> >>>>>>>>>>>>>>>>>>>>>>>> Docs                  n
> >>>>>>>>>>>>>>>>>>>>>>>> Build system          n
> >>>>>>>>>>>>>>>>>>>>>>>> RPM/packaging         n
> >>>>>>>>>>>>>>>>>>>>>>>> Configuration files   n
> >>>>>>>>>>>>>>>>>>>>>>>> Startup scripts       n
> >>>>>>>>>>>>>>>>>>>>>>>> SAF services          n
> >>>>>>>>>>>>>>>>>>>>>>>> OpenSAF services      y
> >>>>>>>>>>>>>>>>>>>>>>>> Core libraries        n
> >>>>>>>>>>>>>>>>>>>>>>>> Samples               n
> >>>>>>>>>>>>>>>>>>>>>>>> Tests                 y
> >>>>>>>>>>>>>>>>>>>>>>>> Other                 n
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> Comments (indicate scope for each "y" above):
> >>>>>>>>>>>>>>>>>>>>>>>> ---------------------------------------------
> >>>>>>>>>>>>>>>>>>>>>>>> To run the test, this patch depends on the following patches:
> >>>>>>>>>>>>>>>>>>>>>>>>     1) #2293 (sent by Anders Widel, but not yet pushed)
> >>>>>>>>>>>>>>>>>>>>>>>>     2) #2258 (v2, sent by Lennart, but not yet pushed)
> >>>>>>>>>>>>>>>>>>>>>>>> changeset d74aaf3025c99cade3165a15831124548f4d85bd
> >>>>>>>>>>>>>>>>>>>>>>>> Author: Vu Minh Nguyen <vu.m.ngu...@dektech.com.au>
> >>>>>>>>>>>>>>>>>>>>>>>> Date: Wed, 15 Feb 2017 14:36:00 +0700
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>     log: add alternative destinations of log records [#2258]
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>     Here is the major info; detailed info will be added to the PR doc soon.
> >>>>>>>>>>>>>>>>>>>>>>>>     1) Add attribute "saLogRecordDestination" to log stream.
> >>>>>>>>>>>>>>>>>>>>>>>>     2) Add local socket destination handler.
> >>>>>>>>>>>>>>>>>>>>>>>>     3) Integrate into the first increment made by Lennart.
> >>>>>>>>>>>>>>>>>>>>>>>> changeset 4bae27a478c235df3058f43c92d3a5483233b01d
> >>>>>>>>>>>>>>>>>>>>>>>> Author: Vu Minh Nguyen <vu.m.ngu...@dektech.com.au>
> >>>>>>>>>>>>>>>>>>>>>>>> Date: Wed, 15 Feb 2017 15:07:09 +0700
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>     log: add UML test case to verify alternative destination [#2258]
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>     Major changes:
> >>>>>>>>>>>>>>>>>>>>>>>>     1) Modify Lennart's test cases due to enhanced destination configuration validation rules.
> >>>>>>>>>>>>>>>>>>>>>>>>     2) Add test suite #17 to verify alternative destination.
> >>>>>>>>>>>>>>>>>>>>>>>> changeset bc375725fed22bb4f8cb3ae3df5f96fb9d281efb
> >>>>>>>>>>>>>>>>>>>>>>>> Author: Vu Minh Nguyen <vu.m.ngu...@dektech.com.au>
> >>>>>>>>>>>>>>>>>>>>>>>> Date: Thu, 16 Feb 2017 17:22:13 +0700
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>     log: add unit tests to verify interfaces provided by destination handler [#2258]
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>     Unit tests to verify major interfaces:
> >>>>>>>>>>>>>>>>>>>>>>>>     1) CfgDestination()
> >>>>>>>>>>>>>>>>>>>>>>>>     2) WriteToDestination()
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> Added Files:
> >>>>>>>>>>>>>>>>>>>>>>>> ------------
> >>>>>>>>>>>>>>>>>>>>>>>> src/log/apitest/tet_cfg_destination.c
> >>>>>>>>>>>>>>>>>>>>>>>> src/log/logd/lgs_dest.cc
> >>>>>>>>>>>>>>>>>>>>>>>> src/log/logd/lgs_dest.h
> >>>>>>>>>>>>>>>>>>>>>>>> src/log/logd/lgs_mbcsv_v7.cc
> >>>>>>>>>>>>>>>>>>>>>>>> src/log/logd/lgs_mbcsv_v7.h
> >>>>>>>>>>>>>>>>>>>>>>>> src/log/tests/lgs_dest_test.cc
> >>>>>>>>>>>>>>>>>>>>>>>> src/log/tests/Makefile
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> Complete diffstat:
> >>>>>>>>>>>>>>>>>>>>>>>> ------------------
> >>>>>>>>>>>>>>>>>>>>>>>> src/log/Makefile                      |    4 +
> >>>>>>>>>>>>>>>>>>>>>>>> src/log/Makefile.am                   |   31 ++-
> >>>>>>>>>>>>>>>>>>>>>>>> src/log/apitest/tet_LogOiOps.c        |    8 +-
> >>>>>>>>>>>>>>>>>>>>>>>> src/log/apitest/tet_cfg_destination.c |  483 ++++++++++
> >>>>>>>>>>>>>>>>>>>>>>>> src/log/config/logsv_classes.xml      |    7 +-
> >>>>>>>>>>>>>>>>>>>>>>>> src/log/logd/lgs_config.cc            |  169 ++++-
> >>>>>>>>>>>>>>>>>>>>>>>> src/log/logd/lgs_config.h             |    3 +-
> >>>>>>>>>>>>>>>>>>>>>>>> src/log/logd/lgs_dest.cc              |  707 +++++++++++++++
> >>>>>>>>>>>>>>>>>>>>>>>> src/log/logd/lgs_dest.h               |  576 ++++++++++++
> >>>>>>>>>>>>>>>>>>>>>>>> src/log/logd/lgs_evt.cc               |   33 ++
> >>>>>>>>>>>>>>>>>>>>>>>> src/log/logd/lgs_imm.cc               |  202 +++++-
> >>>>>>>>>>>>>>>>>>>>>>>> src/log/logd/lgs_main.cc              |    8 +
> >>>>>>>>>>>>>>>>>>>>>>>> src/log/logd/lgs_mbcsv.cc             |  103 +++-
> >>>>>>>>>>>>>>>>>>>>>>>> src/log/logd/lgs_mbcsv.h              |    6 +-
> >>>>>>>>>>>>>>>>>>>>>>>> src/log/logd/lgs_mbcsv_v5.cc          |   10 +
> >>>>>>>>>>>>>>>>>>>>>>>> src/log/logd/lgs_mbcsv_v7.cc          |  177 +++++
> >>>>>>>>>>>>>>>>>>>>>>>> src/log/logd/lgs_mbcsv_v7.h           |   67 ++
> >>>>>>>>>>>>>>>>>>>>>>>> src/log/logd/lgs_stream.cc            |   60 ++-
> >>>>>>>>>>>>>>>>>>>>>>>> src/log/logd/lgs_stream.h             |   16 +
> >>>>>>>>>>>>>>>>>>>>>>>> src/log/logd/lgs_util.cc              |   63 ++
> >>>>>>>>>>>>>>>>>>>>>>>> src/log/logd/lgs_util.h               |   11 +-
> >>>>>>>>>>>>>>>>>>>>>>>> src/log/tests/Makefile                |   20 +
> >>>>>>>>>>>>>>>>>>>>>>>> src/log/tests/lgs_dest_test.cc        |  209 ++++++
> >>>>>>>>>>>>>>>>>>>>>>>> 23 files changed, 2896 insertions(+), 77 deletions(-)
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> Testing Commands:
> >>>>>>>>>>>>>>>>>>>>>>>> -----------------
> >>>>>>>>>>>>>>>>>>>>>>>>              Run UML test suite #17
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> Testing, Expected Results:
> >>>>>>>>>>>>>>>>>>>>>>>> --------------------------
> >>>>>>>>>>>>>>>>>>>>>>>>              All tests passed
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> Conditions of Submission:
> >>>>>>>>>>>>>>>>>>>>>>>> -------------------------
> >>>>>>>>>>>>>>>>>>>>>>>> <<HOW MANY DAYS BEFORE PUSHING, CONSENSUS ETC>>
> >>>>>>>>>>>>>>>>>>>>>>>> Arch       Built  Started  Linux distro
> >>>>>>>>>>>>>>>>>>>>>>>> -------------------------------------------
> >>>>>>>>>>>>>>>>>>>>>>>> mips       n      n
> >>>>>>>>>>>>>>>>>>>>>>>> mips64     n      n
> >>>>>>>>>>>>>>>>>>>>>>>> x86        n      n
> >>>>>>>>>>>>>>>>>>>>>>>> x86_64     n      n
> >>>>>>>>>>>>>>>>>>>>>>>> powerpc    n      n
> >>>>>>>>>>>>>>>>>>>>>>>> powerpc64  n      n
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> Reviewer Checklist:
> >>>>>>>>>>>>>>>>>>>>>>>> -------------------
> >>>>>>>>>>>>>>>>>>>>>>>> [Submitters: make sure that your review doesn't trigger any checkmarks!]
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> Your checkin has not passed review because (see checked entries):
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> ___ Your RR template is generally incomplete; it has too many blank entries that need proper data filled in.
> >>>>>>>>>>>>>>>>>>>>>>>> ___ You have failed to nominate the proper persons for review and push.
> >>>>>>>>>>>>>>>>>>>>>>>> ___ Your patches do not have proper short+long headers.
> >>>>>>>>>>>>>>>>>>>>>>>> ___ You have grammar/spelling in your header that is unacceptable.
> >>>>>>>>>>>>>>>>>>>>>>>> ___ You have exceeded a sensible line length in your headers/comments/text.
> >>>>>>>>>>>>>>>>>>>>>>>> ___ You have failed to put a proper Trac Ticket # into your commits.
> >>>>>>>>>>>>>>>>>>>>>>>> ___ You have incorrectly put/left internal data in your comments/files (i.e. internal bug tracking tool IDs, product names etc).
> >>>>>>>>>>>>>>>>>>>>>>>> ___ You have not given any evidence of testing beyond basic build tests. Demonstrate some level of runtime or other sanity testing.
> >>>>>>>>>>>>>>>>>>>>>>>> ___ You have ^M present in some of your files. These have to be removed.
> >>>>>>>>>>>>>>>>>>>>>>>> ___ You have needlessly changed whitespace or added whitespace crimes like trailing spaces, or spaces before tabs.
> >>>>>>>>>>>>>>>>>>>>>>>> ___ You have mixed real technical changes with whitespace and other cosmetic code cleanup changes. These have to be separate commits.
> >>>>>>>>>>>>>>>>>>>>>>>> ___ You need to refactor your submission into logical chunks; there is too much content in a single commit.
> >>>>>>>>>>>>>>>>>>>>>>>> ___ You have extraneous garbage in your review (merge commits etc).
> >>>>>>>>>>>>>>>>>>>>>>>> ___ You have giant attachments which should never have been sent; instead you should place your content in a public tree to be pulled.
> >>>>>>>>>>>>>>>>>>>>>>>> ___ You have too many commits attached to an e-mail; resend as threaded commits, or place in a public tree for a pull.
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> ___ You have resent this content multiple times without a clear indication of what has changed between each re-send.
> >>>>>>>>>>>>>>>>>>>>>>>> ___ You have failed to adequately and individually address all of the comments and change requests that were proposed in the initial review.
> >>>>>>>>>>>>>>>>>>>>>>>> ___ You have a misconfigured ~/.hgrc file (i.e. username, email etc).
> >>>>>>>>>>>>>>>>>>>>>>>> ___ Your computer has a badly configured date and time, confusing the threaded patch review.
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> ___ Your changes affect the IPC mechanism, and you don't present any results for an in-service upgradability test.
> >>>>>>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>>>>>> ___ Your changes affect the user manual and documentation; your patch series does not contain the patch that updates the Doxygen manual.
> >



------------------------------------------------------------------------------
Check out the vibrant tech community on one of the world's most
engaging tech sites, SlashDot.org! http://sdm.link/slashdot
_______________________________________________
Opensaf-devel mailing list
Opensaf-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/opensaf-devel
