Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative destinations of log records [#2258] V4
Hi Vu,

The issue still exists even with `2258_v10.patch`. Please fix all the issues and republish the patch.

==
Feb 23 17:09:06 SC-2 osafamfnd[3774]: NO 'safSu=SC-2,safSg=2N,safApp=OpenSAF' Presence State INSTANTIATING => INSTANTIATED
Feb 23 17:09:06 SC-2 osafamfnd[3774]: NO Assigning 'safSi=SC-2N,safApp=OpenSAF' STANDBY to 'safSu=SC-2,safSg=2N,safApp=OpenSAF'
Feb 23 17:09:06 SC-2 osafrded[3694]: NO RDE role set to STANDBY
Feb 23 17:09:06 SC-2 osafrded[3694]: NO Peer up on node 0x2010f
Feb 23 17:09:06 SC-2 osafrded[3694]: NO Got peer info request from node 0x2010f with role ACTIVE
Feb 23 17:09:06 SC-2 osafrded[3694]: NO Got peer info response from node 0x2010f with role ACTIVE
Feb 23 17:09:06 SC-2 osafimmd[3713]: NO MDS event from svc_id 24 (change:3, dest:13)
Feb 23 17:09:06 SC-2 osafimmd[3713]: NO MDS event from svc_id 24 (change:5, dest:13)
Feb 23 17:09:06 SC-2 osafimmd[3713]: NO MDS event from svc_id 24 (change:5, dest:13)
Feb 23 17:09:06 SC-2 osafimmd[3713]: NO MDS event from svc_id 25 (change:3, dest:565213468688400)
Feb 23 17:09:06 SC-2 osafimmd[3713]: NO MDS event from svc_id 25 (change:3, dest:56411654384)
Feb 23 17:09:06 SC-2 osafimmnd[3724]: NO Implementer (applier) connected: 15 (@safAmfService2020f) <127, 2020f>
Feb 23 17:09:06 SC-2 osaflogd[3734]: NO LOGSV_DATA_GROUPNAME not found
Feb 23 17:09:06 SC-2 osaflogd[3734]: NO LOG root directory is: "/var/log/opensaf/saflog"
Feb 23 17:09:06 SC-2 osaflogd[3734]: NO LOG data group is: ""
Feb 23 17:09:06 SC-2 osaflogd[3734]: NO LGS_MBCSV_VERSION = 7
Feb 23 17:09:06 SC-2 osafamfnd[3774]: NO Assigned 'safSi=SC-2N,safApp=OpenSAF' STANDBY to 'safSu=SC-2,safSg=2N,safApp=OpenSAF'
Feb 23 17:09:06 SC-2 opensafd: OpenSAF(5.1.M0 - ) services successfully started
done
SC-2:~ # Feb 23 17:09:06 SC-2 osafimmnd[3724]: NO Implementer (applier) connected: 16 (@OpenSafImmReplicatorB) <150, 2020f>
Feb 23 17:09:06 SC-2 osafntfimcnd[3931]: NO Started
Feb 23 17:09:08 SC-2 osafamfd[3764]: NO Cold sync complete!
Feb 23 17:09:08 SC-2 osaflogd[3734]: WA FAILED: ncs_patricia_tree_add, client_id 0
Feb 23 17:09:08 SC-2 osaflogd[3734]: ER Exiting with message: Could not create new client
Feb 23 17:09:08 SC-2 osafamfnd[3774]: NO 'safSu=SC-2,safSg=2N,safApp=OpenSAF' component restart probation timer started (timeout: 600 ns)
Feb 23 17:09:08 SC-2 osafamfnd[3774]: NO Restarting a component of 'safSu=SC-2,safSg=2N,safApp=OpenSAF' (comp restart count: 1)
Feb 23 17:09:08 SC-2 osafamfnd[3774]: NO 'safComp=LOG,safSu=SC-2,safSg=2N,safApp=OpenSAF' faulted due to 'errorReport' : Recovery is 'componentRestart'
==

-AVM

On 2/23/2017 4:25 PM, Vu Minh Nguyen wrote:
> Hi Mahesh,
>
> Sorry, I missed fixing the other point. See 2258_additional_fix_err.patch.
>
> For your convenience, I packed them all into the new version, 2258_v10.patch.
>
> Regards, Vu
>
>> -Original Message-
>> From: Vu Minh Nguyen [mailto:vu.m.ngu...@dektech.com.au]
>> Sent: Thursday, February 23, 2017 5:39 PM
>> To: 'A V Mahesh'; 'lennart.l...@ericsson.com'; 'canh.v.tru...@dektech.com.au'
>> Cc: 'opensaf-devel@lists.sourceforge.net'
>> Subject: RE: [devel] [PATCH 0 of 3] Review Request for log: add alternative
>> destinations of log records [#2258] V4
>>
>> Hi Mahesh,
>>
>> I found the root cause. It is because in `log: implement
>> SaLogFilterSetCallbackT and version handling [#2146]`,
>> Canh introduced MBCSV version #6, but I missed adding that info when
>> rebasing.
>>
>> The attached patch contains the fix. Can you apply it to see if the problem
>> still occurs? Thanks.
>> >> Regards, Vu >> >>> -Original Message- >>> From: A V Mahesh [mailto:mahesh.va...@oracle.com] >>> Sent: Thursday, February 23, 2017 5:32 PM >>> To: Vu Minh Nguyen ; >>> lennart.l...@ericsson.com; canh.v.tru...@dektech.com.au >>> Cc: opensaf-devel@lists.sourceforge.net >>> Subject: Re: [devel] [PATCH 0 of 3] Review Request for log: add > alternative >>> destinations of log records [#2258] V4 >>> >>> Hi Vu, >>> >>> Please find attached , below is the time stamp of SC-2 >>> >>> >> == >>> == >>> >>> Feb 23 15:55:30 SC-2 osafimmnd[6978]: NO Implementer (applier) >>> connected: 15 (@safAmfService2020f) <127, 2020f> >>> Feb 23 15:55:30 SC-2 osaflogd[6988]: NO LOGSV_DATA_GROUPNAME not >>> found >>> Feb 23 15:55:30 SC-2 osaflogd[6988]: NO LOG root directory is: >>> "/var/log/opensaf/saflog" >>> Feb 23 15:55:30 SC-2 osaflogd[6988]: NO LOG data group is: "" >>> Feb 23 15:55:30 SC-2 osaflogd[6988]:
Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative destinations of log records [#2258] V4
Hi Mahesh,

I found the root cause. It is because in `log: implement SaLogFilterSetCallbackT and version handling [#2146]`, Canh introduced MBCSV version #6, but I missed adding that info when rebasing.

The attached patch contains the fix. Can you apply it to see if the problem still occurs? Thanks.

Regards, Vu

> -Original Message-
> From: A V Mahesh [mailto:mahesh.va...@oracle.com]
> Sent: Thursday, February 23, 2017 5:32 PM
> To: Vu Minh Nguyen; lennart.l...@ericsson.com; canh.v.tru...@dektech.com.au
> Cc: opensaf-devel@lists.sourceforge.net
> Subject: Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative
> destinations of log records [#2258] V4
>
> Hi Vu,
>
> Please find attached; below is the time stamp of SC-2.
>
> ==
>
> Feb 23 15:55:30 SC-2 osafimmnd[6978]: NO Implementer (applier) connected: 15 (@safAmfService2020f) <127, 2020f>
> Feb 23 15:55:30 SC-2 osaflogd[6988]: NO LOGSV_DATA_GROUPNAME not found
> Feb 23 15:55:30 SC-2 osaflogd[6988]: NO LOG root directory is: "/var/log/opensaf/saflog"
> Feb 23 15:55:30 SC-2 osaflogd[6988]: NO LOG data group is: ""
> Feb 23 15:55:30 SC-2 osaflogd[6988]: NO LGS_MBCSV_VERSION = 7
> Feb 23 15:55:30 SC-2 osafamfnd[7028]: NO Assigned 'safSi=SC-2N,safApp=OpenSAF' STANDBY to 'safSu=SC-2,safSg=2N,safApp=OpenSAF'
> Feb 23 15:55:30 SC-2 opensafd: OpenSAF(5.1.M0 - ) services successfully started
> done
> SC-2:/var/log/opensaf # Feb 23 15:55:31 SC-2 osafimmnd[6978]: NO Implementer (applier) connected: 16 (@OpenSafImmReplicatorB) <144, 2020f>
> Feb 23 15:55:31 SC-2 osafntfimcnd[7185]: NO Started
> Feb 23 15:55:33 SC-2 osafamfd[7018]: NO Cold sync complete!
> Feb 23 15:55:33 SC-2 osaflogd[6988]: WA FAILED: ncs_patricia_tree_add, > client_id 0 > Feb 23 15:55:33 SC-2 osaflogd[6988]: ER Exiting with message: Could not > create new client > Feb 23 15:55:33 SC-2 osafamfnd[7028]: NO > 'safSu=SC-2,safSg=2N,safApp=OpenSAF' component restart probation timer > started (timeout: 600 ns) > Feb 23 15:55:33 SC-2 osafamfnd[7028]: NO Restarting a component of > 'safSu=SC-2,safSg=2N,safApp=OpenSAF' (comp restart count: 1) > Feb 23 15:55:33 SC-2 osafamfnd[7028]: NO > 'safComp=LOG,safSu=SC-2,safSg=2N,safApp=OpenSAF' faulted due to > 'errorReport' : Recovery is 'componentRestart' > > == > == > > -AVM > > > On 2/23/2017 3:39 PM, Vu Minh Nguyen wrote: > > Hi Mahesh, > > > > No change in V7 vs V9. Just do rebase the code on latest changeset. > > > > I have tried to clean up all, and rebuild the cluster to see what you are > > observing, > > and I am not able to reproduce the problem, I have tried several times. > > > > Can you provide me the osaflogd trace on both SCs node? Thanks. 
> > > > Regards, Vu > > > >> -Original Message- > >> From: A V Mahesh [mailto:mahesh.va...@oracle.com] > >> Sent: Thursday, February 23, 2017 4:48 PM > >> To: Vu Minh Nguyen ; > >> lennart.l...@ericsson.com; canh.v.tru...@dektech.com.au > >> Cc: opensaf-devel@lists.sourceforge.net > >> Subject: Re: [devel] [PATCH 0 of 3] Review Request for log: add > > alternative > >> destinations of log records [#2258] V4 > >> > >> Hi Vu, > >> > >> On 2/23/2017 3:13 PM, A V Mahesh wrote: > >>> Not sure what are other change compare to V7 to V9 , New problems > got > >>> introduced > >>> > >>> Both nodes SC-1 & SC-2 ( with 2258_v9.patch ) , trying bring up both > >>> SC`s simple node bringup , > >>> > >>> SC-2 going for reboot with following : > >>> > >>> > >> > == > >> > == > >> > >>> > >>> Feb 23 15:05:32 SC-2 osaflogd[29988]: NO LOGSV_DATA_GROUPNAME > >> not found > >>> Feb 23 15:05:32 SC-2 osaflogd[29988]: NO LOG root directory is: > >>> "/var/log/opensaf/saflog" > >>> Feb 23 15:05:32 SC-2 osaflogd[29988]: NO LOG data group is: "" > >>> Feb 23 15:05:32 SC-2 osafimmnd[29978]: NO Implementer (applier) > >>> connected: 16 (@safAmfService2020f) <127, 2020f> > >>> Feb 23 15:05:32 SC-2 osaflogd[29988]: NO LGS_MBCSV_VERSION = 7 > >>> Feb 23 15:05:32 SC-2 osaflogd[29988]: WA FAILED: > >>> ncs_patricia_tree_add, client_id 0 > >>> Feb 23 15:05:32 SC-2 osafamfnd[30028]: NO Assigned > >>> 'safSi=SC-2N,safApp=OpenSAF' STANDBY to > >>> 'safSu=SC-2,safSg=2N,safApp=OpenSAF' > >>> Feb 23 15:05:32 SC-2 osaflogd[29988]: ER Exiting with message: Could > >>> not create new client > >>> Feb 23 15:05:32 SC-2 osafamfnd[30028]: NO > >>> 'safSu=SC-2,safSg=2N,safApp=OpenSAF' component restart probation > >> timer > >>> started (timeout: 600 ns) > >>> Feb 23 15:05:32 SC-2 osafamfnd[30028]: NO Restarting a component of > >>> 'safSu=SC-2,safSg=2N,safApp=OpenSAF' (comp restart count: 1) >
Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative destinations of log records [#2258] V4
Hi Mahesh,

There is no change in V9 compared to V7; I just rebased the code on the latest changeset.

I have tried to clean everything up and rebuild the cluster to see what you are observing, and I am not able to reproduce the problem; I have tried several times.

Can you provide me the osaflogd traces on both SC nodes? Thanks.

Regards, Vu

> -Original Message-
> From: A V Mahesh [mailto:mahesh.va...@oracle.com]
> Sent: Thursday, February 23, 2017 4:48 PM
> To: Vu Minh Nguyen; lennart.l...@ericsson.com; canh.v.tru...@dektech.com.au
> Cc: opensaf-devel@lists.sourceforge.net
> Subject: Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative
> destinations of log records [#2258] V4
>
> Hi Vu,
>
> On 2/23/2017 3:13 PM, A V Mahesh wrote:
> > Not sure what other changes there are from V7 to V9, but new problems got
> > introduced.
> >
> > Both nodes SC-1 & SC-2 (with 2258_v9.patch): trying a simple node bring-up
> > of both SCs, SC-2 is going for reboot with the following:
> >
> > ==
> >
> > Feb 23 15:05:32 SC-2 osaflogd[29988]: NO LOGSV_DATA_GROUPNAME not found
> > Feb 23 15:05:32 SC-2 osaflogd[29988]: NO LOG root directory is: "/var/log/opensaf/saflog"
> > Feb 23 15:05:32 SC-2 osaflogd[29988]: NO LOG data group is: ""
> > Feb 23 15:05:32 SC-2 osafimmnd[29978]: NO Implementer (applier) connected: 16 (@safAmfService2020f) <127, 2020f>
> > Feb 23 15:05:32 SC-2 osaflogd[29988]: NO LGS_MBCSV_VERSION = 7
> > Feb 23 15:05:32 SC-2 osaflogd[29988]: WA FAILED: ncs_patricia_tree_add, client_id 0
> > Feb 23 15:05:32 SC-2 osafamfnd[30028]: NO Assigned 'safSi=SC-2N,safApp=OpenSAF' STANDBY to 'safSu=SC-2,safSg=2N,safApp=OpenSAF'
> > Feb 23 15:05:32 SC-2 osaflogd[29988]: ER Exiting with message: Could not create new client
> > Feb 23 15:05:32 SC-2 osafamfnd[30028]: NO 'safSu=SC-2,safSg=2N,safApp=OpenSAF' component restart probation timer started (timeout: 600 ns)
> > Feb 23 15:05:32 SC-2 osafamfnd[30028]: NO Restarting a component of
'safSu=SC-2,safSg=2N,safApp=OpenSAF' (comp restart count: 1) > > Feb 23 15:05:32 SC-2 osafamfnd[30028]: NO > > 'safComp=LOG,safSu=SC-2,safSg=2N,safApp=OpenSAF' faulted due to > > 'errorReport' : Recovery is 'componentRestart' > > Feb 23 15:05:32 SC-2 opensafd[29908]: ER Service LOGD has unexpectedly > > crashed. Unable to continue, exiting > > Feb 23 15:05:32 SC-2 osafamfd[30018]: exiting for shutdown > > Feb 23 15:05:32 SC-2 osafamfnd[30028]: ER AMFD has unexpectedly > > crashed. Rebooting node > > Feb 23 15:05:32 SC-2 osafamfnd[30028]: Rebooting OpenSAF NodeId = > > 131599 EE Name = , Reason: AMFD has unexpectedly crashed. Rebooting > > node, OwnNodeId = 131599, SupervisionTime = 60 > > Feb 23 15:05:32 SC-2 opensaf_reboot: Rebooting local node; timeout=60 > > Feb 23 15:06:04 SC-2 syslog-ng[1180]: syslog-ng starting up; > > version='2.0.9' > > > > > == > == > > > > Some times : > > == > == > > > Feb 23 15:15:19 SC-2 osafrded[3858]: NO RDE role set to STANDBY > Feb 23 15:15:19 SC-2 osafrded[3858]: NO Peer up on node 0x2010f > Feb 23 15:15:19 SC-2 osafrded[3858]: NO Got peer info request from node > 0x2010f with role ACTIVE > Feb 23 15:15:19 SC-2 osafrded[3858]: NO Got peer info response from node > 0x2010f with role ACTIVE > Feb 23 15:15:19 SC-2 osafimmd[3877]: NO MDS event from svc_id 24 > (change:3, dest:13) > Feb 23 15:15:19 SC-2 osafimmd[3877]: NO MDS event from svc_id 24 > (change:5, dest:13) > Feb 23 15:15:19 SC-2 osafimmd[3877]: NO MDS event from svc_id 24 > (change:5, dest:13) > Feb 23 15:15:19 SC-2 osafimmd[3877]: NO MDS event from svc_id 25 > (change:3, dest:565217560625168) > Feb 23 15:15:19 SC-2 osafimmd[3877]: NO MDS event from svc_id 25 > (change:3, dest:564114674417680) > Feb 23 15:15:19 SC-2 osaflogd[3898]: NO LOGSV_DATA_GROUPNAME not > found > Feb 23 15:15:19 SC-2 osaflogd[3898]: NO LOG root directory is: > "/var/log/opensaf/saflog" > Feb 23 15:15:19 SC-2 osaflogd[3898]: NO LOG data group is: "" > Feb 23 15:15:19 SC-2 osafimmnd[3888]: NO 
Implementer (applier) > connected: 15 (@safAmfService2020f) <127, 2020f> > Feb 23 15:15:19 SC-2 osaflogd[3898]: NO LGS_MBCSV_VERSION = 7 > Feb 23 15:15:19 SC-2 osaflogd[3898]: ER Exiting with message: Client > attributes differ > Feb 23 15:15:19 SC-2 osafamfnd[3938]: NO > 'safSu=SC-2,safSg=2N,safApp=OpenSAF' component restart probation timer > started (timeout: 600 ns) > Feb 23 15:15:19 SC-2 osafamfnd[3938]: NO Restarting a component of > 'safSu=SC-2,safSg=2N,safApp=OpenSAF' (comp restart count: 1) > Feb 23 15:15:19 SC-2 osafamfnd[3938]: NO >
Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative destinations of log records [#2258] V4
Hi Vu, On 2/23/2017 3:13 PM, A V Mahesh wrote: > > Not sure what are other change compare to V7 to V9 , New problems got > introduced > > Both nodes SC-1 & SC-2 ( with 2258_v9.patch ) , trying bring up both > SC`s simple node bringup , > > SC-2 going for reboot with following : > > > > > > Feb 23 15:05:32 SC-2 osaflogd[29988]: NO LOGSV_DATA_GROUPNAME not found > Feb 23 15:05:32 SC-2 osaflogd[29988]: NO LOG root directory is: > "/var/log/opensaf/saflog" > Feb 23 15:05:32 SC-2 osaflogd[29988]: NO LOG data group is: "" > Feb 23 15:05:32 SC-2 osafimmnd[29978]: NO Implementer (applier) > connected: 16 (@safAmfService2020f) <127, 2020f> > Feb 23 15:05:32 SC-2 osaflogd[29988]: NO LGS_MBCSV_VERSION = 7 > Feb 23 15:05:32 SC-2 osaflogd[29988]: WA FAILED: > ncs_patricia_tree_add, client_id 0 > Feb 23 15:05:32 SC-2 osafamfnd[30028]: NO Assigned > 'safSi=SC-2N,safApp=OpenSAF' STANDBY to > 'safSu=SC-2,safSg=2N,safApp=OpenSAF' > Feb 23 15:05:32 SC-2 osaflogd[29988]: ER Exiting with message: Could > not create new client > Feb 23 15:05:32 SC-2 osafamfnd[30028]: NO > 'safSu=SC-2,safSg=2N,safApp=OpenSAF' component restart probation timer > started (timeout: 600 ns) > Feb 23 15:05:32 SC-2 osafamfnd[30028]: NO Restarting a component of > 'safSu=SC-2,safSg=2N,safApp=OpenSAF' (comp restart count: 1) > Feb 23 15:05:32 SC-2 osafamfnd[30028]: NO > 'safComp=LOG,safSu=SC-2,safSg=2N,safApp=OpenSAF' faulted due to > 'errorReport' : Recovery is 'componentRestart' > Feb 23 15:05:32 SC-2 opensafd[29908]: ER Service LOGD has unexpectedly > crashed. Unable to continue, exiting > Feb 23 15:05:32 SC-2 osafamfd[30018]: exiting for shutdown > Feb 23 15:05:32 SC-2 osafamfnd[30028]: ER AMFD has unexpectedly > crashed. Rebooting node > Feb 23 15:05:32 SC-2 osafamfnd[30028]: Rebooting OpenSAF NodeId = > 131599 EE Name = , Reason: AMFD has unexpectedly crashed. 
Rebooting > node, OwnNodeId = 131599, SupervisionTime = 60 > Feb 23 15:05:32 SC-2 opensaf_reboot: Rebooting local node; timeout=60 > Feb 23 15:06:04 SC-2 syslog-ng[1180]: syslog-ng starting up; > version='2.0.9' > > > > Some times : Feb 23 15:15:19 SC-2 osafrded[3858]: NO RDE role set to STANDBY Feb 23 15:15:19 SC-2 osafrded[3858]: NO Peer up on node 0x2010f Feb 23 15:15:19 SC-2 osafrded[3858]: NO Got peer info request from node 0x2010f with role ACTIVE Feb 23 15:15:19 SC-2 osafrded[3858]: NO Got peer info response from node 0x2010f with role ACTIVE Feb 23 15:15:19 SC-2 osafimmd[3877]: NO MDS event from svc_id 24 (change:3, dest:13) Feb 23 15:15:19 SC-2 osafimmd[3877]: NO MDS event from svc_id 24 (change:5, dest:13) Feb 23 15:15:19 SC-2 osafimmd[3877]: NO MDS event from svc_id 24 (change:5, dest:13) Feb 23 15:15:19 SC-2 osafimmd[3877]: NO MDS event from svc_id 25 (change:3, dest:565217560625168) Feb 23 15:15:19 SC-2 osafimmd[3877]: NO MDS event from svc_id 25 (change:3, dest:564114674417680) Feb 23 15:15:19 SC-2 osaflogd[3898]: NO LOGSV_DATA_GROUPNAME not found Feb 23 15:15:19 SC-2 osaflogd[3898]: NO LOG root directory is: "/var/log/opensaf/saflog" Feb 23 15:15:19 SC-2 osaflogd[3898]: NO LOG data group is: "" Feb 23 15:15:19 SC-2 osafimmnd[3888]: NO Implementer (applier) connected: 15 (@safAmfService2020f) <127, 2020f> Feb 23 15:15:19 SC-2 osaflogd[3898]: NO LGS_MBCSV_VERSION = 7 Feb 23 15:15:19 SC-2 osaflogd[3898]: ER Exiting with message: Client attributes differ Feb 23 15:15:19 SC-2 osafamfnd[3938]: NO 'safSu=SC-2,safSg=2N,safApp=OpenSAF' component restart probation timer started (timeout: 600 ns) Feb 23 15:15:19 SC-2 osafamfnd[3938]: NO Restarting a component of 'safSu=SC-2,safSg=2N,safApp=OpenSAF' (comp restart count: 1) Feb 23 15:15:19 SC-2 osafamfnd[3938]: NO 'safComp=LOG,safSu=SC-2,safSg=2N,safApp=OpenSAF' faulted due to 'errorReport' : Recovery is 'componentRestart' Feb 23 15:15:19 SC-2 opensafd[3818]: ER Service LOGD has unexpectedly crashed. 
Unable to continue, exiting Feb 23 15:15:20 SC-2 osafamfd[3928]: exiting for shutdown Feb 23 15:15:20 SC-2 osafamfnd[3938]: ER AMFD has unexpectedly crashed. Rebooting node Feb 23 15:15:20 SC-2 osafamfnd[3938]: Rebooting OpenSAF NodeId = 131599 EE Name = , Reason: AMFD has unexpectedly crashed. Rebooting node, OwnNodeId = 131599, SupervisionTime = 60 Feb 23 15:15:20 SC-2 osafimmnd[3888]: NO Implementer locally disconnected. Marking it as doomed 15 <127, 2020f> (@safAmfService2020f) Feb 23 15:15:20 SC-2 osafimmnd[3888]: NO Implementer disconnected 15 <127, 2020f> (@safAmfService2020f) Feb 23 15:15:20 SC-2 opensaf_reboot: Rebooting local node; timeout=60
Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative destinations of log records [#2258] V4
Hi Vu,

I am not sure what other changes there are from V7 to V9, but new problems got introduced.

Both nodes SC-1 & SC-2 (with 2258_v9.patch): trying a simple node bring-up of both SCs, SC-2 is going for reboot with the following:

Feb 23 15:05:32 SC-2 osaflogd[29988]: NO LOGSV_DATA_GROUPNAME not found
Feb 23 15:05:32 SC-2 osaflogd[29988]: NO LOG root directory is: "/var/log/opensaf/saflog"
Feb 23 15:05:32 SC-2 osaflogd[29988]: NO LOG data group is: ""
Feb 23 15:05:32 SC-2 osafimmnd[29978]: NO Implementer (applier) connected: 16 (@safAmfService2020f) <127, 2020f>
Feb 23 15:05:32 SC-2 osaflogd[29988]: NO LGS_MBCSV_VERSION = 7
Feb 23 15:05:32 SC-2 osaflogd[29988]: WA FAILED: ncs_patricia_tree_add, client_id 0
Feb 23 15:05:32 SC-2 osafamfnd[30028]: NO Assigned 'safSi=SC-2N,safApp=OpenSAF' STANDBY to 'safSu=SC-2,safSg=2N,safApp=OpenSAF'
Feb 23 15:05:32 SC-2 osaflogd[29988]: ER Exiting with message: Could not create new client
Feb 23 15:05:32 SC-2 osafamfnd[30028]: NO 'safSu=SC-2,safSg=2N,safApp=OpenSAF' component restart probation timer started (timeout: 600 ns)
Feb 23 15:05:32 SC-2 osafamfnd[30028]: NO Restarting a component of 'safSu=SC-2,safSg=2N,safApp=OpenSAF' (comp restart count: 1)
Feb 23 15:05:32 SC-2 osafamfnd[30028]: NO 'safComp=LOG,safSu=SC-2,safSg=2N,safApp=OpenSAF' faulted due to 'errorReport' : Recovery is 'componentRestart'
Feb 23 15:05:32 SC-2 opensafd[29908]: ER Service LOGD has unexpectedly crashed. Unable to continue, exiting
Feb 23 15:05:32 SC-2 osafamfd[30018]: exiting for shutdown
Feb 23 15:05:32 SC-2 osafamfnd[30028]: ER AMFD has unexpectedly crashed. Rebooting node
Feb 23 15:05:32 SC-2 osafamfnd[30028]: Rebooting OpenSAF NodeId = 131599 EE Name = , Reason: AMFD has unexpectedly crashed. Rebooting node, OwnNodeId = 131599, SupervisionTime = 60
Feb 23 15:05:32 SC-2 opensaf_reboot: Rebooting local node; timeout=60
Feb 23 15:06:04 SC-2 syslog-ng[1180]: syslog-ng starting up; version='2.0.9'

-AVM

On 2/23/2017 2:20 PM, Vu Minh Nguyen wrote:
> Hi Mahesh,
>
> This is the latest code, rebased on the latest changeset.
>
> Note that, in the attached patch, I have included one more dependency,
> on the base::Hash() function, from the patch sent by Anders [#2266].
>
> Please review the patch, then comment if any. Thanks.
>
> Regards, Vu
>
>> -Original Message-
>> From: A V Mahesh [mailto:mahesh.va...@oracle.com]
>> Sent: Thursday, February 23, 2017 2:03 PM
>> To: Vu Minh Nguyen; lennart.l...@ericsson.com; canh.v.tru...@dektech.com.au
>> Cc: opensaf-devel@lists.sourceforge.net
>> Subject: Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative
>> destinations of log records [#2258] V4
>>
>> Hi Vu,
>>
>> Now we are able to proceed further with the V7 patch `2258_v7.patch`; the
>> in-service upgrade is working fine, because of the encode/decode changes
>> done in the V7 patch.
>>
>> But we have another small test case issue (/usr/bin/logtest 5 17
>> Segmentation fault); once we resolve this too, we can conclude that all
>> the basic functionality is working, then you can re-publish the V7 patch
>> (if a change occurred in Lennart's #2258 V2, please do publish that as
>> well) so that I can go for the code review.
>>
>> Steps to reproduce the test case issue:
>>
>> 1) Bring up the old node as Active (without `2258_v7.patch`)
>> 2) Bring up the new node as Standby (with `2258_v7.patch`)
>> 3) Do `amf-adm si-swap safSi=SC-2N,safApp=OpenSAF`
>> 4) Run `/usr/bin/logtest 5 17` on the new Active (because of the si-swap)
>>
>> Note: both nodes have the new XML attributes populated.
>>
>> ==
>>
>> gdb /usr/bin/logtest
>> (gdb) r 5
>>
>> 16 PASSED CCB Object Modify, change root directory. Path exist. OK;
>> Detaching after fork from child process 13797.
>> Set values Fail >> [New Thread 0x77ff7b00 (LWP 13801)] >> [New Thread 0x77fc4b00 (LWP 13802)] >> >> Program received signal SIGSEGV, Segmentation fault. >> 0x555688ea in read_and_compare.isra.7 () at >> src/log/apitest/tet_LogOiOps.c:1891 >> 1891src/log/apitest/tet_LogOiOps.c: No such file or directory. >> in src/log/apitest/tet_LogOiOps.c >> (gdb) bt >> #0 0x555688ea in read_and_compare.isra.7 () at >> src/log/apitest/tet_LogOiOps.c:1891 >> #1 0x55568a4b in check_logRecordDestinationConfigurationAdd () >> at src/log/apitest/tet_LogOiOps.c:1941 >> #2 0x55571b05 in run_test_case () >> #3 0x55571feb in test_run () >> #4 0xbfad in main () at src/log/apitest/logtest.c:569 >> (gdb) >> >> == >>
Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative destinations of log records [#2258] V4
Hi Vu,

Can you please publish the patch(es) to the community against each #ticket, so that I can do the code review?

-AVM

On 2/23/2017 2:20 PM, Vu Minh Nguyen wrote:
> Hi Mahesh,
>
> This is the latest code, rebased on the latest changeset.
>
> Note that, in the attached patch, I have included one more dependency,
> on the base::Hash() function, from the patch sent by Anders [#2266].
>
> Please review the patch, then comment if any. Thanks.
>
> Regards, Vu
>
>> -Original Message-
>> From: A V Mahesh [mailto:mahesh.va...@oracle.com]
>> Sent: Thursday, February 23, 2017 2:03 PM
>> To: Vu Minh Nguyen; lennart.l...@ericsson.com; canh.v.tru...@dektech.com.au
>> Cc: opensaf-devel@lists.sourceforge.net
>> Subject: Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative
>> destinations of log records [#2258] V4
>>
>> Hi Vu,
>>
>> Now we are able to proceed further with the V7 patch `2258_v7.patch`; the
>> in-service upgrade is working fine, because of the encode/decode changes
>> done in the V7 patch.
>>
>> But we have another small test case issue (/usr/bin/logtest 5 17
>> Segmentation fault); once we resolve this too, we can conclude that all
>> the basic functionality is working, then you can re-publish the V7 patch
>> (if a change occurred in Lennart's #2258 V2, please do publish that as
>> well) so that I can go for the code review.
>>
>> Steps to reproduce the test case issue:
>>
>> 1) Bring up the old node as Active (without `2258_v7.patch`)
>> 2) Bring up the new node as Standby (with `2258_v7.patch`)
>> 3) Do `amf-adm si-swap safSi=SC-2N,safApp=OpenSAF`
>> 4) Run `/usr/bin/logtest 5 17` on the new Active (because of the si-swap)
>>
>> Note: both nodes have the new XML attributes populated.
>>
>> ==
>>
>> gdb /usr/bin/logtest
>> (gdb) r 5
>>
>> 16 PASSED CCB Object Modify, change root directory. Path exist. OK;
>> Detaching after fork from child process 13797.
>> Set values Fail >> [New Thread 0x77ff7b00 (LWP 13801)] >> [New Thread 0x77fc4b00 (LWP 13802)] >> >> Program received signal SIGSEGV, Segmentation fault. >> 0x555688ea in read_and_compare.isra.7 () at >> src/log/apitest/tet_LogOiOps.c:1891 >> 1891src/log/apitest/tet_LogOiOps.c: No such file or directory. >> in src/log/apitest/tet_LogOiOps.c >> (gdb) bt >> #0 0x555688ea in read_and_compare.isra.7 () at >> src/log/apitest/tet_LogOiOps.c:1891 >> #1 0x55568a4b in check_logRecordDestinationConfigurationAdd () >> at src/log/apitest/tet_LogOiOps.c:1941 >> #2 0x55571b05 in run_test_case () >> #3 0x55571feb in test_run () >> #4 0xbfad in main () at src/log/apitest/logtest.c:569 >> (gdb) >> >> == >> = >> >> >> -AVM >> >> On 2/23/2017 11:44 AM, Vu Minh Nguyen wrote: >>> Hi Mahesh, >>> >>> Maybe it was broken when transmitting. I zipped to a tar file. Please > try it >>> one more. >>> >>> Regards, Vu >>> >>> -Original Message- From: A V Mahesh [mailto:mahesh.va...@oracle.com] Sent: Thursday, February 23, 2017 12:54 PM To: Vu Minh Nguyen ; lennart.l...@ericsson.com; canh.v.tru...@dektech.com.au Cc: opensaf-devel@lists.sourceforge.net Subject: Re: [devel] [PATCH 0 of 3] Review Request for log: add >>> alternative destinations of log records [#2258] V4 Hi Vu, On 2/23/2017 10:20 AM, Vu Minh Nguyen wrote: > Hi Mahesh, > > Can you try with 2258_v7.patch I just sent to you? I stripedchangeset: 8610 of today's latest staging ( `hg strip 8610` which removed log: implement SaLogFilterSetCallbackT and version handling [#2146]) and try to apply your `2258_v7.patch`, it says `malformed patch at line 3324`. -AVM > I have pulled the latest code on OpenSAF 5.1 branch, re-created the >>> cluster. > And it works with the case old active SC-1 (OpenSAF 5.1) and new >> standby > SC-2 (with 2258_v7.patch included in). 
> > To apply 2258_v7.patch, please do remove the just pushed ticket "log: > implement SaLogFilterSetCallbackT and version handling [#2146]" , > I have not rebased the code on that yet. > > Regards, Vu > >> -Original Message- >> From: A V Mahesh [mailto:mahesh.va...@oracle.com] >> Sent: Thursday, February 23, 2017 11:45 AM >> To: Vu Minh Nguyen ; >> lennart.l...@ericsson.com; canh.v.tru...@dektech.com.au >> Cc: opensaf-devel@lists.sourceforge.net >> Subject: Re: [devel] [PATCH 0 of 3] Review Request for log: add > alternative >> destinations of log records [#2258] V4 >> >> Hi Vu/Lennart, >> >> >> In broad WITHOUT the #2258 patch, the same code/setup working fine with >> 2 sc node (staging changeset:
Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative destinations of log records [#2258] V4
Hi Vu,

Now we are able to proceed further with the V7 patch `2258_v7.patch`; the in-service upgrade is working fine, because of the encode/decode changes done in the V7 patch.

But we have another small test case issue (/usr/bin/logtest 5 17 Segmentation fault); once we resolve this too, we can conclude that all the basic functionality is working, then you can re-publish the V7 patch (if a change occurred in Lennart's #2258 V2, please do publish that as well) so that I can go for the code review.

Steps to reproduce the test case issue:

1) Bring up the old node as Active (without `2258_v7.patch`)
2) Bring up the new node as Standby (with `2258_v7.patch`)
3) Do `amf-adm si-swap safSi=SC-2N,safApp=OpenSAF`
4) Run `/usr/bin/logtest 5 17` on the new Active (because of the si-swap)

Note: both nodes have the new XML attributes populated.

===

gdb /usr/bin/logtest
(gdb) r 5

16 PASSED CCB Object Modify, change root directory. Path exist. OK;
Detaching after fork from child process 13797.
Set values Fail
[New Thread 0x77ff7b00 (LWP 13801)]
[New Thread 0x77fc4b00 (LWP 13802)]

Program received signal SIGSEGV, Segmentation fault.
0x555688ea in read_and_compare.isra.7 () at src/log/apitest/tet_LogOiOps.c:1891
1891    src/log/apitest/tet_LogOiOps.c: No such file or directory.
        in src/log/apitest/tet_LogOiOps.c
(gdb) bt
#0  0x555688ea in read_and_compare.isra.7 () at src/log/apitest/tet_LogOiOps.c:1891
#1  0x55568a4b in check_logRecordDestinationConfigurationAdd () at src/log/apitest/tet_LogOiOps.c:1941
#2  0x55571b05 in run_test_case ()
#3  0x55571feb in test_run ()
#4  0xbfad in main () at src/log/apitest/logtest.c:569
(gdb)

===

-AVM

On 2/23/2017 11:44 AM, Vu Minh Nguyen wrote:
> Hi Mahesh,
>
> Maybe it was broken when transmitting. I zipped it to a tar file. Please try it
> one more time.
> > Regards, Vu > > >> -Original Message- >> From: A V Mahesh [mailto:mahesh.va...@oracle.com] >> Sent: Thursday, February 23, 2017 12:54 PM >> To: Vu Minh Nguyen; >> lennart.l...@ericsson.com; canh.v.tru...@dektech.com.au >> Cc: opensaf-devel@lists.sourceforge.net >> Subject: Re: [devel] [PATCH 0 of 3] Review Request for log: add > alternative >> destinations of log records [#2258] V4 >> >> Hi Vu, >> >> On 2/23/2017 10:20 AM, Vu Minh Nguyen wrote: >>> Hi Mahesh, >>> >>> Can you try with 2258_v7.patch I just sent to you? >> I stripedchangeset: 8610 of today's latest staging ( `hg strip >> 8610` which removed log: implement SaLogFilterSetCallbackT and version >> handling [#2146]) >> and try to apply your `2258_v7.patch`, it says `malformed patch at line >> 3324`. >> >> -AVM >>> I have pulled the latest code on OpenSAF 5.1 branch, re-created the > cluster. >>> And it works with the case old active SC-1 (OpenSAF 5.1) and new standby >>> SC-2 (with 2258_v7.patch included in). >>> >>> To apply 2258_v7.patch, please do remove the just pushed ticket "log: >>> implement SaLogFilterSetCallbackT and version handling [#2146]" , >>> I have not rebased the code on that yet. >>> >>> Regards, Vu >>> -Original Message- From: A V Mahesh [mailto:mahesh.va...@oracle.com] Sent: Thursday, February 23, 2017 11:45 AM To: Vu Minh Nguyen ; lennart.l...@ericsson.com; canh.v.tru...@dektech.com.au Cc: opensaf-devel@lists.sourceforge.net Subject: Re: [devel] [PATCH 0 of 3] Review Request for log: add >>> alternative destinations of log records [#2258] V4 Hi Vu/Lennart, In broad WITHOUT the #2258 patch, the same code/setup working fine >> with 2 sc node (staging changeset: 8609 ), as soon as we apply `2258_v5.patch` V5 patch on staging (changeset: 8609 ) that you have provided yesterday, on one sc node and try to bring up that in to cluster (in-service test) we are observing the issue of new node (with #2258 patch) not joining cluster. 
>> == eb 23 10:01:59 SC-1 osafimmnd[15279]: NO Implementer (applier) connected: 15 (@safAmfService2010f) <127, 2010f> Feb 23 10:01:59 SC-1 osaflogd[15289]: NO LOGSV_DATA_GROUPNAME >> not found Feb 23 10:01:59 SC-1 osaflogd[15289]: NO LOG root directory is: "/var/log/opensaf/saflog" Feb 23 10:01:59 SC-1 osaflogd[15289]: NO LOG data group is: "" Feb 23 10:01:59 SC-1 osaflogd[15289]: NO LGS_MBCSV_VERSION = 7 Feb 23 10:01:59 SC-1 osafamfnd[15329]: NO Assigned 'safSi=SC-2N,safApp=OpenSAF' STANDBY to 'safSu=SC- 1,safSg=2N,safApp=OpenSAF' Feb 23 10:01:59 SC-1 opensafd: OpenSAF(5.1.M0 - ) services successfully started Feb 23 10:01:59 SC-1
Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative destinations of log records [#2258] V4
Hi Vu,

On 2/23/2017 10:20 AM, Vu Minh Nguyen wrote:
> Hi Mahesh,
>
> Can you try with 2258_v7.patch I just sent to you?

I stripped changeset 8610 from today's latest staging (`hg strip 8610`, which removed "log: implement SaLogFilterSetCallbackT and version handling [#2146]") and tried to apply your `2258_v7.patch`; it fails with `malformed patch at line 3324`.

-AVM

> I have pulled the latest code on the OpenSAF 5.1 branch and re-created the cluster.
> And it works with the case of old active SC-1 (OpenSAF 5.1) and new standby
> SC-2 (with 2258_v7.patch included).
>
> To apply 2258_v7.patch, please remove the just-pushed ticket "log:
> implement SaLogFilterSetCallbackT and version handling [#2146]";
> I have not rebased the code on that yet.
>
> Regards, Vu
>
>> -Original Message-
>> From: A V Mahesh [mailto:mahesh.va...@oracle.com]
>> Sent: Thursday, February 23, 2017 11:45 AM
>> To: Vu Minh Nguyen; lennart.l...@ericsson.com; canh.v.tru...@dektech.com.au
>> Cc: opensaf-devel@lists.sourceforge.net
>> Subject: Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative
>> destinations of log records [#2258] V4
>>
>> Hi Vu/Lennart,
>>
>> In brief: WITHOUT the #2258 patch, the same code/setup works fine with
>> 2 SC nodes (staging changeset 8609). As soon as we apply the `2258_v5.patch`
>> V5 patch, which you provided yesterday, on one SC node on staging
>> (changeset 8609) and try to bring that node up into the cluster
>> (in-service test), we observe the issue of the new node (with the #2258
>> patch) not joining the cluster.
>> >> == >> >> eb 23 10:01:59 SC-1 osafimmnd[15279]: NO Implementer (applier) >> connected: 15 (@safAmfService2010f) <127, 2010f> >> Feb 23 10:01:59 SC-1 osaflogd[15289]: NO LOGSV_DATA_GROUPNAME not >> found >> Feb 23 10:01:59 SC-1 osaflogd[15289]: NO LOG root directory is: >> "/var/log/opensaf/saflog" >> Feb 23 10:01:59 SC-1 osaflogd[15289]: NO LOG data group is: "" >> Feb 23 10:01:59 SC-1 osaflogd[15289]: NO LGS_MBCSV_VERSION = 7 >> Feb 23 10:01:59 SC-1 osafamfnd[15329]: NO Assigned >> 'safSi=SC-2N,safApp=OpenSAF' STANDBY to 'safSu=SC- >> 1,safSg=2N,safApp=OpenSAF' >> Feb 23 10:01:59 SC-1 opensafd: OpenSAF(5.1.M0 - ) services successfully >> started >> Feb 23 10:01:59 SC-1 osafamfnd[15329]: NO >> 'safComp=LOG,safSu=SC-1,safSg=2N,safApp=OpenSAF' faulted due to >> 'avaDown' : Recovery is 'nodeFailfast' >> Feb 23 10:01:59 SC-1 osafamfnd[15329]: ER >> safComp=LOG,safSu=SC-1,safSg=2N,safApp=OpenSAF Faulted due >> to:avaDown >> Recovery is:nodeFailfast >> Feb 23 10:01:59 SC-1 osafamfnd[15329]: Rebooting OpenSAF NodeId = >> 131343 >> EE Name = , Reason: Component faulted: recovery is node failfast, >> OwnNodeId = 131343, SupervisionTime = 60 >> Feb 23 10:01:59 SC-1 opensaf_reboot: Rebooting local node; timeout=60 >> Feb 23 10:02:00 SC-1 osafimmnd[15279]: NO Implementer (applier) >> connected: 16 (@OpenSafImmReplicatorB) <144, 2010f> >> Feb 23 10:01:59 SC-1 opensaf_reboot: Rebooting local node; timeout=60 >> == >> >> >> So it is evident that in-service upgrade part code of this need to be >> corrected. >> >> Please see my comments as [AVM] and let me know if you need some traces >> . >> >> If you're planing to prepare new V6 patch , please do prepare on top of >> today's latest staging. 
>> >> On 2/23/2017 9:33 AM, Vu Minh Nguyen wrote: >>> Hi Mahesh, >>> >>> I have done in-service upgrade/downgrade with following cases: >>> 1) New Active SC-1 (OpenSAF 5.2 with the attached patch) + old standby >> SC-2 >>> (OpenSAF 5.1) >>> --> Work fine >> [AVM] This is not a practical use cause of in-service upgrade , we can >> ignore this test further >>> 2) Old Active SC-1 (OpenSAF 5.1) + new standby SC-2 (with or without >>> attached patch) >>> --> SC-2 is restarted & not able to join the cluster. >> [AVM] This use cause/flow is we do get in in-service upgrade , so we >> need to address this. >>> I got following messages in syslog: >>> Feb 23 09:32:42 SC-2 user.notice opensafd: OpenSAF(5.2.M0 - >>> 8529:b5addd36e45d:default) services successfully started >>> Feb 23 09:32:43 SC-2 local0.warn osafntfimcnd[701]: WA >> ntfimcn_imm_init >>> saImmOiImplementerSet() returned SA_AIS_ERR_TIMEOUT (5) >>> Feb 23 09:32:45 SC-2 local0.warn osafntfimcnd[701]: WA >> ntfimcn_imm_init >>> saImmOiImplementerSet() returned SA_AIS_ERR_TIMEOUT (5) >>> Feb 23 09:32:47 SC-2 local0.warn osafntfimcnd[701]: WA >> ntfimcn_imm_init >>> saImmOiImplementerSet() returned SA_AIS_ERR_TIMEOUT (5) >>> Feb 23 09:32:49 SC-2 local0.warn osafntfimcnd[701]: WA >> ntfimcn_imm_init >>> saImmOiImplementerSet() returned SA_AIS_ERR_TIMEOUT (5) >>> Feb 23 09:32:50 SC-2 local0.err osafmsgnd[592]: ER >> saImmOiImplementerSet >>> FAILED:5 >>> Feb 23 09:32:50 SC-2 local0.err
Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative destinations of log records [#2258] V4
Hi Mahesh,

Can you try with 2258_v7.patch I just sent to you?

I have pulled the latest code on the OpenSAF 5.1 branch and re-created the cluster. And it works with the case of old active SC-1 (OpenSAF 5.1) and new standby SC-2 (with 2258_v7.patch included).

To apply 2258_v7.patch, please remove the just-pushed ticket "log: implement SaLogFilterSetCallbackT and version handling [#2146]"; I have not rebased the code on that yet.

Regards, Vu

> -Original Message-
> From: A V Mahesh [mailto:mahesh.va...@oracle.com]
> Sent: Thursday, February 23, 2017 11:45 AM
> To: Vu Minh Nguyen; lennart.l...@ericsson.com; canh.v.tru...@dektech.com.au
> Cc: opensaf-devel@lists.sourceforge.net
> Subject: Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative
> destinations of log records [#2258] V4
>
> Hi Vu/Lennart,
>
> In brief: WITHOUT the #2258 patch, the same code/setup works fine with
> 2 SC nodes (staging changeset 8609). As soon as we apply the `2258_v5.patch`
> V5 patch, which you provided yesterday, on one SC node on staging
> (changeset 8609) and try to bring that node up into the cluster
> (in-service test), we observe the issue of the new node (with the #2258
> patch) not joining the cluster.
> > == > > eb 23 10:01:59 SC-1 osafimmnd[15279]: NO Implementer (applier) > connected: 15 (@safAmfService2010f) <127, 2010f> > Feb 23 10:01:59 SC-1 osaflogd[15289]: NO LOGSV_DATA_GROUPNAME not > found > Feb 23 10:01:59 SC-1 osaflogd[15289]: NO LOG root directory is: > "/var/log/opensaf/saflog" > Feb 23 10:01:59 SC-1 osaflogd[15289]: NO LOG data group is: "" > Feb 23 10:01:59 SC-1 osaflogd[15289]: NO LGS_MBCSV_VERSION = 7 > Feb 23 10:01:59 SC-1 osafamfnd[15329]: NO Assigned > 'safSi=SC-2N,safApp=OpenSAF' STANDBY to 'safSu=SC- > 1,safSg=2N,safApp=OpenSAF' > Feb 23 10:01:59 SC-1 opensafd: OpenSAF(5.1.M0 - ) services successfully > started > Feb 23 10:01:59 SC-1 osafamfnd[15329]: NO > 'safComp=LOG,safSu=SC-1,safSg=2N,safApp=OpenSAF' faulted due to > 'avaDown' : Recovery is 'nodeFailfast' > Feb 23 10:01:59 SC-1 osafamfnd[15329]: ER > safComp=LOG,safSu=SC-1,safSg=2N,safApp=OpenSAF Faulted due > to:avaDown > Recovery is:nodeFailfast > Feb 23 10:01:59 SC-1 osafamfnd[15329]: Rebooting OpenSAF NodeId = > 131343 > EE Name = , Reason: Component faulted: recovery is node failfast, > OwnNodeId = 131343, SupervisionTime = 60 > Feb 23 10:01:59 SC-1 opensaf_reboot: Rebooting local node; timeout=60 > Feb 23 10:02:00 SC-1 osafimmnd[15279]: NO Implementer (applier) > connected: 16 (@OpenSafImmReplicatorB) <144, 2010f> > Feb 23 10:01:59 SC-1 opensaf_reboot: Rebooting local node; timeout=60 > == > > > So it is evident that in-service upgrade part code of this need to be > corrected. > > Please see my comments as [AVM] and let me know if you need some traces > . > > If you're planing to prepare new V6 patch , please do prepare on top of > today's latest staging. 
> > On 2/23/2017 9:33 AM, Vu Minh Nguyen wrote: > > Hi Mahesh, > > > > I have done in-service upgrade/downgrade with following cases: > > 1) New Active SC-1 (OpenSAF 5.2 with the attached patch) + old standby > SC-2 > > (OpenSAF 5.1) > > --> Work fine > [AVM] This is not a practical use cause of in-service upgrade , we can > ignore this test further > > > > 2) Old Active SC-1 (OpenSAF 5.1) + new standby SC-2 (with or without > > attached patch) > > --> SC-2 is restarted & not able to join the cluster. > [AVM] This use cause/flow is we do get in in-service upgrade , so we > need to address this. > > > > I got following messages in syslog: > > Feb 23 09:32:42 SC-2 user.notice opensafd: OpenSAF(5.2.M0 - > > 8529:b5addd36e45d:default) services successfully started > > Feb 23 09:32:43 SC-2 local0.warn osafntfimcnd[701]: WA > ntfimcn_imm_init > > saImmOiImplementerSet() returned SA_AIS_ERR_TIMEOUT (5) > > Feb 23 09:32:45 SC-2 local0.warn osafntfimcnd[701]: WA > ntfimcn_imm_init > > saImmOiImplementerSet() returned SA_AIS_ERR_TIMEOUT (5) > > Feb 23 09:32:47 SC-2 local0.warn osafntfimcnd[701]: WA > ntfimcn_imm_init > > saImmOiImplementerSet() returned SA_AIS_ERR_TIMEOUT (5) > > Feb 23 09:32:49 SC-2 local0.warn osafntfimcnd[701]: WA > ntfimcn_imm_init > > saImmOiImplementerSet() returned SA_AIS_ERR_TIMEOUT (5) > > Feb 23 09:32:50 SC-2 local0.err osafmsgnd[592]: ER > saImmOiImplementerSet > > FAILED:5 > > Feb 23 09:32:50 SC-2 local0.err osafmsgnd[592]: ER > saImmOiImplementerSet > > FAILED:5 > > Feb 23 09:32:50 SC-2 local0.notice osafamfnd[496]: NO > > 'safSu=SC-2,safSg=NoRed,safApp=OpenSAF' component restart probation > timer > > started (timeout: 600 ns) > > Feb 23 09:32:50 SC-2 local0.notice osafamfnd[496]: NO Restarting a > component > > of 'safSu=SC-2,safSg=NoRed,safApp=OpenSAF' (comp restart count: 1) > >
Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative destinations of log records [#2258] V4
Hi Vu/Lennart,

In brief: WITHOUT the #2258 patch, the same code/setup works fine with 2 SC nodes (staging changeset 8609). As soon as we apply the `2258_v5.patch` V5 patch, which you provided yesterday, on one SC node on staging (changeset 8609) and try to bring that node up into the cluster (in-service test), we observe the issue of the new node (with the #2258 patch) not joining the cluster.

==
Feb 23 10:01:59 SC-1 osafimmnd[15279]: NO Implementer (applier) connected: 15 (@safAmfService2010f) <127, 2010f>
Feb 23 10:01:59 SC-1 osaflogd[15289]: NO LOGSV_DATA_GROUPNAME not found
Feb 23 10:01:59 SC-1 osaflogd[15289]: NO LOG root directory is: "/var/log/opensaf/saflog"
Feb 23 10:01:59 SC-1 osaflogd[15289]: NO LOG data group is: ""
Feb 23 10:01:59 SC-1 osaflogd[15289]: NO LGS_MBCSV_VERSION = 7
Feb 23 10:01:59 SC-1 osafamfnd[15329]: NO Assigned 'safSi=SC-2N,safApp=OpenSAF' STANDBY to 'safSu=SC-1,safSg=2N,safApp=OpenSAF'
Feb 23 10:01:59 SC-1 opensafd: OpenSAF(5.1.M0 - ) services successfully started
Feb 23 10:01:59 SC-1 osafamfnd[15329]: NO 'safComp=LOG,safSu=SC-1,safSg=2N,safApp=OpenSAF' faulted due to 'avaDown' : Recovery is 'nodeFailfast'
Feb 23 10:01:59 SC-1 osafamfnd[15329]: ER safComp=LOG,safSu=SC-1,safSg=2N,safApp=OpenSAF Faulted due to:avaDown Recovery is:nodeFailfast
Feb 23 10:01:59 SC-1 osafamfnd[15329]: Rebooting OpenSAF NodeId = 131343 EE Name = , Reason: Component faulted: recovery is node failfast, OwnNodeId = 131343, SupervisionTime = 60
Feb 23 10:01:59 SC-1 opensaf_reboot: Rebooting local node; timeout=60
Feb 23 10:02:00 SC-1 osafimmnd[15279]: NO Implementer (applier) connected: 16 (@OpenSafImmReplicatorB) <144, 2010f>
Feb 23 10:01:59 SC-1 opensaf_reboot: Rebooting local node; timeout=60
==

So it is evident that the in-service upgrade part of this code needs to be corrected.

Please see my comments marked [AVM], and let me know if you need some traces.

If you're planning to prepare a new V6 patch, please do prepare it on top of today's latest staging.
On 2/23/2017 9:33 AM, Vu Minh Nguyen wrote:
> Hi Mahesh,
>
> I have done in-service upgrade/downgrade with the following cases:
> 1) New active SC-1 (OpenSAF 5.2 with the attached patch) + old standby SC-2
> (OpenSAF 5.1)
> --> Works fine

[AVM] This is not a practical use case for an in-service upgrade; we can ignore this test further.

> 2) Old active SC-1 (OpenSAF 5.1) + new standby SC-2 (with or without the
> attached patch)
> --> SC-2 is restarted & not able to join the cluster.

[AVM] This use case/flow is what we do get in an in-service upgrade, so we need to address this.

> I got the following messages in syslog:
> Feb 23 09:32:42 SC-2 user.notice opensafd: OpenSAF(5.2.M0 - 8529:b5addd36e45d:default) services successfully started
> Feb 23 09:32:43 SC-2 local0.warn osafntfimcnd[701]: WA ntfimcn_imm_init saImmOiImplementerSet() returned SA_AIS_ERR_TIMEOUT (5)
> Feb 23 09:32:45 SC-2 local0.warn osafntfimcnd[701]: WA ntfimcn_imm_init saImmOiImplementerSet() returned SA_AIS_ERR_TIMEOUT (5)
> Feb 23 09:32:47 SC-2 local0.warn osafntfimcnd[701]: WA ntfimcn_imm_init saImmOiImplementerSet() returned SA_AIS_ERR_TIMEOUT (5)
> Feb 23 09:32:49 SC-2 local0.warn osafntfimcnd[701]: WA ntfimcn_imm_init saImmOiImplementerSet() returned SA_AIS_ERR_TIMEOUT (5)
> Feb 23 09:32:50 SC-2 local0.err osafmsgnd[592]: ER saImmOiImplementerSet FAILED:5
> Feb 23 09:32:50 SC-2 local0.err osafmsgnd[592]: ER saImmOiImplementerSet FAILED:5
> Feb 23 09:32:50 SC-2 local0.notice osafamfnd[496]: NO 'safSu=SC-2,safSg=NoRed,safApp=OpenSAF' component restart probation timer started (timeout: 600 ns)
> Feb 23 09:32:50 SC-2 local0.notice osafamfnd[496]: NO Restarting a component of 'safSu=SC-2,safSg=NoRed,safApp=OpenSAF' (comp restart count: 1)
> Feb 23 09:32:50 SC-2 local0.notice osafamfnd[496]: NO 'safComp=MQND,safSu=SC-2,safSg=NoRed,safApp=OpenSAF' faulted due to 'avaDown' : Recovery is 'componentRestart'
> Feb 23 09:32:50 SC-2 local0.info osafmsgnd[736]: mkfifo already exists: /var/lib/opensaf/osafmsgnd.fifo File exists
>
> And sometimes, on active SC-1 (OpenSAF 5.1), the node is not able to come up
> because of the following error:
>
> Feb 23 11:00:32 SC-1 local0.err osafclmna[406]: MDTM:TIPC Dsock Socket creation failed in MDTM_INIT err :Address family not supported by protocol
> Feb 23 11:00:32 SC-1 local0.err osafclmna[406]: ER ncs_agents_startup FAILED

[AVM] No such issues here (with both TCP & TIPC) (staging changeset 8609).

> Are you getting a similar problem at your side?
> Please note that the problem exists WITH or WITHOUT the #2258 patch.

[AVM] No, the problem occurs only if we apply the `2258_v5.patch` V5 patch on staging (changeset 8609) and try to bring that node up into the cluster.

-AVM

> I have informed this to IMM to have a
Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative destinations of log records [#2258] V4
Please see correction New Standby SC-1 ( with patch ) -AVM On 2/22/2017 4:02 PM, A V Mahesh wrote: > Hi Vu, > > With this new patch , we have another issue : > > 1) standby Core by `/usr/lib64/opensaf/osaflogd' issue got resolved . > > 2) In-service upgrade is Not working , I have Old Active SC-2 ( with > out patch ) and New Standby SC-1 ( with patch ) > > the new New Standby SC-1 not joining the cluster ( in-service > upgrade failed ) > > New Standby SC-1 > > > > > > Feb 22 15:53:05 SC-1 osafamfnd[15752]: NO > 'safSu=SC-1,safSg=NoRed,safApp=OpenSAF' Presence State INSTANTIATING > => INSTANTIATED > Feb 22 15:53:05 SC-1 osafamfnd[15752]: NO Assigning > 'safSi=NoRed4,safApp=OpenSAF' ACTIVE to > 'safSu=SC-1,safSg=NoRed,safApp=OpenSAF' > Feb 22 15:53:05 SC-1 osafamfnd[15752]: NO Assigned > 'safSi=NoRed4,safApp=OpenSAF' ACTIVE to > 'safSu=SC-1,safSg=NoRed,safApp=OpenSAF' > Feb 22 15:53:05 SC-1 osafsmfd[15889]: Started > Feb 22 15:53:05 SC-1 osafamfnd[15752]: NO > 'safSu=SC-1,safSg=2N,safApp=OpenSAF' Presence State INSTANTIATING => > INSTANTIATED > Feb 22 15:53:05 SC-1 osafamfnd[15752]: NO Assigning > 'safSi=SC-2N,safApp=OpenSAF' STANDBY to > 'safSu=SC-1,safSg=2N,safApp=OpenSAF' > Feb 22 15:53:05 SC-1 osafrded[15672]: NO RDE role set to STANDBY > Feb 22 15:53:05 SC-1 osafrded[15672]: NO Peer up on node 0x2020f > Feb 22 15:53:05 SC-1 osafrded[15672]: NO Got peer info request from > node 0x2020f with role ACTIVE > Feb 22 15:53:05 SC-1 osafrded[15672]: NO Got peer info response from > node 0x2020f with role ACTIVE > Feb 22 15:53:05 SC-1 osafimmd[15691]: NO MDS event from svc_id 24 > (change:5, dest:13) > Feb 22 15:53:05 SC-1 osafimmd[15691]: NO MDS event from svc_id 24 > (change:3, dest:13) > Feb 22 15:53:05 SC-1 osafimmd[15691]: NO MDS event from svc_id 24 > (change:5, dest:13) > Feb 22 15:53:05 SC-1 osafimmd[15691]: NO MDS event from svc_id 25 > (change:3, dest:567412424453430) > Feb 22 15:53:05 SC-1 osafimmd[15691]: NO MDS event from svc_id 25 > (change:3, 
dest:565213401202663) > Feb 22 15:53:05 SC-1 osafimmd[15691]: NO MDS event from svc_id 25 > (change:3, dest:566312912825221) > Feb 22 15:53:05 SC-1 osafimmd[15691]: NO MDS event from svc_id 25 > (change:3, dest:564113889574230) > Feb 22 15:53:05 SC-1 osafimmnd[15702]: NO Implementer (applier) > connected: 17 (@safAmfService2010f) <127, 2010f> > Feb 22 15:53:05 SC-1 osaflogd[15712]: NO LOGSV_DATA_GROUPNAME not found > Feb 22 15:53:05 SC-1 osaflogd[15712]: NO LOG root directory is: > "/var/log/opensaf/saflog" > Feb 22 15:53:05 SC-1 osaflogd[15712]: NO LOG data group is: "" > Feb 22 15:53:05 SC-1 osaflogd[15712]: NO LGS_MBCSV_VERSION = 7 > Feb 22 15:53:05 SC-1 osafamfnd[15752]: NO Assigned > 'safSi=SC-2N,safApp=OpenSAF' STANDBY to > 'safSu=SC-1,safSg=2N,safApp=OpenSAF' > Feb 22 15:53:05 SC-1 opensafd: OpenSAF(5.1.M0 - ) services > successfully started > Feb 22 15:53:05 SC-1 osafamfnd[15752]: NO > 'safComp=LOG,safSu=SC-1,safSg=2N,safApp=OpenSAF' faulted due to > 'avaDown' : Recovery is 'nodeFailfast' > Feb 22 15:53:05 SC-1 osafamfnd[15752]: ER > safComp=LOG,safSu=SC-1,safSg=2N,safApp=OpenSAF Faulted due to:avaDown > Recovery is:nodeFailfast > Feb 22 15:53:05 SC-1 osafamfnd[15752]: Rebooting OpenSAF NodeId = > 131343 EE Name = , Reason: Component faulted: recovery is node > failfast, OwnNodeId = 131343, SupervisionTime = 60 > Feb 22 15:53:05 SC-1 opensaf_reboot: Rebooting local node; timeout=60 > Feb 22 15:53:43 SC-1 syslog-ng[1171]: syslog-ng starting up; > version='2.0.9' > > > > > > Old - Active - SC-2 > > > > > > Feb 22 15:53:02 SC-2 osafimmnd[16359]: NO NODE STATE-> > IMM_NODE_R_AVAILABLE > Feb 22 15:53:02 SC-2 osafimmloadd: NO Sync starting > Feb 22 15:53:02 SC-2 osafimmloadd: IN Synced 390 objects in total > Feb 22 15:53:02 SC-2 osafimmnd[16359]: NO NODE STATE-> > IMM_NODE_FULLY_AVAILABLE 18511 > Feb 22 15:53:02 SC-2 osafimmnd[16359]: NO Epoch set to 3 in ImmModel > Feb 22 15:53:02 SC-2 osafimmd[16346]: NO ACT: New Epoch for IMMND > process at node 2020f old 
epoch: 2 new epoch:3 > Feb 22 15:53:02 SC-2 osafimmd[16346]: NO ACT: New Epoch for IMMND > process at node 2040f old epoch: 2 new epoch:3 > Feb 22 15:53:02 SC-2 osafimmd[16346]: NO ACT: New Epoch for IMMND > process at node 2030f old epoch: 2 new epoch:3 > Feb 22 15:53:02 SC-2 osafimmloadd: NO Sync ending normally > Feb 22 15:53:02 SC-2 osafimmd[16346]: NO ACT: New Epoch for IMMND > process at node 2010f old epoch: 0 new epoch:3 > Feb 22 15:53:02 SC-2 osafimmnd[16359]: NO SERVER STATE: > IMM_SERVER_SYNC_SERVER --> IMM_SERVER_READY > Feb 22 15:53:03 SC-2 osafamfd[16408]: NO Received node_up from
Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative destinations of log records [#2258] V4
Hi Vu, With this new patch , but we have another issue : 1) standby Core by `/usr/lib64/opensaf/osaflogd' issue got resolved . 2) In-service upgrade is Not working , I have Old Active SC-2 ( with out patch ) and New Standby SC-1 ( with out patch ) the new New Standby SC-1 not joining the cluster ( in-service upgrade failed ) New Standby SC-1 Feb 22 15:53:05 SC-1 osafamfnd[15752]: NO 'safSu=SC-1,safSg=NoRed,safApp=OpenSAF' Presence State INSTANTIATING => INSTANTIATED Feb 22 15:53:05 SC-1 osafamfnd[15752]: NO Assigning 'safSi=NoRed4,safApp=OpenSAF' ACTIVE to 'safSu=SC-1,safSg=NoRed,safApp=OpenSAF' Feb 22 15:53:05 SC-1 osafamfnd[15752]: NO Assigned 'safSi=NoRed4,safApp=OpenSAF' ACTIVE to 'safSu=SC-1,safSg=NoRed,safApp=OpenSAF' Feb 22 15:53:05 SC-1 osafsmfd[15889]: Started Feb 22 15:53:05 SC-1 osafamfnd[15752]: NO 'safSu=SC-1,safSg=2N,safApp=OpenSAF' Presence State INSTANTIATING => INSTANTIATED Feb 22 15:53:05 SC-1 osafamfnd[15752]: NO Assigning 'safSi=SC-2N,safApp=OpenSAF' STANDBY to 'safSu=SC-1,safSg=2N,safApp=OpenSAF' Feb 22 15:53:05 SC-1 osafrded[15672]: NO RDE role set to STANDBY Feb 22 15:53:05 SC-1 osafrded[15672]: NO Peer up on node 0x2020f Feb 22 15:53:05 SC-1 osafrded[15672]: NO Got peer info request from node 0x2020f with role ACTIVE Feb 22 15:53:05 SC-1 osafrded[15672]: NO Got peer info response from node 0x2020f with role ACTIVE Feb 22 15:53:05 SC-1 osafimmd[15691]: NO MDS event from svc_id 24 (change:5, dest:13) Feb 22 15:53:05 SC-1 osafimmd[15691]: NO MDS event from svc_id 24 (change:3, dest:13) Feb 22 15:53:05 SC-1 osafimmd[15691]: NO MDS event from svc_id 24 (change:5, dest:13) Feb 22 15:53:05 SC-1 osafimmd[15691]: NO MDS event from svc_id 25 (change:3, dest:567412424453430) Feb 22 15:53:05 SC-1 osafimmd[15691]: NO MDS event from svc_id 25 (change:3, dest:565213401202663) Feb 22 15:53:05 SC-1 osafimmd[15691]: NO MDS event from svc_id 25 (change:3, dest:566312912825221) Feb 22 15:53:05 SC-1 osafimmd[15691]: NO MDS event from svc_id 25 (change:3, 
dest:564113889574230) Feb 22 15:53:05 SC-1 osafimmnd[15702]: NO Implementer (applier) connected: 17 (@safAmfService2010f) <127, 2010f> Feb 22 15:53:05 SC-1 osaflogd[15712]: NO LOGSV_DATA_GROUPNAME not found Feb 22 15:53:05 SC-1 osaflogd[15712]: NO LOG root directory is: "/var/log/opensaf/saflog" Feb 22 15:53:05 SC-1 osaflogd[15712]: NO LOG data group is: "" Feb 22 15:53:05 SC-1 osaflogd[15712]: NO LGS_MBCSV_VERSION = 7 Feb 22 15:53:05 SC-1 osafamfnd[15752]: NO Assigned 'safSi=SC-2N,safApp=OpenSAF' STANDBY to 'safSu=SC-1,safSg=2N,safApp=OpenSAF' Feb 22 15:53:05 SC-1 opensafd: OpenSAF(5.1.M0 - ) services successfully started Feb 22 15:53:05 SC-1 osafamfnd[15752]: NO 'safComp=LOG,safSu=SC-1,safSg=2N,safApp=OpenSAF' faulted due to 'avaDown' : Recovery is 'nodeFailfast' Feb 22 15:53:05 SC-1 osafamfnd[15752]: ER safComp=LOG,safSu=SC-1,safSg=2N,safApp=OpenSAF Faulted due to:avaDown Recovery is:nodeFailfast Feb 22 15:53:05 SC-1 osafamfnd[15752]: Rebooting OpenSAF NodeId = 131343 EE Name = , Reason: Component faulted: recovery is node failfast, OwnNodeId = 131343, SupervisionTime = 60 Feb 22 15:53:05 SC-1 opensaf_reboot: Rebooting local node; timeout=60 Feb 22 15:53:43 SC-1 syslog-ng[1171]: syslog-ng starting up; version='2.0.9' Old - Active - SC-2 Feb 22 15:53:02 SC-2 osafimmnd[16359]: NO NODE STATE-> IMM_NODE_R_AVAILABLE Feb 22 15:53:02 SC-2 osafimmloadd: NO Sync starting Feb 22 15:53:02 SC-2 osafimmloadd: IN Synced 390 objects in total Feb 22 15:53:02 SC-2 osafimmnd[16359]: NO NODE STATE-> IMM_NODE_FULLY_AVAILABLE 18511 Feb 22 15:53:02 SC-2 osafimmnd[16359]: NO Epoch set to 3 in ImmModel Feb 22 15:53:02 SC-2 osafimmd[16346]: NO ACT: New Epoch for IMMND process at node 2020f old epoch: 2 new epoch:3 Feb 22 15:53:02 SC-2 osafimmd[16346]: NO ACT: New Epoch for IMMND process at node 2040f old epoch: 2 new epoch:3 Feb 22 15:53:02 SC-2 osafimmd[16346]: NO ACT: New Epoch for IMMND process at node 2030f old epoch: 2 new epoch:3 Feb 22 15:53:02 SC-2 osafimmloadd: NO Sync ending 
normally Feb 22 15:53:02 SC-2 osafimmd[16346]: NO ACT: New Epoch for IMMND process at node 2010f old epoch: 0 new epoch:3 Feb 22 15:53:02 SC-2 osafimmnd[16359]: NO SERVER STATE: IMM_SERVER_SYNC_SERVER --> IMM_SERVER_READY Feb 22 15:53:03 SC-2 osafamfd[16408]: NO Received node_up from 2010f: msg_id 1 Feb 22 15:53:03 SC-2 osafamfd[16408]: NO Node 'SC-1' joined the cluster Feb 22 15:53:03 SC-2 osafimmnd[16359]: NO Implementer connected: 16 (MsgQueueService131343) <0, 2010f> Feb 22 15:53:03 SC-2 osafrded[16327]: NO Peer up on node 0x2010f Feb 22 15:53:03 SC-2 osafrded[16327]: NO Got peer
Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative destinations of log records [#2258] V4
Hi Vu,

I used the new #3 and #4 patches. Can you please re-send all the final patches in one go, which I need to apply on today's staging? (If possible, publish them with a new version.)

-AVM

On 2/22/2017 1:52 PM, Vu Minh Nguyen wrote:
> Hi Mahesh,
>
>> Core was generated by `/usr/lib64/opensaf/osaflogd'.
>> Program terminated with signal 11, Segmentation fault.
>> #0 ckpt_proc_cfg_stream(lgs_cb*, void*) () at src/log/logd/lgs_mbcsv.cc:2195
>> 2195    src/log/logd/lgs_mbcsv.cc: No such file or directory.
>> in src/log/logd/lgs_mbcsv.cc
>
> The backtrace still points to the old position (lgs_mbcsv:2195). I guess the
> osaflogd binary has not been updated with the fixed patch.
>
> Regards, Vu
>
>> -Original Message-
>> From: A V Mahesh [mailto:mahesh.va...@oracle.com]
>> Sent: Wednesday, February 22, 2017 3:18 PM
>> To: Vu Minh Nguyen; lennart.l...@ericsson.com; canh.v.tru...@dektech.com.au
>> Cc: opensaf-devel@lists.sourceforge.net
>> Subject: Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative
>> destinations of log records [#2258] V4
>>
>> Hi Vu,
>>
>> The SC-2 standby osaflogd core dump still occurs (not resolved); the new
>> patch only resolved the application (/usr/bin/logtest) segmentation
>> fault on SC-1 active.
>>
>> ==
>> ==
>> # gdb /usr/lib64/opensaf/osaflogd core_1487751055.osaflogd.4594
>> GNU gdb (GDB) SUSE (7.3-0.6.1)
>> Copyright (C) 2011 Free Software Foundation, Inc.
>> ...
>> Core was generated by `/usr/lib64/opensaf/osaflogd'.
>> Program terminated with signal 11, Segmentation fault.
>> #0 ckpt_proc_cfg_stream(lgs_cb*, void*) () at src/log/logd/lgs_mbcsv.cc:2195
>> 2195    src/log/logd/lgs_mbcsv.cc: No such file or directory.
>> in src/log/logd/lgs_mbcsv.cc >> (gdb) bt >> #0 ckpt_proc_cfg_stream(lgs_cb*, void*) () at >> src/log/logd/lgs_mbcsv.cc:2195 >> #1 0x7f97b026f960 in ckpt_decode_log_struct(lgs_cb*, >> ncs_mbcsv_cb_arg*, void*, void*, unsigned int (*)(edu_hdl_tag*, >> edu_tkn_tag*, void*, unsigned int*, edu_buf_env_tag*, EDP_OP_TYPE, >> EDU_ERR*)) () at src/log/logd/lgs_mbcsv.cc:950 >> #2 0x7f97b02710dc in ckpt_decode_async_update(lgs_cb*, >> ncs_mbcsv_cb_arg*) () at src/log/logd/lgs_mbcsv.cc:1086 >> #3 0x7f97b0273941 in mbcsv_callback(ncs_mbcsv_cb_arg*) () at >> src/log/logd/lgs_mbcsv.cc:880 >> #4 0x7f97af372596 in ncs_mbscv_rcv_decode () from >> /usr/lib/../lib64/libopensaf_core.so.0 >> #5 0x7f97af372766 in ncs_mbcsv_rcv_async_update () from >> /usr/lib/../lib64/libopensaf_core.so.0 >> #6 0x7f97af379370 in mbcsv_process_events () from >> /usr/lib/../lib64/libopensaf_core.so.0 >> #7 0x7f97af3794db in mbcsv_hdl_dispatch_all () from >> /usr/lib/../lib64/libopensaf_core.so.0 >> #8 0x7f97af373ce2 in mbcsv_process_dispatch_request () at >> src/mbc/mbcsv_api.c:423 >> #9 0x7f97b027096e in lgs_mbcsv_dispatch(unsigned int) () at >> src/log/logd/lgs_mbcsv.cc:327 >> #10 0x7f97b024d9f2 in main () at src/log/logd/lgs_main.cc:583 >> (gdb) bt full >> #0 ckpt_proc_cfg_stream(lgs_cb*, void*) () at >> src/log/logd/lgs_mbcsv.cc:2195 >> ckpt_data_handler = {0x7f97b0270300 >> , 0x7f97b02701c0 >> , >> 0x7f97b0270060 , >> 0x7f97b02712f0 , 0x7f97b0271ab0 >> , >> 0x7f97b026fe80 , >> 0x7f97b0272380 , 0x7f97b0274800 >> , >> 0x7f97b0274e10 , >> 0x7f97b02754f0 } >> #1 0x7f97b026f960 in ckpt_decode_log_struct(lgs_cb*, >> ncs_mbcsv_cb_arg*, void*, void*, unsigned int (*)(edu_hdl_tag*, >> edu_tkn_tag*, void*, unsigned int*, edu_buf_env_tag*, EDP_OP_TYPE, >> EDU_ERR*)) () at src/log/logd/lgs_mbcsv.cc:950 >> ckpt_data_handler = {0x7f97b0270300 >> , 0x7f97b02701c0 >> , >> 0x7f97b0270060 , >> 0x7f97b02712f0 , 0x7f97b0271ab0 >> , >> 0x7f97b026fe80 , >> 0x7f97b0272380 , 0x7f97b0274800 >> , >> 0x7f97b0274e10 , >> 
0x7f97b02754f0 } >> #2 0x7f97b02710dc in ckpt_decode_async_update(lgs_cb*, >> ncs_mbcsv_cb_arg*) () at src/log/logd/lgs_mbcsv.cc:1086 >> ckpt_data_handler = {0x7f97b0270300 >>
Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative destinations of log records [#2258] V4
Hi Mahesh, > Core was generated by `/usr/lib64/opensaf/osaflogd'. > Program terminated with signal 11, Segmentation fault. > #0 ckpt_proc_cfg_stream(lgs_cb*, void*) () at > src/log/logd/lgs_mbcsv.cc:2195 > 2195src/log/logd/lgs_mbcsv.cc: No such file or directory. > in src/log/logd/lgs_mbcsv.cc Backtrace still points to old position (lgs_mbcsv:2195). I guess the osaflogd binary has not been updated with the fixed patch. Regards, Vu > -Original Message- > From: A V Mahesh [mailto:mahesh.va...@oracle.com] > Sent: Wednesday, February 22, 2017 3:18 PM > To: Vu Minh Nguyen; > lennart.l...@ericsson.com; canh.v.tru...@dektech.com.au > Cc: opensaf-devel@lists.sourceforge.net > Subject: Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative > destinations of log records [#2258] V4 > > Hi Vu, > > SC-2 standby osaflogd core dumped still occurs ( Not resolved) , the new > patch only resolved the application (/usr/bin/logtest ) Segmentation > fault on SC-1 Active. > > == > == > # gdb /usr/lib64/opensaf/osaflogd core_1487751055.osaflogd.4594 GNU > gdb > (GDB) SUSE (7.3-0.6.1) > Copyright (C) 2011 Free Software Foundation, Inc. > ... > Core was generated by `/usr/lib64/opensaf/osaflogd'. > Program terminated with signal 11, Segmentation fault. > #0 ckpt_proc_cfg_stream(lgs_cb*, void*) () at > src/log/logd/lgs_mbcsv.cc:2195 > 2195src/log/logd/lgs_mbcsv.cc: No such file or directory. 
> in src/log/logd/lgs_mbcsv.cc > (gdb) bt > #0 ckpt_proc_cfg_stream(lgs_cb*, void*) () at > src/log/logd/lgs_mbcsv.cc:2195 > #1 0x7f97b026f960 in ckpt_decode_log_struct(lgs_cb*, > ncs_mbcsv_cb_arg*, void*, void*, unsigned int (*)(edu_hdl_tag*, > edu_tkn_tag*, void*, unsigned int*, edu_buf_env_tag*, EDP_OP_TYPE, > EDU_ERR*)) () at src/log/logd/lgs_mbcsv.cc:950 > #2 0x7f97b02710dc in ckpt_decode_async_update(lgs_cb*, > ncs_mbcsv_cb_arg*) () at src/log/logd/lgs_mbcsv.cc:1086 > #3 0x7f97b0273941 in mbcsv_callback(ncs_mbcsv_cb_arg*) () at > src/log/logd/lgs_mbcsv.cc:880 > #4 0x7f97af372596 in ncs_mbscv_rcv_decode () from > /usr/lib/../lib64/libopensaf_core.so.0 > #5 0x7f97af372766 in ncs_mbcsv_rcv_async_update () from > /usr/lib/../lib64/libopensaf_core.so.0 > #6 0x7f97af379370 in mbcsv_process_events () from > /usr/lib/../lib64/libopensaf_core.so.0 > #7 0x7f97af3794db in mbcsv_hdl_dispatch_all () from > /usr/lib/../lib64/libopensaf_core.so.0 > #8 0x7f97af373ce2 in mbcsv_process_dispatch_request () at > src/mbc/mbcsv_api.c:423 > #9 0x7f97b027096e in lgs_mbcsv_dispatch(unsigned int) () at > src/log/logd/lgs_mbcsv.cc:327 > #10 0x7f97b024d9f2 in main () at src/log/logd/lgs_main.cc:583 > (gdb) bt full > #0 ckpt_proc_cfg_stream(lgs_cb*, void*) () at > src/log/logd/lgs_mbcsv.cc:2195 > ckpt_data_handler = {0x7f97b0270300 > , 0x7f97b02701c0 > , >0x7f97b0270060 , > 0x7f97b02712f0 , 0x7f97b0271ab0 > , >0x7f97b026fe80 , > 0x7f97b0272380 , 0x7f97b0274800 > , >0x7f97b0274e10 , > 0x7f97b02754f0 } > #1 0x7f97b026f960 in ckpt_decode_log_struct(lgs_cb*, > ncs_mbcsv_cb_arg*, void*, void*, unsigned int (*)(edu_hdl_tag*, > edu_tkn_tag*, void*, unsigned int*, edu_buf_env_tag*, EDP_OP_TYPE, > EDU_ERR*)) () at src/log/logd/lgs_mbcsv.cc:950 > ckpt_data_handler = {0x7f97b0270300 > , 0x7f97b02701c0 > , >0x7f97b0270060 , > 0x7f97b02712f0 , 0x7f97b0271ab0 > , >0x7f97b026fe80 , > 0x7f97b0272380 , 0x7f97b0274800 > , >0x7f97b0274e10 , > 0x7f97b02754f0 } > #2 0x7f97b02710dc in 
ckpt_decode_async_update(lgs_cb*, > ncs_mbcsv_cb_arg*) () at src/log/logd/lgs_mbcsv.cc:1086 > ckpt_data_handler = {0x7f97b0270300 > , 0x7f97b02701c0 > , >0x7f97b0270060 , > 0x7f97b02712f0 , 0x7f97b0271ab0 > , >0x7f97b026fe80
Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative destinations of log records [#2258] V4
Hi Vu,

The SC-2 standby osaflogd core dump still occurs (not resolved); the new patch only resolved the application (/usr/bin/logtest) segmentation fault on SC-1 active.

# gdb /usr/lib64/opensaf/osaflogd core_1487751055.osaflogd.4594
GNU gdb (GDB) SUSE (7.3-0.6.1)
Copyright (C) 2011 Free Software Foundation, Inc.
...
Core was generated by `/usr/lib64/opensaf/osaflogd'.
Program terminated with signal 11, Segmentation fault.
#0 ckpt_proc_cfg_stream(lgs_cb*, void*) () at src/log/logd/lgs_mbcsv.cc:2195
2195    src/log/logd/lgs_mbcsv.cc: No such file or directory.
in src/log/logd/lgs_mbcsv.cc
(gdb) bt
#0 ckpt_proc_cfg_stream(lgs_cb*, void*) () at src/log/logd/lgs_mbcsv.cc:2195
#1 0x7f97b026f960 in ckpt_decode_log_struct(lgs_cb*, ncs_mbcsv_cb_arg*, void*, void*, unsigned int (*)(edu_hdl_tag*, edu_tkn_tag*, void*, unsigned int*, edu_buf_env_tag*, EDP_OP_TYPE, EDU_ERR*)) () at src/log/logd/lgs_mbcsv.cc:950
#2 0x7f97b02710dc in ckpt_decode_async_update(lgs_cb*, ncs_mbcsv_cb_arg*) () at src/log/logd/lgs_mbcsv.cc:1086
#3 0x7f97b0273941 in mbcsv_callback(ncs_mbcsv_cb_arg*) () at src/log/logd/lgs_mbcsv.cc:880
#4 0x7f97af372596 in ncs_mbscv_rcv_decode () from /usr/lib/../lib64/libopensaf_core.so.0
#5 0x7f97af372766 in ncs_mbcsv_rcv_async_update () from /usr/lib/../lib64/libopensaf_core.so.0
#6 0x7f97af379370 in mbcsv_process_events () from /usr/lib/../lib64/libopensaf_core.so.0
#7 0x7f97af3794db in mbcsv_hdl_dispatch_all () from /usr/lib/../lib64/libopensaf_core.so.0
#8 0x7f97af373ce2 in mbcsv_process_dispatch_request () at src/mbc/mbcsv_api.c:423
#9 0x7f97b027096e in lgs_mbcsv_dispatch(unsigned int) () at src/log/logd/lgs_mbcsv.cc:327
#10 0x7f97b024d9f2 in main () at src/log/logd/lgs_main.cc:583
(gdb) bt full
#0 ckpt_proc_cfg_stream(lgs_cb*, void*) () at src/log/logd/lgs_mbcsv.cc:2195
ckpt_data_handler = {0x7f97b0270300, 0x7f97b02701c0 , 0x7f97b0270060 , 0x7f97b02712f0 , 0x7f97b0271ab0 , 0x7f97b026fe80 , 0x7f97b0272380 , 0x7f97b0274800 , 0x7f97b0274e10 , 0x7f97b02754f0 } #1
0x7f97b026f960 in ckpt_decode_log_struct(lgs_cb*, ncs_mbcsv_cb_arg*, void*, void*, unsigned int (*)(edu_hdl_tag*, edu_tkn_tag*, void*, unsigned int*, edu_buf_env_tag*, EDP_OP_TYPE, EDU_ERR*)) () at src/log/logd/lgs_mbcsv.cc:950 ckpt_data_handler = {0x7f97b0270300 , 0x7f97b02701c0 , 0x7f97b0270060 , 0x7f97b02712f0 , 0x7f97b0271ab0 , 0x7f97b026fe80 , 0x7f97b0272380 , 0x7f97b0274800 , 0x7f97b0274e10 , 0x7f97b02754f0 } #2 0x7f97b02710dc in ckpt_decode_async_update(lgs_cb*, ncs_mbcsv_cb_arg*) () at src/log/logd/lgs_mbcsv.cc:1086 ckpt_data_handler = {0x7f97b0270300 , 0x7f97b02701c0 , 0x7f97b0270060 , 0x7f97b02712f0 , 0x7f97b0271ab0 , 0x7f97b026fe80 , 0x7f97b0272380 , 0x7f97b0274800 , 0x7f97b0274e10 , 0x7f97b02754f0 } #3 0x7f97b0273941 in mbcsv_callback(ncs_mbcsv_cb_arg*) () at src/log/logd/lgs_mbcsv.cc:880 ckpt_data_handler = {0x7f97b0270300 , 0x7f97b02701c0 , 0x7f97b0270060 , 0x7f97b02712f0 , 0x7f97b0271ab0 , 0x7f97b026fe80 , 0x7f97b0272380 , 0x7f97b0274800 , 0x7f97b0274e10 , 0x7f97b02754f0
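The "2195 src/log/logd/lgs_mbcsv.cc: No such file or directory" lines above only mean that gdb cannot find the source tree at the compile-time path; the core itself is fine. A small sketch (the `OSAF_SRC` path is an assumption; adjust it to wherever the staging checkout lives) that writes a gdb command file pointing `directory` at a local checkout and then dumps the full backtrace non-interactively:

```shell
# Assumed paths: adjust OSAF_SRC and the core file name to your setup.
OSAF_SRC=${OSAF_SRC:-$HOME/staging}          # checkout containing src/log/logd/...
CORE=core_1487751055.osaflogd.4594           # the core analysed in this thread

# gdb command file: add the checkout to the source search path, then
# dump the full backtrace without an interactive session.
cat > gdb.cmds <<EOF
directory $OSAF_SRC
bt full
quit
EOF

if command -v gdb >/dev/null 2>&1 && [ -f "$CORE" ]; then
  gdb -batch -x gdb.cmds /usr/lib64/opensaf/osaflogd "$CORE"
else
  echo "gdb or core file not available here; command file written to gdb.cmds"
fi
```

With the source path mapped, the `2195` line prints the actual statement that faulted instead of "No such file or directory", which is usually enough to spot the bad dereference in ckpt_proc_cfg_stream().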
Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative destinations of log records [#2258] V4
Hi Vu,

On 2/22/2017 12:19 PM, Vu Minh Nguyen wrote:
> [Vu] I have sent you 02 patches. There is a code change in the osaflogd
> code that fixes the coredump you observed.
> The other one is test code that fixes the logtest coredump.

Ok, I will re-test and update you.

-AVM

On 2/22/2017 12:19 PM, Vu Minh Nguyen wrote:
> Hi Mahesh,
>
> See my reply inline, [Vu].
>
> Regards, Vu
>
>> -----Original Message-----
>> From: A V Mahesh [mailto:mahesh.va...@oracle.com]
>> Sent: Wednesday, February 22, 2017 1:36 PM
>> To: Vu Minh Nguyen; lennart.l...@ericsson.com; canh.v.tru...@dektech.com.au
>> Cc: opensaf-devel@lists.sourceforge.net
>> Subject: Re: [PATCH 0 of 3] Review Request for log: add alternative
>> destinations of log records [#2258] V4
>>
>> Hi Vu,
>>
>> On 2/22/2017 11:52 AM, Vu Minh Nguyen wrote:
>>> Hi Mahesh,
>>>
>>> There is a code fault in the uml test, and another one in checkpoint.
>> [AVM] This is a normal SUSE 11 VM (not UML).
>>> I have just updated the code. Please re-apply patches #3 and #4.
>> [AVM] Do these new patches have functional changes, or only test code changes?
> [Vu] I have sent you 02 patches. There is a code change in the osaflogd
> code that fixes the coredump you observed.
> The other one is test code that fixes the logtest coredump.
>>> Note that test case #14 of suite 17 should be run on the active node,
>>> otherwise it fails.
>> [AVM] The segmentation fault of /usr/bin/logtest is not a big issue;
>> we need to debug why osaflogd core dumped, and that is critical.
> [Vu] I found the problem. You can try with the new one to see if the
> coredump is still there or not.
>>> I will put a condition check into that test case later.
Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative destinations of log records [#2258] V4
Hi Mahesh,

See my reply inline, [Vu].

Regards, Vu

> -----Original Message-----
> From: A V Mahesh [mailto:mahesh.va...@oracle.com]
> Sent: Wednesday, February 22, 2017 1:36 PM
> To: Vu Minh Nguyen; lennart.l...@ericsson.com; canh.v.tru...@dektech.com.au
> Cc: opensaf-devel@lists.sourceforge.net
> Subject: Re: [PATCH 0 of 3] Review Request for log: add alternative
> destinations of log records [#2258] V4
>
> Hi Vu,
>
> On 2/22/2017 11:52 AM, Vu Minh Nguyen wrote:
>> Hi Mahesh,
>>
>> There is a code fault in the uml test, and another one in checkpoint.
> [AVM] This is a normal SUSE 11 VM (not UML).
>> I have just updated the code. Please re-apply patches #3 and #4.
> [AVM] Do these new patches have functional changes, or only test code changes?

[Vu] I have sent you 02 patches. There is a code change in the osaflogd code that fixes the coredump you observed. The other one is test code that fixes the logtest coredump.

>> Note that test case #14 of suite 17 should be run on the active node,
>> otherwise it fails.
> [AVM] The segmentation fault of /usr/bin/logtest is not a big issue;
> we need to debug why osaflogd core dumped, and that is critical.

[Vu] I found the problem. You can try with the new one to see if the coredump is still there or not.

>> I will put a condition check into that test case later.
Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative destinations of log records [#2258] V4
Hi Vu,

One more point: this is always reproducible, and I was running /usr/bin/logtest on the active node only. Let us investigate why osaflogd core dumped.

SC-1:~ # /etc/init.d/opensafd status
safSISU=safSu=SC-1\,safSg=NoRed\,safApp=OpenSAF,safSi=NoRed1,safApp=OpenSAF
        saAmfSISUHAState=ACTIVE(1)
safSISU=safSu=SC-1\,safSg=2N\,safApp=OpenSAF,safSi=SC-2N,safApp=OpenSAF
        saAmfSISUHAState=ACTIVE(1)
safSISU=safSu=PL-3\,safSg=NoRed\,safApp=OpenSAF,safSi=NoRed2,safApp=OpenSAF
        saAmfSISUHAState=ACTIVE(1)
safSISU=safSu=PL-4\,safSg=NoRed\,safApp=OpenSAF,safSi=NoRed3,safApp=OpenSAF
        saAmfSISUHAState=ACTIVE(1)
safSISU=safSu=SC-2\,safSg=NoRed\,safApp=OpenSAF,safSi=NoRed4,safApp=OpenSAF
        saAmfSISUHAState=ACTIVE(1)
safSISU=safSu=SC-2\,safSg=2N\,safApp=OpenSAF,safSi=SC-2N,safApp=OpenSAF
        saAmfSISUHAState=STANDBY(2)
SC-1:~ #

SC-1:~ # gdb /usr/bin/logtest
GNU gdb (GDB) SUSE (7.3-0.6.1)
Program received signal SIGSEGV, Segmentation fault.
0x5556929a in read_and_compare.isra.7 () at src/log/apitest/tet_LogOiOps.c:1891
1891    src/log/apitest/tet_LogOiOps.c: No such file or directory.
        in src/log/apitest/tet_LogOiOps.c
(gdb)

Feb 22 12:14:03 SC-2 osafamfnd[4200]: NO 'safComp=LOG,safSu=SC-2,safSg=2N,safApp=OpenSAF' faulted due to 'avaDown' : Recovery is 'nodeFailfast'
Feb 22 12:14:03 SC-2 osafamfnd[4200]: ER safComp=LOG,safSu=SC-2,safSg=2N,safApp=OpenSAF Faulted due to:avaDown Recovery is:nodeFailfast
Feb 22 12:14:03 SC-2 osafamfnd[4200]: Rebooting OpenSAF NodeId = 131599 EE Name = , Reason: Component faulted: recovery is node failfast, OwnNodeId = 131599, SupervisionTime = 60
Feb 22 12:14:04 SC-2 opensaf_reboot: Rebooting local node; timeout=60

On 2/22/2017 12:05 PM, A V Mahesh wrote:
> Hi Vu,
>
> On 2/22/2017 11:52 AM, Vu Minh Nguyen wrote:
>> Hi Mahesh,
>>
>> There is a code fault in the uml test, and another one in checkpoint.
> [AVM] This is a normal SUSE 11 VM (not UML).
>> I have just updated the code. Please re-apply patches #3 and #4.
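Since test case #14 of suite 17 must run on the active controller, it may help to script that check before launching logtest. A hedged sketch (the awk parsing is mine, not an OpenSAF tool; the sample data is the 2N portion of the `opensafd status` output above, which in real use you would capture from `/etc/init.d/opensafd status`):

```shell
# Sample 'opensafd status' output (2N service instance lines from this thread).
status='safSISU=safSu=SC-1\,safSg=2N\,safApp=OpenSAF,safSi=SC-2N,safApp=OpenSAF
        saAmfSISUHAState=ACTIVE(1)
safSISU=safSu=SC-2\,safSg=2N\,safApp=OpenSAF,safSi=SC-2N,safApp=OpenSAF
        saAmfSISUHAState=STANDBY(2)'

# Find which SC hosts the ACTIVE assignment of the 2N service instance:
# remember the SU of each 2N line, print it when the ACTIVE state follows.
active=$(printf '%s\n' "$status" | awk '
  /safSg=2N/ { match($0, /safSu=SC-[0-9]+/); su = substr($0, RSTART + 6, RLENGTH - 6) }
  /saAmfSISUHAState=ACTIVE/ { print su; exit }')
echo "active controller: $active"   # -> active controller: SC-1
```

Comparing `$active` against the local hostname before running /usr/bin/logtest would avoid the false failure of test case #14 on the standby.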
Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative destinations of log records [#2258] V4
Hi Vu,

On 2/22/2017 11:52 AM, Vu Minh Nguyen wrote:
> Hi Mahesh,
>
> There is a code fault in the uml test, and another one in checkpoint.
[AVM] This is a normal SUSE 11 VM (not UML).
> I have just updated the code. Please re-apply patches #3 and #4.
[AVM] Do these new patches have functional changes, or only test code changes?
>
> Note that test case #14 of suite 17 should be run on the active node,
> otherwise it fails.
[AVM] The segmentation fault of /usr/bin/logtest is not a big issue;
we need to debug why osaflogd core dumped, and that is critical.
> I will put a condition check into that test case later.

-AVM

> Regards, Vu
>
>> -----Original Message-----
>> From: A V Mahesh [mailto:mahesh.va...@oracle.com]
>> Sent: Wednesday, February 22, 2017 12:16 PM
>> To: Vu Minh Nguyen; lennart.l...@ericsson.com; canh.v.tru...@dektech.com.au
>> Cc: opensaf-devel@lists.sourceforge.net
>> Subject: Re: [PATCH 0 of 3] Review Request for log: add alternative
>> destinations of log records [#2258] V4
>>
>> Hi Vu,
>>
>> Thanks,
>>
>> While testing /usr/bin/logtest, the SC-2 standby osaflogd core dumped and
>> /usr/bin/logtest on SC-1 (active) got a segmentation fault. Am I missing
>> any other patch? (I am using only the patches published on devel.)
>>
>> Following patches I am using:
>>
>> 1) #2293 (sent by Anders Widell, but not yet pushed)
>> 2) #2258 (v2, sent by Lennart, but not yet pushed)
>> 3) #2258 (v4, sent by Vu, but not yet pushed)
>>
>> ==
>>
>> Core was generated by `/usr/lib64/opensaf/osaflogd'.
>> Program terminated with signal 11, Segmentation fault.
>> #0  ckpt_proc_cfg_stream(lgs_cb*, void*) () at src/log/logd/lgs_mbcsv.cc:2195
>> 2195    src/log/logd/lgs_mbcsv.cc: No such file or directory.
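To make sure the next standby crash is captured, it is worth confirming the core settings on both controllers before re-running the test. A sketch using standard Linux interfaces (the `/var/crash` path in the comment is an example, not an OpenSAF default); note that the `%t.%e.%p` specifiers match the `core_1487751055.osaflogd.4594` name seen earlier in this thread:

```shell
# Where will the kernel write core files? (a pattern like core_%t.%e.%p
# yields names such as core_1487751055.osaflogd.4594)
cp_pattern=$(cat /proc/sys/kernel/core_pattern 2>/dev/null || echo unknown)
echo "core pattern: $cp_pattern"

# Allow full-size cores in the current shell; the daemons need the same
# limit raised in the environment of whatever script starts them.
ulimit -c unlimited 2>/dev/null || echo "cannot raise core limit here"
echo "core limit: $(ulimit -c)"

# As root, a timestamped pattern in a dedicated directory can be set with e.g.:
#   echo '/var/crash/core_%t.%e.%p' > /proc/sys/kernel/core_pattern
```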
Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative destinations of log records [#2258] V4
Hi Vu,

Thanks.

While testing /usr/bin/logtest, the SC-2 standby osaflogd core dumped and /usr/bin/logtest on SC-1 (active) got a segmentation fault. Am I missing any other patch? (I am using only the patches published on devel.)

Following patches I am using:

1) #2293 (sent by Anders Widell, but not yet pushed)
2) #2258 (v2, sent by Lennart, but not yet pushed)
3) #2258 (v4, sent by Vu, but not yet pushed)

==

Core was generated by `/usr/lib64/opensaf/osaflogd'.
Program terminated with signal 11, Segmentation fault.
#0  ckpt_proc_cfg_stream(lgs_cb*, void*) () at src/log/logd/lgs_mbcsv.cc:2195
2195    src/log/logd/lgs_mbcsv.cc: No such file or directory.
        in src/log/logd/lgs_mbcsv.cc
(gdb) bt
#0  ckpt_proc_cfg_stream(lgs_cb*, void*) () at src/log/logd/lgs_mbcsv.cc:2195
#1  0x7f12c3e22960 in ckpt_decode_log_struct(lgs_cb*, ncs_mbcsv_cb_arg*, void*, void*,
    unsigned int (*)(edu_hdl_tag*, edu_tkn_tag*, void*, unsigned int*, edu_buf_env_tag*,
    EDP_OP_TYPE, EDU_ERR*)) () at src/log/logd/lgs_mbcsv.cc:950
#2  0x7f12c3e240dc in ckpt_decode_async_update(lgs_cb*, ncs_mbcsv_cb_arg*) () at src/log/logd/lgs_mbcsv.cc:1086
#3  0x7f12c3e26941 in mbcsv_callback(ncs_mbcsv_cb_arg*) () at src/log/logd/lgs_mbcsv.cc:880
#4  0x7f12c2f25596 in ncs_mbscv_rcv_decode () from /usr/lib/../lib64/libopensaf_core.so.0
#5  0x7f12c2f25766 in ncs_mbcsv_rcv_async_update () from /usr/lib/../lib64/libopensaf_core.so.0
#6  0x7f12c2f2c370 in mbcsv_process_events () from /usr/lib/../lib64/libopensaf_core.so.0
#7  0x7f12c2f2c4db in mbcsv_hdl_dispatch_all () from /usr/lib/../lib64/libopensaf_core.so.0
#8  0x7f12c2f26ce2 in mbcsv_process_dispatch_request () at src/mbc/mbcsv_api.c:423
#9  0x7f12c3e2396e in lgs_mbcsv_dispatch(unsigned int) () at src/log/logd/lgs_mbcsv.cc:327
#10 0x7f12c3e009f2 in main () at src/log/logd/lgs_main.cc:583
(gdb)

==

Feb 22 10:37:06 SC-1 osafimmnd[4020]: NO Invalid error reported implementer 'safLogService', Ccb 161 will be aborted
Feb 22 10:37:06 SC-1 osafimmnd[4020]: NO Ccb 161 aborted in COMPLETED processing (validation)
Feb 22 10:37:06 SC-1 osafimmnd[4020]: NO Ccb 161 ABORTED (immcfg_SC-1_5394)
Add values Fail

Program received signal SIGSEGV, Segmentation fault.
0x5556929a in read_and_compare.isra.7 () at src/log/apitest/tet_LogOiOps.c:1891
1891    src/log/apitest/tet_LogOiOps.c: No such file or directory.
        in src/log/apitest/tet_LogOiOps.c
(gdb) Feb 22 10:37:07 SC-1 sshd[5298]: Accepted keyboard-interactive/pam for root from 10.176.178.22 port 51945 ssh2
bt
#0  0x5556929a in read_and_compare.isra.7 () at src/log/apitest/tet_LogOiOps.c:1891
#1  0x55569bbb in check_logRecordDestinationConfigurationEmpty () at src/log/apitest/tet_LogOiOps.c:2179
#2  0x55573495 in run_test_case ()
#3  0x55573934 in test_run ()
#4  0xc7cd in main () at src/log/apitest/logtest.c:569
(gdb)

==

-AVM

On 2/22/2017 9:48 AM, Vu Minh Nguyen wrote:
> Hi Mahesh,
>
> I sent them as attachments instead, and named them in order.
> I just pulled the latest code and applied them without getting any hunk error.
>
> Please try with them, and let me know if you see any problem.
>
> Regards, Vu
>
>> -----Original Message-----
>> From: A V Mahesh [mailto:mahesh.va...@oracle.com]
>> Sent: Wednesday, February 22, 2017 11:09 AM
>> To: Vu Minh Nguyen; lennart.l...@ericsson.com; canh.v.tru...@dektech.com.au
>> Cc: opensaf-devel@lists.sourceforge.net
>> Subject: Re: [PATCH 0 of 3] Review Request for log: add alternative
>> destinations of log records [#2258] V4
>>
>> Hi Vu,
>>
>> I did follow that, but I still get "Hunk #2 FAILED" even on today's staging:
>>
>> ==
>>
>> [root@dhcp-hyd-scp-5fl-10-176-177-96 staging]# patch -p1 <2293
>> patching file src/base/Makefile.am
>> Hunk #1 succeeded at 33 (offset 1 line).
>> Hunk #3 succeeded at 183 (offset 1 line).
Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative destinations of log records [#2258] V4
Hi Vu,

Sorry, I had missed Lennart's #2258 (v2); now it is fine.

-AVM

On 2/22/2017 9:39 AM, A V Mahesh wrote:
> Hi Vu,
>
> I did follow that, but I still get "Hunk #2 FAILED" even on today's staging:
>
> [root@dhcp-hyd-scp-5fl-10-176-177-96 staging]# patch -p1 <2293
> patching file src/base/Makefile.am
> Hunk #1 succeeded at 33 (offset 1 line).
> Hunk #3 succeeded at 183 (offset 1 line).
> patching file src/base/file_descriptor.cc
> patching file src/base/file_descriptor.h
> patching file src/base/tests/unix_socket_test.cc
> patching file src/base/unix_client_socket.cc
> patching file src/base/unix_server_socket.cc
> patching file src/base/unix_socket.cc
> patching file src/base/unix_socket.h
>
> [root@dhcp-hyd-scp-5fl-10-176-177-96 staging]# patch -p1 <2258-1
> patching file src/log/Makefile.am
> Hunk #1 succeeded at 71 (offset -1 lines).
> patching file src/log/config/logsv_classes.xml
> Hunk #1 FAILED at 147.
> 1 out of 1 hunk FAILED -- saving rejects to file
> src/log/config/logsv_classes.xml.rej
> patching file src/log/logd/lgs_config.cc
> Hunk #1 succeeded at 35 (offset -5 lines).
> Hunk #2 FAILED at 705.
> Hunk #3 FAILED at 971.
> 2 out of 3 hunks FAILED -- saving rejects to file
> src/log/logd/lgs_config.cc.rej
> patching file src/log/logd/lgs_config.h
> Hunk #1 FAILED at 304.
> 1 out of 1 hunk FAILED -- saving rejects to file
> src/log/logd/lgs_config.h.rej
> patching file src/log/logd/lgs_dest.cc
> patching file src/log/logd/lgs_dest.h
> patching file src/log/logd/lgs_evt.cc
> patching file src/log/logd/lgs_imm.cc
> Hunk #1 FAILED at 45.
> Hunk #2 succeeded at 235 (offset -1 lines).
> Hunk #3 FAILED at 877.
> Hunk #4 succeeded at 1273 (offset -20 lines).
> Hunk #5 succeeded at 1404 (offset -1 lines).
> Hunk #6 succeeded at 1449 (offset -20 lines).
> Hunk #7 succeeded at 2032 (offset -1 lines).
> Hunk #8 FAILED at 2181.
> Hunk #9 succeeded at 2271 (offset -54 lines).
> Hunk #10 succeeded at 2387 (offset -1 lines).
Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative destinations of log records [#2258] V4
Hi Vu,

I did follow that, but I still get "Hunk #2 FAILED" even on today's staging:

[root@dhcp-hyd-scp-5fl-10-176-177-96 staging]# patch -p1 <2293
patching file src/base/Makefile.am
Hunk #1 succeeded at 33 (offset 1 line).
Hunk #3 succeeded at 183 (offset 1 line).
patching file src/base/file_descriptor.cc
patching file src/base/file_descriptor.h
patching file src/base/tests/unix_socket_test.cc
patching file src/base/unix_client_socket.cc
patching file src/base/unix_server_socket.cc
patching file src/base/unix_socket.cc
patching file src/base/unix_socket.h

[root@dhcp-hyd-scp-5fl-10-176-177-96 staging]# patch -p1 <2258-1
patching file src/log/Makefile.am
Hunk #1 succeeded at 71 (offset -1 lines).
patching file src/log/config/logsv_classes.xml
Hunk #1 FAILED at 147.
1 out of 1 hunk FAILED -- saving rejects to file
src/log/config/logsv_classes.xml.rej
patching file src/log/logd/lgs_config.cc
Hunk #1 succeeded at 35 (offset -5 lines).
Hunk #2 FAILED at 705.
Hunk #3 FAILED at 971.
2 out of 3 hunks FAILED -- saving rejects to file
src/log/logd/lgs_config.cc.rej
patching file src/log/logd/lgs_config.h
Hunk #1 FAILED at 304.
1 out of 1 hunk FAILED -- saving rejects to file
src/log/logd/lgs_config.h.rej
patching file src/log/logd/lgs_dest.cc
patching file src/log/logd/lgs_dest.h
patching file src/log/logd/lgs_evt.cc
patching file src/log/logd/lgs_imm.cc
Hunk #1 FAILED at 45.
Hunk #2 succeeded at 235 (offset -1 lines).
Hunk #3 FAILED at 877.
Hunk #4 succeeded at 1273 (offset -20 lines).
Hunk #5 succeeded at 1404 (offset -1 lines).
Hunk #6 succeeded at 1449 (offset -20 lines).
Hunk #7 succeeded at 2032 (offset -1 lines).
Hunk #8 FAILED at 2181.
Hunk #9 succeeded at 2271 (offset -54 lines).
Hunk #10 succeeded at 2387 (offset -1 lines).
Hunk #11 succeeded at 2377 (offset -54 lines).
Hunk #12 succeeded at 2478 (offset -1 lines).
Hunk #13 succeeded at 2684 (offset -54 lines).
Hunk #14 succeeded at 2821 (offset -1 lines).
3 out of 14 hunks FAILED -- saving rejects to file
src/log/logd/lgs_imm.cc.rej
patching file src/log/logd/lgs_main.cc
patching file src/log/logd/lgs_mbcsv.cc
patching file src/log/logd/lgs_mbcsv.h
patching file src/log/logd/lgs_mbcsv_v5.cc
Hunk #3 succeeded at 133 (offset -1 lines).
patching file src/log/logd/lgs_mbcsv_v7.cc
patching file src/log/logd/lgs_mbcsv_v7.h
patching file src/log/logd/lgs_stream.cc
patching file src/log/logd/lgs_stream.h
patching file src/log/logd/lgs_util.cc
patching file src/log/logd/lgs_util.h

[root@dhcp-hyd-scp-5fl-10-176-177-96 staging]# patch -p1 <2258-2
patching file src/log/Makefile.am
Hunk #1 succeeded at 180 (offset -3 lines).
patching file src/log/apitest/tet_LogOiOps.c
Hunk #1 FAILED at 1923.
Hunk #2 FAILED at 1979.
Hunk #3 FAILED at 2067.
Hunk #4 FAILED at 2094.
4 out of 4 hunks FAILED -- saving rejects to file
src/log/apitest/tet_LogOiOps.c.rej
patching file src/log/apitest/tet_cfg_destination.c

[root@dhcp-hyd-scp-5fl-10-176-177-96 staging]# patch -p1 <2258-3
patching file src/log/Makefile
patching file src/log/Makefile.am
Hunk #1 succeeded at 80 (offset -1 lines).
Hunk #2 succeeded at 217 (offset -2 lines).
patching file src/log/tests/Makefile
patching file src/log/tests/lgs_dest_test.cc
[root@dhcp-hyd-scp-5fl-10-176-177-96 staging]# vi src/log/apitest/tet_LogOiOps.c.rej
[root@dhcp-hyd-scp-5fl-10-176-177-96 staging]#

==

-AVM

On 2/21/2017 3:53 PM, Vu Minh Nguyen wrote:
> Hi Mahesh,
>
> As I mentioned below:
>>> To run the test, this patch depends on the following patches:
>>> 1) #2293 (sent by Anders Widell, but not yet pushed)
>>> 2) #2258 (v2, sent by Lennart, but not yet pushed)
> So, you need to apply #2293 first, then the #2258 that Lennart sent
> yesterday, then mine.
> > Regards, Vu > >> -Original Message- >> From: A V Mahesh [mailto:mahesh.va...@oracle.com] >> Sent: Tuesday, February 21, 2017 5:10 PM >> To: Vu Minh Nguyen; >> lennart.l...@ericsson.com; canh.v.tru...@dektech.com.au >> Cc: opensaf-devel@lists.sourceforge.net >> Subject: Re: [PATCH 0 of 3] Review Request for log: add alternative >> destinations of log records [#2258] V4 >> >> Hi Vu, >> >> Is this applies on top of log #2146 - V4 , I see both #tickets has >> version changes ? >> >> in which order i need to apply ( #2146 & #2258 )or (#2258 & #2146). >> >> = >> >> patching file src/log/Makefile.am >> Hunk #1 FAILED at 72. >> Hunk #2 FAILED at 120. >> 2 out of 2 hunks FAILED -- saving rejects to file src/log/Makefile.am.rej >> patching file src/log/config/logsv_classes.xml >> Hunk #1 FAILED at 147. >> 1 out of 1 hunk FAILED -- saving rejects to file >> src/log/config/logsv_classes.xml.rej >> patching file src/log/logd/lgs_config.cc >> Hunk #1 succeeded at 35 (offset -5 lines). >> Hunk #2 FAILED at 705. >>
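The workflow in this thread (apply #2293, then Lennart's #2258 v2, then this series, and only then discover rejects) can be made safer by dry-running each patch before letting it touch the tree. The snippet below is a self-contained sketch of that dry-run-then-apply pattern; it generates a throwaway patch on the fly because the real patch files are not reproduced here, and all file names in it are illustrative only.

```shell
# Demo of "dry-run first, then apply" with GNU patch. In real use the
# loop would run over the actual series (2293, 2258 v2, 2258-1..3).
set -e
dir=$(mktemp -d)
cd "$dir"

mkdir old new
printf 'old line\n' > old/file.txt      # the tree as it exists today
printf 'new line\n' > new/file.txt      # the tree as the patch expects it
diff -u old/file.txt new/file.txt > demo.patch || true  # diff exits 1 on differences

cd old
for p in ../demo.patch; do
  patch -p1 --dry-run < "$p"   # fails here if any hunk would be rejected
  patch -p1 < "$p"             # tree is only modified once the dry run passed
done
cat file.txt                   # file now holds the patched content
```

With `set -e`, a failing `--dry-run` aborts the loop before any file is modified, so there are no half-applied patches or stray `.rej` files to clean up.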
Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative destinations of log records [#2258] V4
Hi Vu,

See my comments below, marked [Lennart]. I will continue reviewing part 1.

Thanks,
Lennart

> -----Original Message-----
> From: Vu Minh Nguyen [mailto:vu.m.ngu...@dektech.com.au]
> Sent: den 21 februari 2017 10:34
> To: Lennart Lund; mahesh.va...@oracle.com; Canh Van Truong
> Cc: opensaf-devel@lists.sourceforge.net
> Subject: [PATCH 0 of 3] Review Request for log: add alternative destinations
> of log records [#2258] V4
>
> Summary: log: add alternative destinations of log records [#2258]
> Review request for Trac Ticket(s): #2258
> Peer Reviewer(s): Lennart, Canh, Mahesh
> Pull request to: <>
> Affected branch(es): Default
> Development branch: Default
>
> Impacted area        Impact y/n
> -------------------------------
> Docs                 n
> Build system         n
> RPM/packaging        n
> Configuration files  n
> Startup scripts      n
> SAF services         n
> OpenSAF services     y
> Core libraries       n
> Samples              n
> Tests                y
> Other                n
>
> Comments (indicate scope for each "y" above):
> ---------------------------------------------
> To run the test, this patch depends on the following patches:
> 1) #2293 (sent by Anders Widel, but not yet pushed)
> 2) #2258 (v2, sent by Lennart, but not yet pushed)

[Lennart] You are also dependent on #2266 "base: Add a hash function". This is
important since it defines a known hash algorithm that shall be used with
MSGID > 32 characters.

> changeset d74aaf3025c99cade3165a15831124548f4d85bd
> Author: Vu Minh Nguyen
> Date: Wed, 15 Feb 2017 14:36:00 +0700
>
> log: add alternative destinations of log records [#2258]
>
> Here is the major info; detailed info will be added to the PR doc soon.
> 1) Add attribute "saLogRecordDestination" to log stream.
> 2) Add local socket destination handler.
> 3) Integrate into the first increment made by Lennart.
>
> changeset 4bae27a478c235df3058f43c92d3a5483233b01d
> Author: Vu Minh Nguyen
> Date: Wed, 15 Feb 2017 15:07:09 +0700
>
> log: add UML test case to verify alternative destination [#2258]
>
> Major changes:
> 1) Modify Lennart's test cases because of enhanced destination
> configuration validation rules.
> 2) Add test suite #17 to verify alternative destination.
>
> changeset bc375725fed22bb4f8cb3ae3df5f96fb9d281efb
> Author: Vu Minh Nguyen
> Date: Thu, 16 Feb 2017 17:22:13 +0700
>
> log: add unit tests to verify interfaces provided by destination handler
> [#2258]
>
> Unit tests to verify major interfaces:
> 1) CfgDestination()
> 2) WriteToDestination()
>
> Added Files:
> ------------
> src/log/apitest/tet_cfg_destination.c
> src/log/logd/lgs_dest.cc
> src/log/logd/lgs_dest.h
> src/log/logd/lgs_mbcsv_v7.cc
> src/log/logd/lgs_mbcsv_v7.h
> src/log/tests/lgs_dest_test.cc
> src/log/tests/Makefile
>
> Complete diffstat:
> ------------------
> src/log/Makefile                      |    4 +
> src/log/Makefile.am                   |   31 +-
> src/log/apitest/tet_LogOiOps.c        |    8 +-
> src/log/apitest/tet_cfg_destination.c |  483 ++++
> src/log/config/logsv_classes.xml      |    7 +-
> src/log/logd/lgs_config.cc            |  169 +---
> src/log/logd/lgs_config.h             |    3 +-
> src/log/logd/lgs_dest.cc              |  707 ++++++
> src/log/logd/lgs_dest.h               |  576 +++++
> src/log/logd/lgs_evt.cc               |   33 ++
> src/log/logd/lgs_imm.cc               |  202 +--
> src/log/logd/lgs_main.cc              |    8 +
> src/log/logd/lgs_mbcsv.cc             |  103 ++-
> src/log/logd/lgs_mbcsv.h              |    6 +-
> src/log/logd/lgs_mbcsv_v5.cc          |   10 +
> src/log/logd/lgs_mbcsv_v7.cc          |  177 +++
> src/log/logd/lgs_mbcsv_v7.h           |   67 +
> src/log/logd/lgs_stream.cc            |   60 +++-
> src/log/logd/lgs_stream.h             |   16 +++
> src/log/logd/lgs_util.cc              |   63 ++++
> src/log/logd/lgs_util.h               |   11 +-
> src/log/tests/Makefile                |   20 +++
> src/log/tests/lgs_dest_test.cc        |  209 +++++
> 23 files changed, 2896 insertions(+), 77 deletions(-)
>
> Testing Commands:
> -
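The `.rej` files reported throughout this thread (lgs_config.cc.rej, lgs_imm.cc.rej, and so on) are where patch stores hunks whose context no longer matches the file being patched. A minimal self-contained demo of that behavior, with throwaway files and illustrative names only:

```shell
# Demo: a patch generated against a different base fails and leaves a
# .rej file holding the unapplied hunk, for manual merging.
set -e
dir=$(mktemp -d)
cd "$dir"

printf 'line A\n' > f.txt     # the tree has drifted from the patch base
printf 'line B\n' > f.base    # what the patch was generated against
printf 'line C\n' > f.new
diff -u f.base f.new > stale.patch || true   # diff exits 1 on differences

# The hunk expects "line B" but finds "line A", so patch reports
# "Hunk #1 FAILED" and saves the hunk to f.txt.rej (patch exits nonzero).
patch f.txt < stale.patch || true
cat f.txt.rej                 # the rejected hunk, ready for manual merging
```

This is why applying the series against the wrong base, or in the wrong order, produces the long lists of FAILED hunks quoted in this thread: each failed hunk becomes an entry in a `.rej` file rather than a change in the tree.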
Re: [devel] [PATCH 0 of 3] Review Request for log: add alternative destinations of log records [#2258] V4
Hi Vu,

Does this apply on top of log #2146 V4? I see both tickets have version changes.

In which order do I need to apply them: (#2146 then #2258) or (#2258 then #2146)?

=

patching file src/log/Makefile.am
Hunk #1 FAILED at 72.
Hunk #2 FAILED at 120.
2 out of 2 hunks FAILED -- saving rejects to file src/log/Makefile.am.rej
patching file src/log/config/logsv_classes.xml
Hunk #1 FAILED at 147.
1 out of 1 hunk FAILED -- saving rejects to file src/log/config/logsv_classes.xml.rej
patching file src/log/logd/lgs_config.cc
Hunk #1 succeeded at 35 (offset -5 lines).
Hunk #2 FAILED at 705.
Hunk #3 FAILED at 971.
2 out of 3 hunks FAILED -- saving rejects to file src/log/logd/lgs_config.cc.rej
patching file src/log/logd/lgs_config.h
Hunk #1 FAILED at 304.
1 out of 1 hunk FAILED -- saving rejects to file src/log/logd/lgs_config.h.rej
patching file src/log/logd/lgs_dest.cc
patching file src/log/logd/lgs_dest.h
patching file src/log/logd/lgs_evt.cc
Hunk #1 FAILED at 1.
Hunk #2 succeeded at 30 with fuzz 2 (offset 2 lines).
Hunk #3 succeeded at 1282 (offset 45 lines).
Hunk #4 succeeded at 1300 (offset 2 lines).
1 out of 4 hunks FAILED -- saving rejects to file src/log/logd/lgs_evt.cc.rej

=

-AVM

On 2/21/2017 3:03 PM, Vu Minh Nguyen wrote:
> Summary: log: add alternative destinations of log records [#2258]
> Review request for Trac Ticket(s): #2258
> Peer Reviewer(s): Lennart, Canh, Mahesh
> Pull request to: <>
> Affected branch(es): Default
> Development branch: Default
>
> Impacted area        Impact y/n
> -------------------------------
> Docs                 n
> Build system         n
> RPM/packaging        n
> Configuration files  n
> Startup scripts      n
> SAF services         n
> OpenSAF services     y
> Core libraries       n
> Samples              n
> Tests                y
> Other                n
>
> Comments (indicate scope for each "y" above):
> ---------------------------------------------
> To run the test, this patch depends on the following patches:
> 1) #2293 (sent by Anders Widel, but not yet pushed)
> 2) #2258 (v2, sent by Lennart, but not yet pushed)
>
> changeset d74aaf3025c99cade3165a15831124548f4d85bd
> Author: Vu Minh Nguyen
> Date: Wed, 15 Feb 2017 14:36:00 +0700
>
> log: add alternative destinations of log records [#2258]
>
> Here is the major info; detailed info will be added to the PR doc soon.
> 1) Add attribute "saLogRecordDestination" to log stream.
> 2) Add local socket destination handler.
> 3) Integrate into the first increment made by Lennart.
>
> changeset 4bae27a478c235df3058f43c92d3a5483233b01d
> Author: Vu Minh Nguyen
> Date: Wed, 15 Feb 2017 15:07:09 +0700
>
> log: add UML test case to verify alternative destination [#2258]
>
> Major changes:
> 1) Modify Lennart's test cases because of enhanced destination
> configuration validation rules.
> 2) Add test suite #17 to verify alternative destination.
>
> changeset bc375725fed22bb4f8cb3ae3df5f96fb9d281efb
> Author: Vu Minh Nguyen
> Date: Thu, 16 Feb 2017 17:22:13 +0700
>
> log: add unit tests to verify interfaces provided by destination handler
> [#2258]
>
> Unit tests to verify major interfaces:
> 1) CfgDestination()
> 2) WriteToDestination()
>
> Added Files:
> ------------
> src/log/apitest/tet_cfg_destination.c
> src/log/logd/lgs_dest.cc
> src/log/logd/lgs_dest.h
> src/log/logd/lgs_mbcsv_v7.cc
> src/log/logd/lgs_mbcsv_v7.h
> src/log/tests/lgs_dest_test.cc
> src/log/tests/Makefile
>
> Complete diffstat:
> ------------------
> src/log/Makefile                      |    4 +
> src/log/Makefile.am                   |   31 +-
> src/log/apitest/tet_LogOiOps.c        |    8 +-
> src/log/apitest/tet_cfg_destination.c |  483 ++++
> src/log/config/logsv_classes.xml      |    7 +-
> src/log/logd/lgs_config.cc            |  169 +---
> src/log/logd/lgs_config.h             |    3 +-
> src/log/logd/lgs_dest.cc              |  707 ++++++
> src/log/logd/lgs_dest.h               |  576 +++++
> src/log/logd/lgs_evt.cc               |   33 ++
> src/log/logd/lgs_imm.cc               |  202 +--
> src/log/logd/lgs_main.cc              |    8 +
> src/log/logd/lgs_mbcsv.cc             |  103