Re: [users] both SUs within a 2N Service Group appear as STANDBY

2016-11-07 Thread praveen malviya
Hi Dave,

Please share latest traces.

Thanks,
Praveen


Re: [users] both SUs within a 2N Service Group appear as STANDBY

2016-10-25 Thread David Hoyt
Hi Praveen,

Sorry, but the changes you suggested don’t seem to make a difference.

I’m now looking at coming up with a manual procedure so that, when we perform 
maintenance on a server, the VMs running on that server are terminated first. 
That way, I’ll be able to control the VM termination order and hopefully avoid 
all of these failovers happening at the same time.
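
As an illustration only, a minimal sketch of what such a controlled shutdown 
might look like, assuming libvirt-managed VMs; the VM domain names (pl-5, 
pl-3, sc-1) and the cluster DN are placeholders, not our real names:

# Hypothetical sketch: lock the AMF nodes hosted on server-1 first so their
# assignments move to the peer before the VMs disappear.
for node in PL-5 PL-3 SC-1; do
    amf-adm lock "safAmfNode=$node,safAmfCluster=myAmfCluster"
done

# Then shut the VMs down in a controlled order: payloads first, controller last.
for vm in pl-5 pl-3 sc-1; do
    virsh shutdown "$vm"
done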

We were looking at doing this controlled procedure anyway, but I’m still 
concerned about the uncontrolled scenario. It doesn’t appear to me that opensaf 
can handle this, and afterwards it requires manual recovery. ☹

Regards,
David


From: praveen malviya [mailto:praveen.malv...@oracle.com]
Sent: Monday, October 24, 2016 2:56 AM
To: David Hoyt <david.h...@genband.com>
Cc: opensaf-users@lists.sourceforge.net
Subject: Re: [users] both SUs within a 2N Service Group appear as STANDBY


Hi Dave,

SC-2 became active in no time at "Oct 21 17:54:35.111631". But somehow
IMM got busy with something and continuously returned TRY_AGAIN until SC-2
also went down at "Oct 21 17:54:34". When SC-2 came up and the
payload hosting SU2 had also joined, the SC-2 AMFD reassigned the active
state and updated it in IMM at:

Oct 21 18:00:23.113831 osafamfd [4070:imm.cc:0089] >> exec: Create
safSi=SG-C,safApp=SG-C
Oct 21 18:00:23.113836 osafamfd [4070:imma_oi_api.c:2757] >>
rt_object_create_common
Oct 21 18:00:23.113842 osafamfd [4070:imma_oi_api.c:2863] TR attr:safSISU
Oct 21 18:00:23.113848 osafamfd [4070:imma_oi_api.c:2863] TR
attr:saAmfSISUHAState
Oct 21 18:00:23.113854 osafamfd [4070:imma_oi_api.c:2863] TR
attr:saAmfSISUHAReadinessState

But since SC-2 also went down, the case no longer remains the same.
Please figure out why IMMND got stuck.
Still, a smaller patch is needed from the AMF perspective. I will
try to cover this case in #2009 (already published). Attached is
1.patch, which is extracted from 2009.patch. Please try with 1.patch on
top of #1540 and #1141.
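
For reference, a minimal sketch of applying the patches on an OpenSAF source
tree; the file names 1540.patch and 1141.patch are placeholders for the
patches attached to those tickets, and the rebuild/install step depends on how
OpenSAF is packaged in your environment:

cd /path/to/opensaf-src        # root of the OpenSAF source tree
patch -p1 < 1540.patch         # placeholder name for the #1540 patch
patch -p1 < 1141.patch         # placeholder name for the #1141 patch
patch -p1 < 1.patch            # the patch attached to this mail
# rebuild/reinstall OpenSAF with your normal build flow, then restart the nodes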

Thanks,
Praveen


Messages from SC-1 AMFD:
1) Node failover of PL-5 started:
Oct 21 17:52:20.232746 osafamfd [2864:ndproc.cc:0923] >>
avd_node_failover: 'safAmfNode=PL-5,safAmfCluster=myAmfCluster'

2) AMFD starts failover of 'safSu=SU1,safSg=SG-C,safApp=SG-C' hosted on PL-5.
Oct 21 17:52:20.275267 osafamfd [2864:sg_2n_fsm.cc:3262] >> node_fail:
'safSu=SU1,safSg=SG-C,safApp=SG-C', 0

and sends the active assignment to its peer:
Oct 21 17:52:20.275511 osafamfd [2864:sgproc.cc:2114] >>
avd_sg_su_si_mod_snd: 'safSu=SU2,safSg=SG-C,safApp=SG-C', state 1

3) This time the SC-1 AMFD itself was able to make the update in IMM:
Oct 21 17:52:20.363304 osafamfd [2864:imm.cc:0199] >> exec: Delete
safCSIComp=safComp=SG-C\,safSu=SU1\,safSg=SG-C\,safApp=SG-C,safCsi=SG-C,safSi=SG-C,safApp=SG-C
Oct 21 17:52:20.364569 osafamfd [2864:mds_dt_trans.c:0576] >>
mdtm_process_poll_recv_data_tcp
Oct 21 17:52:20.364640 osafamfd [2864:imm.cc:0222] << exec
Oct 21 17:52:20.364651 osafamfd [2864:imm.cc:0334] << execute: 1
Oct 21 17:52:20.364657 osafamfd [2864:imm.cc:0330] >> execute
Oct 21 17:52:20.364662 osafamfd [2864:imm.cc:0199] >> exec: Delete
safSISU=safSu=SU1\,safSg=SG-C\,safApp=SG-C,safSi=SG-C,safApp=SG-C
Oct 21 17:52:20.366127 osafamfd [2864:mds_dt_trans.c:0576] >>
mdtm_process_poll_recv_data_tcp
Oct 21 17:52:20.366201 osafamfd [2864:imm.cc:0222] << exec
Oct 21 17:52:20.366210 osafamfd [2864:imm.cc:0334] << execute: 1

This means a user will not see at least the above two runtime objects using
the queries amf-state csiass and amf-state siass, respectively.
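
As an illustration, one hypothetical way to confirm this from a shell, using
the same tools shown later in this thread (the immlist step assumes the IMM
tools are installed on the controller):

amf-state siass | grep -A 1 SG-C      # SU1's SI assignment is no longer listed
amf-state csiass | grep -A 1 SG-C     # SU1's CSI assignment is no longer listed
# Querying the deleted runtime object directly should report that it does not exist:
immlist 'safSISU=safSu=SU1\,safSg=SG-C\,safApp=SG-C,safSi=SG-C,safApp=SG-C'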

4) Down events of the directors on SC-1 started arriving:
Oct 21 17:53:22.303749 osafamfd [2864:ntfa_mds.c:0369] TR NTFS down.
Oct 21 17:53:22.307098 osafamfd [2864:lga_mds.c:0491] TR LGS down
Oct 21 17:53:22.311434 osafamfd [2864:clma_mds.c:0959] TR CLMS down
Local IMMND down:
Oct 21 17:53:22.324784 osafamfd [2864:imma_mds.c:0404] T3 IMMND DOWN

5) SC-1 AMFD got the response for safSu=SU2,safSg=SG-C,safApp=SG-C being
active:
Oct 21 17:54:19.465629 osafamfd [2864:sgproc.cc:0889] >>
avd_su_si_assign_evh: id:28, node:2060f, act:5,
'safSu=SU2,safSg=SG-C,safApp=SG-C', '', ha:1, err:1, single:0
AMFD generated IMM updates and a notification for SU2 being active:
Oct 21 17:54:19.466208 osafamfd [2864:imm.cc:1560] >>
avd_saImmOiRtObjectUpdate:
'safSISU=safSu=SU2\,safSg=SG-C\,safApp=SG-C,safSi=SG-C,safApp=SG-C'
saAmfSISUHAState
Oct 21 17:54:19.466217 osafamfd [2864:imm.cc:1580] <<
avd_saImmOiRtObjectUpdate
Oct 21 17:54:19.466223 osafamfd [2864:imm.cc:1560] >>
avd_saImmOiRtObjectUpdate:
'safCSIComp=safComp=SG-C\,safSu=SU2\,safSg=SG-C\,safApp=SG-C,safCsi=SG-C,safSi=SG-C,safApp=SG-C'
saAmfCSICompHAState
Oct 21 17:54:19.466231 osafamfd [2864:imm.cc:1580] <<
avd_saImmOiRtObjectUpdate
Oct 21 17:54:19.466236 osafamfd [2864:siass.cc:0446] >>
avd_gen_su

Re: [users] both SUs within a 2N Service Group appear as STANDBY

2016-10-24 Thread praveen malviya
C,safApp=SG-C' ACTIVE to 'safSu=SU2,safSg=SG-C,safApp=SG-C'

Oct 21 17:52:20 SG-C-1 osafdtmd[19770]: NO Lost contact with 'SG-C-0'

Oct 21 17:54:19 SG-C-1 osafamfnd[19830]: NO Assigned
'safSi=SG-C,safApp=SG-C' ACTIVE to 'safSu=SU2,safSg=SG-C,safApp=SG-C'

Oct 21 17:54:35 SG-C-1 osafdtmd[19770]: NO Lost contact with 'sc-1'



· I continued to see SU2 with a Standby HA state for almost
another 2 minutes:

[root@sc-2 ~]# date; amf-state siass | grep -A 1 -i sg-c

Fri Oct 21 17:56:16 UTC 2016

safSISU=safSu=SU2\,safSg=SG-C\,safApp=SG-C,safSi=SG-C,safApp=SG-C

saAmfSISUHAState=STANDBY(2)

[root@sc-2 ~]#



· The same is true for the 2N redundancy OpenSAF SUs. I saw
opensaf detect the loss of the mate at 17:53:22, initiate failover, and log
that it was active at 17:54:35.

Oct 21 17:53:22 sc-2 osafimmd[2594]: WA IMMD lost contact with peer IMMD
(NCSMDS_RED_DOWN)

Oct 21 17:53:22 sc-2 osafimmd[2594]: WA IMMND DOWN on active controller
f1 detected at standby immd!! f2. Possible failover

...

Oct 21 17:54:35 sc-2 osafdtmd[2553]: NO Lost contact with 'sc-1'

Oct 21 17:54:35 sc-2 osaffmd[2582]: NO Node Down event for node id 2010f:

Oct 21 17:54:35 sc-2 osaffmd[2582]: NO Current role: STANDBY

...

Oct 21 17:54:35 sc-2 osaffmd[2582]: NO Controller Failover: Setting role
to ACTIVE

Oct 21 17:54:35 sc-2 osafrded[2570]: NO RDE role set to ACTIVE

Oct 21 17:54:35 sc-2 osaflogd[2619]: NO ACTIVE request

Oct 21 17:54:35 sc-2 osafamfd[2673]: NO FAILOVER StandBy --> Active

Oct 21 17:54:35 sc-2 osafntfd[2634]: NO ACTIVE request

Oct 21 17:54:35 sc-2 osafclmd[2647]: NO ACTIVE request

Oct 21 17:54:35 sc-2 osafimmd[2594]: NO ACTIVE request

Oct 21 17:54:35 sc-2 osafimmd[2594]: NO ellect_coord invoke from
rda_callback ACTIVE

...

Oct 21 17:54:35 sc-2 osafamfd[2673]: NO Node 'SC-1' left the cluster

Oct 21 17:54:35 sc-2 osafamfd[2673]: NO FAILOVER StandBy --> Active DONE!



· Again, queries of the opensaf HA state did not reflect this
until 17:58:12:

[root@sc2 ~]# date; amf-state siass | grep -A 1 -i opensaf | grep -A 2
safSg=2N

Fri Oct 21 17:56:31 UTC 2016

safSISU=safSu=SC-1\,safSg=2N\,safApp=OpenSAF,safSi=SC-2N,safApp=OpenSAF

saAmfSISUHAState=ACTIVE(1)

--

safSISU=safSu=SC-2\,safSg=2N\,safApp=OpenSAF,safSi=SC-2N,safApp=OpenSAF

saAmfSISUHAState=STANDBY(2)

[root@sc-2 ~]#

…

[root@sc-2 ~]# date; amf-state siass | grep -A 1 -i opensaf | grep -A 2
safSg=2N

Fri Oct 21 17:58:12 UTC 2016

safSISU=safSu=SC-2\,safSg=2N\,safApp=OpenSAF,safSi=SC-2N,safApp=OpenSAF

saAmfSISUHAState=ACTIVE(1)

[root@sc-2 ~]#



· Also, as a heads up, opensaf on SC-2 was restarted at 17:55:53.

I have a script running on both controller nodes. It constantly pings
the mate controller, and if the ping fails, a counter is started.
When the count reaches a threshold, the HA state of the opensaf SU on
the current node is checked. If its HA state is standby, then the
script initiates an opensaf restart.

Basically, it’s a backup mechanism in case detection of the loss of the
mate controller takes too long. I found that this code was hit today at
17:54:53.

Oct 21 17:54:53 sc-2 wait_for_mate_connection: Cannot ping mate controller

Oct 21 17:54:53 sc-2 wait_for_mate_connection: threshold_counter: '1'

...

Oct 21 17:55:53 sc-2 wait_for_mate_connection: threshold_counter: '15'

Oct 21 17:55:53 sc-2 mate_down_handler: Mate connection down threshold
exceeded - analyzing data...

Oct 21 17:55:53 sc-2 mate_down_handler: Checking opensaf SU status...

Oct 21 17:55:53 sc-2 mate_down_handler: opensaf SU is Standby, no mate
available

Oct 21 17:55:54 sc-2 mate_down_handler: Restarting opensaf

Oct 21 17:55:54 sc-2 mate_down_handler: opensaf restart has been
initiated...
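
For context, a rough, hypothetical reconstruction of the watchdog loop
described above (not the actual script): the ping timeout, sleep interval,
threshold of 15, the grep pattern for the local SU, and the
"service opensafd restart" command are all assumptions.

#!/bin/sh
# Hypothetical mate-connection watchdog sketch, per the description above.
MATE_IP="<mate-controller-ip>"   # placeholder
THRESHOLD=15                     # matches the counts seen in the log above
count=0

while true; do
    if ping -c 1 -W 2 "$MATE_IP" >/dev/null 2>&1; then
        count=0                                  # mate reachable, reset counter
    else
        count=$((count + 1))
        logger -t wait_for_mate_connection "threshold_counter: '$count'"
        if [ "$count" -ge "$THRESHOLD" ]; then
            # Mate unreachable for too long: check the local opensaf SU's HA state
            # (the grep pattern is node-specific; SC-2 shown here).
            state=$(amf-state siass | grep -A 1 "safSu=SC-2" | grep saAmfSISUHAState)
            case "$state" in
                *STANDBY*)
                    logger -t mate_down_handler "opensaf SU is Standby, no mate available"
                    service opensafd restart     # assumed restart command on RHEL 6
                    ;;
            esac
            count=0
        fi
    fi
    sleep 4    # roughly matches counter 1..15 over one minute in the log
done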



By 18:05:29, all the HA states were correct.



[root@sc-2 ~]# date; amf-state siass | grep -A 1 -i opensaf | grep -A 2
safSg=2N

Fri Oct 21 18:05:29 UTC 2016

safSISU=safSu=SC-1\,safSg=2N\,safApp=OpenSAF,safSi=SC-2N,safApp=OpenSAF

saAmfSISUHAState=STANDBY(2)

--

--

safSISU=safSu=SC-2\,safSg=2N\,safApp=OpenSAF,safSi=SC-2N,safApp=OpenSAF

saAmfSISUHAState=ACTIVE(1)

[root@sc-2 ~]#





[root@sc-2 ~]# date; amf-state siass | grep -A 1 SG-C

Fri Oct 21 18:05:46 UTC 2016

safSISU=safSu=SU1\,safSg=SG-C\,safApp=SG-C,safSi=SG-C,safApp=SG-C

saAmfSISUHAState=STANDBY(2)

--

safSISU=safSu=SU2\,safSg=SG-C\,safApp=SG-C,safSi=SG-C,safApp=SG-C

saAmfSISUHAState=ACTIVE(1)

[root@sc-2 ~]#





Thanks,

David





From: praveen malviya [mailto:praveen.malv...@oracle.com]
Sent: Friday, October 21, 2016 10:38 AM
To: David Hoyt <david.h...@genband.com>
Subject: Re: [users] both SUs within a 2N Service Group appear as STANDBY







One more related ticket needs to be included:

changeset: 7029:229

Re: [users] both SUs within a 2N Service Group appear as STANDBY

2016-10-17 Thread David Hoyt
Hi Nivrutti,

That’s a typo.
I was trying not to use our real SG names and keep it generic. I meant to 
replace all references to ‘DVN’ with ‘SG-A’.

Thanks,
David


From: Nivrutti Kale [mailto:nk...@brocade.com]
Sent: Monday, October 17, 2016 2:38 PM
To: David Hoyt ; opensaf-users@lists.sourceforge.net
Subject: RE: both SUs within a 2N Service Group appear as STANDBY



Hi David,

The safApp name in the two commands is not consistent. Are you sure you are looking at the right 
output?
In the latter one the safApp name is "safApp=DVN".

Correct State:
safSISU=safSu=SU1\,safSg=SG-A\,safApp=SG-A,safSi=SG-A,safApp=SG-A

Wrong State:
safSISU=safSu=SU2\,safSg=SG-A\,safApp=DVN,safSi=SG-A,safApp=SG-A

Thanks,
Nivrutti

-Original Message-
From: David Hoyt [mailto:david.h...@genband.com]
Sent: Monday, October 17, 2016 11:46 PM
To: opensaf-users@lists.sourceforge.net
Subject: [users] both SUs within a 2N Service Group appear as STANDBY

Hi all,

I'm encountering a scenario where opensaf shows the HA state of both SUs within 
a 2N redundancy Service Group as standby.
Setup:

- Opensaf 4.6 running on RHEL 6.6 VMs with TCP

- 2 controllers, 4 payloads

- SC-1 & SC-2 are the VMs with the controller nodes (SC-1 is active)

- PL-3 & PL-4 have SU1 & SU2 from SG-A (2N redundancy)

- PL-5 & PL-6 have SU1 & SU2 from SG-B (2N redundancy)

- Server-1 has three VMs consisting of SC-1, PL-3 and PL-5

- Likewise, server-2 has SC-2, PL-4 and PL-6

I reboot server-1 and shortly afterwards, the SG-A SUs begin to fail over. SU2 
on PL-4 goes active.
Around the same time, the opensaf 2N SUs fail over.
After the dust has settled and server-1 and its VMs come back, all 
appears fine except for the SG-A SUs. They both have a standby HA state.

Is there any way to correct this?
Is there some audit that periodically checks the validity of the HA states?

Now, when SG-A SU1 recovered, I did swact the SUs and that corrected the HA 
state. However, if server-1 goes down for an extended period, the HA state of 
SG-A SU2 will appear as Standby when it's actually running as active.
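
For reference, the swact mentioned above can be driven with the AMF admin CLI; 
a minimal sketch, assuming the SG-A service instance DN shown in the outputs 
below:

# Swap the active and standby assignments of the SG-A service instance,
# then re-check the assignments.
amf-adm si-swap safSi=SG-A,safApp=SG-A
amf-state siass | grep -A 1 SG-A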


Before the reboot:

[root@sc-2 ~]# amf-state siass | grep -A 2 OpenSAF | grep -A 1 safSg=2N 
safSISU=safSu=SC-1\,safSg=2N\,safApp=OpenSAF,safSi=SC-2N,safApp=OpenSAF
saAmfSISUHAState=ACTIVE(1)
--
safSISU=safSu=SC-2\,safSg=2N\,safApp=OpenSAF,safSi=SC-2N,safApp=OpenSAF
saAmfSISUHAState=STANDBY(2)
[root@jenga-56-sysvm-1 ~]#
[root@sc-2 ~]# amf-state siass | grep -A 1 SG-A 
safSISU=safSu=SU2\,safSg=SG-A\,safApp=SG-A,safSi=SG-A,safApp=SG-A
saAmfSISUHAState=STANDBY(2)
--
safSISU=safSu=SU1\,safSg=SG-A\,safApp=SG-A,safSi=SG-A,safApp=SG-A
saAmfSISUHAState=ACTIVE(1)
[root@sc-2 ~]#
[root@sc-2 ~]# amf-state siass | grep -A 1 SG-B 
safSISU=safSu=SU2\,safSg=SG-B\,safApp=SG-B,safSi=SG-B,safApp=SG-B
saAmfSISUHAState=STANDBY(2)
--
safSISU=safSu=SU1\,safSg=SG-B\,safApp=SG-B,safSi=SG-B,safApp=SG-B
saAmfSISUHAState=ACTIVE(1)
[root@sc-2 ~]#



After the reboot:
[root@sc-2 ~]# amf-state siass | grep -A 2 OpenSAF | grep -A 1 safSg=2N 
safSISU=safSu=SC-1\,safSg=2N\,safApp=OpenSAF,safSi=SC-2N,safApp=OpenSAF
saAmfSISUHAState=STANDBY(2)
--
safSISU=safSu=SC-2\,safSg=2N\,safApp=OpenSAF,safSi=SC-2N,safApp=OpenSAF
saAmfSISUHAState=ACTIVE(1)
[root@sc-2 ~]#
[root@sc-2 ~]# amf-state siass | grep -A 1 SG-A 
safSISU=safSu=SU1\,safSg=SG-A\,safApp=SG-A,safSi=SG-A,safApp=SG-A
saAmfSISUHAState=STANDBY(2)
--
safSISU=safSu=SU2\,safSg=SG-A\,safApp=DVN,safSi=SG-A,safApp=SG-A
saAmfSISUHAState=STANDBY(2)
[root@sc-2 ~]#
[root@sc-2 ~]# amf-state siass | grep -A 1 SG-B 
safSISU=safSu=SU2\,safSg=SG-B\,safApp=SG-B,safSi=SG-B,safApp=SG-B
saAmfSISUHAState=ACTIVE(1)
--
safSISU=safSu=SU1\,safSg=SG-B\,safApp=SG-B,safSi=SG-B,safApp=SG-B
saAmfSISUHAState=STANDBY(2)
[root@sc-2 ~]#
