Hello, xurong, zhaojingjing:

At the time, to avoid data loss after an active/slave switchover,
the slave node was promoted before the VIP was started.
https://github.com/openvswitch/ovs/commit/a38f532320936147e6831153828af5bcba2fba54#diff-fb92408f16ddd415d9906e3976e60f47L255

Later, the data-loss problem was fully fixed in ovsdb:
https://github.com/openvswitch/ovs/commit/05ac209a5d3a5e85896f58d16b244e6b2a4cf2d0#diff-6878621809e8ea9210d85a69a5e06367

So I think it is now possible to change this ordering again.

Have any new problems been detected?

From: zhaojingjing0067370 <[email protected]>

---
  Documentation/topics/integration.rst | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Documentation/topics/integration.rst b/Documentation/topics/integration.rst
index 0447faf..d129e21 100644
--- a/Documentation/topics/integration.rst
+++ b/Documentation/topics/integration.rst
@@ -255,6 +255,6 @@ with the active server::

      $ pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=x.x.x.x \
          op monitor interval=30s
-    $ pcs constraint order promote ovndb_servers-master then VirtualIP
-    $ pcs constraint colocation add VirtualIP with master ovndb_servers-master \
+    $ pcs constraint order start VirtualIP then promote ovndb_servers-master
+    $ pcs constraint colocation add master ovndb_servers-master with VirtualIP \
          score=INFINITY
--
1.8.3.1

_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


