In the OpenStack Neutron reference implementation, all agents
periodically report their status to the Neutron server. Similarly, in an
OpenStack OVN-based deployment, we want ovn-controller to periodically
report its status to the Neutron server.
We can follow two approaches for this:
1) ovn-controller periodically writes a timestamp (along with its name
and type) into the SBDB Chassis table, e.g.:
   smap_add(&ext_ids, "OVN_CONTROLLER_TYPE:ovn-controller1", timestamp);
networking-ovn then watches this timestamp and updates the agent status
in the Neutron DB accordingly.
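To make the division of labor concrete, here is a minimal Python sketch
of the two halves of approach 1. The key format, the key name, and the
agent_down_time threshold are assumptions for illustration (they mirror
the smap_add() call above and Neutron's notion of agent liveness), not
an existing networking-ovn API.

```python
import time

# Hypothetical key format from the proposal: "<TYPE>:<name>" -> timestamp.
HEARTBEAT_KEY = "OVN_CONTROLLER_TYPE:ovn-controller1"

def write_heartbeat(ext_ids, key=HEARTBEAT_KEY, now=None):
    """What ovn-controller's smap_add() call would do: store the
    current timestamp (seconds since epoch) under the agent key."""
    ext_ids[key] = str(int(now if now is not None else time.time()))
    return ext_ids

def is_alive(ext_ids, key=HEARTBEAT_KEY, agent_down_time=75, now=None):
    """What networking-ovn would do on its side: compare the stored
    timestamp against a liveness threshold (agent_down_time)."""
    ts = ext_ids.get(key)
    if ts is None:
        return False
    now = now if now is not None else time.time()
    return (now - int(ts)) <= agent_down_time

ext_ids = write_heartbeat({}, now=1000)
print(is_alive(ext_ids, now=1050))   # within threshold -> True
print(is_alive(ext_ids, now=1200))   # stale -> False
```

The real implementation would read and write external_ids through the
OVSDB protocol rather than a plain dict, but the staleness check is the
same.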
2) Alternatively, use OVSDB server connection monitoring.
Since ovn-controller is a client of the OVSDB server (for the SBDB), the
OVSDB server periodically probes this connection and records its status
in the "Connection" table. However, when the connection method is
inbound (e.g., ptcp or punix), it updates only "n_connections" in the
status column and does not write per-connection details into the OVSDB.

Pros and cons of this approach:
Pro: it reuses the existing ovsdb-server monitoring, so there is no need
to spawn a thread in ovn-controller just to report timestamps.
Cons:
a) ovsdb-server only knows the remote IP address and port of each
connection. How can that information be used to identify which client
(chassis) the connection belongs to?
One approach: after creating the connection, the OVSDB client
(ovn-controller) adds its IP address and port to a new table in the
SBDB. ovsdb-server can then search this table by the connection's IP
address and port and update the connection status (only when the status
changes) in the matching row. networking-ovn can watch this table and
update the Neutron DB accordingly.
This requires changes in all OVSDB clients and in the OVSDB server, even
though the requirement is only for ovn-controller.
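A small sketch of the lookup described in (a), under the assumption of a
hypothetical new SBDB table mapping a client's (ip, port) to its name;
the column names are illustrative and not part of any existing schema:

```python
# Rows the clients would insert right after connecting (hypothetical
# table; "ip", "port" and "client" are made-up column names).
client_endpoints = [
    {"ip": "192.0.2.10", "port": 54321, "client": "ovn-controller1"},
    {"ip": "192.0.2.11", "port": 43210, "client": "ovn-controller2"},
]

def find_client(rows, ip, port):
    """What ovsdb-server would do: match the remote endpoint of an
    inbound connection against the table and return the row, so the
    connection status can be written there when it changes."""
    for row in rows:
        if row["ip"] == ip and row["port"] == port:
            return row
    return None

row = find_client(client_endpoints, "192.0.2.10", 54321)
print(row["client"] if row else "unknown")  # -> ovn-controller1
```

One caveat with this scheme: the client's source port can change on
reconnect, so the client would have to refresh its row every time it
re-establishes the connection.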
b) If a deployment wants to disable connection monitoring, it can set
"inactivity_probe" to 0, but then we lose status reporting as well. This
tightly couples status reporting with the inactivity probe.
Please suggest which approach would be better.
Note: I have proposed a spec in networking-ovn for this. Reviews and
comments can be discussed on the mailing list.