Re: [ovs-discuss] Inquiry for DDlog status for ovn-northd

2020-08-26 Thread Dumitru Ceara
On 8/26/20 5:11 PM, Dumitru Ceara wrote:
> On 8/25/20 7:46 PM, Ben Pfaff wrote:
>> On Tue, Aug 25, 2020 at 06:43:51PM +0200, Dumitru Ceara wrote:
>>> On 8/25/20 6:01 PM, Ben Pfaff wrote:
>>>> On Mon, Aug 24, 2020 at 04:28:22PM -0700, Han Zhou wrote:
>>>>> As I remember you were working on the new ovn-northd that utilizes DDlog
>>>>> for incremental processing. Could you share the current status?
>>>>>
>>>>> Now that some more improvements have been made in ovn-controller and 
>>>>> OVSDB,
>>>>> ovn-northd becomes the more obvious bottleneck for OVN use in large
>>>>> scale environments. Since you were not in the OVN meetings for the last
>>>>> couple of weeks, could you share here the status and plan moving forward?
>>>>
>>>> The status is basically that I haven't yet succeeded at getting Red
>>>> Hat's recommended benchmarks running.  I'm told that is important before
>>>> we merge it.  I find them super difficult to set up.  I tried a few
>>>> weeks ago and basically gave up.  Piles and piles of repos all linked
>>>> together in tricky ways, making it really difficult to substitute my own
>>>> branches.  I intend to try again soon, though.  I have a new computer
>>>> that should be arriving soon, which should also allow it to proceed more
>>>> quickly.
>>>
>>> Hi Ben,
>>>
>>> I can try to help with setting up ovn-heater. In theory it should be
>>> enough to export OVS_REPO, OVS_BRANCH, OVN_REPO and OVN_BRANCH, point
>>> them at your repos and branches, and then run "do.sh install"; it
>>> should take care of installing all the dependencies and repos.
>>>
>>> I can also try to run the scale tests on our downstream if that helps.
>>
>> It's probably better if I come up with something locally, because I
>> expect to have to run it multiple times, maybe many times, since I will
>> presumably discover bottlenecks.
>>
>> This time around, I'll speak up when I run into problems.
>>
> 
> Sorry in advance for the long email.
> 
> I went ahead and added a new test scenario to ovn-heater that I think
> might be relevant in the context of ovn-northd incremental processing:
> 
> https://github.com/dceara/ovn-heater#example-run-scenario-3---scale-up-number-of-pods---stress-ovn-northd
> 
> On my test machine:
> Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
> 2 NUMA nodes - 28 cores each.
> 
> I did:
> 
> $ cd
> $ git clone https://github.com/dceara/ovn-heater
> $ cd ovn-heater
> $ cat > physical-deployments/physical-deployment.yml << EOF
> registry-node: localhost
> internal-iface: none
> 
> central-node:
>   name: localhost
> 
> worker-nodes:
>   - localhost
> EOF
> 
> # Install all the required repos and make everything work together using
> # latest OVS and OVN code from github. This generates the
> # ~/ovn-heater/runtime where all the repos are cloned and the test suite
> # is run. This step also generates the container image with OVS/OVN
> # compiled from sources. This step has to be done every time we need
> # to test with a different version of OVS/OVN and can be customized with
> # the OVS/OVN_REPO and OVS/OVN_BRANCH env vars.
> $ ./do.sh install

# Missed a step here:
$ ./do.sh rally-deploy
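
# With the missing step included, the corrected end-to-end sequence (all
# commands taken from this thread) is:
$ ./do.sh install
$ ./do.sh rally-deploy
$ ./do.sh browbeat-run \
    browbeat-scenarios/switch-per-node-30-node-1000-pods.yml debug-dceara-pods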

> 
> # Start the test:
> # This brings up 30 "fake" OVN nodes and then simulates addition of
> # 1000 pods (lsps) and associated policies (port_group/address_set/acl).
> $ ./do.sh browbeat-run \
>     browbeat-scenarios/switch-per-node-30-node-1000-pods.yml debug-dceara-pods
> 
> # This takes quite long, ~1hr on my system.
> # Results are stored at:
> # ls -l ~/ovn-heater/test_results/debug-dceara-pods-20200826-080650/20200826-120718/rally/plugin-workloads/all-rally-run-0.html
> 
> What I noticed while the test was running (we can monitor the execution
> by tailing ~/ovn-heater/runtime/browbeat/*.log) was that ovn-northd's
> CPU usage increased steadily and was above 70-80% after ~500 iterations.
> 
> ovn-northd logs:
> 2020-08-26T14:24:25.989Z|02119|poll_loop|INFO|wakeup due to [POLLIN] on
> fd 12 (192.16.0.1:53642<->192.16.0.1:6642) at lib/stream-ssl.c:832 (97%
> CPU usage)
> 
> 2020-08-26T14:24:31.985Z|02120|poll_loop|INFO|Dropped 54 log messages in
> last 5 seconds (most recently, 0 seconds ago) due to excessive rate
> 
> 
> 2020-08-26T14:24:31.985Z|02121|poll_loop|INFO|wakeup due to [POLLIN] on
> fd 11 (192.16.0.1:56340<->192.16.0.1:6641) at lib/stream-ssl.c:832 (99%
> CPU usage)
> 
> For tr

Re: [ovs-discuss] Inquiry for DDlog status for ovn-northd

2020-08-26 Thread Dumitru Ceara
On 8/25/20 7:46 PM, Ben Pfaff wrote:
> On Tue, Aug 25, 2020 at 06:43:51PM +0200, Dumitru Ceara wrote:
>> On 8/25/20 6:01 PM, Ben Pfaff wrote:
>>> On Mon, Aug 24, 2020 at 04:28:22PM -0700, Han Zhou wrote:
>>>> As I remember you were working on the new ovn-northd that utilizes DDlog
>>>> for incremental processing. Could you share the current status?
>>>>
>>>> Now that some more improvements have been made in ovn-controller and OVSDB,
>>>> ovn-northd becomes the more obvious bottleneck for OVN use in large
>>>> scale environments. Since you were not in the OVN meetings for the last
>>>> couple of weeks, could you share here the status and plan moving forward?
>>>
>>> The status is basically that I haven't yet succeeded at getting Red
>>> Hat's recommended benchmarks running.  I'm told that is important before
>>> we merge it.  I find them super difficult to set up.  I tried a few
>>> weeks ago and basically gave up.  Piles and piles of repos all linked
>>> together in tricky ways, making it really difficult to substitute my own
>>> branches.  I intend to try again soon, though.  I have a new computer
>>> that should be arriving soon, which should also allow it to proceed more
>>> quickly.
>>
>> Hi Ben,
>>
>> I can try to help with setting up ovn-heater. In theory it should be
>> enough to export OVS_REPO, OVS_BRANCH, OVN_REPO and OVN_BRANCH, point
>> them at your repos and branches, and then run "do.sh install"; it
>> should take care of installing all the dependencies and repos.
>>
>> I can also try to run the scale tests on our downstream if that helps.
> 
> It's probably better if I come up with something locally, because I
> expect to have to run it multiple times, maybe many times, since I will
> presumably discover bottlenecks.
> 
> This time around, I'll speak up when I run into problems.
> 

Sorry in advance for the long email.

I went ahead and added a new test scenario to ovn-heater that I think
might be relevant in the context of ovn-northd incremental processing:

https://github.com/dceara/ovn-heater#example-run-scenario-3---scale-up-number-of-pods---stress-ovn-northd

On my test machine:
Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
2 NUMA nodes - 28 cores each.

I did:

$ cd
$ git clone https://github.com/dceara/ovn-heater
$ cd ovn-heater
$ cat > physical-deployments/physical-deployment.yml << EOF
registry-node: localhost
internal-iface: none

central-node:
  name: localhost

worker-nodes:
  - localhost
EOF

# Install all the required repos and make everything work together using
# latest OVS and OVN code from github. This generates the
# ~/ovn-heater/runtime where all the repos are cloned and the test suite
# is run. This step also generates the container image with OVS/OVN
# compiled from sources. This step has to be done every time we need
# to test with a different version of OVS/OVN and can be customized with
# the OVS/OVN_REPO and OVS/OVN_BRANCH env vars.
$ ./do.sh install
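
# For example, to test your own OVS/OVN forks and branches instead of the
# upstream defaults, it should be enough to export these before the install
# step (repo URLs and branch names below are placeholders):
$ export OVS_REPO=https://github.com/<user>/ovs
$ export OVS_BRANCH=<ovs-branch>
$ export OVN_REPO=https://github.com/<user>/ovn
$ export OVN_BRANCH=<ovn-branch>
$ ./do.sh install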

# Start the test:
# This brings up 30 "fake" OVN nodes and then simulates addition of
# 1000 pods (lsps) and associated policies (port_group/address_set/acl).
$ ./do.sh browbeat-run \
    browbeat-scenarios/switch-per-node-30-node-1000-pods.yml debug-dceara-pods

# This takes quite long, ~1hr on my system.
# Results are stored at:
# ls -l ~/ovn-heater/test_results/debug-dceara-pods-20200826-080650/20200826-120718/rally/plugin-workloads/all-rally-run-0.html
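
# To view the HTML report from a remote test machine, one option (not part
# of ovn-heater itself) is to serve the results directory over HTTP:
$ cd ~/ovn-heater/test_results/debug-dceara-pods-20200826-080650/20200826-120718/rally/plugin-workloads
$ python3 -m http.server 8080
# and then browse to http://<test-machine>:8080/all-rally-run-0.html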

What I noticed while the test was running (we can monitor the execution
by tailing ~/ovn-heater/runtime/browbeat/*.log) was that ovn-northd's
CPU usage increased steadily and was above 70-80% after ~500 iterations.
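
A crude way to watch both at the same time, assuming the ovn-northd process
is visible from the host and pidof/top are available (this is not part of
ovn-heater):

$ tail -f ~/ovn-heater/runtime/browbeat/*.log &
$ top -b -d 5 -p "$(pidof ovn-northd)"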

ovn-northd logs:
2020-08-26T14:24:25.989Z|02119|poll_loop|INFO|wakeup due to [POLLIN] on
fd 12 (192.16.0.1:53642<->192.16.0.1:6642) at lib/stream-ssl.c:832 (97%
CPU usage)

2020-08-26T14:24:31.985Z|02120|poll_loop|INFO|Dropped 54 log messages in
last 5 seconds (most recently, 0 seconds ago) due to excessive rate


2020-08-26T14:24:31.985Z|02121|poll_loop|INFO|wakeup due to [POLLIN] on
fd 11 (192.16.0.1:56340<->192.16.0.1:6641) at lib/stream-ssl.c:832 (99%
CPU usage)

For troubleshooting/profiling, the easiest way I can think of to rerun
the sequence of commands without running the whole suite is to extract
them from the ovn-nbctl daemon logs (we start the ovn-nbctl daemon on
node ovn-central-1). I also added a short sleep to each command so that
NB changes aren't batched before ovn-northd processes them:

$ docker exec ovn-central-1 \
      grep "Running command" /var/log/openvswitch/ovn-nbctl.log \
  | sed -ne 's/.*Running command run\(.*\)/ovn-nbctl\1; sleep 0.01/p' \
  > commands.sh
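
# Each line of commands.sh should then look roughly like this (illustrative
# example; the actual arguments are whatever the test ran):
#   ovn-nbctl lsp-add sw-0 lp-0-1; sleep 0.01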

# Now we can just run ovn-northd locally:
$ ovn-ctl start_northd
# Start an ovn-nbctl da
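
# A rough sketch of how the replay could continue; the exact ovn-nbctl
# daemon invocation below is an assumption, not taken from this thread:
$ export OVN_NB_DAEMON=$(ovn-nbctl --pidfile --detach --log-file)
$ sh commands.sh   # replays the recorded NB commands (with the 0.01s sleeps)
                   # while ovn-northd can be profiled, e.g. with perf top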

Re: [ovs-discuss] How to restart raft cluster after a complete shutdown?

2020-08-26 Thread Matthew Booth
On Tue, 25 Aug 2020 at 17:45, Tony Liu  wrote:
>
> Start the first node to create the cluster.
> https://github.com/ovn-org/ovn/blob/master/utilities/ovn-ctl#L228
> https://github.com/openvswitch/ovs/blob/master/utilities/ovs-lib.in#L478
>
> Start the rest nodes to join the cluster.
> https://github.com/ovn-org/ovn/blob/master/utilities/ovn-ctl#L226
> https://github.com/openvswitch/ovs/blob/master/utilities/ovs-lib.in#L478

Unfortunately this is precisely the problem: this doesn't work after
the cluster has already been created. The first node fails to come up
with:

2020-08-26T08:06:19Z|3|reconnect|INFO|tcp:ovn-ovsdb-1.openstack.svc.cluster.local:6643:
connecting...
2020-08-26T08:06:19Z|4|reconnect|INFO|tcp:ovn-ovsdb-2.openstack.svc.cluster.local:6643:
connecting...
2020-08-26T08:06:20Z|5|reconnect|INFO|tcp:ovn-ovsdb-1.openstack.svc.cluster.local:6643:
connection attempt timed out
2020-08-26T08:06:20Z|6|reconnect|INFO|tcp:ovn-ovsdb-2.openstack.svc.cluster.local:6643:
connection attempt timed out

This makes sense, because the first node can't come up without joining
a quorum, and it can't join a quorum because the other two nodes
aren't up.

I 'fixed' this by switching from the OrderedReady to Parallel pod
management policy for the statefulset. This just means that all pods
come up simultaneously rather than waiting for the first to come up on
its own, which will never work. However, my bootstrapping mechanism
relied on the behaviour of OrderedReady, so I'm going to have to come
up with a solution for that.
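
For reference, the change boils down to a single field on the StatefulSet
spec; a minimal sketch (object names and image are illustrative, not copied
from the ovsdb.yaml linked below):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ovn-ovsdb
spec:
  serviceName: ovn-ovsdb
  replicas: 3
  podManagementPolicy: Parallel  # default is OrderedReady; Parallel starts
                                 # all pods at once so quorum can be regained
  selector:
    matchLabels:
      app: ovn-ovsdb
  template:
    metadata:
      labels:
        app: ovn-ovsdb
    spec:
      containers:
        - name: ovsdb-server
          image: example.org/ovn-ovsdb:latest  # placeholder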

Matt

>
> Tony
> > -----Original Message-----
> > From: discuss  On Behalf Of Matthew
> > Booth
> > Sent: Tuesday, August 25, 2020 7:08 AM
> > To: ovs-discuss 
> > Subject: [ovs-discuss] How to restart raft cluster after a complete
> > shutdown?
> >
> > I'm deploying ovsdb-server (and only ovsdb-server) in K8S as a
> > StatefulSet:
> >
> > https://github.com/openstack-k8s-operators/dev-tools/blob/master/ansible/files/ocp/ovn/ovsdb.yaml
> >
> > I'm going to replace this with an operator in due course, which may make
> > the following simpler. I'm not necessarily constrained to only things
> > which are easy to do in a StatefulSet.
> >
> > I've noticed an issue when I kill all 3 pods simultaneously: it is no
> > longer possible to start the cluster. The issue is presumably one of
> > quorum: when a node comes up it can't contact any other node to make
> > quorum, and therefore can't come up. All nodes are similarly affected,
> > so the cluster stays down. Ignoring Kubernetes, how is this situation
> > intended to be handled? Do I have to revert it to a single-node deployment,
> > convert that to a new cluster and re-bootstrap it? This wouldn't be
> > ideal. Is there any way, for example, I can bring up the first node
> > while asserting to that node that the other 2 are definitely down?
> >
> > Thanks,
> >
> > Matt
> > --
> > Matthew Booth
> > Red Hat OpenStack Engineer, Compute DFG
> >
> > Phone: +442070094448 (UK)
> >
>


-- 
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG

Phone: +442070094448 (UK)
