Hi folks,
This is probably not a bug, and I'm not sure much can be done about it, but
I thought I'd raise it here for discussion.
I have deployed a simple topology with two logical switches (VLAN backed
network), a logical router and a couple of VMs. When pinging between the
logical switches, all the
ve, but
it looks like we want to do something about this (maybe configurable) to
avoid situations where we flood large amounts of traffic (for example in
long lived connections or bulk transfers or ...)
>
> Thanks
>
> Regards,
> Ankur
>
> --
> *From:
On Tue, Sep 29, 2020 at 11:14 AM Krzysztof Klimonda <
kklimo...@syntaxhighlighted.com> wrote:
> On Tue, Sep 29, 2020, at 10:40, Dumitru Ceara wrote:
> > On 9/29/20 12:42 AM, Krzysztof Klimonda wrote:
> > > Hi Dumitru,
> > >
> > > This cluster is IPv4-only for now - there are no IPv6 networks defin
On Tue, Sep 29, 2020 at 1:14 PM Dumitru Ceara wrote:
> On 9/29/20 1:07 PM, Krzysztof Klimonda wrote:
> > On Tue, Sep 29, 2020, at 12:40, Dumitru Ceara wrote:
> >> On 9/29/20 12:14 PM, Daniel Alvarez Sanchez wrote:
> >>>
> >>>
> >>
Hey Krzysztof,
On Fri, Nov 20, 2020 at 1:17 PM Krzysztof Klimonda <
kklimo...@syntaxhighlighted.com> wrote:
> Hi,
>
> Doing some tempest runs on our pre-prod environment (stable/ussuri with
> ovn 20.06.2 release) I've noticed that some network connectivity tests were
> failing randomly. I've repr
>
> Best Regards,
> -Chris
>
>
> On Tue, Dec 15, 2020, at 11:13, Daniel Alvarez Sanchez wrote:
>
> Hey Krzysztof,
>
> On Fri, Nov 20, 2020 at 1:17 PM Krzysztof Klimonda <
> kklimo...@syntaxhighlighted.com> wrote:
>
> Hi,
>
> Doing some tempest runs on
Thanks Ankur, all for the presentation and slides.
If I may, I have some questions regarding the proposed solution:
1) Who is responsible for creating the VTEP endpoints on each hypervisor?
Are they assumed to be created in advance or somehow this solution will
take care of it? If the latter, h
e feel free to let us know, if you have further queries.
>
>
> Regards,
> Ankur
> --
> *From:* Greg Smith
> *Sent:* Thursday, January 7, 2021 8:38 AM
> *To:* Daniel Alvarez Sanchez ; Ankur Sharma <
> ankur.sha...@nutanix.com>; Greg A. Smith
Hi folks,
Recently we found out that due to a misconfiguration of the OVN bridge
mappings, traffic that should be sent out to an external bridge was
tunneled to the destination. Since the traffic was working, it took a while
to spot the misconfiguration.
While this can be ok as it keeps everythin
On Tue, Mar 16, 2021 at 2:45 PM Luis Tomas Bolivar
wrote:
> Of course we are fully open to redesign it if there is a better approach!
> And that was indeed the intention when linking to the current efforts,
> figure out if that was a "valid" way of doing it, and how it can be
> improved/redesigne
On Tue, Mar 16, 2021 at 3:20 PM Krzysztof Klimonda <
kklimo...@syntaxhighlighted.com> wrote:
>
> On Tue, Mar 16, 2021, at 14:45, Luis Tomas Bolivar wrote:
>
> Of course we are fully open to redesign it if there is a better approach!
> And that was indeed the intention when linking to the current e
Thanks Krzysztof, all
Let me see if I understand the 'native' proposal. Please amend as necessary
:)
On Tue, Mar 16, 2021 at 9:28 PM Krzysztof Klimonda <
kklimo...@syntaxhighlighted.com> wrote:
>
>
> On Tue, Mar 16, 2021, at 19:15, Mark Gray wrote:
> > On 16/03/2021 15:41, Krzysztof Klimonda wro
On Mon, Oct 18, 2021 at 1:12 PM Ammad Syed wrote:
> Hi Brendan,
>
> Not sure but this could be related to the patch below in neutron that was
> recently released.
>
>
> https://opendev.org/openstack/neutron/commit/f6c35527698119ee6f73a6a3613c9beebb563840
>
Not really, as this commit that you ref
Hi,
On Fri, Oct 29, 2021 at 5:50 AM 鲁 成 wrote:
> *Environment info:*
> OVN 21.06
>
> OVS 2.12.0
>
> *Reproduction:*
> 1. Create a port with neutronclient assign it to a node and close port
> security group
>
> 2. Create a ovs port and add it to br-int, and set interface iface-id same
> as neutro
teaming where an IP fails over to a
different port but the MAC address is different.
The patch that changed this behavior is here:
https://patchwork.ozlabs.org/patch/1258152/
Hope it helps!
daniel
>
> Thanks
>
> 从 Windows 版邮件 <https://go.microsoft.com/fwlink/?LinkId=550986>发送
Hey Brendan,
On Wed, Jan 26, 2022 at 12:52 PM Brendan Doyle
wrote:
> Hi,
>
> So I have an underlay VIP that is reachable via a Gateway. The VIP moved
> to a new
> hypervisor after a simulated power failure (all hypervisors rebooted).
> When things came
> back OVN was resolving it to the wrong MA
Hi folks,
While doing some tests with PXE booting and OVN, we noticed that even
though the tftp-server option was sent by ovn-controller, the baremetal
node wouldn't try to reach it to download the image. Comparing it to the
output sent by dnsmasq, it looks like we're missing the next server optio
Thanks Liu
On Wed, May 18, 2022 at 8:47 AM 刘勰 wrote:
> Hi Daniel,
> Thanks for your reply.
> Yeah, we have already configured 'ovn-chassis-mac-mappings' for every
> chassis and the flooding still exists. And I don't think
> 'ovn-chassis-mac-mappings' is the solution for the matter.
> I think the core of this
+Lucas Martins to this thread who's been working on
this particular area
On Thu, Oct 6, 2022 at 1:44 PM Numan Siddique wrote:
> On Thu, Oct 6, 2022 at 4:27 AM Michał Nasiadka
> wrote:
> >
> > Hello,
> >
> > I’m running OpenStack Wallaby and using Ironic for Bare Metal
> provisioning.
> > Neutr
Hi Vikrant,
Please note that you won't see the metadata namespace getting created
right after creating a network. The namespace will be present only when
necessary; i.e., when a port is bound to a chassis. The metadata agent
will detect that condition and create/provision the metadata names
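For anyone debugging this, a quick way to confirm that behavior on a compute
node is to list the namespaces before and after a port gets bound there (the
ovnmeta-<network-uuid> naming below is networking-ovn's convention and may
differ in your deployment):

```shell
# Right after creating the network, no metadata namespace exists yet;
# one should appear once a port on that network is bound to this chassis.
ip netns list | grep ovnmeta
```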
Hi Vikran
On Sat, Sep 23, 2017 at 8:22 AM, Vikrant Aggarwal
wrote:
> Hi Folks,
>
> I am trying to understand how instance get metadata when OVN is used as
> mechanism driver. I read the theory on [1] but not able to understand the
> practical implementation of same.
>
> Created two private netwo
System information:
===
OS: CentOS Linux release 7.3.1611 (Core)
Kernel version: 3.10.0-693.2.2.el7.x86_64 #1 SMP
OVS version: v2.8.1 (git tag)
#ovs-vswitchd --version
ovs-vswitchd (Open vSwitch) 2.8.1
Bug description:
Right now, OVN doesn't work using OVS 2.8.1 on
Hi guys,
Great job Numan!
As we discussed over IRC, the patch below may make more sense.
It essentially sets the dl_type so that when a packet comes from the
controller, it matches
a valid type and OVS_KEY_ATTR_CT_ORIG_TUPLE_IPV4 is not added.
Maybe what Numan proposed and this patch could be a good
On Tue, Oct 24, 2017 at 11:35 PM, Ben Pfaff wrote:
> On Tue, Oct 24, 2017 at 02:27:59PM -0700, Ben Pfaff wrote:
> > On Tue, Oct 24, 2017 at 11:07:58PM +0200, Daniel Alvarez Sanchez wrote:
> > > Hi guys,
> > >
> > > Great job Numan!
> > > As we disc
t 3:09 PM, Daniel Alvarez Sanchez <
> dalva...@redhat.com
> > > wrote:
> >
> > >
> > >
> > > On Tue, Oct 24, 2017 at 11:35 PM, Ben Pfaff wrote:
> > >
> > >> On Tue, Oct 24, 2017 at 02:27:59PM -0700, Ben Pfaff wrote:
>
Hi Gerhard/all,
We also saw this on RHEL 7.4 using OVS 2.7.2.
It didn't stop until we restarted openvswitch service as well.
Log messages showed the following:
2017-11-07T20:56:00.688Z|986468|bridge|INFO|bridge br-int: added interface
ha-a1a195a9-c9 on port 12043
2017-11-07T20:56:00.735Z|986469|b
Hi folks,
While running rally in OpenStack we found out that ovn-northd was
at 100% CPU most of the time. It doesn't have to be necessarily
a problem but I wanted to do a simple profiling by running a rally task
which creates a network (Logical Switch) and creates 6 ports on it,
repeating the whol
labs.org/patch/868826/
>
> On Fri, Feb 02, 2018 at 07:24:59PM +0100, Daniel Alvarez Sanchez wrote:
> > Hi folks,
> >
> > While running rally in OpenStack we found out that ovn-northd was
> > at 100% CPU most of the time. It doesn't have to be necessarily
> >
Nice findings Han!
Looking back at the patch that Numan sent I answered this to the report:
"Yes, thanks Numan for the patch :)
Another option would be for ovn-controller to explicitly set the MTU to 1450.
I'm not sure which of the two is best or would have fewer side effects.
Cheers,
Daniel
"
Wo
Hi folks,
As we're doing some performance tests in OpenStack using OVN,
we noticed that as we keep creating ports, the time for creating a
single port increases. Also, ovn-northd CPU consumption is quite
high (see [0] which shows the CPU consumption when creating
1000 ports and deleting them. Last
Pfaff wrote:
> On Tue, Feb 13, 2018 at 12:39:56PM +0100, Daniel Alvarez Sanchez wrote:
> > Hi folks,
> >
> > As we're doing some performance tests in OpenStack using OVN,
> > we noticed that as we keep creating ports, the time for creating a
> > single port inc
Thanks a lot Han and Ben for looking into this!
On Wed, Feb 14, 2018 at 9:34 PM, Han Zhou wrote:
>
>
> On Wed, Feb 14, 2018 at 9:45 AM, Ben Pfaff wrote:
> >
> > On Wed, Feb 14, 2018 at 11:27:11AM +0100, Daniel Alvarez Sanchez wrote:
> > > Thanks for your inputs.
Ok let me paste some example but feel free to ask for any further
details.
1 Logical Switch with 5 ports and 8 ACLs per port:
# ovn-nbctl show
switch c1fac5d4-b682-4078-9282-61cfa6383893
(neutron-d35e99a5-d9e9-4bc5-9ad4-08e0941f1820) (aka test_net)
port 8a8be79b-7a24-4a19-b952-c68d839e0164 (
On Wed, Feb 14, 2018 at 9:34 PM, Han Zhou wrote:
>
>
> On Wed, Feb 14, 2018 at 9:45 AM, Ben Pfaff wrote:
> >
> > On Wed, Feb 14, 2018 at 11:27:11AM +0100, Daniel Alvarez Sanchez wrote:
> > > Thanks for your inputs. I need to look more carefully into the patch
>
it 26 times for a single port is a lot.
26 calls = 2*(1 (LS insert) + 1 (AS modify) + 1(LSP modify) + 8 (ACL
insert) + 1(LSP modify) + 1(PB insert))
Thoughts?
Thanks!
Daniel
On Thu, Feb 15, 2018 at 10:56 PM, Daniel Alvarez Sanchez <
dalva...@redhat.com> wrote:
>
>
> On Wed, Feb 1
On Fri, Feb 16, 2018 at 12:12 PM, Daniel Alvarez Sanchez <
dalva...@redhat.com> wrote:
> I've found out more about what is running slow in this scenario.
> I've profiled the processing of the update2 messages and here you can
> see the sequence of calls to __process_
mong the rest of the
workers, one of them is always getting them duplicated while the rest
don't; I don't know why.
However, the 'modify' in the LS table for updating the acls set is the one
always taking 2-3 seconds on this load.
On Fri, Feb 16, 2018 at 12:27 PM, Daniel Alva
On Fri, Feb 16, 2018 at 6:33 PM, Han Zhou wrote:
> Hi Daniel,
>
> Thanks for the detailed profiling!
>
> On Fri, Feb 16, 2018 at 6:50 AM, Daniel Alvarez Sanchez <
> dalva...@redhat.com> wrote:
> >
> > About the duplicated processing of the update2 messages, I
On Tue, Feb 13, 2018 at 8:32 PM, Ben Pfaff wrote:
> On Tue, Feb 13, 2018 at 12:39:56PM +0100, Daniel Alvarez Sanchez wrote:
> > Hi folks,
> >
> > As we're doing some performance tests in OpenStack using OVN,
> > we noticed that as we keep creating ports, the tim
> On Fri, Feb 23, 2018 at 2:17 PM, Ben Pfaff wrote:
> > > >
> > > > On Tue, Feb 20, 2018 at 08:56:42AM -0800, Han Zhou wrote:
> > > > > On Tue, Feb 20, 2018 at 8:15 AM, Ben Pfaff wrote:
> > > > > >
> > > > > > On Mon,
Hi folks,
During the performance tests I've been doing lately I noticed
that the size of the Southbound database was around 2.5GB
in one of my setups. I couldn't dig further then but now I
decided to explore a bit more and these are the results in
my all-in-one OpenStack setup using OVN as a backe
8-03-07T13:32:21.672Z|00021|ovsdb_server|INFO|compacting OVN_Southbound
database by user request
2018-03-07T13:32:21.672Z|00022|ovsdb_file|INFO|/opt/stack/data/ovs/ovnsb_db.db:
compacting database online (1519124364.908 seconds old, 951 transactions)
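For reference, the user-requested compaction in the log above can be
triggered by hand; this is a sketch assuming the devstack-style paths shown
in the log (the control socket location varies per deployment):

```shell
# Ask ovsdb-server to compact the Southbound database online...
ovs-appctl -t /opt/stack/data/ovs/ovnsb_db.ctl \
    ovsdb-server/compact OVN_Southbound
# ...and check how much the on-disk file shrank.
ls -lh /opt/stack/data/ovs/ovnsb_db.db
```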
On Wed, Mar 7, 2018 at 2:40 PM, Daniel Alvar
r
(last time it shrank to 9MB) so maybe we have something odd going on in
the online compaction. I'll post the results after this test.
On Wed, Mar 7, 2018 at 3:35 PM, Mark Michelson wrote:
> On 03/07/2018 07:40 AM, Daniel Alvarez Sanchez wrote:
>
>> Hi folks,
>>
>>
All right, I'll repeat it with code in branch-2.8.
Will post the results once the test finishes.
Daniel
On Wed, Mar 7, 2018 at 7:03 PM, Ben Pfaff wrote:
> On Wed, Mar 07, 2018 at 05:53:15PM +0100, Daniel Alvarez Sanchez wrote:
> > Repeated the test with 1000 ports this time. See
n.
>
> On Wed, Mar 07, 2018 at 07:06:50PM +0100, Daniel Alvarez Sanchez wrote:
> > All right, I'll repeat it with code in branch-2.8.
> > Will post the results once the test finishes.
> > Daniel
> >
> > On Wed, Mar 7, 2018 at 7:03 PM, Ben Pfaff wrote:
> &
ansactions, 23538784
bytes)
On Wed, Mar 7, 2018 at 7:18 PM, Daniel Alvarez Sanchez
wrote:
> No worries, I just triggered the test now running OVS compiled out of
> 2.8 branch (2.8.3). I'll post the results and investigate too.
>
> I have just sent a patch to fix the timing iss
>>>
>>>
>>>
>>> On Wed, Mar 7, 2018 at 7:18 PM, Daniel Alvarez Sanchez <
>>> dalva...@redhat.com>
>>> wrote:
>>>
>>> No worries, I just triggered the test now running OVS compiled out of
>>>> 2.8 branch (2.8.
Thanks Ben and Mark. I'd be okay with 2x.
Don't you think that, apart from that, it could still be good to compact
after a certain amount of time (like 1 day) if the number of transactions
is > 0, regardless of the size?
On Thu, Mar 8, 2018 at 10:00 PM, Ben Pfaff wrote:
> It would be trivial to chang
Ok, I've just sent a patch and if you're not convinced we can
just do the 2x change. Thanks a lot!
Daniel
On Thu, Mar 8, 2018 at 10:19 PM, Ben Pfaff wrote:
> I guess I wouldn't object.
>
> On Thu, Mar 08, 2018 at 10:11:11PM +0100, Daniel Alvarez Sanchez wrote:
>
/1cfdc175ab1ecbc8f5d22f78d8e5f4344d55c5dc#diff-62fba9ea73e44f70aa9f56228bd4658c
[2]
https://github.com/openvswitch/ovs/commit/69f453713459c60e5619174186f94a0975891580
On Thu, Mar 8, 2018 at 11:21 PM, Daniel Alvarez Sanchez wrote:
> Ok, I've just sent a patch and if you're not convinced we can
Hi all,
I'm writing the code to implement the port groups in networking-ovn (the
OpenStack integration project with OVN). I found out that when a boot a VM,
looks like the egress traffic (from VM) is not working properly. The VM
port belongs to 3 Port Groups:
1. Default drop port group with the f
xt;"
external_ids: {source="ovn-northd.c:2931", stage-name=ls_in_pre_acl}
logical_datapath: 0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf
match : ip
pipeline: ingress
priority: 100
Which apparently is responsible for adding the hint
/northd/ovn-northd.c#L2930
On Tue, Jun 19, 2018 at 10:09 PM, Daniel Alvarez Sanchez <
dalva...@redhat.com> wrote:
> Hi folks,
>
> Sorry for not being clear enough. In the tcpdump we can see the SYN
> packets being sent by port1 but retransmitted as it looks like the re
On Tue, Jun 19, 2018 at 10:37 PM, Daniel Alvarez Sanchez <
dalva...@redhat.com> wrote:
> Sorry, the problem seems to be that this ACL is not added in the Port
> Groups case for some reason (I checked wrong lflows log I had):
>
s/ACL/Logical Flow
>
> _uuid : 5a
Hi Han, I'm sending the patch now which fixes it but feel free to modify it.
Thanks!
Daniel
On Wed, Jun 20, 2018 at 12:06 AM, Han Zhou wrote:
>
>
> On Tue, Jun 19, 2018 at 2:53 PM, Daniel Alvarez Sanchez <
> dalva...@redhat.com> wrote:
> >
> >
> >
>
Hi Han, all
While implementing Port Groups in OpenStack I have noticed that we are
duplicating the lflows for the DHCP now with the current code. Seeking for
advice here:
When we create a Neutron subnet, I'm creating a Port Group with the ACL for
the DHCP:
_uuid : 7f2b64eb-090b-4bb
Hi all,
Miguel Angel Ajo and I have been trying to setup Jumbo frames in OpenStack
using OVN as a backend.
The external network has an MTU of 1900 while we have created two tenant
networks (Logical Switches) with an MTU of 8942.
When pinging from one instance in one of the networks to the other
On Wed, Jul 11, 2018 at 12:55 PM Daniel Alvarez Sanchez
wrote:
> Hi all,
>
> Miguel Angel Ajo and I have been trying to setup Jumbo frames in OpenStack
> using OVN as a backend.
>
> The external network has an MTU of 1900 while we have created two tenant
> networks (Logical
Maybe ICMP is not that critical, but it seems that not sending the ICMP
'need to frag' message on UDP communications could break applications that
rely on it to reduce their packet size. I wonder...
Thanks!
Daniel
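As a side note, this kind of MTU mismatch can be probed from inside an
instance with plain ping by setting the DF bit; a sketch with placeholder
addresses (8914 = the 8942-byte tenant MTU minus 28 bytes of IP+ICMP
headers):

```shell
# With DF set, a payload that fits the 8942 MTU should traverse the
# tenant networks; anything larger should fail locally or, ideally,
# trigger an ICMP 'fragmentation needed' from the router.
ping -M do -s 8914 192.168.0.10
```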
On Fri, Aug 3, 2018 at 5:20 PM Miguel Angel Ajo Pelayo
wrote:
>
> We didn’t
Hi all,
I noticed that we're doing a lot of sorting in the JSON code which is not
needed except for testing. Problem is that if I remove the sorting, then
most of the tests break. I spent a fair amount of time trying to fix them
but it's getting harder and harder.
Possibly, the best way to fix it
Resending this email as I can't see it in [0] for some reason.
[0] https://mail.openvswitch.org/pipermail/ovs-dev/2018-September/
On Fri, Sep 21, 2018 at 2:36 PM Daniel Alvarez Sanchez
wrote:
> Hi folks,
>
> After talking to Numan and reading log from IRC meeting yesterday
Hi all,
While analyzing a problem in OpenStack I think I have found out a
severe bug in OVN when it comes to reusing floating IPs (which is a very
common use case in OpenStack and Kubernetes). Let me explain the
scenario, issue and possible solutions:
* Three logical switches (Neutron networks) LS
delete MAC_Binding entries for that IP address upon a FIP creation. I
think that this however should be done from OVN, what do you folks
think?
Thanks,
Daniel
On Fri, Oct 26, 2018 at 11:39 AM Daniel Alvarez Sanchez
wrote:
>
> Hi all,
>
> While analyzing a problem in OpenStack I think I have
On Sat, Nov 10, 2018 at 12:21 AM Ben Pfaff wrote:
>
> On Mon, Oct 29, 2018 at 05:21:13PM +0530, Numan Siddique wrote:
> > On Mon, Oct 29, 2018 at 5:00 PM Daniel Alvarez Sanchez <
dalva...@redhat.com>
> > wrote:
> >
> > > Hi,
> > >
> > >
entually
expire, especially for entries that come from external networks.
On Fri, Nov 16, 2018 at 6:41 PM Daniel Alvarez Sanchez
wrote:
>
> On Sat, Nov 10, 2018 at 12:21 AM Ben Pfaff wrote:
> >
> > On Mon, Oct 29, 2018 at 05:21:13PM +0530, Numan Siddique wrote:
> > > On M
iel,
> >
> > I agree with Numan that this seems like a good approach to take.
> >
> > On 11/16/2018 12:41 PM, Daniel Alvarez Sanchez wrote:
> > >
> > > On Sat, Nov 10, 2018 at 12:21 AM Ben Pfaff <b...@ovn.org> wrote:
On Wed, Nov 21, 2018 at 9:04 PM Han Zhou wrote:
>
>
>
> On Tue, Nov 20, 2018 at 5:21 AM Mark Michelson wrote:
> >
> > Hi Daniel,
> >
> > I agree with Numan that this seems like a good approach to take.
> >
> > On 11/16/2018 12:41 PM, Daniel Alvarez
On Mon, Nov 26, 2018 at 9:30 PM Ben Pfaff wrote:
>
> On Fri, Nov 16, 2018 at 06:41:33PM +0100, Daniel Alvarez Sanchez wrote:
> > +static void
> > +delete_mac_binding_by_ip(struct northd_context *ctx, const char *ip)
> > +{
> > +const s
On Wed, Nov 28, 2018 at 3:10 PM Ben Pfaff wrote:
>
> On Wed, Nov 28, 2018 at 12:07:55PM +0100, Daniel Alvarez Sanchez wrote:
> > On Mon, Nov 26, 2018 at 9:30 PM Ben Pfaff wrote:
> > >
> > > On Fri, Nov 16, 2018 at 06:41:33PM +0100, Daniel Alvarez Sanch
28 PM Daniel Alvarez Sanchez
wrote:
>
> On Wed, Nov 21, 2018 at 9:04 PM Han Zhou wrote:
> >
> >
> >
> > On Tue, Nov 20, 2018 at 5:21 AM Mark Michelson wrote:
> > >
> > > Hi Daniel,
> > >
> > > I agree with Numan that this seems like a g
[0] https://mail.openvswitch.org/pipermail/ovs-dev/2018-November/354220.html
On Wed, Nov 28, 2018 at 3:32 PM Daniel Alvarez Sanchez
wrote:
>
> Hi all,
>
> As this thread is getting big I'm summarizing the issue I see so far:
>
> * When a dnat_and_snat entry is added to a logical router (or por
On Mon, Dec 3, 2018 at 3:48 PM Mark Michelson wrote:
>
> On 12/01/2018 03:44 PM, Han Zhou wrote:
> >
> >
> > On Fri, Nov 30, 2018 at 7:29 AM Daniel Alvarez Sanchez
> > <dalva...@redhat.com> wrote:
> > >
> > > Thanks folks again for
Hi folks,
Just wanted to throw an idea here about introducing availability zones
(AZ) concept in OVN and get implementation ideas. From a CMS
perspective, it makes sense to be able to implement some sort of
logical division of resources into failure domains to maximize their
availability.
In this
Thanks Dan for chiming in and others as well for your feedback!
I also thought of having separate OVN deployments but that introduces
the drawbacks that Han pointed out adding - maybe a lot of - burden to
the CMS. Separate zones in the same OVN deployment will add minimal
changes (at deployment si
Hi folks,
Lately I'm getting the question in the subject line more and more
frequently and facing it myself, especially in the context of
OpenStack.
The shift to OVN in OpenStack involves a totally different approach
when it comes to tracing packet drops. Before OVN, there were a bunch
of network
nitial assessment doesn't have to go always all the way down.
I'm curious about other folks' experiences here as well with more pure
OVS experience.
Thanks a lot!
Daniel
On Thu, Mar 14, 2019 at 5:55 PM Ben Pfaff wrote:
>
> On Thu, Mar 14, 2019 at 04:55:56PM +0100, Daniel A
Hi folks,
While working on a multinode setup and created this logical topology
[0] where I scheduled a router on two gateway chassis, I found out
that after bringing down ovn-controller on the chassis where the gw
port is master, then the second chassis observes 100% CPU load on
ovn-controller and
Hi folks,
After some conversations with Han (thanks for your time and great
talk!) at the Open Infrastructure Summit in Denver last week, here I
go with this - somehow crazy - idea.
Since DDlog approach for incremental processing is not going to happen
soon and Han's reported his patches to be wo
In OpenStack we do this via a DHCP static route [0]. Then we use an
OVN 'localport' in the hypervisor inside a namespace to handle the
requests.
[0]
https://opendev.org/openstack/networking-ovn/src/branch/stable/stein/networking_ovn/common/ovn_client.py#L1524
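For reference, outside of what networking-ovn automates in [0], the native
OVN knob it relies on is the classless_static_route DHCP option; a sketch
with placeholder addresses ($DHCP_OPTS is assumed to hold the UUID of the
network's DHCP_Options row):

```shell
# The first route steers metadata traffic (169.254.169.254) to the
# 'localport' address on the hypervisor; the second is the default route.
ovn-nbctl set DHCP_Options $DHCP_OPTS \
  'options:classless_static_route={169.254.169.254/32,192.168.0.2, 0.0.0.0/0,192.168.0.1}'
```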
On Thu, May 16, 2019 at 1:13 PM Vasi
On Thu, May 16, 2019 at 2:09 PM Vasiliy Tolstov wrote:
>
> чт, 16 мая 2019 г. в 14:57, Daniel Alvarez Sanchez :
> >
> > In OpenStack we do this via a DHCP static route [0]. Then we use an
> > OVN 'localport' in the hypervisor inside a namespace to handle the
Hi Han, all,
Lucas, Numan and I have been doing some 'scale' testing of OpenStack
using OVN and wanted to present some results and issues that we've
found with the Incremental Processing feature in ovn-controller. Below
is the scenario that we executed:
* 7 baremetal nodes setup: 3 controllers (r
Hi Han, all,
Lucas, Numan and I have been doing some 'scale' testing of OpenStack
using OVN and wanted to present some results and issues that we've
found with the Incremental Processing feature in ovn-controller. Below
is the scenario that we executed:
* 7 baremetal nodes setup: 3 controllers (r
Thanks a lot Han for the answer!
On Tue, Jun 11, 2019 at 5:57 PM Han Zhou wrote:
>
>
>
>
> On Tue, Jun 11, 2019 at 5:12 AM Dumitru Ceara wrote:
> >
> > On Tue, Jun 11, 2019 at 10:40 AM Daniel Alvarez Sanchez
> > wrote:
> > >
> > > Hi Han,
Hi folks,
Lately we've been trying to solve certain issues related to stale
entries in the MAC_Binding table (e.g. [0]). On the other hand, for
the OpenStack + Octavia (Load Balancing service) use case, we see that
a reused VIP can be as well affected by stale entries in this table
due to the fact
Hi folks,
While working with an OpenStack environment running OVN and
ovsdb-server in A/P configuration with Pacemaker we hit an issue that
has been probably around for a long time. The bug itself seems to be
related with ovsdb-server not updating the read-only flag properly.
With a 3 nodes clust
> On Mon, Jul 8, 2019 at 3:52 PM Daniel Alvarez Sanchez
> wrote:
>>
>> Hi folks,
>>
>> While working with an OpenStack environment running OVN and
>> ovsdb-server in A/P configuration with Pacemaker we hit an issue that
>> has been probably around for a l
vsdb-server
$ovs-appctl -t $PWD/sandbox/nb1 ovsdb-server/sync-status
state: backup
connecting: tcp:192.0.2.2:6641
$ ovn-nbctl ls-add sw1
ovn-nbctl: transaction error: {"details":"insert operation not allowed
when database server is in read only mode","error":"not allow
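In case it helps anyone reproducing this in the sandbox: the backup can be
promoted by hand so it starts accepting writes again (same sandbox socket
path as above):

```shell
# Verify the server is still in backup mode...
ovs-appctl -t $PWD/sandbox/nb1 ovsdb-server/sync-status
# ...then break the replication connection to make it active/read-write.
ovs-appctl -t $PWD/sandbox/nb1 ovsdb-server/disconnect-active-ovsdb-server
```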
On Mon, Jul 8, 2019 at 5:43 PM Ben Pfaff wrote:
>
> Would you mind formally submitting this? It seems like the best
> immediate solution.
Will do, thanks a lot Ben!
>
> On Mon, Jul 08, 2019 at 02:27:31PM +0200, Daniel Alvarez Sanchez wrote:
> > I tried a simple patch and it
>> >
>> > On Thu, Jun 20, 2019 at 11:42 PM Numan Siddique
>> > wrote:
>> > >
>> > >
>> > >
>> > > On Fri, Jun 21, 2019, 11:47 AM Han Zhou wrote:
>> > >>
>> > >>
>> > >>
>>
>>
>>
>> On Fri, Jul 19, 2019 at 6:19 PM Numan Siddique wrote:
>>>
>>>
>>>
>>> On Fri, Jul 19, 2019 at 6:28 AM Han Zhou wrote:
>>>>
>>>>
>>>>
>>>> On Tue, Jul 9, 2019 at 12:13 AM Numan Siddique wro
ul 08, 2019 at 06:19:23PM -0700, Han Zhou wrote:
> > On Thu, Jun 27, 2019 at 6:44 AM Ben Pfaff wrote:
> > >
> > > On Tue, Jun 25, 2019 at 01:05:21PM +0200, Daniel Alvarez Sanchez wrote:
> > > > Lately we've been trying to solve certain issues related to stal
er
mechanism).
> >
> > Please do update the lifetime description in ovn-sb(5) under the
> > MAC_Binding table regardless of what you implement.
> >
> > Thanks,
> >
> > Ben.
> >
> > On Tue, Aug 20, 2019 at 09:03:57AM +0200, Daniel Alvarez San
On Wed, Aug 28, 2019 at 4:49 PM Zufar Dhiyaulhaq
wrote:
>
> Hi Numan,
>
> Yes, it's working. I think the networking-ovn plugin in OpenStack has some
> bugs. Let me use a single IP first, or maybe I can use Pacemaker to create
> the VIP.
Thanks Zufar, mind patching networking-ovn / reporting the
On Thu, Aug 29, 2019 at 10:01 PM Mark Michelson wrote:
>
> On 8/29/19 2:39 PM, Numan Siddique wrote:
> > Hello Everyone,
> >
> > In one of the OVN deployments, we are seeing 100% CPU usage by
> > ovn-controllers all the time.
> >
> > After investigations we found the below
> >
> > - ovn-controll
On Fri, Aug 30, 2019 at 8:18 PM Han Zhou wrote:
>
>
>
> On Fri, Aug 30, 2019 at 6:46 AM Mark Michelson wrote:
> >
> > On 8/30/19 5:39 AM, Daniel Alvarez Sanchez wrote:
> > > On Thu, Aug 29, 2019 at 10:01 PM Mark Michelson
> > > wrote:
> >
Hi Han,
On Fri, Aug 30, 2019 at 10:37 PM Han Zhou wrote:
>
> On Fri, Aug 30, 2019 at 1:25 PM Numan Siddique wrote:
> >
> > Hi Han,
> >
> > I am thinking of this approach to solve this problem. I still need to
> test it.
> > If you have any comments or concerns do let me know.
> >
> >
> > ***
Hi folks,
We detected that when ovn-controller doesn't die gracefully leaving a
stale Chassis entry in the SB DB, the ports that were bound to that
chassis and belong to an HA Chassis group will not be failed over to
the next high prio chassis in the group.
Right now in OpenStack we're still usin
Hi folks,
We found a problem related to the packet buffering feature introduced by
[0] when the destination address is unknown. In such a case, ovn-controller
sends an ARP request and, upon resolving the MAC address, the packet will
be resumed.
If the packet is coming from a Floating IP (dnat_and
Hi all,
Based on some problems that we've detected at scale, I've been doing an
analysis of how logical flows are distributed on a system which makes heavy
use of Floating IPs (dnat_and_snat NAT entries) and DVR.
[root@central ~]# ovn-nbctl list NAT|grep dnat_and_snat -c
985
With 985 Floating IP
patches were written to address an issue where FIP to FIP traffic was
not distributed and it was sent via the tunnel to the gateway instead.
[0] https://imgur.com/KgRSPpz
On Tue, Jan 28, 2020 at 4:55 PM Daniel Alvarez Sanchez
wrote:
> Hi all,
>
> Based on some problems that we'