Thanks Han and Numan for the clarification that helped sort this out.
To make the VIP work using an LB in my two-node setup, I had changed the below
code to skip setting the master IP when creating the pcs resource for ovndbs and
to listen on 0.0.0.0 instead. Hence, the discussion seems in line with the code
change which
On Thu, May 10, 2018 at 12:44 AM, Han Zhou <zhou...@gmail.com> wrote:
>
>>
>>
>> On Wed, May 9, 2018 at 11:51 AM, Numan Siddique <nusid...@redhat.com>
>> wrote:
>>
>>>
>>>
>>> On Thu, May 10, 2018 at 12:15 AM, Han Zhou <zhou...@gmail.c
N_ON_MASTER_IP_ONLY for using LB
with tcp to avoid breaking existing functionality accordingly.
Regards,
Aliasgar
On Thu, May 10, 2018 at 9:55 AM, aginwala <aginw...@asu.edu> wrote:
> Thanks folks for suggestions:
>
> For LB vip configurations, I did the testing further and yes i
ker state machine as you pointed out, wherein the crm resource move use
case needs to be handled explicitly? However, node reboot, service
pacemaker/corosync restart, etc. do not result in self-replicating issues
while promoting the new node. Will also try to see if I can find something
more.
>
after you do "crm resource move"?
>
> On Fri, May 11, 2018 at 2:25 PM, aginwala <aginw...@asu.edu> wrote:
>
>> Thanks Han for more suggestions:
>>
>>
>> I did test failover by gracefully stopping pacemaker+corosync on master
>> node along with c
am not
> sure if this is valuable. It depends on whether restarting ovsdb-server
> during failover is sufficient. Could you add the restart logic for
> demote and try more? Thanks!
>
> Thanks,
> Han
>
> On Thu, May 10, 2018 at 1:54 PM, aginwala <aginw...@asu.edu> w
Hi All:
As per the discussions/requests by Mark and Numan, I finally reverted the
mtu patch (commit-id 8c319e8b73032e06c7dd1832b3b31f8a1189dcd1) on
branch-2.9 and re-ran the test with 10k lports to bind on farms, with 8 LRs
and 40 LSs, and the results improved. Since ovs did not go super hot, it
18 at 12:37:04AM -0700, Han Zhou wrote:
>> > > > Hi,
>> > > >
>> > > > We found an issue in our testing (thanks aginwala) with
>> active-backup
>> > mode
>> > > > in OVN setup.
>> > > > In the 3 node setup
>> > >
>> >
>> > On Fri, Aug 10, 2018 at 3:59 AM Ben Pfaff wrote:
>> >>
>> >> On Thu, Aug 09, 2018 at 09:32:21AM -0700, Han Zhou wrote:
>> >> > On Thu, Aug 9, 2018 at 1:57 AM, aginwala wrote:
>> >> > >
>> >>
5, 2018 at 1:24 PM Han Zhou wrote:
>
>
> On Wed, Sep 5, 2018 at 10:44 AM aginwala wrote:
> >
> > Thanks Numan:
> >
> > I will give it shot and update the findings.
> >
> >
> > On Wed, Sep 5, 2018 at 5:35 AM Numan Siddique
> wrote:
> >
Cool! Thanks a lot.
On Mon, Sep 10, 2018 at 12:57 AM Numan Siddique wrote:
>
>
> On Sun, Sep 9, 2018 at 8:38 AM aginwala wrote:
>
>> Hi:
>>
>> As agreed on approach 1, I tested it. DB data is retained even for
>> the continuous fail-over scenario w
Hi:
As per discussions in past OVN meetings regarding the ovn monitoring
standpoint, I need some clarity from a design perspective. I am thinking of
the below approaches:
1. Can we implement something like ovs-appctl -t chassis-conn/list
that will show all HVs stats (connected/non-connected)?
2. or on
, 2018 at 8:30 PM Ben Pfaff wrote:
> On Mon, Jul 09, 2018 at 06:12:11PM -0700, Han Zhou wrote:
> > On Mon, Jul 9, 2018 at 3:37 PM, Ben Pfaff wrote:
> > >
> > > On Sun, Jul 08, 2018 at 01:09:12PM -0700, aginwala wrote:
> > > > As per discussions in past OVN me
.db: create failed (File
exists)
# Node 3: I did not try as I am assuming the same failure as node 2
Let me know how to proceed further.
On Tue, Mar 13, 2018 at 3:08 AM, Numan Siddique <nusid...@redhat.com> wrote:
> Hi Aliasgar,
>
> On Tue, Mar 13, 2018 at 7:11 AM, aginwala <aginw...@
99.152.138:6645" start_nb_ovsdb
ovsdb-server: ovsdb error: /etc/openvswitch/ovnnb_db.db: cannot identify
file type
On Tue, Mar 13, 2018 at 9:40 AM, Numan Siddique <nusid...@redhat.com> wrote:
>
>
> On Tue, Mar 13, 2018 at 9:46 PM, aginwala <aginw...@asu.edu>
> On Wed, Mar 14, 2018 at 7:51 AM, aginwala <aginw...@asu.edu> wrote:
>
>> Sure.
>>
>> To add on, I also ran the nb db using a different port, and Node2
>> crashes with the same error:
>> # Node 2
>> /usr/share/openvswitch/scripts/ovn-ctl --db-nb-ad
https://patchwork.ozlabs.org/patch/895185/
> > then you can try it out with:
> > make sandbox SANDBOXFLAGS='--ovn --sbdb-model=clustered
> --n-northds=3'
> >
> > On Wed, Mar 21, 2018 at 01:12:48PM -0700, aginwala wrote:
> > > :) The only thing
endpoints of a cluster are specified to northd, since all northds are
>> connecting to the same DB, the leader.
>>
>> For neutron networking-ovn, this may not work yet, since I didn't see
>> such logic in the python IDL in the current patch series. It would be good if
&
>
> Can you also share the (ovn-ctl) commands you used to start/join the
> ovsdb-server clusters in your nodes ?
>
> Thanks
> Numan
>
>
> On Tue, Mar 27, 2018 at 11:04 PM, aginwala <aginw...@asu.edu> wrote:
>
>> Hi Numan:
>>
>> You
at 10:22 AM, Han Zhou <zhou...@gmail.com> wrote:
>
>
> On Wed, Mar 21, 2018 at 9:49 AM, aginwala <aginw...@asu.edu> wrote:
>
>> Thanks Numan:
>>
>> Yup, agree with the locking part. For now, yes, I am running northd on one
>> node. I might
e as active and rest as standby?
Regards,
On Thu, Mar 15, 2018 at 3:09 AM, Numan Siddique <nusid...@redhat.com> wrote:
> That's great
>
> Numan
>
>
> On Thu, Mar 15, 2018 at 2:57 AM, aginwala <aginw...@asu.edu> wrote:
>
>> Hi Numan:
>>
>> I tried
a member of
> the ovsdb cluster is down, it won't have any impact on northd.
>
> Without clustering support of the ovsdb lock, I think this is what we have
> now for northd HA. Please suggest if anyone has any other idea. Thanks :)
>
> On Wed, Mar 21, 2018 at 1:12 PM, aginwala <aginw.
db
> locks.
>
> Thanks
> Numan
>
>
> On Wed, Mar 21, 2018 at 1:40 PM, aginwala <aginw...@asu.edu> wrote:
>
>> Hi Numan:
>>
>> Just figured out that ovn-northd is running as active on all 3 nodes
>> instead of one active instance as I continued to test furt
Hi:
IRL, we always use different subnets for VIPs for OpenStack workloads in
production, for a couple of reasons:
1. It's easy to fail over in case of outages if VIP and pool members are
in different subnets.
2. It is also easy for neutron's IPAM to manage 2 different subnets; one
for VIP and
Hi Ben:
I cannot see the patch series on the patchwork. Is it due to a mail server
sync issue or something else? Not sure if it's appropriate to try out
https://github.com/blp/ovs-reviews/commits/raft-fixes since it has the
patches in review in addition to some other patches?
Regards,
On Thu, Nov
Hi:
As per the irc meeting discussion, some nice findings were already discussed by
Numan (thanks for sharing the details). Changing external_ids for a
claimed port, e.g. ovn-nbctl set logical_switch_port sw0-port1
external_ids:foo=bar, triggers re-computation on the local compute. I do see the
same
Not sure what steps you used to compile and install 2.11. Use `export
OVS_RUNDIR="/var/run/openvswitch"` and then try the vsctl commands.
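For reference, a minimal sequence would look like the below (a sketch assuming the Debian/Ubuntu default runtime directory and that the ovs tools are on PATH):

```shell
# Point the ovs CLI tools at the directory holding the daemons' control
# sockets; this must match the rundir the daemons were started with.
export OVS_RUNDIR="/var/run/openvswitch"

# Any ovs-vsctl command should now be able to reach ovsdb-server, e.g.:
ovs-vsctl show
```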
On Wed, Aug 21, 2019 at 2:43 AM V Sai Surya Laxman Rao Bellala <
laxmanraobell...@gmail.com> wrote:
> Hello all,
>
> Can anyone help me in solving this Bug?
> I
Hi:
Adding the correct ovs-discuss ML. I did get a chance to take a look at it a
bit. I think this is a bug in the 4.4.0-104-generic kernel on ubuntu 16.04, as
it is being discussed on the ubuntu forum
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1711407 where it can
be hit all of a sudden as per the
Thanks Guru:
Sounds good. Can you please grant the user aginwala admin rights? I can create
two repos, ovs and ovn, under the openvswitch org and can push new stable
release versions there.
On Fri, Nov 8, 2019 at 10:04 AM Guru Shetty wrote:
> On Fri, 8 Nov 2019 at 09:53, Guru Shetty wrote:
>
>
:
> Hi
> If it is useful, we can start this with:
> GitHub.com/ServiceFractal/ovs
>
> /Shivaram
> ::Sent from my mobile device::
>
> On Nov 7, 2019, at 4:15 PM, aginwala wrote:
>
>
> Hi All:
>
> As discussed in the meeting today, we all agreed that it will
eed to include it in docker
> image. It will not work.
>
> /Shivaram
> ::Sent from my mobile device::
>
> On Nov 8, 2019, at 5:42 PM, aginwala wrote:
>
>
> openvswitch.ko ships by default with newer kernels, but if we want to use, say,
> stt, we need to build it with respe
On Fri, 8 Nov 2019 at 14:18, aginwala wrote:
>
>> Hi all:
>>
>>
>> I have pushed two images to public openvswitch org on docker.io for ovs
>> and ovn;
>> OVS for ubuntu with 4.15 kernel:
>> *openvswitch/ovs:2.12.0_debian_4.15.0-66-generic*
>>
>
>
hivaram
> ::Sent from my mobile device::
>
> On Nov 8, 2019, at 6:49 PM, aginwala wrote:
>
>
> Hi Shivaram:
>
> Thanks for the comments. Can you explain what the bottleneck is? Also, for
> addressing the performance-related issues that you suggested, I would say if
> you can
ahead.
On Fri, Nov 8, 2019 at 10:17 AM aginwala wrote:
> Thanks Guru:
>
> Sounds good. Can you please grant user aginwala as admin? I can create two
> repos ovs and ovn under openvswitch org and can push new stable release
> versions there.
>
> On Fri, Nov 8, 2019 at 10:04
On Mon, Nov 11, 2019 at 9:00 AM Guru Shetty wrote:
>
>
> On Fri, 8 Nov 2019 at 14:41, aginwala wrote:
>
>> openvswitch.ko ships by default with newer kernels, but if we want to use, say,
>> stt, we need to build it against the respective kernel for the host on which we will
>>
Hi:
It is a known fact and has been discussed before. We use the same
workaround as you mentioned. Alternatively, you can also set role="" and it
will work for both northd and ovn-controller instead of separate listeners,
which is also a security loophole. In short, some work is needed here
to
, Nov 7, 2019 at 12:14 PM Ben Pfaff wrote:
> Have we documented this? Should we?
>
> On Thu, Nov 07, 2019 at 10:20:22AM -0800, aginwala wrote:
> > Hi:
> >
> > It is a known fact and have-been discussed before. We use the same
> > workaround as you mentioned. Alt
Hi All:
As discussed in the meeting today, we all agreed that it would be a good
idea to push docker images for each new ovs/ovn stable release. Hence, we need
help from the maintainers Ben/Mark/Justin/Han to address some open action items,
as they are more org/ownership/rights related:
1. Get new
> > ::Sent from my mobile device::
> >
> > On Nov 12, 2019, at 9:49 AM, aginwala wrote:
> >
> >
> > Thanks Shivaram:
> >
> > On Tue, Nov 12, 2019 at 9:28 AM Shivaram Mysore <
> shivaram.mys...@gmail.com> wrote:
> >>
> >> I a
objective is to run OVS in a
> container image. I would keep it simple.
>
> I think the objective is to have an image per upstream stable ovs release,
and hence we are building it in a container. Hope everyone is OK here.
> On Tue, Nov 12, 2019 at 12:51 AM aginwala wrote:
>
>>
Thanks Guru.
On Mon, Nov 11, 2019 at 1:03 PM Guru Shetty wrote:
>
>
> On Mon, 11 Nov 2019 at 10:08, aginwala wrote:
>
>>
>>
>> On Mon, Nov 11, 2019 at 9:00 AM Guru Shetty wrote:
>>
>>>
>>>
>>> On Fri, 8 Nov 2019 at 14:41, ag
47, Ben Pfaff wrote:
>
>> Sure, anything helps.
>>
>> On Thu, Nov 07, 2019 at 12:27:44PM -0800, aginwala wrote:
>> > Hi Ben:
>> >
>> > It seems RBAC doc
>> >
>> http://docs.openvswitch.org/en/stable/tutorials/ovn-rbac/#configuring-rbac
>
Hi:
Also wanted to point out that the steps for building/running ovs as a container
are also mentioned in the ovs installation doc
https://raw.githubusercontent.com/openvswitch/ovs/1ca0323e7c29dc7ef5a615c265df0460208f92de/Documentation/intro/install/general.rst.
OVS docker
scripts are in
Hi Yun:
For changing the inactivity probe, which defaults to 5 seconds, you need to
create a connection entry for both the sb and nb db.
ovn-nbctl -- --id=@conn_uuid create Connection \
target="\:\:" \
inactivity_probe= -- set NB_Global . connections=@conn_uuid
ovn-nbctl set connection . inactivity_probe= will
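A concrete sketch of those two commands, assuming a plain-TCP listener on port 6641 and a 60-second probe (the target, port, and probe value here are illustrative assumptions, not values from the original mail):

```shell
# Create a Connection row listening on ptcp:6641 (colons escaped for the
# ovsdb CLI) with a 60000 ms inactivity probe, and attach it to NB_Global.
# Values below are assumptions for illustration.
ovn-nbctl -- --id=@conn_uuid create Connection \
    target="ptcp\:6641\:0.0.0.0" \
    inactivity_probe=60000 -- set NB_Global . connections=@conn_uuid

# Later, the probe on the existing connection can be updated in place:
ovn-nbctl set connection . inactivity_probe=60000
```

The same pattern applies to the SB DB via ovn-sbctl, attaching the row to SB_Global instead.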
Yun
>
>
>
>
> On 2020-02-07 22:45:36, "taoyunupt" wrote:
>
> Hi, Aliasgar,
>
> Thanks for your reply. I have tried your suggestion. But I
> found that it could create just one NB connection or one SB connection.
> In RAFT, we need at least two.
>
for HA/clustered OVN Central?
>
> The doc says to start OVN containers in cluster mode using the below command on
node2 and node3 to make them join the peer. Hence, you
can even play with just docker on 3 nodes, where you run step 1 on node1, which
creates the cluster, and do the jo
Hi:
There are a couple of options, as I have been exploring this too:
1. Upstream ovn-k8s patches (
https://github.com/ovn-org/ovn-kubernetes/commit/a07b1a01af7e37b15c2e5f179ffad2b9f25a083d)
use a statefulset and headless service for starting the ovn central raft cluster
with 3 replicas. Cluster
Hi:
Adding the ML too. Folks from k8s can comment on the same to see if ovn-k8s
repo needs an update in the documentation for you to get the setup working
when using their specs as is without any code changes in addition to using
your own custom ovn images, etc. I am getting mail failure when
Hi All:
We ran into a debian build issue for the latest ovs v2.16.0 against 5.4.0-80-generic
ubuntu 20
dh binary --with autoreconf,python3 --parallel
dh: error: unable to load addon python3: Can't locate
Debian/Debhelper/Sequence/python3.pm in @INC (you may need to install the
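The missing addon file normally ships in the dh-python package, which newer debhelper does not always pull in automatically. A likely fix, sketched below, is an assumption based on the error text rather than something confirmed in this thread:

```shell
# debhelper's python3 addon (Debian/Debhelper/Sequence/python3.pm) is
# provided by dh-python; install it, then re-run the package build.
sudo apt-get update
sudo apt-get install -y dh-python
```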
Hi All:
As part of upgrading the OVN north-south gateway to the new 5.4 kernel, VM
connectivity is lost when setting the chassis for the provider network lrp to
this new gateway. For interconnection gateways and hypervisors it's not an issue/
lrp
_uuid : 387a735d-fc11-4e90-8655-07785aa024af
PM aginwala wrote:
> Hi All:
>
> Part of upgrading OVN north south gateway to the new 5.4 kernel , VMs
> connectivity is lost when setting chassis for provider network lrp to this
> new gateway. For interconnection gateways and hypervisors its not an issue/
> lrp
> _uuid
nce, let us know of any further amendments to the above changes, if
any, as the issue is mitigated with this patch and the workaround is no longer
needed. We will do some more tests and call out any other failures.
Regards,
Aliasgar
On Tue, Apr 23, 2024 at 10:35 AM aginwala wrote:
> Hi:
>
> Dat