[ovs-discuss] EMC lookup disabled but still there some processing going on with emc lookup

2018-03-21 Thread Krish
Hello everyone

I am measuring the time spent in the ovs-vswitchd caches using Intel VTune.

I disabled the EMC using "emc-insert-inv-prob=0", but I can still see that the
EMC lookup is not disabled, and an insertion into the EMC also takes place
after the packet completes fast-path processing.
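
For reference, this is the kind of setting I mean (a sketch of the usual
other_config knob; as far as I understand, emc-insert-inv-prob only controls
the EMC insertion probability, not the lookup path itself):

# disable EMC insertion (inverse probability 0)
ovs-vsctl set Open_vSwitch . other_config:emc-insert-inv-prob=0

# per-PMD cache counters, to cross-check the VTune numbers
ovs-appctl dpif-netdev/pmd-stats-show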

I am attaching screenshots from Intel VTune along with this mail.

Can anyone please explain why this happened?

Thank you

Regards


Re: [ovs-discuss] raft ovsdb clustering

2018-03-21 Thread aginwala
Hi :

I just sorted out the correct settings, and northd also works in HA with raft.

There were 2 issues in the setup:
1. I had started the nb db without --db-nb-create-insecure-remote.
2. I had also started northd locally on all 3 nodes without a remote, which
meant all three northd instances were trying to lock their local ovsdb.

Hence, duplicate entries were populated in the southbound datapath table
because multiple northd instances were writing to their local copies.

So, I now start the nb db with --db-nb-create-insecure-remote and northd on
all 3 nodes using the command below:

ovn-northd -vconsole:emer -vsyslog:err -vfile:info --ovnnb-db="tcp:
10.169.125.152:6641,tcp:10.169.125.131:6641,tcp:10.148.181.162:6641"
--ovnsb-db="tcp:10.169.125.152:6642,tcp:10.169.125.131:6642,tcp:
10.148.181.162:6642" --no-chdir --log-file=/var/log/openvswitch/ovn-northd.log
--pidfile=/var/run/openvswitch/ovn-northd.pid --detach --monitor
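
For completeness, the nb db itself is started with the insecure remote
enabled, roughly like this (a sketch based on the ovn-ctl options used
earlier in this thread; shown for the first node, and the other nodes would
additionally pass --db-nb-cluster-remote-addr as in the earlier sb examples):

/usr/share/openvswitch/scripts/ovn-ctl --db-nb-addr=10.169.125.152 \
    --db-nb-port=6641 --db-nb-create-insecure-remote=yes \
    --db-nb-cluster-local-addr=tcp:10.169.125.152:6645 start_nb_ovsdb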


# At start, northd went active on the leader node and standby on the other two
nodes.

# After the old leader crashed and a new leader got elected, northd went active
on one of the remaining 2 nodes, as per the sample logs below from a non-leader
node:
2018-03-22T00:20:30.732Z|00023|ovn_northd|INFO|ovn-northd lock lost. This
ovn-northd instance is now on standby.
2018-03-22T00:20:30.743Z|00024|ovn_northd|INFO|ovn-northd lock acquired.
This ovn-northd instance is now active.

# Also, ovn-controller works in a similar way: if the leader goes down, it
connects to one of the remaining 2 nodes:
2018-03-22T01:21:56.250Z|00029|ovsdb_idl|INFO|tcp:10.148.181.162:6642:
clustered database server is disconnected from cluster; trying another
server
2018-03-22T01:21:56.250Z|00030|reconnect|INFO|tcp:10.148.181.162:6642:
connection attempt timed out
2018-03-22T01:21:56.250Z|00031|reconnect|INFO|tcp:10.148.181.162:6642:
waiting 4 seconds before reconnect
2018-03-22T01:23:52.417Z|00043|reconnect|INFO|tcp:10.148.181.162:6642:
connected
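
For reference, ovn-controller on the HVs is pointed at all three servers the
same way as earlier in this thread (a sketch, assuming the usual
external_ids:ovn-remote key):

ovs-vsctl set Open_vSwitch . external_ids:ovn-remote="tcp:10.169.125.152:6642,tcp:10.169.125.131:6642,tcp:10.148.181.162:6642"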



The above settings will also work if we put all the nodes behind a VIP and
update the OVN configs to use the VIP. So we don't need pacemaker explicitly
for northd HA :).
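
In that case northd would simply point at the VIP instead of listing the
members, e.g. (a sketch; the VIP address below is a placeholder, the other
options are as in the command above):

# 10.169.125.200 is a hypothetical VIP address
ovn-northd --ovnnb-db=tcp:10.169.125.200:6641 --ovnsb-db=tcp:10.169.125.200:6642 \
    --no-chdir --log-file=/var/log/openvswitch/ovn-northd.log \
    --pidfile=/var/run/openvswitch/ovn-northd.pid --detach --monitor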

Since the setup is complete now, I will replicate the same in the scale test
env and see how it behaves.

@Numan: We can try the same with the networking-ovn integration and see if we
find anything weird there too. Not sure if you have any particular findings
for this case.

Let me know if something else is missed here.




Regards,

On Wed, Mar 21, 2018 at 2:50 PM, Han Zhou  wrote:

> Ali, sorry if I misunderstand what you are saying, but pacemaker here is
> for northd HA. pacemaker itself won't point to any ovsdb cluster node. All
> northds can point to a LB VIP for the ovsdb cluster, so if a member of
> ovsdb cluster is down it won't have impact to northd.
>
> Without clustering support of the ovsdb lock, I think this is what we have
> now for northd HA. Please suggest if anyone has any other idea. Thanks :)
>
> On Wed, Mar 21, 2018 at 1:12 PM, aginwala  wrote:
>
>> :) The only thing is while using pacemaker, if the node that pacemaker if
>> pointing to is down, all the active/standby northd nodes have to be updated
>> to new node from the cluster. But will dig in more to see what else I can
>> find.
>>
>> @Ben: Any suggestions further?
>>
>>
>> Regards,
>>
>> On Wed, Mar 21, 2018 at 10:22 AM, Han Zhou  wrote:
>>
>>>
>>>
>>> On Wed, Mar 21, 2018 at 9:49 AM, aginwala  wrote:
>>>
 Thanks Numan:

 Yup agree with the locking part. For now; yes I am running northd on
 one node. I might right a script to monitor northd  in cluster so that if
 the node where it's running goes down, script can spin up northd on one
 other active nodes as a dirty hack.

 The "dirty hack" is pacemaker :)
>>>
>>>
 Sure, will await for the inputs from Ben too on this and see how
 complex would it be to roll out this feature.


 Regards,


 On Wed, Mar 21, 2018 at 5:43 AM, Numan Siddique 
 wrote:

> Hi Aliasgar,
>
> ovsdb-server maintains locks per each connection and not across the
> db. A workaround for you now would be to configure all the ovn-northd
> instances to connect to one ovsdb-server if you want to have 
> active/standy.
>
> Probably Ben can answer if there is a plan to support ovsdb locks
> across the db. We also need this support in networking-ovn as it also uses
> ovsdb locks.
>
> Thanks
> Numan
>
>
> On Wed, Mar 21, 2018 at 1:40 PM, aginwala  wrote:
>
>> Hi Numan:
>>
>> Just figured out that ovn-northd is running as active on all 3 nodes
>> instead of one active instance as I continued to test further which 
>> results
>> in db errors as per logs.
>>
>>
>> # on node 3, I run ovn-nbctl ls-add ls2 ; it populates below logs in
>> ovn-north
>> 2018-03-21T06:01:59.442Z|7|ovsdb_idl|WARN|transaction error:
>> 

Re: [ovs-discuss] raft ovsdb clustering

2018-03-21 Thread Han Zhou
Ali, sorry if I misunderstood what you are saying, but pacemaker here is
for northd HA. Pacemaker itself won't point to any ovsdb cluster node. All
northds can point to an LB VIP for the ovsdb cluster, so if a member of the
ovsdb cluster is down it won't have an impact on northd.

Without clustering support of the ovsdb lock, I think this is what we have
now for northd HA. Please suggest if anyone has any other idea. Thanks :)

On Wed, Mar 21, 2018 at 1:12 PM, aginwala  wrote:

> :) The only thing is while using pacemaker, if the node that pacemaker if
> pointing to is down, all the active/standby northd nodes have to be updated
> to new node from the cluster. But will dig in more to see what else I can
> find.
>
> @Ben: Any suggestions further?
>
>
> Regards,
>
> On Wed, Mar 21, 2018 at 10:22 AM, Han Zhou  wrote:
>
>>
>>
>> On Wed, Mar 21, 2018 at 9:49 AM, aginwala  wrote:
>>
>>> Thanks Numan:
>>>
>>> Yup agree with the locking part. For now; yes I am running northd on one
>>> node. I might right a script to monitor northd  in cluster so that if the
>>> node where it's running goes down, script can spin up northd on one other
>>> active nodes as a dirty hack.
>>>
>>> The "dirty hack" is pacemaker :)
>>
>>
>>> Sure, will await for the inputs from Ben too on this and see how complex
>>> would it be to roll out this feature.
>>>
>>>
>>> Regards,
>>>
>>>
>>> On Wed, Mar 21, 2018 at 5:43 AM, Numan Siddique 
>>> wrote:
>>>
 Hi Aliasgar,

 ovsdb-server maintains locks per each connection and not across the db.
 A workaround for you now would be to configure all the ovn-northd instances
 to connect to one ovsdb-server if you want to have active/standy.

 Probably Ben can answer if there is a plan to support ovsdb locks
 across the db. We also need this support in networking-ovn as it also uses
 ovsdb locks.

 Thanks
 Numan


 On Wed, Mar 21, 2018 at 1:40 PM, aginwala  wrote:

> Hi Numan:
>
> Just figured out that ovn-northd is running as active on all 3 nodes
> instead of one active instance as I continued to test further which 
> results
> in db errors as per logs.
>
>
> # on node 3, I run ovn-nbctl ls-add ls2 ; it populates below logs in
> ovn-north
> 2018-03-21T06:01:59.442Z|7|ovsdb_idl|WARN|transaction error:
> {"details":"Transaction causes multiple rows in \"Datapath_Binding\" table
> to have identical values (1) for index on column \"tunnel_key\".  First
> row, with UUID 8c5d9342-2b90-4229-8ea1-001a733a915c, was inserted by
> this transaction.  Second row, with UUID 
> 8e06f919-4cc7-4ffc-9a79-20ce6663b683,
> existed in the database before this transaction and was not modified by 
> the
> transaction.","error":"constraint violation"}
>
> In southbound datapath list, 2 duplicate records gets created for same
> switch.
>
> # ovn-sbctl list Datapath
> _uuid   : b270ae30-3458-445f-95d2-b14e8ebddd01
> external_ids: 
> {logical-switch="4d6674e3-ff9f-4f38-b050-0fa9bec9e34d",
> name="ls2"}
> tunnel_key  : 2
>
> _uuid   : 8e06f919-4cc7-4ffc-9a79-20ce6663b683
> external_ids: 
> {logical-switch="4d6674e3-ff9f-4f38-b050-0fa9bec9e34d",
> name="ls2"}
> tunnel_key  : 1
>
>
>
> # on nodes 1 and 2 where northd is running, it gives below error:
> 2018-03-21T06:01:59.437Z|8|ovsdb_idl|WARN|transaction error:
> {"details":"cannot delete Datapath_Binding row
> 8e06f919-4cc7-4ffc-9a79-20ce6663b683 because of 17 remaining
> reference(s)","error":"referential integrity violation"}
>
> As per commit message, for northd I re-tried setting --ovnnb-db="tcp:
> 10.169.125.152:6641,tcp:10.169.125.131:6641,tcp:10.148.181.162:6641"
> and --ovnsb-db="tcp:10.169.125.152:6642,tcp:10.169.125.131:6642,tcp:
> 10.148.181.162:6642" and it did not help either.
>
> There is no issue if I keep running only one instance of northd on any
> of these 3 nodes. Hence, wanted to know is there something else
> missing here to make only one northd instance as active and rest as
> standby?
>
>
> Regards,
>
> On Thu, Mar 15, 2018 at 3:09 AM, Numan Siddique 
> wrote:
>
>> That's great
>>
>> Numan
>>
>>
>> On Thu, Mar 15, 2018 at 2:57 AM, aginwala  wrote:
>>
>>> Hi Numan:
>>>
>>> I tried on new nodes (kernel : 4.4.0-104-generic , Ubuntu 16.04)with
>>> fresh installation and it worked super fine for both sb and nb dbs. 
>>> Seems
>>> like some kernel issue on the previous nodes when I re-installed raft 
>>> patch
>>> as I was running different ovs version on those nodes before.
>>>
>>>
>>> For 2 HVs, I now set 

Re: [ovs-discuss] ovs-ofctl memory consumption is large compared to flow bundle size [formatting correction]

2018-03-21 Thread Michael Ben-Ami via discuss
Thank you so much Ben and team! Assuming it is merged, we will deploy as
part of our next OVS upgrade. In the meantime, we have found conjunctive
matchers to be a solid workaround to help with limiting the size of the
flow bundle, and in turn the consumed memory.
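
For anyone curious, a minimal sketch of the conjunctive-match pattern meant
here (bridge name and addresses are illustrative): instead of one flow per
src/dst combination, each dimension gets its own flows and a single conj_id
flow carries the action.

# dimension 1 of 2: source addresses
ovs-ofctl add-flow br0 'priority=100,ip,nw_src=10.0.0.1,actions=conjunction(1,1/2)'
ovs-ofctl add-flow br0 'priority=100,ip,nw_src=10.0.0.2,actions=conjunction(1,1/2)'
# dimension 2 of 2: destination addresses
ovs-ofctl add-flow br0 'priority=100,ip,nw_dst=10.1.0.1,actions=conjunction(1,2/2)'
ovs-ofctl add-flow br0 'priority=100,ip,nw_dst=10.1.0.2,actions=conjunction(1,2/2)'
# action taken when both dimensions match
ovs-ofctl add-flow br0 'priority=100,conj_id=1,ip,actions=output:2'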

On Tue, Mar 20, 2018 at 4:48 PM, Ben Pfaff  wrote:

> On Mon, Mar 19, 2018 at 10:04:00PM -0700, Ben Pfaff wrote:
> > On Mon, Mar 12, 2018 at 03:47:16PM -0400, Michael Ben-Ami via discuss
> wrote:
> > > We found that when we add a flow bundle of about 25MB of textual flows,
> > > ovs-ofctl ballooned in resident memory to around 563MB. Similarly for a
> > > bundle about half the size at 12.4MB, ovs-ofctl hit 285MB.
> >
> > I have a branch in my "reviews" repository that should fix this:
> > https://github.com/blp/ovs-reviews/tree/memory
> >
> > It's not quite ready to post for formal review (the commit messages need
> > work and it probably leaks some memory), but if you have a minute to
> > test it out, please do consider it.
>
> I polished it up and sent it out for review:
> https://patchwork.ozlabs.org/project/openvswitch/list/?series=34920
>


Re: [ovs-discuss] raft ovsdb clustering

2018-03-21 Thread aginwala
:) The only thing is, while using pacemaker, if the node that pacemaker is
pointing to is down, all the active/standby northd nodes have to be updated
to a new node from the cluster. But I will dig in more to see what else I can
find.

@Ben: Any suggestions further?


Regards,

On Wed, Mar 21, 2018 at 10:22 AM, Han Zhou  wrote:

>
>
> On Wed, Mar 21, 2018 at 9:49 AM, aginwala  wrote:
>
>> Thanks Numan:
>>
>> Yup agree with the locking part. For now; yes I am running northd on one
>> node. I might right a script to monitor northd  in cluster so that if the
>> node where it's running goes down, script can spin up northd on one other
>> active nodes as a dirty hack.
>>
>> The "dirty hack" is pacemaker :)
>
>
>> Sure, will await for the inputs from Ben too on this and see how complex
>> would it be to roll out this feature.
>>
>>
>> Regards,
>>
>>
>> On Wed, Mar 21, 2018 at 5:43 AM, Numan Siddique 
>> wrote:
>>
>>> Hi Aliasgar,
>>>
>>> ovsdb-server maintains locks per each connection and not across the db.
>>> A workaround for you now would be to configure all the ovn-northd instances
>>> to connect to one ovsdb-server if you want to have active/standy.
>>>
>>> Probably Ben can answer if there is a plan to support ovsdb locks across
>>> the db. We also need this support in networking-ovn as it also uses ovsdb
>>> locks.
>>>
>>> Thanks
>>> Numan
>>>
>>>
>>> On Wed, Mar 21, 2018 at 1:40 PM, aginwala  wrote:
>>>
 Hi Numan:

 Just figured out that ovn-northd is running as active on all 3 nodes
 instead of one active instance as I continued to test further which results
 in db errors as per logs.


 # on node 3, I run ovn-nbctl ls-add ls2 ; it populates below logs in
 ovn-north
 2018-03-21T06:01:59.442Z|7|ovsdb_idl|WARN|transaction error:
 {"details":"Transaction causes multiple rows in \"Datapath_Binding\" table
 to have identical values (1) for index on column \"tunnel_key\".  First
 row, with UUID 8c5d9342-2b90-4229-8ea1-001a733a915c, was inserted by
 this transaction.  Second row, with UUID 
 8e06f919-4cc7-4ffc-9a79-20ce6663b683,
 existed in the database before this transaction and was not modified by the
 transaction.","error":"constraint violation"}

 In southbound datapath list, 2 duplicate records gets created for same
 switch.

 # ovn-sbctl list Datapath
 _uuid   : b270ae30-3458-445f-95d2-b14e8ebddd01
 external_ids: 
 {logical-switch="4d6674e3-ff9f-4f38-b050-0fa9bec9e34d",
 name="ls2"}
 tunnel_key  : 2

 _uuid   : 8e06f919-4cc7-4ffc-9a79-20ce6663b683
 external_ids: 
 {logical-switch="4d6674e3-ff9f-4f38-b050-0fa9bec9e34d",
 name="ls2"}
 tunnel_key  : 1



 # on nodes 1 and 2 where northd is running, it gives below error:
 2018-03-21T06:01:59.437Z|8|ovsdb_idl|WARN|transaction error:
 {"details":"cannot delete Datapath_Binding row
 8e06f919-4cc7-4ffc-9a79-20ce6663b683 because of 17 remaining
 reference(s)","error":"referential integrity violation"}

 As per commit message, for northd I re-tried setting --ovnnb-db="tcp:
 10.169.125.152:6641,tcp:10.169.125.131:6641,tcp:10.148.181.162:6641"
 and --ovnsb-db="tcp:10.169.125.152:6642,tcp:10.169.125.131:6642,tcp:
 10.148.181.162:6642" and it did not help either.

 There is no issue if I keep running only one instance of northd on any
 of these 3 nodes. Hence, wanted to know is there something else
 missing here to make only one northd instance as active and rest as
 standby?


 Regards,

 On Thu, Mar 15, 2018 at 3:09 AM, Numan Siddique 
 wrote:

> That's great
>
> Numan
>
>
> On Thu, Mar 15, 2018 at 2:57 AM, aginwala  wrote:
>
>> Hi Numan:
>>
>> I tried on new nodes (kernel : 4.4.0-104-generic , Ubuntu 16.04)with
>> fresh installation and it worked super fine for both sb and nb dbs. Seems
>> like some kernel issue on the previous nodes when I re-installed raft 
>> patch
>> as I was running different ovs version on those nodes before.
>>
>>
>> For 2 HVs, I now set ovn-remote="tcp:10.169.125.152:6642, tcp:
>> 10.169.125.131:6642, tcp:10.148.181.162:6642"  and started
>> controller and it works super fine.
>>
>>
>> Did some failover testing by rebooting/killing the leader (
>> 10.169.125.152) and bringing it back up and it works as expected.
>> Nothing weird noted so far.
>>
>> # check-cluster gives below data one of the node(10.148.181.162) post
>> leader failure
>>
>> ovsdb-tool check-cluster /etc/openvswitch/ovnsb_db.db
>> ovsdb-tool: leader /etc/openvswitch/ovnsb_db.db for term 2 has log
>> entries only up to index 18446744073709551615, but 

Re: [ovs-discuss] raft ovsdb clustering

2018-03-21 Thread aginwala
Thanks Numan:

Yup, agree with the locking part. For now, yes, I am running northd on one
node. I might write a script to monitor northd in the cluster so that, if the
node where it's running goes down, the script can spin up northd on one of the
other active nodes as a dirty hack.

Sure, I will await input from Ben too on this and see how complex it would be
to roll out this feature.


Regards,


On Wed, Mar 21, 2018 at 5:43 AM, Numan Siddique  wrote:

> Hi Aliasgar,
>
> ovsdb-server maintains locks per each connection and not across the db. A
> workaround for you now would be to configure all the ovn-northd instances
> to connect to one ovsdb-server if you want to have active/standy.
>
> Probably Ben can answer if there is a plan to support ovsdb locks across
> the db. We also need this support in networking-ovn as it also uses ovsdb
> locks.
>
> Thanks
> Numan
>
>
> On Wed, Mar 21, 2018 at 1:40 PM, aginwala  wrote:
>
>> Hi Numan:
>>
>> Just figured out that ovn-northd is running as active on all 3 nodes
>> instead of one active instance as I continued to test further which results
>> in db errors as per logs.
>>
>>
>> # on node 3, I run ovn-nbctl ls-add ls2 ; it populates below logs in
>> ovn-north
>> 2018-03-21T06:01:59.442Z|7|ovsdb_idl|WARN|transaction error:
>> {"details":"Transaction causes multiple rows in \"Datapath_Binding\" table
>> to have identical values (1) for index on column \"tunnel_key\".  First
>> row, with UUID 8c5d9342-2b90-4229-8ea1-001a733a915c, was inserted by
>> this transaction.  Second row, with UUID 
>> 8e06f919-4cc7-4ffc-9a79-20ce6663b683,
>> existed in the database before this transaction and was not modified by the
>> transaction.","error":"constraint violation"}
>>
>> In southbound datapath list, 2 duplicate records gets created for same
>> switch.
>>
>> # ovn-sbctl list Datapath
>> _uuid   : b270ae30-3458-445f-95d2-b14e8ebddd01
>> external_ids: {logical-switch="4d6674e3-ff9f-4f38-b050-0fa9bec9e34d",
>> name="ls2"}
>> tunnel_key  : 2
>>
>> _uuid   : 8e06f919-4cc7-4ffc-9a79-20ce6663b683
>> external_ids: {logical-switch="4d6674e3-ff9f-4f38-b050-0fa9bec9e34d",
>> name="ls2"}
>> tunnel_key  : 1
>>
>>
>>
>> # on nodes 1 and 2 where northd is running, it gives below error:
>> 2018-03-21T06:01:59.437Z|8|ovsdb_idl|WARN|transaction error:
>> {"details":"cannot delete Datapath_Binding row
>> 8e06f919-4cc7-4ffc-9a79-20ce6663b683 because of 17 remaining
>> reference(s)","error":"referential integrity violation"}
>>
>> As per commit message, for northd I re-tried setting --ovnnb-db="tcp:
>> 10.169.125.152:6641,tcp:10.169.125.131:6641,tcp:10.148.181.162:6641"
>> and --ovnsb-db="tcp:10.169.125.152:6642,tcp:10.169.125.131:6642,tcp:
>> 10.148.181.162:6642" and it did not help either.
>>
>> There is no issue if I keep running only one instance of northd on any of
>> these 3 nodes. Hence, wanted to know is there something else missing
>> here to make only one northd instance as active and rest as standby?
>>
>>
>> Regards,
>>
>> On Thu, Mar 15, 2018 at 3:09 AM, Numan Siddique 
>> wrote:
>>
>>> That's great
>>>
>>> Numan
>>>
>>>
>>> On Thu, Mar 15, 2018 at 2:57 AM, aginwala  wrote:
>>>
 Hi Numan:

 I tried on new nodes (kernel : 4.4.0-104-generic , Ubuntu 16.04)with
 fresh installation and it worked super fine for both sb and nb dbs. Seems
 like some kernel issue on the previous nodes when I re-installed raft patch
 as I was running different ovs version on those nodes before.


 For 2 HVs, I now set ovn-remote="tcp:10.169.125.152:6642, tcp:
 10.169.125.131:6642, tcp:10.148.181.162:6642"  and started controller
 and it works super fine.


 Did some failover testing by rebooting/killing the leader (
 10.169.125.152) and bringing it back up and it works as expected.
 Nothing weird noted so far.

 # check-cluster gives below data one of the node(10.148.181.162) post
 leader failure

 ovsdb-tool check-cluster /etc/openvswitch/ovnsb_db.db
 ovsdb-tool: leader /etc/openvswitch/ovnsb_db.db for term 2 has log
 entries only up to index 18446744073709551615, but index 9 was committed in
 a previous term (e.g. by /etc/openvswitch/ovnsb_db.db)


 For check-cluster, are we planning to add more output showing which
 node is active(leader), etc in upcoming versions ?


 Thanks a ton for helping sort this out.  I think the patch looks good
 to be merged post addressing of the comments by Justin along with the man
 page details for ovsdb-tool.


 I will do some more crash testing for the cluster along with the scale
 test and keep you posted if something unexpected is noted.



 Regards,



 On Tue, Mar 13, 2018 at 11:07 PM, Numan Siddique 
 wrote:

>
>
> On Wed, 

[ovs-discuss] How to enable kernel configuration options NET_CLS_BASIC, NET_SCH_INGRESS, and NET_ACT_POLICE?

2018-03-21 Thread Taimoor Alam
Hi

I would like to enable ingress policing in my OVS installation inside a
Ubuntu 16.04 VM. The installation guide mentions the following:

To compile the kernel module on Linux, you must also install the following:

   - A supported Linux kernel version.

   For optional support of ingress policing, you must enable kernel
   configuration options NET_CLS_BASIC, NET_SCH_INGRESS, and NET_ACT_POLICE,
   either built-in or as modules. NET_CLS_POLICE is obsolete and not needed.

How do I enable these kernel configuration options? A guide or tutorial would
be helpful.
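
A quick way to check whether the stock Ubuntu kernel already provides them
(built-in "=y" or as modules "=m") seems to be:

grep -E 'NET_CLS_BASIC|NET_SCH_INGRESS|NET_ACT_POLICE' /boot/config-$(uname -r)

and, if they are built as modules, loading them with (module names here are
my assumption of the usual counterparts of those config options):

modprobe cls_basic
modprobe sch_ingress
modprobe act_police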

Regards
Taimoor


Re: [ovs-discuss] raft ovsdb clustering

2018-03-21 Thread Numan Siddique
Hi Aliasgar,

ovsdb-server maintains locks per connection and not across the DB. A
workaround for you for now would be to configure all the ovn-northd instances
to connect to one ovsdb-server if you want to have active/standby.

Probably Ben can answer whether there is a plan to support ovsdb locks across
the DB. We also need this support in networking-ovn, as it also uses ovsdb
locks.

Thanks
Numan


On Wed, Mar 21, 2018 at 1:40 PM, aginwala  wrote:

> Hi Numan:
>
> Just figured out that ovn-northd is running as active on all 3 nodes
> instead of one active instance as I continued to test further which results
> in db errors as per logs.
>
>
> # on node 3, I run ovn-nbctl ls-add ls2 ; it populates below logs in
> ovn-north
> 2018-03-21T06:01:59.442Z|7|ovsdb_idl|WARN|transaction error:
> {"details":"Transaction causes multiple rows in \"Datapath_Binding\" table
> to have identical values (1) for index on column \"tunnel_key\".  First
> row, with UUID 8c5d9342-2b90-4229-8ea1-001a733a915c, was inserted by this
> transaction.  Second row, with UUID 8e06f919-4cc7-4ffc-9a79-20ce6663b683,
> existed in the database before this transaction and was not modified by the
> transaction.","error":"constraint violation"}
>
> In southbound datapath list, 2 duplicate records gets created for same
> switch.
>
> # ovn-sbctl list Datapath
> _uuid   : b270ae30-3458-445f-95d2-b14e8ebddd01
> external_ids: {logical-switch="4d6674e3-ff9f-4f38-b050-0fa9bec9e34d",
> name="ls2"}
> tunnel_key  : 2
>
> _uuid   : 8e06f919-4cc7-4ffc-9a79-20ce6663b683
> external_ids: {logical-switch="4d6674e3-ff9f-4f38-b050-0fa9bec9e34d",
> name="ls2"}
> tunnel_key  : 1
>
>
>
> # on nodes 1 and 2 where northd is running, it gives below error:
> 2018-03-21T06:01:59.437Z|8|ovsdb_idl|WARN|transaction error:
> {"details":"cannot delete Datapath_Binding row
> 8e06f919-4cc7-4ffc-9a79-20ce6663b683 because of 17 remaining
> reference(s)","error":"referential integrity violation"}
>
> As per commit message, for northd I re-tried setting --ovnnb-db="tcp:
> 10.169.125.152:6641,tcp:10.169.125.131:6641,tcp:10.148.181.162:6641"  and
> --ovnsb-db="tcp:10.169.125.152:6642,tcp:10.169.125.131:6642,tcp:
> 10.148.181.162:6642" and it did not help either.
>
> There is no issue if I keep running only one instance of northd on any of
> these 3 nodes. Hence, wanted to know is there something else missing here
> to make only one northd instance as active and rest as standby?
>
>
> Regards,
>
> On Thu, Mar 15, 2018 at 3:09 AM, Numan Siddique 
> wrote:
>
>> That's great
>>
>> Numan
>>
>>
>> On Thu, Mar 15, 2018 at 2:57 AM, aginwala  wrote:
>>
>>> Hi Numan:
>>>
>>> I tried on new nodes (kernel : 4.4.0-104-generic , Ubuntu 16.04)with
>>> fresh installation and it worked super fine for both sb and nb dbs. Seems
>>> like some kernel issue on the previous nodes when I re-installed raft patch
>>> as I was running different ovs version on those nodes before.
>>>
>>>
>>> For 2 HVs, I now set ovn-remote="tcp:10.169.125.152:6642, tcp:
>>> 10.169.125.131:6642, tcp:10.148.181.162:6642"  and started controller
>>> and it works super fine.
>>>
>>>
>>> Did some failover testing by rebooting/killing the leader (
>>> 10.169.125.152) and bringing it back up and it works as expected.
>>> Nothing weird noted so far.
>>>
>>> # check-cluster gives below data one of the node(10.148.181.162) post
>>> leader failure
>>>
>>> ovsdb-tool check-cluster /etc/openvswitch/ovnsb_db.db
>>> ovsdb-tool: leader /etc/openvswitch/ovnsb_db.db for term 2 has log
>>> entries only up to index 18446744073709551615, but index 9 was committed in
>>> a previous term (e.g. by /etc/openvswitch/ovnsb_db.db)
>>>
>>>
>>> For check-cluster, are we planning to add more output showing which node
>>> is active(leader), etc in upcoming versions ?
>>>
>>>
>>> Thanks a ton for helping sort this out.  I think the patch looks good to
>>> be merged post addressing of the comments by Justin along with the man page
>>> details for ovsdb-tool.
>>>
>>>
>>> I will do some more crash testing for the cluster along with the scale
>>> test and keep you posted if something unexpected is noted.
>>>
>>>
>>>
>>> Regards,
>>>
>>>
>>>
>>> On Tue, Mar 13, 2018 at 11:07 PM, Numan Siddique 
>>> wrote:
>>>


 On Wed, Mar 14, 2018 at 7:51 AM, aginwala  wrote:

> Sure.
>
> To add on , I also ran for nb db too using different port  and Node2
> crashes with same error :
> # Node 2
> /usr/share/openvswitch/scripts/ovn-ctl --db-nb-addr=10.99.152.138
> --db-nb-port=6641 --db-nb-cluster-remote-addr="tcp:10.99.152.148:6645"
> --db-nb-cluster-local-addr="tcp:10.99.152.138:6645" start_nb_ovsdb
> ovsdb-server: ovsdb error: /etc/openvswitch/ovnnb_db.db: cannot
> identify file type
>
>
>
 Hi Aliasgar,

 It worked for me. Can you delete the old 

Re: [ovs-discuss] Q-in-Q support in OvS.

2018-03-21 Thread Sławomir Kapłoński
Hi,

I think it has been supported since OVS 2.8.

— 
Best regards
Slawek Kaplonski
sla...@kaplonski.pl

> Message written by Rohith Basavaraja on 21.03.2018 at 09:38:
> 
> Hi,
>  
> Is there any plans to support Q-in-Q vlan support in OvS? Any information on 
> when it will be supported in OvS?
>  
> Thanks
> Rohith
> ___
> discuss mailing list
> disc...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss



Re: [ovs-discuss] Setting TCP rwnd value in Open vSwitch

2018-03-21 Thread Ali Volkan Atli

Hi Taimur

The attached patch is what you need. I hope it works for you.

- Volkan

From: ovs-discuss-boun...@openvswitch.org [ovs-discuss-boun...@openvswitch.org] 
on behalf of Taimur Hafeez [taimurhafee...@gmail.com]
Sent: Wednesday, March 21, 2018 11:58 AM
To: ovs-discuss@openvswitch.org
Subject: [ovs-discuss] Setting TCP rwnd value in Open vSwitch

Dear All,

I want to modify the value of the receive window field (used for flow control)
in the TCP header using an OpenFlow rule at Open vSwitch. To make it clear what
I am trying to do, let me illustrate it in the following way:

In_port=1, match src_ip=10.0.0.1, action:set_tcp_rwnd=10, out_port=2

Specifications:

Controller ryu
OpenFlow 1.3
OVS 2.5.2

Any help or clue would be highly appreciated if anyone has done similar work.
Thanks in advance!


Best regards,

Taimur Hafeez
NUST School of Electrical Engineering and Computer Science (SEECS), Islamabad, 
Pakistan.
diff --git a/datapath/linux/compat/include/linux/openvswitch.h b/datapath/linux/compat/include/linux/openvswitch.h
index 12260d8..dd84b04 100644
--- a/datapath/linux/compat/include/linux/openvswitch.h
+++ b/datapath/linux/compat/include/linux/openvswitch.h
@@ -352,6 +352,8 @@ enum ovs_key_attr {
 	OVS_KEY_ATTR_MPLS,  /* array of struct ovs_key_mpls.
  * The implementation may restrict
  * the accepted length of the array. */
+	OVS_KEY_ATTR_REDUCE_RWND,
+	OVS_KEY_ATTR_SET_RWND,
 	OVS_KEY_ATTR_CT_STATE,	/* u32 bitmask of OVS_CS_F_* */
 	OVS_KEY_ATTR_CT_ZONE,	/* u16 connection tracking zone. */
 	OVS_KEY_ATTR_CT_MARK,	/* u32 connection tracking mark */
@@ -433,6 +435,14 @@ struct ovs_key_ipv6 {
 	__u8   ipv6_frag;	/* One of OVS_FRAG_TYPE_*. */
 };
 
+struct ovs_key_reduce_rwnd {
+	uint8_t rate;
+};
+
+struct ovs_key_set_rwnd {
+	ovs_be16 rwnd;
+};
+
 struct ovs_key_tcp {
 	__be16 tcp_src;
 	__be16 tcp_dst;
diff --git a/include/openvswitch/flow.h b/include/openvswitch/flow.h
index df80dfe..64da205 100644
--- a/include/openvswitch/flow.h
+++ b/include/openvswitch/flow.h
@@ -120,7 +120,9 @@ struct flow {
 struct eth_addr arp_sha;/* ARP/ND source hardware address. */
 struct eth_addr arp_tha;/* ARP/ND target hardware address. */
 ovs_be16 tcp_flags; /* TCP flags. With L3 to avoid matching L4. */
-ovs_be16 pad3;  /* Pad to 64 bits. */
+ovs_be16 rwnd;
+uint8_t rate;
+uint8_t pad3[7];  /* Pad to 64 bits. */
 
 /* L4 (64-bit aligned) */
 ovs_be16 tp_src;/* TCP/UDP/SCTP source port/ICMP type. */
@@ -135,7 +137,7 @@ BUILD_ASSERT_DECL(sizeof(struct flow_tnl) % sizeof(uint64_t) == 0);
 
 /* Remember to update FLOW_WC_SEQ when changing 'struct flow'. */
 BUILD_ASSERT_DECL(offsetof(struct flow, igmp_group_ip4) + sizeof(uint32_t)
-  == sizeof(struct flow_tnl) + 248
+  == sizeof(struct flow_tnl) + 256
   && FLOW_WC_SEQ == 36);
 
 /* Incremental points at which flow classification may be performed in
diff --git a/include/openvswitch/ofp-actions.h b/include/openvswitch/ofp-actions.h
index 74e9dcc..a7512e5 100644
--- a/include/openvswitch/ofp-actions.h
+++ b/include/openvswitch/ofp-actions.h
@@ -86,6 +86,8 @@
 OFPACT(DEC_MPLS_TTL,ofpact_null,ofpact, "dec_mpls_ttl") \
 OFPACT(PUSH_MPLS,   ofpact_push_mpls,   ofpact, "push_mpls")\
 OFPACT(POP_MPLS,ofpact_pop_mpls,ofpact, "pop_mpls") \
+OFPACT(REDUCE_RWND, ofpact_reduce_rwnd, ofpact, "reduce_rwnd")  \
+OFPACT(SET_RWND,ofpact_set_rwnd,ofpact, "set_rwnd") \
 \
 /* Metadata. */ \
 OFPACT(SET_TUNNEL,  ofpact_tunnel,  ofpact, "set_tunnel")   \
@@ -426,6 +428,16 @@ struct ofpact_ip_ttl {
 uint8_t ttl;
 };
 
+struct ofpact_reduce_rwnd {
+struct ofpact ofpact;
+uint8_t rate;
+};
+
+struct ofpact_set_rwnd {
+struct ofpact ofpact;
+uint16_t rwnd;
+};
+
 /* OFPACT_SET_L4_SRC_PORT, OFPACT_SET_L4_DST_PORT.
  *
  * Used for OFPAT10_SET_TP_SRC, OFPAT10_SET_TP_DST. */
diff --git a/lib/odp-execute.c b/lib/odp-execute.c
index 65a6fcd..b052f1c 100644
--- a/lib/odp-execute.c
+++ b/lib/odp-execute.c
@@ -271,6 +271,21 @@ odp_execute_set_action(struct dp_packet *packet, const struct nlattr *a)
 }
 break;
 
+case OVS_KEY_ATTR_REDUCE_RWND:
+if (OVS_LIKELY(dp_packet_get_tcp_payload(packet))) {
+const uint8_t rate = nl_attr_get_u8(a);
+packet_reduce_rwnd(packet, rate);
+}
+
+break;
+case OVS_KEY_ATTR_SET_RWND:
+if (OVS_LIKELY(dp_packet_get_tcp_payload(packet))) {
+const uint16_t rwnd = nl_attr_get_u16(a);
+packet_set_rwnd(packet, rwnd);
+}
+
+break;
+
 case OVS_KEY_ATTR_UDP:
 if (OVS_LIKELY(dp_packet_get_udp_payload(packet))) {
 

Re: [ovs-discuss] Q-in-Q support in OvS.

2018-03-21 Thread Ivan Dyukov

Hello, it's already supported:
just set

ovs-vsctl set Open_vSwitch . other_config:vlan-limit=2

and specify vlan mode for your port:

ovs-vsctl set port vhu0fd9ab54-39 vlan_mode=dot1q-tunnel tag=5
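
To double-check the result, listing the port should show the tunnel mode and
tag (port name taken from the example above):

ovs-vsctl list port vhu0fd9ab54-39 | grep -E '^(vlan_mode|tag)'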

Best regards,
Ivan

On 03/21/2018 11:38 AM, Rohith Basavaraja wrote:


Hi,

Is there any plans to support Q-in-Q vlan support in OvS? Any 
information on when it will be supported in OvS?


Thanks

Rohith



___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss





[ovs-discuss] Connectivity between bridges with netdev and system datapath

2018-03-21 Thread Dawid Deja
Hello,

In my environment I have 2 OVS bridges - the 1st with datapath type netdev
and the 2nd with datapath type system - and I'd like to connect the two. I've
tried patch ports - they do not work [1], but it is mentioned that one
could provide such functionality. How much effort would it take to patch
OVS so that patch ports work between netdev and system datapaths?

Moreover, I've tried veth, but it also seems not to be functional. Is there
any other way to connect such bridges?
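
For reference, the veth approach I mean is along these lines (a sketch;
bridge and interface names are illustrative):

# create a veth pair and attach one end to each bridge
ip link add ovs-veth0 type veth peer name ovs-veth1
ip link set ovs-veth0 up
ip link set ovs-veth1 up
ovs-vsctl add-port br-system ovs-veth0
ovs-vsctl add-port br-netdev ovs-veth1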

Dawid

[1] https://mail.openvswitch.org/pipermail/ovs-discuss/2017-October/045499.html


[ovs-discuss] Setting TCP rwnd value in Open vSwitch

2018-03-21 Thread Taimur Hafeez
Dear All,

I want to modify the value of the receive window field (used for flow control)
in the TCP header using an OpenFlow rule at Open vSwitch. To make it clear what
I am trying to do, let me illustrate it in the following way:

In_port=1, match src_ip=10.0.0.1, action:set_tcp_rwnd=10, out_port=2
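
(In ovs-ofctl terms I imagine something like the following; the set_tcp_rwnd
action is purely hypothetical and, as far as I know, does not exist in stock
OVS 2.5.2:)

# hypothetical action, shown only to illustrate the intent
ovs-ofctl -O OpenFlow13 add-flow br0 'in_port=1,tcp,nw_src=10.0.0.1,actions=set_tcp_rwnd:10,output:2'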

Specifications:

Controller ryu
OpenFlow 1.3
OVS 2.5.2

Any help or clue would be highly appreciated if anyone has done similar work.
Thanks in advance!


Best regards,

Taimur Hafeez
NUST School of Electrical Engineering and Computer Science (SEECS), Islamabad,
Pakistan.


[ovs-discuss] Q-in-Q support in OvS.

2018-03-21 Thread Rohith Basavaraja
Hi,

Are there any plans to support Q-in-Q VLANs in OvS? Any information on when
it will be supported?

Thanks
Rohith


Re: [ovs-discuss] raft ovsdb clustering

2018-03-21 Thread aginwala
Hi Numan:

I just figured out that ovn-northd is running as active on all 3 nodes
instead of one active instance as I continued to test further, which results
in DB errors as per the logs.


# on node 3, I run "ovn-nbctl ls-add ls2"; it populates the below logs in
ovn-northd
2018-03-21T06:01:59.442Z|7|ovsdb_idl|WARN|transaction error:
{"details":"Transaction causes multiple rows in \"Datapath_Binding\" table
to have identical values (1) for index on column \"tunnel_key\".  First
row, with UUID 8c5d9342-2b90-4229-8ea1-001a733a915c, was inserted by this
transaction.  Second row, with UUID 8e06f919-4cc7-4ffc-9a79-20ce6663b683,
existed in the database before this transaction and was not modified by the
transaction.","error":"constraint violation"}

In the southbound datapath list, 2 duplicate records get created for the same
switch.

# ovn-sbctl list Datapath
_uuid   : b270ae30-3458-445f-95d2-b14e8ebddd01
external_ids:
{logical-switch="4d6674e3-ff9f-4f38-b050-0fa9bec9e34d", name="ls2"}
tunnel_key  : 2

_uuid   : 8e06f919-4cc7-4ffc-9a79-20ce6663b683
external_ids:
{logical-switch="4d6674e3-ff9f-4f38-b050-0fa9bec9e34d", name="ls2"}
tunnel_key  : 1



# on nodes 1 and 2 where northd is running, it gives below error:
2018-03-21T06:01:59.437Z|8|ovsdb_idl|WARN|transaction error:
{"details":"cannot delete Datapath_Binding row
8e06f919-4cc7-4ffc-9a79-20ce6663b683 because of 17 remaining
reference(s)","error":"referential integrity violation"}

As per commit message, for northd I re-tried setting --ovnnb-db="tcp:
10.169.125.152:6641,tcp:10.169.125.131:6641,tcp:10.148.181.162:6641"  and
--ovnsb-db="tcp:10.169.125.152:6642,tcp:10.169.125.131:6642,tcp:
10.148.181.162:6642" and it did not help either.

There is no issue if I keep running only one instance of northd on any of
these 3 nodes. Hence, I wanted to know whether there is something else missing
here to make only one northd instance active and the rest standby.


Regards,

On Thu, Mar 15, 2018 at 3:09 AM, Numan Siddique  wrote:

> That's great
>
> Numan
>
>
> On Thu, Mar 15, 2018 at 2:57 AM, aginwala  wrote:
>
>> Hi Numan:
>>
>> I tried on new nodes (kernel : 4.4.0-104-generic , Ubuntu 16.04)with
>> fresh installation and it worked super fine for both sb and nb dbs. Seems
>> like some kernel issue on the previous nodes when I re-installed raft patch
>> as I was running different ovs version on those nodes before.
>>
>>
>> For 2 HVs, I now set ovn-remote="tcp:10.169.125.152:6642, tcp:
>> 10.169.125.131:6642, tcp:10.148.181.162:6642"  and started controller
>> and it works super fine.
>>
>>
>> Did some failover testing by rebooting/killing the leader (10.169.125.152)
>> and bringing it back up and it works as expected. Nothing weird noted so
>> far.
>>
>> # check-cluster gives below data one of the node(10.148.181.162) post
>> leader failure
>>
>> ovsdb-tool check-cluster /etc/openvswitch/ovnsb_db.db
>> ovsdb-tool: leader /etc/openvswitch/ovnsb_db.db for term 2 has log
>> entries only up to index 18446744073709551615, but index 9 was committed in
>> a previous term (e.g. by /etc/openvswitch/ovnsb_db.db)
>>
>>
>> For check-cluster, are we planning to add more output showing which node
>> is active(leader), etc in upcoming versions ?
>>
>>
>> Thanks a ton for helping sort this out.  I think the patch looks good to
>> be merged post addressing of the comments by Justin along with the man page
>> details for ovsdb-tool.
>>
>>
>> I will do some more crash testing for the cluster along with the scale
>> test and keep you posted if something unexpected is noted.
>>
>>
>>
>> Regards,
>>
>>
>>
>> On Tue, Mar 13, 2018 at 11:07 PM, Numan Siddique 
>> wrote:
>>
>>>
>>>
>>> On Wed, Mar 14, 2018 at 7:51 AM, aginwala  wrote:
>>>
 Sure.

 To add on , I also ran for nb db too using different port  and Node2
 crashes with same error :
 # Node 2
 /usr/share/openvswitch/scripts/ovn-ctl --db-nb-addr=10.99.152.138
 --db-nb-port=6641 --db-nb-cluster-remote-addr="tcp:10.99.152.148:6645"
 --db-nb-cluster-local-addr="tcp:10.99.152.138:6645" start_nb_ovsdb
 ovsdb-server: ovsdb error: /etc/openvswitch/ovnnb_db.db: cannot
 identify file type



>>> Hi Aliasgar,
>>>
>>> It worked for me. Can you delete the old db files in /etc/openvswitch/
>>> and try running the commands again ?
>>>
>>> Below are the commands I ran in my setup.
>>>
>>> Node 1
>>> ---
>>> sudo /usr/share/openvswitch/scripts/ovn-ctl
>>> --db-sb-addr=192.168.121.91 --db-sb-port=6642 
>>> --db-sb-create-insecure-remote=yes
>>> --db-sb-cluster-local-addr=tcp:192.168.121.91:6644 start_sb_ovsdb
>>>
>>> Node 2
>>> -
>>> sudo /usr/share/openvswitch/scripts/ovn-ctl
>>> --db-sb-addr=192.168.121.87 --db-sb-port=6642 
>>> --db-sb-create-insecure-remote=yes
>>> --db-sb-cluster-local-addr="tcp:192.168.121.87:6644"
>>>