Re: [ovs-dev] [PATCH 0/2] datapath: Optimize operations for OvS flow_stats.

2017-09-18 Thread Tonghao Zhang
Haha, I'm fine. We just want to make OVS better.

On Sat, Sep 16, 2017 at 1:45 AM, Greg Rose  wrote:
> On 09/10/2017 06:00 PM, Tonghao Zhang wrote:
>>
>> The Linux kernel 4.13 has been released. I have backported the
>> openvswitch patches here.
>>
>> Tonghao Zhang (2):
>>datapath: Optimize updating for OvS flow_stats.
>>datapath: Optimize operations for OvS flow_stats.
>>
>>   datapath/flow.c   | 10 +-
>>   datapath/flow.h   |  2 ++
>>   datapath/flow_table.c |  4 +++-
>>   3 files changed, 10 insertions(+), 6 deletions(-)
>>
>
> I had already backported these two patches in my own patch series to
> update to the 4.13 kernel.  I'll let the maintainers pick, but since you
> wrote the patches in the first place I'm fine if they take yours.
>
> Thanks,
>
> - Greg


[ovs-dev] Reply: Re: [PATCH] ovn: Discard flows for non-local ports.

2017-09-18 Thread wang . qianyu
I agree with Han Zhou. This patch can reduce the flows pushed to vswitchd, but
it cannot reduce the flow computation in ovn-controller. It may be better to
move the check for local_lport_ids before the parsing happens. Adding an lport
column to the logical flow may be more efficient.

Thanks.




From: Han Zhou (via ovs-dev-boun...@openvswitch.org)
Date: 2017/09/19 07:31
To: Russell Bryant
Cc: "d...@openvswitch.org", Miguel Angel Ajo Pelayo
Subject: Re: [ovs-dev] [PATCH] ovn: Discard flows for non-local ports.






Re: [ovs-dev] [PATCH] ovn: Discard flows for non-local ports.

2017-09-18 Thread Han Zhou
Thanks Russell for the quick work!

On Mon, Sep 18, 2017 at 8:24 AM, Russell Bryant  wrote:

> @@ -301,6 +305,22 @@ consider_logical_flow(struct controller_ctx *ctx,
>          if (m->match.wc.masks.conj_id) {
>              m->match.flow.conj_id += *conj_id_ofs;
>          }
> +        if (is_switch(ldp)) {
> +            unsigned int reg_index
> +                = (ingress ? MFF_LOG_INPORT : MFF_LOG_OUTPORT) - MFF_REG0;
> +            int64_t port_id = m->match.flow.regs[reg_index];
> +            if (port_id) {
> +                int64_t dp_id = lflow->logical_datapath->tunnel_key;
> +                char buf[16];
> +                snprintf(buf, sizeof(buf), "%"PRId64"_%"PRId64, dp_id, port_id);
> +                if (!sset_contains(local_lport_ids, buf)) {
> +                    //VLOG_INFO("Matching on port id %"PRId64" dp %"PRId64", is NOT local", port_id, dp_id);
> +                    continue;
> +                } else {
> +                    //VLOG_INFO("Matching on port id %"PRId64" dp %"PRId64", is local", port_id, dp_id);
> +                }
> +            }
> +        }
>          if (!m->n) {
>              ofctrl_add_flow(flow_table, ptable, lflow->priority,
>                              lflow->header_.uuid.parts[0], &m->match, &ofpacts);

I remember that expr_parse_string() is one of the biggest costs in
ovn-controller, so I wonder: would it be better to move the check for
local_lport_ids before the parsing happens, i.e. check against logical flows
instead of OVS flows?
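
For concreteness, a rough sketch of that idea follows. It is only an
illustration, not actual OVN code: lflow_port_key() is a hypothetical helper
standing in for whatever lookup would extract the inport/outport tunnel key
from Southbound metadata, since the match string has not been parsed yet at
this point:

    /* Hedged sketch: filter a logical flow out before the expensive
     * expr_parse_string() call.  lflow_port_key() is hypothetical; the
     * datapath/port pair would have to come from Southbound metadata. */
    static bool
    lflow_is_local(const struct sbrec_logical_flow *lflow,
                   const struct sset *local_lport_ids)
    {
        int64_t dp_id = lflow->logical_datapath->tunnel_key;
        int64_t port_id = lflow_port_key(lflow);   /* hypothetical helper */

        if (!port_id) {
            return true;        /* No inport/outport match: keep the flow. */
        }

        char buf[32];
        snprintf(buf, sizeof buf, "%"PRId64"_%"PRId64, dp_id, port_id);
        return sset_contains(local_lport_ids, buf);
    }

consider_logical_flow() could then "continue" on !lflow_is_local() before
ever calling expr_parse_string().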

Acked-by: Han Zhou 


Re: [ovs-dev] [PATCH] ofproto-dpif-upcall: Transition ukey on dp_ops error.

2017-09-18 Thread Greg Rose

On 09/06/2017 03:12 PM, Joe Stringer wrote:

In most situations, we don't expect that a flow we've successfully
dumped, which we intend to delete, cannot be deleted. However, to make
this code more resilient to ensure that ukeys *will* transition in all
cases (including an error at this stage), grab the lock and transition
this ukey forward to the evicted state, effectively treating a failure
to delete as "this flow is already gone".

If we subsequently find out that it wasn't deleted, then that's ok - we
will re-dump, and validate at that stage, which should lead to creating
a new ukey or deleting the datapath flow when that happens.

Signed-off-by: Joe Stringer 
---
  ofproto/ofproto-dpif-upcall.c | 5 +
  1 file changed, 5 insertions(+)

diff --git a/ofproto/ofproto-dpif-upcall.c b/ofproto/ofproto-dpif-upcall.c
index 4a71bbe258df..bd324fbb6323 100644
--- a/ofproto/ofproto-dpif-upcall.c
+++ b/ofproto/ofproto-dpif-upcall.c
@@ -2227,6 +2227,11 @@ push_dp_ops(struct udpif *udpif, struct ukey_op *ops, size_t n_ops)
 
         if (op->dop.error) {
             /* flow_del error, 'stats' is unusable. */
+            if (op->ukey) {
+                ovs_mutex_lock(&op->ukey->mutex);
+                transition_ukey(op->ukey, UKEY_EVICTED);
+                ovs_mutex_unlock(&op->ukey->mutex);
+            }
             continue;
         }
  



Compile tested only - I didn't see any good way to force the error.

Code looks good to me.

Reviewed-by: Greg Rose 



Re: [ovs-dev] [PATCH 2/2] ofproto-dpif-ipfix: add interface Information Elements to flow key

2017-09-18 Thread Greg Rose

On 09/18/2017 03:01 AM, Weglicki, MichalX wrote:

Hi Greg - comments inline marked [MW].


-Original Message-
From: Greg Rose [mailto:gvrose8...@gmail.com]
Sent: Saturday, September 16, 2017 12:45 AM
To: Weglicki, MichalX 
Cc: d...@openvswitch.org; Darrell Ball 
Subject: Re: [ovs-dev] [PATCH 2/2] ofproto-dpif-ipfix: add interface 
Information Elements to flow key

On 09/08/2017 06:35 AM, Weglicki, MichalX wrote:

Greg,

Patch is rebased and sent to the mailing list as V3. (The last patch was
supposed to be V2; Przemek accidentally sent it again as V1.)

Br,
Michal.


Michal,

I'm not sure what happened, but I can't find your V3 patches in my mail; they
are in patchwork, though.

I tested the patches and they seem to work fine, or at least the code executes
and I didn't see any serious regression.  My ntopng/nprobe setup accepted the
new template.

However, I am seeing this now:

15/Sep/2017 15:11:16 [Lua.cpp:74] ERROR: ntop_get_interface_host_info : expected string, got nil
15/Sep/2017 15:11:16 [Lua.cpp:6658] WARNING: Script failure [/usr/share/ntopng/scripts/lua/iface_ndpi_stats.lua][/usr/share/ntopng/scripts/lua/iface_ndpi_stats.lua:31: attempt to index global 'stats' (a nil value)]
15/Sep/2017 15:11:16 [Lua.cpp:74] ERROR: ntop_get_interface_host_info : expected string, got nil
15/Sep/2017 15:11:15 [Lua.cpp:6658] WARNING: Script failure [/usr/share/ntopng/scripts/lua/iface_ndpi_stats.lua][/usr/share/ntopng/scripts/lua/iface_ndpi_stats.lua:31: attempt to index global 'stats' (a nil value)]
15/Sep/2017 15:11:15 [Lua.cpp:74] ERROR: ntop_get_interface_host_info : expected string, got nil
15/Sep/2017 15:11:12 [Lua.cpp:6658] WARNING: Script failure [/usr/share/ntopng/scripts/lua/iface_ndpi_stats.lua][/usr/share/ntopng/scripts/lua/iface_ndpi_stats.lua:31: attempt to index global 'stats' (a nil value)]
15/Sep/2017 15:11:12 [Lua.cpp:74] ERROR: ntop_get_interface_host_info : expected string, got nil
15/Sep/2017 15:11:09 [Lua.cpp:6658] WARNING: Script failure [/usr/share/ntopng/scripts/lua/iface_ndpi_stats.lua][/usr/share/ntopng/scripts/lua/iface_ndpi_stats.lua:31: attempt to index global 'stats' (a nil value)]
15/Sep/2017 15:11:09 [Lua.cpp:74] ERROR: ntop_get_interface_host_info : expected string, got nil
15/Sep/2017 15:11:09 [Lua.cpp:6658] WARNING: Script failure [/usr/share/ntopng/scripts/lua/iface_ndpi_stats.lua][/usr/share/ntopng/scripts/lua/iface_ndpi_stats.lua:31: attempt to index global 'stats' (a nil value)]
15/Sep/2017 15:11:09 [Lua.cpp:74] ERROR: ntop_get_interface_host_info : expected string, got nil

[MW] Well, are you sure that this patch is causing this error message? I mean,
you don't see it without this patch, do you? If yes, could you explain your
setup a little bit so I can reproduce it here?


It took a while, but I was able to reproduce without your patches applied.
It seems to be something that occasionally occurs soon after the ntopng/nprobe
services are restarted, and then it doesn't occur again SFAICT.

Also, in patch 1/2, the interface type is hard-coded to Ethernet in this bit
of the patch:

+/* Once DPDK library supports retrieving ifType we should get this value
+ * directly from DPDK rather than hardcoding it. */
+smap_add_format(args, "if_type", "%"PRIu32, IF_TYPE_ETHERNETCSMACD);
+smap_add_format(args, "if_descr", "%s %s", rte_version(),
+   dev_info.driver_name);

I'd like to get Darrell Ball's take on this so I've CC'd him.

In patch 2/2 I have some other comments.

[MW] There are some DPDK patches which expose this information for the driver;
nevertheless, I think the value "6" will be returned anyway, as this is a very
general category for Ethernet devices. It could be improved in the future, of
course, when the patch becomes available.



Here you change the hmap to point to all ports rather than just tunnel ports:

-struct hmap_node hmap_node; /* In struct dpif_ipfix's "tunnel_ports" hmap. */
+struct hmap_node hmap_node; /* In struct dpif_ipfix's "ports" hmap. */
 struct ofport *ofport;  /* To retrieve port stats. */
 odp_port_t odp_port;
 enum dpif_ipfix_tunnel_type tunnel_type;
 uint8_t tunnel_key_length;
+uint32_t ifindex;

I didn't see any reason this can't be done, but I'm worried about side effects.
Are we sure that there are no other assumptions in the code that depend on that
hmap pointing to only tunnel ports? I realize that patch 2/2 seems to fix that
up, and I didn't see any particular reason to doubt it, but I'm rather new to
the code base so I thought I'd ask.

[MW] I'm not quite sure what you mean here. In the previous implementation only
tunnel ports were mandatory to cache; however, currently we need all ports, as
we need to query the particular netdev to get the information that we need. I
don't remember the details exactly right now, but we found that there is no
need to cache tunnel ports separately, but it

Re: [ovs-dev] why the max action length is 32K in kernel?

2017-09-18 Thread Ben Pfaff
On Mon, Sep 18, 2017 at 02:06:52PM -0700, Greg Rose wrote:
> On 09/18/2017 01:32 PM, Ben Pfaff wrote:
> >On Mon, Sep 18, 2017 at 01:27:52PM -0700, Greg Rose wrote:
> >>On 09/18/2017 11:15 AM, Ben Pfaff wrote:
> >>>On Mon, Sep 18, 2017 at 10:58:28AM -0700, Greg Rose wrote:
> On 09/12/2017 08:37 PM, ychen wrote:
> >in function nla_alloc_flow_actions(), there is a check: if the action length
> >is greater than MAX_ACTIONS_BUFSIZE(32k), then the kernel datapath flow will
> >not be installed, and packets will be dropped.
> >but in function xlate_actions(), there is such clause:
> >if (nl_attr_oversized(ctx.odp_actions->size)) {
> > /* These datapath actions are too big for a Netlink attribute, 
> > so we
> >  * can't hand them to the kernel directly.  dpif_execute() can 
> > execute
> >  * them one by one with help, so just mark the result as 
> > SLOW_ACTION to
> >  * prevent the flow from being installed. */
> > COVERAGE_INC(xlate_actions_oversize);
> > ctx.xout->slow |= SLOW_ACTION;
> > }
> >and in function nl_attr_oversized(), the clause is like this:
> >return payload_size > UINT16_MAX - NLA_HDRLEN;
> >
> >
> >so we can see that in user space, max action length is almost 64K, but 
> >in kernel space, max action length is only 32K.
> >my question is: why is the max action length different? packets will be dropped
> >when the action length exceeds 32K, but packets can execute in the slow path
> >when the action length exceeds 64K?
> 
> It's a kernel limitation.
> 
> http://www.spinics.net/lists/netdev/msg431592.html
> >>>
> >>>It sounds like the userspace limit, then, should also be 32 kB (or
> >>>possibly 16 kB).  I guess we should fix that.
> >>>
> >>
> >>Correct, the user space limit should be 32KB.  That's what it is in 
> >>iproute2.
> >
> >OVS supports Linux < 4.9 as well, so should we stick with 16 kB (or
> >detect the kernel version or limit somehow)?
> >http://www.spinics.net/lists/netdev/msg431592.html
> >
> 
> We should research how the iproute2 package utilities detect what message size
> to use, since they have to work with older kernels as well.
> 
> I'll take an action item to do that and get back with a patch or a 
> recommendation at the least.

Thank you.


Re: [ovs-dev] why the max action length is 32K in kernel?

2017-09-18 Thread Greg Rose

On 09/18/2017 01:32 PM, Ben Pfaff wrote:

On Mon, Sep 18, 2017 at 01:27:52PM -0700, Greg Rose wrote:

On 09/18/2017 11:15 AM, Ben Pfaff wrote:

On Mon, Sep 18, 2017 at 10:58:28AM -0700, Greg Rose wrote:

On 09/12/2017 08:37 PM, ychen wrote:

in function nla_alloc_flow_actions(), there is a check: if the action length is
greater than MAX_ACTIONS_BUFSIZE(32k), then the kernel datapath flow will not be
installed, and packets will be dropped.
but in function xlate_actions(), there is such clause:
if (nl_attr_oversized(ctx.odp_actions->size)) {
 /* These datapath actions are too big for a Netlink attribute, so we
  * can't hand them to the kernel directly.  dpif_execute() can execute
  * them one by one with help, so just mark the result as SLOW_ACTION to
  * prevent the flow from being installed. */
 COVERAGE_INC(xlate_actions_oversize);
 ctx.xout->slow |= SLOW_ACTION;
 }
and in function nl_attr_oversized(), the clause is like this:
return payload_size > UINT16_MAX - NLA_HDRLEN;


so we can see that in user space, max action length is almost 64K, but in 
kernel space, max action length is only 32K.
my question is: why is the max action length different? packets will be dropped
when the action length exceeds 32K, but packets can execute in the slow path
when the action length exceeds 64K?


It's a kernel limitation.

http://www.spinics.net/lists/netdev/msg431592.html


It sounds like the userspace limit, then, should also be 32 kB (or
possibly 16 kB).  I guess we should fix that.



Correct, the user space limit should be 32KB.  That's what it is in iproute2.


OVS supports Linux < 4.9 as well, so should we stick with 16 kB (or
detect the kernel version or limit somehow)?
http://www.spinics.net/lists/netdev/msg431592.html



We should research how the iproute2 package utilities detect what message size
to use, since they have to work with older kernels as well.

I'll take an action item to do that and get back with a patch or a 
recommendation at the least.

Thanks,

- Greg


Re: [ovs-dev] why the max action length is 32K in kernel?

2017-09-18 Thread Ben Pfaff
On Mon, Sep 18, 2017 at 01:27:52PM -0700, Greg Rose wrote:
> On 09/18/2017 11:15 AM, Ben Pfaff wrote:
> >On Mon, Sep 18, 2017 at 10:58:28AM -0700, Greg Rose wrote:
> >>On 09/12/2017 08:37 PM, ychen wrote:
> >>>in function nla_alloc_flow_actions(), there is a check: if the action length is
> >>>greater than MAX_ACTIONS_BUFSIZE(32k), then the kernel datapath flow will not
> >>>be installed, and packets will be dropped.
> >>>but in function xlate_actions(), there is such clause:
> >>>if (nl_attr_oversized(ctx.odp_actions->size)) {
> >>> /* These datapath actions are too big for a Netlink attribute, so 
> >>> we
> >>>  * can't hand them to the kernel directly.  dpif_execute() can 
> >>> execute
> >>>  * them one by one with help, so just mark the result as 
> >>> SLOW_ACTION to
> >>>  * prevent the flow from being installed. */
> >>> COVERAGE_INC(xlate_actions_oversize);
> >>> ctx.xout->slow |= SLOW_ACTION;
> >>> }
> >>>and in function nl_attr_oversized(), the clause is like this:
> >>>return payload_size > UINT16_MAX - NLA_HDRLEN;
> >>>
> >>>
> >>>so we can see that in user space, max action length is almost 64K, but in 
> >>>kernel space, max action length is only 32K.
> >>>my question is: why is the max action length different? packets will be dropped
> >>>when the action length exceeds 32K, but packets can execute in the slow path
> >>>when the action length exceeds 64K?
> >>
> >>It's a kernel limitation.
> >>
> >>http://www.spinics.net/lists/netdev/msg431592.html
> >
> >It sounds like the userspace limit, then, should also be 32 kB (or
> >possibly 16 kB).  I guess we should fix that.
> >
> 
> Correct, the user space limit should be 32KB.  That's what it is in iproute2.

OVS supports Linux < 4.9 as well, so should we stick with 16 kB (or
detect the kernel version or limit somehow)?
http://www.spinics.net/lists/netdev/msg431592.html


Re: [ovs-dev] why the max action length is 32K in kernel?

2017-09-18 Thread Greg Rose

On 09/18/2017 11:15 AM, Ben Pfaff wrote:

On Mon, Sep 18, 2017 at 10:58:28AM -0700, Greg Rose wrote:

On 09/12/2017 08:37 PM, ychen wrote:

in function nla_alloc_flow_actions(), there is a check: if the action length is
greater than MAX_ACTIONS_BUFSIZE(32k), then the kernel datapath flow will not be
installed, and packets will be dropped.
but in function xlate_actions(), there is such clause:
if (nl_attr_oversized(ctx.odp_actions->size)) {
 /* These datapath actions are too big for a Netlink attribute, so we
  * can't hand them to the kernel directly.  dpif_execute() can execute
  * them one by one with help, so just mark the result as SLOW_ACTION to
  * prevent the flow from being installed. */
 COVERAGE_INC(xlate_actions_oversize);
 ctx.xout->slow |= SLOW_ACTION;
 }
and in function nl_attr_oversized(), the clause is like this:
return payload_size > UINT16_MAX - NLA_HDRLEN;


so we can see that in user space, max action length is almost 64K, but in 
kernel space, max action length is only 32K.
my question is: why is the max action length different? packets will be dropped
when the action length exceeds 32K, but packets can execute in the slow path
when the action length exceeds 64K?


It's a kernel limitation.

http://www.spinics.net/lists/netdev/msg431592.html


It sounds like the userspace limit, then, should also be 32 kB (or
possibly 16 kB).  I guess we should fix that.



Correct, the user space limit should be 32KB.  That's what it is in iproute2.

Thanks,

- Greg


Re: [ovs-dev] adding dpdk ports sharing same pci address to ovs-dpdk bridge

2017-09-18 Thread Darrell Ball
Thanks for confirming, Devendra.

Adding Ciara.
There have been some offline discussions regarding this issue.


From: devendra rawat
Date: Monday, September 18, 2017 at 4:27 AM
To: Kevin Traynor
Cc: Darrell Ball, "ovs-dev@openvswitch.org", "disc...@openvswitch.org"
Subject: Re: [ovs-dev] adding dpdk ports sharing same pci address to ovs-dpdk bridge

Hi Kevin,

On Fri, Sep 8, 2017 at 12:24 AM, Kevin Traynor wrote:
On 09/07/2017 06:47 PM, Darrell Ball wrote:
> Adding disc...@openvswitch.org
>
> The related changes went into 2.7
>
>
>
> On 9/7/17, 3:51 AM, "devendra rawat" <devendra.rawat.si...@gmail.com> (via
> ovs-dev-boun...@openvswitch.org) wrote:
>
> Hi,
>
> I have compiled and built ovs-dpdk using DPDK v17.08 and OVS v2.8.0. The
> NIC that I am using is Mellanox ConnectX-3 Pro, which is a dual port 10G
> NIC. The problem with this NIC is that it provides only one PCI address 
> for
> both the 10G ports.
>
> So when I am trying to add the two DPDK ports to my br0 bridge
>
> # ovs-vsctl --no-wait add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
> options:dpdk-devargs=0002:01:00.0
>
> # ovs-vsctl --no-wait add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk
> options:dpdk-devargs=0002:01:00.0
>

Were you able to confirm those addresses by running ./dpdk-devbind.py -s
in your /tools dir?

On running dpdk-devbind.py --status, I can see the ConnectX-3 Pro NIC having
only one PCI address.

Network devices using DPDK-compatible driver
============================================
<none>

Network devices using kernel driver
===
0002:01:00.0 'MT27520 Family [ConnectX-3 Pro] 1007' if=enP4p1s0d1,enP4p1s0 
drv=mlx4_core unused=
0006:01:00.0 'I210 Gigabit Network Connection 1533' if=enP6p1s0 drv=igb unused= 
*Active*


> The port dpdk1 is added successfully and able to transfer data, but adding
> dpdk0 to br0 fails:
>
> 2017-09-06T14:19:20Z|00045|netdev_dpdk|INFO|Port 0: e4:1d:2d:4f:78:60
> 2017-09-06T14:19:20Z|00046|bridge|INFO|bridge br0: added interface dpdk1 
> on
> port 1
> 2017-09-06T14:19:20Z|00047|bridge|INFO|bridge br0: added interface br0 on
> port 65534
> 2017-09-06T14:19:20Z|00048|dpif_netlink|WARN|Generic Netlink family
> 'ovs_datapath' does not exist. The Open vSwitch kernel module is probably
> not loaded.
> 2017-09-06T14:19:20Z|00049|netdev_dpdk|WARN|'dpdk0' is trying to use 
> device
> '0002:01:00.0' which is already in use by 'dpdk1'
> 2017-09-06T14:19:20Z|00050|netdev|WARN|dpdk0: could not set configuration
> (Address already in use)
> 2017-09-06T14:19:20Z|00051|bridge|INFO|bridge br0: using datapath ID
> e41d2d4f7860
>
>
> With OVS v2.6.1 I never had this problem as dpdk-devargs was not mandatory
> and just specifying port name was enough to add that port to bridge.
>
> Is there a way to add port both ports to bridge ?
>
> Thanks,
> Devendra


[ovs-dev] Multiple Subnets connecting LXC containers over Single GRE Tunnel conneting Two Physical Hosts Using Patch Ports

2017-09-18 Thread Gilbert Standen
Hi, can anyone help with this problem? I summarized it here at my blog:
https://sites.google.com/site/nandydandyoracle/openvswitch-ovs/networking-problem-1
Please let me know if you need more information, or propose an alternative
solution. Thanks!!  Gil


Re: [ovs-dev] why the max action length is 32K in kernel?

2017-09-18 Thread Ben Pfaff
On Mon, Sep 18, 2017 at 10:58:28AM -0700, Greg Rose wrote:
> On 09/12/2017 08:37 PM, ychen wrote:
> >in function nla_alloc_flow_actions(), there is a check: if the action length is
> >greater than MAX_ACTIONS_BUFSIZE(32k), then the kernel datapath flow will not be
> >installed, and packets will be dropped.
> >but in function xlate_actions(), there is such clause:
> >if (nl_attr_oversized(ctx.odp_actions->size)) {
> > /* These datapath actions are too big for a Netlink attribute, so we
> >  * can't hand them to the kernel directly.  dpif_execute() can 
> > execute
> >  * them one by one with help, so just mark the result as 
> > SLOW_ACTION to
> >  * prevent the flow from being installed. */
> > COVERAGE_INC(xlate_actions_oversize);
> > ctx.xout->slow |= SLOW_ACTION;
> > }
> >and in function nl_attr_oversized(), the clause is like this:
> >return payload_size > UINT16_MAX - NLA_HDRLEN;
> >
> >
> >so we can see that in user space, max action length is almost 64K, but in 
> >kernel space, max action length is only 32K.
> >my question is: why is the max action length different? packets will be dropped
> >when the action length exceeds 32K, but packets can execute in the slow path
> >when the action length exceeds 64K?
> 
> It's a kernel limitation.
> 
> http://www.spinics.net/lists/netdev/msg431592.html

It sounds like the userspace limit, then, should also be 32 kB (or
possibly 16 kB).  I guess we should fix that.
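
For concreteness, a minimal sketch of what aligning the userspace check with
the kernel's cap might look like, assuming the kernel's 32 kB
MAX_ACTIONS_BUFSIZE (this is an illustration, not a proposed patch):

    #define MAX_ACTIONS_BUFSIZE (32 * 1024)   /* Kernel datapath's cap. */

    /* Hypothetical variant of nl_attr_oversized() that mirrors the kernel's
     * MAX_ACTIONS_BUFSIZE instead of the Netlink wire-format maximum
     * (UINT16_MAX - NLA_HDRLEN), so that userspace would mark a flow
     * SLOW_ACTION before the kernel would reject it. */
    static bool
    nl_attr_oversized_for_kernel(size_t payload_size)
    {
        return payload_size > MAX_ACTIONS_BUFSIZE - NLA_HDRLEN;
    }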


Re: [ovs-dev] [PATCH 2/2] dpctl: init CT entry variable.

2017-09-18 Thread Greg Rose

On 09/13/2017 05:36 AM, antonio.fische...@intel.com wrote:

ct_dpif_entry_uninit could potentially be called even if
ct_dpif_dump_next failed. As ct_dpif_entry_uninit receives
a pointer to a CT entry - and just checks it is not null -
it's safer to init to zero any instantiated ct_dpif_entry
variable before its usage.

Signed-off-by: Antonio Fischetti 
---
  lib/dpctl.c | 3 +++
  1 file changed, 3 insertions(+)

diff --git a/lib/dpctl.c b/lib/dpctl.c
index 86d0f90..77d4e58 100644
--- a/lib/dpctl.c
+++ b/lib/dpctl.c
@@ -1287,6 +1287,7 @@ dpctl_dump_conntrack(int argc, const char *argv[],
  return error;
  }
  
+memset(&cte, 0, sizeof(cte));

while (!(ret = ct_dpif_dump_next(dump, &cte))) {
  struct ds s = DS_EMPTY_INITIALIZER;
  
@@ -1392,6 +1393,7 @@ dpctl_ct_stats_show(int argc, const char *argv[],

  return error;
  }
  
+memset(&cte, 0, sizeof(cte));

  int tot_conn = 0;
while (!(ret = ct_dpif_dump_next(dump, &cte))) {
ct_dpif_entry_uninit(&cte);
@@ -1532,6 +1534,7 @@ dpctl_ct_bkts(int argc, const char *argv[],
   return 0;
  }
  
+memset(&cte, 0, sizeof(cte));

  dpctl_print(dpctl_p, "Total Buckets: %d\n", tot_bkts);
  
  int tot_conn = 0;




Not in the hotpath so OK to be extra careful here.

Reviewed-by: Greg Rose 
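
For illustration, the failure mode the memset guards against looks roughly
like this; a minimal, self-contained sketch with hypothetical types, not the
actual ct_dpif code:

    #include <stdlib.h>
    #include <string.h>

    struct entry_sketch {
        char *helper_name;          /* Heap-allocated once populated. */
    };

    /* Like ct_dpif_entry_uninit(), this only checks for non-NULL, so it
     * would free stack garbage if the struct were never filled in. */
    static void
    entry_uninit_sketch(struct entry_sketch *e)
    {
        if (e) {
            free(e->helper_name);   /* Safe only if NULL or valid. */
        }
    }

    int
    main(void)
    {
        struct entry_sketch e;
        memset(&e, 0, sizeof e);    /* The fix: free(NULL) is a no-op. */
        /* ... a dump loop might fail before ever populating 'e' ... */
        entry_uninit_sketch(&e);
        return 0;
    }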



Re: [ovs-dev] [PATCH 1/2] dpctl: manage ret value when dumping CT entries.

2017-09-18 Thread Greg Rose

On 09/13/2017 05:36 AM, antonio.fische...@intel.com wrote:

Manage error value returned by ct_dpif_dump_next.

Signed-off-by: Antonio Fischetti 
---
  lib/dpctl.c | 28 +---
  1 file changed, 25 insertions(+), 3 deletions(-)

diff --git a/lib/dpctl.c b/lib/dpctl.c
index 8951d6e..86d0f90 100644
--- a/lib/dpctl.c
+++ b/lib/dpctl.c
@@ -1263,6 +1263,7 @@ dpctl_dump_conntrack(int argc, const char *argv[],
  struct dpif *dpif;
  char *name;
  int error;
+int ret;
  
if (argc > 1 && ovs_scan(argv[argc - 1], "zone=%"SCNu16, &zone)) {

pzone = &zone;
@@ -1286,7 +1287,7 @@ dpctl_dump_conntrack(int argc, const char *argv[],
  return error;
  }
  
-while (!ct_dpif_dump_next(dump, &cte)) {

+while (!(ret = ct_dpif_dump_next(dump, &cte))) {
  struct ds s = DS_EMPTY_INITIALIZER;
  
ct_dpif_format_entry(&cte, &s, dpctl_p->verbosity,

@@ -1296,6 +1297,13 @@ dpctl_dump_conntrack(int argc, const char *argv[],
dpctl_print(dpctl_p, "%s\n", ds_cstr(&s));
ds_destroy(&s);
  }
+if (ret && ret != EOF) {
+dpctl_error(dpctl_p, ret, "dumping conntrack entry");
+ct_dpif_dump_done(dump);
+dpif_close(dpif);
+return ret;
+}
+
  ct_dpif_dump_done(dump);
  dpif_close(dpif);
  return error;
@@ -1348,6 +1356,7 @@ dpctl_ct_stats_show(int argc, const char *argv[],
  int proto_stats[CT_STATS_MAX];
  int tcp_conn_per_states[CT_DPIF_TCPS_MAX_NUM];
  int error;
+int ret;
  
  while (argc > 1 && lastargc != argc) {

  lastargc = argc;
@@ -1384,7 +1393,7 @@ dpctl_ct_stats_show(int argc, const char *argv[],
  }
  
  int tot_conn = 0;

-while (!ct_dpif_dump_next(dump, &cte)) {
+while (!(ret = ct_dpif_dump_next(dump, &cte))) {
ct_dpif_entry_uninit(&cte);
  tot_conn++;
  switch (cte.tuple_orig.ip_proto) {
@@ -1425,6 +1434,12 @@ dpctl_ct_stats_show(int argc, const char *argv[],
  break;
  }
  }
+if (ret && ret != EOF) {
+dpctl_error(dpctl_p, ret, "dumping conntrack entry");
+ct_dpif_dump_done(dump);
+dpif_close(dpif);
+return ret;
+}
  
  dpctl_print(dpctl_p, "Connections Stats:\nTotal: %d\n", tot_conn);

  if (proto_stats[CT_STATS_TCP]) {
@@ -1482,6 +1497,7 @@ dpctl_ct_bkts(int argc, const char *argv[],
  uint16_t *pzone = NULL;
  int tot_bkts = 0;
  int error;
+int ret;
  
  if (argc > 1 && !strncmp(argv[argc - 1], CT_BKTS_GT, strlen(CT_BKTS_GT))) {

  if (ovs_scan(argv[argc - 1], CT_BKTS_GT"%"SCNu16, )) {
@@ -1521,7 +1537,7 @@ dpctl_ct_bkts(int argc, const char *argv[],
  int tot_conn = 0;
  uint32_t *conn_per_bkts = xzalloc(tot_bkts * sizeof(uint32_t));
  
-while (!ct_dpif_dump_next(dump, &cte)) {

+while (!(ret = ct_dpif_dump_next(dump, &cte))) {
ct_dpif_entry_uninit(&cte);
  tot_conn++;
  if (tot_bkts > 0) {
@@ -1533,6 +1549,12 @@ dpctl_ct_bkts(int argc, const char *argv[],
  }
  }
  }
+if (ret && ret != EOF) {
+dpctl_error(dpctl_p, ret, "dumping conntrack entry");
+ct_dpif_dump_done(dump);
+dpif_close(dpif);
+return ret;
+}
  
  dpctl_print(dpctl_p, "Current Connections: %d\n", tot_conn);

  dpctl_print(dpctl_p, "\n");



This looks fine to me but I don't know of any way to test it - I guess we'd 
need to force a dump error.

Reviewed-by: Greg Rose 



Re: [ovs-dev] why the max action length is 32K in kernel?

2017-09-18 Thread Greg Rose

On 09/12/2017 08:37 PM, ychen wrote:

in function nla_alloc_flow_actions(), there is a check: if the action length is
greater than MAX_ACTIONS_BUFSIZE(32k), then the kernel datapath flow will not be
installed, and packets will be dropped.
but in function xlate_actions(), there is such clause:
if (nl_attr_oversized(ctx.odp_actions->size)) {
 /* These datapath actions are too big for a Netlink attribute, so we
  * can't hand them to the kernel directly.  dpif_execute() can execute
  * them one by one with help, so just mark the result as SLOW_ACTION to
  * prevent the flow from being installed. */
 COVERAGE_INC(xlate_actions_oversize);
 ctx.xout->slow |= SLOW_ACTION;
 }
and in function nl_attr_oversized(), the clause is like this:
return payload_size > UINT16_MAX - NLA_HDRLEN;


so we can see that in user space, max action length is almost 64K, but in 
kernel space, max action length is only 32K.
my question is: why is the max action length different? packets will be dropped
when the action length exceeds 32K, but packets can execute in the slow path
when the action length exceeds 64K?


It's a kernel limitation.

http://www.spinics.net/lists/netdev/msg431592.html

- Greg




Re: [ovs-dev] [PATCH] ovn: Discard flows for non-local ports.

2017-09-18 Thread Russell Bryant
On Mon, Sep 18, 2017 at 11:24 AM, Russell Bryant  wrote:
> Discard some OpenFlow flows that will never match.  This includes
> flows that match on a non-local inport in the ingress pipeline or a
> non-local outport in the egress pipeline of a logical switch.
>
> This is most useful for networks with a large number of ports or ACLs
> that use large address sets.
>
> Signed-off-by: Russell Bryant 
> Tested-by: Miguel Angel Ajo Pelayo 
> ---
>  ovn/controller/binding.c| 29 +
>  ovn/controller/binding.h|  2 +-
>  ovn/controller/lflow.c  | 33 +++--
>  ovn/controller/lflow.h  |  3 ++-
>  ovn/controller/ovn-controller.c |  9 +++--
>  5 files changed, 62 insertions(+), 14 deletions(-)
>
> diff --git a/ovn/controller/binding.c b/ovn/controller/binding.c
> index ca1d43395..3532c6014 100644
> --- a/ovn/controller/binding.c
> +++ b/ovn/controller/binding.c
> @@ -371,6 +371,17 @@ setup_qos(const char *egress_iface, struct hmap 
> *queue_map)
>  }
>
>  static void
> +update_local_lport_ids(struct sset *local_lport_ids,
> +   const struct sbrec_port_binding *binding_rec)
> +{
> +char buf[16];
> +snprintf(buf, sizeof(buf), "%"PRId64"_%"PRId64,
> + binding_rec->datapath->tunnel_key,
> + binding_rec->tunnel_key);
> +sset_add(local_lport_ids, buf);
> +}
> +
> +static void
>  consider_local_datapath(struct controller_ctx *ctx,
>  const struct chassis_index *chassis_index,
>  struct sset *active_tunnels,
> @@ -379,7 +390,8 @@ consider_local_datapath(struct controller_ctx *ctx,
>  struct hmap *qos_map,
>  struct hmap *local_datapaths,
>  struct shash *lport_to_iface,
> -struct sset *local_lports)
> +struct sset *local_lports,
> +struct sset *local_lport_ids)
>  {
>  const struct ovsrec_interface *iface_rec
>  = shash_find_data(lport_to_iface, binding_rec->logical_port);
> @@ -399,7 +411,7 @@ consider_local_datapath(struct controller_ctx *ctx,
>  get_qos_params(binding_rec, qos_map);
>  }
>  /* This port is in our chassis unless it is a localport. */
> -   if (strcmp(binding_rec->type, "localport")) {
> +if (strcmp(binding_rec->type, "localport")) {
>  our_chassis = true;
>  }
>  } else if (!strcmp(binding_rec->type, "l2gateway")) {
> @@ -439,6 +451,14 @@ consider_local_datapath(struct controller_ctx *ctx,
>  our_chassis = false;
>  }
>
> +if (our_chassis
> +|| !strcmp(binding_rec->type, "patch")
> +|| !strcmp(binding_rec->type, "localport")
> +|| !strcmp(binding_rec->type, "vtep")
> +|| !strcmp(binding_rec->type, "localnet")) {
> +update_local_lport_ids(local_lport_ids, binding_rec);
> +}
> +
>  if (ctx->ovnsb_idl_txn) {
> +const char *vif_chassis = smap_get(&binding_rec->options,
> "requested-chassis");
> @@ -508,7 +528,8 @@ binding_run(struct controller_ctx *ctx, const struct 
> ovsrec_bridge *br_int,
>  const struct sbrec_chassis *chassis_rec,
>  const struct chassis_index *chassis_index,
>  struct sset *active_tunnels,
> -struct hmap *local_datapaths, struct sset *local_lports)
> +struct hmap *local_datapaths, struct sset *local_lports,
> +struct sset *local_lport_ids)
>  {
>  if (!chassis_rec) {
>  return;
> @@ -533,7 +554,7 @@ binding_run(struct controller_ctx *ctx, const struct 
> ovsrec_bridge *br_int,
>  active_tunnels, chassis_rec, binding_rec,
>  sset_is_empty(&egress_ifaces) ? NULL :
> &qos_map, local_datapaths, &lport_to_iface,
> -local_lports);
> +local_lports, local_lport_ids);
>
>  }
>
> diff --git a/ovn/controller/binding.h b/ovn/controller/binding.h
> index c78f8d932..89fc2ec8f 100644
> --- a/ovn/controller/binding.h
> +++ b/ovn/controller/binding.h
> @@ -32,7 +32,7 @@ void binding_run(struct controller_ctx *, const struct 
> ovsrec_bridge *br_int,
>   const struct sbrec_chassis *,
>   const struct chassis_index *,
>   struct sset *active_tunnels, struct hmap *local_datapaths,
> - struct sset *all_lports);
> + struct sset *local_lports, struct sset *local_lport_ids);
>  bool binding_cleanup(struct controller_ctx *, const struct sbrec_chassis *);
>
>  #endif /* ovn/binding.h */
> diff --git a/ovn/controller/lflow.c b/ovn/controller/lflow.c
> index 6d9f02cb2..c1f5de2ab 100644
> --- a/ovn/controller/lflow.c
> +++ 

[ovs-dev] [PATCH] ovn: Discard flows for non-local ports.

2017-09-18 Thread Russell Bryant
Discard some OpenFlow flows that will never match.  This includes
flows that match on a non-local inport in the ingress pipeline or a
non-local outport in the egress pipeline of a logical switch.

This is most useful for networks with a large number of ports or ACLs
that use large address sets.

Signed-off-by: Russell Bryant 
Tested-by: Miguel Angel Ajo Pelayo 
---
 ovn/controller/binding.c| 29 +
 ovn/controller/binding.h|  2 +-
 ovn/controller/lflow.c  | 33 +++--
 ovn/controller/lflow.h  |  3 ++-
 ovn/controller/ovn-controller.c |  9 +++--
 5 files changed, 62 insertions(+), 14 deletions(-)

diff --git a/ovn/controller/binding.c b/ovn/controller/binding.c
index ca1d43395..3532c6014 100644
--- a/ovn/controller/binding.c
+++ b/ovn/controller/binding.c
@@ -371,6 +371,17 @@ setup_qos(const char *egress_iface, struct hmap *queue_map)
 }
 
 static void
+update_local_lport_ids(struct sset *local_lport_ids,
+   const struct sbrec_port_binding *binding_rec)
+{
+char buf[16];
+snprintf(buf, sizeof(buf), "%"PRId64"_%"PRId64,
+ binding_rec->datapath->tunnel_key,
+ binding_rec->tunnel_key);
+sset_add(local_lport_ids, buf);
+}
+
+static void
 consider_local_datapath(struct controller_ctx *ctx,
 const struct chassis_index *chassis_index,
 struct sset *active_tunnels,
@@ -379,7 +390,8 @@ consider_local_datapath(struct controller_ctx *ctx,
 struct hmap *qos_map,
 struct hmap *local_datapaths,
 struct shash *lport_to_iface,
-struct sset *local_lports)
+struct sset *local_lports,
+struct sset *local_lport_ids)
 {
 const struct ovsrec_interface *iface_rec
 = shash_find_data(lport_to_iface, binding_rec->logical_port);
@@ -399,7 +411,7 @@ consider_local_datapath(struct controller_ctx *ctx,
 get_qos_params(binding_rec, qos_map);
 }
 /* This port is in our chassis unless it is a localport. */
-   if (strcmp(binding_rec->type, "localport")) {
+if (strcmp(binding_rec->type, "localport")) {
 our_chassis = true;
 }
 } else if (!strcmp(binding_rec->type, "l2gateway")) {
@@ -439,6 +451,14 @@ consider_local_datapath(struct controller_ctx *ctx,
 our_chassis = false;
 }
 
+if (our_chassis
+|| !strcmp(binding_rec->type, "patch")
+|| !strcmp(binding_rec->type, "localport")
+|| !strcmp(binding_rec->type, "vtep")
+|| !strcmp(binding_rec->type, "localnet")) {
+update_local_lport_ids(local_lport_ids, binding_rec);
+}
+
 if (ctx->ovnsb_idl_txn) {
const char *vif_chassis = smap_get(&binding_rec->options,
"requested-chassis");
@@ -508,7 +528,8 @@ binding_run(struct controller_ctx *ctx, const struct 
ovsrec_bridge *br_int,
 const struct sbrec_chassis *chassis_rec,
 const struct chassis_index *chassis_index,
 struct sset *active_tunnels,
-struct hmap *local_datapaths, struct sset *local_lports)
+struct hmap *local_datapaths, struct sset *local_lports,
+struct sset *local_lport_ids)
 {
 if (!chassis_rec) {
 return;
@@ -533,7 +554,7 @@ binding_run(struct controller_ctx *ctx, const struct 
ovsrec_bridge *br_int,
 active_tunnels, chassis_rec, binding_rec,
sset_is_empty(&egress_ifaces) ? NULL :
&qos_map, local_datapaths, &lport_to_iface,
-local_lports);
+local_lports, local_lport_ids);
 
 }
 
diff --git a/ovn/controller/binding.h b/ovn/controller/binding.h
index c78f8d932..89fc2ec8f 100644
--- a/ovn/controller/binding.h
+++ b/ovn/controller/binding.h
@@ -32,7 +32,7 @@ void binding_run(struct controller_ctx *, const struct 
ovsrec_bridge *br_int,
  const struct sbrec_chassis *,
  const struct chassis_index *,
  struct sset *active_tunnels, struct hmap *local_datapaths,
- struct sset *all_lports);
+ struct sset *local_lports, struct sset *local_lport_ids);
 bool binding_cleanup(struct controller_ctx *, const struct sbrec_chassis *);
 
 #endif /* ovn/binding.h */
diff --git a/ovn/controller/lflow.c b/ovn/controller/lflow.c
index 6d9f02cb2..c1f5de2ab 100644
--- a/ovn/controller/lflow.c
+++ b/ovn/controller/lflow.c
@@ -68,7 +68,8 @@ static void consider_logical_flow(struct controller_ctx *ctx,
   uint32_t *conj_id_ofs,
   const struct shash *addr_sets,
   struct hmap *flow_table,
-

Re: [ovs-dev] MTU in i40e dpdk driver

2017-09-18 Thread Kavanagh, Mark B

>From: ovs-dev-boun...@openvswitch.org [mailto:ovs-dev-boun...@openvswitch.org]
>On Behalf Of Kavanagh, Mark B
>Sent: Monday, September 18, 2017 3:36 PM
>To: Nitin Katiyar ; ovs-dev@openvswitch.org
>Subject: Re: [ovs-dev] MTU in i40e dpdk driver
>
>>From: Nitin Katiyar [mailto:nitin.kati...@ericsson.com]
>>Sent: Monday, September 18, 2017 3:20 PM
>>To: Kavanagh, Mark B ; ovs-dev@openvswitch.org
>>Subject: RE: [ovs-dev] MTU in i40e dpdk driver
>>
>>Hi,
>>Yes, the tag is configured for the VHU port, so traffic from the VM would be
>>tagged with a VLAN.  Why is it different from the 10G (ixgbe) driver? It
>>should allow packets matching the configured MTU. Is this expected behavior
>>with the i40e driver?
>
>In this instance, the behavior is determined by OvS, and not the DPDK driver;
>see the code snippets below from netdev-dpdk.c:
>
>
>#define MTU_TO_FRAME_LEN(mtu)   ((mtu) + ETHER_HDR_LEN + ETHER_CRC_LEN)
># As you can see, the VLAN header is not accounted for as part of a packet's
>overhead
>

Addendum:

Having looked at the driver code, it seems that both the ixgbe and i40e
drivers do indeed make some allowances for VLAN-tagged packets.
However, there are some differences in how the Rx buffers are sized:

int __attribute__((cold))
ixgbe_dev_rx_init(struct rte_eth_dev *dev)
{
...
        /*
         * Configure the RX buffer size in the BSIZEPACKET field of
         * the SRRCTL register of the queue.
         * The value is in 1 KB resolution. Valid values can be from
         * 1 KB to 16 KB.
         */
        buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
                              RTE_PKTMBUF_HEADROOM);
...
        buf_size = (uint16_t) ((srrctl & IXGBE_SRRCTL_BSIZEPKT_MASK) <<
                               IXGBE_SRRCTL_BSIZEPKT_SHIFT);

        /* It adds dual VLAN length for supporting dual VLAN */
        if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
                2 * IXGBE_VLAN_TAG_SIZE > buf_size)
            dev->data->scattered_rx = 1;
    }
...
}


/* Init the RX queue in hardware */
int
i40e_rx_queue_init(struct i40e_rx_queue *rxq)
{
...
    buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
                          RTE_PKTMBUF_HEADROOM);

    /* Check if scattered RX needs to be used. */
    if ((rxq->max_pkt_len + 2 * I40E_VLAN_TAG_SIZE) > buf_size) {
        dev_data->scattered_rx = 1;
    }
...
}

If VLAN-tagged packets are accepted by one NIC but not the other, then it most
likely does point to an inconsistency in the DPDK drivers.
I'd advise you to post this issue to the DPDK dev mailing list, and kill the
thread on this list, since it seems to be a DPDK-specific issue.

- Mark

>
>...
>
>
>
>   static int
>   netdev_dpdk_mempool_configure(struct netdev_dpdk *dev)
>       OVS_REQUIRES(dpdk_mutex)
>       OVS_REQUIRES(dev->mutex)
>   {
>   ...
>       dpdk_mp_put(dev->dpdk_mp);
>       dev->dpdk_mp = mp;
>       dev->mtu = dev->requested_mtu;
>       dev->socket_id = dev->requested_socket_id;
>       dev->max_packet_len = MTU_TO_FRAME_LEN(dev->mtu);
>       # This line uses the MTU_TO_FRAME_LEN macro to set the upper size
>       # limit on packets that the NIC will accept.
>       # The NIC is subsequently configured with this value.
>   ...
>   }
>
>
>   ...
>
>
>   static int
>   dpdk_eth_dev_queue_setup(struct netdev_dpdk *dev, int n_rxq, int n_txq)
>   {
>   ...
>       if (dev->mtu > ETHER_MTU) {
>           conf.rxmode.jumbo_frame = 1;
>           conf.rxmode.max_rx_pkt_len = dev->max_packet_len;
>           # max Rx packet length is set in NIC's config. object.
>   ...
>       diag = rte_eth_dev_configure(dev->port_id, n_rxq, n_txq, &conf);
>       # NIC's max Rx packet length is actually set.
>   ...
>   }
>
>
>Hope this helps,
>Mark
>
>>
>>Thanks,
>>Nitin
>>
>>-Original Message-
>>From: Kavanagh, Mark B [mailto:mark.b.kavan...@intel.com]
>>Sent: Monday, September 18, 2017 7:44 PM
>>To: Nitin Katiyar ; ovs-dev@openvswitch.org
>>Subject: RE: [ovs-dev] MTU in i40e dpdk driver
>>
>>>From: Nitin Katiyar [mailto:nitin.kati...@ericsson.com]
>>>Sent: Monday, September 18, 2017 3:02 PM
>>>To: Kavanagh, Mark B ;
>>>ovs-dev@openvswitch.org
>>>Subject: RE: [ovs-dev] MTU in i40e dpdk driver
>>>
>>>Hi,
>>>It is set to 2140.
>>
>>That should accommodate a max packet length of 2158 (i.e. MTU + ETHER_HDR
>>(14B) + ETHER_CRC (4B)).
>>
>>Is the VM inside a VLAN by any chance? The presence of a VLAN tag would
>>account for the additional 4B.
>>
>>-Mark
>>
>>>
>>>compute-0-4:~# ovs-vsctl get Interface dpdk1 mtu
>>>2140
>>>
>>>Regards,
>>>Nitin
>>>
>>>-Original Message-
>>>From: Kavanagh, Mark B 

Re: [ovs-dev] data path flow addition error in ovs 2.5

2017-09-18 Thread Ben Pfaff
On Mon, Sep 18, 2017 at 01:51:58PM +0530, Prasannaa Vengatesan wrote:
> I am using the following version of OVS (2.5.1) with datapath in kernel
> mode.
> 
> root@localhost:~# ovs-vsctl --version
> ovs-vsctl (Open vSwitch) 2.5.1
> 
> 
> When I try to add a flow in the data path I get error.

Why are you trying to add a flow to the datapath?  That is an
implementation detail, not meant for direct use by users.  Use ovs-ofctl
instead.


Re: [ovs-dev] [PATCH V2 3/4] tc: Add header rewrite using tc pedit action

2017-09-18 Thread Simon Horman
On Mon, Sep 18, 2017 at 07:16:03AM +0300, Roi Dayan wrote:
> From: Paul Blakey 
> 
> To be later used to implement ovs action set offloading.
> 
> Signed-off-by: Paul Blakey 
> Reviewed-by: Roi Dayan 
> ---
>  lib/tc.c | 372 
> ++-
>  lib/tc.h |  16 +++
>  2 files changed, 385 insertions(+), 3 deletions(-)
> 
> diff --git a/lib/tc.c b/lib/tc.c
> index c9cada2..743b2ee 100644
> --- a/lib/tc.c
> +++ b/lib/tc.c
> @@ -21,8 +21,10 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  #include 
>  #include 
> +#include 
>  #include 
>  #include 
>  #include 
> @@ -33,11 +35,14 @@
>  #include "netlink-socket.h"
>  #include "netlink.h"
>  #include "openvswitch/ofpbuf.h"
> +#include "openvswitch/util.h"
>  #include "openvswitch/vlog.h"
>  #include "packets.h"
>  #include "timeval.h"
>  #include "unaligned.h"
>  
> +#define MAX_PEDIT_OFFSETS 8

Why 8?

> +
>  VLOG_DEFINE_THIS_MODULE(tc);
>  
>  static struct vlog_rate_limit error_rl = VLOG_RATE_LIMIT_INIT(60, 5);
> @@ -50,6 +55,82 @@ enum tc_offload_policy {
>  
>  static enum tc_offload_policy tc_policy = TC_POLICY_NONE;
>  
> +struct tc_pedit_key_ex {
> +enum pedit_header_type htype;
> +enum pedit_cmd cmd;
> +};
> +
> +struct flower_key_to_pedit {
> +enum pedit_header_type htype;
> +int flower_offset;
> +int offset;
> +int size;
> +};
> +
> +static struct flower_key_to_pedit flower_pedit_map[] = {
> +{
> +TCA_PEDIT_KEY_EX_HDR_TYPE_IP4,
> +12,
> +offsetof(struct tc_flower_key, ipv4.ipv4_src),
> +MEMBER_SIZEOF(struct tc_flower_key, ipv4.ipv4_src)
> +}, {
> +TCA_PEDIT_KEY_EX_HDR_TYPE_IP4,
> +16,
> +offsetof(struct tc_flower_key, ipv4.ipv4_dst),
> +MEMBER_SIZEOF(struct tc_flower_key, ipv4.ipv4_dst)
> +}, {
> +TCA_PEDIT_KEY_EX_HDR_TYPE_IP4,
> +8,
> +offsetof(struct tc_flower_key, ipv4.rewrite_ttl),
> +MEMBER_SIZEOF(struct tc_flower_key, ipv4.rewrite_ttl)
> +}, {
> +TCA_PEDIT_KEY_EX_HDR_TYPE_IP6,
> +8,
> +offsetof(struct tc_flower_key, ipv6.ipv6_src),
> +MEMBER_SIZEOF(struct tc_flower_key, ipv6.ipv6_src)
> +}, {
> +TCA_PEDIT_KEY_EX_HDR_TYPE_IP6,
> +24,
> +offsetof(struct tc_flower_key, ipv6.ipv6_dst),
> +MEMBER_SIZEOF(struct tc_flower_key, ipv6.ipv6_dst)
> +}, {
> +TCA_PEDIT_KEY_EX_HDR_TYPE_ETH,
> +6,
> +offsetof(struct tc_flower_key, src_mac),
> +MEMBER_SIZEOF(struct tc_flower_key, src_mac)
> +}, {
> +TCA_PEDIT_KEY_EX_HDR_TYPE_ETH,
> +0,
> +offsetof(struct tc_flower_key, dst_mac),
> +MEMBER_SIZEOF(struct tc_flower_key, dst_mac)
> +}, {
> +TCA_PEDIT_KEY_EX_HDR_TYPE_ETH,
> +12,
> +offsetof(struct tc_flower_key, eth_type),
> +MEMBER_SIZEOF(struct tc_flower_key, eth_type)
> +}, {
> +TCA_PEDIT_KEY_EX_HDR_TYPE_TCP,
> +0,
> +offsetof(struct tc_flower_key, tcp_src),
> +MEMBER_SIZEOF(struct tc_flower_key, tcp_src)
> +}, {
> +TCA_PEDIT_KEY_EX_HDR_TYPE_TCP,
> +2,
> +offsetof(struct tc_flower_key, tcp_dst),
> +MEMBER_SIZEOF(struct tc_flower_key, tcp_dst)
> +}, {
> +TCA_PEDIT_KEY_EX_HDR_TYPE_UDP,
> +0,
> +offsetof(struct tc_flower_key, udp_src),
> +MEMBER_SIZEOF(struct tc_flower_key, udp_src)
> +}, {
> +TCA_PEDIT_KEY_EX_HDR_TYPE_UDP,
> +2,
> +offsetof(struct tc_flower_key, udp_dst),
> +MEMBER_SIZEOF(struct tc_flower_key, udp_dst)
> +},
> +};
> +
>  struct tcmsg *
>  tc_make_request(int ifindex, int type, unsigned int flags,
>  struct ofpbuf *request)
> @@ -365,6 +446,96 @@ nl_parse_flower_ip(struct nlattr **attrs, struct 
> tc_flower *flower) {
>  }
>  }
>  
> +static const struct nl_policy pedit_policy[] = {
> +[TCA_PEDIT_PARMS_EX] = { .type = NL_A_UNSPEC,
> + .min_len = sizeof(struct tc_pedit),
> + .optional = false, },
> +[TCA_PEDIT_KEYS_EX]   = { .type = NL_A_NESTED,
> +  .optional = false, },
> +};
> +
> +static int
> +nl_parse_act_pedit(struct nlattr *options, struct tc_flower *flower)
> +{
> +struct nlattr *pe_attrs[ARRAY_SIZE(pedit_policy)];
> +const struct tc_pedit *pe;
> +const struct tc_pedit_key *keys;
> +const struct nlattr *nla, *keys_ex, *ex_type;
> +const void *keys_attr;
> +char *rewrite_key = (void *) &flower->rewrite.key;
> +char *rewrite_mask = (void *) &flower->rewrite.mask;
> +size_t keys_ex_size, left;
> +int type, i = 0;
> +
> +if (!nl_parse_nested(options, pedit_policy, pe_attrs,
> + ARRAY_SIZE(pedit_policy))) {
> +VLOG_ERR_RL(&error_rl, "failed to parse pedit action options");
> +  

Re: [ovs-dev] MTU in i40e dpdk driver

2017-09-18 Thread Kavanagh, Mark B
>From: Nitin Katiyar [mailto:nitin.kati...@ericsson.com]
>Sent: Monday, September 18, 2017 3:20 PM
>To: Kavanagh, Mark B ; ovs-dev@openvswitch.org
>Subject: RE: [ovs-dev] MTU in i40e dpdk driver
>
>Hi,
>Yes, the tag is configured for the VHU port, so traffic from the VM would be
>tagged with a VLAN.  Why is it different from the 10G (ixgbe) driver? It
>should allow packets matching the configured MTU. Is this expected behavior
>with the i40e driver?

In this instance, the behavior is determined by OvS, and not the DPDK driver; 
see the code snippets below from netdev-dpdk.c:


#define MTU_TO_FRAME_LEN(mtu)   ((mtu) + ETHER_HDR_LEN + ETHER_CRC_LEN)
# As you can see, the VLAN header is not accounted for as part of a packet's 
overhead
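
As an aside, a VLAN-aware variant would look roughly like this; a hypothetical
sketch, not what netdev-dpdk.c actually does:

    #define VLAN_TAG_LEN 4      /* One 802.1Q tag. */

    /* Hypothetical macro that also budgets for a single VLAN tag on top of
     * the Ethernet header and CRC, which would admit the 4 extra bytes
     * observed in this thread. */
    #define MTU_TO_FRAME_LEN_VLAN(mtu) \
        ((mtu) + ETHER_HDR_LEN + VLAN_TAG_LEN + ETHER_CRC_LEN)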


...



static int
netdev_dpdk_mempool_configure(struct netdev_dpdk *dev)
    OVS_REQUIRES(dpdk_mutex)
    OVS_REQUIRES(dev->mutex)
{
...
    dpdk_mp_put(dev->dpdk_mp);
    dev->dpdk_mp = mp;
    dev->mtu = dev->requested_mtu;
    dev->socket_id = dev->requested_socket_id;
    dev->max_packet_len = MTU_TO_FRAME_LEN(dev->mtu);
    # This line uses the MTU_TO_FRAME_LEN macro to set the upper size
    # limit on packets that the NIC will accept.
    # The NIC is subsequently configured with this value.
...
}


...


static int
dpdk_eth_dev_queue_setup(struct netdev_dpdk *dev, int n_rxq, int n_txq)
{
...
    if (dev->mtu > ETHER_MTU) {
        conf.rxmode.jumbo_frame = 1;
        conf.rxmode.max_rx_pkt_len = dev->max_packet_len;
        # max Rx packet length is set in NIC's config. object.
...
    diag = rte_eth_dev_configure(dev->port_id, n_rxq, n_txq, &conf);
    # NIC's max Rx packet length is actually set.
...
}


Hope this helps,
Mark

>
>Thanks,
>Nitin
>
>-Original Message-
>From: Kavanagh, Mark B [mailto:mark.b.kavan...@intel.com]
>Sent: Monday, September 18, 2017 7:44 PM
>To: Nitin Katiyar ; ovs-dev@openvswitch.org
>Subject: RE: [ovs-dev] MTU in i40e dpdk driver
>
>>From: Nitin Katiyar [mailto:nitin.kati...@ericsson.com]
>>Sent: Monday, September 18, 2017 3:02 PM
>>To: Kavanagh, Mark B ;
>>ovs-dev@openvswitch.org
>>Subject: RE: [ovs-dev] MTU in i40e dpdk driver
>>
>>Hi,
>>It is set to 2140.
>
>That should accommodate a max packet length of 2158 (i.e. MTU + ETHER_HDR
>(14B) + ETHER_CRC (4B)).
>
>Is the VM inside a VLAN by any chance? The presence of a VLAN tag would
>account for the additional 4B.
>
>-Mark
>
>>
>>compute-0-4:~# ovs-vsctl get Interface dpdk1 mtu
>>2140
>>
>>Regards,
>>Nitin
>>
>>-Original Message-
>>From: Kavanagh, Mark B [mailto:mark.b.kavan...@intel.com]
>>Sent: Monday, September 18, 2017 7:26 PM
>>To: Nitin Katiyar ; ovs-dev@openvswitch.org
>>Subject: Re: [ovs-dev] MTU in i40e dpdk driver
>>
>>>From: ovs-dev-boun...@openvswitch.org
>>>[mailto:ovs-dev-boun...@openvswitch.org]
>>>On Behalf Of Nitin Katiyar
>>>Sent: Monday, September 18, 2017 2:05 PM
>>>To: ovs-dev@openvswitch.org
>>>Subject: [ovs-dev] MTU in i40e dpdk driver
>>>
>>>Hi,
>>>We are using OVS-DPDK (2.6 version) with Fortville NIC (configured in
>>>25G
>>>mode) being used as dpdk port. The setup involves 2 VMs running on 2
>>>different computes (destination VM in compute with 10G NIC while
>>>originating VM is in compute with Fortville NIC). All the interfaces
>>>in the path are configured with MTU of 2140.
>>>
>>>While pinging with size of 2112 (IP packet of 2140 bytes) we found
>>>that ping response does not reach originating VM (i.e on compute with
>>>Fortville
>>NIC) .
>>>DPDK interface does not show any drop but we don't see any ping
>>>response received at DPDK port (verified using port-mirroring). We
>>>also don't see any rule in ovs dpctl for ping response. If we increase
>>>the MTU of DPDK interface by 4 bytes  or reduce the ping size by 4
>>>bytes then it
>>works.
>>>
>>>The same configuration works between 10G NICs on both sides.
>>>
>>>Is it a known issue with i40 dpdk driver?
>>
>>Hi Nitin,
>>
>>What is the MTU of the DPDK ports in this setup?
>>
>>  ovs-vscl get Interface  mtu
>>
>>Thanks,
>>Mark
>>
>>>
>>>Regards,
>>>Nitin


Re: [ovs-dev] MTU in i40e dpdk driver

2017-09-18 Thread Aaron Conole
Nitin Katiyar  writes:

> Hi,
> We are using OVS-DPDK (2.6 version) with Fortville NIC (configured in
> 25G mode) being used as dpdk port. The setup involves 2 VMs running on
> 2 different computes (destination VM in compute with 10G NIC while
> originating VM is in compute with Fortville NIC). All the interfaces
> in the path are configured with MTU of 2140.
>
> While pinging with size of 2112 (IP packet of 2140 bytes) we found
> that ping response does not reach originating VM (i.e on compute with
> Fortville NIC) . DPDK interface does not show any drop but we don't
> see any ping response received at DPDK port (verified using
> port-mirroring). We also don't see any rule in ovs dpctl for ping
> response. If we increase the MTU of DPDK interface by 4 bytes or
> reduce the ping size by 4 bytes then it works.
>
> The same configuration works between 10G NICs on both sides.
>
> Is it a known issue with i40 dpdk driver?

There are some issues with Fortville NICs; notably, the ports
share some register values.  Have you tried your OVS+DPDK setup with the
following patch applied:

http://dpdk.org/ml/archives/dev/2017-August/072758.html

> Regards,
> Nitin


Re: [ovs-dev] MTU in i40e dpdk driver

2017-09-18 Thread Nitin Katiyar
Hi,
Yes, the tag is configured for the VHU port, so traffic from the VM would be
tagged with a VLAN.  Why is it different from the 10G (ixgbe) driver? It should
allow packets matching the configured MTU. Is this expected behavior with the
i40e driver?

Thanks,
Nitin

-Original Message-
From: Kavanagh, Mark B [mailto:mark.b.kavan...@intel.com] 
Sent: Monday, September 18, 2017 7:44 PM
To: Nitin Katiyar ; ovs-dev@openvswitch.org
Subject: RE: [ovs-dev] MTU in i40e dpdk driver

>From: Nitin Katiyar [mailto:nitin.kati...@ericsson.com]
>Sent: Monday, September 18, 2017 3:02 PM
>To: Kavanagh, Mark B ; 
>ovs-dev@openvswitch.org
>Subject: RE: [ovs-dev] MTU in i40e dpdk driver
>
>Hi,
>It is set to 2140.

That should accommodate a max packet length of 2158 (i.e. MTU + ETHER_HDR (14B) 
+ ETHER_CRC (4B)).

Is the VM inside a VLAN by any chance? The presence of a VLAN tag would account 
for the additional 4B.

-Mark

>
>compute-0-4:~# ovs-vsctl get Interface dpdk1 mtu
>2140
>
>Regards,
>Nitin
>
>-Original Message-
>From: Kavanagh, Mark B [mailto:mark.b.kavan...@intel.com]
>Sent: Monday, September 18, 2017 7:26 PM
>To: Nitin Katiyar ; ovs-dev@openvswitch.org
>Subject: Re: [ovs-dev] MTU in i40e dpdk driver
>
>>From: ovs-dev-boun...@openvswitch.org
>>[mailto:ovs-dev-boun...@openvswitch.org]
>>On Behalf Of Nitin Katiyar
>>Sent: Monday, September 18, 2017 2:05 PM
>>To: ovs-dev@openvswitch.org
>>Subject: [ovs-dev] MTU in i40e dpdk driver
>>
>>Hi,
>>We are using OVS-DPDK (2.6 version) with Fortville NIC (configured in 
>>25G
>>mode) being used as dpdk port. The setup involves 2 VMs running on 2 
>>different computes (destination VM in compute with 10G NIC while 
>>originating VM is in compute with Fortville NIC). All the interfaces 
>>in the path are configured with MTU of 2140.
>>
>>While pinging with size of 2112 (IP packet of 2140 bytes) we found 
>>that ping response does not reach originating VM (i.e on compute with 
>>Fortville
>NIC) .
>>DPDK interface does not show any drop but we don't see any ping 
>>response received at DPDK port (verified using port-mirroring). We 
>>also don't see any rule in ovs dpctl for ping response. If we increase 
>>the MTU of DPDK interface by 4 bytes  or reduce the ping size by 4 
>>bytes then it
>works.
>>
>>The same configuration works between 10G NICs on both sides.
>>
>>Is it a known issue with i40 dpdk driver?
>
>Hi Nitin,
>
>What is the MTU of the DPDK ports in this setup?
>
>   ovs-vscl get Interface  mtu
>
>Thanks,
>Mark
>
>>
>>Regards,
>>Nitin


Re: [ovs-dev] MTU in i40e dpdk driver

2017-09-18 Thread Kavanagh, Mark B
>From: Nitin Katiyar [mailto:nitin.kati...@ericsson.com]
>Sent: Monday, September 18, 2017 3:02 PM
>To: Kavanagh, Mark B ; ovs-dev@openvswitch.org
>Subject: RE: [ovs-dev] MTU in i40e dpdk driver
>
>Hi,
>It is set to 2140.

That should accommodate a max packet length of 2158 (i.e. MTU + ETHER_HDR (14B) 
+ ETHER_CRC (4B)).

Is the VM inside a VLAN by any chance? The presence of a VLAN tag would account 
for the additional 4B.

-Mark

>
>compute-0-4:~# ovs-vsctl get Interface dpdk1 mtu
>2140
>
>Regards,
>Nitin
>
>-Original Message-
>From: Kavanagh, Mark B [mailto:mark.b.kavan...@intel.com]
>Sent: Monday, September 18, 2017 7:26 PM
>To: Nitin Katiyar ; ovs-dev@openvswitch.org
>Subject: Re: [ovs-dev] MTU in i40e dpdk driver
>
>>From: ovs-dev-boun...@openvswitch.org
>>[mailto:ovs-dev-boun...@openvswitch.org]
>>On Behalf Of Nitin Katiyar
>>Sent: Monday, September 18, 2017 2:05 PM
>>To: ovs-dev@openvswitch.org
>>Subject: [ovs-dev] MTU in i40e dpdk driver
>>
>>Hi,
>>We are using OVS-DPDK (2.6 version) with Fortville NIC (configured in
>>25G
>>mode) being used as dpdk port. The setup involves 2 VMs running on 2
>>different computes (destination VM in compute with 10G NIC while
>>originating VM is in compute with Fortville NIC). All the interfaces in
>>the path are configured with MTU of 2140.
>>
>>While pinging with size of 2112 (IP packet of 2140 bytes) we found that
>>ping response does not reach originating VM (i.e on compute with Fortville
>NIC) .
>>DPDK interface does not show any drop but we don't see any ping
>>response received at DPDK port (verified using port-mirroring). We also
>>don't see any rule in ovs dpctl for ping response. If we increase the
>>MTU of DPDK interface by 4 bytes  or reduce the ping size by 4 bytes then it
>works.
>>
>>The same configuration works between 10G NICs on both sides.
>>
>>Is it a known issue with the i40e dpdk driver?
>
>Hi Nitin,
>
>What is the MTU of the DPDK ports in this setup?
>
>   ovs-vsctl get Interface <iface> mtu
>
>Thanks,
>Mark
>
>>
>>Regards,
>>Nitin
>>___
>>dev mailing list
>>d...@openvswitch.org
>>https://mail.openvswitch.org/mailman/listinfo/ovs-dev

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


Re: [ovs-dev] MTU in i40e dpdk driver

2017-09-18 Thread Nitin Katiyar
Hi,
It is set to 2140.

compute-0-4:~# ovs-vsctl get Interface dpdk1 mtu
2140

Regards,
Nitin

-Original Message-
From: Kavanagh, Mark B [mailto:mark.b.kavan...@intel.com] 
Sent: Monday, September 18, 2017 7:26 PM
To: Nitin Katiyar ; ovs-dev@openvswitch.org
Subject: Re: [ovs-dev] MTU in i40e dpdk driver

>From: ovs-dev-boun...@openvswitch.org 
>[mailto:ovs-dev-boun...@openvswitch.org]
>On Behalf Of Nitin Katiyar
>Sent: Monday, September 18, 2017 2:05 PM
>To: ovs-dev@openvswitch.org
>Subject: [ovs-dev] MTU in i40e dpdk driver
>
>Hi,
>We are using OVS-DPDK (2.6 version) with Fortville NIC (configured in 
>25G
>mode) being used as dpdk port. The setup involves 2 VMs running on 2 
>different computes (destination VM in compute with 10G NIC while 
>originating VM is in compute with Fortville NIC). All the interfaces in 
>the path are configured with MTU of 2140.
>
>While pinging with size of 2112 (IP packet of 2140 bytes) we found that 
>ping response does not reach originating VM (i.e on compute with Fortville 
>NIC) .
>DPDK interface does not show any drop but we don't see any ping 
>response received at DPDK port (verified using port-mirroring). We also 
>don't see any rule in ovs dpctl for ping response. If we increase the 
>MTU of DPDK interface by 4 bytes  or reduce the ping size by 4 bytes then it 
>works.
>
>The same configuration works between 10G NICs on both sides.
>
>Is it a known issue with the i40e dpdk driver?

Hi Nitin,

What is the MTU of the DPDK ports in this setup?

ovs-vsctl get Interface <iface> mtu

Thanks,
Mark

>
>Regards,
>Nitin
>___
>dev mailing list
>d...@openvswitch.org
>https://mail.openvswitch.org/mailman/listinfo/ovs-dev

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


Re: [ovs-dev] MTU in i40e dpdk driver

2017-09-18 Thread Kavanagh, Mark B
>From: ovs-dev-boun...@openvswitch.org [mailto:ovs-dev-boun...@openvswitch.org]
>On Behalf Of Nitin Katiyar
>Sent: Monday, September 18, 2017 2:05 PM
>To: ovs-dev@openvswitch.org
>Subject: [ovs-dev] MTU in i40e dpdk driver
>
>Hi,
>We are using OVS-DPDK (2.6 version) with Fortville NIC (configured in 25G
>mode) being used as dpdk port. The setup involves 2 VMs running on 2 different
>computes (destination VM in compute with 10G NIC while originating VM is in
>compute with Fortville NIC). All the interfaces in the path are configured
>with MTU of 2140.
>
>While pinging with size of 2112 (IP packet of 2140 bytes) we found that ping
>response does not reach originating VM (i.e on compute with Fortville NIC) .
>DPDK interface does not show any drop but we don't see any ping response
>received at DPDK port (verified using port-mirroring). We also don't see any
>rule in ovs dpctl for ping response. If we increase the MTU of DPDK interface
>by 4 bytes  or reduce the ping size by 4 bytes then it works.
>
>The same configuration works between 10G NICs on both sides.
>
>Is it a known issue with the i40e dpdk driver?

Hi Nitin,

What is the MTU of the DPDK ports in this setup?

ovs-vsctl get Interface <iface> mtu

Thanks,
Mark

>
>Regards,
>Nitin
>___
>dev mailing list
>d...@openvswitch.org
>https://mail.openvswitch.org/mailman/listinfo/ovs-dev
___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


Re: [ovs-dev] ovs-vswitchd is resetting the MTU of a bridge when a patch port is deleted.

2017-09-18 Thread Daniel Alvarez Sanchez
Yes, thanks Numan for the patch :)
Another option would be for ovn-controller to set the MTU explicitly to 1450.
Not sure which of the two is best or would have fewer side effects.

Cheers,
Daniel
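
One way to implement the explicit-MTU option from user space (a sketch,
assuming the Interface table's mtu_request column is available in this OVS
version; a user-set mtu_request is what the netdev_mtu_is_user_config() check
in the patch quoted below looks at):

    ovs-vsctl set Interface br-ex mtu_request=1450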

On Tue, Sep 12, 2017 at 10:43 AM, Numan Siddique 
wrote:

> Hello,
>
> Daniel (CC'd) and I were debugging an issue in the OpenStack TripleO CI
> OVN job.
> We are noticing an issue when ovn-controller deletes the patch ports in
> the external bridge specified in "ovn-bridge-mappings". The external bridge
> (br-ex) is configured with MTU 1450 as it has a vxlan port. (The TripleO CI
> setup configures the MTU.)
>
> When ovn-controller deletes the patch port in br-ex, ovs-vswitchd
> changes the MTU of br-ex to 1500, and this is causing a problem.
>
> We are able to reproduce the issue with the commands below. The issue is
> seen only the first time: if we re-add the patch ports to br1 and br2 we
> don't see it again. We can reproduce it again either by deleting and
> recreating the bridges or by restarting ovs-vswitchd.
>
> ***
> ovs-vsctl add-br br1
> sudo ip link set br1 mtu 1450
> ovs-vsctl add-port br1 br1-p1
> ovs-vsctl set Interface br1-p1 type=patch
> ovs-vsctl set Interface br1-p1 options:peer=br2-p1
>
> sleep 1
>
> ovs-vsctl add-br br2
> sudo ip link set br2 mtu 1450
> ovs-vsctl add-port br2 br2-p1
> ovs-vsctl set Interface br2-p1 type=patch
> ovs-vsctl set Interface br2-p1 options:peer=br1-p1
>
> ip a
> ovs-vsctl show
> ovs-vsctl del-port br1-p1
> ip a s br1
> 
>
> The below patch fixes the issue.
> I am not very sure if this is the right fix. I will submit the patch
> anyway.
>
> If someone has a better fix, please override the patch.
>
> ---
>  ofproto/ofproto.c | 12 +++-
>  1 file changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/ofproto/ofproto.c b/ofproto/ofproto.c
> index 7541af0b2..9950897b8 100644
> --- a/ofproto/ofproto.c
> +++ b/ofproto/ofproto.c
> @@ -2721,18 +2721,20 @@ init_ports(struct ofproto *p)
>  }
>
>  static bool
> -ofport_is_internal(const struct ofproto *p, const struct ofport *port)
> +ofport_is_internal_or_patch(const struct ofproto *p, const struct ofport *port)
>  {
>      return !strcmp(netdev_get_type(port->netdev),
> -                   ofproto_port_open_type(p->type, "internal"));
> +                   ofproto_port_open_type(p->type, "internal")) ||
> +           !strcmp(netdev_get_type(port->netdev),
> +                   ofproto_port_open_type(p->type, "patch"));
>  }
>
> -/* If 'port' is internal and if the user didn't explicitly specify an mtu
> - * through the database, we have to override it. */
> +/* If 'port' is internal or patch and if the user didn't explicitly specify an
> + * mtu through the database, we have to override it. */
>  static bool
>  ofport_is_mtu_overridden(const struct ofproto *p, const struct ofport *port)
>  {
> -    return ofport_is_internal(p, port)
> +    return ofport_is_internal_or_patch(p, port)
>          && !netdev_mtu_is_user_config(port->netdev);
>  }
>
> --
>
>
___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


[ovs-dev] MTU in i40e dpdk driver

2017-09-18 Thread Nitin Katiyar
Hi,
We are using OVS-DPDK (2.6 version) with Fortville NIC (configured in 25G mode) 
being used as dpdk port. The setup involves 2 VMs running on 2 different 
computes (destination VM in compute with 10G NIC while originating VM is in 
compute with Fortville NIC). All the interfaces in the path are configured with 
MTU of 2140.

While pinging with a size of 2112 (an IP packet of 2140 bytes) we found that
the ping response does not reach the originating VM (i.e. on the compute with
the Fortville NIC). The DPDK interface does not show any drop, but we don't
see any ping response received at the DPDK port (verified using
port-mirroring). We also don't see any rule in ovs dpctl for the ping
response. If we increase the MTU of the DPDK interface by 4 bytes or reduce
the ping size by 4 bytes, then it works.

The same configuration works between 10G NICs on both sides.

Is it a known issue with the i40e dpdk driver?

Regards,
Nitin
___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


Re: [ovs-dev] does ovs bfd support flow based tunnel?

2017-09-18 Thread ychen
For a flow-based tunnel:
ovs-vsctl add-port br-int vxlan1 -- set interface vxlan1 type=vxlan \
    options:remote_ip=flow options:key=flow options:local_ip=10.10.0.1
ovs-vsctl set interface vxlan1 bfd:enable=true


When I enable BFD on such a vxlan interface, I cannot capture any BFD packets
on the physical port (the one used by the vxlan interface).






At 2017-09-14 23:38:19, "Miguel Angel Ajo Pelayo"  wrote:

What do you mean by flow-based tunnel?


We're using it internally to provide HA connectivity to Gateway_Chassis on OVN,
and it's working like a charm to monitor tunnel endpoints on OVS bridges.


https://github.com/openvswitch/ovs/blob/master/ovn/controller/bfd.c



On Tue, Sep 12, 2017 at 9:19 PM, ychen  wrote:
Can I enable BFD on a flow-based tunnel? Does it work?
___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


Re: [ovs-dev] adding dpdk ports sharing same pci address to ovs-dpdk bridge

2017-09-18 Thread devendra rawat
Hi Kevin,

On Fri, Sep 8, 2017 at 12:24 AM, Kevin Traynor  wrote:

> On 09/07/2017 06:47 PM, Darrell Ball wrote:
> > Adding disc...@openvswitch.org
> >
> > The related changes went into 2.7
> >
> >
> >
> > On 9/7/17, 3:51 AM, "ovs-dev-boun...@openvswitch.org on behalf of
> devendra rawat" <devendra.rawat.si...@gmail.com> wrote:
> >
> > Hi,
> >
> > I have compiled and built ovs-dpdk using DPDK v17.08 and OVS v2.8.0. The
> > NIC that I am using is Mellanox ConnectX-3 Pro, which is a dual port 10G
> > NIC. The problem with this NIC is that it provides only one PCI address for
> > both the 10G ports.
> >
> > So when I am trying to add the two DPDK ports to my br0 bridge
> >
> > # ovs-vsctl --no-wait add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
> >   options:dpdk-devargs=0002:01:00.0
> >
> > # ovs-vsctl --no-wait add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk
> >   options:dpdk-devargs=0002:01:00.0
> >
>
> Were you able to confirm those addresses by running ./dpdk-devbind.py -s
> in your /tools dir?
>

On running dpdk-devbind.py --status, I can see the ConnectX-3 Pro NIC has
only one PCI address.

Network devices using DPDK-compatible driver
============================================
<none>

Network devices using kernel driver
===================================
0002:01:00.0 'MT27520 Family [ConnectX-3 Pro] 1007' if=enP4p1s0d1,enP4p1s0 drv=mlx4_core unused=
0006:01:00.0 'I210 Gigabit Network Connection 1533' if=enP6p1s0 drv=igb unused= *Active*



> The port dpdk1 is added successfully and able to transfer data, but adding
> dpdk0 to br0 fails:
>
> 2017-09-06T14:19:20Z|00045|netdev_dpdk|INFO|Port 0: e4:1d:2d:4f:78:60
> 2017-09-06T14:19:20Z|00046|bridge|INFO|bridge br0: added interface dpdk1 on port 1
> 2017-09-06T14:19:20Z|00047|bridge|INFO|bridge br0: added interface br0 on port 65534
> 2017-09-06T14:19:20Z|00048|dpif_netlink|WARN|Generic Netlink family 'ovs_datapath' does not exist. The Open vSwitch kernel module is probably not loaded.
> 2017-09-06T14:19:20Z|00049|netdev_dpdk|WARN|'dpdk0' is trying to use device '0002:01:00.0' which is already in use by 'dpdk1'
> 2017-09-06T14:19:20Z|00050|netdev|WARN|dpdk0: could not set configuration (Address already in use)
> 2017-09-06T14:19:20Z|00051|bridge|INFO|bridge br0: using datapath ID e41d2d4f7860
>
>
> With OVS v2.6.1 I never had this problem, as dpdk-devargs was not mandatory
> and just specifying the port name was enough to add that port to the bridge.
>
> Is there a way to add both ports to the bridge?
>
> Thanks,
> Devendra
> ___
> dev mailing list
> d...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-dev
>
>
> ___
> dev mailing list
> d...@openvswitch.org

> > https://mail.openvswitch.org/mailman/listinfo/ovs-dev
> >
>
>
___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev




[ovs-dev] [PATCH 5/5] conntrack: update manual and usage for R/W parameter.

2017-09-18 Thread antonio . fischetti
Update the manual and usage output for the R/W parameter commands.

Signed-off-by: Antonio Fischetti 
---
 lib/dpctl.man | 8 
 utilities/ovs-dpctl.c | 2 ++
 2 files changed, 10 insertions(+)

diff --git a/lib/dpctl.man b/lib/dpctl.man
index 675fe5a..836cc08 100644
--- a/lib/dpctl.man
+++ b/lib/dpctl.man
@@ -235,3 +235,11 @@ For each ConnTracker bucket, displays the number of connections used
 by \fIdp\fR.
 If \fBgt=\fIThreshold\fR is specified, bucket numbers are displayed when
 the number of connections in a bucket is greater than \fIThreshold\fR.
+.
+.TP
+\*(DX\fBct\-set\fR [\fIdp\fR] [\fBParameter=\fIValue\fR]
+Sets a new value for one of the available CT working parameters.
+.
+.TP
+\*(DX\fBct\-get\fR [\fIdp\fR] [\fBParameter\fR]
+Displays the current value of the specified CT working parameter.
diff --git a/utilities/ovs-dpctl.c b/utilities/ovs-dpctl.c
index 7b005ac..01505f6 100644
--- a/utilities/ovs-dpctl.c
+++ b/utilities/ovs-dpctl.c
@@ -203,6 +203,8 @@ usage(void *userdata OVS_UNUSED)
"  ct-stats-show [DP] [zone=ZONE] [verbose] " \
"CT connections grouped by protocol\n"
"  ct-bkts [DP] [gt=N] display connections per CT bucket\n"
+   "  ct-set PARAM=VALUE set a CT value to a working parameter\n"
+   "  ct-get PARAM display the current CT value of a parameter\n"
"Each IFACE on add-dp, add-if, and set-if may be followed by\n"
"comma-separated options.  See ovs-dpctl(8) for syntax, or the\n"
"Interface table in ovs-vswitchd.conf.db(5) for an options list.\n"
-- 
2.4.11
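
For reference, once the whole series is applied, the new commands would be
exercised along these lines (a sketch reusing the parameter names defined in
the series; the datapath in use must implement the new dpif callbacks):

    ovs-appctl dpctl/ct-set maxconn=100000   # raise the connection limit
    ovs-appctl dpctl/ct-get maxconn          # read it back
    ovs-appctl dpctl/ct-set cleanup=4000     # set the clean-up interval
    ovs-appctl dpctl/ct-get totconn          # connection count (read-only)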

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


[ovs-dev] [PATCH 4/5] conntrack: read current nr of connections.

2017-09-18 Thread antonio . fischetti
Read current number of connections managed by the
CT module.

Example:
  ovs-appctl dpctl/ct-get totconn

Signed-off-by: Antonio Fischetti 
---
 lib/conntrack.c | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/lib/conntrack.c b/lib/conntrack.c
index 60eb376..412665a 100644
--- a/lib/conntrack.c
+++ b/lib/conntrack.c
@@ -2400,6 +2400,13 @@ conntrack_flush(struct conntrack *ct, const uint16_t *zone)
 return 0;
 }
 
+/* Read the total nr of connections currently managed. */
+static int
+rd_tot_conn(struct conntrack *ct, uint32_t *cur_val) {
+*cur_val = atomic_count_get(&ct->n_conn);
+return 0;
+}
+
 /* Set an interval value to be used by clean_thread_main. */
 static int
 wr_clean_int(struct conntrack *ct, uint32_t new_val) {
@@ -2435,11 +2442,14 @@ rd_max_conn(struct conntrack *ct, uint32_t *cur_val) {
 #define CT_RW_MAX_CONN "maxconn"
 /* Clean-up interval used by clean_thread_main() thread. */
 #define CT_RW_CLEAN_INTERVAL "cleanup"
+/* Total nr of connections currently managed by CT module. */
+#define CT_RW_TOT_CONN "totconn"
 
 /* List of parameters that can be read/written at run-time. */
 struct ct_wk_params wk_params[] = {
 {CT_RW_MAX_CONN, wr_max_conn, rd_max_conn},
 {CT_RW_CLEAN_INTERVAL, wr_clean_int, rd_clean_int},
+{CT_RW_TOT_CONN, NULL, rd_tot_conn},
 };
 
 int
-- 
2.4.11

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


[ovs-dev] [PATCH 3/5] conntrack: r/w clean-up interval.

2017-09-18 Thread antonio . fischetti
Read/Write conntrack clean-up interval used by
the clean_thread_main() thread.

Example:
   ovs-appctl dpctl/ct-set cleanup=4000  # Set a new value
   ovs-appctl dpctl/ct-get cleanup   # Read

Signed-off-by: Antonio Fischetti 
---
 lib/conntrack.c | 27 ---
 lib/conntrack.h |  2 ++
 2 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/lib/conntrack.c b/lib/conntrack.c
index 6d86625..60eb376 100644
--- a/lib/conntrack.c
+++ b/lib/conntrack.c
@@ -225,6 +225,9 @@ conn_key_cmp(const struct conn_key *key1, const struct conn_key *key2)
 return 1;
 }
 
+#define CT_CLEAN_INTERVAL 5000 /* 5 seconds */
+#define CT_CLEAN_MIN_INTERVAL 200  /* 0.2 seconds */
+
 /* Initializes the connection tracker 'ct'.  The caller is responsible for
  * calling 'conntrack_destroy()', when the instance is not needed anymore */
 void
@@ -258,6 +261,7 @@ conntrack_init(struct conntrack *ct)
 ct->hash_basis = random_uint32();
atomic_count_init(&ct->n_conn, 0);
atomic_init(&ct->n_conn_limit, DEFAULT_N_CONN_LIMIT);
+ct->clean_interval = CT_CLEAN_INTERVAL;
latch_init(&ct->clean_thread_exit);
 ct->clean_thread = ovs_thread_create("ct_clean", clean_thread_main, ct);
 }
@@ -1327,8 +1331,6 @@ next_bucket:
  *   behind, there is at least some 200ms blocks of time when buckets will be
  *   left alone, so the datapath can operate unhindered.
  */
-#define CT_CLEAN_INTERVAL 5000 /* 5 seconds */
-#define CT_CLEAN_MIN_INTERVAL 200  /* 0.2 seconds */
 
 static void *
 clean_thread_main(void *f_)
@@ -1344,7 +1346,7 @@ clean_thread_main(void *f_)
 if (next_wake < now) {
 poll_timer_wait_until(now + CT_CLEAN_MIN_INTERVAL);
 } else {
-poll_timer_wait_until(MAX(next_wake, now + CT_CLEAN_INTERVAL));
+poll_timer_wait_until(MAX(next_wake, now + ct->clean_interval));
 }
latch_wait(&ct->clean_thread_exit);
 poll_block();
@@ -2398,6 +2400,21 @@ conntrack_flush(struct conntrack *ct, const uint16_t *zone)
 return 0;
 }
 
+/* Set an interval value to be used by clean_thread_main. */
+static int
+wr_clean_int(struct conntrack *ct, uint32_t new_val) {
+ct->clean_interval = new_val;
+VLOG_DBG("Set clean interval to %d", new_val);
+return 0;
+}
+
+/* Read current clean-up interval used by clean_thread_main. */
+static int
+rd_clean_int(struct conntrack *ct, uint32_t *cur_val) {
+*cur_val = ct->clean_interval;
+return 0;
+}
+
 /* Set a new value for the upper limit of connections. */
 static int
 wr_max_conn(struct conntrack *ct, uint32_t new_val) {
@@ -2414,11 +2431,15 @@ rd_max_conn(struct conntrack *ct, uint32_t *cur_val) {
 }
 
 /* List of managed parameters. */
+/* Max nr of connections managed by CT module. */
 #define CT_RW_MAX_CONN "maxconn"
+/* Clean-up interval used by clean_thread_main() thread. */
+#define CT_RW_CLEAN_INTERVAL "cleanup"
 
 /* List of parameters that can be read/written at run-time. */
 struct ct_wk_params wk_params[] = {
 {CT_RW_MAX_CONN, wr_max_conn, rd_max_conn},
+{CT_RW_CLEAN_INTERVAL, wr_clean_int, rd_clean_int},
 };
 
 int
diff --git a/lib/conntrack.h b/lib/conntrack.h
index 4eb9a9a..ba9d3f1 100644
--- a/lib/conntrack.h
+++ b/lib/conntrack.h
@@ -261,6 +261,8 @@ struct conntrack {
 pthread_t clean_thread;
 /* Latch to destroy the 'clean_thread' */
 struct latch clean_thread_exit;
+/* Clean interval. */
+uint32_t clean_interval;
 
 /* Number of connections currently in the connection tracker. */
 atomic_count n_conn;
-- 
2.4.11

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


[ovs-dev] [PATCH 1/5] conntrack: add commands to r/w conntrack parameters.

2017-09-18 Thread antonio . fischetti
Add infrastructure to implement:
 - dpctl/ct-get to read the current value of the available
   conntrack parameters.
 - dpctl/ct-set to set a value for the available conntrack
   parameters.

Signed-off-by: Antonio Fischetti 
---
 lib/conntrack.c | 67 +
 lib/conntrack.h |  3 ++
 lib/ct-dpif.c   | 28 ++
 lib/ct-dpif.h   |  2 ++
 lib/dpctl.c | 85 +
 lib/dpif-netdev.c   | 19 
 lib/dpif-netlink.c  |  2 ++
 lib/dpif-provider.h |  4 +++
 8 files changed, 210 insertions(+)

diff --git a/lib/conntrack.c b/lib/conntrack.c
index 419cb1d..0642cc8 100644
--- a/lib/conntrack.c
+++ b/lib/conntrack.c
@@ -67,6 +67,13 @@ enum ct_alg_mode {
 CT_TFTP_MODE,
 };
 
+/* Variable to manage read/write on CT parameters. */
+struct ct_wk_params {
+char *cli;  /* Parameter name in human format. */
+int (*wr)(struct conntrack *, uint32_t);
+int (*rd)(struct conntrack *, uint32_t *);
+};
+
 static bool conn_key_extract(struct conntrack *, struct dp_packet *,
  ovs_be16 dl_type, struct conn_lookup_ctx *,
  uint16_t zone);
@@ -2391,6 +2398,66 @@ conntrack_flush(struct conntrack *ct, const uint16_t *zone)
 return 0;
 }
 
+/* List of parameters that can be read/written at run-time. */
+struct ct_wk_params wk_params[] = {};
+
+int
+conntrack_set_param(struct conntrack *ct,
+const char *set_param)
+{
+bool valid_param = false;
+uint32_t max_conn;
+char bfr[16] = "";
+
+/* Check if the specified param can be managed. */
+for (int i = 0; i < sizeof(wk_params) / sizeof(struct ct_wk_params); i++) {
+if (!strncmp(set_param, wk_params[i].cli,
+strlen(wk_params[i].cli))) {
+valid_param = true;
+ovs_strzcpy(bfr, wk_params[i].cli, sizeof(bfr) - 1);
+strncat(bfr, "=%"SCNu32, sizeof(bfr) - 1 - strlen(bfr));
+if (ovs_scan(set_param, bfr, &max_conn)) {
+return (wk_params[i].wr
+? wk_params[i].wr(ct, max_conn)
+: EOPNOTSUPP);
+} else {
+return EINVAL;
+}
+}
+}
+if (!valid_param) {
+VLOG_DBG("%s: expected valid PARAM=NUMBER", set_param);
+return EINVAL;
+}
+
+return 0;
+}
+
+int
+conntrack_get_param(struct conntrack *ct,
+const char *get_param, uint32_t *val)
+{
+bool valid_param = false;
+
+/* Check if the specified param can be managed. */
+for (int i = 0; i < sizeof(wk_params) / sizeof(struct ct_wk_params); i++) {
+if (!strncmp(get_param, wk_params[i].cli,
+strlen(wk_params[i].cli))) {
+valid_param = true;
+
+return (wk_params[i].rd
+? wk_params[i].rd(ct, val)
+: EOPNOTSUPP);
+}
+}
+if (!valid_param) {
+VLOG_DBG("%s: expected a valid PARAM", get_param);
+return EINVAL;
+}
+
+return 0;
+}
+
 /* This function must be called with the ct->resources read lock taken. */
 static struct alg_exp_node *
 expectation_lookup(struct hmap *alg_expectations,
diff --git a/lib/conntrack.h b/lib/conntrack.h
index fbeef1c..4eb9a9a 100644
--- a/lib/conntrack.h
+++ b/lib/conntrack.h
@@ -114,6 +114,9 @@ int conntrack_dump_next(struct conntrack_dump *, struct ct_dpif_entry *);
 int conntrack_dump_done(struct conntrack_dump *);
 
 int conntrack_flush(struct conntrack *, const uint16_t *zone);
+int conntrack_set_param(struct conntrack *, const char *set_param);
+int conntrack_get_param(struct conntrack *, const char *get_param,
+uint32_t *val);
 
 /* 'struct ct_lock' is a wrapper for an adaptive mutex.  It's useful to try
  * different types of locks (e.g. spinlocks) */
diff --git a/lib/ct-dpif.c b/lib/ct-dpif.c
index c79e69e..599bc57 100644
--- a/lib/ct-dpif.c
+++ b/lib/ct-dpif.c
@@ -127,6 +127,34 @@ ct_dpif_flush(struct dpif *dpif, const uint16_t *zone)
 : EOPNOTSUPP);
 }
 
+int
+ct_dpif_set_param(struct dpif *dpif, const char *set_param)
+{
+if (!set_param) {
+VLOG_DBG("%s: ct_set_param: no input param", dpif_name(dpif));
+return EINVAL;
+}
+VLOG_DBG("%s: ct_set_param: %s", dpif_name(dpif), set_param);
+
+return (dpif->dpif_class->ct_set_param
+? dpif->dpif_class->ct_set_param(dpif, set_param)
+: EOPNOTSUPP);
+}
+
+int
+ct_dpif_get_param(struct dpif *dpif, const char *get_param, uint32_t *val)
+{
+if (!get_param) {
+VLOG_DBG("%s: ct_get_param: no input param", dpif_name(dpif));
+return EINVAL;
+}
+VLOG_DBG("%s: ct_get_param: %s", dpif_name(dpif), get_param);
+
+return (dpif->dpif_class->ct_get_param
+? dpif->dpif_class->ct_get_param(dpif, get_param, val)
+: EOPNOTSUPP);
+}



[ovs-dev] data path flow addition error in ovs 2.5

2017-09-18 Thread Prasannaa Vengatesan
Hi,

I am using the following version of OVS (2.5.1) with datapath in kernel
mode.

root@localhost:~# ovs-vsctl --version
ovs-vsctl (Open vSwitch) 2.5.1


When I try to add a flow in the data path I get error.

root@localhost:~# ovs-appctl dpctl/add-flow ovs-system "in_port(2)" 3
ovs-vswitchd: updating flow table (Invalid argument)
ovs-appctl: ovs-vswitchd: server returned an error


root@localhost:~# ovs-dpctl add-flow ovs-system "in_port(2)" 3
2017-09-18T13:51:51Z|1|dpif|WARN|system@ovs-system: failed to put[create] (Invalid argument) in_port(2), actions:3
ovs-dpctl: updating flow table (Invalid argument)


The corresponding error logged in dmesg is
[420714.893276] openvswitch: netlink: Missing key (keys=8, expected=10)



root@localhost:~# ovs-dpctl show
system@ovs-system:
lookups: hit:15 missed:9 lost:0
flows: 0
masks: hit:32 total:1 hit/pkt:1.33
port 0: ovs-system (internal)
port 1: ovs-sys-br (internal)
port 2: veth01
port 3: veth11

Can you please let me know if I am using the correct command to add flows in
the datapath?


Thanks,
Prasannaa.
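
The "Missing key (keys=8, expected=10)" message in dmesg suggests the kernel
rejected the flow because the key is incomplete: the kernel datapath expects
a fully specified key, not just in_port. A sketch of a more complete key that
would typically be accepted for ICMP traffic (the MAC and IP values here are
placeholders; the exact set of required attributes depends on the kernel
version):

    ovs-appctl dpctl/add-flow ovs-system \
        "in_port(2),eth(src=00:11:22:33:44:55,dst=66:77:88:99:aa:bb),eth_type(0x0800),ipv4(src=10.0.0.1,dst=10.0.0.2,proto=1,tos=0,ttl=64,frag=no),icmp(type=8,code=0)" 3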
___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


Re: [ovs-dev] [PATCH net-next v9] openvswitch: enable NSH support

2017-09-18 Thread Yang, Yi
On Thu, Sep 14, 2017 at 05:09:02PM +0800, Jiri Benc wrote:
> On Thu, 14 Sep 2017 16:37:59 +0800, Yi Yang wrote:
> > OVS master and 2.8 branch has merged NSH userspace
> > patch series, this patch is to enable NSH support
> > in kernel data path in order that OVS can support
> > NSH in compat mode by porting this.
> 
> http://vger.kernel.org/~davem/net-next.html

I see net-next is open now. v9 of this patch series still applies to
current net-next without any conflicting hunks, so please help review v9.
Thanks a lot.
___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


Re: [ovs-dev] [PATCH 02/13] netdev-dummy: Reorder elements in dummy_packet_stream structure.

2017-09-18 Thread Bodireddy, Bhanuprakash
Hi Greg,

>On 09/08/2017 10:59 AM, Bhanuprakash Bodireddy wrote:
>> By reordering elements in dummy_packet_stream structure, sum holes
>
>Do you mean "the sum of the holes" can be reduced or do you mean "some
>holes"
>can be reduced?

In this patch series, "sum of the holes" means the total of all the hole
bytes in the respective structure. For example, the 'dummy_packet_stream'
structure members are aligned as shown below. This structure has one hole
comprising 56 bytes.

struct dummy_packet_stream {
    struct stream *            stream;               /*     0     8 */

    /* XXX 56 bytes hole, try to pack */

    struct dp_packet           rxbuf;                /*    64   704 */
    struct ovs_list            txq;                  /*   768    16 */
};

With the proposed change in this patch, the new alignment is as below:

struct dummy_packet_stream {
    struct stream *            stream;               /*     0     8 */
    struct ovs_list            txq;                  /*     8    16 */

    /* XXX 40 bytes hole, try to pack */

    struct dp_packet           rxbuf;                /*    64   704 */
};
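
Layout dumps in this shape can be regenerated with pahole (assuming the
binary is built with debug info), e.g.:

    pahole -C dummy_packet_stream vswitchd/ovs-vswitchd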

For all the patches, information showing the improvement from the proposed
changes is added to the commit log. As claimed, the sum of hole bytes is
reduced from 56 to 40 in the case of this patch.

>> Before: structure size: 784, sum holes: 56, cachelines:13
>> After :  structure size: 768, sum holes: 40, cachelines:12

>
>Same question through several of the other patches where you use the same
>language.

In a few structures there are multiple holes, and in those cases 'sum holes'
adds up the hole bytes of all of them.

- Bhanuprakash.

>
>> can be reduced, thus saving a cache line.
>>
>> Before: structure size: 784, sum holes: 56, cachelines:13 After :
>> structure size: 768, sum holes: 40, cachelines:12
>>
>> Signed-off-by: Bhanuprakash Bodireddy
>> 
>> ---
>>   lib/netdev-dummy.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/lib/netdev-dummy.c b/lib/netdev-dummy.c
>> index f731af1..d888c40 100644
>> --- a/lib/netdev-dummy.c
>> +++ b/lib/netdev-dummy.c
>> @@ -50,8 +50,8 @@ struct reconnect;
>>
>>   struct dummy_packet_stream {
>>   struct stream *stream;
>> -struct dp_packet rxbuf;
>>   struct ovs_list txq;
>> +struct dp_packet rxbuf;
>>   };
>>
>>   enum dummy_packet_conn_type {
>>

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev