Re: [ovs-discuss] hardware offloading in ovs-2.8

2017-11-07 Thread 王嵘
For now, I'm using the "NetXtreme II BCM57810 10 Gigabit Ethernet 168e".
Another question: I found that ovs-dpdk can't bind this NIC, but I see that there is
support code for the 57810 in the DPDK bnx2x driver.
I'm puzzled.
Can you tell me whether ovs-dpdk supports this NIC?

Thanks very much!
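
For reference, this is roughly how a NIC gets checked and bound for use with
OVS-DPDK; a minimal sketch only, with a placeholder PCI address (0000:04:00.0)
and placeholder bridge/port names. Note that the bnx2x PMD is not enabled in
the default DPDK build configuration in some releases, so whether this
particular card works also depends on how DPDK was built:

    # list NICs and the drivers they are currently bound to
    dpdk-devbind.py --status
    # bind the port to a DPDK-compatible driver (igb_uio or vfio-pci)
    dpdk-devbind.py --bind=igb_uio 0000:04:00.0
    # attach it to OVS as a DPDK port
    ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 type=dpdk \
        options:dpdk-devargs=0000:04:00.0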

At 2017-11-08 09:11:25, "Ben Pfaff" wrote:
>On Tue, Nov 07, 2017 at 03:49:06PM +0800, 王嵘 wrote:
>> Hi,
>> I'm using ovs-dpdk (ovs 2.8 / dpdk 17.05.2), and I want to use the offload
>> feature, but I don't know how to enable it.
>> As stated in the 2.8 release notes:
>> 
>>   - Add experimental support for hardware offloading
>>  * HW offloading is disabled by default.
>> 
>>  * HW offloading is done through the TC interface.
>
>Do you have a supported NIC?  This feature is currently for a particular
>model of Mellanox NICs.


Re: [ovs-discuss] OVN patch port for localnet can't be created

2017-11-07 Thread Hui Xiang
It seems it doesn't even get the "" type; only patch and chassisredirect show up.

gateway_chassis : []
logical_port: "d94cb413-f53a-4943-9590-c75e60e63568"
mac : [""]
nat_addresses   : []
options : {}
parent_port : []
tag : []
tunnel_key  : 3
type: ""
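
To make the control flow in the trace below easier to follow: only "localnet"
and "l2gateway" bindings reach the patch-port creation code, and every other
type, including the "patch" rows shown in the gdb session, hits the continue.
A small standalone C illustration of that dispatch (hypothetical data, not the
real SBREC_PORT_BINDING_FOR_EACH loop):

    /* Illustration of the type dispatch stepped through in the gdb trace:
     * anything that is not "localnet" or "l2gateway" is skipped. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *types[] = { "patch", "chassisredirect", "", "localnet" };

        for (size_t i = 0; i < sizeof types / sizeof types[0]; i++) {
            if (!strcmp(types[i], "localnet")) {
                printf("%-16s -> create localnet patch port\n", types[i]);
            } else if (!strcmp(types[i], "l2gateway")) {
                printf("%-16s -> create l2gateway patch port\n", types[i]);
            } else {
                printf("%-16s -> skipped (continue)\n", types[i]);
                continue;
            }
        }
        return 0;
    }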

On Tue, Nov 7, 2017 at 4:36 PM, Hui Xiang  wrote:

> Hi folks,
>
>   When I am running OVN on one of my nodes, with the gateway port
> connected to the external network via localnet, the patch port can't be created
> between br-ex (set by ovn-bridge-mappings) and br-int. After gdb, it seems
> the result returned from SBREC_PORT_BINDING_FOR_EACH (binding, ctx->ovnsb_idl)
> doesn't include the 'localnet' binding type, although it does show up in
> ovn-sbctl list port_binding. Either I am missing some configuration to make
> it work or this is a bug.
>
>   Please have a look, thanks much.
>
> external_ids: {hostname="node-1.domain.tld",
> ovn-bridge-mappings="physnet1:br-ex", ovn-encap-ip="168.254.101.10",
> ovn-encap-type=geneve, ovn-remote="tcp:192.168.0.2:6642",
> rundir="/var/run/openvswitch", system-id="88596f9f-e326-4e15-
> ae91-8cc014e7be86"}
> iface_types : [geneve, gre, internal, lisp, patch, stt, system,
> tap, vxlan]
>
> (gdb) n
> 181 if (!strcmp(binding->type, "localnet")) {
> 4: binding->type = 0x55e7189608b0 "patch"
> (gdb) display binding->logical_port
> 5: binding->logical_port = 0x55e718960650 "b3edbc9a-3248-43e5-b84e-
> 01689a9c83e2"
> (gdb) n
> 183 } else if (!strcmp(binding->type, "l2gateway")) {
> 5: binding->logical_port = 0x55e718960650 "b3edbc9a-3248-43e5-b84e-
> 01689a9c83e2"
> 4: binding->type = 0x55e7189608b0 "patch"
> (gdb)
> 193 continue;
> 5: binding->logical_port = 0x55e718960650 "b3edbc9a-3248-43e5-b84e-
> 01689a9c83e2"
> 4: binding->type = 0x55e7189608b0 "patch"
> (gdb)
> 179 SBREC_PORT_BINDING_FOR_EACH (binding, ctx->ovnsb_idl) {
> 5: binding->logical_port = 0x55e718960650 "b3edbc9a-3248-43e5-b84e-
> 01689a9c83e2"
> 4: binding->type = 0x55e7189608b0 "patch"
> (gdb)
> 181 if (!strcmp(binding->type, "localnet")) {
> 5: binding->logical_port = 0x55e7189622d0 "lrp-3a938edc-8809-4b79-b1a6-
> 8145066e4fe3"
> 4: binding->type = 0x55e718962380 "patch"
> (gdb)
> 183 } else if (!strcmp(binding->type, "l2gateway")) {
> 5: binding->logical_port = 0x55e7189622d0 "lrp-3a938edc-8809-4b79-b1a6-
> 8145066e4fe3"
> 4: binding->type = 0x55e718962380 "patch"
> (gdb)
> 193 continue;
> 5: binding->logical_port = 0x55e7189622d0 "lrp-3a938edc-8809-4b79-b1a6-
> 8145066e4fe3"
> 4: binding->type = 0x55e718962380 "patch"
> (gdb)
> 179 SBREC_PORT_BINDING_FOR_EACH (binding, ctx->ovnsb_idl) {
> 5: binding->logical_port = 0x55e7189622d0 "lrp-3a938edc-8809-4b79-b1a6-
> 8145066e4fe3"
> 4: binding->type = 0x55e718962380 "patch"
> (gdb)
> 181 if (!strcmp(binding->type, "localnet")) {
> 5: binding->logical_port = 0x55e718962820 "lrp-b3edbc9a-3248-43e5-b84e-
> 01689a9c83e2"
> 4: binding->type = 0x55e7189628d0 "patch"
> (gdb)
> 183 } else if (!strcmp(binding->type, "l2gateway")) {
> 5: binding->logical_port = 0x55e718962820 "lrp-b3edbc9a-3248-43e5-b84e-
> 01689a9c83e2"
> 4: binding->type = 0x55e7189628d0 "patch"
> (gdb)
> 193 continue;
> 5: binding->logical_port = 0x55e718962820 "lrp-b3edbc9a-3248-43e5-b84e-
> 01689a9c83e2"
> 4: binding->type = 0x55e7189628d0 "patch"
> (gdb)
> 179 SBREC_PORT_BINDING_FOR_EACH (binding, ctx->ovnsb_idl) {
> 5: binding->logical_port = 0x55e718962820 "lrp-b3edbc9a-3248-43e5-b84e-
> 01689a9c83e2"
> (gdb) n
> 181 if (!strcmp(binding->type, "localnet")) {
> 4: binding->type = 0x55e7189608b0 "patch"
> (gdb) display binding->logical_port
> 5: binding->logical_port = 0x55e718960650 "b3edbc9a-3248-43e5-b84e-
> 01689a9c83e2"
> (gdb) n
> 183 } else if (!strcmp(binding->type, "l2gateway")) {
> 5: binding->logical_port = 0x55e718960650 "b3edbc9a-3248-43e5-b84e-
> 01689a9c83e2"
> 4: binding->type = 0x55e7189608b0 "patch"
> (gdb)
> 193 continue;
> 5: binding->logical_port = 0x55e718960650 "b3edbc9a-3248-43e5-b84e-
> 01689a9c83e2"
> 4: binding->type = 0x55e7189608b0 "patch"
> (gdb)
> 179 SBREC_PORT_BINDING_FOR_EACH (binding, ctx->ovnsb_idl) {
> 5: binding->logical_port = 0x55e718960650 "b3edbc9a-3248-43e5-b84e-
> 01689a9c83e2"
> 4: binding->type = 0x55e7189608b0 "patch"
> (gdb)
> 181 if (!strcmp(binding->type, "localnet")) {
> 5: binding->logical_port = 0x55e7189622d0 "lrp-3a938edc-8809-4b79-b1a6-
> 8145066e4fe3"
> 4: binding->type = 0x55e718962380 "patch"
> (gdb)
> 183 } else if (!strcmp(binding->type, "l2gateway")) {
> 5: binding->logical_port = 0x55e7189622d0 "lrp-3a938edc-8809-4b79-b1a6-
> 8145066e4fe3"
> 4: binding->type = 0x55e718962380 "patch"
> (gdb)
> 193 continue;
> 5: binding->logical_port = 0x55e7189622d0 

Re: [ovs-discuss] hardware offloading in ovs-2.8

2017-11-07 Thread Ben Pfaff
On Tue, Nov 07, 2017 at 03:49:06PM +0800, 王嵘 wrote:
> Hi,
> I'm using ovs-dpdk (ovs 2.8 / dpdk 17.05.2), and I want to use the offload
> feature, but I don't know how to enable it.
> As stated in the 2.8 release notes:
> 
>   - Add experimental support for hardware offloading
>  * HW offloading is disabled by default.
> 
>  * HW offloading is done through the TC interface.

Do you have a supported NIC?  This feature is currently for a particular
model of Mellanox NICs.
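
For reference, the experimental offload in 2.8 is switched on through an
other_config knob and applies to the kernel (TC) datapath; a minimal sketch,
which only helps if the NIC and driver are supported:

    ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
    # optional: control how flows are offloaded (none / skip_sw / skip_hw)
    ovs-vsctl set Open_vSwitch . other_config:tc-policy=none
    # restart ovs-vswitchd for the hw-offload setting to take effect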


Re: [ovs-discuss] pmd-cpu-mask/distribution of rx queues not working on windows

2017-11-07 Thread Kevin Traynor
On 10/19/2017 05:45 PM, Alin Gabriel Serdean wrote:
> Hi,
> 
>  
> 
> Currently the test “pmd-cpu-mask/distribution of rx queues” is failing
> on Windows. I’m trying to figure out what we are missing in the Windows
> environment. Any help is welcome.
> 
>  

Hi Alin, the queues are sorted by measured rxq cycles since
79da1e411ba5. In this test case the rxq cycles are equal for all queues.

On Linux, the sort result is 0,1,3,4,5,6,7,8,9 whereas you are seeing
1,3,4,5,6,7,8,9,0. This impacts the distribution to PMDs and that's
causing the test to fail.

Currently the comparison function (rxq_cycle_sort) selects a winner even when
they are equal. I have submitted a patch which (amongst other things)
changes that so it just reports that they are equal. I've sent a v2
(https://patchwork.ozlabs.org/patch/835385/); can you try that and see if
it fixes the issue on Windows?

thanks,
Kevin.
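
As a generic illustration of the change described above (not the actual
rxq_cycle_sort() from the patch; the type and field names here are made up):
a qsort-style comparator that returns 0 for equal keys instead of always
picking a winner:

    /* Sort rx queues by measured cycles, descending, and report equal keys
     * as equal rather than arbitrarily choosing one of them as "greater". */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct rxq_stats {
        int queue_id;
        uint64_t cycles;
    };

    static int
    compare_rxq_cycles(const void *a_, const void *b_)
    {
        const struct rxq_stats *a = a_;
        const struct rxq_stats *b = b_;

        if (a->cycles < b->cycles) {
            return 1;               /* descending: busier queues first */
        } else if (a->cycles > b->cycles) {
            return -1;
        }
        return 0;                   /* equal cycles: report a tie */
    }

    int main(void)
    {
        struct rxq_stats rxqs[] = { {0, 100}, {1, 100}, {2, 300}, {3, 100} };

        qsort(rxqs, 4, sizeof rxqs[0], compare_rxq_cycles);
        for (int i = 0; i < 4; i++) {
            printf("queue %d: %u cycles\n", rxqs[i].queue_id,
                   (unsigned) rxqs[i].cycles);
        }
        return 0;
    }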

> 
> ## -- ##
> 
> ## openvswitch 2.8.90 test suite. ##
> 
> ## -- ##
> 
> 1007. pmd.at:106: testing PMD - pmd-cpu-mask/distribution of rx queues ...
> 
> ./pmd.at:107: ovsdb-tool create conf.db
> $abs_top_srcdir/vswitchd/vswitch.ovsschema
> 
> ./pmd.at:107: ovsdb-server --detach --no-chdir --pidfile --log-file
> --remote=punix:$OVS_RUNDIR/db.sock
> 
> stderr:
> 
> ./pmd.at:107: sed < stderr '
> 
> /vlog|INFO|opened log file/d
> 
> /ovsdb_server|INFO|ovsdb-server (Open vSwitch)/d'
> 
> ./pmd.at:107: ovs-vsctl --no-wait init
> 
> ./pmd.at:107: ovs-vswitchd --enable-dummy --disable-system
> --dummy-numa="0,0,0,0" --detach --no-chdir --pidfile --log-file -vvconn
> -vofproto_dpif -vunixctl
> 
> stderr:
> 
> ./pmd.at:107: sed < stderr '
> 
> /ovs_numa|INFO|Discovered /d
> 
> /vlog|INFO|opened log file/d
> 
> /vswitchd|INFO|ovs-vswitchd (Open vSwitch)/d
> 
> /reconnect|INFO|/d
> 
> /ofproto|INFO|using datapath ID/d
> 
> /netdev_linux|INFO|.*device has unknown hardware address family/d
> 
> /ofproto|INFO|datapath ID changed to fedcba9876543210/d
> 
> /dpdk|INFO|DPDK Disabled - Use other_config:dpdk-init to enable/d
> 
> /netdev: Flow API/d
> 
> /tc: Using policy/d'
> 
> ./pmd.at:107: add_of_br 0 add-port br0 p0 -- set Interface p0
> type=dummy-pmd options:n_rxq=8
> 
> 2017-10-19T16:31:19.771Z|3|ovs_numa|INFO|Discovered 1 NUMA nodes and
> 4 CPU cores
> 
> 2017-10-19T16:31:20.030Z|00024|dpif_netdev|INFO|There are 1 pmd threads
> on numa node 0
> 
> ./pmd.at:111: test "$N_THREADS" -gt 0
> 
> ./pmd.at:113: ovs-appctl dpif/show | sed
> 's/\(tx_queues=\)[0-9]*/\1/g'
> 
> ./pmd.at:120: ovs-appctl dpif-netdev/pmd-rxq-show | sed "s/\(numa_id
> \)[0-9]*\( core_id \)[0-9]*:/\1\2:/"
> 
> ./pmd.at:127: ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x3
> 
> 2017-10-19T16:31:20.618Z|00041|dpif_netdev|INFO|There are 2 pmd threads
> on numa node 0
> 
> ./pmd.at:128: test "$N_THREADS" -eq "2"
> 
> ./pmd.at:130: ovs-appctl dpif-netdev/pmd-rxq-show | sed "s/\(numa_id
> \)[0-9]*\( core_id \)[0-9]*:/\1\2:/;s/\(queue-id: \)1
> 2 5 6/\1/;s/\(queue-id: \)0 3 4 7/\1/"
> 
> --- -   2017-10-19 19:31:20 +0300
> 
> +++ /c/_2017/october/19/ovs/tests/testsuite.dir/at-groups/1007/stdout  
> 2017-10-19 19:31:21 +0300
> 
> @@ -1,7 +1,7 @@
> 
> pmd thread numa_id  core_id :
> 
> isolated : false
> 
> -   port: p0queue-id: 
> 
> +   port: p0queue-id: 0 1 4 5
> 
> pmd thread numa_id  core_id :
> 
> isolated : false
> 
> -   port: p0queue-id: 
> 
> +   port: p0queue-id: 2 3 6 7
> 
>  
> 
> ovsdb-server.log:
> 
>> 2017-10-19T16:31:19.440Z|1|vlog|INFO|opened log file
> c:/_2017/october/19/ovs/tests/testsuite.dir/1007/ovsdb-server.log
> 
>> 2017-10-19T16:31:19.446Z|2|ovsdb_server|INFO|ovsdb-server (Open
> vSwitch) 2.8.90
> 
> ovs-vswitchd.log:
> 
>> 2017-10-19T16:31:19.769Z|1|vlog|INFO|opened log file
> c:/_2017/october/19/ovs/tests/testsuite.dir/1007/ovs-vswitchd.log
> 
>> 2017-10-19T16:31:19.771Z|2|ovs_numa|INFO|Discovered 4 CPU cores on
> NUMA node 0
> 
>> 2017-10-19T16:31:19.771Z|3|ovs_numa|INFO|Discovered 1 NUMA nodes
> and 4 CPU cores
> 
>>
> 2017-10-19T16:31:19.771Z|4|reconnect|INFO|unix:c:/_2017/october/19/ovs/tests/testsuite.dir/1007/db.sock:
> connecting...
> 
>>
> 2017-10-19T16:31:19.771Z|5|reconnect|INFO|unix:c:/_2017/october/19/ovs/tests/testsuite.dir/1007/db.sock:
> connected
> 
>> 2017-10-19T16:31:19.774Z|6|bridge|INFO|ovs-vswitchd (Open vSwitch)
> 2.8.90
> 
>> 2017-10-19T16:31:20.028Z|7|ofproto_dpif|INFO|dummy@ovs-dummy:
> Datapath supports recirculation
> 
>> 2017-10-19T16:31:20.028Z|8|ofproto_dpif|INFO|dummy@ovs-dummy: VLAN
> header stack length probed as 1
> 
>> 2017-10-19T16:31:20.028Z|9|ofproto_dpif|INFO|dummy@ovs-dummy: MPLS
> label stack length probed as 3
> 
>> 2017-10-19T16:31:20.028Z|00010|ofproto_dpif|INFO|dummy@ovs-dummy:
> Datapath supports truncate action
> 
>> 2017-10-19T16:31:20.028Z|00011|ofproto_dpif|INFO|dummy@ovs-dummy:
> Datapath supports unique flow ids
> 
>> 

Re: [ovs-discuss] Extremely slow tcp/udp connection ovs 2.6.1 , 4.4.0-87 Ubuntu 14.04

2017-11-07 Thread kevin parrikar
Thanks Guru.
For some reason OpenStack was creating tap interfaces on the compute node with an
MTU of 1500, whereas all other virtual/physical devices use 9000. After
changing the MTU of the tap, everything is good now. Thanks a lot.
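
For anyone hitting the same symptom, a minimal sketch of the check/fix
described above; the tap name is a placeholder, and as Guru notes below, with
tunnels the inner MTU still needs to be smaller than the physical MTU by at
least the size of the tunnel headers (roughly 50 bytes for VXLAN over IPv4, a
bit more with Geneve options):

    ip link show dev tapXXXXXXXX             # check the tap's current MTU
    ip link set dev tapXXXXXXXX mtu 9000     # match it to the rest of the path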

On Fri, Nov 3, 2017 at 11:30 PM, Guru Shetty  wrote:

>
>
> On 2 November 2017 at 13:07, kevin parrikar 
> wrote:
>
>> Hello All,
>> I am running OVS 2.6.1 on Ubuntu 14.04, kernel 4.4.0-87-generic, with the
>> Openstack Mitaka release and the OVS firewall driver (conntrack).
>>
>>
>> MTU is set to 9000 on both physical NICs, and ICMP succeeds with
>> ping -Mdo -s 8000;
>> however, TCP and UDP streams are too slow.
>>
>>
> If you are using tunnels, reduce the MTU of the inner packet by the size of
> the tunnel header.
>
>
>> TCP
>>
>> iperf -c 192.168.111.202
>> 
>> Client connecting to 192.168.111.202, TCP port 5001
>> TCP window size: 92.6 KByte (default)
>> 
>> [  3] local 192.168.111.199 port 44228 connected with 192.168.111.202
>> port 5001
>> [ ID] Interval   Transfer Bandwidth
>> [  3]  0.0-927.7 sec   175 KBytes  1.54 Kbits/sec
>>
>> UDP
>>
>> iperf -s -u
>> 
>> Server listening on UDP port 5001
>> Receiving 1470 byte datagrams
>> UDP buffer size:  122 KByte (default)
>> 
>> [  3] local 192.168.111.202 port 5001 connected with 192.168.111.199 port
>> 45028
>> [ ID] Interval   Transfer BandwidthJitter   Lost/Total
>> Datagrams
>> [  3]  0.0-10.0 sec  1.25 MBytes  1.05 Mbits/sec   0.086 ms0/  893
>> (0%)
>>
>> Any idea where the issue could be?
>>
>> Regards,
>> Kevin
>>
>>
>>
>>
>>
>


[ovs-discuss] OVN patch port for localnet can't be created

2017-11-07 Thread Hui Xiang
Hi folks,

  When I am running OVN on one of my nodes, with the gateway port connected
to the external network via localnet, the patch port can't be created between
br-ex (set by ovn-bridge-mappings) and br-int. After gdb, it seems the
result returned from SBREC_PORT_BINDING_FOR_EACH (binding, ctx->ovnsb_idl)
doesn't include the 'localnet' binding type, although it does show up in
ovn-sbctl list port_binding. Either I am missing some configuration to make
it work or this is a bug.

  Please have a look, thanks much.

external_ids: {hostname="node-1.domain.tld",
ovn-bridge-mappings="physnet1:br-ex", ovn-encap-ip="168.254.101.10",
ovn-encap-type=geneve, ovn-remote="tcp:192.168.0.2:6642",
rundir="/var/run/openvswitch",
system-id="88596f9f-e326-4e15-ae91-8cc014e7be86"}
iface_types : [geneve, gre, internal, lisp, patch, stt, system,
tap, vxlan]

(gdb) n
181 if (!strcmp(binding->type, "localnet")) {
4: binding->type = 0x55e7189608b0 "patch"
(gdb) display binding->logical_port
5: binding->logical_port = 0x55e718960650
"b3edbc9a-3248-43e5-b84e-01689a9c83e2"
(gdb) n
183 } else if (!strcmp(binding->type, "l2gateway")) {
5: binding->logical_port = 0x55e718960650
"b3edbc9a-3248-43e5-b84e-01689a9c83e2"
4: binding->type = 0x55e7189608b0 "patch"
(gdb)
193 continue;
5: binding->logical_port = 0x55e718960650
"b3edbc9a-3248-43e5-b84e-01689a9c83e2"
4: binding->type = 0x55e7189608b0 "patch"
(gdb)
179 SBREC_PORT_BINDING_FOR_EACH (binding, ctx->ovnsb_idl) {
5: binding->logical_port = 0x55e718960650
"b3edbc9a-3248-43e5-b84e-01689a9c83e2"
4: binding->type = 0x55e7189608b0 "patch"
(gdb)
181 if (!strcmp(binding->type, "localnet")) {
5: binding->logical_port = 0x55e7189622d0
"lrp-3a938edc-8809-4b79-b1a6-8145066e4fe3"
4: binding->type = 0x55e718962380 "patch"
(gdb)
183 } else if (!strcmp(binding->type, "l2gateway")) {
5: binding->logical_port = 0x55e7189622d0
"lrp-3a938edc-8809-4b79-b1a6-8145066e4fe3"
4: binding->type = 0x55e718962380 "patch"
(gdb)
193 continue;
5: binding->logical_port = 0x55e7189622d0
"lrp-3a938edc-8809-4b79-b1a6-8145066e4fe3"
4: binding->type = 0x55e718962380 "patch"
(gdb)
179 SBREC_PORT_BINDING_FOR_EACH (binding, ctx->ovnsb_idl) {
5: binding->logical_port = 0x55e7189622d0
"lrp-3a938edc-8809-4b79-b1a6-8145066e4fe3"
4: binding->type = 0x55e718962380 "patch"
(gdb)
181 if (!strcmp(binding->type, "localnet")) {
5: binding->logical_port = 0x55e718962820
"lrp-b3edbc9a-3248-43e5-b84e-01689a9c83e2"
4: binding->type = 0x55e7189628d0 "patch"
(gdb)
183 } else if (!strcmp(binding->type, "l2gateway")) {
5: binding->logical_port = 0x55e718962820
"lrp-b3edbc9a-3248-43e5-b84e-01689a9c83e2"
4: binding->type = 0x55e7189628d0 "patch"
(gdb)
193 continue;
5: binding->logical_port = 0x55e718962820
"lrp-b3edbc9a-3248-43e5-b84e-01689a9c83e2"
4: binding->type = 0x55e7189628d0 "patch"
(gdb)
179 SBREC_PORT_BINDING_FOR_EACH (binding, ctx->ovnsb_idl) {
5: binding->logical_port = 0x55e718962820
"lrp-b3edbc9a-3248-43e5-b84e-01689a9c83e2"
(gdb) n
181 if (!strcmp(binding->type, "localnet")) {
4: binding->type = 0x55e7189608b0 "patch"
(gdb) display binding->logical_port
5: binding->logical_port = 0x55e718960650
"b3edbc9a-3248-43e5-b84e-01689a9c83e2"
(gdb) n
183 } else if (!strcmp(binding->type, "l2gateway")) {
5: binding->logical_port = 0x55e718960650
"b3edbc9a-3248-43e5-b84e-01689a9c83e2"
4: binding->type = 0x55e7189608b0 "patch"
(gdb)
193 continue;
5: binding->logical_port = 0x55e718960650
"b3edbc9a-3248-43e5-b84e-01689a9c83e2"
4: binding->type = 0x55e7189608b0 "patch"
(gdb)
179 SBREC_PORT_BINDING_FOR_EACH (binding, ctx->ovnsb_idl) {
5: binding->logical_port = 0x55e718960650
"b3edbc9a-3248-43e5-b84e-01689a9c83e2"
4: binding->type = 0x55e7189608b0 "patch"
(gdb)
181 if (!strcmp(binding->type, "localnet")) {
5: binding->logical_port = 0x55e7189622d0
"lrp-3a938edc-8809-4b79-b1a6-8145066e4fe3"
4: binding->type = 0x55e718962380 "patch"
(gdb)
183 } else if (!strcmp(binding->type, "l2gateway")) {
5: binding->logical_port = 0x55e7189622d0
"lrp-3a938edc-8809-4b79-b1a6-8145066e4fe3"
4: binding->type = 0x55e718962380 "patch"
(gdb)
193 continue;
5: binding->logical_port = 0x55e7189622d0
"lrp-3a938edc-8809-4b79-b1a6-8145066e4fe3"
4: binding->type = 0x55e718962380 "patch"
(gdb)
179 SBREC_PORT_BINDING_FOR_EACH (binding, ctx->ovnsb_idl) {
5: binding->logical_port = 0x55e7189622d0
"lrp-3a938edc-8809-4b79-b1a6-8145066e4fe3"
4: binding->type = 0x55e718962380 "patch"
(gdb)
181 if (!strcmp(binding->type, "localnet")) {
5: binding->logical_port = 0x55e718962820
"lrp-b3edbc9a-3248-43e5-b84e-01689a9c83e2"
4: binding->type = 0x55e7189628d0 "patch"
(gdb)
183 } else if (!strcmp(binding->type, "l2gateway")) {
5: binding->logical_port = 0x55e718962820
"lrp-b3edbc9a-3248-43e5-b84e-01689a9c83e2"
4: binding->type =