Re: [ovs-discuss] OVS/DPDK Build Failing with MLX5 Adapter Enabled

2018-12-15 Thread Olga Shern
> But DPDK builds successfully by itself.  Any suggestions where the build is 
> breaking down?

What do you mean?

The question is whether the Mellanox PMD is compiled.  If it is compiled, then
libmnl is needed.
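
A quick way to check both points (a sketch, assuming the make-based DPDK 18.11
build and a Debian- or RHEL-style host; $RTE_SDK and $RTE_TARGET stand in for
your DPDK tree and target, here ppc_64-power8-linuxapp-gcc):

grep CONFIG_RTE_LIBRTE_MLX5_PMD $RTE_SDK/$RTE_TARGET/.config   # is the Mellanox PMD enabled?
pkg-config --libs libmnl                                       # prints -lmnl if libmnl is installed
# if libmnl is missing:  apt-get install libmnl-dev   (or: yum install libmnl-devel)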

Thanks,
Olga


-Original Message-
From: ovs-discuss-boun...@openvswitch.org 
[mailto:ovs-discuss-boun...@openvswitch.org] On Behalf Of David Christensen
Sent: Friday, December 14, 2018 2:54 AM
To: ovs-discuss@openvswitch.org
Subject: [ovs-discuss] OVS/DPDK Build Failing with MLX5 Adapter Enabled

I'm attempting to use DPDK 18.11 with Monday's OVS commit that adds DPDK 18.11
support (commit 03f3f9c0faf838a8506c3b5ce6199af401d13cb3).  When building OVS
with DPDK support I'm receiving a build error, shown below, because libmnl
symbols referenced by the Mellanox driver cannot be found at link time:

...
gcc -std=gnu99 -DHAVE_CONFIG_H -I.-I ./include -I ./include -I ./lib 
-I ./lib-Wstrict-prototypes -Wall -Wextra -Wno-sign-compare 
-Wpointer-arith -Wformat -Wformat-security -Wswitch-enum -Wunused-parameter 
-Wbad-function-cast -Wcast-align -Wstrict-prototypes -Wold-style-definition 
-Wmissing-prototypes -Wmissing-field-initializers -fno-strict-aliasing -Wshadow 
-I/home/davec/src/p9-dpdk-perf/dpdk/ppc_64-power8-linuxapp-gcc/include
-D_FILE_OFFSET_BITS=64  -g -O2 -MT vswitchd/xenserver.o -MD -MP -MF 
$depbase.Tpo -c -o vswitchd/xenserver.o vswitchd/xenserver.c &&\ mv -f 
$depbase.Tpo $depbase.Po
/bin/sh ./libtool  --tag=CC   --mode=link gcc -std=gnu99 
-Wstrict-prototypes -Wall -Wextra -Wno-sign-compare -Wpointer-arith -Wformat 
-Wformat-security -Wswitch-enum -Wunused-parameter -Wbad-function-cast 
-Wcast-align -Wstrict-prototypes -Wold-style-definition -Wmissing-prototypes 
-Wmissing-field-initializers -fno-strict-aliasing -Wshadow 
-I/home/davec/src/p9-dpdk-perf/dpdk/ppc_64-power8-linuxapp-gcc/include
-D_FILE_OFFSET_BITS=64  -g -O2
-L/home/davec/src/p9-dpdk-perf/dpdk/ppc_64-power8-linuxapp-gcc/lib
-Wl,--whole-archive,-ldpdk,--no-whole-archive  -o vswitchd/ovs-vswitchd 
vswitchd/bridge.o vswitchd/ovs-vswitchd.o vswitchd/system-stats.o 
vswitchd/xenserver.o ofproto/libofproto.la lib/libsflow.la 
lib/libopenvswitch.la -ldpdk -ldl -lnuma -latomic -lpthread -lrt -lm  -lnuma
libtool: link: gcc -std=gnu99 -Wstrict-prototypes -Wall -Wextra 
-Wno-sign-compare -Wpointer-arith -Wformat -Wformat-security -Wswitch-enum 
-Wunused-parameter -Wbad-function-cast -Wcast-align -Wstrict-prototypes 
-Wold-style-definition -Wmissing-prototypes -Wmissing-field-initializers 
-fno-strict-aliasing -Wshadow 
-I/home/davec/src/p9-dpdk-perf/dpdk/ppc_64-power8-linuxapp-gcc/include
-D_FILE_OFFSET_BITS=64 -g -O2 -Wl,--whole-archive -Wl,-ldpdk 
-Wl,--no-whole-archive -o vswitchd/ovs-vswitchd vswitchd/bridge.o 
vswitchd/ovs-vswitchd.o vswitchd/system-stats.o vswitchd/xenserver.o 
-L/home/davec/src/p9-dpdk-perf/dpdk/ppc_64-power8-linuxapp-gcc/lib
ofproto/.libs/libofproto.a
/home/davec/src/p9-dpdk-perf/ovs/lib/.libs/libsflow.a
lib/.libs/libsflow.a lib/.libs/libopenvswitch.a -ldpdk -ldl -latomic -lpthread 
-lrt -lm -lnuma
/home/davec/src/p9-dpdk-perf/dpdk/ppc_64-power8-linuxapp-gcc/lib/librte_pmd_mlx5.a(mlx5_flow_tcf.o):
 
In function `flow_tcf_nl_ack':
/home/davec/src/p9-dpdk-perf/dpdk/drivers/net/mlx5/mlx5_flow_tcf.c:3753: 
undefined reference to `mnl_socket_get_portid'
/home/davec/src/p9-dpdk-perf/dpdk/drivers/net/mlx5/mlx5_flow_tcf.c:3765: 
undefined reference to `mnl_socket_sendto'
/home/davec/src/p9-dpdk-perf/dpdk/drivers/net/mlx5/mlx5_flow_tcf.c:3777: 
undefined reference to `mnl_socket_recvfrom'
/home/davec/src/p9-dpdk-perf/dpdk/drivers/net/mlx5/mlx5_flow_tcf.c:3790: 
undefined reference to `mnl_cb_run'
/home/davec/src/p9-dpdk-perf/dpdk/drivers/net/mlx5/mlx5_flow_tcf.c:3777: 
undefined reference to `mnl_socket_recvfrom'
...

Both the OVS and DPDK builds work individually but I receive the error after 
running "./configure --with-dpdk=; make" to build OVS with 
DPDK.  I ran across this post on the DPDK list regarding libmnl, indicating 
there is a dependency issue:

http://mails.dpdk.org/archives/dev/2018-July/108573.html

But DPDK builds successfully by itself.  Any suggestions where the build is 
breaking down?
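
For what it's worth, one thing I have not tried yet is forcing the missing
library onto the OVS link line at configure time.  A rough sketch (not a
confirmed fix for the underlying dependency issue; $DPDK_BUILD stands in for
the DPDK build directory I already pass to --with-dpdk, and LIBS is the
standard autoconf variable for extra linker inputs):

./configure --with-dpdk=$DPDK_BUILD LIBS="-lmnl"
make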

Dave

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss

[ovs-discuss] assert failure for recirc_id_node_unref

2018-12-15 Thread 汪翰林
Hi, we got an assert failure after Open vSwitch had been running for a long
time, maybe some weeks or months. Open vSwitch 2.8.2 was running with the
kernel datapath, and conntrack (CT) was also in use.


error stack information:
(gdb) bt
#0  0x7f6e9c66efff in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x7f6e9c67042a in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2  0x7f6e9d45732e in ovs_abort_valist (err_no=err_no@entry=0, 
format=format@entry=0x7f6e9d4c7750 "%s: assertion %s failed in %s()",
args=args@entry=0x7f6e9a534b50) at lib/util.c:343
#3  0x7f6e9d45f080 in vlog_abort_valist (module_=, 
message=0x7f6e9d4c7750 "%s: assertion %s failed in %s()", 
args=args@entry=0x7f6e9a534b50)
at lib/vlog.c:1209
#4  0x7f6e9d45f114 in vlog_abort (module=module@entry=0x7f6e9d754720 
, message=message@entry=0x7f6e9d4c7750 "%s: assertion %s failed in 
%s()")
at lib/vlog.c:1223
#5  0x7f6e9d45707c in ovs_assert_failure (where=where@entry=0x7f6e9d9cecb2 
"./lib/ovs-atomic.h:522",
function=function@entry=0x7f6e9d9cefd0 <__func__.15116> 
"ovs_refcount_unref", condition=condition@entry=0x7f6e9d9ca677 "old_refcount > 
0") at lib/util.c:80
#6  0x7f6e9d9b1cea in ovs_refcount_unref (refcount=0x7f6e7022d508) at 
./lib/ovs-atomic.h:522
#7  recirc_id_node_unref (node_=0x7f6e7022d4e0) at 
ofproto/ofproto-dpif-rid.c:308
#8  0x7f6e9d9b8515 in ukey_delete__ (ukey=0x7f6e34608de0) at 
ofproto/ofproto-dpif-upcall.c:1874
#9  0x7f6e9d422bc6 in ovsrcu_call_postponed () at lib/ovs-rcu.c:293
#10 0x7f6e9d422de4 in ovsrcu_postpone_thread (arg=) at 
lib/ovs-rcu.c:308
#11 0x7f6e9d4241d7 in ovsthread_wrapper (aux_=) at 
lib/ovs-thread.c:348
#12 0x7f6e9ceee494 in start_thread () from 
/lib/x86_64-linux-gnu/libpthread.so.0
#13 0x7f6e9c724acf in clone () from /lib/x86_64-linux-gnu/libc.so.6


Some key variables:
(gdb) p *node_
$2 = {exp_node = {prev = 0x7f6dc8158ac0, next = 0x7f6e002058e0}, id_node = 
{next = {p = 0x7f6d6c23ab60}}, metadata_node = {next = {p = 0x0}}, id = 
272772051,
  hash = 1030066437, refcount = {count = 4294967295}, state = {table_id = 151 
'\227', ofproto_uuid = {parts = {369391943, 132794840, 2170543616, 1588084633}},
metadata = {tunnel = {ip_dst = 1773191690, ipv6_dst = {__in6_u = 
{__u6_addr8 = '\000' , __u6_addr16 = {0, 0, 0, 0, 0, 0, 0, 
0}, __u6_addr32 = {
  0, 0, 0, 0}}}, ip_src = 2216018442, ipv6_src = {__in6_u = 
{__u6_addr8 = '\000' , __u6_addr16 = {0, 0, 0, 0, 0, 0, 0, 0},
__u6_addr32 = {0, 0, 0, 0}}}, tun_id = 11102217506398404608, flags 
= 12, ip_tos = 0 '\000', ip_ttl = 63 '?', tp_src = 31661, tp_dst = 46354, 
gbp_id = 0,
gbp_flags = 0 '\000', pad1 = "\000\000\000\000", metadata = {present = 
{map = 0, len = 0 '\000'}, tab = 0x558e9665df70, opts = {
u8 = '\000' , gnv = {{opt_class = 0, type = 0 
'\000', length = 0 '\000', r3 = 0 '\000', r2 = 0 '\000',
r1 = 0 '\000'} , metadata = 64, regs = 
{8204788198332746544, 1501172204913692874, 0, 0, 0, 0, 77309411328, 5018}, 
in_port = 0},
stack = 0x0, stack_size = 0, mirrors = 0, conntracked = true, xport_uuid = 
{parts = {3224418024, 2139049594, 3033757766, 157758048}}, ofpacts = 0x0,
ofpacts_len = 0, action_set = 0x0, action_set_len = 0}}
(gdb) up
#8  0x7f6e9d9b8515 in ukey_delete__ (ukey=0x7f6e34608de0) at 
ofproto/ofproto-dpif-upcall.c:1874
1874ofproto/ofproto-dpif-upcall.c: No such file or directory.
(gdb) p *ukey
$3 = {cmap_node = {next = {p = 0x0}}, key = 0x7f6e34608eb0, key_len = 160, mask 
= 0x7f6e34609130, mask_len = 192, ufid = {u32 = {1219306161, 4266412036, 
2182729822,
  3747433939}, u64 = {lo = 18324100167100080817, hi = 
16095106214108188766}}, ufid_present = true, hash = 3170407046, pmd_id = 
2147483647, mutex = {lock = {
  __data = {__lock = 0, __count = 0, __owner = 0, __nusers = 0, __kind = 2, 
__spins = 0, __elision = 0, __list = {__prev = 0x0, __next = 0x0}},
  __size = '\000' , "\002", '\000' , 
__align = 0}, where = 0x7f6e9d4a5520 ""}, stats = {n_packets = 0, 
n_bytes = 0,
used = 0, tcp_flags = 0}, created = 5178254656, dump_seq = 71341130042, 
reval_seq = 71340990402, state = UKEY_DELETED, state_thread = 169,
  state_where = 0x7f6e9d9cf7c8 "ofproto/ofproto-dpif-upcall.c:1892", actions = 
{p = 0x7f6e342346a0}, xcache = 0x0, keybuf = {buf = {keybuf = {1310728, 
272772051,
1245192, 0, 131080, 65538, 196616, 10, 983048, 0, 1441800, 34, 1507334, 
23, 1572872, 0, 1638420, 0, 0, 0, 0, 1703956, 2498998538, 4209094922, 
2421238944,
4294941446, 262160, 2201884410, 4294897406, 4294967295, 393222, 8, 
458768, 2498998538, 4209094922, 4194310, 589832, 2421238944, 1179654, 6144, 
4294941446,
262160, 2, 385482752, 4009657150, 393222, 8, 458768, 689751050, 
2498998538, 3932166, 589832, 3838893335, 1179654, 4608, 8, 458768, 1861202186, 
756859914,
4194310, 589832, 1108906189, 1179654, 512, 1310728, 0, 1245192, 0, 
131080, 0, 1048636, 12, 0, 2584936448, 65544, 4011311626, 131080, 177

Re: [ovs-discuss] QoS Egress Port Traffic Shaping (linux-hfsc ) not Working as Expected for UDP Traffic

2018-12-15 Thread Ramzah Rehman
Thanks, Ben. I figured out that even without applying any QoS or queueing, UDP
traffic sees a throughput of around 175 Mbps. I guess the issue is not with
QoS or queueing.
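
One thing worth double-checking before digging into OVS is the UDP measurement
itself: with iperf-style tools the UDP rate is whatever the sender offers, so
the result depends heavily on the target bitrate and datagram size options
rather than on the switch. A minimal sketch, assuming iperf3 (the thread does
not say which tool or flags were used; <server-two-ip> is a placeholder):

# on server two
iperf3 -s
# on server one: offer 900 Mbit/s of UDP with 1400-byte datagrams for 30 seconds
iperf3 -c <server-two-ip> -u -b 900M -l 1400 -t 30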


On Sat, Dec 15, 2018, 1:08 AM Ben Pfaff wrote:
> On Fri, Dec 14, 2018 at 05:53:05PM +0500, Ramzah Rehman wrote:
> > I have two servers connected via a 1 Gbps cable. I have installed OVS 2.8 on
> > server one. I have a switch named OVS_BR_LEAF_1 on it. I experimented with
> > QoS traffic shaping. I have connected eth3 of server one to the OVS switch on
> > port 2. I want traffic going from server one to server two to observe the QoS
> > traffic rate that I specify.
> >
> > I added the following configuration:
> >
> > #ovs-vsctl -- set port eth3 qos=@newqos -- --id=@newqos create qos
> > type=linux-hfsc other-config:max-rate=10 queues:1=@q1 -- --id=@q1
> > create queue other-config:min-rate=x other-config:max-rate=x
> >
> > I have the following flow entries in my switch:
> > #ovs-ofctl add-flow OVS_BR_LEAF_1
> > priority=6000,in_port=LOCAL,actions=set_queue:1,normal
> > #ovs-ofctl add-flow OVS_BR_LEAF_1 priority=6000,in_port=2,actions=normal
> >
> > Then I checked throughput for TCP traffic (from server one to two) via
> > iperf and got the following results:
> >
> > got 846 Mbps for x=900 Mbps
> > got 757 Mbps for x=800 Mbps
> > got 653 Mbps for x=700 Mbps
> > got 428 Mbps for x=450 Mbps
> > got 381 Mbps for x=400 Mbps
> > got 287 Mbps for x=300 Mbps
> > got 239 Mbps for x=250 Mbps
> > got 192 Mbps for x=200 Mbps
> >
> > Seems like traffic shaping is working fine for TCP traffic. However, for
> > UDP, I got these results:
> >
> > 148 Mbps for x=800 Mbps
> > 185 Mbps for x=300 Mbps
> > 131 Mbps for x=200 Mbps
> > 93 Mbps for x=100 Mbps
> > 77 Mbps for x=80 Mbps
> > 28.9 Mbps for x=30 Mbps
> > 19.3 Mbps for x=20 Mbps
> >
> > For UDP, traffic shaping works fine up to x=100 Mbps, but for higher values
> > it shows unexpected behavior. Is there an explanation?
>
> This FAQ entry probably applies.
>
> Q: I configured QoS, correctly, but my measurements show that it isn't
>    working as well as I expect.
>
> A: With the Linux kernel, the Open vSwitch implementation of QoS has two
>    aspects:
>
>    - Open vSwitch configures a subset of Linux kernel QoS features,
>      according to what is in OVSDB.  It is possible that this code has
>      bugs.  If you believe that this is so, then you can configure the
>      Linux traffic control (QoS) stack directly with the "tc" program.
>      If you get better results that way, you can send a detailed bug
>      report to b...@openvswitch.org.
>
>      It is certain that Open vSwitch cannot configure every Linux kernel
>      QoS feature.  If you need some feature that OVS cannot configure,
>      then you can also use "tc" directly (or add that feature to OVS).
>
>    - The Open vSwitch implementation of OpenFlow allows flows to be
>      directed to particular queues.  This is pretty simple and unlikely
>      to have serious bugs at this point.
>
>    However, most problems with QoS on Linux are not bugs in Open vSwitch
>    at all.  They tend to be either configuration errors (please see the
>    earlier questions in this section) or issues with the traffic control
>    (QoS) stack in Linux.  The Open vSwitch developers are not experts on
>    Linux traffic control.  We suggest that, if you believe you are
>    encountering a problem with Linux traffic control, that you consult
>    the tc manpages (e.g. tc(8), tc-htb(8), tc-hfsc(8)), web resources
>    (e.g. http://lartc.org/), or mailing lists
>    (e.g. http://vger.kernel.org/vger-lists.html#netdev).
>
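
Following the FAQ's suggestion, the kernel shaper can also be exercised
directly on the egress interface, bypassing the OVS QoS configuration, to see
whether UDP behaves any differently. A rough tc-hfsc sketch (eth3 and the
200 Mbit/s rate are illustrative, not taken from the thread):

# replace the root qdisc on eth3 with HFSC, sending unclassified traffic to class 1:1
tc qdisc add dev eth3 root handle 1: hfsc default 1
# shape class 1:1 to 200 Mbit/s (service curve and upper limit)
tc class add dev eth3 parent 1: classid 1:1 hfsc sc rate 200mbit ul rate 200mbit
# inspect counters, then undo when done
tc -s class show dev eth3
tc qdisc del dev eth3 root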
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


[ovs-discuss] [openvswitch 2.7.0] testsuite: 419 1003 1006 1007 1008 1009 1010 failed

2018-12-15 Thread ha Fa
Hi,

I kindly request your help; my situation is critical. I have spent two months
trying to understand the error I am facing while implementing stochastic
switching in OVS 2.7: I want the select group type in OVS 2.7 to randomly
select a bucket from among the live buckets, so that traffic belonging to the
same flow is partitioned across multiple paths from source to destination.

The code modification I made is below:

#include <stdlib.h>
#include <time.h>

static bool is_srand_initialized = false;

static void
xlate_default_select_group(struct xlate_ctx *ctx, struct group_dpif *group)
{
    struct flow_wildcards *wc = ctx->wc;
    struct ofputil_bucket *bucket;
    uint32_t basis;

    ctx->xout->slow |= SLOW_CONTROLLER;
    basis = flow_hash_symmetric_l4(&ctx->xin->flow, 0);
    flow_mask_hash_fields(&ctx->xin->flow, wc, NX_HASH_FIELDS_SYMMETRIC_L4);
    bucket = group_best_live_bucket(ctx, group, basis);
    if (bucket) {
        xlate_group_bucket(ctx, bucket);
        xlate_group_stats(ctx, group, bucket);
    } else if (ctx->xin->xcache) {
        ofproto_group_unref(&group->up);
    }
}

static void
xlate_select_group(struct xlate_ctx *ctx, struct group_dpif *group)
{
    const char *selection_method = group->up.props.selection_method;

    if (ctx->was_mpls) {
        ctx_trigger_freeze(ctx);
    }

    ctx->xout->slow |= SLOW_CONTROLLER;
    xlate_commit_actions(ctx);
    xlate_default_select_group(ctx, group);
}

static struct ofputil_bucket *
group_best_live_bucket(const struct xlate_ctx *ctx,
                       const struct group_dpif *group,
                       uint32_t basis)
{
    struct ofputil_bucket *bucket;
    uint32_t total_weight = 0;

    if (!is_srand_initialized) {
        srand((unsigned int) time(NULL));
        is_srand_initialized = true;
    }

    /* Sum the weights of all live buckets. */
    LIST_FOR_EACH (bucket, list_node, &group->up.buckets) {
        if (bucket_is_alive(ctx, bucket, 0)) {
            total_weight += bucket->weight;
        }
    }
    if (!total_weight) {
        return NULL;        /* No live buckets; avoids division by zero. */
    }

    /* Pick a random point in [1, total_weight] and return the first live
     * bucket whose cumulative weight reaches it. */
    uint32_t rand_num = rand() % total_weight + 1;
    uint32_t summed_weight = 0;

    LIST_FOR_EACH (bucket, list_node, &group->up.buckets) {
        if (bucket_is_alive(ctx, bucket, 0)) {
            summed_weight += bucket->weight;
            if (rand_num <= summed_weight) {
                return bucket;
            }
        }
    }
    return NULL;
}
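
For reference, once the modified switch is rebuilt, a quick way to exercise
this path is to install a select group with weighted buckets and steer a flow
into it (a sketch; the bridge name, ports, and weights are illustrative, and
the bucket syntax may need adjusting for your ovs-ofctl version):

#ovs-ofctl -O OpenFlow13 add-group br0 'group_id=1,type=select,bucket=weight:2,output:1,bucket=weight:1,output:2'
#ovs-ofctl -O OpenFlow13 add-flow br0 'priority=100,in_port=3,actions=group:1'
#ovs-ofctl -O OpenFlow13 dump-group-stats br0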

After that I rebuilt and reinstalled OVS 2.7:

#cd openvswitch-2.7.0/
#apt-get install build-essential fakeroot
#apt-get install debhelper autoconf automake libssl-dev pkg-config bzip2 \
    openssl python-all procps python-qt4 python-zopeinterface \
    python-twisted-conch dh-autoreconf
#fakeroot debian/rules binary

I received an error.

## ------------- ##
## Test results. ##
## ------------- ##

ERROR: All 7 tests were run, 7 failed unexpectedly.

Makefile:6264: recipe for target 'check-local' failed
make[5]: *** [check-local] Error 1
make[5]: Leaving directory '/root/openvswitch-2.7.0'
Makefile:5402: recipe for target 'check-am' failed
make[4]: *** [check-am] Error 2
make[4]: Leaving directory '/root/openvswitch-2.7.0'
Makefile:5111: recipe for target 'check-recursive' failed
make[3]: *** [check-recursive] Error 1
make[3]: Leaving directory '/root/openvswitch-2.7.0'
Makefile:5406: recipe for target 'check' failed
make[2]: *** [check] Error 2
make[2]: Leaving directory '/root/openvswitch-2.7.0'

## ----------------------------- ##
## openvswitch 2.7.0 test suite. ##
## ----------------------------- ##

mpls-xlate

 419: MPLS xlate action                                         FAILED (mpls-xlate.at:70)

ofproto-dpif

1003: ofproto-dpif - group actions have no effect afterwards    FAILED (ofproto-dpif.at:341)
1006: ofproto-dpif - select group                               FAILED (ofproto-dpif.at:387)
1007: ofproto-dpif - select group with watch port               FAILED (ofproto-dpif.at:400)
1008: ofproto-dpif - select group with weight                   FAILED (ofproto-dpif.at:412)
1009: ofproto-dpif - select group with hash selection method    FAILED (ofproto-dpif.at:435)
1010: ofproto-dpif - select group with dp_hash selection method FAILED (ofproto-dpif.at:478)

Then, at the end, it said:
debian/rules:37: recipe for target 'override_dh_auto_test' failed
make[1]: *** [override_dh_auto_test] Error 1
make[1]: Leaving directory '/root/openvswitch-2.7.0'
debian/rules:25: recipe for target 'binary' failed
make: *** [binary] Error 2
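
For more detail than this summary, the failing tests can be re-run
individually with verbose output (a sketch using the standard OVS autotest
interface; the detailed per-test output is also collected in
tests/testsuite.log):

#make check TESTSUITEFLAGS='419 1003 1006 1007 1008 1009 1010 -v'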


The full test results are in the attached file.

Your help is greatly appreciated.
Thank you.
## ------------- ##
## Test results. ##
## ------------- ##

ERROR: All 7 tests were run,
7 failed unexpectedly.

Makefile:6264: recipe for target 'check-local' failed
make[5]: *** [check-local] E