Re: [vpp-dev] Timer expiry spikes in TW timer implementation

2018-12-12 Thread shaligram.prakash
Thanks, Dave!

I could see that precision was indeed the problem. I think it is because
slots are missed due to the check below in tw_timer_expire_timers_internal(...),
which returns early when the current time is behind the calculated next_run_time:

  /* Shouldn't happen */
  if (PREDICT_FALSE (now < tw->next_run_time))
    return callback_vector_arg;

This leads to the nticks loop - for (i = 0; i < nticks; i++) - running more
than once on a subsequent wheel advance, internally calling the registered
callbacks nticks times, depending on conditions.
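To make the failure mode concrete, here is a self-contained toy (not the
vppinfra code; all names are invented) showing the catch-up behaviour: an
early call hits the guard and returns without advancing the wheel, so the
next call sees several elapsed ticks and runs the per-tick work -- and
hence the callbacks -- that many times.

#include <stdio.h>

typedef struct { double last_run, next_run, ticks_per_second; } toy_wheel_t;

static void
toy_expire (toy_wheel_t *tw, double now)
{
  if (now < tw->next_run)       /* the "Shouldn't happen" early-call guard */
    return;
  int nticks = (int) ((now - tw->last_run) * tw->ticks_per_second);
  for (int i = 0; i < nticks; i++)
    printf ("tick work (callbacks may fire here)\n");
  tw->last_run += nticks / tw->ticks_per_second;
  tw->next_run = tw->last_run + 1.0 / tw->ticks_per_second;
}

int
main (void)
{
  toy_wheel_t tw = { .last_run = 0.0, .next_run = 0.1, .ticks_per_second = 10 };
  toy_expire (&tw, 0.099);      /* early: guard returns, nothing advances */
  toy_expire (&tw, 0.205);      /* late: nticks == 2, tick work runs twice */
  return 0;
}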

I could not improve much on precision given the system constraints. I tried
another approach, in which the vector returned by
tw_timer_expire_timers_internal(...) is used instead of the callback mechanism.
That gave the result I expected, i.e. vec_len(expired_timers) is 1. I was
expecting it to behave like the callback path and return a vector of length
nticks, since internally it must be building the same vector, but that is not
the case.

Do you see any difference in how the callback mechanism and the returned-vector
path are implemented in tw_timer_expire_timers_XXX(...)?
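For reference, a hedged usage sketch of the two styles, modeled on
test_tw_timer.c (verify the names against your tree; "wheel" and "expired"
are local variables here, not part of the API):

/* Callback style: the callback registered at
   tw_timer_wheel_init_2t_1w_2048sl() time fires from inside this call. */
tw_timer_expire_timers_2t_1w_2048sl (&wheel, now);

/* Vector style: no callback fires; the handles come back to the caller. */
u32 *expired = 0;
expired = tw_timer_expire_timers_vec_2t_1w_2048sl (&wheel, now, expired);
for (u32 i = 0; i < vec_len (expired); i++)
  {
    u32 pool_index = expired[i] & 0x7FFFFFFF; /* low 31 bits, per test code */
    u32 timer_id = expired[i] >> 31;          /* 2t wheel: 1-bit timer id */
    /* handle expiry for (pool_index, timer_id); restart if periodic */
  }
vec_reset_length (expired); /* reuse the vector across calls */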

Thanks,
-Shaligram


On Sun, 9 Dec 2018 at 05:32, Dave Barach (dbarach) 
wrote:

> Probably due to the vpp process sitting in an epoll_pwait(). See
> .../src/vlib/unix/input.c:linux_epoll_input_inline(...).
>
>
>
> The tw_timer code does its best, but if you don’t call the
> tw_timer_expire_timers_XXX(...) method at precise intervals, it won’t yield
> the sort of precision you’re looking for.
>
>
>
> If you configure the timer wheel at 100ms granularity you should expect
> timers to pop within +-50 ms of when you expect them to pop, even if you
> call the tw_timer_expire_timers_XXX method precisely. The code picks the
> nearest slot, and if slots are 100ms wide, well, you get the idea.
>
>
>
> Dave
>
>
>
> *From:* vpp-dev@lists.fd.io  *On Behalf Of *
> shaligra...@gmail.com
> *Sent:* Saturday, December 8, 2018 1:54 PM
> *To:* vpp-dev@lists.fd.io
> *Subject:* [vpp-dev] Timer expiry spikes in TW timer implementation
>
>
>
> Hi,
>
> I am trying to implement a single 10 sec precision periodic timer using TW.
> Below is the config, using test_tw_timer.c as a reference (really
> helpful).
>
> tw_timer_wheel_init_2t_1w_2048sl with 0.1 sec granularity.
> tw_timer_start_2t_1w_2048sl with 100 as ticks.
> I have used the process node approach,
> calling tw_timer_expire_timers_2t_1w_2048sl every 0.1 sec
> via vlib_process_suspend(...). In the callback, the timer is started again.
>
> My expectation was that I would receive a callback
> to expired_timer_single_callback() every 10 sec.
> In practice, the callbacks do arrive 10 sec apart, but
> sometimes there are unwanted extra callbacks at irregular intervals for the
> same handle (i.e. timer_id, pool_index). Note that this happens on both the
> main core and the workers. In the erroneous cases, the time diff from the
> previous callback is on the order of ~0.1 ms (expected 10 sec).
>
> Any pointers would be of great help.
>
> Thanks,
> -Shaligram
>


Re: [vpp-dev] vpp router plugin threads? (vpp + router + netlink + FRRouting)

2018-12-12 Thread Brian Dickson
FYI:

I have tried to build these in the last couple of days.

The current "version" of vppsb/netlink and vppsb/router seem to not compile
with any version.

I'm not sure who is maintaining the code or updating the repo, but it
doesn't appear to be using git branches at all, and that breaks things
for everyone.
Please don't do that.
If you are updating stuff that other people use, please be polite and keep
your edits on a branch until they compile, have been reviewed, and are merged
(regardless of who reviews them).

(The git HEAD entries are 3a3b77f27b6d1469c5e1628cb508e193df20d6a0 and
9791ab9fa07347fd063a55dc44cc1b0b67ee2292
for the bad and good versions respectively.)

The currently bundled version of dpdk also seems to break/explode/implode,
unless the "make" target for the dpdk-install-dep (sp?) step is run
separately.

The older version of vppsb/router has a single bug, easily fixed, which must
be fixed for the router plugin to build.
(This error points to the fix: router/tap_inject_netlink.c:163:43: error:
too many arguments to function 'vnet_unset_ip6_ethernet_neighbor'.) A sketch
of the fix follows.
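A hedged sketch of what that one-line fix looks like, assuming (as the error
message suggests) that the VPP API dropped the link-layer-address parameters;
the variable names (sw_if_index, ip6, ll_addr) are illustrative, so check
vnet/ip/ip6_neighbor.h in your tree for the exact signature:

/* before (compiled against older VPP): */
vnet_unset_ip6_ethernet_neighbor (vm, sw_if_index, &ip6,
                                  ll_addr, sizeof (ll_addr));

/* after (18.07/18.10): drop the link-layer-address arguments */
vnet_unset_ip6_ethernet_neighbor (vm, sw_if_index, &ip6);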

If the older version of vppsb is used, AND the dpdk dep thing is installed
first, AND the minor bug is fixed, 18.07 and 18.10 do compile.

I have found it necessary to pass some CFLAGS and LDFLAGS when building
netlink-install and router-install:

make CFLAGS="-fPIC -std=gnu99 -I. -I/usr/local/src/vpp/netlink
-I/usr/local/src/vpp/router -I/usr/local/src/vpp/router/router"
LDFLAGS="-L/usr/local/src/vpp/build-root/install-vpp-native/netlink/lib64"
V=0 PLATFORM=vpp TAG=vpp netlink-install router-install



It'd be nice if these relatively minor things could be cleaned up,
independent of any actual development going on in these two plugins.

Thanks,
Brian


On Wed, Dec 12, 2018 at 4:12 AM John Biscevic 
wrote:

> Hi Brian,
>
>
> I've successfully built the router plugin on 18.10 and "19.01"
>
>
> What errors are you encountering when you attempt to build it?

Re: [vpp-dev] vppinfra vec alignment

2018-12-12 Thread Damjan Marion via Lists.Fd.Io
That should be corrected to:

Users may specify the alignment for the first data element via the
vec*_aligned macros.

Other elements will be aligned only if the data structure itself is aligned,
i.e. by the CLIB_CACHE_LINE_ALIGN_MARK(...) macro...
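A hedged sketch of the two mechanisms, using an invented per-thread struct
built on a vppinfra vector:

#include <vppinfra/vec.h>
#include <vppinfra/cache.h>

typedef struct
{
  CLIB_CACHE_LINE_ALIGN_MARK (cacheline0); /* aligns and pads the struct */
  u64 counter;
} per_thread_data_t;

per_thread_data_t *ptd = 0;

/* vec_validate_aligned() only guarantees the alignment of element 0 */
vec_validate_aligned (ptd, 7, CLIB_CACHE_LINE_BYTES);

/* elements 1..7 stay cache-line aligned only because the marker macro
   rounds sizeof (per_thread_data_t) up to a whole cache line */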

— 
Damjan

> On Dec 12, 2018, at 2:04 PM, Mohammed Alshohayeb  wrote:
> 
> Hello everyone,
> 
> From what I understand, the vec_*_aligned restriction applies to the
> vector hdr/user hdr, not the individual elements; however, this page says:
> 
> Typically, the user header is not present. User headers allow for other data
> structures to be built atop vppinfra vectors. Users may specify the alignment
> for data elements via the vec*_aligned macros.
> 
> Am I correct in my assumption?


[vpp-dev] vppinfra vec alignment

2018-12-12 Thread Mohammed Alshohayeb
Hello everyone,

From what I understand, the vec_*_aligned restriction applies to the
vector hdr/user hdr, not the individual elements; however, this page says:

Typically, the user header is not present. User headers allow for other
data structures to be built atop vppinfra vectors. Users may specify the
alignment for data elements via the vec*_aligned macros.

Am I correct in my assumption?


Re: [vpp-dev] vpp router plugin threads? (vpp + router + netlink + FRRouting)

2018-12-12 Thread Brian Dickson
On Wed, Dec 12, 2018 at 4:12 AM John Biscevic 
wrote:

> Hi Brian,
>
>
> I've successfully built the router plugin on 18.10 and "19.01"
>
>
> What errors are you encountering when you attempt to build it?
>
When following the instructions (doing the steps for running make, found in
router/README.md, right after all the ln -sf steps):

Building netlink in /usr/local/src/vpp/build-root/build-vpp_debug-native/netlink

make[1]: Entering directory
`/usr/local/src/vpp/build-root/build-vpp_debug-native/netlink'

  CC   librtnl/netns.lo

  CC   librtnl/rtnl.lo

  CC   librtnl/mapper.lo

  CC   test/test.lo

/usr/local/src/vpp/build-data/../netlink/librtnl/rtnl.c: In function
'rtnl_socket_open':

/usr/local/src/vpp/build-data/../netlink/librtnl/rtnl.c:269:39: error:
'RTNLGRP_MPLS_ROUTE' undeclared (first use in this function)

 grpmask(RTNLGRP_NOTIFY) | grpmask(RTNLGRP_MPLS_ROUTE),

   ^

/usr/local/src/vpp/build-data/../netlink/librtnl/rtnl.c:269:39: note: each
undeclared identifier is reported only once for each function it appears in

/usr/local/src/vpp/build-data/../netlink/librtnl/netns.c:69:5: error:
'RTA_VIA' undeclared here (not in a function)

   _(RTA_VIA, via, 1)\

 ^

/usr/local/src/vpp/build-data/../netlink/librtnl/netns.c:82:13: note: in
definition of macro '_'

 .type = t, .unique = u, \

 ^

/usr/local/src/vpp/build-data/../netlink/librtnl/netns.c:86:3: note: in
expansion of macro 'ns_foreach_rta'

   ns_foreach_rta

   ^

make[1]: *** [librtnl/rtnl.lo] Error 1

make[1]: *** Waiting for unfinished jobs

make[1]: *** [librtnl/netns.lo] Error 1

make[1]: Leaving directory
`/usr/local/src/vpp/build-root/build-vpp_debug-native/netlink'

make: *** [netlink-build] Error 2
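FWIW, RTNLGRP_MPLS_ROUTE and RTA_VIA come from linux/rtnetlink.h and only
appeared around kernel 4.1, so building against older kernel headers produces
exactly these errors. A hedged compat-guard sketch (the numeric values below
are the upstream ones; verify them against a current linux/rtnetlink.h before
relying on this):

/* compat guard for pre-4.1 kernel headers */
#ifndef RTA_VIA
#define RTA_VIA 18
#endif
#ifndef RTNLGRP_MPLS_ROUTE
#define RTNLGRP_MPLS_ROUTE 27
#endif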



Is this something you encountered? How did you resolve it?

Brian




Re: [vpp-dev] [vppsb-dev] vpp router plugin threads? (vpp + router + netlink + FRRouting)

2018-12-12 Thread ko

Hello Brian,


If you believe that some process interferes with your worker threads,
try isolating some CPUs from the kernel scheduler (isolcpus on the Linux
kernel command line), and set the CPU affinity in /etc/vpp/startup.conf
(corelist-workers & main-core), e.g. as sketched below.
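Something like the following in startup.conf (a hedged example; the core
numbers are placeholders and assume cores 1-3 were isolated via isolcpus=1-3):

cpu {
  main-core 1
  corelist-workers 2-3
}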


Korian



--

Korian Edeline
Université de Liège (ULg)
Bât. B28  Algorithmique des Grands Systèmes
Quartier Polytech 1
Allée de la Découverte 10
4000 Liège
Phone: +32 4 366 56 05



Re: [vpp-dev] Any tricks in IP reassembly ?

2018-12-12 Thread Ole Troan
> W...What??? So the only way to make IP reassembly work correctly is to
> keep one or more interfaces attached to a single worker thread?
> That does not sound efficient...

Or just ensure all parts of a fragment chain get delivered to the same worker.
Any good ECMP algorithm should do that; see the sketch below.
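For illustration, a hedged sketch of that property -- hash only on fields
present in every fragment (addresses, protocol, IP ID), never on L4 ports,
which only the first fragment carries -- so all fragments of a chain land on
the same worker. This is illustrative code, not VPP's actual RSS/handoff
implementation:

#include <vnet/ip/ip4_packet.h>
#include <vppinfra/xxhash.h>

static inline u32
fragment_safe_worker (ip4_header_t * ip, u32 n_workers)
{
  u64 key = (u64) ip->src_address.as_u32 << 32 | ip->dst_address.as_u32;
  key ^= (u64) ip->protocol << 16 | clib_net_to_host_u16 (ip->fragment_id);
  return clib_xxhash (key) % n_workers;
}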

Cheers,
Ole


> 
> mik...@yeah.net
>  
> From: Klement Sekera
> Date: 2018-12-11 20:39
> To: Mikado; vpp-dev
> Subject: Re: [vpp-dev] Any tricks in IP reassembly ?
> Hi Mikado,
>  
> if the fragments get split between multiple workers, then eventually
> they'll get dropped after timing out ...
>  
> Regards,
> Klement
>  
> Quoting Mikado (2018-12-11 08:52:28)
> >Hi,
> >
> >   I have noticed that the “ip4-reassembly-feature” node only
> >reassembles packets stored in the local pool of each thread. But that seems
> >wrong if a group of fragments is handled by different worker
> >threads. So is there any trick in VPP forcing the fragments of
> >the same group to be dispatched to the same thread? Or is it a bug?
> > 
> >
> >Thanks in advance.
> >
> >Mikado



Re: [vpp-dev] Any tricks in IP reassembly ?

2018-12-12 Thread pvinci
Hi Klement.

Is there a way to write a test case that documents that behavior?  Does the 
test framework have the ability to split fragments across threads?


Re: [vpp-dev] 2MB vs 1GB hugepages on ARM ThunderX

2018-12-12 Thread Gorka Garcia
I am not sure of the reason for this, but it is documented here:

https://github.com/contiv/vpp/blob/master/docs/arm64/MANUAL_INSTALL_CAVIUM.md

“To mention the most important thing from DPDK setup instructions you need to 
setup 1GB hugepages. The allocation of hugepages should be done at boot time or 
as soon as possible after system boot to prevent memory from being fragmented 
in physical memory. Add parameters hugepagesz=1GB hugepages=16 
default_hugepagesz=1GB to the file /etc/default/grub”
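Concretely, a hedged sketch of that boot-time setup (kernel docs spell the
suffix "1G"; adjust the page count to your RAM):

## /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="... hugepagesz=1G hugepages=16 default_hugepagesz=1G"
## then regenerate the grub config (update-grub / grub2-mkconfig),
## reboot, and verify with: grep Huge /proc/meminfo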

Gorka



Re: [vpp-dev] Any tricks in IP reassembly ?

2018-12-12 Thread mik...@yeah.net
W...What??? So the only way to make IP reassembly work correctly is to keep
one or more interfaces attached to a single worker thread?
That does not sound efficient...



mik...@yeah.net
 
From: Klement Sekera
Date: 2018-12-11 20:39
To: Mikado; vpp-dev
Subject: Re: [vpp-dev] Any tricks in IP reassembly ?
Hi Mikado,
 
if the fragments get split between multiple workers, then eventually
they'll get dropped after timing out ...
 
Regards,
Klement
 
Quoting Mikado (2018-12-11 08:52:28)
>Hi,
> 
>   I have noticed that the “ip4-reassembly-feature” node only
>reassembles packets stored in the local pool of each thread. But that seems
>wrong if a group of fragments is handled by different worker
>threads. So is there any trick in VPP forcing the fragments of
>the same group to be dispatched to the same thread? Or is it a bug?
> 
> 
>Thanks in advance.
> 
>Mikado


[vpp-dev] Up-to-date documentation for classifiers?

2018-12-12 Thread JB
Hello everyone,

I've been looking for documentation on the classify feature of
VPP. Whatever documentation or other decent information I've stumbled upon
seems to be outdated, with syntax that's no longer applicable.

Do we have anything that I've missed?
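For what it's worth, a hedged sketch of the CLI shape on recent trees, from
memory of the fd.io wiki input-ACL example -- verify every token against the
CLI help in your build, since syntax drift is exactly the problem here (the
interface name and address are placeholders):

vpp# classify table mask l3 ip4 src buckets 2
vpp# classify session acl-hit-next deny table-index 0 match l3 ip4 src 10.0.0.3
vpp# set interface input acl intfc GigabitEthernet3/0/0 ip4-table 0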


Re: [vpp-dev] vpp router plugin threads? (vpp + router + netlink + FRRouting)

2018-12-12 Thread JB
Hi Brian,


I've successfully built the router plugin on 18.10 and "19.01"


What errors are you encountering when you attempt to build it?

Kind regards,

John Biscevic
Systems Architect,  Bahnhof AB
Mobile:  +46 76 111 01 24
E-mail:  john.bisce...@bahnhof.net

From: vpp-dev@lists.fd.io  on behalf of Brian Dickson 

Sent: Wednesday, December 12, 2018 4:17 AM
To: vpp-dev@lists.fd.io
Cc: vppsb-...@lists.fd.io
Subject: [vpp-dev] vpp router plugin threads? (vpp + router + netlink + 
FRRouting)

Greetings, VPP folks.

I am continuing to work on my vpp + router-plugin (+FRRouting) set-up.

I have things mostly working with very large routing tables (sourced from 
multiple BGP peers), but am having some challenges when trying to use 
multiple threads (additional worker threads) to increase overall VPP 
forwarding performance.

When using just a single thread, the BGP peers take a long time to sync up, but 
it is relatively stable. Forwarding performance on a 10G NIC (i40e driver and 
vfio-pci selected) is pretty decent, but I am interested in finding ways to 
improve performance (and getting things to the point where I can also use a 40G 
card in the system). The limit seems to be packets per second, maxing out at 
about 11Mpps.

The problem is, when I try to use worker threads, I start running into issues 
with rtnetlink buffers, and BGP, ICMP, ARP, etc, all become "flaky".

My suspicion is that it has something to do with which thread(s) handle the 
netlink traffic, and which thread(s) handle the TCP port 179 (BGP) traffic, 
which needs to go via the tap-inject path to the kernel, and then to the BGP 
speaking application (FRR sub-unit "bgpd").

Is there anyone who can provide information or advice on this issue?

NB: the flakiness is in a COMPLETELY unloaded environment - no other traffic is 
being handled, nothing else is consuming CPU cycles. It is just the BGP traffic 
itself plus related stuff (ARP) and any diagnostic traffic I use (ping).

Is this a case where I need to adjust RSS to direct incoming packets to the 
right subset of cores, and do I also need to direct particular traffic (TCP 
179) to the main core? Do I need to ensure anything else, like using a separate 
core (and setting the core affinity with taskset -c) for my BGP speaker?

Any suggestions or advice would be greatly appreciated.

Also, any updates on bringing the netlink and router plugins into the main 
vppsb tree? Building them on anything other than 18.07 just doesn't work for 
me, and even on 18.07 it is rather brittle; I'm not 100% sure about the build 
steps, which actually involve passing CFLAGS in to make, which suggests 
something isn't quite right...

Thanks in advance,
Brian


Re: [vpp-dev] 2MB vs 1GB hugepages on ARM ThunderX

2018-12-12 Thread Juraj Linkeš
It would be great if we could figure out the reason.

The contiv-vpp documentation needed to spell out that you need 1GB hugepages 
precisely because 2MB pages don't work.

And there's also the dpdk documentation [0], which doesn't mention the 
hugepages problem, making it seem like it shouldn't be an issue - but maybe 
that's a documentation oversight.

Juraj

[0] https://doc.dpdk.org/guides-18.11/nics/thunderx.html


Re: [vpp-dev] Any tricks in IP reassembly ?

2018-12-12 Thread Klement Sekera via Lists.Fd.Io
That's as it is now. A rework of the code is planned, no ETA yet...

Quoting mik...@yeah.net (2018-12-12 02:05:40)
> W...What??? So the only way to make IP reassembly work correctly is to
> keep one or more interfaces attached to a single worker thread?
> That does not sound efficient...
>
> mik...@yeah.net
>
> From: Klement Sekera
> Date: 2018-12-11 20:39
> To: Mikado; vpp-dev
> Subject: Re: [vpp-dev] Any tricks in IP reassembly ?
> Hi Mikado,
>
> if the fragments get split between multiple workers, then eventually
> they'll get dropped after timing out ...
>
> Regards,
> Klement
>
> Quoting Mikado (2018-12-11 08:52:28)
> > Hi,
> >
> >    I have noticed that the “ip4-reassembly-feature” node only
> > reassembles packets stored in the local pool of each thread. But that
> > seems wrong if a group of fragments is handled by different worker
> > threads. So is there any trick in VPP forcing the fragments of
> > the same group to be dispatched to the same thread? Or is it a bug?
> >
> > Thanks in advance.
> >
> > Mikado


[vpp-dev] Enable arm vpp master testing in ci

2018-12-12 Thread Juraj Linkeš
Hello,

We're trying to enable testing for vpp master on ARM in CI. Here's the patch: 
https://gerrit.fd.io/r/#/c/15251/

All of the affected jobs have been tested in the sandbox and are working well.
Please review the patch and give it a +1 if you think it's okay, so that Ed can
finally merge it.

Thanks,
Juraj


Re: [vpp-dev] 2MB vs 1GB hugepages on ARM ThunderX

2018-12-12 Thread Juraj Linkeš
Thanks Damjan.

Nitin, Gorka, do you have any input on this?

Juraj

From: Damjan Marion via Lists.Fd.Io [mailto:dmarion=me@lists.fd.io]
Sent: Tuesday, December 11, 2018 5:21 PM
To: Juraj Linkeš 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] 2MB vs 1GB hugepages on ARM ThunderX

Dear Juraj,

I don't think anybody here has experience with ThunderX to help you.
The fact that other NICs work OK indicates that this particular driver 
requires something special.
What it is, you will probably need to ask the Cavium/Marvell guys...

--
Damjan


On 11 Dec 2018, at 07:56, Juraj Linkeš 
mailto:juraj.lin...@pantheon.tech>> wrote:

Hi folks,

I've run into an issue with hugepages on a Cavium ThunderX SoC while trying to 
bind a physical interface to VPP. When using 1GB hugepages the interface seems 
to work fine (well, at least I saw the interface in VPP and I was able to 
configure it and ping over it), but when using 2MB hugepages the interface 
appears in an error state. The output from show hardware told me this:
VirtualFunctionEthernet1/0/1   1down  VirtualFunctionEthernet1/0/1
  Ethernet address 40:8d:5c:e7:b1:12
  Cavium ThunderX
carrier down
flags: pmd pmd-init-fail maybe-multiseg
rx: queues 1 (max 96), desc 1024 (min 0 max 65535 align 1)
tx: queues 1 (max 96), desc 1024 (min 0 max 65535 align 1)
pci: device 177d:a034 subsystem 177d:a134 address 0002:01:00.01 numa 0
module: unknown
max rx packet len: 9204
promiscuous: unicast off all-multicast off
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum jumbo-frame
   crc-strip scatter
rx offload active: jumbo-frame crc-strip scatter
tx offload avail:  ipv4-cksum udp-cksum tcp-cksum outer-ipv4-cksum
tx offload active:
rss avail: ipv4 ipv4-tcp ipv4-udp ipv6 ipv6-tcp ipv6-udp port
   vxlan geneve nvgre
rss active:ipv4 ipv4-tcp ipv4-udp ipv6 ipv6-tcp ipv6-udp
tx burst function: (nil)
rx burst function: (nil)
  Errors:
rte_eth_rx_queue_setup[port:0, errno:-22]: Unknown error -22

I dug around a bit and this seems to be what -22 means:

#define EINVAL  22  /* Invalid argument */
-EINVAL: The size of network buffers which can be allocated from the memory 
pool does not fit the various buffer sizes allowed by the device controller.

Is this something you've seen before? Is this a bug? Do I need to do something 
extra if I want to use 2MB hugepages?

Thanks,
Juraj