[vpp-dev] (VPP-1734) Worker Thread stops reading from its Handoff Queue

2019-11-07 Thread Ni, Hongjun
Hi folks,

Could someone help take a look at this issue:
https://jira.fd.io/browse/VPP-1734

The issue is a side effect of changes introduced by 
https://github.com/FDio/vpp/commit/80965f599aa90288c8c139e7e3a31726b89eb9a4#diff-f660570fc2dc57455f2c52b20880bfd8
 in VPP 19.04.

The issue is difficult and unpredictable to reproduce, but when worker handoff
is enabled in VPP and sufficient load is applied,
then after some time (minutes to hours) one or more workers cease handling
handoff traffic and never recover.
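To make the symptom concrete, here is a minimal sketch of a handoff ring. This is not VPP's actual frame-queue code (the `HandoffRing` class, `RING_SIZE`, and method names are all illustrative); it only shows why a worker that stops reading its queue causes permanent, unrecoverable packet loss once the ring fills:

```python
# Simplified single-producer/single-consumer handoff ring.
# VPP's real vlib frame queues are more elaborate; this sketch only
# illustrates the reported failure mode: once a worker stops reading
# its handoff queue, the ring fills and all further traffic is dropped.
from collections import deque

RING_SIZE = 8  # illustrative; real queues are larger

class HandoffRing:
    def __init__(self):
        self.q = deque()
        self.drops = 0

    def enqueue(self, pkt):
        if len(self.q) >= RING_SIZE:
            self.drops += 1      # ring full: packet lost
            return False
        self.q.append(pkt)
        return True

    def dequeue(self):
        return self.q.popleft() if self.q else None

ring = HandoffRing()

# Normal operation: the worker keeps draining its queue.
for i in range(100):
    ring.enqueue(i)
    assert ring.dequeue() == i

# Stalled worker: the consumer stops reading. After RING_SIZE more
# enqueues the ring is full and every subsequent packet is dropped.
for i in range(20):
    ring.enqueue(i)
print(ring.drops)  # 12 of the 20 packets are lost, and nothing recovers
```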

Thanks,
Hongjun

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14541): https://lists.fd.io/g/vpp-dev/message/14541
Mute This Topic: https://lists.fd.io/mt/47321789/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] A few MAP-T Clarifications

2019-11-07 Thread Jon Loeliger via Lists.Fd.Io
On Thu, Nov 7, 2019 at 7:23 AM Ole Troan  wrote:

> Dear Jon,
>
> I think where we left this last was a "shared TODO" list.
>

Yeah, sorry, I got tasked with something else for a bit...


> The MAP specific shallow virtual reassembly has been generalised, and
> moved to a separate component.
> SVR now runs in front of MAP, so MAP doesn't need to deal with reassembly
> itself. That simplifies that handling quite a bit.
>

We had several bugs against the MAP fragmentation handling.  Perhaps these
changes have fixed them?  We are about to pull and merge these changes, so
we will check again!

> What do you see as gaps? Feel free to patch.

We have some outstanding bugs against MAP-E/T here:

1) MAP BR doesn't send ICMPv6 unreachable messages when a packet fails to
match a MAP domain
2) MAP-T BR can't translate IPv4 ICMP Echo Reply to IPv6
3) Pre-resolve ipv4|ipv6 function isn't working when MAP-T mode is used
4) TCP MSS value isn't applied to encapsulated packets when MAP-E mode is
used

Issues 1) and 2) are likely the same underlying problem.
Issue 3) is what started this thread last April.
Issue 4) is new-to-me.  Our bug log suggests that a VPP patch was made for
it by Vladimir Ratnikov.
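For bugs 1) and 2), the relevant behavior is the stateless ICMPv4-to-ICMPv6 type/code translation a MAP-T BR is expected to apply per RFC 7915 (SIIT). The sketch below is not VPP's code, just the expected mapping; the function name is illustrative:

```python
# ICMPv4 (type, code) -> ICMPv6 (type, code) per RFC 7915.
# A partial table covering the cases discussed above; not VPP's code.
ICMP4_TO_ICMP6 = {
    (8, 0):  (128, 0),  # Echo Request -> ICMPv6 Echo Request
    (0, 0):  (129, 0),  # Echo Reply   -> ICMPv6 Echo Reply (bug 2's case)
    (3, 0):  (1, 0),    # Net Unreachable  -> Dest Unreachable, no route
    (3, 1):  (1, 3),    # Host Unreachable -> Dest Unreachable, addr unreachable
    (3, 3):  (1, 4),    # Port Unreachable -> Dest Unreachable, port unreachable
    (11, 0): (3, 0),    # TTL Exceeded     -> Hop Limit Exceeded
}

def translate_icmp4(type_, code):
    """Return the ICMPv6 (type, code), or None if untranslatable (drop)."""
    return ICMP4_TO_ICMP6.get((type_, code))

print(translate_icmp4(0, 0))  # (129, 0): an Echo Reply must translate, not drop
```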

HTH,
jdl


Re: [vpp-dev] Ligato VPP container

2019-11-07 Thread Nathan Skrzypczak
Hi Devis,

I haven't played that much with the ligato/vpp-agent docker image when
using Contiv, as they have their own docker image (contivvpp/vswitch), but
the logic should be the same:
both containers get the VPP config file either by copying it into the image
or by mounting it into the container. Ligato appears to copy its vpp.conf [1].
If you want more workers, edit vpp.conf to add cpu { workers 12 } or a
corelist-workers list, rebuild the container with the script Ligato
provides, push it to your Docker Hub, and point your yaml at it.

Hope this helps,
Cheers

-Nathan

[1] https://github.com/ligato/vpp-agent/blob/master/docker/prod/vpp.conf
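For reference, the cpu stanza in a VPP startup config looks like the fragment below (core numbers are illustrative; pick cores that exist on your host and don't collide with other pinned workloads):

```
cpu {
  main-core 1
  corelist-workers 2-5
  # alternatively, let VPP pick the cores:
  # workers 4
}
```

With corelist-workers you pin workers to specific cores; with workers N you only set the count.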

On Wed, Oct 30, 2019 at 10:53 AM, Devis Reagan  wrote:

> Dear Team ,
>
>
>
> I managed to bring up the ligato/vpp-agent:v2.1.1 container on top of
> contiv-vpp, but it looks like while bringing up it just takes core 0
> for main-core. I used the attached yaml file to launch the container.
> Is there a way to set our own main-core & corelist-workers while
> bringing up the ligato vpp?
>
>
>
> Logs from ligato vpp container
>
> ===
>
>
>
> Below container is the one brought up on top of the contiv-vpp using the
> attached yaml file .
>
>
>
> vpp# show threads
>
> ID NameTypeLWP Sched Policy (Priority)
> lcore  Core   Socket State
>
> 0  vpp_main15  other (0)
> 1  0  0
>
> vpp#
>
> vpp# [root@k8s-master ~]#
>
> [root@k8s-master ~]#
>
> [root@k8s-master ~]# kubectl exec -it vpp-cnf -- vppctl -s :5002
>
> [fd.io VPP ASCII-art banner]
>
>
>
> vpp#
>
> vpp# show ver
>
> vpp v19.08.1-163~g8f4fccab9~b230 built by root on 813e868a84ce at Sun Oct
> 6 12:54:23 UTC 2019
>
> vpp#
>
> vpp# show interface
>
>   Name   IdxState  MTU (L3/IP4/IP6/MPLS)
> Counter  Count
>
> local00 down  0/0/0/0
>
> memif0/1  1  up  9000/0/0/0
>
> memif0/2  2  up  9000/0/0/0
>
> vpp#
>
> vpp# show threads
>
> ID NameTypeLWP Sched Policy (Priority)
> lcore  Core   Socket State
>
> 0  vpp_main14  other (0)
>1  0  0
>
> vpp#
>
> vpp# show interface rx-placement
>
> Thread 0 (vpp_main):
>
>   node memif-input:
>
> memif0/1 queue 0 (polling)
>
> memif0/2 queue 0 (polling)
>
> vpp#
>


Re: [vpp-dev] A few MAP-T Clarifications

2019-11-07 Thread Ole Troan
Dear Jon,

I think where we left this last was a "shared TODO" list.
What do you see as gaps? Feel free to patch.

Since April, I think a couple of things have happened.
The MAP-specific shallow virtual reassembly has been generalised and moved to
a separate component.
SVR now runs in front of MAP, so MAP doesn't need to deal with reassembly
itself. That simplifies the handling quite a bit.
There is also some work on making a functional interface to fragmentation:
instead of sending packets requiring fragmentation via a separate graph
node and then getting them back (or setting the next node via buffer
metadata), the caller would just call fragment() and get a list of buffers back.
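A functional interface of that shape could be sketched as follows. The name fragment() comes from the description above, but the signature and payload-level splitting are illustrative only, not VPP's actual buffer API:

```python
# Illustrative sketch of a functional fragmentation interface: the
# caller passes a payload and an MTU and gets the fragments back as a
# list, instead of steering buffers through a separate graph node.
def fragment(payload: bytes, mtu: int) -> list:
    """Split payload into chunks of at most mtu bytes."""
    if mtu <= 0:
        raise ValueError("mtu must be positive")
    return [payload[i:i + mtu] for i in range(0, len(payload), mtu)] or [b""]

frags = fragment(b"x" * 3000, 1280)
print([len(f) for f in frags])  # [1280, 1280, 440]
```

Real IP fragmentation also has to rewrite headers and fragment offsets, of course; the point here is only the call-and-return control flow.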

Cheers,
Ole

> On 6 Nov 2019, at 20:07, Jon Loeliger  wrote:
> 
> Ole,
> 
> Waaay back in April, we had a small exchange here on the topic
> of some MAP-T/E behavior.  Bottom line was some functionality
> needed to be homogenized.   We wrote:
> 
> On Tue, Apr 16, 2019 at 2:35 AM Ole Troan  wrote:
> Hi Jon,
> 
> Apologies for the delay in answering. Swapped out all my knowledge of MAP to 
> disk. ;-)
> 
> > We are working on some MAP-E and MAP-T testing, and we have
> > a few questions.  Some of the existing VPP documentation about
> > MAP-E and MAP-T is a bit loose.
> > 
> > Should the use of a pre-resolved forwarding address be applicable
> > to both MAP-E and MAP-T?  Or is that only a MAP-E thing?  Specifically,
> > we see in the test/test_map.py test that it is only tested in the MAP-E 
> > case.
> > Is it missing from the MAP-T tests due to oversight, or is that technically 
> > correct?
> 
> -E and -T have evolved a little differently, which accounts for the 
> differences.
> Pre-resolved next-hops are applicable to both. Should we do a shared todo 
> list for MAP items?
> Perhaps an epic JIRA?
> 
> > The MAP-T test also "enables map" on both interfaces in that same file.
> > Is it required to do so?  For only MAP-T?
> 
> I would think so. Isn’t one the IPv4 side and the other interface the IPv6 
> side?
> In both cases packets are picked out from the IP feature path.
> 
> > Finally, there is no test that introduces a rule.  I feel like I read 
> > somewhere
> > that MAP-T required at least one rule?  Or is some combination of PSID
> > and EA bits sufficient alone?
> 
> Rules were developed for the lw4o6 1:1 case: per-subscriber rules, as opposed 
> to algorithmic mapping for many users within a domain.
> I think rules can be useful also in the MAP-T case. We should homogenize that.
> 
> Patches welcome!
> 
> Best regards,
> Ole
> 
> ... and we're back!  Months pass, seasons change, and the code has,
> what?, been fixed?  ignored? silently left to bit-rot?  Where are we today?
> 
> Any chance there has been progress here?
> 
> Thanks,
> jdl
> 
