Re: [vpp-dev] VPP C++ API

2018-07-02 Thread Klement Sekera via Lists.Fd.Io
Hi,

can you please run in gdb (or open core) and pass us a stack trace?

Thanks,
Klement

On Sun, 2018-07-01 at 19:36 -0700, xpa...@gmail.com wrote:
> Hello,
> I'm trying to understand how the VOM wrapper for the C++ API works.
> VOM manager header: https://pastebin.com/BgG7td5s
> VOM manager cpp: https://pastebin.com/890jjUJm
> main.cpp: 
> 
> int main() {
>     TVppManager manager;
>     std::cout << "Starting\n";
>     manager.Start();
>  
>     sleep(60);
>  
>     manager.Stop();
> }
> 
> When I try to run sudo ./main I get this error:
> Starting
> Calling connect
> Manager is connected
> Mon Jul  2 05:33:55 2018 [debug]hw.cpp:129 write() itf-events
> Mon Jul  2 05:33:55 2018 [debug]rpc_cmd.hpp:120 operator()() itf-
> events 0
> Started
> main: /place/home/xpahos/git/vpp/build-data/../src/vpp-
> api/vapi/vapi.c:696: vapi_msg_is_with_context: Assertion `id <=
> __vapi_metadata.count' failed.
> Aborted
> 
> I found that id looks like an unsigned value overflow. How can I debug
> this? I don't see anything in the trace dump.


Re: [csit-dev] [vpp-dev] VPP make test gives false positive

2018-06-25 Thread Klement Sekera via Lists.Fd.Io
We're one CSIT-induced recheck away from getting this fixed.

On Mon, 2018-06-25 at 14:30 +, Maciek Konstantynowicz (mkonstan)
via Lists.Fd.Io wrote:
> Any updates on the latest status?
> 
> -Maciek
> 
> > 
> > On 22 Jun 2018, at 18:07, Ole Troan  wrote:
> > 
> > > 
> > > 13188 is already merged.
> > 13186 fails consistently locally too, investigating.
> > I think this was introduced with
> > a98346f664aae148d26a8e158008b773d73db96f
> > 
> > Cheers,
> > Ole
> > 
> > 
> > > 
> > > 
> > > From: vpp-dev@lists.fd.io  On Behalf Of Ed
> > > Kern via Lists.Fd.Io
> > > Sent: Friday, June 22, 2018 12:07 PM
> > > To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)  > > l...@cisco.com>
> > > Cc: vpp-dev@lists.fd.io
> > > Subject: Re: [csit-dev] [vpp-dev] VPP make test gives false
> > > positive
> > > Importance: High
> > > 
> > > Hey,
> > > 
> > > responding because I don't see one from Klement.
> > > 
> > > looks like he has two gerrits pending
> > > 
> > > https://gerrit.fd.io/r/#/c/13188/
> > > 
> > > to fix broken interfaces.. (this passed verification so trying to
> > > get this one merged)
> > > 
> > > and
> > > 
> > > https://gerrit.fd.io/r/#/c/13186/
> > > 
> > > for the retries in error..this is not passing verification but i
> > > have my fingers crossed that it will once
> > > 13188 is merged.
> > > 
> > > Hoping to get this behind us before the weekend..
> > > 
> > > Ed
> > > 
> > > 
> > > 
> > > 
> > > On Jun 22, 2018, at 3:03 AM, Jan Gelety via Lists.Fd.Io  > > cisco@lists.fd.io> wrote:
> > > 
> > > Hello,
> > > 
> > > VPP make test gives false positive results at the moment - there
> > > are circa 110 failed tests but the build is marked successful.
> > > 
> > > The last correct run on ubuntu is [1].
> > > 
> > > The first false positive run on ubuntu is [2] – triggered by
> > > Florin's patch [3].
> > > 
> > > It seems that the first failing test is test_ip4_irb.TestIpIrb:
> > > 
> > > 00:22:48.177
> > > =
> > > =
> > > 00:22:53.685 VAPI test
> > > 00:22:53.685
> > > =
> > > =
> > > 00:22:53.685 run C VAPI
> > > tests SKI
> > > P
> > > 00:22:53.685 run C++ VAPI
> > > tests   SKIP
> > > 00:22:53.685
> > > 00:22:53.685
> > > =
> > > =
> > > 00:22:53.685 ERROR: setUpClass (test_ip4_irb.TestIpIrb)
> > > 00:22:53.685 
> > > --
> > > 00:22:53.685 Traceback (most recent call last):
> > > 00:22:53.685   File "/w/workspace/vpp-verify-master-
> > > ubuntu1604/test/test_ip4_irb.py", line 57, in setUpClass
> > > 00:22:53.685 cls.create_loopback_interfaces(range(1))
> > > 00:22:53.685   File "/w/workspace/vpp-verify-master-
> > > ubuntu1604/test/framework.py", line 571, in
> > > create_loopback_interfaces
> > > 00:22:53.685 setattr(cls, intf.name, intf)
> > > 00:22:53.685   File "/w/workspace/vpp-verify-master-
> > > ubuntu1604/test/vpp_interface.py", line 91, in name
> > > 00:22:53.685 return self._name
> > > 00:22:53.685 AttributeError: 'VppLoInterface' object has no
> > > attribute '_name'
> > > 00:22:53.685
> > > 
> > > And all following tests fail with a connection failure during test
> > > case setup:
> > > 
> > > 00:22:53.689
> > > =
> > > =
> > > 00:22:53.689 ERROR: setUpClass (test_dvr.TestDVR)
> > > 00:22:53.689 
> > > --
> > > 00:22:53.689 Traceback (most recent call last):
> > > 00:22:53.689   File "/w/workspace/vpp-verify-master-
> > > ubuntu1604/test/framework.py", line 362, in setUpClass
> > > 00:22:53.689 cls.vapi.connect()
> > > 00:22:53.689   File "/w/workspace/vpp-verify-master-
> > > ubuntu1604/test/vpp_papi_provider.py", line 141, in connect
> > > 00:22:53.689 self.vpp.connect(self.name, self.shm_prefix)
> > > 00:22:53.689   File "build/bdist.linux-x86_64/egg/vpp_papi.py",
> > > line 699, in connect
> > > 00:22:53.689 async)
> > > 00:22:53.689   File "build/bdist.linux-x86_64/egg/vpp_papi.py",
> > > line 670, in connect_internal
> > > 00:22:53.689 raise IOError(2, 'Connect failed')
> > > 00:22:53.689 IOError: [Errno 2] Connect failed
> > > 00:22:53.689
> > > 
> > > But result is OK:
> > > 00:22:53.689 Ran 156 tests in 162.407s
> > > 00:22:53.689
> > > 00:22:53.689 FAILED (errors=114, skipped=118)
> > > 00:22:53.691 1 test(s) failed, 3 attempt(s) left
> > > 00:22:53.697 Running tests using custom test runner
> > > 00:22:53.697 Active filters: file=None, class=None, function=None
> > > 00:22:53.697 0 out of 0 tests match specified filters
> > > 00:22:53.697 Not running extended tests (some tests will be
> 

Re: [vpp-dev] Is VPP IPSec implementation thread safe?

2018-06-27 Thread Klement Sekera via Lists.Fd.Io
Hi,

I agree that there is an unlikely corner case which could result in a vpp
assert. I don't think there is a real chance of hitting this in a
production image, since it would require you to successfully make two
API calls (drop the old entry, replace it with a new entry) while there are
packets in flight. Just deleting the old entry would cause re-use of the
existing entry (which is marked as freed, but the code does not clear
the memory or invalidate it in some other way) and the packet would be
handled the same way as if the delete came after it left vpp.
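
To illustrate the index semantics described above, here is a minimal
sketch (not the actual ipsec code - the type and function names are made
up for the example), assuming only the vppinfra pool macros:

  #include <vppinfra/pool.h>

  typedef struct { u32 spi; u32 seq; } sa_entry_t;
  static sa_entry_t *sa_pool;

  /* An SA "handle" passed between graph nodes is just an index into the
   * pool - nothing pins the entry while the packet is in flight. */
  static sa_entry_t *
  sa_lookup (u32 sa_index)
  {
    if (pool_is_free_index (sa_pool, sa_index))
      return 0;               /* entry was deleted in the meantime */
    /* If the slot was deleted and immediately re-used by a new entry,
     * this silently returns the *new* entry - the corner case above. */
    return pool_elt_at_index (sa_pool, sa_index);
  }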

Based on this, I'm not sure whether fixing this is worth the cost..

Thanks,
Klement

On Wed, 2018-06-27 at 11:15 +0530, Vamsi Krishna wrote:
> Hi,
> 
> I have looked at the ipsec code in VPP and am trying to understand how
> it works in a multi-threaded environment. I noticed that the data
> structures for the SPD, SAD and tunnel interfaces are pools and that
> there are no locks to prevent race conditions.
> 
> For instance, the ipsec-input node passes an SA index to the esp-encrypt
> node, and the esp-encrypt node looks up the SA in the SAD pool. But during
> the time in which the packet is passed from one node to another, the entry
> at that SA index may be changed or deleted. The same seems to be true for
> dpdk-esp-encrypt and dpdk-esp-decrypt. How are these cases handled? Can
> the implementation be used in a multi-threaded environment?
> 
> Please help understand the IPSec implementation.
> 
> Thanks
> Krishna


Re: [vpp-dev] Verify consistently failing

2018-06-19 Thread Klement Sekera via Lists.Fd.Io
Hi,

I created a debugging patch set https://gerrit.fd.io/r/#/c/13122/ which
hints that there is something fishy going on with the python
virtualenv.

Per https://jenkins.fd.io/job/vpp-verify-master-ubuntu1604/12067/console:

10:21:46 make -C test TEST_DIR=/w/workspace/vpp-verify-master-
ubuntu1604/test VPP_TEST_BUILD_DIR=/w/workspace/vpp-verify-master-
ubuntu1604/build-root/build-vpp-native VPP_TEST_BIN=/w/workspace/vpp-
verify-master-ubuntu1604/build-root/install-vpp-native/vpp/bin/vpp
VPP_TEST_PLUGIN_PATH=/w/workspace/vpp-verify-master-ubuntu1604/build-
root/install-vpp-native/vpp/lib/vpp_plugins:/w/workspace/vpp-verify-
master-ubuntu1604/build-root/install-vpp-native/vpp/lib64/vpp_plugins
VPP_TEST_INSTALL_PATH=/w/workspace/vpp-verify-master-ubuntu1604/build-
root/install-vpp-native/ LD_LIBRARY_PATH=/w/workspace/vpp-verify-
master-ubuntu1604/build-root/install-vpp-
native/vpp/lib/:/w/workspace/vpp-verify-master-ubuntu1604/build-
root/install-vpp-native/vpp/lib64/ EXTENDED_TESTS= PYTHON= OS_ID=ubuntu
CACHE_OUTPUT= test
10:21:46 make[2]: Entering directory '/w/workspace/vpp-verify-master-
ubuntu1604/test'
10:21:46 echo "vpp python prefix is /var/cache/vpp/python"
10:21:46 vpp python prefix is /var/cache/vpp/python

Looking at the possible causes of this in the main Makefile we get:

56ccc23f (Ed Kern   2018-04-02 16:42:48 -0600 351)
export VPP_PYTHON_PREFIX ?= $(BR)/python

commit 56ccc23fbc6244190140bd7eb57bfa75f2312c62
Author: Ed Kern 
Date:   Mon Apr 2 16:42:48 2018 -0600

Makefile: Alter VPP_PYTHON_PREFIX for preloading deps

Allow setting of VPP_PYTHON_PREFIX to alternate location
so the python prereqs can be installed into base image
Also added test-dep trigger to isolate dependency install
from actual test run

Change-Id: Ia80f5dbf71bc24eb46cd6586bcadd474ef822704
Signed-off-by: Ed Kern 

I think what Ed meant to do was to speed up the verify job without realizing
that the test framework does a local install of the vpp_papi package, which
is part of the source tree. So having a cached virtualenv is a bad idea, as
we can already see. I also wonder whether the caching script watches for
changes to PYTHON_DEPENDS in test/Makefile and whether adding or changing
a dependency would pass a verify job.

Ed, thoughts? I would suggest not caching the virtualenv at all.

Thanks,
Klement


On Tue, 2018-06-19 at 08:55 +0200, Ole Troan wrote:
> Seems like my patch https://gerrit.fd.io/r/#/c/13013/
> broke the verification job. I provided a fix, but for some strange
> reason it seems like the verify build is stuck with the broken
> version of the vpp_papi package.
> This problem seems to persist even after I reverted 13013 with
> https://gerrit.fd.io/r/#/c/13104/
> 
> This is not reproducible locally (since make test uses the correct
> python package from the build directory there).
> Does anyone know how to reproduce the verify setup (or have an idea of
> what's going on)?
> 
> Cheers,
> Ole
> 
> 
> 
> 




Re: [vpp-dev] API If Client doesnot have any pending request , it wont read the queue

2018-08-17 Thread Klement Sekera via Lists.Fd.Io
What exactly is the issue? According to the vapi_dispatch() docstring, this
is expected:

/**
 * @brief loop vapi_dispatch_one until responses to all currently outstanding
 * requests have been received and their callbacks called
 *
 * @note the dispatch loop is interrupted if any error is encountered or
 * returned from the callback, in which case this error is returned as the
 * result of vapi_dispatch. In this case it might be necessary to call dispatch
 * again to process the remaining messages. Returning VAPI_EUSER from
 * a callback allows the user to break the dispatch loop (and distinguish
 * this case in the calling code from other failures). VAPI never returns
 * VAPI_EUSER on its own.
 *
 * @return VAPI_OK on success, other error code on error
 */
vapi_error_e vapi_dispatch (vapi_ctx_t ctx);

What are you trying to achieve?

Thanks,
Klement


On Fri, 2018-08-17 at 15:52 +0530, chetan bhasin wrote:
> Hi,
> 
> We are facing an issue while using the API framework.
> 
> We have opened a non-blocking connection from the client app.
> 
> If the client does not have any pending requests then it skips reading the
> responses in vapi_dispatch. Please advise.
> 
> vapi_error_e
> vapi_dispatch (vapi_ctx_t ctx)
> {
>   vapi_error_e rv = VAPI_OK;
>   while (!vapi_requests_empty (ctx))
>     {
>       rv = vapi_dispatch_one (ctx);
>       if (VAPI_OK != rv)
>         {
>           return rv;
>         }
>     }
>   return rv;
> }
> 
> 
> Thanks,
> Chetan Bhasin


Re: [vpp-dev] API If Client doesnot have any pending request , it wont read the queue

2018-08-17 Thread Klement Sekera via Lists.Fd.Io
Hi Chetan,

I think that in this case it's better to use the blocking interface, as you
are waiting for events to come in.

You can take a look at test/ext/vapi_c_test.c - the tests called
test_stats_1, _2 & _3 deal with stats - the first two show the blocking,
efficient interface; the third one is non-blocking, which in this case
busy-loops until it gets the stats.
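
For reference, a minimal sketch of such a blocking "event pump"
(illustrative only - everything except the VAPI calls is made up):
vapi_dispatch() returns as soon as there are no outstanding requests, so
an application that only waits for events can loop on vapi_dispatch_one()
instead, which blocks until the next message arrives and invokes the
callback registered for it:

  #include <vapi/vapi.h>

  static vapi_error_e
  event_pump (vapi_ctx_t ctx, volatile int *stop)
  {
    vapi_error_e rv = VAPI_OK;
    while (!*stop)
      {
        /* blocks until one message (e.g. an event) is received and its
         * callback has been called */
        rv = vapi_dispatch_one (ctx);
        if (VAPI_OK != rv && VAPI_EUSER != rv)
          return rv;
      }
    return rv;
  }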

Thanks,
Klement


On Fri, 2018-08-17 at 17:18 +0530, chetan bhasin wrote:
> Hi Klement,
> 
> Thanks for your reply!
> 
> The exact problem statement is: we are trying to send some events
> from VPP to the client app, but the client is not able to read those
> event API messages until it has some pending requests. So as a
> workaround we are sending a control ping from the client every few
> seconds to have at least one request pending from the client side.
> 
> Thanks,
> Chetan Bhasin
> 
> On Fri, Aug 17, 2018 at 4:26 PM, Klement Sekera 
> wrote:
> 
> > 
> > What exactly is the issue? According to vapi_dispatch() docstring
> > this
> > is expected:
> > 
> > /**
> >  * @brief loop vapi_dispatch_one until responses to all currently
> > outstanding
> >  * requests have been received and their callbacks called
> >  *
> >  * @note the dispatch loop is interrupted if any error is
> > encountered
> > or
> >  * returned from the callback, in which case this error is returned
> > as
> > the
> >  * result of vapi_dispatch. In this case it might be necessary to
> > call
> > dispatch
> >  * again to process the remaining messages. Returning VAPI_EUSER
> > from
> >  * a callback allows the user to break the dispatch loop (and
> > distinguish
> >  * this case in the calling code from other failures). VAPI never
> > returns
> >  * VAPI_EUSER on its own.
> >  *
> >  * @return VAPI_OK on success, other error code on error
> >  */
> >   vapi_error_e vapi_dispatch (vapi_ctx_t ctx);
> > 
> > What are you trying to achieve?
> > 
> > Thanks,
> > Klement
> > 
> > 
> > On Fri, 2018-08-17 at 15:52 +0530, chetan bhasin wrote:
> > > 
> > > Hi,
> > > 
> > > We are facing an issue during usage of API framework.
> > > 
> > > We have open a Non-blocking connection from Client app.
> > > 
> > > If Client does not have any pending request then it skip reading
> > > the
> > > responses in vapi_dispatch . Please suggest
> > > 
> > > vapi_error_e
> > > vapi_dispatch (vapi_ctx_t ctx)
> > > {
> > >   vapi_error_e rv = VAPI_OK;
> > > *  while (!vapi_requests_empty (ctx))*
> > > {
> > >   rv = vapi_dispatch_one (ctx);
> > >   if (VAPI_OK != rv)
> > > {
> > >   return rv;
> > > }
> > > }
> > >   return rv;
> > > }
> > > 
> > > 
> > > Thanks,
> > > Chetan Bhasin


Re: [vpp-dev] CMake

2018-08-20 Thread Klement Sekera via Lists.Fd.Io
Yes, this causes the test-debug target to take the old way (not cmake).
This, though, seems a bit counter-intuitive. Shouldn't setting the
value to "no" cause the build process not to use cmake?

On Mon, 2018-08-20 at 12:15 +0200, Damjan Marion wrote:
> > 
> > On 20 Aug 2018, at 11:55, Klement Sekera  wrote:
> > 
> > Hi Damjan,
> > 
> > it seems that the test-debug target is broken with the patch. The
> > build
> > process triggered by test-debug target jumps on the cmake train,
> > but
> > that train doesn't bring in the APIGEN and other stuff (e.g.
> > libvppapiclient.so).
> Does this help?
> diff --git a/Makefile b/Makefile
> index 09a31083..f9fd95ff 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -21,7 +21,7 @@ STARTUP_DIR?=$(PWD)
>  MACHINE=$(shell uname -m)
>  SUDO?=sudo
> 
> -ifeq ($(findstring $(MAKECMDGOALS),verify pkg-deb pkg-rpm test),)
> +ifeq ($(findstring $(MAKECMDGOALS),verify pkg-deb pkg-rpm test test-debug),)
>  export vpp_uses_cmake?=yes
>  endif
> 
> > 
> > 
> > I don't see vpp-api mentioned in src/CMakeLists.txt. Should it be
> > mentioned?
> It waits for a volunteer to take care of vpp-api :)
> 
> > 
> > 
> > Thanks,
> > Klement
> > 
> > On Sat, 2018-08-18 at 12:38 +0200, Damjan Marion via Lists.Fd.Io
> > wrote:
> > > 
> > > Dear all,
> > > 
> > > Yesterday we merged a patch which introduces CMake build system
> > > in
> > > VPP.
> > > 
> > > https://git.fd.io/vpp/commit/?id=612dd6a9
> > > 
> > > It is initial patch, and additional work is needed, but in
> > > general it
> > > manages to compile VPP binaries significantly faster 
> > > than what we have today with autotools.
> > > 
> > > In addition it brings following benefits which we can leverage in
> > > the
> > > future:
> > >  - use of CPack to generate deb/rpms vs. hardcoded scripts we
> > > have
> > > today
> > >  - much simpler and smaller build definitions specially for
> > > plugins
> > >  - support for different build types (Debug, Release, Release
> > > with
> > > symbols, Coverage, ...)
> > >  - support for external components (i.e. we can likely build dpdk
> > > and
> > > his dependencies)
> > > 
> > > Autotools is still default way to run verify jobs and build
> > > images,
> > > but people can give CMake a try in few ways:
> > > 
> > > Step 0: "apt-get install cmake ninja-build"
> > > 
> > > - "make {build,rebuild,build-release,rebuild-release}" are
> > > already
> > > using cmake as default, old behaviour can be restored by saying
> > > "make
> > > vpp_uses_cmake=no ..."
> > > 
> > > - make -C build-root vpp_uses_cmake=yes PLATFORM=vpp
> > > TAG={vpp|vpp_debug} 
> > > 
> > > - uncommenting "vpp_uses_cmake=yes" in build-
> > > data/platforms/vpp.mk
> > > 
> > > - without ebuild:
> > > mkdir _build
> > > cd _build
> > > cmake [-G Ninja] /path/to/vpp/src
> > > -DCMAKE_BUILD_TYPE={Release|Debug}
> > > (optionally "ccmake ." to change parameters)
> > > {ninja|make}
> > > 
> > > ./bin/vpp unix interactive
> > > 
> > > Building packages (incomplete / work in progress):
> > > $ ninja package
> > > [0/1] Run CPack packaging tool...
> > > CPack: Create package using DEB
> > > CPack: Install projects
> > > CPack: - Install project: vpp
> > > CPack: -   Install component: dev
> > > CPack: -   Install component: plugins
> > > CPack: -   Install component: vpp
> > > CPack: Create package
> > > CPack: - package: /home/damarion/tmp/_build/vpp-dev.deb
> > > generated.
> > > CPack: - package: /home/damarion/tmp/_build/vpp-plugins.deb
> > > generated.
> > > CPack: - package: /home/damarion/tmp/_build/vpp.deb generated.
> > > 
> > > Feedback, issue reports and contributions to get CMake be a full
> > > replacement for autotools are welcome.
> > > 
> > > Thanks,
> > > 


Re: [vpp-dev] CMake

2018-08-20 Thread Klement Sekera via Lists.Fd.Io
Hi Damjan,

it seems that the test-debug target is broken with the patch. The build
process triggered by the test-debug target jumps on the cmake train, but
that train doesn't bring in APIGEN and other stuff (e.g.
libvppapiclient.so).

I don't see vpp-api mentioned in src/CMakeLists.txt. Should it be
mentioned?

Thanks,
Klement

On Sat, 2018-08-18 at 12:38 +0200, Damjan Marion via Lists.Fd.Io wrote:
> Dear all,
> 
> Yesterday we merged a patch which introduces CMake build system in
> VPP.
> 
> https://git.fd.io/vpp/commit/?id=612dd6a9
> 
> It is an initial patch, and additional work is needed, but in general it
> manages to compile VPP binaries significantly faster than what we have
> today with autotools.
> 
> In addition it brings the following benefits which we can leverage in the
> future:
>  - use of CPack to generate debs/rpms vs. the hardcoded scripts we have
>    today
>  - much simpler and smaller build definitions, especially for plugins
>  - support for different build types (Debug, Release, Release with
>    symbols, Coverage, ...)
>  - support for external components (i.e. we can likely build dpdk and
>    its dependencies)
> 
> Autotools is still the default way to run verify jobs and build images,
> but people can give CMake a try in a few ways:
> 
> Step 0: "apt-get install cmake ninja-build"
> 
> - "make {build,rebuild,build-release,rebuild-release}" are already
> using cmake as default, old behaviour can be restored by saying "make
> vpp_uses_cmake=no ..."
> 
> - make -C build-root vpp_uses_cmake=yes PLATFORM=vpp
> TAG={vpp|vpp_debug} 
> 
> - uncommenting "vpp_uses_cmake=yes" in build-data/platforms/vpp.mk
> 
> - without ebuild:
> mkdir _build
> cd _build
> cmake [-G Ninja] /path/to/vpp/src -DCMAKE_BUILD_TYPE={Release|Debug}
> (optionally "ccmake ." to change parameters)
> {ninja|make}
> 
> ./bin/vpp unix interactive
> 
> Building packages (incomplete / work in progress):
> $ ninja package
> [0/1] Run CPack packaging tool...
> CPack: Create package using DEB
> CPack: Install projects
> CPack: - Install project: vpp
> CPack: -   Install component: dev
> CPack: -   Install component: plugins
> CPack: -   Install component: vpp
> CPack: Create package
> CPack: - package: /home/damarion/tmp/_build/vpp-dev.deb generated.
> CPack: - package: /home/damarion/tmp/_build/vpp-plugins.deb
> generated.
> CPack: - package: /home/damarion/tmp/_build/vpp.deb generated.
> 
> Feedback, issue reports and contributions to make CMake a full
> replacement for autotools are welcome.
> 
> Thanks,
> 


Re: [vpp-dev] DSCP support in VPP

2018-08-30 Thread Klement Sekera via Lists.Fd.Io
Hi Martin,

Can you describe your use case?

You're looking at the correct place regarding the matching criteria.
Adding DSCP as an additional criterion is not hard. Apart from the API
functions, definitions and CLI functions, it just needs to be added to a
couple of comparison functions.
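
As a rough illustration only (this is not the actual SPD code; the type,
field and function names are invented for the sketch), the comparison
would conceptually grow one extra field and one extra check:

  #include <stdint.h>

  typedef struct
  {
    uint32_t laddr, raddr;   /* simplified 5-tuple match of an SPD entry */
    uint16_t lport, rport;
    uint8_t proto;
    uint8_t dscp;            /* new matching criterion */
    uint8_t match_dscp;      /* 0 = wildcard, keeps existing entries valid */
  } spd_match_sketch_t;

  static int
  spd_match (const spd_match_sketch_t *e, const spd_match_sketch_t *pkt)
  {
    if (e->proto != pkt->proto || e->laddr != pkt->laddr
        || e->raddr != pkt->raddr || e->lport != pkt->lport
        || e->rport != pkt->rport)
      return 0;
    if (e->match_dscp && e->dscp != pkt->dscp)
      return 0;               /* the extra DSCP comparison */
    return 1;
  }

(The real SPD entries match on address and port ranges, but the idea is
the same.)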

Similarly, adding these new actions (as extensions of the 'protect'
action) shouldn't be too hard.

Thanks,
Klement

On Mon, 2018-08-27 at 10:48 +, Martin Šuňal wrote:
> Hello VPP devs,
> 
> I am looking at IPSec in VPP and I found it should be possible to
> match and forward traffic to tunnels based on a 5-tuple via an SPD
> entry [1].
> What I am trying to understand is how difficult it is to include DSCP
> in the SPD entry as an additional matching criterion.
> Would it also be possible to extend the SPD actions with something like
> "set DSCP" and "copy DSCP to outer-header"?
> Is the SPD entry a good place for that?
> It looks like DSCP is supported in DPDK [2] but I did not find any
> reference in VPP.
> 
> 
> [1] https://wiki.fd.io/view/VPP/IPSec_and_IKEv2#SPD_entry_creation
> [2] https://doc.dpdk.org/guides/prog_guide/traffic_management.html?hi
> ghlight=dscp#packet-marking
> 
> Thank you for your help,
> Martin Šuňal
> Technical leader
> Frinx s.r.o.
> Mlynské Nivy 48 / 821 09 Bratislava / Slovakia
> +421 2 20 91 01 41 / msu...@frinx.io / www.frinx.io
> 


Re: [vpp-dev] VPP C++ API

2018-07-04 Thread Klement Sekera via Lists.Fd.Io
Hi,

this should fix your issue

https://gerrit.fd.io/r/#/c/13349/

Thanks,
Klement

On Mon, 2018-07-02 at 13:57 +0300, Alexander Gryanko wrote:
> Stack trace for thread 2:
> 
> Thread 2 "main" received signal SIGABRT, Aborted.
> [Switching to Thread 0x7fffedbff700 (LWP 814141)]
> 0x7529c428 in __GI_raise (sig=sig@entry=6) at
> ../sysdeps/unix/sysv/linux/raise.c:54
> 54 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
> (gdb) bt
> #0  0x7529c428 in __GI_raise (sig=sig@entry=6) at
> ../sysdeps/unix/sysv/linux/raise.c:54
> #1  0x7529e02a in __GI_abort () at abort.c:89
> #2  0x75294bd7 in __assert_fail_base (fmt=,
> assertion=assertion@entry=0x75bcb5f5 "id <=
> __vapi_metadata.count",
> file=file@entry=0x75bcb530
> "/place/home/xpahos/git/vpp/build-data/../src/vpp-api/vapi/vapi.c",
> line=line@entry=696, function=function@entry=0x75bcb6f0
> <__PRETTY_FUNCTION__.39669> "vapi_msg_is_with_context") at
> assert.c:92
> #3  0x75294c82 in __GI___assert_fail
> (assertion=assertion@entry=0x75bcb5f5
> "id <= __vapi_metadata.count", file=file@entry=0x75bcb530
> "/place/home/xpahos/git/vpp/build-data/../src/vpp-api/vapi/vapi.c",
> line=line@entry=696, function=function@entry=0x75bcb6f0
> <__PRETTY_FUNCTION__.39669> "vapi_msg_is_with_context") at
> assert.c:101
> #4  0x75bcac0b in vapi_msg_is_with_context (id=id@entry=3200171710)
> at /place/home/xpahos/git/vpp/build-data/../src/vpp-api/vapi/vapi.c:696
> #5  0x7646f2a0 in vapi::Connection::dispatch (time=5,
> limit=0x0,
> this=0x61109500) at
> /place/home/xpahos/git/vpp/build-root/install-vpp-
> native/vpp/include/vapi/vapi.hpp:270
> #6  VOM::HW::cmd_q::rx_run (this=0x61007d40) at
> /place/home/xpahos/git/vpp/build-data/../extras/vom/vom/hw.cpp:45
> #7  0x758ffc80 in ?? () from
> /usr/lib/x86_64-linux-gnu/libstdc++.so.6
> #8  0x750516ba in start_thread (arg=0x7fffedbff700) at
> pthread_create.c:333
> #9  0x7536e41d in clone () at
> ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
> 
> Values:
> 
> Thread 2 "main" hit Breakpoint 1, vapi_msg_is_with_context (id=id@ent
> ry=130)
> at /place/home/xpahos/git/vpp/build-data/../src/vpp-
> api/vapi/vapi.c:696
> 696   assert (id <= __vapi_metadata.count);
> (gdb) print id
> $1 = 130
> (gdb) c
> Continuing.
> Mon Jul  2 13:52:37 2018 [debug]rpc_cmd.hpp:120 operator()() itf-
> events 0
> Started
> 
> Thread 2 "main" hit Breakpoint 1, vapi_msg_is_with_context
> (id=id@entry=3200171710)
> at /place/home/xpahos/git/vpp/build-data/../src/vpp-
> api/vapi/vapi.c:696
> 696   assert (id <= __vapi_metadata.count);
> (gdb) print id
> $2 = 3200171710
> 
> Shared memory message:
> 
> (gdb) print shm_data_size
> $3 = 10
> (gdb) print shm_data
> $4 = (void *) 0x30045a74
> (gdb) x/10h 0x30045a74
> 0x30045a74: 5376 0 0 0 0 0 0 0
> 0x30045a84: 0 0
> 
> On Mon, 2 Jul 2018 at 10:40, Klement Sekera 
> wrote:
> 
> > 
> > Hi,
> > 
> > can you please run in gdb (or open core) and pass us a stack trace?
> > 
> > Thanks,
> > Klement
> > 
> > On Sun, 2018-07-01 at 19:36 -0700, xpa...@gmail.com wrote:
> > > 
> > > Hello,
> > > I'm trying to understand how works VOM wrapper for C++ API.
> > > VOM manager header: https://pastebin.com/BgG7td5s
> > > VOM manager cpp: https://pastebin.com/890jjUJm
> > > main.cpp:
> > > 
> > > int main() {
> > > TVppManager manager;
> > > std::cout << "Starting\n";
> > > manager.Start();
> > > 
> > > sleep(60);
> > > 
> > > manager.Stop();
> > > }
> > > 
> > > When I'm trying to run sudo ./main a I got error:
> > > Starting
> > > Calling connect
> > > Manager is connected
> > > Mon Jul  2 05:33:55 2018 [debug]hw.cpp:129 write() itf-events
> > > Mon Jul  2 05:33:55 2018 [debug]rpc_cmd.hpp:120 operator()() itf-
> > > events 0
> > > Started
> > > main: /place/home/xpahos/git/vpp/build-data/../src/vpp-
> > > api/vapi/vapi.c:696: vapi_msg_is_with_context: Assertion `id <=
> > > __vapi_metadata.count' failed.
> > > Aborted
> > > 
> > > I found that id looks like unsigned value overflow. How can I
> > > debug
> > > this? I don't see anything in trace dump.
> 


Re: [vpp-dev] vapi recv error msg_id_name

2018-10-15 Thread Klement Sekera via Lists.Fd.Io
Hey,

The reason why your name doesn't match the request is related to your
code mixing vl_msg_id_t with vapi_msg_id_t. VPP internally assigns
message IDs at startup based on its runtime configuration - plugins. If
a plugin is not loaded, its API messages aren't loaded either. On the
other hand, VAPI IDs are constructed when the libvapiclient binary is
loaded and depend on the included *.api.vapi.h files which your application
is using.

TLDR: change

printf ("recv msg[%d] %s\n", resp->header._vl_msg_id,
        vapi_get_msg_name (resp->header._vl_msg_id));

to

printf ("recv msg[%d] %s\n", resp->header._vl_msg_id,
        vapi_get_msg_name (vapi_lookup_vapi_msg_id_t (ctx, resp->header._vl_msg_id)));

Regards,
Klement

Quoting wangchuan...@163.com (2018-10-14 11:43:35)
>My connect:
>vapi_connect (ctx, "test123", NULL, 64, 32, VAPI_MODE_BLOCKING);
>Even if there is no vapi_send, I can get 0 from vapi_recv after some time.
>My branch is stable 18.04.
>Who can help? 
>Thanks very much!
> 
>--
> 
>wangchuan...@163.com
> 
>   
>  From: [1]wangchuan...@163.com
>  Date: 2018-10-14 17:06
>  To: [2]vpp-dev
>  Subject: vapi recv error msg_id_name
>  Hi all,
>      I hit a serious problem when using vapi: the received msg name has
>  nothing to do with the sent msg id.
>  The currently installed RPMs were compiled yesterday, and I made some
>  changes to src/*.
>  Should I recompile my example after making pkg-rpm and installing all
>  the RPMs again?
>  The code looks like:
>      vapi_msg_sw_interface_set_l2_bridge *msg =
>          vapi_alloc_sw_interface_set_l2_bridge (ctx);
>      msg->payload.rx_sw_if_index = rx_sw_if_index;
>      msg->payload.bd_id = bd_id;
>      msg->payload.shg = shg;
>      msg->payload.bvi = bvi;
>      msg->payload.enable = enable;
>      vapi_msg_sw_interface_set_l2_bridge_hton (msg);
>      vapi_error_e rv = vapi_send (ctx, msg);
>      vapi_msg_sw_interface_set_l2_bridge_reply *resp;
>      size_t size;
>      rv = vapi_recv (ctx, (void *) &resp, &size, 0, 0);
>      vapi_msg_sw_interface_set_l2_bridge_reply_hton (resp);
>      printf ("recv msg[%d] %s\n", resp->header._vl_msg_id,
>              vapi_get_msg_name (resp->header._vl_msg_id));
>  
>  recv msg[166] ip6nd_proxy_details
>  recv msg[23] sw_interface_tag_add_del
> 
>--
> 
>  wangchuan...@163.com
> 
> References
> 
>Visible links
>1. mailto:wangchuan...@163.com
>2. mailto:vpp-dev@lists.fd.io


Re: [vpp-dev] vapi recv error msg_id_name

2018-10-17 Thread Klement Sekera via Lists.Fd.Io
Getting a message without sending one is probably because of keepalive
messages, which vapi doesn't handle automatically in your version.
Newer versions support automatic handling of keepalive messages by
supplying the appropriate parameter to vapi_connect to do so.
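
For illustration, a minimal sketch of the newer connect call (the client
name and queue sizes are arbitrary examples; in your 18.04 version the
trailing handle_keepalives argument does not exist yet):

  #include <stdbool.h>
  #include <vapi/vapi.h>

  /* Connect with automatic keepalive handling enabled (last argument). */
  vapi_error_e
  connect_with_keepalives (vapi_ctx_t ctx)
  {
    return vapi_connect (ctx, "myclient", NULL,
                         64 /* max outstanding requests */,
                         32 /* response queue size */,
                         VAPI_MODE_BLOCKING,
                         true /* handle_keepalives */);
  }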

Regarding tunnel_if_index - can you verify that the reply is indeed a
vapi_vxlan_add_del_reply? If so, then the bug is most probably in your
app or on the vpp side. The vapi library doesn't do any processing itself
and all the code is generated.

Quoting 王传国 (2018-10-17 09:47:24)
> One more question.
> 
> Even if there was no vapi_send, I can get 0 from vapi_recv after some time.
> Why?
> Sometimes, for a vapi_vxlan_add_del reply, retval = 0, but the
> tunnel_if_index is 0 when it should be 6 (as shown by the CLI command
> "show int").
> 
> 
> 
> --
> Sent from my NetEase Mail mobile client
> 
> 
> On 2018-10-15 19:03:52, "Klement Sekera"  wrote:
> >Hey,
> >
> >The reason why your name doesn't match the request is related to your
> >code mixing vl_msg_id_t with vapi_msg_id_t. VPP internally assignes
> >message IDs at startup based on it's runtime configuration - plugins. If
> >a plugin is not loaded, it's API messages aren't loaded either. On the
> >other hand, VAPI IDs are constructed when the libvapiclient binary is
> >loaded and depend on included *.api.vapi.h files which your application
> >is using.
> >
> >TLDR:
> >change:
> >printf("recv msg[%d] %s\n", resp->header._vl_msg_id, 
> >vapi_get_msg_name(resp->header._vl_msg_id));
> >to:
> >printf("recv msg[%d] %s\n", resp->header._vl_msg_id, 
> >vapi_get_msg_name(vapi_lookup_vapi_msg_id_t(ctx, resp->header._vl_msg_id))); 
> >
> >Regards,
> >Klement
> >
> >Quoting wangchuan...@163.com (2018-10-14 11:43:35)
> >>My connect:
> >>vapi_connect (ctx, "test123", NULL, 64, 32, VAPI_MODE_BLOCKING);
> >>Even if there is no vapi_send, I can get 0 from vapi_recv after 
> >> sometime.
> >>My branch is stable 18.04.
> >>Who can help? 
> >>Thanks very much!
> >> 
> >>
> >> --
> >> 
> >>wangchuan...@163.com
> >> 
> >>   
> >>  From: [1]wangchuan...@163.com
> >>  Date: 2018-10-14 17:06
> >>  To: [2]vpp-dev
> >>  Subject: vapi recv error msg_id_name
> >>  Hi all,
> >>      I got a serious mistake when using vapi that 'recv msg->name' has
> >>  nothing to do with 'send msg-id'.
> >>  The currently installed RPMS were compiled yesterday,and i do some
> >>  change to src/*.
> >>  Should I compile my example after making pkg-rpm && installing all 
> >> rpms
> >>  again?
> >>  The code like :
> >>      vapi_msg_sw_interface_set_l2_bridge *msg =  
> >> vapi_alloc_sw_interface_set_l2_bridge(ctx);
> >>msg->payload.rx_sw_if_index = rx_sw_if_index;
> >>msg->payload.bd_id = bd_id;
> >>msg->payload.shg = shg;
> >>msg->payload.shg = shg;
> >>msg->payload.bvi = bvi;
> >>msg->payload.enable = enable;
> >>vapi_msg_sw_interface_set_l2_bridge_hton (msg);
> >>vapi_error_e rv = vapi_send (ctx, msg);
> >>vapi_msg_sw_interface_set_l2_bridge_reply *resp;
> >>size_t size;
> >>rv = vapi_recv (ctx, (void *) , , 0, 0);
> >>vapi_msg_sw_interface_set_l2_bridge_reply_hton(resp);
> >>printf("recv msg[%d] %s\n", resp->header._vl_msg_id, 
> >> vapi_get_msg_name(resp->header._vl_msg_id) );
> >>  
> >>  recv msg[166] ip6nd_proxy_details
> >>  recv msg[23] sw_interface_tag_add_del
> >> 
> >>
> >> --
> >> 
> >>  wangchuan...@163.com
> >> 
> >> References
> >> 
> >>Visible links
> >>1. mailto:wangchuan...@163.com
> >>2. mailto:vpp-dev@lists.fd.io


Re: [vpp-dev] ipsec support for chained buffers.

2018-11-12 Thread Klement Sekera via Lists.Fd.Io
Hi Vijay,

yes, it's planned.

Regards,
Klement

Quoting Vijayabhaskar Katamreddy via Lists.Fd.Io (2018-11-11 23:56:37)
>Hi
> 
> 
> 
>    Are there any plans or any work in progress to extend the ipsec
>    encrypt nodes to support chained buffers?
> 
> 
> 
>Thanks
> 
>Vijay


Re: [vpp-dev] how to set cb for keepalive of vapi-connection @ 18.10

2018-11-13 Thread Klement Sekera via Lists.Fd.Io
Hi,
if you ask vapi to handle keepalives for you, why would you want to get
callbacks for them?

Regards,
Klement

Quoting wangchuan...@163.com (2018-11-13 07:27:33)
>Hi,
>    At vpp stable/18.10, I use vapi_connect to connect to vpp and set the
>"bool handle_keepalives" to true.
>But how can I add a callback for keepalives?
>If I do, does it mean that I could use the elapsed time to detect a
>disconnection, and then need to reconnect using vapi_disconnect &
>vapi_connect?
>Thanks!
> 
>--
> 
>wangchuan...@163.com


Re: [vpp-dev] test-ext failures seen on master

2018-11-15 Thread Klement Sekera via Lists.Fd.Io
st 
>OK
>
> ==
>VCL Thru Host Stack Extended D Tests
>
> ==
>run VCL thru host stack uni-directional (multiple sockets) test 
>OK
>
> ==
>VCL IPv6 Thru Host Stack Tests
>
> ==
>run LDP IPv6 thru host stack echo test  
>OK
>run VCL IPv6 thru host stack echo test  
>OK
>
> ==
>VCL Cut Thru Tests
>
> ==
>run LDP cut thru bi-directional (multiple sockets) test 
>OK
>run LDP cut thru echo test  
>OK
>run LDP cut thru iperf3 test
>OK
>run LDP cut thru uni-directional (multiple sockets) test
>OK
>run VCL cut thru bi-directional (multiple sockets) test 
>OK
>run VCL cut thru echo test  
>OK
>run VCL cut thru uni-directional (multiple sockets) test
>OK
>
> ==
>VCL Thru Host Stack Extended C Tests
>
> ==
>run LDP thru host stack uni-directional (multiple sockets) test 
>OK
>
> ==
>VCL Thru Host Stack NSession Bidir Tests
>
> ==
>run VCL thru host stack bi-directional (multiple sockets) test  
>OK
>
> ==
>VCL IPv6 Thru Host Stack Extended A Tests
>
> ==
>run VCL thru host stack bi-directional (multiple sockets) test  
>OK
>
> ==
>VCL IPv6 Cut Thru Tests
>
> ==
>run LDP IPv6 cut thru bi-directional (multiple sockets) test
>OK
>run LDP IPv6 cut thru echo test 
>OK
>run LDP IPv6 cut thru iperf3 test   
>OK
>run LDP IPv6 cut thru uni-directional (multiple sockets) test   
>OK
>run VCL IPv6 cut thru bi-directional (multiple sockets) test
>OK
>run VCL IPv6 cut thru echo test 
>OK
>run VCL IPv6 cut thru uni-directional (multiple sockets) test   
>OK
>
> ==
>VCL IPv6 Thru Host Stack Extended C Tests
>
> ==
>run LDP thru host stack uni-directional (multiple sockets) test 
>OK
>
> ==
>VCL IPv6 Thru Host Stack Extended B Tests
>
> ==
>run LDP thru host stack bi-directional (multiple sockets) test  
>OK
>
> ==
>VCL IPv6 Thru Host Stack Iperf Tests
>
> ==
>run LDP thru host stack iperf3 test 
>OK
>
> ==
>VCL IPv6 Thru Host Stack Extended D Tests
>
> ==
>run VCL thru host stack uni-directional (multiple sockets) test 
>OK
> 
>Ran 28 tests in 250.800s
> 
>OK
> 
>
> ==
>TEST RESULTS:
> Scheduled tests: 28
>  

Re: [vpp-dev] test-ext failures seen on master

2018-11-16 Thread Klement Sekera via Lists.Fd.Io
No, it's on my laptop, and out of 16 GB of memory 11 GB is caches, so there
shouldn't be any OOM issue there.

Unless you change the TEST_JOBS option (setting it to either auto or ), the test framework still runs 1 job at a time.

Klement

Quoting Florin Coras (2018-11-15 22:16:05)
> That’s an interesting failure. Is the test machine running out of memory?
> 
> The extended tests are unstable on my server, so I do see quite a number of 
> failures. However this:
> 
> make test-ext TEST=vcl.VCLCutThruTestCase.test_vcl_cut_thru_uni_dir_nsock
> 
> runs just fine. After the latest test framework changes, are we running 
> multiple tests/vpps in parallel? I suspect that may be a source of issues. 
> 
> Florin
> 
> > On Nov 15, 2018, at 12:11 PM, Klement Sekera via Lists.Fd.Io 
> >  wrote:
> > 
> > I'm seeing timeouts and coredumps...
> > 
> > e.g.
> > 
> > #6  0x7f9ba0404eb6 in svm_msg_q_try_lock (mq=0x204009440)
> > at /home/ksekera/vpp/src/svm/message_queue.h:299
> > 299   return pthread_mutex_trylock (>q->mutex);
> > (gdb) p mq
> > $1 = (svm_msg_q_t *) 0x204009440
> > (gdb) p mq->q
> > $2 = (svm_queue_t *) 0x0
> > 
> > which is part of
> > 
> > #4  
> > #5  __pthread_mutex_trylock (mutex=0x0) at 
> > ../nptl/pthread_mutex_trylock.c:39
> > #6  0x7f9ba0404eb6 in svm_msg_q_try_lock (mq=0x204009440)
> >at /home/ksekera/vpp/src/svm/message_queue.h:299
> > #7  0x7f9ba04055d5 in svm_msg_q_lock_and_alloc_msg_w_ring 
> > (mq=0x204009440, 
> >ring_index=1, noblock=1 '\001', msg=0x7f9b5f7c2a80)
> >at /home/ksekera/vpp/src/svm/message_queue.c:121
> > #8  0x7f9ba14be449 in mq_try_lock_and_alloc_msg (app_mq=0x204009440, 
> >msg=0x7f9b5f7c2a80) at 
> > /home/ksekera/vpp/src/vnet/session/session_api.c:407
> > #9  0x7f9ba14be509 in mq_send_session_accepted_cb (s=0x7f9b60351400)
> >at /home/ksekera/vpp/src/vnet/session/session_api.c:432
> > #10 0x7f9ba1496ba0 in application_local_session_connect (
> >client_wrk=0x7f9b60805800, server_wrk=0x7f9b60805780, ll=0x7f9b5f4c9e40, 
> >opaque=0) at /home/ksekera/vpp/src/vnet/session/application.c:1646
> > #11 0x7f9ba14a5a62 in application_connect (a=0x7f9b5f7c2d30)
> >at /home/ksekera/vpp/src/vnet/session/application_interface.c:327
> > ---Type  to continue, or q  to quit---
> > #12 0x7f9ba14a69fd in vnet_connect (a=0x7f9b5f7c2d30)
> >at /home/ksekera/vpp/src/vnet/session/application_interface.c:673
> > #13 0x7f9ba14c0f27 in vl_api_connect_sock_t_handler (mp=0x1300a6218)
> >at /home/ksekera/vpp/src/vnet/session/session_api.c:1305
> > #14 0x7f9ba1b6cb25 in vl_msg_api_handler_with_vm_node (
> >am=0x7f9ba1d7dc60 , the_msg=0x1300a6218, 
> >vm=0x7f9ba08fc2c0 , node=0x7f9b5f7ba000)
> >at /home/ksekera/vpp/src/vlibapi/api_shared.c:502
> > #15 0x7f9ba1b39114 in void_mem_api_handle_msg_i (
> >am=0x7f9ba1d7dc60 , vm=0x7f9ba08fc2c0 , 
> >node=0x7f9b5f7ba000, q=0x13004c440)
> >at /home/ksekera/vpp/src/vlibmemory/memory_api.c:700
> > #16 0x7f9ba1b39183 in vl_mem_api_handle_msg_main (
> >vm=0x7f9ba08fc2c0 , node=0x7f9b5f7ba000)
> >at /home/ksekera/vpp/src/vlibmemory/memory_api.c:710
> > #17 0x7f9ba1b572dd in vl_api_clnt_process (
> >vm=0x7f9ba08fc2c0 , node=0x7f9b5f7ba000, f=0x0)
> >at /home/ksekera/vpp/src/vlibmemory/vlib_api.c:350
> > #18 0x7f9ba0674a11 in vlib_process_bootstrap (_a=140305300978672)
> >at /home/ksekera/vpp/src/vlib/main.c:1276
> > #19 0x7f9b9fef4e74 in clib_calljmp ()
> >   from 
> > /home/ksekera/vpp/build-root/install-vpp_debug-native/vpp/lib/libvppinfra.so.19.01
> > 
> > could this be the result of a timeout and the killing of the child
> > process?
> > 
> > Thanks,
> > Klement
> > 
> > 
> > Quoting Dave Wallace (2018-11-15 20:27:55)
> >>   Klement,
> >> 
> >>   I just pulled the top-of-tree on master and ran only VCL tests on my 
> >> 18.04
> >>   box and they all passed (see below).  Another strange thing about your
> >>   failure is that the test that failed is NOT an extended test.
> >> 
> >>   I'm currently working on a patch ([1]https://gerrit.fd.io/r/#/c/13215/) 
> >> to
> >>   shorten the run time for the extended tests and convert them to regular
> >>   tests.  In the past, I have seen some unexplained failures of some of the
> >>   extended tests.  I'll let you know if I encounter any of them again.
> >> 
> >>   Thanks,
> >>   -daw-
> >> 
>

Re: [vpp-dev] Using clang-format ForEachMacros?

2018-11-16 Thread Klement Sekera via Lists.Fd.Io
clang-format can format the macros just fine as they are. I added
clang-format for checking the style of the C++ code (mainly vapi code). C
files are still checked using indent, which doesn't understand the
macros... Also, I wasn't able to come up with a clang-format configuration
which matches the current indent style 1:1, so I don't see a switch to
clang-format coming any time soon...

Thanks,
Klement

Quoting Stephen Hemminger (2018-11-16 02:02:55)
> I just discovered the clang-format file and noticed that Linux kernel used
> the ForEachMacros option to format for() wrappers.
> 
> VPP seems to be riddled with INDENT-OFF when it does vector loops. (2123 
> times!)
> Has anyone looked into replacing these with ForEachMacros?
> 
> 


Re: [vpp-dev] test-ext failures seen on master

2018-11-16 Thread Klement Sekera via Lists.Fd.Io
I've also noticed that quite often a binary called sock_test_client is
left running after a vpp crash or test failure. What's worse, it eats
100% CPU.

Quoting Florin Coras (2018-11-16 00:40:06)
>Thanks, Dave!
>I’ll take a look at those as soon as I can. I’m running multiple
>connections between 2 vpp hosts without issue, so it’s either a
>cut-through session issue or it has to do with how we setup vpp for vrf
>leaking. 
>Cheers,
>Florin
> 
>  On Nov 15, 2018, at 3:00 PM, Dave Wallace <[1]dwallac...@gmail.com>
>  wrote:
>  Same here.  However, in the same workspace where all tests passed, I can
>  get this test case to fail consistently:
> 
>  EXTENDED_TESTS=y TEST=vcl.VCLThruHostStackExtendedBTestCase.* make test
>  EXTENDED_TESTS=y TEST=vcl.VCLIpv6ThruHostStackExtendedBTestCase.* make
>  test
> 
>  In patch 13215, I discovered that making these test cases NOT run
>  multiple sockets in parallel the test passes.  My latest patch to that
>  has the multiple sockets option commented out with "# ouch! Host Stack
>  Bug?" so that all tests pass.
> 
>  Thanks,
>  -daw-
>  On 11/15/2018 4:16 PM, Florin Coras wrote:
> 
>  That’s an interesting failure. Is the test machine running out of memory?
> 
>  The extended tests are unstable on my server, so I do see quite a number of 
> failures. However this:
> 
>  make test-ext TEST=vcl.VCLCutThruTestCase.test_vcl_cut_thru_uni_dir_nsock
> 
>  runs just fine. After the latest test framework changes, are we running 
> multiple tests/vpps in parallel? I suspect that may be a source of issues.
> 
>  Florin
> 
> 
>  On Nov 15, 2018, at 12:11 PM, Klement Sekera via Lists.Fd.Io 
> [2] wrote:
> 
>  I'm seeing timeouts and coredumps...
> 
>  e.g.
> 
>  #6  0x7f9ba0404eb6 in svm_msg_q_try_lock (mq=0x204009440)
>  at /home/ksekera/vpp/src/svm/message_queue.h:299
>  299   return pthread_mutex_trylock (>q->mutex);
>  (gdb) p mq
>  $1 = (svm_msg_q_t *) 0x204009440
>  (gdb) p mq->q
>  $2 = (svm_queue_t *) 0x0
> 
>  which is part of
> 
>  #4  
>  #5  __pthread_mutex_trylock (mutex=0x0) at ../nptl/pthread_mutex_trylock.c:39
>  #6  0x7f9ba0404eb6 in svm_msg_q_try_lock (mq=0x204009440)
> at /home/ksekera/vpp/src/svm/message_queue.h:299
>  #7  0x7f9ba04055d5 in svm_msg_q_lock_and_alloc_msg_w_ring 
> (mq=0x204009440,
> ring_index=1, noblock=1 '\001', msg=0x7f9b5f7c2a80)
> at /home/ksekera/vpp/src/svm/message_queue.c:121
>  #8  0x7f9ba14be449 in mq_try_lock_and_alloc_msg (app_mq=0x204009440,
> msg=0x7f9b5f7c2a80) at 
> /home/ksekera/vpp/src/vnet/session/session_api.c:407
>  #9  0x7f9ba14be509 in mq_send_session_accepted_cb (s=0x7f9b60351400)
> at /home/ksekera/vpp/src/vnet/session/session_api.c:432
>  #10 0x7f9ba1496ba0 in application_local_session_connect (
> client_wrk=0x7f9b60805800, server_wrk=0x7f9b60805780, ll=0x7f9b5f4c9e40,
> opaque=0) at /home/ksekera/vpp/src/vnet/session/application.c:1646
>  #11 0x7f9ba14a5a62 in application_connect (a=0x7f9b5f7c2d30)
> at /home/ksekera/vpp/src/vnet/session/application_interface.c:327
>  ---Type  to continue, or q  to quit---
>  #12 0x7f9ba14a69fd in vnet_connect (a=0x7f9b5f7c2d30)
> at /home/ksekera/vpp/src/vnet/session/application_interface.c:673
>  #13 0x7f9ba14c0f27 in vl_api_connect_sock_t_handler (mp=0x1300a6218)
> at /home/ksekera/vpp/src/vnet/session/session_api.c:1305
>  #14 0x7f9ba1b6cb25 in vl_msg_api_handler_with_vm_node (
> am=0x7f9ba1d7dc60 , the_msg=0x1300a6218,
> vm=0x7f9ba08fc2c0 , node=0x7f9b5f7ba000)
> at /home/ksekera/vpp/src/vlibapi/api_shared.c:502
>  #15 0x7f9ba1b39114 in void_mem_api_handle_msg_i (
> am=0x7f9ba1d7dc60 , vm=0x7f9ba08fc2c0 ,
> node=0x7f9b5f7ba000, q=0x13004c440)
> at /home/ksekera/vpp/src/vlibmemory/memory_api.c:700
>  #16 0x7f9ba1b39183 in vl_mem_api_handle_msg_main (
> vm=0x7f9ba08fc2c0 , node=0x7f9b5f7ba000)
> at /home/ksekera/vpp/src/vlibmemory/memory_api.c:710
>  #17 0x7f9ba1b572dd in vl_api_clnt_process (
> vm=0x7f9ba08fc2c0 , node=0x7f9b5f7ba000, f=0x0)
> at /home/ksekera/vpp/src/vlibmemory/vlib_api.c:350
>  #18 0x7f9ba0674a11 in vlib_process_bootstrap (_a=140305300978672)
> at /home/ksekera/vpp/src/vlib/main.c:1276
>  #19 0x7f9b9fef4e74 in clib_calljmp ()
>from 
> /home/ksekera/vpp/build-root/install-vpp_debug-native/vpp/lib/libvppinfra.so.19.01
> 
>  could this be the result of a timeout and the killing of the child
>  process?
> 
>  Thanks,
>  Klement
> 
> 
>  Quoting Dave Wallace (2018-11-15 20:27:55)
> 
> 

Re: [vpp-dev] Can program have more than 1 VAPI connection?

2018-11-07 Thread Klement Sekera via Lists.Fd.Io
Hi,

sadly, no, because the underlying low-level library uses global variables.

Regards,
Klement

Quoting wangchuan...@163.com (2018-11-07 13:52:39)
>Hi all,
>    Can I create more than one connection using vapi_connect in my program?
>The ctx is defined in a .so (more than one)
> 
>--
> 
>wangchuan...@163.com


Re: [vpp-dev] how to set cb for keepalive of vapi-connection @ 18.10

2018-11-14 Thread Klement Sekera via Lists.Fd.Io
It's shared memory; unless vpp dies there is no reason for the
connection to go down. If required, I would suggest you send periodic
control pings.
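
If you do want an explicit liveness probe, a minimal sketch of such a
periodic control ping (assuming the generated VAPI bindings for vpe.api;
the helper names are made up) could look like this:

  #include <stdbool.h>
  #include <vapi/vapi.h>
  #include <vapi/vpe.api.vapi.h>

  DEFINE_VAPI_MSG_IDS_VPE_API_JSON;

  /* Reply callback: a VAPI_OK result means vpp answered the ping. */
  static vapi_error_e
  ping_reply_cb (vapi_ctx_t ctx, void *callback_ctx, vapi_error_e rv,
                 bool is_last, vapi_msg_control_ping_reply *reply)
  {
    (void) ctx; (void) callback_ctx; (void) is_last; (void) reply;
    return rv;
  }

  /* Send one control ping and wait for the reply (blocking connection);
   * a non-VAPI_OK return hints that the vpp side has gone away. */
  static vapi_error_e
  probe_vpp (vapi_ctx_t ctx)
  {
    vapi_msg_control_ping *ping = vapi_alloc_control_ping (ctx);
    if (!ping)
      return VAPI_ENOMEM;
    vapi_error_e rv = vapi_control_ping (ctx, ping, ping_reply_cb, 0);
    if (VAPI_OK == rv)
      rv = vapi_dispatch (ctx);
    return rv;
  }

Calling this once every few seconds from the client gives a simple
disconnect check: a failing return is the cue to vapi_disconnect() and
reconnect.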

Quoting wangchuan...@163.com (2018-11-14 02:12:07)
>worried that the connection might break.
> 
>--
> 
>wangchuan...@163.com
> 
>   
>  From: [1]Klement Sekera
>  Date: 2018-11-13 18:04
>  To: [2]wangchuan...@163.com; [3]vpp-dev
>  Subject: Re: [vpp-dev] how to set cb for keepalive of vapi-connection @
>  18.10
>  Hi,
>  if you ask vapi to handle keepalives for you, why would you want to get
>  callbacks for them?
>   
>  Regards,
>  Klement
>   
>  Quoting wangchuan...@163.com (2018-11-13 07:27:33)
>  >    Hi,
>  >        At vpp stable/18.10, using vapi_connect to connect vpp and set
>  the
>  >    "bool handle_keepalives " to true,
>  >    But how can i add the cb for keepalives?
>  >    If do, does it means that I could judge the timespan to found the
>  >    disconnection , and need to reconnect using vapi_disconnect &
>  vpai_connect
>  >    ?
>  >    Thanks !
>  >
>  >   
>  
> --
>  >
>  >    wangchuan...@163.com
> 
> References
> 
>Visible links
>1. mailto:ksek...@cisco.com
>2. mailto:wangchuan...@163.com
>3. mailto:vpp-dev@lists.fd.io


Re: [vpp-dev] question about bfd protocol message

2018-09-25 Thread Klement Sekera via Lists.Fd.Io
Hi Xue,

I'm not sure what protocol message you mean. Can you please elaborate or
point to RFC number & section which describes the message you're
interested in?

Thanks,
Klement

Quoting xyxue (2018-09-25 09:48:58)
>Hi guys,
>I'm testing BFD. Does the BFD support protocol messages?
>Thanks,
>Xue
> 
>--


Re: [vpp-dev] A bug in IP reassembly?

2018-09-25 Thread Klement Sekera via Lists.Fd.Io
Hi Kingwel,

thanks for finding this bug. Your patch looks fine - would you mind
making a similar fix in ip4_reassembly.c? The logic suffers from the
same flaw there.

Thanks,
Klement

Quoting Kingwel Xie (2018-09-25 11:06:49)
>Hi,
> 
> 
> 
>I worked on testing IP reassembly recently, and hit a crash when testing
>IP reassembly with IPSec. It took me some time to figure out why.
> 
> 
> 
>The crash only happens when there are >1 feature node enabled under
>ip-unicast and ip reassembly is working, like below.
> 
> 
> 
>ip4-unicast:
> 
>  ip4-reassembly-feature
> 
>  ipsec-input-ip4
> 
> 
> 
>It looks like there is a bug in the reassembly code as below:
>vnet_feature_next will do to buffer b0 to update the next0 and the
>current_config_index of b0, but b0 is pointing to some fragment buffer
>which in most cases is not the first buffer in chain indicated by bi0.
>Actually bi0 pointing to the first buffer is returned by ip6_reass_update
>when reassembly is finalized. As I can see this is a mismatch that bi0 and
>b0 are not the same buffer. In the end the quick fix is like what I added
>: b0 = vlib_get_buffer (vm, bi0); to make it right.
> 
> 
> 
>      if (~0 != bi0)
>        {
>        skip_reass:
>          to_next[0] = bi0;
>          to_next += 1;
>          n_left_to_next -= 1;
>          if (is_feature && IP6_ERROR_NONE == error0)
>            {
>              b0 = vlib_get_buffer (vm, bi0);   <-- added by Kingwel
>              vnet_feature_next (&next0, b0);
>            }
>          vlib_validate_buffer_enqueue_x1 (vm, node, next_index, to_next,
>                                           n_left_to_next, bi0, next0);
>        }
> 
> 
> 
>Probably this is not the perfect fix, but it works at least. Wonder if
>committers have better thinking about it? I can of course push a patch if
>you think it is ok.
> 
> 
> 
>Regards,
> 
>Kingwel
> 
> 
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10642): https://lists.fd.io/g/vpp-dev/message/10642
Mute This Topic: https://lists.fd.io/mt/26218556/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] question about bfd protocol message

2018-09-25 Thread Klement Sekera via Lists.Fd.Io
Hi Xue,

RFC 5884 is not implemented.

Implemented RFCs are: 5880, 5881.

Thanks,
Klement

Quoting 薛欣颖 (2018-09-25 12:56:56)
>Hi Klement,
>I'm interested in BFD for LSP (RFC 5884 ).  Thank you very much for your
>reply.
>Thanks,
>Xue
> 
>--
> 
>   
>      From: [1]Klement Sekera via Lists.Fd.Io
>  Date: 2018-09-25 16:57
>  To: [2]薛欣颖; [3]vpp-dev
>  CC: [4]vpp-dev
>  Subject: Re: [vpp-dev] question about bfd protocol message
>  Hi Xue,
>   
>  I'm not sure what protocol message you mean. Can you please elaborate or
>  point to RFC number & section which describes the message you're
>  interested in?
>   
>  Thanks,
>  Klement
>   
>  Quoting xyxue (2018-09-25 09:48:58)
>  >    Hi guys,
>  >    I'm testing the bfd . Is the bfd support protocol message? 
>  >    Thanks,
>  >    Xue
>  >
>  >   
>  
> --
>   
>   
>  -=-=-=-=-=-=-=-=-=-=-=-
>  Links: You receive all messages sent to this group.
>   
>  View/Reply Online (#10637): https://lists.fd.io/g/vpp-dev/message/10637
>  Mute This Topic: https://lists.fd.io/mt/26218372/675372
>  Group Owner: vpp-dev+ow...@lists.fd.io
>  Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [xy...@fiberhome.com]
>  -=-=-=-=-=-=-=-=-=-=-=-
> 
> References
> 
>Visible links
>1. mailto:ksekera=cisco@lists.fd.io
>2. mailto:xy...@fiberhome.com
>3. mailto:vpp-dev@lists.fd.io
>4. mailto:vpp-dev@lists.fd.io
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10650): https://lists.fd.io/g/vpp-dev/message/10650
Mute This Topic: https://lists.fd.io/mt/26218372/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP's C-type-api example

2018-09-19 Thread Klement Sekera via Lists.Fd.Io
Quoting wangchuan...@163.com (2018-09-19 10:03:40)
>I find the feature of the absent vapi.
>There exists VALs in ***.api.

What is `VAL`?

>I copy the
>master's [1]src/vpp-api/vapi/vapi_c_gen.py and 
> [2]src/vpp-api/vapi/vapi_json_parser.py to
>stable/1804, 
>but compile failed!
>---
>configure: creating ./config.status
>config.status: creating Makefile
>config.status: creating plugins/Makefile
>config.status: creating vpp-api/python/Makefile
>config.status: creating vpp-api/java/Makefile
>config.status: error: cannot find input file: `vpp-api/vapi/Makefile.in'

This looks like a different kind of issue. Are vapi_c_gen and
vapi_json_parser the only modified files?

>make[1]: *** [vpp-configure] Error 1
>make[1]: Leaving directory `/hctel/vpp/build-root'
>make: *** [build] Error 2
>
>What should I do?
>I don't want to upgrade my vpp to master!
> 
>--
> 
>wangchuan...@163.com
> 
>   
>  From: [3]wangchuan...@163.com
>  Date: 2018-09-19 11:46
>  To: [4]Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at Cisco);
>  [5]Ole Troan
>  CC: [6]Dave Barach (dbarach); [7]vpp-dev
>  Subject: Re: Re: [vpp-dev] VPP's C-type-api example
>  Hi Klement,
>      There have "autoreply define classify_add_del_session"  in
>  classify.api .
>  And /usr/include/vnet/classify/classify.api.h  have the
>  "vl_api_classify_add_del_session".
>  And the java jar have the "classify_add_del_session" class  too!
>  But the classify.api.vapi.h don't have !!!
>  Regards,
>  Simon 
> 
>--
> 
>  wangchuan...@163.com
> 
> 
>From: [8]Klement Sekera
>Date: 2018-09-19 03:18
>To: [9]wangchuan...@163.com; [10]Ole Troan
>CC: [11]Dave Barach (dbarach); [12]vpp-dev
>Subject: Re: [vpp-dev] VPP's C-type-api example
>Hi,
> 
>VAPI is autogenerated from .json files, which are generated from .api
>files. The API you are asking about is in
>src/vnet/classify/classify.api. See that file's history on when it
>appeared. As for your second question - I don't see any `obscure-mask`
>nor `clear-param` parameters. If you are asking about semantics of the
>API call, that is a question for the author of said API, since VAPI is
>generated code.
> 
>Regards,
>Klement
> 
>Quoting wangchuan...@163.com (2018-09-18 10:03:10)
>>    Hi Klement,
>>        I'm sorry to trouble you again.
>>    First,Have The classify_add_del_session vapi  not exist yet
>@18.04?
>>    Second,How can I get the vapi_classify_add_del_table
>'s obscure-mask from
>>    clear-param ”l2|l3|l4 +  ip4|ip6 + protocol + dst|src  + tcp|udp
> + ... ”
>>    ? 
>>     (Like CLI_COMMAND-clear-param to mask-value)
>>    What api can do this?
>>
>>   
>
> --
>>
>>    wangchuan...@163.com
>>
>>   
>>  From: [1]Klement Sekera
>>  Date: 2018-09-17 20:23
>>  To: [2]wangchuan...@163.com; [3]Ole Troan
>>  CC: [4]Dave Barach (dbarach); [5]vpp-dev
>>  Subject: Re: [vpp-dev] VPP's C-type-api example
>>  If you do not consume a message (in this case most probably the
>reply to
>  vapi_tap_delete), then on disconnect, the client library does that
>that
>>  internally
>>  and prints a warning about it...
>>   
>>  Quoting wangchuan...@163.com (2018-09-17 12:00:27)
>>  >    Hi Klement,
>>  >        Please accept my heartfelt thanks.
>>  >    Everything is fine, but one: Calling 'vapi_tap_delete'
>without any
>>  other
>>  >    calling would cause some warining or error printed at my
>terminal: 
>>  >    my vpp is 18.04 stable.
>>  >    "tapDelete begin 
>>  >    tapDelete end
>>  >    vl_client_disconnect:313: queue drain: 79
>>  >    msg_handler_internal:432: no handler for msg id 79
>>  >    The end"
>>  >    Thanks again!
>>  >
>>  >   
>> 
>
> --
>>  >
>>  >    wangchuan...@163.com
>>  >
>>  >   
>>  >  From: [1]Klement Sekera
>>  >  Date: 2018-09-17 15:53
>

Re: [vpp-dev] Any tricks in IP reassembly ?

2018-12-13 Thread Klement Sekera via Lists.Fd.Io
The test framework spawns vpp with only one main thread and no workers.
I had a patch which also used worker threads, but it never made it to
git. It's grossly outdated by now.

Anyhow, I don't think that's really possible with the current packet
generator implementation.

Quoting pvi...@vinciconsulting.com (2018-12-12 18:50:11)
>Hi Klement.
> 
>Is there a way to write a test case that documents that behavior?  Does
>the test framework have the ability to split fragments across threads?
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11592): https://lists.fd.io/g/vpp-dev/message/11592
Mute This Topic: https://lists.fd.io/mt/28718629/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Any tricks in IP reassembly ?

2018-12-12 Thread Klement Sekera via Lists.Fd.Io
That's as it is now. A rework of the code is planned, no ETA yet...

Quoting mik...@yeah.net (2018-12-12 02:05:40)
>W...What??? So the only way to make ip reassembly working correctly is to
>keep one or more  interfaces attached to a single worker thread?
>That does not sound efficient ..
> 
>--
> 
>mik...@yeah.net
> 
>   
>  From: [1]Klement Sekera
>  Date: 2018-12-11 20:39
>  To: [2]Mikado; [3]vpp-dev
>  Subject: Re: [vpp-dev] Any tricks in IP reassembly ?
>  Hi Mikado,
>   
>  if the fragments get split between multiple workers, then eventually
>  they'll get dropped after timing out ...
>   
>  Regards,
>  Klement
>   
>  Quoting Mikado (2018-12-11 08:52:28)
>  >    Hi,
>  >
>  >       I have noticed that  “ip4-reassembly-feature” node  only
>  >    reassembles packets stored in the local pool of each thread. But it
>  seems
>  >    not right if a group of fragment packages is handled by different
>  worker
>  >    thread. So is there any tricks in VPP  forcing the fragment
>  packages in
>  >    the same group dispatched to the same thread ? Or is it a bug ?
>  >     
>  >
>  >    Thanks in advance.
>  >
>  >    Mikado
> 
> References
> 
>Visible links
>1. mailto:ksek...@cisco.com
>2. mailto:mik...@yeah.net
>3. mailto:vpp-dev@lists.fd.io
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11576): https://lists.fd.io/g/vpp-dev/message/11576
Mute This Topic: https://lists.fd.io/mt/28718629/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Ipv4 random reassembly failure on x86 and ARM

2018-12-20 Thread Klement Sekera via Lists.Fd.Io
Is this with https://gerrit.fd.io/r/#/c/16548/ merged?

Quoting Juraj Linkeš (2018-12-20 12:09:12)
>Hi Klement and vpp-dev,
> 
> 
> 
>[1]https://jira.fd.io/browse/VPP-1522 fixed the issue with an assert we've
>been seeing with random reassembly, however, there's still some other
>failure in that test: [2]https://jira.fd.io/browse/VPP-1475
> 
> 
> 
>It seems that not all fragments are sent properly. The run documented in
>Jira shows only 3089 fragments out of 5953 being sent and the test only
>sees 39 out of 257 packets received.
> 
> 
> 
>Could you or anyone from vpp-dev who's more familiar with the feature/code
>advise on how to debug this further?
> 
> 
> 
>I was able to reproduce this on my local x86 Bionic and Xenial VMs as well
>as our Cavium ThunderX machines (the ones we also use in CI). I'd love to
>see whether anyone else can also reproduce it on an x86 machine outside of
>CI (where the failure doesn't happen).
> 
> 
> 
>Thanks,
> 
>Juraj
> 
> References
> 
>Visible links
>1. https://jira.fd.io/browse/VPP-1522
>2. https://jira.fd.io/browse/VPP-1475
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11731): https://lists.fd.io/g/vpp-dev/message/11731
Mute This Topic: https://lists.fd.io/mt/28810299/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Any tricks in IP reassembly ?

2018-12-11 Thread Klement Sekera via Lists.Fd.Io
Hi Mikado,

if the fragments get split between multiple workers, then eventually
they'll get dropped after timing out ...

Regards,
Klement

Quoting Mikado (2018-12-11 08:52:28)
>Hi,
> 
>   I have noticed that  “ip4-reassembly-feature” node  only
>reassembles packets stored in the local pool of each thread. But it seems
>not right if a group of fragment packages is handled by different worker
>thread. So is there any tricks in VPP  forcing the fragment packages in
>the same group dispatched to the same thread ? Or is it a bug ?
> 
> 
>Thanks in advance.
> 
>Mikado
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11560): https://lists.fd.io/g/vpp-dev/message/11560
Mute This Topic: https://lists.fd.io/mt/28718629/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] how can i del the ipsec Node that generated by ikev2?

2018-12-04 Thread Klement Sekera via Lists.Fd.Io
ikev2 code in vpp is PoC quality, so bugs are to be expected. Feel free
to submit a patch to fix the issue.

You could also try the strongswan solution using this plugin:

https://github.com/matfabia/strongswan/tree/vpp/src/libcharon/plugins/kernel_vpp

Regards,
Klement

Quoting wangchuan...@163.com (2018-12-03 11:20:18)
>#set int state TenGigabitEthernet6/0/0 up
>#set int ip addr TenGigabitEthernet6/0/0 192.168.6.33/24
> 
>#ikev2 profile add pr1
>#ikev2 profile set pr1 auth shared-key-mic string 123456
>#ikev2 profile set pr1 id local fqdn hctel.hubcpe
>#ikev2 profile set pr1 id remote fqdn hctel.cpe
>#ikev2 profile set pr1 traffic-selector local ip-range 0.0.0.0 - 
> 255.255.255.255 port-range 0 - 65535 protocol 0
>#ikev2 profile set pr1 traffic-selector remote ip-range 192.168.122.0 - 
> 192.168.122.255 port-range 0 - 65535 protocol 0
>. after sometime ,  ikev2 negotiate successfully ..
>. ipsec0    appears                                                
>..
>then:
>#ikev2 profile del pr1
>but ipsec0 still exists
> 
>--
> 
>wangchuan...@163.com
> 
>   
>  From: [1]Klement Sekera
>  Date: 2018-12-03 18:06
>  To: [2]wangchuan...@163.com; [3]vpp-dev
>  Subject: Re: [vpp-dev] how can i del the ipsec Node that generated by
>  ikev2?
>  Can you please post the CLI commands which result in such state?
>   
>  Thanks,
>  Klement
>   
>  Quoting wangchuan...@163.com (2018-12-03 04:12:55)
>  >    hi all,
>  >        The ipsec Node generated by Ikev2 still existing after deleting
>  the
>  >    ikev2 profile, how can I delete it?
>  >    Thanks!
>  >
>  >   
>  
> --
>  >
>  >    wangchuan...@163.com
> 
> References
> 
>Visible links
>1. mailto:ksek...@cisco.com
>2. mailto:wangchuan...@163.com
>3. mailto:vpp-dev@lists.fd.io
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11486): https://lists.fd.io/g/vpp-dev/message/11486
Mute This Topic: https://lists.fd.io/mt/28567959/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] how can i del the ipsec Node that generated by ikev2?

2018-12-03 Thread Klement Sekera via Lists.Fd.Io
Can you please post the CLI commands which result in such state?

Thanks,
Klement

Quoting wangchuan...@163.com (2018-12-03 04:12:55)
>hi all,
>    The ipsec Node generated by Ikev2 still existing after deleting the
>ikev2 profile, how can I delete it?
>Thanks!
> 
>--
> 
>wangchuan...@163.com
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11478): https://lists.fd.io/g/vpp-dev/message/11478
Mute This Topic: https://lists.fd.io/mt/28567959/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] ikev2-ipsec-tunnel && NAT-T ?

2018-12-06 Thread Klement Sekera via Lists.Fd.Io
ipsec_sad_add_del_entry API - udp_encap parameter must be set to 1
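
Roughly like this with the C VAPI bindings (sketch only - field and header
names are from memory for the 18.10-era ipsec.api, so check the generated
ipsec.api.vapi.h in your tree; sa_cb stands for whatever reply callback you
provide):

  #include <vapi/ipsec.api.vapi.h>

  vapi_msg_ipsec_sad_add_del_entry *mp =
    vapi_alloc_ipsec_sad_add_del_entry (ctx);
  mp->payload.is_add = 1;
  mp->payload.sad_id = 10;
  mp->payload.spi = 1000;
  mp->payload.udp_encap = 1;	/* <-- enables UDP encapsulation (NAT-T) */
  /* protocol, crypto/integrity algorithms, keys and tunnel addresses are
   * filled in exactly as for any other SA */
  vapi_ipsec_sad_add_del_entry (ctx, mp, sa_cb, NULL);
  vapi_dispatch (ctx);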

Regards,
Klement

Quoting wangchuan...@163.com (2018-12-06 02:16:35)
>hi Klement,
>    which api? Thanks
> 
>--
> 
>wangchuan...@163.com
> 
>   
>  From: [1]Klement Sekera
>  Date: 2018-12-04 18:09
>  To: [2]wangchuan...@163.com
>  Subject: Re: [vpp-dev] ikev2-ipsec-tunnel && NAT-T ?
>  There is an API to enable udp encap, but unless this is called
>  externally, it won't be used.
>   
>  Regards,
>  Klement
>   
>  Quoting wangchuan...@163.com (2018-12-04 02:15:53)
>  >    Hi all,
>  >        Can the ipsec tunnel generated by ikev2 support
>  udp-encap(NAT-T) ?
>  >    How?
>  >    Thanks!
>  >
>  >   
>  
> --
>  >
>  >    wangchuan...@163.com
> 
> References
> 
>Visible links
>1. mailto:ksek...@cisco.com
>2. mailto:wangchuan...@163.com
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11508): https://lists.fd.io/g/vpp-dev/message/11508
Mute This Topic: https://lists.fd.io/mt/28578226/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] ikev2-ipsec-tunnel && NAT-T ?

2018-12-06 Thread Klement Sekera via Lists.Fd.Io
I don't think that the current PoC ikev2 code can do that.

Regards,
Klement

Quoting wangchuan...@163.com (2018-12-06 11:29:12)
>Klement,
>    i saw the ipsec_sad_add_del_entry api , but in my mind, it was used to
>create manual ipsec tunnel .
>how to set the ipsec tunnel negotiated out by ikev2 to be udp-encaped
>using this api?
>can you give me some tips?
>thanks!
> 
>--
> 
>wangchuan...@163.com
> 
>   
>  From: [1]Klement Sekera
>  Date: 2018-12-06 18:16
>  To: [2]wangchuan...@163.com
>  Cc: [3]vpp-dev
>  Subject: Re: Re: [vpp-dev] ikev2-ipsec-tunnel && NAT-T ?
>  ipsec_sad_add_del_entry API - udp_encap parameter must be set to 1
>   
>  Regards,
>  Klement
>   
>  Quoting wangchuan...@163.com (2018-12-06 02:16:35)
>  >    hi Klement,
>  >        which api? Thanks
>  >
>  >   
>  
> --
>  >
>  >    wangchuan...@163.com
>  >
>  >   
>  >  From: [1]Klement Sekera
>  >  Date: 2018-12-04 18:09
>  >  To: [2]wangchuan...@163.com
>  >  Subject: Re: [vpp-dev] ikev2-ipsec-tunnel && NAT-T ?
>  >  There is an API to enable udp encap, but unless this is called
>  >  externally, it won't be used.
>  >   
>  >  Regards,
>  >  Klement
>  >   
>  >  Quoting wangchuan...@163.com (2018-12-04 02:15:53)
>  >  >    Hi all,
>  >  >        Can the ipsec tunnel generated by ikev2 support
>  >  udp-encap(NAT-T) ?
>  >  >    How?
>  >  >    Thanks!
>  >  >
>  >  >   
>  > 
>  
> --
>  >  >
>  >  >    wangchuan...@163.com
>  >
>  > References
>  >
>  >    Visible links
>  >    1. mailto:ksek...@cisco.com
>  >    2. mailto:wangchuan...@163.com
> 
> References
> 
>Visible links
>1. mailto:ksek...@cisco.com
>2. mailto:wangchuan...@163.com
>3. mailto:vpp-dev@lists.fd.io
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11510): https://lists.fd.io/g/vpp-dev/message/11510
Mute This Topic: https://lists.fd.io/mt/28578226/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Python Tests

2019-01-04 Thread Klement Sekera via Lists.Fd.Io
Hi Paul,

I believe it's okay to just add people as reviewers afterwards.

Thanks,
Klement

Quoting Paul Vinciguerra (2019-01-03 18:16:42)
>I am working on a change to cleanup some anti-patterns in the test
>framework code, but the scope is significantly larger.  
>[1]https://gerrit.fd.io/r/#/c/16642/
> 
>Specifically, I'm referring to the use of:
>  except:
>and
>  raise Exception(...)
> 
>This is not in any way a dig against any of the contributors. The pattern
>is recommended in the docs for people to follow.
> 
>Does anyone have any suggestions as to how to best involve the original
>contributors without stalling progress?
>Is adding reviewers from git blame sufficient, or do the original
>contributors prefer to know earlier in the process?   
> 
>Here are some links if anyone is interested:
>[2]https://julien.danjou.info/python-exceptions-guide/
>[3]https://hynek.me/articles/hasattr/
> 
> References
> 
>Visible links
>1. https://gerrit.fd.io/r/#/c/16642/
> https://gerrit.fd.io/r/#/c/16642/
>2. https://julien.danjou.info/python-exceptions-guide/
>3. https://hynek.me/articles/hasattr/
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11834): https://lists.fd.io/g/vpp-dev/message/11834
Mute This Topic: https://lists.fd.io/mt/28925838/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP's C-type-api example

2018-09-13 Thread Klement Sekera via Lists.Fd.Io
You can also check out the test/ext directory for vapi_c_test.c and
vapi_cpp_test.cpp, which are unit tests for these bindings; there is
also an example of a _dump API call.
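
For illustration, a dump call looks roughly like this (sketch only; the
details callback fires once per interface and one final time with is_last
set, field names as generated from interface.api):

  static vapi_error_e
  sw_if_cb (vapi_ctx_t ctx, void *callback_ctx, vapi_error_e rv,
            bool is_last, vapi_payload_sw_interface_details *reply)
  {
    if (!is_last && reply)
      printf ("sw_if_index %u: %s\n", reply->sw_if_index,
              (char *) reply->interface_name);
    return VAPI_OK;
  }

  vapi_msg_sw_interface_dump *dump = vapi_alloc_sw_interface_dump (ctx);
  dump->payload.name_filter_valid = 0;
  vapi_sw_interface_dump (ctx, dump, sw_if_cb, NULL);
  vapi_dispatch (ctx);	/* sw_if_cb fires here for each details message */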

Regards,
Klement

Quoting Ole Troan (2018-09-13 09:33:14)
> Hi again,
> 
> > I am in the beginning of using-c-api.  Should I not follow  
> > (src/vpp-api/client)  ?
> > Can you please show me a fun-name and a example of the higher level C API?
> 
> That’s right, I wouldn’t recommend using the src/vpp-api/client API unless 
> you are building a new language binding.
> 
> If you need a C interface you should use VAPI.
> 
> See interface.api.vapi.h (auto-generated) for sw_interface_dump()
> 
> static inline vapi_error_e vapi_sw_interface_dump(struct vapi_ctx_s *ctx,
>   vapi_msg_sw_interface_dump *msg,
>   vapi_error_e (*callback)(struct vapi_ctx_s *ctx,
>void *callback_ctx,
>vapi_error_e rv,
>bool is_last,
>vapi_payload_sw_interface_details *reply),
>   void *callback_ctx)
> 
> 
> src/vpp-api/vapi/vapi_doc.md for documentation.
> 
> To get a feel of how the API works, at even higher level you can play with 
> the Python language binding.
> 
> Best regards,
> Ole
> 
> 
> 
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#10481): https://lists.fd.io/g/vpp-dev/message/10481
> Mute This Topic: https://lists.fd.io/mt/25510961/675704
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [ksek...@cisco.com]
> -=-=-=-=-=-=-=-=-=-=-=-
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10484): https://lists.fd.io/g/vpp-dev/message/10484
Mute This Topic: https://lists.fd.io/mt/25510961/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP's C-type-api example

2018-09-14 Thread Klement Sekera via Lists.Fd.Io
And what is your vpp cmdline? is vpp running with "my-api-test" api
prefix?

Quoting wangchuan...@163.com (2018-09-14 07:25:06)
>sorry,
>    That my carelessness.  Whole cmd as root  is :   #./test "/my-api"
>"my-api-test"
>And vpp_api_test can connect to vpp.
> 
>--
> 
>wangchuan...@163.com
> 
>   
>  From: [1]Ole Troan
>  Date: 2018-09-14 00:35
>  To: [2]wangchuanguo
>  CC: [3]Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at Cisco);
>  [4]Dave Barach (dbarach); [5]vpp-dev
>  Subject: Re: [vpp-dev] VPP's C-type-api example
>  > 1、as root, install the rpm(vpp-selinux, vpp-lib, vpp-18.04,
>  vpp-plugins),  start service vpp and I come into vppctl.
>  > 2、I copy test/ext/vapi_c_test.c to main.c(a new file, a new dir).
>  >    compile using: gcc -std=gnu99 -g -Wall -pthread 
>  -I/usr/include/ -lvppinfra -lvlibmemoryclient -lsvm -lpthread -lcheck
>  -lrt -lm -lvapiclient -lsubunit main.c -o test
>  > 3、then,    #./test
>  >    But it shows vl_map_shmem:639: region init fail
>   
>  That’s an indication that it cannot connect to VPP.
>  Can vpp_api_test connect?
>   
>  Cheers,
>  Ole
>   
>   
>  >
>  > wangchuan...@163.com
>  > 
>  > From: Ole Troan
>  > Date: 2018-09-13 21:44
>  > To: wangchuan...@163.com
>  > CC: Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at Cisco); Dave
>  Barach (dbarach); vpp-dev
>  > Subject: Re: [vpp-dev] VPP's C-type-api example
>  > > i am be root
>  > 
>  > Then you must provide more details.
>  > 
>  > Cheers,
>  > Ole
>  > 
>  > 
>  > >
>  > > wangchuan...@163.com
>  > >
>  > > From: Ole Troan
>  > > Date: 2018-09-13 21:26
>  > > To: wangchuan...@163.com
>  > > CC: Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at Cisco);
>  Dave Barach (dbarach); vpp-dev
>  > > Subject: Re: [vpp-dev] VPP's C-type-api example
>  > > > But I got some error when using vapi - vapi_connect as "
>  vl_map_shmem:639: region init fail “.
>  > >
>  > > Permission error?
>  > > Run client as root, or configure VPP to set permissions on API
>  shared memory.
>  > >
>  > > In VPP startup:
>  > > api-segment { uid  gid  }
>  > >
>  > > Cheers,
>  > > Ole
>  > >
>  > > > And my mem : MemFree:  220036 kB  |  
>  HugePages_Total: 679   |   HugePages_Free:  627
>  > > > Help please!
>  > > >
>  > > > My code:
>  > > > int main()
>  > > > {
>  > > >   vapi_ctx_t ctx;
>  > > >   vapi_error_e rv = vapi_ctx_alloc ();
>  > > >   vapi_msg_show_version *sv = vapi_alloc_show_version (ctx);
>  > > >   rv = vapi_connect (ctx, app_name, api_prefix,
>  max_outstanding_requests,
>  > > >  response_queue_size, VAPI_MODE_BLOCKING);
>  > > >   rv = vapi_send (ctx, sv);
>  > > >   vapi_msg_show_version_reply *reply;
>  > > >   rv = vapi_recv (ctx, (void **) &reply, NULL, 0, 0);
>  > > >   if(reply != NULL)
>  > > >   printf("ret[%d] program[%s] version[%s] \n build_date[%s]
>  build_directory[%s]\n", reply->payload.retval, reply->payload.program,
>  reply->payload.version, reply->payload.build_date,
>  reply->payload.build_directory);
>  > > >   else
>  > > > printf("show version return none\n");
>  > > >   rv = vapi_disconnect (ctx);
>  > > >   vapi_ctx_free (ctx);
>  > > >  printf("end\n");
>  > > > return 0;
>  > > > }
>  > > >
>  > > > wangchuan...@163.com
>  > > >
>  > > > From: Klement Sekera
>  > > > Date: 2018-09-13 17:02
>  > > > To: Ole Troan; wangchuanguo
>  > > > CC: Dave Barach (dbarach); vpp-dev
>  > > > Subject: Re: [vpp-dev] VPP's C-type-api example
>  > > > You can also check out the test/ext directory for vapi_c_test.c
>  and
>  > > > vapi_cpp_test.cpp, which are unittests for these bindings and
>  there is
>  > > > also an example of _dump API call.
>  > > >
>  > > > Regards,
>  > > > Klement
>  > > >
>  > > > Quoting Ole Troan (2018-09-13 09:33:14)
>  > > > > Hi again,
>  > > > >
>  > > > > > I am in the beginning of using-c-api.  Should I not
>  follow  (src/vpp-api/client)  ?
>  > > > > > Can you please show me a fun-name and a example of the higher
>  level C API?
>  > > > >
>  > > > > That’s right, I wouldn’t recommend using the src/vpp-api/client
>  API unless you are building a new language binding.
>  > > > >
>  > > > > If you need a C interface you should use VAPI.
>  > > > >
>  > > > > See interface.api.vapi.h (auto-generated) for
>  sw_interface_dump()
>  > > > >
>  > > > > static inline vapi_error_e 

Re: [vpp-dev] VPP's C-type-api example

2018-09-18 Thread Klement Sekera via Lists.Fd.Io
Hi,

VAPI is autogenerated from .json files, which are generated from .api
files. The API you are asking about is in
src/vnet/classify/classify.api. See that file's history on when it
appeared. As for your second question - I don't see any `obscure-mask`
nor `clear-param` parameters. If you are asking about semantics of the
API call, that is a question for the author of said API, since VAPI is
generated code.

Regards,
Klement

Quoting wangchuan...@163.com (2018-09-18 10:03:10)
>Hi Klement,
>    I'm sorry to trouble you again.
>First,Have The classify_add_del_session vapi  not exist yet @18.04?
>Second,How can I get the vapi_classify_add_del_table 's obscure-mask from
>clear-param ”l2|l3|l4 +  ip4|ip6 + protocol + dst|src  + tcp|udp  + ... ”
>? 
> (Like CLI_COMMAND-clear-param to mask-value)
>What api can do this?
> 
>--
> 
>wangchuan...@163.com
> 
>   
>  From: [1]Klement Sekera
>  Date: 2018-09-17 20:23
>  To: [2]wangchuan...@163.com; [3]Ole Troan
>  CC: [4]Dave Barach (dbarach); [5]vpp-dev
>  Subject: Re: [vpp-dev] VPP's C-type-api example
>  If you do not consume a message (in this case most probably the reply to
>  vapi_tap_delete), then on disconnect, the client library does that
>  internally
>  and prints a warning about it...
>   
>  Quoting wangchuan...@163.com (2018-09-17 12:00:27)
>  >    Hi Klement,
>  >        Please accept my heartfelt thanks.
>  >    Everything is fine, but one: Calling 'vapi_tap_delete' without any
>  other
>  >    calling would cause some warining or error printed at my terminal: 
>  >    my vpp is 18.04 stable.
>  >    "tapDelete begin 
>  >    tapDelete end
>  >    vl_client_disconnect:313: queue drain: 79
>  >    msg_handler_internal:432: no handler for msg id 79
>  >    The end"
>  >    Thanks again!
>  >
>  >   
>  
> --
>  >
>  >    wangchuan...@163.com
>  >
>  >   
>  >  From: [1]Klement Sekera
>  >  Date: 2018-09-17 15:53
>  >  To: [2]wangchuan...@163.com; [3]Ole Troan
>  >  CC: [4]Dave Barach (dbarach); [5]vpp-dev
>  >  Subject: Re: [vpp-dev] VPP's C-type-api example
>  >  There is no such parameter called `chroot_prefix`. I will assume
>  you are
>  >  asking about `api_prefix`.
>  >   
>  >  Running VPP creates shared memory segments under /dev/shm. By
>  default
>  >  (no prefix provided), these files are called global_vm and
>  vpe-api.
>  >  Multiple VPP instances' names would collide and to be able to run
>  more
>  >  than one VPP, you need to supply unique prefix to 2nd, 3rd,
>  >  etc. VPP instance. This turns shared memory file names for those
>  >  instances to <prefix>-global_vm and <prefix>-vpe-api.
>  >   
>  >  When API bindings are connecting to VPP, they are using these
>  files
>  >  under /dev/shm. Thus, to connect, the same prefix needs to be
>  >  supplied to VPP and client.
>  >   
>  >  It's called api-segment in the startup config.
>  >   
>  > 
>  
> https://wiki.fd.io/view/VPP/Command-line_Arguments#.22api-segment.22_parameters
>  >   
>  >  Quoting wangchuan...@163.com (2018-09-15 11:46:54)
>  >  >    Hi Klement,
>  >  >        I change the vapi_c_test.c and let the 3rd
>  param[chroot_prefix
>  >  ==
>  >  >    NULL] of  'vapi_connect',    and all pass!
>  >  >    I have not quite understand this parameter "chroot_prefix"
>  >   by reading the
>  >  >    code.
>  >  >    Explain briefly ,please?
>  >  >    Thanks a lot!
>  >  >
>  >  >   
>  > 
>  
> --
>  >  >
>  >  >    wangchuan...@163.com
>  >  >
>  >  >   
>  >  >  From: [1]wangchuan...@163.com
>  >  >  Date: 2018-09-15 09:23
>  >  >  To: [2]Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES
>  at
>  >  Cisco);
>  >  >  [3]Ole Troan
>  >  >  CC: [4]Dave Barach (dbarach); [5]vpp-dev
>  >  >  Subject: Re: Re: [vpp-dev] VPP's C-type-api example
>  >  >  Hi Klement,
>  >  >      I do not understande your mean.    VPP cmdline?
>  >  >  I want to make my ext-program who can initialize the
>  running
>  >  vpp-service
>  >  >  not by vppctl-shell
>  >  >  "systemctl start VPP"    without any change for vpp-code
>  and
>  >  >  startup.conf.
>  >  >  Can you describe their relationship?
>  >   

Re: [vpp-dev] VPP's C-type-api example

2018-09-19 Thread Klement Sekera via Lists.Fd.Io
Maybe it's something which the older compiler wasn't able to compile.
Since the VAPI tests are not part of the basic tests, it happens regularly
that something is added to the API files which the vapi compiler can't
handle. The extended tests are not run as part of verify, so it gets
merged anyway ...

On master, it works fine.

Quoting wangchuan...@163.com (2018-09-19 05:46:33)
>Hi Klement,
>    There have "autoreply define classify_add_del_session"  in
>classify.api .
>And /usr/include/vnet/classify/classify.api.h  have the
>"vl_api_classify_add_del_session".
>And the java jar have the "classify_add_del_session" class  too!
>But the classify.api.vapi.h don't have !!!
>Regards,
>Simon 
> 
>--
> 
>wangchuan...@163.com
> 
>   
>  From: [1]Klement Sekera
>  Date: 2018-09-19 03:18
>  To: [2]wangchuan...@163.com; [3]Ole Troan
>  CC: [4]Dave Barach (dbarach); [5]vpp-dev
>  Subject: Re: [vpp-dev] VPP's C-type-api example
>  Hi,
>   
>  VAPI is autogenerated from .json files, which are generated from .api
>  files. The API you are asking about is in
>  src/vnet/classify/classify.api. See that file's history on when it
>  appeared. As for your second question - I don't see any `obscure-mask`
>  nor `clear-param` parameters. If you are asking about semantics of the
>  API call, that is a question for the author of said API, since VAPI is
>  generated code.
>   
>  Regards,
>  Klement
>   
>  Quoting wangchuan...@163.com (2018-09-18 10:03:10)
>  >    Hi Klement,
>  >        I'm sorry to trouble you again.
>  >    First,Have The classify_add_del_session vapi  not exist yet
>  @18.04?
>  >    Second,How can I get the vapi_classify_add_del_table
>  's obscure-mask from
>  >    clear-param ”l2|l3|l4 +  ip4|ip6 + protocol + dst|src  + tcp|udp  +
>  ... ”
>  >    ? 
>  >     (Like CLI_COMMAND-clear-param to mask-value)
>  >    What api can do this?
>  >
>  >   
>  
> --
>  >
>  >    wangchuan...@163.com
>  >
>  >   
>  >  From: [1]Klement Sekera
>  >  Date: 2018-09-17 20:23
>  >  To: [2]wangchuan...@163.com; [3]Ole Troan
>  >  CC: [4]Dave Barach (dbarach); [5]vpp-dev
>  >  Subject: Re: [vpp-dev] VPP's C-type-api example
>  >  If you do not consume a message (in this case most probably the
>  reply to
>  >  vapi_tap_delete), then on disconnect, the client library does
>  that
>  >  internally
>  >  and prints a warning about it...
>  >   
>  >  Quoting wangchuan...@163.com (2018-09-17 12:00:27)
>  >  >    Hi Klement,
>  >  >        Please accept my heartfelt thanks.
>  >  >    Everything is fine, but one: Calling 'vapi_tap_delete'
>  without any
>  >  other
>  >  >    calling would cause some warining or error printed at my
>  terminal: 
>  >  >    my vpp is 18.04 stable.
>  >  >    "tapDelete begin 
>  >  >    tapDelete end
>  >  >    vl_client_disconnect:313: queue drain: 79
>  >  >    msg_handler_internal:432: no handler for msg id 79
>  >  >    The end"
>  >  >    Thanks again!
>  >  >
>  >  >   
>  > 
>  
> --
>  >  >
>  >  >    wangchuan...@163.com
>  >  >
>  >  >   
>  >  >  From: [1]Klement Sekera
>  >  >  Date: 2018-09-17 15:53
>  >  >  To: [2]wangchuan...@163.com; [3]Ole Troan
>  >  >  CC: [4]Dave Barach (dbarach); [5]vpp-dev
>  >  >  Subject: Re: [vpp-dev] VPP's C-type-api example
>  >  >  There is no such parameter called `chroot_prefix`. I will
>  assume
>  >  you are
>  >  >  asking about `api_prefix`.
>  >  >   
>  >  >  Running VPP creates shared memory segments under /dev/shm.
>  By
>  >  default
>  >  >  (no prefix provided), these files are called global_vm and
>  >  vpe-api.
>  >  >  Multiple VPP instances' names would collide and to be able
>  to run
>  >  more
>  >  >  than one VPP, you need to supply unique prefix to 2nd,
>  3rd,
>  >  >  etc. VPP instance. This turns shared memory file names for
>  those
>  >  >  instances to -global_vm and -vpe-api.
>  >  >   
>  >  >  When API bindings are connecting to VPP, they are using
>  these
>  >  files
>  >  >  under /dev/shm. Thus, to connect, the same prefix needs to
>  be
>  >  >  supplied to VPP and client.
>

Re: [vpp-dev] VPP's C-type-api example

2018-09-17 Thread Klement Sekera via Lists.Fd.Io
There is no such parameter called `chroot_prefix`. I will assume you are
asking about `api_prefix`.

Running VPP creates shared memory segments under /dev/shm. By default
(no prefix provided), these files are called global_vm and vpe-api.
Multiple VPP instances' names would collide, so to be able to run more
than one VPP, you need to supply a unique prefix to the 2nd, 3rd,
etc. VPP instance. This turns the shared memory file names for those
instances into <prefix>-global_vm and <prefix>-vpe-api.

When API bindings are connecting to VPP, they are using these files
under /dev/shm. Thus, to connect, the same prefix needs to be
supplied to VPP and client.

It's called api-segment in the startup config.

https://wiki.fd.io/view/VPP/Command-line_Arguments#.22api-segment.22_parameters
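
Putting it together: a client connecting to a vpp instance started with
api-segment { prefix my-api-test } in its startup.conf would look roughly
like this (sketch only - argument values are arbitrary, and newer releases
add a trailing handle_keepalives argument to vapi_connect):

  vapi_ctx_t ctx;
  vapi_ctx_alloc (&ctx);
  /* the prefix string must match the api-segment prefix in startup.conf;
   * pass NULL to use the default /dev/shm/vpe-api segment */
  vapi_error_e rv = vapi_connect (ctx, "my-client", "my-api-test",
                                  64 /* max outstanding requests */,
                                  32 /* response queue size */,
                                  VAPI_MODE_BLOCKING);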

Quoting wangchuan...@163.com (2018-09-15 11:46:54)
>Hi Klement,
>    I change the vapi_c_test.c and let the 3rd param[chroot_prefix ==
>NULL] of  'vapi_connect',    and all pass!
>I have not quite understand this parameter "chroot_prefix"  by reading the
>code.
>Explain briefly ,please?
>Thanks a lot!
> 
>--
> 
>wangchuan...@163.com
> 
>   
>  From: [1]wangchuan...@163.com
>  Date: 2018-09-15 09:23
>  To: [2]Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at Cisco);
>  [3]Ole Troan
>  CC: [4]Dave Barach (dbarach); [5]vpp-dev
>  Subject: Re: Re: [vpp-dev] VPP's C-type-api example
>  Hi Klement,
>      I do not understande your mean.    VPP cmdline?
>  I want to make my ext-program who can initialize the running vpp-service
>  not by vppctl-shell
>  "systemctl start VPP"    without any change for vpp-code and
>  startup.conf.
>  Can you describe their relationship?
>   My executable program name is "test", and how can i  connect to vpp?
> 
>--
> 
>  wangchuan...@163.com
> 
> 
>From: [6]Klement Sekera
>Date: 2018-09-14 19:55
>To: [7]wangchuan...@163.com; [8]Ole Troan
>CC: [9]Dave Barach (dbarach); [10]vpp-dev
>Subject: Re: [vpp-dev] VPP's C-type-api example
>And what is your vpp cmdline? is vpp running with "my-api-test" api
>prefix?
> 
>Quoting wangchuan...@163.com (2018-09-14 07:25:06)
>>    sorry,
>>        That my carelessness.  Whole cmd as root  is :   #./test
>"/my-api"
>>    "my-api-test"
>>    And vpp_api_test can connect to vpp.
>>
>>   
>
> --
>>
>>    wangchuan...@163.com
>>
>>   
>>  From: [1]Ole Troan
>>  Date: 2018-09-14 00:35
>>  To: [2]wangchuanguo
>>  CC: [3]Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at
>Cisco);
>>  [4]Dave Barach (dbarach); [5]vpp-dev
>>  Subject: Re: [vpp-dev] VPP's C-type-api example
>>  > 1、as root, install the rpm(vpp-selinux, vpp-lib, vpp-18.04,
>>  vpp-plugins),  start service vpp and I come into vppctl.
>>  > 2、I copy test/ext/vapi_c_test.c to main.c(a new file, a new
>dir).
>>  >    compile using: gcc -std=gnu99 -g -Wall -pthread 
>>  -I/usr/include/ -lvppinfra -lvlibmemoryclient -lsvm -lpthread
>-lcheck
>>  -lrt -lm -lvapiclient -lsubunit main.c -o test
>>  > 3、then,    #./test
>>  >    But it shows vl_map_shmem:639: region init fail
>>   
>>  That’s an indication that it cannot connect to VPP.
>>  Can vpp_api_test connect?
>>   
>>  Cheers,
>>  Ole
>>   
>>   
>>  >
>>  > wangchuan...@163.com
>>  > 
>>  > From: Ole Troan
>>  > Date: 2018-09-13 21:44
>>  > To: wangchuan...@163.com
>>  > CC: Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at
>Cisco); Dave
>>  Barach (dbarach); vpp-dev
>>  > Subject: Re: [vpp-dev] VPP's C-type-api example
>>  > > i am be root
>>  > 
>>  > Then you must provide more details.
>>  > 
>>  > Cheers,
>>  > Ole
>>  > 
>>  > 
>>  > >
>>  > > wangchuan...@163.com
>>  > >
>>  > > From: Ole Troan
>>  > > Date: 2018-09-13 21:26
>>  > > To: wangchuan...@163.com
>>  > > CC: Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at
>Cisco);
>>  Dave Barach (dbarach); vpp-dev
>>  > > Subject: Re: [vpp-dev] VPP's C-type-api example
>>  > > > But I got some error 

Re: [vpp-dev] VPP's C-type-api example

2018-09-17 Thread Klement Sekera via Lists.Fd.Io
If you do not consume a message (in this case most probably the reply to
vapi_tap_delete), then on disconnect, the client library does that internally
and prints a warning about it...
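
To consume the reply explicitly, something along these lines should do
(sketch only - tap_sw_if_index is a placeholder and the generated names
are assumptions, check the tap *.api.vapi.h header for your release):

  static vapi_error_e
  tap_delete_cb (vapi_ctx_t ctx, void *callback_ctx, vapi_error_e rv,
                 bool is_last, vapi_payload_tap_delete_reply *reply)
  {
    return VAPI_OK;	/* reply consumed - no "no handler for msg id" warning */
  }

  vapi_msg_tap_delete *del = vapi_alloc_tap_delete (ctx);
  del->payload.sw_if_index = tap_sw_if_index;
  vapi_tap_delete (ctx, del, tap_delete_cb, NULL);
  vapi_dispatch (ctx);	/* drains the reply before vapi_disconnect () */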

Quoting wangchuan...@163.com (2018-09-17 12:00:27)
>Hi Klement,
>    Please accept my heartfelt thanks.
>Everything is fine, but one: Calling 'vapi_tap_delete' without any other
>calling would cause some warining or error printed at my terminal: 
>my vpp is 18.04 stable.
>"tapDelete begin 
>tapDelete end
>vl_client_disconnect:313: queue drain: 79
>msg_handler_internal:432: no handler for msg id 79
>The end"
>Thanks again!
> 
>--
> 
>wangchuan...@163.com
> 
>   
>  From: [1]Klement Sekera
>  Date: 2018-09-17 15:53
>  To: [2]wangchuan...@163.com; [3]Ole Troan
>  CC: [4]Dave Barach (dbarach); [5]vpp-dev
>  Subject: Re: [vpp-dev] VPP's C-type-api example
>  There is no such parameter called `chroot_prefix`. I will assume you are
>  asking about `api_prefix`.
>   
>  Running VPP creates shared memory segments under /dev/shm. By default
>  (no prefix provided), these files are called global_vm and vpe-api.
>  Multiple VPP instances' names would collide and to be able to run more
>  than one VPP, you need to supply unique prefix to 2nd, 3rd,
>  etc. VPP instance. This turns shared memory file names for those
>  instances to <prefix>-global_vm and <prefix>-vpe-api.
>   
>  When API bindings are connecting to VPP, they are using these files
>  under /dev/shm. Thus, to connect, the same prefix needs to be
>  supplied to VPP and client.
>   
>  It's called api-segment in the startup config.
>   
>  
> https://wiki.fd.io/view/VPP/Command-line_Arguments#.22api-segment.22_parameters
>   
>  Quoting wangchuan...@163.com (2018-09-15 11:46:54)
>  >    Hi Klement,
>  >        I change the vapi_c_test.c and let the 3rd param[chroot_prefix
>  ==
>  >    NULL] of  'vapi_connect',    and all pass!
>  >    I have not quite understand this parameter "chroot_prefix"
>   by reading the
>  >    code.
>  >    Explain briefly ,please?
>  >    Thanks a lot!
>  >
>  >   
>  
> --
>  >
>  >    wangchuan...@163.com
>  >
>  >   
>  >  From: [1]wangchuan...@163.com
>  >  Date: 2018-09-15 09:23
>  >  To: [2]Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at
>  Cisco);
>  >  [3]Ole Troan
>  >  CC: [4]Dave Barach (dbarach); [5]vpp-dev
>  >  Subject: Re: Re: [vpp-dev] VPP's C-type-api example
>  >  Hi Klement,
>  >      I do not understande your mean.    VPP cmdline?
>  >  I want to make my ext-program who can initialize the running
>  vpp-service
>  >  not by vppctl-shell
>  >  "systemctl start VPP"    without any change for vpp-code and
>  >  startup.conf.
>  >  Can you describe their relationship?
>  >   My executable program name is "test", and how can i  connect to
>  vpp?
>  >
>  >   
>  
> --
>  >
>  >  wangchuan...@163.com
>  >
>  >     
>  >    From: [6]Klement Sekera
>  >    Date: 2018-09-14 19:55
>  >    To: [7]wangchuan...@163.com; [8]Ole Troan
>  >    CC: [9]Dave Barach (dbarach); [10]vpp-dev
>  >    Subject: Re: [vpp-dev] VPP's C-type-api example
>  >    And what is your vpp cmdline? is vpp running with "my-api-test"
>  api
>  >    prefix?
>  >     
>  >    Quoting wangchuan...@163.com (2018-09-14 07:25:06)
>  >    >    sorry,
>  >    >        That my carelessness.  Whole cmd as root  is :  
>  #./test
>  >    "/my-api"
>  >    >    "my-api-test"
>  >    >    And vpp_api_test can connect to vpp.
>  >    >
>  >    >   
>  >   
>  
> --
>  >    >
>  >    >    wangchuan...@163.com
>  >    >
>  >    >   
>  >    >  From: [1]Ole Troan
>  >    >  Date: 2018-09-14 00:35
>  >    >  To: [2]wangchuanguo
>  >    >  CC: [3]Klement Sekera -X (ksekera - PANTHEON
>  TECHNOLOGIES at
>  >    Cisco);
>  >    >  [4]Dave Barach (dbarach); [5]vpp-dev
>  >    >  Subject: Re: [vpp-dev] VPP's C-type-api example
>  >    >  > 1、as root, install the rpm(vpp-selinux, vpp-lib,
>  vpp-18.04,
>  >    >  vpp-plugins),  start service vpp and I come into vppctl.
>  >    >  > 2、I copy test/ext/vapi_c_test.c to main.c(a new file,
>  a 

Re: [vpp-dev] VPP's C-type-api example

2018-09-13 Thread Klement Sekera via Lists.Fd.Io
what is your api prefix set in the vapi_connect call? is vpp using the
same prefix?

Quoting Ole Troan (2018-09-13 18:35:29)
> > 1、as root, install the rpm(vpp-selinux, vpp-lib, vpp-18.04, vpp-plugins),  
> > start service vpp and I come into vppctl.
> > 2、I copy test/ext/vapi_c_test.c to main.c(a new file, a new dir).
> >compile using: gcc -std=gnu99 -g -Wall -pthread  -I/usr/include/ 
> > -lvppinfra -lvlibmemoryclient -lsvm -lpthread -lcheck -lrt -lm -lvapiclient 
> > -lsubunit main.c -o test
> > 3、then,#./test 
> >But it shows vl_map_shmem:639: region init fail 
> 
> That’s an indication that it cannot connect to VPP.
> Can vpp_api_test connect?
> 
> Cheers,
> Ole
> 
> 
> > 
> > wangchuan...@163.com
> >  
> > From: Ole Troan
> > Date: 2018-09-13 21:44
> > To: wangchuan...@163.com
> > CC: Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at Cisco); Dave 
> > Barach (dbarach); vpp-dev
> > Subject: Re: [vpp-dev] VPP's C-type-api example
> > > i am be root 
> >  
> > Then you must provide more details.
> >  
> > Cheers,
> > Ole
> >  
> >  
> > >
> > > wangchuan...@163.com
> > > 
> > > From: Ole Troan
> > > Date: 2018-09-13 21:26
> > > To: wangchuan...@163.com
> > > CC: Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at Cisco); Dave 
> > > Barach (dbarach); vpp-dev
> > > Subject: Re: [vpp-dev] VPP's C-type-api example
> > > > But I got some error when using vapi - vapi_connect as " 
> > > > vl_map_shmem:639: region init fail “.
> > > 
> > > Permission error?
> > > Run client as root, or configure VPP to set permissions on API shared 
> > > memory.
> > > 
> > > In VPP startup:
> > > api-segment { uid  gid  }
> > > 
> > > Cheers,
> > > Ole
> > > 
> > > > And my mem : MemFree:  220036 kB  |   
> > > > HugePages_Total: 679   |   HugePages_Free:  627
> > > > Help please!
> > > >
> > > > My code:
> > > > int main()
> > > > {
> > > >   vapi_ctx_t ctx;
> > > >   vapi_error_e rv = vapi_ctx_alloc ();
> > > >   vapi_msg_show_version *sv = vapi_alloc_show_version (ctx);
> > > >   rv = vapi_connect (ctx, app_name, api_prefix, 
> > > > max_outstanding_requests,
> > > >  response_queue_size, VAPI_MODE_BLOCKING);
> > > >   rv = vapi_send (ctx, sv);
> > > >   vapi_msg_show_version_reply *reply;
> > > > >   rv = vapi_recv (ctx, (void **) &reply, NULL, 0, 0);
> > > >   if(reply != NULL)
> > > >   printf("ret[%d] program[%s] version[%s] \n build_date[%s] 
> > > > build_directory[%s]\n", reply->payload.retval, reply->payload.program, 
> > > > reply->payload.version, reply->payload.build_date, 
> > > > reply->payload.build_directory);
> > > >   else
> > > > printf("show version return none\n");
> > > >   rv = vapi_disconnect (ctx);
> > > >   vapi_ctx_free (ctx);
> > > >  printf("end\n");
> > > > return 0;
> > > > }
> > > >
> > > > wangchuan...@163.com
> > > >
> > > > From: Klement Sekera
> > > > Date: 2018-09-13 17:02
> > > > To: Ole Troan; wangchuanguo
> > > > CC: Dave Barach (dbarach); vpp-dev
> > > > Subject: Re: [vpp-dev] VPP's C-type-api example
> > > > You can also check out the test/ext directory for vapi_c_test.c and
> > > > vapi_cpp_test.cpp, which are unittests for these bindings and there is
> > > > also an example of _dump API call.
> > > >
> > > > Regards,
> > > > Klement
> > > >
> > > > Quoting Ole Troan (2018-09-13 09:33:14)
> > > > > Hi again,
> > > > >
> > > > > > I am in the beginning of using-c-api.  Should I not follow  
> > > > > > (src/vpp-api/client)  ?
> > > > > > Can you please show me a fun-name and a example of the higher level 
> > > > > > C API?
> > > > >
> > > > > That’s right, I wouldn’t recommend using the src/vpp-api/client API 
> > > > > unless you are building a new language binding.
> > > > >
> > > > > If you need a C interface you should use VAPI.
> > > > >
> > > > > See interface.api.vapi.h (auto-generated) for sw_interface_dump()
> > > > >
> > > > > static inline vapi_error_e vapi_sw_interface_dump(struct vapi_ctx_s 
> > > > > *ctx,
> > > > >   vapi_msg_sw_interface_dump *msg,
> > > > >   vapi_error_e (*callback)(struct vapi_ctx_s *ctx,
> > > > >void *callback_ctx,
> > > > >vapi_error_e rv,
> > > > >bool is_last,
> > > > >vapi_payload_sw_interface_details *reply),
> > > > >   void *callback_ctx)
> > > > >
> > > > >
> > > > > src/vpp-api/vapi/vapi_doc.md for documentation.
> > > > >
> > > > > To get a feel of how the API works, at even higher level you can play 
> > > > > with the Python language binding.
> > > > >
> > > > > Best regards,
> > > > > Ole
> > > > >
> > > > >
> > > > >
> > > > > -=-=-=-=-=-=-=-=-=-=-=-
> > > > > Links: You receive all messages sent to this group.
> > > > >
> > > > > View/Reply Online (#10481): 
> > > > > https://lists.fd.io/g/vpp-dev/message/10481
> > > > > Mute This Topic: https://lists.fd.io/mt/25510961/675704
> > > > > Group Owner: 

[vpp-dev] osleap job failing

2019-02-26 Thread Klement Sekera via Lists.Fd.Io
Hello,
I'm facing an issue with the osleap job; it very frequently fails with

12:56:59 'indent' is already installed.
12:56:59 No update candidate for 'indent-2.2.11-lp150.1.5.x86_64'. The
highest available version is already installed.
12:56:59 'python3-rpm-macros' not found in package names. Trying
capabilities.
12:56:59 'python-rpm-macros' providing 'python3-rpm-macros' is already
installed.
12:56:59 'libboost_headers1_68_0-devel-1.68.0' not found in package
names. Trying capabilities.
12:56:59 No provider of 'libboost_headers1_68_0-devel-1.68.0' found.
12:56:59 'libboost_thread1_68_0-devel-1.68.0' not found in package
names. Trying capabilities.
12:56:59 No provider of 'libboost_thread1_68_0-devel-1.68.0' found.
12:56:59 make: *** [Makefile:315: install-dep] Error 104
12:56:59 Build step 'Execute shell' marked build as failure

any idea what's wrong?

Thanks,
Klement
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12343): https://lists.fd.io/g/vpp-dev/message/12343
Mute This Topic: https://lists.fd.io/mt/30140641/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP unit test with multiple workers

2019-03-12 Thread Klement Sekera via Lists.Fd.Io
Hi,

there was an attempt at this some time ago, but the patch was never
merged and the infra has changed a lot in the meantime, so I don't think
it's worth it to rebase it. The patch was to run every existing test in
both single and multi worker scenarios.

You could probably extend the concept of extra configuration options -
two currently exist, extra_vpp_punt_config and extra_vpp_plugin_config,
both of which can be found in framework.py - and introduce a per-test
option to configure vpp with multiple workers as a poor man's hack
solution ...

Thanks,
Klement

Quoting Ranadip Das (2019-03-11 22:01:03)
>I can see that we can create pg- interface for unit testing various vpp
>features. 
>Is it possible to test multiple workers? How can I make sure the packets
>will be sent to different workers on a pg- interface?
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12493): https://lists.fd.io/g/vpp-dev/message/12493
Mute This Topic: https://lists.fd.io/mt/30394272/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] vapi msg send

2019-02-12 Thread Klement Sekera via Lists.Fd.Io
What kind of message are you sending?
You can take a look at /test/ext/vapi_c_test.c (or cpp) for example
code.

Quoting mhemmatp via Lists.Fd.Io (2019-02-12 15:07:12)
> Hello all,
> 
> I am going to use vapi to connect to a plugin in vpp. I am following this
>instruction:
> 
> 1- connect to vpp and create the context (ctx)
> 1- allocating memory through the APIs (i.e., initializing the header of
>the message)
> 2- initializing the payload of the message (msg)
> 3- vapi_send(ctx,msg)
> 
>Actually, I dont receive any ERR from vapi_send() however the message is
>not received to the vpp (I check it by api trace save/dump). Did I miss
>something ?
> 
>Any help is very welcomed.
> 
>Kind Regards,
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12239): https://lists.fd.io/g/vpp-dev/message/12239
Mute This Topic: https://lists.fd.io/mt/29749976/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] vapi msg send

2019-02-13 Thread Klement Sekera via Lists.Fd.Io
Your outlined procedure looks okay.
Maybe share some code?
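
One detail that is easy to miss with the low-level vapi_send()/vapi_recv()
pair is the explicit byte-order conversion - if memory serves, the
generated vapi_<msg>() convenience wrappers do it internally, but with
vapi_send() it is the caller's job. A rough sketch using show_version as a
stand-in for your plugin message (illustrative only, error handling
trimmed):

  vapi_msg_show_version *req = vapi_alloc_show_version (ctx);
  vapi_msg_show_version_hton (req);            /* host -> network order */
  vapi_error_e rv = vapi_send (ctx, req);

  vapi_msg_show_version_reply *resp = NULL;
  size_t size = 0;
  rv = vapi_recv (ctx, (void **) &resp, &size, 0, 0);
  if (VAPI_OK == rv && resp)
    {
      vapi_msg_show_version_reply_ntoh (resp); /* network -> host order */
      printf ("vpp version: %s\n", (char *) resp->payload.version);
      vapi_msg_free (ctx, resp);
    }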

Quoting mhemmatp via Lists.Fd.Io (2019-02-12 22:30:34)
>Klement,  thanks for your reply, our conversation was invisible to group I
>posted it here.
> 
>In general, in the procedure that I sent do you see a missing part to send
>a message (1:1)? To me it is compliant with the example. I am using
>vapi_send() approach.
> 
>Thanks,
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12242): https://lists.fd.io/g/vpp-dev/message/12242
Mute This Topic: https://lists.fd.io/mt/29749976/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] IPsec doesn't work with aes-gcm ciphering

2019-01-30 Thread Klement Sekera via Lists.Fd.Io
Hi,

I see the issue, will come up with a patch shortly.

Thanks,
Klement

Quoting Jan Gelety via Lists.Fd.Io (2019-01-30 10:22:18)
>Hello vpp-dev team,
> 
> 
> 
>Our csit performance tests for Ipsec using aes-gcm ciphering started to
>fail because created ipsec interface cannot get to up state – tested with
>vpp master (build 19.04-rc0~67-g72de626~b6198) as well as with stable/1901
>(build 19.01-rc2~3-g9124874~b2).
> 
> 
> 
>We are using HW crypto card, dpdk plugin is loaded and dpdk backend is
>active ipsec backend for ESP. We are receiving following responses for set
>interface state up command:
> 
> 
> 
>-    in case of AES-GCM-128: set interface state: unsupported
>aes-gcm-128 crypto-alg
> 
>-    in case of AES-GCM-192: set interface state: unsupported none
>integ-alg
> 
> 
> 
>Could you, please, let us know if there is something wrong in our
>configuration (see VAT commands and startup.conf below; it worked before)
>or there is a bug in vpp?
> 
> 
> 
>Thanks,
> 
>Jan
> 
> 
> 
>VAT commands used to configure vpp:
> 
>sw_interface_set_flags sw_if_index 2 admin-up link-up
> 
>sw_interface_set_flags sw_if_index 1 admin-up link-up
> 
>sw_interface_dump
> 
>hw_interface_set_mtu sw_if_index 2 mtu 9200
> 
>hw_interface_set_mtu sw_if_index 1 mtu 9200
> 
>sw_interface_dump
> 
>sw_interface_dump
> 
>sw_interface_dump
> 
>sw_interface_add_del_address sw_if_index 2 192.168.10.1/24
> 
>sw_interface_add_del_address sw_if_index 1 172.168.1.1/24
> 
>ip_neighbor_add_del sw_if_index 2 dst 192.168.10.2 mac 68:05:ca:35:79:1c
> 
>ip_neighbor_add_del sw_if_index 1 dst 172.168.1.2 mac 68:05:ca:35:76:b1
> 
>ip_add_del_route 10.0.0.0/8 via 192.168.10.2  sw_if_index 2
>resolve-attempts 10 count 1   
> 
>ipsec_tunnel_if_add_del local_spi 1 remote_spi 2 crypto_alg
>aes-gcm-192 local_crypto_key
>685857656d48393835654169447a516864314e51447450666352706a remote_crypto_key
>685857656d48393835654169447a516864314e51447450666352706a  local_ip
>172.168.1.1 remote_ip 172.168.1.2
> 
>exec ip route add 20.0.0.0/32 via 172.168.1.2 ipsec0
> 
>exec set interface unnumbered ipsec0 use FortyGigabitEthernet88/0/0
> 
>exec set interface state ipsec0 up
> 
> 
> 
>Our startup.conf:
> 
>ip
> 
>{
> 
>  heap-size 4G
> 
>}
> 
>statseg
> 
>{
> 
>  size 4G
> 
>}
> 
>unix
> 
>{
> 
>  cli-listen /run/vpp/cli.sock
> 
>  log /tmp/vpe.log
> 
>  full-coredump
> 
>  nodaemon
> 
>}
> 
>ip6
> 
>{
> 
>  heap-size 4G
> 
>  hash-buckets 200
> 
>}
> 
>heapsize 4G
> 
>plugins
> 
>{
> 
>  plugin default
> 
>  {
> 
>    disable 
> 
>  }
> 
>  plugin dpdk_plugin.so
> 
>  {
> 
>    enable 
> 
>  }
> 
>}
> 
>cpu
> 
>{
> 
>  corelist-workers 20
> 
>  main-core 19
> 
>}
> 
>dpdk
> 
>{
> 
>  dev :88:00.1
> 
>  dev :88:00.0
> 
>  no-multi-seg
> 
>  uio-driver igb_uio
> 
>  log-level debug
> 
>  dev default
> 
>  {
> 
>    num-rx-desc 2048
> 
>    num-rx-queues 1
> 
>    num-tx-desc 2048
> 
>  }
> 
>  dev :86:01.0
> 
>  socket-mem 1024,1024
> 
>  no-tx-checksum-offload
> 
>}
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12058): https://lists.fd.io/g/vpp-dev/message/12058
Mute This Topic: https://lists.fd.io/mt/29592457/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] IPsec doesn't work with aes-gcm ciphering

2019-01-30 Thread Klement Sekera via Lists.Fd.Io
Can you please try this patch and let me know if it fixes things for you?

https://gerrit.fd.io/r/17163

Thanks,
Klement

Quoting Klement Sekera via Lists.Fd.Io (2019-01-30 11:08:12)
> Hi,
> 
> I see the issue, will come up with a patch shortly.
> 
> Thanks,
> Klement
> 
> Quoting Jan Gelety via Lists.Fd.Io (2019-01-30 10:22:18)
> >Hello vpp-dev team,
> > 
> > 
> > 
> >Our csit performance tests for Ipsec using aes-gcm ciphering started to
> >fail because created ipsec interface cannot get to up state – tested with
> >vpp master (build 19.04-rc0~67-g72de626~b6198) as well as with 
> > stable/1901
> >(build 19.01-rc2~3-g9124874~b2).
> > 
> > 
> > 
> >We are using HW crypto card, dpdk plugin is loaded and dpdk backend is
> >active ipsec backend for ESP. We are receiving following responses for 
> > set
> >interface state up command:
> > 
> > 
> > 
> >-    in case of AES-GCM-128: set interface state: unsupported
> >aes-gcm-128 crypto-alg
> > 
> >-    in case of AES-GCM-192: set interface state: unsupported none
> >integ-alg
> > 
> > 
> > 
> >Could you, please, let us know if there is something wrong in our
> >configuration (see VAT commands and startup.conf below; it worked before)
> >or there is a bug in vpp?
> > 
> > 
> > 
> >Thanks,
> > 
> >Jan
> > 
> > 
> > 
> >VAT commands used to configure vpp:
> > 
> >sw_interface_set_flags sw_if_index 2 admin-up link-up
> > 
> >sw_interface_set_flags sw_if_index 1 admin-up link-up
> > 
> >sw_interface_dump
> > 
> >hw_interface_set_mtu sw_if_index 2 mtu 9200
> > 
> >hw_interface_set_mtu sw_if_index 1 mtu 9200
> > 
> >sw_interface_dump
> > 
> >sw_interface_dump
> > 
> >sw_interface_dump
> > 
> >sw_interface_add_del_address sw_if_index 2 192.168.10.1/24
> > 
> >sw_interface_add_del_address sw_if_index 1 172.168.1.1/24
> > 
> >ip_neighbor_add_del sw_if_index 2 dst 192.168.10.2 mac 68:05:ca:35:79:1c
> > 
> >ip_neighbor_add_del sw_if_index 1 dst 172.168.1.2 mac 68:05:ca:35:76:b1
> > 
> >ip_add_del_route 10.0.0.0/8 via 192.168.10.2  sw_if_index 2
> >resolve-attempts 10 count 1   
> > 
> >ipsec_tunnel_if_add_del local_spi 1 remote_spi 2 crypto_alg
> >aes-gcm-192 local_crypto_key
> >685857656d48393835654169447a516864314e51447450666352706a 
> > remote_crypto_key
> >685857656d48393835654169447a516864314e51447450666352706a  local_ip
> >172.168.1.1 remote_ip 172.168.1.2
> > 
> >exec ip route add 20.0.0.0/32 via 172.168.1.2 ipsec0
> > 
> >exec set interface unnumbered ipsec0 use FortyGigabitEthernet88/0/0
> > 
> >exec set interface state ipsec0 up
> > 
> > 
> > 
> >Our startup.conf:
> > 
> >ip
> > 
> >{
> > 
> >  heap-size 4G
> > 
> >}
> > 
> >statseg
> > 
> >{
> > 
> >  size 4G
> > 
> >}
> > 
> >unix
> > 
> >{
> > 
> >  cli-listen /run/vpp/cli.sock
> > 
> >  log /tmp/vpe.log
> > 
> >  full-coredump
> > 
> >  nodaemon
> > 
> >}
> > 
> >ip6
> > 
> >{
> > 
> >  heap-size 4G
> > 
> >  hash-buckets 200
> > 
> >}
> > 
> >heapsize 4G
> > 
> >plugins
> > 
> >{
> > 
> >  plugin default
> > 
> >  {
> > 
> >    disable 
> > 
> >  }
> > 
> >  plugin dpdk_plugin.so
> > 
> >  {
> > 
> >    enable 
> > 
> >  }
> > 
> >}
> > 
> >cpu
> > 
> >{
> > 
> >  corelist-workers 20
> > 
> >  main-core 19
> > 
> >}
> > 
> >dpdk
> > 
> >{
> > 
> >  dev :88:00.1
> > 
> >  dev :88:00.0
> > 
> >  no-multi-seg
> > 
> >  uio-driver igb_uio
> > 
> >  log-level debug
> > 
> >  dev default
> > 
> >  {
> > 
> >    num-rx-desc 2048
> > 
> >    num-rx-queues 1
> > 
> >    num-tx-desc 2048
> > 
> >  }
> > 
> >  dev :86:01.0
> > 
> >  socket-mem 1024,1024
> > 
> >  no-tx-checksum-offload
> > 
> >}
> 
> 


[vpp-dev] false verified + 1 for gerrit jobs

2019-05-06 Thread Klement Sekera via Lists.Fd.Io
Hi all,

I noticed a job getting a +1 even though some of the builds failed ...

https://gerrit.fd.io/r/#/c/18444/

please note patch set 9 

fd.io JJB
Patch Set 9: Verified-1 Build Failed 
https://jenkins.fd.io/job/vpp-arm-verify-master-ubuntu1804/2459/ :
FAILURE No problems were identified. If you know why this problem
occurred, please add a suitable Cause for it. ( 
https://jenkins.fd.io/job/vpp-arm-verify-master-ubuntu1804/2459/ )
Logs: 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-arm-verify-master-ubuntu1804/2459
 https://jenkins.fd.io/job/vpp-beta-verify-master-ubuntu1804/6891/ :
FAILURE No problems were identified. If you know why this problem
occurred, please add a suitable Cause for it. ( 
https://jenkins.fd.io/job/vpp-beta-verify-master-ubuntu1804/6891/ )
Logs: 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-beta-verify-master-ubuntu1804/6891
 https://jenkins.fd.io/job/vpp-csit-verify-device-master-1n-skx/788/ :
SUCCESS (skipped) Logs: 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-csit-verify-device-master-1n-skx/788
 https://jenkins.fd.io/job/vpp-make-test-docs-verify-master/13041/ :
SUCCESS Logs: 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-make-test-docs-verify-master/13041
 https://jenkins.fd.io/job/vpp-verify-master-ubuntu1604/19027/ :
SUCCESS Logs: 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1604/19027
 https://jenkins.fd.io/job/vpp-verify-master-clang/6731/ : SUCCESS
Logs: 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-clang/6731
 https://jenkins.fd.io/job/vpp-docs-verify-master/15343/ : SUCCESS
Logs: 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-docs-verify-master/15343
 https://jenkins.fd.io/job/vpp-verify-master-centos7/18764/ : NOT_BUILT
Logs: 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-centos7/18764
7:11 PM
fd.io JJB
Patch Set 9: Verified+1 Build Successful 
https://jenkins.fd.io/job/vpp-verify-master-centos7/18764/ : SUCCESS
Logs: 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-centos7/18764
7:15 PM

Thanks,
Klement


Re: [vpp-dev] FD.io - Gerrit 2.16 Changes

2019-07-31 Thread Klement Sekera via Lists.Fd.Io
Hi Vanessa,

I pushed a couple of changes via your suggested

git push origin HEAD:refs/for/master%wip

and I noticed that on all of these the verify jobs were run (or at least
attempted). One of the advantages of draft changes was that they didn't eat
resources on pointless verify jobs.

Thanks,
Klement

From: vpp-dev@lists.fd.io  On Behalf Of Vanessa Valderrama
Sent: Tuesday, July 30, 2019 10:21 PM
To: ci-management-...@lists.fd.io; cicn-...@lists.fd.io; csit-...@lists.fd.io; 
deb-d...@lists.fd.io; disc...@lists.fd.io; dmm-...@lists.fd.io; 
govpp-...@lists.fd.io; hc2...@lists.fd.io; honeycomb-...@lists.fd.io; 
infra-contai...@lists.fd.io; infra-steer...@lists.fd.io; 
nsh_sfc-...@lists.fd.io; odp4vpp-...@lists.fd.io; one-...@lists.fd.io; 
p4vpp-...@lists.fd.io; pma-tools-...@lists.fd.io; puppet-f...@lists.fd.io; 
rpm_d...@lists.fd.io; sweetcomb-...@lists.fd.io; tldk-...@lists.fd.io; 
trex-...@lists.fd.io; t...@lists.fd.io; vpp-dev@lists.fd.io; 
jvpp-...@lists.fd.io; hicn-...@lists.fd.io
Subject: [vpp-dev] FD.io - Gerrit 2.16 Changes


Changes that will happen with Gerrit:

1) The 'New UI' for Gerrit will become the default UI

2) The Draft work flow is removed and replaced with 'Work in Progress'
aka 'WIP' and 'Private' workflows. Unfortunately git-review does not
support either of these workflows directly. Utilizing them means either
pushing your changes the manual way for either system or pushing them up
with git-review and then marking the change via the UI into either of
the workflows.
To push a private change you may do so as follows:
git push origin HEAD:refs/for/master%private

To pull it out of private you may do so as follows:
git push origin HEAD:refs/for/master%remove-private

To push a WIP you may do so as follows:
git push origin HEAD:refs/for/master%wip

To mark it ready for review you may do so as follows:
git push origin HEAD:refs/for/master%ready

Once a change is in either the private or WIP state, it does not switch to
a ready state until the current state has been removed.

In both cases, the state can be set via the UI by selecting the triple-dot
menu on the change and choosing the appropriate option.

To remove WIP state press the 'START REVIEW' button. To remove the
private state you must do so via the menu.

NOTE: We are not moving to Gerrit 3 at this time. That is on the road
map but we need to come to the latest 2.x as we have to do various
migrations that are only available at the 2.16 level before we can move
to Gerrit 3.

Thank you,
Vanessa


Re: [vpp-dev] Build jobs failure with No problems were identified. If you know why this problem occurred, please add a suitable Cause for it.

2020-02-03 Thread Klement Sekera via Lists.Fd.Io
Hi

Click the first link -> console output -> show all

You’ll see test failures.

You can run these tests locally by invoking “make test”.

Once fixed, reupload your change.

HTH..Klement

> On 3 Feb 2020, at 14:44, Shiva Shankar  wrote:
> 
> Hi everyone, 
> I am seeing below error messages during job execution. Any inputs on error 
> message? Issued "recheck" multiple times, but no luck.
> 
> fd.io JJB
> 
> Patch Set 1:
> 
> Build Failed 
> 
> https://jenkins.fd.io/job/vpp-verify-master-ubuntu1804/1947/ : FAILURE
> 
> No problems were identified. If you know why this problem occurred, please 
> add a suitable Cause for it. ( 
> https://jenkins.fd.io/job/vpp-verify-master-ubuntu1804/1947/ )
> 
> Logs: 
> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1804/1947
>   
> 
> Thanks
> Shiva 
> 



Re: [vpp-dev] BFD sends old remote discriminator in its control packet after session goes to DOWN state #vpp

2020-01-17 Thread Klement Sekera via Lists.Fd.Io
Hi,

thank you for your report.

Can you please apply this fix and verify that the behaviour is now correct?

https://gerrit.fd.io/r/c/vpp/+/24388

Thanks,
Klement

> On 17 Jan 2020, at 07:04, sont...@gmail.com wrote:
> 
> Hi,
> 
> I have observed an incorrect behavior in BFD code of VPP.
> I have brought BFD session UP between VPP and peer router.
> Due to interface shutdown on peer router BFD session on VPP goes to DOWN 
> state.
> Once it goes to DOWN state, it is continuously sending control packets using 
> its old remote discriminator in its control packet's "Your Discriminator" 
> field.
> This is wrong behavior. The RFC section below says that once BFD goes DOWN due 
> to non-receipt of a BFD control packet, the "Your Discriminator" field should 
> be set to zero.
> 
> RFC 5880 6.8.1. State Variables
> 
>
>bfd.RemoteDiscr
> 
>   The remote discriminator for this BFD session.  This is the
>   discriminator chosen by the remote system, and is totally opaque
>   to the local system.  This MUST be initialized to zero.  If a
>   period of a Detection Time passes without the receipt of a valid,
>   authenticated BFD packet from the remote system, this variable
>   MUST be set to zero.
> 
> 



Re: [vpp-dev] VRRP Unit Tests failing on Master Ubuntu

2020-03-12 Thread Klement Sekera via Lists.Fd.Io
There is also test/scripts/test-loop.sh which might suit some users better.

Regards,
Klement

> On 12 Mar 2020, at 19:08, Dave Wallace  wrote:
> 
> Hi Matt,
> 
> Your patch [0] verified, Ray +1'd it, and I merged it.
> 
> In my investigation of Naginator retries, I found an unrelated gerrit change 
> [1] with a VRRP test failure [2] which failed the 
> vpp-arm-verify-master-ubuntu1804 job but subsequently passed on both the 
> Naginator retry [3] and the verify of the next patch [4] to that gerrit 
> change.
> 
> This failure occurred on March 02, 2020 prior to the recent timekeeping 
> related changes.
> 
> In case you are not aware, I wrote a bash function [5] which allows iterative 
> running of make test until it encounters a failure. This function has been 
> helpful in tracking down and fixing intermittent test failures in the quic 
> tests which were very hard to reproduce outside of 'make test'. Note in 
> particular that I have seen many more intermittent failures with 'make test' 
> running tests in parallel (make test TEST_JOBS=auto) than when running them 
> serially. Also, the grep (-g) option is most useful for detecting 
> clib_warning() instrumentation of suspected errant conditions in release 
> images.
> 
> Hope this helps,
> -daw-
> 
> [0] https://gerrit.fd.io/r/c/vpp/+/25834
> [1] https://gerrit.fd.io/r/c/vpp/+/25581
> [2] https://gerrit.fd.io/r/c/vpp/+/25581#message-cb3ca555_cb3c5e63
>   
> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-arm-verify-master-ubuntu1804/8899/console-timestamp.log.gz
> [3] https://gerrit.fd.io/r/c/vpp/+/25581#message-b01de4c2_560ef9c6
> [4] https://gerrit.fd.io/r/c/vpp/+/25581#message-d2ecb27d_a9d52cc9
> [5] https://git.fd.io/vpp/tree/extras/bash/functions.bash
> - %< -
> Usage: vpp-make-test [-a][-d][-f][-g ][-r ]  []
>   -a    Run extended tests
>   -d    Run vpp debug image (i.e. with ASSERTS)
>   -f    Testcase is a feature set (e.g. tcp)
>   -g    Text to grep for in log, FAIL on match.
>         Enclose the text in single quotes when it contains any dashes:
>         e.g.  vpp-make-test -g 'goof-bad-' test_xyz
>   -r    Retry Count (default = 100 for individual | 1 for feature)
> - %< -
> 
> 
> On 3/12/2020 12:41 PM, Matthew Smith wrote:
>> Hi Dave,
>> 
>> That sounds fine to me.
>> 
>> Thanks,
>> -Matt
>> 
>> 
>> On Thu, Mar 12, 2020 at 11:32 AM Dave Wallace  wrote:
>> Matt,
>> 
>> I will keep an eye on this gerrit and merge it once the verify jobs have 
>> completed.
>> If there are other tests which fail, are you ok if I add them to this patch 
>> and turn it into a generic 'disable failing tests' gerrit change?
>> 
>> The other possibility is that this is due to the recent disabling of the 
>> Naginator retry plugin.
>> 
>> I'm going to investigate if this issue may have been masked by Naginator...
>> 
>> Thanks for your help on keeping the CI operational!
>> -daw-
>> 
>> On 3/12/2020 12:09 PM, Matthew Smith via Lists.Fd.Io wrote:
>>> 
>>> Change submitted - https://gerrit.fd.io/r/c/vpp/+/25834. Verification jobs 
>>> are running. Hopefully they won't   fail :)
>>> 
>>> -Matt
>>> 
>>> 
>>> On Thu, Mar 12, 2020 at 10:22 AM Matthew Smith via Lists.Fd.Io 
>>>  wrote:
>>> 
>>> I don't have a solution yet, but one observation has popped up quickly
>>> 
>>> In the 2 failed jobs Ray sent links for, one of them had a test fail which 
>>> was not related to VRRP. There is a BFD6 test failure for the NAT change 
>>> https://gerrit.fd.io/r/c/vpp/+/25462:
>>> 
>>> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1804/2678/archives/
>>> 
>>> Looking back through a couple of recent failed runs of that job, there is 
>>> also a DHCP6 PD test failure for rdma change 
>>> https://gerrit.fd.io/r/c/vpp/+/25823:
>>> 
>>> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1804/2682/archives/
>>> 
>>> The most obvious common thread between BFD6, DHCP6 and VRRP to me seems to 
>>> be that they all maintain state which is dependent on timers. There could 
>>> be a more general issue with timing-sensitive tests. I am going to submit a 
>>> change which will prevent the VRRP tests from running temporarily while I 
>>> can figure out a proper solution. Based on the above, other tests may need 
>>> the same treatment.
>>> 
>>> -Matt
>>>
>>> 
>>> 
>>> 
>>> On Thu, Mar 12, 2020 at 8:57 AM Matthew Smith  wrote:
>>> Hi Ray,
>>> 
>>> Thanks for bringing it to my attention. I'll look into it.
>>> 
>>> -Matt
>>> 
>>> 
>>> On Thu, Mar 12, 2020 at 8:24 AM Ray Kinsella  wrote:
>>> Anyone else noticing seeming spurious failures related to the VRRP plugin's 
>>> unit tests.
>>> Some examples from un-related commits.
>>> 
>>> Ray K
>>> 
>>> nat: timed out session scavenging upgrade 
>>> (https://gerrit.fd.io/r/c/vpp/+/25462)
>>> 

Re: [vpp-dev] Storing vlib buffer index for later processing

2020-04-17 Thread Klement Sekera via lists.fd.io
You can reference reassembly code, which does something similar.

e.g. ip4_sv_reass.c
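
A rough sketch of the "store now, enqueue later" pattern (illustrative only -
the 'deferred' vector, 'target_node_index' and the chunking loop are made-up
names/details, not code lifted from ip4_sv_reass.c):

  /* somewhere in your node/plugin state */
  u32 *deferred = 0;		/* vector of buffer indices we keep */

  /* in the node function: keep a buffer for later instead of enqueueing it */
  vec_add1 (deferred, bi0);

  /* later (e.g. from a process node after the 10 s wait): hand the stored
     buffers to the target node in freshly allocated frames */
  while (vec_len (deferred) > 0)
    {
      u32 n = clib_min (vec_len (deferred), VLIB_FRAME_SIZE);
      vlib_frame_t *f = vlib_get_frame_to_node (vm, target_node_index);
      u32 *to_next = vlib_frame_vector_args (f);
      clib_memcpy_fast (to_next, deferred, n * sizeof (u32));
      f->n_vectors = n;
      vlib_put_frame_to_node (vm, target_node_index, f);
      vec_delete (deferred, n, 0);
    }

A buffer kept this way stays owned by your code until it is either enqueued
somewhere or freed with vlib_buffer_free().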

HTH,
Klement

> On 17 Apr 2020, at 18:03, Satya Murthy  wrote:
> 
> Hi ,
> 
> We have a scenario to support, described below, and we would like to know 
> whether what we are doing here is correct or not.
> 
> 1. Our graph node receives a frame with TWO buffers
> 2. Graph node decides to process FIRST buffer and enqueues the packet to a 
> next-node.
> 3. Graph node decides to store the SECOND buffer index to process it after 10 
> sec. Hence, it stores the second buffer index to a vector for later 
> processing.
> 4. Graph node returns a value of 2. (Is this correct, or do we need to 
> return 1, since we want to consume one buffer at a later point in time?)
> 5. After 10 sec, we process the vector of buffer indices and process them to 
> send to another node. 
> 6. While sending to another node, we create a new frame and add this buffer 
> index to that.
>
> Basically, when a graph node wants to consume a buffer at a later point in 
> time, is there anything specific we need to do?
> Or is it enough to just store the buffer index for later use?
> 
> Any inputs on this would help us in our design/implementation.
> 
> -- 
> Thanks & Regards,
> Murthy 



Re: [vpp-dev] Checkstyle script not work in ubuntu

2020-04-18 Thread Klement Sekera via lists.fd.io
clang-format can be tuned to emulate indent - it’s not 100% perfect match, but 
I’ve been using it for some time to format multi-line macros, e.g. pool_foreach 
and it’s been doing a pretty good job. Config file for that is already in vpp 
source tree (vpp/.clang-format) and used as default for cpp code formatting.

> On 18 Apr 2020, at 13:49, Dave Barach via lists.fd.io 
>  wrote:
> 
> +1, this seems like a viable scheme to me.
>  
> We’ll need to configure the underlying indent engine so that newly-indented 
> code looks as much like the rest of the code as possible.
>  
> The result below wouldn’t preclude automatic cherry-picking, but it would 
> make everyone’s head explode, particularly if one’s favorite code editor 
> likes to “fix” such things:
>  
> if (a)
>   {
>     b = 13;
>     c = 12;
>     /* new code */
>     if(d) {
>         e=this_is_new();
>     }
>     /* end new code */
>   }
>  
> Thanks... Dave
>  
> From: Damjan Marion  
> Sent: Saturday, April 18, 2020 5:51 AM
> To: Andrew Yourtchenko 
> Cc: Dave Barach (dbarach) ; Zhang Yuwei 
> ; vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] Checkstyle script not work in ubuntu
>  
>  
> And this is example of script, which just formats modified lines, instead of 
> re-formating whole file, as we do today.
> With something like this, we can introduce new indent or even move to 
> clang-format without the need to reformat old code….
>  
> https://github.com/llvm-mirror/clang/blob/master/tools/clang-format/clang-format-diff.py
>  
> — 
> Damjan
> 
> 
> On 18 Apr 2020, at 11:00, Damjan Marion via lists.fd.io 
>  wrote:
>  
>  
> If we decided to stick with old indent, which i still disagree that is right 
> thing to do, can you just compile indent all the time and 
>  modify path so /opt/vpp/…/bin/ comes first. I really don’t like one more 
> option in the top level Makefile.
>  
> — 
> Damjan
> 
> 
> On 18 Apr 2020, at 10:29, Andrew Yourtchenko  wrote:
>  
> I made https://gerrit.fd.io/r/#/c/vpp/+/22963/ that you can try and see how 
> it works for you.
>  
> It allows to install the “correct” version of indent into the build tree, so 
> the rest of the system is unaffected.
>  
> --a
> 
> 
> On 11 Apr 2020, at 14:04, Dave Barach via lists.fd.io 
>  wrote:
> 
> 
> The script works fine. You have the wrong version of gnu indent installed. 
> This is the version you need:
>  
> $ indent --version
> GNU indent 2.2.11
>  
> From: vpp-dev@lists.fd.io  On Behalf Of Zhang Yuwei
> Sent: Saturday, April 11, 2020 1:04 AM
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] Checkstyle script not work in ubuntu
>  
> Hi Guys,
> Sometimes I find the checkstyle script doesn't work normally in Ubuntu: 
> I run make fixstyle in Ubuntu and submit the code to gerrit, 
> but it still fails in the checkstyle step. I need to move to CentOS to make 
> it work; can anybody check this? Thanks a lot.
>  
> Regards,
> Yuwei
>  
>  
>  
> 



Re: [vpp-dev] Checkstyle script not work in ubuntu

2020-04-18 Thread Klement Sekera via lists.fd.io
If you have a more indent-like config, feel free to update .clang-format in 
the tree …

> On 18 Apr 2020, at 18:05, Christian Hopps  wrote:
> 
> +1 for clang format.
> 
> Regarding the in tree .clang-format, I had to use my own .clang_format 
> settings though as the VPP/C++ version has different tab defaults from the 
> indent default currently used in VPP (never use and 4 space vs always use and 
> 8 space).
> 
> Thanks,
> Chris.
> 
>> On Apr 18, 2020, at 7:55 AM, Klement Sekera via lists.fd.io 
>>  wrote:
>> 
>> clang-format can be tuned to emulate indent - it’s not 100% perfect match, 
>> but I’ve been using it for some time to format multi-line macros, e.g. 
>> pool_foreach and it’s been doing a pretty good job. Config file for that is 
>> already in vpp source tree (vpp/.clang-format) and used as default for cpp 
>> code formatting.
>> 
>>> On 18 Apr 2020, at 13:49, Dave Barach via lists.fd.io 
>>>  wrote:
>>> 
>>> +1, this seems like a viable scheme to me.
>>> 
>>> We’ll need to configure the underlying indent engine so that newly-indented 
>>> code looks as much like the rest of the code as possible.
>>> 
>>> The result below wouldn’t preclude automatic cherry-picking, but it would 
>>> make everyone’s head explode, particularly if one’s favorite code editor 
>>> likes to “fix” such things:
>>> 
>>> if (a)
>>> {
>>>   b = 13;
>>>   c = 12;
>>>   /* new code */
>>>   if(d) {
>>>   e=this_is_new();
>>>   }
>>>   /* end new code */
>>> }
>>> 
>>> Thanks... Dave
>>> 
>>> From: Damjan Marion  
>>> Sent: Saturday, April 18, 2020 5:51 AM
>>> To: Andrew Yourtchenko 
>>> Cc: Dave Barach (dbarach) ; Zhang Yuwei 
>>> ; vpp-dev@lists.fd.io
>>> Subject: Re: [vpp-dev] Checkstyle script not work in ubuntu
>>> 
>>> 
>>> And this is example of script, which just formats modified lines, instead 
>>> of re-formating whole file, as we do today.
>>> With something like this, we can introduce new indent or even move to 
>>> clang-format without the need to reformat old code….
>>> 
>>> https://github.com/llvm-mirror/clang/blob/master/tools/clang-format/clang-format-diff.py
>>> 
>>> — 
>>> Damjan
>>> 
>>> 
>>> On 18 Apr 2020, at 11:00, Damjan Marion via lists.fd.io 
>>>  wrote:
>>> 
>>> 
>>> If we decided to stick with old indent, which i still disagree that is 
>>> right thing to do, can you just compile indent all the time and 
>>> modify path so /opt/vpp/…/bin/ comes first. I really don’t like one more 
>>> option in the top level Makefile.
>>> 
>>> — 
>>> Damjan
>>> 
>>> 
>>> On 18 Apr 2020, at 10:29, Andrew Yourtchenko  wrote:
>>> 
>>> I made https://gerrit.fd.io/r/#/c/vpp/+/22963/ that you can try and see how 
>>> it works for you.
>>> 
>>> It allows to install the “correct” version of indent into the build tree, 
>>> so the rest of the system is unaffected.
>>> 
>>> --a
>>> 
>>> 
>>> On 11 Apr 2020, at 14:04, Dave Barach via lists.fd.io 
>>>  wrote:
>>> 
>>> 
>>> The script works fine. You have the wrong version of gnu indent installed. 
>>> This is the version you need:
>>> 
>>> $ indent --version
>>> GNU indent 2.2.11
>>> 
>>> From: vpp-dev@lists.fd.io  On Behalf Of Zhang Yuwei
>>> Sent: Saturday, April 11, 2020 1:04 AM
>>> To: vpp-dev@lists.fd.io
>>> Subject: [vpp-dev] Checkstyle script not work in ubuntu
>>> 
>>> Hi Guys,
>>>   Sometimes I find the checkstyle script doesn't work normally in Ubuntu: 
>>> I run make fixstyle in Ubuntu and submit the code to gerrit, 
>>> but it still fails in the checkstyle step. I need to move to CentOS to make 
>>> it work; can anybody check this? Thanks a lot.
>>> 
>>> Regards,
>>> Yuwei
>>> 
>>> 
>>> 
>>> 
>> 
>> 
> 



Re: [vpp-dev] NAT ED empty users dump #nat #nat44

2020-05-12 Thread Klement Sekera via lists.fd.io
Alexander,

It seems that fixing the existing API is a huge pain. What we could do is 
implement an API which dumps all the sessions and let the client app sort it 
out. How many sessions do you typically have? Since by default API calls run 
in a stop-the-world state, having millions of sessions will stop all 
forwarding until the dump is complete. Writing it in a thread-safe way is a 
more complicated effort…

Thanks,
Klement

> On 12 May 2020, at 14:10, Alexander Chernavin via lists.fd.io 
>  wrote:
> 
> Klement,
> 
> Basically print statistics and debug info: number of users, what user 
> consumes what number of sessions, what session created for what communication.
> 
> Thanks,
> Alexander 



Re: [vpp-dev] NAT ED empty users dump #nat #nat44

2020-05-12 Thread Klement Sekera via lists.fd.io
Hi Alexander,

Understood. So when you get those sessions, what do you do with them? 

Thanks,
Klement

> On 12 May 2020, at 13:45, Alexander Chernavin via lists.fd.io 
>  wrote:
> 
> Hello Klement,
> 
> I want to list all NAT sessions. In order to do that I used to call 
> VL_API_NAT44_USER_DUMP. After that, I had all users, and I could call 
> VL_API_NAT44_USER_SESSION_DUMP to get sessions for every user.
> 
> Now VL_API_NAT44_USER_DUMP returns nothing in ED mode and I don't know what 
> users are. At the same time, VL_API_NAT44_USER_SESSION_DUMP requires 
> ip_address and vrf_id arguments. So if you don't know users, you cannot get 
> sessions.
> 
> Thanks,
> Alexander 



Re: [vpp-dev] NAT ED empty users dump #nat #nat44

2020-05-12 Thread Klement Sekera via lists.fd.io
Hi Alexander,

thanks for your feedback. The concept of user was really used for quickly 
re-using existing sessions in NAT. With port-overloading, ports are no longer a 
precious resource and dropping “users” means not having to maintain an extra 
hash table.

To better understand your problem, can you please state your use case for 
dumping these sessions?

Thanks,
Klement

> On 12 May 2020, at 13:18, Alexander Chernavin via lists.fd.io 
>  wrote:
> 
> Hello,
> 
> As I understand the "users" concept has been removed from NAT ED and now 
> vl_api_nat44_user_dump_t returns nothing in ED mode. 
> vl_api_nat44_user_session_dump_t returns sessions only if you know the user 
> you are requesting sessions for. But you can't get the user list. Therefore 
> this chain no longer works: dump all users, then dump all sessions of those 
> users.
> 
> I think the user dump code could build the user list based on the sessions, 
> but we need to collect these fields: IP address, VRF id, number of static and 
> dynamic sessions. For a big number of sessions it might be time-consuming 
> before the first user could be sent. Probably, maintaining a user list would 
> be cheaper.
> 
> How do you think vl_api_nat44_user_dump_t can be fixed for NAT ED?
> 
> 



Re: [vpp-dev] NAT ED empty users dump #nat #nat44

2020-05-12 Thread Klement Sekera via lists.fd.io
ED NAT no longer has a “user” concept and it doesn't differentiate one session 
from another (so technically the API works just fine). Even if we wanted to 
recreate that information, it would mean building a hash of “users” on the fly 
for API purposes, doing the dump and then throwing it away. I don't see any 
other way to guarantee that there are no duplicates in the dump, even if we 
didn't care about numbers such as the number of sessions…
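
To make the trade-off concrete, the on-the-fly rebuild would look roughly like
this (illustrative only - struct and field names are taken from nat.h of this
vintage and may need adjusting; and since it would run in an API handler, it
walks all sessions with the workers stopped):

  snat_main_t *sm = &snat_main;
  snat_main_per_thread_data_t *tsm;
  snat_session_t *s;
  uword *user_hash = hash_create (0, sizeof (uword));

  vec_foreach (tsm, sm->per_thread_data)
  {
    pool_foreach (s, tsm->sessions, ({
      u64 key = ((u64) s->in2out.addr.as_u32 << 32) | s->in2out.fib_index;
      uword *p = hash_get (user_hash, key);
      hash_set (user_hash, key, p ? p[0] + 1 : 1); /* per-user session count */
    }));
  }

  /* ... walk user_hash, send one user-details message per entry ... */
  hash_free (user_hash);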

Thanks,
Klement

> On 12 May 2020, at 15:56, Alexander Chernavin via lists.fd.io 
>  wrote:
> 
> Klement,
> 
> I would prefer the existing API working.
> 
> I expect millions of sessions and it's clear that dumping them all is a 
> blocker but during debug, there are not so many of them.
> 
> Thanks,
> Alexander 



Re: [vpp-dev] Unit Test Results in Random directories

2020-03-17 Thread Klement Sekera via Lists.Fd.Io
Hi Neale,

When originally designing this, I opted for `mktemp` as that's the usual way 
to avoid clashes; at the time I assumed this might be used on boxes where 
multiple users would like to run tests in parallel. I didn't expect issues 
like multiple vpps causing problems, and it all kinda stuck.

The way I deal with this is using `less /tmp/vpp-unittest-*/log.txt` (which 
lately requires SANITY=no as it tends to pick up log.txt from the sanity run). 
For a core I use DEBUG=core, which gives me gdb without having to think about 
any of the paths.

Having said all that, I don't see any reason to keep random names. I'm not 
aware of any issues this change might cause…

Regards,
Klement

> On 17 Mar 2020, at 17:56, Neale Ranns via Lists.Fd.Io 
>  wrote:
> 
> 
> Hi All,
> 
> Am I the only one who finds the use of random directories for the unit-tests 
> an unnecessary annoyance?
> 
> I would suggest that random names are not needed for security purposes, since 
> these files do not exist on a field system. Also, all directories are wiped 
> before the next test run so it can't be to support saving runs nor multiple 
> users.
> 
> I find it annoying because I can't just reload log.txt or core in my editor 
> or debugger.
> 
> It's a simple change if there's consensus, or you can just call me an old 
> grump and we can all move on.
> 
> /neale
> 
> 



Re: [vpp-dev] ip full reassembly - is_custom_app field broken ?

2020-09-10 Thread Klement Sekera via lists.fd.io
Hi, 

can you please check out/try out this patch to see if it suits your needs?


https://gerrit.fd.io/r/c/vpp/+/28739


Currently there is no easy way to write a unit test for it, would be nice to 
have it confirmed that it works.

Thanks,
Klement

> On 8 Sep 2020, at 19:34, Satya Murthy  wrote:
> 
> Ok. I think, required changes would be more complex / involved than I 
> initially imagined.
> Can you please make these changes. 
> 
> -- 
> Thanks & Regards,
> Murthy 



Re: [vpp-dev] Q about VPP NAT

2020-09-10 Thread Klement Sekera via lists.fd.io
Hi Nick,

NAT doesn't do any periodic cleanups, relying on an LRU list to reap stale 
sessions. This is by design, as it avoids hiccups in the traffic rate.

The observed behaviour, with the session moving to WAIT-CLOSED and then CLOSED 
state, follows the RFC; no assumption is made about whether the FIN/ACK was 
actually delivered or not.

Regarding RST, I’m looking at https://tools.ietf.org/html/rfc7857#section-2.2, 
which says VPP should wait 4 minutes before deleting the session, which it 
currently doesn’t.

Would you like to take this and make the code change? It should be pretty 
straightforward. I can review.
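
In essence the change boils down to something like this (illustrative pseudo-C
only - the state/helper names below are made up, not actual nat44 symbols):

  if (tcp_flags & TCP_FLAG_RST)
    {
      /* RFC 7857 section 2.2: keep the mapping for ~4 more minutes
         (transitory timeout) instead of dropping it right away */
      nat44_set_session_state (s, NAT44_STATE_TRANSITORY); /* hypothetical */
      s->last_heard = now;
    }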

Thanks,
Klement
 
> On 10 Sep 2020, at 10:53, Nick Zavaritsky  wrote:
> 
> Dear VPP hackers,
> 
> I need your advice concerning configuring and possibly extending the 
> NAT in VPP.
> 
> We are currently using nat44 in endpoint-dependent mode. We are witnessing 
> TCP sessions piling up even though clients close connections gracefully. 
> These lingering sessions are categorised as WAIT-CLOSING by show nat44 
> summary. After a timeout they are considered CLOSED and could get reaped 
> (lazily).
> 
> I suspect that this behaviour is actually correct, since the NAT seeing 
> FIN/ACK passing by doesn't imply that the packets were actually delivered. 
> Please confirm.
> 
> It looks like RST doesn't terminate a NAT session (doesn't put it in 
> WAIT-CLOSING state), are there reasons for that as well?
> 
> Best,
> N 



Re: [vpp-dev] ip full reassembly - is_custom_app field broken ?

2020-09-08 Thread Klement Sekera via lists.fd.io
Hi Satya,

so this is obviously unfinished. Would you mind writing the code for that? You 
just need to steal a piece of code from ip4_sv_reass.c ;-) and then steer your 
packets at the custom node.

/* *INDENT-OFF* */
VLIB_REGISTER_NODE (ip4_sv_reass_custom_node) = {
    .name = "ip4-sv-reassembly-custom-next",
    .vector_size = sizeof (u32),
    .format_trace = format_ip4_sv_reass_trace,
    .n_errors = ARRAY_LEN (ip4_sv_reass_error_strings),
    .error_strings = ip4_sv_reass_error_strings,
    .n_next_nodes = IP4_SV_REASSEMBLY_N_NEXT,
    .next_nodes =
    {
        [IP4_SV_REASSEMBLY_NEXT_INPUT] = "ip4-input",
        [IP4_SV_REASSEMBLY_NEXT_DROP] = "ip4-drop",
        [IP4_SV_REASSEMBLY_NEXT_HANDOFF] = "ip4-sv-reassembly-handoff",
    },
};
/* *INDENT-ON* */

VLIB_NODE_FN (ip4_sv_reass_custom_node) (vlib_main_t * vm,
                                         vlib_node_runtime_t * node,
                                         vlib_frame_t * frame)
{
  return ip4_sv_reass_inline (vm, node, frame, false /* is_feature */ ,
                              false /* is_output_feature */ ,
                              true /* is_custom */ );
}
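
The full reassembly counterpart would then be along the same lines (an
untested sketch - the node name is made up and the registration still needs
the same trace/error/next-node plumbing as the existing ip4_full_reass_node):

  VLIB_REGISTER_NODE (ip4_full_reass_custom_node) = {
      .name = "ip4-full-reassembly-custom",
      .vector_size = sizeof (u32),
      /* reuse .format_trace, .error_strings and .next_nodes from
         ip4_full_reass_node */
  };

  VLIB_NODE_FN (ip4_full_reass_custom_node) (vlib_main_t * vm,
                                             vlib_node_runtime_t * node,
                                             vlib_frame_t * frame)
  {
    return ip4_full_reass_inline (vm, node, frame, false /* is_feature */ ,
                                  true /* is_custom_app */ );
  }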

Thanks,
Klement

> On 8 Sep 2020, at 11:56, Satya Murthy  wrote:
> 
> [Edited Message Follows]
> 
> Hi,
> 
> Looking at the ip4 full reassembly graph nodes for the purpose of punting the 
> fragments and getting the reassembled packets from our custom graph node.
> However, from the code it seems that,  is_custom_app flag is effectively 
> disabled.
> 
> I see that the node function is always getting called with is_custom_app = 
> false.
> 
> VLIB_NODE_FN (ip4_full_reass_node) (vlib_main_t * vm,
> vlib_node_runtime_t * node,
> vlib_frame_t * frame)
> {
>   return ip4_full_reass_inline (vm, node, frame, false /* is_feature */ ,
>                                 false /* is_custom_app */ );  <-- always called with FALSE
> }
> 
> We also observed that sv_reassembly has custom_app functionality. But, we 
> need fully reassembled packet, and hence looking at full_reassembly 
> functionality.
> 
> Is there a way to use this is_custom_app flag functionality  in 
> full_reassembly ? 
> 
> -- 
> Thanks & Regards,
> Murthy 



Re: [vpp-dev] Query on ip4-local / ip4-unicast Feature Arc sequence

2020-09-08 Thread Klement Sekera via lists.fd.io
Hi Satya,

it's also necessary to enable the full reassembly feature along with your 
feature by calling ip4_full_reass_enable_disable_with_refcnt().

You can take a look at NAT code which already does that - 
snat_interface_add_del().
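
Roughly (a sketch only - my_custom_feature_enable_disable is an illustrative
name, error handling is omitted, and the exact prototype is in
ip4_full_reass.h):

  int
  my_custom_feature_enable_disable (u32 sw_if_index, int is_enable)
  {
    int rv;

    /* bump/drop the reassembly refcount together with our own feature */
    rv = ip4_full_reass_enable_disable_with_refcnt (sw_if_index, is_enable);
    if (rv)
      return rv;

    return vnet_feature_enable_disable ("ip4-unicast", "custom_feature",
                                        sw_if_index, is_enable, 0, 0);
  }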

Regards,
Klement

> On 4 Sep 2020, at 14:48, Satya Murthy  wrote:
> 
>  
> Hi ,
>  
> We wanted to have our custom graph node to receive all the IP packets that 
> are destined to our local interfaces.
> However, we want the packets after reassembly is done by the vnet reassembly 
> code, if needed.
>  
> Hence, we added a custom_feature on the ip4-unicast feature-arc as below.
>  
> VNET_FEATURE_INIT (custom_feature, static) =
> {
> .arc_name = "ip4-unicast",
> .node_name = "custom_feature",
> .runs_after = VNET_FEATURES ("ip4-full-reassembly-feature"),
> };
>  
> Also, this feature is enabled on the interface in question.
>  
> After this code, show features verbose is showing as below.
>  
> [16] ip4-unicast:
>   [ 0]: ip4-rx-urpf-loose
>   [ 1]: ip4-rx-urpf-strict
>   [ 2]: svs-ip4
> ..
>   [39]: ip4-full-reassembly-feature
>   [40]: custom_feature
> ..
>  
> But, when the two fragments of a packet are received, they are given to 
> custom_feature node without reassembly.
>  
> show trace is showing as below.
>  
> 00:03:35:565905: ethernet-input
>   frame: flags 0x3, hw-if-index 1, sw-if-index 1
>   IP4: fa:16:3e:e0:70:43 -> fa:16:3e:eb:fb:74
> 00:03:35:565942: ip4-input-no-checksum
>   UDP: 10.10.5.2 -> 10.10.5.1
> tos 0x00, ttl 64, length 1500, checksum 0x36fa dscp CS0 ecn NON_ECN
> fragment id 0x0001, flags MORE_FRAGMENTS
>   UDP: 53 -> 53
> length 2008, checksum 0x0585
> 00:03:35:565968: custom_feature_node
>   custome_feature_node: sw_if_index 1, next_worker 5, buffer 0x4c7cc8
>  
> I wanted to put our custom_feature in ip4-local feature arc, as it seems more 
> logical.
> However, I don't see the reassembly feature in that feature arc.
>  
> Is there any workaround to achieve this functionality, or is there 
> anything wrong that I am doing here?
>  
> Appreciate any inputs regarding this.
>  
> -- 
> Thanks & Regards,
> Murthy 



Re: [vpp-dev] ip full reassembly - is_custom_app field broken ?

2020-09-08 Thread Klement Sekera via lists.fd.io
That’s a very good point. Looking at vnet_buffer, there seems to be some space 
left in it for this case …

  struct
  {
    /* input variables */
    struct
    {
      u32 next_index;       /* index of next node - used by custom apps */
      u32 error_next_index; /* index of next node if error - used by custom apps */
    };
    /* handoff variables */
    struct
    {
      u16 owner_thread_index;
    };
  };

We could put an is_custom flag next to owner_thread_index; if it is set, then 
the handoff code would send to ip4-full-reass-custom instead of ip4-full-reass. 
We will need an fq_custom_index as well to be able to do that.
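
In code, the shape of the idea is roughly this (illustrative only - is_custom
and fq_custom_index are the proposed additions, they don't exist yet; rm
stands for the per-flavour reassembly main struct):

  /* handoff variables */
  struct
  {
    u16 owner_thread_index;
    u8 is_custom;		/* new: set by the custom entry node */
  };

  /* ...and the handoff path would then pick the frame queue accordingly: */
  u32 fq_index = vnet_buffer (b0)->ip.reass.is_custom ?
    rm->fq_custom_index /* new */ : rm->fq_index;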

this would also be needed for ip6-full and ip4-sv flavours … 

would you like to take this or should I write the code?

Thanks,
Klement

> On 8 Sep 2020, at 18:28, Satya Murthy  wrote:
> 
> Thanks Klement for the quick response.
> 
> I can make the changes you suggested. But, one major doubt I have is on the 
> HANDOFF scenario.
> 
> Let's say, as part of custom-reasm-node, if the packet is decided to be 
> handed off to another thread, then the next_node is ALWAYS getting set as 
> "ip4-full-reassembly-handoff". 
> Shouldn't it go to custom-reasm-node ? This also needs to be decided based on 
> "is_custom_app" , right ?
> 
> Please let me know your inputs.
> 
> -- 
> Thanks & Regards,
> Murthy 



Re: [vpp-dev] NAT44 does not work with fragmented ICMP packets

2020-05-26 Thread Klement Sekera via lists.fd.io
Hi Miklos,

Thanks for your message. If is_non_first_fragment is set to true then the 
rewrite will not happen. Can you take a look at what happens in 
ip4_sv_reass_inline for the first packet/fragment?

Setting that flag should be pretty fool-proof:

   498   const u32 fragment_first = ip4_get_fragment_offset_bytes (ip0);
...
   549       vnet_buffer (b0)->ip.reass.is_non_first_fragment =
   550         ! !fragment_first;
...
   619         vnet_buffer (b0)->ip.reass.is_non_first_fragment =
   620           ! !ip4_get_fragment_offset (vlib_buffer_get_current (b0));

Thanks,
Klement

> On 26 May 2020, at 09:25, Miklos Tirpak  wrote:
> 
> Hi,
> 
> we have a scenario where an ICMP packet arrives fragmented over a GTP-U 
> tunnel. The outer IP packets are not fragmented, only the inner ones are. 
> After GTP-U decapsulation, the packets are routed via an interface where 
> NAT44 output-feature is configured.
> 
> In the outgoing packets, the source IP is correctly NATed but the ICMP 
> identifier (port) is not changed. Hence, the NAT session cannot be found for 
> the ICMP reply. This works correctly with smaller packets, the problem is 
> only with fragmented ones.
> 
> I could reproduce this with both VPP 20.01 and master, and could see that 
> ip.reass.is_non_first_fragment is true for every packet. Therefore, 
> icmp_in2out() does not update the ICMP header I think.
> 
> 712  if (!vnet_buffer (b0)->ip.reass.is_non_first_fragment)
> (gdb) p ((vnet_buffer_opaque_t *) (b0)->opaque)->ip.reass
> $17 = {{{next_index = 1056456440, error_next_index = 0}, {owner_thread_index 
> = 270}}, {{{l4_src_port = 16120, 
> l4_dst_port = 16120, tcp_ack_number = 0, save_rewrite_length = 14 
> '\016', ip_proto = 1 '\001', 
> icmp_type_or_tcp_flags = 8 '\b', is_non_first_fragment = 1 '\001', 
> tcp_seq_number = 0}, {estimated_mtu = 16120}}}, {
> fragment_first = 16120, fragment_last = 16120, range_first = 0, 
> range_last = 0, next_range_bi = 17301774, 
> ip6_frag_hdr_offset = 0}}
> 
> The node trace seems to be fine:
>   ... ip4-lookup -> ip4-rewrite -> ip4-sv-reassembly-output-feature -> 
> nat44-in2out-output -> nat44-in2out-output-slowpath
> 
> The NAT session is also correct, it includes the new port:
> 
> DBGvpp# sh nat44 sessions detail
> NAT44 sessions:
>  thread 0 vpp_main: 0 sessions 
>  thread 1 vpp_wk_0: 1 sessions 
>   100.64.100.1: 1 dynamic translations, 0 static translations
> i2o 100.64.100.1 proto icmp port 63550 fib 1
> o2i 172.16.17.2 proto icmp port 16253 fib 0
>index 0
>last heard 44.16
>total pkts 80, total bytes 63040
>dynamic translation
> 
> Do you know if this is a configuration issue or a possible bug? Thank you!
> 
> Thanks,
> Miklos
> 



Re: [vpp-dev] NAT44 does not work with fragmented ICMP packets

2020-05-26 Thread Klement Sekera via lists.fd.io
Thanks! I’ll push a patch.

Regards,
Klement

> On 26 May 2020, at 12:33, Miklos Tirpak  wrote:
> 
> Yes, it works with ip0:
> 
> vnet_buffer (b0)->ip.reass.is_non_first_fragment =
>! !ip4_get_fragment_offset (ip0);
> 
> Thanks,
> Miklos
> From: Klement Sekera -X (ksekera - PANTHEON TECH SRO at Cisco) 
> 
> Sent: Tuesday, May 26, 2020 12:14 PM
> To: Miklós Tirpák 
> Cc: vpp-dev@lists.fd.io 
> Subject: Re: [vpp-dev] NAT44 does not work with fragmented ICMP packets
>  
> CAUTION: This email originated from outside of the organization. Do not click 
> links or open attachments unless you recognize the sender and know the 
> content is safe.
> 
> 
> I think it’s enough if instead of vlib_buffer_get_current(b0) we just use ip0 
> (that already takes save_rewrite_length into consideration). Can you please 
> test with this modification?
> 
> Thanks,
> Klement
> 
> > On 26 May 2020, at 11:51, Miklos Tirpak  wrote:
> >
> > Hi,
> >
> > I think there is a problem in ip4_sv_reass_inline(), it does not consider 
> > ip.save_rewrite_length when it calculates is_non_first_fragment at line 619 
> > (master):
> >vnet_buffer (b0)->ip.reass.is_non_first_fragment =
> >   ! !ip4_get_fragment_offset (vlib_buffer_get_current (b0));
> >
> > Let me open a pull request to fix this.
> >
> > Thanks,
> > Miklos
> > From: vpp-dev@lists.fd.io  on behalf of Miklos Tirpak 
> > via lists.fd.io 
> > Sent: Tuesday, May 26, 2020 9:25 AM
> > To: vpp-dev@lists.fd.io 
> > Subject: [vpp-dev] NAT44 does not work with fragmented ICMP packets
> >
> > CAUTION: This email originated from outside of the organization. Do not 
> > click links or open attachments unless you recognize the sender and know 
> > the content is safe.
> >
> > Hi,
> >
> > we have a scenario where an ICMP packet arrives fragmented over a GTP-U 
> > tunnel. The outer IP packets are not fragmented, only the inner ones are. 
> > After GTP-U decapsulation, the packets are routed via an interface where 
> > NAT44 output-feature is configured.
> >
> > In the outgoing packets, the source IP is correctly NATed but the ICMP 
> > identifier (port) is not changed. Hence, the NAT session cannot be found 
> > for the ICMP reply. This works correctly with smaller packets, the problem 
> > is only with fragmented ones.
> >
> > I could reproduce this with both VPP 20.01 and master, and could see that 
> > ip.reass.is_non_first_fragment is true for every packet. Therefore, 
> > icmp_in2out() does not update the ICMP header I think.
> >
> > 712  if (!vnet_buffer (b0)->ip.reass.is_non_first_fragment)
> > (gdb) p ((vnet_buffer_opaque_t *) (b0)->opaque)->ip.reass
> > $17 = {{{next_index = 1056456440, error_next_index = 0}, 
> > {owner_thread_index = 270}}, {{{l4_src_port = 16120,
> > l4_dst_port = 16120, tcp_ack_number = 0, save_rewrite_length = 14 
> > '\016', ip_proto = 1 '\001',
> > icmp_type_or_tcp_flags = 8 '\b', is_non_first_fragment = 1 '\001', 
> > tcp_seq_number = 0}, {estimated_mtu = 16120}}}, {
> > fragment_first = 16120, fragment_last = 16120, range_first = 0, 
> > range_last = 0, next_range_bi = 17301774,
> > ip6_frag_hdr_offset = 0}}
> >
> > The node trace seems to be fine:
> >   ... ip4-lookup -> ip4-rewrite -> ip4-sv-reassembly-output-feature -> 
> > nat44-in2out-output -> nat44-in2out-output-slowpath
> >
> > The NAT session is also correct, it includes the new port:
> >
> > DBGvpp# sh nat44 sessions detail
> > NAT44 sessions:
> >  thread 0 vpp_main: 0 sessions 
> >  thread 1 vpp_wk_0: 1 sessions 
> >   100.64.100.1: 1 dynamic translations, 0 static translations
> > i2o 100.64.100.1 proto icmp port 63550 fib 1
> > o2i 172.16.17.2 proto icmp port 16253 fib 0
> >index 0
> >last heard 44.16
> >total pkts 80, total bytes 63040
> >dynamic translation
> >
> > Do you know if this is a configuration issue or a possible bug? Thank you!
> >
> > Thanks,
> > Miklos
> > 
> 
> 



Re: [vpp-dev] NAT44 does not work with fragmented ICMP packets

2020-05-26 Thread Klement Sekera via lists.fd.io
I think it’s enough if instead of vlib_buffer_get_current(b0) we just use ip0 
(that already takes save_rewrite_length into consideration). Can you please 
test with this modification?
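
That is, in ip4_sv_reass_inline the line in question would change roughly
like this:

  /* before */
  vnet_buffer (b0)->ip.reass.is_non_first_fragment =
    ! !ip4_get_fragment_offset (vlib_buffer_get_current (b0));

  /* after - ip0 already accounts for save_rewrite_length */
  vnet_buffer (b0)->ip.reass.is_non_first_fragment =
    ! !ip4_get_fragment_offset (ip0);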

Thanks,
Klement

> On 26 May 2020, at 11:51, Miklos Tirpak  wrote:
> 
> Hi,
> 
> I think there is a problem in ip4_sv_reass_inline(), it does not consider 
> ip.save_rewrite_length when it calculates is_non_first_fragment at line 619 
> (master):
>vnet_buffer (b0)->ip.reass.is_non_first_fragment =
>   ! !ip4_get_fragment_offset (vlib_buffer_get_current (b0));
> 
> Let me open a pull request to fix this.
> 
> Thanks,
> Miklos
> From: vpp-dev@lists.fd.io  on behalf of Miklos Tirpak 
> via lists.fd.io 
> Sent: Tuesday, May 26, 2020 9:25 AM
> To: vpp-dev@lists.fd.io 
> Subject: [vpp-dev] NAT44 does not work with fragmented ICMP packets
>  
> CAUTION: This email originated from outside of the organization. Do not click 
> links or open attachments unless you recognize the sender and know the 
> content is safe.
> 
> Hi,
> 
> we have a scenario where an ICMP packet arrives fragmented over a GTP-U 
> tunnel. The outer IP packets are not fragmented, only the inner ones are. 
> After GTP-U decapsulation, the packets are routed via an interface where 
> NAT44 output-feature is configured.
> 
> In the outgoing packets, the source IP is correctly NATed but the ICMP 
> identifier (port) is not changed. Hence, the NAT session cannot be found for 
> the ICMP reply. This works correctly with smaller packets, the problem is 
> only with fragmented ones.
> 
> I could reproduce this with both VPP 20.01 and master, and could see that 
> ip.reass.is_non_first_fragment is true for every packet. Therefore, 
> icmp_in2out() does not update the ICMP header I think.
> 
> 712  if (!vnet_buffer (b0)->ip.reass.is_non_first_fragment)
> (gdb) p ((vnet_buffer_opaque_t *) (b0)->opaque)->ip.reass
> $17 = {{{next_index = 1056456440, error_next_index = 0}, {owner_thread_index 
> = 270}}, {{{l4_src_port = 16120, 
> l4_dst_port = 16120, tcp_ack_number = 0, save_rewrite_length = 14 
> '\016', ip_proto = 1 '\001', 
> icmp_type_or_tcp_flags = 8 '\b', is_non_first_fragment = 1 '\001', 
> tcp_seq_number = 0}, {estimated_mtu = 16120}}}, {
> fragment_first = 16120, fragment_last = 16120, range_first = 0, 
> range_last = 0, next_range_bi = 17301774, 
> ip6_frag_hdr_offset = 0}}
> 
> The node trace seems to be fine:
>   ... ip4-lookup -> ip4-rewrite -> ip4-sv-reassembly-output-feature -> 
> nat44-in2out-output -> nat44-in2out-output-slowpath
> 
> The NAT session is also correct, it includes the new port:
> 
> DBGvpp# sh nat44 sessions detail
> NAT44 sessions:
>  thread 0 vpp_main: 0 sessions 
>  thread 1 vpp_wk_0: 1 sessions 
>   100.64.100.1: 1 dynamic translations, 0 static translations
> i2o 100.64.100.1 proto icmp port 63550 fib 1
> o2i 172.16.17.2 proto icmp port 16253 fib 0
>index 0
>last heard 44.16
>total pkts 80, total bytes 63040
>dynamic translation
> 
> Do you know if this is a configuration issue or a possible bug? Thank you!
> 
> Thanks,
> Miklos
> 



Re: [vpp-dev] Segfault in 'vapi_type_msg_header1_t_ntoh()' with a C++ api client #vpp #vapi

2020-05-28 Thread Klement Sekera via lists.fd.io
Hey,

Swapping the context makes no sense, since the context is for the client - the 
client generates whatever context it wants and vpp just copies it from request 
to response, so that the client can match a response with its request. So 
there is no reason for the client to swap it when sending the message and then 
swap it back when receiving the message. On the other hand, the message ID is 
always in network byte order and thus might need to be swapped.

Do you have or could you produce a piece of code which reproduces the issue, 
please?

Thanks,
Klement

> On 28 May 2020, at 19:14, pashinho1...@gmail.com wrote:
> 
> Hi all,
> 
> So, the problem is encountered with my C++ client when receiving a reply. The 
> strange thing is that this happens only with a specific type of api 
> request-reply.
> Follows the segfault stack trace:
> Thread 1 "sample_plugin_client" hit Breakpoint 1, 
> vapi_type_msg_header1_t_ntoh (h=0x0)
> at /usr/include/vapi/vapi_internal.h:63
> 63        h->_vl_msg_id = be16toh (h->_vl_msg_id);
> (gdb) bt
> #0  vapi_type_msg_header1_t_ntoh (h=0x0) at 
> /usr/include/vapi/vapi_internal.h:63
> #1  0x00406bff in vapi_msg_sample_plugin_session_add_reply_ntoh 
> (msg=0x0) at /usr/include/vapi/sample_plugin.api.vapi.h:1215
> #2  0x00419869 in 
> vapi::vapi_swap_to_host (msg=0x0)
> at /usr/include/vapi/sample_plugin.api.vapi.hpp:260
> #3  0x00419ad3 in 
> vapi::Msg::assign_response 
> (this=0x7fffe1b8,
> resp_id=49, shm_data=0x0) at /usr/include/vapi/vapi.hpp:614
> #4  0x00419797 in vapi::Request vapi_msg_sample_plugin_pfcp_add_reply>::assign_response (
> this=0x7fffe170, id=49, shm_data=0x0) at 
> /usr/include/vapi/vapi.hpp:684
> #5  0x00410b1b in vapi::Connection::dispatch (this=0x7fffe2c0, 
> limit=0x7fffe170, time=5)
> at /usr/include/vapi/vapi.hpp:289
> #6  0x00410d9a in vapi::Connection::dispatch (this=0x7fffe2c0, 
> limit=...)
> at /usr/include/vapi/vapi.hpp:324
> #7  0x00410ddc in vapi::Connection::wait_for_response 
> (this=0x7fffe2c0, req=...)
> at /usr/include/vapi/vapi.hpp:340
> #8  0x0040ba58 in sample_plugin_pfcp_add (vpp_conn=..., msg_pload=...)
> at 
> /root/tmp/vpp/src/plugins/sample_plugin/rubbish/client/src/sample_plugin_client.cpp:213
> #9  0x0040c485 in main () at 
> /root/tmp/vpp/src/plugins/sample_plugin/rubbish/client/src/sample_plugin_client.cpp:388
> Here's the kaboom " h->_vl_msg_id = be16toh (h->_vl_msg_id)", where 'h' is 
> NULL, yikes :O.
> I traced the root cause in the 'vapi::Connection::dispatch()' method, 
> specifically here:
> u32 context = *reinterpret_cast<u32 *> ((static_cast<u8 *> (shm_data) + 
> vapi_get_context_offset (id))); // 'context' here is 218103808 in my case, for 
> example
> const auto x = requests.front();
> matching_req = x;
> if (context == x->context)  // while 'x->context' here is 13, i.e. htonl(13) 
> is 218103808 (endianness inconsistency), so this branch here is not taken
> {
> std::tie (rv, break_dispatch) = x->assign_response (id, shm_data);
> }
> else // this one is taken, i.e. by passing 'nullptr' and subsequently 
> being dereferenced ==> BOOM!
> {
> std::tie (rv, break_dispatch) = x->assign_response (id, nullptr);
> }
> Also, I see REPLY_MACRO doing:
> rmp->_vl_msg_id = htons(REPLY_MSG_ID_BASE + VL_API_blablabla_REPLY);
> rmp->context = mp->context;
> So '_vl_msg_id' gets network byte order, but not so with 'context', why's 
> that? Does this have something to do with the client's resulting segfault?
> Oh, and I'm on top of the latest 'stable/2005'.
> 
> Thank you 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16560): https://lists.fd.io/g/vpp-dev/message/16560
Mute This Topic: https://lists.fd.io/mt/74526467/21656
Mute #vpp: https://lists.fd.io/mk?hashtag=vpp=1480452
Mute #vapi: https://lists.fd.io/mk?hashtag=vapi=1480452
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] NAT44 UDP sessions are not clearing

2020-06-01 Thread Klement Sekera via lists.fd.io
Hi,

as you can see almost all of NAT sessions are timed out. NAT will automatically 
free and reuse them when needed again.

this line:
> udp LRU min session timeout 5175 (now 161589)
hints whether immediate reuse is possible. Minimum session timeout in the LRU 
list for UDP sessions is 5175, while current vpp internal time is 161589. This 
means the first element in LRU list for UDP session is ready to be reaped.

To avoid fluctuations in performance due to running periodic cleanup processes, 
NAT instead attempts to free one session anytime there is a request to create a 
new session. This means that at low steady rate, maximum number of sessions 
will peak at some point. E.g. with UDP timeout of 30 seconds and 100 
sessions/second, after 30 seconds there will be around 3000 sessions and new 
sessions will also start to force cleanups. This will then cause the total 
sessions to remain at around 3000. If you stop creating new traffic, all of 
these eventually time out (without spending any CPU on these timeouts). If 
again after some time you start traffic, sessions will be freed and reused as 
required.
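
Roughly, the idea looks like this (a sketch with made-up names, not the actual
nat44 session allocation code):

/* on every request for a new session, opportunistically reap at most one
 * expired session from the head of the per-thread LRU list */
if (lru_head && lru_head->last_heard + udp_timeout < now)
  nat_free_session (lru_head);   /* slot becomes reusable right away   */
s = nat_alloc_session ();        /* no periodic scan over all sessions */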

Regards,
Klement

> On 31 May 2020, at 22:07, carlito nueno  wrote:
> 
> Hi all,
> 
> I am using vpp v20.05 and running NAT44 in end-point dependent mode.
> 
> To test NAT, I created 50k tcp and udp sessions and ran packets for 5 mins. 
> Then I stopped the test.
> 
> As soon as the test is stopped, tcp established sessions is 0, tcp transitory 
> sessions increase and all of the tcp sessions become 0 after about 7440 
> seconds.
> But UDP sessions are still "open", as the count is still high even after 24 
> hours. As you can see below, udp LRU session timeout is around 161589 and 
> total udp sessions is around 29k
> 
> Any advice? Let me know if I am missing anything.
> 
> NAT44 pool addresses:
> 130.44.9.8
>   tenant VRF independent
>   0 busy other ports
>   29058 busy udp ports
>   0 busy tcp ports
>   0 busy icmp ports
> NAT44 twice-nat pool addresses:
> max translations: 400
> max translations per user: 1000
> udp LRU min session timeout 5175 (now 161589)
> total timed out sessions: 29025
> total sessions: 29058
> total tcp sessions: 0
> total tcp established sessions: 0
> total tcp transitory sessions: 0
> total tcp transitory (WAIT-CLOSED) sessions: 0
> total tcp transitory (CLOSED) sessions: 0
> total udp sessions: 29058
> total icmp sessions: 0
> 
> Thanks!
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16593): https://lists.fd.io/g/vpp-dev/message/16593
Mute This Topic: https://lists.fd.io/mt/74589316/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] NAT44 UDP sessions are not clearing

2020-06-02 Thread Klement Sekera via lists.fd.io
Hi Carlito,

For ED NAT it doesn’t, as ED NAT no longer has any “user” concept. The code for 
different flavours of NAT needs to be split and polished anyway. Idea is to 
have data/code/APIs separate where appropriate.

Thanks,
Klement

> On 2 Jun 2020, at 20:31, Carlito Nueno  wrote:
> 
> Hi Klement,
> 
> Really appreciate the detailed explanation! That makes sense and I could see 
> that behavior from my tests.
> 
> Last question: does "max translations per user" matter any more because the 
> concept of user doesn't exist with new NAT?
> max translations: 400
> max translations per user: 500
> 
> From my tests, each ip address can form as many sessions as needed as long as 
> the overall/total sessions stay under "max translations".
> 
> Thanks!
> 
> On Mon, Jun 1, 2020 at 12:47 AM Klement Sekera -X (ksekera - PANTHEON TECH 
> SRO at Cisco)  wrote:
> Hi,
> 
> as you can see almost all of NAT sessions are timed out. NAT will 
> automatically free and reuse them when needed again.
> 
> this line:
> > udp LRU min session timeout 5175 (now 161589)
> hints whether immediate reuse is possible. Minimum session timeout in the LRU 
> list for UDP sessions is 5175, while current vpp internal time is 161589. 
> This means the first element in LRU list for UDP session is ready to be 
> reaped.
> 
> To avoid fluctuations in performance due to running periodic cleanup 
> processes, NAT instead attempts to free one session anytime there is a 
> request to create a new session. This means that at low steady rate, maximum 
> number of sessions will peak at some point. E.g. with UDP timeout of 30 
> seconds and 100 sessions/second, after 30 seconds there will be around 3000 
> sessions and new sessions will also start to force cleanups. This will then 
> cause the total sessions to remain at around 3000. If you stop creating new 
> traffic, all of these eventually time out (without spending any CPU on these 
> timeouts). If again after some time you start traffic, sessions will be freed 
> and reused as required.
> 
> Regards,
> Klement
> 
> > On 31 May 2020, at 22:07, carlito nueno  wrote:
> > 
> > Hi all,
> > 
> > I am using vpp v20.05 and running NAT44 in end-point dependent mode.
> > 
> > To test NAT, I created 50k tcp and udp sessions and ran packets for 5 mins. 
> > Then I stopped the test.
> > 
> > As soon as the test is stopped, tcp established sessions is 0, tcp 
> > transitory sessions increase and all of the tcp sessions become 0 after 
> > about 7440 seconds.
> > But UDP sessions are still "open", as the count is still high even after 24 
> > hours. As you can see below, udp LRU session timeout is around 161589 and 
> > total udp sessions is around 29k
> > 
> > Any advice? Let me know if I am missing anything.
> > 
> > NAT44 pool addresses:
> > 130.44.9.8
> >   tenant VRF independent
> >   0 busy other ports
> >   29058 busy udp ports
> >   0 busy tcp ports
> >   0 busy icmp ports
> > NAT44 twice-nat pool addresses:
> > max translations: 400
> > max translations per user: 1000
> > udp LRU min session timeout 5175 (now 161589)
> > total timed out sessions: 29025
> > total sessions: 29058
> > total tcp sessions: 0
> > total tcp established sessions: 0
> > total tcp transitory sessions: 0
> > total tcp transitory (WAIT-CLOSED) sessions: 0
> > total tcp transitory (CLOSED) sessions: 0
> > total udp sessions: 29058
> > total icmp sessions: 0
> > 
> > Thanks!
> > 
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16623): https://lists.fd.io/g/vpp-dev/message/16623
Mute This Topic: https://lists.fd.io/mt/74589316/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Query on Inner packet Fragmentation and Reassembly

2020-07-01 Thread Klement Sekera via lists.fd.io
Hi Murthy,

yes it does. The code is in ip4_sv_reass.c and ip4_full_reass.c. The first one is 
shallow reassembly (as in, it knows the 5-tuple for all fragments without having 
to actually reassemble them), the second one is full reassembly.
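
For a quick experiment, full reassembly can be enabled as a feature on the
interface where the decapsulated inner packets show up. A minimal sketch in C
(the arc/node names below are taken from the 20.x-era sources and may differ in
your tree, so treat them as assumptions and cross-check with "show features"):

#include <vnet/feature/feature.h>

/* sketch: turn on IPv4 full reassembly for a given sw_if_index so that
 * inner fragments received on it are reassembled into a single packet */
static int
enable_inner_reassembly (u32 sw_if_index)
{
  return vnet_feature_enable_disable ("ip4-unicast",
                                      "ip4-full-reassembly-feature",
                                      sw_if_index, 1 /* enable */,
                                      0 /* feature config */, 0);
}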

Regards,
Klement

> On 1 Jul 2020, at 14:41, Satya Murthy  wrote:
> 
> Hi ,
> 
> We have a use case, where we receive packets in a tunnel, and the inner 
> packet may be fragments.
> If we want to reassemble the inner fragments and get one single packet, does 
> VPP already have a framework that has this functionality. 
> If it's already there, we can make use of it.
> 
> I saw MAP plugin, but I am not able to see the place where it reassembles 
> ipv4 fragments and outputs one single packet.
> 
> Any inputs/examples please.
> 
> -- 
> Thanks & Regards,
> Murthy 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16851): https://lists.fd.io/g/vpp-dev/message/16851
Mute This Topic: https://lists.fd.io/mt/75234196/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Issue while pushing code for gerrit review

2020-06-24 Thread Klement Sekera via lists.fd.io
Hi,

just amend the commit message and add it by hand. Commit message must contain a 
line like

Type: fix

or 

Type: improvement

or whatever is applicable.

Regards,
Klement

> On 24 Jun 2020, at 10:31, Chinmaya Aggarwal  wrote:
> 
> After I commit my fix, I run the script ./extras/scripts/check_commit_msg.sh 
> and it gives me this error:
> root@ggnlabvm-hnsnfvsdn03:~/vpp# ./extras/scripts/check_commit_msg.sh
> === ERROR ===
> Unknown commit type '' in commit message body.
> Commit message must contain known 'Type:' entry.
> Known types are: feature fix refactor improvement style docs test make
> === ERROR ===
> When i execute "git log", i don't see "Type" for my commit : -
> commit 4599d93c03196b57ad0933a59648dd98faec13b9 (HEAD -> 
> review/chinmaya_agarwal/fix/vpp_topic_74477804)
> Author: Chinmaya Agarwal 
> Date:   Tue Jun 23 12:38:23 2020 +
>
> sr: fix: fix for SID index across segment lists within a sr policy
>
> Signed-off-by: Chinmaya Agarwal 
> Change-Id: Ib9c47391acd952d352e0453a7e7c1c5eabd3ca09
> 
> I executed the below command for committing my change
> git commit -s --amend -m 'sr: fix for SID index across segment lists within a 
> sr policy'
> 
> Can anyone please suggest what is wrong here?
> 
> 
>
> 
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16803): https://lists.fd.io/g/vpp-dev/message/16803
Mute This Topic: https://lists.fd.io/mt/75076228/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Issue while pushing code for gerrit review

2020-06-24 Thread Klement Sekera via lists.fd.io
"Type: fix" needs to go on a separate line of its own in the commit message body, 
not be folded into the subject line. Take a look at other commit messages for 
examples.
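
For example, a commit message in the layout the check script expects (the
trailer values below are placeholders):

    sr: fix for SID index across segment lists within a sr policy

    Type: fix

    Signed-off-by: Your Name <your.email@example.com>
    Change-Id: I<generated by the gerrit commit-msg hook>

The easiest way to get there is a plain "git commit -s --amend" without -m,
editing the message in your editor, so that the subject, the blank line and the
Type: line each stay on their own line.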

> On 24 Jun 2020, at 10:42, Chinmaya Aggarwal  wrote:
> 
> Still not working for me. I executed 
> git commit -s --amend -m 'Type: fix sr: fix for SID index across segment 
> lists within a sr policy'
> 
> But still git log shows
> commit a3c7df5131c9905a209cee017e80fdff6b5b2e6e (HEAD -> 
> review/chinmaya_agarwal/fix/vpp_topic_74477804)
> Author: Chinmaya Agarwal 
> Date:   Tue Jun 23 12:38:23 2020 +
>
> Type: fix sr: fix for SID index across segment lists within a sr policy
>
> Signed-off-by: Chinmaya Agarwal 
> Change-Id: I3c7e840b6ae14a7962eef3bf81bb05901840543b
> 
> What is wrong in this message. Also, can you show us a sample for this?
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16805): https://lists.fd.io/g/vpp-dev/message/16805
Mute This Topic: https://lists.fd.io/mt/75076228/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] NAT44 UDP sessions are not clearing

2020-06-03 Thread Klement Sekera via lists.fd.io
Transitory means that the session has not been fully established.
Transitory (wait-closed) means the session has been established and then closed 
and it’s in transitory timeout, after which it will move to transitory 
(closed). Sessions in this state are not eligible for freeing.
Transitory (closed) means the session has been fully closed and timed out and 
it’s now ready to be freed when needed.

> On 3 Jun 2020, at 07:42, Carlito Nueno  wrote:
> 
> Testing with 30 ip addresses (users) opening around 300 sessions each. 
> 
> When using vpp-20.01 + fixes by you and Filip (before the port overloading 
> patches), total sessions and total transitory sessions were much smaller 
> (around 15062).
> 
> on vpp-20.05 with port overloading
> 
> NAT44 pool addresses:
> 130.44.9.8
>   tenant VRF independent
>   0 busy other ports
>   32 busy udp ports
>   63071 busy tcp ports
>   1 busy icmp ports
> NAT44 twice-nat pool addresses:
> max translations: 400
> max translations per user: 500
> established tcp LRU min session timeout 7792 (now 352)
> transitory tcp LRU min session timeout 294 (now 352)
> udp LRU min session timeout 312 (now 352)
> total timed out sessions: 119312
> total sessions: 128639
> total tcp sessions: 128607
> total tcp established sessions: 9300
> total tcp transitory sessions: 119307
> total tcp transitory (WAIT-CLOSED) sessions: 0
> total tcp transitory (CLOSED) sessions: 0
> total udp sessions: 32
> total icmp sessions: 0
> 
> On Tue, Jun 2, 2020 at 8:42 PM carlito nueno via 
> lists.fd.io wrote:
> Hi Klement,
> 
> Got it.
> 
> Sorry one more question :)
> 
> I did another test and I noticed that tcp transitory sessions increase 
> rapidly when I create new sessions from new internal ip address really fast 
> (without delay). for example:
> tcp sessions are never stopped, so tcp transitory sessions should be 0 at all 
> times.
> 
> from ip address 192.168.1.2
> 
> NAT44 pool addresses:
> 130.44.9.8
>   tenant VRF independent
>   0 busy other ports
>   36 busy udp ports
>   7694 busy tcp ports
>   0 busy icmp ports
> NAT44 twice-nat pool addresses:
> max translations: 400
> max translations per user: 500
> established tcp LRU min session timeout 7842 (now 402)
> udp LRU min session timeout 670 (now 402)
> total timed out sessions: 0
> total sessions: 1203
> total tcp sessions: 1200
> total tcp established sessions: 1200
> total tcp transitory sessions: 0
> total tcp transitory (WAIT-CLOSED) sessions: 0
> total tcp transitory (CLOSED) sessions: 0
> total udp sessions: 3
> total icmp sessions: 0
> 
> added 600 sessions from ip address 192.168.1.3
> 
> NAT44 pool addresses:
> 130.44.9.8
>   tenant VRF independent
>   0 busy other ports
>   36 busy udp ports
>   9395 busy tcp ports
>   0 busy icmp ports
> NAT44 twice-nat pool addresses:
> max translations: 400
> max translations per user: 500
> established tcp LRU min session timeout 7845 (now 405)
> transitory tcp LRU min session timeout 644 (now 405)
> udp LRU min session timeout 670 (now 405)
> total timed out sessions: 0
> total sessions: 2904
> total tcp sessions: 2901
> total tcp established sessions: 1800
> total tcp transitory sessions: 1101
> total tcp transitory (WAIT-CLOSED) sessions: 0
> total tcp transitory (CLOSED) sessions: 0
> total udp sessions: 3
> total icmp sessions: 0
> 
> Thanks!
> 
> On Tue, Jun 2, 2020 at 11:47 AM Klement Sekera -X (ksekera - PANTHEON TECH 
> SRO at Cisco)  wrote:
> Hi Carlito,
> 
> For ED NAT it doesn’t, as ED NAT no longer has any “user” concept. The code 
> for different flavours of NAT needs to be split and polished anyway. Idea is 
> to have data/code/APIs separate where appropriate.
> 
> Thanks,
> Klement
> 
> > On 2 Jun 2020, at 20:31, Carlito Nueno  wrote:
> > 
> > Hi Klement,
> > 
> > Really appreciate the detailed explanation! That makes sense and I could 
> > see that behavior from my tests.
> > 
> > Last question: does "max translations per user" matter any more because the 
> > concept of user doesn't exist with new NAT?
> > max translations: 400
> > max translations per user: 500
> > 
> > From my tests, each ip address can form as many sessions as needed as long 
> > as the overall/total sessions stay under "max translations".
> > 
> > Thanks!
> > 
> > On Mon, Jun 1, 2020 at 12:47 AM Klement Sekera -X (ksekera - PANTHEON TECH 
> > SRO at Cisco)  wrote:
> > Hi,
> > 
> > as you can see almost all of NAT sessions are timed out. NAT will 
> > automatically free and reuse them when needed again.
> > 
> > this line:
> > > udp LRU min session timeout 5175 (now 161589)
> > hints whether immediate reuse is possible. Minimum session timeout in the 
> > LRU list for UDP sessions is 5175, while current vpp internal time is 
> > 161589. This means the first element in LRU list for UDP session is ready 
> > to be reaped.
> > 
> > To avoid fluctuations in performance due to running periodic cleanup 
> > processes, NAT instead attempts to free one session anytime there is a 
> > request to 

Re: [vpp-dev] Vapi causes vpp to crash #vapi

2020-06-09 Thread Klement Sekera via lists.fd.io
Hi,

the issue was caused by a missing memset in the shared memory allocation 
routine. After a few runs, a newly allocated message in shared memory would no 
longer be zero, but random garbage left over from previous messages; this was 
then used by vapi_c_test, leading to the crash.

https://gerrit.fd.io/r/c/vpp/+/27472

should be a fix.
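
The shape of the fix is roughly the following (a sketch only, with generic
names - see the gerrit change above for the real diff):

void *msg = clib_mem_alloc (size);  /* newly allocated shared-memory message */
clib_memset (msg, 0, size);         /* zero it so leftovers from previous
                                       messages cannot leak into unset fields */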

Thanks,
Klement

> On 9 Jun 2020, at 16:28, Florin Coras  wrote:
> 
> Hi, 
> 
> Are you perhaps using a debug vpp image in combination with a release version 
> of vapi? Debug binaries validate allocations whereas release binaries do not 
> and do not initialize the allocation’s “magic” field appropriately. So make 
> sure both binaries are of the same type.
> 
> Regards,
> Florin
> 
>> On Jun 8, 2020, at 11:28 PM, carol1311596...@gmail.com wrote:
>> 
>> When I call vapi_cpp_test or vapi_c_test multiple times to test vapi, vpp 
>> crashes.
>> 
>> vpp version : v20.01.1.0-2~g6d190dd
>> 
>> Following is crash gdb session transcript:
>> 
>> Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
>> 0x76177c0d in ok_magic (m=0xdeaddabe) at 
>> /data/zj/vpp/src/vppinfra/dlmalloc.c:1623
>> 1623return (m->magic == mparams.magic);
>> (gdb) bt
>> #0  0x76177c0d in ok_magic (m=0xdeaddabe) at 
>> /data/zj/vpp/src/vppinfra/dlmalloc.c:1623
>> #1  0x7617fd08 in mspace_free (msp=0x130047010, mem=0x1301c7e40) at 
>> /data/zj/vpp/src/vppinfra/dlmalloc.c:4489
>> #2  0x7617f232 in mspace_put (msp=0x130047010, p_arg=0x1301c7e44) at 
>> /data/zj/vpp/src/vppinfra/dlmalloc.c:4321
>> #3  0x77b95304 in clib_mem_free (p=0x1301c7e44) at 
>> /data/zj/vpp/src/vppinfra/mem.h:238
>> #4  0x77b95df4 in vl_msg_api_free_w_region (vlib_rp=0x130026000, 
>> a=0x1301c7e54) at /data/zj/vpp/src/vlibmemory/memory_shared.c:306
>> #5  0x77b95e38 in vl_msg_api_free (a=0x1301c7e54) at 
>> /data/zj/vpp/src/vlibmemory/memory_shared.c:314
>> #6  0x77bc5072 in vl_msg_api_handler_with_vm_node (am=0x77dd5e60 
>> , vlib_rp=0x130026000, the_msg=0x1301c7e54, 
>> vm=0x768ce480 , node=0x7fffb5c23000,
>> is_private=0 '\000') at /data/zj/vpp/src/vlibapi/api_shared.c:622
>> #7  0x77b93fb9 in void_mem_api_handle_msg_i (am=0x77dd5e60 
>> , vlib_rp=0x130026000, vm=0x768ce480 
>> , node=0x7fffb5c23000, is_private=0 '\000')
>> at /data/zj/vpp/src/vlibmemory/memory_api.c:698
>> #8  0x77b94005 in vl_mem_api_handle_msg_main (vm=0x768ce480 
>> , node=0x7fffb5c23000) at 
>> /data/zj/vpp/src/vlibmemory/memory_api.c:709
>> #9  0x77bafee6 in vl_api_clnt_process (vm=0x768ce480 
>> , node=0x7fffb5c23000, f=0x0) at 
>> /data/zj/vpp/src/vlibmemory/vlib_api.c:327
>> #10 0x7663103f in vlib_process_bootstrap (_a=140736271608784) at 
>> /data/zj/vpp/src/vlib/main.c:1475
>> #11 0x760f0240 in clib_calljmp () at 
>> /data/zj/vpp/src/vppinfra/longjmp.S:123
>> #12 0x7fffb779eba0 in ?? ()
>> #13 0x76631147 in vlib_process_startup (vm=0x76631aeb 
>> , p=0x7fffb779eca0, f=0x) at 
>> /data/zj/vpp/src/vlib/main.c:1497
>> #14 0x189ecf86e039 in ?? ()
>> #15 0x7fffb5c23000 in ?? ()
>> #16 0x7fffb65bfb18 in ?? ()
>> #17 0x7fffb65bf8a8 in ?? ()
>> #18 0x0018 in ?? ()
>> #19 0x7fffb65bfb18 in ?? ()
>> #20 0x7fffb5c23000 in ?? ()
>> #21 0x7fffb77c6764 in ?? ()
>> #22 0x in ?? ()
>> 
>> 
>> Thank you in advance!
>> 
> 
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16703): https://lists.fd.io/g/vpp-dev/message/16703
Mute This Topic: https://lists.fd.io/mt/74769047/21656
Mute #vapi: https://lists.fd.io/g/fdio+vpp-dev/mutehashtag/vapi
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: RES: RES: RES: [vpp-dev] NAT memory usage problem for VPP 20.09 compared to 20.05 due to larger translation_buckets value

2020-11-30 Thread Klement Sekera via lists.fd.io
Hi,

Yes, I think so.
Limiting users in endpoint-dependent mode is not supported, only max sessions.

Thanks,
Klement

> On 28 Nov 2020, at 01:21, Marcos - Mgiga  wrote:
> 
> Hi Klement
> 
> Are those values (max sessions and max users) available in deterministic mode?
> 
> Best regards
> 
> -Mensagem original-
> De: vpp-dev@lists.fd.io  Em nome de Klement Sekera via 
> lists.fd.io
> Enviada em: quinta-feira, 26 de novembro de 2020 18:16
> Para: Marcos - Mgiga 
> Cc: Elias Rudberg ; vpp-dev@lists.fd.io; 
> dmar...@me.com
> Assunto: Re: RES: RES: [vpp-dev] NAT memory usage problem for VPP 20.09 
> compared to 20.05 due to larger translation_buckets value
> 
> Hi,
> 
> memory settings are gone from startup.conf. As I already mentioned, those 
> were pointless anyway as the tables now reside in main heap. Translation hash 
> buckets are calculated automatically based on max sessions and max users 
> parameters.
> 
> Thanks,
> Klement
> 
>> On 26 Nov 2020, at 21:50, Damjan Marion via lists.fd.io 
>>  wrote:
>> 
>> Will leave that to NAT folks to comment… They have multiple tables and 
>> they are two per thread…
>> 
>> —
>> Damjan
>> 
>>> On 26.11.2020., at 20:27, Marcos - Mgiga  wrote:
>>> 
>>> Of course.
>>> 
>>> Since I intend to implement VPP as a deterministic CGN gateway I have some 
>>> parameters regarding to nat config, for example: translation hash buckets, 
>>> translation hash memory , user hash buckets and user hash memory to be 
>>> configured in startup.conf.
>>> 
>>> In this context I would like to know how do I give the right value to those 
>>> parameters.
>>> 
>>> 
>>> Thanks
>>> 
>>> Marcos
>>> 
>>> 
>>> -Mensagem original-
>>> De: vpp-dev@lists.fd.io  Em nome de Damjan 
>>> Marion via lists.fd.io Enviada em: quinta-feira, 26 de novembro de 
>>> 2020 16:17
>>> Para: Marcos - Mgiga 
>>> Cc: Elias Rudberg ; vpp-dev@lists.fd.io
>>> Assunto: Re: RES: [vpp-dev] NAT memory usage problem for VPP 20.09 
>>> compared to 20.05 due to larger translation_buckets value
>>> 
>>> 
>>> Sorry, I don’t understand your question. Can you elaborate further?
>>> 
>>> --
>>> Damjan
>>> 
>>>> On 26.11.2020., at 20:05, Marcos - Mgiga  wrote:
>>>> 
>>>> Hello,
>>>> 
>>>> Taking benefit of the topic, how you suggest to monitor if translation 
>>>> hash bucket value has an appropriate value? What about translation hash 
>>>> memory, user hash buckets and user hash memory ?
>>>> 
>>>> How do I know if I increase or decrease those values?
>>>> 
>>>> Best Regards
>>>> 
>>>> Marcos
>>>> 
>>>> -Mensagem original-
>>>> De: vpp-dev@lists.fd.io  Em nome de Damjan 
>>>> Marion via lists.fd.io Enviada em: quinta-feira, 26 de novembro de 
>>>> 2020 14:53
>>>> Para: Elias Rudberg 
>>>> Cc: vpp-dev@lists.fd.io
>>>> Assunto: Re: [vpp-dev] NAT memory usage problem for VPP 20.09 
>>>> compared to 20.05 due to larger translation_buckets value
>>>> 
>>>> 
>>>> Dear Elias,
>>>> 
>>>> Let me try to explain a bit underlying mechanics.
>>>> Let’s assume your target number of sessions is 10M and we are talking 
>>>> about 16byte key size.
>>>> That means each hash entry (KV) is 24 bytes (16 bytes key and 8 bytes 
>>>> value).
>>>> 
>>>> In the setup you were mentioning, with 1<<20 buckets, your will need to 
>>>> fit 10 KVs into each bucket.
>>>> Initial bihash bucket holds 4 KVs and to accomodate 10 keys (assuming that 
>>>> hash function gives us equal distribution) you will need to grow each 
>>>> bucket 2 times. Growing means doubling bucket size.
>>>> So at the end you will have 1<<20 buckets where each holds 16 KVs.
>>>> 
>>>> Math is:
>>>> 1<<20 * (16 * 24 /* KV size in bytes */  + 8 /*bucket header size*/) Which 
>>>> means 392 MB of memory.
>>>> 
>>>> If you keep target number of 10M sessions, but you increase number of 
> >> buckets to 1 << 22 (which is roughly what the formula below is trying to do) 
>>>> you end up with the following math:
>>>> 
>>>> Math is:
>>>> 1<&

Re: [vpp-dev] NAT memory usage problem for VPP 20.09 compared to 20.05 due to larger translation_buckets value

2020-11-26 Thread Klement Sekera via lists.fd.io
Hi Elias,

the mentioned formula was updated per guidance from Damjan: the optimal number 
of bihash buckets is the number of expected entries divided by 2.5, rounded to 
the closest power of two (which might be higher or lower).
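
As a worked example with the 10M-sessions-per-thread figure from your mail
below: 10,000,000 / 2.5 = 4,000,000, and the closest power of two is 1<<22
(4,194,304) buckets - the same number Damjan's worked example in this thread
arrives at.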

Regarding memory usage: the bihash code has been changed and now uses the main 
heap. Consequently, it ignores the memory setting passed to its init function. 
The old behaviour is still available, but we saw no reason to tinker with it, so 
the NAT code now passes 0 in the memory_size parameter to bihash init.

Thanks,
Klement

> On 26 Nov 2020, at 17:54, Elias Rudberg  wrote:
> 
> Hello VPP experts,
> 
> We are using VPP for NAT44 and are currently looking at how to move
> from VPP 20.05 to 20.09. There are some differences in the way the NAT
> plugin is configured.
> 
> One difficulty for us is the maximum number of sessions allowed, we
> need to handle large numbers of sessions so that limit can be
> important for us. For VPP 20.05 we have used "translation hash buckets
> 1048576" and then the maximum number of sessions per thread becomes 10
> times that because of this line in the source code in snat_config():
> 
> sm->max_translations = 10 * translation_buckets;
> 
> So then we got a limit of about 10 million sessions per thread, which
> we have been happy with so far.
> 
> With VPP 20.09 however, things have changed so that the maximum number
> of sessions is now configured explicitly, and the relationship between
> max_translations_per_thread and translation_buckets is no longer a
> factor of 10 but instead given by the nat_calc_bihash_buckets()
> function:
> 
> static u32
> nat_calc_bihash_buckets (u32 n_elts)
> {
>  return 1 << (max_log2 (n_elts >> 1) + 1);
> }
> 
> The above function corresponds to a factor of somewhere between 1 and
> 2 instead of 10. So, if I understood this correctly, for a given
> maximum number of sessions, the corresponding translation_buckets
> value will be something like 5 to 10 times larger in VPP 20.09
> compared to how it was in VPP 20.05, leading to significantly
> increased memory requirement given that we want to have the same
> maximum number of sessions as before.
> 
> It seems a little strange that the translation_buckets value would
> change so much between VPP versions, was that change intentional? The
> old relationship "max_translations = 10 * translation_buckets" seems
> to have worked well in practice, at least for our use case.
> 
> What could we do to get around this, if we want to switch to VPP 20.09
> but without reducing the maximum number of sessions? If we were to
> simply divide the nat_calc_bihash_buckets() value by 8 or so to make
> it more similar to how it was earlier, would that lead to other
> problems?
> 
> Best regards,
> Elias
> 
> 
> 


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#18161): https://lists.fd.io/g/vpp-dev/message/18161
Mute This Topic: https://lists.fd.io/mt/78533277/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VPP hanging and running out of memory due to infinite loop related to nat44-hairpinning

2020-12-03 Thread Klement Sekera via lists.fd.io
I can’t see how that could help as this is not a fragmented packet … ?

Thanks,
Klement

> On 2 Dec 2020, at 23:57, Mrityunjay Kumar  wrote:
> 
> Guys, I am not sure whether my input is helpful, but the same issue was 
> triggered for me and we resolved it in a different way. 
> In your packet trace, it seems the packet is hitting the vpp node 
> ip4-sv-reassembly-feature. I suggest you first try to enable reassembly on the 
> interface:
> 
> set interface reassembly  on
> 
> Some experts may say that reassembly is enabled on the interface by default, 
> but it's not. 
> 
> Try it once and let us know if you are still facing problems.
> 
>  
> Regards,
> Mrityunjay Kumar.
> Mobile: +91 - 9731528504
> 
> 
> 
> On Wed, Dec 2, 2020 at 9:10 PM Elias Rudberg  
> wrote:
> Hello VPP experts,
> 
> For our NAT44 usage of VPP we have encountered a problem with VPP
> running out of memory, which now, after much headache and many out-of-
> memory crashes over the past several months, has turned out to be
> caused by an infinite loop where VPP gets stuck repeating the three
> nodes ip4-lookup, ip4-local and nat44-hairpinning. A single packet gets
> passed around and around between those three nodes, eating more and
> more memory which causes that worker thread to get stuck and VPP to run
> out of memory after a few seconds. (Earlier we speculated that it was
> due to a memory leak but now it seems it was not.)
> 
> This concerns the current master branch as well as the stable/2009
> branch and earlier VPP versions.
> 
> One scenario when this happens is when a UDP (or TCP) packet is sent
> from a client on the inside with a destination IP address that matches
> an existing static NAT mapping that maps that IP address on the inside
> to the same IP address on the outside.
> 
> Then, the problem can be triggered for example by doing this from a
> client on the inside, where DESTINATION_IP is the IP address of such a
> static mapping:
> 
> echo hello > /dev/udp/$DESTINATION_IP/3
> 
> Here is the packet trace for the thread that receives the packet at
> rdma-input:
> 
> --
> 
> Packet 42
> 
> 00:03:07:636840: rdma-input
>   rdma: Interface179 (4) next-node bond-input l2-ok l3-ok l4-ok ip4 udp
> 00:03:07:636841: bond-input
>   src d4:6a:35:52:30:db, dst 02:fe:8d:23:60:a7, Interface179 ->
> BondEthernet0
> 00:03:07:636843: ethernet-input
>   IP4: d4:6a:35:52:30:db -> 02:fe:8d:23:60:a7 802.1q vlan 1013
> 00:03:07:636844: ip4-input
>   UDP: SOURCE_IP_INSIDE -> DESTINATION_IP
> tos 0x00, ttl 63, length 34, checksum 0xe7e3 dscp CS0 ecn NON_ECN
> fragment id 0x50fe, flags DONT_FRAGMENT
>   UDP: 48824 -> 3
> length 14, checksum 0x781e
> 00:03:07:636846: ip4-sv-reassembly-feature
>   [not-fragmented]
> 00:03:07:636847: nat44-in2out-worker-handoff
>   NAT44_IN2OUT_WORKER_HANDOFF : next-worker 8 trace index 41
> 
> --
> 
> So it is doing handoff to thread 8 with trace index 41. Nothing wrong
> so far, I think.
> 
> Here is the beginning of the corresponding packet trace for the
> receiving thread:
> 
> --
> 
> Packet 57
> 
> 00:03:07:636850: handoff_trace
>   HANDED-OFF: from thread 7 trace index 41
> 00:03:07:636850: nat44-in2out
>   NAT44_IN2OUT_FAST_PATH: sw_if_index 6, next index 3, session -1
> 00:03:07:636855: nat44-in2out-slowpath
>   NAT44_IN2OUT_SLOW_PATH: sw_if_index 6, next index 0, session 11
> 00:03:07:636927: ip4-lookup
>   fib 0 dpo-idx 577 flow hash: 0x
>   UDP: SOURCE_IP_OUTSIDE -> DESTINATION_IP
> tos 0x00, ttl 63, length 34, checksum 0x5eee dscp CS0 ecn NON_ECN
> fragment id 0x50fe, flags DONT_FRAGMENT
>   UDP: 63957 -> 3
> length 14, checksum 0xb40b
> 00:03:07:636930: ip4-local
> UDP: SOURCE_IP_OUTSIDE -> DESTINATION_IP
>   tos 0x00, ttl 63, length 34, checksum 0x5eee dscp CS0 ecn NON_ECN
>   fragment id 0x50fe, flags DONT_FRAGMENT
> UDP: 63957 -> 3
>   length 14, checksum 0xb40b
> 00:03:07:636932: nat44-hairpinning
>   new dst addr DESTINATION_IP port 3 fib-index 0 is-static-mapping
> 00:03:07:636934: ip4-lookup
>   fib 0 dpo-idx 577 flow hash: 0x
>   UDP: SOURCE_IP_OUTSIDE -> DESTINATION_IP
> tos 0x00, ttl 63, length 34, checksum 0x5eee dscp CS0 ecn NON_ECN
> fragment id 0x50fe, flags DONT_FRAGMENT
>   UDP: 63957 -> 3
> length 14, checksum 0xb40b
> 00:03:07:636936: ip4-local
> UDP: SOURCE_IP_OUTSIDE -> DESTINATION_IP
>   tos 0x00, ttl 63, length 34, checksum 0x5eee dscp CS0 ecn NON_ECN
>   fragment id 0x50fe, flags DONT_FRAGMENT
> UDP: 63957 -> 3
>   length 14, checksum 0xb40b
> 00:03:07:636937: nat44-hairpinning
>   new dst addr DESTINATION_IP port 3 fib-index 0 is-static-mapping
> 00:03:07:636937: ip4-lookup
>   fib 0 dpo-idx 577 flow hash: 0x
>   UDP: SOURCE_IP_OUTSIDE -> DESTINATION_IP
> tos 0x00, ttl 63, length 34, checksum 0x5eee dscp CS0 ecn NON_ECN
> fragment id 0x50fe, flags DONT_FRAGMENT
>   UDP: 63957 -> 3
> length 14, checksum 

Re: RES: RES: [vpp-dev] NAT memory usage problem for VPP 20.09 compared to 20.05 due to larger translation_buckets value

2020-11-26 Thread Klement Sekera via lists.fd.io
Hi,

memory settings are gone from startup.conf. As I already mentioned, those were 
pointless anyway as the tables now reside in main heap. Translation hash 
buckets are calculated automatically based on max sessions and max users 
parameters.

Thanks,
Klement

> On 26 Nov 2020, at 21:50, Damjan Marion via lists.fd.io 
>  wrote:
> 
> Will leave that to NAT folks to comment… They have multiple tables and 
> they are two per thread…
> 
> — 
> Damjan
> 
> > On 26.11.2020., at 20:27, Marcos - Mgiga  wrote:
> > 
> > Of course.
> > 
> > Since I intend to implement VPP as a deterministic CGN gateway I have some 
> > parameters regarding to nat config, for example: translation hash buckets, 
> > translation hash memory , user hash buckets and user hash memory to be 
> > configured in startup.conf.
> > 
> > In this context I would like to know how do I give the right value to those 
> > parameters.
> > 
> > 
> > Thanks
> > 
> > Marcos
> > 
> > 
> > -Mensagem original-
> > De: vpp-dev@lists.fd.io  Em nome de Damjan Marion via 
> > lists.fd.io
> > Enviada em: quinta-feira, 26 de novembro de 2020 16:17
> > Para: Marcos - Mgiga 
> > Cc: Elias Rudberg ; vpp-dev@lists.fd.io
> > Assunto: Re: RES: [vpp-dev] NAT memory usage problem for VPP 20.09 compared 
> > to 20.05 due to larger translation_buckets value
> > 
> > 
> > Sorry, I don’t understand your question. Can you elaborate further?
> > 
> > --
> > Damjan
> > 
> >> On 26.11.2020., at 20:05, Marcos - Mgiga  wrote:
> >> 
> >> Hello,
> >> 
> >> Taking benefit of the topic, how you suggest to monitor if translation 
> >> hash bucket value has an appropriate value? What about translation hash 
> >> memory, user hash buckets and user hash memory ?
> >> 
> >> How do I know if I increase or decrease those values?
> >> 
> >> Best Regards
> >> 
> >> Marcos
> >> 
> >> -Mensagem original-
> >> De: vpp-dev@lists.fd.io  Em nome de Damjan Marion 
> >> via lists.fd.io Enviada em: quinta-feira, 26 de novembro de 2020 14:53
> >> Para: Elias Rudberg 
> >> Cc: vpp-dev@lists.fd.io
> >> Assunto: Re: [vpp-dev] NAT memory usage problem for VPP 20.09 compared 
> >> to 20.05 due to larger translation_buckets value
> >> 
> >> 
> >> Dear Elias,
> >> 
> >> Let me try to explain a bit underlying mechanics.
> >> Let’s assume your target number of sessions is 10M and we are talking 
> >> about 16byte key size.
> >> That means each hash entry (KV) is 24 bytes (16 bytes key and 8 bytes 
> >> value).
> >> 
> >> In the setup you were mentioning, with 1<<20 buckets, your will need to 
> >> fit 10 KVs into each bucket.
> >> Initial bihash bucket holds 4 KVs and to accomodate 10 keys (assuming that 
> >> hash function gives us equal distribution) you will need to grow each 
> >> bucket 2 times. Growing means doubling bucket size.
> >> So at the end you will have 1<<20 buckets where each holds 16 KVs.
> >> 
> >> Math is:
> >> 1<<20 * (16 * 24 /* KV size in bytes */  + 8 /*bucket header size*/) Which 
> >> means 392 MB of memory.
> >> 
> >> If you keep target number of 10M sessions, but you increase number of 
> >> buckets to 1 << 22 (which is roughly what the formula below is trying to do) 
> >> you end up with the following math:
> >> 
> >> Math is:
> >> 1<<22 * (4 * 24 /* KV size in bytes */  + 8 /*bucket header size*/) Which 
> >> means 416 MB of memory.
> >> 
> >> So why 2nd one is better. Several reasons:
> >> 
> >> - in first case you need to grow each bucket twice, that means 
> >> allocating memory for the new bucket,  copying existing data from the 
> >> old bucket and putting old bucket to the free list. This operation 
> >> increases  key insertion time and lowers performance
> >> 
> >> - growing will likely result in significant amount of old buckets 
> >> sitting in the free list  and they are effectively wasted memory 
> >> (bihash tries to reuse that memory but at some point  there is no 
> >> demand anymore for smaller buckets)
> >> 
> >> - performance-wise original bucket (one which first 4 KVs) is collocated 
> >> with bucket header.
> >> This is new behaviour Dave introduced earlier this year (and I think it is 
> >> present in 20.09).
> >> Bucket collocated with header means that there is no dependant 
> >> prefetch needed as both header  and at least part of data sits in the same 
> >> cacheline. This significantly improveslookup performance.
> >> 
> >> So in general, for best performance and optimal memory usage, number of 
> >> buckets should be big enough so it unlikely grow with your target number 
> >> of KVs. rule of thumb would be rounding target number of entries to closer 
> >> power-of-2 value and then dividing that number with 2.
> >> For example, for 10M entries first lower power-of-2 number is 1<<23 (8M) 
> >> and first higher is 1<<24 (16M).
> >> 1<<23 is closer, when we divide that by 2 we got 1<<22 (4M) buckets.
> >> 
> >> Hope this explains….
> >> 
> >> —
> >> Damjan
> >> 
> >> 
> >>> On 26.11.2020., at 17:54, Elias Rudberg  wrote:
> >>> 
> >>> Hello VPP 

Re: [vpp-dev] VPP hanging and running out of memory due to infinite loop related to nat44-hairpinning

2020-12-02 Thread Klement Sekera via lists.fd.io
Hi Elias,

what is the point of such a static mapping? What is the use case here?

Thanks,
Klement

> On 2 Dec 2020, at 16:39, Elias Rudberg  wrote:
> 
> One scenario when this happens is when a UDP (or TCP) packet is sent
> from a client on the inside with a destination IP address that matches
> an existing static NAT mapping that maps that IP address on the inside
> to the same IP address on the outside.


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#18231): https://lists.fd.io/g/vpp-dev/message/18231
Mute This Topic: https://lists.fd.io/mt/78662322/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VPP hanging and running out of memory due to infinite loop related to nat44-hairpinning

2020-12-02 Thread Klement Sekera via lists.fd.io
Hi Elias,

so there are two ways of solving this:

1. don’t support such a static mapping (i.e. refuse to create it)
2. make it work

for #2 your snat_hairpinning fix might be the right idea. If there is really no 
change then there is no point in doing an extra lookup, right?
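
Something along these lines, I mean (a sketch with made-up variable names, not
the actual snat_hairpinning() code):

/* only send the packet back through ip4-lookup when hairpinning actually
 * rewrote the destination; a static mapping that maps an address to itself
 * changes nothing, so there is nothing to re-resolve and no loop */
if (old_dst_addr.as_u32 == new_dst_addr.as_u32 && old_dst_port == new_dst_port)
  return 0;   /* unchanged - keep going on the normal path */
/* otherwise rewrite the destination, fix checksums and take ip4-lookup next */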

Would you mind pushing it to gerrit? It would be super cool if the change also 
contained a test case ;-)

Thanks,
Klement

> On 2 Dec 2020, at 18:24, Elias Rudberg  wrote:
> 
> Hi Klement,
> 
>>> an existing static NAT mapping that maps that IP address on the
>>> inside to the same IP address on the outside.
> 
>> what is the point of such static mapping? What is the use case here?
> 
> We are using VPP for endpoint-independent NAT44. Then all traffic from
> outside is normally translated by NAT dynamic sessions but we have
> special treatment of traffic to a certain IP address that corresponds
> to our BGP (Border Gateway Protocol) traffic, that should not be
> translated, so then we have such a static mapping for that. If we do
> not have this static mapping then VPP tries to translate our BGP
> packets and then BGP does not work properly.
> 
> It may be possible to do things differently so that no such mapping
> would be needed, but we have been using such a mapping until now and
> things have worked fine apart from this infinite loop issue, that
> happens when a client from inside happens to send something to our
> special BGP IP address that is intended to be used from the outside.
> That IP address is normally not used by traffic from clients, the
> normal thing is for the router to communicate with the VPP server using
> that address, from outside. This is why the out-of-memory problem has
> appeared random and hard to reproduce earlier, it just happened when a
> client behaved in an unusual way, that did not happen very often but
> when it did, we got the out-of-memory crash and now we finally know
> why. Now that we know, we can easily reproduce it; it is not really
> random, it just seemed that way.
> 
> Anyway, even if it would be unusual and possibly a bad idea to have
> such a static mapping, do you agree that VPP should handle the
> situation differently?
> 
> Best regards,
> Elias
> 


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#18234): https://lists.fd.io/g/vpp-dev/message/18234
Mute This Topic: https://lists.fd.io/mt/78662322/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: RES: RES: [vpp-dev] Increasing NAT worker handoff frame queue size NAT_FQ_NELTS to avoid congestion drops?

2020-11-13 Thread Klement Sekera via lists.fd.io
I used the usual

1. start traffic
2. clear run
3. wait n seconds (e.g. n == 10)
4. show run

Klement

> On 13 Nov 2020, at 18:21, Marcos - Mgiga  wrote:
> 
> Understood. And what path did you take in order to analyse and monitor vector 
> rates ? Is there some specific command or log ?
> 
> Thanks
> 
> Marcos
> 
> -Mensagem original-
> De: vpp-dev@lists.fd.io  Em nome de ksekera via []
> Enviada em: sexta-feira, 13 de novembro de 2020 14:02
> Para: Marcos - Mgiga 
> Cc: Elias Rudberg ; vpp-dev@lists.fd.io
> Assunto: Re: RES: [vpp-dev] Increasing NAT worker handoff frame queue size 
> NAT_FQ_NELTS to avoid congestion drops?
> 
> Not completely idle, more like medium load. Vector rates at which I saw 
> congestion drops were roughly 40 for thread doing no work (just handoffs - I 
> hardcoded it this way for test purpose), and roughly 100 for thread picking 
> the packets doing NAT.
> 
> What got me into infra investigation was the fact that once I was hitting 
> vector rates around 255, I did see packet drops, but no congestion drops.
> 
> HTH,
> Klement
> 
>> On 13 Nov 2020, at 17:51, Marcos - Mgiga  wrote:
>> 
>> So you mean that this situation ( congestion drops) is most likely to occur 
>> when the system in general is idle than when it is processing a large amount 
>> of traffic?
>> 
>> Best Regards
>> 
>> Marcos
>> 
>> -Mensagem original-
>> De: vpp-dev@lists.fd.io  Em nome de Klement 
>> Sekera via lists.fd.io Enviada em: sexta-feira, 13 de novembro de 2020 
>> 12:15
>> Para: Elias Rudberg 
>> Cc: vpp-dev@lists.fd.io
>> Assunto: Re: [vpp-dev] Increasing NAT worker handoff frame queue size 
>> NAT_FQ_NELTS to avoid congestion drops?
>> 
>> Hi Elias,
>> 
>> I’ve already debugged this and came to the conclusion that it’s the infra 
>> which is the weak link. I was seeing congestion drops at mild load, but not 
>> at full load. Issue is that with handoff, there is uneven workload. For 
>> simplicity’s sake, just consider thread 1 handing off all the traffic to 
>> thread 2. What happens is that for thread 1, the job is much easier, it just 
>> does some ip4 parsing and then hands packet to thread 2, which actually does 
>> the heavy lifting of hash inserts/lookups/translation etc. 64 element queue 
>> can hold 64 frames, one extreme is 64 1-packet frames, totalling 64 packets, 
>> other extreme is 64 255-packet frames, totalling ~16k packets. What happens 
>> is this: thread 1 is mostly idle and just picking a few packets from NIC and 
>> every one of these small frames creates an entry in the handoff queue. Now 
>> thread 2 picks one element from the handoff queue and deals with it before 
>> picking another one. If the queue has only 3-packet or 10-packet elements, 
>> then thread 2 can never really get into what VPP excels in - bulk processing.
>> 
>> Q: Why doesn’t it pick as many packets as possible from the handoff queue? 
>> A: It’s not implemented.
>> 
>> I already wrote a patch for it, which made all congestion drops which I saw 
>> (in above synthetic test case) disappear. Mentioned patch 
>> https://gerrit.fd.io/r/c/vpp/+/28980 is sitting in gerrit.
>> 
>> Would you like to give it a try and see if it helps your issue? We 
>> shouldn’t need big queues under mild loads anyway …
>> 
>> Regards,
>> Klement
>> 
>>> On 13 Nov 2020, at 16:03, Elias Rudberg  wrote:
>>> 
>>> Hello VPP experts,
>>> 
>>> We are using VPP for NAT44 and we get some "congestion drops", in a 
>>> situation where we think VPP is far from overloaded in general. Then 
>>> we started to investigate if it would help to use a larger handoff 
>>> frame queue size. In theory at least, allowing a longer queue could 
>>> help avoiding drops in case of short spikes of traffic, or if it 
>>> happens that some worker thread is temporarily busy for whatever 
>>> reason.
>>> 
>>> The NAT worker handoff frame queue size is hard-coded in the 
>>> NAT_FQ_NELTS macro in src/plugins/nat/nat.h where the current value 
>>> is 64. The idea is that putting a larger value there could help.
>>> 
>>> We have run some tests where we changed the NAT_FQ_NELTS value from 
>>> 64 to a range of other values, each time rebuilding VPP and running 
>>> an identical test, a test case that is to some extent trying to mimic 
>>> our real traffic, although of course it is simplified. The test runs 
>>> many
>>> iperf3 tests simultaneously using TCP, combined with some UD

Re: Handoff design issues [Re: RES: RES: [vpp-dev] Increasing NAT worker handoff frame queue size NAT_FQ_NELTS to avoid congestion drops?]

2020-11-16 Thread Klement Sekera via lists.fd.io
That’s exactly what my patch improves. Coalescing small groups of packets 
waiting in the handoff queue into a full(er) frame allows the downstream node 
to do more “V” and achieve better performance. And that’s also what I’ve seen 
when testing the patch.
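
In outline the patch does something like this (a sketch only - dequeue_elt_into()
is a made-up stand-in for the handoff-queue internals, see the gerrit change for
the real code):

vlib_frame_t *f = vlib_get_frame_to_node (vm, next_node_index);
u32 *to = vlib_frame_vector_args (f);
u32 n;
/* drain handoff-queue elements into the same frame until it is full or the
 * queue is empty, so the downstream node gets one big vector to work on */
while (f->n_vectors < VLIB_FRAME_SIZE
       && (n = dequeue_elt_into (q, to, VLIB_FRAME_SIZE - f->n_vectors)) > 0)
  {
    to += n;
    f->n_vectors += n;
  }
vlib_put_frame_to_node (vm, next_node_index, f);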

Thanks,
Klement

ps. in case you missed the link: https://gerrit.fd.io/r/c/vpp/+/28980

> On 13 Nov 2020, at 22:47, Christian Hopps  wrote:
> 
> FWIW, I too have hit this issue. Basically VPP is designed to process a 
> packet from rx to tx in the same thread. When downstream nodes run slower, 
> the upstream rx node doesn't run, so the vector size in each frame naturally 
> increases, and then the downstream nodes can benefit from "V" (i.e., 
> processing multiple packets in one go).
> 
> This back-pressure from downstream does not occur when you hand-off from a 
> fast thread to a slower thread, so you end up with many single packet frames 
> and fill your hand-off queue.
> 
> The quick fix one tries then is to increase the queue size; however, this is 
> not a great solution b/c you are still not taking advantage of the "V" in 
> VPP. To really fit this back into the original design one needs to somehow 
> still be creating larger vectors in the hand-off frames.
> 
> TBH I think the right solution here is to not hand-off frames, and instead 
> switch to packet queues and then on the handed-off side the frames would get 
> constructed from packet queues (basically creating another polling input node 
> but on the new thread).
> 
> Thanks,
> Chris.
> 
>> On Nov 13, 2020, at 12:21 PM, Marcos - Mgiga  wrote:
>> 
>> Understood. And what path did you take in order to analyse and monitor 
>> vector rates ? Is there some specific command or log ?
>> 
>> Thanks
>> 
>> Marcos
>> 
>> -Mensagem original-
>> De: vpp-dev@lists.fd.io  Em nome de ksekera via []
>> Enviada em: sexta-feira, 13 de novembro de 2020 14:02
>> Para: Marcos - Mgiga 
>> Cc: Elias Rudberg ; vpp-dev@lists.fd.io
>> Assunto: Re: RES: [vpp-dev] Increasing NAT worker handoff frame queue size 
>> NAT_FQ_NELTS to avoid congestion drops?
>> 
>> Not completely idle, more like medium load. Vector rates at which I saw 
>> congestion drops were roughly 40 for thread doing no work (just handoffs - I 
>> hardcoded it this way for test purpose), and roughly 100 for thread picking 
>> the packets doing NAT.
>> 
>> What got me into infra investigation was the fact that once I was hitting 
>> vector rates around 255, I did see packet drops, but no congestion drops.
>> 
>> HTH,
>> Klement
>> 
>>> On 13 Nov 2020, at 17:51, Marcos - Mgiga  wrote:
>>> 
>>> So you mean that this situation ( congestion drops) is most likely to occur 
>>> when the system in general is idle than when it is processing a large 
>>> amount of traffic?
>>> 
>>> Best Regards
>>> 
>>> Marcos
>>> 
>>> -Mensagem original-
>>> De: vpp-dev@lists.fd.io  Em nome de Klement
>>> Sekera via lists.fd.io Enviada em: sexta-feira, 13 de novembro de 2020
>>> 12:15
>>> Para: Elias Rudberg 
>>> Cc: vpp-dev@lists.fd.io
>>> Assunto: Re: [vpp-dev] Increasing NAT worker handoff frame queue size 
>>> NAT_FQ_NELTS to avoid congestion drops?
>>> 
>>> Hi Elias,
>>> 
>>> I’ve already debugged this and came to the conclusion that it’s the infra 
>>> which is the weak link. I was seeing congestion drops at mild load, but not 
>>> at full load. Issue is that with handoff, there is uneven workload. For 
>>> simplicity’s sake, just consider thread 1 handing off all the traffic to 
>>> thread 2. What happens is that for thread 1, the job is much easier, it 
>>> just does some ip4 parsing and then hands packet to thread 2, which 
>>> actually does the heavy lifting of hash inserts/lookups/translation etc. 64 
>>> element queue can hold 64 frames, one extreme is 64 1-packet frames, 
>>> totalling 64 packets, other extreme is 64 255-packet frames, totalling ~16k 
>>> packets. What happens is this: thread 1 is mostly idle and just picking a 
>>> few packets from NIC and every one of these small frames creates an entry 
>>> in the handoff queue. Now thread 2 picks one element from the handoff queue 
>>> and deals with it before picking another one. If the queue has only 
>>> 3-packet or 10-packet elements, then thread 2 can never really get into 
>>> what VPP excels in - bulk processing.
>>> 
>>> Q: Why doesn’t it pick as many packets as possible from the handoff q

Re: RES: RES: RES: [vpp-dev] Increasing NAT worker handoff frame queue size NAT_FQ_NELTS to avoid congestion drops?

2020-11-16 Thread Klement Sekera via lists.fd.io
If you can handle the traffic with a single thread then all multi-worker issues 
would go away. But the congestion drops are seen easily with as little as two 
workers due to infra limitations.

Regards,
Klement

> On 13 Nov 2020, at 18:41, Marcos - Mgiga  wrote:
> 
> Thanks, you see reducing the number of VPP threads as an option to work this 
> issue around, since you would probably increase the vector rate per thread?
> 
> Best Regards
> 
> -Mensagem original-
> De: vpp-dev@lists.fd.io  Em nome de Klement Sekera via 
> lists.fd.io
> Enviada em: sexta-feira, 13 de novembro de 2020 14:26
> Para: Marcos - Mgiga 
> Cc: Elias Rudberg ; vpp-dev 
> Assunto: Re: RES: RES: [vpp-dev] Increasing NAT worker handoff frame queue 
> size NAT_FQ_NELTS to avoid congestion drops?
> 
> I used the usual
> 
> 1. start traffic
> 2. clear run
> 3. wait n seconds (e.g. n == 10)
> 4. show run
> 
> Klement
> 
>> On 13 Nov 2020, at 18:21, Marcos - Mgiga  wrote:
>> 
>> Understood. And what path did you take in order to analyse and monitor 
>> vector rates ? Is there some specific command or log ?
>> 
>> Thanks
>> 
>> Marcos
>> 
>> -Mensagem original-
>> De: vpp-dev@lists.fd.io  Em nome de ksekera via 
>> [] Enviada em: sexta-feira, 13 de novembro de 2020 14:02
>> Para: Marcos - Mgiga 
>> Cc: Elias Rudberg ; vpp-dev@lists.fd.io
>> Assunto: Re: RES: [vpp-dev] Increasing NAT worker handoff frame queue size 
>> NAT_FQ_NELTS to avoid congestion drops?
>> 
>> Not completely idle, more like medium load. Vector rates at which I saw 
>> congestion drops were roughly 40 for thread doing no work (just handoffs - I 
>> hardcoded it this way for test purpose), and roughly 100 for thread picking 
>> the packets doing NAT.
>> 
>> What got me into infra investigation was the fact that once I was hitting 
>> vector rates around 255, I did see packet drops, but no congestion drops.
>> 
>> HTH,
>> Klement
>> 
>>> On 13 Nov 2020, at 17:51, Marcos - Mgiga  wrote:
>>> 
>>> So you mean that this situation ( congestion drops) is most likely to occur 
>>> when the system in general is idle than when it is processing a large 
>>> amount of traffic?
>>> 
>>> Best Regards
>>> 
>>> Marcos
>>> 
>>> -Mensagem original-
>>> De: vpp-dev@lists.fd.io  Em nome de Klement 
>>> Sekera via lists.fd.io Enviada em: sexta-feira, 13 de novembro de 
>>> 2020
>>> 12:15
>>> Para: Elias Rudberg 
>>> Cc: vpp-dev@lists.fd.io
>>> Assunto: Re: [vpp-dev] Increasing NAT worker handoff frame queue size 
>>> NAT_FQ_NELTS to avoid congestion drops?
>>> 
>>> Hi Elias,
>>> 
>>> I’ve already debugged this and came to the conclusion that it’s the infra 
>>> which is the weak link. I was seeing congestion drops at mild load, but not 
>>> at full load. Issue is that with handoff, there is uneven workload. For 
>>> simplicity’s sake, just consider thread 1 handing off all the traffic to 
>>> thread 2. What happens is that for thread 1, the job is much easier, it 
>>> just does some ip4 parsing and then hands packet to thread 2, which 
>>> actually does the heavy lifting of hash inserts/lookups/translation etc. 64 
>>> element queue can hold 64 frames, one extreme is 64 1-packet frames, 
>>> totalling 64 packets, other extreme is 64 255-packet frames, totalling ~16k 
>>> packets. What happens is this: thread 1 is mostly idle and just picking a 
>>> few packets from NIC and every one of these small frames creates an entry 
>>> in the handoff queue. Now thread 2 picks one element from the handoff queue 
>>> and deals with it before picking another one. If the queue has only 
>>> 3-packet or 10-packet elements, then thread 2 can never really get into 
>>> what VPP excels in - bulk processing.
>>> 
>>> Q: Why doesn’t it pick as many packets as possible from the handoff queue? 
>>> A: It’s not implemented.
>>> 
>>> I already wrote a patch for it, which made all congestion drops which I saw 
>>> (in above synthetic test case) disappear. Mentioned patch 
>>> https://gerrit.fd.io/r/c/vpp/+/28980 is sitting in gerrit.
>>> 
>>> Would you like to give it a try and see if it helps your issue? We 
>>> shouldn’t need big queues under mild loads anyway …
>>> 
>>> Regards,
>>> Klement
>>> 
>>>> On 13 Nov 2020, at 16:03, Elias Rudberg  wrote:
>>>> 
>>>> Hello

Re: [vpp-dev] Increasing NAT worker handoff frame queue size NAT_FQ_NELTS to avoid congestion drops?

2020-11-16 Thread Klement Sekera via lists.fd.io
Hi Elias,

thanks for getting back with some real numbers. I only tested with two workers 
and a very simple case and in my case, increasing queue size didn’t help one 
bit. But again, in my case there was 100% handoff rate (every single packet was 
going through handoff), which is most probably the reason why one solution 
seemed like the holy grail and the other useless.

To answer your question regarding why the queue length is 64 - I guess nobody 
knows, as the author of that code has been gone for a while. I see no reason why 
this shouldn’t be configurable. When I tried just increasing the value I quickly 
ran into an out-of-buffers situation with the default configs.

Would you like to submit a patch?
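
If anyone wants to pick this up, the patch could be as small as reading the
value from startup.conf and feeding it to the frame queue init instead of the
hard-coded NAT_FQ_NELTS. A sketch (the "nat-handoff" section name and the way
the value is stored are assumptions; the unformat/VLIB_CONFIG_FUNCTION
machinery is the standard startup.conf parsing):

static u32 nat_fq_nelts = 64;   /* today's hard-coded default */

static clib_error_t *
nat_handoff_config (vlib_main_t *vm, unformat_input_t *input)
{
  while (unformat_check_input (input) != UNFORMAT_END_OF_INPUT)
    {
      if (!unformat (input, "frame-queue-nelts %u", &nat_fq_nelts))
        return clib_error_return (0, "unknown input '%U'",
                                  format_unformat_error, input);
    }
  /* nat_fq_nelts would then be passed where NAT_FQ_NELTS is used today,
   * when the worker handoff frame queue is created */
  return 0;
}

VLIB_CONFIG_FUNCTION (nat_handoff_config, "nat-handoff");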

Thanks,
Klement

> On 16 Nov 2020, at 11:33, Elias Rudberg  wrote:
> 
> Hi Klement,
> 
> Thanks! I have now tested your patch (28980), it seems to work and it
> does give some improvement. However, according to my tests, increasing
> NAT_FQ_NELTS seems to have a bigger effect, it improves performance a
> lot. When using the original NAT_FQ_NELTS value of 64, your patch
> gives some improvement but I still get the best performance when
> increasing NAT_FQ_NELTS.
> 
> For example, one of the tests behaves like this:
> 
> Without patch, NAT_FQ_NELTS=64  --> 129 Gbit/s and ~600k cong. drops
> With patch, NAT_FQ_NELTS=64  --> 136 Gbit/s and ~400k cong. drops
> Without patch, NAT_FQ_NELTS=1024  --> 151 Gbit/s and 0 cong. drops
> With patch, NAT_FQ_NELTS=1024  --> 151 Gbit/s and 0 cong. drops
> 
> So it still looks like increasing NAT_FQ_NELTS would be good, which
> brings me back to the same questions as before:
> 
> Were there specific reasons for setting NAT_FQ_NELTS to 64?
> 
> Are there some potential drawbacks or dangers of changing it to a
> larger value?
> 
> I suppose everyone will agree that when there is a queue with a
> maximum length, the choice of that maximum length can be important. Is
> there some particular reason to believe that 64 would be enough? In
> our case we are using 8 NAT threads. Suppose thread 8 is held up
> briefly due to something taking a little longer than usual, meanwhile
> threads 1-7 each hand off 10 frames to thread 8, that situation would
> require a queue size of at least 70, unless I misunderstood how the
> handoff mechanism works. To me, allowing a longer queue seems like a
> good thing because it allows us to handle also more difficult cases
> when threads are not always equally fast, there can be spikes in
> traffic that affect some threads more than others, things like
> that. But maybe there are strong reasons for keeping the queue short,
> reasons I don't know about, that's why I'm asking.
> 
> Best regards,
> Elias
> 
> 
> On Fri, 2020-11-13 at 15:14 +, Klement Sekera -X (ksekera -
> PANTHEON TECH SRO at Cisco) wrote:
>> Hi Elias,
>> 
>> I’ve already debugged this and came to the conclusion that it’s the
>> infra which is the weak link. I was seeing congestion drops at mild
>> load, but not at full load. Issue is that with handoff, there is
>> uneven workload. For simplicity’s sake, just consider thread 1
>> handing off all the traffic to thread 2. What happens is that for
>> thread 1, the job is much easier, it just does some ip4 parsing and
>> then hands packet to thread 2, which actually does the heavy lifting
>> of hash inserts/lookups/translation etc. 64 element queue can hold 64
>> frames, one extreme is 64 1-packet frames, totalling 64 packets,
>> other extreme is 64 255-packet frames, totalling ~16k packets. What
>> happens is this: thread 1 is mostly idle and just picking a few
>> packets from NIC and every one of these small frames creates an entry
>> in the handoff queue. Now thread 2 picks one element from the handoff
>> queue and deals with it before picking another one. If the queue has
>> only 3-packet or 10-packet elements, then thread 2 can never really
>> get into what VPP excels in - bulk processing.
>> 
>> Q: Why doesn’t it pick as many packets as possible from the handoff
>> queue? 
>> A: It’s not implemented.
>> 
>> I already wrote a patch for it, which made all congestion drops which
>> I saw (in above synthetic test case) disappear. Mentioned patch 
>> https://gerrit.fd.io/r/c/vpp/+/28980 is sitting in gerrit.
>> 
>> Would you like to give it a try and see if it helps your issue? We
>> shouldn’t need big queues under mild loads anyway …
>> 
>> Regards,
>> Klement
>> 





Re: [vpp-dev] ip full reassembly - is_custom_app field broken ?

2020-11-09 Thread Klement Sekera via lists.fd.io
Hi,

I don’t. I can eventually write one of course. Or maybe you’d like to give it a 
try? It should be almost 1:1 to ip4 change …

Thanks,
Klement



> On 9 Nov 2020, at 11:18, Satya Murthy  wrote:
> 
> Hi Klement,
> 
> Do you have the similar changes for ip6_full_reassembly.c as well.
> If so, Can you pls pass on the diffs.
> 
> -- 
> Thanks & Regards,
> Murthy 
> 
> 





Re: [vpp-dev] Increasing NAT worker handoff frame queue size NAT_FQ_NELTS to avoid congestion drops?

2020-11-13 Thread Klement Sekera via lists.fd.io
Hi Elias,

I’ve already debugged this and came to the conclusion that it’s the infra which 
is the weak link. I was seeing congestion drops at mild load, but not at full 
load. Issue is that with handoff, there is uneven workload. For simplicity’s 
sake, just consider thread 1 handing off all the traffic to thread 2. What 
happens is that for thread 1, the job is much easier, it just does some ip4 
parsing and then hands packet to thread 2, which actually does the heavy 
lifting of hash inserts/lookups/translation etc. 64 element queue can hold 64 
frames, one extreme is 64 1-packet frames, totalling 64 packets, other extreme 
is 64 255-packet frames, totalling ~16k packets. What happens is this: thread 1 
is mostly idle and just picking a few packets from NIC and every one of these 
small frames creates an entry in the handoff queue. Now thread 2 picks one 
element from the handoff queue and deals with it before picking another one. If 
the queue has only 3-packet or 10-packet elements, then thread 2 can never 
really get into what VPP excels in - bulk processing.

Q: Why doesn’t it pick as many packets as possible from the handoff queue? 
A: It’s not implemented.

I already wrote a patch for it, which made all congestion drops which I saw (in 
above synthetic test case) disappear. Mentioned patch 
https://gerrit.fd.io/r/c/vpp/+/28980 is sitting in gerrit.

Would you like to give it a try and see if it helps your issue? We shouldn’t 
need big queues under mild loads anyway …

Regards,
Klement
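
To make the "bulk processing" point concrete, here is a tiny standalone toy
(plain C, not VPP code - the queue and frame types below are made up) showing how
draining several small frames per dispatch restores big batches, which is the idea
behind the patch above:

#include <stdio.h>

#define VECTOR_SIZE 256		/* mirrors VPP's 256-buffer vectors */

typedef struct { int n_packets; } toy_frame_t;	/* made-up stand-in for a handoff element */

/* drain as many queued frames as fit into one vector before processing */
static int
collect_batch (toy_frame_t *q, int n_frames, int *next)
{
  int collected = 0;
  while (*next < n_frames && collected + q[*next].n_packets <= VECTOR_SIZE)
    collected += q[(*next)++].n_packets;
  return collected;
}

int
main (void)
{
  /* small frames, as produced by a mostly idle RX thread doing handoff */
  toy_frame_t q[8] = { { 3 }, { 10 }, { 7 }, { 2 }, { 40 }, { 5 }, { 1 }, { 12 } };
  int next = 0;
  while (next < 8)
    printf ("worker processes a batch of %d packets\n",
	    collect_batch (q, 8, &next));
  return 0;
}

With one element per dispatch the same queue would have been processed as eight
separate batches of between 1 and 40 packets each.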

> On 13 Nov 2020, at 16:03, Elias Rudberg  wrote:
> 
> Hello VPP experts,
> 
> We are using VPP for NAT44 and we get some "congestion drops", in a
> situation where we think VPP is far from overloaded in general. Then
> we started to investigate if it would help to use a larger handoff
> frame queue size. In theory at least, allowing a longer queue could
> help avoiding drops in case of short spikes of traffic, or if it
> happens that some worker thread is temporarily busy for whatever
> reason.
> 
> The NAT worker handoff frame queue size is hard-coded in the
> NAT_FQ_NELTS macro in src/plugins/nat/nat.h where the current value is
> 64. The idea is that putting a larger value there could help.
> 
> We have run some tests where we changed the NAT_FQ_NELTS value from 64
> to a range of other values, each time rebuilding VPP and running an
> identical test, a test case that is to some extent trying to mimic our
> real traffic, although of course it is simplified. The test runs many
> iperf3 tests simultaneously using TCP, combined with some UDP traffic
> chosen to trigger VPP to create more new sessions (to make the NAT
> "slowpath" happen more).
> 
> The following NAT_FQ_NELTS values were tested:
> 16
> 32
> 64  <-- current value
> 128
> 256
> 512
> 1024
> 2048  <-- best performance in our tests
> 4096
> 8192
> 16384
> 32768
> 65536
> 131072
> 
> In those tests, performance was very bad for the smallest NAT_FQ_NELTS
> values of 16 and 32, while values larger than 64 gave improved
> performance. The best results in terms of throughput were seen for
> NAT_FQ_NELTS=2048. For even larger values than that, we got reduced
> performance compared to the 2048 case.
> 
> The tests were done for VPP 20.05 running on a Ubuntu 18.04 server
> with a 12-core Intel Xeon CPU and two Mellanox mlx5 network cards. The
> number of NAT threads was 8 in some of the tests and 4 in some of the
> tests.
> 
> According to these tests, the effect of changing NAT_FQ_NELTS can be
> quite large. For example, for one test case chosen such that
> congestion drops were a significant problem, the throughput increased
> from about 43 to 90 Gbit/second with the amount of congestion drops
> per second reduced to about one third. In another kind of test,
> throughput increased by about 20% with congestion drops reduced to
> zero. Of course such results depend a lot on how the tests are
> constructed. But anyway, it seems clear that the choice of
> NAT_FQ_NELTS value can be important and that increasing it would be
> good, at least for the kind of usage we have tested now.
> 
> Based on the above, we are considering changing NAT_FQ_NELTS from 64
> to a larger value and start trying that in our production environment
> (so far we have only tried it in a test environment).
> 
> Were there specific reasons for setting NAT_FQ_NELTS to 64?
> 
> Are there some potential drawbacks or dangers of changing it to a
> larger value?
> 
> Would you consider changing to a larger value in the official VPP
> code?
> 
> Best regards,
> Elias
> 
> 
> 
> 





Re: [vpp-dev] vpp hangs with bfd configuration

2021-06-10 Thread Klement Sekera via lists.fd.io
Hi Sudhir,

this looks like a FIB code issue, even though it manifests when using BFD.

Last I knew, Neale was the FIB guy, not sure who is now…

Thanks,
Klement

> On 10 Jun 2021, at 08:50, Sudhir CR via lists.fd.io 
>  wrote:
> 
> Hi All,
> when we are trying to establish a BFD session between two containers, while 
> processing "adj_bfd_notify" VPP went into an infinite loop and hung in one 
> of the containers. This issue is reproducible every time with the below 
> topology and configuration.
> 
> Any help in fixing the issue would be appreciated.
> 
> Topology:
> 
>   Container1 (memif32321/32321)  -  
> (memif32321/32321)Container2
> 
> Configuration:
> Container1
> 
> set interface ip address memif32321/32321 4.4.4.4/24
> ip table add 100
> ip route add 4.4.4.0/24 table 100 via 4.4.4.5 memif32321/32321 out-labels 
> ip route add 4.4.4.5/32 table 100 via 4.4.4.5 memif32321/32321 out-labels 
>
> set interface mpls memif32321/32321 enable
> mpls local-label add  eos via 4.4.4.5 memif32321/32321 
> ip4-lookup-in-table 100
>
> bfd udp session add interface memif32321/32321 local-addr 4.4.4.4 peer-addr 
> 4.4.4.5 desired-min-tx 40 required-min-rx 40 detect-mult 3
> 
> Container2
> 
> set interface ip address memif32321/32321 4.4.4.5/24
> ip table add 100
> ip route add 4.4.4.0/24 table 100 via 4.4.4.4 memif32321/32321 out-labels 
> ip route add 4.4.4.4/32 table 100 via 4.4.4.4 memif32321/32321 out-labels 
> set interface mpls memif32321/32321 enable
> mpls local-label add   eos via 4.4.4.4 memif32321/32321 
> ip4-lookup-in-table 100
> bfd udp session add interface memif32321/32321 local-addr 4.4.4.5 peer-addr 
> 4.4.4.4 desired-min-tx 40 required-min-rx 40 detect-mult 3
> 
> VPP version: 20.09
> 
> (gdb) thread apply all bt
> 
> Thread 3 (Thread 0x7f7ac6ffe700 (LWP 422)):
> #0  0x7f7b67036ffe in vlib_worker_thread_barrier_check () at 
> /home/supervisor/development/libvpp/src/vlib/threads.h:438
> #1  0x7f7b6703152e in vlib_main_or_worker_loop (vm=0x7f7b46cf3240, 
> is_main=0) at /home/supervisor/development/libvpp/src/vlib/main.c:1788
> #2  0x7f7b67030d47 in vlib_worker_loop (vm=0x7f7b46cf3240) at 
> /home/supervisor/development/libvpp/src/vlib/main.c:2008
> #3  0x7f7b6708892a in vlib_worker_thread_fn (arg=0x7f7b41f14540) at 
> /home/supervisor/development/libvpp/src/vlib/threads.c:1862
> #4  0x7f7b668adc44 in clib_calljmp () at 
> /home/supervisor/development/libvpp/src/vppinfra/longjmp.S:123
> #5  0x7f7ac6ffdec0 in ?? ()
> #6  0x7f7b67080ad3 in vlib_worker_thread_bootstrap_fn 
> (arg=0x7f7b41f14540) at 
> /home/supervisor/development/libvpp/src/vlib/threads.c:585
> Backtrace stopped: previous frame inner to this frame (corrupt stack?)
> 
> Thread 2 (Thread 0x7f7ac77ff700 (LWP 421)):
> #0  0x7f7b67036fef in vlib_worker_thread_barrier_check () at 
> /home/supervisor/development/libvpp/src/vlib/threads.h:437
> #1  0x7f7b6703152e in vlib_main_or_worker_loop (vm=0x7f7b45fe8b80, 
> is_main=0) at /home/supervisor/development/libvpp/src/vlib/main.c:1788
> #2  0x7f7b67030d47 in vlib_worker_loop (vm=0x7f7b45fe8b80) at 
> /home/supervisor/development/libvpp/src/vlib/main.c:2008
> #3  0x7f7b6708892a in vlib_worker_thread_fn (arg=0x7f7b41f14440) at 
> /home/supervisor/development/libvpp/src/vlib/threads.c:1862
> #4  0x7f7b668adc44 in clib_calljmp () at 
> /home/supervisor/development/libvpp/src/vppinfra/longjmp.S:123
> #5  0x7f7ac77feec0 in ?? ()
> #6  0x7f7b67080ad3 in vlib_worker_thread_bootstrap_fn 
> (arg=0x7f7b41f14440) at 
> /home/supervisor/development/libvpp/src/vlib/threads.c:585
> Backtrace stopped: previous frame inner to this frame (corrupt stack?)
> 
> Thread 1 (Thread 0x7f7b739b7740 (LWP 226)):
> #0  0x7f7b681c952b in fib_node_list_remove (list=54, sibling=63) at 
> /home/supervisor/development/libvpp/src/vnet/fib/fib_node_list.c:246
> #1  0x7f7b681c7695 in fib_node_child_remove 
> (parent_type=FIB_NODE_TYPE_ADJ, parent_index=1, sibling_index=63)
> at /home/supervisor/development/libvpp/src/vnet/fib/fib_node.c:131
> #2  0x7f7b681b2395 in fib_walk_destroy (fwi=2) at 
> /home/supervisor/development/libvpp/src/vnet/fib/fib_walk.c:262
> #3  0x7f7b681b2f13 in fib_walk_sync (parent_type=FIB_NODE_TYPE_ADJ, 
> parent_index=1, ctx=0x7f7b2e08dc90)
> at /home/supervisor/development/libvpp/src/vnet/fib/fib_walk.c:818
> #4  0x7f7b6821ed4d in adj_nbr_update_rewrite_internal 
> (adj=0x7f7b46e08c80, adj_next_index=IP_LOOKUP_NEXT_REWRITE, this_node=426,
> next_node=682, rewrite=0x7f7b4a5c4b40 
> "z\001\277d\004\004zP\245d\004\004\210G")
> at /home/supervisor/development/libvpp/src/vnet/adj/adj_nbr.c:472
> #5  0x7f7b6821eb99 in adj_nbr_update_rewrite (adj_index=2, 
> flags=ADJ_NBR_REWRITE_FLAG_COMPLETE,
> rewrite=0x7f7b4a5c4b40 "z\001\277d\004\004zP\245d\004\004\210G") at 
> /home/supervisor/development/libvpp/src/vnet/adj/adj_nbr.c:335
> 

Re: [vpp-dev] VPP C++ Plugin API

2021-06-21 Thread Klement Sekera via lists.fd.io
Hi James,

On vpp master branch running vpp like this:

env STARTUP_CONF=/home/ksekera/startup.conf make debug


with 


ksekera@c984e4769eef ~> cat startup.conf 
unix { interactive cli-listen /run/vpp/cli.sock gid 1000 }
plugins { plugin dpdk_plugin.so { disable } }

and your code

ksekera@c984e4769eef ~> g++ -I/home/ksekera/vpp/src/vpp-api/ 
-I/home/ksekera/vpp/src -I./vpp/build-root/install-vpp-native/vpp/include 
-L./vpp/build-root/install-vpp_debug-native/vpp/lib/ test.cpp -lvapiclient
ksekera@c984e4769eef ~> env 
LD_LIBRARY_PATH=./vpp/build-root/install-vpp_debug-native/vpp/lib/ ./a.out
App name `test_app Connecting...
svm_map_region:629: segment chown [ok if client starts first]: Operation not 
permitted (errno 1)
Sending ACL get_version...
Execute...
Wait...
Get Response...
Get Payload...
Got major=1 minor=4

It seems to work fine.

What version of VPP are you running and is it vanilla?

Regards,
Klement

> On 16 Jun 2021, at 21:32, James Spencer  wrote:
> 
> Hi,
> 
> I’m having trouble getting a basic test working using the VPP C++ API with a 
> plugin. I haven’t been able to find any examples of using the VPP C++ API 
> with plugins. It seems like I’m missing something simple but it is not clear 
> what that is.
> 
> I was able to get the VPP native C++ APIs working like “Show_version” but 
> when I try to use an API from a plugin I get an "unexpected message id” 
> exception on the reply message.
> 
> I have a simple test program which is based on the ACL plugin get_version, 
> which is in full below.
> 
> I enabled the VAPI_DBG debug in my program and added a bit more in a few 
> places. It also seems to be the reply messaging having a problem so I am 
> suspecting I’m missing something in my client.
> 
> The suspicious part to me is the constructor for the request and reply Msg is 
> not using the Acl_plugin_get_verision type, it is using control_ping_reply 
> which is msg_id == 0:
> 
> DBG:vapi.c:395:vapi_connect():finished probing messages
> Sending ACL get_version...
> DBG:vapi.hpp:603:Msg():MYDEBUG get_msg_id() == 0
> DBG:vapi.hpp:610:Msg():New Msg@0x7ffc74c9cfe8 
> shm_data@0x1300b9fa8
> DBG:vapi.hpp:603:Msg():MYDEBUG get_msg_id() == 0
> DBG:vapi.hpp:610:Msg():New Msg@0x7ffc74c9cff8 
> shm_data@(nil)
> Execute...
> DBG:acl.api.vapi.h:587:vapi_msg_acl_plugin_get_version_hton():Swapping 
> `vapi_msg_acl_plugin_get_version'@0x1300b9fa8 to big endian
> DBG:vapi.c:462:vapi_send():send msg@0x1300b9fa8:630[acl_plugin_get_version]
> DBG:vapi.c:484:vapi_send():vapi_send() rv = 0
> DBG:vapi.hpp:373:send():Push 0x7ffc74c9cfb0
> Wait...
> DBG:vapi.c:553:vapi_recv():doing shm queue sub
> DBG:vapi.c:176:vapi_add_to_be_freed():To be freed 0x130052f10
> DBG:vapi.c:579:vapi_recv():recv 
> msg@0x130052f10:631[acl_plugin_get_version_reply]
> DBG:vapi.hpp:277:dispatch():MYDEBUG has context
> DBG:vapi.hpp:284:dispatch():MYDEBUG context match id=80
> terminate called after throwing an instance of 
> 'vapi::Unexpected_msg_id_exception'
>   what():  unexpected message id
> Aborted (core dumped)
> 
> 
> Any chance anyone knows off hand what I am missing?
> 
> Thanks!
> 
> 
> Test program:
> 
> #include 
> #include 
> #include 
> #include 
> 
> #pragma GCC diagnostic push
> #pragma GCC diagnostic ignored "-Wunused-parameter"
> #include 
> #include 
> #pragma GCC diagnostic pop
> 
> DEFINE_VAPI_MSG_IDS_ACL_API_JSON;
> 
> #define WAIT_FOR_RESPONSE(param, ret)  \
>   do   \
> {  \
>   ret = con.wait_for_response (param); \
> }  \
>   while (ret == VAPI_EAGAIN)
> 
> using namespace vapi;
> 
> int main (void)
> {
> Connection con;
> const char *app_name = "test_app";
> char *api_prefix = NULL;
> int max_outstanding_requests = 32;
> int response_queue_size = 32;
> vapi_error_e rv;
> 
> printf ("App name `%s Connecting...\n", app_name);
> 
> /* connect to the VPP API */
> rv = con.connect (app_name,
>   api_prefix,
>   max_outstanding_requests,
>   response_queue_size);
> assert (VAPI_OK == rv);
> 
> printf("Sending ACL get_version...\n");
> Acl_plugin_get_version acl (con);
> 
> printf("Execute...\n");
> rv = acl.execute();
> 
> printf("Wait...\n");
> WAIT_FOR_RESPONSE(acl, rv);
> assert(rv == VAPI_OK);
> 
> printf("Get Response...\n");
> auto &response = acl.get_response();
> 
> printf("Get Payload...\n");
> auto rp = response.get_payload();
> 
> printf("Got major=%d minor=%d\n", rp.major, rp.minor);
> 
> /* disconnect from the VPP API */
> con.disconnect ();
> return 0;
> }
> 
> 
> Full Debug Output:
> 
> DBG:vpe.api.vapi.h:222:__vapi_constructor_control_ping_reply():Assigned msg 
> id 0 to control_ping_reply
> DBG:vpe.api.vapi.h:336:__vapi_constructor_control_ping():Assigned msg id 1 to 
> control_ping
> 

Re: [vpp-dev] Make test help

2021-04-26 Thread Klement Sekera via lists.fd.io
Hi Govind,

there is no explicit startup.conf used by the test framework. All arguments are 
passed on the VPP command line, which is built in the setUpConstants() function of VppTestCase.

Regards,
Klement

> On 23 Apr 2021, at 18:13, Govindarajan Mohandoss 
>  wrote:
> 
> Dear Maintainers,
>  I would like to enable a field in "startup.conf" through "make test". How 
> can I do that ?
> 
> Thanks
> Govind
> 
> 
> 





[vpp-dev] make test-checkstyle-diff

2021-03-25 Thread Klement Sekera via lists.fd.io
Hi all,

I’ve introduced a new checkstyle target, which should ease development. This 
one does a similar thing as test-checkstyle, but only on changed files. This is 
useful for local verification as it’s a bit faster and doesn’t pollute your 
screen with messages about checking unchanged files.

I kept the semantics of test-checkstyle unchanged, since in gerrit it doesn’t 
matter if it’s 30 seconds longer, and to catch other types of failures - for 
example a tool change which might not modify any python file, but could still result 
in a diff and a failure.

Regards,
Klement



Re: [vpp-dev] Thread safety issue in NAT plugin regarding counter for busy ports

2021-04-01 Thread Klement Sekera via lists.fd.io
Hi Elias,

it’s spot on. I think all of it. Would you like to push an atomic-increment 
patch or should I?

Thanks for spotting this!!!
Klement
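
For the record, the shape of the change being discussed is roughly this (a sketch,
not the merged patch): only the shared per-address counter needs to become atomic,
e.g. using the vppinfra atomics; the per-thread and per-port-refcount updates can
stay plain increments, since only the owning thread touches those entries, as noted
in the analysis quoted below.

  /* sketch: inside the same macro as quoted below */                     \
  clib_atomic_fetch_add (&a->busy_##n##_ports, 1); /* was: ...ports++; */ \
  /* ... and on port release ... */                                       \
  clib_atomic_fetch_sub (&a->busy_##n##_ports, 1); /* was: ...ports--; */ \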

> On 1 Apr 2021, at 13:39, Elias Rudberg  wrote:
> 
> Hello VPP experts,
> 
> I think there is a thread safety issue in the NAT plugin regarding the
> counter for busy ports.
> 
> Looking at this for the master branch now, there has been some
> refactoring lately but the issue has anyway been there for a long time,
> at least several VPP versions back, although filenames and function
> names have changed.
> 
> Here I will take the endpoint-independent code in nat44-ei/nat44_ei.c
> code because that is the part I am using, but it looks like a similar
> issue is there for nat44-ed as well.
> 
> In the nat44_ei_alloc_default_cb() function in nat44_ei.c there is a
> part that looks like this:
> 
>  --a->busy_##n##_port_refcounts[portnum];  \
>  a->busy_##n##_ports_per_thread[thread_index]++;   \
>  a->busy_##n##_ports++;\
> 
> where the variable "a" is an address (nat44_ei_address_t) that belongs
> to the "addresses" in the global nat44_ei_main, so not thread-specific. 
> As I understand it, different threads may be using the same "a" at the
> same time.
> 
> At first sight it might seem like all those three lines are risky
> because different threads can execute this code at the same time for
> the same "a". However, the _port_refcounts[portnum] and
> _ports_per_thread[thread_index] parts are actually okay to access
> because the [portnum] and [thread_index] ensures that those lines only
> access parts of those arrays that belong to thecurrent thread, that is
> how the port number is selected.
> 
> So the first two lines there are fine, I think, but the third line,
> incrementing a->busy_##n##_ports, can give a race condition when
> different threads execute it at the same time. The same issue is also
> there in other places where the busy_##n##_ports values are updated.
> 
> I think this is not critical because the busy_##n##_ports information
> (that can be wrong because of this thread safety issue) is not used
> very much. However those values are used in nat44_ei_del_address()
> where it looks like this:
> 
>  /* Delete sessions using address */
>  if (a->busy_tcp_ports || a->busy_udp_ports || a->busy_icmp_ports)
>{
> 
> and then inside that if-statement there is some code to delete those
> sessions. If the busy_##n##_ports values are wrong it could in
> principle happen that the session deletion is skipped when there were
> actually some sessions that needed deleting. Perhaps rare and perhaps
> resulting in nothing worse than a small memory leak, but anyway.
> 
> One effect of this is that there can be an inconsistency, if we were to
> sum up the busy_##n##_ports_per_thread values for all threads, that
> should be equal to busy_##n##_ports but due to this issue there could
> be a difference, because while the busy_##n##_ports_per_thread values
> are correct the busy_##n##_ports values may have been corupted due to
> the race condition mentioned above.
> 
> Not sure if the above is a problem in practice, my main motivation for
> reporting this is that it confuses me when I am trying to understand
> how te code works in order to do some modifications. Either the code is
> not thread safe there, or I have misunderstood things.
> 
> What do you think, is it an issue?
> If not, what have I missed?
> 
> (This is not an April fools' joke, I really am this pedantic)
> 
> Best regards,
> Elias
> 
> 
> 





[vpp-dev] tests - attach debug option available

2021-03-16 Thread Klement Sekera via lists.fd.io
Hi all,

I implemented a new debug option for make test called ‘attach’. This has been 
requested a couple of times over the last few months and while it has some 
drawbacks, it also has advantages.

It’s not merged yet.

https://gerrit.fd.io/r/c/vpp/+/31663

As always make test-help is full of clues on how to use it.

TLDR:

window 1: make test-start-vpp-debug-in-gdb (set breakpoints, … whatever and 
then run vpp from within gdb)
window 2: make test 
TEST=test_nat44_ed.TestNAT44ED.test_outside_address_distribution

NOTE: a lot of tests rely on having a fresh VPP, so repeat test runs without 
restarting vpp are questionable at best. Running more than one test class will 
most probably work only by accident.
NOTE #2: it doesn’t matter if it’s make test or make test-debug in window 2 as 
the binary is selected in window 1. Use test-start-vpp-in-gdb to debug release 
binary.

Feedback is much appreciated.

Regards,
Klement



Re: [vpp-dev] tests - attach debug option available

2021-03-18 Thread Klement Sekera via lists.fd.io
I like the theory, but untangling VppTestCase seems like too much effort, which 
I will spend elsewhere.

This is a quick and usable solution which doesn’t disrupt it too much.

Klement

> On 18 Mar 2021, at 20:20, Paul Vinciguerra  wrote:
> 
> Hi Klement,
> 
> I disagree with the implementation.  Naveen proposed similar functionality a 
> while back.  I objected to that for the same reason.
> 
> It does not make sense to use a testcase written to popen an instance of vpp, 
> then mock out the vpp instance with dummy values and conditionally check and 
> skip the logic distributed across the class. 
> 
> To accomplish your goal, the baseclass should be refactored from the popen 
> code/logic.  Your change would then use the baseclass and the CI would use 
> the subclass, or a mixin.
> 
> If you look at the csit python code, they use a papi executor class. The test 
> cases depend on the vppapiclient and the stats client. They should be the 
> only dependencies in the base class, anything else is a concrete 
> implementation detail.
> 
> Paul  
> 
> 
> 
> On Tue, Mar 16, 2021 at 12:35 PM Klement Sekera via lists.fd.io 
>  wrote:
> Hi all,
> 
> I implemented a new debug option for make test called ‘attach’. This has been 
> requested a couple of times over the last few months and while it has some 
> drawbacks, it also has advantages.
> 
> It’s not merged yet.
> 
> https://gerrit.fd.io/r/c/vpp/+/31663
> 
> As always make test-help is full of clues on how to use it.
> 
> TLDR:
> 
> window 1: make test-start-vpp-debug-in-gdb (set breakpoints, … whatever and 
> then run vpp from within gdb)
> window 2: make test 
> TEST=test_nat44_ed.TestNAT44ED.test_outside_address_distribution
> 
> NOTE: a lot of tests rely on having a fresh VPP, so repeat test runs without 
> restarting vpp are questionable at best. Running more than one test class 
> will most probably work only by accident.
> NOTE #2: it doesn’t matter if it’s make test or make test-debug in window 2 
> as the binary is selected in window 1. Use test-start-vpp-in-gdb to debug 
> release binary.
> 
> Feedback is much appreciated.
> 
> Regards,
> Klement
> 
> 





Re: [vpp-dev] NAT44 how to control external address assignment from pool?

2021-02-23 Thread Klement Sekera via lists.fd.io
Hey,

just a heads up - there is a similar request to yours which came from a 
different direction. I’ll be making a change which I think will help your 
situation as well. Stay tuned.

Regards,
Klement

> On 22 Feb 2021, at 10:00, Юрий Иванов  wrote:
> 
> Hello Klement,
> 
> Thanks for the reply.
> Looks like I should craft this idea by myself ;-)
> 
> The main problem for me is that I've been a network engineer for the past few years and have not 
> programmed much in C during that time, but I will try to craft a new patch.
> 
> Thanks in advance.
> 
> From: Klement Sekera -X (ksekera - PANTHEON TECH SRO at Cisco) 
> 
> Sent: 16 February 2021 19:03
> To: Юрий Иванов 
> Cc: Filip Varga -X (fivarga - PANTHEON TECH SRO at Cisco) 
> ; vpp-dev@lists.fd.io 
> Subject: Re: [vpp-dev] NAT44 how to control external address assignment from 
> pool?
>  
> I see, so you’re not using deterministic NAT. Which NAT flavour are you using?
> 
> I think what you are requesting is not provided by VPP at this moment, but 
> looking at the allocation algorithm, it might be possible to implement such 
> behaviour. It should be relatively straightforward in EI NAT and a little bit 
> more complicated in ED NAT, requiring an extra hash table for user-outside 
> address mappings as ED NAT has no “user” tracking.
> 
> Another possibility would be to make it completely random - so for every 
> connection there would be a random address picked, so e.g. user1 might get 
> 1.0.0.7 for google.com, but 1.0.0.117 for duckduckgo.com. This would be even 
> easier to implement.
> 
> Would you like to give it a try and submit a patch? I can provide guidance…
> 
> Regards,
> Klement
> 
> > On 16 Feb 2021, at 15:22, Юрий Иванов  wrote:
> > 
> > Thanks Klement,
> > 
> > I want to use #1 option and try to think about #2 with DUT only as 
> > workaround.
> > 
> > The simple random allocation (option #1) looks acceptable for me but I have 
> > several issues with it now.
> > 
> > I have a big external pool (the outside network has a /24 mask) and I want to use 
> > all the addresses more evenly.
> > Now if I set the pool with vpp# nat44 add address 1.0.0.3-1.0.0.100
> > 
> > But with such a configuration all clients behind NAT will have the external 
> > address 1.0.0.100 until all its ports are used up, the next will get 1.0.0.99 until 
> > all its ports are used up, etc.
> > As a result all users get Google reCAPTCHA on most resources (i.e. 
> > google.com search) because too many users are hiding behind the 
> > same IP while other addresses in the pool are not used at all.
> > 
> > Since a standard Linux box can use the option "persistent", which gives a 
> > client a random address from the snat pool (on first translation) and preserves 
> > it until the end of the user session, I'm interested in how to achieve this 
> > behavior with VPP.
> > 
> > Can I somehow set up the pool 1.0.0.3-1.0.0.200 so that the first client 10.0.0.1 
> > gets a random external address, i.e. 1.0.0.7 (I mean a random address from the 
> > pool), and preserves it for all new connections until the end of the session, 
> > the second client 10.0.0.5 gets the next random address, etc.?
> > 
> > Thanks in advance.
> > From: Klement Sekera -X (ksekera - PANTHEON TECH SRO at Cisco) 
> > 
> > Sent: 16 February 2021 14:01
> > To: Юрий Иванов 
> > Cc: Filip Varga -X (fivarga - PANTHEON TECH SRO at Cisco) 
> > ; vpp-dev@lists.fd.io 
> > Subject: Re: [vpp-dev] NAT44 how to control external address assignment from 
> > pool?
> >  
> > Hi, let me chime in and explain a bit more.
> > 
> > DET NAT also known as CGNAT (as in carrier-grade NAT) is designed to 
> > conform to LI (lawful intercept) requirements.
> > 
> > So, if you, as an internet provider are required by law to be able to 
> > provide a user identification based on outside address + port made by that 
> > user, you have two options:
> > 
> > 1.) log every connection and keep the logs
> > 2.) make it deterministic, so you can always calculate inside address from 
> > outside address + port
> > 
> > DET NAT is #2 and thus it cannot be random.
> > 
> > For random allocation, you can use either EI or ED NAT. But these of course 
> > don’t provide any way to calculate user address from outside address.
> > 
> > What is your use case?
> > 
> > Thanks,
> > Klement
> > 
> > > On 10 Feb 2021, at 19:14, Юрий Иванов  wrote:
> > > 
> > > Hi Filip,
> > > 
> > > Thanks, I understand, det44 plugin is working separately but we should 
> > > manually manage mapping local network to external IP.
> > > 
> > > But in case we try to use the standard nat configuration with pools:
> > > vpp# nat44 forwarding enable
> > > vpp# set int nat44 in GigabitEthernet0/5/0 out GigabitEthernet0/4/0
> > > vpp# nat44 add address 1.0.0.3-1.0.0.100
> > > 
> > > All clients will have the external address 1.0.0.100 until all its ports are used 
> > > up, the next will get 1.0.0.99 until all its ports are used up, etc.
> > > This behaviour leads to showing Google reCAPTCHA on most resources (i.e. 
> > > google.com search) because there are too many users 

Re: [vpp-dev] NAT44 how to control external address assignment from pool?

2021-02-16 Thread Klement Sekera via lists.fd.io
Hi, let me chime in and explain a bit more.

DET NAT also known as CGNAT (as in carrier-grade NAT) is designed to conform to 
LI (lawful intercept) requirements.

So, if you, as an internet provider are required by law to be able to provide a 
user identification based on outside address + port made by that user, you have 
two options:

1.) log every connection and keep the logs
2.) make it deterministic, so you can always calculate inside address from 
outside address + port

DET NAT is #2 and thus it cannot be random.

For random allocation, you can use either EI or ED NAT. But these of course 
don’t provide any way to calculate user address from outside address.

What is your use case?

Thanks,
Klement
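
As a worked (purely illustrative) example of #2: map an inside /16 onto an outside
/24 and both the sharing ratio and the per-host port block fall out of simple
arithmetic, which is what makes the reverse lookup possible without logging. The
numbers and formulas below are examples for illustration, not taken from the det44
code:

  u32 inside_hosts      = 1 << 16;	/* e.g. inside prefix 10.0.0.0/16 */
  u32 outside_addresses = 1 << 8;	/* e.g. outside pool 1.0.0.0/24   */
  u32 sharing_ratio     = inside_hosts / outside_addresses;	/* 256       */
  u32 ports_per_host    = (65535 - 1023) / sharing_ratio;	/* 252 ports */
  /* outside address <- pool base + (inside host index / sharing_ratio)
   * port block      <- 1024 + (inside host index % sharing_ratio) * ports_per_host
   * so the inside host can always be computed back from outside address + port */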

> On 10 Feb 2021, at 19:14, Юрий Иванов  wrote:
> 
> Hi Filip,
> 
> Thanks, I understand: the det44 plugin works separately, but we have to 
> manually manage the mapping of the local network to external IPs.
> 
> But in case we try to use the standard nat configuration with pools:
> vpp# nat44 forwarding enable
> vpp# set int nat44 in GigabitEthernet0/5/0 out GigabitEthernet0/4/0
> vpp# nat44 add address 1.0.0.3-1.0.0.100
> 
> All clients will have the external address 1.0.0.100 until all its ports are used up, 
> the next will get 1.0.0.99 until all its ports are used up, etc.
> This behaviour leads to showing Google reCAPTCHA on most resources (i.e. 
> google.com search) because too many users are hiding behind the 
> same IP while other addresses in the pool are not used at all.
> I can afford to use a pool with 255 addresses (a /24 network), but in this case 
> most of the addresses will not be used at all.
> 
> I'm interested in how to tune VPP to select a random address for every new 
> client and keep that same source/destination address for each new 
> connection. This should help use the address pool more evenly.
> The same behavior as nftables does with "ip saddr 10.0.0.0/8 oif "vlan10" snat 
> to 1.0.0.3-1.0.0.100 persistent".
> 
> Thanks in advance.
> From: Filip Varga -X (fivarga - PANTHEON TECH SRO at Cisco) 
> Sent: 10 February 2021 14:25
> To: Юрий Иванов ; vpp-dev@lists.fd.io 
> 
> Subject: RE: [vpp-dev] NAT44 how to control external address assignment from 
> pool?
>  
> Hello,
>  
> For clarification, I will explain how the NAT code is divided.
>  
> At this point NAT functionality is divided into multiple sub-plugins because of 
> its previous complexity and the issues with it.
> We have det44 and nat44 plugins that are completely separate. The whole 
> separation is still in progress, so changes in nat44, like picking a different pool 
> allocation algorithm or anything else, will not affect the det44 plugin. These two 
> plugins operate completely independently and share just a common NAT library.
>  
> Regarding the det44 allocation algorithm: no, at this point it is not 
> supported to pick a new randomly selected address as you are asking. det44 
> acts / should act in a predetermined way so that logging is not required. 
>  
> I will look further into the code and plugins to see if I can help you find a 
> solution.
>  
> Best regards,
> Filip
>  
> From: Юрий Иванов  
> Sent: Wednesday, February 10, 2021 8:47 AM
> To: Filip Varga -X (fivarga - PANTHEON TECH SRO at Cisco) 
> ; vpp-dev@lists.fd.io
> Subject: RE: [vpp-dev] NAT44 how to control external address assignment from 
> pool?
> Importance: High
>  
> Hi Filip, thanks for the reply.
>  
> This is only for host mapping and it looks like it can be done with the det44 plugin 
> - very strange, by the way, that it operates separately from standard nat44 (meaning 
> that I don't need to configure nat at all to use it). 
>  
> My problem is different: when I set a pool, i.e. 1.0.0.1-1.0.0.100, all clients 
> always get the last address from the pool (.100) until that external IP runs out of 
> ports, and only after that will clients get the .99 IP until this IP runs out 
> of ports, and so on.
>  
> Is there a way to select a new random address from the pool for a new client and after 
> that use this randomly selected source/destination address for each of that 
> client's connections?
>  
> Now it leads to problems with the Google 'Unusual Traffic' block/captcha, 
> because it utilizes only a few IP addresses while most IPs from the pool are left unused.
>  
> From: Filip Varga -X (fivarga - PANTHEON TECH SRO at Cisco) 
> Sent: 9 February 2021 13:54
> To: Юрий Иванов ; vpp-dev@lists.fd.io 
> 
> Subject: RE: [vpp-dev] NAT44 how to control external address assignment from 
> pool?
>  
> Hi,
>  
> If you are looking for an option to specify the exact outside translation address 
> from a specific pool, you should try:
>  
> nat44 add static mapping ... exact 
>  
> This is also supported by the API.
> This will give you the exact address picked from the pool.
>  
> Best regards,
> Filip Varga
>  
> From: vpp-dev@lists.fd.io  On Behalf Of Юрий Иванов
> Sent: Monday, February 8, 2021 11:04 AM
> To: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] NAT44 how to control external address assignment from 
> pool?
> Importance: High
>  
> Just an update, to perform manual 

Re: [vpp-dev] NAT44 how to control external address assignment from pool?

2021-02-16 Thread Klement Sekera via lists.fd.io
I see, so you’re not using deterministic NAT. Which NAT flavour are you using?

I think what you are requesting is not provided by VPP at this moment, but 
looking at the allocation algorithm, it might be possible to implement such 
behaviour. It should be relatively straightforward in EI NAT and a little bit 
more complicated in ED NAT, requiring an extra hash table for user-outside 
address mappings as ED NAT has no “user” tracking.

Another possibility would be to make it completely random - so for every 
connection there would be a random address picked, so e.g. user1 might get 
1.0.0.7 for google.com, but 1.0.0.117 for duckduckgo.com. This would be even 
easier to implement.

Would you like to give it a try and submit a patch? I can provide guidance…

Regards,
Klement
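
Just to sketch what the first option could look like (all names below are made up,
this is not existing nat44 code): the "persistent" behaviour boils down to deriving
a stable, pseudo-randomly distributed pool slot from the inside address (and, for
ED NAT, remembering it in the extra hash table mentioned above). Using vppinfra's
u32 typedef:

/* hypothetical helper: the same inside address always lands on the same
 * pool slot, so a user keeps "their" outside address for every new
 * connection while the whole pool is used evenly */
static u32
nat44_sticky_pool_index (u32 in_addr_as_u32, u32 n_pool_addresses)
{
  u32 h = in_addr_as_u32 * 2654435761u;	/* Knuth multiplicative hash */
  return h % n_pool_addresses;
}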

> On 16 Feb 2021, at 15:22, Юрий Иванов  wrote:
> 
> Thanks Klement,
> 
> I want to use #1 option and try to think about #2 with DUT only as workaround.
> 
> The simple random allocation (option #1) looks acceptable for me but I have 
> several issues with it now.
> 
> I have a big external pool (the outside network has a /24 mask) and I want to use all 
> the addresses more evenly.
> Now if I set the pool with vpp# nat44 add address 1.0.0.3-1.0.0.100
> 
> But with such a configuration all clients behind NAT will have the external address 
> 1.0.0.100 until all its ports are used up, the next will get 1.0.0.99 until all its ports 
> are used up, etc.
> As a result all users get Google reCAPTCHA on most resources (i.e. 
> google.com search) because too many users are hiding behind the 
> same IP while other addresses in the pool are not used at all.
> 
> Since a standard Linux box can use the option "persistent", which gives a client 
> a random address from the snat pool (on first translation) and preserves it until 
> the end of the user session, I'm interested in how to achieve this behavior with 
> VPP.
> 
> Can I somehow set up the pool 1.0.0.3-1.0.0.200 so that the first client 10.0.0.1 
> gets a random external address, i.e. 1.0.0.7 (I mean a random address from the 
> pool), and preserves it for all new connections until the end of the session, 
> the second client 10.0.0.5 gets the next random address, etc.?
> 
> Thanks in advance.
> From: Klement Sekera -X (ksekera - PANTHEON TECH SRO at Cisco) 
> 
> Sent: 16 February 2021 14:01
> To: Юрий Иванов 
> Cc: Filip Varga -X (fivarga - PANTHEON TECH SRO at Cisco) 
> ; vpp-dev@lists.fd.io 
> Subject: Re: [vpp-dev] NAT44 how to control external address assignment from 
> pool?
>  
> Hi, let me chime in and explain a bit more.
> 
> DET NAT also known as CGNAT (as in carrier-grade NAT) is designed to conform 
> to LI (lawful intercept) requirements.
> 
> So, if you, as an internet provider are required by law to be able to provide 
> a user identification based on outside address + port made by that user, you 
> have two options:
> 
> 1.) log every connection and keep the logs
> 2.) make it deterministic, so you can always calculate inside address from 
> outside address + port
> 
> DET NAT is #2 and thus it cannot be random.
> 
> For random allocation, you can use either EI or ED NAT. But these of course 
> don’t provide any way to calculate user address from outside address.
> 
> What is your use case?
> 
> Thanks,
> Klement
> 
> > On 10 Feb 2021, at 19:14, Юрий Иванов  wrote:
> > 
> > Hi Filip,
> > 
> > Thanks, I understand: the det44 plugin works separately, but we have to 
> > manually manage the mapping of the local network to external IPs.
> > 
> > But in case we try to use the standard nat configuration with pools:
> > vpp# nat44 forwarding enable
> > vpp# set int nat44 in GigabitEthernet0/5/0 out GigabitEthernet0/4/0
> > vpp# nat44 add address 1.0.0.3-1.0.0.100
> > 
> > All clients will have the external address 1.0.0.100 until all its ports are used 
> > up, the next will get 1.0.0.99 until all its ports are used up, etc.
> > This behaviour leads to showing Google reCAPTCHA on most resources (i.e. 
> > google.com search) because too many users are hiding behind the 
> > same IP while other addresses in the pool are not used at all.
> > I can afford to use a pool with 255 addresses (a /24 network), but in this case 
> > most of the addresses will not be used at all.
> > 
> > I'm interested in how to tune VPP to select a random address for every new 
> > client and keep that same source/destination address for each new 
> > connection. This should help use the address pool more evenly.
> > The same behavior as nftables does with "ip saddr 10.0.0.0/8 oif "vlan10" 
> > snat to 1.0.0.3-1.0.0.100 persistent".
> > 
> > Thanks in advance.
> > From: Filip Varga -X (fivarga - PANTHEON TECH SRO at Cisco) 
> > 
> > Sent: 10 February 2021 14:25
> > To: Юрий Иванов ; vpp-dev@lists.fd.io 
> > 
> > Subject: RE: [vpp-dev] NAT44 how to control external address assignment from 
> > pool?
> >  
> > Hello,
> >  
> > For clarification, I will explain how the NAT code is divided.
> >  
> > At this point NAT functionality is divided into multiple sub-plugins because 
> > of its previous complexity 

[vpp-dev] upcoming change in nat44-ed static mapping API

2021-09-28 Thread Klement Sekera via lists.fd.io
Hi Matt/vpp-dev,

I’m reaching out after discussion with Andrew regarding an upcoming change in 
behaviour of an API. Currently, when nat44_ed_add_del_static_mapping[_v2] is 
called, the supplied u8 protocol value is taken and silently converted into 
NAT_PROTOCOL_(TCP|UDP|ICMP|OTHER). Part of upcoming change is to drop this 
internal enum and treat static mappings (SM) same way as dynamic mappings - to 
store IANA IP protocol value in SM instead of said nat_protocol_t. This however 
causes a behaviour change.

For TCP, UDP and ICMP there is NO change. For everything else the change is:

Old: 

nat44_ed_add_del_static_mapping (is_add=1, protocol=101) is treated as 
"nat44_ed_add_del_static_mapping (protocol=other)", so this creates a 
catch-almost-all SM, which then translates also all other protocols except tcp, 
udp and icmp. Calling nat44_ed_add_del_static_mapping (is_add=1, protocol=102) 
would then return VNET_API_ERROR_VALUE_EXIST (because it’s internally 
translated to the same “other” thingie).

New:

protocol is stored in (and matched by) static mapping exactly as it’s supplied. 
To get old behaviour a user would have to create 252 static mappings (with all 
protocol values except tcp, udp, icmp). New feature with this is ability to 
translate only some of non-tcp/udp/icmp protocols as it isn’t a 
catch-almost-all logic anymore.

The question is whether there is a real need to follow the usual routine of deprecating 
the old API and keeping its behaviour (which would now internally add/del 252 mappings) 
while introducing a new API with the new behaviour, or whether it’s okay to change it 
under the hood, keeping the APIs intact. The APIs already accept an IP protocol 
value, so there is no change required in the API signature.

Would you be so kind to share your thoughts on this topic?

Thanks,
Klement



Re: [vpp-dev] Struggling with low vector size ( < 5-10) seeking expert advice

2021-11-19 Thread Klement Sekera via lists.fd.io
Hey,

Efficiency increases (a lot) with vector size, so pondering packet rates at low 
sizes doesn’t make much sense. Looking at vector size, you can tell how loaded 
VPP is. At your vector size, VPP is slacking off and efficiency is thus low, 
but it doesn’t matter, because you are not processing enough packets to care 
(on your particular box). To see real limits, you could increase the packet 
rate until you start seeing drops, then ease of bit by bit until it’s stable 
with no drops. You will see vector size being close to or at 255. Note that at 
these rates, any preemption from OS or other apps will cause packet drops, so 
it’s best to dedicate cores to VPP exclusively and tune the system for minimum 
latency. Function of vector size/packet rate is indeed non-linear and I don’t 
think there is a simple formula to calculate it.

Of course don’t forget to do (while under load):

clear run

show run

to see real numbers.

HTH,
Klement

> On 19 Nov 2021, at 02:10, PRANAB DAS  wrote:
> 
> Hello all,
> 
> IMHO primary motivation of using VPP as the name suggests is Vector Packet 
> Processing with the belief that it will maximize i-cache and d-cache hit and 
> thus will result in much higher pps and throughput than scalar processing. 
> 
> Somehow I am finding the application we are running in VPP has very low 
> vector size < 10 with 40% cpu utilization. Does it mean we are not getting 
> the benefit of vector processing and we are doing something wrong? In what 
> conditions, will VPP function poorly meaning instead of vector processing it 
> ends up doing 2 or 3 packets per graph node - thus  as scalar packet 
> processor? 
> 
> Basically the datapath application can be considered as  a service chain of 3 
> different application datapath services each having a different performance 
> profile. 
> 
> service A --->service B > serviceC 
> 
> service A is the fastest, most efficient can do 4Mpps
> service B is less can do 2Mpps
> service C can do 500Kpps
> 
> service A and service B are running in VPP in different worker threads and 
> service C (DPDK based) running as another process. 
> 
> We scale out service C (add more cpu cores 4 time service B) and service B 
> has twice the number of VPP worker threads than service A
> 
> Assuming we have 8 cores for service C, 2 VPP workers for service B and 1 for 
> service A - what kind of vector size do we expect when we try 4Mpps, 2Mpps or 
> 1Mpps (assuming NIC BW does not matter).
> 
> I am arguing that in VPP performance is not linear, rather it depends on 
> vector size. Is that true?
> Is there any documentation on how to size or tune application 
> performance/vector size on VPP?
> 
> I really appreciate your help. 
> 
> Thank you
> 
> _ Pranab K Das
> 
> 
> 
> 
>   
> 
> 
> 





Re: [vpp-dev] Struggling with low vector size ( < 5-10) seeking expert advice

2021-11-22 Thread Klement Sekera via lists.fd.io
I’d say that that’s quite pointless. If your vector size is small it means VPP 
has a lot of headroom in your environment. Maybe take a look at config options 
- there is one which catches my eye ‘unix {poll-sleep-usec}’ which might help 
you get what you want at the cost of probably not being able to handle higher 
packet rates.

You might also need to use CLI ’set interface rx-mode  polling’ to 
force polling all the time, as I think default is adaptive and at low rates it 
doesn’t poll. However I think that by doing so, you’ll increase CPU usage :)

Klement
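
For reference, the two knobs mentioned above would look roughly like this (the
values and the interface name are just examples, not a recommendation):

# startup.conf
unix {
  poll-sleep-usec 100    # main/worker loops sleep between polls
}

# CLI
vpp# set interface rx-mode GigabitEthernet0/5/0 polling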

> On 22 Nov 2021, at 05:21, PRANAB DAS  wrote:
> 
> Thanks Klement. I believe the problem is the slow app (service C) with very 
> low pps bringing down pps in VPP.
>  I am wondering if there is a way to tune VPP dispatcher timing so that it 
> will dispatch packets at a slower packet rate/pps which hopefully could 
> increase the vector size.
> 
> Thank you
> 
> - Pranab K Das  
> 
> On Fri, Nov 19, 2021 at 5:07 AM Klement Sekera -X (ksekera - PANTHEON TECH 
> SRO at Cisco)  wrote:
> Hey,
> 
> Efficiency increases (a lot) with vector size, so pondering packet rates at 
> low sizes doesn’t make much sense. Looking at vector size, you can tell how 
> loaded VPP is. At your vector size, VPP is slacking off and efficiency is 
> thus low, but it doesn’t matter, because you are not processing enough 
> packets to care (on your particular box). To see real limits, you could 
> increase the packet rate until you start seeing drops, then ease of bit by 
> bit until it’s stable with no drops. You will see vector size being close to 
> or at 255. Note that at these rates, any preemption from OS or other apps 
> will cause packet drops, so it’s best to dedicate cores to VPP exclusively 
> and tune the system for minimum latency. Function of vector size/packet rate 
> is indeed non-linear and I don’t think there is a simple formula to calculate 
> it.
> 
> Of course don’t forget to do (while under load):
> 
> clear run
> 
> show run
> 
> to see real numbers.
> 
> HTH,
> Klement
> 
> > On 19 Nov 2021, at 02:10, PRANAB DAS  wrote:
> > 
> > Hello all,
> > 
> > IMHO primary motivation of using VPP as the name suggests is Vector Packet 
> > Processing with the belief that it will maximize i-cache and d-cache hit 
> > and thus will result in much higher pps and throughput than scalar 
> > processing. 
> > 
> > Somehow I am finding the application we are running in VPP has very low 
> > vector size < 10 with 40% cpu utilization. Does it mean we are not getting 
> > the benefit of vector processing and we are doing something wrong? In what 
> > conditions, will VPP function poorly meaning instead of vector processing 
> > it ends up doing 2 or 3 packets per graph node - thus  as scalar packet 
> > processor? 
> > 
> > Basically the datapath application can be considered as  a service chain of 
> > 3 different application datapath services each having a different 
> > performance profile. 
> > 
> > service A --->service B > serviceC 
> > 
> > service A is the fastest, most efficient can do 4Mpps
> > service B is less can do 2Mpps
> > service C can do 500Kpps
> > 
> > service A and service B are running in VPP in different worker threads and 
> > service C (DPDK based) running as another process. 
> > 
> > We scale out service C (add more cpu cores 4 time service B) and service B 
> > has twice the number of VPP worker threads than service A
> > 
> > Assuming we have 8 cores for service C, 2 VPP workers for service B and 1 
> > for service A - what kind of vector size do we expect when we try 4Mpps, 
> > 2Mpps or 1Mpps (assuming NIC BW does not matter).
> > 
> > I am arguing that in VPP performance is not linear, rather it depends on 
> > vector size. Is that true?
> > Is there any documentation on how to size or tune application 
> > performance/vector size on VPP?
> > 
> > I really appreciate your help. 
> > 
> > Thank you
> > 
> > _ Pranab K Das
> > 
> > 
> > 
> > 
> >   
> > 
> > 
> > 
> 
> 
> 




