Hi Sergio,
As I mentioned, transport mode is working now.
Next I tried tunnel mode.
Here I can see the packet being decrypted successfully, but afterwards the inner
packet gets dropped.
The outer IPsec packet is 172.28.128.4 -> 172.28.128.5.
The inner packet is 1.1.1.1 -> 2.2.2.2.
I have added 2.2.2.2 on same
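A common cause of the decrypted inner packet being dropped is a missing FIB entry for the inner destination. A minimal sketch of giving 2.2.2.2 a route; the interface name and addressing here are assumptions for illustration, not taken from the thread:

```
# Sketch only: interface name and subnet are assumptions.
vppctl set interface ip address GigabitEthernet0/8/0 2.2.2.1/24
vppctl set interface state GigabitEthernet0/8/0 up
vppctl ip route add 2.2.2.0/24 via GigabitEthernet0/8/0
```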
Thanks Sergio,
The DPDK-based basic IPsec tunnel worked with a multi-core config.
cpu {
  main-core 0
  corelist-workers 1
  # skip-cores 4
  workers 1
}
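To confirm that the worker actually came up with a config like the one above, something along these lines can be used (the exact output format varies by VPP release):

```
# List the main thread and workers, with their assigned cores:
vppctl show threads
```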
Now that basic DPDK IPsec is working, I will try to dig into more detail.
One query I posted in earlier threads possibly got
Jacek,
It's also been on my list for a while to add a better bulk add for MAP domains
/ rules.
Any idea of the scale you are looking at here?
Best regards,
Ole
> On 5 Sep 2017, at 15:07, Jacek Siuda wrote:
>
> Hi,
>
> I'm conducting a tunnel test using VPP (vnet) map
Marek,
What is the uid/gid of /dev/shm/vpe-api ?
Is the user a member of the vpp group?
Does your VPP workspace include the patch c900ccc34 "Enabled gid vpp in
startup.conf to allow non-root vppctl access" ?
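The checks above can be sketched as follows, assuming a shell on the host running VPP:

```
# Who owns the API segment, and with what permissions?
ls -l /dev/shm/vpe-api
# Is the current user a member of the 'vpp' group?
id -nG | grep -w vpp
```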
Thanks,
-daw-
On 09/05/2017 06:08 AM, Marek Gradzki -X (mgradzki - PANTHEON
Thanks for the update, John. I'll pass this along to our test team. Not sure when
we can schedule a retest, but when we do, I'll provide our results.
Thanks again,
Billy
On Tue, Sep 5, 2017 at 10:10 AM, John Lo (loj) wrote:
> Hi Billy,
>
>
>
> I submitted fixes for VPP-963, now
There are a few different ways to set cores/workers, best explained in
the following link:
https://wiki.fd.io/view/VPP/Using_VPP_In_A_Multi-thread_Model
Thanks,
Sergio
On 05/09/2017 15:10, Mukesh Yadav (mukyadav) wrote:
Thanks Sergio,
I will for sure try the latest clone with the fix.
Besides, what is the configuration to test the same with a worker core?
It will be helpful for me in the future.
Thanks
Mukesh
On 05/09/17, 6:22 PM, "Sergio Gonzalez Monroy"
wrote:
Hi Mukesh,
I was able
Dear Jacek,
Use of the clib memory allocator is mainly historical. It is elegant in a couple
of ways, including built-in leak-finding, but it has been known to backfire
in terms of performance. Individual mheaps are limited to 4 GB in a [typical]
32-bit vector length image.
Note that the
Hi Mukesh,
I was able to find the bug. It was not directly related to transport
mode, but to the setup when using a single core (the master core) without
workers ( https://gerrit.fd.io/r/8302 ).
You can either apply the change or set up VPP to use workers (at the
moment you are running with single
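A minimal sketch of the second option, enabling one worker in startup.conf; the core numbers are assumptions to adapt to the actual CPU layout:

```
cpu {
  main-core 0
  corelist-workers 1   # assumption: core 1 is free on this machine
}
```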
Hi,
I am having problems running the CLI against a named vpp instance (g809bc74):
sudo vpp api-segment { prefix vpp0 }
sudo vppctl -p vpp0 show int
clib_socket_init: connect: Connection refused
But ps shows the vpp process is running.
It worked with 17.07.
Is it no longer supported, or do I need some
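One avenue worth checking: around this time vppctl moved to a socket-based transport, so the instance may need an explicit CLI socket. The socket path and options below are assumptions to verify against the release's startup options, not a confirmed fix:

```
# Assumed socket path and -s flag; verify against your build.
sudo vpp unix { cli-listen /run/vpp/cli-vpp0.sock } api-segment { prefix vpp0 }
sudo vppctl -s /run/vpp/cli-vpp0.sock show int
```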
Hello,
Can you help me with the query below, related to 1G huge page usage in VPP?
Regards,
Balaji
On Thu, Aug 31, 2017 at 5:19 PM, Balaji Kn wrote:
> Hello,
>
> I am using *v17.07*. I am trying to configure the huge page size as 1 GB and
> reserve 16 huge pages for VPP.
> I went
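Reserving 1 GB pages generally has to happen at boot time, since they cannot reliably be allocated from a running system. A sketch of the pieces involved; the values are illustrative, not taken from the thread:

```
# Kernel command line (e.g. via GRUB) to reserve 16 x 1 GB pages at boot:
#   default_hugepagesz=1G hugepagesz=1G hugepages=16
#
# Then ask DPDK for memory backed by those pages in startup.conf:
dpdk {
  socket-mem 16384   # MB on socket 0; adjust per NUMA layout
}
```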
Hi Sergio,
I have run 'make install-dep'.
The nasm version is 2.10.09.
Thanks,
xyxue
From: Sergio Gonzalez Monroy
Date: 2017-09-05 16:24
To: 薛欣颖; vpp-dev
Subject: Re: [vpp-dev] Compile error
Hi,
Have you run 'make install-dep' ?
Which nasm version do you have in your system?
Thanks,
Hi,
Have you run 'make install-dep' ?
Which nasm version do you have in your system?
Thanks,
Sergio
On 04/09/2017 11:59, 薛欣颖 wrote:
Hi,
I got the code via: git clone https://gerrit.fd.io/r/vpp.
When I run 'make dpdk-install-dev', the error information shown below appears:
Building IPSec-MB 0.46 library
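The IPsec-MB library build depends on nasm, and 2.10.x is fairly old. Checking the installed version is the first step; whether a newer nasm fixes this particular error is an assumption to verify, and the version below is illustrative:

```
# Check the installed assembler version first:
nasm -v
# If it is too old for the IPsec-MB build, a newer nasm can be
# built from source (version chosen here is illustrative):
wget https://www.nasm.us/pub/nasm/releasebuilds/2.13.03/nasm-2.13.03.tar.xz
tar xf nasm-2.13.03.tar.xz
cd nasm-2.13.03 && ./configure && make && sudo make install
```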