Hello, I'm working on the SPDK library + VPP, because some reports said that VPP
reduces network overhead. When I test with VPP (mlx5 poll mode driver, MTU
9000) and a null device with SPDK, 4k performance with VPP is much better
than the default (kTCP). But 128k write performance with VPP
Hi all,
I have two interfaces (wan0 and wan1) with two different IP addresses.
vpp config:
set int state wan0 up
set int state wan1 up
set int state lan1 up
set int ip address wan0 10.100.1.5/29
ip route add 0.0.0.0/0 via 10.100.1.8
loopback create
set int l2 bridge loop0 1 bvi
set int ip
The MAC address ad:ef:ad:ef:de:ad is a multicast address. That's why packets
with that destination MAC are flooded in the bridge. Try assigning a unicast
MAC address to gtpu_tunnel1.
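For reference, whether a MAC is multicast is determined by the I/G bit, the
least significant bit of the first octet. A small stand-alone check
(illustrative only, not VPP code) shows why ad:ef:ad:ef:de:ad counts as
multicast:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* An Ethernet destination is a group (multicast) address when the
 * least significant bit of its first octet is set. */
static bool
mac_is_multicast (const uint8_t mac[6])
{
  return (mac[0] & 0x01) != 0;
}

int
main (void)
{
  /* 0xad = 1010 1101b: the I/G bit is set, so the bridge floods it */
  uint8_t mac[6] = { 0xad, 0xef, 0xad, 0xef, 0xde, 0xad };
  printf ("multicast: %s\n", mac_is_multicast (mac) ? "yes" : "no");
  return 0;
}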
Regards,
John
From: vpp-dev@lists.fd.io On Behalf Of sunny cupertino
Sent: Friday, February 14, 2020 9:34 PM
To:
Hi All,
I need your help with the L2 bridge in VPP. I would like to know whether we can
selectively forward L2 packets
to different interfaces on a bridge based on the Ethernet address.
For example, there is one interface and two GTPU tunnels on an L2 bridge.
I have added an entry into the L2 FIB table telling
Hello VPP developers,
We have a problem with VPP used for NAT on Ubuntu 18.04 servers
equipped with Mellanox ConnectX-5 network cards (ConnectX-5 EN network
interface card; 100GbE dual-port QSFP28; PCIe3.0 x16; tall bracket;
ROHS R6).
VPP is dropping packets in the ip4-input node due to "ip4
Hi Elias,
On 14/02/2020 13:35, "Elias Rudberg" wrote:
Hi Neale and Dave,
Thanks for your answers!
I was able to make it work using multicast as Neale suggested.
Here is roughly what I did to make it work using multicast instead of
unicast:
On the sending
Hi All,
FD.io CSIT-2001 report has been published on FD.io docs site:
https://docs.fd.io/csit/rls2001/report/
Many thanks to All in CSIT, VPP and wider FD.io community who
contributed and worked hard to make CSIT-2001 happen!
Below are three summaries:
- Intel Xeon 2n-skx, 3n-skx and 2n-clx
Coverity run failed today.
Current number of outstanding issues is 1
Newly detected: 0
Eliminated: 1
More details can be found at
https://scan.coverity.com/projects/fd-io-vpp/view_defects
Makes sense to me, please submit a patch to Gerrit…
—
Damjan
> On 14 Feb 2020, at 04:18, Lijian Zhang wrote:
>
> Hi,
> VPP crashes on the CSIT Taishan server because the function
> vlib_get_thread_core_numa (unsigned cpu_id) does not get the NUMA node
> correctly from cpu_id on that server.
>
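For anyone digging into this, one common way to find a CPU's NUMA node on
Linux is to probe sysfs and fall back to node 0 when the entries are missing.
The sketch below is illustrative only and is not VPP's actual
vlib_get_thread_core_numa implementation:

#include <stdio.h>
#include <sys/stat.h>

/* Probe /sys/devices/system/node/node<N>/cpu<id>; the node whose directory
 * contains the CPU is its NUMA node.  On machines where these sysfs entries
 * are missing (as reported for the Taishan server), nothing matches, so
 * return node 0 instead of failing. */
static int
cpu_numa_node (unsigned cpu_id)
{
  char path[128];
  struct stat st;
  int node;

  for (node = 0; node < 64; node++)   /* 64 is an arbitrary upper bound */
    {
      snprintf (path, sizeof (path),
                "/sys/devices/system/node/node%d/cpu%u", node, cpu_id);
      if (stat (path, &st) == 0)
        return node;
    }
  return 0; /* sysfs information unavailable: assume node 0 */
}

int
main (void)
{
  printf ("cpu 0 is on numa node %d\n", cpu_numa_node (0));
  return 0;
}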
Hi Neale and Dave,
Thanks for your answers!
I was able to make it work using multicast as Neale suggested.
Here is roughly what I did to make it work using multicast instead of
unicast:
On the sending side, to make it send multicast packets:
adj_index_t adj_index_for_multicast =
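(The code excerpt is cut off above.) As a rough sketch only, not necessarily
what was done in the original mail, a multicast adjacency for an IPv4
interface can be obtained with the adj_mcast API and recorded in the buffer
metadata; sw_if_index is assumed to be the interface the packets should
leave on:

#include <vnet/vnet.h>
#include <vnet/adj/adj_mcast.h>

/* Illustrative only: get (or create) the IPv4 multicast adjacency for the
 * given interface and store it in the buffer metadata so the multicast
 * rewrite path can use it on transmit. */
static void
use_mcast_adj (vlib_buffer_t * b, u32 sw_if_index)
{
  adj_index_t ai;

  ai = adj_mcast_add_or_lock (FIB_PROTOCOL_IP4, VNET_LINK_IP4, sw_if_index);
  vnet_buffer (b)->ip.adj_index[VLIB_TX] = ai;
}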
you need to set it on both sides:
For VPP:
$ ccmake build-root/build-vpp-native/vpp
and change PRE_DATA_SIZE to 256
or modify the following line in
src/vlib/CMakeLists.txt:
set(PRE_DATA_SIZE 128 CACHE STRING "Buffer headroom size.")
For DPDK you should be able to build a custom ext deps package:
$
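(The command is cut off above.) If it helps, a compile-time guard can catch a
headroom mismatch early. The encapsulation header below is hypothetical and
only illustrates the check against the configured headroom:

#include <vlib/vlib.h>

/* Hypothetical encapsulation header, standing in for whatever is being
 * pushed in front of the packet; 160 bytes here to exceed the default
 * 128-byte headroom. */
typedef struct
{
  u8 bytes[160];
} my_encap_header_t;

/* Fails the build while PRE_DATA_SIZE is 128 and passes once VPP has been
 * rebuilt with PRE_DATA_SIZE set to 256. */
_Static_assert (sizeof (my_encap_header_t) <= VLIB_BUFFER_PRE_DATA_SIZE,
                "encapsulation header does not fit in buffer headroom");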
Hi folks,
In FD.io VPP the default VLIB_BUFFER_PRE_DATA_SIZE (data headroom) is
defined as 128, and in DPDK it is also defined as 128. Since we have an
encapsulation which goes beyond 128 bytes, the packet descriptor block is
getting corrupted in the vlib_buffer_t structure as defined in
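The corruption described above is consistent with prepending more than the
configured headroom. A small sketch of the usual prepend pattern and the
bound it must respect follows; the function and parameter names are
illustrative, not taken from the original code:

#include <vlib/vlib.h>

/* Illustrative helper: prepend encap_len bytes of encapsulation by moving
 * current_data back into the pre-data area.  If encap_len exceeds the
 * available headroom, the write runs off the front of the buffer and
 * corrupts the neighbouring vlib_buffer_t metadata. */
static void *
prepend_encap (vlib_buffer_t * b, word encap_len)
{
  /* headroom available = current_data offset plus the pre-data area */
  if (encap_len > b->current_data + VLIB_BUFFER_PRE_DATA_SIZE)
    return 0; /* does not fit: a larger PRE_DATA_SIZE is needed */
  vlib_buffer_advance (b, -encap_len);
  return vlib_buffer_get_current (b);
}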