Re: [vpp-dev] Fastest way to connect application in user space to VPP #vpp

2022-04-06 Thread longtrb
On Thu, Apr 7, 2022 at 05:04 AM, Mohsin Kazmi wrote:

> 
> trace add virtio-input 10

Hi Mohsin.

This is the trace log:

--- Start of thread 1 vpp_wk_0 ---

Packet 1

00:25:26:931409: virtio-input

virtio: hw_if_index 1 next-index 4 vring 0 len 42

hdr: flags 0x00 gso_type 0x00 hdr_len 0 gso_size 0 csum_start 0 csum_offset 0 
num_buffers 1

00:25:26:931412: ethernet-input

ARP: 0a:00:27:00:00:14 -> ff:ff:ff:ff:ff:ff

00:25:26:931415: l2-input

l2-input: sw_if_index 1 dst ff:ff:ff:ff:ff:ff src 0a:00:27:00:00:14 [l2-output ]

00:25:26:931418: l2-output

l2-output: sw_if_index 2 dst ff:ff:ff:ff:ff:ff src 0a:00:27:00:00:14 data 08 06 
00 01 08 00 06 04 00 01 0a 00

00:25:26:931420: tap0-output

tap0

ARP: 0a:00:27:00:00:14 -> ff:ff:ff:ff:ff:ff

request, type ethernet/IP4, address size 6/4

0a:00:27:00:00:14/192.168.56.1 -> 00:00:00:00:00:00/192.168.56.101

00:25:26:931422: tap0-tx

buffer 0x963ba: current data 0, length 42, buffer-pool 0, ref-count 1, 
totlen-nifb 0, trace handle 0x100

l2-hdr-offset 0 l3-hdr-offset 14

hdr-sz 0 l2-hdr-offset 0 l3-hdr-offset 14 l4-hdr-offset 0 l4-hdr-sz 0

ARP: 0a:00:27:00:00:14 -> ff:ff:ff:ff:ff:ff

request, type ethernet/IP4, address size 6/4

0a:00:27:00:00:14/192.168.56.1 -> 00:00:00:00:00:00/192.168.56.101

--- Start of thread 2 vpp_wk_1 ---

Packet 1

00:25:26:931584: virtio-input

virtio: hw_if_index 2 next-index 4 vring 0 len 42

hdr: flags 0x00 gso_type 0x00 hdr_len 0 gso_size 0 csum_start 0 csum_offset 0 
num_buffers 1

00:25:26:931587: ethernet-input

ARP: 02:fe:4d:3e:ec:be -> 0a:00:27:00:00:14

00:25:26:931588: l2-input

l2-input: sw_if_index 2 dst 0a:00:27:00:00:14 src 02:fe:4d:3e:ec:be [l2-output ]

00:25:26:931589: l2-output

l2-output: sw_if_index 1 dst 0a:00:27:00:00:14 src 02:fe:4d:3e:ec:be data 08 06 
00 01 08 00 06 04 00 02 02 fe

00:25:26:931590: virtio-0/0/8/0-output

virtio-0/0/8/0

ARP: 02:fe:4d:3e:ec:be -> 0a:00:27:00:00:14

reply, type ethernet/IP4, address size 6/4

02:fe:4d:3e:ec:be/192.168.56.101 -> 0a:00:27:00:00:14/192.168.56.1

00:25:26:931591: virtio-0/0/8/0-tx

buffer 0x93f04: current data 0, length 42, buffer-pool 0, ref-count 1, 
totlen-nifb 0, trace handle 0x200

l2-hdr-offset 0 l3-hdr-offset 14

hdr-sz 0 l2-hdr-offset 0 l3-hdr-offset 14 l4-hdr-offset 0 l4-hdr-sz 0

ARP: 02:fe:4d:3e:ec:be -> 0a:00:27:00:00:14

reply, type ethernet/IP4, address size 6/4

02:fe:4d:3e:ec:be/192.168.56.101 -> 0a:00:27:00:00:14/192.168.56.1

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#21219): https://lists.fd.io/g/vpp-dev/message/21219
Mute This Topic: https://lists.fd.io/mt/90135014/21656
Mute #vpp:https://lists.fd.io/g/vpp-dev/mutehashtag/vpp
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Fastest way to connect application in user space to VPP #vpp

2022-04-06 Thread Mohsin Kazmi via lists.fd.io
You can check VPP error counters using: sh errors
You can also trace the packet within VPP:
Please add ‘trace add virtio-input 10’ before pinging through the tap interface.
Once you ping the desired IP address, please do ‘sh trace’.
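
For reference, a minimal sketch of that sequence from the host shell (assuming
the tap is tap0 and the ping target is 192.168.56.1, as elsewhere in this
thread) could be:

vppctl clear trace
vppctl trace add virtio-input 10
ping -c 3 192.168.56.1    # from the Linux side, through tap0
vppctl show trace
vppctl show errors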

-br
Mohsin

From:  on behalf of "long...@gmail.com" 
Date: Wednesday, April 6, 2022 at 8:59 AM
To: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] Fastest way to connect application in user space to VPP 
#vpp


[Edited Message Follows]
Hi All,

I created a virtio interface and used l2 xconnect to connect it to a tap interface:
vppctl create int virtio 0000:00:08.0 gso-enabled
vppctl create tap id 0 host-ip4-addr 192.168.56.101/24 gso

vppctl set interface l2 xconnect tap0 virtio-0/0/8/0
vppctl set interface l2 xconnect virtio-0/0/8/0 tap0

vppctl set interface state virtio-0/0/8/0 up
vppctl set interface state tap0 up

But currently I cannot ping 192.168.56.1 (an external server gateway)
through the tap interface.
Can you help me correct this?


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#21218): https://lists.fd.io/g/vpp-dev/message/21218
Mute This Topic: https://lists.fd.io/mt/90135014/21656
Mute #vpp:https://lists.fd.io/g/vpp-dev/mutehashtag/vpp
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] LDP in comparison with OpenOnload

2022-04-06 Thread Florin Coras
Hi Kunal, 

Similar goal but it’s only been tested against a limited number of 
applications. 

Also, as per my previous reply, ldp accepts a mix of linux and vcl fd/sessions 
and to that end it sets aside a number of fds for linux. Consequently, vcl fds 
will start from 1 << LDP_ENV_SID_BIT and that might be a problem for 
applications that assume their fds start at 0 and end at a low value. That’s 
typically a problem with those that use select as opposed to epoll. 

Regards,
Florin

> On Apr 6, 2022, at 1:20 PM, Kunal Parikh  wrote:
> 
> Hi,
> 
> I want to gauge if the plan for LDP is to be similar to OpenOnload 
> (https://github.com/Xilinx-CNS/onload )
> 
> We use OpenOnload with SolarFlare cards with great success.
> 
> It doesn't require us to change our code while getting the benefits of kernel 
> bypass (and hardware acceleration from SolarFlare cards).
> 
> Thanks,
> 
> Kunal 
> 
> 


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#21217): https://lists.fd.io/g/vpp-dev/message/21217
Mute This Topic: https://lists.fd.io/mt/90298662/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VCL & netperf #hoststack

2022-04-06 Thread Florin Coras
I’ve never tried netperf so unfortunately I don’t even know if it works. From 
the server logs, it looks like it hit some sort of error on accept. 

Note that we set aside the first 1 << LDP_ENV_SID_BIT (env variable) fds for 
linux. By default that value is 5, which is the min we accept, i.e., 32 fds. 
That could be a problem for netperf, given that in the logs it’s saying 
“setting 32 in fdset”. 

Another option to check latency would be to use wrk/ab or similar tools with a 
web server that’s known to work with ldp, like nginx. 
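
As a rough sketch of that approach (the preload and config paths are assumptions,
and the nginx/wrk setup details are omitted), latency could be sampled with wrk
against an LDP-preloaded nginx:

# server: nginx running over LDP/VCL
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libvcl_ldpreload.so VCL_CONFIG=/etc/vpp/vcl.conf nginx -g 'daemon off;'
# client: --latency prints a latency distribution
wrk -t1 -c1 -d30s --latency http://10.21.120.48/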

Regards,
Florin

> On Apr 6, 2022, at 1:17 PM, Kunal Parikh  wrote:
> 
> I am using LD_PRELOAD
> 
> Is there a particular example of netperf flags you can recommend for 
> measuring per packet latency? 
> 
> 


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#21216): https://lists.fd.io/g/vpp-dev/message/21216
Mute This Topic: https://lists.fd.io/mt/90297978/21656
Mute #hoststack:https://lists.fd.io/g/vpp-dev/mutehashtag/hoststack
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] LDP in comparison with OpenOnload

2022-04-06 Thread Kunal Parikh
Hi,

I want to gauge if the plan for LDP is to be similar to OpenOnload ( 
https://github.com/Xilinx-CNS/onload )

We use OpenOnload with SolarFlare cards with great success.

It doesn't require us to change our code while getting the benefits of kernel 
bypass (and hardware acceleration from SolarFlare cards).

Thanks,

Kunal

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#21215): https://lists.fd.io/g/vpp-dev/message/21215
Mute This Topic: https://lists.fd.io/mt/90298662/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VCL & netperf #hoststack

2022-04-06 Thread Kunal Parikh
I am using LD_PRELOAD

Is there a particular example of netperf flags you can recommend for measuring 
per packet latency?

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#21214): https://lists.fd.io/g/vpp-dev/message/21214
Mute This Topic: https://lists.fd.io/mt/90297978/21656
Mute #hoststack:https://lists.fd.io/g/vpp-dev/mutehashtag/hoststack
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VCL & netperf #hoststack

2022-04-06 Thread Florin Coras
Hi Kunal, 

How are you attaching netperf to VCL/VPP? Unless you modify it to use VCL, your
only option is to try LD_PRELOAD (see the iperf example here [1]).

Note, however, that LDP most probably does not support all the socket options
netperf might want.

Regards,
Florin

[1] https://wiki.fd.io/view/VPP/HostStack/LDP/iperf
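
For comparison, the LD_PRELOAD pattern from [1] looks roughly like this (the
paths and config file are assumptions; see the wiki page for the exact setup):

# server
sudo LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libvcl_ldpreload.so VCL_CONFIG=/etc/vpp/vcl.conf iperf3 -s
# client
sudo LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libvcl_ldpreload.so VCL_CONFIG=/etc/vpp/vcl.conf iperf3 -c 10.21.120.48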

> On Apr 6, 2022, at 12:50 PM, Kunal Parikh  wrote:
> 
> Hi Folks,
> 
> I want visualize the latency profile of VCL HostStack.
> 
> I am using netperf and am receiving this error on the server:
> 
> Issue receiving request on control connection. Errno 19 (No such device)
> 
> Detailed logs attached. 
> 
> 


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#21213): https://lists.fd.io/g/vpp-dev/message/21213
Mute This Topic: https://lists.fd.io/mt/90297978/21656
Mute #hoststack:https://lists.fd.io/g/vpp-dev/mutehashtag/hoststack
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] VCL & netperf #hoststack

2022-04-06 Thread Kunal Parikh
Hi Folks,

I want to visualize the latency profile of the VCL HostStack.

I am using netperf and am receiving this error on the server:

Issue receiving request on control connection. Errno 19 (No such device)

Detailed logs attached.
root@ip-10-21-120-191:~# netperf -d -H 10.21.120.48 -l -1000 -t TCP_RR -w 10ms 
-b 1 -v 2 -- -O 
min_latency,mean_latency,max_latency,stddev_latency,transaction_rate
Packet rate control is not compiled in.
Packet burst size is not compiled in.
resolve_host called with host '10.21.120.48' port '(null)' family AF_UNSPEC
getaddrinfo returned the following for host '10.21.120.48' port '(null)'  
family AF_UNSPEC
cannonical name: '10.21.120.48'
flags: 22 family: AF_INET: socktype: SOCK_STREAM protocol IPPROTO_TCP 
addrlen 16
sa_family: AF_INET sadata: 0 0 10 21 120 48 0 0 0 0 0 0 0 0 0 0
scan_omni_args called with the following argument vector
netperf -d -H 10.21.120.48 -l -1000 -t TCP_RR -w 10ms -b 1 -v 2 -- -O 
min_latency,mean_latency,max_latency,stddev_latency,transaction_rate
print_omni_init entered
print_omni_init_list called
parse_output_selection is parsing the output selection 
'min_latency,mean_latency,max_latency,stddev_latency,transaction_rate'
Program name: netperf
Local send alignment: 8
Local recv alignment: 8
Remote send alignment: 8
Remote recv alignment: 8
Local socket priority: -1
Remote socket priority: -1
Local socket TOS: cs0
Remote socket TOS: cs0
Report local CPU 0
Report remote CPU 0
Verbosity: 2
Debug: 1
Port: 12865
Test name: TCP_RR
Test bytes: 1000 Test time: 0 Test trans: 1000
Host name: 10.21.120.48

installing catcher for all signals
Could not install signal catcher for sig 32, errno 22
Could not install signal catcher for sig 33, errno 22
Could not install signal catcher for sig 65, errno 22
remotehost is 10.21.120.48 and port 12865
resolve_host called with host '10.21.120.48' port '12865' family AF_INET
getaddrinfo returned the following for host '10.21.120.48' port '12865'  family 
AF_INET
cannonical name: '10.21.120.48'
flags: 22 family: AF_INET: socktype: SOCK_STREAM protocol IPPROTO_TCP 
addrlen 16
sa_family: AF_INET sadata: 50 65 10 21 120 48 0 0 0 0 0 0 0 0 144 250
resolve_host called with host '0.0.0.0' port '0' family AF_UNSPEC
getaddrinfo returned the following for host '0.0.0.0' port '0'  family AF_UNSPEC
cannonical name: '0.0.0.0'
flags: 22 family: AF_INET: socktype: SOCK_STREAM protocol IPPROTO_TCP 
addrlen 16
sa_family: AF_INET sadata: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 114 97
establish_control called with host '10.21.120.48' port '12865' remfam AF_INET
local '0.0.0.0' port '0' locfam AF_UNSPEC
bound control socket to 0.0.0.0 and 0
successful connection to remote netserver at 10.21.120.48 and 12865
recv_response: received a 0 byte response
recv_response: Connection reset by peer
root@ip-10-21-120-92:~# netserver -f -D -4 -v 9 -L 10.21.120.48 -d
check_if_inetd: enter
setup_listens: enter
create_listens: called with host '10.21.120.48' port '12865' family AF_INET(2)
getaddrinfo returned the following for host '10.21.120.48' port '12865'  family 
AF_INET
cannonical name: '(nil)'
flags: 1 family: AF_INET: socktype: SOCK_STREAM protocol IPPROTO_TCP 
addrlen 16
sa_family: AF_INET sadata: 50 65 10 21 120 48 0 0 0 0 0 0 0 0 48 118
Starting netserver with host '10.21.120.48' port '12865' and family AF_INET
accept_connections: enter
set_fdset: enter list 0x559276ed74d0 fd_set 0x7ffc7020f9e0
setting 32 in fdset
accept_connection: enter
process_requests: enter
Issue receiving request on control connection. Errno 19 (No such device)
set_fdset: enter list 0x559276ed74d0 fd_set 0x7ffc7020f9e0
setting 32 in fdset
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#21212): https://lists.fd.io/g/vpp-dev/message/21212
Mute This Topic: https://lists.fd.io/mt/90297978/21656
Mute #hoststack:https://lists.fd.io/g/vpp-dev/mutehashtag/hoststack
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: 35640 has a crashing regression, was Re: [vpp-dev] vpp-papi stats is broken

2022-04-06 Thread Pim van Pelt
Hoi,

Following reproduces the drop.c:77 assertion:

create loopback interface instance 0
set interface ip address loop0 10.0.0.1/32
set interface state GigabitEthernet3/0/1 up
set interface state loop0 up
set interface state loop0 down
set interface ip address del loop0 10.0.0.1/32
delete loopback interface intfc loop0
set interface state GigabitEthernet3/0/1 down
set interface state GigabitEthernet3/0/1 up
comment { the following crashes VPP }
set interface state GigabitEthernet3/0/1 down

I found that adding IPv6 addresses does not provoke the crash, while adding
IPv4 addresses to loop0 does provoke it.

groet,
Pim

On Wed, Apr 6, 2022 at 3:56 PM Pim van Pelt via lists.fd.io  wrote:

> Hoi,
>
> The crash I observed is now gone, thanks!
>
> VPP occasionally hits an ASSERT related to error counters at drop.c:77 --
> I'll try to see if I can get a reproduction, but it may take a while, and
> it may be transient.
>
>
> *11: /home/pim/src/vpp/src/vlib/drop.c:77 (counter_index) assertion `ci <
> n->n_errors' fails*
>
> Thread 14 "vpp_wk_11" received signal SIGABRT, Aborted.
> [Switching to Thread 0x7fff4bbfd700 (LWP 182685)]
> __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
> 50  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
> (gdb) bt
> #0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
> #1  0x76a5f859 in __GI_abort () at abort.c:79
> #2  0x004072e3 in os_panic () at
> /home/pim/src/vpp/src/vpp/vnet/main.c:413
> #3  0x76daea29 in debugger () at
> /home/pim/src/vpp/src/vppinfra/error.c:84
> #4  0x76dae7fa in _clib_error (how_to_die=2, function_name=0x0,
> line_number=0, fmt=0x76f9d19c "%s:%d (%s) assertion `%s' fails")
> at /home/pim/src/vpp/src/vppinfra/error.c:143
> #5  0x76f782d9 in counter_index (vm=0x7fffa09fb2c0, e=3416) at
> /home/pim/src/vpp/src/vlib/drop.c:77
> #6  0x76f77c57 in process_drop_punt (vm=0x7fffa09fb2c0,
> node=0x7fffa0c79b00, frame=0x7fff97168140,
> disposition=ERROR_DISPOSITION_DROP)
> at /home/pim/src/vpp/src/vlib/drop.c:224
> #7  0x76f77957 in error_drop_node_fn_hsw (vm=0x7fffa09fb2c0,
> node=0x7fffa0c79b00, frame=0x7fff97168140)
> at /home/pim/src/vpp/src/vlib/drop.c:248
> #8  0x76f0b10d in dispatch_node (vm=0x7fffa09fb2c0,
> node=0x7fffa0c79b00, type=VLIB_NODE_TYPE_INTERNAL,
> dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x7fff97168140,
> last_time_stamp=5318787653101516) at /home/pim/src/vpp/src/vlib/main.c:961
> #9  0x76f0bb60 in dispatch_pending_node (vm=0x7fffa09fb2c0,
> pending_frame_index=5, last_time_stamp=5318787653101516)
> at /home/pim/src/vpp/src/vlib/main.c:1120
> #10 0x76f06e0f in vlib_main_or_worker_loop (vm=0x7fffa09fb2c0,
> is_main=0) at /home/pim/src/vpp/src/vlib/main.c:1587
> #11 0x76f06537 in vlib_worker_loop (vm=0x7fffa09fb2c0) at
> /home/pim/src/vpp/src/vlib/main.c:1721
> #12 0x76f44ef4 in vlib_worker_thread_fn (arg=0x7fff98eabec0) at
> /home/pim/src/vpp/src/vlib/threads.c:1587
> #13 0x76f3ffe5 in vlib_worker_thread_bootstrap_fn
> (arg=0x7fff98eabec0) at /home/pim/src/vpp/src/vlib/threads.c:426
> #14 0x76e61609 in start_thread (arg=<optimized out>) at
> pthread_create.c:477
> #15 0x76b5c163 in clone () at
> ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
> (gdb) up 4
> #4  0x76dae7fa in _clib_error (how_to_die=2, function_name=0x0,
> line_number=0, fmt=0x76f9d19c "%s:%d (%s) assertion `%s' fails")
> at /home/pim/src/vpp/src/vppinfra/error.c:143
> 143 debugger ();
> (gdb) up
> #5  0x76f782d9 in counter_index (vm=0x7fffa09fb2c0, e=3416) at
> /home/pim/src/vpp/src/vlib/drop.c:77
> 77    ASSERT (ci < n->n_errors);
> (gdb) list
> 72
> 73    ni = vlib_error_get_node (&vm->node_main, e);
> 74    n = vlib_get_node (vm, ni);
> 75
> 76    ci = vlib_error_get_code (&vm->node_main, e);
> 77    ASSERT (ci < n->n_errors);
> 78
> 79    ci += n->error_heap_index;
> 80
> 81    return ci;
>
> On Wed, Apr 6, 2022 at 1:53 PM Damjan Marion (damarion) <
> damar...@cisco.com> wrote:
>
>>
>> This seems to be day one issue, and my patch just exposed it.
>> Current interface deletion code is not removing node stats entries.
>>
>> So if you delete interface and then create one with the same name,
>> stats entry is already there, and creation of new entry fails.
>>
>> Hope this helps:
>>
>> https://gerrit.fd.io/r/c/vpp/+/35900
>>
>> —
>> Damjan
>>
>>
>>
>> > On 05.04.2022., at 22:13, Pim van Pelt  wrote:
>> >
>> > Hoi,
>> >
>> > Here's a minimal repro that reliably crashes VPP at head for me, does
>> not crash before gerrit 35640:
>> >
>> > create loopback interface instance 0
>> > create bond id 0 mode lacp load-balance l34
>> > create bond id 1 mode lacp load-balance l34
>> > delete loopback interface intfc loop0
>> > delete bond BondEthernet0
>> > delete bond BondEthernet1
>> > create bond id 0 mode lacp load-balance l34
>> > delete bond 

Re: 35640 has a crashing regression, was Re: [vpp-dev] vpp-papi stats is broken

2022-04-06 Thread Pim van Pelt
Hoi,

The crash I observed is now gone, thanks!

VPP occasionally hits an ASSERT related to error counters at drop.c:77 --
I'll try to see if I can get a reproduction, but it may take a while, and
it may be transient.


*11: /home/pim/src/vpp/src/vlib/drop.c:77 (counter_index) assertion `ci <
n->n_errors' fails*

Thread 14 "vpp_wk_11" received signal SIGABRT, Aborted.
[Switching to Thread 0x7fff4bbfd700 (LWP 182685)]
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0x76a5f859 in __GI_abort () at abort.c:79
#2  0x004072e3 in os_panic () at
/home/pim/src/vpp/src/vpp/vnet/main.c:413
#3  0x76daea29 in debugger () at
/home/pim/src/vpp/src/vppinfra/error.c:84
#4  0x76dae7fa in _clib_error (how_to_die=2, function_name=0x0,
line_number=0, fmt=0x76f9d19c "%s:%d (%s) assertion `%s' fails")
at /home/pim/src/vpp/src/vppinfra/error.c:143
#5  0x76f782d9 in counter_index (vm=0x7fffa09fb2c0, e=3416) at
/home/pim/src/vpp/src/vlib/drop.c:77
#6  0x76f77c57 in process_drop_punt (vm=0x7fffa09fb2c0,
node=0x7fffa0c79b00, frame=0x7fff97168140,
disposition=ERROR_DISPOSITION_DROP)
at /home/pim/src/vpp/src/vlib/drop.c:224
#7  0x76f77957 in error_drop_node_fn_hsw (vm=0x7fffa09fb2c0,
node=0x7fffa0c79b00, frame=0x7fff97168140)
at /home/pim/src/vpp/src/vlib/drop.c:248
#8  0x76f0b10d in dispatch_node (vm=0x7fffa09fb2c0,
node=0x7fffa0c79b00, type=VLIB_NODE_TYPE_INTERNAL,
dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x7fff97168140,
last_time_stamp=5318787653101516) at /home/pim/src/vpp/src/vlib/main.c:961
#9  0x76f0bb60 in dispatch_pending_node (vm=0x7fffa09fb2c0,
pending_frame_index=5, last_time_stamp=5318787653101516)
at /home/pim/src/vpp/src/vlib/main.c:1120
#10 0x76f06e0f in vlib_main_or_worker_loop (vm=0x7fffa09fb2c0,
is_main=0) at /home/pim/src/vpp/src/vlib/main.c:1587
#11 0x76f06537 in vlib_worker_loop (vm=0x7fffa09fb2c0) at
/home/pim/src/vpp/src/vlib/main.c:1721
#12 0x76f44ef4 in vlib_worker_thread_fn (arg=0x7fff98eabec0) at
/home/pim/src/vpp/src/vlib/threads.c:1587
#13 0x76f3ffe5 in vlib_worker_thread_bootstrap_fn
(arg=0x7fff98eabec0) at /home/pim/src/vpp/src/vlib/threads.c:426
#14 0x76e61609 in start_thread (arg=<optimized out>) at
pthread_create.c:477
#15 0x76b5c163 in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:95
(gdb) up 4
#4  0x76dae7fa in _clib_error (how_to_die=2, function_name=0x0,
line_number=0, fmt=0x76f9d19c "%s:%d (%s) assertion `%s' fails")
at /home/pim/src/vpp/src/vppinfra/error.c:143
143 debugger ();
(gdb) up
#5  0x76f782d9 in counter_index (vm=0x7fffa09fb2c0, e=3416) at
/home/pim/src/vpp/src/vlib/drop.c:77
77    ASSERT (ci < n->n_errors);
(gdb) list
72
73    ni = vlib_error_get_node (&vm->node_main, e);
74    n = vlib_get_node (vm, ni);
75
76    ci = vlib_error_get_code (&vm->node_main, e);
77    ASSERT (ci < n->n_errors);
78
79    ci += n->error_heap_index;
80
81    return ci;

On Wed, Apr 6, 2022 at 1:53 PM Damjan Marion (damarion) 
wrote:

>
> This seems to be day one issue, and my patch just exposed it.
> Current interface deletion code is not removing node stats entries.
>
> So if you delete interface and then create one with the same name,
> stats entry is already there, and creation of new entry fails.
>
> Hope this helps:
>
> https://gerrit.fd.io/r/c/vpp/+/35900
>
> —
> Damjan
>
>
>
> > On 05.04.2022., at 22:13, Pim van Pelt  wrote:
> >
> > Hoi,
> >
> > Here's a minimal repro that reliably crashes VPP at head for me, does
> not crash before gerrit 35640:
> >
> > create loopback interface instance 0
> > create bond id 0 mode lacp load-balance l34
> > create bond id 1 mode lacp load-balance l34
> > delete loopback interface intfc loop0
> > delete bond BondEthernet0
> > delete bond BondEthernet1
> > create bond id 0 mode lacp load-balance l34
> > delete bond BondEthernet0
> > comment { the next command crashes VPP }
> > create loopback interface instance 0
> >
> >
> >
> > On Tue, Apr 5, 2022 at 9:48 PM Pim van Pelt  wrote:
> > Hoi,
> >
> > There is a crashing regression in VPP after
> https://gerrit.fd.io/r/c/vpp/+/35640
> >
> > With that change merged, VPP crashes upon creation and deletion of
> interfaces. Winding back the repo until before 35640 does not crash. The
> crash happens in
> > 0: /home/pim/src/vpp/src/vlib/stats/stats.h:115 (vlib_stats_get_entry)
> assertion `entry_index < vec_len (sm->directory_vector)' fails
> >
> > (gdb) bt
> > #0  __GI_raise (sig=sig@entry=6) at
> ../sysdeps/unix/sysv/linux/raise.c:50
> > #1  0x76a5e859 in __GI_abort () at abort.c:79
> > #2  0x004072e3 in os_panic () at
> /home/pim/src/vpp/src/vpp/vnet/main.c:413
> > #3  0x76dada29 in debugger () at
> 

Re: [vpp-dev] Prevent blackhole routes being leaked into VPP

2022-04-06 Thread Matthew Smith via lists.fd.io
There is also another linux-nl FIB source with a lower priority
("lcp-rt-dynamic"), which gets used based on the kernel route protocol. If
the route protocol is <= RTPROT_STATIC, lcp-rt is used. Otherwise, the
lower priority lcp-rt-dynamic is used. So if a route were added to the
kernel route table using iproute2 with 'proto bgp' (or 'proto bird', 'proto
zebra', etc)  added to the command, linux-nl would use the lower priority
FIB source to add the route to VPP's FIB.

I.e. this iproute2 command would probably have the desired effect - 'sudo
ip netns exec dataplane ip -6 route add blackhole 2001:50:10:a111::101/64
table 1203 proto bgp'.

-Matt


On Wed, Apr 6, 2022 at 3:28 AM Neale Ranns  wrote:

> Hi,
>
>
>
> You need to choose an appropriate priority for:
>
>
>
>   lcp_rt_fib_src =
>
> fib_source_allocate ("lcp-rt", FIB_SOURCE_PRIORITY_HI,
> FIB_SOURCE_BH_API);
>
>
>
> in plugins/linux-cp/lcp_router.c
>
>
>
> from vnet/fb/fib_source.h
>
>
>
> /**
>
> * The fixed source to priority mappings.
>
> * Declared here so those adding new sources can better determine their
> respective
>
> * priority values.
>
> */
>
> #define foreach_fib_source  \
>
> /** you can't do better then the special source */ \
>
> _(FIB_SOURCE_SPECIAL,   0x00, FIB_SOURCE_BH_SIMPLE)\
>
> _(FIB_SOURCE_CLASSIFY,  0x01, FIB_SOURCE_BH_SIMPLE)\
>
> _(FIB_SOURCE_PROXY, 0x02, FIB_SOURCE_BH_SIMPLE)\
>
> _(FIB_SOURCE_INTERFACE, 0x03, FIB_SOURCE_BH_INTERFACE) \
>
> _(FIB_SOURCE_SR,0x10, FIB_SOURCE_BH_API)   \
>
> _(FIB_SOURCE_BIER,  0x20, FIB_SOURCE_BH_SIMPLE)\
>
> _(FIB_SOURCE_6RD,   0x30, FIB_SOURCE_BH_API)   \
>
> _(FIB_SOURCE_API,   0x80, FIB_SOURCE_BH_API)   \
>
> _(FIB_SOURCE_CLI,   0x81, FIB_SOURCE_BH_API)   \
>
> _(FIB_SOURCE_LISP,  0x90, FIB_SOURCE_BH_LISP)  \
>
> _(FIB_SOURCE_MAP,   0xa0, FIB_SOURCE_BH_SIMPLE)\
>
> _(FIB_SOURCE_DHCP,  0xb0, FIB_SOURCE_BH_API)   \
>
> _(FIB_SOURCE_IP6_ND_PROXY,  0xc0, FIB_SOURCE_BH_API)   \
>
> _(FIB_SOURCE_IP6_ND,0xc1, FIB_SOURCE_BH_API)   \
>
> _(FIB_SOURCE_ADJ,   0xd0, FIB_SOURCE_BH_ADJ)   \
>
> _(FIB_SOURCE_MPLS,  0xe0, FIB_SOURCE_BH_MPLS)  \
>
> _(FIB_SOURCE_AE,0xf0, FIB_SOURCE_BH_SIMPLE)\
>
> _(FIB_SOURCE_RR,0xfb, FIB_SOURCE_BH_RR)\
>
> _(FIB_SOURCE_URPF_EXEMPT,   0xfc, FIB_SOURCE_BH_RR)\
>
> _(FIB_SOURCE_DEFAULT_ROUTE, 0xfd, FIB_SOURCE_BH_DROP)  \
>
> _(FIB_SOURCE_INTERPOSE, 0xfe, FIB_SOURCE_BH_INTERPOSE) \
>
> _(FIB_SOURCE_INVALID,   0xff, FIB_SOURCE_BH_DROP)
>
>
>
> /**
>
> * Some priority values that plugins might use when they are not to
> concerned
>
> * where in the list they'll go.
>
> */
>
> #define FIB_SOURCE_PRIORITY_HI 0x10
>
> #define FIB_SOURCE_PRIORITY_LOW 0xd0
>
>
>
>
>
> /neale
>
>
>
>
>
> *From: *vpp-dev@lists.fd.io  on behalf of Chinmaya
> Aggarwal via lists.fd.io 
> *Date: *Tuesday, 5 April 2022 at 16:55
> *To: *vpp-dev@lists.fd.io 
> *Subject: *Re: [vpp-dev] Prevent blackhole routes being leaked into VPP
>
> Hi,
>
>
>
> We are adding blackhole routes via linux command "sudo ip netns exec
> dataplane ip -6 route add blackhole 2001:50:10:a111::101/64 table 1203"
>
>
>
> After adding blackhole routes on linux (that are leaked to vpp), if we try
> to view the route in vpp ,we get the below output
>
> [root@j3chysr01stg05 ~]# vppctl show ip6 fib table 1203
> 2001:50:10:a111::/64
>
> ipv6-VRF:1203, fib_index:3, flow hash:[src dst sport dport proto flowlabel
> ] epoch:0 flags:none locks:[CLI:3, lcp-rt:1, ]
>
> 2001:50:10:a111::/64 fib:3 index:86 locks:2
>
>   lcp-rt refs:1 entry-flags:drop, src-flags:added,contributing,active,
>
> path-list:[126] locks:2 flags:drop, uPRF-list:76 len:0 itfs:[]
>
>   path:[126] pl-index:126 ip6 weight=1 pref=0 deag:  cfg-flags:drop,
>
>  fib-index:0
>
>
>
>  forwarding:   unicast-ip6-chain
>
>   [@0]: dpo-load-balance: [proto:ip6 index:88 buckets:1 uRPF:76 to:[0:0]]
>
> [0] [@0]: dpo-drop ip6
>
> [root@j3chysr01stg05 ~]#
>
>
>
> Now, if we add another route via ipip tunnel (that supposedly should
> overwrite the blackhole route) using the API. We get below below output for
> command "show ip6 fib table 1203 2001:50:10:a111::/64"
>
>
>
> [root@j3chysr01stg05 ~]# vppctl show ip6 fib table 1203
> 2001:50:10:a111::/64
>
> ipv6-VRF:1203, fib_index:3, flow hash:[src dst sport dport proto flowlabel
> ] epoch:0 flags:none locks:[CLI:3, lcp-rt:1, ]
>
> 2001:50:10:a111::/64 fib:3 index:86 locks:3
>
>   lcp-rt refs:1 entry-flags:drop, src-flags:added,contributing,active,
>
> path-list:[126] locks:2 flags:drop, uPRF-list:76 len:0 itfs:[]
>
>   path:[126] pl-index:126 ip6 weight=1 pref=0 deag:  cfg-flags:drop,
>
>  fib-index:0
>
>
>
>   API refs:1 

Re: 35640 has a crashing regression, was Re: [vpp-dev] vpp-papi stats is broken

2022-04-06 Thread Damjan Marion via lists.fd.io

This seems to be a day-one issue, and my patch just exposed it.
The current interface deletion code is not removing node stats entries.

So if you delete an interface and then create one with the same name,
the stats entry is already there, and creation of the new entry fails.

Hope this helps:

https://gerrit.fd.io/r/c/vpp/+/35900

— 
Damjan



> On 05.04.2022., at 22:13, Pim van Pelt  wrote:
> 
> Hoi,
> 
> Here's a minimal repro that reliably crashes VPP at head for me, does not 
> crash before gerrit 35640:
> 
> create loopback interface instance 0
> create bond id 0 mode lacp load-balance l34
> create bond id 1 mode lacp load-balance l34
> delete loopback interface intfc loop0
> delete bond BondEthernet0
> delete bond BondEthernet1
> create bond id 0 mode lacp load-balance l34
> delete bond BondEthernet0
> comment { the next command crashes VPP }
> create loopback interface instance 0
> 
> 
> 
> On Tue, Apr 5, 2022 at 9:48 PM Pim van Pelt  wrote:
> Hoi,
> 
> There is a crashing regression in VPP after 
> https://gerrit.fd.io/r/c/vpp/+/35640
> 
> With that change merged, VPP crashes upon creation and deletion of 
> interfaces. Winding back the repo until before 35640 does not crash. The 
> crash happens in 
> 0: /home/pim/src/vpp/src/vlib/stats/stats.h:115 (vlib_stats_get_entry) 
> assertion `entry_index < vec_len (sm->directory_vector)' fails
> 
> (gdb) bt
> #0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
> #1  0x76a5e859 in __GI_abort () at abort.c:79
> #2  0x004072e3 in os_panic () at 
> /home/pim/src/vpp/src/vpp/vnet/main.c:413
> #3  0x76dada29 in debugger () at 
> /home/pim/src/vpp/src/vppinfra/error.c:84
> #4  0x76dad7fa in _clib_error (how_to_die=2, function_name=0x0, 
> line_number=0, fmt=0x76f9c19c "%s:%d (%s) assertion `%s' fails")
>at /home/pim/src/vpp/src/vppinfra/error.c:143
> #5  0x76f39605 in vlib_stats_get_entry (sm=0x76fce5e8 
> , entry_index=4294967295)
>at /home/pim/src/vpp/src/vlib/stats/stats.h:115
> #6  0x76f39273 in vlib_stats_remove_entry (entry_index=4294967295) at 
> /home/pim/src/vpp/src/vlib/stats/stats.c:135
> #7  0x76ee36d9 in vlib_register_errors (vm=0x7fff96800740, 
> node_index=718, n_errors=0, error_strings=0x0, counters=0x0)
>at /home/pim/src/vpp/src/vlib/error.c:149
> #8  0x770b8e0c in setup_tx_node (vm=0x7fff96800740, node_index=718, 
> dev_class=0x7fff973f9fb0) at /home/pim/src/vpp/src/vnet/interface.c:816
> #9  0x770b7f26 in vnet_register_interface (vnm=0x77f579a0 
> , dev_class_index=31, dev_instance=0, hw_class_index=29, 
>hw_instance=7) at /home/pim/src/vpp/src/vnet/interface.c:1085
> #10 0x77129efd in vnet_eth_register_interface (vnm=0x77f579a0 
> , r=0x7fff4b288f18)
>at /home/pim/src/vpp/src/vnet/ethernet/interface.c:376
> #11 0x7712bd05 in vnet_create_loopback_interface 
> (sw_if_indexp=0x7fff4b288fb8, mac_address=0x7fff4b288fb2 "", is_specified=1 
> '\001', 
>user_instance=0) at /home/pim/src/vpp/src/vnet/ethernet/interface.c:883
> #12 0x7712fecf in create_simulated_ethernet_interfaces 
> (vm=0x7fff96800740, input=0x7fff4b2899d0, cmd=0x7fff973c7e38)
>at /home/pim/src/vpp/src/vnet/ethernet/interface.c:930
> #13 0x76ed65e8 in vlib_cli_dispatch_sub_commands (vm=0x7fff96800740, 
> cm=0x42c2f0 , input=0x7fff4b2899d0, 
>parent_command_index=1161) at /home/pim/src/vpp/src/vlib/cli.c:592
> #14 0x76ed6358 in vlib_cli_dispatch_sub_commands (vm=0x7fff96800740, 
> cm=0x42c2f0 , input=0x7fff4b2899d0, 
>parent_command_index=33) at /home/pim/src/vpp/src/vlib/cli.c:549
> #15 0x76ed6358 in vlib_cli_dispatch_sub_commands (vm=0x7fff96800740, 
> cm=0x42c2f0 , input=0x7fff4b2899d0, 
>parent_command_index=0) at /home/pim/src/vpp/src/vlib/cli.c:549
> #16 0x76ed5528 in vlib_cli_input (vm=0x7fff96800740, 
> input=0x7fff4b2899d0, function=0x0, function_arg=0)
>at /home/pim/src/vpp/src/vlib/cli.c:695
> #17 0x76f61f21 in unix_cli_exec (vm=0x7fff96800740, 
> input=0x7fff4b289e78, cmd=0x7fff973c99d8) at 
> /home/pim/src/vpp/src/vlib/unix/cli.c:3454
> #18 0x76ed65e8 in vlib_cli_dispatch_sub_commands (vm=0x7fff96800740, 
> cm=0x42c2f0 , input=0x7fff4b289e78, 
>parent_command_index=0) at /home/pim/src/vpp/src/vlib/cli.c:592
> #19 0x76ed5528 in vlib_cli_input (vm=0x7fff96800740, 
> input=0x7fff4b289e78, function=0x76f55960 , 
> function_arg=1)
>at /home/pim/src/vpp/src/vlib/cli.c:695
> 
> This is caught by a local regression test 
> (https://github.com/pimvanpelt/vppcfg/tree/main/intest) that executes a bunch 
> of CLI statements, and I have a set of transitions there which I can probably 
> narrow down to an exact repro case.
> 
> On Fri, Apr 1, 2022 at 3:08 PM Pim van Pelt via lists.fd.io 
>  wrote:
> Hoi,
> 
> As a followup - I tried to remember why I copied class VPPStats() and friends 
> into my own repository, but that may be because it's not exported in 

Re: [vpp-dev] Prevent blackhole routes being leaked into VPP

2022-04-06 Thread Neale Ranns
Hi,

You need to choose an appropriate priority for:

  lcp_rt_fib_src =
fib_source_allocate ("lcp-rt", FIB_SOURCE_PRIORITY_HI, FIB_SOURCE_BH_API);

in plugins/linux-cp/lcp_router.c

from vnet/fib/fib_source.h

/**
* The fixed source to priority mappings.
* Declared here so those adding new sources can better determine their 
respective
* priority values.
*/
#define foreach_fib_source  \
/** you can't do better then the special source */ \
_(FIB_SOURCE_SPECIAL,   0x00, FIB_SOURCE_BH_SIMPLE)\
_(FIB_SOURCE_CLASSIFY,  0x01, FIB_SOURCE_BH_SIMPLE)\
_(FIB_SOURCE_PROXY, 0x02, FIB_SOURCE_BH_SIMPLE)\
_(FIB_SOURCE_INTERFACE, 0x03, FIB_SOURCE_BH_INTERFACE) \
_(FIB_SOURCE_SR,0x10, FIB_SOURCE_BH_API)   \
_(FIB_SOURCE_BIER,  0x20, FIB_SOURCE_BH_SIMPLE)\
_(FIB_SOURCE_6RD,   0x30, FIB_SOURCE_BH_API)   \
_(FIB_SOURCE_API,   0x80, FIB_SOURCE_BH_API)   \
_(FIB_SOURCE_CLI,   0x81, FIB_SOURCE_BH_API)   \
_(FIB_SOURCE_LISP,  0x90, FIB_SOURCE_BH_LISP)  \
_(FIB_SOURCE_MAP,   0xa0, FIB_SOURCE_BH_SIMPLE)\
_(FIB_SOURCE_DHCP,  0xb0, FIB_SOURCE_BH_API)   \
_(FIB_SOURCE_IP6_ND_PROXY,  0xc0, FIB_SOURCE_BH_API)   \
_(FIB_SOURCE_IP6_ND,0xc1, FIB_SOURCE_BH_API)   \
_(FIB_SOURCE_ADJ,   0xd0, FIB_SOURCE_BH_ADJ)   \
_(FIB_SOURCE_MPLS,  0xe0, FIB_SOURCE_BH_MPLS)  \
_(FIB_SOURCE_AE,0xf0, FIB_SOURCE_BH_SIMPLE)\
_(FIB_SOURCE_RR,0xfb, FIB_SOURCE_BH_RR)\
_(FIB_SOURCE_URPF_EXEMPT,   0xfc, FIB_SOURCE_BH_RR)\
_(FIB_SOURCE_DEFAULT_ROUTE, 0xfd, FIB_SOURCE_BH_DROP)  \
_(FIB_SOURCE_INTERPOSE, 0xfe, FIB_SOURCE_BH_INTERPOSE) \
_(FIB_SOURCE_INVALID,   0xff, FIB_SOURCE_BH_DROP)

/**
* Some priority values that plugins might use when they are not to concerned
* where in the list they'll go.
*/
#define FIB_SOURCE_PRIORITY_HI 0x10
#define FIB_SOURCE_PRIORITY_LOW 0xd0


/neale


From: vpp-dev@lists.fd.io  on behalf of Chinmaya Aggarwal 
via lists.fd.io 
Date: Tuesday, 5 April 2022 at 16:55
To: vpp-dev@lists.fd.io 
Subject: Re: [vpp-dev] Prevent blackhole routes being leaked into VPP
Hi,

We are adding blackhole routes via linux command "sudo ip netns exec dataplane 
ip -6 route add blackhole 2001:50:10:a111::101/64 table 1203"

After adding blackhole routes on Linux (which are leaked to VPP), if we try to
view the route in VPP, we get the below output:
[root@j3chysr01stg05 ~]# vppctl show ip6 fib table 1203 2001:50:10:a111::/64
ipv6-VRF:1203, fib_index:3, flow hash:[src dst sport dport proto flowlabel ] 
epoch:0 flags:none locks:[CLI:3, lcp-rt:1, ]
2001:50:10:a111::/64 fib:3 index:86 locks:2
  lcp-rt refs:1 entry-flags:drop, src-flags:added,contributing,active,
path-list:[126] locks:2 flags:drop, uPRF-list:76 len:0 itfs:[]
  path:[126] pl-index:126 ip6 weight=1 pref=0 deag:  cfg-flags:drop,
 fib-index:0

 forwarding:   unicast-ip6-chain
  [@0]: dpo-load-balance: [proto:ip6 index:88 buckets:1 uRPF:76 to:[0:0]]
[0] [@0]: dpo-drop ip6
[root@j3chysr01stg05 ~]#

Now, if we add another route via an ipip tunnel (which should supposedly overwrite
the blackhole route) using the API, we get the below output for the command "show
ip6 fib table 1203 2001:50:10:a111::/64":

[root@j3chysr01stg05 ~]# vppctl show ip6 fib table 1203 2001:50:10:a111::/64
ipv6-VRF:1203, fib_index:3, flow hash:[src dst sport dport proto flowlabel ] 
epoch:0 flags:none locks:[CLI:3, lcp-rt:1, ]
2001:50:10:a111::/64 fib:3 index:86 locks:3
  lcp-rt refs:1 entry-flags:drop, src-flags:added,contributing,active,
path-list:[126] locks:2 flags:drop, uPRF-list:76 len:0 itfs:[]
  path:[126] pl-index:126 ip6 weight=1 pref=0 deag:  cfg-flags:drop,
 fib-index:0

  API refs:1 entry-flags:attached,import, src-flags:added,
path-list:[161] locks:1 flags:shared, uPRF-list:106 len:1 itfs:[40, ]
  path:[211] pl-index:161 ip6 weight=100 pref=0 attached:  
oper-flags:resolved,
 ipip19

 forwarding:   unicast-ip6-chain
  [@0]: dpo-load-balance: [proto:ip6 index:88 buckets:1 uRPF:76 to:[0:0]]
[0] [@0]: dpo-drop ip6
[root@j3chysr01stg05 ~]#

lcp-rt gets added the moment the blackhole routes are leaked to VPP. I think
"lcp-rt" denotes the blackhole routes.
The API source is still below the "lcp-rt" route. How can we prioritize the API
route over the lcp-rt route?

Thanks and Regards,
Chinmaya Agarwal.

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#21207): https://lists.fd.io/g/vpp-dev/message/21207
Mute This Topic: https://lists.fd.io/mt/90236408/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Fastest way to connect application in user space to VPP #vpp

2022-04-06 Thread longtrb
[Edited Message Follows]

Hi All,

I created a virtio interface and used l2 xconnect to connect it to a tap interface:

vppctl create int virtio 0000:00:08.0 gso-enabled
vppctl create tap id 0 host-ip4-addr 192.168.56.101/24 gso

vppctl set interface l2 xconnect tap0 virtio-0/0/8/0
vppctl set interface l2 xconnect virtio-0/0/8/0 tap0

vppctl set interface state virtio-0/0/8/0 up
vppctl set interface state tap0 up

But currently I cannot ping 192.168.56.1 (an external server gateway)
through the tap interface.
Can you help me correct this?

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#21206): https://lists.fd.io/g/vpp-dev/message/21206
Mute This Topic: https://lists.fd.io/mt/90135014/21656
Mute #vpp:https://lists.fd.io/g/vpp-dev/mutehashtag/vpp
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Fastest way to connect application in user space to VPP #vpp

2022-04-06 Thread longtrb
Hi Mohsin,

I created a virtio interface and used l2 xconnect to connect it to a tap interface:

vppctl create int virtio 0000:00:08.0 gso-enabled
vppctl create tap id 0 host-ip4-addr 192.168.56.101/24 gso

vppctl set interface l2 xconnect tap0 virtio-0/0/8/0
vppctl set interface l2 xconnect virtio-0/0/8/0 tap0

vppctl set interface state virtio-0/0/8/0 up
vppctl set interface state tap0 up

But currently I cannot ping 192.168.56.1 (an external server gateway)
through the tap interface.
Can you help me correct this?

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#21206): https://lists.fd.io/g/vpp-dev/message/21206
Mute This Topic: https://lists.fd.io/mt/90135014/21656
Mute #vpp:https://lists.fd.io/g/vpp-dev/mutehashtag/vpp
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-