[vpp-dev] Query regarding operations on pre-fetched buffer in a repetitive manner

2023-01-13 Thread Amit Mehra
Hi,

I wanted to understand the reasoning and implications behind not using for 
loops for operations on prefetched buffers.

For example, I was checking the code of the *ip4-input* node and saw the 
following:
while (n_left_from >= 4)
  {
    /* Prefetch next iteration. */
    if (n_left_from >= 12)
      {
        vlib_prefetch_buffer_header (b[8], LOAD);
        vlib_prefetch_buffer_header (b[9], LOAD);
        vlib_prefetch_buffer_header (b[10], LOAD);
        vlib_prefetch_buffer_header (b[11], LOAD);
        vlib_prefetch_buffer_data (b[4], LOAD);
        vlib_prefetch_buffer_data (b[5], LOAD);
        vlib_prefetch_buffer_data (b[6], LOAD);
        vlib_prefetch_buffer_data (b[7], LOAD);
      }

    sw_if_index[0] = vnet_buffer (b[0])->sw_if_index[VLIB_RX];
    sw_if_index[1] = vnet_buffer (b[1])->sw_if_index[VLIB_RX];
    sw_if_index[2] = vnet_buffer (b[2])->sw_if_index[VLIB_RX];
    sw_if_index[3] = vnet_buffer (b[3])->sw_if_index[VLIB_RX];

Here, after prefetching, sw_if_index is computed for each buffer with four 
repeated statements. Could we use a for loop to compute the sw_if_indices 
instead, or would that have a performance impact?

Ex:
for (int i = 0; i < 4; i++)
  {
    sw_if_index[i] = vnet_buffer (b[i])->sw_if_index[VLIB_RX];
  }

Regards
Amit

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#22464): https://lists.fd.io/g/vpp-dev/message/22464
Mute This Topic: https://lists.fd.io/mt/96243950/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/leave/1480452/21656/631435203/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] MPLS Tunnel Interface on Provider Router

2022-01-18 Thread Amit Mehra
Thanks Neale for the clarification!!

View/Reply Online (#20750): https://lists.fd.io/g/vpp-dev/message/20750



Re: [vpp-dev] MPLS Tunnel Interface on Provider Router

2022-01-17 Thread Amit Mehra
Thanks Neale for the response.

We are trying to simulate an L3VPN use case and want to maintain counters per 
FEC (on both PE and P nodes).

Yes, we can achieve label swap/pop without a tunnel interface too, but that way 
we do not maintain per-FEC counters. As per my understanding, with tunnels we 
can have counters per FEC.

I have a few follow-up questions based on your responses:
1) Are tunnel interfaces meant only for keeping per-FEC statistics, or are 
there other use cases for creating an MPLS tunnel? In other words, since we can 
achieve label encapsulation and label swap/pop operations without a tunnel, 
what is the need for one?

2) The sample L3VPN configuration given at https://wiki.fd.io/view/VPP/MPLS_FIB 
does not use a tunnel interface. Is there a reason a tunnel interface is not 
considered for realizing L3VPN? Does using one have any other implications, or 
can we configure L3VPN with tunnel interfaces (on both PE and P devices) too, 
as an alternative?

Regards
Amit

View/Reply Online (#20730): https://lists.fd.io/g/vpp-dev/message/20730



[vpp-dev] MPLS Tunnel Interface on Provider Router

2022-01-14 Thread Amit Mehra
Hi,

I was doing a PoC to simulate a label swap operation on a service provider 
router (a non-PE router) by creating an MPLS tunnel interface and using that 
tunnel interface as the target in an MPLS route entry.

Reference: https://wiki.fd.io/view/VPP/MPLS_FIB

Please find below the set of configs that I tried and the corresponding 
observation with VPP 21.01

*Config 1:* Configured an MPLS FIB entry without the eos bit set.

mpls table add 0

set interface mpls GigabitEthernet0/6/0 enable   ---> incoming interface is GigabitEthernet0/6/0

set interface mpls GigabitEthernet0/7/0 enable   ---> outgoing interface is GigabitEthernet0/7/0

mpls tunnel add via 10.10.10.10 GigabitEthernet0/7/0 out-labels 44

set interface state *mpls-tunnel0* up

*mpls local-label add 33 via mpls-tunnel0*

*Expectation:-* On receiving an MPLS packet with in-label 33 and without the 
"eos" bit set, it should swap label 33 with label 44 and send the packet to its 
neighbor, i.e. 10.10.10.10, via the GigabitEthernet0/7/0 interface.

*Observation:-* I am seeing the forwarding action as dpo-drop in the MPLS FIB 
table. Please find the output of the mpls fib table below:

33:neos/21 fib:0 index:18 locks:2

CLI refs:1 entry-flags:attached, src-flags:added,contributing,active,

path-list:[25] locks:2 flags:shared, uPRF-list:21 len:1 itfs:[96, ]

path:[27] pl-index:25 mpls weight=1 pref=0 attached-nexthop:  
oper-flags:resolved, cfg-flags:attached,

2100::200:0:0:0 mpls-tunnel0 (p2p)

[@0]: mpls via 0.0.0.0 mpls-tunnel0: mtu:9000 next:2

stacked-on:

[@2]: dpo-load-balance: [proto:mpls index:20 buckets:1 uRPF:-1 to:[0:0]]

[0] [@6]: mpls-label[@0]:[44:64:0:neos]

[@1]: arp-mpls: via 10.10.10.10 GigabitEthernet0/7/0

forwarding:   mpls-neos-chain

[@0]: dpo-load-balance: [proto:mpls index:21 buckets:1 uRPF:21 to:[0:0]]

[0] [@0]: dpo-drop mpls

On receiving an MPLS packet, it gets dropped in the mpls-lookup node. Please 
find the vpp trace output below:

00:13:33:711978: *dpdk-input*

GigabitEthernet0/6/0 rx queue 0

buffer 0x5e3cc: current data 0, length 60, buffer-pool 0, ref-count 1, 
totlen-nifb 0, trace handle 0x100

ext-hdr-valid

l4-cksum-computed l4-cksum-correct

PKT MBUF: port 0, nb_segs 1, pkt_len 60

buf_len 2176, data_len 60, ol_flags 0x0, data_off 128, phys_addr 0xbd8f380

packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0

rss 0x0 fdir.hi 0x0 fdir.lo 0x0

MPLS: 52:54:00:06:61:da -> 52:54:00:00:00:1a

label 33 exp 0, s 0, ttl 64

00:13:33:712035: *ethernet-input*

frame: flags 0x1, hw-if-index 1, sw-if-index 1

MPLS: 52:54:00:06:61:da -> 52:54:00:00:00:1a

00:13:33:712089: *mpls-input*

MPLS: next mpls-lookup[1]  label 33 ttl 64 exp 0

00:13:33:712100: *mpls-lookup*

MPLS: next [0], lookup fib index 0, LB index 21 hash 0 label 33 eos 0

00:13:33:712103: *mpls-drop*

drop

00:13:33:712105: *error-drop*

rx:GigabitEthernet0/6/0

00:13:33:712106: drop

mpls-input: MPLS DROP DPO

*Config 2:-* Configured an MPLS FIB entry with the eos bit set.

mpls table add 0

set interface mpls GigabitEthernet0/6/0 enable   ---> incoming interface is GigabitEthernet0/6/0

set interface mpls GigabitEthernet0/7/0 enable   ---> outgoing interface is GigabitEthernet0/7/0

mpls tunnel add via 10.10.10.10 GigabitEthernet0/7/0 out-labels 44

set interface state mpls-tunnel0 up

mpls local-label add 33 *eos* via mpls-tunnel0

*Expectation:-* On receiving an MPLS packet with in-label 33 and with the "eos" 
bit set, it should swap label 33 with label 44 and send the packet to its 
neighbor, i.e. 10.10.10.10, via the GigabitEthernet0/7/0 interface.

*Observation:* - I observe a crash in vpp when issuing the CLI "mpls 
local-label add 33 eos via mpls-tunnel0". Is this a known issue?

The following is the stack trace

#0  __GI_raise (sig=sig@entry=6) at 
/usr/src/debug/glibc/2.31+gitAUTOINC+f84949f1c4-r0/git/sysdeps/unix/sysv/linux/raise.c:50

#1  0x0038bb625528 in __GI_abort () at 
/usr/src/debug/glibc/2.31+gitAUTOINC+f84949f1c4-r0/git/stdlib/abort.c:79

#2  0x0040857a in os_exit () at 
/usr/src/debug/vpp/21.01+gitAUTOINC+18aaa0b698-r0/git/src/vpp/vnet/main.c:433

#3  0x7f05d7fa6540 in unix_signal_handler (signum=11, si=, 
uc=)

at 
/usr/src/debug/vpp/21.01+gitAUTOINC+18aaa0b698-r0/git/src/vlib/unix/main.c:187

#4  

#5  0x7f05d8e813f0 in dpo_get_next_node 
(parent_dpo=parent_dpo@entry=0x7f05952468e8, child_proto=, 
child_proto@entry=16,

child_type=, child_type@entry=DPO_IP_NULL) at 
/usr/src/debug/vpp/21.01+gitAUTOINC+18aaa0b698-r0/git/src/vnet/dpo/dpo.c:441

#6  dpo_stack (child_type=child_type@entry=DPO_MPLS_DISPOSITION_PIPE, 
child_proto=child_proto@entry=DPO_PROTO_MPLS, dpo=dpo@entry=0x7f059861dbc0,

parent=parent@entry=0x7f05952468e8) at 
/usr/src/debug/vpp/21.01+gitAUTOINC+18aaa0b698-r0/git/src/vnet/dpo/dpo.c:526

#7  0x7f05d8ea0019 in mpls_disp_dpo_create 
(payload_proto=payload_proto@entry=DPO_PROTO_MPLS, 
rpf_id=rpf_id@entry=4294967295,

mode=mode@entry=FIB_MPLS_LSP_MODE_PIPE, 

Re: [vpp-dev] Issue in VRRP functionality when compiling with devtoolset-7 with single worker configuration

2020-08-13 Thread Amit Mehra
Hi Matthew,

I am not observing the issue with the patch applied. Thanks for your
support.

Regards
Amit

On Tue, Aug 11, 2020 at 2:19 PM Amit Mehra via lists.fd.io  wrote:

> Hi Matthews,
>
> Thanks for the reply. I will try with this patch and will let you know my
> observations.
>
> Regards
> Amit  
>
View/Reply Online (#17218): https://lists.fd.io/g/vpp-dev/message/17218


Re: [vpp-dev] Issue in VRRP functionality when compiling with devtoolset-7 with single worker configuration

2020-08-11 Thread Amit Mehra
Hi Matthews,

Thanks for the reply. I will try with this patch and will let you know my 
observations.

Regards
Amit
View/Reply Online (#17185): https://lists.fd.io/g/vpp-dev/message/17185


Re: [vpp-dev] Issue in VRRP functionality when compiling with devtoolset-7 with single worker configuration

2020-08-09 Thread Amit Mehra
Hi Matthew,

Can you please confirm whether this is a known issue of VRRP with devtoolset-7,
or whether VRRP has a dependency on devtoolset-9 and should always be compiled
with devtoolset-9?

Regards
Amit Mehra

On Fri, Aug 7, 2020 at 10:52 AM Amit Mehra via lists.fd.io  wrote:

> Hi,
>
> I am testing the Master/Backup functionality using the vrrp plugin
> available in vpp-20.05 but i am observing the following issue when
> compiling using devtoolset-7 and using 1 main thread and 1 worker thread in
> my startup.conf
>
> 1) Master Node is sending vrrp broadcast advertisement messages on
> 224.0.0.18
> 2) These broadcast messages are getting dropped by vrrp plugin of Backup
> Node with error "VRRP_ERROR_UNKNOWN_VR"(I could see the stats for this
> error in show error as well). It seems that mhash_get() is not able to find
> the hash entry on worker thread.
> 3) However, when i am giving "vrrp vr add" again for same vr_id and intfc,
> i am observing the error "VNET_API_ERROR_ENTRY_ALREADY_EXISTS". Here also
> it call mhash_get() and is able to find the hash entry for the same key but
> it is on main thread.
> 4) Also, when i am using only main thread and no worker thread, then the
> messages are not getting dropped and things seems to work fine.
>
> Is there some known issue in vrrp/vpp-20.05 if testing vrrp with workers
> when using devtoolset-7 for compilation?
>
> Also, when i am using devtoolset-9 and using workers in my configuration,
> then also i am not observing any issues and it seems to work fine.
>
> Any suggestions or workaround for testing vrrp while using devtoolset-7 in
> multiple worker config?
>
> Regards
> Amit
> 
>
View/Reply Online (#17175): https://lists.fd.io/g/vpp-dev/message/17175


[vpp-dev] Issue in VRRP functionality when compiling with devtoolset-7 with single worker configuration

2020-08-06 Thread Amit Mehra
Hi,

I am testing the Master/Backup functionality using the VRRP plugin available in 
vpp-20.05, but I am observing the following issue when compiling with 
devtoolset-7 and running 1 main thread and 1 worker thread per my startup.conf:

1) The Master node sends VRRP advertisement messages to 224.0.0.18.
2) These messages are dropped by the VRRP plugin on the Backup node with the 
error "VRRP_ERROR_UNKNOWN_VR" (I can see the stats for this error in "show 
errors" as well). It seems that mhash_get() cannot find the hash entry on the 
worker thread.
3) However, when I issue "vrrp vr add" again for the same vr_id and interface, 
I get the error "VNET_API_ERROR_ENTRY_ALREADY_EXISTS". This path also calls 
mhash_get() and does find the hash entry for the same key, but it runs on the 
main thread.
4) When I use only the main thread and no worker thread, the messages are not 
dropped and everything seems to work fine.

Is there a known issue with vrrp/vpp-20.05 when testing VRRP with workers 
compiled using devtoolset-7?

Also, when I compile with devtoolset-9 and use workers in my configuration, I 
do not observe any issues.

Any suggestions or workarounds for testing VRRP with devtoolset-7 in a 
multi-worker config?

Regards
Amit
View/Reply Online (#17159): https://lists.fd.io/g/vpp-dev/message/17159


Re: [vpp-dev] Observing a crash in vpp-20.05

2020-07-09 Thread Amit Mehra
Thanks Dave and Neale for the response.

Please find the observations below:-

1) We have not seen the issue after enabling IPv6 on the interface, but we will 
need to monitor it for some more time.
2) From the core trace, a v6 link-local frame was received on the node.
3) "show node" for the node displays this:

vpp# show node ip6-link-local
node ip6-link-local, type internal, state active, index 206
node function variants:
default only

next nodes:
next-index  node-index               Node               Vectors
0          397                ip6-drop                0
1          408               ip6-lookup               0

known previous nodes:
lookup-ip6-src (197)               lookup-ip6-dst-itf (198)           
lookup-ip6-dst (199)
ip6-pop-hop-by-hop (392)           ip6-hop-by-hop (399)               
ip6-load-balance (407)
ip6-lookup (408)                   ip6-classify (447)
View/Reply Online (#16919): https://lists.fd.io/g/vpp-dev/message/16919


[vpp-dev] Observing a crash in vpp-20.05

2020-07-06 Thread Amit Mehra
Hi,

I am running some light IPv4 traffic (around 5K pps) and observing a core dump 
with the following backtrace:

Program terminated with signal 6, Aborted.
#0  0x2b838f53f387 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install OPWVs11-8.1-el7.x86_64
(gdb) bt
#0  0x2b838f53f387 in raise () from /lib64/libc.so.6
#1  0x2b838f540a78 in abort () from /lib64/libc.so.6
#2  0x55deea85617e in os_exit (code=code@entry=1) at /bfs-build/build-area.42/builds/LinuxNBngp_8.X_RH7/2020-07-02-1702/third-party/vpp_2005/vpp_2005/src/vpp/vnet/main.c:390
#3  0x2b838de26716 in unix_signal_handler (signum=11, si=, uc=) at /bfs-build/build-area.42/builds/LinuxNBngp_8.X_RH7/2020-07-02-1702/third-party/vpp_2005/vpp_2005/src/vlib/unix/main.c:187
#4
#5  0x2b838d434479 in ip6_ll_fib_get (sw_if_index=2) at /bfs-build/build-area.42/builds/LinuxNBngp_8.X_RH7/2020-07-02-1702/third-party/vpp_2005/vpp_2005/src/vnet/ip/ip6_ll_table.c:32
#6  0x2b838d7c4904 in ip6_ll_dpo_inline (frame=0x2b8397edf880, node=0x2b83988669c0, vm=0x2b8397ca9040) at /bfs-build/build-area.42/builds/LinuxNBngp_8.X_RH7/2020-07-02-1702/third-party/vpp_2005/vpp_2005/src/vnet/dpo/ip6_ll_dpo.c:132
#7  ip6_ll_dpo_switch (vm=0x2b8397ca9040, node=0x2b83988669c0, frame=0x2b8397edf880) at /bfs-build/build-area.42/builds/LinuxNBngp_8.X_RH7/2020-07-02-1702/third-party/vpp_2005/vpp_2005/src/vnet/dpo/ip6_ll_dpo.c:170
#8  0x2b838dddeec7 in dispatch_node (last_time_stamp=, frame=0x2b8397edf880, dispatch_state=VLIB_NODE_STATE_POLLING, type=VLIB_NODE_TYPE_INTERNAL, node=0x2b83988669c0, vm=0x2b8397ca9040) at /bfs-build/build-area.42/builds/LinuxNBngp_8.X_RH7/2020-07-02-1702/third-party/vpp_2005/vpp_2005/src/vlib/main.c:1235
#9  dispatch_pending_node (vm=vm@entry=0x2b8397ca9040, pending_frame_index=pending_frame_index@entry=4, last_time_stamp=) at /bfs-build/build-area.42/builds/LinuxNBngp_8.X_RH7/2020-07-02-1702/third-party/vpp_2005/vpp_2005/src/vlib/main.c:1403
#10 0x2b838dde00bf in vlib_main_or_worker_loop (is_main=0, vm=0x2b8397ca9040) at /bfs-build/build-area.42/builds/LinuxNBngp_8.X_RH7/2020-07-02-1702/third-party/vpp_2005/vpp_2005/src/vlib/main.c:1862
#11 vlib_worker_loop (vm=0x2b8397ca9040) at /bfs-build/build-area.42/builds/LinuxNBngp_8.X_RH7/2020-07-02-1702/third-party/vpp_2005/vpp_2005/src/vlib/main.c:1996
#12 0x2b838e8c5cac in clib_calljmp () from /opt/opwv/S11/8.1/tools/vpp/lib/libvppinfra.so.20.05
#13 0x2b85ca3b3c40 in ?? ()
#14 0x2b8411e3107a in eal_thread_loop (arg=) at /bfs-build/build-area.42/builds/LinuxNBngp_8.X_RH7/2020-07-02-1702/third-party/vpp_2005/vpp_2005/build-root/build-vpp-native/external/dpdk-20.02/lib/librte_eal/linux/eal/eal_thread.c:153
#15 0x00010d0c in ?? ()

Is this a known issue in vpp-20.05?

Regards
Amit
View/Reply Online (#16885): https://lists.fd.io/g/vpp-dev/message/16885


Re: [vpp-dev] Need help with setup.. cannot ping a VPP interface.

2020-06-15 Thread Amit Mehra
Hi Manoj,

You need to enable ping_plugin.so; then you will be able to see the
ping CLI command.

On Mon, 15 Jun, 2020, 10:11 pm Manoj Iyer,  wrote:

> Steven,
>
> vppctl does not have a ping command I am on version 20.5 (may be I did not
> compile this option?) Also not sure how to parse this trace output.
>
> When I ping System B from System A and run the trace on SystemB I get the
> following output:
>
> $ sudo vppctl show trace
> --- Start of thread 0 vpp_main ---
> No packets in trace buffer
> --- Start of thread 1 vpp_wk_0 ---
> Packet 1
>
> 00:06:46:741025: dpdk-input
>   bnxt0 rx queue 0
>   buffer 0x56cb7: current data 0, length 119, buffer-pool 0, ref-count 1,
> totlen-nifb 0, trace handle 0x100
>   ext-hdr-valid
>   l4-cksum-computed l4-cksum-correct
>   PKT MBUF: port 0, nb_segs 1, pkt_len 119
> buf_len 2176, data_len 119, ol_flags 0x0, data_off 128, phys_addr
> 0x70f65c80
> packet_type 0x1 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
> rss 0x0 fdir.hi 0x0 fdir.lo 0x0
> Packet Types
>   RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
>   0x0069: 18:e8:29:21:03:13 -> 01:80:c2:00:00:00
> 00:06:46:741035: ethernet-input
>   frame: flags 0x3, hw-if-index 1, sw-if-index 1
>   0x0069: 18:e8:29:21:03:13 -> 01:80:c2:00:00:00
> 00:06:46:741038: llc-input
>   LLC bpdu -> bpdu
> 00:06:46:741042: error-drop
>   rx:bnxt0
> 00:06:46:741043: drop
>   llc-input: unknown llc ssap/dsap
>
> Packet 2
>
> 00:06:48:741182: dpdk-input
>   bnxt0 rx queue 0
>   buffer 0x56ca2: current data 0, length 119, buffer-pool 0, ref-count 1,
> totlen-nifb 0, trace handle 0x101
>   ext-hdr-valid
>   l4-cksum-computed l4-cksum-correct
>   PKT MBUF: port 0, nb_segs 1, pkt_len 119
> buf_len 2176, data_len 119, ol_flags 0x0, data_off 128, phys_addr
> 0x70f65200
> packet_type 0x1 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
> rss 0x0 fdir.hi 0x0 fdir.lo 0x0
> Packet Types
>   RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
>   0x0069: 18:e8:29:21:03:13 -> 01:80:c2:00:00:00
> 00:06:48:741185: ethernet-input
>   frame: flags 0x3, hw-if-index 1, sw-if-index 1
>   0x0069: 18:e8:29:21:03:13 -> 01:80:c2:00:00:00
> 00:06:48:741186: llc-input
>   LLC bpdu -> bpdu
> 00:06:48:741186: error-drop
>   rx:bnxt0
> 00:06:48:741187: drop
>   llc-input: unknown llc ssap/dsap
>
> Packet 3
>
> 00:06:50:741453: dpdk-input
>   bnxt0 rx queue 0
>   buffer 0x56c8d: current data 0, length 119, buffer-pool 0, ref-count 1,
> totlen-nifb 0, trace handle 0x102
>   ext-hdr-valid
>   l4-cksum-computed l4-cksum-correct
>   PKT MBUF: port 0, nb_segs 1, pkt_len 119
> buf_len 2176, data_len 119, ol_flags 0x0, data_off 128, phys_addr
> 0x70f64780
> packet_type 0x1 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
> rss 0x0 fdir.hi 0x0 fdir.lo 0x0
> Packet Types
>   RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
>   0x0069: 18:e8:29:21:03:13 -> 01:80:c2:00:00:00
> 00:06:50:741455: ethernet-input
>   frame: flags 0x3, hw-if-index 1, sw-if-index 1
>   0x0069: 18:e8:29:21:03:13 -> 01:80:c2:00:00:00
> 00:06:50:741456: llc-input
>   LLC bpdu -> bpdu
> 00:06:50:741457: error-drop
>   rx:bnxt0
> 00:06:50:741457: drop
>   llc-input: unknown llc ssap/dsap
>
> Packet 4
>
> 00:06:52:741564: dpdk-input
>   bnxt0 rx queue 0
>   buffer 0x56c78: current data 0, length 119, buffer-pool 0, ref-count 1,
> totlen-nifb 0, trace handle 0x103
>   ext-hdr-valid
>   l4-cksum-computed l4-cksum-correct
>   PKT MBUF: port 0, nb_segs 1, pkt_len 119
> buf_len 2176, data_len 119, ol_flags 0x0, data_off 128, phys_addr
> 0x70f63d00
> packet_type 0x1 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
> rss 0x0 fdir.hi 0x0 fdir.lo 0x0
> Packet Types
>   RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
>   0x0069: 18:e8:29:21:03:13 -> 01:80:c2:00:00:00
> 00:06:52:741567: ethernet-input
>   frame: flags 0x3, hw-if-index 1, sw-if-index 1
>   0x0069: 18:e8:29:21:03:13 -> 01:80:c2:00:00:00
> 00:06:52:741568: llc-input
>   LLC bpdu -> bpdu
> 00:06:52:741568: error-drop
>   rx:bnxt0
> 00:06:52:741569: drop
>   llc-input: unknown llc ssap/dsap
>
> Packet 5
>
> 00:06:54:474773: dpdk-input
>   bnxt0 rx queue 0
>   buffer 0x56c63: current data 0, length 319, buffer-pool 0, ref-count 1,
> totlen-nifb 0, trace handle 0x104
>   ext-hdr-valid
>   l4-cksum-computed l4-cksum-correct
>   PKT MBUF: port 0, nb_segs 1, pkt_len 319
> buf_len 2176, data_len 319, ol_flags 0x182, data_off 128, phys_addr
> 0x70f63280
> packet_type 0x291 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
> rss 0x8ae4b80f fdir.hi 0x0 fdir.lo 0x8ae4b80f
> Packet Offload Flags
>   PKT_RX_RSS_HASH (0x0002) RX packet with RSS hash result
>   PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
>   

Re: [vpp-dev] Observing multiple VRRP Routers acting as master while testing Master/Back-up functionality using vrrp plugin

2020-06-13 Thread Amit Mehra
Thanks Muthu Raj for the Response.

When I set "accept_mode" to NO while configuring the VRRP router on VPP 
instance 2 (refer to my configurations for VPP instances 1 and 2 in the initial 
mail) and then kill VPP instance 1, the VRRP router running on VPP instance 2 
becomes the Master (IP 10.20.37.118 is not added to the vpp interface this 
time, as accept_mode was NO). But when I try to ping 10.20.37.118 from an 
external machine (on the same 10.20.37.xx subnet), the ping is not successful. 
I captured a trace on the vpp interface, and it shows the packets being 
dropped.

vpp# show vrrp vr
[0] sw_if_index 1 VR ID 1 IPv4
state Master flags: preempt yes accept no unicast no
priority: configured 200 adjusted 200
timers: adv interval 100 master adv 100 skew 21 master down 321
virtual MAC 00:00:5e:00:01:01
addresses 10.20.37.118
peer addresses
tracked interfaces

vpp# show int addr
GigabitEthernet13/0/0 (up):
L3 10.20.37.109/24
GigabitEthernet1b/0/0 (dn):
local0 (dn):

vpp# show trace
00:03:38:573635: dpdk-input
GigabitEthernet13/0/0 rx queue 1
buffer 0x1e3492: current data 0, length 98, buffer-pool 0, ref-count 1, 
totlen-nifb 0, trace handle
0x102
ext-hdr-valid
l4-cksum-computed l4-cksum-correct
PKT MBUF: port 0, nb_segs 1, pkt_len 98
buf_len 2176, data_len 98, ol_flags 0x82, data_off 128, phys_addr 0x88cd2500
packet_type 0x91 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
rss 0xe0a3989 fdir.hi 0x0 fdir.lo 0xe0a3989
Packet Offload Flags
PKT_RX_RSS_HASH (0x0002) RX packet with RSS hash result
PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
Packet Types
RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
RTE_PTYPE_L3_IPV4_EXT_UNKNOWN (0x0090) IPv4 packet with or without extension 
headers
IP4: 00:50:56:9b:e8:ab -> 00:00:5e:00:01:01
ICMP: 10.20.37.21 -> 10.20.37.118
tos 0x00, ttl 64, length 84, checksum 0x60a4 dscp CS0 ecn NON_ECN
fragment id 0x7b52, flags DONT_FRAGMENT
ICMP echo_request checksum 0xdf6e
00:03:38:573648: ethernet-input
frame: flags 0x1, hw-if-index 1, sw-if-index 1
IP4: 00:50:56:9b:e8:ab -> 00:00:5e:00:01:01
00:03:38:573653: ip4-input
ICMP: 10.20.37.21 -> 10.20.37.118
tos 0x00, ttl 64, length 84, checksum 0x60a4 dscp CS0 ecn NON_ECN
fragment id 0x7b52, flags DONT_FRAGMENT
ICMP echo_request checksum 0xdf6e
00:03:38:573656: ip4-lookup
fib 0 dpo-idx 0 flow hash: 0x
ICMP: 10.20.37.21 -> 10.20.37.118
tos 0x00, ttl 64, length 84, checksum 0x60a4 dscp CS0 ecn NON_ECN
fragment id 0x7b52, flags DONT_FRAGMENT
ICMP echo_request checksum 0xdf6e
00:03:38:573659: ip4-glean
ICMP: 10.20.37.21 -> 10.20.37.118
tos 0x00, ttl 64, length 84, checksum 0x60a4 dscp CS0 ecn NON_ECN
fragment id 0x7b52, flags DONT_FRAGMENT
ICMP echo_request checksum 0xdf6e
00:03:38:573664: ip4-drop
ICMP: 10.20.37.21 -> 10.20.37.118
tos 0x00, ttl 64, length 84, checksum 0x60a4 dscp CS0 ecn NON_ECN
fragment id 0x7b52, flags DONT_FRAGMENT
ICMP echo_request checksum 0xdf6e
00:03:38:573675: error-drop
rx:GigabitEthernet13/0/0
00:03:38:573676: drop
ip4-glean: ARP requests sent

Moreover, per RFC 5798 (https://tools.ietf.org/html/rfc5798), section 6.1:

   Accept_Mode   Controls whether a virtual router in Master state will
                 accept packets addressed to the address owner's IPvX
                 address as its own if it is not the IPvX address owner.
                 The default is False.  Deployments that rely on, for
                 example, pinging the address owner's IPvX address may
                 wish to configure Accept_Mode to True.

And per section 6.4.3, a VRRP router acting as Master:

   (650) - MUST accept packets addressed to the IPvX address(es)
   associated with the virtual router if it is the IPvX address owner
   or if Accept_Mode is True.  Otherwise, MUST NOT accept these
   packets.

So, as per my understanding, I need to set "accept_mode" to YES for my use 
case. My use case is that IP 10.20.37.118 should always be pingable from an 
outside machine (on the same subnet as 10.20.37.118).

Please let me know if my understanding is correct, or whether I am missing 
anything in my configuration.

Regards
Amit
View/Reply Online (#16720): https://lists.fd.io/g/vpp-dev/message/16720


[vpp-dev] Observing multiple VRRP Routers acting as master while testing Master/Back-up functionality using vrrp plugin

2020-06-13 Thread Amit Mehra
Hi,

I am trying to test Master/Backup functionality on 2 VPP nodes, each running 
its own VRRP plugin, and I am observing 2 VRRP routers acting as Master at the 
same time. Please find below the configuration I am using for my testing:

On VPP Node 1:-
set interface mtu 1500 GigabitEthernet13/0/0
set interface ip address GigabitEthernet13/0/0 10.20.37.118/24
set interface state GigabitEthernet13/0/0 up
vrrp vr add GigabitEthernet13/0/0 vr_id 1 priority 255 10.20.37.118
vrrp proto start GigabitEthernet13/0/0 vr_id 1

This becomes a Master, as it is the address owner and has the priority as 255.

On VPP Node2:-
set interface mtu 1500 GigabitEthernet13/0/0
set interface ip address GigabitEthernet13/0/0 10.20.37.109/24
set interface state GigabitEthernet13/0/0 up
vrrp vr add GigabitEthernet13/0/0 vr_id 1 priority 200 accept_mode 10.20.37.118
vrrp proto start GigabitEthernet13/0/0 vr_id 1

This becomes a back-up and has priority 200.

Note:- 10.20.37.118 is the IP that will be pinged from an external machine 
(which is on the same 10.20.37.xx subnet), which is why I have configured this 
IP on the VR (VPP instance 2) with "accept_mode" ON.

So, the VR on VPP-1 comes up as Master and the VR on VPP-2 comes up as Backup. 
Now, if I kill VPP-1, the VR on VPP-2 becomes Master and IP 10.20.37.118 also 
gets added to the vpp interface. Please find the output of "show int addr" 
below:

vpp# show int addr
GigabitEthernet13/0/0 (up):
L3 10.20.37.109/24
L3 10.20.37.118/24
GigabitEthernet1b/0/0 (dn):
local0 (dn):

vpp# show vrrp vr
[0] sw_if_index 1 VR ID 1 IPv4
state Master flags: preempt yes accept yes unicast no
priority: configured 200 adjusted 200
timers: adv interval 100 master adv 100 skew 21 master down 321
virtual MAC 00:00:5e:00:01:01
addresses 10.20.37.118
peer addresses
tracked interfaces

Now, if I bring VPP-1 back up, the VR on VPP-1 also becomes Master while the VR 
on VPP-2 remains in the Master state. At this moment, both VRs, i.e. on VPP-1 
and on VPP-2, are acting as Master. Please find the output of "show vrrp vr" 
from both VPP instances:

From VPP-1:
vpp# show vrrp vr
[0] sw_if_index 1 VR ID 1 IPv4
state Master flags: preempt yes accept no unicast no
priority: configured 255 adjusted 255
timers: adv interval 100 master adv 0 skew 0 master down 0
virtual MAC 00:00:5e:00:01:01
addresses 10.20.37.118
peer addresses
tracked interfaces

vpp# show int addr
GigabitEthernet13/0/0 (up):
L3 10.20.37.118/24
GigabitEthernet1b/0/0 (dn):
local0 (dn):

On VPP-2
vpp# show vrrp vr
[0] sw_if_index 1 VR ID 1 IPv4
state Master flags: preempt yes accept yes unicast no
priority: configured 200 adjusted 200
timers: adv interval 100 master adv 100 skew 21 master down 321
virtual MAC 00:00:5e:00:01:01
addresses 10.20.37.118
peer addresses
tracked interfaces

vpp# show int addr
GigabitEthernet13/0/0 (up):
L3 10.20.37.109/24
L3 10.20.37.118/24
GigabitEthernet1b/0/0 (dn):
local0 (dn):

Is this a known issue, or am I missing something in my configuration? How can I 
overcome this issue of multiple VRRP routers acting as Master at the same time?

Regards
Amit
View/Reply Online (#16718): https://lists.fd.io/g/vpp-dev/message/16718


[vpp-dev] Unable to ping vpp interface from outside after configuring vrrp on vpp interface and making it as Master

2020-06-08 Thread Amit Mehra
Hi,

I am trying to test VRRP functionality using the VRRP plugin available in 
VPP-20.05. After starting VRRP on one of the VPP nodes and configuring it as 
Master, I am not able to ping the vpp interface from outside. I am using the 
following configuration:

modprobe -r vfio_pci
modprobe -r vfio

./bin/dpdk-devbind.py -s                  // show status: devices bound to the vfio-pci driver

Network devices using DPDK-compatible driver

:13:00.0 'VMXNET3 Ethernet Controller' drv=vfio-pci unused=vmxnet3
:1b:00.0 'VMXNET3 Ethernet Controller' drv=vfio-pci unused=vmxnet3

./bin/vppctl create interface vmxnet3 :13:00.0
./bin/vppctl set interface ip address vmxnet3-0/13/0/0 10.20.53.143/24
./bin/vppctl set int state vmxnet3-0/13/0/0 up

./bin/vppctl vrrp vr add GigabitEthernet13/0/0 vr_id 1 priority 255 interval 1 
accept_mode 10.20.53.143
./bin/vppctl vrrp proto start GigabitEthernet13/0/0 vr_id 1

Also, when I tried to ping this VPP interface IP (10.20.53.143) from an outside 
machine on the same 10.20.53.x subnet, I could see that the ARP request was 
received by VPP and answered with MAC address 00:00:5E:00:01:01 in the ARP 
reply. However, the ICMP packets never entered VPP. I verified this using the 
"trace" functionality available in VPP, i.e. trace add vmxnet3-input 200.
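(Side note: the MAC in that ARP reply is the VRRP virtual MAC, which RFC 5798 derives from the VR ID as 00-00-5E-00-01-{VRID} for IPv4 — a quick sketch:)

```python
def vrrp_ipv4_virtual_mac(vr_id: int) -> str:
    """IPv4 VRRP virtual MAC per RFC 5798: 00-00-5E-00-01-{VRID}."""
    if not 1 <= vr_id <= 255:
        raise ValueError("VR ID must be between 1 and 255")
    return "00:00:5e:00:01:%02x" % vr_id

print(vrrp_ipv4_virtual_mac(1))  # -> 00:00:5e:00:01:01, as seen in the ARP reply
```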

Moreover, I could see VRRP advertisement packets being transmitted continuously 
by the VPP interface.

Can someone confirm whether I am following the correct steps, and what could be 
the reason the ICMP packets are not being received by VPP? I tried enabling 
promiscuous mode on the VPP interface with "set interface promiscuous on 
vmxnet3-0/13/0/0", but the ICMP packets still did not enter VPP.

Also, is there a CLI command that shows whether the virtual MAC has been 
assigned to the VPP interface?

I also tried testing with the DPDK plugin, but I observe the following log in 
vpp.log while executing the ./bin/vppctl vrrp proto start GigabitEthernet13/0/0 
vr_id 1 CLI:

Jun  8 12:32:43 bfs-dl360g10-14-vm17 vnet[29645]: vrrp_vr_transition_vmac:120: 
Adding virtual MAC address 00:00:5e:00:01:01 on hardware interface 1
Jun  8 12:32:43 bfs-dl360g10-14-vm17 vnet[29645]: dpdk_add_del_mac_address: mac 
address add/del failed: -95
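(For what it's worth, the -95 in that log is a negated Linux errno — EOPNOTSUPP — i.e. the vmxnet3 PMD rejects adding a secondary/virtual MAC address. A quick check:)

```python
import errno
import os

rc = -95  # return value logged by dpdk_add_del_mac_address above

# On Linux, errno 95 is EOPNOTSUPP ("Operation not supported")
assert -rc == errno.EOPNOTSUPP
print(os.strerror(-rc))  # -> "Operation not supported"
```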
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16685): https://lists.fd.io/g/vpp-dev/message/16685
Mute This Topic: https://lists.fd.io/mt/74750885/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Based on the VPP to Nginx testing #ngnix #vpp

2020-02-12 Thread Amit Mehra
Hi,

I am also curious how to run nginx with VPP using the LD_PRELOAD option. I have
installed nginx and can run it successfully without VPP. Now I want to try
nginx with VPP via LD_PRELOAD; can someone provide the steps?
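(A typical invocation looks roughly like the following; the library path and config location are illustrative assumptions and vary by install, so treat this as a sketch rather than exact steps:)

```shell
# Sketch only: paths below are common defaults, adjust for your build.
# VPP side: the host stack must be enabled first ("vppctl session enable").
export VCL_CONFIG=/etc/vpp/vcl.conf            # vcl { ... } settings for the app
export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libvcl_ldpreload.so
nginx -g 'daemon off; master_process off;'     # a single worker is easiest to start with
```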

Regards,
Amit

On Fri, Dec 27, 2019 at 11:57 AM Florin Coras wrote:

> Hi Yang.L,
>
> I suspect you may need to do a “git pull” and rebuild because the lines
> don’t match, i.e., vcl_session_accepted_handler:377 is now just an
> assignment. Let me know if that solves the issue.
>
> Regards,
> Florin
>
> > On Dec 26, 2019, at 10:11 PM, lin.yan...@zte.com.cn wrote:
> >
> > Hi Florin,
> > I have tried the latest master. The problem is not resolved.
> > Here's the nginx error information:
> >
> > epoll_ctl:2203: ldp<269924>: epfd 33 ep_vlsh 1, fd 61 vlsh 29, op 1
> > ldp_accept4:2043: ldp<269924>: listen fd 32: calling
> vppcom_session_accept: listen sid 0, ep 0x0, flags 0xdc50
> > vppcom_session_accept:1521: vcl<269924:1>: listener 16777216 [0x0]
> accepted 30 [0x1e] peer: 192.168.3.66:47672 local: 192.168.3.65:8080
> > epoll_ctl:2203: ldp<269924>: epfd 33 ep_vlsh 1, fd 62 vlsh 30, op 1
> > ldp_accept4:2043: ldp<269924>: listen fd 32: calling
> vppcom_session_accept: listen sid 0, ep 0x0, flags 0xdc50
> > vppcom_session_accept:1521: vcl<269924:1>: listener 16777216 [0x0]
> accepted 31 [0x1f] peer: 192.168.3.66:47674 local: 192.168.3.65:8080
> > epoll_ctl:2203: ldp<269924>: epfd 33 ep_vlsh 1, fd 63 vlsh 31, op 1
> > ldp_accept4:2043: ldp<269924>: listen fd 32: calling
> vppcom_session_accept: listen sid 0, ep 0x0, flags 0xdc50
> > vcl_session_accepted_handler:377: vcl<269924:1>: ERROR: segment for
> session 32 couldn't be mounted!
> > 2019/12/28 11:06:44 [error] 269924#0: accept4() failed (103: Software
> caused connection abort)
> > ldp_accept4:2043: ldp<269924>: listen fd 32: calling
> vppcom_session_accept: listen sid 0, ep 0x0, flags 0xdc50
> > vcl_session_accepted_handler:377: vcl<269924:1>: ERROR: segment for
> session 32 couldn't be mounted!
> > 2019/12/28 11:06:44 [error] 269924#0: accept4() failed (103: Software
> caused connection abort)
> > ldp_accept4:2043: ldp<269924>: listen fd 32: calling
> vppcom_session_accept: listen sid 0, ep 0x0, flags 0xdc50
> > vcl_session_accepted_handler:377: vcl<269924:1>: ERROR: segment for
> session 32 couldn't be mounted!
> > 2019/12/28 11:06:44 [error] 269924#0: accept4() failed (103: Software
> caused connection abort)
> > ldp_accept4:2043: ldp<269924>: listen fd 32: calling
> vppcom_session_accept: listen sid 0, ep 0x0, flags 0xdc50
> > vcl_session_accepted_handler:377: vcl<269924:1>: ERROR: segment for
> session 32 couldn't be mounted!
> > 2019/12/28 11:06:44 [error] 269924#0: accept4() failed (103: Software
> caused connection abort)
> > ldp_accept4:2043: ldp<269924>: listen fd 32: calling
> vppcom_session_accept: listen sid 0, ep 0x0, flags 0xdc50
> > vcl_session_accepted_handler:377: vcl<269924:1>: ERROR: segment for
> session 32 couldn't be mounted!
> > 2019/12/28 11:06:44 [error] 269924#0: accept4() failed (103: Software
> caused connection abort)
> >
> > Can you help me analyze it?
> > Thanks,
> > Yang.L
> >
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15391): https://lists.fd.io/g/vpp-dev/message/15391
Mute This Topic: https://lists.fd.io/mt/64501057/21656
Mute #vpp: https://lists.fd.io/mk?hashtag=vpp=1480452
Mute #ngnix: https://lists.fd.io/mk?hashtag=ngnix=1480452
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-